\section{Introduction} Action quality assessment (AQA) in sports videos aims to evaluate the quality of a performed action \cite{Pirsiavash_ECCV_2014, Yu_2021_ICCV, Tang_2020_CVPR}. AQA can be applied to a number of sports to provide assessments with higher accuracy and consistency while reducing or even completely removing human involvement. For example, it can be employed to predict scores of sports events, rank athletes based on their moves, or analyze their moves to estimate future injury risks. It can therefore assist with a number of crucial tasks in sports, including self-assessment, performance analysis, player selection, and objective judgment. AQA provides affordable self-assessment for aspiring athletes, helping them evaluate and improve their performance. Building a robust AQA method requires a diverse dataset encompassing elaborate scenarios and variations. However, existing AQA datasets have limitations in several aspects. First, most datasets are collected from a small number of events. This limits subject variation, which is crucial for designing generalized AQA methods. The difference in skills and techniques among athletes is usually very narrow, and with fewer participants it becomes difficult to capture such detailed attributes. Fewer events also result in limited scene variation. A realistic AQA benchmark should offer large subject, action, and scene variations. Second, most datasets have very restrictive camera viewpoints, so an athlete is visible from only one side. Furthermore, since the camera focus and the distance to the athlete are constant, there are no scale variations. Third, existing datasets provide only single-modality data, so existing AQA methods are unable to exploit modality-specific techniques or multimodal learning. Finally, the most important limitation of existing datasets is the short duration of the videos: samples usually last only a few seconds. 
Thus, designing advanced techniques to model long-term semantics is not possible. \begin{figure}[t!] \centerline{\includegraphics[width=6cm]{Figs/workflow.png}} \caption{AQA conceptual workflow: discriminative non-local attention focuses on latent spatio-temporal associations. The gender classification head forces the model to learn the distinctive features of female and male athletes.} \label{work_flow} \end{figure} \IEEEpubidadjcol To overcome these limitations, we propose a new dataset, AGF-Olympics. Our dataset comprises artistic gymnastics floor routines with video durations of around 1.5 minutes. Since floor routines demonstrate highly complex maneuvers, they provide an excellent opportunity to study intricate body-part movements and their association with action quality. Ours is the only AQA dataset that provides the locations of the athlete's skeletal joints in each video sample. In gymnastics floor routines, an athlete performs many complex exercises in which the specific positions of the head, hands, and legs are critical for assessing the execution level. For example, the same double-twisting double back performed by two athletes will achieve different scores depending on the rigidity of their bodies. Skeleton poses provide exact body positions and can be used to encode the perceptual subtleties that represent fine skill variations. However, visual information is also necessary. Videos give details of the floor and other visual cues, such as ``step over the boundary line'' penalties. Moreover, dynamic camera movement creates extensive view and scale variations. Our dataset is collected from the largest number of events, thus also providing the largest subject variation. Our second contribution is a deep learning architecture for AQA. Existing human-action methods \cite{MinsiW_2020_PAMI, liu2020disentangling, YuxinC_2021_ICCV, Dave_2022_CVPR} usually focus on classifying videos of atomic actions performed with only a few body-part movements. 
A single frame is often enough to identify the actions in these simple videos. However, sports videos are more challenging, as they incorporate several atomic actions with complex movements. Furthermore, the quality of these actions is spatially and temporally localized: the spatial distribution is often subtle, while temporal characteristics dominate the overall quality. Therefore, these action classification methods are not optimal for sports AQA, which requires analyzing the entire set of spatio-temporal associations. Analyzing such comprehensive associations poses some inherent challenges. For example, an assessment in gymnastics is severely degraded by a missed step or incorrect posture at any moment, resulting in a lower score; an AQA method must account for these incidents. Occlusion is another challenge with sports videos. Athletes' body parts are often occluded by nearby spectators, objects, or overlaid advertisements. Also, when the camera focuses on the athlete's face, it is hard to interpret what the athlete is doing in those frames. Therefore, an AQA method must automatically disregard these irrelevant frames while tracking performance over the entire progression. Furthermore, sports videos contain a wide range of complexity in motions and postures, so it is essential to determine both local and global spatio-temporal relationships. AQA methods must connect local individual patterns with global trends to evaluate fine-grained performances. Existing AQA methods can be divided into two research directions: injury prediction and score prediction. For injury prediction, researchers have primarily focused on analyzing irregular reaction forces and moments to assess non-contact injury risks \cite{Johnson_2018_MBEC, Johnson_2019_TBE, Johnson_2019_EJB, Johnson_2021_TBE, Mundt_2021_sensors, Goldacre_2021_ISBSC, morris_2021_ISBSC}. 
For score prediction, end-to-end deep learning models are usually employed to map sports videos to final scores \cite{Parmar_2017_CVPR_Workshops, Bertasius_2017_ICCV, Doughty_CVPR_2018, Yongjun_2018, Parmar_2019_CVPR, Parmar_2019_WACV, Xu_TCSVT_2020, Tang_2020_CVPR, Yu_2021_ICCV}. Some researchers introduced a Gaussian distribution over scores to incorporate uncertainty \cite{Tang_2020_CVPR} or look-up tables to leverage predefined rules \cite{Nekoui_2020_CVPR_Workshops}. These techniques inject additional information into the learning procedure. However, these AQA methods overlook the importance of modeling longer temporal semantics and focus only on short-duration sports, where videos are mostly captured from a single viewpoint. Long-duration sports have an extended temporal range, and mapping extended semantic correlations remains a critical limitation of current AQA methods. For example, videos of gymnastics floor routines have dynamic viewing angles captured using multiple cameras. These videos usually extend to a couple of minutes and exhibit highly complex postures compared to sports such as diving and the gymnastics vault \cite{Yongjun_2018, Parmar_2019_WACV, pan_iccv_2019, Parmar_2019_CVPR, Nekoui_2020_CVPR_Workshops, Tang_2020_CVPR, Yu_2021_ICCV}. To address this particular aspect, we propose a deep learning module, Discriminative Non-Local Attention (DNLA), that focuses on learning long-term semantics. DNLA extracts the most prominent features localized in spatio-temporal space and eliminates less informative features. Thus, it concentrates the learning exclusively on identifying the intricate relationships that have the highest impact on the final score. Figure~\ref{work_flow} illustrates our overall workflow at the conceptual level. In addition to score regression, our gender classification module enhances focus on gender-specific subtleties. Floor routines differ somewhat by gender. 
Female athletes must combine artistry with the presentation of force and technique, while male athletes focus only on the latter. Thus, gender classification enables the model to comprehend these inherent distinctions. We also present a conceptual design of an AQA app that athletes can use to measure progress and compare techniques. Since our dataset incorporates only Olympic video samples, the AQA model deployed in the app is capable of providing Olympic-level score predictions. Our main contributions are summarized as follows: \begin{itemize} \item \textbf{New dataset:} We propose a new dataset, AGF-Olympics, that contains videos, athlete skeletal locations in each frame, and the corresponding scores. We collected video samples from Olympic events, which reflect difficulty and execution at their best. Our dataset provides challenging real-world scenarios with diverse camera views, subjects, and occlusions. The skeleton sequences in our dataset allow researchers to analyze intricate structural pose variations. \item \textbf{Discriminative Non-Local Attention module:} Our proposed learning module creates a sparse temporal mapping to enhance semantic correlation over an extended temporal range for AQA in sports videos. It enables the extraction of distinctive global semantic associations, which are often subdued in long-range temporal progressions. \item \textbf{Experimental results:} Experimental results on the AGF-Olympics dataset with existing methods indicate clear challenges. Moreover, comparative evaluations of our proposed method demonstrate the advantages of selective sparse attention. \end{itemize} The remainder of the paper is organized as follows: Section~\ref{litreature_rev} describes existing AQA methods. Section~\ref{Dataset} presents the proposed dataset. Section~\ref{Methodology} details the proposed architecture. Section~\ref{Experiments} provides experimental results, comparisons with existing methods, and ablation analysis. 
Conclusions are provided in Section~\ref{Conclusions}. \section{Related Work}\label{litreature_rev} In this section, we review publicly available AQA datasets and existing methods. \subsection{Action Quality Assessment datasets} MIT-Dive and MIT-Skate \cite{Pirsiavash_ECCV_2014} are among the first AQA datasets. MIT-Dive has 159 samples. All videos are captured at 60 fps, with an average of 150 frames per sample, and the final quality scores range from 20 to 100. MIT-Skate contains 150 sample videos; the frame rate is 24 fps, and the quality scores range from 0 to 100. Another dataset, UNLV-Dive \cite{Parmar_2017_CVPR_Workshops}, is an extended version of MIT-Dive. It contains an additional 211 samples and also includes 176 vault samples, named UNLV-Vault. Many methods have been evaluated on these datasets, especially MIT-Dive and UNLV-Dive \cite{Pirsiavash_ECCV_2014, Parmar_2017_CVPR_Workshops, Parmar_2019_WACV, Xu_TCSVT_2020, Nekoui_2021_WACV}. AQA-7 \cite{Parmar_2019_WACV} contains 7 types of sports: diving, vault, skiing, snowboarding, synchronized diving (3m and 10m), and trampoline. Diving is incorporated from the UNLV-Dive dataset. These sports consist of short sequences with a maximum duration of 25 seconds. MTL-AQA \cite{Parmar_2019_CVPR} includes diving videos. Along with action quality scores, this dataset provides commentaries and frame-level pose annotations, and it has been used for multi-task learning. Fis-V is a figure skating dataset proposed in \cite{Xu_TCSVT_2020}. It provides two different scores: the technical element score (TES) and the program component score (PCS). Samples are collected only from the ladies' singles short program. Although these datasets provide an opportunity to analyze action quality assessment methods, they have limitations in many aspects. For example, most of these datasets focus on the diving action, e.g., 
MIT-Dive \cite{Pirsiavash_ECCV_2014}, UNLV-Dive \cite{Parmar_2017_CVPR_Workshops}, and MTL-AQA \cite{Parmar_2019_CVPR}. Given the nature of the diving action, samples in these datasets are of very short duration, i.e., a few seconds. Also, diving videos are captured from a single camera viewpoint, so these datasets are limited in terms of view and scale variations. Although MIT-Skate \cite{Pirsiavash_ECCV_2014} and Fis-V \cite{Xu_TCSVT_2020} contain long-duration sports, they are collected from a limited number of events, resulting in limited subject variation. To overcome the limitations of existing datasets, we propose AGF-Olympics, which introduces some unique challenges. Our dataset incorporates long video sequences with an average duration of 1.5 minutes (maximum 7051 frames) and significant variations in camera viewpoints, subjects, scales, and background scenes. Our dataset also provides skeletal joint locations in the video frames. Furthermore, we have collected samples over the longest period, i.e., four Olympic cycles and trials from 2008 to 2021. Table \ref{dataset_comparison} compares existing datasets with AGF-Olympics. \subsection{Video-based AQA methods} Most AQA methods use raw video frames to predict scores. The C3D-LSTM method in \cite{Parmar_2017_CVPR_Workshops} performs clip-based feature aggregation with score prediction and is evaluated on the UNLV-Dive dataset. The C3D-AVG-MTL method utilizes a multi-task approach for AQA score regression, caption generation, and factorized action classification on the MTL-AQA dataset \cite{Parmar_2019_CVPR}. However, its reliance on frame-level body-pose annotations and commentary scripts can limit extension and transfer learning. The C3D-LSTM method in \cite{Parmar_2019_WACV} utilizes clip-based C3D features with an LSTM for temporal modeling. It was evaluated on six of the seven sports events in AQA-7. 
The trampoline was excluded due to its much longer duration, which highlights the difficulty of modeling sports actions over an extended temporal range. The USDL method uses multiple pathways to predict the multiple score distributions provided in AQA-7, emulating the multi-judge scoring systems used in sports \cite{Tang_2020_CVPR}. CoRe + GART exploits contrastive learning to discriminate subtle differences between samples \cite{Yu_2021_ICCV}; it is evaluated on AQA-7 and MTL-AQA. C3D + S-LSTM + M-LSTM uses a multi-scale convolutional skip-LSTM to capture sequential information in the long figure skating videos of Fis-V \cite{Xu_TCSVT_2020}. However, it reports a high mean squared error (MSE) of 19.91, indicating a large deviation from the actual scores. \subsection{Pose based AQA methods} A few AQA methods exploit the distinct characteristics of pose features. The Pose + DCT method applies hierarchical independent subspace analysis and DCT/DFT features computed from poses to assess quality \cite{Pirsiavash_ECCV_2014}. The joint difference and joint commonality modules in \cite{pan_iccv_2019} use patch videos of major body joints from AQA-7 to extract motion features that better represent joint dependency. Nonetheless, cropping local patches around joints is very time-consuming. The FALCONS method incorporates a difficulty assessor that uses a look-up table to assess the difficulty level of the sub-activities performed during diving \cite{Nekoui_2020_CVPR_Workshops}. EAGLE-Eye, on the other hand, applies a joint coordination assessor to pose heatmaps to learn temporal dependencies using multi-scale convolution \cite{Nekoui_2021_WACV}. Experimental results indicate that these methods cannot maintain spatial and temporal coordination over an extended time and therefore do not achieve comparable performance on longer sports. We address these issues in this work by focusing on extreme pose contortion and long video sequences. 
\section{Proposed AGF-Olympics AQA Dataset}\label{Dataset} In this section, we describe our proposed AGF-Olympics dataset, introducing the data collection strategy and related statistics. \subsection{Data Collection:} We started the data collection procedure by surveying broadcast videos of gymnastics floor routines on different video-sharing websites. We carefully skimmed through hundreds of videos to ensure that only high-standard videos containing skilled performances were selected. For example, amateur-level gymnastics routines do not incorporate many highly valued skills, and their difficulty level is low, whereas professionals train for years before competing in the Olympic Games. Therefore, we selected events only from the Olympic trials, qualifying rounds, and finals to acquire high-quality samples with rich skill variations. We also ensured that the collected data covers a range of action quality. Since 2008, athletes have been scored in three areas (difficulty, execution, and penalty) instead of the legacy scoring on a scale of 10. Therefore, we did not include events before 2008, to keep the scoring system uniform. The dataset comprises all gymnastics floor routines from the individual, all-around, team, qualification, and trial events in both the women's and men's categories. We discarded samples where athletes could not complete a routine due to injury or other interruptions. \subsection{Data Modality:} Our dataset provides two modalities: RGB videos and 2D joint positions (skeleton representation). We meticulously trimmed all samples at the start and end of each complete routine and used FFmpeg to extract the individual clips. We then used OpenPose to detect the 2D skeleton joints of the athletes in the videos \cite{openpose2019}. We discarded frames that did not contain any skeleton. We stored the video files in mp4 format and the skeletons in JSON format. 
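The stored per-frame OpenPose output can be read back with a few lines of code. The following is a minimal sketch, assuming OpenPose's standard BODY_25 JSON layout (a `people' list whose entries hold 75 floats in `pose_keypoints_2d', i.e., 25 joints times (x, y, confidence)); the highest-mean-confidence selection heuristic and the function name are our illustration, not the exact pipeline.

```python
import json

def load_openpose_frame(json_path):
    """Parse one OpenPose BODY_25 keypoint file into a list of 25
    (x, y, confidence) triplets, or None if no person was detected."""
    with open(json_path) as fh:
        frame = json.load(fh)
    people = frame.get("people", [])
    if not people:
        return None  # frame discarded: no skeleton detected

    # keep the detection with the highest mean confidence
    # (as the dataset shows, this may still pick a spectator)
    def mean_conf(person):
        kp = person["pose_keypoints_2d"]
        return sum(kp[2::3]) / 25.0

    kp = max(people, key=mean_conf)["pose_keypoints_2d"]  # 75 floats
    return [(kp[3 * j], kp[3 * j + 1], kp[3 * j + 2]) for j in range(25)]
```

A joint with coordinates (0, 0) and zero confidence is OpenPose's convention for a missed detection, which is why frames with too many such joints are later discarded or interpolated.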
Figure~\ref{skeleton_OP} (a) demonstrates the configuration of the body joints, and (b) illustrates the joint adjacency matrix. \begin{figure}[!htbp] \centerline{\includegraphics[height=1.6in, width=3in]{Figs/Openpose_skel.png}} \caption{Configuration of the 25 body joints extracted from videos using OpenPose \cite{openpose2019}. (a) Skeleton view with corresponding joint indexes. (b) Adjacency matrix representing the joint neighborhood.} \label{skeleton_OP} \end{figure} Figure~\ref{samles_from_dataset} illustrates samples from our dataset as both RGB frames and corresponding skeleton poses. The first three rows show diverse poses of female and male athletes from different camera viewpoints. The last two rows demonstrate four types of error frames: missing body joints (due to self-occlusion), massive scale differences, blurriness (due to rapid movement), and erroneous skeleton detection (detecting the joints of an audience member with higher confidence than the athlete's). \begin{figure*}[!htbp] \centerline{\includegraphics[width=\linewidth]{Figs/sample_frames.png}} \caption{Sample video frames and corresponding skeleton frames of the AGF-Olympics dataset. The first three rows show a variety of subjects and camera views. The fourth row illustrates (from left) missing body joints (red circle at the origin (0,0) in the skeleton view), scale variation due to an upper-body focus causing more than 50\% of the joints to be missing, and huge scale differences in the detected joints compared to others. The last pair in the fourth row and the first in the fifth row represent blurry samples. The final pair in the fifth row demonstrates incorrect skeleton detection: an audience member's skeleton is incorrectly detected as the athlete's due to its higher confidence. 
Green boxes indicate skeletons accepted without any change, orange boxes indicate skeletons whose missing joints have been interpolated, and red boxes indicate frames that have been discarded.} \label{samles_from_dataset} \end{figure*} \subsection{Data Diversity:} Compared to most existing AQA datasets, which feature a monotonic camera view, our dataset provides a challenging scenario with highly uncertain viewpoints. Since we collected samples from broadcast videos, they are compositions of clips captured from multiple, highly mobile cameras that keep the athlete at a clear vantage point. Thus, samples contain a high disparity in viewpoints, such as top, left-side, right-side, back, or upper-body-only views. Some camera viewpoints focus exclusively on the upper body of an athlete; video frames from these viewpoints fail to generate a meaningful skeleton, so we discarded these skeletons along with the corresponding RGB video frames. Besides, cameras often focus solely on the athlete, causing that person to cover most of the video frame, while a broader viewing angle causes the athlete to cover only a tiny fraction of it. This results in highly random scale variations due to continuous changes in the body-to-frame ratio. Moreover, our dataset contains diverse and complex poses such as twists, flips, and somersaults. Existing AQA datasets that cover diving or vault do not have these challenges. \subsection{Data Statistics:} We downloaded a total of 83 hours of video. Since the original broadcast videos have varied resolutions of up to $1920\times1080$ pixels, we resized all samples to a resolution of $256\times256$ pixels. Video durations range from about 1.3 to 2.0 minutes, with an average of 2500 frames. The dataset includes three types of annotations (difficulty, execution, and penalty scores), except for the all-around events, which only provide total scores. Total scores have a standard deviation of 0.88. 
Where scores were missing from the broadcast videos, we cross-checked official Olympic websites and other online resources to obtain the correct scores. Each sample comes with three additional pieces of information: event, gender, and year. These can be used to create different evaluation criteria or learning techniques. Figure~\ref{score_distribution} shows the distribution of final scores in the train and test splits for female and male athletes. The total number of floor routine samples is 500: 326 from female athletes and 174 from male athletes. \begin{figure}[!hbtp] \centerline{\includegraphics[width=\columnwidth]{Figs/score_hist1.png}} \caption{Score distribution in our dataset: top row, train (left) and test (right) splits of female athletes; bottom row, train (left) and test (right) splits of male athletes.} \label{score_distribution} \end{figure} Table \ref{dataset_comparison} compares our proposed dataset with existing AQA datasets. Our dataset covers the highest number of events, spanning four Olympic cycles, and offers challenging intra-sample scale and view variations with two different data modalities. 
\begin{table*}[!hbtp] \scriptsize \centering \caption{Comparison of our proposed and existing AQA datasets\label{dataset_comparison}} \begin{tabular}{|p{55pt}|cc|P{42pt}|P{30pt}|P{25pt}|ccc|P{27pt}|P{36pt}|P{55pt}|} \hline \multirow{2}{*}{\textbf{Dataset}} & \multicolumn{2}{c|}{\textbf{Modality}} & \multirow{2}{*}{\textbf{Gender}} & \multirow{2}{*}{\textbf{Samples}} & \multirow{2}{*}{\makecell{\textbf{Events}}} & \multicolumn{3}{c|}{\textbf{Variations}} & \multirow{2}{*}{\makecell{\textbf{Year} \\ \textbf{range}}} & \multirow{2}{*}{\makecell{\textbf{No. of frames}\\ \textbf{(max)}}} & \multirow{2}{*}{\textbf{Labels}} \\ \cline{2-3} \cline{7-9} & \multicolumn{1}{c}{\textbf{RGB}} & \textbf{SKEL} & & & & \multicolumn{1}{c}{\textbf{Scene}} & \textbf{View} & \textbf{Scale} & & & \\ \hline MIT-Dive \cite{Pirsiavash_ECCV_2014} & \checkmark & $\times$ & Male & 159 & 1 & $\times$ & $\times$ & $\times$ & 2012 & 151 & AQA score\\ \hline MIT-Skate \cite{Pirsiavash_ECCV_2014} & \checkmark & $\times$ & Male, Female & 171 & 13 & \checkmark & \checkmark & \checkmark & 2012 & 5823 & AQA score\\ \hline UNLV-Dive \cite{Parmar_2017_CVPR_Workshops} & \checkmark & $\times$ & Male & 370 & 1 & $\times$ & $\times$ & $\times$ & 2012 & 151 & AQA score\\ \hline UNLV-Vault \cite{Parmar_2017_CVPR_Workshops} & \checkmark & $\times$ & Male, Female & 176 & 7 & $\times$ & $\times$ & $\times$ & 2012-15 & 100 & AQA score\\ \hline AQA-7 \cite{Parmar_2019_WACV} & \checkmark & $\times$ & Male & 1189 & - & \checkmark & $\times$ & $\times$ & - & 618 & AQA score\\ \hline MTL-AQA \cite{Parmar_2019_CVPR} & \checkmark & $\times$ & Male, Female & 1412 & 16 & \checkmark & $\times$ & $\times$ & 2012-14 & 274 & AQA score, Action Class, Caption\\ \hline Fis-V \cite{Xu_TCSVT_2020} & \checkmark & $\times$ & Female & 500 & - & \checkmark & \checkmark & \checkmark & - & 4300 & AQA score\\ \hline \textbf{AGF-Olympics (Ours)} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{Male, Female} & \textbf{500} & \textbf{35} 
& \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{2008-20} & \textbf{\makecell{7051}} & \textbf{AQA score, Gender, Difficulty} \\ \hline \end{tabular} \end{table*} \section{Proposed Architecture}\label{Methodology} In this section, we introduce our proposed AQA model, which includes three modules: a feature encoder that embeds raw frames into a high-dimensional subspace, DNLA, which generates a sparse representation, and an MLP that produces the final score mapping. Figure~\ref{proposed_architecture} illustrates the overall architecture. The following sections describe the technical functionality of each module. \begin{figure*}[!htbp] \centerline{\includegraphics[width=\linewidth]{Figs/model_aqa.png}} \caption{Overview of our architecture: input frames are segmented into 7 non-overlapping clips of $T$ frames. JFE extracts features from skeleton joint clips, and AFE extracts appearance features from RGB clips. DNLA models subtle long-term spatio-temporal dependencies and creates a sparse feature representation from the dense feature embedding produced by the base encoder (JFE or AFE). Finally, the MLP maps the feature space to the final score.} \label{proposed_architecture} \end{figure*} \subsection{Joint Feature Encoder (JFE)} We experimented with two GCN architectures as the JFE, a simple GCN model and MSG3D \cite{liu2020disentangling}, to extract detailed joint interactions. The simple GCN model includes one spatial graph convolution layer followed by two separable temporal convolutions. It uses the physical joint adjacency matrix with layer-wise scalar dynamic adaptation. MSG3D is a more advanced GCN model that uses multi-scale aggregation with unified spatio-temporal modeling to extract complex relationships. Joint adjacency is critical to understanding activities, and the scaled \emph{k}-adjacency matrix in MSG3D allows a more efficient representation. We used the features from the last GCN layer of the JFE as input to our DNLA module. 
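The spatial graph convolution at the core of the simple GCN baseline can be sketched as follows. This is a minimal numpy illustration of one layer aggregating each joint's neighbors over the physical adjacency matrix; the row normalization, tensor shapes, and omission of the learnable adaptive scalar and the temporal convolutions are our simplifying assumptions.

```python
import numpy as np

def spatial_graph_conv(X, A, W):
    """One spatial graph convolution:  Y = D^{-1} (A + I) X W.

    X: (T, V, C_in) joint features over T frames and V joints
    A: (V, V) binary physical joint adjacency matrix
    W: (C_in, C_out) learnable channel projection
    """
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    A_norm = D_inv * A_hat                       # row-normalized adjacency
    # aggregate each joint's neighborhood, then project the channels
    return np.einsum("uv,tvc->tuc", A_norm, X) @ W
```

Stacking such layers with temporal convolutions in between yields the spatio-temporal receptive field that the encoder needs; MSG3D replaces the single adjacency with scaled \emph{k}-hop adjacencies.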
\subsection{Appearance Feature Encoder (AFE)} We experimented with the X3D \cite{x3d_2020} and I3D \cite{Carreira_2017_CVPR} networks as our appearance feature encoder module. The progressive expansion of X3D, with lower channel dimensions and higher spatio-temporal resolution, allows effective video feature extraction. We extracted features from the last convolutional layer (conv5) of X3D; a similar process was used for I3D. The features of each clip are combined along the temporal dimension to generate the final feature matrix. \subsection{Discriminative Non-Local Attention (DNLA)} Long-range dependencies are often captured using large receptive fields in convolutional or recurrent operations. Such repetitive local calculations are inefficient, and their optimization can be arduous. The problem becomes significantly more complex when relationships between distant positions in space and time must be modeled. The classical non-local means operation \cite{Buades2005} assumes that, under stationary conditions, patches in an image have a certain similarity, and the similarity details increase as the image size grows. Thus, a pixel can be estimated from other pixels in the image. This phenomenon extends to videos as well. In particular, sports videos have several overlapping moments with similar human activities, and non-local operations can be exploited to capture these distant dependencies. Non-local operations are closely related to self-attention in the Transformer \cite{NIPS2017_transformer}, which has achieved tremendous success in machine translation. The non-local network \cite{Non_local_wang} uses non-local means to calculate the response at a spatio-temporal position as a weighted sum of all pixels over the spatial and temporal dimensions. Thus, the computed features capture relationships over the entire space-time volume, instead of the local patch provided by a convolution kernel or the consecutive frames of a recurrent network. Gymnastics floor routines usually have a large number of frames. 
Traditional video- or skeleton-based architectures are not designed to handle such extended temporal modeling; they usually evaluate samples of very short length, lasting a few seconds. To process longer videos, existing works split a video into independent segments and concatenate the segment-wise encoded features, which results in an asynchronous dense feature matrix. This approach works if the video does not contain complex spatio-temporal dependencies or if the downstream task is simple, such as action recognition, but it fails to deliver equal performance on more elaborate tasks such as action quality assessment. To address the discrepancy between segments, we introduce discriminative non-local attention, which computes correlations among distant feature positions and thus creates a sparse mapping by extracting the most prominent features over the temporal dimension. Our vanilla feature distillation (VFD) module applies adaptive average pooling with an overlapping stride and max pooling over the space-time feature space; the pooled channels are concatenated and passed through an MLP for score mapping. Next, we use a non-local layer to enhance the correlation significance among distant positions. The generic non-local operation from \cite{Non_local_wang} is defined in Eq~\ref{NL_block}. \begin{equation}\label{NL_block} y_i = \frac{1}{C(x)}\sum_{\forall_j}f(x_i,x_j)g(x_j) \end{equation} Here, $i$ is the index of the output position, and $j$ enumerates all positions over space and time in the input $x$. The pairwise function $f$ computes a scalar affinity between position $i$ and each position $j$. The unary function $g$ computes a representation of the input at position $j$. The final output is normalized by the factor $C(x)$. We evaluated two different pairwise functions $f$: Embedded Gaussian and Concatenation \cite{Non_local_wang}. The Embedded Gaussian performed better in our experiments. It is defined in Eq~\ref{embd_G}. 
\begin{equation}\label{embd_G} f(x_i,x_j) = e^{{\theta (x_i)}^T{\phi(x_j)}}, \end{equation} where $\theta (x_i) = W_\theta x_i$ and $\phi (x_j) = W_\phi x_j$ are two embedded representations of the input matrix. Our vanilla distillation module suppresses redundant or less informative attributes without losing distinguishing semantic details, while the non-local convolutional operation facilitates learning correlations among distant frames. We bridge these two learning procedures by unifying the idea of discriminative distillation from VFD with non-local convolution. We tested two different approaches: DNLA$_\mu$ and DNLA$_\delta$. In DNLA$_\mu$, we combine a parallel branch with the original NL block that acts as an embedded residual connection. It takes the $g(x)$ of Eq~\ref{NL_block} as input and creates a motion matrix by subtracting consecutive frames. Each frame in the new matrix expresses the joint motions and has higher values whenever the athlete performs an intense activity. We use a binary mask to deactivate the lower quantile of the matrix, which subdues irrelevant details and adds a dominant feature pipeline. After processing through an encoding layer, the embedded feature is fused into the main branch. The $\theta$ and $\phi$ operations in the non-local network (Eq~\ref{embd_G}) extend the space-time domain into a generic embedding. DNLA$_\delta$ instead distributes the spatial and temporal feature spaces into two separate encoding heads. This independent computation provides flexibility by allowing robust cross-space-time interaction and augments non-local attention by retaining discriminative spatial and temporal dependencies. \section{Experiments}\label{Experiments} We evaluate our proposed AQA method on our AGF-Olympics dataset. First, we discuss the data preprocessing techniques and then the experimental results. \subsection{Data preprocessing} Skeletons extracted from sports videos are very noisy due to extreme movements. 
OpenPose \cite{openpose2019} often fails to detect skeletons in frames where the athlete is in high motion and appears blurry. Moreover, sports replays often focus on the upper body of athletes, so almost 50\% of the joint coordinates are missing. We discard frames that have fewer than 20 joint coordinates. In frames with at most 5 missing joints, instead of discarding the frames entirely, we use joint interpolation to estimate the missing joints from their immediate neighbors. If the immediate neighbors are also missing, we use a K-hop association to identify neighboring joints. Eq~\ref{joint_intp_eq} represents K-hop joint interpolation, where $K$ is a set of neighboring joints, $f$ is a function that chooses the two closest neighboring joints from the set $K$, and $i$ and $j$ denote their indices. \begin{equation}\label{joint_intp_eq} X_{new} = \frac{f(K_i) + f(K_j)}{2};\qquad f(K_{i/j})\neq 0 \end{equation} \subsection{Results} We compare our method to existing AQA methods that achieved good performance in predicting scores on the six sports of the AQA-7 dataset: diving, vault, skiing, snowboarding, synchronized diving 3m, and synchronized diving 10m. All these sports are of short duration, containing at most 151 frames, and do not involve extended sequences of complex movements. These methods achieve an average Spearman correlation of 0.74 on the AQA-7 dataset (minimum 0.60, maximum 0.88), whereas on our dataset the average is 0.44 (minimum 0.19, maximum 0.61). Clearly, their performance is not on par when tested on lengthy and complex sports. Details are presented in Table \ref{compare_methods_on_our_dataset}. The table also shows that our proposed method outperforms existing methods by a large margin and has a significantly smaller number of parameters. 
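As an illustration, the K-hop interpolation of Eq~\ref{joint_intp_eq} can be sketched as follows. This is a minimal Python sketch; the 5-joint chain topology and the function names are hypothetical choices made only for illustration, not our actual implementation.

```python
# Minimal sketch of K-hop joint interpolation (Eq. joint_intp_eq).
# The skeleton topology below is a hypothetical 5-joint chain; real
# layouts follow the pose estimator's joint set (e.g. OpenPose).
NEIGHBOURS = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def k_hop_neighbours(joint, k):
    """All joints reachable from `joint` in at most k hops, excluding itself."""
    seen = {joint}
    frontier = {joint}
    for _ in range(k):
        frontier = {n for j in frontier for n in NEIGHBOURS[j]} - seen
        seen |= frontier
    return seen - {joint}

def interpolate_joint(coords, joint, max_hops=3):
    """Estimate a missing joint as the average of two valid neighbours at the
    smallest hop distance, widening the neighbourhood hop by hop; coords[j]
    is an (x, y) tuple or None when the joint was not detected."""
    for k in range(1, max_hops + 1):
        valid = sorted(n for n in k_hop_neighbours(joint, k)
                       if coords[n] is not None)
        if len(valid) >= 2:
            (x1, y1), (x2, y2) = coords[valid[0]], coords[valid[1]]
            return ((x1 + x2) / 2, (y1 + y2) / 2)
    return None  # not enough neighbours: the frame is discarded

coords = [(0.0, 0.0), None, (2.0, 2.0), (3.0, 3.0), None]
assert interpolate_joint(coords, 1) == (1.0, 1.0)
```

The hop radius grows only when fewer than two valid neighbors exist, so joints are always reconstructed from the closest available evidence.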
\begin{table}[!htbp] \scriptsize \caption{Performance comparison of different models on our proposed AGF-Olympics dataset.\label{compare_methods_on_our_dataset}} \setlength{\tabcolsep}{3pt} \centering \begin{tabular}{|p{85pt}|P{50pt}|P{35pt}|P{43pt}|} \hline \textbf{Model} & \textbf{Architecture} & \textbf{Model parameters} & \textbf{Corr. on AGF-Olympics}\\ \hline C3D-LSTM \cite{Parmar_2017_CVPR_Workshops} & Full model & 65.67 M & 0.43 \\ \hline C3D-LSTM \cite{Parmar_2019_WACV} & Full model & 65.67 M & 0.43 \\ \hline CoRe \cite{Yu_2021_ICCV} & Full model & 14.80 M & 0.19 \\ \hline Eagle-eye \cite{Nekoui_2021_WACV} & W/o JCA Stream & 84.22 M & 0.54 \\ \hline USDL \cite{Tang_2020_CVPR} & Full model & 12.59 M & 0.61 \\ \hline \textbf{Proposed model} & \textbf{Full model} & \textbf{05.01 M} & \textbf{0.68} \\ \hline \end{tabular} \end{table} Table \ref{JFE_ablate_results} reports results for different architectural choices of our method on the skeleton sequences of AGF-Olympics. We conducted comprehensive ablation studies to evaluate the effectiveness of the different architectural modules. For the base joint feature encoder, we used two models. First, we used a simple GCN model with no bells and whistles. Then we used the more advanced MSG3D model, which achieved outstanding performance on the action recognition task \cite{liu2020disentangling}. MSG3D uses multi-scale and multi-window techniques with an inflated adjacency matrix. With everything else kept the same, MSG3D achieved a 14\% increase in Spearman correlation over the simple GCN model. Subsequently, we investigated different distillation processes to smooth the asynchronicity in the segmented features and create a sparse mapping. Our vanilla distillation module VFD uses pooling techniques. We then experimented with the non-local module, which stimulates self-attention and enhances global modeling. 
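For concreteness, the non-local operation of Eq~\ref{NL_block} with the embedded Gaussian pairwise function (Eq~\ref{embd_G}) can be sketched as below. This is a minimal NumPy illustration with random stand-in weights rather than our actual implementation; the embedding width and input shapes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def non_local_embedded_gaussian(x, c_emb=4):
    """Sketch of y_i = (1/C(x)) * sum_j f(x_i, x_j) g(x_j) with the
    embedded Gaussian f(x_i, x_j) = exp(theta(x_i)^T phi(x_j)); the
    random matrices stand in for the learned 1x1 convolutions."""
    n, c = x.shape                        # n flattened space-time positions
    W_theta = 0.1 * rng.normal(size=(c, c_emb))
    W_phi = 0.1 * rng.normal(size=(c, c_emb))
    W_g = 0.1 * rng.normal(size=(c, c_emb))
    theta, phi, g = x @ W_theta, x @ W_phi, x @ W_g
    f = np.exp(theta @ phi.T)             # pairwise similarities, shape (n, n)
    f /= f.sum(axis=1, keepdims=True)     # C(x): softmax normalization over j
    return f @ g                          # attended features, shape (n, c_emb)

x = rng.normal(size=(6, 8))               # 6 positions, 8 channels
y = non_local_embedded_gaussian(x)
assert y.shape == (6, 4)
```

Each output position is a convex combination of the embedded features at every other space-time position, which is what lets the module relate distant frames directly.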
Finally, we introduced discriminative distillation in the non-local module to intensify attention on the most contributing attributes. With the discriminative non-local attention module, MSG3D + DNLA$_\delta$(emb) achieved an increase of around 38.7\% in Spearman correlation compared to VFD. \begin{table}[!htbp] \scriptsize \caption{Results for different architectural choices in the joint stream. Base joint feature encoder (JFE): a GCN or MSG3D. VFD indicates the vanilla feature distillation and DNLA$_{\mu/\delta}$ indicates our proposed discriminative non-local attention module.\label{JFE_ablate_results}} \setlength{\tabcolsep}{3pt} \centering \begin{tabular}{|p{88pt}|P{28pt}|P{20pt}|P{20pt}|P{18pt}|P{38pt}|} \hline \textbf{Method} & \textbf{Base Encoder} & \textbf{SP Corr} & \textbf{Test loss} & \textbf{\begin{tabular}[c]{@{}c@{}}Class\\ loss\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Joint\\ interpolation\end{tabular}}\\ \hline JFE + VFD + MLP & GCN & 0.27 & 1.36 & $\times$ & \checkmark\\ \hspace*{0.5cm} + VFD + MLP & - & 0.32 & 1.33 & \checkmark & $\times$\\ \hspace*{0.5cm} + VFD + MLP & - & \textbf{0.40} & 1.11 & \checkmark & \checkmark\\ \hline \hspace*{0.5cm} + VFD + MLP & MSG3D & 0.31 & 0.45 & $\times$ & \checkmark\\ \hspace*{0.5cm} + NLA$_{(emb)}$ + MLP & - & 0.32 & 36.86 & $\times$ & \checkmark \\ \hspace*{0.5cm} + NLA$_{(cat)}$ + MLP & - & 0.35 & 0.08 & $\times$ & \checkmark \\ \hspace*{0.5cm} + DNLA$_{\mu_{(emb)}}$ + MLP & - & 0.34 & 0.04 & $\times$ & \checkmark \\ \hspace*{0.5cm} + DNLA$_{\mu_{(cat)}}$ + MLP & - & 0.35 & 0.08 & $\times$ & \checkmark \\ \hspace*{0.5cm} + DNLA$_{\mu_{(emb)}}$ + MLP & - & 0.41 & 0.04 & \checkmark & \checkmark \\ \hspace*{0.5cm} + DNLA$_{\delta_{(emb)}}$ + MLP & - & 0.43 & 1.16 & $\times$ & \checkmark \\ \hspace*{0.5cm} + DNLA$_{\delta_{(emb)}}$ + MLP & - & \textbf{0.45} & 0.36 & \checkmark & \checkmark \\ \hline \end{tabular} \end{table} Figure~\ref{skel_heat} illustrates the skeleton heatmap from different variants of 
the skeleton model. With DNLA, the AQA model was able to focus on the most prominent joints. \begin{figure}[!hbtp] \centerline{\includegraphics[height=1.5in, width=2.5in]{Figs/skel_heat.png}} \caption{Skeleton heatmap from the last conv layer of different variants of the skeleton model, (a) VFD (b) DNLA$_\mu$(emb) and (c) DNLA$_\delta$(emb). It is evident that compared to VFD, which assigns similar weights to most of the joints, variants of DNLA can progressively localize joint focus. DNLA$_\delta$(emb) attends more to the limbs and the waist joints, which is reasonable considering that gymnastic movements involve these body parts the most.} \label{skel_heat} \end{figure} Table \ref{AFE_ablate_results} presents experimental results on the RGB videos of AGF-Olympics. AFE + VFD$_\delta$ + MLP achieved the best result. We evaluated two different loss functions, a combined L1 and weighted L2 loss (L1 + w*L2) and USDL from \cite{Tang_2020_CVPR}. \begin{table}[!htbp] \scriptsize \caption{Results for different architectural choices in the appearance stream. The base appearance feature encoder is X3D or I3D. VFD$_{\mu/\delta}$ refers to different kernel sizes to extract discriminative features\label{AFE_ablate_results}.} \setlength{\tabcolsep}{3pt} \centering \begin{tabular}{|p{86pt}|P{35pt}|P{33pt}|P{30pt}|P{35pt}|} \hline \textbf{Method} & \textbf{Base Encoder} & \textbf{SP Corr} & \textbf{Test loss} & \textbf{Loss fn}\\ \hline AFE + VFD$_\mu$ + MLP & X3D & 0.57 & 1.05 & L1 + w*L2\\ \hspace*{0.5cm} + VFD$_\delta$ + MLP & - & 0.68 & 0.57 & L1 + w*L2\\ \hline \hspace*{0.5cm} + VFD$_\mu$ + MLP & I3D & 0.61 & 1.89 & USDL\\ \hspace*{0.5cm} + DNLA$_{\mu{(emb)}}$ + MLP & - & 0.64 & 1.24 & USDL\\ \hline \end{tabular} \end{table} Figure~\ref{app_sample} shows a sample phone app that can be used to communicate with the model deployed on the cloud to assess athletes. The start/stop button starts recording. Pressing the analyze button generates a REST API call to the cloud to predict the score. 
The stats button displays the available score progress chart, and the reset button clears the viewing area. \begin{figure}[!tbhp] \centerline{\includegraphics[width=\columnwidth]{Figs/app_sample.png}} \caption{App layout showing a sample video captured and its predicted score. The graph shows score records.} \label{app_sample} \end{figure} \section{Conclusion} We proposed the AGF-Olympics dataset, which provides a challenging scenario for long-range temporal modeling in action quality assessment. The complexity of the gymnastic floor routine combined with the highly random environment in broadcast videos presents a unique challenge. Our dataset is substantially more diverse and contains subtler intricacies than existing action quality assessment datasets. Furthermore, our proposed discriminative non-local attention module presents a new mapping technique to extract extended temporal affinities. It thus enhances the correspondence between isolated activities and overall patterns. With rigorous experiments, we showed that this module significantly improves the performance of AQA methods. \section*{Acknowledgments} This research was carried out while Sania Zahan was the recipient of a University Postgraduate Award and a University of Western Australia International Fee Scholarship. Professor Ajmal Mian is the recipient of an Australian Research Council Future Fellowship Award (project number FT210100268) funded by the Australian Government. 
\section{Introduction} Game theory, launched in 1928 by John von Neumann in a paper \cite{neumann} and developed in 1944 by John von Neumann and Oskar Morgenstern in a book \cite{neumann2}, is one of the branches of applied mathematics. The aim of this theory is the mathematical modeling of the behavior of rational players in conflict situations. The players are assumed to strive to maximize their own benefit and to take into account all possible ways the other players may behave. Within this theory new ideas are still being proposed; one of the latest trends is to define and study new solution concepts. A fundamental concept used for predicting players' behavior is the Nash equilibrium \cite{nash}. It defines a strategy vector at which no player has a profitable unilateral deviation. Besides that kind of stability, a Nash equilibrium always exists in finite games. Therefore, the use of Nash equilibria is a first step in finding reasonable moves of the players. Nash equilibria may, however, often indicate non-optimal solutions. Moreover, a game may have multiple Nash equilibria that imply different outcomes. One can then use refinements of Nash equilibria that impose additional restrictions on strategy vectors \cite{myerson}. In many games, such as the Prisoner's Dilemma, Nash equilibria may imply very low payoffs compared to other payoffs available in the game. This has undoubtedly had an impact on the promotion of solution concepts not based on Nash equilibrium. There is a significant number of articles devoted to solution concepts that do not derive from Nash equilibria. One of the best references here is \cite{fourny}. That paper introduces the Perfectly Transparent Equilibrium, an idea based on iterated rounds of elimination of strategy profiles that do not imply maximin payoffs. Another notion worth mentioning is the Berge equilibrium \cite{berge, colman, pbf}. 
The concept is based on altruism in the sense that each player's aim is to maximize the payoffs of the other players. The subject of our work is the notion of Kantian equilibrium \cite{roemerbook}. It follows a line of reasoning suggested by Hofstadter \cite{hofstadter}, whose idea assumes that the players are superrational. This means that they are rational and also able to conduct a meta-analysis, taking into account that the other players reason in the same way. In the case of a symmetric game, the players have exactly the same strategic position, so if one player arrives at a rational strategy, the other players should arrive at the identical strategy. Examining profiles consisting of the same strategies makes finding Kantian equilibria considerably easier than finding Nash equilibria; the task reduces to maximizing a payoff function with respect to the strategy of one of the players. This is particularly relevant when studying a quantum game in which a player's unitary strategy depends on as many as three parameters. Quantum game theory is a field developed on the border of game theory and quantum information. It is an interdisciplinary area of research that assumes games are played with the aid of objects behaving according to the laws of quantum mechanics. The theory was initiated by considering a simple extensive-form game in \cite{meyer}, where D. Meyer showed that a player equipped with unitary operators has a winning strategy. Another fundamental paper on quantum games is \cite{eisert}. The scheme defined by J. Eisert, M. Wilkens and M. Lewenstein was the first formal protocol for playing a general $2\times 2$ game. According to \cite{eisert}, players' strategies are unitary operators that depend on two parameters. These operators are performed on a maximally entangled two-qubit state. L. Marinatto and T. Weber introduced an alternative model of playing a quantum game. 
In their scheme for a $2\times 2$ game, players' strategies are restricted to two unitary operators (the identity and the Pauli operator $X$). The operators are performed on a fixed two-qubit state (not necessarily entangled). Quantum game theory also includes quantum models with infinite strategy sets. A minimalistic model of quantum Cournot duopoly was introduced by H. Li, J. Du and S. Massar in \cite{lidumassar} and generalized in \cite{fraduopol}. Our work focuses on Kantian equilibria in $2\times 2$ symmetric games and their quantum counterparts. We generalize the previous findings presented in \cite{roemerbook} by deriving a general formula for Kantian equilibria. We then examine this solution concept with respect to the Eisert-Wilkens-Lewenstein quantum approach to the game. \section{Preliminaries} In this section, we review relevant notions from classical and quantum game theory that are needed to follow our work. The basic model of games studied in game theory is a game in strategic form. \begin{definition}\textup{\cite{maszler}}~ A game in strategic form (or in normal form) is an ordered triple $(N, (S_{i})_{i\in N}, (u_{i})_{i\in N})$, in which \begin{itemize} \item $N = \{1,2, \dots, r\}$ is a finite set of players. \item $S_{i}$ is the set of strategies of player $i$, for every player $i\in N$. \item $u_{i}\colon S_{1}\times S_{2} \times \cdots \times S_{r} \to \mathds{R}$ is a function associating each vector of strategies $s= (s_{i})_{i\in N}$ with the payoff $u_{i}(s)$ to player $i$, for every player $i\in N$. \end{itemize} \end{definition} A game in strategic form proceeds in the following way. Each player $i \in N$ chooses one of her strategies $s_{i}\in S_{i}$. In this way, the players determine a strategy vector $(s_{1}, s_{2}, \dots, s_{r})$. Then, for each player $i$, the payoff function $u_{i}$ determines a payoff $u_{i}(s_{1}, s_{2}, \dots, s_{r})$. A player can also choose her strategies according to a probability distribution. 
Then we say that she plays a mixed strategy. Formally, a mixed strategy is an element of the set \cite{maszler} \begin{equation} \Sigma_{i} = \left\{\sigma_{i}\colon S_{i} \to [0,1]\colon \sum_{s_{i} \in S_{i}}\sigma_{i}(s_{i}) = 1\right\}. \end{equation} In the particular case where a strategic-form game has two players and each player has two strategies, the game can be written as a $2\times 2$ matrix in which each element is a pair of real numbers \begin{equation}\label{2x2g} (A,B) = \begin{pmatrix} (a_{00}, b_{00}) & (a_{01}, b_{01})\\ (a_{10}, b_{10}) & (a_{11}, b_{11}) \end{pmatrix}. \end{equation} Player 1's strategies are identified with the rows and player 2's strategies are identified with the columns. Players' mixed strategies in the case of game (\ref{2x2g}) will be denoted by $(p,1-p)$ and $(q,1-q)$, respectively. Then the expected payoff resulting from playing the mixed strategies is \begin{equation} \begin{aligned} &u_{1}((p,1-p), (q,1-q)) = pqa_{00} + p(1-q)a_{01} + (1-p)qa_{10} + (1-p)(1-q)a_{11}, \\ &u_{2}((p,1-p), (q,1-q)) = pqb_{00} + p(1-q)b_{01} + (1-p)qb_{10} + (1-p)(1-q)b_{11}. \end{aligned} \end{equation} Symmetry is common in two-player normal-form games. It is particularly visible in the $2\times 2$ bimatrix games discussed in many game theory textbooks. Games such as the Prisoner's Dilemma, Chicken or Stag Hunt are examples of symmetric games. Informally, one can say that a symmetric game is one that looks the same for all the players \cite{binmore}. A more precise definition is as follows \cite{osborne, plan}: \begin{definition}\textup{\cite{plan}}~ Let $N=\{1,2\}$ and $S_{1} = S_{2}$. A game $\Gamma = (N, (S_{1}, S_{2}), (u_{1}, u_{2}))$ is symmetric if for all pairs of strategies $(x,y) \in S_{1}\times S_{2}$ \begin{equation}\label{symmetric} u_{1}(x,y) = u_{2}(y,x). \end{equation} \end{definition} \noindent In the case of a finite symmetric two-player game, condition (\ref{symmetric}) means that $B = A^{T}$ in (\ref{2x2g}). 
Then a symmetric $2\times 2$ bimatrix game takes the following form: \begin{equation} \begin{pmatrix}\label{sym2x2} (a_{00}, a_{00}) & (a_{01}, a_{10})\\ (a_{10}, a_{01}) & (a_{11}, a_{11}) \end{pmatrix}. \end{equation} The Eisert-Wilkens-Lewenstein (EWL) scheme \cite{eisert} has undoubtedly been one of the most frequently used schemes for quantum games. In the EWL scheme, players' strategies are unitary operators that the two players apply to a maximally entangled state. In the literature there are a few descriptions of the EWL scheme that are strategically equivalent. In what follows, we present a concise form that we adapted for the purpose of our research. Let \begin{equation} |\Psi\rangle = \frac{|00\rangle + i|11\rangle}{\sqrt{2}}, \quad C_{0} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad C_{1} = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}. \end{equation} For $k,l \in \{0,1\}$ define \begin{equation} |\Psi_{kl}\rangle = C_{k}\otimes C_{l} |\Psi\rangle. \end{equation} Then $\{|\Psi_{kl}\rangle \colon k,l \in \{0,1\}\}$ is an orthonormal basis for $\mathds{C}^2\otimes \mathds{C}^2$. 
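The following minimal NumPy sketch (illustrative only) checks numerically that these four states form an orthonormal basis:

```python
import numpy as np

# Numerical check that the four states |Psi_kl> = (C_k ⊗ C_l)|Psi| form an
# orthonormal basis of C^2 ⊗ C^2. Computational basis order: |00>,|01>,|10>,|11>.
psi = np.array([1, 0, 0, 1j]) / np.sqrt(2)        # (|00> + i|11>)/sqrt(2)
C = [np.eye(2), np.array([[0, 1j], [1j, 0]])]     # C_0 and C_1

basis = [np.kron(C[k], C[l]) @ psi for k in (0, 1) for l in (0, 1)]

# The Gram matrix <Psi_kl | Psi_k'l'> must be the 4x4 identity.
G = np.array([[np.vdot(u, v) for v in basis] for u in basis])
assert np.allclose(G, np.eye(4))
```

Note that `np.vdot` conjugates its first argument, matching the bra-ket inner product.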
\begin{definition}\label{ewldef} The Eisert-Wilkens-Lewenstein approach to game (\ref{2x2g}) is defined by a triple $(N, (D_{i})_{i\in N}, (v_{i})_{i\in N})$, where \begin{itemize} \item $N = \{1,2\}$ is the set of players, \item $D_{i}$ is a set of unitary operators from $\mathsf{SU}(2)$ with typical element \begin{equation} U_{i}(\theta_{i}, \alpha_{i}, \beta_{i}) = \begin{pmatrix} e^{i\alpha_{i}}\cos\frac{\theta_{i}}{2} & ie^{i\beta_{i}}\sin\frac{\theta_{i}}{2} \\ ie^{-i\beta_{i}}\sin\frac{\theta_{i}}{2} & e^{-i\alpha_{i}}\cos\frac{\theta_{i}}{2} \end{pmatrix}, \quad \theta_{i} \in [0,\pi], \quad \alpha_{i}, \beta_{i} \in [0,2\pi), \end{equation} \item $v_{i}\colon D_{1}\otimes D_{2} \to \mathds{R}$ is player $i$'s payoff function given by \begin{align} &v_{1}(U_{1}\otimes U_{2}) = \sum^1_{k,l=0}a_{kl}|\langle \Psi_{kl}|U_{1}\otimes U_{2}|\Psi\rangle|^2,\\ &v_{2}(U_{1}\otimes U_{2}) = \sum^1_{k,l=0}b_{kl}|\langle \Psi_{kl}|U_{1}\otimes U_{2}|\Psi\rangle|^2, \end{align} where $a_{kl}$ and $b_{kl}$ for $k,l \in \{0,1\}$ are the payoffs of (\ref{2x2g}). \end{itemize} \end{definition} According to Definition~\ref{ewldef}, the EWL approach to (\ref{2x2g}) can be regarded as a normal-form game in which the strategies are unitary operators that the players perform on the state $|\Psi\rangle$. The payoff function is then the expected value of a measurement of the final state, \begin{equation} U_{1}\otimes U_{2}|\Psi\rangle = \sum^1_{k,l = 0}\langle \Psi_{kl}|U_{1}\otimes U_{2}|\Psi\rangle|\Psi_{kl}\rangle \end{equation} with respect to the basis $\{|\Psi_{kl}\rangle \colon k,l \in \{0,1\}\}$. One of the main features of von Neumann-Morgenstern utility theory is that if player $i$'s preferences in a game are represented by the expected value of payoffs, then every positive affine transformation of the payoffs also represents these preferences. \begin{definition}\textup{\cite{maszler}} Let $u\colon X \to \mathds{R}$ be a function. 
A function $v\colon X \to \mathds{R}$ is a positive affine transformation of $u$ if there exist a positive real number $\alpha$ and a real number $\beta$ such that for each $x\in X$ \begin{equation} v(x) = \alpha u(x) + \beta. \end{equation} \end{definition} \begin{theorem}\textup{\cite{maszler}}\label{th1} If $u_{i}$ is a linear utility function representing player $i$'s preferences, then every positive affine transformation of $u_{i}$ is also a linear utility function representing the preferences. \end{theorem} The next example illustrates Theorem~\ref{th1}. \begin{example} Let us consider the following bimatrix game: \begin{equation}\label{nonzerogame} \begin{pmatrix} (-14, 15) & (-2,-3) \\ (-4,0) & (-12,12) \end{pmatrix}. \end{equation} If we transform player 1 and 2's payoffs by the positive affine transformations $\frac{1}{2}x + 5$ and $\frac{1}{3}x - 3$, respectively, we obtain \begin{equation}\label{zerogame} \begin{pmatrix} (-2,2) & (4,-4) \\ (3,-3) & (-1,1) \end{pmatrix}. \end{equation} Although (\ref{zerogame}) is a zero-sum game in contrast to (\ref{nonzerogame}), both games are equivalent with respect to players' preferences about the result of the game. Both games have the unique Nash equilibrium $((2/5, 3/5), (1/2, 1/2))$. \end{example} More generally, if $(p_{i})^m_{i=1}$ is a probability distribution over player $i$'s payoffs $(a_{i})^m_{i=1}$ and $\sum^m_{i=1} p_{i}a_{i}$ is the expected payoff, then for $\alpha \in \mathds{R}_{+}$ and $\beta \in \mathds{R}$ \begin{equation} \alpha \sum^m_{i=1} p_{i}a_{i} +\beta = \sum^m_{i=1}\alpha p_{i}a_{i} + \sum^m_{i=1}\beta p_{i} = \sum^m_{i=1}(\alpha a_{i} + \beta)p_{i}. \end{equation} Hence, if a player prefers a probability distribution $(p_{i})^m_{i=1}$ over $(p'_{i})^m_{i=1}$, then \begin{equation} \sum^m_{i=1} p_{i}a_{i} \geq \sum^m_{i=1} p'_{i}a_{i} \Leftrightarrow \sum^m_{i=1}(\alpha a_{i} + \beta)p_{i} \geq \sum^m_{i=1}(\alpha a_{i} + \beta)p'_{i}. 
\end{equation} Since the expected payoff in the EWL scheme is also of the form $\sum^m_{i=1} p_{i}a_{i}$, where $a_{i}$ are the payoffs in the classical game, a positive affine transformation of the classical game does not change players' preferences in the associated EWL game. \section{Kantian equilibria in $2\times2$ symmetric games} The definition of Kantian equilibrium varies according to the type of game \cite{roemer, istrate}. In what follows, we reproduce the one concerning a mixed extension of a finite normal-form game. Let us consider a normal-form game $(N, (S_{i})_{i\in N}, (u_{i})_{i\in N})$ in which all players have identical strategy sets, i.e., $S_{1} = S_{2} = \dots = S_{n} = S$, and let $\Delta(S)$ be the set of probability distributions on $S$. \begin{definition}\label{SKE} A simple Kantian equilibrium (SKE) is a vector $(\tau^*, \tau^*, \dots, \tau^*) \in \Delta(S)^n$ such that \begin{equation} \tau^* \in \argmax_{\tau \in \Delta(S)}{u_{i}(\tau, \tau, \dots, \tau)}. \end{equation} \end{definition} Obviously, Definition~\ref{SKE} can be easily modified when one considers pure strategies $S$ or quantum strategies (i.e., unitary operators) instead of $\Delta(S)$. In \cite{roemerbook} simple Kantian equilibria are found for a few examples of $2\times2$ symmetric games. Our results generalize those of \cite{roemerbook}: we provide a concise formula for a general $2\times 2$ symmetric game. Consider the two-player symmetric game (\ref{sym2x2}), and first the case $a_{00} \ne a_{11}$. There is no loss of generality in assuming that $a_{00} > a_{11}$. To simplify the Kantian equilibrium analysis, we apply to (\ref{sym2x2}) the positive affine transformation \begin{equation} f(x) = \frac{1}{a_{00} - a_{11}}(x-a_{11}). 
\end{equation} Then, game (\ref{sym2x2}) is transformed into a preference-equivalent game \begin{equation}\label{sym10} \begin{pmatrix} (1, 1) & (a, d-a)\\ (d-a, a) & (0, 0) \end{pmatrix}, \end{equation} where \begin{equation}\label{sub1} a = \frac{a_{01}-a_{11}}{a_{00}-a_{11}}, ~ d = \frac{a_{01}+ a_{10} - 2a_{11}}{a_{00} - a_{11}}. \end{equation} Let us first determine the expected payoff $u_{i}((p, 1-p),(p,1-p))$ of player $i$ resulting from playing the strategy vector $((p, 1-p),(p,1-p))$ in (\ref{sym10}). We obtain \begin{equation}\label{equ1u2} u_{1}((p, 1-p),(p,1-p)) = u_{2}((p, 1-p),(p,1-p)) = p^2 + d(1-p)p, \quad p\in [0,1]. \end{equation} By Definition~\ref{SKE}, simple Kantian equilibria in (\ref{sym10}) are determined by the points that maximize (\ref{equ1u2}). We consider two cases. If $d>1$, then the point $p^* = -d/(2(1-d))$ is a local maximum point of $x^2+d(1-x)x, ~x\in \mathds{R}$. It maximizes (\ref{equ1u2}) if $0\leq p^* \leq 1$, which holds if and only if $d\geq 2$. For $1<d<2$, \begin{equation} \frac{d}{dp}u_{i}((p, 1-p),(p,1-p)) = 2p(1-d) + d > 0. \end{equation} Hence, $p =1$ maximizes (\ref{equ1u2}) for $1<d<2$. If $d \leq 1$, function (\ref{equ1u2}) is convex and thus attains its maximum at one of the endpoints of $[0,1]$; in this case, it is the point $p=1$. Summarizing, we have thus proved the following lemma: \begin{lemma}\label{lemma1} In a symmetric $2\times 2$ game in the form of (\ref{sym10}), if $u_{i}((p,1-p),(q,1-q))$ is player $i$'s payoff function, then it follows that \begin{equation}\label{formula1} \argmax_{p\in [0,1]} u_{i}((p,1-p),(p,1-p))= \begin{cases} \{1\} &\text{if}~ d<2,\\ \left\{\frac{-d}{2(1-d)}\right\} &\text{if}~d\geq 2. \end{cases} \end{equation} \end{lemma} Let us now consider bimatrix (\ref{sym2x2}) in which $a_{00} = a_{11}$. 
By using a positive affine transformation $g(x) = x-a_{00}$ we are left with the task of determining simple Kantian equilibria in \begin{equation}\label{sym00} \begin{pmatrix} (0, 0) & (b, e-b)\\ (e-b, b) & (0, 0) \end{pmatrix}, \end{equation} where \begin{equation}\label{sub2} b = a_{01}-a_{00}, ~ e = a_{01}+ a_{10} - 2a_{00}. \end{equation} Now, the problem of finding an SKE comes down to determining the points that maximize \begin{equation} u_{i}((p,1-p),(p,1-p)) = ep(1-p),\quad p\in [0,1]. \end{equation} We leave it to the reader to verify the following lemma: \begin{lemma}\label{lemma2} In a symmetric $2\times 2$ game in the form of (\ref{sym00}), if $u_{i}((p,1-p),(q, 1-q))$ is player $i$'s payoff function, then it follows that \begin{equation}\label{formula2} \argmax_{p\in [0,1]}u_{i}((p,1-p),(p,1-p)) = \begin{cases} \{0,1\} &\text{if}~e<0, \\ [0,1] &\text{if}~e=0, \\ \left\{\frac{1}{2}\right\} &\text{if}~e>0. \end{cases} \end{equation} \end{lemma} Although formulae~(\ref{formula1}) and (\ref{formula2}) find simple Kantian equilibria for games~(\ref{sym10}) and (\ref{sym00}), respectively, Lemmas~\ref{lemma1} and \ref{lemma2} enable us to generalize the results to arbitrary symmetric $2\times 2$ games (\ref{sym2x2}). \begin{proposition}\label{prop1} Let $u_{i}((p,1-p),(q, 1-q))$ be player $i$'s payoff function in the symmetric $2\times 2$ game (\ref{sym2x2}) in which $a_{00} \geq a_{11}$. Then \begin{equation}\label{propeq1} \argmax_{p\in [0,1]}u_{i}((p,1-p),(p,1-p)) = \begin{cases} \{1\} &\text{if}~~a_{01}+a_{10} - 2a_{00} \leq 0 ~~\text{and}~~ a_{00}>a_{11}, \\ \{0,1\} &\text{if}~~a_{01}+a_{10} - 2a_{00} < 0 ~~\text{and}~~ a_{00}=a_{11},\\ [0,1] &\text{if}~~a_{01}+a_{10} - 2a_{00} = 0 ~~\text{and}~~ a_{00}=a_{11}, \\ \left\{\frac{a_{01} + a_{10} - 2a_{11}}{2(a_{01}+a_{10} - a_{00}-a_{11})}\right\} &\text{if}~~a_{01}+a_{10} - 2a_{00} > 0 ~~\text{and}~~ a_{00}\geq a_{11}. 
\end{cases} \end{equation} \end{proposition} \begin{proof} The first part of (\ref{propeq1}) follows from the first part of (\ref{formula1}): the condition $d<2$ is equivalent to $a_{01} + a_{10} - 2a_{00} < 0$ by (\ref{sub1}). Moreover, $-d/(2(1-d)) = 1$ for $d=2$. Similarly, from the first and second parts of (\ref{formula2}) we obtain the second and third parts of (\ref{propeq1}). Substituting the form of $d$ given in (\ref{sub1}) into $-d/(2(1-d))$ we obtain \begin{equation}\label{mix} p = \frac{a_{01} + a_{10} - 2a_{11}}{2(a_{01} +a_{10} - a_{00} - a_{11})}. \end{equation} In particular, $p$ given in (\ref{mix}) is equal to 1/2 for $a_{00} = a_{11}$. \end{proof} \begin{corollary} Formula (\ref{propeq1}) in Proposition~\ref{prop1} also applies to game (\ref{sym2x2}) with $a_{00}<a_{11}$ by reversing the numbering of the players' strategies, i.e., by swapping the two rows and the two columns of the bimatrix. We then obtain a game that is isomorphic to (\ref{sym2x2}) and satisfies $a_{00}>a_{11}$. The value of $p$ resulting from (\ref{propeq1}) is then the probability of playing the second strategy in the initial game. \end{corollary} The SKE given by formula (\ref{propeq1}) implies the following payoff outcomes in a symmetric $2\times 2$ game: \begin{equation}\label{paygen} \max_{p\in [0,1]}u_{i}((p,1-p),(p,1-p)) = \begin{cases} a_{00} &\text{if}~~a_{01}+ a_{10} - 2a_{00} \leq 0, \\ \frac{(a_{01}+a_{10})^2 - 4a_{00}a_{11}}{4(a_{01} + a_{10} - a_{00}-a_{11})} &\text{if}~~a_{01}+ a_{10} - 2a_{00} > 0. \end{cases} \end{equation} In what follows, we apply (\ref{propeq1}) and (\ref{paygen}) to concrete examples of bimatrix $2\times 2$ games. \begin{example}\label{ex2} \begin{enumerate} \item[1.] The general structure of the Prisoner's Dilemma game can be expressed by (\ref{sym2x2}) satisfying \begin{equation}\label{pd} a_{10}>a_{00}>a_{11}>a_{01} ~~\text{and}~~ a_{00}>\frac{a_{01}+a_{10}}{2}. 
\end{equation} The second condition of (\ref{pd}) is aimed at preventing the players from alternating between their first and second strategies. The condition coincides with the first case of (\ref{propeq1}). This means that the simple Kantian equilibrium is $((p,1-p),(p,1-p)) = ((1,0),(1,0))$ with the payoff outcome $a_{00}$ for each player. \item[2.] A symmetric game isomorphic to the Battle of the Sexes game can be described by (\ref{sym2x2}) such that \begin{equation} a_{01} > a_{10} > a_{00} = a_{11}. \end{equation} By Proposition~\ref{prop1}, the strategy of the SKE is \begin{equation} p = \frac{a_{01}+a_{10}-2a_{00}}{2(a_{01}+a_{10}-2a_{00})} = \frac{1}{2} \end{equation} with the resulting payoff \begin{equation} \frac{(a_{01}+a_{10})^2-4a^2_{00}}{4(a_{01}+a_{10}-2a_{00})} = \frac{1}{4}(2a_{00} + a_{01} + a_{10}). \end{equation} \end{enumerate} \end{example} \section{Kantian equilibrium in EWL-type quantum games} One of the main motivations for studying quantum games is the search for reasonable quantum strategy profiles that would imply higher payoffs than those implied by classical strategies. In this section we examine simple Kantian equilibria in the EWL quantum game to see whether players can benefit from playing the quantum game. First, we need to make sure that the game generated by the EWL scheme is symmetric, so that the notion of simple Kantian equilibrium applies. \begin{lemma} The Eisert-Wilkens-Lewenstein approach to a symmetric $2\times 2$ game is a symmetric game. 
\end{lemma} \begin{proof} Our proof starts with the observation that \begin{equation}\label{cztery} \begin{aligned} &\langle \Psi_{00}|U_{1}\otimes U_{2}|\Psi \rangle = \langle \Psi_{00}|U_{2}\otimes U_{1}|\Psi \rangle = \cos(\alpha_{1}+\alpha_{2})\cos\frac{\theta_{1}}{2}\cos\frac{\theta_{2}}{2} + \sin(\beta_{1}+\beta_{2})\sin\frac{\theta_{1}}{2}\sin\frac{\theta_{2}}{2}, \\ &\langle \Psi_{01}|U_{1}\otimes U_{2}|\Psi \rangle = \langle \Psi_{10}|U_{2}\otimes U_{1}|\Psi \rangle = \cos(\alpha_{1}-\beta_{2})\cos\frac{\theta_{1}}{2}\sin\frac{\theta_{2}}{2} + \sin(\alpha_{2}-\beta_{1})\sin\frac{\theta_{1}}{2}\cos\frac{\theta_{2}}{2},\\ &\langle \Psi_{10}|U_{1}\otimes U_{2}|\Psi \rangle = \langle \Psi_{01}|U_{2}\otimes U_{1}|\Psi \rangle = \cos(\alpha_{2}-\beta_{1})\sin\frac{\theta_{1}}{2}\cos\frac{\theta_{2}}{2} + \sin(\alpha_{1}-\beta_{2})\cos\frac{\theta_{1}}{2}\sin\frac{\theta_{2}}{2},\\ &\langle \Psi_{11}|U_{1}\otimes U_{2}|\Psi \rangle = \langle \Psi_{11}|U_{2}\otimes U_{1}|\Psi \rangle = \cos(\beta_{1}+\beta_{2})\sin\frac{\theta_{1}}{2}\sin\frac{\theta_{2}}{2} - \sin(\alpha_{1}+\alpha_{2})\cos\frac{\theta_{1}}{2}\cos\frac{\theta_{2}}{2}. \end{aligned} \end{equation} Since the $2\times 2$ game is assumed to be symmetric, we have $a_{kl} = b_{lk}$ for $k,l \in \{0,1\}$. From (\ref{cztery}) it follows that \begin{align} v_{1}(U_{1}\otimes U_{2}) &= \sum^1_{k,l=0}a_{kl}|\langle \Psi_{kl}|U_{1}\otimes U_{2}|\Psi\rangle|^2 = \sum^1_{k,l=0}a_{lk}|\langle \Psi_{kl}|U_{2}\otimes U_{1}|\Psi\rangle|^2 \nonumber \\&= \sum^1_{k,l=0}b_{kl}|\langle\Psi_{kl}|U_{2}\otimes U_{1}|\Psi\rangle|^2 = v_{2}(U_{2}\otimes U_{1}). \end{align} This finishes the proof. \end{proof} Let us first consider a game given by bimatrix (\ref{sym10}). The payoff functions in the EWL scheme associated with (\ref{sym10}) are symmetric. 
In particular, \begin{align} &|\langle \Psi_{00}|U^{\otimes 2}|\Psi\rangle|^2 = \left(\cos2\alpha \cos^2\frac{\theta}{2} + \sin2\beta \sin^2\frac{\theta}{2}\right)^2, \\ &|\langle \Psi_{01}|U^{\otimes 2}|\Psi\rangle|^2 = |\langle \Psi_{10}|U^{\otimes 2}|\Psi\rangle|^2 = \left((\cos(\alpha - \beta)+ \sin(\alpha - \beta))\cos\frac{\theta}{2}\sin\frac{\theta}{2}\right)^2. \end{align} Therefore, for $i=1,2$, player $i$'s payoff function in the EWL game can be written as \begin{equation} v_{i}(U,U) = |\langle \Psi_{00}|U^{\otimes 2}|\Psi\rangle|^2 + d|\langle \Psi_{01}|U^{\otimes 2}|\Psi\rangle|^2. \end{equation} Obviously, the probabilities $|\langle \Psi_{ij}|U^{\otimes 2}|\Psi\rangle|^2$ sum to unity. Moreover, it is easy to check that \begin{equation} \max_{U \in \mathsf{SU}(2)}|\langle \Psi_{00}|U^{\otimes 2}|\Psi\rangle|^2 = 1, \quad \max_{U \in \mathsf{SU}(2)}|\langle \Psi_{01}|U^{\otimes 2}|\Psi\rangle|^2 = \frac{1}{2}. \end{equation} Note also that for $U' \in \argmax_{U\in \mathsf{SU}(2)}|\langle \Psi_{00}|U^{\otimes 2}|\Psi\rangle|^2$ we have $|\langle \Psi_{01}|U'^{\otimes 2}|\Psi\rangle|^2 = 0$. Similarly, for \begin{equation} U'' \in \argmax_{U\in \mathsf{SU}(2)}|\langle \Psi_{01}|U^{\otimes 2}|\Psi\rangle|^2 \end{equation} we see that $|\langle \Psi_{00}|U''^{\otimes 2}|\Psi\rangle|^2 = 0$. Hence, for $d>2$, \begin{equation} \max_{U\in \mathsf{SU}(2)}v_{i}(U,U) = \max_{U\in \mathsf{SU}(2)}d|\langle \Psi_{01}|U^{\otimes 2}|\Psi\rangle|^2 = \frac{d}{2}. \end{equation} Otherwise, $\max_{U\in \mathsf{SU}(2)}v_{i}(U,U) = 1$. The bimatrix (\ref{sym00}) can be handled in much the same way. The payoff function $v_{i}(U,U) = e|\langle \Psi_{01}|U^{\otimes 2}|\Psi\rangle|^2$ attains its maximum equal to $e/2$ if $e>0$, and $0$ otherwise.
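These maxima can be checked numerically. The sketch below evaluates the two closed-form expressions above on a coarse grid; the parameter ranges $\theta \in [0,\pi]$, $\alpha, \beta \in [0, 2\pi)$ and the grid step are our assumptions, chosen so that the grid contains the analytic maximizers.

```python
import math

def p00(theta, alpha, beta):
    # |<Psi_00|U^{(x)2}|Psi>|^2 for identical strategies U(theta, alpha, beta)
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return (math.cos(2 * alpha) * c**2 + math.sin(2 * beta) * s**2) ** 2

def p01(theta, alpha, beta):
    # |<Psi_01|U^{(x)2}|Psi>|^2 = |<Psi_10|U^{(x)2}|Psi>|^2
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return ((math.cos(alpha - beta) + math.sin(alpha - beta)) * c * s) ** 2

# Coarse grid: step pi/12, so theta = pi/2, alpha = pi/4 etc. are included
full = [i * math.pi / 12 for i in range(25)]   # alpha, beta in [0, 2*pi]
half = full[:13]                               # theta in [0, pi]
max00 = max(p00(t, a, b) for t in half for a in full for b in full)
max01 = max(p01(t, a, b) for t in half for a in full for b in full)
print(max00, max01)  # approximately 1 and 1/2
```

The first maximum is attained e.g. at $\theta = 0$, $\alpha = 0$, and the second at $\theta = \pi/2$, $\alpha - \beta = \pi/4$, in line with the displayed formulas.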
By using the inverse transformation $f^{-1}(y) = (a_{00}-a_{11})y + a_{11}$, $g^{-1}(y) = y+a_{11}$ and the substitutions (\ref{sub1}) and (\ref{sub2}), we can return to general payoffs to obtain $f^{-1}(d/2) = g^{-1}(e/2) = (a_{01}+a_{10})/2$ and $f^{-1}(1) = g^{-1}(0) = a_{00}$. Summarizing, we have thus proved the following proposition: \begin{proposition} Let $v_{i}(U,U)$ be player $i$'s payoff function in the Eisert-Wilkens-Lewenstein quantum approach to symmetric $2\times 2$ game (\ref{sym2x2}) in which $a_{00} \geq a_{11}$. Then \begin{equation}\label{propquantum} \max_{U\in \mathsf{SU}(2)}v_{i}(U,U) = \begin{cases} a_{00} &\text{if}~~a_{01}+a_{10} - 2a_{00} \leq 0, \\ \frac{a_{01}+a_{10}}{2} &\text{if}~~a_{01}+a_{10} - 2a_{00} > 0. \end{cases} \end{equation} \end{proposition} Comparing formulae (\ref{paygen}) and (\ref{propquantum}) for $a_{01}+a_{10} - 2a_{00} \leq 0$, we find that SKE implies the same payoff in both the classical and quantum game. For $a_{01}+a_{10} - 2a_{00} > 0$, SKE played in the quantum game results in a strictly better payoff than in the classical one. Indeed, \begin{equation} \frac{a_{01} + a_{10}}{2} - \frac{(a_{01}+a_{10})^2 - 4a_{00}a_{11}}{4(a_{01}+a_{10} - a_{00}-a_{11})} = \frac{(a_{01}+a_{10} - 2a_{00})(a_{01}+a_{10}-2a_{11})}{4(a_{01}+a_{10}-a_{00}-a_{11})} >0. \end{equation} Let us reconsider the games from Example~\ref{ex2}. \begin{example} \begin{enumerate} \item The Prisoner's Dilemma played with the use of the EWL scheme implies $a_{00}$--the result of SKE in the classical game. An example of a strategy of SKE is $U(\theta, \alpha, \beta) = U(0,\pi/2, 0)$. \item A symmetric game isomorphic to the Battle of the Sexes game satisfies the condition $a_{01}+a_{10}-2a_{00} > 0$. Therefore, the payoff predicted by SKE is $(a_{01}+a_{10})/2$.
Given particular payoffs in the Battle of the Sexes, e.g., $a_{01} = 5, a_{10} = 3, a_{00} = a_{11} = 1$, this means that SKE in the quantum game yields the payoff of 4, whereas playing SKE in the classical game results in the payoff of 2.5. \end{enumerate} \end{example} \section{Conclusions} Simple Kantian equilibria provide us with a prediction of how a symmetric game might be played. It is based on the assumption that players choose the same strategies when they each face the same strategic position in the game. It greatly simplifies the analysis required to find a reasonable strategy profile, as the problem comes down to maximizing a single function. Our work has shown that the notion of SKE is suitable for both classical and quantum games. By applying a positive affine transformation we simplified the structure of a symmetric $2\times 2$ game and derived a general formula for SKE in the classical game and the possible payoff outcomes of SKE in the quantum game. We found that the payoff from playing SKE in the quantum game is at least as good as in the classical game. Moreover, in many cases, the players benefit from playing the quantum game, obtaining a strictly higher payoff. Our work also aimed to show the advantages of non-Nashian solution concepts in quantum games. Pure Nash equilibria do not usually exist when the strategy sets are equal to $\mathsf{SU}(2)$ in the EWL model. If they can be found, the Nash equilibria are trivial in the sense that they determine payoffs equal to the Nash equilibrium payoffs in the classical game. In contrast, an SKE always exists in the quantum approach to a symmetric $2\times 2$ game, which allows us to compare reasonable outcomes when the game is played in a classical and quantum manner. Our studies also initiate further research on symmetric games of higher dimension. Our future research will aim to examine $n$-person symmetric games as well as other non-Nashian solution concepts in quantum games.
\section{Introduction and group testing model} The group testing problem was introduced by Dorfman \cite{dorfman} in the 1940s, and captures the idea of efficiently isolating a small subset $\sett{K}$ of defective items in a larger set containing $N$ items. The models used vary slightly, but the fundamental setting is that we perform a sequence of tests, each defined by a testing pool of items. The outcome of the test depends on the number of defective items in the pool. The most basic model, which we refer to as `standard noiseless group testing', is that the test outcome equals 1 if and only if the testing pool contains at least one defective item. Given $T$ tests, the group testing problem requires us to design test pools and estimation algorithms to maximise ${\mathbb P}({\rm suc})$, the success probability (the probability of recovering the defective set exactly). This paper focuses on converse results, giving upper bounds on the ${\mathbb P}({\rm suc})$ that can be achieved by any algorithm given $T$ tests. We generalize the following strong result proved by Baldassini, Johnson and Aldridge \cite[Theorem 3.1]{johnsonc10}: \begin{theorem} \label{thm:bja} Suppose the defective set $\sett{K}$ is chosen uniformly from the $\binom{N}{K}$ possible sets of given size $K$. For adaptive or non-adaptive standard noiseless group testing: \begin{equation} \label{eq:johnson} {\mathbb P}({\rm suc}) \leq \frac{2^T}{\binom{N}{K}}. \end{equation} \end{theorem} We extend this result to a variety of settings. We first discuss four dichotomies in the modelling of the group testing problem: (i) combinatorial vs probabilistic, (ii) binary vs non-binary, (iii) noisy vs noiseless, (iv) adaptive vs non-adaptive. There are a number of further variations beyond these, as described in an ever-increasing body of literature. \begin{enumerate} \item { \bf [Combinatorial vs Probabilistic]} The first categorisation concerns the way the defective items are chosen.
Combinatorial group testing (see for example \cite{johnson33,atia,johnsonc10,chan}) is the model from Theorem \ref{thm:bja}: we suppose there is a fixed number $K$ of defective items, and the defective set $\sett{K}$ is chosen uniformly from the $\binom{N}{K}$ possible sets of this size. In probabilistic group testing (see for example \cite{johnsonc12,li5}) the $i$th item is defective independently with probability $p_i$ (with $p_i$ not necessarily identical). In fact, we put both these models in a common setting: \begin{definition} We write $\vc{U} \in \{ 0, 1 \}^N$ for the (random) defectivity vector, where component $U_i$ is the indicator of the event that the $i$th item is defective. For any vector $\vc{u} \in \{0,1 \}^N$ write $ \prv{U}{u} = {\mathbb P}( \vc{U} = \vc{u})$, and define entropy \begin{equation} \label{eq:entropydef} H(\vc{U}) = -\sum_{\vc{u} \in \{0,1 \}^N} \prv{U}{u} \log_2 \prv{U}{u}.\end{equation} \end{definition} \begin{example} For the two models as described above: \begin{enumerate} \item For combinatorial group testing, since $\vc{U}$ is uniform over $\binom{N}{K}$ outcomes, the entropy $H(\vc{U}) = \log_2 \binom{N}{K}$. \item For probabilistic group testing, the entropy $H(\vc{U}) = \sum_{i=1}^N h(p_i)$, where $h(t) := - t \log_2 t - (1-t) \log_2 (1-t)$ is the binary entropy function. If $p_i \equiv p$ then $H(\vc{U}) = N h(p)$. \end{enumerate} \end{example} However, Corollary \ref{cor:inforate} below shows that results resembling Theorem \ref{thm:bja} can be proved for general sources satisfying the Shannon-McMillan-Breiman theorem. This includes settings where the defectivity vector is generated by a stationary ergodic Markov chain, which is a natural model of a setting where nearest neighbours are susceptible to infection. \item {\bf [Binary vs Non-binary]} The second variation comes in terms of the set of outcomes $\sett{Y}$ that each test can produce. 
We refer to $\sett{Y}$ as the alphabet, since in this paper (as in \cite{johnson33,atia} and other papers) we consider an analogy between group testing and the channel coding problem. It is most standard to consider the binary case, where $\sett{Y} = \{ 0, 1 \}$, though other models are possible (see for example \cite[Section 6.3]{aldridge} for a detailed review). For brevity we will only consider the binary case in this paper (though our techniques will be valid in a more general setting). We write $\vc{Y} \in \sett{Y}^T = \{ 0,1 \}^T$ for the outcome of the group testing process. \item { \bf[Noisy vs Noiseless]} The third difference concerns the way in which the outcome of each test is formed. To fix notation, we perform a sequence of $T$ tests defined by test pools $\sett{X}_1, \ldots, \sett{X}_T$, where each $\sett{X}_t \subseteq \{ 1, 2, \ldots, N \}$. We represent this by a binary test matrix $\sett{X} = (x_{it} : i = 1, \ldots, N \mbox{\;and\;} t = 1,\ldots, T)$, where $x_{it} = 1$ if and only if item $i$ is included in the $t$th pool (a concatenation of column vectors given by the indicator functions of the $T$ test pools). Since the test design may be random, we write $\mat{X}$ for a random variable giving a test matrix of this form. For a test matrix $\mat{X}$ and defectivity vector $\vc{u}$, a key object of interest is the vector $\vc{K} = \vc{u}^T \mat{X}$. Here, the $t$th component of $\vc{K}$ is $K_t = \sum_{i=1}^N x_{it} u_i$, the total number of defective items appearing in the $t$th test. Observe that $K_t$ is a deterministic function of $\vc{u}$ and $\mat{X}_t$ (and does not depend on any other variables). It is useful to define $\vc{X}$ via $X_t = {\mathbb I}(K_t \geq 1)$.
We assume that the group testing model is static, memoryless and satisfies the `Only Defects Matter' property introduced by Aldridge \cite{aldridge2, aldridge}: \begin{definition}[Only Defects Matter] \label{def:odm} We assume that the $t$th test outcome $Y_t$ is a random function of $K_t$ (so $\vc{Y}$ is conditionally independent of $\vc{U}$ given $\vc{K}$, and $Y_t$ is conditionally independent of $(K_s)_{s \neq t}$ given $K_t$). Further, for some fixed transition matrix $P$, we assume \begin{equation} \label{eq:odm} {\mathbb P}( Y_t = y| K_t = k) = P(y | k), \mbox{\;\;\; for all $y,k,t$}. \end{equation} \end{definition} Note that Definition \ref{def:odm} includes the noiseless standard group testing case, where we simply take $\vc{Y} = \vc{X}$. To understand Definition \ref{def:odm}, we can consider the case where $\vc{X}$ is fed (symbol by symbol) through a memoryless noisy channel, independent of the defectivity vector $\vc{U}$, and any randomness in the group testing design. In the notation of \eqref{eq:odm}, we assume that $P(1 |k ) \equiv P(1|1)$ for all $k \geq 1$; in the noiseless case we take $P(1 | k) \equiv 1$ for $k \geq 1$ and $P(0 | 0) = 1$. However, Definition \ref{def:odm} allows a wider range of noise models, including the dilution channel of Atia and Saligrama \cite{atia}, where we take $P(0|k) = (1-u)^k$ for some $u$. For a fixed test matrix $\mat{X} = \sett{X}$, as in \cite{johnsonc10}, in the noiseless case the testing procedure naturally defines a mapping $\vd{\theta}( \cdot, \sett{X}): \{0,1 \}^N \rightarrow \{ 0, 1 \}^T$. That is, given defectivity vector $\vc{u} \in \{ 0,1 \}^N$, we write the vector function $\vd{\theta}$ with components given by the scalar function $\theta$, in the form \begin{equation} \label{eq:vecfn} \vd{\theta}(\vc{u}, \sett{X}) = \left( \theta(\vc{u}, \sett{X}_1), \theta(\vc{u}, \sett{X}_2), \ldots, \theta(\vc{u}, \sett{X}_T) \right), \end{equation} where $\theta( \vc{u}, \sett{X}_t) = {\mathbb I}(K_t \geq 1) = X_t$.
\item {\bf [Adaptive vs Non-adaptive]} The final distinction is whether we design the test matrix using an adaptive or a non-adaptive strategy. In the non-adaptive case the entire test matrix $\mat{X} = \sett{X}$ needs to be chosen in advance of the tests. In contrast, in the adaptive case, the $(t+1)$st test pool $\sett{X}_{t+1}$ is chosen based on a knowledge of previous test pools $\sett{X}_{1,t} := \left\{ \sett{X}_1, \ldots, \sett{X}_{t} \right\}$ and test outcomes $ \vc{Y}_{1,t} := \left\{ Y_1, \ldots, Y_{t} \right\}$. We can think (see \cite{aldridge}) that adaptive group testing corresponds to channel coding with feedback, and non-adaptive group testing to coding with no feedback. Clearly (see \cite{johnsonc10}), we can do no worse in the adaptive setting than for non-adaptive group testing, but it remains an open and interesting question to determine precisely by how much adaptivity can improve performance. We argue that a key tool in understanding adaptive group testing is directed information theory. This was first introduced by Marko \cite{marko} in the 1970s, with interest revived by the work of Massey \cite{massey} in the 1990s, and developed further by authors such as Kramer \cite{kramer4} and Schreiber \cite{schreiber}. In particular, as described by Massey \cite{massey}, many authors make an incorrect probabilistic formulation of such simple objects as discrete memoryless channels with feedback. A correct formulation requires the use of the causal conditional probability distribution as studied for example by Kramer \cite{kramer4}. We use the notation of the review paper \cite[Equation (7)]{amblard}, that for sequences $\vc{x} = (x_1, \ldots, x_T)$ and $\vc{y} = (y_1, \ldots, y_T)$, and subsequences $\vc{y}_{1,t-1} = (y_1, \ldots, y_{t-1})$, \begin{equation} \label{eq:causalcond} P_{\vc{X} \| \vc{Y}}\left( \vc{x} \| \vc{y} \right) := \prod_{t=1}^T \prv{X_t|X_{1,t-1}, Y_{1,t}}{x_t | x_{1,t-1}, y_{1,t}}. 
\end{equation} Note that for any fixed $\vc{y}$, the fact that it is formed as a product of probability distributions means that $\sum_{\vc{x}} P_{\vc{X} \| \vc{Y}}\left( \vc{x} \| \vc{y} \right) = 1$. Using this probability distribution implies the form of the directed information of Marko \cite{marko} (see also the later definition of transfer entropy by Schreiber \cite{schreiber}). In Lemma \ref{lem:jointprob} below, assuming the Only Defects Matter property Definition \ref{def:odm}, we decompose the joint probability of $(\vc{U}, \sett{X}, \vc{Y})$ in the general adaptive setting, using the term $P_{\mat{X} \| \vc{Y}^-} \left( \sett{X} \| \vc{y}^- \right)$, which is defined in \eqref{eq:jointprob2}. Here we use the causal conditional probability notation \eqref{eq:causalcond} above, with superscript $\vc{y}^{-}$ referring to the fact that there is a lag in the index of $\vc{y}$ in \eqref{eq:jointprob2} compared with \eqref{eq:causalcond} (we choose the set $\sett{X}_i$ based on a knowledge of the previous sets $\sett{X}_{1,t-1}$ and test outcomes $\vc{y}_{1,t-1}$). The decomposition in Lemma \ref{lem:jointprob} shows that the $t$th output symbol $Y_t$ is conditionally independent of $\vc{U}$, given values $K_{1,t}$ and previous outputs $\vc{Y}_{1,t-1}$. This is precisely the definition of a causal system between $\vc{K}$ and $\vc{Y}$ given by Massey in \cite[Equation (8)]{massey}, under which condition the capacity of a discrete memoryless channel is not increased by feedback. \end{enumerate} Regardless of these variations, we always make an estimate $\vc{Z} = \widehat{\vc{U}}$, based only on a knowledge of outputs $\vc{Y} = \vc{y}$ and test matrix $\mat{X} = \sett{X}$, using a probabilistic estimator (decoder) that gives $\vc{Z} = \vc{z}$ with probability \begin{equation} \label{eq:decoder} \prv{Z|Y,\mat{X}}{z|y,\sett{X}}. 
\end{equation} The main results of the paper are Theorem \ref{thm:ppvmain}, which gives an upper bound on ${\mathbb P}({\rm suc})$ in the non-adaptive case, and Proposition \ref{prop:basic}, which gives the corresponding result in the adaptive case. The strength of these results is illustrated by Examples \ref{ex:bsc} and \ref{ex:adaptivenoisy}, where we calculate bounds on the success probability in the case where $\vc{X}$ forms the input and $\vc{Y}$ the output of a binary symmetric channel with error probability $p$. We illustrate these bounds in Figure \ref{fig:bscadaptive}, where the upper bounds on ${\mathbb P}({\rm suc})$ in both adaptive and non-adaptive cases are plotted in the finite blocklength case of combinatorial group testing with $N=500$, $K = 10$, $p=0.11$. The structure of the paper is as follows. In Section \ref{sec:existing} we review existing results concerning group testing converses. In Section \ref{sec:hyptest} we use an argument based on the paper by Polyanskiy, Poor and Verd\'{u} \cite{polyanskiy2} to prove Theorem \ref{thm:ppvmain}, giving a strong converse for non-adaptive group testing, and discuss the bounds for the binary symmetric channel case. In Section \ref{sec:adaptive} we discuss the adaptive case, by extending arguments first given for channels with feedback by Kemperman \cite{kemperman}, Kesten \cite{kesten} and Wolfowitz \cite{wolfowitz3} (see also Gallager \cite{gallager2}). We prove a bound (Proposition \ref{prop:basic}) which specializes in the noiseless case to give a result (Theorem \ref{thm:noiseless}) which generalizes Theorem \ref{thm:bja}. We consider examples of this noiseless result in the probabilistic case in Section \ref{sec:noiselessexamples}. Finally, in Section \ref{sec:adaptivenoisy} we apply Proposition \ref{prop:basic} in the noisy adaptive case. The proofs of the main theorems are given in the Appendices.
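The basic model quantities defined above are straightforward to compute and simulate. A minimal sketch, in which the values of $N$, $K$, $T$, the matched probability $p = K/N$ and the Bernoulli test design with per-item inclusion probability $\nu$ are all illustrative assumptions of ours:

```python
import math
import random

random.seed(0)
N, K, T = 500, 10, 100

# Entropy of the source U: combinatorial vs probabilistic (matched p = K/N)
H_comb = math.log2(math.comb(N, K))
p = K / N
h = lambda t: -t * math.log2(t) - (1 - t) * math.log2(1 - t)  # binary entropy
H_prob = N * h(p)
print(H_comb, H_prob)  # the combinatorial source is the more constrained one

# A combinatorial defectivity vector u and a Bernoulli(nu) test matrix X
nu = math.log(2) / K  # illustrative per-item inclusion probability
defectives = set(random.sample(range(N), K))
u = [1 if i in defectives else 0 for i in range(N)]
X = [[1 if random.random() < nu else 0 for _ in range(T)] for _ in range(N)]

# K_t = sum_i x_{it} u_i, and the noiseless outcome Y_t = X_t = I(K_t >= 1)
Kvec = [sum(X[i][t] * u[i] for i in range(N)) for t in range(T)]
Y = [1 if k >= 1 else 0 for k in Kvec]
```

Here $H(\vc{U}) = \log_2 \binom{N}{K} \approx 67.7$ bits, below $N h(K/N) \approx 70.7$ bits, consistent with $\binom{N}{K} \leq 2^{N h(K/N)}$.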
While this paper only considers group testing, we remark that this problem lies in the area of sparse inference, which includes problems such as compressed sensing and matrix completion, and it is likely that results proved here will extend to more general settings. The paper \cite{aksoylar} gives a review of links between group testing and other sparse inference problems. Group testing itself has a number of applications, including cognitive radios \cite{aldridge2,atia2,johnsonc12}, network tomography \cite{cheraghchi} and efficient gene sequencing \cite{erlich,shental}. The bounds proved here should provide fundamental performance limits in these contexts. \section{Existing converse results} \label{sec:existing} It is clear from information-theoretic considerations that finding all the defectives in the noiseless case will require at least $T^* = H(\vc{U})$ (the ``magic number'') tests. In the language of channel coding, the focus of this paper is on converse results; that is, given $O(T^*)$ tests, we give strong upper bounds on the success probability ${\mathbb P}({\rm suc})$ of any possible algorithm. There has been considerable work on the achievability part of the problem, in developing group testing algorithms and proving performance guarantees. Early work on group testing considered algorithms which could be proved to be order optimal (see for example the analysis of \cite{balding,du,dyachkov3}), often using combinatorial properties such as separability or disjunctness. More recently there has been interest (see for example \cite{johnson33,aldridge3,atia,chan,li5,malyutov,malyutov4,scarlett,wadayama,wadayama2}) in finding the best possible constant, that is to find algorithms which succeed with high probability using $T = c T^* = c H(\vc{U})$ tests, for $c$ as small as possible.
In this context, the paper \cite{johnsonc10} defined the capacity of combinatorial group testing problems, a definition extended to both combinatorial and probabilistic group testing in \cite{johnsonc12}. We state this definition for both weak and strong capacity in the sense of Wolfowitz: \begin{definition} \label{def:capacity} Consider a sequence of group testing problems where the $i$th problem has defectivity vector $\vc{U}^{(i)}$, and consider algorithms which are given $T(i)$ tests. We think of $H(\vc{U}^{(i)})/T(i)$ (the number of bits of information learned per test) as the rate of the algorithm and refer to a constant $C$ as the weak group testing capacity if for any $\epsilon > 0$: \begin{enumerate} \item any sequence of algorithms with \begin{equation} \label{eq:lower} \liminf_{i \rightarrow \infty} \frac{ H(\vc{U}^{(i)}) }{T(i)} \geq C+ \epsilon, \end{equation} has success probability ${\mathbb P}({\rm suc})$ bounded away from 1, \item and there exists a sequence of algorithms with \begin{equation} \label{eq:upper} \liminf_{i \rightarrow \infty} \frac{H(\vc{U}^{(i)}) }{T(i)} \geq C - \epsilon \end{equation} with success probability ${\mathbb P}({\rm suc}) \rightarrow 1$. \end{enumerate} We call $C$ the strong capacity if ${\mathbb P}({\rm suc}) \rightarrow 0$ for any sequence of algorithms satisfying \eqref{eq:lower}. \end{definition} For example, in \cite{johnsonc10} we prove that noiseless adaptive combinatorial group testing has strong capacity 1. This result is proved by combining Hwang's Generalized Binary Splitting Algorithm \cite{hwang} (which is essentially optimal -- see also \cite{johnsonc10,du} for a discussion of this) with the converse result Theorem \ref{thm:bja}. However, even in the noiseless non-adaptive case the optimal algorithm remains unclear, although some results are known in some regimes, under assumptions about the distribution of $\mat{X}$ (see for example \cite{aldridge3,johnson33,scarlett,wadayama}). 
In general, capacity results are asymptotic in character, whereas we will consider the finite blocklength regime (in the spirit of \cite{polyanskiy2}) and prove bounds on ${\mathbb P}({\rm suc})$ for any size of problem. We briefly review existing converse results. First, we mention that results (often referred to as folklore) can be proved using arguments based on Fano's inequality. \begin{lemma} Using $T$ tests: \begin{enumerate} \item For combinatorial group testing Chan et al. \cite[Theorem 1]{chan} give \begin{equation} \label{eq:chan} {\mathbb P}({\rm suc}) \leq \frac{T}{\log_2 \binom{N}{K}}. \end{equation} \item For probabilistic group testing Li et al. \cite[Theorem 1]{li5} give \begin{equation} \label{eq:li} {\mathbb P}({\rm suc}) \leq \frac{T}{N h(p)}. \end{equation} \end{enumerate} \end{lemma} In order to understand the relationship between \eqref{eq:chan} and Theorem \ref{thm:bja}, fix $\delta > 0$ and use $T = T^*(1-\delta)$ tests, in a regime where $\log_2 \binom{N}{K} \rightarrow \infty$ and hence $T^* \rightarrow \infty$. Chan et al.'s result \eqref{eq:chan} gives that ${\mathbb P}({\rm suc}) \leq (1-\delta)$, whereas \eqref{eq:johnson} implies ${\mathbb P}({\rm suc}) \leq 2^{-\delta T^*}$. In the language of Definition \ref{def:capacity}, Chan et al. \cite{chan} give a weak converse whereas Baldassini, Johnson and Aldridge \cite{johnsonc10} give a strong converse. In fact, \eqref{eq:johnson} shows that the success probability converges to zero exponentially fast. To understand why Chan et al.'s result \eqref{eq:chan} is not as strong as Theorem \ref{thm:bja}, we examine the proof in \cite{chan}.
At the heart of it lies an argument based on Fano's inequality, bounding the entropy $H( \vc{U} | \vc{Z}) $ using the decomposition \begin{eqnarray} H( \vc{U} | \vc{Z} = \vc{z}) = H( E | \vc{Z} = \vc{z}) + {\mathbb P}( E = 1 | \vc{Z} = \vc{z}) H( \vc{U} | \vc{Z} = \vc{z}, E = 1), \end{eqnarray} where $E$ is the indicator of the error event $\vc{U} \neq \vc{Z}$. In \cite{chan} this last term is bounded by $\log_2 \binom{N}{K}$, since a priori $\vc{U}$ could be any defective set other than $\vc{z}$. However, in practice, this is a significant overestimate. For example, in the noiseless case there is a relatively small collection of defective sets that a particular defective set $\vc{U}$ can mistakenly be estimated as (referred to as $A(\cdot)$ later in this paper). In this case, using inferences such as those in the {\texttt{DD}} algorithm of \cite{johnson33}, any item which appears in a test pool $\sett{X}_t$ giving result $Y_t = 0$ cannot be defective. Essentially, Theorem \ref{thm:bja} exploits this type of fact. However, results corresponding to Theorem \ref{thm:bja} were not previously known even for noiseless probabilistic group testing, let alone in more general settings, including noisy channels and other models for defectivity. Tan and Atia \cite[Theorem 2]{tan} do prove a strong converse for combinatorial group testing; however, they do not achieve exponential decay. Since the result of the test only depends on whether the items in the defective set $\sett{K}$ are present, we can restrict our attention to the submatrix $\mat{X}_{\sett{L}}$ indexed by subsets $\sett{L} \subseteq \sett{K}$. \begin{theorem}[\cite{tan}, Theorem 2] Write $\zeta_T := T^{-1/4} {\mathbb P}({\rm suc})^{-1/2}$ and $\eta_T := T^{-1} + h(T^{-1/4})$.
If the components of $\mat{X}$ are independent and identically distributed, then for each $\sett{L}$, $T$ (the number of tests required to achieve the given probability of success) satisfies: $$ T( I( \mat{X}_{\sett{K} \setminus \sett{L}}; \mat{X}_{\sett{L}}, \vc{Y}) + \eta_T) \geq (1 - \zeta_T) \log_2 \binom{ N - |\sett{L}|}{K - |\sett{L}|}.$$ \end{theorem} Rearranging, and writing $I = I( \mat{X}_{\sett{K} \setminus \sett{L}}; \mat{X}_{\sett{L}}, \vc{Y})$, we obtain $$ {\mathbb P}({\rm suc}) \leq \frac{1}{T^{1/2} \left( 1 - T (I + \eta_T)/ \log_2 \binom{ N - |\sett{L}|}{K - |\sett{L}|} \right)^2} \simeq \frac{1}{T^{1/2} \delta^2},$$ taking $T^* = \log_2 \binom{ N - |\sett{L}|}{K - |\sett{L}|}/I$ for $T = (1-\delta) T^*$. This gives a strong converse, though not the exponential decay achieved in \eqref{eq:johnson} above. However, Tan and Atia's results \cite{tan} are valid in a variety of settings and noise models. Pedagogically, it is worth noting a parallel between these various approaches and treatments of the channel coding problem in the literature. That is, \eqref{eq:chan}, due to Chan et al. \cite{chan}, is proved using an argument based on Fano's inequality, paralleling the proof of Shannon's noisy coding theorem exemplified for example in \cite[Section 8.9]{cover}. The argument of Tan and Atia \cite{tan} is based on Marton's blowing up lemma, mirroring the treatment of Shannon's theorem in the book of Csisz\'{a}r and K\"{o}rner \cite[Section 6]{csiszar6}. Our work in the non-adaptive case is based on the more recent work of Polyanskiy, Poor and Verd\'{u} \cite{polyanskiy2}, which has been adapted to the problem of data compression in \cite{kostina}. The paper \cite{polyanskiy3}, written by the same authors as \cite{polyanskiy2}, extends their approach to channels with feedback, which corresponds to the adaptive case of group testing.
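The quantitative gap between these converses can be seen by evaluating them at $T = (1-\delta)T^*$ in the noiseless combinatorial case. A sketch with illustrative values of $N$, $K$ and $\delta$; note that the $1/(T^{1/2}\delta^2)$ term reproduces only the asymptotic shape of the Tan--Atia bound, not its exact constants:

```python
import math

N, K, delta = 500, 10, 0.2
T_star = math.log2(math.comb(N, K))   # "magic number" of tests, approx 67.7 bits
T = (1 - delta) * T_star

fano_bound = T / T_star               # weak converse: equals 1 - delta
strong_bound = 2.0 ** (T - T_star)    # 2^T / binom(N, K) = 2^{-delta T*}
tan_atia_shape = 1.0 / (math.sqrt(T) * delta ** 2)  # polynomial decay shape only

print(fano_bound, strong_bound, tan_atia_shape)
```

At this blocklength the polynomial shape exceeds 1 and so is vacuous, only becoming informative for much larger $T$, whereas the exponential bound is already of order $10^{-4}$; this illustrates the distinction between the weak, polynomial-strong and exponential-strong converses.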
We prefer to give bounds based on older works of Gallager \cite{gallager2}, Kemperman \cite{kemperman}, Kesten \cite{kesten} and Wolfowitz \cite{wolfowitz3}. Note that in the non-adaptive case, as described in \cite[Section III.G]{polyanskiy2}, the results of Polyanskiy et al. are stronger than results of the Wolfowitz \cite{wolfowitz3} and Gallager \cite{gallager2} type. However, in the adaptive case, these earlier results appear easier to modify in the group testing context. \section{Hypothesis testing and non-adaptive group testing} \label{sec:hyptest} We first state a result, Theorem \ref{thm:ppvmain}, which implies strong converse results that generalize Theorem \ref{thm:bja} in the non-adaptive case. The key observation comes from Polyanskiy, Poor and Verd\'{u} \cite{polyanskiy2}, who found a relationship between channel coding and hypothesis testing. Since the Neyman-Pearson lemma gives us the optimal hypothesis test, the paper \cite{polyanskiy2} deduces strong bounds on coding error probabilities. \begin{definition} Write $\beta_{1-\epsilon}( P, Q)$ for the smallest possible type II error for hypothesis tests (with type I error probability $\epsilon$) deciding between $P$ and $Q$. \end{definition} Our contribution in this section is to use this same analogy for the group testing problem, given a process generating randomly chosen defective sets $\vc{U}$ (a source). To some extent this is simply a question of adapting the notation of \cite{polyanskiy2}. However, one generalization which is important for us is that we do not require that $\vc{U}$ is uniform (allowing us to consider probabilistic as well as combinatorial group testing). This was not considered in \cite{polyanskiy2}, largely because for channel coding it seems less natural to consider non-uniform $\vc{U}$. Since we consider non-adaptive group testing, we fix $\mat{X} = \sett{X}$ in advance.
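For finite alphabets, $\beta_{1-\epsilon}(P,Q)$ can be computed directly from the Neyman--Pearson lemma: order outcomes by the likelihood ratio $P/Q$, fill the acceptance region greedily until it carries $P$-mass $1-\epsilon$, and randomize on the boundary outcome. A sketch on a toy alphabet (the distributions are ours, purely for illustration):

```python
def beta(eps, P, Q):
    """Smallest type II error Q(accept) over tests with type I error at most eps."""
    # Accept outcomes in decreasing order of likelihood ratio P(x)/Q(x),
    # until the accepted P-mass reaches 1 - eps; randomize on the last outcome.
    order = sorted(P, key=lambda x: P[x] / Q[x], reverse=True)
    target = 1 - eps       # required P-mass of the acceptance region
    acc_p = 0.0            # P-mass accepted so far
    b = 0.0                # accumulated type II error Q(accept)
    for x in order:
        if acc_p + P[x] < target:
            acc_p += P[x]
            b += Q[x]
        else:
            lam = (target - acc_p) / P[x]   # randomization weight on the boundary
            return b + lam * Q[x]
    return b

P = {'a': 0.5, 'b': 0.3, 'c': 0.2}
Q = {'a': 0.2, 'b': 0.3, 'c': 0.5}
print(beta(0.1, P, Q))  # 0.75 for these toy distributions
```

As expected, $\beta_{1-\epsilon}$ decreases as the permitted type I error $\epsilon$ grows, and $\beta_{1}(P,P) = 1$.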
We write $\prv{KY}{k,y}$ for the joint probability distribution of $\vc{K}$ and $\vc{Y}$ and consider an algorithm which estimates (decodes) the defective set $\vc{Z} = \widehat{\vc{U}}$, using only outputs $\vc{Y}$ and test matrix $\mat{X} = \sett{X}$. Since $\mat{X} = \sett{X}$ is fixed, we simplify the notation of \eqref{eq:decoder} above and write $\prv{Z|Y}{z|y}$ for the probability that the estimator gives $\vc{Z} = \vc{z}$ when $\vc{Y} = \vc{y}$. We prove the following key result: \begin{theorem} \label{thm:ppvmain} Suppose that the group testing model satisfies the Only Defects Matter property, Definition \ref{def:odm}. For any non-adaptive choice of test design, any estimation rule $\prV{Z|Y}$ and probability mass function $\qrV{Y}$: \begin{equation} \label{eq:bounds} \beta_{1-\epsilon}(\prV{KY}, \prV{K} \times \qrV{Y}) \leq \sum_{\vc{z} \in \{0,1 \}^N} \prv{U}{z} \qst{z}, \end{equation} where $\qst{z} = \sum_{\vc{y}} \qrv{Y}{y} \prv{Z|Y}{z|y}$ is the probability that $ \vc{Y} \sim \qrV{Y}$ is decoded to $\vc{z}$ and $1-\epsilon = {\mathbb P}({\rm suc})$. \end{theorem} \begin{proof} See Appendix \ref{sec:proofppvmain}. \end{proof} \begin{example} In the noiseless non-adaptive case consider any defective set distribution $\prV{U}$. Taking $\qrV{Y} \equiv 1/2^T$, the optimal rule is to accept $\prV{KY}$ with probability $1-\epsilon$ if $\vc{x} = \vc{y}$, and to reject $\prV{KY}$ otherwise (this corresponds to taking $\lambda = 1-\epsilon$ and $d^* = 0$ in Example \ref{ex:bsc} below). We obtain by Theorem \ref{thm:ppvmain} that \begin{equation} \label{eq:optimalch} (1-\epsilon)/2^T = \beta_{1-\epsilon}( \prV{KY}, \prV{K} \times \qrV{Y}) \leq \sum_{\vc{z} \in \{0,1 \}^N} \prv{U}{z} \qst{z}.
\end{equation} \begin{enumerate} \item {\bf [Uniform case]} In particular, if $\prv{U}{z} = {\mathbb I}(\vc{z} \in \sett{M})/M$ for some set $\sett{M}$ of size $M = |\sett{M}|$, the RHS of \eqref{eq:optimalch} is at most $1/M$, which means that (exactly as in \cite[Theorem 27]{polyanskiy2}): \begin{equation} \label{eq:uniform} \beta_{1-\epsilon}( \prV{KY}, \prV{K} \times \qrV{Y}) \leq 1/M. \end{equation} We deduce from Theorem \ref{thm:ppvmain} that \begin{equation} \label{eq:noiselessiid} {\mathbb P}({\rm suc}) \leq 2^T/M, \end{equation} confirming Theorem \ref{thm:bja} (under the additional assumption of non-adaptivity; we discuss how to remove this assumption in Theorem \ref{thm:noiseless} below). \item \label{ex:noiseless} {\bf [General case]} In general, in the noiseless non-adaptive case, we write $\curP{m}$ for the sum of the largest $m$ values of $\prv{U}{z}$. For each defective set $\vc{U}$, we write $\vc{X} = \vc{Y} = \vd{\theta}(\vc{U})$. For a particular $\vc{y}$, we write $A(\vc{y}) = \vd{\theta}^{-1}(\vc{y}) = \{ \vc{z}: \vd{\theta}(\vc{z}) = \vc{y} \}$ for the defective sets that get mapped to $\vc{y}$ by the testing procedure. We write $\pmax{y} = \max_{\vc{z} \in A(\vc{y})} \prv{U}{z} $ for the maximum probability in $A(\vc{y})$ and $\sett{U}^*(\vc{y}) = \{\vc{u} \in A(\vc{y}): \prv{U}{u} = \pmax{y} \}$ for the collection of defective sets achieving this probability. For each $\vc{x}$, pick a string $\vc{u}^*(\vc{x}) \in \sett{U}^*(\vc{x})$ arbitrarily, and note that there are up to $2^T$ strings $\vc{u}^*(\vc{x})$, which are distinct, since they each map to a different value under $\theta$. These various definitions are illustrated in Figure \ref{fig:setfigure}.
In general using \eqref{eq:optimalch} we deduce that \begin{eqnarray} {\mathbb P}( {\rm suc} ) = (1-\epsilon) & \leq & 2^T \sum_{\vc{z} \in \{0,1 \}^N} \prv{U}{z} \qst{z} \nonumber \\ & \leq & \sum_{\vc{y} \in \{ 0, 1 \}^T } \sum_{\vc{z} \in \{ 0, 1 \}^N} \prv{U}{z} \prv{Z|Y}{z|y} \label{eq:matches} \\ & \leq & \sum_{\vc{y} \in \{ 0, 1 \}^T } \sum_{\vc{z} \in A(\vc{y})} \pmax{y} \prv{Z|Y}{z|y} \label{eq:countinga} \\ & \leq & \sum_{\vc{y} \in \{ 0, 1 \}^T } \pmax{y} \nonumber \\ & = & \sum_{\vc{y} \in \{ 0,1 \}^T} \prV{U} \left( \vc{u}^*(\vc{y}) \right) \nonumber \\ & \leq & \curP{2^T}. \label{eq:counting} \end{eqnarray} Here \eqref{eq:countinga} follows since, for given $\vc{y}$, the success probability is maximised by a decoder $\prv{Z|Y}{z|y}$ supported on the set $A(\vc{y})$ (in the noiseless case a successful guess must lie in $A(\vc{y})$), and for each $\vc{z} \in A(\vc{y})$ we know that $\prv{U}{z} \leq \pmax{y}$. \eqref{eq:counting} follows since there are at most $2^T$ separate messages $\vc{X} = \vc{x}$, so at most $2^T$ distinct values $\vc{u}^*(\vc{x})$. This result generalizes \eqref{eq:noiselessiid}. \end{enumerate} \end{example} Note that (as expected) the success probability is maximised by the maximum likelihood decoder $\prV{Z|Y}$ which places all its support on members of $\sett{U}^*(\vc{y})$. In Theorem \ref{thm:noiseless} below we extend the result \eqref{eq:counting} to hold even in the adaptive case, generalising Theorem \ref{thm:bja}. Theorem \ref{thm:ppvmain} gives a converse for the non-adaptive binary symmetric channel case: \begin{example} \label{ex:bsc} Suppose the output of standard combinatorial noiseless non-adaptive group testing $\vc{X}$ is fed through a memoryless binary symmetric channel with error probability $p < 1/2$ to produce $\vc{Y}$. We write $x_i = {\mathbb I} (k_i \geq 1)$, and observe that $\prv{Y|K}{y|k} = (1-p)^{T-d(\vc{x},\vc{y})} p^{d(\vc{x},\vc{y})}$, where $d$ represents the Hamming distance. 
Hence if $\qrV{Y} \equiv 1/2^T$, the likelihood ratio is $$ \frac{ \prv{KY}{k,y}}{\prv{K}{k} \, \qrv{Y}{y}} = \frac{ \prv{Y|K}{y|k}}{ \qrv{Y}{y}} = \frac{ p^{d( \vc{x}, \vc{y})} (1-p)^{T - d( \vc{x}, \vc{y})}} {1 /2^{T}} \propto \left( \frac{p}{1-p} \right)^{d( \vc{x}, \vc{y})}.$$ By the Neyman-Pearson lemma, the optimal rule is to accept $\prV{KY}$ if $d( \vc{x}, \vc{y}) < d^*$, to accept $\prV{KY}$ with probability $\lambda$ if $d( \vc{x}, \vc{y}) = d^*$ and to reject $\prV{KY}$ otherwise. In calculations which are essentially the same as in \cite[Theorem 35]{polyanskiy2}, we can find $d^*$ and $\lambda$ using \eqref{eq:uniform}: \begin{eqnarray*} \frac{1}{\binom{N}{K}} \geq \beta_{1-\epsilon}( \prV{KY}, \prV{K} \times \qrV{Y}) & = & {\mathbb P}( {\rm Bin}(T,1/2) \leq d^*-1) + \lambda {\mathbb P} ({\rm Bin}(T, 1/2) = d^*). \end{eqnarray*} Then, for this value of $d^*$ we write that \begin{eqnarray*} {\mathbb P}({\rm suc}) & = & 1- {\mathbb P}(\mbox{\;type I error\;}) = \sum_{\vc{k}, \vc{y}} \prv{KY}{k,y} {\mathbb P}(\mbox{\;accept $\prV{KY}$\;}) \\ & = & {\mathbb P}( {\rm Bin}(T,p) \leq d^*-1) + \lambda {\mathbb P} ({\rm Bin}(T,p) = d^*). \end{eqnarray*} In Figure \ref{fig:bscnonadaptive}, we plot this in the case $N=500$, $K=10$, $p=0.11$, and for comparison plot the Fano bound taken from \cite[Theorem 2]{chan}: \begin{equation} \label{eq:fanocombnoisy} {\mathbb P}({\rm suc}) \leq \frac{T(1-h(p))}{\log_2 \binom{N}{K}}. \end{equation} In Figure \ref{fig:bscnonadaptiverates} we give the group testing analogue of \cite[Figure 1]{polyanskiy2}. We use the regime of \cite{johnson33}; that is, we vary $N$ and take $K = \lceil N^{1-\beta} \rceil$, where $\beta = 0.37$ (this gives the value $K=10$ for $N=500$). Again taking $p=0.11$, we fix ${\mathbb P}({\rm suc}) = 0.999$, and use the lower bound on $T$ corresponding to the analysis above. This gives an upper bound on the rate $\log_2 \binom{N}{K}/T$, which we plot in Figure \ref{fig:bscnonadaptiverates}. 
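The bound above lends itself to direct computation. The following short sketch (our own code, not taken from the paper; all function and variable names are ours) finds $d^*$ and $\lambda$ from the Neyman-Pearson condition and evaluates the resulting upper bound on ${\mathbb P}({\rm suc})$, using only exact binomial coefficients from the standard library:

```python
from math import comb

def binom_pmf(T, q, d):
    # P(Bin(T, q) = d), computed directly with exact binomial coefficients
    return comb(T, d) * q**d * (1 - q)**(T - d)

def bsc_nonadaptive_bound(N, K, T, p):
    # Choose d* and lambda so that
    #   P(Bin(T,1/2) <= d*-1) + lambda * P(Bin(T,1/2) = d*) = 1/C(N,K),
    # then return the bound P(Bin(T,p) <= d*-1) + lambda * P(Bin(T,p) = d*).
    target = 1.0 / comb(N, K)
    cdf_half = 0.0
    for dstar in range(T + 1):
        pmf_half = binom_pmf(T, 0.5, dstar)
        if cdf_half + pmf_half >= target:
            lam = (target - cdf_half) / pmf_half
            break
        cdf_half += pmf_half
    bound = sum(binom_pmf(T, p, d) for d in range(dstar))
    return bound + lam * binom_pmf(T, p, dstar)
```

For $N=500$, $K=10$, $p=0.11$ this bound is negligible at $T=70$ and increases towards one as $T$ approaches $165$, matching the shape of Figure \ref{fig:bscnonadaptive}.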
Note that in this finite size regime, exactly as in \cite[Figure 1]{polyanskiy2}, the resulting rate bound is significantly smaller than the capacity $C= 1-h(p) = 0.500$, which we only approach asymptotically. \end{example} \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm]{./bscnonadaptive.pdf} \caption{Combinatorial non-adaptive group testing with $N = 500$ and $K=10$, where the output $\vc{X}$ of standard noiseless group testing is fed into a memoryless binary symmetric channel with $p=0.11$. We vary the number of tests $T$ between 70 and 165, and plot the success probability on the $y$ axis. We plot the upper bound on ${\mathbb P}({\rm suc})$ given by Example \ref{ex:bsc} using $\square$. For comparison, we plot the (weaker) Fano bound \eqref{eq:fanocombnoisy} taken from \cite{chan} as $\circ$. \label{fig:bscnonadaptive}} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm]{./bscnonadaptiverates.pdf} \caption{Combinatorial non-adaptive group testing for various values of $N$ and $K=N^{0.37}$, where the output $\vc{X}$ of standard noiseless group testing is fed into a memoryless binary symmetric channel with $p=0.11$. In each case we choose $T$ large enough such that the success probability ${\mathbb P}({\rm suc}) = 0.999$. We plot the upper bound on the rate given by Example \ref{ex:bsc}, and observe that this is significantly lower than the value of the capacity $C = 0.500$ in this finite blocklength regime. \label{fig:bscnonadaptiverates}} \end{center} \end{figure} \section{Adaptive group testing} \label{sec:adaptive} We now consider adaptive group testing, and give a result (Proposition \ref{prop:basic}) which implies a strong converse, assuming that a concentration inequality is satisfied. For any $t$, we write $\vc{Y}_{1,t} = \{ Y_1, \ldots, Y_t \}$ and $\sett{X}_{1,t} = \{ \sett{X}_1, \ldots, \sett{X}_t \}$. 
We first prove the following representation result for the joint probability distribution of $(\vc{U}, \mat{X}, \vc{Y})$ under the model of adaptivity: \begin{lemma} \label{lem:jointprob} Assuming the Only Defects Matter property (Definition \ref{def:odm}) with transition matrix ${\mathbb P}(Y_t = y|K_t = k) = P(y|k)$ for all $k,y,t$, we can write \begin{equation} \label{eq:jointprob} \prv{U, \mat{X}, Y}{u, \sett{X}, y} = \prv{U}{u} P_{\mat{X} \| \vc{Y}^-} \left( \sett{X} \| \vc{y}^- \right) \prod_{t=1}^T P( y_t | k_t), \end{equation} where $k_t = \vc{u}.\sett{X}_t$ is the number of defectives in the $t$th test and \begin{equation} \label{eq:jointprob2} P_{\mat{X} \| \vc{Y}^-} \left( \sett{X} \| \vc{y}^- \right) := \prod_{t=1}^T \prv{\mat{X}_t| \vc{Y}_{1,t-1}, \mat{X}_{1,t-1}}{ \sett{X}_t | \vc{y}_{1,t-1}, \sett{X}_{1,t-1} } \end{equation} is the causal conditional probability, with the key property that for any fixed $\vc{y}$: \begin{equation} \label{eq:sumx} \sum_{\sett{X}} P_{\mat{X} \| \vc{Y}^-} \left( \sett{X} \| \vc{y}^- \right) = 1. 
\end{equation} \end{lemma} \begin{proof} We write (omitting the subscripts on ${\mathbb P}$ for brevity) a collapsing product of the form: \begin{eqnarray} {\mathbb P}( \vc{u}, \sett{X}, \vc{y}) & = & {\mathbb P}( \vc{u}) \prod_{t=1}^T \frac{ {\mathbb P} \left( \vc{ u}, \sett{X}_{1,t}, \vc{y}_{1,t} \right)} { {\mathbb P} \left( \vc{ u}, \sett{X}_{1,t-1}, \vc{y}_{1,t-1} \right)} \nonumber \\ & = & {\mathbb P}( \vc{u}) \prod_{t=1}^T {\mathbb P} \left( \sett{X}_t, y_t | \vc{ u}, \sett{X}_{1,t-1}, \vc{y}_{1,t-1} \right) \nonumber \\ & = & {\mathbb P}( \vc{u}) \prod_{t=1}^T {\mathbb P} \left( y_t | \sett{X}_t, \vc{ u}, \sett{X}_{1,t-1}, \vc{y}_{1,t-1} \right) {\mathbb P} \left( \sett{X}_t | \vc{ u}, \sett{X}_{1,t-1}, \vc{y}_{1,t-1} \right) \label{eq:tosplit} \\ & = & {\mathbb P}( \vc{u}) \prod_{t=1}^T {\mathbb P} \left( y_t | k_t \right) {\mathbb P} \left( \sett{X}_t | \sett{X}_{1,t-1}, \vc{y}_{1,t-1} \right) \nonumber \end{eqnarray} where we remove the conditioning from the terms in \eqref{eq:tosplit} since $y_t$ is the result of sending $k_t = \vc{U}^T \sett{X}_t $ through a memoryless channel (the output of which is independent of previous test designs and their output) and since the choice of the $t$th test pool $\sett{X}_t$ is conditionally independent of $\vc{U}$, given the previous tests and their output. \end{proof} Next we adapt arguments given by Wolfowitz \cite{wolfowitz3} which give strong converses for symmetric channels (even with feedback). Wolfowitz's book \cite{wolfowitz3} reviews earlier work of his \cite{wolfowitz2}, and results of Kemperman \cite{kemperman} and Kesten \cite{kesten}. Wolfowitz \cite{wolfowitz3} and Kemperman \cite{kemperman} use Chebyshev's inequality to bound tail probabilities; the fact that stronger results than Chebyshev could be used for the case without feedback was stated as \cite[Exercise 5.35]{gallager2}. 
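The structure of Lemma \ref{lem:jointprob} can also be checked numerically. The sketch below (illustrative only; the randomized two-test strategy, the pool distributions and the noise level are hypothetical choices of ours, not taken from the paper) enumerates a toy adaptive design with $N=2$ items and $T=2$ tests, and verifies both the normalization property \eqref{eq:sumx} of the causal conditional and that the factorized joint law \eqref{eq:jointprob} sums to one:

```python
import itertools
from math import isclose

p = 0.2  # hypothetical noise level: the indicator 1{k >= 1} is flipped w.p. p
N = 2    # two items, so pools are the four subsets of {0, 1}
pools = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]

def P_y_given_k(y, k):
    # Only Defects Matter: P(y|k) depends on k only through 1{k >= 1}
    x = 1 if k >= 1 else 0
    return 1 - p if y == x else p

def pool_dist_1():
    # hypothetical randomized first test: a uniformly chosen singleton
    return {pools[1]: 0.5, pools[2]: 0.5}

def pool_dist_2(X1, y1):
    # the second pool depends on the first pool and its outcome
    # (X1 is a singleton here, so the dictionary keys never collide)
    return {X1: 0.7, pools[3]: 0.3} if y1 == 1 else {pools[3]: 1.0}

def causal(X1, X2, y1):
    # causal conditional P_{X || Y^-}((X1, X2) || y1) for T = 2 tests
    return pool_dist_1().get(X1, 0.0) * pool_dist_2(X1, y1).get(X2, 0.0)

# eq. (sumx): for each fixed outcome sequence, the causal conditional sums to 1
for y1 in (0, 1):
    assert isclose(sum(causal(X1, X2, y1) for X1 in pools for X2 in pools), 1.0)

# the factorized joint law of Lemma (jointprob) is a probability distribution
total = 0.0
for bits in itertools.product((0, 1), repeat=N):
    u = frozenset(i for i in range(N) if bits[i])
    for X1, X2 in itertools.product(pools, repeat=2):
        for y1, y2 in itertools.product((0, 1), repeat=2):
            k1, k2 = len(u & X1), len(u & X2)  # defectives in each pool
            total += (0.25 * causal(X1, X2, y1)   # uniform prior on u
                      * P_y_given_k(y1, k1) * P_y_given_k(y2, k2))
assert isclose(total, 1.0)
print("factorization checks pass")
```

The check works because each conditional pool distribution sums to one given its history, exactly the property used in the collapsing-product proof above.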
Note the similarity of Proposition \ref{prop:basic} to \cite[Theorem 5.8.5]{gallager2}, a result originally due to Wolfowitz and discussed for example as \cite[Theorem 10]{polyanskiy2}. \begin{definition} \label{def:typset} Fix a probability mass function $\qrV{Y}$ on $\{ 0, 1 \}^T$. Define the typical set $\typv{\beta}$ by \begin{equation} \label{eq:typset} \typv{\beta} := \left\{ (\vc{k}, \vc{y}): \frac{ \prod_{t=1}^T P \left( y_t | k_t \right)}{ \qrv{Y}{y} } \leq \exp( T \beta) \right\} .\end{equation} (Note that this set can be expressed in terms of the information density $ i(\vc{k} ; \vc{y})$ of \cite{polyanskiy2}). \end{definition} Write $\prv{Z|Y, \mat{X}}{z|y,\sett{X}}$ for the probability that some algorithm estimates the defective set as $\widehat{\vc{U}} = \vc{Z} = \vc{z} \in \{0,1\}^N$ when the group testing process with test matrix $\mat{X} = \sett{X}$ returns $\vc{Y} = \vc{y}$. \begin{proposition} \label{prop:basic} Take any probability mass function $\qrV{Y}$ on $\{ 0, 1 \}^T$. For any model of group testing (adaptive or non-adaptive), the success probability satisfies \begin{equation} \label{eq:basic} {\mathbb P}( {\rm suc} ) \leq \exp( T \beta) \sum_{\vc{z} \in \{0,1 \}^N} \prv{U}{z} \qst{z} + {\mathbb P} \left( (\vc{K}, \vc{Y}) \notin \typv{\beta} \right) . \end{equation} where we write $\qst{z} = \sum_{ \vc{y} , \sett{X} } \prv{Z|Y, \mat{X}}{z|y,\sett{X}} P_{\mat{X} \| \vc{Y}^-} \left( \sett{X} \| \vc{y}^- \right) \qrv{Y}{y} $. \end{proposition} \begin{proof} See Section \ref{sec:proofbasic}. 
\end{proof} Note that by \eqref{eq:sumx}, we know that $\qst{z}$ is a probability mass function since: \begin{eqnarray} \sum_{\vc{z} \in \{ 0,1 \}^N} \qst{z} & = & \sum_{ \vc{y} , \sett{X} } P_{\mat{X} \| \vc{Y}^-} \left( \sett{X} \| \vc{y}^- \right) \qrv{Y}{y} \sum_{\vc{z} \in \{ 0,1 \}^N} \prv{Z|Y, \mat{X}}{z|y,\sett{X}} \nonumber \\ & = & \sum_{ \vc{y} \in \{0,1 \}^T} \qrv{Y}{y} \sum_{\sett{X} } P_{\mat{X} \| \vc{Y}^-} \left( \sett{X} \| \vc{y}^- \right) = \sum_{ \vc{y} \in \{ 0,1 \}^T} \qrv{Y}{y} = 1. \label{eq:qsumx} \end{eqnarray} We use Proposition \ref{prop:basic} to prove a result which extends Theorem \ref{thm:bja} for general defective set distributions $\prV{U}$ in the noiseless binary case. This result applies to both adaptive and non-adaptive group testing. \begin{theorem} \label{thm:noiseless} For noiseless adaptive binary group testing, if we write $\curP{m}$ for the sum of the largest $m$ values of $\prv{U}{z}$ then $$ {\mathbb P}({\rm suc}) \leq \curP{2^T}.$$ \end{theorem} \begin{proof} See Section \ref{sec:proofnoiseless}. \end{proof} For combinatorial group testing, since $\prV{U}$ is uniform on a set of size $\binom{N}{K}$, Theorem \ref{thm:noiseless} implies that $\curP{m} = m/\binom{N}{K}$ and we recover Theorem \ref{thm:bja}. We show how sharp this result is in Figure \ref{fig:bja}, which is reproduced from \cite[Figure 1]{johnsonc10}. \begin{figure}[h] \begin{center} \includegraphics[width=12cm]{./successhistogrambja.pdf} \caption{(Reproduced from \cite[Figure 1]{johnsonc10}). Success probability for noiseless adaptive combinatorial group testing with $(K,N) = (10, 500)$ and $(30, 9699)$ (these numbers are chosen to match the regime $K = N^{1-\beta}$, as in Figure \ref{fig:bscnonadaptiverates}). The upper bound on success probability of Theorem \ref{thm:bja} is plotted in red, and the upper bound \eqref{eq:chan} (from \cite[Equation (6)]{chan}) in blue. The dotted vertical line is at $\log_2 \binom{N}{K}$ (the magic number). 
To illustrate how sharp Theorem \ref{thm:bja} is, we compare this with simulated (empirical) results of practical algorithms. The empirical success probability of the HGBSA of Hwang \cite{hwang} is plotted as a bright green line, and the related algorithm analysed in \cite[Section IV]{johnsonc10} is plotted in dark green. \label{fig:bja}} \end{center} \end{figure} \begin{corollary} \label{cor:inforate} Consider a sequence $\vc{U}^{(i)}$ of defectivity vectors of length $i$, generated as independent realisations of a stationary ergodic stochastic process of entropy rate $H$. Given $T^{(i)} = (H-\epsilon) i$ tests to solve the $i$th noiseless adaptive group testing problem, the success probability tends to zero. (Hence the strong capacity cannot be more than 1). \end{corollary} \begin{proof} We define the typical set \begin{equation} \label{eq:typsmb} \typv{\epsilon}^{(i)} = \left\{ \left| \frac{ - \log \prv{U^{(i)}}{u}}{i} - H \right| \leq \frac{\epsilon}{2} \right\} \end{equation} By the Shannon-McMillan-Breiman theorem (AEP) (see for example \cite[Theorem 15.7.1]{cover}), the probability ${\mathbb P}(\typv{\epsilon}^{(i)}) \rightarrow 1$. Then, in Theorem \ref{thm:noiseless}, the $2^{T(i)}$ strings of largest probability will certainly be contained in a list containing the elements of $ (\typv{\epsilon}^{(i)})^c$ and the $2^{T(i)}$ strings of largest probability in $\typv{\epsilon}^{(i)}$. 
Since, by definition, any string in $\typv{\epsilon}^{(i)}$ has probability less than $2^{- i H + i \epsilon/2}$, we deduce that \begin{eqnarray*} {\mathbb P}({\rm suc}) \leq \curPi{2^{T(i)}} & \leq & {\mathbb P} \left( (\typv{\epsilon}^{(i)})^c \right) + 2^{T(i)} 2^{- i H + i \epsilon/2} \\ & = & {\mathbb P} \left( (\typv{\epsilon}^{(i)})^c \right) + 2^{- i \epsilon/2} \end{eqnarray*} Given a quantitative form of the Shannon-McMillan-Breiman theorem (proved for example using the concentration inequalities described in \cite{raginsky}), we can deduce an explicit (exponential) rate of convergence to zero of ${\mathbb P}({\rm suc})$. \end{proof} We give more explicit bounds which show how Theorem \ref{thm:noiseless} can be applied in the noiseless probabilistic case in Section \ref{sec:noiselessexamples} below. Section \ref{sec:adaptivenoisy} contains an illustrative example of results that can be proved using Proposition \ref{prop:basic}, in the noisy adaptive case where $\vc{U}$ is uniformly distributed on a set $\sett{M}$ of size $M$ and there is a binary symmetric channel with error probability $p$ between $\vc{X}$ and $\vc{Y}$. Baldassini's thesis \cite{baldassini} developed and analysed algorithms in the noisy adaptive case. However, it remains an open problem to find capacity-achieving algorithms, even for examples such as the binary symmetric channel. \section{Noiseless adaptive probabilistic group testing} \label{sec:noiselessexamples} In this section, we give examples of bounds which can be proved using Theorem \ref{thm:noiseless} for noiseless adaptive probabilistic group testing. Note the similarity between the calculations in Examples \ref{ex:bsc} and \ref{ex:idprob}; in the former case we control concentration of channel probabilities, in the latter we control source probabilities (see also \cite[Theorem 35]{polyanskiy2}). 
Note that the control of the source strings with highest probabilities is an operation that lies at the heart of the analysis of the finite blocklength data compression problem in \cite{kostina}. \subsection{Noiseless probabilistic group testing with identical $p$} \begin{example} \label{ex:idprob} We consider the identical Probabilistic case, where $p_i \equiv p < 1/2$, so $\prv{U}{z} = p^w (1-p)^{N-w}$, where $w = w(\vc{z})$ is the Hamming weight of $\vc{z}$. Write \begin{equation} \label{eq:elnt} L^{*}_{N,T} := \min \left\{ L: \sum_{i=0}^{L} \binom{N}{i} \geq 2^T \right\} \end{equation} and define $s \geq 0$ via \begin{equation} \label{eq:ldef} 2^T = \sum_{i=0}^{L^{*}_{N,T}-1} \binom{N}{i} + s, \end{equation} meaning that the $2^T$ highest probability defective sets are all those of weight $\leq L^{*}_{N,T} -1$, plus $s$ of weight $L^{*}_{N,T}$. We evaluate $\curP{2^T}$ in this case to obtain a bound which we plot in Figure \ref{fig:bernoulli}: \begin{equation} \label{eq:curpval} {\mathbb P}({\rm suc}) \leq \curP{2^T} = \sum_{i=0}^{L^{*}_{N,T}-1} \binom{N}{i} p^{i} (1-p)^{N-i} + s p^{L^{*}_{N,T}} (1-p)^{N-L^{*}_{N,T}}.\end{equation} \end{example} \begin{figure}[h] \begin{center} \includegraphics[width=12cm]{./converse.pdf} \caption{Noiseless probabilistic adaptive group testing in the case of $p_i \equiv 1/50$, $N = 500$. We vary the number of tests $T$ between 0 and 100, and plot the success probability on the $y$ axis. We plot the upper bound on ${\mathbb P}({\rm suc})$ given by \eqref{eq:curpval} using $\square$. For comparison, we plot the (weaker) Fano bound \eqref{eq:li} of \cite{li5} as $\circ$. The approximation \eqref{eq:approxnorm} is plotted as $+$. \label{fig:bernoulli}} \end{center} \end{figure} \begin{remark} We give a Gaussian approximation to the bound \eqref{eq:curpval}, in the spirit of \cite{polyanskiy2}. Since we need to control tail probabilities we use the approximation given by Chernoff bounds (see Theorem \ref{thm:chernoff}). 
In particular, if we take $L = L(y) := N p + y \sqrt{N p(1-p)}$ and $T(y)= N h(L(y)/N)$ then \eqref{eq:chernoff2} gives that \begin{equation} {\mathbb P}( {\rm Bin}(N,1/2) \leq L(y)) \simeq 2^{ - N +T(y)}, \end{equation} giving an approximate solution to \eqref{eq:elnt} as required. Substituting in \eqref{eq:curpval} we obtain \begin{eqnarray} \label{eq:approxnorm} {\mathbb P}({\rm suc}) \leq \curP{2^{T(y)}} & \simeq & {\mathbb P}( {\rm Bin}(N, p) \leq L(y)) \simeq \Phi(y), \end{eqnarray} using a second normal approximation. For example, if $y=0$ then $T = T(0) = N h(p)$ (the magic number) and $L = N p$, and $\curP{2^T} \simeq \Phi(0) = 1/2$. \end{remark} Indeed, using the Chernoff bound and \eqref{eq:approxnorm}, we deduce a strong capacity result: \begin{corollary} Noiseless binary probabilistic group testing has strong capacity $C=1$ in any regime where $p \rightarrow 0$ and $N p \rightarrow \infty$. \end{corollary} \begin{proof}[Sketch proof] For any $p \leq 1/2$ and $\epsilon > 0$, we consider the asymptotic regime where $T = N h(p-\epsilon)$ as $N \rightarrow \infty$. Choosing $L = N(p-\epsilon/2)$, we know that using standard bounds (see for example \cite[Equation (12.40)]{cover}) $$ \sum_{i=0}^L \binom{N}{i} \geq \binom{N}{L} \geq \frac{2^{Nh(L/N)}}{N+1} \geq \frac{2^{N h(p-\epsilon/2)}}{N+1},$$ which is larger than $2^T$ in the asymptotic regime. Hence, summing over the strings of weight $\leq L$ will give at least the $2^T$ strings of highest probability, and we deduce by Theorem \ref{thm:chernoff} that $$ {\mathbb P}({\rm suc}) \leq {\mathbb P}( {\rm Bin}(N,p) \leq N(p - \epsilon/2)) \leq 2^{- N D(p -\epsilon/2 \| p)},$$ which tends to zero exponentially fast. This complements the performance guarantee proved in \cite{johnsonc12}, strengthening the result of \cite[Corollary 1.5]{johnsonc12} where the corresponding weak capacity result was stated using \eqref{eq:li}. 
\end{proof} \subsection{Noiseless probabilistic group testing with non-identical $p$} In the case of probabilistic group testing, for non-identical $p_i$, the analysis is more complicated, and the form of the tightest bounds depends on the distribution of values of $p_i$. We assume that $1/2 \geq p_1 \geq p_2 \geq \ldots \geq p_N$ and write $\zeta_i = \log( (1-p_i)/p_i) \geq 0$. For a given value of $c$, we write $\sett{U}_c$ for the collection of defective sets with probability $\geq c$. That is \begin{eqnarray} \sett{U}_c & = & \{ \vc{u}: {\mathbb P}(\vc{U} = \vc{u}) \geq c \} = \left\{ \vc{u}: \sum_{i=1}^N (p_i - u_i) \zeta_i \geq \log c + h(\vc{U}) \right\}. \end{eqnarray} The key idea is to find a value $c$, such that we can guarantee that $|\sett{U}_c| \geq 2^T$ (we discuss how to do this in Lemma \ref{lem:findc} below). Then, we use concentration inequalities to bound the total probability ${\mathbb P}(\vc{U} \in \sett{U}_c)$ from above (this is done in Lemma \ref{lem:tailbound}). Then by construction we know that $\curP{2^T} \leq {\mathbb P}(\vc{U} \in \sett{U}_c)$, and we deduce an upper bound on the success probability, stated in Theorem \ref{thm:final}. The details are given in Section \ref{sec:prooffinal}. \begin{theorem} \label{thm:final} Non-identical Probabilistic group testing has success probability bounded by $$ {\mathbb P}({\rm suc}) \leq \exp \left( - \frac{(\log c^* + h( \vc{U}))^2}{4 L} \right), \mbox{ if $0 \leq (\log c^* + h( \vc{U})) \leq \frac{L}{M}$,} $$ where \begin{equation} \label{eq:landm} L = \sum_{i=1}^N p_i (1-p_i) \left( \log \left( \frac{1-p_i}{p_i} \right) \right)^2, \mbox{\;\; and \;\;} M = \max_i (1-p_i) \log \left( \frac{1-p_i}{p_i} \right), \end{equation} and we write $$ c^* = \max_{T \leq R \leq N} \left( \prod_{i=1}^{R-L^{*}_{R,T}} (1-p_i) \prod_{i=R-L^{*}_{R,T}+1}^R p_i \right). $$ \end{theorem} \begin{proof} See Section \ref{sec:prooffinal}. 
\end{proof} \section{Noisy adaptive group testing example} \label{sec:adaptivenoisy} We now use Proposition \ref{prop:basic} to prove a bound on ${\mathbb P}({\rm suc})$ in a noisy example. For simplicity we state the following example in the case of uniform $\vc{U}$. Further generalizations (in the spirit of Theorem \ref{thm:noiseless}) are possible by adapting the proofs along the lines of Section \ref{sec:proofnoiseless}. \begin{example} \label{ex:adaptivenoisy} Suppose $\vc{U}$ is uniformly distributed on a set $\sett{M}$ of size $M$, and the noise channel between $\vc{X}$ and $\vc{Y}$ forms a binary symmetric channel. Recall that \eqref{eq:basic} states that \begin{equation} \label{eq:basic3} {\mathbb P}( {\rm suc} ) \leq \exp( T \beta) \sum_{\vc{z} \in \{0,1 \}^N} \prv{U}{z} \qst{z} + {\mathbb P} \left( (\vc{K}, \vc{Y}) \notin \typv{\beta} \right). \end{equation} We write $C := 1 - h(p) $ for the Shannon capacity of the binary symmetric channel, and take $\beta = C + \epsilon/2$. By \eqref{eq:qsumx}, since $\prv{U}{z} \equiv 1/M$ the first term of \eqref{eq:basic3} becomes $\exp(T (C + \epsilon/2))/M$. Since $X_t = {\mathbb I}(K_t \geq 1)$, and the channel matrix $P(y | k)$ is constant for values of $k \geq 1$, it collapses down to a channel matrix from $x$ to $y$. 
Further, taking $\qrV{Y} \equiv 1/2^T$, for $p < 1/2$ we have the chain of equivalent events \begin{eqnarray} \left\{ (\vc{k}, \vc{y}) \notin \typv{C+\epsilon/2} \right\} & \Longleftrightarrow & \left\{ \frac{1}{T} \left( \sum_{i=1}^T \log P(y_i | k_i) - \log Q(y_i) \right) > C + \frac{\epsilon}{2} \right\} \nonumber \\ & \Longleftrightarrow & \left\{ 1 + \frac{d(\vc{x}, \vc{y})}{T} \log p + \left( 1 - \frac{d(\vc{x}, \vc{y})}{T} \right) \log(1-p) > C + \frac{\epsilon}{2} \right\} \nonumber \\ & \Longleftrightarrow & \left\{ - \left( d(\vc{x}, \vc{y}) - T p \right) \log \left( \frac{(1-p)}{p} \right) > \frac{\epsilon T}{2} \right\}. \label{eq:asymp} \end{eqnarray} We deduce results both in the asymptotic (capacity) sense and in the finite blocklength regime. \begin{enumerate} \item Consider a sequence of group testing problems, where the $i$th problem has $\vc{U}^{(i)}$ uniformly distributed on a set $\sett{M}(i)$ of size $M(i)$. Then any sequence of algorithms using $T(i)$ tests with \begin{equation} \label{eq:lowerval} \liminf_{i \rightarrow \infty} \frac{ \log M(i)}{ T(i)} = \frac{ H(\vc{U}^{(i)}) }{T(i)} \geq C+ \epsilon, \end{equation} has success probability ${\mathbb P}({\rm suc}) \rightarrow 0$, and hence the strong capacity is bounded above by $C$. This follows by the considerations above, since \eqref{eq:asymp} tells us that \eqref{eq:basic3} becomes \begin{eqnarray*} {\mathbb P}({\rm suc}) & \leq & \frac{1}{M} \exp(T (C + \epsilon/2)) + {\mathbb P}( {\rm Bin}(T,p) \leq T q) \\ & \leq & \exp(-T \epsilon/2) + 2^{ - T D(q \| p)} \end{eqnarray*} where $q = p - \epsilon/(2 \log((1-p)/p))$, and we deduce (exponential) convergence to zero, using the Chernoff bound, Theorem \ref{thm:chernoff} below. \item For any $d^*$, we can consider the set $ \{ d \leq d^* \}$ which (by \eqref{eq:asymp}) corresponds to taking $\epsilon T/2 = - (d^* - T p) \log((1-p)/p)$. 
Then \eqref{eq:basic3} becomes \begin{eqnarray} {\mathbb P}({\rm suc}) & \leq & \min_{d^*} \left[ \frac{1}{M} \exp \left( T C - (d^* - T p) \log \left( \frac{(1-p)}{p} \right) \right) + {\mathbb P} \left( {\rm Bin}(T,p) \leq d^* \right) \right]. \;\;\;\; \label{eq:toplotbscadap} \end{eqnarray} We illustrate this bound in Figure \ref{fig:bscadaptive}, where we compare it with the bounds derived in the non-adaptive case in Example \ref{ex:bsc}. \end{enumerate} \end{example} \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm]{./bscadaptive.pdf} \caption{Combinatorial group testing with $N = 500$ and $K=10$, where the output $\vc{X}$ of standard noiseless group testing is fed into a memoryless binary symmetric channel with $p=0.11$. We vary the number of tests $T$ between 70 and 165, and plot the success probability on the $y$ axis. We plot the bound \eqref{eq:toplotbscadap} (from Example \ref{ex:adaptivenoisy}) on the success probability for adaptive algorithms as $\triangle$. Since this is exactly the same scenario as Figure \ref{fig:bscnonadaptive}, we add the points from that figure for comparison. That is, we plot the upper bound on ${\mathbb P}({\rm suc})$ for non-adaptive algorithms given by Example \ref{ex:bsc} using $\square$, showing a small adaptivity gap between upper bounds. Again, we plot the (weaker) Fano bound \eqref{eq:fanocombnoisy} taken from \cite{chan} as $\circ$. \label{fig:bscadaptive}} \end{center} \end{figure} \clearpage
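For completeness, the adaptive bound \eqref{eq:toplotbscadap} can be evaluated with a few lines of code. This is our own sketch, not the paper's implementation: base-2 exponentials are used throughout, and the exponent is written as $TC - (d^* - Tp)\log_2\frac{1-p}{p}$, following the substitution $\epsilon T/2 = -(d^* - Tp)\log\frac{1-p}{p}$ made above.

```python
from math import comb, log2

def binom_cdf(T, q, d):
    # P(Bin(T, q) <= d)
    return sum(comb(T, i) * q**i * (1 - q)**(T - i) for i in range(d + 1))

def bsc_adaptive_bound(N, K, T, p):
    # Upper bound on P(suc) for adaptive group testing over a BSC(p),
    # minimizing over the threshold d*, with a uniform prior on the
    # C(N,K) possible defective sets.
    log2M = log2(comb(N, K))
    C = 1 + p * log2(p) + (1 - p) * log2(1 - p)   # BSC capacity 1 - h(p)
    r = log2((1 - p) / p)
    best = 1.0   # the success probability is trivially at most 1
    for dstar in range(T + 1):
        term1 = 2.0 ** (T * C - (dstar - T * p) * r - log2M)
        best = min(best, term1 + binom_cdf(T, p, dstar))
    return best
```

For $N=500$, $K=10$, $p=0.11$ this evaluates to a curve sitting slightly above the non-adaptive bound of Example \ref{ex:bsc}, consistent with the small adaptivity gap visible in Figure \ref{fig:bscadaptive}.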
\section{Introduction} Topological and Geometric Data Analysis (TGDA) describes an emerging set of analytic tools which leverage the underlying shape of a data set to produce topological summaries. These techniques have been particularly successful at providing new insight for high dimensional data sets, topological data structures and biological data sets \cite{gameiro_topological_2015, kelin_multidimensional_2015, nicolau_topology_2011}. An ideal topological summary should discriminate well between different spaces, be stable to perturbations of the initial data, and be amenable to statistical analysis. Persistent homology (PH) has become a ubiquitous tool in the TGDA arsenal. PH studies the homology groups of a family of topological spaces built upon a data set. The associated topological summary for a $1$-parameter family is given in terms of a persistence diagram marking the parameter values for births and deaths of homological features. The persistence diagram may equivalently be thought of as a multiset of points in $\mathbb{R}^2$. Qualitatively, one considers long lived homological features as inherent to the data set and short lived features as noise. Nevertheless, in various applications it has been revealed that the short lived features provide important discriminating information in classification. For example, in analysing the topology of brain arteries it was found that the $28^\text{th}$ longest persisting feature provided the most useful discriminating information \cite{bendich_persistent_2016}. We would therefore like to have a statistical framework and topological summary in which one can detect statistically significant topological features of large, medium and small persistence, and which in particular does not discard short lived features as noise. The space of persistence diagrams does not enjoy the properties desired for traditional statistical analysis. For example, a collection of persistence diagrams may not have a well defined mean. 
As a result, there have been various attempts to vectorize the persistence diagram, so that the summary is more amenable to statistical analysis and machine learning techniques. The article \cite{Bubenik:2015} introduces a stable vectorization of the persistence diagram, the persistence landscape, a function in Lebesgue $p$-space. This summary naturally enjoys unique means, and one can perform traditional hypothesis tests on the summary itself and on numerical statistics derived from it. There are several natural situations where we may wish to build a richer structure of topological spaces on a data set and track the changes to the homology whilst varying multiple parameters. For example, in \cite{Keller2018} the topology of chemical compounds is studied using 2-parameter filtrations. The PH theory becomes wildly more complicated when the family of topological spaces on our data set is indexed by multiple parameters. The associated family of homology groups is known as a multiparameter persistence module. The theory of multiparameter persistence modules is presented in \cite{Carlsson2009}; unlike the single parameter case, where we may associate a persistence diagram to a module, there is no analogous complete discrete invariant in the multiparameter setting. There exist various approaches to define invariants for multiparameter persistence modules in the literature. The rank invariant of a module has been studied in the context of $H_0$-modules and shape matching, and has been shown to be stable when endowed with a matching distance \cite{frosini_persistent_2011, cerri_betti_2013}. An alternative approach uses algebraic geometry to construct numeric functions on multiparameter persistence modules \cite{Skryzalin2017}, generalising the ring of algebraic functions on barcode space for the single parameter case \cite{Adcock2013}. 
However, the disadvantage of both of these approaches is that, equipped with their vector space norms, these invariants are unstable with respect to the interleaving distance, the natural distance to consider on multiparameter modules. Another approach is to study algebraic invariants associated to the multigraded algebra structure of multiparameter persistence modules \cite{2017Otter}. In this article we introduce a family of new stable invariants for multiparameter persistence modules, naturally extending the results of \cite{Bubenik:2015} from the setting of single parameter persistence modules to multiparameter persistence modules. Our incomplete invariants, the \textit{multiparameter persistence landscapes}, are derived from the rank invariant associated to a multiparameter persistence module, and are continuous functions in Lebesgue $p$-space. As such, they are naturally endowed with a distance function and are well suited to statistical analysis, since again there is a uniquely defined mean associated to multiple landscapes. The natural inner-product structure on the landscape functions gives rise to a positive-definite kernel, which can be leveraged by machine learning algorithms. The multiparameter landscape functions are sensitive to homological features of large, medium and small persistence. The landscapes also have the advantage of being interpretable since they are closely related to the rank invariant. Moreover, one can derive stable $\mathbb{R}$-valued numeric invariants from the landscape functions using the linear functionals in the dual space. We can produce confidence intervals and perform hypothesis tests on these numeric invariants, which are viewed as $\mathbb{R}$-valued random variables. In the $2$-parameter case we visualise the multiparameter landscapes as a surface $\lambda : \mathbb{R}^2 \to \mathbb{R}$. We shall present computational examples in the $2$-parameter case using the RIVET software presented in \cite{Lesnick2015}. 
These examples illustrate a range of potential applications, demonstrating that the landscapes are sensitive to both the topology and geometry of a data set. \subsection{Outline of Content} We begin by introducing multiparameter persistence theory in Section \ref{Multiparameter-Theory}. We shall present the interleaving distance on multiparameter modules and define the persistence weighted Wasserstein distance for interval decomposable modules. In Section \ref{Landscapes} we will recall the definition and properties of the single parameter persistence landscape. We shall consider possible generalisations of the persistence landscape to multiparameter persistence modules and then explore the properties and discriminating power of our chosen generalisation. Section \ref{Stability} contains the proof of stability of our multiparameter persistence landscapes with respect to the interleaving distance and persistence weighted Wasserstein distance. We also include an optimality result showing that the collection of landscapes associated to a multiparameter persistence module contains almost all the information contained in the rank invariant. In Section \ref{Statistics} we shall collect results from the literature regarding Banach space valued random variables and demonstrate how these results apply to multiparameter persistence landscapes. Finally, in Section \ref{Computations} we provide a conservative estimate for the time complexity of computing multiparameter persistence landscapes, perform some example computations of the multiparameter landscapes and apply statistical tests to various data sets. \section*{Acknowledgements} The author would like to give recognition to the \textit{Theory and Foundations of TGDA} workshop hosted at Ohio State University which facilitated useful conversations with experts in TGDA. The author wishes to thank his supervisor Ulrike Tillmann for her guidance and support with this project, and Peter Bubenik for helpful suggestions. 
The author gratefully acknowledges support from EPSRC studentship EP/N509711/1 and EPSRC grant EP/R018472/1. \section{Multiparameter Persistence Theory} \label{Multiparameter-Theory} The theory of persistent homology is well developed for topological spaces filtered over a single parameter. Under appropriate finiteness conditions, the Krull-Schmidt Decomposition Theorem establishes the barcode as a complete invariant \cite{Crawley-Boevey15}. In the more general situation we study, the space is filtered over multiple parameters. What we call multiparameter persistence is sometimes termed multidimensional persistence in the literature. We shall reserve \textit{parameter} to describe variables over which we are filtering our space, and \textit{dimension} to refer to homological dimension. The general decomposition theorem available in the single parameter case does not generalise to multiparameter modules. In the language of quiver representations, single parameter persistence is the study of $A_n$-type quiver representations and their infinite extensions, whilst in contrast multiparameter persistence concerns the more complicated representations of wild-type quivers. However, recent work has shown that affording additional structure to poset modules results in an appropriate generalisation of Gabriel's Theorem under certain conditions \cite{Ogle2018}. Let us begin with an exposition of multiparameter persistence theory. We shall carry two equivalent perspectives of multiparameter persistence modules: a categorical perspective, which affords efficient descriptions of interleavings, and a module-theoretic perspective, which affords an efficient description of modules via presentations. The following example gives a construction of a multiparameter persistence module. \begin{ex}(Sublevel-set Multiparameter Persistence Module) Let $X$ be a topological space and $f : X \to \mathbb{R}^n$ a filtering function. 
We can associate a family of topological subspaces indexed by vectors $\vec{a} = (a_1,...,a_n) \in \mathbb{R}^n$ induced by $f$: $$X_\vec{a} = \{ x\in X\ |\ f(x)_i < a_i \ \forall \ i = 1,...,n\}$$ This is known as the sublevel-set filtration. For any $\vec{b}\in \mathbb{R}^n$ such that $a_i \leq b_i$ for all $i = 1,...,n$, we have an inclusion map $X_\vec{a} \hookrightarrow X_\vec{b}$. If we let $H$ denote a singular homology functor with coefficients in a field, then applying this functor to the collection $\{X_\vec{a}\}$ and the appropriate inclusion maps gives rise to a family of vector spaces and linear maps known as a Sublevel-set Multiparameter Persistence Module. \end{ex} \subsection{Multiparameter Persistence Modules} Let $P_n$ denote the monoid ring of the monoid $([0,\infty)^n,+)$ over a field $\mathbb{F}$. Equivalently, one may think of $P_n$ as a pseudo-polynomial ring $\mathbb{F}[x_1,...,x_n]$ in which exponents are only required to be non-negative and can be non-integral. Let $A_n$ denote the polynomial ring $\mathbb{F}[x_1,...,x_n]$ or analogously the monoid ring of $(\mathbb{N}^n,+)$ over $\mathbb{F}$. Let $\vec{P}$ denote the category associated to the poset $P$, so that $\vec{R}^n$ and $\vec{Z}^n$ denote the categories associated to the posets $(\mathbb{R}^n,\leq)$ and $(\mathbb{Z}^n,\leq)$ under the standard coordinate-wise partial orders. Let $\vec{Vect}$ denote the category of vector spaces and linear maps over $\mathbb{F}$, and $\vec{vect}$ denote the subcategory of finite dimensional vector spaces. Moreover, let us denote the $\textbf{C}$-valued functors on $\textbf{P}$ by $\textbf{C}^\textbf{P}$. \begin{defn}(Persistence Module) Let $M$ be a module over the ring $P_n$. We say $M$ is a persistence module if $M$ is an $\mathbb{R}^n$-graded $P_n$-module. 
That is to say, $M$ has a decomposition as an $\mathbb{F}$-vector space $M = \bigoplus_{\vec{a}\in \mathbb{R}^n} M_\vec{a}$ compatible with the action of $P_n$: $$m \in M_\vec{a} \implies \vec{x}^\vec{b} \cdot m \in M_{\vec{a}+\vec{b}}$$ Recall that we require morphisms of graded modules to respect grading and be compatible with the module structure. \end{defn} In the setting of a sublevel-set persistence module the vector space at each grade is $H(X_\vec{a})$, and the action of $\vec{x}^\vec{b}$ on $H(X_\vec{a})$ is given by the linear map of homology groups induced by the inclusion $X_\vec{a} \hookrightarrow X_{\vec{a}+\vec{b}}$. \begin{defn}(Persistence Module) Let $M$ be an element of the functor category $\vec{Vect}^{\vec{R}^n}$; then $M$ is a persistence module. A morphism of persistence modules is simply a natural transformation $M \Rightarrow M'$. \end{defn} \begin{defn}(Discrete Persistence Module) Let $M$ be a module over the ring $A_n$. We say $M$ is a discrete persistence module if $M$ is a $\mathbb{Z}^n$-graded $A_n$-module. \end{defn} \begin{defn}(Discrete Persistence Module) Let $M$ be an element of the functor category $\vec{Vect}^{\vec{Z}^n}$; then $M$ is a discrete persistence module. \end{defn} The equivalence of the two perspectives amounts to an equivalence of categories between $\mathbb{R}^n$-graded $P_n$-$\vec{Mod}$ and $\vec{Vect}^{\vec{R}^n}$. We have introduced the notion of discrete persistence modules since the computations we present shall deal with discrete persistence modules approximating continuous persistence modules. This approximation can be taken to an arbitrary degree of accuracy (under the interleaving distance) with increasing computational cost. Discretization is not the only approach one can take in computations involving continuous modules. 
In contrast, \cite{Miller2017} develops a primary decomposition of modules which facilitates a finite description of a wide class of persistence modules which would require infinitely many generators if discretized. Our only obstruction to using this approach rather than discretization is the lack of available software to cope with these presentations. Associated to a multiparameter module is a family of single parameter modules whose collection of barcodes is known as the fibered barcode space. \begin{defn}(Fibered Barcode Space) Let $\vec{L}$ denote the subposet of $\vec{R}^n$ corresponding to a positively sloped line $L\subset \mathbb{R}^n$. Let $\iota_L : (\vec{R},\|\cdot\|_\infty) \to (\vec{R}^n,\|\cdot\|_\infty)$ denote the isometric embedding with $\iota_L( \vec{R})=\vec{L}$ and $\iota_L(0)\in \{x_n = 0\}$. Then for $M\in\vec{Vect}^{\vec{R}^n}$ the composite $M^L = M\circ \iota_L$ is a single parameter persistence module, and thus has an associated barcode $\mathcal{B}(M^L)$. Let $\mathcal{L}$ denote the set of positively sloped lines; then the collection $\{\mathcal{B}(M^L) \ :\ L \in \mathcal{L} \}$ is known as the fibered barcode space of $M$. \end{defn} We shall see later that the computation of certain multiparameter landscapes can be reduced to queries of the fibered barcode space. \subsection{Presentations} Let us now develop the theory required to define presentations of persistence modules. \begin{defn}(Translation Endofunctors)\cite{Bubenik2015} Let $\vec{P}$ be the category associated to a preordered set (proset) and let $\Gamma : \vec{P} \to \vec{P}$ be an endofunctor. We say that $\Gamma$ is a translation. Since $\Gamma$ is a functor, $\Gamma$ is monotone: $x\leq y \implies \Gamma(x)\leq \Gamma(y)$. We say that $\Gamma$ is increasing if $x \leq \Gamma(x)$ for all $x \in P$. 
Let $\textbf{Trans}_{\textbf{P}}$ denote the set of increasing translations of $\vec{P}$ and observe that $\textbf{Trans}_{\textbf{P}}$ is a monoid with respect to composition. \end{defn} It is straightforward to see that $\textbf{Trans}_{\textbf{P}}$ also has a natural proset structure with preorder $\Gamma \leq K \Leftrightarrow \Gamma(x) \leq K(x) \text{ for all } x$. This preorder is compatible with the monoid structure and $\Gamma \leq K$ implies there is a unique natural transformation $\eta_{K}^{\Gamma} : \Gamma \Rightarrow K$. If $\textbf{P}$ is a poset then so is $\textbf{Trans}_{\textbf{P}}$. \begin{defn}(Shift Functor) Let $F$ be an element of the functor category $\vec{C}^\vec{P}$ and $\Gamma$ a translation endofunctor. Let $F(\Gamma)$ denote $F\circ \Gamma \in \vec{C}^\vec{P}$; we call this functor the $\Gamma$-shift of $F$. \end{defn} For a persistence module $M$ we shall write $M(\vec{a})$ to denote the shift by the translation in $\mathbb{R}^n$, $\Gamma_{\vec{a}}(\vec{x})= \vec{x} + \vec{a}$. We define a graded set to be some subset $\mathcal{X} \subset J \times P$ where $J$ is an arbitrary indexing set and $P$ is a grading set. For an element of a graded set $(j,\vec{a})$, we shall refer to $\vec{a}$ as the grade of $j$, gr$(j) = \vec{a}$. \begin{defn}(Free Module) Let $\mathcal{X}$ be an $\mathbb{R}^n$-graded set; we define the free module on $\mathcal{X}$ to be: $$ \text{Free}[\mathcal{X}] = \bigoplus_{j\in J} P_n(-\text{gr}(j))$$ \end{defn} The notion of a free module on a graded set can equivalently be defined using a universal property characterisation. We say a subset $\mathcal{R} \subset M$ of a persistence module is homogeneous if $\mathcal{R} \subset \cup_{\vec{a}\in \mathbb{R}^n} M_\vec{a}$, that is to say, each element has a well-defined grade. \begin{defn}(Presentations) Let $\mathcal{X}$ be a graded set and $\mathcal{R}$ a homogeneous subset of the free module on $\mathcal{X}$ generating the submodule $\langle \mathcal{R} \rangle$. 
We say that a persistence module $M$ has presentation $\langle \mathcal{X} | \mathcal{R}\rangle$ if: $$ M \cong \frac{\text{Free}[\mathcal{X}]}{\langle \mathcal{R} \rangle}$$ We say that a presentation is finite if both $\mathcal{X}$ and $\mathcal{R}$ are finite. Let $I$ denote the ideal of $P_n$ generated by the elements $\{\vec{x}^\vec{a} \ | \ \vec{a} > 0 \}$ and let $\Phi_{\langle \mathcal{X} | \mathcal{R}\rangle} : \text{Free}[\mathcal{R}] \to \text{Free}[\mathcal{X}]$ be the map induced by the inclusion $\mathcal{R} \hookrightarrow \text{Free}[\mathcal{X}]$. We say that a presentation of $M$ is minimal if $\mathcal{R} \subset I \cdot \text{Free}[\mathcal{X}]$ and $\ker \Phi_{\langle \mathcal{X} | \mathcal{R}\rangle} \subset I \cdot \text{Free}[\mathcal{R}]$. \end{defn} \begin{defn}(Multiparameter Betti Numbers) Let $M$ be a persistence module, then the associated Betti numbers are maps $\xi_i(M) : \mathbb{R}^n \to \mathbb{N}$ defined by: $$ \xi_i(M)(\vec{a}) = \dim_{\mathbb{F}} (\text{Tor}^{P_n}_i(M,P_n / I P_n)_\vec{a})$$ Standard homological algebra arguments establish that the Betti numbers are well defined (see \cite{Lesnick2015} for details). If $\langle \mathcal{X} | \mathcal{R}\rangle$ is a minimal presentation for $M$, then $\xi_0(M)(\vec{a}) = |\text{gr}^{-1}_\mathcal{X}(\vec{a})|$ and $\xi_1(M)(\vec{a}) = |\text{gr}^{-1}_\mathcal{R}(\vec{a})|$. \end{defn} The multiparameter Betti numbers are related to the initial topological space with $\xi_0(M)$ marking the filtration values for the birth of homological features, and $\xi_1(M)$ marking the filtration values for relations between features. \subsection{Generalised Interleavings} We shall now adopt the notion of a generalised interleaving from \cite{Bubenik2015} and define an interleaving distance on multiparameter persistence modules. For a more detailed account of interleavings of multiparameter persistence modules see also \cite{Lesnick2012}. 
\begin{defn}(Interleaving) \cite{Bubenik2015} Let $F,G \in \vec{C}^\vec{P}$ be modules and $\Gamma, K \in \textbf{Trans}_{\textbf{P}}$. We say that $F,G$ are $(\Gamma, K)$-interleaved if there exist natural transformations $\varphi : F \Rightarrow G\Gamma$, $\psi: G\Rightarrow FK$ satisfying the coherence criteria that $(\psi \Gamma)\varphi = F \eta^{\id}_{K\Gamma} $, $(\varphi K)\psi = G \eta^{\id}_{\Gamma K}$ where $\eta^{\id}_{\alpha}$ denotes the unique natural transformation between the translations $\id \leq \alpha$. \end{defn} An interleaving may be thought of as an approximate isomorphism. Indeed if we take $\Gamma = K = \id$ then $F,G$ are $(\Gamma, K)$-interleaved if and only if $F,G$ are isomorphic. By warping the poset with translations $\Gamma,K$ we admit flexibility to the rigid notion of isomorphism. In order to introduce an associated distance we must assign a weight to the translations to quantify how close the interleaving is to an isomorphism. \begin{defn}(Sublinear Projections and $\varepsilon$-Interleavings) \cite{Bubenik2015} A sublinear projection is a function $ \omega : \textbf{Trans}_{\textbf{P}} \rightarrow [0,\infty]$ such that $\omega_{\id}=0$ and $\omega_{\Gamma K} \leq \omega_{\Gamma} + \omega_{K}$. We say that a translation $\Gamma$ is an $\varepsilon$-translation if $\omega_{\Gamma} \leq \varepsilon$. We say modules $F,G \in \textbf{C}^\textbf{P}$ are $\varepsilon$-interleaved with respect to $\omega$ if they are $(\Gamma,K)$-interleaved for some pair of $\varepsilon$-translations. \end{defn} \begin{prop}(Induced Interleaving Distance) \cite{Bubenik2015} Given a sublinear projection $\omega$ and modules $F,G \in \textbf{C}^\textbf{P}$, we have an induced interleaving distance given by: $$ d^{\omega}(F,G) = \inf \{ \varepsilon \geq 0 : F,G \text{ are $\varepsilon$-interleaved with respect to $\omega$} \}$$ \end{prop} A common sublinear projection to consider is given by $\omega_{\Gamma} = \| \Gamma - \id \|_\infty$. 
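For single parameter interval modules and this choice of sublinear projection, the interleaving distance admits a well-known closed form: two intervals are $\varepsilon$-interleaved if either both endpoints can be shifted onto one another within $\varepsilon$, or both modules vanish within $\varepsilon$ (an interleaving through the zero module). A minimal Python sketch, assuming intervals are represented as endpoint pairs $(b,d)$:

```python
def interval_interleaving_distance(I, J):
    """Closed-form interleaving distance between single parameter interval
    modules 1^I and 1^J, for the sublinear projection w_Gamma = ||Gamma - id||_inf.
    Either translate each endpoint by eps, or interleave both modules through 0."""
    (b1, d1), (b2, d2) = I, J
    shift = max(abs(b1 - b2), abs(d1 - d2))   # cost of matching the endpoints
    die = max((d1 - b1) / 2, (d2 - b2) / 2)   # cost of killing both intervals
    return min(shift, die)
```

For example, the modules $\mathds{1}^{(0,10)}$ and $\mathds{1}^{(1,11)}$ are at interleaving distance $1$, while two short, far-apart intervals are at distance equal to the larger half-length.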
We may sometimes refer to the interleaving distance induced by this sublinear projection as simply \textit{the interleaving distance}. We will specify the sublinear projection when we wish to consider an alternative interleaving distance. The dual notion to a sublinear projection is a \textit{superlinear family} from which one can also derive an interleaving distance. \begin{defn}(Superlinear Family) \cite{Bubenik2015} Let $\Omega : [ 0,\infty ) \to \vec{Trans}_{\vec{P}}$ be a superlinear function: $\Omega_{\varepsilon_1+ \varepsilon_2} \geq \Omega_{\varepsilon_1}\Omega_{\varepsilon_2}$. Then we say that $\Omega$ is a superlinear family. \end{defn} \begin{prop}(Induced Interleaving Distance) \cite{Bubenik2015} Given a superlinear family $\Omega$ and modules $F,G \in \textbf{C}^\textbf{P}$, we have an induced interleaving distance given by: $$ d^{\Omega}(F,G) = \inf \{ \varepsilon \geq 0 : F,G \text{ are $\Omega_\varepsilon$-interleaved} \}$$ \end{prop} The interleaving distance has been shown to be NP-hard to compute \cite{Botnan17}. Nevertheless, the interleaving distance is a very natural distance to consider on persistence modules. The work of \cite{Lesnick2012} establishes that the interleaving distance is universal amongst stable distances on persistence modules; that is to say, any other stable distance is bounded above by the interleaving distance. This property provides a strong justification for considering the interleaving distance on multiparameter modules. \subsection{Discretization and Continuous Extension} In Section \ref{Computations} our computations will be simplified by restricting a continuous module to a finite grid and then dealing with the continuous extension of this discretization. We will show that restricting to a finite grid gives a suitable approximation to our module with respect to the interleaving distance. 
\begin{defn}(Grid Function) Let $\mathcal{G}: \mathbb{Z}^n \to \mathbb{R}^n$ be defined by component-wise increasing functions $\mathcal{G}_i : \mathbb{Z} \to \mathbb{R}$ with $\sup \mathcal{G}_i = \sup -\mathcal{G}_i = \infty$. Then we say $\mathcal{G}$ is a grid function. Let us define the size of $\mathcal{G}$ to be $$|\mathcal{G}| = \max_{i\in[n]}\sup_{z \in \mathbb{Z}}|\mathcal{G}_i(z) - \mathcal{G}_i(z+1)|$$ \end{defn} \begin{defn}(Discretization) Let $M\in \vec{Vect}^{\vec{R}^n}$ be a persistence module and $\mathcal{G}$ a grid function. We say that $M\circ \mathcal{G} \in \vec{Vect}^{\vec{Z}^n}$ is the $\mathcal{G}$-discretization of $M$. \end{defn} \begin{defn}(Continuous Extension)\cite{Lesnick2015} Let $Q\in \vec{Vect}^{\vec{Z}^n}$ be a discrete persistence module and $\mathcal{G}$ a grid function. For $\vec{x}\in \mathbb{R}^n$ let us define floor and ceiling functions: $$\lfloor \vec{x} \rfloor_\mathcal{G} = \max\{\vec{z}\in \Ima \mathcal{G} \ |\ \vec{z} \leq \vec{x}\} \text{ and } \lceil \vec{x} \rceil_\mathcal{G} = \min\{\vec{z}\in \Ima \mathcal{G} \ |\ \vec{z} \geq \vec{x}\}$$ We define the continuous extension $E_{\mathcal{G}}(Q)\in\vec{Vect}^{\vec{R}^n}$ to be the persistence module with: $$E_{\mathcal{G}}(Q)_{\vec{a}} = Q_{\mathcal{G}^{-1}(\lfloor \vec{a} \rfloor_\mathcal{G})} \text{ and } E_{\mathcal{G}}(Q)(\vec{a} \leq \vec{b}) = Q(\mathcal{G}^{-1}(\lfloor \vec{a} \rfloor_\mathcal{G}) \leq \mathcal{G}^{-1}(\lfloor \vec{b} \rfloor_\mathcal{G}))$$ With the obvious action on the morphisms of $\vec{Vect}^{\vec{Z}^n}$, we have defined a functor $ E_{\mathcal{G}}: \vec{Vect}^{\vec{Z}^n} \to \vec{Vect}^{\vec{R}^n}$. \end{defn} The following proposition shows that discretization is stable with respect to the interleaving distance, and so we may produce an arbitrarily close approximation to a persistence module by restricting the module to a grid of sufficiently small size. 
\begin{prop} Let $M \in \vec{Vect}^{\vec{R}^n}$ be a persistence module, $\mathcal{G}$ a grid function and $\omega$ the sublinear projection given by $\omega_{\Gamma} = \| \Gamma - \id \|_\infty$, then we have that: $$ d^{\omega}(M, E_{\mathcal{G}}(M \circ \mathcal{G})) \leq | \mathcal{G} | $$ \end{prop} \begin{proof} The modules $M, E_{\mathcal{G}}(M \circ \mathcal{G})$ are $(\lceil \cdot \rceil_{\mathcal{G}} ,\id)$-interleaved with natural transformations given by the appropriate internal morphisms of $M$. \end{proof} \subsection{Interval Decomposable Modules} Given the complicated nature of unconstrained persistence modules, it is common to consider subclasses of multiparameter persistence modules. \begin{defn}(Interval Decomposable Modules) Let $P$ be a poset. We define a subposet $I\leq P$ to be an interval if $s,t\in I, s\leq r\leq t \implies r\in I$ and for any $s,t\in I \ \exists \ r_i \in I$ connecting $s$ and $t$, $s =r_0 \leq r_1 \geq r_2 \leq r_3 \geq ... \leq r_n = t$. The interval module $\mathds{1}^{\vec{I}} \in \vec{vect}^{\vec{P}}$ associated to an interval $I$ has a one-dimensional vector space at each $a\in I$ and internal isomorphisms given by the identity wherever possible. We say a module $M \in \vec{vect}^{\vec{P}}$ is interval decomposable if $M \cong \bigoplus_{j \in \mathcal{J}} \mathds{1}^{\vec{I}_j}$, for some multiset of intervals $\{\{I_j\}\}$. \end{defn} It is straightforward to verify that the endomorphism ring of an interval summand is given by the field over which we are working. In particular, this ring is local, and so the Krull-Schmidt-Remak-Azumaya Theorem guarantees that the decomposition of an interval decomposable module is unique up to reordering. We can thus assign the multiset of intervals in the decomposition of a module $M$ to be the barcode $\mathcal{B}(M) = \{\{I_j\}\}$. Even restricting our attention to interval decomposable modules, we can see the complicated nature of interleavings of multiparameter modules. 
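The interval condition above is algorithmically checkable for finite subsets of the grid poset $\mathbb{Z}^2$: convexity can be tested against every grid point of the bounding box, and zigzag connectivity is precisely connectivity of the comparability graph on $I$. A minimal Python sketch (the point-set representation is an illustrative assumption, not part of the formal development):

```python
from itertools import product
from collections import deque

def leq(s, t):
    """Coordinate-wise partial order on the grid poset."""
    return all(si <= ti for si, ti in zip(s, t))

def is_interval(points):
    """Check the interval condition for a finite subset of the Z^2 grid poset:
    convexity (s <= r <= t with s, t in I forces r in I) plus zigzag connectivity."""
    I = set(points)
    if not I:
        return False
    # Convexity: any grid point sandwiched between two members must be a member.
    xs = [p[0] for p in I]
    ys = [p[1] for p in I]
    for r in product(range(min(xs), max(xs) + 1), range(min(ys), max(ys) + 1)):
        if r not in I and any(leq(s, r) for s in I) and any(leq(r, t) for t in I):
            return False
    # Zigzag connectivity: the comparability graph on I must be connected.
    start = next(iter(I))
    seen, todo = {start}, deque([start])
    while todo:
        s = todo.popleft()
        for t in I - seen:
            if leq(s, t) or leq(t, s):
                seen.add(t)
                todo.append(t)
    return seen == I
```

For instance, a filled rectangle of grid points is an interval, while two incomparable points fail the zigzag condition and two comparable points with a gap between them fail convexity.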
The class of interval decomposable modules admits a bottleneck distance. \begin{defn}($\varepsilon$-Matching) Let $\{I_j\ \ | \ j \in \mathcal{J}\}$ and $\{J_k\ \ | \ k \in \mathcal{K}\}$ be multisets of intervals. We say a partial bijection $\sigma : \mathcal{J} \nrightarrow \mathcal{K}$ is an $\varepsilon$-matching if $d^{\omega}(\mathds{1}^{I_j},\mathds{1}^{J_{\sigma(j)}}) \leq \varepsilon$ for matched intervals and $d^{\omega}(\mathds{1}^{I_j},0),d^{\omega}(0,\mathds{1}^{J_{k}}) \leq \varepsilon$ for unmatched intervals. \end{defn} \begin{defn}(Bottleneck Distance) Let $M$ and $N$ be interval decomposable modules. The bottleneck distance between the modules is given by: $$ d_B(M,N) = \inf \{\varepsilon \geq 0 \ |\ \mathcal{B}(M),\mathcal{B}(N) \text{ admit an } \varepsilon \text{-matching} \}$$ \end{defn} One would hope to attain a result analogous to the isometry theorem for ordinary one-dimensional persistent homology relating the bottleneck distance and the interleaving distance. In the single parameter case, an $\varepsilon$-interleaving induces an $\varepsilon$-matching between summands \cite{bauer_induced_2014}. In contrast, the interleaving distance and bottleneck distance do not coincide for multiparameter interval decomposable modules. Certainly the bottleneck distance provides an upper bound on the interleaving distance. However, general interleavings of interval decomposable multiparameter modules do not necessarily induce a matching of interval summands. This is best illustrated by the example provided in \cite{Bjerkevik2016} for which the optimal matching between 1-interleaved modules is a 3-matching; see Figure \ref{BottleneckVsInterleaving}. We can further define a Wasserstein distance for interval decomposable modules. \begin{defn}($p$-Wasserstein Distance) Let $M,N$ be interval decomposable persistence modules with barcodes $\{I_j\ \ | \ j \in \mathcal{J}\}$ and $\{J_\kappa \ \ | \ \kappa \in \mathcal{K}\}$ respectively. 
Assume the cardinalities of these barcodes coincide, appending a collection of empty intervals to each barcode if necessary. For a matching $\sigma : \mathcal{J} \to \mathcal{K}$, let $\varepsilon_j = d^{\omega}(\mathds{1}^{I_j},\mathds{1}^{J_{\sigma(j)}})$. The $p$-Wasserstein distance is given by: $$d_{W_p}(M,N) = \inf_{\sigma : \mathcal{J} \to \mathcal{K} } \left[\sum_\mathcal{J} \varepsilon_j^p \right]^{\frac{1}{p}}$$ \end{defn} The bottleneck distance is simply the $\infty$-Wasserstein distance. If we wish to place extra emphasis on intervals with large persistence we may use the persistence weighted $p$-Wasserstein distance. \begin{defn}(Persistence Weighted $p$-Wasserstein Distance) Let $M,N$ be interval decomposable persistence modules with barcodes $\{I_j\ \ | \ j \in \mathcal{J}\}$ and $\{J_\kappa \ \ | \ \kappa \in \mathcal{K}\}$. For a subset $A\subset \mathbb{R}^n$ let $|A|$ denote the Euclidean volume. The persistence weighted $p$-Wasserstein distance is given by: $$d_{\overline{W}_p}(M,N) = \inf_{\sigma : \mathcal{J} \to \mathcal{K} } \left[\sum_\mathcal{J} |I_j \cup J_{\sigma(j)}|\varepsilon_j^p \right]^{\frac{1}{p}}$$ \end{defn} The landscape distance we introduce in the following section is similar to the persistence weighted $p$-Wasserstein distance, and can be defined for persistence modules which do not admit an interval decomposition. In Section \ref{Stability} we will show that our invariant is stable with respect to the interleaving distance and the persistence weighted Wasserstein distance. In particular, the distance function on multiparameter landscapes provides a lower bound on the interleaving distance. Given the NP-hardness of computing the interleaving distance, this lower bound may prove useful for comparing multiparameter persistence modules \cite{Botnan17}. 
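Specialised to single parameter interval modules, where the interleaving distance between intervals has a closed form, the $p$-Wasserstein distance of the definition above can be computed by brute force over matchings for small barcodes. A minimal Python sketch (exponential in the barcode size, illustrative only; appended empty intervals are represented as \texttt{None}):

```python
from itertools import permutations

def _eps(I, J):
    """Matching cost: interleaving distance between single parameter interval
    modules (closed form); None stands for an appended empty interval."""
    if I is None and J is None:
        return 0.0
    if I is None or J is None:
        b, d = I or J
        return (d - b) / 2  # cost of matching an interval to the empty interval
    (b1, d1), (b2, d2) = I, J
    return min(max(abs(b1 - b2), abs(d1 - d2)),
               max((d1 - b1) / 2, (d2 - b2) / 2))

def wasserstein(bc1, bc2, p=2.0):
    """Brute-force p-Wasserstein distance between two small barcodes,
    padding the shorter barcode with empty intervals as in the definition."""
    n = max(len(bc1), len(bc2))
    A = list(bc1) + [None] * (n - len(bc1))
    B = list(bc2) + [None] * (n - len(bc2))
    return min(sum(_eps(a, b) ** p for a, b in zip(A, perm)) ** (1 / p)
               for perm in permutations(B))
```

Taking $p \to \infty$ (replacing the $p$-sum by a maximum) recovers the bottleneck distance, as noted above.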
\begin{figure} \centering \begin{subfigure}[b]{0.3\linewidth} \begin{tikzpicture}[scale = 0.7] \filldraw[fill=green, fill opacity = 0.25] (0,0.5) rectangle (5,5.5); \filldraw[fill=green, fill opacity = 0.25] (0,-.5) rectangle (6,5.5); \filldraw[fill=green, fill opacity = 0.25] (1,.5) rectangle (5,4.5); \node [anchor=south] (I) at (3,-1.5) {$M$}; \end{tikzpicture} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \begin{tikzpicture}[scale = 0.7] \filldraw[fill=red, fill opacity = 0.25] (0.5,0) rectangle (5.5,5); \filldraw[fill=red, fill opacity = 0.25] (0.5,0) rectangle (4.5,6); \filldraw[fill=red, fill opacity = 0.25] (-.5,1) rectangle (5.5,5); \node [anchor=south] (I) at (3,-1.5) {$N$}; \end{tikzpicture} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \begin{tikzpicture}[scale = 0.70] \filldraw[fill=green, fill opacity = 0.25] (0,0.5) rectangle (5,5.5); \filldraw[fill=green, fill opacity = 0.25] (0,-.5) rectangle (6,5.5); \filldraw[fill=green, fill opacity = 0.25] (1,.5) rectangle (5,4.5); \filldraw[fill=red, fill opacity = 0.25] (0.5,0) rectangle (5.5,5); \filldraw[fill=red, fill opacity = 0.25] (0.5,0) rectangle (4.5,6); \filldraw[fill=red, fill opacity = 0.25] (-.5,1) rectangle (5.5,5); \node [anchor=south] (I) at (3,-1.5) {$M$ and $N$ overlayed}; \end{tikzpicture} \end{subfigure} \caption{We illustrate rectangular decomposable modules $M =(0,10)\times (1,11) \oplus (0,12)\times(-1,11)\oplus (2,10)\times (1,9)$ and $N =(1,11)\times (0,10) \oplus (1,9)\times(0,12)\oplus (-1,11)\times (2,10)$. There is a $1$-interleaving between $M,N$ but there is no $1$-matching. The optimal matching is a $3$-matching. See \cite{Bjerkevik2016} for details.} \label{BottleneckVsInterleaving} \end{figure} \section{Persistence Landscapes} \label{Landscapes} In this section we shall recall the definition of the single parameter persistence landscape and its properties. 
We shall generalise the definition to multiparameter persistence modules and show which properties of the single parameter persistence landscape are preserved. From this point onward, all single parameter persistence modules we consider shall be pointwise finite dimensional and satisfy the finiteness conditions of \cite{Crawley-Boevey15} in order that they admit an interval decomposition. The multiparameter persistence modules we consider will be pointwise finite dimensional and tame, but will not necessarily admit an interval decomposition. \subsection{Single Parameter Persistence Landscapes} The \textit{persistence landscape} associated to a single parameter persistence module is defined in \cite{Bubenik:2015}. The persistence landscape is derived from the rank invariant of a module. \begin{defn}(Rank Invariant) Let $M \in \textbf{vect}^{\textbf{R}}$ be a persistence module; then for $a\leq b$ the function $\beta^{\cdot,\cdot}$ giving the corresponding Betti number is the rank invariant of $M$: $$ \beta^{a,b} = \dim(\Ima(M(a\leq b)))$$ \end{defn} \begin{defn}(Rank Function) The rank function $\rk : \mathbb{R}^2 \rightarrow \mathbb{R}$ is given by $$\rk(b,d)= \begin{cases} \beta^{b,d} & \text{if } b \leq d \\ 0 & \text{otherwise} \end{cases} $$ \end{defn} \begin{defn}(Rescaled Rank Function) The rescaled rank function $r : \mathbb{R}^2 \rightarrow \mathbb{R}$ is supported on the upper half plane: $$r(m,h)= \begin{cases} \beta^{m-h,m+h} & \text{if } h \geq 0\\ 0 & \text{otherwise} \end{cases} $$ \end{defn} Observe that the rank function has support contained in the upper triangular half of the plane with the coordinates corresponding to ``births'' and ``deaths'', whilst the rescaled rank function has support contained in the upper half plane with coordinates corresponding to ``midpoints'' and ``half-lives''. 
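For an interval decomposable single parameter module, the rank invariant simply counts the interval summands surviving from $a$ to $b$, and the rescaled rank function is its change of coordinates to midpoints and half-lives. A minimal Python sketch, assuming the barcode is given as a list of closed intervals represented by endpoint pairs:

```python
def rank_invariant(barcode, a, b):
    """beta^{a,b} for an interval decomposable single parameter module:
    the number of barcode intervals [bj, dj] containing both a and b."""
    assert a <= b
    return sum(1 for (bj, dj) in barcode if bj <= a and b <= dj)

def rescaled_rank(barcode, m, h):
    """r(m, h) = beta^{m-h, m+h} on the upper half plane, 0 below it."""
    return rank_invariant(barcode, m - h, m + h) if h >= 0 else 0
```

For example, with barcode $\{(0,4),(1,3)\}$ both intervals survive from $1$ to $3$, so $\beta^{1,3}=2$, equivalently $r(2,1)=2$ in midpoint/half-life coordinates.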
\begin{defn}(Persistence Landscape) The persistence landscape is a function $\lambda : \mathbb{N} \times \mathbb{R} \rightarrow \overline{\mathbb{R}}$, where $\overline{\mathbb{R}}$ denotes the extended real numbers, $[-\infty,\infty]$. Alternatively, it may be thought of as a sequence of functions $\lambda_{k} : \mathbb{R} \rightarrow \overline{\mathbb{R}}$, where $\lambda_{k}(t) = \lambda(k,t)$. Define $$ \lambda(k,t) = \sup\{ h \geq 0 : \beta^{t-h,t+h} \geq k\}$$ \end{defn} The value $\lambda(k,t)$ gives the maximal radius of an interval centred at $t$ that is contained in at least $k$ intervals of the barcode. The persistence landscape and persistence diagram of a suitably well-behaved single parameter module carry the same information. Alternatively, the persistence landscape of a single parameter module can be derived from the landscape functions of the module's interval summands; see Figure \ref{fig:Persistence-Landscape-Example}. \begin{defn}(Persistence Landscape) Let $M$ be a single parameter persistence module with associated persistence diagram given by the multiset $\{\{(b_j,d_j)\}\}$. The persistence landscape may be equivalently defined as: $$ \lambda_M(k,t) = \text{kmax}\{ \lambda_{(b_j,d_j)}(1,t) \} $$ where $\text{kmax}$ denotes the $k^{\text{th}}$ largest value of the multiset and $\lambda_{(b_j,d_j)}$ is the landscape associated to the interval module $\mathds{1}^{(b_j,d_j)}$. \end{defn} We shall later see that our chosen multiparameter generalisation of the persistence landscape also admits a decomposition as the $\text{kmax}$ of a series of simple landscape functions when our module is interval decomposable. 
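The $\text{kmax}$ characterisation gives a direct way to evaluate the persistence landscape of an interval decomposable module: each interval $(b_j,d_j)$ contributes the tent function $t \mapsto \max(0,\min(t-b_j,\,d_j-t))$, and $\lambda(k,t)$ is the $k^{\text{th}}$ largest tent value at $t$. A minimal Python sketch (not the implementation used for the computations of Section \ref{Computations}):

```python
def tent(b, d, t):
    """Landscape of the interval module 1^{(b,d)}: the maximal radius h
    such that (t - h, t + h) is contained in (b, d)."""
    return max(0.0, min(t - b, d - t))

def landscape(barcode, k, t):
    """lambda(k, t) = k-th largest tent value over the barcode
    (0 when fewer than k intervals cover t)."""
    vals = sorted((tent(b, d, t) for b, d in barcode), reverse=True)
    return vals[k - 1] if k <= len(vals) else 0.0
```

With barcode $\{(0,4),(1,3)\}$, for instance, $\lambda(1,2)=2$ and $\lambda(2,2)=1$, in agreement with the picture of Figure \ref{fig:Persistence-Landscape-Example}-style kmax stacking.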
\begin{figure} \begin{center} \begin{tikzpicture} \draw (0,0) -- (2.5,0); \draw (0,0) -- (0,2.5); \draw (0,0) -- (2.5,2.5); \node at (0.5,1.5) [draw,circle,fill = black, inner sep = 0.3mm]{}; \node at (.75,1) [draw,circle,fill = black, inner sep = 0.3mm]{}; \node at (1,1.75) [draw,circle,fill = black, inner sep = 0.3mm]{}; \node at (1.5,2) [draw,circle,fill = black, inner sep = 0.3mm]{}; \node at (1.25,2.5) [draw,circle,fill = black, inner sep = 0.3mm]{}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture} \draw (0,0) -- (6,0); \draw (0,0) -- (0,2.5); \draw (1,0) -- (2,1) -- (3,0); \draw (1.5,0) -- (1.75,0.25) -- (2,0); \draw (2,0) -- (2.75,0.75) -- (3.5,0); \draw (3,0) -- (3.5,0.5) -- (4,0); \draw (2.5,0) -- (3.75,1.25) -- (5,0); \node at (2,1) [draw,circle,fill = black, inner sep = 0.3mm]{}; \node at (1.75,0.25) [draw,circle,fill = black, inner sep = 0.3mm]{}; \node at (2.75,0.75) [draw,circle,fill = black, inner sep = 0.3mm]{}; \node at (3.5,0.55) [draw,circle,fill = black, inner sep = 0.3mm]{}; \node at (3.75,1.25) [draw,circle,fill = black, inner sep = 0.3mm]{}; \node[align=right] at (5,2) {\textcolor{red}{$\lambda_M(1,x)$}}; \node[align=right] at (5,1.5) {\textcolor{green}{$\lambda_M(2,x)$}}; \node[align=right] at (5,1) {\textcolor{blue}{$\lambda_M(3,x)$}}; \draw[red] (0,0) -- (1,0) -- (2,1) -- (2.5,0.5) -- (2.75,0.75) -- (3,0.5) -- (3.75,1.25) -- (5,0) -- (6,0); \draw[green] (0,0) -- (1.5,0) -- (1.75,0.25) -- (2,0) -- (2.5,0.5) -- (2.75,0.25) -- (3,0.5) -- (3.25,0.25) -- (3.5,0.5) -- (4,0) -- (6,0); \draw[blue] (0,0) -- (2.5,0) -- (2.75,0.25) -- (3,0) -- (3.25,0.25) -- (3.5,0) -- (6,0); \end{tikzpicture} \end{center} \caption{We show the persistence diagram of a single parameter module $M$ on the left and the associated persistence landscapes on the right. 
One can see that $\lambda_M(k,t)$ is the $\text{kmax}$ of the landscape functions $\lambda_{(b_j,d_j)}(1,t)$ of the interval summands of $M$.} \label{fig:Persistence-Landscape-Example} \end{figure} \begin{lem} The persistence landscape has the following properties: \begin{enumerate} \item $\lambda_{k}(t) \geq 0$ \item $\lambda_{k}(t) \geq \lambda_{k+1}(t)$ \item $\lambda_{k}(t) \text{ is 1-Lipschitz}$ \end{enumerate} The first two properties are immediate from the definition and the third property is proved in \cite{Bubenik:2015}. \end{lem} \subsection{Multiparameter Persistence Landscapes} Let us define the multiparameter persistence landscape in analogy with the single parameter case. The rank invariant, rank function and rescaled rank function defined above generalise naturally to multiparameter persistence modules: \begin{defn}(Rank Invariant) Let $M$ be an $\mathbb{R}^n$ persistence module; then for $\vec{a}\leq \vec{b}$ the function $\beta^{\cdot,\cdot}$ giving the corresponding Betti number is the rank invariant of $M$: $$ \beta^{\vec{a},\vec{b}} = \dim(\Ima(M(\vec{a}\leq \vec{b})))$$ \end{defn} \begin{defn}(Multiparameter Rank Function) The rank function $\rk : \mathbb{R}^{2n} \rightarrow \mathbb{R}$ is given by $$\rk(\vec{b},\vec{d})= \begin{cases} \beta^{\vec{b},\vec{d}} & \text{if } \vec{b} \leq \vec{d}\\ 0 & \text{otherwise} \end{cases} $$ \end{defn} \begin{defn}(Rescaled Multiparameter Rank Function) The rescaled rank function $r : \mathbb{R}^{2n} \rightarrow \mathbb{R}$ is given by $$r(\vec{m},\vec{h})= \begin{cases} \beta^{\vec{m}-\vec{h},\vec{m}+\vec{h}} & \text{if } \vec{h} \geq \vec{0}\\ 0 & \text{otherwise} \end{cases} $$ \end{defn} One could perform statistical analysis directly on the rank function and rescaled rank function. However, endowed with their natural vector space norms, these functions are not stable with respect to the interleaving distance. 
\begin{ex}(Rank Function Instability) For any $\varepsilon > 0$ and $N\in\mathbb{N}$, the multiparameter persistence module $M = \langle (a_i,\vec{0}) \textrm{ for } i = 1,...,N \ |\ \vec{x}^\vec{\varepsilon}\cdot a_i \textrm{ for } i = 1,...,N \rangle$ is such that $\| \rk_M\|_\infty = N$ and $d^\omega(M,0) = \varepsilon$. \end{ex} We wish to define a stable invariant, and so we derive a landscape function from the rank invariant. There are several choices to make in defining the persistence landscape for the multiparameter setting. We outline three natural generalisations of the single parameter landscape: \begin{defn}(Cartesian Product $p$-Landscape) This choice corresponds to taking the ordinary Persistence Landscape function in each coordinate and then applying the $p$-norm, $\lambda_{p} : \mathbb{N} \times \mathbb{R}^n \rightarrow \overline{\mathbb{R}}$. $$ \lambda_{p}(k,\vec{x}) = \|(\sup\{ h_i \geq 0 : \beta^{\vec{x}-h_i\vec{e}_i,\vec{x}+h_i\vec{e}_i} \geq k\})_i \|_{p}$$ \end{defn} \begin{defn}(Maximal Persistence $p$-Landscape) This landscape searches for the line through $\vec{x}$ in the parameter space over which $k$ features persist for the longest interval about $\vec{x}$ along this line, and evaluates to half the length of this interval; $\lambda_{p} : \mathbb{N} \times \mathbb{R}^n \rightarrow \overline{\mathbb{R}}$. $$ \lambda_{p}(k,\vec{x}) = \sup\{ \|\vec{h} \|_{p} \geq 0 : \beta^{\vec{x}-\vec{h},\vec{x}+\vec{h}} \geq k\}$$ \end{defn} \begin{defn}(Uniform Persistence $p$-Landscape) This landscape considers the maximal length over which $k$ features persist in every (positive) direction through $\vec{x}$ in the parameter space; $\lambda_{p} : \mathbb{N} \times \mathbb{R}^n \rightarrow \overline{\mathbb{R}}$. 
$$ \lambda_{p}(k,\vec{x}) = \sup\{ \varepsilon \geq 0 : \beta^{\vec{x}-\vec{h},\vec{x}+\vec{h}} \geq k \text{ for all } \vec{h} \geq \vec{0} \text{ with } \| \vec{h} \|_{p} \leq \varepsilon\}$$ \end{defn} It is worth noting that when restricted to the single parameter case all three of these definitions coincide with the single parameter Persistence Landscape \cite{Bubenik:2015}. However, the Cartesian Product Landscape and Maximal Persistence Landscape are not necessarily continuous (Figure \ref{Discontinuity}). In contrast we can show that the Uniform Persistence $p$-Landscape is 1-Lipschitz. \begin{figure} \begin{center} \begin{tikzpicture} \draw (0,0) -- (5,0); \draw (0,0) -- (0,5); \filldraw[fill=green, fill opacity = 0.2, draw=white] (1,1) rectangle (5,5); \node at (0.9,2) [draw,circle,fill = black, inner sep = 0.3mm, label=left:$\vec{a}$]{}; \node at (1.1,2) [draw,circle,fill = black, inner sep = 0.3mm, label=right:$\vec{a + \varepsilon}$]{}; \draw[<->, draw=red] (1.1,2) -- (1.1,1) node[midway,right]{$h_2$}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture} \draw (0,0) -- (5,0); \draw (0,0) -- (0,5); \filldraw[fill=green, fill opacity = 0.2,draw opacity = 0 ] (1,2) rectangle (5,5); \filldraw[fill=green, fill opacity = 0.2,draw opacity = 0] (2,1) rectangle (5,5); \filldraw[fill = red, fill opacity = 0.3, draw opacity = 0](3,2.5) -- (2.5,2.5) arc (180:270:0.5cm) -- cycle ; \filldraw[fill = red, fill opacity = 0.3, draw opacity = 0](3,2.5) -- (3.5,2.5) arc (0:90:0.5cm) -- cycle ; \node at (3,2.5) [draw,circle,fill = black, inner sep = 0.3mm, label=right:$\vec{a}$]{}; \filldraw[fill = red, fill opacity = 0.3, draw opacity = 0](2.5,4.5) -- (1,4.5) -- (2.5,3) -- cycle ; \filldraw[fill = red, fill opacity = 0.3, draw opacity = 0](2.5,4.5) -- (4,4.5) -- (3.5,5) -- (2.5,5) -- cycle ; \node at (2.5,4.5) [draw,circle,fill = black, inner sep = 0.3mm, label=right:$\vec{b}$]{}; \filldraw[fill = red, fill opacity = 0.3, draw opacity = 0](4,1.75) rectangle (3.25,1); 
\filldraw[fill = red, fill opacity = 0.3, draw opacity = 0](4,1.75) rectangle (4.75,2.5); \node at (4,1.75) [draw,circle,fill = black, inner sep = 0.3mm, label=right:$\vec{c}$]{}; \end{tikzpicture} \end{center} \caption{Both diagrams are cartoons of 2-parameter free persistence modules, where the intensity of shading at each point is proportional to the dimension of the persistence module at that choice of parameter values. The first diagram represents a module with one free generator. This simple module exhibits the discontinuity of the Cartesian Product and Maximal Persistence Landscapes. The second diagram illustrates the \textit{half-balls} over which persistence of $k$ homological features is measured by the Uniform Persistence Landscape for different choices of $k$ and $p$-norm. At $\vec{a}$ we sketch the maximal half-ball for the 2-norm over which 2 features persist. At $\vec{b}$ we sketch the maximal half-ball for the 1-norm over which 1 feature persists. At $\vec{c}$ we sketch the maximal half-ball for the $\infty$-norm over which 1 feature persists. } \label{Discontinuity} \end{figure} \begin{prop} The Uniform Persistence $p$-Landscape is 1-Lipschitz for all $p \in [1,\infty]$ \end{prop} \begin{proof} Let $\vec{x},\vec{y} \in \mathbb{R}^n$ and let $k \in \mathbb{N}$. Without loss of generality assume that $\lambda_p(k,\vec{x}) \geq \lambda_p(k,\vec{y}) $ and that also $r = \lambda_p(k,\vec{x}) \geq \|\vec{x} - \vec{y} \|_p = \delta $. We seek to show that $\lambda_p(k,\vec{y}) \geq \lambda_p(k,\vec{x}) - \|\vec{x} - \vec{y} \|_p$. For any $\vec{\varepsilon} \geq \vec{0}$ such that $\|\vec{\varepsilon}\|_p \leq r - \delta $ let us define $\vec{h} = (|x_i - y_i| + \varepsilon_i)_i$. 
Then we observe that $\|\vec{h}\|_p \leq \|\vec{x} - \vec{y}\|_p + \|\vec{\varepsilon}\|_p \leq r$ and in particular that: $$ \vec{x} - \vec{h} \leq \vec{y} - \vec{\varepsilon} \leq \vec{y} + \vec{\varepsilon} \leq \vec{x} + \vec{h}$$ Since $\|\vec{h}\|_p \leq r$ the map $M(\vec{x} - \vec{h} \leq \vec{x} + \vec{h})$ has rank at least $k$, and as it factors through $M( \vec{y} - \vec{\varepsilon} \leq \vec{y} + \vec{\varepsilon})$ the latter map also has rank at least $k$. Since $\vec{\varepsilon}$ was arbitrary we see that $\lambda_p(k,\vec{y}) \geq r - \delta$. \end{proof} It is clear that the discriminating power of the proposed multiparameter landscapes will be restricted by the discriminating power of the rank invariant. It is well known that non-isomorphic modules may have the same rank invariant; see Figure \ref{Non-Isomorphic}. \begin{figure} \begin{center} \begin{tikzpicture} \draw (0,0) -- (5,0); \draw (0,0) -- (0,5); \filldraw[fill=green, fill opacity = 0.2,draw opacity = 0 ] (1,2) rectangle (5,5); \filldraw[fill=green, fill opacity = 0.2,draw opacity = 0] (2,1) rectangle (5,5); \node at (2,1) [draw,circle,fill = black, inner sep = 0.3mm, label=right:$\vec{a}$]{}; \node at (1,2) [draw,circle,fill = black, inner sep = 0.3mm, label=right:$\vec{b}$]{}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture} \draw (0,0) -- (5,0); \draw (0,0) -- (0,5); \filldraw[fill=green, fill opacity = 0.2,draw opacity = 0 ] (1,2) rectangle (5,5); \filldraw[fill=green, fill opacity = 0.2,draw opacity = 0] (2,1) rectangle (5,5); \node at (2,1) [draw,circle,fill = black, inner sep = 0.3mm, label=right:$\vec{a}$]{}; \node at (1,2) [draw,circle,fill = black, inner sep = 0.3mm, label=right:$\vec{b}$]{}; \node at (2,2) [draw,circle,fill = black, inner sep = 0.3mm, label=right:$\vec{c}$]{}; \end{tikzpicture} \end{center} \caption{The diagrams are cartoons of the $2$-parameter persistence modules $M,N$, thought of as elements of $\mathbb{R}[x_1,x_2]-\textbf{Mod}$ with presentations given by: $ M = \langle(a,(2,1)) (b,(1,2))|
\rangle$ and $ N = \langle(a,(2,1)) (b,(1,2)) (c,(2,2))| x_2\cdot {a} = x_1 \cdot {b} \rangle$. The rank invariants of these non-isomorphic modules coincide.} \label{Non-Isomorphic} \end{figure} We shall proceed with the Uniform Persistence $p$-Landscape as our chosen generalisation of the single parameter persistence landscape since these landscapes are stable with respect to interleaving and the Cartesian and Maximal Landscapes are not. The discontinuity and consequent lack of stability of the Cartesian and Maximal Landscapes is related to the results in \cite{Landi2014} regarding the stability of the fibered barcode space. \begin{Remark} By considering the inclusion of $n$-dimensional balls with respect to the $p$-norm and $p'$-norm, we observe that our uniform landscape functions for different choices of norm are related as follows: $$ c_n\lambda_{p}(k,\vec{x}) \leq \lambda_{p'}(k,\vec{x}) \leq \frac{1}{C_n}\lambda_{p}(k,\vec{x})$$ where $c_n,C_n$ are the largest constants satisfying the inclusions of the zero-centred balls $B_{p'}(c_n) \subseteq B_{p}(1)$ and $B_{p}(C_n) \subseteq B_{p'}(1)$. \label{remark:qrelated} \end{Remark} For purposes of computation it is most convenient to use the $\infty$-landscape. In order to calculate the value of the $\infty$-landscape $\lambda_\infty(k,\vec{x})$ at the point $\vec{x}=\vec{x}_0$ one only needs to calculate $\sup\{\varepsilon\geq 0 : \beta^{\vec{x}_0-\varepsilon\vec{1},\vec{x}_0 + \varepsilon\vec{1}} \geq k\}$. Thus we only need to compute a single barcode in the fibered barcode space to compute the landscape value at a point. \begin{prop} Let $M$ be a persistence module and let $L$ be a line of slope $\vec{1}$ through the parameter space $\mathbb{R}^n$. Let $\iota_L : (\mathbb{R},\|\cdot\|_\infty) \to (\mathbb{R}^n,\|\cdot\|_\infty) $ denote the isometric embedding with $\iota_L(\mathbb{R}) = L$ and $\iota_L(0) \in \{x_n =0\}$.
Then the restriction of the uniform $\infty$-landscape of $M$ to $L$ and the single parameter persistence landscape of the persistence module $M^L$ coincide, $\lambda_{M^L}(k,t) = \lambda_{M}(k, \iota_L(t))$. \end{prop} \begin{proof} Following the landscape definitions we see that if $\lambda_{M}(k, \iota_L(t)) > h$ then we have that $\beta_M^{\iota_L(t)-h\vec{1},\iota_L(t)+h\vec{1}} = \beta_{M^L}^{t-h,t+h} \geq k$ and so $\lambda_{M^L}(k,t) \geq h$; since this holds for every $h < \lambda_{M}(k, \iota_L(t))$ we conclude $\lambda_{M^L}(k,t) \geq \lambda_{M}(k, \iota_L(t))$. Conversely, if $\lambda_{M^L}(k,t) > h$ then $\beta_M^{\iota_L(t)-h\vec{1},\iota_L(t)+h\vec{1}} \geq k$, and since we are working with the infinity-norm, if $\vec{h} \geq \vec{0}$ with $\|\vec{h}\|_\infty \leq h$ then $\vec{h} \leq h\vec{1}$ and so $\beta_M^{\iota_L(t)-\vec{h},\iota_L(t)+\vec{h}} \geq k$; thus $\lambda_{M}(k, \iota_L(t)) \geq h$ and hence $\lambda_{M^L}(k,t) \leq \lambda_{M}(k, \iota_L(t))$. \end{proof} \begin{prop} The uniform persistence $\infty$-landscape of an interval decomposable module $M \cong \bigoplus_j \mathds{1}^{I_j}$ can be expressed as the pointwise maximum $\lambda_\infty(M)(k,\vec{x}) = \text{kmax}_j \ \lambda_\infty(I_j)(1,\vec{x})$. Note that this is only true for the uniform persistence $\infty$-landscape, and not for other choices of $p$. \end{prop} The following propositions demonstrate the information one loses in deriving the Uniform Persistence Landscape from the rank invariant. In the single parameter case the landscape captures almost all the information available from the rank invariant, whilst for the multiparameter landscape the rank invariant is only recoverable from the landscapes on pairs of points spanning hypercubes. \begin{prop} Let $M\in \textbf{vect}^{\textbf{R}}$ be a pointwise-finite-dimensional, finitely generated persistence module and let $\lambda : \mathbb{N} \times \mathbb{R} \to \mathbb{R}$ be the associated persistence landscape. Then $\lambda$ determines the rank invariant almost everywhere.
\end{prop} \begin{proof} Let $C = \text{supp}(\xi_0 )\cup \text{supp}(\xi_1)$, then for $(a,b)\in (\mathbb{R}\setminus C)^2$ we recover the rank invariant by observing $\beta^{a,b} = \max \{k\in \mathbb{N} \ | \ \lambda(k, \frac{a+b}{2}) \geq \frac{b-a}{2} \}$. Since $\beta^{a,b} \geq k \implies \lambda(k, \frac{a+b}{2}) \geq \frac{b-a}{2}$ and conversely, since $a,b \notin C$ and $M$ is finitely generated, we note that $\lambda(k, \frac{a+b}{2}) \geq \frac{b-a}{2} \implies \lambda(k, \frac{a+b}{2}) > \frac{b-a}{2}$ and thus $\beta^{a,b}\geq k$. \end{proof} \begin{prop} \label{Hypercube} Let $M\in \textbf{vect}^{\textbf{R}^n}$ be a pointwise-finite-dimensional, finitely presented persistence module and let $\lambda_\infty : \mathbb{N} \times \mathbb{R}^n \to \mathbb{R}$ be the associated uniform persistence landscape. Let $C \subset \mathbb{R}^n$ be the set of points sharing a coordinate with $\text{supp}(\xi_0) \cup \text{supp}(\xi_1)$: $$ C = \{\vec{x} \in \mathbb{R}^n \ |\ x_j = a_j \text{ for some } \vec{a} \in \text{supp}(\xi_0) \cup \text{supp}(\xi_1) \} $$ Then using $\lambda_\infty$ we can recover the rank invariant of $M$ on the set: $$L = \{(\vec{a},\vec{b}) \in (\mathbb{R}^n\setminus C)^2 \ | \ \vec{a}\leq \vec{b}, \ [\vec{a},\vec{b}] \text{ is a hypercube } \}$$ \end{prop} \begin{proof} The rank invariant on the set $L$ is derived from the landscape as follows: $$\beta^{\vec{a},\vec{b}} = \max \left\{ k \in \mathbb{N} \ \bigg| \ \lambda_\infty(k,\frac{\vec{a}+\vec{b}}{2}) \geq \left\| \frac{\vec{b}-\vec{a}}{2}\right\|_\infty \right\}$$ For if $\beta^{\vec{a},\vec{b}} \geq k$ then all morphisms starting and ending in the hypercube $[\vec{a},\vec{b}]$ have rank at least $k$. Thus for the centre of the hypercube $\vec{c} = \frac{\vec{a}+\vec{b}}{2}$ we see $\lambda_\infty(k,\vec{c}) \geq \|\frac{\vec{b}-\vec{a}}{2}\|_\infty$.
Conversely, if $\lambda_\infty(k,\vec{c}) \geq \|\frac{\vec{b}-\vec{a}}{2}\|_\infty$ then, since $\vec{a},\vec{b} \notin C$ and $M$ is finitely presented, in fact $\lambda_\infty(k,\vec{c}) > \|\frac{\vec{b}-\vec{a}}{2}\|_\infty$, and so $\beta^{\vec{a},\vec{b}} \geq k$. \end{proof} The following example illustrates two modules with distinct rank invariants which are not distinguished by their uniform persistence landscapes. \begin{ex} Denote by $[\vec{a},\vec{b}]$ the rectangular module with opposite vertices $\vec{a},\vec{b}$. Let $M$, $N$ be rectangular decomposable $2$-parameter persistence modules, with $M =[(0,1), (10,2)] \oplus [(4,1), (6,2)]$ and $N =[(0,1), (6,2)] \oplus [(4,1), (10,2)]$. These two modules have different rank invariants but the same Uniform Persistence Landscapes, as illustrated in Figure \ref{LandscapeVsRankInvariant}. \end{ex} \begin{figure} \centering \begin{tikzpicture}[scale = 1.2] \draw (0,0) -- (6,0); \draw (0,0) -- (0,2); \filldraw[fill=green, fill opacity = 0.25] (0,.5) rectangle (5,1); \filldraw[fill=red, fill opacity = 0.25] (2,.5) rectangle (3,1); \node [anchor=south] (I) at (3,-1) {\footnotesize{$M =[(0,1), (10,2)] \oplus [(4,1), (6,2)]$}}; \end{tikzpicture} \begin{tikzpicture}[scale = 1.2] \draw (0,0) -- (6,0); \draw (0,0) -- (0,2); \filldraw[fill=green, fill opacity = 0.25] (0,.5) rectangle (3,1); \filldraw[fill=red, fill opacity = 0.25] (2,.5) rectangle (5,1); \node [anchor=south] (I) at (3,-1) {\footnotesize{$N =[(0,1), (6,2)] \oplus [(4,1), (10,2)]$}}; \end{tikzpicture} \caption{We illustrate modules that have different rank invariants but the same Uniform Persistence Landscapes. (The first summand of each module is shaded green and the second summand red.)} \label{LandscapeVsRankInvariant} \end{figure} One would hope that a multiparameter module invariant could distinguish the modules in the previous example.
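Indeed, the coincidence of the uniform landscapes can be checked numerically. The sketch below (an illustration only, not the paper's implementation) uses the kmax decomposition for rectangle decomposable modules, where a summand $[\vec{a},\vec{b}]$ contributes $\max(0,\min_i \min(x_i - a_i,\, b_i - x_i))$ at $\vec{x}$:

```python
# Sketch: uniform infinity-landscape of a rectangle decomposable 2-parameter
# module, via the kmax decomposition over its rectangle summands.

def rect_landscape(rect, x):
    """lambda_infty(1, x) of a single rectangle summand [a, b]."""
    a, b = rect
    return max(0.0, min(min(xi - ai, bi - xi) for xi, ai, bi in zip(x, a, b)))

def landscape(rects, k, x):
    vals = sorted((rect_landscape(r, x) for r in rects), reverse=True)
    return vals[k - 1] if k <= len(vals) else 0.0

M = [((0, 1), (10, 2)), ((4, 1), (6, 2))]
N = [((0, 1), (6, 2)), ((4, 1), (10, 2))]

# Compare the landscapes of M and N on a grid covering both modules.
grid = [(i / 4, j / 4) for i in range(-4, 45) for j in range(0, 13)]
agree = all(landscape(M, k, x) == landscape(N, k, x)
            for k in (1, 2) for x in grid)
print(agree)  # True: the uniform landscapes of M and N coincide
```

The grid arithmetic here is exact (all coordinates are dyadic rationals), so the equality test is reliable.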
The uniform persistence landscape fails to distinguish these modules since the rank invariants of these modules coincide on all pairs of points spanning hypercubes. This in turn occurs because the overlap between the summands in the $x_1$-coordinate is greater than their extent in the $x_2$-coordinate. Thus, altering $x_1$ and $x_2$ at the same rate, we cannot detect the interaction between the summands with the uniform landscape. A simple reparametrisation scaling the parameters $x_i$ appropriately would allow us to distinguish these modules and motivates the following definition. \begin{defn}($\vec{w}$-Weighted Persistence Landscape) Let $\vec{w}\in \{\vec{u} \in \mathbb{R}^n | u_i>0 , \|\vec{u}\|_\infty = 1 \}$ be a weighting vector corresponding to a rescaling of the parameter space $\mathbb{R}^n$. Define the $\vec{w}$-weighted infinity norm to be $\| \vec{h} \|^\vec{w}_{\infty} = \| (w_ih_i)_i \|_{\infty}$. The $\vec{w}$-Weighted Persistence Landscape is a function $\lambda_{\vec{w}} : \mathbb{N} \times \mathbb{R}^n \rightarrow \overline{\mathbb{R}}$. $$ \lambda_{\vec{w}}(k,\vec{x}) = \sup\{ \varepsilon \geq 0 : \beta^{\vec{x}-\vec{h},\vec{x}+\vec{h}} \geq k \text{ for all } \vec{h} \geq \vec{0} \text{ with } \| \vec{h} \|^\vec{w}_{\infty} \leq \varepsilon\}$$ \end{defn} \begin{Remark} The Uniform Persistence $\infty$-Landscape is the $\vec{1}$-Weighted Persistence Landscape. \end{Remark} \begin{defn}($\vec{w}$-Rescaling) Let $\varphi_\vec{w} \in \text{Aut}(\mathbb{R}^n)$ denote the invertible rescaling $\varphi_\vec{w}(\vec{x})=(w_ix_i)_i$ for $\vec{w}\in \{\vec{u} \in \mathbb{R}^n | u_i>0 , \|\vec{u}\|_\infty = 1 \}$. \end{defn} The following proposition makes precise the relationship between the weighted landscape and rescaling the parameter space. \begin{prop} \label{Rescaling} Let $\vec{w}$ be a rescaling vector and let $\lambda_{\vec{w}}$ denote the function taking a module to its $\vec{w}$-Weighted Persistence Landscape.
Let $(\varphi_\vec{w})^\ast$ denote the pullback of $\varphi_\vec{w}$. Then the following diagram commutes:\\ \[ \begin{tikzcd} \textbf{vect}^{\textbf{R}^n} \arrow{rr}{(\varphi_\vec{w})^\ast} \arrow{d}{\lambda_\vec{1}} & & \textbf{vect}^{\textbf{R}^n} \arrow{d}{\lambda_\vec{w}}\\ L_p(\mathbb{N}\times \mathbb{R}^n) \arrow{rr}{(\id \times \varphi_\vec{w})^\ast} & & L_p(\mathbb{N}\times \mathbb{R}^n) \end{tikzcd} \] \end{prop} \begin{proof} Let $M \in \textbf{vect}^{\textbf{R}^n}$; it is a straightforward definition chase to see that the diagram commutes. For convenience let us use shorthand notation for componentwise multiplication, $\vec{a}\odot\vec{b} = (a_ib_i)_i$. \begin{align*} \lambda_{\vec{w}}(M \circ \varphi_{\vec{w}})(k,\vec{x}) &= \sup \{\varepsilon \geq 0 \ |\ \beta^{\vec{x}-\vec{h},\vec{x}+\vec{h}}_{M \circ \varphi_{\vec{w}}} \geq k, \text{ for all } \vec{h} \geq \vec{0} \text{ with } \| \vec{h} \|^{\vec{w}}_{\infty} \leq \varepsilon \} \\ &= \sup \{\varepsilon \geq 0 \ |\ \beta^{\vec{w}\odot(\vec{x}-\vec{h}),\vec{w}\odot(\vec{x}+\vec{h})}_{M} \geq k, \text{ for all } \vec{h} \geq \vec{0} \text{ with } \| \vec{w}\odot\vec{h} \|_{\infty} \leq \varepsilon \} \\ &= \sup \{\varepsilon \geq 0 \ |\ \beta^{\vec{w}\odot\vec{x}-\vec{t},\vec{w}\odot\vec{x}+\vec{t}}_{M} \geq k, \text{ for all } \vec{t} \geq \vec{0} \text{ with } \| \vec{t} \|_{\infty} \leq \varepsilon \} \\ &= \lambda_{\vec{1}}(M)(k,\vec{w}\odot\vec{x}) = (((\textrm{id} \times \varphi_\vec{w})^\ast \circ \lambda_{\vec{1}})(M))(k,\vec{x}) \end{align*} \end{proof} The example illustrated in Figure \ref{LandscapeVsRankInvariant} exhibits the dependence of the landscape on the relative scaling of the significant parameter values in a multiparameter persistence module and highlights the importance of normalisation, or of the choice of weighting vectors $\vec{w}$, in practical applications.
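As a hedged illustration of this discussion (a sketch, not the paper's code): for a rectangle summand $[\vec{a},\vec{b}]$ the definition gives the closed form $\lambda_{\vec{w}}(1,\vec{x}) = \max(0, \min_i w_i \min(x_i - a_i,\, b_i - x_i))$, and down-weighting the $x_1$-axis separates the modules $M$ and $N$ of Figure \ref{LandscapeVsRankInvariant} that the uniform landscape could not distinguish:

```python
# Sketch: w-weighted infinity-landscape of a rectangle decomposable module.
# A weight w = (0.1, 1) rescales the x1-axis, exposing the different overlap
# structure of M and N.

def weighted_rect_landscape(rect, w, x):
    """lambda_w(1, x) of a single rectangle summand [a, b]."""
    a, b = rect
    return max(0.0, min(wi * min(xi - ai, bi - xi)
                        for wi, xi, ai, bi in zip(w, x, a, b)))

def weighted_landscape(rects, w, k, x):
    vals = sorted((weighted_rect_landscape(r, w, x) for r in rects),
                  reverse=True)
    return vals[k - 1] if k <= len(vals) else 0.0

M = [((0, 1), (10, 2)), ((4, 1), (6, 2))]
N = [((0, 1), (6, 2)), ((4, 1), (10, 2))]
x = (5, 1.5)

# The uniform landscape (w = (1, 1)) does not separate M and N at x ...
print(weighted_landscape(M, (1, 1), 1, x), weighted_landscape(N, (1, 1), 1, x))
# ... but the weight w = (0.1, 1) does: 0.5 for M versus 0.1 for N.
print(weighted_landscape(M, (0.1, 1), 1, x), weighted_landscape(N, (0.1, 1), 1, x))
```

With $\vec{w}=(0.1,1)$ the wide summand of $M$ still attains the full height $0.5$ at $\vec{x}=(5,1.5)$, while every summand of $N$ is capped by its rescaled width.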
\section{Stability and Injectivity} \label{Stability} In this section we shall show that the multiparameter landscapes are stable with respect to the interleaving distance and the persistence weighted Wasserstein distance. We will then provide an injectivity result that shows the collection of weighted persistence landscapes derived from a persistence module contains almost all the information in the rank invariant of that module. Let us define a $q$-distance on the space of multiparameter landscapes completely analogously to the definition in \cite{Bubenik:2015}, where we are implicitly viewing our landscapes as elements of $L_q(\mathbb{N}\times \mathbb{R}^n)$. Our landscapes are all measurable since they are continuous; however, they may be unbounded. We can either choose to permit infinite distances or alternatively truncate our landscapes to a bounded region so that all our distances are finite. \begin{defn}($q$-Landscape Distance) Let $M, M'$ be persistence modules; then define the $q$-landscape distance to be: $$ d_{\lambda_p}^{(q)}(M,M') = \| \lambda_{p}(M) - \lambda_{p}(M') \|_{q} $$ \end{defn} \subsection{Stability} We now show that unlike the rank function and rank invariant, the infinity norm of multiparameter persistence landscapes is stable with respect to the interleaving distance. \begin{thm}(Multiparameter Uniform $\infty$-Landscape Stability) \label{thm:stability} Let $M , M' \in \textbf{vect}^{\textbf{R}^n}$ be multiparameter persistence modules and $\omega$ the sublinear projection given by $\omega_{\Gamma} = \| \Gamma - \id \|_{\infty}$; then the $\infty$-Landscape distance of the Uniform Persistence $\infty$-Landscapes is bounded by the induced interleaving distance. $$ d_{\lambda_\infty}^{(\infty)}(M,M') \leq d^{\omega}(M,M') $$ \end{thm} \begin{proof} Suppose $M,M'$ are $\varepsilon$-interleaved with respect to $\omega$, and let $(\Gamma, K)$ realise an $\varepsilon$-interleaving.
Let $\vec{x} \in \mathbb{R}^n$ and assume without loss of generality that $ r = \lambda_{\infty}(k,\vec{x}) \geq \lambda_{\infty}'(k,\vec{x})$ and that also $ \lambda_{\infty}(k,\vec{x}) \geq \varepsilon $. For any $\vec{h} \in B^{\geq}_\infty(\vec{0},r-\varepsilon) $ we have that $\vec{h} + \varepsilon \vec{1} \in B^{\geq}_\infty(\vec{0},r)$. Since $r = \lambda_{\infty}(k,\vec{x})$ we know that the map $M(\vec{x} - (\vec{h} + \varepsilon \vec{1}) \leq \vec{x} + (\vec{h} + \varepsilon \vec{1}))$ has rank at least $k$. The $(\Gamma, K)$ $\varepsilon$-interleaving gives rise to the commutative diagram: \vspace{5mm} \begin{tikzcd}[cramped, column sep=tiny, font=\footnotesize] M(\vec{x} - (\vec{h} + \varepsilon \vec{1})) \arrow[ddr] \arrow[rrrr] & & & & M(K(\vec{x} + \vec{h})) \arrow[r] & M(\vec{x} + (\vec{h} + \varepsilon \vec{1})) \\ &&&&&\\ & M'(\Gamma(\vec{x} - (\vec{h} + \varepsilon \vec{1}))) \arrow[r] & M'(\vec{x} - \vec{h}) \arrow[r] & M'(\vec{x} + \vec{h}) \arrow[uur] & & \end{tikzcd} \vspace{5mm} Thus we see that the map $M'(\vec{x} - \vec{h} \leq \vec{x} + \vec{h})$ has rank at least $k$. Since $\vec{h}$ was arbitrary, $\lambda_{\infty}'(k,\vec{x}) \geq r - \varepsilon$, as required. \end{proof} \begin{cor}(Multiparameter Sublevel Set $\infty$-Landscape Stability Theorem) Let $f,g : X \rightarrow \mathbb{R}^n$; then the sublevel set persistence modules satisfy: $$ d_{\lambda_\infty}^{(\infty)}(M(f),M(g)) \leq \|f-g\|_{\infty} $$ \end{cor} \begin{proof} $M(f),M(g)$ are $\| f-g\|_{\infty}$-interleaved with respect to $\omega_{\Gamma} = \|\Gamma - \id \|_{\infty}$. \end{proof} We observed in Remark \ref{remark:qrelated} that the multiparameter $p$-landscapes are related to the $\infty$-landscapes by constant factors. Thus the $\infty$-landscape distance between a pair of $p$-landscapes is bounded by some constant multiple of the interleaving distance.
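As a numerical sanity check of Theorem \ref{thm:stability} (an illustrative sketch, not the paper's code): shifting a rectangle decomposable module by $\varepsilon\vec{1}$ produces a module at interleaving distance at most $\varepsilon$, so the sup-difference of the uniform $\infty$-landscapes should not exceed $\varepsilon$.

```python
# Sketch: verify sup |lambda_M - lambda_M'| <= eps for M' the eps-shift of
# a rectangle decomposable 2-parameter module M.

def rect_landscape(rect, x):
    """lambda_infty(1, x) of a single rectangle summand [a, b]."""
    a, b = rect
    return max(0.0, min(min(xi - ai, bi - xi) for xi, ai, bi in zip(x, a, b)))

def landscape(rects, k, x):
    vals = sorted((rect_landscape(r, x) for r in rects), reverse=True)
    return vals[k - 1] if k <= len(vals) else 0.0

eps = 0.25
M = [((0, 0), (4, 3)), ((1, 1), (2, 2))]
M_shift = [(tuple(c + eps for c in a), tuple(c + eps for c in b)) for a, b in M]

grid = [(i / 8, j / 8) for i in range(-8, 41) for j in range(-8, 33)]
gap = max(abs(landscape(M, k, x) - landscape(M_shift, k, x))
          for k in (1, 2) for x in grid)
print(gap, gap <= eps)  # the sup-difference on the grid is bounded by eps
```

Since $\lambda_{M'}(k,\vec{x}) = \lambda_M(k,\vec{x}-\varepsilon\vec{1})$ for the shifted module, the bound here also follows directly from the $1$-Lipschitz property of the uniform landscape.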
Note also that in the situation where we truncate our landscapes to a bounded region $R \subset \mathbb{R}^n$, the $\infty$-landscape distance stability result yields a coarse bound for the $q$-landscape distance between $p$-landscapes: the product of the measure of the bounded region $|R|$ and the $\infty$-landscape distance. Hence, working over a bounded region $R$, the $q$-landscape distance between two $p$-landscapes is bounded by a constant factor of the interleaving distance for any choice of $p$ and $q$, with the constant dependent on $p$, $q$ and $|R|$. The weighted landscapes also satisfy stability with respect to the interleaving distance. This can be shown directly or using Proposition \ref{Rescaling} and the stability result in the uniform case. \begin{cor}(Multiparameter $\vec{w}$-Weighted $\infty$-Landscape Stability) Let $M,N$ be multiparameter modules and let $\omega, \omega^{\vec{w}}$ be the sublinear projections given by $\omega_{\Gamma} = \| \Gamma - \id \|_{\infty},\ \omega^{\vec{w}}_{\Gamma} = \|\Gamma - \id \|_{\infty}^{\vec{w}}$ respectively. Then the induced interleaving distances bound the $\infty$-landscape distance: $$ \| \lambda_\vec{w}(M) - \lambda_\vec{w}(N)\|_\infty \leq d^{\omega^\vec{w}}(M,N) \leq d^{\omega}(M,N)$$ \end{cor} The $q$-landscape distance restricted to interval decomposable modules is stable with respect to the persistence weighted $q$-Wasserstein distance. \begin{prop}($q$-Landscape Distance Stability of Interval Decomposable Modules) Let $M,N$ be interval decomposable multiparameter modules with finite barcodes, and recall $d_{\overline{W}_q}$ the persistence weighted $q$-Wasserstein distance.
The $q$-landscape distance is stable with respect to the persistence weighted $q$-Wasserstein distance: $$ d_{\lambda_\infty}^{(q)}(M,N) \leq d_{\overline{W}_q}(M,N)$$ \end{prop} \begin{proof} Let us use the shorthand notation $\lambda_M = \lambda_\infty(M)$, and suppose $M,N$ have barcodes $\{I_j\ \ | \ j \in \mathcal{J}\}$ and $\{J_\kappa \ \ | \ \kappa \in \mathcal{K}\}$ with equal cardinality (else append empty intervals). Recall that the landscape for $M$ can be expressed as a pointwise maximum, $\lambda_M(k,\vec{x}) = \textrm{kmax}_\mathcal{J} \ \lambda_{\mathds{1}^{I_j}}(1,\vec{x})$. Let $\sigma : \mathcal{J} \to \mathcal{K}$ be any bijection realising the persistence weighted $q$-Wasserstein distance. \begin{align*} d_{\lambda_\infty}^{(q)}(M,N)^q = \|\lambda_M -\lambda_N \|_q^q &= \sum_{k=1}^\infty \int_{\mathbb{R}^n} |\lambda_M(k,\vec{x}) -\lambda_N(k,\vec{x}) |^q d\mu \\ &= \int_{\mathbb{R}^n} \sum_{k=1}^\infty |\textrm{kmax}_\mathcal{J} \ \lambda_{\mathds{1}^{I_j}}(1,\vec{x}) -\textrm{kmax}_\mathcal{K} \ \lambda_{\mathds{1}^{J_\kappa}}(1,\vec{x}) |^q d\mu \\ &\leq \int_{\mathbb{R}^n} \sum_{j\in \mathcal{J}} | \lambda_{\mathds{1}^{I_j}}(1,\vec{x}) - \lambda_{\mathds{1}^{J_{\sigma(j)}}}(1,\vec{x}) |^q d\mu \\ &\leq \int_{\mathbb{R}^n} \sum_{j\in \mathcal{J}} \varepsilon_j^q \mathds{1}_{\{I_j \cup J_{\sigma(j)}\}} d\mu \\ &= \sum_{j\in \mathcal{J}} |I_j \cup J_{\sigma(j)}|\varepsilon_j^q = d_{\overline{W}_q}(M,N)^q \end{align*} The inequality between the second and third line follows from the general fact that for any $\vec{u},\vec{v} \in \mathbb{R}^n$ the sum $\sum |u_i - v_i|^q$ is minimised by ordering the components of each tuple. The fourth line bounds the third line by Theorem \ref{thm:stability} applied to the matched interval summands. \end{proof} \subsection{Injectivity} We now show that the collection of weighted landscapes associated to a module preserves almost all the information contained in the rank invariant. 
\begin{prop} Let $\textbf{vect}_{\text{fin}}^{\textbf{R}^n}$ denote the collection of pointwise finite dimensional, finitely presented persistence modules. Let us define an equivalence relation on $\textbf{vect}_{\text{fin}}^{\textbf{R}^n}$ identifying $M \sim N$ if the rank invariants of $M$ and $N$ coincide almost everywhere. Then the map $\lambda : M \mapsto \{(\vec{w}, \lambda_\vec{w}(M))\}$ is well defined and injective on the quotient space $\textbf{vect}_{\text{fin}}^{\textbf{R}^n}/\sim$. Moreover, if we equip the quotient space with the distance induced by the interleaving distance, and the weighted landscape space with the metric: $$d(\lambda(M),\lambda(N)) = \sup_\vec{w}\{ \| \lambda_\vec{w}(M) - \lambda_\vec{w}(N)\|_\infty\}$$ then this map is $1$-Lipschitz. \end{prop} \begin{proof} $\newline$ \begin{enumerate} \item The map $\lambda$ is well defined: Suppose $M,N$ are modules such that $\lambda_\vec{1}(M) \neq \lambda_\vec{1}(N)$; we will show that the rank invariants of these modules differ on a set of positive measure. Without loss of generality let $r = \lambda_\vec{1}(M)(k,\vec{x}) > \lambda_\vec{1}(N)(k,\vec{x})$ and let $\varepsilon < \lambda_\vec{1}(M)(k,\vec{x}) - \lambda_\vec{1}(N)(k,\vec{x})$. Since $\lambda_\vec{1}(N)(k,\vec{x})<r-\varepsilon$ there is some $\vec{h}$ with $\| \vec{h}\|_\infty<r-\varepsilon$ such that $\beta_N^{\vec{x}-\vec{h},\vec{x}+\vec{h}} < k$. Consider any element $(\vec{a},\vec{b})$ of the open set $B^{<}(\vec{x}-\vec{h},\varepsilon)\times B^{>}(\vec{x}+\vec{h},\varepsilon) \subset \mathbb{R}^n \times \mathbb{R}^n$. Then $r = \lambda_\vec{1}(M)(k,\vec{x}) \implies \beta_M^{\vec{a},\vec{b}} \geq k$ and $\vec{a} \leq \vec{x}-\vec{h}\leq \vec{x}+\vec{h}\leq \vec{b} \implies \beta_N^{\vec{a},\vec{b}} \leq \beta_N^{\vec{x}-\vec{h},\vec{x}+\vec{h}} < k$. Thus the rank invariants differ on a set of positive measure. \item The map $\lambda$ is injective: Let $M \in \textbf{vect}_{\text{fin}}^{\textbf{R}^n}$ and recall Proposition \ref{Hypercube}.
For all $\vec{a} < \vec{b}$ there is some rescaling vector $\vec{w}$ such that $[\varphi_\vec{w}(\vec{a}),\varphi_\vec{w}(\vec{b})]$ spans a hypercube. Thus we can recover the rank invariant of $M$ almost everywhere from the collection $\{(\vec{w}, \lambda_\vec{w}(M))\}$. \item The map $\lambda$ is $1$-Lipschitz: This is an immediate consequence of Multiparameter $\vec{w}$-Weighted $\infty$-Landscape Stability. \end{enumerate} \end{proof} Since $\lambda$ is $1$-Lipschitz we can compute a lower bound on the interleaving distance between modules from the collection of weighted landscapes. It would be interesting to investigate further the relationship between the landscape distance and the interleaving distance, to understand when the landscape distance provides a good lower bound for the interleaving distance. \section{Statistics on Multiparameter Landscapes} \label{Statistics} A principal advantage of working with landscapes as a summary statistic for our data is that we are always able to take the pointwise mean of a collection of landscapes. The space of persistence landscapes endowed with the $q$-landscape distance is naturally a subspace of Lebesgue space, a Banach space. We would like to perform statistical analysis on a set of landscapes produced from data sets to distinguish significant topological signals from sampling noise. In this section we shall review relevant results from the theory of Banach Space valued random variables, and then apply these results to multiparameter persistence landscapes. We attain the same collection of results enjoyed by the single parameter persistence landscape established in \cite{Bubenik:2015}. \subsection{Probability in Banach Spaces} Let us begin by defining some notation. Let $(\mathcal{B},\| \cdot \|)$ denote a real, separable Banach Space with topological dual space $\mathcal{B}^{*}$. Let $ V : (\Omega,\mathcal{F},\mathbb{P}) \to \mathcal{B}$ denote a Borel measurable random variable.
The covariance structure of such a random variable is defined to be the set of expectations $$\{\mathbb{E}[(f(V)-\mathbb{E}[f(V)])(g(V)-\mathbb{E}[g(V)])] : f,g \in \mathcal{B}^{*}\}$$ In order to take expected values of Banach valued random variables we require the notion of the Pettis Integral, which is an extension of the Lebesgue integral to functions on measure spaces taking values in normed spaces. We shall briefly introduce the properties of this integral and existence criteria, the essence of which is built upon reducing the problem to integrability of $\mathbb{R}$-valued functions. \begin{defn}(Scalarly Integrable)\cite{Geitz1981} A function $V: (\Omega, \mathcal{F}, \mu) \to \mathcal{B}$ is scalarly integrable if for all $f \in \mathcal{B}^{*}$ we have that $f(V)\in L_{1}(\mu)$ \end{defn} \begin{defn}(Pettis Integrable)\cite{Geitz1981} A scalarly integrable function $V: (\Omega, \mathcal{F}, \mu) \to \mathcal{B}$ is Pettis integrable if for all $E \in \mathcal{F}$ there is an element $I_V(E)\in \mathcal{B}$ such that: $$ \int_{E} f(V) d\mu = f(I_V(E)) \text{ for all }f\in \mathcal{B}^{*}$$ The set function $ I_V : \mathcal{F} \to \mathcal{B}$ is called the Pettis Integral of $V$ with respect to $\mu$. We may also refer to $I_V(\Omega)$ as the Pettis Integral of $V$ and denote this by $I_V$. \end{defn} \begin{thm} \cite{Musia2015}(Theorem 5.4) If $\mathcal{B}$ does not contain an isomorphic copy of $(c_0,\| \cdot \|_\infty)$ then each strongly measurable and scalarly integrable $\mathcal{B}$-valued function is Pettis Integrable. \end{thm} Note that for a separable Banach Space the notions of weak and strong measurability coincide. Thus the previous theorem gives a sufficient criterion for Pettis Integrability in the setting of multiparameter persistence landscapes endowed with the $q$-norm for $q\in [1,\infty)$ for which the underlying Banach space is separable. 
\begin{cor}\cite{Bubenik:2015} Let $V: (\Omega, \mathcal{F}, \mu) \to \mathcal{B}$ with $\mathcal{B}$ real and separable. If $\mathbb{E}^{\mu}[\|V\|] < \infty $ then $V$ has a Pettis Integral and $\|I_V(\Omega)\| \leq \mathbb{E}^{\mu}[\|V\|]$. \end{cor} \begin{thm}(Strong Law of Large Numbers)\cite{Talagrand2011}(Corollary 7.10) Let $V_i$ be i.i.d.\ copies of $V: (\Omega, \mathcal{F}, \mathbb{P}) \to \mathcal{B}$ and let $S_n = \sum_{i=1}^n V_i $. Then $\mathbb{E}[\|V\|] < \infty$ if and only if: $$ \frac{S_n}{n} \to I_V(\Omega) \text{ almost surely as } n \to \infty $$ \end{thm} \begin{defn} We say a $\mathcal{B}$-valued random variable $X$ is Gaussian if for all $f\in \mathcal{B}^*$ the real valued random variable $f(X)$ is Gaussian with mean zero. Note that such a Gaussian random variable is determined by its covariance structure. \end{defn} The next result applies only to a certain class of Banach spaces. The type and cotype of a Banach space can loosely be thought of as a measure of how close that Banach space is to a Hilbert space. For $q\in [1,2]$ the Lebesgue space $L_q$ has type $q$ and cotype $2$, and for $q\in [2,\infty)$ the Lebesgue space $L_q$ has type $2$ and cotype $q$. \begin{thm}(Central Limit Theorem)\cite{Hoffmann-Jørgensen1976} Let $\mathcal{B}$ be a Banach space of type 2 and $V: (\Omega, \mathcal{F}, \mathbb{P}) \to \mathcal{B}$. If $I_V = 0$ and $\mathbb{E}[\|V\|^2] < \infty$ then $\frac{1}{\sqrt{n}}S_n$ converges weakly to a Gaussian random variable with the same covariance structure as $V$. \end{thm} \subsection{Convergence Results for Multiparameter Landscapes} We shall take the same probabilistic approach as in \cite{Bubenik:2015} in viewing multiparameter landscapes derived from a data set as a Banach space valued random variable.
The model for applying statistical analysis to persistence landscapes will typically follow the general setup below. Suppose $X$ is a Borel measurable random variable on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$, thought of as sampling data from some distribution. Further let $\Lambda = \Lambda(X)$ denote the multiparameter persistence landscape associated to some multifiltration of the data $X$, so that in summary $\Lambda : (\Omega, \mathcal{F}, \mathbb{P}) \to L_q(\mathbb{N}\times\mathbb{R}^n)$ is a random variable taking values in a real, separable Banach space. Let $\{X_i\}$ be i.i.d.\ copies of $X$ and $\{\Lambda_i\}$ their associated landscapes. Denoting the pointwise mean of the first $n$ landscapes by $\overline{\Lambda}^n$ and applying the general theory of probability in Banach spaces presented above, we obtain several results. Observe that in practice we may be required to truncate our multiparameter landscapes to a bounded region in order to satisfy the finiteness criteria in the convergence results. \begin{thm}(Strong Law of Large Numbers) With our notation as in the above discussion, $\overline{\Lambda}^n \to I_\Lambda$ almost surely if and only if $\mathbb{E}[\|\Lambda\|] < \infty$. \end{thm} \begin{thm}(Central Limit Theorem) Let us consider the landscapes endowed with the $q$-Landscape distance for $q \geq 2$. Suppose $\mathbb{E}[\|\Lambda\|] < \infty$ and $\mathbb{E}[\|\Lambda\|^2] < \infty$, then $\sqrt{n}(\overline{\Lambda}^n - I_\Lambda(\Omega)) $ converges weakly to a Gaussian random variable $G(\Lambda)$ with the same covariance structure as $\Lambda$. \end{thm} The central limit theorem for the landscapes induces a central limit theorem for associated real valued random variables and facilitates the computation of approximate confidence intervals. \begin{cor} Let us consider the landscapes endowed with the $q$-Landscape distance for $q \geq 2$. Suppose $\mathbb{E}[\|\Lambda\|] < \infty$ and $\mathbb{E}[\|\Lambda\|^2] < \infty$. 
Furthermore let $f\in L_q(\mathbb{N}\times\mathbb{R}^n)^* \cong L_p(\mathbb{N}\times\mathbb{R}^n)$, so that $Y = f(\Lambda)$ is a real valued random variable. Then $\sqrt{n}(\overline{Y}^n - \mathbb{E}[Y])$ converges in distribution to $\mathcal{N}(0,\text{Var}(Y))$. \end{cor} \begin{cor}(Approximate Confidence Intervals) Suppose $Y$ is a real-valued random variable attained from a functional applied to the multiparameter landscape $\Lambda$ satisfying the conditions of the previous corollary. Let $\{Y_i\}_{i=1}^n$ be i.i.d.\ instances of this random variable and $S_n^2 = \frac{1}{n-1} \sum_{i=1}^n (Y_i - \overline{Y}_n)^2$ the sample variance. An approximate $(1-\alpha)$ confidence interval for $\mathbb{E}[Y]$ is given by $[\overline{Y}_n- z_{\frac{\alpha}{2}}\frac{S_n}{\sqrt{n}} ,\overline{Y}_n+z_{\frac{\alpha}{2}}\frac{S_n}{\sqrt{n}}]$, where $z_{\frac{\alpha}{2}}$ is the $\frac{\alpha}{2}$ critical value for the normal distribution. \end{cor} In practice, a functional of choice could be given by integrating the landscapes over a subset $E$ of the parameter domain, $f_E(\Lambda) = \int_E \Lambda \ d\mu$. These functionals can be used to establish the significance of homological features in different regions of the parameter space. We remark that recent work has obtained confidence bands for single parameter persistence landscapes \cite{Chazal2014StochasticCO,Chazal2013Bootstrap}. It would be interesting to see similar analysis performed in the multiparameter setting. \section{Example Computations and Machine Learning Applications} \label{Computations} In this section we shall present example computations of multiparameter persistence landscapes and demonstrate a simple application of machine learning to the persistence landscapes. We use the RIVET software for computations of 2-parameter persistence modules presented in \cite{Lesnick2015}. 
RIVET supports the fast computation of multigraded Betti numbers and an interactive visualisation for 2-parameter persistence modules. The software computes a data structure associated to a module which facilitates real time queries of the fibered barcode space. As far as we know, RIVET is the only publicly available TDA software package supporting multiparameter persistent homology calculations. The software supports a range of input formats including: point cloud, metric space, algebraic chain complex, and explicit bifiltered complex. In particular we shall use the software to calculate and query the fibered barcode associated to a module along a selection of one dimensional slices of the parameter space. Further details of the software may be found in \cite{Lesnick2015}. Computation of the module with RIVET is the most computationally expensive procedure in our calculations. Details of the time and space complexity of the algorithm may be found in \cite{Lesnick2015}: loosely, if $m$ denotes the size of the filtered complex associated to the input data, then in the worst case one requires time $O(m^5)$ and space $O(m^5)$ to compute the data structure which admits fast queries of the fibered barcode space. In theory, since our landscape is derived solely from the rank invariant, we need not calculate the full module and fibered barcode space. Recall that the value of the multiparameter uniform persistence $\infty$-landscape at each point can be calculated using the single parameter persistence landscape associated to the line of slope $\vec{1}$ passing through that point. Thus we could reduce the computation of the multiparameter landscape in any dimension to repeated single parameter persistent homology calculations. This reduction would be highly parallelizable and likely to provide significant speedup. \begin{prop} Let $M \in \mathrm{vect}^{\mathbb{R}^2}$ be a multiparameter persistence module derived from a simplicial complex with $m$ simplices. 
Let $\varepsilon$ be some tolerance value and $[0,R]\times[0,R] \subset \mathbb{R}^2$ a subset of our parameter space. Then we can compute an $\varepsilon$-approximation $\lambda_M^{(\varepsilon)}$ to the uniform persistence landscape $\lambda_M$ of $M$ on the region $[0,R]\times[0,R]$ in time $O(m^3 \frac{R}{\varepsilon})$. Our approximation is with respect to the infinity norm: $\|\lambda_M^{(\varepsilon)} - \lambda_M\|_\infty \leq \varepsilon$. \end{prop} \begin{proof} Divide the region $[0,R]\times[0,R]$ into a grid of spacing $\varepsilon$. It suffices to calculate the values of the landscape on this grid, since the landscape functions are 1-Lipschitz and so we can extend the grid values to an $\varepsilon$-approximate function on $[0,R]\times[0,R]$. Thus we reduce our computation to the computation of $\frac{2R}{\varepsilon}$ single parameter landscapes, corresponding to the $\frac{2R}{\varepsilon}$ slope $\vec{1}$ lines passing through the points of the grid. Given the birth-death pairs, \cite{BubenikDlotko2017} provides an algorithm to compute the persistence landscapes in time $O(m^2)$. It is well known from \cite{books/daglib/0025666} that one can produce birth-death pairs from a filtration of size $m$ in time $O(m^3)$. Hence the result follows. \end{proof} It is possible that the above time estimate for the landscape computation could be improved by using vineyard style updates between the single parameter landscapes \cite{Cohen-Steiner2006}. Moreover it may be that in practical applications, computing the module with RIVET in time $O(m^5)$ and using the fibered barcode queries will be faster than the computation of a series of single parameter landscapes. Note also that the $\frac{2R}{\varepsilon}$ single parameter landscape calculations are independent and so can be computed in parallel. We postpone comparisons of different computational algorithms, benchmarking, and efficient implementation to follow-up work. 
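The grid computation in the proof above can be sketched in code. This is a minimal illustration, not the paper's implementation: the `fibered_barcode` oracle is a hypothetical stand-in for a RIVET fibered barcode query (or a single-parameter persistence computation on the restricted filtration), and we parametrise each slope-$\vec{1}$ line by its first coordinate, a convention of ours (the unit-speed parametrisation used for the uniform landscape differs by a factor of $\sqrt{2}$).

```python
import numpy as np

def landscape_1d(k, t, pairs):
    """Single-parameter persistence landscape at t: the k-th largest
    tent value max(0, min(t - b, d - t)) over the birth-death pairs."""
    vals = sorted((max(0.0, min(t - b, d - t)) for b, d in pairs), reverse=True)
    return vals[k - 1] if k <= len(vals) else 0.0

def approx_landscape(k, R, eps, fibered_barcode):
    """Tabulate the k-th 2-parameter landscape on [0, R]^2 at grid spacing
    eps.  `fibered_barcode(offset)` is a user-supplied oracle returning
    the birth-death pairs of the module restricted to the slope-1 line
    {(t, t + offset)}, expressed in the line parameter t."""
    grid = np.arange(0.0, R + eps / 2, eps)
    L = np.zeros((len(grid), len(grid)))
    for i, x in enumerate(grid):
        for j, y in enumerate(grid):
            # the slope-1 line through (x, y) has offset y - x, and
            # the point (x, y) sits at line parameter t = x
            L[i, j] = landscape_1d(k, x, fibered_barcode(y - x))
    return grid, L
```

The two loops over grid points, like the underlying lines, are independent, so this computation parallelises trivially; by the 1-Lipschitz property the tabulated values extend to an $\varepsilon$-approximation on the whole region.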
One may want to utilise machine learning algorithms with landscape functions as a collection of features for a data set. Recall that if we consider the persistence landscapes associated with the 2-Landscape distance then we are naturally in the setting of a Hilbert space. The inner product on this space is positive definite on the space of persistence landscapes. As such we may use this kernel with an SVM, applying convex optimisation techniques to learn non-linear relationships in our data. Another point to note is that evaluating an $n$-dimensional landscape on a grid of finite resolution gives an $n$-dimensional array as a summary of our data, to which one could apply a convolutional neural network. This transform from landscape to multidimensional array is stable with respect to the landscape distance. A similar approach is used in \cite{Adams2015} to produce a \textit{persistence image} from a persistence diagram. We provide three computational examples together with the application of a basic statistical test and a standard SVM classifier. Our examples demonstrate that the multiparameter landscape is sensitive to both topology and geometry. We do not claim that the multiparameter landscape is the optimal analytic tool for the various tasks in our examples; rather we demonstrate a range of potential applications. \subsection{Concentric Circles} Our first example looks at point clouds sampled from densities concentrated around a pair of concentric circles with radii $1$ and $3$ respectively. We colour the points from each circle in two distinct ways. Colouring A assigns the large circle colour parameter $0.5$ and the small circle colour parameter $1.5$. Colouring B assigns the small circle colour parameter $0.5$ and the large circle colour parameter $1.5$. We examine how the multiparameter landscapes differ depending on the colouring of the circles. 
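The sampling and colour-sublevel construction for this example can be sketched as follows; the function names and the fixed seed are illustrative choices of ours, not part of the original experiment's code.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, for reproducibility only

def sample_coloured_circles(n_per_circle=100, colouring="A"):
    """Sample n points uniformly from each of two concentric circles of
    radii 1 and 3, attaching the colour parameters of Colouring A
    (large circle 0.5, small circle 1.5) or Colouring B (reversed)."""
    a_small = rng.uniform(0.0, 2.0 * np.pi, n_per_circle)
    a_large = rng.uniform(0.0, 2.0 * np.pi, n_per_circle)
    small = np.column_stack([np.cos(a_small), np.sin(a_small)])        # radius 1
    large = 3.0 * np.column_stack([np.cos(a_large), np.sin(a_large)])  # radius 3
    c_large, c_small = (0.5, 1.5) if colouring == "A" else (1.5, 0.5)
    points = np.vstack([small, large])
    colours = np.concatenate([np.full(n_per_circle, c_small),
                              np.full(n_per_circle, c_large)])
    return points, colours

def colour_sublevel(points, colours, c):
    """P_c: the sampled points with colour parameter no more than c --
    the second filtration direction (the first is the Rips parameter)."""
    return points[colours <= c]
```

Feeding `colour_sublevel(points, colours, c)` into a Rips complex at scale $r$ then realises the bifiltered space $X_{(r,c)} = \text{VR}(\mathcal{P}_c, r)$ described below.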
For each colouring we perform $30$ samples, each sample consisting of $100$ points uniformly sampled from each circle (Figure \ref{subfig:CirclePointclouds}). We produce a filtration on each point cloud with the Rips filtration in the first parameter and the colour parameter in the second parameter. Thus at parameter value $(r,c)\in\mathbb{R}^2$, we have the space $X_{(r,c)} = \text{VR}(\mathcal{P}_{c},r)$ where $\mathcal{P}_{c}$ denotes the sampled points with colour parameter no more than $c$. We compute the average landscapes of the $H_1$-modules for the two different colourings (Figure \ref{fig:ColouredCircles}). When the large circle has the smaller colour parameter value, the first landscape $(k=1)$ can detect the large circle (Figure \ref{subfig:FirstMeanLandscape}). We see the large circle in the first landscape as the large mountain spanning the parameter subspace $[1,5.4]\times[0.5,1.5]$. When the large circle has the higher parameter value, the persistence in the Rips filtration parameter is diminished by the presence of the small circle with smaller colour parameter. In both colourings, the second landscape $(k=2)$ exhibits the range of parameter values for which both circles are detected (Figure \ref{subfig:SecondMeanLandscape}). We test the robustness of the landscape by repeating the sampling, this time with only $50$ points per circle and perturbing both the radii and colour of the sampled points with the addition of i.i.d.\ normals $\mathcal{N}(0,0.3)$ (Figure \ref{subfig:NoisyCirclePointclouds}). We illustrate in Figure \ref{subfig:NoisyFirstMeanLandscape} and Figure \ref{subfig:NoisySecondMeanLandscape} the average landscapes taken over $30$ noisy samples. The resulting landscapes are similar to those of the larger samples without noise. Let us perform a statistical test to determine whether the multiparameter landscapes can detect that the noisy samples are drawn from different distributions. Consider the functional $f_E(\lambda) = \int_E \lambda \, d\mu$. 
Using the results of Section \ref{Statistics} we find approximate confidence intervals for $f_E(\lambda)$ with $E = \{1\}\times ([2,6]\times[0,1.5]) \subset \mathbb{N}\times \mathbb{R}^2$. We obtain approximate $99\%$-confidence intervals on the noisy samples: for Colouring A $[0.400,0.474]$, and for Colouring B $[0.00556,0.00809]$. A two-sample $t$-test on the values of this functional on the two sets of colourings yields a $p$-value of $0.00629$. Thus we reject the null hypothesis that the functional values on the landscapes of the two colourings have the same mean. \begin{figure}[p] \centering \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{ColouredCircles.pdf} \caption{An example point cloud sample from each colouring.} \label{subfig:CirclePointclouds} \end{subfigure} \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{ColouredCirclesMeanLandscapes.pdf} \caption{The mean first landscape for each colouring taken over the $30$ samples, $\overline{\lambda_2}(1,\vec{x})$.} \label{subfig:FirstMeanLandscape} \end{subfigure} \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{ColouredCirclesMeanLandscapes2.pdf} \caption{The mean second landscape for each colouring taken over the $30$ samples, $\overline{\lambda_2}(2,\vec{x})$.} \label{subfig:SecondMeanLandscape} \end{subfigure} \caption{The first column shows the plots for Colouring A and the second column Colouring B.} \label{fig:ColouredCircles} \end{figure} \begin{figure}[p] \centering \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{ColouredNoisyCircles.pdf} \caption{An example point cloud sample from each colouring with noise added.} \label{subfig:NoisyCirclePointclouds} \end{subfigure} \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{NoisyColouredCirclesMeanLandscapes.pdf} \caption{The mean first landscape $\overline{\lambda}_2(1,\vec{x})$ taken over the $30$ noisy samples.} 
\label{subfig:NoisyFirstMeanLandscape} \end{subfigure} \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\linewidth]{NoisyColouredCirclesMeanLandscapes2.pdf} \caption{The mean second landscape $\overline{\lambda}_2(2,\vec{x})$ taken over the $30$ noisy samples.} \label{subfig:NoisySecondMeanLandscape} \end{subfigure} \caption{The first column shows the plots for Colouring A and the second column Colouring B.} \label{fig:NoisyColouredCircles} \end{figure} \subsection{Modal Estimation} For this example we work on meteorite data taken from \cite{Good_1980}. The data set consists of values of the proportion of silica measured in $22$ samples. Our task is to infer how many modes there are in the distribution from which this data has been sampled. A standard approach to this task is kernel density estimation (KDE). With data $\{x_i\}_{i=1}^n \subset \mathbb{R}^d$ one estimates the probability density function (pdf) of the distribution using a sum of normalised kernels: $$f(x) = \frac{1}{n}\sum_{i=1}^n K_\sigma(x-x_i).$$ Here $K_\sigma$ is a density function with mass concentrated about the origin, for example a Gaussian centred at the origin. There are two natural parameters in this KDE setup: the \textit{bandwidth} parameter $\sigma$ of the kernel function $K_\sigma$, and a \textit{threshold} parameter which dictates how large a peak in the estimated distribution must be to be considered a mode. The choice of these parameters will dramatically alter our inferred number of modes (see Figure \ref{fig:ChondriteKDEs}). \begin{figure}[h] \centering \includegraphics[width=\linewidth]{ChondriteKDEs.pdf} \caption{We plot kernel density estimates on the meteorite data (red) for a range of bandwidth parameters. 
As we increase the bandwidth parameter we obtain fewer modes in our kernel density estimate.} \label{fig:ChondriteKDEs} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{Modal-Estimation.png} \caption{A triangulated surface plot of the KDE for a range of bandwidth parameters. We observe three modes in the KDE estimate for a large range of bandwidth values.} \label{fig:KDE-Surface} \end{figure} Figure \ref{fig:KDE-Surface} is a surface plot of the KDEs ranging over various bandwidth parameters, demonstrating the change in the number of modes as we change the bandwidth. The surface has been triangulated using a triangulation subordinate to a regular grid on our parameter space. To each $2$-simplex $\tau$ in the triangulation we attach two parameters: the mean bandwidth $\sigma(\tau)$, and the mean probability density value $p(\tau)$ (averages taken over the vertices of the simplex). We produce a bifiltration by taking the simplicial closure of the $2$-simplices with appropriate parameter values, $X_{(\sigma_0,p_0)} = \text{SC}(\{ \tau | \sigma(\tau) \leq \sigma_0 , p(\tau) \geq 1-p_0 \})$. The multiparameter landscape detects that three modes appear in the KDEs for a range of parameter values. Looking at the landscapes associated to the $H_0$-module we see that the infinity norm is essentially constant across the first three landscapes but decreases significantly between the third and fourth landscapes (Figure \ref{fig:Modal-Landscapes}). This indicates that within this setup, three modes are seen across a significantly wider range of parameter values than four modes, suggesting the data is drawn from a trimodal distribution, which coincides with our expectation. Whilst in this simple example one could infer three modes by inspection, the landscape analysis can equally be applied to higher dimensional data sets for which visualisation is not possible. 
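The interaction between the bandwidth and threshold parameters can be illustrated with a short mode-counting sketch; this is a naive stand-in for the persistence computation, not the landscape pipeline itself, and the grid resolution is an arbitrary choice of ours.

```python
import numpy as np

def kde(x, data, sigma):
    """Gaussian kernel density estimate f(x) = (1/n) sum_i K_sigma(x - x_i)."""
    x = np.atleast_1d(x).astype(float)[:, None]
    return np.mean(np.exp(-(x - data) ** 2 / (2.0 * sigma ** 2)), axis=1) \
        / (sigma * np.sqrt(2.0 * np.pi))

def count_modes(data, sigma, threshold=0.0):
    """Number of interior local maxima of the KDE whose height exceeds the
    threshold -- the two parameters of the bifiltration described above."""
    grid = np.linspace(data.min() - 3 * sigma, data.max() + 3 * sigma, 2001)
    f = kde(grid, data, sigma)
    # a grid point is a peak if it (weakly) dominates its left neighbour
    # and strictly dominates its right neighbour, and clears the threshold
    peaks = (f[1:-1] >= f[:-2]) & (f[1:-1] > f[2:]) & (f[1:-1] > threshold)
    return int(peaks.sum())
```

On a toy trimodal sample, small bandwidths yield three modes and large bandwidths one, mirroring the behaviour in Figure \ref{fig:ChondriteKDEs}.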
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{ModalLandscapes.pdf} \caption{The first to fifth landscapes associated to the $H_0$ module for the KDE surface.} \label{fig:Modal-Landscapes} \end{figure} This basic example can be generalised to detect other properties of KDEs robust to changes in parameter values. For example one could detect significant $i$-dimensional holes in the distribution by considering the $H_i$ module in a similar setup. For related work see Persistence Terraces \cite{Moon2017}. \subsection{Curvature} In this subsection we shall work with a synthetic data set sampled from spaces of different curvature. This example is used to emphasise the ability of the multiparameter landscapes to detect geometric differences between point samples. The samples consist of $100$ points chosen uniformly with respect to the volume measure from discs of radius $1$ in the hyperbolic plane, the surface of the unit sphere, and Euclidean space, so that the spaces have constant curvature $-1$, $1$, $0$ respectively. Topologically these discs are all trivial; our landscapes are detecting geometric differences induced by the distribution of points. We would like to show that the multiparameter landscape is able to detect the curvature of the space from which a sample is drawn given only the pairwise distances between points. A multifiltered complex is built on the sampled points by filtering the Rips complex with the third nearest neighbour density function $\rho$ on the points. Explicitly, if $\mathcal{P}$ denotes our sampled points and $(r,\rho_0)\in \mathbb{R}^2$ then $X_{(r,\rho_0)} = \text{VR}(\mathcal{P}_{\rho_0},r)$ where $\mathcal{P}_{\rho_0} = \{p\in \mathcal{P} : \rho(p)\leq \rho_0\}$. We take $100$ samples of $100$ points in each space and investigate the resulting multiparameter landscapes for dimension 1 homology. 
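The codensity parameter used here can be computed directly from the pairwise distance matrix; a minimal sketch follows, where the convention that each point counts as its own zeroth neighbour (so the third nearest neighbour sits at sorted column index 3) is an assumption of ours.

```python
import numpy as np

def third_nn_density(dists):
    """For each point, its distance to its third nearest neighbour,
    computed from a symmetric pairwise distance matrix.  Column 0 of
    each sorted row is the zero distance of a point to itself."""
    return np.sort(dists, axis=1)[:, 3]

def density_sublevel(points, rho, rho0):
    """P_{rho_0}: the points whose codensity value rho is at most rho_0,
    the second filtration direction (the first is the Rips parameter)."""
    return points[rho <= rho0]
```

Dense regions have small $\rho$ and enter the filtration first, so thresholding on $\rho$ alongside the Rips parameter gives the bifiltration $X_{(r,\rho_0)}$ from only the pairwise distances.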
We plot the average first multiparameter landscapes in Figure \ref{subfig:CurvatureMeanLandscapes} and the differences between the average landscapes in Figure \ref{subfig:CurvatureLandscapeDifferences}. As one might expect, the persistence of cycles is affected by the curvature of the space. The more negative the curvature, the longer the one dimensional cycles persist. Let us now apply a simple machine learning algorithm to the multiparameter landscapes to see if we can reliably distinguish the curvature of the space from which our small samples have been drawn. Using the LinearSVC class from the Python package scikit-learn, we train a Support Vector Machine (SVM) with linear kernel on discretisations of the first $10$ landscapes for the samples of the hyperbolic discs and elliptic discs, using an $\ell^2$ penalty and squared hinge loss function. We randomly partition our samples into $160$ training samples and $40$ test samples and evaluate the accuracy by the proportion of test samples correctly classified. Repeating this process $100$ times we obtain an average classification score of $85.78\%$. Thus we see that the multiparameter landscapes are able to reliably detect curvature given a relatively small local sample. It is possible that alternative choices of filtration parameters may be better suited to detecting curvature. 
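The classification step can be sketched with a self-contained stand-in for scikit-learn's LinearSVC: a linear classifier trained by full-batch gradient descent on the $\ell^2$-regularised squared hinge loss, with flattened landscape discretisations as features. All names and hyperparameters here are illustrative assumptions, not the experiment's actual configuration.

```python
import numpy as np

def landscape_features(landscapes):
    """Flatten the first k discretised landscapes into one feature vector."""
    return np.asarray(landscapes, dtype=float).ravel()

def train_linear_svm(X, y, lam=1e-2, lr=0.1, epochs=200):
    """Linear SVM via gradient descent on the l2-regularised squared hinge
    loss (a simplified stand-in for LinearSVC).  Labels y in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        # squared-hinge slack max(0, 1 - margin) for each sample
        slack = np.maximum(0.0, 1.0 - y * (X @ w + b))
        w -= lr * (lam * w - (2.0 / n) * X.T @ (y * slack))
        b -= lr * (-(2.0 / n) * np.sum(y * slack))
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0.0, 1, -1)
```

In the experiment itself one would stack `landscape_features` of each sample into the rows of `X`, label hyperbolic and elliptic samples $\pm 1$, and repeat random 160/40 train/test splits as described above.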
\begin{figure}[ht] \centering \begin{subfigure}[b]{1\linewidth} \includegraphics[width=\linewidth]{Curvature-Mean-Landscapes.pdf} \caption{The mean first landscape $\bar{\lambda}_{2}(1,\vec{x})$ of the $H_1$ module for the hyperbolic, Euclidean and elliptic discs taken over $100$ samples.} \label{subfig:CurvatureMeanLandscapes} \end{subfigure} \begin{subfigure}[b]{1\linewidth} \includegraphics[width=\linewidth]{Curvature-Mean-Landscapes-Differences.pdf} \caption{The pointwise difference between the mean landscapes $\bar{\lambda}_{2}(1,\vec{x})$.} \label{subfig:CurvatureLandscapeDifferences} \end{subfigure} \caption{} \label{fig:Curvature-Average-Landscapes} \end{figure} \section{Conclusion} Multiparameter persistence landscapes provide a stable representation of the rank invariant of a persistence module whilst retaining its discriminating power. Moreover the landscape distance provides a computable lower bound for the optimal stable distance on persistence modules, the interleaving distance. The multiparameter landscape also offers a bridge from topological data analysis to machine learning and statistical analysis of multiparameter modules. The multiparameter landscapes, although hard to visualise in dimensions higher than $2$, are interpretable in any dimension, with large landscape values indicating features robust to changes in the filtration parameters, and non-zero landscapes for large $k$ indicating a large number of homological features. The multiparameter landscapes highlight several open questions and challenges in the development of the theory and applications of multiparameter persistent homology that we would be interested to see addressed: \begin{enumerate} \item We would like to understand the relationship between the interleaving distance and the landscape distance associated to modules, in particular when the landscape distance provides a good lower bound estimate. 
\item We have restricted our invariant to the discriminating power of the rank invariant. We would be interested to see if we could combine our landscapes with invariants that capture the more subtle relationships between features born at incomparable parameter values. \item The bootstrap method has been used to compute confidence bands for single parameter persistence landscapes \cite{Chazal2014StochasticCO}. We would be interested in applying a similar analysis to multiparameter landscapes. \end{enumerate} Finally, it is worth remarking that the construction of multiparameter persistence landscapes from multiparameter persistence modules can be generalised to produce stable invariants of generalised persistence modules indexed over other posets. Provided the indexing poset $P$ is equipped with a superlinear family of translations $\Omega$, one can derive a landscape function $\lambda: \mathbb{N}\times P \to \overline{\mathbb{R}}$ from the rank function $\rk : P\times P \to \mathbb{N}$. This landscape equipped with the supremum norm is stable with respect to the interleaving distance induced by the superlinear family, and provides an interpretable, stable representation of the rank function. This vectorisation may prove a useful invariant should the computation of generalised persistence modules be developed in future work.
\section{Introduction} The Unruh effect \cite{Dav-Unr,crispino} predicts that the Minkowski vacuum as seen by an observer accelerating uniformly will appear as a warm gas emitting black-body radiation at the \textit{Unruh temperature}. The Unruh effect produces a decoherence-like effect \cite{Omkar:2014hba}. It degrades the quantum information shared between an inertial observer and an accelerated observer, as seen in the latter's frame, in the case of bosonic or Dirac field modes \cite{AM03, AFM+2006, Tian2012}. Studies of the Unruh effect form part of the endeavour to understand relativistic aspects of quantum information \cite{czachor1997,peres1,caban1, Bruschi:2012uf,Friis:2012cx,Lee:2014kaa,Banerjee:2014vga,Alok:2014gya,Banerjee:2015mha}; see for example the review \cite{Peres:2002wx}. In this work we take up the problem of studying Fisher information and its variant, the Skew information \cite{fisher,cramer,frieden}, for the Unruh effect on a Dirac field mode in the context of open quantum systems \cite{bruer,alicki,weiss}. The Fisher information plays a key role in the estimation of unknown state parameters and provides a lower bound on the error of estimation \cite{helstrom}. Estimation of initial state parameters has been of interest for quite some time \cite{petz}, and in recent years this approach has been turned towards state estimation in the context of open quantum systems \cite{nori,blandeau}. Another variant of the Fisher information is the Skew information, which is related to the infinitesimal form of the quantum Hellinger distance \cite{wigner}. In recent times the Skew information has been shown to satisfy a number of properties relevant to coherence in a system \cite{luo,girolami,vedral}. The quantum Fisher and Skew information are two different aspects of the classical Fisher information in the quantum regime \cite{luo2}, with the Skew and Fisher information being related to the Hellinger and Bures distance, respectively \cite{bures,uhlman}. 
These notions have also been used in recent times to provide a diagnostic for the general evolution of the quantum system, that is, whether the dynamics is Markovian or non-Markovian \cite{nonmarkov}. Here, we develop a Bloch vector representation characterizing the Unruh channel acting on a qubit, to provide analytical expressions for quantum Fisher and Skew information, both with and without external noises. For the external noises, we take the experimentally relevant \cite{haroche,turchette} purely dephasing QND (quantum non-demolition) \cite{sbrg} as well as the squeezed generalized amplitude damping (SGAD) noise \cite{sbsrik,sbomkar}. The QND channel is a purely quantum effect incorporating decoherence without dissipation, while the SGAD channel is a very general noisy channel in that it incorporates both the effects of finite temperature and bath squeezing. We observe the non-trivial interplay between temperature and bath squeezing in the Fisher and Skew information. In particular, it is observed that in some regimes squeezing can play a constructive role in enhancing the information against the deteriorating influence of temperature. The plan of the work is as follows. In Sec. II we briefly discuss the importance of quantum Fisher information in the context of estimation theory and motivate the use of the Bloch vector formalism for the study of the Unruh effect with(out) external noises. We then develop the Bloch vector formalism characterizing the Unruh channel. In the next section quantum Fisher information for the Unruh channel without any external noise is studied. In Sec. V we extend the above by incorporating the effect of external noises, both the purely dephasing phase damping as well as SGAD channels. Since the Skew information is another variant of the quantum Fisher information, we probe Skew information both for the pure Unruh channel as well as for the cases where the channel is affected by external noises, QND as well as SGAD. 
Finally, we present our conclusions. \section{Quantum Fisher Information in the Bloch vector formalism} With the advent of experimental progress, estimation theory has become a powerful tool for activities such as state reconstruction, tomography and metrology \cite{paris}. Quantum Fisher information plays a prominent role in these activities, where a question of central importance is the determination of an unknown parameter characterizing the system and the reduction of the error in this estimation. Its roots lie in the famous Cramer-Rao bound \cite{cramer,rao}, which sets a fundamental bound on the efficiency of the estimation problem. Quantum Fisher information is the quantum counterpart of this bound. It is well known that the Unruh process is an inherently noisy one. Thus it is of interest to have an understanding of this process from the perspective of the estimation problem; hence the motivation for a study of the Unruh effect using Fisher information. Efforts have been made along these directions, for example, in \cite{nori}, where a systematic study was made of the problem of Fisher information in the presence of a number of well-known noisy channels such as the phase damping channel, which is essentially a QND channel, and the generalized amplitude damping (GAD) channel, which is a subset of the SGAD channel. A similar study was also made in \cite{blandeau}, where the problem of estimating probe states with the best resistance to noise was studied. In both of these works, the geometric visualization offered by the Bloch vector formalism was used in estimating the Fisher information. In \cite{yao} these studies were applied to the problem of the pure Unruh effect both for the scalar as well as the Dirac field mode. Here, using the Kraus operators characterizing the pure Unruh channel, we develop a Bloch vector treatment of the quantum Fisher information of the Unruh effect of a Dirac field mode. 
Further, we study the effect of external noises, both QND as well as SGAD, on the Fisher estimation. This enables us to study the interplay between the external temperature and reservoir squeezing, a quantum correlation, in the Fisher information. Our constructions are analytic in nature. Quantum Fisher information in terms of the Bloch vector $\vec{\zeta}(\alpha)$ is given by \cite{nori,blandeau} \begin{equation} F_q(\alpha) = \frac{\left[\vec{\zeta}(\alpha) \cdot \partial_{\alpha}\vec{\zeta}(\alpha) \right]^2}{1-|\vec{\zeta}(\alpha)|^2} + \left[\partial_{\alpha}\vec{\zeta}(\alpha)\right]^2\,, \label{fisher} \end{equation} where $q$ denotes quantum and $\alpha$ is the parameter to be estimated, for example, the polar and azimuthal angles $\theta$ and $\phi$, respectively, of a qubit. From here on we will abbreviate $F_q(\alpha)$ by $F_{\alpha}$. We will make use of this in our work. \section{Bloch vector formalism for Unruh channel} Here we provide a sketch of the tools required to obtain a Bloch vector representation of the Unruh channel. The basic ingredient that goes into this endeavour is the Choi theorem \cite{choi,debbie}, which is applied here by considering the maximally entangled state of two Dirac field modes, $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$, in which the second mode is Unruh accelerated. This results in \cite{Omkar:2014hba} \begin{equation} \rho_U=\frac{1}{2}\left( \begin{array}{cclr} \cos^2r&0&0&\cos r\\ 0&\sin^2r&0&0\\ 0&0&0&0\\ \cos r&0&0&1 \end{array}\right), \label{eq:uchoi} \end{equation} where $r$ is the Unruh parameter given by $\cos r=\frac{1}{\sqrt{e^{-\frac{2\pi\omega c}{a_u}}+1}}$. Here $a_u$ is the uniform Unruh acceleration and $\omega$ is the Dirac particle frequency. As $a_u$ ranges from $\infty$ to 0, $\cos r \in[\frac{1}{\sqrt{2}},1]$. 
The spectral decomposition of the above state gives \begin{equation} \rho_U=\sum_{j=0}^3|\xi_j\rangle\langle\xi_j|, \label{spectral} \end{equation} where $|\xi_j\rangle$ are the eigenvectors normalized to the value of the eigenvalue. Choi's theorem \cite{choi,debbie}, by making use of channel-state duality, then provides a route to obtaining the Kraus operators relevant to the channel generating the state in Eq.~\ref{eq:uchoi}. Essentially, each $|\xi_j\rangle$ yields a Kraus operator obtained by folding the $d^2$ elements of the eigenvector into a $d\times d$ matrix, by taking each sequential $d$-element segment of $|\xi_j\rangle$, writing it as a column, and then juxtaposing these columns to form the matrix \cite{debbie}. Here $d=2$. Spectral decomposition of $\rho_U$, Eq.~\ref{spectral}, yields the following eigenvectors, corresponding to the two non-vanishing eigenvalues \begin{eqnarray} |\xi_0\rangle&=&(\cos r,0,0,1),\nonumber\\ |\xi_1\rangle&=&(0,\sin r,0,0). \end{eqnarray} A straightforward application of Choi's theorem now yields the following Kraus operators for the Unruh channel $\mathcal{E}_U$ as \begin{equation} {\mathcal K}^U_1=\left( \begin{array}{cclr} \cos r&0\\ 0&1 \end{array}\right);~~ {\mathcal K}^U_2=\left( \begin{array}{cclr} 0&0\\ \sin r&0 \end{array}\right), \label{eq:relcha} \end{equation} whereby \begin{equation} \mathcal{E}_U(\rho) = \sum_{j=1,2} {\mathcal K}^U_j \rho \left({\mathcal K}^U_j\right)^\dag, \label{eq:unrucha} \end{equation} with the completeness condition \begin{equation} \sum_{j=1,2} \left({\mathcal K}^U_j\right)^\dag {\mathcal K}^U_j = \mathbb{I}. \end{equation} From the above Kraus representation, it would appear that the Unruh channel is formally similar to an amplitude damping (AD) channel \cite{NC00}, which models the effect of a zero temperature bath \cite{NC00,sbsrik,sbrg}. 
This is surprising, as the Unruh effect corresponds to a finite temperature and would naively be expected to correspond to finite temperature channels such as the GAD or SGAD channels. Any two-level system can be represented in the Bloch vector formalism as \begin{equation} \rho = \frac{1}{2} \l( \mathbb{I}+ \vec{\zeta} \cdot \vec{\sigma} \r)\,, \label{bloch} \end{equation} where $\vec{\sigma}$ is the vector of standard Pauli matrices. For the initial state $\rho= |0\rangle\langle0|\cos^2\frac{\theta}{2} + |0\rangle\langle1|e^{i\phi}\cos\frac{\theta}{2}\sin\frac{\theta}{2} + |1\rangle\langle0|e^{-i\phi}\cos\frac{\theta}{2}\sin\frac{\theta}{2} + |1\rangle\langle1|\sin^2\frac{\theta}{2},$ the Bloch vector can be seen to be $\zeta_0 = \l(\cos \phi \sin\theta,\, -\sin \phi \sin \theta,\, \cos \theta\r)$. Evolving this state under the Unruh channel, characterized by the above Kraus operators, leads to a state, which could be called the Unruh-Dirac (UD) qubit state, whose Bloch vector is \begin{equation} \vec{\zeta}= \left(\begin{array}{clr} \cos r \cos \phi \sin \theta \\ \vspace{.2cm} - \cos r \sin \phi \sin \theta \\ \vspace{.2cm} \cos^2r \cos\theta - \sin^2r \end{array}\right)=A \vec{\zeta_0}+C. \label{bloch-unruh} \end{equation} From this, $A$ and $C$ can be found to be \begin{equation} A= \left(\begin{array}{clclclr} \cos r & 0 & 0\\ \vspace{.2cm} 0 & \cos r & 0 \\ 0 & 0 & \cos^2 r \end{array}\right), \quad C= \left(\begin{array}{clr} 0 \\ \vspace{.2cm} 0\\ \vspace{.2cm} -\sin^2 r \end{array}\right). \label{AC} \end{equation} This, we believe, is a new result: the $A$ and $C$ matrices completely characterize the Unruh channel and will be used in the investigations below. \section{Quantum Fisher information for Unruh channel without external noise} That the Unruh channel is an inherently noisy channel is made explicit by its Kraus representation, Eqs.~(\ref{eq:relcha}) and (\ref{eq:unrucha}). Here we will estimate the UD qubit state using quantum Fisher information. 
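The affine characterization $\vec{\zeta}=A\vec{\zeta_0}+C$ can be cross-checked against the direct Kraus action (a numerical sketch with our own helper names, assuming the phase convention of the initial state above):

```python
import numpy as np

# Pauli matrices and a Bloch-vector helper
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(rho):
    return np.real(np.array([np.trace(rho @ s) for s in (sx, sy, sz)]))

r, theta, phi = 0.5, 1.1, 0.7
K1 = np.array([[np.cos(r), 0], [0, 1]], dtype=complex)
K2 = np.array([[0, 0], [np.sin(r), 0]], dtype=complex)

# Initial qubit state (rho_01 = e^{i phi} cos(theta/2) sin(theta/2))
psi = np.array([np.cos(theta / 2), np.exp(-1j * phi) * np.sin(theta / 2)])
rho0 = np.outer(psi, psi.conj())
rho = K1 @ rho0 @ K1.conj().T + K2 @ rho0 @ K2.conj().T  # Unruh channel output

# Affine map of Eq. (AC): zeta = A zeta_0 + C should equal bloch(rho)
A = np.diag([np.cos(r), np.cos(r), np.cos(r) ** 2])
C = np.array([0.0, 0.0, -np.sin(r) ** 2])
zeta = A @ bloch(rho0) + C
```

For any $(r,\theta,\phi)$ the vector `zeta` coincides with the Bloch vector of the channel output, confirming Eqs.~(\ref{bloch-unruh}) and (\ref{AC}).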
For this purpose we will make use of $A$ and $C$ from Eq.~(\ref{AC}) and $\vec{\zeta}$ from Eq.~(\ref{bloch-unruh}) as inputs in Eq.~(\ref{fisher}). When no external noise is acting, i.e., for the case of the pure Unruh channel, it can be seen that \begin{eqnarray} F_\theta&=&\cos^2r,\nonumber\\ F_\phi&=&\cos^2r\sin^2\theta. \label{pureunruh} \end{eqnarray} The Fisher information with respect to the parameter $\theta$, $F_\theta$, is independent of the state parameter $\theta$, while the Fisher information with respect to the parameter $\phi$, $F_\phi$, is state dependent and depends upon $\theta$. It should be noted that neither of these expressions of the Fisher information has any $\phi$ dependence. It can also be observed from the above expressions that the Fisher information cannot be increased by increasing the Unruh acceleration. This is consistent with the fact that the Unruh acceleration produces a thermal-like effect, and quantum estimation would be expected not to improve with increasing temperature. This is also evident from Fig.~\ref{fig:without-noise}, where $F_{\theta}$ is plotted as a function of the Unruh parameter $r$, whereas $F_{\phi}$ is plotted with respect to $r$ and $\theta$. As $r$ goes from $\pi/4$ to 0, i.e., $\cos r$ goes from $1/\sqrt{2}$ to 1, which implies the Unruh acceleration $a_u$ decreasing from infinity to zero, $F_{\theta}$ increases to 1. Since the Unruh acceleration is directly proportional to temperature, as the acceleration decreases, the temperature also decreases and the quantum Fisher information increases. This is also seen for $F_\phi$, albeit only for $\theta=\pi$. 
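The closed forms of Eq.~(\ref{pureunruh}) can be verified numerically from Eq.~(\ref{fisher}), using finite-difference derivatives of the Bloch vector (a sketch of our own; the step size `eps` is an implementation choice):

```python
import numpy as np

def unruh_bloch(r, theta, phi):
    # Bloch vector of the Unruh-Dirac qubit, Eq. (bloch-unruh)
    return np.array([np.cos(r) * np.cos(phi) * np.sin(theta),
                     -np.cos(r) * np.sin(phi) * np.sin(theta),
                     np.cos(r) ** 2 * np.cos(theta) - np.sin(r) ** 2])

def fisher(zeta_fn, alpha, eps=1e-6):
    # Eq. (fisher), with the derivative taken by central finite differences
    z = zeta_fn(alpha)
    dz = (zeta_fn(alpha + eps) - zeta_fn(alpha - eps)) / (2 * eps)
    return np.dot(z, dz) ** 2 / (1 - np.dot(z, z)) + np.dot(dz, dz)

r, theta, phi = 0.4, 0.9, 1.3
F_theta = fisher(lambda t: unruh_bloch(r, t, phi), theta)
F_phi = fisher(lambda p: unruh_bloch(r, theta, p), phi)
# F_theta -> cos^2 r and F_phi -> cos^2 r sin^2 theta, as in Eq. (pureunruh)
```

The two numbers agree with $\cos^2 r$ and $\cos^2 r \sin^2\theta$ for any choice of the parameters, up to the finite-difference error.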
\begin{figure*}[ht] \subfigure[]{ \includegraphics[width=0.48\textwidth]{Ftheta-r.pdf}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{Fphi-r-theta.png}} \caption{(a) Plot of $F_\theta$ with respect to the Unruh parameter $r$; (b) Plot of $F_\phi$ with respect to $r$ and $\theta$.} \label{fig:without-noise} \end{figure*} Further, it can be seen from above that $F_\theta$ depends upon the Unruh parameter $r$. It should be noted that this result is obtained for an Unruh channel, by accelerating one partner of the maximally entangled state, as indicated in the previous section. It is interesting to observe here that for an analogous study of the Unruh effect on a different state, not necessarily maximally entangled, $F_\theta$ was shown to be independent of $r$ \cite{yao}. This suggests that Fisher information, which is an important tool in state estimation, could also be used as a witness for quantum correlations. \section{Quantum Fisher information for Unruh channel with external noise} Now we will analyze the effect of external noise on the Unruh channel using the Bloch vector formalism of quantum Fisher information. For this, we consider two general external noisy channels: (a) the phase damping channel, which is of the QND kind and involves pure dephasing, and (b) the SGAD channel, which includes the effects of decoherence along with dissipation and accounts for finite bath temperature as well as squeezing. We adopt the following procedure. Starting from the UD qubit state, Eq.~(\ref{bloch-unruh}), application of the external noise channel results in \begin{equation} \rho_{\rm in} \xrightarrow{\mathcal{E}_{\rm phase/SGAD}} \rho_{\rm new}\,. \end{equation} From $\rho_{\rm new}$, we get the new Bloch vector $\vec{\zeta}_{\rm new}$, which is related to the original state Bloch vector as \begin{eqnarray} \vec{\zeta}_{\rm new} &=& A' \vec{\zeta} +C' = A'(A\vec{\zeta_0}+C)+C'\nonumber\\ &=&A'A\vec{\zeta_0} + (A'C+C')\equiv A_{\rm new}\vec{\zeta_0}+C_{\rm new}. 
\label{zetanew} \end{eqnarray} Here $\vec{\zeta}$ and $\vec{\zeta_0}$ are as in Eq.~(\ref{bloch-unruh}). From the above equation, it can be seen that the effect of the external noise channel on the Unruh channel is encoded in $A_{\rm new}=A'A$ and $C_{\rm new} = (A'C + C')$. Thus we need to find $A'$ and $C'$ for the desired channels using the Kraus operator formalism. \subsection{Phase damping channel} In the context of open quantum systems, one is interested in the dynamics of the system of interest, here the UD qubit, taking into account the effect of the ambient environment on its evolution. Let the total Hamiltonian $H$ be $H=H_S + H_R + H_{SR}$, where $H_S$, $H_R$ are the system and reservoir Hamiltonians, respectively, and $H_{SR}$ is the interaction between the two. If $[H_S, H_{SR}]=0$, the evolution involves decoherence without dissipation, that is, pure dephasing. This is a purely quantum mechanical effect, and such an interaction is called a QND interaction. The phase damping channel is a well-known noisy channel incorporating the QND interaction. The Kraus operators corresponding to the phase damping channel, modelling the QND interaction of a qubit, with the two levels having a separation of $\hbar \omega_0$, interacting with a squeezed thermal bath are \cite{sbrg} \begin{eqnarray} K_1&=&\sqrt{\frac{1+e^{-(\hbar\omega_0)^2\gamma(t)}}{2}}\left(\begin{array}{cc} e^{-i\hbar\omega_0 t}&0\\ 0&1 \end{array}\right);\nonumber\\ K_2&=&\sqrt{\frac{1-e^{-(\hbar \omega_0)^2\gamma(t)}}{2}}\left(\begin{array}{cc} -e^{-i\hbar \omega_0 t}&0\\ 0&1 \end{array}\right). 
\end{eqnarray} Assuming an Ohmic bath spectral density with an upper cut-off frequency $\omega_c$, it can be shown that \begin{widetext} \begin{eqnarray} \gamma(t)&=& \left(\frac{\gamma_0 k_B T}{\pi \hbar \omega_c}\right) \cosh(2 s) \left(2 \omega_c t\tan^{-1}(\omega_c t) + \ln\left[\frac{1}{1 + \omega_c^2 t^2} \right]\right) - \left(\frac{\gamma_0 k_B T}{2\pi \hbar \omega_c}\right) \sinh(2 s) \Bigg(\frac{}{}4 \omega_c(t - a) \tan^{-1}[2 \omega_c (t - a)]\nonumber\\ && -4 \omega_c (t-2 a) \tan^{-1}[\omega_c (t-2a)] + 4 a \omega_c \tan^{-1}(2 a \omega_c) + \ln\left[\frac{\left(1 + \omega_c^2 (t - 2 a)^2\right)^2}{1 + 4\omega_c^2(t-a)^2}\right] + \ln\left[\frac{1}{1 + 4 a^2\omega_c^2}\right]\Bigg). \end{eqnarray} \end{widetext} Here $T$ is the reservoir temperature, while $a$ and $s$ are bath squeezing parameters. For the Unruh channel in the presence of phase damping noise, the modified Bloch vector for the UD qubit, Eq.~(\ref{zetanew}), is \begin{equation} \zeta_{\textrm{new}}= \begin{pmatrix} \cos r\sin\theta\cos(\phi+\omega_0 t)e^{-(\hbar\omega_0)^2\gamma(t)/4}\\ -\cos r\sin\theta\sin(\phi+\omega_0 t)e^{-(\hbar\omega_0)^2\gamma(t)/4}\\ \cos^2r \cos\theta - \sin^2r \end{pmatrix}. \end{equation} \begin{figure*}[ht] \subfigure[]{ \includegraphics[width=0.48\textwidth]{Ftheta-qnd-t-r.png}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{Fphi-qnd-t-r.png}} \caption{(a) Variation of $F_\theta$ (Fisher information with respect to the parameter $\theta$) and (b) $F_\phi$ (Fisher information with respect to the parameter $\phi$) for QND interaction with bath for a time (t) and Unruh parameter (r). 
The parameter settings are $\theta=\pi/4$, $\phi=\pi/4$, $a=0$, $T=0.5$, $s=0.5$, $\omega_0=1$, $\omega_c=100$, $\gamma_0=0.1$.} \label{qndfisher} \end{figure*} \begin{figure*}[ht] \centering \subfigure[]{ \includegraphics[width=0.48\textwidth]{Ftheta-qnd-s-tem.png}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{Fphi-qnd-s-tem.png}} \caption{(a) Variation of $F_\theta$ (Fisher information with respect to parameter $\theta$) for QND interaction with bath temperature (T) and squeezing (s); (b) Variation of $F_\phi$ (Fisher information with respect to parameter $\phi$) for QND interaction with bath temperature (T) and squeezing (s). The parameter settings are $r=\pi/8$, $\theta=\pi/4$, $\phi=\pi/4$, $a=0$, $\omega_c=100$, $\gamma_0=0.1$, $t=2$.} \label{fisherpd} \end{figure*} The analytical expressions for the quantum Fisher information with respect to the parameters $\theta$ and $\phi$ are \begin{widetext} \begin{eqnarray} F_\theta &=& \frac{\cos^2 r \left(2(1+\cos2r) + \cos(2 r-\theta) + 4 e^{\frac{\gamma(t) (\hbar\omega_0) ^2}{2}}(\cos\theta-1)-6\cos\theta + \cos(2r+\theta) \right)} {4(1-\cos\theta) + 2e^{\frac{\gamma(t) (\hbar\omega_0) ^2}{2}}(\cos2r -3 + 2\cos^2r\cos\theta)},\nonumber\\ F_\phi &=& e^{-\frac{\gamma(t) (\hbar\omega_0) ^2}{2}} \cos^2r \sin^2\theta, \end{eqnarray} \end{widetext} respectively. The above expressions of the Fisher information reduce, for $\gamma(t)=0$, to their pure Unruh counterparts in Eq.~(\ref{pureunruh}). Also, both $F_{\theta}$ and $F_{\phi}$ are independent of the azimuthal angle $\phi$, as in the pure Unruh case. From Fig.~\ref{qndfisher} it can be seen that both $F_\theta$ and $F_\phi$ decrease with time as well as with increasing Unruh acceleration, parametrized by $r$. However, for $r<0.2$, $F_\theta$ remains stable under time evolution. In Fig.~\ref{fisherpd}, profiles of $F_{\theta}$ and $F_{\phi}$ with respect to $T$ and $s$ are depicted. It is evident that the Fisher information decreases with increasing $T$. 
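The compact form of $F_\phi$ above follows directly from the modified Bloch vector and can be checked numerically (a sketch of our own; we abbreviate $\gamma(t)(\hbar\omega_0)^2$ by a single number `G` rather than evaluating the full Ohmic-bath expression):

```python
import numpy as np

def qnd_bloch(r, theta, phi, G, w0t=0.8):
    # Bloch vector of the UD qubit under phase damping; G stands for
    # gamma(t) * (hbar*omega_0)^2, treated here as one free parameter.
    k = np.exp(-G / 4.0)
    return np.array([np.cos(r) * np.sin(theta) * np.cos(phi + w0t) * k,
                     -np.cos(r) * np.sin(theta) * np.sin(phi + w0t) * k,
                     np.cos(r) ** 2 * np.cos(theta) - np.sin(r) ** 2])

def fisher(zeta_fn, alpha, eps=1e-6):
    # Eq. (fisher) with a central finite-difference derivative
    z = zeta_fn(alpha)
    dz = (zeta_fn(alpha + eps) - zeta_fn(alpha - eps)) / (2 * eps)
    return np.dot(z, dz) ** 2 / (1 - np.dot(z, z)) + np.dot(dz, dz)

r, theta, phi, G = 0.4, 0.9, 1.3, 0.5
F_phi = fisher(lambda p: qnd_bloch(r, theta, p, G), phi)
# expected: exp(-G/2) * cos^2 r * sin^2 theta; at G = 0 this reduces
# to the pure Unruh value cos^2 r sin^2 theta
```

The numerical value matches $e^{-\gamma(t)(\hbar\omega_0)^2/2}\cos^2r\sin^2\theta$ to finite-difference accuracy.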
Also, squeezing is seen to have a depleting effect on the Fisher information. \subsection{SGAD channel} Usually, $[H_S, H_{SR}]\neq0$, implying decoherence along with dissipation. Lindbladian evolution \cite{lindblad,sudharshan,alicki} is a general class of evolutions which incorporates the effects of decoherence and dissipation. The SGAD channel is a very general Lindbladian noisy channel incorporating the effects of bath squeezing, dissipation and decoherence. The Kraus operators for this channel are \cite{sbsrik,sbomkar} \begin{eqnarray} K_{1} &\equiv& \sqrt{p_1}\left[\begin{array}{ll} \sqrt{1-\alpha} & 0 \\ 0 & 1 \end{array}\right], \nonumber\\ K_{2} &\equiv& \sqrt{p_1}\left[\begin{array}{ll} 0 & 0 \\ \sqrt{\alpha} & 0 \end{array}\right], \nonumber\\ K_{3} &\equiv& \sqrt{p_2}\left[\begin{array}{ll} \sqrt{1-\mu} & 0 \\ 0 & \sqrt{1-\nu} \end{array}\right], \nonumber\\ K_{4} &\equiv& \sqrt{p_2}\left[\begin{array}{ll} 0 & \sqrt{\nu} \\ \sqrt{\mu}e^{-i\phi_s} & 0 \end{array}\right], \label{eq:srikraus} \end{eqnarray} where $p_1+p_2=1$ \cite{sbsrik}, and \begin{widetext} \begin{eqnarray} p_2 &=& \frac{1}{(A+B-C-1)^2-4D} \times \left[A^{2} B + C^{2} + A(B^{2} - C - B(1+C)-D) - (1+B)D - C(B+D-1) \nonumber \right. \\ && \pm \left. 2\sqrt{D(B-A B+(A-1)C+D)(A-A B+(B-1)C+D)}\right], \label{eq:p2} \end{eqnarray} \end{widetext} with \begin{eqnarray} A &=& \frac{2N+1}{2N} \frac{\sinh^2(\gamma_0 at/2)} {\sinh(\gamma_0(2N+1)t/2)} \exp\left(-\gamma_0(2N+1)t/2\right),\nonumber \\ B &=& \frac{N}{2N+1}(1-\exp(-\gamma_0(2N+1)t)), \nonumber \\ C &=& A + B + \exp(-\gamma_0 (2N+1)t),\nonumber \\ D &=& \cosh^2(\gamma_0 at/2)\exp(-\gamma_0(2N+1)t). 
\label{eq:auxip2} \end{eqnarray} Also, \begin{eqnarray} \nu &=& \frac{N}{(p_2)(2N+1)}(1-e^{-\gamma_0(2N+1)t}),\nonumber \\ \mu &=& \frac{2N+1}{2(p_2) N}\frac{\sinh^2(\gamma_0at/2)}{\sinh(\gamma_0(2N+1)t/2)} \exp\left(-\frac{\gamma_0}{2}(2N+1)t\right),\nonumber \\ \alpha &=& \frac{1}{p_1}\left(1 - p_2[\mu(t)+\nu(t)] - e^{-\gamma_0(2N+1)t}\right). \label{eq:nu} \end{eqnarray} Further, $ N = N_{\rm th}[\cosh^2(s) + \sinh^2(s)] + \sinh^2(s),~~a=\sinh(2s)( 2N_{\rm th}+1)$, where $N_{\rm th}= 1/(e^{\hbar \omega_0/k_B T} - 1)$ is the Planck distribution giving the number of thermal photons at the frequency $\omega_0$, while $s$ and $\phi_s$ are bath squeezing parameters. \begin{figure*}[ht] \subfigure[]{ \includegraphics[width=0.48\textwidth]{Ftheta-sgad-t-r.png}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{Fphi-sgad-t-r.png}} \caption{(a) Variation of $F_\theta$ (Fisher information with respect to the parameter $\theta$) and (b) $F_\phi$ (Fisher information with respect to the parameter $\phi$) for SGAD interaction with bath interaction time (t) and Unruh parameter (r). The parameter settings are $T=0.5$, $s=0.5$, $\theta=\pi/4$, $\phi=\pi/4$, $\phi_s=0$, $\omega_0=0.1$, $\gamma_0=0.1$.} \label{sgadskew} \end{figure*} Under the action of the SGAD channel, the effective Bloch vector, Eq.~(\ref{zetanew}), becomes \begin{widetext} \begin{equation} \zeta_{\textrm{new}}= \begin{pmatrix} \cos r\sin\theta\left(\left(p_1\sqrt{1-\alpha }+p_2\sqrt{(1-\mu )(1-\nu )}\right)\cos\phi+p_2\sqrt{\mu\nu}\cos(\phi-\phi_s)\right)\\ -\cos r\sin\theta\left(\left(p_1\sqrt{1-\alpha }+p_2\sqrt{(1-\mu )(1-\nu )}\right)\sin\phi-p_2\sqrt{\mu \nu }\sin(\phi-\phi_s)\right)\\ \left(1-2p_1\alpha -2p_2\mu \right)\cos^2r\cos^2\frac{\theta}{2}-\left(1-2p_2\nu \right)\left(\sin^2r\cos^2\frac{\theta}{2}+\sin^2\frac{\theta}{2}\right)\\ \end{pmatrix}. 
\end{equation} \end{widetext} As a result, the Fisher information with respect to the UD qubit parameters $\theta$ and $\phi$, i.e., $F_\theta$ and $F_\phi$, respectively, can be shown to be \begin{widetext} \begin{eqnarray} F_\theta&=&\frac{\cos^2\theta(\mathcal{A}_+^2+\mathcal{B}_+^2)+\sin^2\theta~\mathcal{C}^2 +\left(\mathcal{C}\mathcal{D}+ (\mathcal{A}_+ + \mathcal{B}_+)\cos\theta\right)^2\sin^2\theta} {1- (\mathcal{F} - \mathcal{C}\cos^2\frac{\theta }{2})^2 -(\mathcal{A}_+^2+\mathcal{B}_+^2)\sin^2\theta},\nonumber\\ F_\phi&=&\frac{\sin^2\theta(\mathcal{A}_-^2+\mathcal{B}_-^2)+(\mathcal{A}_+\mathcal{B}_-+\mathcal{A}_-\mathcal{B}_+)^2\sin^4\theta} {1- (\mathcal{F} - \mathcal{C}\cos^2\frac{\theta }{2})^2 -(\mathcal{A}_+^2+\mathcal{B}_+^2)\sin^2\theta}. \label{fis-sgad} \end{eqnarray} \end{widetext} Here \begin{eqnarray} \mathcal{A}_\pm&=&\left[\left(p_1 \sqrt{1-\alpha }+p_2 \sqrt{(\mu-1 ) (\nu-1 )}~\right) \sin\phi\right. \nonumber\\ && \left.\mp p_2 \sqrt{\mu \nu } \sin[\phi -\phi_s]\frac{}{}\right]\cos r,\nonumber\\ \mathcal{B}_\pm&=&\left[\left(p_1 \sqrt{1-\alpha }+p_2 \sqrt{(\mu-1 ) (\nu-1 )}~\right) \cos\phi\right. \nonumber\\ && \left.\pm p_2 \sqrt{\mu \nu } \cos[\phi -\phi_s]\frac{}{}\right]\cos r,\nonumber\\ \mathcal{C}&=&\left[ p_1 (\alpha-1 )+p_2 (2 \nu-1 )\right]\cos^2r,\nonumber\\ \mathcal{D}&=& (1-2 p_1 \alpha -2 p_2 \mu ) \cos^2r \cos^2\frac{\theta }{2} \nonumber\\ &&- (1-2 p_2 \nu ) \left(\cos^2\frac{\theta }{2}\sin^2r+\sin^2\frac{\theta }{2}\right),\nonumber\\ \mathcal{F}&=&-(p_1\alpha+p_2\mu)\cos^2r\cos^2\frac{\theta }{2}\nonumber\\ && +\frac{1}{2}(1-2p_2\nu)\left(-1+\cos\theta-2\sin^2r\cos^2\frac{\theta }{2}\right). \label{eq:reduction} \end{eqnarray} In the absence of external noise, $F_\theta$ and $F_\phi$, Eq.~(\ref{fis-sgad}), reduce to Eq.~(\ref{pureunruh}), corresponding to the pure Unruh channel. 
\begin{figure*}[ht] \subfigure[]{ \includegraphics[width=0.48\textwidth]{Ftheta-sgad-s-tem.png}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{Fphi-sgad-s-tem.png}} \caption{(a) Variation of $F_\theta$ (Fisher information with respect to parameter $\theta$) for SGAD interaction with bath temperature (T) and squeezing (s); (b) Variation of $F_\phi$ (Fisher information with respect to parameter $\phi$) for SGAD interaction with bath temperature (T) and squeezing (s). The parameter settings are $r=\pi/8$, $\theta=\pi/4$, $\phi=\pi/4$, $\phi_s=0$, $\omega_0=0.1$, $\gamma_0=0.1$, $t=2$.} \label{fishersgad} \end{figure*} From Fig.~\ref{sgadskew}, it is evident that both $F_\theta$ and $F_\phi$ decrease with time for all values of $r$. This is in contrast with the corresponding behaviour of $F_\theta$ in the presence of the phase damping channel. The fall in $F_\phi$ here is more dramatic than for its phase damping counterpart. The variations of $F_\theta$ and $F_\phi$ with respect to temperature and squeezing are shown in Fig.~\ref{fishersgad}. The Fisher information shows an interesting behaviour with respect to squeezing. For a given $T>0.5$, as $s$ becomes nonzero, both $F_\theta$ and $F_\phi$ show a general trend of increasing and then stabilizing beyond $|s|=1$. To summarize, squeezing turns out to be a useful quantum resource here, in that it enhances the resilience of the quantum system to the effects of the external noisy channel. Another observed feature is that, as the temperature due to the external noise channel increases, the pattern of $F_\phi$ with respect to the Unruh parameter $r$ and state parameter $\theta$, as seen in Fig.~\ref{fig:without-noise}(b), remains unchanged, although the magnitude of $F_\phi$ decreases, with the depletion being more dramatic for the SGAD channel than for the QND channel. 
\section{Skew Information} Another variant of the Fisher information, which accounts for the amount of information in the quantum state with respect to its non-commutation with a conserved quantity, is the Skew information \cite{wigner,luo2,nori,pires}. This can be shown to have a metrical structure given by the quantum Hellinger distance \cite{luo2}, which in turn is related to the quantum affinity and is intrinsically connected to the quantum Chernoff distance \cite{wei14}. In this sense the Skew and Fisher information are variants of the same fundamental quantity, with the Fisher information deriving its metrical origin from the Bures distance \cite{uhlman}. Recently there has been a lot of activity concerning the connection between the Skew information and quantum coherence \cite{girolami,vedral}. We thus find it instructive to compute the Skew information for the present problem of the Unruh channel, with and without the influence of external noisy channels. The Skew information in terms of the Bloch vector $\vec{\zeta}(\alpha)$ is given by \begin{eqnarray} S_q(\alpha) &=& \frac{2|\partial_{\alpha}\vec{\zeta}(\alpha)|^2}{1+\sqrt{1-|\vec{\zeta}(\alpha)|^2}} + \left[\vec{\zeta}(\alpha)\cdot\partial_{\alpha}\vec{\zeta}(\alpha)\right]^2 \nonumber\\ && \times \left( \frac{1}{1-|\vec{\zeta}(\alpha)|^2}-\frac{1}{1+\sqrt{1-|\vec{\zeta}(\alpha)|^2}} \right), \label{skew} \end{eqnarray} where $q$ denotes quantum and $\alpha$ is the parameter to be estimated, for example, the polar and azimuthal angles $\theta$ and $\phi$, respectively, of the UD qubit. From here on we will abbreviate $S_q(\alpha)$ by $S_{\alpha}$. 
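Eq.~(\ref{skew}) can be implemented in the same finite-difference style as the Fisher information (a sketch of our own); for the pure Unruh channel the result agrees with the closed form of $S_\phi$ given in Eq.~(\ref{skew-unr}) below:

```python
import numpy as np

def skew(zeta_fn, alpha, eps=1e-6):
    # Skew information of Eq. (skew) for a qubit Bloch vector
    z = zeta_fn(alpha)
    dz = (zeta_fn(alpha + eps) - zeta_fn(alpha - eps)) / (2 * eps)
    n2 = np.dot(z, z)
    root = np.sqrt(1 - n2)
    return (2 * np.dot(dz, dz) / (1 + root)
            + np.dot(z, dz) ** 2 * (1 / (1 - n2) - 1 / (1 + root)))

def unruh_bloch(r, theta, phi):
    # Bloch vector of the UD qubit, Eq. (bloch-unruh)
    return np.array([np.cos(r) * np.cos(phi) * np.sin(theta),
                     -np.cos(r) * np.sin(phi) * np.sin(theta),
                     np.cos(r) ** 2 * np.cos(theta) - np.sin(r) ** 2])

r, theta, phi = 0.4, 0.9, 1.3
S_phi = skew(lambda p: unruh_bloch(r, theta, p), phi)
# matches 2 cos^2 r sin^2(theta) / (1 + cos^2(theta/2) sin 2r)
```

Since $\vec{\zeta}\cdot\partial_\phi\vec{\zeta}=0$ for the pure Unruh channel, only the first term of Eq.~(\ref{skew}) contributes to $S_\phi$.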
\begin{figure*}[ht] \subfigure[]{ \includegraphics[width=0.48\textwidth]{sk_theta-r-theta.png}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{sk_phi-r-theta.png}} \caption{For the pure Unruh channel: (a) Variation of $S_\theta$ (Skew information with respect to the parameter $\theta$); (b) Variation of $S_\phi$ (Skew information with respect to the parameter $\phi$).} \label{unruhskew} \end{figure*} Using the Bloch vector $\vec{\zeta}(\alpha)$, Eq.~(\ref{bloch-unruh}), the Skew information for the pure Unruh channel with respect to the parameters $\theta$ and $\phi$, i.e., $S_\theta$ and $S_\phi$, respectively, can be shown to be \begin{eqnarray} S_\theta &=& \frac{ \cos^2r \left(7+2 \cos2 \theta + 8 \cos^2\frac{\theta}{2}\sin2r +2\cos2r \sin^2 \theta\right)} {4\left(1+ \cos^2\frac{\theta}{2}\sin2r\right)^2},\nonumber\\ S_\phi &=& \frac{2 \cos^2r \sin^2\theta}{1+\cos^2\frac{\theta }{2} \sin 2r}. \label{skew-unr} \end{eqnarray} Unlike the analogous case of the Fisher information $F_\theta$ for the pure Unruh channel, Eq.~(\ref{pureunruh}), we see that $S_\theta$ depends both on $r$ and $\theta$. From Fig.~\ref{unruhskew}, it is seen that the Skew information for the pure Unruh channel, with respect to the parameter $\theta$, decreases with increase in the Unruh parameter $r$, a behaviour which is consistent with that of its Fisher counterpart. However, in contrast to the Fisher information, for a given $r$, there is a general trend of increase in $S_{\theta}$ as $\theta$ goes from 0 to $2\pi$. This increase is more dramatic for higher values of the Unruh acceleration. The behaviour of $S_\phi$ is similar to that of its Fisher counterpart $F_{\phi}$, Fig.~\ref{fig:without-noise}(b). However, for higher values of $r$ ($>0.5$), $S_\phi$ has a steeper fall than its corresponding $F_\phi$, Fig.~\ref{fig:without-noise}. 
\subsection{Phase Damping} \begin{figure*}[ht] \subfigure[]{ \includegraphics[width=0.48\textwidth]{Stheta-qnd-t-r.png}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{Sphi-qnd-t-r.png}} \caption{(a) Variation of $S_\theta$ (Skew information with respect to the parameter $\theta$) and (b) $S_\phi$ (Skew information with respect to the parameter $\phi$) for QND interaction with bath interacting for a time (t) and Unruh parameter (r). The parameter settings are $T=0.5$, $s=0.5$, $\theta=\pi/4$, $\phi=\pi/4$, $a=0$, $\omega_0=1$, $\omega_c=100$, $\gamma_0=0.1$.} \label{qndskew} \end{figure*} The Skew information with respect to the parameters $\theta$ and $\phi$, $S_{\theta}$ and $S_{\phi}$, due to the influence of the phase damping (QND) noise channel on the UD qubit, are given by \begin{eqnarray} S_\theta &=& \frac{2\cos^2r \left(e^{-\frac{1}{2}\gamma(\hbar\omega_0) ^2}\cos^2\theta+ \cos^2r\sin^2\theta \right)} {1+\sqrt{1-\mathcal{H}}}\nonumber\\ &&+\mathcal{G}^2\sin^2\theta \left(\frac{1}{1-\mathcal{H}}-\frac{1}{1+\sqrt{1-\mathcal{H}}}\right),\nonumber\\ S_\phi &=& \frac{2 e^{-\frac{\gamma (\hbar\omega_0) ^2}{2}} \cos^2 r\sin^2\theta} {1+\sqrt{1-\mathcal{H}}}. \end{eqnarray} Here $\mathcal{G}$ and $\mathcal{H}$ are \begin{eqnarray} \mathcal{G}&=&e^{-\gamma(\hbar\omega_0) ^2}\cos^4r (\cos\theta+ e^{\frac{1}{2}\gamma(\hbar\omega_0) ^2}(\sin^2r-\cos^2r\cos\theta)),\nonumber\\ \mathcal{H}&=&(\sin^2 r-\cos^2 r \cos\theta)^2+e^{-\frac{\gamma (\hbar\omega_0) ^2}{2}} \cos^2 r \sin^2\theta. \end{eqnarray} \begin{figure*}[ht] \subfigure[]{ \includegraphics[width=0.48\textwidth]{Stheta-qnd-tem-s.png}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{Sphi-qnd-s-tem.png}} \caption{(a) Variation of $S_\theta$ (Skew information with respect to the parameter $\theta$) and (b) $S_\phi$ (Skew information with respect to the parameter $\phi$) for QND interaction with bath at temperature (T) and squeezing (s). 
The parameter settings are $r=\pi/8$, $\theta=\pi/4$, $\phi=\pi/4$, $a=0$, $\omega_0=1$, $\omega_c=100$, $\gamma_0=0.1$, $t=2$.} \label{qndskew-ts} \end{figure*} In the absence of external noise, $S_\theta$ and $S_\phi$ reduce to their pure Unruh counterparts in Eq.~(\ref{skew-unr}). The variations of the Skew informations $S_\theta$ and $S_\phi$ with respect to the evolution time $t$ and Unruh parameter $r$, and with respect to temperature (T) and squeezing (s), are depicted in Fig.~\ref{qndskew} and Fig.~\ref{qndskew-ts}, respectively. Once more we see that the behaviour of the Skew information is similar to that of its Fisher counterpart. \subsection{SGAD} As a result of the action of the SGAD channel on the UD qubit, the Skew information with respect to the parameters $\theta$ and $\phi$ are respectively given by \begin{widetext} \begin{eqnarray} S_\theta&=&\frac{2\left(\cos^2\theta(\mathcal{A}_+^2+\mathcal{B}_+^2)+\mathcal{C}^2\sin^2\theta\right)} {1+\sqrt{1- (\mathcal{F} - \mathcal{C}\cos^2\frac{\theta }{2})^2 -(\mathcal{A}_+^2+\mathcal{B}_+^2)\sin^2\theta}} +\left(\mathcal{C}\mathcal{D}\sin\theta +(\mathcal{A}_+^2+\mathcal{B}_+^2)\sin\theta\cos\theta\right)^2\times\nonumber\\ &&\left(\frac{1}{1- (\mathcal{F} - \mathcal{C}\cos^2\frac{\theta }{2})^2 -(\mathcal{A}_+^2+\mathcal{B}_+^2)\sin^2\theta}-\frac{1}{1+\sqrt{1- (\mathcal{F} - \mathcal{C}\cos^2\frac{\theta }{2})^2 -(\mathcal{A}_+^2+\mathcal{B}_+^2)\sin^2\theta}}\right),\nonumber\\ S_\phi&=&\frac{2\left(\mathcal{A}_-^2+\mathcal{B}_-^2\right)\sin^2\theta} {1+\sqrt{1- (\mathcal{F} - \mathcal{C}\cos^2\frac{\theta }{2})^2 -(\mathcal{A}_+^2+\mathcal{B}_+^2)\sin^2\theta}} + \left(\mathcal{A}_+\mathcal{B}_- + \mathcal{A}_-\mathcal{B}_+ \right)^2\sin^4\theta\times\nonumber\\ &&\left(\frac{1}{1- (\mathcal{F} - \mathcal{C}\cos^2\frac{\theta }{2})^2 -(\mathcal{A}_+^2+\mathcal{B}_+^2)\sin^2\theta}-\frac{1}{1+\sqrt{1- (\mathcal{F} - \mathcal{C}\cos^2\frac{\theta }{2})^2 -(\mathcal{A}_+^2+\mathcal{B}_+^2)\sin^2\theta}}\right), \end{eqnarray} \end{widetext} where 
$\mathcal{A},~\mathcal{B},~\mathcal{C},~ \mathcal{D},~\mathcal{F}$ are as in Eq.~(\ref{eq:reduction}). The above equations reduce to Eq.~(\ref{skew-unr}) in the absence of external noise. \begin{figure*}[ht] \subfigure[]{ \includegraphics[width=0.48\textwidth]{Stheta-sgad-t-r.png}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{Sphi-sgad-t-r.png}} \caption{(a) Variation of $S_\theta$ (Skew information with respect to the parameter $\theta$) and (b) $S_\phi$ (Skew information with respect to the parameter $\phi$) for SGAD interaction with bath interaction time (t) and Unruh parameter (r). The parameter settings are $T=0.5$, $s=0.5$, $\theta=\pi/4$, $\phi=\pi/4$, $\phi_s=0$, $\omega_0=0.1$, $\gamma_0=0.1$.} \label{sgadskew-tr} \end{figure*} \begin{figure*}[ht] \subfigure[]{ \includegraphics[width=0.48\textwidth]{Stheta-sgad-s-tem.png}} \subfigure[]{ \includegraphics[width=0.48\textwidth]{Sphi-sgad-s-tem.png}} \caption{(a) Variation of $S_\theta$ (Skew information with respect to the parameter $\theta$) for SGAD interaction with bath temperature (T) and squeezing (s). (b) Variation of $S_\phi$ (Skew information with respect to the parameter $\phi$) for SGAD interaction with bath temperature (T) and squeezing (s). The parameter settings are $r=\pi/8$, $\theta=\pi/4$, $\phi=\pi/4$, $\phi_s=0$, $\omega_0=0.1$, $\gamma_0=0.1$, $t=2$.} \label{sgad-skew} \end{figure*} Like its Fisher counterpart, it can be seen from Fig.~\ref{sgadskew-tr} that both $S_\theta$ and $S_\phi$ decrease with time for all values of $r$. The behaviour of these two Skew informations with respect to the parameters $T$ and $s$ is depicted in Fig.~\ref{sgad-skew} (a) and (b), respectively. Qualitatively they are similar to their Fisher counterparts in Fig.~\ref{fishersgad}. Hence the rich structure exhibited by $F_\theta$ and $F_\phi$ is also seen here for their Skew counterparts. 
From the behaviour of the Skew information, as observed in this section, we see that it is, barring a few differences, quite similar to the corresponding Fisher information. This is consistent with the notion that the Fisher and Skew information are variants of the same information content. \section{Conclusions} Quantum Fisher information plays a prominent role in state estimation and reconstruction, tomography and metrology. Its variant, the Skew information, is gaining prominence in studies probing the nature of quantum coherence. In this work, we provide a detailed exposition of both the Fisher and Skew information for an Unruh-Dirac qubit. An important feature of this work is that, by using the Bloch vector formalism, a clear and unified treatment of the Unruh effect is provided, both in its pure form and in the presence of experimentally relevant external noise channels. The Bloch vector representation, developed here for the Unruh effect, enables us to provide analytical expressions for the quantum Fisher and Skew information, both with and without external noises. We study the evolution of the Fisher and Skew information with time, and also the impact of external environment parameters such as temperature and squeezing on their evolution. The external noises are modelled by both the purely dephasing phase damping channel and the squeezed generalized amplitude damping (SGAD) channel. An interesting interplay of the external reservoir temperature and squeezing with the Fisher and Skew information is observed, in particular for the action of the SGAD channel. It is seen that in some regimes squeezing can protect the quantum information against the debilitating influence of the noise channels. Similar features are also observed in the analogous study of the Skew information, highlighting the common origin of the Fisher and Skew information. 
These studies, we hope, will contribute to the efforts towards understanding and implementing relativistic quantum information.
\section{Introduction} \label{sec:intro} Weakly Interacting Massive Particles (WIMPs) are one of the best motivated Dark Matter candidates. Goodman and Witten \cite{GW} suggested that they would interact with detector nuclei with recoil energies in the keV range (see Ref.~\cite{Gui2011}). Spergel~\cite{Spergel1988} has pointed out that measuring the angular distribution would be a unique signature to confirm the Galactic halo origin of a Dark Matter signal. Taking the example of a 10 kg CF$_{4}$ $50$ m$^3$ MIMAC detector with a recoil energy range of ($5$, $50$) keV, an angular resolution of $10^{\circ}$ and after $3$ years of operation, Billard \emph{et al.}~\cite{Billard2010a} conclude from simulations that, even in the presence of a significant background, the detector could set constraints for spin-dependent interactions comparable to or better than those of existing detectors (PICO 2019 \cite{Amole2019}). Billard \emph{et al.}~\cite{Billard2011} show that with a $100\%$ sense recognition, an angular resolution of $20^{\circ}$ and no background contamination, this type of detector could reach a $3\sigma$ sensitivity at $90\%$ C.L. down to $10^{-5}$ pb for the WIMP-proton spin dependent cross section. O'Hare \emph{et al.}~\cite{OHare2015} claim that to discriminate directional signals of light WIMPs from solar neutrinos, an angular resolution of order $30^{\circ}$ or better is required. Several projects of Directional Dark Matter Detection (DDMD) are currently being developed \cite{daw11, miuchi07, Santos2010, Ross2014, Deaconu17, CYGNOCollaboration2019}. This paper presents a study of the performance of a MIMAC detector prototype, in terms of its angular resolution at low nuclear recoil kinetic energies ($6$ to $26$ keV). The experimental setup, presented in Section~\ref{sec:experimental}, consists of a MIMAC chamber prototype coupled to an ion beam facility. 
In Section~\ref{sec:recon} we explain how we define and reconstruct the nuclear recoil track direction and discuss the method used to measure the angular resolution. As pointed out in~\cite{Tao2020}, the MIMAC readouts on the pixelated anodes need to be convolved with the flash ADC asymmetry, and we include this in our analysis of the angular resolution. We present the final reconstructed angular resolution along the electron drift direction and show that it is below $15^{\circ}$ at an energy as low as $9.32$ keV. Reconstruction results for larger recoil angles are also presented, based on the analysis of data from 565 keV neutron experiments. In Section~\ref{sec:simulations} we compare the results of our measurements with simulations, including a dedicated study of several systematic effects. We show that the detection efficiency may have an impact on the angular resolution. Moreover, there is a strong indication that hydrogen might be the best target for low energy nuclear tracks in the directional detection of low mass WIMPs. In Section~\ref{sec:application}, we revisit the recoil distribution under the Galactic Dark Matter halo model based on the results of the Gaia observations, showing that the finite angular resolution we currently expect for MIMAC preserves the dipole feature of the WIMP signal. \section{Experimental Setup and Principle of Operation} \label{sec:experimental} The MIMAC detector consists of a matrix of micro-Time Projection Chambers (TPCs) (\cite{Sauli1977, Billard2012, Riffard2016}) developed in a collaboration between LPSC (Grenoble) and IRFU (Saclay). Each chamber module contains a pixelated bulk Micromegas coupled to fast self-triggered electronics. In this work, we use a $10.8\times10.8\times5$ cm$^{3}$ prototype detector~\cite{Tao2020}. The optimized working gas is a special mixture (called the MIMAC gas): 70$\%$ CF$_{4}$ + 28$\%$ CHF$_{3}$ + 2$\%$ iC$_{4}$H$_{10}$, operated at a pressure of $50$ mbar. 
CF$_{4}$ and CHF$_{3}$ provide the main target $^{19}$F for spin-dependent Dark Matter detection. $^{19}$F is a light odd nucleus, for which the momentum transfer from low mass WIMP elastic scattering is enhanced. In addition, the CHF$_{3}$ fraction effectively reduces the electron drift velocity to about $1/4$ of the pure CF$_{4}$ value. The isobutane (iC$_{4}$H$_{10}$) helps to increase the gain thanks to its relatively small pair-creation energy ($23\,\mathrm{eV}$)~\cite{Sharma1998}. We used the LHI (Ligne exp\'erimentale \`a Haute Intensit\'e) ion beam line~\cite{Tao2020} to generate $^{19}$F$^{+}$ ions with a given kinetic energy. The required species were selected by a high resolution 0.33 T magnetic spectrometer. The prototype was coupled to the beam line via a \SI{1}{\micro\meter} hole, and the ions were thus injected in the direction of the beam line, parallel to the drift field in the chamber. The high voltages on the grid (or micromesh) and the cathode were set to $-570\,\mathrm{V}$ and $-1320\,\mathrm{V}$ respectively, producing a drift field of $150\,\mathrm{V/cm}$, while the anode was grounded. Due to the negative voltage applied to the cathode, an extra component of kinetic energy ($1.32$ keV) must be added to the original one from the ECR ion source. Part of the kinetic energy of the incident $^{19}$F$^{+}$ ion is released in the detector active volume by ionization. The primary electron clouds generated by ionization along the physical track drift towards the anode and are subject to diffusion in the drift field. Avalanches take place in the amplification gap of the Micromegas, producing secondary electrons which trigger strips of pixels in the $X$ and $Y$ directions (pitch of \SI{424.3}{\micro\meter})~\cite{iguaz2011}, and are read out by a self-triggered electronics system developed at LPSC~\cite{Couturier2016, Richer2011}. The $Z$ coordinate of each primary electron is obtained by multiplying the primary electron drift velocity by the relative sampling time.
The total ionization energy is measured with a charge pre-amplifier coupled to the grid and sampled by a flash-ADC. Both the anode signal and the grid charge collection were sampled at $50\,\mathrm{MHz}$ ($20\,\mathrm{ns}$), and events were recorded one by one as coincidence entries. The gain of the detector coupled to the preamplifier during the experiment is estimated to be $\mathcal{O}(10^4)$ from the $5.9\,\mathrm{keV}$ peak of the $^{55}$Fe energy calibration source~\cite{Tao2020}. We have also taken data with $565$ keV monoenergetic neutrons produced by the AMANDE facility~\cite{Gressier2004}, giving nuclear recoils at different angles with respect to the drift direction in a $25$ cm drift chamber filled with MIMAC gas. The 565 keV neutrons are produced by protons of $2.3$ MeV on a $^7$Li target, located 1.5 m away from the cathode of the MIMAC chamber. In this experiment, the proton beam is parallel to the electron drift direction. The Micromegas setup and readout in the neutron experiments are the same as those used in the LHI experiment. We also performed another neutron experiment using the MIMAC-IRSN detector ($10\times10\times17.7\,\mathrm{cm}^3$) with a $95\%$ $^4$He + $5\%$ CO$_2$ gas mixture as a reference experiment (IRSN experiment) for testing our simulation program and reconstruction algorithm. The MIMAC-IRSN detector is located at $40^\circ$ with respect to the proton beam, which results in an incident neutron energy of $460$ keV. \section{3D Reconstruction of Ion Track and Its Direction} \label{sec:recon} The $X$-$Y$ 2D positional information is provided by the secondary electrons created by the MIMAC Micromegas avalanche field (see Figure 2 in~\cite{Tao2020}). The sampling of the anode every $20$ ns allows the reconstruction of a 3D cloud of primary electrons for each detected event.
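As an illustration of this coordinate construction (a minimal sketch, not the actual MIMAC acquisition or analysis code; the hit-list format below is hypothetical), the following Python snippet converts triggered strip indices and time-slice indices into 3D positions, using the \SI{424.3}{\micro\meter} strip pitch, the $20$ ns sampling, and the MAGBOLTZ drift velocity for our drift field:

```python
import numpy as np

# Constants from the text; the hit-list format itself is hypothetical.
PITCH_UM = 424.3          # anode strip pitch (micrometers)
SAMPLE_NS = 20.0          # flash-ADC time-slice duration (ns)
V_DRIFT_UM_PER_NS = 22.9  # MAGBOLTZ primary-electron drift velocity at 150 V/cm

def reconstruct_cloud(hits):
    """Map (x_strip, y_strip, time_slice) integer triples to 3D points (um).

    X and Y come from the triggered strip indices; Z is the drift velocity
    multiplied by the relative sampling time of the slice.
    """
    pts = [(ix * PITCH_UM, iy * PITCH_UM, it * SAMPLE_NS * V_DRIFT_UM_PER_NS)
           for ix, iy, it in hits]
    return np.array(pts)

# Three hypothetical hits spanning three consecutive time slices
cloud = reconstruct_cloud([(10, 12, 0), (11, 12, 1), (11, 13, 2)])
```

Each $20$ ns slice then corresponds to a step of $20 \times 22.9 = 458$ \SI{}{\micro\meter} along $Z$, before any effective-drift-velocity correction.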
In our experiment, we applied a $150\,\mathrm{V/cm}$ electric field, and thus the drift velocity of primary electrons was $V_\text{drift}$ = \SI{22.9}{\micro\meter}/ns, computed with the MAGBOLTZ code~\cite{Biagi1999}. After applying a correction to the effective drift velocity, required by space-charge effects in the \SI{512}{\micro\meter} gap of the Micromegas detector and discussed in a recent companion work~\cite{Tao2020}, we obtain the 3D primary electron cloud for each ion event, reconstruct the direction of the track and estimate the angular resolution. \subsection{Definition of angular resolution} \label{sec:definition} The direction of a recoil track or an incident ion track starts being modified at the first collision between the injected ion and the gas nuclei. The important information is the initial ion direction. However, this ideal information will be washed out by secondary interactions in the drift and avalanche regions. In order to overcome this challenge, the strategy for reconstructing a track direction is to perform a 3D linear regression fit on the pixelated electron cloud, and then derive the direction of the fitted track with respect to the drift direction ($Z$-axis). The 3D linear fit on the pixelated reconstructed track was performed with a least squares minimization algorithm using the coordinate distances of the barycenters in each time slice. Due to the combined effect of straggling and the finite detector spatial resolution, a recoil with true direction $\hat{r}(\Omega)$ is reconstructed as $\hat{r'}(\Omega')$, where $\Omega \equiv \Omega(\theta, \varphi)$ is the solid angle (Figure~\ref{fig:Coordinate}). \begin{figure}[htbp] \centering \includegraphics[width=0.7\textwidth]{Angle_definition.png} \caption{Schematic diagram for direction-related geometrical observables in 3D space. The incoming beam direction is along the $Z$-axis, which is the same as the direction of the drift electric field.
An example of reconstructed track direction $\Omega(\theta, \varphi)$ is shown as a red arrow with the polar angle $\theta$ and azimuthal angle $\varphi$ indicated. The orange arrows represent the 2D projections of this 3D directional vector, defining $\theta_{x}$ and $\theta_{y}$.} \label{fig:Coordinate} \end{figure} A polar angle $\theta$ was derived for each track, with $0^{\circ}$ being the direction of the $^{19}$F$^{+}$ beam ($Z$-axis and primary electron drift direction). $\theta$ is thus the angular deviation from $0^{\circ}$, combining all effects after the ion enters the chamber at $0^{\circ}$. As discussed later in Section~\ref{subsec:systematics}, systematic effects including the diffusion and space-charge effects can substantially elongate tracks along the $Z$-axis. In Figure~\ref{fig:angle-corr}, the relationships of spatial coordinates before and after correction are \begin{equation} x_{2}=x_{1}, \quad y_{2}=y_{1}, \quad z_{2}=\eta \cdot z_{1}, \end{equation} where $\eta$ is the event-by-event correction factor (i.e. \textit{asymmetric factor}). Constrained by geometry, we have $\tan\theta_{i}=\sqrt{x_{i}^{2}+y_{i}^{2}}/z_{i}, i=1,2$, and then \begin{equation} \theta_{2} = \arctan \left(\frac{z_{1}}{z_{2}} \tan \theta_{1}\right) = \arctan \left(\frac{1}{\eta} \cdot \tan \theta_{1}\right). \end{equation} $\theta_{2} \geq \theta_{1}$ for $\eta\in (0, 1]$, with equality only at $\eta = 1$. Thus, without the high-gain systematic correction, most ion tracks would be reconstructed artificially closer to the initial ($0^{\circ}$) direction. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Direction_Correction.png} \caption{Schematic diagram for obtaining the ``original'' reconstructed direction.} \label{fig:angle-corr} \end{figure} The distribution of the reconstructed polar angle can be used to define an angular resolution as discussed below. However, the reconstructed angle $\theta$ between the track and the low energy beam is, by definition, not a Gaussian variable.
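The asymmetric-factor correction can be applied per event as a one-line computation; the sketch below (illustrative only, not the collaboration's analysis code) implements $\theta_{2} = \arctan(\tan\theta_{1}/\eta)$:

```python
import math

def correct_polar_angle(theta1_deg, eta):
    """Undo the Z elongation: theta2 = arctan(tan(theta1) / eta).

    eta is the event-by-event asymmetric factor in (0, 1];
    eta = 1 means no elongation and leaves the angle unchanged.
    """
    theta1 = math.radians(theta1_deg)
    return math.degrees(math.atan(math.tan(theta1) / eta))
```

For a measured angle of $10^\circ$ and $\eta \simeq 0.7$ (the order of magnitude found in the LHI conditions), the corrected angle is about $14^\circ$, illustrating how the uncorrected reconstruction is biased towards the drift direction.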
In contrast, $\theta_x$ and $\theta_y$, defined in Figure~\ref{fig:Coordinate}, appear as Gaussian variables in our experiments, as shown in Figure~\ref{fig:2DTheta}. All directional information is embedded in $\theta_x$ and $\theta_y$ via \begin{equation}\label{eq:angle-relation} \tan\theta_x = \tan\theta \cos\varphi, \qquad \tan\theta_y = \tan\theta \sin\varphi. \end{equation} \begin{figure}[htbp] \centering \includegraphics[width=0.95\textwidth]{2D_theta_examples_all.png} \caption{Normalized distributions of $\theta_x$ and $\theta_y$ for $^{19}$F$^{+}$ ions of kinetic energy ranging from 6.3 keV to 26.3 keV.} \label{fig:2DTheta} \end{figure} The spread of the angular distribution can be written as \begin{equation}\label{eq:resolution_def} \sigma_{\theta}(\theta_x, \theta_y)\Big\vert_{\theta_x = \mu_{\theta_x}, \theta_y = \mu_{\theta_y}} = \frac{\sqrt{f^2(\theta_x)\sigma^2_{\theta_x} + f^2(\theta_y)\sigma^2_{\theta_y}}}{f(\theta)}, \end{equation} in terms of $\theta_x$ and $\theta_y$, where $f(\theta) = (\tan^2{\theta} + 1)\tan\theta$. Both the angular distribution of the incident ions and the dispersion of the primary electron distribution contribute to the final angular resolution: \begin{itemize} \item Distribution of the incident ions: The reconstructed direction deviates from the initial direction. This is due to several physical effects: (1) primary electron diffusion, (2) the initial ion beam not being exactly at zero degrees: the hole through which the ions enter the chamber has a \SI{1}{\micro\meter} diameter and \SI{13}{\micro\meter} length (maximum angle of $4.4^{\circ}$) and (3) possible bias from the reconstruction algorithm. \item Statistical dispersion: Spread of the distribution, usually defined as the standard deviation of a Gaussian Probability Distribution Function (PDF). The main contribution to the statistical dispersion should be the straggling of ions, which is a convolution of multiple small-angle scatterings off the nuclei of the gas.
Other factors deteriorating the angular resolution are the interactions of the primary electrons inside the gas chamber, straggling caused by electron collisions and recombination with the gas atoms~\cite{Billard2012}, and diffusion~\cite{Peisert84}. \end{itemize} The measured mean angle of the incident ion distribution is small ($< 1.6^\circ$) and the dispersion has an effect about 10 times larger than the shift of the central value (more than 4 times for $26.3$ keV), as shown in Table~\ref{tab:GaussParams}. Thus we simply take the spread of the angular distribution~(\ref{eq:resolution_def}) as the definition of the angular resolution. \begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $\mathbf{Data\ Label\ (keV)}$ & $\mathbf{6.3}$ & $\mathbf{9.3}$ & $\mathbf{11.3}$ & $\mathbf{13.8}$ & $\mathbf{16.3}$ & $\mathbf{18.8}$ & $\mathbf{21.3}$ & $\mathbf{23.8}$ & $\mathbf{26.3}$ \\ \hline $\hat{\mu}_{\theta_x}$ & 0.65 & 0.36 & 0.19 & 0.15 & 0.37 & 0.52 & 0.42 & 0.51 & 0.75 \\ \hline $\hat{\sigma}_{\theta_x}$ & 23.64 & 12.56 & 8.49 & 6.46 & 4.95 & 4.43 & 3.80 & 3.58 & 3.29 \\ \hline $\hat{\mu}_{\theta_y}$ & 1.37 & 1.58 & 0.85 & 0.51 & 0.51 & 0.11 & 0.09 & -0.09 & 0.30 \\ \hline $\hat{\sigma}_{\theta_y}$ & 25.25 &12.40 & 8.02 & 6.09 & 4.69 & 4.10 & 3.52 & 3.19 & 2.94 \\ \hline \end{tabular} \caption{Gaussian fit parameters of the $\theta_x$ and $\theta_y$ distributions for $^{19}$F$^{+}$ ion events of different kinetic energies.} \label{tab:GaussParams} \end{table} \subsection{Analysis results on angular resolution} \label{sec:resolution} The analysis was performed for $^{19}$F$^{+}$ ions with $9$ kinetic energies in the range of $[6.3, 26.3]$ keV, with statistics of over $1.8\times 10^{4}$ events for each energy after background rejection.
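As a numerical cross-check of Eq.~(\ref{eq:resolution_def}) (an illustrative sketch, not part of the analysis chain), the fitted widths of the projected angles in Table~\ref{tab:GaussParams} can be combined into a polar-angle spread:

```python
import math

def f(theta_deg):
    """f(u) = (tan^2 u + 1) tan u, the Jacobian factor of the resolution formula."""
    t = math.tan(math.radians(theta_deg))
    return (t * t + 1.0) * t

def sigma_theta(mu_x, sig_x, mu_y, sig_y):
    """Combine the Gaussian widths of theta_x and theta_y into sigma_theta,
    evaluated at the fitted means (all angles in degrees)."""
    tx = math.tan(math.radians(mu_x))
    ty = math.tan(math.radians(mu_y))
    # polar angle at the means, from tan^2(theta) = tan^2(theta_x) + tan^2(theta_y)
    theta = math.degrees(math.atan(math.hypot(tx, ty)))
    return math.hypot(f(mu_x) * sig_x, f(mu_y) * sig_y) / f(theta)

# 9.3 keV column of the table: spread of roughly 12.4 degrees
res_9keV = sigma_theta(0.36, 12.56, 1.58, 12.40)
```

The factor $f$ follows from differentiating $\tan^2\theta = \tan^2\theta_x + \tan^2\theta_y$, which is implied by relation~(\ref{eq:angle-relation}).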
Figure~\ref{fig:Track26keV} and Figure~\ref{fig:Track9keV} show examples of track trajectories in $ZX$, $ZY$ projections for ions with kinetic energies of $26.3$ keV ($25$ keV from the acceleration voltage in the ion source plus $1.32$ keV from the cathode voltage with respect to ground) and $9.3$ keV respectively, with the best fit line in 3D. \begin{figure}[htbp] \centering \includegraphics[width=0.2\textwidth]{26keV_track.png} \includegraphics[width=0.3\textwidth]{26keV_3D.png} \caption{Example of an ion track in $ZX$ and $ZY$ projection using barycenter representation (left) and 3D (right) for an ion of kinetic energy of 26.3 keV. To derive the direction of the track, a 3D linear fit is performed on the 3D cloud of primary electrons.} \label{fig:Track26keV} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.2\textwidth]{9keV_track.png} \includegraphics[width=0.3\textwidth]{9keV_3D.png} \caption{Example of an ion track in $ZX$ and $ZY$ projection using barycenter representation (left) and 3D (right) for an ion of kinetic energy of 9.3 keV. To derive the direction of the track, a 3D linear fit is performed on the 3D cloud of primary electrons.} \label{fig:Track9keV} \end{figure} The final angular resolution as a function of the ion kinetic energy is shown in Figure~\ref{fig:ETheta} (red and cyan). It is better than the required $20^{\circ}$~\cite{Billard2011} down to a kinetic energy of $9.3$ keV. We also plot the mean angle between the incident and reconstructed direction (denoted as $\delta(\theta)$) for each ion kinetic energy (blue). \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{Angular_Resolution_asymf.png} \caption{The measured (red and cyan, for the two reconstruction methods) and simulated (green and magenta, for the two threshold values) angular resolution of the MIMAC detector as a function of the $^{19}$F$^{+}$ ion kinetic energy.
At lower energies, the ion tracks are shorter and have more straggling, resulting in a worse angular resolution (larger error bars). The measured angular resolution is better than $15^{\circ}$ down to a kinetic energy of $9.3$ keV. Error bars are derived from the pixel strip pitch and the reconstructed track length as described in the text. We also show the measured mean angle $\delta(\theta)$ between the incident and reconstructed direction for each ion kinetic energy (blue).} \label{fig:ETheta} \end{figure} The uncertainty on the angular resolution (denoted as $\Delta\theta$) is based on the determination of the spatial coordinates of the reconstructed primary electron cloud and the error of the 3D linear fit: \begin{equation}\begin{split}\label{eq:errorsys} \Delta\theta(x,y,z)\Big|_{\theta = \bar{\theta}} &= \sqrt{\left(\frac{\partial \theta}{\partial x}\right)^2\Delta^2(x) + \left(\frac{\partial \theta}{\partial y}\right)^2\Delta^2(y) + \left(\frac{\partial \theta}{\partial z}\right)^2\Delta^2(z) + \Delta_{\textrm{fit}}^2} \\ &\simeq \frac{\cos^2{\theta}}{z}\sqrt{\Delta^2_{XY} + \tan^2{\theta}\cdot\Delta^2(z)} \end{split}\end{equation} where $\Delta(z) = \Delta(V_\text{drift}\cdot t)$ mainly depends on the sampling time, and $\Delta_{XY} = \Delta(x) = \Delta(y)$ is the intrinsic systematic uncertainty due to the alignment and finite size of the anode strips. The fit error $\Delta_{\textrm{fit}}$ is negligible, so only the spatial terms are kept. For the $\theta \approx 0^{\circ}$ case, the uncertainty can be further simplified to depend only on the pitch of the anode strips and the reconstructed track length: \begin{equation} \Delta\theta(x,y,z)\Big|_{\theta = \bar{\theta} \simeq 0^{\circ}} = \frac{\Delta_{XY}}{L}, \end{equation} where $\Delta_{XY}$ is the same as in \eqref{eq:errorsys} and $L$ describes the primary electron cloud dimensions (the reconstructed ion track length after the empirical correction~\cite{Tao2020}).
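In this small-angle limit, the error bar is simply the ratio of the transverse uncertainty to the track length. A trivial sketch follows (the track length in the example is hypothetical, and taking $\Delta_{XY}$ equal to the strip pitch is an assumption, since the exact systematic value is detector-specific):

```python
import math

PITCH_UM = 424.3  # anode strip pitch; using it as Delta_XY is an assumption

def angular_error_deg(track_length_um, delta_xy_um=PITCH_UM):
    """Small-angle uncertainty Delta(theta) ~ Delta_XY / L, returned in degrees."""
    return math.degrees(delta_xy_um / track_length_um)
```

A hypothetical reconstructed track of $4.2$ mm would then carry an error bar of about $5.7^{\circ}$; the longer tracks at higher energies shrink the error accordingly.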
The error bars presented in Figure~\ref{fig:ETheta} are derived from \eqref{eq:errorsys}. The error we obtain is $\pm 1.57^{\circ}$ for the lowest ion kinetic energy and $\pm 0.67^{\circ}$ for the highest. We have applied various algorithms in order to find the best way to reconstruct the initial angle. These algorithms differ mainly in whether they use the entire electron cloud or only part of it, and in how they weight each pixel. Modifying the algorithms to use only the first part of the track (with a $\chi^2$ test to select the optimum number of points) does not yield an improvement in the angular resolution. In addition, the initial and final time slices of the track usually have a larger than average deviation from the track direction, because the anode samples the endpoints of the transversely diffused primary electron cloud. Removing the first and last time slices does not produce better results either. In this paper, we present results of a barycenter weighted (BCW) reconstruction method and an all pixel equal weighted (APE) reconstruction method. The average reconstructed angle using the BCW method is closest to the initial 0$^{\circ}$ angle and has the lowest dispersion, while the APE method is more generally applicable. In the neutron experiments with the MIMAC gas, several species are scattered by the neutrons and produce recoils, including protons, carbon and fluorine nuclei, and even heavier CF$_x$ species. The recoil angles reconstructed with the BCW and APE methods are shown in Figure~\ref{fig:neutron-expe-recoil}, in the range of [0$^\circ$, 90$^\circ$].
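Both reconstruction flavors can be sketched with a common weighted least-squares line fit. The snippet below is a loose illustration of the BCW/APE idea via an SVD principal axis, not the collaboration's exact algorithms:

```python
import numpy as np

def fit_track_direction(points, weights=None):
    """3D least-squares line fit: principal axis of the (optionally weighted)
    point cloud, obtained by SVD of the centered coordinates.

    weights=None treats every pixel equally (APE-like); passing per-slice
    barycenter charges as weights gives a BCW-like flavor.
    """
    pts = np.asarray(points, dtype=float)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, dtype=float)
    centroid = np.average(pts, axis=0, weights=w)
    centered = (pts - centroid) * np.sqrt(w)[:, None]
    _, _, vh = np.linalg.svd(centered, full_matrices=False)
    direction = vh[0]
    if direction[2] < 0:  # orient along +Z, the drift direction
        direction = -direction
    return direction

def polar_angle_deg(direction):
    """Polar angle of the fitted axis with respect to the Z (drift) axis."""
    c = direction[2] / np.linalg.norm(direction)
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

# Example: three collinear barycenters at 45 degrees in the X-Z plane
d_example = fit_track_direction([[0, 0, 0], [1, 0, 1], [2, 0, 2]])
```

The first right-singular vector of the centered cloud is the direction minimizing the summed squared orthogonal distances, which is the least-squares line through the barycenters.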
\begin{figure}[htbp] \centering \includegraphics[width=0.95\textwidth]{Recoil_Angle_corr_asymf.png} \caption{Recoil angle of all recoil tracks in the 565 keV neutron experiment.} \label{fig:neutron-expe-recoil} \end{figure} \section{Comparison of simulations with measurements}\label{sec:simulations} The angular resolution measurements have been compared to Garfield++~\cite{Veenhof1998} simulations. Garfield++ is a toolkit for particle tracking simulations, which provides an interface to SRIM~\cite{Ziegler2010} to generate ion tracks, and whose electron transport algorithm solves the second-order equations of motion based on the MAGBOLTZ~\cite{Biagi1999} gas tables. The simulation code has been verified by reproducing the neutron experiment at 27 keV~\cite{Maire2015}, and in this work we adapt it to reproduce the LHI experiment conditions. For each kinetic energy, we simulated 200 fluorine ions sent at $(X, Y, Z) = (5.4\,\mathrm{cm}, 5.4\,\mathrm{cm}, 5\,\mathrm{cm})$ along the $Z-$direction. Each primary electron is transported up to the grid and undergoes transverse and longitudinal diffusion on the way. We have observed a continuous reduction of the drift velocity depending on the detector gain and the charge density. A possible explanation for this phenomenon is a slow ion backflow in the Micromegas gap, which builds up space charge and locally distorts the electric field in the higher gain configuration. The asymmetry of the flash-ADC signal gives a direct event-by-event observation of this velocity reduction, which enables us to determine an \textit{asymmetric factor} $\eta$ that quantifies the magnitude of the elongation. We have experimentally validated the ability of this factor to describe the phenomenon as a first approximation~\cite{Tao2020}. The space-charge effect has been included in the simulations: the drift velocity is linearly reduced according to the value of $\eta$.
The simulations sample the arrival of charges \textit{MIMAC-wise}: a $50\,\mathrm{MHz}$ ($20\,\mathrm{ns}$) readout and a pixelization in the $X$-$Y$ plane with strips at \SI{424.3}{\micro\meter} pitch. We apply the same analysis algorithms to the measurements and the simulations, allowing us to compare the results. \subsection{Angular resolution along the electron drift direction} The simulation results for the projected angles $\theta_x$ and $\theta_y$ are presented in Figure \ref{fig:thetaXY}. The dependence of the angles on the kinetic energy follows the same evolution as observed in the measurements. At $6.32\,\mathrm{keV}$, the spread of the simulated distribution is half as wide as the measured one; this difference decreases with energy and remains below $20\%$ for energies above $13.82\,\mathrm{keV}$. This difference at low energies propagates to the angular resolution, as shown in Figure~\ref{fig:ETheta}. An explanation for these differences is given below. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{2D_theta_simus_all.png} \caption{Normalized distributions of the projected angles from Garfield++ simulations with 200 $^{19}$F$^+$ ions per energy.} \label{fig:thetaXY} \end{figure} \subsection{Study of systematic effects} \label{subsec:systematics} On its way to the anode, the primary electron cloud suffers several distortions modifying its shape. The angular resolution depends not only on the conservation of the primary cloud shape, but also on the experimental access to this information. The question is: will the physical information on the initial direction be washed out by these shape distortions? We identified four main effects: \begin{itemize} \item [(1)] the diffusion in the drift volume, \item [(2)] the space-charge effect in the amplification region, \item [(3)] the avalanche, \item [(4)] the finite threshold of the anode strips. \end{itemize} We have studied how the angular resolution is affected by each of these effects.
According to SRIM and without these distortions, a $^{19}$F$^+$ ion track of kinetic energy below $20\,\mathrm{keV}$ would be seen as a cloud of a few pixels, with only one or at most two time slices of $20\,\mathrm{ns}$ triggered in a MIMAC detector chamber, and no clear direction could be measured. We have shown that the diffusion over the $5\,\mathrm{cm}$ drift enlarges the primary cloud by about one order of magnitude in each direction~\cite{Tao2020}. If the diffusion were fully symmetric, the directional information would be conserved during the cloud enlargement. However, according to MAGBOLTZ, the longitudinal diffusion dominates the transverse one by a factor $\frac{293.9}{253.1} = 1.16$. With a correction for this factor on the polar angle of each track, the diffusion can appear as a helping process for the 3D detection of low energy nuclear recoils. In addition, the space-charge (or ion backflow) systematic effect in the avalanche gap reduces the effective drift velocity of the primary charge collection, and thus yields more time slices describing the primary electron cloud along the $Z-$direction. We have mentioned in a previous work~\cite{Tao2020} (its Figure 5 is reproduced in Figure~\ref{fig:comp-result} here) that this effect can be described by an \textit{asymmetric factor} $\eta$ between the flash-ADC rising and falling times. This multiplicative factor comes from the reduction of the drift velocity under space-charge effects, and has a value in the range $[0.66, 0.71]$ in the conditions of our LHI experiment. We confirm the space-charge (or ion backflow) systematic effect experimentally with the help of experiments performed with the COMIMAC ion facility, also developed at LPSC (Grenoble) for keV ion research~\cite{Tao2020}.
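The role of the asymmetric diffusion can be visualized with a toy Monte Carlo (purely illustrative; the absolute smearing width is arbitrary, only the longitudinal-to-transverse ratio of $1.16$ is taken from the MAGBOLTZ values above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Longitudinal-to-transverse diffusion ratio quoted from MAGBOLTZ in the text.
LONG_OVER_TRANS = 293.9 / 253.1  # = 1.16

def diffuse(points_um, sigma_t_um=100.0):
    """Toy Gaussian smearing of a primary-electron cloud: independent offsets,
    wider along Z (the drift direction) than in X-Y by the MAGBOLTZ ratio.
    The absolute width sigma_t_um is a hypothetical choice."""
    pts = np.asarray(points_um, dtype=float)
    sigmas = np.array([sigma_t_um, sigma_t_um, sigma_t_um * LONG_OVER_TRANS])
    return pts + rng.normal(size=pts.shape) * sigmas

# Smearing a point-like cloud: the Z spread exceeds the X-Y spread by ~16%
cloud = diffuse(np.zeros((20000, 3)))
```

Dividing the $Z$ coordinates (or equivalently rescaling $\tan\theta$) by this ratio before the linear fit is the per-track correction mentioned above.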
\begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{Final_Point_Diff_Threshold_vs_LHI_expe_DEPTH.png} \caption{Main results of a companion work~\cite{Tao2020} showing the comparison of ion depths ($\Delta Z$) at different energies between experiment (red stars) and Monte Carlo simulation (blue circles) combining SRIM and diffusion. The orange boxes are for SRIM only, the green diamonds for diffusion and other effects included using Garfield++. The magenta triangles are experimental measurements with an asymmetric factor correction.} \label{fig:comp-result} \end{figure} The Garfield++ simulations make it possible to include or exclude this drift velocity reduction, in order to study the space-charge effect. Figure~\ref{fig:spaceCharge} shows the simulation of the $\theta_x$ and $\theta_y$ angles computed from $1000$ $^{19}$F$^+$ ions of $9.32\,\mathrm{keV}$ kinetic energy, with and without the drift velocity reduction. The charges are collected in $9$ time slices for the simulated clouds with the reduction, and in $6$ time slices otherwise. This distortion leads to an underestimated reconstructed angle, since the drift velocity reduction deforms the electron cloud towards the electric field drift direction, $0^\circ$ in our case. For $9.32$ keV ions, we obtain a better angular resolution when the velocity reduction is corrected for ($8.00^\circ$ vs $11.2^\circ$). Note that this resolution improvement is only effective close to $0^\circ$. The same reasoning applies to the asymmetric diffusion (stronger along the electric field direction), whose influence is $3$ to $4$ times smaller than that of the space-charge effect. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{2D_theta_simus_9keV_comp.png} \caption{High-gain systematic influence on the projected angles from Garfield++ simulations with 1000 $^{19}$F$^+$ ions of $9.32\,\mathrm{keV}$.
Measurement results of the LHI experiment are also presented for comparison.} \label{fig:spaceCharge} \end{figure} We have shown that the avalanche contribution to the depth is negligible (see Figure~\ref{fig:comp-result}), being symmetric in the $X$ and $Y$ directions. For this reason, the barycenter weighted method used for the 3D linear fit is not affected by the avalanche enlargement. Finally, after the avalanche, the positions in the $X$-$Y$ plane are measured by the anode strips. Each strip has its intrinsic noise threshold and can only be triggered if the number of charges within one time slice exceeds this threshold. This inefficiency results in some undetected charges, especially at low energies. We have studied the dependence of the angular resolution on the strip threshold with Garfield++. Since we do not simulate the avalanche, the threshold value is applied to the number of primary electrons needed. The results are presented in Figure~\ref{fig:ETheta}, where the same threshold value is applied to the $X$ and $Y$ strips. The anode inefficiency represents a significant systematic effect, especially at low energies (Figure~\ref{fig:ETheta}). Comparison with simulations suggests that the LHI measurements are better described by a threshold of 2 primary electrons than by a single primary electron. The Garfield++ simulations have allowed us to isolate each systematic effect in order to study its influence on the angular resolution. At low energy, the number of readouts acts as a critical parameter for the accuracy of the 3D linear regression. For this reason, even if the diffusion distorts the primary electron cloud, it appears as a necessary process in order to trigger enough pixels above threshold and provide a clear detection of sub-millimeter tracks. The fully asymmetric distortion from the space-charge effect results in an underestimation of the reconstructed angle and consequently becomes a systematic bias in the measurements.
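The strip-threshold condition studied here can be mimicked by a one-line cut (a schematic sketch of the trigger condition, not the electronics' actual behavior):

```python
import numpy as np

def apply_strip_threshold(strip_charges, threshold):
    """Zero out (strip, time-slice) samples whose primary-electron count is
    below the threshold, as in the Garfield++ threshold study comparing
    1 vs 2 primary electrons."""
    charges = np.asarray(strip_charges)
    # samples below threshold are simply not triggered (set to zero)
    return charges * (charges >= threshold)

kept = apply_strip_threshold([0, 1, 2, 3], 2)
```

Applying this cut before the 3D linear fit removes the sparsest samples first, which is why its impact grows at low energies where few primary electrons reach each strip.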
Due to the limited length of low kinetic energy fluorine ion tracks (and similarly for carbon ions), the current direction reconstruction algorithms only work for small angles ($< 20^\circ$), and we would thus need to develop a better adapted reconstruction algorithm to treat them. However, the direction of keV hydrogen nuclear tracks (i.e. proton tracks) can be well reconstructed by the 3D regression fit, thanks to their longer tracks (compared with heavier nuclei). Figure~\ref{fig:proton-simu-recon} presents the reconstructed polar angle distributions for protons with different preset incident directions ($15^\circ$, $30^\circ$, $45^\circ$, $60^\circ$), generated by our Garfield++ simulation. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Proton_Simu_PolarAngle_Recon.png} \caption{Reconstructed polar angle distribution for 400 incident $21.3$ keV protons with preset directions $15^\circ$, $30^\circ$, $45^\circ$ and $60^\circ$ from Garfield++ simulations with MIMAC gas at $50$ mbar. The red inverted triangles label the preset incident polar angles in these 4 cases respectively. The orange dashed curve in each subplot is an empirical fit.} \label{fig:proton-simu-recon} \end{figure} The reconstruction results are then summarized by the most probable value of the empirical fit function, with the asymmetric statistical error computed from the FWHM interval of the polar angle distribution, as shown in Table~\ref{tbl:proton-recon}. \begin{table}[htbp] \centering \begin{tabular}{ccccc} \toprule[1.5pt] Incident polar angle [$^\circ$] & $15$ & $30$ & $45$ & $60$ \\ \midrule[1pt] Reconstructed polar angle [$^\circ$] & $15.14_{-0.43}^{+0.74}$ & $30.27_{-0.59}^{+0.64}$ & $46.25_{-0.63}^{+0.90}$ & $66.07_{-0.61}^{+0.88}$ \\ \bottomrule[1.5pt] \end{tabular} \caption{Polar angle reconstruction results for proton tracks, generated by Garfield++.
The reconstructed polar angle results are the most probable values of the empirical fit function.} \label{tbl:proton-recon} \end{table} Hydrogen, an odd nucleus that can also probe the spin-dependent WIMP-nucleon scattering process, thus turns out to be a better target for the directional detection of light WIMPs. \section{Directional Signatures after Gaia Observation} \label{sec:application} L. Necib \textit{et al.}~\cite{Necib2018} used the latest data released by the Gaia satellite to infer the Dark Matter distribution at about 8 kpc from the center of the Galaxy, i.e. around the Solar system. They traced the Galactic Dark Matter with stars and found that the Milky Way halo has two major components, \begin{equation} f(\boldsymbol{v}) = \xi_\mathrm{h} f_\mathrm{h}(\boldsymbol{v}) + \xi_{\mathrm{s}} f_{\mathrm{s}}(\boldsymbol{v}), \end{equation} where $\xi_\mathrm{h,s}$ are the fractions of each component, satisfying $\xi_\mathrm{h} + \xi_\mathrm{s} = 1$. One is the three-dimensional tri-axial Gaussian halo component, and the other is a substructure component that could have been formed by a relatively recent merger event. The latter can be characterized by a double Gaussian whose two terms have opposite mean parameters in the radial direction and identical parameters otherwise. \begin{equation} \begin{array}{l} f_{\mathrm{h}}(\boldsymbol{v}) \propto \mathcal{N}\left(\boldsymbol{\mu}_{\mathrm{h}}, \boldsymbol{\Sigma}_{\mathrm{h}}\right), \quad f_{\mathrm{s}}(\boldsymbol{v}) \propto \frac{1}{2}\left[\mathcal{N}\left(-\boldsymbol{\mu}_{\mathrm{s}}, \boldsymbol{\Sigma}_{\mathrm{s}}\right)+\mathcal{N}\left(\boldsymbol{\mu}_{\mathrm{s}}, \boldsymbol{\Sigma}_{\mathrm{s}}\right)\right].
\end{array} \end{equation} Here $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is the three-dimensional Gaussian distribution with $\boldsymbol{\mu} = (\mu_r, \mu_\theta, \mu_\phi)$ and $\boldsymbol{\Sigma} = (\sigma_r, \sigma_\theta, \sigma_\phi)$ taken from Ref.~\cite{Necib2018}. Given the large uncertainty on its fractional contribution, we parametrize the substructure with \begin{equation} r = \frac{\xi_{\mathrm{s}}}{\xi_{\mathrm{h}}+\xi_{\mathrm{s}}} \in[0,1]. \end{equation} The WIMP direction distribution and the distribution of the nuclear recoil angle in direct detection for different values of $r$ can be found in Ref.~\cite{TaoThese}. Here, without loss of generality, we show in Figure~\ref{fig:recoil-map} the Mollweide projections of the angular distribution of the two-component Galactic WIMP signal and of its induced nuclear recoil angular distribution for $r= 0.0$ and $r = 0.5$. The plots in the middle column assume perfect resolution, while a finite resolution of $15^\circ$ is applied in the right-column plots. We conclude that the nuclear recoil distribution keeps its dipole feature. The impact of the substructure component is too faint to be resolved at the current angular resolution. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{recoil_map.png} \caption{From left to right: WIMP angular distribution, WIMP-induced recoil angular distribution with perfect resolution and with finite resolution ($15^\circ$). Top: pure halo component model with $r=0.0$. Bottom: two-component model with $r=0.5$.} \label{fig:recoil-map} \end{figure} \section{Conclusion and Outlook} \label{sec:conclusion} Providing angular reconstruction with a reasonable angular resolution represents a decisive aspect of Dark Matter identification, helping to overcome the isotropic neutron background and the neutrino floor.
In this paper, we report experiments performed with a MIMAC prototype chamber at the LHI facility, with the electric drift field at zero angle with respect to the beam line. For $^{19}$F$^{+}$ ion kinetic energies between $6.3$ keV and $26.3$ keV, the angular resolution ranges between $25^{\circ}$ and $3^{\circ}$, respectively. We find that down to $10$ keV kinetic energy, the angular resolution ($< 15^{\circ}$) of our MIMAC prototype detector is better than the $20^{\circ}$ required in Ref.~\cite{Billard2011} for small angles. Systematic effects, including diffusion, space charge at high gain in the amplification gap, and anode pixelization, contribute to the measured angular resolution. Dedicated Garfield++ simulations have shown that, despite these effects, the angular resolution remains experimentally accessible for small polar angles near $0^\circ$. The simulations agree with the measurements within $20\%$ for energies above $13.82$ keV. Directional detection also plays a key role in neutron spectroscopy from neutron-induced nuclear recoils. Several experiments performed with MIMAC detectors with $18$ cm and $25$ cm drift chambers have demonstrated the ability of the MIMAC strategy to perform neutron spectroscopy in the range $[27\,\mathrm{keV}, 15\,\mathrm{MeV}]$~\cite{Sauzet2019, Tampon2018}. When applying the angular resolution obtained with the MIMAC prototype to the two-component Galactic halo model, the directional dipole feature of the WIMP signal is preserved as expected, but it is not adequate to resolve the substructure component. Since we obtain a good angular resolution for sub-millimeter tracks, we can further make use of the favored direction correlated with the latitude of specific underground laboratories (e.g. the Modane Underground Laboratory (LSM) at $45.18^\circ$N and the China Jin-Ping Underground Laboratory (CJPL) at $28.15^\circ$N).
We have shown that the direction of the electric drift field is a parameter that affects the angular resolution; it therefore also becomes important to optimize the detector orientation in order to probe the angular distribution of the WIMP-induced recoils in correlation with the rotation of the Earth. The MIMAC collaboration is developing a 1 m$^3$ detector built from bi-chamber modules with $25$ cm drift each, to be installed at the LSM. If a Dark Matter signal in the GeV range is observed, the angular resolution that can be obtained with a MIMAC-like detector provides a clear signature of the Galactic halo origin of such a Dark Matter candidate. \section{Acknowledgments} Yi Tao, Igor Moric and Charling Tao thank the Tsinghua University Physics Department and Department of Astronomy (DOA) and the National Natural Science Foundation of China (NSFC11475205) for support. We acknowledge David Diez for helping the MIMAC team run Garfield++ simulations. \bibliographystyle{model1-num-names}
\section{Introduction} It is well known that SuperSymmetry (SUSY) introduces many new sources of Flavor Changing Neutral Currents (FCNC), which put strong constraints on the construction of extensions of the Standard Model (SM). A common feature of these models is that FCNC effects are induced by SUSY-breaking parameters that mix different flavors. Several ideas have been proposed in the literature in order to suppress unwanted FCNC effects. For instance, in models where SUSY breaking is induced by gauge interactions~\cite{mdin} the SUSY-breaking parameters are flavor blind, or they are dominated by the dilaton multiplet of string theory~\cite{kapl}. Alternatively, flavor symmetries are used to provide either a sufficient degeneracy between the first two generations of sfermions~\cite{leig} or an alignment between quark and squark mass matrices~\cite{ynir}. Here we investigate the hypothesis that the average squark mass of the first two generations is much higher than the rest of the spectrum of (s-)particles~\cite{dine,dimo,poma,cohe}. Throughout the paper we denote the average mass of the heavy squarks by $M_{sq}$ and the typical mass scale of gauginos and of the other light sparticles by $m_{\tilde{g}}$. Small Yukawa couplings of the first two generations of scalars to the Higgs doublets, together with masses of the rest of the supersymmetric spectrum close to the weak scale, allow a natural electroweak symmetry breaking (EWSB). This scenario has very interesting phenomenological signatures~\cite{cohe} and can be easily realized in string theory~\cite{bine}. We consider gluino-squark-mediated FCNC contributions to $\Delta M_K$ and $\epsilon_K$ in the neutral $K$-system. The effect of the most general squark mass matrix for this class of models is studied. In some cases further restrictions on the squark masses are required and other contributions can be more important. In particular, chargino-squark-quark interactions should also be considered.
We postpone the inclusion of these effects to subsequent work. We work in the so-called mass-insertion approximation~\cite{hall}. In this framework one chooses a basis for fermion and sfermion states in which all the couplings of these particles to neutral gauginos are flavor diagonal, and flavor-changing effects appear through the non-diagonality of the sfermion propagators. The pattern of flavor change, for the $K$-system, is given by the ratio \begin{equation} (\delta^{d}_{ij})_{AB}= \frac{(m^{\tilde d }_{ij})^2_{AB}}{M_{sq}^2} \ , \end{equation} where $ (m^{\tilde d }_{ij})^2_{AB}$ are the off-diagonal elements of the $\tilde d $ mass squared matrix that mix flavors $i$, $j$ for both left- and right-handed scalars ($A,B=$Left, Right), see {\it e.g.}~\cite{gabb}. The sfermion propagators are expanded as a series in the $\delta$'s and the contribution of the first term of this expansion is considered. The supersymmetric flavor problem consists in building viable models in which FCNC are suppressed without requiring an excessive fine tuning of the parameters. In models with a split spectrum of sparticles, in which the average mass of the lightest ones ($m_{\tilde g}$) is in the electroweak or TeV region, two scenarios are possible: \begin{enumerate} \item for reasonable values of $M_{sq}$, the suppression of FCNC requires small values of the $\delta$'s. Thus, by fixing $M_{sq}$, one can derive constraints on the $\delta$'s, see refs.~\cite{gabb,bagg} and, for a very recent NLO analysis, ref.~\cite{ciuc2}; \item for natural values of the $\delta$'s, say ${\cal{O}}(1)$ or of the order of the Cabibbo angle, ${\cal{O}}(0.22)$, one finds that the only way to get rid of unwanted FCNC effects is to have the squarks of the first two generations heavy enough. Thus, by fixing the $\d$'s, one can derive constraints on the minimal values of $M_{sq}$.
Large values of $M_{sq}$, however, induce large values for the GUT-scale masses of the third generation of squarks via the Renormalization Group Equations (RGE). Consequently there can be fine-tuning problems for the $Z$-boson mass. We study this issue in sec.~\ref{sec:spec}. This point of view was adopted in refs.~\cite{arka,agas}. \end{enumerate} In the past, several phenomenological analyses were carried out, which relied on some approximations. For instance, the work of ref.~\cite{arka} does not include QCD radiative corrections and makes use of the Vacuum Insertion Approximation (VIA) for the evaluation of hadronic matrix elements. Leading-order QCD corrections to the evolution of the Wilson coefficients were, instead, considered in refs.~\cite{bagg,agas}. These authors found that QCD corrections are extremely important: for example, in \cite{agas} the lower bound on the heavy squark mass is increased by roughly a factor of three. In this work we discuss both scenarios and improve previous analyses by including the Next-to-Leading Order (NLO) QCD corrections to the most general $\Delta F=2$ effective Hamiltonian~\cite{ciuc} and the lattice calculation of all the B-parameters appearing in the $K-\bar K$ mixing matrix elements, which have recently been computed~\cite{allt}. This is interesting for several reasons. First of all, the inclusion of these effects leads to sizeable deviations from previous computations: the results obtained using only LO-QCD corrections and the Vacuum Insertion Approximation are corrected by about $25-35\%$. Furthermore, the uncertainties of the final result due to its dependence on the scale at which hadronic matrix elements and quark masses are evaluated are much reduced. Predictions for any model can be tested using the so-called magic numbers we provide. These numbers allow one to obtain the coefficient functions at any low-energy scale once the matching conditions are given at a higher energy scale.
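To make the mass-insertion expansion introduced above concrete, the following sketch (with purely illustrative numbers) compares the exact off-diagonal element of the inverse of a $2\times 2$ down-squark mass matrix with the first term of its $\delta$-expansion, which is the term kept in this paper:

```python
import numpy as np

# For a 2x2 down-squark mass matrix  M2 = [[Msq^2, dm2], [dm2, Msq^2]]
# with delta = dm2 / Msq^2, the exact off-diagonal element of M2^{-1} is
#   -dm2 / (Msq^4 - dm2^2),
# while the leading mass-insertion term is  -dm2 / Msq^4.
# The numbers below are illustrative, not fitted values.
Msq = 5000.0           # heavy squark mass, GeV (illustrative)
delta = 0.05           # (delta^d_12), illustrative
dm2 = delta * Msq**2   # off-diagonal mass-squared element

M2 = np.array([[Msq**2, dm2], [dm2, Msq**2]])
exact = np.linalg.inv(M2)[0, 1]
insertion = -dm2 / Msq**4          # first term of the delta expansion

rel_err = abs((exact - insertion) / exact)  # O(delta^2) correction
```

The relative difference is of order $\delta^2$, which is why the first term of the expansion is a good approximation for small $\delta$'s.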
The magic numbers will be useful, {\it e.g.}, when a complete NLO analysis of SUSY contributions to $\Delta F=2$ processes (which should also include chargino exchange effects) is implemented in the future. A complete NLO calculation should also comprise the ${\cal{O}}(\a_s)$ corrections to the Wilson coefficients at the scale of the SUSY masses running in the loops. So far, this piece of information is missing for the gluino-squark contributions\footnote{The matching conditions for charged-Higgs and chargino contributions have been recently computed~\cite{krau}.}. The smallness of these corrections can be argued from the smallness of $\a_s$ at such scales; this uncertainty can be removed only by a direct computation. The paper is organized as follows. In sec.~\ref{sec:heff} we introduce the formalism concerning the operator basis, the Wilson coefficients and the Renormalization Group Equations (RGE). In sec.~\ref{sec:deltas} constraints on the $\d$'s are derived. The problem of the consistency of the squark spectrum for given entries of the $\d$'s is considered in sec.~\ref{sec:spec}. Finally, our conclusions are presented in sec.~\ref{sec:conc} and all the magic numbers are given in the appendix. \section{Effective Hamiltonian and hadronic matrix elements} \label{sec:heff} In this section we describe the framework in which the basic calculations have been performed. We follow the discussion of ref.~\cite{bagg} for the case $M_{sq}\gg m_{\tilde g}$. Throughout the paper (unless otherwise explicitly specified), we assume that the average masses of the gluinos and of the third-generation squarks are of the same order of magnitude. The three steps needed to use the Operator Product Expansion (OPE) (matching of the effective theory, perturbative evolution of the coefficients, and evaluation of hadronic matrix elements) are treated in detail in the following subsections.
\subsection{Operator basis and matching of the effective theory } In order to apply the OPE one has to calculate coefficients and operators of the effective theory. One first integrates out the heavy scalars of the first two generations at the scale $M_{sq}$. This step produces $\Delta S=1$ (of the form $\bar{d} \tilde{g}\bar{\tilde g} s$) as well as $\Delta S=2$ operators, at the same order $1/M_{sq}^2$. When the gluinos are also integrated out at $m_{\tilde{g}}$, $\Delta S=1$ operators generate $\Delta S=2$ contributions that are proportional to $m_{\tilde g}^2/M_{sq}^4$, and so can be neglected. The final basis of operators is: \begin{eqnarray} Q_1 &=& \bar d^\alpha \gamma_\mu (1-\gamma_5) s^\alpha \ \bar d^\beta \gamma^\mu (1-\gamma_5) s^\beta\, , \nonumber \\ Q_2 &=& \bar d^\alpha (1-\gamma_5) s^\alpha \ \bar d^\beta (1-\gamma_5) s^\beta \, , \nonumber \\ \nonumber Q_3&=& \bar d^\alpha (1-\gamma_5) s^\beta \ \bar d^\beta (1-\gamma_5) s^\alpha \, , \\ Q_4 &=& \bar d^\alpha (1-\gamma_5) s^\alpha \ \bar d^\beta (1+\gamma_5) s^\beta \, , \nonumber \\ Q_5&=& \bar d^\alpha (1-\gamma_5) s^\beta \ \bar d^\beta (1+\gamma_5) s^\alpha\, , \label{eq:susy} \end{eqnarray} together with the operators $\tilde Q_{1,2,3}$, which can be obtained from $Q_{1,2,3}$ by the exchange $(1-\gamma_5) \leftrightarrow (1+\gamma_5)$. The Wilson coefficients at the matching scale $M_{sq}$ are (see {\it e.g.}~\cite{gabb,bagg}): \begin{eqnarray} \nonumber C_1&=&-\frac{\alpha_s^2}{216 M^2_{sq}} \left(24 x f_6(x)+66 \tilde{f}_6(x)\right) (\delta^d_{12})_{LL}^2, \\ \nonumber C_2&=&-\frac{\alpha_s^2}{216 M^2_{sq}} 204 f_6(x)(\delta^d_{12})_{RL}^2,\\ \nonumber C_3&=&\frac{\alpha_s^2}{216 M^2_{sq}} 36 x f_6(x)(\delta^d_{12})_{RL}^2,\\ \nonumber C_4&=&-\frac{\alpha_s^2}{216 M^2_{sq}} \left[ \left(504 x f_6(x)-72\tilde{f}_6(x)\right)(\delta^d_{12})_{LL}(\delta^d_{12})_{RR}\right. \\ \nonumber & &\left.
-132 \tilde{f}_6(x)(\delta^d_{12})_{LR}(\delta^d_{12})_{RL} \right],\\ \nonumber C_5&=&-\frac{\alpha_s^2}{216 M^2_{sq}} \left[ \left(24 x f_6(x)+120\tilde{f}_6(x)\right)(\delta^d_{12})_{LL}(\delta^d_{12})_{RR}\right. \\ & &\left. -180 \tilde{f}_6(x)(\delta^d_{12})_{LR}(\delta^d_{12})_{RL} \right], \end{eqnarray} where $x=(m_{\tilde g}/M_{sq})^2$ and \begin{eqnarray} f_6(x) &=& \frac{6(1+3 x)\ln{x}+x^3-9 x^2-9 x+17}{6 (x-1)^5}, \nonumber \\ \tilde{f}_6(x) &=&\frac{6x(1+x)\ln{x}-x^3-9 x^2+9 x+1}{3(x-1)^5}. \end{eqnarray} The coefficients of the operators $\tilde Q_{1,2,3}$ are the same as those of $Q_{1,2,3}$ with the replacement $L \leftrightarrow R$. The authors of refs.~\cite{bagg,arka,agas} use the matching coefficients directly in the limit $x\rightarrow 0$. However, since we also consider the extreme case $m_{\tilde g} \sim M_{sq}/2$, we keep the full expressions. Of course, the values of the coefficients coincide with those of refs.~\cite{bagg,arka,agas} when $x\ll 1$. As we said, NLO corrections to these coefficients have not been computed yet. We assume they are negligible, in view of the smallness of $\a_s(M_{sq})$ and of the fact that similar corrections turned out to be rather small in the SM, in the two-Higgs-doublet model, and for the chargino contribution in the constrained MSSM~\cite{krau}. Our effective Hamiltonian is therefore affected by a residual renormalization-scheme dependence, because of the missing ${\cal{O}}(\a_s(M_{sq}))$ piece in the matching. \subsection{Evolution of Wilson coefficients and running of $\a_s$ } In order to evolve the Wilson coefficients between $M_{sq}$ and the scale at which hadronic matrix elements are evaluated ($\mu$=2 GeV), one has to account for the presence of all particles whose masses lie between the two scales, both in the $\b$-function of $\a_s$ and in the Anomalous Dimension Matrix (ADM) of the operators.
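The loop functions $f_6$ and $\tilde{f}_6$ above are straightforward to evaluate numerically; the sketch below codes them as written, together with the matching coefficient $C_1$ for an $LL$ insertion (the values of $\a_s$, $M_{sq}$ and $\delta$ used in the example are illustrative placeholders):

```python
import math

# Loop functions of the gluino-box matching, x = (m_gluino / M_sq)^2,
# transcribed directly from the expressions in the text (valid for x != 1).
def f6(x):
    return (6.0 * (1.0 + 3.0 * x) * math.log(x)
            + x**3 - 9.0 * x**2 - 9.0 * x + 17.0) / (6.0 * (x - 1.0)**5)

def f6t(x):
    return (6.0 * x * (1.0 + x) * math.log(x)
            - x**3 - 9.0 * x**2 + 9.0 * x + 1.0) / (3.0 * (x - 1.0)**5)

# Matching coefficient C_1 at the scale M_sq for an LL insertion,
# as given above; alpha_s and delta_LL values are placeholders.
def C1(alpha_s, Msq, delta_LL, x):
    return (-alpha_s**2 / (216.0 * Msq**2)
            * (24.0 * x * f6(x) + 66.0 * f6t(x)) * delta_LL**2)
```

For instance, at $x=0.25$ (i.e. $m_{\tilde g}=M_{sq}/2$) one finds $f_6\simeq 0.248$ and $\tilde f_6\simeq -0.102$.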
Concerning the former, one has~\cite{drtj}: \begin{eqnarray} \b(\a_s) &=&-\b_0 \a_s^2 - \b_1 \a_s^3 + {\cal{O}}(\a_s^4), \\ \b_0&=&\frac{1}{3} \left(11 N_c -2 n_f -2 N_c n_{\tilde g} -\frac{1}{2} n_{\tilde q} \right), \\ \b_1&=& \frac{1}{3} \left(34 N_c^2- \frac{13 N_c^2 -3}{ N_c} n_f- 16 N_c^2 n_{\tilde g} - \frac{4 N_c^2-3 }{2 N_c} n_{\tilde q} + 3 \ \frac{3 N_c^2-1}{2 N_c} n_{\tilde q} n_{\tilde g} \right), \end{eqnarray} where $N_c=3$ for color SU(3) and $n_f$ is the number of fermion flavors. The terms proportional to $ n_{\tilde g} $ and $n_{\tilde q }$ represent, respectively, the gluino and light-scalar contributions. One sets $n_{\tilde g}=1$ and $n_{\tilde q}=4$ when evolving between $M_{sq}$ and $m_{\tilde{g}}$, and $n_{\tilde g}=n_{\tilde q }=0$ when evolving from $m_{\tilde{g}}$ down to a lower mass scale. In ref.~\cite{ciuc} the Anomalous Dimension Matrix of the operators was computed at NLO. In that reference, since all SUSY particles are taken to be heavy, only loops with fermions and gluons were considered. This result must be modified to take into account that, from $M_{sq}$ to $m_{\tilde g}$, the squarks of the third generation and the gluinos can also run in the loops. As a matter of fact, for the $K$-system, light third-generation squarks and gluinos can enter the two-loop ADM only via the renormalization of the gluon propagator. An explicit calculation shows that the required modification consists in considering the ADM of ref.~\cite{ciuc} as a function of $n_f+N_c \ n_{\tilde g} +n_{\tilde q}/4$ when evolving between the heavy squark and gluino mass scales, and as a function of $n_f$ below the latter scale. This substitution is no longer valid if the squarks of the first two generations are also light. The value of the Wilson coefficients at the hadronic scale $\mu=2$ GeV, where the matrix elements are computed, can then be easily calculated.
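The effect of the extra light sparticles on $\b_0$ can be checked directly from the expression above; the sketch below evaluates $\b_0$ in the two regimes and adds a one-loop running of $\a_s$ for orientation (the running function is our own illustrative convention check, with flavor thresholds ignored, not a piece of the paper's numerics):

```python
import math

# One-loop beta-function coefficient with gluino and light-squark
# contributions, as given in the text (Nc = 3 for color SU(3)).
def beta0(Nc, nf, n_gluino, n_squark):
    return (11.0 * Nc - 2.0 * nf - 2.0 * Nc * n_gluino - 0.5 * n_squark) / 3.0

# Between M_sq and m_gluino: 5 quark flavors, one gluino and the 4 light
# third-generation squark degrees of freedom are active.
b0_susy = beta0(3, 5, 1, 4)   # = (33 - 10 - 6 - 2)/3 = 5
# Below m_gluino only quarks and gluons remain active.
b0_sm5 = beta0(3, 5, 0, 0)    # = 23/3

# Illustrative one-loop solution of d(alpha)/d(ln mu) = -b0 alpha^2 / (2 pi).
def run_alpha_one_loop(alpha0, mu0, mu1, b0):
    return alpha0 / (1.0 + alpha0 * b0 / (2.0 * math.pi) * math.log(mu1 / mu0))
```

Running down in scale increases $\a_s$, as expected, and the smaller $\b_0$ above $m_{\tilde g}$ reflects the screening effect of the extra colored sparticles.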
Following ref.~\cite{ciuc}, one evolves between two scales according to: \begin{eqnarray} \vec{C} (\mu)&=&\hat{N}[\mu] \ \hat{U}[\mu,M] \ \hat{N}^{-1}[M] \; \vec{C} (M),\nonumber \\ \hat{N}[\mu]&=& \hat{1} + \frac{\a_s(\mu)}{4\pi} \ \hat{J}(\mu), \nonumber \\ \hat{U}[\mu,M]&=&\left[ \frac{\a_s(M)}{\a_s(\mu)}\right]^{\hat{\gamma}^{(0)T}/(2 \b_0)}, \end{eqnarray} where $\hat{\gamma}^{(0)}$ is the LO-ADM and $\vec{C}(\mu)$ are the Wilson coefficients arranged in a column vector. This formula is correct up to NLO. $\hat{U}[\mu,M]$ gives the LO evolution already computed in ref.~\cite{bagg}, while $\hat{J}$ gives the NLO corrections calculated in ref.~\cite{ciuc}. $\hat{J}$ depends both on the number of active particles at the scale $\mu$ and on the renormalization scheme used for its computation. We have used $\hat{J}$ in the same scheme used for the lattice calculation of the hadronic matrix elements, that is, the so-called Landau-RI scheme. In this way the renormalization-scheme dependence of the final result, at the scale at which hadronic matrix elements are evaluated, cancels out at this perturbative order. As already stressed, for a complete scheme independence of our result one should also include the NLO corrections to the Wilson coefficients at the high matching scale. We provide here the full set of Wilson coefficients at $\mu$=2 GeV as functions of $M_{sq}$ and $m_{\tilde{g}}$ (the so-called magic numbers). We find \begin{eqnarray} C_i(\mu)&=&\sum_{r,j=1}^{5} \ \left[b_{ij}^{(r)}+ \frac{\a_s(m_{\tilde{g}})}{4 \pi}c_{ij}^{(r)} \right] \ \a_s^{a_r}(m_{\tilde{g}}) \ C_{j} (m_{\tilde{g}}), \nonumber \\ C_i(m_{\tilde{g}})&=&\sum_{r,j=1}^{5} \ \left[ d_{ij}^{(r)}+\frac{\a_s(m_{\tilde{g}})}{4 \pi}e_{ij}^{(r)} +\frac{\a_s(M_{sq})}{4 \pi}f_{ij}^{(r)} \right] \ \left(\frac{\a_s(M_{sq})}{\a_s(m_{\tilde{g}})}\right)^{a^\prime_r} C_{j} (M_{sq}).
\label{eq:magic} \end{eqnarray} The complete expression of $a_r$, $a^\prime_r$, $b^{(r)}_{ij}$, $\dots$, is given in the appendix. Eq.~(\ref{eq:magic}) is useful for testing predictions for any model, once the two scales are fixed. The magic numbers for the evolution of $\tilde{C}_{1-3}$ are the same as the ones for the evolution of $C_{1-3}$. Eq.~(\ref{eq:magic}) and the formulae of the appendix, can be used with B-parameters evaluated at $\mu=2$ GeV (see eq.~(\ref{eq:bpar})), in order to determine the contribution to $\Delta M_K$ and $\epsilon_K$ at NLO in QCD for any model of new physics in which the new contributions with respect to the SM originate from the extra heavy particles. It is sufficient to compute the values of the coefficients at the matching scales $M_{sq}$ and $m_{\tilde g}$ and put them in eq.~(\ref{eq:magic}). \subsection{Hadronic Matrix Elements} \label{sub:hme} The hadronic matrix elements of the operators of eq.~(\ref{eq:susy}) in the Vacuum Insertion Approximation (VIA) are: \begin{eqnarray} \langle K^0|Q_1|\bar{K}^0\rangle_{VIA}&=&\frac{1}{3}M_K f_K^2 ,\nonumber \\ \langle K^0|Q_2|\bar{K}^0\rangle_{VIA}&=&-\frac{5}{24}\left(\frac{M_K}{m_s+m_d}\right)^2 M_K f_K^2 ,\nonumber \\ \langle K^0|Q_3|\bar{K}^0\rangle_{VIA}&=&\frac{1}{24} \left(\frac{M_K}{m_s+m_d}\right)^2 M_K f_K^2 ,\nonumber \\ \langle K^0|Q_4|\bar{K}^0\rangle_{VIA}&=&\left[\frac{1}{24}+\frac{1}{4} \left(\frac{M_K}{m_s+m_d}\right)^2\right] M_K f_K^2 ,\nonumber \\ \langle K^0|Q_5|\bar{K}^0\rangle_{VIA}&=&\left[\frac{1}{8}+\frac{1}{12} \left(\frac{M_K}{m_s+m_d}\right)^2\right] M_K f_K^2 , \end{eqnarray} where $M_K$ is the mass of the $K$ meson and $m_s$, $m_d$ are the masses of the $s$ and $d$ quarks respectively. An analogous definition holds for ${\tilde{Q}}_{1,2,3}$. 
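To give a feeling for the numerical size of these VIA matrix elements, the sketch below evaluates them with the quark and meson constants quoted in the text ($M_K=497.67$ MeV, $f_K=159.8$ MeV, $m_s+m_d=132$ MeV at 2 GeV); results are in GeV$^3$:

```python
# VIA matrix elements of Q_1..Q_5, transcribed from the expressions above.
# Masses in GeV, taken from the constants quoted in the paper.
MK = 0.49767
fK = 0.1598
ms_plus_md = 0.125 + 0.007   # m_s(2 GeV) + m_d(2 GeV)

R = (MK / ms_plus_md)**2     # chiral enhancement factor, ~14
norm = MK * fK**2

Q1_via = norm / 3.0
Q2_via = -5.0 / 24.0 * R * norm
Q3_via = 1.0 / 24.0 * R * norm
Q4_via = (1.0 / 24.0 + R / 4.0) * norm
Q5_via = (1.0 / 8.0 + R / 12.0) * norm
```

The large factor $R\simeq 14$ is the chiral enhancement of the matrix elements of $Q_{2-5}$ relative to $Q_1$, which is one reason why the non-standard operators are so constraining.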
Hadronic matrix elements can be evaluated non-perturbatively by introducing B-parameters, defined as follows: \begin{eqnarray} \langle K^0|Q_1(\mu)|\bar{K}^0\rangle&=&\frac{1}{3}M_K f_K^2 B_{1}(\mu),\nonumber \\ \langle K^0|Q_2(\mu)|\bar{K}^0\rangle&=&-\frac{5}{24}\left(\frac{M_K}{m_s(\mu)+m_d(\mu)}\right)^2 M_K f_K^2 B_{2}(\mu),\nonumber \\ \langle K^0|Q_3(\mu)|\bar{K}^0\rangle&=&\frac{1}{24} \left(\frac{M_K}{m_s(\mu)+m_d(\mu)}\right)^2 M_K f_K^2 B_{3}(\mu),\nonumber \\ \langle K^0|Q_4(\mu)|\bar{K}^0\rangle&=&\frac{1}{4} \left(\frac{M_K}{m_s(\mu)+m_d(\mu)}\right)^2 M_K f_K^2 B_{4}(\mu),\nonumber \\ \langle K^0|Q_5(\mu)|\bar{K}^0\rangle&=&\frac{1}{12} \left(\frac{M_K}{m_s(\mu)+m_d(\mu)}\right)^2 M_K f_K^2 B_{5}(\mu), \label{eq:bibi} \end{eqnarray} where $Q_i(\mu)$ are the operators renormalized at the scale $\mu$. The B-parameters for ${\tilde{Q}}_{1,2,3}(\mu)$ are the same as those of $Q_{1,2,3}(\mu)$. In the computation of the $B_i$ for the operators 2-5, smaller contributions of higher order in the chiral expansion, coming from the axial current, have been neglected. A detailed explanation of the reasons for this approximation can be found in ref.~\cite{allt}. The definition of the B-parameters in eq.~(\ref{eq:bibi}) explicitly takes this approximation into account; with it, the low-scale ($\mu$) dependence of the final result cancels in the product of coefficient functions and hadronic matrix elements. The B-parameter of the first operator is usually referred to as $B_K$; it has been extensively studied on the lattice and used in many phenomenological applications (see, {\it e.g.}~\cite{shar,lusi}). We have considered its world average~\cite{shar}. The other $B_i$ have been taken from ref.~\cite{allt} (for another determination of these $B_i$, obtained with perturbative renormalization, see ref.~\cite{gupta}).
All the B-parameters are evaluated at a scale of 2 GeV in the Landau-RI renormalization scheme: \begin{eqnarray} B_{1}(\mu=2\ {\rm GeV}) &=&0.60 \pm 0.06,\nonumber \\ B_{2}(\mu=2\ {\rm GeV}) &=&0.66 \pm 0.04,\nonumber \\ B_{3}(\mu=2\ {\rm GeV}) &=&1.05 \pm 0.12,\nonumber \\ B_{4}(\mu=2\ {\rm GeV}) &=&1.03\pm 0.06,\nonumber \\ B_{5}(\mu=2\ {\rm GeV}) &=&0.73 \pm 0.10. \label{eq:bpar} \end{eqnarray} So far, all phenomenological analyses on this subject in the literature have used the VIA and have computed Wilson coefficients and quark masses at a scale varying between 0.5 and 1 GeV. We will see that in some cases this is quite a rough approximation. Finally, we give in table~\ref{tab:cost} all the numerical values of the physical constants we have considered. All coupling constants and $\sin^2{\theta_W}(M_Z)$ are given in the $\overline{MS}$-scheme~\cite{pdg}. \begin{table} \begin{center} \begin{tabular}{||c|c||}\hline Constants & Values \\ \hline \hline $\a_{em} (M_Z)$ & 1/127.88 \\ \hline $\a_s (M_Z)$ & 0.119 \\ \hline $M_K$ & 497.67 MeV \\ \hline $f_K$ & 159.8 MeV \\ \hline $m_d$(2 GeV)&7 MeV\\ \hline $m_s$(2 GeV)&125 MeV\\ \hline $m_c$&1.3 GeV\\ \hline $m_b$&4.3 GeV\\ \hline $m_t$&175 GeV\\ \hline $\sin^2{\theta_W}(M_Z) $ & 0.23124 \\ \hline \hline \end{tabular} \caption{ Constants used for the phenomenological analysis.} \label{tab:cost} \end{center} \end{table} \section{Constraints on the $\d$'s} \label{sec:deltas} We are ready to provide a set of constraints on SUSY variables coming from the $K_L-K_S$ mass difference, $\Delta M_K$, and the CP-violating parameter $\epsilon_K$, defined as \begin{eqnarray} \Delta M_K &=&2 {\rm{Re}}\langle K^0|H_{\rm{eff}}|\bar{K}^0\rangle, \nonumber \\ \epsilon_K &=& \frac{1}{\sqrt{2} \Delta M_K}{\rm{Im}}\langle K^0|H_{\rm{eff}}|\bar{K}^0\rangle.
\end{eqnarray} The parameter space is composed of two real and four complex entries, namely $M_{sq}$, $m_{\tilde g}$ and $(\delta^d_{12})_{LL}$, $(\delta^d_{12})_{LR}$, $(\delta^d_{12})_{RL}$, $(\delta^d_{12})_{RR}$. Neglecting the interference among different SUSY contributions, we give upper bounds on the $\d$'s, at fixed values of $M_{sq}$ and $m_{\tilde g}$, with the condition $M_{sq} > m_{\tilde g}$. In this way one gets a set of constraints on the individual $\d$'s. Indeed, since we are interested in model-independent constraints, it is meaningful to study interference and cancellation effects only in specific models. The physical condition used to derive the bounds on the $\delta$'s is that the SUSY contribution (proportional to each single $\d$) plus the SM contribution to $\Delta M_K$ and $\epsilon_K$ do not exceed the experimental values of these quantities. For what concerns the SM contribution to $\Delta M_K$, we assume that the values of the CKM elements $V_{cd}$ and $V_{cs}$ are unaffected by SUSY. This amounts to the (very reasonable) hypothesis that SUSY does not significantly correct tree-level weak decays. The value of the SM contribution to $\epsilon_K$, instead, depends on the phase of the CKM matrix. This phase can be largely affected by unknown SUSY corrections and can be treated as a free parameter. We set the CKM phase to zero, so that the experimental value of $\epsilon_K$ is completely determined by SUSY. Finally, to be even more conservative, we subtract one standard deviation from the values of the B-parameters. The final results are shown in tabs.~\ref{tab:m250}-\ref{tab:e1000} for gluino masses of 250, 500, and 1000 GeV. We consider the heavy squark masses expected in some common models (see {\it e.g.}~\cite{dine,dimo,poma,cohe}).
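The definitions of $\Delta M_K$ and $\epsilon_K$ above translate directly into code; in the sketch below the complex matrix element is a pure placeholder, not a prediction of any of the models considered here:

```python
import math

# Delta M_K and epsilon_K from a given <K0|H_eff|K0bar> matrix element,
# following the definitions in the text.
def dmk_and_eps(M12):
    """M12 = <K0|H_eff|K0bar> (complex, in GeV)."""
    dmk = 2.0 * M12.real
    eps = M12.imag / (math.sqrt(2.0) * dmk)
    return dmk, eps

# Placeholder matrix element, for illustration only.
dmk, eps = dmk_and_eps(complex(1.75e-15, 1.0e-18))
```

A bound on a given $\d$ then follows from requiring that the SUSY plus SM contributions computed this way do not exceed the measured values.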
The constraints coming from the four possible insertions of the $\d$'s are presented: in the first and second rows only terms proportional, respectively, to $(\delta^d_{12})_{LL}$ and $(\delta^d_{12})_{LR}$ are considered; in the last two rows the contribution of operators with opposite chirality, $RR$ and $RL$, is also evaluated by assuming $(\delta^d_{12})_{LR}=(\delta^d_{12})_{RL}$ and $ (\delta^d_{12})_{LL}=(\delta^d_{12})_{RR}$. In each column of the table we show the bounds on the $\d$'s in the various approximations that can be used for their determination: without QCD corrections and in VIA, with LO-QCD corrections and in VIA, with LO-QCD corrections and lattice B-parameters and, eventually, with NLO-QCD corrections and lattice B-parameters. Comparing the values of our constraints at LO-VIA with those found by the authors of ref.~\cite{bagg}, we find some differences. The reason is twofold: on the one hand, they do not consider the SM contribution to $\Delta M_K$; on the other, they evaluate the hadronic matrix elements at a scale $\tilde{\mu}$ such that $\a_s(\tilde{\mu})=1$. The latter choice may be questionable because at this scale perturbation theory breaks down. The combination of B-parameters and NLO-QCD corrections changes the LO-VIA results by about $25-35\%$. As expected~\cite{bagg}, the tightest constraints are obtained for the cases $(\delta^d_{12})_{LL}=(\delta^d_{12})_{RR}$ and $(\delta^d_{12})_{LR}=(\delta^d_{12})_{RL}$, in which the coefficients proportional to $(\delta^d_{12})_{LL}(\delta^d_{12})_{RR}$ and $(\delta^d_{12})_{LR}(\delta^d_{12})_{RL}$ dominate the others. We have checked that the uncertainties of the results due to higher perturbative orders are sizeable, in some cases up to $10\%$. \section{Constraints on the squark spectrum} \label{sec:spec} In this section, following the discussion of ref.~\cite{arka}, we provide a different kind of constraint.
For fixed values of the $\d$'s and of the average light-sparticle mass $m_{\tilde g}$, it is possible to calculate the minimum value of $M_{sq}$ necessary to suppress the FCNC at an experimentally acceptable level. Here we give constraints on $M_{sq}$ and discuss their consistency. Using the Renormalization Group Equations, one finds that a too large $M_{sq}$ can drive the average mass of the third generation of sfermions, $m_{\tilde f}$, to zero or to negative values at the TeV scale ($m_{\tilde f}(\sim 1 {\rm{TeV}})$). To circumvent this problem, a minimum value for $m_{\tilde f}(\mu_{\rm{GUT}})$ at the GUT scale has to be chosen. If $m_{\tilde f}(\mu_{\rm{GUT}})$ is too high (say, more than 3-4 TeV), however, a too large fine-tuning of the SUSY parameters is required in order to account for the observed mass of the $Z$-boson, and severe naturalness problems arise~\cite{barb,ande}. This problem was studied in refs.~\cite{arka,agas}. One obtains constraints on the consistency of models with a split mass spectrum by following three steps: \begin{itemize} \item determining the {\it minimum} value of $M_{sq}$ necessary to suppress FCNC. This is discussed in subsec.~\ref{sub:min}; \item computing the {\it maximum} value of $M_{sq}$ allowed by the positivity of the light scalar masses and by fine-tuning. More about this in subsec.~\ref{sub:rge}; \item combining the previous two results to determine the regions of allowed values of $M_{sq}$ that satisfy both requirements. We comment on this in subsec.~\ref{sub:res}. \end{itemize} \subsection{Minimum values for heavy squark mass} \label{sub:min} \begin{figure}[t] \begin{center} \hspace*{-2.5cm} \begin{minipage}[t]{0.12\linewidth} \vspace*{-5.5cm} $M_{sq} $~[TeV] \end{minipage} \hspace*{-0.5cm} \epsfig{file=caso1,width=0.62\linewidth} \\[5pt] $m_{\tilde{g}}$ [TeV] \caption[]{\it Lower bounds on $M_{sq}$ from $\Delta M_K$ with various approximations for the case I with ${\cal{K}}=0.22$.
In this case the largest corrections to LO-VIA come from the B-parameters.} \protect\label{fig:confr1} \end{center} \end{figure} \begin{figure}[t] \begin{center} \hspace*{-2.5cm} \begin{minipage}[t]{0.12\linewidth} \vspace*{-5.5cm} $M_{sq} $~[TeV] \end{minipage} \hspace*{-0.5cm} \epsfig{file=caso3,width=0.62\linewidth} \\[5pt] $m_{\tilde{g}}$ [TeV] \caption[]{\it Lower bounds on $M_{sq}$ from $\Delta M_K$ with various approximations for the case III with ${\cal{K}}=0.22$. In this case the largest corrections to LO-VIA come from the NLO perturbative corrections.} \protect\label{fig:confr3} \end{center} \end{figure} In order to obtain constraints on $M_{sq}$ one has to specify values for the $\d$'s. We consider the cases \begin{equation} \mbox{ \begin{tabular}{cccccc} & $(\delta^d_{12})_{LL}$& $(\delta^d_{12})_{LR}$& $(\delta^d_{12})_{RL}$& $(\delta^d_{12})_{RR}$ \\[.4cm] I & $ {\cal{K}} $& 0& 0& 0\\ II & 0&${\cal{K}}$ & 0& 0\\ III & ${\cal{K}}$ & 0& 0& ${\cal{K}}$\\ IV & 0&${\cal{K}}$&${\cal{K}}$&0\\ \end{tabular} } \label{cases} \end{equation} where ${\cal{K}}$ can take the values (1, 0.22, 0.05). We have chosen these entries so as to leave aside possible accidental cancellations. The cases with ${\cal{K}}=1$ are, of course, extreme: one may wonder about the consistency of the mass-insertion approximation, as the neglected terms are of order ${\cal{O}}(1)$. However, these cases have already been studied in the literature (see {\it e.g.} \cite{arka,agas}), and we report them for completeness. The results so obtained just give an estimate of the mass scales involved and can be trusted provided other corrections do not produce accidental cancellations, which can be checked only by a direct calculation. The assumptions made for the SM contribution and the B-parameters are the same as in section~\ref{sec:deltas}.
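For bookkeeping purposes, the four insertion patterns of eq.~(\ref{cases}) can be generated programmatically for each value of ${\cal K}$; the following is a trivial sketch of that enumeration:

```python
# The four insertion patterns I-IV of the text, for a given overall size K
# (order of the entries: LL, LR, RL, RR).
def delta_cases(K):
    return {
        "I":   {"LL": K,   "LR": 0.0, "RL": 0.0, "RR": 0.0},
        "II":  {"LL": 0.0, "LR": K,   "RL": 0.0, "RR": 0.0},
        "III": {"LL": K,   "LR": 0.0, "RL": 0.0, "RR": K},
        "IV":  {"LL": 0.0, "LR": K,   "RL": K,   "RR": 0.0},
    }

# The three values of K considered in the text.
cases = {K: delta_cases(K) for K in (1.0, 0.22, 0.05)}
```

Each entry can then be fed to the matching coefficients to scan the $(m_{\tilde g}, M_{sq})$ plane case by case.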
\begin{figure}[t] \begin{center} \hspace*{-2.5cm} \begin{minipage}[t]{0.12\linewidth} \vspace*{-5.5cm} $M_{sq} $~[TeV] \end{minipage} \hspace*{-0.5cm} \epsfig{file=1,width=0.60\linewidth} \\[5pt] $m_{\tilde{g}}$ [TeV] \caption[]{\it The full (colored) lines give the lower bounds on $M_{sq}$ necessary to suppress FCNC, with ${\cal{K}}$=1, for the various cases. An upper bound on $M_{sq}$ is derived from fine-tuning requirements and is shown by the dashed line. The two kinds of constraints are not compatible in this case.} \protect\label{fig:caso1} \end{center} \end{figure} \begin{figure} \begin{center} \hspace*{-2.5cm} \begin{minipage}[t]{0.12\linewidth} \vspace*{-5.5cm} $M_{sq} $~[TeV] \end{minipage} \hspace*{-0.5cm} \epsfig{file=0.22.eps,width=0.60\linewidth} \\[5pt] $m_{\tilde{g}}$ [TeV] \caption[]{\it The same as fig.~\ref{fig:caso1} with ${\cal{K}}$=0.22. Cases I and II are now compatible with fine-tuning requirements. } \protect\label{fig:caso022} \end{center} \end{figure} \begin{figure} \begin{center} \hspace*{-2.5cm} \begin{minipage}[t]{0.12\linewidth} \vspace*{-5.5cm} $M_{sq} $~[TeV] \end{minipage} \hspace*{-0.5cm} \epsfig{file=0.05.eps,width=0.60\linewidth} \\[5pt] $m_{\tilde{g}}$ [TeV] \caption[]{\it The same as fig.~\ref{fig:caso1} with ${\cal{K}}$=0.05. Case I is not drawn since no lower bound on $M_{sq}$ can be obtained in this case. Cases II and III are now compatible with fine-tuning requirements. } \protect\label{fig:caso005} \end{center} \end{figure} In order to monitor the effect of the different corrections on the final result, we show in figs.~\ref{fig:confr1},~\ref{fig:confr3} the lower bounds obtained for the cases I and III with ${\cal{K}}=0.22$ (the other cases give similar results). As can be seen, B-parameters and NLO-QCD corrections play a significant r\^ole in the final computation, and the corrections they provide with respect to the LO-VIA results are of the order of $(25-35) \%$.
In particular, in case I, fig.~\ref{fig:confr1}, the B-parameters provide the most important corrections with respect to the LO-VIA results. In case III, fig.~\ref{fig:confr3}, instead, the corrections to the LO-VIA results are dominated by the NLO-QCD perturbative contributions. The case $(\delta^d_{12})_{LL}=(\delta^d_{12})_{RR}$ was also considered in refs.~\cite{arka,agas}. The differences, at LO and without B-parameters, between our results and those of refs.~\cite{arka,agas} come from our inclusion of the SM contribution, from the value of the strange quark mass and from the scale at which the hadronic matrix elements are evaluated. We agree with them for the same choice of parameters. Notice that, if the imaginary parts of the $\d$'s are of the same order as their real parts, there are much stronger constraints coming from $\epsilon_K$ than from $\Delta M_K$ (namely by a factor $\sim 7.7$). To be conservative, we consider in this section only the constraints coming from the real parts of the $\d$'s. The final results are shown by the (colored) continuous lines in figs.~\ref{fig:caso1},~\ref{fig:caso022},~\ref{fig:caso005}. The minimum value of $M_{sq}$ depends strongly both on ${\cal{K}}$ and on the case one considers (I, II, III or IV, see eq.~\ref{cases}). Notice that $(\delta^d_{12})_{LR}$ and $(\delta^d_{12})_{RL}$ (which enter the cases II and IV) are ``naturally'' small in the MSSM. However, since we would like to perform a model--independent analysis, we have made no particular assumption about them. In all graphs the strongest constraints come from the case $(\delta^d_{12})_{LR}=(\delta^d_{12})_{RL}\neq 0$. Much weaker constraints are generally obtained in cases I and II. In fig.~\ref{fig:caso005} case I has not been drawn, since no constraint can be derived there. \subsection{RGE for the masses of the third generation of scalars} \label{sub:rge} It is well known that large values of $M_{sq}$ can drive the squared mass of the third generation of scalars to negative values via the RGE~\cite{arka}.
Let us consider the two-loop RGE for the squared mass $m^2_{\tilde f}$ of the third generation of scalars, $\tilde{f}$. In the $\overline{DR}^{\ \prime}$ scheme (see {\it e.g.} ref.~\cite{sieg}), with two generations of heavy scalars, one has \begin{eqnarray} \mu \frac{ d}{d \mu}m^2_{\tilde f}(\mu) &=& -\frac{8}{4 \pi} \sum_i \a_i(\mu)\ C_i^{\tilde f} (m_{G}^2)_i(\mu)+ \frac{32}{(4 \pi)^2} \sum_i\a_i^2(\mu)\ C_i^{\tilde f} M_{sq}^2, \label{eq:rge} \end{eqnarray} where $ C_i^{\tilde f}$ is the Casimir factor for $\tilde f$ in the SU(5) normalization, the sums are over the gauge groups SU(3), SU(2), U(1) and $m_G$ denotes the gaugino masses. In eq.~(\ref{eq:rge}), Yukawa couplings are neglected: these couplings drive the light masses to even lower values and so, in this respect, our choice is a conservative one. Moreover, the introduction of Yukawa interactions requires further assumptions on SUSY parameters (see {\it e.g.}~\cite{agas}) that we do not discuss in this paper. The solution of eq.~(\ref{eq:rge}) between a Grand Unification (GUT) scale $\mu_{\rm GUT}\sim 2 \cdot 10^{16}$ GeV and $\mu\sim 1$ TeV can be easily written as \begin{eqnarray} m_{\tilde f}^2(\mu)&=&m_{\tilde f}^2(\mu_{\rm{GUT}})- \sum_i \frac{16}{ 4 \pi \b_i^0} \Big[\a_i(\mu_{\rm{GUT}})-\a_i (M_{sq})\Big] C_i^{\tilde f} \ M_{sq}^2 +\nonumber \\ & & \sum_i \frac{2}{\b^0_i} \Big[ m_{G}^2(\mu_{\rm{GUT}})-(m_{G}^2)_i(M_{sq})\Big] C_i^{\tilde f} + \nonumber \\ & & \sum_i \frac{2}{\b_i^0} \Big[ (m_{G}^2)_i(M_{sq})-(m_{G}^2)_i(\mu)\Big] C_i^{\tilde f}, \label{eq:rgefin} \end{eqnarray} where $\b_i^0$ are the LO coefficients of the $\b$-function of the $i$-th gauge coupling. In eq.~(\ref{eq:rgefin}) we have considered a common gaugino mass $m_G$ at the GUT scale, while the couplings have been evolved starting from $\mu=M_Z$. Note that in eq.~(\ref{eq:rgefin}) the contribution of the heavy scalars has been decoupled at $ M_{sq}$.
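As a rough numerical illustration of eq.~(\ref{eq:rgefin}), the sketch below (our own estimate, not part of the analysis above) evolves the gauge couplings at one loop and evaluates the heavy-scalar shift of $m^2_{\tilde{Q}_3}$ at $\mu\sim 1$ TeV in the limit $m_G\to 0$. The $\alpha_i(M_Z)$ inputs, the MSSM one-loop coefficients $b_i=(33/5,1,-3)$ and the neglect of all threshold effects are assumptions of the sketch.

```python
import math

# Assumed one-loop MSSM inputs (SU(5) normalization for U(1)); not from the paper.
ALPHA_MZ = {1: 1/59.0, 2: 1/29.6, 3: 0.118}
B0       = {1: 33/5, 2: 1.0, 3: -3.0}   # one-loop beta coefficients b_i
CASIMIR  = {1: 1/60, 2: 3/4, 3: 4/3}    # C_i for the squark doublet Q3
MZ, MGUT = 91.2e-3, 2.0e13              # scales in TeV

def alpha(i, mu):
    """One-loop running: 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i/2pi) ln(mu/MZ)."""
    return 1.0 / (1.0/ALPHA_MZ[i] - B0[i]/(2*math.pi)*math.log(mu/MZ))

def m2_Q3(m2_gut, Msq):
    """Heavy-scalar piece of eq. (rgefin); gaugino terms dropped (m_G -> 0)."""
    shift = sum(16.0/(4*math.pi*B0[i]) * (alpha(i, MGUT) - alpha(i, Msq))
                * CASIMIR[i] * Msq**2 for i in (1, 2, 3))
    return m2_gut - shift

# With m_Q3(GUT) = 3.5 TeV (the reference value used below), m^2 at ~1 TeV stays
# positive for M_sq ~ 10 TeV but is driven negative well before 50 TeV.
# Implied fine-tuning estimate, 10% x (0.3 TeV / 3.5 TeV)^2 (formula quoted below):
fine_tuning = 0.10*(0.3/3.5)**2   # below the 0.1% level
```

With these inputs the sign change of $m^2_{\tilde{Q}_3}$ occurs for $M_{sq}$ of a few tens of TeV, in the same ballpark as the $\sim 25$ TeV bound quoted below once the gaugino contributions are restored.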
Eq.~(\ref{eq:rgefin}) can be used to derive consistency constraints on the values of $M_{sq}$ and $m_{G}^2(\mu_{\rm{GUT}})$ once the values of $m_{\tilde f}^2(\mu)$ and of $m_{\tilde f}^2(\mu_{\rm{GUT}})$ are fixed. The latter can be determined according to the following requirements. First, $m_{\tilde f}^2(\mu)$ must be positive, so as to leave the color and electromagnetic symmetries unbroken. Second, the value of $m_{\tilde f}^2(\mu_{\rm{GUT}})$ determines the amount of fine-tuning necessary to achieve electroweak symmetry breaking. Following ref.~\cite{barb}, the necessary fine-tuning scales approximately as $10\% \times (0.3 \ {\rm {TeV}}/m_{\tilde{Q}_3}(\mu_{\rm{GUT}}))^2 $ for the squark doublet of the third generation $\tilde{Q}_3$. We have calculated the constraints on $M_{sq}$ and $m_{G}^2(\mu_{\rm{GUT}})$ coming from eq.~(\ref{eq:rgefin}) in the case ${\tilde{f}}={\tilde{Q}_3 }$, choosing for $m_{\tilde{Q}_3 }^2(\mu_{\rm{GUT}})$ the value $(3.5 \ {\rm {TeV}})^2$. The latter choice corresponds to a fine-tuning of more than $0.1\%$. At fixed values of $m_{\tilde{Q}_3 }^2(\mu)$ and $m_{\tilde{Q}_3 }^2(\mu_{\rm{GUT}})$ (which depend on $M_{sq}$ and $m_{G}$) one can plot the upper value of $M_{sq}$ as a function of $m_{G}$. The result is the (black) dashed line of figs.~\ref{fig:caso1},~\ref{fig:caso022},~\ref{fig:caso005}. One finds that $M_{sq}$ cannot be much larger than about 25 TeV. Of course this is just an estimate of the limiting value: the inclusion of Yukawa couplings, of more severe fine-tuning requirements and of other effects can only lower this limit. \subsection{Final remarks} \label{sub:res} In figs.~\ref{fig:caso1},~\ref{fig:caso022},~\ref{fig:caso005} we combine the constraints derived in the two previous subsections.
These figures (together with tabs.~\ref{tab:m250}-\ref{tab:e1000}) suggest that models with a split mass spectrum also need further assumptions to be phenomenologically viable: {\it e.g.} one has to introduce a flavor symmetry or a dynamical generation of degenerate scalar masses~\cite{arka}. Without these further hypotheses, most of the cases which we have considered face fine-tuning problems. In particular, values of ${\cal{K}}\sim {\cal{O}}(1)$ are hardly acceptable. Although ${\cal{K}}\sim {\cal{O}}(0.22)$ and ${\cal{K}}\sim {\cal{O}}(0.05)$ have better chances, they must be treated carefully. \section{Conclusions} \label{sec:conc} In this work we have analyzed in detail the constraints on the parameters of SUSY models coming from $\rm K-\overline{K}$ oscillations under the hypothesis of a split SUSY spectrum. FCNC contributions coming from gluino-squark-quark interactions have been considered, working in the so-called mass insertion approximation. We provide bounds on the mass insertions and on the SUSY mass scales, and we discuss their consistency. We have improved on previous results by including NLO-QCD corrections to the $\Delta S=2$ effective Hamiltonian and B-parameters for the evaluation of the hadronic matrix elements. A full set of magic numbers is provided, which can be used for further analyses. We have discussed the residual uncertainty of our results coming from our ignorance of the NLO-QCD corrections to the matching coefficients. Our analysis confirms that a split sparticle mass spectrum does not easily explain the FCNC suppression without some amount of fine-tuning. These problems can be solved only if further assumptions are made in this kind of models, {\it e.g.} a flavor symmetry or a dynamical generation of degenerate scalar masses~\cite{arka}. In order to perform a complete analysis of SUSY-FCNC effects, chargino contributions should be included.
It is also interesting to extend this kind of analysis to $\Delta {\rm B}=2$ processes, once the calculation of the B-parameters for the $B-\overline{B}$ system (which is in progress~\cite{reti}) is completed. \subsection*{Acknowledgements} R.~C. thanks INFN for partial support. M.~Ciuchini, E.~Franco, V.~Lubicz, G.~Martinelli, A.~Masiero and L.~Silvestrini are acknowledged for fruitful discussions.
\section{Introduction} The accelerated expansion of the Universe \cite{1,2}, and its verification by several observations \cite{3}-\cite{5}, motivate theoretical physicists to propose theories of gravity which extend General Relativity (GR). The extension should be such that the observations are justified by the modifications instead of by introducing dark energy. It is well known that GR deals with the description of gravity at the local scale but not at the global scale; for example, GR fails to explain the rotation of spiral galaxies \cite{6}. Several theories of gravity have been proposed to generalize the Einstein theory of gravity, but most of these theories are geometric extensions of GR which provide the essential ingredients to support the observations. Among all these modifications, one can consider the $f(R)$ theories, in which the Lagrangian is expressed as a function of the Ricci scalar $R$ \cite{7,8}. Analogous to the $f(R)$ theory of gravity there is another theory, the so-called $f(T)$ theory, which is based on the teleparallel equivalent of general relativity (TEGR). In this theory the gravitational action is described by the torsion scalar $T$ instead of the curvature. In $f(T)$ theory, a Riemann-Cartan spacetime is defined with Weitzenb\"{o}ck connections instead of Levi-Civita connections; in this situation the background spacetime admits non-zero torsion and zero curvature. This description of spacetime was introduced by Einstein to define gravitation in terms of torsion and tetrads. In TEGR, the tetrad field plays the role of the dynamical field instead of the metric field. In spite of such differences, TEGR and GR have many similarities; in other words, every solution of GR is also a solution of TEGR. But when TEGR is generalized to the $f(T)$ gravity theory, by assuming the Lagrangian to be a general function of the torsion scalar, the equivalence between these two theories breaks down \cite{11,12}.
Therefore $f(T)$ theory can be considered a prominent candidate for the explanation of the accelerated expansion of the universe in the absence of dark energy \cite{13}. In $f(T)$ gravity the equations of motion are second-order differential equations, as in GR, whereas in $f(R)$ gravity such equations are fourth order in the metric formalism (second order in the Palatini approach). As compared to GR, the dynamical equations in $f(T)$ gravity display additional degrees of freedom, which is related to the fact that the equations of motion are not invariant under local Lorentz transformations \cite{15}. Recently \cite{16}, it has been proposed that there are two types of tetrads, bad and good: the diagonal and non-diagonal tetrads are known as bad and good tetrads, respectively. The non-diagonal tetrads are used to properly parallelize the isotropic spherically symmetric spacetime. No doubt $f(T)$ theory is an excellent theory for the explanation of the cosmic acceleration and of observations on large scales (clustering of galaxies) \cite{17}. But we must remember that GR is in excellent agreement with solar system tests and pulsar observations \cite{18}. In recent papers \cite{19,20}, some solar system constraints have been proposed for $f(T)$ gravity in the diagonal tetrad case. In the present paper, however, we adopt the non-diagonal tetrad to derive models of anisotropic compact stars in $f(T)$ gravity, assuming a power-law form of the $f(T)$ model. In theoretical astrophysics, $f(T)$ theory was used to explore the effects of $f(T)$ models in 3 dimensions, so that the $f(T)$ version of the BTZ black hole solutions has been formulated \cite{21}. Later on \cite{22}, it has been proved that the violation of the first law of black hole thermodynamics in $f(T)$ gravity is due to the violation of local Lorentz invariance. Recently \cite{23}, some static spherically symmetric solutions with a charged source have been found in $f(T)$ theory.
After finding a large class of static perfect fluid solutions \cite{24}, the physical conditions for the existence of astrophysical stars in $f(T)$ theory have been discussed \cite{25}. Recently, Abbas and his collaborators have studied models of anisotropic compact stars in GR, $f(R)$, $f(G)$ and $f(T)$ theories in the diagonal tetrad case \cite{25a}-\cite{25d}. The main objective of this paper is to study models of anisotropic compact stars in the context of $f(T)$ gravity using a non-diagonal tetrad. We investigate the anisotropic behavior, the regularity and the stability of these models. Finally, the surface redshift is calculated. All these properties of the models are discussed using the observational data of the compact stars. The plan of the paper is the following. In the next section, we present a review of $f(T)$ gravity. Section \textbf{3} deals with the anisotropic source and the equations of motion in $f(T)$ gravity with a non-diagonal tetrad. Section \textbf{4} presents the physical analysis of the proposed models. In the last section, we conclude the results of the paper. \section{$f(T)$ Gravity: Fundamentals} In this section we briefly overview the basics of $f(T)$ gravity. The dynamical variables of TEGR are the orthonormal tetrad components $e_A(x^\mu)$. The metric can be defined via these components by $g_{\mu\nu}=\eta_{AB}e^A_\mu{e}^B_\nu$, where $\eta_{AB}=diag(1,-1,-1,-1)$. Herein, $A,B$ run over $0,1,2,3$ for the tangent space of the manifold, and $\mu,\nu$ are coordinate indices on the manifold which also run over $0,1,2,3$. The tetrad fields define a connection of the form $\Gamma^\lambda_{\mu\nu}=e^\lambda_A\partial_\nu{e}^A_\mu$, which is known as the Weitzenb\"{o}ck connection. This connection carries torsion but no curvature, in contrast to the Levi-Civita connection, for which the torsion is zero.
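The statement that the Weitzenb\"{o}ck connection is curvature-free can be verified directly. The following sympy sketch (our own toy example: the tetrad $e^1=dr$, $e^2=r\,d\theta$ of the flat 2D plane, an illustrative choice not taken from the text) builds $\Gamma^\lambda_{\mu\nu}=e^\lambda_A\partial_\nu e^A_\mu$ and checks that every component of the associated Riemann tensor vanishes.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]

# Toy tetrad of the flat 2D plane in polar coordinates: e^1 = dr, e^2 = r dtheta
e = sp.Matrix([[1, 0], [0, r]])   # rows: tangent index A, columns: coordinate mu
einv = e.inv()                    # einv[mu, A] = e_A^mu

def Gamma(lam, mu, nu):
    """Weitzenbock connection Gamma^lam_{mu nu} = e_A^lam d_nu e^A_mu."""
    return sum(einv[lam, A]*sp.diff(e[A, mu], x[nu]) for A in range(2))

def Riemann(l, m, n, p):
    """Riemann tensor of a general linear connection."""
    val = sp.diff(Gamma(l, m, p), x[n]) - sp.diff(Gamma(l, m, n), x[p])
    val += sum(Gamma(l, s, n)*Gamma(s, m, p) - Gamma(l, s, p)*Gamma(s, m, n)
               for s in range(2))
    return sp.simplify(val)

flat = all(Riemann(l, m, n, p) == 0
           for l in range(2) for m in range(2)
           for n in range(2) for p in range(2))
```

The same cancellation goes through for any invertible tetrad, since the Weitzenb\"{o}ck connection is of pure-gauge form.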
The torsion $T^\lambda_{\mu\nu}$ and contorsion $K^{\mu\nu}_\lambda$ tensors are defined by \begin{eqnarray}\label{1} T^\lambda_{\mu\nu}=e^\lambda_A(\partial_\mu{e}^A_\nu-\partial_\nu{e}^A_\mu),\\\label{2} K^{\mu\nu}_\lambda=-\frac{1}{2}(T^{\mu\nu}_\lambda-T^{\nu\mu}_\lambda-T_\lambda^{\ \mu\nu}). \end{eqnarray} The torsion scalar defines the teleparallel Lagrangian density and is given by \begin{equation}\label{3} T=T^\lambda_{\mu\nu}S^{\mu\nu}_\lambda, \end{equation} where \begin{equation}\label{4} S^{\mu\nu}_\lambda=\frac{1}{2}(K^{\mu\nu}_\lambda+\delta^\mu_\lambda{T}^{\alpha\nu}_\alpha -\delta^\nu_\lambda{T}^{\alpha\mu}_\alpha). \end{equation} One can define the modified teleparallel action by replacing $T$ with a function of $T$, in analogy to $f(R)$ gravity \cite{7,8}, as follows \begin{equation}\label{5} \mathcal{I}=\int{dx^4e\left[\frac{1}{2\kappa^2} f(T)+\mathcal{L}_{(M)}\right]}, \end{equation} where $e=det(e^A_\mu)=\sqrt{-g}$ and $\kappa^2=8\pi{G}=1$. The variation of the action with respect to the vierbein field $e^\mu_A$ yields \begin{equation}\label{6} e^{-1}\partial_\mu(eS^{\mu\nu}_A)f'(T)-e^\lambda_AT^\rho_{\mu\lambda}S^{\nu\mu}_\rho f'(T)+S^{\nu\mu}_A\partial_\mu(T)f''(T)+\frac{1}{4}e^\nu_Af(T)=\frac{\kappa^2}{2} e^\rho_AT^{(m)\nu}_\rho, \end{equation} where $T^{(m)}_{\mu\nu}$ is the matter energy-momentum tensor and the prime denotes differentiation with respect to $T$. \subsection{Anisotropic Matter Configuration in $f(T)$ Gravity} We assume the Krori and Barua spacetime to describe the strange star stellar configuration, which is given by \cite{26} \begin{equation}\label{7} ds^2=-e^{\mu(r)}dt^2+e^{\nu(r)}dr^2+r^2d\Omega^2, \end{equation} with $\nu=Ar^2$ and $\mu=Br^2+C$, where $d\Omega^2=d\theta^2+\sin^2\theta{d\phi^2}$, and $A$, $B$ and $C$ are arbitrary constants to be evaluated by using some physical constraints.
In this setup we consider the off-diagonal tetrad matrix of the form \cite{27} \begin{equation}\label{8} e^a_\mu=\left( \begin{array}{cccc} e^{\mu/2}& 0 & 0 & 0 \\ 0 & e^{\nu/2}\sin\theta\cos\phi & r\cos\theta\cos\phi & -r\sin\theta\sin\phi \\ 0 & e^{\nu/2}\sin\theta\sin\phi & r\cos\theta\sin\phi & r\sin\theta\cos\phi \\ 0 & e^{\nu/2}\cos\theta & -r\sin\theta & 0 \\ \end{array}\right) \end{equation} The determinant of $e^a_\mu$ can be obtained as $e=e^{(\mu+\nu)/2}r^2\sin\theta$. We assume an anisotropic fluid as the interior of the compact object, described by the energy-momentum tensor $T^m_{\alpha\beta}$ as \begin{equation}\label{9} T^m_{\alpha\beta}=(\rho+p_t)u_\alpha{u}_\beta-p_tg_{\alpha\beta}+(p_r-p_t)v_\alpha{v}_\beta, \end{equation} where $u_\alpha=e^\frac{\mu}{2}\delta^0_\alpha$, $v_\alpha=e^\frac{\nu}{2}\delta^1_\alpha$, and $\rho$, $p_r$ and $p_t$ correspond to the energy density and the radial and transverse pressures, respectively. The dynamical equations for the spacetime (\ref{7}) lead to \begin{eqnarray}\label{10} \rho&=&-\frac{e^{-\nu/2}}{r}(e^{-\nu/2}-1)F'-\left(\frac{T}{2}-\frac{1}{r^2}- \frac{e^{-\nu}}{r^2}(1-r\nu')\right)\frac{F}{2}+\frac{f}{4},\\\label{11} p_r&=&\left(\frac{T}{2}-\frac{1}{r^2}+\frac{e^{-\nu}}{r^2}(1+r\mu')\right) \frac{F}{2}-\frac{f}{4},\\\nonumber p_t&=&\frac{e^{-\nu}}{2}\left(\frac{\mu'}{2}+\frac{1}{r}-\frac{e^{\nu/2}}{r}\right)F' +\left(\frac{T}{2}+e^{-\nu}\left(\frac{\mu''}{2}+\left(\frac{\mu'}{4}+\frac{1}{2r}\right) (\mu'\right.\right.\\\label{12}&-&\left.\left.\nu')\right)\right)\frac{F}{2}-\frac{f}{4}, \end{eqnarray} where $F=f_T$, the prime denotes the derivative with respect to the radial coordinate, and $T$ is the torsion scalar, given by \begin{eqnarray}\label{13} T(r)&=&\frac{2e^{-\nu}\left(e^{\nu/2}-1\right)\left(e^{\nu/2}-1-r\mu'\right)}{r^2}. \end{eqnarray} Eqs.(\ref{10})-(\ref{12}) represent the energy density and the anisotropic pressures for the off-diagonal tetrad.
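Two quick sanity checks on this setup can be run in Python (our own sketch, not part of the original analysis): (i) the determinant of the tetrad (\ref{8}) is computed symbolically and compared against $e^{(\mu+\nu)/2}r^2\sin\theta=\sqrt{-g}$, note the exponent $(\mu+\nu)/2$ required by $e=\sqrt{-g}$; (ii) the torsion scalar (\ref{13}) is evaluated for the Krori-Barua functions $\nu=Ar^2$, $\mu=Br^2+C$ near the centre, where its small-$r$ expansion $T\simeq A(A/2-2B)\,r^2$ shows that it vanishes, and is therefore regular, at $r=0$.

```python
import math
import sympy as sp

mu, nu, r, th, ph = sp.symbols('mu nu r theta phi')

# (i) Off-diagonal tetrad e^a_mu of eq. (8)
E = sp.Matrix([
    [sp.exp(mu/2), 0, 0, 0],
    [0, sp.exp(nu/2)*sp.sin(th)*sp.cos(ph), r*sp.cos(th)*sp.cos(ph), -r*sp.sin(th)*sp.sin(ph)],
    [0, sp.exp(nu/2)*sp.sin(th)*sp.sin(ph), r*sp.cos(th)*sp.sin(ph),  r*sp.sin(th)*sp.cos(ph)],
    [0, sp.exp(nu/2)*sp.cos(th),           -r*sp.sin(th),             0],
])
expected = sp.exp((mu + nu)/2)*r**2*sp.sin(th)
point = {mu: 0.3, nu: 0.5, r: 2.0, th: 0.7, ph: 1.1}   # arbitrary spot check
det_err = float(sp.N((E.det() - expected).subs(point)))

# (ii) Torsion scalar (13) with nu = A r^2, mu = B r^2 + C (mu' = 2 B r)
def T_kb(rr, A, B):
    x = math.exp(A*rr**2/2)
    return 2*math.exp(-A*rr**2)*(x - 1)*(x - 1 - 2*B*rr**2)/rr**2

A, B = 0.0069, 0.0043        # illustrative values, of the order of the Table 1 entries
small_r = 1e-2
leading = A*(A/2 - 2*B)*small_r**2   # small-r expansion of (13)
```

The constant $C$ drops out of the torsion scalar, since only $\mu'$ enters.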
In the case of a diagonal tetrad there exists an extra off-diagonal equation which forces the algebraic function $f(T)$ to be linear. However, the choice of an off-diagonal tetrad does not impose any such condition, and one can choose a consistent $f(T)$ model. In this setup we consider an observationally viable $f(T)$ model given by \cite{28,29} \begin{equation}\label{14} f(T)=\beta{T}^n, \end{equation} where $\beta$ is an arbitrary constant. In \cite{29} Bamba et al. discussed the existence of finite-time future singularities for a power-law model of the form $T^{\beta}$; it is found that terms involving powers $\beta>1$ can remove such singularities. Setare and Darabi \cite{30} developed the $f(T)$ model for power-law solutions and discussed the case in which the universe enters the phantom phase. Using the $f(T)$ model (\ref{14}) together with the relations for $\mu$ and $\nu$ in the metric (\ref{7}), we find the expressions for $\rho$, $p_r$ and $p_t$ in the following form \begin{eqnarray}\nonumber \rho&=&\frac{1}{(e^{Ar^2/2}-1)(e^{Ar^2/2}-2Br^2-1)^2}\left[2^{n-2}\beta \left(e^{Ar^2}-(1+2Br^2)^2\right.\right.\\\nonumber&-&\left.\left.4n^2(1+Ar^2+2ABr^4) +2n(2+3Ar^2+Br^2+6ABr^4+2B^2r^4)\right.\right.\\\nonumber&+&\left.\left.e^{Ar^2}(-3-4n^2-4Br^2 +2n(3+Br^2))+e^{Ar^2/2}(3+8Br^2+2B^2r^4\right.\right.\\\nonumber&+&\left.\left.4n^2(2+Ar^2+ABr^4) -2n(5+3Ar^2+4Br^2+2ABr^4+2B^2r^4))\right)\right.\\\label{15}&\times&\left.\left(\frac{e^{-Ar^2} (e^{Ar^2/2}-1)(e^{Ar^2/2}-2Br^2-1)}{r^2}\right)^n\right],\\\nonumber p_r&=&-\frac{2^{n-2}\beta{e}^{-Ar^2}}{r^2}\left(\frac{e^{-Ar^2} (e^{Ar^2/2}-1)(e^{Ar^2/2}-2Br^2-1)}{r^2}\right)^{n-1}\left(e^{Ar^2}\right.\\\label{16}&+&\left.
2e^{Ar^2/2}(n-1)(1+Br^2-(2n-1)(1+2Br^2))\right),\\\nonumber p_t&=&2^{n-4}\beta\left[-4+2n\left(2-\frac{(n-1)Ar^2}{(e^{Ar^2/2}-1)^2} -\frac{(n-1)r^2(A-4B+2ABr^2)}{(e^{Ar^2/2}-2Br^2-1)^2} \right.\right.\\\nonumber&+&\left.\left.\frac{An-2B+B(An-B)r^2}{B(e^{Ar^2/2}-1)}- \frac{2B+(A-4B)n+B(An-B)r^2}{B(e^{Ar^2/2}-2Br^2-1)^2}\right) \right]\\\label{17}&\times&\left(\frac{e^{-Ar^2}(e^{Ar^2/2}-1)(e^{Ar^2/2}-2Br^2-1)}{r^2}\right)^{n}. \end{eqnarray} Using the expressions of energy density, radial and transverse pressures (\ref{15})-(\ref{17}), we find the corresponding equation of state (EoS) as follows \begin{eqnarray}\nonumber \omega_r&=&\left[(1-e^{Ar^2/2}+2Br^2)(e^{Ar^2}+2e^{Ar^2/2}(n-1)(1+Br^2) -(2n-1)(1\right.\\\nonumber&+&\left.2Br^2))\right]/\left[e^{3Ar^2/2}-(1+2Br^2)^2 -4n^2(1+Ar^2+2ABr^4)+2n(2\right.\\\nonumber&+&\left.3Ar^2+Br^2+6ABr^4+2B^2r^4)+e^{Ar^2} (-3-4n^2-4Br^2+2n(3\right.\\\nonumber&+&\left.Br^2))+e^{Ar^2/2}(3+8Br^2+4B^2r^4+4n^2(2+Ar^2+ABr^4) -2n(5\right.\\\label{18}&+&\left.3Ar^2+4Br^2+2ABr^4+2B^2r^4))\right],\\\nonumber \omega_t&=&\left[e^{2Ar^2}(n-1)+2e^{3Ar^2/2}(n-1)(n-2-2Br^2)-(1+2Br^2)^2-2n^2 \right.\\\nonumber&\times&\left.(1+Br^2) (1+A(r^2+2Br^4))+n(3+8Br^2+9B^2r^4+2B^3r^6+A(r^2\right.\\\nonumber&+&\left.3Br^4 +2B^2r^6))+e^{Ar^2}(-2(3+6Br^2+2B^2r^4)-2n^2(3+Br^2+A(r^2\right.\\\nonumber&+&\left.Br^4)) +n(12+16Br^2+5B^2r^4+A(r^2+Br^4)))+2e^{Ar^2/2}(2+6Br^2 \right.\\\nonumber&+&\left.4B^2r^4-n(5+10Br^2+7B^2r^4+B^3r^6+A(r^2+2Br^4)) +n^2(3+2Br^2\right.\\\nonumber&+&\left.Ar^2(2+4Br^2+B^2r^4)))\right]/\left[(e^{Ar^2/2}-1)(e^{3Ar^2/2} -(1+2Br^2)^2-4n^2(1\right.\\\nonumber&+&\left.Ar^2+2ABr^4)+2n(2+3Ar^2+Br^2+6ABr^4+2B^2r^4)+e^{Ar^2} (-3\right.\\\nonumber&-&\left.4n^2-4Br^2+2n(3+Br^2))+e^{Ar^2/2}(3+8Br^2+4B^2r^4+4n^2(2+Ar^2 \right.\\\label{19}&+&\left.ABr^4)-2n(5+3Ar^2+4Br^2+2ABr^4+2B^2r^4))\right]. 
\end{eqnarray} \section{Physical Analysis} In this section we explore some features of the anisotropic compact star, which include the anisotropic behavior, the matching conditions and the stability constraints. \subsection{Anisotropic Behavior} Here, we present the evolution of $\rho$, $p_r$ and $p_t$, as shown in Figs.1-3, for the strange star candidates Her X-1, SAX J 1808.4-3658 and 4U 1820-30 (see Table 1). Herein, we set $n=2$ and $\beta=-2$. \begin{figure} \centering \epsfig{file=Fig1_a.eps, width=.34\linewidth, height=1.3in}\epsfig{file=Fig1_b.eps, width=.36\linewidth, height=1.3in}\epsfig{file=Fig1_c.eps, width=.34\linewidth, height=1.3in}\caption{Evolution of energy density $\rho$ versus radial coordinate $r(km)$ at the stellar interior of strange star candidates.} \end{figure} \begin{figure} \centering \epsfig{file=Fig2_a.eps, width=.34\linewidth, height=1.3in}\epsfig{file=Fig2_b.eps, width=.36\linewidth, height=1.3in}\epsfig{file=Fig2_c.eps, width=.34\linewidth, height=1.3in}\caption{Evolution of radial pressure $p_r$ versus radial coordinate $r(km)$ at the stellar interior of strange star candidates.} \end{figure} \begin{figure} \centering \epsfig{file=Fig3_a.eps, width=.34\linewidth, height=1.3in}\epsfig{file=Fig3_b.eps, width=.36\linewidth, height=1.3in}\epsfig{file=Fig3_c.eps, width=.34\linewidth, height=1.3in}\caption{Evolution of tangential pressure $p_t$ versus radial coordinate $r(km)$ at the stellar interior of strange star candidates.} \end{figure} From Eqs.(\ref{15}) and (\ref{16}), one can find the expressions for $d\rho/dr$, $d^2\rho/dr^2$, $dp_r/dr$ and $d^2p_r/dr^2$. We show the results for $d\rho/dr$ and $dp_r/dr$ in the \textbf{Appendix} as Eqs.(\ref{01}) and (\ref{02}), respectively. Figures 4 and 5 show the variation of the derivatives of $\rho$ and $p_r$ with respect to the radial coordinate for the strange star Her X-1. We find that $d\rho/dr<0$, while $dp_r/dr$ shows a transition from positive to negative values.
\begin{figure} \centering \epsfig{file=Fig4_a.eps, width=.49\linewidth, height=1.5in}\epsfig{file=Fig4_b.eps, width=.49\linewidth, height=1.45in}\caption{Evolution of $d\rho/dr$ and $d^2\rho/dr^2$ versus $r(km)$ at the stellar interior of Her X-1.} \end{figure} \begin{figure} \centering \epsfig{file=Fig4_c.eps, width=.49\linewidth, height=1.5in}\epsfig{file=Fig4_d.eps, width=.49\linewidth, height=1.5in}\caption{Evolution of $dp_r/dr$ and $d^2p_r/dr^2$ versus $r(km)$ at the stellar interior of Her X-1.} \end{figure} We also examine the behavior of the derivatives at $r=0$, the centre of the strange star Her X-1. It can be seen that \begin{eqnarray}\nonumber \frac{d\rho}{dr}=0, \quad \quad \frac{dp_r}{dr}=0,\\\label{20} \frac{d^2\rho}{dr^2}<0, \quad \quad \frac{d^2p_r}{dr^2}>0. \end{eqnarray} Clearly, at $r=0$ the density is maximum, and it decreases outward with the increase in $r$, as shown in Figure 1. In the case of $p_r$, we have an initially increasing function, unlike the evolution of $\rho$, which attains its maximum value midway to the boundary of the star and then decreases. One can test the evolution of $\rho$ and $p_r$ for the strange stars SAX J 1808.4-3658 and 4U 1820-30, which show similar behavior. The evolution of the EoS parameters $\omega_r$ and $\omega_t$ is also presented for the strange stars in $f(T)$ gravity. It can be seen that $\omega_r>0$, which favors the quintessence regime near the boundary of the star, whereas $0<\omega_t<1$ despite the role of modified gravity.
\begin{figure} \centering \epsfig{file=Fig5_a.eps, width=.34\linewidth, height=1.3in}\epsfig{file=Fig5_b.eps, width=.36\linewidth, height=1.3in}\epsfig{file=Fig5_c.eps, width=.34\linewidth, height=1.3in}\caption{Evolution of the EoS parameter $\omega_r$ versus radial coordinate $r(km)$ at the stellar interior of strange star candidates.} \end{figure} \begin{figure} \centering \epsfig{file=Fig6_a.eps, width=.34\linewidth, height=1.3in}\epsfig{file=Fig6_b.eps, width=.36\linewidth, height=1.3in}\epsfig{file=Fig6_c.eps, width=.34\linewidth, height=1.3in}\caption{Evolution of the EoS parameter $\omega_t$ versus radial coordinate $r(km)$ at the stellar interior of strange star candidates.} \end{figure} The anisotropy measure for the spherically symmetric metric in $f(T)$ gravity is given by \begin{eqnarray}\nonumber \Delta&=&\frac{2(p_t-p_r)}{r}=\frac{n2^{n-1}\beta}{r^{2n+1}} \left\{(e^{-Ar^2})^n\left(\left(e^{\frac{Ar^2}{2}}-1\right) \left(e^{\frac{Ar^2}{2}}-1-2Br^2\right)\right)^{n-2}\right\} \\\nonumber&\times&\left[e^{2Ar^2}+2e^{\frac{3Ar^2}{2}}(-2+n-Br^2)+e^{Ar^2} (6+Ar^2+4Br^2+ABr^4\right.\\\nonumber&&\left.+B^2r^4-2n(3+Ar^2+Br^2+ABr^4)) +(1+Br^2)(1+Ar^2-Br^2\right.\\\nonumber&&\left.+2ABr^4+2B^2r^4-2n(1 +Ar^2+2ABr^4))+2e^{\frac{Ar^2}{2}}(-2-Ar^2-Br^2\right.\\\label{21}&&\left.-2ABr^4-B^2r^4-B^3r^6+n(3 +2Br^2+Ar^2(2+4Br^2+B^2r^4)))\right]. \end{eqnarray} The measure of anisotropy depends on the radial and tangential pressures: it is directed outward when $p_t>p_r$, which implies $\Delta>0$, whereas $p_t<p_r$ results in $\Delta<0$, so that the anisotropy is directed inward. We present the evolution of $\Delta$ for the strange star candidates Her X-1, SAX J 1808.4-3658 and 4U 1820-30 in Fig. 8. It can be seen that $\Delta>0$ for our model, which implies the existence of a repulsive force allowing a more massive distribution. One can see that $f(T)$ gravity does not affect the anisotropic force. Here, $\Delta$ vanishes at the center $r=0$.
\begin{figure} \centering \epsfig{file=Fig7_a.eps, width=.34\linewidth, height=1.3in}\epsfig{file=Fig7_b.eps, width=.36\linewidth, height=1.3in}\epsfig{file=Fig7_c.eps, width=.34\linewidth, height=1.3in}\caption{Evolution of anisotropy measure $\Delta$ versus radial coordinate $r(km)$ at the stellar interior of strange star candidates.} \end{figure} \subsection{Matching Conditions} Here, we match the interior metric (\ref{7}) to the vacuum exterior spherically symmetric metric given by \begin{equation}\label{22} ds^2=-\left(1-\frac{2M}{r}\right)dt^2+\left(1- \frac{2M}{r}\right)^{-1}dr^2+r^2d\theta^2+r^2\sin^2\theta{d}\varphi^2. \end{equation} At the boundary surface $r=R$, continuity of the metric functions $g_{tt}$, $g_{rr}$ and $\frac{\partial g_{tt}}{\partial r}$ yields \begin{eqnarray}\label{23} g_{tt}^-=g_{tt}^+,~~~~~ g_{rr}^-=g_{rr}^+,~~~~~ \frac{\partial g_{tt}^-}{\partial r}=\frac{\partial g_{tt}^+}{\partial r}, \end{eqnarray} where $-$ and $+$ correspond to the interior and exterior solutions. From the interior and exterior metrics, we get \begin{eqnarray}\label{24} A&=&-\frac{1}{R^2}ln\left(1-\frac{2M}{R}\right),\\\label{25} B&=&\frac{M}{R^3}{{\left(1-\frac{2M}{R}\right)}^{-1}},\\\label{26} C&=&ln\left(1-\frac{2M}{R}\right)-\frac{M}{R}{{\left(1-\frac{2M}{R}\right)}^{-1}}. \end{eqnarray} For the given values of $M$ and $R$ of each star, the constants $A$ and $B$ take the values given in Table \textbf{1}.
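As a cross-check of eqs.~(\ref{24}) and (\ref{25}), the following sketch (our own) computes $A$ and $B$ directly from the compactness $u=M/R$ and the radius $R$; with the quoted $M/R$ values as input, it reproduces the Table \textbf{1} entries.

```python
import math

def kb_constants(u, R):
    """Krori-Barua constants from eqs. (24)-(25), with compactness u = M/R
    (dimensionless, geometrized units) and radius R in km."""
    A = -math.log(1.0 - 2.0*u)/R**2
    B = (u/R**2)/(1.0 - 2.0*u)
    return A, B

# (M/R, R[km]) as quoted in Table 1
stars = {
    "Her X-1":           (0.168, 7.7),
    "SAX J 1808.4-3658": (0.299, 7.07),
    "4U 1820-30":        (0.332, 10.0),
}
results = {name: kb_constants(u, R) for name, (u, R) in stars.items()}
```

The constant $C$ then follows from eq.~(\ref{26}) but does not enter $\rho$, $p_r$ or $p_t$, since only derivatives of $\mu$ appear in the field equations.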
\begin{table}[ht] \caption{Values of constants for given Masses and Radii of Stars \cite{33a, 33b}} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline {Strange Quark Star}& \textbf{ $M$} & \textbf{$R(km)$} & \textbf{ $\frac{M}{R}$} &\textbf{ $A(km ^{-2})$}& \textbf{$B(km ^{-2})$} \\\hline Her X-1& 0.88$M_\odot$& 7.7&0.168&0.006906276428 & $0.004267364618$ \\\hline SAX J 1808.4-3658& 1.435$M_\odot$& 7.07&0.299& 0.01823156974 & $0.01488011569$ \\\hline 4U 1820-30&2.25$M_\odot$& 10.0 &0.332&0.01090644119 & $0.009880952381$ \\\hline \end{tabular} \end{center} \end{table} \subsection{Energy Conditions} Energy conditions have been a handy tool to limit the arbitrariness in the energy-momentum tensor; they are based on the Raychaudhuri equation and the attractive nature of gravity. These conditions include the weak energy condition (WEC), the null energy condition (NEC), the strong energy condition (SEC) and the dominant energy condition (DEC). In the case of an anisotropic fluid, they take the form of the following inequalities \begin{eqnarray}\nonumber \textbf{NEC}:\quad&&\rho+p_r\geq0, \quad \rho+p_t\geq0,\\\nonumber \textbf{WEC}:\quad&&\rho\geq0, \quad \rho+p_r\geq0, \quad \rho+p_t\geq0,\\\nonumber \textbf{SEC}:\quad&&\rho+p_r\geq0, \quad \rho+p_t\geq0, \quad \rho+p_r+2p_t\geq0,\\\nonumber \textbf{DEC}:\quad&&\rho>|p_r|, \quad \rho>|p_t|. \end{eqnarray} We examine these constraints for the strange star Her X-1 and show the evolution in Figure 9. It can be seen that these conditions are satisfied for the strange star Her X-1, and one can check the constraints on the other strange star models in the same way. \begin{figure} \centering \epsfig{file=Fig11.eps, width=.52\linewidth, height=2in}\caption{Evolution of energy constraints at the stellar interior of strange star Her X-1.} \end{figure} \subsection{TOV Equation} For an anisotropic fluid the generalized Tolman-Oppenheimer-Volkoff (TOV) equation takes the following form \begin{equation}\label{27} \frac{dp_r}{dr}+\frac{\mu'(\rho+p_r)}{2}+\frac{2(p_r-p_t)}{r}=0.
\end{equation} Following \cite{28a}, we can express the above equation in terms of the gravitational mass; this yields the equilibrium condition for the strange star, involving the gravitational, hydrostatic and anisotropic forces of the stellar object, \begin{eqnarray}\nonumber &&F_g+F_h+F_a=0, \\\label{28} && F_g=-Br(\rho+p_r),\quad F_h=-\frac{dp_r}{dr}, \quad F_a=\frac{2(p_t-p_r)}{r}. \end{eqnarray} Using the effective $\rho$, $p_r$ and $p_t$ of Eqs.(\ref{10})-(\ref{12}), we can find the equilibrium condition numerically in $f(T)$ theory. In Figure 10, we show the evolution of the above forces in the interior of the strange star Her X-1. This figure indicates that static equilibrium can be attained through the combined effect of the pressure-anisotropy, gravitational and hydrostatic forces for the strange star candidates in $f(T)$ gravity. \begin{figure} \centering \epsfig{file=Fig12.eps, width=.52\linewidth, height=2in}\caption{Evolution of gravitating, hydrostatic and pressure anisotropic forces at the stellar interior of strange star Her X-1.} \end{figure} \subsection{Stability Analysis} In this subsection, we discuss the stability of the strange star models in $f(T)$ theory.
The radial and transverse sound speeds, $v_{sr}^2=dp_r/d\rho$ and $v_{st}^2=dp_t/d\rho$, can be calculated as \begin{eqnarray}\nonumber &&v_{sr}^2=\left[(-1+ e^{\frac{Ar^2}{2}}- 2 B r^2) (e^{2 A r^2} +2 e^{\frac{3Ar^2}{2}} (n-2) (1 + B r^2) - (2n-1) (1 \right.\\\nonumber&&\left.+ 2 B r^2) (1 +A (r^2 + 2 B r^4)) + e^{A r^2} (6 + 10 B r^2 + A (r^2 + 2 B r^4 + 2 B^2 r^6) \right.\\\nonumber&&\left.- 2 n (3 + 4 B r^2+ A (r + B r^3)^2)) + 2 e^{\frac{Ar^2}{2}} (-(1 + 2 B r^2) (2 + A (r^2 + B r^4)) \right.\\\nonumber&&\left.+ n (3 + 5 B r^2 + 2 A (3 B r^4 +r^2+ 2 B^2 r^6))))\right]/\left[1 -e^{\frac{5Ar^2}{2}} + 7 A r^2 - 10 B r^2 \right.\\\nonumber&&\left.+ 18 A B r^4 + 8 B^2 r^4 + 12 A B^2 r^6 + 8 A B^3 r^8 +e^{2 A r^2} (5 + 4 n^2 - 2 A r^2 + 6 B r^2 \right.\\\nonumber&&\left.+ 2 n (-3 + A r^2 - B r^2)) + 4 n^2 (1 + A (r^2+ 2 B r^4))^2 - 2 n (2 - 7 B r^2 + 2 B^2 r^4 \right.\\\nonumber&&\left.+ 3 r^4 (A + 2 A B r^2)^2 +A r^2 (7 + 15 B r^2 + 4 B^2 r^4 + 4 B^3 r^6)) -e^{\frac{3Ar^2}{2}} (10 + 4 B r^2 \right.\\\nonumber&&\left.+ 8 B^2 r^4 - 2 A^2 r^4 (3 + 2 B r^2) +A r^2 (16 B r^2 + 2 B^2 r^4 +3) + 8 n^2 (2 + A (r^2 \right.\\\nonumber&&\left.+ B r^4)) +2 n (-11 + 2 B r^2 - 2 B^2 r^4 -A r^2 (5 + 12 B r^2 + B^2 r^4) + 2 A^2 (r^4 \right.\\\nonumber&&\left.+ B r^6))) +e^{A r^2}(-4 A^2 r^4 (3 + 5 B r^2) +2 (5 - 10 B r^2 + 8 B^2 r^4) +A r^2 (19 \right.\\\nonumber&&\left.+ 56 B r^2 + 16 B^2 r^4 + 4 B^3 r^6) + 4 n^2 (6 + r^4 (A + A B r^2)^2 + A (6 r^2 + 8 B r^4)) \right.\\\nonumber&&\left.-2 n (15 - 14 B r^2 + 6 B^2 r^4 +A^2 r^4 (-1 - 2 B r^2 + 2 B^2 r^4) + 2 A r^2 ( 21 B r^2 \right.\\\nonumber&&\left.+ 4 B^2 r^4 + B^3 r^6 +10))) + e^{\frac{Ar^2}{2}} (-5 + 28 B r^2 - 16 B^2 r^4 +2 A^2 r^4 (3 + 8 B r^2 \right.\\\nonumber&&\left.+ 4 B^2 r^4) -A r^2 (21 + 58 B r^2 + 30 B^2 r^4 + 12 B^3 r^6) -8 n^2 (2 + A r^2 (3 + 5 B r^2) \right.\\\nonumber&&\left.+ A^2 (r^4 + 3 B r^6 + 2 B^2 r^8)) +2 n (9 - 18 B r^2 + 6 B^2 r^4 +A r^2 (21 + 45 B r^2 \right.\\\label{29}&&\left.+ 13 B^2 r^4 + 6 B^3 r^6) +4 A^2 (r^4 + 3 B r^6 + 2 B^2 r^8)))\right].
\end{eqnarray} \begin{eqnarray}\nonumber&& v_{st}^2=-(-1 + e^{3Ar^2} (n-1) - 2 A r^2 - 10 A B r^4 - 6 B^2 r^4 - 18 A B^2 r^6 - 6 B^3 r^6 \\\nonumber&&- 12 A B^3 r^8 - 4 B^4 r^8 + e^{\frac{5Ar^2}{2}} (n-1) (-6 + 2 n - B r^2 (4 + A r^2))-2 n^2 (1 \\\nonumber&&+ B r^2) (1 + A (r^2 + 2 B r^4))^2 + n(3 + 2 B r^2 + 5 B^2 r^4 + 2 B^3 r^6 +r^4 (1 + B r^2)\\\nonumber&&\times(A + 2 A B r^2)^2 +A r^2 (6 + 25 B r^2 + 39 B^2 r^4 +28 B^3 r^6 + 4 B^4 r^8)) - e^{2 A r^2} (15 \\\nonumber&&+ 16 B r^2 + 6 B^2 r^4 +A r^2 (2 + 10 B r^2 + 3 B^2 r^4) - A^2 (r^4 + B r^6) +2 n^2 (5 + B r^2\\\nonumber&&+ 2 A (r^2 + B r^4)) +n (-25 - 18 B r^2 - 5 B^2 r^4 -A r^2 (6 + 17 B r^2 + 4 B^2 r^4) + 2 A^2 (r^4\\\nonumber&& + B r^6))) + e^{\frac{3Ar^2}{2}} (20 + 24 B r^2 + 24 B^2 r^4 + 6 B^3 r^6 - 3 A^2 (r^4 + 2 B r^6) + A r^2 (34 B r^2\\\nonumber&&+8 + 15 B^2 r^4 + B^3 r^6) + 2 n^2 (10 + 4 B r^2 + r^4 (A + A B r^2)^2 + 2 A r^2 (6 B r^2 + B^2 r^4\\\nonumber&&+4)) + n (A^2 r^4 (5 + 10 B r^2 + 2 B^2 r^4) - 2 (20 + 16 B r^2 + 10 B^2 r^4 + B^3 r^6) - A r^2 (24 \\\nonumber&&+ 70 B r^2 + 35 B^2 r^4 + 5 B^3 r^6))) - e^{A r^2} (15 + 16 B r^2 + 36 B^2 r^4 + 18 B^3 r^6 + 4 B^4 r^8\\\nonumber&& - 3 A^2 (r^4 + 3 B r^6 + 2 B^2 r^8) + A r^2 (12 + 52 B r^2 + 39 B^2 r^4 + 6 B^3 r^6 - 2 B^4 r^8)\\\nonumber&& + 2 n^2 (10 + 6 B r^2 + 4 A r^2 (3 + 6 B r^2 + 2 B^2 r^4) + A^2 r^4 (3 + 9 B r^2 + 7 B^2 r^4 + B^3 r^6)) +\\\nonumber&& n (-35 - 28 B r^2 - 30 B^2 r^4 - 6 B^3 r^6 + A^2 r^4 (3 + 9 B r^2 + 8 B^2 r^4 + 2 B^3 r^6) - A r^2 (36 \\\nonumber&&+ 118 B r^2 + 97 B^2 r^4 + 30 B^3 r^6 + 2 B^4 r^8))) + e^{\frac{Ar^2}{2}} (6 + 4 B r^2 + 24 B^2 r^4 + 18 B^3 r^6 \\\nonumber&&+ 8 B^4 r^8 + A r^2 (8 + 37 B r^2 + 45 B^2 r^4 + 17 B^3 r^6 - 2 B^4 r^8) - A^2 (r^4 + 4 B r^6 + 6 B^2 r^8\\\nonumber&& + 4 B^3 r^{10}) + 2 n^2 (5 + 4 B r^2 + 2 A r^2 (4 + 10 B r^2 + 5 B^2 r^4) + A^2 r^4 (12 B r^2 + 14 B^2 r^4\\\nonumber&&+3 + 4 B^3 r^6)) + n (-2 (8 + 6 B r^2 + 10 B^2 r^4 + 3 B^3 r^6) + A^2 r^4 (-1 - 4 B r^2 - 2 B^2 r^4 \\\nonumber&&+ 4
B^3 r^6) - A r^2 (24 + 89 B r^2 + 105 B^2 r^4 + 53 B^3 r^6 + 6 B^4 r^8))))/((-1 + e^{\frac{Ar^2}{2}} (1 \\\nonumber&&- e^{\frac{5Ar^2}{2}} + 7 A r^2 - 10 B r^2 + 18 A B r^4 + 8 B^2 r^4 + 12 A B^2 r^6 + 8 A B^3 r^8 + e^{2 A r^2} (5 + 4 n^2\\\nonumber&& - 2 A r^2 + 6 B r^2 + 2 n (-3 + A r^2 - B r^2)) + 4 n^2 (1 + A (r^2 + 2 B r^4))^2 - 2 n (2 - 7 B r^2\\\nonumber&& + 2 B^2 r^4 + 3 r^4 (A + 2 A B r^2)^2 + A r^2 (7 + 15 B r^2+ 4 B^2 r^4 + 4 B^3 r^6)) - e^{\frac{3Ar^2}{2}}(10 \\\nonumber&&+ 4 B r^2 + 8 B^2 r^4 - 2 A^2 r^4 (3 + 2 B r^2) + A r^2 (3 + 16 B r^2 + 2 B^2 r^4) + 8 n^2 (2 + A (r^2\\\nonumber&& + B r^4)) + 2 n (-11 + 2 B r^2 - 2 B^2 r^4 - A r^2 (5 + 12 B r^2 + B^2 r^4) + 2 A^2 (r^4 + B r^6)))\\\nonumber&& + e^{A r^2} (-4 A^2 r^4 (3 + 5 B r^2) + 2 (5 - 10 B r^2 + 8 B^2 r^4) + A r^2 (19 + 56 B r^2 + 16 B^2 r^4\\\nonumber&& + 4 B^3 r^6) + 4 n^2 (6 + r^4 (A + A B r^2)^2 + A (6 r^2 + 8 B r^4)) -2 n (15 - 14 B r^2 + 6 B^2 r^4 \\\nonumber&&+A^2 r^4 (-1 - 2 B r^2 + 2 B^2 r^4)+2 A r^2 (10 + 21 B r^2 + 4 B^2 r^4 + B^3 r^6)))+e^{\frac{Ar^2}{2}} (-5\\\nonumber&& + 28 B r^2 - 16 B^2 r^4 + 2 A^2 r^4 (3 + 8 B r^2 + 4 B^2 r^4)-A r^2 (21 + 58 B r^2 + 30 B^2 r^4 \\\nonumber&&+ 12 B^3 r^6)-8 n^2 (2 + A r^2 (3 + 5 B r^2) +A^2 (r^4 + 3 B r^6 + 2 B^2 r^8))+2 n (9 - 18 B r^2\\\label{30}&& + 6 B^2 r^4 +A r^2 (21 + 45 B r^2 + 13 B^2 r^4 + 6 B^3 r^6)+4 A^2 (r^4 + 3 B r^6 + 2 B^2 r^8))))). \end{eqnarray} In \cite{31}, Herrera developed a new technique to explore potentially unstable matter configurations and introduced the concept of cracking. One can identify the potentially stable and unstable regions from the difference of the sound speeds: a region in which the radial sound speed is greater than the transverse sound speed is said to be potentially stable. In Figures 11 and 12, we plot the radial and transverse sound speeds for different strange star candidates.
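Given the lengthy closed forms (\ref{29}) and (\ref{30}), the causality and cracking criteria are easiest to screen numerically once $v^2_{sr}$ and $v^2_{st}$ have been sampled on a radial grid. The helper below is a minimal sketch; the toy profiles merely stand in for the evaluated expressions and are assumptions for illustration.

```python
import numpy as np

def stability_report(v2_sr, v2_st):
    """Causality and Herrera cracking checks for sound-speed profiles
    sampled on a radial grid (arrays of v_sr^2 and v_st^2)."""
    causal = bool(np.all((0 <= v2_sr) & (v2_sr < 1) &
                         (0 <= v2_st) & (v2_st < 1)))
    diff = v2_st - v2_sr
    bounded = bool(np.all(np.abs(diff) <= 1))        # |v_st^2 - v_sr^2| <= 1
    # No cracking: the difference keeps a single sign throughout the star
    no_cracking = bool(np.all(diff <= 0) or np.all(diff >= 0))
    return causal, bounded, no_cracking

# Toy profiles standing in for Eqs. (29)-(30): v_st^2 < v_sr^2 everywhere,
# so the configuration is potentially stable (no cracking)
r = np.linspace(0.1, 7.0, 50)
v2_sr = 0.30 + 0.01 * r
v2_st = 0.25 + 0.01 * r
print(stability_report(v2_sr, v2_st))   # (True, True, True)
```

A profile for which `diff` changes sign inside the star, as for SAX J 1808.4-3658 and 4U 1820-30, would return `no_cracking = False`.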
Here, the values of $v^2_{sr}$ and $v^2_{st}$ lie in the range $0<|v^2_{i}|<1$ within the anisotropic matter configuration. We also present the evolution of $v^2_{st}-v^2_{sr}$ in Figure 13, which shows that the difference of the two sound speeds satisfies the inequality $|v^2_{st}-v^2_{sr}|\leq1$. Moreover, $v^2_{st}-v^2_{sr}$ retains the same sign for Her X-1, whereas the sign changes in the case of SAX J 1808.4-3658 and 4U 1820-30. Hence, our proposed strange star model is stable for Her X-1 and unstable for both SAX J 1808.4-3658 and 4U 1820-30. \begin{figure} \centering \epsfig{file=Fig8_a.eps, width=.34\linewidth, height=1.3in}\epsfig{file=Fig8_b.eps, width=.36\linewidth, height=1.3in}\epsfig{file=Fig8_c.eps, width=.34\linewidth, height=1.3in}\caption{Evolution of $v^2_{sr}$ versus radial coordinate $r(km)$ at the stellar interior of strange star candidates.} \end{figure}\\ \begin{figure} \centering \epsfig{file=Fig9_a.eps, width=.34\linewidth, height=1.3in}\epsfig{file=Fig9_b.eps, width=.36\linewidth, height=1.3in}\epsfig{file=Fig9_c.eps, width=.34\linewidth, height=1.3in}\caption{Evolution of $v^2_{st}$ versus radial coordinate $r(km)$ at the stellar interior of strange star candidates.} \end{figure}\\ \begin{figure} \centering \epsfig{file=Fig10_a.eps, width=.34\linewidth, height=1.3in}\epsfig{file=Fig10_b.eps, width=.36\linewidth, height=1.3in}\epsfig{file=Fig10_c.eps, width=.34\linewidth, height=1.3in}\caption{Evolution of $v^2_{st}-v^2_{sr}$ versus radial coordinate $r(km)$ at the stellar interior of strange star candidates.} \end{figure} \subsection{Surface Redshift} The compactness of the star in $f(T)$ gravity can be found as \begin{eqnarray}\nonumber&& u=\frac{M}{b}=\frac{e^{-2 A R^2} \pi R \beta }{9 (A R^2)^{\frac{3}{2}}}\left[-36 (20 A^2 + 2 A B + B^2)e^{2 A R^2} \sqrt{\pi}R^2 Erf[\sqrt{A R^2}]\right.\\\nonumber&&\left.+16(24A^2+ A B + B^2) e^{2 A R^2} \sqrt{6 \pi}R^2 Erf[\sqrt{\frac{3}{2}} \sqrt{A R^2}]-3 (4 \sqrt{ A R^2}(-B^2\right.\\\nonumber&&\left.(3 - 8
e^{\frac{A R^2}{2}} + 6 e^{A R^2})R^2+A (27 - 24 e^{\frac{3A R^2}{2}} + 3 e^{2 A R^2} + 6 B R^2-8e^{\frac{A R^2}{2}}(9 \right.\\\nonumber&&\left.+ 2 B R^2)+6 e^{A R^2} (11 + 2 B R^2)))-48 A^2 e^{2 A R^2} \sqrt{2 \pi}R^2 Erf[ \frac{\sqrt{A R^2}}{\sqrt{2}}] +3(32 A^2\right.\\\label{31}&&\left.-2AB+B^2)e^{2AR^2}\sqrt{2\pi}R^2 Erf[\sqrt{2} \sqrt{A R^2}])\right]. \end{eqnarray} The surface redshift corresponding to the compactness is given by \begin{eqnarray}\nonumber&& 1+Z_s=(1-2u)^{-1/2}=\{1-\frac{2e^{-2 A R^2} \pi R \beta }{9 (A R^2)^{\frac{3}{2}}}\left[-36 (20 A^2 + 2 A B + B^2)e^{2 A R^2} \right.\\\nonumber&&\left.\sqrt{\pi}R^2 Erf[\sqrt{A R^2}]+16(24A^2+ A B + B^2) e^{2 A R^2}\sqrt{6 \pi}R^2 Erf[\sqrt{\frac{3}{2}} \sqrt{A R^2}]\right.\\\nonumber&&\left.-3 (4 \sqrt{ A R^2}(-B^2(3 - 8 e^{\frac{A R^2}{2}} + 6 e^{A R^2})R^2+A (27 - 24 e^{\frac{3A R^2}{2}} + 3 e^{2 A R^2} + 6 B R^2\right.\\\nonumber&&\left.-8e^{\frac{A R^2}{2}}(9+ 2 B R^2)+6 e^{A R^2} (11 + 2 B R^2)))-48 A^2 e^{2 A R^2} \sqrt{2 \pi}R^2 Erf[ \frac{\sqrt{A R^2}}{\sqrt{2}}]\right.\\\label{32}&&\left.+3(32 A^2-2AB+B^2)e^{2AR^2}\sqrt{2\pi}R^2 Erf[\sqrt{2} \sqrt{A R^2}])\right]\}^{-1/2}. \end{eqnarray} In Figure 14, we show the redshift plot for the star Her X-1, and the maximum redshift turns out to be $Z_s=0.025$. \begin{figure} \centering \epsfig{file=Fig13.eps, width=.5\linewidth, height=2in}\caption{Evolution of the redshift $Z_s$ versus radial coordinate $r(km)$ at the stellar interior of the strange star Her X-1.} \end{figure} \section{Conclusion} In this paper, we have addressed the issue of modeling spherically symmetric anisotropic compact stars in $f(T)$ gravity using the off-diagonal tetrad field. For the analytic solution of the system of differential equations, we have assumed the KB form of the metric components and the power-law form of the $f(T)$ model, $f(T)=\beta T^n$ with $n\neq1$, where $\beta$ is an arbitrary small constant that quantifies the departure of this theory from GR.
This approach leads to a set of field equations which are second-order nonlinear differential equations. To obtain an analytic form of the solutions, we assume that the metric coefficients for the interior and exterior regions of the star are continuous over the boundary. This amounts to the smooth matching of the general spherical interior geometry with the Schwarzschild exterior geometry, because in both regions the metric functions are continuous up to first-order derivatives (GR and $f(T)$ are second-order derivative theories). This matching allows us to use the observational data of the stars to calculate the arbitrary constants appearing in the system due to the parameterization of the metric functions. We have determined the explicit form of the matter density and pressure components, which are regular inside the stars, as shown in figures (\textbf{1-3}). The behavior of the EoS parameter implies that near the boundary the matter inside the stars behaves as quintessence in the presence of $f(T)$ terms (see figure \textbf{6}). The anisotropy parameter vanishes at the center, $\Delta=0$, and becomes $\Delta>0$ everywhere else inside the star, which predicts that there may exist a repulsive force leading to the formation of a more massive star. The physical viability of the solutions lies in the fact that they satisfy the energy conditions (Figure \textbf{9}). The analysis of the TOV equation presented in this paper shows that static equilibrium can be achieved for strange stars in $f(T)$ gravity, as shown in figure \textbf{10}. The stability of the proposed model is examined in figures \textbf{11-13}; the physical investigation of these figures predicts that our proposed strange star model is stable for Her X-1 and unstable for both SAX J 1808.4-3658 and 4U 1820-30. The maximum redshift of Her X-1 turns out to be 0.025, as shown in Figure \textbf{14}.
\renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix A} \begin{eqnarray} \nonumber&& \frac{\partial\rho}{\partial r}=\frac{n2^{n-1}\beta}{r^{2n+1}}\left\{(e^{-Ar^2})^n\left(e^{\frac{Ar^2}{2}}-1\right)^{n-2} \left(e^{\frac{Ar^2}{2}}-1-2Br^2\right)^{n-3}\right\}\left[7Ar^2-e^{\frac{5Ar^2}{2}}\right. \\\nonumber&&\left.+1-10Br^2+18ABr^4+8B^2r^4+12AB^2r^6+8AB^3r^8+e^{2Ar^2}(5+4n^2-2Ar^2+ 6Br^2\right.\\\nonumber&&\left. +2n(-3+Ar^2-Br^2))+4n^2(1+A(r^2+2Br^4))^2-2n(2B^2r^4-7Br^2+2\right. \\\nonumber&&\left.+3A^2r^4(1+2Br^2)^2+Ar^2(7 +15Br^2+4B^2r^4+4B^3r^6))-e^{\frac{3Ar^2}{2}}(10+4Br^2\right. \\\nonumber&&\left.+8B^2r^4-2A^2r^4(3+2Br^2)+Ar^2(3+16Br^2+2B^2r^4) +8n^2(2+A(r^2+Br^4))\right. \\\nonumber&&\left.+2n(2Br^2-11-2B^2r^4-Ar^2(5+12Br^2+B^2r^4)+2A^2r^4(1+Br^2)))+e^{Ar^2}\right. \\\nonumber&&\left.\times(10-20Br^2+16B^2r^4-4A^2r^4(3+5Br^2)+Ar^2(56Br^2+16B^2r^4+4B^3r^6 \right. \\\nonumber&&\left.+19)+4n^2 (6+r^4(A+ABr^2)^2+A(6r^2+8Br^4))-2 n (15 - 14 B r^2+ 6 B^2 r^4 \right. \\\nonumber&&\left. +A^2 r^4(-1- 2 B r^2 + 2 B^2 r^4) + 2 A r^2 (10 + 21 B r^2 + 4 B^2 r^4+ B^3 r^6))) + e^{\frac{Ar^2}{2}}(-5\right. \\\nonumber&&\left.+ 28 B r^2 - 16 B^2 r^4 +2 A^2 r^4 (3 + 8 B r^2 + 4 B^2 r^4) - A r^2 (58 B r^2 + 30 B^2 r^4 + 12 B^3 r^6\right. \\\nonumber&&\left.+21) - 8 n^2 (2 + A r^2 (3 + 5 B r^2)+ A^2 (r^4 + 3 B r^6 + 2 B^2 r^8)) +2 n (-18 B r^2+9+\right. \\\label{01}&&\left. 6 B^2 r^4 + A r^2 (21 + 45 B r^2 + 13 B^2 r^4 + 6 B^3 r^6) + 4 A^2 (r^4 + 3 B r^6 + 2 B^2 r^8)))\right], \end{eqnarray} \begin{eqnarray}\nonumber &&\frac{\partial p_r}{\partial r}=\frac{n2^{n-1}\beta}{r^{2n+1}}\left\{(e^{-Ar^2})^n(\left(e^{\frac{Ar^2}{2}}-1\right) \left(e^{\frac{Ar^2}{2}}-1-2Br^2\right))^{n-2}\right\}\left[ 2 e^{\frac{3 A r^2}{2}}(-2 + n)\right. \\\nonumber&\times&\left.(1 + B r^2)+e^{2 A r^2}-(2n-1) (1 + 2 B r^2) (1 + A (r^2 + 2 B r^4)) + e^{A r^2} (6 + 10 B r^2 \right. \\\nonumber&+&\left.
A (r^2 + 2 B r^4 + 2 B^2 r^6) - 2 n (3 + 4 B r^2 + A (r + B r^3)^2)) + 2e^{\frac{ A r^2}{2}}(-(1 + 2 B r^2) (2\right. \\\label{02}&+&\left.A (r^2 + B r^4)) + n (3 + 5 B r^2 + 2 A (r^2 + 3 B r^4 + 2 B^2 r^6)))\right], \end{eqnarray} \begin{eqnarray} \nonumber&& \frac{\partial p_t}{\partial r}=\frac{-n2^{n-1}\beta}{r^{2n+1}}\left\{(e^{-Ar^2})^n(\left(e^{\frac{Ar^2}{2}}-1\right) \left(e^{\frac{Ar^2}{2}}-1-2Br^2\right))^{n-3}\right\}\left[e^{3Ar^2}(n-1)-1\right. \\\nonumber&&\left.-2 A r^2-10 AB r^4-6 B^2 r^4 - 18A B^2 r^6-6 B^3 r^6-12 A B^3 r^8-4 B^4 r^8+e^{\frac{5Ar^2}{2}}(n-1)\right. \\\nonumber&&\left.\times(2n-6-B r^2(4+A r^2))- 2n^2 (1+B r^2)(1 + A (r^2 + 2Br^4))^2 +n(2 B r^2+5 B^2 r^4\right. \\\nonumber&&\left.+3+ 2 B^3 r^6 +r^4 (1 + B r^2) (A + 2 A B r^2)^2 +A r^2 (6 + 25 B r^2+ 39 B^2 r^4 + 28 B^3 r^6 \right.\\\nonumber&&\left.+ 4 B^4 r^8)) - e^{2 A r^2}(15 + 16 B r^2 + 6 B^2 r^4 + A r^2 (2 + 10 B r^2 + 3 B^2 r^4)- A^2r^4(1+B r^2)\right. \\\nonumber&&\left. +2 n^2 (5 + B r^2 + 2 A (r^2 + B r^4))+n (-25 - 18 B r^2 - 5 B^2 r^4 - A r^2 (17 B r^2 + 4 B^2 r^4\right. \\\nonumber&&\left.+6) + 2 A^2 (r^4 + B r^6))) + e^{\frac{3Ar^2}{2}}(20 + 24 B r^2 + 24 B^2 r^4 + 6 B^3 r^6 - 3 A^2 (r^4 + 2 B r^6)\right. \\\nonumber&&\left. + A r^2 (8 + 34 B r^2 + 15 B^2 r^4 + B^3 r^6) + 2 n^2 (10 + 4 B r^2 + r^4 (A + A B r^2)^2 + 2 A r^2 (4\right. \\\nonumber&&\left. + 6 B r^2 + B^2 r^4)) + n (A^2 r^4 (5 + 10 B r^2 + 2 B^2 r^4) - 2 (20 + 16 B r^2 + 10 B^2 r^4 + B^3 r^6)\right. \\\nonumber&&\left. -A r^2 (24 + 70 B r^2 + 35 B^2 r^4 + 5 B^3 r^6))) - e^{A r^2} (16 B r^2 + 36 B^2 r^4 + 18 B^3 r^6 + 4 B^4 r^8\right. \\\nonumber&&\left.+15 - 3 A^2 (r^4 + 3 B r^6 + 2 B^2 r^8) +A r^2 (12 + 52 B r^2 + 39 B^2 r^4 + 6 B^3 r^6 - 2 B^4 r^8)\right. \\\nonumber&&\left. +2 n^2 (10 + 6 B r^2 + 4 A r^2 (3 + 6 B r^2 + 2 B^2 r^4) + A^2 r^4 (3 + 9 B r^2 + 7 B^2 r^4 + B^3 r^6))\right. \\\nonumber&&\left. 
+n (-35 - 28 B r^2 - 30 B^2 r^4 - 6 B^3 r^6 + A^2 r^4 (3 + 9 B r^2 + 8 B^2 r^4 + 2 B^3 r^6) -A r^2 (36\right. \\\nonumber&&\left. + 118 B r^2 + 97 B^2 r^4 + 30 B^3 r^6 +2 B^4 r^8))) + e^{\frac{A r^2}{2}} (4 B r^2 + 24 B^2 r^4 + 18 B^3 r^6 + 8 B^4 r^8\right. \\\nonumber&&\left.+6 + A r^2 (8 + 37 B r^2 + 45 B^2 r^4 + 17 B^3 r^6 - 2 B^4 r^8) - A^2r^4(4 B r^2 + 6 B^2 r^4 + 4 B^3 r^6\right. \\\nonumber&&\left.+1) +2 n^2 (5 + 4 B r^2 + 2 A r^2 (4 + 10 B r^2 + 5 B^2 r^4) + A^2 r^4 (12 B r^2 + 14 B^2 r^4 + 4 B^3 r^6\right. \\\nonumber&&\left.+3)) + n (-2 (8 + 6 B r^2 + 10 B^2 r^4 + 3 B^3 r^6) + A^2 r^4 (-1 - 4 B r^2 - 2 B^2 r^4 + 4 B^3 r^6)\right. \\\label{03}&&\left. - A r^2 (24 + 89 B r^2 + 105 B^2 r^4 + 53 B^3 r^6 + 6 B^4 r^8)))\right] \end{eqnarray} \vspace{.5cm}
\section{Introduction} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figures/fig_intro_v3.png} \caption{Example shape and pose predictions from a group of input images. Probabilistic shape combination results in a more accurate body shape estimate than both individual single-image predictions (visualised here from SPIN \cite{kolotouros2019spin} and STRAPS \cite{STRAPS2020BMVC}) and naively-averaged single-image predictions, as our experiments show in Section \ref{sec:experimental_results}.} \label{fig:intro} \vspace{-0.4cm} \end{figure} 3D human body shape and pose estimation from RGB images is a challenging problem with potential applications in augmented and virtual reality, healthcare and fitness technology and virtual retail. Recent solutions have focused on three types of inputs: i) single images \cite{Bogo:ECCV:2016, tan2017,hmrKanazawa17,kolotouros2019spin,kolotouros2019cmr,zhang2019danet,Moon_2020_ECCV_I2L-MeshNet,STRAPS2020BMVC,omran2018nbf,pavlakos2018humanshape,varol18_bodynet}, ii) video \cite{kocabas2019vibe,humanMotionKanazawa19,sun2019dsd-satn, pavlakos2019texturepose, MuVS:3DV:2017} with temporal constraints on pose, camera viewpoint and background conditions and iii) multi-view images \cite{liang2019samv,smith20193dfromsilhouettes} with a fixed subject pose captured from multiple viewpoints. In contrast, we aim to estimate 3D body shape and pose from a group of images of the same human subject without any constraints on the subject's pose, camera viewpoint or background conditions between the images, as illustrated in Figure \ref{fig:intro}. This task is motivated by the intuition that multiple images of the same subject should contain additional visual information about their body shape compared to a single image, regardless of whether the subject's pose or surrounding environment change between images. 
A suitable shape and pose estimator should leverage this information to improve shape prediction accuracy over single-image methods. We present a probabilistic body shape and pose estimation method from a group of unconstrained images of the same subject. Inference occurs in three stages (see Figure \ref{fig:method}). First, we predict a proxy representation from each input image in the group, consisting of the subject's silhouette and 2D joint location heatmaps, using off-the-shelf segmentation and 2D keypoint detection CNNs \cite{he2017maskrcnn, Guler2018DensePose, kirillov2019pointrend, wu2019detectron2}. Then, each proxy representation is passed through a 3D distribution prediction network that outputs a probability distribution over SMPL \cite{SMPL:2015} body shape and pose parameters conditioned on the input representation. Lastly, body shape distributions from each input image are probabilistically combined to procure a final shape prediction. This yields a better estimate of the subject's body shape than current single-image body shape and pose estimators \cite{hmrKanazawa17,kolotouros2019spin,kolotouros2019cmr,zhang2019danet,STRAPS2020BMVC}, which may be inaccurate or inconsistent, as shown in Figure \ref{fig:intro}. Moreover, most single-image body model parameter regressors \cite{hmrKanazawa17,kolotouros2019spin,omran2018nbf,zhang2019danet,STRAPS2020BMVC,Xu_2019_ICCV, georgakis2020hkmr} do not consider the uncertainty associated with each pose parameter estimate. If certain body parts are occluded or out-of-frame in the input image, the estimator can only guess about the pose parameters corresponding to these body parts. Such situations further motivate our approach of predicting a distribution over body pose, since the variance of the distribution quantifies the uncertainty associated with each pose parameter prediction, as shown in Figures \ref{fig:synthtic_data_example_predictions} and \ref{fig:3dpw_uncertainty}. 
Training body model parameter regressors to accurately predict body shape is challenging due to the lack of suitable training datasets of in-the-wild images paired with \textit{accurate and diverse} body shape labels. Collecting such data is practically difficult, particularly for our proposed task of shape estimation from a group of unconstrained images. Recent works \cite{STRAPS2020BMVC, smith20193dfromsilhouettes, varol17_surreal} propose using synthetic input-label pairs to overcome the lack of suitable training datasets. We adopt the same synthetic training approach as STRAPS \cite{STRAPS2020BMVC} to train our 3D distribution prediction network, but extend the data augmentations used to bridge the gap between synthetic and real inputs. In particular, our synthetic training data better models occluded and out-of-frame body parts in silhouettes and joints such that the domain gap to real occluded data is smaller. This allows our method to estimate pose prediction uncertainty and also results in improved single-input pose prediction metrics on challenging evaluation datasets, such as 3DPW \cite{vonMarcard2018}. In summary, our main contributions are as follows: \begin{itemize} \item We propose a novel task: predicting body shape from a group of images of the same human subject, without imposing any constraints on subject pose, camera viewpoint or backgrounds between the images. \item We present a solution to the proposed task which predicts a distribution over 3D human body shape and pose parameters conditioned on the input images in the group. Body shape distributions from each image are probabilistically combined to yield a final body shape estimate which leverages multi-image shape information, resulting in a more accurate body shape estimate compared to single-input methods. 
\item To the best of our knowledge, our method is the first to output uncertainties alongside associated SMPL \cite{SMPL:2015} shape and pose parameter predictions, which are shown to be useful when input images contain occluded or out-of-frame body parts. \item We extend the synthetic training framework introduced by \cite{STRAPS2020BMVC} to better model occlusion and missing body parts, allowing our synthetically-trained distribution prediction neural network to yield better 3D shape and pose metrics. \end{itemize} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{figures/fig_method_v3.png} \caption{Overview of our shape and pose distribution prediction network. Each image $\mathbf{I}_n$ in the input group is converted into a silhouette and joint proxy representation $\mathbf{X}_n$, which is passed through a distribution prediction network to obtain multivariate distributions over SMPL \cite{SMPL:2015} shape and pose parameters, $\boldsymbol{\beta}$ and $\boldsymbol{\theta}_n$, conditioned on the input. Shape distributions from each individual input are probabilistically combined to form a multi-input shape distribution. The encoder and distribution MLP are trained using randomly-generated synthetic data \cite{STRAPS2020BMVC}. The per-vertex uncertainty visualisations (in cm) are obtained by sampling SMPL parameters from the predicted distributions, computing the SMPL vertex mesh for each sample and determining the average Euclidean distance from the mean for each vertex. Black dots indicate left hands.} \label{fig:method} \vspace{-0.3cm} \end{figure*} \vspace{-0.3cm} \section{Related Work} This section discusses recent approaches to 3D human shape and pose estimation from single images, multi-view images and video. \noindent \textbf{Single-image shape and pose estimation} methods can be classified into 2 categories: optimisation-based and learning-based. 
Optimisation-based methods fit a parametric 3D body model \cite{SMPL:2015, SMPL-X:2019, Anguelov05scape:shape, Joo_2018_CVPR_total_capture} to 2D observations (e.g. 2D joints \cite{Bogo:ECCV:2016, SMPL-X:2019}, surface landmarks \cite{Lassner:UP:2017}, silhouettes \cite{Lassner:UP:2017} or part segmentations \cite{Zanfir_2018_CVPR}) via optimisation. They can accurately estimate 3D poses without requiring expensive 3D-labelled datasets, however they are susceptible to poor initialisation and tend to be slow at test-time. Learning-based methods can be further classified as model-free or model-based. Model-free approaches directly predict a 3D body representation from an image, such as a voxel occupancy grid \cite{varol18_bodynet}, vertex mesh \cite{kolotouros2019cmr, Moon_2020_ECCV_I2L-MeshNet, Zeng_2020_CVPR_mesh_dense} or implicit surface representation \cite{saito2020pifuhd}. Model-based approaches \cite{hmrKanazawa17, zhang2019danet, omran2018nbf, pavlakos2018humanshape, tan2017, georgakis2020hkmr, Guler_2019_CVPR_holopose} predict the parameters of a 3D body model \cite{SMPL:2015, SMPL-X:2019, Anguelov05scape:shape, Joo_2018_CVPR_total_capture}, which provides a useful prior over human body shape. Several methods \cite{hmrKanazawa17, kolotouros2019cmr, Guler_2019_CVPR_holopose, Xu_2019_ICCV} overcome the scarcity of in-the-wild 3D-labelled training data by incorporating weak supervision with datasets of labelled 2D keypoints. \cite{kolotouros2019spin} extends this further by integrating optimisation into the model training loop to lift 2D keypoint labels to self-improving 3D pose and shape labels. Such approaches predict impressive 3D poses but fail to predict accurate body shapes (particularly for non-average humans) since 2D keypoints do not densely inform shape. Recently, \cite{STRAPS2020BMVC} used random synthetic training data to overcome data scarcity and demonstrated improved shape predictions. 
\noindent \textbf{Video shape and pose estimation} methods may be classified similarly to their single-image counterparts. Optimisation-based video methods \cite{Arnab_CVPR_2019, MuVS:3DV:2017, alldieck2017optical} extend single-image optimisation over time, while learning-based video methods \cite{humanMotionKanazawa19, kocabas2019vibe, sun2019dsd-satn, NIPS2017_7108, pavlakos2019texturepose} modify single-image predictors to take sequences of frames as inputs. However, video inputs allow these methods to enforce consistent body shapes and smooth motions across frames, e.g. using motion discriminators \cite{movi2020}, optical flow \cite{alldieck2017optical, NIPS2017_7108}, or texture consistency \cite{pavlakos2019texturepose}. Learning-based video methods also overcome 3D data scarcity by incorporating weak 2D supervision \cite{kocabas2019vibe, humanMotionKanazawa19}, or with self-supervision enforcing visual consistency between frames \cite{NIPS2017_7108, pavlakos2019texturepose, alldieck2017optical}. Nevertheless, current methods are unable to predict accurate body shapes, particularly for non-average humans. \noindent \textbf{Multi-view shape and pose estimation.} \cite{liang2019samv} extends the iterative regressor of \cite{hmrKanazawa17} to predict body model parameters from multiple input images of the same subject in a fixed pose, captured from varying camera angles. They use synthetic data to overcome data scarcity, resulting in more accurate body shape estimates, particularly under clothing. \cite{smith20193dfromsilhouettes} uses synthetic data to learn to predict body model parameters from A-pose silhouettes. Contrary to the above approaches, our method estimates shape and pose from a group of images without \textit{any} temporal or absolute constraints on the subject's pose, camera viewpoint or background between images. 
\noindent \textbf{Shape and pose distribution estimation.} While substantial progress has been made in predicting probability distributions using neural networks \cite{baum1988superviseddistribution, nix1994icnn, Bishop94mixturedensity, kendall2017whatuncertainties, rupprecht2017learning, prokudin2018deepdirectstat, li2019posemdn, mohlin2020matrixfisher}, prediction of distributions over 3D human shape \textit{and} pose is still under-explored. Recently, \cite{Moon_2020_ECCV_I2L-MeshNet} used lixel-based 1D heatmaps to quantify uncertainty in predicted 3D human mesh vertex locations. \cite{biggs2020multibodies} predicted a categorical distribution over multiple SMPL hypotheses given an ambiguous image. In contrast, we aim to explicitly output separable uncertainties per predicted pose and shape parameter, since the shape uncertainties, specifically, are used for shape prediction from multiple unconstrained images. \section{Method} This section provides a brief overview of the SMPL parametric human body model \cite{SMPL:2015}, presents our three-stage method for probabilistic body shape and pose estimation from a group of unconstrained images of the same human subject (illustrated in Figure \ref{fig:method}) and finally discusses the synthetic training framework and loss functions used. \subsection{SMPL model} SMPL \cite{SMPL:2015} provides a differentiable function $\mathcal{M}(\boldsymbol{\theta}, \boldsymbol{\beta}, \boldsymbol{\gamma})$ which takes pose parameters $\boldsymbol{\theta}$, global rotation $\boldsymbol{\gamma}$ and identity-dependent body shape parameters $\boldsymbol{\beta}$ as inputs and outputs a vertex mesh $\mathbf{V} \in \mathbb{R}^{6890 \times 3}$. $\boldsymbol{\theta} \in \mathbb{R}^{69}$ and $\boldsymbol{\gamma} \in \mathbb{R}^3$ represent axis-angle rotation vectors for 23 SMPL body joints and the root joint respectively. $\boldsymbol{\beta} \in \mathbb{R}^{10}$ represents coefficients of a PCA body shape basis. 
Given the vertex mesh $\mathbf{V}$, 3D joint locations may be obtained using a linear regressor, $\mathbf{J}^{\text{3D}} = \mathcal{J}\mathbf{V}$ where $\mathcal{J} \in \mathbb{R}^{L \times 6890}$ is a regression matrix for $L$ joints of interest. \subsection{Proxy representation computation} Given a group of $N$ RGB input images $\{\mathbf{I}_n\}_{n=1}^N$ of the same subject, we first compute proxy representations $\{\mathbf{X}_n\}_{n=1}^N$. DensePose \cite{Guler2018DensePose} is used to obtain body part segmentations, which are converted into silhouettes. Keypoint-RCNN from Detectron2 \cite{he2017maskrcnn, wu2019detectron2} is used to obtain 2D joint locations, which are converted into Gaussian heatmaps, and associated confidence scores. Heatmaps corresponding to joint detections with confidence scores less than a threshold $t=0.025$ are set to 0. Thresholding is essential for modelling uncertainty in 3D pose predictions as it typically removes invisible 2D joints from the input representations. The predicted silhouette and joint heatmaps from each image are stacked along the channel dimension to form each proxy representation $\mathbf{X}_n \in \mathbb{R}^{H \times W \times (L+1)}$. The use of silhouette and joint heatmap representations as inputs instead of RGB images is inspired by \cite{pavlakos2018humanshape, STRAPS2020BMVC} and allows us to train our distribution prediction network using a simple synthetic training framework (see Section \ref{subsec:network_training}), overcoming the lack of shape diversity in current datasets. We follow \cite{STRAPS2020BMVC} and use simple silhouettes and 2D joint heatmaps as our proxy representation, instead of more complex alternatives (e.g. part segmentations or IUV maps), since this leads to a smaller synthetic-to-real domain gap which is more readily bridged by data augmentation \cite{STRAPS2020BMVC}. 
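The proxy computation above amounts to a few array operations. Below is a minimal sketch, assuming a hypothetical image size, joint count and Gaussian width (`sigma` is an illustrative choice), with the confidence threshold $t=0.025$ from the text.

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=4.0):
    """2D Gaussian bump centred on a detected joint at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def build_proxy(silhouette, joints2d, confidences, t=0.025, sigma=4.0):
    """Stack silhouette + L joint heatmaps into X in R^{H x W x (L+1)}.
    Heatmaps for detections with confidence below t are zeroed out."""
    h, w = silhouette.shape
    maps = [silhouette.astype(np.float32)]
    for (cx, cy), conf in zip(joints2d, confidences):
        if conf < t:
            maps.append(np.zeros((h, w), dtype=np.float32))  # invisible joint
        else:
            maps.append(gaussian_heatmap(h, w, cx, cy, sigma).astype(np.float32))
    return np.stack(maps, axis=-1)

# Tiny example: 64x64 inputs, L = 2 joints, one below the threshold
sil = np.zeros((64, 64)); sil[16:48, 24:40] = 1.0
X = build_proxy(sil, joints2d=[(32, 20), (30, 50)], confidences=[0.9, 0.01])
print(X.shape)   # (64, 64, 3)
```

Zeroing low-confidence channels is what lets the downstream network associate missing 2D evidence with high predicted pose variance.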
\subsection{Body shape and pose distribution prediction} We aim to estimate probability distributions \cite{nix1994icnn} over SMPL pose parameters $\{\boldsymbol{\theta}_n\}_{n=1}^N$ (which are free to change between inputs) and the subject's identity-dependent shape $\boldsymbol{\beta}$, both conditional upon $\{\mathbf{X}_n\}_{n=1}^N$. We assume simple multivariate Gaussian distributions \begin{equation} \begin{aligned} p(\boldsymbol{\theta}_n | {\mathbf{X}_n}) & = \mathcal{N}(\boldsymbol{\theta}_n ; \boldsymbol{\mu}_\theta({\mathbf{X}_n}), \boldsymbol{\Sigma}_\theta ({\mathbf{X}_n}))\\ p(\boldsymbol{\beta} | {\mathbf{X}_n}) & = \mathcal{N}(\boldsymbol{\beta} ; \boldsymbol{\mu}_\beta({\mathbf{X}_n}), \boldsymbol{\Sigma}_\beta ({\mathbf{X}_n})). \end{aligned} \label{eqn:pose_shape_distributions} \vspace{-0.1cm} \end{equation} Covariance matrices are constrained to be diagonal, i.e. $\boldsymbol{\Sigma}_\theta (\mathbf{X}_n) = \text{diag}(\boldsymbol{\sigma}^2_\theta (\mathbf{X}_n))$ and $\boldsymbol{\Sigma}_\beta (\mathbf{X}_n) = \text{diag}(\boldsymbol{\sigma}^2_\beta(\mathbf{X}_n))$. Formally, $\boldsymbol{\sigma}^2_\theta(\mathbf{X}_n)$ and $\boldsymbol{\sigma}^2_\beta(\mathbf{X}_n)$ represent estimates of the heteroscedastic aleatoric uncertainty \cite{derkiureghian2009aleatoric_epistemic, kendall2017whatuncertainties} in the SMPL parameters explaining the input observations $\mathbf{X}_n$, which arises particularly due to occlusion. We also predict deterministic estimates of the global rotations $\{\boldsymbol{\gamma}_n\}_{n=1}^N$ and weak-perspective camera parameters $\{\mathbf{c}_n\}_{n=1}^N$, where $\mathbf{c}_n = [s_n, t^x_n, t^y_n]$ representing scale and $xy$ translation respectively. Global rotation and camera parameters are unconstrained across images. 
Hence, we require a function mapping each input proxy representation $\mathbf{X}_n$ to the desired set of outputs $\mathbf{Y}(\mathbf{X}_n) = \{\boldsymbol{\mu}_\theta, \boldsymbol{\mu}_\beta, \boldsymbol{\sigma}^2_\theta, \boldsymbol{\sigma}^2_\beta, \boldsymbol{\gamma}_n, \mathbf{c}_n\}$. This function is represented using a deep neural network $f$ with learnable weights $\mathbf{W}$: \begin{equation} \mathbf{Y} = f(\textbf{X}_n; \mathbf{W}). \label{eqn:neural_network} \vspace{-0.1cm} \end{equation} $f$ consists of a convolutional encoder for feature extraction followed by a simple multi-layer perceptron that predicts the set of outputs $\mathbf{Y}$, as illustrated in Figure \ref{fig:method}. The network training procedure is detailed in Section \ref{subsec:network_training}. \subsection{Body shape combination} \label{subsec:body_shape_combination} We combine the conditional body shape distributions output by $f$ given each individual input, $p(\boldsymbol{\beta} | \mathbf{X}_n)$ for $n = 1, ..., N$, into a final distribution $p(\boldsymbol{\beta} | \{\mathbf{X}_n\}_{n=1}^N)$ that aggregates shape information across the input group. Formally, \begin{equation} p(\boldsymbol{\beta} | \{\mathbf{X}_n\}_{n=1}^N) \propto \prod_{n=1}^N p(\boldsymbol{\beta} | \mathbf{X}_n) \label{eqn:probabilistic_shape_combination} \vspace{-0.1cm} \end{equation} which follows from the conditional independence assumption $(\mathbf{X}_i \perp \!\!\! \perp \mathbf{X}_j) | \boldsymbol{\beta}$ for $i,j \in \{1,...,N\}$ and $i \neq j$. This is justifiable since we do not impose any relationship between the subject's pose or camera viewpoint across inputs - only the body shape is fixed. Further details are in the supp. material. 
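Because every per-image covariance is diagonal, the normalised product in Eqn \ref{eqn:probabilistic_shape_combination} reduces to precision-weighted averaging and takes only a few lines of NumPy. The numbers below are hypothetical per-image predictions for illustration.

```python
import numpy as np

def combine_shape_distributions(mus, vars_):
    """Precision-weighted product of N diagonal Gaussians over beta.
    mus, vars_: arrays of shape (N, 10) of per-image means/variances."""
    precisions = 1.0 / vars_                  # diagonal Sigma_n^{-1}
    S = 1.0 / precisions.sum(axis=0)          # combined (diagonal) variance
    m = S * (precisions * mus).sum(axis=0)    # combined mean
    return m, S

# Toy per-image predictions for a group of 3 inputs (hypothetical numbers)
mus = np.array([[0.8] * 10, [1.2] * 10, [1.0] * 10])
vars_ = np.array([[0.1] * 10, [0.1] * 10, [0.4] * 10])
m, S = combine_shape_distributions(mus, vars_)
# Confident (low-variance) inputs dominate m, and the combined variance S
# is smaller than any individual one
```

The combined mean is the final point estimate of body shape; inputs in which the body is heavily occluded contribute little, since their predicted variances are large.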
Since the product of Gaussians is an un-normalised Gaussian, $p(\boldsymbol{\beta} | \{\mathbf{X}_n\}_{n=1}^N) \propto \mathcal{N}(\boldsymbol{\beta}; \mathbf{m}, \mathbf{S})$ where \begin{equation} \begin{aligned} \mathbf{S} &= \bigg(\sum_{n=1}^N \boldsymbol{\Sigma}^{-1}_\beta ({\mathbf{X}_n})\bigg)^{-1}\\ \mathbf{m} &= \mathbf{S}\bigg(\sum_{n=1}^N \boldsymbol{\Sigma}^{-1}_\beta({\mathbf{X}_n})\boldsymbol{\mu}_\beta({\mathbf{X}_n})\bigg). \end{aligned} \label{eqn:combined_shape_mean_var} \vspace{-0.2cm} \end{equation} The combined mean $\mathbf{m}$ is a final point estimate of the subject's body shape from the input group $\{\mathbf{X}_n\}_{n=1}^N$. \vspace{-0.05cm} \subsection{Network training} \label{subsec:network_training} \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/fig_synthetic_data_examples_v1.png} \caption{Clean and corrupted versions of an example group of inputs from our synthetic evaluation dataset, and corresponding (single-image) shape and pose distribution predictions. Black dots indicate left hands.} \label{fig:synthtic_data_example_predictions} \vspace{-0.3cm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.82\linewidth]{figures/fig_3dpw_uncertainty_v5.png} \caption{Predictions on single images from 3DPW \cite{vonMarcard2018}. 3D locations of invisible parts are uncertain due to large predicted variances for the corresponding pose parameters.} \label{fig:3dpw_uncertainty} \vspace{-0.4cm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figures/fig_ssp3d_predictions_v3.png} \caption{Example predictions on groups of images from SSP-3D \cite{STRAPS2020BMVC}. 
Single-image predictions from SPIN \cite{kolotouros2019spin}, STRAPS \cite{STRAPS2020BMVC} and our method are visualised, along with probabilistically combined body shapes from our method.} \label{fig:ssp3d_predictions} \vspace{-0.4cm} \end{figure} \noindent \textbf{Loss functions.} While inference occurs using a group of inputs, the model is trained with a dataset of \textit{individual} input-label pairs, $\{\mathbf{X}_k, \{\boldsymbol{\theta}_k, \boldsymbol{\beta}_k, \boldsymbol{\gamma}_k\}\}_{k=1}^K$, with $K$ i.i.d training samples. The negative log-likelihood is given by \begin{equation} \begin{aligned} \mathcal{L}_{\text{NLL}} &= - \sum_{k=1}^K \bigg( \log p(\boldsymbol{\theta}_k |\mathbf{X}_k) + \log p(\boldsymbol{\beta}_k |\mathbf{X}_k) \bigg)\\ & \propto \sum_{k=1}^K \bigg( \sum_{i=1}^{69} \log(2\pi \sigma^2_{\theta_i}) + \frac{(\theta_{k_i} - \mu_{\theta_i})^2}{\sigma^2_{\theta_i}} \\& \qquad \;\; + \sum_{j=1}^{10} \log(2\pi \sigma^2_{\beta_j}) + \frac{(\beta_{k_j} - \mu_{\beta_j})^2}{\sigma^2_{\beta_j}} \bigg)\\ \end{aligned} \label{eqn:log_likelihood_loss} \end{equation} where $\mu_{\theta_i}$, $\mu_{\beta_i}$, $\sigma^2_{\theta_i}$ and $\sigma^2_{\beta_i}$ represent elements of the predicted SMPL mean and variance vectors $\boldsymbol{\mu}_\theta (\mathbf{X}_k, \mathbf{W})$, $\boldsymbol{\mu}_\beta (\mathbf{X}_k, \mathbf{W})$, $\boldsymbol{\sigma}^2_\theta (\mathbf{X}_k, \mathbf{W})$ and $\boldsymbol{\sigma}^2_\beta (\mathbf{X}_k, \mathbf{W})$, which are output by the neural network $f$ with weights $\mathbf{W}$. We maximise the log-likelihood of the model w.r.t $\mathbf{W}$ by minimising the loss function $\mathcal{L}_{\text{NLL}}$. Intuitively, each squared error term in Eqn. \ref{eqn:log_likelihood_loss} is adaptively-weighted by the corresponding predicted variance \cite{kendall2017whatuncertainties}. 
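The adaptive weighting of Eqn. \ref{eqn:log_likelihood_loss} can be made concrete with a small NumPy sketch; this is a hypothetical stand-in for illustration, not the actual training code:

```python
import numpy as np

def gaussian_nll(targets, means, variances):
    """Negative log-likelihood for a diagonal Gaussian, in the form
    sum_i log(2*pi*sigma_i^2) + (x_i - mu_i)^2 / sigma_i^2.

    Each squared error term is divided by the predicted variance, so
    parameters the network is uncertain about (e.g. those of occluded
    joints) are down-weighted, at the cost of the log-variance penalty.
    """
    targets, means, variances = map(np.asarray, (targets, means, variances))
    return np.sum(np.log(2.0 * np.pi * variances)
                  + (targets - means) ** 2 / variances)
```

For a fixed prediction error, the loss is lower when the network reports a large variance for that parameter, which is exactly the down-weighting behaviour exploited during training.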
This mitigates the ill-posed nature of a naive squared error loss on SMPL parameters when training inputs are occluded, since the network learns to predict large variances for the parameters corresponding to invisible body parts, thus down-weighting the respective squared error terms. Furthermore, adaptive weighting means that our network is able to train stably without additional ``global'' losses on 3D vertices or 3D joints, as is common in most other recent methods \cite{STRAPS2020BMVC,pavlakos2018humanshape,kolotouros2019spin,hmrKanazawa17,zhang2019danet}. Our network also predicts deterministic estimates of the global rotations $\boldsymbol{\gamma}_k$. Predictions $\boldsymbol{\hat{\gamma}}_k$ are supervised by \begin{equation} \mathcal{L}_{\text{glob}} = \sum_{k=1}^K \| \mathbf{R}(\boldsymbol{\gamma}_k) - \mathbf{R}( \boldsymbol{\hat{\gamma}}_k) \|_{F}^2. \label{eqn:glob_loss} \end{equation} $\mathbf{R}(\boldsymbol{\gamma}) \in SO(3)$ is the rotation matrix corresponding to $\boldsymbol{\gamma}$. Finally, our network estimates weak-perspective camera parameters $\mathbf{c}_k = [s_k, t^x_k, t^y_k]$, which are supervised using a 2D joint reprojection loss. Target 2D joint coordinates $\mathbf{J}_{k} \in \mathbb{R}^{L \times 2}$ are computed from $\{\boldsymbol{\theta}_k, \boldsymbol{\beta}_k,\boldsymbol{\gamma}_k\}$ during synthetic data generation (see Section \ref{subsec:network_training}). Predicted 2D joint coordinates are obtained by first differentiably sampling $\boldsymbol{\hat{\theta}}_k^i \sim p(\boldsymbol{\theta}_k |\mathbf{X}_k)$ and $\boldsymbol{\hat{\beta}}_k^i \sim p(\boldsymbol{\beta}_k |\mathbf{X}_k)$ using the re-parameterisation trick \cite{kingma2014autoencoding}, for $i=1,...,B$ samples. 
These are converted into 2D joint samples using the SMPL model and weak-perspective projection \begin{equation} \mathbf{\hat{J}}_k^i = s_k \Pi(\mathcal{J}\mathcal{M}(\boldsymbol{\hat{\theta}}_k^i, \boldsymbol{\hat{\beta}}_k^i, \boldsymbol{\hat{\gamma}}_k)) + [t^x_k, t^y_k] \end{equation} where $\Pi()$ represents an orthographic projection. A squared error reprojection loss is imposed between the predicted 2D joint samples and the target 2D joints \begin{equation} \mathcal{L}_\text{2D} = \sum_{k=1}^{K} \sum_{i=1}^{B} \|\boldsymbol{\omega}_k(\mathbf{J}_k - \mathbf{\hat{J}}_k^i)\|_2^2 \label{eqn:2d_loss} \end{equation} where $\boldsymbol{\omega}_k \in \{0,1\}^L$ denote the visibilities of the target joints (1 if visible, 0 otherwise), which are computed during synthetic data generation. We apply a reprojection loss on \textit{samples} from the predicted body shape and pose distributions, instead of only on the means of the distributions, because \textit{any} 3D body sampled from the distributions must match the 2D joint locations present in the input $\mathbf{X}_k$. Our overall loss function is given by $\mathcal{L} = \mathcal{L}_\text{NLL} + \lambda_\text{glob} \mathcal{L}_\text{glob} + \lambda_\text{2D} \mathcal{L}_\text{2D}$ where $\lambda_\text{glob}, \lambda_\text{2D}$ are weighting terms. \noindent \textbf{Synthetic data generation.} To train our distribution prediction network using the proposed losses, we require training data consisting of input proxy representations paired with target SMPL shape, pose and global rotation parameters, $\{\mathbf{X}_k, \{\boldsymbol{\theta}_k, \boldsymbol{\beta}_k, \boldsymbol{\gamma}_k\}\}_{k=1}^K$. We employ a similar synthetic training data generation process as \cite{STRAPS2020BMVC}. 
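Putting together the reparameterised sampling, weak-perspective projection and the visibility-masked loss of Eqn. \ref{eqn:2d_loss}, a NumPy sketch is given below. Pre-computed 3D joints stand in for the SMPL model output, and all names are ours; this is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_reparam(mu, sigma2, n_samples):
    """Reparameterisation trick: sample = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal((n_samples,) + np.shape(mu))
    return np.asarray(mu) + np.sqrt(sigma2) * eps

def weak_perspective_project(joints3d, s, t):
    """J_hat = s * Pi(J3d) + t, with Pi an orthographic projection (drop z)."""
    return s * joints3d[..., :2] + np.asarray(t)

def reprojection_loss(pred_joints2d, target_joints2d, visibility):
    """Visibility-masked squared error, summed over joints and samples."""
    diff = (target_joints2d - pred_joints2d) * visibility[:, None]
    return np.sum(diff ** 2)
```

Invisible target joints (visibility 0) contribute nothing to the loss, matching the masking by $\boldsymbol{\omega}_k$ above.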
In short, within each iteration of the training loop, $\boldsymbol{\theta}_k$ and $\boldsymbol{\gamma}_k$ are sampled from any suitable dataset with SMPL pose parameters \cite{h36m_pami,Lassner:UP:2017,AMASS:2019,vonMarcard2018}, while $\boldsymbol{\beta}_k$ are randomly sampled from a suitably high-variance Gaussian distribution to ensure body shape diversity. These are converted into synthetic silhouette and joint heatmap representations $\mathbf{X}_k$ and target 2D joint coordinates $\mathbf{J}_{k}$ using the SMPL model, a renderer \cite{kato2018renderer} and randomly sampled perspective camera parameters. The clean synthetic inputs are corrupted to model the failure modes of the off-the-shelf detection and segmentation CNNs used at test-time, such as noisy keypoint locations, and occluded silhouettes. Examples are given in Figure \ref{fig:synthtic_data_example_predictions}. We improve the data generation process of \cite{STRAPS2020BMVC} in two ways. First, we significantly increase the severity of the occlusion and cropping augmentations to match the occlusions seen in challenging test datasets such as 3DPW (illustrated in Figure \ref{fig:3dpw_uncertainty}, first row). Second, we explicitly compute a joint visibility vector $\boldsymbol{\omega}_k$ (1 if visible, 0 otherwise) for each $\mathbf{J}_k$ and set the heatmaps corresponding to invisible joints to 0, unlike \cite{STRAPS2020BMVC}. This is necessary for our distribution prediction network to learn to be uncertain about pose parameters corresponding to invisible body parts. 
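The joint-visibility masking and occlusion-box corruption described above can be sketched as follows. The box size (48 pixels) and probability (0.5) match the augmentation hyperparameters listed in the supplementary material; the array shapes are illustrative and the code itself is our own sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_invisible_heatmaps(heatmaps, visibility):
    """Zero the heatmap channels of invisible joints (omega_k = 0), so the
    network must learn uncertainty for the corresponding pose parameters."""
    return heatmaps * visibility[:, None, None]

def random_occlusion_box(silhouette, box_size=48, prob=0.5):
    """Cut a square box out of a binary silhouette, mimicking occluders."""
    if rng.random() > prob:
        return silhouette
    h, w = silhouette.shape
    y = rng.integers(0, max(h - box_size, 1))
    x = rng.integers(0, max(w - box_size, 1))
    out = silhouette.copy()
    out[y:y + box_size, x:x + box_size] = 0.0
    return out
```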
\begin{table}[t] \centering \small \renewcommand{\tabcolsep}{1pt} \begin{tabular}{l l c c} \hline \textbf{Input groups} & \textbf{Method} & \textbf{Synthetic} & \textbf{Synthetic}\\ & & \textbf{Clean} & \textbf{Corrupted}\\ & & PVE-T-SC & PVE-T-SC\\ [0.5ex] \hline \textbf{Single-Input} & Ours & 14.4 & 15.1\\ \hline \multirow{2}{0.3\linewidth}{\textbf{Quadruplets} Front + L/R Side + Back}\\ & Ours + Mean & 13.1 & 13.3\\ & Ours + PC & \textbf{13.0} & \textbf{12.8}\\ \hline \textbf{Pairs} \\ Front + L Side or & Ours + PC & 13.5 & 13.4\\ Back + R Side & & &\\ \hline \textbf{Pairs} \\ Front + Back or & Ours + PC & 13.6 & 13.8\\ L + R Side & & &\\ \hline \end{tabular} \caption{PVE-T-SC (mm) results on synthetic data (see Figure \ref{fig:synthtic_data_example_predictions}) investigating: i) probabilistic shape combination (PC) versus simple averaging (Mean), ii) effect of increasing input group size from 1 to 2 (Pairs) to 4 (Quadruplets) and iii) effect of global rotation variation within pairs of inputs.} \label{table:synth_ablation_comparison} \end{table} \section{Implementation Details} \noindent \textbf{Network Architecture.} We use a ResNet-18 \cite{He2015} encoder followed by a multi-layer perceptron (MLP) to predict SMPL parameter distributions. The MLP is comprised of one hidden layer with 512 neurons and ELU \cite{clevert2016elu} activation and one output layer with 164 neurons, which predicts the set of outputs $\mathbf{Y}$. Predicted variances are forced to be positive using an exponential activation function. \noindent \textbf{Training dataset.} Synthetic training data is generated by sampling SMPL pose parameters from the training sets of UP-3D \cite{Lassner:UP:2017}, 3DPW \cite{vonMarcard2018}, and Human3.6M \cite{h36m_pami} (Subjects 1, 5, 6, 7, 8). \noindent \textbf{Training details.} We use Adam \cite{kingma2014adam} with a learning rate of 1e-4 and a batch size of 120, and train for 100 epochs, which takes 1.5 days on a 2080Ti GPU. 
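For concreteness, the prediction head described above (512-unit ELU hidden layer, 164 outputs, exponential activation on variances) can be sketched as below. The $69+69+10+10+3+3 = 164$ output split is our reading of the architecture and should be treated as a hypothetical, since only the layer sizes are stated; this NumPy sketch is not the actual implementation:

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation used in the MLP hidden layer."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def mlp_head(features, w1, b1, w2, b2):
    """Map 512-dim encoder features to SMPL distribution parameters.

    Assumed output split (164 = 69+69+10+10+3+3): pose mean/variance,
    shape mean/variance, global rotation, weak-perspective camera.
    Variances pass through exp() so they are strictly positive.
    """
    h = elu(features @ w1 + b1)
    out = h @ w2 + b2
    mu_theta, raw_var_theta = out[:69], out[69:138]
    mu_beta, raw_var_beta = out[138:148], out[148:158]
    glob_rot, cam = out[158:161], out[161:164]
    return (mu_theta, np.exp(raw_var_theta),
            mu_beta, np.exp(raw_var_beta), glob_rot, cam)
```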
Inference runs at $\sim$4 fps; 90\% of the runtime is spent on silhouette and joint prediction \cite{Guler2018DensePose, wu2019detectron2}. \begin{table}[t!] \centering \small \begin{tabular}{c l c} \hline \multirow{2}{0.23\linewidth}{\textbf{Max. input group size}} & \textbf{Method} & \textbf{SSP-3D}\\ & & PVE-T-SC\\ [0.5ex] \hline 1 & Ours & 15.2\\ 2 & Ours + PC & 13.9\\ 3 & Ours + PC & 13.6\\ 4 & Ours + PC & 13.5\\ \hline 5 & Ours + Mean & 13.6\\ & Ours + PC & \textbf{13.3}\\ \hline \end{tabular} \caption{PVE-T-SC (mm) results on SSP-3D \cite{STRAPS2020BMVC} comparing i) probabilistic shape combination (PC) versus simple averaging (Mean) and ii) effect of increasing input group size from 1 to 5.} \vspace{-0.3cm} \label{table:ssp3d_ablation_comparison} \end{table} \noindent \textbf{Evaluation datasets.} We use the test set of 3DPW to evaluate pose prediction accuracy. It consists of 35515 RGB images of 7 subjects with paired ground-truth SMPL parameters. We report mean per joint position error after scale correction (MPJPE-SC) and after Procrustes analysis (MPJPE-PA). We use the scale correction technique introduced in \cite{STRAPS2020BMVC} to combat the ambiguity between subject scale and distance from camera. MPJPE-SC measures 3D joint error up to scale and MPJPE-PA measures 3D joint error up to scale and global rotation. We also report scale-corrected per-vertex Euclidean error in a T-pose (PVE-T-SC) on the SSP-3D dataset \cite{STRAPS2020BMVC} to evaluate identity-dependent shape prediction accuracy. SSP-3D consists of 311 images of 62 subjects and pseudo-ground-truth SMPL parameters. In addition, we evaluate body shape prediction accuracy on a private dataset consisting of 6 subjects (4 male, 2 female) with 4 RGB images of each and ground-truth body measurements obtained using a tape measure or full 3D body scanning technology. The subjects' body poses, clothing, surrounding environments and camera viewpoints vary between images. Example images are in the supplementary material. 
Finally, we create a synthetic dataset for our experimental analysis. It consists of 1000 synthetic humans with randomly sampled SMPL body shapes, each posed in 4 different SMPL poses sampled from Human3.6M \cite{h36m_pami} subjects 9 and 11. Global orientations are set such that the camera is facing the human's front, back, left or right. A group of 4 clean synthetic inputs and 4 corrupted inputs is rendered for each human, where the corruptions used are the same as the data augmentations applied during training. Examples are given in Figure \ref{fig:synthtic_data_example_predictions}. \begin{table}[t] \centering \small \begin{tabular}{c l c c} \hline \multirow{2}{0.23\linewidth}{\textbf{Max. input group size}} & \textbf{Method} & \textbf{SSP-3D}\\ & & PVE-T-SC\\ [0.5ex] \hline & HMR \cite{hmrKanazawa17} & 22.9\\ & GraphCMR \cite{kolotouros2019cmr} & 19.5\\ \large 1 & SPIN \cite{kolotouros2019spin} & 22.2\\ & DaNet \cite{zhang2019danet} & 22.1\\ & STRAPS \cite{STRAPS2020BMVC} & 15.9\\ & Ours & \textbf{15.2}\\ \hline & HMR \cite{hmrKanazawa17} + Mean & 22.9\\ & GraphCMR \cite{kolotouros2019cmr} + Mean & 19.3\\ & SPIN \cite{kolotouros2019spin} + Mean & 21.9\\ \large 5 & DaNet \cite{zhang2019danet} + Mean & 22.1\\ & STRAPS \cite{STRAPS2020BMVC} + Mean & 14.4 \\ & Ours + Mean & 13.6\\ & Ours + PC & \textbf{13.3}\\ \hline Video & VIBE \cite{kocabas2019vibe} & 24.1\\ \hline \end{tabular} \caption{Comparison with the state-of-the-art in terms of PVE-T-SC (mm) on SSP-3D \cite{STRAPS2020BMVC}. Our method surpasses the state-of-the-art when using single-image inputs. 
Probabilistic shape combination (PC) outperforms simple averaging of predictions from other methods when using groups of up to 5 images, as well as video predictions from \cite{kocabas2019vibe}.} \label{table:ssp3d_sota_comparison} \end{table} \begin{table}[t] \centering \small \begin{tabular}{l c c c} \hline \textbf{Method} & \multicolumn{2}{c}{\textbf{3DPW}}\\ & MPJPE-SC & MPJPE-PA\\ [0.5ex] \hline HMR \cite{hmrKanazawa17} & 102.8 & 71.5\\ GraphCMR \cite{kolotouros2019cmr} & 102.0 & 70.2\\ SPIN \cite{kolotouros2019spin} & 89.4 & 59.2\\ I2L-MeshNet \cite{Moon_2020_ECCV_I2L-MeshNet} & - & 57.7\\ Biggs et al. \cite{biggs2020multibodies} & - & 55.6\\ DaNet \cite{zhang2019danet} & \textbf{82.4} & \textbf{54.8}\\ \hline HMR (unpaired) \cite{hmrKanazawa17} & 126.3 & 92.0\\ Kundu et al. \cite{kundu_human_mesh} & - & 89.8\\ STRAPS \cite{STRAPS2020BMVC} & 99.0 & 66.8\\ Ours & 90.9 & 61.0\\ \hline \end{tabular} \caption{Comparison with the state-of-the-art in terms of MPJPE-SC and MPJPE-PA (both mm) on 3DPW \cite{vonMarcard2018}. Methods in the top half require training images paired with 3D ground-truth, methods in the bottom half do not.} \label{table:3dpw_sota_comparison} \vspace{-0.3cm} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/fig_3dpw_ssp3d_sorted_combined_v2.png} \caption{Comparison with other methods using sorted distributions of a) PVE-T-SC per SSP-3D evaluation sample and b) MPJPE-PA per 3DPW evaluation sample.} \label{fig:3dpw_ssp3d_sorted_error_distribution_combined} \vspace{-0cm} \end{figure} \section{Experimental Results} \label{sec:experimental_results} In this section, we present our ablation studies, where we investigate uncertainty predictions, compare probabilistic shape combination with simple averaging and explore the effects of varying input group sizes and global rotation variation within groups. We also compare our method to other approaches in terms of shape and pose accuracy. 
\subsection{Ablation studies} \noindent \textbf{Pose and shape uncertainty.} SMPL pose and shape prediction uncertainties are represented by the predicted variances $\boldsymbol{\sigma}^2_\theta$ and $\boldsymbol{\sigma}^2_\beta$. Rows 2 and 5 of Figure \ref{fig:synthtic_data_example_predictions} illustrate pose prediction uncertainty on clean and corrupted synthetic inputs. Heavily-occluded inputs (e.g. the corrupted input in column 4), result in large predicted variances for the pose parameters corresponding to the occluded joints, while predicted variances are smaller for visible joints. This behaviour is replicated on real inputs from the 3DPW dataset, as shown by Figure \ref{fig:3dpw_uncertainty} where the network is uncertain about the 3D locations of occluded and out-of-frame body parts. Figure \ref{fig:synthtic_data_example_predictions} also showcases shape parameter prediction uncertainty on synthetic data, in rows 3 and 6. The network is more uncertain about body shape when the subject is heavily occluded and/or in a challenging pose, seen by comparing the sitting pose in column 2 with the standing poses in columns 1 and 3. This behaviour is also seen on real inputs in Figure \ref{fig:ssp3d_predictions}, e.g. by comparing the crouching pose in row 1 with the standing pose in row 2. \noindent \textbf{Body shape combination method.} We compare probabilistic body shape combination (from Section \ref{subsec:body_shape_combination}) with a simpler heuristic combination, where we obtain combined body shape estimates from a group of inputs $\{\mathbf{X}_n\}_{n=1}^N$ by simply averaging (i.e. taking the mean of) the shape distribution means $\{\boldsymbol{\mu}_\beta(\mathbf{X}_n)\}_{n=1}^N$. Rows 3-4 in Table \ref{table:synth_ablation_comparison} show that better shape estimation metrics are attained using probabilistic combination versus simple averaging on synthetic input quadruplets (examples in Figure \ref{fig:synthtic_data_example_predictions}). 
This is replicated on groups of real inputs from SSP-3D, as shown in Table \ref{table:ssp3d_ablation_comparison}, row 5 versus row 6. Since probabilistic combination may be interpreted as uncertainty-weighted averaging (Eqn. \ref{eqn:combined_shape_mean_var}), these experiments suggest that inaccurate mean body shape predictions are generally accompanied by large prediction uncertainty, and subsequently down-weighted during probabilistic combination. This may explain why probabilistic combination actually gives better shape metrics when evaluating on corrupted synthetic inputs compared to clean inputs in Table \ref{table:synth_ablation_comparison}, since heavy input corruption results in inaccurate but highly-uncertain shape estimates. \noindent \textbf{Input group size.} Table \ref{table:synth_ablation_comparison} also investigates the effect of the input group size, evaluated on our synthetic dataset, by comparing single inputs (i.e. group size of 1) with body shape combination applied to pairs and quadruplets (i.e. input group sizes of 2 and 4). Body shape metrics are significantly improved when using pairs compared to single images, suggesting that probabilistic combination is successfully using shape information from the multiple inputs. A smaller improvement is seen when using quadruplets versus pairs. Table \ref{table:ssp3d_ablation_comparison} shows that increasing the input group size on real data (from SSP-3D) also results in a consistent but diminishing improvement in shape prediction metrics. \noindent \textbf{Global rotation variation.} To investigate whether variation in global rotation of the subject between inputs is correlated with shape prediction accuracy, we split each group of 4 inputs from our synthetic dataset into groups of 2 in two ways: (front, left) + (back, right) and (front, back) + (left, right). We expect the latter split to be less informative for shape prediction as the pairs contain more redundant visual shape information. 
This is corroborated by the experiments labelled ``Pairs'' in Table \ref{table:synth_ablation_comparison}, where the former split yields better shape metrics, particularly for corrupted inputs where the amount of visual shape information in each individual input is lower. \begin{table}[t!] \centering \small \renewcommand{\tabcolsep}{3.5pt} \begin{tabular}{c l c c c c c c} \hline \multirow{2}{0.12\linewidth}{\textbf{Group size}} & \textbf{Method} & \multicolumn{6}{c}{\textbf{RMSE}}\\ & & C & S & H & B & F & T \\ \hline \multirow{3}{3em}{1} & SPIN \cite{kolotouros2019spin} & 6.9 & 8.0 & 6.6 & 6.9 & 2.5 & 5.3\\ & STRAPS \cite{STRAPS2020BMVC} & 6.7 & 5.3 & 4.3 & 3.9 & 1.8 & 3.7\\ & Ours & 4.9 & 4.7 & 5.5 & 4.2 & 1.8 & 3.9\\ \hline \multirow{3}{3em}{4} & SPIN \cite{kolotouros2019spin} + Mean & 6.5 & 8.1 & 6.4 & 6.7 & 2.4 & 5.1\\ & STRAPS \cite{STRAPS2020BMVC} + Mean & 6.1 & 4.2 & 4.0 & \textbf{3.2} & 1.7 & 3.3\\ & Ours + Mean & 3.4 & 3.9 & 3.8 & 4.9 & \textbf{1.6} & 3.1\\ & Ours + PC & \textbf{3.1} & \textbf{3.8} & \textbf{2.7} & 5.0 & 1.7 & \textbf{2.8}\\ \hline \end{tabular} \caption{Comparison with the state-of-the-art in terms of tape measurement RMSE (cm) on our private shape evaluation dataset. Errors are reported for the chest (C), stomach (S), hips (H), biceps (B), forearms (F) and thighs (T).} \label{table:size_stream_results} \end{table} \subsection{Comparison with the state-of-the-art} \noindent \textbf{Shape prediction.} Our method surpasses the state-of-the-art on SSP-3D in the single-input case (group size of 1), as shown in Table \ref{table:ssp3d_sota_comparison} and Figure \ref{fig:ssp3d_predictions}. The improvement over the similar synthetic training method in STRAPS \cite{STRAPS2020BMVC} is primarily due to our improved training data augmentations. When using groups of multiple images as inputs (with group size = 5), probabilistic combination outperforms simple averaging of predictions from all other methods. 
The distribution of errors per SSP-3D sample, shown in Figure \ref{fig:3dpw_ssp3d_sorted_error_distribution_combined}, suggests that probabilistic combination particularly improves errors for challenging samples, where the uncertainty weighting is more meaningful. Table \ref{table:size_stream_results} compares tape measurement errors computed using shape predictions from our method and competitors on a private dataset. Probabilistic combination results in the lowest measurement errors for large body parts, such as the chest, stomach and hips. However, it is less accurate on smaller body parts (e.g. biceps and forearms), which are significantly obscured by clothing. In general, our method may over-estimate measurements for subjects with loose clothing, since silhouette-based inputs do not distinguish between clothing and the human body. \noindent \textbf{Pose prediction.} While we focus on body shape estimation, Table \ref{table:3dpw_sota_comparison} shows that our method is competitive with the state-of-the-art on 3DPW, and surpasses other methods that do not require training images paired with 3D labels. Figure \ref{fig:3dpw_ssp3d_sorted_error_distribution_combined} demonstrates that our method performs well on low-to-medium difficulty samples, but struggles with the most challenging ones, which typically exhibit very severe occlusions leading to degraded proxy representations. Nevertheless, we outperform STRAPS \cite{STRAPS2020BMVC} on these challenging samples due to improved data augmentation and the adaptive loss weighting discussed in Section \ref{subsec:network_training}, which results in a more stable improvement of pose metrics during training. \section{Conclusion} In this paper, we have proposed the novel task of human body shape estimation from a group of images of the same subject, without imposing any constraints on body pose, camera viewpoint or backgrounds. 
Our solution predicts multivariate Gaussian distributions over SMPL \cite{SMPL:2015} body shape and pose parameters conditioned on the input images. We probabilistically combine predicted body shape distributions from each image to obtain a final multi-image shape prediction, and experimentally show that probabilistically combined estimates are more accurate than both individual single-image predictions, as well as naively-averaged single-image predictions, when evaluated on SSP-3D and a private dataset of tape-measured humans. Furthermore, predicting distributions over SMPL parameters allows us to estimate the heteroscedastic aleatoric uncertainty associated with pose predictions, which is useful when faced with input images containing occluded or out-of-frame body parts. Future work can consider using (i) clothed and textured synthetic data to further close the synthetic-to-real domain gap, and (ii) more expressive predicted distributions than the simple Gaussians proposed in this paper. \noindent \textbf{Acknowledgements.} We thank Dr. Yu Chen (Metail) and Dr. David Bruner (SizeStream) for providing body shape evaluation data and 3D body scans. This research was sponsored by SizeStream UK Ltd. \clearpage \twocolumn[ \large \centering \textbf{Supplementary Material: Probabilistic 3D Human Shape and Pose Estimation from Multiple Unconstrained Images in the Wild\\} \vspace{0.2in} ] \noindent This document provides additional material supplementing the main manuscript. Section \ref{sec:supmat_implementation_details} contains details regarding training data generation, evaluation protocols and probabilistic shape combination. Section \ref{sec:supmat_experiment_results} discusses qualitative results on the SSP-3D \cite{STRAPS2020BMVC} and 3DPW \cite{vonMarcard2018} datasets, as well as providing examples from our private evaluation dataset of tape-measured humans. 
\section{Implementation Details} \label{sec:supmat_implementation_details} \noindent \textbf{Training.} Table \ref{table:sup_mat_augment_hypparams} lists the data augmentation methods used to bridge the synthetic-to-real domain gap during synthetic training data generation, along with associated hyperparameter values. Table \ref{table:sup_mat_hypparams} lists additional hyperparameter values not given in the main manuscript. \noindent \textbf{Uncertainty Visualisation.} Figures \ref{fig:sup_mat_ssp3d} and \ref{fig:sup_mat_3dpw} in this supplementary material, as well as several figures in the main manuscript, visualise per-vertex prediction uncertainties. These are computed from the predicted SMPL \cite{SMPL:2015} pose and shape parameter distributions by i) sampling 100 SMPL parameter vectors from the predicted distributions, ii) passing each of these samples through the SMPL function to get the corresponding vertex meshes, iii) computing the mean location of each vertex over all the samples and iv) determining the average Euclidean distance from the mean for each vertex over all the samples, which is ultimately visualised in the scatter plots as a measure of uncertainty. \noindent \textbf{SSP-3D Evaluation Groups.} SSP-3D \cite{STRAPS2020BMVC} contains 311 images of 62 subjects, where subjects can have a different number of associated images. To evaluate our multi-input shape prediction method, the images for each subject were split into groups of \textit{maximum} size equal to $N$, where $N$ ranged from 1 to 5. For example, if a subject has 6 associated images and $N=4$, the images would be split into two groups with 4 and 2 images respectively. Splitting/group assignment was done after random shuffling of the images to prevent sequential images with similar poses/global orientations from always being in the same group. 
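The per-vertex uncertainty computation described under \textbf{Uncertainty Visualisation} (steps i-iv) can be sketched as follows, with stand-in callables replacing the predicted SMPL parameter distributions and the SMPL function (all names are ours, for illustration):

```python
import numpy as np

def per_vertex_uncertainty(sample_fn, vertices_fn, n_samples=100):
    """i) draw parameter samples, ii) map each to a vertex mesh,
    iii) take the per-vertex mean location over samples, iv) return the
    average Euclidean distance from that mean, per vertex.

    sample_fn() -> SMPL parameter sample; vertices_fn(params) -> (V, 3) mesh.
    """
    meshes = np.stack([vertices_fn(sample_fn()) for _ in range(n_samples)])
    mean_verts = meshes.mean(axis=0)                        # (V, 3)
    dists = np.linalg.norm(meshes - mean_verts, axis=-1)    # (n_samples, V)
    return dists.mean(axis=0)                               # (V,)
```

Vertices whose location depends on uncertain parameters (e.g. occluded body parts) scatter widely across samples and therefore receive large values in the visualised uncertainty maps.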
\noindent \textbf{Tape measurement normalisation by height.} There is an inherent ambiguity between 3D subject size/scale and distance from camera. Since the true camera location relative to the 3D subject (and the focal length) is unknown, it is not possible to estimate the absolute size of the subject given an image. This is accounted for by the PVE-T-SC \cite{STRAPS2020BMVC} metric used to evaluate shape prediction accuracy on synthetic data and SSP-3D in the main manuscript. For our evaluation dataset of tape-measured humans (see Figure \ref{fig:supmat_sizestream}), scale correction is done using the subject's height. The height of the predicted SMPL human can be determined by computing the neutral-pose mesh (i.e. pose parameters/joint rotations set to 0) and measuring the $y$-axis distance between the top of the head and bottom of the feet. The ratio between the subject's true height and this predicted height is then used to scale all the predicted body measurements derived from the neutral-pose mesh. \begin{table}[t] \centering \small \begin{tabular}{l l c} \hline \noalign{\smallskip} \textbf{Augmentation} & \textbf{Hyperparameter} & \textbf{Value}\\ \noalign{\smallskip} \hline \noalign{\smallskip} Body part occlusion & Occlusion prob. & 0.1 \\ 2D joints L/R swap & Swap prob. & 0.1\\ Half-image occlusion & Occlusion prob. & 0.05\\ 2D joints removal & Removal prob. & 0.1\\ 2D joints noise & Noise range & [-8, 8] pixels\\ 2D vertices noise & Noise range & [-10, 10] mm \\ Occlusion box & Probability, Size & 0.5, 48 pixels \\ \noalign{\smallskip} \hline \noalign{\smallskip} \noalign{\smallskip} \end{tabular} \caption{List of synthetic training data augmentations and their associated hyperparameter values. Body part occlusion uses the 24 DensePose \cite{Guler2018DensePose} parts. 
Joint L/R swap is done for shoulders, elbows, wrists, hips, knees, ankles.} \label{table:sup_mat_augment_hypparams} \end{table} \begin{table}[t] \centering \small \begin{tabular}{l c} \hline \noalign{\smallskip} \textbf{Hyperparameter} & \textbf{Value}\\ \noalign{\smallskip} \hline \noalign{\smallskip} Shape parameter sampling mean & 0 \\ Shape parameter sampling var. & 2.25 \\ Cam. translation sampling mean & (0, -0.2, 2.5) m\\ Cam. translation sampling var. & (0.05, 0.05, 0.25) m\\ Cam. focal length & 300.0\\ Proxy representation dimensions & $256 \times 256$ pixels\\ 2D joint confidence threshold & 0.025\\ \noalign{\smallskip} \hline \noalign{\smallskip} \noalign{\smallskip} \end{tabular} \caption{List of hyperparameter values not provided in the main manuscript.} \label{table:sup_mat_hypparams} \end{table} \noindent \textbf{Probabilistic shape combination.} The main manuscript presents our method to probabilistically combine individual body shape distributions, $p(\boldsymbol{\beta} | \mathbf{X}_n)$ for $n = 1, ..., N$, into a final distribution $p(\boldsymbol{\beta} | \{\mathbf{X}_n\}_{n=1}^N)$. The full derivation is given below: \begin{equation} \begin{aligned} p(\boldsymbol{\beta} | \{\mathbf{X}_n\}_{n=1}^N) & \propto p(\{\mathbf{X}_n\}_{n=1}^N | \boldsymbol{\beta})p(\boldsymbol{\beta})\\ & = \bigg(\prod_{n=1}^N p(\mathbf{X}_n | \boldsymbol{\beta}) \bigg)p(\boldsymbol{\beta})\\ & \propto \frac{\prod_{n=1}^N p(\boldsymbol{\beta} | \mathbf{X}_n)}{p(\boldsymbol{\beta})^{N-1}}\\ & \propto \prod_{n=1}^N p(\boldsymbol{\beta} | \mathbf{X}_n). \end{aligned} \label{eqn:probabilistic_shape_combination} \vspace{-0.1cm} \end{equation} The first and third lines use Bayes' theorem. The second line follows from the conditional independence assumption $(\mathbf{X}_i \perp \!\!\! \perp \mathbf{X}_j) | \boldsymbol{\beta}$ for $i,j \in \{1,...,N\}$ and $i \neq j$. 
This assumption is reasonable because only the subject's body shape is fixed across inputs - hence, the inputs are independent given the body shape parameters. The final line follows from assuming an (improper) uniform prior over the shape parameters $p(\boldsymbol{\beta}) = 1$. \section{Experimental Results} \label{sec:supmat_experiment_results} \begin{table*}[!b] \centering \begin{tabular}{l cc cc} \hline \multirow{2}{3em}{\textbf{Input}} & \multicolumn{2}{c}{\textbf{3DPW}} & \multicolumn{2}{c}{\textbf{SSP-3D}}\\ & MPJPE-SC & MPJPE-PA & PVE-PA & PVE-T-SC\\ \noalign{\smallskip} \hline \noalign{\smallskip} \textbf{GT Synthetic} Silh. + 2D Joint Heatmaps & \textbf{64.3} & \textbf{45.7} & \textbf{52.9} & \textbf{10.1}\\ \noalign{\smallskip} \textbf{GT} Silh. + 2D Joint Heatmaps & - & - & 69.9 & 14.4\\ \noalign{\smallskip} \textbf{Predicted} Silh. + 2D Joint Heatmaps & 90.9 & 61.0 & 71.4 & 15.2\\ \noalign{\smallskip} \hline \end{tabular} \caption{Comparison between ground-truth (GT), synthetic ground-truth and predicted input silhouettes and 2D joints, in terms of MPJPE-SC and MPJPE-PA (both in mm) on 3DPW \cite{vonMarcard2018}, as well as PVE-PA and PVE-T-SC (both in mm) on SSP-3D \cite{STRAPS2020BMVC}. Predicted silhouettes are obtained using DensePose \cite{Guler2018DensePose} and predicted 2D joint coordinates and confidences (for thresholding) are obtained using Keypoint-RCNN from Detectron2 \cite{wu2019detectron2}. 
Synthetic ground-truth inputs are obtained by rendering the SMPL \cite{SMPL:2015} body mesh labels given by SSP-3D and 3DPW, using ground-truth camera parameters, into silhouette and 2D joint input representations.} \label{table:supmat_gt_vs_pred_inputs} \end{table*} \noindent \textbf{Evaluation using ground-truth vs predicted inputs.} The synthetic training data augmentations listed in Table \ref{table:sup_mat_augment_hypparams} and the main manuscript are used to increase the robustness of our distribution prediction neural network to noisy and occluded test data, as demonstrated in Figure \ref{fig:sup_mat_3dpw}. However, the synthetic-to-real domain gap still persists, as evidenced by Table \ref{table:supmat_gt_vs_pred_inputs}, which compares body shape and pose prediction metrics when using ground-truth, synthetic ground-truth and predicted input proxy representations. A significant improvement in both body shape and pose metrics is observed when using synthetic inputs, instead of predicted inputs. This is mostly because predicted input silhouettes and 2D joints can be very inaccurate in cases with challenging poses, significant occlusion or occluding humans, such that the synthetic training data augmentations are not sufficient. Moreover, synthetic SMPL human silhouettes are not clothed, while silhouette predictors generally classify clothing pixels as part of the human body. This is particularly detrimental to body shape prediction metrics when subjects are dressed in loose clothing, as can be seen in Figure \ref{fig:sup_mat_3dpw} (left side, rows 3 and 4), where our method tends to over-estimate the subject's body proportions. \noindent \textbf{SSP-3D qualitative results.} Figure \ref{fig:sup_mat_ssp3d} shows qualitative results, particularly focusing on shape prediction, on groups of input images from SSP-3D \cite{STRAPS2020BMVC} corresponding to subjects with a wide range of body shapes. The first column in each cell shows the input images in the group. 
The second column shows the predicted SMPL \cite{SMPL:2015} body (rendered) for each \textit{individual} image, obtained by passing the mean of predicted SMPL parameter distributions through the SMPL function. The third and fourth columns visualise the 3D per-vertex uncertainty (or variance) in the individual SMPL shape distribution predictions (in a neutral pose, i.e.\ pose parameters/joint rotations set to 0). The fifth column shows the \textit{combined} body shape prediction, which is obtained by probabilistically combining the individual shape distributions. In particular, note the relationship between challenging poses with significant self-occlusion (e.g. right side, row 4 of Figure \ref{fig:sup_mat_ssp3d}) and uncertainty in the predicted SMPL shape distribution. \noindent \textbf{3DPW qualitative results.} Figure \ref{fig:sup_mat_3dpw} shows qualitative results, particularly focusing on pose prediction, using single-image inputs from 3DPW \cite{vonMarcard2018}. The first column on each side shows the input images. The second column shows the corresponding silhouette and joint heatmap proxy representation predictions. The third column shows the predicted SMPL \cite{SMPL:2015} body (rendered) for each image, obtained by passing the mean of predicted SMPL parameter distributions through the SMPL function. The fourth column visualises the 3D per-vertex uncertainty (or variance) in the SMPL pose and shape distribution predictions (per-vertex uncertainties are mostly due to pose variance rather than shape). Specifically, note the large uncertainties of vertices belonging to body parts that are invisible in the image (and corresponding proxy representations), either due to occluding objects, self-occlusion or being out-of-frame. Furthermore, large uncertainties also occur when the proxy representation prediction is highly degraded, such as left side, row 7 of Figure \ref{fig:sup_mat_3dpw}.
\begin{figure} \centering \includegraphics[width=\linewidth]{figures/supmat_sizestream.png} \caption{Example images from our private dataset of humans with body measurements obtained using a tape measure or 3D body scanners. The subjects' body pose, clothing, surrounding environment and camera viewpoints vary between images.} \label{fig:supmat_sizestream} \end{figure} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{figures/sup_mat_fig_ssp3d.png} \vspace{-0.25in} \caption{Qualitative results on groups of input images from SSP-3D \cite{STRAPS2020BMVC}. Black dots indicate left hands. Within each cell: 1st column is group of input images, 2nd column is predicted SMPL body, 3rd and 4th columns show 3D per-vertex uncertainty in the SMPL \textit{shape} distribution prediction, 5th column is the probabilistically-combined body shape. Challenging poses lead to large shape prediction uncertainty.} \label{fig:sup_mat_ssp3d} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/sup_mat_fig_3dpw.png} \caption{Qualitative results using single-image inputs from 3DPW \cite{vonMarcard2018}. Black dots indicate left hands. On each side: 1st column is input image, 2nd column is predicted proxy representation, 3rd column is predicted SMPL body and 4th column is 3D per-vertex uncertainty in the SMPL pose and shape distribution prediction. Vertices of occluded and out-of-frame body parts have higher prediction uncertainties.} \label{fig:sup_mat_3dpw} \end{figure*} \clearpage {\small \bibliographystyle{ieee_fullname}
\section{Introduction} The identity and nature of dark matter (DM) remain among the most important outstanding problems in modern physics. Despite the overwhelming astrophysical evidence for its existence, no conclusive terrestrial observation of DM has yet been reported \cite{Liu2017,Bertone2018}. Currently, most of the effort in the search for DM has focussed on weakly interacting massive particles (WIMPs) with masses $\gtrsim$10--100\,GeV through their hypothesised non-gravitational interactions with Standard Model particles. In this work, we consider low-mass WIMP DM with masses on the order of $1\,$GeV. One long-standing claim of a potential DM detection was made by the DAMA Collaboration, which uses a NaI-based scintillation detector to search for possible DM interactions within the crystal in the underground laboratory at the Gran Sasso National Laboratory, INFN, Italy~\cite{Bernabei2013} (see also Refs.~\cite{Bernabei2008,Bernabei2012}). The results from the combination of the DAMA/LIBRA and DAMA/NaI experiments indicated an annual modulation in the event rate at around 3 keV electron-equivalent energy deposition (with a low-energy threshold of $\sim2$ keV) with a 9.3$\sigma$ significance~\cite{Bernabei2013}. The phase of this modulation agrees very well with the assumption that the signal is due to the scattering of WIMP DM present in the galactic halo. An annual modulation in the observed WIMP scattering event rate is expected due to the motion of the Earth around the Sun, which results in an annual variation of the DM flux through a detector (and the mean DM kinetic energy); see, e.g., Refs.~\cite{Freese2013,Lee2015}. Despite the significant signal, there is strong doubt that the DAMA Collaboration result can be due to WIMPs, since it is seemingly in conflict with the null results of many other direct detection experiments, e.g., Refs.~\cite{Aprile2019a,Aprile2018,LUXjan2017,Ren2018,Agnes2018}.
There are also several works which offer explanations of the DAMA result in terms of non-DM origins, e.g., Ref.~\cite{Pradler2013}. However, it is not always possible to compare different experiments in a model-independent way, meaning it is difficult to make general statements to this effect. For example, one possibility that has been considered in the literature is that the DAMA modulation signal may be caused by WIMPs that scatter off the atomic electrons~\cite{Bernabei2008a,Foot2018,Savage2009}, as opposed to nuclear scattering as is assumed in typical experiments. This is particularly applicable for lighter WIMPs ($\lesssim$\,$10$\,GeV), which will not leave appreciable nuclear recoils. Most direct detection experiments try to reject pure electron scattering events, in order to perform nuclear recoil searches with as low a background as possible. Conversely, the DAMA experiment is sensitive to WIMPs which scatter off either electrons or nuclei, potentially allowing electron-interacting DM to explain the DAMA modulation while avoiding the tight constraints from other experiments. In a recent work~\cite{RobertsDAMA2016}, however, we used scintillation and ionisation signals from the XENON100~\cite{XENONcollab2015a} and XENON10~\cite{Angle2011} experiments to rule out this possibility for the observed signal above 2\,keV; see also Refs.~\cite{Xe100er2017,XENONcollab2015,Kopp2009}. Recently, newer results from the DAMA/LIBRA-phase2 experiment have become available \cite{Bernabei2018} (see also \cite{Bernabei2017,Bernabei2015}). These results strengthen the claim for a detected signal, with the significance of the annual modulation in the 2--6\,keV energy window rising to 12.9$\sigma$. Importantly, the low-energy threshold has been lowered in the new experiment to 1\,keV, and the annual modulation is also clearly present in this region (9.5$\sigma$ significance). This may be of particular significance for the interpretation in terms of electron-interacting DM.
In our previous work~\cite{RobertsDAMA2016} we showed that there would be an almost exponential increase in the potential event rate at lower energies for such models of light ($\sim$\,1\,GeV) WIMPs. For the $\sim$\,keV energy depositions of interest to this work, the relevant process for electron-scattering DM is atomic ionisation. Such processes are kinematically disfavoured at these energy scales, and therefore the scattering probes deep inside the bound-state wavefunction, with the main contribution coming from the wavefunction at distances much smaller than the characteristic Bohr radius of an atom. In such a situation, incorrect small-distance scaling of the wavefunctions (for example, by using an ``effective $Z$'' model, or assuming plane waves for the outgoing ionisation electron) can lead to large errors in the predicted ionisation rates~\cite{RobertsDAMA2016}. Further, the relativistic effects for the electron wavefunction are crucial and must be taken into account~\cite{RobertsAdiabatic2016}. As such, interpretation in terms of light WIMPs requires non-trivial calculations of the atomic structure and ionisation rates. Finally, we note that there are several ongoing experiments \cite{Akerib2018,Akerib2018-er,Akerib2017,XMASScollaboration2018,Ema2018,Cappiello2019a,Antonello2019,Collaboration2019a} and proposals \cite{Essig2017,Kouvaris2016,McCabe2017a,Wolf2018a,Emken2019,Bringmann2018,Kurinsky2019} to search for light WIMPs in direct detection experiments. We also note that weak evidence for annual modulation at 2\,keV from the COSINE collaboration has recently been made public~\cite{Collaboration2019} (see also Ref.~\cite{TheCosine-Collaboration2018}). \section{Theory} \subsection{Atomic ionisation} Throughout the text and in the figures, we use relativistic units ($\h=c=1$), with masses, energies and momenta presented in eV, as is standard in the field.
However, it is also customary to present, e.g., cross sections in $\un{cm}^2$ and event rates in counts/kg/keV/day. Further, for the calculations of atomic ionisation, it is convenient and common to use atomic units ($\h=m_e=1$, $c=1/\a$). Therefore, to avoid any possible confusion, we leave all factors $\h$ and $c$ in the equations. We consider DM particles that have electron interactions of the form \begin{equation}\label{eq:hint} V(\v{r}) = \h c\a_\chi \frac{e^{-\mu r}}{{r}}, \end{equation} where $\mu$ is the inverse of the length-scale for the interaction, set by the mediator mass (e.g.,\ $\mu=m_v c/\h$), and $\alpha_\chi$ is the effective DM--electron coupling strength. Such effective interaction Hamiltonians arise generally in the case of either scalar or vector interactions (e.g., via the exchange of a dark photon). The coefficient in (\ref{eq:hint}) is chosen so that in the case of a massless mediator (long-range interaction, $\mu=0$), this reduces to a Coulomb(-like) potential (with $\a\to\a_\chi$). In the limit of a very heavy mediator, the above reduces to the contact interaction: $V(\v{r}) = 4\pi \h c ({\a_\chi}/{\mu^2})\delta(\v{r})$. The differential cross section (for fixed velocity $v$) for the excitation of an electron in the initial state $njl$ is \begin{equation}\label{eq:dsigma} \frac{\ensuremath{{\rm d}}\s_{n jl}}{\ensuremath{{\rm d}} E} = 8\pi\a_\chi^2 \left(\frac{c}{v}\right)^2 \int_{q_-}^{q_+}\frac{q\ensuremath{{\rm d}} q}{(q^2+\mu^2)^2}\, \frac{K_{n jl}(E,q)}{E_H}, \end{equation} where $\h q$ is the magnitude of the momentum transfer, $E$ is the energy deposition, and $K$ is the atomic excitation factor, defined as \cite{RobertsAdiabatic2016} \begin{equation}\label{eq:Knk} K_{n jl} \equiv E_H\sum_{m}\sum_{f}\left| \bra{f}e^{i\v{q}\cdot\v{r}}\ket{nj l m}\right|^2 \varrho_f(E).
\end{equation} Here, $\varrho_f$ is the density of final states, $m=j_z$, and the total cross section is obtained by summing over all electrons, $\ensuremath{{\rm d}}\s = \sum\ensuremath{{\rm d}}\s_{n jl}$. The factor of the Hartree energy unit ($E_H \equiv m_e c^2\a^2\simeq 27.2\un{eV}$) is included in Eqs.~(\ref{eq:dsigma}) and (\ref{eq:Knk}) in order to make the $K$ factor dimensionless ($q$ and $\mu$ have dimensions of inverse length). Since we are considering ionisation processes, the final state is a continuum electron with energy $\varepsilon = E - I_{njl}$ ($I_{njl}$ is the ionisation energy). Formulas for calculating the atomic excitation factor (\ref{eq:Knk}) are given in Appendix~\ref{sec:appendix}. Equation (\ref{eq:dsigma}) is to be integrated over all possible values for the momentum transfer. From conservation of momentum, the allowed values fall between \begin{equation}\label{eq:qpm} \h q_{\pm} = m_\chi v \pm \sqrt{ m_\chi^2 v^2-2m_\chi E}, \end{equation} where both the DM particle and ejected electron are assumed to be non-relativistic. For $\sim$\,GeV WIMPs leaving $\sim$\,keV energy depositions, the typical momentum transfer is $q\sim\sqrt{2m_\chi E}\sim{\rm MeV}$, which is very large on atomic scales \cite{RobertsDAMA2016}. The resulting differential event rate (per unit mass of target material) is proportional to the cross section (\ref{eq:dsigma}) averaged over incident DM velocities: \begin{equation} dR = \frac{n_T \rho_{\rm DM}}{m_\chi c^2} \frac{\ensuremath{{\rm d}}\braket{\s_{n jl}v_\chi}}{\ensuremath{{\rm d}} E} \, \ensuremath{{\rm d}} E, \end{equation} where $n_T$ is the number of target atoms per unit mass, and $\rho_{\rm DM}\sim0.3$--$0.4\un{GeV}\un{cm}^{-3}$ is the local DM energy density~\cite{Bovy:2012tw}.
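The kinematic limits above are easy to evaluate directly. A minimal sketch in natural units (the sample speed and energy are illustrative values of ours) confirms that MeV-scale momentum transfers dominate:

```python
import math

def q_limits_eV(m_chi_eV, v_over_c, E_eV):
    """Kinematic limits hbar*q_pm = m*v +/- sqrt(m^2 v^2 - 2 m E) on the
    momentum transfer (natural units; momenta returned in eV/c)."""
    disc = (m_chi_eV * v_over_c) ** 2 - 2.0 * m_chi_eV * E_eV
    if disc < 0:
        return None  # deposition E is kinematically inaccessible at this speed
    root = math.sqrt(disc)
    return m_chi_eV * v_over_c - root, m_chi_eV * v_over_c + root

# A 1 GeV WIMP near the maximum halo speed (~2.6e-3 c, illustrative) depositing
# 3 keV: allowed q spans roughly 1.7-3.5 MeV, far above typical bound-electron
# momenta (Z*alpha*m_e is at most ~0.2 MeV for inner shells).
q_minus, q_plus = q_limits_eV(1e9, 2.6e-3, 3e3)
```

At a more typical halo speed of $10^{-3}c$ the same deposition is forbidden (the discriminant is negative), illustrating why only the fast tail of the velocity distribution contributes.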
We follow Ref.~\cite{EssigPRL2012} and parameterise the velocity-averaged cross section as \begin{equation} \frac{\braket{\ensuremath{{\rm d}}\s v}}{\ensuremath{{\rm d}} E} = \frac{\bar\s_e }{2 m_e c}\int\ensuremath{{\rm d}} v \frac{f(v)}{v/c}\int\limits_{q_-}^{q_+} a_0^2q\ensuremath{{\rm d}} q \, |F^{\mu}_{\rm \chi}(q)|^2\,{K(E,q)}, \end{equation} where $\sbe$ is the free electron cross section at fixed momentum transfer of $q=a_0^{-1}$, $a_0 = \h/(m_e c\a)$ is the Bohr radius, $\a\approx1/137$ is the fine-structure constant, and $f$ is the DM speed distribution (in the lab frame, normalised to $\int f(v)\,\ensuremath{{\rm d}} v=1$). In the case of a vector or scalar mediated interaction such as (\ref{eq:hint}), these are expressed as \begin{align} \sbe &= a_0^2 \frac{16\pi\,\a^2\a_\chi^2}{((m_v/m_e)^2+\a^2)^2} \\ F_\chi(q) &= \frac{(m_v/m_e)^2 + \a^2}{(m_v/m_e)^2+ (\a a_0 q)^2}. \end{align} We have assumed here that $m_\chi\gg m_e$, which is valid for the considered mass ranges. In the limit of a heavy mediator (contact-like interaction), the DM form factor reduces simply to $F_\chi=1$, while for an ultra-light mediator (Coulomb-like interaction), it reduces to $F_\chi = (a_0q)^{-2}$. This is a convenient way to parameterise the calculations, and allows for easy model-independent comparison between different results. There is no contribution to the event rate stemming from DM velocities below $v_\min = \sqrt{2E/m_\chi}$, the minimum required to deposit energy $E$.
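The two mediator limits of the DM form factor can be checked numerically; a minimal sketch (the sample value $a_0 q = 50$ and mediator masses are ours):

```python
import math

ALPHA = 1.0 / 137.035999   # fine-structure constant

def F_chi(a0q, mv_over_me):
    """DM form factor from the parameterisation above, as a function of the
    dimensionless momentum transfer a_0*q and the mediator-to-electron mass
    ratio m_v/m_e."""
    r2 = mv_over_me ** 2
    return (r2 + ALPHA ** 2) / (r2 + (ALPHA * a0q) ** 2)

heavy = F_chi(50.0, mv_over_me=10.0)  # contact limit: F -> 1
light = F_chi(50.0, mv_over_me=0.0)   # Coulomb-like limit: F -> (a0*q)**-2
```

For a heavy mediator the factor is essentially unity, while for a massless mediator it reduces exactly to $(a_0 q)^{-2} = 1/2500$ at this momentum transfer.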
If the majority of the target momentum-space wavefunction density lies inside the $(q_-,q_+)$ region, then the integration over $q$ in Eq.~(\ref{eq:dsigma}) is essentially independent of the integration limits, so one may write \[ \braket{\ensuremath{{\rm d}}\s v} \propto \eta(v_{\rm min}) \int q\ensuremath{{\rm d}} q\,|F_{\chi}(q)|^2K(E,q) \, \ensuremath{{\rm d}} E, \] where $\eta(v_{\rm min})$ is the mean inverse speed of DM particles fast enough to cause the ionisation ($v>v_{\rm min}$) for the given velocity distribution. This is a common way to calculate DM direct detection event rates, particularly for nuclear recoils; however, we note that in the case of electron scattering it is not valid. The cross section depends strongly on $q_-$ and hence the DM velocity, since, in many cases, the bulk of the electron momentum-space wavefunction lies below the allowed region for momentum transfer. This means that a careful treatment of the DM speed distribution, including uncertainties, is required for the analysis (see also Ref.~\cite{WuFreese2019}). \subsection{Calculation of the atomic ionisation factor} \begin{figure} \includegraphics[width=0.48\textwidth]{./img/K-approx} \caption{Atomic factor $K$ (\ref{eq:Knk}) for the $3s$ state of Xe at fixed energy (2\,keV), calculated using different approximations. For the $3s$ Xe state, we have $Z_{\rm eff} = n\sqrt{2I_{n jl}/E_H}\simeq28$. } \label{fig:Kapprox} \end{figure} In Fig.~\ref{fig:Kapprox} we show a comparison of the atomic ionisation factor as calculated using a number of different approximations, as a function of the momentum transfer $q$ (for fixed $E$). For the values relevant to this work, around $q\sim1\,$MeV, there is almost four orders of magnitude difference between the various approximations. Also note that the relativistic effects are very important for large $q$, and the corrections continue to grow with increasing $q$ \cite{RobertsAdiabatic2016}.
Since the typical kinetic energy of a $\sim\,$GeV mass WIMP is large compared to typical atomic transition energies, the minimum momentum transfer is given by $\h q_{\rm min} \sim m_\chi v_\chi \sim E/v_\chi$ (\ref{eq:qpm}). Therefore, we see that $\h q_{\rm min}\gtrsim m_e v_e^2/v_\chi = p_e(v_e/v_\chi)\gg p_e$, with $v_e\sim \a c$--$Z\a c$ the typical velocity of an atomic electron. The consequence is that only the very high-momentum tail of the wavefunctions (in momentum space) can contribute to such processes. In position space, this part of the wavefunction comes from distances very close to the nucleus. For a detailed discussion, see Ref.~\cite{RobertsAdiabatic2016}. Therefore, care must be taken to perform the calculations of such processes correctly. For example, it is common to calculate such processes using analytic hydrogen-like wavefunctions with an effective nuclear charge, which is chosen to reproduce experimental binding energies: $Z_{\rm eff} = n\sqrt{2I_{n jl}/E_H}$. While such functions give a reasonable approximation for low $q$, for the large $q$ values important for this work they drastically underestimate the cross-section. This is because such functions have incorrect scaling at distances close to the nucleus, which is the only part of the electron wavefunction that can contribute enough momentum transfer. Another common approach is to approximate the outgoing ionisation electron wavefunction as a plane wave state. Such functions also have the incorrect scaling at small distances, and underestimate the cross section by orders of magnitude for large $q$. (This is mostly due to the missing Sommerfeld enhancement, as discussed in Ref.~\cite{Essig2012}.) More details regarding this point are given in Appendix~\ref{sec:appendix}. Therefore, to perform accurate calculations one must employ a technique known to accurately reproduce the electron orbitals.
Namely, we employ the relativistic Hartree-Fock method, including the finite nuclear size, and use continuum energy eigenstates as the outgoing electron orbitals. Detailed calculations and discussion were presented in Ref.~\cite{RobertsDAMA2016}. Formulas are given in Appendix~\ref{sec:appendix}. \begin{figure*} \includegraphics[width=0.47\textwidth]{./img/dsv-E-contact} \includegraphics[width=0.47\textwidth]{./img/dsv-E} \caption{ Velocity-averaged differential cross section for a single Xe atom, with $\sbe=10^{-37}\un{cm}^{2}$. The left panel is for a contact (heavy mediator) interaction, and the right is for a long-range (Coulomb-like) interaction. The kinks in the plots are due to the opening up of deeper electron shells. There is no signal above $E_{\rm max}=m_\chi v_{\rm max}^2/2$, where $v_{\rm max}$ is the maximum DM velocity. } \label{fig:dsvde} \end{figure*} Given the extreme dependence on the atomic physics seen in Fig.~\ref{fig:Kapprox}, it is important to estimate the uncertainty in the calculations. To gauge this, we also calculate the cross-section using other (simpler) methods. Namely, we exclude the effect of the exchange potential from the Hartree-Fock method, and also solve the Dirac equations using only a local parametric potential (chosen to reproduce the ionisation energies) instead of the Hartree-Fock equations. The effect this has on the calculations is very small, with the main difference coming from small changes in the calculated values for the ionisation energies. This is as expected, since the cross-section is due mainly to the value of the wavefunctions at small distances, close to the nucleus, where many-body electron effects are less important (but the correct scaling is crucial). All of these methods (unlike the effective $Z$ method, or plane-wave assumption) give the correct small-$r$ scaling of the bound and continuum electron orbitals.
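The sensitivity to the small-$r$ (equivalently, high-$q$) behaviour can be illustrated with the analytic non-relativistic hydrogenic $1s$ momentum-space density, $|\phi_{1s}(q)|^2 = (8/\pi^2)\,Z^5/(q^2+Z^2)^4$ in atomic units, whose high-$q$ tail scales as $Z^5/q^8$. A standalone sketch (the sample $q$ and $Z$ values are ours; this toy model omits the relativistic effects discussed above):

```python
import math
import numpy as np

def mom_density_1s(q, Z):
    """|phi_1s(q)|^2 for a hydrogenic ion in atomic units; the high-q tail
    falls as Z^5/q^8, so the assumed nuclear charge controls large-q physics."""
    return (8.0 / math.pi ** 2) * Z ** 5 / (q ** 2 + Z ** 2) ** 4

# Normalisation check, and Z^5 sensitivity of the tail: at q = 500 a.u.,
# Z = 50 (a true inner-shell charge) vs Z = 28 (an "effective Z") changes
# the density by roughly (50/28)^5 ~ 18, i.e. over an order of magnitude.
p = np.linspace(1e-4, 2000.0, 2_000_000)
dp = p[1] - p[0]
norm = np.sum(mom_density_1s(p, 50.0) * 4.0 * np.pi * p ** 2) * dp
ratio = mom_density_1s(500.0, 50.0) / mom_density_1s(500.0, 28.0)
```

Even in this simple model, a wrong effective charge changes the high-$q$ density by more than an order of magnitude; relativistic corrections widen the gap further.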
The finite-nuclear-size correction is important for large values of $q$, but is small compared to the relativistic corrections, and ultimately is not a leading source of error. In any case, we include this in an {\em ab initio} manner, by directly solving the electron Dirac equation in the field created by the nuclear charge density, which we take to be given by a Fermi distribution, $\rho(r) = \rho_0 [1+\exp((r-c)/a)]^{-1}$. Here $t\equiv 4a\ln3\simeq2.3\,{\rm fm}$ and $c\simeq 1.1 A^{1/3}\,{\rm fm}$ are the nuclear skin-thickness and half-density radius, respectively, e.g.,~\cite{Fricke1995}, and $\rho_0$ is the normalisation factor. We note that the uncertainties stemming from the atomic physics errors are small compared to those coming from the assumed dark matter velocity distribution and detector performance, as discussed in the following sections. Plots of the velocity-averaged differential cross-sections for several WIMP masses and mediator types are presented in Fig.~\ref{fig:dsvde}. We find very good agreement with similar recent calculations for Xe atoms in Ref.~\cite{Pandey2018}. We present these plots for the xenon atom, since it is the most common target material. For the DAMA/LIBRA experiment, the cross section is dominated by scattering off iodine ($Z=53$), which has a very similar electronic structure to xenon ($Z=54$). \subsection{Annual modulation} We assume the DM velocity distribution is described by the standard halo model, with a cut-off (in the galactic rest frame) of $v_{\rm esc} = 550(55)\un{km/s}$, and a circular velocity of $v_0=220(20)\un{km/s}$; see, e.g., Refs.~\cite{Freese2013,Baum2018}. The numbers in the parentheses above represent estimates for the uncertainties in the values. This is important, due to the strong velocity dependence of the cross section (see also, e.g., Ref.~\cite{WuFreese2019}). We use these uncertainties to estimate the resulting uncertainty in the calculated event rates.
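The standard halo model just described is simple to sketch with a small Monte Carlo; the mean Earth-frame speed of 233 km/s and the 735 km/s threshold (corresponding to $E=3$ keV and $m_\chi=1$ GeV) are illustrative values of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def lab_speeds(v0=220.0, vesc=550.0, vE=233.0, n=200_000):
    """Draw DM speeds (km/s) in the Earth frame for the standard halo model:
    an isotropic Maxwellian with most-probable speed v0 (per-component sigma
    v0/sqrt(2)), truncated at vesc in the galactic frame, then boosted by the
    Earth's velocity vE."""
    v_gal = rng.normal(0.0, v0 / np.sqrt(2.0), size=(4 * n, 3))
    v_gal = v_gal[np.linalg.norm(v_gal, axis=1) < vesc][:n]
    return np.linalg.norm(v_gal - np.array([0.0, vE, 0.0]), axis=1)

v = lab_speeds()
# Only the sparsely populated tail above v_min = sqrt(2E/m_chi) (~735 km/s
# for E = 3 keV and m_chi = 1 GeV) can contribute to the ionisation rate.
frac_above = np.mean(v > 735.0)
```

Only a sub-percent fraction of particles sits above the 3 keV threshold, so modest shifts in $v_0$ or $v_{\rm esc}$ translate into large fractional changes in the accessible flux.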
For the calculations, the velocity distribution is boosted into the Earth frame, which has a speed of \begin{equation} v_E(t) \approx v_L + v_{\rm orb}\cos\beta\cos({2\pi}\cdot{{\rm yr}^{-1}} \, t + \phi). \end{equation} Here, $v_L = v_0+13\un{km/s}$ is the average local rest frame velocity, accounting for the peculiar motion of the Sun, $v_{\rm orb}$ is the Earth's orbital velocity, and $\cos\beta\approx 0.49$ accounts for the inclination of Earth's orbit to the galactic plane. The $\cos(\w t)$ term accounts for the annual change in the local frame velocity due to the orbital motion around the Sun, with phase $\phi$ chosen such that $v_E$ is maximal on June 2, when the Earth and Sun velocities add maximally in the galactic halo frame. Due to the strong velocity dependence of the cross-section, the resulting event rates are not perfectly sinusoidal, particularly at higher energies and lower WIMP masses \cite{RobertsDAMA2016}. However, the general sinusoidal feature remains a reasonable approximation. We define the modulation amplitude as $(R_{\rm max} - R_{\rm min})/2$. \section{Implications from DAMA/LIBRA-phase2} \begin{figure*} \includegraphics[width=0.47\textwidth]{./img/DAMA-Sm-1GeV-h} \includegraphics[width=0.47\textwidth]{./img/DAMA-Sm-1GeV-l} \caption{ Calculated modulation amplitude for NaI, accounting for the DAMA detector resolution, with $\sbe$ chosen to reproduce the observed DAMA/LIBRA-phase2 modulation amplitude averaged across the 1--2\,keV energy bin. The black points are the combined DAMA/LIBRA data (the points below 2\,keV from phase2 alone) \cite{Bernabei2018}. The shaded blue region shows the uncertainties from the calculations (this work), which are mostly due to uncertainties in the DAMA energy resolution. The plot is drawn assuming $m_\chi=1$\,GeV for a contact-like interaction (left), and a long-range interaction (right). Clearly, the fit is poor.
} \label{fig:fitSpectrum} \end{figure*} In order to calculate the number of events detected within a particular energy range, the energy resolution of the detectors must be taken into account. To do this, we follow the procedure from Ref.~\cite{Bernabei2008a}, and take the detector resolution to be described by a Gaussian with standard deviation \begin{equation} \sigma_{\rm LE}/E = \a_{\rm LE}/\sqrt{E/{\rm keV}} + \b_{\rm LE}, \end{equation} which is measured at low energy to be given by $\a_{\rm LE} = 0.45(4)$, and $\b_{\rm LE}=9(5)\E{-3}$ \cite{Bernabei2008b}. The calculated rate, $R$, is integrated with the Gaussian profile to determine the observable event rate, $S$: \begin{equation} \frac{\ensuremath{{\rm d}} S}{\ensuremath{{\rm d}} E} = \int_{E_{\rm HW}}^\infty g_{\sigma_{\rm LE}}(E'-E) \, \frac{\ensuremath{{\rm d}} R(E')}{\ensuremath{{\rm d}} E'} \, \ensuremath{{\rm d}} E', \end{equation} where $E_{\rm HW}$ is the hardware threshold, which for DAMA is 1 photoelectron \cite{Bernabei2008b}. The light yield is measured by the DAMA collaboration to be 5.5--7.5 photoelectrons/keV, depending on the detector \cite{Bernabei2008b}. We take an average value of 6.5, with $\pm1$ as an error term, so that $E_{\rm HW} = 0.15(3)\un{keV}$. We do not take the detector efficiency into account, because the DAMA collaboration present their results corrected for this~\cite{Bernabei2018}. The effect of the finite detector resolution is that it allows events that originally occur at lower energies to be visible in the observed data above the threshold. This is particularly important due to the strong enhancement in the cross section at low energies (see Fig.~\ref{fig:dsvde}). Due to the strong atomic number $Z$ dependence, the cross section for scattering off sodium electrons is negligible \cite{RobertsDAMA2016}. So, for the DAMA NaI crystals, it is sufficient to calculate the rate due to scattering off the iodine electrons.
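The resolution convolution above can be sketched numerically; the steeply falling toy spectrum $\propto \exp(-E/0.3\,{\rm keV})$ is our illustrative stand-in for the calculated rate, not the actual result of this work:

```python
import numpy as np

def sigma_keV(E):
    """DAMA low-energy resolution model: sigma/E = 0.45/sqrt(E/keV) + 0.009."""
    return E * (0.45 / np.sqrt(E) + 9e-3)

def smear(E_grid, dRdE, E_obs):
    """Convolve a true rate dR/dE (on E_grid, in keV, starting at the hardware
    threshold) with the Gaussian resolution to get the observable dS/dE."""
    dE = np.gradient(E_grid)
    s = sigma_keV(E_grid)
    gauss = lambda x: np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
    return np.array([np.sum(gauss((E - E_grid) / s) / s * dRdE * dE)
                     for E in E_obs])

E = np.linspace(0.15, 6.0, 2000)     # from E_HW = 0.15 keV upwards
true_rate = np.exp(-E / 0.3)         # toy, steeply falling spectrum
obs = smear(E, true_rate, np.array([1.0, 2.0]))
raw = np.exp(-np.array([1.0, 2.0]) / 0.3)
```

For this toy input the smeared rate at 1 and 2 keV exceeds the unsmeared one: sub-threshold events leak upwards into the observed window, which is exactly the effect described in the text.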
We have treated iodine as though it were a free atom, whereas, in fact, it is bound in the NaI solid. Only the outermost $5p$ orbitals are involved in binding. However, even after accounting for the detector resolution, the $5p$ (and $5s$) orbitals contribute negligibly, with the dominant contribution at $\sim$\,1--2\,keV coming from the inner $3s$ shell, which is very well described by atomic wavefunctions. Using this approach, we calculate the expected event rate and annual modulation amplitude for DAMA, as a function of the incident WIMP mass, assuming both an ultra-light and super-heavy mediator. Due to the very large enhancement in the expected event rate at smaller energies, the calculated modulation amplitude is a poor fit to the observed DAMA spectrum. In Fig.~\ref{fig:fitSpectrum}, we present the calculated spectrum alongside the DAMA/LIBRA-phase2 data~\cite{Bernabei2018}. For the coupling strength (parameterised in terms of $\sbe$), we have fitted the expected event rate to the observed DAMA modulation signal only for the lowest 1--2\,keV bin. Taking the higher energy bins into account can only increase the best-fit value for $\sbe$, so (as discussed below) this is the most conservative choice. \begin{figure*} \includegraphics[width=0.47\textwidth]{./img/fitDAMA-heavy-modified} \includegraphics[width=0.47\textwidth]{./img/fitDAMA-light-modified} \caption{ The black line shows the calculated value for $\sbe$ required to reproduce the observed DAMA modulation signal over the 1--2\,keV energy bin, as a function of the WIMP mass for the heavy (left) and ultra-light (right) mediator cases. The green shaded region shows the 90\% C.L. region for the fit, taking into account uncertainties stemming from the detector resolution, standard halo model, and atomic physics errors. The red and blue curves are the 90\% C.L.
exclusions from Ref.~\cite{Essig2017} (Essig {\em et al.}), derived from the $S2$ ionization signals from the XENON10 and XENON100 experiments, respectively. The ``DAMA-allowed'' regions are excluded for all relevant WIMP masses by these bounds. The fit for DAMA was performed by averaging over just the lowest energy bins without regard to the shape of the spectrum. Taking the higher energy bins into account pushes the DAMA region higher, strengthening this conclusion (see Fig.~\ref{fig:fitSpectrum}). } \label{fig:Constraints} \end{figure*} In Fig.~\ref{fig:Constraints} we plot the best-fit regions for the lowest energy DAMA/LIBRA-phase2 modulation signal, as a function of possible DM masses $m_\chi$ and coupling strengths $\sbe$. Despite the large enhancement in the expected event rate at the lower energies, and the conservative assumptions made for extracting the best-fit, the interpretation of the observed modulation amplitude in terms of electron-interacting dark matter is inconsistent with existing bounds. All regions of parameter space that could possibly explain the observed DAMA signal are excluded by constraints derived in Ref.~\cite{Essig2017}, using $S2$ ``ionisation-only'' results from the XENON10~\cite{Angle2011} and XENON100~\cite{XENON100-LowM-2016} experiments. Note the large uncertainties visible in the plots in Figs.~\ref{fig:fitSpectrum} and \ref{fig:Constraints}. The dominating source of error comes from the uncertainties in the detector response and energy resolution. Sizable errors also arise due to uncertainties in the standard halo model DM velocity distribution. Uncertainties coming from the atomic physics calculations are also included, but are negligible. The uncertainties in the detector resolution and DM velocities themselves are not so large ($\sim$\,10\%) -- but they lead to very large uncertainties (up to an order of magnitude) in the observable event rate. 
This is due to the very strong enhancement in the event rate at low energies, which makes the observed rate very sensitive to the detector cut-offs and energy resolution. Clearly, taking these uncertainties into account is crucial. \section{Prospects for liquid xenon detectors} In this section, we discuss the prospects for the detection of light ($\sim$\,GeV) WIMPs using dual-phase xenon time projection chambers. (We base our discussion here on XENON Collaboration detectors, see, e.g., Ref.~\cite{Aprile2012b}; similar principles apply for other experiments.) When a scattering event occurs in the liquid xenon bulk of such a detector, a prompt $S1$ scintillation signal is induced, which is proportional to the total energy deposited in the detector. Then, any ionised electrons are drifted upwards through the liquid/gas boundary (via an applied electric field), where a secondary scintillation signal ($S2$) that is proportional to the number of ionised electrons may be observed. Combining the $x,y$ spatial resolution of the top and bottom photodetectors with the $z$-resolution from the time between the $S1$ and $S2$ signals allows three-dimensional reconstruction of the event geometry. This allows for the ``fiducialization'' of the target material, where only scattering events occurring within the inner volume of the detector are included in the analysis. This is an important stage of background rejection, since charged particles are much more likely to scatter quickly, i.e.,~at the outer regions of the xenon chamber, whereas feebly interacting particles such as WIMPs are equally likely to scatter anywhere within the detector volume. Further, the relative strength of the $S1$ and $S2$ signals may be used to distinguish between nuclear and electronic scattering events. The combination of both the $S1$ and $S2$ signals is thus key to understanding the source of any scattering events.
Proposals to use the ionization-only ($S2$) signals to search for sub-GeV WIMPs have been made previously~\cite{Essig2012,EssigPRL2012}, and limits from $S2$ observations using the XENON10 and XENON100 experiments have been set~\cite{Essig2017} (as discussed in the previous section). It is worth noting that the best constraints actually come from the older XENON10 experiment (completed in 2011), despite its much smaller detector mass, older generation of detectors, and much smaller total exposure. This is due to the detection strategy of the modern experiments, which rely on the combination of $S1$ and $S2$ signals. The reason $S2$ ionisation-only signals were considered is that, for low-mass WIMPs, the typical energy deposited in the detectors is much smaller than the $\sim$\,few keV effective low-energy threshold for $S1$ signals. It was thus believed that the $S1$ scintillation signal produced from such events would be negligible. In this work, we demonstrate that, due to the large enhancement from lower energies and the finite detector resolution, the prompt $S1$ scintillation signal can be many times larger than otherwise expected, and that it can therefore be a promising WIMP direct detection observable. Thus, it would be possible to perform a low-mass WIMP search with modern liquid xenon detectors using the combined $S1$ and $S2$ signals. Detailed calculations of the observable $S2$ spectrum from low-mass WIMPs were presented recently in Ref.~\cite{Essig2017}; here we present calculations for the corresponding $S1$ signal. 
\begin{figure*} \includegraphics[width=0.47\textwidth]{./img/Xe100-S0-h} \includegraphics[width=0.47\textwidth]{./img/Xe100-S0-l} \caption{Hypothetically observable WIMP electron recoil event count expected for a 1 tonne$\cdot$year exposure of a liquid xenon detector (based on XENON100) using the prompt scintillation ($S1$) signal; ({\em left}) for a contact interaction (with $\sbe=10^{-38}\un{cm}^2$), ({\em right}) for a long-range interaction (with $\sbe=4\times10^{-35}\un{cm}^2$). The shaded blue regions show the uncertainties in the calculations. The $\sbe$ values are chosen to be below the present constraints, which are most stringent around $\sim0.1\un{GeV}$ (see Fig.~\ref{fig:Constraints}). Note that for larger masses, the constraints are less stringent, and larger event rates are not ruled out. } \label{fig:Xe100-fit} \end{figure*} \begin{figure} \includegraphics[width=0.48\textwidth]{./img/modFrac} \caption{ Expected fractional modulation amplitude [$(S^{\rm max} - S^{\rm min})/2S^{\rm avg}$] as a function of the WIMP mass, assuming a heavy mediator ($F_\chi=1$), for the prompt $S1$ signal in a XENON100-like detector, in the 1--, 2--, and 3--14 PE bins. A discussion of the features is given in the text. } \label{fig:modFrac} \end{figure} We calculate the potentially observable $S1$ (prompt scintillation signal) event rate and modulation amplitude for a hypothetical future liquid xenon detector. We model this detector after that of XENON100, and follow Ref.~\cite{AprileAxion2014} for the conversion from the energy deposition to the observable photoelectron (PE) count (see also Ref.~\cite{AprileAP2014}). In this case, the relevant quantity is a counted rate as a function of observable photoelectrons, denoted $s1$. The calculated event rate for the production of $n$ photoelectrons is obtained by applying Poisson smearing to the calculated differential rate \cite{AprileAxion2014}. 
We do this according to a Poisson distribution, $P_n(N) = e^{-N}({N^n}/{n!})$, where $N=N(E)$ is the expected (average) number of photoelectrons produced for a given energy deposition $E$, and $n$ is the actual number of photoelectrons produced. The relation between the deposited energy (electron recoil energy) and the produced number of photoelectrons is given in Fig.~2 of Ref.~\cite{AprileAxion2014}. We model this as a power law: $N(E) = a E^b$, with $a=1.00(25)$ and $b=1.53(10)$, which give the best fit at the lower energies relevant to this work, accounting for the uncertainties from Ref.~\cite{AprileAxion2014}. Further, to account for the photomultiplier tube (PMT) detector resolution, we convolve the calculated rate with a Gaussian of standard deviation $\sigma = \sigma_{\rm PMT}\sqrt{n}$, with $\sigma_{\rm PMT}=0.5\,$PE \cite{AprileAP2014}. We do not include uncertainty contributions from the PMT resolution; we have checked that the error in the $N(E)$ conversion is by far the dominant source of uncertainty in this step. Finally, the detection acceptance is taken into account as $\epsilon(s1) = 0.88(1-e^{-s1/3})$ \cite{AprileAxion2014}, though we note that this has an insignificant impact on the results. The final expression for the observable event rate, $S$, as a function of counted PEs $s1$ is \begin{equation} \frac{\ensuremath{{\rm d}} S}{\ensuremath{{\rm d}} s1} = \epsilon(s1) \sum_n g_\sigma(n-s1)\int_0^\infty \frac{\ensuremath{{\rm d}} R(E)}{\ensuremath{{\rm d}} E} P_n(N) \, \ensuremath{{\rm d}} E. \end{equation} We calculate potential event rates, assuming a value for $\sbe$ that is not excluded by current experiments, for a one tonne-year exposure in Fig.~\ref{fig:Xe100-fit} as a function of the WIMP mass, for both a contact and a long-range interaction. We show the rate integrated between $3$ and 14 PE, as in Ref.~\cite{XENONcollab2015} (see also Refs.~\cite{Xe100er2017,XENONcollab2015a}). 
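The smearing chain just described can be sketched numerically. In the snippet below, the differential rate $dR/dE$ is a steeply falling toy stand-in for the calculated WIMP spectrum (purely illustrative, not the actual calculation); the yield parameters, PMT resolution, and acceptance form are the values quoted above.

```python
import numpy as np
from math import factorial

# Toy stand-in for the calculated differential rate dR/dE (arbitrary units / keV);
# the steep fall-off mimics the strong low-energy enhancement discussed in the text.
def dRdE(E):
    return 1.0e3 * np.exp(-E / 0.5)

a, b = 1.00, 1.53        # photoelectron yield N(E) = a E^b (best-fit values quoted above)
sigma_pmt = 0.5          # PMT resolution per photoelectron, in PE

def dS_ds1(s1, n_max=40):
    """Observable rate dS/ds1: Poisson smearing over the produced PE number n,
    Gaussian PMT response of width sigma_pmt*sqrt(n), and the acceptance."""
    E = np.linspace(1e-3, 10.0, 4000)            # deposited-energy grid (keV)
    dE = E[1] - E[0]
    N = a * E**b                                 # expected PE count at each energy
    rate = np.zeros_like(s1)
    for n in range(1, n_max + 1):
        # events producing exactly n PEs, integrated over the energy spectrum
        Pn = np.exp(-N) * N**n / factorial(n)
        R_n = np.sum(dRdE(E) * Pn) * dE
        sig = sigma_pmt * np.sqrt(n)             # PMT smearing width for n PEs
        gauss = np.exp(-(s1 - n) ** 2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))
        rate += R_n * gauss
    return 0.88 * (1 - np.exp(-s1 / 3)) * rate   # detection acceptance

s1 = np.linspace(0.5, 14.0, 200)
spectrum = dS_ds1(s1)                            # strongly peaked at low s1
```

With the falling toy spectrum, the resulting PE distribution is dominated by the lowest $s1$ values, as in the discussion of the integrated rates.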
This roughly corresponds to the $2-6$\,keV energy window. The rate is strongly dominated by the lowest PE contributions, so the exact position of the upper PE cut is unimportant. We also present the expected rates for the ranges including 1 and 2 PE. The larger and less well understood backgrounds at these lower energies make such measurements more difficult to interpret. However, the greatly enhanced event rate, and large annual modulation amplitudes, may make these regions interesting for future experiments. In Fig.~\ref{fig:modFrac}, we show the expected annual modulation fraction for the same type of experiment. Due to the strong velocity dependence of the cross-section, the fractional modulation amplitude is large. For example, for a $\sim$\,$0.1\,$GeV WIMP, where the event rate may be expected to be high, it is $\sim$\,15-20\%. The peaks in the annual modulation curves (Fig.~\ref{fig:modFrac}) at around 0.04 and 1 GeV are due to the opening of the $n=4$ and $n=3$ shells in Xe, respectively. Electrons may only become ionised if their binding energies are lower than the maximum kinetic energy of the incident WIMPs: \begin{equation} I_{njl}<K_{\rm max} \simeq \frac{1}{2}m_\chi v_{\rm max}^2 \sim 4 \left(\frac{m_\chi c^2}{{\rm GeV}}\right) {\rm keV}. \end{equation} The ionisation rate for shells with energies close to this number (i.e.,\ that are ``only just'' accessible) will be sensitive to small changes in the velocity distribution. For Xe, these occur for the $n=3$ shell just above $\sim$\,$1\un{keV}$, and the $n=4$ shell just below $\sim$\,$0.1\un{keV}$ (see Fig.~\ref{fig:dsvde}). \section{Conclusion} We have calculated the expected event rate for atomic ionization by $\sim$\,GeV scale WIMPs that scatter off atomic electrons, relevant to the DAMA/LIBRA direct detection experiment. 
Though the calculated event rate and annual modulation amplitude are much larger than might be expected, we show that such WIMP models cannot explain the observed DAMA modulation signal without conflicting with existing bounds, even when just the lowest energy $1-2$\,keV bin is fitted. Taking higher bins into account strengthens this conclusion. Further, we demonstrate explicitly the importance of treating the electron wavefunctions correctly, and note that the expected event rates are extremely sensitive to the detector resolution, the low-energy performance, and the assumed dark matter velocity distribution. Uncertainties in these quantities lead to large uncertainties in the calculated rates, and therefore must be taken into account. Finally, we calculate the potentially observable event rate for the prompt scintillation signal of future liquid xenon detectors. Large event rates would be expected for dark matter parameters that are not excluded by current experimental bounds, making this an important avenue for potential future discovery. \acknowledgements BMR gratefully acknowledges financial support from Labex FIRST-TF. This work was also supported by the Australian Research Council and the Gutenberg Research College fellowship.
\section*{Acknowledgements} We thank F. Sfigakis and L. W. Smith for useful discussions. This work was supported by EPSRC, U.K. \section*{Methods} This work utilized a GaAs/Al$_{0.33}$Ga$_{0.67}$As heterostructure in which the two-dimensional electron gas is 96~nm below the surface, with a low-temperature mobility of $3.97 \times 10^6$~cm$^2$/Vs and a carrier density of $3.37 \times 10^{11}$~cm$^{-2}$. Split-gate devices with a width of 0.8~$\mu$m and lengths from 0.3~$\mu$m to 1~$\mu$m were fabricated and measured, and all exhibited similar characteristics. The differential conductance $G_{ac}=dI/dV_{sd}$ was measured using a small ac excitation voltage of $5~\mu$V, and the dc conductance $G_{dc}=I/V_{sd}$ using a larger source-drain dc bias ($V_{sd}$); the two were recorded simultaneously in a dilution refrigerator.
\section{Introduction} At present, experiments are at the edge of dealing with thermal machines where quantum mechanics should be relevant; therefore, issues such as the manifestations of non-classical features in their behavior and a proper thermodynamic formulation for quantum machines are currently being investigated~\cite{KoslovPRX}. For various systems of interest, the interaction energy between the system and the environment can be neglected compared to the energy of the system and the energy of the bath. In this case, a thermodynamic framework referred to as ``in the weak coupling'' is very successful~\cite{koslov-entropy,BrandnerSeifert}. In particular, work is performed on the system by externally varying a control parameter of the system Hamiltonian, for instance, by changing a field that raises an energy level of the system. For other systems of interest, one can engineer the coupling between the system and external probes, making it global or local and switching it on and off in a controlled manner; see, e.g.,~\cite{ion-trap}. In this paper, we consider such a setup, and with the purpose of studying thermodynamic processes, we consider the simple situation in which the external probe is prepared in a thermal state. Thus, we shall refer to the probe as the bath, even though we are not assuming that the probe is macroscopic. The experimenter controls the coupling at some work cost, and a certain amount of heat will also flow between the system and the bath. These thermodynamic quantities are completely determined by the coupling energy in this case. An appropriate formalism for analyzing these quantum evolutions and the thermodynamic behavior is that of completely positive trace-preserving (CPTP) maps~\cite{MHPPRE2015, Goold}. In particular, one would like to account for the coupling between the system and the bath non-perturbatively. 
The thermodynamic properties of systems strongly coupled to a bath are not well understood, although they have been considered in~\cite{seifert-strong,jarzynski-strong,esposito-NJP,anders-strong}. We study quantum stochastic thermodynamics as formulated in~\cite{esposito-NJP,Reeb-Wolf,HPNJP2013}, where one considers that the system plus environment evolve unitarily during the process from an uncorrelated initial state; therefore, the change in the system due to this process is given by a CPTP map. In this formulation, the strength of the coupling is arbitrary, but the thermodynamic quantities, such as the work performed on the system, the total heat exchange between the system and the bath and the total entropy production, are non-local quantities that are expressed in terms of the total system-bath density matrix at the beginning and end of the process. In the limit where the strength of the coupling vanishes, one recovers the expressions obtained in the weak coupling~\cite{BreuerBook,HSJMP1978,SLACP1978,Alicki}, which are local, i.e., they depend only on system operators. In our study, we go beyond the analysis of averaged quantities and also consider their stochastic versions defined using a two-point measurement scheme~\cite{esposito-mukamel-RMP}. This stochastic thermodynamics relies on the concept of stochastic trajectory. The thermodynamic quantities are defined for every trajectory in such a way that upon averaging, the standard definitions for the mean quantities are recovered. 
Fluctuation theorems reveal the statistical properties of these quantities for arbitrary non-equilibrium processes~\cite{Gallavotti-cohen, KurchanJPA, LebSphon, Crooks1, Gasp, SeifertPRL05,experimentales}, and they have been established for classical Hamiltonians or stochastic systems~\cite{Gallavotti-cohen,LebSphon}, isolated quantum systems~\cite{Kurchan, Tasaki, Gaspard, Campisi} and for CPTP maps~\cite{JHPRE2012, HPNJP2013,MHPPRE2015} representing a system driven by a time-dependent protocol. Here, we derive these equalities but emphasize a process driven by a controlled interaction with the bath. A process may involve several interactions with the bath~\cite{anders-giovannetti}; thus, a sequence of different maps may act on the system. In particular, repeating the same sequence, i.e., a periodic driving, may bring the system to a stationary or invariant state. In general, an invariant state of a CPTP map represents a non-equilibrium steady state (NESS) in the sense that the entropy production due to the action of the map on the state is strictly positive. We call such maps {\it maps with NESS}. We define {\it maps with equilibrium} as maps in which the entropy production associated with the action of the map on the invariant state vanishes. \textit{Thermal maps} are a subgroup of maps with equilibrium, and they have the canonical thermal state of the system as an invariant state. We will provide further details later. From the perspective of thermodynamics, maps with equilibrium have an interesting property in that the thermodynamic quantities depend only on the system variables, even in the strong-coupling regime. These maps generalize thermal maps, and they are related to the existence of conserved quantities. We obtain analytical expressions for the thermodynamic quantities and, in particular situations, for the stochastic quantities. 
We illustrate the relevance of our results in spin 1/2 systems under the evolution of a thermal map, a non-thermal map with equilibrium and maps with NESS. Lindblad master equations are an important tool for studying open quantum systems. Systems that are weakly and passively coupled to a heat bath are described by a Lindblad equation with Lindblad operators, which are eigenoperators of the system Hamiltonian~\cite{BreuerBook}. In this case, the evolution satisfies the condition of quantum detailed balance~\cite{HSJMP1978,SLACP1978} with respect to the Gibbs thermal state, which allows a consistent thermodynamics formulation~\cite{Alicki}. Since any Lindblad equation preserves the positivity and the trace of the density matrix, it is natural to attempt to extend the previous scenario to other choices for the Lindblad operators. For instance, in the so-called boundary-driven Lindblad equations~\cite{prosenXY,Wichterich07}, the dissipator acts on the boundaries of the open system with Lindblad operators that are not eigenoperators of the system Hamiltonian. If one is interested in thermodynamic processes in these systems, as in ~\cite{linden,AB2013,Shigeru,Bojan,norditaUs}, one has to be careful with the definitions of quantities such as work, heat and entropy production~\cite{kosloff} because one cannot infer them from sole knowledge of the Lindblad equation. Since Lindblad master equations can be obtained from the repeated concatenation of CPTP maps in a particular limit~\cite{Attal,Karevski}, our results provide a proper thermodynamic description of the processes described by the corresponding Lindblad dynamics. We show that maps with and without equilibrium translate into Lindblad equations with and without quantum detailed balance, respectively. 
Thermal maps generate a Lindblad dynamics with quantum detailed balance with respect to the Gibbs thermal state, whereas a generic map with equilibrium generates a Lindblad dynamics with quantum detailed balance with respect to an equilibrium state that may not be the Gibbs state. Maps with NESS generate Lindblad equations without quantum detailed balance. The consequences for the thermodynamic quantities are discussed, and we particularly emphasize the work that accompanies a process whose dynamics is generated by a time-independent Lindblad master equation with a non-Gibbsian equilibrium state or with a NESS. The remainder of this article is organized as follows. In section \ref{traj.map.sec}, we introduce stochastic thermodynamics for CPTP maps, we define CPTP maps with equilibrium, and we study their main properties. Then, we prove a fluctuation theorem for the entropy production and a work fluctuation theorem similar to the Crooks fluctuation theorem~\cite{Crooks1} for CPTP maps with and without equilibrium. In section \ref{sec.aps}, we apply our results to spin 1/2 systems. Subsequently, in section \ref{sec: Lindblad}, we consider the limit in which the concatenation of maps provides a Lindblad dynamics and discuss the consequences for the thermodynamic properties of the systems. We conclude this article in section \ref{secCONC}. \section{Stochastic thermodynamics for completely positive trace-preserving maps} \label{traj.map.sec} In this section, we present the main results of stochastic thermodynamics for an open quantum system whose dynamics is controlled by a CPTP map. Consider a system and a bath that have a joint evolution governed by the unitary $U=\mathcal{T}e^{-i\int_0^\tau(H_S(t)+H_B+V(t))dt}$ ($\hbar =1$ throughout the text), where $\mathcal T$ is the time ordering operation. The Hamiltonian $H_B$ of the heat bath is constant in time. 
The coupling between the system and the bath is represented by an interaction energy $V(t)$ that vanishes for $t<0$ and $t>\tau$. During the joint evolution, the system can be driven in a cycle, i.e., its Hamiltonian may be time dependent $H_S(t)$, but $H_S(0)=H_S(\tau)=H_S$. We consider this condition because we are interested in systems that have an invariant state, and an arbitrarily driven system will generally not have such a state. Later, we will consider examples with constant Hamiltonians $H_S$. We assume that $H_B$ and $H_S$ are non-degenerate. The eigenstates of the system are in the Hilbert space ${\mathcal H}_S$, and those of the heat bath are in ${\mathcal H}_B$. Initially, at time $t=0$, the system and bath are uncoupled, i.e., their density matrix is the tensor product of the respective density matrices $\rho_{\rm tot}=\rho_S\otimes \omega_\beta(H_B)$, where $\omega_\beta(H_B)=\frac{e^{-\beta H_B}}{Z_B}$ is the canonical thermal state of the bath with $\beta=T^{-1}$ ($k_B=1$), the inverse temperature of the bath, and $Z_B={\rm Tr} \,e^{-\beta H_B}.$ After a lapse of time $\tau$ in which the system and bath are coupled, the initial state $\rho_{\rm tot}$ in the product Hilbert space ${\mathcal H}_S\otimes {\mathcal H}_B$ changes to a new state, \begin{equation} \label{unitary} \rho'_{\rm tot}=U \left(\rho_S\otimes\omega_\beta(H_B)\right) U^\dag. \end{equation} In the following, we denote $\rho_S'={\rm Tr}_B\rho_{\rm tot}'$ and $\rho_B'={\rm Tr}_S\rho_{\rm tot}'$, where ${\rm Tr}_X$ is the partial trace over subsystem $X$. By tracing out the bath, one obtains a completely positive trace-preserving (CPTP) map $\mathcal{E}$ for the system evolution \begin{equation} \rho'_S=\mathcal{E}(\rho_S)={\rm Tr}_B\left[U \left(\rho_S\otimes\omega_\beta(H_B)\right) U^\dag\right]=\sum_{ij} M_{ij}\rho_S M_{ij}^\dag \label{CPTP} \end{equation} with Kraus operators \begin{equation} M_{ij}=\sqrt{\frac{e^{-\beta \varepsilon_i}}{Z_B}}\langle j|U|i\rangle. 
\label{kraussHeat} \end{equation} Here, $|i\rangle$ and $|j\rangle$ are the eigenstates of the bath Hamiltonian $H_B$ with eigenvalues $\varepsilon_i$ and $\varepsilon_j$, respectively. Note that as required for the trace preservation of $\mathcal{E}$ \[ \sum_{ij}M^\dag_{ij}M_{ij}=\sum_{i}\frac{e^{-\beta \varepsilon_i}}{Z_B}\langle i|U^\dag\sum_j|j\rangle\langle j|U|i\rangle=\sum_{i}\frac{e^{-\beta \varepsilon_i}}{Z_B}\langle i|I_S\otimes I_B|i\rangle=I_S. \] As is well known, there are many choices for the set of Kraus operators $M_{ij}$ that produce the same map $\mathcal{E}$. This particular representation provides a relation between the evolution of the system and the changes in the bath. The quantum operation $M_{ij}\rho_S M_{ij}^\dag$ provides the change in the system associated with the transition $|i\rangle\to|j\rangle$ in the bath. For a single map, we define and study the relevant quantities for stochastic thermodynamics. This will be generalized later for sequences of maps. \subsection{Quantum trajectories} In the context of stochastic thermodynamics, one introduces the fluctuating quantities as the result of a two-point measurement~\cite{esposito-mukamel-RMP}. These measurements induce a probability distribution for the possible outputs of the thermodynamic quantities characterizing the process associated to $\mathcal E$. Initially, with the system in an arbitrary state $\rho_S$, a non-selective projective measurement of a non-degenerate system operator $A$ is performed, leaving the system in the state $\bar{\rho}_S=\sum p_i(n) |a_{n}\rangle\langle a_{n}|$ with $p_i(n)={\rm Tr}_S(\ket{a_n}\bra{a_n}\rho_S)$. The energy $H_B$ is measured on the bath that was initially in a thermal state, leaving it in the state $\ket{i}\bra{i}$ with probability $e^{-\beta \epsilon_i}/Z_B$. The uncorrelated state $\bar{\rho}_S\otimes\ket{i}\bra{i}$ evolves unitarily, and at the end of the process, the bath Hamiltonian $H_B$ is measured again. 
The bath is then left in state $\ket{j}\bra{j}$. The pair $k=(ij)$ identifies a Kraus operator $M_{ij}$, and one can observe that the probability of measuring the bath energies $\varepsilon_i$ initially and $\varepsilon_j$ finally is given by $p_k(\bar\rho_S)={\rm Tr}[\mathcal{E}_k(\bar\rho_S)]$, where $\mathcal{E}_k(\cdot)\equiv M_k\cdot M_k^\dag$. Thus, if one is concerned about the system, one can say that during its evolution, one of the operations $\mathcal{E}_k(\cdot)$ occurs with probability $p_k(\bar\rho_S)$, but if no register is kept of the value $k$, then the final state is $\rho'_S=\sum p_k(\bar\rho_S) \rho'_k=\mathcal{E}(\bar\rho_S)$ with $\rho'_k=\mathcal{E}_k(\bar\rho_S)/p_k(\bar\rho_S)$. Additionally, at the end of the process, a non-selective projective measurement of another non-degenerate system operator $B$ is performed, and finally, the system is in state $\bar\rho_S'=\sum p_f(n)|b_{n}\rangle\langle b_{n}|$ with $p_f(n)={\rm Tr}[\mathcal{E}(\bar\rho_S) |b_{n}\rangle\langle b_{n}|]$. A {\it trajectory} is defined as the sequence of values $a_n,k,b_m$ and is denoted by $\gamma=\{n,k,m\}$. Its probability $p(\gamma)=p(m,k|n)p_i(n)$ is the probability of measuring $b_m$ after the operation $\mathcal{E}_k$ occurs, given that the system was initially in the state $|a_{n}\rangle\langle a_{n}|$, times the probability of the latter, i.e., \begin{equation} p(\gamma)=|\langle b_m|M_k|a_n\rangle|^2 p_i(n). \label{prob1} \end{equation} Note that $\sum_{k,n}p(m,k|n)p_i(n)=p_f(m)$. The probability $p(\gamma)$ that we obtained with these non-selective measurements corresponds to the probability one would compute in practice by repeating many times the (identically prepared) experiment with selective measurements. 
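To make these constructions concrete, they can be checked numerically on the smallest possible example: a qubit system coupled to a qubit ``bath'' through a random joint unitary. The sketch below (with arbitrary toy values for the bath energies, temperature, initial populations, and measurement basis) extracts the Kraus operators $M_{ij}$, verifies trace preservation, and checks that the trajectory probabilities $p(\gamma)$ sum to one.

```python
import numpy as np

rng = np.random.default_rng(0)
dS, dB = 2, 2                                  # qubit system and qubit "bath" (toy sizes)
beta = 1.0
eps = np.array([0.0, 1.0])                     # bath eigen-energies (arbitrary)
w = np.exp(-beta * eps); w /= w.sum()          # thermal weights e^{-beta eps_i} / Z_B

# Random joint unitary U on H_S (x) H_B, built from a QR decomposition
A = rng.normal(size=(dS*dB, dS*dB)) + 1j * rng.normal(size=(dS*dB, dS*dB))
U, _ = np.linalg.qr(A)

# Kraus operators M_ij = sqrt(e^{-beta eps_i}/Z_B) <j|U|i>, acting on the system.
# With basis ordering |s> (x) |b>, U.reshape gives indices (s', j, s, i).
Ur = U.reshape(dS, dB, dS, dB)
Ms = [np.sqrt(w[i]) * Ur[:, j, :, i] for i in range(dB) for j in range(dB)]

# Trace preservation: sum_k M_k^dag M_k = I_S
completeness = sum(M.conj().T @ M for M in Ms)

# Trajectory probabilities p(gamma) = |<b_m| M_k |a_n>|^2 p_i(n),
# taking the computational basis as the measured system basis (a choice).
p_init = np.array([0.6, 0.4])                  # p_i(n) after the first measurement
e = np.eye(dS)
p_gamma = np.array([abs(e[m] @ M @ e[n])**2 * p_init[n]
                    for n in range(dS) for M in Ms for m in range(dS)])
total = p_gamma.sum()                          # trajectories exhaust all outcomes
```

Because $\sum_k M_k^\dag M_k = I_S$, the probabilities over all trajectories $\gamma=\{n,k,m\}$ automatically normalise to one.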
Explicitly, with the expression in Eq.~(\ref{kraussHeat}) for the Kraus operators, the probability of a trajectory $\gamma=\{n,k,m\}=\{n,(ij),m\}=\{n,i;m,j\}$ is, according to Eq.~(\ref{prob1}), \begin{equation} p(\gamma)=|\langle j,b_m|U|i,a_n\rangle|^2 \frac{e^{-\beta \varepsilon_i}}{Z_B}p_i(n). \label{psm1} \end{equation} \subsection{Stochastic thermodynamics} \label{stoch.term.sec} In the previous subsection, we obtained the probability $p(\gamma)$ of the quantum trajectory $\gamma$ for a quantum system interacting with a bath. We now associate stochastic thermodynamic quantities with these trajectories. If one measures the energy at the beginning and at the end of the process, i.e., $A=B=H_S$, we have the stochastic system energy change $\Delta e_\gamma=\epsilon_m-\epsilon_n$. Here, $\epsilon_n$ denotes the eigenvalue of $H_S$ associated with the eigenvector $\ket{\epsilon_n}$, and in this case, $\ket{a_n}=\ket{b_n}=\ket{\epsilon_n}$. The stochastic heat flow to the system $q_\gamma$ associated with the trajectory $\gamma$ corresponds to the negative of the energy change of the bath, and we assume that it is obtained by knowing $k$. Indeed, the trajectory $\gamma=\{n,k,m\}=\{n,i;m,j\}$, with probability given in Eq.~(\ref{psm1}), represents the transition $|i\rangle\to|j\rangle$ in the bath whose energy change $\varepsilon_j-\varepsilon_i$ is minus the stochastic heat flow to the system, i.e., $q_\gamma=\varepsilon_i-\varepsilon_j$. According to the first law of stochastic thermodynamics, the stochastic work is given by \begin{equation} w_\gamma=\Delta e_\gamma-q_\gamma. \label{w-gamma} \end{equation} These fluctuating quantities will be studied through their distribution, for instance, for heat and work \begin{equation} p(q)=\sum_\gamma \delta(q-q_\gamma)p(\gamma),\quad p(w)=\sum_\gamma \delta(w-w_\gamma)p(\gamma). 
\label{p(w)} \end{equation} By defining the averages over the trajectories $\Delta E=\sum_\gamma \Delta e_\gamma p(\gamma)$, $Q=\sum_\gamma q_\gamma p(\gamma)$ and $W=\sum_\gamma w_\gamma p(\gamma)$, one obtains \begin{equation} \Delta E={\rm Tr}[H_S(\bar\rho'_S-\bar\rho_S)]={\rm Tr}[H_S(\rho'_S-\rho_S)] \label{Av.Energy} \end{equation} \begin{equation} Q={\rm Tr}[H_B(\omega_\beta(H_B)-\rho'_B)] \label{Av.Heat} \end{equation} and \begin{equation} W={\rm Tr}[(H_S+H_B)(\rho'_{\rm tot}-\rho_{\rm tot})], \label{Av.Work} \end{equation} satisfying the first law $\Delta E=W+Q$. Although measuring density matrices is highly non-trivial from an experimental perspective, if one measures $A=\rho_S$ and $B=\rho'_S$, given in Eq.~(\ref{CPTP}), one obtains the stochastic entropy change $\Delta s_\gamma=-\ln p_f(m)+\ln p_i(n)$. Note that in this case, $\bar\rho_S=\rho_S$ and $\bar\rho_S'=\rho_S'$. As the environment consists of a heat bath with inverse temperature $\beta$, the stochastic entropy flow $\beta q_\gamma$ and the entropy change $\Delta s_\gamma$ define the stochastic entropy production \begin{equation} \Delta_i s_\gamma=\Delta s_\gamma-\beta q_\gamma. \label{epgamma} \end{equation} This fluctuating quantity will be studied through the entropy production distribution \begin{equation} p(\Delta_is)=\sum_\gamma \delta(\Delta_is-\Delta_is_\gamma)p(\gamma). \label{p(si)} \end{equation} These definitions are such that upon averaging over the trajectories, one obtains $\Delta_i S\equiv\sum_\gamma \Delta_i s_\gamma p(\gamma)=\Delta S-\beta Q$, with \begin{equation} \Delta S=-{\rm Tr}[\rho_S'\ln\rho_S']+{\rm Tr}[\rho_S\ln\rho_S] \label{Av.Ent} \end{equation} the von Neumann entropy change and $Q$ given in Eq.~(\ref{Av.Heat}). 
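As a consistency check on these averages, the sketch below (a toy qubit-qubit model with arbitrary energies, initial state, and a random joint unitary) verifies numerically that the first law $\Delta E = W + Q$ holds exactly, and computes the entropy production $\Delta_i S = \Delta S - \beta Q$, which comes out non-negative.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0
HS = np.diag([0.0, 1.0])                       # toy system Hamiltonian
HB = np.diag([0.0, 1.5])                       # toy bath Hamiltonian
wB = np.diag(np.exp(-beta * np.diag(HB)))
wB /= np.trace(wB)                             # bath thermal state omega_beta(H_B)
rhoS = np.diag([0.8, 0.2])                     # initial system state (arbitrary)

A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                         # random joint unitary

rho_tot = np.kron(rhoS, wB)
rho_tot_p = U @ rho_tot @ U.conj().T           # rho'_tot = U (rhoS (x) wB) U^dag

trB = lambda r: np.einsum('abcb->ac', r.reshape(2, 2, 2, 2))   # partial trace over bath
trS = lambda r: np.einsum('abad->bd', r.reshape(2, 2, 2, 2))   # partial trace over system
rhoS_p, rhoB_p = trB(rho_tot_p), trS(rho_tot_p)

dE = np.trace(HS @ (rhoS_p - rhoS)).real                 # system energy change
Q = np.trace(HB @ (wB - rhoB_p)).real                    # heat flow to the system
H_tot = np.kron(HS, np.eye(2)) + np.kron(np.eye(2), HB)
W = np.trace(H_tot @ (rho_tot_p - rho_tot)).real         # work on system plus bath

def S_vn(r):                                   # von Neumann entropy
    ev = np.linalg.eigvalsh(r)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

dS_sys = S_vn(rhoS_p) - S_vn(rhoS)
Sigma = dS_sys - beta * Q                      # entropy production, non-negative
```

The first-law identity holds to machine precision because $W$ is defined through the total Hamiltonian $H_S+H_B$, exactly as in the averaged expressions above.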
The averaged entropy production $\Delta_iS$ can be expressed as \begin{equation} \Delta_iS=D(\rho_{\rm tot}'||\rho_S'\otimes \omega_\beta(H_B))\geq 0, \label{Av.Ent.Prod} \end{equation} where $D(a||b)=\Tr[a\ln a] - \Tr[a \ln b].$ These averaged expressions are valid beyond the two-point measurement scheme that we consider here and were first obtained in~\cite{esposito-NJP}. Therefore, we consider that for a process $\rho_S\to\rho_S'={\mathcal E}(\rho_S)$, the averages simultaneously satisfy $\Delta E=W+Q$ and $\Delta_i S=\Delta S-\beta Q$ with the quantities given in Eqs.~(\ref{Av.Energy},~\ref{Av.Heat},~\ref{Av.Work},~\ref{Av.Ent}, and~\ref{Av.Ent.Prod}), even though the fluctuations of the entropy production and the work can be studied simultaneously only if $[H_S,\rho_S]=[H_S,\rho_S']=0$. Note that for their evaluation, particularly for the work, Eq.~(\ref{Av.Work}), and entropy production, Eq.~(\ref{Av.Ent.Prod}), we need to know the full state $\rho_{\rm tot}'$. In contrast, in the weak-coupling limit, where $V(t)$ can be neglected in comparison to $H_S$ and $H_B$, the thermodynamic quantities depend only on the states $\rho_S'$ and $\rho_S$ of the system of interest. We will subsequently show that this simplification can occur for the strongly coupled systems defined below. \subsection{ Maps with thermodynamic equilibrium} \label{sec.map.eq.} Let us assume that the map ${\mathcal E}$ has an attractive invariant state $\pi$ defined as $\lim_{N\to\infty}{\mathcal E}^N(\rho_S)=\pi$ $\forall \rho_S$ and $\pi={\mathcal E}(\pi)$. An invariant state is thermodynamically characterized by $\Delta S=0=\Delta E$, as shown in Eq.~(\ref{Av.Energy}) and Eq.~(\ref{Av.Ent}). We will say that this invariant state is an {\it equilibrium state} if $\Delta_iS=0$, i.e., if the entropy production, Eq.~(\ref{Av.Ent.Prod}), vanishes by the action of $\mathcal E$ on $\pi$. Maps with these special states are called maps with equilibrium. 
If the entropy produced by the action of the map ${\mathcal E}$ on $\pi$ provides $\Delta_iS>0$, then we say that the invariant state is a non-equilibrium steady state. In this case, one obtains $Q=-\beta^{-1}\Delta_iS<0$ and $W=\beta^{-1}\Delta_iS>0$. This means that the non-equilibrium steady state is sustained by the work performed by an external agent implementing the map on the system, which is dissipated as heat. In this situation, we say that ${\mathcal E}$ is a map with NESS. According to Eq.~(\ref{Av.Ent.Prod}), $\Delta_iS=0$ for the steady state $\pi$ if and only if $\pi\otimes\omega_\beta(H_B)=U \left(\pi\otimes\omega_\beta(H_B)\right) U^\dag$. Equivalently, if the unitary $U$ in Eq.~(\ref{unitary}) satisfies $[U,H_0+H_B]=0$, where $H_0$ is an operator in the Hilbert space of the system ${\mathcal H}_S$, then the product state $\omega_\beta(H_0)\otimes\omega_\beta(H_B)$, with $\omega_\beta(H_0)=\frac{e^{-\beta H_0}}{Z_0}$, where $Z_0={\rm Tr}[e^{-\beta H_0}]$, is invariant under the unitary evolution in Eq.~(\ref{unitary}) and $\omega_\beta(H_0)$ is an equilibrium state for the map in Eq.~(\ref{CPTP}). It follows from $[U,H_0+H_B]=0$ that \begin{equation} Q={\rm Tr}[H_B(\omega_\beta(H_B)-\rho_B')]={\rm Tr}[H_0(\rho_S'-\rho_S)]. \label{Eq.prop} \end{equation} Thus, for maps with equilibrium, the average work, Eq.~(\ref{Av.Work}), simplifies to \begin{equation} W={\rm Tr}_S[(H_S-H_0)(\rho_S'-\rho_S)] \label{Av.Work.Eq} \end{equation} which is determined by the system state only. When the map has an equilibrium state $\omega_\beta(H_0)$, the entropy production also reduces to an expression that does not involve the state of the bath. 
Indeed, from $\Delta_i S=\Delta S-\beta Q$ with Eq.~(\ref{Eq.prop}), we obtain $\Delta_iS={\rm Tr}[\rho_{S}\ln \rho_{S}]-{\rm Tr}[\rho_{S}'\ln \rho_{S}']-{\rm Tr}[(\rho_S-\rho_S')\ln \omega_\beta(H_0)]$, which can be rearranged into \begin{equation} \Delta_iS =D(\rho_S||\omega_\beta(H_0))-D(\rho_S'||\omega_\beta(H_0)), \label{epthermal} \end{equation} which is positive due to the contracting character of the map~\cite{BreuerBook}. Note that Eqs.~(\ref{Eq.prop}), (\ref{Av.Work.Eq}) and (\ref{epthermal}) are exact for maps with an equilibrium state and do not require a weak-coupling ($V(t)$ small) condition to be satisfied. Let us consider in further detail two particular cases of interest: {\it i) Stochastic thermodynamics of thermal maps:} If $H_0=H_S$, then the map is called thermal~\cite{terry1,terry2,terry3,Oppenheim}. The equilibrium state of thermal maps is the Gibbs thermal state $\omega_\beta(H_S)=e^{-\beta H_S}/Z_S$ with $Z_S={\rm Tr}[e^{-\beta H_S}]$. For thermal maps, the average work vanishes for every initial state $\rho_S$, as follows from Eq.~(\ref{Av.Work.Eq}). If a thermal map brings the system to the equilibrium Gibbs state, then the entropy production Eq.~(\ref{epthermal}) reduces to the well-known expression~\cite{DeffnerPRL} $\Delta_iS=D(\rho_S||\omega_\beta(H_S)).$ This is the dissipation that occurs in the relaxation process $\rho_S\to\omega_\beta(H_S).$ Let us now discuss the work fluctuations. Consider an energy measurement $H_S$ at the beginning and at the end of a process realized by a thermal map. From $[U,H_S+H_B]=0$, it follows that if the transition is possible ($p(\gamma)\sim|\langle \epsilon_m j|U|\epsilon_n i\rangle|^2\neq 0$), then it conserves the energy $\epsilon_m+\varepsilon_j=\epsilon_n+\varepsilon_i$. Therefore, for a thermal map, Eq.~(\ref{w-gamma}) provides $w_\gamma=0$ for any trajectory $\gamma$ with $p(\gamma)\neq 0$ and for any initial state $\rho_S$. 
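A minimal concrete example of a thermal map is a partial swap between two resonant qubits: the joint unitary rotates only the degenerate subspace $\{|01\rangle,|10\rangle\}$ of $H_S+H_B$, so $[U,H_S+H_B]=0$ and $H_0=H_S$. The sketch below (with arbitrary temperature and coupling angle) checks that the Gibbs state is invariant and that the entropy production of one step equals the decrease of the relative entropy to the Gibbs state, as in the expression above.

```python
import numpy as np

beta, theta = 1.0, 0.6                         # inverse temperature, swap angle (arbitrary)
H = np.diag([0.0, 1.0])                        # H_S = H_B (resonant qubits)
w = np.diag(np.exp(-beta * np.diag(H)))
w /= np.trace(w)                               # Gibbs state omega_beta(H_S)

# Partial swap: rotate only the degenerate {|01>, |10>} subspace of H_S + H_B,
# so [U, H_S + H_B] = 0 and the induced map is thermal (H_0 = H_S).
U = np.eye(4, dtype=complex)
c, s = np.cos(theta), np.sin(theta)
U[1:3, 1:3] = [[c, -1j * s], [-1j * s, c]]

def emap(rhoS):                                # rho'_S = Tr_B[ U (rhoS (x) w) U^dag ]
    rho = U @ np.kron(rhoS, w) @ U.conj().T
    return np.einsum('abcb->ac', rho.reshape(2, 2, 2, 2))

def rel_ent(r, sref):                          # D(r||sref) for commuting diagonal states
    p, q = np.diag(r).real, np.diag(sref).real
    return float(np.sum(p * (np.log(p) - np.log(q))))

rho0 = np.diag([0.9, 0.1])                     # arbitrary diagonal initial state
rho1 = emap(rho0)
Sigma = rel_ent(rho0, w) - rel_ent(rho1, w)    # entropy production of one step, >= 0
gibbs_fixed = np.allclose(emap(w), w)          # the Gibbs state is invariant
```

Since the rotated subspace is degenerate in total energy, every allowed trajectory conserves $\epsilon_n+\varepsilon_i$, so $w_\gamma=0$ for this map, consistent with the vanishing average work of thermal maps.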
Since thermal maps do not require an external agent that performs work on or extracts work from the system, they describe the passive coupling between a system and a heat bath. \vspace{0.2cm} {\it ii) Stochastic thermodynamics of non-thermal maps with equilibrium:} We consider maps with an equilibrium state $\omega_\beta(H_0)$ with $H_0\neq H_S$ but restrict ourselves to the particular situation in which $[H_0,H_S]=0$. This is the situation that we will encounter in the examples, and we argue that it will often be the case if the system Hamiltonian is constant during the evolution, i.e., $H_S(t)=H_S\,\forall t$, and the coupling is a step function, i.e., $V(t)=V_{\rm loc}$ if $0<t<\tau$ and $V(t)=0$ elsewhere. Indeed, in this case, one finds that the equilibrium condition $[U,H_0+H_B]=0$ implies that $[H_S,H_0]=0$, with the exception of some very particular cases in which $[V,H_0+H_B] = [H_0,H_S]\otimes\mathbb{1}_B$. In any case, if $[H_0,H_S]=0$, the first interesting observation is that one can manipulate Eq.~(\ref{epthermal}) and rewrite it as \[ \Delta_iS=D(\rho_S||\omega_\beta(H_S))-D(\rho_S'||\omega_\beta(H_S))+\beta W, \] where $W$ is given in Eq.~(\ref{Av.Work.Eq}). This expression coincides with the one derived in~\cite{DeffnerPRL} on thermodynamical grounds for a system weakly coupled to a bath under a cyclic driving ($H_S(0)=H_S(\tau)=H_S$), such as the ones we consider. Interestingly, if $[H_0,H_S]=0$, then the work fluctuations also become a system property, i.e., one does not need to perform a measurement on the bath. Indeed, let us consider the work distribution Eq.~(\ref{p(w)}). Since $[H_0,H_S]=0$ and $H_S$ is non-degenerate, $\ket{\epsilon_n}$ is also an eigenvector of $H_0$, i.e., $H_0\ket{\epsilon_n}=\epsilon_n^0\ket{\epsilon_n}$.
Then, from the equilibrium property $[U,H_0+H_B]=0$, one finds that $\epsilon^0_m+\varepsilon_j=\epsilon^0_n+\varepsilon_i$ if $|\langle \epsilon_m,j|U|\epsilon_n,i\rangle|^2\neq 0$, which is proportional to $p(\gamma)$ in Eq.~(\ref{p(w)}). Therefore, one can replace the stochastic work, Eq.~(\ref{w-gamma}), inside the delta function by $w_\gamma=\epsilon_m-\epsilon_m^0-(\epsilon_n-\epsilon_n^0)$, obtaining \begin{equation} \label{ecc:dist trabajo eq. map} p(w)=\sum_{n,m} \delta(w-[(\epsilon_m-\epsilon_m^0)-(\epsilon_n-\epsilon_n^0)])\bra{\epsilon_m}{\mathcal E}(\ket{\epsilon_n}\bra{\epsilon_n})\ket{\epsilon_m}p_i(n), \end{equation} completely determined by system quantities and the map ${\mathcal E}$ without the need for measuring the bath. By averaging $w$ with Eq.~(\ref{ecc:dist trabajo eq. map}), we recover an expression that is apparently different from Eq.~(\ref{Av.Work.Eq}), namely, $\Tr[(H_S-H_0)(\rho_S'-\bar \rho_S)]$. However, since the non-selectively measured $\bar \rho_S$ is diagonal in the eigenbasis of $H_S$ (and $H_0$), the two expressions coincide. Moreover, because $H_S$ and $H_0$ share an eigenbasis, we can simultaneously study the work and entropy production fluctuations in the equilibrium state. One finds that $p(\Delta_is)=\delta(\Delta_is)$. Indeed, using an argument similar to the one used for the work distribution, one can replace $\Delta_is_\gamma$ by $\ln p_i(n)-\ln p_f(m)-\beta (\epsilon^0_m-\epsilon^0_n)$ inside the delta function in Eq.~(\ref{p(si)}). Then, since $p_i(n)=e^{-\beta\epsilon_n^0}/Z_0$ and $p_f(m)=e^{-\beta\epsilon_m^0}/Z_0$, one obtains $p(\Delta_is)=\delta(\Delta_is)$, a result also valid for thermal maps ($H_0=H_S$). In contrast, the equilibrium work distribution, Eq.~(\ref{ecc:dist trabajo eq. map}) with $p_i(n)=e^{-\beta\epsilon_n^0}/Z_0$, gives $p(w)\neq \delta(w)$ (although $W=0$) for non-thermal maps with equilibrium, while $p(w)= \delta(w)$ for thermal maps.
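The following sketch evaluates this work distribution for a two-site XX chain coupled through its first spin to a single thermal bath spin, a map with equilibrium $\omega_\beta(H_0)$ and $H_0\neq H_S$ (sizes, couplings and parameter values are our own illustrative choices). Starting from the equilibrium state, the average work vanishes while $p(w)$ is not a delta.

```python
import numpy as np
from scipy.linalg import expm

# Two-site XX chain exchange-coupled to a thermal bath spin through site 1:
# [U, H0 + Hb] = 0 with H0 the Zeeman part, [H0, HS] = 0 and H0 != HS.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
kron = np.kron

h, Jx, JB, beta, tau = 1.0, 0.8, 0.6, 1.0, 1.7
s1 = lambda s: kron(s, I2)                    # first spin of the chain
s2 = lambda s: kron(I2, s)                    # second spin
H0 = 0.5 * h * (s1(sz) + s2(sz))              # conserved Zeeman part
HS = H0 - Jx * (s1(sx) @ s2(sx) + s1(sy) @ s2(sy))
Hb = 0.5 * h * sz

U = expm(-1j * tau * (kron(HS, I2) + kron(np.eye(4), Hb)
                      + JB * (kron(s1(sx), sx) + kron(s1(sy), sy))))
omb = expm(-beta * Hb); omb /= np.trace(omb).real
om0 = expm(-beta * H0); om0 /= np.trace(om0).real   # equilibrium state

def emap(rho):
    """E(rho) = Tr_b[U (rho x omega_b) U^dagger]."""
    rt = U @ kron(rho, omb) @ U.conj().T
    return np.einsum('abcb->ac', rt.reshape(4, 2, 4, 2))

# Simultaneous eigenbasis of HS and H0 (HS is non-degenerate here).
eps, vec = np.linalg.eigh(HS)
eps0 = np.real(np.diag(vec.conj().T @ H0 @ vec))
p_i = np.exp(-beta * eps0); p_i /= p_i.sum()        # equilibrium populations

pw = {}                                             # work distribution p(w)
for n in range(4):
    out = emap(np.outer(vec[:, n], vec[:, n].conj()))
    for m in range(4):
        w = round((eps[m] - eps0[m]) - (eps[n] - eps0[n]), 9)
        prob = np.real(vec[:, m].conj() @ out @ vec[:, m]) * p_i[n]
        pw[w] = pw.get(w, 0.0) + prob

W = sum(w * p for w, p in pw.items())
spread = sum(p for w, p in pw.items() if abs(w) > 1e-6)
print(W, spread)    # average work vanishes, yet p(w) is not a delta
```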
Let us summarize the results of this subsection. For a map with an equilibrium state, the average thermodynamic quantities can be written in terms of the properties of the system of interest only. In general, an active external agent has to provide (or extract) work to perform the map on a state $\rho_S$. Only for thermal maps is $W=0$, and the agent is passive. If the system is in the equilibrium state $\omega_\beta(H_0)$, the map can be performed with $W=0$, see Eq.~(\ref{Av.Work.Eq}), and $\Delta_iS=0$. In this equilibrium state, the entropy production does not fluctuate $p(\Delta_i s)= \delta(\Delta_is)$, but the work may still present fluctuations $p(w)\neq \delta(w)$ except for thermal maps. In the general case of a map with equilibrium in which $[H_0,H_S]\neq0$, we cannot discuss work and entropy production fluctuations simultaneously. If we consider the latter, we still have that the entropy production is a non-fluctuating quantity in the equilibrium state. This can also be obtained as a consequence of the integral fluctuation theorem, as we will see below. \subsection{Fluctuation Theorems for the Entropy Production} A central result from stochastic thermodynamics is the detailed fluctuation theorem for the stochastic entropy production $\Delta_is$. Consider the probability distribution, Eq.~(\ref{p(si)}), of a given $\Delta_is$ value obtained according to the $A=\rho_{S}$ and $B=\rho_S'$ two-point measurement procedure. The detailed fluctuation theorem for the entropy production is \begin{equation} \ln\frac{p(\Delta_is)}{\tilde p(-\Delta_is)}=\Delta_is, \label{ep.fluct} \end{equation} where $\tilde p(\Delta_is)$ refers to the distribution of the entropy production in a reverse process to be specified later. This equality is derived from the time reversal properties of the system. For the type of systems that we study, it was derived in~\cite{HPNJP2013} for driven Hamiltonians. 
Here, we derive it to emphasize that the fluctuation theorem is also valid for systems driven by other mechanisms, for instance, by the coupling to the bath. The time reversal operator~\cite{Haake} $\Theta$ is anti-unitary, i.e., $\Theta i=-i\Theta $, and $\Theta ^\dag=\Theta ^{-1}$. This operator is defined in the full Hilbert space ${\mathcal H}_S\otimes{\mathcal H}_B$ and is of the form $\Theta_S\otimes\Theta_B$. The unitary evolution $U$ depends on the time dependence of the system Hamiltonian (which, as we mentioned, can be constant in time) and on the time dependence of the coupling, which is at least switched on and off. This time dependence is referred to as the protocol. If one performs the protocol in the time-inverted sequence, i.e., one considers $\{H_S(\tau-t),V(\tau-t)\}$, the unitary dynamics will be called $\tilde U$. The micro-reversibility principle for non-autonomous systems~\cite{Gasp,Campisi} relates the forward and backward dynamics by $\Theta ^\dag \tilde U \Theta =U^\dag$. Thus, if the unitary operator maps $|\phi\rangle$ to $|\phi'\rangle$, i.e., $|\phi'\rangle=U|\phi\rangle$, the time-reversed state $\Theta |\phi'\rangle$ is mapped to the time-reversed state $\Theta |\phi\rangle$ by the time-reversed unitary $\tilde U$. We denote reverse states as $|\tilde\cdot\rangle=\Theta |\cdot\rangle$. We remark that the anti-linearity of $\Theta$ implies that $\bra{\tilde \cdot} = \bra{\Theta\, \cdot} \neq \bra{\cdot}\Theta^\dag$. For the time-reversed dynamics, we consider an arbitrarily chosen initial state for the system, $\tilde\rho_S=\sum \tilde p_f(m)|\tilde b_{m}\rangle\langle \tilde b_{m}|$, and a time-reversed thermal state for the bath, $\tilde\rho_B=\sum \frac{e^{-\beta \varepsilon_j}}{Z_B}|\tilde j\rangle\langle \tilde j|$.
Then, the system's time-reversed map $\tilde{\mathcal{E}} (\tilde{\rho}_S) = \tilde{\rho}_S'$ has the representation ${\tilde{\mathcal E}}(\cdot)=\sum_{ij}\tilde M_{ji}\cdot \tilde M_{ji}^{\dag}$ in terms of reversed Kraus operators \begin{equation} \tilde M_{ji}=\sqrt{\frac{e^{-\beta \varepsilon_j}}{Z_B}}\langle\tilde i|\tilde U|\tilde j\rangle. \label{kraussHeat-tilde} \end{equation} Micro-reversibility implies that they satisfy (see Appendix \ref{sec:appendix microrev}) \begin{equation} \tilde M_{ji}= \Theta_S\sqrt{e^{\beta(\varepsilon_i-\varepsilon_j)}}M_{ij}^\dag \Theta^\dag_S. \label{Mmicroreversibility} \end{equation} We can now relate the probability $p(\gamma)=|\langle b_m|M_{k=ij}|a_n\rangle|^2 p_i(n)$ for a trajectory $\gamma=\{n,k,m\}=\{n,i;m,j\}$ to the probability $\tilde p(\tilde\gamma)=|\langle\tilde a_n|\tilde M_{k=ji}|\tilde b_m\rangle|^2 \tilde p_f(m)$ of its time reversal $\tilde\gamma=\{\tilde m,\tilde k,\tilde n\}=\{\tilde m,\tilde j;\tilde n,\tilde i\}$. From Eq.~(\ref{Mmicroreversibility}), we have $\tilde p(\tilde \gamma)=e^{\beta(\varepsilon_i-\varepsilon_j)}|\langle a_n| M_{k=ij}^\dag| b_m\rangle|^2 \tilde p_f(m)$; therefore, \begin{equation} \frac{p(\gamma)}{\tilde p(\tilde \gamma)}=e^{-\beta(\varepsilon_i-\varepsilon_j)}\frac{p_i(n)}{\tilde p_f(m)}. \label{FTgamma} \end{equation} If the initial state of the backward process $\tilde p_f(m)$ is the final state of the forward process, i.e., $\tilde p_f(m)=p_f(m)$, then we have $p(\gamma)=e^{\Delta_i s_\gamma}\tilde p(\tilde \gamma)$. Using this equality, we now evaluate \[ p(\Delta_is)=\sum_\gamma p(\gamma)\delta(\Delta_is-\Delta_is_\gamma)=e^{\Delta_i s}\sum_{\tilde\gamma}\tilde p(\tilde \gamma)\delta(\Delta_is+\Delta_is_{\tilde \gamma})=e^{\Delta_i s} \tilde p(-\Delta_i s), \] where we have also used $\Delta_is_{\tilde \gamma}=-\Delta_is_\gamma$; see Eq.~(\ref{epgamma}). 
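For a constant (time-symmetric) protocol with real Hamiltonians, $\tilde U=U$ and one can take $\Theta=K$, the complex conjugation, so the relations above can be checked trajectory by trajectory. The sketch below (our own illustrative off-resonant qubit-qubit exchange model) verifies the micro-reversibility relation for the Kraus operators and the identity $p(\gamma)=e^{\Delta_i s_\gamma}\tilde p(\tilde \gamma)$.

```python
import numpy as np
from scipy.linalg import expm

# Trajectory-level check of the reversed Kraus relation and of
# p(gamma) = exp(Delta_i s_gamma) * p_tilde(gamma_tilde), for a constant
# protocol with real Hamiltonians (U_tilde = U, Theta_S = K). Parameters
# are illustrative choices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

h, hb, JB, beta, tau = 1.0, 0.4, 0.8, 1.0, 2.0
U = expm(-1j * tau * (np.kron(0.5 * h * sz, I2) + np.kron(I2, 0.5 * hb * sz)
                      + JB * (np.kron(sx, sx) + np.kron(sy, sy))))
Uq = U.reshape(2, 2, 2, 2)                    # indices [sys', bath', sys, bath]
eb = np.array([0.5 * hb, -0.5 * hb])          # bath levels
wb = np.exp(-beta * eb); wb /= wb.sum()       # bath Gibbs weights

# Forward Kraus operators M_ij and reversed ones (real bath basis).
M = {(i, j): np.sqrt(wb[i]) * Uq[:, j, :, i] for i in range(2) for j in range(2)}
Mt = {(j, i): np.sqrt(wb[j]) * Uq[:, i, :, j] for i in range(2) for j in range(2)}

# Micro-reversibility: Mt_ji = exp(beta(e_i - e_j)/2) * M_ij^T when Theta_S = K.
ok = max(np.abs(Mt[(j, i)]
                - np.exp(0.5 * beta * (eb[i] - eb[j])) * M[(i, j)].T).max()
         for i in range(2) for j in range(2))

p_n = np.array([0.9, 0.1])                    # initial state, diagonal basis a_n
a = [np.eye(2, dtype=complex)[:, n] for n in range(2)]
rhoS2 = sum(p_n[n] * sum(Mk @ np.outer(a[n], a[n].conj()) @ Mk.conj().T
                         for Mk in M.values()) for n in range(2))
p_m, b = np.linalg.eigh(rhoS2)                # final measurement basis b_m

viol = 0.0
for n in range(2):
    for i in range(2):
        for j in range(2):
            for m in range(2):
                pfwd = abs(b[:, m].conj() @ M[(i, j)] @ a[n])**2 * p_n[n]
                dis = np.log(p_n[n]) - np.log(p_m[m]) - beta * (eb[i] - eb[j])
                pbwd = abs(a[n] @ Mt[(j, i)] @ np.conj(b[:, m]))**2 * p_m[m]
                viol = max(viol, abs(pfwd - np.exp(dis) * pbwd))
print(ok, viol)     # both vanish to machine precision
```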
If the reversed process is identical to the forward process, $\tilde p(\Delta_i s)=p(\Delta_i s)$, then the fluctuation theorem for the entropy production, Eq.~(\ref{ep.fluct}), can be written just with the distribution of the forward process, $p(\Delta_i s)= e^{\Delta_i s} p(-\Delta_i s)$. This is the case in the systems that we consider if the driving is time symmetric, $\{H_S(t),V(t)\}=\{H_S(\tau-t),V(\tau-t)\}$ for $0\leq t<\tau$. A constant $H_S$ and (step) $V$ fulfill this condition. It is also necessary for the measured operator to be invariant under the time reversal transformation, guaranteeing a one-to-one correspondence between the forward and backward trajectories. See Appendix \ref{appendix: bkwequivalence}. The detailed fluctuation theorem for the entropy production implies the integral fluctuation theorem $\langle e^{-\Delta_is}\rangle=1$. This in turn implies that if the average entropy production $\Delta_iS=\langle \Delta_is\rangle=0$ vanishes, then $\langle e^{-\Delta_is}\rangle=e^{\langle-\Delta_is\rangle}$, and due to the convexity of the exponential, this is possible only if $p(\Delta_is)=\delta(\Delta_is)$, i.e., the stochastic entropy production does not fluctuate. This was already noted when we discussed maps with equilibrium states and the fluctuation properties in these states. Conversely, it implies the opposite for {\it non-equilibrium} steady states, that is, fluctuations of the stochastic entropy production are necessary to have a positive average entropy production $\Delta_iS=\langle \Delta_is\rangle>0$. \subsection{Fluctuations of work} Now consider the case where the initial states of the forward and backward processes are canonical, $p_i(n)=e^{-\beta \epsilon_n}/Z_S$ and $\tilde p_f(m)=e^{-\beta \epsilon_m}/Z_S$; then, Eq.~(\ref{FTgamma}) provides $\frac{p(\gamma)}{\tilde p(\tilde \gamma)}=e^{-\beta(\varepsilon_i-\varepsilon_j)}e^{-\beta(\epsilon_n-\epsilon_m)}=e^{\beta w_\gamma}$. 
One can prove that if the reversed process is identical to the forward process (see Appendix \ref{appendix: bkwequivalence}), then the probability of performing a work $w$ between the initial time, with the system in the state $e^{-\beta H_S}/Z_S$, and an arbitrary time (possibly after an infinite time, when the system has reached the steady state) at which the energy of the system is measured, satisfies the fluctuation relation \begin{equation} p(w)=p(-w)e^{\beta w}. \label{WFT} \end{equation} Conversely, if the reversed process is not the same as the forward process, then the work fluctuation theorem reads $p(w)=\tilde p(-w)e^{\beta w}$. For thermal maps, whose stationary state is the canonical thermal state, we observed that $p(w)=\delta(w)$; thus, Eq.~(\ref{WFT}) is trivially satisfied. For other maps, the canonical thermal state is not necessarily invariant. Thus, one can consider the evolution of the system initially prepared in the canonical thermal state toward its steady state and perform the two-point measurement of the system Hamiltonian to find that the work statistics follows Eq.~(\ref{WFT}). We later illustrate this equality in two interesting situations: a system that undergoes a cyclic process and a system with a NESS. \subsection{Generalization to concatenated CPTP maps and applications} \label{sec.appl} In the previous section, we considered that the process $\rho_S\to\rho_S'$ is given by a single CPTP map $\rho_S'={\mathcal E}(\rho_S)=\sum_k M_k\rho_SM_k^\dag$ with a particular choice for the Kraus operators $M_k$ that allows a thermodynamic interpretation. We will show that the previous results of this section can be extended to concatenations of maps, providing a richer setup for studying thermodynamic processes. One can concatenate CPTP maps acting on a system to describe a sequence of evolutions of a system coupled to heat baths for given lapses of time.
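As a minimal check of the work fluctuation relation, the sketch below builds the two-point-measurement work distribution for a single off-resonant exchange map acting on a qubit prepared in the Gibbs state; the protocol is time symmetric, so the forward and backward processes coincide. The model and all parameter values are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Two-point-measurement work statistics for an off-resonant qubit-qubit
# exchange map with the system prepared in the Gibbs state: a check of
# p(w) = p(-w) exp(beta w). Parameters are illustrative.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

h, hb, JB, beta, tau = 1.0, 0.4, 0.8, 1.0, 2.0
es = np.array([0.5 * h, -0.5 * h])            # system levels (sigma_z basis)
eb = np.array([0.5 * hb, -0.5 * hb])          # bath levels
U = expm(-1j * tau * (np.kron(0.5 * h * sz, I2) + np.kron(I2, 0.5 * hb * sz)
                      + JB * (np.kron(sx, sx) + np.kron(sy, sy))))
Uq = U.reshape(2, 2, 2, 2)                    # indices [sys', bath', sys, bath]

ps = np.exp(-beta * es); ps /= ps.sum()       # canonical initial populations
wb = np.exp(-beta * eb); wb /= wb.sum()       # bath Gibbs weights

pw = {}
for n in range(2):
    for i in range(2):
        for j in range(2):
            for m in range(2):
                # w_gamma = (eps_m - eps_n) - q_gamma, with q_gamma = e_i - e_j
                w = round((es[m] - es[n]) - (eb[i] - eb[j]), 9)
                prob = abs(Uq[m, j, n, i])**2 * wb[i] * ps[n]
                pw[w] = pw.get(w, 0.0) + prob

# Crooks-type symmetry p(w) = p(-w) exp(beta w) and <exp(-beta w)> = 1.
sym = max(abs(pw[w] - pw[-w] * np.exp(beta * w)) for w in pw)
jarz = sum(p * np.exp(-beta * w) for w, p in pw.items())
print(sym, jarz)
```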
We generalize the concept of quantum trajectory to concatenations of $N$ maps $\mathcal{E}(\cdot)=\mathcal{E}^{(N)}\cdots\mathcal{E}^{(1)}(\cdot)$. Each \begin{equation} \mathcal{E}^{(n)}(\cdot)=\sum_k M_k^{(n)}\cdot M_k^{(n)\dag} \label{CPTP2} \end{equation} is a CPTP map, and for each, we measure a corresponding $k_n$ associated with the process $|i_n\rangle\to |j_n\rangle$ between eigenstates of the bath. Note that with each map $\mathcal{E}^{(n)}$, a fresh bath is introduced to interact with the system. Fig.~\ref{fig:esquema} provides a schematic representation. As before, we consider the unitary evolution operator $U_n=e^{-i\tau_n(H_S+H_b^{n}+V^{n})}$, where $V^n$ represents the energy coupling between the system and the $n$th copy of the bath (with Hamiltonian $H_b^{n}$) in the time interval $[\sum_{l<n}\tau_l,\sum_{l\leq n}\tau_l]$. We consider $V^{n}$ to be constant in this time interval and $V^{n}=0$ outside the interval. The Kraus operators are \begin{equation} M^n_{ij}=\sqrt{\frac{e^{-\beta \varepsilon_{i_n}}}{Z_b}}\langle j_n|U_n|i_n\rangle, \label{kraussHeat2} \end{equation} where $\{\varepsilon_{i_n},|i_n\rangle\}$ is the spectrum of $H_b^n$ and $Z_b={\rm Tr} \,e^{-\beta H_b^n}$. The stochastic heat flowing from the baths to the system is given by minus the energy change of the baths, $q_\gamma=\sum_n(\varepsilon_{i_n}-\varepsilon_{j_n})$. We perform a measurement of a system operator $A$ at the beginning and another $B$ at the end of the process.
The trajectory is $\gamma=\{n,k_1,\ldots,k_N,m\}$, and its probability $p(\gamma)=p(m,k_1,\ldots,k_N|n)p_i(n)$ is \begin{equation} p(\gamma)=|\langle b_m|M^{(N)}_{k_N}\cdots M^{(1)}_{k_1}|a_n\rangle|^2p_i(n) \label{pgamma} \end{equation} or explicitly in terms of Eq.~(\ref{CPTP2}) and Eq.~(\ref{kraussHeat2}), the probability of a trajectory $\gamma=\{a_n;i_1,j_1;\ldots;i_N,j_N;b_m\}$ is \begin{equation} p(\gamma)=|\langle b_m,j_1\cdots j_N|U_N\cdots U_1|i_1\cdots i_N,a_n\rangle|^2 \frac{e^{-\beta \sum_{n=1}^N\varepsilon_{i_n}}}{Z_b^N}p_i(n). \label{psmN} \end{equation} With this, the detailed fluctuation theorem can be extended to concatenations of maps. If in the forward process the sequence ${\mathcal E^{(N)}}\cdots {\mathcal E^{(1)}}$ acts on an initial state, then the backward process is the reversed concatenation of the reversed maps, i.e., $\tilde{\mathcal E}^{(1)}\cdots \tilde{\mathcal E}^{(N)}$, and for a given trajectory, $\gamma=\{n,k_1\cdots k_N,m\}$, the corresponding backward trajectory is $\tilde\gamma=\{\tilde m,\tilde k_N\cdots \tilde k_1,\tilde n\}$. The probability of the forward path is given by Eq.~(\ref{pgamma}), whereas for the backward path, the probability is \begin{equation} \tilde p(\tilde\gamma)=|\langle \tilde a_n|\tilde M^{(1)}_{k_1}\cdots \tilde M^{(N)}_{k_N}|\tilde b_m\rangle|^2 \tilde p_f(m) \label{pgammatilde} \end{equation} Since every Kraus operator involved in Eq.~(\ref{pgammatilde}) satisfies Eq.~(\ref{Mmicroreversibility}), one obtains \begin{equation} \frac{p(\gamma)}{\tilde p(\tilde \gamma)}=e^{-\beta q_\gamma}\frac{p_i(n)}{\tilde p_f(m)} \label{FTgamma2} \end{equation} and as before, considering the initial state of the backward process $\tilde p_f(m)$ as the final state of the forward process, i.e., $\tilde p_f(m)=p_f(m)$, we have $p(\gamma)=e^{\Delta_i s_\gamma}\tilde p(\tilde \gamma)$. 
Finally, if the initial states of the forward and backward processes are canonical, $p_i(n)=e^{-\beta \epsilon_n}/Z_S$ and $\tilde p_f(m)=e^{-\beta \epsilon_m}/Z_S$, then Eq.~(\ref{FTgamma2}) provides $\frac{p(\gamma)}{\tilde p(\tilde \gamma)}=e^{-\beta q_\gamma}e^{-\beta(\epsilon_n-\epsilon_m)}=e^{\beta w_\gamma}$. Note that every iteration of the map contributes a certain amount of average work, heat and entropy production, and these quantities are additive: if we know the average work, heat and/or entropy production for two maps that are composed, then the corresponding quantity for the total map (the composition) is the sum over the two maps. For the fluctuations, no such simple decomposition is possible. \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{Figure1.pdf} \caption{The figure depicts the first two interactions between the system and fresh copies of the bath.} \label{fig:esquema} \end{figure} \subsection{A thermodynamic cycle} \label{sec.cycle} We can illustrate the advantage of considering concatenations of maps by studying a simple thermodynamic cycle. A system starts in the canonical thermal state $\omega_\beta(H_S)$, and then it is driven by a map such as the ones considered in Sec. \ref{traj.map.sec}. In such maps, work is performed on the system, leaving it in a non-equilibrium state $\rho_S'$ with the same Hamiltonian $H_S$. Then, we assume that a thermal map brings the system back to the thermal state $\omega_\beta(H_S)$. For such a cycle, we have that \[ \Delta E=0=W_d+Q_d+Q_r, \] where $Q_d$ and $W_d$ respectively refer to the heat exchanged with the bath, Eq.~(\ref{Av.Heat}), and the work performed on the system, Eq.~(\ref{Av.Work}), during the driving process $\omega_\beta(H_S)\to \rho'_S$, and $Q_r$ is the heat exchanged during the final relaxation process $\rho_S'\to \omega_\beta(H_S)$.
Since this is achieved with a thermal map, one has $W_r=0$; thus, $Q_r={\rm Tr}[H_S(\omega_\beta(H_S)-\rho_S')]$. One can verify that \[ \Delta S=0=\Delta_iS_d+\Delta_iS_r+\beta(Q_d+Q_r), \] where $\Delta_iS_d=D(\rho'_{\rm tot}||\rho_S'\otimes\omega_\beta(H_B))$ and $\Delta_iS_r=D(\rho_S'||\omega_\beta(H_S))$ according to Eq.~(\ref{Av.Ent.Prod}) and Eq.~(\ref{epthermal}), respectively. It also follows that the total entropy production $\Delta_iS_d+\Delta_iS_r=\beta W_d$ is the dissipated work, as expected for an isothermal process starting and finishing in equilibrium. We also obtain $\Delta_iS_d=-D(\rho_S'||\omega_\beta(H_S))+\beta W_d$~\cite{DeffnerPRL}. We now consider fluctuations. Note that because the initial and final density matrices are Gibbsian, work and entropy production fluctuations are identical, \begin{equation} \label{ecc:thermo_cycle} \ln \frac{p(\gamma)}{\tilde p (\tilde \gamma)} = \Delta_i s_\gamma = \Delta s_\gamma - \beta(q^d_\gamma +q^r_\gamma) = \beta \Delta \epsilon_\gamma -\beta( q^d_\gamma + q^r_\gamma) = \beta w_\gamma. \end{equation} In the last equation, $q_\gamma^{d,r}$ are the driving and relaxation stochastic heats, respectively. In general, the thermal state is achieved asymptotically when a system interacts with a large memoryless heat bath. This thermalization might also be achieved, in theory, by concatenating an infinite sequence of thermal maps, taking the limit $N\to \infty$ in the process described by Eqs.~(\ref{CPTP2}) and (\ref{kraussHeat2}). This composition is also a thermal map. In the first example presented in the next section, we show that when the system is small, this is achieved much faster in practice, with only a few maps. \section{Examples} \label{sec.aps} \subsection{Single spin in a cycle} Consider a single spin with $H_S=(h/2)\sigma_S^z$ that interacts with thermal spins $\omega_\beta(H^n_b)$ with $H^n_b=(h/2)\sigma_b^z$. Here, $\sigma_{S,b}^{x,y,z}$ are the Pauli spin 1/2 operators of the system and bath.
The interaction $V^1$ with the first thermal spin is such that $[H_S+H^1_b, V^1]\neq 0$; thus, if the spin system starts in equilibrium, $\rho_S = \omega_\beta(H_S)$, the interaction with the first thermal spin drives the system out of equilibrium to state ${\mathcal E}^{(1)}(\omega_\beta(H_S))$. In this process, some work is performed on the system and heat flows to the bath. Then, for the subsequent interactions, we take $V^n$ such that $[H_S+H^n_b, V^n]=0$ with $n\geq 2$; thus, the corresponding $\{{\mathcal E}^{(n)}\}_{n\geq 2}$ are thermal maps that will bring the system back to the thermal $\omega_\beta(H_S)$ state without performing or extracting work. This constitutes a thermodynamic cycle for the system with a unitary evolution for the total system of spin plus baths. Consequently, the micro-reversibility principle is valid, and Eq.~(\ref{ecc:thermo_cycle}) is fulfilled (see the next example for details on the time reversal operator). In the left panel of Fig.~\ref{fig:cycle}, we plot the Hilbert-Schmidt distance \footnote{Hilbert-Schmidt distance is defined as $||\sigma-\rho||_\text{HS} \equiv \Tr[(\sigma-\rho)^\dag (\sigma-\rho)]$.} between the state of the system at each step of the concatenation and the thermal state, and we indeed observe that as $N$ increases, thermalization becomes more effective. In the right panel, we show work and entropy production probability distributions for the cycle considering that at $N=7$, thermalization has been achieved ($||\rho_S'-\omega_\beta(H_S)||_\text{HS}<10^{-5}$). We show in Appendix \ref{appe.thermal maps dont contribute} that thermal maps do not contribute to work fluctuations; thus, we plot work fluctuations of the driving part alone, i.e., ${\mathcal E}^{(1)}$, as well as of the full cycle, $p_1 \equiv p_\text{cycle}(\beta w)=p_\text{drive}(\beta w) = p_\text{cycle}(\Delta_i s)$. Note that this does not apply for entropy production, $p_2 \equiv p_\text{drive}(\Delta_i s) \neq p_1$. 
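The repeated-interaction simulation behind these observations can be sketched in a few lines. The following code uses the parameter values quoted in the caption of Fig.~\ref{fig:cycle}; the implementation details are ours.

```python
import numpy as np
from scipy.linalg import expm

# Single-spin cycle: one work-performing driving map followed by thermal
# maps that rethermalize the spin at zero work cost. Parameters follow the
# figure caption (tau_1 = 1, tau_n = 4, h = 1, J_B = 3, beta = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
kron = np.kron

h, JB, beta = 1.0, 3.0, 1.0
HS = Hb = 0.5 * h * sz
omega = expm(-beta * HS); omega /= np.trace(omega).real   # omega_beta(H_S)

def step(rho, V, tau):
    """One interaction with a fresh thermal spin; returns (rho', W, Q)."""
    U = expm(-1j * tau * (kron(HS, I2) + kron(I2, Hb) + V))
    R = (U @ kron(rho, omega) @ U.conj().T).reshape(2, 2, 2, 2)
    rho2 = np.einsum('abcb->ac', R)
    rhoB2 = np.einsum('abac->bc', R)
    Q = np.trace(Hb @ (omega - rhoB2)).real
    W = np.trace(HS @ (rho2 - rho)).real - Q
    return rho2, W, Q

V1 = (JB + 0.3) * kron(sx, sx) + JB * kron(sy, sy)   # driving: performs work
Vn = JB * (kron(sx, sx) + kron(sy, sy))              # thermal: [U, HS + Hb] = 0

rho, Wd, Qd = step(omega, V1, 1.0)                   # driving stroke
dist, Wr = [], 0.0
for _ in range(8):                                   # relaxation strokes
    rho, W, Q = step(rho, Vn, 4.0)
    Wr += abs(W)
    dist.append(np.trace((rho - omega).conj().T @ (rho - omega)).real)
print(Wd, Wr, dist[-1])   # Wd != 0, Wr = 0, distance -> 0
```

The thermal strokes perform no work because the exchange coupling conserves $H_S+H_b$; only the asymmetric part of $V^1$ transfers energy into the system at the agent's expense.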
\begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{Figure2a.pdf} \hspace*{\fill} \includegraphics[width=0.4\textwidth]{Figure2b.pdf} \caption{Left panel) Hilbert-Schmidt distance between the state of the system at each step of the concatenation and the thermal state. In this concatenation process, we take $\tau_1=1$, $V^1=(J_B + 0.3)\sigma^x_b\sigma^x_S + J_B \sigma^y_b\sigma^y_S$, $\tau_n=4$, $V^n=J_B \sigma^x_b\sigma^x_S + J_B \sigma^y_b\sigma^y_S$ for $n\geq 2$, and the other parameters are $h=1$, $J_B=3$, and $\beta = 1.$ Right panel) Work and entropy production distributions for the full cycle and the driving alone, $p_1 \equiv p_{\text{drive}}(\beta w) = p_{\text{cycle}}(\beta w)=p_{\text{cycle}}(\Delta_i s)\neq p_2 \equiv p_{\text{drive}}(\Delta_i s)$. The parameters of the plot are the same as in the left panel.} \label{fig:cycle} \end{figure} \subsection{Spin 1/2 chains} Let us consider a one-dimensional spin $1/2$ chain $\vec\sigma_1\cdots\vec\sigma_M$ with Hamiltonian $H_S$ interacting through the first site with a spin $1/2$ particle with Hamiltonian $H_b=\frac{h}{2}\sigma_b^z$ in a thermal state, where the interaction is given by \begin{equation} V=J_B(\sigma_b^x\sigma_1^x+\sigma_b^y\sigma_1^y). \label{lambda} \end{equation} Here, the Pauli operators $\sigma_b^{x,y,z}$ belong to the single bath spin, and $\sigma_1^{x,y,z}$ belong to the first spin of the chain. The unitary evolution for the system plus bath is given by the operator $U=e^{-i\tau(H_S+H_b+V)}$. Micro-reversibility holds if an anti-unitary operator $\Theta$ exists such that $\Theta \tilde U\Theta^\dag =U^\dag$. For the spin systems that we consider below, $\Theta=i\sigma_b^x i \sigma_b^y\Pi_{n=1}^{M}(i\sigma^x_n i \sigma^y_n) K$ is a time reversal symmetry operator, where $K$ performs the complex conjugation.
Note that when a system involves a magnetic field, one generally needs to invert the direction of the magnetic field; thus, the detailed fluctuation theorem in that case will relate the fluctuations of two different systems. However, for the time reversal operator that we consider here, that is not the case \cite{Haake}. The factors $i\sigma^x$ rotate each spin by $180\degree$ about the $x$ axis; combined with the conventional time reversal factors $i\sigma^y K$, this leaves the $\sigma^z$ terms invariant (see the Hamiltonians below). By replacing $H_b$ in Eq.~(\ref{kraussHeat}), one obtains four Kraus operators $M_{ij}$ with $i,j=\pm$, which correspond to transitions in the bath when $i\neq j$. Now let the system be an XX spin $1/2$ chain with Hamiltonian \[ H_{XX}=\frac{h}{2}\sum_{i=1}^M\sigma_i^z - \sum_{i=1}^{M-1} J_i^x(\sigma_{i}^x\sigma_{i+1}^x+\sigma_{i}^y\sigma_{i+1}^y). \] The total magnetization is a conserved quantity for the XX spin chain, i.e., $[H_{XX},H_0]=0$, where $H_0=\frac{h}{2}\sum_{i=1}^M \sigma_i^z$ is the non-interacting part of the Hamiltonian. Considering the Hamiltonian of the bath $H_b=\frac{h}{2}\sigma_b^z$ and the interaction between the chain and the bath $V$ in Eq.~(\ref{lambda}), one finds that the unitary evolution $U=e^{-i\tau(H_{XX}+H_b+V)}$ satisfies $[U,H_0+H_b]=0$; thus, $\omega_\beta(H_0)$ is an equilibrium state of the map ${\mathcal E}$. In this example, the Hamiltonian is of the form $H_S=H_0+H_I$, where $[H_0,H_I]=0$. The stochastic thermodynamics for maps with equilibrium was discussed in section \ref{stoch.term.sec}. We can observe that by iterating the map as discussed in section \ref{sec.appl}, an initial state $\rho_S(0)$ converges to the equilibrium state $\omega_\beta(H_0)$. This is illustrated in Fig.~\ref{fig:Thermo}.
The (cumulated) average work performed in that process is given by Eq.~(\ref{Av.Work.Eq}), and because ${\rm Tr}_S[H_I\omega_\beta(H_0)]=0$ in this example, it is simply given by $W=-{\rm Tr}_S[H_I\rho_S(0)]$, the (cumulated) heat is $Q={\rm Tr}_S[H_0(\omega_\beta(H_0)-\rho_S(0))]$, and the (cumulated) entropy production is $\Delta_iS=D(\rho_S(0)||\omega_\beta(H_0))$. The asymptotic values are indicated in Fig.~\ref{fig:Thermo}, left panel. Regarding the fluctuation properties, we showed in section \ref{stoch.term.sec} that in the equilibrium state $\omega_\beta(H_0)$, the entropy production does not fluctuate, but work may fluctuate. This is illustrated in Fig.~\ref{fig:wFTXY}, left panel. Let us now consider the XY spin $1/2$ chain with Hamiltonian \[ H_{XY}=\frac{h}{2}\sum_{i=1}^M\sigma_i^z - \sum_{i=1}^{M-1} (J_i^x \sigma_{i}^x\sigma_{i+1}^x+J_i^y \sigma_{i}^y\sigma_{i+1}^y) \] coupled to the bath with Hamiltonian $H_b$ through the same coupling $V$. For this system, the invariant state $\rho_{XY}$ of the map ${\mathcal E}$ is not related to any operator $H_0$ such that $[U,H_0+H_b]=0$. Indeed, the state $\rho_{XY}\otimes\omega_\beta(H_b)$ is not invariant under the unitary evolution, and this already indicates that the steady state $\rho_{XY}$ is a NESS. We can observe that by iterating the map for the XY chain, the steady state is reached; a constant amount of work is performed each time the map is applied, and similarly for the heat and entropy production. This is illustrated by the constant slopes in the cumulated thermodynamic quantities in Fig.~\ref{fig:Thermo}, right panel. These slopes can be computed, but not in terms of the system properties. They are global quantities. For instance, the slope for the entropy production is $D(U\rho_{XY}\otimes\omega_\beta(H_b)U^\dag||\rho_{XY}\otimes\omega_\beta(H_b))$. In this case, the map on the XY spin chain is a map with NESS. Let us now consider some fluctuation properties.
In the left panel of Fig.~\ref{fig:wFTXY}, we plot the work distribution for the XX spin $1/2$ chain in the equilibrium state, where $p(\Delta_is)= \delta(\Delta_is)$, but as noted previously, the work fluctuates, i.e., $p(w)\neq \delta(w)$, even though $\langle w\rangle=0$. The work fluctuation relation, Eq.~(\ref{WFT}), is illustrated in the right panel of Fig.~\ref{fig:wFTXY} for the XY spin chain. To conclude this section, we note that for the XX case, where the map has equilibrium, and for the XY case, where the map has a NESS, the agent performs (or extracts) work. This work is easily understood as the cost of implementing the concatenation, i.e., due to the (periodic) time dependence of the coupling between the system and the bath. \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{Figure3a.pdf} \hspace*{\fill} \includegraphics[width=0.4\textwidth]{Figure3b.pdf} \caption{Left panel) For the homogeneous XX spin 1/2 chain with three sites, cumulated average work (dashed-black), heat (dashed-red) and entropy production (dashed-blue) as a function of the iteration number. The corresponding straight lines are the theoretically computed asymptotic values. Right panel) The same quantities for the homogeneous XY spin 1/2 chain with three sites. In both cases, the initial density matrix is the Gibbs state $\omega_\beta(H_S)$, which is a non-equilibrium state. The parameters for the plots are $h=2$, $J_i^x=J_B=3$, $\beta = 1.2$ and $\tau =1$, and $J_i^y=2$ in the right panel.} \label{fig:Thermo} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{Figure4a.pdf} \hspace*{\fill} \includegraphics[width=0.4\textwidth]{Figure4b.pdf} \caption{Left panel) Work distribution for the XX spin 1/2 chain in the equilibrium state. Right panel) Work fluctuation relation Eq.~(\ref{WFT}) for the XY spin 1/2 chain, starting from the Gibbs state in the forward and backward processes. In both plots, we have used a chain of two sites and one map. 
The parameters for the plots are $h=2$, $J_i^x=J_B=3$, $\beta = 1.2$ and $\tau =1$, and $J_i^y=2$ in the right panel.} \label{fig:wFTXY} \end{figure} \section{Consequences for Lindblad Dynamics} \label{sec: Lindblad} In this section, we present the consequences of our previous results for systems described with some Lindblad equations. We will only consider the averaged thermodynamic quantities discussed previously and omit the fluctuations. As discussed in \cite{Attal, Karevski}, when the interaction strength between the system and the bath is scaled as $V=v/\sqrt{\tau}$ in the scheme discussed above, where $\tau$ is the time duration of the coupling between the system and the bath in every iteration, a Lindblad equation describes the dynamics in the continuous time limit $\tau \to 0$. In \cite{Barra}, the limit was considered for the dynamics and the thermodynamic quantities simultaneously. Recalling that the maps that we consider are given by Eq.~(\ref{CPTP}), we first note that by scaling the coupling as $V=v/\sqrt{\tau}$, the map $\mathcal E$ can be written up to order $\tau$ as \begin{equation} \label{ecc: lindblad: repeated infinitesimal} \mathcal E^{\tau}(\rho_S) = \rho_S - i\tau [H_S, \rho_S] + \tau \sum_{ij} \frac{e^{-\beta \varepsilon_i}}{Z_B} \left( L_{ij} \rho_S L_{ij}^\dag - \frac{1}{2}\{ L_{ij}^\dag L_{ij},\rho_S\} \right) , \end{equation} where $L_{ij}= \bra{j}v\ket{i}$ ($v$ is Hermitian) and $H_B\ket{i} = \varepsilon_i\ket{i}$. The double sum in the last term of Eq.~(\ref{ecc: lindblad: repeated infinitesimal}) is split into the diagonal part, $i=j$, and the non-diagonal part, $i\neq j$, which in turn can be converted into a sum $\sum_{i<j}$ by combining the $ij$ and the $ji$ terms. 
Note that if we consider the labels $\{ij\}$ with $i\le j$ as one label, $r=ij$, then the last term of Eq.~(\ref{ecc: lindblad: repeated infinitesimal}) has the form $\tau\mathcal D(\rho_S)$, where \begin{equation} \label{ecc: dissipator theorem} \mathcal D(\cdot) = \sum_r \gamma_r \left[L_r \cdot L_r^\dag - \frac{1}{2}\{L_r^\dag L_r, \cdot \} + \omega_r^{-1} \left(L_r^\dag \cdot L_r - \frac{1}{2}\{L_r L_r^\dag, \cdot\}\right) \right] \end{equation} is a Lindblad dissipator with Lindblad operators $L_r=L_{ij}$ and coefficients $\omega_r= e^{\beta(\varepsilon_j-\varepsilon_i)}$ and $\gamma_r = e^{-\beta \varepsilon_i}/Z_B>0$ when $i< j$ and $\gamma_r = e^{-\beta \varepsilon_i}/2Z_B>0$ when $i=j$. Considering $\partial_t \rho_S=\lim_{\tau\to 0}\frac{\mathcal E^{\tau}(\rho_S)-\rho_S}{\tau}$, we have the Lindblad evolution equation \begin{equation} \partial_t \rho_S=- i [H_S, \rho_S] +\mathcal D(\rho_S). \label{LindEq} \end{equation} Note that in the general case, this Lindblad equation does not fulfill the quantum detailed balance condition~\cite{Alicki76} because there is no \textit{a priori} relation between the Lindblad operators $L_r$ and the system Hamiltonian $H_S$ (see \textit{Theorem 3} in~\cite{Alicki76}). If we simultaneously consider the limit for the thermodynamic quantities in Eqs.~(\ref{Av.Heat},~\ref{Av.Work}, and~\ref{Av.Ent.Prod}), then we obtain \begin{equation} \dot Q =-\Tr_S\left[\sum\limits_{ij} \frac{e^{-\beta\varepsilon_i}}{Z_B} \left(\varepsilon_j L_{ij} \rho_S L_{ij}^\dag - \frac{\varepsilon_i}{2}\{ L_{ij}^\dag L_{ij},\rho_S\} \right) \right], \label{newQ} \end{equation} \begin{align} \dot W &= \Tr_S[H_S \mathcal D(\rho_S)] -\dot Q, \\ \dot S_i &= \Tr_S[\mathcal D(\rho_S) \ln \rho_S] - \beta \dot Q, \end{align} which correspond to the heat flux, the power performed on (or by) the system and the rate of entropy production for the process described by the Lindblad dynamics. 
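The continuous-time limit can also be checked numerically: with the coupling scaled as $V=v/\sqrt{\tau}$, the residual of the repeated-interaction map after subtracting the generator should vanish faster than $\tau$. The sketch below does this for a qubit coupled to a qubit bath; the model and all parameter values are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Convergence of the repeated-interaction map to the Lindblad generator when
# V = v/sqrt(tau): the residual after subtracting rho + tau(-i[HS,rho] + D(rho))
# is o(tau). Model and numbers are illustrative.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

h, hb, lam, beta = 1.0, 0.6, 0.5, 1.0
HS, HB = 0.5 * h * sz, 0.5 * hb * sz
v = np.sqrt(lam) * (np.kron(sx, sx) + np.kron(sy, sy))
eb = np.array([0.5 * hb, -0.5 * hb])
wb = np.exp(-beta * eb); wb /= wb.sum()
omb = np.diag(wb).astype(complex)

vq = v.reshape(2, 2, 2, 2)                       # indices [sys', bath', sys, bath]
L = {(i, j): vq[:, j, :, i] for i in range(2) for j in range(2)}  # L_ij = <j|v|i>

def dissipator(rho):
    out = np.zeros((2, 2), dtype=complex)
    for (i, j), Lij in L.items():
        out += wb[i] * (Lij @ rho @ Lij.conj().T
                        - 0.5 * (Lij.conj().T @ Lij @ rho + rho @ Lij.conj().T @ Lij))
    return out

def emap(rho, tau):
    U = expm(-1j * tau * (np.kron(HS, I2) + np.kron(I2, HB) + v / np.sqrt(tau)))
    rt = U @ np.kron(rho, omb) @ U.conj().T
    return np.einsum('abcb->ac', rt.reshape(2, 2, 2, 2))

rho = 0.5 * (I2 + 0.3 * sx + 0.2 * sz)           # generic test state
def residual(tau):
    gen = -1j * (HS @ rho - rho @ HS) + dissipator(rho)
    return np.abs(emap(rho, tau) - rho - tau * gen).max()

r1, r2 = residual(1e-2), residual(1e-4)
print(r1 / 1e-2, r2 / 1e-4)    # the error per unit time vanishes as tau -> 0
```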
Remarkably, if $\mathcal E$ is a map with equilibrium, then the corresponding Lindblad equation $\partial_t \rho_S=\lim_{\tau\to 0}\frac{\mathcal E^{\tau}(\rho_S)-\rho_S}{\tau}$ has detailed balance. To prove this, one notes that the equilibrium condition implies $[H_0,v] = [v,H_B]$; since $H_0$ acts on the system alone, taking bath matrix elements gives $[H_0,L_{ij}]=\bra{j}[H_0,v]\ket{i}=\bra{j}[v,H_B]\ket{i}=(\varepsilon_i-\varepsilon_j)L_{ij}$, and therefore, \begin{align} [H_0,L_{r}] &= (\varepsilon_i-\varepsilon_j)L_{r} \label{ecc:L_r eigenoperator}\\ [H_0,L_{r}^\dag] &= -(\varepsilon_i-\varepsilon_j)L_{r}^\dag. \label{ecc:L_rdag eigenoperator} \end{align} These conditions, together with $[H_0,H_S]=0$, imply that the Lindblad equation, Eq.~(\ref{LindEq}), with dissipator, Eq.~(\ref{ecc: dissipator theorem}), has quantum detailed balance with respect to $\pi=e^{-\beta H_0}/Z_0$ \footnote{ It can be shown that the quantum detailed balance condition [$\mathcal L_a(\pi)=0$ and $\mathcal L_s(A\pi)=\mathcal L_s^*(A)\pi$ for all observables $A$, where $\mathcal L_{(a)s}$ is the (anti-)symmetric part of $-i[H_S,\cdot]+\mathcal D(\cdot)$, see~\cite{Alicki76} for further details] can be generalized for equilibrium states $\pi=e^{-\beta H_0}/Z_0$ for the Lindblad Eq.~(\ref{LindEq}) with dissipator Eq.~(\ref{ecc: dissipator theorem}) when the following holds: $[H_0, L_r] = -\frac{1}{\beta} \ln \omega_r L_r $, $[H_0, L_r^\dag] = \frac{1}{\beta}\ln \omega_r L_r^\dag$ and $[H_0,H_S]=0.$ (In the particular case that $H_0=H_S$, the detailed balance relation is standard.)}. If our Lindblad equation of interest has detailed balance with respect to a positive invariant equilibrium state, i.e., $\pi = \omega_\beta(H_0)$, then the thermodynamic quantities for CPTP maps, which are simplified due to an equilibrium, can be written in the following way in the Lindblad limit without requiring explicit knowledge of the particular form of $H_B$ and the coupling $V$~\footnote{These follow directly from Eqs.~(\ref{Av.Work.Eq},~\ref{Eq.prop}).
One can also start from Eq.(\ref{newQ}), evaluate the trace in the eigenbasis of $H_S$ and use the equalities $(\epsilon_n^0-\epsilon_m^0)\bra{\epsilon_n}L_{ij}\ket{\epsilon_m}=(\varepsilon_i-\varepsilon_j)\bra{\epsilon_n}L_{ij}\ket{\epsilon_m}$ and $[H_0,L_{ij}^\dag L_{ij}]=0$, which are two consequences of Eqs.(\ref{ecc:L_r eigenoperator}-\ref{ecc:L_rdag eigenoperator}). }: \begin{align} \dot Q(t) &= \Tr[H_0 \mathcal D (\rho_S(t))] \label{ecc: simplified heat lindblad lim}\\ \dot W(t) &= \Tr[(H_S-H_0) \mathcal D(\rho_S(t))] \label{ecc: simplified work lindblad lim}\\ \dot{S}_i(t) &= - \Tr[\mathcal D (\rho_S(t)) \ln \rho_S(t)] - \beta \dot Q. \label{ecc: simplified ent. prod. lindblad lim} \end{align} Note that if $H_0=H_S$ and thus the equilibrium state for which detailed balance is valid is Gibbsian, $\pi = \omega_\beta(H_S)$, then $\dot W = 0$ according to Eq.~(\ref{ecc: simplified work lindblad lim}), and Eqs.~(\ref{ecc: simplified heat lindblad lim}) and (\ref{ecc: simplified ent. prod. lindblad lim}) provide the standard definitions used for the heat and entropy production in the weak-coupling regime \cite{BreuerBook}. As an example, consider the spin 1/2 chains of the last section with the same bath and scaling of the coupling Eq.~(\ref{lambda}) as $J_B=\sqrt{\lambda/\tau}$. In the limit discussed above, one obtains the same Lindblad equation, other than the unitary part, for the $H_S=H_{XX}$ or $H_S=H_{XY}$ spin chains, which is given by \begin{equation} \label{ecc: lindblad: lindblad XX or XY} \partial_t \rho_S = -i[H_S,\rho_S] + \gamma_+\left[ \sigma_1^+ \rho_S \sigma_1^- - \frac{1}{2}\{\sigma_1^- \sigma_1^+,\rho_S\} + e^{\beta h} \left( \sigma_1^- \rho_S \sigma_1^+ - \frac{1}{2}\{\sigma_1^+ \sigma_1^-,\rho_S\} \right) \right], \end{equation} where $\gamma_+ = 2\lambda(1-\tanh(\beta h/2))$ and $\sigma_1^\pm=\sigma_1^x \pm i \sigma_1^y$. 
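As a numerical sketch of the claims that follow, one can check that $\pi=e^{-\beta H_0}/Z_0$ with $H_0=(h/2)\sum_i \sigma_i^z$ is stationary under Eq.~(\ref{ecc: lindblad: lindblad XX or XY}) for the XX chain. We use only two sites, and the overall normalization of $H_{XX}$ as well as the parameter values are our own assumptions (only $[H_{XX},H_0]=0$ matters for this check):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site(op, i, n=2):
    """Embed a single-site operator at position i of an n-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else I2)
    return out

h, beta, lam = 2.0, 1.2, 0.3              # assumed parameter values
gamma_p = 2 * lam * (1 - np.tanh(beta * h / 2))

# assumed two-site XX Hamiltonian; only [H_XX, H0] = 0 is needed here
H_XX = site(sx, 0) @ site(sx, 1) + site(sy, 0) @ site(sy, 1) \
       + (h / 2) * (site(sz, 0) + site(sz, 1))
H0 = (h / 2) * (site(sz, 0) + site(sz, 1))

sp = site(sx + 1j * sy, 0)                # sigma_1^+ as defined in the text
sm = sp.conj().T

def liouvillian(H, jumps):
    """Matrix of rho -> -i[H,rho] + sum_r rate_r D[J_r](rho), row-major vec."""
    d = H.shape[0]
    Id = np.eye(d)
    L = -1j * (np.kron(H, Id) - np.kron(Id, H.T))
    for J, rate in jumps:
        JdJ = J.conj().T @ J
        L += rate * (np.kron(J, J.conj())
                     - 0.5 * np.kron(JdJ, Id) - 0.5 * np.kron(Id, JdJ.T))
    return L

L = liouvillian(H_XX, [(sp, gamma_p), (sm, gamma_p * np.exp(beta * h))])

pi = expm(-beta * H0)
pi /= np.trace(pi)
residual = np.max(np.abs(L @ pi.flatten()))
print(residual)  # close to zero: pi is stationary although H0 differs from H_XX
```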
Analogously to the case of the maps, the Lindblad equation for the XX chain has an equilibrium steady state $\pi=e^{-\beta H_0}/Z_0$ with $H_0 = (h/2)\sum_i \sigma_i^z$. Because $[H_0, \sigma_1^\pm] = \pm h\sigma_1^\pm$, Eqs.~(\ref{ecc:L_r eigenoperator}) and (\ref{ecc:L_rdag eigenoperator}) are verified, and since we also have that $[H_{XX},H_0]=0$, the Lindblad equation for the XX chain has quantum detailed balance with respect to the state $\pi=e^{-\beta H_0}/Z_0$. An initial condition $\rho_S(0)$ will relax to the equilibrium state $\pi$ according to this Lindblad equation, and even though the Lindblad equation is time independent, work has to be performed or extracted in this relaxation process according to Eq.~(\ref{ecc: simplified work lindblad lim}). In the case of the XY chain, one can find the steady state numerically and verify that it is a NESS, i.e., quantum detailed balance is not satisfied and work is continuously being done and dissipated as heat in the steady state. The work that accompanies the process is impossible to obtain from the sole knowledge of the Lindblad equation. We can physically understand this work by considering that this Lindblad dynamics arises due to the active role of an agent that continuously refreshes the bath interacting with the system in such a way that he or she imposes the form of the dissipator (the Lindblad operators $L_r$) acting on the system. \section{Conclusion} \label{secCONC} We have studied the stochastic thermodynamics of CPTP quantum maps, specifically discussing the properties of maps with equilibrium. Thermal maps are a very important type of map with equilibrium because they represent the passive effect of a heat bath on a system, whereas non-thermal maps with equilibrium require the active intervention of an agent, manifested by the work required to perform the process represented by ${\mathcal E}$ even when the system Hamiltonian is non-driven.
Non-thermal maps with equilibrium are, in a sense, between thermal maps and maps with non-equilibrium steady states. They share thermal maps' simplicity of relaxing to the equilibrium state, and at the same time, they allow us to study energy exchanges between the system, bath and experimenter in a non-trivial case. The results that we have illustrated here are not restricted to quantum systems, but in quantum systems, the manipulation of the interaction with the environment is much better controlled; thus, the possibility of transforming the state of a system using {\it active} interactions $V(t)$ with the bath, as illustrated with the XX spin 1/2 chain, is already in the tool box of the experimentalist~\cite{ion-trap}. An implementation of the XX chain with trapped ions is possible, and it can be a candidate to investigate further aspects of quantum thermodynamics. The properties that we have derived for maps with equilibrium, namely, the local (in terms of system's operators) thermodynamic quantities and their fluctuation simplifications, could also be experimentally verified in NMR experiments \cite{NMRexp}. Stochastic thermodynamics for classical systems with Hamiltonian or stochastic dynamics (Langevin or discrete master equation) is usually formulated for systems with the properties that we associate with thermal operations: in the absence of driving, the system thermalizes to the corresponding Gibbs state, and the stochastic work is zero in non-driven systems. For instance, in classical stochastic thermodynamics, as well as for thermal maps, a system in equilibrium presents no entropy production fluctuation, which also implies no work fluctuations. In the quantum case, one can consider these fluctuations simultaneously if the equilibrium state commutes with the Hamiltonian of the system. 
We have observed that in the XX spin 1/2 chain that we have studied, where the dynamics is controlled by a non-thermal map with equilibrium and where the equilibrium state commutes with the Hamiltonian, it is possible to have fluctuating work and no entropy production fluctuations. Meanwhile, open systems in contact with a single heat bath, whose dynamics is represented by a map with a NESS, present a complexity similar to that of open systems passively coupled to two heat baths~\cite{pillet}. A few properties for this case were also studied, such as the statistics of work for systems starting in the Gibbs $\omega_\beta(H_S)$ state, Eq.~(\ref{WFT}). We showed that the results for a single map are also valid for concatenations of them. A process corresponding to a concatenation of maps is performed with some work cost (or gain) due to the switching on and off of the coupling between the system and the new and fresh copy of the bath, even for time-independent $H_S$ and $V$. When we iterate CPTP maps with equilibrium and take the continuous limit of Sec.~\ref{sec: Lindblad}, the resulting Lindblad equation has an equilibrium steady state and satisfies detailed balance with respect to it. For CPTP maps with a NESS, the resulting Lindblad equation has a NESS. Remarkably, the thermodynamic quantities in the continuous limit can be written locally (in terms of the system's operators) when the map used has equilibrium. Normally, one cannot say much about the thermodynamics of an \textit{a priori} given Lindblad equation unless it satisfies the detailed balance condition with respect to the Gibbs state.
An important conclusion from our analysis is that even though the given Lindblad equation may be time independent, i.e., no time-dependent driving in the system Hamiltonian and time-independent dissipative parts, the process described by the Lindblad equation is conducted by performing (or extracting) work, which we interpret as the work cost (or gain) of coupling the system to the bath with the given Lindblad operators. Lindblad operators arising from thermal maps have zero work cost but a Lindblad dynamics describing relaxation toward a non-Gibbsian equilibrium state occurs with a work contribution; see Eq.~(\ref{ecc: simplified work lindblad lim}). Our analysis (see also~\cite{Barra}) could provide a procedure to derive a consistent thermodynamics for a broad class of Lindblad evolutions. \section*{Acknowledgments} F.B. gratefully acknowledges the financial support of FONDECYT grant 1151390. C.L. gratefully acknowledges the financial support of CONICYT through Becas Magister Nacional 2016, Contract No. 22161809. \section*{Appendices} \begin{appendices} \section{Proof of Eq.~(\ref{Mmicroreversibility})} \label{sec:appendix microrev} Eq.~(\ref{Mmicroreversibility}) is an equality between operators in the system Hilbert space ${\mathcal H}_S$, which we can write as \[ \bra{\tilde i}\tilde U\ket{\tilde j}=\Theta_S \bra{j} U\ket{i}^\dag \Theta_S^\dag, \] where $\tilde U=\Theta U^\dag\Theta^\dag$ and $\ket{\tilde i}=\Theta_B \ket{i}$. We prove it by showing that the matrix elements in an arbitrary basis are the same. 
Consider the operator at the left-hand side, and evaluate its matrix elements with states $\ket{\tilde a}=\Theta_S\ket{a}$ and $\ket{\tilde b}=\Theta_S\ket{b}$, i.e., \[ \left(\ket{\tilde a},\bra{\tilde i}\tilde U\ket{\tilde j}\ket{\tilde b}\right)_{\mathcal{H}_S}=\left(\ket{\tilde a\tilde i},\tilde U\ket{\tilde b\tilde j}\right)_{\mathcal{H}_{\rm tot}}= \left(\Theta\ket{ a i},\Theta U^\dag\ket{ bj}\right)_{\mathcal{H}_{\rm tot}}=\left(U^\dag\ket{bj},\ket{ a i}\right)_{\mathcal{H}_{\rm tot}}=\bra{bj}U\ket{ai}. \] Let us now evaluate the same element for the operator on the right-hand side \[ \left(\ket{\tilde a},\Theta_S \bra{j} U\ket{i}^\dag \Theta_S ^\dag \ket{\tilde b}\right)_{\mathcal{H}_S}=\left(\Theta_S\ket{ a},\Theta_S \bra{j} U\ket{i}^\dag \ket{ b}\right)_{\mathcal{H}_S} =\left( \bra{j} U\ket{i}^\dag \ket{ b},\ket{ a}\right)_{\mathcal{H}_S}=\left( \ket{ b},\bra{j} U\ket{i}\ket{ a}\right)_{\mathcal{H}_S}=\bra{bj}U\ket{ai}, \] and therefore, we have the equality. Note that we have used the property $\left(\Theta \phi,\Theta\psi\right)=\left( \psi,\phi\right)$ that the anti-unitary operators $\Theta$ and $\Theta_S$ satisfy in the corresponding Hilbert space. \section{ $p(w) = \tilde p(w)$ if the Hamiltonian $H_\text{tot}$ is invariant under time reversal.} \label{appendix: bkwequivalence} For the spin systems that we consider in our analysis and the time reversal operator $\Theta$ defined in Section \ref{sec.aps}, we have the equality $\tilde U = U$, which equivalently means that the Hamiltonian $H_\text{tot}$ is invariant under time reversal. We will now show that this implies that the work distribution of the forward process equals the work distribution of the reversed process, $p(w) = \tilde p(w)$. 
For each trajectory $\gamma = \{n,k,m\}$ in the forward process, there is an associated backward trajectory $\tilde \gamma = \{\tilde m,\tilde k,\tilde n \}$, but now, we will compare $\gamma$ with another trajectory belonging to the backward trajectories, that is, $\tilde \gamma ' = \{\tilde n, -\tilde k, \tilde m \}$, where if $k=ij$ is associated with the Kraus operator $M_{k=ij}$, then $-\tilde k$ corresponds to the operator $\tilde M_{ij}$ (see Eq.~(\ref{kraussHeat-tilde})). Note that if we measure the system energy at the beginning and at the end, $\ket{a_n}$ and $\ket{b_m}$ correspond to the energy eigenstates. Because the Hamiltonian is time reversal invariant, we have that $\Theta_S \ket{a_n} = \ket{a_n}$ and $\Theta_S \ket{b_m} = \ket{b_m}$ (the same is true for the energy eigenstates of the bath, $\Theta_B \ket i = \ket i$ and $\Theta_B \ket j = \ket j$); thus, according to Eq.~(\ref{kraussHeat-tilde}), we obtain $\tilde M_{ij}=M_{ij}$. These trajectories are as follows: \begin{equation} \ket{n} \xrightarrow{k} \ket{m}, \quad p(\gamma) = p_i(n) \abs{\bra{b_m} M_{ij} \ket{a_n}}^2, \quad \beta w_\gamma = \ln \frac{p_i(n)}{p_i(m)} - \beta(\varepsilon_i - \varepsilon_j) \end{equation} \begin{equation} \ket{\tilde n} \xrightarrow{-\tilde k} \ket{\tilde m}, \quad \tilde p(\tilde \gamma') = p_i(n) \abs{\bra{\tilde b_m} \tilde M_{ij} \ket{\tilde a_n}}^2, \quad \beta w_{\tilde \gamma'} = \ln \frac{p_i(n)}{p_i(m)} - \beta(\varepsilon_i - \varepsilon_j). \end{equation} Those two probabilities are the same: $p(\gamma) = \tilde p(\tilde \gamma ')$ and $w_\gamma = w_{\tilde \gamma'}$. We conclude that every trajectory in the forward process is also present in the backward process; thus, the work distribution is the same, $p(w) = \tilde p(w)$. 
To derive an equivalent relation for the entropy production, $p(\Delta_i s) = \tilde p(\Delta_i s)$, we analogously require that $\Theta\ket{a_n} = \ket{a_n}$ and $\Theta \ket{b_m} = \ket{b_m}$; however, these states are now the eigenstates of the initial and final density matrices. We therefore need time-reversal-invariant initial and final density matrices, i.e., $\Theta \rho \Theta^\dag = \rho$ and $\Theta \rho' \Theta^\dag = \rho'$. In the stationary state of the XX spin chain, this is fulfilled since $\rho_{XX} = e^{-\beta H_0}/Z_0$ is invariant under time reversal with $\Theta$ defined as in Section \ref{sec.aps}. However, for the XY stationary state, this is not true since $\rho_{XY} \neq \Theta \rho_{XY} \Theta^\dag$. The previous analysis can readily be generalized to a concatenation of maps, and the same results hold. \section{ Thermal maps do not contribute to work fluctuations} \label{appe.thermal maps dont contribute} To demonstrate that in the thermodynamic cycle of Section \ref{sec.cycle} the work probability distribution is determined only by the driving, $p_\text{cycle}(w) = p_\text{drive}(w)$, we assume that there is just one thermalization map. The generalization to many thermalization maps, however, is straightforward. The work probability distribution is \begin{equation} \begin{split} p(w) &= \sum_\gamma p(\gamma) \, \delta(\epsilon_m - \epsilon_n - q_\gamma - w) \\ &= \sum_\gamma p_i(n)p_{i_1}p_{i_2}\, |\bra{\epsilon_m,\varepsilon_{j_1},\varepsilon_{j_2}} U_2 U_1 \ket{\epsilon_n,\varepsilon_{i_1},\varepsilon_{i_2}}|^2 \, \delta(\epsilon_m - \epsilon_n - q_\gamma - w), \end{split} \end{equation} where $U_1$ is responsible for the driving and $U_2$ is responsible for the thermalization. The probabilities are $p_i(n) = e^{-\beta \epsilon_n}/Z_S$ and $p_{i_k} = e^{-\beta \varepsilon_{i_k}}/Z_B$, and $\delta(\cdot)$ is a Kronecker delta.
Expanding the transition probability and inserting two identities in the system Hilbert space $\mathcal H_S$, we obtain \begin{equation} \nonumber p(w) = \sum_{\gamma,\alpha,\beta} p_i(n)p_{i_1}p_{i_2} \, \bra{\epsilon_m,\varepsilon_{j_2}} U_2 \ket{\epsilon_\alpha,\varepsilon_{i_2}} \bra{\epsilon_\alpha,\varepsilon_{j_1}} U_1 \ket{\epsilon_n,\varepsilon_{i_1}}\bra{\epsilon_n,\varepsilon_{i_1}}U_1^\dag \ket{\epsilon_\beta, \varepsilon_{j_1}} \bra{\epsilon_\beta,\varepsilon_{i_2}} U_2^\dag \ket{\epsilon_m,\varepsilon_{j_2}} \, \delta(\epsilon_m - \epsilon_n - q_\gamma - w). \end{equation} Since $U_2$ is thermal, $\epsilon_m + \varepsilon_{j_2} = \epsilon_\alpha + \varepsilon_{i_2} = \epsilon_\beta + \varepsilon_{i_2}$; thus, $\alpha=\beta$ because the system Hamiltonian is non-degenerate. Additionally, we can replace $\epsilon_m = \epsilon_\alpha + \varepsilon_{i_2} - \varepsilon_{j_2}$ in the delta, obtaining \begin{equation} p(w) = \sum_{\gamma,\alpha} p_i(n)p_{i_1}p_{i_2} \, |\bra{\epsilon_{\alpha},\varepsilon_{j_1}} U_1 \ket{\epsilon_n,\varepsilon_{i_1}}|^2 \, |\bra{\epsilon_m,\varepsilon_{j_2}} U_2 \ket{\epsilon_\alpha, \varepsilon_{i_2}}|^2 \delta(\epsilon_\alpha-\epsilon_n +\varepsilon_{j_1} - \varepsilon_{i_1}-w). \end{equation} Summing over $m$ and $j_2$, the second transition probability becomes a trace, which is equal to one. Finally, we arrive at \begin{equation} \label{ecc:apenddix_work distribution cycle} p(w) = \sum_{\alpha,j_1,n,i_1} p_i(n)p_{i_1}\, |\bra{\epsilon_\alpha,\varepsilon_{j_1}} U_1 \ket{\epsilon_n,\varepsilon_{i_1}}|^2 \delta(\epsilon_\alpha - \epsilon_n + \varepsilon_{j_1} - \varepsilon_{i_1}-w). \end{equation} Note that this quantity is exactly the work distribution probability of the driving alone; thus, $p_\text{cycle}(w) = p_\text{drive}(w)$. We remark that this also implies that $p_\text{cycle}(w) = \tilde p_\text{cycle}(w)$ even though the protocol is not symmetric (because in the backward process, the relaxation map acts first).
\end{appendices}
\section{Introduction} \label{sec:introduction} Large-volume liquid-scintillator detectors (LSDs) are often used in neutrino-physics experiments, either for neutrino detection or as a veto detector. For detection of the scintillation photons, large-area spherical photomultiplier tubes (PMTs) are typically employed. In this paper, we explore the possibility of reading out such large-volume LSDs with Wavelength-shifting Optical Modules (WOMs). This technology might be used in future neutrino-physics experiments or for the Surrounding Background Tagger of the proposed SHiP experiment~\cite{Anelli:2015pba} at the CERN SPS. WOMs as large-area photodetectors of low cost and low noise were first proposed for an upgrade of the IceCube experiment~\cite{WOMs}. They are considered a promising replacement for the large-area spherical PMTs. A WOM consists of a cylinder or tube, typically made out of quartz glass or polymethyl methacrylate (PMMA), the surface of which is coated with a wavelength-shifting paint. The WOM is placed inside a UV-transparent cylindrical vessel, which is of larger diameter than the WOM and separates it from the detector material, which can be either a (liquid) scintillator or water/ice to detect Cherenkov photons. If a UV photon hits the WOM, it is absorbed inside the wavelength-shifting paint and subsequently a secondary photon of longer wavelength in the range of visible light is emitted isotropically. The WOM is designed to act as a light guide providing total reflection at the glass-air interface. Neglecting transport losses, the probability that a secondary photon is transported via total reflection to either end of the WOM, where it can then be detected by a photosensor, is about $75~\%$~\cite{WOMs}.
If only one end of the WOM is instrumented with a photosensor and the other end is open, a fraction of the photons transported to the uninstrumented end will also be totally reflected there and eventually detected at the instrumented WOM end. The detection principle is depicted in Fig.~\ref{fig:WOM_Principle}. \begin{figure} \centering \subfigure[]{ \label{fig:WOM_Principle} \includegraphics[scale=0.35]{WOM_Sketch_v4.pdf}} \subfigure[]{ \label{fig:EmissionAbsorptionSensitivities} \includegraphics[scale=0.35]{PMT_LSD.pdf}} \caption{(a) The working principle of a wavelength-shifting optical module (WOM). (b) Emission spectra of the liquid scintillator (LAB+PPO) and the wavelength-shifting paint used on the WOMs (Bis-MSB). Also shown are the quantum efficiency (QE) of the photomultiplier and the photo detection efficiency of the SiPMs coupled to the WOMs in this setup, the latter at 3~V as quoted by the producer Hamamatsu Photonics.} \label{fig:WOM_PrincipleSpectra} \end{figure} The photosensor only needs to cover the exit area of the WOM tube or cylinder. The gain in effective detection area with respect to the photosensor area is thus mainly defined by the WOM length. For a WOM tube, the detection area can be further increased by enlarging the tube diameter and guiding the secondary light exiting at the end of the WOM tube via a light guide to a photosensor. Using low-radioactivity quartz glass or PMMA and given the small photosensor area, the noise of such a large-area WOM is expected to be low. In this paper, we report on proof-of-principle measurements with an LSD that is viewed by WOM tubes transporting the light via a light guide either to a photomultiplier tube (PMT) or to an array of large-area silicon photomultipliers (SiPMs). The measurements were performed at the CERN SPS test-beam facility using different particle types and momenta.
\section{Liquid-Scintillator Detector and WOMs} \label{sec:liquidscintillatordetector} A box with a volume of $50 \times 50 \times 25~{\rm cm}^3$ made out of 1~cm-thick black-colored ABS plastic plates glued together is filled with a liquid scintillator consisting of linear alkylbenzene (LAB)~\footnote{linear alkylbenzene (type Hyblene 113) produced by Sasol Italy S.p.A.} as a solvent and 1.5~g/l Diphenyloxazole (PPO)~\footnote{2,5-Diphenyloxazole (ca. $99~\%$, for scintillation) produced by Carl Roth GmbH + Co. KG Schoemperlenstr. 1-5, 76185 Karlsruhe, Germany} as a fluor. These liquid-scintillator constituents are used in state-of-the-art large-volume $\nu$ experiments such as SNO+~\cite{OKeeffe:2011dex} and JUNO~\cite{Djurcic:2015vqa}. PPO emits in the wavelength range between about 330~nm and 450~nm with the emission maximum at about 360~nm (see Fig.~\ref{fig:EmissionAbsorptionSensitivities}). Laboratory measurements have shown that the absorption length of this liquid scintillator is about 1~m at 380~nm and 0.5~m at 360~nm~\cite{Hackbarth}. To avoid detector performance deterioration induced by oxygen, the LSD is constantly flushed with dry nitrogen gas. \\ \begin{figure} \begin{center} \includegraphics[scale=0.50]{Box_Sketch_v2.pdf} \end{center} \caption{A sketch of the liquid-scintillator detector module used for the test-beam measurements. (Left) front view, (middle) side view, (right) top view showing the definition of the beam incident angle in the test-beam measurements.} \label{fig:lsbox} \end{figure} A sketch of the liquid-scintillator detector module is shown in Fig.~\ref{fig:lsbox}. The inner sides of the front and back covers are lined with Tyvek~\footnote{Tyvek foil, registered trademark of the DuPont company} foil~\cite{Bellini:2011yd} to allow diffuse reflection of UV photons. At four positions of the module cover plate, holes allow the installation of the WOMs. For the test-beam measurements, the LSD was equipped with three WOMs.
To enable total reflection at its surface, the WOM needs to be surrounded by air. It is thus placed inside a 20~cm-long UV-transparent PMMA vessel which is glued to the module cover and separates the WOM from the liquid scintillator. Fig.~\ref{fig:PMMA_Vessel} shows a photo of a PMMA vessel~\cite{MaximilianEhlert-Bachelorarbeit}. This PMMA vessel is made out of two tubes. The inner tube is sealed against the liquid scintillator via a glued-on disk of 4~mm-thick PMMA, and both tubes are connected on the opposite side by a ring of the same material. This geometrical shape of the PMMA vessel with an outer and an inner PMMA tube allows liquid scintillator to also fill the volume enclosed by the inner walls of the PMMA vessel, increasing the photon detection efficiency for particles traversing the module at the location of the WOM. \\ \begin{figure} \centering \subfigure[]{ \label{fig:PMMA_Vessel} \includegraphics[scale=0.50]{WOM_Mantel_1.jpeg}} \subfigure[]{ \label{fig:LightGuide_WOM} \includegraphics[scale=0.50]{WOM_dip.jpeg}} \caption{(a) The double-walled PMMA vessel to separate the WOM from the liquid scintillator. (b) The light guide glued to the dip-coated WOM tube to be placed in the PMMA vessel.} \label{fig:PMMA_Vessel2_LightGuide_WOM} \end{figure} The WOMs themselves are made out of 20~cm long PMMA tubes of 6~cm diameter, with a wall thickness of 4~mm. They are dip-coated on both sides with a wavelength-shifting paint developed by D.~Hebeker~\cite{WOM-Masterarbeit-DustinHebeker} that consists of Toluene (100~ml) as a solvent, Paraloid B72 (25~g) as a binder, and the two wavelength shifters Bis-MSB (0.15~g) and p-Terphenyl (0.3~g). The solvent is heated to just below the boiling point in order to dissolve the binder and the wavelength shifters, which are added in small quantities.
The utilised wavelength shifters absorb photons with high probability in the range between 300~nm and 400~nm and emit secondary photons in the range between 400~nm and 520~nm with the emission maximum at 420~nm (see Fig.~\ref{fig:EmissionAbsorptionSensitivities}). Bis-MSB is spectrally particularly well-matched to absorb the PPO emission spectrum~\cite{Djurcic:2015vqa}.\\ Using acetone, one end of the WOM tube is fused onto a 3.5~cm-long light guide made out of PMMA A~\footnote{produced by K\"umpel Kunststoff-Verarbeitungs GmbH}. Its shape was optimised for minimal light losses while mapping the WOM ring shape onto a disc of 2.8~cm diameter~\cite{Lightguide-Bachelorarbeit}. Fig.~\ref{fig:LightGuide_WOM} shows the light guide and WOM tube glued together~\cite{MaximilianEhlert-Bachelorarbeit}.\\ During the test-beam measurements, two WOMs were each viewed by a Hamamatsu R1924A PMT, called PMT1 and PMT2 in the following, with a photocathode diameter of 2.3~cm, thus covering only the inner 67.5~\% of the light guide exit. A third WOM was instrumented with an array of eight Hamamatsu S13360-6025PE SiPMs, each covering an area of $6 \times 6~{\rm mm}^2$. They are placed on a PCB designed by the University of Geneva, as shown in Fig.~\ref{fig:SiPM_PCB}, where one SiPM is positioned in the centre of the PCB and the others around it. The diameter of the PCB is 3~cm and the SiPMs cover $40~\%$ of the light guide exit. While the SiPMs thus cover a smaller fraction of the light guide exit than the Hamamatsu R1924A PMT, they are partly placed on its outer regions. Both the PMTs and the SiPM array were optically coupled to the light guide exit with a silicone grease~\footnote{Baysilone-paste (medium viscosity) produced by Bayer}.
\begin{figure} \begin{center} \includegraphics[scale=0.25]{PCB_8SiPM_WOM.jpg} \end{center} \caption{The eight SiPMs, each with an area of $6 \times 6~{\rm mm}^2$, mounted on the PCB, which is placed on the exit of the WOM light guide (visible in the right-hand side of the picture).} \label{fig:SiPM_PCB} \end{figure} During the test beam, the SiPMs were operated with a bias voltage of 59.2~V. The SiPM array was read out by an eight-channel MUSIC\,R1 ASIC~\cite{MUSIC-ASIC} providing individual channel readout as well as the analog sum of all eight channel signals. This approach had already been validated for the readout of a plastic-scintillator bar~\cite{Betancourt:2017sex}. \section{Test-Beam Setup and Data Taking} \label{sec:testbeamsetup} Data were taken for three days in September 2017 with beams of hadrons, muons or electrons at the CERN SPS North Area beamline T2 H2. The SPS provided spills every 13~s or 36~s with spill lengths varying between 4~s and 5.5~s. For the secondary hadron and muon beams, a momentum of 150~GeV/c was chosen, while the momentum for the tertiary electron beam was 20~GeV/c. Depending on the particle type, typical particle rates during a spill varied between 0.1 and 1~kHz and the typical diameter of the beam at the position of the LSD module was on the order of a few cm.\\ A 'DESY table' allowed for horizontal and vertical movement of the detector module, enabling measurements with the beam spot at several reference points. Fig.~\ref{fig:Testbeam_Box_Scetch} shows a sketch of the module indicating these positions as well as the positions of the WOMs with a label indicating whether a PMT or the SiPM array was used.
In addition, the module could be rotated around its vertical symmetry axis to change the incident angle $\theta_{inc}$ of the particles with respect to the front and back covers of the LSD module, with $\theta_{inc}=0^{\circ}$ (see Fig.~{\ref{fig:lsbox}}, right-hand plot) denoting the default position where the beam was perpendicular to the detector module cover.\\ \begin{figure} \begin{center} \includegraphics[scale=0.50]{Box_Positions_OldRotation.pdf} \end{center} \caption{Sketch of the LSD module indicating the beam positions on the detectors with reference numbers. The WOMs equipped either by a PMT or by a SiPM array are labeled accordingly.} \label{fig:Testbeam_Box_Scetch} \end{figure} Two plastic scintillators (EJ-228) with dimensions $2 \times 2 \times 2~{\rm cm}^3$ acted as the trigger and were placed 0.75~m upstream of the LSD module. The scintillators were viewed from both sides by Hamamatsu R4998 PMTs via 5~cm long light guides, hereafter referred to as $PMT_{trigg1}$ and $PMT_{trigg2}$, and $PMT_{trigg3}$ and $PMT_{trigg4}$, respectively. Three veto scintillators were installed between the two trigger scintillators to suppress events with additional particles not passing the trigger scintillators. Each veto scintillator was 50~cm long in horizontal direction, 6~cm in vertical direction, and 1~cm thick. The veto scintillators were slightly overlapping in vertical direction so that they covered an area of 50~cm $\times$ 17~cm. The central veto scintillator had a hole of 1.5~cm diameter, thus defining the accepted position of the beam spot on the LSD module. Fig.~\ref{fig:Trigger_Veto} shows a sketch of the plastic-scintillator trigger-and-veto system.
\begin{figure} \begin{center} \includegraphics[scale=0.50]{Veto_Sketch_v2.pdf} \end{center} \caption{Sketch of the plastic-scintillator trigger-and-veto system located 75~cm upstream of the LSD module.} \label{fig:Trigger_Veto} \end{figure} All signals from the trigger and veto scintillators, from the PMTs of two WOMs, and from the SiPM array of the third WOM (seven of the eight individual channels and the sum channel) were sent to a 16-channel WaveCatcher module~\cite{WaveCatcher} for data acquisition. The module was operated at a 3.2~GS/s sampling rate, thus covering a 320~ns time window. Event triggers were only accepted if all four trigger PMTs fired and no signal from the veto counters was present within a time window of 30~ns. Data taken with constant conditions were grouped in runs. The waveforms of the PMT and SiPM signals of each event in a run were stored in corresponding ROOT files for offline data analysis. The typical number of events for runs with the muon or hadron beam was about 5000, while for the electron beam, due to lower intensity, the typical number was about 600.\\ During some specific runs, a 3.7~cm thick stainless steel plate was additionally placed in front of the LSD module. This allowed us to study the LSD response to hadronic or electromagnetic (pre-)showers in the case where a hadron or electron traversed the LSD module, and more generally the response of the LSD to higher photon yields.\\ During data taking, the output signal of the WOM with PMT1 was observed to be much smaller than the output signal of the WOM with PMT2. Although this was traced back to insufficient optical coupling between the WOM light guide and PMT1, the problem could not be solved during the test-beam data taking.
Therefore, only waveforms for the WOM with PMT2 and the WOM with the SiPM array are further considered in the offline analysis.\\ \section{Data Analysis and Results} \label{sec:dataanalysis} \subsection{Waveform analysis and signal definition} \label{subsec:waveform} Fig.~\ref{fig:Waveforms_PMT2} shows a typical waveform of a signal of PMT2 for an event in which the trigger scintillators have fired. Fig.~\ref{fig:Waveforms_SiPMs} shows a typical waveform of an individual SiPM channel, while Fig.~\ref{fig:Waveforms_SiPMs_SUM} shows the sum signal of all eight SiPMs for the same event. \begin{figure} \centering \subfigure[]{ \label{fig:Waveforms_PMT2} \includegraphics[scale=0.35]{Waveforms_PMT2.pdf}} \subfigure[]{ \label{fig:Waveforms_SiPMs} \includegraphics[scale=0.35]{Waveforms_SiPMs.pdf}} \subfigure[]{ \label{fig:Waveforms_SiPMs_SUM} \includegraphics[scale=0.35]{Waveforms_SiPM_SUM.pdf}} \caption{Examples of a waveform for a single event from the hadron beam pointing to position 0 with an incident angle of $0^{\circ}$: (a) the WOM with PMT2, (b) one individual SiPM of the WOM with SiPMs, and (c) the sum signal of all SiPMs.} \label{fig:Waveforms_PMT2_SiPMs} \end{figure} For the PMTs, the waveforms are then integrated within a 70~ns time window starting 5~ns before the signal (see Fig.~\ref{fig:TimeIntegrated_Distribution_PMT}). The integral is converted from ${\rm mV} \times {\rm ns}$ to the number of photoelectrons ($N_{\rm pe}$) by means of a dedicated laboratory calibration measurement using the LED pulser and digitizer from the CAEN educational toolkit~\cite{CAENEducationalToolkit}. In the case of the SiPM array, the maximum amplitude of the waveform of each individual SiPM channel within a 70~ns time window starting 5~ns before the signal is converted to the number of photoelectrons ($N_{\rm pe}$) using a corresponding calibration measurement.
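The charge-integration step described above can be sketched as follows. This is a minimal illustration, not the analysis code of this work: the 3.2~GS/s sampling rate is taken from the WaveCatcher settings quoted in the text, while the calibration constant `gain_mVns_per_pe` is a placeholder standing in for the LED calibration.

```python
import numpy as np

DT_NS = 1.0 / 3.2  # ns per sample at the 3.2 GS/s WaveCatcher sampling rate

def charge_to_npe(waveform_mV, signal_start_ns, gain_mVns_per_pe,
                  window_ns=70.0, pre_ns=5.0):
    """Integrate a baseline-subtracted waveform in a 70 ns window starting
    5 ns before the signal and convert the charge (mV*ns) to photoelectrons
    with an assumed calibration constant (placeholder for the LED calibration)."""
    i0 = max(0, int(round((signal_start_ns - pre_ns) / DT_NS)))
    i1 = min(len(waveform_mV), i0 + int(round(window_ns / DT_NS)))
    integral_mVns = float(np.sum(waveform_mV[i0:i1])) * DT_NS
    return integral_mVns / gain_mVns_per_pe
```

For example, a rectangular 10~mV pulse lasting 20~ns carries a charge of 200~mV$\times$ns, i.e.\ 20 photoelectrons for an assumed gain of 10~mV$\times$ns per photoelectron.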
In the subsequent analysis, $N_{\rm pe}$ for the whole WOM is then estimated by adding up the seven individually read-out SiPM channels and scaling the result by the factor 8/7 in order to account for the eighth SiPM. For timing measurements, the sum signal from the MUSIC board is used. \begin{figure} \centering \includegraphics[scale=0.35]{integral_mVns_PMT2.pdf} \caption{ Time-integrated spectrum for the WOM with PMT2 obtained for events from the hadron beam pointing to position 0 with an incident angle of $0^{\circ}$.} \label{fig:TimeIntegrated_Distribution_PMT} \end{figure} To define the arrival time of the signals for the WOM with PMT2 (SiPM array), the digital constant fraction technique has been applied~\cite{Delagnes}: the maximum of the signal within a time window of 320~ns is searched for, and then the earliest time before the maximum at which the signal reaches $20~\%$ ($30~\%$) of the maximum signal height is taken. This threshold value for the constant fraction discrimination is chosen as the result of an optimization procedure that minimizes the standard deviation of the arrival time as a function of the threshold. For the WOM with the SiPM array, only the sum channel of the SiPMs is considered if not explicitly stated otherwise. It was explicitly checked that there are no events with more than one trigger signal within a time window of 320~ns, so that the probability of two beam particles hitting the LSD module in one event is negligible. \subsection{Photo detection efficiency of the liquid-scintillator detector} \label{subsec:efficiency} For the different particle types, the photo detection efficiency of the LSD module looking only at a specific WOM is determined as a function of $N_{\rm pe}$ for different beam positions on the LSD module and an incident angle of $0^{\circ}$.
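The digital constant-fraction timing described above can be sketched as follows; the linear interpolation between samples is an illustrative refinement not spelled out in the text.

```python
import numpy as np

def cfd_time_ns(waveform_mV, fraction=0.2, dt_ns=1.0 / 3.2):
    """Digital constant-fraction discrimination: locate the waveform maximum,
    walk backwards to the earliest contiguous sample before it at which the
    signal still reaches `fraction` of the maximum, and interpolate linearly
    between samples for sub-sample precision."""
    imax = int(np.argmax(waveform_mV))
    threshold = fraction * waveform_mV[imax]
    i = imax
    while i > 0 and waveform_mV[i - 1] >= threshold:
        i -= 1
    if i == 0:
        return 0.0
    y0, y1 = waveform_mV[i - 1], waveform_mV[i]
    frac = (threshold - y0) / (y1 - y0) if y1 != y0 else 0.0
    return (i - 1 + frac) * dt_ns
```

With `fraction=0.2` this reproduces the 20\% threshold used for the PMT; for the SiPM sum channel one would call it with `fraction=0.3`.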
The photo detection efficiency is defined as the ratio between the number of events with a WOM signal above a certain photoelectron threshold and the total number of triggered events. \begin{figure} \centering \subfigure[]{ \label{fig:Efficiency_PMT2_2PE} \includegraphics[scale=0.35]{Efficiency_2PE_PMT.pdf}} \subfigure[]{ \label{fig:Efficiency_SiPMs_2PE} \includegraphics[scale=0.35]{Efficiency_2PE_noSP_SiPM_amp.pdf}} \caption{Photo detection efficiency for a threshold of $N_{\rm pe} \ge 2$ for (a) the WOM with PMT2 and (b) the WOM with SiPMs for different beam positions (indicated by black crosses) at an incident angle of $0^{\circ}$. Efficiency numbers are quoted from bottom to top for hadrons, muons, and electrons. Numbers for positions 10 and 11 for the WOM with SiPMs are not quoted, since these later measurements were suffering from liquid-scintillator leakage into the PMMA vessel, lowering the efficiency.} \label{fig:Efficiency_PMT2_SiPMs_2PE} \end{figure} Figs.~\ref{fig:Efficiency_PMT2_2PE} and~\ref{fig:Efficiency_SiPMs_2PE} show the photo detection efficiency for a threshold of $N_{\rm pe} \ge 2$ for the WOM with PMT2 and for the one with the SiPM array, respectively, for different beam positions on the LSD module (indicated by black crosses) and an incident angle of $0^{\circ}$. The quoted uncertainties are estimated at the $68~\%$ confidence level using the Clopper-Pearson method. For the WOM with PMT2, the photo detection efficiency is presented for all positions and is better than $99.7\%$ for any particle type and all beam positions, except those where the beam directly hits another WOM (positions 6 and 7). For these positions, the photo detection efficiency of the WOM with PMT2 is significantly reduced, because a large part of the scintillation light is generated inside the other WOM and absorbed with very large probability right away in its wavelength-shifting paint.
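The efficiency and its Clopper-Pearson interval can be computed as in the following sketch, using SciPy's beta quantile function; the 68\% confidence level matches the one quoted in the text, and the event counts in the usage example are illustrative.

```python
from scipy.stats import beta

def efficiency_cp(n_pass, n_total, cl=0.68):
    """Efficiency n_pass/n_total with an exact (Clopper-Pearson) confidence
    interval, built from quantiles of the beta distribution."""
    alpha = 1.0 - cl
    lo = beta.ppf(alpha / 2.0, n_pass, n_total - n_pass + 1) if n_pass > 0 else 0.0
    hi = beta.ppf(1.0 - alpha / 2.0, n_pass + 1, n_total - n_pass) if n_pass < n_total else 1.0
    return n_pass / n_total, lo, hi
```

For example, 4990 events above threshold out of a typical run of 5000 triggers gives an efficiency of 0.998 with a sub-permille interval; the exact construction also behaves correctly at the boundary $n_{\rm pass}=n_{\rm total}$, where the upper limit is 1.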
A similar effect is seen for the WOM with the SiPM array: when the beam hits position 6, where another WOM is located, the photo detection efficiency is smaller than at other positions, although the effect is less pronounced compared to the photo detection efficiency of the WOM with PMT2. On the other hand, when the beam directly hits the studied WOM, the photo detection efficiency is very large (see position 7 in Fig.~\ref{fig:Efficiency_SiPMs_2PE}). \begin{figure} \centering \subfigure[]{ \label{fig:Efficiency_PMT2_5PE} \includegraphics[scale=0.35]{Efficiency_5PE_PMT.pdf}} \subfigure[]{ \label{fig:Efficiency_SiPMs_5PE} \includegraphics[scale=0.35]{Efficiency_5PE_noSP_SiPM_amp.pdf}} \caption{Photo detection efficiency for a threshold of $N_{\rm pe} \ge 5$ for (a) the WOM with PMT2 and (b) the WOM with SiPMs for different beam positions (indicated by black crosses) at an incident angle of $0^{\circ}$. Efficiency numbers are quoted from bottom to top for hadrons, muons, and electrons.} \label{fig:Efficiency_PMT2_SiPMs_5PE} \end{figure} Figs.~\ref{fig:Efficiency_PMT2_5PE} and~\ref{fig:Efficiency_SiPMs_5PE} show the photo detection efficiencies for a threshold of $N_{\rm pe} \ge 5$. For the WOM with PMT2 at position 3, the efficiency drops to $98~\%$, since the average number of photoelectrons for this position is only $O(10)$. This is visualized in Figs.~\ref{fig:PE_Distribution_PMT} and~\ref{fig:PE_Distribution_SiPM}, showing the $N_{\rm pe}$ distribution for the WOM with PMT2 and the WOM with the SiPM array at different beam positions.
As expected, the $N_{\rm pe}$ yield decreases with increasing distance between the particle track and the studied WOM, and when the beam hits a different WOM, the $N_{\rm pe}$ yield at the studied WOM is particularly low, as discussed above.\\ \begin{figure} \centering \subfigure[]{ \label{fig:PE_Distribution_PMT} \includegraphics[scale=0.35]{PMT_Fig11a.pdf}} \subfigure[]{ \label{fig:PE_Distribution_SiPM} \includegraphics[scale=0.35]{SiPM_Fig11b.pdf}} \caption{ $N_{\rm pe}$ distribution for muons and different beam positions at an incident angle of $0^{\circ}$. (a) WOM with PMT2 and (b) WOM with SiPM array. For the latter case, $N_{\rm pe}$ at position 7 is not shown, since it is much higher compared to positions 0 -- 6 because the particle beam directly hits the WOM with the SiPM array.} \label{fig:PE_Distribution_PMT_SiPM} \end{figure} Several factors may be optimised to improve the detected light yield: \begin{itemize} \item The PPO concentration of 1.5~g/l does not give the maximum light yield and could be increased~\cite{Djurcic:2015vqa}, with the positive side effect that the time resolution of the liquid scintillator would also improve~\cite{MarrodanUndagoitia:2009kq}. \item The finite absorption length of the liquid scintillator reduces the light yield. It could be increased by a dedicated $Al_{2}O_{3}$ purification procedure~\cite{Djurcic:2015vqa,Benziger:2007aq}. \item All inner walls of the LSD module could be equipped with a reflective foil or coating. \item The photo sensor coverage of the light guide exit can be optimised. In particular, the fact that the PMT does not cover the whole light guide exit reduces its collected light by a significant amount. The observed photoelectron yield for the WOM with PMT2 is smaller on average than that of the WOM with SiPMs (see Figs.~\ref{fig:PE_Distribution_PMT} and~\ref{fig:PE_Distribution_SiPM}), despite the larger coverage of the light guide exit by PMT2.
Indeed, the individual SiPM photoelectron yields show that a SiPM placed on the outer ring of the SiPM array detects about a factor of two more photons than the SiPM in the centre of the array. Compared to the PMT case, the detection efficiency is further increased in the SiPM case by the higher photo detection efficiency of the SiPMs, in particular in the wavelength range of the WLS emission spectrum (see Fig.~\ref{fig:EmissionAbsorptionSensitivities}). \item After the test-beam measurements, it has been found with a setup at the University of Mainz~\cite{WOMefficiencysetup} that the WOM with the light guide has a photo detection efficiency that is about one order of magnitude lower than that of a WOM without a light guide~\cite{MainzIceCubeTest}. Possible reasons are: I. significant light losses in the light guide due to its geometrical shape, which is supported by GEANT4-based simulations~\cite{GEANT4_1,GEANT4_2}; II. the quality of the gluing between the WOM tube and the light guide.\\ Problem II could be circumvented if the WOM tube and the light guide were made out of one piece. Problems I and II could both be avoided by not relying on a light guide but, e.g., placing SiPMs on a ring-like structure and coupling them directly to the end of the WOM tube. \item Furthermore, the PMMA used for the WOM vessels could be optimised, as the employed material was found after the test-beam measurements not to provide a maximal transmission coefficient in the PPO emission wavelength range. \end{itemize} \subsection{Time Resolution} \label{subsec:timeresolution} To measure the time resolution of the LSD module, the difference \begin{equation} \Delta t= t_{trigg}-t_{WOM} \end{equation} between the average time of the trigger scintillators \begin{equation} t_{trigg}=(t_{trigg1}+t_{trigg2}+t_{trigg3}+t_{trigg4})/4 \end{equation} and the WOM signal time $t_{WOM}$ from PMT2 or from the SiPM array, respectively, is determined.
Fig.~\ref{fig:TriggerTimeResolution} shows the time distribution of $(t_{trigg1}+t_{trigg2}-t_{trigg3}-t_{trigg4})/4$ for triggered events, which demonstrates the time resolution of the trigger system. The standard deviation $\sigma$ of a Gaussian fit to this distribution is found to be $18.9~{\rm ps}$. Since the time resolution of the LSD module is much larger due to the fast component of the scintillation decay time of 6~ns~\cite{MarrodanUndagoitia:2009kq}, the effect of $t_{trigg}$ on the $\Delta t$ resolution is negligible. \begin{figure} \begin{center} \includegraphics[scale=0.50]{TriggerTimeResolution.pdf} \end{center} \caption{Distribution of the time combination $(t_{trigg1}+t_{trigg2}-t_{trigg3}-t_{trigg4})/4$ of the four PMTs used for the two trigger scintillators. A Gaussian fit to the distribution gives a standard deviation of 18.9~ps.} \label{fig:TriggerTimeResolution} \end{figure} Figs.~\ref{fig:DeltaT_PMT2} and~\ref{fig:DeltaT_SiPMs} show typical $\Delta t$ spectra for the WOM with PMT2 and for the WOM with the SiPM array, respectively. The time resolution is then given by the standard deviation $\sigma$ of a Gaussian fit to this distribution. For the SiPM array, a dependence of the signal time on its amplitude was observed. The average time-walk effect could be parameterized by a 4th-order polynomial, fitted to the two-dimensional $\Delta t^{SiPMs}$ vs. $N_{\rm pe}$ distribution.
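The time-walk correction can be sketched as follows; the data in the usage example are synthetic, and the function is only a minimal stand-in for the fit to the two-dimensional $\Delta t^{SiPMs}$ vs.\ $N_{\rm pe}$ distribution.

```python
import numpy as np

def timewalk_correct(npe, dt_ns, order=4):
    """Fit the average amplitude dependence dt(N_pe) with a polynomial
    (4th order in the text) and subtract it from the measured times."""
    coeffs = np.polyfit(npe, dt_ns, order)
    return dt_ns - np.polyval(coeffs, npe)
```

Applied to a smooth synthetic time-walk curve, the corrected times are flat at the numerical-precision level; on real data, the residual spread after the correction is the time resolution.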
\begin{figure} \centering \subfigure[]{ \label{fig:DeltaT_PMT2} \includegraphics[scale=0.35]{DeltaT_PMT2.pdf}} \subfigure[]{ \label{fig:DeltaT_SiPMs} \includegraphics[scale=0.35]{DeltaT_SiPMs.pdf}} \caption{$\Delta t$ distributions for muons at an incident angle of $0^{\circ}$ and beam position 0: (a) WOM with PMT2, $\Delta t^{PMT2}$, (b) WOM with SiPMs, $\Delta t^{SiPMs}$.} \label{fig:DeltaT_PMT2_SiPMs} \end{figure} Fig.~\ref{fig:DeltaT_PMT2_SiPMs_PE} shows the time resolution as a function of the number of photoelectrons for the WOM with PMT2 and for the WOM with the SiPM array, after applying the time-walk correction. The time resolution improves with increasing number of photoelectrons and is better than 1~ns for $N_{\rm pe} > O(25)$ for both WOMs.\\ \begin{figure} \centering \includegraphics[scale=0.50]{DeltaT_PMT2_SiPMs_PE.pdf} \caption{$\sigma(\Delta t)$ as a function of the number of photoelectrons for different beam positions at an incident angle of $0^{\circ}$ for the WOM with PMT2 and the WOM with SiPMs.} \label{fig:DeltaT_PMT2_SiPMs_PE} \end{figure} In a real experiment, the position where a particle hits the LSD module might not be known from other sub-detector information. This adds a time uncertainty due to the geometrical extension of the LSD module, leading to different light-travel paths inside the liquid scintillator. By using the average time of both WOMs, the time response of the LSD module becomes less sensitive to this geometry effect. This can be seen in Fig.~\ref{fig:MeanAndStandardDeviationTimeResolution}, showing the mean of the individual time distributions for the two WOMs, $\Delta t^{PMT2}$ and $\Delta t^{SiPMs}$, as well as of their average, $(\Delta t^{PMT2}+\Delta t^{SiPMs})/2$, for various beam positions on the LSD module at an incident angle of $0^{\circ}$.
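Why averaging the two WOM times reduces the geometry effect can be illustrated with a toy one-dimensional model; the propagation speed and positions below are assumed values, not measurements, and the full 3D light propagation is ignored.

```python
import numpy as np

L_CM = 50.0    # separation of the two WOMs across the module
V_EFF = 20.0   # cm/ns, assumed effective light-propagation speed

x = np.linspace(5.0, 45.0, 9)    # hit positions along the WOM-WOM axis
t1 = x / V_EFF                    # light-travel time to the first WOM
t2 = (L_CM - x) / V_EFF           # light-travel time to the second WOM
t_avg = 0.5 * (t1 + t2)           # = L_CM / (2 * V_EFF), independent of x
```

In this toy model the individual arrival times vary by 2~ns across the module, while their average is constant: the position-dependent terms cancel exactly along the axis connecting the two WOMs.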
With the low detected photon yield of the current design, the LSD module has a time resolution of about 1~ns (represented by the error bars in Fig.~\ref{fig:MeanAndStandardDeviationTimeResolution}), independent of the beam position. \begin{figure} \begin{center} \includegraphics[scale=0.50]{Average_time.pdf} \end{center} \caption{Mean value of the $\Delta t^{PMT2}$, $\Delta t^{SiPMs}$, and $(\Delta t^{PMT2}+\Delta t^{SiPMs})/2$ distributions for various beam positions on the LSD module at an incident angle of $0^{\circ}$. The error bars represent the standard deviation (i.e.\ the time resolution) of the corresponding distribution.} \label{fig:MeanAndStandardDeviationTimeResolution} \end{figure} As expected, differences between the mean values of the average time distribution, $(\Delta t^{PMT2}+\Delta t^{SiPMs})/2$, for various beam positions are much smaller than those of the individual times. The mean value of $(\Delta t^{PMT2}+\Delta t^{SiPMs})/2$ decreases, as expected, when moving from position 0 (located on the diagonal between the WOM with SiPMs and the WOM with PMT2) to positions 1, 2, 3, and 4 (all at the same distance to that diagonal) and further to position 5 (at the largest distance to the diagonal). This shift, induced by the geometrical extension of the LSD module and the WOM positions inside it, is smaller than 1~ns and therefore affects the time resolution of the LSD significantly only if the photon yield is substantially increased or the geometrical dimensions of the LSD module are enlarged. This timing uncertainty can be reduced by using more than two WOMs in the LSD module. \subsection{Detector response to a well-defined energy deposition} \label{subsec:detectorresponse} Another important question is the homogeneity of the LSD response with respect to different beam positions.
Two different combinations of the signals from PMT2 and the SiPMs are considered in this paper: \begin{equation} {\rm the~arithmetic~mean}=\frac{N_{pe}^{PMT2}+N_{pe}^{SiPMs}}{2} \end{equation} and \begin{equation} {\rm the~geometric~mean}=\sqrt{N_{pe}^{PMT2} \cdot N_{pe}^{SiPMs}} , \end{equation} requiring $N_{\rm pe} \ge 2$. The latter is expected to provide a homogeneous response in case the detected light yield drops exponentially with the distance between the light source and the WOM tube. Fig.~\ref{fig:gaus_fit_example} shows an example of the distribution of the geometric mean obtained for muons hitting the LSD module at position 0. The value of the geometric mean at the maximum of the distribution, obtained from a Gaussian fit to the core part of the distribution, serves as an estimator for the detector response. For the resolution, the full width at half maximum (FWHM) of this Gaussian function is quoted. \begin{figure} \begin{center} \includegraphics[scale=0.50]{gaus_fit_example.pdf} \end{center} \caption{The distribution of the geometric mean for muons hitting the LSD at beam position 0. A Gaussian fit is applied to the core part of the distribution to obtain the position of the maximum and the FWHM of the distribution.} \label{fig:gaus_fit_example} \end{figure} Fig.~\ref{fig:MeanAndGeometricalMeanVsPosition} shows the position of the maximum and the corresponding FWHM for the distributions of the geometric mean and of the arithmetic mean at different beam positions for muons hitting the LSD module at an incident angle of $0^{\circ}$. \begin{figure} \begin{center} \includegraphics[scale=0.50]{energyDepos_vs_position.pdf} \end{center} \caption{Position of the maximum and FWHM (represented by error bars) for the distributions of the geometric and arithmetic mean of the number of photoelectrons calculated from the two WOMs, requiring at least two photoelectrons.
Measurements of muons for different beam positions on the detector module with an incident angle of $0^{\circ}$. The average value for all beam positions is shown by the horizontal line. } \label{fig:MeanAndGeometricalMeanVsPosition} \end{figure} The FWHM of both distributions shows that the energy deposition of minimum ionizing particles is measured with about $20-25~\%$ relative uncertainty. This is due to the low light yield of the current LSD module design. Except for position 4, the values of the geometric and arithmetic mean at the maximum vary for the different beam positions by $10-15~\%$, which can be interpreted as a systematic uncertainty. These position-dependent differences are on average less pronounced for the geometric mean. One would expect equal geometric (arithmetic) mean values in particular for positions 1, 2, 3, and 4 if there were no directional detection asymmetry of the two WOMs. While the geometric (arithmetic) mean values at the maximum agree within $10-15~\%$ for positions 1, 2, and 3, they are significantly larger for position 4. This systematic difference may have been caused by liquid-scintillator leakage that occurred during the test-beam measurements. The leakage resulted in a small nitrogen-filled gap volume, leading to a free liquid-scintillator surface in the upper part of the LSD module. This may cause total reflection of scintillation light close to the upper inner wall of the LSD (which is not possible at the other inner walls), thus affecting the light-yield collection mainly for position 4. In future measurements, all inner walls of the LSD will be covered by a reflective foil or coating to increase the light yield in general. Furthermore, a new LSD module will be produced from welded steel plates instead of glued ABS to avoid liquid-scintillator leakage.
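The motivation for the geometric mean discussed above can be checked numerically: if the detected yield falls exponentially with the source-to-WOM distance, the geometric mean of the two WOM signals is independent of the hit position, while the arithmetic mean is not. The attenuation length and yield scale below are illustrative assumptions; the FWHM conversion factor for the Gaussian core fit is also included.

```python
import numpy as np

N0, LAM, L = 100.0, 30.0, 50.0         # yield scale, attenuation length, module size (cm)
x = np.linspace(5.0, 45.0, 9)           # hit positions between the two WOMs
n1 = N0 * np.exp(-x / LAM)              # yield seen by the first WOM
n2 = N0 * np.exp(-(L - x) / LAM)        # yield seen by the second WOM
geo = np.sqrt(n1 * n2)                  # = N0 * exp(-L / (2 * LAM)) for all x
arith = 0.5 * (n1 + n2)                 # varies with x

# Conversion of a fitted Gaussian sigma to the quoted FWHM:
fwhm_per_sigma = 2.0 * np.sqrt(2.0 * np.log(2.0))  # approximately 2.355
```

The cancellation is exact under the pure-exponential assumption; the residual $10-15~\%$ position dependence observed in the data indicates that the real light collection deviates from this idealization.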
An expansion vessel filled with liquid scintillator will be added that guarantees the LSD module to be always completely filled with liquid scintillator.\\ For an incident angle different from $0^{\circ}$, one naively expects an energy deposition from a minimum ionizing particle that is higher by a factor $1/\cos{\theta_{inc}}$, since the path length through the liquid scintillator increases by the same factor. Figs.~\ref{fig:GeometricalMeanMuonsDifferentAngles} and~\ref{fig:MeanMuonsDifferentAngles} show the ratio between the measurement at $\pm 30^{\circ}$ and the measurement at $0^{\circ}$ for the geometric and arithmetic mean, respectively. On average, the results are in agreement with the expectation of $1/\cos{\theta_{inc}}=1.155$; however, significant deviations are observed. While for positions 0, 1, 2, and 5 the ratios for $+30^{\circ}$ and $-30^{\circ}$ are quite close to each other, they differ much more for positions 3 and 4, showing the opposite behaviour. This behaviour will be the subject of more detailed future studies and Monte Carlo simulations. It might, for example, be partially caused by the WOM photo detection efficiency, which depends on the distance between the end of the WOM tube (where the photosensor is located) and the hit position of the primary scintillation photon.
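The expected path-length factor quoted above follows directly from the geometry:

```python
import math

# A track at incident angle theta crosses 1 / cos(theta) times more
# scintillator than at normal incidence.
ratio_30deg = 1.0 / math.cos(math.radians(30.0))
print(round(ratio_30deg, 3))  # 1.155
```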
\begin{figure} \centering \subfigure[]{ \label{fig:GeometricalMeanMuonsDifferentAngles} \includegraphics[scale=0.35]{ratio_geomMean_pm30deg_withLabel.pdf}} \subfigure[]{ \label{fig:MeanMuonsDifferentAngles} \includegraphics[scale=0.35]{ratio_mean_pm30deg_withLabel.pdf}} \caption{The ratio of the observed photoelectron yield for incident angles of $+30^{\circ}$ and $-30^{\circ}$ with respect to $0^{\circ}$ for (a) the geometric mean and (b) the arithmetic mean at different muon beam positions.} \label{fig:MeanGeometricalMeanMuonsDifferentAngles} \end{figure} The specific response to electrons was studied with a steel plate placed in front of the LSD module, since the LSD can serve as a preshower detector to distinguish between electrons/photons and other particles. The thickness of the steel plate corresponded to about $2.1 \cdot X_{0}$, with $X_{0}=1.76~{\rm cm}$ being the radiation length of stainless steel~\cite{PDG}. For a perpendicularly crossing particle, the liquid scintillator adds a thickness of $O(0.6 \cdot X_{0})$. This will produce an electromagnetic preshower with an expected energy deposition of $O(4~\%)$ of the incident electron energy~\cite{EMShower}, corresponding to about 1~GeV for the 20~GeV electrons used in the test-beam measurement. From the Bethe-Bloch formula, a minimum ionizing particle is expected to lose about 45~MeV on average when traversing the liquid scintillator. For 20~GeV electrons, a factor of about 20 more light yield is therefore expected. Fig.~\ref{fig:PMT_electrons} shows the $N_{\rm pe}^{PMT2}$ distributions for 20~GeV electrons hitting the LSD module at beam position 0 with an incident angle of $0^{\circ}$, with and without the steel plate. The electron-beam contamination from hadrons and muons is expected to be $O(20~\%)$. This is reflected by the observed peak at $N_{\rm pe} \approx 25$, corresponding to the expected 45~MeV of average energy deposition from minimum ionizing particles.
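The back-of-envelope numbers in this paragraph can be reproduced as follows; all inputs are the values quoted in the text.

```python
X0_STEEL_CM = 1.76      # radiation length of stainless steel
D_STEEL_CM = 3.7        # plate thickness
E_BEAM_MEV = 20e3       # electron beam energy
F_DEPOSIT = 0.04        # O(4%) of the beam energy deposited by the preshower
E_MIP_MEV = 45.0        # average MIP energy loss in the liquid scintillator

depth_in_x0 = D_STEEL_CM / X0_STEEL_CM      # about 2.1 radiation lengths
e_preshower = F_DEPOSIT * E_BEAM_MEV        # 800 MeV, i.e. O(1 GeV)
yield_ratio = e_preshower / E_MIP_MEV       # about 18, i.e. a factor O(20)
```

These estimates are consistent with the observed peak positions at $N_{\rm pe} \approx 25$ (minimum ionizing particles) and $N_{\rm pe} \approx 600$ (preshower), whose ratio is of the same order.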
The position of this peak is consistent with the results from a muon beam, see Fig.~\ref{fig:PE_Distribution_PMT}. A second, broad peak at $N_{\rm pe} \approx 600$ is observed for PMT2 with the steel plate in front of the LSD module. Assuming linearity of the detector response, this is in agreement with the estimated energy deposition from electromagnetic preshowering induced by 20~GeV electrons in the steel plate. The first peak, from minimum ionizing particles, accounts for about $O(20~\%)$ of the events, consistent with the $O(80~\%)$ purity of the electron beam. The effect from the preshowering is heavily reduced when no steel plate is present. We conclude that the LSD can be used as a preshower detector for particle identification in connection with a sufficiently thick metal layer in front of the detector. \begin{figure} \begin{center} \includegraphics[scale=0.50]{PMT_electrons.pdf} \end{center} \caption{$N_{\rm pe}$ distributions (normalized to the same integral) for the WOM with PMT2 for an electron beam at beam position 0 and an incident angle of $0^{\circ}$ with and without a steel plate in front of the LSD module.} \label{fig:PMT_electrons} \end{figure} \section{Conclusion} A proof of concept for building large-volume liquid-scintillator detectors was conducted, encompassing: wavelength-shifting optical modules for the collection of scintillation photons in the UV range, conversion of the UV photons into photons of longer wavelength in the WOM, and their detection by a vacuum photomultiplier tube or a SiPM array mounted on a light guide on one side of the WOM. With a liquid-scintillator detector module of $50 \times 50 \times 30~{\rm cm}^3$ volume and one WOM read out, photo detection efficiencies of at least $99.7~\%$ for a threshold of two photoelectrons are achieved. Using two WOMs, placed on opposite sides of the detector, results in a time resolution of about 1~ns.
For the size of the detector module and the obtained average light yield, the effect on the time resolution from different positions of the traversing particle is found to be subdominant. The detector light yield for the combined signal of two WOMs on opposite sides of the detector is homogeneous within $O(15\%)$ for particles perpendicularly traversing the detector. With a layer of stainless steel in front of the detector, its capability to measure preshowers from electrons was demonstrated, providing a much higher light yield than for muons and hadrons.\\ The light yield, photo detection efficiency, time resolution, and possibly also the homogeneity of the detector can still be improved in several ways: increase of the PPO concentration in the liquid scintillator, purification of the LAB to obtain larger absorption lengths, coverage of all inner walls with material of good reflection properties for UV light, complete filling of the LSD (avoiding a liquid-scintillator surface leading to total reflection only at this surface), and choice of a PMMA material with higher transmission in the emission range of the PPO. Furthermore, the light guide and its coupling to the WOM tube can be improved, and the coverage of the light guide exit by the photosensor can be optimized. As an alternative, SiPMs can be placed directly on the WOM tube without the need for any light guide. In this context, one might profit from SiPMs of higher efficiency, such as the new Hamamatsu S14520 series. \section*{ACKNOWLEDGEMENTS} We acknowledge the support of I. Korol and P. Venkova by the DFG. The Geneva group acknowledges support from the Swiss National Science Foundation (grant PP00P2\_150583).
We thank: Dustin Hebeker, Timo Karg, and Marek Kowalski (DESY) for the scientific discussions on using WOMs and providing the dip-coater for our WOM production; Peter Peiffer (Johannes Gutenberg-Universit\"at Mainz) for measuring the WOM photo detection efficiency with and without light guide; Steffen Hackbarth (Humboldt-Universit\"at zu Berlin) for measuring the transmission of the liquid scintillator in the UV range; Yannick Favre (University of Geneva) for design and assembling of SiPM boards; Vadim Denysenko and Oleg Bezshyyko (Taras Shevchenko National University of Kyiv) for GEANT4 simulations of the WOM tube and light guide; Sergio G\'{o}mez and David Gasc\'{o}n (University of Barcelona) for support on the usage of the MUSIC board; Dominique Breton and Jihane Maalmi (LAL Orsay) for support on the usage of WaveCatcher. Special thanks go to Richard Jacobsson (CERN) and the SPS test-beam operators at CERN.
\section{Introduction}\label{sec1} Over-parameterized deep neural networks owe much of their popularity to their ability to (nearly) perfectly memorize large numbers of training examples, and the memorization is known to decrease the generalization error \cite{feldman20does}. On the other hand, scaling the acquisition of examples for training neural networks inevitably introduces non-fully supervised data annotation, a typical example of which is the \emph{partial label} \cite{Nguyen_Caruana2008,Cour_2011,Zhang_ZhouBin2016,zhang2017disambiguation,feng2018leveraging,xu2019partial,yao2020network,lv2020progressive,feng2020provably,wen2021leveraged}---a partial label for an instance is a set of candidate labels in which a \emph{fixed but unknown} candidate is the true label. \emph{Partial label learning} (PLL) trains multi-class classifiers from instances that are associated with partial labels. It is therefore apparent that some techniques should be applied to prevent memorizing the false candidate labels when PLL resorts to deep learning; unfortunately, empirical evidence has shown that general-purpose regularization cannot achieve this goal \cite{lv2021robustness}. A large number of deep PLL algorithms have recently emerged that aim to design regularizers \cite{yao2020deep,yao2020network,lyu2022partial} or network architectures \cite{wang2022pico} for PLL data. Further, there are some PLL works that provide theoretical guarantees while making their methods compatible with deep networks \cite{lv2020progressive,feng2020provably,wen2021leveraged,wu2021learning}. We observe that these existing theoretical works have focused on the \emph{instance-independent} setting, where the generation process of partial labels is homogeneous across training examples.
With an explicit formulation of the generation process, the \emph{asymptotical consistency} \cite{mohri2018foundations} of the methods, namely, whether the classifier learned from partial labels approximates the Bayes optimal classifier, can be analyzed. However, the instance-independent process cannot model the real world well, since data labeling is prone to different levels of error in tasks of varying difficulty. Intuitively, \emph{instance-dependent} (ID) partial labels should be quite realistic, as some poor-quality or ambiguous instances are more difficult to label with an exact true label. Although the instance-independent setting has been extensively studied, on the basis of which many practical and theoretical advances have been made in PLL, relatively little attention has been paid to the practically relevant setting of ID partial labels. Very recently, one solution has been proposed \cite{xu2021instance} which learns directly from ID partial labels; nevertheless, it is still unclear in theory whether the learned classifier is good. Motivated by the above observations, we set out to investigate ID PLL with the aim of proposing a learning approach that is model-independent and of theoretically explaining when and why the proposed method works. In this paper, we propose \emph{PrOgressive Purification} ({\fid}), a theoretically grounded PLL framework for ID partial labels. Specifically, we use the observed partial labels to pretrain a randomly initialized classifier (deep network) for several epochs, and then we update both the partial labels and the classifier for the remaining epochs. In each epoch, we purify each partial label by removing the candidate labels for which the current classifier has high confidence of being incorrect, and subsequently we train the classifier with the purified partial labels in the next epoch. As a consequence, the false candidate labels are gradually sifted out and the classification performance of the classifier is improved.
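A minimal sketch of one purification epoch as described above is given below. The confidence threshold, the renormalization over the current candidate set, and the safeguard of always keeping the top candidate are illustrative choices, not the exact {\fid} criterion.

```python
import numpy as np

def purify_step(candidate_mask, class_probs, threshold=0.05):
    """One purification epoch (illustrative): renormalize the classifier's
    softmax outputs over the current candidate set, drop candidates whose
    confidence falls below `threshold`, and always keep the top candidate so
    that no partial label becomes empty."""
    masked = class_probs * candidate_mask
    masked = masked / masked.sum(axis=1, keepdims=True)
    new_mask = candidate_mask & (masked >= threshold)
    top = np.argmax(masked, axis=1)
    new_mask[np.arange(len(masked)), top] = True
    return new_mask
```

Training then continues in the next epoch with the returned mask as the purified partial labels; iterating this shrinks the candidate sets as the classifier improves.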
We justify {\fid} and outline the main contributions below: \begin{itemize}[topsep=0ex,leftmargin=*,parsep=1pt,itemsep=1pt] \item We propose a novel approach named {\fid} for the ID PLL problem, which purifies the partial labels and refines the classifier iteratively. Extensive experiments validate the effectiveness of {\fid}. \item We prove that {\fid} can be guaranteed to enlarge the region where the model is \emph{reliable} by a promising rate, and eventually approximates the Bayes optimal classifier under mild assumptions. This proof does \emph{not rely on} the instance-independent assumption. To the best of our knowledge, this is the first theoretically guaranteed approach for the general ID PLL problem. \item {\fid} is flexible with respect to \emph{losses}, so that losses designed for instance-independent PLL problems can be embedded directly. We empirically show that such embedding allows advanced PLL losses to be applied to the ID problem and achieve state-of-the-art learning performance. \end{itemize} \section{Related Work} In this section, we briefly go through the seminal works in PLL, focusing on the theoretical works and discussing the underlying assumptions behind them. \vspace{1em} \noindent \textbf{Non-deep PLL} There have been substantial non-deep PLL algorithms since the pioneering work~\cite{Jin_Ghahramani2003}. From a practical standpoint, they have been studied along two different research routes: the identification-based strategy and the average-based strategy. The identification-based strategy purifies each partial label and extracts the true label heuristically in the training phase \cite{Chen_2014,Zhang_ZhouBin2016,Tang_Zhang2017,feng2019partial,xu2019partial}. On the contrary, the average-based strategy treats all candidates equally \cite{Hullermeier_Beringer2006,Cour_2011,ZY15}.
On the theoretical side, Liu and Dietterich~\cite{liu2012conditional} analyzed the learnability of PLL under a \emph{small ambiguity degree condition}, which ensures that classification errors on any instance have a positive probability of being detected. Cour \textit{et al.}~\cite{Cour_2011} proposed a consistent approach under the small ambiguity degree condition and a dominance assumption on the data distribution (Proposition 5 in~\cite{Cour_2011}). Liu and Dietterich~\cite{liu2012conditional} also proposed a Logistic Stick-Breaking Conditional Multinomial Model to portray the mapping between instances and true labels, while assuming that the generation of the partial label is independent of the instance itself. It should be noted that the vast majority of non-deep PLL works only empirically verified the performance of their algorithms on small datasets, without formalizing a statistical model for the PLL problem, let alone theoretically analyzing when and why the algorithms work. \vspace{1em} \noindent \textbf{Deep PLL} In recent years, deep learning has been applied to PLL and has greatly advanced its practical application. Yao \textit{et al.}~\cite{yao2020deep,yao2020network} and Lv \textit{et al.}~\cite{lv2020progressive} proposed learning objectives that are compatible with stochastic optimization and thus can be implemented with deep networks. Shortly after, Feng \textit{et al.} \cite{feng2020provably} formalized the first generation process for PLL. They assumed that, given the latent true label, the probability of each incorrect label being added into the candidate label set is uniform and independent of the instance. Thanks to the uniform generation process, they proposed two provably consistent algorithms. Wen \textit{et al.} \cite{wen2021leveraged} extended the uniform setting to the \emph{class-dependent} case, but still kept the instance-independent assumption. 
In addition, a new paradigm called complementary-label learning \cite{ishida2017learning,yu2018learning,ishida2019complementary,feng2020learning} has been proposed, which learns from instances equipped with a complementary label. A complementary label specifies a class to which the instance does not belong, so the paradigm can be regarded as an inverted PLL problem. However, all of these works made the instance-independent assumption when analyzing statistical consistency. Wu and Sugiyama~\cite{wu2021learning} proposed a framework that unifies the formalization of multiple generation processes under the instance-independent assumption. Very recently, some researchers have begun to notice a more general setting: ID PLL. Learning with ID partial labels is challenging, and instance-independent approaches cannot handle the ID PLL problem directly. Specifically, the theoretical approaches mentioned above mainly utilize the \emph{loss correction} technique, which corrects the prediction or the loss of the classifier using \emph{prior or estimated knowledge} of the data generation process, i.e., a set of parameters controlling the probability of generating incorrect candidate labels, often called the transition matrix \cite{patrini2017making}. In the instance-independent setting, the transition matrix can be characterized by a fixed set of parameters, since it need not encode instance-level information, a condition that does not hold in ID PLL. Furthermore, it is \emph{ill-posed} to estimate the transition matrix by exploiting only partially labeled data, i.e., the transition matrix is unidentifiable \cite{xia2020part}. Therefore, new methods are needed to tackle this issue. Xu \textit{et al.}~\cite{xu2021instance} introduced a solution that infers the latent label posterior via variational inference \cite{blei2017variational}; nevertheless, its effectiveness is hard to guarantee in theory. 
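To make the loss-correction discussion concrete, the following minimal sketch (our own illustration, not code from any cited work; all function names are ours) shows forward correction with a fixed, instance-independent transition matrix. Because the matrix is the same for every instance, it cannot encode instance-dependent flipping probabilities, which is exactly the step that breaks down in ID PLL.

```python
def uniform_transition_matrix(c, q):
    """T[y][j] = P(j in S | y) under the uniform instance-independent process:
    the true label is always a candidate (diagonal 1) and every incorrect
    label enters the candidate set with the same probability q."""
    return [[1.0 if j == y else q for j in range(c)] for y in range(c)]

def forward_corrected(clean_posterior, T):
    """Forward loss correction: map the clean class posterior p(y|x) to
    candidate-label probabilities sum_y p(y|x) * T[y][j].  Since T is fixed,
    the identical correction is applied to every instance x."""
    c = len(T)
    return [sum(clean_posterior[y] * T[y][j] for y in range(c))
            for j in range(c)]
```

For a one-hot clean posterior, the corrected output is simply the corresponding row of the transition matrix, independent of which instance produced it.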
In this paper, we propose {\fid} for the ID PLL problem and theoretically prove that the learned classifier approximates the Bayes optimal classifier well. \section{Proposed Method} \subsection{Preliminaries} First of all, we briefly introduce the necessary notation. Consider a multi-class classification problem with $c$ classes. Let $\mathcal{X}=\mathbb{R}^q$ be the $q$-dimensional instance space and $\mathcal{Y}=\{1,2,\ldots,c\}$ be the label space with $c$ class labels. In supervised learning, let $p(\boldsymbol{x},y)$ be the underlying ``clean'' distribution generating $(\boldsymbol{x},y^{\boldsymbol{x}})\in\mathcal{X}\times\mathcal{Y}$ from which $n$ i.i.d.~samples $\{(\boldsymbol{x}_i,y^{\boldsymbol{x}_i})\}^n_{i=1}$ are drawn. In PLL, there is a \emph{partial label space} $\mathcal{S}:=\{S|S\subseteq\mathcal{Y},S\neq\emptyset\}$ and the PLL training set $\mathcal{D}=\{(\bm{x}_i,S_i)|1\leq i\leq n\}$ is sampled independently and identically from a ``corrupted'' density $\tilde{p}(\boldsymbol{x},S)$ over $\mathcal{X}\times\mathcal{S}$. It is generally assumed that $p(\boldsymbol{x},y)$ and $\tilde{p}(\boldsymbol{x},S)$ have the same marginal distribution of instances $p(\boldsymbol{x})$. The \emph{generation process} of partial labels can thus be formalized as $p(S|\boldsymbol{x})=\sum_{y}p(S|\boldsymbol{x},y)p(y|\boldsymbol{x})$. We define the probability that, given the instance $\bm{x}$ and its class label $y^{\bm{x}}$, label $j$ is included in its partial label as the \emph{flipping probability}: \begin{equation*} \xi^j(\bm{x})=p(j\in S|\bm{x},y^{\bm{x}}), \ \forall j\in\mathcal{Y}. \end{equation*} The defining assumption of PLL is that the latent true label of an instance is always one of its candidate labels, i.e., $\xi^{y^{\bm{x}}}(\bm{x})=1$. 
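As an illustration of the generation process just formalized, one can sample an ID partial label by flipping each incorrect label with a probability tied to its class posterior, while always including the true label. This is a hedged sketch under our own naming, not part of the paper; taking $\xi^j(\bm{x})=\min(1, t\,p(y=j|\bm{x}))$ is an assumption made for illustration.

```python
import random

def sample_partial_label(posterior, true_label, t=1.0, rng=random):
    """Sample one instance-dependent partial label S.

    posterior[j] plays the role of p(y=j|x); the flipping probability of an
    incorrect label j is taken as xi_j(x) = min(1, t * p(y=j|x)), which is
    consistent with the bound xi_j(x) <= t * p(y=j|x).  The key PLL
    assumption xi_{y^x}(x) = 1 means the true label is always in S.
    """
    S = {true_label}  # the latent true label is always a candidate
    for j, p in enumerate(posterior):
        if j == true_label:
            continue
        if rng.random() < min(1.0, t * p):
            S.add(j)
    return S
```

Under this scheme, ambiguous instances (flat posteriors) tend to receive larger candidate sets, matching the intuition that hard examples are harder to label exactly.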
We consider deep models equipped with an inverse link function \cite{reid2010composite} $\boldsymbol{\phi}:\mathbb{R}^c\rightarrow\Delta^{c-1}$, where $\Delta^{c-1}$ denotes the probability simplex over $c$ classes, for example the softmax, as the learning model in this paper. The goal of supervised multi-class classification and of PLL is then the same: to learn a scoring function $f:\mathcal{X}\mapsto\Delta^{c-1}$ that makes correct predictions on unseen inputs. Typically, the classifier takes the form: \begin{equation*} h(\boldsymbol{x})=\arg\max_{i\in \mathcal{Y}}f_i(\boldsymbol{x}). \end{equation*} The Bayes optimal classifier $h^\star$ (learned using supervised data) is the one that minimizes the risk w.r.t.~the 0-1 loss (or some classification-calibrated loss \cite{bartlett2006convexity}), i.e., \begin{equation*} h^\star=\arg\min_{h}\mathcal{R}_{01}=\arg\min_{h}\mathbb{E}_{(\boldsymbol{X},Y)\sim p(\boldsymbol{x},y)}\big[\mathbf{1}_{\{h(\boldsymbol{X})\neq Y\}}\big]. \end{equation*} In this case, the scoring function $f^\star$ recovers the class-posterior probabilities, i.e., $f^\star(\boldsymbol{x})=p(y|\boldsymbol{x}), \forall \boldsymbol{x}\in\mathcal{X}$. When the available supervision is a partial label, the PLL risk under $\tilde{p}(\boldsymbol{x},S)$ w.r.t.~a suitable \emph{PLL loss} $\mathcal{L}:\mathbb{R}^c\times\mathcal{S}\rightarrow\mathbb{R}^+$ is defined as \begin{equation*} \tilde{\mathcal{R}}=\mathbb{E}_{(\boldsymbol{X},S)\sim\tilde{p}(\boldsymbol{x},S)}\big[\mathcal{L}(h(\boldsymbol{X}),S)\big]. \end{equation*} Minimizing $\tilde{\mathcal{R}}$ induces the classifier, and it is desirable that the minimizer approaches $h^\star$. In addition, let $o=\arg\max_{j\neq y^{\bm{x}}} p(y=j|\bm{x})$ be the class label with the second highest posterior probability among all labels. \subsection{Overview} In the latter part of this section, we will introduce the concept of a \emph{pure level set}, the region where the model is \emph{reliable}. 
We prove that, given a tiny reliable region, one can progressively enlarge this region and improve the model at a sufficient rate by disambiguating the partial labels. Motivated by these theoretical results, we propose an approach, {\fid}, that works by progressively purifying the partial labels to remove the false candidate labels, so that the learned classifier eventually approximates the Bayes optimal classifier. {\fid} employs the observed partial labels to pre-train a randomly initialized classifier for several epochs, and then updates both the partial labels and the classifier for the remaining epochs. We start with a warm-up period, in which we train the predictive model with a well-defined PLL loss \cite{lv2020progressive}. This allows us to attain a reasonable predictive model before it starts fitting incorrect labels \cite{ZhangBHRV17}. After the warm-up period, we iteratively purify each partial label by removing the candidate labels that the current classifier is highly confident are incorrect, and subsequently train the classifier with the purified partial labels in the next epoch. Once fully trained, the predictive model can make predictions for unseen instances. \subsection{Theoretical Analysis}\label{Theoretical} We assume that the hypothesis class $\mathcal{H}$ is sufficiently complex (deep networks can meet this condition), such that the approximation error equals zero, i.e., $\arg\min_{h}\mathcal{R}=\arg\min_{h\in\mathcal{H}}\mathcal{R}$. Then the gap between the learned classifier and the Bayes optimal one is determined by the ambiguity caused by partial labels. Besides, for two instances $\bm{x}$ and $\bm{z}$ that satisfy $p(y=j|\bm{z})>p(y=j|\bm{x})$, the indicator function $\left[\mathbf{1}_{\{j \neq y^{\bm{z}}\}}\Big\vert p(y^{\bm{z}}|\bm{z})- p(o|\bm{z})\geq p(y^{\bm{x}}|\bm{x})- p(o|\bm{x})\right]$ equals 1 if the more confident point $\bm{z}$ is inconsistent with the Bayes classifier. 
Then, the gap between $f_j(\bm{x})$ and $p(y^{\bm{x}}|\bm{x})$ should be controlled by the risk at point $\bm{z}$. Therefore, we assume that there exist constants $\alpha$, $\epsilon < 1$, such that for $f(\bm{x})$, \begin{equation}\label{gap} \vert f_j(\bm{x})-p(y=j|\bm{x})\vert\leq\alpha\mathbb{E}_{(\bm{z},S)\sim \tilde{p}(\bm{z},S)}\left[\mathbf{1}_{\{j\neq y^{\bm{z}} \}}\Big\vert p(y^{\bm{z}}|\bm{z})- p(o|\bm{z})\geq p(y^{\bm{x}}|\bm{x})- p(o|\bm{x})\right]+\frac{\epsilon}{6}. \end{equation} In addition, for the probability density function $d(u)$ of the margin $u(\bm{x})= p(y^{\bm{x}}|\bm{x})- p(o|\bm{x})$, we assume that there exist constants $c_{\star}$, $c^{\star} > 0$ such that $c_{\star} < d(u) < c^{\star}$. The worst-case density-imbalance ratio is then denoted by $l=\frac{c^\star}{c_\star}$. Motivated by the pure level set in binary classification \cite{ZhangZW0021}, we define the pure level set in PLL, i.e., the region where the model is reliable: \begin{definition} (Pure $(e, f)$-level set). A set $L(e):=\left\{\boldsymbol{x} \,\middle|\, p(y^{\bm{x}}|\bm{x})- p(o|\bm{x}) \geq e\right\}$ is pure for $f$ if $y^{\bm{x}}=\arg\max_{j} f_j(\bm x)$ for all $\boldsymbol{x} \in L(e)$. \end{definition} Assume that there exists a boundary $e$ such that every $\bm{x}$ with $ p(y^{\bm{x}}|\bm{x})- p(o|\bm{x})\geq e$ satisfies $y^{\bm{x}}=\arg\max_{j} f_j(\bm x)$; then we have \begin{equation} \mathbb{E}_{(\bm{z},S)\sim \tilde{p}(\bm{z},S)}\left[\mathbf{1}_{\{j\neq y^{\bm{z}} \}}\Big\vert p(y^{\bm{z}}|\bm{z})- p(o|\bm{z})\geq p(y^{\bm{x}}|\bm{x})- p(o|\bm{x})\right]=0, \forall j\in\mathcal{Y}, \end{equation} which means that there is a tiny region $L(e):=\left\{\boldsymbol{x} \,\middle|\, p(y^{\bm{x}}|\bm{x})- p(o|\bm{x}) \geq e\right\}$ where the model $f$ is reliable. Let $e_\text{new}$ be the new boundary and $\frac{\epsilon}{6l\alpha}( p(y^{\bm{x}}|\bm{x})-e)\leq e-e_\text{new}\leq\frac{\epsilon}{3l\alpha}( p(y^{\bm{x}}|\bm{x})-e)$. 
As the probability density function $d(u)$ of the margin $u(\bm{x})= p(y^{\bm{x}}|\bm{x})- p(o|\bm{x})$ is bounded by $c_{\star} < d(u) < c^{\star}$, we have the following result for $\bm{x}$ that satisfies $ p(y^{\bm{x}}|\bm{x})- p(o|\bm{x}) \geq e_\text{new}$ \footnote{More details could be found in Appendix A.1.}: \begin{equation}\label{e_new} \begin{aligned} \mathbb{E}_{(\bm{z},S)\sim \tilde{p}(\bm{z},S)}\left[\mathbf{1}_{\{j\neq y^{\bm{z}} \}}\Big\vert p(y^{\bm{z}}|\bm{z})- p(o|\bm{z})\geq p(y^{\bm{x}}|\bm{x})- p(o|\bm{x})\right] &\leq \frac{c^{\star} (e-e_\text{new})}{c_{\star}\left(p(y^{\bm{x}}|\bm{x})-e_\text{new}\right)} \\ &\leq \frac{\epsilon}{3\alpha}. \end{aligned} \end{equation} Combining Eq. (\ref{gap}) and Eq. (\ref{e_new}), we obtain \begin{equation} \begin{aligned} \vert f_j(\bm{x})-p(y=j|\bm{x})\vert \leq \frac{\epsilon}{2}. \end{aligned} \end{equation} Denote by $m=\arg\max_j f_j(\bm{x})$ the label with the highest score under the current prediction. If $f_m(\bm{x})-f_{j\neq m}(\bm{x})\geq e+\epsilon$, we have \footnote{More details could be found in Appendix A.2.} \begin{equation} \begin{aligned} p(y^{\bm{x}}|\bm{x}) \geq p(y=j|\bm{x}) + e, \end{aligned} \end{equation} which means that label $j$ is an incorrect label. Therefore, we can remove label $j$ from the candidate label set to disambiguate the partial label, and then refine the learning model with the less ambiguous partial label. In this way, we move one step forward by trusting the model on the tiny reliable region, as formalized in the following theorem. \begin{theorem}\label{theorem1} Assume that there exists a boundary $e$ with $\epsilon\leq e<1$ for an $f$ such that, for all $\bm{x}$ which satisfy $y^{\bm{x}}=\arg\max_{j} f_j(\bm x)$, $ p(y^{\bm{x}}|\bm{x})- p(o|\bm{x})\geq e$. 
For each $\bm{x}$ and $\forall j \in S$ with $j \neq m$, if $ f_m(\bm{x})-f_j(\bm{x})\geq e+\epsilon$, we remove label $j$ from the candidate label set and update the candidate label set as $S_\text{new}$. Then the new classifier $f_{\text{new}}(\bm{x})$ is trained on the updated data with the new distribution $ \tilde{p}(\bm{x},S_\text{new})$. Let $e_{\text{new}}$ be the minimum boundary such that $L(e_\text{new})$ is pure for $f_\text{new}$. Then, we have $$p(y^{\bm{x}}|\bm{x})-e_{\text{new}}\geq(1+\frac{\epsilon}{6\alpha l})(p(y^{\bm{x}}|\bm{x})-e).$$ \end{theorem} The detailed proof can be found in Appendix A.1. Theorem \ref{theorem1} shows that the purified region $\gamma = p(y^{\bm{x}}|\bm{x})-e$ is enlarged by at least a constant factor under the given purification strategy. As the flipping probability of an incorrect label in the ID generation process is related to its posterior probability, we assume that there exists a constant $t>0$ such that: \begin{equation} \xi^j(\bm{x}) \leq p(y=j|\bm{x})\,t. \end{equation} Then we prove that, if there exists a pure level set for an initialized model, our proposed approach can purify incorrect labels, and the classifier $f$ will finally match the Bayes optimal classifier $h^\star$ after sufficiently many rounds $R$ under a reasonable hyper-parameter setting. \begin{theorem}\label{theorem2} For any flipping probability of each incorrect label $\xi^{j}(\bm{x})$, define $e_0=\frac{(1+t)\alpha+\frac{\epsilon}{6}}{1+\alpha}$, and suppose that for a given function $f_0$ there exists a level set $L(e_0)$ which is pure for $f_0$. 
If one runs the purification in Theorem \ref{theorem1} starting with $f_0$ and the initialization (1) $e_0 \geq \frac{(1+t)\alpha+\frac{\epsilon}{6}}{1+\alpha}$, (2) $R \geq\frac{12l}{\epsilon}\log(\frac{1-\epsilon}{\frac1c-e_0})$, (3) $e_\text{end}>\epsilon$, then we have: $$\mathbb{P}_{\bm{x}\sim p(\bm{x})}\big[h_{\text{final}}(\bm{x})=h^\star(\bm{x})\big]\geq1-c^{\star}\epsilon.$$ \end{theorem} The proof of Theorem \ref{theorem2} is provided in Appendix A.3. Theorem \ref{theorem2} shows that the classifier is guaranteed to eventually approximate the Bayes optimal classifier. According to Theorems~\ref{theorem1} and~\ref{theorem2}, we can progressively purify the observed partial labels by removing the incorrect candidate labels under a gradually tightening threshold, and subsequently train an effective classifier with the purified labels. The proposed method {\fid}, described in the following subsection, is designed along these lines. \begin{algorithm}[t] \caption{{\fid} Algorithm}\label{algorithm} \textbf{Input}: The PLL training set $\mathcal{D}=\{(\bm{x}_1,S_1),...,(\bm{x}_n,S_n)\}$, initial threshold $e_0$, end threshold $e_\text{end}$, total rounds $R$; \begin{algorithmic}[1] \State Initialize the predictive model $\bm{\theta}$ by warm-up training with the PLL loss Eq. 
\ref{minimal}, and threshold $e=e_0$; \For{$r=1,...,R$} \State Train the predictive model $f$ on $\mathcal{D}$; \For{$i=1,...,n$} \For{$j\in S_i$} \If{$f_{m_i}(\bm{x}_i)-f_j(\bm{x}_i) \geq e+\epsilon$} \State Purify the incorrect label $j$ by removing it from the candidate label set $S_i$; \EndIf \EndFor \EndFor \If{$e \geq e_\text{end}$, and there is no purification for any candidate label set} \State Decrease $e$; \EndIf \EndFor \end{algorithmic} \textbf{Output}: The final predictive model $f$ \end{algorithm} \subsection{The {\fid} Method} \subsubsection{Warm-up Training} We start with a warm-up period, in which the classifier is trained on the original partial labels so as to attain reasonable outputs before it starts fitting label noise \cite{ZhangBHRV17}. The predictive model $\bm{\theta}$ is trained on partially labeled examples by minimizing the following PLL loss function \cite{lv2020progressive}: \begin{equation}\label{minimal} \mathcal{L}_{PLL}=\sum_{i=1}^{n}\sum_{j=1}^{c}w_{ij}\ell(f_j(\bm{x}_i), S_i). \end{equation} Here, $\ell$ is the cross-entropy loss and the weight $w_{ij}$ is initialized with uniform weights: \begin{equation} {w}_{i j}=\left\{\begin{array}{cc} 1 /\left|S_{i}\right| & \text { if } j \in S_{i} \\ 0 & \text { otherwise, } \end{array}\right. \end{equation} and can then be updated simply using the current predictions, which puts slightly more weight on the more probable labels \cite{lv2020progressive}: \begin{equation} w_{ij}=\left\{\begin{array}{cl} f_{j}\left(\boldsymbol{x}_{i} \right) / \sum_{j \in S_{i}} f_{j}\left(\boldsymbol{x}_{i} \right) & \text { if } {j} \in S_{i} \\ 0 & \text { otherwise. } \end{array}\right. \end{equation} \subsubsection{Progressive Purification} After the warm-up period, the classifier can be employed for purification. 
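The warm-up weighting and the purification test in Algorithm \ref{algorithm} can be sketched as follows (an illustrative fragment under our own naming, not the paper's implementation; the no-empty-set guard is our own simplification):

```python
def candidate_weights(scores, candidates):
    """PRODEN-style weights w_ij: renormalize the model's scores over the
    current candidate set and put zero weight elsewhere."""
    total = sum(scores[j] for j in candidates)
    return [scores[j] / total if j in candidates else 0.0
            for j in range(len(scores))]

def purify(scores, candidates, e, eps):
    """One purification step: with m the top-scoring label, drop every
    candidate j whose score falls short of f_m(x) by at least e + eps."""
    m = max(range(len(scores)), key=scores.__getitem__)
    kept = {j for j in candidates if scores[m] - scores[j] < e + eps}
    return kept or set(candidates)  # never empty the set in this sketch
```

Interleaving the two, retraining on the reweighted purified candidates after each round, mirrors the outer loop of Algorithm \ref{algorithm}.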
According to Theorem \ref{theorem1}, we can progressively remove the incorrect candidate labels under a gradually tightening threshold, and subsequently train an effective classifier with the purified labels. Specifically, we set a high threshold $e_0$ and calculate the difference $f_{m_i}(\bm{x}_i)-f_j(\bm{x}_i)$ for each candidate label. If a label $j$ of $\bm{x}_i$ satisfies $f_{m_i}(\bm{x}_i)-f_j(\bm{x}_i)\geq e_0$, we remove it from the candidate label set and update the candidate label set. If no partial label is purified in a round, we begin to decrease the threshold $e$ and continue the purification to further improve the training of the model. In this way, the incorrect candidate labels are progressively removed from the partial labels round by round, and the performance of the classifier is continuously improved. According to Theorem \ref{theorem2}, the learned classifier will eventually be consistent with the Bayes optimal classifier. The algorithmic description of {\fid} is shown in Algorithm \ref{algorithm}. \begin{table}[t] \caption{Characteristics of the benchmark datasets corrupted by the ID generation process.} \label{benchmark} \centering \renewcommand\arraystretch{1.25} \setlength{\tabcolsep}{2mm}{ \begin{tabular}{c|c|c|c|c|c} \toprule \textbf{Dataset} & \textbf{\#Train} & \textbf{\#Test} & \textbf{\#Features} & \textbf{\#Class Labels} & \textbf{avg. 
\#CLs} \\\hline CUB-200 & 3619 & 2414 & 150,258 & 200 & 8.71 \\\hline Fashion-MNIST & 60,000 & 10,000 & 784 & 10 & 4.61 \\\hline Kuzushiji-MNIST & 60,000 & 10,000 & 784 & 10 & 4.34 \\\hline CIFAR-10 & 50,000 & 10,000 & 3,072 & 10 & 2.74 \\\hline CIFAR-100 & 50,000 & 10,000 & 3,072 & 100& 5.50 \\ \bottomrule \end{tabular}} \end{table} \begin{table}[t] \caption{Characteristic of the real-world PLL datasets.} \label{real-world} \centering \renewcommand\arraystretch{1.25} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{c|c|c|c|c|c|c} \toprule \textbf{Dataset} & \textbf{\#Train} & \textbf{\#Test} & \textbf{\#Features} & \textbf{\#Class Labels} & \textbf{avg. \#CLs} & \textbf{Task Domain}\\\hline Lost & 898 &224 & 108 & 16 & 2.23 & automatic face naming \cite{Cour_2011}\\\hline MSRCv2 & 1,406 &352 & 48 & 23 & 3.16 & object classification \cite{liu2012conditional}\\\hline BirdSong & 3,998 &1,000 & 38 & 13 & 2.18 & bird song classification \cite{Briggs_2013}\\\hline Soccer Player & 13,978 &3,494 & 279 & 171 & 2.09 & automatic face naming \cite{Zeng_2013}\\\hline Yahoo! News & 18,393 &4,598 & 163 & 219 & 1.91 & automatic face naming \cite{Guillaumin_2010}\\ \bottomrule \end{tabular}} \end{table} \section{Experiments} \subsection{Datasets}\label{sec:dataset} We adopt five widely used benchmark datasets including CUB-200 \cite{welinder2010caltech}, Kuzushiji-MNIST \cite{clanuwat2018deep}, Fashion-MNIST \cite{xiao2017fashion}, CIFAR-10 \cite{krizhevsky2009learning}, CIFAR-100 \cite{krizhevsky2009learning}. These datasets are manually corrupted into ID partially labeled versions. Specifically, we set the flipping probability of each incorrect label corresponding to an instance $\bm{x}$ by using the confidence prediction of a neural network trained using supervised data parameterized by $\hat{\bm{\theta}}$ \cite{xu2021instance}. 
The flipping probability $\xi^j(\bm{x})=\frac{f_j(\bm{x};\hat{\bm{\theta}})}{\max_{{j}\in\bar{Y}}f_j(\bm{x};\hat{\bm{\theta}})}$, where $\bar{Y}_i$ is the set of all incorrect labels except for the true label of $\bm{x}_i$. The average number of candidate labels (avg. \#CLs) for each benchmark dataset corrupted by the ID generation process is recorded in Table~\ref{benchmark}. In addition, five real-world PLL datasets which are collected from different application domains are used, including Lost \cite{Cour_2011}, Soccer Player \cite{Zeng_2013}, Yahoo!News \cite{Guillaumin_2010}, MSRCv2 \cite{liu2012conditional}, and BirdSong \cite{Briggs_2013}. The average number of candidate labels (avg. \#CLs) for each real-world PLL dataset is also recorded in Table~\ref{real-world}. \begin{table}[t] \caption{Classification accuracy (mean$\pm$std) of each comparing approach on benchmark datasets corrupted by the ID generation process. } \centering \label{performance_benchmark} \begin{tabular}{cccccc} \toprule & CUB-200 & Kuzushiji-mnist & Fashion-mnist & CIFAR-10 & CIFAR-100 \\ \midrule {\fid} & \textbf{45.68$\pm$0.12}\% & \textbf{88.70$\pm$0.02}\% & \textbf{87.62$\pm$0.04\%} & \textbf{79.00$\pm$0.28\%} & \textbf{57.68$\pm$0.14\%} \\ \midrule {\vpll} & $42.10\pm$0.59\% &87.95$\pm$0.08\% &87.20$\pm$0.18\% & 77.71$\pm$0.35\% & 55.60$\pm$0.24\%\\ {\proden} & 40.66$\pm$0.11\% & 87.60$\pm$0.23\% & 87.21$\pm$0.11\% & 76.77$\pm$0.63\% & 55.12$\pm$0.12\% \\ {\rcpll} & 41.86$\pm$0.71\% & 87.25$\pm$0.06\% & 87.06$\pm$0.14\% & 76.49$\pm$0.52\% & 55.18$\pm$0.70\% \\ {\ccpll} & 22.16$\pm$0.55\% & 83.31$\pm$0.07\% & 86.01$\pm$0.13\% & 72.87$\pm$0.82\% & 55.56$\pm$0.23\% \\ {\lws} & 19.00$\pm$0.16\% & 84.46$\pm$0.22\% & 86.25$\pm$0.01\% & 46.77$\pm$0.66\% & 48.00$\pm$0.16\% \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \caption{Classification accuracy (mean$\pm$std) of each comparing approach on the real-world datasets. 
} \label{performance_real} \centering \begin{tabular}{cccccc} \toprule & Lost & BirdSong & MSRCv2 & Soccer Player & Yahoo!News \\ \midrule {\fid} & \textbf{78.57$\pm$0.45\% } & \textbf{74.47$\pm$0.36\%} & 45.86$\pm$0.28\%& 54.48$\pm$0.10\% & \textbf{66.38$\pm$0.07\%} \\ \midrule {\vpll} & 76.87$\pm$0.86\% &73.39$\pm$0.26\% & \textbf{49.97$\pm$0.43\%}& 55.81$\pm$0.10\% & 66.26$\pm$0.13\% \\ {\proden} & 76.47$\pm$0.25\% & 73.44$\pm$0.12\% & 45.10$\pm$0.16\% & 54.05$\pm$0.15\% & 66.14$\pm$0.10\% \\ {\rcpll} & 76.26$\pm$0.46\% & 69.33$\pm$0.32\% & 49.47$\pm$0.43\% & \textbf{56.02$\pm$0.59\%} & 63.51$\pm$0.20\% \\ {\ccpll} & 63.54$\pm$0.25\% & 69.90$\pm$0.58\%& 41.50$\pm$0.44\% & 49.07$\pm$0.36\% & 54.86$\pm$0.48\% \\ {\lws} & 73.13$\pm$0.32\% & 51.45$\pm$0.26\% & 49.85$\pm$0.49\%& 50.24$\pm$0.45\% & 48.21$\pm$0.29\% \\ \bottomrule \end{tabular} \end{table} \subsection{Baselines} The performance of {\fid} is compared against five deep PLL approaches: \begin{itemize}[topsep=0ex,leftmargin=*,parsep=1pt,itemsep=1pt] \item {\proden} \cite{lv2020progressive}: A progressive identification approach which approximately minimizes a risk estimator and identifies the true labels in a seamless manner; \item {\rcpll} \cite{feng2020provably}: A risk-consistent approach which employs the loss correction strategy to establish the true risk by only using the partially labeled data; \item {\ccpll} \cite{feng2020provably}: A classifier-consistent approach which also uses the loss correction strategy to learn the classifier that approaches the optimal one; \item {\vpll} \cite{yao2020deep}: An ID PLL approach which recovers the latent label distribution via variational inference methods; \item {\lws} \cite{wen2021leveraged}: A risk-consistent approach which proposes a leveraged weighted loss to trade off the losses on candidate labels and non-candidate ones. \end{itemize} For all the deep approaches, We used the same training/validation setting, models, and optimizer for fair comparisons. 
Specifically, a 3-layer MLP is trained on {Kuzushiji-MNIST} and {Fashion-MNIST}, the 32-layer ResNet \cite{he2016deep} is trained on {CIFAR-10}, the 12-layer ConvNet \cite{han2018co} is trained on {CIFAR-100}, a pre-trained ResNet-18 and pre-trained ResNet-34 is trained on {FLOWER-102} and {CUB-200}, and the linear model is trained on real-world PLL datasets, respectively. The hyper-parameters are selected so as to maximize the accuracy on a validation set (10\% of the training set). We run 5 trials on the benchmark datasets and the real-world PLL datasets. The mean accuracy as well as standard deviation are recorded for all comparing approaches. All the comparing methods are implemented with PyTorch. \begin{table} \caption{Classification accuracy (mean$\pm$std) of each comparing approach on benchmark datasets corrupted by the ID generation process. } \centering \fontsize{9}{11.5}\selectfont \label{enhance_benchmark} \begin{tabular}{cccccc} \toprule & CUB-200 & Kuzushiji-mnist & Fashion-mnist & CIFAR-10 & CIFAR-100 \\ \midrule {\proden} & 40.66$\pm$0.11\% & 87.60$\pm$0.23\% & 87.21$\pm$0.11\% & 76.77$\pm$0.63\% & 55.12$\pm$0.12\% \\ {\proden+\fid} & 45.68$\pm$0.12\% & 88.70$\pm$0.02\% & 87.62$\pm$0.04\% & 79.00$\pm$0.28\% & 57.68$\pm$0.14\% \\ \midrule {\rcpll} & 41.86$\pm$0.71\% & 87.25$\pm$0.06\% & 87.06$\pm$0.14\% & 76.49$\pm$0.52\% & 55.18$\pm$0.70\% \\ {\rcpll+\fid} & 46.02$\pm$0.66\% & 87.78$\pm$0.09\% & 87.45$\pm$0.05\% & 78.89$\pm$0.17\% & 57.66$\pm$0.11\% \\ \midrule {\ccpll} & 22.16$\pm$0.55\% & 83.31$\pm$0.07\% & 86.01$\pm$0.13\% & 72.87$\pm$0.82\% & 55.56$\pm$0.23\% \\ {\ccpll+\fid} & 22.74$\pm$0.19\% & 83.98$\pm$0.10\% & 86.32$\pm$0.06\% & 77.03$\pm$0.58\% & 56.18$\pm$0.06\% \\ \midrule {\lws} & 19.00$\pm$0.16\% & 84.46$\pm$0.22\% & 86.25$\pm$0.01\% & 46.77$\pm$0.66\% & 48.00$\pm$0.16\% \\ {\lws+\fid} & 19.55$\pm$0.11\% & 84.71$\pm$0.07\% & 86.40$\pm$0.05\% & 48.54$\pm$0.04\% & 49.61$\pm$0.27\% \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] 
\caption{Classification accuracy (mean$\pm$std) of each comparing approach on the real-world datasets. } \label{enhance_real} \fontsize{9}{11.5}\selectfont \centering \begin{tabular}{cccccc} \toprule & Lost & BirdSong & MSRCv2 & Soccer Player & Yahoo!News \\ \midrule {\proden} & 76.47$\pm$0.25\% & 73.44$\pm$0.12\% & 45.10$\pm$0.16\% & 54.05$\pm$0.15\% & 66.14$\pm$0.10\% \\ {\proden+\fid} & 78.57$\pm$0.45\% & 74.47$\pm$0.36\% & 45.86$\pm$0.28\%& 54.48$\pm$0.10\% & 66.38$\pm$0.07\% \\ \midrule {\rcpll} & 76.26$\pm$0.46\% & 69.33$\pm$0.32\% & 49.47$\pm$0.43\% & 56.02$\pm$0.59\% & 63.51$\pm$0.20\% \\ {\rcpll+\fid} & 78.56$\pm$0.45\% & 70.77$\pm$0.26\% & 51.18$\pm$0.59\% & 56.49$\pm$0.03\% & 63.86$\pm$0.22\% \\ \midrule {\ccpll} & 63.54$\pm$0.25\% & 69.90$\pm$0.58\%& 41.50$\pm$0.44\% & 49.07$\pm$0.36\% & 54.86$\pm$0.48\% \\ {\ccpll+\fid} & 65.47$\pm$0.93\% & 71.50$\pm$0.06\% & 43.21$\pm$0.43\% & 49.36$\pm$0.02\% & 55.22$\pm$0.05\% \\ \midrule {\lws} & 73.13$\pm$0.32\% & 51.45$\pm$0.26\% & 49.85$\pm$0.49\%& 50.24$\pm$0.45\% & 48.21$\pm$0.29\% \\ {\lws+\fid} & 75.30$\pm$0.26\% & 52.35$\pm$0.26\% & 52.42$\pm$0.86\%& 50.94$\pm$0.47\% & 48.6$\pm$0.12\% \\ \bottomrule \end{tabular} \end{table} \begin{figure}[t] \centering \begin{minipage}{0.42\linewidth} \centering{\includegraphics[width=1\textwidth]{lost.pdf}} \caption{Estimated purified region on {Lost}.}\label{region} \end{minipage}\qquad \begin{minipage}{0.42\linewidth} \centering{\includegraphics[width=1\textwidth]{cifar10_e.pdf}} \caption{Hyper-parameter sensitivity on {CIFAR-10}.}\label{sensitivity} \end{minipage} \end{figure} \subsection{Experimental Results} Table~\ref{performance_benchmark} and Table~\ref{performance_real} report the classification accuracy of each approach on benchmark datasets corrupted by the ID generation process and the real-world PLL datasets, respectively. The best results are highlighted in bold. 
We can observe that {\fid} achieves the best performance among the compared approaches in most cases, and its advantage over the comparing approaches remains stable under a varying number of candidate labels. In addition, to analyze the purified region in Theorem~\ref{theorem1}, we employ the confidence predictions of $f(\bm{x},\hat{\bm{\theta}})$ (the network in Section~\ref{sec:dataset}) as the posterior and plot the curve of the estimated purified region in every epoch on Lost in Figure \ref{region}. Although the estimated purified region may not be perfectly accurate, the curve shows a trend of continuous growth of the purified region. \subsection{Further Analysis} As the {\fid} framework is flexible with respect to the loss function, we integrate the proposed method with previous methods for instance-independent PLL, including {\proden}, {\rcpll}, {\ccpll} and {\lws}. In this subsection, we empirically show that these previous methods can be promoted to better performance after being integrated with {\fid}. Table \ref{enhance_benchmark} and Table \ref{enhance_real} report the classification accuracy of each method for instance-independent PLL and its variant integrated with {\fid} on the benchmark datasets corrupted by the ID generation process and the real-world datasets, respectively. As shown in Table \ref{enhance_benchmark} and Table \ref{enhance_real}, the approaches integrated with {\fid}, including {\proden+\fid}, {\rcpll+\fid}, {\ccpll+\fid} and {\lws+\fid}, achieve superior performance over the original methods, which clearly validates the usefulness of the {\fid} framework for improving performance in ID PLL. Figure \ref{sensitivity} illustrates how the variant integrated with {\fid} performs under different hyper-parameter configurations on {CIFAR-10}; similar observations are made on the other datasets. The hyper-parameter sensitivity on the other datasets can be found in Appendix A.4. 
As shown in Figure \ref{sensitivity}, the performance of the variant integrated with {\fid} is relatively stable across a broad range of each hyper-parameter. This property is quite desirable, as the {\fid} framework can thus achieve robust classification performance. \section{Conclusion} In this paper, we studied ID partial label learning and proposed a novel, theoretically guaranteed approach named {\fid}, which trains the classifier with progressive purification of the candidate labels and is guaranteed to eventually approximate the Bayes optimal classifier for ID PLL. Specifically, we first purify the candidate labels on which the predictions of the classifier are highly confident. Next, the model is improved using the purified labels. We then continue alternating purification and model training until convergence. In addition, we prove that the model improves at a sufficient rate through the iterations and is eventually consistent with the Bayes optimal classifier. Experiments on benchmark and real-world datasets validate the effectiveness of the proposed method. If PLL methods become very effective, the need for exactly annotated data would be significantly reduced. As a result, the employment of data annotators might decrease, which could lead to a negative societal impact. \bibliographystyle{plain}
\section{Introduction} The Australian Square Kilometre Array Pathfinder (ASKAP; \cite{johnston:2007}, \cite{westmeier:2010}) represents a significant advance in radio telescope design. This facility will combine high resolution imaging (through the use of a 36-element aperture synthesis array with a maximum baseline of $6$~km) with a wide field of view (achieved with innovative focal plane array technology) at frequencies between 700 MHz and 1.8 GHz. Installation of the first ASKAP antenna at the Murchison Radio Observatory site, Western Australia, occurred in early 2010, and the 6-antenna BETA test array will operate from September 2011--March 2013. It is anticipated that full science operations will be underway by 2014. Processing and data transport requirements for ASKAP are described in \cite{cornwell:2008}, and \cite{quinn:2010} provides an overview of the data infrastructure requirements. WALLABY \cite{wallaby:url} is one of ten ASKAP Survey Science Projects currently in the design study phase. WALLABY aims to significantly enhance our understanding of the extragalactic neutral hydrogen (H{\sc i}) universe. The survey will cover 75\% of the sky, detecting $\sim\!0.5$ million galaxies to redshift $z = 0.26$ (lookback time $\sim\!3$ Gyr). Key science outcomes are studies of galaxy formation and the missing satellite problem in the Local Group, evolution and star formation in galaxies, mergers and interactions, determination of the H{\sc i} mass function and its variation with galaxy density, and the nature of the cosmic web. Unlike previous H{\sc i} surveys, it will not be feasible to keep all of the raw data (i.e. Fourier visibilities) from ASKAP observations for subsequent reprocessing. Instead, pipeline-preprocessed spectral data cubes will be provided for analysis. Each WALLABY spectral cube is anticipated to comprise 6144 $\times$ 6144 spatial pixels and 16,384 spectral channels (i.e.
$\sim\,600$ gigavoxels or volume elements in total), requiring 2.5 terabytes (TB) of storage. A total of 1200 cubes will be required to cover the sky south of declination $\delta = +30^\circ$. Likely additional outputs are integrated (moment) maps, continuum images, sub-cubes (individual objects or scaled versions of larger datasets), and full parameterisation of all galaxies, resulting in several petabytes of data products. Such data volumes pose considerable challenges for the existing work practices of astronomers. Indeed, visualisation and analysis (hereafter, ``V+A'') of WALLABY data products will require both evolutionary and revolutionary changes to existing software and hardware, with a likely move away from desktop-only solutions, and a greater reliance on remote services. A brief overview of the WALLABY workflow from data collection to catalogue is as follows: \begin{enumerate} \item Observe field. \item Generate spectral data cube from visibilities. \item Visualise cube as quality control prior to deletion of raw data. \item Transfer preprocessed data cube to archive. \item Perform source finding on data cube. \item Fit models to candidates and perform related quantitative analysis. \item Add parameterised candidates to catalogue. \end{enumerate} Apart from personnel, the main resource for completion of all of these stages is access to appropriate computing infrastructure (hardware and software). As a framework within which to assess the practicalities of achieving each step in the WALLABY workflow, we begin (Section II) by considering desktop and high performance computing (HPC) cluster resources available and used by astronomers today, and project these forward to configurations available by 2014. In Section III, we present five challenges that V+A of WALLABY data products will face in the likely computing environment. 
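The data-volume figures quoted above can be reproduced with a few lines of arithmetic. This is a sanity check only; the 4 bytes per voxel is our assumption (standard 32-bit floating-point data), which the text does not state explicitly:

```python
# Sanity check of the WALLABY data-volume figures,
# assuming 4 bytes (32-bit float) per voxel.
spatial = 6144           # pixels per spatial axis
channels = 16_384        # spectral channels
voxels = spatial * spatial * channels

cube_tb = voxels * 4 / 1e12           # bytes -> terabytes (decimal)
survey_pb = 1200 * cube_tb / 1e3      # 1200 cubes -> petabytes

print(f"{voxels / 1e9:.0f} gigavoxels per cube")   # 618, i.e. "~600"
print(f"{cube_tb:.1f} TB per cube")                # 2.5 TB
print(f"{survey_pb:.1f} PB for 1200 cubes")        # 3.0 PB
```

The full-survey total of roughly 3 PB is before any of the additional data products (moment maps, sub-cubes, catalogues) are counted, consistent with the "several petabytes" quoted above.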
We consider tasks that can be done essentially the same way they are now, and those requiring an investment in new technology or the development of new software, in order to deal with data sets orders of magnitude larger than previous extragalactic H{\sc i} survey projects. We make our concluding remarks in Section IV. Throughout, we make comparisons with the existing H{\sc i} Parkes All Sky Survey (HIPASS; \cite{barnes:2001}), conducted with the Parkes Multibeam receiver \cite{staveley:1996}. The southern catalogue, HICAT ($\delta < +2^{\circ}$; \cite{meyer:2004}), comprised 4315 galaxies, and the northern extension, NHICAT ($+2^{\circ} < \delta < +25^{\circ}30'$; \cite{wong:2006}), a further 1002 sources. Russell Jurek (Australia Telescope National Facility; ATNF) has combined the 388 individual southern sky data cubes into a single all-sky cube with $1721 \times 1721 \times 1025 = 3$ gigavoxels, and a file size of 12 GB. \section{The Compute Context} The configurations of (typical) desktop and HPC resources available to astronomers are fundamental to the capacity of existing or new software to enable each stage of the WALLABY workflow. In Table I, we present hardware parameters for today's mid-range desktop computer. Using typical growth rates in the computing industry (e.g. Moore's Law; Kryder's Law \cite{walter:2005}), we extrapolate to 2014 (i.e. ``tomorrow''). Quoted processing speeds are theoretical (i.e. peak), single precision values; these assume 100$\%$ efficient algorithms using all available processing cores/streams. Table II presents similar per-node comparisons for cluster-based HPC configurations. Other HPC configurations are possible, but we restrict our discussion to facilities similar to the Swinburne ``Green Machine'' Supercomputer \cite{green:url}, with which we are most familiar. Most specifications and capabilities of desktop and HPC compute platforms will simply evolve and grow as they have done over the past decades. 
Two significant revolutions in compute capability of processors, however, are currently underway (e.g. \cite{sutter:2005}): \begin{enumerate} \item Central processing units (CPUs) are gaining increased compute capacity in the form of multiple cores, rather than increased clock speeds. \item Graphics processing units (GPUs) are boosting compute capacity by around 10--50 times by functioning as streaming co-processors, at very low cost. \end{enumerate} This ``concurrency'' revolution, based on the availability of high levels of parallelism on a single chip, requires major software work \cite{kirk:2010} and a re-examination of algorithms so that scientists can benefit fully from this new processing paradigm (see \cite{barsdell:2010} and \cite{fluke:2010b} for astronomy-related solutions). While CPUs are optimised for sequential programs thanks to sophisticated control logic and large memory caches (to reduce instruction and data access latencies), GPUs maximise the chip area for computation. The advent of programming libraries such as CUDA and OpenCL has enabled the use of GPUs for general purpose computation, with the GPU acting as a computational co-processor. Typical (single precision) theoretical peaks are already over 1 teraflop/s for cards like the NVIDIA Tesla C2050. Some of the challenges we identify in this paper can only be solved with GPUs.
\begin{table}[!t] \caption{Desktop computing for WALLABY} \label{tbl:desktop} \centering \begin{tabular}{|l|l|l|} \hline {\bf Attribute} & {\bf Today (2010)} & {\bf Tomorrow (2014)} \\ \hline CPU Clock & 3 GHz & 3 GHz \\ Number of Cores & 2 & 8 \\ CPU Speed & 24 gigaflop/s & 100 gigaflop/s \\ CPU Memory & 4 GB & 18 GB \\ CPU--Memory Bandwidth & 12 GB/s & 24 GB/s \\ CPU--Network Bandwidth & 10--100 MB/s & 100--1000 MB/s \\ Local Storage & 0.5 TB & 5 TB \\ Local I/O Access & 50 MB/s & 100 MB/s \\ GPU Memory & -- & 2--3 GB \\ GPU Speed & -- & 2 teraflop/s \\ GPU Memory Bandwidth & -- & 200 GB/s \\ \hline \end{tabular} \end{table} \begin{table} \caption{Per-Node High Performance Computing for WALLABY} \label{tbl:hpc} \centering \begin{tabular} {|l|l|l|} \hline {\bf Attribute} & {\bf Today (2010)} & {\bf Tomorrow (2014)} \\ \hline CPU Clock & 3 GHz & 3 GHz \\ Number of Cores & 8 & 24 \\ CPU Speed & 96 gigaflop/s & 500 gigaflop/s \\ CPU Memory & 16 GB & 72 GB \\ CPU--Memory Bandwidth & 12 GB/s & 36 GB/s \\ CPU--Network Bandwidth & 1--2 GB/s & 2--8 GB/s\\ Network Storage & 10s of TB & 1--10 PB \\ Number of nodes & 100--300 & 100--300 \\ GPU Memory & -- & 6--10 GB\\ GPU Speed & -- & 4--8 teraflop/s \\ GPU Memory Bandwidth & -- & 300 GB/s \\ \hline \end{tabular} \end{table} \section{Challenges} \subsection{Handling Big Data Files} Steps 1--4 of the WALLABY workflow relate to producing spectral data cubes that are significantly larger than have been available for previous surveys. The logistics of moving large data cubes on the network notwithstanding, it should be immediately apparent that an entire WALLABY cube cannot fit in the main memory of either today's or tomorrow's desktop configuration, and only one full resolution cube (at a time) can be stored on tomorrow's internal desktop hard drive. Indeed, 16 GB memory limits sub-cubes to 2k $\times$ 2k $\times$ 1k (= 4 gigavoxels) for in-memory analysis. Since most existing astronomy software for the V+A of radio telescope data (e.g.
Karma \cite{karma:url}, AIPS \cite{AIPS:url}, CASA \cite{CASA:url}) is aimed at handling data sets that can fit in the host memory of a desktop machine, without further development, these packages are clearly incompatible with handling 2.5 TB cubes. For V+A tasks that require access to an entire spectral cube (see below), the practical alternative is to use a distributed computing cluster architecture as a remote V+A service. This is one of the anticipated roles of the Pawsey HPC Centre \cite{quinn:2010}; however, other major computing facilities such as the Swinburne Supercomputer incorporating gSTAR (the GPU Supercomputer for Theoretical Astrophysics Research) could also be used. In principle, 2.5 TB of memory must be available across a computing cluster: assuming 16 GB (or 72 GB) is available per compute node, this means 160 machines today (or 36 tomorrow); there are clear advantages in managing fewer machines, each with more memory. Moving software to a cluster environment necessitates the use of a distributed memory infrastructure, and an understanding of the level of parallelism in V+A algorithms. A data-parallel paradigm will be appropriate in many cases. A remote service mode of operation is not likely to have a negative impact on the user's experience for most large-scale analysis tasks (e.g. source finding or re-gridding), as these do not occur in ``real-time''. The ability to achieve interactive visualisation at frame rates above 5--10 frames/sec (fps) will be limited by factors such as processing and network speed, and bandwidth. To maximise efficiency, a distributed cluster also requires a parallel file system or other form of distributed network storage. Unfortunately, astronomy's standard FITS (Flexible Image Transport System \cite{FITS:url}) file format is not ideal for parallel access.
Practical alternatives for faster access include NetCDF \cite{NetCDF:url} or HDF5 \cite{HDF5:url} formats, but these require either ``on-the-fly'' transformations between file formats and metadata, or a possible need to increase the total storage for the WALLABY survey cubes. \subsection{Global Views versus Image Slices} The need to discard raw visibility data from ASKAP early in the WALLABY workflow (Step 3) means that global quality control of data cubes will be critical. Possible noise characteristics and artefacts may include large-scale gradients, non-uniform noise levels across the field of view, incompletely subtracted continuum sources and hydrogen recombination lines. While inspecting individual slices may be one approach to quality control, this is not straightforward. Suppose it was possible to sequentially examine individual 2D slices from a WALLABY data cube (along the spectral axis), at a reasonable frame inspection rate of 5 fps: it would take $\sim\!1$ hour to step through 16k spectral slices. This assumes a display capable of showing 6k $\times$ 6k pixels -- for an HD-1080 monitor, we require at least $3 \times 6$ sub-frames, thus increasing the view time to 18 hours per cube. Moving to sub-cubes may limit opportunities to understand global variations. Moreover, slicing techniques remove the ability to perceive artefacts or noise characteristics along the slicing axis, so it may be necessary to slice along more than one axis. Alternatively, scaled down cubes could be inspected, but these may hide artefacts, as scaling of approximately 10:1 (spatially) would be required. A preferred option may be to use a multi-panel display for full resolution visualisation. For example, the OptiPortal \cite{Opti:url} at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) ICT Centre, Marsfield, New South Wales, comprises 25 high definition panels, with a total resolution over 50 megapixels.
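The slicing-time estimates, and the contrast with a tiled display, follow from simple arithmetic (a rough sketch; the $3 \times 6$ sub-frame count is the paper's stated lower bound, and the final totals are rounded):

```python
channels, side = 16_384, 6144
secs = channels / 5                      # one full pass at 5 frames per second
print(f"{secs / 3600:.1f} h per slicing pass")        # 0.9 h, i.e. "~1 hour"

# The 3 x 6 = 18 sub-frame lower bound for an HD-1080 monitor
# (6144/1920 = 3.2 across, 6144/1080 = 5.7 down):
print(f"{18 * secs / 3600:.0f} h for all sub-frames")  # 16 h, quoted as 18 h

# A 50-megapixel tiled display, by contrast, fits a full frame at 1:1:
print(f"one frame = {side * side / 1e6:.1f} Mpx vs 50 Mpx available")
```

A single 6144 $\times$ 6144 frame is only $\sim\!37.7$ megapixels, which is why a 50-megapixel multi-panel display can show it without any tiling of sub-frames.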
Accordingly, a full resolution WALLABY cube could be viewed on such a display at a 1:1 mapping of data to screen pixels. A GPU-cluster based visualisation framework capable of volume rendering ``larger than memory'' data cubes at interactive frame rates has been demonstrated by \cite{hassan:2011}. In recent tests of this framework using the CSIRO GPU Cluster in Canberra (256 Tesla S2050 GPUs with 3 GB/GPU), frame rates of better than 40 fps ($1024 \times 1024$ pixel output) were achieved for a 204 GB cube using 128 S2050 GPUs. Scaling this to a full WALLABY data cube requires a minimum of $\sim\!450$ $\times$ 6 GB GPUs (or $\sim\!275$ $\times$ 10 GB) in 2014. This task will not be feasible at interactive frame rates with a CPU-only HPC cluster. A combination of an OptiPortal and a GPU cluster may support fully three-dimensional global views of WALLABY data cubes, and the ability to quickly identify (compared with slicing) areas of a data cube that may indicate further processing is required using the visibilities. \subsection{Source Finding and Confirmation} Source finding is the process of identifying and extracting candidate sources from a data cube. To a large extent, the science outcomes of WALLABY depend on the existence of source finding software that maximizes reliability (i.e. {\em only} identifies extragalactic H{\sc i} sources) and completeness (i.e. finds {\em every} source that exists within a data cube). An ideal source finder would have a 1:1 candidate to source ratio, and offer 100\% completeness. Conceptually, source finding is a simple task: examine each voxel in turn and determine the amount of source signal contributing to that voxel. Practically, source finding is extremely difficult, as every voxel contains both source and non-source components.
The latter include noise (that may vary across the field of view), interference (natural and artificial), contamination from bright sources outside the field, recombination lines, incomplete subtraction of continuum sources, and so on. It is instructive to consider the source finding tasks undertaken for HIPASS. The southern HIPASS catalogue, HICAT, used two main source finders: {\sc multifind} and {\sc tophat}. These produced $\sim\!140,000$ candidates, all of which were inspected manually (see \cite{meyer:2004} for details of these source finders). Neither source finder identified all candidates in the final source list. The overall performance of {\sc tophat} was much better than {\sc multifind}: 17,232 {\sc tophat} candidates resulted in $90 \%$ of the final catalogue of 4315 galaxies. Due to its lower candidate-to-source ratio, only {\sc tophat} was used for the northern extension, NHICAT, with 14,879 candidates resulting in 1002 astronomer-confirmed sources \cite{wong:2006}. For HIPASS, it was possible to view $>150,000$ candidates by eye in order to provide confirmation of source identification. Overall, $\sim\!95\%$ of candidates were rejected. Limiting this to {\sc tophat}, the rejection rate was $75\%$ for HICAT and $93\%$ for NHICAT. For the expected $\sim\!0.5$ million WALLABY sources, such high rejection rates will be crippling if human inspection is expected to play a significant role. Assuming a perfect source finder (i.e. no false detections) and 1 minute per source to load data, confirm, and annotate a candidate for later analysis, inspecting 0.5 million candidates will take a minimum of $\sim\!1$ year (walltime). Fortunately, this is a parallel task that can commence before all survey cubes are obtained. The inspection processes could be shared between WALLABY team members, provided consistency in source confirmation can be assured. 
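The rejection rates and the one-year walltime estimate can be checked directly from the HIPASS figures quoted above (the one-minute-per-candidate rate is the paper's assumption):

```python
# HICAT: 17,232 tophat candidates yielded 90% of the 4315-source catalogue.
hicat_candidates, hicat_sources = 17_232, int(0.9 * 4315)
# NHICAT: 14,879 tophat candidates yielded 1002 confirmed sources.
nhicat_candidates, nhicat_sources = 14_879, 1002

rej_hicat = 1 - hicat_sources / hicat_candidates
rej_nhicat = 1 - nhicat_sources / nhicat_candidates
print(f"HICAT tophat rejection:  {rej_hicat:.0%}")   # ~77%, quoted as 75%
print(f"NHICAT tophat rejection: {rej_nhicat:.0%}")  # 93%

# WALLABY: 0.5 million candidates at 1 minute each, perfect source finder.
days = 500_000 / (60 * 24)
print(f"WALLABY inspection walltime: {days:.0f} days")  # 347 days, ~1 year
```

Even under the optimistic assumption of a perfect source finder, human confirmation alone is a year-scale effort, which is why high-reliability automated source finders are essential.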
As with visualisation, source finding within 2.5 TB data cubes requires (at minimum) a distributed computing approach. Effort is underway to produce a distributed version of {\sc duchamp} \cite{DUCHAMP:url}, but as with HIPASS, more than one source finder may be required. While solutions to the source finding problem are outside the scope of this paper, we assert that a GPU cluster will prove to be beneficial here. For compute intensive tasks, GPUs offer a massive processing gain at much lower cost than a CPU-only cluster with the equivalent processing power. GPUs may also permit alternative approaches to source finding that are simply not feasible to undertake on a CPU. \subsection{Desktop Visualisation and Analysis} Assuming we have solved the data handling problem, and that an appropriate catalogue of sources is available for inspection and quantitative analysis, we now consider what could be achieved on a desktop computer in 2014. The biggest limitation is likely to be the amount of main memory: 16 GB will accommodate a $\sim\,$4 gigavoxel cube, with a choice between cropping and subsampling from a larger WALLABY cube. Storing the WALLABY data in 16 GB tiles would facilitate some reasonable level of ``traditional'' handling of data by astronomers - but the entire survey would now occupy nearly 190,000 tiles instead of 1200 cubes. A reasonable balance between the spatial and spectral axes yields: \begin{equation} \left( \frac{s_{\alpha\delta} d_{\alpha\delta}}{6} \right)^2 \left( \frac{s_z d_z}{4} \right) > 1 \end{equation} where $s_{\alpha\delta}$ ($s_z$) is how coarsely the user is prepared to subsample along each spatial (spectral) axis of the WALLABY data cube, and $d_{\alpha\delta}$ ($d_z$) is the factor by which the user is prepared to crop a standard WALLABY cube along each spatial (spectral) axis. Let us refer to a cube that has been subsampled and/or cropped to fit in main memory on tomorrow's desktop as a {\em scube}. 
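The constants in the inequality above can be verified directly: the full cube is exactly $6^2 \times 4 = 144$ times the 4 gigavoxel in-memory budget. A quick check (assuming, as before, 4 bytes per voxel so that 16 GB corresponds to the 2k $\times$ 2k $\times$ 1k limit quoted earlier):

```python
# The constants 6 and 4 in the scube inequality follow from requiring the
# reduced cube to fit the ~4 gigavoxel budget (2k x 2k x 1k at 4 B/voxel).
full_voxels = 6144**2 * 16_384
budget_voxels = 2048**2 * 1024
print(f"required reduction: {full_voxels / budget_voxels:.0f}x")  # 144x
print(f"6**2 * 4 = {6**2 * 4}")                                   # 144

# Example scube: subsample each spatial axis by 6, crop the band by 4.
s_ad, d_ad, s_z, d_z = 6, 1, 1, 4
assert (s_ad * d_ad / 6)**2 * (s_z * d_z / 4) >= 1
```

The reduction is exactly 144: a factor of $3^2$ spatially (6144 to 2048 per axis) and 16 spectrally (16,384 to 1024 channels), so any combination of subsampling and cropping satisfying the inequality yields a cube that fits in tomorrow's 16 GB of desktop memory.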
Scubes will be acceptable for most modes of qualitative visualisation, but are not appropriate if quantitative analysis is going to be attempted; here, cropping is the necessary choice, but this will substantially reduce the area of sky and/or frequency space that is represented by a single scube. While a 16 GB scube can easily be stored locally, it will take nearly 3 minutes to load into memory -- waiting for data to load will become a much more common task for tomorrow's astronomers. Once loaded, even the simplest of traditional operations (e.g. find the minimum, maximum, mean, standard deviation) will take on the order of seconds in the absolute best case (based on having to process the entire scube through the CPU). If any significant additional processing or filtering of the scube is desired, then the desktop platform will not have sufficient compute capability in the CPU alone. A desktop platform with a GPU co-processor would improve the situation, but not drastically, as the scube is still too large to fit in the GPU's own local memory (2--3 GB). The improvement in compute capability might in practice be a factor of a few, but is unlikely to be better than 10 times. Assuming a 4 gigavoxel WALLABY scube, the following traditional, interactive visualisation tasks should be feasible on tomorrow's desktop: \begin{enumerate} \item Image slicing: 4 gigavoxels can be scanned on a 1 megapixel display, at 25 fps, in under three minutes. Compare this with HIPASS: 1024 frames at 25 fps takes $\sim\,$40 seconds, but there is much less data (HIPASS cubes had $170 \times 160$ spatial pixels, with some blanked, so the information content is vastly lower per frame). \item Volume rendering: to accomplish a traditional, hardware-accelerated texture-based volume rendering, we must further compress our scube from 16 GB down to $\sim 2$ GB (500 megavoxels, $\sim\!800^3$ voxels) so that the image fits in the GPU co-processor memory.
Subsampling is likely the preferred option here, as traditional volume rendering is qualitative not quantitative. \end{enumerate} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Three-dimensional texture volume rendering performance using NVIDIA GT120 GPU with 512 MB RAM. } \label{tbl:volrender} \centering \begin{tabular}{|c|c|c|c|c|} \hline && {\bf Minimum} & {\bf Maximum} & {\bf Average}\\ ${\mathbf N}$ & ${\mathbf N^3}$ {\bf voxels} & {\bf (fps)} & {\bf (fps)} & {\bf (fps)} \\ \hline 150 & $3.4 \times 10^6$ & 20.0 & 20.0 & 20.0 \\ 200 & $8.0 \times 10^6$ & 10.0 & 19.9 & 14.0 \\ 250 & $15.6 \times 10^6$ & 8.6 & 15.0 & 11.2 \\ 300 & $27.0 \times 10^6$ & 6.7 & 12.0 & 9.1 \\ 350 & $42.9 \times 10^6$ & 6.0 & 10.0 & 7.8 \\ \hline \end{tabular} \end{table} While fitting a 2 GB scube into GPU memory is achievable, we still require an interactive frame rate of $>5$ fps. Table III presents the results of performance testing with an NVIDIA GT120 GPU (512 MB RAM). Today's desktop with a mid-range GPU can render up to $350^3$ voxels, filling $600 \times 600$ pixels on the screen, at $\sim\,$8 fps. This limit is imposed by the maximum texture size on the card (a factor of both the hardware and the application programming interface). A top-end graphics card today (the ATI Radeon 5970) can render a $500^3$ volume (125 megavoxels = 500 MB) at around 8 fps comfortably, just filling $1000 \times 1000$ pixels when the cube is face on. In practice this three-dimensional (3D) texture rendering does better for certain orientations of the cube, presumably corresponding to more contiguous memory access when gathering textures from the volume. Thus 8 fps is a conservative lower bound; around half the time it is actually managing closer to 15 fps. We can extrapolate our results to estimate a rendering rate of $\sim\,$1 fps if we could fit a 500 megavoxel cube on the card. On tomorrow's desktop platform this could be accomplished at $\sim\,$4 fps.
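The extrapolation above assumes frame rate scales inversely with voxel count, which the Table III trend roughly supports. A sketch of that scaling (a rough model under stated assumptions, not a benchmark):

```python
# Extrapolate the measured rendering rate to a 500-megavoxel scube,
# assuming fps scales inversely with voxel count, and an assumed ~4x
# GPU speed-up between today's and tomorrow's desktop hardware.
ref_voxels, ref_fps = 42.9e6, 8.0       # 350^3 on today's mid-range GPU
target_voxels = 500e6                   # the 2 GB scube (~800^3 voxels)

fps_today = ref_fps * ref_voxels / target_voxels
fps_tomorrow = 4 * fps_today
print(f"today:    ~{fps_today:.1f} fps")     # ~0.7, quoted above as ~1 fps
print(f"tomorrow: ~{fps_tomorrow:.1f} fps")  # ~2.7, quoted above as ~4 fps
```

Either way the result falls well below the $>5$ fps interactivity threshold.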
Not a stellar result, so even tomorrow, texture-based volume rendering will be limited by rendering capability, not GPU memory size. Both the Local Volume H{\sc i} Survey (LVHIS; \cite{koribalski:2008}) and The H{\sc i} Nearby Galaxy Survey (THINGS; \cite{walter:2008}) have demonstrated the diversity in H{\sc i} kinematic structures in the local universe. Simple models, such as differentially rotating H{\sc i} disks \cite{rogstad:1974}, \cite{begeman:1989}, do not capture the complexities of warps, anomalous gas and mergers. A typical modelling process involves the generation of six-dimensional position and velocity values for an input model, and mapping these to two spatial coordinates and a line-of-sight velocity. New opportunities may arise for visualisation-directed, interactive model-fitting to complex kinematic structures using an approach of the type described by \cite{fluke:2010a}. The highly data parallel nature of this process (the contribution of each spatial pixel, and hence line-of-sight, can be computed independently of all others) is well-matched to the GPU, so interactive frame rates are unlikely to be computationally limited. See also the {\sc shape} 3D modelling tool for a similar technique applied to planetary nebulae and other bipolar outflows \cite{steffen:2010}. \subsection{Data Product Management} While overall ASKAP data management will be largely the responsibility of the Pawsey HPC Centre, individual survey projects will need to carefully consider how they approach management of derived data. For a survey as comprehensive and data-rich as WALLABY, there is no place for the somewhat ad hoc data management practices that have sufficed for earlier all-sky extragalactic H{\sc i} surveys. The access times required to open and edit files notwithstanding, text files are not a satisfactory solution for managing catalogues of $\sim\,0.5$ million galaxies, plus similar numbers of rejected or unconfirmed candidates.
Catalogues will need to capture model parameters, reasons for rejecting candidates, and meta-data relating to the provenance of analysed sources (which analysis package was used, by whom, and with what set of input parameters, so that the results can be repeated). Moreover, it will be necessary to share up-to-date modifications of the catalogue between multiple collaborators. Solutions here are likely to include large-scale commercial databases -- and this may be one of the cases where astronomers should spend money to buy a solution, rather than reinvent one themselves. We intend to address data management solutions for WALLABY in future work, but note that understanding the benefits and limitations of approaches used for similar large-scale catalogues from observational (e.g. Sloan Digital Sky Survey \cite{SDSS:url} and WiggleZ \cite{WiggleZ:url}) and simulation (e.g. Millennium \cite{Mill:url}) projects will be essential. \begin{table}[!t] \caption{Visualisation and analysis challenges for WALLABY, with potential solutions. } \label{tbl:vischal} \centering \begin{tabular}{|l|l|} \hline {\bf Challenge} & {\bf Solution} \\ \hline Big data files & Use distributed file system \\ & and remote V+A services. \\ Global views & Use cluster of GPUs and \\ & large-format displays. \\ Source finding & Requires most attention. \\ & GPU-cluster approach beneficial?\\ Human inspection & Not feasible without \\ & high-reliability source finders. \\ Desktop visualisation & Use GPUs for computation \\ & and display. \\ Image slicing & Only practical for sub-cubes. \\ Quantitative analysis & Opportunities for automated \\ & and interactive fitting with GPUs. \\ Data management & Must not be ad hoc. Databases \\ & must be used wisely.\\ \hline \end{tabular} \end{table} \section{Concluding Remarks} Perhaps the biggest challenge to planning strategies for visualisation and analysis is that no ASKAP data exists yet. We do not know what the exact imaging properties of ASKAP will be.
Although simulated data cubes are now being generated, until the full ASKAP system undergoes commissioning we will not fully understand all of the calibration, noise, and interference issues that will arise with the relatively new technology of focal plane arrays. Testing source finders often includes injecting fake sources, with a given signal level, and then seeing how often they are recovered. With real WALLABY data cubes unavailable until 2014, progress in testing source finders will necessarily be limited. While we can do our best to plan source finders based on existing datasets, and early science data from the BETA configuration (September 2011--March 2013), we may find that our techniques do not work adequately for the full dataset. By considering the various V+A tasks now, and identifying approaches based on new hardware and software that were not available or feasible for earlier surveys, we can hope to minimise the impact of the ``unknown unknowns'' of ASKAP. Graphics processing units offer an intriguing solution to a number of the current desktop-bound problems. Table IV summarises our thoughts regarding the visualisation and analysis tasks that will require either an evolution of existing software and hardware, or a revolution in how they are approached. By planning today, we aim to maximise the scientific return from WALLABY tomorrow. \section*{Acknowledgment} The authors would like to thank B\"{a}rbel Koribalski (ATNF) and Russell Jurek (ATNF) for discussions relating to the WALLABY project, and John Taylor (CSIRO) and Tim Cornwell (ATNF) for enabling access to the CSIRO GPU cluster. \bibliographystyle{IEEEtran}
\section{Abstract} We demonstrate that halogenated methane two-dimensional Terahertz-Terahertz-Raman (2D-TTR) spectra are determined by the complicated structure of the instrument response function (IRF) along $f_1$ and by the molecular coherences along $f_2$. Experimental improvements, including accurate THz pulse shape characterization, have increased the resolution and dynamic range of the measurements. Sum-frequency excitations convolved with the IRF are found to quantitatively reproduce the 2D-TTR signal. A new Reduced Density Matrix (RDM) model, which incorporates sum-frequency pathways with linear and harmonic operators, fully supports this (re)interpretation of the 2D-TTR spectra. \section{Main text} Observing interactions within the low-frequency, thermally populated continuum of bath states is critical to developing a molecular understanding of liquid dynamics at room temperature. This energy regime is predominantly characterized by broad inter-molecular modes with short coherence times ($\sim$100s of fs) which complicate the measurement and interpretation of potential energy, dipole and polarizability surfaces. One exception to this general observation is the set of intra-molecular vibrational modes of the halogenated methane (HM) family of liquids, whose well-defined coherent vibrational signals have long been observed in optical Kerr effect (OKE) experiments.\citep{McMorrow1988, Cho1993} Multidimensional time-resolved spectroscopy methods seek to disentangle these ambiguous spectra by introducing an additional time delay which separates dynamics along a second axis.
The 5th order Raman technique\citep{tokmakoff1997} extends OKE to two dimensions and provides information on electrical and mechanical anharmonicities of the liquid, but practical implementation of this method is quite challenging.\citep{Blank1999, kubarych2002diffractive} A trio of 3$^{rd}$ order terahertz-Raman hybrid spectroscopies have been proposed as alternatives to 5$^{th}$ order Raman spectroscopy that avoid some of its technical challenges.\citep{Hattori2010,Ikeda2015,savolainen2013two-dimensional,shalit2017terahertz,hamm2017perspective} However, new challenges emerge in the hybrid techniques, especially compared to the more common 2D-infrared (2D-IR) spectroscopy. First, there are no commercially available dispersive THz spectrometers with adequate sensitivity to directly detect the emitted THz signal in THz-Raman-THz (2D-TRT) and Raman-THz-THz (2D-RTT) measurements. Instead, the 2D-TRT/RTT techniques have used time-domain electro-optic sampling to capture the faint THz emission.\citep{Ciardi2019} 2D-TTR avoids this step by using a Raman probe pulse which generates an easily detected near-IR signal photon. In all cases of hybrid THz-Raman spectroscopies, the poorly defined THz wave vector precludes a phase-matching box-CARS style geometry that could be used to discriminate between signals originating from different quantum mechanical coherence pathways. With 2D-TTR spectroscopy, complex spectra have been observed in several HM liquids, and were interpreted as signatures of coherent energy transfer pathways between intra-molecular vibrational modes.\citep{Allodi2015, Finneran2016c, finneran20172d, Magdau2019} A thorough re-investigation of two HM liquids -- bromoform (CHBr$_{3}$) and chloroform (CHCl$_{3}$) -- casts doubt on this original interpretation.
Our new investigation is enabled by the development of a single-shot 2D-TTR spectrometer\citep{Mead2019} which records tens of picoseconds of molecular dynamics in a single acquisition. This order-of-magnitude speed-up provides substantially higher signal-to-noise data, which has allowed a much larger region of the molecular response to be measured, and at finer resolution. We demonstrate through experiment, models, and theoretical simulations that the features observed in the HM 2D-TTR spectra arise from convolutions between the instrument response function (IRF) and linear interactions with the molecular polarizability operator $\mathit{\Pi}$. This interaction requires a scattering with two instantaneous THz photons, and is therefore referred to throughout the text as a sum-frequency (SF) excitation process. (Very recent experimental and theoretical works have also observed efficient phonon excitation through the same linear-$\mathit{\Pi}$ interaction with two THz field interactions.\citep{Maehrlein2017, Juraschek2018, Shishkov2019}) Resonant nonlinear interactions with the transition dipole operator $\mathit{M}$, while also in principle weakly allowed, are not detected. We begin by re-examining the relative importance of the $\mathit{M}$ (resonant) and $\mathit{\Pi}$ (sum-frequency) excitation pathways in HM vibrational modes. Ladder diagrams in Fig. \ref{fig:fig1} illustrate the two competing pathways as well as the OKE process, which is analogous to SF-TKE. In order to observe the desired nonlinear THz signal, the resonant pathway must have a magnitude larger than or (at least) comparable to that of the sum-frequency pathway. This is a difficult condition to satisfy in HMs, since the resonant process is nonlinear with respect to $\mathit{M}$ while the sum-frequency pathway is linear in $\mathit{\Pi}$.
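Because the SF interaction is effectively instantaneous, the amplitude it excites at inter-pulse delay $t_1$ simply tracks the overlap integral of the two THz field envelopes. A toy numerical sketch illustrates this; the Gaussian envelope shape and the 0.5 ps width are illustrative assumptions, not fitted experimental values:

```python
import numpy as np

t = np.linspace(-10, 10, 4001)           # time grid, ps
sigma = 0.5                              # assumed THz envelope width, ps
E = np.exp(-t**2 / (2 * sigma**2))       # pump envelope (both pulses alike)
dt = t[1] - t[0]

t1_delays = np.linspace(-5, 5, 201)
overlap = np.array([np.sum(E * np.exp(-(t - t1)**2 / (2 * sigma**2))) * dt
                    for t1 in t1_delays])
overlap /= overlap.max()

# The overlap is a Gaussian of width sqrt(2)*sigma in t1: it vanishes
# within ~2 ps, far shorter than multi-picosecond vibrational coherences.
print(f"overlap at t1 = 0 ps: {overlap[100]:.2f}")   # 1.00
idx = int(np.argmin(np.abs(t1_delays - 3.0)))
print(f"overlap at t1 = 3 ps: {overlap[idx]:.4f}")   # 0.0001
```

In this picture an SF-excited signal dies off along $t_1$ as soon as the two THz fields no longer overlap, whereas a resonant pathway would carry the coherence lifetime into $t_1$.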
\begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{Levels.png} \caption{A resonant TTR signal requires dipole nonlinearities to excite a vibrational coherence - a representative process is illustrated in the Res-TKE (resonant terahertz Kerr effect) ladder and Feynman diagrams. In contrast, a sum-frequency excited molecular coherence is produced through interactions linear in the polarizability operator. The virtual state in SF-TKE is short-lived, and therefore the signal is highly dependent upon overlap between the two pump pulses. The familiar optical Kerr effect (SF-OKE) illustrates fundamental similarities with SF-TKE.} \label{fig:fig1} \end{figure} Sum-frequency and resonant excitation pathways have distinct $t_1$ responses. A clear sign of resonant $M$ interactions is a prolonged vibrational response along $t_1$ which arises from the generation of a vibrational coherence during the first THz field interaction. From 2D-TTR measurements, molecular coherences extending in excess of 5 ps along $t_{2}$ have been observed in HMs, suggesting a resonant signal should have a commensurate lifetime along $t_{1}$. In contrast, sum-frequency excitation cannot directly generate coherent states through a single field interaction, but instead requires two instantaneous interactions. In this case, the extent of a molecular response along $t_1$ will be determined by the duration of temporal overlap of the two THz electric field waveforms. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{Fig2_ClBr.png} \caption{Top and bottom rows compare the experimental (Exp) bromoform and chloroform time-domain data to the SF-RDM models. The calculated IRF, whose THz electric fields are used as inputs to the SF-RDM model, is shown for reference (right column). Identical pulse shapes and IRFs are used for both SF-RDM models of bromoform and chloroform.} \label{fig:fig2} \end{figure*} In Fig.
\ref{fig:fig2}, time-domain bromoform and chloroform measurements recorded under identical experimental conditions are shown. The initial key observation is that while the $t_2$ response is long-lived, the response along $t_1$ never extends past the region of THz field overlap. The coherent vibrational responses of both bromoform and chloroform are therefore far more consistent with a SF excitation mechanism than a resonant process. The different bandwidth requirements of the two processes provide a second argument supporting SF excitation as the dominant pathway. In both SF and resonant 2D-TTR pathways, a vibrational mode must begin and end the measurement in a population state. In addition, the Raman probe interaction only changes the vibrational quanta by $\pm 1$. If a $\mathit{M}$ non-linearity is present, one of the THz field interactions must produce either a zero-quanta or two-quanta excitation, which in the latter case necessitates a bandwidth spanning $\geq 2 \omega$.\citep{Sidler2019} (No evidence for zero-quanta transitions in HM vibrational modes has been observed.) Again, this is quite different from 2D-IR spectroscopies. SF excitation, on the other hand, progresses with a bandwidth $\approx \omega/2$. The experimental THz field bandwidth spans 1-5 THz, and therefore the 2D-TTR experiment lacks the necessary frequency content to produce vibrational coherences arising from $\mathit{M}$ non-linearities in the molecular Hamiltonian. Given that sum-frequency excitation is the predominant source of the signals observed in 2D-TTR spectroscopy of HMs, a key mystery is how this mechanism, whose instantaneous nature precludes separating the two THz field interactions in time, can nonetheless produce a signal which varies along $t_1$. We interrogate the origins of this complex $t_1$ response by considering how the observed signal $S(t_1, t_2)$ depends upon the IRF $I(t_1, t_2)$ (Eq. \ref{eq:eq1}).
In a 2D-TTR experiment, two orthogonally polarized THz fields ($\vec{x}, \vec{y}$ in the laboratory frame) create a birefringent response within the room-temperature HM liquid sample. A $\vec{x}$ polarized Raman probe scatters off this birefringence, producing a $\vec{y}$ polarized signal field that is selectively isolated through an analyzing polarizer and differential chopping. The 2D-TTR signal is proportional to the anisotropic third-order molecular response function $R_{xyxy}^{(3)}(t_1, t_2)$, and contains information on the molecular orientational and vibrational correlation functions of the system. During the measurement process, this response is inevitably convolved with the experimental IRF, which in 2D-TTR is determined by the product of the two THz electric fields. \begin{equation} S(t_{1}, t_{2}) = I(t_{1}, t_{2}) \circledast R^{(3)}_{xyxy}(t_{1}, t_{2}) \label{eq:eq1} \end{equation} Through the convolution theorem, the time-domain convolution becomes a multiplication between the IRF spectral power and the HM molecular response function upon transformation to the frequency domain. \begin{equation} \tilde{S}(f_{1}, f_{2}) = \tilde{I}(f_{1}, f_{2}) \cdot \tilde{R}^{(3)}_{xyxy}(f_{1}, f_{2}) \label{eq:eq2} \end{equation} We study the impact of IRF convolution in two ways. First, we generate a time-domain model IRF (Fig. \ref{fig:fig2}) using model THz field profiles (Fig. \ref{fig:fig5}) that closely resemble experimental pulse shapes (see SI for details). An instantaneous SF process gives a molecular response which is a delta function in the time domain ($t_1$) and a flat response in the corresponding frequency domain ($f_1$). In 2D-TTR, this amounts to a flat response along the $f_1$ axis, and delta functions along the $f_2$ axis centered at the eigenmode frequencies of the molecular sample. Multiplication of this molecular response with the IRF yields the final measured signal.
This results in simply selecting a slice of the IRF along $f_1$ at the eigenmode frequency. Using this simple model we find excellent agreement with the experimental spectra. Second, we use the same THz field profiles that produce the IRF model as inputs to RDM simulations that consider sum-frequency excitation processes (SF-RDM). Again, we find near-quantitative agreement between the experimental data and theoretical simulations. Critically, no electrical or mechanical non-linearities are required to reach excellent agreement between the data and the IRF model/RDM simulations. The SF-RDM results precisely reproduce the experimental time-domain (Fig. \ref{fig:fig2}) and frequency-domain (Fig. \ref{fig:fig3}) responses, substantiating the claim that SF processes dominate the 2D-TTR response of HMs. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{Fig3_ClBr.png} \caption{Top and bottom rows compare the experimental (Exp) bromoform and chloroform frequency-domain data to the SF-RDM model. The calculated IRF is shown for reference with vertical lines indicating where each HM's intramolecular vibrational mode samples the IRF. Note that the experimental and SF-RDM spectra are well matched, and arise from sampling the same IRF at different $f_{2}$ frequencies.} \label{fig:fig3} \end{figure} The agreement between experiment, SF-RDM, and model IRF is shown in Fig. \ref{fig:fig4}. While bromoform and chloroform have different intra-molecular vibrational energies, we reproduce both spectra by slicing the same model IRF at their respective eigenmode frequencies along $f_2$. Crucially, this model mimics the non-specific sum-frequency excitation of vibrational coherences by the THz electric field. Unlike previous interpretations, here we do not invoke Feynman diagram pathways involving multi-quanta transitions between several vibrational modes; instead, the experimental IRF filtered through a single SF excitation pathway explains our observations.
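This slicing picture can be illustrated with a short numerical sketch. The grid, pulse parameters, and eigenmode damping below are illustrative choices, not the experimental fields: because an instantaneous SF response is delta-like along $t_1$, its transform is flat along $f_1$, so after multiplication by $\tilde{I}$ the normalized $f_1$ lineshape of the signal at the eigenmode frequency is exactly the corresponding slice of the IRF.

```python
import numpy as np

# Illustrative grid: 256 points, 50 fs steps (12.8 ps window)
n, dt = 256, 0.05
t = np.arange(n) * dt

# Hypothetical single-cycle-like THz fields (not the fitted experimental pulses)
E1 = np.exp(-((t - 2.0) / 0.4) ** 2) * np.cos(2 * np.pi * 2.0 * (t - 2.0))
E2 = np.exp(-((t - 2.0) / 0.5) ** 2) * np.cos(2 * np.pi * 1.5 * (t - 2.0))

# IRF on the (t1, t2) grid: I(t1, t2) = E1(t1 + t2) * E2(t2)
idx = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n
I_t = E1[idx] * E2[None, :]

# Instantaneous SF molecular response: delta function in t1,
# damped cosine at the eigenmode frequency f0 in t2
f0 = 4.7  # THz, e.g. bromoform's 4.7 THz mode
R_t = np.zeros((n, n))
R_t[0, :] = np.cos(2 * np.pi * f0 * t) * np.exp(-t / 5.0)

# Convolution theorem: the measured spectrum is the product of the transforms
I_f = np.fft.fft2(I_t)
S_f = I_f * np.fft.fft2(R_t)

# Because R is delta-like in t1, its transform is flat along f1, so the
# normalized f1 slice of S at f2 = f0 coincides with the same slice of the IRF
k2 = int(round(f0 * n * dt))  # frequency bin closest to f0
slice_S = np.abs(S_f[:, k2]) / np.abs(S_f[:, k2]).max()
slice_I = np.abs(I_f[:, k2]) / np.abs(I_f[:, k2]).max()
```

The two normalized slices agree identically, which is the content of the IRF-slicing model: changing the eigenmode frequency (e.g. to chloroform's 7.8 THz mode) only changes which $f_2$ column of the same IRF is sampled.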
\begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{comp_IRF.png} \caption{(a) Slices along $f_1$ at $f_2$=eigenmode of the IRF/RDM model and experimental response demonstrate the quality of fit for the bromoform data. Chloroform (b) is reproduced by slicing along the same IRF/SF-RDM model as shown in (a) at $f_2$=7.8 THz, instead of at bromoform's $f_2$=4.7 THz.} \label{fig:fig4} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{comp_pulses_2.png} \caption{Comparison of the experimental and model THz pulse shapes (a,b) and corresponding bandwidths (c,d). See the SI for more detail on the optimization process used to obtain the model pulse shapes.} \label{fig:fig5} \end{figure} That a two-quanta sum-frequency excitation pathway is predominantly responsible for vibrational coherences observed in 2D-TTR is also supported by estimates of chloroform and bromoform's transition dipole moments. From THz-TDS measurements and FT-IR literature, the molar extinction coefficients of the bromoform and chloroform $E$ modes are $\epsilon \approx$1 M$^{-1}$cm$^{-1}$, suggestive of a vanishingly small transition dipole moment. Ab initio calculations similarly arrive at transition dipole elements in the few milliDebye range. As a result, it would be very difficult to observe resonant excitation of these HM modes, even in the absence of interfering SF pathways. For comparison, 2D-IR spectroscopy on proteins is often performed by resonantly pumping the amide I stretch at $\sim50$ THz, which has $\epsilon\sim$200-400 M$^{-1}$cm$^{-1}$. Not only are these oscillators intrinsically orders of magnitude stronger than HM vibrational modes, but the IR excitation field's $\delta\omega/\omega$ is also substantially narrower, which helps to selectively and resonantly generate the desired coherences while suppressing any SF contribution.
Common sources of high intensity, sub-ps THz pulses (organic emitters, LiNbO$_3$, etc.) all have $\delta\omega \approx \omega$ and thus both Res and SF pathways must be considered when analyzing responses in the overlapping pump field region. Finally, we would like to note that our conclusions regarding the excitation mechanism of intra-molecular vibrational modes of HMs in 2D-TTR spectroscopy likely do not alter analyses performed on similar systems in the complementary 2D-TRT and 2D-RTT experiments.\citep{Ciardi2019} Those measurements attributed spectral features that remained post-deconvolution to couplings between a Raman-excited vibrational coherence and a resonant one-quanta interaction with bath modes of the liquid. Our conclusions are consistent with their observation that the IRF strongly determines the observed multi-dimensional experimental response. In this work we provide extensive new experimental and theoretical data that leads to a simple reinterpretation of previous 2D-TTR measurements of HMs. With this new analysis, we explain the entire spectrum of both bromoform and chloroform through a convolution of the experiment's THz fields with the molecules' intra-molecular vibrational modes. No coherence pathways outside of the SF-TKE process in Fig. \ref{fig:fig1} are required. The new analysis is also fully consistent with the observed magnitudes of transition dipole moments and molecular polarizabilities. Moving forward, there are two key lessons. First, large transition dipoles are crucial for performing truly resonant 2D-TTR experiments. Halogenated methanes unfortunately do not satisfy this requirement, and the bright, complex signals observed can easily be mis-attributed to resonant processes. Second, nearly transform-limited half-cycle THz fields would maximize the field strengths achievable, greatly simplify the experimental IRF structure, and reduce ambiguities in the analysis of dynamics in molecular systems.
\section*{2D-TTR Single shot spectrometer} Greater temporal resolution and sensitivity compared to previous research were achieved by developing a single-shot 2D-TTR spectrometer. This technique eliminates scanning of the probe delay line, producing substantial reductions in experimental acquisition time. With the imaging and echelon combination used for this work, each camera acquisition captures 30 ps of molecular response along $t_{2}$ at 28 fs resolution. Equivalent 28 fs steps are recorded along $t_{1}$, while the overall temporal extent of a typical data set spans 30$\times$[5-10] ps ($t_{2} \times t_{1}$). This represents $\geq10\times$ greater temporal extent compared to previous stage-scan 2D-TTR measurements\citep{finneran20172d}. Typical acquisition times ranged from a few hours to overnight (for comparison, previous stage-scan experiments required 48 hrs of averaging). We briefly discuss the overall experimental setup (Fig. \ref{fig:SI_INSTRUMENT}), and then provide specific details on the single-shot spectrometer's construction. A 1 kHz Coherent Legend HE regenerative amplifier, seeded with an 80 MHz Coherent Micra oscillator, is used to pump a Light Conversion TOPAS-C optical parametric amplifier (input 3.2 mJ, $\sim$30\% conversion efficiency signal+idler). The 500 $\mu$J 1.4 $\mu$m signal output is divided with a 50:50 beam-splitter before illuminating two 6 mm clear aperture DAST THz emitters (4-N,N-dimethylamino-4$'$-N$'$-methyl-stilbazolium tosylate, Swiss Terahertz). One OPA pump line is passed over a mechanical stepping delay stage, which controls the relative time delay between the two pump fields. Additionally, the polarization of one of the 1.4 $\mu$m pump lines is rotated 90 degrees so that the THz emission from the two sources can be recombined and collimated with a wire-grid polarizer (WGP).
After recombination, the THz light is filtered with 4 QMC 18 THz low-pass filters (2 after each crystal) to remove any residual optical bleed-through. The THz light is collected, collimated, and focused with a series of off-axis parabolic (OAP) mirrors before being focused onto the sample. The sample is held in a 1 mm path length Suprasil cuvette with a 4.6 mm diameter hole drilled in the front face. Over this hole a 5$\times$5 mm clear aperture, 1 $\mu$m thick silicon nitride membrane window (Norcada, QX10500F) is epoxied. This thin membrane minimizes dispersion differences between the THz pump and optical probe fields and also reduces the window response (previous works used 1 mm thick quartz or 300 $\mu$m thick diamond windows). \begin{figure} \centering \includegraphics[width=1.0\textwidth]{2D_instrument_2.png} \caption{The optical path of the 2D-TTR single-shot spectrometer. Seen in an inset are the polarizations of the two THz pump and 800 nm probe fields, and the experimental time definitions.} \label{fig:SI_INSTRUMENT} \end{figure} The probe line beam conditioning for single-shot acquisition is unchanged from a previous study.\citep{Mead2019} Signal photons are detected on an Andor Zyla 5.5 MP sCMOS camera using a pair of crossed 800 nm polarizers with a 10,000:1 extinction ratio (Thorlabs LPVIS050-MP2). Because 2D-TTR uses two orthogonally polarized THz pump fields ($\vec{x}$,$\vec{y}$ polarized in lab coordinates) and crossed probe and signal polarizations ($\vec{x}$,$\vec{y}$ in lab coordinates), the detected signal is sensitive to the anisotropic third order molecular response function $R^{(3)}_{xyxy}(t_{1}, t_{2})$. All data and experimental parameters were controlled from a MATLAB GUI purpose-built for the experiments. Within the GUI, the number of acquisitions per average and the number of averages are specified by the user.
To capture a data set at a given $t_{1}$ time, typical acquisition settings for bromoform are to collect 5 averages with 2,000 shots per average, for a total acquisition time of $\sim$12 seconds, which includes 2 seconds of camera and computational overhead and the 10 seconds of data acquisition. The data set for each average is first collected as a 2000 $\times$ 1280 array, with dimensions specified by the number of laser shots captured (e.g. 2000) and the 1280 horizontal binned output pixels from the camera that capture the time-domain molecular response along $t_{2}$ at the fixed $t_{1}$ time. Next, we extract the signal from this data array by performing a FFT on each of the 1280 rows of data. Each row corresponds to a single point in time $t_2$ sampled by the probe pulse. These FFTs must be performed carefully to preserve the phase relationship of the two THz fields that produce the signal. To do so, we collect several other pieces of data while each average is acquired. First, outputs of the two optical choppers that modulate the THz pump fields are recorded on a DAQ card. Second, the camera output signal which confirms the start of a series acquisition is digitized by the same DAQ. From these three trigger signals (chopper A phase, chopper B phase, camera acquisition start), we can measure the signal phase of each average and correct for the phase offset. Doing so ensures that every average at every $t_{1}$ time is collected with a single consistent phase, which preserves the overall phase of the 2D signal. The FFTs are performed by first multiplying each of the 1280 rows of data (each with length 2000 in this example) with an equal length sinusoidal waveform with the measured phase offset and a frequency equal to the difference between the two chopper frequencies (i.e. differential chopping). Multiplication shifts the desired differential signal to the DC position in the frequency domain FFT output.
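The demodulation step just described can be sketched as follows; the array sizes, difference frequency, and phase below are illustrative, and the synthetic data stand in for real camera rows.

```python
import numpy as np

def demodulate(shots, f_diff, phase):
    """Software lock-in: multiply each pixel's shot sequence by a reference
    sinusoid at the chopper difference frequency and keep the DC component.

    shots  : (n_shots, n_pixels) array of binned camera rows
    f_diff : difference of the two chopper frequencies, in cycles per shot
    phase  : phase offset recovered from the chopper/camera trigger signals
    """
    n_shots = shots.shape[0]
    ref = 2.0 * np.cos(2 * np.pi * f_diff * np.arange(n_shots) + phase)
    # Multiplication shifts the differential signal to DC; the mean is the DC bin
    return (shots * ref[:, None]).mean(axis=0)

# Synthetic acquisition: a known differential amplitude per pixel (playing the
# role of the t2-dependent molecular response) riding on a static background
n_shots, n_pixels, f_diff, phi = 2000, 1280, 0.125, 0.3
amplitude = np.linspace(0.0, 1.0, n_pixels)
carrier = np.cos(2 * np.pi * f_diff * np.arange(n_shots) + phi)
data = amplitude[None, :] * carrier[:, None] + 5.0

recovered = demodulate(data, f_diff, phase=phi)  # recovers `amplitude`
```

Applying the reference with the wrong phase or frequency would leak the static background into the output, which is why the trigger-derived phase correction described above is essential.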
The DC response from all 1280 channels is then saved as a 1$\times$1280 data array for each average, and these arrays are co-added until all averages are collected. In essence, this approach creates a software equivalent to the standard digital lock-in amplifier typically used for collection of single channel spectroscopic data. \section*{2D-TTR Sensitivity to IRF} We provide an additional example of the sensitivity of bromoform's 2D-TTR spectrum to the instrument response function, which was first noticed as a result of changing the infrared wavelengths that pump the two THz emitters. In initial 2D-TTR experiments, the two orthogonally polarized signal (1.4 $\mu$m) and idler (1.8 $\mu$m) outputs of the TOPAS-C OPA were each used to pump one of the DAST crystals. However, we observed that the majority of the idler line's IR power was concentrated in a small $\sim$1 mm diameter point within the larger 8 mm diameter beam. As a result, this concentrated point of power was burning the DAST crystal face. To prevent damage to the emitter crystal, we split the signal beam into two halves, rotated one half's polarization to match the idler polarization, and recorded new 2D-TTR data of bromoform. Dramatic differences in bromoform's time (Fig. \ref{fig:BrBr_Time}) and frequency (Fig. \ref{fig:BrBr_Freq}) domain response were observed, drawing our attention to the importance of the IRF in 2D-TTR. SF-RDM analysis of the original 1.4/1.8 $\mu$m bromoform data yielded similarly excellent agreement between experiment and simulation. \begin{figure} \centering \includegraphics[width=1.0\textwidth]{Fig2_BrBr.png} \caption{Top and bottom rows compare the experimental (Exp) bromoform time-domain data under two different THz pumping regimes. Slight changes to the emitted THz fields result in different time-domain IRFs.
SF-RDM models using the THz electric fields that produce each IRF fully reproduce the experimental data.} \label{fig:BrBr_Time} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{Fig3_BrBr.png} \caption{Top and bottom rows compare the experimental (Exp) bromoform frequency-domain data under two different THz pumping regimes. Slight changes to the emitted THz fields result in different frequency-domain IRFs. SF-RDM models with the THz electric fields that produce each IRF reproduce the experimental spectra.} \label{fig:BrBr_Freq} \end{figure} \section*{Halogenated methane purification} Bromoform was purchased from Sigma Aldrich. As received, the liquid had an orange hue, indicating some degradation and contamination from water. A simple distillation under nitrogen at 150 degrees Celsius was performed to purify the bromoform liquid. NMR analysis indicated complete removal of water. The purified sample was stored in a round-bottom flask with dry sodium sulfate, under argon in a dark refrigerator. Chloroform was purchased from Sigma Aldrich and used as received. \section*{TKE Response of HMs} Terahertz Kerr effect measurements were taken of bromoform and chloroform, and the 2D-TTR response of diamond was measured to gauge the bandwidth of the THz pump fields.\citep{finneran20172d} As shown in Fig. \ref{fig:SI_BANDWIDTH}, the THz electric field bandwidth is centered at 4 THz. Bromoform's two Raman-active vibrational modes are observed at 4.7 and 6.6 THz, while the Raman-active vibrational mode in chloroform was observed at 7.8 THz. Note that the orientational (low-frequency) response of both liquids was removed prior to taking the FFT, and only the higher frequency spectral content arising from intra-molecular vibrations is seen in Fig. \ref{fig:SI_BANDWIDTH}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{SI_TKE.png} \caption{Bromoform and chloroform intramolecular vibrational responses are detected in 1D-TKE measurements.
The THz bandwidth of the DAST emission as measured in diamond is shown in blue shading.} \label{fig:SI_BANDWIDTH} \end{figure} \subsection*{THz field modeling} The frequency domain IRF slices at the eigenmode frequencies for halogenated methanes in this study are highly sensitive to small changes in the THz pump pulse shapes. This is especially true for bromoform, whose eigenmode frequency intersects a region of very low spectral power in the IRF. Near-perfect fits to the experimental TTR spectra of HMs in both the time and frequency domains were obtained by using model THz pulse shapes, which closely resemble the experimental pulse shapes measured in diamond. We believe the use of these model THz pulse shapes is adequate here considering the chromatic nature of the THz focus and the S/N limitations of the diamond TTR spectra (due to the inherently small $\chi^{(3)}$ constant). The model THz pulse shapes were obtained with the following process. First, the bandwidths of the two THz pulses were optimized to fit both experimental HM TTR slices at the eigenmode frequencies of bromoform and chloroform. To avoid overfitting, a simple asymmetric Gaussian functional form was assumed for the bandwidth of the THz pulses to minimize the number of parameters. The time domain IRF is calculated as the product of the THz pulse shapes $$ I(t_1, t_2) = E_1(t_1+t_2)E_2(t_2) $$ and the Fourier transform of the IRF $\Tilde{I}(f_1, f_2)$ is $$ \Tilde{I}(f_1, f_2) = \Tilde{E_1}(f_1)\Tilde{E_2}(f_2-f_1)$$ where $\Tilde{E_1}$ and $\Tilde{E_2}$ are the bandwidths of the THz pulses. The frequency domain optimization yields only the amplitude spectra, and phase spectra are still required to uniquely determine the time-domain representation. The hybrid input-output phase retrieval algorithm was used with the experimentally measured THz pulse shapes as targets. Convergence was generally obtained within 100 iterations.
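The time-domain/frequency-domain IRF pair above can be checked numerically; on a periodic grid the identity holds exactly with circular indexing. The Gaussian-windowed pulse shapes below are arbitrary stand-ins, not the fitted experimental fields.

```python
import numpy as np

n = 128
t = np.arange(n, dtype=float)

# Arbitrary smooth pulse shapes (illustrative, not the fitted experimental fields)
E1 = np.exp(-((t - 20.0) / 4.0) ** 2) * np.cos(0.8 * (t - 20.0))
E2 = np.exp(-((t - 20.0) / 6.0) ** 2) * np.cos(0.6 * (t - 20.0))

# Time-domain IRF: I(t1, t2) = E1(t1 + t2) * E2(t2), circular on the grid
idx = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n
I_t = E1[idx] * E2[None, :]

# Claimed frequency-domain form: I~(f1, f2) = E1~(f1) * E2~(f2 - f1)
E1f, E2f = np.fft.fft(E1), np.fft.fft(E2)
shift = (np.arange(n)[None, :] - np.arange(n)[:, None]) % n  # (f2 - f1) mod n
I_f_pred = E1f[:, None] * E2f[shift]

# np.fft.fft2(I_t) and I_f_pred agree to machine precision
```

The skew between the $f_1$ and $f_2$ arguments is why the IRF spectral power lies along diagonal stripes, and why a given eigenmode frequency samples a highly structured $f_1$ slice.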
The resulting model THz pulses accurately reproduce the observed time and frequency domain results for both HMs, as shown in the main text. THz pulse shape measurements in general underestimate the available power at higher THz frequencies due to velocity mismatch between the probe (800 nm) and THz pulses, which further supports the increased power above 4 THz for the model pulse shapes. \subsection*{SF-RDM model} Sum-frequency pathways were not considered in our previous RDM model Hamiltonian \citep{Finneran2016c,finneran20172d,Magdau2019}: \begin{equation} H(t;t_1) = H_0 - M \cdot \left[ E_2(t-t_1) + E_1(t) \right] \end{equation} A new Hamiltonian was constructed here, which accounts for the SF process: \begin{equation} H(t;t_1) = H_0 - \mathit{\Pi} \cdot \left[ E_2(t-t_1) + E_1(t) \right]^2 \end{equation} Experimentally, differential chopping of the two THz fields automatically removes the single pulse contributions $\mathit{\Pi} \cdot E_2^2(t-t_1)$ and $\mathit{\Pi} \cdot E_1^2(t)$, so we are left with an effective Hamiltonian of the form: \begin{equation} H(t;t_1) = H_0 - \mathit{\Pi} \cdot E_2(t-t_1)E_1(t) \end{equation} The time response is computed as described in our previous work\citep{Magdau2019}, but now it corresponds to an SF signal: \begin{equation} S(t_2;t_1) = S_{SF}(t_2;t_1) = Tr(\mathit{\Pi}\cdot\rho(t_2;t_1)) \end{equation} The complete signal $S$ is obtained when $E_1$ and $E_2$ are replaced with the fitted pulse shapes, while the molecular response $R$ is obtained when $E_1$ and $E_2$ are simple $\delta$-functions. All operators are kept linear and harmonic.
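A minimal sketch of this propagation for a single harmonic mode is given below; the level count, mode frequency, field shapes, and time step are illustrative rather than the fitted model parameters. It reproduces the qualitative SF behavior discussed in the main text: a coherence, read out as $Tr(\mathit{\Pi}\cdot\rho)$, is generated only while the two THz fields overlap.

```python
import numpy as np

# Truncated harmonic ladder for one intramolecular mode (illustrative parameters)
nlev = 6
a = np.diag(np.sqrt(np.arange(1, nlev)), 1)   # annihilation operator
w0 = 2 * np.pi * 4.7                          # mode frequency in rad/ps (~4.7 THz)
H0 = w0 * np.diag(np.arange(nlev, dtype=float))
Pi = a + a.T                                  # polarizability operator, kept linear

def thz_field(t, t0):
    # Illustrative few-cycle THz field centered at t0 (ps)
    return np.exp(-((t - t0) / 0.15) ** 2) * np.cos(2 * np.pi * 2.0 * (t - t0))

def sf_rdm_signal(t1, tmax=6.0, dt=1e-3):
    """Propagate drho/dt = -i[H(t), rho] with the effective SF Hamiltonian
    H(t; t1) = H0 - Pi * E2(t - t1) * E1(t), returning S(t2; t1) = Tr(Pi rho)."""
    rho = np.zeros((nlev, nlev), dtype=complex)
    rho[0, 0] = 1.0                           # start in the ground state
    ts = np.arange(0.0, tmax, dt)
    out = np.empty(ts.size)
    for i, t in enumerate(ts):
        H = H0 - Pi * thz_field(t, 1.0) * thz_field(t, 1.0 + t1)
        w, v = np.linalg.eigh(H)              # exact short-time unitary step
        U = (v * np.exp(-1j * w * dt)) @ v.T
        rho = U @ rho @ U.conj().T
        out[i] = np.real(np.trace(Pi @ rho))
    return out

s_overlap = sf_rdm_signal(t1=0.0)   # pump fields overlapped: coherence generated
s_apart = sf_rdm_signal(t1=3.0)     # fields well separated: E1*E2 ~ 0, no signal
```

Because the interaction term is proportional to the instantaneous product $E_2(t-t_1)E_1(t)$, the computed response vanishes once the pulses no longer overlap in time, mirroring the short $t_1$ extent of the experimental data.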
\section{Introduction} Recently, there has been much interest in generating numerical constraints on conformal field theories using the conformal bootstrap \cite{Rattazzi:2008pe,Rychkov:2009ij,Caracciolo:2009bx, Poland:2010wg,Rattazzi:2010gj,Rattazzi:2010yc, Poland2011,Vichi:2011ux,ElShowk:2012hu,ElShowk:2012ht,Liendo:2012hy,Beem:2013qxa,Kos:2013tga,El-Showk:2013nia,Alday:2013opa,Gaiotto:2013nva}. In these works the existence of a scalar primary operator $\phi$ of dimension $\Delta$ is assumed. Using the conformal bootstrap on the 4-point function of $\phi$, it is then possible to generate numerical bounds on operator dimensions and OPE coefficients of operators in the $\phi\times\phi$ OPE as a function of $\Delta$. The crucial ingredient which allows us to generate these bounds is the knowledge of all the scalar conformal blocks \cite{Dolan:2000ut,Dolan:2003hv}, which encode the dependence of the $4$-point function of $\phi$ on each conformal family in the $\phi\times\phi$ OPE\footnote{More precisely, for doing numerics it is sufficient to have a systematic way to approximate the conformal blocks. For scalar 4-point functions this can be done efficiently in any dimension \cite{ElShowk:2012ht, Kos:2013tga, Hogervorst:2013kva}.}. It would be interesting to apply these methods without introducing any assumption on the operator spectrum. For instance, one would like to analyze the four-point function of the stress-tensor, which exists for any CFT. More generally, assuming the CFT has some global symmetry one would like to understand the constraints of conformal invariance arising from application of the conformal bootstrap to the $4$-point function of the global symmetry current. Unfortunately, to this date there are no closed form expressions for the conformal blocks of non-scalar operators (see however \cite{Costa:2011dw,SimmonsDuffin:2012uy}), so these interesting directions cannot be pursued in a straightforward way yet.
However, in supersymmetric theories the situation is better since in some cases, symmetry currents reside in multiplets whose superconformal-primary (sprimary) is a scalar field. For instance in $\ensuremath{\mathcal{N}}=4$ supersymmetric Yang-Mills the energy-momentum tensor resides in a multiplet whose sprimary is a scalar in the $\mathbf{20'}$ representation of the $SU(4)_R$ $R$-symmetry group. The bootstrap constraints for this case were recently analyzed in \cite{Beem:2013qxa}. Similarly, in any four-dimensional $\ensuremath{\mathcal{N}}=1$ superconformal theory, a global symmetry current $j^a_{\mu}$ resides in a real multiplet \begin{align} \ensuremath{\mathcal{J}}^a(z) = \ensuremath{\mathcal{J}}^a(x,\theta,\bar{\theta}) = J^a(x) + i\theta j^a(x) - i\bar{\theta}\bar{j}^a(x) - \theta\sigma^{\mu}\bar{\theta} j^a_{\mu}(x) + \cdots\,,\label{susyJ} \end{align} which satisfies $D^2\ensuremath{\mathcal{J}}^a=\bar{D}^2\ensuremath{\mathcal{J}}^a=0$, and the omitted terms in the equation above are determined by this constraint. The sprimary $J^a$ is a dimension two real scalar field in the adjoint representation of the symmetry group. In this paper we will use the conformal bootstrap to constrain four dimensional $\ensuremath{\mathcal{N}}=1$ superconformal theories with an $SU(N)$ global symmetry\footnote{The generalization to other symmetry groups is straightforward.}. In particular, we will place lower bounds on the current ``central charge'' $\tau$ defined as the coefficient of the current 2-point function\footnote{Similar bounds were generated in \cite{Poland:2010wg, Poland2011}, assuming the existence of a charged scalar primary. Here we assume supersymmetry instead.} \begin{align} \langle j_{\mu}^a(x) j_{\nu}^b(0) \rangle = 3 \tau \, \delta^{ab} \, \frac{x^2\delta_{\mu\nu}-2x_{\mu}x_{\nu}}{4\pi^4x^8} \,. \label{jj} \end{align} The decomposition of the 4-point function of $J^a$ into conformal blocks is constrained by supersymmetry. 
In particular, the OPE coefficients in $J^a\times J^b$ of different primary operators in a super-multiplet are not independent and the corresponding conformal blocks are re-packaged into the so-called superconformal blocks. These constraints were already analyzed in detail in \cite{Fortin:2011nq} for the $U(1)$ case, and we will generalize those results to the non-Abelian case. In addition, we find new operators which generally appear in the OPE but were not found in \cite{Fortin:2011nq}. The form of the bounds we find is $\tau > f(N)$. Qualitatively, the existence of a lower bound means there is a minimal amount of ``charged stuff'' which must exist in any such theory. A free chiral superfield has, in our normalization, $\tau=1$. We do not know of any theory with $\tau<1$ and it would be very interesting to understand whether those exist, or alternatively to prove that $\tau\ge 1$ in general. The paper is organized as follows. In section \ref{prelim} we briefly review the conformal bootstrap and set up our conventions. In addition, we determine the sum-rules which result from applying crossing-symmetry to the 4-point function of a scalar primary in the adjoint representation of $SU(N)$. In section \ref{SUSY} we discuss the constraints imposed by $\ensuremath{\mathcal{N}}=1$ superconformal invariance on the $J^a\times J^b$ OPE and superconformal blocks. In section \ref{results} we present the lower bounds we obtained on $\tau$ and a short discussion. \section{Preliminaries} \label{prelim} \subsection{Conformal Bootstrap} In this section we spell out our normalization conventions and briefly summarize the conformal bootstrap constraint for a general CFT. The reader is referred to \cite{Rattazzi:2008pe} for a more extensive treatment.
Consider a general CFT in four Euclidean dimensions, and in particular the subset of operators consisting of spin-$\l$ primary operators $\ensuremath{\mathcal{O}}_I^{(\l)} \equiv \ensuremath{\mathcal{O}}_I^{\mu_1\cdots\mu_\l}$, which are symmetric-traceless rank-$\l$ tensors (i.e. in the $(\l/2,\l/2)$ representation of the Lorentz group $SO(4)$). The index $I$ labels the primary operators in the CFT, and we will denote the complex conjugate operator by a barred index $\bar{\ensuremath{\mathcal{O}}}_{\bar{J}}\equiv(\ensuremath{\mathcal{O}}_J)^{\dag}$. We set the normalization of such operators by demanding that their $2$-point function is of the form \begin{align} \langle \ensuremath{\mathcal{O}}_I^{\mu_1\cdots\mu_\l}(x_1) \bar{\ensuremath{\mathcal{O}}}_{\bar{J}}^{\nu_1\cdots\nu_\l}(x_2)\rangle &= \delta_{I\bar{J}} \frac{I^{\mu_1\nu_1}(x_{12})\cdots I^{\mu_\l\nu_\l}(x_{12})}{x_{12}^{2\Delta_I}} \,, \label{2pnt} \\ I^{\mu\nu}(x) &\equiv \delta^{\mu\nu} - 2\frac{x^{\mu}x^{\nu}}{x^2} \,, \end{align} where on the RHS the indices $(\mu_1,\ldots,\mu_\l)$ and $(\nu_1,\ldots,\nu_\l)$ should be symmetrized with the traces removed, and $\Delta_I$ denotes the dimension of $\ensuremath{\mathcal{O}}_I^{(\l)}$. The $3$-point function of a spin-$\l$ primary with two scalar primaries $\phi_a$, $\phi_b$ of equal dimension $\Delta_0$ is \begin{align} \langle \phi_a(x_1) \phi_b(x_2) \bar{\ensuremath{\mathcal{O}}}_{\bar{I}}^{\mu_1\cdots\mu_\l} \rangle &= \lambda_{ab,I} \frac{Z^{\mu_1}\cdots Z^{\mu_\l}}{x_{12}^{2\Delta_0 - \Delta_I + \l} x_{23}^{\Delta_I-\l}x_{31}^{\Delta_I-\l}} \,, \label{3pnt} \\ Z^{\mu} &\equiv \frac{x_{31}^{\mu}}{x_{31}^2} - \frac{x_{32,}^{\mu}}{x_{32}^2} \,, \end{align} where again the Lorentz indices on the RHS should be symmetrized with the traces removed, and $a,b$ are arbitrary labels. 
The information on the $2$-point and $3$-point functions \eqref{2pnt}, \eqref{3pnt} is contained in the $\phi_a\times\phi_b$ OPE (suppressing Lorentz indices for simplicity), \begin{align} \phi_a(x)\phi_b(0) = \frac{\delta_{ab}}{x^{2\Delta_0}}\mathds{1} \,\,+\!\! \sum_{\ensuremath{\mathcal{O}}_I=\text{primary}} \lambda_{ab,I} C_{\ensuremath{\mathcal{O}}}(x,\partial)\ensuremath{\mathcal{O}}_I \,. \end{align} In the above equation the identity operator $\mathds{1}$ contains the information on the $2$-point function and the sum over primaries encodes all the information on $3$-point functions. The operator $C_{\ensuremath{\mathcal{O}}}(x,\partial)$ is entirely determined by conformal symmetry, and encodes the contributions of all the descendants of $\ensuremath{\mathcal{O}}_I$ to the OPE. In a unitary theory, if the scalars $\phi_a$ are real but $\ensuremath{\mathcal{O}}_I^{(\l)}$ are complex, then the OPE coefficients are generally complex and satisfy $\lambda_{ab,\bar{I}} = (\lambda_{ab,I})^*$. If we choose a real basis of operators, then the OPE coefficients $\lambda_{ab,I}$ must be real, $\lambda_{ab,\bar{I}} = \lambda_{ab,I}$. The $3$-point function is non-zero only for $\ensuremath{\mathcal{O}}_I$ of integer spin, and (odd) even spins correspond to the (anti-)symmetric combination of $\phi_a$ and $\phi_b$ (i.e. $\lambda_{ab,I}$ is \mbox{(anti-)symmetric} in $a,b$ for (odd) even spins). The crossing-symmetry constraints for the 4-point function $\langle \phi_{a}(x_1)\phi_{b}(x_2)\phi_{c}(x_3)\phi_{d}(x_4)\rangle$ are obtained by using the OPE in the $(12)(34)$ (``$s$-channel'') and $(14)(23)$ (``$t$-channel'') channels and equating the results, \begin{align} u^{-\Delta_0} \cdot \!\!\!\! \sum_{\ensuremath{\mathcal{O}}_I \in \phi_{a}\times\phi_{b}} \frac{\lambda_{a b, I}\lambda_{c d, \bar{I}}}{(-2)^{\l_I}} g_{\Delta_I,\l_I}(u,v) = v^{-\Delta_0} \cdot \!\!\!\!
\sum_{\ensuremath{\mathcal{O}}_I \in \phi_{a}\times\phi_{d}} \frac{\lambda_{a d, I}\lambda_{b c, \bar{I}}}{(-2)^{\l_I}} g_{\Delta_I,\l_I}(v,u) \,, \label{sumrule} \end{align} where $g_{\Delta,\l}$ are the scalar conformal blocks \cite{Dolan:2000ut,Dolan:2003hv}, \begin{align} g_{\Delta, \l}(u, v)&=\frac{z\bar z}{z-\bar z}\left( k_{\Delta +\l}(z)k_{\Delta -\l -2}(\bar z)-(z\leftrightarrow \bar z)\right) \label{cb} \,,\\ k_{\beta} (x)&\equiv x^{\beta /2}{}_2F_{1}(\beta /2, \beta/2, \beta ; x) \,, \end{align} and \begin{align} u = \frac{x_{12}^2 x_{34}^2}{x_{13}^2 x_{24}^2} = z\bar{z} \ec\quad v = \frac{x_{14}^2 x_{23}^2}{x_{13}^2 x_{24}^2} = (1-z)(1-\bar{z}) \label{uvCR}\,, \end{align} are the two conformal cross-ratios. In \eqref{sumrule} the summation is over all primary operators in the $\phi\times\phi$ OPE\footnote{If an operator is complex then its complex conjugate should also be included in the sum as an independent primary operator.}. \subsection{Bootstrap for Scalars in the Adjoint of $SU(N)$} \label{adjScalar} In this section we discuss a specific case of the general bootstrap constraint \eqref{sumrule} in which $\phi_a$ is a real scalar primary in the adjoint representation\footnote{The indices $a,b,\ldots,$ label the adjoint representation of $SU(N)$ and $i,j,\ldots,$ are (anti-)fundamental indices. We will sometimes find it more convenient to work in the fundamental basis with $\phi^m_{~i}\equiv \phi^a(T^a)^m_{~i}$, where $(T^a)^m_{~i}$ is a generator in the fundamental of $SU(N)$. Our normalization convention is $(T^a)^m_{~i}(T^a)^n_{~j} = \delta_i^n\delta_j^m - \frac{1}{N}\delta_i^m\delta_j^n$. The structure constants are $i f^{abc} = \text{tr}\left([T^a,T^b]T^c\right)$ and $d^{abc} = \text{tr}\left(\{T^a,T^b\}T^c\right)$.} of $SU(N)$. We will later apply the results of this section to the case in which this scalar is the top of the current multiplet in $\ensuremath{\mathcal{N}}=1$ theories.
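As a quick numerical sanity check of the blocks \eqref{cb} (an illustrative sketch of ours, not part of the derivation; the helper names \texttt{k} and \texttt{block} are invented for this snippet), one can evaluate $g_{\Delta,\l}$ through its hypergeometric representation and confirm the expected leading OPE behavior $g_{\Delta,0}\to u^{\Delta/2}$ as $u\to 0$, together with the symmetry under $z\leftrightarrow\bar{z}$:

```python
# Illustrative numerical check (ours) of the conformal blocks in eq. (cb);
# the helper names `k` and `block` are invented for this snippet.
from scipy.special import hyp2f1

def k(beta, x):
    # k_beta(x) = x^(beta/2) * 2F1(beta/2, beta/2; beta; x)
    return x ** (beta / 2) * hyp2f1(beta / 2, beta / 2, beta, x)

def block(delta, ell, z, zbar):
    # g_{Delta,l} written in the (z, zbar) variables; assumes z != zbar
    # (the coincident limit requires more care).
    pref = z * zbar / (z - zbar)
    return pref * (k(delta + ell, z) * k(delta - ell - 2, zbar)
                   - k(delta + ell, zbar) * k(delta - ell - 2, z))

# Leading OPE behavior: for l = 0 and z, zbar -> 0 the block approaches
# u^(Delta/2), with u = z*zbar.
z, zbar = 1e-3, 2e-3
ratio = block(3.0, 0, z, zbar) / (z * zbar) ** 1.5

# Both the prefactor and the bracket are antisymmetric under z <-> zbar,
# so the block itself is symmetric.
sym_diff = block(5.0, 2, 0.3, 0.4) - block(5.0, 2, 0.4, 0.3)
```

This probes the blocks only at generic points; the $z\to\bar{z}$ limit, and blocks with $\Delta-\l-2=0$ such as $g_{3,1}$ for a conserved current, require taking limits of the expression above.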
The crossing-symmetry relations in CFTs with global symmetries were considered in full generality in \cite{Rattazzi:2010yc}, and we apply these results to our case of interest. The operators which appear in the $\phi_a\times\phi_b$ OPE can be decomposed into any of the $7$ irreducible representations in the product of two adjoint representations of $SU(N)$. Each such representation arises from either the symmetric or anti-symmetric product. Operators in the $\phi_a\times\phi_b$ OPE which are in a (anti-)symmetric representation must be of (odd) even spin from Bose symmetry. The reader is referred to appendix \ref{group} for details regarding the tensor product of two $SU(N)$ adjoint representations and our notations. Let $\ensuremath{\mathcal{O}}^{\mathbf{r}}_I$ be an operator in representation $\mathbf{r}$ which appears in the $\phi\times\phi$ OPE, with $I,J,\ldots,$ labeling the elements of the representation. We denote the corresponding OPE coefficient (defined in \eqref{3pnt}) as $\lambda_{a b\,, I}^{\ensuremath{\mathcal{O}}_{\mathbf{r}}}$ and split it into a universal group factor times some coefficient, \begin{align} \lambda_{a b, I}^{\ensuremath{\mathcal{O}}_{\mathbf{r}}} &\equiv \lambda_{\ensuremath{\mathcal{O}}} C_{ab,I}^{(\mathbf{r})} \,, \label{OPEcf}\\ \lambda_{a b, \bar{I}}^{\bar{\ensuremath{\mathcal{O}}}_{\bar{\mathbf{r}}}} &= \lambda_{\bar{\ensuremath{\mathcal{O}}}} \bar{C}_{ab,\bar{I}}^{(\bar{\mathbf{r}})} \equiv (\lambda_{\ensuremath{\mathcal{O}}} C_{ab,I}^{(\mathbf{r})})^* \,, \end{align} where $C_{ab,I}^{(\mathbf{r})}$ is the relevant Clebsch-Gordan coefficient, and is the same for any operator in the representation $\mathbf{r}$, while the coefficient $\lambda_{\ensuremath{\mathcal{O}}}$ is the same for each element of the representation.
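The group-theory input entering \eqref{OPEcf} can be verified numerically. The sketch below (ours; the dimension formulas are standard $SU(N)$ representation theory, supplied by us rather than taken from the text) constructs fundamental generators obeying the normalization of the footnote above, checks the completeness relation, and checks that the dimensions of the seven irreducible representations in the product of two adjoints sum to $(N^2-1)^2$:

```python
# Numerical/symbolic check (ours) of the SU(N) conventions and of the
# counting of irreps in the product of two adjoint representations.
# The dimension formulas below are standard SU(N) representation theory,
# supplied by us rather than taken from the text.
import numpy as np
import sympy as sp

def su_generators(n):
    """Hermitian su(n) generators T^a normalized so tr(T^a T^b) = delta^ab."""
    gens = []
    for i in range(n):
        for j in range(i + 1, n):
            m = np.zeros((n, n), dtype=complex)
            m[i, j] = m[j, i] = 1.0
            gens.append(m / np.sqrt(2))          # real off-diagonal
            m = np.zeros((n, n), dtype=complex)
            m[i, j], m[j, i] = -1j, 1j
            gens.append(m / np.sqrt(2))          # imaginary off-diagonal
    for q in range(1, n):
        d = np.diag([1.0] * q + [-float(q)] + [0.0] * (n - q - 1))
        gens.append(d.astype(complex) / np.sqrt(q * (q + 1)))  # Cartan
    return gens

# Completeness relation quoted in the footnote:
#   sum_a (T^a)^m_i (T^a)^n_j = delta_i^n delta_j^m - (1/N) delta_i^m delta_j^n
n = 3
T = su_generators(n)
lhs = sum(np.einsum('mi,nj->minj', t, t) for t in T)
eye = np.eye(n)
rhs = (np.einsum('in,mj->minj', eye, eye)
       - np.einsum('mi,nj->minj', eye, eye) / n)
ok_completeness = np.allclose(lhs, rhs)

# Dimensions of the 7 irreps in Adj x Adj (singlet, two adjoints,
# (S,Sbar), (A,Abar) and the two mixed representations):
N = sp.symbols('N', positive=True)
dims = [sp.Integer(1),
        N**2 - 1, N**2 - 1,
        N**2 * (N - 1) * (N + 3) / 4,     # (S, Sbar)
        N**2 * (N + 1) * (N - 3) / 4,     # (A, Abar)
        (N**2 - 1) * (N**2 - 4) / 4,      # (S, Abar)
        (N**2 - 1) * (N**2 - 4) / 4]      # (A, Sbar)
total_ok = sp.simplify(sum(dims) - (N**2 - 1)**2) == 0
```

Consistently with the discussion in the text, the $(A,\bar{A})$ dimension vanishes at $N=3$, while at $N=2$ the mixed representations drop out and $(S,\bar{S})$ reduces to the $\mathbf{5}$ of $SU(2)$.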
The sum-rule in \eqref{sumrule} becomes \begin{align} u^{-\Delta_0} &\sum_{\mathbf{r}} \sum_{\ensuremath{\mathcal{O}} \in (\phi\times\phi)_{\mathbf{r}}} \frac{|\lambda_{\ensuremath{\mathcal{O}}}|^2}{(-2)^{\l_{\ensuremath{\mathcal{O}}}}} \delta^{I\bar{J}} C_{ab,I}^{(\mathbf{r})}\bar{C}_{cd,\bar{J}}^{(\bar{\mathbf{r}})} \, g_{\Delta_{\ensuremath{\mathcal{O}}},\l_{\ensuremath{\mathcal{O}}}}(u,v) \nonumber \\ &= v^{-\Delta_0} \sum_{\mathbf{r}} \sum_{\ensuremath{\mathcal{O}} \in (\phi\times\phi)_{\mathbf{r}}} \frac{|\lambda_{\ensuremath{\mathcal{O}}}|^2}{(-2)^{\l_{\ensuremath{\mathcal{O}}}}} \delta^{I\bar{J}} C_{ad,I}^{(\mathbf{r})}\bar{C}_{bc,\bar{J}}^{(\bar{\mathbf{r}})} \, g_{\Delta_{\ensuremath{\mathcal{O}}},\l_{\ensuremath{\mathcal{O}}}}(v,u) \,. \label{SR1} \end{align} For a given representation $\mathbf{r}$, each term in the inner sums in equation \eqref{SR1} has the same sign from the $(-2)^\l$ factor, as this sign only depends on whether $\mathbf{r}$ is in the symmetric or anti-symmetric product of two adjoints\footnote{The adjoint representation appears both in the symmetric and anti-symmetric product and we count those as distinct in the sum over representations in \eqref{SR1}.}. We use this property to write the above sum-rule as \begin{gather} u^{-\Delta_0} \sum_{\mathbf{r}} (\mathds{1}_{\mathbf{r}})_{ab,cd} G_{\mathbf{r}}(u,v) = v^{-\Delta_0} \sum_{\mathbf{r}} (\mathds{1}_{\mathbf{r}})_{ad,cb} G_{\mathbf{r}}(v,u) \,, \notag\\ G_{\mathbf{r}}(u,v) \equiv \! \sum_{\ensuremath{\mathcal{O}}\in(\phi\times\phi)_{\mathbf{r}}} p_{\ensuremath{\mathcal{O}}}\, g_{\Delta_{\ensuremath{\mathcal{O}}},\l_{\ensuremath{\mathcal{O}}}}(u,v) \ec\quad p_{\ensuremath{\mathcal{O}}} \equiv \frac{|\lambda_{\ensuremath{\mathcal{O}}}|^2}{2^{\l_{\ensuremath{\mathcal{O}}}}}\,.
\label{SUnSR} \end{gather} where $G_{\mathbf{r}}(u,v)$ is the sum over conformal blocks in a given representation, and $(\mathds{1}_{\mathbf{r}})_{ab,cd}=\pm \delta^{I\bar{J}} C_{ab,I}^{(\mathbf{r})}\bar{C}_{cd,\bar{J}}^{(\bar{\mathbf{r}})}$ is just the identity matrix in the representation $\mathbf{r}$ projected to adjoint representation indices up to a sign, which can be determined by reflection positivity. Explicit expressions for these identity matrices are given in \eqref{id}. After plugging \eqref{id} into the sum-rule \eqref{SUnSR}, it can be decomposed into several equations by equating the coefficients of the independent delta-functions in the identity matrices. We do this in the next subsections, paying attention to the special cases $SU(2)$ and $SU(3)$. The resulting sum-rules are conveniently expressed in terms of the functions \begin{align} F_{(\mathbf{r})}(u,v) &\equiv \sum_{\ensuremath{\mathcal{O}}\in(\phi\times\phi)_{(\mathbf{r})}} \!\!\! p_{\ensuremath{\mathcal{O}}} \,F_{\Delta_{\ensuremath{\mathcal{O}}},\l_{\ensuremath{\mathcal{O}}}}(u,v) \equiv \frac{u^{-\Delta_0}G_{(\mathbf{r})}(u,v) - v^{-\Delta_0}G_{(\mathbf{r})}(v,u)}{v^{-\Delta_0}-u^{-\Delta_0}} \,, \label{eq:F} \\ H_{(\mathbf{r})}(u,v) &\equiv \sum_{\ensuremath{\mathcal{O}}\in(\phi\times\phi)_{(\mathbf{r})}} \!\!\! p_{\ensuremath{\mathcal{O}}} \,H_{\Delta_{\ensuremath{\mathcal{O}}},\l_{\ensuremath{\mathcal{O}}}}(u,v) \equiv \frac{u^{-\Delta_0}G_{(\mathbf{r})}(u,v) + v^{-\Delta_0}G_{(\mathbf{r})}(v,u)}{v^{-\Delta_0}+u^{-\Delta_0}} \label{eq:G} \,. \end{align} We have verified that all the sum-rules written below are obeyed both by the $4$-point function of a free scalar field in the adjoint of $SU(N)$ and by that of the adjoint bilinear $\bar{\phi}^i (T^a)_i^j\phi_j$ in the theory of a free fundamental scalar $\phi_i$.
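As a minimal illustration of such a free-field check (a sketch of ours in the simplest setting, not the full $SU(N)$ computation), one can verify crossing for a single free real scalar of dimension $\Delta_0=1$, whose stripped 4-point function is $G(u,v)=1+u+u/v$, and also check that the combinations \eqref{eq:F} and \eqref{eq:G} are symmetric under $u\leftrightarrow v$ by construction:

```python
# Symbolic sketch (ours): crossing for a free scalar with Delta_0 = 1,
# and the u <-> v symmetry of the F and H combinations defined above.
import sympy as sp

u, v = sp.symbols('u v', positive=True)
swap = {u: v, v: u}

# Stripped 4-point function of a single free real scalar of dimension 1:
G_free = 1 + u + u / v
crossing_ok = sp.simplify(u**(-1) * G_free
                          - v**(-1) * G_free.subs(swap, simultaneous=True)) == 0

# For an arbitrary channel function G(u,v), the combinations F and H
# (written here for Delta_0 = 1) are u <-> v symmetric by construction.
G = sp.Function('G')
F = (u**(-1) * G(u, v) - v**(-1) * G(v, u)) / (v**(-1) - u**(-1))
H = (u**(-1) * G(u, v) + v**(-1) * G(v, u)) / (v**(-1) + u**(-1))
F_sym = sp.simplify(F - F.subs(swap, simultaneous=True)) == 0
H_sym = sp.simplify(H - H.subs(swap, simultaneous=True)) == 0
```

The full check quoted in the text additionally requires the conformal block decomposition of each $SU(N)$ channel, which this sketch does not attempt.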
\subsubsection{$U(1)$} \label{U1} For $U(1)$ the sum-rule is the usual one for the 4-point function of a real scalar operator: \begin{align} \label{sum_rule_u1} F(u,v)=\sum_{\ensuremath{\mathcal{O}}\in(\phi\times\phi)_{(1)}} \!\!\! p_{\ensuremath{\mathcal{O}}} \,F_{\Delta_{\ensuremath{\mathcal{O}}},\l_{\ensuremath{\mathcal{O}}}}(u,v)=1\,, \end{align} where we have separated out the contribution of the identity operator, for which $p_{\mathds{1}}=1$, $g_{0,0}=1$ and $F_{0,0} = -1$. \subsubsection{$SU(2)$} \label{SU2} For $SU(2)$ we have $\mathbf{3}\times \mathbf{3} = \mathbf{5}_s + \mathbf{3}_a + \mathbf{1}_s$ corresponding to the representations $(S,\bar{S})_s$, $(Adj)_a$ and the trivial representation. Setting all the terms which correspond to the other representations in \eqref{SUnSR} to zero, plugging in the expressions for the identity matrices \eqref{id} and equating independent coefficients, we can express the result as three independent sum-rules\footnote{Equivalent sum-rules were also worked out in \cite{Rattazzi:2010yc} for scalars in the fundamental of $SO(3)$. Our result is consistent with \cite{Rattazzi:2010yc}, but we work in a slightly different convention such that $G_1^{here}=2G_1^{there}$, which amounts to a rescaling of all the OPE coefficients in the trivial representation by a factor of $2$.}, \begin{align} F_{(S,\bar{S})_s} - F_{Adj_a} &= 0 \,, \label{eq:SU2sr1}\\ \frac{2}{3}F_{(S,\bar{S})_s} + 2F_{Adj_a} + F_1 &= 1 \label{eq:SU2sr2}\,, \\ \frac{10}{3}H_{(S,\bar{S})_s} + 2H_{Adj_a} - H_1 &= 1 \label{eq:SU2sr3}\,. \end{align} \subsubsection{$SU(3)$} \label{SU3} For $SU(3)$ the $(A,\bar{A})_s$ representation does not exist, so we set it to zero in \eqref{SUnSR}.
The resulting sum-rules are given by \begin{gather} \frac{3}{2}F_{(S,\bar{S})_s} + F_{(S,\bar{A})_a} + F_1 = 1 \,, \\ \frac{9}{5}F_{(S,\bar{S})_s} - \frac{3}{2}F_{(S,\bar{A})_a} + F_{(Adj)_s} = 0 \,, \\ F_{(S,\bar{S})_s} - \frac{1}{6}F_{(S,\bar{A})_a} - F_{(Adj)_a} = 0 \,, \\ \frac{9}{10} H_{(S,\bar{S})_s} + H_{(S,\bar{A})_a} + \frac{4}{3}H_{(Adj)_s} - H_1 = 1 \,, \\ \frac{5}{2}H_{(S,\bar{S})_s} + \frac{5}{9}H_{(S,\bar{A})_a} + \frac{4}{3}H_{(Adj)_a} - H_1 = 1\,. \end{gather} \subsubsection{$SU(N)$ for $N>3$} \label{SUN} For $N>3$ all the 7 representations listed in appendix \ref{group} can appear in the OPE, and we find \begin{gather} F_{(S,\bar{S})_s} + F_{(A,\bar{A})_s} - F_{(S,\bar{A})_a} = 0 \,, \\ \frac{1}{N+2} F_{(S,\bar{S})_s} - \frac{1}{N-2} F_{(A,\bar{A})_s} + \frac{2}{N} F_{(Adj.)_s} = 0 \,, \\ \frac{1}{N+2} F_{(S,\bar{S})_s} + \frac{1}{N-2} F_{(A,\bar{A})_s} + \frac{1}{N} F_{(S,\bar{A})_a} - F_{(Adj.)_s} - F_{(Adj.)_a} = 0 \,, \\ \frac{2N^2}{N^2+3N+2} F_{(S,\bar{S})_s} + \frac{2N^2}{N^2-3N+2} F_{(A,\bar{A})_s} - \frac{16}{N} F_{(Adj.)_s} + F_{(\mathds{1})_s} = 1 \,, \\ \frac{N(N+3)}{N^2+3N+2} H_{(S,\bar{S})_s} + \frac{N(N-3)}{N^2-3N+2} H_{(A,\bar{A})_s} + H_{(S,\bar{A})_a} + \frac{4}{N} H_{(Adj.)_s} - H_{(\mathds{1})_s} = 1 \,, \\ \frac{N+3}{N+2} H_{(S,\bar{S})_s} - \frac{N-3}{N-2} H_{(A,\bar{A})_s} - \frac{1}{N} H_{(S,\bar{A})_a} - H_{(Adj.)_s} + H_{(Adj.)_a} = 0 \,. \end{gather} \section{Conformal Bootstrap for Conserved Currents in $\ensuremath{\mathcal{N}}=1$ SCFTs} \label{SUSY} Consider an $\ensuremath{\mathcal{N}}=1$ superconformal field theory with global symmetry group $\ensuremath{\mathcal{G}}$. In this section we will analyze the bootstrap constraints for the $4$-point function of $J^a(x)$, which is the top of the current multiplet $\ensuremath{\mathcal{J}}^a(z)$ defined in \eqref{susyJ}. 
In particular, we extend the results of \cite{Fortin:2011nq} for $U(1)$ to the non-Abelian case, and also find additional possible operators in the $J^a\times J^b$ OPE. We use the notations and conventions of \cite{Fortin:2011nq}. \subsection{Current-Current OPE in $\ensuremath{\mathcal{N}}=1$ SCFTs} The general form of the 3-point function of superconformal primary (sprimary) operators was found in \cite{Osborn:1998qu}. For the 3-point function of two conserved currents with another sprimary $\ensuremath{\mathcal{O}}$ in a representation $\mathbf{r}$, the result is \begin{align} \langle \ensuremath{\mathcal{J}}_a(z_1) \ensuremath{\mathcal{J}}_b(z_2) \ensuremath{\mathcal{O}}^i_I(z_3) \rangle = C_{ab,I}^{(\mathbf{r})} \frac{t^i(X,\Theta,\bar{\Theta})}{x_{\bar{1}3}^2 x_{\bar{3}1}^2 x_{\bar{2}3}^2 x_{\bar{3}2}^2}\,, \label{JJO1} \end{align} where the superspace coordinates are $z_j=(x_j,\theta_j,\bar{\theta}_j)$, and we define \begin{equation} x_{\bar{i}j}^{\mu} = -x_{j\bar{i}}^{\mu} \equiv x_{ij}^{\mu} - i\theta_i\sigma^{\mu}\bar{\theta}_j + i\theta_j\sigma^{\mu}\bar{\theta}_i - i\theta_{ij}\sigma^{\mu}\bar{\theta}_{ij}. \end{equation} The quantities $X$, $\Theta$ and $\bar{\Theta}$ are functions of the superspace coordinates given by \begin{align} X &\equiv \frac{x_{3\bar{1}} \tilde{x}_{\bar{1}2} x_{2\bar{3}}}{x_{\bar{1}3}^2 x_{\bar{3}2}^2}\,, & \Theta &\equiv i\left(\frac{1}{x_{\bar{1}3}^2} x_{3\bar{1}} \bar{\theta}_{31} - \frac{1}{x_{\bar{2}3}^2} x_{3\bar{2}} \bar{\theta}_{32}\right)\,, & \bar{\Theta} &= \Theta^{\dag}, \end{align} and $i$ labels the Lorentz representation of $\ensuremath{\mathcal{O}}$. The function $t^i(X,\Theta,\bar{\Theta})$ has to scale appropriately with respect to dilatations and $U(1)_R$ transformations\footnote{The scaling is $t(\lambda\bar{\lambda}X,\lambda\Theta,\bar{\lambda}\bar{\Theta})=\lambda^{2a}\bar{\lambda}^{2\bar{a}}t(X,\Theta,\bar{\Theta})$, where $a-2\bar{a}=2-q$, and $\bar{a}-2a=2-\bar{q}$.
The R-charge and dimension of $\ensuremath{\mathcal{O}}$ are related to $(q,\bar{q})$ by $ R_{\ensuremath{\mathcal{O}}} = \frac{2}{3}(q-\bar{q})$ and $\Delta_{\ensuremath{\mathcal{O}}} = q+\bar{q} $.}. Moreover, because of current conservation, $D^2\ensuremath{\mathcal{J}}^a=\bar{D}^2\ensuremath{\mathcal{J}}^a=0$, the correlator \eqref{JJO1} satisfies a differential equation. As shown in \cite{Osborn:1998qu}, this equation can be translated to the following differential equation for $t$: \begin{align} \ensuremath{\mathcal{D}}^2 t = \bar{\ensuremath{\mathcal{D}}}^2 t = 0 \,, \label{Dsquare} \end{align} where \begin{align} \ensuremath{\mathcal{D}}_{\alpha} = \frac{\partial}{\partial\Theta^{\alpha}}-2i\left(\sigma^{\rho}\bar{\Theta}\right) _{\alpha}\frac{\partial}{\partial X^{\rho}} \ec\quad \bar{\ensuremath{\mathcal{D}}}_{\dot{\alpha}} = -\frac{\partial}{\partial\bar{\Theta}^{\dot{\alpha}}} \,. \end{align} Note that \eqref{JJO1} is symmetric under $(z_1,a)\leftrightarrow (z_2,b)$. Therefore, since $C_{ab,I}^{(\mathbf{r})}$ is either symmetric or anti-symmetric under $a\leftrightarrow b$, we need to find $t^i(X,\Theta,\bar{\Theta})$ which is correspondingly symmetric or anti-symmetric under $z_1\leftrightarrow z_2$. Under $z_1 \leftrightarrow z_2$ we have $(X,\Theta,\bar{\Theta})\leftrightarrow (-\bar{X},-\Theta,-\bar{\Theta})$, with $\bar{X}^{\mu} = X^{\mu} + 2 i \Theta\sigma^{\mu}\bar{\Theta}$. It is therefore useful to define \begin{align} X_{+}^{\mu} &= \frac{1}{2}(X^{\mu}+\bar{X}^{\mu})=X^{\mu} + i\Theta\sigma^{\mu}\bar{\Theta} \,,\\ X_{-}^{\mu} &= i(X^{\mu}-\bar{X}^{\mu})= 2\Theta\sigma^{\mu}\bar{\Theta} \,, \end{align} which are manifestly odd and even under $z_1 \leftrightarrow z_2$, respectively. The above constraints are sufficient to completely determine $t(X,\Theta,\bar{\Theta})$ up to an overall numerical factor.
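The stated parities can be made explicit in a toy symbolic check (ours), modeling a single component of $X^{\mu}$ and $\bar{X}^{\mu}$ by commuting symbols with the swap rule $X\to-\bar{X}$, $\bar{X}\to-X$ induced by $z_1\leftrightarrow z_2$:

```python
# Toy symbolic check (ours) of the parities of X_+ and X_- under z1 <-> z2,
# modeling one component of X and Xbar by commuting symbols.
import sympy as sp

X, Xbar = sp.symbols('X Xbar')
swap = {X: -Xbar, Xbar: -X}      # action induced by z1 <-> z2

Xp = (X + Xbar) / 2              # X_+ : odd
Xm = sp.I * (X - Xbar)           # X_- : even
odd_ok = sp.simplify(Xp.subs(swap, simultaneous=True) + Xp) == 0
even_ok = sp.simplify(Xm.subs(swap, simultaneous=True) - Xm) == 0
```

This ignores the Grassmann nature of $\Theta$, $\bar{\Theta}$, which is irrelevant for the parity bookkeeping itself.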
In particular, \cite{Fortin:2011nq} found\footnote{We find a slightly different coefficient than \cite{Fortin:2011nq} for the second term in the square brackets of \eqref{eq:tm}.} two structures corresponding to spin-$\l$ sprimary operators with zero R-charge, which take the form\footnote{Round brackets around Lorentz indices denote symmetrization, which is defined by averaging over all permutations.} \begin{align} t^{\mu_1\cdots\mu_\l}_+(X,\Theta,\bar{\Theta}) &\equiv \frac{X_+^{(\mu_1}\cdots X_+^{\mu_\l)}}{(X\cdot\bar{X})^{2-\frac{1}{2}(\Delta-\l)}} \left[ 1 - \frac{1}{4} \left( \Delta - \l - 4 \right) \left( \Delta + \l - 6 \right) \frac{\Theta^2\bar{\Theta}^2}{X\cdot\bar{X}} \right] \,, \label{eq:tp} \\ t^{\mu_1\cdots\mu_\l}_-(X,\Theta,\bar{\Theta}) &= \frac{X_+^{(\mu_1}\cdots X_+^{\mu_{\l-1}}}{(X\cdot\bar{X})^{2-\frac{1}{2}(\Delta-\l)}} \left[ X_-^{\mu_\l)} - \frac{\left( \Delta - \l -4 \right)}{\Delta-2} \frac{\left(X_-\cdot X_+\right) X_+^{\mu_\l)}}{X\cdot\bar{X}} \right] \label{eq:tm} \,. \end{align} Under $z_1 \leftrightarrow z_2$ the structures \eqref{eq:tp} and \eqref{eq:tm} transform as \begin{align} t^{\mu_1\cdots\mu_\l}_{\pm} \xrightarrow{z_1 \leftrightarrow z_2} \pm(-)^\l t^{\mu_1\cdots\mu_\l}_{\pm} \,. \end{align} Therefore, if $C_{ab,I}^{(\mathbf{r})}$ is (anti-)symmetric in $a$ and $b$, then in \eqref{JJO1}, the structure $t_+^{\mu_1\cdots\mu_\l}$ appears for (odd) even $\l$ and $t_-^{\mu_1\cdots\mu_\l}$ for (even) odd $\l$. The $\l=0$ case is special since there is no structure for $\Delta\neq 2$ which is odd under $z_1\leftrightarrow z_2$ (see appendix \ref{scalar}).
Therefore in this case only scalar sprimaries in representations which arise from the symmetric product of two adjoints can contribute to \eqref{JJO1} with the structure \begin{align} t_+(X,\Theta,\bar{\Theta}) &\equiv \frac{1}{(X\cdot\bar{X})^{2-\frac{1}{2}\Delta}} \left[ 1 - \frac{1}{4} \left( \Delta - 4 \right) \left( \Delta - 6 \right) \frac{\Theta^2\bar{\Theta}^2}{X\cdot\bar{X}} \right] \,. \end{align} The $\Delta=2$ scalar in the adjoint representation corresponds to\footnote{There could be other conserved currents in the theory, but those would appear in the singlet representation.} $\ensuremath{\mathcal{O}}_I = \ensuremath{\mathcal{J}}_a$. In that case the 3-point function is completely determined (for the canonically normalized current) by the Ward identities to be \cite{Osborn:1998qu}, \begin{align} \langle \ensuremath{\mathcal{J}}^a(z_1) \ensuremath{\mathcal{J}}^b(z_2) \ensuremath{\mathcal{J}}^c(z_3) \rangle &= \frac{1}{x_{\bar{1}3}^2 x_{\bar{3}1}^2 x_{\bar{2}3}^2 x_{\bar{3}2}^2} \left[ i\frac{f^{abc} \tau}{128\pi^6} \left( \frac{1}{X^2} - \frac{1}{\bar{X}^2} \right) + \frac{d_{abc} \kappa}{256\pi^6} \left( \frac{1}{X^2} + \frac{1}{\bar{X}^2} \right) \right] \label{JJJ} \,, \end{align} where $\kappa$ is the $\text{Tr}\,\ensuremath{\mathcal{G}}^3$ 't~Hooft anomaly and $\tau$ is defined through the 2-point function of the canonically normalized current \eqref{jj}. In addition, we find various contributions to \eqref{JJO1} corresponding to operators which are not in spin-$\l$ Lorentz representations. Those are collected in table \ref{nonSpinl}.
\begin{table}[h] \begin{tabular}{|c|c|c|c|c|} \hline $\left(j,\bar{j}\right)$ & $R$ & $\Delta$ & $z_1\leftrightarrow z_2$ & $t(X,\Theta,\bar{\Theta})$ \\ \hline $\left(\frac{\l-1}{2},\frac{\l}{2}\right)$ & $1$ & $\l+\frac{7}{2}$ & $(-)^\l$ & $X^{\alpha_1}_{~(\dot{\alpha}_1}\cdots X^{\alpha_{\l-1}}_{~\phantom{(}\dot{\alpha}_{\l-1}}\bar{\Theta}^{\phantom{\alpha}}_{\dot{\alpha}_{\l})}$ \\ \hline $\left(\frac{1}{2},0\right)$ & $1$ & $\frac{3}{2}$ & $+$ & $(X)^{-4}\cdot X^{\alpha}_{~\dot{\alpha}}\bar{\Theta}^{\dot{\alpha}}$\\ \hline $\left(\frac{\l}{2},\frac{\l-1}{2}\right)$ & $-1$ & $\l+\frac{7}{2}$ & $(-)^\l$ & $X^{(\alpha_1}_{~~\dot{\alpha}_1}\cdots X^{\phantom{(}\alpha_{\l-1}}_{~~\dot{\alpha}_{\l-1}}\Theta^{\alpha_\l)}_{\phantom{\dot{\alpha}_{\l}}}$ \\ \hline $\left(0,\frac{1}{2}\right)$ & $-1$ & $\frac{3}{2}$ & $+$ & $(X)^{-4}\cdot X^{\alpha}_{~\dot{\alpha}}\Theta_{\alpha}$ \\ \hline $\left(\frac{\l+1}{2},\frac{\l-1}{2}\right)$ & $0$ & $\geq \l+3$ & $(-)^\l$ & $(X\cdot\bar{X})^{\frac{\Delta-\l-5}{2}} X^{(\alpha_1}_{+~\dot{\alpha}_1}\cdots X^{\alpha_\l}_{+~\dot{\alpha}_\l}X^{|\dot{\alpha}_\l|\alpha_{\l+1})}_{-}$ \\ \hline $\left(\frac{\l-1}{2},\frac{\l+1}{2}\right)$ & $0$ & $\geq \l+3$ & $(-)^\l$ & $(X\cdot\bar{X})^{\frac{\Delta-\l-5}{2}} X^{\alpha_1}_{+~(\dot{\alpha}_1}\cdots X^{\alpha_\l}_{+~\dot{\alpha}_\l}X_{-~|\alpha_{\l}|\dot{\alpha}_{\l+1})}$ \\ \hline \end{tabular} \caption{Structures corresponding to superconformal primaries in the $\ensuremath{\mathcal{J}}\times\ensuremath{\mathcal{J}}$ OPE, in Lorentz representations with $j\neq\bar{j}$.} \label{nonSpinl} \end{table} Let us discuss some properties of the operators listed in table \ref{nonSpinl}. 
The $\left(\frac{1}{2},0\right)$ structure in the second entry of table \ref{nonSpinl} (and its $\left(0,\frac{1}{2}\right)$ conjugate) actually arises from a larger family of structures \mbox{$t^{\alpha(\l)}_{~\dot{\alpha}(\l-1)} = (X^2)^{-\l-1} X^{(\alpha_1}_{~\dot{\alpha}_1}\cdots X^{\alpha_\l)}_{~\dot{\alpha}_\l}\bar{\Theta}^{\dot{\alpha}_\l}$}, which satisfies all the constraints\footnote{We use the notation: $t^{\alpha(\l),\dot{\alpha}(\l')}\equiv t^{\alpha_1\cdots\alpha_\l,\dot{\alpha}_1\cdots\dot{\alpha}_{\l'}}$.}. These structures correspond to operators with dimension $\Delta=\frac{5}{2} - \l$ which violate the unitarity bound for $\l\neq 1$, $\Delta \ge |\frac{3}{2}R-j+\bar{j}|+j+\bar{j}+2=\l+\frac{5}{2}$. The $\l=1$ structure, however, corresponds to a chiral operator ($\bar{Q}_{\dot{\alpha}}\Psi_{\alpha}=0$), in which case the unitarity bound is modified to $\Delta=\frac{3}{2}R\ge j+1$. The corresponding operator saturates this bound, so it is in fact a free chiral fermion. When the zero R-charge $\left(\frac{\l+1}{2},\frac{\l-1}{2}\right)$ operators saturate the unitarity bound $\Delta\ge \l+3$, they decompose into two short representations as follows: \begin{align} \left(\frac{\l + 1}{2},\frac{\l - 1}{2}\right) \xrightarrow{\Delta\rightarrow \l+3} \left(\frac{\l+1}{2},\frac{\l - 1}{2}\right)_{\mathrm{short}} \!\!\oplus \left(\frac{\l}{2},\frac{\l - 1}{2}\right)_{\mathrm{short}}\,. \label{short} \end{align} The shortening condition is $Q^{\alpha_1} \ensuremath{\mathcal{O}}_{\alpha_1\cdots\alpha_{\l+1}\,,\dot{\alpha}_1\cdots\dot{\alpha}_{\l-1}}=0$. The resulting structure for the short representation $\left(\frac{\l}{2},\frac{\l - 1}{2}\right)_{\mathrm{short}}$ appears as the third entry of table \ref{nonSpinl}, so these two series of structures are actually related. A similar story holds for the representations $\left(\frac{\l-1}{2},\frac{\l+1}{2}\right)$ and $\left(\frac{\l-1}{2},\frac{\l}{2}\right)$.
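As a small consistency check (ours, not carried out in the text), the quantum numbers in table \ref{nonSpinl} can be compared against the unitarity bound quoted above, $\Delta\ge|\frac{3}{2}R-j+\bar{j}|+j+\bar{j}+2$: the two $\Delta=\l+\frac{7}{2}$ families saturate it, and the zero R-charge families reproduce the quoted threshold $\Delta\ge\l+3$:

```python
# Consistency check (ours) of the table entries against the generic N=1
# unitarity bound Delta >= |3R/2 - j + jbar| + j + jbar + 2 quoted above.
import sympy as sp

l = sp.symbols('l', positive=True)

def bound(R, j, jb):
    return sp.Abs(sp.expand(sp.Rational(3, 2) * R - j + jb)) + j + jb + 2

# R = +/-1 rows with Delta = l + 7/2 saturate the generic bound:
b1 = bound(1, (l - 1) / 2, l / 2)
b3 = bound(-1, l / 2, (l - 1) / 2)
# zero R-charge rows: the generic bound reproduces Delta >= l + 3:
b5 = bound(0, (l + 1) / 2, (l - 1) / 2)
b6 = bound(0, (l - 1) / 2, (l + 1) / 2)
```

The $\Delta=\frac{3}{2}$ entries are not covered by this generic formula; as explained above they are chiral (anti-chiral) and saturate the modified bound $\Delta=\frac{3}{2}R$.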
This decomposition into short multiplets matches the one described, e.g., in \cite{Gadde:2010en}, which also specifies where the spin $\l$ conformal primaries reside after the decomposition. Short representations such as $\left(\frac{\l}{2},\frac{\l - 1}{2}\right)_{\mathrm{short}}$ can certainly appear, at least in free theories. They can be constructed in the following way, using the current $J^a$ as the basic building block: \begin{align} \ensuremath{\mathcal{O}}^{ab}_{\alpha(\l)\dot{\alpha}(\l-1)} &= \begin{cases} J^{(a}\overleftrightarrow{\partial}_{\!\!\alpha_1\dot{\alpha}_1} \cdots \overleftrightarrow{\partial}_{\!\!\alpha_{\l-1}\dot{\alpha}_{\l-1}} Q_{\alpha_\l} J^{b)} - \left(Q_{\alpha_\l} J\right)^{(a} \overleftrightarrow{\partial}_{\!\!\alpha_1\dot{\alpha}_1} \cdots \overleftrightarrow{\partial}_{\!\!\alpha_{\l-1}\dot{\alpha}_{\l-1}} J^{b)} & \l=\text{even} \,, \\ J^{[a} \overleftrightarrow{\partial}_{\!\!\alpha_1\dot{\alpha}_1} \cdots \overleftrightarrow{\partial}_{\!\!\alpha_{\l-1}\dot{\alpha}_{\l-1}} Q_{\alpha_\l} J^{b]} - \left(Q_{\alpha_\l} J\right)^{[a} \overleftrightarrow{\partial}_{\!\!\alpha_1\dot{\alpha}_1} \cdots \overleftrightarrow{\partial}_{\!\!\alpha_{\l-1}\dot{\alpha}_{\l-1}} J^{b]}& \l=\text{odd} \,. \end{cases} \label{short_ops} \end{align} Symmetrization over Lorentz indices is to be understood. One can verify that these are superconformal primaries and satisfy the shortening condition $Q^{\alpha_1} \ensuremath{\mathcal{O}}^{ab}_{\alpha_1 \cdots \alpha_{\l}\,,\dot{\alpha}_1 \cdots \dot{\alpha}_{\l-1}}=0$, by using the superconformal algebra and the fact that $Q^2 J^a(x) = \bar{Q}^2 J^a(x)=0$. In the following subsection we will describe the application of these results to the conformal block decomposition of the 4-point function of $J^a(x)$.
\subsection{Superconformal Blocks} The structures for the 3-point function $\langle \ensuremath{\mathcal{J}}_a(z_1) \ensuremath{\mathcal{J}}_b(z_2) \ensuremath{\mathcal{O}}^i_I(z_3) \rangle$, found in the previous subsection, relate the $\ensuremath{\mathcal{J}}^a\times\ensuremath{\mathcal{J}}^b$ OPE coefficients of primary super-descendants of $\ensuremath{\mathcal{O}}^i_I$ to the coefficient of the superconformal primary. The sum over primary operators in the conformal block decomposition of the current 4-point function can then be rearranged as a sum over superconformal primary operators, with ``superconformal blocks'' replacing the usual conformal blocks. The superconformal blocks are linear combinations of the usual conformal blocks, which take into account the relations between OPE coefficients of the primary operators in each super-multiplet. For the purposes of this paper, we are interested in these relations for the $J^a\times J^b$ OPE. These can be obtained by setting $\theta_{1,2}=\bar{\theta}_{1,2}=0$ in the various expressions for \eqref{JJO1}, expanding in $\theta_3$ and $\bar{\theta}_3$ and disentangling the various primary super-descendants in this expansion. The superconformal blocks for spin-$\l$ sprimaries, corresponding to the $t_+$ and $t_-$ structures in equations \eqref{eq:tp} and \eqref{eq:tm}, were computed in \cite{Fortin:2011nq}. The result is\footnote{Equation \eqref{Gm} fixes a mistake in the superconformal block which was found in \cite{Fortin:2011nq}. We are grateful to J.-F.~Fortin, K.~Intriligator and A.~Stergiou for discussions on this point.}\footnote{We are grateful to Z.U.~Khandker, D.~Li, D.~Poland and D.~Simmons-Duffin for pointing out a mistake in (\ref{Gp}) in an earlier version of this paper. Their full analysis can be found in \cite{KLPSD}.}
\begin{align} \ensuremath{\mathcal{G}}^+_{\Delta,\l}(u,v) &= g_{\Delta,\l}(u,v) + \frac{ (\Delta-2)^2 (\Delta + \l) (\Delta - \l - 2) }{ 16 \Delta^2 (\Delta + \l + 1) (\Delta - \l - 1) } g_{\Delta+2,\l}(u,v) \,, \label{Gp}\\ \ensuremath{\mathcal{G}}^-_{\Delta,\l}(u,v) &= g_{\Delta+1,\l+1}(u,v)+ \frac{(\l+2)^2 (\Delta+\l+1)(\Delta-\l-2)}{\l^2 (\Delta -\l-1) (\Delta + \l)} g_{\Delta+1,\l-1}(u,v) \,. \label{Gm} \end{align} Depending on the spin and global symmetry representation of the spin-$\l$ sprimary, either $\ensuremath{\mathcal{G}}^+_{\Delta,\l}$ or $\ensuremath{\mathcal{G}}^-_{\Delta,\l}$ should be used in the superconformal block decomposition. In particular, define \begin{align} \ensuremath{\mathcal{G}}^{(\mathbf{r})_s}_{\Delta,\l} &= \begin{cases} \ensuremath{\mathcal{G}}^+_{\Delta,\l}\,, & \l=\text{even} \\ \ensuremath{\mathcal{G}}^-_{\Delta,\l}\,, & \l=\text{odd} \end{cases} \,, \\ \ensuremath{\mathcal{G}}^{(\mathbf{r})_a}_{\Delta,\l} &= \begin{cases} \ensuremath{\mathcal{G}}^+_{\Delta,\l}\,, & \l=\text{odd} \\ \ensuremath{\mathcal{G}}^-_{\Delta,\l}\,, & \l=\text{even} \end{cases} \,. \end{align} If the representation of the sprimary is in the (anti-)symmetric product then one should use ($\ensuremath{\mathcal{G}}^{(\mathbf{r})_a}_{\Delta,\l}$) $\ensuremath{\mathcal{G}}^{(\mathbf{r})_s}_{\Delta,\l}$. For the operators corresponding to the structures in table \ref{nonSpinl} there is only one primary super-descendant which can contribute to the $J^a\times J^b$ OPE (i.e. which is in a spin-$\l$ Lorentz representation). Therefore, there are no special relations between OPE coefficients inside each multiplet in those cases. 
In particular, the $\left(\frac{\l\pm 1}{2},\frac{\l\mp 1}{2}\right)$ sprimaries contain a spin-$\l$ primary super-descendant of dimension $\Delta\ge \l+4$, which is obtained by acting on the sprimary with $Q_{\alpha}$ and $\bar{Q}_{\dot{\alpha}}$ appropriately\footnote{When the unitarity bound is saturated the multiplet decomposes into two short multiplets as in \eqref{short}. The above spin-$\l$ primary superdescendant sits in the second factor on the RHS of \eqref{short} as can be seen e.g., in \cite{Gadde:2010en}.}. To summarize, we can write the $s$-channel decomposition as follows: \begin{gather} \langle J^m_{~i}(x_1) J^n_{~j}(x_2) J^p_{~k}(x_3) J^q_{~l}(x_4) \rangle_{s} = \frac{1}{x_{12}^{4}x_{34}^{4}} \Bigg\{ \notag\\ \sum_{\mathbf{r} \in Adj. \otimes_s Adj.} \id{mn}{ij}{pq}{kl}{\mathbf{r}} \biggl[ \,\,\sum_{\substack{ \ensuremath{\mathcal{O}}_{\alpha(\l),\dot{\alpha}(\l)} \\ \Delta\ge \l+2 }} \!\!\!\! p_{\ensuremath{\mathcal{O}}}\, \ensuremath{\mathcal{G}}_{\Delta,\l}^{(\mathbf{r})_s}(u,v) \,\,\,+\!\! \sum_{\substack{ \ensuremath{\mathcal{O}}_{\alpha(\l\pm 1),\dot{\alpha}(\l\mp 1)} \\ \Delta\ge \l+3 }} \!\!\!\!\!\! p_{\ensuremath{\mathcal{O}}}\, g_{\Delta+1,\l}(u,v)\biggr] \notag\\ + \sum_{\mathbf{r} \in Adj. \otimes_a Adj.} \id{mn}{ij}{pq}{kl}{\mathbf{r}} \biggl[\,\,\sum_{\substack{ \ensuremath{\mathcal{O}}_{\alpha(\l),\dot{\alpha}(\l)} \\ \Delta\ge \l+2 }} \!\!\!\! p_{\ensuremath{\mathcal{O}}} \, \ensuremath{\mathcal{G}}_{\Delta,\l}^{(\mathbf{r})_a}(u,v) \,\,\,+\!\! \sum_{\substack{ \ensuremath{\mathcal{O}}_{\alpha(\l\pm 1),\dot{\alpha}(\l\mp 1)} \\ \Delta\ge \l+3 }} \!\!\!\!\!\! p_{\ensuremath{\mathcal{O}}} \, g_{\Delta+1,\l}(u,v)\biggr]\Bigg\} \,,\label{Neq1SR} \end{gather} where we have separated the sum over representations into sums over symmetric and anti-symmetric representations.
The sums in the square brackets are over sprimary operators in the $J^a\times J^b$ OPE in the indicated Lorentz representation\footnote{In the sum over operators in the Lorentz representations $\left(\frac{\l\pm 1}{2},\frac{\l\mp 1}{2}\right)$, we implicitly include also the short $\left(\frac{\l}{2}\,,\frac{\l-1}{2}\right)$ and $\left(\frac{\l-1}{2}\,,\frac{\l}{2}\right)$ operators.}. Note that for the operators in table \ref{nonSpinl}\footnote{For spin-$\l$ sprimaries the summation is over even and odd spins regardless of whether the representation of $\ensuremath{\mathcal{O}}$ is in the symmetric or anti-symmetric product of the current operators. This is because an even (odd) spin sprimary contains odd (even) super-descendant conformal primaries.} only (odd) even $\l$ appears for (anti-)symmetric representations. A similar expression holds for the $t$-channel. The final result for the sum-rules in the $\ensuremath{\mathcal{N}}=1$ case is obtained using the appropriate conformal or superconformal blocks in the adjoint scalar sum-rules written in section \ref{adjScalar}. We wrote the coefficient $p_{\ensuremath{\mathcal{O}}}$ in \eqref{Neq1SR} with some abuse of notation to avoid clutter. It should be understood that it denotes the coefficient which was defined in \eqref{SUnSR} for the appropriate operator. In particular, for a spin-$\l$ sprimary it denotes the coefficient of the sprimary if $\ensuremath{\mathcal{G}}^+_{\Delta,\l}$ is used, while if $\ensuremath{\mathcal{G}}^-_{\Delta,\l}$ is used, it denotes the coefficient of the spin-$\l\!+\!1$ super-descendant. For the sprimaries in table \ref{nonSpinl} it denotes the coefficient of the spin-$\l$ super-descendant. A check of the above results can be obtained by decomposing the $\ensuremath{\mathcal{N}}=2$ superconformal blocks found in \cite{Dolan:2001tt}. This was described in some detail in \cite{Fortin:2011nq}, though not carried out explicitly.
We verified that this decomposition is consistent only if we include the operators in Table \ref{nonSpinl}. \section{Bounds on Current Central Charges} \label{results} Having written down the sum-rules, including the SUSY constraints, we are now ready to apply any of the methods developed in \cite{Rattazzi:2008pe,Poland:2010wg, Rattazzi:2010gj,Poland2011} to find bounds on OPE coefficients. The basic strategy for obtaining such bounds involves converting the problem into a system of constraints for every possible operator in the spectrum and is reviewed in Appendix \ref{bootstrap_bounds}. In SCFTs, the $\Delta=2$ adjoint scalar sits at the top of the current supermultiplet. Thus, one can effectively use it to place a bound applicable to every $\ensuremath{\mathcal{N}}=1$ theory with $SU(N)$ global symmetry. Specifically, we have obtained a lower bound on $\tau$, the coefficient of the current two-point function. \subsection{Bounds on OPE Coefficients in $U(1)$ SCFTs} The leading terms in the $JJ$ OPE, when $J$ is canonically normalized, take the form \begin{align} J(x) J(0)=\tau \frac{\mathds{1}}{16\pi^4x^4}+\frac{\kappa}{\tau} \frac{J(0)}{16\pi^2x^2} +c^i\frac{\ensuremath{\mathcal{O}}_i(0)}{x^{4-\Delta _i}}+\cdots. \end{align} We first attempt to obtain a bound for the OPE coefficient $\lambda_J$. In our normalization \eqref{2pnt}, \eqref{3pnt}, this is nothing but $\lambda_J = \frac{\kappa}{4 \tau^{3/2}}$.
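Since the numerics bound $\lambda_J$ itself, it is worth spelling out how such a bound constrains the OPE data (the rearrangement below is ours; $\lambda_*$ is shorthand for any numerical bound obtained from the sum rules):

```latex
% From \lambda_J = \kappa/(4\tau^{3/2}), a sum-rule bound on |\lambda_J|
% is equivalent to a bound on the OPE coefficient \kappa in units of \tau:
\left|\lambda_J\right| \;\le\; \lambda_*
\qquad\Longleftrightarrow\qquad
|\kappa| \;\le\; 4\,\lambda_*\,\tau^{3/2}\,.
```

In other words, the coefficient $\kappa$ of the $J$ term in the OPE is constrained relative to the current central charge $\tau$.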
In this case, which has fewer sum rules, we used the procedure and parameters described in \cite{Poland2011} with $k=10$, and obtained an upper bound for the OPE coefficient $|\lambda_J| < 5.38$.\footnote{We are grateful to D.~Li for pointing out to us a mistake in the $\lambda_J$ bound that appeared in the previous version of this paper.} \subsection{Bounds on OPE Coefficients and on $\tau$ in $SU(N)$ SCFTs} The major difference from the $U(1)$ case arises from the fact that there are now several different tensor structures appearing in the OPE: three in the case of $SU(2)$, five in the case of $SU(3)$ and seven in the generic case $SU(N)$ for $N>3$. As shown, for example, in \cite{Poland:2010wg,Rattazzi:2010yc,Vichi:2011ux, Poland2011}, one can use a vectorial linear functional in order to obtain a bound when several sum-rules are involved. In the non-abelian case, the $JJ$ OPE for the canonically normalized current takes the form \begin{align} \label{JJOPE} J_a(x) J_b(0) &= \tau \frac{\delta _{ab}\mathds{1}}{16\pi ^4x^4}+\frac{k d_{abc}}{\tau} \frac{J_c(0)}{16\pi ^2x^2}+f_{abc} \frac{x^\mu j_\mu ^c (0)}{24\pi ^2 x^2}+c_{ab}^i\frac{\ensuremath{\mathcal{O}}_i(0)}{x^{4-\Delta _i}}+\cdots \,, \\ j_{\mu}^a(x) j^b_{\nu}(0) &= 3\tau\delta^{ab} \frac{ I_{\mu\nu}(x)}{4\pi^4x^6} \mathds{1} +\cdots \,. \end{align} We want to place a bound on $\tau$. This can be done by isolating the contribution of $j^a_{\mu}$ in the sum-rules and placing a bound on its OPE coefficient. In our normalization \eqref{2pnt}, \eqref{3pnt} we have \begin{align} \lambda_{ab,c}^{j_{\mu}} = \frac{1}{\sqrt{3 \tau}} f_{abc} \,. \end{align} Therefore, $|\lambda_{j_{\mu}}|=\frac{1}{\sqrt{3 \tau}}$ in \eqref{OPEcf}. The OPE coefficient enters the sum-rule as the coefficient $p_{j_{\mu}}$ of the conformal block $g_{3,1}$ (see \eqref{SUnSR})\footnote{This corresponds to the structure \eqref{JJJ}. 
In this channel there is only one operator in the $JJ$ OPE and therefore no superconformal block.}, and due to our normalization of the conformal blocks, chosen without the $(2)^{-\l}$ factor, we have \begin{align} \label{tau_bound} p_{j_{\mu}} = \frac{1}{2} |\lambda_{j_{\mu}}|^2 = \frac{1}{6\tau} \,. \end{align} We can obtain an upper bound on $p_{j_\mu}$, which translates into a lower bound on $\tau$. Figure \ref{bounds_fig} shows the lower bounds on $\tau$ obtained for different values of the gauge group size $N$. Due to numerical difficulties, the bounds for $k > 6$ are not optimal, but they do satisfy the constraints. \begin{figure}[h] \includegraphics[width=1 \linewidth,scale=0.45]{lowerbounds.png} \centering \caption{Lower bounds on $\tau$ for different gauge groups $SU(N)$, obtained for different search space sizes $k$. } \label{bounds_fig} \end{figure} Note that the bound increases with $N$, as one would expect. Indeed, we can think of $SU(2)$ as a subgroup of $SU(N)$ with $N>2$. In that case the generators of $SU(N)$, which are not part of the $SU(2)$ subgroup, would appear in the singlet representation of the $SU(2)$ current-current OPE. Thus, for consistency, the bound for $SU(2)$ must be weaker than the bound for $N>2$. This is indeed the case. In Figure \ref{high_bounds_fig} we show the bounds for (some) very large values of $N$ as well. \begin{figure}[h] \includegraphics[width=1 \linewidth,scale=0.45]{lowerbounds_high.png} \centering \caption{Lower bounds on $\tau$ for different gauge groups $SU(N)$ with high values of $N$, obtained for different search space sizes $k$. } \label{high_bounds_fig} \end{figure} The bound again rises with $N$, though very slowly. These results are consistent with the results of \cite{Poland:2010wg,Poland2011}, which were obtained by analyzing the 4-point function of a chiral field $\Phi$ in the fundamental representation of $SU(N)$. In those works the bounds on $\tau$ were obtained as a function of $\Delta_{\Phi}$.
For $\Delta_{\Phi}=1$ one finds $\tau\ge 1$. The saturation of this bound corresponds to the free theory value of $\tau$. Eventually, for high enough dimensions of $\Phi$ the bound drops well below $1$. Since we do not assume any particular field content, one would expect our bounds to be weaker than the ones found in \cite{Poland:2010wg,Poland2011} for the entire range of $\Delta_{\Phi}$, and this is indeed the case. Nevertheless, it is still slightly puzzling that the bounds we find appear to be much weaker than those of \cite{Poland:2010wg,Poland2011}. While this could be simply due to the fact that we make fewer assumptions on the theory, the following argument offers an alternative explanation. Consider the free theory of one fundamental chiral field. In this theory the $SU(N)$ currents $j_{B\mu}^a$, $j_{F\mu}^a$ constructed from the boson and fermion fields are separately conserved, while only the combination $j_{\mu}^a=j_{B\mu}^a+j_{F\mu}^a$ sits in a current multiplet (and the current central charge corresponding to $j_{\mu}^a$ is $\tau=1$). The additional symmetries contaminate the OPE coefficient of $g_{3,1}$ out of which we have been extracting our bounds on $\tau$. In fact, the 4-point function of $J^a$ in the free $\ensuremath{\mathcal{N}}=1$ theory is obviously the same as in the non-supersymmetric theory of one scalar field. Therefore, if we interpret the coefficient of $g_{3,1}$ as $1/6\tau$ we would obtain $\tau=1/3$, the value of $\tau$ for one free fundamental scalar. As a result, using our method we cannot expect our bounds to be stronger than $\tau\ge 1/3$. As we just explained, our interpretation of the coefficient of $g_{3,1}$ is incorrect in the free theory, since this coefficient receives other contributions. These additional contributions are special to the free theory, as the corresponding conserved currents reside in multiplets which contain conserved higher-spin currents \cite{Maldacena:2011jn}.
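The arithmetic behind this obstruction is easy to track (our rephrasing of the argument above; the values $\tau=1/3$ and $\tau=1$ are those quoted in the text):

```latex
% Inverting \eqref{tau_bound}, an extracted coefficient of g_{3,1} is read as
\tau_{\mathrm{eff}} \;=\; \frac{1}{6\,p_{g_{3,1}}}\,.
% In the free theory the naive reading gives \tau_{\mathrm{eff}}=1/3, so
p_{g_{3,1}}\big|_{\mathrm{free}}
\;=\; \frac{1}{6\cdot\tfrac{1}{3}} \;=\; \frac{1}{2}
\;>\; \frac{1}{6} \;=\; \frac{1}{6\tau}\Big|_{\tau=1}\,,
```

i.e., the separately conserved $j_B$ and $j_F$ inflate the coefficient of $g_{3,1}$ by a factor of three relative to what the true central charge $\tau=1$ alone would contribute, which is why no bound stronger than $\tau\ge 1/3$ can come out of this channel.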
It would be interesting to remove the free theory from the numerical analysis, by introducing small gaps in the dimensions to exclude conserved higher-spin currents from the $JJ$ OPE. In fact, we expect this to improve the bounds dramatically, especially for large values of $N$. For instance, it was shown in \cite{Beem:2013sza} that in interacting $\ensuremath{\mathcal{N}}=2$ CFTs $\tau \ge N$ for $N \ge 3$. In those cases there is therefore a discontinuous jump in the bound compared to free theories which have $\tau = O(N^0)$. We find it plausible that similar results hold also for $\ensuremath{\mathcal{N}}=1$ CFTs. It would be interesting to study the interplay between the size of the dimension gaps needed to see a jump in the bound and the size of $N$. We leave this to future work. As a preliminary result, we found that the bound is $\tau > 3.82$ for $SU(10000)$ with $k=6$ when one assumes a gap of $0.1$ above the unitarity bound for all spin-$\ell$ operators which are not shortened due to the global symmetry. Finally, if the theory has a gravity dual, then in our normalization we have \cite{Freedman:1998tz} \begin{equation} \tau = 8 \pi^2 \frac{R_{AdS}}{g^2}, \end{equation} where $g$ is the coupling constant of the non-abelian gauge theory in the bulk, which matches the $SU(N)$ global symmetry. Thus, one can obtain an upper bound on $g^2/R_{AdS}$, meaning that the gauge coupling cannot become arbitrarily large in the bulk theory. This argument has been used in \cite{Poland:2010wg} to claim that such a bound exists in a bulk theory in the presence of a charged scalar. Here we see that it exists regardless of the type of excitation, and it is just a consequence of having a holographic dual. It would be very interesting to understand why such a bound exists from the bulk perspective. As an order-of-magnitude estimate, a lower bound of $\tau \ge 1$ at $N\rightarrow \infty$ translates to $g^2/R_{AdS} \le 8 \pi^2$.
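Explicitly, writing $\tau_{\min}$ for whatever lower bound the numerics provide (our shorthand, not notation used above), the translation is a one-line rearrangement:

```latex
\tau \;=\; \frac{8\pi^2 R_{AdS}}{g^2} \;\ge\; \tau_{\min}
\qquad\Longleftrightarrow\qquad
\frac{g^2}{R_{AdS}} \;\le\; \frac{8\pi^2}{\tau_{\min}}\,,
```

so $\tau_{\min}=1$ reproduces the estimate $g^2/R_{AdS}\le 8\pi^2\approx 79$ quoted above.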
\subsection*{Acknowledgments} \label{s:acks} We are grateful to O.~Aharony for many suggestions, enlightening conversations and collaboration at early stages of this project. We would also like to thank D.~Simmons-Duffin and B.C.~van Rees for discussions and helpful suggestions. We are especially grateful to J.F.~Fortin, K.~Intriligator and A.~Stergiou for useful comments on an early version of this manuscript. We would like to thank S.~Rychkov for very helpful comments on the first version of this paper, which have helped us pinpoint a mistake in the overall normalization of $\tau$. Finally, we are grateful to Z.U.~Khandker, D.~Li, D.~Poland and D.~Simmons-Duffin for sharing their results with us, thereby correcting a mistake in the earlier version of this draft. This work was supported in part by an Israel Science Foundation (ISF) center of excellence grant, by the German-Israeli Foundation for Scientific Research and Development and by the Minerva Foundation.
\section{Introduction} Let $S=\mathbb{C}[x,y,z]$ be the polynomial ring in three variables $x,y,z$ with complex coefficients, and let $C:f=0$ be a reduced curve of degree $d$ in the complex projective plane $\mathbb{P}^2$. The minimal degree of a Jacobian relation for the polynomial $f$ is the integer $mdr(f)$ defined to be the smallest integer $m\geq 0$ such that there is a nontrivial relation \begin{equation} \label{rel_m} af_x+bf_y+cf_z=0 \end{equation} among the partial derivatives $f_x, f_y$ and $f_z$ of $f$ with coefficients $a,b,c$ in $S_m$, the vector space of homogeneous polynomials in $S$ of degree $m$. When $mdr(f)=0$, then $C$ is a union of $d$ lines passing through one point, a situation easy to analyse. We assume from now on in this note that \begin{equation} \label{mdr} mdr(f)\geq 1. \end{equation} We denote by $J_f$ the Jacobian ideal of $f$, i.e. the homogeneous ideal in $S$ spanned by $f_x,f_y,f_z$, and by $M(f)=S/J_f$ the corresponding graded quotient ring, called the Jacobian (or Milnor) algebra of $f$. Let $I_f$ denote the saturation of the ideal $J_f$ with respect to the maximal ideal ${\bf m}=(x,y,z)$ in $S$ and consider the local cohomology group, usually called the Jacobian module of $f$, $$N(f)=I_f/J_f=H^0_{\bf m}(M(f)).$$ The Lefschetz type properties for Artinian algebras have attracted a vast literature, see for instance \cite{ H+1,H+2,IG}. It was shown in \cite[Corollary 4.3]{DPop} that the graded $S$-module $N(f)$ satisfies a Lefschetz type property with respect to multiplication by generic linear forms. This implies in particular the inequalities \begin{equation} \label{in} 0 \leq n(f)_0 \leq n(f)_1 \leq ...\leq n(f)_{[T/2]} \geq n(f)_{[T/2]+1} \geq ...\geq n(f)_T \geq 0, \end{equation} where $T=3d-6$ and $n(f)_k=\dim N(f)_k$ for any integer $k$. 
Note that for a smooth curve $C:f=0$, one has $N(f)=M(f)$ and also \begin{equation} \label{in2} 0= \indeg (M(f))=\indeg (N(f)) \text{ and } T=\en (M(f))=\en (N(f)), \end{equation} in the notation from \cite{HS}. We set as in \cite{AD,Drcc} $$\nu(C)=\max _j \{n(f)_j\},$$ and introduce a new invariant for $C$, namely $$\sigma(C)=\min \{j : n(f)_j \ne 0\}.$$ The self duality of the graded $S$-module $N(f)$, see \cite{HS,Se, SW}, and the Lefschetz type property mentioned above imply that $n(f)_s \ne 0$ exactly for $s=\sigma(C),..., T-\sigma(C)$. In other words, for a reduced curve $C:f=0$, one has \begin{equation} \label{in3} \indeg (N(f)) =\sigma(C) \text{ and } \en (N(f))= T-\sigma(C). \end{equation} Denote by $\tau(C)$ the global Tjurina number of the curve $C$, which is the sum of the Tjurina numbers of the singular points of $C$. The study of {\it free curves in the projective plane} has a rather long tradition, being inaugurated by A. Simis in \cite{Sim1, Sim2}, and actively continued by several mathematicians, see for instance \cite{AD,B+,Dmax,Drcc,DS14,DStFD,FV1,ST}, and we refer to these papers for the properties of free curves listed below. If $C$ is a free curve with exponents $(d_1,d_2)$, with $d_1 \leq d_2$, then $J_f=I_f$, or equivalently $\nu(C)=0$. Hence, by the definition of the exponents, the minimal resolution of the Milnor algebra $M(f)$ as a graded $S$-module has the following form \begin{equation} \label{r1} 0 \to S(-d_1-d+1) \oplus S(-d_2-d+1) \to S^3(-d+1) \xrightarrow{(f_x,f_y,f_z)} S. \end{equation} Moreover, a reduced curve $C:f=0$ is free if and only if \begin{equation} \label{r1.5} \tau(C)=(d-1)^2-r(d-r-1), \end{equation} where $r=mdr(f)$, see \cite{Dmax,duPCTC}. For a free curve one has $d_1+d_2=d-1$. 
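As a quick consistency check (our addition, not part of the original argument), substituting $r=d_1$ and $d-r-1=d_2$ into \eqref{r1.5} rewrites the free-curve condition purely in terms of the exponents, $\tau(C)=(d_1+d_2)^2-d_1d_2=d_1^2+d_1d_2+d_2^2$; the identity is easy to verify numerically:

```python
# Check that the free-curve formula (d-1)^2 - r(d-r-1), with r = d_1 and
# d = d_1 + d_2 + 1 (since d_1 + d_2 = d - 1 for free curves), agrees with
# the symmetric expression d_1^2 + d_1*d_2 + d_2^2 in the exponents.
for d1 in range(0, 30):
    for d2 in range(d1, 60):
        d = d1 + d2 + 1
        tau_from_r = (d - 1) ** 2 - d1 * (d - d1 - 1)
        tau_from_exponents = d1 ** 2 + d1 * d2 + d2 ** 2
        assert tau_from_r == tau_from_exponents, (d1, d2)
print("free-curve Tjurina identity verified")
```

The symmetric form makes it transparent that $\tau(C)$ determines the unordered pair $(d_1,d_2)$ for free curves of a given degree.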
The {\it nearly free curves} have been introduced in \cite{Dmax, DStRIMS}; they have properties similar to the free curves, and together with the free curves may lead to a new understanding of the rational cuspidal curves, due to Conjecture \ref{c1} below. This class of curves is already the subject of attention in a number of papers, see for instance \cite{AD, B+,MaVa}. We refer to these papers for the properties of nearly free curves listed below. By definition, $C$ is a nearly free curve if $\nu(C)=1$. Such a curve has also a pair of exponents $(d_1,d_2)$, with $d_1 \leq d_2$, such that the minimal resolution of the Milnor algebra $M(f)$ as a graded $S$-module has the following form \begin{equation} \label{r2} 0 \to S(-d-d_2) \to S(-d-d_1+1) \oplus S^2(-d-d_2+1) \to S^3(-d+1) \xrightarrow{(f_x,f_y,f_z)} S. \end{equation} In addition, a reduced curve $C:f=0$ is nearly free if and only if \begin{equation} \label{r2.5} \tau(C)=(d-1)^2-r(d-r-1)-1, \end{equation} where $r=mdr(f)$, see \cite{Dmax}, and for such curves one has $d_1+d_2=d$. Both $\nu(C)$ and $\sigma(C)$ are determined by the Hilbert function $k \mapsto n(f)_k$ of the Jacobian module $N(f)$, and for a nearly free curve $C:f=0$, the invariant $\sigma(C)$ determines the Hilbert function of $N(f)$. Note that one has \begin{equation} \label{in4} \sigma(C)=d+d_1-3, \end{equation} for a nearly free curve by \cite[Corollary 2.17]{DStRIMS}. Our interest in the free and nearly free curves comes from the following. \begin{conj} \label{c1} A reduced plane curve $C:f=0$ which is rational cuspidal is either free, or nearly free. \end{conj} This conjecture is known to hold when the degree of $C$ is even, as well as in many other cases, in particular for all odd degrees $d \leq 33$, see \cite{ Drcc, DStRIMS,DStMos}. In this note we investigate first the minimal resolution of the graded $S$-module $S/I_f$ for a nearly free curve $C:f=0$.
The result can be stated as follows; for a proof see Theorems \ref{thm1} and \ref{thm2} below. \begin{thm} \label{thmA} Suppose $C:f=0$ is a nearly free curve of degree $d\geq 3$ with exponents $(d_1,d_2)$, and set $s=\sigma(C) -(d-2)$. Then the following two cases are possible. \begin{enumerate} \item $s=0$ and the minimal resolution of the graded $S$-module $S/I_f$ has the form $$0 \to S(-T-1+\sigma(C)) \to S(1-d) \oplus S(-\sigma(C)) \to S,$$ or \item $1 \leq s \leq \lfloor d/2 \rfloor -1$ and the minimal resolution of the graded $S$-module $S/I_f$ has the form $$0 \to S(-\sigma(C)-1)^2\oplus S(-T-1+\sigma(C)) \to S(1-d)^3 \oplus S(-\sigma(C)) \to S.$$ \end{enumerate} \end{thm} Note that the formula \eqref{in4} implies $s=d_1-1$. When $s\geq 2$, the claims of this Theorem can be obtained as a special case of a general result by Hassanzadeh and Simis, namely \cite[Proposition 1.3]{HS}. This is clearly explained in the fourth section below. In particular, we note in Remark \ref{rkHS} that the assumption in two key results by Hassanzadeh and Simis, namely \cite[Proposition 1.3]{HS} and \cite[Theorem 1.5]{HS}, can be slightly weakened. With this improvement, we also obtain the case $s=1$ in Theorem \ref{thmA} as a consequence of \cite[Proposition 1.3]{HS}. More significantly, this new weaker assumption is verified by all the arrangements of $d \geq 4$ lines in $\mathbb{P}^2$, see Remark \ref{rkHS2}. In this way, the modified version of \cite[Theorem 1.5]{HS} yields the following key property of line arrangements. \begin{cor} \label{corAR} Let $C:f=0$ be an arrangement of $d \geq 3$ lines in $\mathbb{P}^2$. Then the graded $S$-module of Jacobian syzygies $$AR(f)=\{(a,b,c) \in S^3 \ : \ af_x+bf_y+cf_z=0\}$$ is generated by at most $d-1$ elements. \end{cor} This property was stated by Schenck in a discussion at the end of section 3 of his paper \cite{Sch0}, where it is attributed to Jiang and Feng, by referring to \cite{JF}, subsection (4.2).
We describe briefly the results of the paper \cite{JF} below in Remark \ref{rkJF}, and explain that Jiang and Feng never stated or proved a result similar to our Corollary \ref{corAR} above. On the other hand, Theorem \ref{thmA} implies the following. \begin{cor} \label{corA} For a nearly free curve $C:f=0$ of degree $d$, the exponents $d_1$ and $d_2$, as well as the numerical data of the minimal resolution of the graded $S$-module $S/I_f$ are determined by the total Tjurina number $\tau(C)$. \end{cor} Indeed, the formula \eqref{r2.5} shows that $d_1=mdr(f)$ is determined by $\tau(C)$, $d_2$ is just $d-d_1$, and the rest of the claim follows from Theorem \ref{thmA}. Note that in the case of a line arrangement ${\mathcal A}$, the total Tjurina number $\tau({\mathcal A})$ is determined by the combinatorics. Example \ref{ex3} shows that the numerical data of the minimal resolution of the graded $S$-module $S/I_f$ are not determined by the combinatorics for general line arrangements, though this seems to be the case for the related invariant $\nu({\mathcal A})$, see \cite[Conjecture 1.3]{Drcc} as well as the equivalent formulation in \cite[Question 7.12]{CHMN}. Some applications to rational cuspidal curves are given in the final section. \medskip We would like to thank the referee for the careful reading of our manuscript, and for suggesting a simpler proof for Theorem \ref{thm1} as well as the upper bound discussed in Remark \ref{rkREF}. \section{First properties} The first general property we need is the following. \begin{lem} \label{lem1} Let $I$ be a homogeneous ideal in $S$ of codimension 2. Then the projective dimension $\pd S/I$ of the graded $S$-module $S/I$ is either 2 or 3. More precisely, $\pd S/I=2$ if and only if the ideal $I$ is saturated. \end{lem} \proof This result is a direct consequence of the Hilbert Syzygy Theorem, see \cite[Corollary 19.7]{Eis0}, and of the Auslander-Buchsbaum formula, see \cite[Theorem 19.9]{Eis0}.
The reader needing more detail can see Lemma 4.2 and Lemma 4.3 in \cite{Lin}. \endproof It follows that the quotient $S/I_f$, for any reduced plane curve $C:f=0$, admits a minimal resolution of the following type \begin{equation} \label{r3} 0 \to \oplus_{j=1}^tS(-b_j) \to \oplus_{i=1}^{t+1}S(-a_i) \to S. \end{equation} We call the positive integers $t,a_i,b_j$ the {\it numerical data of the resolution} \eqref{r3}. Moreover, recall that $\tau(C)= \deg Proj (S/J_f)$, see for instance \cite{CD}. One also has $ Proj (S/J_f)= Proj (S/I_f)$, since the two graded algebras have the same Hilbert polynomial, which is the constant $\tau(C)$. In addition, the Castelnuovo-Mumford regularity $\reg S/I_f$ of the module $S/I_f$ is given by \begin{equation} \label{r3.5} \reg S/I_f= \max \{a_i-1, b_j-2\}. \end{equation} We also need the following. \begin{lem} \label{lem2} For any reduced curve $C:f=0$, the numerical data of the minimal resolution \eqref{r3} can be chosen to satisfy the following relations. \begin{enumerate} \item $b_i \geq a_i+1$ for $i=1,...,t$; \item $a_1 \geq a_2 \geq ... \geq a_{t+1}$ and $b_1 \geq b_2 \geq ...\geq b_t$; \item $\sum_{i=1}^{t+1}a_i=\sum_{j=1}^{t}b_j$; \item $\sum_{j=1}^{t}b_j^2-\sum_{i=1}^{t+1}a_i^2=2\tau(C)$. \end{enumerate} \end{lem} When $I_f$ is replaced by the homogeneous ideal of a finite set of points in $\mathbb{P}^2$, this result is stated in \cite[Proposition 3.8 and Exercise 3D15]{Eis}. The general case is discussed in \cite[Lemma 4.4]{Lin}. \begin{prop} \label{prop1} If $C:f=0$ is a nearly free curve of degree $d$, then the following hold. \begin{enumerate} \item $\sigma(C) \geq a_{t+1}$ and $a_{t+1} \in \{ d-2,d-1\}$; \item For a generic linear form $\ell \in S_1$, the multiplication by $\ell$ induces an isomorphism $N(f)_s \to N(f)_{s+1}$ for $s=\sigma(C),..., T-\sigma(C)-1$. \end{enumerate} \end{prop} \proof Note that $a=a_{t+1} < d-2$ would imply $n(f)_{a+1} \geq 3$, a contradiction.
Similarly, the presence of the partial derivatives $f_x,f_y,f_z$ in $I_f$ forces $a=a_{t+1} < d$. For the second claim, note that $\ell:N(f)_s \to N(f)_{s+1}$ is injective for $s <T/2$ and surjective for $s \geq [T/2]$ by \cite[Cor. 4.3]{DPop}. Since $n(f)_s=1$ for $s\in [\sigma(C),T-\sigma(C)]$, the second claim follows. \endproof \begin{ex} \label{ex0} When $d=1$, the curve $C$ is a line, and hence it is free, with $J_f=I_f=S$. Hence $S/I_f=0$ in this case. When $d=2$, there are two cases. If the curve $C$ is a smooth conic, then $C$ is nearly free with exponents $d_1=d_2=1$, $I_f=S$ and hence $S/I_f=0$. If the curve $C$ consists of two distinct lines, say $f=xy$, then again $C$ is free, with exponents $(0,1)$, $J_f=I_f=(x,y)$, and $S/I_f$ has the following minimal resolution $$0 \to S(-2) \to S^2(-1) \to S.$$ The same minimal resolution occurs for any curve $C$ having only one node, e.g. for the curve $$C:xyz^{d-2}+x^d+y^d=0,$$ with arbitrary $d \geq 2$, but these curves are neither free, nor nearly free for $d \geq 3$. Indeed, for any nodal curve $C:f=0$, the saturated ideal $I_f$ coincides with the radical ideal $\sqrt J_f$. If $C$ has a unique node, say at $p$, we can choose the coordinates on $\mathbb{P}^2$ such that $p=(0:0:1)$, and then $I_f=(x,y)$. \end{ex} \begin{ex} \label{ex1} Consider the nearly free curve $$C: f=y^d+x^kz^{d-k}=0,$$ where the integer $k$ satisfies $1 \leq k <d$ and $d \geq 3$. The exponents are $d_1=1$ and $d_2=d-1$, and $\tau(C)=(d-1)(d-2)$, see \cite[Prop. 2.12]{DStRIMS}. The generators of $J_f$ are the partial derivatives $f_x=kx^{k-1}z^{d-k}$, $f_y=dy^{d-1}$ and $f_z=(d-k)x^{k}z^{d-k-1}$. It is clear that $g_1=x^{k-1}z^{d-k-1}$ is in $I_f$. Indeed, one clearly has $xg_1 \in J_f$, $y^{d-1}g_1 \in J_f$ and $zg_1 \in J_f$, which imply that $g_1 \in I_f$. It is also clear that $I_f$ is spanned by $g_1$ and $g_2=y^{d-1}$. In fact, we know from \cite[Prop. 2.12]{DStRIMS} that $n(f)_j=1$ in this case exactly for $d-2 \leq j \leq 2d-4$. 
Note that the class of the monomial $y^mx^{k-1}z^{d-k-1}$ is non-zero in the 1-dimensional vector space $N(f)_{m+d-2}$ for $m=0,1,...,d-2$. In other words, $y$ is a generic linear form in this case for which the Lefschetz type property discussed above holds. It follows that the minimal resolution \eqref{r3} has the form $$0 \to S(3-2d) \to S(1-d) \oplus S(2-d) \to S.$$ Hence $t=1$, $b_1=2d-3$, $a_1=d-1$ and $a_2=d-2$, and they satisfy all the relations in Lemma \ref{lem2}. \end{ex} \begin{ex} \label{ex2} The cardinality $t+1$ of a minimal set of generators for $I_f$ can be quite large when $C:f=0$ is neither free nor nearly free. An example of a cuspidal curve of degree $d=12$, with 38 cusps $A_2$ and minimal resolution for $S/I_f$ given by $$ 0 \to S(-14)^3\oplus S(-13)\oplus S(-12)^2 \to S(-12)^2\oplus S(-11)^5 \to S$$ is given in \cite[Section (6.21)]{Lin}. Similarly, for the Chebyshev curves considered in \cite{Camb2012}, we get, via a direct computation using the Singular or CoCoA software, the following minimal resolution for $S/I_f$, in the case $d=15$: $$0 \to S(-15)^7 \to S(-13)^7\oplus S(-14) \to S.$$ \end{ex} \begin{ex} \label{ex3} Consider the following two line arrangements in $\mathbb{P}^2$ $${\mathcal A}: f=xy(x-y-z)(x-y+z)(2x+y-2z)(x+3y-3z)(3x+2y+3z)$$ $$(x+5y+5z)(7x-4y-z)=0$$ and $${\mathcal A}':f'=xy(x+y-z)(5x+2y-10z)(3x+2y-6z)(x-3y+15z)$$ $$(2x-y+10z)(6x+5y+30z)(3x-4y-24z)=0.$$ They have isomorphic intersection lattices and have been constructed by Ziegler in \cite{Zi}. A picture of these arrangements can be found in \cite[Chapter 8]{DHA}. See also \cite[Example 13]{Schenck} for a discussion of this pair of line arrangements, as well as \cite{GV}, where an affine version of these line arrangements is considered from a new point of view. Then $d= \deg f= \deg f'=9$, and both arrangements have $n_2= 18$ double points and $n_3=6$ triple points.
In the case of ${\mathcal A}$, the six triple points are on a conic, and a direct computation shows that $$0 \to S(-15) \oplus S(-16) \to S(-13)\oplus S(-14)^3 \to S(-8)^3 \to S$$ is a minimal resolution for $S/J_f$, while $$0 \to S(-10)^3\oplus S(-11) \to S(-8)^4 \oplus S(-9) \to S$$ is a minimal resolution for $S/I_f$. For ${\mathcal A}'$, the six triple points are not on a conic, i.e. the arrangement ${\mathcal A}'$ is a small deformation of the arrangement ${\mathcal A}$, and a direct computation shows that $$0 \to S(-15)^4 \to S(-14)^6 \to S(-8)^3 \to S$$ is a minimal resolution for $S/J_{f'}$, while $$0 \to S(-10)^6 \to S(-8)^3 \oplus S(-9)^4 \to S$$ is a minimal resolution for $S/I_{f'}$. It follows that the numerical data describing the minimal resolutions of both $S/J_f$ and $S/I_f$ in the case of line arrangements are not determined by the intersection lattice. \end{ex} \section{Main results} \begin{thm} \label{thm1} If $C:f=0$ is a nearly free curve of degree $d\geq 3$, and $a_{t+1} = d-2$, then the following hold. \begin{enumerate} \item $\sigma(C) =d-2$. \item The minimal resolution of the graded $S$-module $S/I_f$ has the form $$0 \to S(3-2d) \to S(1-d) \oplus S(2-d) \to S.$$ \item The first exponent $d_1$ of $C$ satisfies $d_1=1$. \end{enumerate} \end{thm} \proof The claim (1) is obvious, since $I_{f,d-2} \ne J_{f,d-2} =0$. Indeed, $a_{t+1}$ is the minimal degree of a generator for the ideal $I_f$. It follows that $n(f)_{d-2}\ne 0$, and since $C$ is nearly free, the only possibility is $n(f)_{d-2}=1$. To prove the claim (2), let $g_1$ be a generator of $I_f$ of degree $d-2$. We have $n(f)_{d-1}=1$, since $\en (N(f))=2(d-2)\geq d-1$, then $ \dim J_{f,d-1} =3$ by the assumption \eqref{mdr}, and therefore $ \dim I_{f,d-1} =4$. Note that $g_1$ generates the graded $S$-module $N(f)$, either using Proposition \ref{prop1} (2), or using the results of Hassanzadeh and Simis in \cite[Proposition 1.3]{HS}, which are recalled in Theorem \ref{thmHS} below.
Inside the 4-dimensional vector space $I_{f,d-1}$, we have two 3-dimensional vector spaces, namely $E_1$, spanned by $xg_1$, $yg_1$ and $zg_1$, and $E_2=J_{f,d-1}$. Since $g_1$ is a generator for $N(f)$, we get $E_1+E_2=I_{f,d-1}$, and hence $\dim (E_1 \cap E_2)=2$. In particular, at least one of the partial derivatives of $f$, say $f_z$, is not in $E_1 \cap E_2$. Then the vector space $E_2$ has a basis of the form $\ell g_1, \ell' g_1, f_z$, with $\ell, \ell'$ linear forms in $S_1$. It follows that the elements $\ell g_1, \ell' g_1, f_z$ generate the Jacobian ideal $J_f$, and hence $g_1,f_z$ generate the ideal $I_f$. Since $I_f$ has codimension 2, it follows that $g_1,f_z$ is a regular sequence, and the resolution given in (2) is just the Koszul complex of this regular sequence. To prove the last claim (3), it is enough to use the formula \eqref{in4}. \endproof \begin{thm} \label{thm2} Suppose $C:f=0$ is a nearly free curve of degree $d\geq 3$ with $a_{t+1} = d-1$, and set $s=\sigma(C) -(d-2)$. Then the following hold. \begin{enumerate} \item $$1 \leq s \leq \frac{d}{2}- 1.$$ \item The minimal resolution of the graded $S$-module $S/I_f$ has the form $$0 \to S(-s-d+1)^2\oplus S(-2d+3+s) \to S(1-d)^3 \oplus S(-s-d+2) \to S.$$ \item The first exponent $d_1$ of $C$ satisfies $d_1=s+1\geq 2$. \end{enumerate} \end{thm} \proof The first claim follows from $\sigma(C) \leq T/2$. Since now $a_{t+1} = d-1$, we need at least 3 generators for the ideal $I_f$ having degree $d-1$. Since the partial derivatives $f_x,f_y,f_z$ are linearly independent by our assumption $mdr(f)>0$, these 3 generators can be taken to be $g_1=f_x$, $g_2=f_y$ and $g_3=f_z$. The next generator to be added, say $g_4$, occurs exactly in degree $\sigma(C)=s+d-2 \geq d-1$. Proposition \ref{prop1} (2) implies that we need no other generators, hence we get the morphism $$S(1-d)^3 \oplus S(-s-d+2) \xrightarrow{(g_1,g_2,g_3,g_4)} S$$ which occurs in the minimal resolution.
Hence, in the notation from the formula \eqref{r3}, we have $t=3$ syzygies generating all the relations among $g_1,g_2,g_3$ and $g_4$. These syzygies are the following. First, if we set $m=\sigma(C)+1$, then $n(f)_m=1$ implies that there are two linearly independent linear forms $\ell_1, \ell_2 \in S_1$ such that $$\ell_1g_4 \in J_{f,m}=(g_1,g_2,g_3)_m \text{ and }\ell_2g_4 \in J_{f,m}=(g_1,g_2,g_3)_m.$$ Finally, for a generic linear form $\ell \in S_1$, we have $$\ell^{d-2s-1}g_4 \in J_{f,k}=(g_1,g_2,g_3)_k,$$ where $k=T-\sigma(C)+1=2d-3-s$. It is clear that the 3 relations among $g_1,g_2,g_3$ and $g_4$ generated in this way are independent, and this proves the claim (2). As a check, note that $$2(s+d-1)+(2d-3-s)=3(d-1)+(s+d-2),$$ i.e. Lemma \ref{lem2} (3) holds. To prove the last claim (3), it is enough to use again the formula \eqref{in4}. \endproof \section{The relation with general results by Hassanzadeh and Simis} Hassanzadeh and Simis have considered in \cite{HS} the general situation where the Jacobian ideal $J_f$ is replaced by an arbitrary ideal $I \subset S$ of codimension 2, generated by 3 linearly independent forms of the same degree. In their paper \cite{HS}, this common degree is denoted by $d$, but in order to compare their results more easily with our special case $I=J_f$, we will restate some of their main results taking the common degree to be $d-1$. With this change of notation, the result \cite[Proposition 1.3]{HS} for the base field $k=\mathbb{C}$ takes the following form. \begin{thm} \label{thmHS} Let $I\subset S$ be an ideal of codimension 2 generated by 3 linearly independent forms of degree $d-1 \geq 1$, with a minimal graded free resolution $$0 \to \oplus_{i=1}^{r-2} S(-{\beta}_i)\to \oplus_{i=1}^{r} S(-{\alpha}_i)\to S^3(1-d)\to S,$$ for the $S$-module $S/I$, where $r \geq 3$. Let $I^{sat}$ denote the saturation of the ideal $I$ with respect to the maximal ideal ${\bf m}=(x,y,z)$ in $S$. Then the following hold.
\medskip \noindent (i) The minimal free resolution of $N(I)=I^{sat}/I$ as a graded $S$-module has the form $$0 \to \oplus_{i=1}^{r-2} S(-{\beta}_i)\to \oplus_{i=1}^{r} S(-{\alpha}_i)\to \oplus_{i=1}^{r} S({\alpha}_i+3-3d) \to \oplus_{i=1}^{r-2} S({\beta}_i+3-3d),$$ where the leftmost map is the same as in the above resolution. \medskip \noindent (ii) If in addition $N(I)_k=0$ for $k\leq d-1$, then the resolution of $S/I^{sat}$ is given by $$0 \to \oplus_{i=1}^{r}S({\alpha}_i+3-3d) \to S^3(1-d)\oplus \left( \oplus_{i=1}^{r-2} S({\beta}_i+3-3d) \right)\to S.$$ \end{thm} \begin{rk} \label{rkHS} In fact, the proof of the claim (ii) above given in \cite{HS} works with the weaker assumption $N(I)_k=0$ for $k\leq d-2$. Indeed, this proof needs a lift of the inclusion $I \to I^{sat}$ to a map of the corresponding free resolutions. In order to do this, the key point is that the generators of $I$, call them $f_1,f_2,f_3$, are in $I_{d-1}\subset I_{d-1}^{sat}$. We can find a vector space basis of $I_{d-1}^{sat}$ starting with $f_1,f_2,f_3$, and the elements of this basis are part of a minimal system of generators for $I^{sat}$ since $I_{<d-1}^{sat}=0$ by our assumption. Using such a minimal system of generators for $I^{sat}$, which contains the generators $f_1,f_2,f_3$, gives us the required lifting of the inclusion $I \to I^{sat}$. Using this extension of \cite[Proposition 1.3]{HS}, we see that the assumption $N(I)_k=0$ for $k\leq d-1$ in \cite[Theorem 1.5]{HS}, reformulated with our convention that the generators of $I$ have degree $d-1$, can be replaced by the weaker assumption $N(I)_k=0$ for $k\leq d-2$. Then the second claim in \cite[Theorem 1.5]{HS} becomes \begin{equation} \label{reg} \reg (S/I)=3(d-1)-3-\indeg(N(I)) \leq 3d-6-(d-1)=2d-5.
\end{equation} Similarly, the third claim in \cite[Theorem 1.5]{HS} becomes \begin{equation} \label{reg2} r \leq d-1, \end{equation} where $r$ is the minimal number of generators of the kernel of the obvious mapping $$S^3(1-d) \to S, \ \ (a,b,c) \mapsto af_1+bf_2+cf_3.$$ \end{rk} In the case $I=J_f$, the condition $N(I)_k=0$ for $k\leq d-2$ becomes $\sigma(C) \geq d-1$, in other words, for a nearly free curve $C:f=0$ as in the previous sections, $d_1 \geq 2$ or equivalently $s\geq 1$. This leads to the following immediate consequence of Theorem \ref{thmHS} and of our Remark \ref{rkHS} in the case $s\geq 1$, just by taking $r=3$, ${\beta}_1=d+d_2$, ${\alpha}_1=d+d_1-1$ and ${\alpha}_2={\alpha}_3=d+d_2-1$ as in the formula \eqref{r2}. \begin{cor} \label{corHS} Suppose $C:f=0$ is a nearly free curve of degree $d\geq 3$ with exponents $(d_1,d_2)$, and set $s=\sigma(C) -(d-2)=d_1-1$. For $s\geq 1$, the minimal resolution of the graded $S$-module $S/I_f$ has the form $$0 \to S(-\sigma(C)-1)^2\oplus S(-T-1+\sigma(C)) \to S(1-d)^3 \oplus S(-\sigma(C)) \to S.$$ \end{cor} In other words, the claim of our Theorem \ref{thmA} for $s \geq 1$ is an easy consequence of the result by Hassanzadeh and Simis in \cite[Proposition 1.3]{HS}. \begin{rk} \label{rkREF} The case $s=0$ can also be derived from \cite[Proposition 1.3]{HS}, following the suggestion of the referee, which we describe below. In fact, this approach gives an {\it upper bound for the number of generators of the ideal $I_f$, for any reduced plane curve $C:f=0$}. For any graded $S$-module $M$ of finite type, let us denote by $\mu(M)$ the minimal number of generators of $M$. Consider the $\mathbb{C}$-vector space $V_f=I_f/({\bf m}I_f)$ and let $E_2' \subset V_f$ be the image in $V_f$ of the vector subspace $E_2=J_{f,d-1} \subset I_f$, considered in the proof of Theorem \ref{thm1}.
With this notation, one can see, essentially as in the proof of Theorem \ref{thm1}, that the following holds $$2=\codim (I_f) \leq \mu(I_f)\leq \dim E_2'+\mu(N(f)) = \dim E_2'+\mu(AR(f))-2,$$ where the last equality follows from \cite[Proposition 1.3]{HS}. If $C:f=0$ is a nearly free curve, this implies $\mu(AR(f))=3$, while the case $s=0$ discussed in Theorem \ref{thm1} implies $\dim E_2'=1$, as shown in the proof of Theorem \ref{thm1}. Therefore, in this special case, we get $\mu(I_f)=2$ from the above inequalities. \end{rk} \section{Applications} The formula \eqref{r3.5} and Theorem \ref{thmA} imply the following. \begin{cor} \label{corB} For a nearly free curve $C:f=0$ of degree $d\geq 3$ with exponents $d_1$ and $d_2$, the Castelnuovo-Mumford regularity of the module $S/I_f$ is given by $$\reg S/I_f= 2d-4-d_1.$$ \end{cor} \begin{rk} \label{rkB} It follows from \cite[Theorem 3.4]{DIM} that the Castelnuovo-Mumford regularity of the module $M(f) =S/J_f$ is given by $$\reg M(f)= 2d-3-d_1.$$ In the proof of \cite[Theorem 3.4]{DIM} it is also shown that $\reg S/I_f=T-ct(f)$ for any reduced plane curve, where $$ct(f)=\max \{q:\dim M(f)_k=\dim M(f_s)_k \text{ for all } k \leq q\},$$ with $f_s$ a homogeneous polynomial in $S$ of the same degree $d$ as $f$ and such that $C_s:f_s=0$ is a smooth curve in $\mathbb{P}^2$. This gives another proof of Corollary \ref{corB}, since it is known that for a nearly free curve one has $ct(f)=d+d_1-2$, see \cite{DStRIMS}. \end{rk} \begin{rk} \label{rkHS2} The assumption $N(I)_k=0$ for $k\leq d-2$ which occurs in Remark \ref{rkHS} above, in the special case $I=J_f$, seems to be satisfied for a very large class of reduced curves $C:f=0$. In view of the inequalities \eqref{in}, it is enough to check $n(f)_{d-2}=0$.
We next use the formula $$n(f)_k=\dim M(f)_k+\dim M(f)_{T-k}-\dim M(f_s)_k-\tau(C),$$ see \cite[(2.8)]{DStRIMS}, and conclude that $n(f)_{d-2}=0$ if and only if $$\dim M(f)_{2d-4}=\tau(C).$$ Note that for any arrangement $C:f=0$ of $d \geq 4$ lines in $\mathbb{P}^2$, it is known that this condition holds, see \cite[Corollary 3.6]{DIM}. Moreover, for such line arrangements it is known that $\reg M(f) \leq 2d-5$, see \cite[Corollary 3.5]{Sch0} as well as \cite[Corollary 3.6]{DIM}. This confirms the inequality in \eqref{reg} in the case of line arrangements. On the other hand, there are singular curves for which $\dim M(f)_{2d-4}>\tau(C)$, e.g. any curve of degree $d \geq 3$ having only one singularity, which is a simple node $A_1$. \end{rk} The above discussion and \cite[Theorem 1.5]{HS} (iii) as restated in the inequality \eqref{reg2} imply the claim in Corollary \ref{corAR} from the Introduction. \begin{rk} \label{rkJF} If we associate to a triple $\rho=(a,b,c)\in S^3_k$ of homogeneous polynomials in $S$ the $\mathbb{C}$-derivation $$\delta(\rho)=a\partial_x+b\partial_y +c \partial_z$$ of the $\mathbb{C}$-algebra $S$, then the graded $S$-module $AR(f)$ from Corollary \ref{corAR} corresponds to the graded $S$-module $D_0(C)$ of derivations $\theta \in Der(S)$ such that $\theta(f)=0$. Let $\theta_0=x\partial_x+y\partial_y +z\partial_z\in Der(S)$ be the Euler derivation, which has degree 1. Then Jiang and Feng in \cite{JF}, section 1, define inductively a non-decreasing sequence of positive numbers $\deg \theta_i $ for $i \geq 1$, by setting $$\deg \theta_i =\min \{\deg \theta \ : \ \{\theta_0, ...,\theta_{i-1}, \theta\} \text{ are } S-\text{linearly independent }\},$$ where $\theta_{j} \in D_0(C)$ for $j \geq 1$ and $\theta \in D_0(C)$. They remark that the maximal sequence obtained in this way has length 3; in fact they work in a polynomial ring of $n+1$ indeterminates $x_0,x_1,...,x_n$ and hence the length in general is $n+1$.
Note that even if $\{\theta_0, \theta_{1}, \theta_2\}$ is such a maximal chain in our case $n=2$, this does not imply that $\theta_{1}, \theta_2$ generate $D_0(C)$, unless $C$ is a free curve. In sections 2 and 3 of \cite{JF}, the authors explain a linear algebra algorithm for computing a vector space basis of $AR(f)_k=D_0(C)_k$, for $k \geq 0$. In section 4 they apply their algorithm to a central hyperplane arrangement in $\mathbb{C}^{n+1}$ given by the equation $$f=x_0x_1\cdots x_n{\alpha}_1\cdots {\alpha}_p=0,$$ where ${\alpha}_j \in S_1$ are linear forms. Note that the degree of $f$ is $d=p+n+1$, but the number of variables involved is $n+1$. According to section 1 in \cite{JF}, the maximal sequence for $f$ should be of the form $$(\theta_0, \theta_{1}, ...,\theta_n)$$ i.e. it has length $n+1$. However, by a misprint, the authors claim at the end of subsection (4.2) that this length should be $d=p+n+1$. It is perhaps this error that explains Schenck's quotation at the end of section 3 of \cite{Sch0}. What is crystal clear is that there is no claim on the minimal number of generators of the graded $S$-module $AR(f)=D_0(f)$ in \cite{JF}. \end{rk} Finally we discuss some relations of the results in this note to rational cuspidal plane curves. \begin{cor} \label{corC} Let $C:f=0$ be an irreducible curve of degree $d$ such that $mdr(f)=1$. Then $C$ is a rational cuspidal curve, having only weighted homogeneous singularities. Moreover $C$ is nearly free and the minimal resolution for $S/I_f$ is of the form $$0 \to S(3-2d) \to S(1-d) \oplus S(2-d) \to S.$$ \end{cor} \proof The first claim follows from the proof of \cite[Proposition 4.1]{Drcc}. Indeed, the fact that (2) implies (3) in that proof does not use the assumption $d \geq 6$. The same proof shows that $\tau(C)=(d-1)^2-(d-2)-1$, which implies that $C$ is nearly free using \eqref{r2.5}. The claim about the minimal resolution then follows from Theorem \ref{thmA}, see also Theorem \ref{thm1}.
\endproof \begin{rk} \label{rk1} Let $C:f=0$ be a rational cuspidal curve, having only weighted homogeneous singularities, and assume that $C$ has degree $d \geq 6$. Then it is shown in \cite{Drcc} that, up to a linear change of coordinates, such a curve is a special case of the curves $C$ considered in Example \ref{ex1}. In particular, $mdr(f)=1$ and Conjecture \ref{c1} holds in such cases. \end{rk} \begin{cor} \label{corA1} Let $C:f=0$ be a rational cuspidal curve for which Conjecture \ref{c1} holds. Then the ideal $I_f$ is generated by at most 4 polynomials. More precisely, $I_f$ is generated by 2 polynomials if $mdr(f)=1$, by 3 polynomials if $C$ is free, and by 4 polynomials in the other cases. \end{cor} The following remark is tantalizing: the rational cuspidal curves with $mdr(f)=1$ have at most 2 cusps, and it is conjectured that the maximal number of cusps of any rational cuspidal curve is at most 4, see \cite{Pion} for a discussion. Note however that the only known rational cuspidal curve with 4 cusps is a quintic free curve $C:f=0$, hence the corresponding ideal $I_f=J_f$ is spanned by 3 elements, see \cite[Example 4.4 (ii)]{Drcc} for details. The relation between the number of cusps of a rational cuspidal curve $C:f=0$ and the number of generators of the corresponding ideal $I_f$, if it exists, seems to be rather subtle.
\section{Introduction} The Brout-Englert-Higgs mechanism \cite{Higgs} is one of the cornerstones of the Standard Model of particle physics. Part of the Higgs mechanism involves a scalar field developing a non-zero vacuum expectation value rather than having a vacuum expectation value of zero. An example of this non-zero vacuum expectation value comes from $\Lambda \Phi ^4$ theory with a complex scalar field whose Lagrangian density is \begin{equation} \label{higgs} {\cal L} = \partial_\mu \Phi ^* \partial ^\mu \Phi - \frac{1}{2} m^2 | \Phi | ^2 - \frac{1}{4} \Lambda | \Phi |^4 ~. \end{equation} The equation of motion from \eqref{higgs} is \begin{equation} \label{higgs-em} \partial _\mu \partial ^\mu \Phi - m^2 \Phi - \Lambda \Phi^3 = 0 ~. \end{equation} If we look for solutions, $\Phi$, which are space and time independent ({\it i.e.} $\partial _\mu \Phi =0$) {\it and} if $m^2 >0$ then the only solution is $\Phi =0$. However for a tachyonic mass term ({\it i.e.} $m^2 <0$) \eqref{higgs-em} has a non-zero, constant solution $\Phi_0 = \langle 0 |\sqrt{\Phi ^* \Phi} | 0 \rangle = \sqrt{\frac{-m^2}{\Lambda}}$. The vacuum solution is now given by $\Phi = \sqrt{\frac{-m^2}{\Lambda}} e^{i \theta}$ with magnitude $\sqrt{\frac{-m^2}{\Lambda}}$ and a phase $e^{i \theta}$ ($0 \le \theta \le 2 \pi$). Due to the phase of $e^{i \theta}$ there are an infinite number of equivalent vacua labeled by $\theta$. Usually one takes the arbitrary choice of $\theta =0$ as the vacuum for $\Phi$. This non-zero vacuum expectation value of the scalar field is responsible for giving masses to the $W^\pm$ and $Z^0$ gauge bosons of the $SU(2) \times U(1)$ Standard Model, while leaving the photon massless. Aside from the Standard Model, the Higgs mechanism has found application in the theory of superconductors via the Ginzburg-Landau model \cite{GL}. 
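Though not part of the original derivation, the location of this non-trivial vacuum can be illustrated by numerically minimizing the potential $V(|\Phi|) = \frac{1}{2} m^2 |\Phi|^2 + \frac{1}{4} \Lambda |\Phi|^4$ from \eqref{higgs}; the sample values $m^2=-1$ and $\Lambda = 0.5$ below are arbitrary.

```python
import math

# Illustrative check (arbitrary sample values): for a tachyonic mass term
# m^2 < 0, the potential V(r) = (1/2) m^2 r^2 + (1/4) Lam r^4, with r = |Phi|,
# is minimized at r = sqrt(-m^2/Lam) rather than at r = 0.
m2, Lam = -1.0, 0.5

def V(r):
    return 0.5 * m2 * r**2 + 0.25 * Lam * r**4

r0 = math.sqrt(-m2 / Lam)                # claimed minimum, here sqrt(2)
rs = [i * 1e-3 for i in range(1, 5001)]  # scan r in (0, 5]
r_min = min(rs, key=V)
print(r0, r_min)  # both close to 1.414
```

Repeating the scan with $m^2>0$ drives the minimum back to $|\Phi|=0$, which is the qualitative content of the symmetry breaking described above.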
In the Ginzburg-Landau model the source of the non-zero order parameter/scalar field vacuum expectation value is due to the interaction between the electrons and the phonons of the background lattice. Another set of phenomena where a non-trivial vacuum is important are the Casimir effect \cite{casimir} and dynamical Casimir effect \cite{d-casimir}. In the canonical Casimir effect there are two, neutral, conducting plates which are placed a fixed distance apart. This restricts the type of quantum fluctuations that can occur between the plates as compared to outside the plates leading to an attractive force between the plates. In the dynamical Casimir effect the plates are moved with respect to one another and this results in the creation of photons out of the vacuum -- a result which has been observed relatively recently \cite{d-casimir1}. Below we will show that a massless scalar field placed in a gravitational wave background leads to the scalar field developing a non-zero vacuum expectation value. We make a comparison of this gravitationally induced effect with the scalar field vacuum expectation value of spontaneous symmetry breaking as found in the Higgs mechanism and the Ginzburg-Landau model. The comparison to the Ginzburg-Landau model is especially relevant since there the symmetry breaking is driven by the interactions induced by the phonons from the background lattice, whereas in the usual Higgs mechanism the symmetry breaking comes from the scalar field's self interaction. As in the Ginzburg-Landau model, here the scalar field's vacuum value is driven by interactions with the gravitational wave background. We also make a comparison between the present gravitationally induced vacuum expectation value and the dynamical Casimir. In the dynamical Casimir effect and the present case there is the possibility of producing {\it massless} particles from the vacuum. 
There are earlier works \cite{gibbons} \cite{garriga} which show that a plane gravitational wave background will not produce particles from the vacuum. We show how this is avoided exactly for the case of {\it massless} (scalar) fields. Finally, we connect the results of the present paper with other recent works that propose there is a shift of the pre-existing Higgs vacuum expectation value of the Standard Model either via stationary gravitational fields \cite{onifrio1, onifrio2} or via a gravitational wave background \cite{kurkov}. There is also very recent work \cite{caldwell} which discusses the consequences of the interaction of a gravitational wave background with a time-dependent vacuum expectation value from a (non-Abelian) gauge field. \section{Scalar field in gravitational wave background} \subsection{Approximate gravitational wave background} The equation for a complex scalar field, $\varphi$, in a general gravitational background is \begin{equation} \label{KGvacuum1} \frac{1}{{\sqrt { - det \left[ {g_{\mu \nu } } \right]} }}\left( {\partial _\mu g^{\mu \nu } \sqrt { - det \left[ {g_{\mu \nu } } \right]} \partial _\nu } \right)\varphi = 0. \end{equation} \noindent Following \cite{Jones16} we take the gravitational wave to travel in the positive $z$ direction and have the $+$ polarization. For this situation the metric \cite{Schutz} can be written as, \begin{equation} \label{metric} ds^2 = -dt^2 + dz^2 + f(t-z)^2 dx^2 + g(t-z)^2 dy^2 = du dv + f(u)^2 dx^2 + g(u)^2 dy^2, \end{equation} where in the last step we have switched to light front coordinates $u = z - t$ and $v = z+t$ with metric components $g_{uv} = g_{vu} = \frac{1}{2}$ and $g_{xx} = f(u) ^2$ and $g_{yy} = g(u)^2$. The metric functions $f(u)$ and $g(u)$ will be taken to be oscillatory functions of $u$. The determinant term in \eqref{KGvacuum1} is $\sqrt{-det {\left[ {g_{\mu \nu } } \right]}} = \frac{fg}{2}$. 
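As a small sanity check, not taken from the text, the determinant identity just quoted can be verified numerically for the light-front metric with components $g_{uv}=g_{vu}=\frac{1}{2}$, $g_{xx}=f^2$, $g_{yy}=g^2$; the values $f=1.2$ and $g=0.8$ are arbitrary sample numbers.

```python
import math

# Check that sqrt(-det[g_{mu nu}]) = f*g/2 for the light-front metric
# used in the Klein-Gordon equation; f, g are arbitrary sample values.

def det(M):
    # Laplace expansion along the first row; fine for a 4x4 matrix.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

f, g = 1.2, 0.8
metric = [[0.0, 0.5, 0.0, 0.0],
          [0.5, 0.0, 0.0, 0.0],
          [0.0, 0.0, f**2, 0.0],
          [0.0, 0.0, 0.0, g**2]]
print(math.sqrt(-det(metric)), f * g / 2)  # both approximately 0.48
```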
Substituting the light front version of the metric into equation \eqref{KGvacuum1} gives \begin{equation} \left( {4 f^2 g^2 \partial _u \partial _v + 2 fg {\partial _u (fg)} \partial _v + g^2 \partial _x^2 + f^2 \partial _y^2 } \right)\varphi = 0. \label{KGvacuum4} \end{equation} \noindent We take the metric ansatz functions of the form $f\left( u \right) = 1 + \varepsilon \left( {u} \right)$, and $g\left( u \right) = 1 - \varepsilon \left( {u} \right)$ and substitute these into equation \eqref{KGvacuum4}. This form for $f\left( u \right)$ and $g\left( u \right)$ describes a wave propagating in the $z$ direction so that $x$ and $y$ directions should be physically identical. Thus we require of the solution that $\left( {\partial _y^2 - \partial _x^2 } \right)\varphi=0$. Using this equation \eqref{KGvacuum4} becomes, \begin{widetext} \begin{equation} \left[ {4 \left( {1 - 2\varepsilon ^2 + \varepsilon ^4 } \right)\partial _u \partial _v - 4 \left( {1 - \varepsilon ^2 } \right)\varepsilon \left( {\partial _u \varepsilon } \right)\,\partial _v + (1 + \varepsilon ^2 ) (\partial _x^2 + \partial _y^2 )} \right]\varphi = 0. \label{KGvacuum7} \end{equation} \end{widetext} \noindent Finally we consider a sinusoidal, plane gravitational wave by taking $\varepsilon \left( u \right) = h_ + e^{i Ku}$, where $h_{+}$ is a dimensionless amplitude and $K$ is a wave number. The metric must be real so it is understood that the metric components are obtained by taking the real part of the ansatz functions so that $f(u), g(u) =1 \pm h_+ e^{i Ku} \rightarrow 1 \pm h_+ \cos(K u)$. This real form still satisfies the linearized general relativistic field equations to which $f(u), g(u)$ are solutions. 
Substituting this choice of $\varepsilon (u)$ into equation \eqref{KGvacuum7} gives \begin{equation} \left( {4 F(u) \partial _u \partial _v - 4iKG(u)\,\partial _v + H(u) (\partial _x^2 + \partial _y^2) }\right)\varphi = 0, \label{KGvacuum10} \end{equation} \noindent where $F\left( {u} \right) \equiv \left( {1 - 2h_ + ^2 e^{2iKu} + h_ + ^4 e^{4iKu} } \right)$, $G\left( {u} \right) \equiv \left( {h_ + ^2 e^{2iKu} - h_ + ^4 e^{4iKu} } \right)$, and $H(u) = \left( 1 + h_+^2 e^{2iKu} \right)$. We separate equation \eqref{KGvacuum10} using $\varphi = X\left( x \right)Y\left( y \right) U \left( u \right) V \left( v \right)$. The eigenvalue equations and associated solutions for $X(x)$ and $Y(y)$ are \begin{equation} \partial _x^2 X = - p^2 X \to X(x) = e^{ip x} ~~~~,~~~~ \partial _y^2 Y = - p^2 Y \to Y(y) = e^{ip y}. \label{XYequations} \end{equation} The functions $X(x)$ and $Y(y)$ are simply free waves, as is to be expected since the gravitational wave is traveling in the $u=z-t$ direction, and $p$ is the common momentum in the $x, y$ directions. The common momentum in the $x$ and $y$ directions comes from the assumed symmetry in these transverse directions, and it also realizes the condition $\left( {\partial _y^2 - \partial _x^2 } \right)\varphi=0$ which we took above. Using \eqref{XYequations} we find that \eqref{KGvacuum10} becomes \begin{equation} F(u) \frac{{\partial _u U}}{U}\frac{{\partial _v V}}{V} - iKG(u)\frac{{\partial _v V}}{V} - H(u) \frac{p^2}{2} = 0. \label{ASequation} \end{equation} \noindent Since the light front coordinate $v$ is orthogonal to $u$ and since the gravitational wave only depends on $u$, one expects that the eigenfunction $V(v)$ is also given by a free plane wave, as was the case for $X(x)$ and $Y(y)$. This is the case and we find \begin{equation} - i\partial _v V = p_v V \to V(v) = e^{ip_v v} .
\label{eigenvalueV} \end{equation} \noindent Substituting equation \eqref{eigenvalueV} into equation \eqref{ASequation} and defining $\lambda \equiv \frac{p ^2}{2 p_v}$ yields \begin{equation} i\frac{{\partial _u U(u)}}{U(u)} = \lambda \frac{H(u)}{F(u)} - K\frac{G(u)}{F(u)} ~. \label{eigenvalueU3} \end{equation} \noindent The term $i\frac{{\partial _u U(u)}}{U(u)}$ in \eqref{eigenvalueU3} represents the kinetic energy of the scalar field; the term $\lambda \frac{H(u)}{F(u)}$ represents the interaction of the scalar field, via $\lambda$, with the gravitational background, via $\frac{H(u)}{F(u)}$; the term $K\frac{G(u)}{F(u)}$ represents a pure gravitational potential term. Equation \eqref{eigenvalueU3} can be integrated to give, \begin{equation} U(u) = A e^{\frac{\lambda }{K}} e^{ \frac{- \lambda }{{K\left( {1 - h_ + ^2 e^{2iKu} } \right)}}} \left( {1 - h_ + ^2 e^{2iKu} } \right)^{\frac{1}{2}\left( {\frac{\lambda }{K} - 1} \right)} e^{ - i\lambda u} ~, \label{eigenvalueU4} \end{equation} \noindent where $A e^{\frac{\lambda }{K}}$ is constant. The factor $e^{\frac{\lambda }{K}}$ was chosen to ensure that the eigenfunction for the $u$ direction becomes a free plane wave, $e^{- i\lambda u}$, as $h_+ \to 0$ ({\it i.e.} as the gravitational wave is turned off). Collecting together all the solutions in $x, y, v$ and $u$ directions gives the solution of the scalar field in the gravitational background, \begin{equation} \varphi = A e^{\frac{\lambda }{K}} e^{ - \frac{\lambda }{{K\left( {1 - h_ + ^2 e^{2iKu} } \right)}}} \left( {1 - h_ + ^2 e^{2iKu} } \right)^{\frac{1}{2}\left( {\frac{\lambda }{K} - 1} \right)} e^{ - i\lambda u} e^{ip_v v} e^{ip x} e^{ip y} + B . \label{Sfield} \end{equation} \noindent $A$ is a normalization constant and we have added a constant $B$ which is allowed by the shift symmetry of solutions to \eqref{KGvacuum4}. Below we choose $B= -A$. 
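The solution \eqref{eigenvalueU4} can be spot-checked numerically against the first order equation \eqref{eigenvalueU3}; the parameter values $h_+ = 0.3$, $K = 2$, $\lambda = 1.7$ below are arbitrary sample choices, not values used in the text.

```python
import cmath

# Numerical check that U(u) of the text, written with w = h_+^2 e^{2iKu},
# satisfies i U'(u)/U(u) = lam*H/F - K*G/F, where
# F = (1-w)^2, G = w(1-w), H = 1+w. Parameters are arbitrary samples.
h, K, lam = 0.3, 2.0, 1.7

def U(u):
    w = h**2 * cmath.exp(2j * K * u)
    return (cmath.exp(lam / K)
            * cmath.exp(-lam / (K * (1 - w)))
            * (1 - w) ** (0.5 * (lam / K - 1))
            * cmath.exp(-1j * lam * u))

def rhs(u):
    w = h**2 * cmath.exp(2j * K * u)
    F, G, H = (1 - w) ** 2, w * (1 - w), 1 + w
    return lam * H / F - K * G / F

u, du = 0.5, 1e-5
lhs = 1j * (U(u + du) - U(u - du)) / (2 * du) / U(u)  # central difference for U'/U
err = abs(lhs - rhs(u))
print(err)  # numerically tiny
```

Note that $F = (1-h_+^2 e^{2iKu})^2$, $G = h_+^2 e^{2iKu}(1-h_+^2 e^{2iKu})$ and $H = 1+h_+^2 e^{2iKu}$ reproduce exactly the functions $F, G, H$ defined after \eqref{KGvacuum10}.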
This choice of $B$ ensures that if one turns off the gravitational background ($h_+ \to 0$) and also takes the field momenta to zero ($\lambda, p_v, p \to 0$) then $\varphi \to 0$. This solution for the scalar field given in \eqref{Sfield} is very similar to the form of the solution found in \cite{Padmanabhan99} for the {\it static} electric field evaluated in light front coordinates. Here we have a massless scalar field in a gravitational wave background. In the limit $h_+ \to 0$ \eqref{Sfield} goes to the expected flat space-time solution of a free wave scalar field, $\varphi _0 \propto e^{ - i\lambda u} e^{ip_v v} e^{ip x} e^{ip y} \to \varphi_0 \propto e^{i(p_v + \lambda)t} e^{i(p_v -\lambda) z} e^{ip x} e^{ip y}$. The second version of $\varphi _0$ is the conversion from light front coordinates back to $t, z$ coordinates. Defining an energy $E_0 = p_v + \lambda$ and a momentum in the z-direction $p_z = p_v -\lambda$, one sees that the usual dispersion relationship for a massless field ({\it i.e.} $E_0 ^2 = p_x^2 + p_y^2 + p_z^2 \to 2p^2 + p_z^2$) is recovered if one recalls that $\lambda = \frac{p^2}{2 p_v}$. Next, taking the limit of \eqref{Sfield} when all the transverse momenta of the scalar field go to zero ({\it i.e.} $p_x=p_y=p \to 0$ and $\lambda \rightarrow 0$) one finds \begin{eqnarray} \label{phi-higgs} \varphi (u, v) &=& \mathop {\lim }\limits_{p_v \to 0} A \left( {1 - h_ + ^2 e^{2iKu} } \right)^{-\frac{1}{2}} e^{i p_v v} - A \nonumber \\ & & \to A \left[ \left( {1 - h_ + ^2 e^{2iKu} } \right)^{-\frac{1}{2}} - 1 \right] \approx \frac{A}{2} h_+ ^2 e^{2i Ku}+ \frac{3A}{8} h_+ ^4 e^{4i Ku}~. \end{eqnarray} \noindent In \eqref{phi-higgs} we have taken the limit $p_v \to 0$ after the limit of the transverse momenta going to zero $p_x=p_y=p \rightarrow 0$. Using the vacuum solution in \eqref{phi-higgs} the magnitude squared of $\varphi$ is \begin{equation} \varphi ^* \varphi \approx \frac{A^2 h_+ ^4}{4} + \frac{3 A^2 h_+^6}{8} \cos(2 K u ).
\label{phi-higgs-2} \end{equation} \noindent Equations \eqref{phi-higgs} and \eqref{phi-higgs-2} show that in the presence of a gravitational wave background, in addition to the vacuum state solution $\varphi = 0$ ($\varphi=0$ is always a solution to \eqref{KGvacuum1}), there are also the vacuum solutions given by $\varphi (u,v)$ in \eqref{phi-higgs}. A common feature shared by the present example and the usual scalar field symmetry breaking is that the scalar field magnitude only depends on the parameters of the interaction -- $\varphi^* \varphi$ depends on $h_+$ and $K$ (the amplitude and wave number of the gravitational wave background) while for the canonical Higgs field example of \eqref{higgs} $\Phi^* \Phi$ depends on $m^2$ and $\Lambda$ (the parameters of the scalar field's self interaction). One can compare the above results with the Casimir effect \cite{casimir} and the dynamical Casimir effect \cite{d-casimir}. Both the Casimir and dynamical Casimir effects involve a non-trivial vacuum state due to the restriction of the quantum fluctuations of fields placed between two neutral, conducting, infinite plates. And in the dynamical Casimir effect the time dependent oscillation of the plates creates photons out of the vacuum. In the present case our interpretation of the scalar field solution in \eqref{phi-higgs} is that the time dependent oscillations of the gravitational field create scalar field quanta out of the vacuum.
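The small-amplitude expansions in \eqref{phi-higgs} and \eqref{phi-higgs-2} can be checked numerically; the sample values $h_+ = 0.1$, $K = 2$, $A = 1$, $u = 0.7$ below are arbitrary.

```python
import cmath, math

# Check the expansion phi = A[(1 - h^2 e^{2iKu})^{-1/2} - 1] against its series
# (A/2) h^2 e^{2iKu} + (3A/8) h^4 e^{4iKu}, and the resulting magnitude
# phi* phi ~ A^2 h^4/4 + (3 A^2 h^6/8) cos(2Ku). Sample parameters only.
A, h, K, u = 1.0, 0.1, 2.0, 0.7
w = h**2 * cmath.exp(2j * K * u)

phi_exact = A * ((1 - w) ** -0.5 - 1)
phi_series = A * w / 2 + 3 * A * w**2 / 8
mag2_series = A**2 * h**4 / 4 + 3 * A**2 * h**6 / 8 * math.cos(2 * K * u)

print(abs(phi_exact - phi_series))           # O(h^6), tiny
print(abs(abs(phi_exact)**2 - mag2_series))  # O(h^8), tinier still
```

The residuals scale like the first neglected terms in the binomial series, consistent with the "$\approx$" signs in the two displayed equations.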
In support of this interpretation, that the scalar field solution from \eqref{phi-higgs} represents the creation of scalar field quanta by the gravitational wave background, we look at the current in the $u$-direction associated with $\varphi$ from \eqref{phi-higgs}, which to lowest order in $h_+$ is given by \begin{equation} \label{current-u} j_u = -i (\varphi ^* \partial _u \varphi - \varphi \partial _u \varphi ^*) \approx A^2 h_+^4 K + \frac{9}{4} A^2 h_+ ^6 K \cos(2 K u) ~. \end{equation} \noindent If we take the lowest order of the current in \eqref{current-u} (or time-average $j_u$ in \eqref{current-u}) we find a constant scalar field current in the $u$ direction of magnitude $A^2 h_+^4 K$. Our interpretation of this result is that the incoming gravitational wave current in the $u$ direction creates an outgoing scalar field current given, to leading order, by the first term in \eqref{current-u}. This picture is further supported by looking at the tree-level Feynman diagram process $graviton + graviton \to photon + photon$, where in our case ``$photon$'' really means a massless scalar field quantum. In reference \cite{skobelev} this tree level process was calculated and found to be non-zero in general. In particular it is non-zero when the incoming gravitons and outgoing photons travel in the same direction. In this work we are looking at collections of gravitons and scalar field quanta ({\it i.e.} gravitational plane waves and scalar field plane waves). Thus, if the process $graviton + graviton \to photon + photon$ is non-zero at the level of individual quanta, as shown in \cite{skobelev}, then this implies it should be non-zero for a collection of these quanta, {\it i.e.} gravitational and scalar field plane waves. To conclude this subsection we recall that there are well known restrictions against the creation of field quanta by a plane gravitational wave background \cite{gibbons,garriga}, which is precisely the process we are proposing above.
However in reference \cite{gibbons} a loophole was given -- particle creation might occur if the scalar field were massless and if the momenta of the scalar field quanta were in the same direction as the gravitational plane wave. These are exactly the conditions we have here -- the scalar field is massless and is traveling in the same direction as the gravitational wave since from \eqref{phi-higgs} $\varphi$ only depends on $u=z-t$. In more detail it was shown in \cite{garriga} that the Bogoliubov $\beta$ coefficients, which indicate particle production, were proportional to energy-momentum conserving delta functions \begin{equation} \label{b-beta} \beta_{ij} = \langle u_i ^{out} | u_j ^{in~*} \rangle \propto \delta (k_{-} + l_{-}) ~, \end{equation} where $k_{-} = \frac{\omega - k_z}{2}$ and $l_{-} = \frac{\omega - l_z}{2}$ are the light front momenta of the scalar field before and after \footnote{In \cite{garriga} as well as in \cite{gibbons} a sandwich gravitational wave background was considered. The plane gravitational wave background of \eqref{metric} was sandwiched, before and after, by Minkowski space-time. The functions $u_i ^{in}$ and $u_j ^{out}$ are the solutions in the asymptotic Minkowski regions that are connected to each other through the intermediate plane wave gravitational background.}, $\omega =\sqrt{{\bf k}^2 + m^2}$ or $\omega =\sqrt{{\bf l}^2 + m^2}$ respectively, and the indices $i,j$ label the momenta of the outgoing and ingoing scalar field quanta. If $m \ne 0$ it is easy to see that $k_{-} + l_{-}$ cannot vanish. If however, as is true in the case considered here, $m=0$ and ${\bf k, l} \to k_z, l_z$ ({\it i.e.} the before and after momenta of the scalar field are purely along the $+z$ direction) then $k_{-} + l_{-}$ vanishes and the Bogoliubov $\beta$ coefficient is non-zero indicating particle production.
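The kinematics behind this loophole can be illustrated with a few sample numbers, not taken from the text: the light-front momentum $k_{-} = (\omega - k_z)/2$, with $\omega = \sqrt{|{\bf k}|^2 + m^2}$, vanishes only for a massless quantum moving purely in the $+z$ direction, so the support of the delta function in \eqref{b-beta} is reachable only in that case.

```python
import math

# k_- = (omega - k_z)/2 with omega = sqrt(kx^2 + ky^2 + kz^2 + m^2).
# Since omega >= |k_z|, one has k_- >= 0, with equality only for a
# massless quantum moving purely along +z. Sample momenta are arbitrary.

def k_minus(kx, ky, kz, m):
    omega = math.sqrt(kx**2 + ky**2 + kz**2 + m**2)
    return (omega - kz) / 2

massless_forward = k_minus(0.0, 0.0, 3.0, 0.0)  # exactly zero
massive_forward  = k_minus(0.0, 0.0, 3.0, 1.0)  # strictly positive
massless_oblique = k_minus(1.0, 0.0, 3.0, 0.0)  # strictly positive
print(massless_forward, massive_forward, massless_oblique)
```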
\subsection{Exact gravitational wave background} One might ask to what extent the linear approximation for the gravitational wave -- namely that $f(u) = 1 + \epsilon (u)$ and $g(u) = 1- \epsilon (u)$ with $\epsilon (u) = h_+ e^{iKu}$ -- is crucial in obtaining the result in \eqref{phi-higgs}. What if one took an exact, gravitational plane wave solution instead of a linearized approximation? To this end we now repeat briefly the above analysis for an exact solution for the plane wave metric in the $+$ polarization. The ansatz functions $f(u)$ and $g(u)$ will be exact, plane wave solutions if they satisfy the general relativistic field equations in this case which are of the form ${\ddot f}/ f + {\ddot g}/ g = 0$ \cite{Schutz}. One simple exact, plane wave, solution is $f(u) = e^{-iKu} e^{ - Ku}$ and $g(u) = e^{-iKu} e^{Ku} $. These ansatz functions have plane wave parts ($e^{-iKu}$) but they also have exponentially growing or decaying amplitudes ($e^{ \pm Ku}$). Near $u=0$ one has oscillating, wave solutions due to the $e^{-iKu}$ parts of the ansatz function, but as $u$ moves away from $u=0$ the $e^{\pm Ku}$ terms act like growing/decaying amplitudes. Because of this these solutions can only be of use for a restricted range of $u$ near $u=0$. Asymptotically, as $u\rightarrow \infty$, the functions $f(u), g(u)$ are not physically acceptable. In this case we are dealing with an exact solution to the non-linear general relativistic field equations so one may ask if taking the real part of the complex form of the ansatz functions will still be a solution to ${\ddot f}/ f + {\ddot g}/ g = 0$ due to the non-linearity of general relativity. One can show that taking the real part of the ansatz functions ({\it i.e.} $f(u) = e^{-iKu} e^{ - Ku} \to \cos(Ku) e^{- Ku}$ and $g(u) = e^{-iKu} e^{Ku} \to \cos(Ku) e^{Ku}$) is still a solution to ${\ddot f}/ f + {\ddot g}/ g = 0$. 
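Both the complex ansatz functions and their real parts can be checked numerically against the field equation ${\ddot f}/f + {\ddot g}/g = 0$; the value $K=2$ and the sample point $u=0.3$ below are arbitrary.

```python
import cmath

# Check that f(u) = e^{-iKu} e^{-Ku}, g(u) = e^{-iKu} e^{Ku} satisfy
# f''/f + g''/g = 0, and likewise for the real parts cos(Ku) e^{-+ Ku}.
K = 2.0
f = lambda u: cmath.exp(-1j * K * u) * cmath.exp(-K * u)
g = lambda u: cmath.exp(-1j * K * u) * cmath.exp(K * u)

def d2(fn, u, du=1e-4):
    # second derivative by central differences
    return (fn(u + du) - 2 * fn(u) + fn(u - du)) / du**2

u = 0.3
resid = d2(f, u) / f(u) + d2(g, u) / g(u)

fr = lambda u: (cmath.cos(K * u) * cmath.exp(-K * u)).real
gr = lambda u: (cmath.cos(K * u) * cmath.exp(K * u)).real
resid_real = d2(fr, u) / fr(u) + d2(gr, u) / gr(u)
print(abs(resid), abs(resid_real))  # both numerically near zero
```

Analytically the cancellation is simple: ${\ddot f}/f = 2iK^2$ and ${\ddot g}/g = -2iK^2$ for the complex forms, while for the real parts both ratios are $\pm 2K^2 \tan(Ku)$.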
However, as in the previous linearized case, it is much easier to work with the complex form of the ansatz functions when one uses the background metric in the equation for the complex scalar field. Using the above metric background we substitute $f(u) = e^{-iKu} e^{ - Ku}$ and $g(u) = e^{-iKu} e^{Ku}$ into equation \eqref{KGvacuum4}, \begin{widetext} \begin{equation} \left( {4 e^{-4iKu} \partial _u \partial _v + 2 e^{-2iKu} \partial _u \left( {e^{-2iKu} } \right)\partial _v + e^{-2iKu} e^{2Ku} \partial _x^2 + e^{-2iKu} e^{ - 2Ku} \partial _y^2 } \right)\varphi = 0, \label{Exact1} \end{equation} \end{widetext} \noindent and making the substitution $\varphi = U(u) V(v) X(x) Y(y) = U(u) e^{ip_v v} e^{ip x} e^{ip y}$, we find \begin{equation} \left( i \frac{{\partial _u U}}{U} + K - \lambda e^{ 2iKu} \cosh(2 K u) \right) = 0 ~, \label{Exact2} \end{equation} \noindent where $\lambda =\frac{p^2}{2p_v}$ as before. Equation \eqref{Exact2} can be compared to \eqref{eigenvalueU3} in the sense that $i \frac{{\partial _u U}}{U}$ is the kinetic energy term of the scalar field, $\lambda e^{ 2iKu} \cosh(2 K u)$ is an interaction between the scalar field and the gravitational background, and $K$ is a pure gravitational potential term. In the limit when the gravitational wave is absent ({\it i.e.} $K \to 0$) the solution to \eqref{Exact2} is again given by $\varphi _0 \propto e^{ - i\lambda u} e^{ip_v v} e^{ip x} e^{ip y}$. 
Restoring the gravitational background ({\it i.e.} $K \ne 0$) the solution to \eqref{Exact2} is $U (u) = A e^{\left( {\frac{{\left( {-1 - i} \right)\lambda}}{{8K }} e^{ 2iKu} e^{2Ku} + \frac{{\left( {-1 + i} \right)\lambda}}{{8K}} e^{ 2iKu} e^{ - 2Ku} } \right)} e^{iKu}$, where $A$ is a normalization constant similar to that found in \eqref{eigenvalueU4}, and the scalar field takes the form \begin{equation} \varphi (u, v, x, y) = A e^{\left( {\frac{{\left( {-1 - i} \right)\lambda}}{{8K }} e^{ 2iKu} e^{2Ku} + \frac{{\left( {-1 + i} \right)\lambda}}{{8K}} e^{ 2iKu} e^{ - 2Ku} } \right)} e^{iKu} e^{ip_v v} e^{ip x} e^{ip y} + B. \label{Exact4} \end{equation} \noindent We have again added a constant $B$ via the shift symmetry of solutions to \eqref{KGvacuum4}. As before we set $B=-A$ so that $\varphi (u,v,x,y) \to 0$ when the gravitational wave background is turned off and when the scalar field momenta go to zero. Here we do not have an $h_+$ since the ``amplitude'' is given by the $e^{\pm Ku}$ terms in $f(u), g(u)$. As before, if we take the limit of the massless scalar field to its vacuum state by taking its energy and momenta parameters to zero ({\it i.e.} taking the limit $p_x=p_y=p \to 0$ and $\lambda \to 0$) one finds that $\varphi$ and $\varphi ^* \varphi$ do not go to zero but rather \begin{equation} \label{phi-exact} \varphi = \mathop {\lim }\limits_{p_v \to 0} A e^{iKu} e^{i p_v v} - A = A (e^{iKu} -1) ~~~~;~~~~ \varphi ^* \varphi = 2 A^2 (1- \cos(K u))~. \end{equation} In \eqref{phi-exact} we have again taken the limit that $p_v$ becomes arbitrarily small ($p_v \rightarrow 0$). With this we see that $\varphi$ depends on the gravitational wave background through the wave number, $K$. For this exact solution metric we again find that the scalar field acquires a non-zero, space-time dependent vacuum value even when one takes the limit of all the scalar field momenta going to zero.
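Both the solution of \eqref{Exact2} and the vacuum magnitude in \eqref{phi-exact} can be spot-checked numerically; the values $\lambda = 1.3$, $K = 2$, $A = 1$ and the sample point $u=0.2$ below are arbitrary.

```python
import cmath, math

# Check that U(u) = exp(c1 e^{2iKu} e^{2Ku} + c2 e^{2iKu} e^{-2Ku}) e^{iKu},
# with c1 = (-1-i) lam/(8K) and c2 = (-1+i) lam/(8K), obeys
#   i U'/U + K - lam e^{2iKu} cosh(2Ku) = 0,
# and that phi = A(e^{iKu} - 1) gives |phi|^2 = 2 A^2 (1 - cos Ku).
lam, K, A = 1.3, 2.0, 1.0
c1 = (-1 - 1j) * lam / (8 * K)
c2 = (-1 + 1j) * lam / (8 * K)

def U(u):
    return cmath.exp(c1 * cmath.exp((2j + 2) * K * u)
                     + c2 * cmath.exp((2j - 2) * K * u)) * cmath.exp(1j * K * u)

u, du = 0.2, 1e-6
dU = (U(u + du) - U(u - du)) / (2 * du)  # central difference for U'
resid = 1j * dU / U(u) + K - lam * cmath.exp(2j * K * u) * cmath.cosh(2 * K * u)

phi = A * (cmath.exp(1j * K * u) - 1)
print(abs(resid), abs(abs(phi)**2 - 2 * A**2 * (1 - math.cos(K * u))))
```

Both residuals are at the level of the finite-difference error, consistent with the closed forms quoted above.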
Also as in the previous subsection we find that $\varphi ^* \varphi$ has a constant term (the $2 A^2$ term in \eqref{phi-exact} which corresponds to the $\frac{A^2 h_+ ^4}{4}$ term in \eqref{phi-higgs-2}) and a space-time dependent part (the $-2 A^2 \cos(Ku)$ term in \eqref{phi-exact} which corresponds to the $\frac{3 A^2 h_+^6}{8} \cos(2 K u )$ term in \eqref{phi-higgs-2}). As for the plane wave solution of the previous subsection we can calculate the current in the $u$-direction and find \begin{equation} \label{current-u-2} j_u = -i (\varphi ^* \partial _u \varphi - \varphi \partial _u \varphi ^*) = 2 A^2 K (1 - \cos(Ku)) ~. \end{equation} The current above is similar to the one found in the previous subsection in \eqref{current-u} -- there is a constant term, $2 A^2 K$, and a space-time dependent term $-2 A^2 K\cos(Ku)$. These can be compared to the terms $A^2 h_+^4 K$ and $\frac{9}{4} A^2 h_+ ^6 K \cos(2 K u)$ in \eqref{current-u}. If one time averages the current in \eqref{current-u-2} one finds $\langle j_u \rangle = 2 A^2 K$ {\it i.e.} one has a constant current in the $u$ direction. We take the same interpretation of $\langle j_u \rangle$ as the leading term of $j_u$ from \eqref{current-u} -- $\langle j_u \rangle$ represents a scalar field plane wave traveling in the $u$-direction produced by the initial gravitational wave. \section{Discussion and Conclusions} We have shown that a massless scalar field placed in a plane, gravitational wave background will develop a space-time dependent, non-zero vacuum value given by \eqref{phi-higgs} \eqref{phi-higgs-2} even in the limit when all the momentum parameters of the scalar field are taken to zero ({\it i.e.} $p_v, p , \lambda \rightarrow 0$). This is different from what happens to the massless scalar field solution in Minkowski space-time, where when one takes the zero energy-momentum limit the scalar field vanishes. 
We have drawn attention to the similarity of this gravitationally induced scalar field vacuum value with the vacuum expectation value in the Higgs phenomenon and with the dynamical Casimir effect. Three potential physical consequences of this gravitationally induced vacuum value for the scalar field are: (i) the production of massless field quanta, such as photons, from the gravitational wave background; (ii) the usual Higgs vacuum expectation value of the Standard Model may be modified or even generated by stationary and/or time dependent gravitational backgrounds; (iii) the interplay between gravitational waves and scalar and gauge fields in the early Universe may lead to observational consequences at the present time. Point (i) was investigated in references \cite{Jones16, Jones15}. Here we calculated the currents connected with the scalar field (see equations \eqref{current-u} and \eqref{current-u-2}) and found that the time averaged current, $\langle j_u \rangle$, was a constant, which we interpreted as the incoming gravitational wave creating an outgoing scalar field. There is a well known restriction against the creation of fields from an incident plane gravitational wave \cite{gibbons}. The present work avoids this conclusion by using the loophole mentioned in \cite{gibbons} that the prohibition only applies to massive fields. Here we considered a massless field that travels in the same direction as the initial gravitational wave. Furthermore, in \eqref{b-beta} we have shown that the Bogoliubov coefficients indicating particle production, as calculated in \cite{garriga} for a gravitational plane wave sandwich background, are non-zero exactly in the limit $m \to 0$ with the momenta of the field quanta in the same direction as the gravitational wave ({\it i.e.} exactly the conditions of this work).
The production of fields by an incoming gravitational plane wave background is also in agreement with the non-zero, tree-level amplitudes for $graviton + graviton \to photon + photon$ coming from Feynman diagram calculations \cite{Calmet16, skobelev, bohr}. Also, since these processes occur at tree level, they are in some sense classical effects. Point (ii) was discussed in references \cite{onifrio1, onifrio2, kurkov}. In the works \cite{onifrio1, onifrio2} the idea was considered that the usual Higgs expectation value can be shifted by the effect of a static or stationary gravitational background ({\it e.g.} the Schwarzschild or Kerr space-times). This effect requires a coupling between the scalar field and the gravitational field of the form $\xi \phi R$ or $\xi \phi K$, where $\xi$ is the coupling, $R$ is the Ricci scalar, and $K=R_{\alpha \beta \gamma \delta} R^{\alpha \beta \gamma \delta}$ is the Kretschmann scalar. This coupling of the scalar field to the gravitational field results in a shifting of the pre-existing Higgs vacuum expectation value, with an associated shift in particle masses. This shift is potentially observable. In the work \cite{kurkov} the idea of a scalar field-gravitational background coupling of the form $\xi \phi R$ is again considered, but now the gravitational background is both space and time dependent. Again it is found that the gravitational background can shift the Higgs vacuum expectation value, but now this shift is space-time dependent, as is the case for the results of the present work. Further, it was found in \cite{kurkov} that even when there is no vacuum expectation value of the scalar field, one can be generated from the interaction with the gravitational background, as is the case for our results. Point (iii) was very recently proposed in reference \cite{caldwell}, where the interplay of a gravitational wave background with a cosmological non-Abelian gauge field was considered.
The non-Abelian gauge field was assumed to have a {\it pre-existing} vacuum expectation value of the form $A_i ^a = \phi (\tau) \delta _i ^a$, where $\phi (\tau)$ is a scalar function of the proper time $\tau$, and the indices $i$ and $a$ are space and ``color'' indices respectively. This interplay between the gravitational wave and the pre-existing, non-Abelian gauge field might lead to interesting and potentially observable phenomena such as neutrino-like oscillations between the gravitational field and the non-Abelian gauge field. In contrast, in the present work, a space-time dependent scalar field, as given in \eqref{phi-higgs}, is generated out of the vacuum by the gravitational wave background. Similarly, one might conjecture that the phenomenon proposed in \cite{caldwell} could work with the gravitational wave background generating a non-zero vacuum expectation value for the gauge field from the vacuum, rather than requiring a pre-existing gauge field. Finally, we point out that, like the standard Higgs mechanism of particle physics, the present generation of the vacuum expectation value of the scalar field by the gravitational wave background is already implied at the classical level. In the usual Higgs mechanism, as given in \eqref{higgs}, the non-zero vacuum expectation value $\Phi_0 = \sqrt{\frac{-m^2}{\Lambda}}$ is obtained from the classical, scalar field Lagrangian \eqref{higgs}. In a similar way, the non-zero vacuum value for $\varphi$ from the gravitational wave background already emerges by examining the system of a classical scalar field interacting with a classical gravitational background, as given in \eqref{phi-higgs} and \eqref{phi-exact}.
Also, the view that the leading term in the scalar field currents, $j_u$, from \eqref{current-u} and \eqref{current-u-2} represents production of the scalar field from the incoming gravitational wave finds support in the non-zero, tree-level Feynman diagram process $graviton + graviton \to photon + photon$. This further indicates the classical nature of the scalar field vacuum expectation values found here, since tree-level Feynman diagrams represent the classical limit of a given interaction. {\bf Acknowledgment} DS is supported by grant $\Phi.0755$ in fundamental research in Natural Sciences by the Ministry of Education and Science of Kazakhstan.
\section{Introduction} Parameterized quantum circuits offer a flexible paradigm for programming Noisy Intermediate Scale Quantum (NISQ) computers. These circuits are utilized in both Variational Quantum Algorithms (VQAs)~\cite{cerezo2020variationalreview,bharti2021noisy,peruzzo2014variational,mcclean2016theory,farhi2014quantum,romero2017quantum,khatri2019quantum,larose2019variational,arrasmith2019variational,cerezo2020variationalfidelity,cirstoiu2020variational,sharma2019noise,bravo2020variational,cerezo2020variational} and Quantum Neural Networks (QNNs)~\cite{schuld2014quest,cong2019quantum,beer2020training,verdon2018universal}. Both VQA and QNN approaches involve efficiently evaluating a cost function $C(\vec{\theta})$ or its gradient $\nabla C(\vec{\theta})$ on a quantum computer. A classical optimizer is then employed to train the parameters $\vec{\theta}$ of a parameterized quantum circuit $V(\vec{\theta})$ to minimize the cost. Rigorous scaling results are urgently needed for this exciting approach to near-term quantum computing. Gradient scaling is one of the few directions of significant progress. The most famous gradient scaling result is the barren plateau phenomenon~\cite{mcclean2018barren,cerezo2020cost,sharma2020trainability,wang2020noise,cerezo2020impact,holmes2020barren,pesah2020absence,zhang2020toward,abbas2020power,marrero2020entanglement,patti2020entanglement,uvarov2020barren,holmes2021connecting,du2020learnability,arrasmith2021equivalence}, whereby the gradient of the cost function shrinks exponentially with the number of qubits. Various issues lead to barren plateaus, such as deep ansatzes that lack structure~\cite{mcclean2018barren,sharma2020trainability,holmes2021connecting}, global cost functions~\cite{cerezo2020cost,sharma2020trainability}, high levels of noise~\cite{wang2020noise,du2020learnability}, scrambling target unitaries~\cite{holmes2020barren}, and large entanglement~\cite{marrero2020entanglement,patti2020entanglement}. 
Without an effort to avoid barren plateaus, this phenomenon can have a major impact on the scaling of one's algorithm. Specifically, the exponential suppression of the gradient implies that one would need exponential precision to make progress in the optimization, consequently causing one's algorithm to scale exponentially in the number of qubits. The standard goal for quantum algorithms is polynomial scaling in the number of qubits, in contrast to the exponential scaling of the best-known classical algorithms for certain problems. Hence, the exponential scaling due to barren plateaus could erase the possibility of a quantum speedup with a parametrized quantum circuit. It is therefore crucial to study barren plateaus in VQAs and QNNs in order to understand when quantum speedup is possible. This has spawned an important research direction of finding strategies to avoid barren plateaus. Some examples include employing local cost functions~\cite{cerezo2020cost}, modifying the architecture~\cite{pesah2020absence,zhang2020toward}, pre-training~\cite{verdon2019learning}, parameter correlation~\cite{volkoff2021large}, layer-by-layer training~\cite{skolik2020layerwise}, and initializing layers to the identity~\cite{grant2019initialization}. These strategies are promising. However, more analytical and numerical studies are needed to understand how effective they are in general, for example, as in Ref.~\cite{campos2021abrupt}. One possible strategy to consider is the choice of optimizer. It is widely believed that gradient-based optimizers will be directly impacted by barren plateaus, for obvious reasons. Moreover, higher-order derivatives are also exponentially suppressed in a barren plateau~\cite{cerezo2020impact}, so optimizers based on such derivatives will also be impacted. Nevertheless, there still remains the question of whether gradient-free optimizers could somehow avoid the barren plateau problem. This is currently a topic of debate~\cite{cerezo2020cost,marrero2020entanglement}.
The question is naturally made subtle by the fact that gradient-free optimizers can potentially use global information about the landscape, rather than being restricted to using local gradient information. In this work, we present an analytical argument suggesting that gradient-free approaches will, indeed, be impacted by barren plateaus. Specifically, we show that cost function differences, $C(\vec{\theta}_B)-C(\vec{\theta}_A)$, will be exponentially suppressed in a barren plateau. This holds even when the points $\vec{\theta}_A$ and $\vec{\theta}_B$ are not necessarily close in parameter space. Gradient-free optimizers use such cost function differences to make decisions during the optimization. Hence, our results imply that such optimizers will either need to spend exponentially large resources to characterize cost function differences, or else these optimizers will not make progress in the optimization. We confirm our analytical results with numerical simulations involving several gradient-free optimizers: Nelder-Mead, Powell, and COBYLA. For each of these optimizers, we attempt to train a deep parametrized quantum circuit, corresponding to the barren plateau scenario in Ref.~\cite{mcclean2018barren}. In all cases, we find that the number of shots (i.e., the amount of statistics) required to begin to train the cost function grows exponentially in the number of qubits. This is the same behavior that one sees for gradient-based methods, and is a hallmark of the barren plateau phenomenon. \section{Theoretical Background} Here we provide background needed to understand our results. We first consider the cost function used to train parameterized quantum circuits. Then we consider optimizers that can be used to optimize this cost function, with a specific focus on gradient-free optimizers. Finally, we give background on the barren plateau phenomenon. 
\subsection{Cost function} Consider a parameterized quantum circuit $V(\vec{\theta})$, whose parameters will be trained by minimizing a cost function $C(\vec{\theta})$. In this work, we consider a highly general cost function that can be expressed in the form \begin{equation}\label{eq:cost} C(\vec{\theta}) =\sum_{x=1}^{S} f_x(\vec{\theta},\rho_x)\,. \end{equation} Here, $\vec{\theta}$ is a vector of $m$ continuous parameters, $\{\rho_x\}_{x=1}^S$ are $n$-qubit input quantum states from a training set $\mathcal{S}$ of size $S$, and $f_x$ are functions that encode the problem and which can be different for each input state. To ensure algorithmic efficiency, we assume that the number $m$ of parameters in $\vec{\theta}$ is in $\mathcal{O}(\operatorname{poly}(n))$. In addition we consider that any $\theta_\mu\in\vec{\theta}$ parametrizes a unitary of the form $e^{-i\theta_\mu H_{\mu}}$. We assume $H_{\mu}$ is a Hermitian operator with two distinct non-zero eigenvalues (e.g., $H_{\mu}$ could be a Pauli operator). We remark that the cost function in~\eqref{eq:cost} contains as special cases many relevant applications. For instance, in a binary classification problem the cost function is given by the mean squared-error $C(\vec{\theta})=\sum_{x}(y_x-\widetilde{y}(\vec{\theta},\rho_x))^2$~\cite{farhi2018classification,killoran2019continuous,beer2020training,cong2019quantum}. Here, the training set is given by $\mathcal{S}=\{\rho_x,y_x\}$ where $y_x$ are the true labels, and $\widetilde{y}(\vec{\theta},\rho_x)$ are the labels predicted by the Quantum Neural Network. In addition, the cost of several Variational Quantum Algorithms is covered by~\eqref{eq:cost}. 
In this case, the cost takes a simpler form, where the training set contains a single state ($S=1$) and the cost is $C(\vec{\theta})={\rm Tr}[OV(\vec{\theta})\rho V^\dagger(\vec{\theta})]$, with $O$ a Hermitian operator~\cite{peruzzo2014variational,mcclean2016theory,farhi2014quantum,romero2017quantum,khatri2019quantum,larose2019variational,arrasmith2019variational,cerezo2020variationalfidelity,cirstoiu2020variational,sharma2019noise,bravo2020variational,cerezo2020variational}. The goal is then to solve the optimization problem \begin{equation}\label{eq:optimization} \vec{\theta}_{\text{opt}}=\argmin_{\vec{\theta}} C(\vec{\theta})\,. \end{equation} This involves choosing an optimizer, which can either be a gradient-based or gradient-free optimizer. Various gradient-based approaches~\cite{kubler2020adaptive,sweke2020stochastic,arrasmith2020operator,stokes2020quantum} have been proposed for training parameterized quantum circuits, and these will be directly impacted by barren plateaus. Optimizers employing higher-order derivatives are also impacted by barren plateaus~\cite{cerezo2020impact}. In this work we consider the case when one employs a gradient-free optimization method. In the next section we review some widely-used gradient-free optimizers. \subsection{Gradient-Free Optimizers} We will refer to any optimization method that only accesses a zeroth-order oracle (i.e., does not directly access derivative information) as being gradient free. This is a very large class of methods, but they all depend on being able to distinguish cost function values at different points. Though our analytical results are general to any such optimizer, we now introduce three particular gradient-free optimizers that we will examine numerically: Nelder-Mead, Powell's Method, and COBYLA. \subsubsection{Nelder-Mead} One popular gradient-free optimization strategy is the Nelder-Mead algorithm~\cite{nelder1965simplex}. 
In this approach, one constructs a simplex in the space to be optimized over. Then one modifies it with a sequence of reflect, expand, contract, and shrink operations to move the simplex and then shrink it around the minimum. These operations are chosen based on conditional comparisons of the cost function values at each vertex as well as proposed new vertices. See Figure~\ref{fig:opts}\textbf{a} for an illustration of these operations. When used in an environment where the errors in those cost function values are large enough to cause mistakes in these comparisons, however, this algorithm is vulnerable to performing shrink operations prematurely, which slows the optimization down and may lead to a false appearance of convergence~\cite{barton1991modifications}. Due to this difficulty, one would expect the number of iterations required to converge with Nelder-Mead to be especially large in limited precision environments, though we note that there are a number of modifications that attempt to improve the method's robustness to noise~\cite{barton1991modifications,huang2018robust}. \begin{figure*}[ht] \includegraphics[width=1.9\columnwidth]{gf_optimizers.pdf} \caption{Graphical depiction of the gradient-free optimizers considered. Panel \textbf{a} shows the different operations that the Nelder-Mead algorithm performs on the initial (grey) simplex: reflection (red), reflection with expansion (blue), reflection with contraction (green), and shrinking (turquoise). Panel \textbf{b} shows two iterations of the Powell method (with black for the first iteration and red for the second). Note that the direction of the final step is changed, reflecting a modified search direction. Finally, panel \textbf{c} shows an illustration of a COBYLA step from an initial (grey) simplex. After fitting a plane to the initial simplex, the method steps along the fitted slope to form a new simplex (red). The trust region is shown as a solid blue circle.
A smaller trust region, which might be used later in the optimization, is illustrated with the dashed blue circle (though for this particular step the trust region would likely not be contracted).} \label{fig:opts} \end{figure*} \subsubsection{Powell's Method} The Powell algorithm~\cite{powell1964efficient} is another popular gradient-free optimizer that performs sequential line searches. This method starts with some input set of search vectors $V=\{\vec{v}_i\}$, usually just the coordinate directions in the parameter space. Searching along each of these directions in sequence, this method looks for the displacement $a_i$ along each direction that would minimize the cost when only varying parameters along the current direction. Finding the displacements $\{a_i\}$ is typically done with Brent's parabolic interpolation method~\cite{brent2013algorithms}, though in principle one could use any univariate gradient-free optimizer. After sweeping through all of the search vectors, the iteration is completed by taking the search vector $\vec{v}_j$ that corresponds to the greatest displacement, $a_j=\max_i(a_i)$, and replacing it with \begin{equation} \vec{v}_j\to \sum_i a_i \vec{v}_i. \end{equation} This replacement accelerates convergence and helps the method avoid getting stuck in a cyclic pattern of updates. See Figure~\ref{fig:opts}\textbf{b} for a sketch of two iterations of this method. \subsubsection{COBYLA} Constrained Optimization BY Linear Approximation (COBYLA) is another popular gradient-free optimizer by Powell~\cite{powell1994direct}. This algorithm constructs a simplex and uses the $m + 1$ points in the parameter space, with $m$ being the number of parameters, to define a hyperplane that captures the local slope of the cost function. The algorithm then replaces the highest cost function value point on the simplex by stepping from the lowest cost point along the direction of the slope.
The method steps as far as possible along this estimated slope while staying within a lower bound on the radius of the trust region. The lower bound on the size of the trust region is decreased when the algorithm detects that it has stopped making progress, allowing the method to converge. Note, however, that the size of the trust region never increases in COBYLA. An iteration of this method (showing a shrinking trust region) is sketched in Figure~\ref{fig:opts}\textbf{c}. \subsection{Barren Plateaus} When the cost function exhibits a barren plateau, the cost function gradient vanishes exponentially with the system size. Without loss of generality we consider here the following generic definition of a barren plateau. \begin{definition}[Barren Plateau]\label{def:BP} Consider the cost function defined in Eq.~\eqref{eq:cost}. This cost exhibits a barren plateau if, for all $\theta_\mu\in\vec{\theta}$, the expectation value of the cost function partial derivative $\partial C(\vec{\theta})/\partial\theta_\mu=\partial_\mu C(\vec{\theta})$ is $\text{E}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]=0$ and its variance vanishes exponentially with the number of qubits $n$ as \begin{equation}\label{eq:var} {\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]\leq F(n)\,,\quad \text{with}\quad F(n)\in\mathcal{O}\left(\frac{1}{b^n}\right)\,, \end{equation} for some $b> 1$. As indicated, the expectation values are taken over the parameters $\vec{\theta}$. \end{definition} We remark here that, as shown in Definition~\ref{def:BP}, the barren plateau phenomenon is a probabilistic statement. In fact, from Chebyshev's inequality we know that ${\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]$ bounds the probability that the cost function partial derivative deviates from its mean of zero as \begin{equation}\label{eq:cheb} P(|\partial_\mu C(\vec{\theta})|\geq c)\leq \frac{{\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]}{c^2}\,, \end{equation} for any $c>0$.
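The role of \eqref{eq:cheb} can be seen in a toy numerical experiment. In the sketch below the partial derivative is modeled, purely as an illustration, by a Gaussian random variable with mean zero and variance $F(n)=2^{-n}$; this stand-in distribution is an assumption of the example, not a property derived from any particular ansatz:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model assumption: partial derivative ~ N(0, F(n)) with F(n) = 2^{-n}.
n, c = 10, 0.05
var = 2.0 ** -n
grads = rng.normal(0.0, np.sqrt(var), size=100_000)

empirical = np.mean(np.abs(grads) >= c)   # estimate of P(|dC| >= c)
chebyshev = var / c ** 2                  # Chebyshev upper bound of Eq. (cheb)

assert empirical <= chebyshev
```

For $n=10$ and $c=0.05$ the bound is about $0.39$ while the empirical tail probability is near $0.11$, so the inequality holds with room to spare; as $n$ grows, both sides are driven exponentially toward zero for fixed $c$.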
In practice this means that by randomly initializing the parameters $\vec{\theta}$, there is a high probability that one ends up in a flat region of the landscape where the gradients are exponentially suppressed. Let us now discuss different mechanisms that can lead to barren plateaus in the cost function landscape. As shown in the seminal work of Ref.~\cite{mcclean2018barren}, deep random unstructured circuits which form $2$-designs on $n$ qubits will exhibit barren plateaus. Here we use the term deep when the depth of the ansatz is in $\mathcal{O}(\operatorname{poly}(n))$. For instance, as shown in~\cite{harrow2009random,brandao2016local,harrow2018approximate} local circuits will form $2$-designs when their depth is in $\mathcal{O}(\operatorname{poly}(n))$. The barren plateau phenomenon was extended in~\cite{cerezo2020cost} to a type of shallow depth ansatz known as the layered hardware efficient ansatz, where random local gates act on alternating pairs of neighboring qubits in a brick-like structure. Here it was shown that the locality of the cost function can be linked to its trainability. Specifically, global cost functions (those where one compares operators living in exponentially large Hilbert spaces) exhibit barren plateaus for any circuit depth. On the other hand, it was shown that local cost functions (where one compares operators on an individual qubit level) are trainable when the ansatz depth is in $\mathcal{O}(\log(n))$, as here their gradients vanish at worst polynomially (rather than exponentially) with the system size. Barren plateaus have also been shown to arise in more general QNN architectures~\cite{sharma2020trainability,marrero2020entanglement}. In perceptron-based QNNs with hidden and visible layers, connecting a large number of qubits in different layers with random global perceptrons (and hence highly entangling them) can lead to exponentially vanishing gradients. 
These results have shown that the barren plateau phenomenon is a generic problem that can arise in multiple architectures for quantum machine learning. Finally, in~\cite{wang2020noise} a noise-induced barren plateau mechanism was found. Here it was proven that the presence of noise acting before and after each unitary layer in a parametrized quantum circuit leads to exponentially vanishing gradients for circuits with linear or super-linear depth. When the cost exhibits a noise-induced barren plateau we have $|\partial_\mu C(\vec{\theta})|\leq \widehat{F}(n)$ with $\widehat{F}(n)\in\mathcal{O}(1/\widehat{b}^n)$ for some $\widehat{b}>1$. The underlying mechanism here is that the state gets corrupted due to noise, leading to a flattening of the whole cost landscape. This phenomenon is conceptually different from the previous barren plateaus as here one does not average over the parameters $\vec{\theta}$. Nevertheless, the noise-induced barren plateau still satisfies Definition~\ref{def:BP}, which is a weaker condition. \section{Main Results} In this section we first present our main analytical results in the form of Proposition~\ref{prop:1} and Corollary~\ref{cor:1}. We then discuss the implications for employing gradient-free optimizers in a barren plateau. \subsection{Exponentially suppressed cost differences} Here we consider two relevant scenarios where we analyze, on average, how large the difference $\Delta C= C(\vec{\theta}_B)-C(\vec{\theta}_A)$ between two points in the landscape can be. First we consider the case when $\vec{\theta}_A$ and $\vec{\theta}_B$ are not independent, but rather $\vec{\theta}_B$ can be obtained from $\vec{\theta}_A$ through a given translation in parameter space. We then analyze the case when $\vec{\theta}_A$ and $\vec{\theta}_B$ are independent. The following proposition constitutes the main result of our work. The proof is presented in the Appendix. \begin{proposition}\label{prop:1} Consider the cost function of Eq.~\eqref{eq:cost}. 
Let $\vec{\theta}_A$ be a randomly chosen point in parameter space. Let $\vec{\theta}_B=\vec{\theta}_A+L\hat{\vec{\ell}}$ be a point at a distance $L=\|\vec{\theta}_B-\vec{\theta}_A\|$ from $\vec{\theta}_A$ in parameter space, for some unit vector $\hat{\vec{\ell}}$. If the cost exhibits a barren plateau according to Definition~\ref{def:BP}, then the expectation value of the difference $\Delta C=C(\vec{\theta}_B)-C(\vec{\theta}_A)$ is \begin{equation} \text{E}_{\vec{\theta}_A}[\Delta C]=0\,, \end{equation} and the variance is exponentially vanishing with $n$ as \begin{equation} {\rm Var}_{\vec{\theta}_A}[\Delta C]\leq G(n)\,, \end{equation} with \begin{equation}\label{eq:G-n} G(n)=m^2L^2 F(n)\,, \quad \text{and}\quad G(n)\in \widetilde{\mathcal{O}}\left(\frac{1}{b^n}\right)\,, \end{equation} for some $b> 1$. Here $m$ is the dimension of the parameter space, and $F(n)$ was defined in~\eqref{eq:var}. \end{proposition} Let us here recall that we have assumed that $m\in\mathcal{O}(\operatorname{poly}(n))$. Similarly, we have that $\theta_\mu$ parametrizes a unitary generated by a Hermitian operator $H_{\mu}$ with two distinct non-zero eigenvalues. From the latter it then follows that $L$ is always in $\mathcal{O}(\operatorname{poly}(n))$, and hence that $G(n)\in \widetilde{\mathcal{O}}(1/b^n)$. From the previous results one can readily evaluate the case when $\vec{\theta}_B$ and $\vec{\theta}_A$ are independent. This case is of relevance to global optimizers, such as Bayesian approaches, where initial points on the landscape are chosen independently. This scenario can be analyzed by computing the expectation value $\text{E}_{\vec{\theta}_A,\vec{\theta}_B}[\Delta C]=\text{E}_{\vec{\theta}_B}[\text{E}_{\vec{\theta}_A}[\Delta C]]$. From Proposition~\ref{prop:1}, we can derive the following corollary. \begin{corollary}\label{cor:1} Consider the cost function of Eq.~\eqref{eq:cost}. Let $\vec{\theta}_A$ and $\vec{\theta}_B$ be two randomly chosen points in parameter space. 
Without loss of generality we assume that $\vec{\theta}_B=\vec{\theta}_A+L\hat{\vec{\ell}}$ for random $L$ and $\hat{\vec{\ell}}$ so that $\text{E}_{\vec{\theta}_A,\vec{\theta}_B}[\cdots]=\text{E}_{\vec{\theta}_A,L,\hat{\vec{\ell}}}[\cdots]$. If the cost exhibits a barren plateau according to Definition~\ref{def:BP}, then the expectation value of the difference $\Delta C=C(\vec{\theta}_B)-C(\vec{\theta}_A)$ is $\text{E}_{\vec{\theta}_A,L,\hat{\vec{\ell}}}[\Delta C]=0$, and the variance is exponentially vanishing with $n$ as \begin{equation}\label{eq:boundprop} {\rm Var}_{\vec{\theta}_A,L,\hat{\vec{\ell}}}[\Delta C]\leq \widehat{G}(n)\,, \end{equation} with \begin{equation} \widehat{G}(n)=m^2\overline{L}^2 F(n)\,, \quad \text{and}\quad \widehat{G}(n)\in \widetilde{\mathcal{O}}\left(\frac{1}{b^n}\right)\,, \end{equation} for some $b> 1$. Here $m$ is the dimension of the parameter space, $F(n)$ was defined in~\eqref{eq:var}, and \begin{equation} \overline{L}=\text{E}_{L,\hat{\vec{\ell}}}\left[L\right] \end{equation} is the average distance between any two points in parameter space. \end{corollary} The proof of Corollary~\ref{cor:1} readily follows from Proposition~\ref{prop:1} by additionally computing the expectation value over $L$ and $\hat{\vec{\ell}}$. Moreover, here we can see that $\widehat{G}(n)$ is exponentially vanishing with the system size as $\overline{L}\in\mathcal{O}(\operatorname{poly}(n))$. From Proposition~\ref{prop:1} we have that, given a set of parameters $\vec{\theta}_A$ and a dependent set $\vec{\theta}_B$ related through a translation in parameter space, the probability that the difference $\Delta C=C(\vec{\theta}_B)-C(\vec{\theta}_A)$ is larger than a given $c>0$ can be bounded as \begin{equation}\label{eq:difference-bound} P(|\Delta C|\geq c)\leq \frac{G(n)}{c^2}\,, \end{equation} where we have used~\eqref{eq:cheb} together with the bound ${\rm Var}_{\vec{\theta}_A}[\Delta C]\leq G(n)$ from Proposition~\ref{prop:1}.
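This concentration of cost differences can be illustrated without any circuit structure at all. In the NumPy sketch below, deep unstructured ansatzes are modeled by Haar-random states (a stand-in assumption, not the ansatz of our numerics), and the toy cost is the expectation of a Pauli $Z$ on the first qubit; the variance of $\Delta C$ between two independently drawn points then falls off exponentially with $n$ (for this observable the exact value is $2/(2^n+1)$):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(n):
    """Haar-random n-qubit state: normalized complex Gaussian vector."""
    v = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
    return v / np.linalg.norm(v)

def cost(psi):
    """Toy cost C = <psi| Z_1 |psi> (Pauli Z on the first qubit)."""
    d = psi.size
    z = np.where(np.arange(d) < d // 2, 1.0, -1.0)  # Z_1 eigenvalues
    return float(np.sum(z * np.abs(psi) ** 2))

def var_of_difference(n, samples=2000):
    """Sample variance of Delta C = C(theta_B) - C(theta_A), independent points."""
    diffs = [cost(haar_state(n)) - cost(haar_state(n)) for _ in range(samples)]
    return float(np.var(diffs))

v2, v8 = var_of_difference(2), var_of_difference(8)
assert v8 < v2 / 10   # Var[Delta C] is exponentially suppressed in n
```

The exact variances are $2/5$ for $n=2$ and $2/257$ for $n=8$, so the factor-of-ten separation asserted above is comfortably satisfied by the sample estimates.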
Note that a similar result can be obtained for the case when $\vec{\theta}_A$ and $\vec{\theta}_B$ are independent, but here one replaces $G(n)$ by $\widehat{G}(n)$ in~\eqref{eq:difference-bound}. This implies that, with high probability, the difference $\Delta C$ will be exponentially vanishing with the system size, for both cases when $\vec{\theta}_A$ and $\vec{\theta}_B$ are dependent or independent. Moreover, we remark that this is a direct consequence of the fact that the cost exhibits a barren plateau. \subsection{Implications for gradient-free optimizers} Let us first recall that, as discussed in the previous section, the capability of distinguishing the cost function value at different sets of parameters is at the core of gradient-free optimization methods. Therefore, the precision required to differentiate choices fundamentally limits the scaling of these methods, with smaller differences requiring greater precision. If an optimizer's precision requirements are not met, then each decision the method makes becomes randomly chosen by shot noise, leading to many optimizers effectively becoming either random walks or random sampling. The results in Proposition~\ref{prop:1} pertain to gradient-free optimizers that compare points that are a given distance and direction apart. For example, simplex-based methods like Nelder-Mead fall under this category. As we show that cost differences are exponentially suppressed with the system size in a barren plateau, this leads to applications having sampling requirements that scale exponentially. Exponentially scaling sampling requirements, in turn, hinder the possibility of achieving quantum speedup with such an algorithm. Similarly, Corollary~\ref{cor:1} tells us that cost function differences between randomly chosen points are also exponentially suppressed. 
This means that either random search methods or methods that use random initialization, such as Bayesian optimization~\cite{movckus1975bayesian}, will also struggle with barren plateau landscapes. Therefore, using randomness in the selection of points cannot evade this exponential scaling result. Let us finally remark that Proposition~\ref{prop:1} and Corollary~\ref{cor:1} make no assumption about how close (or far) the parameters $\vec{\theta}_A$ and $\vec{\theta}_B$ are in parameter space other than that $L=\|\vec{\theta}_B-\vec{\theta}_A\|\in\mathcal{O}(\operatorname{poly}(n))$. Given that for any practical application the number of parameters $m$ should scale no faster than $m\in\mathcal{O}(\operatorname{poly}(n))$ (or the problem will become untrainable for reasons having nothing to do with barren plateaus), this seems very reasonable. For example, if all of the parameters are single qubit rotations, the parameter space is an $m$-dimensional torus with unit radius. On that torus, the greatest length of the shortest path between two points is: \begin{equation} \begin{aligned} L_{\mathrm{max}}=&\max_{\vec{\theta}_A,\vec{\theta}_B}\|\vec{\theta}_A-\vec{\theta}_B\|\\ =&\sqrt{m}\pi. \end{aligned} \end{equation} This means that our results are valid for both local and global optimizers, as sampling points that are further apart cannot overcome the suppressed slope. \section{Numerical Implementation} \begin{figure}[t] \includegraphics[width=\columnwidth]{PBC.pdf} \caption{Single layer of the hardware efficient ansatz employed in our numerical implementations, shown here for $n=4$. The $U$ gates are general single qubit unitaries $U(\theta_1,\theta_2,\theta_3) = R_Z(\theta_2+\pi)R_X(\pi/2)R_Z(\theta_1+\pi)R_X(\pi/2)R_Z(\theta_3)$. Here $R_Z(\alpha) = e^{-i\alpha/2 \sigma_Z}$, $R_X(\alpha) = e^{-i\alpha/2 \sigma_X}$, and $\sigma_Z$, $\sigma_X$ are Pauli matrices.
} \label{fig:hardeff} \end{figure} In this section we present numerical results obtained by simulating a variational quantum compiling algorithm~\cite{khatri2019quantum,sharma2019noise}. Here, one trains a parametrized quantum circuit $V(\vec{\theta})$ to approximate a target unitary $U$. Specifically, we consider the toy-model problem where $U=\openone$ and the goal is to train the parameters in $V(\vec{\theta})$ such that $V(\vec{\theta})\ket{\vec{0}}=\ket{\vec{0}}$, with $\ket{\vec{0}}=\ket{0}^{\otimes n}$ the all-zero state. As shown in~\cite{khatri2019quantum,sharma2019noise}, the following local cost function is faithful \begin{equation} C(\vec{\theta}) = {\rm Tr}[ O_L V(\vec{\theta})\dya{\vec{0}}V^\dagger(\vec{\theta})]\,, \label{cost_function} \end{equation} where \begin{equation} O_L = \openone - \frac{1}{n}\sum_{i=1}^{n}\ket{0_i}\bra{0_i}\,, \end{equation} from which one can verify that $C(\vec{\theta})\in[0,1]$, with $C(\vec{\theta})=0$ iff $V(\vec{\theta})\ket{\vec{0}}=\ket{\vec{0}}$ (up to global phase). We remark that there is an efficient quantum circuit to compute $C(\vec{\theta})$~\cite{sharma2019noise}. For $V(\vec{\theta})$ we employ a layered hardware efficient ansatz as shown in Fig.~\ref{fig:hardeff}. Moreover, we recall that Ref.~\cite{mcclean2018barren} showed that a cost function such as~\eqref{cost_function} with a layered hardware efficient ansatz that is randomly initialized will exhibit barren plateaus when the depth of $V(\vec{\theta})$ scales at least linearly in $n$. In our numerics we simulated the aforementioned quantum compilation task for different numbers of qubits $n=5,6,\ldots,11$. Letting $p$ be the number of layers in the ansatz in Fig.~\ref{fig:hardeff}, we choose $p=n$, so that the depth grows linearly in $n$. This corresponds to the barren plateau scenario in Ref.~\cite{mcclean2018barren}. For each value of $n$, we solved the optimization of Eq.~\eqref{eq:optimization} by employing the Nelder-Mead, Powell, and COBYLA methods.
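For small $n$, this local cost can be evaluated classically from the statevector, since $C(\vec{\theta}) = 1 - \frac{1}{n}\sum_{i} P(\text{qubit } i = 0)$. The following numpy sketch is purely illustrative (it is not the MATLAB/SciPy code used in the simulations):

```python
import numpy as np

def local_cost(psi, n):
    """C = 1 - (1/n) * sum_i P(qubit i = 0) for an n-qubit statevector psi."""
    probs = np.abs(psi) ** 2
    total = 0.0
    for i in range(n):
        # select basis states whose i-th qubit (big-endian) is |0>
        mask = np.array([((k >> (n - 1 - i)) & 1) == 0 for k in range(2 ** n)])
        total += probs[mask].sum()
    return 1.0 - total / n

n = 3
zero = np.zeros(2 ** n); zero[0] = 1.0   # |000>: every qubit is |0>, so C = 0
ones = np.zeros(2 ** n); ones[-1] = 1.0  # |111>: no qubit is |0>, so C = 1
print(local_cost(zero, n), local_cost(ones, n))
```

The all-zero state gives $C=0$, consistent with the faithfulness property stated above.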
These simulations were performed using MATLAB (Nelder-Mead) and SciPy (Powell and COBYLA). In all cases we randomly initialized the parameters $\vec{\theta}$ in the ansatz and we ran the optimizer until a cost function value $C=0.4$ was achieved or until a maximal total number of shots used throughout the optimization was surpassed. For simplicity, we used the default values for hyper-parameters not related to optimization termination. We note that we chose a relatively large value ($C=0.4$) for the cost threshold because we are interested in the initial stages of the training process, i.e., the question of whether one can get past the barren plateau. With this choice, the computational expense of reaching this threshold does not take into account the difficulty of finding a minimum; it only reflects the burden of the barren plateau. Since the goal is to heuristically determine the precision (i.e., the number of shots $N$) needed to minimize the cost, we first ran simulations with different values of $N$ allocated per cost-function evaluation. For each $N$ we simulated $20$ optimization instances (runs) with different initial points and we kept track of $N_{\text{total}}$ used throughout the optimization. The next step was to determine the value of $N$ for which a cost of $C=0.4$ could be reached and which minimized the median total number of shots $N_{\text{total}}$ computed over different runs. We analyze the scaling of this median value as a function of $n$ below. \begin{figure}[t] \includegraphics[width=\columnwidth]{train_scal2.pdf} \caption{Median value of the total shot number $N_{\textrm{total}}$ necessary to reach $C=0.4$ plotted versus the number of qubits $n$. We show results for Nelder-Mead (blue diamonds), Powell (red asterisks), and COBYLA (pink squares) implementations, as well as results for a gradient-descent implementation (black crosses) for reference.
Each data point was obtained from a sample of $20$ runs initialized by random initial angles~$\vec{\theta}$. } \label{fig:numerical} \end{figure} In Fig.~\ref{fig:numerical} we present our numerical results. Here we can see that the total number of shots scales exponentially with $n$ for the Powell method, and scales super-exponentially for the Nelder-Mead optimizer. For the COBYLA method, the $N_{\text{total}}$ behavior as a function of $n$ is not very regular but it is consistent with at least an exponential increase. As a reference point we also show in Fig. \ref{fig:numerical} results obtained with a custom gradient-descent optimizer. As expected, the total number of shots also scales exponentially in this case. \section{Discussion} With a wide range of applications spanning chemistry, optimization, and big data analysis, training parameterized quantum circuits is arguably the leading paradigm for near-term quantum computing. Yet barren plateaus in the training landscape remain an obstacle to making these paradigms scalable. Hence, one of the most important lines of research in this field is developing methods to avoid barren plateaus. In this work, we consider the question of whether the choice of optimizer could be a potential strategy for avoiding barren plateaus. We focus on gradient-free optimizers, since there has been recent debate in the community about whether barren plateaus affect such optimizers. Our main result is an analytical argument suggesting that gradient-free optimizers will, indeed, be impacted by barren plateaus. Proposition~\ref{prop:1} is relevant to gradient-free optimizers that search through the landscape starting from a (random) initial point. For example, this includes simplex-based optimizers like Nelder-Mead. This proposition asserts that the variance of cost function differences is exponentially suppressed in a barren plateau.
This implies that such optimizers will need to expend exponentially large resources in order to make decisions about where to move in the landscape. Corollary~\ref{cor:1} considers a slightly different scenario, where both points are randomly and independently chosen. This is relevant to global gradient-free optimizers, such as Bayesian methods, which initially choose multiple random points on the landscape, and then proceed from this initial set of points. This corollary implies that these optimizers will also need to utilize exponentially large resources in order to make progress. We also numerically attempt to train in a barren plateau scenario using several gradient-free optimizers. We ask how many shots are required to begin to train the cost function. In all cases, we find that the required number of shots grows exponentially in the number of qubits. This is consistent with our main result, and demonstrates that barren plateaus can lead to exponential scaling even for gradient-free optimizers. We note that this exponential scaling is a lower bound on the asymptotics. For the Nelder-Mead method we find super-exponential scaling, likely due to the chances of prematurely shrinking the simplex when it is hard to order the cost values~\cite{barton1991modifications}. For the case of COBYLA, there may be a similar effect from prematurely shrinking the radius of the trust region, though this is not clearly demonstrated in our data. Finally, the Powell method appears to show exponential scaling. It is likely that the reason Powell shows better scaling is that, unlike the other optimizers, statistical noise does not have a cumulative effect on the state of the optimizer. Our work casts doubt on the notion that the choice of optimizer could provide a strategy to avoid barren plateaus. While the asymptotically exponential scaling cannot be avoided, we note that the size limits of trainable problems may be extended by a careful choice of optimization strategy.
For example, techniques using neural networks~\cite{verdon2019learning} or natural evolutionary strategies~\cite{anand2021natural} may improve the constants multiplying the exponential scaling. However, we emphasize that all such strategies at minimum require the comparison of cost function values at different points and thus are subject to our scaling analysis. This result highlights the difficult challenge posed by barren plateaus. Future work certainly should continue to develop strategies to avoid them. Additionally, in future work, we hope to develop a unified treatment that covers the impact of barren plateaus on various types of optimizers, gradient-based and gradient-free. \section*{Acknowledgements} AA and LC were initially supported by LDRD program of LANL under project number 20190065DR. MC acknowledges support from the Center for Nonlinear Studies at Los Alamos National Laboratory (LANL). Piotr C. was supported by the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory (LANL) under project number 20190659PRD4. PJC acknowledges initial support from the LANL ASC Beyond Moore's Law project. This work was supported by the U.S. DOE, Office of Science, Office of Advanced Scientific Computing Research, under the Accelerated Research in Quantum Computing (ARQC) program. \bibliographystyle{apsrev4-1mod}
\section{Introduction} Visual tracking has various applications ranging from video surveillance, human computer interaction to autonomous driving. The main difficulty is how to utilize the extremely limited training data (usually a bounding box in the first frame) to develop an appearance model robust to a variety of challenges including background clutter, scale variation, motion blur and partial occlusions. Discriminative correlation filters (DCFs) have attracted increasing attention in the tracking community \cite{bolme-cvpr10-mosse,Danelljan-iccvw15-DeepSRDCF,chao-iccv15-HCF}, due to the following two important properties. First, since spatial correlation is often computed in the Fourier domain as an element-wise product, DCFs are suitable for fast tracking. Second, DCFs regress the circularly shifted versions of input features to soft labels, which are generated by a Gaussian function and range from zero to one. In contrast to most \ryn{existing} tracking-by-detection approaches \cite{kalal-pami12-tld,bai-iccv13-randomized,hare-pami16-struck,ning-cvpr16-dlsvm} that generate sparse response scores over sampled locations, DCFs always generate dense response scores over all searching locations. With the use of deep convolutional features \cite{krizhevsky-nips12-imagenet}, DCFs based tracking algorithms \cite{chao-iccv15-HCF,Danelljan-iccvw15-DeepSRDCF,martin-eccv16-beyond} have achieved state-of-the-art performance on recent tracking benchmark datasets \cite{wu-cvpr13-otb,wu-pami15-otb,kristan-eccvw16-vot}.
\renewcommand{\tabcolsep}{.1pt} \begin{figure}[t] \begin{center} \begin{tabular}{cc} \vspace{-1mm}\includegraphics[width=0.49\linewidth]{figures/intro/singer2/0011.pdf}& \includegraphics[width=0.49\linewidth]{figures/intro/singer2/0036.pdf}\\ \vspace{-1mm}\includegraphics[width=0.49\linewidth]{figures/intro/motorolling/0023.pdf}& \includegraphics[width=0.49\linewidth]{figures/intro/motorolling/0065.pdf}\\ \vspace{-1mm}\includegraphics[width=0.49\linewidth]{figures/intro/shaking/0049.pdf}& \includegraphics[width=0.49\linewidth]{figures/intro/shaking/0365.pdf}\\ \end{tabular} \begin{tabular}{c} \includegraphics[width=0.7\linewidth]{figures/intro/fig.pdf}\\ \end{tabular} \end{center} \vspace{-5mm} \caption{Convolutional features improve DCFs (DeepSRDCF \cite{Danelljan-iccvw15-DeepSRDCF}, CCOT \cite{martin-eccv16-beyond}, HCFT \cite{chao-iccv15-HCF}). We propose the CREST algorithm by formulating DCFs as a shallow convolutional layer with residual learning. \ryn{It} performs favorably against existing DCFs with convolutional features.} \label{fig:intro} \end{figure} However, existing DCFs based tracking algorithms are limited by two aspects. First, learning DCFs is independent of feature extraction. Although it is straightforward to learn DCFs directly over deep convolutional features as in \cite{chao-iccv15-HCF,Danelljan-iccvw15-DeepSRDCF,martin-eccv16-beyond}, DCFs trackers benefit little from the end-to-end training. Second, most DCFs trackers use a linear interpolation operation to update the learned filters over time. Such an empirical interpolation weight is unlikely to strike a good balance between model adaptivity and stability. It leads to \ryn{drifting of the} DCFs trackers due to noisy updates.
\ryn{These limitations raise two questions: (1) whether DCFs with feature representation can be modeled in an end-to-end manner, and (2) whether DCFs can be updated in a more effective way rather than using those empirical operations such as linear interpolation?} To address these two questions, we propose a Convolutional RESidual learning scheme for visual Tracking (CREST). We interpret DCFs as the counterparts of the convolution filters in deep neural networks. In light of this idea, we reformulate DCFs as a one-layer convolutional neural network that directly generates the response map as the spatial correlation between two consecutive frames. With this formulation, feature extraction through pre-trained CNN models (e.g., VGGNet \cite{simonyan-iclr14-very}), correlation response map generation, as well as model update are effectively integrated into an end-to-end form. The spatial convolutional operation functions similarly to the dot product between the circulant shifted inputs and the correlation filter. It removes the boundary effect of the Fourier transform through directly convolving in the spatial domain. Moreover, the convolutional layer is fully differentiable. It allows updating \ryn{the} convolutional filters using back propagation. Similar to DCFs, the convolutional layer generates dense response scores over all searching locations in a one-pass manner. To properly update our model, we apply residual learning \cite{he-cvpr16-resnet} to capture appearance changes by detecting the difference between the output of this convolutional layer and the ground truth soft label. This helps alleviate rapid model degradation caused by noisy updates. Meanwhile, residual learning contributes to the target response robustness for large appearance variations. Ablation studies (Section \ref{sec:ablation}) show that the proposed convolutional layer performs well against state-of-the-art DCFs trackers and the residual learning \ryn{approach} further improves the accuracy.
The main contributions of this work are as follows: \begin{itemize}[noitemsep,nolistsep] \item We reformulate the correlation filter as one convolutional layer. It integrates feature extraction, response generation, and model update into the convolutional neural network for end-to-end training. \item We apply residual learning to capture the target appearance changes referring to spatiotemporal frames. This effectively alleviates a rapid model degradation by large appearance changes. \item We extensively validate our method on benchmark datasets with large-scale sequences. We show that our CREST tracker performs favorably against state-of-the-art trackers. \end{itemize} \section{Related Work} There are extensive surveys of visual tracking in the literature \cite{yilmaz-acm06-object,salti-tip12-adaptive,smeulders-pami14-visual}. In this section, we mainly discuss tracking methods that are based on correlation filters and \ryn{CNNs}. {\flushleft \bf Tracking by Correlation Filters.} Correlation filters for visual tracking have attracted considerable attention due to the computational efficiency in the Fourier domain. Tracking methods based on correlation filters regress all the circular-shifted versions of the input features to a Gaussian function. They do not need multiple samples of target appearance. The MOSSE tracker \cite{bolme-cvpr10-mosse} encodes target appearance through an adaptive correlation filter by optimizing the output sum of squared error. Several extensions have been proposed to considerably improve tracking accuracy.
The examples include kernelized correlation filters \cite{Henriques-eccv12-DCF}, multiple dimensional features \cite{martin-cvpr14-adaptive,henriques-pami15-high}, context learning \cite{zhang-eccv14-fast}, scale estimation \cite{martin-bmvc14-accurate}, re-detection \cite{ma-cvpr15-lct}, subspace learning \cite{liu-cvpr15-real}, short-term and long-term memory \cite{hong-cvpr15-muster}, reliable collection \cite{li-cvpr15-reliable} and spatial regularization \cite{martin-iccv15-learning}. Different from existing correlation filter based frameworks that formulate the correlation operation as an element-wise multiplication in the Fourier domain, we formulate the correlation filter as a convolution operation in the spatial domain. It is represented by one convolutional layer in a CNN. In this sense, we demonstrate that feature extraction, response generation as well as model update can be integrated into one network for end-to-end prediction and optimization. \begin{figure*}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\linewidth]{figures/pipeline.pdf}\\ \end{tabular} \end{center} \vspace{-5mm} \caption{The pipeline of our CREST algorithm. We extract convolutional features from one search patch of the current frame T and the initial frame. These features are transformed into the response map through the base and residual \ryn{mappings}.} \label{fig:pipeline} \vspace{-3mm} \end{figure*} {\flushleft \bf Tracking by CNNs.} Visual representations are important for visual tracking. Existing CNN trackers mainly explore the pre-trained object recognition networks and build upon discriminative or regression models. Discriminative tracking methods propose multiple particles and refine them through online classification. They include stacked denoising autoencoder \cite{wang-nips13-dlt}, incremental learning \cite{Li-bmvc14-deeptrack}, SVM classification \cite{hong-icml15-cnnsvm} and fully connected neural network \cite{nam-cvpr16-mdnet}.
These discriminative trackers require auxiliary training data as well as off-line pre-training. On the other hand, regression based methods typically regress CNN features into soft labels (e.g., a two dimensional Gaussian distribution). They focus on integrating convolutional features with the traditional DCF framework. The examples include hierarchical convolutional features \cite{chao-iccv15-HCF}, adaptive hedging \cite{qi-cvpr16-hdt}, spatial regularization \cite{Danelljan-iccvw15-DeepSRDCF} and continuous convolutional operations \cite{martin-eccv16-beyond}. In addition, there are methods based on CNN to select convolutional features \cite{wang-iccv15-visual} and update sequentially \cite{wang-2016-stct}. Furthermore, Siamese networks receive growing attention due to their identical two-stream structure. These include tracking by object verification \cite{tao-cvpr16-siamese}, tracking by correlation \cite{bertinetto-eccv16-fully} and tracking by location axis prediction \cite{held-eccv16-learning}. Besides, recurrent neural networks (RNNs) have been investigated to facilitate tracking as object verification \cite{cui-cvpr16-rnn}. Different from the existing frameworks, we apply residual learning to capture the difference between the predicted response map of the current frame and the ground truth (the initial frame). This facilitates accounting for appearance changes and effectively reduces model degradation caused by noisy updates. \section{Convolutional Residual Learning} Our CREST algorithm carries out feature regression through the base and residual layers. The base layer consists of one convolutional layer which is formulated as the traditional DCFs. The difference between the base layer output and ground truth soft label is captured through the residual layers. Figure \ref{fig:pipeline} shows the \ryn{CREST pipeline.
The details are discussed below.} \subsection{DCF Reformulation}\label{sec:base} We revisit the DCF based framework and formulate it as our base layer. The DCFs learn a discriminative classifier and predict the target translation through searching the maximum value in the response map. We denote the input sample by $X$ and denote the corresponding Gaussian function label by $Y$. A correlation filter $W$ is then learned by solving the following minimization problem: \begin{equation} W^\star=\mathop{\arg\min}\limits_{W}||W\ast X-Y||^2+\lambda||W||^2, \label{eq:dcf} \end{equation} where $\lambda$ is the regularization parameter. Typically, the convolution operation between the correlation filter $W$ and the input $X$ is formulated into a dot product in the Fourier domain \cite{Henriques-eccv12-DCF,ma-cvpr15-lct,henriques-pami15-high}. We reformulate the learning process of DCFs as the loss minimization of the convolutional neural network. The general form of the loss function \cite{jia-arxiv14-caffe} can be written as: \begin{equation} L(W)=\frac{1}{N}\sum_i^{|N|}\mathcal{L}_W(X^{(i)})+\lambda r(W), \label{eq:loss} \end{equation} where $N$ is the number of samples, $\mathcal{L}_W(X^{(i)})$ ($i\in N$) is the loss of the $i$-th sample, and $r(W)$ is the weight decay. We set $N=1$ and take the L2 norm as $r(W)$. The loss function in Eq. \ref{eq:loss} can be written as: \begin{equation} L(W)=\mathcal{L}_W(X)+\lambda||W||^2, \label{eq:lossL2} \end{equation} where $\mathcal{L}_W(X)=||\mathcal{F}(X)-Y||^2$. It is equivalent to the L2 loss between $\mathcal{F}(X)$ and $Y$ where $\mathcal{F}(X)$ is the network output and $Y$ is the ground truth label. We take $\mathcal{F}(X)=W\ast X$ as the convolution operation on $X$, which can be achieved through one convolutional layer. The convolutional filter $W$ is equivalent to the correlation filter and the loss function in Eq. \ref{eq:lossL2} is equivalent to the DCFs objective function.
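As a concrete illustration of this regression view, the following numpy/SciPy sketch fits a single-channel filter $W$ by minimizing $\|W\ast X-Y\|^2+\lambda\|W\|^2$ with a generic gradient-based optimizer. All sizes, the $5\times5$ filter, and the choice of L-BFGS are illustrative assumptions, not the setup used in this paper (which operates on multi-channel deep features):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
X = rng.standard_normal((15, 15))                 # toy single-channel feature map
yy, xx = np.mgrid[0:15, 0:15]
Y = np.exp(-((yy - 7) ** 2 + (xx - 7) ** 2) / 8)  # Gaussian soft label, peak 1
lam = 1e-3                                        # regularization weight

def loss(w_flat):
    W = w_flat.reshape(5, 5)
    R = correlate2d(X, W, mode="same")            # spatial correlation response
    return np.sum((R - Y) ** 2) + lam * np.sum(W ** 2)

w0 = np.zeros(25)                                 # initial filter
res = minimize(loss, w0, method="L-BFGS-B")       # gradient-based stand-in
print(f"loss: {loss(w0):.3f} -> {res.fun:.3f}")
```

Since the objective is a convex quadratic in $W$, any gradient-based method drives the loss down; this is the sense in which the DCF can be trained as a convolutional layer rather than solved in closed form.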
As a result, we formulate the DCFs as one convolutional layer with L2 loss as the objective function. It is named the base layer in our network. Its filter size is set to cover the target object. The convolutional weights can be effectively calculated using the gradient descent method instead of the closed form solution \cite{henriques-pami15-high}. \subsection{Residual Learning}\label{sec:res} We formulate DCFs as a base layer represented by one convolutional layer. Ideally, the response map from the base layer output will be identical to the ground truth soft label. In practice, it is unlikely that a single layer network is able to accomplish that. Instead of stacking more layers, which may cause the degradation problem \cite{he-cvpr16-resnet}, we apply residual learning to effectively capture the difference between the base layer output and the ground truth. \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=1.0\linewidth]{figures/res.pdf}\\ \end{tabular} \end{center} \vspace{-3mm} \caption{Base and spatial residual layers.} \label{fig:res} \end{figure} Figure \ref{fig:res} shows the structure of the base and residual layers. We denote $\mathcal{H}(X)$ as the optimal mapping of input $X$ and $\mathcal{F}_B(X)$ as the output from the base layer. Rather than stacking more layers to approximate $\mathcal{H}(X)$, we expect these layers to approximate the residual function: $\mathcal{F}_{R}(X)=\mathcal{H}(X)-\mathcal{F}_B(X)$. As a result, our expected network output can be formulated as follows: \begin{eqnarray}\label{eq:res} \mathcal{F}(X)&=&\mathcal{F}_B(X)+\mathcal{F}_{R}(X)\nonumber\\ &=&\mathcal{F}_B(X,\{W_B\})+\mathcal{F}_{R}(X,\{W_{R}\}), \end{eqnarray} where $\mathcal{F}_B(X)=W_B\ast X$. The mapping $\mathcal{F}_{R}(X,\{W_{R}\})$ represents the residual learning and $W_{R}$ is a general form of convolutional layers with biases and ReLU \cite{nair-icml10-relu} omitted to simplify notation.
We adopt three layers with small filter sizes in the residual learning. They are set to capture the residual that is not present in the base layer output. Finally, input $X$ is regressed through the base and residual mapping to generate the output response map. In addition, we can also utilize the temporal residual which helps to capture the difference when the spatial residual is not effective. We develop a temporal residual learning network that has a similar structure to the spatial residual learning. The temporal input is extracted from the first frame which contains the initial object appearance. Let $X_t$ denote the input $X$ on frame $t$. Thus we have \begin{equation} \mathcal{F}(X_\textrm{t})=\mathcal{F}_B(X_\textrm{t})+\mathcal{F}_{SR}(X_\textrm{t})+\mathcal{F}_{TR}(X_\textrm{1}), \label{eq:rest} \end{equation} where $\mathcal{F}_{TR}(X_\textrm{1})$ is the temporal residual from the first frame. The proposed spatiotemporal residual learning process encodes elusive object representation into the response map generation framework and no additional data is needed to train the network. Figure \ref{fig:algo_visual} shows one example of the filter response from the base and residual layers. The feature maps are scaled for visualization purposes. Given an input image, we first crop the search patch centered at the estimated position of the previous frame. The patch is sent into our feature extraction network and then regressed into a response map through the base and residual layers. We observe that the base layer performs similarly to the traditional DCF based trackers to predict the response map. When target objects undergo small appearance variations, the difference between the base layer output and the ground truth is minor. The residual layers have little effect on the final response map.
However, when target objects undergo large appearance variations, such as background clutter, the response from the base layer is limited and may not differentiate the target from the background. Nevertheless, this limitation is alleviated by the residual layers, which effectively model the difference between the base layer output and the ground truth. This helps to reduce the noisy response values on the final output through the addition of base and residual layers. As a result, the target response is more robust to large appearance variations. \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.7\linewidth]{figures/algo/spa.pdf}\\ \end{tabular} \end{center} \vspace{-5mm} \caption{Visualization of \ryn{the} feature maps in the target localization step.} \label{fig:algo_visual} \vspace{-3mm} \end{figure} \section{Tracking via CREST} We illustrate the detailed procedure of CREST for visual tracking. As we do not need offline training, we present the tracking process through the aspects of model initialization, online detection, scale estimation and model update. {\flushleft \bf Model Initialization.} Given an input frame with the target location, we extract a training patch which is centered on the target object. This patch is sent into our framework for feature extraction and response mapping. We adopt the VGG network \cite{simonyan-iclr14-very} for feature extraction. Meanwhile, all the parameters in the base and residual layers are randomly initialized following a zero-mean Gaussian distribution. Our base and residual layers are well initialized after a few steps. {\flushleft \bf Online Detection.} The online detection scheme is straightforward. When a new frame comes, we extract the search patch from the center location predicted by the previous frame. The search patch has the same size as the training patch and is fed into our framework to generate a response map.
Once we have the response map, we locate the object by searching for the maximum response value. {\flushleft \bf Scale Estimation.} After we obtain the target center location, we extract search patches at different scales. These patches are then resized to the fixed size of the training patch. Thus the candidate object sizes in different scales are all normalized. We send these patches into our network to generate the response map. The width $w_\textrm{t}$ and height $h_\textrm{t}$ of the target object at frame $t$ are updated as: \begin{equation} (w_\textrm{t},h_\textrm{t})=\beta(w_\textrm{t}^\star,h_\textrm{t}^\star)+(1-\beta)(w_{\textrm{t}-1},h_{\textrm{t}-1}), \label{eq:scale} \end{equation} where $w_\textrm{t}^\star$ and $h_\textrm{t}^\star$ are the width and height of the scaled object with the maximum response value. The weight factor $\beta$ enables the smooth update of the target size. {\flushleft \bf Model Update.} We consistently generate training data during online tracking. For each frame, after predicting the target location we can generate the corresponding ground truth response map, and the search patch can be directly adopted as the training patch. The collected training patches and response maps from every $T$ frames are adopted as training pairs, which are fed into our network for online update. \section{Experiments} In this section, we introduce the implementation details and analyze the effect of each component including the base and residual layers. We then compare our CREST tracker with state-of-the-art trackers on the benchmark datasets for performance evaluation. \subsection{Experimental Setups} {\flushleft \bf Implementation Details.} We obtain the training patch from the first frame. Its size is 5 times the maximum of the object width and height. Our feature extraction network is from VGG-16 \cite{simonyan-iclr14-very} with only the first two pooling layers retained.
We extract the feature maps from the \textit{conv4-3} layer and reduce the feature channels to 64 through PCA dimensionality reduction, which is learned using the first frame image patch. The regression target map is generated using a two-dimensional Gaussian function with a peak value of 1.0. The weight factor $\beta$ for scale estimation is set to 0.6. Our experiments are performed on a PC with an i7 3.4GHz CPU and a GeForce GTX Titan Black GPU with the MatConvNet toolbox \cite{Vedaldi-mm15-matconvnet}. In the training stage, we iteratively apply the Adam optimizer \cite{Diederik-iclr15-adam} with a learning rate of 5e-8 to update the coefficients, until the loss in Eq. \ref{eq:lossL2} is below the given threshold of 0.02. We observe that in practice, our network converges from random initialization in a few hundred iterations. We update the model every 2 frames for only 2 iterations with a learning rate of 2e-9. {\flushleft \bf Benchmark Datasets.} The experiments are conducted on three standard benchmarks: OTB-2013 \cite{wu-cvpr13-otb}, OTB-2015 \cite{wu-pami15-otb} and VOT-2016 \cite{kristan-eccvw16-vot}. The first two datasets contain 50 and 100 sequences, respectively. They are annotated with ground truth bounding boxes and various attributes. In the VOT-2016 dataset, there are 60 challenging videos from a set of more than 300 videos. {\flushleft \bf Evaluation Metrics.} We follow the standard evaluation metrics from the benchmarks. For OTB-2013 and OTB-2015, we use the one-pass evaluation (OPE) with precision and success plot metrics. The precision metric measures the rate of frame locations within a certain threshold distance from those of the ground truth. The threshold distance is set to 20 pixels for all the trackers. The success plot metric measures the overlap ratio between predicted and ground truth bounding boxes.
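The overlap ratio just described is the standard intersection-over-union. A small, self-contained helper (hypothetical; the benchmarks ship their own evaluation scripts) for boxes given as (x, y, w, h):

```python
def iou(box_a, box_b):
    """Overlap ratio (intersection over union) for boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # x-overlap
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # y-overlap
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 10, 10)))  # half-overlapping boxes
```

Sweeping the IoU threshold from 0 to 1 and plotting the fraction of frames above each threshold yields the success plot; its area under the curve (AUC) is the summary score reported below.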
For the VOT-2016 dataset, the performance is measured in terms of expected average overlap (EAO), average overlap (AO), accuracy values (Av), accuracy ranks (Ar), robustness values (Rv) and robustness ranks (Rr). The average overlap is similar to the AUC metric in the OTB benchmarks. \def0.49\linewidth{0.495\linewidth} \renewcommand{\tabcolsep}{.1pt} \begin{figure}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.49\linewidth]{figures/ablation/base_error.pdf}& \includegraphics[width=0.49\linewidth]{figures/ablation/base_auc.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{Precision and success plots using one-pass evaluation on the OTB-2013 dataset. The performance of the base layer without scale estimation is similar \ryn{to} that of HCFT \cite{chao-iccv15-HCF} on average.} \label{fig:abla1} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.49\linewidth]{figures/ablation/base2_error.pdf}& \includegraphics[width=0.49\linewidth]{figures/ablation/base2_AUC.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{Precision and success plots using one-pass evaluation on the OTB-2013 dataset. The performance of the base layer is gradually improved by the integration of the spatiotemporal residuals.} \label{fig:abla2} \vspace{-3mm} \end{figure} \subsection{Ablation Studies}\label{sec:ablation} Our CREST algorithm contains base and residual layers. As analyzed in Section \ref{sec:base}, the base layer is formulated similarly to existing DCF based trackers, and the residual layers refine the response map. In this section, we conduct an ablation analysis to compare the performance of the base layer with those of DCF based trackers. In addition, we evaluate the base layer and its integration with the spatiotemporal residual layers. We analyze our CREST algorithm on the OTB-2013 dataset. We first compare the base layer performance with that of HCFT \cite{chao-iccv15-HCF}, which is a traditional DCF based tracker with convolutional features.
The objective function of HCFT is shown in Eq. \ref{eq:dcf}, which is the same as ours. As there is no scale estimation in HCFT, we remove this step in our algorithm. Figure \ref{fig:abla1} shows the quantitative evaluation under the AUC and average distance precision scores. We observe that the performance of our base layer is similar to that of HCFT on average. It indicates that our base layer achieves performance similar to that of DCF based trackers with convolutional features. The DeepSRDCF \cite{Danelljan-iccvw15-DeepSRDCF} and CCOT \cite{martin-eccv16-beyond} trackers differ from traditional DCF based trackers in that they add a spatial constraint to the regularization term, which differs from our objective function. We also analyze the effect of residual integration in Figure \ref{fig:abla2}. The AUC and average distance precision scores show that the base layer obtains a clear improvement through the integration of spatial residual learning. Meanwhile, the temporal residual contributes little to the overall performance. \subsection{Comparisons with State-of-the-art} \def0.49\linewidth{0.49\linewidth} \renewcommand{\tabcolsep}{.1pt} \begin{figure}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.49\linewidth]{figures/otb2013/error_2013.pdf}& \includegraphics[width=0.49\linewidth]{figures/otb2013/AUC_2013.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{Precision and success plots over all 50 sequences using one-pass evaluation on the OTB-2013 Dataset. The legend contains the area-under-the-curve score and the average distance precision score at 20 pixels for each tracker.} \label{fig:otb2013} \end{figure} We conduct quantitative and qualitative evaluations on the benchmark datasets including OTB-2013 \cite{wu-cvpr13-otb}, OTB-2015 \cite{wu-pami15-otb} and VOT-2016 \cite{kristan-eccvw16-vot}. The details are discussed \ryn{in the following}.
\subsubsection{Quantitative Evaluation} {\flushleft \bf OTB-2013 Dataset.} We compare our CREST tracker with 29 trackers from the OTB-2013 benchmark \cite{wu-cvpr13-otb} and 21 other state-of-the-art trackers including KCF \cite{Henriques-eccv12-DCF}, MEEM \cite{zhang-eccv14-meem}, TGPR \cite{gao-eccv14-transfer}, DSST \cite{martin-bmvc14-accurate}, RPT \cite{li-cvpr15-reliable}, MUSTer \cite{hong-cvpr15-muster}, LCT \cite{ma-cvpr15-lct}, HCFT \cite{chao-iccv15-HCF}, FCNT \cite{wang-iccv15-visual}, SRDCF \cite{martin-iccv15-learning}, CNN-SVM \cite{hong-icml15-cnnsvm}, DeepSRDCF \cite{Danelljan-iccvw15-DeepSRDCF}, DAT \cite{possegger-cvpr15-defense}, Staple \cite{bertinetto-cvpr16-staple}, SRDCFdecon \cite{danelljan-CVPR16-adaptive}, CCOT \cite{martin-eccv16-beyond}, GOTURN \cite{held-eccv16-learning}, SINT \cite{tao-cvpr16-siamese}, SiamFC \cite{bertinetto-eccv16-fully}, HDT \cite{qi-cvpr16-hdt} and SCT \cite{choi-cvpr16-visual}. The evaluation is conducted on 50 video sequences using one-pass evaluation with the distance precision and overlap success metrics. \def0.49\linewidth{0.49\linewidth} \renewcommand{\tabcolsep}{.1pt} \begin{figure}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.49\linewidth]{figures/otb2015/error_2015.pdf}& \includegraphics[width=0.49\linewidth]{figures/otb2015/auc_2015.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{Precision and success plots over all 100 sequences using one-pass evaluation on the OTB-2015 Dataset. The legend contains the area-under-the-curve score and the average distance precision score at 20 pixels for each tracker.} \label{fig:otb2015} \end{figure} Figure \ref{fig:otb2013} shows the evaluation results. We only show the top 10 trackers for presentation clarity. The AUC and distance precision scores for each tracker are reported in the figure legend. Among all the trackers, our CREST tracker performs favorably in both distance precision and overlap success rate.
In Figure \ref{fig:otb2013}, we exclude the MDNet tracker \cite{nam-cvpr16-mdnet} as it uses tracking videos for training. Overall, the precision and success plots demonstrate that our CREST tracker performs favorably against state-of-the-art trackers. \def0.245\linewidth{0.245\linewidth} \renewcommand{\tabcolsep}{.1pt} \begin{figure*}[t] \begin{center} \begin{tabular}{cccc} \includegraphics[width=0.245\linewidth]{figures/otb2013/attr/background-clutter.pdf}& \includegraphics[width=0.245\linewidth]{figures/otb2013/attr/Deformation.pdf}& \includegraphics[width=0.245\linewidth]{figures/otb2013/attr/illumination_variation.pdf}& \includegraphics[width=0.245\linewidth]{figures/otb2013/attr/In-Plane-Rotation.pdf}\\ \includegraphics[width=0.245\linewidth]{figures/otb2013/attr/low-resolution.pdf}& \includegraphics[width=0.245\linewidth]{figures/otb2013/attr/out-of-Plane-Rotation.pdf}& \includegraphics[width=0.245\linewidth]{figures/otb2013/attr/motion-blur.pdf}& \includegraphics[width=0.245\linewidth]{figures/otb2013/attr/fast_motion.pdf}\\ \end{tabular} \end{center} \vspace{-5mm} \caption{The success plots over eight tracking challenges, \ryn{including} background clutter, deformation, illumination variation, in-plane rotation, low resolution, out-of-plane rotation, motion blur and fast motion.} \label{fig:attr} \end{figure*} In Figure \ref{fig:attr}, we further analyze the tracker performance under different video attributes (e.g., background clutter, deformation and illumination variation) annotated in the benchmark. We show the one-pass evaluation on the AUC score under eight main video attributes. The results indicate that our CREST tracker is effective in handling background clutter and illumination variation. This is mainly because the residual layers capture appearance changes for an effective model update.
When the target appearance undergoes obvious changes or becomes similar to the background, existing DCF based trackers (e.g., HCFT, CCOT) cannot accurately locate the target object. They do not perform as well as SINT, which makes sparse predictions by generating candidate bounding boxes and verifying them through a Siamese network. This limitation is reduced in our CREST tracker, where the residuals from the spatial and temporal domains effectively narrow the gap between the noisy output of the base layer and the ground truth label. The dense prediction becomes accurate and the target location can be correctly identified. We observe similar performance under deformation, where LCT achieves a favorable result by integrating a redetection scheme. However, it does not perform as well as our CREST tracker, in which the temporal residual is integrated into the framework and optimized as a whole. In motion blur and fast motion sequences, our tracker does not perform as well as CCOT. This can be attributed to the convolutional features from multiple layers that CCOT adopts; their feature representation performs better than ours from a single-layer output. Feature representations from multiple layers will be considered in our future work. \def\hspace{0mm}{\hspace{0mm}} \renewcommand{\tabcolsep}{15pt} \begin{table*}[t] \caption{Comparison with \ryn{the} state-of-the-art trackers on the VOT 2016 dataset.
The results are presented in terms of expected average overlap (EAO), average overlap (AO), accuracy value (Av), accuracy rank (Ar), robustness value (Rv), and robustness rank (Rr).} \centering \begin{tabular}{cccccccc} \toprule &\small{CCOT}&\small{Staple}&\small{EBT}&\small{DSRDCF}&\small{MDNet}&\small{SiamFC}&\small{CREST}\\ \midrule \small{EAO}&\small{0.331}&\small{0.295}&\small{0.291}&\small{0.276}&\small{0.257}&\small{0.277}&\small{0.283}\\ \small{AO}&\small{0.469}&\small{0.388}&\small{0.370}&\small{0.427}&\small{0.457}&\small{0.421}&\small{0.435}\\ \small{Av}&\small{0.523}&\small{0.538}&\small{0.441}&\small{0.513}&\small{0.533}&\small{0.549}&\small{0.514}\\ \small{Ar}&\small{2.167}&\small{2.100}&\small{4.383}&\small{2.517}&\small{1.967}&\small{2.465}&\small{2.833}\\ \small{Rv}&\small{0.850}&\small{1.350}&\small{0.900}&\small{1.167}&\small{1.204}&\small{1.382}&\small{1.083}\\ \small{Rr}&\small{2.333}&\small{3.933}&\small{2.467}&\small{3.550}&\small{3.250}&\small{2.853}&\small{2.733}\\ \bottomrule \end{tabular} \label{tab:vot} \end{table*} {\flushleft \bf OTB-2015 Dataset.} We also compare our CREST tracker on the OTB-2015 benchmark \cite{wu-pami15-otb} with the 29 trackers in \cite{wu-cvpr13-otb} and the state-of-the-art trackers, including KCF \cite{Henriques-eccv12-DCF}, MEEM \cite{zhang-eccv14-meem}, TGPR \cite{gao-eccv14-transfer}, DSST \cite{martin-bmvc14-accurate}, MUSTer \cite{hong-cvpr15-muster}, LCT \cite{ma-cvpr15-lct}, HCFT \cite{chao-iccv15-HCF}, SRDCF \cite{martin-iccv15-learning}, CNN-SVM \cite{hong-icml15-cnnsvm}, DeepSRDCF \cite{Danelljan-iccvw15-DeepSRDCF}, Staple \cite{bertinetto-cvpr16-staple}, SRDCFdecon \cite{danelljan-CVPR16-adaptive}, CCOT \cite{martin-eccv16-beyond} and HDT \cite{qi-cvpr16-hdt}. We show the results of one-pass evaluation using the distance precision and overlap success rate in Figure \ref{fig:otb2015}. It indicates that our CREST tracker performs better than the DCF based trackers HCFT and HDT, which use convolutional features.
Overall, the CCOT method achieves the best result. Meanwhile, CREST achieves performance similar to that of DeepSRDCF in both distance precision and overlap success. Since this dataset contains more videos with motion blur and fast motion, our CREST tracker is less effective than CCOT in handling these sequences. {\flushleft \bf VOT-2016 Dataset.} We compare our CREST tracker with state-of-the-art trackers on the VOT 2016 benchmark, including CCOT \cite{martin-eccv16-beyond}, Staple \cite{bertinetto-cvpr16-staple}, EBT \cite{zhu-cvpr16-beyond}, DeepSRDCF \cite{Danelljan-iccvw15-DeepSRDCF}, MDNet \cite{nam-cvpr16-mdnet} and SiamFC \cite{bertinetto-eccv16-fully}. As indicated in the VOT 2016 report \cite{kristan-eccvw16-vot}, the strict state-of-the-art bound is 0.251 under the EAO metric. Trackers whose EAO values exceed this bound are considered state-of-the-art. Table \ref{tab:vot} shows the results of our CREST tracker and the state-of-the-art trackers. Among these methods, CCOT achieves the best results under the EAO metric. Meanwhile, the performance of our CREST tracker is similar to those of Staple and EBT. In addition, these trackers perform better than the DeepSRDCF, MDNet and SiamFC trackers. Note that on this dataset, MDNet does not use external tracking videos for training. According to the analysis in the VOT report and the definition of the strict state-of-the-art bound, all these trackers can be regarded as state-of-the-art.
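As a quick check, applying the strict bound of 0.251 to the EAO values in Table~\ref{tab:vot} confirms that every compared tracker clears it (values copied from the table; the dictionary encoding is ours):

```python
# EAO values from Table (tab:vot); VOT 2016 strict state-of-the-art bound: 0.251
eao = {"CCOT": 0.331, "Staple": 0.295, "EBT": 0.291, "DeepSRDCF": 0.276,
       "MDNet": 0.257, "SiamFC": 0.277, "CREST": 0.283}
sota = [name for name, val in sorted(eao.items(), key=lambda kv: -kv[1])
        if val > 0.251]
```

Sorting in descending EAO order places CCOT first, with CREST between EBT and SiamFC.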
\subsubsection{Qualitative Evaluation} \def0.33\linewidth{0.33\linewidth} \def0.7\linewidth{0.7\linewidth} \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \begin{minipage}[t]{0.47\linewidth} \centering \renewcommand{\tabcolsep}{.1pt} \begin{tabular}{ccc} \vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/football/153.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/football/288.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/football/310.jpg}\\ \vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/bird1/121.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/bird1/167.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/bird1/203.jpg}\\ \vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/liquor/812.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/liquor/880.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/liquor/1099.jpg}\\ \vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/skiing/3.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/skiing/12.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/skiing/31.jpg}\\ \vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/football1/24.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/football1/36.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/football1/62.jpg}\\ \includegraphics[width=0.33\linewidth]{figures/visual/ironman/25.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/ironman/45.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/ironman/72.jpg}\\ \end{tabular} \end{minipage} \begin{minipage}[t]{0.47\linewidth} \centering \renewcommand{\tabcolsep}{.1pt} \begin{tabular}{ccc} \vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/bolt2/10.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/bolt2/26.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/bolt2/54.jpg}\\ 
\vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/kitesurf/27.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/kitesurf/39.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/kitesurf/64.jpg}\\ \vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/box/451.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/box/465.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/box/491.jpg}\\ \vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/matrix/34.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/matrix/44.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/matrix/53.jpg}\\ \vspace{-0.5mm}\includegraphics[width=0.33\linewidth]{figures/visual/basketball/206.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/basketball/329.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/basketball/658.jpg}\\ \includegraphics[width=0.33\linewidth]{figures/visual/skating-2/138.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/skating-2/211.jpg}& \includegraphics[width=0.33\linewidth]{figures/visual/skating-2/252.jpg}\\ \end{tabular} \end{minipage}\\ \end{tabular} \begin{tabular}{c} \includegraphics[width=0.7\linewidth]{figures/visual/fig.pdf}\\ \end{tabular} \end{center} \vspace{-5mm} \caption{Qualitative evaluation of our CREST tracker, MUSTer \cite{hong-cvpr15-muster}, SRDCFDecon \cite{danelljan-CVPR16-adaptive}, DeepSRDCF \cite{Danelljan-iccvw15-DeepSRDCF}, CCOT \cite{martin-eccv16-beyond} on twelve challenging sequences (from left to right and top to down: \emph{Football}, \emph{Bolt2}, \emph{Bird1}, \emph{KiteSurf}, \emph{Liquor}, \emph{Box}, \emph{Skiing}, \emph{Matrix}, \emph{Football1}, \emph{Basketball}, \emph{Ironman} and \emph{Skating-2}, respectively). 
Our CREST tracker performs favorably against \ryn{the} state-of-the-art trackers.} \label{fig:visual} \end{figure*} Figure \ref{fig:visual} shows some results of the top performing trackers: MUSTer \cite{hong-cvpr15-muster}, SRDCFDecon \cite{danelljan-CVPR16-adaptive}, DeepSRDCF \cite{Danelljan-iccvw15-DeepSRDCF}, CCOT \cite{martin-eccv16-beyond} and our CREST tracker on 12 challenging sequences. The MUSTer tracker does not perform well on the presented sequences. This is because it adopts empirical operations for feature extraction (e.g., SIFT \cite{lowe-ijcv04-sift}). Although keypoint matching and long-term processing are involved, handcrafted features with limited representation power are unable to differentiate the target from the background. In comparison, SRDCFDecon incorporates the tracking-by-detection scheme to jointly optimize the model parameters and sample weights. It performs well under in-plane rotation (\emph{Box}), out-of-view (\emph{Liquor}) and deformation (\emph{Skating2}) because of detection. However, the sparse response leads to limitations under background clutter (\emph{Matrix}), occlusion (\emph{Basketball}, \emph{Bird1}) and fast motion (\emph{Bolt2}). DeepSRDCF improves the performance by combining convolutional features with SRDCF. The dense prediction performs well in the fast motion (\emph{Bolt2}) and occlusion (\emph{Basketball}) scenes. However, the direct integration does not further exploit the model potential, and thus limitations occur under illumination variation (\emph{Ironman}) and out-of-view (\emph{Bird1}). Instead of involving multiple convolutional features from different layers like CCOT, our CREST algorithm further exploits the model potential through the DCF formulation and improves the performance through residual learning. We generate the target prediction in the base layer while capturing the elusive residual via the residual layers.
These residual layers facilitate the learning process since they are jointly optimized with the base layer. As a result, the response map predicted by our network is more accurate for target localization, especially in the presented challenging scenarios. Overall, the visual evaluation indicates that our CREST tracker performs favorably against state-of-the-art trackers. \section{Concluding Remarks} In this paper, we propose CREST, which formulates the correlation filter as a one-layer convolutional neural network named the base layer. It integrates convolutional feature extraction, correlation response map generation and model update as a whole for end-to-end training and prediction. This convolutional layer is fully differentiable and allows the convolutional filters to be updated via online backpropagation. Meanwhile, we exploit residual learning to take target appearance changes into account. We develop spatiotemporal residual layers to capture the difference between the base layer output and the ground truth Gaussian label. They refine the response map by reducing noisy values, which alleviates model degradation. Experiments on the standard benchmarks indicate that our CREST tracker performs favorably against state-of-the-art trackers. \section{Acknowledgements} \vspace{-1mm} This work is supported in part by the NSF CAREER Grant \#1149783, and gifts from Adobe and Nvidia. \small \bibliographystyle{ieee}
\section{Introduction} The exponential function is fundamental in mathematics. In Lie theory, the exponential map connects a Lie algebra and its Lie group. Idempotented versions of quantized universal enveloping algebras of simple Lie algebras have been categorified \cite{KL1,KL2,R}. This paper can be viewed as a small step towards lifting the exponential map to the categorical level. We focus on the case of $\frak{sl}_2$ and restrict to a single nilpotent element. Consider the expansion $$\exp(-f)=\sum\limits_{k\ge0}(-1)^k\frac{f^k}{k!},$$ where $f \in \frak{sl}_2$ is the lower-triangular matrix $\left( \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right) $. Categorifications of the divided power $f^{(k)}=\frac{f^k}{k!}$ and of its quantized version $\frac{f^k}{[k]!}$ naturally appear in the categorified quantum $\frak{sl}_2$ \cite{La}. The generator $f$ is lifted to a bimodule $\cal{F}'$ over a direct sum of the nilHecke algebras $\bigoplus\limits_{n\ge0} NH_n$. The tensor powers $\cal{F}'^k=\cal{F}'^{\otimes k}$ of the bimodule admit a direct sum decomposition $\cal{F}'^{k} \cong \bigoplus\limits_{k!} \cal{F}'^{(k)}$. It is natural to expect a lifting of $\exp(-f)$ to a cochain complex whose degree $k$ component is $\cal{F}'^{(k)}$ for $k \ge 0$. A nontrivial differential is needed to link adjacent components. The diagrammatic approach is widely used in categorification and plays a significant role in the present paper as well. We provide a modification $\wt{NH}$ of $\bigoplus\limits_{n\ge0} NH_n$ by extending the nilHecke algebras with an additional generator. The new generator is depicted by a short strand which links $NH_n$ and $NH_{n+1}$ together. The induction $\wt{NH}$-bimodule still exists in this setting, denoted $\cal{F}$. The short strand induces a $\wt{NH}$-bimodule homomorphism $\wt{NH} \ra \cal{F}$. This morphism and its suitable generalizations $\cal{F}^{k} \ra \cal{F}^{k+1}$ define a differential on $ \bigoplus\limits_{k\ge0}\cal{F}^{(k)}[-k] $.
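On the decategorified level the two expansions are mutually inverse; a short check, using the multiplication rule $f^{(j)}f^{(k-j)}=\binom{k}{j}f^{(k)}$ for divided powers:

```latex
\exp(f)\exp(-f)
  \;=\; \sum_{k\ge 0}\sum_{j=0}^{k}(-1)^{k-j}\, f^{(j)} f^{(k-j)}
  \;=\; \sum_{k\ge 0}\Bigl(\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}\Bigr) f^{(k)}
  \;=\; f^{(0)} \;=\; 1,
```

since the inner alternating sum vanishes for every $k>0$. This is the identity that a fully satisfactory categorification of the exponential would need to lift.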
Due to the existence of the short strand, the extension group $\Ext^1(\cal{F}, \wt{NH})$ of bimodules is nontrivial. We lift the expansion $\exp(f)=\sum\limits_{k\ge0}f^{(k)}$ to a complex $\left( \bigoplus\limits_{k\ge0}\cal{F}^{(k)}, d \right)$, where the differential $d$ consists of certain elements of Ext groups. Unfortunately, the two resulting complexes, lifting $\exp(-f)$ and $\exp(f)$, respectively, are not invertible, as explained in Section \ref{Sec dis}. A more elaborate or just a different construction is needed to more adequately categorify exponentiation. \begin{prob} Find a framework for categorification of the exponential map, where an exact endofunctor $\cal{F}$ in a triangulated category $\cal{C}$ lifts to two mutually-inverse exact endofunctors $\exp(\cal{F})$ and $\exp(-\cal{F})$ on some triangulated category $\cal{C}^e$. These endofunctors should descend to maps $\exp([\cal{F}])$ and $\exp(-[\cal{F}])$ on the Grothendieck ring of $\cal{C}^e$, where $[\cal{F}]$ is the map induced by $\cal{F}$ on the Grothendieck group $K_0(\cal{C})$. Grothendieck groups of $\cal{C}$ and $\cal{C}^e$ should be suitably related. \end{prob} The recently developed theory of stability in representations of the symmetric group (see \cite{CEF}, for instance) can be related to a diagrammatic category similar to the one for $\wt{NH}$. The nilHecke algebra $NH_n$ should be replaced by the group algebra $\mathbb{C}[S_n]$ of the symmetric group. Adding a short strand with suitable sliding and commutativity relations will enlarge the direct sum $\oplus_{n\ge 0}\mathbb{C}[S_n]$ to a non-unital idempotented algebra $\wt{S}$. Alternatively, this idempotented algebra can be viewed as describing a monoidal category with a single generating object. One of the first key results in the stable representation theory of $S_n$ can be restated as the theorem that the category of finitely-generated right $\wt{S}$-modules is Noetherian.
The first author used endofunctors in the category of finite dimensional $\oplus_{n\ge 0}\mathbb{C}[S_n]$ modules to categorify the Heisenberg algebra \cite{Kh}. The second author studied the category of finite dimensional left $\wt{S}$ modules to build a categorical boson-fermion correspondence \cite{T}. The construction of adding short strands in diagrammatic categories was also used in categorification of the polynomial ring in \cite{KS}, and the ring of integers localized at two in \cite{KT}. One possible application of categorification of exponentiation would be to categorification of integral forms of Lie groups and the exponential map between a Lie algebra and its Lie group. Another use might be for categorification of Vassiliev invariants, where parameter $h$ appears as the logarithm of $q$. After categorification $q$ becomes the grading shift, and some sophisticated structure refining the shift functor would be needed to define its logarithm. \vspace{0.1in} \n{\bf Acknowledgments.} M.K. was partially supported by the NSF grant DMS-1406065 while working on the paper. Y.T. was partially supported by the NSFC 11601256. \section{The algebra} \subsection{The definition of $\nh$} \begin{defn} \label{def nh} Define an algebra $\nh$ by generators $1_n$ for $n \ge 0$, $\pa_{i,n}$ for $1 \le i \le n-1$, $x_{i,n}$ for $1 \le i \le n$, and $v_{i,n}$ for $1 \le i \le n$, subject to the relations consisting of three groups: \n(1) Idempotent relations: \begin{align*} 1_n 1_m=\delta_{n,m}1_n, &\qquad 1_n \pa_{i,n} = \pa_{i,n} 1_n = \pa_{i,n}, \\ 1_n x_{i,n} = x_{i,n} 1_n = x_{i,n}, &\qquad 1_{n-1} v_{i,n} = v_{i,n} 1_n = v_{i,n}. 
\end{align*} \n(2) NilHecke relations: \begin{align*} x_{i,n} x_{j,n} = x_{j,n} x_{i,n}, & \\ x_{i,n} \pa_{j,n} = \pa_{j,n} x_{i,n} ~~~\mbox{if}~|i-j|>1, &\qquad \pa_{i,n} \pa_{j,n} = \pa_{j,n} \pa_{i,n} ~~~\mbox{if}~|i-j|>1, \\ \pa_{i,n} \pa_{i,n} =0, &\qquad \pa_{i,n}\pa_{i+1,n}\pa_{i,n}=\pa_{i+1,n}\pa_{i,n}\pa_{i+1,n}, \\ x_{i,n}\pa_{i,n}-\pa_{i,n}x_{i+1,n}=1_n, &\qquad \pa_{i,n}x_{i,n}-x_{i+1,n}\pa_{i,n}=1_n. \end{align*} \n(3) Short strands relations: \begin{align*} v_{i,n} x_{j,n} = x_{j,n-1} v_{i,n} ~~~\mbox{if}~i > j, &\qquad v_{i,n} x_{j,n} = x_{j-1,n-1} v_{i,n} ~~~\mbox{if}~i <j, \\ v_{i,n} \pa_{j,n} = \pa_{j,n-1} v_{i,n} ~~~\mbox{if}~i > j+1, &\qquad v_{i,n} \pa_{j,n} = \pa_{j-1,n-1} v_{i,n} ~~~\mbox{if}~i <j, \\ v_{i,n} v_{j,n+1}=v_{j,n} v_{i+1,n+1}~~~\mbox{if}~i \ge j, & \\ v_{i,n} \pa_{i,n} = v_{i+1,n} \pa_{i,n}. & \qquad \mbox{(Exchange relation)} \end{align*} \end{defn} \begin{figure}[h] \begin{overpic} [scale=0.4]{alg1.eps} \put(11,5){$i$} \put(10,0){$x_{i,n}$} \put(45,5){$i$} \put(51,5){$i+1$} \put(48,0){$\pa_{i,n}$} \put(89,5){$i$} \put(88,0){$v_{i,n}$} \end{overpic} \caption{Generators of $\nh$.} \label{fig alg1} \end{figure} The algebra $\nh$ can be described diagrammatically. The idempotent $1_n$ is denoted by $n$ vertical strands. In particular, $1_0$ is denoted by the empty diagram. The generator $x_{i,n}$ is denoted by $n$ vertical strands with a dot on the $i$-th strand, and $\pa_{i,n}$ is denoted by $n$ strands with an $(i,i+1)$ crossing. The new generator $v_{i,n}$ is denoted by a diagram with $n-1$ vertical strands and one short strand in the $i$-th position. The short strand has no endpoint at the top and one endpoint at the bottom; see figure \ref{fig alg1}. The product $ab$ of two diagrams $a$ and $b$ is the vertical concatenation of $a$ and $b$, where $a$ is at the top and $b$ is at the bottom. The product is zero unless the numbers of endpoints match.
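The vertical-concatenation product can be modeled at a toy level: an element of $1_m \nh 1_n$ carries a top endpoint count $m$ and a bottom endpoint count $n$, and a product is zero unless the counts match. A small illustrative sketch (the triple encoding is ours; labels merely record concatenation and all diagrammatic relations are ignored):

```python
def compose(a, b):
    """Toy model of the product in an idempotented algebra: elements are
    (top, bottom, label) triples standing for diagrams in 1_top * A * 1_bottom.
    Stacking a over b is defined only when a's bottom endpoint count
    matches b's top endpoint count; otherwise the product is zero (None)."""
    (ma, na, la), (mb, nb, lb) = a, b
    if na != mb:
        return None  # zero product: endpoint counts do not match
    return (ma, nb, la + lb)

def idem(n):
    # the idempotent 1_n: n vertical strands, empty label
    return (n, n, "")
```

This mirrors the idempotent relations above: $1_n 1_m = \delta_{n,m} 1_n$, and an element such as $v_n$, with $n-1$ top and $n$ bottom endpoints, satisfies $1_{n-1} v_n = v_n 1_n = v_n$.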
The relations of the second group are the defining relations of the nilHecke algebras. The relations of the third group concern short strands. The first three lines are isotopy relations of disjoint diagrams. The last line says that the short strand is exchangeable between the $i$-th and $(i+1)$-th positions when composed with the crossing. We call it the {\em exchange relation}. In addition to the isotopy relations of disjoint diagrams, other local relations are drawn in figure \ref{fig alg2}. Let $a \od b \in \nh$ denote the horizontal concatenation of $a$ and $b$, where $a$ is on the left and $b$ is on the right. The element $a \od b$ does not depend on the heights of $a$ and $b$ by the isotopy relations of disjoint diagrams. \begin{figure}[h] \begin{overpic} [scale=0.3]{alg2.eps} \put(30,30){$a \od b$} \put(18,49){$a$} \put(45,49){$b$} \put(37,41){$a$} \put(26,41){$b$} \put(9,13){$=0$} \end{overpic} \caption{Local relations of $\nh$.} \label{fig alg2} \end{figure} The algebra $\nh$ is idempotented, i.e. it has a complete system of mutually orthogonal idempotents $\{1_n\}_{n\ge 0},$ so that $$ \nh \ = \ \bigoplus\limits_{m,n\ge 0} 1_m \nh 1_n.$$ Let $\nhmn$ denote its component $1_m \nh 1_n$. It is spanned by diagrams with $m$ endpoints at the top and $n$ endpoints at the bottom. The new generator satisfies $v_{i,n} \in \nh^{n-1}_{n}$. Clearly, $\nhmn=0$ if $n<m$. Let $NH_n$ denote the nilHecke algebra with the generators $x_i$ and $\pa_i$. There is a surjective homomorphism $\psi_n: NH_n \ra \nh_{n}^{n}$ defined by $\psi_n(1)=1_n, \psi_n(x_i)=x_{i,n}, \psi_n(\pa_{i})=\pa_{i,n}$. We will prove that $\psi_n$ is an isomorphism in Section \ref{Sec basis}. A combination of the exchange relation, the isotopy relations and the nilHecke relations implies the following relations; see figure \ref{fig alg3}.
\begin{figure}[h] \begin{overpic} [scale=0.3]{alg3.eps} \end{overpic} \caption{Two induced relations.} \label{fig alg3} \end{figure} Motivated by these relations, we introduce a new diagram, the circle crossing; see figure \ref{fig alg4}. It defines an element \begin{gather} \label{def s} s_{1,2}:=\pa_{1,2} x_{1,2} - x_{1,2} \pa_{1,2} = x_{2,2} \pa_{1,2} - \pa_{1,2} x_{2,2} \in NH_2^2. \end{gather} The relations in figure \ref{fig alg3} can be rewritten in terms of the new diagram $s_{1,2}$. The short strand can slide through the circle crossing. We call this relation the {\em slide relation}. As a corollary, the horizontal concatenation of two short strands is fixed by the circle crossing. We call it the {\em relation (F)}. \begin{figure}[h] \begin{overpic} [scale=0.3]{alg4.eps} \end{overpic} \caption{A new diagram $s_{1,2}$, the slide relation, and the relation (F).} \label{fig alg4} \end{figure} We discuss the interaction of the new diagram $s_{1,2}$ with the generators of $\nh$ in the following. Firstly, we check diagrammatically (see figure \ref{fig alg5}) that $s_{1,2}s_{1,2}=1_2$, and \begin{gather} \label{eq s cr} s_{1,2} \pa_{1,2} = \pa_{1,2}, \qquad \pa_{1,2} s_{1,2} = - \pa_{1,2}. \end{gather} \begin{figure}[h] \begin{overpic} [scale=0.3]{alg5.eps} \end{overpic} \caption{Local relations involving the circle crossing.} \label{fig alg5} \end{figure} Secondly, we use the nilHecke relations to deduce more relations; see figure \ref{fig alg6}. For $1\le i \le n-1$, define $$s_{i,n}=1_{i-1} \od s_{1,2} \od 1_{n-i-1} \in NH_n^n$$ by adding $i-1$ and $n-i-1$ vertical strands to the left and right of $s_{1,2}$, respectively. \begin{figure}[h] \begin{overpic} [scale=0.3]{alg6.eps} \end{overpic} \caption{More local relations for the circle crossings.} \label{fig alg6} \end{figure} The diagrams used in defining $s_{1,2}$ do not involve the short strand. The same diagram defines an element $s_1:=\pa_{1} x_{1} - x_{1} \pa_{1} \in NH_2$ such that $\psi_2(s_1)=s_{1,2}$.
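Under the standard polynomial action of $NH_2$, where $\pa_1$ acts as the divided difference operator and $x_1$ acts by multiplication, the element $s_1 = \pa_1 x_1 - x_1 \pa_1$ acts as the transposition of $x_1$ and $x_2$. This can be checked by machine; a small sketch, encoding integer polynomials in $x_1, x_2$ as exponent-to-coefficient dictionaries (the encoding is ours):

```python
def swap(f):                       # the transposition x1 <-> x2
    return {(b, a): c for (a, b), c in f.items()}

def mul_x1(f):                     # multiplication by x1
    return {(a + 1, b): c for (a, b), c in f.items()}

def divdiff(f):
    # divided difference (f - swap(f)) / (x1 - x2), computed exactly via
    # (x1^n - x2^n)/(x1 - x2) = sum_j x1^(n-1-j) x2^j on each monomial
    out = {}
    for (a, b), c in f.items():
        lo, hi, sign = ((b, a, 1) if a > b else (a, b, -1))
        for j in range(hi - lo):   # empty when a == b
            k = (hi - 1 - j, lo + j)
            out[k] = out.get(k, 0) + sign * c
    return {k: v for k, v in out.items() if v}

def sub(f, g):
    out = dict(f)
    for k, v in g.items():
        out[k] = out.get(k, 0) - v
    return {k: v for k, v in out.items() if v}

def s_op(f):                       # s_1 = d x1 - x1 d, cf. Eq. (def s)
    return sub(divdiff(mul_x1(f)), mul_x1(divdiff(f)))
```

One can verify on examples that `s_op` agrees with `swap`, is an involution, and that `divdiff` squares to zero, in line with $\pa_{1}\pa_{1}=0$.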
Similarly, define $s_i:=\pa_{i} x_{i} - x_{i} \pa_{i} \in NH_n$ such that $\psi_n(s_i)=s_{i,n}$. \begin{lemma} \label{lem s} (1) For $1\le i \le n-1$, the generators $s_{i,n}$ satisfy the defining relations of the symmetric group $S_n$. \n(2) A dot can slide through the circle crossing, and a crossing and two circle crossings satisfy the braid relation; see figure \ref{fig alg6}. \end{lemma} \begin{proof} Under the map $\psi_n: NH_n \ra \nh_n^n$, it is enough to check the corresponding relations in $NH_n$. Consider the action of $NH_n$ on the ring $\K[x_1, \dots, x_n]$ of polynomials, where $\pa_{i}$ acts as the divided difference operator, and $x_{i}$ acts as multiplication by $x_i$. For $f \in \K[x_1, \dots, x_n]$, \begin{align*} s_{i}(f) & = \pa_{i} x_{i}(f) - x_{i} \pa_{i} (f) = \pa_i (x_i f) - x_i \pa_i(f) \\ & = \frac{x_i f - x_{i+1} \wt{s}_i(f)}{x_i - x_{i+1}} - x_i \cdot \frac{f - \wt{s}_i(f)}{x_i - x_{i+1}} = \wt{s}_i(f), \end{align*} where $\wt{s}_i(f)(\dots, x_i, x_{i+1}, \dots)=f(\dots, x_{i+1}, x_{i}, \dots)$. In other words, the operator induced by $s_{i}$ is the same as the operator $\wt{s}_i$ on $\K[x_1, \dots, x_n]$. The relations hold in $NH_n$ since the action is faithful. \end{proof} The slide relation in figure \ref{fig alg4} says that the short strand can slide through the circle crossing. Thus, we can reduce the generators of $\nh$ in Definition \ref{def nh} to $1_n, \pa_{i,n}, x_{i,n}$, and $v_{n,n}$. Here, $v_{n,n}$ is the diagram with the short strand in the rightmost position. \vspace{.1cm} \n{\bf Notation: }Let $v_n=v_{n,n} \in \nh^{n-1}_{n}$ for simplicity. \begin{defn} \label{def nh'} Define a $\K$-algebra $\nh'$ by generators $1_n$ for $n \ge 0$, $\pa_{i,n}$ for $1 \le i \le n-1$, $x_{i,n}$ for $1 \le i \le n$, and $v_{n}$ for $n \ge 0$, subject to the relations consisting of three groups: \n(1) Idempotent relations: the same as in Definition \ref{def nh} (1), except that $v_{i,n}$ is replaced by $v_n$.
\n(2) NilHecke relations: the same as in Definition \ref{def nh} (2). \n(3) Short strands relations: \begin{align*} v_{n} x_{j,n} = x_{j,n-1} v_{n}, & \qquad \mbox{if}~j<n, \\ v_{n} \pa_{j,n} = \pa_{j,n-1} v_{n}, & \qquad\mbox{if}~j<n-1, \\ v_{n-1} v_{n}=v_{n-1} v_{n} s_{n-1,n}, & \qquad \mbox{(Relation (F))} \end{align*} where $s_{n-1,n}=\pa_{n-1,n} x_{n-1,n} - x_{n-1,n} \pa_{n-1,n} = x_{n,n} \pa_{n-1,n} - \pa_{n-1,n} x_{n,n}$. \end{defn} \begin{lemma} \label{lem nh nh'} The algebras $\nh$ and $\nh'$ are naturally isomorphic. \end{lemma} \begin{proof} Every generator of $\nh'$ is a generator of $\nh$, and every relation of $\nh'$ except the last one is a relation of $\nh$. The last relation is a direct consequence of the relation (F) in figure \ref{fig alg4}. Therefore, there is a homomorphism $\nh' \ra \nh$. We construct a map $\phi: \nh \ra \nh'$ in the opposite direction as follows. On generators, define $\phi(a)=a$ for $a=1_n, \pa_{i,n}, x_{i,n}$, and $$\phi(v_{n,n})=v_{n}, \qquad \phi(v_{i,n})=v_{n} s_{n-1,n} \cdots s_{i+1,n} s_{i,n}.$$ The definition of $\phi(v_{i,n})$ is motivated by the slide relation. We need to prove that $\phi$ respects the defining relations of $\nh$. This is clear for the relations (1) and (2) in Definition \ref{def nh}. Consider the short strand relations, see Definition \ref{def nh} (3). In figure \ref{fig alg7}, we check some special cases using the relations in figures \ref{fig alg4}, \ref{fig alg5} and \ref{fig alg6}. For $v_{i,n} x_{j,n} = x_{j-1,n-1} v_{i,n}$ when $i <j$, we check that \begin{align*} \phi(v_{i,n})\phi(x_{j,n})&=v_{n} s_{n-1,n} \cdots s_{i+1,n} s_{i,n}x_{j,n}=v_{n}x_{j-1,n}s_{n-1,n} \cdots s_{i+1,n} s_{i,n} \\ &=x_{j-1,n-1}v_{n} s_{n-1,n} \cdots s_{i+1,n} s_{i,n}=\phi(x_{j-1,n-1})\phi(v_{i,n}). \end{align*} The proof for the general cases is similar and left to the reader. 
\begin{figure}[h] \begin{overpic} [scale=0.25]{alg7.eps} \end{overpic} \caption{Checking $\phi$ preserves the short strand relations, where the diagrams on the left and right sides are in $\nh$, the arrows represent the map $\phi$, and the diagrams in the middle between the arrows are in $\nh'$.} \label{fig alg7} \end{figure} It is obvious that the two maps between $\nh$ and $\nh'$ are inverse to each other. \end{proof} We will identify $\nh'$ with $\nh$ under the isomorphism in the lemma. In other words, the two definitions of $\nh$ are equivalent. Definition \ref{def nh'} has fewer generators and relations, while Definition \ref{def nh} is more symmetric and local diagrammatically. Moreover, it is clear that Definition \ref{def nh} describes Hom spaces between powers of a generating object of a suitable monoidal category. We will use both presentations. \subsection{The basis} \label{Sec basis} We first construct an action of $\nh$ which generalizes the action of nilHecke algebras on the rings of polynomials. Let \begin{gather} \label{def R} R=\Z[r_1, r_2, \dots] \end{gather} be a graded ring of polynomials in infinitely many variables, where $\deg(r_i)=2i$. Let $R_n=R[x_1, \dots, x_n]$ be the graded ring of polynomials over $R$, where $\deg(x_i)=2$. In the following, we define an action $\alpha$ of $\nh=\bigoplus\limits_{m,n \ge 0} \nh^m_n$ on $\wt{R}=\bigoplus\limits_{n \ge 0} R_n$, where on each component $\alpha: \nh^m_n \ra \Hom_R(R_n, R_m)$. For $m=n$, the action is the same as the nilHecke action on $R_n$: $$\alpha(x_{i,n})(f)=x_i f, \qquad \alpha(\pa_{i,n})(f)=\frac{f-s_i(f)}{x_i-x_{i+1}},$$ for $f \in R_n$. For $v_n \in \nh^{n-1}_n$, the operator $\alpha(v_n): R_n \ra R_{n-1}$ is defined on the $R$-basis $\{x_1^{i_1}\dots x_{n-1}^{i_{n-1}} x_{n}^{i_n}\}$ of $R_n$ by \begin{gather} \label{eq action} \alpha(v_n)(x_1^{i_1}\dots x_{n-1}^{i_{n-1}} x_{n}^{i_n}) = x_1^{i_1}\dots x_{n-1}^{i_{n-1}} r_{i_n} \in R_{n-1}. 
\end{gather} In other words, $\alpha(v_n)$ fixes the first $n-1$ variables, and maps $x_n^k$ to $r_k$. To check that the action is well-defined, we use the presentation of $\nh$ in Definition \ref{def nh'}. Since $\alpha(v_n)$ acts nontrivially only on the variable $x_n$, all isotopy relations and nilHecke relations are preserved under $\alpha$. For the relation (F), both sides map $x_1^{i_1}\dots x_{n-2}^{i_{n-2}} x_{n-1}^{i_{n-1}} x_{n}^{i_n}$ to $x_1^{i_1}\dots x_{n-2}^{i_{n-2}} r_{i_{n-1}} r_{i_n}$. \vspace{.2cm} To compute the basis of $\nhmn$, we first discuss the case $m=0$. Recall from Lemma \ref{lem s} that the $s_i$'s generate a symmetric group $S_n \subset NH_n$. Let $$V_n=\Ind_{\Z[S_n]}^{NH_n} \Z$$ be the right $NH_n$-module induced from the trivial right $\Z[S_n]$-module $\Z$. Let $NH_n \ra V_n$ denote the quotient map which takes $t \in NH_n$ to $\ov{t} \in V_n$. As an abelian group, $V_n$ is generated by elements $\ov{t}$ for $t \in NH_n$, modulo the relation $\ov{w t}=\ov{t}$ for $w \in S_n$. Let $\vzn=v_1v_2\cdots v_n \in \nhzn$ denote the diagram of $n$ short strands. Define a map of abelian groups: \begin{align} \label{eq eta} \begin{array}{cccc} \eta_n: & V_n & \ra & \nhzn \\ & \ov{t} & \mapsto & \vzn t. \end{array} \end{align} The map $\eta_n$ is well-defined since $\eta_n(\ov{w t})=\vzn w t = \vzn t=\eta_n(\ov{t})$ for $w \in S_n$ from the relation (F), see figure \ref{fig alg4}. It is surjective by definition. The map $\eta_n$ is actually a homomorphism of right $NH_n$-modules. Note that both $V_n$ and $\nhzn$ are graded. Let $V_n(m)$ and $\nhzn(m)$ be their degree $2m$ components. In the following, we prove that $V_n$ is a free abelian group and that $\eta_n$ is an isomorphism. Recall some results about the action of $NH_n$ on $\cal{P}_n=\Z[x_1, \dots, x_n]$, see \cite[Section 3.2]{La} for more details. Let $\la_n\subset \cal{P}_n$ denote the subring of symmetric polynomials. 
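As a computational aside, the operator $\alpha(v_n)$ and the symmetry underlying the relation (F) can be prototyped in a few lines. In the sympy sketch below (all names are ours), the symbols r[k] stand in for the generators $r_k$ of $R$, with r[0] kept as a bookkeeping placeholder for the zeroth power of the capped variable:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
r = sp.symbols('r0:8')  # r[k] stands in for the generator r_k of R

def cap(f, xn):
    # alpha(v_n): replace each power xn**k of the last variable by r_k
    p = sp.Poly(f, xn)
    return sp.expand(sum(c * r[k] for (k,), c in p.terms()))

# Relation (F): both sides send x2**a * x3**b to r_a * r_b, so capping the
# last two variables in turn is symmetric in their exponents.
f = 3*x1*x2**2*x3 + x2*x3**4
f_swapped = f.subs([(x2, x3), (x3, x2)], simultaneous=True)
assert cap(cap(f, x3), x2) == cap(cap(f_swapped, x3), x2)
```

Since the generators $r_k$ commute in $R$, the double cap lands in the same element regardless of the order of the last two exponents, which is exactly the well-definedness check for (F).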
Let $$\cal{H}_n=\{x_1^{j_1}\dots x_{n}^{j_n} ~|~ 0 \le j_k \le n-k\},$$ and $\cal{H}_n(m)$ be its subset of elements of degree $2m$. Then $\cal{P}_n$ is a finite free module over $\la_n$ with a basis $\cal{H}_n$. There is a canonical isomorphism of rings $$NH_n \cong \End_{\la_n}(\cal{P}_n).$$ Let $\pa_{w}=\pa_{i_1}\cdots \pa_{i_l} \in NH_n$ for a reduced word expression of $w=s_{i_1}\cdots s_{i_l} \in S_n$. The number $l$ is called the length $l(w)$ of $w$. Define $$t_{g,w}=g\cdot \pa_{w}\in NH_n,$$ for $w \in S_n, g \in \cal{P}_n$. The collection $\{t_{g,w}~|~ w \in S_n, g ~\mbox{a monomial in}~ \cal{P}_n\}$ forms a $\Z$-basis of $NH_n$, see \cite[Proposition 3.5]{La}. Let $$\cal{F}_n(m)=\{x_1^{i_1}\dots x_{n}^{i_n} ~|~ i_1 \ge \cdots \ge i_n \ge 0, \sum\limits_k i_k=m\},$$ $$\cal{B}_n(m)=\{t_{f,w} ~|~ w \in S_n, f \in \cal{F}_n(l(w)+m)\}.$$ Here, $\cal{F}_n(m)$ is a finite subset of $\cal{P}_n$, and $\cal{B}_n(m)$ a finite subset of $NH_n(m)$. Moreover, $\cal{B}_n(m)$ is empty if $m<-n!$. \begin{lemma} \label{lem vn} The abelian group $V_n(m)$ is free with a basis which is in bijection with $\cal{B}_n(m)$. \end{lemma} \begin{proof} Firstly, we claim that $\{\ov{t}_{f,w}~|~t_{f,w} \in \cal{B}_n(m)\}$ generates $V_n(m)$ over $\Z$. Let $w \in S_n$ with $l=l(w)$, and let $g=x_{l_1}^{i_1}\dots x_{l_n}^{i_n}$ be such that $i_1 \ge \cdots \ge i_n \ge 0$, and $\sum\limits_k i_k=l+m$. Let $w_g \in S_n$ denote the permutation which maps $l_k$ to $k$. Then $f= w_g (g) = x_1^{i_1}\dots x_{n}^{i_n} \in \cal{F}_n(l+m)$. So $w_g \cdot t_{g,w}=t_{f,w} \in NH_n$, and $\ov{t}_{g,w}=\ov{t}_{f,w}$. The claim follows from the fact that $\{t_{g,w}\}$ generates $NH_n(m)$ over $\Z$. Secondly, we claim that the collection $\{\ov{t}_{f,w}~|~ t_{f,w} \in \cal{B}_n(m)\}$ is $\Z$-independent in $V_n(m)$. Let $$\beta_n=\alpha \circ \eta_n: V_n \ra \nhzn \ra \Hom_R(R_n, R).$$ Let $h_0=x_1^{n-1}\cdots x_{n-1} \in \cal{H}_n$, $f=x_1^{i_1}\cdots x_{n}^{i_n} \in \cal{F}_n$, and $w_0 \in S_n$ denote the longest element. 
The image $\beta_n(\ov{t}_{f,w_0})=\alpha(v_n^0 \cdot t_{f,w_0})$ maps $h_0$ to $r_{i_1}\cdots r_{i_n}$, and the other elements of $\cal{H}_n$ to zero. Since $\{r_{i_1}\cdots r_{i_n}~|~i_1 \ge \cdots \ge i_n \ge 0\}$ is $\Z$-independent in $R$, the collection $\{\beta_n(\ov{t}_{f,w_0})~|~ f \in \cal{F}_n\}$ is also $\Z$-independent. By induction on $l(w)$, one can show that $\{\beta_n(\ov{t}_{f,w})~|~ t_{f,w} \in \cal{B}_n(m)\}$ is $\Z$-independent. \end{proof} The map $\beta_n$ in the proof above is injective, which implies that $\eta_n$ is also injective. Combined with the fact that $\eta_n$ is surjective, this gives the following description of $\nhzn$. \begin{prop} \label{prop basis nh0n} The map $\eta_n: V_n \ra \nhzn$ is an isomorphism of free abelian groups. Moreover, $\nhzn$ has a $\Z$-basis $\{v_n^0 t~|~ t \in \cal{B}_n(m), m \in \Z\}$ which is in bijection with $\cal{B}_n=\bigcup\limits_m\cal{B}_n(m)$. \end{prop} The abelian group $\nhzn$ has a $\Z$-basis $\{v^0_n \cdot f \cdot \pa_{w} ~|~ w \in S_n, f \in \cal{F}_n\}$, where $\cal{F}_n=\bigcup\limits_m\cal{F}_n(m)$. Diagrammatically, $f$ consists only of dots, with the numbers of dots non-increasing from left to right, and $\pa_w$ consists only of crossings. See the picture on the left in figure \ref{fig alg8}. 
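The combinatorics of this basis is easy to experiment with. The following script (the function names F, length, B_size are ours) enumerates the non-increasing exponent sequences in $\cal{F}_n(m)$ and computes $|\cal{B}_n(m)|=\sum_{w \in S_n}|\cal{F}_n(l(w)+m)|$ for small $n$:

```python
from itertools import permutations

def F(n, m):
    """F_n(m): non-increasing exponent tuples (i_1 >= ... >= i_n >= 0) summing to m."""
    out = []
    def rec(prefix, remaining, bound):
        if len(prefix) == n:
            if remaining == 0:
                out.append(tuple(prefix))
            return
        for i in range(min(remaining, bound), -1, -1):
            rec(prefix + [i], remaining - i, i)
    rec([], m, m)
    return out

def length(p):
    # number of inversions = Coxeter length of the permutation p
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def B_size(n, m):
    # |B_n(m)| = sum over w in S_n of |F_n(l(w) + m)|
    return sum(len(F(n, length(p) + m)) for p in permutations(range(n)))

assert F(2, 4) == [(4, 0), (3, 1), (2, 2)]
assert B_size(2, 0) == len(F(2, 0)) + len(F(2, 1))  # contributions of l(w) = 0 and 1
```

In particular, $|\cal{F}_n(m)|$ is the number of partitions of $m$ into at most $n$ parts, so these graded ranks grow polynomially in $m$ for fixed $n$.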
\begin{figure}[h] \begin{overpic} [scale=0.25]{alg8.eps} \put(5,29){$f$} \put(4,16){$\pa_w$} \put(31.5,42){$\scriptscriptstyle{NH_k}$} \put(41,42){$\scriptscriptstyle{NH^0_n}$} \put(56.5,42){$\scriptscriptstyle{NH_k}$} \put(65.5,42){$\scriptscriptstyle{NH^0_n}$} \put(84,45.5){$\scriptscriptstyle{NH_k}$} \put(93,45.5){$\scriptscriptstyle{NH^0_n}$} \put(31.5,12){$\scriptscriptstyle{NH_k}$} \put(41,12){$\scriptscriptstyle{NH^0_n}$} \put(56.5,12){$\scriptscriptstyle{NH_k}$} \put(65.5,12){$\scriptscriptstyle{NH^0_n}$} \put(84,12){$\scriptscriptstyle{NH_k}$} \put(93,12){$\scriptscriptstyle{NH^0_n}$} \end{overpic} \caption{The picture on the left describes the basis of $\nhzn$, where $f \in \cal{F}_n, \pa_w \in NH_n$; the six pictures on the right describe the basis of $\nh^k_{n+k}$ in terms of bases of $NH_k$ and $\nh^0_n$ in the case $n=k=2$.} \label{fig alg8} \end{figure} The basis of $\nh^k_{n+k}$ can be described using bases of $NH_k$ and $\nhzn$. Let $S_n \subset NH_{n+k}$ be the symmetric group generated by $s_{i,n+k}$'s for $i=k+1,\dots,n+k-1$. Let $NH_k \subset NH_{n+k}$ be the inclusion given by adding $n$ vertical strands on the right. Then the left actions of $S_n$ and $NH_k$ on $NH_{n+k}$ commute. Define the induction module $$V^k_{n+k}=\Ind_{\Z[S_n]}^{NH_{n+k}} \Z.$$ It is a left $NH_k$, right $NH_{n+k}$ module. We call it an $(NH_k, NH_{n+k})$-bimodule. Since $NH_{n+k}$ is a free left module over $NH_k$, $V^k_{n+k}$ is also free over $NH_k$. There is a canonical surjective map $NH_{n+k} \ra V^k_{n+k}$. Define a map \begin{align} \label{eq eta2} \begin{array}{cccc} \eta^k_{n+k}: & V^k_{n+k} & \ra & \nh^k_{n+k} \\ & \ov{t} & \mapsto & v^k_{n+k} t, \end{array} \end{align} for $t \in NH_{n+k}$, where $v^k_{n+k} \in \nh^k_{n+k}$ has $n$ short strands on the right. 
It is known that $NH_k$ is a free abelian group and has a basis of the form $$\cal{B}^k_k=\{f \cdot \pa_w ~|~w \in S_k,~ f ~\mbox{a monomial in}~ x_1,\dots,x_k\}.$$ By a similar argument as in the proof of Proposition \ref{prop basis nh0n}, one can prove the following result. \begin{prop} \label{prop basis nh} The map $\eta^k_{n+k}: V^k_{n+k} \ra \nh^k_{n+k}$ is an isomorphism of $(NH_k, NH_{n+k})$-bimodules. As an abelian group, $\nh^k_{n+k}$ is free with a basis $$\{v^k_{n+k} \cdot (t_k \od b_n) \cdot \pa_w ~|~ t_k \in \cal{B}_k^k, b_n \in \cal{B}_n, w ~\mbox{a minimal representative in}~ (S_{k} \times S_{n})\backslash S_{n+k}\}.$$ \end{prop} See figure \ref{fig alg8} for an example with $n=k=2$. \section{The complex lifting exponentials} \subsection{The $\nh$-bimodule $\cf$.} There is an inclusion \begin{align} \label{eq inclusion} \begin{array}{cccc} \rho: & \nh & \ra & \nh \\ & a & \mapsto & a \od 1_1, \end{array} \end{align} given by adding one vertical strand on the right to any diagram $a \in \nh$. Consider the $\nh$-bimodule corresponding to the induction functor with respect to $\rho$. \begin{defn} \label{def F} Define an abelian group $\cf= \bigoplus\limits_{m \ge 0, n \ge 1}\nhmn$ with the $\nh$-bimodule structure given by $a\cdot t \cdot b=a~t~\rho(b)$, where $a~t~\rho(b)$ is the product in $\nh$, for $a,b \in \nh$ and $t \in \cf$. \end{defn} Recall that $\nhmn=0$ if $m>n$. See figure \ref{fig alg10} for a diagrammatic description of $\cf$. The left and right multiplication correspond to stacking diagrams at the top and bottom, respectively. Since the rightmost strand in $\cf$ is unchanged under stacking diagrams from the bottom, we call it the {\em frozen strand} of $\cf$, and add a little bar at its lower end. 
\begin{figure}[h] \begin{overpic} [scale=0.3]{alg10.eps} \put(0,54){$\cf:$} \put(-2,15){$[1_n]_f:$} \put(21,25){$n$} \put(19,8){$\scriptstyle{n-1}$} \put(21,68){$a$} \put(21,57){$t$} \put(20,45){$b$} \put(60,57){$a$} \put(84,64){$a$} \put(50,42){$[1_{n}]_f \cdot a =[(a \od 1_1)\cdot 1_{n}]_f$} \put(50,0){$[1_{n}]_f \cdot v_n =[v_{n,n+1}\cdot 1_{n+1}]_f$} \end{overpic} \caption{The bimodule $\cf$, its generators $[1_n]_f$, and the right multiplication on $\cf$.} \label{fig alg10} \end{figure} Let $P_n=\nh \cdot 1_n$ denote the left projective $\nh$-module associated to the idempotent $1_n \in \nh$. As an abelian group, $P_n$ is spanned by all diagrams with $n$ strands at the bottom. The summand $\bigoplus\limits_{0 \le m \le n}\nhmn$ of $\cf$ forms a left $\nh$-submodule of $\cf$, naturally isomorphic to $P_n$. So there is a natural isomorphism $$\cf \cong \bigoplus\limits_{n \ge 1}P_{n}$$ of left $\nh$-modules. In particular, $\cf$ is projective as a left $\nh$-module. For $t \in \nhmn$, let $[t]_f$ denote the corresponding element of $\cf$, which satisfies $$1_m \cdot [t]_f = [t]_f = [t]_f \cdot 1_{n-1}.$$ As a left $\nh$-module, $\cf$ is generated by $[1_n]_f$ for $n \ge 1$, since $[t]_f=t \cdot [1_n]_f$ for any $t \in \nhmn$. The generator $[1_n]_f$ is represented by the diagram of $n$ vertical strands with a little bar at the lower end of the rightmost strand. The right multiplication on the generators is given by \begin{align} \label{eq right F} [1_{n}]_f \cdot 1_{n-1} =[1_{n}]_f, & \\ [1_{n}]_f \cdot a =[(a \od 1_1)\cdot 1_{n}]_f, & \qquad \mbox{for}~~a\in NH_{n-1}, \\ [1_{n}]_f \cdot v_n =[1_n \cdot (v_n \od 1_1)]_f=[v_{n,n+1}\cdot 1_{n+1}]_f, & \end{align} see figure \ref{fig alg10}. \vspace{.2cm} From now on, all tensor products are taken over $\nh$. We will simply write $\ot$ for $\ot_{\nh}$. Let $\cf^k$ denote the $\nh$-bimodule given by the $k$-th tensor product $\cf^{\ot k}$ over $\nh$. 
As an abelian group, $$\cf^k= \bigoplus\limits_{m \ge 0, n \ge k}\nhmn,$$ where the $\nh$-bimodule structure is given by $a\cdot f \cdot b=a~f~\rho^k(b)$, where $a~f~\rho^k(b)$ is the product in $\nh$, for $a,b \in \nh$ and $f \in \cf^k$. Here, $\rho^k=\rho \circ \cdots \circ \rho: \nh \ra \nh$ maps $b$ to $b \od 1_k$ for $b \in \nh$. The map $\rho^k$ is diagrammatically given by adding $k$ vertical strands on the right. Let $[t]_{f^k} \in \cf^k$ denote the element corresponding to $t \in \nhmn$ for $n \ge k$. The collection $\{[1_n]_{f^k}~|~n\ge k\}$ generates $\cf^k$ as a left $\nh$-module. The nilHecke algebra $NH_k$ naturally acts on $\cf^k$ on the right as follows. For any $c \in NH_k$, define a map \begin{align} \label{eq rt nh} \begin{array}{cccc} r(c): & \cf^k & \ra & \cf^k \\ & [t]_{f^k} & \mapsto & [t \cdot (1_{n-k} \od c)]_{f^k} \end{array} \end{align} for $t \in \nhmn$. Diagrammatically, the map $r(c)$ stacks $c \in NH_k$ onto the $k$ frozen strands from the bottom. The action commutes with the left and right multiplication of $\nh$ on $\cf^k$. Hence, $r(c)$ is a map of $\nh$-bimodules. \vspace{.2cm} Let $\mf{D}(\nh^e)$ denote the derived category of $\nh$-bimodules. It is a monoidal triangulated category whose monoidal bifunctor is given by the derived tensor product over $\nh$. The unit object, denoted by $\mb$, is isomorphic to $\nh$ as an $\nh$-bimodule placed in cohomological degree zero. Since $\cf$ is projective as a left $\nh$-module, the derived tensor product reduces to the ordinary tensor product between $\cf$'s. In particular, we view $\cf^k=\cf^{\ot k}$ placed in cohomological degree zero as an object of $\mf{D}(\nh^e)$. The subscript $\mf{D}(\nh^e)$ will be omitted in $\Hom_{\mf{D}(\nh^e)}$ from now on. We compute $\End(\mb)$ and $\End(\cf)$ in the following. \begin{lemma} \label{lem end 1} The endomorphism ring $\End(\mb)$ is isomorphic to $\K$ with a generator $id_{\mb}$. 
\end{lemma} \begin{proof} For any $h \in \End(\mb)$, $h(1_n) \in NH_n$ lives in the center $Z(NH_n)$, which is isomorphic to the ring $\la_n$ of symmetric polynomials. Moreover, $h(1_{n-1}) \cdot v_n=h(v_n)=v_n \cdot h(1_n) \in \nh^{n-1}_{n}$. Suppose $h(1_0)=a \cdot 1_0 \in Z(NH_0)$ for some $a \in \K$. Then $h(1_n)=a \cdot 1_n \in Z(NH_n)$ by induction on $n$. So $h=a \cdot id_{\mb}$. \end{proof} There is a natural inclusion $\rho_n: NH_n \ra NH_{n+1}$ of algebras which sends $a$ to $a \od 1_1$. Let $C_{n+1}(n)$ denote the centralizer of $NH_n$ in $NH_{n+1}$ with respect to the inclusion. \begin{prop} \label{prop center} There is an isomorphism $C_{n+1}(n) \cong Z(NH_n) \otimes \Z[x_{n+1}]$ of rings. \end{prop} \begin{proof} The $NH_n$-bimodule $NH_{n+1}$ has a decomposition $$_{NH_n}(NH_{n+1})_{NH_n} \cong (NH_n \otimes_{\K} \K[x_{n+1}]) \oplus (NH_n \otimes_{NH_{n-1}} NH_n),$$ where $NH_n \otimes_{\K} \K[x_{n+1}]$ is isomorphic to a direct sum of copies of $NH_n$ indexed by $\mathbb{N}$ as an $NH_n$-bimodule, see figure \ref{fig alg15}. The center of the bimodule $NH_n \otimes_{\K} \K[x_{n+1}]$ is isomorphic to $Z(NH_n) \otimes \Z[x_{n+1}]$. It suffices to show that the center of the bimodule $NH_n \otimes_{NH_{n-1}} NH_n$ is trivial. Note that $NH_n \otimes_{NH_{n-1}} NH_n$ is a free left $NH_n$-module with a basis $$\{b_{i,k}=1_n \otimes (x_n^i \pa_{[k]}) ~|~ i \ge 0, 0 \le k \le n-1\},$$ where $\pa_{[k]}=\pa_{n-1}\cdots \pa_{n-k}$ for $k \ge 1$ and $\pa_{[0]}=1_n$. Let $c=\sum\limits_{i,k}c_{i,k}\cdot b_{i,k} \in NH_n \otimes_{NH_{n-1}} NH_n$, for $c_{i,k} \in NH_n$. Let $k_0(c)=\max \{k ~|~ c_{i,k} \neq 0 ~\mbox{for some}~ i\}$, and $i_0(c)=\max \{i ~|~ c_{i,k_0(c)} \neq 0\}$. Consider $c_1=x_{n-k_0} \cdot c$, and $c_2=c \cdot x_{n-k_0}$, for $x_{n-k_0} \in NH_n$. Then $k_0(c_1)=k_0(c_2)=k_0(c)$, but $i_0(c_1)=i_0(c), i_0(c_2)=i_0(c)+1$. Thus $c_1 \neq c_2$, which implies that the center is trivial. 
\end{proof} \begin{figure}[h] \begin{overpic} [scale=0.25]{alg15.eps} \put(2,11){$\scriptstyle{NH_n}$} \put(22,6){$\scriptstyle{NH_n}$} \put(22,16){$\scriptstyle{NH_n}$} \put(35,10){$\cong$} \put(45,6){$\scriptstyle{NH_n}$} \put(45,16){$\scriptstyle{NH_n}$} \put(69,0){$b_{i,2}$} \put(82,0){$b_{i,1}$} \put(94,0){$b_{i,0}$} \put(74,12){$i$} \put(88,12){$i$} \put(101,12){$i$} \end{overpic} \caption{The decomposition of $NH_{n+1}$ as an $NH_n$-bimodule, and the basis elements $b_{i,k}$ of $NH_n \otimes_{NH_{n-1}} NH_n$, where $i$ indicates the number of dots.} \label{fig alg15} \end{figure} Proposition \ref{prop center}, together with an argument similar to that in the proof of Lemma \ref{lem end 1}, gives the following result. \begin{cor} There is a ring isomorphism $\K[x] \ra \End(\cf)$ which maps $c \in \K[x]$ to $r(c) \in \End(\cf)$. \end{cor} \subsection{The complex lifting $\exp(-f)$} The goal is to lift the expansion $$\exp(-f)=\sum\limits_{k\ge 0}(-1)^k\frac{f^k}{k!},$$ to a complex in $\mf{D}(\nh^e)$, where $1$ and $f$ are lifted to the objects $\mb$ and $\cf$, respectively. We use certain direct summands of $\cf^k$ to lift the divided powers $\frac{f^k}{k!}$. To define the differential, we will use the new generators with short strands. We recall some basic facts about certain idempotents of $NH_k$. Let $w_0(k) \in S_k$ denote the longest element of $S_k$, and $\pak=\pa_{w_0(k)} \in NH_k$. There are idempotents $$e_k=x_1^{k-1}\cdots x_{k-1} \pak$$ in $NH_k$. Let $e_{i,k} = 1_{i-1} \od e_2 \od 1_{k-i-1} \in NH_k$ denote the diagram obtained from $e_2$ by adding $i-1$ and $k-i-1$ vertical strands on the left and right, respectively, see figure \ref{fig alg9}. It is easy to see that $e_k = e_{i_1, k} \cdots e_{i_l, k}$, where $s_{i_1}\cdots s_{i_l}$ is a reduced expression of $w_0(k) \in S_k$. In other words, $e_2$ is the building block of $e_k$. 
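The idempotency $e_k e_k = e_k$ can be confirmed for small $k$ in the polynomial representation. Below is a minimal sympy sketch (helper names are ours), using $e_2 = x_1\pa_1$ and $e_3 = x_1^2 x_2\,\pa_1\pa_2\pa_1$, with products acting by $(ab)(f)=a(b(f))$:

```python
import sympy as sp

x = sp.symbols('x1:4')

def d(i, f):
    # the divided difference operator \partial_i in x_i, x_{i+1}
    s = f.subs([(x[i-1], x[i]), (x[i], x[i-1])], simultaneous=True)
    return sp.expand(sp.cancel((f - s) / (x[i-1] - x[i])))

def e(k, f):
    # e_k = x_1^{k-1} x_2^{k-2} ... x_{k-1} \partial_{w_0(k)}, hard-coded for k = 2, 3
    if k == 2:
        return sp.expand(x[0] * d(1, f))
    if k == 3:
        return sp.expand(x[0]**2 * x[1] * d(1, d(2, d(1, f))))

f = x[0]**3 * x[2] + x[1]**2      # a test polynomial
assert e(2, e(2, f)) == e(2, f)   # e_2 is idempotent
assert e(3, e(3, f)) == e(3, f)   # e_3 is idempotent
```

The check reflects the standard identity $\pa_{w_0}(x_1^{k-1}x_2^{k-2}\cdots x_{k-1}\, g) = g$ for symmetric $g$, which is what makes $e_k$ idempotent.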
\begin{figure}[h] \begin{overpic} [scale=0.3]{alg9.eps} \put(0,32){$e_2:$} \put(22,32){$e_{i,k}:$} \put(58,18){$u_3:$} \put(35,26){$\scriptstyle{i}$} \put(39,26){$\scriptstyle{i+1}$} \end{overpic} \caption{The diagrams for $e_2, e_{i,k}, e_3=e_{2,3}e_{1,3}e_{2,3}$, and $u_3$.} \label{fig alg9} \end{figure} The idempotent $e_k$ induces an idempotent endomorphism $r(e_k)$ of the bimodule $\cf^k$. \begin{defn} \label{def fk} Define the $\nh$-bimodule $\cfk$ as a direct summand of $\cf^k$ corresponding to the idempotent endomorphism $r(e_k)$. Elements of $\cfk$ are of the form $[t \cdot (1_{n-k} \od e_k)]_{f^k}$ for $t \in \nhmn$. \end{defn} Define an alternating sum \begin{gather} \label{def u} u_k = \sum\limits_{i=1}^{k}(-1)^{i-1}v_{i,k} \in \nh^{k-1}_k, \end{gather} for $k \ge 1$, see figure \ref{fig alg9} for $u_3$. Then $u_k \cdot u_{k+1}=0$ by the isotopy relation of disjoint short strands. Via the right action of $NH_k$ on $\cf^k$ in (\ref{eq rt nh}), the element $u_k$ induces a map \begin{align} \label{eq rt u1} \begin{array}{cccc} r(u_k): & \cf^{k-1} & \ra & \cf^k \\ & [t]_{f^{k-1}} & \mapsto & [t \cdot (1_{n+1-k} \od u_{k})]_{f^k} \end{array} \end{align} for $t \in \nhmn$, $n \ge k-1$. Consider the restriction of $r(u_k)$ to $\cf^{(k-1)}$ followed by the projection of $\cf^k$ onto $\cfk$. The resulting map is denoted by $d(u_k)$: \begin{align} \label{eq rt u} \begin{array}{cccc} d(u_k): & \cf^{(k-1)} & \ra & \cfk \\ & [t \cdot (1_{n+1-k} \od e_{k-1})]_{f^{k-1}} & \mapsto & [t \cdot (1_{n+1-k} \od (e_{k-1}u_{k}e_{k}))]_{f^k} \end{array} \end{align} for $t \in \nhmn$, $n \ge k-1$. We compute $d(u_{k+1}) \circ d(u_k)$ in the following. \begin{lemma} \label{lem eue} There is an equality $e_{k-1} u_k e_k=e_{k-1} u_k \in \nh^{k-1}_k$. \end{lemma} \begin{proof} We claim that $e_{k-1} u_k e_{i,k}=e_{k-1} u_k$ for all $1 \le i \le k-1$. The lemma follows from the claim since $e_k$ can be written as a product of $e_{i, k}$'s. 
To prove the claim, we compute \begin{align*} e_{k-1} u_k \pa_{i,k} & = \sum\limits_{j=1}^{k}(-1)^{j-1}e_{k-1} v_{j,k} \pa_{i,k} \\ & = \sum\limits_{j \neq i,i+1}(-1)^{j-1} e_{k-1} v_{j,k} \pa_{i,k} + (-1)^{i-1} e_{k-1} v_{i,k} \pa_{i,k} + (-1)^{i} e_{k-1} v_{i+1,k} \pa_{i,k}. \end{align*} Each term of the first summation is zero since $v_{j,k} \pa_{i,k}=\pa_{i',k-1}v_{j,k}$ for some $i'$ when $j \neq i,i+1$, and $e_{k-1} \pa_{i',k-1}=0$ for all $i'$. The remaining two terms cancel each other since $v_{i,k} \pa_{i,k}=v_{i+1,k} \pa_{i,k}$ from the exchange relation in Definition \ref{def nh} (3). So $e_{k-1} u_k \pa_{i,k}=0$. By the nilHecke relation, $e_2=1_2 + \pa_1 x_2$ so that $e_{i,k}=1_k + \pa_{i,k} x_{i+1,k}$. Thus $$e_{k-1} u_k e_{i,k}=e_{k-1} u_k (1_k + \pa_{i,k} x_{i+1,k})= e_{k-1} u_k 1_k= e_{k-1} u_k.$$ The lemma follows. \end{proof} \begin{rmk} In general, $e_{k-1} u_k e_k \neq u_k e_k$. \end{rmk} The lemma above implies that \begin{gather} \label{eq u2} (e_{k-1} u_k e_k) \cdot (e_{k} u_{k+1} e_{k+1})=e_{k-1} u_k e_k u_{k+1} e_{k+1} = e_{k-1} u_k u_{k+1} e_{k+1}=0, \end{gather} where the last equality holds because $u_k u_{k+1}=0$. Hence, $d(u_{k+1}) \circ d(u_k): \cf^{(k-1)} \ra \cf^{(k+1)}$ is zero by the definition of $d(u_k)$ in (\ref{eq rt u}). We define a complex \begin{gather} \label{def e^-f} \exp(-\cf)=\left(\bigoplus\limits_{k \ge 0} \cfk[-k], d=\bigoplus\limits_{k \ge 1}d_k\right), \end{gather} where the components of the differential $d$ are given by $d_k=d(u_k): \cf^{(k-1)} \ra \cf^{(k)}$. \subsection{The complex lifting $\exp(f)$} The goal is to lift the expansion $$\exp(f)=\sum\limits_{k\ge 0}\frac{f^k}{k!},$$ to an object in $\mf{D}(\nh^e)$. We use another direct summand of $\cf^k$ induced by an idempotent which is different from $e_k$. The differential is induced by certain elements in the extension group $\Ext^1(\cf^k, \cf^{k-1})$. \vspace{.2cm} In the following, we construct an $\nh$-bimodule $\cg$ in two steps. 
We will show that $\cg$ is an extension of $\cf$ by $\mb=\nh$ as $\nh$-bimodules: $$0 \ra \mb \ra \cg \ra \cf \ra 0.$$ \vspace{.1cm} \n{\bf Step 1: The left module.} We first define $$\cg=\mb \oplus \cf$$ as a left $\nh$-module. Let $[t]_1 \in \mb$, and $[t']_f \in \cf$ for $t,t' \in \nhmn$. Then $$t \cdot [1_n]_1=[t]_1, \qquad t' \cdot [1_n]_f=[t']_f.$$ The left $\nh$-module $\cg$ is projective, and generated by $[1_n]_1$ and $[1_{n+1}]_f$, for $n \ge 0$. \vspace{.1cm} \n{\bf Step 2: The right module.} The right multiplication is defined on the generators $[1_n]_1$ and $[1_{n+1}]_f$ as follows. The summand $\mb$ is a right $\nh$-submodule of $\cg$: $$[1_n]_1 \cdot t=[t]_1, \quad \mbox{for}~~ t \in \nh^n_{n'}.$$ The algebra $\nh$ contains $\bigoplus\limits_{n \ge 0}NH_n$ as a subalgebra. The summand $\cf$ is a right $\bigoplus\limits_{n \ge 0}NH_n$-submodule of $\cg$: $$[1_{n+1}]_f \cdot t=[t \od 1_1]_f=(t \od 1_1) \cdot [1_{n+1}]_f, \quad \mbox{for}~~ t \in NH_n.$$ The only nontrivial part of the definition of $\cg$ is the right multiplication of $v_n \in \nh^{n-1}_n$ on $[1_{n}]_f$, for $n \ge 1$: \begin{gather} \label{eq rt g} [1_{n}]_f \cdot v_n=[v_n \od 1_1]_f + [1_n]_1=[v_{n,n+1}]_f+[1_n]_1, \end{gather} see figure \ref{fig alg11}. Here we use the presentation of $\nh$ in Definition \ref{def nh'}. The algebra $\nh$ is generated by the subalgebras $NH_{n}$ and the elements $v_n$ for $n \ge 1$. In particular, $v_{i,n}=v_n \cdot g_{i,n}$ for some element $g_{i,n} \in NH_n$. Define $[1_{n}]_f \cdot v_{i,n}=([1_{n}]_f \cdot v_n) \cdot g_{i,n}$. Define the right multiplication on $[t]_f$ for $t \in \nhmn$, $b \in \nh$ as $$[t]_f \cdot b=t \cdot ([1_n]_f \cdot b).$$ Our construction of $\cg$ is complete. As a left $\nh$-module, $\cg$ is projective, and generated by $[1_n]_1$ and $[1_{n+1}]_f$ for $n \ge 0$. Thus, the right multiplication on $\cg$ is determined by the right multiplication on the generators $[1_n]_1$ and $ [1_{n+1}]_f$. By definition, it commutes with the left multiplication. 
\begin{figure}[h] \begin{overpic} [scale=0.3]{alg11.eps} \put(31,39){$f$} \put(67,39){$f$} \put(88,39){$1$} \put(15,17){$f$} \put(51,17){$f$} \put(87,17){$1$} \put(51,0){$f$} \put(74,0){$1$} \put(97,0){$1$} \end{overpic} \caption{Diagrams are rotated by $\frac{\pi}{2}$ counterclockwise from the vertical direction to the horizontal direction. The upper part is the definition of $[1_{n}]_f \cdot v_n$; the lower part computes $([1_{n}]_f \cdot v_n) \cdot v_{n+1}$.} \label{fig alg11} \end{figure} \begin{lemma} \label{lem rt g} The right multiplication on $\cg$ is well-defined. \end{lemma} \begin{proof} Since the action of $NH_n$ on $\cg$ is the ordinary multiplication on $\nh$, we only have to check the relations involving $v_n$ in Definition \ref{def nh'} (3). We check the isotopy relation of a short strand with a dot \begin{align*} ([1_{n}]_f \cdot v_n) \cdot x_{j,n} &=([v_{n,n+1}]_f+[1_n]_1) \cdot x_{j,n} \\ &=[v_{n,n+1} \cdot (x_{j,n}\od 1_1)]_f+[x_{j,n}]_1 \\ &=[x_{j,n} \cdot v_{n,n+1}]_f+[x_{j,n}]_1 \\ &=([1_{n}]_f \cdot x_{j,n-1}) \cdot v_n. \end{align*} The proof of the isotopy relation of a short strand with a crossing is similar and left to the reader. We check the relation (F) in Definition \ref{def nh'} (3): $v_{n} v_{n+1}=v_{n} v_{n+1} s_{n,n+1}$ \begin{align*} ([1_{n}]_f \cdot v_n) \cdot v_{n+1} &=([v_{n,n+1}]_f+[1_n]_1) \cdot v_{n+1} \\ &=[v_{n,n+1} \cdot 1_{n+1}]_f \cdot v_{n+1}+[v_{n+1}]_1 \\ &=[v_{n,n+1} \cdot v_{n+1,n+2}]_f+[v_{n,n+1} \cdot 1_{n+1}]_1+[v_{n+1}]_1 \\ &=[v_{n,n+1} \cdot v_{n+1,n+2}]_f+[v_{n,n+1}+v_{n+1}]_1, \end{align*} see figure \ref{fig alg11}. Recall that $s_{n,n+1} \in NH_{n+1}$ exchanges $v_{n+1}$ and $v_{n,n+1}$. So \begin{align*} ([1_{n}]_f \cdot v_n \cdot v_{n+1}) \cdot s_{n,n+1}&=[v_{n,n+1} \cdot v_{n+1,n+2}]_f \cdot s_{n,n+1} +[v_{n,n+1}+v_{n+1}]_1 \cdot s_{n,n+1} \\ &= [v_{n,n+1} \cdot v_{n+1,n+2} \cdot (s_{n,n+1} \od 1_1)]_f +[v_{n,n+1}+v_{n+1}]_1 \\ &= [v_{n,n+1} \cdot v_{n+1,n+2}]_f+[v_{n,n+1}+v_{n+1}]_1. 
\end{align*} This proves that the right multiplication on $\cg$ is well-defined. \end{proof} The $\nh$-bimodule $\cg$ fits into a short exact sequence of $\nh$-bimodules: $$0 \ra \mb \ra \cg \ra \cf \ra 0.$$ \begin{lemma} \label{lem g nontriv} The extension $\cg$ is not split. \end{lemma} \begin{proof} Suppose $\cg$ is split, i.e. there is a commutative diagram $$ \xymatrix{ 0 \ar[r] & \mb \ar[r] \ar[d]^{id} & \cg \ar[r] \ar[d]^{\psi} & \cf \ar[d]^{id} \ar[r] & 0. \\ 0 \ar[r] & \mb \ar[r] & \mb\oplus\cf \ar[r] & \cf \ar[r] & 0. \\ }$$ of $\nh$-bimodules. Let $[1_{n-1}]'_1$ and $[1_n]'_f$ for $n \ge 1$ denote the generators of $\mb \oplus \cf$. The commutative diagram implies that $\psi([1_{n-1}]_1)=[1_{n-1}]'_1$, and $\psi([1_n]_f)=[1_n]'_f+[t]'_1$ for some $t \in \nh^{n}_{n-1}$. Since $\nh^{n}_{n-1}=0$, we have $\psi([1_n]_f)=[1_n]'_f$. As a left $\nh$-module, $\cg$ is generated by $[1_{n-1}]_1$ and $[1_n]_f$. So $\psi([t]_f)=\psi(t \cdot [1_n]_f)=t \cdot \psi([1_n]_f)=t \cdot [1_n]'_f=[t]'_f$. Similarly, $\psi([t]_1)=[t]'_1$. On the other hand, $$\psi([1_n]_f \cdot v_n)=\psi([v_{n,n+1}]_f+[1_n]_1)=[v_{n,n+1}]'_f+[1_n]'_1, \qquad \psi([1_n]_f) \cdot v_n=[1_n]'_f \cdot v_n=[v_{n,n+1}]'_f.$$ Thus $\psi$ is not a map of right $\nh$-modules. This is a contradiction. \end{proof} \vspace{.2cm} The extension $\cg$ gives rise to a morphism $\wtv_1 \in \Hom(\cf,\mb[1])$ in the derived category $\mf{D}(\nh^e)$. We write $\Hom^1(\cf,\mb)$ for $\Hom(\cf,\mb[1])$. Lemma \ref{lem g nontriv} implies that $\wtv_1 \neq 0 \in \Hom^1(\cf,\mb)$. Define $$\wtv_{i,n}=1_{\cf^{i-1}} \ot \wtv_1 \ot 1_{\cf^{n-i}} \in \Hom^1(\cf^n,\cf^{n-1}),$$ for $1 \le i \le n$. 
Two elements $\wtv_{1,2}$ and $\wtv_{2,2}$ correspond to two extensions $\cgf$ and $\cfg$ of $\cf^2$ by $\cf$: $$\wtv_{1,2}: \qquad 0 \ra \cf \ra \cgf \ra \cf^2 \ra 0,$$ $$\wtv_{2,2}: \qquad 0 \ra \cf \ra \cfg \ra \cf^2 \ra 0.$$ Recall $s_{1,2} \in NH_2$ in (\ref{def s}) and the map $r(s_{1,2}): \cf^2 \ra \cf^2$ of $\nh$-bimodules in (\ref{eq rt nh}). \begin{lemma} \label{lem ext rel} There exists a map $\phi: \cgf \ra \cfg$ of $\nh$-bimodules such that the following diagram commutes $$ \xymatrix{ \wtv_{1,2}: & 0 \ar[r] & \cf \ar[r] \ar[d]^{id} & \cgf \ar[r] \ar[d]^{\phi} & \cf^2 \ar[d]^{r(s_{1,2})} \ar[r] & 0. \\ \wtv_{2,2}: & 0 \ar[r] & \cf \ar[r] & \cfg \ar[r] & \cf^2 \ar[r] & 0. \\ }$$ \end{lemma} \begin{proof} The left projective $\nh$-module $\cgf \cong \cf \oplus \cf^2$ is generated by elements $$[1_{n}]_{f}^{gf}:=[1_{n}]_1^g \ot [1_{n}]_f, \qquad [1_{n+1}]_{f^2}^{gf}:=[1_{n+1}]_f^g \ot [1_{n}]_f,$$ for $n \ge 1$. Similarly, the left projective $\nh$-module $\cfg \cong \cf \oplus \cf^2$ is generated by elements $$[1_{n}]_f^{fg}:=[1_{n}]_f \ot [1_{n-1}]_1^g, \qquad [1_{n+1}]_{f^2}^{fg}:=[1_{n+1}]_f \ot [1_{n}]_f^g,$$ for $n \ge 1$. Define a map $\phi: \cgf \ra \cfg$ of left $\nh$-modules on the generators as $$\phi([1_{n}]_{f}^{gf})=[1_{n}]_f^{fg}, \qquad \phi([1_{n+1}]_{f^2}^{gf})=[s_{n,n+1}]_{f^2}^{fg},$$ see figure \ref{fig alg14}. The map $\phi$ makes the diagram commute. \begin{figure}[h] \begin{overpic} [scale=0.25]{alg14.eps} \put(78,6){$\phi$} \put(20,6){$\phi$} \end{overpic} \caption{The map $\phi$ defined on the generators $[1_{n}]_{f}^{gf}$ and $[1_{n+1}]_{f^2}^{gf}$.} \label{fig alg14} \end{figure} We need to show that $\phi$ is a map of right $\nh$-modules. It is true when restricting to the right $\nh$-submodule $\cf$ of $\cgf$. It remains to show that $\phi([1_{n+1}]_{f^2}^{gf} \cdot v_n)=\phi([1_{n+1}]_{f^2}^{gf}) \cdot v_n$. 
We compute \begin{align*} [1_{n+1}]_{f^2}^{gf} \cdot v_n & = [1_{n+1}]_f^g \ot ([1_{n}]_f \cdot v_n) \\ & = [1_{n+1}]_f^g \ot [v_n \od 1_1]_f \\ & = ([1_{n+1}]_f^g \cdot v_{n,n+1} ) \ot [1_{n+1}]_f \\ & = ([1_{n+1}]_f^g \cdot v_{n+1} \cdot s_{n,n+1} ) \ot [1_{n+1}]_f \\ & = (([v_{n+1} \od 1_1]_f^g + [1_{n+1}]_1^g) \cdot s_{n,n+1} ) \ot [1_{n+1}]_f \\ & = ([v_{n+1,n+2} \cdot s_{n,n+2}]_f^g + [s_{n,n+1}]_1^g ) \ot [1_{n+1}]_f \\ & = ([v_{n,n+2}]_f^g + [s_{n,n+1}]_1^g ) \ot [1_{n+1}]_f \\ & = [v_{n,n+2}]_{f^2}^{gf} + [s_{n,n+1}]_f^{gf}. \end{align*} So $\phi([1_{n+1}]_{f^2}^{gf} \cdot v_n)=[v_{n,n+2} \cdot s_{n,n+1}]_{f^2}^{fg} + [s_{n,n+1}]_f^{fg}$. On the other hand, \begin{align*} \phi([1_{n+1}]_{f^2}^{gf}) \cdot v_n & = [s_{n,n+1}]_{f^2}^{fg} \cdot v_n \\ & = [s_{n,n+1}]_f \ot ([1_{n}]_f^g \cdot v_n) \\ & = [s_{n,n+1}]_f \ot ([v_{n,n+1}]_f^g+[1_n]_1^g) \\ & = ([s_{n,n+1}]_f \cdot v_{n,n+1}) \ot [1_{n+1}]_f^g + [s_{n,n+1}]_f^{fg} \\ & = [s_{n,n+1} \cdot v_{n,n+2}]_{f^2}^{fg} + [s_{n,n+1}]_f^{fg}. \end{align*} Since $v_{n,n+2} \cdot s_{n,n+1}=s_{n,n+1} \cdot v_{n,n+2}$, it follows that $\phi([1_{n+1}]_{f^2}^{gf} \cdot v_n)=\phi([1_{n+1}]_{f^2}^{gf}) \cdot v_n$. A graphic counterpart of this computation is depicted in figure \ref{fig alg12}. \end{proof} \begin{figure}[h] \begin{overpic} [scale=0.25]{alg12.eps} \put(48,52){$\phi$} \put(37,3){$\phi$} \end{overpic} \caption{The pictures compute $\phi([1_{n+1}]_{f^2}^{gf} \cdot v_n)$ and $\phi([1_{n+1}]_{f^2}^{gf}) \cdot v_n$ for $n=3$ on the left and right, respectively.} \label{fig alg12} \end{figure} Recall that $r(s_{1,2}) \in \Hom^0(\cf^2,\cf^2)$, and $\wtv_{1,2} \in \Hom^1(\cf^2,\cf)$. So $\wtv_{1,2} \circ r(s_{1,2}) \in \Hom^1(\cf^2,\cf)$ corresponds to another extension of $\cf^2$ by $\cf$, denoted by $s(\cgf)$. The lemma above shows that the extensions $s(\cgf)$ and $\cfg$ are equivalent. 
\begin{cor} \label{cor ext rel} The equalities $\wtv_{2,2}=\wtv_{1,2}\circ r(s_{1,2})$, and $\wtv_{2,2} \circ r(\pa_{1,2})=-\wtv_{1,2}\circ r(\pa_{1,2})$ hold in $\Hom^1(\cf^2,\cf[1])$. \end{cor} \begin{proof} The first equality directly follows from Lemma \ref{lem ext rel}. Precomposing the first equality with $r(\pa_{1,2})$ yields the second one, since $r(s_{1,2}) \circ r(\pa_{1,2})=r(\pa_{1,2} \cdot s_{1,2})=r(-\pa_{1,2})$, see figure \ref{fig alg5}. \end{proof} In the monoidal category $\mf{D}(\nh^e)$, we can diagrammatically represent $\wtv_1 \in \Hom^1(\cf,\mb)$ as a short strand with one endpoint at the top and one endpoint in the middle decorated by a circle, see figure \ref{fig alg13}. It is of cohomological degree one. The element $\wtv_{i,n}$ can be obtained from $\wtv_1$ by adding $i-1$ and $n-i$ vertical strands on the left and right, respectively. The two relations in Corollary \ref{cor ext rel} are depicted in figure \ref{fig alg13}. They are the analogues of the slide relation and the exchange relation in $\Hom^0(\cf,\cf^{2})$, see figures \ref{fig alg4} and \ref{fig alg2}. \begin{figure}[h] \begin{overpic} [scale=0.25]{alg13.eps} \put(10,45){$\wtv_1:$} \end{overpic} \caption{A graphic presentation of $\wtv_1 \in \Hom^1(\cf,\mb[1])$, and the slide, exchange, and isotopy relations in $\Hom^1(\cf^2,\cf)$.} \label{fig alg13} \end{figure} The super version of the isotopy relation of disjoint diagrams holds in $\mf{D}(\nh^e)$: $$(a \ot 1_{N'}) \circ (1_{M} \ot b)=(-1)^{|a||b|}(1_{N} \ot b) \circ (a\ot 1_{M'}) \in \Hom^{|a|+|b|}(M\ot M',N \ot N'),$$ for $a \in \Hom^{|a|}(M,N), b \in \Hom^{|b|}(M',N')$. In particular, $\wtv_{2,2} \circ \wtv_1=-\wtv_{1,2} \circ \wtv_1$ from the isotopy of two disjoint short strands of degree one. As an analogue of $u_k$ defined in (\ref{def u}), define a sum \begin{gather} \label{def wtu} \wtu_k = \sum\limits_{i=1}^{k}\wtv_{i,k} \in \Hom^1(\cf^k,\cf^{k-1}), \end{gather} for $k \ge 1$.
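In the lowest case $k=1$, the sign from the super isotopy relation already forces the composite of two consecutive sums to vanish. The following one-line check is a sketch (suppressing implicit identity strands, with the rightmost morphism applied first):

```latex
% Sketch: the k = 1 instance of the vanishing of consecutive composites.
% Here \wtu_1 = \wtv_{1,1} = \wtv_1 and \wtu_2 = \wtv_{1,2} + \wtv_{2,2}.
\begin{align*}
\wtu_1 \circ \wtu_2
  &= \wtv_1 \circ \wtv_{1,2} + \wtv_1 \circ \wtv_{2,2} \\
  &= \wtv_1 \circ \wtv_{1,2} - \wtv_1 \circ \wtv_{1,2} = 0,
\end{align*}
% where the second equality uses the isotopy of the two disjoint
% degree-one short strands recorded above.
```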
Then $\wtu_k \circ \wtu_{k+1}=0$ by the super isotopy relation of disjoint short strands. Define the idempotent $$\wte_k=\pak x_1^{k-1}\cdots x_{k-1} \in NH_k.$$ It is the flip of $e_k$ with respect to the horizontal axis. There is an induced idempotent endomorphism $r(\wte_k) \in \Hom^0(\cf^k, \cf^k)$. Define $\wtfk$ to be the direct summand of $\cf^k$ corresponding to the idempotent endomorphism $r(\wte_k)$. The morphism $\wtu_k \in \Hom^1(\cf^k,\cf^{k-1})$ induces a morphism in $\Hom^1(\wt{\cf}^{(k)},\wt{\cf}^{(k-1)})$ as the restriction of $\wtu_k$ to $\wt{\cf}^{(k)}$ followed by a projection onto $\wt{\cf}^{(k-1)}$. Let $d(\wtu_k)$ denote the resulting morphism. \begin{lemma} \label{lem eue'} There is an equality $r(\wte_{k-1}) \circ \wtu_k \circ r(\wte_k)=r(\wte_{k-1}) \circ \wtu_k \in \Hom^1(\cf^k,\cf^{k-1})$. \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{lem eue} except that there is a minus sign in the exchange relation $\wtv_{2,2} \circ r(\pa_{1,2})=-\wtv_{1,2}\circ r(\pa_{1,2})$, as in Corollary \ref{cor ext rel}. \end{proof} The lemma above implies that \begin{align*} &(r(\wte_{k-1}) \circ \wtu_k \circ r(\wte_k)) \circ (r(\wte_{k}) \circ \wtu_{k+1} \circ r(\wte_{k+1})) \\ =&r(\wte_{k-1}) \circ \wtu_k \circ r(\wte_{k}) \circ \wtu_{k+1} \circ r(\wte_{k+1}) \\ =&r(\wte_{k-1}) \circ \wtu_k \circ \wtu_{k+1} \circ r(\wte_{k+1})=0, \end{align*} where the last equality holds because $\wtu_k \circ \wtu_{k+1}=0$. Hence, $d(\wtu_{k}) \circ d(\wtu_{k+1}) \in \Hom^2(\wt{\cf}^{(k+1)}, \wt{\cf}^{(k-1)})$ is zero. We define a complex \begin{gather} \label{def e^f} \exp(\cf)=\left(\bigoplus\limits_{k \ge 0} \wtfk, d=\bigoplus\limits_{k \ge 1}d_k\right), \end{gather} where the components of the differential $d$ are given by $d_k=d(\wtu_k) \in \Hom^1(\wt{\cf}^{(k)}, \wt{\cf}^{(k-1)})$. \section{Discussions} \label{Sec dis} \n {\bf Non-invertibility of $\exp(\cf)$ and $\exp(-\cf)$.} Suppose that $\exp(\cf), \exp(-\cf)$ are invertible objects. 
They should induce invertible endofunctors via categorical actions. We will consider certain actions of $\mf{D}(\nh^e)$ or its variants, and show that the induced functors cannot be invertible. The derived tensor product over $\nh$ induces an action of $\mf{D}(\nh^e)$ on the derived category $\mf{D}(\nh)$ of left $\nh$-modules. Let $P_{(k)}=\nh \cdot e_k$ denote the left projective $\nh$-module. There are isomorphisms $$\cf^{(k)}(P_0) \cong \wt{\cf}^{(k)}(P_0) \cong P_{(k)} \in \mf{D}(\nh).$$ By definition, $\exp(\cf)$ is an iterated extension of the $\wt{\cf}^{(k)}$'s. So $\exp({\cf})(P_0)$ is isomorphic to an iterated extension of the $\wt{\cf}^{(k)}(P_0)$'s. But there is no nontrivial extension between the $\wt{\cf}^{(k)}(P_0)$'s, since $\wt{\cf}^{(k)}(P_0) \cong P_{(k)}$ is projective for all $k$. Thus, $$\exp({\cf})(P_0) \cong \bigoplus\limits_{k=0}^{\infty}P_{(k)}.$$ The induced map $\End(P_0) \ra \End(\exp({\cf})(P_0))$ is very far from being surjective. Hence, $\exp({\cf})$ cannot be an invertible endofunctor of $\mf{D}(\nh)$, nor an invertible object in $\mf{D}(\nh^e)$. In the case of $\exp(-\cf)$, we consider a cyclotomic quotient $\nh(1)$ of $\nh$, where the defining two-sided ideal is generated by $x_{1,n}$ for $n>0$. The idempotents $1_n$ lie in the ideal for $n>1$, and $\nh(1)$ has a $\Z$-basis $\{1_0, 1_1, v_1\}$. There are two projective $\nh(1)$-modules $P_0$ and $P_1$. Let $\cf(1)$ denote the corresponding quotient of $\cf$ as an $\nh(1)$-bimodule. It induces an endofunctor of $\mf{D}(\nh(1))$. Then $\cf(1)(P_0) \cong P_1, \cf(1)(P_1) = 0$, and $\cf(1)^2=0$. The object $\exp(-\cf(1))$ reduces to a two-term complex $(\mb \ra \cf(1))$. Then $$\exp(-\cf(1))(P_0) \cong (P_0 \xrightarrow{\cdot v_1} P_1), \qquad \exp(-\cf(1))(P_1) \cong P_1.$$ So $\Hom(\exp(-\cf(1))(P_0), \exp(-\cf(1))(P_1))=0$, while $\Hom(P_0, P_1) \neq 0$. Hence, the endofunctor $\exp(-\cf(1))$ is not invertible.
\vspace{.2cm} \n {\bf Lifting $\sum\limits_{n\ge 0}(-1)^nx^n$ and $\sum\limits_{n\ge 0}x^n$.} If we remove the nilHecke generators $x_{i,n}, \pa_{i,n}$ from $\nh$, we obtain a simpler diagrammatic algebra $R$ generated by vertical strands $1_n$ and short strands $v_{i,n}$ only. They satisfy the same relations as in Definition \ref{def nh}. The constructions of the induction bimodule $\cf$ and of the morphisms $v \in \Hom(\mb,\cf), \wtv \in \Hom^1(\cf,\mb)$ still work in the case of $R$. The difference is that we do not have the idempotents $e_k, \wte_k$, and the corresponding direct summands $\cfk, \wtfk$ of $\cf^k$. In contrast to $\exp(\cf)$ and $\exp(-\cf)$, which use $\cfk$ and $\wtfk$, one can define similar objects in the derived category of $R$-bimodules in which $\cfk, \wtfk$ are replaced by $\cf^k$. The two objects lift $\sum\limits_{n\ge 0}(-1)^nx^n$ and $\sum\limits_{n\ge 0}x^n$, respectively, but these liftings are not invertible functors either.
\section{Introduction} \label{sec:intro} Since network models can well describe the spreading of infectious diseases among populations, many efforts have been devoted to studying this field~\cite{newman2003structure,newman2010networks}. Early studies mainly focused on analyzing the impact of the network structure on epidemic spreading and on control strategies, for example, how the network topology affects the epidemic threshold and the prevalence of epidemic~\cite{BBPSV:2005,PhysRevLett.105.218701,newman2002spread,PhysRevE.91.042811,pastor2001epidemic,peter}, or how to design effective immunization strategies to control the outbreaks of epidemics~\cite{pastor2002immunization,cohen2003efficient}. In reality, outbreaks of epidemics can trigger spontaneous behavioral responses of individuals, who take preventive measures, which in turn alter the epidemic dynamics and affect the disease transmission process~\cite{haifeng2013braess,PhysRevE.86.036117,bauch2003group,bauch2004vaccination,wang2012estimating,funk2010modelling,liu2012impact}. Thus, some recent studies have attempted to evaluate the impact of human behaviors on epidemic dynamics.
For instance, Funk \emph{et~al.}~\cite{funk2009spread} studied the impact of awareness spread on both the epidemic threshold and the prevalence, and they found that, in a well-mixed population, the spread of awareness can reduce the prevalence of epidemic but does not tend to affect the epidemic threshold, whereas the epidemic threshold is altered when the model is considered on social networks. Sahneh \emph{et~al.} considered a Susceptible-Alert-Infected-Susceptible ($SAIS$) model~\cite{sahneh2012existence}, and they found that the behavioral response can enhance the epidemic threshold. Meloni \emph{et~al.} constructed a meta-population model incorporating several scenarios of self-initiated behavioral changes into the mobility patterns of individuals, and they found that such behavioral changes do not alter the epidemic threshold, but may increase rather than decrease the prevalence of epidemic~\cite{meloni2011modeling}. Meanwhile, in Ref.~\cite{wu2012impact}, by designing the transmission rate of the epidemic as a function of the local infected density or the global infected density, Wu \emph{et~al.} investigated the effect of such a behavioral response on the epidemic threshold. Infectious neighbors can not only infect a susceptible individual but also trigger the awareness of the susceptible individual~\cite{sahneh2012existence,zhang2014suppression}. In view of this, in Ref.~\cite{perra2011towards}, Perra \emph{et~al.} introduced into the classical $SIR$ model a new class of individuals, $S^F$, representing susceptible people who self-initiate behavioral changes that lead to a reduction in the transmissibility of the infectious disease, and they found that such a model ($SS^FIR$) can induce a rich phase space with multiple epidemic peaks and tipping points. However, the network structure was not incorporated into these models.
As we know, when the model is considered within a network-based framework, new theoretical tools should be used and new phenomena may be observed. In view of this, we incorporate the network structure into the $SS^FIR$ model~\cite{perra2011towards} to investigate its spreading dynamics. In the model, when contacting an infectious neighbor, a susceptible individual may be infected (going from the $S$ state to the $I$ state) with a transmission rate, or a behavioral response may be triggered (going from the $S$ state to the $S^F$ state) with a response rate. We provide a theoretical formula for the epidemic threshold as well as the prevalence of epidemic via the percolation method~\cite{newman2002spread}. Our results show that the introduction of the $S^F$ class can enhance the epidemic threshold and reduce the prevalence of epidemic. We also demonstrate that applying the mean-field method to such a model yields a wrong result---namely, that the introduction of the $S^F$ class cannot alter the epidemic threshold. The reasons are presented in Sec.~\ref{sec:mean-field}. \section{Descriptions of the model} \label{sec:model} In the classical $SIR$ model on a complex network, each node can be in one of three states: Susceptible ($S$), Infected ($I$), or Recovered ($R$). The transmission rate along each $SI$ link is $\beta$, and an infected node can enter the $R$ state with a recovery rate $\mu$. To reflect the fact that, upon observation of infection, susceptible individuals may adopt protective measures to lower their infection risk, a new class, denoted by $S^F$, is introduced into the original $SIR$ model; we use $SS^FIR$ to denote the modified model in this study. In the model, when an $S$ node contacts an $I$ neighbor, besides the possibility of being infected, the $S$ node can go to the $S^F$ state with a response rate $\beta_F\geq0$.
The transmission rate for $S^F$ nodes is lower; thus we assume that it is $\gamma\beta$, where $0\leq\gamma<1$ is a discount factor. The $SS^FIR$ model is described by the following four reactions and the associated rates: \begin{eqnarray}\label{1} S+I\xrightarrow{\beta} 2I, \end{eqnarray} \begin{eqnarray}\label{2} S+I\xrightarrow{\beta_F} S^F+I, \end{eqnarray} \begin{eqnarray}\label{3} S^F+I\xrightarrow{\gamma\beta} 2I, \end{eqnarray} \begin{eqnarray}\label{4} I\xrightarrow{\mu} R. \end{eqnarray} Note that the $SS^FIR$ model reduces to the $SIR$ model once $\beta_F=0$, and the $S^F$ state corresponds to a fully vaccinated state when $\gamma=0$. \section{Theoretical analysis} \label{sec:theory} In our model, during a sufficiently small time interval $\Delta t$, the transition \emph{rates} of an $SI$ edge becoming an $II$, $S^FI$ and $SR$ edge are $\beta$, $\beta_F$ and $\mu$, respectively. As a result, the \emph{probabilities} of an $SI$ edge becoming an $II$ and an $S^FI$ edge are given as $T_1=\frac{\beta}{\beta+\beta_F+\mu}$ and $T_2=\frac{\beta_F}{\beta+\beta_F+\mu}$, respectively. Similarly, since the transition rates of an $S^FI$ edge becoming an $II$ and an $S^FR$ edge during a sufficiently small time interval $\Delta t$ are $\gamma\beta$ and $\mu$, the probability of an $S^FI$ edge becoming an $II$ edge is $T_3=\frac{\gamma\beta}{\gamma\beta+\mu}$~\cite{hebert2013pathogen}. To analyze our proposed model, we first define the ``externally infected neighbor'' (EIN) of a node. For node $i$, a neighbor $j$ being an EIN means that $j$ is infected by its neighbors other than $i$ (i.e., $j$ is infected even if node $i$ is deleted from the network, which is the basic assumption of the cavity theory in statistical physics; note that this method is suitable for networks with a negligible number of loops when the network size is sufficiently large~\cite{newman2013interacting}).
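As an illustration (not part of the original derivation), the three edge-transition probabilities follow directly from the competing rates; the function name below is our own, and $\mu$ defaults to $1$ as in the simulations of Sec.~\ref{sec:result}:

```python
# A sketch of the edge-transition probabilities T1, T2, T3 defined above.

def transition_probabilities(beta, beta_f, gamma, mu=1.0):
    """Probabilities that each competing event wins on an edge."""
    T1 = beta / (beta + beta_f + mu)         # S-I edge: infection
    T2 = beta_f / (beta + beta_f + mu)       # S-I edge: response (S -> S^F)
    T3 = gamma * beta / (gamma * beta + mu)  # S^F-I edge: infection
    return T1, T2, T3

T1, T2, T3 = transition_probabilities(beta=0.5, beta_f=0.5, gamma=0.1)
# with these values, T1 = T2 = 0.25 and T3 = 0.05/1.05
```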
The probability of neighbor $j$ being an EIN of $i$ is denoted by $\theta$; then, for a node with degree $k$, the probability of having $m$ EINs is given as: \begin{eqnarray}\label{5} p(m|k)=\binom{k}{m}\theta^m(1-\theta)^{k-m}. \end{eqnarray} Let $p(R|m)$ be the probability of $i$ being infected when it has $m$ EINs. To calculate such a probability, we need to recognize that, in our model, an $S$ node can become an $I$ node in two ways: (a) the $S$ node is directly infected; or (b) the $S$ node first goes to the $S^F$ state and then is infected. To facilitate the analysis, we approximately assume that the impacts of $i$'s infected neighbors on node $i$ happen in a non-overlapping order, i.e., they act on node $i$ one by one. For case (a), the probability of node $i$ being infected by the $s$th infected neighbor is given as: \begin{eqnarray}\label{6} A_1=(1-T_1-T_2)^{s-1}T_1. \end{eqnarray} Eq.~(\ref{6}) indicates that the previous $s-1$ infected neighbors have not changed the state of node $i$ (to either the $I$ or the $S^F$ state) before they enter the $R$ state. Therefore, the probability of $i$ being infected is: \begin{eqnarray}\label{7} p_1(R|m)=\sum_{s=1}^{m}A_1=\frac{T_1}{T_1+T_2}[1-(1-T_1-T_2)^m]. \end{eqnarray} For case (b), node $i$ should first enter the $S^F$ state, and the probability that the susceptible node $i$ is alerted by the $l$th infected neighbor and becomes $S^F$ can be written as: \begin{eqnarray}\label{8} A_2=(1-T_1-T_2)^{l-1}T_2, \end{eqnarray} which also indicates that the previous $l-1$ infected neighbors have not changed the state of node $i$ before they enter the $R$ state. The remaining $m-l+1$ infected neighbors (including the infected neighbor who has just made $i$ go to the $S^F$ state) can infect $i$ with probability: \begin{eqnarray}\label{9} A_3=1-(1-T_3)^{m-l+1}.
\end{eqnarray} As a result, the probability of node $i$ first entering the $S^F$ state and then going to the $I$ state is: \begin{eqnarray}\label{10} \nonumber p_2(R|m)&&=\sum\limits_{l= 1}^m {A_2* A_3} =\frac{{{T_2}}}{{{T_1} + {T_2}}}[1 - {(1 - {T_1} - {T_2})^m}]\\ -&&\frac{{(1 - {T_3}){T_2}}}{{{T_1} + {T_2} - {T_3}}}[{(1 - {T_3})^m} - {(1 - {T_1} - {T_2})^m}]. \end{eqnarray} The probability $p(R|m)$ is \begin{eqnarray}\label{11} \nonumber p(R|m) &=&p_1(R|m)+p_2(R|m)= 1 - {(1 - {T_1} - {T_2})^m} \\ \nonumber&&-\frac{{(1 - {T_3}){T_2}}}{{{T_1} + {T_2} - {T_3}}}[{(1 - {T_3})^m} - {(1 - {T_1} - {T_2})^m}]\\ \nonumber&=&1 - \frac{{{T_1} - {T_3} + {T_2}{T_3}}}{{{T_1} + {T_2} - {T_3}}}{(1 - {T_1} - {T_2})^m}\\ && -\frac{{(1 - {T_3}){T_2}}}{{{T_1} + {T_2} - {T_3}}}{(1 - {T_3})^m}. \end{eqnarray} Combining Eqs.~(\ref{5}) and (\ref{11}), the probability of a node with degree $k$ being infected is \begin{eqnarray}\label{12} \nonumber p(R|k)& =& \sum\limits_m {p(R|m)} p(m|k) = 1 - {(1 - \theta {T_1} - \theta {T_2})^k} \\ \nonumber && - {{(1 - {T_3}){T_2}} \over {{T_1} + {T_2} - {T_3}}}[{(1 - \theta {T_3})^k} - {(1 - \theta {T_1} - \theta {T_2})^k}] \\ \nonumber & =& 1 - {{{T_1} - {T_3} + {T_2}{T_3}} \over {{T_1} + {T_2} - {T_3}}}{(1 - \theta {T_1} - \theta {T_2})^k}\\ && - {{(1 - {T_3}){T_2}} \over {{T_1} + {T_2} - {T_3}}}{(1 - \theta {T_3})^k}. \end{eqnarray} Then the EIN probability $\theta$ is the solution to the self-consistent condition \begin{eqnarray}\label{13} \nonumber\theta & =& \sum\limits_k {Q(k)p(R|k)}\\ \nonumber & =& 1 - \frac{{{T_1} - {T_3} + {T_2}{T_3}}}{{{T_1} + {T_2} - {T_3}}}{G_1}(1 - \theta {T_1} - \theta {T_2})\\ &&- \frac{{(1 - {T_3}){T_2}}}{{{T_1} + {T_2} - {T_3}}}{G_1}(1 - \theta {T_3})=f(\theta). \end{eqnarray} In Eq.~(\ref{13}), $Q(k)=\frac{(k+1)P(k+1)}{\langle k \rangle}$ is the excess degree distribution, where $P(k)$ is the degree distribution and $\langle k \rangle$ is the average degree.
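The closed form of Eq.~(\ref{11}) can be cross-checked numerically against the direct sums over the two infection pathways; the following sketch (our own helper functions, with illustrative parameter values) confirms that both expressions agree:

```python
# Cross-check of Eq. (11): direct pathway sums versus the closed form.

def p_infected_direct(m, T1, T2, T3):
    # pathway (a): directly infected by the s-th infected neighbor
    p1 = sum((1 - T1 - T2) ** (s - 1) * T1 for s in range(1, m + 1))
    # pathway (b): alerted by the l-th neighbor, then infected as S^F
    p2 = sum((1 - T1 - T2) ** (l - 1) * T2 * (1 - (1 - T3) ** (m - l + 1))
             for l in range(1, m + 1))
    return p1 + p2

def p_infected_closed(m, T1, T2, T3):
    a = (T1 - T3 + T2 * T3) / (T1 + T2 - T3)
    b = (1 - T3) * T2 / (T1 + T2 - T3)
    return 1 - a * (1 - T1 - T2) ** m - b * (1 - T3) ** m

T1, T2, T3 = 0.3, 0.2, 0.05   # illustrative values with T1 + T2 != T3
max_err = max(abs(p_infected_direct(m, T1, T2, T3)
                  - p_infected_closed(m, T1, T2, T3)) for m in range(8))
```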
The generating function for $Q(k)$ is given as \begin{eqnarray}\label{15} G_1(x)=\sum\limits_{k=0} {Q(k) x^k}. \end{eqnarray} There is a trivial solution $\theta=0$ of the self-consistent equation (\ref{13}). To have a non-trivial solution, the following condition must be met: \begin{equation} \label{16} \frac{df(\theta)}{d\theta}|_{\theta=0}=(T_1+T_2T_3) G'_1(1)\geq 1, \end{equation} which implies that the epidemic can break out when \begin{equation} \label{17} \frac{\beta}{\mu+\gamma\beta}\frac{\mu+\gamma\beta+\gamma\beta_F}{\mu+\beta+\beta_F} \geq \frac{\langle k \rangle}{\langle k^2 \rangle-\langle k \rangle}. \end{equation} For the prevalence of epidemic (defined as $R(\infty)$), we can numerically solve for $\theta$ from the self-consistent equation~(\ref{13}); then the formula for $R(\infty)$ is \begin{eqnarray}\label{18} \nonumber R(\infty) & =& \sum\limits_k {P(k)p(R|k)}\\ \nonumber & =& 1 - \frac{{{T_1} - {T_3} + {T_2}{T_3}}}{{{T_1} + {T_2} - {T_3}}}{G_0}(1 - \theta {T_1} - \theta {T_2})\\ &&- \frac{{(1 - {T_3}){T_2}}}{{{T_1} + {T_2} - {T_3}}}{G_0}(1 - \theta {T_3}). \end{eqnarray} In Eq.~(\ref{18}), $G_0(\cdot)$ is the generating function of the degree distribution $P(k)$: \begin{eqnarray}\label{14} G_0(x)=\sum\limits_{k=0} {P(k) x^k}. \end{eqnarray} \section{Simulation results} \label{sec:result} In this section, we perform an extensive set of Monte Carlo simulations to validate the theoretical predictions of Section~\ref{sec:theory}. We carry out simulations on an Erd\H{o}s-R\'{e}nyi network (labeled ER network)~\cite{erdos1960evolution} with network size $N=10000$ and average degree $\langle k \rangle=10$, and on a configuration network generated by the uncorrelated configuration model (UCM)~\cite{newman2001random}. The configuration network also has $N=10000$ nodes, and its degree distribution follows $P(k)\sim k^{-3.0}$ with minimal and maximal degrees $k_{min}=3$ and $k_{max}=\sqrt{N}$, respectively.
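For a concrete illustration, the self-consistent equation~(\ref{13}) and the prevalence~(\ref{18}) can be solved numerically. The sketch below (our own code, not the authors') assumes a Poisson degree distribution, for which both generating functions reduce to $e^{\langle k\rangle(x-1)}$, and uses a simple fixed-point iteration:

```python
import math

# Solve Eq. (13) for theta by fixed-point iteration and evaluate R(inf)
# from Eq. (18), assuming a Poisson (ER-like) degree distribution so that
# G_0(x) = G_1(x) = exp(<k>(x - 1)).

def theta_and_prevalence(beta, beta_f, gamma, kmean, mu=1.0, tol=1e-13):
    T1 = beta / (beta + beta_f + mu)
    T2 = beta_f / (beta + beta_f + mu)
    T3 = gamma * beta / (gamma * beta + mu)
    a = (T1 - T3 + T2 * T3) / (T1 + T2 - T3)
    b = (1 - T3) * T2 / (T1 + T2 - T3)
    G = lambda x: math.exp(kmean * (x - 1))
    theta = 0.5                       # start from a nonzero guess
    for _ in range(100000):
        new = 1 - a * G(1 - theta * (T1 + T2)) - b * G(1 - theta * T3)
        if abs(new - theta) < tol:
            theta = new
            break
        theta = new
    R = 1 - a * G(1 - theta * (T1 + T2)) - b * G(1 - theta * T3)
    return theta, R

theta, R = theta_and_prevalence(beta=0.5, beta_f=0.5, gamma=0.1, kmean=10)
```

Below the threshold of Ineq.~(\ref{17}) the iteration collapses to $\theta=0$ (so $R(\infty)=0$), while above it a nontrivial fixed point appears.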
\subsection{Results on ER network} Unlike for the $SIS$ (Susceptible-Infected-Susceptible) model, it is not easy to determine the epidemic threshold for the $SIR$ model owing to the non-zero value of $R$. To do so, in Ref.~\cite{shu2014simulated}, Shu \emph{et~al.} suggested that the variability measure \begin{eqnarray}\label{measure} \Delta=\frac{\sqrt{\langle \rho^2\rangle-\langle \rho\rangle^2}}{\langle \rho\rangle} \end{eqnarray} can well predict the epidemic threshold for the $SIR$ model, where $\rho$ denotes the prevalence of epidemic in one simulation realization~\cite{crepey2006epidemic,shu2012effects}. $\Delta$ can be interpreted as the relative standard deviation of the epidemic prevalence, and is a standard measure to determine the critical point of an equilibrium phase transition in magnetic systems~\cite{ferreira2011quasistationary}. In our simulations, we have taken at least 1000 independent realizations to predict the epidemic threshold. For convenience, in this study, we set the recovery rate $\mu=1.0$. In Fig.~\ref{fig1}, for different response rates $\beta_F$, the value of $R(\infty)$ (upper panels) and the measure $\Delta$ (lower panels) as functions of the transmission rate $\beta$ are investigated. As shown in Fig.~\ref{fig1}, one can observe that the variability measure can well predict the epidemic threshold for our $SS^FIR$ model. As a result, in the following figures, we use this method to determine the epidemic threshold, i.e., the point where the value of $\Delta$ is maximal. Fig.~\ref{fig1} also shows that, whether $\gamma=0.1$ [see Figs.~\ref{fig1}(a)-(b)] or $\gamma=0.3$ [see Figs.~\ref{fig1}(c)-(d)], on the one hand, the epidemic threshold is enhanced as the response rate $\beta_F$ is increased. On the other hand, for a fixed value of $\beta$, Figs.~\ref{fig1}(a) and (c) suggest that the prevalence of epidemic is remarkably reduced when $\beta_F$ is increased.
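The variability measure of Eq.~(\ref{measure}) is straightforward to compute from an ensemble of per-realization prevalences; the helper below is a sketch of such post-processing (the function name is ours):

```python
# Variability measure Delta = sqrt(<rho^2> - <rho>^2) / <rho>; its
# maximum over beta locates the epidemic threshold.

def variability(prevalences):
    n = len(prevalences)
    mean = sum(prevalences) / n
    mean_sq = sum(r * r for r in prevalences) / n
    var = max(mean_sq - mean * mean, 0.0)  # guard against round-off
    return var ** 0.5 / mean

# identical realizations give Delta = 0; a bimodal ensemble (some runs
# die out, some take off), typical near the threshold, gives large Delta
```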
The results suggest that, by introducing an additional protective state, $S^F$, into the classical $SIR$ model, the conclusions become quite different from previous results that did not incorporate the impacts of human behavioral responses. The result again emphasizes the fact that the spontaneous behavioral responses of individuals to emergent diseases have vital impacts on the epidemic dynamics. If the behavioral responses are ignored in mathematical modelling, the obtained results cannot really reflect the spreading mechanism of epidemics among human populations. \begin{figure} \begin{center} \includegraphics[width=3.5in]{fig1.eps} \caption{(Color online) On ER networks, the prevalence of epidemic $R(\infty)$ (upper panels) and the variability measure $\Delta$ (lower panels) as functions of the transmission rate $\beta$ for different values of $\beta_F$ and $\gamma$. Note that $\beta_F$ is a rate rather than a probability; as a result, its value may be larger than one. (a)-(b) $\gamma=0.1$; (c)-(d) $\gamma=0.3$. The pink lines are given to demonstrate that the peak value of $\Delta$ corresponds to the epidemic threshold $\beta_c$.} \label{fig1} \end{center} \end{figure} We then compare the theoretical results with the Monte Carlo simulations on the ER network in Figs.~\ref{fig3} and \ref{fig4}. Since the degree distribution of an Erd\H{o}s-R\'{e}nyi network is ${P(k)} = {e^{ - \langle k \rangle}}\frac{\langle k \rangle^k}{{k!}}$, the generating functions satisfy: \begin{equation} \label{19} G_0(x)=G_1(x)=e^{\langle k \rangle(x-1)} \end{equation} According to Ineq.~(\ref{16}), the epidemic threshold $\beta_c$ for the ER network is determined by the following equation \begin{equation} \label{20} \frac{\beta_c}{\mu+\gamma\beta_c}\frac{\mu+\gamma\beta_c+\gamma\beta_F}{\mu+\beta_c+\beta_F} = \frac{1}{\langle k \rangle}.
\end{equation} Moreover, substituting Eq.~(\ref{19}) into Eq.~(\ref{13}) and Eq.~(\ref{18}), the prevalence of epidemic $R(\infty)$ can be easily solved. The comparison of $R(\infty)$ between the simulations and the theoretical result is plotted in Fig.~\ref{fig3}, which indicates that the numerical simulations and the theoretical results are in good agreement. Meanwhile, the epidemic thresholds $\beta_c$ obtained from Eq.~(\ref{20}) and from the numerical method (i.e., the point where $\Delta$ is maximal) are compared in Fig.~\ref{fig4}, which also indicates that the epidemic threshold predicted by our method is in remarkable agreement with the numerical simulations. The result in Fig.~\ref{fig4} also suggests that the epidemic threshold $\beta_c$ is increased as the value of $\gamma$ is decreased. Importantly, this effect is more pronounced when the response rate $\beta_F$ is larger. \begin{figure} \begin{center} \includegraphics[width=3.5in]{fig2.eps} \caption{(Color online) Comparison between the Monte Carlo based simulations and the theoretical predictions for $R(\infty)$ on ER networks. The simulation results are denoted by symbols and the theoretical predictions are denoted by the corresponding lines. The theoretical results are obtained by substituting Eq.~(\ref{19}) into Eqs.~(\ref{13}) and (\ref{18}): the value of $\theta$ is numerically solved from Eq.~(\ref{13}), and then $R(\infty)$ is obtained from Eq.~(\ref{18}). (a) $\gamma=0.1$; (b) $\gamma=0.3$. } \label{fig3} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.5in]{fig3.eps} \caption{(Color online) Comparison between the Monte Carlo based simulations and the theoretical predictions for the epidemic threshold $\beta_c$ on ER networks. The theoretical predictions denoted by lines are obtained from Eq.~(\ref{20}), and the simulation results denoted by symbols are the points where the values of $\Delta$ are maximal.
(a) $\beta_c$ as a function of the response rate $\beta_F$ for different values of $\gamma$; (b) $\beta_c$ as a function of the discount factor $\gamma$ for different values of $\beta_F$.} \label{fig4} \end{center} \end{figure} \subsection{Results on UCM network} Real complex networked systems often possess a certain degree of skewness in their degree distributions, typically represented by some scale-free topology. We thus check our model on the UCM network with degree distribution $P(k)\sim k^{-3.0}$ to illustrate that our theory generalizes to networks with heterogeneous degree distributions in the absence of degree-degree correlations. As shown in Figs.~\ref{fig5} and \ref{fig6}, one can see that the analytical results are in good agreement with the numerics. They also indicate that, since increasing $\beta_F$ induces more susceptible individuals to go to the $S^F$ state and reducing $\gamma$ lowers the infection risk of $S^F$ nodes, both of them lower the prevalence of epidemic $R(\infty)$ and increase the epidemic threshold $\beta_c$. \begin{figure} \begin{center} \includegraphics[width=3.5in]{fig4.eps} \caption{(Color online) Comparison between the Monte Carlo based simulations and the theoretical predictions for $R(\infty)$ on UCM networks. The simulation results are denoted by symbols and the theoretical predictions are denoted by the corresponding lines. The theoretical results are obtained by substituting the degree distribution $P(k)$ into the generating functions (\ref{15}) and (\ref{14}), and then $R(\infty)$ is solved from Eqs.~(\ref{13}) and (\ref{18}) as described in Fig.~\ref{fig3}. (a) $\gamma=0.1$; (b) $\gamma=0.3$. } \label{fig5} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.5in]{fig5.eps} \caption{(Color online) Comparison between the Monte Carlo based simulations and the theoretical predictions for the epidemic threshold $\beta_c$ on UCM networks.
The theoretical predictions denoted by lines are obtained from Ineq.~(\ref{17}), and the simulation results denoted by symbols are the points where the values of $\Delta$ are maximal. (a) $\beta_c$ as a function of the response rate $\beta_F$ for different values of $\gamma$; (b) $\beta_c$ as a function of the discount factor $\gamma$ for different values of $\beta_F$.} \label{fig6} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.5in]{fig6.eps} \caption{(Color online) Based on the differential Eqs.~(\ref{21})-(\ref{24}), the prevalence of epidemic $R(\infty)$ as a function of $\beta$ is presented. (a) $\gamma=0.1$; (b) $\gamma=0.3$. } \label{fig7} \end{center} \end{figure} \section{Mean field method for the model} \label{sec:mean-field} One possible way to describe our proposed $SS^{F}IR$ model is the mean-field method; the corresponding equations (with $\mu=1$) can be written as: \begin{eqnarray} \frac{dS_{k}(t)}{dt}&=&-\beta kS_{k}\Theta-\beta_{F}kS_{k}\Theta,\label{21}\\ \frac{dS_{k}^{F}(t)}{dt}&=&\beta_{F}kS_{k}\Theta-\beta\gamma kS_{k}^{F}\Theta,\label{22}\\ \frac{dI_{k}(t)}{dt}&=&k\Theta\beta(S_{k}+ \gamma S_{k}^{F})-I_{k},\label{23}\\ \frac{dR_{k}(t)}{dt}&=&I_{k},\label{24} \end{eqnarray} where the factor $\Theta(t)=\sum_{k'}P(k'|k)I_{k'}(t)$ represents the probability that any given link points to an infected node. In the absence of degree correlations, $\Theta(t)=\frac{1}{\langle k\rangle}\sum_{k}kP(k)I_{k}(t)$~\cite{moreno2002epidemic}. Based on the above differential equations, we obtain the epidemic threshold $\beta_c=\frac{\langle k\rangle}{\langle k^2\rangle}$ (a detailed derivation is given in Section~\ref{sec:appendix}), which means that the epidemic threshold of our model is related to neither the response rate $\beta_F$ nor the discount factor $\gamma$, and is the same as the epidemic threshold of the classical $SIR$ model.
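For reference, a quick numerical evaluation of this mean-field threshold for the UCM parameters used above ($P(k)\propto k^{-3}$, $k_{min}=3$, $k_{max}=100$) is sketched below (our own helper):

```python
# Mean-field threshold beta_c = <k>/<k^2> for a truncated power-law
# degree distribution P(k) ~ k^(-3), k = 3 .. 100.

def mf_threshold(exponent=3.0, kmin=3, kmax=100):
    ks = range(kmin, kmax + 1)
    w = {k: k ** (-exponent) for k in ks}
    norm = sum(w.values())
    kmean = sum(k * w[k] for k in ks) / norm
    ksq = sum(k * k * w[k] for k in ks) / norm
    return kmean / ksq

beta_c_mf = mf_threshold()
# note the result is independent of beta_F and gamma
```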
The numerical results in Fig.~\ref{fig7}, obtained from Eqs.~(\ref{21})-(\ref{24}), also indicate that, within the mean-field method, the epidemic threshold is not altered by different values of $\beta_F$ or $\gamma$. However, our previous simulation results based on the Monte Carlo method have suggested that the epidemic threshold $\beta_c$ is increased when $\beta_F$ is increased or $\gamma$ is reduced. That is to say, the conclusion obtained by the mean-field method is wrong. Now let us explain why the mean-field method gives a wrong result. It is known that, near the epidemic threshold, the fraction of infected nodes (denoted $\rho_I$) is very small. When the mean-field method is used, the dynamic correlations are neglected: the probability of an $S$ node becoming $S^F$ is of order $O(\rho_I)$, since the average fraction of infected nodes in the neighborhood equals $O(\rho_I)$. Similarly, the probability of an $S^F$ node becoming an $I$ node is also of order $O(\rho_I)$. As a result, the probability of the path $S\rightarrow S^F\rightarrow I$ is of order $O(\rho_I^2)$, so the effect of the $S^F$ state is ignored and the epidemic threshold obtained by the mean-field method does not depend on the value of $\beta_F$ or $\gamma$. In fact, when an $S$ node becomes an $S^F$ node, there must exist at least one infected node in the neighborhood of the $S$ node. More importantly, these infected neighbors may persist for a certain time interval, so the probability of $S\rightarrow S^F\rightarrow I$ is not of order $O(\rho_I^2)$. However, as deduced in Eq.~(\ref{9}), the dynamic correlation near the epidemic threshold is taken into account in our analysis above, which can therefore accurately predict the epidemic threshold.
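The threshold independence predicted by the mean-field equations can also be seen by integrating Eqs.~(\ref{21})-(\ref{24}) directly. The sketch below (our own parameter choices) uses forward Euler on a $k$-regular network, where $\Theta=I$, with $\mu=1$; the mean-field threshold then sits at $\beta_c=1/k$ regardless of $\beta_F$:

```python
# Forward-Euler integration of the degree-based mean-field equations
# (21)-(24) for a single degree class k (a k-regular network).

def mf_final_size(beta, beta_f, gamma, k=10, i0=1e-4, dt=0.005, tmax=300.0):
    S, SF, I, R = 1.0 - i0, 0.0, i0, 0.0
    for _ in range(int(tmax / dt)):
        theta = I                     # k-regular network: Theta(t) = I(t)
        dS = -(beta + beta_f) * k * S * theta
        dSF = beta_f * k * S * theta - beta * gamma * k * SF * theta
        dI = beta * k * theta * (S + gamma * SF) - I   # mu = 1
        dR = I
        S += dS * dt; SF += dSF * dt; I += dI * dt; R += dR * dt
    return R

below = mf_final_size(beta=0.05, beta_f=1.0, gamma=0.1)  # beta*k < 1: dies out
above = mf_final_size(beta=0.30, beta_f=1.0, gamma=0.1)  # beta*k > 1: outbreak
```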
\section{Conclusions} \label{sec:conclusion} To summarize, we have proposed an $SS^FIR$ epidemiological model on complex networks, in which the probability of a susceptible individual entering the $S^F$ state increases with the number of infected neighbors, to reflect the fact that individuals are more likely to take protective measures when they find that their neighbors are infected. By using theoretical analysis as well as numerical simulations, we found that the prevalence of epidemic is effectively reduced and the epidemic threshold is remarkably increased when the response rate $\beta_F$ is increased or the discount factor $\gamma$ is reduced. Moreover, we have demonstrated that the mean-field based analysis provides a wrong result: that the epidemic threshold is not related to the response rate $\beta_F$ or the discount factor $\gamma$. The reason is that, near the epidemic threshold, the probability of $S\rightarrow S^F\rightarrow I$ is a second-order infinitesimal, since the mean-field method ignores the dynamic correlations; this causes the effect of the $S^F$ state to be ignored. With the development of technology, information-induced awareness or self-protective behaviors can diffuse not only through the contact networks where the diseases spread but also, rapidly, through many different channels, such as word of mouth, news media, and online social networks. In view of this, the recent well-studied multiplex network theory may be an ideal framework to mimic the interplay of information or related awareness and the epidemic dynamics~\cite{granell2013dynamical,wang2014asymmetrically,boccaletti2014structure}. Thus, how to generalize our model to multiplex networks and provide a theoretical analysis of the model is a challenge for our future work.
\section{Appendix} \label{sec:appendix} Assuming $S_{k}(0)\approx1$, from Eq.~(\ref{21}) we have~\cite{moreno2002epidemic} \begin{eqnarray}\label{S1} S_{k}(t)=e^{-k(\beta+\beta_{F})\phi(t)}, \end{eqnarray} where $\phi(t)=\int_0^{t}\Theta(t')dt'=\frac{1}{\langle k\rangle}\sum_{k}{kP(k)R_k(t)}$. Substituting Eq.~(\ref{S1}) into Eq.~(\ref{22}), one has: \begin{eqnarray}\label{S2} \frac{dS_{k}^{F}}{dt}=\beta_{F}k\Theta(t) e^{-k(\beta+\beta_{F})\phi(t)}-\beta \gamma kS_{k}^{F}\Theta(t). \end{eqnarray} By using the variation of constants method, $S_{k}^{F}(t)$ is solved as: \begin{eqnarray} \nonumber S_{k}^{F}(t)=\frac{\beta_{F}}{\beta+\beta_{F}-\beta \gamma}e^{-\beta \gamma k\phi (t)}(1-e^{-k(\beta+\beta_{F}-\beta \gamma)\phi(t)}).\\ \end{eqnarray} Then, \begin{widetext} \begin{eqnarray} \label{S3} \nonumber \Theta (t)&=&\frac{d\phi(t)}{dt} =1-\phi(t)-\frac{1}{\langle k\rangle}\sum_{k} kP(k)e^{-k(\beta+\beta_{F})\phi(t)}\\&&-\frac{1}{\langle k\rangle}\sum_{k} kP(k)\frac{\beta_{F}}{\beta+\beta_{F}-\beta \gamma}e^{-\beta \gamma k\phi(t)}(1-e^{-k(\beta+\beta_{F}-\beta \gamma)\phi(t)}). \end{eqnarray} \end{widetext} By letting $\phi_{\infty}=\lim_{t\rightarrow\infty}\phi(t)$, and using the facts that $\lim_{t\rightarrow\infty}\frac{d\phi(t)}{dt}=0$ and $I(t\rightarrow\infty)=0$ when the epidemic ends, a self-consistent equation can be obtained from Eq.~(\ref{S3}): \begin{widetext} \begin{eqnarray} \label{S4} \phi_\infty&=&1-\frac{1}{\langle k\rangle}\sum_{k} kP(k)e^{-k(\beta+\beta_{F})\phi_\infty}\nonumber\\ &-&\frac{1}{\langle k\rangle}\sum_{k} kP(k)\frac{\beta_{F}}{\beta+\beta_{F}-\beta \gamma}e^{-\beta \gamma k\phi_\infty}(1-e^{-k(\beta+\beta_{F}-\beta \gamma)\phi_\infty})=g(\phi_\infty). \end{eqnarray} \end{widetext} The value $\phi_{\infty}=0$ is always a solution. 
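The self-consistent equation~(\ref{S4}) can be checked numerically by fixed-point iteration. The sketch below specializes $g$ to a $k$-regular network, where $P(k')=\delta_{k',k}$ and the degree sums collapse to a single term; all parameter values are our own illustrative choices.

```python
from math import exp

def g(phi, k, beta, beta_f, gamma):
    """Right-hand side g(phi) of the self-consistent equation,
    specialized to a k-regular network (single degree class)."""
    a = beta + beta_f            # beta + beta_F
    b = a - beta * gamma         # beta + beta_F - beta*gamma
    return (1.0 - exp(-k * a * phi)
            - (beta_f / b) * exp(-beta * gamma * k * phi)
            * (1.0 - exp(-k * b * phi)))

def solve_phi(k, beta, beta_f, gamma, iters=500):
    """Iterate phi <- g(phi) from phi = 0.5; the limit is nonzero
    precisely when beta exceeds the threshold <k>/<k^2> = 1/k here."""
    phi = 0.5
    for _ in range(iters):
        phi = g(phi, k, beta, beta_f, gamma)
    return phi
```

For a $6$-regular network the predicted threshold is $\beta_c=1/6$, and the iteration indeed returns a positive $\phi_\infty$ only above it, independently of $\beta_F$ and $\gamma$, in agreement with the discussion of the mean field result.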
In order to have a non-zero solution, the condition \begin{eqnarray} \label{S5} \nonumber\frac{dg(\phi_\infty)}{d\phi_\infty}|_{\phi_\infty=0}&=&\frac{(\beta+\beta_{F})\langle k^2\rangle}{\langle k\rangle}-\frac{\beta_{F}\langle k^2\rangle}{\langle k\rangle}\\ &=&\frac{\beta\langle k^2\rangle}{\langle k\rangle}\geq 1 \end{eqnarray} should be satisfied, which means that the epidemic threshold is $\beta_c=\frac{\langle k\rangle}{\langle k^2\rangle}$. \section*{Acknowledgments} This work is funded by the National Natural Science Foundation of China (Grant Nos. 61473001, 91324002, 11331009).
\section{Introduction} In recent years there has been an impressive increase in the analysis of models with interactions and of the associated optimal control problems. The motivations and fields of interest are various and range from statistical mechanics to biology, from crowd dynamics to the description of economic and financial phenomena, and many others. Many different mathematical models and techniques have been proposed in the literature and it seems impossible to give an exhaustive account of all the developments here. We refer to \cite{carrillo2014derivation, fornasier2014mean, carmona2018probabilistic, fornasier2018mean} and the references therein for some of the recent results. A large part of the literature concentrates on the evolution of populations of similar individuals, where the single agent feels the interaction with the others through an averaged term. In this case, when the number of individuals is very large, an aggregation effect takes place and the (discrete) collection of agents is usually replaced by its spatial density. This idea comes from the so-called \emph{mean field} approach in statistical physics, where it has been fruitfully used to develop a limit theory as the number of particles goes to infinity. Within this framework, optimal control problems, both at the microscopic and macroscopic levels, are naturally considered, see e.g. \cite{fornasier2014mean, piccoli2015control, carrillo2020mean, jimenez2020optimal, cavagnari2020generalized, cristiani2014}. A first motivation for the introduction of (centralized) controls is the incompleteness of the concept of self-organization. In fact, for a population of interacting particles/agents, global coordination or pattern formation is not a priori guaranteed, and the intervention of a central planner on the dynamics could promote these mechanisms: this leads to the definition of \emph{multi-agent} control problems. 
A further motivation is the analysis of interacting rational agents with similar optimization goals. In this case, the mean field approach consists in approximating a large number of agents with a single representative individual, whose aim is to solve a control problem constrained to a field equation (encoding the averaged behaviour of the population). The mean field term influences both the dynamics and the cost functional. Whether the representative agent can or cannot influence the mean field term depends on the model under consideration. The interested reader is referred to the books \cite{carmona2018probabilistic, bensoussan2013mean} for a detailed description of various aspects of mean field models. \medskip In the present paper we study different formulations of \emph{multi-agent optimal control problems}, which we denote respectively by \emph{Lagrangian}, \emph{Eulerian} and \emph{Kantorovich}, as well as the corresponding \emph{limit theory}. \smallskip As already mentioned above, \emph{multi-agent optimal control}, also known in the literature as centralized optimal control of Vlasov dynamics, represents non-standard optimal control problems where each individual is influenced by the averaged behaviour of all the others and the central planner aims at minimizing a cost functional which depends on the distribution of all the agents. A large effort has been devoted in recent years to extending results of classical optimal control theory to the mean field setting, with particular attention to the measure-formulation of the problems in Wasserstein spaces. 
In this direction, let us mention \cite{bonnet2019pontryagin, pogodaev2016} for necessary conditions for optimality in the form of a Pontryagin maximum principle, \cite{cavagnari2020generalized} for a generalized version of dynamic programming, \cite{bonnet2020mean} for the study of differential inclusions and the contribution \cite{jimenez2020optimal} for the analysis of {necessary and sufficient conditions for optimality in the form of a Hamilton-Jacobi-Bellman} equation in the Wasserstein space. \smallskip \emph{Lagrangian}, \emph{Eulerian} and \emph{Kantorovich} refer to different points of view that can be adopted to study the dynamics of the problems. The Lagrangian and Eulerian terminologies come from fluid-dynamics and they have been recently adopted in the theory of optimal transport, from which we also took inspiration for the Kantorovich formulation. In general terms, the \emph{Lagrangian} approach consists in labelling each particle and following the corresponding trajectory. The \emph{Eulerian} description, on the other hand, aims at measuring the velocity of particles flowing at a point at a fixed time. In this paper we introduce a further point of view, that we name \emph{Kantorovich} in analogy with the Kantorovich extension of Monge problem (in the same spirit, see also \cite{ambrosio2009geodesics, ambrosio2008gradient, benamou2017variational}). The Kantorovich formulation turns out to be fundamental in connecting the Lagrangian and Eulerian points of view and it is based on the representation of solutions of the continuity equation provided by the superposition principle (see e.g. \cite[Theorem 8.2.1]{ambrosio2008gradient} or \cite[Theorem 5.8]{bernard2008young}). \smallskip \emph{Limit theory} refers instead to the question of connecting optimal control problems with a finite number of agents with the infinite dimensional description given by a continuum of players. 
The study of interacting particle systems becomes intractable when the population is very large and, in many circumstances, the connection with a limit (mean field) approximation heavily simplifies the study. For this reason, many results concerning asymptotic behaviour when the size of the system grows have been developed in the literature, from both the theoretical and the applied point of view. Even if mean field approximations are mainly proved in the uncontrolled stochastic setting, some applications to deterministic models can be found e.g. in \cite{naldi2010mathematical} and \cite{difrancesco2015rigorous}. Concerning the controlled case, an important effort has been directed to the case of mean field games (see below for some references) but, to the best of our knowledge, only a few rigorous results for deterministic multi-agent systems can be found in the literature. Let us mention \cite{fornasier2014mean}, \cite{fornasier2018mean} for Gamma-convergence techniques under different assumptions on the controls and velocity fields, and \cite{cesaroni20201} for an extension when distance constraints among the agents are imposed. \smallskip The analysis we develop here aims at providing a unifying framework for the study of deterministic multi-agent optimal control problems and surely benefits from the connections with the theory of optimal transport. This is more evident when dealing with a measure-formulation of the problem, where a non-local continuity equation is guiding the dynamics, or with the Kantorovich description, where a superposed measure in the space of continuous paths selects the trajectories of the system. Another source of inspiration is the theory of Young measures, classically used in control theory, which plays a crucial role in the description of relaxed problems. 
Apart from its theoretical interest, we trust the present investigation could serve as a founding step towards a general treatment of optimality conditions in the rapidly-growing context of Wasserstein spaces. Within this context a major role is played by the Hamilton-Jacobi-Bellman (HJB) equations in the space of probability measures, for which different notions of solution have already been proposed in the literature. We think that the analysis of the HJB equation could benefit from the equivalence results and the limit theory developed here, and we leave it to future investigation. Let us just briefly mention some contributions in this direction: a general analysis in the metric setting can be found in \cite{ambrosio2008hamiltonan, gangbo2015metric}, see also \cite{gangbo2019differentiability} for the particular choice of the Wasserstein space. Viscosity solutions for HJB are studied e.g. in \cite{cardaliaguet2008deterministic, cosso2019} for the case of random differential games and in \cite{jimenez2020optimal} for multi-agent systems. \smallskip Let us finally report on the stochastic counterpart of the theory (which we do not treat herein) and make some further comments on the connections with mean field games. \smallskip \textbf{Stochastic counterpart.} In the stochastic setting, the mean field behaviour of interacting particle systems is classically referred to as \emph{propagation of chaos}. The literature on the subject is far too vast to be discussed here and the interested reader is referred to Sznitman's Saint-Flour lectures \cite{sznitman1991topics} for a beautiful treatment of the subject and to the references in e.g. \cite{djete2020mckean, orrieri2018large} for some of the more recent developments. Concerning stochastic control problems, a rigorous consistency result for controlled McKean-Vlasov dynamics has been obtained by Lacker in \cite{lacker2017limit} using martingale problems and relaxation. 
It is interesting to notice that the result contained in \cite{lacker2017limit} also allows for degenerate diffusion. Further extensions have been pursued in \cite{djete2020mckean}, where a common noise is also introduced, and in \cite{djete2020extended}, where the state dynamics depends upon the joint distribution of state and control. For what concerns equivalence results, to the best of our knowledge, the most general study in the stochastic setting is formulated in \cite{djete2020mckean}. There, the authors prove existence of optimal controls and show the equivalence, at the level of value functions, of the so-called strong, weak (in a probabilistic sense) and relaxed (Lagrangian) formulations of the stochastic control problems. Notably, the results contained in \cite{djete2020mckean} extend the ones in \cite{lacker2017limit} to the far more general case of common noise. Observe also that the equivalence between the weak and strong formulations has a fundamental role in establishing a dynamic programming principle in \cite{djete2019mckean}. To conclude, let us just remark on a possible application of deterministic consistency results to the study of the limit behaviour of uncontrolled stochastic particle systems. The idea stems from the link between Gamma-convergence and large deviations developed by Mariani in \cite{mariani2018gamma}. If a consistency result for deterministic control problems is proved in terms of Gamma-convergence, then it is possible to translate the $\Gamma-\liminf$ and $\Gamma-\limsup$ inequalities into corresponding lower and upper bound estimates for suitable associated stochastic systems. An example of this technique is contained in \cite{orrieri2018large} (see also \cite{budhiraja2020asymptotic} for a recent extension) where the limit theory developed in \cite{fornasier2018mean} has been used to prove a large deviations principle for stochastic equations in the mean-field and small-noise regime. 
\smallskip \textbf{Connections with MFGs.} The theory of Mean Field Games (MFGs), separately introduced by Lasry and Lions \cite{lasry2007mean} and Huang, Caines and Malham\'e \cite{huang2006large}, aims at describing non-cooperative indistinguishable players interacting through their empirical distribution. As opposed to Vlasov control problems, the optimization problem of each agent in mean field games leads to the notion of Nash equilibrium for the system. Differences and similarities between centralized optimal control of (McKean)-Vlasov dynamics and equilibria in MFGs are discussed in the literature, see e.g. \cite{carmona2013control}, \cite{bensoussan2013mean} and \cite{cardaliaguet2019inefficiency}. For what concerns the limit of $N$-player differential games towards the MFG system, a fundamental result was obtained in \cite{cardaliaguet2019master} via the so-called Master equation. Among the various extensions of \cite{cardaliaguet2019master}, let us mention the convergence of open and closed-loop Nash equilibria to MFG equilibria obtained respectively in \cite{fischer2017connection} and \cite{lacker2020convergence}. The case of first-order MFGs has been taken into account in \cite{fischer2020asymptotic}. \bigskip We now briefly describe the various formulations of the optimal control problems we deal with and we present the main results contained in the paper. To improve the readability of the introduction, we just sketch the essential features of the problems, omitting the details and heavily simplifying the setting and theorems whenever possible. Precise statements are given in the forthcoming sections. Throughout the paper we use interchangeably the terms particles/agents as they differ only in view of the different applications. 
\smallskip \textbf{Lagrangian formulation {($\pL$)}.} The Lagrangian formulation has a probabilistic flavour and it is built upon a probability space $(\Omega, \mathfrak B, \P)$ which acts as a parametrization space for the particles. More precisely, given a finite time horizon $T >0$ and a (compact metric) space of control actions $U$, the controlled dynamics $X:[0,T] \times \Omega \to \mathbb{R}^d$ is given by \begin{equation}\label{eq:systemL_intro} \begin{cases} \dot X_t(\omega)=f(X_t(\omega),u_t(\omega),(X_t)_\sharp\P), &\textrm{for a.e. }t\in (0,T)\\ X_{|t=0}(\omega)=X_0(\omega),& \end{cases} \end{equation} where the dependence of the vector field $f$ on the measure $(X_t)_\sharp \P$ models the interaction among particles {and/or the interaction of the mass with the surrounding environment} and it is usually referred to as a \emph{mean field} interaction. A natural motivation for the introduction of a parametrization space comes from a large variety of problems where a finite number of particles/agents are involved. In this case $\Omega$ can be simply interpreted as a set of labels $\Omega^N = \{1, \ldots, N\}$, with $\mathfrak B^N$ the associated algebra of parts and $\P^N(\{\omega\}) = \frac{1}{N}$, $\omega = 1, \ldots, N$, the normalized counting measure. Each particle is indistinguishable from the others and the interaction enters the system through the empirical measure $\mu^N_t=\frac{1}{N}\sum_{\omega=1}^N \delta_{X_t(\omega)}$, with $t \in [0,T]$. \smallskip Given $X_0\in L^p(\Omega;\mathbb{R}^d)$, for $p\geq 1$, an admissible pair $(X,u) \in {\ensuremath{\mathcal A}}_{\pL}(X_0)$ for the Lagrangian optimal control problem ($\pL$) consists in a measurable control $u:[0,T] \times \Omega \to U$ and a solution (in a suitable sense) of \eqref{eq:systemL_intro}. 
Associated to the dynamics, the \emph{cost functional} to be minimized has the form \[J_{\pL}(X,u):= \int_\Omega \int_0^T\mathcal C(X_t(\omega),u_t(\omega),(X_t)_\sharp \P)\,\d t \, \d\P(\omega) +\int_\Omega \mathcal C_T(X_T(\omega),(X_T)_\sharp \P)\,\d\P(\omega),\] where the running cost $\mathcal C$ and the final cost $\mathcal C_T$ are non-local as they could depend on the measures $(X_\cdot)_\sharp \P$ and $(X_T)_\sharp \P$. The optimization of the cost functional among admissible pairs leads to the definition of the so-called \emph{value function} which, for the Lagrangian problem, can be written as \begin{equation}\label{value:funct:L_intro} V_{\pL}(X_0):=\inf\left\{J_{\pL}(X,u)\,:\,(X,u)\in\mathcal A_{\pL}(X_0)\right\}. \end{equation} Let us notice that existence of optimal pairs $(X,u)$ (for which the minimum is achieved in \eqref{value:funct:L_intro}) is not guaranteed in general. A counterexample for the Lagrangian problem, even in the relaxed formulation, is given by the Wasserstein barycenter problem with a suitable initial distribution (see Section \ref{sec:counterexample} for details). From a probabilistic point of view, the Lagrangian formulation can be thought of as a \emph{random} optimal control problem in strong formulation, where the randomness is encoded in the initial distribution of the dynamics. In this context, we do not consider stochastic perturbations of the dynamics given e.g. by independent Brownian motions and/or common noise. \smallskip \textbf{Relaxed Lagrangian formulation {($\RL$)}.} A classical generalization of optimal control problems is the so-called \emph{relaxed} version, where controls are allowed to take values in the space of probability measures $\sigma: [0,T] \times \Omega \to \mathscr{P}(U)$. This greatly enlarges the class of admissible pairs (classical controls can be recovered choosing $\sigma:= \delta_u$, with $u \in U$) and provides a convexification of the problem under consideration. 
Indeed, the controlled trajectories satisfy the linear (in the control action) dynamics \begin{equation*} \begin{cases} \dot X_t(\omega)=\displaystyle\int_U f(X_t(\omega),u,(X_t)_\sharp\P)\,\d\sigma_{t,\omega}(u), &\textrm{for a.e. }t\in(0,T]\\ X_{|t=0}(\omega)=X_0(\omega), & \end{cases} \end{equation*} with control $\sigma_{t,\omega}:=\sigma(t,\omega)\in\mathscr P(U)$. Furthermore, the cost functional takes the form \begin{equation*} J_{\RL}(X,\sigma):=\int_{\Omega}\int_0^T\int_U \mathcal C(X_t(\omega),u,(X_t)_\sharp\P)\,\d\sigma_{t,\omega}(u)\, \d t \,\d\P(\omega) \ +\int_\Omega \mathcal C_T(X_T(\omega),(X_T)_\sharp\P)\,\d\P(\omega), \end{equation*} with associated \emph{value function} $V_{\RL}:L^p(\Omega;\mathbb{R}^d)\to[0,+\infty)$ given by \begin{equation*} V_{\RL}(X_0):=\inf\left\{J_{\RL}(X,\sigma)\,:\,(X,\sigma)\in\mathcal A_{\RL}(X_0)\right\}. \end{equation*} Relaxation is a fundamental concept in optimal control theory and has its roots in the theory of Young measures. In Section \ref{sec_relaxed} of the paper, we provide a detailed analysis of the relaxation procedure in the context of multi-agent systems, emphasizing its connections with the Lagrangian problem. Of particular interest is the extension of a suitable version of the chattering theorem, which makes it possible to approximate the Relaxed Lagrangian formulation with a sequence of (not relaxed) Lagrangian ones. This readily implies the equality of the respective value functions: $V_{\RL}(X_0) = V_\pL(X_0)$ for any $X_0 \in L^p(\Omega;\mathbb{R}^d)$. Let us finally notice that the Relaxed Lagrangian formulation is the prototype for the class of control problems satisfying suitable \emph{Convexity Assumptions} (see Assumption \ref{CA} below). In this particular case, the control space is the convex space of measures $\mathscr{P}(U)$, the dynamics is affine in the controls and the cost functional is convex (actually linear). 
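For intuition, the Lagrangian dynamics \eqref{eq:systemL_intro} over the finite parametrization space $\Omega^N=\{1,\dots,N\}$ can be integrated by an explicit Euler scheme. Everything in the sketch below is an illustrative assumption of ours, not a construction from the paper: the one-dimensional vector field $f(x,u,\mu)=u+(\mathrm{bar}(\mu)-x)$, with $\mathrm{bar}(\mu)$ the mean of the empirical measure, the (non-relaxed) feedback control $u=-x/2$, and the quadratic running and terminal costs.

```python
def bar(xs):
    """Mean of the empirical measure mu^N = (1/N) sum_i delta_{x_i}."""
    return sum(xs) / len(xs)

def simulate(x0, T=1.0, steps=100):
    """Euler scheme for dX_i/dt = u(X_i) + (bar(mu^N_t) - X_i) with the
    feedback control u(x) = -x/2; returns the final state together with
    the discretized cost J = int mean(u^2) dt + mean(X_T^2).
    All modelling choices are illustrative."""
    dt = T / steps
    xs = list(x0)
    cost = 0.0
    for _ in range(steps):
        m = bar(xs)
        us = [-0.5 * x for x in xs]
        cost += dt * bar([u * u for u in us])   # running cost C = u^2
        xs = [x + dt * (u + (m - x)) for x, u in zip(xs, us)]
    cost += bar([x * x for x in xs])            # terminal cost C_T = x^2
    return xs, cost
```

The attraction term makes the particles contract towards their (decaying) barycenter, a typical mean field effect; replacing the feedback value $u(x)$ by an average against a measure-valued control $\sigma_{t,\omega}$ would give the corresponding relaxed dynamics.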
However, as already observed before, this relaxation procedure is not sufficient to guarantee existence of minimizers for a general optimal control problem (see Section \ref{sec:counterexample}). A further step in this direction is the introduction of the Eulerian formulation of the problem. \smallskip \textbf{Eulerian formulation {($\E$)}.} To simplify the presentation, here we suppose that we deal directly with relaxed controls, which are represented by a Borel measurable map $\sigma: [0,T] \times \mathbb{R}^d \to \mathscr{P}(U)$: this is fundamental in order to obtain existence of minimizers. Then, the evolution of the system is guided by the following non-local Vlasov equation \begin{equation*} \begin{cases} \partial_t\mu_t+\mathrm{div}\,(v_t\mu_t)=0, &\textrm{in }[0,T]\times\mathbb{R}^d\\ \mu_{t=0}=\mu_0,& \end{cases} \end{equation*} where the controlled vector field $v$ depends on the evolving state itself and it is given by $v_t(x):=\int_{U}f(x, u,\mu_t)\d \sigma_{t,x}(u)$. Within this framework, the cost functional takes the form \begin{equation*} J_{\E}(\mu,\sigma):=\int_0^T\int_{\mathbb{R}^d}\int_U{\ensuremath{\mathcal C}}(x, u,\mu_t)\, \d \sigma_{t,x}(u) \,\d\mu_t(x)\,\d t+\int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}_T(x,\mu_T)\,\d\mu_T(x), \end{equation*} and the \emph{value function} $V_{\E}:\mathscr{P}_p(\mathbb{R}^d)\to[0,+\infty)$ is given by \begin{equation*} V_{\E}(\mu_0):=\inf\{J_{\E}(\mu,\sigma)\,:\,(\mu,\sigma)\in\mathcal A_{\E}(\mu_0)\}. \end{equation*} In the Eulerian description of the optimal control problem, the system can be described by a curve of probability measures. This point of view is intimately connected with the theory of optimal transport, from which ideas and techniques are borrowed. \smallskip \textbf{Kantorovich formulation {($\K$)}.} A somewhat intermediate formulation is given by the \emph{Kantorovich} optimal control problem (in analogy to the Kantorovich formulation of the optimal transport problem). 
This formulation has its roots in the representation of solutions of the continuity equation by superposition of continuous curves belonging to $\Gamma_T:= C([0,T];\mathbb{R}^d)$. An admissible pair for the Kantorovich problem is given by $(\boldsymbol\eta,\sigma)$ where $\boldsymbol\eta \in \mathscr{P}(\Gamma_T)$ is a probability measure on the space of continuous curves, and $\sigma: [0,T] \times \Gamma_T \to \mathscr{P}(U)$ is a relaxed control. Furthermore, given $\mu_0\in\mathscr{P}(\mathbb{R}^d)$ with finite $p$-moment, an admissible measure $\boldsymbol\eta$ has to match the initial condition in the form $(e_0)_\sharp\boldsymbol\eta = \mu_0$. Even more importantly, defining $\mu_t:=(e_t)_\sharp\boldsymbol\eta$ for all $t\in[0,T]$, $\boldsymbol\eta$ has to be concentrated on the set of absolutely continuous solutions of the differential equation \[\dot\gamma(t)=\int_U f(\gamma(t),u,\mu_t)\d \sigma_{t, \gamma}(u) ,\qquad \text{for ${\ensuremath{\mathcal L}}_T$-a.e. }t\in[0,T].\] This clearly links the Kantorovich formulation with the Eulerian one via the superposition principle (see Theorem \ref{thm:sup_princ}). The \emph{cost functional} associated to the Kantorovich formulation is written in the form \begin{equation*} J_{\K}(\boldsymbol\eta,\sigma):=\int_{\Gamma_T}\int_0^T\int_U{\ensuremath{\mathcal C}}(\gamma(t),u,\mu_t)\, \d \sigma_{t, \gamma}(u)\,\d t \, \d\boldsymbol\eta(\gamma) +\int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}_T(x,\mu_T)\,\d\mu_T(x), \end{equation*} where $\Gamma_T$ acts as a parametrization space (as in the Lagrangian framework), but the minimization involves measures $\boldsymbol\eta \in \mathscr{P}(\Gamma_T)$ instead of trajectories, in line with the Kantorovich formulation of the optimal transportation problem: $\mu_t = (e_t)_\sharp \boldsymbol\eta$ can be considered as a time dependent family of marginals of $\boldsymbol\eta$. 
The associated \emph{value function} is given by \begin{equation*} V_{\K}(\mu_0):=\inf\{J_{\K}(\boldsymbol\eta,\sigma)\,:\,(\boldsymbol\eta,\sigma)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)\}. \end{equation*} Let us stress that, also in this setting, the choice of relaxed controls is sufficient to prove the existence of minimizers. \smallskip \textbf{Equivalence results.} A natural question is whether the problems introduced above are somewhat related. One of the aims of the present paper is to prove equivalences among the various formulations introduced above. At the level of the value functions, we can summarize the main result in the following theorem (see Theorem \ref{cor:VL=VE'} for a precise statement) \begin{theorem*}[equivalence] Let $(\Omega, {\ensuremath{\mathcal B}}, \P)$ be a Polish space such that $\P$ is without atoms. If $X_0\in L^p(\Omega;\mathbb{R}^d)$, then $$V_{\pL}(X_0)=V_{\RL}(X_0)= V_{\E}((X_0)_\sharp\P)=V_{\K}((X_0)_\sharp\P).$$ \end{theorem*} The first step of the proof consists in the approximation of the Lagrangian problem by piecewise constant controls (see Theorem \ref{prop:RLntoRL}). This is possible whenever the probability space $(\Omega, \mathfrak B, \P)$ under consideration satisfies a suitable \emph{finite approximation property} (Definition \ref{approx_prop}), which surely holds in the Polish framework. Once the piecewise approximation is established, we are able to formulate a suitable version of the \emph{chattering theorem} (see Theorem \ref{thm:chat}) where trajectories, controls and cost functional of the Relaxed Lagrangian formulation are approximated by the corresponding objects in the Lagrangian setting. The comparison between the Lagrangian and Eulerian formulations (see { Theorems \ref{cor:VL=VE}, \ref{cor:VL=VE'} and Section \ref{sec:LEK}}) is more delicate and it is achieved by exploiting the Kantorovich description of the control problem on the space of curves $\Gamma_T$. 
The idea is to separately connect the Eulerian and Kantorovich descriptions (Theorem \ref{cor:E=K}) and then the Kantorovich and Lagrangian ones (Theorem \ref{prop:K>L}). Starting from an admissible pair for the Eulerian problem, the application of the superposition principle given in Theorem \ref{thm:sup_princ} easily provides a candidate admissible pair for the Kantorovich problem paying the same cost (see the proof of Proposition \ref{prop:E>K} for details). Conversely, if a pair $(\boldsymbol\eta,\sigma)$ for the Kantorovich problem is given, the Eulerian control action can be obtained by averaging with respect to a suitable disintegration of the measure $\boldsymbol\eta$ on curves, as it is shown in the proof of Proposition \ref{prop:K>E}. For what concerns the Kantorovich/Lagrangian comparison, in Lemma \ref{l:fromKtoL} we firstly interpret the Kantorovich problem as a Lagrangian one with parametrization space given by $\Gamma_T$, i.e. the space of curves, and with trajectories given by the evaluation map. Due to the continuity of the initial datum (i.e. the evaluation map $e_0$), we then approximate controls with continuous ones (see Proposition \ref{lemmaB} for a general result in this direction) and finally we approximate the {obtained Lagrangian problem in $\Gamma_T$} with Lagrangian {ones which are set in} a generic parametrization space $\Omega$, not necessarily the space $\Gamma_T$. A precise description of this technique is contained in the proof of Theorem \ref{prop:K>L}. An immediate consequence of the equivalence theorem is the equality of the value functions for different initial data, whenever the respective laws coincide (see Theorem \ref{cor:VL=VE}). 
In fact, if $\mu_0 := (X_0)_\sharp\P=(X_0')_\sharp\P$, then it holds that \[V_{\pL}(X_0)=V_{\pL}(X_0') \quad \big( \, =V_{\E}(\mu_0) \big).\] A further consequence is the continuity of the value functions with respect to the initial data, see Theorems \ref{th:contVE} and \ref{th:contVL} for precise statements. \smallskip Equivalence results between Lagrangian, Eulerian and Kantorovich formulations represent a first step towards a general analysis of optimality conditions for multi-agent control systems. A second step in this direction is the study of the corresponding limit theory. \smallskip \textbf{Approximation by finite particle systems.} We aim to provide a rigorous limit theory for multi-agent optimal control problems both for the Lagrangian and Eulerian formulations. To do so, we have to define appropriate discrete versions of the two formulations. The $N$-particle Lagrangian control problem $\pL^N$ simply relies on the choice of $\Omega^N= \{1, \ldots, N \}$ as parametrization space. On the other hand, a genuine discrete Eulerian problem $\E^N$ requires the introduction of a constraint on the number of particles (see Definition \ref{def:E^N}), precisely, an admissible trajectory $\mu$ satisfies $\mu_t\in\mathscr{P}^N(\mathbb{R}^d)$, where $$\mathscr{P}^N(\mathbb{R}^d):= \left\{\mu = \frac{1}{N} \sum_{i =1}^N \delta_{x_i} \; \text{ for some } x_i \in \mathbb{R}^d \right\}.$$ The discrete Eulerian and Lagrangian control problems $\pL^N$ and $\E^N$ turn out to be equivalent (see Theorem \ref{cor:LN=EN}) in the sense that \begin{equation}\label{equiv_E_L_N} V_{\pL^N}(X_0)= V_{\E^N}((X_0)_\sharp\P^N), \quad \text{ for any } X_0\in L^p(\Omega^N;\mathbb{R}^d). \end{equation} Note that this is not a direct consequence of the general equivalence result given above, where the reference probability measure $\P$ was required to be without atoms. 
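The initial-data convergence $W_p(\mu_0^N,\mu_0)\to 0$ with $\mu_0^N\in\mathscr{P}^N(\mathbb{R}^d)$, which appears in the limit theory, is easy to visualize in dimension one, where $W_1$ between two measures is the $L^1$ distance between their quantile functions. The sketch below is entirely our own illustration: it discretizes the uniform measure on $[0,1]$ by $N$ quantile atoms and checks that the distance to a fine reference discretization decays like $1/N$.

```python
def w1(xs, ys):
    """W_1 between the uniform empirical measures on xs and ys (1D),
    computed exactly as int_0^1 |Qx(s) - Qy(s)| ds, where Qx, Qy are the
    piecewise-constant quantile functions of the two empirical measures."""
    xs, ys = sorted(xs), sorted(ys)
    n, m = len(xs), len(ys)
    # both quantile functions are constant between consecutive cut points
    cuts = sorted({i / n for i in range(n + 1)} | {j / m for j in range(m + 1)})
    total = 0.0
    for a, b in zip(cuts, cuts[1:]):
        s = 0.5 * (a + b)
        total += (b - a) * abs(xs[min(int(s * n), n - 1)]
                               - ys[min(int(s * m), m - 1)])
    return total

def quantile_atoms(n):
    """An element of P^N(R): the n quantile midpoints of Uniform([0, 1])."""
    return [(i + 0.5) / n for i in range(n)]
```

Against a fine reference discretization one finds $W_1(\mu_0^N,\mu_0)\approx 1/(4N)$ for this choice of atoms, consistent with the convergence assumption on the initial data.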
To prove the equality in \eqref{equiv_E_L_N} we derive a discrete formulation of the superposition principle for empirical probability measures (see Theorem \ref{lem:EN}) that we believe might be of interest in itself. Once the equivalence at the level of $N$-particle systems is established, we derive Gamma-convergence results respectively for the Lagrangian and Eulerian problems as the number of particles diverges (see Propositions \ref{prop:gamma_conv} and \ref{prop:ENtoE}). A major consequence is contained in the following theorem (see Theorem \ref{thm:convVE} for a detailed description). \begin{theorem*}\label{thm:convVE_intro} Let $(\Omega,{\ensuremath{\mathcal B}},\P)$ be a Polish space such that $\P$ is without atoms. Assume that the Convexity assumption \ref{CA} holds. \begin{itemize} \item If $X_0\in L^p(\Omega;\mathbb{R}^d)$ and $X^N_0:\Omega^N\to\mathbb{R}^d$, $N\in\mathbb{N}$, satisfy $X^N_0 \to X_0$ as $N\to+\infty$ (see \eqref{CNOmega}), then \[\lim_{N \to +\infty} V_{\pL^N} (X_0^N) = V_{\pL}(X_0).\] \item If $\mu_0\in\mathscr{P}_p(\mathbb{R}^d)$ and $\mu_0^N\in\mathscr{P}^N(\mathbb{R}^d)$, $N\in\mathbb{N}$, satisfy $W_p(\mu_0^N,\mu_0)\to 0$ as $N\to+\infty$, then \[\lim_{N \to +\infty} V_{\E^N} (\mu_0^N) = V_{\E}(\mu_0).\] \item Moreover, if $(X_0^N)_\sharp \P^N = \mu_0^N $ it holds that \[\lim_{N \to +\infty} V_{\pL^N} (X_0^N) = V_{\E}(\mu_0).\] \end{itemize} \end{theorem*} Notice that the usual \textit{mixed} Lagrangian-Eulerian consistency in the third item is a simple byproduct of the equivalence in \eqref{equiv_E_L_N}. \medskip \textbf{Structure of the paper.} The paper is organized as follows. In Section \ref{sec_prel} we fix the notation and present some preliminary material. We start by revising some properties of Borel probability measures and we recall a refined version of the Skorohod representation theorem. We further provide some material on optimal transport, Wasserstein spaces and we present the classical superposition principle. 
Finally, we discuss the disintegration theorem and give some properties of Young measures. Section \ref{sec:assumptions} contains our standing hypotheses, divided into two sets: the \emph{Basic} Assumptions and the \emph{Convexity} Assumptions. The first ones require compactness of the metrizable space of controls and Lipschitz continuity of the velocity field with respect to the state and the mass distribution. The cost functional has to be continuous and to satisfy a polynomial growth condition. The convexity assumptions impose convexity of the control set and of the cost functional (with respect to controls). Furthermore, the dynamics has to be affine with respect to the control actions. The relaxed setting is the guiding example of the convex case. In Section \ref{sec_lagrangian} we present and study the Lagrangian optimal control problem. In particular, we exhibit two different approximation procedures: the first one by piecewise constant controls and the second one by continuous controls and trajectories. The Relaxed Lagrangian problem is defined in Section \ref{sec_relaxed}, where its representation as a Lagrangian problem in the lifted space of measures is also discussed. We prove the equivalence between Lagrangian and Relaxed Lagrangian formulations of the control problem by approximating relaxed controls with (non-relaxed) ones in the sense of Young convergence. This procedure, known as chattering theorem, exploits the approximation by piecewise constant controls developed in the Lagrangian setting. Section \ref{sec_eulerian} contains the definition and properties of the Eulerian control problem. Under the convexity Assumptions we are able to prove existence of minimizers for the control problem via a direct method. In Section \ref{sec:K} we introduce the Kantorovich problem and we prove its equivalence with the Eulerian one under the convexity assumptions. The main equivalence results are contained in Section \ref{sec:E=L}. 
Exploiting the Kantorovich formulation, we first show the equality between the value functions of the Lagrangian and Eulerian problems under the Convexity Assumptions. The general case is then obtained by interpreting the Relaxed Lagrangian as a (convex) Lagrangian problem in the space of probability measures. As a consequence, we also get the continuity of the value functions, with respect to the initial condition, for the various formulations. In Subsection \ref{sec:counterexample} we discuss the possible non-existence of minimizers for the Lagrangian and Relaxed Lagrangian problems. Existence is not guaranteed, even under the Convexity Assumptions, when the initial condition is prescribed. We produce a counterexample to the existence of minimizers based on the Wasserstein barycenter problem. Section \ref{sec:finite} contains all the material regarding finite particle problems and the respective limit theory. We define a discrete version of the Eulerian control problem by imposing a constraint on the number of particles. To connect the Eulerian and Lagrangian formulations in the $N$-particle case we introduce a \emph{Feedback} Lagrangian control problem, where control actions are indeed in feedback form, and we make use of the already mentioned discrete superposition principle. A Gamma-convergence result for both the Lagrangian and Eulerian formulations is then established as the number of particles tends to infinity. We finally conclude by proving the convergence of the associated value functions. \smallskip The appendix contains various technical tools. In particular, in Appendix \ref{app:SP} we state and prove the superposition principle for the evolution of empirical measures.
\section{Preliminaries and notations} \label{sec_prel} {We list here the main notation.} {\small\begin{longtable}{ll} $\# A$&the cardinality of a set $A$;\\ $i_X(\cdot)$&the identity function on a set $X$, $i_X:X\to X$ defined by $i_X(x)=x$;\\ $\mathds 1_A(\cdot)$&the characteristic function of $A\subset X$,\\ &$\mathds 1_A:X\to \mathbb{R}$ defined by $\mathds 1_A(x)=1$ if $x\in A$, $\mathds 1_A(x)=0$ if $x \in X\setminus A$;\\ $(S, \mathfrak B)$&measurable space $S$ with $\sigma$-algebra $\mathfrak B$;\\ $(S, {\ensuremath{\mathcal B}}_{S})$&topological space $S$ with Borel $\sigma$-algebra ${\ensuremath{\mathcal B}}_S$;\\ $\mathrm{M}(X;Y)$&the set of measurable functions from the measurable space $X$ \\ &to the measurable space $Y$;\\ $\mathrm{B}(X;Y)$&the set of Borel measurable functions from the topological space $X$ \\ &to the topological space $Y$;\\ $C(X;Y)$&the set of continuous functions from the topological space $X$ \\ &to the topological space $Y$;\\ $C_c(X;Y)$&the set of continuous compactly supported functions from the topological space $X$\\ & to the topological space $Y$;\\ $C_b(X;Y)$&the set of continuous bounded functions from the topological space $X$\\ & to the metric space $Y$;\\ $\mathrm{AC}^p([0,T];X)$&the set of absolutely continuous functions from $[0,T]$ to the metric space $X$\\ &with metric derivative in $L^p([0,T];\mathbb{R})$;\\ $\Gamma_T$&the set of continuous curves from $[0,T]$ to $\mathbb{R}^d$, i.e., $\Gamma_T=C([0,T];\mathbb{R}^d)$;\\ $e_t$&the evaluation map at time $t\in[0,T]$, $e_t:\Gamma_T\to\mathbb{R}^d$ defined by $e_t(\gamma)=\gamma(t)$;\\ $\mathscr P(X)$&the set of probability measures on the measurable space $X$;\\ {$\mathscr{P}^N(\mathbb{R}^d)$}&{the set of empirical probability measures on $\mathbb{R}^d$ defined in \eqref{eq:discreteProb};}\\ $\mathrm{m}_p(\mu)$&the $p$-th moment of a probability measure $\mu\in\mathscr{P}(\mathbb{R}^d)$, defined by\\ & 
${\mathrm{m}}_p(\mu)=\left(\int_{\mathbb{R}^d}|x|^p\,\d\mu(x)\right)^{1/p}$;\\ $r_\sharp\mu$&the push-forward of the measure $\mu\in\mathscr P(X)$ by the measurable map $r\in\mathrm{M}(X;Y)$;\\ $\mu\otimes\nu$&the product measure of $\mu \in\mathscr P(X)$ and $\nu \in \mathscr{P}(Y)$;\\ $\pi^i$&the $i$-th projection map $\pi^i:X_1\times\cdots\times X_N\to X_i$ defined by $\pi^i(x_1,\dots,x_N)=x_i$;\\ $\pi^{i,j}$&the $(i,j)$-th projection map $\pi^{i,j}:X_1\times\cdots\times X_N\to X_i\times X_j$\\ & defined by $\pi^{i,j}(x_1,\dots,x_N)=(x_i,x_j)$;\\ $W_p(\mu,\nu)$&the $p$-Wasserstein distance between $\mu$ and $\nu$ (see Definition \ref{def:was});\\ $\mathscr P_p(\mathbb{R}^d)$&the metric space of the elements in $\mathscr P(\mathbb{R}^d)$ with finite $p$-moment, \\ &endowed with the $p$-Wasserstein distance;\\ ${\ensuremath{\mathcal L}}_T$&the normalized Lebesgue measure restricted to the interval $[0,T]$,\\ &i.e. ${\ensuremath{\mathcal L}}_T:=\frac{1}{T}{\ensuremath{\mathcal L}} \mres [0,T]$.\\ \end{longtable}} \subsection{Borel probability measures} Let $(S,\mathfrak B)$ be a measurable space. When $S$ is a Polish topological space, we will implicitly assume that $\mathfrak B$ coincides with the Borel $\sigma$-algebra ${\ensuremath{\mathcal B}}_S$ of $S$. We say that $(S,\mathfrak B)$ is \emph{a standard Borel space} if it is isomorphic (as a measurable space) to a Borel subset of a complete and separable metric space; equivalently, one can find a Polish topology $\tau$ on $S$ such that $\mathfrak B={\ensuremath{\mathcal B}}_{(S,\tau)}$. If $(E,\tilde \mathfrak B)$ is another measurable space, we denote by $\mathrm{M}(S;E)$ the set of measurable functions from $S$ to $E$. If $S$ is a topological space we denote by $\mathrm{B}(S;E)$ the set of Borel measurable functions.
$\mathscr{P}(S)$ is the set of probability measures on $S$; when $S$ is a Polish space (and $\mathfrak B={\ensuremath{\mathcal B}}_S$) we will endow $\mathscr{P}(S)$ with the weak (Polish) topology induced by the duality with the continuous and bounded functions of $\mathrm C_b(S):= C_b(S;\mathbb{R})$. Given $\mu\in\mathscr{P}(S)$ and $r:S\to E$ a measurable map, we define the \emph{push forward of $\mu$ through} $r$, denoted by $r_\sharp\mu\in\mathscr{P}(E)$, by $r_\sharp\mu(B):=\mu(r^{-1}(B))$ for all measurable sets $B\in\tilde \mathfrak B$ {(the $\sigma$-algebra on $E$)}, or equivalently, \[\int_S f(r(x))\,\d\mu(x)=\int_E f(y)\,\d r_\sharp\mu(y),\] for every positive, or $r_\sharp\mu$-integrable, function $f:E\to\mathbb{R}$. \\ Given another measurable space $Z$, $\mu\in\mathscr{P}(S)$, and $r:S\to E$, $s:E\to Z$ measurable maps, the following composition rule holds \begin{equation}\label{eq:composition} (s\circ r)_\sharp\mu=s_\sharp(r_\sharp\mu). \end{equation} Moreover, if $r:S\to E$ is a continuous map (with respect to suitable Polish topologies in $S$ and $E$) then $r_\sharp:\mathscr{P}(S)\to\mathscr{P}(E)$ is continuous as well. The following proposition generalizes to some extent the classical Skorohod representation Theorem, see e.g. \cite[Theorem 6.7]{billingsley1999convergence}. For a (more general) result and the proof we refer to \cite[Theorems~3.1~and~3.2]{berti2007skorohod}. \begin{proposition}\label{prop:Skorohod} Let $(\Omega, \mathfrak B, \P)$ be a probability space such that $\P$ is without atoms and let $S$ be a Polish space. \begin{itemize} \item[(i)] If $\nu \in \mathscr{P}(S)$, then there exists a measurable map $X: \Omega \to S$ such that $X_{\sharp} \P = \nu$. \item[(ii)] If $\nu^n, \nu \in \mathscr{P}(S)$ with $\nu^n \to \nu$ weakly, then there exist measurable maps $X^n,X : \Omega \to S$, $n \in \mathbb{N}$, such that $X^n_\sharp \P = \nu^n$, $X_\sharp \P = \nu$ and $X^n(\omega) \to X(\omega)$ for $\P$-a.e. $\omega \in \Omega$. 
\end{itemize} \end{proposition} Notice that, when $(\Omega,\mathfrak B,\P)$ is a standard Borel space, $\tau$ a Polish topology on $\Omega$ such that $\mathfrak B = {\ensuremath{\mathcal B}}_{(\Omega,\tau)}$, then the maps $X$ and $X^n$ in Proposition \ref{prop:Skorohod} are Borel measurable. A particular and significant case occurs when we choose $(\Omega, \mathfrak B, \P) = ([0,1], {\ensuremath{\mathcal B}}, {\ensuremath{\mathcal L}}_1)$, with ${\ensuremath{\mathcal B}}$ the Borel $\sigma$-algebra and ${\ensuremath{\mathcal L}}_1$ the Lebesgue measure restricted to $[0,1]$. \medskip If $ \mathfrak m\in\mathscr{P}(S)$ and $E$ is a separable Banach space, we denote by $L^p_\mathfrak m(S;E)$ the space of (the equivalence classes of) $\mathfrak m$-measurable functions $f:S\to E$ such that $\int_S \|f(x)\|^p\,\d\mathfrak m(x)<+\infty$. Since $E$ is separable, the notions of weak and strong measurability coincide. We will often adopt the notation $L^p(S;E)$ in place of $L^p_\mathfrak m(S;E)$ when the measure $\mathfrak m$ is clear from the context.\\ We say that a sequence of measurable functions $u_n\in\mathrm{M}(S;E)$ converges in $\mathfrak m$-measure to $u\in\mathrm{M}(S;E)$ if \begin{equation} \forall\,\varepsilon > 0, \quad \lim_{n \to +\infty} \mathfrak m \left( \lbrace x \in S: \|u_n(x) - u(x)\| \geq \varepsilon \rbrace \right) = 0. \end{equation} If $u_n$ take values in a compact subset $U$ of $E$, $u_n\in\mathrm{M}(S;U)$, the convergence of $u_n$ to $u\in\mathrm{M}(S;U)$ in $\mathfrak m$-measure is equivalent to the convergence of $u_n$ to $u$ in $L^p(S;E)$ for every $p\in[1,+\infty)$. Given $(S, d)$ a metric space and $p \in [1,+\infty]$, we say that a curve $\gamma: [0,T] \to S$ belongs to $\mathrm{AC}^p([0,T];S)$ if there exists $m \in L^p(0,T;\mathbb{R})$ such that \begin{equation*} d(\gamma(t_1), \gamma(t_2)) \leq \int_{t_1}^{t_2} m(s) \, \d s , \qquad \forall \, t_1, t_2 \in [0,T], \; t_1 \leq t_2. 
\end{equation*} \medskip \subsection{The Wasserstein metric and the Superposition Principle} We provide a brief collection of the main notions on optimal transport and the Wasserstein distance, referring the reader to \cites{ambrosio2008gradient,santambrogio2015optimal,villani2003topics}. \par\medskip\par Given $\mu\in\mathscr{P}(\mathbb{R}^d)$ and $p\geq 1$, we define the $p$-moment of $\mu$ by $$\mathrm{m}_p(\mu):=\left(\int_{\mathbb{R}^d}|x|^p\,\d\mu(x)\right)^{1/p}.$$ We define $\mathscr{P}_p(\mathbb{R}^d):=\{\mu\in\mathscr{P}(\mathbb{R}^d):\mathrm{m}_p(\mu)<+\infty \}.$ The set $\mathscr{P}_p(\mathbb{R}^d)$ can be metrized by the following distance. \begin{definition}[Wasserstein distance]\label{def:was} Let $p\ge 1$. Given $\mu_1,\mu_2\in\mathscr P_p(\mathbb{R}^d)$, we define the $p$-\emph{Wasserstein distance} between $\mu_1$ and $\mu_2$ by setting \begin{equation}\label{eq:wasserstein} W_p(\mu_1,\mu_2):=\left(\min\left\{\int_{\mathbb{R}^d\times \mathbb{R}^d}|x_1-x_2|^p\,d\gamma(x_1,x_2)\,:\,\gamma\in \Gamma(\mu_1,\mu_2)\right\}\right)^{1/p}\,, \end{equation} where the set of \emph{admissible transport plans} $\Gamma(\mu_1,\mu_2)$ is given by \begin{align*} \Gamma(\mu_1,\mu_2):=&\left\{\gamma\in \mathscr P(\mathbb{R}^d\times \mathbb{R}^d):\,\pi^1_\sharp\gamma=\mu_1,\pi^2_\sharp\gamma=\mu_2 \right\}, \end{align*} with $\pi^i:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}^d$, $\pi^i(x^1,x^2)=x^i$, the projection operator, $i=1,2$. \end{definition} By the previous definitions, given a measurable space $\Omega$ and $\P\in\mathscr{P}(\Omega)$, it follows immediately that for any $Z\in L^p(\Omega;\mathbb{R}^d)$, we have $\mu:=Z_\sharp\P\in\mathscr{P}_p(\mathbb{R}^d)$ and \begin{equation}\label{eq:MpLp} \mathrm m_p(\mu)= \|Z\|_{ L^p(\Omega;\mathbb{R}^d)}, \end{equation} moreover \begin{equation}\label{eq:WpLp} W_p(Z^1_\sharp\P,Z^2_\sharp\P)\leq \|Z^1-Z^2 \|_{ L^p(\Omega;\mathbb{R}^d)}, \qquad \forall\, Z^1,Z^2\in L^p(\Omega;\mathbb{R}^d).
\end{equation} \medskip The space $\mathscr{P}_p(\mathbb{R}^d)$ endowed with the $p$-Wasserstein metric $W_p$ is a complete and separable metric space. \medskip The existence of a minimizer in \eqref{eq:wasserstein} can be proved by the direct method of the Calculus of Variations. When the measure $\mu_1$ is absolutely continuous with respect to the Lebesgue measure ${\ensuremath{\mathcal L}}^d$ on $\mathbb{R}^d$, the minimizer $\gamma$ is unique and concentrated on the graph of a map, $\gamma=(i_{\mathbb{R}^d},T)_\sharp\mu_1$, where $i_{\mathbb{R}^d}$ is the identity map of $\mathbb{R}^d$ and $T$ is a minimizer in the Monge transport problem \begin{equation}\label{eq:Monge} \inf\left\{\int_{\mathbb{R}^d}|x-S(x)|^p\,\d\mu_1(x)\,:\,S_\sharp\mu_1=\mu_2\right\}. \end{equation} The Wasserstein distance has the following characterization, known as the Benamou-Brenier formula: \begin{equation}\label{B&B} W_p^p(\mu_0,\mu_1)=\min \left\{ \int_0^1\int_{\mathbb{R}^d} |v_t(x)|^p \,\d\mu_t(x)\,\d t : (\mu,v) \in \operatorname{CE}, \, \mu_{t=0}=\mu_0, \mu_{t=1}=\mu_1\right\}, \end{equation} where \begin{equation*} \begin{aligned} \operatorname{CE}:= \Big\{&(\mu,v): \mu \in C([0,1];\mathscr{P}_p(\mathbb{R}^d)), v \in L^p([0,1] \times \mathbb{R}^d; \mu_t \otimes \d t)\\ &\text{ such that } \partial_t\mu_t+\div(v_t\mu_t)= 0 \text{ in the sense of distributions } \Big\}. \end{aligned} \end{equation*} Notice that the minimizers are the constant speed geodesics joining $\mu_0$ to $\mu_1$, i.e. $\{\sigma_t\}_{t\in[0,1]}$ such that $\sigma_0=\mu_0$, $\sigma_1=\mu_1$ and $W_p(\sigma_t,\sigma_s)=|t-s|W_p(\mu_0,\mu_1)$ for any $t,s\in[0,1]$. \medskip We recall the following definition, taken from \cite[Definition~2.2]{fornasier2018mean}.
\begin{definition}\label{def:admphi} We say that $\psi:[0,+\infty)\to[0,+\infty)$ is an \emph{admissible} function if $\psi(0)=0$, $\psi$ is strictly convex and of class $C^1$ with $\psi'(0)=0$, superlinear at $+\infty$, i.e., $\lim_{r\to+\infty}\dfrac{\psi(r)}{r}=+\infty$, and doubling, i.e., there exists $A>0$ such that \[\psi(2r)\le A (1+\psi(r))\quad \textrm{for any }r\in[0,+\infty).\] \end{definition} \medskip We observe that an admissible function $\psi$ satisfies \begin{equation}\label{doubling} r\psi'(r)\leq A(1+\psi(r)),\qquad \forall\,r\in [0,+\infty). \end{equation} \medskip The following result provides equivalent conditions for convergence in the space $\mathscr{P}_p(\mathbb{R}^d)$, together with a characterization of compactness. \begin{proposition}\label{prop:wassconv} Let $\{\mu_n\}_{n\in\mathbb{N}}\subseteq\mathscr P_p(\mathbb{R}^d)$ and $\mu\in\mathscr P_p(\mathbb{R}^d)$. Then the following assertions are equivalent: \begin{enumerate} \item $\displaystyle\lim_{n\to\infty}W_p(\mu_n,\mu)=0$; \item \label{Nmoment} $\mu_n$ weakly converges to $\mu$ and $\mathrm{m}_p(\mu_n)\to \mathrm{m}_p(\mu)$ as $n\to+\infty$; \item $\displaystyle\lim_{n\to+\infty}\int_{\mathbb{R}^d} \varphi(x)\,\d\mu_n(x)=\int_{\mathbb{R}^d}\varphi(x)\,\d\mu(x)$,\\ for every continuous function $\varphi:\mathbb{R}^d\to\mathbb{R}$ such that $|\varphi(x)|\leq C(1+|x|^p)$ for any $x\in\mathbb{R}^d$; \item \label{Npsi} $\mu_n$ weakly converges to $\mu$ and there exists $\psi:[0,+\infty)\to[0,+\infty)$ admissible, according to Definition \ref{def:admphi}, such that \begin{equation}\label{eq:unifmompsi} \sup_{n \in \mathbb{N}}\int_{\mathbb{R}^d} \psi(|x|^p)\,\d\mu_n(x) < +\infty.
\end{equation} \end{enumerate} Moreover, a family $\mathscr{K}\subset \mathscr{P}_p(\mathbb{R}^d)$ is relatively compact if and only if there exists an admissible function $\psi:[0,+\infty)\to[0,+\infty)$ such that \begin{equation}\label{eq:unifmompsiF} \sup_{\mu \in \mathscr{K}}\int_{\mathbb{R}^d} \psi(|x|^p)\,\d\mu(x) < +\infty. \end{equation} \end{proposition} The proof can be carried out using \cite[Lemma 5.1.7, Proposition 7.1.5]{ambrosio2008gradient}. The implication from \eqref{Nmoment} to \eqref{Npsi} follows from the De la Vall\'ee Poussin and Dunford-Pettis theorems, together with \cite[Lemma~2.3]{fornasier2018mean} for the admissibility property. \medskip The following representation result for the (absolutely continuous) solutions of the continuity equation will play a key role in the sequel (see \cite[Theorem 8.2.1]{ambrosio2008gradient}). We denote by $\Gamma_T=\mathrm C([0,T];\mathbb{R}^d)$ the Banach space of continuous functions, endowed with the $\sup$ norm. We denote by $e_t:\Gamma_T\to\mathbb{R}^d$ the evaluation map at time $t\in[0,T]$ defined by $e_t(\gamma):=\gamma(t)$. We say that $\boldsymbol\eta \in \mathscr{P}(\Gamma_T)$ is concentrated on a set $B$ if $\boldsymbol\eta(\Gamma_T \setminus B) = 0$. \begin{theorem}[Superposition principle]\label{thm:sup_princ} Let $p\ge1$. Let $\mu=\{\mu_t\}_{t\in [0,T]}\in C([0,T];\mathscr{P}_p(\mathbb{R}^d))$ be a distributional solution of the continuity equation $\partial_t \mu_t+\mathrm{div}(v_t\mu_t)=0$ for a Borel vector field $v:[0,T]\times\mathbb{R}^d\to\mathbb{R}^d$ satisfying \begin{equation}\label{Summabilityv} \int_0^T\int_{\mathbb{R}^d}|v_t(x)|^p\,\d\mu_t(x)\,\d t<+\infty.
\end{equation} Then there exists a probability measure $\boldsymbol\eta\in \mathscr{P}(\Gamma_T)$ such that \begin{enumerate} \item[(i)] $\mu_t=(e_t)_\sharp\boldsymbol\eta$ for every $t\in [0,T]$; \item[(ii)] $\boldsymbol\eta$ is concentrated on the set of curves $ \gamma \in \mathrm{AC}^p([0,T];\mathbb{R}^d)$ satisfying $$\dot\gamma(t)=v_t(\gamma(t)),\qquad \text{for ${\ensuremath{\mathcal L}}_T$-a.e. }t\in[0,T].$$ \end{enumerate} Conversely, given $\boldsymbol\eta\in\mathscr{P}(\Gamma_T)$ satisfying item (ii) {and \eqref{Summabilityv} with} $\mu_t:= (e_t)_\sharp\boldsymbol\eta$ for every $t \in [0,T]$, then $(\mu,v)$ is a distributional solution of $\partial_t\mu_t+\mathrm{div}(v_t\mu_t)=0$ . \end{theorem} In Theorem \ref{lem:EN} in Appendix \ref{app:SP}, we prove a version of the superposition principle in the discrete setting. \subsection{Disintegration and Young measures} Let $\S$ and $S$ be Polish spaces. We say that a map $x\in S \mapsto \mu_x\in\mathscr{P}(\S)$ is a Borel map if $x\mapsto \mu_x(A)$ is a Borel map for any open set $A\subset\S$. If $x\in S \mapsto \mu_x\in\mathscr{P}(\S)$ is a Borel map and $\lambda\in\mathscr{P}(S)$ we define the measure $\mu_x\otimes\lambda \in \mathscr{P}(\S)$ by $$(\mu_x\otimes\lambda) (A):= \int_S \mu_x(A)\,\d\lambda(x)$$ for any Borel set $A\subset\S$. Equivalently $$\int_\S \varphi(z)\,\d(\mu_x\otimes\lambda)(z):= \int_S \int_\S \varphi(z)\,\d\mu_x(z)\,\d\lambda(x)$$ for any bounded Borel function $\varphi:\S\to\mathbb{R}$. We state the following disintegration result (see for instance \cite[Section 5.3]{ambrosio2008gradient}). \begin{theorem}[Disintegration]\label{thm:disint} Let $\S$ and $S$ be Polish spaces. Let $\mu\in\mathscr{P}(\S)$ and $r:\S\to S$ a Borel map. Then there exists a Borel measurable family of probability measures $\{\mu_x\}_{x\in S}\subset \mathscr{P}(\S)$, uniquely defined for $r_\sharp\mu$-a.e. $x\in S$, such that $\mu_x(\S\setminus r^{-1}(x))=0$ for $r_\sharp\mu$-a.e. 
$x\in S$, and $\mu=\mu_x\otimes(r_\sharp \mu)$. In particular, for any bounded Borel map $\varphi:\S\to \mathbb{R}$ we have \begin{equation}\label{disintegration} \int_{\S}\varphi(z)\,d\mu(z)=\int_S \int_{r^{-1}(x)}\varphi(z)\,d\mu_x(z)\,d(r_\sharp \mu)(x). \end{equation} \end{theorem} \begin{remark}\label{rem:disint_proj} A typical case is given by $\S=S\times Y$, where $Y$ is a Polish space, and $r=\pi^1$. Since $(\pi^1)^{-1}(x)=\{x\}\times Y$ for all $x\in S$, we identify each measure $\mu_x\in\mathscr{P}(S\times Y)$, which is concentrated in $\{x\}\times Y$, with a measure $\mu_x\in\mathscr{P}(Y)$. With this identification, the formula \eqref{disintegration} takes the form \begin{equation}\label{disintegration2} \int_{S\times Y}\varphi(x,y)\,d\mu(x,y)=\int_S \int_{Y}\varphi(x,y)\,d\mu_x(y)\,d(r_\sharp \mu)(x). \end{equation} \end{remark} \medskip Let $\mathbb{T}$ and $S$ be Polish spaces, $\lambda\in\mathscr{P}(\mathbb{T})$ and $E$ be a Banach space. We say that $h: \mathbb{T} \times S \to E$ is a Carath\'eodory function if $$ \text{for $\lambda$-a.e. }t\in \mathbb{T}, \qquad x\mapsto h(t,x) \text{ is continuous}, $$ $$ \forall\, x\in S, \qquad t\mapsto h(t,x) \text{ is $\lambda$-measurable}. $$ \medskip Let us now recall the definition of \emph{Young measure} (see \cite{bernard2008young,valadier2004young}) and a density result which will turn out to be a crucial tool in our treatment. \begin{definition}\label{def:Youngconv} Let $\mathbb{T}$ and $S$ be Polish spaces and $\lambda\in\mathscr{P}(\mathbb{T})$. We say that $\nu \in \mathscr{P}(\mathbb{T} \times S)$ is a \textit{Young measure} on $\mathbb{T}\times S$ if $\pi^1_\sharp \nu = \lambda$. 
Furthermore, given Young measures $\nu^n, \nu \in \mathscr{P}(\mathbb{T}\times S)$, we say that $\nu^n \xrightarrow{\mathcal{Y}} \nu$ as $n \to +\infty$ if \[ \lim_{n \to + \infty} \int_{\mathbb{T} \times S} h(\tau, u) \,\d \nu^n(\tau,u) = \int_{\mathbb{T} \times S} h(\tau, u) \,\d \nu(\tau,u), \] for any $h: \mathbb{T} \times S \to \mathbb{R}$ Carath\'eodory and bounded. \end{definition} \begin{remark}\label{rem:Young-weak} Let $\mathbb{T}$, $S$ be Polish spaces, $\lambda\in\mathscr{P}(\mathbb{T})$ and $\nu^n,\nu$ Young measures on $\mathbb{T}\times S$. Then $\nu^n \xrightarrow{\mathcal{Y}} \nu$ in the sense of Definition \ref{def:Youngconv} if and only if $\nu^n\to\nu$ weakly. One implication follows immediately from the definitions, while the other comes from \cite[Theorem~7]{valadier1990young} (see also \cite{valadier2004young}). We also recall that weak convergence in $\mathscr{P}([0,T] \times S)$ is induced by a distance $\delta$. When $S$ is compact, we can choose as $\delta$ any Wasserstein distance on $\mathscr{P}([0,T] \times S)$. \end{remark} To any Borel map $u:\mathbb{T}\to S$ we can associate the Young measure $\nu:=(i_\mathbb{T},u)_\sharp\lambda$, which is concentrated on the graph of $u$. In this case, $\nu$ can be written as $\nu=\delta_{u(\tau)}\otimes\lambda$ and, using the disintegration Theorem \ref{thm:disint}, we have that $\nu_\tau=\delta_{u(\tau)}$ for $\lambda$-a.e. $\tau\in\mathbb{T}$. Conversely, given a Young measure $\nu$, its disintegration $\nu_\tau$ w.r.t. $\lambda$ need not be of the form $\delta_{u(\tau)}$ for some Borel map $u:\mathbb{T}\to S$: in general it fails to be a Dirac mass on a set of positive $\lambda$-measure. The following classical Lemma states that the Young measures induced by maps are ``dense'' in the set of Young measures, provided $\lambda$ is non atomic. We say that a measure $\lambda\in\mathscr{P}(\mathbb{T})$ is non atomic if $\lambda(\{\tau\})=0$ for any $\tau\in\mathbb{T}$.
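A prototypical example of this density phenomenon is given by fast oscillations (a standard illustration, recorded here for concreteness):

```latex
Let $\mathbb{T}=[0,1]$, $\lambda={\ensuremath{\mathcal L}}_1$ and $S=\{-1,1\}$, and consider the
Young measure $\nu:=\nu_\tau\otimes\lambda$ with $\nu_\tau=\tfrac12(\delta_{-1}+\delta_{1})$
for every $\tau\in[0,1]$. Since $\nu_\tau$ is not a Dirac mass, $\nu$ is induced by no map
$u:[0,1]\to S$. Nevertheless, setting $u^n(\tau):=1$ if $\sin(2\pi n\tau)\ge 0$ and
$u^n(\tau):=-1$ otherwise, for every Carath\'eodory and bounded $h$ one has
\[
\lim_{n\to+\infty}\int_0^1 h\big(\tau,u^n(\tau)\big)\,\d\tau
=\int_0^1 \tfrac12\big(h(\tau,-1)+h(\tau,1)\big)\,\d\tau ,
\]
that is, $(i_{[0,1]},u^n)_\sharp\lambda\xrightarrow{\;\mathcal{Y}\;}\nu$.
```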
\begin{lemma}[{see \cite[Theorem~2.2.3]{valadier2004young}}]\label{lemma:young} Let $\mathbb{T}$ and $S$ be Polish spaces and $\lambda\in\mathscr{P}(\mathbb{T})$ non atomic. If $\nu \in \mathscr{P}(\mathbb{T} \times S)$ is a Young measure, then there exists a sequence of Borel maps $u^n: \mathbb{T} \to S$ such that \[ \nu^n:= (i_\mathbb{T},u^n)_\sharp \lambda \xrightarrow{\;\;\mathcal{Y}\;\;} \nu=\nu_\tau\otimes\lambda. \] Precisely, \begin{equation} \lim_{n \to +\infty} \int_{\mathbb{T}} h(\tau, u^n(\tau)) \,\d \lambda(\tau) = \int_{\mathbb{T}} \int_S h (\tau,u) \,\d \nu_\tau(u)\,\d\lambda(\tau), \end{equation} for every $h: \mathbb{T} \times S \to \mathbb{R} $ Carath\'eodory and bounded. \end{lemma} \section{Structural assumptions for the dynamics of the optimal control problems}\label{sec:assumptions} In this section we collect our main structural assumptions on the system $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ characterizing the dynamics and the cost of the control problems under study, where $U$ is the space of controls, $f$ is the vector field driving the motion of the particles, and ${\ensuremath{\mathcal C}}$ and ${\ensuremath{\mathcal C}}_T$ are the running and terminal cost functionals. We fix $p\in[1,+\infty)$ and denote by $\sfd p$ the following metric on $\mathbb{R}^d\times\mathscr P_p(\mathbb{R}^d)$: \[\sfd p((x,\mu),(y,\nu)):=\left(|x-y|^p+W_p^p(\mu,\nu)\right)^{1/p}.\] \begin{assumption}[Basic Assumption]\label{BA} We assume that the system $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfies: \begin{enumerate}[label=(A.\arabic*)] \item \label{hp:U} $U$ is a compact metrizable space; \item \label{itemf:main} $f:\mathbb{R}^d\times U\times\mathscr P_p(\mathbb{R}^d)\to\mathbb{R}^d $ is continuous and Lipschitz continuous w.r.t. the metric $\sfd p$, uniformly in $u\in U$.
Precisely, there exists $L>0$ such that \begin{equation}\label{eq:Lipf} |f(x,u,\mu)-f(y,u,\nu)|\le L\, \sfd p((x,\mu),(y,\nu)) \ , \end{equation} for every $u\in U$ and $(x,\mu),(y,\nu)\in\mathbb{R}^d\times\mathscr P_p(\mathbb{R}^d)$. \item \label{itemg:main} $\mathcal C:\mathbb{R}^d\times U\times\mathscr P_p(\mathbb{R}^d)\to[0,+\infty)$ and $\mathcal C_T:\mathbb{R}^d\times\mathscr P_p(\mathbb{R}^d)\to[0,+\infty)$ are continuous functions such that \begin{equation}\label{eq:growthC} \begin{aligned} & \mathcal C(x,u,\mu) \le D\left(1+|x|^p+\mathrm{m}_p^p(\mu)\right) \qquad \forall\, (x,u,\mu)\in\mathbb{R}^d\times U\times\mathscr P_p(\mathbb{R}^d)\\ & \mathcal C_T(x,\mu) \le D\left(1+|x|^p+\mathrm{m}_p^p(\mu)\right) \qquad \forall\, (x,\mu)\in\mathbb{R}^d\times \mathscr P_p(\mathbb{R}^d), \end{aligned} \end{equation} for some $D>0$. \end{enumerate} \end{assumption} \begin{remark} From Assumption \ref{BA} it follows that \begin{equation}\label{f:growth} |f(x,u,\mu)|\le C\left(1+|x|+\mathrm{m}_p(\mu)\right), \qquad \forall\,(x,u,\mu)\in\mathbb{R}^d\times U\times\mathscr P_p(\mathbb{R}^d), \end{equation} for some $C>0$. Indeed, it is sufficient to choose $(y,\nu)=(0,\delta_0)$ in \eqref{eq:Lipf} and observe that $f(0,u,\delta_0)$ is bounded and $W_p(\mu,\delta_0)=\mathrm{m}_p(\mu)$. \end{remark} Concerning item \ref{hp:U} of Assumption \ref{BA}, let us recall the following result. \begin{proposition} If $U$ is a compact metrizable space then, for every distance $d_{U}$ inducing the original topology of $U$, there exists a separable Banach space $V$ and an isometry $j: U \to V$. In particular, the image $j(U)$ is a compact subset of $V$. \end{proposition} \begin{proof} Fix a point $u_0\in U$ and consider the Banach space $B:=\{F\in \mathrm{Lip}(U): F(u_0)=0\}$ endowed with the norm $$ \|F\|_B:=\sup_{u,v\in U, u\not=v}\frac{|F(u)-F(v)|}{d_{U}(u,v)}.$$ Denoting by $B'$ the dual space of $B$, we define the map $j: U \to B'$ by $\langle j(u),F\rangle_{B',B}:=F(u)$.
By the definition of dual norm, it is immediate to check that $$\|j(u)-j(v) \|_{B'}\leq d_{ U}(u,v), \quad \forall \, u,v \in U$$ On the other hand, evaluating $\langle j(u)-j(v), F\rangle_{B',B}$ with $F(z):=d_{U}(z,u)-d_{ U}(u,u_0)$, we obtain that $$\|j(u)-j(v) \|_{B'}= d_{U}(u,v),$$ so that $j$ is an isometry from $U$ to $j(U)\subset B'$. We eventually set $V:=\overline{\mathrm{span}(j(U))}^{B'}$, which is a separable Banach space since $U$, and therefore $j( U)$, is separable. \end{proof} {When specified, we will assume the following further hypothesis.} \begin{assumption}[Convexity Assumption]\label{CA} We say that $\S =(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfies the \emph{convexity assumption} if $\S$ satisfies Assumption \ref{BA} and \begin{enumerate}[label=(C.\arabic*)] \item $U$ is a compact convex subset of a separable Banach space $V$; \item for any $x\in\mathbb{R}^d$ and $\mu\in\mathscr{P}_p(\mathbb{R}^d)$, the map $u\mapsto f(x,u,\mu)$ satisfies the \emph{affinity condition}: \begin{equation*} f(x,\alpha u + (1-\alpha) v,\mu) = \alpha f(x,u,\mu) + (1-\alpha) f(x,v,\mu), \quad \forall\,u,v \in U,\; \forall\, \alpha \in [0,1]; \end{equation*} \item for any $x\in\mathbb{R}^d$ and $\mu\in\mathscr{P}_p(\mathbb{R}^d)$ the map $u \mapsto\mathcal C(x,u,\mu)$ is convex: \begin{equation*} {\ensuremath{\mathcal C}}(x,\alpha u + (1-\alpha) v,\mu) \leq \alpha {\ensuremath{\mathcal C}}(x,u,\mu) + (1-\alpha) {\ensuremath{\mathcal C}}(x,v,\mu), \quad \forall\,u,v \in U,\; \forall\, \alpha \in [0,1]. \end{equation*} \end{enumerate} \end{assumption} \subsection{The relaxed setting}\label{sub:Relax} For later use, we define a so-called \emph{relaxation/lifting} of $\S$ as follows. 
\begin{definition}\label{def:relax_setting} Given the system $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfying Assumption \ref{BA}, we define $\S'=(\mathscr{U},\mathscr{F},\mathscr{C},\mathscr{C}_T)$ as follows: \begin{enumerate}[label=(\roman*)] \item $\mathscr{U}:=\mathscr{P}(U)$; \item $\mathscr{F}:\mathbb{R}^d\times\mathscr{U}\times\mathscr{P}_p(\mathbb{R}^d)\to \mathbb{R}^d$ with $$\mathscr{F}(x,\sigma,\mu):=\int_U f(x,u,\mu)\,\d\sigma(u);$$ \item $\mathscr{C}:\mathbb{R}^d\times\mathscr{U}\times\mathscr{P}_p(\mathbb{R}^d)\to [0,+\infty)$ with $$\mathscr{C}(x,\sigma,\mu):=\int_U {\ensuremath{\mathcal C}}(x,u,\mu)\,\d\sigma(u);$$ \item $\mathscr{C}_T := {\ensuremath{\mathcal C}}_T$. \end{enumerate} \end{definition} \begin{proposition}\label{prop:ConvexRL} If $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfies Assumption \ref{BA}, then {its relaxation} $\S' = (\mathscr{U},\mathscr{F},\mathscr{C},{\ensuremath{\mathcal C}}_T)${, given in Definition \ref{def:relax_setting},} satisfies the Convexity Assumption \ref{CA}. Moreover, defining $D_U:=\{\delta_u: u\in U\}\subset\mathscr{U}$, the maps $\mathscr{F}$ and $\mathscr{C}$ restricted to $\mathbb{R}^d\times D_U\times\mathscr{P}_p(\mathbb{R}^d)$ coincide with $f$ and ${\ensuremath{\mathcal C}}$ respectively. \end{proposition} \begin{proof} The space $\mathscr{U}:=\mathscr{P}(U)$ can be identified with a subset of the dual space $B'$, where $B$ is the Banach space $B:=\{F\in \mathrm{Lip}(U): F(u_0)=0\}$, for a fixed $u_0\in U$. The identification is obtained by associating to $\sigma \in \mathscr{P}(U)$ the continuous linear functional $F \mapsto \int_U F(u)\,\d\sigma(u)$. With this identification, the norm in $\mathscr{P}(U)$ is given by $$\|\sigma\|=\sup_{F\in B, \|F\|_B\leq 1} \int_U F(u)\,\d\sigma(u).$$ By the Kantorovich-Rubinstein Theorem (see e.g.
\cite[Theorem 1.14]{villani2003topics}) it holds that $ \|\sigma\|=W_1(\sigma,\delta_{u_0})$ and $ \|\sigma^1-\sigma^2\|=W_1(\sigma^1,\sigma^2)$. Hence, the topology on $\mathscr{P}(U)$ induced by $B'$ coincides with the topology induced by the Wasserstein distance $W_1$. Since $U$ is compact, this coincides with the topology induced by the weak convergence. By Prokhorov's Theorem, $\mathscr{P}(U)$ is compact. Finally, $\mathscr{P}(U)$ is a compact convex subset of the separable Banach space $V:=\overline{\mathrm{span}(\mathscr{P}(U))}\subseteq B'$, whose separability follows from that of the (complete) metric space $(\mathscr{P}(U),W_1)$. The convexity of $\mathscr{U}$, the affinity of $\mathscr{F}$ and the convexity of $\mathscr{C}$ with respect to $\sigma$ follow easily from their definitions. \end{proof} \section{Lagrangian optimal control problem} \label{sec_lagrangian} In this section we deal with a (finite-horizon) optimal control problem in Lagrangian formulation. It relies on a system $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfying Assumption \ref{BA} and on a probability space $(\Omega,\mathfrak B,\P)$, whose elements act as parameters of the particles. We also fix a final time horizon $T>0$ and we denote by ${\ensuremath{\mathrm{Leb}}}_{[0,T]}$ the $\sigma$-algebra of Lebesgue measurable sets on $[0,T]$ and by ${\ensuremath{\mathcal L}}_T$ the normalized Lebesgue measure restricted to $[0,T]$. Recall that $\mathrm{M}([0,T] \times \Omega;U)$ denotes the set of measurable functions with respect to the product $\sigma$-algebra ${\ensuremath{\mathrm{Leb}}}_{[0,T]} \otimes \mathfrak B$. \begin{definition}[Lagrangian optimal control problem $(\pL)$]\label{def:L} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and let $(\Omega,\mathfrak B,\P)$ be a probability space.
Given $X_0\in L^p(\Omega;\mathbb{R}^d)$, we say that $(X,u)\in{\ensuremath{\mathcal A}}_{\pL}(X_0)$ if \begin{itemize} \item [(i)] $u\in \mathrm{M}([0,T]\times\Omega;U)$; \item [(ii)] $X\in L^p(\Omega;\mathrm{AC}^p([0,T];\mathbb{R}^d))$ and for $\P$-a.e. $\omega\in\Omega$, $X(\omega)$ is a solution of the following Cauchy problem \begin{equation}\label{eq:systemL} \begin{cases} \dot X_t(\omega)=f(X_t(\omega),u_t(\omega),(X_t)_\sharp\P), &\textrm{for ${\ensuremath{\mathcal L}}_T$-a.e. }t\in (0,T)\\ X_{|t=0}(\omega)=X_0(\omega),& \end{cases} \end{equation} where $X_t: \Omega \to \mathbb{R}^d$ is defined by $X_t(\omega):= X(t,\omega)$ for $\P$-a.e. $\omega \in \Omega$. \end{itemize} We refer to $(X,u)\in\mathcal A_{\pL}(X_0)$ as an \emph{admissible pair}, with $X$ a \emph{trajectory} and $u$ a \emph{control}.\\ We define the \emph{cost functional} $J_{\pL}:L^p(\Omega;C([0,T];\mathbb{R}^d))\times \mathrm{M}([0,T]\times\Omega;U)\to[0,+\infty)$ by \[J_{\pL}(X,u):= \int_\Omega \int_0^T\mathcal C(X_t(\omega),u_t(\omega),(X_t)_\sharp \P)\,\d t \, \d\P(\omega) +\int_\Omega \mathcal C_T(X_T(\omega),(X_T)_\sharp \P)\,\d\P(\omega),\] and the \emph{value function} $V_{\pL}:L^p(\Omega;\mathbb{R}^d)\to[0,+\infty)$ by \begin{equation} V_{\pL}(X_0):=\inf\left\{J_{\pL}(X,u)\,:\,(X,u)\in\mathcal A_{\pL}(X_0)\right\}. \end{equation} \end{definition} \smallskip In the following, $\pL(\Omega, \mathfrak B,\P;\S)$ denotes the Lagrangian problem given in Definition \ref{def:L}. We will frequently shorten the notation to $\pL(\Omega, \mathfrak B, \P)$ when the system $\S$ is clear from the context. \begin{remark}\label{rem:lagrangian} Observe that, thanks to condition \eqref{eq:growthC}, the functional $J_{\pL}$ is finite. Moreover, from Proposition \ref{prop:existL} below it follows that ${\ensuremath{\mathcal A}}_{\pL}(X_0)\not=\emptyset$, for any $X_0\in L^p(\Omega;\mathbb{R}^d)$, and so the value function $V_{\pL}$ is well defined.
We point out that existence of minimizers for the Lagrangian problem is not guaranteed in general, even under the Convexity Assumption \ref{CA}. This will be further discussed in Section \ref{sec:counterexample}. \end{remark} \begin{remark}\label{re:equiv} In view of Proposition \ref{prop:equivSpaces} in Appendix A, we will frequently identify $X \in L^p(\Omega;\mathrm{AC}^p([0,T];\mathbb{R}^d))$ and $X \in \mathrm{AC}^p([0,T];L^p(\Omega;\mathbb{R}^d))$, depending on the convenience. \end{remark} Let us introduce a suitable equivalence relation among Lagrangian problems when the parametrization space is varying. \begin{definition}[Equivalence of Lagrangian problems]\label{def:equiv_L} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA}. Let $(\Omega_1,\mathfrak B_1,\P_1)$ and $(\Omega_2,\mathfrak B_2,\P_2)$ be probability spaces. We say that $\pL_1 := \pL(\Omega_1,\mathfrak B_1,\P_1; \S)$ and $\pL_2:=\pL(\Omega_2,\mathfrak B_2,\P_2; \S)$ are \emph{equivalent} (and we write $\pL_1 \sim \pL_2$) if \begin{itemize} \item[(i)] for every $X_0^1\in L^p(\Omega_1;\mathbb{R}^d)$ and every $(X^1,u^1) \in {\ensuremath{\mathcal A}}_{\pL_1}(X_0^1)$ there exist $X_0^2\in L^p(\Omega_2;\mathbb{R}^d)$ and $(X^2,u^2) \in {\ensuremath{\mathcal A}}_{\pL_2}(X_0^2)$ such that \[ J_{\pL_1}(X^1,u^1) = J_{\pL_2}(X^2,u^2), \qquad \quad V_{\pL_1}(X^1_0) = V_{\pL_2}(X^2_0);\] \item[(ii)] for every $X_0^2\in L^p(\Omega_2;\mathbb{R}^d)$ and every $(X^2,u^2) \in {\ensuremath{\mathcal A}}_{\pL_2}(X_0^2)$ there exist $X_0^1\in L^p(\Omega_1;\mathbb{R}^d)$ and $(X^1,u^1) \in {\ensuremath{\mathcal A}}_{\pL_1}(X_0^1)$ such that \[ J_{\pL_2}(X^2,u^2) = J_{\pL_1}(X^1,u^1), \qquad \quad V_{\pL_2}(X^2_0) = V_{\pL_1}(X^1_0).\] \end{itemize} \end{definition} \begin{remark} The relation $\sim$ of Definition \ref{def:equiv_L} is an equivalence relation on the set of Lagrangian problems $\lbrace \pL(\Omega, \mathfrak B,\P) : (\Omega, \mathfrak B,\P) \text{ probability space} \rbrace$. 
\end{remark} \begin{proposition}\label{p:equivalence} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA}. Let $(\Omega_1,\mathfrak B_1,\P_1)$ and $(\Omega_2,\mathfrak B_2,\P_2)$ be probability spaces. Suppose there exist measurable maps $\psi: \Omega_1 \to \Omega_2$ and $\phi: \Omega_2 \to \Omega_1$ such that $\psi_\sharp\P_1 = \P_2$, $\phi_\sharp\P_2 = \P_1$ and \begin{align} \label{eq:strana1} &\forall\, X_0^1 \in L^p(\Omega_1;\mathbb{R}^d) \text{ it holds } X_0^1 = X_0^1 \circ \phi \circ \psi \, ; \\ \label{eq:strana2} &\forall\, X_0^2 \in L^p(\Omega_2;\mathbb{R}^d) \text{ it holds } X_0^2 = X_0^2 \circ \psi \circ \phi. \end{align} Then $\pL(\Omega_1,\mathfrak B_1,\P_1; \S) \sim \pL(\Omega_2,\mathfrak B_2,\P_2; \S)$. \end{proposition} \begin{proof} For every $(X^1,u^1) \in {\ensuremath{\mathcal A}}_{\pL_1}(X^1_0)$, we define $X^2:= X^1\circ \phi$, and $u^2(t,\omega_2):= u^1(t, \phi(\omega_2))$, for every $(t, \omega_2) \in [0,T] \times \Omega_2$. Using that $\phi_\sharp \P_2 = \P_1$, it easily follows that $(X^2,u^2)\in{\ensuremath{\mathcal A}}_{\pL_2}(X_0^1\circ\phi)$ and $J_{\pL_1}(X^1,u^1) = J_{\pL_2}(X^2,u^2)$. Hence, for every $X_0^1 \in L^p(\Omega_1; \mathbb{R}^d)$, we have \begin{equation}\label{V1geV2} V_{\pL_1}(X^1_0) \geq V_{\pL_2}(X^1_0 \circ \phi). \end{equation} Analogously, for every $(X^2,u^2) \in {\ensuremath{\mathcal A}}_{\pL_2}(X^2_0)$, we define $X^1:= X^2\circ \psi$, and $u^1(t,\omega_1):= u^2(t, \psi(\omega_1))$, for every $(t, \omega_1) \in [0,T] \times \Omega_1$. Then, since $\psi_\sharp \P_1 = \P_2$, it holds that $(X^1,u^1)\in{\ensuremath{\mathcal A}}_{\pL_1}(X_0^2\circ\psi)$ and $J_{\pL_1}(X^1,u^1) = J_{\pL_2}(X^2,u^2)$. Moreover, for every $X_0^2 \in L^p(\Omega_2; \mathbb{R}^d)$ we have \begin{equation}\label{V2geV1} V_{\pL_2}(X^2_0) \geq V_{\pL_1}(X^2_0 \circ \psi).
\end{equation} The combination of \eqref{V1geV2} and \eqref{V2geV1} gives \[ V_{\pL_2}(X^2_0) \geq V_{\pL_1}(X^2_0 \circ \psi) \geq V_{\pL_2}(X^2_0 \circ \psi \circ \phi)\] hence, using \eqref{eq:strana2} we have \begin{equation}\label{eq:prima} V_{\pL_2}(X^2_0) = V_{\pL_1}(X^2_0 \circ \psi). \end{equation} Thanks to \eqref{eq:strana1} and \eqref{eq:prima} we finally get \[V_{\pL_1}(X^1_0) = V_{\pL_1}(X^1_0 \circ \phi \circ \psi) = V_{\pL_2}(X^1_0 \circ \phi). \] \end{proof} \begin{remark} \ \begin{enumerate} \item Notice that the assumptions of Proposition \ref{p:equivalence} are satisfied if there exists a bijective function $\psi:\Omega_1\to\Omega_2$ such that $\psi$ and $\psi^{-1}$ are measurable and $\psi_\sharp\P_1=\P_2$. Indeed, it suffices to choose $\phi=\psi^{-1}$. \item Proposition \ref{p:equivalence} still holds when the maps $\psi$ and $\phi$ are defined up to sets of null measure, meaning that \[\psi: \Omega_1\setminus\cN_1 \to \Omega_2\setminus\cN_2,\quad \phi: \Omega_2\setminus\cN_2 \to \Omega_1\setminus\cN_1\] for some $\cN_1\in \mathfrak B_1$ such that $\P_1(\cN_1)=0$ and $\cN_2\in \mathfrak B_2$ such that $\P_2(\cN_2)=0$. \end{enumerate} \end{remark} \subsection{Basic results} Here we collect some properties of the Lagrangian problem. In particular, we show existence and uniqueness of solutions, prove a priori estimates and compactness for the associated laws, and derive a stability result for trajectories and costs when initial data and controls converge in a suitable sense. \begin{proposition}[Existence and uniqueness]\label{prop:existL} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and $(\Omega,\mathfrak B,\P)$ be a probability space. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$ and $u\in \mathrm{M}([0,T]\times\Omega;U)$ be given. Then there exists a unique $X\in L^p(\Omega;\mathrm{AC}^p([0,T];\mathbb{R}^d))$ such that $(X,u)\in\mathcal A_{\pL}(X_0)$.
Moreover, if $(X^i,u^i)\in{\ensuremath{\mathcal A}}_\pL(X_0)$, $i=1,2$, and $u^1=u^2$ ${\ensuremath{\mathcal L}}_T\otimes\P$-a.e., then $X^1=X^2$. \end{proposition} \begin{proof} We define $F_u: [0,T] \times L^p(\Omega;\mathbb{R}^d)\to L^p(\Omega;\mathbb{R}^d)$ by \begin{equation}\label{eq:defFu} F_u(t,Z)(\omega):=f(Z(\omega),u(t,\omega),Z_\sharp\P). \end{equation} We observe that the continuity of $f$ and the measurability of $u$ imply that $F_u$ is a Carath\'eodory function. Moreover, by \eqref{eq:Lipf} and \eqref{eq:WpLp}, $F_u$ satisfies condition \eqref{LipFE}. Since $F_u(t,0)(\omega)=f(0,u_t(\omega),\delta_0)$, by continuity of $f$ and compactness of $U$ it follows that $F_u$ satisfies \eqref{boundFE}. Theorem \ref{th:ExistenceODE} with the choice $E=L^p(\Omega;\mathbb{R}^d)$ and $F=F_u$ yields the existence and uniqueness of a curve $X \in\mathrm{AC}^p([0,T];L^p(\Omega;\mathbb{R}^d))$ solving \begin{equation*} X_t=X_0+\int_0^t F_u(s,X_s)\,\d s, \qquad \forall\,t\in[0,T]. \end{equation*} Thanks to Proposition \ref{prop:EquivLp} we finally get $X\in L^p(\Omega;\mathrm{AC}^p([0,T];\mathbb{R}^d))$ which is the unique solution of \eqref{eq:systemL}. The last assertion follows from the equality $F_{u^1}(t,Z)=F_{u^2}(t,Z)$ for ${\ensuremath{\mathcal L}}_T$-a.e. $t\in[0,T]$ and for every $Z \in L^p(\Omega;\mathbb{R}^d)$. \end{proof} \begin{proposition}[A priori estimates]\label{prop:estimatesL} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and $(\Omega,\mathfrak B,\P)$ be a probability space. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$ and $(X,u)\in\mathcal A_{\pL}(X_0)$. 
Then there exist constants $C$ and $C_T$ independent of $u$ and $X_0$ such that \begin{equation}\label{boundXt} \sup_{t\in[0,T]}\|X_t\|_{L^p(\Omega;\mathbb{R}^d)}\le e^{2CT}\left(\|X_0\|_{L^p(\Omega;\mathbb{R}^d)}+CT\right), \end{equation} \begin{equation}\label{LipXt} \|X_t-X_s\|_{L^p(\Omega;\mathbb{R}^d)}\le C_T\,|t-s|\,\left(1+\|X_0\|_{L^p(\Omega;\mathbb{R}^d)}\right) \qquad \forall\, s,t\in[0,T], \end{equation} \begin{equation}\label{boundXtomega} \sup_{t\in[0,T]}|X_t(\omega)| \le e^{CT}\left(|X_0(\omega)|+ C_T(1+ \|X_0\|_{L^p(\Omega;\mathbb{R}^d)})\right), \quad \text{for }\P\text{-a.e. }\omega\in\Omega. \end{equation} \end{proposition} \begin{proof} The estimates \eqref{boundXt} and \eqref{LipXt} follow from \eqref{boundZt} and \eqref{LipZt} for $F=F_u$ defined in \eqref{eq:defFu} and $E=L^p(\Omega;\mathbb{R}^d)$. In order to prove \eqref{boundXtomega} we write \eqref{eq:systemL} in integral form \begin{equation}\label{systemLintp} X_t(\omega)= X_0(\omega)+\int_0^t f(X_s(\omega),u(s,\omega),(X_s)_\sharp\P)\,\d s, \quad \forall\,t\in[0,T] \quad \text{for }\P\text{-a.e. }\omega\in\Omega. \end{equation} Then by \eqref{f:growth} we have \begin{align*} |X_t(\omega)| &= \left |X_0(\omega) + \int_0^t f(X_s(\omega),u(s,\omega),(X_s)_\sharp\P)\,\d s \right |\\ &\le |X_0(\omega)| + \int_0^t | f(X_s(\omega),u(s,\omega),(X_s)_\sharp\P) | \,\d s\\ &\le |X_0(\omega)| + \int_0^t C\left( 1+|X_s(\omega)|+ \|X_s\|_{L^p(\Omega;\mathbb{R}^d)}\right) \,\d s. \end{align*} Using \eqref{boundXt} and the Gronwall inequality we obtain \eqref{boundXtomega}. \end{proof} In the following Lemma, we derive a compactness result for the laws of Lagrangian trajectories, when the initial data belong to a compact subset of $L^p(\Omega;\mathbb{R}^d)$. \begin{lemma}\label{lemma:cpt} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and $(\Omega,\mathfrak B,\P)$ be a probability space. Let $K \subseteq L^p(\Omega;\mathbb{R}^d)$ be compact.
Then the set \begin{equation}\label{eq:defcompactKmu} {\ensuremath{\mathcal K}}_K := \left\{ \mu \in \operatorname{AC}([0,T]; \mathscr{P}_p(\mathbb{R}^d)) : \mu_t = (X_t)_\sharp \P, \; (X,u) \in \mathcal A_{\pL}(X_0), \; X_0 \in K \right\} \end{equation} is relatively compact in $C([0,T]; \mathscr{P}_p(\mathbb{R}^d))$. \end{lemma} \begin{proof} Let $\{\mu^n\}_{n\in\mathbb{N}}\subset{\ensuremath{\mathcal K}}_K$ be a sequence. By definition, for every $n\in\mathbb{N}$ there exist $X^n_0 \in K$ and $(X^n,u^n) \in \mathcal A_{\pL}(X^n_0)$ such that $\mu^n_t = (X^n_t)_\sharp \P$ for all $t\in[0,T]$. Since $\sup_{n\in\mathbb{N}}\|X^n_0\|_{L^p(\Omega;\mathbb{R}^d)}<+\infty$, by the estimate \eqref{boundXtomega} there exists a constant $C>0$ such that \begin{equation}\label{eqK} |X^n_t(\omega)|^p \le C\left(1+|X^n_0(\omega)|^p\right), \quad \forall\, n\in\mathbb{N}, \; \forall\, t\in[0,T], \text{ for }\P\text{-a.e. }\omega\in\Omega. \end{equation} Since $K$ is compact in $L^p(\Omega;\mathbb{R}^d)$, there exists an admissible $\psi:[0,+\infty)\to[0,+\infty)$, according to Definition \ref{def:admphi}, such that \[ \sup_{n\in\mathbb{N}}\int_\Omega \psi(|X_0^n(\omega)|^p)\,\d\P(\omega)<+\infty.\] By the doubling and monotonicity property of $\psi$ and \eqref{eqK} we have \begin{equation*} \psi(|X^n_t(\omega)|^p) \le C\left(1+\psi(|X^n_0(\omega)|^p)\right), \quad \forall\, n\in\mathbb{N}, \; \forall\, t\in[0,T], \text{ for }\P\text{-a.e. }\omega\in\Omega, \end{equation*} and then \[ \sup_{t\in[0,T],n\in\mathbb{N}}\int_\Omega \psi(|X_t^n(\omega)|^p)\,\d\P(\omega)<+\infty,\] which can be rewritten as \[ \sup_{t\in[0,T],n\in\mathbb{N}}\int_{\mathbb{R}^d} \psi(|x|^p)\,\d\mu_t^n(x)<+\infty.\] By Proposition \ref{prop:wassconv} there exists a compact $\mathscr{K}\subset \mathscr{P}_p(\mathbb{R}^d)$ such that $\mu^n_t\in\mathscr{K}$ for any $t\in[0,T]$ and $n\in\mathbb{N}$.
Moreover, by \eqref{LipXt} and the boundedness of $\|X^n_0\|_{L^p(\Omega;\mathbb{R}^d)}$, there exists $C>0$ such that \[ W_p(\mu^n_t,\mu^n_s)\le \|X^n_t-X^n_s\|_{L^p(\Omega;\mathbb{R}^d)} \le C |t-s|, \quad \forall\, s,t \in[0,T], \; \forall\,n\in\mathbb{N}. \] We can thus apply the Ascoli-Arzel\`a theorem in $C([0,T]; \mathscr{P}_p(\mathbb{R}^d))$ to conclude. \end{proof} \medskip We conclude the subsection by proving a first stability result for the Lagrangian problem. \begin{proposition}[Stability for \pL]\label{prop:costRLntoRL} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and $(\Omega,\mathfrak B,\P)$ be a probability space. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$ and $(X,u)\in\mathcal A_{\pL}(X_0)$. Let $X_0^n \in L^p(\Omega;\mathbb{R}^d)$ be a sequence such that $\|X^n_0-X_0\|_{L^p(\Omega;\mathbb{R}^d)}\to 0$, as $n \to +\infty$. If $(X^n,u^n)\in{\ensuremath{\mathcal A}}_{\pL}(X^n_0)$, for any $n\in\mathbb{N}$, and $u^n\to u$ in ${\ensuremath{\mathcal L}}_T\otimes\P$-measure as $n\to+\infty$, then \begin{equation}\label{convX} \sup_{t\in[0,T]}\|X^n_t-X_t\|_{L^p(\Omega;\mathbb{R}^d)}\to 0,\quad \textrm{as }n\to+\infty, \end{equation} and \begin{equation}\label{convJ} J_{\pL}(X^n,u^n)\to J_{\pL}(X,u),\quad \textrm{as }n\to+\infty. \end{equation} \end{proposition} \begin{proof} In order to prove \eqref{convX} we apply Proposition \ref{prop:StabilityODE} with the choice $E=L^p(\Omega;\mathbb{R}^d)$, $F=F_u$ and $F^n=F_{u^n}$, defined as in \eqref{eq:defFu}. We have to check that \eqref{Resto} holds. Defining $G^n,G:[0,T]\times\Omega\to\mathbb{R}^d$ by $G^n(t,\omega):=F_{u^n}(t,X_t)(\omega)$ and $G(t,\omega):=F_{u}(t,X_t)(\omega)$, it is sufficient to prove that $G^n\to G$ in $L^p([0,T]\times\Omega;\mathbb{R}^d)$. Since $u^n$ converges to $u$ in ${\ensuremath{\mathcal L}}_T\otimes\P$-measure, there exists a subsequence $u^{n_k}$ such that $u^{n_k}_t(\omega)$ converges to $u_t(\omega)$ for ${\ensuremath{\mathcal L}}_T\otimes\P$-a.e.
$(t,\omega)\in[0,T]\times\Omega$. By the continuity of $f$, we have that $$|G^{n_k}(t,\omega) - G(t,\omega)|=|F_{u^{n_k}}(t,X_t)(\omega) - F_{u}(t,X_t)(\omega)|\to 0$$ for ${\ensuremath{\mathcal L}}_T\otimes\P$-a.e. $(t,\omega)\in[0,T]\times\Omega$. Moreover $$|G^{n}(t,\omega) - G(t,\omega)|^p = |F_{u^{n}}(t,X_t)(\omega) - F_{u}(t,X_t)(\omega) |^p \le C\left(1+|X_t(\omega)|^p+\|X_t\|^p_{L^p(\Omega;\mathbb{R}^d)}\right).$$ By dominated convergence we conclude that $G^{n_k}\to G$ in $L^p([0,T]\times\Omega;\mathbb{R}^d)$. Since the same argument applies to every subsequence of $\{G^n\}_{n\in\mathbb{N}}$, the whole sequence converges, that is, \begin{equation*} \int_0^T \int_\Omega |F_{u^n}(t,X_t)(\omega) - F_{u}(t,X_t)(\omega)|^p\,\d\P(\omega)\,\d t \to 0. \end{equation*} Let us prove \eqref{convJ}. For any $t\in[0,T]$, we use the notation $\mu_t^n:=(X^n_t)_\sharp\P$ and $\mu_t:=(X_t)_\sharp\P$. By \eqref{convX} we have \begin{equation}\label{convmu} \sup_{t\in[0,T]}W_p(\mu^n_t,\mu_t) \to 0,\quad \textrm{as }n\to+\infty. \end{equation} We focus on the running cost ${\ensuremath{\mathcal C}}$. Since \begin{align}\label{eq:estimCchat} \begin{split} &\left|\int_0^T\int_\Omega\left({\ensuremath{\mathcal C}}(X^n_t(\omega),u^n_t(\omega),\mu_t^n)- {\ensuremath{\mathcal C}}(X_t(\omega),u_t(\omega),\mu_t)\right)\,\d\P(\omega)\, \d t\right|\\ \le& \int_0^T\int_\Omega\left|{\ensuremath{\mathcal C}}(X^n_t(\omega),u^n_t(\omega),\mu_t^n)-{\ensuremath{\mathcal C}}(X_t(\omega),u_t(\omega),\mu_t)\right|\,\d\P(\omega)\, \d t, \end{split} \end{align} defining $H^n,H:[0,T]\times\Omega\to\mathbb{R}$ by $H^n(t,\omega):={\ensuremath{\mathcal C}}(X^n_t(\omega),u^n_t(\omega),\mu_t^n)$ and $H(t,\omega):={\ensuremath{\mathcal C}}(X_t(\omega),u_t(\omega),\mu_t)$, it is sufficient to prove that $H^n\to H$ in $L^1([0,T]\times\Omega)$.
Since $u^n$ converges to $u$ in ${\ensuremath{\mathcal L}}_T\otimes\P$-measure, \eqref{convX} and \eqref{convmu} hold, and ${\ensuremath{\mathcal C}}$ is continuous, there exists a subsequence $H^{n_k}$ such that $H^{n_k}(t,\omega)$ converges to $H(t,\omega)$ for ${\ensuremath{\mathcal L}}_T\otimes\P$-a.e. $(t,\omega)\in[0,T]\times\Omega$. Moreover, by the growth assumption \eqref{eq:growthC} we have \begin{equation*} H^n(t,\omega) \leq C(1+|X^{n}_t(\omega)|^p+\|X^n_t\|^p_{L^p(\Omega;\mathbb{R}^d)}). \end{equation*} By a variant of the dominated convergence theorem (see \cite[Theorem 1.20]{evans2015measure}) we conclude that $H^{n_k}\to H$ in $L^1([0,T]\times\Omega)$. By the same argument as before, the whole sequence $H^n$ converges to $H$ in $L^1([0,T]\times\Omega)$. The proof that $$\int_\Omega {\ensuremath{\mathcal C}}_T(X^n_T(\omega),\mu^n_T)\,\d\P(\omega) \to \int_\Omega {\ensuremath{\mathcal C}}_T(X_T(\omega),\mu_T)\,\d\P(\omega), \quad \textrm{as }n\to+\infty$$ follows analogously. \end{proof} \begin{proposition}[Upper semicontinuity of the value function]\label{prop:uscVL} Let $\S:=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA} and $(\Omega,\mathfrak B,\P)$ be a probability space. If $X_0^n,X_0 \in L^p(\Omega;\mathbb{R}^d) $ satisfy $\|X^n_0 - X_0\|_{L^p(\Omega;\mathbb{R}^d)} \to 0$ as $n \to +\infty$, then \begin{equation*} \limsup_{n\to +\infty} V_{\pL} (X_0^n) \leq V_{\pL}(X_0). \end{equation*} \end{proposition} \begin{proof} Let $\varepsilon>0$ and let $(X^\varepsilon,u^\varepsilon)\in{\ensuremath{\mathcal A}}_{\pL}(X_0)$ be such that $J_{\pL}(X^\varepsilon,u^\varepsilon) \leq V_{\pL}(X_0) + \varepsilon$. By Proposition \ref{prop:existL}, for any $n\in\mathbb{N}$ there exists $X^{\varepsilon,n}$ such that $(X^{\varepsilon,n},u^\varepsilon)\in{\ensuremath{\mathcal A}}_{\pL}(X_0^n)$.
By Proposition \ref{prop:costRLntoRL}, $J_{\pL}(X^{\varepsilon,n},u^\varepsilon) \to J_{\pL}(X^\varepsilon,u^\varepsilon)$, as $n\to +\infty$. Hence \[ \limsup_{n\to +\infty}V_{\pL}(X_0^n) \leq \limsup_{n\to +\infty} J_{\pL}(X^{\varepsilon,n},u^\varepsilon) = J_{\pL}(X^\varepsilon,u^\varepsilon) \leq V_{\pL}(X_0) + \varepsilon. \] Since $\varepsilon$ is arbitrary, we conclude. \end{proof} \subsection{Approximation by piecewise constant controls}\label{sec:approxLpiecew} In this subsection, we approximate admissible controls for the Lagrangian problem with a sequence of suitable piecewise constant controls (i.e. measurable with respect to finite algebras of $\Omega$) so that the corresponding trajectories and costs converge. This is the content of Theorem \ref{prop:RLntoRL}. The same result is then rephrased in the context of finite particle approximations in Proposition \ref{prop:approx_lagrangian_n}. \smallskip Let $(\Omega,\bar \mathfrak B, \P)$ be a probability space with $\bar \mathfrak B $ a finite algebra. It can be shown that $\bar \mathfrak B$ induces a unique minimal (with respect to the inclusion) partition of $\Omega$, that we denote by \begin{equation}\label{eq:partBn} {\ensuremath{\mathcal P}}(\bar \mathfrak B)=\{A_k:k=1,\ldots,m\}. \end{equation} Given a topological space $E$, observe that, since $\bar \mathfrak B$ is finite, $g \in \mathrm{M}((\Omega, \bar\mathfrak B); (E,{\ensuremath{\mathcal B}}_E))$ if and only if $g$ is constant on the elements of ${\ensuremath{\mathcal P}}(\bar \mathfrak B)$. Let us give the following definition. { \begin{definition}\label{def:discreteL} Let $(\Omega,\bar\mathfrak B, \P)$ be a probability space with $\bar\mathfrak B$ a finite algebra and ${\ensuremath{\mathcal P}}(\bar \mathfrak B)$ the associated unique minimal partition \eqref{eq:partBn}.
Given $m := \# {\ensuremath{\mathcal P}}(\bar \mathfrak B)$, we define the probability space $(\Omega^m, \mathcal{S}(\Omega^m), \P^m)$, where $\Omega^m:=\{1,\dots,m\}$, $\mathcal{S}(\Omega^m)$ is the algebra generated by $( \lbrace 1\rbrace, \ldots, \lbrace m\rbrace )$ and $\P^m(\{k\}):=\P(A_k)$, $k=1,\dots,m$. \end{definition} } \begin{proposition}\label{prop:correspLdiscrete} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA}. Let $(\Omega,\bar\mathfrak B, \P)$ and $(\Omega^m, \mathcal{S}(\Omega^m), \P^m)$ be as in Definition \ref{def:discreteL}. Then the Lagrangian problems $\pL_{\bar \mathfrak B}:=\pL(\Omega,\bar\mathfrak B,\P)$ and $\pL^m:=\pL(\Omega^m, \mathcal{S}(\Omega^m), \P^m)$ are equivalent in the sense of Definition \ref{def:equiv_L}. \end{proposition} \begin{proof} {Let $\psi: \Omega \to \Omega^m$ be the function given by \[ \psi(\omega)=k, \quad \text{ if } \omega\in A_k, \quad k=1,\dots,m\] and $\phi:\Omega^m \to \Omega$ defined by \[ \phi(k)=\omega_k, \quad k=1,\dots,m,\] for a fixed $\omega_k\in A_k$. We prove that the maps $\psi$ and $\phi$ satisfy} the assumptions of Proposition \ref{p:equivalence}. Measurability of the map $\psi$ follows from the fact that $\psi^{-1}(\{k\})=A_k$ for any $k=1,\dots,m$, while the measurability of $\phi$ is trivial since $\Omega^m$ is equipped with the algebra $\mathcal{S}(\Omega^m)$. Moreover, it is immediate to verify that $\psi\circ\phi=i_{\Omega^m}$, which implies \eqref{eq:strana2}. We have to verify \eqref{eq:strana1}: given $X_0\in L^p(\Omega;\mathbb{R}^d)$, we have that $X_0$ is constant on the elements $A_k$ of the partition ${\ensuremath{\mathcal P}}(\bar\mathfrak B)$, hence it easily follows that $X_0=X_0\circ\phi\circ\psi$. Let us verify that $\psi_\sharp\P=\P^m$: for any $k=1,\dots,m$, we have $(\psi_\sharp\P)(\{k\})=\P(\psi^{-1}(\{k\}))=\P(A_k)=\P^m(\{k\})$.
Finally, we verify that $\phi_\sharp\P^m=\P$: for any measurable function $g:\Omega\to\mathbb{R}$, recalling that $g$ is piecewise constant on the elements of ${\ensuremath{\mathcal P}}(\bar\mathfrak B)$, we have \begin{align*} \int_\Omega g(\omega)\,\d(\phi_\sharp\P^m)(\omega)&=\int_{\Omega^m}g(\phi(k))\,\d\P^m(k)=\sum_{k=1}^m g(\omega_k)\P(A_k)\\ &=\int_\Omega g(\omega)\,\d\P(\omega). \end{align*} \end{proof} We recall the notation ${\ensuremath{\mathcal B}}_{[0,T]}$ for the Borel $\sigma$-algebra on $[0,T]$. \begin{definition}[Finite Approximation Property]\label{approx_prop} Let $(\Omega,\mathfrak B, \P)$ be a probability space. We say that the family of finite algebras $\mathfrak B^n \subset \mathfrak B$, $n\in\mathbb{N}$, satisfies the \emph{finite approximation property} if for any Banach space $E$ and any $g \in L_\P^1(\Omega; E)$, there exists a sequence $g^n: \Omega \to E$ such that \begin{enumerate}[label=\textbf{(\roman*)}] \item \label{FAP1} $g^n$ is $\mathfrak B^n$-measurable for any $n\in\mathbb{N}$; \item \label{FAP2} $g^n(\Omega) \subset \mathrm{\overline{co}}\left(g(\Omega)\right)$, where $\mathrm{\overline{co}}\left(g(\Omega)\right)$ denotes the closed convex hull of $g(\Omega)$; \item \label{FAP3} $\left\| g^n - g \right\|_{L_\P^1(\Omega;E)} \to 0$, as $n \to +\infty$; \item \label{FAP4} {if $G:[0,T]\times\Omega\to E$ is $({\ensuremath{\mathcal B}}_{[0,T]}\otimes\mathfrak B)$-measurable and $g_t(\cdot):=G(t,\cdot)\in L_\P^1(\Omega; E)$ for any $t\in[0,T]$, then the maps $G^n:[0,T]\times\Omega\to E$ defined by $G^n(t,\omega):=g^n_t(\omega)$, where $g^n_t$ is a sequence associated to $g_t$ satisfying items \ref{FAP1},\ref{FAP2},\ref{FAP3}, are $({\ensuremath{\mathcal B}}_{[0,T]}\otimes\mathfrak B^n)$-measurable for any $n\in\mathbb{N}$.} \end{enumerate} \end{definition} \begin{proposition}\label{prop:FAP} Let $(\Omega,\mathfrak B,\P)$ be a standard Borel space.
\begin{enumerate} \item Then there exists a family of finite algebras $\mathfrak B^n \subset \mathfrak B$, $n \in \mathbb{N}$, satisfying the finite approximation property of Definition \ref{approx_prop}. \item If $\P$ is without atoms, then there exists a family $\mathfrak B^n \subset \mathfrak B$, $n \in \mathbb{N}$, satisfying the finite approximation property of Definition \ref{approx_prop} such that the associated minimal partition ${\ensuremath{\mathcal P}}(\mathfrak B^n)= \{A_k^n:k=1,\ldots,n\}$ contains exactly $n$ elements and \begin{equation}\label{eq:equipart} \P(A^n_k)=\frac1n, \qquad k=1,\ldots,n. \end{equation} \end{enumerate} \end{proposition} The proof of Proposition \ref{prop:FAP} is postponed to Appendix \ref{app_FP}. Results similar to item (1) of Proposition \ref{prop:FAP} can be found in \cite[Theorem 6.1.12]{stroock2011probability}, where a martingale approach is employed. \begin{theorem}[Approximation by piecewise constant controls for \pL]\label{prop:RLntoRL} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} with $U$ a convex compact subset of a separable Banach space $V$. Let $(\Omega,\mathfrak B,\P)$ be a probability space and assume that there exists $\{\mathfrak B^n\}_{n\in\mathbb{N}}$ satisfying the finite approximation property of Definition \ref{approx_prop}. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$ and $(X,u)\in\mathcal A_{\pL}(X_0)$. If $\{X^n_0\}_{n\in\mathbb{N}}\subset L^p(\Omega;\mathbb{R}^d)$ satisfies $\|X^n_0-X_0\|_{L^p(\Omega;\mathbb{R}^d)}\to0$, then there exists a sequence $(X^n,u^n)\in\mathcal A_{\pL}(X^n_0)$ such that \begin{enumerate} \item $u^n$ is $({\ensuremath{\mathcal B}}_{[0,T]}\otimes\mathfrak B^n)$-measurable; \item $u^n\to u$ in $({\ensuremath{\mathcal L}}_T\otimes\P)$-measure, as $n\to+\infty$; \item $\sup_{t\in[0,T]}\|X^n_t-X_t\|_{L^p(\Omega;\mathbb{R}^d)}\to 0$, as $n\to+\infty$; \item $J_{\pL}(X^n,u^n)\to J_{\pL}(X,u)$, as $n\to+\infty$.
\end{enumerate} Moreover, if $X_0^n$ is $\mathfrak B^{n}$-measurable, $n\in\mathbb{N}$, then $X_t^n$ is $\mathfrak B^{n}$-measurable for any $t\in[0,T]$. \end{theorem} \begin{proof} For any $t \in [0,T]$, we denote with $u_t: \Omega \to U$ the measurable control function $u$ at time $t$. For any $n\in\mathbb{N}$ and $t\in[0,T]$ let $u^n_t:\Omega\to U$ be the $\mathfrak B^n$-measurable approximation of $u_t$ given by Definition \ref{approx_prop}. Since $U$ is convex and compact, property \ref{FAP2} of Definition \ref{approx_prop} implies that $u^n_t(\Omega) \subset U$. Defining $u^n(t,\omega):=u^n_t(\omega)$, thanks to property \ref{FAP4} of Definition \ref{approx_prop}, we have that $u^n$ is $({\ensuremath{\mathcal B}}_{[0,T]}\otimes\mathfrak B^n)$-measurable. By Proposition \ref{prop:existL} we have the existence of $X^n$ with $(X^n,u^n)\in\mathcal A_{\pL}(X^n_0)$. From the compactness of $U$ and the dominated convergence theorem it follows that $\|u^n-u\|_{L^1([0,T]\times\Omega;V)}\to 0$, as $n\to+\infty$. Consequently, (2) holds. Properties (3) and (4) follow from Proposition \ref{prop:costRLntoRL}. \end{proof} {In the following, we reformulate the approximation result of Theorem \ref{prop:RLntoRL} with the language of particles.} Let $(\Omega,\mathfrak B, \P)$ be a probability space and $\mathfrak B^n$ a finite algebra, $n \in \mathbb{N}$. Denote with ${\ensuremath{\mathcal P}}(\mathfrak B^n):= \{ A_k^n: k = 1, \ldots, k(n) \}$ the associated unique minimal partition and define $(\Omega^{k(n)}, \mathcal{S}(\Omega^{k(n)}), \P^{k(n)})$ by \begin{equation}\label{eq:omega_kn} \begin{aligned} &\Omega^{k(n)}:=\{1,\dots,k(n)\}, \quad \mathcal{S}(\Omega^{k(n)}):= \sigma ( \lbrace 1\rbrace, \ldots, \lbrace k(n)\rbrace) \\ &\P^{k(n)}(\{k\}):=\P(A^{n}_k), \quad k=1,\dots,{k(n)}.
\end{aligned} \end{equation} In order to approximate trajectories, controls and costs of a Lagrangian problem $\pL=\pL(\Omega,\mathfrak B,\P)$ with the respective quantities in $\pL^{k(n)}=\pL(\Omega^{k(n)},\mathcal{S}(\Omega^{k(n)}),\P^{k(n)})$ we introduce, for every $n \in \mathbb{N}$, the maps $\psi^n, \phi^n$ and ${\ensuremath{\mathcal K}}^n$. This is necessary because the trajectories are not defined on the same space. For every $n \in \mathbb{N}$, we denote with $\psi^n, \phi^n$ the maps \begin{equation}\label{def:psinphin} \begin{split} &\psi^n: \Omega \to \Omega^{k(n)},\quad\psi^n(\omega)=k, \quad \text{ if } \omega\in A^n_k, \quad k=1,\dots,k(n);\\ &\phi^n:\Omega^{k(n)} \to \Omega,\quad \phi^n(k)=\omega_{k}^n, \quad k=1,\dots,k(n), \quad \text{for a fixed }\omega_{k}^n\in A^n_k. \end{split} \end{equation} Moreover, for every $n \in \mathbb{N}$, we introduce the map ${\ensuremath{\mathcal K}}^n: L^p(\Omega^{k(n)}; C([0,T];\mathbb{R}^d)) \times \mathrm{M}([0,T] \times \Omega^{k(n)};U) \to L^p((\Omega, \mathfrak B^n, \P); C([0,T];\mathbb{R}^d)) \times \mathrm{M}([0,T] \times \Omega;U)$ given by \begin{equation}\label{def:Kn} \begin{aligned} &{\ensuremath{\mathcal K}}^n( Y, v):= (Y\circ\psi^n, \hat v), \\ &\text{ where} \qquad \hat v(t,\omega):= v(t,\psi^n(\omega)), \quad \forall \, (t,\omega)\in[0,T]\times\Omega.\\ \end{aligned} \end{equation} \begin{proposition}\label{prop:approx_lagrangian_n} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} with $U$ a convex compact subset of a separable Banach space $V$. Let $(\Omega,\mathfrak B,\P)$ be a probability space and assume that there exists $\{\mathfrak B^n\}_{n\in\mathbb{N}}$ satisfying the finite approximation property of Definition \ref{approx_prop}. For every $n \in \mathbb{N}$, let $(\Omega^{k(n)},\mathcal{S}(\Omega^{k(n)}),\P^{k(n)})$ be as in \eqref{eq:omega_kn}. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$ and $(X,u)\in\mathcal A_{\pL}(X_0)$.
If $ Y^n_0 \in L^p(\Omega^{k(n)};\mathbb{R}^d)$, $n \in \mathbb{N}$, satisfies \[\lim_{n \to +\infty} \|Y^n_0\circ\psi^n-X_0\|_{L^p(\Omega;\mathbb{R}^d)} = 0,\] then there exists a sequence $( Y^n,v^n)\in\mathcal A_{\pL^{k(n)}}(Y^n_0)$ such that \begin{enumerate} \item ${\ensuremath{\mathcal K}}^n(Y^n, v^n) \to (X,u)$ in $C([0,T];L^p(\Omega;\mathbb{R}^d)) \times L^1([0,T] \times \Omega;V)$, as $n\to+\infty$; \item $J_{\pL^{k(n)}}(Y^n, v^n)\to J_{\pL}(X,u)$, as $n\to+\infty$. \end{enumerate} \end{proposition} \begin{proof} For every $n \in \mathbb{N}$, by Proposition \ref{prop:correspLdiscrete} it holds that $\pL_{\mathfrak B^n} \sim \pL^{k(n)}$ in the sense of Definition \ref{def:equiv_L}. Thanks to Proposition \ref{prop:equivSpaces} we have that $X \in \mathrm{AC}^p([0,T];L^p(\Omega;\mathbb{R}^d))$, hence we conclude by applying Theorem \ref{prop:RLntoRL}. Since $U$ is compact, recall that the convergence $u^n \to u$ in $L^1([0,T] \times \Omega;V)$ is equivalent to the convergence in $({\ensuremath{\mathcal L}}_T \otimes \P)$-measure. \end{proof} \begin{remark}\label{rmk:noequiprob} If $(\Omega,\mathfrak B,\P)$ is a standard Borel space, the existence of a sequence of finite algebras $\mathfrak B^n$ is guaranteed by item (1) in Proposition \ref{prop:FAP}. Moreover, if $\P$ is without atoms, it is possible to choose $\mathfrak B^n$ such that $\# {\ensuremath{\mathcal P}}(\mathfrak B^n) = n$ and satisfying the property \eqref{eq:equipart} given in item (2) in Proposition \ref{prop:FAP}. \noindent Notice that the assumption that $\P$ has no atoms is necessary to get a sequence of $\mathfrak B^n$ with the property \eqref{eq:equipart}. Indeed, if there exists $\omega_0\in\Omega$ such that $\P(\{\omega_0\})=\alpha>0$, then for every $n>1/\alpha$ the property \eqref{eq:equipart} fails: the element of ${\ensuremath{\mathcal P}}(\mathfrak B^n)$ containing $\omega_0$ has measure at least $\alpha>1/n$. \end{remark} \subsection{Approximation by continuous controls and trajectories} The objective of the subsection is twofold.
In Proposition \ref{lemmaA}, under continuity assumptions on both the initial datum and the control, we exhibit a stability result for a Lagrangian problem $\pL(\Omega, \mathfrak B, \P)$ when $\P$ is approximated by a sequence of probability measures $\P^n$. Then, in Proposition \ref{lemmaB} we approximate admissible controls with \emph{continuous} controls so that the associated trajectories are continuous as well and the associated costs converge. These results are useful to prove the equivalence between Lagrangian and Eulerian optimal control problems (see the proof of Theorem \ref{prop:K>L}).\\ Throughout the section, we assume that \begin{equation}\label{ass:topo} \text{$(\Omega,\mathfrak B,\P)$ standard Borel space, $\tau$ a Polish topology on $\Omega$ such that $\mathfrak B = {\ensuremath{\mathcal B}}_{(\Omega,\tau)}$.} \end{equation} \noindent In the following regularity result, we prove the existence of a continuous trajectory for the Lagrangian dynamics whenever both the initial datum and the control are continuous. \begin{lemma}[Continuity]\label{lemmaAA} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and let $(\Omega,\mathfrak B,\P)$ satisfy \eqref{ass:topo}. Let $\tilde X_0\in C(\Omega;\mathbb{R}^d)$ be such that $\tilde X_0\in L^p(\Omega;\mathbb{R}^d)$ and $u\in C([0,T]\times\Omega;U)$. If $(X,u) \in {\ensuremath{\mathcal A}}_{\pL}(\tilde X_0)$ and $\mu_t:=(X_t)_\sharp\P$, then there exists a unique $\tilde X \in C([0,T] \times \Omega; \mathbb{R}^d)$ satisfying for any $\omega \in \Omega$ \begin{equation}\label{eq:systemLomega} \begin{cases} \dot {\tilde X}_t(\omega)=f(\tilde X_t(\omega),u_t(\omega),\mu_t), & \forall \ t\in (0,T)\\ \tilde X_{|t=0}(\omega)=\tilde X_0(\omega). & \end{cases} \end{equation} Moreover, $\tilde X_t(\omega) = X_t(\omega)$ for every $t \in [0,T]$ and $\P$-a.e. $\omega \in \Omega$.
\end{lemma} \begin{proof} For any $\omega\in\Omega$, there exists a unique solution $\tilde X(\omega)\in C^1([0,T];\mathbb{R}^d)$ of \eqref{eq:systemLomega} thanks to the Lipschitz assumptions on the vector field $f$. Since $u, (\mu_t)_{t \in [0,T]}$ and $\tilde X_0$ are fixed, the solutions of \eqref{eq:systemL} and \eqref{eq:systemLomega} coincide, hence $\tilde X_t(\omega)= X_t(\omega)$ for any $t\in[0,T]$, for $\P$-a.e. $\omega\in\Omega$. Denoting by $\tilde X :[0,T] \times \Omega\to \mathbb{R}^d$ the function $\tilde X(t,\omega)=\tilde X_t(\omega)$, we prove the continuity of $\tilde X$. We fix $(t,\omega)\in[0,T]\times\Omega$, and a sequence $(t_n,\omega_n)\in[0,T]\times\Omega$ converging to $(t,\omega)$ as $n\to+\infty$. By the triangle inequality, \begin{equation*} | \tilde X_{t_n}(\omega_n) - \tilde X_{t}(\omega)| \leq | \tilde X_{t_n}(\omega_n) - \tilde X_{t_n}(\omega)| +| \tilde X_{t_n}(\omega) - \tilde X_{t}(\omega)|. \end{equation*} The second term is estimated by \[ | \tilde X_{t_n}(\omega) - \tilde X_{t}(\omega)| \leq \int_t^{t_n} \left| f(\tilde X_s(\omega),u_s(\omega),\mu_s) \right| \d s . \] Concerning the first term, \begin{equation*} \begin{split} | \tilde X_{t_n}(\omega_n) - \tilde X_{t_n}(\omega)| &\leq |\tilde X_0(\omega_n) - \tilde X_0(\omega)| \\ &+ \int_0^{t_n} \left| f(\tilde X_s(\omega_n),u_s(\omega_n),\mu_s) - f(\tilde X_s(\omega),u_s(\omega_n),\mu_s) \right| \d s \\ &+ \int_0^{t_n} \left| f(\tilde X_s(\omega),u_s(\omega_n),\mu_s) - f(\tilde X_s(\omega),u_s(\omega),\mu_s) \right| \d s \\ &\leq |\tilde X_0(\omega_n) - \tilde X_0(\omega)| + L \int_0^{t_n} | \tilde X_s(\omega_n) - \tilde X_s(\omega) | \,\d s \\ &+ \int_0^T \left| f(\tilde X_s(\omega),u_s(\omega_n),\mu_s) - f(\tilde X_s(\omega),u_s(\omega),\mu_s) \right| \d s.
\end{split} \end{equation*} By the Gronwall lemma we have \begin{equation*} \begin{split} |\tilde X_{t_n}(\omega_n) - \tilde X_{t_n}(\omega)| \leq e^{LT}\left(|\tilde X_0(\omega_n) - \tilde X_0(\omega)| + \int_0^T R_s(\omega_n,\omega)\,\d s\,\right), \end{split} \end{equation*} where $R_s(\omega_n,\omega):=\left| f(\tilde X_s(\omega),u_s(\omega_n),\mu_s) - f(\tilde X_s(\omega),u_s(\omega),\mu_s) \right|$. Collecting the previous inequalities we get \begin{equation*} \begin{split} | \tilde X_{t_n}(\omega_n) - \tilde X_{t}(\omega)| & \leq e^{LT}\left(|\tilde X_0(\omega_n) - \tilde X_0(\omega)| + \int_0^T R_s(\omega_n,\omega)\,\d s\,\right) \\ & + \int_t^{t_n} \left| f(\tilde X_s(\omega),u_s(\omega),\mu_s) \right| \d s. \end{split} \end{equation*} By the continuity of $\tilde X_0$, the growth property \eqref{f:growth} and the continuity of $u$ we can pass to the limit in the right-hand side and we conclude. \end{proof} \begin{proposition}[Stability for $\P$]\label{lemmaA} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and let $(\Omega,\mathfrak B,\P)$ satisfy \eqref{ass:topo}. Let $\P^n,\P \in \mathscr{P}(\Omega)$, $n \in \mathbb{N}$, be such that $\P^n \to \P$ weakly. Let $\tilde X_0\in C(\Omega;\mathbb{R}^d)$ and $u\in C([0,T]\times\Omega;U)$ be such that $\tilde X_0\in \left[\bigcap_{n\in\mathbb{N}}L^p_{\P^n}(\Omega;\mathbb{R}^d)\right]\cap L^p_\P(\Omega;\mathbb{R}^d)$ and \begin{equation}\label{eq:conv_X0} \|\tilde X_0\|_{L^p_{\P^n}(\Omega;\mathbb{R}^d)}\to\|\tilde X_0\|_{L^p_{\P}(\Omega;\mathbb{R}^d)} \quad \text{ as }\ n\to+\infty. \end{equation} We denote by $\pL^{n}:= \pL(\Omega,\mathfrak B, \P^n)$ and $\pL:= \pL(\Omega,\mathfrak B, \P)$. Let $(X,u) \in {\ensuremath{\mathcal A}}_{\pL}(\tilde X_0)$ and $(X^n,u) \in {\ensuremath{\mathcal A}}_{\pL^{n}}(\tilde X_0)$ and denote with $\tilde X, \tilde X^n \in C([0,T] \times \Omega; \mathbb{R}^d)$ the corresponding solutions given in Lemma \ref{lemmaAA} associated with $\P$ and $\P^n$, respectively.
Then \begin{equation}\label{unifconvX} \sup_{(t,\omega)\in [0,T] \times \Omega}|\tilde X^n_t(\omega)-\tilde X_t(\omega)|\to 0, \qquad \text{ as } n \to +\infty, \end{equation} \begin{equation}\label{convJcont} J_{\pL^{n}}(X^n,u) \longrightarrow J_{\pL}(X,u), \qquad \text{ as } n \to +\infty. \end{equation} \end{proposition} \begin{proof} We denote $\mu^n_t:= (X^n_t)_\sharp \P^n$ and $\mu_t:= (X_t)_\sharp \P$. Since $\tilde X_0:\Omega\to\mathbb{R}^d$ is continuous, the weak convergence $\P^n \to \P$ implies that $\mu_0^n=(\tilde X_0)_\sharp\P^n\to \mu_0=(\tilde X_0)_\sharp\P$ weakly and \eqref{eq:conv_X0} guarantees $\mathrm{m}_p(\mu_0^n)\to\mathrm{m}_p(\mu_0)$. Consequently, by Proposition \ref{prop:wassconv}, it holds $W_p(\mu_0^n,\mu_0)\to 0$ as $n\to+\infty$ and there exists an admissible $\psi:[0,+\infty)\to[0,+\infty)$, according to Definition \ref{def:admphi}, such that \begin{equation}\label{mompsi} \sup_{n \in \mathbb{N}}\int_{\mathbb{R}^d} \psi(|x|^p)\,\d\mu_0^n(x) < +\infty. \end{equation} Using the same argument as in the proof of Lemma \ref{lemma:cpt}, thanks to the estimates \eqref{boundXtomega} and \eqref{LipXt} there exist $\tilde \mu \in C([0,T];\mathscr{P}_p(\mathbb{R}^d))$ and a (not relabelled) subsequence such that \begin{equation}\label{convmutilde} \lim_{n\to+\infty}\sup_{s \in [0,T]}W_p(\mu^n_s, \tilde \mu_s) =0. \end{equation} We define $\bar X\in C([0,T]\times\Omega;\mathbb{R}^d)$ through the system \eqref{eq:systemLomega} using $\tilde\mu_t$ instead of $\mu_t$, i.e., for any $\omega\in\Omega$, $t\mapsto \bar X(t,\omega)$ is the solution of the problem \begin{equation}\label{eq:systemLomegabar} \begin{cases} \dot{\bar{X}}_t(\omega)=f(\bar X_t(\omega),u_t(\omega),\tilde\mu_t), &\forall \, t\in (0,T)\\ \bar X_{|t=0}(\omega)=\tilde X_0(\omega). & \end{cases} \end{equation} We show that \begin{equation}\label{unifconvbarX} \sup_{(t,\omega)\in [0,T] \times \Omega}|\tilde X^n_t(\omega)-\bar X_t(\omega)|\to 0, \qquad \text{ as } n \to +\infty.
\end{equation} Indeed, for any $\omega\in\Omega$ and $t\in[0,T]$, \begin{equation*} \begin{split} | \tilde X_t^n(\omega) - \bar X_t(\omega)| &\leq \int_0^t \left| f(\tilde X_s^n(\omega),u_s(\omega),\mu^n_s) - f(\bar X_s(\omega),u_s(\omega),\tilde\mu_s) \right| \d s \\ &\leq L \int_0^t \left( | \tilde X_s^n(\omega) -\bar X_s(\omega) | + W_p(\mu^n_s, \tilde \mu_s) \right)\,\d s, \end{split} \end{equation*} and, by the Gronwall inequality, we obtain \begin{equation*} |\tilde X_t^n(\omega) - \bar X_t(\omega)| \leq LT e^{LT} \sup_{s \in [0,T]}W_p(\mu^n_s, \tilde \mu_s), \end{equation*} which, by \eqref{convmutilde}, proves \eqref{unifconvbarX}.\\ We have to show that $\bar X=\tilde X$. We first prove that $\tilde\mu_t=(\bar X_t)_\sharp\P$. By the uniform convergence \eqref{unifconvbarX}, the continuity of $\bar X_t$ and the weak convergence $\P^n\to\P$, we obtain that (see \cite[Lemma 5.2.1]{ambrosio2008gradient}) $$ \int_\Omega \phi(\tilde X_t^n(\omega))\,\d\P^n(\omega) \to \int_\Omega \phi(\bar X_t(\omega))\,\d\P(\omega), \qquad \forall\, \phi \in {C}_b(\mathbb{R}^d;\mathbb{R}).$$ Since $\displaystyle{\int_\Omega \phi(\tilde X_t^n(\omega))\,\d\P^n(\omega) = \int_{\mathbb{R}^d} \phi(x)\,\d\mu_t^n(x)}$, by the uniqueness of the weak limit we obtain that $\tilde\mu_t=(\bar X_t)_\sharp\P$. Then, $\bar X_t(\omega)$ satisfies \begin{equation}\label{eq:systemLomegabar2} \begin{cases} \dot{\bar X}_t(\omega)=f(\bar X_t(\omega),u_t(\omega),(\bar X_t)_\sharp\P), &\forall \, t\in (0,T)\\ \bar X_{|t=0}(\omega)=\tilde X_0(\omega). & \end{cases} \end{equation} By the uniqueness result of Proposition \ref{prop:existL} and the definition of the Lagrangian problem, we obtain that $\bar X_t=X_t$ in $L^p_\P(\Omega;\mathbb{R}^d)$ for any $t\in[0,T]$. In particular we have that $\tilde\mu_t=\mu_t$ for any $t\in[0,T]$.
It follows that the systems \eqref{eq:systemLomegabar} and \eqref{eq:systemLomega} are the same, and then $\bar X_t(\omega)=\tilde X_t(\omega)$ for any $(t,\omega)\in[0,T]\times\Omega$. Finally, the convergence \eqref{unifconvX} follows by \eqref{unifconvbarX}, because the limit $\tilde \mu$ given by the compactness is uniquely determined and it is independent of the subsequence. For what concerns \eqref{convJcont}, we first observe that \begin{equation*} J_{\pL^{n}}(X^n,u)=J_{\pL^{n}}(\tilde X^n,u), \qquad J_{\pL}(X,u)=J_{\pL}(\tilde X,u). \end{equation*} We write the running cost as \begin{equation*} \begin{split} & \int_{\Omega}\int_0^T {\ensuremath{\mathcal C}}(\tilde X^n_t(\omega), u_t(\omega),\mu^n_t) \, \d t \,\d\P^n(\omega) \\ &= \int_{\Omega} \int_0^T \left({\ensuremath{\mathcal C}}(\tilde X^n_t(\omega), u_t(\omega),\mu^n_t) - {\ensuremath{\mathcal C}}(\tilde X_t(\omega), u_t(\omega),\mu_t) \right) \,\d t \,\d\P^n(\omega) \\ &+ \int_{\Omega}\int_0^T {\ensuremath{\mathcal C}}(\tilde X_t(\omega), u_t(\omega),\mu_t) \,\d t\, \d\P^n(\omega). \end{split} \end{equation*} By \eqref{unifconvX} and the continuity of ${\ensuremath{\mathcal C}}$, we have that $ \left({\ensuremath{\mathcal C}}(\tilde X^n_t(\omega), u_t(\omega),\mu^n_t) - {\ensuremath{\mathcal C}}(\tilde X_t(\omega), u_t(\omega),\mu_t) \right)\to 0$ uniformly on compact sets of $[0,T]\times\Omega$. By the weak convergence of $\P^n$ towards $\P$ we conclude that \begin{equation*} \begin{split} \lim_{n\to +\infty } \int_{\Omega} \int_0^T \left({\ensuremath{\mathcal C}}(\tilde X^n_t(\omega), u_t(\omega),\mu^n_t) - {\ensuremath{\mathcal C}}(\tilde X_t(\omega), u_t(\omega),\mu_t) \right) \,\d t \,\d\P^n(\omega) = 0. 
\end{split} \end{equation*} We have to prove that \begin{equation}\label{eq:lastlimit} \lim_{n \to +\infty}\int_{\Omega} \int_0^T {\ensuremath{\mathcal C}}(\tilde X_t(\omega), u_t(\omega),\mu_t) \d t \,\d\P^n(\omega) = \int_{\Omega} \int_0^T {\ensuremath{\mathcal C}}(\tilde X_t(\omega), u_t(\omega),\mu_t) \d t \,\d\P(\omega). \end{equation} By \eqref{mompsi}, which can be rewritten as \begin{equation*} \sup_{n \in \mathbb{N}}\int_{\Omega} \psi(|\tilde X_0(\omega)|^p)\,\d \P^n(\omega) < +\infty, \end{equation*} by estimate \eqref{boundXtomega} and the doubling property of $\psi$ we get \begin{equation*} \sup_{t\in[0,T]}\sup_{n \in \mathbb{N}}\int_{\Omega} \psi(|\tilde X_t(\omega)|^p)\,\d \P^n(\omega) < +\infty. \end{equation*} By the growth condition \eqref{eq:growthC} and the doubling property of $\psi$ we obtain that the map $(t,\omega) \mapsto {\ensuremath{\mathcal C}}(\tilde X_t(\omega), u_t(\omega),\mu_t)$ is uniformly integrable w.r.t. $\{{\ensuremath{\mathcal L}}_T\otimes\P^n\}_n$. Since this map is also continuous, by \cite[Lemma~5.1.7]{ambrosio2008gradient} we obtain \eqref{eq:lastlimit}. Analogously one proves that \begin{equation*} \lim_{n\to+\infty} \int_\Omega \mathcal C_T(\tilde X^n_T(\omega),\mu^n_T)\,\d\P^n(\omega) = \int_\Omega \mathcal C_T(\tilde X_T(\omega),\mu_T)\,\d\P(\omega). \end{equation*} \end{proof} \begin{proposition}[Approximation by continuous controls]\label{lemmaB} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} with $U$ a convex compact subset of a separable Banach space $V$, and $(\Omega,\mathfrak B,\P)$ satisfying \eqref{ass:topo}. Let $\tilde X_0\in C(\Omega;\mathbb{R}^d)$ and $u \in \mathrm{M}([0,T] \times \Omega;U)$ such that $\tilde X_0\in L^p(\Omega;\mathbb{R}^d)$. 
If $(X,u) \in {\ensuremath{\mathcal A}}_{\pL}(\tilde X_0)$, then there exists a sequence $(\tilde X^n,u^n) \in {\ensuremath{\mathcal A}}_{\pL}(\tilde X_0)$ such that \begin{enumerate} \item $u^n \in C([0,T]\times\Omega; U)$ and $\tilde X^n \in C([0,T] \times \Omega; \mathbb{R}^d)$ for any $n \in \mathbb{N}$; \item $u^n(t,\omega) \to u(t,\omega)$ for $({\ensuremath{\mathcal L}}_T\otimes\P)$-a.e. $(t,\omega)\in[0,T]\times\Omega$; \item $\,\lim_{n \to +\infty}\sup_{t \in [0,T]} |\tilde X^n_t(\omega) - X_t(\omega)|=0$ for $\P$-a.e. $\omega\in\Omega$,\\ $\,\lim_{n \to +\infty}\sup_{t \in [0,T]} \|\tilde X^n_t - X_t\|^p_{L^p(\Omega;\mathbb{R}^d)} = 0$; \item $J_{\pL}(\tilde X^n,u^n) \to J_{\pL}(X,u)$, as $n\to +\infty$. \end{enumerate} \end{proposition} \begin{proof} Since $u\in\mathrm{M}([0,T]\times\Omega;U)$, by Lusin's theorem applied to the space $[0,T]\times\Omega$ with the measure ${\ensuremath{\mathcal L}}_T\otimes\P$, there exists a sequence of compact subsets $A_n\subset[0,T]\times\Omega$ such that $A_n \subset A_{n+1}$, $({\ensuremath{\mathcal L}}_T\otimes\P)([0,T]\times\Omega \setminus A_n) < \frac{1}{n}$ for every $n\in\mathbb{N}$ and $u|_{A_n}: A_n \to U$ is continuous. Applying Dugundji's extension theorem \cite[Theorem~4.1]{dugundji1951extension} we can extend $u|_{A_n}$ to a continuous map $u^n: [0,T]\times\Omega \to V$ such that $u^n([0,T]\times\Omega)$ is contained in the closed convex subset $U$ of $V$. Moreover, for $({\ensuremath{\mathcal L}}_T\otimes\P)$-a.e. $(t,\omega)\in[0,T]\times\Omega$ it holds that $u^n(t,\omega) \to u(t,\omega)$, thanks to the convergence $({\ensuremath{\mathcal L}}_T\otimes\P)([0,T]\times\Omega \setminus A_n) \to 0$ as $n \to +\infty$. Thanks to Proposition \ref{prop:existL} and Lemma \ref{lemmaAA}, for any $n \in \mathbb{N}$ there exists a unique $\tilde X^n \in C([0,T] \times \Omega; \mathbb{R}^d)$ such that $(\tilde X^n,u^n) \in {\ensuremath{\mathcal A}}_\pL(\tilde X_0)$.
Defining $\mu^n_t := (\tilde X^n_t)_\sharp\P$, by Lemma \ref{lemma:cpt} there exists $\bar \mu \in C([0,T]; \mathscr{P}_p(\mathbb{R}^d))$ such that, up to subsequences, \begin{equation}\label{convmun} \sup_{t \in [0,T]}W_p(\mu^n_t, \bar \mu_t) \to 0, \qquad \text{ as } n \to +\infty. \end{equation} For every $\omega\in\Omega$ we define $\bar X(\omega)$ as the unique solution to the problem \begin{equation*} \begin{cases} \dot {\bar X}_t(\omega)=f(\bar X_t(\omega),u_t(\omega),\bar\mu_t), &\textrm{for a.e. }t\in (0,T)\\ \bar X_{|t=0}(\omega)=\tilde X_0(\omega). & \end{cases} \end{equation*} Then for any $(t,\omega) \in [0,T]\times \Omega$ it holds \begin{equation}\label{eq:est_x^n_to_X} \begin{split} | \tilde X_t^n(\omega) - \bar X_t(\omega)| &\leq \int_0^t \left| f(\tilde X^n_s(\omega),u^n_s(\omega),\mu^n_s) - f(\bar X_s(\omega),u^n_s(\omega), \bar \mu_s) \right| \d s \\ &+ \int_0^t \left| f(\bar X_s(\omega),u^n_s(\omega),\bar \mu_s) - f(\bar X_s(\omega),u_s(\omega), \bar \mu_s) \right| \d s \\ &\leq L\int_0^t \left( | \tilde X_s^n(\omega) -\bar X_s(\omega) | + W_p(\mu^n_s, \bar \mu_s) + \mathcal G_{s,\omega}(u^n_s(\omega), u_s(\omega))\right)\d s, \end{split} \end{equation} where $ \mathcal G_{s,\omega}(u^n_s(\omega), u_s(\omega)):=\left| f(\bar X_s(\omega),u^n_s(\omega),\bar \mu_s) - f(\bar X_s(\omega),u_s(\omega), \bar \mu_s) \right|$. Since by \eqref{f:growth} we have $\mathcal G_{s,\omega}(u^n_s(\omega), u_s(\omega))\le C(1+ |\bar X_s(\omega)| + m_p(\bar\mu_s))$, by the convergence in item $(2)$ we get that \begin{equation}\label{eq:convGG} \lim_{n\to+\infty} \int_0^T \mathcal G_{s,\omega}(u^n_s(\omega), u_s(\omega))\d s =0, \quad \mbox{for $\P$-a.e. }\omega\in\Omega. 
\end{equation} By the Gronwall inequality, from \eqref{eq:est_x^n_to_X} we have \begin{equation}\label{eq:est_X} \begin{split} | \tilde X_t^n(\omega) - \bar X_t(\omega)| &\leq e^{LT}L\Big( T\sup_{s\in[0,T]}W_p(\mu^n_s, \bar \mu_s) + \int_0^T \mathcal G_{s,\omega}(u^n_s(\omega), u_s(\omega))\,\d s\Big), \end{split} \end{equation} which, by \eqref{convmun} and \eqref{eq:convGG}, implies \begin{equation}\label{limittilde} \lim_{n \to +\infty}\sup_{t \in [0,T]} |\tilde X^n_t(\omega) - \bar X_t(\omega)|=0,\quad \mbox{for $\P$-a.e. }\omega\in\Omega. \end{equation} From \eqref{limittilde} we have that $\mu_t^n$ weakly converges to $(\bar X_t)_\sharp\P$ and by \eqref{convmun} it follows that $\bar\mu_t=(\bar X_t)_\sharp\P$ for any $t\in[0,T]$. By the definition of $\bar X$ and $\bar\mu_t=(\bar X_t)_\sharp\P$ and by the uniqueness result of Proposition \ref{prop:existL}, we obtain that $\bar X_t(\omega)= X_t(\omega)$ for $\P$-a.e. $\omega\in\Omega$ and for any $t\in[0,T]$. The first convergence in item $(3)$ follows from \eqref{limittilde}, while the second convergence comes from the first one and Proposition \ref{prop:estimatesL} through dominated convergence. Finally, item $(4)$ follows by the same argument as in the proof of Proposition \ref{prop:costRLntoRL}. \end{proof} \section{Relaxed Lagrangian optimal control problem} \label{sec_relaxed} In this section we define a relaxed version of the Lagrangian problem analyzed in Section \ref{sec_lagrangian}; then we study its properties and its relation with the non-relaxed one. \begin{definition}[Relaxed Lagrangian optimal control problem (\RL)]\label{def:RL} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and let $(\Omega,\mathfrak B,\P)$ be a probability space.
Given $X_0\in L^p(\Omega;\mathbb{R}^d)$, we say that $(X,\sigma)\in\mathcal A_{\RL}(X_0)$ if \begin{itemize} \item[(i)] $\sigma\in \mathrm{M}([0,T]\times\Omega;\mathscr P(U))$; \item[(ii)] $X\in L^p(\Omega;\mathrm{AC}^p([0,T];\mathbb{R}^d))$ and for $\P$-a.e. $\omega\in\Omega$, $X(\omega)$ is a solution of the following Cauchy problem \begin{equation}\label{eq:systemRL} \begin{cases} \dot X_t(\omega)=\displaystyle\int_U f(X_t(\omega),u,(X_t)_\sharp\P)\,\d\sigma_{t,\omega}(u), &\textrm{for ${\ensuremath{\mathcal L}}_T$-a.e. }t\in(0,T]\\ X_{|t=0}(\omega)=X_0(\omega), & \end{cases} \end{equation} where $X_t: \Omega \to \mathbb{R}^d$ is defined by $X_t(\omega):= X(t,\omega)$ for $\P$-a.e. $\omega \in \Omega$ and $\sigma_{t,\omega}:=\sigma(t,\omega)\in\mathscr P(U)$. \end{itemize} We refer to $(X,\sigma)\in\mathcal A_{\RL}(X_0)$ as an \emph{admissible pair}, with $X$ a \emph{trajectory} and $\sigma$ a \emph{relaxed control}. \\ We define the \emph{cost functional} $J_{\RL}: L^p(\Omega;C([0,T];\mathbb{R}^d))\times \mathrm{M}([0,T]\times\Omega;\mathscr{P}(U))\to[0,+\infty)$ by \begin{equation*} J_{\RL}(X,\sigma):=\int_{\Omega}\int_0^T\int_U \mathcal C(X_t(\omega),u,(X_t)_\sharp\P)\,\d\sigma_{t,\omega}(u)\, \d t \,\d\P(\omega) \ +\int_\Omega \mathcal C_T(X_T(\omega),(X_T)_\sharp\P)\,\d\P(\omega), \end{equation*} and the \emph{value function} $V_{\RL}:L^p(\Omega;\mathbb{R}^d)\to[0,+\infty)$ by \begin{equation}\label{eq:valueRL} V_{\RL}(X_0):=\inf\left\{J_{\RL}(X,\sigma)\,:\,(X,\sigma)\in\mathcal A_{\RL}(X_0)\right\}. \end{equation} \end{definition} \smallskip \begin{remark}\label{rem:RL-L} By Proposition \ref{prop:ConvexRL} the Relaxed Lagrangian problem $\RL$ in $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ is a particular Lagrangian convex problem $\pL'$ in the lifted space $\S' = (\mathscr{U},\mathscr{F},\mathscr{C},{\ensuremath{\mathcal C}}_T)$ defined in Definition \ref{def:relax_setting}.
In particular, the system \eqref{eq:systemRL} can be rewritten as \begin{equation*} \begin{cases} \dot X_t(\omega)=\mathscr{F}(X_t(\omega),\sigma_t(\omega),(X_t)_\sharp\P), &\textrm{for a.e. }t\in (0,T]\\ X_{|t=0}(\omega)=X_0(\omega), & \end{cases} \end{equation*} and the cost functional as \begin{equation*} J_{\RL}(X,\sigma):=\int_{\Omega} \int_0^T \mathscr{C}(X_t(\omega),\sigma(t,\omega),(X_t)_\sharp\P)\, \d t \, \d\P(\omega)+\int_\Omega \mathcal C_T(X_T(\omega),(X_T)_\sharp\P)\,\d\P(\omega). \end{equation*} As a consequence, the results proved for the Lagrangian problem $\pL$ also apply to the Relaxed Lagrangian problem $\RL$. We further point out that even in the relaxed Lagrangian setting, the existence of minimizers is not guaranteed in general (see also Remark \ref{rem:lagrangian}). We refer to Section \ref{sec:counterexample} for a detailed discussion and in particular to Remark \ref{rmk:RLex}. \end{remark} \subsection{Equivalence of $\pL$ and $\RL$. Chattering result} In this subsection we prove that the value functions for the Lagrangian and the Relaxed Lagrangian optimal control problems, set in the same parametrization space $(\Omega,\mathfrak B,\P)$ and the same system $\S$, coincide. Precisely, we aim at showing the following theorem whose proof is postponed to the end of the section. \begin{theorem} \label{cor:VL=VRL} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and $(\Omega,\mathfrak B,\P)$ be a probability space such that there exists $\{\mathfrak B^n\}_{n\in\mathbb{N}}$ satisfying the finite approximation property of Definition \ref{approx_prop}. If $X_0\in L^p(\Omega;\mathbb{R}^d)$, then $V_{\pL}(X_0)=V_{\RL}(X_0)$. \end{theorem} The proof of Theorem \ref{cor:VL=VRL} easily follows from the combination of Theorem \ref{thm:chat} and Proposition \ref{prop:RL<L} given below.
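Before entering the proofs, we illustrate the chattering mechanism on an elementary one-dimensional example; the data below are chosen only for orientation and are not used in the sequel.

```latex
\begin{remark}[An illustrative example]
Take $d=1$, $U=[-1,1]$, $f(x,u,\mu)=u$ (so that the mean-field dependence is
trivial), $X_0\equiv 0$ and the relaxed control
$\sigma_{t,\omega}\equiv\tfrac12\left(\delta_{-1}+\delta_{+1}\right)$, which is
not of the form $\delta_{u(t,\omega)}$ for any $U$-valued control $u$. Since
$\int_U u\,\d\sigma_{t,\omega}(u)=0$, the associated trajectory is $X\equiv 0$.
The piecewise-constant controls
\[
u^n(t,\omega):=(-1)^{\lfloor 2^n t/T\rfloor},\qquad t\in[0,T],
\]
alternating between $+1$ and $-1$ on a grid of mesh $T2^{-n}$, satisfy
$(i_{[0,T]},u^n(\cdot,\omega))_\sharp{\ensuremath{\mathcal L}}_T
\xrightarrow{\mathcal{Y}}\sigma_\omega$, where
$\sigma_\omega:=\sigma(t,\omega)\otimes{\ensuremath{\mathcal L}}_T$, and the
corresponding trajectories obey
$\sup_{t\in[0,T]}|X^n_t(\omega)|\le T2^{-n}$, hence they converge uniformly to
$X\equiv 0$.
\end{remark}
```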
Theorem \ref{thm:chat} is a suitable extension of the classical (in optimal control theory) \emph{chattering theorem}, which makes it possible to approximate relaxed controls by piecewise-constant controls. Notice that Theorem \ref{cor:VL=VRL} holds in particular if $(\Omega,\mathfrak B,\P)$ is a standard Borel space thanks to Proposition \ref{prop:FAP}. \medskip Let us start with the following proposition. \begin{proposition}\label{prop:RL<L} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and $(\Omega,\mathfrak B,\P)$ be a probability space. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$. If $(X,u)\in\mathcal A_{\pL}(X_0)$, then, defining $\sigma:[0,T]\times\Omega\to\mathscr P(U)$ by $\sigma(t,\omega)=\delta_{u(t,\omega)}$, we have $(X,\sigma)\in\mathcal A_{\RL}(X_0)$ and $J_{\pL}(X,u)=J_{\RL}(X,\sigma)$. In particular $V_{\RL}(X_0)\le V_{\pL}(X_0)$. \end{proposition} \begin{proof} The result follows immediately from Proposition \ref{prop:ConvexRL}. \end{proof} Recall that if $\bar \mathfrak B$ is a finite algebra on $\Omega$ and $E$ is a Banach space, a function $g:\Omega\to E$ is $\bar \mathfrak B$-measurable if and only if $g$ is constant on the elements of a partition of $\Omega$ contained in $\bar \mathfrak B$. In the following proposition, given a piecewise-constant \emph{relaxed} control, we approximate it with a sequence of piecewise-constant \emph{(non-relaxed)} controls so that the associated trajectories and costs converge. \begin{proposition}\label{p:chattering:finito} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA} and $(\Omega,\mathfrak B,\P)$ be a probability space. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$, $\bar\mathfrak B\subset \mathfrak B$ a finite algebra, $(X,\sigma)\in\mathcal A_{\RL}(X_0)$ such that $\sigma$ is $({\ensuremath{\mathcal B}}_{[0,T]}\otimes\bar\mathfrak B)$-measurable.
Then there exists a sequence $\{ (X^m,u^m) \}_{m\in \mathbb{N}} \subset {\ensuremath{\mathcal A}}_{\pL}(X_0)$ such that \begin{enumerate} \item $u^m$ are $({\ensuremath{\mathcal B}}_{[0,T]}\otimes\bar\mathfrak B)$-measurable; \item for any $\omega \in \Omega$, $(i_{[0,T]}, u^m(\cdot, \omega))_\sharp {\ensuremath{\mathcal L}}_T \xrightarrow{\mathcal{Y}} \sigma_\omega$, where $\sigma_\omega :=\sigma(t,\omega)\otimes{\ensuremath{\mathcal L}}_T\in\mathscr P([0,T]\times U)$; \item\label{item2:prop415} $\sup_{t\in[0,T]}\|X^m_t-X_t\|_{L^p(\Omega;\mathbb{R}^d)}\to 0$ as $m\to+\infty$; \item $J_{\pL}(X^m,u^m)\to J_{\RL}(X,\sigma)$, as $m\to+\infty$. \end{enumerate} Moreover, if $X_0$ is $\bar \mathfrak B$-measurable, then $X_t$, $X_t^m$ are $\bar \mathfrak B$-measurable for any $m \in \mathbb{N}$ and $t\in [0,T]$. \end{proposition} \begin{proof} We fix the minimal (w.r.t. inclusion) partition associated with the finite algebra $\bar \mathfrak B$ that we denote by ${\ensuremath{\mathcal P}}:= \lbrace A_k: k=1, \ldots, n \rbrace \subset \bar \mathfrak B$. For any $k=1, \ldots, n$ we select $\omega_k \in A_k$ and apply Lemma \ref{lemma:young} (with $\mathbb{T} = [0,T]$, $S = U$ and $\lambda = {\ensuremath{\mathcal L}}_T$) to the measure $\nu = \sigma_{\omega_k}:=\sigma(t,\omega_k)\otimes{\ensuremath{\mathcal L}}_T\in\mathscr P([0,T]\times U)$. This yields a sequence of ${\ensuremath{\mathcal B}}_{[0,T]}$-measurable functions $u^m_k:[0,T]\to U$ such that \begin{equation*} (i_{[0,T]}, u^m_k(\cdot))_\sharp {\ensuremath{\mathcal L}}_T \xrightarrow{\mathcal{Y}} \sigma_{\omega_k}. \end{equation*} Thus, we define $u^m: [0,T] \times \Omega \to U$ by setting $u^m(t,\omega):= u^m_k(t)$ if $\omega \in A_k$. By construction, the function $\omega\mapsto u^m(t,\omega)$ is constant on $A_k$, for any $k= 1, \ldots, n$. Furthermore, for any $\omega\in\Omega$ the maps $t \mapsto u^m(t,\omega)$ are ${\ensuremath{\mathcal B}}_{[0,T]}$-measurable.
The sequence of controls $u^m\in \mathrm{M}([0,T]\times\Omega;U)$ readily satisfies items ($1$) and ($2$). \smallskip Given $u^m$ constructed above, by Proposition \ref{prop:existL} there exists a unique $X^m \in L^p(\Omega;\mathrm{AC}^p([0,T];\mathbb{R}^d))$ such that $(X^m,u^m) \in {\ensuremath{\mathcal A}}_{\pL}(X_0)$. Thanks to Remark \ref{re:equiv}, we interpret $X^m\in \mathrm{AC}^p([0,T]; L^p(\Omega;\mathbb{R}^d))$ and define $ \mu^m \in \mathrm{AC}^p([0,T]; \mathscr{P}_p(\mathbb{R}^d))$ by $\mu_t^m := (X_t^m)_\sharp\P$. By Lemma \ref{lemma:cpt} there exist a (non-relabelled) subsequence $\mu^{m}$ and $\tilde \mu \in C([0,T]; \mathscr{P}_p(\mathbb{R}^d))$ such that \begin{equation}\label{eq:convW} \lim_{m\to\infty}\sup_{t\in[0,T]}W_p(\mu^{m}_t,\tilde\mu_t)=0. \end{equation} We define $g:[0,T]\times\mathbb{R}^d\times U\to \mathbb{R}^d$ by \begin{equation*} g(t, y , u) := f(y, u, \tilde \mu_t). \end{equation*} Selecting a representative $X_0$ defined for every $\omega\in\Omega$, let $Y^{m}(\omega) \in \mathrm{AC}^p([0,T]; \mathbb{R}^d)$ be the unique solution of the Cauchy problem \begin{equation} \begin{cases} \dot Y^m_t(\omega) = g(t, Y^m_t(\omega), u^{m}(t,\omega)), & \textrm{for a.e. }t\in(0,T) \\ Y^m_{|t=0}(\omega) = X_0(\omega). \end{cases} \end{equation} For any $\omega \in \Omega$, let also $Y(\omega) \in \mathrm{AC}^p([0,T]; \mathbb{R}^d)$ be the unique solution of the Cauchy problem \begin{equation}\label{eq:CPY} \begin{cases}\displaystyle \dot Y_t(\omega) = \int_U g(t,Y_t(\omega),u) \,\d \sigma_{t,\omega}(u), & \textrm{for a.e. }t\in(0,T) \\ Y_{|t=0}(\omega) = X_0(\omega). \end{cases} \end{equation} Hence, by item $(2)$ and assumptions \eqref{eq:Lipf} and \eqref{f:growth} we can apply Lemma \ref{lemma:youngODE} to obtain \begin{equation}\label{eq:convY} \lim_{m \to + \infty}\sup_{t \in [0,T]} \left|Y^{m}_t(\omega) - Y_t(\omega)\right| = 0, \qquad \forall\,\omega\in\Omega.
\end{equation} Since $(X^m,u^m) \in {\ensuremath{\mathcal A}}_{\pL}(X_0)$, by definition of the Lagrangian problem, for $\P$-a.e. $\omega\in\Omega$, $X^m(\omega) \in \mathrm{AC}^p([0,T];\mathbb{R}^d)$ and \begin{equation}\label{eq:X^m} \begin{cases} \dot X^{m}_t(\omega) = f(X^{m}_t(\omega), u^{m}_t(\omega),\mu^m_t), \quad \textrm{for a.e. }t\in(0,T) & \\ X^{m}_{|t=0}(\omega) = X_0(\omega). \end{cases} \end{equation} Then \begin{equation*} \begin{split} \left| Y^{m}_t(\omega) - X^{m}_t(\omega) \right| &\leq \int_0^t |f(Y^{m}_s(\omega), u^{m}(s,\omega),\tilde\mu_s) - f(X^{m}_s(\omega), u^{m}(s,\omega),\mu^m_s) | \,\d s \\ &\leq L \int_0^t \left(\left| Y^{m}_s(\omega) - X^{m}_s(\omega) \right| + W_p(\tilde \mu_s, \mu_s^{m})\right) \,\d s \\ &\leq L \int_0^t \left| Y^{m}_s(\omega) - X^{m}_s(\omega) \right| \,\d s +LT\sup_{s\in[0,T]} W_p(\tilde \mu_s, \mu_s^{m}). \end{split} \end{equation*} By the Gronwall inequality we get \begin{equation}\label{conv:yx} \sup_{t \in [0,T]}\left| Y^{m}_t(\omega) - X^{m}_t(\omega) \right| \leq LTe^{LT}\sup_{s \in [0,T]}W_p(\tilde\mu_s,\mu_s^{m}). \end{equation} From \eqref{conv:yx}, \eqref{eq:convW} and \eqref{eq:convY} it follows that \begin{equation}\label{eq:convX} \lim_{m \to + \infty}\sup_{t \in [0,T]} \left|X^{m}_t(\omega) - Y_t(\omega)\right| = 0, \qquad \text{for }\P\text{-a.e. }\omega\in\Omega. \end{equation} By \eqref{eq:convX} it follows that $\mu^{m}_t = (X^{m}_t)_\sharp\P \to (Y_t)_\sharp\P$ weakly for any $t \in [0,T]$, and then, by \eqref{eq:convW}, it holds that $\tilde\mu_t = (Y_t)_\sharp\P$ for any $t \in [0,T]$. Thus, thanks to \eqref{eq:CPY} and the definition of $g$, we conclude that $(Y,\sigma) \in \mathcal A_{\RL}(X_0)$. Since $(X,\sigma) \in \mathcal A_{\RL}(X_0)$, by the uniqueness result of Proposition \ref{prop:existL} we have that $Y = X$ and \begin{equation}\label{eq:convXunif} \lim_{m \to + \infty}\sup_{t \in [0,T]} \left|X^{m}_t(\omega) - X_t(\omega)\right| = 0, \qquad \text{for }\P\text{-a.e. }\omega\in\Omega.
\end{equation} Finally, to prove item (3) it is enough to observe that \begin{equation}\label{eq:convLp} \sup_{t \in [0,T]} \left\|X^{m}_t-X_t\right\|^p_{L^p(\Omega;\mathbb{R}^d)} \le \int_{\Omega}\sup_{t \in [0,T]} \left|X^{m}_t(\omega)-X_t(\omega)\right|^p\,\d\P(\omega). \end{equation} By \eqref{eq:convX} and \eqref{boundXtomega}, we can pass to the limit in \eqref{eq:convLp} by dominated convergence. \smallskip To prove item ($4$) we write \begin{equation}\label{eq:2stabC} \begin{split} &\left| \int_\Omega \int_0^T {\ensuremath{\mathcal C}}(X^m_t(\omega),u^m_t(\omega),\mu^m_t)\,\d t \, \d\P(\omega) -\int_\Omega \int_0^T \int_U {\ensuremath{\mathcal C}}(X_t(\omega),u,\mu_t)\,\d\sigma_{t,\omega}(u)\,\d t \, \d\P(\omega)\right|\\ &\leq \left| \int_\Omega \left(\int_0^T {\ensuremath{\mathcal C}}(X^m_t(\omega),u^m_t(\omega),\mu^m_t) -{\ensuremath{\mathcal C}} (X_t(\omega),u^m_t(\omega),\mu_t)\,\d t\right)\,\d\P(\omega) \right|\\ &+\left| \int_\Omega \left(\int_0^T {\ensuremath{\mathcal C}}(X_t(\omega),u^m_t(\omega),\mu_t)\,\d t -\int_{[0,T] \times U} {\ensuremath{\mathcal C}}(X_t(\omega),u,\mu_t)\,\d\sigma_{\omega}(t,u)\right) \,\d\P(\omega)\right|.\\ \end{split} \end{equation} Since \eqref{eq:convXunif} holds, for $\P$-a.e. $\omega\in\Omega$, there exists a compact $K_\omega \subset \mathbb{R}^d$ such that $X^m_t(\omega),X_t(\omega)\in K_\omega$ for any $m\in\mathbb{N}$ and $t\in[0,T]$. Analogously, by \eqref{eq:convW} there exists a compact ${\ensuremath{\mathcal K}} \subset \mathscr{P}_p(\mathbb{R}^d)$ such that $\mu^m_t,\mu_t\in {\ensuremath{\mathcal K}}$ for any $m\in\mathbb{N}$ and $t\in[0,T]$. By Proposition \ref{prop:wassconv} there exists an admissible $\psi$ such that \begin{equation}\label{eq:estequi} \sup_{t\in[0,T]}\sup_{m \in \mathbb{N}}\int_{\Omega} \psi(|X^m_t(\omega)|^p)\,\d \P(\omega) = \sup_{t\in[0,T]}\sup_{m \in \mathbb{N}}\int_{\mathbb{R}^d} \psi(|x|^p)\,\d \mu_t^m(x) < +\infty.
\end{equation} By the continuity of ${\ensuremath{\mathcal C}}$, there exists a modulus of continuity $\alpha_\omega:[0,+\infty)\to[0,+\infty)$ for the restriction of ${\ensuremath{\mathcal C}}$ to the compact set $K_\omega\times U\times{\ensuremath{\mathcal K}}$. Then, for $\P$-a.e. $\omega\in\Omega$, \begin{equation*} \sup_{t\in[0,T]}|{\ensuremath{\mathcal C}}(X_t^{m}(\omega), u_t^{m}(\omega),\mu_t^{m})-{\ensuremath{\mathcal C}}(X_t(\omega),u_t^{m}(\omega),\mu_t)|\le \alpha_\omega\big(\sup_{t\in[0,T]}(|X_t^{m}(\omega)- X_t(\omega)|+W_p(\mu_t^{m},\mu_t))\big). \end{equation*} Taking into account the previous considerations together with \eqref{eq:convXunif}, \eqref{eq:convW}, the growth condition \eqref{eq:growthC} and \eqref{eq:estequi}, we obtain $$ \left| \int_\Omega \left(\int_0^T {\ensuremath{\mathcal C}}(X^m_t(\omega),u^m_t(\omega),\mu^m_t) -{\ensuremath{\mathcal C}} (X_t(\omega),u^m_t(\omega),\mu_t)\,\d t\right)\,\d\P(\omega) \right| \to 0.$$ For the second term in the right-hand side of \eqref{eq:2stabC}, for $\P$-a.e. $\omega \in \Omega$ we define $h_\omega: [0,T] \times U \to \mathbb{R}$ by $h_\omega(t,u):= {\ensuremath{\mathcal C}} (X_t(\omega),u,(X_t)_\sharp\P)$. Notice that $h_\omega$ is continuous and bounded in $[0,T] \times U$, hence from the Young convergence of item (2) we get \begin{equation} \left| \int_0^T h_\omega (t,u^m_t(\omega))\,\d t - \int_{[0,T] \times U} h_\omega (t,u)\,\d \sigma_{\omega}(t,u)\right| \to 0, \qquad \text{ for $\P$-a.e. } \omega \in \Omega. \end{equation} From the growth assumption \eqref{eq:growthC} and the dominated convergence theorem we obtain that $$\left| \int_\Omega \left(\int_0^T {\ensuremath{\mathcal C}}(X_t(\omega),u^m_t(\omega),\mu_t)\,\d t -\int_{[0,T] \times U} {\ensuremath{\mathcal C}}(X_t(\omega),u,\mu_t)\,\d\sigma_{\omega}(t,u)\right) \,\d\P(\omega)\right|\to 0.
$$ Finally, thanks to \eqref{eq:convXunif} and \eqref{eq:convW} we also obtain that \begin{equation*} \lim_{m\to+\infty} \int_\Omega {\ensuremath{\mathcal C}}_T(X^m_T(\omega),\mu^m_T)\,\d\P(\omega) = \int_\Omega {\ensuremath{\mathcal C}}_T(X_T(\omega),\mu_T)\,\d\P(\omega). \end{equation*} For what concerns the last statement, since $X_0$ is $\bar \mathfrak B$-measurable (and hence constant on the elements of the partition ${\ensuremath{\mathcal P}}$), the measurability of $X_t^m$ with respect to the algebra $\bar \mathfrak B$ follows by uniqueness of solutions to \eqref{eq:X^m}. The same argument also yields that $X_t$ is $\bar \mathfrak B$-measurable. \end{proof} Combining Theorem \ref{prop:RLntoRL} and Proposition \ref{prop:costRLntoRL}, applied to the Relaxed Lagrangian problem $\RL$, with Proposition \ref{p:chattering:finito}, we can prove the following theorem. \begin{theorem}[Chattering]\label{thm:chat} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA}. Let $(\Omega,\mathfrak B,\P)$ be a probability space and let $\{\mathfrak B^n\}_{n\in\mathbb{N}}$ satisfy the finite approximation property of Definition \ref{approx_prop}. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$, $(X,\sigma)\in\mathcal A_{\RL}(X_0)$ and $\{X^n_0\}_{n\in\mathbb{N}}\subset L^p(\Omega;\mathbb{R}^d)$ be such that \begin{equation}\label{eq:conv_X0n} \|X^n_0-X_0\|_{L^p(\Omega;\mathbb{R}^d)} \to 0 \quad \text{ as } n \to +\infty. \end{equation} Then there exists a sequence $\{(\tilde X^n,\tilde u^n)\}_{n\in \mathbb{N}}$ such that $(\tilde X^n,\tilde u^n) \in {\ensuremath{\mathcal A}}_{\pL}(X^n_0)$ for every $n \in \mathbb{N}$ and the following hold: \begin{enumerate} \item $\tilde u^n$ are $({\ensuremath{\mathcal B}}_{[0,T]}\otimes\mathfrak B^{n})$-measurable; \item for $\P$-a.e.
$\omega \in \Omega$, $(i_{[0,T]}, \tilde u^n(\cdot,\omega))_\sharp {\ensuremath{\mathcal L}}_T \xrightarrow{\mathcal{Y}} \sigma_\omega$, as $n\to+\infty$, where $\sigma_\omega :=\sigma(t,\omega)\otimes{\ensuremath{\mathcal L}}_T\in\mathscr P([0,T]\times U)$; \item $\sup_{t\in[0,T]}\|\tilde X^n_t-X_t\|_{L^p(\Omega;\mathbb{R}^d)}\to 0$ as $n\to+\infty$; \item $J_{\pL}(\tilde X^n,\tilde u^n)\to J_{\RL}(X,\sigma)$, as $n\to+\infty$. \end{enumerate} Moreover, if $X_0^n$ is $\mathfrak B^{n}$-measurable, $n\in\mathbb{N}$, then $\tilde X_t^n$ is $\mathfrak B^{n}$-measurable for any $t\in[0,T]$. \end{theorem} \begin{proof} Let $(X,\sigma)\in\mathcal A_{\RL}(X_0)$ and let $\{X^n_0\}_{n\in \mathbb{N}}$ satisfy \eqref{eq:conv_X0n}. Applying Theorem \ref{prop:RLntoRL} to the relaxed problem $\RL$ (which is a Lagrangian problem in a lifted space, as discussed in Remark \ref{rem:RL-L}), we obtain a sequence $\{(X^n,\sigma^n)\}_{n\in\mathbb{N}}$ such that $(X^n,\sigma^n) \in {\ensuremath{\mathcal A}}_{\RL}(X^n_0)$ for every $n \in \mathbb{N}$. Moreover, $\sigma^n$ are $({\ensuremath{\mathcal B}}_{[0,T]}\otimes\mathfrak B^n)$-measurable and $\sigma^n\to\sigma$ in $({\ensuremath{\mathcal L}}_T\otimes\P)$-measure; as a consequence, $\sigma^n(t,\omega)\to\sigma(t,\omega)$ weakly in $\mathscr{P}(U)$ for ${\ensuremath{\mathcal L}}_T\otimes\P$-a.e. $(t,\omega)$, up to a non-relabelled subsequence. Thus, by Remark \ref{rem:Young-weak}, we get (up to a non-relabelled subsequence) \begin{equation}\label{eq:convysigma} \sigma^n_\omega:=\sigma^n(t,\omega)\otimes{\ensuremath{\mathcal L}}_T \xrightarrow{\mathcal{Y}} \sigma_\omega, \qquad \text{ for $\P$-a.e. } \omega \in \Omega.
\end{equation} By Proposition \ref{p:chattering:finito}, for any fixed $n\in\mathbb{N}$, there exists a sequence $\{(Y^{n,m},u^{n,m})\}_{m\in\mathbb{N}} \subset {\ensuremath{\mathcal A}}_{\pL}(X^n_0)$, with $u^{n,m}$ $({\ensuremath{\mathcal B}}_{[0,T]}\otimes\mathfrak B^n)$-measurable, such that \begin{enumerate} \item[(i)] for any $\omega \in \Omega$, $(i_{[0,T]}, u^{n,m}(\cdot, \omega))_\sharp {\ensuremath{\mathcal L}}_T \xrightarrow{\mathcal{Y}} \sigma^n_\omega$, as $m\to+\infty$; \item[(ii)] $\sup_{t\in[0,T]}\|X^n_t-Y^{n,m}_t\|_{L^p(\Omega;\mathbb{R}^d)}\to 0$, as $m\to+\infty$; \item[(iii)] $J_{\pL}(Y^{n,m},u^{n,m})\to J_{\RL}(X^n,\sigma^n)$, as $m\to+\infty$. \end{enumerate} Let us denote by ${\ensuremath{\mathcal P}}(\mathfrak B^n):= \lbrace A^n_k, k=1,\ldots, k(n)\rbrace$ the minimal (finite) partition induced by $\mathfrak B^n$. Let also $\sigma_\omega^{n,m}:=(i_{[0,T]}, u^{n,m}(\cdot, \omega))_\sharp {\ensuremath{\mathcal L}}_T$ and observe that the map $\omega\mapsto \sigma_\omega^{n,m}$ is constant on the elements of ${\ensuremath{\mathcal P}}(\mathfrak B^n)$. Then, if we select a representative $\omega_k \in A^n_k$ for any $k =1,\ldots, k(n)$, from item (i) it follows that \begin{equation} \lim_{m\to+\infty}\sup_{k = 1, \ldots, k(n)} \, \delta(\sigma^{n,m}_{\omega_k}, \sigma^n_{\omega_k} )= \lim_{m\to+\infty}\sup_{\omega\in\Omega} \, \delta(\sigma^{n,m}_\omega, \sigma^n_\omega )=0, \end{equation} where $\delta$ metrizes the Young convergence in $[0,T] \times U$. Recall that Young convergence is indeed equivalent to the weak convergence in $\mathscr{P}([0,T] \times U)$, see Remark \ref{rem:Young-weak}.
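For concreteness, one admissible choice of such a metric $\delta$ (a standard construction, given here only as an illustration; any metrization of weak convergence on the compact space $[0,T]\times U$ works in the diagonal argument that follows) is obtained from a countable family $\{f_k\}_{k\in\mathbb{N}}\subset C([0,T]\times U)$, dense in the unit ball of the uniform norm:

```latex
\delta(\sigma,\sigma'):=\sum_{k\in\mathbb{N}} 2^{-k}
\left|\int_{[0,T]\times U} f_k\,\d\sigma-\int_{[0,T]\times U} f_k\,\d\sigma'\right|,
\qquad \sigma,\sigma'\in\mathscr{P}([0,T]\times U).
```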
For any $n\in\mathbb{N}$, let $m(n)$ be such that $$\sup_{t\in[0,T]}\|X^n_t-Y^{n,m(n)}_t\|_{L^p(\Omega;\mathbb{R}^d)} < \frac1n,$$ $$|J_{\pL}(Y^{n,m(n)},u^{n,m(n)})- J_{\RL}(X^n,\sigma^n)| < \frac1n$$ and $$\sup_{\omega\in\Omega}\, \delta(\sigma^{n,m(n)}_\omega, \sigma^n_\omega )<\frac1n.$$ Let us define $\tilde X^n:=Y^{n,m(n)}$, the control function $\tilde u^n:=u^{n,m(n)}:[0,T]\times\Omega\to U$ and $\tilde\sigma_\omega^{n}:=\sigma_\omega^{n,m(n)}=(i_{[0,T]}, \tilde u^{n}(\cdot, \omega))_\sharp {\ensuremath{\mathcal L}}_T$. Notice that, by construction, $(\tilde X^n,\tilde u^n)\in\mathcal A_{\pL}(X^n_0)$. Fix now $\varepsilon>0$. By Theorem \ref{prop:RLntoRL} there exists $n_\varepsilon$ such that $$\sup_{t\in[0,T]}\|X^n_t-X_t\|_{L^p(\Omega;\mathbb{R}^d)} < \varepsilon, \quad \forall n>n_\varepsilon,$$ and $$|J_{\RL}(X^{n},\sigma^{n})- J_{\RL}(X,\sigma)| < \varepsilon, \quad \forall n>n_\varepsilon.$$ Then, using the definition of $\tilde X^n$ and $\tilde u^n$, for any $n>n_\varepsilon$ it holds \begin{align*} \sup_{t\in[0,T]}\|\tilde X^n_t-X_t\|_{L^p(\Omega;\mathbb{R}^d)}&\le \sup_{t\in[0,T]}\|Y^{n,m(n)}_t-X^n_t\|_{L^p(\Omega;\mathbb{R}^d)}\\ &+\sup_{t\in[0,T]}\|X^n_t-X_t\|_{L^p(\Omega;\mathbb{R}^d)} <\frac1n +\varepsilon, \end{align*} and \begin{align*} |J_{\pL}(\tilde X^n,\tilde u^n)-J_{\RL}(X,\sigma)|&\le |J_{\pL}(Y^{n,m(n)},u^{n,m(n)})-J_{\RL}(X^n,\sigma^n)|\\ &\quad+|J_{\RL}(X^n,\sigma^n)-J_{\RL}(X,\sigma)| < \frac1n +\varepsilon. \end{align*} If we send $n \to +\infty$, items (3) and (4) follow by the arbitrariness of $\varepsilon >0$. It remains to show item (2). Fix again $\varepsilon >0$ and choose $\omega\in\Omega$ for which the convergence in \eqref{eq:convysigma} holds. 
Then, there exists $n_\varepsilon(\omega)>0$ such that $$\delta(\sigma^{n}_\omega, \sigma_\omega ) <\varepsilon, \quad \forall n>n_\varepsilon(\omega),$$ and \begin{equation*} \delta(\tilde \sigma^{n}_\omega, \sigma_\omega ) \leq \delta(\tilde \sigma^{n}_\omega, \sigma^n_\omega ) + \delta(\sigma^{n}_\omega, \sigma_\omega )< \frac1n+\varepsilon, \quad \forall \, n>n_\varepsilon(\omega). \end{equation*} Sending $n\to+\infty$ we get item (2). \end{proof} \section{Eulerian optimal control problem} \label{sec_eulerian} In this section we describe the Eulerian formulation of the optimal control problem and we study its properties under the Convexity Assumption \ref{CA}. In particular, as stated in Theorem \ref{thm:minE}, in this setting we obtain the existence of minimizers. Recall that $\mathrm{B}([0,T] \times \mathbb{R}^d;U)$ denotes the set of Borel measurable functions. \begin{definition}[Eulerian optimal control problem (\E)]\label{def:E1} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA}. Given $\mu_0\in\mathscr{P}_p(\mathbb{R}^d)$, we say that $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}(\mu_0)$ if \begin{itemize} \item[(i)] $\underline u\in\mathrm{B}([0,T]\times\mathbb{R}^d;U)$; \item[(ii)] $\mu\in \mathrm{AC}^p([0,T];\mathscr{P}_p(\mathbb{R}^d))$ is a distributional solution of the Cauchy problem \begin{equation}\label{eq:systemE} \begin{cases} \partial_t\mu_t+\mathrm{div}\,(v_t\mu_t)=0, &\textrm{in }[0,T]\times\mathbb{R}^d\\ \mu_{t=0}=\mu_0,& \end{cases} \end{equation} where $v\in\mathrm{B}([0,T]\times\mathbb{R}^d;\mathbb{R}^d)$ is defined by $v_t(x):=f(x,\underline u(t,x),\mu_t)$ and $\mu_t:= \mu(t)$.
\end{itemize} We refer to $(\mu, \underline u)\in{\ensuremath{\mathcal A}}_{\E}(\mu_0)$ as an \emph{admissible pair}, with $\mu$ a \emph{measure trajectory} and $\underline u$ an \emph{Eulerian control}.\\ We define the \emph{cost functional} $J_{\E}:\operatorname{C}([0,T];\mathscr{P}_p(\mathbb{R}^d))\times {\mathrm{B}([0,T]\times\mathbb{R}^d; U)}\to[0,+\infty)$ by \begin{equation*} J_{\E}(\mu,\underline u):=\int_0^T\int_{\mathbb{R}^d}{\ensuremath{\mathcal C}}(x,\underline u(t,x),\mu_t)\,\d\mu_t(x)\,\d t+\int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}_T(x,\mu_T)\,\d\mu_T(x), \end{equation*} and the \emph{value function} $V_{\E}:\mathscr{P}_p(\mathbb{R}^d)\to[0,+\infty)$ by \begin{equation*} V_{\E}(\mu_0):=\inf\{J_{\E}(\mu,\underline u)\,:\,(\mu,\underline u)\in\mathcal A_{\E}(\mu_0)\}. \end{equation*} \end{definition} \begin{remark} Notice that, given $\underline u\in \mathrm{B}([0,T]\times\mathbb{R}^d;U)$, $\mu\in C([0,T];\mathscr{P}_p(\mathbb{R}^d))$ and setting $v_t(x):=f(x,\underline u(t,x),\mu_t)$ as in Definition \ref{def:E1}, we have \begin{align*} \int_{\mathbb{R}^d}|v_t(x)|^p\,\d\mu_t(x)&= \int_{\mathbb{R}^d}|f(x,\underline u(t,x),\mu_t)|^p\,\d\mu_t(x)\\ &\le \tilde C \left( 1+\int_{\mathbb{R}^d}|x|^p\,\d\mu_t(x)\right)\le \bar C, \quad \forall t\in[0,T] \end{align*} for some constants $\tilde C,\bar C>0$, thanks to the growth condition \eqref{f:growth} and since $\mu\in C([0,T];\mathscr{P}_p(\mathbb{R}^d))$. In particular, we get $v\in L^p(0,T;L^p_{\mu_t}(\mathbb{R}^d;\mathbb{R}^d))$. Thus, if $\mu$ is also a distributional solution of \eqref{eq:systemE}, then $\mu\in \mathrm{AC}^p([0,T];\mathscr{P}_p(\mathbb{R}^d))$. Hence, in Definition \ref{def:E1}(ii) we could have just required $\mu\in C([0,T];\mathscr{P}_p(\mathbb{R}^d))$. Observe also that the functional $J_\E$ is finite thanks to the growth condition \eqref{eq:growthC}. \end{remark} \begin{proposition} Let $\mu_0\in\mathscr{P}_p(\mathbb{R}^d)$.
Then ${\ensuremath{\mathcal A}}_\E(\mu_0)\not=\emptyset$. \end{proposition} \begin{proof} Let us fix $u_0\in U$ and define $\underline u(t,x)=u_0$ for any $(t,x)\in[0,T]\times\mathbb{R}^d$. Applying Proposition \ref{prop:existL} with $\Omega=\mathbb{R}^d$, $\P=\mu_0$, $X_0(x)=x$ for any $x\in\mathbb{R}^d$ and $u = \underline u$, we obtain $X\in L^p_{\mu_0}(\mathbb{R}^d;\mathrm{AC}^p([0,T];\mathbb{R}^d))$ such that $(X,\underline u) \in {\ensuremath{\mathcal A}}_\L(X_0)$. Defining $\mu_t:=(X_t)_\sharp\mu_0$, from the definition of the Lagrangian problem it holds \begin{equation}\label{flow} X_t(x)=x+\int_0^t f(X_s(x),u_0,\mu_s)\,\d s, \qquad \forall \, t \in [0,T] \text{ and }\mu_0\text{-a.e. }x\in\mathbb{R}^d. \end{equation} Furthermore, in view of Proposition \ref{prop:equivSpaces} we have that $X\in \mathrm{AC}^p([0,T];L^p_{\mu_0}(\mathbb{R}^d;\mathbb{R}^d))$ and from \eqref{eq:WpLp} we get $W_p(\mu_t,\mu_s)\le \|X_t-X_s\|_{L^p_{\mu_0}(\mathbb{R}^d;\mathbb{R}^d)}$ for any $t,s\in[0,T]$. This readily implies that $\mu \in \mathrm{AC}^p([0,T];\mathscr{P}_p(\mathbb{R}^d))$. If we define $v_t(x):= f(x,u_0,\mu_t)$, it remains to show that $\mu$ is a distributional solution of \eqref{eq:systemE}. This is a standard argument, in view of the fact that \eqref{flow} represents the system of characteristics of \eqref{eq:systemE} (see e.g. \cite[Lemma 8.1.6]{ambrosio2008gradient}). \end{proof} \begin{remark} Given $\mu_0\in\mathscr{P}_p(\mathbb{R}^d)$, general results granting the existence of solutions to the Cauchy problem \eqref{eq:systemE} are provided for instance in \cite[Theorem~A.2]{orrieri2018large} when $\underline u\in\mathrm{B}([0,T]\times\mathbb{R}^d;U)$ is also a Carath\'eodory function. \end{remark} \begin{definition}\label{def:nu_converges} Let $U$ be a subset of a separable Banach space $V$, and denote by $V'$ the dual of $V$.
Let $(\mu^n,\underline u^n), (\mu,\underline u)\in \operatorname{C}([0,T];\mathscr{P}_p(\mathbb{R}^d)) \times {\mathrm{B}([0,T]\times\mathbb{R}^d; U)}$. We say that $(\mu^n,\underline u^n)$ converges to $(\mu,\underline u)$ if \begin{itemize} \item $\mu^n$ converges to $\mu$ in $\operatorname{C}([0,T];\mathscr{P}_p(\mathbb{R}^d))$, \item for any $\phi\in C_c([0,T]\times\mathbb{R}^d;V')$ we have \begin{equation}\label{convweaknu} \lim_{n\to+\infty}\int_0^T\int_{\mathbb{R}^d}\langle \phi(t,x), \underline u^n(t,x)\rangle\,\d \mu_t^n(x)\,\d t =\int_0^T\int_{\mathbb{R}^d}\langle \phi(t,x), \underline u(t,x)\rangle\,\d \mu_t(x)\,\d t. \end{equation} \end{itemize} \end{definition} \begin{proposition}[Compactness]\label{prop:compactness} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA}. Let $\mu_0, \mu_0^n \in \mathscr{P}_p(\mathbb{R}^d)$ such that $W_p(\mu_0^n, \mu_0) \to 0$, as $n \to +\infty$. If $(\mu^n,\underline u^n) \in {\ensuremath{\mathcal A}}_{\E}(\mu^n_0)$, $n\in\mathbb{N}$, then there exist $(\mu,\underline u) \in {\ensuremath{\mathcal A}}_{\E}(\mu_0)$ and a subsequence $(\mu^{n_k},\underline u^{n_k})$ such that $(\mu^{n_k},\underline u^{n_k})$ converges to $(\mu,\underline u)$, as $k\to+\infty$, according to Definition \ref{def:nu_converges}. \end{proposition} \begin{proof} Let $(\mu^n,\underline u^n) \in {\ensuremath{\mathcal A}}_{\E}(\mu^n_0)$. Since $W_p(\mu_0^n,\mu_0) \to 0$, by Proposition \ref{prop:wassconv} it holds that \begin{equation}\label{bound_mu0p} \sup_{n \in \mathbb{N}}\int_{\mathbb{R}^d} |x|^p\d \mu^n_0(x) < +\infty \end{equation} and there exists an admissible $\psi:[0,+\infty)\to[0,+\infty)$, in the sense of Definition \ref{def:admphi}, such that \begin{equation}\label{eq:unifmompsi2} \sup_{n \in \mathbb{N}}\int_{\mathbb{R}^d} \psi(|x|^p)\d \mu^n_0(x) < +\infty. 
\end{equation} In order to apply the Ascoli-Arzel\`a theorem to the sequence $\{\mu^n\}_{n\in\mathbb{N}}$, we show that \begin{equation}\label{eq:cpt_muN} \sup_{n \in \mathbb{N}}\sup_{t \in [0,T]}\int_{\mathbb{R}^d} \psi(|x|^p)\d \mu^n_t(x) < +\infty \end{equation} and there exists a constant $C$ such that \begin{equation}\label{eq:BB} W^p_p (\mu^n_s,\mu^n_t) \leq C|t-s|, \qquad \forall\, s,t\in[0,T]. \end{equation} We start by estimating $\int_{\mathbb{R}^d} |x|^p\d \mu^n_t(x)$. We formally use the map $x\mapsto |x|^p$ as a test function for the weak formulation of the continuity equation (a rigorous approach would require an approximation of this map through cut-off functions, see \cite[Section~5]{fornasier2018mean}). Defining $v^n(t,x):= f(x,\underline{u}^n(t,x), \mu^n_t)$, using the growth condition on $f$ given in \eqref{f:growth} and Young's inequality we obtain that \begin{equation*} \begin{split} \int_{\mathbb{R}^d} |x|^p\,\d \mu^n_t(x) &\leq \int_{\mathbb{R}^d} |x|^p\,\d \mu^n_0(x) + p\int_0^t \int_{\mathbb{R}^d} |v^n(s,x)||x|^{p-1} \,\d \mu^n_s(x)\,\d s \\ &\leq \int_{\mathbb{R}^d} |x|^p\,\d \mu^n_0(x) + Cp\int_0^t \int_{\mathbb{R}^d}\left( 1 + |x| + \mathrm{m}_p(\mu_s^n) \right)|x|^{p-1} \,\d \mu^n_s(x)\,\d s\\ &\leq \int_{\mathbb{R}^d} |x|^p\,\d \mu^n_0(x) + \tilde C\int_0^t \left(1+\int_{\mathbb{R}^d} |x|^p \,\d \mu^n_s(x)\right)\d s, \end{split} \end{equation*} for some $\tilde C >0$ independent of $n$ and $t\in[0,T]$. By Gronwall's inequality and \eqref{bound_mu0p} we get that \begin{equation}\label{eq:bound_xp} \sup_{n \in \mathbb{N}}\sup_{t \in [0,T]}\int_{\mathbb{R}^d} |x|^p\d \mu^n_t(x) < +\infty.
\end{equation} Formally using the map $x\mapsto \psi(|x|^p)$ as a test function for the weak formulation of the continuity equation, by the growth condition on $f$ in \eqref{f:growth} and the bound \eqref{eq:bound_xp}, we have \begin{equation}\label{eq:psimom} \begin{split} \int_{\mathbb{R}^d} \psi(|x|^p)\,\d \mu^n_t(x) &\leq \int_{\mathbb{R}^d} \psi(|x|^p)\,\d\mu^n_0(x) + \int_0^t \int_{\mathbb{R}^d} |v^n(s,x)| |\nabla(\psi(|x|^p))| \,\d \mu^n_s(x)\,\d s \\ &\leq \int_{\mathbb{R}^d} \psi(|x|^p)\,\d \mu^n_0(x) + pC\int_0^t \int_{\mathbb{R}^d}\left(1 + |x| +\mathrm{m}_p(\mu^n_s)\right) \psi'(|x|^p)|x|^{p-1} \,\d \mu^n_s(x)\,\d s\\ &\leq \int_{\mathbb{R}^d} \psi(|x|^p)\,\d \mu^n_0(x) + C_1\int_0^t \int_{\mathbb{R}^d}\left(|x|^{p-1} + |x|^p \right) \psi'(|x|^p)\,\d \mu^n_s(x)\,\d s \end{split} \end{equation} for some $C_1>0$ independent of $n$ and $t\in[0,T]$. Notice that by the monotonicity of $\psi'$, denoting by $B_1$ the unit ball of $\mathbb{R}^d$, we have \begin{equation*} \begin{split} \int_{\mathbb{R}^d} |x|^{p-1} \psi'(|x|^p)\,\d \mu^n_s(x) &\leq \int_{B_1} |x|^{p-1} \psi'(|x|^p)\,\d \mu^n_s(x) + \int_{\mathbb{R}^d\setminus B_1} |x|^{p} \psi'(|x|^p)\,\d \mu^n_s(x)\\ &\leq \psi'(1) + \int_{\mathbb{R}^d} |x|^{p} \psi'(|x|^p)\,\d \mu^n_s(x). \end{split} \end{equation*} By the previous inequality and \eqref{doubling}, from \eqref{eq:psimom} we get \begin{equation*} \begin{split} \int_{\mathbb{R}^d} \psi(|x|^p)\,\d \mu^n_t(x) &\leq \int_{\mathbb{R}^d} \psi(|x|^p)\,\d \mu^n_0(x) + C_2\int_0^t \int_{\mathbb{R}^d} \left( 1 +\psi(|x|^p)\right) \,\d \mu^n_s(x)\,\d s \end{split} \end{equation*} for some $C_2>0$ independent of $n$ and $t\in[0,T]$. By Gronwall's inequality and \eqref{eq:unifmompsi2} we obtain \eqref{eq:cpt_muN}.
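For completeness, the Gronwall step just invoked can be spelled out. Writing $\phi_n(t):=\int_{\mathbb{R}^d}\psi(|x|^p)\,\d\mu^n_t(x)$ (a shorthand used only in this remark), the last inequality gives

```latex
\phi_n(t)\le \phi_n(0)+C_2\int_0^t\big(1+\phi_n(s)\big)\,\d s
\qquad\Longrightarrow\qquad
1+\phi_n(t)\le \big(1+\phi_n(0)\big)\,e^{C_2 t},
```

so that the uniform bound \eqref{eq:unifmompsi2} on $\phi_n(0)$ yields a bound independent of $n$ and $t\in[0,T]$.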
Using the Benamou-Brenier formula \eqref{B&B} and the growth condition on $f$ in \eqref{f:growth}, for $s,t\in[0,T]$, $s \leq t$, it holds \begin{equation*} \begin{split} W^p_p (\mu^n_s,\mu^n_t) &\leq \int_s^t \int_{\mathbb{R}^d} |v^n(r,x)|^p \,\d \mu^n_r(x) \d r \\ &\leq C\int_s^t \int_{\mathbb{R}^d} \left(1 + |x| + \mathrm{m}_p(\mu_r^n) \right)^p \,\d \mu_r^n(x) \d r, \end{split} \end{equation*} for some $C>0$ independent of $n$, $s$ and $t$. Using the bound \eqref{eq:bound_xp} we obtain \eqref{eq:BB}. By the Ascoli-Arzel\`a theorem in $\mathscr{P}_p(\mathbb{R}^d)$ there exist $\mu \in C([0,T];\mathscr{P}_p(\mathbb{R}^d))$ and a subsequence $\mu^n$ (not relabeled) such that $\mu^n\to\mu$ in $C([0,T];\mathscr{P}_p(\mathbb{R}^d))$. \smallskip For what concerns the weak compactness of $\underline u^n$ (in the sense of convergence \eqref{convweaknu}), we denote by $\tilde{\mu}^n = \mu^n_t\otimes{\ensuremath{\mathcal L}}_T\in\mathscr{P}([0,T] \times \mathbb{R}^d)$ and $\tilde{\mu} = \mu_t\otimes{\ensuremath{\mathcal L}}_T\in\mathscr{P}([0,T] \times \mathbb{R}^d)$. From the convergence of $\mu^n$ to $\mu$ it follows that $\tilde{\mu}^n\to\tilde{\mu}$ weakly. Defining $\gamma^n:= (i_{[0,T]\times\mathbb{R}^d}, \underline{u}^n)_\sharp \tilde{\mu}^n \in \mathscr{P}(([0,T] \times \mathbb{R}^d) \times U)$, we observe that $\pi^1_\sharp \gamma^n = \tilde{\mu}^n\in\mathscr{P}([0,T] \times \mathbb{R}^d)$ and $\pi^2_\sharp \gamma^n = \underline{u}^n_\sharp\tilde{\mu}^n\in\mathscr{P}(U)$, where $\pi^1: [0,T] \times \mathbb{R}^d \times U \to [0,T] \times \mathbb{R}^d$ is the projection on $[0,T] \times \mathbb{R}^d$ and $\pi^2: [0,T] \times \mathbb{R}^d \times U \to U$ is the projection on $U$. Since $\pi^1_\sharp \gamma^n$ weakly converges and $U$ is compact, the families $\{\pi^1_\sharp \gamma^n\}_{n \in \mathbb{N}}$ and $\{\pi^2_\sharp \gamma^n\}_{n \in \mathbb{N}}$ are tight.
Thanks to \cite[Lemma 5.2.2]{ambrosio2008gradient} it follows that $\{ \gamma^n \}_{n \in \mathbb{N}}$ is tight and, by Prokhorov's Theorem, there exist $\gamma \in \mathscr{P}([0,T] \times \mathbb{R}^d \times U)$ and a subsequence $\gamma^n$ (not relabeled) such that $\gamma^n \to \gamma$ weakly, as $n \to +\infty$.\\ Let $\phi\in C_c([0,T]\times\mathbb{R}^d;V')$. Recalling that $U$ is compact, using the continuous and bounded test function $(t,x,u)\mapsto \langle \phi(t,x), u\rangle$, by the weak convergence of $\gamma^n$ to $\gamma$ we have \begin{equation}\label{eq:youngun} \begin{split} & \int_0^T\int_{\mathbb{R}^d}\langle\phi(t,x), \underline u^n(t,x)\rangle \,\d \tilde\mu^n(t,x) \\ & = \int_0^T\int_{\mathbb{R}^d}\int_U\langle \phi(t,x), u\rangle\,\d \gamma^n(t,x,u) \to \int_0^T\int_{\mathbb{R}^d}\int_U \langle\phi(t,x), u\rangle\,\d \gamma(t,x,u), \end{split} \end{equation} as $n\to+\infty$. Using Theorem \ref{thm:disint} (specifically Remark \ref{rem:disint_proj}) and observing that $\pi^1_\sharp \gamma = \tilde{\mu}$, we disintegrate $\gamma$ with respect to $\pi^1$ to get \begin{equation*} \int_0^T\int_{\mathbb{R}^d}\int_U \langle \phi(t,x), u\rangle \,\d \gamma(t,x,u) = \int_0^T\int_{\mathbb{R}^d}\int_U\langle \phi(t,x), u\rangle \,\d \gamma_{t,x}(u)\,\d\tilde\mu(t,x). \end{equation*} We define now $\underline u: [0,T] \times \mathbb{R}^d \to U$ by \begin{equation}\label{eq:baricentreu} \underline{u}(t,x) := \int_U u \, \d \gamma_{t,x}(u), \qquad \forall\, (t,x) \in [0,T] \times \mathbb{R}^d, \end{equation} where the integral in \eqref{eq:baricentreu} is a Bochner integral. Since the map $(t,x) \in [0,T] \times \mathbb{R}^d \mapsto \gamma_{t,x}\in\mathscr{P}(U)$ is a Borel map, we have $\underline u\in\mathrm{B}([0,T] \times \mathbb{R}^d;U)$. We call the map $\underline u$ the barycentric projection of $\gamma$ with respect to $\pi^1_\sharp\gamma$.
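A point implicitly used in \eqref{eq:baricentreu} is that the barycenter $\underline u(t,x)$ actually belongs to $U$; this relies on $U$ being convex and closed (an assumption we make explicit only for this remark, in line with the convexity framework of this section). A sketch of the standard separation argument: if $\underline u(t,x)\notin U$, the Hahn-Banach theorem would provide $\xi\in V'$ and $c\in\mathbb{R}$ with $\langle\xi,u\rangle\le c<\langle\xi,\underline u(t,x)\rangle$ for every $u\in U$; integrating against $\gamma_{t,x}$ then yields

```latex
\langle\xi,\underline u(t,x)\rangle
=\int_U\langle\xi,u\rangle\,\d\gamma_{t,x}(u)\le c,
```

a contradiction.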
Since the Bochner integral commutes with continuous linear functionals, it holds \begin{equation*} \int_0^T\int_{\mathbb{R}^d}\int_U\langle\phi(t,x), u\rangle\,\d \gamma_{t,x}(u)\,\d\tilde\mu(t,x) =\int_0^T\int_{\mathbb{R}^d}\langle\phi(t,x), \underline u(t,x)\rangle\,\d\tilde\mu(t,x). \end{equation*} Using \eqref{eq:youngun} we obtain the convergence $\underline u^n \to \underline u$ in the sense of \eqref{convweaknu}. \smallskip In order to prove that $(\mu, \underline u) \in {\ensuremath{\mathcal A}}_{\E}(\mu_0)$ we show that \eqref{eq:systemE} is satisfied. Let $\varphi \in C^1_c([0,T] \times \mathbb{R}^d)$. Since $(\mu^n,\underline u^n) \in {\ensuremath{\mathcal A}}_{\E}(\mu^n_0)$, for every $t \in [0,T]$ it holds \begin{equation}\label{eq:mu_n_continuity} \int_{\mathbb{R}^d} \varphi(t,x)\,\d \mu^n_t(x) - \int_{\mathbb{R}^d} \varphi(0,x)\,\d \mu^n_0(x) = \int_0^t \int_{\mathbb{R}^d} \left( \partial_s \varphi(s,x) + v^n(s,x)\cdot \nabla \varphi(s,x) \right) \,\d \mu^n_s(x) \,\d s. \end{equation} By the convergence $\mu^n \to\mu$ in $ C([0,T];\mathscr{P}_p(\mathbb{R}^d))$ we immediately pass to the limit, as $n \to +\infty$, in the left-hand side of equation \eqref{eq:mu_n_continuity} as well as in the term $\int_0^t \int_{\mathbb{R}^d} \partial_s \varphi(s,x) \,\d \mu^n_s(x) \,\d s$.
Finally, let us rewrite \begin{equation*} \begin{split} &\int_0^t \int_{\mathbb{R}^d} v^n(s,x)\cdot \nabla \varphi(s,x) \,\d \mu^n_s(x) \,\d s\\ &=\int_0^t \int_{\mathbb{R}^d} f(x, \underline{u}^n(s,x), \mu^n_s) \cdot \nabla \varphi(s,x) \, \d \mu^n_s(x)\,\d s \\ &= \int_{[0,t] \times \mathbb{R}^d \times U} f(x, u, \mu^n_s)\cdot \nabla \varphi(s,x) \, \d \gamma^n(s,x,u) \\ &= \int_{[0,t] \times \mathbb{R}^d \times U} ( f(x, u, \mu^n_s) - f(x, u, \mu_s))\cdot \nabla \varphi(s,x) \, \d \gamma^n(s,x,u) \\ &\quad + \int_{[0,t] \times \mathbb{R}^d \times U} f(x, u, \mu_s)\cdot \nabla \varphi(s,x) \, \d \gamma^n(s,x,u).\\ \end{split} \end{equation*} By \eqref{eq:Lipf}, the first term on the right-hand side can be estimated by \begin{equation*} \begin{split} &\left|\int_{[0,t] \times \mathbb{R}^d \times U} \langle f(x, u, \mu^n_s) - f(x, u, \mu_s), \nabla \varphi(s,x) \rangle \, \d \gamma^n(s,x,u)\right| \\ &\quad \leq L\sup_{r \in [0,T]}W_p(\mu^n_r, \mu_r)\int_{[0,t] \times \mathbb{R}^d \times U} |\nabla \varphi(s,x)| \, \d \gamma^n(s,x,u) \\ &\quad = L\sup_{r \in [0,T]}W_p(\mu^n_r, \mu_r)\int_{[0,t] \times \mathbb{R}^d} |\nabla \varphi(s,x)| \, \d \mu^n_s(x)\,\d s, \end{split} \end{equation*} which goes to zero as $n \to +\infty$ by the convergence $\mu^n\to\mu$ in $C([0,T];\mathscr{P}_p(\mathbb{R}^d))$. Hence, by the weak convergence of $\gamma^n$ to $\gamma$, \begin{equation*} \begin{split} \lim_{n \to +\infty} \int_0^t \int_{\mathbb{R}^d} v^n(s,x)\cdot \nabla \varphi(s,x) \,\d \mu^n_s(x) \,\d s &=\lim_{n \to +\infty} \int_{[0,t] \times \mathbb{R}^d \times U} f(x, u, \mu_s)\cdot \nabla \varphi(s,x) \, \d \gamma^n(s,x,u)\\ &=\int_{[0,t] \times \mathbb{R}^d \times U} f(x, u, \mu_s)\cdot \nabla \varphi(s,x) \, \d \gamma(s,x,u). \end{split} \end{equation*} Recall that, by the Convexity Assumption \ref{CA}, for any $(x,s)\in\mathbb{R}^d\times[0,T]$ the map $u \mapsto f(x,u,\mu_s)$ is affine.
Thus, using that $ \gamma_{s,x}$ is a probability measure, we have \begin{equation*} \int_U f(x, u, \mu_s) \, \d \gamma_{s,x}(u) = f\left(x, \int_U u \,\d \gamma_{s,x}(u) , \mu_s\right), \quad \text{ for } \tilde \mu\text{-a.e. } (s,x) \in [0,T] \times \mathbb{R}^d. \end{equation*} Hence we get \begin{equation*} \begin{split} \int_{[0,t] \times \mathbb{R}^d} &\int_U f(x, u, \mu_s)\cdot \nabla \varphi(s,x) \, \d \gamma_{s,x}(u) \,\d \tilde \mu(s,x)\\ &=\int_{[0,t] \times \mathbb{R}^d} f\left(x, \int_U u \,\d \gamma_{s,x}(u) , \mu_s\right)\cdot \nabla \varphi(s,x) \, \d \tilde \mu(s,x)\\ &=\int_0^t \int_{\mathbb{R}^d} f\left(x, \underline u(s,x) ,\mu_s\right)\cdot \nabla \varphi(s,x) \, \d \mu_s(x) \,\d s.\\ \end{split} \end{equation*} Defining $v(t,x):= f(x,\underline{u}(t,x), \mu_t)$, we have proved that \begin{equation*} \begin{split} \lim_{n \to +\infty} &\int_0^t \int_{\mathbb{R}^d} v^n(s,x)\cdot \nabla \varphi(s,x) \,\d \mu^n_s(x) \,\d s \\ &=\int_0^t \int_{\mathbb{R}^d} v(s,x)\cdot \nabla \varphi(s,x) \,\d \mu_s(x) \,\d s \end{split} \end{equation*} and this concludes the proof. \end{proof} \begin{proposition}[Lower semicontinuity for convex $J_\E$]\label{prop:lscJE} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA}. If $(\mu^n,\underline u^n)$ converges to $(\mu,\underline u)$ according to Definition \ref{def:nu_converges}, then \begin{equation}\label{lscJE} \liminf_{n\to+\infty} J_\E(\mu^n,\underline u^n)\ge J_\E(\mu,\underline u). \end{equation} \end{proposition} \begin{proof} Denoting by $\tilde{\mu}^n = \mu^n_t\otimes{\ensuremath{\mathcal L}}_T\in\mathscr{P}([0,T] \times \mathbb{R}^d)$ and $\tilde{\mu} = \mu_t\otimes{\ensuremath{\mathcal L}}_T\in\mathscr{P}([0,T] \times \mathbb{R}^d)$, we define $\gamma^n:= (i_{[0,T]\times\mathbb{R}^d}, \underline{u}^n)_\sharp \tilde{\mu}^n \in \mathscr{P}(([0,T] \times \mathbb{R}^d) \times U)$. 
Reasoning as in the proof of Proposition \ref{prop:compactness}, we obtain that there exists $\gamma \in \mathscr{P}(([0,T] \times \mathbb{R}^d) \times U)$ such that $\pi^1_\sharp \gamma= \tilde{\mu}$ and, up to subsequences, $\gamma^n\to\gamma$ weakly as $n\to+\infty$. Moreover, defining $\underline u_{b}$ by \eqref{eq:baricentreu}, up to subsequences, $(\mu^n,\underline u^n)$ converges to $(\mu,\underline u_b)$ according to Definition \ref{def:nu_converges}. Then, \begin{equation*} \int_0^T\int_{\mathbb{R}^d}\langle\phi(t,x), \underline u(t,x)\rangle\,\d\tilde\mu(t,x) = \int_0^T\int_{\mathbb{R}^d}\langle\phi(t,x), \underline u_b(t,x)\rangle\,\d\tilde\mu(t,x), \quad \forall\,\phi\in C_c([0,T]\times\mathbb{R}^d;V'), \end{equation*} which implies that $ \underline u_b(t,x)= \underline u(t,x)$ for $\tilde\mu$-a.e. $(t,x)\in [0,T]\times\mathbb{R}^d$. Let ${\ensuremath{\mathcal K}} \subset \mathscr{P}_p(\mathbb{R}^d)$ be a compact set containing $\mu^n_t,\mu_t$ for any $n\in\mathbb{N}$ and $t\in[0,T]$. Then, for any compact $K \subset \mathbb{R}^d$, thanks to the continuity of ${\ensuremath{\mathcal C}}$, there exists a modulus of continuity $\alpha:[0,+\infty)\to[0,+\infty)$ for the restriction of ${\ensuremath{\mathcal C}}$ to the compact set $K\times U \times{\ensuremath{\mathcal K}}$ so that $$\sup_{(t,x,u)\in [0,T] \times K \times U}|{\ensuremath{\mathcal C}}(x,u,\mu^n_t) - {\ensuremath{\mathcal C}}(x,u,\mu_t)| \leq \alpha \Big( \sup_{t\in[0,T]}W_p(\mu_t^{n},\mu_t)\Big) \to 0 \quad \text{as } n\to+\infty.$$ Then, taking into account that ${\ensuremath{\mathcal C}} \ge 0$, we get \begin{equation*} \begin{split} &\liminf_{n \to +\infty}\int_0^T\int_{\mathbb{R}^d}\mathcal C(x,\underline u^n(t,x),\mu^n_t)\,\d\mu^n_t(x)\,\d t \\ &\geq \liminf_{n \to +\infty}\int_{[0,T] \times K \times U}\mathcal C(x,u,\mu^n_t)\,\d\gamma^n(t,x,u)\\ &\geq \int_{[0,T] \times K \times U}\mathcal C(x,u,\mu_t)\,\d\gamma(t,x,u). 
\end{split} \end{equation*} Since $K$ is arbitrary we obtain \begin{equation}\label{eqliminf} \liminf_{n \to +\infty}\int_0^T\int_{\mathbb{R}^d}\mathcal C(x,\underline u^n(t,x),\mu^n_t)\,\d\mu^n_t(x)\,\d t \geq \int_{[0,T] \times \mathbb{R}^d \times U}\mathcal C(x,u,\mu_t)\,\d\gamma(t,x,u). \end{equation} Denoting by $\gamma_{t,x}$ the disintegration of $\gamma$ with respect to $\pi^1$ (as in the proof of Proposition \ref{prop:compactness}), the convexity of the map $u \mapsto {\ensuremath{\mathcal C}}(x, u, \mu_t)$ for any $(t,x)\in[0,T]\times\mathbb{R}^d$ and Jensen's inequality yield \begin{equation*} \begin{split} \int_{[0,T] \times \mathbb{R}^d \times U}\mathcal C(x,u,\mu_t)\,\d\gamma(t,x,u) &=\int_0^T \int_{\mathbb{R}^d} \int_U\mathcal C(x,u,\mu_t)\,\d\gamma_{t,x}(u)\,\d \mu_t(x) \,\d t \\ &\geq \int_0^T \int_{\mathbb{R}^d}\mathcal C\left(x,\int_U u \, \d\gamma_{t,x}(u),\mu_t\right)\,\d \mu_t(x) \,\d t \\ &= \int_0^T \int_{\mathbb{R}^d}\mathcal C\left(x,\underline u_b(t,x),\mu_t\right)\,\d \mu_t(x) \,\d t\\ &= \int_0^T \int_{\mathbb{R}^d}\mathcal C\left(x,\underline u(t,x),\mu_t\right)\,\d \mu_t(x) \,\d t. \end{split} \end{equation*} By the continuity of ${\ensuremath{\mathcal C}}_T$, using the same argument as in the proof of \eqref{eqliminf}, we obtain \begin{equation*} \begin{split} \liminf_{n \to +\infty}\int_{\mathbb{R}^d}\mathcal C_T(x,\mu^n_T)\,\d\mu^n_T(x) \geq \int_{\mathbb{R}^d} \mathcal C_T(x,\mu_T)\,\d\mu_T(x). \end{split} \end{equation*} Combining the previous estimates, by the superadditivity of the liminf we obtain \eqref{lscJE}. \end{proof} Propositions \ref{prop:compactness} and \ref{prop:lscJE} immediately give the existence of minimizers for our optimal control problem in Eulerian formulation. \begin{theorem}[Existence of minimizers for convex \E]\label{thm:minE} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA}.
If $\mu_0 \in \mathscr{P}_p(\mathbb{R}^d)$, then there exists $(\mu,\underline u)\in\mathcal A_{\E}(\mu_0)$ such that $$J_{\E}(\mu,\underline u)=V_{\E}(\mu_0).$$ \end{theorem} As a consequence, we derive the lower semicontinuity of the value function for the Eulerian problem. \begin{proposition}[Lower semicontinuity of $V_{\E}$]\label{prop:lscVE} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA} and $\mu_0 \in \mathscr{P}_p(\mathbb{R}^d)$. If $\{\mu_0^n\}_{n\in\mathbb{N}}\subset \mathscr{P}_p(\mathbb{R}^d)$ is a sequence such that $W_p(\mu_0^n, \mu_0) \to 0$ as $n \to +\infty$, then $$\liminf_{n \to +\infty} V_{\E}(\mu_0^n)\geq V_\E(\mu_0).$$ \end{proposition} \begin{proof} From Theorem \ref{thm:minE}, there exists a sequence $(\mu^n,\underline u^n) \in \mathcal{A}_{\E}(\mu_0^n)$ such that $V_{\E}(\mu_0^n)=J_\E(\mu^n,\underline u^n)$. By Proposition \ref{prop:compactness} there exists $(\mu,\underline u) \in \mathcal{A}_{\E}(\mu_0)$ such that, up to subsequences, $(\mu^n,\underline u^n)$ converges to $(\mu,\underline u)$ according to Definition \ref{def:nu_converges}. Then, using Proposition \ref{prop:lscJE}, we have \[ \liminf_{n \to +\infty} V_{\E}(\mu_0^n) = \liminf_{n \to +\infty} J_{\E}(\mu^n,\underline u^n) \geq J_{\E}(\mu,\underline u) \geq V_{\E}(\mu_0). \] \end{proof} \section{Kantorovich optimal control problem and equivalence with the Eulerian}\label{sec:K} In this section, we provide a further formulation of optimal control problems, which we call the \emph{Kantorovich formulation} in analogy with the terminology used in optimal transport theory. This formulation acts as a bridge between the Lagrangian and the Eulerian formulations and it is based on the representation of solutions of the continuity equation by superposition of continuous curves in $\Gamma_T=C([0,T];\mathbb{R}^d)$ (see Theorem \ref{thm:sup_princ}).
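The superposition viewpoint can be illustrated by a formal computation (a sketch only; the rigorous statement is Theorem \ref{thm:sup_princ}). If $\boldsymbol\eta\in\mathscr{P}(\Gamma_T)$ is concentrated on solutions of the controlled ODE, $e_t(\gamma):=\gamma(t)$ denotes the evaluation map and $\mu_t:=(e_t)_\sharp\boldsymbol\eta$, then for every $\varphi\in C^1_c(\mathbb{R}^d)$

```latex
\frac{\d}{\d t}\int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t(x)
=\frac{\d}{\d t}\int_{\Gamma_T}\varphi(\gamma(t))\,\d\boldsymbol\eta(\gamma)
=\int_{\Gamma_T}\nabla\varphi(\gamma(t))\cdot f(\gamma(t),u(t,\gamma),\mu_t)\,\d\boldsymbol\eta(\gamma),
```

which is the distributional form of the continuity equation with the velocity field induced by the control.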
This formulation turns out to be equivalent to the Eulerian one and it will be useful in Section \ref{sec:E=L} to prove the equivalence between the Eulerian and the Lagrangian problems.\\ \noindent We recall that, for any $t\in[0,T]$, $e_t:\Gamma_T\to\mathbb{R}^d$ denotes the evaluation map $e_t(\gamma):=\gamma(t)$. \begin{definition}[Kantorovich optimal control problem (\K)]\label{def:K} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA}. Given $\mu_0\in\mathscr{P}_p(\mathbb{R}^d)$, we say that $(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)$ if \begin{itemize} \item[(i)] $u\in\mathrm{B}([0,T]\times\Gamma_T;U)$; \item[(ii)] $\boldsymbol\eta\in \mathscr{P}(\Gamma_T)$, $(e_0)_\sharp\boldsymbol\eta = \mu_0$ and, defining $\mu_t:=(e_t)_\sharp\boldsymbol\eta$ for all $t\in[0,T]$, \begin{equation}\label{eq:momp} \int_0^T\int_{\mathbb{R}^d}|x|^p\,\d\mu_t(x)\,\d t <+\infty. \end{equation} $\boldsymbol\eta$ is concentrated on the set of absolutely continuous solutions of the differential equation \[\dot\gamma(t)=f(\gamma(t),u(t,\gamma),\mu_t),\qquad \text{for ${\ensuremath{\mathcal L}}_T$-a.e. }t\in[0,T].\] \end{itemize} We define the \emph{cost functional} $J_{\K}:\mathscr{P}(\Gamma_T)\times{\mathrm{B}([0,T]\times\Gamma_T; U)}\to[0,+\infty]$ by \begin{equation*} J_{\K}(\boldsymbol\eta,u):=\int_0^T\int_{\Gamma_T}{\ensuremath{\mathcal C}}(\gamma(t),u(t,\gamma),\mu_t)\,\d\boldsymbol\eta(\gamma)\,\d t+\int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}_T(x,\mu_T)\,\d\mu_T(x), \end{equation*} and the \emph{value function} $V_{\K}:\mathscr{P}_p(\mathbb{R}^d)\to[0,+\infty)$ by \begin{equation*} V_{\K}(\mu_0):=\inf\{J_{\K}(\boldsymbol\eta,u)\,:\,(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)\}. 
\end{equation*} \end{definition} \begin{remark}\label{rem:etaACp} We observe that, by the growth condition of $f$ in \eqref{f:growth} and condition \eqref{eq:momp}, $\boldsymbol\eta$ is actually concentrated on $\mathrm{AC}^p([0,T]; \mathbb{R}^d)$, indeed \begin{equation*} \begin{split} \int_0^T \int_{\Gamma_T} |\dot \gamma(t)|^p\, \d \boldsymbol\eta(\gamma) \, \d t & \leq \int_0^T \int_{\Gamma_T} |f(\gamma(t),u(t,\gamma), \mu_t)|^p\, \d \boldsymbol\eta(\gamma) \, \d t \\ &\leq C\int_0^T \int_{\Gamma_T} |1 + |\gamma(t)| + \mathrm{m}_p(\mu_t)|^p\, \d \boldsymbol\eta(\gamma) \, \d t\\ &\leq \tilde C \Big( 1 + \int_0^T\int_{\mathbb{R}^d}|x|^p \, \d \mu_t(x) \d t\Big). \end{split} \end{equation*} Hence, by Fubini's theorem, for $\boldsymbol\eta$-a.e. $\gamma \in \Gamma_T$, $\dot \gamma \in L^p([0,T]; \mathbb{R}^d)$. Moreover, thanks to \eqref{eq:momp} and the growth condition \eqref{eq:growthC}, $J_\K(\boldsymbol\eta,u) < +\infty$ for every $(\boldsymbol\eta,u) \in {\ensuremath{\mathcal A}}_{\K}(\mu_0)$ (the proof of $\int_{\mathbb{R}^d}|x|^p\,\d\mu_T(x) <+\infty$ follows by items (i) and (ii) and the same argument used to show \eqref{eq:bound_xp} in Proposition \ref{prop:compactness}).\\ Finally, the value function $V_\K$ is well defined since ${\ensuremath{\mathcal A}}_\K$ is nonempty (see Proposition \ref{prop:E>K}). \end{remark} The aim of this section is to prove the existence of minimizers for the Kantorovich optimal control problem under the Convexity Assumption \ref{CA} and to show the equivalence with the Eulerian formulation of the problem described in Section \ref{sec_eulerian}. In particular, we get the equality of the corresponding value functions. This is the content of the following theorem. \begin{theorem}\label{cor:E=K} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA}.
If $\mu_0\in \mathscr{P}_p(\mathbb{R}^d)$, then there exist $(\boldsymbol\eta,u)\in\mathcal A_\K(\mu_0)$ and $(\mu,\underline u)\in\mathcal A_\E(\mu_0)$ such that \[J_\K(\boldsymbol\eta,u)=J_\E(\mu,\underline u)=V_{\K}(\mu_0)=V_{\E}(\mu_0).\] \end{theorem} The proof of Theorem \ref{cor:E=K} follows by combining Theorem \ref{thm:minE} with Propositions \ref{prop:E>K} and \ref{prop:K>E} below. \medskip \begin{proposition}\label{prop:E>K} Let $\S:=(U,f,\mathcal C,\mathcal C_T)$ satisfy Assumption \ref{BA}. Let $\mu_0\in \mathscr{P}_p(\mathbb{R}^d)$. If $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}(\mu_0)$, then there exists $(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)$ such that $J_{\K}(\boldsymbol\eta,u)=J_{\E}(\mu,\underline u)$. In particular ${\ensuremath{\mathcal A}}_{\K}(\mu_0)\not=\emptyset$ and $V_\K(\mu_0)\le V_\E(\mu_0)$. \end{proposition} \begin{proof} Let $(\mu,\underline u) \in {\ensuremath{\mathcal A}}_\E(\mu_0)$. Applying Theorem \ref{thm:sup_princ} to $\mu$ and $v_t(x)=f(x,\underline u(t,x),\mu_t)$ we get $\boldsymbol\eta \in \mathscr{P}(\Gamma_T)$ such that $(e_t)_\sharp\boldsymbol\eta=\mu_t$ for every $t\in[0,T]$ and $\boldsymbol\eta$ is concentrated on the absolutely continuous solutions of $\dot \gamma(t) = v_t(\gamma(t))$. Condition \eqref{eq:momp} is automatically satisfied in view of the fact that $(\mu,\underline u) \in {\ensuremath{\mathcal A}}_\E(\mu_0)$. Then, for every $(t,\gamma)\in[0,T]\times\Gamma_T$ we define $u(t,\gamma):=\underline u(t,\gamma(t))$ so that $u$ is Borel measurable and we have $(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)$. Finally, since $(e_t)_\sharp \boldsymbol\eta = \mu_t$ for every $t\in[0,T]$, it follows that $J_{\K}(\boldsymbol\eta,u)=J_{\E}(\mu,\underline u)$. \end{proof} Under the Convexity Assumption \ref{CA} it also holds that $V_\K(\mu_0)\ge V_\E(\mu_0)$.
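Before turning to the converse inequality, it is worth recording a concrete instance of the construction in the proof of Proposition \ref{prop:E>K}. This is a side remark under an extra regularity assumption that is not required there: if the velocity field $v_t(x)=f(x,\underline u(t,x),\mu_t)$ happens to be Lipschitz continuous in $x$, uniformly in $t$, then the superposition measure provided by Theorem \ref{thm:sup_princ} is unique and is simply the law of the flow of $v$,
% Illustrative Lipschitz case (extra assumption, not needed in Proposition \ref{prop:E>K}):
% here \boldsymbol\eta is the push-forward of \mu_0 through the flow map x \mapsto X_\cdot(x) \in \Gamma_T.
\begin{equation*}
  \boldsymbol\eta = (X_\cdot)_\sharp\mu_0,
  \qquad\text{where}\qquad
  \dot X_t(x) = v_t(X_t(x)), \quad X_0(x) = x,
\end{equation*}
so that $\mu_t=(e_t)_\sharp\boldsymbol\eta=(X_t)_\sharp\mu_0$ for every $t\in[0,T]$. Without this extra regularity, $\boldsymbol\eta$ may genuinely split the mass of $\mu_0$ among different solution curves of the differential equation.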
\begin{proposition}\label{prop:K>E} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA}. Let $\mu_0\in \mathscr{P}_p(\mathbb{R}^d)$. If $(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)$, then there exists $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}(\mu_0)$ such that $J_{\E}(\mu,\underline u)\le J_{\K}(\boldsymbol\eta,u)$. In particular $V_\K(\mu_0)\ge V_\E(\mu_0)$. \end{proposition} \begin{proof} Let $(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)$. We firstly define $\mu_t:=(e_t)_\sharp\boldsymbol\eta$, for every $t\in[0,T]$. We introduce the continuous evaluation map $e:[0,T]\times\Gamma_T\to[0,T]\times\mathbb{R}^d$ by setting $$e(t,\gamma):=(t,\gamma(t))=(i_{[0,T]}(t),e_t(\gamma))$$ and we denote by $(t,x)\mapsto\tilde\boldsymbol\eta_{t,x}$ the Borel map obtained from the disintegration of ${\ensuremath{\mathcal L}}_T\otimes\boldsymbol\eta$ with respect to $e$, see Theorem \ref{thm:disint}. Then we define the function $\underline u:[0,T]\times\mathbb{R}^d\to U$ by \[\underline u(t,x):=\int_{[0,T]\times\Gamma_T}u(t,\gamma)\,\d\tilde\boldsymbol\eta_{t,x}(\gamma).\] Notice that $\underline u$ is Borel measurable thanks to the Borel measurability of $(t,x) \mapsto \tilde \boldsymbol\eta_{t,x}$. The measure $\tilde\boldsymbol\eta_{t,x}$ is concentrated on $\{t\}\times\{\gamma:\gamma(t)=x\}$, so that $\tilde\boldsymbol\eta_{t,x}=\delta_t\otimes\boldsymbol\eta_{t,x}$, where for any $t\in[0,T]$, the function $x\mapsto\boldsymbol\eta_{t,x}$ is the Borel map given by the disintegration of $\boldsymbol\eta$ with respect to the continuous map $e_t$. Hence we have also that \begin{equation}\label{ubaru} \underline u(t,x)=\int_{\Gamma_T}u(t,\gamma)\,\d\boldsymbol\eta_{t,x}(\gamma). 
\end{equation} Defining the set $A:=\{(t,\gamma)\in[0,T]\times\Gamma_T: \exists \, \dot\gamma(t) \, \text{ and } \dot\gamma(t) = f(\gamma(t),u(t,\gamma),\mu_t) \}$, by item (ii) of Definition \ref{def:K} we have that ${\ensuremath{\mathcal L}}_T\otimes\boldsymbol\eta(([0,T] \times \Gamma_T )\setminus A)=0$. Then, for ${\ensuremath{\mathcal L}}_T$-a.e. $t\in[0,T]$ we have \[ \dot\gamma(t)=f(\gamma(t),u(t,\gamma),\mu_t), \qquad \text{for $\boldsymbol\eta$-a.e. }\gamma\in\Gamma_T.\] Let $\varphi\in C^1_c(\mathbb{R}^d;\mathbb{R})$. For any $s,t\in[0,T]$, $s<t$, we have \begin{equation}\label{eq:continuity_KE} \begin{aligned} \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t(x) - \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_s(x) &= \int_{\Gamma_T}(\varphi(\gamma(t))- \varphi(\gamma(s)))\,\d\boldsymbol\eta(\gamma)\\ &=\int_{\Gamma_T}\int_s^t \frac{\d}{\d r}\varphi(\gamma(r))\,\d r\,\d\boldsymbol\eta(\gamma)\\ &=\int_{\Gamma_T}\int_s^t \nabla\varphi(\gamma(r))\cdot\dot\gamma(r)\,\d r\,\d\boldsymbol\eta(\gamma)\\ &=\int_s^t \int_{\Gamma_T}\nabla\varphi(\gamma(r))\cdot f(\gamma(r),u(r,\gamma),\mu_r)\,\d\boldsymbol\eta(\gamma)\,\d r. \end{aligned} \end{equation} Using the growth condition of $f$ in \eqref{f:growth} we have \begin{align*} \left| \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t(x) - \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_s(x)\right| \leq C\|\nabla\varphi\|_\infty \int_s^t \int_{\Gamma_T}(1+|\gamma(r)|+\mathrm{m}_p(\mu_r))\,\d\boldsymbol\eta(\gamma)\,\d r. \end{align*} By \eqref{eq:momp} the map $r\mapsto \int_{\Gamma_T}(1+|\gamma(r)|+\mathrm{m}_p(\mu_r))\,\d\boldsymbol\eta(\gamma)$ belongs to $L^1(0,T)$ and then the map $t\mapsto \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t(x)$ is absolutely continuous. Thanks to \eqref{eq:continuity_KE}, for ${\ensuremath{\mathcal L}}_T$-a.e. $t\in[0,T]$ we have \begin{align*} \frac{\d}{\d t}\int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t(x) =\int_{\Gamma_T}\nabla\varphi(\gamma(t))\cdot f(\gamma(t),u(t,\gamma),\mu_t)\,\d\boldsymbol\eta(\gamma). 
\end{align*} Using the affinity of $f$, the disintegration of $\boldsymbol\eta$ with respect to $e_t$, recalling that $\boldsymbol\eta_{t,x}$ is concentrated on $\{\gamma:\gamma(t)=x\}$, and by \eqref{ubaru} we obtain \begin{align*} &\int_{\Gamma_T}\nabla\varphi(\gamma(t))\cdot f(\gamma(t),u(t,\gamma),\mu_t)\,\d\boldsymbol\eta(\gamma)\\ &=\int_{\mathbb{R}^d}\int_{\Gamma_T}\nabla\varphi(\gamma(t))\cdot f(\gamma(t),u(t,\gamma),\mu_t)\,\d\boldsymbol\eta_{t,x}(\gamma)\,\d\mu_t(x)\\ &=\int_{\mathbb{R}^d}\int_{\Gamma_T}\nabla\varphi(x)\cdot f(x,u(t,\gamma),\mu_t)\,\d\boldsymbol\eta_{t,x}(\gamma)\,\d\mu_t(x)\\ &=\int_{\mathbb{R}^d}\nabla\varphi(x)\cdot f\left(x,\int_{\Gamma_{T}}u(t,\gamma)\,\d\boldsymbol\eta_{t,x}(\gamma),\mu_t\right)\,\d\mu_t(x)\\ &=\int_{\mathbb{R}^d}\nabla\varphi(x)\cdot f(x,\underline u(t,x),\mu_t)\,\d\mu_t(x). \end{align*} Then $\mu$ satisfies the continuity equation $\partial_t\mu_t+\div(v_t\mu_t)=0$ for the vector field $v_t(x)= f(x,\underline u(t,x),\mu_t)$ in the sense of distributions (see e.g. \cite[equation 8.1.4]{ambrosio2008gradient}). Since $|v_t(x)|^p\leq \tilde C(1+|x|^p+\mathrm{m}_p^p(\mu_t))$, from \eqref{eq:momp} it follows that $t\mapsto \|v_t\|_{L^p_{\mu_t}(\mathbb{R}^d;\mathbb{R}^d)}$ belongs to $L^p(0,T)$ and then $\mu \in \mathrm{AC}^p([0,T];\mathscr{P}_p(\mathbb{R}^d))$. Hence, $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}(\mu_0)$. 
Finally, by the convexity of ${\ensuremath{\mathcal C}}$ with respect to $u$ and Jensen's inequality we obtain \begin{equation*} \begin{split} &\int_0^T \int_{\Gamma_T} {\ensuremath{\mathcal C}}(\gamma(t),u(t,\gamma),\mu_t) \, \d \boldsymbol\eta(\gamma)\, \d t \\ &= \int_0^T \int_{\mathbb{R}^d}\int_{\Gamma_{T}}{\ensuremath{\mathcal C}}(x,u(t,\gamma),\mu_t) \, \d \boldsymbol\eta_{t,x}(\gamma)\d \mu_t(x)\, \d t \\ &\ge \int_0^T \int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}\left( x, \int_{\Gamma_{T}} u(t,\gamma)\,\d\boldsymbol\eta_{t,x}(\gamma),\mu_t\right) \,\d \mu_t(x) \,\d t\\ &= \int_0^T \int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}\left( x, \underline u(t,x),\mu_t\right) \,\d \mu_t(x)\, \d t. \end{split} \end{equation*} Hence we obtain $J_{\K}(\boldsymbol\eta,u) \ge J_{\E}(\mu,\underline u)$. \end{proof} \section{Equivalence of Eulerian and Lagrangian problems}\label{sec:E=L} In this section we study the equivalence between the Eulerian and Lagrangian formulations of the optimal control problem. We anticipate here the main results of this section. \begin{theorem}\label{cor:VL=VE} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA} and $(\Omega, \mathfrak B, \P)$ be a standard Borel space such that $\P$ is without atoms. If $X_0\in L^p(\Omega;\mathbb{R}^d)$, then $$V_{\pL}(X_0)= V_{\E}((X_0)_\sharp\P).$$ In particular, for any $X_0,X_0'\in L^p(\Omega;\mathbb{R}^d)$ such that $(X_0)_\sharp\P=(X_0')_\sharp\P$ it holds \[V_{\pL}(X_0)=V_{\pL}(X_0').\] \end{theorem} The proof of Theorem \ref{cor:VL=VE} follows immediately from Proposition \ref{prop:E<L}, Theorem \ref{prop:K>L} and Theorem \ref{cor:E=K}. \medskip We stress again that the Convexity Assumption \ref{CA} is sufficient to prove the existence of a minimizer for the Eulerian and Kantorovich optimal control problems (see Theorems \ref{thm:minE} and \ref{cor:E=K}).
However, in general, even under the Convexity Assumption \ref{CA}, the Lagrangian optimal control problem may fail to have minimizers, as we show in Section \ref{sec:counterexample}. \medskip If we remove the Convexity Assumption \ref{CA}, we can still give the following equivalence result. \begin{theorem}\label{cor:VL=VE'} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA} and let $\S'=(\mathscr{U},\mathscr{F},\mathscr{C},\mathscr{C}_T)$ be as in Definition \ref{def:relax_setting}. Let $(\Omega, \mathfrak B, \P)$ be a standard Borel space such that $\P$ is without atoms. Let $\L, \RL$ be the Lagrangian and Relaxed Lagrangian problems associated to $\S$. Let also $\L',\E'$ be the Lagrangian and Eulerian problems associated to $\S'$. If $X_0\in L^p(\Omega;\mathbb{R}^d)$, then $$V_{\pL}(X_0)=V_{\RL}(X_0)=V_{\pL'}(X_0)= V_{\E'}((X_0)_\sharp\P)=V_{\K'}((X_0)_\sharp\P).$$ \end{theorem} The proof of Theorem \ref{cor:VL=VE'} is postponed to the end of Section \ref{sec:LEK}. \subsection{Comparison between $\pL$, $\E$ and $\K$}\label{sec:LEK} We start by comparing the Eulerian and Lagrangian problems under the Convexity Assumption \ref{CA}. Assuming that $\P$ is without atoms, we further exhibit the equivalence between the associated value functions exploiting the Kantorovich formulation introduced in Section \ref{sec:K}. The following is a first comparison between the Eulerian and Lagrangian problems. \begin{proposition}\label{prop:E<L} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA} and $(\Omega, \mathfrak B, \P)$ be a probability space. If $X_0\in L^p(\Omega;\mathbb{R}^d)$ and $(X,u)\in{\ensuremath{\mathcal A}}_{\pL}(X_0)$, then there exists $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}((X_0)_\sharp\P)$ such that $J_{\pL}(X,u) \geq J_{\E}(\mu,\underline u)$. In particular, $V_{\pL}(X_0)\ge V_{\E}((X_0)_\sharp\P)$.
\end{proposition} \begin{proof} Let $(X,u)\in{\ensuremath{\mathcal A}}_{\pL}(X_0)$. We first define $\mu_t:= (X_t)_\sharp \P$, for every $t \in [0,T]$. Thanks to Proposition \ref{prop:equivSpaces} in Appendix \ref{appendix_A}, $X \in \mathrm{AC}^p([0,T]; L^p(\Omega;\mathbb{R}^d))$ so that $\mu \in \mathrm{AC}^p([0,T];\mathscr{P}_p(\mathbb{R}^d))$. We define $\sigma\in \mathscr{P}([0,T] \times \Omega\times U)$ by $\sigma:=\delta_{u(t,\omega)}\otimes\P \otimes {\ensuremath{\mathcal L}}_T$ and $\theta\in \mathscr{P}([0,T]\times \mathbb{R}^d\times U)$ by $\theta:=(i_{[0,T]},X_t,i_U)_\sharp\sigma$. Let $\pi^{1,2}: [0,T]\times \mathbb{R}^d \times U \to [0,T]\times \mathbb{R}^d$ be the projection map $(t,x,u) \mapsto (t,x)$, and observe that $\pi^{1,2}_\sharp\theta=\mu_t \otimes {\ensuremath{\mathcal L}}_T$. If we denote by $\theta_{t,x}$ the disintegration of $\theta$ with respect to $\pi^{1,2}$, then we have $\theta=\theta_{t,x}\otimes\mu_t \otimes {\ensuremath{\mathcal L}}_T$. We define now $\underline u:[0,T]\times \mathbb{R}^d\to U$ by \begin{equation}\label{eq:ubaru_theta} \underline u(t,x) := \int_U u \,\d \theta_{t,x}(u). \end{equation} Thanks to Theorem \ref{thm:disint}, the map $(t,x)\mapsto\theta_{t,x}$ is Borel measurable, so that $\underline u \in \mathrm{B}([0,T] \times \mathbb{R}^d;U)$. The rest of the proof follows the same lines as the proof of Proposition \ref{prop:K>E}. We write the details for the reader's convenience. Defining the set $A:=\{(t,\omega)\in[0,T]\times\Omega: \exists \,\dot X_t(\omega) \text{ and } \dot X_t(\omega) = f(X_t(\omega),u(t,\omega),\mu_t) \}$, by item (ii) of Definition \ref{def:L}, we have that ${\ensuremath{\mathcal L}}_T\otimes\P(([0,T] \times \Omega) \setminus A)=0$. Then, for ${\ensuremath{\mathcal L}}_T$-a.e. $t\in[0,T]$ it holds \[ \dot X_t(\omega)=f(X_t(\omega),u(t,\omega),\mu_t), \qquad \text{for $\P$-a.e. }\omega\in\Omega.\] Let $\varphi\in C^1_c(\mathbb{R}^d;\mathbb{R})$.
For any $s,t\in[0,T]$, $s<t$, we have \begin{equation}\label{eq:distr_mu} \begin{aligned} \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t(x) - \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_s(x) &= \int_{\Omega}(\varphi(X_t(\omega))- \varphi(X_s(\omega)))\,\d\P(\omega)\\ &=\int_{\Omega}\int_s^t \frac{\d}{\d r}\varphi(X_r(\omega))\,\d r\,\d\P(\omega)\\ &=\int_{\Omega}\int_s^t \nabla\varphi(X_r(\omega))\cdot\dot X_r(\omega)\,\d r\,\d\P(\omega)\\ &=\int_s^t \int_{\Omega}\nabla\varphi(X_r(\omega))\cdot f(X_r(\omega),u(r,\omega),\mu_r)\,\d\P(\omega)\,\d r. \end{aligned} \end{equation} Using the growth condition of $f$ in \eqref{f:growth} we have \begin{align*} \left| \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t(x) - \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_s(x)\right| \leq C\|\nabla\varphi\|_\infty \int_s^t \int_{\Omega}(1+|X_r(\omega)|+\mathrm{m}_p(\mu_r))\,\d\P(\omega)\,\d r. \end{align*} Thanks to \eqref{boundXt} in Proposition \ref{prop:estimatesL}, it follows that the map $r\mapsto \int_{\Omega}(1+|X_r(\omega)|+\mathrm{m}_p(\mu_r))\,\d\P(\omega)$ belongs to $L^1(0,T)$ so that the map $t\mapsto \int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t(x)$ is absolutely continuous. Then, from \eqref{eq:distr_mu} it holds that \begin{align*} \frac{\d}{\d t}\int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t(x) = \int_{\Omega}\nabla\varphi(X_t(\omega))\cdot f(X_t(\omega),u(t,\omega),\mu_t)\,\d\P(\omega), \quad \text{ for } {\ensuremath{\mathcal L}}_T\text{-a.e. } t\in[0,T]. \end{align*} For ${\ensuremath{\mathcal L}}_T$-a.e. $t \in [0,T]$, we denote now by $\sigma_t\in \mathscr{P}(\Omega\times U)$ and $\theta_t \in \mathscr{P}(\mathbb{R}^d\times U)$ the disintegrations of $\sigma$ and $\theta$ with respect to the projection maps $\pi^1_{\Omega}: [0,T] \times \Omega \times U \to [0,T]$, $\pi^1_{\mathbb{R}^d}: [0,T] \times \mathbb{R}^d \times U \to [0,T]$, respectively. It can be shown that $\theta_t=(X_t,i_U)_\sharp\sigma_t =\theta_{t,x}\otimes\mu_t$, for ${\ensuremath{\mathcal L}}_T$-a.e. $t \in [0,T]$.
Using the affinity of $f$ and the definition of $\underline u$ in \eqref{eq:ubaru_theta} we obtain \begin{align*} & \int_{\Omega}\nabla\varphi(X_t(\omega))\cdot f(X_t(\omega),u(t,\omega),\mu_t)\,\d\P(\omega)\\ &=\int_{\Omega\times U}\nabla\varphi(X_t(\omega))\cdot f(X_t(\omega),u,\mu_t)\,\d \sigma_t(\omega,u)\\ &=\int_{\mathbb{R}^d\times U}\nabla\varphi(x)\cdot f(x,u,\mu_t)\,\d\theta_t(x,u)\\ &=\int_{\mathbb{R}^d}\nabla\varphi(x)\cdot\int_U f(x,u,\mu_t)\,\d\theta_{t,x}(u)\,\d\mu_t(x) \\ &=\int_{\mathbb{R}^d}\nabla\varphi(x)\cdot f\left(x,\int_U u\,\d\theta_{t,x}(u),\mu_t\right)\,\d\mu_t(x)\\ &=\int_{\mathbb{R}^d}\nabla\varphi(x)\cdot f(x,\underline u(t,x),\mu_t)\,\d\mu_t(x). \end{align*} Then $\mu$ satisfies the continuity equation $\partial_t\mu_t+\div(v_t\mu_t)=0$ for the vector field $v_t(x):= f(x,\underline u(t,x),\mu_t)$ in the sense of distributions (see e.g. \cite[equation (8.1.4)]{ambrosio2008gradient}). Since $|v_t(x)|^p\leq \tilde C(1+|x|^p+\mathrm{m}_p^p(\mu_t))$, from \eqref{boundXt} it follows that $t\mapsto \|v_t\|_{L^p_{\mu_t}(\mathbb{R}^d;\mathbb{R}^d)}$ belongs to $L^p(0,T)$ and $\mu \in \mathrm{AC}^p([0,T];\mathscr{P}_p(\mathbb{R}^d))$. Hence, $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}(\mu_0)$.
Finally, by the convexity of ${\ensuremath{\mathcal C}}$ with respect to $u$ and Jensen's inequality we obtain \begin{equation*} \begin{split} &\int_0^T \int_{\Omega} {\ensuremath{\mathcal C}}(X_t(\omega),u(t,\omega),\mu_t) \, \d \P(\omega) \,\d t \\ &= \int_0^T \int_{\Omega\times U} {\ensuremath{\mathcal C}}(X_t(\omega),u,\mu_t) \, \d \sigma_{t}(\omega,u)\,\d t \\ &=\int_0^T \int_{\mathbb{R}^d\times U} {\ensuremath{\mathcal C}}(x,u,\mu_t) \, \d \theta_{t}(x,u)\, \d t \\ &=\int_0^T \int_{\mathbb{R}^d} \int_{U}{\ensuremath{\mathcal C}}(x,u,\mu_t) \, \d \theta_{t,x}(u)\,\d \mu_t(x)\, \d t \\ &\ge \int_0^T \int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}\left( x, \int_U u \, \d \theta_{t,x}(u) ,\mu_t\right) \,\d \mu_t(x) \d t\\ &= \int_0^T \int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}\left( x, \underline u(t,x),\mu_t\right) \,\d \mu_t(x)\, \d t. \end{split} \end{equation*} This readily implies that $J_{\pL}(X,u) \ge J_{\E}(\mu,\underline u)$. \end{proof} In the next lemma, {we are given an admissible pair $(\boldsymbol\eta,u)$ for the Kantorovich problem. Considering the evaluation map $Z_t(\gamma)=\gamma(t)$, we associate to $(\boldsymbol\eta,u)$ the pair $(Z,u)$ which is admissible for the Lagrangian problem with parametrization space $(\Gamma_T, {\ensuremath{\mathcal B}}_{\Gamma_T}, \boldsymbol\eta)$ and with the same cost as $(\boldsymbol\eta,u)$.} \begin{lemma}\label{l:fromKtoL} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA}. Denote by $Z(t,\gamma):= e_t(\gamma)$, for every $t \in [0,T]$ and $\gamma \in \Gamma_T$. If $\mu_0 \in \mathscr{P}_p(\mathbb{R}^d)$, $(\boldsymbol\eta,u) \in {\ensuremath{\mathcal A}}_\K(\mu_0)$ and we denote by $\pL_{\boldsymbol\eta} = \pL(\Gamma_T, {\ensuremath{\mathcal B}}_{\Gamma_T}, \boldsymbol\eta)$, then $(Z,u)\in {\ensuremath{\mathcal A}}_{\pL_{\boldsymbol\eta}}(e_0)$. Moreover, \begin{equation*} J_{\pL_{\boldsymbol\eta}}(Z,u)=J_{\K}(\boldsymbol\eta,u).
\end{equation*} \end{lemma} \begin{proof} Let $(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)$ and denote by $\pL_{\boldsymbol\eta}=\pL(\Gamma_T,{\ensuremath{\mathcal B}}_{\Gamma_T},\boldsymbol\eta)$. Denoting by $Z:[0,T]\times\Gamma_T\to\mathbb{R}^d$ the map defined by $Z(t,\gamma):=\gamma(t)=e_t(\gamma)$, let us show that $(Z,u)\in{\ensuremath{\mathcal A}}_{\pL_{\boldsymbol\eta}}(e_0)$. Since $Z(0,\cdot)_\sharp \boldsymbol\eta = \mu_0 \in \mathscr{P}_p(\mathbb{R}^d)$, we have $e_0 = Z(0,\cdot) \in L^p_{\boldsymbol\eta}(\Gamma_T)$. By item (i) in Definition \ref{def:K} we have $u \in \mathrm{B}([0,T] \times \Gamma_T;U)$. Thanks to \eqref{eq:momp} it readily follows that $Z \in L^p_{\boldsymbol\eta}(\Gamma_T; L^p(0,T;\mathbb{R}^d))$ and from Remark \ref{rem:etaACp} we actually have that $Z \in L^p_{\boldsymbol\eta}(\Gamma_T; \mathrm{AC}^p([0,T];\mathbb{R}^d))$. Moreover, from item (ii) of Definition \ref{def:K}, for $\boldsymbol\eta$-a.e. $\gamma \in \Gamma_T$, we have \[\dot Z(t,\gamma) = f(Z(t,\gamma), u(t,\gamma), Z(t,\cdot)_\sharp \boldsymbol\eta) \quad \text{ for } {\ensuremath{\mathcal L}}_T\text{-a.e. } t \in [0,T].\] Hence $(Z,u)\in{\ensuremath{\mathcal A}}_{\pL_{\boldsymbol\eta}}(e_0)$ and, by definition of $Z$, $J_{\pL_{\boldsymbol\eta}}(Z,u)=J_{\K}(\boldsymbol\eta,u)$. \end{proof} {When the parametrization space $(\Omega,\mathfrak B,\P)$ is fixed a priori, an interesting first comparison between the Kantorovich and Lagrangian problems is given below.} \begin{theorem}\label{prop:K>L} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA} with $U$ a convex compact subset of a separable Banach space $V$. Let $(\Omega,\mathfrak B,\P)$ be a standard Borel space such that $\P$ is without atoms.
If $\mu_0\in\mathscr{P}_p(\mathbb{R}^d)$ and $(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)$, then for every $X_0 \in L^p(\Omega;\mathbb{R}^d)$ with $(X_0)_\sharp\P=\mu_0$ there exists a sequence $(X^n,u^n)\in{\ensuremath{\mathcal A}}_{\pL}(X_0)$ such that \begin{equation}\label{eq:convKL} \lim_{n\to+\infty}J_\pL(X^n,u^n)=J_\K(\boldsymbol\eta,u). \end{equation} Moreover, for every $\mu_0\in\mathscr{P}_p(\mathbb{R}^d)$ and every $X_0 \in L^p(\Omega;\mathbb{R}^d)$ with $(X_0)_\sharp\P=\mu_0$ it holds \begin{equation}\label{VKgeVL} V_{\K}(\mu_0)\ge V_{\pL}(X_0). \end{equation} \end{theorem} \begin{proof} {\bf Step 1.} Let $(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)$. We denote by $\pL_{\boldsymbol\eta}:= \pL(\Gamma_T,{\ensuremath{\mathcal B}}_{\Gamma_T},\boldsymbol\eta)$. Defining $Z(t,\gamma):=\gamma(t)=e_t(\gamma)$, by Lemma \ref{l:fromKtoL} it holds that $(Z,u)\in{\ensuremath{\mathcal A}}_{\pL_{\boldsymbol\eta}}(e_0)$ and \begin{equation}\label{eq:JLetaeqJK} J_{\pL_{\boldsymbol\eta}}(Z,u)=J_{\K}(\boldsymbol\eta,u). \end{equation} Thanks to the continuity of the evaluation map $e_0: \Gamma_T \to \mathbb{R}^d$, we apply Proposition \ref{lemmaB} to the problem $\pL_{\boldsymbol\eta}$ in the Polish space $(\Gamma_T, {\ensuremath{\mathcal B}}_{\Gamma_T}, \boldsymbol\eta)$. Then there exists a sequence $(\bar Z^m,\bar u^m)\in{\ensuremath{\mathcal A}}_{\pL_{\boldsymbol\eta}}(e_0)$ such that $\bar Z^m:[0,T]\times\Gamma_T\to\mathbb{R}^d$ and $\bar u^m:[0,T]\times\Gamma_T\to U$ are continuous and \begin{equation}\label{eq:convJLeta} \lim_{m\to+\infty}J_{\pL_{\boldsymbol\eta}}(\bar Z^m,\bar u^m)=J_{\pL_{\boldsymbol\eta}}(Z,u). \end{equation} {\bf Step 2.} Let $(\Omega,\mathfrak B,\P)$ be a standard Borel space such that $\P$ is without atoms and $\tau$ be a Polish topology on $\Omega$ such that $\mathfrak B = {\ensuremath{\mathcal B}}_{(\Omega,\tau)}$.
Denote by $\pL = \pL(\Omega,{\ensuremath{\mathcal B}}_{(\Omega,\tau)},\P)$ the Lagrangian problem for the system $\S$. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$ with $(X_0)_\sharp\P=\mu_0$. Given $\bar Z\in C([0,T]\times\Gamma_T;\mathbb{R}^d)$ and $\bar u\in C([0,T]\times\Gamma_T;U)$ such that $(\bar Z,\bar u)\in{\ensuremath{\mathcal A}}_{\pL_{\boldsymbol\eta}}(e_0)$, let us prove that there exists a sequence $(\tilde X^n,\tilde u^n)\in{\ensuremath{\mathcal A}}_{\pL}(X_0)$ such that \begin{equation}\label{eq:convJLXtilde} \lim_{n\to+\infty}J_{\pL}(\tilde X^n,\tilde u^n)=J_{\pL_{\boldsymbol\eta}}(\bar Z,\bar u). \end{equation} We define the sets \begin{itemize} \item $\tilde{\Gamma}:=\left\{(x,\gamma)\in\mathbb{R}^d\times\Gamma_T\,:\,x=\gamma(0)=e_0(\gamma)\right\};$ \item $\Gamma_0:=\left\{\gamma\in\Gamma_T\,:\,\gamma(0)=e_0(\gamma)=0\right\},$ \end{itemize} and the continuous maps \begin{itemize} \item $r:\tilde\Gamma\to\mathbb{R}^d\times\Gamma_0$, $r(x,\gamma):=(x,\gamma-x)$, where $\gamma-x$ is the curve $t\mapsto\gamma(t)-x$. Notice that $r$ admits a left inverse $r^{-1}:\mathbb{R}^d\times\Gamma_0\to\tilde\Gamma$, $r^{-1}(x,\gamma_0)=(x,\gamma_0+x)$, that obviously satisfies $r^{-1}\circ r=i_{\tilde\Gamma}$; \item $s:\Omega\times\mathbb{R}^d\times\Gamma_0\to\Omega\times\tilde\Gamma$, $s(\omega,x,\gamma_0)=(\omega,x,\gamma_0+x)$. Observe that $s=(i_\Omega,r^{-1})$. \end{itemize} Let us consider the couplings \[\rho:=(i_\Omega, X_0)_\sharp\P\in\mathscr P(\Omega\times\mathbb{R}^d),\quad \tilde\eta:=(e_0, i_{\Gamma_T})_\sharp\boldsymbol\eta\in\mathscr P(\tilde\Gamma),\quad \hat\eta:=r_\sharp\tilde\eta\in\mathscr P(\mathbb{R}^d\times\Gamma_0).\] Notice that $\pi^2_\sharp\rho=\pi^1_\sharp\tilde\eta=\pi^1_\sharp\hat\eta=\mu_0$. We now choose a measure $\hat\sigma\in\mathscr P(\Omega\times\mathbb{R}^d\times\Gamma_0)$ satisfying $\pi^{1,2}_\sharp\hat\sigma=\rho$ and $\pi^{2,3}_\sharp\hat\sigma=\hat\eta$; such a measure exists, since the couplings $\rho$ and $\hat\eta$ can be glued along their common marginal $\mu_0$.
Since $\P$ is without atoms and $\pi^1_\sharp\rho=\P$, also $\rho\in\mathscr{P}(\Omega\times\mathbb{R}^d)$ is without atoms. Applying Lemma \ref{lemma:young} with $\mathbb{T}=\Omega\times\mathbb{R}^d$, $S = \Gamma_0$, $\lambda=\rho$ and $\nu=\hat\sigma$, there exists a sequence of Borel maps $\hat w_n:\Omega\times\mathbb{R}^d\to\Gamma_0$ such that \begin{equation}\label{eq:hatyoung} \hat\sigma_n:=(i_{\Omega\times\mathbb{R}^d},\hat w_n)_\sharp\rho\xrightarrow{\mathcal{Y}}\hat\sigma, \quad \textrm{ as }n\to+\infty. \end{equation} Define $w_n:\Omega\times\mathbb{R}^d\to\Gamma_T$ by $w_n(\omega,x):=\hat w_n(\omega,x)+x$ and note that $s(\omega,x,\hat w_n(\omega,x))=(\omega,x,w_n(\omega,x))$. Thanks to the continuity of $s$, the map $s_\sharp$ is weakly continuous. From Remark \ref{rem:Young-weak}, by the composition rule \eqref{eq:composition} and \eqref{eq:hatyoung}, we have that \begin{equation}\label{eq:convYn} \tilde\sigma_n:=s_\sharp\hat\sigma_n=(i_{\Omega\times\mathbb{R}^d}, w_n)_\sharp\rho\xrightarrow{\mathcal{Y}}s_\sharp\hat\sigma=:\tilde\sigma\in\mathscr P(\Omega\times\mathbb{R}^d\times\Gamma_T), \quad \textrm{ as }n\to+\infty. \end{equation} From \eqref{eq:composition}, a direct computation shows \begin{align*} &\pi^{1,2}_\sharp\tilde\sigma=\rho,\\ &\tilde\eta=r^{-1}_\sharp\hat\eta=\left(r^{-1}\circ\pi^{2,3}\right)_\sharp\hat\sigma=\left( r^{-1}\circ\pi^{2,3}\circ s^{-1} \right)_\sharp\tilde\sigma=\pi^{2,3}_\sharp\tilde\sigma. \end{align*} We define $\boldsymbol\eta^n:=\pi^3_\sharp\tilde\sigma_n\in\mathscr{P}(\Gamma_T)$. Observing that $\pi^3_\sharp\tilde\sigma=\boldsymbol\eta$, by \eqref{eq:convYn} we have $\boldsymbol\eta^n\to\boldsymbol\eta$ weakly in $\mathscr{P}(\Gamma_T)$. Notice also that $(e_0)_\sharp\boldsymbol\eta^n=(e_0\circ w_n\circ(i_\Omega, X_0))_\sharp\P=(X_0)_\sharp\P=\mu_0$ and $(e_0)_\sharp\boldsymbol\eta=\mu_0$.
For every $n \in \mathbb{N}$, denote by $\pL_{\boldsymbol\eta^n}:= \pL(\Gamma_T, {\ensuremath{\mathcal B}}_{\Gamma_T}, \boldsymbol\eta^n)$ the Lagrangian problem for the system $\S$. Since $e_0$, $\bar u$ are continuous, we can apply Proposition \ref{lemmaA} in the probability space $(\Gamma_T, {\ensuremath{\mathcal B}}_{\Gamma_T}, \boldsymbol\eta)$, with $\boldsymbol\eta^n, \boldsymbol\eta\in\mathscr{P}(\Gamma_T)$ and initial datum $e_0$. Thus if $(Z^n,\bar u)\in{\ensuremath{\mathcal A}}_{\pL_{\boldsymbol\eta^n}}(e_0)$, we have that \begin{equation}\label{eq:convJLetan} \lim_{n\to+\infty}J_{\pL_{\boldsymbol\eta^n}}(Z^n,\bar u)=J_{\pL_{\boldsymbol\eta}}(\bar Z,\bar u). \end{equation} Finally, for any $n\in\mathbb{N}$, we define the pair $(\tilde X^n,\tilde u^n)$ by \begin{align*} \tilde X^n:[0,T]\times\Omega\to\mathbb{R}^d,\quad &\quad \tilde X^n(t,\omega):=Z^n(t,w_n(\omega,X_0(\omega)));\\ \tilde u^n:[0,T]\times\Omega\to U,\quad &\quad \tilde u^n(t,\omega):=\bar u(t,w_n(\omega,X_0(\omega))). \end{align*} Observe that $\tilde X^n(0,\omega)=Z^n(0,w_n(\omega, X_0(\omega)))=e_0(w_n(\omega, X_0(\omega)))=X_0(\omega)$. Moreover, thanks to the composition rule \eqref{eq:composition} we have $\pi^3_\sharp\tilde\sigma_n=(w_n)_\sharp\rho$ so that \[(\tilde X^n_t)_\sharp\P=(Z^n_t\circ w_n\circ (i_\Omega, X_0))_\sharp\P=(Z^n_t)_\sharp\boldsymbol\eta^n.\] By construction we have $(\tilde X^n,\tilde u^n)\in{\ensuremath{\mathcal A}}_\pL(X_0)$ and it is immediate to verify that \[J_{\pL}(\tilde X^n,\tilde u^n)=J_{\pL_{\boldsymbol\eta^n}}(Z^n,\bar u).\] Then, by \eqref{eq:convJLetan} we obtain \eqref{eq:convJLXtilde}. {\bf Step 3.} We apply Step 2 to the sequence $(\bar Z^m,\bar u^m)$ constructed in Step 1. 
Fix $m \in \mathbb{N}$, then there exists a sequence $(\tilde X^{m,n},\tilde u^{m,n})_{n \in \mathbb{N}}$ such that $(\tilde X^{m,n},\tilde u^{m,n})\in{\ensuremath{\mathcal A}}_{\pL}(X_0)$ for every $n \in \mathbb{N}$ and \begin{equation*} \lim_{n\to+\infty}J_{\pL}(\tilde X^{m,n},\tilde u^{m,n})=J_{\pL_{\boldsymbol\eta}}(\bar Z^m,\bar u^m), \qquad \forall\,m\in\mathbb{N}. \end{equation*} Thanks to \eqref{eq:convJLeta}, by a simple diagonal argument we can select a (not relabelled) sequence $(X^{n},u^{n})\in{\ensuremath{\mathcal A}}_{\pL}(X_0)$ satisfying \[\lim_{n\to+\infty}J_{\pL}(X^{n},u^{n})=J_{\pL_{\boldsymbol\eta}}(Z, u),\] where $(Z,u)$ are defined in Step 1. From \eqref{eq:JLetaeqJK} we finally get \eqref{eq:convKL}. {\bf Step 4.} By \eqref{eq:convKL} and the definition of $V_\pL$, for any $\varepsilon>0$ there exists $n_\varepsilon>0$ such that for $n\ge n_\varepsilon$ \[V_{\pL}(X_0)\le J_{\pL}(X^n,u^n)\le J_{\K}(\boldsymbol\eta,u)+\varepsilon.\] From the arbitrariness of $\varepsilon >0$ we have \[V_{\pL}(X_0)\le J_{\K}(\boldsymbol\eta,u), \quad \forall \, (\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0),\] hence the required inequality. \end{proof} \begin{remark}\label{rem:minimizerLeta} Under the Convexity Assumption \ref{CA}, if $(\boldsymbol\eta,u) \in {\ensuremath{\mathcal A}}_{\K}(\mu_0)$ is an optimal pair, then $(Z,u)$ given by Lemma \ref{l:fromKtoL} is optimal for the Lagrangian problem $\pL_{\boldsymbol\eta}$. This is a consequence of Theorem \ref{cor:E=K}, Proposition \ref{prop:E<L} and of \eqref{VKgeVL} in Theorem \ref{prop:K>L}. \end{remark} We conclude the section with the proof of Theorem \ref{cor:VL=VE'}. \begin{proof}[Proof of Theorem \ref{cor:VL=VE'}] Thanks to Proposition \ref{prop:ConvexRL} and Remark \ref{rem:RL-L} the relaxed Lagrangian problem $\RL$ in $\S$ coincides with $\pL'$ in $\S'$. 
Precisely, for every $X_0\in L^p(\Omega;\mathbb{R}^d)$ it holds ${\ensuremath{\mathcal A}}_{\RL}(X_0)={\ensuremath{\mathcal A}}_{\pL'}(X_0)$. Moreover, $J_{\RL}(X,\sigma)=J_{\pL'}(X,\sigma)$ for every $(X,\sigma)\in {\ensuremath{\mathcal A}}_{\RL}(X_0)={\ensuremath{\mathcal A}}_{\pL'}(X_0)$. Hence, $V_{\RL}(X_0)=V_{\pL'}(X_0)$. The equality $V_{\pL}(X_0)=V_{\RL}(X_0)$ follows from Theorem \ref{cor:VL=VRL}. Finally, Theorem \ref{cor:VL=VE} yields $V_{\pL'}(X_0)= V_{\E'}((X_0)_\sharp\P)=V_{\K'}((X_0)_\sharp\P)$. \end{proof} \subsection{Continuity of $V_\E$, $V_\K$ and $V_\pL$} Here, we prove continuity results for the value functions of the various proposed formulations. \begin{theorem}[Continuity of $V_\E$ and $V_\K$]\label{th:contVE} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA}. If $\mu_0 \in \mathscr{P}_p(\mathbb{R}^d)$ and $\{\mu_0^n\}_{n\in\mathbb{N}}\subset \mathscr{P}_p(\mathbb{R}^d)$ is a sequence such that $W_p(\mu_0^n, \mu_0) \to 0$ as $n \to +\infty$, then \begin{equation*} \lim_{n \to +\infty} V_{\E}(\mu_0^n) = V_\E(\mu_0), \qquad \lim_{n \to +\infty} V_{\K}(\mu_0^n) = V_\K(\mu_0). \end{equation*} \end{theorem} \begin{proof} Let $\mu_0^n$ converge to $\mu_0$ in $\mathscr{P}_p(\mathbb{R}^d)$. By Proposition \ref{prop:Skorohod} with $S = \mathbb{R}^d $, there exist $X_0,X_0^n\in \mathrm{B}([0,1];\mathbb{R}^d)$ such that $(X_0)_\sharp{\ensuremath{\mathcal L}}_1=\mu_0$, $(X^n_0)_\sharp{\ensuremath{\mathcal L}}_1=\mu^n_0$ and $X_0^n(\omega)\to X_0(\omega)$ for ${\ensuremath{\mathcal L}}_1$-a.e. $\omega \in [0,1]$. Since $\mu_0^n, \mu_0 \in \mathscr{P}_p(\mathbb{R}^d)$ we have $X_0^n, X_0 \in L^p([0,1];\mathbb{R}^d)$. 
Moreover, by the convergence $W_p(\mu_0^n,\mu_0)\to0$ and Proposition \ref{prop:wassconv} there exists $\psi:[0,+\infty)\to[0,+\infty)$ admissible (according to Definition \ref{def:admphi}) such that \begin{equation} \sup_{n \in \mathbb{N}}\int_{[0,1]} \psi(|X^n_0(t)|^p)\,\d{\ensuremath{\mathcal L}}_1(t) = \sup_{n \in \mathbb{N}}\int_{\mathbb{R}^d} \psi(|x|^p)\,\d\mu_0^n(x) < +\infty. \end{equation} Thanks to the Vitali convergence theorem we get \[\|X_0^n-X_0\|_{L^p([0,1];\mathbb{R}^d)}\to 0\quad \text{as } n\to+\infty.\] Applying Proposition \ref{prop:uscVL} to the Lagrangian problem in $\S$ with $(\Omega, {\ensuremath{\mathcal B}}, \P) = ([0,1], {\ensuremath{\mathcal B}}_{[0,1]}, {\ensuremath{\mathcal L}}_1)$ we get $\limsup_{n\to +\infty} V_{\pL} (X_0^n) \leq V_{\pL}(X_0)$. Theorem \ref{cor:VL=VE} yields \[\limsup_{n \to +\infty} V_{\E}(\mu_0^n) = \limsup_{n\to +\infty} V_{\pL} (X_0^n) \leq V_{\pL}(X_0) = V_\E(\mu_0).\] By Proposition \ref{prop:lscVE} we get $\lim_{n \to +\infty} V_{\E}(\mu_0^n) = V_\E(\mu_0)$. Finally, the continuity of $V_\K$ follows by Theorem \ref{cor:E=K} and the continuity of $V_\E$. \end{proof} \begin{theorem}[Continuity of $V_\pL$]\label{th:contVL} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA} and $(\Omega, \mathfrak B, \P)$ be a standard Borel space such that $\P$ is without atoms. If $X_0\in L^p(\Omega;\mathbb{R}^d)$ and $\{X_0^n\}_{n\in\mathbb{N}}\subset L^p(\Omega;\mathbb{R}^d)$ is a sequence such that $\|X_0^n-X_0\|_{L^p(\Omega;\mathbb{R}^d)}\to 0$ as $n \to +\infty$, then \begin{equation*} \lim_{n \to +\infty} V_{\pL}(X_0^n) = V_\pL(X_0). \end{equation*} \end{theorem} \begin{proof} From Theorem \ref{cor:VL=VE'} we have $V_{\pL}(X_0^n) = V_{\E'}((X_0^n)_\sharp \P)$ and $V_\pL(X_0)= V_{\E'}((X_0)_\sharp \P)$. The application of Theorem \ref{th:contVE} to $\E'$ in $\S' = (\mathscr{U},\mathscr{F},\mathscr{C},\mathscr{C}_T)$ (see Definition \ref{def:relax_setting}) concludes the proof.
\end{proof} \subsection{A counterexample: Non-existence of minimizers for $\pL$}\label{sec:counterexample} In the previous sections we have shown that, under the Convexity Assumption \ref{CA}, the Eulerian and Kantorovich problems always admit a minimizer, see Theorems \ref{thm:minE}, \ref{cor:E=K}. This is not always true in the Lagrangian setting. Existence of minimizers has been shown in Remark \ref{rem:minimizerLeta} in the very particular case $\pL = \pL_{\boldsymbol\eta}$, where $\boldsymbol\eta$ is optimal for a Kantorovich problem. In general, for a given parametrization space $(\Omega, \mathfrak B,\P)$, the choice of the initial condition is relevant as highlighted in the following. \begin{theorem} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA} and $(\Omega, \mathfrak B,\P)$ be a standard Borel space such that $\P$ is without atoms. If $\mu_0\in \mathscr{P}_p(\mathbb{R}^d)$ then there exists $X_0 \in L^p(\Omega;\mathbb{R}^d)$ with $(X_0)_\sharp \P = \mu_0$ and $(X,u) \in {\ensuremath{\mathcal A}}_\pL(X_0)$ such that \begin{equation}\label{eq:JL=VL} J_\pL(X,u) = V_\pL(X_0) = V_\E(\mu_0). \end{equation} \end{theorem} \begin{proof} Let $\mu_0 \in \mathscr{P}_p(\mathbb{R}^d)$, by Theorem \ref{cor:E=K} there exists $(\boldsymbol\eta,\bar u) \in {\ensuremath{\mathcal A}}_{\K}(\mu_0)$ such that $J_\K(\boldsymbol\eta,\bar u) = V_\K(\mu_0)$. Fix $\tau$ a Polish topology on $\Omega$ such that $\mathfrak B = {\ensuremath{\mathcal B}}_{(\Omega,\tau)}$. Since $\P$ is without atoms, thanks to Proposition \ref{prop:Skorohod} there exists a Borel map $\psi: \Omega \to \Gamma_T$ such that $\psi_\sharp \P = \boldsymbol\eta$. For every $t \in [0,T]$ we define $X_t:= e_t \circ \psi$ and the Borel map $u(t,\omega):= \bar u(t, \psi(\omega))$. 
Using the same techniques as in the proof of Lemma \ref{l:fromKtoL} we deduce that $(X,u) \in {\ensuremath{\mathcal A}}_\pL(X_0)$ (where $X_0 := e_0 \circ \psi$) and $J_\pL(X,u) = J_\K(\boldsymbol\eta,\bar u)$. By Theorems \ref{cor:VL=VE} and \ref{cor:E=K} we finally get \eqref{eq:JL=VL}. \end{proof} In general, if the initial condition $X_0$ is assigned a priori, existence of minimizers for the Lagrangian problem is not guaranteed. We consider the Wasserstein barycenter problem, for which we study the Eulerian and Lagrangian formulations. In particular, we exhibit an initial datum $X_0$ whose corresponding Lagrangian problem does not admit minimizers. We stress that the system under consideration satisfies the Convexity Assumption \ref{CA}. \subsubsection{Wasserstein barycenter problem: Eulerian formulation}\label{subsec:ExBarE} We consider the setting $\S = (U, f, {\ensuremath{\mathcal C}}, {\ensuremath{\mathcal C}}_T)$ as follows: let $U := \overline{B_R(0)}\subset \mathbb{R}^d$, for some $R >0$ sufficiently large, $T=1$ and $p =2$. We fix $\nu\in\mathscr{P}(\mathbb{R}^d)$ with compact support. We consider the velocity field $f:\mathbb{R}^d\times U\times \mathscr{P}_2(\mathbb{R}^d)\to \mathbb{R}^d$, the cost functions ${\ensuremath{\mathcal C}}:\mathbb{R}^d\times U\times \mathscr{P}_2(\mathbb{R}^d)\to [0,+\infty)$ and ${\ensuremath{\mathcal C}}_T:\mathbb{R}^d\times \mathscr{P}_2(\mathbb{R}^d)\to [0,+\infty)$ defined by \[ f(x,u,\mu) = u, \qquad {\ensuremath{\mathcal C}}(x,u,\mu) = |u|^2, \qquad {\ensuremath{\mathcal C}}_T(x,\mu) = W^2_2(\mu, \nu). \] In this setting, the cost functional has the form \[ J_{\E}(\mu,\underline u) = \int_0^1 \int_{\mathbb{R}^d} |\underline u(t,x)|^2 \d \mu_t(x) \d t + W_2^2(\mu_1, \nu). 
\] For any $\mu_0\in\mathscr{P}_2(\mathbb{R}^d)$, the associated value function is given by \begin{equation}\label{ExBarE} V_{\E}(\mu_0) := \inf_{(\mu,\underline u) \in {\ensuremath{\mathcal A}}_{\E}(\mu_0)} J_{\E}(\mu,\underline u), \end{equation} and recall that by Theorem \ref{thm:minE} the infimum in \eqref{ExBarE} is actually a minimum. Let us now fix $\mu_0 \in \mathscr{P}(\mathbb{R}^d)$ with compact support and characterize the value function and the corresponding minimizers. By the Benamou-Brenier formula \eqref{B&B}, we have the lower bound \begin{equation}\label{ExBar2} \begin{split} \inf_{(\mu,\underline u) \in {\ensuremath{\mathcal A}}_{\E}(\mu_0)} \left(\int_0^1 \int_{\mathbb{R}^d} |\underline u(t,x)|^2 \d \mu_t(x) \d t + W_2^2(\mu_1, \nu) \right) \geq \inf_{\mu_1\in\mathscr{P}_2(\mathbb{R}^d)} \left[W_2^2(\mu_0, \mu_1) + W_2^2(\mu_1, \nu) \right]. \end{split} \end{equation} Using the triangle inequality, it is easy to prove that \begin{equation}\label{ExBar1} W_2^2(\mu_0, \mu_1) + W_2^2(\mu_1, \nu)\ge \frac12 W_2^2(\mu_0, \nu), \qquad \forall\, \mu_1\in\mathscr{P}_2(\mathbb{R}^d), \end{equation} and, for any constant speed Wasserstein geodesic $\{\sigma_t\}_{t\in[0,1]}$ such that $\sigma_0=\mu_0$ and $\sigma_1=\nu$, the measure $\mu_1=\sigma_{1/2}$ realizes the equality in \eqref{ExBar1}. Since the supports of $\mu_0$ and $\nu$ are compact, then the support of $\sigma_{1/2}$ is compact and, denoting by $\{\mu_t\}_{t\in[0,1]}$ a Wasserstein geodesic joining $\mu_0$ to $\sigma_{1/2}$, we also have that a vector field $\underline u$ realizing the equality \begin{equation}\label{ExBar3} \int_0^1 \int_{\mathbb{R}^d} |\underline u(t,x)|^2 \d \mu_t(x) \d t = W_2^2(\mu_0, \sigma_{1/2}) \end{equation} is bounded (see e.g. \cite[Section~5.4]{santambrogio2015optimal}). Then, using $\underline u$ satisfying \eqref{ExBar3} and choosing $R$ sufficiently large, we obtain the equality in \eqref{ExBar2}. 
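For completeness, the bound \eqref{ExBar1} follows by combining the triangle inequality for $W_2$ with the elementary estimate $(a+b)^2\le 2(a^2+b^2)$:

```latex
% Derivation of (ExBar1): triangle inequality for W_2,
% then (a+b)^2 <= 2(a^2+b^2) with a = W_2(mu_0,mu_1), b = W_2(mu_1,nu).
W_2^2(\mu_0,\nu)
  \le \bigl( W_2(\mu_0,\mu_1) + W_2(\mu_1,\nu) \bigr)^2
  \le 2\bigl( W_2^2(\mu_0,\mu_1) + W_2^2(\mu_1,\nu) \bigr).
```

Equality throughout forces $W_2(\mu_0,\mu_1)=W_2(\mu_1,\nu)=\frac12 W_2(\mu_0,\nu)$, i.e. $\mu_1$ must be a midpoint of a constant speed geodesic from $\mu_0$ to $\nu$, which is exactly the role played by $\sigma_{1/2}$.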
The value of the minimum is \begin{equation}\label{eq:min_wasserstein_bar} V_\E(\mu_0)= W_2^2(\mu_0, \sigma_{1/2}) + W_2^2(\sigma_{1/2}, \nu) = \frac12 W_2^2(\mu_0, \nu). \end{equation} Notice that the minimizer $(\mu, \underline u)$ is not unique a priori. If at least one of the measures $\nu$ and $\mu_0$ is absolutely continuous with respect to ${\ensuremath{\mathcal L}}^d$, then the geodesic $\{\sigma_t\}_{t\in[0,1]}$ is unique and the map $\eta \mapsto W^2_2(\eta, \nu)$ is strictly convex. In this case $\sigma_{1/2}$ is the {unique} minimizer of the functional $\eta \mapsto W^2_2(\mu_0, \eta) + W^2_2(\eta, \nu)$ and the pair $(\mu,\underline u)$, with $\mu_t:=\sigma_{t/2}$ for all $t\in[0,1]$, is the (unique) minimizer for the Eulerian problem. \subsubsection{Wasserstein barycenter problem: Lagrangian formulation}\label{subsec:ExBarL} Let $(\Omega, \mathfrak B, \P)$ be a standard Borel space such that $\P\in\mathscr{P}(\Omega)$ is without atoms. The Lagrangian cost functional of the Wasserstein barycenter problem is given by \begin{equation} J_{\pL}(X,u):= \int_0^1 \int_{\Omega} |u_t(\omega)|^2 \d \P(\omega) \d t + W_2^2((X_1)_\sharp\P, \nu). \end{equation} For any $X_0 \in L^2(\Omega;\mathbb{R}^d)$, the corresponding value function is \begin{equation*} V_{\pL}(X_0) = \inf_{(X,u) \in {\ensuremath{\mathcal A}}_{\pL}(X_0)} J_{\pL}(X,u). \end{equation*} Since $(X,u) \in {\ensuremath{\mathcal A}}_{\pL}(X_0)$ satisfies, for $\P$-a.e. $\omega\in\Omega$, the system \begin{equation*} \begin{system} \dot X_t(\omega) = u_t(\omega), \qquad \text{ for } {\ensuremath{\mathcal L}}_1\text{-a.e. 
} t \in (0,1) \\ X|_{t = 0}(\omega) = {X_0(\omega)}, \end{system} \end{equation*} we have \begin{equation}\label{eq:ex:lagrangian} \begin{split} \int_0^1 &\int_{\Omega} |u_t(\omega)|^2 \d \P(\omega) \d t \geq \int_{\Omega} \left| \int_0^1 u_t(\omega) \d t \right|^2\d \P(\omega) \\ &=\int_{\Omega} \left| \int_0^1 \dot X_t(\omega) \d t \right|^2\d \P(\omega) = \int_{\Omega} \left| X_1(\omega) - X_0(\omega) \right|^2\d \P(\omega) \end{split} \end{equation} where we have applied Fubini's theorem and Jensen's inequality. Notice that the inequality in \eqref{eq:ex:lagrangian} becomes an equality if $(X,u)$ belongs to the restricted admissibility class given by \begin{equation*} \begin{split} \bar {\ensuremath{\mathcal A}}_{\pL}(X_0):= \lbrace (X,u)\in {\ensuremath{\mathcal A}}_{\pL}(X_0)\,: \; u_t(\omega) = \bar u(\omega), \, \forall t \in [0,1], \text{ for $\P$-a.e. }\omega\in\Omega \rbrace. \end{split} \end{equation*} Suppose now that $\mu_0:= (X_0)_\sharp \P $ has compact support (i.e. $X_0$ is bounded); then we can compare the Lagrangian and Eulerian formulations of the Wasserstein barycenter problem. Indeed, for $R$ sufficiently large we have \begin{equation}\label{ExBarL1} \begin{split} V_{\pL}(X_0) &=\inf_{(X,\bar u) \in \bar{\ensuremath{\mathcal A}}_{\pL}(X_0)} \left(\int_{\Omega} |X_1(\omega)-X_0(\omega)|^2 \d \P(\omega) + W_2^2((X_1)_\sharp\P, \nu) \right) \\ =V_{\E}(\mu_0) &= \min_{\mu_1 \in \mathscr{P}_2(\mathbb{R}^d)}\left(W^2_2(\mu_0, \mu_1) + W^2_2(\mu_1, \nu) \right) = \frac12 W_2^2(\mu_0, \nu), \end{split} \end{equation} where the first equality follows from the choice $(X,\bar u) \in \bar{\ensuremath{\mathcal A}}_{\pL}(X_0)$, the second equality is given by Theorem \ref{cor:VL=VE} and the last two equalities are exactly \eqref{eq:min_wasserstein_bar}. \medskip We now exhibit an example where the infimum for the Lagrangian problem is not a minimum.
Let us consider $\Omega = [0,1]$, $\P = {\ensuremath{\mathcal L}}_1$ and fix the dimension $d=2$.\\ We set $\nu:= {\ensuremath{\mathcal L}}^2 \mres [0,1]^2$, $X_0:[0,1]\to\mathbb{R}^2$ defined by $X_0(\omega) = (1/2,\omega)$. We observe that $\mu_0:= (X_0)_\sharp\P=\mathscr{H}^1\mres (\lbrace 1/2 \rbrace \times [0,1])$. We also notice that $X_0^{-1}:\mathbb{R}^2\to [0,1]$, defined $\mu_0$-a.e. has the form $X_0^{-1}(1/2,\omega)=\omega$.\\ Since \begin{equation*} \begin{split} \int_{[0,1]} \left| X_1(\omega) - X_0(\omega) \right|^2\d \P(\omega) = \int_{\mathbb{R}^2} \left| X_1(X_0^{-1}(x)) - x \right|^2\d \mu_0(x), \end{split} \end{equation*} and defining \begin{equation}\label{ExBarLB} \begin{split} \mathscr{B}_{\mu_0} &:=\lbrace X_1\circ X_0^{-1} : X_1 = X_0 + \bar u \, \text{ and } (X,\bar u) \in \bar {\ensuremath{\mathcal A}}_{\pL}(X_0) \rbrace \\ &= \lbrace Y\in L^2((\mathbb{R}^2,\mu_0);\mathbb{R}^2): |Y(x)-x|\leq R, \text{ for $\mu_0$-a.e. }x\in\mathbb{R}^2 \rbrace , \end{split} \end{equation} we easily get \begin{equation}\label{ExBarL2} \begin{split} V_{\pL}(X_0) &= \inf_{(X,\bar u) \in \bar {\ensuremath{\mathcal A}}_{\pL}(X_0)} J_{\pL}(X,\bar u) \\ &= \inf_{Y \in \mathscr{B}_{\mu_0}} \left[ \int_{\mathbb{R}^2} \left| Y(x) - x \right|^2\d \mu_0(x) + W^2_2(Y_\sharp \mu_0, \nu)\right]. \end{split} \end{equation} As already observed at the end of subsection \ref{subsec:ExBarE}, since $\nu$ is absolutely continuous with respect to ${\ensuremath{\mathcal L}}^2$, there exists a unique geodesic $\{\sigma_t\}_{t\in[0,1]}$ joining $\mu_0$ to $\nu$. Furthermore, $\sigma_{1/2}$ is the {unique} minimizer of the functional $\eta \mapsto W^2_2(\mu_0, \eta) + W^2_2(\eta, \nu)$. 
Then, for $R$ sufficiently large, from \eqref{ExBarL1} and \eqref{ExBarL2} we know that \begin{equation}\label{ExBarL3} \begin{split} W^2_2(\mu_0, \sigma_{1/2}) + W^2_2(\sigma_{1/2}, \nu) &=\inf_{Y \in \mathscr{B}_{\mu_0}} \left[ \int_{\mathbb{R}^d} \left| Y(x) - x \right|^2\d \mu_0(x) + W^2_2(Y_\sharp \mu_0, \nu)\right] \\ &\leq \inf_{\substack{Y \in \mathscr{B}_{\mu_0} \\ Y_\sharp \mu_0 = \sigma_{1/2}} } \int_{\mathbb{R}^d} \left| Y(x) - x \right|^2\d \mu_0(x) + W^2_2(\sigma_{1/2}, \nu). \end{split} \end{equation} On the other hand, since $\mathop{\rm supp}\nolimits(\sigma_{1/2}) \subset [0,1]^2$, by \cite[Theorem~B]{pratelli2007equality} we have \begin{equation}\label{OTP} W^2_2(\mu_0, \sigma_{1/2}) = \inf_{\substack{Y \in \mathscr{B}_{\mu_0} \\ Y_\sharp \mu_0 = \sigma_{1/2}} } \int_{\mathbb{R}^d} \left| Y(x) - x \right|^2\d \mu_0(x) \end{equation} and, consequently, equality holds in \eqref{ExBarL3}. Moreover, the infimum in \eqref{OTP} is not attained. Indeed, the map $T:\mathbb{R}^2\to \mathbb{R}^2$ defined by $T(x_1,x_2)=(1/2,x_2)$ satisfies $T_\sharp\nu=\mu_0$ and $T=\nabla\varphi$ for $\varphi(x_1,x_2)=\frac{1}{2}(x_1+x_2^2)$, therefore $T$ is the optimal transport map from $\nu$ to $\mu_0$. The unique geodesic joining $\mu_0$ to $\nu$ is $\sigma_t=(t(x_1,x_2)+(1-t)(1/2,x_2))_\sharp\nu$ and $\sigma_{1/2}=(\frac12x_1+\frac14,x_2)_\sharp\nu$ coincides with the uniform probability measure on $[1/4,3/4]\times[0,1]$. The map $T$ is still the optimal transport map from $\sigma_{1/2}$ to $\mu_0$ and the unique optimal transport plan between $\sigma_{1/2}$ and $\mu_0$ is $((x_1,x_2),(1/2,x_2))_\sharp\sigma_{1/2}$. Then the unique optimal transport plan between $\mu_0$ and $\sigma_{1/2}$ is $\gamma:=((1/2,x_2),(x_1,x_2))_\sharp\sigma_{1/2}$. Since $\gamma$ is not concentrated on the graph of a map, the optimal transport map from $\mu_0$ to $\sigma_{1/2}$ does not exist. Since \eqref{OTP} admits no minimizer, \eqref{ExBarL2} cannot admit a minimizer either.
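As a numerical sanity check of the explicit values in this example (our own illustration, not part of the argument): all the optimal maps above move mass horizontally only, so each squared distance reduces to a one-dimensional $W_2^2$ between the first-coordinate marginals, namely the Dirac mass at $1/2$, the uniform law on $[1/4,3/4]$ and the uniform law on $[0,1]$, computable via the quantile formula $W_2^2=\int_0^1 |F^{-1}(t)-G^{-1}(t)|^2\,\d t$. One finds $W_2^2(\mu_0,\nu)=\frac1{12}$ and $W_2^2(\mu_0,\sigma_{1/2})=W_2^2(\sigma_{1/2},\nu)=\frac1{48}$, whose sum is $\frac1{24}=\frac12 W_2^2(\mu_0,\nu)$, consistent with \eqref{eq:min_wasserstein_bar}:

```python
# Sanity check (ours, not from the paper): 1D W_2^2 via the quantile formula
#   W_2^2(F, G) = \int_0^1 (F^{-1}(t) - G^{-1}(t))^2 dt,
# approximated with a composite midpoint rule.
def w2_squared_1d(q1, q2, n=200_000):
    """W_2^2 between two 1D laws given by their quantile functions q1, q2."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n                # midpoint quadrature nodes in (0,1)
        total += (q1(t) - q2(t)) ** 2
    return total / n

q_mu0   = lambda t: 0.5                  # first marginal of mu_0: Dirac at 1/2
q_nu    = lambda t: t                    # first marginal of nu: uniform on [0,1]
q_sigma = lambda t: 0.25 + 0.5 * t       # first marginal of sigma_{1/2}: uniform on [1/4,3/4]

a = w2_squared_1d(q_mu0, q_nu)           # ≈ 1/12
b = w2_squared_1d(q_mu0, q_sigma)        # ≈ 1/48
c = w2_squared_1d(q_sigma, q_nu)         # ≈ 1/48
print(a, b, c)                           # b + c ≈ a/2 = 1/24
```
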
Indeed, suppose there exists a minimizer $(X,u) \in {\ensuremath{\mathcal A}}_{\pL}(X_0)$ for $J_\pL$ in \eqref{ExBarL2}. Then $X_1: [0,1]\to\mathbb{R}^2$ satisfies $(X_1)_\sharp\P=\sigma_{1/2}$, $u =X_1-X_0$ and $X_t=t u+X_0$. Defining $Y=X_1\circ X_0^{-1}$, we have that $Y_\sharp\mu_0=\sigma_{1/2}$ so that $Y$ is a minimizer in \eqref{OTP}, which is absurd. \medskip \begin{remark}\label{rmk:RLex} Notice that existence of minimizers is not guaranteed even for Relaxed Lagrangian problems. Indeed, the same results obtained for the Wasserstein barycenter problem in $\S=(U,f, {\ensuremath{\mathcal C}}, {\ensuremath{\mathcal C}}_T)$ given in Section \ref{subsec:ExBarE}, can be easily extended to the lifted system $\S'=(\mathscr{U},\mathscr{F},\mathscr{C},\mathscr{C}_T)$ associated to $\S$ (see Definition \ref{def:relax_setting}). In the proposed example, the Lagrangian and Eulerian problems $\pL', \E'$ associated to $\S'$ can be treated as the problems $\pL, \E$ associated to $\S$ thanks to the following simple observation: given a probability measure $\rho\in \mathscr{P}(U)$, by Jensen's inequality we have \[\int_U |u|^2\,\d\rho(u)\ge\left|\int_Uu\,\d\rho(u)\right|^2,\] and the equality holds if and only if $\rho=\delta_{\underline u}$ for some $\underline u\in U$. This guarantees that possible control minimizers for $\pL', \E'$ are of the form $ \delta_{\underline u}$ with $\underline u$ \emph{non-relaxed} control for $\pL, \E$, respectively. The corresponding trajectories for $\pL', \E'$ with control $\delta_{\underline u}$ coincide with the ones associated to $\underline u$ for problems $\pL$ and $\E$, respectively. Finally, thanks to Remark \ref{rem:RL-L}, non-existence of minima for $\pL'$ corresponds to non-existence of minima for $\RL$. 
\end{remark} \section{Finite particle systems and Gamma-convergence}\label{sec:finite} To model the evolution of a finite number of particles, we introduce a discrete finite space $\Omega^N$ with the corresponding normalized counting measure $\P^N$. In this setting, in order to prove the equivalence between the Eulerian and Lagrangian problems, we cannot directly apply the results given in Theorem \ref{cor:VL=VE} due to the requirement on the probability measure $\P$ to be without atoms (see in particular Theorem \ref{prop:K>L}). Hence, we introduce a further formulation of the Lagrangian problem in the context of feedback controls (see Definition \ref{def:FL}) and we exploit a discrete formulation of the superposition principle for which we refer to Theorem \ref{lem:EN}. Furthermore, in Subsections \ref{sec:gamma} and \ref{sec:gammaE}, we prove a (discrete to continuous) $\Gamma$-convergence result respectively for the Lagrangian and Eulerian cost functionals when the number of particles goes to infinity. \subsection{Equivalences between $N$-particle problems}\label{sec:finiteN} Let $(\Omega^N,\mathcal{S}(\Omega^N),\P^N)$ be given by \begin{equation}\label{def:omegaN} \begin{aligned} &\Omega^{N}:=\{1,\dots,N\}, \quad \mathcal{S}(\Omega^{N}):= \sigma ( \lbrace 1\rbrace, \ldots, \lbrace N\rbrace), \\ &\P^{N}(\{k\}):=\frac 1N, \quad k=1,\dots,{N}. \end{aligned} \end{equation} We will refer to $\P^N$ as the normalized counting measure, which can be written as \begin{equation*} \P^N=\frac{1}{N}\sum_{k=1}^N\delta_k. \end{equation*} Let us denote by $\pL^N = \pL(\Omega^N, \mathcal{S}(\Omega^N),\P^N )$ the Lagrangian problem associated to the probability space $(\Omega^N, \mathcal{S}(\Omega^N),\P^N )$. Notice that the functional space $L^p(\Omega^N;\mathbb{R}^d)$ coincides with the space of all maps $g: \Omega^N \to \mathbb{R}^d$, which can be identified with $(\mathbb{R}^d)^N$.
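For instance, a minimal sketch of this identification (our own illustration; the function names are ours): a map $X \in L^p(\Omega^N;\mathbb{R}^d)$ is just an $N$-tuple of points of $\mathbb{R}^d$, integrals against $\P^N$ are averages, and the push-forward $(X)_\sharp\P^N$ is the empirical measure whose atoms carry mass (multiplicity)$/N$:

```python
# Illustration (ours, not from the paper): L^p(Omega^N; R^d) as (R^d)^N, with
# the normalized counting measure P^N turning integrals into averages.
from collections import Counter

def lp_norm_p(points, p):
    """\\int_{Omega^N} |X|^p dP^N = (1/N) * sum_k |x_k|^p."""
    N = len(points)
    return sum(sum(c * c for c in x) ** (p / 2) for x in points) / N

def pushforward(points):
    """(X)_# P^N: the empirical measure, atoms of mass multiplicity / N."""
    N = len(points)
    return {x: m / N for x, m in Counter(points).items()}

X = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.0), (0.0, 2.0)]   # N = 4 particles in R^2
mu = pushforward(X)        # {(0,0): 1/4, (1,0): 1/2, (0,2): 1/4}
print(lp_norm_p(X, 2))     # (0 + 1 + 1 + 4)/4 = 1.5
```
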
Differently from the Lagrangian problem $\pL^N$, where we just need to fix the parametrization space, the definition of the $N$-particle Eulerian problem requires the introduction of a further constraint. Let us firstly define the subspace of $\mathscr{P}(\mathbb{R}^d)$ given by the discrete measures as \begin{equation}\label{eq:discreteProb} \mathscr{P}^N(\mathbb{R}^d):= \left\lbrace \mu = \frac{1}{N} \sum_{i =1}^N \delta_{x_i} \; \text{ for some } x_i \in \mathbb{R}^d \right\rbrace. \end{equation} \begin{definition}[Discrete Eulerian optimal control problem $(\E^N)$]\label{def:E^N} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA}. Given $\mu_0\in\mathscr{P}^N(\mathbb{R}^d)$, we say that $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E^N}(\mu_0)$, if \begin{itemize} \item[(i)] $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}(\mu_0)$; \item[(ii)] $\mu_t \in \mathscr{P}^N(\mathbb{R}^d)$, for every $t \in [0,T]$. \end{itemize} We define the \emph{cost functional} $J_{\E^N}:=J_\E$ and the \emph{value function} \[ V_{\E^N}(\mu_0):=\inf\{J_{\E^N}(\mu,\underline u): (\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E^N}(\mu_0)\}.\] \end{definition} \begin{remark} Notice that item (ii) in Definition \ref{def:E^N} does not follow from the requirement $\mu_0 \in \mathscr{P}^N(\mathbb{R}^d)$. Indeed, the control map $\underline u$ in general is not Lipschitz continuous so that uniqueness of characteristics is not guaranteed. \end{remark} Observe that, for every $N \in \mathbb{N}$, it holds \begin{equation}\label{eq:VEN>VE} V_{\E^N}(\mu_0)\geq V_{\E}(\mu_0), \qquad \forall\,\mu_0\in\mathscr{P}^N(\mathbb{R}^d). \end{equation} The main result of this section is given in the following theorem. \begin{theorem}\label{cor:LN=EN} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA}. Let $p\geq 1$ and $X_0\in L^p(\Omega^N;\mathbb{R}^d)$. 
Then \[V_{\pL^N}(X_0)= V_{\E^N}((X_0)_\sharp\P^N).\] \end{theorem} The proof is a direct consequence of Propositions \ref{prop:LN>EN}, \ref{prop:equivFRL} and \ref{prop:FLN<E} below. \medskip \noindent Exploiting the argument contained in \cite[Lemma~6.2]{fornasier2018mean} we derive a first comparison between $V_{\pL^N}(X_0)$ and $V_{\E^N}((X_0)_\sharp\P^N)$. \begin{proposition}\label{prop:LN>EN} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA} and let $X_0 \in L^p(\Omega^N;\mathbb{R}^d)$. If $(X,u) \in \mathcal{A}_{\pL^N}(X_0)$, then there exists $(\mu,\underline u) \in \mathcal{A}_{\E^N}((X_0)_\sharp \P^N)$ such that $J_{\pL^N}(X,u) \geq J_{\E^N}(\mu,\underline u)$. Moreover, $V_{\pL^N}(X_0)\ge V_{\E^N}((X_0)_\sharp\P^N)$. \end{proposition} \begin{proof} Let $(X,u) \in \mathcal{A}_{\pL^N}(X_0)$. Let us define $\mathcal{X}(t) := \{x\in \mathbb{R}^d: X_t(\omega)=x\text{ for some }\omega\in \Omega^N\}$ and \begin{equation} \label{eq:J_tx} J(t,x):=\{\omega\in \Omega^N:X_t(\omega)=x\}, \qquad \text{ for any }\, (t,x)\in [0,T]\times \mathbb{R}^d, \end{equation} and denote by $\mathcal{P}$ the collection of partitions $P$ of $\Omega^N$. It is clear that the family $P_{X}(t) := \{J(t,x): x \in \mathcal{X}(t)\}$ belongs to $\mathcal{P}$. As proved in \cite[Lemma~6.2]{fornasier2018mean}, there exists a finite partition of the interval $[0,T]$ into Borel sets of the form $\{S_P: P\in \mathcal{P}\}$, where $S_P:=\{t\in [0,T]: P_{X}(t)=P\}$. Given $\omega, \omega' \in \Omega^N$ and a Borel set $S \subset [0,T]$, if $X_t(\omega) = X_t(\omega')$ for any $t \in S$ then, by the absolute continuity of the curves $t\mapsto X_t(\omega)$ and $t\mapsto X_t(\omega')$, we have $\dot X_t(\omega) = \dot X_t(\omega')$, for ${\ensuremath{\mathcal L}}_T$-a.e. $t \in S$. We define $\mu_t:=(X_t)_\sharp\P^N$ and we observe that \begin{equation}\label{eq:mut_classes} \mu_t = \frac{1}{N}\sum_{x \in \mathcal{X}(t)}\# J(t,x) \delta_x.
\end{equation} Moreover, defining for any $(t,x)\in [0,T]\times \mathbb{R}^d$ \[\bar\omega_{t,x}\in\mathop{\rm argmin}\limits_{\omega \in J(t,x)} {\ensuremath{\mathcal C}}(x,u(t,\omega), \mu_t),\] we set \begin{equation}\label{eq:defubarclasses} \underline u(t,x):= u(t,\bar\omega_{t,x}). \end{equation} We show that $(\mu,\underline u) \in \mathcal{A}_{\E^N}((X_0)_\sharp \P^N)$. Let us fix $\varphi \in C_c^1(\mathbb{R}^d)$. For ${\ensuremath{\mathcal L}}_T$-a.e. $t \in [0,T]$ it holds \begin{equation}\label{eq:classes_continuity} \begin{split} \frac{\d}{\d t} \int_{\mathbb{R}^d} \varphi(x) \,\d ((X_t)_\sharp \P^N)(x) &= \int_{\Omega^N} \nabla\varphi(X_t(\omega)) \cdot f(X_t(\omega),u(t,\omega), \mu_t) \,\d \P^N(\omega) \\ &=\frac{1}{N}\sum_{\omega = 1}^N \nabla \varphi(X_t(\omega)) \cdot f(X_t(\omega),u(t,\omega), \mu_t)\\ &=\frac{1}{N}\sum_{x \in \mathcal{X}(t)} \nabla \varphi(x) \cdot \sum_{\omega \in J(t,x)} f(x,u(t,\omega), \mu_t). \end{split} \end{equation} Since $f(x,u(t,\omega), \mu_t) = f(x,\underline u(t,x), \mu_t)$ for every $\omega \in J(t,x)$, we can rewrite the right hand side of \eqref{eq:classes_continuity} to get \begin{equation*} \begin{split} \frac{1}{N}\sum_{x \in \mathcal{X}(t)} \nabla \varphi(x) \cdot \sum_{\omega \in J(t,x)} f(x,u(t,\omega), \mu_t) &= \frac{1}{N}\sum_{x \in \mathcal{X}(t)} \#J(t,x) \,\nabla \varphi(x) \cdot f(x,\underline u(t,x), \mu_t)\\ &=\int_{\mathbb{R}^d} \nabla \varphi(x) \cdot f(x,\underline u(t,x), \mu_t) \,\d \mu_t(x), \end{split} \end{equation*} where the last equality is a consequence of \eqref{eq:mut_classes}. 
As for the cost functional, we have \begin{equation*} \begin{split} &\int_0^T \int_{\Omega} {\ensuremath{\mathcal C}}(X_t(\omega),u(t,\omega), \mu_t)\, \d \P^N(\omega) \,\d t \\ &\quad = \frac{1}{N} \int_0^T \sum_{x \in \mathcal{X}(t)}\sum_{\omega \in J(t,x)} {\ensuremath{\mathcal C}}(x,u(t,\omega), \mu_t)\, \d t \\ &\quad \geq \frac{1}{N}\int_0^T \sum_{x \in \mathcal{X}(t)}{\# J(t,x)}{\ensuremath{\mathcal C}}(x,\underline u(t,x), \mu_t)\, \d t \\ &\quad = \int_0^T \int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}(x,\underline u(t,x), \mu_t) \, \d\mu_t(x) \,\d t, \end{split} \end{equation*} where the inequality comes from the definition of $\underline u$ in \eqref{eq:defubarclasses} and the last equality follows from \eqref{eq:mut_classes}. Since $$\int_{\Omega^N} {\ensuremath{\mathcal C}}_T(X_T(\omega),\mu_T)\,\d\P^N(\omega)=\int_{\mathbb{R}^d} {\ensuremath{\mathcal C}}_T(x,\mu_T)\,\d\mu_T(x),$$ we conclude that $J_{\pL^N}(X,u) \geq J_{\E^N}(\mu,\underline u)$. \end{proof} Here we introduce a feedback formulation of the Lagrangian optimal control problem in order to prove the reverse inequality $V_{\pL^N}(X_0)\le V_{\E^N}((X_0)_\sharp\P^N)$. We first show its relation with the Lagrangian and Eulerian problems in a general context, i.e. where the probability space $(\Omega,\mathfrak B,\P)$ is not necessarily the space $(\Omega^N,\mathcal{S}(\Omega^N),\P^N)$ associated to the $N$-particle framework. \begin{definition}[Feedback Lagrangian optimal control problem \textbf{(FL)}]\label{def:FL} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA} and let $(\Omega, \mathfrak B,\P)$ be a probability space. Given $X_0\in L^p(\Omega;\mathbb{R}^d)$, we say that $(X,\underline u)\in{\ensuremath{\mathcal A}}_{\FL}(X_0)$ if \begin{itemize} \item[(i)] $\underline u\in\mathrm{B}([0,T]\times\mathbb{R}^d;U)$; \item[(ii)] $X\in L^p(\Omega;\mathrm{AC}^p([0,T];\mathbb{R}^d))$ and for $\P$-a.e.
$\omega\in\Omega$, $X(\omega)$ is a solution of the following Cauchy problem \begin{equation}\label{eq:systemFRL} \begin{cases} \dot X_t(\omega)=f(X_t(\omega),\underline u_t(X_t(\omega)),(X_t)_\sharp\P), &\text{ for } {\ensuremath{\mathcal L}}_T \text{-a.e. } t\in]0,T]\\ X_{|t=0}(\omega)=X_0(\omega), & \end{cases} \end{equation} where $X_t: \Omega \to \mathbb{R}^d$ is defined by $X_t(\omega):= X(t,\omega)$ for $\P$-a.e. $\omega \in \Omega$. \end{itemize} We refer to $(X, \underline u)\in{\ensuremath{\mathcal A}}_{\FL}(X_0)$ as an \emph{admissible pair}, with $X$ a \emph{trajectory} and $\underline u$ a \emph{feedback control}.\\ We define the cost functional $J_{\FL}: L^p(\Omega;C([0,T];\mathbb{R}^d))\times {\mathrm{B}([0,T]\times\mathbb{R}^d;U)}\to[0,+\infty)$ by \[J_{\FL}(X,\underline u):=\int_\Omega \int_0^T{\ensuremath{\mathcal C}}(X_t(\omega),\underline u_t(X_t(\omega)),(X_t)_\sharp\P)\,\d t \,\d\P(\omega) +\int_\Omega {\ensuremath{\mathcal C}}_T(X_T(\omega),(X_T)_\sharp\P)\,\d\P(\omega),\] and the corresponding \emph{value function} $V_{\FL}:L^p(\Omega;\mathbb{R}^d)\to[0,+\infty)$ by \begin{equation}\label{eq:valueFRL} V_{\FL}(X_0):=\inf\left\{J_{\FL}(X,\underline u)\,:\,(X,\underline u)\in{\ensuremath{\mathcal A}}_{\FL}(X_0)\right\}. \end{equation} \smallskip In the following, $\FL(\Omega, \mathfrak B,\P)$ denotes the Feedback Lagrangian problem given in Definition \ref{def:FL}. We shorten the notation to $\FL$ when the probability space is clear from the context. \end{definition} \medskip \begin{remark} Observe that, choosing a constant (feedback) control function $\underline u$, from Proposition \ref{prop:existL} it is immediate to prove that ${\ensuremath{\mathcal A}}_{\FL}(X_0)\not=\emptyset$. In general, given $\underline u\in\mathrm{B}([0,T]\times\mathbb{R}^d;U)$, the existence and uniqueness of solutions to the Cauchy problem \eqref{eq:systemFRL} are not guaranteed.
\end{remark} \medskip The following result follows directly from Definitions \ref{def:L}, \ref{def:E1} and \ref{def:FL}. \begin{proposition}\label{prop:equivFRL} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA} and $(\Omega, \mathfrak B, \P)$ be a probability space. Let $X_0\in L^p(\Omega;\mathbb{R}^d)$. If $(X,\underline u)\in{\ensuremath{\mathcal A}}_{\FL}(X_0)$, then \begin{itemize} \item[(i)] defining $u:[0,T]\times\Omega\to U$ by $u(t,\omega):=\underline u(t,X_t(\omega))$, we have that $(X,u)\in{\ensuremath{\mathcal A}}_{\pL}(X_0)$ and $J_{\FL}(X,\underline u)=J_{\pL}(X,u)$. In particular it holds $V_{\FL}(X_0)\ge V_{\pL}(X_0)$. \item[(ii)] defining $\mu_t:=(X_t)_\sharp\P$ for any $t\in[0,T]$, we have that $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}((X_0)_\sharp\P)$ and $J_{\FL}(X,\underline u)=J_{\E}(\mu,\underline u)$. In particular it holds $V_{\FL}(X_0)\ge V_{\E}(\mu_0)$. \end{itemize} \end{proposition} Taking advantage of Proposition \ref{prop:equivFRL} and of the discrete superposition principle given in Theorem \ref{lem:EN}, we have the following equivalence result between $\FL^N:=\FL(\Omega^N,\mathcal{S}(\Omega^N),\P^N)$ and $\E^N$ defined in Definition \ref{def:E^N}. \begin{proposition}\label{prop:FLN<E} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA}. Let $p\geq 1$, $X_0\in L^p(\Omega^N;\mathbb{R}^d)$ and $\mu_0:=\frac{1}{N}\sum_{\omega=1}^N\delta_{X_0(\omega)}=(X_0)_\sharp\P^N$. If $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E^N}(\mu_0)$, then there exists $X\in C([0,T];L^p(\Omega^N))$ such that $\mu_t=(X_t)_\sharp\P^N$ for all $t\in[0,T]$, $(X,\underline u)\in{\ensuremath{\mathcal A}}_{\FL^N}(X_0)$ and $J_{\E^N}(\mu,\underline u)=J_{\FL^N}(X,\underline u)$. Moreover, $V_{\E^N}(\mu_0)=V_{\FL^N}(X_0)$. \end{proposition} \begin{proof} Let $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E^N}(\mu_0)$. 
By the superposition principle given in Theorem \ref{lem:EN} applied to $\mu$ and the vector field $v(t,x)=f(x,\underline u(t,x),\mu_t)$, there exists $\eta \in \mathscr{P}(\Gamma_T)$ such that \[ \eta= \frac{1}{N} \sum_{\omega = 1}^N \delta_{\gamma_\omega}, \] where $\gamma_\omega \in \Gamma_T$, $\omega = 1, \ldots, N$. We define $X:[0,T]\times\Omega^N\to\mathbb{R}^d$ by $X(t,\omega):=\gamma_\omega(t)$, which satisfies $(X,\underline u) \in {\ensuremath{\mathcal A}}_{\FL^N}(X_0)$ thanks to \eqref{eq:Cauchyend} in Theorem \ref{lem:EN}. Hence it readily follows that $J_{\FL^N}(X,\underline u) =J_{\E^N}(\mu,\underline u)$ and $V_{\FL^N}(X_0)\le V_{\E^N}(\mu_0)$. By Proposition \ref{prop:equivFRL} we have that $V_{\FL^N}(X_0)\ge V_{\E^N}(\mu_0)$ and we conclude the proof. \end{proof} \subsection{Finite particle approximation for $\L$}\label{sec:gamma} The aim of this section is to approximate a general Lagrangian problem $\pL=\pL(\Omega,\mathfrak B,\P)$ with finite particle Lagrangian problems $\pL^N=\pL(\Omega^N,\mathcal{S}(\Omega^N),\P^N)$, $N \in \mathbb{N}$, where $(\Omega^N,\mathcal{S}(\Omega^N),\P^N)$ is defined in \eqref{def:omegaN}. A first result in this direction has already been obtained in Proposition \ref{prop:approx_lagrangian_n} (see also Remark \ref{rmk:noequiprob}) in Section \ref{sec:approxLpiecew}. Here, we specialize the result to the case of equally distributed masses, which is suitable for the application to a finite particle/agent model. Recall that if $(\Omega,\mathfrak B,\P)$ is a standard Borel space and $\P$ is without atoms, thanks to item (ii) in Proposition \ref{prop:FAP} there exists a family of finite algebras $\mathfrak B^N\subset\mathfrak B$, $N\in\mathbb{N}$, satisfying the finite approximation property of Definition \ref{approx_prop} and $\P(A^N_k)=\frac1N$, with $k=1,\ldots,N$. Recall the definition of $\psi^N$, $\phi^N$ and ${\ensuremath{\mathcal K}}^N$ in \eqref{def:psinphin} and \eqref{def:Kn}, respectively.
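To fix ideas, here is a minimal sketch of this discretization on $\Omega=[0,1]$ with $\P={\ensuremath{\mathcal L}}_1$ (our own illustration; we assume cells $A_k^N=[(k-1)/N,k/N)$ of mass $1/N$, in the spirit of, but not identical to, the maps $\psi^N$, $\phi^N$): replacing an initial datum $X_0$ by its cell averages produces an $N$-particle datum whose piecewise constant lift converges to $X_0$ in $L^p$ as $N\to+\infty$:

```python
# Illustrative sketch (our notation, not the paper's): on Omega = [0,1] with
# Lebesgue measure, take the algebra generated by A_k^N = [(k-1)/N, k/N).
# Replacing X_0 by its cell averages gives an N-particle datum X_0^N whose
# piecewise constant lift converges to X_0 in L^p.
def cell_averages(x0, N, quad=64):
    """X_0^N(k) = N * \\int_{A_k^N} X_0 dL_1, approximated by midpoint quadrature."""
    out = []
    for k in range(N):
        a = k / N
        s = sum(x0(a + (j + 0.5) / (quad * N)) for j in range(quad))
        out.append(s / quad)
    return out

def lp_error(x0, values, p=2, quad=64):
    """p-th power of the L^p([0,1]) distance between X_0 and its piecewise
    constant lift built from the cell values."""
    N = len(values)
    err = 0.0
    for k in range(N):
        a = k / N
        for j in range(quad):
            w = a + (j + 0.5) / (quad * N)
            err += abs(values[k] - x0(w)) ** p / (quad * N)
    return err

x0 = lambda w: w * w                      # a smooth initial datum on [0,1]
errs = [lp_error(x0, cell_averages(x0, N)) for N in (4, 16, 64)]
print(errs)                               # decreasing to 0 as N grows
```
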
\medskip A $\Gamma$-convergence result for the functional $J_\pL$ is given in the following proposition. \begin{proposition}[Finite particle approximation for $\pL$]\label{prop:gamma_conv} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA}. Let $(\Omega,\mathfrak B,\P)$ be a standard Borel space such that $\P$ is without atoms. The following hold: \begin{itemize} \item[(i)] Suppose that $(X,u) \in L^p(\Omega;\mathrm{AC}^p([0,T];\mathbb{R}^d)) \times \mathrm{M}([0,T] \times \Omega; U)$ and $(X^N,u^N) \in C([0,T];L^p(\Omega^N;\mathbb{R}^d)) \times \mathrm{M}([0,T] \times \Omega^N; U)$ are such that \begin{equation*} \lim_{N \to +\infty} {\ensuremath{\mathcal K}}^N(X^N, u^N) = (X,u), \quad \text{ in } C([0,T];L^p(\Omega;\mathbb{R}^d)) \times L^1([0,T] \times \Omega; U). \end{equation*} Then \[ \lim_{N \to +\infty} J_{\pL^N}(X^N,u^N) = J_{\pL}(X,u).\] \item[(ii)] Assume that $U$ is a compact convex subset of a separable Banach space $V$. Suppose that $X_0 \in L^p(\Omega;\mathbb{R}^d)$ and $(X,u) \in \mathcal{A}_{\pL}(X_0)$. If $X_0^N \in L^p(\Omega^N;\mathbb{R}^d)$ satisfies \begin{equation}\label{CNOmega} \lim_{N \to +\infty} \| X^N_0 \circ \psi^N - X_0\|_{L^p(\Omega;\mathbb{R}^d)} = 0, \end{equation} then there exists a sequence $(X^N,u^N) \in \mathcal{A}_{\pL^N}(X^N_0)$ such that \begin{equation*} \lim_{N \to +\infty} {\ensuremath{\mathcal K}}^N(X^N, u^N) = (X,u), \quad \text{ in } C([0,T];L^p(\Omega;\mathbb{R}^d)) \times L^1([0,T] \times \Omega; U) \end{equation*} and \[ \lim_{N \to +\infty} J_{\pL^N}(X^N,u^N) = J_{\pL}(X,u). \] \end{itemize} \end{proposition} \begin{proof} Thanks to Proposition \ref{prop:equivSpaces} it holds that $X \in C([0,T]; L^p(\Omega;\mathbb{R}^d))$.
Item (i) can be proved exactly by the same technique used in the second part of the proof of Proposition \ref{prop:costRLntoRL} applied to the sequence ${\ensuremath{\mathcal K}}^N(X^N,u^N)$ and recalling that $J_{\pL}\left({\ensuremath{\mathcal K}}^N(X^N,u^N)\right) = J_{\pL^N}(X^N, u^N)$ (see Proposition \ref{prop:correspLdiscrete}). Notice that, since $U$ is metrizable and compact, the convergence $u^N \to u$ in $L^1([0,T] \times \Omega;U)$ is equivalent to the convergence in $({\ensuremath{\mathcal L}}_T \otimes \P)$-measure. Item (ii) is a direct application of Proposition \ref{prop:approx_lagrangian_n} to the sequence of finite algebras $\mathfrak B^N$ given in item (ii) of Proposition \ref{prop:FAP}. \end{proof} \begin{proposition}[Convergence of the value functions]\label{prop:convVL} Let $\S = (U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy Assumption \ref{BA} with $U$ a compact convex subset of a separable Banach space $V$. Let $(\Omega,\mathfrak B,\P)$ be a standard Borel space such that $\P$ is without atoms. If $X_0 \in L^p(\Omega;\mathbb{R}^d)$ and $X_0^N\in L^p(\Omega^N;\mathbb{R}^d)$ satisfy \begin{equation}\label{eq:conv_x0n} \lim_{N \to+\infty} \| X^N_0 \circ \psi^N - X_0\|_{L^p(\Omega;\mathbb{R}^d)} = 0, \end{equation} then \begin{equation*} \limsup_{N \to +\infty} V_{\pL^N} (X_0^N) \leq V_{\pL}(X_0). \end{equation*} Moreover, if $\S =(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfies the Convexity Assumption \ref{CA}, then \begin{equation*} \liminf_{N \to +\infty} V_{\pL^N} (X_0^N) \geq V_{\pL}(X_0). \end{equation*} In particular, under the Convexity Assumption \ref{CA}, \[\lim_{N \to +\infty} V_{\pL^N} (X_0^N) = V_{\pL}(X_0).\] \end{proposition} \begin{proof} By the definition of the infimum, for every $\varepsilon>0$ there exists $(X_\varepsilon,u_\varepsilon)\in\mathcal A_{\pL}(X_0)$ such that $V_{\pL}(X_0) \geq J_{\pL}(X_\varepsilon,u_\varepsilon) - \varepsilon$.
Moreover, by item (ii) in Proposition \ref{prop:gamma_conv}, there exists a sequence $(X^N_\varepsilon,u^N_\varepsilon)\in\mathcal A_{\pL^N}(X_0^N)$ such that $J_{\pL^N}(X^N_\varepsilon,u^N_\varepsilon) \to J_{\pL}(X_\varepsilon,u_\varepsilon)$ as $N\to +\infty$. Hence \[ \limsup_{N\to +\infty}V_{\pL^N}(X_0^N) \leq \limsup_{N\to +\infty} J_{\pL^N}(X^N_\varepsilon,u^N_\varepsilon) = J_{\pL}(X_\varepsilon,u_\varepsilon) \leq V_{\pL}(X_0) + \varepsilon. \] By the arbitrariness of $\varepsilon$ we conclude.\\ By Proposition \ref{prop:LN>EN} and \eqref{eq:VEN>VE}, setting $\mu_0^N:=(X_0^N)_\sharp\P^N$, we get \begin{equation*} V_{\pL^N}(X_0^N) \geq V_{\E^N}(\mu_0^N) \geq V_{\E}(\mu_0^N). \end{equation*} In the convex setting, since \eqref{eq:conv_x0n} yields $W_p(\mu_0^N,\mu_0)\to0$ with $\mu_0:=(X_0)_\sharp\P$, by the lower semicontinuity of the value function $V_{\E}$ (see Proposition \ref{prop:lscVE}) and by Corollary \ref{cor:VL=VE} ($\P$ is without atoms by assumption) we obtain $\liminf_{N \to +\infty} V_{\pL^N}(X_0^N)\geq V_{\E}(\mu_0)=V_{\pL}(X_0)$, which is the desired inequality. \end{proof} \begin{remark} A natural choice for $X^N_0$ in Proposition \ref{prop:convVL} is given by $X_0^N:= \tilde X^N_0 \circ \phi^N$ where \[ \tilde X^N_0:= \sum_{k=1}^N \mathds{1}_{A_k^N} \, \fint_{A_k^N} X_0(\omega)\, \d\P(\omega). \] For a proof of the convergence \eqref{eq:conv_x0n} we refer to Lemma \ref{lemma:partL1} and Proposition \ref{prop:BeqL1} in Appendix \ref{app_FP}. \end{remark} \subsection{Finite particle approximation for $\E$}\label{sec:gammaE} In this section, we show that the Eulerian problem $\E$ can be approximated by finite particle Eulerian problems $\E^N$ defined in Definition \ref{def:E^N}. Thanks to Theorem \ref{cor:LN=EN}, we can also approximate the Eulerian problem with a sequence of finite particle Lagrangian problems $\pL^N=\pL(\Omega^N,\mathcal{S}(\Omega^N),\P^N)$, $N \in \mathbb{N}$, with $(\Omega^N,\mathcal{S}(\Omega^N),\P^N)$ as in \eqref{def:omegaN}. This is relevant from the point of view of applications. The main result of the section is stated in the following theorem.
\begin{theorem}[Convergence of the value functions]\label{thm:convVE} Let $\S = (U,f,\mathcal C,\mathcal C_T)$ satisfy the Convexity Assumption \ref{CA}. Let $\mu_0 \in \mathscr{P}_p(\mathbb{R}^d)$ and $\mu_0^N \in \mathscr{P}^N(\mathbb{R}^d)$, $N\in\mathbb{N}$, be such that $W_p(\mu_0^N,\mu_0)\to 0$ as $N\to+\infty$. Then \[\lim_{N \to +\infty} V_{\E^N} (\mu_0^N) = V_{\E}(\mu_0).\] Moreover, for every $X_0^N \in L^p(\Omega^N;\mathbb{R}^d)$ such that $(X_0^N)_\sharp \P^N = \mu_0^N $ it holds that \[\lim_{N \to +\infty} V_{\pL^N} (X_0^N) = V_{\E}(\mu_0).\] \end{theorem} In order to prove Theorem \ref{thm:convVE}, we start with the following proposition. \begin{proposition}[Finite particle approximation for $\E$]\label{prop:ENtoE} Let $\S=(U,f,{\ensuremath{\mathcal C}},{\ensuremath{\mathcal C}}_T)$ satisfy the Convexity Assumption \ref{CA}. Let $\mu_0\in \mathscr{P}_p(\mathbb{R}^d)$ and $\mu_0^N \in \mathscr{P}^N(\mathbb{R}^d)$, $N\in\mathbb{N}$, be such that \begin{equation}\label{eq:conv_mu0N} \lim_{N\to +\infty}W_p(\mu_0^N,\mu_0)= 0. \end{equation} If $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}(\mu_0)$, then there exists a sequence $(\mu^N,\underline u^N)\in{\ensuremath{\mathcal A}}_{\E^N}(\mu_0^N)$ such that \begin{enumerate} \item $(\mu^N,\underline u^N)$ converges to $(\mu,\underline u)$ according to Definition \ref{def:nu_converges}; \item $\displaystyle\lim_{N\to+\infty} J_{\E^N}(\mu^N,\underline u^N) = J_{\E}(\mu,\underline u)$. \end{enumerate} \end{proposition} \begin{proof} {\bf Step 1.} Let $(\mu,\underline u)\in{\ensuremath{\mathcal A}}_{\E}(\mu_0)$. In this step we associate to $(\mu,\underline u)$ an admissible pair $(\tilde X, \tilde u) \in {\ensuremath{\mathcal A}}_{\pL}(\tilde X_0)$ for the Lagrangian problem $\pL:= \pL([0,1],{\ensuremath{\mathcal B}}_{[0,1]},{\ensuremath{\mathcal L}}_1)$ such that \begin{equation}\label{eq:equivLE} J_{\pL}(\tilde X,\tilde u)=J_{\E}(\mu,\underline u).
\end{equation} By Proposition \ref{prop:E>K}, there exists $(\boldsymbol\eta,u)\in{\ensuremath{\mathcal A}}_{\K}(\mu_0)$ such that $u(t,\gamma)=\underline u(t,\gamma(t))$, $(e_t)_\sharp\boldsymbol\eta=\mu_t$ and $J_{\K}(\boldsymbol\eta,u)=J_{\E}(\mu,\underline u)$. Thanks to Lemma \ref{l:fromKtoL}, the map $Z:[0,T]\times\Gamma_T\to\mathbb{R}^d$ defined by $Z(t,\gamma):=\gamma(t)=e_t(\gamma)$ satisfies $(Z,u)\in{\ensuremath{\mathcal A}}_{\pL_{\boldsymbol\eta}}(e_0)$ with $\pL_{\boldsymbol\eta}:=\pL(\Gamma_T,{\ensuremath{\mathcal B}}_{\Gamma_T},\boldsymbol\eta)$ and \begin{equation}\label{eq:JLetaeqJK1} J_{\pL_{\boldsymbol\eta}}(Z,u)=J_{\K}(\boldsymbol\eta,u). \end{equation} By Proposition \ref{prop:Skorohod} applied to $S= \Gamma_T$ and $\nu = \boldsymbol\eta$, there exists a Borel map $P:[0,1]\to\Gamma_T$ such that $P_\sharp{\ensuremath{\mathcal L}}_1=\boldsymbol\eta$. We define $\tilde X:[0,T]\times[0,1]\to\mathbb{R}^d$ by $\tilde X_t(\omega):=Z_t(P(\omega))$ and $\tilde u:[0,T]\times[0,1]\to U$ by $\tilde u_t(\omega):=u_t(P(\omega))$. Notice that \begin{equation}\label{eq:mu_t:P} (\tilde X_t)_\sharp{\ensuremath{\mathcal L}}_1=(e_t\circ P)_\sharp{\ensuremath{\mathcal L}}_1=(e_t)_\sharp\boldsymbol\eta=\mu_t, \end{equation} and it is easy to prove that $(\tilde X,\tilde u)\in\mathcal A_{\pL}(\tilde X_0)$. Moreover $J_{\pL}(\tilde X,\tilde u)=J_{\pL_{\boldsymbol\eta}}(Z,u)$ and, by \eqref{eq:JLetaeqJK1}, we obtain \eqref{eq:equivLE}. \medskip {\bf Step 2.} We use the partition of $[0,1]$ defined in Lemma \ref{lemma:partL1}. We define the piecewise constant initial data $\tilde X_0^N:[0,1]\to\mathbb{R}^d$ and controls $\tilde u^N:[0,T]\times[0,1]\to U$ by \begin{align*} \tilde X_0^N&:=\sum_{k=1}^{N}\mathds 1_{I^N_k} \,\fint_{I^N_k}\tilde X_0(\omega)\,\d{\ensuremath{\mathcal L}}_1(\omega),\\ \tilde u_t^N&:=\sum_{k=1}^{N}\mathds 1_{I^N_k} \, \fint_{I^N_k}\tilde u_t(\omega)\,\d{\ensuremath{\mathcal L}}_1(\omega)\, , \quad \textrm{for all }t\in[0,T].
\end{align*} From the definition of $\tilde X_0^N$ and Lemma \ref{lemma:partL1} we have \begin{equation}\label{eq:wpn2} W_p(\tilde \mu_0^N,\mu_0)\leq \|\tilde X_0^N-\tilde X_0\|_{L^p([0,1];\mathbb{R}^d)}\to 0 \qquad \text{as } N\to+\infty, \end{equation} where \[\tilde \mu_0^N:=(\tilde X_0^N)_\sharp{\ensuremath{\mathcal L}}_1=\frac{1}{N}\sum_{k=1}^N\delta_{\tilde x^N_k}, \qquad \tilde x^N_k:=\fint_{I^N_k}\tilde X_0(\omega)\,\d{\ensuremath{\mathcal L}}_1(\omega)\in\mathbb{R}^d.\] Since $\mu_0^N=\frac{1}{N}\sum_{k=1}^N\delta_{x^N_k}$, there exists a permutation of indexes $\sigma^N:\{1,\dots,N\}\to\{1,\dots,N\}$ and a map $X_0^N:[0,1]\to\mathbb{R}^d$ defined by $X_0^N:=\sum_{k=1}^{N} x^N_{\sigma^N(k)} \mathds 1_{I^N_k}$ such that \begin{equation}\label{eq:wpn} W_p(\mu_0^N,\tilde\mu_0^N)=\|X_0^N-\tilde X_0^N\|_{L^p([0,1];\mathbb{R}^d)}. \end{equation} Using \eqref{eq:conv_mu0N}, \eqref{eq:wpn2} and \eqref{eq:wpn} we obtain \begin{equation} \begin{split} \|X_0^N-\tilde X_0\|_{L^p([0,1];\mathbb{R}^d)} &\leq \|X_0^N-\tilde X^N_0\|_{L^p([0,1];\mathbb{R}^d)} + \|\tilde X_0^N-\tilde X_0\|_{L^p([0,1];\mathbb{R}^d)}\\ &= W_p(\mu_0^N,\tilde\mu_0^N) + \|\tilde X_0^N-\tilde X_0\|_{L^p([0,1];\mathbb{R}^d)}\\ &\leq W_p(\mu_0^N,\mu_0) + W_p(\mu_0,\tilde\mu_0^N) + \|\tilde X_0^N-\tilde X_0\|_{L^p([0,1];\mathbb{R}^d)} \to0. \end{split} \end{equation} Let $(X^N,\tilde u^N)\in{\ensuremath{\mathcal A}}_{\pL}(X_0^N)$, where $X^N$ is the solution of the state equation corresponding to the control $\tilde u^N$ and the initial datum $X_0^N$. By Lemma \ref{lemma:partL1} and dominated convergence we have that $\|\tilde u^N-\tilde u\|_{L^1([0,T] \times [0,1];V)}\to0$ as $N\to+\infty$. Then, by Proposition \ref{prop:costRLntoRL}, we have \begin{equation}\label{convAUX} \sup_{t\in[0,T]}\|X_t^N-\tilde X_t\|_{L^p([0,1];\mathbb{R}^d)}\to 0 \qquad \mbox{ as } N\to+\infty \end{equation} and \begin{equation}\label{eq:convJLnL2} \lim_{N\to+\infty}J_{\pL}(X^N,\tilde u^N)=J_{\pL}(\tilde X,\tilde u).
\end{equation} We observe that, for any $t\in[0,T]$, $X_t^N$ is constant on the elements $I_k^N$ of the partition $\{I_k^N:k=1,\ldots,N\}$, so that it is of the form $$X_t^N=\sum_{k=1}^{N} (x_t^N)_k \mathds 1_{I^N_k}$$ for some $ (x_t^N)_k\in\mathbb{R}^d$. We define $\mu_t^N:=(X_t^N)_\sharp{\ensuremath{\mathcal L}}_1\in\mathscr{P}_p(\mathbb{R}^d)$. From the observation above, $$\mu_t^N=\frac1N\sum_{k=1}^{N} \delta_{(x_t^N)_k}. $$ By \eqref{convAUX} we obtain that $\mu^N\to\mu$ in $C([0,T];\mathscr{P}_p(\mathbb{R}^d))$ as $N\to+\infty$. \medskip {\bf Step 3.} For any $t\in[0,T]$ we define $\rho_t^N:=(X_t^N,\tilde u_t^N)_\sharp{\ensuremath{\mathcal L}}_1\in\mathscr{P}(\mathbb{R}^d\times U)$, and notice that $\pi^1_\sharp\rho_t^N=\mu_t^N$. Denoting by $\rho_{t,x}^N\in\mathscr{P}(U)$ the disintegration of $\rho_t^N$ w.r.t. $\pi^1$, we define the Borel map $\underline u^N:[0,T]\times\mathbb{R}^d\to U$ by $$\underline u^N(t,x):=\int_{U}u\,\d\rho_{t,x}^N(u).$$ By the definition of $\mu_t^N$ and $\underline u^N$, from item (2) of the Convexity Assumption \ref{CA}, we obtain that $(\mu^N,\underline u^N)\in{\ensuremath{\mathcal A}}_{\E^N}(\mu_0^N)$. Indeed, given $\varphi\in C^1_c(\mathbb{R}^d;\mathbb{R})$, for ${\ensuremath{\mathcal L}}_T$-a.e.
$t\in[0,T]$, we have \begin{align*} \frac{\d}{\d t}\int_{\mathbb{R}^d}\varphi(x)\,\d\mu_t^N(x)&=\frac{\d}{\d t}\int_{[0,1]}\varphi(X_t^N(\omega))\,\d{\ensuremath{\mathcal L}}_1(\omega) \\ &=\int_{[0,1]}\nabla\varphi(X_t^N(\omega))\cdot \dot X_t^N(\omega)\,\d{\ensuremath{\mathcal L}}_1(\omega)\\ &=\int_{[0,1]}\nabla\varphi(X_t^N(\omega))\cdot f(X_t^N(\omega),\tilde u_t^N(\omega),\mu_t^N)\,\d{\ensuremath{\mathcal L}}_1(\omega)\\ &=\int_{\mathbb{R}^d\times U}\nabla\varphi(x)\cdot f(x,u,\mu_t^N)\,\d\rho_t^N(x,u)\\ &=\int_{\mathbb{R}^d}\nabla\varphi(x)\cdot\int_U f(x,u,\mu_t^N)\,\d\rho_{t,x}^N(u)\,\d\mu_t^N(x)\\ &=\int_{\mathbb{R}^d}\nabla\varphi(x)\cdot f\left(x,\int_U u\,\d\rho_{t,x}^N(u),\mu_t^N\right)\,\d\mu_t^N(x)\\ &=\int_{\mathbb{R}^d}\nabla\varphi(x)\cdot f\left(x,\underline u^N(t,x),\mu_t^N\right)\,\d\mu_t^N(x). \end{align*} Let us conclude the proof of the convergence by showing that \eqref{convweaknu} holds. For any $t\in[0,T]$, using $(\tilde X,\tilde u)$ introduced in Step 1, we define $\rho_t:=(\tilde X_t,\tilde u_t)_\sharp{\ensuremath{\mathcal L}}_1\in \mathscr{P}(\mathbb{R}^d\times U)$. By the convergence \eqref{convAUX} and the convergence of $\tilde u^N$ to $\tilde u$, it easily follows that $\rho^N:=\rho^N_t\otimes{\ensuremath{\mathcal L}}_T$ weakly converges to $\rho:=\rho_t\otimes{\ensuremath{\mathcal L}}_T$ in $\mathscr{P}([0,T]\times\mathbb{R}^d\times U)$. Let now $\phi\in C_c([0,T]\times\mathbb{R}^d;V')$.
Using the definition of $\rho^N$ and the weak convergence of $\rho^N$ to $\rho$, we have that \begin{align*} \lim_{N\to+\infty}\int_0^T\int_{\mathbb{R}^d}\langle\phi(t,x),\underline u^N(t,x)\rangle\,\d\mu_t^N(x)\,\d t &=\lim_{N\to+\infty}\int_0^T\int_{\mathbb{R}^d}\left\langle\phi(t,x),\int_U u\,\d\rho^N_{t,x}(u)\right\rangle\,\d\mu^N_t(x)\,\d t\\ &=\lim_{N\to+\infty}\int_0^T\int_{\mathbb{R}^d}\int_U\left\langle\phi(t,x), u \right\rangle\,\d\rho^N_{t,x}(u)\,\d\mu^N_t(x)\,\d t\\ &=\lim_{N\to+\infty}\int_{[0,T]\times\mathbb{R}^d\times U}\langle\phi(t,x), u\rangle\,\d\rho^N(t,x,u)\\ &=\int_{[0,T]\times\mathbb{R}^d\times U}\langle\phi(t,x) ,u\rangle \,\d\rho(t,x,u). \end{align*} Using $\boldsymbol\eta \in \mathscr{P}(C([0,T];\mathbb{R}^d))$, $P: [0,1] \to \Gamma_T$ and \eqref{eq:mu_t:P} introduced in Step 1, recalling that $\rho = (\tilde X_t, \tilde u_t)_\sharp{\ensuremath{\mathcal L}}_1 \otimes {\ensuremath{\mathcal L}}_T$ we get \begin{align*} \int_{[0,T]\times\mathbb{R}^d\times U}\langle\phi(t,x) ,u\rangle \,\d\rho(t,x,u) &=\int_0^T\int_0^1\langle\phi(t,\tilde X_t(\omega)),\tilde u_t(\omega)\rangle\,\d{\ensuremath{\mathcal L}}_1(\omega)\,\d t\\ &=\int_0^T\int_0^1\langle\phi(t,e_t(P(\omega))), u_t(P(\omega))\rangle\,\d{\ensuremath{\mathcal L}}_1(\omega)\,\d t\\ &=\int_0^T\int_{\Gamma_T}\langle\phi(t,e_t(\gamma)) ,u_t(\gamma)\rangle\,\d\boldsymbol\eta(\gamma)\,\d t\\ &=\int_0^T\int_{\Gamma_T}\langle\phi(t,e_t(\gamma)),\underline u(t,e_t(\gamma))\rangle\,\d\boldsymbol\eta(\gamma)\,\d t\\ &=\int_0^T\int_{\mathbb{R}^d}\langle\phi(t,x) ,\underline u(t,x)\rangle\,\d\mu_t(x)\,\d t.
\end{align*} \medskip {\bf Step 4.} Finally we prove that $J_{\E^N}(\mu^N,\underline u^N) \to J_{\E}(\mu,\underline u)$, as $N \to +\infty$.\\ Using item (3) of the Convexity Assumption \ref{CA}, we have \begin{align*} J_{\E^N}(\mu^N,\underline u^N) &=\int_0^T\int_{\mathbb{R}^d}{\ensuremath{\mathcal C}}(x,\underline u^N(t,x),\mu_t^N)\,\d\mu^N_t(x)\,\d t + \int_{\mathbb{R}^d}{\ensuremath{\mathcal C}}_T(x,\mu_T^N)\,\d\mu_T^N(x)\\ &=\int_0^T\int_{\mathbb{R}^d}{\ensuremath{\mathcal C}}\left(x,\int_U u\,\d\rho_{t,x}^N(u),\mu_t^N\right)\,\d\mu_t^N(x)\,\d t +\int_{[0,1]}{\ensuremath{\mathcal C}}_T(X_T^N(\omega),\mu_T^N)\,\d{\ensuremath{\mathcal L}}_1(\omega)\\ &\le \int_0^T\int_{\mathbb{R}^d\times U}{\ensuremath{\mathcal C}}(x,u,\mu_t^N)\,\d\rho_t^N(x,u)\,\d t+\int_{[0,1]}{\ensuremath{\mathcal C}}_T(X_T^N(\omega),\mu_T^N)\,\d{\ensuremath{\mathcal L}}_1(\omega)\\ &=\int_0^T\int_{[0,1]}{\ensuremath{\mathcal C}}(X_t^N(\omega),\tilde u_t^N(\omega),\mu_t^N)\,\d{\ensuremath{\mathcal L}}_1(\omega)\,\d t+\int_{[0,1]}{\ensuremath{\mathcal C}}_T(X_T^N(\omega),\mu_T^N)\,\d{\ensuremath{\mathcal L}}_1(\omega)\\ &= J_{\pL}(X^N,\tilde u^N). \end{align*} By the previous inequality, recalling \eqref{eq:convJLnL2} and \eqref{eq:equivLE}, we get \[\limsup_{N\to+\infty}J_{\E^N}(\mu^N,\underline u^N)\le\limsup_{N\to+\infty}J_{\pL}(X^N,\tilde u^N)=J_{\pL}(\tilde X,\tilde u)=J_{\E}(\mu,\underline u).\] By Proposition \ref{prop:lscJE} we conclude. \end{proof} We conclude this section with the proof of Theorem \ref{thm:convVE}. \begin{proof}[Proof of Theorem \ref{thm:convVE}] Observe that $V_{\E^N} (\mu_0^N) \geq V_{\E}(\mu_0^N)$; hence, by Proposition \ref{prop:lscVE}, we obtain \[\liminf_{N \to +\infty} V_{\E^N} (\mu_0^N) \ge V_{\E}(\mu_0).\] Let us prove that $\limsup_{N \to +\infty} V_{\E^N} (\mu_0^N) \le V_{\E}(\mu_0)$. Fix $\varepsilon>0$.
By definition of $V_\E$ there exists $(\mu_\varepsilon,\underline u_\varepsilon)\in\mathcal A_{\E}(\mu_0)$ such that $V_{\E}(\mu_0) \geq J_{\E}(\mu_\varepsilon,\underline u_\varepsilon) - \varepsilon$. By Proposition \ref{prop:ENtoE} there exists $(\mu^N_\varepsilon,\underline u^N_\varepsilon)\in\mathcal A_{\E^N}(\mu_0^N)$ such that $\lim_{N\to+\infty}J_{\E^N}(\mu^N_\varepsilon,\underline u^N_\varepsilon) = J_{\E}(\mu_\varepsilon,\underline u_\varepsilon)$. Thus, \[ \limsup_{N\to +\infty}V_{\E^N}(\mu_0^N) \leq \limsup_{N\to +\infty} J_{\E^N}(\mu^N_\varepsilon,\underline u^N_\varepsilon) = J_{\E}(\mu_\varepsilon,\underline u_\varepsilon) \leq V_{\E}(\mu_0) + \varepsilon. \] By the arbitrariness of $\varepsilon$ we conclude. Finally, thanks to Theorem \ref{cor:LN=EN} it holds that $V_{\E^N}(\mu_0^N) = V_{\pL^N}(X_0^N)$ for every $N \in \mathbb{N}$, so that $\lim_{N \to +\infty} V_{\pL^N} (X_0^N) = V_{\E}(\mu_0)$. \end{proof}
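We conclude with a remark on the choice of the approximating initial data in Theorem \ref{thm:convVE}.

\begin{remark}
A standard way to produce discrete measures $\mu_0^N\in\mathscr{P}^N(\mathbb{R}^d)$ satisfying $W_p(\mu_0^N,\mu_0)\to0$ is by sampling: if $(x_k)_{k\in\mathbb{N}}$ is a sequence of independent random points distributed according to $\mu_0\in\mathscr{P}_p(\mathbb{R}^d)$, then the empirical measures
\[
\mu_0^N:=\frac{1}{N}\sum_{k=1}^N\delta_{x_k}
\]
satisfy $W_p(\mu_0^N,\mu_0)\to0$ almost surely as $N\to+\infty$, since they converge weakly to $\mu_0$ and their $p$-th moments converge by the strong law of large numbers.
\end{remark}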
\section{Introduction}\label{a-sec1} Fujita's freeness conjecture is very famous and is still open for higher-dimensional varieties. Now we know that it holds true in dimension $\leq 5$ (for the details, see \cite{ye-zhu} and the references therein). \begin{conj}[Fujita's freeness conjecture]\label{a-conj1.1} Let $X$ be a smooth projective variety of dimension $n$. Let $L$ be an ample Cartier divisor. Then the complete linear system $|K_{X}+(n+1)L|$ is basepoint-free. \end{conj} In this paper, we treat a generalization of Fujita's freeness conjecture for highly singular varieties. More precisely, we are mainly interested in quasi-log canonical pairs. A quasi-log canonical pair may be reducible and is not necessarily equidimensional. The union of some log canonical centers of a given log canonical pair is a typical example of quasi-log canonical pairs. We think that it is worth formulating and studying various conjectures for quasi-log canonical pairs in order to solve the original conjecture by some inductive arguments on the dimension. \begin{conj}[Fujita-type freeness for quasi-log canonical pairs]\label{a-conj1.2} Let $[X, \omega]$ be a projective quasi-log canonical pair of dimension $n$. Let $M$ be a Cartier divisor on $X$. We put $N=M-\omega$. Assume that $N^{\dim X_i}\cdot X_i> (\dim X_i)^{\dim X_i}$ for every positive-dimensional irreducible component $X_i$ of $X$. For every positive-dimensional subvariety $Z$ which is not an irreducible component of $X$, we put $$n_Z=\min_i\{ \dim X_i \, |\, {\text{$X_i$ is an irreducible component of $X$ with $Z\subset X_i$}}\}$$ and assume that $N^{\dim Z}\cdot Z\geq n_Z^{\dim Z}$. Then the complete linear system $|M|$ is basepoint-free. 
\end{conj} If $N^{\dim X_i}\cdot X_i>\left(\frac{1}{2}n(n+1)\right)^{\dim X_i}$ and $N^{\dim Z}\cdot Z>\left(\frac{1}{2}n(n+1)\right)^{\dim Z}$ hold in Conjecture \ref{a-conj1.2}, then it is already known that the complete linear system $|M|$ is basepoint-free by the second author's theorem (see \cite[Theorem 1.1]{liu} for the precise statement). It is a generalization of Angehrn--Siu's theorem (see \cite{angehrn-siu}). When $\dim X=1$, we can easily check that Conjecture \ref{a-conj1.2} holds true. \begin{thm}[Theorem \ref{a-thm3.1}]\label{a-thm1.3} Conjecture \ref{a-conj1.2} holds true for $n=1$. \end{thm} The main technical result of this paper is the following theorem. \begin{thm}[Theorem \ref{a-thm3.2}]\label{a-thm1.4} Let $[X, \omega]$ be a quasi-log canonical pair such that $X$ is a normal projective irreducible surface. Let $M$ be a Cartier divisor on $X$. We put $N=M-\omega$. We assume that $N^2>4$ and $N\cdot C\geq 2$ for every curve $C$ on $X$. Let $P$ be any closed point of $X$ that is not included in $\Nqklt(X, \omega)$, the union of all qlc centers of $[X, \omega]$. Then there exists $s\in H^0(X, \mathcal I_{\Nqklt(X, \omega)} \otimes \mathcal O_X(M))$ such that $s(P)\ne 0$, where $\mathcal I_{\Nqklt(X, \omega)}$ is the defining ideal sheaf of $\Nqklt(X, \omega)$ on $X$. \end{thm} The proof of Theorem \ref{a-thm1.4} in Section \ref{a-sec3} heavily depends on the first author's new result obtained in \cite{fujino-slc-trivial} (see Theorem \ref{u-thm2.12} below), which comes from the theory of variations of mixed Hodge structure on cohomology with compact support. By combining Theorems \ref{a-thm1.3} and \ref{a-thm1.4} with our result on the normalization of quasi-log canonical pairs (see Theorem \ref{u-thm2.11} below), we prove Conjecture \ref{a-conj1.2} for $n=2$ in full generality. \begin{cor}[Corollary \ref{a-cor3.3}]\label{a-cor1.5} Conjecture \ref{a-conj1.2} holds true for $n=2$.
\end{cor} We note that we can recover the main theorem of \cite{fujino-slc-surface} by combining Theorem \ref{a-thm1.3} and Corollary \ref{a-cor1.5} with the main result of \cite{fujino-funda-slc}. \begin{cor}[{\cite[Theorem 1.3]{fujino-slc-surface}}]\label{a-cor1.6} Let $(X, \Delta)$ be a projective semi-log canonical pair of dimension $n$. Let $M$ be a Cartier divisor on $X$. We put $N=M-(K_X+\Delta)$. Assume that $N^n\cdot X_i>n^n$ for every irreducible component $X_i$ of $X$ and that $N^k\cdot Z\geq n^k$ for every subvariety $Z$ with $0<\dim Z=k<n$. We further assume that $n=1$ or $2$. Then the complete linear system $|M|$ is basepoint-free. \end{cor} Let us quickly explain our strategy to prove Conjecture \ref{a-conj1.2}. From now on, we will use the same notation as in Conjecture \ref{a-conj1.2}. We take an arbitrary closed point $P$ of $X$. Then it is sufficient to find $s\in H^0(X, \mathcal O_X(M))$ with $s(P)\ne 0$. Let $X_i$ be an irreducible component of $X$ such that $P\in X_i$. By adjunction (see Theorem \ref{u-thm2.8} (i)), $[X_i, \omega|_{X_i}]$ is a quasi-log canonical pair. By the vanishing theorem (see Theorem \ref{u-thm2.8} (ii)), the natural restriction map $H^0(X, \mathcal O_X(M))\to H^0(X_i, \mathcal O_{X_i}(M))$ is surjective. Therefore, by replacing $X$ with $X_i$, we may assume that $X$ is irreducible. By adjunction again, $[\Nqklt(X, \omega), \omega|_{\Nqklt(X, \omega)}]$ is a quasi-log canonical pair. By the vanishing theorem, the natural restriction map $H^0(X, \mathcal O_X(M)) \to H^0(\Nqklt(X, \omega), \mathcal O_{\Nqklt(X, \omega)}(M))$ is surjective. Therefore, if $P\in \Nqklt(X, \omega)$, then we can use induction on the dimension. Thus we may further assume that $P\not\in\Nqklt(X, \omega)$. In this situation, we know that $X$ is normal at $P$. Let $\nu: \widetilde X\to X$ be the normalization.
Then, by Theorem \ref{u-thm2.11}, $[\widetilde X, \nu^*\omega]$ is a quasi-log canonical pair with $\nu_*\mathcal I_{\Nqklt(\widetilde X, \nu^*\omega)}=\mathcal I_{\Nqklt(X, \omega)}$. Therefore, it is sufficient to find $\widetilde s\in H^0(\widetilde X, \mathcal I_{\Nqklt(\widetilde X, \nu^*\omega)}\otimes \mathcal O_{\widetilde X} (\nu^*M))$ with $\widetilde s(\widetilde P)\ne 0$, where $\widetilde P=\nu^{-1}(P)$. By replacing $X$ with $\widetilde X$, we may assume that $X$ is a normal irreducible variety. By using Theorem \ref{u-thm2.12}, we can take a boundary $\mathbb R$-divisor $\Delta$, that is, an effective $\mathbb R$-divisor $\Delta$ with $\Delta=\Delta^{\leq 1}$, on $X$ such that $K_X+\Delta\sim _{\mathbb R} \omega+\varepsilon N$ for $0<\varepsilon \ll 1$ and $\mathcal J(X, \Delta)=\mathcal I_{\Nqklt(X, \omega)}$. Note that $\mathcal J(X, \Delta)$ is the multiplier ideal sheaf of $(X, \Delta)$. Since $\mathcal J(X, \Delta)=\mathcal I_{\Nqklt(X, \omega)}$, $(X, \Delta)$ is klt in a neighborhood of $P$. Anyway, it is sufficient to find $s\in H^0(X, \mathcal J(X, \Delta)\otimes \mathcal O_X(M))$ with $s(P)\ne 0$. In this paper, we will carry out the above strategy in dimension two. \begin{ack} The first author was partially supported by JSPS KAKENHI Grant Numbers JP16H03925, JP16H06337. The authors would like to thank Kenta Hashizume and Professor Wenfei Liu for discussions. They also thank the referee for valuable comments. \end{ack} We will work over $\mathbb C$, the complex number field, throughout this paper. A scheme means a separated scheme of finite type over $\mathbb C$. A variety means a reduced scheme, that is, a reduced separated scheme of finite type over $\mathbb C$. We sometimes assume that a variety is irreducible without mentioning it explicitly if there is no risk of confusion. We will freely use the standard notation of the minimal model program and the theory of quasi-log schemes as in \cite{fujino-funda} and \cite{fujino-foundations}. 
For the details of semi-log canonical pairs, see \cite{fujino-funda-slc}. \section{Preliminaries}\label{a-sec2} In this section, we collect some basic definitions and explain some results on quasi-log schemes. \begin{defn}[$\mathbb R$-divisors]\label{u-def2.1} Let $X$ be an equidimensional variety, which is not necessarily regular in codimension one. Let $D$ be an $\mathbb R$-divisor, that is, $D$ is a finite formal sum $\sum _i d_i D_i$, where $D_i$ is an irreducible reduced closed subscheme of $X$ of pure codimension one and $d_i$ is a real number for every $i$ such that $D_i\ne D_j$ for $i\ne j$. We put \begin{equation*} D^{<1} =\sum _{d_i<1}d_iD_i, \quad D^{\leq 1}=\sum _{d_i\leq 1} d_i D_i, \quad D^{> 1}=\sum _{d_i>1} d_i D_i, \quad \text{and} \quad D^{=1}=\sum _{d_i=1}D_i. \end{equation*} We also put $$ \lceil D\rceil =\sum _i \lceil d_i \rceil D_i \quad \text{and} \quad \lfloor D\rfloor=-\lceil -D\rceil, $$ where $\lceil d_i\rceil$ is the integer defined by $d_i\leq \lceil d_i\rceil <d_i+1$. When $D=D^{\leq 1}$ holds, we usually say that $D$ is a {\em{subboundary}} $\mathbb R$-divisor. Let $B_1$ and $B_2$ be $\mathbb R$-Cartier divisors on $X$. Then $B_1\sim _{\mathbb R} B_2$ means that $B_1$ is $\mathbb R$-linearly equivalent to $B_2$. \end{defn} Let us quickly recall singularities of pairs for the reader's convenience. We recommend the reader to see \cite[Section 2.3]{fujino-foundations} for the details. \begin{defn}[Singularities of pairs]\label{u-def2.2} Let $X$ be a normal variety and let $\Delta$ be an $\mathbb R$-divisor on $X$ such that $K_X+\Delta$ is $\mathbb R$-Cartier. Let $f:Y\to X$ be a projective birational morphism from a smooth variety $Y$. Then we can write $$ K_Y=f^*(K_X+\Delta)+\sum _E a(E, X, \Delta)E, $$ where $a(E, X, \Delta)\in \mathbb R$ and $E$ is a prime divisor on $Y$. 
By taking $f:Y\to X$ suitably, we can define $a(E, X, \Delta)$ for any prime divisor $E$ {\em{over}} $X$ and call it the {\em{discrepancy}} of $E$ with respect to $(X, \Delta)$. If $a(E, X, \Delta)>-1$ (resp.~$a(E, X, \Delta)\geq -1$) holds for any prime divisor $E$ over $X$, then we say that $(X, \Delta)$ is {\em{sub klt}} (resp.~{\em{sub log canonical}}). If $(X, \Delta)$ is sub klt (resp.~sub log canonical) and $\Delta$ is effective, then we say that $(X, \Delta)$ is {\em{klt}} (resp.~{\em{log canonical}}). If $(X, \Delta)$ is log canonical and $a(E, X, \Delta)>-1$ for any prime divisor $E$ that is exceptional over $X$, then we say that $(X, \Delta)$ is {\em{plt}}. If there exist a projective birational morphism $f:Y\to X$ from a smooth variety $Y$ and a prime divisor $E$ on $Y$ such that $a(E, X, \Delta)=-1$ and $(X, \Delta)$ is log canonical in a neighborhood of the generic point of $f(E)$, then $f(E)$ is called a {\em{log canonical center}} of $(X, \Delta)$. \end{defn} \begin{defn}[Multiplier ideal sheaves]\label{u-def2.3} Let $X$ be a normal variety and let $\Delta$ be an effective $\mathbb R$-divisor on $X$ such that $K_X+\Delta$ is $\mathbb R$-Cartier. Let $f:Y\to X$ be a projective birational morphism from a smooth variety $Y$ such that $$ K_Y+\Delta_Y=f^*(K_X+\Delta) $$ and $\Supp \Delta_Y$ is a simple normal crossing divisor on $Y$. We put $$ \mathcal J(X, \Delta)=f_*\mathcal O_Y(-\lfloor \Delta_Y\rfloor) $$ and call it the {\em{multiplier ideal sheaf}} of $(X, \Delta)$. We can easily check that $\mathcal J(X, \Delta)$ is a well-defined ideal sheaf on $X$. The closed subscheme defined by $\mathcal J(X, \Delta)$ is denoted by $\Nklt(X, \Delta)$. \end{defn} The notion of {\em{globally embedded simple normal crossing pairs}} plays a crucial role in the theory of quasi-log schemes described in \cite[Chapter 6]{fujino-foundations}.
\begin{defn}[Globally embedded simple normal crossing pairs]\label{u-def2.4} Let $Y$ be a simple normal crossing divisor on a smooth variety $M$ and let $B$ be an $\mathbb R$-divisor on $M$ such that $Y$ and $B$ have no common irreducible components and that the support of $Y+B$ is a simple normal crossing divisor on $M$. In this situation, $(Y, B_Y)$, where $B_Y:=B|_Y$, is called a {\em{globally embedded simple normal crossing pair}}. A {\em{stratum}} of $(Y, B_Y)$ means a log canonical center of $(M, Y+B)$ included in $Y$. \end{defn} Let us recall the notion of {\em{quasi-log schemes}}, which was first introduced by Florin Ambro (see \cite{ambro}). The following definition is slightly different from the original one. For the details, see \cite[Appendix A]{fujino-pull-back}. In this paper, we will use the framework of quasi-log schemes established in \cite[Chapter 6]{fujino-foundations}. \begin{defn}[Quasi-log schemes]\label{u-def2.5} A {\em{quasi-log scheme}} is a scheme $X$ endowed with an $\mathbb R$-Cartier divisor (or $\mathbb R$-line bundle) $\omega$ on $X$, a closed subscheme $\Nqlc(X, \omega)\subsetneq X$, and a finite collection $\{C\}$ of reduced and irreducible subschemes of $X$ such that there exists a proper morphism $f:(Y, B_Y)\to X$ from a globally embedded simple normal crossing pair $(Y, B_Y)$ satisfying the following properties: \begin{itemize} \item[(1)] $f^*\omega\sim_{\mathbb R} K_Y+B_Y$. \item[(2)] The natural map $\mathcal O_X\to f_*\mathcal O_Y(\lceil -(B^{<1}_Y)\rceil)$ induces an isomorphism $$ \mathcal I_{\Nqlc(X, \omega)}\overset{\sim}\longrightarrow f_*\mathcal O_Y(\lceil -(B^{<1}_Y)\rceil-\lfloor B^{>1}_Y\rfloor), $$ where $\mathcal I_{\Nqlc(X, \omega)}$ is the defining ideal sheaf of $\Nqlc(X, \omega)$. \item[(3)] The collection of subvarieties $\{C\}$ coincides with the images of $(Y, B_Y)$-strata that are not included in $\Nqlc(X, \omega)$. 
\end{itemize} We simply write $[X, \omega]$ to denote the above data $$\left(X, \omega, f:(Y, B_Y)\to X\right) $$ if there is no risk of confusion. We note that the subvarieties $C$ are called the {\em{qlc strata}} of $\left(X, \omega, f:(Y, B_Y)\to X\right)$ or simply of $[X, \omega]$. If $C$ is a qlc stratum of $[X, \omega]$ but is not an irreducible component of $X$, then $C$ is called a {\em{qlc center}} of $[X, \omega]$. The union of all qlc centers of $[X, \omega]$ is denoted by $\Nqklt(X, \omega)$. \end{defn} If $B_Y$ is a subboundary $\mathbb R$-divisor, then $[X, \omega]$ in Definition \ref{u-def2.5} is called a {\em{quasi-log canonical pair}}. \begin{defn}[Quasi-log canonical pairs]\label{u-def2.6} Let $(X, \omega, f:(Y, B_Y)\to X)$ be a quasi-log scheme as in Definition \ref{u-def2.5}. We say that $(X, \omega, f:(Y, B_Y)\to X)$ or simply $[X, \omega]$ is a {\em{quasi-log canonical pair}} ({\em{qlc pair}}, for short) if $\Nqlc(X, \omega)=\emptyset$. Note that the condition $\Nqlc(X, \omega)=\emptyset$ is equivalent to $B^{>1}_Y=0$, that is, $B_Y=B^{\leq 1}_Y$. \end{defn} The following example is very important. Precisely speaking, the notion of quasi-log schemes was originally introduced by Florin Ambro (see \cite{ambro}) in order to establish the cone and contraction theorem for {\em{generalized log varieties}}. Note that a generalized log variety $(X, \Delta)$ means that $X$ is a normal variety and $\Delta$ is an effective $\mathbb R$-divisor on $X$ such that $K_X+\Delta$ is $\mathbb R$-Cartier as in Example \ref{u-ex2.7} below. \begin{ex}\label{u-ex2.7} Let $X$ be a normal irreducible variety and let $\Delta$ be an effective $\mathbb R$-divisor on $X$ such that $K_X+\Delta$ is $\mathbb R$-Cartier. Let $f:Y\to X$ be a projective birational morphism from a smooth variety $Y$. We define $\Delta_Y$ by $$K_Y+\Delta_Y=f^*(K_X+\Delta). $$ We may assume that $\Supp \Delta_Y$ is a simple normal crossing divisor on $Y$ by taking $f:Y\to X$ suitably. 
We put $M=Y\times \mathbb C$ and consider $Y\simeq Y\times \{0\}\hookrightarrow Y\times \mathbb C=M$. Then we can see $(Y, \Delta_Y)$ as a globally embedded simple normal crossing pair. We put $\omega:=K_X+\Delta$ and $$\mathcal I_{\Nqlc(X, \omega)}:=f_*\mathcal O_Y(\lceil -(\Delta^{<1}_Y)\rceil -\lfloor \Delta^{>1}_Y\rfloor)\subset \mathcal O_X. $$ Then $\left(X, \omega, f: (Y, \Delta_Y)\to X\right)$ is a quasi-log scheme. In this case, $C$ is a qlc center of $[X, \omega]$ if and only if $C$ is a log canonical center of $(X, \Delta)$. If $C$ is a qlc stratum but is not a qlc center of $[X, \omega]$, then $C$ is nothing but $X$. \end{ex} One of the most important results in the theory of quasi-log schemes is the following theorem. \begin{thm}\label{u-thm2.8} Let $[X,\omega]$ be a quasi-log scheme and let $X'$ be the union of $\Nqlc(X, \omega)$ with a $($possibly empty$)$ union of some qlc strata of $[X,\omega]$. Then we have the following properties. \begin{itemize} \item[(i)] {\em{(Adjunction)}}.~Assume that $X'\neq \Nqlc(X, \omega)$. Then $[X', \omega']$ is a quasi-log scheme with $\omega'=\omega \vert_{X'}$ and $\Nqlc(X', \omega')=\Nqlc(X, \omega)$. Moreover, the qlc strata of $[X',\omega']$ are exactly the qlc strata of $[X,\omega]$ that are included in $X'$. \item[(ii)] {\em{(Vanishing theorem)}}.~Assume that $ \pi: X \rightarrow S$ is a proper morphism between schemes. Let $L$ be a Cartier divisor on $X$ such that $L-\omega$ is nef and log big over $S$ with respect to $[X,\omega]$, that is, $L-\omega$ is $\pi$-nef and $(L-\omega)|_C$ is $\pi$-big for every qlc stratum $C$ of $[X, \omega]$. Then $R^{i}\pi_{*}(\mathcal {I}_{X'}\otimes \mathcal{O}_{X}(L))=0$ for every $i>0$, where $\mathcal{I}_{X'}$ is the defining ideal sheaf of $X'$ on $X$. \end{itemize} \end{thm} For the proof of Theorem \ref{u-thm2.8}, see, for example, \cite[Theorem 6.3.5] {fujino-foundations}. 
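For the reader's convenience, we record the form in which Theorem \ref{u-thm2.8} (ii) is typically applied in this paper. This is a routine consequence of the long exact sequence in cohomology; the statement below is a sketch in the absolute case, that is, when $S$ is a point.
```latex
% Twisting the ideal sheaf sequence of X' by L,
%   0 \to \mathcal I_{X'}\otimes\mathcal O_X(L) \to \mathcal O_X(L)
%     \to \mathcal O_{X'}(L) \to 0,
% and using H^1(X, \mathcal I_{X'}\otimes\mathcal O_X(L))=0 from
% Theorem 2.8 (ii), we obtain the surjectivity of the restriction map
\[
H^0(X, \mathcal O_X(L)) \longrightarrow H^0(X', \mathcal O_{X'}(L))
\longrightarrow 0.
\]
```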
We note that we generalized Koll\'ar's torsion-free and vanishing theorems in \cite[Chapter 5]{fujino-foundations} by using the theory of mixed Hodge structures on cohomology with compact support in order to establish Theorem \ref{u-thm2.8}. \medskip Let us quickly recall the definition of semi-log canonical pairs for the reader's convenience. \begin{defn}[Semi-log canonical pairs]\label{u-def2.9} Let $X$ be an equidimensional variety that is normal crossing in codimension one and satisfies Serre's $S_2$ condition and let $\Delta$ be an effective $\mathbb R$-divisor on $X$ such that the singular locus of $X$ contains no irreducible components of $\Supp \Delta$. Assume that $K_X+\Delta$ is $\mathbb R$-Cartier. Let $\nu:\widetilde X\to X$ be the normalization. We put $K_{\widetilde X}+\Delta_{\widetilde X}=\nu^*(K_X+\Delta)$, that is, $\Delta_{\widetilde X}$ is the sum of the inverse image of $\Delta$ and the conductor of $X$. If $(\widetilde X, \Delta_{\widetilde X})$ is log canonical, then $(X, \Delta)$ is called a {\em{semi-log canonical pair}}. \end{defn} The theory of quasi-log schemes plays an important role in the study of semi-log canonical pairs through the following theorem: Theorem \ref{u-thm2.10}. For the precise statement and some related results, see \cite{fujino-funda-slc}. \begin{thm}[{\cite[Theorem 1.2]{fujino-funda-slc}}]\label{u-thm2.10} Let $(X, \Delta)$ be a quasi-projective semi-log canonical pair. Then $[X, K_X+\Delta]$ is a quasi-log canonical pair. \end{thm} For the proof of Corollary \ref{a-cor1.5}, we will use Theorem \ref{u-thm2.11} below. \begin{thm}[{\cite[Theorem 1.1]{fujino-haidong}}]\label{u-thm2.11} Let $[X, \omega]$ be a quasi-log canonical pair such that $X$ is irreducible. Let $\nu: \widetilde{X}\to X$ be the normalization.
Then $[\widetilde{X}, \nu^*\omega]$ naturally becomes a quasi-log canonical pair with the following properties: \begin{itemize} \item[(i)] if $C$ is a qlc center of $[\widetilde{X}, \nu^*\omega]$, then $\nu(C)$ is a qlc center of $[X, \omega]$, and \item[(ii)] $\Nqklt(\widetilde{X}, \nu^*\omega)=\nu^{-1}(\Nqklt(X, \omega))$. More precisely, the equality $$ \nu_*\mathcal I_{\Nqklt(\widetilde{X}, \nu^*\omega)}= \mathcal I_{\Nqklt(X, \omega)} $$ holds, where $\mathcal I_{\Nqklt(X, \omega)}$ and $\mathcal I_{\Nqklt(\widetilde{X}, \nu^*\omega)}$ are the defining ideal sheaves of $\Nqklt(X, \omega)$ and $\Nqklt(\widetilde{X}, \nu^*\omega)$ respectively. \end{itemize} \end{thm} The following theorem is a special case of \cite[Theorem 1.5] {fujino-slc-trivial}. It is a deep result based on the theory of variations of mixed Hodge structure on cohomology with compact support. \begin{thm}[{\cite[Theorem 1.5]{fujino-slc-trivial}}]\label{u-thm2.12} Let $[X, \omega]$ be a quasi-log canonical pair such that $X$ is a normal projective irreducible variety. Then there exists a projective birational morphism $p:X'\to X$ from a smooth projective variety $X'$ such that $$ K_{X'}+B_{X'}+M_{X'}=p^*\omega, $$ where $B_{X'}$ is a subboundary $\mathbb R$-divisor, that is, $B_{X'}=B^{\leq 1}_{X'}$, such that $\Supp B_{X'}$ is a simple normal crossing divisor and that $p_*B_{X'}$ is effective, and $M_{X'}$ is a nef $\mathbb R$-divisor on $X'$. Furthermore, we can make $B_{X'}$ satisfy $p(B^{=1}_{X'})=\Nqklt(X, \omega)$. \end{thm} We close this section with an easy lemma, which is essentially contained in \cite[Chapter 6]{fujino-foundations}. \begin{lem}\label{u-lem2.13} Let $[X, \omega]$ be a quasi-log canonical pair such that $X$ is an irreducible curve. Let $P$ be a smooth point of $X$ such that $P$ is not a qlc center of $[X, \omega]$. Then we can consider a natural quasi-log structure on $[X, \omega+tP]$ induced from $[X, \omega]$ for every $t\geq 0$. 
We put $$ c=\max \{t\geq 0\, |\, \text{$[X, \omega+tP]$ is quasi-log canonical}\}. $$ Then $0<c\leq 1$ holds. \end{lem} \begin{proof} Since $[X, \omega]$ is a qlc pair, we can take a projective surjective morphism $f:(Y, B_Y)\to X$ from a globally embedded simple normal crossing pair $(Y, B_Y)$ such that $B_Y$ is a subboundary $\mathbb R$-divisor on $Y$ and that the natural map $\mathcal O_X\to f_*\mathcal O_Y(\lceil -(B^{<1}_Y)\rceil)$ is an isomorphism. By taking some blow-ups, we may further assume that $(Y, \Supp B_Y+\Supp f^*P)$ is a globally embedded simple normal crossing pair. Then it is easy to see that $$\left(X, \omega+tP, f: (Y, B_Y+tf^*P)\to X \right) $$ is a quasi-log scheme for every $t\geq 0$. We assume that $c>1$. Then $\mult _S (B_Y+f^*P)<1$ for any irreducible component $S$ of $\Supp f^*P$. Therefore, $f^*P\leq \lceil -(B^{<1}_Y)\rceil$ holds. Thus we have $\mathcal O_X\subsetneq \mathcal O_X(P)\subset f_*\mathcal O_Y(\lceil -(B^{<1}_Y)\rceil)$. This is a contradiction. This means that $c\leq 1$ holds. By definition, we can easily see that $0<c$ holds. \end{proof} \section{Proof}\label{a-sec3} In this section, we prove the results in Section \ref{a-sec1}, that is, Theorems \ref{a-thm1.3}, \ref{a-thm1.4}, Corollaries \ref{a-cor1.5}, and \ref{a-cor1.6}. \medskip First, we prove Theorem \ref{a-thm1.3}, that is, we prove Conjecture \ref{a-conj1.2} when $\dim X=1$. \begin{thm}[Theorem \ref{a-thm1.3}]\label{a-thm3.1} Let $[X,\omega]$ be a projective quasi-log canonical pair of dimension one. Let $M$ be a Cartier divisor on $X$. We put $N=M-\omega$. Assume that $N \cdot X_i> 1$ for every one-dimensional irreducible component $ X_i$ of $X$. Then the complete linear system $|M|$ is basepoint-free. \end{thm} \begin{proof} Let $P$ be an arbitrary closed point of $X$. If $P$ is a qlc center of $[X, \omega]$, then $H^1(X, \mathcal I_P\otimes \mathcal O_X(M))=0$ by Theorem \ref{u-thm2.8} (ii), where $\mathcal I_P$ is the defining ideal sheaf of $P$ on $X$. 
Therefore, the natural restriction map \begin{equation*} H^0(X, \mathcal O_X(M))\to \mathcal O_X(M)\otimes \mathbb C(P) \end{equation*} is surjective. Thus, the complete linear system $|M|$ is basepoint-free in a neighborhood of $P$. From now on, we assume that $P$ is not a qlc center of $[X, \omega]$. Let $X_i$ be the unique irreducible component of $X$ containing $P$. By Theorem \ref{u-thm2.8} (ii), $H^1(X, \mathcal I_{X_i} \otimes \mathcal O_X(M))=0$, where $\mathcal I_{X_i}$ is the defining ideal sheaf of $X_i$ on $X$. We note that $X_i$ is a qlc stratum of $[X, \omega]$. Thus, the restriction map $$ H^0(X, \mathcal O_X(M))\to H^0(X_i, \mathcal O_{X_i}(M)) $$ is surjective. Therefore, by replacing $X$ with $X_i$, we may assume that $X$ is irreducible. By Lemma \ref{u-lem2.13}, we can take $c\in \mathbb R$ such that $0<c\leq 1$ and that $P$ is a qlc center of $[X, \omega+cP]$. Since $\deg (M-(\omega+cP))>1-c\geq 0$, we have $H^1(X, \mathcal I_P\otimes \mathcal O_X(M))=0$ by Theorem \ref{u-thm2.8} (ii). Therefore, by the same argument as above, $|M|$ is basepoint-free in a neighborhood of $P$. Thus we obtain that the complete linear system $|M|$ is basepoint-free. \end{proof} Next, we prove Theorem \ref{a-thm1.4}, which is the main technical result of this paper. \begin{thm}[Theorem \ref{a-thm1.4}]\label{a-thm3.2} Let $[X, \omega]$ be a quasi-log canonical pair such that $X$ is a normal projective irreducible surface. Let $M$ be a Cartier divisor on $X$. We put $N=M-\omega$. We assume that $N^2>4$ and $N\cdot C\geq 2$ for every curve $C$ on $X$. Let $P$ be any closed point of $X$ that is not included in $\Nqklt(X, \omega)$. Then there exists $s\in H^0(X, \mathcal I_{\Nqklt(X, \omega)} \otimes \mathcal O_X(M))$ such that $s(P)\ne 0$. \end{thm} \begin{proof} By assumption and Nakai's ampleness criterion for $\mathbb R$-divisors (see \cite{campana-peternell}), $N$ is ample. 
In Step \ref{a-step1}, we will prove Theorem \ref{a-thm3.2} under the extra assumption that $P$ is a smooth point of $X$. In Step \ref{a-step2}, we will treat the case where $P$ is a singular point of $X$. \begin{step}\label{a-step1} In this step, we assume that $P$ is a smooth point of $X$. Since $N^2>4$, we can take an effective $\mathbb R$-divisor $B$ on $X$ such that $B\sim _{\mathbb R} N$ with $\mult_P B=2+\alpha>2$. By Theorem \ref{u-thm2.12}, there exists a projective birational morphism $p:X'\to X$ from a smooth projective surface $X'$ such that $K_{X'}+B_{X'}+M_{X'}=p^*\omega$, where $B_{X'}$ is a subboundary $\mathbb R$-divisor such that $p_*B_{X'}$ is effective and $M_{X'}$ is a nef $\mathbb R$-divisor on $X'$. Let $\Exc(p)$ denote the exceptional locus of $p$. By taking some more blow-ups, we may further assume that $p(B^{=1}_{X'})=\Nqklt(X, \omega)$ and that $\Supp B_{X'}\cup \Supp p^{-1}_*B\cup \Exc(p)$ is contained in a simple normal crossing divisor $\Sigma$ on $X'$ (see Theorem \ref{u-thm2.12}). Let $\varepsilon$ be a small positive real number such that $(1-\varepsilon)(2+\alpha)>2$. We can take an effective $p$-exceptional $\mathbb Q$-divisor $E$ on $X'$ such that $-E$ is $p$-ample and that $M_{X'}+\varepsilon (p^*N-E)$ is semi-ample for any $\varepsilon >0$. For $0<\varepsilon \ll 1$, we put $\Delta_{\varepsilon}:=p_*(B_{X'}+\varepsilon E+G_{\varepsilon})$ where $G_\varepsilon$ is a general effective $\mathbb R$-divisor such that $G_{\varepsilon}\sim _{\mathbb R} M_{X'}+\varepsilon (p^*N-E)$, $\Supp G_{\varepsilon}$ and $\Supp \Sigma$ have no common irreducible components, $\lfloor G_\varepsilon \rfloor=0$, and $\Supp (\Sigma+G_{\varepsilon})$ is a simple normal crossing divisor on $X'$. 
Since the effective part of $-\lfloor B_{X'}+\varepsilon E+G_{\varepsilon}\rfloor$ is $p$-exceptional and $p(B^{=1}_{X'})=\Nqklt(X, \omega)$, we obtain \begin{equation*} \begin{split} \mathcal J(X, \Delta_{\varepsilon}) &=p_*\mathcal O_{X'} (-\lfloor B_{X'}+\varepsilon E+G_{\varepsilon}\rfloor) \\ &=p_*\mathcal O_{X'} (-\lfloor (B_{X'}+\varepsilon E+G_{\varepsilon})^{\geq 1}\rfloor) \\&=p_*\mathcal O_{X'}(-B^{=1}_{X'}) \\&=\mathcal I _{\Nqklt(X, \omega)}. \end{split} \end{equation*} We put $B_{\varepsilon}:=(1-\varepsilon)B$ and define $$ r_{\varepsilon}=\max\{t\geq 0\, |\, (X, \Delta_{\varepsilon}+tB_{\varepsilon}) \ \text{is log canonical at $P$}\}. $$ By construction, $\mult_P B_{\varepsilon}>2$ and $\Delta_{\varepsilon}$ is an effective $\mathbb R$-divisor on $X$. Therefore, we have $0<r_{\varepsilon}<1$. Note that $(X, \Delta_{\varepsilon})$ is klt at $P$. By construction again, there is an irreducible component $S_{\varepsilon}$ of $\Sigma$ such that $$ r_{\varepsilon}\mult _{S_{\varepsilon}}p^*B_{\varepsilon}+ \mult _{S_{\varepsilon}}B_{X'}+\varepsilon \mult _{S_{\varepsilon}}E=1. $$ Therefore, $$ 0<r_{\varepsilon}=\frac{1-\mult _{S_{\varepsilon}}B_{X'}- \varepsilon \mult _{S_{\varepsilon}} E}{ (1-\varepsilon)\mult _{S_{\varepsilon}}p^*B} <1 $$ holds. Since there are only finitely many components of $\Sigma$, we can take $\{\varepsilon_i\}_{i=1}^\infty$ and $\delta>0$ such that $0<\varepsilon _i\ll 1$, $\mathcal J(X, \Delta_{\varepsilon _i}) =\mathcal I_{\Nqklt(X, \omega)}$, $(X, \Delta_{\varepsilon_i})$ is klt at $P$, $(X, \Delta_{\varepsilon_i}+r_{\varepsilon_i}B_{\varepsilon_i})$ is log canonical at $P$ but is not klt at $P$ with $\delta<r_{\varepsilon_i}<1$ for every $i$, and $\varepsilon_i\searrow 0$ for $i\nearrow \infty$. By $p:X'\to X$, we get a natural quasi-log structure on $[X, \omega_{\varepsilon}]$ with $\omega_\varepsilon:=K_X+\Delta_{\varepsilon}+r_{\varepsilon} B_{\varepsilon}$ for any $\varepsilon=\varepsilon _i$ (see Example \ref{u-ex2.7}). 
Note that $[X, \omega_{\varepsilon}]$ is qlc in a neighborhood of $P$ since $(X, \Delta_{\varepsilon}+r_{\varepsilon} B_{\varepsilon})$ is log canonical around $P$. Let $W_\varepsilon$ be the minimal qlc center of $[X, \omega_{\varepsilon}]$ passing through $P$, equivalently, let $W_\varepsilon$ be the minimal log canonical center of $(X, \Delta_{\varepsilon}+r_{\varepsilon}B_{\varepsilon})$ passing through $P$. Let $V_\varepsilon$ be the union of all qlc centers of $[X, \omega_{\varepsilon}]$ contained in $\Nqklt(X, \omega)=\Nklt(X, \Delta_{\varepsilon})$. We put $Z_\varepsilon =\Nqlc(X, \omega_{\varepsilon}) \cup V_\varepsilon\cup W_\varepsilon$ and $Y_\varepsilon=\Nqlc(X, \omega_{\varepsilon})\cup V_\varepsilon$. Then $[Z_\varepsilon, \omega_\varepsilon|_{Z_{\varepsilon}}]$ and $[Y_\varepsilon, \omega_\varepsilon|_{Y_{\varepsilon}}]$ have natural quasi-log structures induced from $[X, \omega_{\varepsilon}]$ by adjunction (see Theorem \ref{u-thm2.8} (i)). Since $$M-\omega_{\varepsilon}= M-(K_X+\Delta_{\varepsilon}+r_{\varepsilon}B_{\varepsilon}) \sim _{\mathbb R} (1-r_{\varepsilon})(1-\varepsilon)N, $$ which is still ample, the restriction map \begin{equation}\label{u-eq3.1} H^0(X, \mathcal O_X(M))\to H^0(Z_{\varepsilon}, \mathcal O_{Z_{\varepsilon}}(M)) \end{equation} is surjective by Theorem \ref{u-thm2.8} (ii). \begin{case}\label{a-case1} If $\dim W_\varepsilon=0$, then $W_\varepsilon =P$ is isolated in $Z_\varepsilon$ by construction. Thus $Z_\varepsilon$ is the disjoint union of $P$ and $Y_\varepsilon$. Therefore, by \eqref{u-eq3.1}, the restriction map $$ H^0(X, \mathcal O_X(M))\to H^0(Y_{\varepsilon}, \mathcal O_{Y_\varepsilon}(M))\oplus H^0(P, \mathcal O_P(M)) $$ is surjective. This means that there exists $s\in H^0(X, \mathcal O_X(M))$ such that $s(P)\ne 0$ and $s\in H^0(X, \mathcal I_{Y_{\varepsilon}}\otimes \mathcal O_X(M))\subset H^0(X, \mathcal I_{\Nqklt(X, \omega)}\otimes \mathcal O_X(M))$. 
Note that $\mathcal I_{Y_\varepsilon}$ is the defining ideal sheaf of $Y_{\varepsilon}$ on $X$ and the natural inclusion $\mathcal I_{Y_{\varepsilon}}\subset \mathcal I_{\Nqklt(X, \omega)}$ holds by construction. This is what we wanted. \end{case} \begin{case}\label{a-case2} By Case \ref{a-case1}, we may assume that $\dim W_{\varepsilon}=1$ for any $\varepsilon=\varepsilon_i$. By construction, $P$ is not a qlc center of $[Z_\varepsilon, \omega_{\varepsilon}|_{Z_\varepsilon}]$. Therefore, $Z_\varepsilon$ is smooth at $P$ since $\dim W_\varepsilon =1$ (see, for example, \cite[Theorem 6.3.11 (ii)]{fujino-foundations}). Let us consider $[Z_\varepsilon, \omega_\varepsilon|_{Z_\varepsilon} +c_{\varepsilon}P]$ where $c_{\varepsilon}$ is the minimum positive real number such that $P$ is a qlc center of $[Z_\varepsilon, \omega_\varepsilon|_{Z_\varepsilon} +c_{\varepsilon}P]$ (see Lemma \ref{u-lem2.13} and its proof). We write $\Delta_{\varepsilon}+r_{\varepsilon}B_{\varepsilon} =W_{\varepsilon}+\Delta'_{\varepsilon}$. We put $\mult _P \Delta'_{\varepsilon}=\beta_{\varepsilon}\geq 0$. Then $$ \beta_{\varepsilon} =\mult _P \Delta_{\varepsilon} +r_{\varepsilon} (1-\varepsilon)(2+\alpha)-1\geq r_{\varepsilon} (1-\varepsilon)(2+\alpha)-1. $$ We note that $$ \beta_{\varepsilon}\leq \mult _P(\Delta'_{\varepsilon}|_{W_\varepsilon})<1 $$ holds because $(X, W_{\varepsilon} +\Delta'_{\varepsilon})$ is plt in a neighborhood of $P$. We note that $$(X, W_\varepsilon+\Delta'_{\varepsilon}+(1-\mult_P(\Delta'_ {\varepsilon}|_{W_{\varepsilon}}))H)$$ is log canonical but is not plt in a neighborhood of $P$, where $H$ is a general smooth curve passing through $P$. Therefore, $$ c_{\varepsilon}=1-\mult_P(\Delta'_{\varepsilon}|_{W_{\varepsilon}}) \leq 1-\beta_{\varepsilon}\leq 2-r_{\varepsilon}(1-\varepsilon)(2+\alpha). 
$$ In this situation, \begin{equation*} \begin{split} \deg ((M-\omega_{\varepsilon})|_{W_{\varepsilon}}-c_{\varepsilon}P) &=(1-r_{\varepsilon})(1-\varepsilon)N\cdot W_\varepsilon -c_{\varepsilon} \\&\geq 2(1-r_{\varepsilon})(1-\varepsilon)-2+ r_{\varepsilon}(1-\varepsilon)(2+\alpha) \\ &=(1-\varepsilon)r_{\varepsilon}\alpha-2\varepsilon. \end{split} \end{equation*} Here we used the assumption $N\cdot W_\varepsilon\geq 2$. We note that $(1-\varepsilon _i)r_{\varepsilon _i}\alpha -2\varepsilon _i>0$ for every $i\gg 0$ since $\varepsilon_i \searrow 0$ for $i\nearrow \infty$ and $r_{\varepsilon _i}>\delta>0$ for every $i$ by construction. Therefore, if we choose $0<\varepsilon=\varepsilon_i\ll 1$, then $$ \deg (M|_{W_{\varepsilon}}-(\omega_{\varepsilon}|_{W_{\varepsilon}}+c_ {\varepsilon}P))>0. $$ Thus, we see that the restriction map \begin{equation}\label{u-eq3.2} H^0(Z_{\varepsilon}, \mathcal O_{Z_{\varepsilon}}(M)) \to H^0(Y_{\varepsilon}, \mathcal O_{Y_{\varepsilon}}(M))\oplus H^0(P, \mathcal O_P(M)) \end{equation} is surjective by considering the quasi-log structure of $[Z_{\varepsilon}, \omega_{\varepsilon}|_{Z_{\varepsilon}}+c_{\varepsilon}P]$ with the aid of Theorem \ref{u-thm2.8}. By combining \eqref{u-eq3.2} with \eqref{u-eq3.1}, the restriction map $$ H^0(X, \mathcal O_X(M))\to H^0(Y_{\varepsilon}, \mathcal O_{Y_{\varepsilon}}(M))\oplus H^0(P, \mathcal O_P(M)) $$ is surjective. As in Case \ref{a-case1}, we get $s\in H^0(X, \mathcal O_X(M))$ such that $s(P)\ne 0$ and $s\in H^0(X, \mathcal I_{\Nqklt(X, \omega)}\otimes \mathcal O_X(M))$. \end{case} Anyway, we can construct $s\in H^0(X, \mathcal I_{\Nqklt(X, \omega)}\otimes \mathcal O_X(M))$ such that $s(P)\ne 0$ when $P$ is a smooth point of $X$. \end{step} \begin{step}\label{a-step2} In this step, we assume that $P$ is a singular point of $X$. Let $\pi:Y\to X$ be the minimal resolution of $P$. 
Then we have the following commutative diagram $$ \xymatrix{ X' \ar[dr]^-p\ar[d]_-q& \\ Y \ar[r]_-\pi& X, } $$ where $p:X'\to X$ is a projective birational morphism from a smooth surface $X'$ constructed in Step \ref{a-step1} by using Theorem \ref{u-thm2.12}. Let $\Delta_{\varepsilon}$ be an effective $\mathbb R$-divisor on $X$ as in Step \ref{a-step1}. We put $\pi^*(K_X+\Delta_{\varepsilon})=K_Y+\Delta^Y_{\varepsilon}$. We note that $\Delta^Y_{\varepsilon}$ is effective since $\pi$ is the minimal resolution of $P$. By construction, $\pi$ is an isomorphism outside $\pi^{-1}(P)$. In particular, $\pi$ is an isomorphism over some open neighborhood of $\Nqklt(X, \omega)$. Therefore, $\mathcal J(Y, \Delta^Y_{\varepsilon})=\mathcal I_{\pi^{-1} (\Nqklt(X, \omega))}$ holds since $\mathcal J(X, \Delta_{\varepsilon})=\mathcal I_{\Nqklt(X, \omega)}$, where $\mathcal I_{\pi^{-1}(\Nqklt(X, \omega))}$ is the defining ideal sheaf of $\pi^{-1}(\Nqklt(X, \omega))$. Since $(\pi^*N)^2>4$, we can take an effective $\mathbb R$-divisor $B$ on $X$ such that $B\sim _{\mathbb R}N$ and $\mult_Q D>2$, where $D=\pi^*B$, for some $Q\in \pi^{-1}(P)$. We put $D_{\varepsilon}:=(1-\varepsilon)D$ and $B_{\varepsilon}:=(1-\varepsilon)B$ and define $$ s_\varepsilon=\max\{t\geq 0\, |\, (Y, \Delta^Y_{\varepsilon}+tD_{\varepsilon}) \ \text{is log canonical at any point of $\pi^{-1}(P)$}\}. $$ Then we have $0<s_\varepsilon<1$ since $\mult_QD_{\varepsilon}>2$ for $0<\varepsilon \ll 1$. Therefore, we can take $Q_{\varepsilon}\in \pi^{-1}(P)$ such that $(Y, \Delta^Y_{\varepsilon}+s_{\varepsilon}D_{\varepsilon})$ is log canonical but is not klt at $Q_{\varepsilon}$. As in Step \ref{a-step1}, we may assume that $\Supp B_{X'}\cup \Supp p^{-1}_*B\cup \Exc(p)$ is contained in a simple normal crossing divisor $\Sigma$ on $X'$. 
By the same argument as in Step \ref{a-step1}, we can take some point $R$ on $\pi^{-1}(P)$, $\{\varepsilon_i\}_{i=1}^\infty$, and $\delta>0$ such that $0<\varepsilon _i \ll 1$, $\varepsilon _i\searrow 0$ for $i\nearrow \infty$, $\mathcal J(Y, \Delta^Y_{\varepsilon_i}) =\mathcal I_{\pi^{-1}(\Nqklt(X, \omega))}$, $(Y, \Delta^Y_{\varepsilon_i}+s_{\varepsilon _i}D_{\varepsilon_i})$ is log canonical at $R$ but is not klt at $R$ with $\delta<s_{\varepsilon_i}<1$ for every $i$ since there are only finitely many components of $\Sigma$. By $q:X'\to Y$, we have a natural quasi-log structure on $[Y, \omega^Y_\varepsilon]$ with $\omega^Y_{\varepsilon}:=K_Y+\Delta^Y_{\varepsilon}+ s_{\varepsilon}D_\varepsilon$ for any $\varepsilon =\varepsilon_i$ (see Example \ref{u-ex2.7}). If there is a one-dimensional qlc center $C$ of $[Y, \omega^Y_{\varepsilon}]$ for some $\varepsilon$ with $(\pi^*M-\omega^Y_{\varepsilon})\cdot C=0$, then $C\subset \pi^{-1}(P)$. This is because $$(\pi^*M-\omega^Y_{\varepsilon})\cdot C =(1-s_{\varepsilon})(1-\varepsilon)N\cdot \pi_*C=0. $$ This means that $P$ is a qlc center of $[X, \omega_{\varepsilon}]$, where $\omega_{\varepsilon}:=K_X+\Delta_{\varepsilon}+s_{\varepsilon}B_\varepsilon$. In this case, we can use Case \ref{a-case1} in Step \ref{a-step1}. Therefore, for any $\varepsilon=\varepsilon_i$, we may assume that $(\pi^*M-\omega^Y_{\varepsilon})\cdot C>0$ for every one-dimensional qlc center $C$ of $[Y, \omega^Y_{\varepsilon}]$. Now we can apply the arguments for $[X, \omega_{\varepsilon}]$ and $M$ in Step \ref{a-step1} to $[Y, \omega^Y_{\varepsilon}]$ and $\pi^*M$ here. We note that $\pi^*M-\omega^Y_{\varepsilon}$ is not ample but is nef and log big with respect to $[Y, \omega^Y_{\varepsilon}]$. Thus we can use Theorem \ref{u-thm2.8} (ii). Then we obtain $s^Y\in H^0(Y, \mathcal I_{\pi^{-1}(\Nqklt(X, \omega))} \otimes \mathcal O_Y(\pi^*M))$ such that $s^Y(R)\ne 0$. 
Therefore, we have $s\in H^0(X, \mathcal I_{\Nqklt(X, \omega)}\otimes \mathcal O_X(M))$ such that $\pi^*s=s^Y$. In particular, $s(P)\ne 0$. This is what we wanted. \end{step} Anyway, we finish the proof of Theorem \ref{a-thm3.2}. \end{proof} Now the proof of Corollary \ref{a-cor1.5} is easy. \begin{cor}[Corollary \ref{a-cor1.5}]\label{a-cor3.3} Conjecture \ref{a-conj1.2} is true in dimension two. \end{cor} \begin{proof} Let $P$ be an arbitrary closed point of $X$ and let $W$ be the unique minimal qlc stratum of $[X, \omega]$ passing through $P$. Note that $W$ is irreducible by definition. By adjunction (see Theorem \ref{u-thm2.8} (i)), $[W, \omega|_W]$ is an irreducible quasi-log canonical pair. By Theorem \ref{u-thm2.8} (ii), the natural restriction map \begin{equation}\label{a-eq3.2} H^0(X, \mathcal O_X(M))\to H^0(W, \mathcal O_W(M)) \end{equation} is surjective. From now on, we will see that $|M|$ is basepoint-free in a neighborhood of $P$. If $W=P$, that is, $P$ is a qlc center of $[X, \omega]$, then the complete linear system $|M|$ is obviously basepoint-free in a neighborhood of $P$ by the surjection \eqref{a-eq3.2}. Let us consider the case where $\dim W=1$. We put $M'=M|_W$ and $N'=N|_W=M'-\omega|_W$. Then $\deg N'=N\cdot W >1$ by assumption. Therefore, by Theorem \ref{a-thm3.1}, $|M'|$ is basepoint-free at $P$ because $[W, \omega|_W]$ is an irreducible projective quasi-log canonical curve. Therefore, by the surjection \eqref{a-eq3.2}, we see that $|M|$ is basepoint-free in a neighborhood of $P$. Thus we may assume that $\dim W=\dim X=2$ and $X$ is irreducible by replacing $X$ with $W$ since the restriction map \eqref{a-eq3.2} is surjective. Therefore, we can assume that $X$ is irreducible and that $X$ is the unique qlc stratum of $[X, \omega]$ passing through $P$. In particular, $X$ is normal at $P$ (see, for example, \cite[Theorem 6.3.11 (ii)]{fujino-foundations}). Let $\nu: \widetilde{X}\to X$ be the normalization. 
Note that $[\widetilde{X}, \nu^*\omega]$ is a qlc pair by Theorem \ref{u-thm2.11}. We put $\widetilde{M}=\nu^*M$ and $\widetilde{N}=\nu^*N=\widetilde{M}-\nu^*\omega$. It is obvious that $\widetilde{M}$ is Cartier. Moreover, we have $(\widetilde{N})^2=N^2> 4$ and $\widetilde{N}\cdot Z\geq N \cdot \nu(Z)\geq 2$ for every curve $Z$ on $\widetilde{X}$. Note that $\dim \nu(Z)=\dim Z=1$ since $\nu$ is finite. We also note that $P':=\nu^{-1}(P)$ is a point since $\nu: \widetilde{X}\to X$ is an isomorphism over some open neighborhood of $P$. This is because $X$ is normal at $P$. Now the assumptions of Theorem \ref{a-thm3.2} are all satisfied. Therefore, there is a section $s' \in H^0(\widetilde {X}, \mathcal I_{\Nqklt(\widetilde{X}, \nu^*\omega)} \otimes \mathcal O_{\widetilde{X}}(\widetilde{M}))$ such that $s'(P')\neq 0$. We note that the non-normal part of $X$ is contained in $\Nqklt(X, \omega)$ (see, for example, \cite[Theorem 6.3.11 (ii)]{fujino-foundations}) and that the equality $$ \nu_*\mathcal I_{\Nqklt(\widetilde{X}, \nu^*\omega)}= \mathcal I_{\Nqklt(X, \omega)} $$ holds by Theorem \ref{u-thm2.11}. Therefore, we have $$ H^0(\widetilde{X}, \mathcal I_{\Nqklt(\widetilde{X}, \nu^*\omega) }\otimes \mathcal O_{\widetilde{X}}(\widetilde{M})) \simeq H^0(X,\mathcal I_{\Nqklt(X, \omega)} \otimes \mathcal O_{X}(M)). $$ Thus we can descend the section $s'$ on $\widetilde X$ to a section $s\in H^0(X, \mathcal I_{\Nqklt(X, \omega)} \otimes \mathcal O_X(M))$ with $s(P)\neq 0$. Therefore, by this section $s \in H^0(X, \mathcal O_{X}(M))$, we see that $|M|$ is basepoint-free in a neighborhood of $P$. This is what we wanted. \end{proof} We close this section with the proof of Corollary \ref{a-cor1.6}. \begin{proof}[Proof of Corollary \ref{a-cor1.6}] Let $(X, \Delta)$ be a projective semi-log canonical pair. Then, by Theorem \ref{u-thm2.10}, $[X, K_X+\Delta]$ is a quasi-log canonical pair. 
Therefore, Corollary \ref{a-cor1.6} is a direct consequence of Theorem \ref{a-thm1.3} and Corollary \ref{a-cor1.5}. \end{proof}
\section{Introduction} In this work we study the existence of solutions to the following concave-convex critical problem involving the spectral fractional Laplacian, \begin{equation}\label{p_lambda}\tag{$P_\lambda$} \left\{ \begin{tabular}{lcl} $(-\Delta)^su=\lambda u^q+u^{2_s^*-1}$, & $u>0$ &in $\Omega$, \\ $\mkern+21muB(u)=0$& &on $\partial\Omega$, \end{tabular} \right. \end{equation} where $\frac{1}{2}<s<1$, $0<q<2_s^*-1$ with $q\neq 1$, $2_s^*=\frac{2N}{N-2s}$ is the critical fractional Sobolev exponent, $\lambda>0$, and $\Omega\subset\mathbb{R}^N$ is a smooth bounded domain with mixed boundary conditions \begin{equation}\label{mixed} B(u)=u\rchi_{\Sigma_{\mathcal{D}}}+\frac{\partial u}{\partial \nu}\rchi_{\Sigma_{\mathcal{N}}}, \end{equation} where $\nu$ is the outward normal to $\partial\Omega$ and $\chi_A$ stands for the characteristic function of a set $A$. Here $\Sigma_{\mathcal{D}}$ and $\Sigma_{\mathcal{N}}$ are smooth $(N-1)$-dimensional submanifolds of $\partial\Omega$ such that $\Sigma_{\mathcal{D}}$ is a closed submanifold of $\partial\Omega$ with measure $|\Sigma_{\mathcal{D}}|=\alpha\in(0,|\partial\Omega|)$, $\Sigma_{\mathcal{D}}\cap\Sigma_{\mathcal{N}}=\emptyset$, $\Sigma_{\mathcal{D}}\cup\Sigma_{\mathcal{N}}=\partial\Omega$, and $\Sigma_{\mathcal{D}}\cap\overline{\Sigma}_{\mathcal{N}}=\Gamma$ is a smooth $(N-2)$-dimensional submanifold. The range $\frac 12<s<1$ is the appropriate one for mixed boundary problems due to the natural embedding of the associated functional space; see Remark \ref{rem:rango_s}.\newline Concave-convex critical problems are nowadays a well-known topic in the field of nonlinear PDEs, as they have been broadly studied since the works of Brezis and Nirenberg (cf. \cite{Brezis1983a}) and Ambrosetti, Brezis and Cerami (cf. \cite{Ambrosetti1994}). The seminal paper \cite{Brezis1983a} deals with critical elliptic problems with Dirichlet boundary data for the classical Laplace operator $(s=1)$ and the exponent $q=1$.
The authors introduced the main ideas to prove existence of solutions to critical problems with lower order perturbation terms. In \cite{Ambrosetti1994}, the authors analyzed the main effects that concave-convex nonlinearities, $f_{\lambda}(u)=\lambda u^q+u^p$, $0<q<1<p\leq 2^*-1=\frac{N+2}{N-2}$, have on the existence and multiplicity of solutions. Problems similar to \eqref{p_lambda} have also been studied for the $p$-Laplace operator (cf. \cite{Azorero1991}) and for fully nonlinear operators (cf. \cite{Charro2009}), in both cases with Dirichlet boundary data. Results along these lines also hold when one considers the classical Laplace operator endowed with mixed Dirichlet-Neumann boundary data. Lions, Pacella and Tricarico (cf. \cite{Lions1988}) analyzed the pure critical power problem $(\lambda=0)$ and the attainability of the associated Sobolev constant. In \cite{Grossi1990}, the corresponding mixed Brezis-Nirenberg problem was studied. Mixed concave-convex problems were addressed in \cite{Colorado2003} by considering subcritical nonlinearities and in \cite{Abdellaoui2006} by considering critical problems involving Caffarelli-Kohn-Nirenberg weights. Regarding the fractional setting, the aim of this work is to extend to the mixed boundary data framework the results of \cite{Barrios2012}, where the Dirichlet problem ($|\Sigma_{\mathcal{N}}|=0$) corresponding to \eqref{p_lambda} was studied. Concave-convex critical fractional problems dealing with a different fractional operator, defined through a singular integral, were studied in \cite{Barrios2015}. Fractional and nonlocal problems involving more general kernels and critical nonlinearities were studied in \cite{MolicaBisci2015, Servadei2013, Servadei2013a, Servadei2014, Servadei2015, Servadei2016}. Coming back to problem \eqref{p_lambda}, using a generalized Pohozaev identity (cf.
\cite{Pohozaev1965}) it can be seen that, under Dirichlet boundary conditions ($|\Sigma_{\mathcal{N}}|=0$), problem \eqref{p_lambda} has no solution for $\lambda=0$ and $\Omega$ a star-shaped domain (cf. \cite{Braendle2013}). Similar non-existence results based on Pohozaev-type identities for mixed problems can be found in \cite{Lions1988} and \cite{Colorado2019}. Nevertheless, mixed boundary critical problems behave quite differently from critical Dirichlet problems and, taking the Dirichlet boundary part small enough, one can prove the existence of a positive solution for the pure critical problem corresponding to $\lambda=0$ (cf. \cite{Lions1988}, \cite[Theorem 2.1]{Abdellaoui2006} and \cite[Theorem 2.9]{Colorado2019}). Indeed, \eqref{p_lambda} with $\lambda\geq0$ and the exponent $q=1$ was analyzed in \cite{Colorado2019}, where the following was proved. \begin{theorem}\cite[Theorem 1.1]{Colorado2019}\label{fractBN} Assume that $q=1$, $\frac 12<s<1$ and $N\geq 4s$. Let $\lambda_{1,s}$ be the first eigenvalue of the fractional operator $(-\Delta)^s$ with mixed Dirichlet-Neumann boundary conditions \eqref{mixed}. Then, the problem \eqref{p_lambda} \begin{enumerate} \item has no solution for $\lambda\geq\lambda_{1,s}$, \item has at least one solution for $0<\lambda<\lambda_{1,s}$, \item has at least one solution for $\lambda=0$ and $|\Sigma_{\mathcal{D}}|$ small enough. \end{enumerate} \end{theorem} Our aim is then to obtain existence results for problem \eqref{p_lambda} for the whole range of exponents $0<q<2_s^*-1,\, q\neq1$. Precisely, we will prove the following two main results. \begin{theorem}\label{sublinear} Let $0<q<1$ and $\frac 12<s<1$. Then, there exists $0<\Lambda<\infty$ such that the problem \eqref{p_lambda} \begin{enumerate} \item has no solution for $\lambda>\Lambda$, \item has a minimal solution for any $0<\lambda<\Lambda$.
Moreover, the family of minimal solutions is increasing with respect to $\lambda$, \item has at least one solution for $\lambda=\Lambda$, \item has at least two solutions for $0<\lambda<\Lambda$. \end{enumerate} \end{theorem} \begin{theorem}\label{superlinear} Let $1<q<2_s^*-1$, $\frac{1}{2}<s<1$ and $N>2s\left(1+\frac{1}{q}\right)$. Then, the problem \eqref{p_lambda} has at least one solution for any $\lambda>0$. \end{theorem} The proof of Theorem \ref{sublinear} follows from by now well-known arguments. The existence of a positive minimal solution follows by using sub- and supersolution, comparison and iterative arguments. To prove the existence of a second positive solution we will need to use a recently proved Strong Maximum Principle for mixed fractional problems (cf. \cite{Ortega2021}), from which we will obtain a separation result (see Lemma \ref{lem:separation} below) implying that the minimal solution is indeed a minimum of the energy functional associated to \eqref{p_lambda}. This step is fundamental to prove \textit{(4)} in Theorem \ref{sublinear} since it allows us to use a Mountain Pass type argument. Due to the lack of compactness of the Sobolev embedding at the critical exponent $2_s^*$, we next prove that a local Palais-Smale (PS) condition holds below a certain critical level $c_{\mathcal{D}-\mathcal{N}}^*$. We conclude by constructing paths whose energy is below the critical level $c_{\mathcal{D}-\mathcal{N}}^*$. At this point two situations may occur, since the mixed pure critical problem can have a solution when the Dirichlet boundary part is small enough (Theorem \ref{fractBN}-\textit{(3)} above). If the Sobolev constant associated to \eqref{p_lambda} (see Definition \ref{defi_sob_const} below) is attained, we use the associated extremal functions to find paths with energy below the critical level $c_{\mathcal{D}-\mathcal{N}}^*$.
Otherwise, this step is accomplished by the use of appropriate truncations of the extremal functions of the fractional Sobolev inequality.\newline Most of the arguments of the concave case $0<q<1$ also work for the convex case $q>1$, so we will only indicate the main steps to prove Theorem \ref{superlinear}. \textbf{Organization of the paper:} In Section \ref{Functionalsetting} we introduce the appropriate functional setting and some results on a Sobolev-like constant associated to \eqref{p_lambda} that will be useful in the sequel. In Section \ref{Section_concave} we address the proof of Theorem \ref{sublinear}. We finish with the proof of Theorem \ref{superlinear} in Section \ref{Section_convex}. \section{Functional setting and preliminaries}\label{Functionalsetting} As far as the fractional Laplace operator is concerned, we recall its definition given through the spectral decomposition. Let $(\varphi_i,\lambda_i)$ be the eigenfunctions (normalized with respect to the $L^2(\Omega)$-norm) and the eigenvalues of $(-\Delta)$ equipped with homogeneous mixed Dirichlet--Neumann boundary data, respectively. Then, $(\varphi_i,\lambda_i^s)$ are the eigenfunctions and eigenvalues of the fractional operator $(-\Delta)^s$, where, given $\displaystyle u_i(x)=\sum_{j\geq1}\langle u_i,\varphi_j\rangle\varphi_j$, $i=1,2$, it holds that \begin{equation*} \langle(-\Delta)^s u_1, u_2\rangle=\sum_{j\ge 1} \lambda_j^s\langle u_1,\varphi_j\rangle \langle u_2,\varphi_j\rangle, \end{equation*} i.e., the action of the fractional operator on a smooth function $u$ is given by \begin{equation*} (-\Delta)^su=\sum_{j\ge 1} \lambda_j^s\langle u,\varphi_j\rangle\varphi_j.
\end{equation*} As a consequence, the fractional Laplace operator $(-\Delta)^s$ is well defined through its spectral decomposition in the following space of functions that vanish on $\Sigma_{\mathcal{D}}$, \begin{equation*} H_{\Sigma_{\mathcal{D}}}^s(\Omega)=\left\{u=\sum_{j\ge 1} a_j\varphi_j\in L^2(\Omega):\ u=0\ \text{on }\Sigma_{\mathcal{D}},\ \|u\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^2= \sum_{j\ge 1} a_j^2\lambda_j^s<\infty\right\}. \end{equation*} For $u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$, it follows that $\displaystyle \|u\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}=\left\|(-\Delta)^{\frac{s}{2}}u\right\|_{L^2(\Omega)}$. \begin{remark}\label{rem:rango_s} As proved in \cite[Theorem 11.1]{Lions1972}, if $0<s\le \frac{1}{2}$ then $H_0^s(\Omega)=H^s(\Omega)$ and, therefore, also $H_{\Sigma_{\mathcal{D}}}^s(\Omega)=H^s(\Omega)$, while for $\frac 12<s<1$, $H_0^s(\Omega)\subsetneq H^s(\Omega)$. Hence, the range $\frac 12<s<1$ guarantees that $H_{\Sigma_{\mathcal{D}}}^s(\Omega)\subsetneq H^s(\Omega)$ and it provides us with the appropriate functional space for the mixed boundary problem \eqref{p_lambda}. \end{remark} This definition of the fractional powers of the Laplace operator allows us to integrate by parts in the proper spaces, so that a natural definition of weak solution to problem \eqref{p_lambda} is the following. \begin{definition} We say that $u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ is a weak solution to \eqref{p_lambda} if \begin{equation}\label{energy_sol} \int_{\Omega}(-\Delta)^{s/2}u(-\Delta)^{s/2}\psi dx=\int_{\Omega}\left(\lambda u^q+u^{2_s^*-1}\right)\psi dx,\ \ \text{for all}\ \psi\in H_{\Sigma_{\mathcal{D}}}^s(\Omega).
\end{equation} \end{definition} The right-hand side of \eqref{energy_sol} is well defined because of the embedding $H_{\Sigma_{\mathcal{D}}}^s(\Omega)\hookrightarrow L^{2_s^*}(\Omega)$: if $u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$, then $\lambda u^q+u^{2_s^*-1}\in L^{\frac{2N}{N+2s}}(\Omega)\hookrightarrow \left(H_{\Sigma_{\mathcal{D}}}^s(\Omega)\right)'$.\newline The energy functional associated to problem \eqref{p_lambda} is \begin{equation}\label{energy_functional} I_\lambda(u)=\frac{1}{2}\int_{\Omega}|(-\Delta)^{s/2}u|^2dx-\frac{\lambda}{q+1}\int_{\Omega}|u|^{q+1}dx-\frac{N-2s}{2N}\int_{\Omega}|u|^{\frac{2N}{N-2s}}dx. \end{equation} $I_\lambda$ is well defined in $H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ and positive critical points of $I_\lambda$ correspond to solutions of \eqref{p_lambda}.\newline Due to the nonlocal nature of the fractional operator $(-\Delta)^s$, some difficulties arise when one tries to obtain an explicit expression of the action of the fractional Laplacian on a given function. In order to overcome these difficulties, we use the ideas of Caffarelli and Silvestre (see \cite{Caffarelli2007}) together with those of \cite{Braendle2013,Cabre2010,Capella2011} to give an equivalent definition of the operator $(-\Delta)^s$ by means of an auxiliary problem that we introduce next. Given a domain $\Omega \subset \mathbb{R}^N$, we set the cylinder $\mathscr{C}_{\Omega}=\Omega\times(0,\infty)\subset\mathbb{R}_+^{N+1}$. We denote by $(x,y)$ the points belonging to $\mathscr{C}_{\Omega}$ and by $\partial_L\mathscr{C}_{\Omega}=\partial\Omega\times[0,\infty)$ the lateral boundary of the cylinder. Let us also denote $\Sigma_{\mathcal{D}}^*=\Sigma_{\mathcal{D}}\times[0,\infty)$, $\Sigma_{\mathcal{N}}^*=\Sigma_{\mathcal{N}}\times[0,\infty)$ and $\Gamma^*=\Gamma\times[0,\infty)$.
It is clear that, by construction, \begin{equation*} \Sigma_{\mathcal{D}}^*\cap\Sigma_{\mathcal{N}}^*=\emptyset\,, \quad \Sigma_{\mathcal{D}}^*\cup\Sigma_{\mathcal{N}}^*=\partial_L\mathscr{C}_{\Omega} \quad \mbox{and} \quad \Sigma_{\mathcal{D}}^*\cap\overline{\Sigma_{\mathcal{N}}^*}=\Gamma^*\,. \end{equation*} Then, given a function $u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$, we define its $s$-harmonic extension, $w(x,y)=E_{s}[u(x)]$, as the solution to the problem \begin{equation*} \left\{ \begin{array}{rlcl} -\text{div}(y^{1-2s}\nabla w )&\!\!\!\!=0 & & \mbox{ in } \mathscr{C}_{\Omega} , \\ B(w )&\!\!\!\!=0 & & \mbox{ on } \partial_L\mathscr{C}_{\Omega} , \\ w(x,0)&\!\!\!\!=u(x) & & \mbox{ on } \Omega\times\{y=0\} , \end{array} \right. \end{equation*} where \begin{equation*} B(w)=w\rchi_{\Sigma_{\mathcal{D}}^*}+\frac{\partial w}{\partial \nu}\rchi_{\Sigma_{\mathcal{N}}^* }, \end{equation*} and $\nu$ denotes, with an abuse of notation\footnote{Let $\nu$ be the outward normal to $\partial\Omega$ and $\nu_{(x,y)}$ the outward normal to $\mathscr{C}_{\Omega}$; then $\nu_{(x,y)}=(\nu,0)$, $y>0$.}, the outward normal to $\partial_L\mathscr{C}_{\Omega}$. The extension function belongs to the space \begin{equation*} \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega}) : =\overline{\mathcal{C}_{0}^{\infty} ((\Omega\cup\Sigma_{\mathcal{N}})\times[0,\infty))}^{\|\cdot\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}}, \end{equation*} where we define \begin{equation}\label{norma} \|\cdot\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2:=\kappa_s\int_{\mathscr{C}_{\Omega}}\mkern-5mu y^{1-2s} |\nabla (\cdot)|^2dxdy, \end{equation} with $\kappa_s=2^{2s-1}\frac{\Gamma(s)}{\Gamma(1-s)}$, where $\Gamma$ denotes the Gamma function.
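As an illustrative check (not part of the original argument), in the classical case $s=\frac12$ the normalizing constant reduces to $1$, since $\Gamma\left(\frac12\right)=\sqrt{\pi}$:

```latex
% Illustrative computation of the normalizing constant for s = 1/2,
% using Gamma(1/2) = sqrt(pi):
\kappa_{1/2}
  = 2^{2\cdot\frac{1}{2}-1}\,
    \frac{\Gamma\left(\frac{1}{2}\right)}{\Gamma\left(1-\frac{1}{2}\right)}
  = 2^{0}\cdot\frac{\sqrt{\pi}}{\sqrt{\pi}}
  = 1.
```

Accordingly, for $s=\frac12$ the weight $y^{1-2s}$ is identically $1$, so the $s$-harmonic extension is the classical harmonic extension and the conormal derivative reduces to $-\partial_y w|_{y=0}$, the usual Dirichlet-to-Neumann map (cf. \cite{Caffarelli2007}).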
The space $\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ is a Hilbert space equipped with the norm $\|\cdot\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}$, which is induced by the scalar product \begin{equation*} \langle w, z \rangle_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}=\kappa_s \int_{\mathscr{C}_{\Omega}}y^{1-2s} \langle\nabla w,\nabla z\rangle dxdy. \end{equation*} Moreover, the following inclusions are satisfied, \begin{equation} \label{embedd} \mathcal{X}_0^s(\mathscr{C}_{\Omega}) \subsetneq \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega}) \subsetneq \mathcal{X}^s(\mathscr{C}_{\Omega}), \end{equation} where $\mathcal{X}_0^s(\mathscr{C}_{\Omega})$ is the space of functions belonging to $\mathcal{X}^s(\mathscr{C}_{\Omega})\equiv H^1(\mathscr{C}_{\Omega},y^{1-2s}dxdy)$ that vanish on the lateral boundary $\partial_L\mathscr{C}_{\Omega}$. The key property of the extension function is that it is related to the fractional Laplacian of the original function through the formula \begin{equation*} \frac{\partial w}{\partial \nu^s}:= -\kappa_s \lim_{y\to 0^+} y^{1-2s}\frac{\partial w}{\partial y}=(-\Delta)^su(x). \end{equation*} By the above arguments, we can reformulate problem \eqref{p_lambda} in terms of the extension problem as follows \begin{equation}\label{extension_problem} \left\{ \begin{array}{rlcl} -\text{div}(y^{1-2s}\nabla w )&\!\!\!\!=0 & & \mbox{ in } \mathscr{C}_{\Omega} , \\ B(w)&\!\!\!\!=0 & & \mbox{ on } \partial_L\mathscr{C}_{\Omega} , \\ \frac{\partial w}{\partial \nu^s}&\!\!\!\!=\lambda w^q+w^{2_s^*-1} & & \mbox{ on } \Omega\times\{y=0\}. \end{array} \right.
\tag{$P_{\lambda}^*$} \end{equation} A weak solution to \eqref{extension_problem} is a function $w\in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ such that, for all $\varphi\in\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$, \begin{equation*} \kappa_s\int_{\mathscr{C}_{\Omega}} y^{1-2s}\langle\nabla w,\nabla\varphi \rangle dxdy=\int_{\Omega} \left(\lambda w^q(x,0)+w^{2_s^*-1}(x,0)\right)\varphi(x,0)dx. \end{equation*} Given a solution $w\in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ to problem \eqref{extension_problem}, the function $u(x)=Tr[w](x)=w(x,0)$ belongs to $H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ and is a solution to problem \eqref{p_lambda}; vice versa, if $u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ is a solution to \eqref{p_lambda}, then $w=E_s[u]\in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ is a solution to \eqref{extension_problem}. Thus, both formulations are equivalent and the {\it extension operator} $$ E_s: H_{\Sigma_{\mathcal{D}}}^s(\Omega) \to \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega}), $$ allows us to pass from one formulation to the other. Moreover, according to \cite{Braendle2013, Caffarelli2007}, due to the choice of the constant $\kappa_s$, the extension operator $E_s$ is an isometry, i.e., \begin{equation}\label{norma2} \|E_s[\varphi] \|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}= \|\varphi \|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}\quad \text{for all}\ \varphi\in H_{\Sigma_{\mathcal{D}}}^s(\Omega). \end{equation} Finally, the energy functional associated to problem $(P_{\lambda}^*)$ is \begin{equation}\label{extensionfunctional} J_\lambda(w)=\frac{\kappa_s}{2}\int_{\mathscr{C}_{\Omega}}y^{1-2s}|\nabla w|^2dxdy-\frac{\lambda}{q+1}\int_{\Omega}|w|^{q+1}dx-\frac{N-2s}{2N}\int_{\Omega}|w|^{2_s^*}dx.
\end{equation} Plainly, (positive) critical points of $J_\lambda$ in $\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ correspond to (positive) critical points of $I_\lambda$ in $H_{\Sigma_{\mathcal{D}}}^s(\Omega)$. Moreover, minima of $J_\lambda$ also correspond to minima of $I_\lambda$. The proof of this fact is similar to that of the Dirichlet case (cf. \cite[Proposition 3.1]{Barrios2012}).\newline When one considers Dirichlet boundary conditions, the following \textit{trace inequality} holds (cf. \cite[Theorem 4.4]{Braendle2013}): there exists $C=C(N,s,r,|\Omega|)>0$ such that, for all $z\in\mathcal{X}_0^s(\mathscr{C}_{\Omega})$, \begin{equation}\label{sobext} \kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}|\nabla z(x,y)|^2dxdy\geq C\left(\int_{\Omega}|z(x,0)|^rdx\right)^{\frac{2}{r}}, \end{equation} for $1\leq r\leq 2_s^*,\ N>2s$. Because of \eqref{norma2}, the trace inequality \eqref{sobext} is equivalent to the fractional Sobolev inequality, \begin{equation}\label{sobolev} C\left(\int_{\Omega}|v|^rdx\right)^{\frac{2}{r}}\leq \int_{\Omega}|(-\Delta)^{\frac{s}2}v|^2dx \qquad\text{for all } v\in H_{0}^s(\Omega),\ 1\leq r\leq 2^*_s ,\ N>2s. \end{equation} If $r=2_s^*$, the best constant in \eqref{sobolev} (and, thanks to \eqref{norma2}, in \eqref{sobext}), namely the fractional Sobolev constant, denoted by $S(N,s)$, is independent of the domain $\Omega$ and its exact value is given by \begin{equation*} S(N,s)=2^{2s}\pi^s\frac{\Gamma\left(\frac{N+2s}{2}\right)}{\Gamma\left(\frac{N-2s}{2}\right)}\left(\frac{\Gamma(\frac{N}{2})}{\Gamma(N)}\right)^{\frac{2s}{N}}.
\end{equation*} Although it is not achieved in any bounded domain, in the half-space we have \begin{equation*} \kappa_s\int_{\mathbb{R}_{+}^{N+1}}\!\!y^{1-2s}|\nabla z(x,y)|^2dxdy\geq S(N,s)\left(\int_{\mathbb{R}^N}|z(x,0)|^{\frac{2N}{N-2s}}dx\right)^{\frac{N-2s}{N}} \!,\ \forall z\in \mathcal{X}^s(\mathbb{R}_{+}^{N+1}), \end{equation*} where $\mathcal{X}^s(\mathbb{R}_{+}^{N+1})=\overline{\mathcal{C}^{\infty}(\mathbb{R}^N\times[0,\infty))}^{\|\cdot\|_{\mathcal{X}^s(\mathbb{R}_{+}^{N+1})}}$, with $\|\cdot\|_{\mathcal{X}^s(\mathbb{R}_{+}^{N+1})}$ defined as in \eqref{norma} replacing $\mathscr{C}_\Omega$ by $\mathbb{R}_{+}^{N+1}$. Indeed, in the whole space this inequality is achieved by the family $w_{\varepsilon}= E_s[u_{\varepsilon}]$, \begin{equation}\label{eq:sob_extremal} u_{\varepsilon}(x)=\frac{\varepsilon^{\frac{N-2s}{2}}}{(\varepsilon^2+|x|^2)^{\frac{N-2s}{2}}}, \end{equation} with arbitrary $\varepsilon>0$ (cf. \cite{Braendle2013}). When mixed boundary conditions are considered, the setting is analogous, since the Dirichlet condition is imposed on a set $\Sigma_{\mathcal{D}} \subset \partial \Omega$ such that $0<|\Sigma_{\mathcal{D}}|<|\partial\Omega|$. \begin{definition}\label{defi_sob_const}The Sobolev constant relative to the Dirichlet boundary $\Sigma_{\mathcal{D}}$ is defined by \begin{equation*} \widetilde{S}(\Sigma_{\mathcal{D}})=\inf_{\substack{u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)\\ u\not\equiv 0}}\frac{\|u\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^2}{\|u\|_{L^{2_s^*}(\Omega)}^2}. \end{equation*} Since the extension function minimizes the $\|\cdot\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}$ norm among all functions with the same trace on $\{y=0\}$ (cf.
\cite[Lemma 2.4]{Colorado2019}), by \eqref{norma2}, the constant $\widetilde{S}(\Sigma_{\mathcal{D}})$ is equivalently defined by \begin{equation*} \widetilde{S}(\Sigma_{\mathcal{D}})=\inf_{\substack{w\in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})\\ w\not\equiv 0}}\frac{\|w\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2}{\|w(\cdot,0)\|_{L^{2_s^*}(\Omega)}^2}. \end{equation*} \end{definition} As mentioned above, since the Dirichlet condition is imposed on a set $\Sigma_{\mathcal{D}} \subset \partial \Omega$ such that $0<|\Sigma_{\mathcal{D}}|<|\partial\Omega|$, by the inclusions \eqref{embedd}, we have \begin{equation}\label{const} 0<\widetilde{S}(\Sigma_{\mathcal{D}})\vcentcolon=\inf_{\substack{u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)\\ u\not\equiv 0}}\frac{\|u\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^2}{ \|u\|_{L^{2_s^*}(\Omega)}^2} <\inf_{\substack{u\in H_{0}^s(\Omega)\\ u\not\equiv 0}}\frac{\|u\|_{H_{0}^s(\Omega)}^2}{\|u\|_{L^{2_s^*}(\Omega)}^2}. \end{equation} As we will see, the constant $\widetilde{S}(\Sigma_{\mathcal{D}})$ plays a main role in the existence theory for problem \eqref{p_lambda}: $\widetilde{S}(\Sigma_{\mathcal{D}})$ is to mixed problems what the Sobolev constant $S(N,s)$ is to Dirichlet problems. \begin{remark}\label{remark_att} Due to the spectral definition of the fractional operator, using H\"older's inequality we deduce $\widetilde{S}(\Sigma_{\mathcal{D}})\leq|\Omega|^{\frac{2s}{N}}\lambda_1^s(\alpha)$, with $\lambda_1(\alpha)$ the first eigenvalue of the Laplace operator endowed with mixed boundary conditions on the sets $\Sigma_{\mathcal{D}}=\Sigma_{\mathcal{D}}(\alpha)$ and $\Sigma_{\mathcal{N}}= \Sigma_{\mathcal{N}}(\alpha)$. Since $\lambda_1(\alpha)\to0$ as $\alpha\to0^+$ (cf. \cite[Lemma 4.3]{Colorado2003}), we have $\widetilde{S}(\Sigma_{\mathcal{D}})\to0$ as $\alpha\to0^+$.
\end{remark} Gathering \eqref{const} and \eqref{norma2}, it follows that, for all $\varphi \in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$, \begin{equation*} \widetilde{S}(\Sigma_{\mathcal{D}})\left(\int_\Omega |\varphi(x,0)|^{2^*_s} dx\right)^{\frac{2}{2^*_s}}\leq\|\varphi(\cdot,0)\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^2=\|E_s[\varphi(\cdot,0)]\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2. \end{equation*} This Sobolev--type inequality provides us with a trace inequality adapted to the mixed boundary data framework. \begin{lemma}\cite[Lemma 2.4]{Colorado2019}\label{lem:traceineq} For all $\varphi \in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$, we have \begin{equation*} \widetilde{S}(\Sigma_{\mathcal{D}})\left(\int_\Omega|\varphi(x,0)|^{2^*_s} dx\right)^{\frac{2}{2^*_s}}\leq\kappa_s\int_{\mathscr{C}_{\Omega}} y^{1-2s} |\nabla \varphi|^2 dxdy. \end{equation*} \end{lemma} Let us collect some results for $\widetilde{S}(\Sigma_{\mathcal{D}})$, proved in \cite{Colorado2019}, that will be useful in the sequel. \begin{proposition}\cite[Proposition 3.6]{Colorado2019}\label{prop_cota} If $\Omega$ is a smooth bounded domain, then $$\widetilde{S}(\Sigma_{\mathcal{D}})\leq 2^{\frac{-2s}{N}}S(N,s).$$ \end{proposition} \begin{theorem}\cite[Theorem 2.9]{Colorado2019}\label{th_att} If $\widetilde{S}(\Sigma_{\mathcal{D}})<2^{\frac{-2s}{N}}S(N,s)$ then $\widetilde{S}(\Sigma_{\mathcal{D}})$ is attained. \end{theorem} This result marks the difference between Dirichlet and mixed Dirichlet-Neumann problems. By taking $|\Sigma_{\mathcal{N}}|=0$ (Dirichlet case) and $\lambda=0$ in \eqref{p_lambda}, we obtain the pure critical power problem, which, under some geometrical assumptions on $\Omega$ (for instance, star-shapedness), has no positive solution, i.e., the Sobolev constant $S(N,s)$ is not attained (cf. \cite{Braendle2013, Pohozaev1965}).
Analogous non-existence results, based on a Pohozaev-type identity and star-shapedness assumptions, hold for mixed problems (cf. \cite{Lions1988,Colorado2019}). Nevertheless, in the mixed case, the corresponding Sobolev constant $\widetilde{S}(\Sigma_{\mathcal{D}})$ can be achieved thanks to Theorem \ref{th_att}. Moreover, bearing in mind Remark \ref{remark_att}, the hypothesis of Theorem \ref{th_att} can be fulfilled by making the size of the Dirichlet boundary part small enough (cf. \cite[Theorem 1.1-(3)]{Colorado2019}).\newline Finally, we state a concentration-compactness result adapted to our fractional setting with mixed boundary conditions, which will be useful in the sequel. First, we recall the concept of a tight sequence. \begin{definition} We say that a sequence $\{y^{1-2s}|\nabla w_n|^2\}_{n\in\mathbb{N}}\subset L^1(\mathscr{C}_{\Omega})$ is tight if, for any $\eta>0$, there exists $\rho>0$ such that \begin{equation*} \int_{\{y>\rho\}}\int_{\Omega}y^{1-2s}|\nabla w_n|^2dxdy\leq\eta,\ \forall n\in\mathbb{N}. \end{equation*} \end{definition} \begin{theorem}\cite[Theorem 4.4]{Colorado2019}\label{concompact} Let $\{w_n\}\subset\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ be a sequence converging weakly to $w$ in $\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ such that $\{y^{1-2s}|\nabla w_n|^2\}_{n\in\mathbb{N}}$ is tight.
Let us denote $u_n=Tr[w_n]$, $u=Tr[w]$ and let $\mu,\nu$ be two nonnegative measures such that, in the sense of measures, \begin{equation}\label{limitcompactness} \kappa_sy^{1-2s}|\nabla w_n|^2\to\mu\quad \text{and}\quad |u_n|^{2_s^*}\to\nu. \end{equation} Then there exist an at most countable set $\mathfrak{I}$ and points $\{x_i\}_{i\in \mathfrak{I}}\subset\overline{\Omega}$ such that \begin{enumerate} \item $\nu=|u|^{2_s^*}+\sum\limits_{i\in \mathfrak{I}}\nu_{i}\delta_{x_{i}},\ \nu_{i}>0,$ \item $\mu\geq\kappa_sy^{1-2s}|\nabla w|^2+\sum\limits_{i\in \mathfrak{I}}\mu_{i}\delta_{x_{i}},\ \mu_{i}>0,$ \item $\mu_i\geq \widetilde{S}(\Sigma_{\mathcal{D}})\nu_i^{\frac{2}{2_s^*}}$. \end{enumerate} \end{theorem} Using Theorem \ref{concompact} and the Brezis-Lieb Lemma (cf. \cite{Brezis1983}), the following can be proved. \begin{theorem}\cite[Theorem 4.5]{Colorado2019}\label{CCA} Let $w_m$ be a minimizing sequence for $\widetilde{S}(\Sigma_{\mathcal{D}})$. Then either $\{w_m\}$ is relatively compact or its weak limit is $w\equiv 0$. Moreover, in the latter case there exist a subsequence, still denoted by $w_{m}$, and a point $x_0\in\overline{\Sigma}_{\mathcal{N}}$ such that \begin{equation}\label{acumulacion} \kappa_sy^{1-2s}|\nabla w_m|^2\to \widetilde{S}(\Sigma_{\mathcal{D}})\delta_{x_0}\quad \text{and}\quad |u_m|^{2_s^*}\to\delta_{x_0}, \end{equation} with $u_m=Tr[w_m]$. \end{theorem} \section{Concave case: $0<q<1$}\label{Section_concave} In this section we prove Theorem \ref{sublinear}. The existence of a positive minimal solution and related results follow from by now well-known arguments, so we will be brief with the details. The existence of a second positive solution is obtained through the following scheme: first, we prove a separation result, deduced from a recent work of the author (cf. \cite{Ortega2021}), which implies that the minimal solution is a minimum of the energy functional $I_\lambda$.
Next, due to the lack of compactness of the Sobolev embedding at the critical exponent $2_s^*$, we prove that a local PS condition holds below the critical level $c_{\mathcal{D}-\mathcal{N}}^*=\frac{s}{N}[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}$. Finally, we construct paths below the critical level $c_{\mathcal{D}-\mathcal{N}}^*$. This is done either by using the extremal functions of the constant $\widetilde{S}(\Sigma_{\mathcal{D}})$ (in case this constant is attained) or, bearing in mind Theorems \ref{concompact} and \ref{CCA}, by concentrating the extremal functions of the Sobolev inequality at points on the Neumann boundary $\Sigma_{\mathcal{N}}$. Let us consider the following fractional mixed problem with a general nonlinearity, \begin{equation}\label{prob:general_f} \left\{ \begin{tabular}{lcl} $(-\Delta)^su=f(x,u),$ & $u>0$ &in $\Omega$, \\ $\mkern+22mu B(u)=0$& &on $\partial\Omega$. \end{tabular} \right. \tag{$P_{f}$} \end{equation} Note that \eqref{p_lambda} corresponds to \eqref{prob:general_f} for $f=f_\lambda(t)=\lambda|t|^{q-1}t+|t|^{2_s^*-2}t$. The associated energy functional, $I_f: H_{\Sigma_{\mathcal{D}}}^s(\Omega)\to\mathbb{R}$, is given by \begin{equation*} I_f(u)=\frac{1}{2}\int_{\Omega}|(-\Delta)^{s/2}u|^2dx-\int_{\Omega}F(u)dx, \end{equation*} where $F(t)=\int_0^t f(\tau)d\tau$ (resp. $F_\lambda(t)=\int_0^t f_\lambda(\tau)d\tau=\frac{\lambda}{q+1}|t|^{q+1}+\frac{1}{2_s^*}|t|^{2_s^*}$). The corresponding extension problem reads \begin{equation}\label{prob:general_f_extension} \left\{ \begin{array}{rlcl} \displaystyle -\text{div}(y^{1-2s}\nabla w)&\!\!\!\!=0 & & \mbox{ in } \mathcal{C}_{\Omega} , \\ \displaystyle B(w)&\!\!\!\!=0 & & \mbox{ on } \partial_L\mathcal{C}_{\Omega} , \\ w&\!\!\!\!>0 & &\mbox{ on } \Omega\times\{y=0\}, \\ \displaystyle \frac{\partial w}{\partial \nu^s}&\!\!\!\!= f(w) & & \mbox{ on } \Omega\times\{y=0\}. \end{array} \right.
\tag{$P_{f}^*$} \end{equation} The associated energy functional, $J_f: \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})\to\mathbb{R}$, is \begin{equation*} J_f(w)=\frac{\kappa_s}{2}\int_{\mathscr{C}_{\Omega}}y^{1-2s}|\nabla w|^2dxdy-\int_{\Omega}F(w(x,0))dx. \end{equation*} \begin{lemma} A function $u_0\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ is a local minimum of the energy functional $I_\lambda$ if and only if $w_0=E_s[u_0]\in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ is a local minimum of the energy functional $J_{\lambda}$. \end{lemma} \begin{proof} The proof follows exactly as that of \cite[Proposition 3.1]{Barrios2012}, so we omit the details. \end{proof} \begin{proposition}\label{prop:bound} Let $u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ be a solution to \eqref{prob:general_f} with the nonlinearity $f$ satisfying \begin{equation*} 0\leq f(x,t)\leq C(1+|t|^p)\quad\text{for all } (x,t)\in\Omega\times\mathbb{R}\ \text{and } 0< p\leq 2_s^*-1. \end{equation*} Then $u\in L^{\infty}(\Omega)$. \end{proposition} \begin{proof} Following verbatim the proof of \cite[Proposition 5.1]{Barrios2012}, by a truncation argument we deduce that $u^{\beta+1}\in L^{2_s^*}(\Omega)$ for any $\beta\geq0$. Then, by an iteration argument, we get $f(\cdot,u)\in L^r(\Omega)$, $r>\frac{N}{2s}$, after a finite number of steps. By \cite[Theorem 3.7]{Carmona2020}, we conclude $u\in L^{\infty}(\Omega)$. \end{proof} The next result deals with the sub- and supersolution method.
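Before stating it, let us sketch the monotone iteration that typically underlies this kind of result (an illustrative scheme only; the precise construction in \cite{Carmona2020a} may differ). Assuming $f$ is locally Lipschitz, one chooses $k>0$ so large that $t\mapsto f(t)+kt$ is nondecreasing on the range $[\inf w_1,\sup w_2]$ and, starting from $w^{(0)}=w_1$, defines $w^{(n+1)}$ as the solution to the linear problem

```latex
% Illustrative monotone iteration (schematic; the constant k and this
% exact scheme are not taken from the paper). Each step solves a linear
% extension problem with a Robin-type condition on the trace:
\left\{
\begin{array}{rlcl}
-\text{div}(y^{1-2s}\nabla w^{(n+1)})&\!\!\!\!=0 & & \mbox{ in } \mathscr{C}_{\Omega} , \\
B(w^{(n+1)})&\!\!\!\!=0 & & \mbox{ on } \partial_L\mathscr{C}_{\Omega} , \\
\dfrac{\partial w^{(n+1)}}{\partial \nu^s}+k\,w^{(n+1)}&\!\!\!\!=f(w^{(n)})+k\,w^{(n)} & & \mbox{ on } \Omega\times\{y=0\}.
\end{array}
\right.
```

By comparison, the iterates satisfy $w_1=w^{(0)}\leq w^{(1)}\leq\dots\leq w_2$, and the monotone limit solves \eqref{prob:general_f_extension}.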
\begin{lemma}\cite[Lemma 5.2]{Carmona2020a}\label{existencia} Suppose that there exist a subsolution $w_1$ and a supersolution $w_2$ to \eqref{prob:general_f_extension}, i.e., $w_1,w_2 \in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ such that, for any nonnegative $\phi \in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$, \begin{equation*} \begin{split} \kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}\nabla w_1\nabla \phi dxdy&\leq\int_{\Omega}f(w_1(x,0))\phi (x,0)dx\,, \\ \kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}\nabla w_2\nabla \phi dxdy&\geq\int_{\Omega}f(w_2(x,0))\phi (x,0) dx\,. \end{split} \end{equation*} \vskip -2pt \noindent Assume moreover that $w_1\leq w_2$ in $\mathscr{C}_{\Omega}$. Then, there exists a solution $w$ to problem \eqref{prob:general_f_extension} verifying $w_1\leq w \leq w_2$ in $\mathscr{C}_{\Omega}$. \end{lemma} Finally, we recall a comparison result. \begin{lemma}\cite[Lemma 5.3]{Carmona2020a}\label{orden} Let $w_1,w_2\in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ be respectively a positive subsolution and a positive supersolution to \eqref{prob:general_f_extension} and assume that $f(t)/t$ is decreasing for $t>0$. Then $w_1\leq w_2$ in $\mathscr{C}_{\Omega}$. \end{lemma} We now address the proof of Theorem \ref{sublinear}. \begin{lemma} \label{lem:Lambda} Let $\Lambda$ be defined by \begin{equation*} \Lambda=\sup\{\lambda>0:\eqref{p_lambda}\ \text{has a solution} \}. \end{equation*} \vskip -4pt \noindent Then $0<\Lambda<\infty$. \end{lemma} \vspace*{-4pt} \begin{proof} Let $(\lambda_1,\varphi_1)$ be the first eigenvalue and a corresponding positive eigenfunction of the Laplace operator in $\Omega$ with mixed boundary conditions, so that $(\lambda_1^s,\varphi_1)$ is the first eigenpair of $(-\Delta)^s$. Using $\varphi_1$ as a test function in problem \eqref{p_lambda}, we have \vskip -11pt \begin{equation}\label{eigen} \int_{\Omega}(\lambda u^q+u^{2_s^*-1})\varphi_1dx=\lambda_1^s\int_{\Omega}u\varphi_1dx.
\end{equation} Since there exist constants $c=c(N,s,q)<1$ and $\delta=\frac{2_s^*-2}{2_s^*-1-q}$ such that $\lambda t^q+t^{2_s^*-1}>c\lambda^{\delta}t$ for any $t>0$, from \eqref{eigen} we deduce $c\lambda^{\delta}<\lambda_1^s$ and hence $\Lambda<\infty$. In particular, this also proves that there is no solution to \eqref{p_lambda} for $\lambda>\Lambda$. In order to prove that $\Lambda>0$, we show, by means of the sub- and supersolution technique, the existence of a solution to \eqref{extension_problem} for any small positive $\lambda$. Indeed, for $\varepsilon>0$ small enough, $\underline{U}=\varepsilon E_s[\varphi_1]$ is a subsolution to \eqref{extension_problem}. Because of Proposition \ref{prop:bound}, a supersolution can be constructed as an appropriate multiple of the function $G$, the solution to \begin{equation*} \left\{ \begin{array}{rlcl} \displaystyle -\text{div}(y^{1-2s}\nabla G)&\!\!\!\!=0 & & \mbox{ in } \mathcal{C}_{\Omega} , \\ \displaystyle B(G)&\!\!\!\!=0 & & \mbox{ on } \partial_L\mathcal{C}_{\Omega} , \\ \displaystyle \frac{\partial G}{\partial \nu^s}&\!\!\!\!= 1& & \mbox{ on } \Omega\times\{y=0\}. \end{array} \right. \end{equation*} Note that, since the trace function $g(x)=G(x,0)$ is a solution to \begin{equation*} \left\{ \begin{tabular}{rcl} $(-\Delta)^sg=1$ & &in \ $\Omega$, \\[2pt] $B(g)=0$ & &on \ $\partial\Omega$, \end{tabular} \right. \end{equation*} by \cite[Theorem 3.7]{Carmona2020} we have $\|g\|_{L^{\infty}(\Omega)}<+\infty$. Next, since $0<q<1$, we can find $\lambda_0>0$ such that, for all $0<\lambda\leq\lambda_0$, there exists $M=M(\lambda)$ such that \begin{equation}\label{eq:Mlto0} M\geq\lambda M^q\|g\|_{L^{\infty}(\Omega)}^q+M^{2_s^*-1}\|g\|_{L^{\infty}(\Omega)}^{2_s^*-1}. \end{equation} As a consequence, the function $h=Mg$ satisfies $M=(-\Delta)^sh\geq \lambda h^q+h^{2_s^*-1}$ and, by the Maximum Principle (cf.
\cite[Lemma 2.3]{Capella2011}), the extension function $\overline U=E_s[h]$ is a supersolution with $\underline U \leq \overline U$. Applying Lemma \ref{existencia}, we conclude the existence of a solution $U(x,y)$ to problem \eqref{extension_problem}. Therefore, its trace $u(x)=U(x,0)$ is a solution to problem \eqref{p_lambda} for any $0<\lambda\leq\lambda_0$, and hence $\Lambda>0$. \end{proof} \begin{lemma}\label{lem:minimal} Problem \eqref{p_lambda} has a positive minimal solution for every $0<\lambda<\Lambda$. Moreover, the family $\{u_{\lambda}\}$ of minimal solutions is increasing with respect to $\lambda$. \end{lemma} \begin{proof} By definition of $\Lambda$, for any $0<\lambda<\Lambda$ there exists $\mu\in (\lambda,\Lambda]$ such that $(P_{\mu}^*)$ has a solution $W_{\mu}$. It is easy to see that $W_{\mu}$ is a supersolution to \eqref{extension_problem}. On the other hand, let $V_{\lambda}$ be the unique solution to problem \eqref{prob:general_f_extension} with $f(t)= \lambda t^q$ (the existence can be deduced by minimization, while uniqueness follows from Lemma~\ref{orden}). It is clear that $V_{\lambda}$ is a subsolution to problem \eqref{extension_problem} and, by Lemma \ref{orden}, we have $V_{\lambda} \leq W_{\mu}$. Thus, by Lemma \ref{existencia}, we conclude that there is a solution to \eqref{extension_problem} for such $\lambda$ and, as a consequence, for every $\lambda$ in the open interval $(0,\Lambda)$. Finally, we prove that a minimal solution exists for all $0<\lambda<\Lambda$. Given a solution $u$ to \eqref{p_lambda}, let $U=E_s[u]$ be the corresponding solution to \eqref{extension_problem}. By Lemma~\ref{orden}, we have $V_{\lambda}\leq U$, with $V_{\lambda}$ the solution to \eqref{prob:general_f_extension} with $f(t)= \lambda t^q$.
Then, $v_{\lambda}(x)=V_{\lambda}(x,0)$ is a subsolution to \eqref{p_lambda} and the monotone iteration \begin{equation*} \begin{array}{c} (-\Delta)^s u_{n+1}=\lambda u_{n}^q+u_n^{2_s^*-1},\quad u_n \in H_{\Sigma_{\mathcal{D}}}^s ({\Omega})\quad \mbox{ with } \quad u_0=v_{\lambda}, \end{array} \end{equation*} satisfies $u_n\leq U(x,0)=u$ and $u_n\nearrow u_{\lambda}$, with $u_{\lambda}$ a solution to problem \eqref{p_lambda}. In particular, $u_\lambda\leq u$ and we conclude that $u_\lambda$ is a minimal solution. The monotonicity follows directly from the first part of the proof, taking $U_\mu=E_s[u_\mu]$, which leads to $u_\lambda\leq u_\mu$ whenever $0<\lambda < \mu\leq \Lambda$. \end{proof} \vspace*{-5pt} \begin{lemma}\label{lem:solLambda} Problem \eqref{p_lambda} has at least one solution if $\lambda = \Lambda$. \end{lemma} To prove Lemma \ref{lem:solLambda} we need the following result, which guarantees that the linearization of \eqref{p_lambda} at the minimal solution has a non-negative first eigenvalue. \vspace*{-3pt} \begin{proposition} Let $u_{\lambda}\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ be the minimal solution to problem \eqref{p_lambda} and let us define $a_{\lambda}=a_{\lambda}(x)=\lambda q u_{\lambda}^{q-1}+(2_s^*-1) u_{\lambda}^{2_s^*-2}$. Then, the operator $[(-\Delta)^s-a_\lambda(x)]$ with mixed boundary conditions has a first eigenvalue $\nu_1\geq0$. In particular, it follows that \begin{equation}\label{ineq:posit} \int_{\Omega}\left(|(-\Delta)^{s/2}v|^2-a_{\lambda}v^2\right)dx\geq0,\quad \mbox{for any } \ \, v\in H_{\Sigma_{\mathcal{D}}}^s(\Omega). \end{equation} \end{proposition} \begin{proof} The proof follows verbatim that of \cite[Proposition 5.1]{Carmona2020a}, so we omit the details. \end{proof} \medskip \begin{proof}[Proof of Lemma \ref{lem:solLambda}] Let $\{\lambda_n\}$ be a sequence such that $\lambda_n \nearrow \Lambda$ and denote by $u_n=u_{\lambda_n}$ the minimal solution to problem $(P_{\lambda_n})$.
Let $U_n=E_s[u_n]$, then \begin{equation*} I_{\lambda_n}(u_n)=\frac{1}{2}\int_{{\Omega}}|(-\Delta)^{\frac{s}{2}}u_n|^2dx-\frac{\lambda_n}{q+1}\int_{\Omega} |u_n|^{q+1}dx-\frac{1}{2_s^*}\int_{\Omega}|u_n|^{2_s^*}dx. \end{equation*} Moreover, since $u_n$ is a solution to $(P_{\lambda_n})$, it also satisfies \vskip -10pt \begin{equation*} \int_{{\Omega}}|(-\Delta)^{\frac{s}{2}}u_n|^2dx=\lambda_n\int_{\Omega} |u_n|^{q+1}dx+\int_{\Omega}|u_n|^{2_s^*}dx. \end{equation*} On the other hand, using \eqref{ineq:posit} with $v=u_n$, \begin{equation*} \int_{{\Omega}}|(-\Delta)^{\frac{s}{2}}u_n|^2dx-\lambda_nq\int_{\Omega} |u_n|^{q+1}dx-(2_s^*-1)\int_{\Omega}|u_n|^{2_s^*}dx\geq0. \end{equation*} As in \cite[Theorem 2.1]{Ambrosetti1994}, we conclude $I_{\lambda_n}(u_n)<0$. Moreover, since $I_{\lambda_n}'(u_n) = 0$, it follows that ${\|u_n\|}_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}\leq C$. Hence, there exists a subsequence, still denoted by $u_n$, such that $u_n\rightharpoonup u$ weakly in $H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ and, thus, $u$ is a weak solution of \eqref{p_lambda} for $\lambda=\Lambda$. \end{proof} Having proved the first three items of Theorem \ref{sublinear}, next we focus on proving the existence of a second solution. As commented above, first we show that the minimal solution is a local minimum of the energy functional $I_\lambda$, so that we can use the Mountain Pass Theorem to obtain a minimax PS sequence. To find a second solution, we prove next a local PS$_c$ condition for energy levels $c$ below a critical level $c_{\mathcal{D-N}}^*$. Let us stress that, in order to obtain a second solution, it is fundamental that the minimal solution is a minimum of the energy functional $I_\lambda$ or, equivalently, that its $s$-harmonic extension $w_\lambda=E_s[u_\lambda]$ is a minimum of $J_\lambda$. Following the ideas of \cite{Colorado2003}, we begin with a separation lemma.
Let us consider $v$, the solution to \begin{equation}\label{p_v} \left\{ \begin{tabular}{rcl} $(-\Delta)^sv=g$ & &in $\Omega$,\\[2pt] $B(v)=0\mkern+0.5mu$& &on $\partial\Omega$, \end{tabular} \right.\quad\text{with } g\in L^p(\Omega),\ p>\frac{N}{s}. \end{equation} The following result is proven in \cite{Ortega2021}. \begin{theorem}\cite[Theorem 1.2]{Ortega2021}\label{th_stMax} Let $u$ be the solution to \begin{equation*} \left\{ \begin{tabular}{rcl} $(-\Delta)^su=f$ & &in $\Omega$, \\[2pt] $B(u)=0$ & &on $\partial\Omega$, \end{tabular} \right. \end{equation*} with $f\in L^\infty(\Omega)$, $f\gneq0$ and let $v$ be the solution to \eqref{p_v}. Then, there exists a constant $C>0$ such that \begin{equation*} \left\|\frac vu\right\|_{L^\infty(\Omega)}\leq C\|g\|_{L^p(\Omega)}, \end{equation*} with the constant $C$ depending on $N$, $p$, $s$, $\Omega$, $\Sigma_{\mathcal{D}}$, $\|u\|_{L^{\infty}(\Omega)}$, $\|f\|_{L^{\infty}(\Omega)}$ and $1/(\int_{\Omega}f(z)d(z)dz)$, where $d(x)=\textrm{dist}(x,\partial\Omega)$. \end{theorem} Let us define the class \begin{equation*} \mathfrak{C}_v(\Omega)=\left\{\omega\in\mathcal{C}^0(\overline{\Omega})\cap H_{\Sigma_{\mathcal{D}}}^s(\Omega): \left\|\frac{\omega}{v}\right\|_{L^{\infty}(\Omega)}<\infty\right\}. \end{equation*} By application of Theorem \ref{th_stMax} with $g\equiv1$ we obtain the following separation result in the $\mathfrak{C}_v(\Omega)$-topology. From now on, $v$ denotes the solution to \eqref{p_v} with $g\equiv1$. \begin{lemma}\label{lem:separation} Let $0<\lambda_0<\lambda_1<\lambda_2<\Lambda$ and let $u_{\lambda_0}, u_{\lambda_1}$ and $u_{\lambda_2}$ be the minimal solutions to \eqref{p_lambda} with $\lambda=\lambda_0, \lambda_1$ and $\lambda_2$ respectively.
Then, if $X=\{\omega\in \mathfrak{C}_v(\Omega): u_{\lambda_0}\leq\omega\leq u_{\lambda_2}\}$, there exists $\varepsilon>0$ such that \begin{equation*} u_{\lambda_1}+\varepsilon B_1(0)\subset X, \end{equation*} where $B_1(0)=\{\omega\in \mathfrak{C}_v(\Omega):\left\|\frac{\omega}{v}\right\|_{L^{\infty}(\Omega)}<1\}$. \end{lemma} \begin{proof} By the Maximum Principle (cf. \cite[Lemma 2.3]{Capella2011}), $u_{\lambda_0}<u_{\lambda_1}<u_{\lambda_2}$. Next, note that the function $\underline{v}\vcentcolon= u_{\lambda_1}-u_{\lambda_0}$ solves $(-\Delta)^s\underline{v}=f_{\lambda_1}(u_{\lambda_1})-f_{\lambda_0}(u_{\lambda_0})$ and the function $\overline{v}\vcentcolon= u_{\lambda_2}-u_{\lambda_1}$ solves $(-\Delta)^s\overline{v}=f_{\lambda_2}(u_{\lambda_2})-f_{\lambda_1}(u_{\lambda_1})$, in both cases with the same boundary condition as in problem \eqref{p_lambda}. Since $f_\lambda(t)$ is increasing in both $\lambda$ and $t$, the right-hand sides are nonnegative and, by Proposition \ref{prop:bound}, they are also bounded. Then, by Theorem \ref{th_stMax}, there exists $\varepsilon>0$ such that \begin{equation*} u_{\lambda_0}(x)+\varepsilon v(x)\leq u_{\lambda_1}(x)\leq u_{\lambda_2}(x)-\varepsilon v(x),\quad\text{for all } x\in\overline{\Omega}, \end{equation*} and the result follows. \end{proof} Let us define the functions \begin{equation*} \overline{F}_\lambda(u)=\int_{0}^{u}\overline{f}_\lambda(t)dt\qquad\text{with}\qquad \overline{f}_\lambda(t)=\left\{ \begin{tabular}{lr} $\lambda t^q+t^{2_s^*-1}$&if $t\geq0$, \\[5pt] $0$&if $t<0$, \end{tabular} \right. \end{equation*} and the energy functional \begin{equation*} \overline{I}_\lambda(u)=\frac{1}{2}\|u\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^2-\int_{\Omega}\overline{F}_\lambda(u)dx. \end{equation*} Clearly $\overline{I}_\lambda(u)=I_\lambda(u)$ if $u>0$.
The Euler--Lagrange equation corresponding to $\overline{I}_\lambda(u)$ is \begin{equation}\label{p_lambda+}\tag{$P_\lambda^+$} \left\{ \begin{tabular}{lcl} $(-\Delta)^su=\lambda (u^+)^q+(u^+)^{2_s^*-1}$ & &in $\Omega$, \\ $\mkern+22muB(u)=0$& &on $\partial\Omega$, \end{tabular} \right. \end{equation} where $u^+=\max\{0,u\}$. If $u$ is a nontrivial solution to \eqref{p_lambda+}, i.e., $u$ is a nontrivial critical point of $\overline{I}_\lambda$, then, by the Maximum Principle (cf. \cite[Lemma 2.3]{Capella2011}), $u$ is strictly positive in $\Omega$ and, therefore, it is also a solution to \eqref{p_lambda}, i.e., a positive critical point of $I_\lambda$; and vice versa, if $u$ is a solution to \eqref{p_lambda}, then $u>0$ in $\Omega$ and $u^+=u$, so that $u$ is a solution to \eqref{p_lambda+}. \begin{proposition}\label{prop:min_C} For all $\lambda\in(0,\Lambda)$ there exists a solution $u_0\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ to \eqref{p_lambda} which is a local minimum of the energy functional $\overline{I}_{\lambda}$ in $\mathfrak{C}_v(\Omega)$, i.e., there exists $r>0$ such that \begin{equation*} \overline{I}_{\lambda}(u_0)\leq \overline{I}_{\lambda}(\omega),\quad \text{for all } \omega\in \mathfrak{C}_v(\Omega)\text{ with } \left\|\frac{u_0-\omega}{v}\right\|_{L^{\infty}(\Omega)}\leq r. \end{equation*} \end{proposition} \begin{proof} Fix $0<\lambda_1<\lambda<\lambda_2<\Lambda$ and let us consider the corresponding minimal solutions $u_{\lambda_1}$ and $u_{\lambda_2}$. Note that $u_{\lambda_1}\leq u_{\lambda_2}$ and that $u_{\lambda_1}$, $u_{\lambda_2}$ are a subsolution and a supersolution to \eqref{p_lambda}. Moreover, setting $z\vcentcolon= u_{\lambda_2}-u_{\lambda_1}$, we have \begin{equation*} \left\{ \begin{tabular}{lcl} $(-\Delta)^sz\geq0$ & &in $\Omega$, \\ $\mkern+22muB(z)=0$& &on $\partial\Omega$. \end{tabular} \right.
\end{equation*} Next, let us take the functions \begin{equation*} F_\lambda^*(u)=\int_{0}^{u}f_\lambda^*(x,t)dt\qquad\text{with}\qquad f_\lambda^*(x,t)=\left\{ \begin{tabular}{ll} $\overline{f}_{\lambda}(u_{\lambda_1})$&if $t\leq u_{\lambda_1}$, \\[1pt] $\overline{f}_{\lambda}(t)$&if $u_{\lambda_1}<t< u_{\lambda_2}$, \\[1pt] $\overline{f}_{\lambda}(u_{\lambda_2})$&if $t\geq u_{\lambda_2}$, \end{tabular} \right. \end{equation*} and the energy functional \begin{equation*} I_\lambda^*(u)=\frac{1}{2}\|u\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^2-\int_{\Omega}F_\lambda^*(u)dx. \end{equation*} It is clear that $I_\lambda^*$ attains its global minimum at some $u_0\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$, that is, \begin{equation}\label{ineq:min} I_\lambda^*(u_0)\leq I_{\lambda}^*(u),\quad\text{for all }u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega). \end{equation} Moreover, \begin{equation*} \left\{ \begin{tabular}{lcl} $(-\Delta)^su_0=f_\lambda^*(x,u_0)$, & &in $\Omega$, \\ $\mkern+22muB(u_0)=0$& &on $\partial\Omega$. \end{tabular} \right. \end{equation*} Because of Theorem \ref{th_stMax} we get, for some $\varepsilon>0$, \begin{equation*} u_{\lambda_1}(x)+\varepsilon v(x)\leq u_0(x)\leq u_{\lambda_2}(x)-\varepsilon v(x), \end{equation*} for all $x\in\overline{\Omega}$, so that \begin{equation*} u_{\lambda_1}<u_0<u_{\lambda_2}, \end{equation*} for all $x\in\Omega$. Then, taking $ \left\| \frac{\omega-u_0}{v}\right\|_{L^{\infty}(\Omega)}\leq\tau$ with $0<\tau<\varepsilon$, we get $u_{\lambda_1}\leq\omega\leq u_{\lambda_2}$ in $\Omega$. Since the difference $I_{\lambda}^*(\omega)-\overline{I}_{\lambda}(\omega)$ is constant, independent of $\omega$, for all $u_{\lambda_1}\leq\omega\leq u_{\lambda_2}$, by \eqref{ineq:min} we conclude \begin{equation*} \overline{I}_\lambda(\omega)-\overline{I}_{\lambda}(u_0)=I_\lambda^*(\omega)-I_\lambda^*(u_0)\geq0, \end{equation*} for all $\omega\in \mathfrak{C}_v(\Omega)$ with $\left\| \frac{\omega-u_0}{v}\right\|_{L^{\infty}(\Omega)}\leq\tau$, and the result follows.
\end{proof} \begin{proposition}\label{prop:minC_minH} Let $u_0\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ be a local minimum of the energy functional $I_\lambda$ in $\mathfrak{C}_v(\Omega)$, i.e., there exists $r>0$ such that \begin{equation*} I_{\lambda}(u_0)\leq I_{\lambda}(u_0+\omega),\quad \text{for all } \omega\in \mathfrak{C}_v(\Omega)\text{ with } \left\|\frac{\omega}{v}\right\|_{L^{\infty}(\Omega)}\leq r. \end{equation*} Then, $u_0$ is a local minimum of the energy functional $I_\lambda$ in $H_{\Sigma_{\mathcal{D}}}^s(\Omega)$, i.e., there exists $\delta>0$ such that \begin{equation*} I_{\lambda}(u_0)\leq I_{\lambda}(u_0+\varphi),\quad \text{for all } \varphi\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)\text{ with } \|\varphi\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}\leq \delta. \end{equation*} \end{proposition} \begin{proof} Arguing by contradiction, we assume that \begin{equation}\label{eq:contradiction} \forall \varepsilon>0,\ \exists\, u_{\varepsilon}\in B_{\varepsilon} (u_0) \ \text{such that } I_{\lambda}(u_{\varepsilon})<I_{\lambda}(u_0), \end{equation} where $\displaystyle B_{\varepsilon}(u_0)=\{u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega): \|u-u_0\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}\leq \varepsilon\}$. For every $j>0$ let us consider the truncation map \begin{equation*} T_j(t)=\left\{ \begin{tabular}{ll} $t$&if $t<j$, \\[1pt] $j$&if $t\geq j$, \end{tabular} \right. \end{equation*} let us set \begin{equation*} F_{\lambda,j}(u)=\int_{0}^{u}f_{\lambda,j}(t)dt\qquad\text{where}\qquad f_{\lambda,j}(t)=f_{\lambda}(T_j(t)), \end{equation*} with $f_\lambda(t)=\lambda|t|^{q-1}t+|t|^{2_s^*-2}t$, and the energy functional \begin{equation*} I_{\lambda,j}(u)=\frac{1}{2}\|u\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^2-\int_{\Omega}F_{\lambda,j}(u)dx. \end{equation*} For each $u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$, we have $I_{\lambda,j}(u)\to I_{\lambda}(u)$ as $j\to\infty$.
Hence, for any $\varepsilon>0$ there exists $j=j(\varepsilon)$ large enough such that $I_{\lambda,j(\varepsilon)}(u_{\varepsilon})<I_{\lambda}(u_0)$. Clearly, $\min_{B_{\varepsilon}(u_0)}I_{\lambda,j(\varepsilon)}$ is attained at some point, say $\omega_{\varepsilon}$. Therefore, we have \begin{equation*} I_{\lambda,j(\varepsilon)}(\omega_{\varepsilon})\leq I_{\lambda,j(\varepsilon)}(u_{\varepsilon})<I_{\lambda}(u_0). \end{equation*} Now we want to prove that \begin{equation}\label{eq:conv} \omega_{\varepsilon}\to u_0\quad\text{in }\mathfrak{C}_v(\Omega)\text{ as }\varepsilon\to0, \end{equation} which will lead to a contradiction with \eqref{eq:contradiction}. The Euler--Lagrange equation satisfied by $\omega_{\varepsilon}$ involves a Lagrange multiplier $\xi_{\varepsilon}$ such that \begin{equation*} \langle I_{\lambda,j(\varepsilon)}'(\omega_{\varepsilon}),\psi\rangle_{{\small H^{-s}(\Omega), H_{\Sigma_{\mathcal{D}}}^s(\Omega)}}=\xi_{\varepsilon}\langle \omega_{\varepsilon},\psi \rangle_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)},\quad\forall\psi\in H_{\Sigma_{\mathcal{D}}}^s(\Omega), \end{equation*} that is, \begin{equation}\label{eq:multiplier} \int_{\Omega}(-\Delta)^{\frac{s}{2}}\omega_{\varepsilon}(-\Delta)^{\frac{s}{2}}\psi dx-\int_{\Omega}f_{\lambda,j(\varepsilon)}(\omega_{\varepsilon})\psi dx=\xi_{\varepsilon}\int_{\Omega}(-\Delta)^{\frac{s}{2}}\omega_{\varepsilon}(-\Delta)^{\frac{s}{2}}\psi dx, \end{equation} for all $\psi\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$. Since $\omega_{\varepsilon}$ is a minimum of $I_{\lambda,j(\varepsilon)}$, it holds that, for $0<\varepsilon\ll1$, \begin{equation}\label{eq:multiplier2} \xi_{\varepsilon}=\frac{\langle I_{\lambda,j(\varepsilon)}'(\omega_{\varepsilon}),\omega_{\varepsilon}\rangle_{{\small H^{-s}(\Omega), H_{\Sigma_{\mathcal{D}}}^s(\Omega)}}}{\|\omega_{\varepsilon}\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^2}\qquad\text{and}\qquad\xi_{\varepsilon}\to0\quad\text{as }\varepsilon\to0.
\end{equation} Let us observe that, by \eqref{eq:multiplier}, the function $\omega_{\varepsilon}$ satisfies the problem \begin{equation*} \left\{ \begin{tabular}{lcl} $(-\Delta)^s\omega_{\varepsilon}=\frac{1}{1-\xi_{\varepsilon}}f_{\lambda,j(\varepsilon)}(\omega_{\varepsilon})\vcentcolon=f_{\lambda}^{\varepsilon}(\omega_{\varepsilon})$, & &in $\Omega$, \\ $\mkern+22muB(\omega_{\varepsilon})=0$& &on $\partial\Omega$. \end{tabular} \right. \end{equation*} Clearly $\|\omega_{\varepsilon}\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}\leq C$. Moreover, by Proposition \ref{prop:bound}, we have $\|\omega_{\varepsilon}\|_{L^{\infty}(\Omega)}\leq C_1$ for a constant $C_1>0$ independent of $\varepsilon$. Thus, by \eqref{eq:multiplier2}, we get $\|f_{\lambda}^{\varepsilon}(\omega_{\varepsilon})\|_{L^{\infty}(\Omega)}\leq C_2$, so that \cite[Theorem 1.2]{Carmona2020} implies $\|\omega_{\varepsilon}\|_{\mathcal{C}^{\gamma}}\leq C_3$ for some $\gamma\in(0,\frac12)$ and a constant $C_3>0$ independent of $\varepsilon$. Here $\mathcal{C}^{\gamma}$ denotes the space of Hölder continuous functions with exponent $\gamma$. Then, by the Ascoli--Arzelà Theorem, there exists a subsequence, still denoted by $\omega_{\varepsilon}$, such that $\omega_{\varepsilon}\to \omega_0$ uniformly as $\varepsilon\to0$. Since $\omega_{\varepsilon}\to u_0$ in $H_{\Sigma_{\mathcal{D}}}^s(\Omega)$, we have $\omega_0=u_0$ and, by the Maximum Principle and Theorem \ref{th_stMax}, we get \begin{equation*} \left\| \frac{\omega_{\varepsilon}-u_0}{v} \right\|_{L^{\infty}(\Omega)}\leq C \sup\limits_{\Omega}|f_{\lambda}^{\varepsilon}(\omega_{\varepsilon})-f_{\lambda}^{0}(u_0)|\to0\quad\text{as }\varepsilon\to0, \end{equation*} and \eqref{eq:conv} is proved. \end{proof} Propositions \ref{prop:min_C} and \ref{prop:minC_minH} provide us with a local minimum of $I_\lambda$ in $H_{\Sigma_{\mathcal{D}}}^s(\Omega)$, which will be denoted by $u_0$.
Now, for fixed $\lambda>0$, we look for a second solution of the form $\hat{u}=u_0+\tilde{u}$ with $u_0$ the above solution and $\tilde{u}>0$. The corresponding problem satisfied by $\tilde{u}$ is \begin{equation*} \left\{ \begin{tabular}{lcl} $(-\Delta)^s\tilde{u}=\lambda(u_0+\tilde{u})^q-\lambda u_0^q+(u_0+\tilde{u})^{2_s^*-1}-u_0^{2_s^*-1}$ & &\mbox{in } $\Omega$, \\ $\mkern+21mu B(\tilde{u})=0\mkern+0.5mu$& &\mbox{on } $\partial\Omega$. \end{tabular} \right. \end{equation*} Let us define the functions \begin{equation*} G_\lambda(u)=\int_{0}^{u}g_{\lambda}(x,t)dt\quad\text{with}\quad g_\lambda(x,t)=\left\{ \begin{tabular}{lr} $\lambda(u_0+t)^q-\lambda u_0^q+(u_0+t)^{2_s^*-1}-u_0^{2_s^*-1}$&\mbox{if } $t\geq0$, \\[5pt] $0$&\mbox{if } $t<0$, \end{tabular} \right. \end{equation*} and the energy functional \begin{equation*} \widetilde{I}_\lambda(u)=\frac{1}{2}\|u\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^2-\int_{\Omega}G_\lambda(u)dx. \end{equation*} Note that, since $u\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$, the functional $\widetilde{I}_\lambda$ is well defined. Finally, let us define the moved problem \begin{equation}\label{p_tilde} \left\{ \begin{tabular}{lcl} $(-\Delta)^su=g_\lambda(x,u)$ & &\mbox{in } $\Omega$, \\ $\mkern+22mu B(u)=0\mkern+0.5mu$& &\mbox{on } $\partial\Omega$. \end{tabular} \right. \tag{$\widetilde{P}_\lambda$} \end{equation} Thus, if $\tilde{u}\not\equiv0$ is a critical point of $\widetilde{I}_\lambda$, then it is a solution to $(\widetilde{P}_\lambda)$. Moreover, by the Maximum Principle (cf. \cite[Lemma 2.3]{Capella2011}), we have $\tilde{u}>0$ in $\Omega$. Hence $\hat{u}=u_0+\tilde{u}$ will be a second solution to \eqref{p_lambda} with a concave nonlinearity $0<q<1$. Then, in order to prove the existence of a second solution, we have to analyze the existence of nontrivial critical points of the functional $\widetilde{I}_\lambda$. First we have the following.
\begin{lemma}\label{lem:min_0} $u=0$ is a local minimum of $\widetilde{I}_\lambda$ in $H_{\Sigma_{\mathcal{D}}}^s(\Omega)$. \end{lemma} \begin{proof} Note that, because of Proposition \ref{prop:minC_minH}, it is sufficient to prove that $u=0$ is a local minimum of $\widetilde{I}_\lambda$ in $\mathfrak{C}_v(\Omega)$. Let $u^+$ be the positive part of $u$. Since \begin{equation*} G_{\lambda}(u^+)=\overline{F}_{\lambda}(u_0+u^+)-\overline{F}_{\lambda}(u_0)-\left(\lambda u_0^q+u_0^{2_s^*-1}\right)u^+, \end{equation*} then \begin{equation*} \begin{split} \widetilde{I}_\lambda(u)=&\frac{1}{2}\|u^+\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}+\frac{1}{2}\|u^-\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}\\ &-\int_{\Omega}\overline{F}_{\lambda}(u_0+u^+)\, dx+\int_{\Omega}\overline{F}_{\lambda}(u_0)\, dx+\int_{\Omega}\left(\lambda u_0^q+u_0^{2_s^*-1}\right)u^+dx. \end{split} \end{equation*} On the other hand, \begin{equation*} \begin{split} \overline{I}_\lambda(u_0+u^+)=& \frac{1}{2}\|u_0+u^+\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}-\int_{\Omega}\overline{F}_{\lambda}(u_0+u^+) dx\\ =&\frac{1}{2}\|u_0\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}+\frac{1}{2}\|u^+\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}+\int_{\Omega}(-\Delta)^{\frac{s}{2}}u_0(-\Delta)^{\frac{s}{2}}u^+dx\!-\!\int_{\Omega}\overline{F}_{\lambda}(u_0+u^+)dx\\ =& \frac{1}{2}\|u_0\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}+\frac{1}{2}\|u^+\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}+\int_{\Omega}\left(\lambda u_0^q+u_0^{2_s^*-1}\right)u^+dx\!-\!\int_{\Omega}\overline{F}_{\lambda}(u_0+u^+)dx. \end{split} \end{equation*} Hence \begin{equation*} \begin{split} \widetilde{I}_\lambda(u)=&\frac{1}{2}\|u^-\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}+\overline{I}_\lambda(u_0+u^+)-\frac{1}{2}\|u_0\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}+\int_{\Omega}\overline{F}_{\lambda}(u_0)dx\\ =&\frac{1}{2}\|u^-\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}+\overline{I}_\lambda(u_0+u^+)-\overline{I}_\lambda(u_0).
\end{split} \end{equation*} Using Proposition \ref{prop:min_C}, it follows that \begin{equation*} \widetilde{I}_\lambda(u)\geq\frac{1}{2}\|u^-\|_{H_{\Sigma_{\mathcal{D}}}^s(\Omega)}^{2}\geq0, \end{equation*} provided $\displaystyle \left\| \frac{u}{v}\right\|_{L^{\infty}(\Omega)}\leq\varepsilon$ for some $\varepsilon>0$ small enough. \end{proof} As a consequence of Lemma \ref{lem:min_0} the following result holds for the energy functional associated to the extension problem corresponding to the moved problem $(\widetilde{P}_\lambda)$, namely, \begin{equation*} \widetilde{J}_{\lambda}(w)=\frac{1}{2}\|w\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2-\int_{\Omega}G_\lambda(w(x,0))dx. \end{equation*} \begin{corollary}\label{cor:min_0_ext} $w=0$ is a local minimum of $\widetilde{J}_\lambda$ in $\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$. \end{corollary} \subsection{The Palais--Smale condition}\hfill\newline Assuming that $w=0$ is the unique critical point of the energy functional $\widetilde{J}_{\lambda}$, corresponding to the extension problem of the moved problem $(\widetilde{P}_{\lambda})$, we prove that $\widetilde{J}_{\lambda}$ satisfies a local Palais–Smale condition for energy levels $c$ below the critical level $c_{\mathcal{D-N}}^*=\frac{s}{N}[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}$. As commented above, the main tool for proving this fact is the extension to the fractional-mixed setting of the concentration-compactness principle by Lions (cf. \cite{Lions1985}) provided by Theorem \ref{concompact}. We will also need some estimates for the action of the fractional Laplacian on the Sobolev extremal functions \eqref{eq:sob_extremal} in order to find paths with energy below $c_{\mathcal{D-N}}^*$. 
\begin{lemma}\label{lem:PS} If $w=0$ is the only critical point of $\widetilde{J}_{\lambda}$ in $\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$, then $\widetilde{J}_{\lambda}$ satisfies a local Palais--Smale condition below the critical level \begin{equation*} c_{\mathcal{D-N}}^*=\frac{s}{N}[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}. \end{equation*} \end{lemma} \begin{proof} Let $\{w_n\}$ be a PS sequence for $\widetilde{J}_{\lambda}$ verifying \begin{equation}\label{lem:PS_eq0} \widetilde{J}_{\lambda}(w_n)\to c< c_{\mathcal{D-N}}^*\quad\text{and}\quad \widetilde{J}_{\lambda}'(w_n)\to0. \end{equation} Then, it is clear that the sequence $\{w_n\}$ is uniformly bounded in $\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$, say $\|w_n\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2\leq M$, and, since by hypothesis $w=0$ is the unique critical point of $\widetilde{J}_{\lambda}$, it follows that \begin{equation*} \begin{split} w_n&\rightharpoonup 0\quad\text{weakly in }\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega}),\\ w_n(x,0)&\to 0\quad\text{strongly in }L^r(\Omega),\ 1\leq r<2_s^*,\\ w_n(x,0)&\to 0\quad\text{a.e. in }\Omega. \end{split} \end{equation*} Also, since $w_0=E_s[u_0]$ is a critical point of $J_{\lambda}$, we get \begin{equation}\label{lem:PS_eq1} \widetilde{J}_{\lambda}(w_n)+J_\lambda(w_0)\geq J_\lambda(z_n) \end{equation} where $z_n=w_n+w_0$ and, then, \begin{equation}\label{lem:PS_eq2} J_\lambda(z_n)\to c+J_{\lambda}(w_0)\quad\text{and}\quad J'_{\lambda}(z_n)\to0. \end{equation} From \eqref{lem:PS_eq1} and \eqref{lem:PS_eq2} we get that the sequence $\{z_n\}$ is uniformly bounded in $\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$, say $\|z_n\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2\leq M_2$.
From this, together with the fact that $w=0$ is the unique critical point of $\widetilde{J}_{\lambda}$, we get, up to a subsequence, \begin{equation}\label{lem:PS_eq1b} \begin{split} z_n&\rightharpoonup w_0\quad\text{weakly in }\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega}),\\ z_n(x,0)&\to w_0(x,0)\quad\text{strongly in }L^r(\Omega),\ 1\leq r<2_s^*,\\ z_n(x,0)&\to w_0(x,0)\quad\text{a.e. in }\Omega. \end{split} \end{equation} In order to apply the concentration-compactness result provided by Theorem \ref{concompact} we claim the following:\newline \textbf{Claim:} The sequence $\{y^{1-2s}|\nabla z_n|^2\}_{n\in\mathbb{N}}$ is tight, i.e., for any $\eta>0$ there exists $\rho>0$ such that \begin{equation*} \int_{\{y>\rho\}}\int_{\Omega}y^{1-2s}|\nabla z_n|^2dxdy\leq\eta,\ \forall n\in\mathbb{N}. \end{equation*} The proof of the claim follows verbatim that of \cite[Lemma 3.6]{Barrios2012} (see also the proof of \cite[Theorem 4.5]{Colorado2019}), so we omit the details. We can then apply Theorem \ref{concompact}. Hence, up to a subsequence, there exists an at most countable index set $\mathfrak{I}$, a sequence of points $\{x_i\}_{i\in\mathfrak{I}}\subset\overline{\Omega}$ and nonnegative real numbers $\mu_i$ and $\nu_i$ such that \begin{equation}\label{lem:PS_eq3} \kappa_s y^{1-2s}|\nabla z_n|^2\to \mu\geq\kappa_sy^{1-2s}|\nabla w_0|^2+\sum\limits_{i\in \mathfrak{I}}\mu_{i}\delta_{x_{i}}, \end{equation} and \begin{equation}\label{lem:PS_eq4} |z_n(x,0)|^{2_s^*}\to \nu=|w_0(x,0)|^{2_s^*}+\sum\limits_{i\in \mathfrak{I}}\nu_{i}\delta_{x_{i}}, \end{equation} in the sense of measures and satisfying the relation \begin{equation}\label{ineq:compac} \mu_i\geq \widetilde{S}(\Sigma_{\mathcal{D}})\nu_i^{\frac{2}{2_s^*}},\quad\text{for }i\in\mathfrak{I}.
\end{equation} Next, we fix $i_0\in \mathfrak{I}$ and we let $\phi\in\mathcal{C}_0^{\infty}(\mathbb{R}_+^{N+1})$ be a non-increasing smooth cut-off function verifying $\phi=1$ in $B_1^+(x_{i_0})$ and $\phi=0$ in $B_2^+(x_{i_0})^c$, with $B_r^+(x_{i_0})\subset\mathbb{R}^{N}\times\{y\geq0\}$ the $(N+1)$-dimensional semi-ball of radius $r>0$ centered at $x_{i_0}$. Let now $\phi_{\varepsilon}(x,y)=\phi(x/\varepsilon,y/\varepsilon)$; clearly, $|\nabla\phi_{\varepsilon}|\leq\frac{C}{\varepsilon}$. Let us denote $\Gamma_{2\varepsilon}=B_{2\varepsilon}^+(x_{i_0})\cap\{y=0\}$. Then, taking the dual product in \eqref{lem:PS_eq2} with $\phi_{\varepsilon}z_n$, we have \begin{equation*} \begin{split} \lim_{n\to\infty}&\kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}\langle\nabla z_n,\nabla\phi_{\varepsilon}\rangle z_ndxdy\\ &=\lim_{n\to\infty} \left(\lambda\int_{\Gamma_{2\varepsilon}}|z_n|^{q+1}\phi_{\varepsilon}dx+\int_{\Gamma_{2\varepsilon}}|z_n|^{2_s^*}\phi_{\varepsilon}dx-\kappa_s\int_{B_{2\varepsilon}^+(x_{i_0})}y^{1-2s}|\nabla z_n|^2\phi_{\varepsilon}dxdy\right). \end{split} \end{equation*} Then, thanks to \eqref{lem:PS_eq1b}, \eqref{lem:PS_eq3} and \eqref{lem:PS_eq4}, we find \begin{equation}\label{eq:tozero} \begin{split} \lim_{n\to\infty}&\kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}\langle\nabla z_n,\nabla\phi_{\varepsilon}\rangle z_ndxdy\\ &=\lambda\int_{\Gamma_{2\varepsilon}}|w_0|^{q+1}\phi_{\varepsilon}dx+\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}d\nu-\kappa_s\int_{B_{2\varepsilon}^+(x_{i_0})}y^{1-2s}\phi_{\varepsilon}d\mu. \end{split} \end{equation} Assume for the moment that the left hand side of \eqref{eq:tozero} vanishes as $\varepsilon\to0$. Thus, \begin{equation}\label{compacidad} 0=\lim_{\varepsilon\to0}\left(\lambda\int_{\Gamma_{2\varepsilon}}|w_0|^{q+1}\phi_{\varepsilon}dx+\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}d\nu-\kappa_s\int_{B_{2\varepsilon}^+(x_{i_0})}y^{1-2s}\phi_{\varepsilon}d\mu\right)=\nu_{i_0}-\mu_{i_0}.
\end{equation} We have then two options: either the compactness of the PS sequence or concentration around the point $x_{i_0}$. In other words, either $\nu_{i_0}=0$, so that $\mu_{i_0}=0$, or, by \eqref{compacidad} and \eqref{ineq:compac}, $$\nu_{i_0}\geq [\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}.$$ In case of having concentration, we find \begin{equation*} \begin{split} c+J_{\lambda}(w_0)=&\lim_{n\to\infty}J_{\lambda}(z_n)-\frac{1}{2}\langle J_{\lambda}'(z_n),z_n\rangle\\ \geq&\lambda\left(\frac{1}{2}-\frac{1}{q+1}\right)\int_{\Omega}|w_0|^{q+1}dx+\frac{s}{N}\int_{\Omega}|w_0|^{2_s^*}dx+\frac{s}{N}\nu_{i_0}\\ \geq&J_{\lambda}(w_0)+\frac{s}{N}[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}=J_{\lambda}(w_0)+c_{\mathcal{D-N}}^*, \end{split} \end{equation*} in contradiction with the hypothesis $c<c_{\mathcal{D-N}}^*$. Since $i_0$ was chosen arbitrarily, $\nu_i=0$ for all $i\in\mathfrak{I}$. As a consequence, $z_n(\cdot,0)\to w_0(\cdot,0)$ in $L^{2_s^*}(\Omega)$. Since convergence of $z_n(\cdot,0)$ in $L^{2_s^*}(\Omega)$ implies convergence of $f_{\lambda}(z_n(\cdot,0))$ in $L^{\frac{2N}{N+2s}}(\Omega)$, using the continuity of the inverse operator $(-\Delta)^{-s}$ we get the strong convergence of $z_n(\cdot,0)$ in $H_{\Sigma_{\mathcal{D}}}^s(\Omega)$. It only remains to prove that the left hand side of \eqref{eq:tozero} vanishes as $\varepsilon\to0$. The PS sequence $\{z_n\}$ is uniformly bounded in $\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ and, up to a subsequence, $z_n\rightharpoonup w_0\in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ and $z_n\rightarrow w_0$ a.e. in $\mathscr{C}_{\Omega}$. Moreover, for $r<2^*=\frac{2(N+1)}{N-1}$ we have the compact inclusion $\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})\hookrightarrow L^{r}(\mathscr{C}_{\Omega},y^{1-2s}dxdy)$.
Then, applying H\"older's inequality with $p=\frac{N+1}{N-1}$ and $q=\frac{N+1}{2}$, we find \begin{equation*} \begin{split} \int_{B_{2\varepsilon}^+(x_{i_0})} &y^{1-2s}|\nabla\phi_{\varepsilon}|^2|z_n|^2dxdy\\ \leq& \left(\int_{B_{2\varepsilon}^+(x_{i_0})}y^{1-2s}|\nabla\phi_{\varepsilon}|^{N+1}dxdy\right)^{\frac{2}{N+1}}\left(\int_{B_{2\varepsilon}^+(x_{i_0})} y^{1-2s}|z_n|^{2\frac{N+1}{N-1}}dxdy\right)^{\frac{N-1}{N+1}}\\ \leq&\frac{1}{\varepsilon^2}\left(\int_{B_{2\varepsilon}(x_{i_0})}\int_0^\varepsilon y^{1-2s}dxdy\right)^{\frac{2}{N+1}}\left(\int_{B_{2\varepsilon}^+(x_{i_0})} y^{1-2s}|z_n|^{2\frac{N+1}{N-1}}dxdy\right)^{\frac{N-1}{N+1}}\\ \leq& c_0\varepsilon^{\frac{2(1-2s)}{N+1}}\left(\int_{B_{2\varepsilon}^+(x_{i_0})}y^{1-2s}|z_n|^{2\frac{N+1}{N-1}}dxdy\right)^{\frac{N-1}{N+1}}\\ \leq& c_0 \varepsilon^{\frac{2(1-2s)}{N+1}}\varepsilon^{\frac{(2+N-2s)(N-1)}{N+1}}\left(\int_{B_{2}^+(x_{i_0})}y^{1-2s}|z_n(\varepsilon x,\varepsilon y)|^{2\frac{N+1}{N-1}}dxdy\right)^{\frac{N-1}{N+1}}\\ \leq& c_1 \varepsilon^{N-2s}, \end{split} \end{equation*} for appropriate positive constants $c_0$ and $c_1$. Thus, we conclude \begin{equation*} \begin{split} 0\leq&\lim_{n\to\infty}\left|\kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}\langle\nabla z_n,\nabla\phi_{\varepsilon} \rangle z_ndxdy\right|\\ \leq&\kappa_s\lim_{n\to\infty}\left(\int_{\mathscr{C}_{\Omega}}y^{1-2s}|\nabla z_n|^2dxdy\right)^{1/2}\left(\int_{B_{2\varepsilon}^+(x_{i_0})}y^{1-2s}|\nabla\phi_{\varepsilon}|^2|z_n|^2dxdy\right)^{1/2}\\ \leq&C \varepsilon^{\frac{N-2s}{2}}\to0, \end{split} \end{equation*} as $\varepsilon\to 0$, and the proof of Lemma \ref{lem:PS} is complete. \end{proof} In Lemma \ref{lem:PS} we have proved that, if $w=0$ is the only critical point of the functional $\widetilde{J}_{\lambda}$, then $\widetilde{J}_{\lambda}$ verifies the Palais--Smale condition at any level $c<c^*_{\mathcal{D-N}}$.
Next, we have to show that we can obtain a local PS$_c$ sequence for the energy functional $\widetilde{J}_{\lambda}$ with energy level $c<c^*_{\mathcal{D-N}}$. To do that we will use the family of minimizers $u_{\varepsilon}$ of the fractional Sobolev inequality \eqref{sobolev} given by \eqref{eq:sob_extremal} and its $s$-harmonic extension $w_{\varepsilon}=E_s[u_{\varepsilon}]$. Consider a smooth non-increasing cut-off function $\phi_0(t)\in\mathcal{C}^{\infty}(\mathbb{R}_+)$, satisfying $\phi_0(t)=1$ for $0\leq t\leq\frac{1}{2}$ and $\phi_0(t)=0$ for $t\geq1$, and $|\phi_0'(t)|\le C$ for any $t\ge 0$. Assume, without loss of generality, that $0\in\Omega$, and define, for some $\rho>0$ small enough such that $B_{\rho}^+((0,0))\subseteq{\mathscr{C}}_{\Omega}$, the function $\phi(x,y)=\phi_{\rho}(x,y)=\phi_0(\frac{r_{xy}}{\rho})$ with $r_{xy}=|(x,y)|=(|x|^2+y^2)^{\frac{1}{2}}$. Then, we have the following. \begin{lemma}\cite[Lemma 3.3]{Colorado2019}\label{lemma_est} The family $\{\phi_{\rho} w_{\varepsilon}\}$ and its trace on $\{y=0\}$, namely $\{\phi_{\rho} u_{\varepsilon}\}$, satisfy \begin{equation*} \|\phi_\rho w_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2=\|w_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2+ O\left(\left(\frac{\varepsilon}{\rho}\right)^{N-2s}\right) \end{equation*} and \begin{equation*} \int_{\Omega}|\phi_\rho u_{\varepsilon}|^{2_s^*}dx=\|u_{\varepsilon}\|_{L^{2_s^*}(\mathbb{R}^N)}^{2_s^*}+O\left(\left(\frac{\varepsilon}{\rho}\right)^N\right). \end{equation*} for $\varepsilon>0$ small enough. \end{lemma} We consider now a cut-off function centered at $x_0\in\Sigma_{\mathcal{N}}$, namely, $\psi(x,y)=\psi_{\rho}(x,y)=\phi_0(\frac{\overline{r}_{xy}}{\rho})$ with $\overline{r}_{xy}=|(x-x_0,y)|=(|x-x_0|^2+y^2)^{\frac{1}{2}}$ and $\phi_0(t)$ as the cut-off function of Lemma \ref{lemma_est}. Then, in the spirit of \cite[Lemma 3.2]{Grossi1990} the following holds. 
\begin{lemma}\label{lemma_est_boundary} The family $\{\psi_{\rho} w_{\varepsilon}\}$ and its trace on $\{y=0\}$, namely $\{\psi_{\rho} u_{\varepsilon}\}$, satisfy \begin{equation}\label{est1} \|\psi_\rho w_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2=\frac{1}{2}\|w_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2+ O\left(\left(\frac{\varepsilon}{\rho}\right)^{N-2s}\right) \end{equation} and \begin{equation}\label{est1b} \|\psi_\rho u_{\varepsilon}\|_{L^{2_s^*}(\Omega)}^{2_s^*}=\frac{1}{2}\|u_{\varepsilon}\|_{L^{2_s^*}(\mathbb{R}^N)}^{2_s^*}+O\left(\left(\frac{\varepsilon}{\rho}\right)^N\right), \end{equation} for $\varepsilon>0$ small enough. \end{lemma} \begin{proof} Take $X_0=(x_0,0)$ and denote by $\Omega_{\rho}$ the set $\Omega\cap B_{\rho}(x_0)$ and by $\mathscr{C}_\rho$ the set $\mathscr{C}_{\Omega}\cap B_{\rho}^+(X_0)$. Let us estimate the norm of the truncated function centered at $x_0\in\Sigma_{\mathcal{N}}$, \begin{equation}\label{st_lem} \begin{split} \|\psi_\rho w_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2&=\kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}\psi_{\rho}^2|\nabla w_{\varepsilon}|^2dxdy\\ &+\kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}|w_{\varepsilon}\nabla\psi_{\rho}|^2dxdy+2\kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}\langle w_{\varepsilon}\nabla\psi_{\rho},\psi_{\rho}\nabla w_{\varepsilon} \rangle dxdy. \end{split} \end{equation} Following verbatim the proof of Lemma \ref{lemma_est}, the following estimate holds: \begin{equation*} \kappa_s\int_{\mathscr{C}_{\Omega}}y^{1-2s}|w_{\varepsilon}\nabla\psi_{\rho}|^2dxdy+\kappa_s \int_{\mathscr{C}_{\Omega}}y^{1-2s}\langle w_{\varepsilon}\nabla\psi_{\rho},\psi_{\rho}\nabla w_{\varepsilon} \rangle dxdy=O\left(\left(\frac{\varepsilon}{\rho}\right)^{N-2s}\right).
\end{equation*} To estimate the remaining term in the right-hand side of \eqref{st_lem}, we use that, by \cite[Lemma 3.1]{Barrios2012}, \begin{equation*} |\nabla w_{1,s}(x,y)|\leq C w_{1,s-\frac12}(x,y),\ \frac{1}{2}<s<1,\ (x,y)\in\mathbb{R}_+^{N+1}, \end{equation*} and, for $r_{xy}\geq\frac{\rho}{2\varepsilon}$, we have $\displaystyle w_{1,s-\frac12}(x,y)=O\left(\left(\frac{\varepsilon}{\rho}\right)^{N-2(s-\frac12)}\right)$, so that \begin{equation*} \int_{\{\frac{\rho}{2}\leq r_{xy}\leq \rho\}}y^{1-2s}|\nabla w_{\varepsilon}|^2dxdy\leq C\int\limits_{\{\frac{\rho}{2\varepsilon}\leq r_{xy}\leq \frac{\rho}{\varepsilon}\}}y^{1-2s}\left(\frac{\varepsilon}{\rho}\right)^{2(N-2(s-1/2))}dxdy =O\left(\left(\frac{\varepsilon}{\rho}\right)^{N-2s}\right). \end{equation*} Then, since $\Omega$ is a smooth domain, by the above estimates we find \begin{equation*} \begin{split} \int_{\mathscr{C}_{\Omega}}y^{1-2s}\psi_{\rho}^2|\nabla w_{\varepsilon}|^2dxdy&=\int_{\mathscr{C}_\rho}y^{1-2s}\psi_{\rho}^2|\nabla w_{\varepsilon}|^2dxdy=\int_{\mathscr{C}_\rho}y^{1-2s}|\nabla w_{\varepsilon}|^2dxdy+O\left(\left(\frac{\varepsilon}{\rho}\right)^{N-2s}\right)\\ &=\frac{1}{2}\int_{B_\rho^+(0)}y^{1-2s}|\nabla w_{\varepsilon}|^2dxdy+O\left(\left(\frac{\varepsilon}{\rho}\right)^{N-2s}\right). \end{split} \end{equation*} The estimate \eqref{est1} then follows by applying Lemma \ref{lemma_est}. On the other hand, since \begin{equation*} \int_{\{\frac{\rho}{2}<|x|<\rho\}}|u_{\varepsilon}|^{2_s^*}dx=\int_{\{\frac{\rho}{2}<|x|<\rho\}}\frac{\varepsilon^N}{(\varepsilon^2+|x|^2)^N}dx=O\left(\left(\frac{\varepsilon}{\rho}\right)^{N}\right), \end{equation*} we have \begin{equation*} \begin{split} \int_{\Omega}|\psi_\rho u_{\varepsilon}|^{2_s^*}dx&=\int_{\Omega_\rho}|\psi_\rho u_{\varepsilon}|^{2_s^*}dx=\int_{\Omega_\rho}|u_{\varepsilon}|^{2_s^*}dx+O\left(\left(\frac{\varepsilon}{\rho}\right)^{N}\right)\\ &=\frac{1}{2}\int_{B_\rho(0)}|u_{\varepsilon}|^{2_s^*}dx+O\left(\left(\frac{\varepsilon}{\rho}\right)^{N}\right). \end{split} \end{equation*} Using Lemma \ref{lemma_est} we get \eqref{est1b}. 
\end{proof} Next, let us consider the family of functions \begin{equation}\label{eq:eta} \eta_{\varepsilon}=\frac{\psi_{\rho} w_{\varepsilon}}{\|\psi_{\rho} u_{\varepsilon}\|_{L^{2_s^*}(\Omega)}}. \end{equation} with $\rho=\varepsilon^{\alpha}$ and $0<\alpha<1$ to be chosen (see \eqref{alpha_condition} below). Because of Lemma \ref{lemma_est_boundary}, we get \begin{equation}\label{eq:trunc_sobolev} \begin{split} \|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2& = \frac{\frac{1}{2}\|w_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2+O(\varepsilon^{(1-\alpha)(N-2s)})}{\left(\frac{1}{2}\right)^{\frac{2}{2_s^*}}\|u_{\varepsilon}\|_{L^{2_s^*}(\mathbb{R}^N)}^{2}+O(\varepsilon^{(1-\alpha)N})}\leq2^{-\frac{2s}{N}}\frac{\|w_{\varepsilon}\|_{\mathcal{X}^s(\mathbb{R}_+^{N+1})}^2+O(\varepsilon^{(1-\alpha)(N-2s)})}{\|u_{\varepsilon}\|_{L^{2_s^*}(\mathbb{R}^N)}^{2}+O(\varepsilon^{(1-\alpha)N})}\\ & \leq2^{-\frac{2s}{N}}S(N,s)+O(\varepsilon^{(1-\alpha)(N-2s)}). \end{split} \end{equation} \begin{lemma}\cite[Lemma 3.8]{Barrios2012}\label{lemma_est_L2} The family $\{\phi_{\rho} w_{\varepsilon}\}$ and its trace on $\{y=0\}$, namely $\{\phi_{\rho} u_{\varepsilon}\}$, satisfy \begin{equation*} \|\phi_\rho u_{\varepsilon}\|_{L^2(\Omega)}^2=\left\{ \begin{tabular}{lr} $C\varepsilon^{2s}+O(\varepsilon^{N-2s})$ & if $N>4s$, \\ $C\varepsilon^{2s}\log(1/\varepsilon)+O(\varepsilon^{2s})$ & if $N=4s$, \\ \end{tabular} \right. \end{equation*} and \begin{equation*} \|\phi_\rho u_{\varepsilon} \|_{L^{2_s^*-1}(\Omega)}^{2_s^*-1}\geq C\varepsilon^{\frac{N-2s}{2}},\quad \text{if }N>2s, \end{equation*} for $\varepsilon>0$ small enough. 
\end{lemma} \begin{remark} In this case, the dependence on the cut-off radius can be neglected since \begin{equation*} \begin{split} \|\phi_\rho u_{\varepsilon}\|_{L^2(\Omega)}^2&=\int_{\Omega}|\phi_\rho u_\varepsilon|^2dx\geq\int_{\{|x|<\frac{\rho}{2}\}}|u_\varepsilon|^2dx=\varepsilon^{-(N-2s)}\int_{\{|x|<\frac{\rho}{2}\}}\left|u_1\left(\frac{x}{\varepsilon}\right)\right|^2dx\\ &=\varepsilon^{2s}\int_{\{|x|<\frac{\rho}{2\varepsilon}\}}|u_1(x)|^2dx, \end{split} \end{equation*} so that, for $\rho=\varepsilon^\alpha$ with $0<\alpha<1$ as above, we have \begin{equation*} \|\phi_\rho u_{\varepsilon}\|_{L^2(\Omega)}^2\geq\varepsilon^{2s}\int_{\{|x|<\varepsilon^{-(1-\alpha)}\}}|u_1(x)|^2dx\geq\varepsilon^{2s}\int_{\{|x|<1\}}|u_1(x)|^2dx, \end{equation*} for $\varepsilon\ll1$. Moreover, noticing that \begin{equation*} \varepsilon^{2s}\int_{\{|x|>\frac{\rho}{2\varepsilon}\}}|u_1|^2dx=\varepsilon^{2s}\cdot O\left(\left(\frac{\varepsilon}{\rho}\right)^{N-4s}\right), \end{equation*} taking $\rho=\varepsilon^\alpha$ as above we have $\|\phi_\rho u_{\varepsilon}\|_{L^2(\Omega)}^2=\varepsilon^{2s}\left(\| u_1\|_{L^2(\mathbb{R}^N)}^2+O\left(\varepsilon^{(1-\alpha)(N-4s)}\right)\right)$. Then, it follows that also $\|\psi_\rho u_{\varepsilon}\|_{L^2(\Omega)}^2=O(\varepsilon^{2s})$. \end{remark} We finally prove that we can find paths with energy below the critical level $c^*_{\mathcal{D-N}}$. As commented above, contrary to the fractional Sobolev constant \eqref{sobolev}, the constant $\widetilde{S}(\Sigma_{\mathcal{D}})$ is attained for $|\Sigma_{\mathcal{D}}|$ small enough. Indeed, since by Proposition \ref{prop_cota} we have $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})\leq 2^{\frac{-2s}{N}}S(N,s)$, there are two options: \begin{itemize} \item[1)]$\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})<2^{\frac{-2s}{N}}S(N,s)$. 
In this case, the constant $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})$ is attained by Theorem \ref{th_att} and no concentration occurs.\vspace{0.2cm} \item[2)]$\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})=2^{\frac{-2s}{N}}S(N,s)$. In this case, by Theorem \ref{CCA}, minimizing sequences for $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})$ concentrate at a boundary point $x_0\in\overline{\Sigma}_{\mathcal{N}}$. \end{itemize} \begin{lemma}\label{lem:below_level0} Assume $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})<2^{\frac{-2s}{N}}S(N,s)$. Then, there exists $\tilde{w}\in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$, $\tilde{w}>0$ such that \begin{equation*} \sup\limits_{t\geq0}\widetilde{J}_{\lambda}(t\tilde{w})<c^*_{\mathcal{D-N}}. \end{equation*} \end{lemma} \begin{proof} If $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})<2^{\frac{-2s}{N}}S(N,s)$ no concentration occurs and $\widetilde{S}(\Sigma_{\mathcal{D}})$ is attained at some $\tilde{u}\in H_{\Sigma_{\mathcal{D}}}^s(\Omega)$ that we can assume to be positive (cf. \cite{Colorado2019}). Take $\displaystyle\tilde{w}=\frac{E_s[\tilde{u}]}{\|\tilde{u}\|_{2_s^*}}$ so that, by \eqref{norma2}, \begin{equation*} \|\tilde{w}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2=\widetilde{S}(\Sigma_{\mathcal{D}}). \end{equation*} Then, as $\lambda>0$ and $\tilde{w}>0$, \begin{equation*} \begin{split} \widetilde{J}_{\lambda}(t\tilde{w})&=\frac{t^2}{2}\|\tilde{w}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2-\frac{\lambda t^{q+1}}{q+1}\|\tilde{w}(x,0)\|_{L^{q+1}(\Omega)}^{q+1}-\frac{t^{2_s^*}}{2_s^*}\|\tilde{w}(x,0)\|_{L^{2_s^*}(\Omega)}^{2_s^*}\\ &=\left(\frac{t^2}{2}\widetilde{S}(\Sigma_{\mathcal{D}})-\frac{\lambda t^{q+1}}{q+1}\|\tilde{w}(x,0)\|_{L^{q+1}(\Omega)}^{q+1}-\frac{t^{2_s^*}}{2_s^*}\right)\\ &<\left(\frac{t^2}{2}\widetilde{S}(\Sigma_{\mathcal{D}})-\frac{t^{2_s^*}}{2_s^*}\right)=\vcentcolon g(t). 
\end{split} \end{equation*} Since $g(t)$ attains its maximum at $t_0=[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{1}{2_s^*-2}}$ and $g(t_0)=\frac{s}{N}[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}$, we conclude \begin{equation*} \sup\limits_{t\geq0}\widetilde{J}_{\lambda}(t\tilde{w})<\frac{s}{N}[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}=c^*_{\mathcal{D-N}}. \end{equation*} \end{proof} Next we address the case $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})=2^{\frac{-2s}{N}}S(N,s)$. Since now the constant $\widetilde{S}(\Sigma_{\mathcal{D}})$ is not attained, we need to use the family of functions $\eta_{\varepsilon}$ defined in \eqref{eq:eta} and based on the extremal functions of the Sobolev inequality to construct paths below the critical level $c^*_{\mathcal{D-N}}=\frac{s}{N}[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}=\frac{s}{2N}[S(N,s)]^{\frac{N}{2s}}$. \begin{lemma}\label{lem:below_level} Assume $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})=2^{\frac{-2s}{N}}S(N,s)$. Then, there exists $\varepsilon>0$ small enough such that \begin{equation*} \sup\limits_{t\geq0}\widetilde{J}_{\lambda}(t\eta_{\varepsilon})<c^*_{\mathcal{D-N}}. \end{equation*} \end{lemma} \begin{proof} Assume $N\geq4s$. Then, using as before that \begin{equation}\label{ineq:ab} (a+b)^p\geq a^p+b^p+\mu a^{p-1}b,\quad\text{for } a,b\geq0,\ p>1,\ \text{and some }\mu=\mu(p)>0, \end{equation} we have \begin{equation*} G_{\lambda}(w)\geq\frac{1}{2_s^*}w^{2_s^*}+\frac{\mu}{2}w^2w_0^{2_s^*-2}\qquad\text{and}\qquad g_{\lambda}(x,t)\geq t^{2_s^*-1}+\mu w_0^{2_s^*-2}t. \end{equation*} Hence, \begin{equation*} \widetilde{J}_{\lambda}(t\eta_{\varepsilon})\leq \frac{t^2}{2}\|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2-\frac{t^{2_s^*}}{2_s^*}-\frac{t^2}{2}\mu\int_{\Omega}w_0^{2_s^*-2}\eta_{\varepsilon}^2dx. 
\end{equation*} Since $w_0\geq a_0>0$ in $\mathrm{supp}(\eta_{\varepsilon})$, we get \begin{equation*} \widetilde{J}_{\lambda}(t\eta_{\varepsilon})\leq \frac{t^2}{2}\|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2-\frac{t^{2_s^*}}{2_s^*}-\frac{t^2}{2}\widetilde{\mu}\|\eta_{\varepsilon}\|_{L^2(\Omega)}^2=\vcentcolon g_{\varepsilon}(t). \end{equation*} It is clear that $\lim\limits_{t\to+\infty}g_{\varepsilon}(t)=-\infty$ and $\sup\limits_{t\geq 0}g_{\varepsilon}(t)$ is attained at some $t_{\varepsilon}$. In particular, since \begin{equation*} 0=g_{\varepsilon}'(t_{\varepsilon})=t_{\varepsilon}\|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2-t_{\varepsilon}^{2_s^*-1}-t_{\varepsilon}\widetilde{\mu}\|\eta_{\varepsilon}\|_{L^2(\Omega)}^2, \end{equation*} we have \begin{equation*} t_{\varepsilon}\leq \|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^{\frac{2}{2_s^*-2}}. \end{equation*} Let us note that, by Lemma \ref{lemma_est_boundary}, $t_{\varepsilon}\geq C>0$. On the other hand, the function \begin{equation*} h_{\varepsilon}(t)=\frac{t^2}{2}\|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2-\frac{t^{2_s^*}}{2_s^*}, \end{equation*} is increasing on $[0, \|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^{\frac{2}{2_s^*-2}}]$. Therefore, \begin{equation*} \sup\limits_{t\geq 0}g_{\varepsilon}(t)=g_{\varepsilon}(t_{\varepsilon})\leq\frac{s}{N}\|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^{\frac{N}{s}}-C\|\eta_{\varepsilon}\|_{L^2(\Omega)}^2. 
\end{equation*} Since $\|u_{\varepsilon}\|_{L^{2_s^*}(\Omega)}$ does not depend on $\varepsilon$ and $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})=2^{\frac{-2s}{N}}S(N,s)$, by \eqref{eq:trunc_sobolev} and Lemma \ref{lemma_est_L2}, we have \begin{equation*} \begin{split} \|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2&\leq\widetilde{S}(\Sigma_{\mathcal{D}})+O(\varepsilon^{(1-\alpha)(N-2s)}),\\ \|\eta_{\varepsilon}\|_{L^2(\Omega)}^2&=\left\{ \begin{tabular}{lr} $O(\varepsilon^{2s})$ & if $N>4s$, \\ $O(\varepsilon^{2s}\log(1/\varepsilon))$ & if $N=4s$. \\ \end{tabular} \right. \end{split} \end{equation*} Thus, for $N>4s$ we get \begin{equation*} g_{\varepsilon}(t_{\varepsilon})\leq\frac{s}{N}[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}+C_1\varepsilon^{(1-\alpha)(N-2s)}-C_2\varepsilon^{2s}<\frac{s}{N}[\widetilde{S}(\Sigma_{\mathcal{D}})]^{\frac{N}{2s}}=c_{\mathcal{D-N}}^*, \end{equation*} for $\varepsilon>0$ small enough and \begin{equation}\label{alpha_condition} 0<\alpha<\frac{N-4s}{N-2s}. \end{equation} Note that a similar relation between the concentration parameter $\varepsilon>0$ and the cut-off radius $\rho=\varepsilon^{\alpha}$ was obtained in \cite[Lemma 3.2]{Grossi1990}.\newline If $N=4s$, the same conclusion follows. The case $2s<N<4s$ follows using the inequality \eqref{ineq:ab}, which gives, for some $\mu'>0$, \begin{equation}\label{ineq:fin} G_{\lambda}(w)\geq\frac{1}{2_s^*}w^{2_s^*}+\mu'w_0w^{2_s^*-1}. \end{equation} The result then follows arguing in a similar way as above, using \eqref{ineq:fin} together with the second estimate in Lemma \ref{lemma_est_L2}. \end{proof} \begin{remark} Let us recall that the value $c_{\mathcal{D}}^*=\frac{s}{N}[S(N,s)]^{\frac{N}{2s}}$ establishes the level below which a local Palais--Smale condition holds for \eqref{p_lambda} under Dirichlet boundary conditions (cf. \cite[Lemma 3.5]{Barrios2012}). 
Then, since $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})\leq2^{\frac{-2s}{N}}S(N,s)$, we have \begin{equation*} c_{\mathcal{D}-\mathcal{N}}^*\leq\frac{1}{2}c_{\mathcal{D}}^*. \end{equation*} \end{remark} \begin{proof}[Proof of Theorem \ref{sublinear}-(4)] Let us fix $\lambda\in(0,\Lambda)$. If $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})<2^{\frac{-2s}{N}}S(N,s)$, since $\lim\limits_{t\to+\infty}\widetilde{J}_{\lambda}(t\tilde{w})=-\infty$, there exists $M\gg1$ such that $\widetilde{J}_{\lambda}(M\tilde{w})<\widetilde{J}_{\lambda}(0)$. On the other hand, if $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})=2^{\frac{-2s}{N}}S(N,s)$, since $\lim\limits_{t\to+\infty}\widetilde{J}_{\lambda}(t\eta_{\varepsilon})=-\infty$, there exists $M_{\varepsilon}\gg1$ such that $\widetilde{J}_{\lambda}(M_{\varepsilon}\eta_{\varepsilon})<\widetilde{J}_{\lambda}(0)$. By Corollary \ref{cor:min_0_ext}, there exists $\rho>0$ such that, if $\|w\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}=\rho$, then $\widetilde{J}_{\lambda}(w)\geq \widetilde{J}_{\lambda}(0)$. Next, if $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})<2^{\frac{-2s}{N}}S(N,s)$, let us define \begin{equation*} \Gamma=\{\gamma\in\mathcal{C}([0,1],\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})):\ \gamma(0)=0,\gamma(1)=M\tilde{w}\}, \end{equation*} and the minimax value \begin{equation*} c=\inf\limits_{\gamma\in\Gamma}\sup\limits_{0\leq t\leq1}\widetilde{J}_{\lambda}(\gamma(t)). \end{equation*} By the arguments above, $c\geq\widetilde{J}_{\lambda}(0)$. Moreover, by Lemma \ref{lem:below_level0}, we have \begin{equation*} c\leq\sup\limits_{0\leq t\leq1}\widetilde{J}_{\lambda}(tM\tilde{w})=\sup\limits_{t\geq0}\widetilde{J}_{\lambda}(t\tilde{w})<c_{\mathcal{D-N}}^*. 
\end{equation*} On the other hand, if $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})=2^{\frac{-2s}{N}}S(N,s)$, let us define \begin{equation*} \Gamma_{\varepsilon}=\{\gamma\in\mathcal{C}([0,1],\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})):\ \gamma(0)=0,\gamma(1)=M_{\varepsilon}\eta_{\varepsilon}\}, \end{equation*} and the minimax value \begin{equation*} c_{\varepsilon}=\inf\limits_{\gamma\in\Gamma_{\varepsilon}}\sup\limits_{0\leq t\leq1}\widetilde{J}_{\lambda}(\gamma(t)). \end{equation*} By the arguments above, $c_{\varepsilon}\geq\widetilde{J}_{\lambda}(0)$. Moreover, by Lemma \ref{lem:below_level}, we have \begin{equation*} c_{\varepsilon}\leq\sup\limits_{0\leq t\leq1}\widetilde{J}_{\lambda}(tM_{\varepsilon}\eta_{\varepsilon})=\sup\limits_{t\geq0}\widetilde{J}_{\lambda}(t\eta_{\varepsilon})<c_{\mathcal{D-N}}^*, \end{equation*} for $\varepsilon>0$ small enough. Thus, by Lemma \ref{lem:PS} and the Mountain Pass theorem (cf. \cite{Ambrosetti1973}) if $c>\widetilde{J}_{\lambda}(0)$ (resp. $c_{\varepsilon}>\widetilde{J}_{\lambda}(0)$), or the refinement of the MPT (cf. \cite{Ghoussoub1989}) if $c=\widetilde{J}_{\lambda}(0)$ (resp. $c_{\varepsilon}=\widetilde{J}_{\lambda}(0)$), we get the existence of a non-trivial solution to $(\widetilde{P}_{\lambda})$ even under the assumption that $u\equiv0$ is its unique solution, which is of course a contradiction. Hence, $\widetilde{J}_{\lambda}$ admits a critical point $\tilde{w}>0$ so that $\widetilde{I}_{\lambda}$ admits a nontrivial critical point $\tilde{u}=\tilde{w}(x,0)>0$. As a consequence, $\hat{u}=u_0+\tilde{u}$ is a solution, different from $u_0$, to \eqref{p_lambda}. \end{proof} \section{Convex case, $1<q<2_s^*-1$}\label{Section_convex} In this section we address problem \eqref{p_lambda} in the convex setting $q>1$. Since the arguments carried out in the previous section work, with minor modifications, in this convex case, we only indicate the main differences. 
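\begin{remark} For the reader's convenience, let us record the elementary maximization behind the critical level comparisons used in the previous section and again below. For $A>0$, the model function $g_A(t)=\frac{t^2}{2}A-\frac{t^{2_s^*}}{2_s^*}$ satisfies $g_A'(t)=t\left(A-t^{2_s^*-2}\right)$, so its maximum on $[0,+\infty)$ is attained at $t_0=A^{\frac{1}{2_s^*-2}}$ and, since $\frac{2_s^*}{2_s^*-2}=\frac{N}{2s}$ and $\frac{1}{2}-\frac{1}{2_s^*}=\frac{s}{N}$,
\begin{equation*}
\max_{t\geq0}g_A(t)=g_A(t_0)=\left(\frac{1}{2}-\frac{1}{2_s^*}\right)A^{\frac{2_s^*}{2_s^*-2}}=\frac{s}{N}A^{\frac{N}{2s}}.
\end{equation*}
Taking $A=\widetilde{S}(\Sigma_{\mathcal{D}})$, respectively $A=\|\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}^2$, yields the bounds by $c^*_{\mathcal{D-N}}$ used in Lemmas \ref{lem:below_level0} and \ref{lem:below_level}. \end{remark}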
First, we have that the functional $J_{\lambda}$ has the appropriate geometry. \begin{proposition}\label{prop:mpt_geom} Let $\lambda>0$ and $1<q<2_s^*-1$. Then, the functional $J_{\lambda}$ has the Mountain Pass geometry. That is, there exist $\rho>0$ and $\beta>0$ such that \begin{enumerate} \item $J_{\lambda}(0)=0$, \item for all $w\in\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ with $\|w\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}=\rho$ we have $J_{\lambda}(w)\geq\beta$, \item there exists a positive function $h\in \mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ such that $\|h\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}>\rho$ and $J_{\lambda}(h)<\beta$. \end{enumerate} \end{proposition} \begin{proof} The proof follows as in \cite[Proposition 3.1]{Barrios2015}, so we omit the details. \end{proof} Since the proof of Lemma \ref{lem:PS} can be adapted to this convex case $q>1$, the main point in order to prove Theorem \ref{superlinear} is then to show that we can find a local PS$_c$ sequence with energy level below the critical level $c_{\mathcal{D-N}}^*$. This step follows the same scheme as in the concave case. If $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})<2^{\frac{-2s}{N}}S(N,s)$, we use the extremal functions of the constant $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})$. Otherwise, if $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})=2^{\frac{-2s}{N}}S(N,s)$, we proceed as in Lemma \ref{lem:below_level} using now the estimate (cf. \cite[Lemma 3.4]{Barrios2015}) \begin{equation}\label{estim} \|\eta_{\varepsilon}\|_{L^{q+1}(\Omega)}^{q+1}\geq C\varepsilon^{N-\left(\frac{N-2s}{2}\right)(q+1)},\quad\text{for }N>2s\left(1+\frac{1}{q}\right), \end{equation} with $\eta_{\varepsilon}$ as in \eqref{eq:eta}. Note that in this case, there is no restriction on the size of the parameter $\lambda>0$. We conclude the proof by using the Mountain Pass Theorem. 
\begin{proof}[Proof of Theorem \ref{superlinear}] Assume $\displaystyle \widetilde{S}(\Sigma_{\mathcal{D}})=2^{\frac{-2s}{N}}S(N,s)$ (the other case is similar). Let us define \begin{equation*} \Gamma_{\varepsilon}=\{\gamma\in\mathcal{C}([0,1],\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})):\ \gamma(0)=0,\gamma(1)=M_{\varepsilon}\eta_{\varepsilon}\}, \end{equation*} for some $M_{\varepsilon}\gg1$ such that $J_{\lambda}(M_{\varepsilon}\eta_{\varepsilon})<0$ with $\eta_{\varepsilon}$ defined as in \eqref{eq:eta}. Note that, for any $\gamma\in\Gamma_{\varepsilon}$ the function $t\mapsto\|\gamma(t)\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}$ is continuous in $[0,1]$. Then, since $\|\gamma(0)\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}=0$ and $\|\gamma(1)\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}=\|M_{\varepsilon}\eta_{\varepsilon}\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}>\rho$ for $M_{\varepsilon}$ large enough, there exists $t_0\in(0,1)$ such that $\|\gamma(t_0)\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}=\rho$ for $\rho$ given in Proposition \ref{prop:mpt_geom}. As a consequence, \begin{equation*} \sup\limits_{0\leq t\leq1}J_{\lambda}(\gamma(t))\geq J_{\lambda}(\gamma(t_0))\geq\inf\limits_{\|g\|_{\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})}=\rho}J_{\lambda}(g)\geq\beta>0 \end{equation*} with $\beta>0$ given in Proposition \ref{prop:mpt_geom}. Thus, \begin{equation*} c_{\varepsilon}=\inf\limits_{\gamma\in\Gamma_{\varepsilon}}\sup\limits_{0\leq t\leq1}J_{\lambda}(\gamma(t))>0. \end{equation*} By the Mountain Pass Theorem (cf. \cite{Ambrosetti1973}) we conclude that the functional $J_{\lambda}$ has a critical point $w\in\mathcal{X}_{\Sigma_{\mathcal{D}}}^s(\mathscr{C}_{\Omega})$ provided $N>2s\left(1+\frac{1}{q}\right)$. Moreover, since $J_{\lambda}(w)=c_{\varepsilon}\geq\beta>0$ and $J_{\lambda}(0)=0$, we have $w\neq0$. 
Therefore, $u=w(x,0)$ is a nontrivial solution to \eqref{p_lambda} for $q>1$ and $\lambda>0$. \end{proof}
\section{Introduction} \label{sec:Introduction} The aim of this paper is to study Betti structures in solutions of certain univariate hypergeometric $\mathcal{D}$-modules. These differential systems have a long history, starting from work of Euler and Gau\ss. In modern language, they are the most basic examples of rigid $\mathcal{D}$-modules. Recent applications include quantum differential equations of toric manifolds (\cite{Giv7}, \cite{Ir2,Ir-Periods} or \cite{ReiSe, ReiSe2}) and an in-depth study of the Hodge theoretic properties of such systems (\cite{Sa15, SevCast, SevCastReich, Sa16}). From the Hodge theoretic point of view, but also for many other applications, it is important to find subfields of the complex numbers over which the solutions of these equations can be defined. This also fits to the more general program to understand Betti structures in holonomic $\mathcal{D}$-modules, see, e.g.\ \cite{Mo7}. A first example of a result in this direction is a theorem of Fedorov (\cite[Theorem 2]{Fedorov}), which was conjectured by Corti and Golyshev (\cite{CortiGolyshev}) and which gives arithmetic conditions on the exponents for a \emph{regular} (i.e., non-confluent) hypergeometric system to underly a \emph{real} variation of polarized Hodge structures. The rigidity property of these systems is used in an essential way in his argument. In the present paper, we take up the question of the existence of Betti structures in the solution spaces of univariate hypergeometric differential equations. We replace Fedorov's proof by a geometric argument, linking these $\mathcal{D}$-modules to Gau\ss--Manin systems of certain Laurent polynomials. Our approach is distinct from Fedorov's in at least two ways: \begin{enumerate} \item We consider more generally not necessarily regular hypergeometric equations (these irregular ones are usually called \emph{confluent}) using the theory of enhanced solutions of D'Agnolo--Kashiwara (\cite{DK16}). 
\item We discuss a more general setup, where we consider any finite Galois extension $L/K$ such that the solutions of the given hypergeometric systems are a priori defined over $L$ (e.g.\ if all exponents of the system are rational, then $L$ is a cyclotomic field), and we establish and prove a group theoretic criterion for them to be defined over $K$. \end{enumerate} As an application, we get a general criterion for the enhanced solutions to be defined over the real numbers (similar in shape to the one from \cite{Fedorov}), and, if all exponents are rational, we determine when the solutions are defined over ${\mathds Q}$. One can deduce in a rather straightforward way similar statements for the perverse sheaf of (classical) solutions, and we obtain in particular, in the non-confluent case, a new proof of Fedorov's result. The paper is organized as follows: We first discuss in Section \ref{sec:BettiConj} the following question: Given a Galois extension $L/K$ and an enhanced ind-sheaf defined over $L$, what are criteria to ensure that it comes from (i.e.\ is obtained by extension of scalars from) an object defined over $K$? This is applied in Section \ref{sec:Laurent}, where we introduce hypergeometric modules and prove a geometric realization of them, relying on earlier work of Schulze--Walther (\cite{SchulWalth2}) and Reichelt (\cite{Reich2}). Our main result is then Theorem \ref{theo:MainTheo2}, which gives the group theoretic criterion alluded to above. In order to illustrate this result, we formulate here a shortened version, which covers the cases we are mostly interested in. \begin{theorem}\label{theo:MainTheoIntro} Let numbers $\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_m \in [0,1)$ be given, where $n\geq m$ and where we assume that $\alpha_i\neq \beta_j$ for all $i\in\{1,\ldots,n\}$ and all $j\in \{1,\ldots,m\}$. 
Consider the differential operator in one variable $$ P:=\prod_{i=1}^{n}\left(q\partial_q-\alpha_i\right)-q\cdot \prod_{j=1}^{m}\left(q\partial_q-\beta_j\right) $$ and let $\mathcal{H}(\alpha;\beta):=\mathcal{D}_{\dG_{m,q}}/\mathcal{D}_{\dG_{m,q}} \cdot P \in \mathrm{Mod}_\mathrm{hol}(\mathcal{D}_{\dG_{m,q}})$ be the corresponding irreducible hypergeometric module on the one-dimensional torus $\dG_{m,q}$. Suppose that $L\subset {\mathds C}$ is a field such that $e^{2\pi i \alpha_i}$, $e^{2\pi i \beta_j}\in L$ for all $i,j$. Let $K\subset L$ be such that $L/K$ is a finite Galois extension. Then if $\textup{Gal}(L/K)$ induces actions on $\{e^{2\pi i\alpha_1},\ldots,e^{2\pi i\alpha_n}\}$ and on $\{e^{2\pi i\beta_1},\ldots,e^{2\pi i\beta_m}\}$, the enhanced ind-sheaf $\SolE{\dG_{m,q}}(\mathcal{H}(\alpha;\beta))$ (see section \ref{sectionRH} below) is defined over $K$, that is, comes from an enhanced ind-sheaf over $K$ by extension of scalars.\end{theorem} The proof of this theorem will be given right after the proof of Theorem \ref{theo:MainTheo2} on page \pageref{page:proofMainTheoIntro}. As an example (see Theorem \ref{theo:RealStruct}) the above criterion applies if those of the numbers $\alpha_i$ and $\beta_j$ which are non-zero are symmetric around $\frac{1}{2}$, in which case (taking $L={\mathds C}$ and $K={\mathds R}$) we obtain that the enhanced solutions of the system $\mathcal{H}(\alpha;\beta)$ are defined over ${\mathds R}$. A similar reasoning leads to a criterion (Theorem \ref{theo:RatStruct}) showing the existence of rational structures. Finally, in Section \ref{sec:Stokes}, we draw some consequences for Stokes matrices associated to the irregular singular point of confluent hypergeometric equations.\vspace{0.3cm} \textbf{Acknowledgements.} We are grateful to Takuro Mochizuki for some explanations about his work on Betti structures and enhanced ind-sheaves, and for pointing out some ideas on the proofs in Section~\ref{sec:Stokes}. 
We also thank Andrea D'Agnolo for useful correspondence during the preparation of this work. \section{Betti structures and enhanced ind-sheaves} \label{sec:BettiConj} The functors of complex conjugation on the category of complex vector spaces and complexification of real vector spaces are well-known. In this chapter, we develop a theory of Galois conjugation and extension of scalars for more general field extensions $L/K$ in the context of sheaves, ind-sheaves and enhanced ind-sheaves. Since the latter are the topological counterpart of holonomic $\mathcal{D}$-modules via the irregular Riemann--Hilbert correspondence, this will produce a framework for studying the question of when solutions of a differential system (a priori defined over ${\mathds C}$) admit a structure over subfields of ${\mathds C}$. In particular, we will prove that in certain cases, the fact that an object over $L$ is isomorphic to all its Galois conjugates (in a compatible way) implies that this object already comes from an object defined over $K$. \subsection{Enhanced ind-sheaves} In \cite{DK16}, the authors introduced the category of enhanced ind-sheaves, which we will briefly recall here. We assume all topological spaces to be \emph{good} in this section (i.e.\ Hausdorff, locally compact, second countable and of finite flabby dimension). Let $k$ be an arbitrary field, and let $X$ be a topological space. We denote by $\Mod{k}{X}$ the category of sheaves of $k$-vector spaces and by $\Db{k}{X}$ its bounded derived category with the six Grothendieck operations $\mathrm{R}\mathcal{H} om_{k_X}$, $\otimes$, $\mathrm{R} f_*$, $f^{-1}$, $\mathrm{R} f_!$ and $f^!$. The category $\Ind{k}{X}$ of \emph{ind-sheaves} on $X$ is the category of ind-objects for $\Modc{k}{X}$, the category of compactly supported sheaves of $k$-vector spaces on $X$ (see \cite{KS01} and \cite{KS06} for the theory of ind-sheaves and -objects). 
In other words, an object $F\in\Ind{k}{X}$ is of the form $$F=\indlim{i\in I} \mathcal{F}_i,$$ where the $\mathcal{F}_i\in\Modc{k}{X}$ form a small filtrant inductive system and $\indlim{}$ denotes\vspace{-0.3cm} the inductive limit in the category of functors $\Modc{k}{X}^\mathrm{op}\to \mathrm{Mod}({\mathds Z})$ (i.e.\ one considers the sheaves $\mathcal{F}_i$ after Yoneda embedding). There is a fully faithful and exact embedding from the category of (not necessarily compactly supported) sheaves into the category of ind-sheaves $\iota_X\colon \Mod{k}{X}\hookrightarrow \Ind{k}{X}$, which has an exact left adjoint denoted by $\alpha_X$. Moreover, one has a formalism of the six Grothendieck operations on $\Ind{k}{X}$, where one denotes the proper direct image by $\mathrm{R} f_{!!}$ to distinguish it from the operation $\mathrm{R} f_!$ for sheaves, since it is not compatible with $\iota_X$. The derived category of $\Ind{k}{X}$ is denoted by $\DbI{k}{X}$. Now, let $\mathcal{X}=(X,\widehat{X})$ be a bordered space, i.e.\ a pair of topological spaces such that $X\subseteq \widehat{X}$ is an open subset. One defines the category of enhanced ind-sheaves on $\mathcal{X}$ by two successive quotients (we refer to \cite{DK16} and \cite{EnhPerv} for details on this construction): Denote by $\mathsf{P}=\mathrm{P}^1({\mathds R})$ the real projective line and define the bordered space ${\mathds R}_\infty=({\mathds R},\mathsf{P})$. Then $$\DbI{k}{\mathcal{X}\times{\mathds R}_\infty}\vcentcolon= \DbI{k}{\widehat{X}\times\mathsf{P}}/\DbI{k}{(\widehat{X}\times\mathsf{P})\setminus (X\times{\mathds R})},\qquad \DbI{k}{\mathcal{X}}\vcentcolon= \DbI{k}{\widehat{X}}/\DbI{k}{\widehat{X}\setminus X}$$ and $$\EbI{k}{\mathcal{X}}\vcentcolon= \DbI{k}{\mathcal{X}\times{\mathds R}_\infty}/\pi^{-1}\DbI{k}{\mathcal{X}},$$ where $\pi\colon \mathcal{X}\times{\mathds R}_\infty\to \mathcal{X}$ is the projection. 
The total quotient functor $Q\colon \DbI{k}{\widehat{X}\times\mathsf{P}}\to \EbI{k}{\mathcal{X}}$ has a fully faithful left adjoint, which we will denote by $\mathcal{L}$ (it is denoted by $\mathrm{R} j_{\mathcal{X}\times{\mathds R}_\infty !!}\mathrm{L}^\mathrm{E}$ in \cite{EnhPerv}). The category of enhanced ind-sheaves still comes with the six-functor formalism and these operations are denoted by $\mathrm{R}\cI hom^+$, $\mathbin{\overset{+}{\otimes}}$, $\mathrm{E} f_*$, $\mathrm{E} f^{-1}$, $\mathrm{E} f_{!!}$ and $\mathrm{E} f^!$ (for a morphism $f$ of bordered spaces). In addition, for an object $\mathcal{F}\in \Db{k}{X}$, one has the operation \begin{equation}\label{eq:pitens} \EbI{k}{\mathcal{X}}\to \EbI{k}{\mathcal{X}}, \quad H\mapsto \pi^{-1}\mathcal{F} \otimes H \end{equation} induced by the tensor product on $\DbI{k}{\widehat{X}\times\mathsf{P}}$. We will in particular abbreviate $H_V\vcentcolon=\pi^{-1}k_V\otimes H$ for a subset $V\subset X$, and we recall that $H_V\cong \mathrm{E}i_{V_\infty!!}\mathrm{E}i_{V_\infty}^{-1}H$, where $i_V\colon (V,\overline{V})\to \mathcal{X}$ is the embedding and $\overline{V}$ denotes the closure of $V$ in $\widehat{X}$ (see \cite[Lemma 2.7.6]{EnhPerv}). We will often encounter the objects \begin{equation}\label{eq:DefExp} k^\mathrm{E}_\mathcal{X}\vcentcolon= \indlim{a\to\infty} k_{\{t\geq a\}}\in\EbI{k}{\mathcal{X}}\qquad \textnormal{and} \qquad {\mathds E}^{\phi}_k\vcentcolon= k^\mathrm{E}_\mathcal{X}\mathbin{\overset{+}{\otimes}} k_{\{t+\phi\geq 0\}}\cong \indlim{a\to\infty}k_{\{t+\phi \geq a\}}\in\EbI{k}{\mathcal{X}}, \end{equation} where $\{t\geq a\}\vcentcolon= \{(x,t)\in\widehat{X}\times\mathsf{P};x\in X, t\in {\mathds R}, t\geq a\}$, and for a continuous function $\phi\colon U\to {\mathds R}$ on an open subset $U\subseteq X$ we set $\{t+\phi\geq a\}\vcentcolon= \{(x,t)\in\widehat{X}\times\mathsf{P}; x\in U, t\in {\mathds R}, t+\phi(x)\geq a\}$. 
Moreover, for each $H\in\EbI{k}{\mathcal{X}}$, we set \begin{equation}\label{eq:Sh} \mathsf{sh}_{\mathcal{X}}(H)= \alpha_{X} j^{-1} \mathrm{R}\overline{\pi}_* \mathrm{R}\cI hom (k_{\{t\geq 0\}}\oplus k_{\{t\leq 0\}},\mathcal{L}H)\in \Db{k}{X}, \end{equation} where $j\colon X\hookrightarrow \widehat{X}$ is the embedding and $\overline{\pi}\colon \widehat{X}\times\mathsf{P}\to \widehat{X}$ is the projection. One calls $\mathsf{sh}_\mathcal{X}$ the \emph{sheafification functor} for enhanced ind-sheaves on the bordered space (see \cite{DKsheafification} for a detailed study). If $\mathcal{X}$ is a real analytic bordered space (i.e.\ $\widehat{X}$ is a real analytic manifold and $X\subset \widehat{X}$ is a subanalytic subset), one has the notion of ${\mathds R}$-constructibility (resp.\ ind-${\mathds R}$-constructibility) for sheaves (resp.\ ind-sheaves) on $\widehat{X}$ (see \cite{KS90}, \cite{KS01}). The full subcategory $\EbRcI{k}{\mathcal{X}}\subset\EbI{k}{\mathcal{X}}$ consisting of ${\mathds R}$-constructible enhanced ind-sheaves was introduced in \cite{DK16, EnhPerv}. The \emph{standard t-structure} (cf.\ \cite[§3.4]{DK16}, \cite{EnhPerv}) on $\EbI{k}{\mathcal{X}}$ is the one induced by the standard t-structure on the derived category $\DbI{k}{\widehat{X}\times\mathsf{P}}$, i.e. \begin{align*} \mathrm{E}^{\leq n}(\mathrm{I}k_\mathcal{X})=\{ H\in\EbI{k}{\mathcal{X}}; \mathcal{L}(H)\in \mathrm{D}^{\leq n}(\mathrm{I}k_{\widehat{X}\times\mathsf{P}}) \}\\ \mathrm{E}^{\geq n}(\mathrm{I}k_\mathcal{X})=\{ H\in\EbI{k}{\mathcal{X}}; \mathcal{L}(H)\in \mathrm{D}^{\geq n}(\mathrm{I}k_{\widehat{X}\times\mathsf{P}}) \} \end{align*} for $n\in{\mathds Z}$. Its heart $\mathrm{E}^0(\mathrm{I}k_\mathcal{X})$ therefore consists of those objects $H\in\EbI{k}{\mathcal{X}}$ such that $\mathcal{L}(H)$ is concentrated in degree zero. The associated cohomology functors are denoted by $\mathcal{H}^n$. 
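To keep a concrete case in mind, we sketch a standard computation (under the identification $\mathcal{L}Q(F)\cong (k_{\{t\geq 0\}}\oplus k_{\{t\leq 0\}})\mathbin{\overset{+}{\otimes}} F$ from \cite{DK16}; this example is illustrative and not taken verbatim from the references) showing that the exponential objects of \eqref{eq:DefExp} lie in the heart of the standard t-structure.

```latex
% Sketch: both convolutions can be computed fiberwise over x in U.
% The relevant fibers of the addition map are a compact interval,
% resp. a closed half-line; H^*_c of a compact interval is k in
% degree 0, while H^*_c of a closed half-line vanishes. Hence
\[
  k_{\{t+\phi\geq a\}}\mathbin{\overset{+}{\otimes}} k_{\{t\geq 0\}}
    \;\cong\; k_{\{t+\phi\geq a\}},
  \qquad
  k_{\{t+\phi\geq a\}}\mathbin{\overset{+}{\otimes}} k_{\{t\leq 0\}}
    \;\cong\; 0,
\]
% so (ignoring boundary subtleties on \widehat{X}\times\mathsf{P})
% \mathcal{L}({\mathds E}^\phi_k) is a filtrant inductive limit of
% sheaves concentrated in degree zero, i.e.
% {\mathds E}^\phi_k lies in the heart E^0(Ik_X).
```

This is consistent with the fact, recalled later, that enhanced solutions of integrable connections are concentrated in degree zero.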
\subsection{Galois descent for enhanced ind-sheaves} In this section, we generalize two constructions, which are well-known for vector spaces, to the category of enhanced ind-sheaves: For a given finite Galois extension $L/K$, we consider, firstly, conjugation of an enhanced ind-sheaf over $L$ with respect to elements of the Galois group and, secondly, extension of scalars on an enhanced ind-sheaf over $K$. We will then show that the existence of suitable isomorphisms between an object over $L$ and its conjugates implies that the object comes from an object over the subfield $K$ by extension of scalars. Let $L/ K$ be a Galois extension and $G$ its Galois group. For an element $g\in G$ and an $L$-vector space $V$, one can define the \emph{$g$-conjugate of $V$}, denoted by $\overline{V}^g$, as follows: As a $K$-vector space, $\overline{V}^g=V$, and the action of $L$ is given by $l\cdot v\vcentcolon= g(l)v$. One easily checks that this construction (by applying it to sections over any open set) defines $g$-conjugation functors for sheaves of $L$-vector spaces on a topological space $M$: \begin{equation}\label{eq:ConjSheaves} \mathrm{Mod}(L_M)\to \mathrm{Mod}(L_M), \mathcal{F}\mapsto \overline{\mathcal{F}}^g \end{equation} (The restriction morphisms of $\overline{\mathcal{F}}^g$ are the same as those of $\mathcal{F}$.) This functor is exact for any $g\in G$ and hence induces a functor on the derived category of sheaves $$\Db{L}{M}\to\Db{L}{M}, \mathcal{F}^\bullet \mapsto \overline{\mathcal{F}^\bullet}^g.$$ The following lemma shows that conjugation is a very ``tame'' operation. \begin{lemma} \label{lemmaConjCompatShv} Let $f\colon X\to Y$ be a morphism of topological spaces and let $\mathcal{F},\mathcal{F}_1,\mathcal{F}_2\in\Db{L}{X}$, $\mathcal{G}\in\Db{L}{Y}$. 
Then we have isomorphisms for any $g,h\in G$: \begin{itemize} \item[(i)] $\overline{\mathrm{R}f_*\mathcal{F}}^g\cong \mathrm{R}f_*\overline{\mathcal{F}}^g$ and $\overline{\mathrm{R}f_!\mathcal{F}}^g\cong \mathrm{R}f_!\overline{\mathcal{F}}^g$, \item[(ii)] $\overline{f^{-1}\mathcal{G}}^g\cong f^{-1}\overline{\mathcal{G}}^g$ and $\overline{f^!\mathcal{G}}^g\cong f^!\overline{\mathcal{G}}^g$, \item[(iii)] $\overline{\mathcal{F}_1\otimes\mathcal{F}_2}^g\cong \overline{\mathcal{F}_1}^g\otimes\overline{\mathcal{F}_2}^g$ and $\mathrm{R}\mathcal{H} om_{L_X}(\overline{\mathcal{F}_1}^g,\overline{\mathcal{F}_2}^g)\cong \overline{\mathrm{R}\mathcal{H} om_{L_X}(\mathcal{F}_1,\mathcal{F}_2)}^g$, \item[(iv)] $\overline{\overline{\mathcal{F}}^g}^h\cong \overline{\mathcal{F}}^{gh}$. \end{itemize} \end{lemma} \begin{proof} \begin{description} \item[\textnormal{(i)}] By definition, for the underived direct image functor we have $(f_*\mathcal{F})(U)=\mathcal{F}(f^{-1}(U))$ for any open subset $U\subseteq Y$. Since conjugation for sheaves is defined on sections, this yields a natural isomorphism $\overline{f_*\mathcal{F}}^g\cong f_*\overline{\mathcal{F}}^g$. The functor $\overline{(\bullet)}^g$ is exact, and hence one obtains the first statement by derivation. The second isomorphism follows similarly: We note that $f_!\mathcal{F}$ is a subsheaf of $f_*\mathcal{F}$ and that the above isomorphism induces an isomorphism $\overline{f_!\mathcal{F}}^g\cong f_!\overline{\mathcal{F}}^g$, and conclude again by deriving the composition of functors. \item[\textnormal{(ii)}] It is not difficult to see that $$\mathrm{Hom}_{\Db{L}{X}}\big(\overline{\mathcal{F}_1}^g,\mathcal{F}_2\big) \cong \mathrm{Hom}_{\Db{L}{X}}\big(\mathcal{F}_1,\overline{\mathcal{F}_2}^{g^{-1}}\big)$$ for $\mathcal{F}_1,\mathcal{F}_2\in\Db{L}{X}$. Using this, the statements in (ii) follow from (i) by adjunction.
\item[\textnormal{(iii) and (iv)}] follow directly from the corresponding statements for vector spaces (for the tensor product, note that $g$-conjugation is compatible with sheafification, or derive it by adjunction from the statement about $\mathrm{R}\mathcal{H} om$). \end{description} \end{proof} By the general theory of ind-sheaves (cf.\ \cite[p.\ 7]{KS01}), $g$-conjugation further extends to a functor on ind-sheaves $$\Ind{L}{M}\to\Ind{L}{M}, F=\indlim{i} \mathcal{F}_i \mapsto \overline{F}^g=\indlim{i}\overline{\mathcal{F}_i}^g.$$ Since this functor is still exact (cf.\ \cite[p.\ 11]{KS01}), we get a $g$-conjugation on the derived category of ind-sheaves $$\DbIL{M}\to\DbIL{M}, F^\bullet \mapsto \overline{F^\bullet}^g.$$ A corresponding statement like Lemma \ref{lemmaConjCompatShv} holds for ind-sheaves. Now, the category $\EbIL{\mathcal{X}}$ of enhanced ind-sheaves on a bordered space $\mathcal{X}=(X,\widehat{X})$ is a quotient category of $\DbIL{\widehat{X}\times\mathsf{P}}$, and it can be checked that the above conjugation functor for ind-sheaves on $M=\widehat{X}\times\mathsf{P}$ induces a well-defined functor $$\EbIL{\mathcal{X}}\to\EbIL{\mathcal{X}}, K\mapsto \overline{K}^g.$$ The following lemma is not difficult to prove, and it shows that conjugation functors are compatible with many of the standard operations. \begin{lemma}\label{lemmaConjCompat} Let $\mathcal{X}=(X,\widehat{X})$ and $\mathcal{Y}=(Y,\widehat{Y})$ be bordered spaces and $f\colon \mathcal{X}\to \mathcal{Y}$ a morphism. Let $H,H_1,H_2\in \EbIL{\mathcal{X}}$, $\mathcal{F}\in\Db{L}{X}$ and $K\in \EbIL{\mathcal{Y}}$. 
Then for any $g,h\in G$ we have the following isomorphisms: \begin{itemize} \item[(i)] $\overline{\mathrm{E}f_*H}^g\cong \mathrm{E}f_*\overline{H}^g$ and $\overline{\mathrm{E}f_{!!}H}^g\cong \mathrm{E}f_{!!}\overline{H}^g$, \item[(ii)] $\overline{\mathrm{E}f^{-1}K}^g\cong \mathrm{E}f^{-1}\overline{K}^g$ and $\overline{\mathrm{E}f^!K}^g\cong \mathrm{E}f^!\overline{K}^g$, \item[(iii)] $\overline{H_1\mathbin{\overset{+}{\otimes}} H_2}^g\cong \overline{H_1}^g\mathbin{\overset{+}{\otimes}} \overline{H_2}^g$ and $\overline{\pi^{-1}\mathcal{F}\otimes H}^g\cong \pi^{-1}\overline{\mathcal{F}}^g\otimes \overline{H}^g$, \item[(iv)] $\overline{\overline{H}^g}^h\cong \overline{H}^{gh}$, \item[(v)] $\overline{\mathcal{H}^n H}^g\cong \mathcal{H}^n\overline{H}^g$ for any $n\in {\mathds Z}$. \end{itemize} \end{lemma} \begin{proof} (i)--(iv) Since the operations on enhanced ind-sheaves are induced by operations on ind-sheaves, these statements follow from Lemma \ref{lemmaConjCompatShv} and the corresponding statements for ind-sheaves. (v) This is clear since conjugation is exact. \end{proof} Conjugation associates to an object over $L$ another object over $L$. On the other hand, given an arbitrary field extension $L/K$, one can extend scalars, starting from objects defined over $K$: Classically, if $V$ is a $K$-vector space, then $L\otimes_K V$ is a vector space over $L$. This construction naturally extends to sheaves: Let $\mathcal{F}\in \Mod{K}{X}$ and denote by $L_X$ (resp.\ $K_X$) the constant sheaf with stalk $L$ (resp.\ $K$); then $L_X\otimes_{K_X}\mathcal{F}\in \Mod{L}{X}$. Since tensor products over fields are exact, we obtain a functor \begin{equation}\label{eq:extensionSh} \Db{K}{X}\to\Db{L}{X}, \quad \mathcal{F}\mapsto L_X\otimes_{K_X}\mathcal{F}.
\end{equation} Similarly, if $F=\indlim{i\in I}\mathcal{F}_i\in\Ind{K}{X}$ for $\mathcal{F}_i\in\Modc{K}{X}$, then $L_X\otimes_{K_X}F\cong\indlim{i\in I}(L_X\otimes_{K_X}\mathcal{F}_i)\in \Ind{L}{X}$. This again extends to a functor between derived categories and hence, using notation as in \eqref{eq:pitens}, induces a functor \begin{equation}\label{eq:extension} \EbI{K}{\mathcal{X}}\to\EbI{L}{\mathcal{X}}, \quad H\mapsto \pi^{-1}L_X\otimes_{\pi^{-1}K_X} H \end{equation} for any bordered space $\mathcal{X}=(X,\widehat{X})$. (We emphasize here the field over which we take the tensor product.) \begin{definition}\label{def:KStructure} We will say that an object $H\in\EbI{L}{\mathcal{X}}$ has a $K$-structure if it is contained in the essential image of the functor \eqref{eq:extension}.\\ Similarly, we say that an object $\mathcal{F}\in\Db{L}{X}$ has a $K$-structure if it is contained in the essential image of the functor \eqref{eq:extensionSh}. \end{definition} Extension of scalars behaves nicely under certain functors. \begin{lemma}\label{lemmaExtensionCompatibility} Let $L/K$ be a field extension and $f\colon \mathcal{X}=(X,\widehat{X})\to \mathcal{Y}=(Y,\widehat{Y})$ a morphism of bordered spaces. Then we have isomorphisms \begin{itemize} \item[(i)] $\mathrm{E} f_{!!}(\pi^{-1}L_X\otimes_{\pi^{-1}K_X}F)\cong \pi^{-1}L_Y\otimes_{\pi^{-1}K_Y}\mathrm{E} f_{!!}F $ for any $F\in\EbI{K}{\mathcal{X}}$, \item[(ii)] $\mathrm{E} f^{-1}(\pi^{-1}L_Y\otimes_{\pi^{-1}K_Y}G)\cong \pi^{-1}L_X\otimes_{\pi^{-1}K_X}\mathrm{E} f^{-1}G $ for any $G\in\EbI{K}{\mathcal{Y}}$, \item[(iii)] $(\pi^{-1}L_X\otimes_{\pi^{-1}K_X}F_1)\mathbin{\overset{+}{\otimes}} (\pi^{-1}L_X\otimes_{\pi^{-1}K_X}F_2)\cong \pi^{-1}L_X\otimes_{\pi^{-1}K_X} (F_1\mathbin{\overset{+}{\otimes}} F_2)$ for $F_1,F_2\in \EbI{K}{\mathcal{X}}$, \item[(iv)] $\mathcal{H}^n(\pi^{-1}L_X\otimes_{\pi^{-1}K_X}F)\cong \pi^{-1}L_X\otimes_{\pi^{-1}K_X}\mathcal{H}^n F$ for any $F\in\EbI{K}{\mathcal{X}}$. 
\end{itemize} \end{lemma} \begin{proof} \begin{itemize} \item[(i)] Since all the operations involved are induced by operations on $\DbI{K}{\mathcal{X}\times{\mathds R}_\infty}$, it is enough to prove an isomorphism $$\mathrm{R} \tilde{f}_{!!}(\pi^{-1}L_X\otimes_{\pi^{-1}K_X}F)\cong \pi^{-1}L_Y\otimes_{\pi^{-1}K_Y}\mathrm{R} \tilde{f}_{!!}F $$ for any $F\in \DbI{K}{\mathcal{X}\times{\mathds R}_\infty}$, where $\tilde{f}=f\times\id_{{\mathds R}_\infty}$. Noting that $\pi^{-1}L_X\cong \tilde{f}^{-1}\pi^{-1}L_Y$, this follows from the projection formula for ind-sheaves on bordered spaces (see the first isomorphism in \cite[Proposition 3.3.13]{DK16}). \item[(ii)] Similarly, here we need to prove an isomorphism $$\mathrm{R} \tilde{f}^{-1}(\pi^{-1}L_Y\otimes_{\pi^{-1}K_Y}G)\cong \pi^{-1}L_X\otimes_{\pi^{-1}K_X}\mathrm{R} \tilde{f}^{-1}G $$ for any $G\in\DbI{K}{\mathcal{Y}\times{\mathds R}_\infty}$, and this follows as above, using the second isomorphism in \cite[Proposition 3.3.13]{DK16}. \item[(iii)] The convolution product is defined on $\DbI{K}{\mathcal{X}\times{\mathds R}_\infty}$ by $$F_1\mathbin{\overset{+}{\otimes}} F_2\vcentcolon= \mathrm{R}\mu_{!!}(q_1^{-1}F_1\otimes q_2^{-1}F_2),$$ where the maps $q_1,q_2,\mu\colon \mathcal{X}\times{\mathds R}_\infty^2\to\mathcal{X}\times{\mathds R}_\infty$ are given by the projections and by addition of the real variables, respectively. Hence, as in (i) and (ii), one concludes that this operation is compatible with extension of scalars. \item[(iv)] This follows directly from the exactness of tensor products over fields (see \cite[Lemma 2.7.5]{EnhPerv}). \end{itemize} \end{proof} If we restrict our focus to ${\mathds R}$-constructible enhanced ind-sheaves, we can also prove compatibility of $K$-structures with the sheafification functor. As a preparation, we need two lemmas. The following statement was suggested to us by Takuro Mochizuki, to whom we are very grateful for this. 
\begin{lemma}\label{lemma:MocRc} Let $X$ be a real analytic manifold and let $\mathcal{F}_1,\mathcal{F}_2\in\mathrm{Mod}_{\dR\textnormal{-c}}(K_X)$ be ${\mathds R}$-constructible sheaves with compact support. Then there is a natural isomorphism of $L$-vector spaces $$L\otimes_K \mathrm{Hom}_{K_X}(\mathcal{F}_1,\mathcal{F}_2) \to \mathrm{Hom}_{L_X}(L_X\otimes_{K_X}\mathcal{F}_1,L_X\otimes_{K_X}\mathcal{F}_2).$$ \end{lemma} \begin{proof} We only sketch the idea of the proof here: ${\mathds R}$-constructible sheaves on real analytic manifolds can be considered as constructible sheaves on simplicial complexes (see \cite[Chap.\ VIII]{KS90}). By compactness of the support, finitely many strata are sufficient to describe the sheaves. One starts by proving the statement for constant sheaves on single strata and concludes by induction on the dimension of the strata. \end{proof} Thanks to the lemma above, it is easy to prove the following. \begin{lemma}\label{lemmaIhomExtensions} Let $X$ be a real analytic manifold, and let $\mathcal{F}\in\mathrm{D}^\mathrm{b}_{\dR\textnormal{-c}}(L_X)$ and $G\in\DbIrc{L}{X}$ be such that $\mathcal{F}\cong L_X\otimes_{K_X}\mathcal{F}_K$ for some $\mathcal{F}_K\in\mathrm{D}^\mathrm{b}_{\dR\textnormal{-c}}(K_X)$ and $G\cong L_X\otimes_{K_X}G_K$ for some $G_K\in\DbIrc{K}{X}$. Then we have $$\mathrm{R}\mathcal{I} hom(\mathcal{F},G)\cong L_X\otimes_{K_X} \mathrm{R}\mathcal{I} hom(\mathcal{F}_K,G_K).$$ \end{lemma} \begin{proof} Let us first assume that $\mathcal{F}_K$ and $G_K$ are concentrated in degree $0$. We can write $G_K=\indlim{i} \mathcal{G}_i$ for some $\mathcal{G}_i\in\mathrm{Mod}_{\dR\textnormal{-c}}^c(K_X)$ (by definition of $\mathrm{I}_{\dR\textnormal{-c}}(K_X)$) and hence $G=\indlim{i} L_X\otimes_{K_X} \mathcal{G}_i$.
Then by \cite[Corollary 4.2.8(iii) and Proposition 4.2.4]{KS01} we have \begin{equation}\label{IhomC}\mathcal{I}hom(\mathcal{F},G)\cong \indlim{i} \mathcal{I}hom(\mathcal{F},L_X\otimes_{K_X}\mathcal{G}_i)\cong \indlim{i} \mathcal{H}om(L_X\otimes_{K_X}\mathcal{F}_K,L_X\otimes_{K_X}\mathcal{G}_i).\end{equation} Let us show that \begin{equation}\label{HomC} \mathcal{H}om(L_X\otimes_{K_X}\mathcal{F}_K,L_X\otimes_{K_X}\mathcal{G}_i)\cong L_X\otimes_{K_X}\mathcal{H}om(\mathcal{F}_K,\mathcal{G}_i).\end{equation} Let $U$ be a sufficiently small ball in $X$; then, denoting by $C$ the (compact) support of $\mathcal{G}_i$, we have \begin{align*} \Gamma(U;\mathcal{H}om(L_X\otimes_{K_X}\mathcal{F}_K,L_X\otimes_{K_X}\mathcal{G}_i))&=\mathrm{Hom}_{L_U}(L_U\otimes_{K_U}\mathcal{F}_K|_U,L_U\otimes_{K_U}\mathcal{G}_i|_U)\\ &\cong \mathrm{Hom}_{L_X}(L_X\otimes_{K_X}(\mathcal{F}_K)_{C\cap U},L_X\otimes_{K_X}(\mathcal{G}_i)_U)\\ &\cong L\otimes_K \mathrm{Hom}_{K_X}((\mathcal{F}_K)_{C\cap U},(\mathcal{G}_i)_U)\\ &\cong L\otimes_K \mathrm{Hom}_{K_U}(\mathcal{F}_K|_U,\mathcal{G}_i|_U), \end{align*} where the second line follows by full faithfulness of extension by zero (from $U$ to $X$) and the third line from Lemma~\ref{lemma:MocRc}, since the sheaves $(\mathcal{F}_K)_{C\cap U}$ and $(\mathcal{G}_i)_U$ are compactly supported and ${\mathds R}$-constructible. On the other hand, the tensor product $L_{X}\otimes_{K_X}\mathcal{H}om(\mathcal{F}_K,\mathcal{G}_i)$ is the sheaf associated to the presheaf $$U\mapsto L_X(U)\otimes_{K}\mathrm{Hom}_{K_U}(\mathcal{F}_K|_U,\mathcal{G}_i|_U).$$ On a basis of the topology consisting of small open balls $U$, we have $L_X(U)=L$, and this coincides with the above, so \eqref{HomC} follows. Together with \eqref{IhomC} and the fact that $\indlim{}$ commutes with tensor products, this proves the lemma in this case. By deriving functors (note that $L_X\otimes_{K_X}(\bullet)$ is exact), one gets the statement of the lemma in the derived case.
\end{proof} We can now state the above-mentioned compatibility of the sheafification functor with extension of scalars. \begin{corollary}\label{corSheafificationExtension} Let $\mathcal{X}$ be a real analytic bordered space and $H\in\EbRcI{L}{\mathcal{X}}$ such that $H\cong \pi^{-1}L_X\otimes_{\pi^{-1}K_X} H_K$ for some $H_K\in\EbRcI{K}{\mathcal{X}}$. Then $$ \mathsf{sh}_\mathcal{X}(H)\cong L_X\otimes_{K_X}\mathsf{sh}_\mathcal{X}(H_K). $$ \end{corollary} \begin{proof} By definition (see formula \eqref{eq:Sh}), one has $$\mathsf{sh}_\mathcal{X}(H)=\alpha_X j^{-1}\mathrm{R}\overline{\pi}_*\mathrm{R}\mathcal{I} hom\big(\pi^{-1}L_X\otimes_{\pi^{-1}K_X} (K_{\{t\geq 0\}}\oplus K_{\{t\leq 0\}}),\pi^{-1}L_X\otimes_{\pi^{-1}K_X} \mathcal{L} H_K\big).$$ By Lemma~\ref{lemmaIhomExtensions}, the functor $\mathrm{R}\mathcal{I} hom$ is compatible with $K$-structures (since there is only a usual sheaf in the first component). Moreover, the functor $\alpha_X$ commutes with tensor products, and $\mathrm{R}\overline{\pi}_*=\mathrm{R}\overline{\pi}_{!!}$ (note that $\overline{\pi}\colon \widehat{X}\times\mathsf{P}\to\widehat{X}$ is proper) as well as $j^{-1}$ commute with extension of scalars (similarly to Lemma~\ref{lemmaExtensionCompatibility}). This concludes the proof. \end{proof} We want to use the concept of $g$-conjugation in order to describe when an object defined over a field $L$ actually comes from an object over a subfield $K$ by extension of scalars. To this end, we introduce the following notion. This is inspired by the corresponding results for vector spaces in \cite{Conrad}, where the notion of $G$-structure on an $L$-vector space was (equivalently) formulated in terms of a semilinear action of the Galois group on the underlying $K$-vector space. From now on, let $L/K$ be a finite Galois extension with Galois group $G$. \begin{definition}\label{def:GStructure} Let $X$ be a topological space and let $\mathcal{X}$ be a bordered space.
A $G$-structure on an object $F\in \Db{L}{X}$ (resp.\ $F\in\DbI{L}{X}$, $F\in \EbI{L}{\mathcal{X}}$) consists of the data of an isomorphism in $\Db{L}{X}$ (resp.\ $\DbI{L}{X}$, $\EbI{L}{\mathcal{X}}$) $$\varphi_g\colon F\overset{\cong}{\longrightarrow} \overline{F}^g$$ for each $g\in G$ such that for any $g,h\in G$, we have $\varphi_{gh}=\overline{\varphi_g}^h\circ \varphi_{h}$. \end{definition} The following three statements show that the existence of such a $G$-structure on objects concentrated in one degree (with respect to the standard t-structures, i.e.\ sheaves and ind-sheaves rather than complexes thereof) often implies the existence of a structure over the subfield $K$. \begin{lemma}\label{lemmaGstructureSheaves} Let $X$ be a topological space and let $\mathcal{F}\in\mathrm{Mod}(L_X)$ be a sheaf equipped with a $G$-structure. Then there exists $\mathcal{F}_K\in\mathrm{Mod}(K_X)$ such that $\mathcal{F}\cong L_X\otimes_{K_X} \mathcal{F}_K$. \end{lemma} \begin{proof} Recall that, for all $g\in G$, the underlying sheaf of $K$-vector spaces of $\mathcal{F}$ and $\overline{\mathcal{F}}^g$ is the same. Hence, each $\varphi_g\colon \mathcal{F}\overset{\cong}{\longrightarrow}\overline{\mathcal{F}}^g$ defines a $K$-linear automorphism of $\mathcal{F}$. More precisely, we get a $K$-linear automorphism $\varphi_g^U\colon \mathcal{F}(U)\overset{\cong}{\longrightarrow}\mathcal{F}(U)$ for any open $U\subseteq X$ which in addition is $g$-semilinear, meaning that $\varphi_g^U(l\cdot v)=g(l)\cdot \varphi_g^U(v)$ for any $l\in L$ and $v\in\mathcal{F}(U)$. Now we set for any open $U\subseteq X$ $$\mathcal{F}_K(U)\vcentcolon=\mathcal{F}(U)^G\vcentcolon=\{v\in \mathcal{F}(U)\mid \varphi_g^U(v)=v \; \text{for any $g\in G$}\}.$$ This clearly defines a subsheaf of $\mathcal{F}$. Moreover, the natural morphisms $$L\otimes_K \mathcal{F}_K(U)\to \mathcal{F}(U)$$ are isomorphisms (see, e.g., \cite[Theorem 2.14]{Conrad}). 
Consequently, the natural morphism of sheaves $$L_X\otimes_{K_X} \mathcal{F}_K\to \mathcal{F}$$ is also an isomorphism (noting that $L_X\otimes_{K_X} \mathcal{F}_K$ is the sheaf associated to the presheaf $U\mapsto L\otimes \mathcal{F}_K(U)$). \end{proof} \begin{lemma}\label{lemmaGstructureIndsheaves} Let $X$ be a real analytic manifold and let $F\in\mathrm{I}_{\dR\textnormal{-c}}(L_X)$ be an ind-${\mathds R}$-constructible ind-sheaf equipped with a $G$-structure. Then there exists $F_K\in\mathrm{I}_{\dR\textnormal{-c}}(K_X)$ such that $F\cong L_X\otimes_{K_X} F_K$. \end{lemma} \begin{proof} By \cite[Theorem 6.3.5]{KS01}, there is an equivalence of categories $$\mathrm{I}_{\dR\textnormal{-c}}(L_X)\simeq \Mod{L}{X_\mathrm{sa}},$$ where $X_{\mathrm{sa}}$ is the subanalytic site associated to $X$ (whose open sets are the subanalytic open subsets of $X$ and whose coverings are required to be locally finite) and $\Mod{L}{X_{\mathrm{sa}}}$ is the category of sheaves of $L$-vector spaces on this site. This equivalence is compatible with $g$-conjugation (where $g$-conjugation on subanalytic sheaves is -- similarly to sheaves on the ordinary topology -- defined by considering $g$-conjugation on sections over any subanalytic open set). Hence, via this equivalence $F$ corresponds to a sheaf $\mathfrak{F}\in \mathrm{Mod}(L_{X_{\mathrm{sa}}})$ of $L$-vector spaces on the subanalytic site, and we have a $G$-structure on $\mathfrak{F}$. Now, with the same arguments as in the proof of Lemma~\ref{lemmaGstructureSheaves} (considering for $U$ the open subanalytic subsets of $X$), we can show that there is a sheaf $\mathfrak{F}_K\in\mathrm{Mod}(K_{X_{\mathrm{sa}}})$ such that $\mathfrak{F}\cong L_X\otimes_{K_X}\mathfrak{F}_K$. 
Finally, we obtain $F_K$ as the ind-sheaf corresponding to $\mathfrak{F}_K$ via the equivalence $$\mathrm{I}_{\dR\textnormal{-c}}(K_X)\simeq \mathrm{Mod}(K_{X_{\mathrm{sa}}}).$$ Since this equivalence is compatible with tensor products (cf.\ \cite[Proposition 1.3.1]{Prelli}), we can conclude that $F\cong L_X\otimes_{K_X}F_K$. \end{proof} \begin{proposition}\label{prop:KStructure} Let $\mathcal{X}=(X,\widehat{X})$ be a real analytic bordered space such that $X\subset \widehat{X}$ is relatively compact and subanalytic. Let $H\in \mathrm{E}^0(\mathrm{I}L_{\mathcal{X}})\cap\EbRcI{L}{\mathcal{X}}$ be equipped with a $G$-structure. Then $H$ has a $K$-structure in the sense of Definition \ref{def:KStructure}. \end{proposition} \begin{proof} By the definition of ${\mathds R}$-constructibility for enhanced ind-sheaves (see \cite[Definition 4.9.2]{DK16} and \cite[Definition 3.3.1]{EnhPerv}) and because $X$ is relatively compact in $\widehat{X}$, we have an isomorphism in $\EbIL{\mathcal{X}}$ $$H\cong L^\mathrm{E}_\mathcal{X}\mathbin{\overset{+}{\otimes}} \mathcal{F}$$ for some $\mathcal{F}\in\mathrm{D}^\mathrm{b}(L_{X\times{\mathds R}})$ such that $j_{X\times{\mathds R} !}\mathcal{F}\in\mathrm{D}^\mathrm{b}_{\dR\textnormal{-c}}(L_{\widehat{X}\times\mathsf{P}})$ (where $j_{X\times{\mathds R}}\colon X\times{\mathds R}\hookrightarrow \widehat{X}\times\mathsf{P}$ is the embedding). Denote by $Q\colon \mathrm{D}^\mathrm{b}(\mathrm{I}L_{\widehat{X}\times\mathsf{P}})\to \EbIL{\mathcal{X}}$ the quotient functor and by $\mathcal{L}$ its fully faithful left adjoint, satisfying $Q\circ \mathcal{L}\cong \mathrm{id}_{\EbIL{\mathcal{X}}}$. 
In our case, one has $$\mathcal{L} (H)\cong \mathcal{L}\big(L^\mathrm{E}_\mathcal{X}\mathbin{\overset{+}{\otimes}} \mathcal{F}\big)\cong \indlim{a\to\infty} L_{\{t\geq a\}} \mathbin{\overset{+}{\otimes}}\iota_{\widehat{X}\times\mathsf{P}} j_{X\times{\mathds R} !}\mathcal{F}.$$ Since $j_{X\times{\mathds R}!}\mathcal{F}$ is ${\mathds R}$-constructible, it follows that $\iota_{\widehat{X}\times\mathsf{P}}j_{X\times{\mathds R} !}\mathcal{F}\in \mathrm{D}^\mathrm{b}_{\mathrm{I}\dR\textnormal{-c}}(\mathrm{I}L_{\widehat{X}\times\mathsf{P}})$. We also note that $\indlim{a\to \infty}L_{\{t\geq a\}}=\indlim{a\to\infty}L_{\{(x,t)\in \widehat{X}\times \mathsf{P}; t\in{\mathds R}, t\geq a\}}\in \mathrm{D}^\mathrm{b}_{\mathrm{I}\dR\textnormal{-c}}(\mathrm{I}L_{\widehat{X}\times\mathsf{P}})$. Hence $\mathcal{L} (H)\in \mathrm{D}^\mathrm{b}_{\mathrm{I}\dR\textnormal{-c}}(\mathrm{I}L_{\widehat{X}\times\mathsf{P}})$, since ind-${\mathds R}$-constructibility is preserved by the Grothendieck operations involved in the definition of $\mathbin{\overset{+}{\otimes}}$ (cf.\ \cite[Lemma 7.1.3]{KS01}). Since the functor $\mathcal{L}$ is compatible with $g$-conjugation, this object still carries a $G$-structure. Now, by the assumption that $H\in\mathrm{E}^0(\mathrm{I}L_\mathcal{X})$ (which, by definition, means that $\mathcal{L} (H)$ is concentrated in degree zero), it follows that $\mathcal{L}(H)\in \mathrm{I}_{\dR\textnormal{-c}}(L_{\widehat{X}\times\mathsf{P}})$. We have thus reduced to a situation where we can apply Lemma~\ref{lemmaGstructureIndsheaves}, which guarantees the existence of an object $F_K\in\mathrm{I}_{\dR\textnormal{-c}}(K_{\widehat{X}\times\mathsf{P}})$ such that $\mathcal{L}(H)\cong L_{\widehat{X}\times\mathsf{P}}\otimes_{K_{\widehat{X}\times\mathsf{P}}}F_K$.
If we set $H_K\vcentcolon=Q(F_K)$ (denoting by $Q$ also the quotient functor $Q\colon \mathrm{D}^\mathrm{b}(\mathrm{I}K_{\widehat{X}\times\mathsf{P}})\to \mathrm{E}^\mathrm{b}(\mathrm{I}K_\mathcal{X})$), we obtain the desired result: $$H\cong Q\big( \mathcal{L}(H)\big) \cong Q(L_{\widehat{X}\times\mathsf{P}}\otimes_{K_{\widehat{X}\times\mathsf{P}}}F_K)\cong \pi^{-1}L_X\otimes_{\pi^{-1}K_X} H_K.$$ \end{proof} \subsection{$\mathcal{D}$-modules and the enhanced Riemann--Hilbert correspondence}\label{sectionRH} Let $X$ be a smooth complex algebraic variety. We denote by $\mathcal{D}_X$ the sheaf of algebraic differential operators on $X$ and by $\mathrm{Mod}_{\mathrm{hol}}(\mathcal{D}_X)$ the category of holonomic $\mathcal{D}_X$-modules. Moreover, we denote by $\Dbhol{X}$ the subcategory of the derived category of $\mathcal{D}_X$-modules consisting of complexes with holonomic cohomologies. For a morphism $f\colon X\to Y$, we will denote the direct and inverse image operations on $\mathcal{D}_X$-modules by $f_+$ and $f^+$, respectively. Recall that the classical Riemann--Hilbert correspondence for regular holonomic $\mathcal{D}$-modules gives an equivalence $$\Sol{X}\colon \mathrm{D}^\mathrm{b}_{\mathrm{reghol}}(\mathcal{D}_X)^\mathrm{op}\xlongrightarrow{\sim} \mathrm{D}^\mathrm{b}_{{\mathds C}\textrm{-c}}({\mathds C}_X)$$ between the derived category of regular holonomic $\mathcal{D}$-modules and the derived category of ${\mathds C}$-constructible sheaves of complex vector spaces on a smooth algebraic variety $X$ (and similarly on a complex manifold). In \cite{DK16}, the authors established a generalization of this result to (not necessarily regular) holonomic $\mathcal{D}$-modules, where the category of enhanced ind-sheaves serves as a target category for the Riemann--Hilbert functor. 
Concretely, for $X$ a complex manifold, they introduce a fully faithful functor $$\SolE{X}\colon \Dbhol{X}^\mathrm{op}\hookrightarrow \EbRcI{{\mathds C}}{X}.$$ The theories of algebraic and analytic $\mathcal{D}$-modules are often parallel, but differ in certain aspects. Analytification gives a way of associating to an algebraic $\mathcal{D}$-module on a smooth algebraic variety $X$ an analytic $\mathcal{D}$-module on the corresponding complex manifold $X^\mathrm{an}$. However, one generally needs to extend the algebraic $\mathcal{D}$-module to a completion of $X$ first in order not to lose information during this procedure. The details on an algebraic version of the Riemann--Hilbert correspondence for holonomic $\mathcal{D}$-modules have been given in \cite{Ito} and we will briefly recall the construction here. Let $X$ be a smooth complex algebraic variety. Then by classical results of Hironaka there exists a smooth completion $\widetilde{X}$, i.e.\ a smooth complete algebraic variety containing $X$ as an open subvariety, such that $\widetilde{X}\setminus X\subset \widetilde{X}$ is a normal crossing divisor. We denote by $X_\infty=(X,\widetilde{X})$ the (algebraic) bordered space thus defined and by $j\colon X\hookrightarrow\widetilde{X}$ the inclusion. Although the space $\widetilde{X}$ is not unique, the bordered space $X_\infty$ is determined up to isomorphism. Furthermore, we write $j_\infty^\mathrm{an}\colon X_\infty^\mathrm{an}=(X^\mathrm{an},\widetilde{X}^\mathrm{an})\to \widetilde{X}^\mathrm{an}=(\widetilde{X}^\mathrm{an},\widetilde{X}^\mathrm{an})$ for the natural morphism of bordered spaces given by the embedding. \begin{theorem}[{cf.\ \cite[Theorem 3.12]{Ito}}] Let $X$ be a smooth complex algebraic variety and $\widetilde{X}$ a smooth completion as above. 
Then the functor \begin{align*} \SolE{X_\infty}\colon \Dbhol{X}^\mathrm{op}&\longrightarrow \EbRcI{{\mathds C}}{X_\infty^\mathrm{an}}\\ \mathcal{M} &\longmapsto \mathrm{E} (j_\infty^\mathrm{an})^{-1} \SolE{\widetilde{X}^\mathrm{an}}\big((j_+\mathcal{M})^\mathrm{an}\big) \end{align*} is fully faithful. \end{theorem} We will often write $\EbRcI{{\mathds C}}{X_\infty}$ for $\EbRcI{{\mathds C}}{X_\infty^\mathrm{an}}$, since the category of enhanced ind-sheaves was only defined for analytic bordered spaces (algebraic varieties are not good topological spaces), and hence there is no risk of confusion. If $f\colon X\to Y$ is a morphism of smooth complex algebraic varieties, we denote by $f_\infty\colon X_\infty^\mathrm{an}\to Y_\infty^\mathrm{an}$ the induced morphism of (analytic) bordered spaces. The enhanced solution functor satisfies many convenient compatibilities, which we summarize in the following lemma (see \cite[Corollary 9.4.10]{DK16} and \cite[Proposition 3.13]{Ito}). \begin{lemma}\label{lemmaAlgRHCompat} Let $f\colon X\to Y$ be a morphism of smooth complex algebraic varieties, and let $\mathcal{M}\in\Dbhol{X}$, $\mathcal{N}\in\Dbhol{Y}$. Then the following isomorphisms hold: \begin{itemize} \item[(i)] Let $\mathcal{M}\in\Dbhol{X}$. Then there is an isomorphism in $\EbI{{\mathds C}}{Y_\infty}$ $$\SolE{Y_\infty}(f_+\mathcal{M})\cong \mathrm{E} f_{\infty !!} \SolE{X_\infty}(\mathcal{M})[d_X-d_Y],$$ where $d_X$ and $d_Y$ are the (complex) dimensions of $X$ and $Y$, respectively. \item[(ii)] Let $\mathcal{N}\in\Dbhol{Y}$.
Then there is an isomorphism in $\EbI{{\mathds C}}{X_\infty}$ $$\SolE{X_\infty}(f^+\mathcal{N})\cong \mathrm{E} f_\infty^{-1}\SolE{Y_\infty}(\mathcal{N}).$$ \item[(iii)] Let $\mathcal{M}\in\Dbhol{X}$ and $\mathcal{R}\in\mathrm{D}^\mathrm{b}_\mathrm{reghol}(\mathcal{D}_X)$. Then there is an isomorphism in $\EbI{{\mathds C}}{X_\infty}$ $$\SolE{X_\infty}(\mathcal{R}\otimes_{\mathcal{O}_X}\mathcal{M})\cong \pi^{-1}\Sol{X}(\mathcal{R})\otimes \SolE{X_\infty}(\mathcal{M}).$$ \end{itemize} \end{lemma} Let $f$ be an algebraic function on $X$. Denote by $\mathcal{E}^f\in\mathrm{Mod}_{\mathrm{hol}}(\mathcal{D}_X)$ the algebraic $\mathcal{D}_X$-module whose holomorphic solutions are spanned by $e^f$. Then there is an isomorphism $$\SolE{X_\infty}(\mathcal{E}^f)\cong {\mathds E}^{\mathop{\mathrm{Re}} f}_{\mathds C}.$$ Moreover, the usual holomorphic solutions of an algebraic $\mathcal{D}_X$-module $\mathcal{M}$ are recovered from its enhanced solutions via the sheafification functor: $$\mathsf{sh}_{X_\infty}\big(\SolE{X_\infty}(\mathcal{M})\big)\cong \Sol{X}(\mathcal{M}).$$ We end this section by applying a result of T.\ Mochizuki about the enhanced solutions of meromorphic connections to the algebraic enhanced solution functor. \begin{lemma}\label{lemmaMochizukiDegreeZero} Let $X$ be a smooth complex algebraic variety and let $\mathcal{M}\in\mathrm{Mod}_{\mathrm{hol}}(\mathcal{D}_X)$ be an integrable connection on $X$ (i.e.\ a holonomic $\mathcal{D}_X$-module, locally free as an $\mathcal{O}_X$-module). Then $\SolE{X_\infty}(\mathcal{M})\in \mathrm{E}^0(\mathrm{I}{\mathds C}_{X_\infty})$. \end{lemma} \begin{proof} If $\mathcal{M}$ is such a module, then $(j_+\mathcal{M})^\mathrm{an}\in \mathrm{Mod}_{\mathrm{hol}}(\mathcal{D}_{\widetilde{X}^\mathrm{an}})$ is a meromorphic connection with poles on $D\vcentcolon=\widetilde{X}^\mathrm{an}\setminus X^\mathrm{an}$, i.e.
$\mathrm{SingSupp}((j_+\mathcal{M})^\mathrm{an})\subseteq D$ and $(j_+\mathcal{M})^\mathrm{an}\cong (j_+\mathcal{M})^\mathrm{an}\otimes_{\mathcal{O}_{\widetilde{X}^\mathrm{an}}}\mathcal{O}_{\widetilde{X}^\mathrm{an}}(*D)$. Combining results from \cite[§9, Corollary 5.21]{MocCurveTest}, it follows that $\SolE{\widetilde{X}^\mathrm{an}}((j_+\mathcal{M})^\mathrm{an})\in \mathrm{E}^0(\mathrm{I}{\mathds C}_{\widetilde{X}^\mathrm{an}})$. Since $\mathrm{E} (j_\infty^\mathrm{an})^{-1}$ is exact with respect to the standard t-structures on $\EbI{{\mathds C}}{\widetilde{X}^\mathrm{an}}$ and $\EbI{{\mathds C}}{X_\infty}$ (see \cite[Proposition 2.7.3(iv)]{EnhPerv}), the assertion follows. \end{proof} \section{Hypergeometric $\mathcal{D}$-modules and families of Laurent polynomials} \label{sec:Laurent} We briefly recall a few fundamental facts on one-dimensional hypergeometric $\mathcal{D}$-modules. A standard reference is \cite{Ka}; one may also follow \cite[Section 2]{SevCast}. As a matter of notation, we denote by ${\mathds G}_m$ a $1$-dimensional algebraic torus; if we want to fix a coordinate, say $q$, on it, we also write $\dG_{m,q}$. \begin{definition} Let $m,n\in {\mathds N}_0$ and let $\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_m\in {\mathds C}$ be given. Consider the differential operator in one variable $$ P:=\prod_{i=1}^{n}\left(q\partial_q-\alpha_i\right)-q\cdot \prod_{j=1}^{m}\left(q\partial_q-\beta_j\right), $$ and the left $\mathcal{D}_{\dG_{m,q}}$-module $$ \mathcal{H}(\alpha;\beta):=\mathcal{D}_{\dG_{m,q}}/\mathcal{D}_{\dG_{m,q}} \cdot P \in \mathrm{Mod}_\mathrm{hol}(\mathcal{D}_{\dG_{m,q}}). $$ $\mathcal{H}(\alpha;\beta)$ is called a one-dimensional (or univariate) hypergeometric $\mathcal{D}$-module. \end{definition} \begin{remark} We will suppose from now on that all $\alpha_i$ and all $\beta_j$ are real numbers.
Although this is not strictly necessary for what follows, it simplifies some arguments, and corresponds to the cases of interest, specifically if one studies Hodge properties of hypergeometric systems. \end{remark} We will mainly be concerned with the case where $n\neq m$ and where the system $\mathcal{H}(\alpha;\beta)$ is irreducible. In this case, we have the following important fact due to Katz. \begin{proposition}[{see \cite[Proposition 2.11.9, Proposition 3.2]{Ka}}]\label{prop:PropHyp} Suppose that $n\neq m$. Let $\alpha_i,\beta_j\in {\mathds R}$ be given, and consider the $\mathcal{D}_{\dG_{m,q}}$-module $\mathcal{H}(\alpha;\beta)$ as defined above. Then: \begin{enumerate} \item $\mathcal{H}(\alpha;\beta)$ is irreducible if and only if for all $i\in\{1,\ldots,n\}$ and $j\in \{1,\ldots,m\}$ we have $\beta_j-\alpha_i\notin {\mathds Z}$. \item If $\mathcal{H}(\alpha;\beta)$ is irreducible, then for any $k,l\in {\mathds Z}$, for any $i\in\{1,\ldots,n\}$ and for any $j\in \{1,\ldots,m\}$ we have that $$ \mathcal{H}(\alpha;\beta) \cong \mathcal{H}(\alpha_1,\ldots,\alpha_i+k,\ldots,\alpha_n; \beta_1,\ldots,\beta_j+l,\ldots,\beta_m). $$ Hence, for irreducible hypergeometric modules $\mathcal{H}(\alpha;\beta)$ we may assume, up to isomorphism, that $\alpha_i,\beta_j\in [0,1)$ with $\beta_j\neq\alpha_i$ for all $i,j$. \end{enumerate} \end{proposition} A major step in our approach to the existence of Betti structures is a geometric realization of $\mathcal{H}(\alpha;\beta)$ via a family of Laurent polynomials. We will prove our main results in several steps, first under some special assumptions on the numbers $\alpha_i$ and $\beta_j$. Namely, suppose that $n > m$ and that $\alpha_1=0$. This latter hypothesis is a technical but crucial assumption to obtain the geometric realization of $\mathcal{H}(\alpha;\beta)$. 
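\begin{remark}
To fix ideas, let us record (purely as an illustration, not used in the sequel) how the classical confluent case fits into this convention. For $n=2$, $m=1$, $\alpha=(0,1-c)$ and $\beta=(-a)$ with $a,c\in{\mathds R}$, the operator $P$ reads
$$
P=(q\partial_q)(q\partial_q+c-1)-q\,(q\partial_q+a),
$$
i.e.\ $P$ is Kummer's confluent hypergeometric operator, which annihilates the series ${}_1F_1(a;c;q)=\sum_{k\geq 0}\frac{(a)_k}{(c)_k\,k!}\,q^k$ (for $c\notin{\mathds Z}_{\leq 0}$). In particular, the normalization $\alpha_1=0$ imposed below is satisfied in this example.
\end{remark}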
We write $\alpha=(0,\alpha_2,\ldots,\alpha_n)\in {\mathds R}^n$, $\beta=(\beta_1,\ldots,\beta_m)\in {\mathds R}^m$ and we define $\gamma=(\gamma_1,\ldots,\gamma_{N-1}):=(\beta_1,\ldots,\beta_m,\alpha_2,\ldots,\alpha_n)\in {\mathds R}^{N-1}$, where $N=n+m$. Here and later we will use twisted structure sheaves on algebraic tori, defined as follows. Let $k,l\in {\mathds N}_0$ be arbitrary (but excluding $(k,l)=(0,0)$), and consider any vector $\gamma=(\gamma_1,\ldots,\gamma_k)\in {\mathds R}^k$. We consider the torus ${\mathds G}_m^{k+l}$ and the module \begin{equation}\label{eq:DefOG} \mathcal{O}_{{\mathds G}_m^{k+l}}^\gamma:=\mathcal{O}_{{\mathds G}_m^k}^\gamma\boxtimes\mathcal{O}_{{\mathds G}_m^l}, \quad \quad \quad \textup{where} \quad \quad \quad \quad \mathcal{O}_{{\mathds G}_m^k}^\gamma := \mathcal{D}_{{\mathds G}_m^k}/\left(\partial_{x_i} x_i +\gamma_i\right)_{i=1,\ldots, k}. \end{equation} In particular, when $N=n+m$ is as above, we put ${\mathds G}={\mathds G}_m^{N-1}\times \dG_{m,q}$ and we consider the sheaves $\mathcal{O}_{{\mathds G}_m^{N-1}}^\gamma$ and $ \mathcal{O}_{\mathds G}^\gamma=\mathcal{O}_{{\mathds G}_m^{N-1}}^\gamma\boxtimes \mathcal{O}_{\dG_{m,q}}$. Then the following holds. \begin{proposition}\label{prop:HypGMSystem} Let $f:=x_1+\ldots+x_m+\frac{1}{x_{m+1}}+\ldots+ \frac{1}{x_{N-1}}+q\cdot x_1\cdot\ldots\cdot x_{N-1}\in \mathcal{O}_{\mathds G}$. Consider the elementary irregular module $\mathcal{E}^{\gamma,f}:=\mathcal{O}^\gamma_{\mathds G}\cdot e^f=\mathcal{O}^\gamma_{\mathds G}\otimes_{\mathcal{O}_{\mathds G}}\mathcal{E}^f$, where $\gamma=(\beta_1,\ldots,\beta_m,\alpha_2,\ldots,\alpha_n)\in[0,1)^{N-1}$ is as above (i.e.\ $n>m$, $\alpha_1=0$ and $\alpha_i\neq \beta_j$ for all $i\in\{1,\ldots,n\}$ and $j\in\{1,\ldots,m\}$). Write $p:{\mathds G}\twoheadrightarrow\dG_{m,q}$ for the projection to the last factor.
Then we have $\mathcal{H}^i p_+ \mathcal{E}^{\gamma,f} =0$ for all $i\neq 0$ and \begin{equation}\label{eq:FourierHypergeom} \mathcal{H}(\alpha;\beta)=\mathcal{H}(0,\alpha_2,\ldots,\alpha_n;\beta_1,\ldots,\beta_m)\cong\kappa^+\mathcal{H}^0 p_+ \mathcal{E}^{\gamma,f} =\kappa^+ p_+ \mathcal{E}^{\gamma,f}, \end{equation} where $\kappa:\dG_{m,q}\to \dG_{m,q}$ sends $q$ to $(-1)^m\cdot q$. \end{proposition} As a preliminary step towards the proof of this statement, we have the following result on a torus embedding. \begin{lemma}\label{lem:ApplySchulzeWalther} Let $\gamma\in {\mathds R}^{N-1}$ be arbitrary, and consider the torus embedding $$ \begin{array}{rcl} h:{\mathds G}^{N-1}_m & \longrightarrow & {\mathbb A}^N \\ (x_1,\ldots,x_{N-1}) & \longmapsto & (x_1\cdot\ldots\cdot x_{N-1},\frac{1}{x_{m+1}},\ldots,\frac{1}{x_{N-1}},x_1,\ldots,x_m)=:(y_1,\ldots,y_N). \end{array} $$ Then there exists an integer vector $c\in{\mathds Z}^{N-1}$ such that, writing $\widetilde{\gamma}:=\gamma+c$, we have an isomorphism of left $\mathcal{D}_{{\mathbb A}^N}$-modules: $$ \begin{array}{rcl} h_+ \mathcal{O}^{\widetilde{\gamma}}_{{\mathds G}_m^{N-1}} &\cong& \frac{\displaystyle \mathcal{D}_{{\mathbb A}^N}}{\displaystyle \left(\Box, (E_{m+i-1}+\widetilde{\gamma}_{m+i-1})_{i=2,\ldots,n}, (E_j+\widetilde{\gamma}_j)_{j=1,\ldots,m}\right)} \\ \\ &=& \frac{\displaystyle \mathcal{D}_{{\mathbb A}^N}}{\displaystyle \left(\Box, (E_{m+i-1}+\alpha_i+c_{m+i-1})_{i=2,\ldots,n}, (E_j+\beta_j+c_j)_{j=1,\ldots,m}\right)}, \end{array} $$ where $$ \begin{array}{rcl} \Box & := & y_1\cdot\ldots\cdot y_n-y_{n+1}\cdot\ldots\cdot y_N \\ E_{m+i-1} & := & \partial_{y_1}y_1 - \partial_{y_i} y_i\\ E_j & := & \partial_{y_1}y_1+\partial_{y_{n+j}}y_{n+j}. \end{array} $$ \end{lemma} \begin{proof} First we notice that $h$ factors as $h=h_2\circ h_1$, where $h_1:{\mathds G}_m^{N-1}\hookrightarrow {\mathds G}_m^N$ is the closed (monomial) embedding defined exactly as $h$, i.e.\ sending $(x_1,\ldots,x_{N-1})$ to
$(x_1\cdot\ldots\cdot x_{N-1},1/x_{m+1},\ldots,1/x_{N-1},x_1,\ldots,x_m)$, and where $h_2:{\mathds G}_m^N\hookrightarrow {\mathbb A}^N$ is the canonical open embedding. Then it is an easy exercise using the definition of the direct image (i.e.\ the explicit expression via transfer modules) to show that $$ h_{1,+} \mathcal{O}^{\widetilde{\gamma}}_{{\mathds G}_m^{N-1}} \cong \frac{\displaystyle \mathcal{D}_{{\mathds G}_m^N}}{\displaystyle \left(\Box, (E_{m+i-1}+\alpha_i+c_{m+i-1})_{i=2,\ldots,n}, (E_j+\beta_j+c_j)_{j=1,\ldots,m}\right)}. $$ Now in order to show the statement of the lemma, it suffices to prove that for all $k\in\{1,\ldots, N\}$, left multiplication with $y_k$ is invertible on the module $$ \frac{\displaystyle \mathcal{D}_{{\mathbb A}^N}}{\displaystyle \left(\Box, (E_{m+i-1}+\alpha_i+c_{m+i-1})_{i=2,\ldots,n}, (E_j+\beta_j+c_j)_{j=1,\ldots,m}\right)}. $$ In order to show this, we use \cite[Theorem 3.6]{SchulWalth2}, together with \cite[Lemma 1.10]{Reich2}. Consider the matrix \begin{equation}\label{eq:MatrixA} A=\left(\begin{array}{c|c|c} \underline{1}_m & \underline{0}_{m\times(n-1)} & \operatorname{Id}_m\\[3pt] \hline & & \vspace{-10pt}\\ \underline{1}_{n-1} & -\operatorname{Id}_{n-1} & \underline{0}_{(n-1)\times m}\end{array} \right) \end{equation} and notice that its columns define the monomials of the map $h_1$. It follows then from \cite[Theorem 3.6]{SchulWalth2} and \cite[Lemma 1.10]{Reich2} that there is a vector $\delta_A\in {\mathds N} A$ such that for all $\gamma'\in \delta_A+{\mathds R}_{\geq 0}A$, multiplication with $y_k$ for $k\in\{1,\ldots,N\}$ is invertible on the module $ \mathcal{D}_{{\mathbb A}^N}/(\Box,(E_{m+i-1}+\gamma'_{m+i-1}),(E_j+\gamma'_j))$. Now since ${\mathds R}_{\geq 0} A$ is a cone in ${\mathds R}^{N-1}$, it is clear by an elementary topological argument that we can find a $c\in {\mathds Z}^{N-1}$ such that $\widetilde{\gamma}=\gamma+c \in \delta_A+{\mathds R}_{\geq 0}A$.
Notice that it follows from the normality of the semigroup ${\mathds N} A$, using \cite[Lemma 1.11]{Reich2}, that we can actually take $\delta_A$ to be zero, but we will not use this fact here. We thus obtain that for this choice of $c\in {\mathds Z}^{N-1}$, multiplication by $y_k$ for $k\in\{1,\ldots,N\}$ is invertible on $ \mathcal{D}_{{\mathbb A}^N}/\left(\Box, (E_{m+i-1}+\widetilde{\gamma}_{m+i-1})_{i=2,\ldots,n}, (E_j+\widetilde{\gamma}_j)_{j=1,\ldots,m}\right)$, which implies that $$ h_+ \mathcal{O}_{{\mathds G}_m^{N-1}}^{\widetilde{\gamma}} \cong \frac{\displaystyle \mathcal{D}_{{\mathbb A}^N}}{\displaystyle \left(\Box, (E_{m+i-1}+\widetilde{\gamma}_{m+i-1})_{i=2,\ldots,n}, (E_j+\widetilde{\gamma}_j)_{j=1,\ldots,m}\right)}. $$ \end{proof} Using this lemma, we can give a proof of the proposition by applying a twisted version of the Fourier--Laplace transformation. \begin{proof}[Proof of Proposition \ref{prop:HypGMSystem}] We first show the isomorphism $$ \kappa^+\mathcal{H}^0 p_+ \mathcal{E}^{\gamma,f} \cong\mathcal{H}(\alpha;\beta). $$ Notice that given $\gamma=(\beta_1,\ldots,\beta_m,\alpha_2,\ldots,\alpha_n)\in [0,1)^{N-1}$, we can find an integer vector $c\in {\mathds Z}^{N-1}$ as in Lemma \ref{lem:ApplySchulzeWalther}, and consider $$ \widetilde{\gamma}=\gamma+c=: (\widetilde{\beta}_1,\ldots,\widetilde{\beta}_m,\widetilde{\alpha}_2,\ldots,\widetilde{\alpha}_n). $$ Then since $\mathcal{H}(\alpha;\beta)$ is irreducible, we have an isomorphism of $\mathcal{D}_{\dG_{m,q}}$-modules $$ \mathcal{H}(\alpha;\beta) \cong \mathcal{H}(\widetilde{\alpha};\widetilde{\beta}) $$ by Proposition \ref{prop:PropHyp}. Hence, in order to prove the statement, we can replace $\mathcal{H}(\alpha;\beta)$ by $\mathcal{H}(\widetilde{\alpha};\widetilde{\beta})$ or, in other words, assume from the very beginning that the statement of Lemma \ref{lem:ApplySchulzeWalther} holds for $\gamma$ itself.
Consider the following diagram $$ \begin{tikzcd} {\mathds G}={\mathds G}^{N-1}_m\times\dG_{m,q} \ar{rr}{h\times \id_{\dG_{m,q}}} \ar[swap]{dd}{p_1} \ar[bend left=40]{ddrrrr}{p} && {\mathbb A}^N\times \dG_{m,q} \ar[swap]{dd}{\pi_1} \ar{rrdd}{\pi_2} \\ \\ {\mathds G}_m^{N-1} \ar[hook]{rr}{h} && {\mathbb A}^N&& \dG_{m,q} \end{tikzcd} $$ where the square is Cartesian. Consider also the exponential module $\mathcal{E}_{{\mathbb A}^N\times \dG_{m,q}}^\psi \in \mathrm{Mod}_\mathrm{hol}(\mathcal{D}_{{\mathbb A}^N\times \dG_{m,q}})$, where $\psi=q\cdot y_1+y_2+\ldots+y_N\in\mathcal{O}_{{\mathbb A}^N\times \dG_{m,q}}$. Then we have $$ \begin{array}{rcl} \FL^\psi(h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}}) & :=& \mathcal{H}^0 \pi_{2,+}\left([\pi_1^+ h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}}] \otimes_{\mathcal{O}_{{\mathbb A}^N\times \dG_{m,q}}} \mathcal{E}_{{\mathbb A}^N\times \dG_{m,q}}^\psi\right) \\ \\ &\cong & \mathcal{H}^0 \pi_{2,+}\left( [(h\times\id_{\dG_{m,q}})_+p_1^+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}}] \otimes_{\mathcal{O}_{{\mathbb A}^N\times \dG_{m,q}}} \mathcal{E}_{{\mathbb A}^N\times \dG_{m,q}}^\psi\right) \\ \\ &\cong & \mathcal{H}^0 \pi_{2,+}\left( [(h\times\id_{\dG_{m,q}})_+\mathcal{O}^\gamma_{{\mathds G}}] \otimes_{\mathcal{O}_{{\mathbb A}^N\times \dG_{m,q}}} \mathcal{E}_{{\mathbb A}^N\times \dG_{m,q}}^\psi\right) \\ \\ &\cong &\mathcal{H}^0 \pi_{2,+} (h\times\id_{\dG_{m,q}})_+(\underbrace{\mathcal{O}^\gamma_{{\mathds G}} \otimes_{\mathcal{O}_{{\mathds G}}} \mathcal{E}_{{\mathds G}}^f}_{\mathcal{E}^{\gamma,f}}) \cong \mathcal{H}^0 p_+ \mathcal{E}^{\gamma,f}, \end{array} $$ where the first isomorphism follows by base change and the last one from the obvious identity $p=\pi_2\circ (h\times\id_{\dG_{m,q}})$. 
It thus remains to prove that $$ \kappa^+\FL^\psi(h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}}) \cong \mathcal{H}(\alpha;\beta), $$ using the explicit expression for $h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}}$ from Lemma \ref{lem:ApplySchulzeWalther} (as well as the remark made at the beginning of the current proof). We have $$ \begin{array}{rcl} \pi_1^+h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}} &\cong& \frac{\displaystyle \mathcal{D}_{{\mathbb A}^N}}{\displaystyle \left(\Box, (E_{m+i-1}+\alpha_i)_{i=2,\ldots,n}, (E_j+\beta_j)_{j=1,\ldots,m}\right)}\boxtimes \mathcal{O}_{\dG_{m,q}}\\ \\ &\cong& \frac{\displaystyle \mathcal{D}_{{\mathbb A}^N\times\dG_{m,q}}}{\displaystyle \left(\Box, (E_{m+i-1}+\alpha_i)_{i=2,\ldots,n}, (E_j+\beta_j)_{j=1,\ldots,m},\partial_q\right)}, \end{array} $$ and $$ \begin{array}{rcl} &&[\pi_1^+ h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}}] \otimes_{\mathcal{O}_{{\mathbb A}^N\times \dG_{m,q}}} \mathcal{E}_{{\mathbb A}^N\times \dG_{m,q}}^\psi\\ \\ &\cong& \frac{\displaystyle \mathcal{D}_{{\mathbb A}^N\times\dG_{m,q}}}{\displaystyle \left(\Box, (E_{m+i-1}+\alpha_i)_{i=2,\ldots,n}, (E_j+\beta_j)_{j=1,\ldots,m},\partial_q\right)} \otimes_{\mathcal{O}_{{\mathbb A}^N\times \dG_{m,q}}} \frac{\displaystyle \mathcal{D}_{{\mathbb A}^N\times\dG_{m,q}}}{\displaystyle \left(\partial_q-y_1,\partial_{y_1}-q,(\partial_{y_k}-1)_{k=2,\ldots,N}\right)}\\ \\ &\cong& \frac{\displaystyle \mathcal{D}_{{\mathbb A}^N\times\dG_{m,q}}}{\displaystyle \left(\Box, (\partial_{y_1}y_1-q\partial_q-\partial_{y_i}y_i+y_i+\alpha_i)_{i=2,\ldots,n}, (\partial_{y_1}y_1-q\partial_q+\partial_{y_{n+j}}y_{n+j}-y_{n+j}+\beta_j)_{j=1,\ldots,m},\partial_q-y_1\right)}.
\end{array} $$ Now since the map $\pi_2$ is affine, the top cohomology $\mathcal{H}^0 \pi_{2,+}\left([\pi_1^+ h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}}] \otimes_{\mathcal{O}_{{\mathbb A}^N\times \dG_{m,q}}} \mathcal{E}_{{\mathbb A}^N\times \dG_{m,q}}^\psi\right)$ is nothing but the $N$-th cohomology of the complex $\pi_{2,*}\DR_{{\mathbb A}^N\times\dG_{m,q}/\dG_{m,q}}^\bullet\left([\pi_1^+ h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}}] \otimes_{\mathcal{O}_{{\mathbb A}^N\times \dG_{m,q}}} \mathcal{E}_{{\mathbb A}^N\times \dG_{m,q}}^\psi\right)$. Notice that in this cohomology group, the (class of the) operator $\partial_{y_1}y_1-q\partial_q-\partial_{y_i}y_i+y_i+\alpha_i$ resp. $\partial_{y_1}y_1-q\partial_q+\partial_{y_{n+j}}y_{n+j}-y_{n+j}+\beta_j$ equals (the class of) $-q\partial_q+y_i+\alpha_i$ resp. $-q\partial_q-y_{n+j}+\beta_j$. Hence, we can replace in (the class of) $\Box$ the variable $y_i$ by $q\partial_q-\alpha_i$ (for $i=2,\ldots,n$), the variable $y_{n+j}$ by $-(q\partial_q-\beta_j)$ (for $j=1,\ldots,m$) and the variable $y_1$ by $\partial_q$ to obtain $$ \begin{array}{rcl} \mathcal{H}^0 \pi_{2,+}\left([\pi_1^+ h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}}] \otimes_{\mathcal{O}_{{\mathbb A}^N\times \dG_{m,q}}} \mathcal{E}_{{\mathbb A}^N\times \dG_{m,q}}^\psi\right) &\cong &\frac{\displaystyle \mathcal{D}_{\dG_{m,q}}}{\displaystyle \left(\partial_q\prod_{i=2}^n(q\partial_q-\alpha_i)-(-1)^m\prod_{j=1}^m(q\partial_q-\beta_j)\right)} \\ \\ \cong\frac{\displaystyle \mathcal{D}_{\dG_{m,q}}}{\displaystyle \left((q\partial_q)\prod_{i=2}^n(q\partial_q-\alpha_i)-(-1)^m\cdot q\prod_{j=1}^m(q\partial_q-\beta_j)\right)} &\cong& \kappa^+\mathcal{H}(\alpha;\beta). \end{array} $$ Since $\kappa$ is an involution, the statement we are after follows. It remains to prove the vanishing of $\mathcal{H}^i p_+ \mathcal{E}^{\gamma,f} $ for $i\neq 0$.
To this end, notice that we can rewrite $\FL^\psi$ as the composition $k^+\circ\FL$, where $\FL:\mathrm{Mod}_\mathrm{hol}(\mathcal{D}_{{\mathbb A}^N})\rightarrow \mathrm{Mod}_\mathrm{hol}(\mathcal{D}_{({\mathbb A}^N)^\vee})$ is the ordinary Fourier transformation (using the kernel $\sum_{i=1}^N x_i\cdot y_i$ instead of the kernel $\psi$), and where $k$ is the embedding $$ \begin{array}{rcl} k:\dG_{m,q} & \hookrightarrow & ({\mathbb A}^N)^\vee\\ \\ q&\longmapsto & (q,1,\ldots,1). \end{array} $$ The statement therefore follows once we know that $k$ is non-characteristic for the module $\FL(h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}})$. However, it follows from Lemma \ref{lem:ApplySchulzeWalther} that the latter is nothing but the GKZ-system $\mathcal{M}_A^\gamma\in\mathrm{Mod}_\mathrm{hol}(\mathcal{D}_{({\mathbb A}^N)^\vee})$ (see \cite{Reich2} for a thorough discussion of the construction of GKZ-systems). Moreover, it is easy to see that this system $\mathcal{M}^\gamma_A$ appears as the restriction (inverse image) to $z=1$ of a system $\widehat{\mathcal{M}}^{(\gamma_0,\gamma)}_B\in\mathrm{Mod}_\mathrm{hol}(\mathcal{D}_{{\mathds C}_z\times({\mathbb A}^N)^\vee})$ (considered in \cite[Lemma 3.14]{ReiSe2}) where $B\in \textup{Mat}(N\times (N+1),{\mathds Z})$ is the homogenization of $A$ (i.e.\ adding the row $(1,\ldots,1)$ as first row and setting all entries of the first column except the first one to zero) and where we denote by $z$ the coordinate on the first factor. Then, as proved in loc.~cit., $\widehat{\mathcal{M}}^{(\gamma_0,\gamma)}_B$ is smooth on ${\mathds C}_z^*\times W^\circ$, where $W^\circ \subset ({\mathds G}_m^N)^\vee\subset ({\mathbb A}^N)^\vee$ is the locus on which the generic family $$ \lambda_1\cdot x_1+\ldots+\lambda_m\cdot x_m+\frac{\lambda_{m+1}}{x_{m+1}}+\ldots+ \frac{\lambda_{N-1}}{x_{N-1}}+\lambda_N\cdot x_1\cdot\ldots\cdot x_{N-1} $$ is non-degenerate (in the sense of \cite[page 274]{Adolphson}).
Now it follows by some elementary considerations in toric geometry (see e.g.\ \cite[Section 4.2]{ReiSe2} and \cite[Lemma 2.8]{ReiSe}) that in our case $W^\circ=({\mathds G}_m^N)^\vee$, since the columns of the matrix $A$ from equation \eqref{eq:MatrixA} (these are the exponents of the monomials of $f$ resp.\ of the generic family above) are the primitive integral generators of the rays of the fan of the total space of the bundle $\mathcal{O}_{{\mathds P}^{n-1}}(-1)^m$ and since the anticanonical divisor of this total space is ample (since $n>m$). Hence, we obtain that $\mathcal{M}_A^\gamma$ is smooth on $({\mathds G}_m^N)^\vee\subset ({\mathbb A}^N)^\vee$, but the above embedding $k$ factors over $({\mathds G}_m^N)^\vee$, so it is non-characteristic for $\mathcal{M}_A^\gamma \cong \FL(h_+\mathcal{O}^\gamma_{{\mathds G}_m^{N-1}})$, as required. \end{proof} Let $L$ be a subfield of ${\mathds C}$ such that $L \supset {\mathds Q}(e^{2\pi i \gamma_1},\ldots, e^{2\pi i\gamma_{N-1}})$. Consider a field $K\subset L$ such that $L/ K$ is a finite Galois extension. We write $G:=\textup{Gal}(L/K)$, so that we are in the situation considered in Section \ref{sec:BettiConj}. Let $(\dG_{m,q})_\infty$ and ${\mathds G}_\infty$ be the bordered spaces associated to $\dG_{m,q}$ and ${\mathds G}$, respectively, by a smooth completion as in Section \ref{sectionRH}. Our main objective is to find criteria such that the object $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$ has a $K$-structure, i.e.\ is defined over $K$ in the sense of Proposition \ref{prop:KStructure}.
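\begin{remark}
A typical situation to keep in mind (given purely as an illustration of the field-theoretic setup) is the case where all entries of $\gamma$ are rational, say $\gamma_i\in\frac{1}{d}{\mathds Z}$ for some $d\in{\mathds N}$. Then we may take $L={\mathds Q}(\zeta_d)$ with $\zeta_d=e^{2\pi i/d}$, and since $L/{\mathds Q}$ is an abelian extension, $L/K$ is automatically a finite Galois extension for any subfield $K\subset L$; for $K={\mathds Q}$ one has $G=\textup{Gal}(L/K)\cong ({\mathds Z}/d{\mathds Z})^\times$.
\end{remark}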
Using the properties of the algebraic enhanced solution functor and Proposition \ref{prop:HypGMSystem}, we can write \begin{align*} \SolE{(\dG_{m,q})_\infty}\big(\mathcal{H}(\alpha;\beta)\big)&\cong \SolE{(\dG_{m,q})_\infty}\big(\kappa^+ p_+ \mathcal{E}^{\gamma,f}\big)\\ &\cong \mathrm{E}\kappa_\infty^{-1} \mathrm{E} p_{\infty !!} \big(\pi^{-1}\Sol{{\mathds G}}(\mathcal{O}^\gamma)\otimes \SolE{{\mathds G}_\infty}(\mathcal{E}^f)[N-1]\big)\\ &\cong \mathrm{E}\kappa_\infty^{-1} \mathrm{E} p_{\infty !!} \big(\pi^{-1}\Sol{{\mathds G}}(\mathcal{O}^\gamma)\otimes {\mathds E}_{\mathds C}^{\mathop{\mathrm{Re}} f}[N-1]\big)\in \EbI{{\mathds C}}{(\dG_{m,q})_{\infty}}. \end{align*} Note that $\Sol{{\mathds G}}(\mathcal{O}^\gamma)\in\Mod{{\mathds C}}{{\mathds G}}$ is a local system with semi-simple monodromy whose eigenvalues are contained in $L$, so one can find a local system $\mathcal{F}^\gamma\in \Mod{L}{{\mathds G}}$ such that $\Sol{{\mathds G}}(\mathcal{O}^\gamma)\cong {\mathds C}_{\mathds G} \otimes_{L_{\mathds G}} \mathcal{F}^\gamma$. On the other hand, we clearly have ${\mathds E}^{\mathop{\mathrm{Re}} f}_{\mathds C}\cong \pi^{-1}{\mathds C}_{\mathds G}\otimes_{\pi^{-1}L_{\mathds G}} {\mathds E}^{\mathop{\mathrm{Re}} f}_L$ (indeed, ${\mathds E}^{\mathop{\mathrm{Re}} f}_{\mathds C}$ admits a structure over any subfield of ${\mathds C}$). Consequently, in view of Lemma \ref{lemmaExtensionCompatibility}, we obtain \begin{equation}\label{eq:SolEHypergeom} \SolE{(\dG_{m,q})_\infty}\big(\mathcal{H}(\alpha;\beta)\big)\cong \pi^{-1}{\mathds C}_{\dG_{m,q}}\otimes_{\pi^{-1}L_{\dG_{m,q}}}\mathrm{E} \kappa_\infty^{-1} \mathrm{E} p_{\infty !!} \big(\pi^{-1}\mathcal{F}^\gamma\otimes {\mathds E}_L^{\mathop{\mathrm{Re}} f}[N-1]\big), \end{equation} so this object naturally carries a structure over $L$. We cannot, however, expect to find a structure over $K$ in general.
However, under suitable assumptions on $\gamma$, we will show that the above object carries a $K$-structure (even if $\mathcal{F}^\gamma$ itself does not). To this end, we introduce the following group-theoretic condition on the exponent vector we are interested in. \begin{definition}\label{def:Ggood} Let $k\in {\mathds N}$ be arbitrary, and consider any vector $\gamma=(\gamma_1,\ldots,\gamma_k)\in {\mathds R}^k$. Let $L\subset {\mathds C}$ be such that $L\supset {\mathds Q}(e^{2\pi i \gamma_1},\ldots, e^{2\pi i \gamma_k})$. For any $l\in {\mathds N}_0$, let $$ \mathcal{O}_{{\mathds G}_m^{k+l}}^\gamma:=\mathcal{O}_{{\mathds G}_m^k}^\gamma\boxtimes\mathcal{O}_{{\mathds G}_m^l}, $$ be defined as in formula \eqref{eq:DefOG}. We let $\mathcal{F}^\gamma$ be the $L$-local system on ${\mathds G}_m^{k+l}$ such that $\Sol{{\mathds G}_m^{k+l}}(\mathcal{O}_{{\mathds G}_m^{k+l}}^\gamma)={\mathds C}_{{\mathds G}_m^{k+l}} \otimes_{L_{{\mathds G}_m^{k+l}}}\mathcal{F}^\gamma$. Let $G$ be a given finite group. We say that $\gamma$ is $G$-good if there is an action \begin{align*} \varrho:G&\longrightarrow \Aut({\mathds G}^k_m) \subset \Aut({\mathds G}_m^{k+l})\\ g&\longmapsto \varrho_g \end{align*} such that for all $g\in G$ we have an isomorphism \begin{equation}\label{eq:ConjugationCond} \psi_g:{\varrho}_{g !} \mathcal{F}^\gamma \stackrel{\cong}{\longrightarrow} \overline {\mathcal{F}^\gamma}^g \end{equation} and such that for all $g,h\in G$, the following diagram commutes: \begin{equation}\label{eq:Cocycle} \begin{tikzcd} \varrho_{g !} \varrho_{h !}\mathcal{F}^\gamma \arrow{rr}{\varrho_{g !}\psi_h}\arrow[equal]{drr}{\sim} && \varrho_{g !} \overline{\mathcal{F}^\gamma}^h \cong \overline{\varrho_{g !} \mathcal{F}^\gamma}^h\arrow{r}{\overline{\psi_g}^h} & \overline{\overline{\mathcal{F}^\gamma}^g}^h\cong &[-27pt]\overline{\mathcal{F}^\gamma}^{gh} \\ &&\varrho_{gh !}\mathcal{F}^\gamma\arrow{rru}[swap]{\psi_{gh}}.
\end{tikzcd} \end{equation} \end{definition} Notice that by requiring that the action of $G$ on ${\mathds G}$ factors over ${\mathds G}_m^k$ we impose that $G$ acts trivially on the last $l$ factors of ${\mathds G}_m^{k+l}$. This notion of $G$-goodness will be the crucial point in showing that the hypergeometric systems have a $G$-structure in the sense of Definition \ref{def:GStructure} and are consequently defined over the field $K$. To this end, we apply the previous definition for the case $k=N-1$ and $l=1$, that is, we let $\mathcal{F}^\gamma$ be the $L$-local system on ${\mathds G}$ such that $\Sol{{\mathds G}}(\mathcal{O}^\gamma_{\mathds G}) = {\mathds C}_{\mathds G}\otimes_{L_{\mathds G}} \mathcal{F}^\gamma$. Moreover, for any function $f\in \mathcal{O}_{\mathds G}$, we write ${\mathds E}^{\mathop{\mathrm{Re}} f}\vcentcolon= {\mathds E}^{\mathop{\mathrm{Re}} f}_L$ and ${\mathds E}^{\gamma,f}\vcentcolon=\pi^{-1}\mathcal{F}^\gamma\otimes {\mathds E}^{\mathop{\mathrm{Re}} f}\in\EbI{L}{{\mathds G}_\infty}$ for short. \begin{proposition}\label{prop:ConjProperty} For $N\in {\mathds N}$ and $\gamma\in{\mathds R} ^{N-1}$ let $L\subset {\mathds C}$ be as above. Choose a subfield $K\subset L$ such that \begin{enumerate} \item $L/K$ is a finite Galois extension with $G=\textup{Gal}(L/K)$, \item $\gamma$ is $G$-good. \end{enumerate} Then for any $G$-invariant function $f$, i.e.\ for any $f\in\mathcal{O}_{\mathds G}^{\im(\varrho)}$ (where $\varrho$ is the action from the previous definition), the object $\mathrm{E}p_{\infty !!}{\mathds E}^{\gamma,f}=\mathrm{E}p_{\infty !!}(\pi^{-1}\mathcal{F}^\gamma\otimes{\mathds E}^{\mathop{\mathrm{Re}} f})\in \EbI{L}{(\dG_{m,q})_\infty}$ has a $G$-structure.
\end{proposition} \begin{proof} By definition, the condition that $\gamma$ is $G$-good means that we have isomorphisms $$\psi_g\colon \varrho_{g !}\mathcal{F}^\gamma\xlongrightarrow{\cong} \overline{\mathcal{F}^\gamma}^g$$ for any $g\in G$ (satisfying the compatibility condition given by diagram \eqref{eq:Cocycle}). Furthermore, there are isomorphisms for any $g\in G$ $${\mathds E}^{\mathrm{Re}\, f}\xlongrightarrow{\cong} \overline{{\mathds E}^{\mathrm{Re}\, f}}^g\quad\text{and}\quad \mathrm{E}(\varrho_{g})_{\infty}^{-1}{\mathds E}^{\mathrm{Re}\, f} \cong {\mathds E}^{\mathrm{Re}\, (f\circ \varrho_g)} \cong {\mathds E}^{\mathrm{Re}\, f},$$ where the first isomorphism is given by the action of $g$ on $L$ and the second isomorphism follows from \cite[Remark 3.3.21]{DK16} and the fact that $f$ is invariant under the action of $\varrho_g$. Then, we can conclude using the projection formula: \begin{align}\label{eq:IsosToConjugate} \overline{\mathrm{E}p_{\infty !!}(\pi^{-1}\mathcal{F}^\gamma\otimes {\mathds E}^{\mathop{\mathrm{Re}} f})}^g&\cong \mathrm{E}p_{\infty !!}\big(\pi^{-1}\overline{\mathcal{F}^\gamma}^g\otimes \overline{{\mathds E}^{\mathrm{Re}\,f}}^g\big)\notag\\ &\xleftarrow{\sim} \mathrm{E}p_{\infty !!}\big(\pi^{-1}\varrho_{g !}\mathcal{F}^\gamma\otimes {\mathds E}^{\mathrm{Re}\,f}\big)\notag\\ &\cong \mathrm{E}(p\circ \varrho_g )_{\infty !!}\big(\pi^{-1}\mathcal{F}^\gamma\otimes \mathrm{E}(\varrho_g)_\infty^{-1}{\mathds E}^{\mathrm{Re}\,f}\big)\notag\\ &\cong\mathrm{E}p_{\infty !!}(\pi^{-1}\mathcal{F}^\gamma \otimes {\mathds E}^{\mathop{\mathrm{Re}} f}), \end{align} where we used $p\circ\varrho_g=p$ in the last isomorphism and the second isomorphism is induced by $\psi_g$ and the action of $g$ on the first and second factor of the tensor product, respectively. 
Let us call these isomorphisms $\varphi_g: \mathrm{E}p_{\infty !!}{\mathds E}^{\gamma,f}\to \overline{\mathrm{E}p_{\infty !!}{\mathds E}^{\gamma,f}}^g$ (from right to left in \eqref{eq:IsosToConjugate}). It remains to check that these isomorphisms satisfy the compatibilities from Definition \ref{def:GStructure}, i.e.\ that for $g,h\in G$ the isomorphism \begin{align*} \varphi_{gh}: \mathrm{E}p_{\infty !!}{\mathds E}^{\gamma,f}\to \overline{\mathrm{E}p_{\infty !!}{\mathds E}^{\gamma,f}}^{gh} \end{align*} coincides with the isomorphism \begin{align*}\mathrm{E}p_{\infty !!}{\mathds E}^{\gamma,f}\xrightarrow{\varphi_h} \overline{\mathrm{E}p_{\infty !!}{\mathds E}^{\gamma,f}}^h \xrightarrow{\overline{\varphi_g}^h} \overline{\overline{\mathrm{E}p_{\infty !!}{\mathds E}^{\gamma,f}}^{g}}^h \cong \overline{\mathrm{E}p_{\infty !!}{\mathds E}^{\gamma,f}}^{gh}. \end{align*} This is mostly a matter of checking that ``resolving'' the $gh$-conjugation is equivalent to resolving the $g$- and $h$-conjugations one after another in \eqref{eq:IsosToConjugate}.
Since many of the isomorphisms used in \eqref{eq:IsosToConjugate} are natural (such as the projection formula and the isomorphisms from Lemma \ref{lemmaConjCompat}), the main step is to check the term in parentheses and see that the isomorphism $$ \pi^{-1}\overline{\mathcal{F}^\gamma}^{gh}\otimes \overline{{\mathds E}^{\mathrm{Re}\,f}}^{gh}\xlongleftarrow{\pi^{-1}\psi_{gh}\otimes (gh)} \pi^{-1}\varrho_{gh!}\mathcal{F}^\gamma \otimes {\mathds E}^{\mathrm{Re}\,f}$$ coincides with $$ \pi^{-1}\overline{\overline{\mathcal{F}^\gamma}^g}^h\otimes \overline{\overline{{\mathds E}^{\mathrm{Re}\,f}}^g}^h\xlongleftarrow{\pi^{-1}\overline{\psi_g}^h\otimes\overline{g}^h} \pi^{-1}\overline{\varrho_{g !}\mathcal{F}^\gamma}^h\otimes \overline{{\mathds E}^{\mathrm{Re}\,f}}^h\cong \pi^{-1}\varrho_{g !}\overline{\mathcal{F}^\gamma}^h\otimes \overline{{\mathds E}^{\mathrm{Re}\,f}}^h\xlongleftarrow{\pi^{-1}\varrho_{g !}\psi_h \otimes h} \pi^{-1}\varrho_{g !}\varrho_{h !}\mathcal{F}^\gamma \otimes {\mathds E}^{\mathrm{Re}\,f}$$ and this holds by \eqref{eq:Cocycle} since $\gamma$ was chosen to be $G$-good. \end{proof} We now specialize to the case we are mostly interested in, that is, we let $\gamma=(\beta_1,\ldots,\beta_m,\alpha_2,\ldots,\alpha_n)\in[0,1)^{N-1}$, with $\alpha_1=0$ and $\alpha_i\neq \beta_j$ for all $i\in\{1,\ldots,n\}$ and all $j\in\{1,\ldots,m\}$. Then we have the following. \begin{corollary}\label{cor:Kstruct-directimage} Under the assumptions of the previous Proposition on $\gamma$, $K$ and $f$, the enhanced ind-sheaf $\SolE{(\dG_{m,q})_\infty}(\kappa^+p_+\mathcal{E}^{\gamma,f})$ admits a $K$-structure, i.e.\ there exists $H_K\in \EbI{K}{(\dG_{m,q})_\infty}$ such that $$ \SolE{(\dG_{m,q})_\infty}(\kappa^+p_+\mathcal{E}^{\gamma,f})\cong \pi^{-1}{\mathds C}_{\dG_{m,q}}\otimes_{\pi^{-1}K_{\dG_{m,q}}}H_K.
$$ \end{corollary} \begin{proof} Since the $\mathcal{D}_{\dG_{m,q}}$-module $\kappa^+p_+\mathcal{E}^{\gamma,f}$ is concentrated in degree $0$, we know from Lemma \ref{lemmaMochizukiDegreeZero} that also $\SolE{(\dG_{m,q})_\infty}(\kappa^+p_+\mathcal{E}^{\gamma,f})\in \mathrm{E}^0(\mathrm{I}{\mathds C}_{(\dG_{m,q})_\infty})$. Consequently, by our preliminary observations in \eqref{eq:SolEHypergeom}, we have \begin{align*} \SolE{(\dG_{m,q})_\infty}\big(\kappa^+p_+\mathcal{E}^{\gamma,f}\big)&\cong \mathcal{H}^0\Big(\pi^{-1}{\mathds C}_{\dG_{m,q}}\otimes_{\pi^{-1}L_{\dG_{m,q}}}\mathrm{E} \kappa_\infty^{-1} \mathrm{E} p_{\infty !!} \big({\mathds E}^{\gamma,f}[N-1]\big)\Big)\\ &\cong \pi^{-1}{\mathds C}_{\dG_{m,q}}\otimes_{\pi^{-1}L_{\dG_{m,q}}}\mathcal{H}^0\mathrm{E} \kappa_\infty^{-1} \mathrm{E} p_{\infty !!} \big({\mathds E}^{\gamma,f}[N-1]\big). \end{align*} The object $\mathrm{E} p_{\infty !!}{\mathds E}^{\gamma,f}\in\EbRcI{L}{(\dG_{m,q})_\infty}$ carries a $G$-structure by Proposition \ref{prop:ConjProperty}, and so does $$ \mathcal{H}^0\mathrm{E} \kappa_\infty^{-1} \mathrm{E} p_{\infty !!} \big({\mathds E}^{\gamma,f}[N-1]\big)\in\mathrm{E}^0(\mathrm{I}L_{(\dG_{m,q})_\infty})\cap\EbRcI{L}{(\dG_{m,q})_\infty} $$ by Lemma \ref{lemmaConjCompat}. We are now in the situation to apply Proposition~\ref{prop:KStructure} and obtain the desired object $H_K$ satisfying \begin{align*} \SolE{(\dG_{m,q})_\infty}(\kappa^+p_+\mathcal{E}^{\gamma,f})&\cong \pi^{-1}{\mathds C}_{\dG_{m,q}}\otimes_{\pi^{-1}L_{\dG_{m,q}}}\left(\mathcal{H}^0\mathrm{E} \kappa_\infty^{-1} \mathrm{E} p_{\infty !!}({\mathds E}^{\gamma,f}[N-1])\right)\\ &\cong \pi^{-1}{\mathds C}_{\dG_{m,q}}\otimes_{\pi^{-1}L_{\dG_{m,q}}} (\pi^{-1}L_{\dG_{m,q}} \otimes_{\pi^{-1}K_{\dG_{m,q}}} H_K)\\ &\cong \pi^{-1}{\mathds C}_{\dG_{m,q}}\otimes_{\pi^{-1}K_{\dG_{m,q}}} H_K. \end{align*} \end{proof} Putting these results together, we arrive at the following first main result.
\begin{theorem}\label{theo:MainTheo} Let $n>m$, put $N:=n+m$ and let $\gamma=(\beta_1,\ldots,\beta_m,\alpha_2,\ldots,\alpha_n)\in [0,1)^{N-1}$ be given, where $\alpha_i\neq \beta_j$ for all $i\in\{1,\ldots,n\}$ and for all $j\in \{1,\ldots,m\}$ (with $\alpha_1=0$). Write again $\mathcal{H}(\alpha;\beta)=\mathcal{H}(0,\alpha_2,\ldots,\alpha_n;\beta_1,\ldots,\beta_m)$ for the corresponding irreducible hypergeometric system. Consider $L\subset {\mathds C}$ such that $L\supset{\mathds Q}(e^{2\pi i \gamma_1},\ldots, e^{2\pi i\gamma_{N-1}})$ and let $K\subset L$ be a subfield satisfying the following properties: \begin{enumerate} \item $L/K$ is a finite Galois extension with $G:=\textup{Gal}(L/K)$, \item $\gamma$ is $G$-good, \item the function $$ f:=x_1+\ldots+x_m+\frac{1}{x_{m+1}}+\ldots+ \frac{1}{x_{N-1}}+q\cdot x_1\cdots x_{N-1}\in \mathcal{O}_{\mathds G} $$ is $G$-invariant, in the sense that $f\in \mathcal{O}_{\mathds G}^{\im(\varrho)}$, where $\varrho$ is the action from Definition \ref{def:Ggood}. \end{enumerate} Then the enhanced ind-sheaf $\SolE{(\dG_{m,q})_\infty}\big(\mathcal{H}(\alpha;\beta)\big)$ has a $K$-structure in the sense of Definition \ref{def:KStructure}, which means that there is some $H_K\in \mathrm{E}^\mathrm{b}(\mathrm{I}K_{(\dG_{m,q})_\infty})$ such that $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))\cong \pi^{-1}{\mathds C}_{\dG_{m,q}} \otimes_{\pi^{-1}K_{\dG_{m,q}}} H_K$. \end{theorem} \begin{proof} This follows by combining Corollary \ref{cor:Kstruct-directimage} with the isomorphism \eqref{eq:FourierHypergeom} in Proposition \ref{prop:HypGMSystem}. \end{proof} In the remainder of this section, we will remove the two conditions $n>m$ and $\alpha_1=0$ that we had to impose so far. This will be done in two steps. Consider first the case where we are given numbers $n,m\in {\mathds N}_0$ (with $(n,m)\neq (0,0)$), where now we only ask that $n\geq m$.
Let $\alpha_1,\ldots,\alpha_n\in [0,1)$ and $\beta_1,\ldots,\beta_m\in (0,1)$ be given; the case where at least one of the numbers $\beta_j$ equals zero is thus excluded and will be treated later. Since we still want to consider irreducible hypergeometric modules, we assume that $\alpha_i\neq \beta_j$ for all $i\in\{1,\ldots,n\}$ and all $j\in\{1,\ldots,m\}$. We let $\gamma:= (\beta_1,\ldots,\beta_m,\alpha_1,\ldots,\alpha_n)\in [0,1)^N$, and, as before, we consider a field $L\subset {\mathds C}$ such that $e^{2\pi i \gamma_1},\ldots,e^{2\pi i \gamma_N}\in L$. Then the following holds. \begin{corollary}\label{cor:CaseNonZero} Under these hypotheses, let $K\subset L$ be a field such that \begin{enumerate} \item $L/K$ is finite Galois with Galois group $G$, \item $\gamma$ is $G$-good, where now the numbers $k,l$ in Definition \ref{def:Ggood} are $k=N$ and $l=1$, \item The function $$ f:=x_1+\ldots+x_m+\frac{1}{x_{m+1}}+\ldots+ \frac{1}{x_N}+q\cdot x_1\cdots x_N\in \mathcal{O}_{\mathds G} =\mathcal{O}_{{\mathds G}_m^N}\boxtimes\mathcal{O}_{\dG_{m,q}} $$ is $G$-invariant. \end{enumerate} Then the enhanced ind-sheaf $\SolE{(\dG_{m,q})_\infty}\big(\mathcal{H}(\alpha;\beta)\big)$ has a $K$-structure. \end{corollary} Before entering into the proof, let us make a preliminary remark concerning Betti structures of perverse sheaves in dimension one. Let $\Delta_r\subset {\mathds C}$ be an open disc in ${\mathds C}$ centered at $0$ of radius $r$. Then for any field $K$, we can consider the category of $K$-perverse sheaves on $\Delta_r$ with respect to the stratification $\Delta_r^*,\{0\}$, i.e.\ those perverse sheaves on $\Delta_r$ which are smooth outside $0\in \Delta_r$.
This category is obviously equivalent to the category of germs of $K$-perverse sheaves $\Perv_K(\Delta_r,0)$, and it is well-known (see e.g.\ \cite[Proposition 5.2.26]{Di}) that the latter is equivalent to the category of diagrams of $K$-vector spaces and $K$-linear mappings $$ \begin{tikzcd} E \arrow[rrr, bend left, "can"] & & & F \arrow[lll, bend left, "var"] \end{tikzcd} $$ such that $can \circ var + \id_F \in \Aut_K(F)$ and such that $var \circ can + \id_E \in \Aut_K(E)$ (recall that the spaces $E$ resp. $F$, for a given object $\mathcal{S}^\bullet\in\Perv_K(\Delta_r,0)$, are $E:=\psi_q(\mathcal{S}^\bullet)$ resp. $F:=\varphi_q(\mathcal{S}^\bullet)$, where $q$ is a coordinate on $\Delta_r$). In particular, for a given extension $K\subset L$, an element $\mathcal{S}^\bullet \in \Perv_L(\Delta_r,0)$ is defined over $K$, in the sense that there is some $\mathcal{F}^\bullet \in \Perv_K(\Delta_r,0)$ such that $\mathcal{S}^\bullet = \mathcal{F}^\bullet\otimes_{K_{\Delta_r}} L_{\Delta_r}$ if and only if the diagram corresponding to $\mathcal{S}^\bullet$ is defined over $K$, that is, if and only if there are bases of $E$ and of $F$ such that the corresponding matrices of the maps $can$ and $var$ have only entries in $K$. \begin{proof}[Proof of Corollary \ref{cor:CaseNonZero}] By Theorem~\ref{theo:MainTheo}, the assumptions of this corollary imply that the hypergeometric module $\mathcal{H}(0,\alpha;\beta) =\mathcal{H}(0,\alpha_1,\ldots,\alpha_n;\beta_1,\ldots,\beta_m)$ admits a $K$-structure on enhanced solutions. By the first formula in \cite[Proposition 5.3.3]{Ka}, we have $$\mathcal{H}(0,\alpha;\beta)\cong j^+ \FL (j_+\mathrm{inv}_+\mathcal{H}(\alpha;\beta)),$$ where $\mathrm{inv}\colon {\mathds G}_m\to{\mathds G}_m, q\mapsto q^{-1}$ is given by multiplicative inversion, $j\colon {\mathds G}_m\hookrightarrow{\mathbb A}^1$ is the embedding and $\FL(\bullet)$ is the Fourier transform with kernel $e^{qw}$ for $\mathcal{D}$-modules on ${\mathbb A}^1$. 
By \cite[Lemma 2.9.5]{Ka} (more precisely, by the implication (3) $\Rightarrow$ (4) in loc.cit.), we know (using that $\alpha_i\in(0,1)$) that $$ j_\dag\mathcal{H}(0,\alpha;\beta) = \mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot P=\mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}(q\cdot Q), $$ where $$ Q:=\partial_q\cdot\prod_{i=1}^n \left(q\partial_q -\alpha_i\right)-\prod_{j=1}^m \left(q\partial_q -\beta_j\right). $$ It follows that we have an exact sequence of $\mathcal{D}_{{\mathbb A}^1}$-modules \begin{equation}\label{eq:SplitQ} 0 \longrightarrow \mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot q \longrightarrow j_\dag \mathcal{H}(0,\alpha;\beta) \longrightarrow \mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot Q \longrightarrow 0. \end{equation} It is readily verified that \begin{equation}\label{eq:FLTrafoKatz} \mathrm{inv}_+j^+\big[\mathrm{FL}^{-1}\big(\mathcal{D}_{{\mathbb A}^1}/(Q)\big) \big] \cong \mathcal{H}(\alpha;\beta). \end{equation} Now we choose real numbers $r', r$ with $0<r'<r$ so that we have the covering $$ {\mathds P}^{1,\textup{an}} = \Delta_r \cup \left({\mathds P}^{1,\textup{an}} \backslash \Delta_{r'}\right). $$ In the remainder of this proof, we will compare enhanced and usual solutions for some algebraic $\mathcal{D}_{{\mathbb A}^1}$-modules. For such an object $M\in\textup{Mod}_{\mathrm{hol}}(\mathcal{D}_{{\mathbb A}^1})$ and for any open set $U\subset {\mathds P}^{1,\textup{an}}$, we will write $$ \Sol{U}(M):= {\mathcal{H}\!}om_{\mathcal{D}^{\textup{an}}_{U}}\left((j^\infty_+ M)^{\textup{an}}_{|U},\mathcal{O}^{\textup{an}}_{U}\right) \in \Perv_{{\mathds C}}(U), $$ where $j^\infty:{\mathbb A}^1\hookrightarrow {\mathds P}^1$ is the canonical embedding. 
Consider the solutions of $\mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot Q$ on the disc $\Delta_r$, that is, $\mathcal{P}^\bullet := \Sol{\Delta_r}(\mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot Q)$; then clearly $\mathcal{P}^\bullet \in \Perv_{{\mathds C}}(\Delta_r,0)$. We claim that $\mathcal{P}^\bullet$ has a $K$-structure, i.e., that there is an object $\mathcal{F}^\bullet\in\Perv_K(\Delta_r,0)$ such that $\mathcal{P}^\bullet \cong \mathcal{F}^\bullet \otimes_{K_{\Delta_r}} {\mathds C}_{\Delta_r}$. In order to show this claim, notice first that $\mathcal{S}^\bullet:=\Sol{\Delta_r}(j_\dag \mathcal{H}(0,\alpha;\beta))\cong Rj_*\Sol{\Delta_r^*}(\mathcal{H}(0,\alpha;\beta))$ has a $K$-structure, where $j:\Delta_r^*\hookrightarrow \Delta_r$. Namely, as mentioned above, we already know from Theorem \ref{theo:MainTheo} that under our assumptions, the enhanced ind-sheaf $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(0,\alpha;\beta))$ has a $K$-structure, from which it follows (applying the functor $\mathsf{sh}_{(\dG_{m,q})_\infty}$, see Corollary~\ref{corSheafificationExtension}, and restricting to $\Delta_r^*$) that $\Sol{\Delta_r^*}(\mathcal{H}(0,\alpha;\beta))$ is defined over $K$. Now since $(\mathcal{H}^{-1}\mathcal{S}^\bullet)_0$ resp.\ $(\mathcal{H}^0\mathcal{S}^\bullet)_0$ are the kernel resp.\ the cokernel of $\text{Id}-M$, where $M$ is the monodromy of this local system, they are also defined over $K$, and so is $\mathcal{S}^\bullet$. Moreover, the perverse sheaf $$ \mathcal{D}^\bullet:=\Sol{\Delta_r}(\mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot q) $$ clearly is defined over any subfield of ${\mathds C}$, since $\psi_q(\mathcal{D}^\bullet)=0$ ($\mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot q$ is supported at $q=0$), so that both of the maps $can$ and $var$ for this sheaf are zero.
It thus suffices to show that the morphism $$ \mathcal{S}^\bullet=\Sol{\Delta_r}(j_\dag \mathcal{H}(0,\alpha;\beta)) \longrightarrow \Sol{\Delta_r}(\mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot q)=\mathcal{D}^\bullet $$ comes from a morphism in $\Perv_K(\Delta_r,0)$. By the correspondence mentioned before this proof, it corresponds to a diagram $$ \begin{tikzcd} \psi_q(\mathcal{S}^\bullet) \arrow[rrr, bend left, "c_{\mathcal{S}^\bullet}"] \ar[dddd, "e"] & & & \varphi_q(\mathcal{S}^\bullet) \arrow[lll, bend left, "v_{\mathcal{S}^\bullet}"] \ar[dddd, "f"] \\ \\ \\ \\ \psi_q(\mathcal{D}^\bullet) \arrow[rrr, bend left, "c_{\mathcal{D}^\bullet}"] & & & \varphi_q(\mathcal{D}^\bullet) \arrow[lll, bend left, "v_{\mathcal{D}^\bullet}"] \end{tikzcd} $$ of ${\mathds C}$-vector spaces, and we have $\psi_q(\mathcal{D}^\bullet)=0$, so that $c_{\mathcal{D}^\bullet}=v_{\mathcal{D}^\bullet}=e=0$. Since the ${\mathds C}$-perverse sheaf $\mathcal{S}^\bullet$ admits a $K$-structure, both maps $c_{\mathcal{S}^\bullet}$ and $v_{\mathcal{S}^\bullet}$ come from maps defined over $K$. It remains to show that the same holds for $f$, but this is obvious, since it only amounts to choosing a $K$-structure in the space $\varphi_q(\mathcal{D}^\bullet)$ such that $f$ maps the given $K$-vector subspace of $\varphi_q(\mathcal{S}^\bullet)$ to this $K$-subspace, and such a choice is always possible. 
Now that we have seen that $\Sol{\Delta_r}(\mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot Q)$ is the scalar extension of an object in $\Perv_K(\Delta_r)$, it follows that also $\SolE{{\mathbb A}^1_\infty}(\mathcal{D}_{{\mathbb A}^1}/\mathcal{D}_{{\mathbb A}^1}\cdot Q)$ has a $K$-structure (in the sense of Definition \ref{def:KStructure}) by the following gluing argument: We will abbreviate $\mathcal{S}^\mathrm{E}_Q\vcentcolon=\SolE{{\mathbb A}^1_\infty}(\mathcal{D}_{{\mathbb A}^1}/(Q))$, $\mathcal{S}^\mathrm{E}_{j_\dag\mathcal{H}}\vcentcolon= \SolE{{\mathbb A}^1_\infty}(j_\dag \mathcal{H}(0,\alpha;\beta))$ and $\mathcal{S}^\mathrm{E}_q\vcentcolon= \SolE{{\mathbb A}^1_\infty}(\mathcal{D}_{{\mathbb A}^1}/(q))$, and we write $A\vcentcolon=\Delta_r$, $B\vcentcolon={\mathbb A}^1\setminus \Delta_{r'}$. Recall the notation from \eqref{eq:pitens}. Then one has a commutative diagram whose rows and columns are distinguished triangles $$\begin{tikzcd} (\mathcal{S}^\mathrm{E}_Q)_{A\cap B}\arrow[dashed]{r}\arrow{d} & (\mathcal{S}^\mathrm{E}_{j_\dag\mathcal{H}})_{A\cap B}\arrow[dashed]{r}\arrow[dashed]{d} & (\mathcal{S}^\mathrm{E}_q)_{A\cap B}\arrow[dashed]{r}{+1}\arrow[dashed]{d} & \text{}\\ (\mathcal{S}^\mathrm{E}_Q)_A\oplus (\mathcal{S}^\mathrm{E}_Q)_B\arrow[dashed]{r}\arrow{d} & (\mathcal{S}^\mathrm{E}_{j_\dag\mathcal{H}})_A\oplus (\mathcal{S}^\mathrm{E}_{j_\dag\mathcal{H}})_B\arrow[dashed]{r}\arrow{d} & (\mathcal{S}^\mathrm{E}_q)_A\oplus (\mathcal{S}^\mathrm{E}_q)_B\arrow[dashed]{r}{+1}\arrow{d} & \text{}\\ \SolE{{\mathbb A}^1_\infty}(\mathcal{D}_{{\mathbb A}^1}/(Q))\arrow{r}\arrow{d}{+1} & \SolE{{\mathbb A}^1_\infty}(j_\dag \mathcal{H}(0,\alpha;\beta))\arrow{r}\arrow{d}{+1} & \SolE{{\mathbb A}^1_\infty}(\mathcal{D}_{{\mathbb A}^1}/(q))\arrow{r}{+1}\arrow{d}{+1} & \text{}\\ \text{}& \text{}& \text{} \end{tikzcd}$$ whose rows are induced by the short exact sequence \eqref{eq:SplitQ} and whose columns are induced by the short exact sequence of sheaves (cf.\
\cite[Proposition 2.3.6(iv)]{KS90}) $$0\to {\mathds C}_{A\cap B}\to {\mathds C}_A \oplus {\mathds C}_B\to {\mathds C}_{{\mathbb A}^1}\to 0.$$ Note that $$\big(\SolE{{\mathbb A}^1_\infty}(M)\big)_{\Delta_r} \cong \mathrm{E}i_{(\Delta_r)_\infty!!} {\mathds C}^\mathrm{E}_{(\Delta_r)_\infty}\otimes \pi^{-1}\Sol{\Delta_r}(M)$$ if $M$ has a regular singularity at $0$ and no other singularities in $\overline{\Delta}_r$. Therefore, the above argument shows that the first summands in the middle row (as well as the morphisms between them) are defined over $K$. For the second summands, this is trivial since $(\mathcal{S}^\mathrm{E}_q)_{{\mathbb A}^1\setminus\Delta_{r'}}\cong 0$, and the same holds for the first row. It is also clear that the two dashed vertical arrows are defined over $K$. It then follows from the axioms of a triangulated category that the morphism $(\mathcal{S}^\mathrm{E}_Q)_{A\cap B}\to (\mathcal{S}^\mathrm{E}_Q)_A\oplus (\mathcal{S}^\mathrm{E}_Q)_B$ is defined over $K$ and hence its cone $\SolE{{\mathbb A}^1_\infty}(\mathcal{D}_{{\mathbb A}^1}/(Q))$ admits a $K$-structure. Finally, the topological analogues via the enhanced solution functor of $\mathrm{inv}_+$, $j^+$ and $\mathrm{FL}^{-1}(\bullet)$ are $\mathrm{E}\,\mathrm{inv}_{\infty !!}$, $\mathrm{E}j_\infty^{-1}$ and $\mathrm{E}q_{\infty !!}({\mathds E}^{-xy}\mathbin{\overset{+}{\otimes}} \mathrm{E}p_{\infty}^{-1}(\bullet))$ (see e.g.\ \cite{KS16} for a study of topological Fourier--Laplace transforms for enhanced ind-sheaves), and these preserve $K$-structures by Lemma~\ref{lemmaExtensionCompatibility}. Hence we are done by formula \eqref{eq:FLTrafoKatz}. \end{proof} After the previous discussion, the only case left is when at least one of the numbers $\beta_j$ equals zero. This is treated in the following Theorem, which also summarizes our results.
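The quiver description of $\Perv_K(\Delta_r,0)$ used in the proof above can be made concrete in a small computation. The following Python sketch (a toy illustration with hypothetical matrices, not part of the argument) encodes a germ of a perverse sheaf by the pair of matrices $(can,var)$ and verifies the two invertibility conditions $can\circ var+\id_F\in\Aut_K(F)$ and $var\circ can+\id_E\in\Aut_K(E)$ in exact rational arithmetic; by the criterion recalled before the proof of Corollary \ref{cor:CaseNonZero}, $K$-definability of a diagram then amounts to finding bases in which all entries of $can$ and $var$ lie in $K$.

```python
from fractions import Fraction

def mat_mul(a, b):
    # matrix product of two matrices given as lists of rows
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def plus_id(m):
    # m + identity matrix of the same size
    return [[m[i][j] + (1 if i == j else 0) for j in range(len(m))]
            for i in range(len(m))]

def det(m):
    # determinant by cofactor expansion; fine for the tiny sizes used here
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_perverse_diagram(can, var):
    # can: E -> F and var: F -> E define an object of Perv(Delta_r, 0)
    # iff can∘var + id_F and var∘can + id_E are both invertible
    return det(plus_id(mat_mul(can, var))) != 0 \
        and det(plus_id(mat_mul(var, can))) != 0
```

For instance, the rank-one diagram $can=(1)$, $var=(\tfrac12)$ with entries in ${\mathds Q}$ passes the check, while $can=(1)$, $var=(-1)$ violates the condition $can\circ var+\id_F\in\Aut(F)$.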
Notice that the assumption $n\geq m$ that we still need to make is not very restrictive: If $n<m$, then one replaces the given system $\mathcal{H}(\alpha;\beta)$ by the system $$ \mathcal{H}(-\beta_1,\ldots,-\beta_m;-\alpha_1,\ldots,-\alpha_n)=(\kappa^{n-m+1})^+\text{inv}_+\mathcal{H}(\alpha;\beta), $$ and formulates the hypothesis for this one. \begin{theorem}\label{theo:MainTheo2} Let $n\geq m$, and let $\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_m\in[0,1)$ be given. Let $r\in {\mathds N}_0$ such that $\beta_1=\ldots=\beta_r=0$ and $\beta_j\neq 0$ for $j\in\{r+1,\ldots,m\}$. Suppose moreover that $\alpha_i\neq\beta_j$ for all $i,j$. Let $L\subset {\mathds C}$ as before (i.e.\ containing all of the numbers $e^{2\pi i\alpha_i}$ and $e^{2\pi i \beta_j}$). Let $K\subset L$ be a subfield such that $L/K$ is finite Galois with $G=\textup{Gal}(L/K)$, and such that \begin{enumerate} \item $\gamma:=(\beta_{r+1},\ldots,\beta_m,\alpha_1,\ldots,\alpha_n)\in [0,1)^{N-r}$ is $G$-good, with $k=N-r=n+m-r$, $l=1$. \item The function \begin{equation}\label{eq:LaurPolMainTheo2} f:=x_{r+1}+\ldots+x_m+\frac{1}{x_{m+1}}+\ldots+ \frac{1}{x_N}+q\cdot x_{r+1}\cdots x_N\in \mathcal{O}_{\mathds G} =\mathcal{O}_{{\mathds G}_m^{N-r}}\boxtimes\mathcal{O}_{\dG_{m,q}} \end{equation} lies in $\mathcal{O}_{\mathds G}^{\im(\rho)}$, where again $\rho$ is the action of the group $G$ from Definition \ref{def:Ggood}. Notice that we use $x_{r+1},\ldots,x_N,q$ as coordinates on ${\mathds G}$ here. \end{enumerate} Then $\SolE{(\dG_{m,q})_\infty}\big(\mathcal{H}(\alpha;\beta)\big)$ has a $K$-structure. \end{theorem} \begin{proof} If $r=0$, then the result follows directly from the previous Corollary \ref{cor:CaseNonZero}. Otherwise, we will proceed by induction on $r$.
Let us write $$ \mathcal{H}':=\mathcal{H}(\alpha_1,\ldots,\alpha_n;\underbrace{0,\ldots,0}_{r-1\; \textup{times}},\beta_{r+1},\ldots,\beta_m). $$ We can assume that $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}')$ is defined over $K$: for $r=1$, this follows again from Corollary \ref{cor:CaseNonZero}, using our assumptions on $\gamma$ and on $f$. If $r>1$, this is precisely the induction hypothesis. Now we use the second formula in \cite[Proposition 5.3.3]{Ka}, which states that $$ \mathcal{H}(\alpha;\beta)= \kappa^+\text{inv}_+j^+\FL(j_+\mathcal{H}'). $$ Again, the functors involved in this formula correspond, via $\SolE{}(\bullet)$, to the functors $\mathrm{E}\kappa_\infty^{-1}$, $\mathrm{E}\,\mathrm{inv}_{\infty !!}$, $\mathrm{E}j_\infty^{-1}$, $\mathrm{E}q_{\infty !!}({\mathds E}^{xy}\mathbin{\overset{+}{\otimes}} \mathrm{E}p_{\infty}^{-1}(\bullet))$ and $\mathrm{E}j_{\infty !!}$. They all preserve $K$-structures by Lemma~\ref{lemmaExtensionCompatibility}. Since, as just explained, $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}')$ is defined over $K$, we therefore obtain that also $\SolE{(\dG_{m,q})_\infty}\left(\mathcal{H}(\alpha;\beta)\right)$ has a $K$-structure, which is what we had to prove. \end{proof} \label{page:proofMainTheoIntro} We obtain the version of this Theorem stated in the introduction as a simple consequence. \begin{proof}[Proof of Theorem \ref{theo:MainTheoIntro}] Under the hypotheses of Theorem \ref{theo:MainTheoIntro}, let $r\in\{1,\ldots,m\}$ be such that $\beta_1=\ldots=\beta_r=0$ and $\beta_j\neq 0$ for $j\in\{r+1,\ldots,m\}$. Then we have an action of $G:=\textup{Gal}(L/K)$ on the set $e^{2\pi i \alpha_1},\ldots, e^{2\pi i \alpha_n}$ and on the set $e^{2\pi i \beta_{r+1}},\ldots, e^{2\pi i \beta_m}$ (since $e^{2\pi i \beta_j}=1$ for $1\leq j \leq r$, and this value is fixed by any element of $G$), which means exactly that the vector $\gamma=(\beta_{r+1},\ldots,\beta_m,\alpha_1,\ldots,\alpha_n)$ is $G$-good.
Moreover, the natural action $\rho$ of $G$ on ${\mathds G}_m^{N-r}\times \dG_{m,q}$ by permutation of the first $m-r$ and the next $n$ coordinates evidently satisfies that the function $f$ from Formula \eqref{eq:LaurPolMainTheo2} lies in $\mathcal{O}_{{\mathds G}}^{\im(\rho)}$. Then the result follows from the previous Theorem \ref{theo:MainTheo2}. \end{proof} For an irregular holonomic $\mathcal{D}$-module, the perverse sheaf of solutions is not a primary object of study. Nevertheless, it is worth mentioning that in the situation just studied, this object carries a $K$-structure as well. \begin{corollary}\label{cor:Sol} Under the assumptions of Theorem~\ref{theo:MainTheo2} the perverse sheaf of solutions $\Sol{\dG_{m,q}}(\mathcal{H}(\alpha;\beta))[1]$ has a $K$-structure. \end{corollary} \begin{proof} Since $\Sol{\dG_{m,q}}(\mathcal{H}(\alpha;\beta))\cong \mathsf{sh}_{(\dG_{m,q})_\infty}\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$, this follows directly from Corollary~\ref{corSheafificationExtension}. \end{proof} In particular, we get the following result in the regular case. \begin{corollary}\label{cor:LocSystem} Assume that $n=m$ and the hypotheses of Theorem~\ref{theo:MainTheo2} are satisfied. Then the ${\mathds C}$-local system $\Sol{\dG_{m,q}\setminus\{1\}}(\mathcal{H}(\alpha;\beta)|_{\dG_{m,q}\setminus\{1\}})$ is the complexification of a $K$-local system. \end{corollary} \begin{proof} This follows directly from Corollary~\ref{cor:Sol} and Lemma~\ref{lemmaExtensionCompatibility} by pulling back $\Sol{\dG_{m,q}}(\mathcal{H}(\alpha;\beta))$ to $\dG_{m,q}\setminus\{1\}$, where $\mathcal{H}(\alpha;\beta)$ is nonsingular (hence described by a local system). \end{proof} \section{Applications} In this section, we will discuss a few interesting cases in which Theorem \ref{theo:MainTheo2} can be applied. The first one concerns real structures and is inspired by \cite[Theorem 2]{Fedorov}, which we will reprove afterwards as a simple corollary. 
\begin{theorem}\label{theo:RealStruct} Let $n\geq m$, consider numbers $\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_m \in[0,1)$, with $\alpha_i\neq \beta_j$, and let $s\in\{0,\ldots,n\}$ and $r\in\{0,\ldots,m\}$ such that \begin{enumerate} \item $0=\alpha_1=\ldots=\alpha_s<\alpha_{s+1}\leq\ldots\leq\alpha_n<1$, \item $0=\beta_1=\ldots=\beta_r<\beta_{r+1}\leq\ldots\leq\beta_m<1$, \item $\alpha_{s+i}+\alpha_{n+1-i}=1$ for all $i\in\{1,\ldots,n-s\}$ and \item $\beta_{r+j}+\beta_{m+1-j}=1$ for all $j\in\{1,\ldots,m-r\}$. \end{enumerate} (obviously, since $\alpha_i\neq \beta_j$, at most one of the numbers $r$ and $s$ can be positive). Then $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$ has an ${\mathds R}$-structure in the sense of Definition \ref{def:KStructure}. \end{theorem} \begin{proof} We put $\gamma:=(\beta_{r+1},\ldots,\beta_m,\alpha_1,\ldots,\alpha_n) \in [0,1)^{n+m-r}$. Take $L$ to be equal to ${\mathds C}$ and $K={\mathds R}$, so that $G=\textup{Gal}({\mathds C}/{\mathds R})={\mathds Z}/2{\mathds Z}$. We claim that with these choices, $\gamma$ is $G$-good. Namely, consider the action $\varrho:G \longrightarrow \Aut({\mathds G})$ (where ${\mathds G}:={\mathds G}_m^{n+m-r}\times\dG_{m,q} =\Spec {\mathds C}[x_{r+1}^\pm,\ldots,x_{m+n}^\pm,q^\pm]$) such that $\varrho_{[1]}(x_{r+j})=x_{m+1-j}$ and $\varrho_{[1]}(x_{m+s+i})=x_{n+m+1-i}$ (it is readily checked that $\varrho_{[1]}$ is an involution, thus defining an action of $G$). Then assumptions 3.\ and 4.\ imply the condition in equation \eqref{eq:ConjugationCond} for $g=[1]\in {\mathds Z}/2{\mathds Z}$; notice that in this case, the $g$-conjugate of $\mathcal{F}^\gamma$ is simply the ordinary conjugate $\overline{\mathcal{F}^\gamma}$. Hence $\gamma$ is $G$-good.
Moreover, it is clear that the Laurent polynomial $$ f=x_{r+1}+\ldots+x_m+\frac{1}{x_{m+1}}+\ldots+ \frac{1}{x_{m+n}}+q\cdot x_{r+1}\cdots x_{m+n} $$ is invariant under $G$ (more precisely, $f\in \mathcal{O}_{\mathds G}^{\im(\varrho)}$) since $G$ acts simply by exchanging pairs of the first $m-r$ and the last $n$ coordinates. Hence we can apply Theorem \ref{theo:MainTheo2}, which tells us that $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$ has an ${\mathds R}$-structure, i.e.\ is obtained via extension of scalars from an enhanced ind-sheaf defined over ${\mathds R}$. \end{proof} As a consequence of Corollary~\ref{cor:LocSystem}, we can now easily get back (the Betti structure part of) Fedorov's result \cite[Theorem 2]{Fedorov} here. \begin{corollary} Let numbers $\alpha_1,\ldots,\alpha_n$ and $\beta_1,\ldots,\beta_n$ in $[0,1)$ be given and assume that they satisfy the assumptions of the previous theorem. Then the local system on ${\mathds P}^1\setminus\{0,1,\infty\}$ associated to the corresponding hypergeometric equation via the Riemann--Hilbert correspondence is the complexification of a local system of real vector spaces. \end{corollary} Next we consider the case when all $\alpha_i,\beta_j$ are rational. Then the field $L$ from above can be chosen to be cyclotomic: more precisely, let $c\in {\mathds Z}$ be such that $c\alpha_i,c\beta_j\in {\mathds Z}$, and put $L:={\mathds Q}(\zeta)$, where $\zeta$ is a primitive $c$-th root of unity, so that $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$ is a priori defined over $L$. Let $H=\textup{Gal}(L/{\mathds Q})\cong({\mathds Z}/c{\mathds Z})^*$. For any $g\in H$ and for any $\delta\in [0,1)$ with $e^{2\pi i\delta}\in L$, we write $\rho_g(\delta)=\widetilde{\delta}\in[0,1)$ if $$ g.e^{2\pi i \delta}= e^{2\pi i \widetilde{\delta}}. $$ In this situation, put $M:=\{\beta_j\}_{j=1,\ldots,m}$, $N:=\{\alpha_i\}_{i=1,\ldots,n}$.
\begin{lemma}\label{lem:CycloFields} Let $G < H$ be a subgroup such that $\rho_g(M)\subset M$ and $\rho_g(N)\subset N$ for all $g\in G$. Then $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$ has a $K$-structure in the sense of Definition \ref{def:KStructure}, where $K:=L^G$ is the fixed field of $G$. \end{lemma} \begin{proof} First we remark that the inclusions $\rho_g(M)\subset M$ and $\rho_g(N)\subset N$ are automatically equalities, and that we obtain an action $$ \rho: G \longrightarrow S(M)\times S(N)\cong S_m\times S_n. $$ Notice that if $r\in\{1,\ldots,m\}$ is as before, i.e.\ $\beta_j=0$ for $j=1,\ldots,r$ and $\beta_j\neq 0$ for $j>r$, then since necessarily $\rho_g(0)=0$, this action factors over $S(\{\beta_{r+1},\ldots,\beta_m\})\times S(N)\cong S_{m-r}\times S_n$. By construction we have $$ \overline{\mathcal{F}^\gamma}^g = \mathcal{F}^{\rho^{-1}_g(\gamma)} $$ for each $g\in G$, where $\rho_g(\gamma):= (\rho_g(\beta_{r+1}),\ldots, \rho_g(\beta_m), \rho_g(\alpha_1),\ldots,\rho_g(\alpha_n))$. Again, since $G$ acts on ${\mathds G}_m^{m-r+n}$ via symmetry groups in the first $m-r$ and the last $n$ coordinates, we have that the function $$ f=x_{r+1}+\ldots+x_m+\frac{1}{x_{m+1}}+\ldots+\frac{1}{x_{m+n}}+ q\cdot x_{r+1}\cdot\ldots\cdot x_{m+n} $$ lies in $\mathcal{O}^{\im(\varrho)}_{\mathds G}$. Then the result follows by applying Theorem \ref{theo:MainTheo2}. \end{proof} Notice that if the hypotheses of Theorem \ref{theo:RealStruct} are satisfied and if we suppose moreover that $\alpha_i,\beta_j\in {\mathds Q}$, then Theorem \ref{theo:RealStruct} follows as a special case from Lemma \ref{lem:CycloFields}, since the fixed field $K$ will automatically be contained in ${\mathds R}$, and hence $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$ acquires an ${\mathds R}$-structure as well.
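For rational parameters, the stability condition $\rho_g(M)\subset M$, $\rho_g(N)\subset N$ in Lemma \ref{lem:CycloFields} is a finite arithmetic check: an element $g\in H\cong({\mathds Z}/c{\mathds Z})^*$ sends $e^{2\pi i\delta}$ to $e^{2\pi i g\delta}$, i.e.\ it multiplies the numerator of $\delta$ (written with denominator $c$) by $g$ modulo $c$. The following Python sketch tests stability under the full group $H$; the parameter choices in the examples are illustrative and not taken from the text.

```python
from fractions import Fraction
from math import gcd

def galois_stable(params, c):
    # params: rationals delta in [0,1) such that c*delta is an integer.
    # Tests whether the set {e^{2 pi i delta}} is stable under the full
    # Galois group H = (Z/cZ)^*, acting by delta |-> g*delta mod 1.
    scaled = [Fraction(d) * c for d in params]
    assert all(s.denominator == 1 for s in scaled), "c must clear all denominators"
    residues = {int(s) % c for s in scaled}          # numerators mod c
    units = [g for g in range(1, c) if gcd(g, c) == 1]
    return all({(g * a) % c for a in residues} == residues for g in units)
```

For example, the full set $\{0,\tfrac15,\tfrac25,\tfrac35,\tfrac45\}$ is stable under $({\mathds Z}/5{\mathds Z})^*$, so in this case the subgroup $G$ of Lemma \ref{lem:CycloFields} can be taken to be all of $H$, while the single parameter $\tfrac15$ is not stable.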
Finally, if in Lemma \ref{lem:CycloFields} we have $\rho_g(M)\subset M$ and $\rho_g(N)\subset N$ for all $g\in H$, then we automatically get that $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$ is defined over ${\mathds Q}$. This condition can actually be rephrased in a nicer way. \begin{theorem}\label{theo:RatStruct} Let $s\in\{0,\ldots,n\}$ and $r\in\{0,\ldots,m\}$ be as in Theorem \ref{theo:RealStruct}. Suppose that there exist non-negative integers $e,f$ and positive integers $r_1,\ldots, r_e$ and $s_1,\ldots,s_f$ such that $n-s=\varphi(r_1)+\ldots+\varphi(r_e)$ and $m-r=\varphi(s_1)+\ldots+\varphi(s_f)$, where $\varphi$ is Euler's $\varphi$-function. If now we have $$ \prod_{i=s+1}^n (t\partial_t-\alpha_i)=\prod_{i'=1}^e \prod_{d\in ({\mathds Z}/r_{i'}{\mathds Z})^*} \left(t\partial_t-\frac{d}{r_{i'}}\right) $$ and $$ \prod_{j=r+1}^m (t\partial_t-\beta_j)=\prod_{j'=1}^f \prod_{d\in ({\mathds Z}/s_{j'}{\mathds Z})^*} \left(t\partial_t-\frac{d}{s_{j'}}\right), $$ then $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$ has a ${\mathds Q}$-structure. \end{theorem} \begin{proof} It is elementary to verify that the assumption implies that the group $G$ in Lemma \ref{lem:CycloFields} can be taken to be the full Galois group $\textup{Gal}(L/{\mathds Q})\cong ({\mathds Z}/c{\mathds Z})^*$ (notice that $c$ is divisible by $\text{lcm}(r_1,\ldots,r_e,s_1,\ldots,s_f)$), and then $K=L^G={\mathds Q}$. \end{proof} A special case of this result is worth mentioning, since it is related to various examples of mirror symmetry for toric orbifolds.
\begin{corollary}\label{cor:RatStructuresWeightedProjSp} Let $m,n$ and $r,s$ be as above, and suppose that there are $p,q\in {\mathds Z}_{\geq 0}$ and $w_1,\ldots, w_p, v_1,\ldots,v_q \in {\mathds Z}_{>0}$ such that $n-s=w_1+\ldots+w_p-p$ and $m-r=v_1+\ldots+v_q-q$ and such that $$ \prod_{i=s+1}^n (t\partial_t-\alpha_i)=\prod_{i'=1}^p \prod_{d=1}^{w_{i'}-1} \left(t\partial_t-\frac{d}{w_{i'}}\right) $$ and $$ \prod_{j=r+1}^m (t\partial_t-\beta_j)=\prod_{j'=1}^q \prod_{d=1}^{v_{j'}-1} \left(t\partial_t-\frac{d}{v_{j'}}\right). $$ Then $\SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))$ has a ${\mathds Q}$-structure. \end{corollary} \begin{proof} This is a direct consequence of the previous theorem: If all $w_i$ and all $v_j$ are prime, then the two statements coincide; otherwise, we consider the decomposition of each $w_i$ and $v_j$ into prime factors, and we apply Theorem \ref{theo:RatStruct} to them. \end{proof} \begin{remark} Suppose that we are given $(\alpha;\beta)\in [0,1)^N$ as before such that the following variant of the assumptions of the previous theorem holds: There exist $p,q\in {\mathds Z}_{\geq 0}$ and $w_2,\ldots, w_p, v_1,\ldots,v_q\in {\mathds Z}_{>0}$ such that $n=w_1+\ldots+w_p$ and $m=v_1+\ldots+v_q$ and such that $$ \prod_{i=1}^n (t\partial_t-\alpha_i)=\prod_{i'=1}^p \prod_{d=0}^{w_{i'}-1} \left(t\partial_t-\frac{d}{w_{i'}}\right) $$ (where $w_1=0$, so that necessarily $\alpha_1=0$) and $$ \prod_{j=1}^m (t\partial_t-\beta_j)=\prod_{j'=1}^q \prod_{d=0}^{v_{j'}-1} \left(t\partial_t-\frac{d}{v_{j'}}\right). $$ Notice that here both some of the numbers $\alpha_i$ and some of the numbers $\beta_j$ are equal to zero, so that the corresponding module $\mathcal{H}(\alpha;\beta)$ is no longer necessarily irreducible.
We now consider the following Laurent polynomial: $$ \widetilde{f}:=x_1+\ldots+x_q+\frac{1}{x_{q+1}}+\ldots+\frac{1}{x_{q+p-1}}+q\cdot x_1^{v_1}\cdots x_q^{v_q} \cdot x_{q+1}^{w_2}\cdots x_{p+q-1}^{w_p} \in \mathcal{O}_{\widetilde{{\mathds G}}}, $$ where this time $\widetilde{{\mathds G}}={\mathds G}_m^{q+p-1}\times\dG_{m,q}$. Let again $p:\widetilde{{\mathds G}}\twoheadrightarrow \dG_{m,q}$ be the projection to the last factor; then it can be shown along the lines of Proposition \ref{prop:HypGMSystem} that we have $$ \kappa^+\mathcal{H}^0 p_+ \mathcal{E}^{\widetilde{f}}_{\widetilde{{\mathds G}}} \cong \mathcal{H}(\alpha;\beta). $$ This isomorphism is essentially well-known; e.g.\ the Laurent polynomial $\widetilde{f}$ appears, in the case where $q=0$, as the Landau--Ginzburg model for the quantum cohomology of weighted projective spaces, see \cite{DS2} for a thorough discussion of this example. As a consequence, one can show directly that this (reducible) module $\mathcal{H}(\alpha;\beta)$ has a ${\mathds Q}$-structure since it is constructed via standard functors from an object (namely $\mathcal{E}^{\widetilde{f}}_{\widetilde{{\mathds G}}}$) which already has a ${\mathds Q}$-structure. This is in contrast to the cases discussed before, where we start with the module $\mathcal{E}^{\gamma,f}$, on which there is no a priori Betti structure. One can also relate (for the case of parameters $\alpha,\beta$ satisfying the hypotheses of this remark) the two approaches by comparing the Laurent polynomials $f$ and $\widetilde{f}$. This yields an interesting geometric explanation for these two different approaches to the existence of Betti structures. We plan to discuss these issues in a subsequent work. \end{remark} \section{Consequences for Stokes matrices} \label{sec:Stokes} We apply our results to questions regarding the Stokes matrices for hypergeometric systems at the irregular singular point.
In Section 9.8 of \cite{DK16}, the authors explain how the Stokes matrices or Stokes multipliers are encoded in the enhanced ind-sheaf of the solutions. We assume that we are in the situation of Theorem \ref{theo:MainTheo2}, so that the enhanced solutions carry a $K$-structure \[ \SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))\cong \pi^{-1}{\mathds C}_{\dG_{m,q}} \otimes_{\pi^{-1}K_{\dG_{m,q}}} H_K \] for some $H_K\in \mathrm{E}^\mathrm{b}(\mathrm{I}K_{(\dG_{m,q})_\infty})$. Recall that we considered parameters $\alpha_1, \ldots, \alpha_n, \beta_1, \ldots, \beta_m \in {\mathds C}$ with $n>m$. Then $\mathcal{H}(\alpha;\beta)$ is irregular singular at infinity and, writing $d:=n-m$, it is ramified of degree $d$ there. Let us denote by $\rho:\mathds{G}_{m,u} \to \dG_{m,q}$ the ramification map $\rho:u \mapsto u^d=q$. As usual, we will consider the pull-back of $\mathcal{H}(\alpha;\beta)$ with respect to $\rho$ and study the Stokes matrices of the resulting enhanced ind-sheaf with the induced $K$-structure (see Lemma \ref{lemmaExtensionCompatibility} (ii)): \begin{equation}\label{eq:Kstrsol} \mathrm{E} \rho^{-1} \SolE{(\dG_{m,q})_\infty}(\mathcal{H}(\alpha;\beta))\cong \pi^{-1}{\mathds C}_{\mathds{G}_{m,u}} \otimes_{\pi^{-1}K_{\mathds{G}_{m,u}}} \mathrm{E} \rho^{-1} H_K. \end{equation} Let us write $\widetilde{H}_K := \mathrm{E} \rho^{-1} H_K \in \mathrm{E}^\mathrm{b}(\mathrm{I}K_{(\mathds{G}_{m,u})_\infty})$. The pull-back $\rho^+\mathcal{H}(\alpha;\beta)$ is of slope one and there is a finite set $C_1 \subset {\mathds C}^\times$ such that the formal exponential factors of $\rho^+\mathcal{H}(\alpha;\beta)$ are the elements of \[ \{e^0\} \cup \{ e^{cu} \mid c \in C_1 \} \] (see \cite{SabStokes} for these notions). Let us write $C:=\{0\} \cup C_1$. \begin{remark}\label{rem:Katz} With additional assumptions on the parameters $(\alpha;\beta)$, the exponential factors can be determined rather easily.
If the non-resonant parameters satisfy $d\alpha_j \not\in {\mathds Z}$ for all $j$ and if the module is not Kummer induced (see \cite[Kummer Recognition Lemma 3.5.6]{Ka}), a theorem of N. Katz \cite[Theorem 6.2.1]{Ka} relates $\mathcal{H}(\alpha;\beta)$ to the Fourier--Laplace transform of a regular singular hypergeometric module. Applying the stationary phase formula of C. Sabbah \cite{SaStat}, one then deduces that $C_1$ is given as $C_1=\{ d\cdot \zeta \mid \zeta \in \mu_d \}$, where $\mu_d$ is the group of $d$-th roots of unity. \end{remark} We would like to apply results from \cite{MocCurveTest}. There is a natural pre-order for subanalytic functions $f,g$ on a bordered space $(M^\circ,M)$ defined as \[ f \prec g :\Leftrightarrow f-g \text{ is bounded from above on $U \cap M^\circ$ for any relatively compact subset $U$ of $M$.} \] It induces an equivalence relation by setting $f \sim g :\Leftrightarrow f\prec g$ and $g \prec f$. We will write $[f]$ for the equivalence class of a function. Let $\Delta$ be a small open neighbourhood of $\infty$ and let $\varpi\colon \widetilde{\Delta}\to \Delta$ be the oriented real blow-up of $\Delta$ at $\infty$. Let us write $\Delta^\circ \vcentcolon= \Delta \smallsetminus \{ \infty \}$ with its inclusion $\iota\colon \Delta^\circ \hookrightarrow \widetilde{\Delta}$, and consider the bordered space $\mathbf{\Delta} \vcentcolon=(\Delta^\circ, \widetilde{\Delta})$. Since we are interested in the local situation at infinity, we will restrict all sheaves to $\Delta$ (or $\widetilde{\Delta}$). For example, we consider the enhanced ind-sheaf \(\SolE{(\Delta^\circ,\Delta)}((\rho^+\mathcal{H}(\alpha;\beta))|_{\Delta^\circ})\) instead of the full version on $(\mathds{G}_{m,u})_\infty$. Let us remark that we also consider $\varpi$ as a morphism of bordered spaces \( \varpi\colon \mathbf{\Delta}=(\Delta^\circ,\widetilde{\Delta}) \to (\Delta^\circ,\Delta) \). 
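For instance (this is a standard observation, verified directly from the definition), consider two linear exponents $\mathrm{Re}(cu)$ and $\mathrm{Re}(c'u)$ with $c \neq c'$, as they occur for the exponential factors $e^{cu}$ with $c \in C$. Since $\mathrm{Re}((c-c')u)=|u|\cdot\mathrm{Re}\big((c-c')e^{i\arg(u)}\big)$, one finds that on all sufficiently small sectorial neighbourhoods of a boundary direction $e^{i\theta} \in \varpi^{-1}(\infty)$ one has
\[
[\mathrm{Re}(cu)] \prec [\mathrm{Re}(c'u)] \quad \Longleftrightarrow \quad \mathrm{Re}\big((c-c')e^{i\theta}\big) < 0.
\]
The finitely many directions $\theta$ with $\mathrm{Re}((c-c')e^{i\theta})=0$ are the Stokes directions of the pair $(cu,c'u)$.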
As explained in \cite[Section 9]{DK16}, we know that we can cover $\Delta^\circ$ by sectors $\Sigma_k$ such that we have trivializations of the enhanced solutions of $\rho^+\mathcal{H}(\alpha;\beta)$ of the form \[ \pi^{-1}{\mathds C}_{\Sigma_k}\otimes \SolE{(\Delta^\circ,\Delta)}(\rho^+\mathcal{H}(\alpha;\beta))\cong \pi^{-1}{\mathds C}_{\Sigma_k} \otimes \bigoplus_{c\in C} \big({\mathds E}^{\mathrm{Re}(cu)}_{(\Delta^\circ,\Delta),{\mathds C}}\big)^{r_c} \] with the index set $C$ from above. We write ${\mathds E}^{\mathrm{Re}(cu)}_{(\Delta^\circ,\Delta),{\mathds C}}$ here for the enhanced ind-sheaf ${\mathds E}^{\mathrm{Re}(cu)}_{\mathds C}$ defined in \eqref{eq:DefExp} in order to emphasize the bordered space on which it lives. Since \[ \mathrm{E} \varpi^{-1} {\mathds E}^{\mathrm{Re}(cu)}_{(\Delta^\circ,\Delta),{\mathds C}} \cong {\mathds E}^{\mathrm{Re}(cu)}_{\mathbf{\Delta},{\mathds C}} , \] we deduce the trivializations \begin{equation}\label{eq:sectorK} \pi^{-1}{\mathds C}_{\Sigma_k}\otimes \mathrm{E} \varpi^{-1} \SolE{(\Delta^\circ,\Delta)}(\rho^+\mathcal{H}(\alpha;\beta))\cong \pi^{-1}{\mathds C}_{\Sigma_k} \otimes \bigoplus_{c\in C} \big({\mathds E}^{\mathrm{Re}(cu)}_{\mathbf{\Delta},{\mathds C}}\big)^{r_c}. \end{equation} We will now work on the bordered space $\mathbf{\Delta}$ and hence omit the subscript by simply writing ${\mathds E}^{\mathrm{Re}(cu)}_{{\mathds C}}$ again. 
It is important to remark that the induced filtration \begin{equation}\label{eq:enhStokesFiltr} F_\mathfrak{a} \big(\pi^{-1}{\mathds C}_{\Sigma_k}\otimes \mathrm{E} \varpi^{-1} \SolE{(\Delta^\circ,\Delta)}(\rho^+\mathcal{H}(\alpha;\beta))\big) \cong \pi^{-1}{\mathds C}_{\Sigma_k} \otimes \bigoplus_{c\in C: [cu]\prec \mathfrak{a}} \big({\mathds E}^{\mathrm{Re}(cu)}_{\mathds C}\big)^{r_c} \end{equation} indexed by classes $\mathfrak{a}$ of subanalytic functions is well-defined, i.e.\ does not depend on the choice of the isomorphism in \eqref{eq:sectorK} (cf.\ \cite[Lemma 5.15]{MocCurveTest}). Since the exponential factors have pole order at most one at infinity, we can obtain these splittings on two sectors, each of width slightly greater than $\pi$. Let us choose sectors $S_\pm$ (in $\widetilde{\Delta}$) such that the intervals $\varpi^{-1}(\infty)\cap S_\pm$ cover $\varpi^{-1}(\infty)$, and let \[ \sigma_+ \cup \sigma_- = S_+ \cap S_- \] be the union of the two smaller sectors $\sigma_\pm$, the overlaps of the sectors $S_\pm$. The choice of the sectors $S_\pm$ has some impact on the Stokes matrices one wants to compute -- in principle it amounts to the action of a braid group. Let us denote by $\mathcal{L}:= \big(\mathsf{sh}_{(\mathds{G}_{m,u})_\infty}\SolE{(\mathds{G}_{m,u})_\infty}(\rho^+\mathcal{H}(\alpha;\beta))\big)\big|_{\Delta^\circ}$ the local system of solutions on the punctured disc and by $\widetilde{\mathcal{L}}:=\iota_\ast \mathcal{L}$ its extension to the boundary. The Stokes filtration on the local system $\widetilde{\mathcal{L}}|_{\varpi^{-1}(\infty)}$ is the filtration inherited from the filtration in \eqref{eq:enhStokesFiltr} via the sheafification functor. 
On the sectors $S_\pm$, we have splittings of these filtrations as in \eqref{eq:sectorK}, say \[ \psi_{\pm}\colon \widetilde{\mathcal{L}}|_{(\varpi^{-1}(\infty)\cap S_\pm)} \stackrel{\cong}{\to} \bigoplus_{c\in C} (\mathcal{E}_{c,\pm})^{r_c} \] where $\mathcal{E}_{c,\pm}$ is the constant rank one local system on the interval $\varpi^{-1}(\infty)\cap S_\pm$ coming from $\iota_*\mathsf{sh}_{\mathbf{\Delta}}{\mathds E}^{\mathrm{Re}(cu)}_{\mathds C}$. The Stokes matrices are defined to be the matrices representing the transition isomorphisms on the overlaps \[ \mathcal{S}_+ = \big( \psi_- \circ \psi_+^{-1}\big)\big|_{\varpi^{-1}(\infty)\cap\sigma_+} \text{ and } \mathcal{S}_- = \big( \psi_- \circ \psi_+^{-1}\big)\big|_{\varpi^{-1}(\infty)\cap\sigma_-}. \] (Let us remark that there are different conventions, and the Stokes matrices are sometimes also defined as the inverse isomorphisms; this is not important for our purposes. Here, we did not describe the orientation of the sectors and their overlaps explicitly.) In order to prove that one can arrive at Stokes matrices with entries in the subfield $K \subset {\mathds C}$, we have to show that the local system $\widetilde{\mathcal{L}}|_{\varpi^{-1}(\infty)}$, its Stokes filtration and its splittings can be defined over $K$. We are indebted to T.\ Mochizuki for pointing out the idea of how to prove this. First, note that both sides of \eqref{eq:sectorK} have a $K$-structure, so that we can write this isomorphism in the form \begin{equation}\label{eq:tildeHK} \pi^{-1}{\mathds C}_{\Delta^\circ} \otimes_{\pi^{-1}K_{\Delta^\circ}} \pi^{-1}K_{\Sigma_k} \otimes \mathrm{E} \varpi^{-1}\widetilde{H}_K \cong \pi^{-1}{\mathds C}_{\Delta^\circ} \otimes_{\pi^{-1}K_{\Delta^\circ}} \pi^{-1}K_{\Sigma_k} \otimes \bigoplus_{c\in C} \big({\mathds E}^{\mathrm{Re}(cu)}_K\big)^{r_c}. 
\end{equation} Let us denote by $\widetilde{\mathcal{L}}_K:=\iota_\ast (\mathsf{sh}_{(\mathds{G}_{m,u})_\infty}\widetilde{H}_K)|_{\Delta^\circ}$ the associated $K$-structure of $\widetilde{\mathcal{L}}$. We know that $\mathrm{E}\varpi^{-1}\widetilde{H}_K$ is an ${\mathds R}$-constructible enhanced ind-sheaf and we deduce from \cite[Lemma 4.9.9]{DK16} that there exists a subanalytic stratification $\widetilde{\Delta}=\bigsqcup_{\lambda \in \Lambda} S_\lambda$ refining $\widetilde{\Delta}=\varpi^{-1}(\infty) \sqcup \Delta^\circ$ such that the following holds: For each stratum $S_\lambda \subset \Delta^\circ$, there exist a finite set of ${\mathds R} \cup \{\infty\}$-valued subanalytic functions $f_{\lambda,j}< g_{\lambda,j}$, say for $j=1, \ldots,m$, and isomorphisms \begin{equation}\label{eq:decompStrat} \pi^{-1} K_{S_\lambda} \otimes \mathrm{E}\varpi^{-1}\widetilde{H}_K \cong \pi^{-1} K_{S_\lambda} \otimes \bigoplus_{j=1}^m K^\mathrm{E}_{\mathbf{\Delta}} \mathbin{\overset{+}{\otimes}} K_{f_{\lambda,j}\le t < g_{\lambda,j}} \end{equation} where (analogously to \eqref{eq:DefExp}) we write \[ K_{f\le t <g} \vcentcolon= K_{\{ (u,t) \in \widetilde{\Delta} \times \mathsf{P} \mid u \in \Delta^\circ, t \in {\mathds R}, f(u)\le t < g(u) \}}. \] For each $j$, the pair $(f_{\lambda,j},g_{\lambda,j})$ is then non-equivalent in the sense of \cite[§5.2.2]{MocCurveTest}. 
Let us now consider the situation around points on the boundary of the real blow-up: For all but finitely many points $p \in \varpi^{-1}(\infty)$, we find one-dimensional strata $S_\eta \subset \varpi^{-1}(\infty)$ containing $p$ and $S_\lambda \subset \Delta^\circ$ such that their union contains an open neighbourhood $U_p$ of $p$ in $\widetilde{\Delta}$ and such that \eqref{eq:decompStrat} holds over $S_\lambda$ and consequently also over $U_p^\circ=U_p \cap \Delta^\circ$. Let $Z \subset \varpi^{-1}(\infty)$ be the finite set of points where this does not hold, i.e.\ zero-dimensional strata in $\varpi^{-1}(\infty)$ or limit points of (real) one-dimensional strata in $\Delta^\circ$. Consider a point $p \in \varpi^{-1}(\infty) \smallsetminus Z$ and keep the notation from above. Since \[ \pi^{-1} {\mathds C}_{U_p^\circ}\otimes_{\pi^{-1} K_{U_p^\circ}}\pi^{-1}K_{U_p^\circ}\otimes K^\mathrm{E}_{\mathbf{\Delta}} \mathbin{\overset{+}{\otimes}} K_{f_{\lambda,j}\le t < g_{\lambda,j}} \cong \pi^{-1} {\mathds C}_{U_p^\circ} \otimes {\mathds C}^\mathrm{E}_{\mathbf{\Delta}} \mathbin{\overset{+}{\otimes}} {\mathds C}_{f_{\lambda,j}\le t < g_{\lambda,j}} \] for all $j$, we deduce from \eqref{eq:decompStrat} the isomorphism \begin{equation}\label{eq:decompStratC} \pi^{-1} {\mathds C}_{U_p^\circ} \otimes \mathrm{E}\varpi^{-1}\widetilde{H}_K \cong \pi^{-1}{\mathds C}_{U_p^\circ}\otimes \bigoplus_{j=1}^m {\mathds C}^\mathrm{E}_{\mathbf{\Delta}} \mathbin{\overset{+}{\otimes}} {\mathds C}_{f_{\lambda,j}\le t < g_{\lambda,j}}. 
\end{equation} If $U_p^\circ$ is chosen small enough, it is contained in one of the sectors $\Sigma_k$ from \eqref{eq:tildeHK} and since \[ {\mathds E}^{\mathrm{Re}(cu)}_{\mathds C}={\mathds C}^{\mathrm{E}}_{\mathbf{\Delta}} \mathbin{\overset{+}{\otimes}} {\mathds C}_{-\mathrm{Re}(cu)\le t}, \] we combine \eqref{eq:sectorK} and \eqref{eq:decompStratC} to obtain the isomorphism \begin{equation}\label{eq:KMIs} \pi^{-1}{\mathds C}_{U_p^\circ}\otimes\bigoplus_{j=1}^m {\mathds C}^\mathrm{E}_{\mathbf{\Delta}} \mathbin{\overset{+}{\otimes}} {\mathds C}_{f_{\lambda,j}\le t < g_{\lambda,j}} \cong \pi^{-1}{\mathds C}_{U_p^\circ}\otimes \bigoplus_{c\in C} {\mathds C}^{\mathrm{E}}_{\mathbf{\Delta}} \mathbin{\overset{+}{\otimes}} ({\mathds C}_{-\mathrm{Re}(cu) \le t})^{r_c}. \end{equation} On the basis of \cite{MocCurveTest}\footnote{We refer to the arXiv version of Mochizuki's paper. However, the paper has been reorganized in the meantime and will be published in two parts. The notation $\mathrm{Sub}_{\not\sim}^{\langle 2 \rangle, \ast}$ is the one from the reorganized article (part II). We are grateful to Takuro Mochizuki for providing the new versions. The notation of \cite{MocCurveTest} is slightly different but similar enough not to create confusion.}, let us denote by \( \mathrm{Sub}_{\not\sim}^{\langle 2 \rangle, \ast}(U_p^\circ,U_p) \) the set of pairs $(f,g)$ of non-equivalent subanalytic functions on the bordered space $(U_p^\circ,U_p)$ (with $f<g$ pointwise), together with the pairs consisting of one subanalytic function and $\infty$. The pre-order $\prec$ induces a pre-order on the set $\mathrm{Sub}_{\not\sim}^{\langle 2 \rangle, \ast}(U_p^\circ,U_p)$. The quotient with respect to the induced equivalence relation is denoted by \( \overline{\mathrm{Sub}}_{\not\sim}^{\langle 2 \rangle, \ast}(U_p^\circ,U_p). 
\) Now, both sides of \eqref{eq:KMIs} are associated to finite multi-subsets $(I,\mathfrak{m})$ of $\mathrm{Sub}_{\not\sim}^{\langle 2 \rangle, \ast}(U_p^\circ,U_p)$, i.e.\ finitely many elements $\mathfrak{a}$ of the latter set together with a multiplicity $m_\mathfrak{a} \in {\mathds N}$ for each, namely \begin{itemize} \item $(I_\mathrm{left},\mathfrak{m}_\mathrm{left})$ consisting of the restrictions of the pairs $(f_{\lambda,j},g_{\lambda,j})$ to $(U_p^\circ,U_p)$ and the multiplicities induced by equivalent pairs for the left hand side of \eqref{eq:KMIs}, and \item $(I_\mathrm{right},\mathfrak{m}_\mathrm{right})$ being the multiset of the pairs $(-\mathrm{Re}(cu),\infty)$ with multiplicity $r_c$. \end{itemize} Each side of \eqref{eq:KMIs} is constructed in the obvious way from these multi-subsets. If we mimic the notation from \cite{MocCurveTestII} and write\footnote{Note that T.~Mochizuki more generally considers graded multi-sets where additional grading information refers to a shift of the enhanced ind-sheaves as building blocks for $\mathrm{K}_M(I,\mathfrak{m})$. We do not need these gradings here, since all sheaves are concentrated in one degree.} \[ \mathrm{K}_{(U_p^\circ,U_p)}(I,\mathfrak{m}) \vcentcolon= \bigoplus_{(f,g)\in I} {\mathds C}^\mathrm{E}_{(U_p^\circ,U_p)} \mathbin{\overset{+}{\otimes}} ({\mathds C}_{f \le t < g})^{m_{(f,g)}}, \] the isomorphism \eqref{eq:KMIs} after pull-back via the embedding $(U_p^\circ,U_p) \hookrightarrow \mathbf{\Delta}$ reads as \[ \mathrm{K}_{(U_p^\circ,U_p)}(I_\mathrm{left},\mathfrak{m}_\mathrm{left}) \cong \mathrm{K}_{(U_p^\circ,U_p)}(I_\mathrm{right},\mathfrak{m}_\mathrm{right}). \] Now, due to \cite[§5.2.6]{MocCurveTest}\footnote{In loc.~cit.~the statement is referred to as a direct analogy to Lemma 5.15. 
In the reorganized article it is worked out in full detail in \cite[Lemma 3.29]{MocCurveTestII}.} we conclude that the induced multi-subsets of $\overline{\mathrm{Sub}}_{\not\sim}^{\langle 2 \rangle, \ast}(U_p^\circ,U_p)$ coincide and so do the canonical filtrations. Hence, we obtain an isomorphism \begin{equation}\label{eq:Kdecomp} \pi^{-1} K_{U_p^\circ} \otimes \mathrm{E}\varpi^{-1}\widetilde{H}_K \cong \pi^{-1} K_{U_p^\circ} \otimes \bigoplus_{c \in C} K^\mathrm{E}_{\mathbf{\Delta}} \mathbin{\overset{+}{\otimes}} \big({\mathds E}^{\mathrm{Re}(cu)}_K\big)^{r_c} \end{equation} and an induced filtration $F^p_{\mathfrak{a}}(\widetilde{\mathcal{L}}_{K,p})$ on the stalk $\widetilde{\mathcal{L}}_{K,p}$ which induces the Stokes filtration on $\widetilde{\mathcal{L}}_p$ after extension of scalars from $K$ to ${\mathds C}$. For a point $p\in Z$, consider two nearby points $p_1,p_2 \in \varpi^{-1}(\infty)$ on each side of $p$, i.e.\ in the two components of a sufficiently small punctured interval at $p$. Then we have canonical isomorphisms of the stalks $\widetilde{\mathcal{L}}_{p_j} \cong \widetilde{\mathcal{L}}_p$, as well as for the stalks of the $K$-structure. With respect to the first isomorphisms, the Stokes filtrations on $\widetilde{\mathcal{L}}$ are related by \begin{equation}\label{eq:Fp1p2p} F^p_\mathfrak{a} (\widetilde{\mathcal{L}}_p) = F^{p_1}_\mathfrak{a} (\widetilde{\mathcal{L}}_{p_1}) \cap F^{p_2}_\mathfrak{a} (\widetilde{\mathcal{L}}_{p_2}) \end{equation} (note that both terms on the right-hand side are equal anyway if $p$ is not a Stokes direction of a pair $(cu,\mathfrak{a})$ of exponential factors). Hence, if we define $F^p_\mathfrak{a}(\widetilde{\mathcal{L}}_{K,p})$ analogously to \eqref{eq:Fp1p2p}, we obtain a filtration also on $\widetilde{\mathcal{L}}_{K,p}$ inducing the one on $\widetilde{\mathcal{L}}_p$. 
In summary, there exist filtrations $F^p_\mathfrak{a}(\widetilde{\mathcal{L}}_{K,p})$ on the stalks at all points $p \in \varpi^{-1}(\infty)$ inducing the Stokes filtration after scalar extension from $K$ to ${\mathds C}$. \begin{lemma}\label{lem:gradedLocSys} The graded objects \[ \mathrm{Gr}^{F^p_\bullet}_\mathfrak{a}(\widetilde{\mathcal{L}}_{K,p}) = F^p_\mathfrak{a}(\widetilde{\mathcal{L}}_{K,p})/ \sum_{\mathfrak{b} \prec \mathfrak{a}} F^p_\mathfrak{b}(\widetilde{\mathcal{L}}_{K,p}) \] glue to a local system of $K$-vector spaces on $\varpi^{-1}(\infty)$. \end{lemma} \begin{proof} First let us remark that each interval $I \subsetneq \varpi^{-1}(\infty)$ induces canonical isomorphisms \begin{equation}\label{eq:stalksiso} \widetilde{\mathcal{L}}_p \cong H^0(I,\widetilde{\mathcal{L}}) \cong \widetilde{\mathcal{L}}_q \end{equation} for all $p,q \in I$. The same holds for the local system $\widetilde{\mathcal{L}}_K$. We know that the claim of the lemma is true over ${\mathds C}$: the graded objects associated to the filtrations as ${\mathds C}$-sheaves are local systems of ${\mathds C}$-vector spaces. Consequently, if one chooses complementary vector spaces $G^p_\mathfrak{a} \subset F^p_{\mathfrak{a}}(\widetilde{\mathcal{L}}_p)$ for each $p \in \varpi^{-1}(\infty)$ such that the canonical morphism \[ G^p_\mathfrak{a} \hookrightarrow F^p_{\mathfrak{a}}(\widetilde{\mathcal{L}}_p) \to \mathrm{Gr}^p_{\mathfrak{a}}(\widetilde{\mathcal{L}}_p) \] is an isomorphism, then each $p$ has an open neighbourhood $I_p$ such that \( \widetilde{\mathcal{L}}_q = \bigoplus_{c\in C} G^p_{[cu]} \) is a splitting of the filtration for each $q \in I_p$, where we use the identifications from \eqref{eq:stalksiso}. In the same way, we can choose complementary $K$-vector spaces $G^p_{K,\mathfrak{a}} \subset F^p_{\mathfrak{a}}(\widetilde{\mathcal{L}}_{K,p})$ for each $p$. 
Then $G^p_\mathfrak{a}:=G^p_{K,\mathfrak{a}} \otimes_K {\mathds C}$ is a choice as above and we know that for each $p$ the ${\mathds C}$-vector spaces $G^p_\mathfrak{a}$ induce a local splitting of the filtration on $\widetilde{\mathcal{L}}_q$ for $q$ in a neighbourhood of $p$. Therefore, the same is true for the $K$-vector spaces $G^p_{K,\mathfrak{a}}$. The claim of the lemma easily follows. \end{proof} We want to convince ourselves that we can find local splittings of the filtration on $\widetilde{\mathcal{L}}_K$ over the same intervals as is the case for $\widetilde{\mathcal{L}}$. \begin{lemma}\label{lemma:splittingK} Suppose we have a splitting $\widetilde{\mathcal{L}}|_I= \bigoplus_{c\in C} \mathcal{G}_{I,[cu]}$ of the Stokes filtration over an interval $I \subsetneq \varpi^{-1}(\infty)$. Then there is a splitting \[ \widetilde{\mathcal{L}}_K|_I= \bigoplus_{c\in C} \mathcal{G}_{K,I,[cu]} \] over $I$ inducing the given splitting after extension of scalars from $K$ to ${\mathds C}$. \end{lemma} \begin{proof} Let us denote by $\mathrm{Gr}^F_{K,[cu]}(\widetilde{\mathcal{L}}_K)$ the local systems associated to the filtered local system $\widetilde{\mathcal{L}}_K$ on $\varpi^{-1}(\infty)$ as in Lemma~\ref{lem:gradedLocSys}. Let $I$ be an interval as in the assumptions. The filtration $F^p_\bullet(\widetilde{\mathcal{L}}_{K,p})$ induces a filtration on $H^0(I,\widetilde{\mathcal{L}}_K)$ -- recall the identifications \eqref{eq:stalksiso} and their variant for $\widetilde{\mathcal{L}}_K$. We have a surjective morphism \[ F^p_{[cu]}(H^0(I,\widetilde{\mathcal{L}}_K)) \twoheadrightarrow H^0(I, \mathrm{Gr}^F_{K,[cu]}(\widetilde{\mathcal{L}}_K)) \] for each $p \in I$, hence we also get a morphism \begin{equation}\label{eq:bigcapsurjective} \bigcap_{p\in I} F^p_{[cu]}(H^0(I,\widetilde{\mathcal{L}}_K)) \longrightarrow H^0(I, \mathrm{Gr}^F_{K,[cu]}(\widetilde{\mathcal{L}}_K)). 
\end{equation} It is easy to see that it suffices to show that the latter is surjective, since then we can find a subspace $G_{K,[cu]} \subset \bigcap_{p\in I} F^p_{[cu]}(H^0(I,\widetilde{\mathcal{L}}_K))$ such that the induced morphism \[ G_{K,[cu]} \to H^0(I, \mathrm{Gr}^F_{K,[cu]}(\widetilde{\mathcal{L}}_K)) \] is an isomorphism. Then, if we denote by $\mathcal{G}_{K,I,[cu]}$ the constant local system of $K$-vector spaces over $I$ associated to $G_{K,[cu]}$, these define a splitting of $\widetilde{\mathcal{L}}_K$ over $I$. To prove that \eqref{eq:bigcapsurjective} is surjective, observe that the given splitting $\widetilde{\mathcal{L}}|_I= \bigoplus_{c\in C} \mathcal{G}_{I,[cu]}$ yields that \[ H^0(I,\mathcal{G}_{I,[cu]}) \subset F^p_{[cu]}(H^0(I,\widetilde{\mathcal{L}})) \] for all $p \in I$ and that \( H^0(I,\mathcal{G}_{I,[cu]}) \to H^0(I,\mathrm{Gr}^F_{[cu]}(\widetilde{\mathcal{L}})) \) is an isomorphism. Consequently, the natural morphism \begin{equation}\label{eq:bigcapsurjectiveC} \bigcap_{p\in I} F^p_{[cu]}(H^0(I,\widetilde{\mathcal{L}})) \twoheadrightarrow H^0(I, \mathrm{Gr}^F_{[cu]}(\widetilde{\mathcal{L}})) \end{equation} is surjective. Since \eqref{eq:bigcapsurjectiveC} is obtained from \eqref{eq:bigcapsurjective} by extension of scalars from $K$ to ${\mathds C}$, and since this extension is a right-exact functor, it follows that the morphism \eqref{eq:bigcapsurjective} is surjective as well. \end{proof} We can now state and prove the final result of this section. \begin{theorem} Assume that we are in the situation of Theorem \ref{theo:MainTheo2} and that moreover we have $n>m$. Put again $d\vcentcolon=n-m$, let $\rho\colon u \mapsto u^d=q$ be the local ramification map at infinity of degree $d$ and consider the pull-back $\rho^+ \mathcal{H}(\alpha;\beta)$ of the hypergeometric system. Then there is a representation of the Stokes matrices for $\rho^+ \mathcal{H}(\alpha;\beta)$ with values in the field $K$. 
\end{theorem} \begin{proof} We pick up the notation from above. By Lemma \ref{lemma:splittingK}, we know that the local system $\widetilde{\mathcal{L}}_K$ of $K$-vector spaces splits over an interval $I \subsetneq \varpi^{-1}(\infty)$ whenever $\widetilde{\mathcal{L}}$ splits over $I$. In our situation, we have splittings of $\widetilde{\mathcal{L}}$ over the two intervals $I_\pm= \varpi^{-1}(\infty)\cap S_\pm$, and the Stokes matrices are the connecting isomorphisms between these splittings on the intersection. We deduce from Lemma \ref{lemma:splittingK} that the splittings, and hence the connecting isomorphisms, arise from the same construction over $K$. \end{proof} Notice that a related statement for the cases $K={\mathds R}$ and $K={\mathds Q}$, under the assumptions of Remark \ref{rem:Katz}, was found in \cite[Corollaries 6.3 and 6.4]{Hien}, using rather different methods. \bibliographystyle{amsalpha}
\section{Introduction} GW170817 has been a milestone event for transient multi-messenger studies. It was the first binary neutron star (NS) merger observed by the LIGO and VIRGO detectors \citep{2017PhRvL.119p1101A}, and so far it remains the only binary NS system from which gravitational waves (GWs) and a multi-wavelength (radio to gamma-ray) counterpart have been discovered \citep{2020arXiv201014527A,2020ApJ...905..145K,2020arXiv201211700P}. The GW170817 NS-NS merger occurred at 12:41:04 on 2017 August 17 UTC, and its GW detection was followed by the detection of a $\gamma$-ray burst (GRB) by the \emph{Fermi} and \emph{INTEGRAL} satellites, $\approx2$\,s after the merger. UV/optical/IR instruments subsequently identified the so-called kilonova counterpart (AT2017gfo), in the galaxy NGC\,4993 at a distance of $\approx40$\,Mpc, making GW170817/GRB170817a the closest short GRB with known redshift \citep[e.g.,][]{2017Natur.551...64A,2017ApJ...848L..19C,2017Sci...358.1556C,2017ApJ...848L..17C,2017Sci...358.1570D,2017Sci...358.1559K,2017Natur.551...80K,2017Sci...358.1583K,2017Natur.551...67P,2017Sci...358.1574S,2017Natur.551...75S,2017ApJ...848L..27T,2017ApJ...848L..24V}. Observations of the quasi-thermal UV/optical/IR emission from the GW170817 slow ($\sim 0.1c-0.3c$), neutron-rich, kilonova ejecta were successful in verifying that mergers of NSs in binaries are production sites of heavy elements such as gold and platinum \citep[e.g.,][]{2017Sci...358.1559K,2017Natur.551...80K,2017Natur.551...67P,2017arXiv171005931M}. In addition to the quasi-thermal kilonova emission, a delayed non-thermal (synchrotron) afterglow from GW170817/GRB\,170817a was first observed in the X-rays $\approx 9$\,days after the merger by the \emph{Chandra} observatory \citep[e.g.,][]{2017Natur.551...71T,2017ApJ...848L..25H,2017ApJ...848L..20M}. A radio afterglow detection with the Karl G. Jansky Very Large Array (VLA) followed, about two weeks after the merger \citep{2017Sci...358.1579H}. 
Further radio observations of the source proved decisive in narrowing down the morphology of the jet \citep{2018ApJ...861L..10C,2018ApJ...858L..15D,2017ApJ...848L..21A,2018ApJ...856L..18M,2018ApJ...868L..11M, 2018Natur.554..207M,2019ApJ...886L..17H}, ruling out the simple uniform energy-velocity (top-hat) ejecta in favour of a structured jet, where the ejecta velocity varies with the angle from the jet axis \citep{2018Natur.561..355M,2018PhRvL.120x1103L,2020ApJ...901L..26R}. These observations, with the help of hydrodynamic simulations \citep{2018PhRvL.120x1103L,2018ApJ...867...18N}, set constraints on the opening angle of the jet core ($\lesssim 5$\,deg), the observer's viewing angle ($\approx 15-30$\,deg), the isotropic equivalent energy ($\sim10^{52}$\,erg) and the interstellar medium (ISM) density ($\sim 10^{-4}-0.5$\,cm$^{-3}$). The extended radio follow-up of GW170817 up to 2.1\,years after the merger had shown that the radio emission from the structured jet had faded below typical flux density sensitivities that can be reached with the VLA in a few hours of observing \citep{2020arXiv200602382M}. Several theoretical scenarios, however, predict the possible emergence at late times of detectable electromagnetic emission associated with the afterglow of the kilonova ejecta itself \citep[e.g.,][]{2011Natur.478...82N,2019MNRAS.487.3914K,2018ApJ...867...95H,2020MNRAS.495.4981M}. Indeed, numerical simulations show that during the NS-NS merger, a modest fraction of a solar mass is ejected from the system, and the total ejecta mass and velocity distribution of such ejecta depend on the total mass, mass ratio, and the nuclear equation of state (EoS) of the compact objects in the binary. 
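A commonly used way to encode such an ejecta velocity distribution -- relevant to the kilonova afterglow predictions discussed as a function of $\alpha$ in $\S$\ref{sec:disc} -- is a power-law kinetic energy profile of the form $E(>\beta\Gamma)\propto(\beta\Gamma)^{-\alpha}$ above a minimum ejecta speed $\beta_0$. The sketch below is illustrative only: the normalization $E_0$ and the index $\alpha$ are placeholder values, not fits to GW170817.

```python
from math import sqrt

def kinetic_energy_above(beta, e0=1e51, beta0=0.3, alpha=5.0):
    """Kinetic energy E(>beta*Gamma) carried by ejecta faster than beta,
    for a power-law profile E ∝ (beta*Gamma)^(-alpha) normalized so that
    E(>beta0*Gamma_0) = e0.  Placeholder e0 and alpha, beta0 = 0.3 as in Fig. 2."""
    bg = beta / sqrt(1.0 - beta**2)      # proper velocity beta*Gamma
    bg0 = beta0 / sqrt(1.0 - beta0**2)
    return e0 * (bg / bg0) ** (-alpha)

# A steep profile concentrates the energy at the slow end: for alpha = 5,
# ejecta with beta >= 0.6 carry only ~1% of the energy of the beta >= 0.3 ejecta.
fast_fraction = kinetic_energy_above(0.6) / kinetic_energy_above(0.3)
```

Shallower (smaller) $\alpha$ puts more energy in the fast tail, which is why radio limits on the fast-tail afterglow can discriminate between values of $\alpha$.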
While optical-UV observations are mostly sensitive to the low-end of the ejecta velocity distribution, radio (and X-rays) can probe the fastest moving ejecta tail, shedding light on whether NS-NS ejecta are broadly distributed in energy and velocity (as simulations seem to suggest) and providing indirect constraints on the nuclear EoS. Tentative evidence for a very late-time re-brightening possibly associated with the kilonova afterglow of GW170817 has recently been reported in the X-rays \citep{2020GCN.29055....1H,2021GCN.29375....1H,2020GCN.29038....1T}. On the other hand, relatively shallow observations (RMS $\approx4.3\,\mu$Jy) with the VLA at 3\,GHz had reported a lack of radio detection contemporaneous with the X-ray late-time re-brightening \citep{2020GCN.29053....1A}. This radio non-detection was interpreted to be compatible with expectations from the simplest extrapolation of the X-ray excess to the radio band \citep{2020GCN.29053....1A}. Here, we use deep VLA observations to show that the lack of a radio detection to much deeper limits constrains kilonova ejecta models to a relatively steep energy-velocity ejecta profile. Our work is organized as follows. We report our new observations in $\S$\ref{sec:obs}; in $\S$\ref{sec:disc} we discuss our results within the kilonova afterglow model; finally, in $\S$\ref{sec:conc} we conclude with a summary. \setlength\LTcapwidth{\linewidth} \begin{longtable*}{ccccccc} \caption{VLA late-time observations of the GW170817 field. See text for details on RMS measurements. \label{tab:rad_obs2}}\\ \toprule \toprule Date & $\nu$ & VLA & Time on-source & RMS & VLA & PI\\ (UT) & (GHz) & config. 
& (hr) & ($\mu$Jy) & program \\ \midrule 2020 Sep 19 & 3.0 & B & 2\,h43\,m24\,s & 4.6 & 20A-185 & Balasubramanian \\ 2020 Sep 20 & 3.0 & B & 2\,h43\,m27\,s & 5.8 & 20A-185 & Balasubramanian \\ 2020 Dec 15 & 3.0 & A & 3\,h24\,m14\,s & 3.5 & SL0449 & Margutti \\ 2020 Dec 27 & 3.0 & A & 3\,h24\,m14\,s & 3.3 & SL0449 & Margutti \\ 2021 Jan 10 & 3.0 & A & 2\,h41\,m34\,s & 3.7 & 20B-208 & Balasubramanian \\ 2021 Jan 16 & 3.0 & A & 2\,h38\,m34\,s & 3.6 & 20B-208 & Balasubramanian \\ 2021 Feb 02 & 3.0 & A & 3\,h24\,m16\,s & 3.3 & SM0329 & Margutti \\ 2021 Feb 04 & 3.0 & A & 2\,h38\,m38\,s & 4.1 & 20B-472 & Corsi \\ 2021 Feb 05 & 3.0 & A & 2\,h41\,m41\,s & 4.0 & 20B-472 & Corsi \\ 2021 Feb 06 & 3.0 & A & 2\,h41\,m34\,s & 3.8 & 20B-472 & Corsi \\ 2021 Feb 08 & 3.0 & A & 2\,h41\,m36\,s & 4.0 & 20B-472 & Corsi \\ 2021 Feb 10 & 15.0 & A & 2\,h40\,m52\,s & 1.9 & SM0329 & Margutti \\ \bottomrule \end{longtable*} \section{Observations and data reduction}\label{sec:obs} As we describe in what follows, we have processed new and archival data of GW170817 obtained at radio and X-ray wavelengths between 2020 September and 2021 February. The full panchromatic afterglow light curve of GW170817 is available for download on the web\footnote{\url{https://github.com/kmooley/GW170817/} or \url{http://www.tauceti.caltech.edu/kunal/gw170817/}}. \subsection{Radio Observations} Radio observations of the GW170817 field were carried out with the VLA on the dates listed in Table \ref{tab:rad_obs2} at S band (2--4 GHz, nominal central frequency of 3\,GHz) with the array in its B (September 2020) and A (December 2020 -- February 2021) configurations. Each observation was calibrated in \texttt{CASA} \citep{McMullin2007} using the automated VLA calibration pipeline. The calibrated data were then manually inspected for further RFI excision. 
We interactively imaged each observation using the \texttt{CASA} task \texttt{tclean} with one Taylor term (\texttt{nterms=1}) and robust weighting (\texttt{robust=0.5}), and derived the RMS measurements using \texttt{imstat}. Table \ref{tab:rad_obs2} lists the RMS sensitivity reached in each observation, estimated within a region of 20 synthesized beams around the position of GW170817 \citep[$\alpha=13{\rm h}09{\rm m}48.069{\rm s}$, $\delta=-23{\rm d}22{\rm m}53.39{\rm s}$, J2000;][]{2018Natur.561..355M}. We find no significant ($>3\times$RMS) excess in a region of one synthesized beam around the position of GW170817 in any of the individual images. Next, we co-added and interactively imaged as above (\texttt{nterms=1} and \texttt{robust=0.5}) all A-configuration 3\,GHz VLA observations listed in Table \ref{tab:rad_obs2}. An RMS of 1.3\,$\mu$Jy was reached at 2.8\,GHz (Table \ref{tab:obs}) within a region of size approximately equal to 20 synthesized beams centered on the position of GW170817. Within one synthesized beam centered on the location of GW170817, we measure an \texttt{imstat} peak flux density value of $\approx2.8\,\mu$Jy at 2.8\,GHz. With this procedure, several of the bright, extended sources present in the field left substantial deconvolution residuals. Thus, to mitigate the effects of deconvolution residuals, test the robustness of our measurement, and further improve our sensitivity, we imaged all 3\,GHz data (both A and B configurations of the VLA) listed in Table \ref{tab:rad_obs2} non-interactively with two Taylor terms (\texttt{nterms=2}), robust weighting (\texttt{robust=0.5}), single phase-only selfcal (solution interval of 4 minutes), and a cleaning threshold of 4\,$\mu$Jy. 
This yielded an image RMS noise of 0.99\,$\mu$Jy (in a region of size equal to 20 synthesized beams around the position of GW170817; theoretical thermal noise $\approx 0.85\,\mu$Jy) and a peak flux density value of 2.86\,$\mu$Jy (Table \ref{tab:obs}) within one synthesized beam centered on the location of GW170817. A late-time VLA observation of the GW170817 field was also carried out in U band (nominal central frequency of 15\,GHz) on 10 February 2021 (Table \ref{tab:rad_obs2}). We calibrated this dataset and interactively imaged the field (with \texttt{nterms=1} and \texttt{robust=0.5}). No significant emission is found at the location of GW170817 (Table \ref{tab:obs}). The RMS measured in a region of size equal to 20 synthesized beams centered around the position of GW170817 is $\approx 1.9\,\mu$Jy at 15\,GHz. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{panchrom_lc.pdf}% \caption{Comprehensive 3\,GHz light curve of GW170817 as presented in our recent work \cite{2020arXiv200602382M}, which includes data from \citet{2019ApJ...883L...1F,2019Sci...363..968G,2018ApJ...862L..19N}, together with our latest measurement in the radio (3\,GHz, latest yellow data point in the grey shaded region) and X-rays (latest purple data point in the grey shaded region) extrapolated to 3\,GHz using the spectral index derived in \cite{2020arXiv200602382M}. The best fit structured jet model for GW170817 is also plotted (top panel, black line). As evident from the lower panel, our radio measurement is compatible with the tail of the GW170817 jet within the large errors. 
On the other hand, the X-rays seem to show a more significant discrepancy and suggest that a kilonova flare may be taking over the emission \citep[][]{2020GCN.29055....1H,2021GCN.29375....1H,2020GCN.29038....1T}.}\label{fig:panchrom_lc}% \end{figure*} \subsection{X-ray Data} We reprocessed and analyzed the Chandra ACIS-S observations of the GW170817 field obtained between 2020 December 10 and 2021 January 27 (obsIDs 22677, 24887, 24888, 24889, 23870, 24923, 24924; 150.5 ks, PI Margutti) using the same procedure described in \cite{2020arXiv200602382M}. For observations where the source was not apparent, we used the relative position of the other X-ray sources in the field of view to determine the source extraction region. By combining the spectral products of all 7 observations and fitting the data with an absorbed power-law model where the hydrogen column density $N_{\rm H}$ has been fixed to the Galactic value and the photon index $\Gamma=1-\beta$ has been fixed to 1.57 \citep{2020arXiv200602382M}, we find an unabsorbed flux density of $(2.1^{+0.7}_{-0.6})\times10^{-4} \,\mu$Jy at $2.4\times10^{17}$\,Hz (1\,keV; $1\sigma$ uncertainty). To investigate whether $\Gamma$ differs from 1.57, we refitted the Chandra data leaving $\Gamma$ as a free parameter. From the 2020 December--2021 January data we find $\Gamma=2.16^{+1.48}_{-1.18}$. If we additionally combine the 96.6\,ks of data obtained in 2020 March, we get $\Gamma= 0.98^{+0.77}_{-0.76}$ (90\% uncertainties). Hence, in both cases, the value of $\Gamma$ is consistent (well within the 90\% confidence interval) with $\Gamma=1.57$. Our results are also consistent with \citet[][]{2020GCN.29055....1H,2021GCN.29375....1H,2020GCN.29038....1T}.
\begin{table*}[] \centering \begin{tabular}{ccccccc} \toprule \toprule Date & Epoch & $\nu$ & $F_{\nu}$ & $\sigma_{\nu}$ & Instrument & Reference\\ (UT) & (days) & (Hz) & ($\mu$Jy) & ($\mu$Jy)& &\\ \midrule 2020 Dec 15 - 2021 Feb 08 & 1243 & $2.8\times10^9$ & 2.8 & 1.1-1.3 & VLA A & This work\\ 2020 Sep 19 - 2021 Feb 08 & 1199 & $3.0\times10^9$ & 2.86 & 0.99 & VLA A\&B & This work\\ 2020 Dec 10 - 2021 Jan 27 & 1234 & $2.41\times10^{17}$ & $2.1\times10^{-4}$ & $0.7\times10^{-4}$ & Chandra& This work\\ \bottomrule \end{tabular} \caption{Results from the co-added late-time radio and X-ray observations of GW170817. See text for discussion. } \label{tab:obs} \end{table*} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{3GHz_lc_with_kn_predictions.pdf} \vspace{-0.5cm} \caption{3\,GHz radio light curve of GW170817 with our recent radio measurement (red star), along with predictions for the rising part of the kilonova afterglow light curve as a function of $\alpha$ (see Eq.~\ref{eq:kn_rise}), with the assumption that the minimum speed of the ejecta is $\beta_{0}=0.3$ and other parameters as listed in $\S$\ref{sec:disc} (solid lines). Dashed lines show the curves for generic parameters (see $\S$\ref{sec:disc}). \label{fig:kn_lc}} \end{figure} \section{Discussion}\label{sec:disc} As evident from Figure \ref{fig:panchrom_lc}, our late-time radio observations of GW170817 do not provide evidence for radio emission in excess of what is expected from the very late time tail of a structured jet afterglow model (black solid line). The X-ray observations, on the other hand, suggest a more pronounced statistical fluctuation or the possible emergence of a new component at higher frequencies (bottom panel in Figure 1).
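The radio-to-X-ray comparison can be made quantitative directly from the flux densities in Table \ref{tab:obs}; a minimal sketch of the two-point spectral index, defined via $F_\nu\propto\nu^{\beta}$:

```python
import math

# Flux densities and frequencies as quoted in Table 2 (Jy, Hz)
nu_radio, f_radio = 3.0e9, 2.86e-6    # 3 GHz VLA peak flux density
nu_xray, f_xray = 2.41e17, 2.1e-10    # 1 keV Chandra, unabsorbed

# Two-point spectral index beta, with F_nu proportional to nu**beta
beta = math.log(f_xray / f_radio) / math.log(nu_xray / nu_radio)
print(round(beta, 3))  # -0.523
```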
The radio-to-X-ray spectral index as derived from the measured ratio of the late-time radio-to-X-ray flux densities (see Table 2) is $-0.523\pm0.026$, within $\approx 1.8\sigma$ of the value adopted in Figure \ref{fig:panchrom_lc} ($-0.569\pm0.002$) derived by \cite{2020arXiv200602382M}. Thus, while current measurements still carry uncertainties too large to claim any clear evidence for a change in spectral behavior at late times, the lack of a radio detection together with the tentative X-ray excess would suggest a flattening of the radio-to-X-ray spectrum at late times. In general terms, it is not difficult to envision a scenario in which the electron index ($p$) for the ejecta responsible for the late-time X-ray excess differs from the one used to model the structured jet afterglow at earlier times \citep[$p=2.07-2.14$;][]{2020arXiv200602382M}. The predictions of Fermi particle acceleration imply that the power-law index $p$ expected at non-relativistic shock speeds is close to $p\approx 2$, while at ultra-relativistic velocities one can have $p\approx 2.2$ \citep[e.g.][and references therein]{2015SSRv..191..519S}. Thus, a flattening of the radio-to-X-ray spectral index in GW170817, if confirmed by further follow-up, could support the idea that a non-relativistic ejecta component is starting to dominate the emission. In contrast to these theoretical predictions for particle acceleration, non-relativistic ejecta observed in radio-emitting core-collapse supernovae typically have $p=2.5-3.2$ \citep[e.g.,][]{1998ApJ...499..810C}, pointing to steeper radio-to-X-ray spectra than that suggested by the X-ray excess observed in GW170817. Cases where a transition of the ejecta from the relativistic to the non-relativistic regime has been observed include GRB\,030329 \citep[e.g.,][]{frail2005,2008A&A...480...35V} and TDE Swift J1644+57 \citep{2021ApJ...908..125C}.
These two cases pointed to a slowly-increasing or constant value of $p$ in the relativistic-to-non-relativistic transition, which again would correspond to a spectral steepening rather than a spectral flattening. One key difference between GW170817 and other non-relativistic flows is that in GW170817 we have seen a single power-law spectrum from radio to optical to X-rays, i.e., synchrotron radiation from electrons whose Lorentz factors span many orders of magnitude. In this case, the spectral index should directly represent $p$. On the other hand, in other non-relativistic flows we often determine $p$ from the radio spectra, closer to the minimum Lorentz factor $\gamma_m$ of the electron energy distribution, where the synchrotron emission may be dominated by thermal electrons around the typical Lorentz factor rather than by accelerated electrons \citep[e.g.,][]{2015PhRvL.114h5003P,2013ApJ...762L..24M}. Thus, GW170817 offers an opportunity to test particle acceleration theory, and continued monitoring from radio to X-rays is key to this end. We also note that continued X-ray observations may offer an opportunity to probe the evolution of the cooling frequency in the Newtonian limit, when $\nu_{\rm c}\propto\beta^{-3}t^{-2}$ \citep{2018ApJ...867...95H}, and thus constrain $\beta$ and the kinetic energy of the fast tail of the ejecta \citep[see also][]{2019MNRAS.483..624L}. An alternative explanation for the excess in X-rays (as compared to the radio) observed in GW170817 is a Compton echo of the X-rays from the prompt emission of GRB\,170817A, scattering off surrounding dust \citep{2018MNRAS.476.5621B}. Given the currently large uncertainties in the X-ray result, hereafter we focus on the constraints that the lack of a radio excess sets on kilonova ejecta models. Following \cite{2019MNRAS.487.3914K}, the kilonova blast wave drives a shock through the interstellar medium, resulting in synchrotron emission.
Electrons are accelerated to a power-law distribution of Lorentz factors $\gamma > \gamma_{m}$, with power-law index $p$. The energy in the kilonova blast wave is distributed as $E\propto(\beta\Gamma)^{-\alpha}$, with $\beta_{0}$ being the minimum shock velocity. It is reasonable to assume that radio (GHz) observations are in between the minimum frequency, $\nu_{m}$ \citep[corresponding to $\gamma_{m}$, see][]{2011Natur.478...82N}, and the cooling frequency, $\nu_{c}$. In this case, the kilonova peak flux density reads \citep{2011Natur.478...82N}: \begin{equation} F_{\rm \nu, pk}\approx (1522\,{\rm \mu Jy})\,\epsilon_{\rm e,-1}^{p-1}\,\epsilon_{\rm B,-3}^{\frac{p+1}{4}}\,n_{-2}^{\frac{p+1}{4}}\,\beta_{0}^{\frac{5p-7}{2}}\,E_{51}\,\nu_{9.5}^{\frac{1-p}{2}}\,d_{26}^{-2}, \end{equation} where the notation $Q_{x}=Q/10^{x}$ is used for all quantities ($Q$ expressed in cgs units); $\epsilon_{\rm B}$ and $\epsilon_{\rm e}$ are the fractions of the total energy in the magnetic field and in the electrons, respectively; $n$ is the number density of the medium; $E$ is the energy of the blast wave; $d$ is the distance to the source; and the normalization constant is calculated for $p=2.1$. The time at which the kilonova afterglow emission peaks can be calculated as \citep{2019MNRAS.487.3914K}: \begin{equation} t_{\rm dec}=t_{\rm pk}\approx (3.3\,{\rm yr})\left(\frac{E_{\rm iso, 51}}{n_{-2}}\right)^{\frac{1}{3}}\, \beta_{0}^{-\frac{2}{3}}\left(\frac{2+\alpha}{\beta_0(5+\alpha)}-1\right). \end{equation} The blast wave can be approximated as mildly relativistic before this peak, and therefore the rising part of the kilonova ejecta light curve can be modelled simply as \citep[see][and references therein]{2019MNRAS.487.3914K}: \begin{equation}\label{eq:kn_rise} F_{\rm\nu, KN}\,(t)=F_{\rm \nu,pk}\left(\frac{t}{t_{\rm pk}}\right)^s, \end{equation} where: \begin{equation} s=\frac{3\alpha-6(p-1)}{8+\alpha}.
\end{equation} In Figure \ref{fig:kn_lc}, we plot the rising portion of the 3\,GHz kilonova light curves obtained following the above prescriptions, setting $\beta_{0}=0.3$, $E_{\text{iso}}=10^{51}$\,erg, $d = 40$\,Mpc, and adopting different values for the power-law index $\alpha$ of the energy-speed distribution of the kilonova ejecta. The solid lines correspond to the choice $p=2.1$, $\epsilon_{e}=7.8\times10^{-3}$, $\epsilon_{\rm B}=9.9\times10^{-4}$, $n=9.8\times10^{-3}\,{\rm cm}^{-3}$, as derived from the modeling of the earlier-time panchromatic afterglow of the GW170817 structured jet \citep{2020arXiv200602382M}. For comparison, the dashed lines show the case $\epsilon_{e}=10^{-1}$, $\epsilon_{\rm B}=10^{-3}$, $n=10^{-3}\,{\rm cm}^{-3}$, $p=2.2$, which corresponds to the generic case discussed in \cite{2019MNRAS.487.3914K}. As evident from this figure, to explain the absence of a kilonova detection in the radio one needs $\alpha\gtrsim5$ for the case where the density and micro-physical parameters are set equal to the ones measured for the structured jet afterglow. This constraint on $\alpha$ agrees with the predictions from numerical simulations described in \cite{2018ApJ...867...95H} and X-ray observations discussed in \cite{2020RNAAS...4...68H}. For the more generic parameters of \cite{2019MNRAS.487.3914K}, $\alpha\gtrsim20$ is required. \section{Summary and conclusion}\label{sec:conc} We have presented extensive late-time radio observations of the GW170817 field, carried out with the most extended configurations of the VLA. Combining the collected data, we have built the deepest high-resolution image of the GW170817 field available so far. Our radio flux density measurements show that there is no evidence for emission in excess of that expected from the afterglow of the GW170817 structured jet at 3\,GHz and $t\approx 1200$\,d since merger.
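The $\alpha$ constraint discussed in $\S$\ref{sec:disc} can be reproduced by evaluating the peak-flux, peak-time, and rise-slope expressions given there; a minimal sketch, with the structured-jet afterglow parameters of \cite{2020arXiv200602382M} as defaults (the peak-flux normalization assumes $p\simeq2.1$; illustrative only):

```python
import math

def kn_flux(t_day, alpha, p=2.1, eps_e=7.8e-3, eps_b=9.9e-4,
            n=9.8e-3, beta0=0.3, e51=1.0, nu=3.0e9, d_mpc=40.0):
    """Rising kilonova-afterglow flux density (uJy) at time t_day (days)."""
    d26 = d_mpc * 3.086e24 / 1e26        # distance in units of 10^26 cm
    # Peak flux density in uJy (normalization valid near p = 2.1)
    f_pk = (1522.0 * (eps_e / 1e-1)**(p - 1.0)
            * (eps_b / 1e-3)**((p + 1.0) / 4.0)
            * (n / 1e-2)**((p + 1.0) / 4.0)
            * beta0**((5.0 * p - 7.0) / 2.0) * e51
            * (nu / 10**9.5)**((1.0 - p) / 2.0) / d26**2)
    # Deceleration (peak) time, converted from years to days
    t_pk = (3.3 * 365.25 * (e51 / (n / 1e-2))**(1.0 / 3.0)
            * beta0**(-2.0 / 3.0)
            * ((2.0 + alpha) / (beta0 * (5.0 + alpha)) - 1.0))
    # Rising light curve F = F_pk * (t / t_pk)^s
    s = (3.0 * alpha - 6.0 * (p - 1.0)) / (8.0 + alpha)
    return f_pk * (t_day / t_pk)**s

print(round(kn_flux(1243.0, alpha=5.0), 1))  # predicted 3 GHz flux at the co-add epoch
```

Steeper $\alpha$ pushes the peak later and makes the rise steeper, suppressing the predicted flux at the current epoch; this is why only sufficiently steep energy-speed distributions are consistent with the non-detection for these parameters.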
These results constrain the energy-speed distribution of the kilonova ejecta to be rather steep, with a power-law index of $\alpha \gtrsim 5$. We finally commented on how the recent detection of a potential excess in the X-rays may hint at a flattening of the power-law index of the electron energy distribution of the kilonova ejecta compared to the value of this parameter as constrained by the earlier panchromatic afterglow observations. Further late-time monitoring of the GW170817 field with the VLA is likely to reveal whether a kilonova afterglow is emerging. \acknowledgements We are grateful to Dale A. Frail for contributing to shaping this work with many insightful discussions. A.B. and A.C. acknowledge support from the National Science Foundation via Award \#1907975. K.P.M. and G.H. acknowledge support from the National Science Foundation Grant AST-1911199. D.L. acknowledges support from the National Science Foundation via Award \#1907955. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
\section{Introduction} The standard Hamiltonian of solid-state theory considers a system of electrons and ions interacting only through Coulomb forces. These are of zeroth order in the $1/c$ expansion. Besides relativistic corrections, pure electromagnetic interactions contribute terms already of order $1/c^2$ that might be important. The first classical Hamiltonian containing such terms was built up by Darwin \cite{Darwin} more than a hundred years ago. Later, Landau and Lifshitz \cite{Landau} re-derived the same result by the choice of a very special gauge. Interest in this approximation continues to this day; we refer to earlier papers cited in the recent papers of Essen \cite{Essen1} \cite{Essen2}. This author also gave another derivation which, instead of the $1/c$ expansion, simply neglects the radiation effects in the Maxwell equations. He also underlined the importance of $1/c^2$ corrections, in their quantum mechanical version, for solid-state theory, particularly for the theory of superconductivity, as well as for the inclusion of ultra-relativistic effects. Applications to plasma physics, aimed at the Vlasov equation, were discussed recently by Krause et al. \cite{Krause}. It is also worth mentioning a paper by Bessonov \cite{Bessonov}, who found some strange solutions derived from the Darwin Lagrangian. The aim of this paper is to derive the correct extension of the solid-state Hamiltonian including terms of order $1/c^2$, since, as we show, the Darwin Hamiltonian is misleading: its derivation ignores some delicate but essential aspects of the canonical formulation of the electromagnetic theory. We derive here the proper classical Hamiltonian following Landau-Lifshitz's \cite{Landau} procedure, but with consistent use of the Coulomb gauge. The $1/c^2$ terms we obtain describe a (transverse) current-current interaction very similar to the density-density interaction of the Coulomb terms.
We also prove that the quantized version of our Hamiltonian coincides with the $1/c^2$ approximation of non-relativistic Quantum Electrodynamics (QED), restricted to the states without photons. This definitively proves the correctness of our construction. An important feature of the extended Hamiltonian is the possibility to distinguish the internal and external magnetic fields. This might be important for a better understanding of superconductivity. Indeed, ideal diamagnetism (the Meissner effect) consists in the compensation of the external magnetic field by the internal one. However, today's theories of superconductivity operate within the frame of the standard Coulomb model of the solid state, where the magnetic field is just a self-consistent one. \section{The classical $1/c^2$ Hamiltonian in the Coulomb gauge } One cannot formulate a Lagrangian theory of classical point-like charged particles interacting with the electromagnetic field, due to the divergent self-interaction. (From the Lorentz force one has to omit the action of the field created by each charged particle on itself.) This also impedes the derivation of the appropriate Hamiltonian. Usually, one defines the classical or quantum mechanical theory of charged particles directly by a Hamiltonian including only Coulomb potentials, without self-interacting terms.
Almost one hundred years ago Darwin \cite{Darwin} proposed a closed classical Lagrangian for $N$ point-like charges $e_i$ with masses $m_i$ ($i,j =1,\ldots, N$), including terms up to order $1/c^2$ while avoiding self-interaction, and derived the corresponding Hamiltonian \begin{eqnarray} & &{\cal H}=\sum_{i}\frac{\vec{p}_{i}^{2}}{2m_i}+\sum_{i>j}\frac{e_i e_j}{|\vec{r_{i}}-\vec{r}_{j}|} \label{Darwin} \\ & &-\sum_{i>j}\frac{e_i e_j}{2c^{2}m_i m_j|\vec{r}_{i}-\vec{r}_{j}|}\left[\vec{p}_{i}\cdot\vec{p}_{j}+(\vec{p}_{i}\cdot\vec{n}_{ij})(\vec{p}_{j}\cdot\vec{n}_{ij})\right] , \nonumber \end{eqnarray} where $\vec{n}_{ij}\equiv \frac{\vec{r}_i -\vec{r}_j}{|\vec{r}_i -\vec{r}_j| }$. His derivation is based on the expansion of the Li\'enard-Wiechert potentials to second order in $1/c$. Jackson \cite{Jackson} in his derivation uses the Coulomb gauge, but is forced to make one more approximation to get the above result. Landau and Lifshitz \cite{Landau} have shown that Eq. \ref{Darwin} actually implies a very unusual choice of gauge, not the usual Coulomb one. The choice of the gauge is, however, essential: the physical magnetic field ${\vec B}$, as well as the photon in quantum electrodynamics (QED), has only two transverse degrees of freedom, and only in the Coulomb gauge (often called the "physical" or "unitary" gauge) is one left with just these two degrees of freedom for the electromagnetic field. The constraint on the vector potential (implied by Landau-Lifshitz's choice), after its elimination through the velocities, propagates onto the velocities. It is worth recalling here, for example, that in relativistic QED, in the Lorentz gauge, restrictions on the allowed physical states have to be imposed to eliminate the longitudinal and temporal photons! We follow here the way chosen by Landau-Lifshitz \cite{Landau} to construct a classical Hamiltonian up to terms of order $1/c^2$, but not in their choice of gauge: we work in the Coulomb one.
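For a single pair of charges, the structure of Eq.~\ref{Darwin} is easy to evaluate numerically; a minimal sketch (Gaussian-type units with illustrative values, e.g. $c=137.036$ in atomic units):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def darwin_pair(e1, e2, m1, m2, r1, r2, p1, p2, c=137.036):
    """Coulomb plus Darwin 1/c^2 interaction energy of one pair of charges."""
    r = [a - b for a, b in zip(r1, r2)]
    d = math.sqrt(dot(r, r))
    n = [x / d for x in r]
    coulomb = e1 * e2 / d
    darwin = -(e1 * e2 / (2.0 * c**2 * m1 * m2 * d)) * (
        dot(p1, p2) + dot(p1, n) * dot(p2, n))
    return coulomb + darwin

# Mutually orthogonal momenta, both orthogonal to the separation:
# the 1/c^2 term vanishes and the pure Coulomb value survives.
print(darwin_pair(1, 1, 1, 1, (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1.0
```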
One starts with the Lagrangian of a single electron in an external field (here in a non-relativistic approach!) produced by some external sources $\rho^{ext}$ and $\vec{i}^{ext}$ \[ L(\vec{r},\dot{\vec{r}})=\frac{m\dot{\vec{r}}^{2}}{2}-e\phi^{ext}(\vec{r},t)+ \frac{e}{c}{\cal\vec{A}}^{ext}(\vec{r},t)\dot{\vec{r}}\quad. \] In the Coulomb gauge \[\nabla {\cal\vec {A}}^{ext}(\vec{r},t)=0\] the potentials are \[ \phi^{ext}(\vec{r},t)=\int d\vec{x}\frac{\rho^{ext}(\vec{x},t)}{|\vec{r}-\vec{x}|};\quad\quad {\cal \vec{A}}^{ext}(\vec{r},t)=\int d\vec{x}\frac{\vec{i}_{\bot}^{ext}(\vec{x},t-|\vec{r}-\vec{x}|/c)}{c|\vec{r}-\vec{x}|}\enspace , \] where $\rho^{ext}({\vec x},t)$ is the external charge density, while $\vec{i}_{\bot}^{ext}(\vec{x},t)$ is the external transverse ($\nabla\vec{i}_{\bot}^{ext}=0$) current density \[ \vec{i}_{\bot}^{ext}(\vec{x},t)\equiv\vec{i}^{ext}(\vec{x},t)+\frac{1}{4\pi}\nabla\int d\vec{x}'\frac{\nabla'\vec{i}^{ext}(\vec{x}',t)}{|\vec{x}-\vec{x}'|}\enspace . \] As Landau and Lifshitz do, one has to expand the retarded current density in powers of $1/c$; however, here we need only the lowest approximation, due to the $1/c$ factor already present in the Lagrangian, i.e. \[ {\cal\vec{A}}^{ext}(\vec{r},t)\approx\int d\vec{x}\frac{\vec{i}_\bot^{ext}(\vec{x},t)}{c|\vec{r}-\vec{x}|}\enspace . \] If the source of the fields is a single point particle of charge $e'$ at $\vec{x}(t)$ having velocity $\dot{\vec{x}}(t)$, then \[\rho^{ext}(\vec{x},t)=e'\delta(\vec{x}-\vec{x}(t)),\qquad \vec{i}^{ext}(\vec{x},t)=e'\dot{\vec{x}}(t)\delta(\vec{x}-\vec{x}(t))\enspace , \] with \[ \phi^{ext}(\vec{r},t)=\frac{e'}{|\vec{r}-\vec{x}(t)|} \] and \[{\cal\vec{A}}^{ext}(\vec{r},t)= \frac{e' }{c}\left[\frac{\dot{\vec{x}}(t)}{|\vec{r}-\vec{x}(t)|} - \frac{1}{4\pi} \int d\vec{x}\frac{1}{|\vec{r}-\vec{x}|}\nabla \left(\dot{\vec{x}}(t)\nabla \frac{1}{|\vec{x}-\vec{x}(t)|}\right)\right]\enspace.
\] Therefore the Lagrangian of the electron in the field of another electron is, in this approximation, \begin{eqnarray*} & &{\cal L}(\vec{r},\dot{\vec{r}};\vec{x},\dot{\vec{x}})=\frac{m \dot{\vec{r}}^{2}}{2}-\frac{e e'}{|\vec{r}-\vec{x}(t)|} \\ & &+\frac{e e'\dot{\vec{r}}}{c^2}\left[ \frac{\dot{\vec{x}}(t)}{|\vec{r}-\vec{x}(t)|}-\frac{1}{4\pi} \int d\vec{x}\frac{1}{|\vec{r}-\vec{x}|}\nabla \left(\dot{\vec{x}}(t)\nabla \frac{1}{|\vec{x}-\vec{x}(t)|}\right)\right] . \end{eqnarray*} By generalization one obtains for a system of $N$ charged particles the total Lagrange function \begin{eqnarray*} & &L\!=\!\sum_{i}\frac{m_i}{2}\vec{v_{i}}^{2}-\!\sum_{i>j}\frac{e_i e_j}{|\vec{r_{i}}-\vec{r}_{j}|} \\ & &+\!\sum_{i>j}\frac{e_i e_j}{c^{2}}\vec{v}_i \left[ \frac{\vec{v}_j }{|\vec{r}_i-\vec{r}_j|}-\frac{1}{4\pi} \int d\vec{x}\frac{1}{|\vec{r}_i-\vec{x}|}\nabla \left(\vec{v}_j \nabla \frac{1}{|\vec{x}-\vec{r}_j|}\right)\right] . \end{eqnarray*} Generally speaking, one has to use here Dirac's canonical formalism \cite{Dirac1, Dirac2}, since, due to the velocity-dependent terms, there are relations among the canonical momenta. However, to lowest order in $1/c$ we have \[ {\vec p}_i =\frac{\delta L}{\delta {\dot{\vec r}_i}}\approx m_i {\dot{\vec r}_i} \] and therefore (according to Landau and Lifshitz) we may still remain in the frame of the standard canonical formalism. The resulting classical Hamiltonian is \begin{eqnarray} & & H =\sum_{i}\frac{\vec{p_{i}}^{2}}{2m_i}+\sum_{i>j}\frac{e_i e_j}{|\vec{r_{i}}-\vec{r}_{j}|} \label{Hamclas} \\ & &-\sum_{i>j}\frac{e_i e_j}{c^{2}m_i m_j}\vec{p}_i\left[ \frac{\vec{p}_j }{|\vec{r}_i-\vec{r}_j|}-\frac{1}{4\pi} \int d\vec{x}\frac{1}{|\vec{r}_i-\vec{x}|}\nabla \left(\vec{p}_j \nabla \frac{1}{|\vec{x}-\vec{r}_j|}\right)\right]\nonumber \end{eqnarray} including the $1/c^2$ terms. This Hamiltonian is not identical with the Darwin Hamiltonian, Eq.\ref{Darwin}!
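The spatial integral in Eq.~\ref{Hamclas} can be done in closed form if one uses the standard Fourier-space kernel evaluation $\frac{1}{4\pi}\int d\vec{x}\,\frac{1}{|\vec{r}_i-\vec{x}|}\nabla\big(\vec{p}_j\nabla\frac{1}{|\vec{x}-\vec{r}_j|}\big)=-\frac{\vec{p}_j-\vec{n}(\vec{n}\cdot\vec{p}_j)}{2|\vec{r}_i-\vec{r}_j|}$ (this identity is our addition, not spelled out in the text). The $1/c^2$ term then becomes $-\frac{e_ie_j}{2c^2m_im_j|\vec{r}_i-\vec{r}_j|}\left[3\,\vec{p}_i\cdot\vec{p}_j-(\vec{p}_i\cdot\vec{n})(\vec{p}_j\cdot\vec{n})\right]$, which makes the difference from the Darwin term explicit. A minimal numerical sketch under this assumption:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def c2_pair_terms(e1, e2, m1, m2, r1, r2, p1, p2, c=137.036):
    """Return (coulomb_gauge_term, darwin_term) for one pair of charges,
    using the assumed closed form of the transverse kernel."""
    r = [a - b for a, b in zip(r1, r2)]
    d = math.sqrt(dot(r, r))
    n = [x / d for x in r]
    pref = -e1 * e2 / (2.0 * c**2 * m1 * m2 * d)
    coulomb_gauge = pref * (3.0 * dot(p1, p2) - dot(p1, n) * dot(p2, n))
    darwin = pref * (dot(p1, p2) + dot(p1, n) * dot(p2, n))
    return coulomb_gauge, darwin

# Momenta parallel to each other but orthogonal to the separation:
# the two expressions differ by a factor of three.
cg, dw = c2_pair_terms(1, 1, 1, 1, (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 1, 0))
print(round(cg / dw, 6))  # 3.0
```

The two expressions coincide only when $\vec{p}_i\cdot\vec{p}_j=(\vec{p}_i\cdot\vec{n})(\vec{p}_j\cdot\vec{n})$; for generic momenta they differ, in line with the statement above.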
\section{Quantum-mechanical electron Hamiltonian with $1/c^2$ terms.\label{cur-cur}} Now we may formulate directly a second quantized version of the theory, starting from the classical Hamiltonian of Eq.\ref{Hamclas}. For the sake of simplicity we consider here a single sort of particles (electrons) of mass $m$ and charge $e$. By introducing the charge and current densities: \[ \rho(\vec{x})=\sum_i e\delta(\vec{x}-\vec{r}_i); \qquad \vec{i}(\vec{x})=\sum_i \frac{e}{m}\vec{p}_i\delta(\vec{x}-\vec{r}_i) \] one would be tempted to rewrite the classical Hamiltonian Eq.\ref{Hamclas} as \begin{equation} \sum_{i}\frac{1}{2m}\vec{p_{i}}^{2}+\frac{1}{2} \int d\vec{x}\int d\vec{x}' \frac{\rho(\vec{x})\rho(\vec{x}')}{|\vec{x}-\vec{x}'|} -\frac{1}{2} \int d\vec{x}\int d\vec{x'} \frac{\vec{i}_\bot(\vec{x})\vec{i}_\bot(\vec{x}')} {c^2|\vec{x}-\vec{x}'|}\enspace ,\label{Hsymb} \end{equation} where $\vec{i}_\bot (\vec{x})$ is the transverse part of the current density \[ \vec{i}_{\bot}(\vec{r},t)\equiv\vec{i}(\vec{r},t)+\frac{1}{4\pi}\nabla\int d\vec{r}'\frac{\nabla'\vec{i}(\vec{r}',t)}{|\vec{r}-\vec{r}'|} \enspace . \] However, due to the divergent self-interaction of point-like classical particles this expression is not meaningful, even without the $1/c^2$ terms. In the quantum-mechanical version of this Hamiltonian for a system of identical particles (here fermions), the problem is milder. One cannot identify individual particles, so the self-interaction is at least not obvious, and one may eliminate it partially in the second quantization formalism by considering a "normal ordering" of the operators in the Hamiltonian, as was done also in the case of the Coulomb interaction. This ordering of the creation and annihilation operators eliminates the interaction in states that contain fewer than two particles.
Therefore we may proceed with the second quantization formulation of the theory directly from the last symbolic expression, Eq.\ref{Hsymb}. One has to introduce the second quantized charge density and the transverse part of the current density, expressed in terms of second quantized wave functions $\psi_{\sigma}({\vec x})$ for fermions with spin $1/2$: \[\rho(\vec{x})=e\sum_{\sigma=\pm1}\psi_{\sigma }^{+}(\vec{x})\psi_{\sigma}(\vec{x})\] \[\vec{i}(\vec{x})=\frac{e}{2m}\sum_{\sigma=\pm1}\psi_{\sigma }^{+}(\vec{x})\frac{\hbar}{\imath}\nabla\psi_{\sigma}(\vec{x})+h.c. \] The resulting quantum mechanical Hamiltonian ${\bf H}$ in the second quantized formalism then reads \begin{eqnarray} {\bf H}=&-&\sum_{\sigma=\pm1}\int d{\vec x} \psi_{\sigma }^{+}(\vec{x})\frac{\hbar^2}{2m}\nabla ^2\psi_{\sigma}(\vec{x}) \label{Hamc2} \\ &+&\frac{1}{2}\sum_{\sigma,\sigma'=\pm1}\int d{\vec x}\int d{\vec x}' \psi_{\sigma }^{+}(\vec{x})\psi_{\sigma' }^{+}(\vec{x}') \frac{e^2}{|\vec{x}-\vec{x}'|}\psi_{\sigma'}(\vec{x}')\psi_{\sigma}(\vec{x}) \nonumber \\ &-&\frac{1}{2} \int d\vec{x}\int d\vec{x'} \frac{:\vec{i}_\bot(\vec{x})\vec{i}_\bot(\vec{x}'):} {c^2|\vec{x}-\vec{x}'|} \enspace . \nonumber \end{eqnarray} We did not write out the last term explicitly, but only indicated the normal ordering by the $:\ldots :$ symbols, since in coordinate space it is very lengthy and complicated, due to the additional integrals in the definition of the transverse part. However, this term has a simple expression in the discrete ${\vec k}$-space basis (plane waves with periodic boundary conditions in a cube of volume $\Omega$).
Explicitly, it reads \begin{equation} -\frac{e^{2}\hbar^{2}}{m^{2}c^2\Omega}\sum_{\sigma,\sigma'=\pm 1}\sum_{\vec{k},\vec{p},\vec{q}}\frac{2\pi}{q^{2}}\left(\vec{k}\vec{p}-\vec{q}\vec{k}\frac{1}{q^2}\vec{q}\vec{p}\right)a_{{\vec{k}},\sigma }^{+}a_{\vec{p},\sigma '}^{+}a_{\vec{p}+\vec{q},\sigma'} a_{\vec{k}-\vec{q},\sigma} \end{equation} This is the second quantized Hamiltonian of electromagnetically interacting electrons to order $1/c^2$. It includes a (transverse) current-current interaction. We still have to add the interaction with a classical external transverse vector potential $\vec {A}^{ext}(\vec {x},t)$. According to the minimal rule, in the presence of $\vec{A}^{ext}$ one has to replace everywhere $-\imath\hbar \nabla $ by $-\imath\hbar \nabla -\frac{e}{c}\vec{A}^{ext}$. This implies not only the modification of the kinetic energy term, but also the modification of the current density ${\vec i}_{\bot}(\vec{x})$ in the current-current term of the Hamiltonian. The latter, however, would produce terms of order $1/c^{3}$, which one may ignore. One has to keep in mind, nevertheless, that the current density operator $\vec{j}(\vec x, t)$, whose average is of interest, must contain the diamagnetic term \[ \vec{j}(\vec{x},t)=\frac{e}{2m}\left(\psi^{+}(\vec{x})\left(\frac{\hbar}{\imath}\nabla-\frac{e}{c}\vec{A}^{ext}(\vec{x},t)\right)\psi(\vec{x},t)+h.c.\right)\enspace . \] For the sake of simplicity we discussed here only the electromagnetic Hamiltonian of an electron system, assuming implicitly a uniform positive background. Actually, the construction concerns equally the basic non-relativistic Hamiltonian describing the electron-ion constituents of the solid-state system and their e.m. interactions. It is known, however, that magnetic interactions of the spins, as well as the spin-orbit interaction (pure relativistic QED contributions), are often relevant and have to be included in solid-state theory.
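The combination $\vec{k}\vec{p}-\frac{(\vec{q}\vec{k})(\vec{q}\vec{p})}{q^2}$ appearing in the matrix element above is just the contraction of $\vec{k}$ and $\vec{p}$ with the transverse projector $\delta_{\mu\nu}-q_\mu q_\nu/q^2$, so only current components orthogonal to the momentum transfer couple; a minimal numerical sketch (dimensionless, illustrative vectors):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def transverse_coupling(k, p, q):
    """k . P_perp(q) . p with P_perp = 1 - q q / q^2, the factor in the
    k-space current-current matrix element."""
    return dot(k, p) - dot(q, k) * dot(q, p) / dot(q, q)

# A current component along q drops out entirely:
print(transverse_coupling((0.0, 0.0, 3.0), (0.0, 1.0, 1.0), (0.0, 0.0, 3.0)))  # 0.0
print(transverse_coupling((1.0, 2.0, 0.0), (0.0, 1.0, 1.0), (0.0, 0.0, 3.0)))  # 2.0
```

This is the same projector that reduces the photon to its two physical polarizations in the Coulomb gauge.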
\section{Connection to the non-relativistic QED} One may now look at these results from the other side, namely from the point of view of the non-relativistic QED of electrons interacting with photons. This Hamiltonian in the Coulomb gauge reads \begin{eqnarray*} H^{QED} & = & \sum_{\vec{q},\lambda}\hbar\omega_{\vec{q}}b_{\vec{q},\lambda}^{+}b_{\vec{q},\lambda} \\ &+&\sum_{\sigma=\pm1}\int d\vec{x}\psi_{\sigma}^{+}(\vec{x})\left[\frac{1}{2m}\left(\frac{\hbar}{\imath}\nabla-\frac{e}{c}\vec{A}(\vec{x})\right)^{2}\right]\psi_{\sigma}(\vec{x})\\ & + & \sum_{\sigma,\sigma'=\pm1}\frac{1}{2}\int d\vec{x}\int d{\vec x}'\psi_{\sigma}^{+}(\vec{x})\psi_{\sigma'}^{+}(\vec{x}')\frac{e^{2}}{|\vec{x}-\vec{x}'|}\psi_{\sigma'}(\vec{x}')\psi_{\sigma}(\vec{x}) \end{eqnarray*} where the quantized transverse ($\nabla {\vec A}({\vec x})=0$) e.m. potential \[ \vec{A}(\vec{x})=\sum_{\lambda=1,2}\sqrt{\frac{hc}{\Omega}}\sum_{\vec{q}}\frac{1}{\sqrt{|\vec{q}|}}\vec{e}_{\vec{q}}^{(\lambda)} e^{\imath\vec{q}\vec{x}}\left(b_{\vec{q},\lambda}+b_{-\vec{q},\lambda}^{+}\right) \] was taken with periodic boundary conditions. The photon frequency is $\omega_{\vec{q}}=c|\vec{q}|$, while the bosonic commutators are \[ \left[b_{\vec{q},\lambda},b_{\vec{q}',\lambda'}^{+}\right]=\delta_{\vec{q},\vec{q}'}\delta_{\lambda\lambda'} \] and the unit vectors $\vec{e}_{\vec{q}}^{(\lambda)}$ are orthogonal to the wave vector $\vec {q}$ and to each other: \[ \vec{q}\vec{e}_{\vec{q}}^{(\lambda)}=0;\qquad \vec{e}_{\vec{q}}^{(\lambda)}\vec{e}_{\vec{q}}^{(\lambda')}=\delta_{\lambda\lambda'}; \qquad\vec{e}_{\vec{q}}^{(\lambda)}=\vec{e}_{-\vec{q}}^{(\lambda)};\qquad(\lambda,\lambda'=1,2) \enspace . \] Actually, in the kinetic energy term of this Hamiltonian the product of the vector potentials (the "seagull" term) $\frac{e^2}{c^2}\int \psi^{+}\psi A A $ has to be normal ordered with respect to the photon creation and annihilation operators!
As usual in many-body theories of the solid state, the non-relativistic QED here, contrary to the fundamental relativistic QED, is understood as a cut-off theory, in which the bare parameters coincide with the physical ones. Let us discuss this QED Hamiltonian on the subspace of unperturbed states containing only electrons and no photons (the photon vacuum). Then, if we want to retain only contributions up to order $1/c^2$, we may omit from the beginning the (normal ordered) "seagull" term $\frac{e^2}{c^2}\int \psi^{+}\psi A A $. Being itself of order $1/c^2$, it can have non-vanishing matrix elements in this subspace only at even higher orders. Therefore, besides the standard Coulomb term, one is left only with the photon-current interaction $- \frac{1}{c}\int i_{\bot}A$. Within the considered subspace without photons, it appears in the S-matrix theory of adiabatic perturbations only through Feynman diagrams built from the basic graph having four electron legs and two current-photon vertices connected by a photon propagator, as shown in Fig.\ref{graph}. After neglecting the term $-\omega^2 /c^2 $ in the denominator of the photon propagator in Fourier space (i.e. eliminating corrections of order higher than $1/c^2$ already contained in the vertex parts), the propagator reads \[\frac{1}{q^2}(\delta_{\mu,\nu}-\frac{q_\mu q_\nu }{q^2}); \qquad (\mu,\nu=1,2,3)\] and one can convince oneself that this second-order diagram coincides with the first-order diagram of the S-matrix of the theory of the preceding Section. (The factor $-1/2$ in front of the current-current interaction term in Eq.\ref{Hamc2} takes care of this last aspect!) \begin{figure}[h] \begin{center} \includegraphics[scale=0.08]{graph.eps} \end{center} \caption{The basic current-current graph in QED. } \label{graph} \end{figure} Therefore, we may conclude that the quantum mechanical $1/c^2$ electron Hamiltonian (more precisely, the S-matrix!)
we obtained by the Landau construction in the Coulomb gauge indeed coincides with the corresponding approximation of the non-relativistic QED. (The author arrived at the above interpretation after consulting the first use of the term "current-current interaction" by Holstein, Norton and Pincus \cite{Holstein}, kindly suggested by P. Kopietz.) \section{Relevance for the theory of superconductivity} All theories of superconductivity are formulated in the frame of the lowest-order (in $1/c$) solid-state Hamiltonian describing Coulomb-interacting electrons and ions. They are rather successful in the phenomenological explanation of many experimental aspects. We have in mind mainly the Bardeen-Cooper-Schrieffer \cite{BCS} (BCS) theory of the phase transition as being due to the anomalous correlation of electrons of opposite momenta and spins, caused by phonon exchange. The implementation of this idea is, however, performed within different further approximations. Among them, the Bogolyubov-de Gennes equation \cite{Bogo-deGenn} is the most efficient: by considering a contact potential for the implementation of the BCS idea, this theory succeeds in formulating a description in real space, and implicitly also at the boundaries. Another successful theory is that of Ginzburg and Landau \cite{Ginz-Land}, based on original postulates without reference to a microscopical substrate. Despite all their successes, neither of these theories is able to give a convincing explanation of the electromagnetic properties of superconductors. Here we do not mean the (hopeless) proof of current flow without dissipation; that would be a task for the theory of open systems, which, in spite of the great progress of the last 60 years, is not yet able to answer such questions. (See a good review of the state of the art by Spohn \cite{Spohn}.) We mean here the understanding of even the simplest phenomenon, the Meissner effect.
Obviously, the expulsion of the magnetic field from a superconductor in equilibrium is just ideal diamagnetism, the internal magnetic field compensating the external one. Although one seemingly succeeds with today's theories in showing the failure of the penetration of the magnetic field, one remains confused about the underlying physical mechanism, since within the zeroth-order (in $1/c$) description one can make no distinction between the external and internal magnetic fields. All of today's theories have to resort to a self-consistent treatment of the magnetic field. A distinction between the external and internal magnetic fields is possible either within QED or within its $1/c^2$ approximation described in this paper. To formulate a theory of superconductivity within this new frame is not at all simple, but desirable. \section{Conclusions} We have built up the correct extension of the basic Hamiltonian of classical interacting point-like charged particles in order to include electromagnetic effects of order $1/c^2$, using the construction of Landau and Lifshitz \cite{Landau}, however in the adequate Coulomb gauge. We have also shown that its quantized version is equivalent to ignoring corrections higher than $1/c^2$ in the QED Hamiltonian and restricting the theory to the subspace of free electron-ion states without photons. This implies omitting the ``seagull'' vertex and the retardation in the photon propagator. We also stressed the importance of the inclusion of the current-current interaction for a convincing theory of the Meissner effect. We hope that this extended $1/c^2$ theory may also be relevant in other fields of solid-state theory, as well as, in its classical version, in plasma theory. \vspace{1cm} {\bf \large{Acknowledgments}} \vspace{0.5cm} The author is deeply indebted to P. Gartner, M. Bundaru, V. B\^arsan, P. Kopietz and H. Haug for useful discussions on this topic. \section{References}
\section{Introduction} Nano-periodic spin and charge correlations in the form of stripes have featured prominently in the debate over the mechanism of superconductivity in the layered cuprates \cite{Vojta-Archiv-2009}. Of particular interest has been the interpretation of the spin fluctuation spectrum measured by neutron scattering, which exhibits a characteristic hour-glass shape in hole-doped cuprates \cite{Arai-PRL-1999,Bourges-Science-2000,Hayden-Nature-2004,Tranquada-Nature-2004,Christensen-PRL-2004,Reznik-PRL-2004,Stock-PRB-2005,Vignolle-NatPhys-2007,Hinkov-NatPhys-2007,Matsuda-PRL-2008}. The hour-glass dispersion is found in cuprates both with and without static stripe order, and has recently been observed\cite{Matsuda-PRL-2008} in a very lightly-doped, insulating, cuprate in which the incommensurate spin modulation is diagonal, i.e.\ at 45$^{\circ}$ to the Cu--O bonds, rather than parallel to the bonds as found at higher doping. Theoretical approaches based on stripes\cite{Tranquada-Nature-1995,Batista-PRB-2001,Vojta-PRL-2004,Uhrig-PRL-2004,Seibold-PRL-2005,Andersen-PRL-2005,Yao-PRL-2006} generally succeed in reproducing the gross features of the spectrum, but the low energy part remains a concern\cite{note-0}. These models invariably predict cones of spin excitations emerging from four equivalent incommensurate (IC) wavevectors (two in the case of untwinned unidirectional stripes). Such spin-wave cones, however, are not observed experimentally in the cuprates. Instead the spectra show four IC peaks which disperse inwards with energy \emph{without splitting} towards the ordering wavevector ${{\bf Q}_{\rm AF}}$ of the parent antiferromagnet where they merge before dispersing apart again at higher energies. 
Insight into the role of stripes in the cuprates can be gained from the structurally related but insulating system La$_{2-x}$Sr$_x$NiO$_{4+\delta}$, whose phase diagram exhibits stable stripe order over a wide range of hole content $n_h=x+2\delta$ without the complication of superconductivity \cite{Hayden-PRL-1992,Chen-PRL-1993,Tranquada-PRL-1994,Yamada-PhysicaC-1994,Tranquada-PRB-1994,Yoshizawa-PRB-2000}. Stripes in La$_{2-x}$Sr$_x$NiO$_{4+\delta}$ are aligned at 45$^{\circ}$ to the Ni--O bonds, as in lightly-doped La$_{2-x}$Sr$_x$CuO$_{4}$, but as noted above, the characteristic features of the magnetic dispersion in the cuprates are not dependent on the alignment of the stripes. Detailed measurements of La$_{2-x}$Sr$_x$NiO$_{4+\delta}$ have been reported for several compositions close to $x=1/3$ with $\delta=0$ \cite{Bourges-PRL-2003,Boothroyd-PRB-2003,Boothroyd-PhysicaB-2004,Woo-PRB-2005}. As in the cuprates, the low-energy spectra show four IC peaks, but so far no evidence has been found for an inward dispersion. With the exception of an unexplained peak near 25 meV \cite{Boothroyd-PRB-2003,Boothroyd-PhysicaB-2004}, the spectra are consistent with propagating spin-wave modes of ordered stripes. In this paper we report neutron scattering measurements of the spin excitation spectrum of La$_2$NiO$_{4.11}$. Previous studies of the spin excitations in the nickelates have mostly been performed on compounds whose stripe period is shorter by a factor of 1.5 to 2 than that found in stripe-ordered cuprates. In La$_2$NiO$_{4.11}$, however, the stripe period is similar to that in the cuprates. Oxygen-doped La$_{2}$NiO$_{4 + \delta}$ exhibits stripe order for $\delta \gtrsim 0.11$ ($n_h \gtrsim 0.22$) \cite{Yamada-PhysicaC-1994,Tranquada-PRB-1994}. For $0 \leq \delta \lesssim 0.11$, commensurate antiferromagnetic (AFM) order of the parent phase ($\delta = 0$) is observed but with an ordering temperature that decreases with $\delta$. 
The advantage of O-doping over Sr-doping is that for a given hole concentration there is less disorder in the stripes, possibly as a result of three-dimensional ordering of the interstitial oxygen combined with a lack of cation disorder. Our measurements reveal an anomalous inward dispersion of the spectral weight in the energy range 10\,meV to $25$\,meV, while above $\sim 25$\,meV the IC peaks are found to split into spin-wave cones. This shows that the magnetic excitations of a stripe-ordered antiferromagnet can display similar behavior at low energies to that found in the hole-doped cuprates, although in La$_{2}$NiO$_{4 + \delta}$ the complete hour-glass dispersion is not observed. Our attempts to model the results quantitatively suggest that a description that goes beyond linear spin-wave theory for ideal stripes is required. \section{Experimental details} The single crystal of La$_{2}$NiO$_{4 + \delta}$ used here was grown in Oxford by the floating-zone method \cite{Prabhakaran-JCG-2002} and had a mass of 16\,g. The oxygen content determined by thermogravimetric analysis of a specimen from the same boule was $\delta = 0.11 \pm 0.01$. Neutron scattering measurements were made on the MAPS time-of-flight spectrometer at ISIS and on the IN8 and IN20 triple-axis spectrometers at the ILL. A preliminary report of the MAPS data has already been published \cite{Freeman-JMMM-2007}. On MAPS the crystal was aligned with the $c$ axis parallel to the incident beam direction. Spectra were recorded with several incident neutron energies, and the scattering from a standard vanadium sample was used to normalize the spectra and place them on an absolute intensity scale. 
On IN8 and IN20 the crystal was mounted with the $c$ axis and $[110]$ direction in the horizontal scattering plane (we refer throughout to the tetragonal unit cell with lattice parameters $a=3.85$\,{\AA} and $c=12.7$\,{\AA}), and the spectrometers were configured for a fixed final neutron energy of $E_{\rm f} = 14.7$\,meV with a graphite filter after the sample to suppress higher orders. Measurements on IN20 employed uniaxial polarization analysis \cite{Moon-PR-1969}. Polarized neutron diffraction (elastic scattering) data were collected with three orthogonal orientations of ${\bf P}$ relative to ${\bf Q}$: (1) ${\bf P} \parallel {\bf Q}$, (2) ${\bf P} \perp {\bf Q}$ with ${\bf P}$ in the scattering plane, and (3) ${\bf P} \perp {\bf Q}$ with ${\bf P}$ perpendicular to the scattering plane. The data were used to probe the spin orientation in the magnetically-ordered phase. The spin-flip (SF) and non-spin-flip (NSF) components of two magnetic Bragg peaks were measured for each of the three orthogonal directions of $\bf P$. Corrections were applied to compensate for the imperfect neutron polarization. For the inelastic scattering measurements the neutron spin polarization $\bf P$ was maintained parallel to the scattering vector $\bf Q$ in order to separate magnetic from non-magnetic scattering. \section{Results} \begin{figure}[!ht] \begin{center} \includegraphics [width=6cm,bbllx=58,bblly=281,bburx=227,bbury=513,angle=0,clip=] {La2NiO4_Fig1.eps} \caption{Model for ideal period-4 diagonal stripes (DS4 model). Two equivalent domains are shown, with stripes running parallel to the (a) $[1\bar{1}0]$ and (b) $[110]$ directions of the square lattice. In relation to hole-doped nickelates, the arrows show AFM ordered spins on Ni$^{2+}$ sites and the filled circles represent doped holes assumed here to be localized on Ni$^{3+}$ ions. Exchange interactions considered in the spin-wave analysis are indicated. 
} \label{fig1} \end{center} \end{figure} Neutron diffraction measurements revealed patterns consistent with diagonal stripe order (i.e.\ stripes oriented at 45$^{\circ}$ to the Ni--O bonds) characterized by a fourfold group of magnetic diffraction peaks at IC wavevectors ${\bf Q}_{\rm IC}={\bf Q}_{\rm AF} \pm (\epsilon /2,\epsilon/2,0)$ and ${\bf Q}_{\rm AF} \pm (-\epsilon /2,\epsilon/2,0)$, where $h$, $k$, $l$ are integers, ${\bf Q}_{\rm AF}=(h + 0.5,k+ 0.5,l)$ are antiferromagnetic (AF) wavevectors, and $\epsilon=0.270\pm0.005$ is the incommensurability \cite{note-1}. These peaks were observed below $T_{\rm N} \simeq 120$\,K, consistent with previous data \cite{Yamada-PhysicaC-1994,Tranquada-PRB-1994}. The observation of four satellites around ${\bf Q}_{\rm AF}$ is explained by the presence of domains in which the charge stripes run along the two equivalent diagonals of the square lattice, as shown schematically in Fig.~\ref{fig1} for ideal period-4 stripes. The domain with stripes parallel to the $[1\bar{1}0]$ direction gives rise to the magnetic peaks displaced by $\pm(\epsilon /2,\epsilon/2,0)$ from ${\bf Q}_{\rm AF}$, while the $[110]$ stripe domain causes the peaks displaced by $\pm(-\epsilon /2,\epsilon/2,0)$ from ${\bf Q}_{\rm AF}$. The correlation lengths for the magnetic order were found to be $\sim 50$\,{\AA} both parallel and perpendicular to the stripes in the $ab$ plane, and $\sim 90$\,{\AA} in the $c$ direction. We also observed the distinct sets of superlattice peaks associated with interstitial oxygen ordering reported in Ref. \onlinecite{Tranquada-PRB-1995}. The polarized neutron diffraction measurements depend on the Fourier component ${\bf M}({\bf Q})$ of the magnetic structure at the ordering wavevector ${\bf Q}={\bf Q}_{\rm IC}$. 
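Before turning to the polarization analysis, the satellite geometry just described can be made concrete with a short numerical sketch. This is illustrative only (not the paper's analysis code): it enumerates the four incommensurate satellites produced by the two diagonal stripe domains, and uses the simple relation $\epsilon = 1/p$ between incommensurability and stripe period $p$ that holds for ideal diagonal stripes ($\epsilon = 0.25$ for period 4).

```python
# Illustrative sketch: the four incommensurate magnetic satellites around
# Q_AF = (0.5, 0.5) generated by the two diagonal stripe domains, using
# the measured incommensurability eps = 0.27 (r.l.u.).
eps = 0.27
q_af = (0.5, 0.5)

# [1-10] domain: Q_AF +/- (eps/2, eps/2);  [110] domain: Q_AF +/- (-eps/2, eps/2)
satellites = [
    (q_af[0] + s * dx, q_af[1] + s * eps / 2)
    for s in (+1, -1)
    for dx in (+eps / 2, -eps / 2)
]

# For ideal diagonal stripes eps is the inverse stripe period, so the
# measured eps = 0.27 implies a period just below four lattice spacings.
period = 1.0 / eps
print(sorted(satellites))
print(round(period, 2))  # ~3.7
```

The four satellites come out at $(0.5\pm0.135, 0.5\pm0.135)$, reproducing the fourfold pattern around ${\bf Q}_{\rm AF}$.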
Following the approach described in Ref.~\onlinecite{Freeman-PRB-2002}, we used the six measurements (SF and NSF components for three orientations of $\bf P$ relative to $\bf Q$) at the magnetic Bragg peaks ${\bf Q}_1 = (0.5-\epsilon /2,0.5-\epsilon /2,3)$ and ${\bf Q}_2 = (0.5+\epsilon /2,0.5+\epsilon /2,1)$ to determine the intensities associated with the components of ${\bf M}({{\bf Q}_{\rm IC}})$ along the $[110]$, $[1\bar{1}0]$ and $[001]$ directions. The data collected at 2~K gave $I_{110}/I_{1\bar{1}0} = 0.062 \pm 0.004$ and $I_{001}/(I_{110}+I_{1\bar{1}0}) = 0.004 \pm 0.004$. Within experimental error these ratios did not vary with temperature between 2~K and 60~K. For collinear magnetic order ${\bf M}({\bf Q})$ is proportional to the ordered moment, and assuming this to be the case here we find that the ordered moments lie in the $ab$ plane to within an experimental uncertainty of 5~deg and make an in-plane angle of $14.0 \pm 0.5$ deg to the stripe direction. \begin{figure} \begin{center} \includegraphics [width=8.0cm,bbllx=236,bblly=300,bburx=610, bbury=790,angle=0,clip=] {La2NiO4_Fig2.eps} \caption{\label{fig2} (color online) Spin excitation spectrum of La$_{2}$NiO$_{4.11}$ measured by inelastic neutron scattering and calculated by linear spin-wave theory. (a)--(c) Constant-energy slices averaged over the energy ranges indicated above the figures. Vertical dashes mark the positions of maximum intensity in the 7--9\,meV slices. The data were collected on the MAPS spectrometer with a sample temperature of 7\,K and with incident neutron energies of 80\,meV. The intensity is in units of mb\,sr$^{-1}$\,meV$^{-1}$\,f.u.$^{-1}$, where f.u. stands for ``formula unit" (of La$_{2}$NiO$_{4.11}$). (e)--(g) Simulations based on the DS4 model shown in Fig.~\ref{fig1} (Ref. \onlinecite{Carlson-PRB-2004}) with exchange constants $J_a = 28$\,meV, $J_b = 17$\,meV and $J_c = J_d = 0$\,meV, and 2D Gaussian wavevector broadening with standard deviation 0.022\,r.l.u. 
The intensity in the simulations is in arbitrary units.} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics [width=7cm,bbllx=56,bblly=200,bburx=530,bbury=604,angle=0,clip=] {La2NiO4_Fig3.eps} \caption{Constant-energy cuts through the excitation spectrum of La$_{2}$NiO$_{4.11}$ parallel to the $(\xi,-\xi)$ direction through $(0.5,0.5)$. No offset has been applied to the scans at different energies.} \label{fig3} \end{center} \end{figure} We now turn to the magnetic excitation spectrum. Figs.\ \ref{fig2}(a)--(c) show constant-energy slices which illustrate the most notable features of the observed spectrum. The data, which were recorded on MAPS with unpolarized neutrons of incident energy 80\,meV and a sample temperature of 7\,K, have been averaged over the indicated energy ranges and plotted as a function of the in-plane scattering vector ${\bf Q}_{\rm 2D} = (h,k)$ in reciprocal lattice units (r.l.u.) \cite{note-2}. The 7--9\,meV slice exhibits four peaks at the magnetic ordering wavevectors. The intensities of the peaks are unequal because the ordered moments make different angles to $\bf Q$ in the two stripe domains.\cite{note-3} These four peaks are also present in the 15--20 meV slice but have clearly shifted inwards towards ${\bf Q}_ {\rm AF} = (0.5, 0.5)$. Above 25\,meV these peaks evolve into rings as can be seen in the 30--40\,meV slice. The features just described are significantly broader in wavevector than the experimental resolution, which is approximately 0.05~r.l.u under the conditions used to collect the data in Figs.\ \ref{fig2}(a)--(c). Above $\sim 50$\,meV the rings strongly overlap and merge into a single broad peak centered on ${\bf Q}_ {\rm AF}$. As shown in Fig.~\ref{fig3}, the peak at ${\bf Q}_ {\rm AF}$ decreases in intensity with increasing energy up to $\sim 110$\,meV, above which no signal can be detected above the background. 
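Returning briefly to the polarized diffraction analysis above: the quoted moment direction follows directly from the measured intensity ratios. A minimal consistency check (a sketch, assuming collinear order so that each intensity component is proportional to the squared projection of the ordered moment on the corresponding direction):

```python
import math

# Consistency check of the moment direction (assumes collinear order, so
# each measured intensity component is proportional to the squared
# projection of the ordered moment on that direction).
I_110_over_I_1m10 = 0.062   # measured I_110 / I_1-10 at 2 K
I_001_over_inplane = 0.004  # measured I_001 / (I_110 + I_1-10)

# In-plane angle of the moment to the stripe ([1-10]) direction:
# tan^2(phi) = I_110 / I_1-10
phi = math.degrees(math.atan(math.sqrt(I_110_over_I_1m10)))

# Out-of-plane tilt, by the same reasoning:
theta = math.degrees(math.atan(math.sqrt(I_001_over_inplane)))

print(round(phi, 1))    # 14.0 deg, matching the quoted 14.0 +/- 0.5 deg
print(round(theta, 1))  # 3.6 deg, within the quoted 5 deg bound on the tilt
```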
We performed additional measurements with neutron polarization analysis to distinguish magnetic from non-magnetic scattering. Fig.\ \ref{fig4} (main panel) shows a series of constant-energy scans along the $(\xi, \xi)$ direction measured on IN20 in the spin-flip (SF) scattering channel. With {\bf P} $\parallel$ {\bf Q} the SF scattering is purely magnetic. Up to 10\,meV the scans show peaks centred at the same wavevectors as the magnetic Bragg peaks, which are also shown in the figure, but above 10\,meV the peaks move inwards and broaden compared with the resolution. Interestingly, even at 10\,meV the peaks are about 50\% broader than the experimental resolution. \begin{figure}[!ht] \begin{center} \includegraphics [width=8cm,bbllx=70,bblly=230,bburx=369,bbury=729,angle=0,clip=] {La2NiO4_Fig4.eps} \caption[Energy dependence of magnetic excitations] {Constant-energy scans through the spin excitation spectrum of La$_{2}$NiO$_{4.11}$ in the direction perpendicular to the stripes. The inelastic data in the lower panel were recorded on IN20 employing neutron polarization analysis. The spin-flip (SF) channel contains purely magnetic scattering. Vertical lines mark the positions of the magnetic Bragg peaks, shown at the base of the frame. The scans were measured in several different zones: $(\xi,\xi,4)$ for 10 and 15\, meV, $(\xi+1,\xi+1,0)$ for 17.5, 20 and 25\,meV, and $(\xi+1,\xi+1,0.5)$ for 22.5\,meV. Each scan is offset vertically by 50 counts. The solid lines are the results of fits to two Lorentzian functions with equal widths. The upper panel shows a cut through the MAPS data averaged over the energy range 45--50\,meV. The out-of-plane wavevector is $l \simeq 5$. The solid line is a fit to four Lorentzians with equal peak widths. Horizontal bars in both panels indicate the wavevector resolution. } \label{fig4} \end{center} \end{figure} These results confirm that the inward-dispersing peaks found here are magnetic in origin. 
The upper panel of Fig.\ \ref{fig4} shows a cut through the MAPS data along the direction perpendicular to the stripes, averaged over the energy range 45--50\,meV. This illustrates the splitting of the IC peaks at higher energies [see also Fig.~\ref{fig2}(c)]. \begin{figure}[!ht] \begin{center} \includegraphics [width=7cm,bbllx=64,bblly=159,bburx=500,bbury=625,angle=0,clip=] {La2NiO4_Fig5.eps} \caption[Fitted centres] {(color online) Dispersion of magnetic excitations in La$_{2}$NiO$_{4.11}$. Data points are peak centers from fits to a series of constant-energy scans made on MAPS, IN20 and IN8. Data below (above) 25\,meV were fitted to two (four) Lorentzian functions on a linear background. The broken lines show the peak positions from the LSWT simulations for the DS4 model [Fig.~\ref{fig1}].} \label{fig5} \end{center} \end{figure} To quantify the dispersion we fitted Lorentzian functions to the peaks in a series of constant-energy scans including those shown in Fig.~\ref{fig4}. The fitted peak centers are plotted in Fig.~\ref{fig5}. Data from all three spectrometers are included in the analysis and are consistent with one another. One can see clearly how the peak intensity remains centered at the magnetic ordering wavevectors up to 10\,meV, above which it disperses inwards. At 20\,meV the incommensurability is 20--25\% less than the static value. This analysis confirms that the shift observable in the raw intensity data is intrinsic and not simply an artefact from the overlap of the peaks. Above 25\,meV four peaks can be resolved in the scans and the data were accordingly fitted to four Lorentzian functions. The resulting pair of ``V"-shaped dispersion curves are asymmetric, with higher velocity on the inner branches. This asymmetry partially counteracts the inward shift in intensity at lower energies, but even in the highest energy data (45--50\,meV) the pairs of spin-wave peaks are still shifted towards ${\bf Q}_ {\rm AF}$ relative to the magnetic Bragg peaks. 
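The two-Lorentzian fits described above can be sketched as follows. This is a minimal illustration on synthetic data (all numbers are illustrative, and `scipy` is assumed to be available); the essential constraints, a shared peak width and a linear background, match the fitting procedure described in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the two-Lorentzian fit used for scans below 25 meV: two peaks
# with a common width on a linear background (synthetic data here).
def two_lorentzians(q, a1, a2, q1, q2, w, b0, b1):
    """Two Lorentzians with a shared HWHM w plus a linear background."""
    l1 = a1 * w**2 / ((q - q1) ** 2 + w**2)
    l2 = a2 * w**2 / ((q - q2) ** 2 + w**2)
    return l1 + l2 + b0 + b1 * q

# Synthetic constant-energy scan with peaks at 0.5 -/+ eps/2, eps = 0.22,
# i.e. shifted inwards from the static incommensurability eps ~ 0.27.
rng = np.random.default_rng(0)
q = np.linspace(0.25, 0.75, 101)
true_params = (120.0, 100.0, 0.39, 0.61, 0.03, 10.0, 0.0)
counts = two_lorentzians(q, *true_params) + rng.normal(0.0, 2.0, q.size)

popt, pcov = curve_fit(two_lorentzians, q, counts,
                       p0=(100, 100, 0.38, 0.62, 0.05, 5, 0))
q1, q2 = sorted(popt[2:4])
eps_fit = q2 - q1  # fitted incommensurability at this energy
print(round(eps_fit, 2))  # ~0.22
```

The fitted peak separation recovers the (synthetic) reduced incommensurability; applied to real scans at each energy, the fitted centers trace out the dispersion of Fig.~\ref{fig5}.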
An obvious question to ask now is, do other stripe-ordered nickelates exhibit similar features to those just described? The question is answered for one other composition in Fig.~\ref{fig6}. This figure shows the magnetic dispersion of La$_{5/3}$Sr$_{1/3}$NiO$_{4}$, which has commensurate period-3 charge stripes ($\epsilon=1/3$). The dispersion was obtained from a series of constant-energy cuts along the $(\xi, \xi)$ direction through the MAPS data set of Woo {\it et al.}\cite{Woo-PRB-2005} As before, a two- or four-Lorentzian line shape was used to obtain the peak positions. To within experimental error the dispersion is symmetric about the magnetic Bragg peak positions, in stark contrast to the case of La$_{2}$NiO$_{4.11}$. \begin{figure}[!ht] \begin{center} \includegraphics [width=7cm,bbllx=64,bblly=159,bburx=500,bbury=625,angle=0,clip=] {La2NiO4_Fig6.eps} \caption{(color online) Dispersion of magnetic excitations in La$_{5/3}$Sr$_{1/3}$NiO$_{4}$. The data are from the study by Woo {\it et al.}\cite{Woo-PRB-2005} Points are peak centers from fits to a series of constant-energy scans made on MAPS with incident energies of 60\,meV and 160\,meV. Data below (above) 25\,meV were fitted to two (four) Lorentzian functions on a linear background. The broken lines mark the magnetic Bragg peak positions.} \label{fig6} \end{center} \end{figure} \section{Analysis and Discussion} The data presented in the previous section show conclusively that there is an inward shift towards ${\bf Q}_ {\rm AF}$ in the intensity of the spin excitation spectrum of La$_{2}$NiO$_{4.11}$ for energies above $\sim$~10 meV, whereas there is no such shift for the commensurate stripe-ordered compound La$_{5/3}$Sr$_{1/3}$NiO$_{4}$. Since the spins in the stripe phase are localized and ordered the dominant magnetic excitations are expected to be spin precession waves, so a natural first step is to see whether the results can be understood in the framework of linear spin-wave theory. 
Linear spin-wave theory (LSWT) has been used previously to investigate the spin excitation spectrum of ideal commensurate stripe structures \cite{Kruger-PRB-2003,Carlson-PRB-2004}. The calculations assume a Heisenberg spin Hamiltonian, so charge dynamics are included only insofar as they modify the strength of the inter-stripe exchange couplings. To test whether LSWT can describe the anomalous features of the spin excitation spectrum of La$_{2}$NiO$_{4.11}$ we compared the data with LSWT predictions for ideal site-centered period-4 diagonal charge stripes as shown in Fig.~\ref{fig1}, dubbed DS4 in Ref.~\onlinecite{Carlson-PRB-2004}, for which the incommensurability $\epsilon=0.25$ is close to that observed experimentally ($\epsilon\simeq0.27$). The minimal DS4 model has two exchange parameters, one ($J_a$) coupling nearest-neighbor Ni$^{2+}$ spins within a stripe domain, and the other ($J_b$) coupling Ni$^{2+}$ spins either side of a domain wall in a straight line through a Ni$^{3+}$ site. The justification for this model is that it provides a very good description of the magnetic dispersion of La$_{5/3}$Sr$_{1/3}$NiO$_{4}$ \cite{Boothroyd-PRB-2003,Woo-PRB-2005}. A detailed comparison with the model is not possible because the experimental spectrum of La$_{2}$NiO$_{4.11}$ has considerable intrinsic broadening and at high energies is complicated by the twinning of the stripes. Instead, we estimate the parameters of the model as follows. The magnetic dispersion of the $J_a$--$J_b$ DS4 model has a band width of $4J_a$ (we assume $S=1$). The experimental magnon bandwidth is $110 \pm 10$\,meV, so $J_a = 28 \pm 3$\,meV. To estimate $J_b$ we fitted the slope of the acoustic magnon branch perpendicular to the stripes to the data above 25\,meV in Fig.\ \ref{fig5}. This gave $J_b = 17 \pm 3$\,meV. 
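The estimate of $J_a$ above is simple arithmetic and can be made explicit (values taken from the text; the model relation, bandwidth $= 4J_a$ for $S=1$, is the one stated for the $J_a$--$J_b$ DS4 model):

```python
# Exchange-constant estimate from the text: for the J_a--J_b DS4 model
# with S = 1, the magnon bandwidth is 4*J_a, so J_a follows directly.
bandwidth = 110.0      # meV, measured magnon bandwidth (+/- 10 meV)
J_a = bandwidth / 4.0  # model relation: bandwidth = 4 * J_a for S = 1
print(J_a)             # 27.5 meV, consistent with the quoted 28 +/- 3 meV
```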
The values of $J_a$ and $J_b$ agree closely with those determined experimentally for La$_{5/3}$Sr$_{1/3}$NiO$_{4}$ ($J_a = 27.5 \pm 0.4$\,meV, $J_b = 13.6 \pm 0.3$\,meV \cite{Boothroyd-PRB-2003,Woo-PRB-2005}) and $J_a$ is in good agreement with that obtained for La$_{2}$NiO$_{4}$ ($J_a = 31 \pm 0.7$\,meV \cite{Nakajima-JPSJ-1993}). Figures \ref{fig2}(d)--(f) show simulated constant-energy slices from the $J_a$--$J_b$ DS4 model alongside the corresponding data from MAPS. The simulations reproduce most features of the data but, crucially, they do not exhibit the inward dispersion of the incommensurate peaks. It has been shown that an apparent inward dispersion can be obtained for striped spin structures within LSWT when $J_b \ll J_a$ \cite{Yao-PRL-2006}. In this case the inward dispersion is the combined effect of broadening and a stronger intensity on the surface of the spin-wave cone nearest ${\bf Q}_{\rm AF}$. A reduction in $J_b/J_a$ is only possible within the constraints imposed by the La$_{2}$NiO$_{4.11}$ data if we introduce additional exchange parameters. However, the inclusion of diagonal next-nearest-neighbor Ni--Ni couplings between ($J_c$) and within ($J_d$) stripe domains did not enable us to reproduce the observed 20--25\% inwards intensity shift within the DS4 model without departing from the observed magnon band width of $\sim 110$\,meV and/or the data shown in Fig.\ \ref{fig5}. We also performed simulations for period-4 bond-centred stripes but we could not reproduce the inwards intensity shift with this model either. Our results suggest that the magnetic dynamics of La$_{2}$NiO$_{4.11}$ require a description beyond the simplest LSWT of ideal stripes. Some of the following features may be needed: First, the stripes in La$_{2}$NiO$_{4.11}$ are not commensurate but have a period slightly less than four lattice spacings. 
A mixture of $\sim 25$\% period-3 stripes and $\sim 75$\% period-4 stripes is expected, and irregularities in the arrangement may explain the observed broadening of the spin waves. Second, La$_{2}$NiO$_{4.11}$ is close to the border between stripe order and La$_{2}$NiO$_{4}$-like AFM order \cite{Yamada-PhysicaC-1994,Tranquada-PRB-1994}. Competition between these order parameters may influence the magnetic spectrum. Third, coupling between spin and charge degrees of freedom has so far been neglected. Finally, we recall that an as-yet unexplained resonance/gap-like feature appears in the magnetic spectrum of La$_{2-x}$Sr$_{x}$NiO$_{4}$ with $x \sim 1/3$ \cite{Boothroyd-PRB-2003,Boothroyd-PhysicaB-2004} in the same energy range as the observed inward dispersion in La$_{2}$NiO$_{4.11}$. It is possible that these anomalous features are related. \section{Conclusion} This work was motivated by the possibility that charge-stripe correlations might be behind the hour-glass magnetic spectrum found in the hole-doped cuprates. Our results show that one feature of the cuprate spectrum, namely the inward dispersion at low energies, is also found in an insulating nickelate with well-correlated but incommensurate stripe order. This similarity is intriguing, but does not necessarily imply that the inward magnetic dispersion has a common origin in the two systems since there are important differences between hole-doped cuprates and nickelates. Nevertheless, our results do emphasize that the magnetic spectra of stripe-ordered materials contain interesting features that need to be understood better. Attempts to understand the inward dispersion in the particular case of La$_{2}$NiO$_{4.11}$ will need to go beyond the simplest model of ideal period-4 stripes used here. \section*{ACKNOWLEDGMENTS} A.T.B. is grateful to the Laboratory for Neutron Science at the Paul Scherrer Institute for hospitality and support during an extended visit in 2009. 
This work was supported by the Engineering \& Physical Sciences Research Council of Great Britain, Research Corporation, and US NSF Grant No. DMR 08-04748.
\section*{Introduction} The shapes of biological membranes that surround cells and cellular organelles are often highly curved \cite{Shibata09,Kozlov14,Nixon16}. The membrane curvature is induced and regulated by proteins such as the arc-shaped BAR domains \cite{Takei99,Peter04,Rao11,Baumgart11,Mim12b} or the reticulons \cite{Voeltz06,Zurek11}, which have been suggested to oligomerize into arc-shaped protein complexes \cite{Shibata08}. Arc-shaped proteins and protein complexes can induce membrane tubules \cite{Frost08,Hu08,Daum16}, but have also been associated with other highly curved membrane structures. Reticulon proteins, for example, are involved in the generation of the membrane tubules of the endoplasmic reticulum (ER) and have been suggested to stabilize the highly curved edges \cite{Shibata10,Schweitzer15b} that connect stacked membrane sheets of the ER \cite{Terasaki13,Nixon16}. Electron microscopy indicates that BAR domain proteins can form highly ordered helical coats around membrane tubules \cite{Frost08,Mim12a,Adam15} that are apparently held together by specific protein-protein interactions, as well as rather loose, irregular arrangements \cite{Daum16}. The variability of distances and angles between neighboring BAR domains in these loose arrangements suggests that the arrangements form without specific protein-protein interactions \cite{Daum16} and, thus, may be dominated by membrane-mediated interactions between the proteins \cite{Weikl18,Phillips09,Reynwar07}. These indirect, membrane-mediated interactions arise because the overall bending energy of the membrane depends on the distance and orientation of curvature-inducing proteins. In simulations with coarse-grained models, a variety of morphologies with tubular or disk-like membrane shapes have been observed \cite{Ayton07,Ayton09,Ramakrishnan13,Tourdot14,Noguchi15,Noguchi16,Noguchi16b}. 
The disk-like shapes consist of a double-membrane stack connected by a curved edge and are counterparts of the connected, stacked membrane sheets in the much larger membrane systems investigated in experiments \cite{Terasaki13,Nixon16}. In this article, we systematically investigate the membrane morphologies induced by arc-shaped scaffold particles such as proteins or protein complexes with coarse-grained modeling and simulations. In our coarse-grained model of membrane shaping, the membrane is described as a triangulated elastic surface, and the particles as segmented arcs that induce membrane curvature by binding to the membrane. The direct particle-particle interactions are purely repulsive and only prevent particle overlap. The particle arrangements in our model are therefore governed by indirect, membrane-mediated interactions. These particle arrangements are essentially unaffected by the membrane discretization because the particles are not embedded in the membrane, in contrast to previous elastic-membrane models. In previous models, curvature-inducing particles have been described as nematic objects embedded on the vertices of a triangulated membrane \cite{Ramakrishnan13,Tourdot14}, or as curved chains of beads embedded in a two-dimensional sheet of beads that represents the membrane \cite{Noguchi15,Noguchi16}. Our main aim here is to obtain a general classification of the membrane morphologies induced by arc-shaped scaffold particles that do not exhibit specific attractive interactions. This classification is obtained from simulations in which the overall number of particles exceeds the number of membrane-bound particles. The membrane coverage then is not constrained by the number of available particles, which leads to rather sharp transitions between `pure' spherical, tubular, or disk-like morphologies in our simulations. 
Previous elastic membrane models, in contrast, have been investigated for a fixed number of membrane-embedded or bound particles, which typically leads to `mixed' membrane morphologies, e.g.~morphologies with membrane tubules or disks protruding from a spherical vesicle. We find that the membrane shape is fully determined by the arc angle and the membrane coverage of the particles. For all considered arc angles of the particles between $60\degree$ and $180\degree$, membrane tubules are formed at particle coverages that exceed about 40\%. Arc angles of $60\degree$ roughly correspond to the angle enclosed by BAR domain proteins such as the Arfaptin BAR domain and the endophilin and amphiphysin N-BAR domains \cite{Qualmann11,Masuda10}, while larger arc angles up to $180\degree$ have been postulated for reticulon scaffolds \cite{Shibata10,Schweitzer15b}. At smaller membrane coverages below 40\%, particles with arc angles of about $60\degree$ do not change the membrane morphologies in our model, while particles with arc angles larger than about $120\degree$ induce disk-like double-membrane stacks by stabilizing curved edges. Particles with arc angles around $90\degree$ lead to faceted, irregular membrane morphologies at smaller coverages. The arrangements of particles with arc angles of $60\degree$ along tubules in our simulations are similar to the rather loose arrangement of N-BAR domains observed in electron microscopy experiments \cite{Daum16}. This similarity supports the suggestion that these rather loose arrangements of N-BAR domains are dominated by membrane-mediated interactions. \section*{Methods} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figure-1} \caption{(a) Arc-shaped particles composed of 3 to 7 planar segments with an angle of $30\degree$ between neighboring segments. 
The angle between the two end segments of the particles is $60\degree$ for particles of size 3, and $90\degree$, $120\degree$, $150\degree$, and $180\degree$ for particles of size 4, 5, 6, and 7, respectively. (b) Distributions of angles between the membrane triangles that are bound to the end segments of the particles for the binding cutoff $\theta_c = 10\degree$ of the particle-membrane adhesion potential (see Equation (\ref{Vpm})). The mean values of these angle distributions are $52.5\degree$, $82.6\degree$, $112.4\degree$, $143.2\degree$, and $169.7\degree$ for particles of size 3 to 7, respectively. (c) Distributions of angles between the membrane triangles bound to the end segments of particles of size 3 for different values of the binding cutoff $\theta_c$. The mean value of the distributions increases from $52.5\degree$ for $\theta_c=10\degree$ to $57.8\degree$ for $\theta_c=5\degree$ and $59.1\degree$ for $\theta_c=3\degree$. } \label{figure-1} \end{figure} \subsection*{Model} We model the membrane as a discretized closed surface. The bending energy of a closed continuous membrane without spontaneous curvature is the integral ${\cal E}_\text{be} = 2\kappa \oint M^2 \,dS$ over the membrane surface with local mean curvature $M$ \cite{Helfrich73}. We use the standard discretization of the bending energy for triangulated membranes described in Refs.\ \cite{Julicher96,Bahrami12} and choose as typical bending rigidity the value $\kappa = 10 k_B T$ \cite{Dimova14}. Our discretized membranes are composed of either $n_t = 2000$ triangles or $n_t = 5120$ triangles. The edge lengths of the triangles are kept within an interval $[a_m, \sqrt{3} a_m]$ \cite{Julicher96,Bahrami12}. The area of the membrane is constrained to ensure the near incompressibility of lipid membranes \cite{Lipowsky05}. The strength of the harmonic constraining potential is chosen such that the fluctuations of the membrane area are limited to less than $0.5\%$. 
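As an illustration of this energy discretization, the following Python sketch evaluates $2\kappa \sum_i M_i^2 A_i$ with per-vertex mean curvatures $M_i = \sum_j l_{ij} \theta_{ij} / (4 A_i)$, one common variant of the vertex-based scheme of Ref.\ \cite{Julicher96}; the mesh representation and function names are ours, and the sign convention of the dihedral angle $\theta_{ij}$ is omitted for brevity:

```python
import numpy as np

def bending_energy(vertices, triangles, kappa):
    """Vertex-based discretization of E = 2*kappa * sum_i M_i^2 * A_i with
    M_i = (1/(4*A_i)) * sum over edges (i,j) of l_ij * theta_ij, where
    theta_ij is the dihedral angle at edge (i,j). Sketch only: the sign of
    theta_ij and mesh-quality checks are omitted."""
    vertices = np.asarray(vertices, dtype=float)
    # per-triangle unit normals and areas
    normals, areas = {}, {}
    for t in triangles:
        a, b, c = (vertices[i] for i in t)
        n = np.cross(b - a, c - a)
        areas[t] = 0.5 * np.linalg.norm(n)
        normals[t] = n / np.linalg.norm(n)
    # vertex areas: one third of the incident triangle areas
    A = np.zeros(len(vertices))
    for t in triangles:
        for i in t:
            A[i] += areas[t] / 3.0
    # map each edge to its (at most two) adjacent triangles
    edge_tris = {}
    for t in triangles:
        for i, j in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            edge_tris.setdefault(frozenset((i, j)), []).append(t)
    # accumulate l_ij * theta_ij on both endpoint vertices of each edge
    S = np.zeros(len(vertices))
    for edge, tris in edge_tris.items():
        if len(tris) != 2:
            continue  # boundary edge of an open test mesh
        i, j = tuple(edge)
        l_ij = np.linalg.norm(vertices[i] - vertices[j])
        cos_th = np.clip(np.dot(normals[tris[0]], normals[tris[1]]), -1.0, 1.0)
        theta = np.arccos(cos_th)  # unsigned dihedral angle in this sketch
        S[i] += l_ij * theta
        S[j] += l_ij * theta
    M = S / (4.0 * A)  # discrete mean curvature per vertex
    return float(np.sum(2.0 * kappa * M**2 * A))
```

For a flat mesh, all dihedral angles vanish and the discretized bending energy is zero, as in the continuum limit.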
The enclosed volume is unconstrained to enable the full range of membrane morphologies. Coarse-grained molecular simulations indicate that the full spectrum of bending fluctuations can be described for a membrane discretization length $a_m$ of about 5 nm \cite{Goetz99}. Our arc-shaped particles are composed of 3 to 7 identical planar quadratic segments. Neighboring segments share a quadratic edge and enclose an angle of $30\degree$ in most of our simulations (see Figure 1(a)). The arc angle of the particles, i.e.\ the angle between the first and last segment, then adopts the values $60\degree$, $90\degree$, $120\degree$, $150\degree$, and $180\degree$ for particles with 3, 4, 5, 6, and 7 segments, respectively. In addition, we consider particles composed of 5 segments with an angle of $15\degree$ between neighboring segments. These particles have the same arc angle of $60\degree$ as particles composed of 3 segments with an angle of $30\degree$ between neighboring segments, but have a larger size and smaller curvature compared to the particles of 3 segments. Each planar segment of a particle interacts with the nearest triangle of the membrane via the particle-membrane adhesion potential \begin{equation} V_\text{pm} = - U f_r(r) f_\theta(\theta), \label{Vpm} \end{equation} where $r$ is the distance between the center of the segment and the center of the nearest triangle, $\theta$ is the angle between the normals of the segment and this triangle, and $U$ is the adhesion energy per particle segment. The distance-dependent function $f_r$ is a square-well function that adopts the values $f_r(r) = 1$ for $r_1 < r < r_2$ and $f_r(r)=0$ otherwise. The angle-dependent function $f_\theta$ is a square-well function with values $f_\theta(\theta) = 1$ for $|\theta| < \theta_c$ and $f_\theta(\theta) = 0$ otherwise. 
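This square-well adhesion energy can be sketched in Python as follows (a minimal illustration; the function name and the use of degrees for $\theta$ are our choices, and $r_1 = 0.25\, a_m$ and $r_2 = 0.75\, a_m$ are the parameter values used in our simulations):

```python
def adhesion_potential(r, theta_deg, U, a_m=1.0, theta_c_deg=10.0):
    """Square-well adhesion V_pm = -U * f_r(r) * f_theta(theta) between a
    particle segment and its nearest membrane triangle; r is the distance
    between the two centers, theta_deg the angle (in degrees) between the
    two normals, and U the adhesion energy per particle segment."""
    r1, r2 = 0.25 * a_m, 0.75 * a_m                  # distance well
    f_r = 1.0 if r1 < r < r2 else 0.0
    f_theta = 1.0 if abs(theta_deg) < theta_c_deg else 0.0
    return -U * f_r * f_theta
```

A segment thus gains the full adhesion energy $-U$ whenever it is within the distance well and nearly parallel to the triangle, and nothing otherwise.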
By convention, the normals of the membrane triangles are oriented outward from the enclosed volume of the membrane, and the normals of the particle segments are oriented away from the center of the particle arc. The particles then bind to the membrane with their inner, concave side that is oriented towards the center of the arc. We use the parameter values $r_1 = 0.25 a_m$ and $r_2 = 0.75 a_m$ in all our simulations, and the value $\theta_c = 10\degree$ in most of our simulations. In simulations with the smallest particles composed of 3 segments, we also consider the values $\theta_c = 3\degree$ and $5\degree$, besides $\theta_c = 10\degree$. The overlapping of particles is prevented by a purely repulsive hard-core interaction that only allows distances between the centers of the planar segments of different particles that are larger than $a_p$. The hard-core area of a particle segment is thus $\pi a_p^2/4$. We use this hard-core area in calculating the membrane coverage of bound particles. We choose the value $a_p = 1.5 a_m$ for the linear size of the planar particle segments. The particle segments are then slightly larger than the membrane triangles with minimum side length $a_m$, which ensures that different particle segments bind to different triangles.\\ \begin{figure*}[hbt!] \centering \includegraphics[width=0.95\linewidth]{figure-2} \caption{Reduced volume versus membrane coverage for membrane morphologies induced by (a) particles of size 3 with binding cutoff $\theta_c = 10\degree$, (b) particles of size 3 with binding cutoffs $\theta_c = 3\degree$ (full circles) and $\theta_c = 5\degree$ (open circles), and (c) to (f) particles of size 4 to 7 with binding cutoff $\theta_c = 10\degree$. Clusters of points with the same morphology are classified as spheres (blue), faceted (green), disks (orange), and tubes (red). 
The grey points in (b) and (c) are intermediate between spherical and tubular, the grey points in (e) are intermediate between disk-like and tubular, and the grey points in (f) correspond to strongly metastable morphologies in which only a part of the membrane is tubular and covered with particles. The data result from simulations with membranes composed of 2000 triangles. In these simulations, the membrane area is constrained to the average value $A \simeq 0.677 n_t a_m^2 \simeq 1.35 \cdot 10^3 a_m^2$, where $a_m$ is the minimum edge length of the triangulated membrane. The overall number of particles in our simulations is either $N=200$ or $400$, and the adhesion energy per particle segment adopts one of the values $U = 3, 4, 5, \ldots, 20\, k_B T$ (see Figure S2). For each combination of particle number $N$ and adhesion energy $U$, we have performed simulations starting from an initial spherical, disk-like, or tubular membrane shape (see Figure S1). The membrane and particles are enclosed in a cubic box of volume $V_\text{box} \simeq 1.26\cdot 10^5 a_m^3$. This box volume is 27 times larger than the volume of a perfect sphere with the membrane area $A$ given above. } \label{figure-2} \end{figure*} \subsection*{Simulations} We have performed Metropolis Monte Carlo simulations in a cubic box with periodic boundary conditions. The simulations consist of four different types of Monte Carlo steps: membrane vertex translations, membrane edge flips, particle translations, and particle rotations. Vertex translations enable changes of the membrane shape, while edge flips ensure membrane fluidity \cite{Gompper97}. In a vertex translation, a randomly selected vertex of the triangulated membrane is translated along a random direction in three-dimensional space by a distance that is randomly chosen from an interval between $0$ and $0.1 a_m$. In a particle translation, a randomly selected particle is translated in a random direction by a random distance between $0$ and $a_m$. 
In a particle rotation, a randomly selected particle is rotated around a rotation axis that passes through the central point along the particle arc. For particles that consist of 3, 5, or 7 segments, the rotation axis runs through the center of the central segment. For particles of 4 or 6 segments, the rotation axis runs through the center of the edge that is shared by the two central segments. The rotation axis is oriented in a random direction. The random rotations are implemented using quaternions \cite{Frenkel02,Vesely82} with rotation angles between 0 and a maximum angle of about $2.3\degree$. Each of these types of Monte Carlo steps occurs with equal probabilities for single membrane vertices, edges, or particles. The membrane coverage $x$ of the particles in our simulations depends on the overall number of particles and on the adhesion energy $U$ per particle segment. The overall number of particles in our simulations is either $N=200$ or $400$, and the adhesion energy per particle segment is varied from $U = 3$ to $20\, k_B T$ to obtain the full range of possible coverages. For each combination of particle number $N$ and adhesion energy $U$, we have performed simulations starting from an initial spherical, disk-like, or tubular membrane shape (see Figure S1). The particles are initially randomly distributed in the simulation box outside of the membrane. Simulations starting from initial disk-like and tubular shapes first include only particle translations and rotations to stabilize these initial shapes by bound particles. All simulations then include all four types of MC moves for total simulation lengths between $1\cdot 10^7$ and $8\cdot 10^7$ MC steps per membrane vertex, depending on convergence. To verify convergence, we divide the last $10^7$ MC steps per vertex of a simulation into ten intervals of $10^6$ steps and calculate the average coverage $x$ and reduced volume $v$ of the membrane for each interval. 
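This block-averaging convergence check can be sketched in Python as follows (a minimal illustration; the function name and return convention are ours):

```python
import numpy as np

def block_converged(series, n_blocks=10, tol=0.02):
    """Split a Monte Carlo time series (e.g. the coverage x or the reduced
    volume v recorded over the tail of a simulation) into n_blocks equal
    blocks, average each block, and test whether the standard deviation of
    the block averages stays below the tolerance tol."""
    blocks = np.array_split(np.asarray(series, dtype=float), n_blocks)
    block_means = np.array([b.mean() for b in blocks])
    return block_means.std() < tol, block_means.mean()
```

For a converged simulation, the second return value is the reported mean of $x$ or $v$; a drifting series fails the criterion because its block averages spread widely.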
For membranes composed of $n_t = 2000$ triangles, we take a simulation to be converged if the standard deviations of these ten averages are smaller than 0.02 for both quantities. The values for the membrane coverage $x$ and reduced volume $v$ given in Figures \ref{figure-2} and \ref{figure-3} are the mean values of these 10 averages. For our larger membranes composed of $n_t = 5120$ triangles, we take a simulation to be converged if the standard deviations of the 10 averages of $x$ and $v$ for the last 10 intervals of $10^6$ MC steps are both smaller than 0.01. \section*{Results} The arc-shaped particles of our model induce membrane curvature by binding to the membrane with their inner concave sides. We first consider particles composed of 3 to 7 planar segments with an angle of $30\degree$ between adjacent segments (see Figure \ref{figure-1}(a)). The arc angle of these particles depends on the particle size, i.e.\ on the number of planar segments. A particle segment is bound to the discretized, triangulated membrane of our model if its distance to the closest membrane triangle is within a given range, and if the particle segment and membrane triangle are nearly parallel with an angle that is smaller than a cutoff angle $\theta_c$ (see Methods for details). The relative area of the particle segments and membrane triangles is chosen such that a particle segment can only be bound to a single membrane triangle. Figures 1(b) and (c) illustrate the distributions of angles between the two membrane triangles that are bound to the two end segments of the particles. These induced membrane angles increase with increasing particle size and with decreasing binding cutoff $\theta_c$. The membrane morphologies obtained in our simulations are determined by the size and membrane coverage of the particles (see Figure \ref{figure-2}). The overall number of particles in the simulations is always larger than the number of bound particles covering the membrane. 
The membrane coverage then depends on the concentration and binding energy of the particles, but is not limited or constrained by the number of available particles. An overlap between particles is prevented by a hard-core repulsion potential. Without particles, the closed membrane of our model adopts a spherical shape because the bending energy of such a membrane vesicle is minimal for the sphere. For the smallest particles of size 3 and membranes composed of 2000 triangles, the membrane retains a spherical shape up to coverages of about 50\% for the binding cutoff angle $\theta_c = 10\degree$ (see Figure \ref{figure-2}(a)), and up to coverages of about 45\% for the binding cutoffs $\theta_c =3\degree$ and $5\degree$ (see Figure \ref{figure-2}(b)). At larger coverages, the morphology of the membrane changes from spherical to tubular. This morphology change leads to a drop in the reduced volume $v = 6 \sqrt{\pi} V/A^{3/2}\le 1$, which is a measure of the area-to-volume ratio of the membrane vesicle \cite{Seifert91} and adopts its maximum value of 1 for an ideal sphere. The area $A$ of the membrane is constrained in our simulations to ensure the near incompressibility of lipid membranes \cite{Lipowsky05}, whereas the volume $V$ is unconstrained to allow for the full range of membrane morphologies. At intermediate coverages, the membrane morphologies depend on the particle size. For particles of size 4 to 7, spherical morphologies with bound particles do not occur, in contrast to particles of size 3. Instead, particles of size 4 lead to irregular, faceted morphologies at intermediate coverages between about 10\% and 30\%. Particles of size 5 to 7 induce disk-like morphologies at intermediate coverages. However, all particles induce tubular morphologies at sufficiently large coverages. \begin{figure}[t!] 
\centering \includegraphics[width=\linewidth]{figure-3} \caption{Morphology diagram with mean induced angle versus membrane coverage for the data points of Figure \ref{figure-2}. The mean induced angles are the mean values of the angle distributions of Figure \ref{figure-1}. The lines of full circles with the same mean induced angle are from simulations with particles of size 3 to 7 (bottom to top) and binding cutoff $\theta_c = 10\degree$. The open circles represent the simulation results for particles with size 3 and binding cutoff $\theta_c = 3\degree$. } \label{figure-3} \end{figure} \begin{figure*}[bth!] \centering \includegraphics[width=0.95\linewidth]{figure-4} \caption{Exemplary tubular morphologies for particles composed of 3 to 7 segments with an angle of $30\degree$ between adjacent segments. The membrane coverage is $x = 0.56$ for all morphologies. The membranes consist of 2000 triangles.} \label{figure-4} \end{figure*} The points in Figure \ref{figure-2} result from simulations with different overall particle number, binding energy, or initial membrane shape. The initial membrane shape in our simulations is either spherical, tubular, or disk-like (see Figure S1). Figure \ref{figure-2} only includes points from simulations with converged membrane coverage and reduced volume (see Methods). Simulations that differ in the initial membrane shape may converge to different values for the membrane coverage and reduced volume, which indicates metastabilities. These metastabilities tend to increase with the binding energy and size of the particles. For particles with size 3 to 5, all membrane morphologies emerge in simulations starting from an initially spherical membrane. For particles with size 6 and 7, in contrast, disk-like and tubular morphologies no longer emerge in simulations with an initially spherical membrane because these particles do not bind to spherical membranes within the accessible simulation times. 
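The reduced volume $v = 6 \sqrt{\pi} V/A^{3/2}$ that distinguishes these morphologies can be illustrated for ideal shapes. The following Python sketch (the function name is ours) computes $v$ for a spherocylinder, i.e.\ a cylinder of length $L$ capped by two hemispheres, which recovers $v = 1$ for $L = 0$ and decreases towards the tubular regime for growing $L$:

```python
import math

def reduced_volume_spherocylinder(radius, length):
    """Reduced volume v = 6*sqrt(pi)*V / A**1.5 of a cylinder of the given
    length capped by two hemispheres; length = 0 recovers a sphere (v = 1)."""
    V = math.pi * radius**2 * length + (4.0 / 3.0) * math.pi * radius**3
    A = 2.0 * math.pi * radius * length + 4.0 * math.pi * radius**2
    return 6.0 * math.sqrt(math.pi) * V / A**1.5
```

The monotonic decrease of $v$ with the tube aspect ratio is why the sphere-to-tube transition shows up as a drop in the reduced volume.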
However, all points in the diagrams of Figure \ref{figure-2} fall onto a single curve for a given particle size, irrespective of whether these points result from metastable or stable simulations and parameters. These curves imply that the reduced volume $v$ of the membrane is a function of the membrane coverage. For each particle type, the membrane coverage determines the reduced volume $v$ and, thus, the membrane morphology. All membrane morphologies of Figure \ref{figure-2} are summarized in the diagram of Figure \ref{figure-3}. In this diagram, the mean induced angle is displayed versus the membrane coverage for the data points of Figure \ref{figure-2}. The mean induced angle varies from $52.5\degree$ for particles of size 3 with binding cutoff $\theta_c = 10\degree$ to $169.7\degree$ for particles of size 7. Figure \ref{figure-3} illustrates that the threshold values of the membrane coverage above which membrane tubes are formed in our simulations are rather independent of the particle type. These threshold values range from about 40\% for particles of size 4 to 50\% for particles of size 3 and angle cutoff $\theta_c=10\degree$. Particles of size 5 to 7 induce membrane tubules at coverages above about 45\%, and disks at intermediate coverages below about 35\%. For particles of size 5, disk-like and tubular morphologies are separated by a somewhat larger gap in membrane coverage $x$, compared to particles of size 6 and 7. This gap arises because particles of size 5 are arranged in three lines along the tubes (see Figures \ref{figure-4} and S2), while elongated disks exhibit only two lines of particles at opposing sides (see Figures \ref{figure-5} and S3). Tubes with particles of size 5 thus cannot be generated by simple elongation of disks, in contrast to particles of size 6 and 7, which are arranged in two lines of particles along the tubes. Particles of size 4 induce tubular morphologies with four lines of particles along the tube. 
At intermediate coverages, the particles lead to irregular, faceted morphologies with strongly curved membrane ridges covered by lines of particles, and weakly curved, uncovered membrane segments in between these ridges (see Figure \ref{figure-2}, top). Particles of size 3 tend to align side by side along the tubules, but do not form continuous lines along the whole tubule. The ordering of particles of size 3 along the tubes is thus shorter-ranged than for larger particles. \begin{figure*}[bth!] \centering \includegraphics[width=0.8\linewidth]{figure-5} \caption{ Exemplary disk-like morphologies for particles composed of 5 to 7 segments with an angle of $30\degree$ between adjacent segments and membrane coverages of $x = 0.28$, $0.31$, and $0.35$ (from left to right). The membranes consist of 2000 triangles.} \label{figure-5} \end{figure*} In Figure \ref{figure-6}, we compare simulation results for (a) particles composed of 3 segments with an angle of $30\degree$ between neighboring segments and (b) particles composed of 5 segments with an angle of $15\degree$ between neighboring segments. Both types of particles enclose the same arc angle of $60\degree$ between their terminal segments, but have different curvatures because of the different angles between their neighboring segments, and different sizes. The membrane in the simulations of Figure \ref{figure-6} is composed of 5120 triangles and is, thus, significantly larger than the membrane in the simulations of Figure \ref{figure-2}. For membranes composed of 2000 triangles as in Figure \ref{figure-2}, the more weakly curved particles of Figure \ref{figure-6}(b) do not induce a clear morphology transition from spherical to tubular because this smaller membrane size does not allow for sufficiently elongated spherocylinders that are clearly distinguishable from spheres (data not shown). 
For the larger membrane size of 5120 triangles, however, both types of particles of Figure \ref{figure-6} exhibit a rather sharp morphology transition from spherical to tubular at membrane coverages of about 0.37. This identical threshold value for the sphere-to-tubule transition illustrates that the overall membrane morphology is determined by the arc angle, which is identical for both types of particles, and not by the size or curvature of the particles. As expected, the more weakly curved particles of Figure \ref{figure-6}(b) induce thicker tubules for membrane coverages beyond the threshold value of about 0.37. For particles composed of 3 segments with an angle of $30\degree$ between neighboring segments, the threshold value for membranes with 5120 triangles is smaller than the threshold value of about 0.5 obtained for membranes with 2000 triangles (see Figure \ref{figure-2}(a)). The tubules of membranes with 2000 triangles induced by particles of 3 segments have a relatively small aspect ratio, i.e.\ a relatively small ratio of tube length and diameter, and dumbbell-like distortions for coverages $x$ close to the threshold value for tube formation (see Figure S3). The membrane size of 2000 triangles therefore is likely too small for reliable estimates of the sphere-to-tubule transition for these particles. The larger particles of Figure \ref{figure-2}, in contrast, induce tubules with significantly larger aspect ratios, compared to particles of size 3 (see Figures \ref{figure-4} and S3). The threshold values for tube formation obtained for these particles from Figures 2(c) to (f) therefore should be only weakly affected by the membrane size. \begin{figure*}[hbt!] 
\centering \includegraphics[width=0.8\linewidth]{figure-6} \caption{Reduced volume $v$ versus membrane coverage for membrane morphologies with 5120 membrane triangles and (a) particles composed of 3 segments with an angle of $30\degree$ between neighboring segments and (b) particles composed of 5 segments with an angle of $15\degree$ between neighboring segments. Spherical morphologies are indicated by blue points, tubular morphologies by red points. The two grey points in (a) correspond to intermediate morphologies. The data result from simulations with overall particle number $N=400$. To attain different coverages $x$, we have run simulations with adhesion energy $U = 3$, $3.5$, $4$, $4.5$, $5$, $5.5$, $6$, $6.5$, $6.6$, $6.8$, $7$, $7.2$, $7.4$, $7.5$, $7.6$, $7.8$, $8$, $8.2$, $8.4$, $8.5$, $8.6$, $9$, $10$, $11$, $12$, $13$, and $14\, k_B T$ in (a) and $U = 3$, $3.5$, $4$, $4.1$, $4.2$, $4.3$, $4.4$, $4.5$, $4.6$, $4.7$, $4.8$, $4.9$, $5$, $6$, $7$, $8$, $9$, $10$, $11$, $12$, and $13\, k_B T$ in (b). For each value of $U$, we have run 3 simulations starting from a spherical morphology and 2 to 3 simulations starting from a tubular morphology. Only points from simulations with converged membrane coverage and reduced volume are included in the plots (see Methods). The mean induced angle of the particles in (b) is $56.1\degree$ and, thus, slightly larger than the mean induced angle of $52.5\degree$ of the particles in (a). The particle coverage is $x = 0.48$ for the tubule shown to the left and $x = 0.44$ for the tubule to the right. } \label{figure-6} \end{figure*} \section*{Discussion and conclusions} In this article, we have investigated the transitions between different membrane morphologies induced by arc-shaped particles { with purely repulsive direct particle-particle interactions}. Our aim was to classify the membrane morphologies and to identify the particle properties that determine these morphologies. 
Arc-shaped particles can differ in their size, curvature, arc angle, adhesion energy, and overall number. Our central result is that the membrane morphologies induced by arc-shaped particles are determined by the arc angle and the membrane coverage of the particles. In our model, the particles are described as segmented arcs that adhere to the triangulated membrane. The particle discretization is thus independent of the membrane discretization, and the membrane coverages obtained in our simulations are not affected or limited by the membrane discretization. In our simulations, the overall number of particles is larger than the number of adsorbed particles. Unbound particles thus constitute a particle reservoir in our simulations, which leads to rather sharp transitions and `pure' morphologies, as is typical for simulations in a grand-canonical ensemble. We find that arc-shaped particles induce membrane tubules for membrane coverages larger than a threshold value of about 0.4, rather independent of their arc angle. At smaller coverages, particles with arc angles larger than about $120\degree$ induce disk-like membrane morphologies. These disk-like morphologies have characteristic membrane edges that connect the two opposing, nearly planar membrane segments of the disks. The disks are therefore `small-membrane equivalents' of the stacked, connected membrane sheets observed in the endoplasmic reticulum \cite{Terasaki13,Nixon16}. On the coarse-graining level of our model, the curvature generation of the particles is captured by induced curvature angles of the particles. Proteins may induce comparable curvatures and curvature angles by membrane adhesion, membrane insertion, or a combination of adhesion and insertion \cite{McMahon15,Kozlov14,Baumgart11,Zimmerberg06}. { In our simulations, particles with arc angles of $60\degree$ induce a characteristic sphere-to-tubule transition. 
The membrane adopts a spherical shape for particle coverages below a transition value, and a tubular shape for coverages above the transition value. At coverages below the transition value, the particle arrangement on the spherical membranes is rather homogeneous, with only short-range order. Particles with arc angles of $90\degree$ and larger, in contrast, do not exhibit homogeneous arrangements on spherical membranes. Instead, these particles form linear aggregates at small membrane coverages in which the particles are aligned side-by-side. This alignment in lines is driven by indirect, membrane-mediated interactions between bound particles \cite{Weikl18,Phillips09,Reynwar07} because the direct particle-particle interactions are purely repulsive in our model. For particles with arc angles of $90\degree$, the linear aggregates at small membrane coverages lead to faceted morphologies, and to a rather continuous change from faceted to tubular morphologies, both with four lines of particles, for increasing membrane coverages (see also Figure S6). Particles with arc angles of $120\degree$, $150\degree$, and $180\degree$ form single closed linear aggregates at small membrane coverages in our simulations with excess particles. These closed linear aggregates lead to disk-like membrane morphologies. For particles with arc angles of $150\degree$ and $180\degree$, the disk-like morphologies change rather continuously into tubular morphologies, both with two lines of particles along opposite membrane sides. For particles with arc angles of $120\degree$, the transition from disk-like to tubular morphologies is discontinuous because the particles are arranged in three lines along tubes, which cannot form continuously from the single closed particle line of the disk-like morphologies. 
Overall, the transition from faceted or disk-like morphologies into tubular morphologies depends on the packing density of particles in the linear aggregates, and on the distances between the linear particle aggregates, which depend on the membrane coverage. It is plausible to assume that both the particle density along linear aggregates and the distances between linear aggregates for given membrane coverages do not depend on the overall membrane size in our simulations. The threshold coverage values for tube formation obtained from our simulations therefore should be independent of the membrane size, at least for particles with arc angles of $90\degree$ and larger. The sphere-to-tubule transition of our particles with arc angles of $60\degree$ in principle may depend on the membrane size, mainly because the curvature of the spherical membranes changes with increasing size. However, the rather large spherical membranes in the simulations of Figure \ref{figure-6} have a curvature that is already significantly smaller than the particle curvature. We therefore expect that the threshold values of the sphere-to-tubule transition obtained from Figure \ref{figure-6} do not change substantially with increasing membrane size. The tubules in our simulations have two spherical caps with a bending energy that corresponds to the bending energy of a sphere. The excess bending in the sphere-to-tubule transition therefore occurs in the cylindrical section of our tubules. } { Arc angles of $60\degree$ roughly correspond to the angle enclosed by BAR domain proteins such as the Arfaptin BAR domain and the endophilin and amphiphysin N-BAR domains \cite{Qualmann11,Masuda10}. 
Electron tomography images of membrane tubules induced by Bin1 N-BAR domain proteins show a rather loose protein arrangement with only short-ranged order (see Figure 5I of reference \cite{Daum16}), which is rather similar to the tubular morphologies obtained for our arc-shaped particles with arc angles of $60\degree$ (see Figures 4, S4, and 6). This similarity suggests that the observed rather loose arrangements of Bin1 N-BAR domains are likely dominated by membrane-mediated interactions. However, in contrast to our arc-shaped particles, the arrangements of BAR domain proteins in general can be affected by specific protein-protein interactions. The highly ordered coats of CIP4 F-BAR domains \cite{Frost08}, endophilin N-BAR domains \cite{Mim12a}, and amphiphysin/BIN1 N-BAR domains \cite{Adam15} observed in electron microscopy appear to result from such specific protein-protein interactions. 3D reconstructions show that neighboring CIP4 F-BAR domains and amphiphysin/BIN1 N-BAR domains are arranged in parallel in these coats, but tip-to-side rather than side-to-side. These parallel tip-to-side arrangements in two-dimensional helical aggregates are clearly different from the linear, membrane-mediated aggregates observed in our simulations. Fluorescence experiments on the membrane tubulation of giant unilamellar vesicles (GUVs) by endophilin N-BAR domains indicate an onset of tubulation at endophilin coverages of about 10\% on the vesicles for low membrane tensions \cite{Shi15}. The endophilin coverage along the tubules has not been determined in these experiments. However, the tubulation by endophilin N-BAR domains observed in these fluorescence experiments is clearly different from the sphere-to-tube transition induced by our particles with arc angles of $60\degree$. In this transition, membrane spheres with particle coverages close to 40\% change into tubes with about the same coverage. 
In fluorescence experiments with pre-formed tubules pulled with optical tweezers from GUVs, the membrane coverages $44\pm 27\%$ of endophilin N-BAR domains and $37\pm 9\%$ of $\beta$2 centaurin BAR domains have been measured along the tubules \cite{Simunovic16}. } Sparse but regular arrangements on membrane tubules have been recently observed for dynamin-amphiphysin complexes \cite{Takeda18}. The membrane-mediated side-by-side alignment of particles in our simulations is also prominent in the time sequences of particle-induced morphology changes from spherical to tubular (see Figures S4 and S5). Attractive membrane-mediated pair interactions of arc-shaped particles that are oriented side-by-side have been previously found by energy minimization \cite{Schweitzer15a}. In molecular dynamics (MD) simulations with a coarse-grained molecular model of N-BAR domain proteins on DLPC lipid vesicles, in contrast, a tip-to-tip alignment of proteins has been observed \cite{Simunovic13,Simunovic17}. Morphologies with tubular and disk-like membrane segments have been previously observed in different elastic-membrane models \cite{Ayton07,Ayton09,Ramakrishnan13,Tourdot14,Noguchi15,Noguchi16,Noguchi16b}. In recent models, curvature-inducing particles have been described as nematic objects embedded on the vertices of a triangulated membrane \cite{Ramakrishnan13,Tourdot14}, or as curved chains of beads embedded in a two-dimensional sheet of beads that represents the membrane \cite{Noguchi15,Noguchi16}. These models have been investigated for a constant number of membrane-embedded particles. For such a canonical ensemble of embedded particles, membrane-mediated interactions between the particles lead to particle aggregation and to membrane morphologies with particle-dense, strongly curved membrane segments and particle-free, weakly curved segments. The membrane coverages in the particle-dense regions are affected by the membrane discretization in these models. 
Curved chains of beads embedded in a sheet of beads, for example, tend to contact each other side by side \cite{Noguchi15,Noguchi16}. The membrane coverage in these dense particle regions then depends on the size of the discrete membrane beads. In agreement with our simulation results, the curved chains of beads form several lines along tubules, depending on their chain length \cite{Noguchi15}. In our disk-like morphologies, the arc-shaped particles stabilize a highly curved, closed edge that connects two stacked, weakly curved membrane segments. The thickness of the membrane disks decreases with the particle size and, thus, with the induced angle of the particles (see Figures S7, \ref{figure-5}, and S3), in agreement with energy minimization for ideal disks \cite{Schweitzer15b}. The thickness and coverage of the disks in our model are limited by the membrane area. For larger membrane area $A$, disk-like morphologies exhibit smaller overall coverages because the curved, particle-covered edge of the disks increases only in proportion to $\sqrt{A}$. In general, stacked membranes with lateral extensions that are significantly larger than their separation repel each other sterically because of membrane shape fluctuations \cite{Helfrich78,Lipowsky86}. Therefore, stacked membranes of large area are presumably stabilized by additional attractive interactions between the membranes. The membrane morphologies in our model result from an intricate interplay of the bending free energy of the membrane and the overall adhesion free energy of the particles. The membrane in our model is tensionless. 
In general, the bending energy dominates over a membrane tension $\sigma$ on length scales smaller than the characteristic length $\sqrt{\kappa/\sigma}$, which adopts values between 100 and 400 nm for typical tensions $\sigma$ of a few $\mu\text{N}/\text{m}$ \cite{Simson98,Popescu06,Betz09} and typical bending rigidities $\kappa$ between $10$ and $40$ $k_B T$ where $k_B T$ is the thermal energy \cite{Nagle13,Dimova14}. Our results thus hold on length scales smaller than this characteristic length. In contrast, the overall membrane morphology on length scales larger than $\sqrt{\kappa/\sigma}$ depends on the membrane tension \cite{Lipowsky13,Shi15,Simunovic15}. \section*{Author Contributions} FB and TW designed the research. FB carried out all simulations. FB and TW analyzed the data and wrote the article. \section*{Acknowledgments} Financial support from the Deutsche Forschungsgemeinschaft (DFG) {\em via} the International Research Training Group 1524 ``Self-Assembled Soft Matter Nano-Structures at Interfaces" is gratefully acknowledged.
\section{Introduction} \label{sec-intro} The effectiveness of model-free reinforcement learning (RL) algorithms has been demonstrated extensively on robotic control tasks, computer video games, and other challenging problems \cite{lillicrap2015,mnih2015}. Despite widespread success, many existing RL algorithms must process a huge number of environment samples in order to learn effectively \cite{mnih2016icml,wu2017nips,bhatnagar2009automatica}. Aimed at significantly reducing the \emph{sample cost}, \emph{off-policy} algorithms that can learn efficiently by reusing past experiences have attracted increasing attention \cite{lillicrap2015,munos2016nips,gu2016icra,wang2016arxiv}. Unfortunately, on RL problems with high-dimensional continuous state spaces and action spaces, off-policy learning can be highly unstable and often diverges even with small changes to hyper-parameter settings \cite{lillicrap2015,henderson2017arxiv}. In view of the difficulties faced by existing off-policy algorithms, innovative techniques have been developed lately by seamlessly integrating both the maximum reward and the maximum entropy objectives, resulting in a new family of \emph{maximum entropy RL} algorithms \cite{nachum2017,donoghue2017,nachum2017arxiv}. A cutting-edge member of this family is the \emph{Soft Actor-Critic} (SAC) algorithm \cite{haarnoja2018icml}. SAC allows RL agents to effectively reuse past experiences by adopting an off-policy learning framework derived from the policy iteration method \cite{sutton1998book}. It also stabilizes RL by learning maximum entropy policies that are robust to environmental uncertainties and erroneous parameter settings \cite{ziebart2010thesis}. This algorithm is simpler to implement and more reliable to use than some of its predecessors \cite{haarnoja2017}. 
While SAC is well-known for its high sample efficiency, the policies trained by SAC to maximize Shannon entropy never prevent an agent from exploring actions with low expected long-term rewards. This will inevitably reduce the effectiveness of environment exploration and affect the learning performance. To address this limitation, during the RL process, it can be more favorable in practice to maximize general entropy measures such as Tsallis entropy \cite{tsallis1994} or R\'enyi entropy \cite{jizba2004phy}. Specifically, it is shown mathematically in \cite{lee2018iral,chen20182} that, by maximizing Tsallis entropy, an RL agent will completely ignore unpromising actions, thereby achieving highly efficient environment exploration. However, as far as we know, no RL algorithms have ever been developed to maximize these general entropy measures in continuous action spaces. In this paper, we present a new policy iteration theory as an important extension of soft policy iteration proposed in \cite{haarnoja2018icml} to enable Actor-Critic RL that is capable of maximizing arbitrary general entropy measures. Guided by our new theory, we have further developed two new RL algorithms to fulfill respectively the objectives to maximize Tsallis entropy and R\'enyi entropy. The potential performance advantage of our new algorithms, in comparison to SAC, will also be analyzed theoretically. In particular, our new algorithms promote varied trade-offs between exploration and exploitation. When they are used to train an ensemble of policies which will be utilized jointly to guide an agent's environment exploration, the chance for the agent to learn high-quality policies is expected to be noticeably enhanced. Driven by this idea, a new ensemble Actor-Critic algorithm is further developed in this paper. In this algorithm, each policy in the ensemble can be trained to maximize either Tsallis entropy or R\'enyi entropy. 
We adopt a bootstrap mechanism proposed in \cite{osband20161} with the aim to realize deep environment exploration. To achieve satisfactory testing performance, we also introduce a new action-selection Q-network to perform high-level action selection based on actions recommended by each policy in the ensemble. On six difficult benchmark control tasks, our algorithms have been shown to clearly outperform SAC and several state-of-the-art RL algorithms in terms of both sample efficiency and effectiveness. \section{Related Work} \label{sec-survey} This paper studies the collective use of three RL frameworks, i.e. the \emph{actor-critic framework} \cite{deisenroth2013}, the \emph{maximum entropy framework} \cite{nachum2017,donoghue2017,haarnoja2017,haarnoja2018icml} and the \emph{ensemble learning framework} \cite{osband20161,wiering2008smc}. Existing works such as SAC have considered the first two frameworks. To the best of our knowledge, this is the first work to further incorporate the third framework for effective environment exploration and reliable RL. The actor-critic framework is commonly utilized by many existing RL algorithms, such as TRPO \cite{schulman2015icml}, PPO \cite{schulman20171,chen20181} and ACKTR \cite{wu2017nips}. However, a majority of these algorithms must collect a sequence of new environment samples for each learning iteration. Our algorithms, in comparison, can reuse past samples efficiently for policy training. Meanwhile, entropy regularization in the form of KL divergence is often exploited to restrict behavioral changes of updated policies so as to stabilize learning \cite{schulman2015icml,chen2018ijcnn}. On the other hand, our algorithms encourage entropy maximization for effective environment exploration. Previous works have also studied the maximum entropy framework for both on-policy and off-policy learning \cite{nachum2017,donoghue2017,nachum2017arxiv}. Among them, a large group focused mainly on problems with discrete actions. 
Some recently developed algorithms have further extended the maximum entropy framework to continuous domains \cite{haarnoja2017,haarnoja2018icml}. Different from these algorithms that focus mainly on Shannon entropy, guided by a new policy iteration theory, our newly proposed RL algorithms can maximize general entropy measures such as Tsallis entropy and R\'enyi entropy in continuous action spaces. In recent years, ensemble methods have gained widespread attention in the research community \cite{osband20161,chen2017ucb,bukcman2018}. For example, Bootstrapped Deep Q-Network and UCB Q-Ensemble \cite{osband20161,chen2017ucb} have been developed for single-agent RL with success. However, these algorithms were designed to work with discrete actions. In an attempt to tackle continuous problems, some promising methods such as ACE \cite{huang2017corr}, SOUP \cite{zeng2018ijcai} and MACE \cite{peng2016siggraph} have been further developed. Several relevant algorithms have also been introduced to support ensemble learning under a multi-agent framework \cite{lowe2017nips,panait2005aamas}. Different from ACE that trains each policy in the ensemble independently, our ensemble algorithm trains all policies by using the same replay buffer so as to significantly reduce sample cost. Different from MACE that treats policies in the ensemble as individual actions to be explored frequently across successive interactions with the learning environment, we adopt the bootstrap mechanism to realize deep environment exploration. Moreover, to the best of our knowledge, different from SOUP and other relevant algorithms \cite{kalweit2017pmlr}, the idea of training a separate Q-network for high-level action selection has never been studied before for single-agent RL. The effectiveness of this method is also theoretically analyzed in Section \ref{sec-algorithm}. \section{Preliminaries} \label{sec-back} The basic concepts and notations for RL will be introduced in this section. 
A general maximum entropy learning framework will also be presented. \subsection{The Reinforcement Learning Problem} \label{sub-rl} We focus on RL with continuous state spaces and continuous action spaces. At any time $t$, an agent can observe the current state of its learning environment, denoted as $s_t\in\mathbb{S}\subseteq \mathbb{R}^n$, in an $n$-dimensional state space. Based on the state observation, the agent can perform an action $a_t$ selected from an $m$-dimensional action space $\mathbb{A}\subseteq\mathbb{R}^m$. This causes a state transition in the environment from state $s_t$ to a new state $s_{t+1}$, governed by the probability distribution $P(s_t,s_{t+1},a_t)$, which is unknown to the agent. Meanwhile, an immediate feedback in the form of a scalar and bounded reward, i.e. $r(s_t,a_t)$, will be provided for the agent to examine the suitability of its decision to perform $a_t$. Guided by a policy $\pi(s,a)$ that specifies the probability distribution of performing any action $a$ in any state $s$, the agent has the goal in RL to maximize its long-term cumulative rewards, as described below. \begin{equation} \max_{\pi} \E_{(s_t,a_t)\sim \pi} \sum_{t=0}^{\infty} \gamma^t r(s_t,a_t) \label{eq-lt-cum-rew} \end{equation} \noindent where $\gamma\in[0,1)$ is a discount factor for the RHS of \eqref{eq-lt-cum-rew} to be well-defined. Meanwhile, the expectation is conditional on policy $\pi$ that guides the agent to select its actions in every state $s_t$ it encounters. Through RL, the agent is expected to identify an optimal policy, denoted as $\pi^*$, that solves the maximization problem in \eqref{eq-lt-cum-rew} above. 
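As a minimal numerical illustration of the objective in \eqref{eq-lt-cum-rew}, the discounted return of a finite rollout can be computed directly. The sketch below is illustrative only; the reward sequence and discount factor are arbitrary choices, not values taken from this paper.

```python
def discounted_return(rewards, gamma):
    """Finite-horizon discounted return: sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Three steps of unit reward with gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
ret = discounted_return([1.0, 1.0, 1.0], gamma=0.5)
print(ret)  # -> 1.75
```

With $\gamma<1$ and bounded rewards, the infinite sum in \eqref{eq-lt-cum-rew} is dominated by such finite truncations, which is why the discount factor makes the objective well-defined.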
\subsection{A Maximum Entropy Learning Framework} \label{sub-me-lf} While maximizing the long-term cumulative rewards in \eqref{eq-lt-cum-rew}, by considering simultaneously a maximum entropy objective, an RL agent can enjoy several key advantages in practice, such as effective environment exploration as well as improved learning speed \cite{ziebart2010thesis,haarnoja2018icml}. Driven by this idea, \eqref{eq-lt-cum-rew} can be extended to become \begin{equation} \max_{\pi} \E_{(s_t,a_t)\sim \pi} \sum_{t=0}^{\infty} \gamma^t r(s_t,a_t) + \alpha \gamma^t \mathcal{H}\left( \pi(s_t,\cdot) \right) \label{eq-lt-cum-rew-ext} \end{equation} \noindent where the scalar factor $\alpha$ controls the relative importance of the entropy term against the cumulative reward in \eqref{eq-lt-cum-rew-ext}. In this way, we can further manage the stochasticity of the optimal policy. In the past few years, researchers have studied extensively the use of Shannon entropy given below for maximum entropy RL. \begin{equation} \mathcal{H}^s\left( \pi(s,\cdot) \right)=\int_{a\in\mathbb{A}} -\pi(s,a)\log\pi(s,a)\mathrm{d}a \label{eq-shannon-entropy} \end{equation} Some recent works have also explored the maximization of Tsallis entropy \cite{lee2018iral,chow2018icml}. As a generalization of Shannon entropy, Tsallis entropy is derived from the $q$-logarithm function \begin{equation} \log_{(q)}\pi(s,\cdot)=\frac{\pi(s,\cdot)^{q-1}-1}{q-1} \label{eq-gen-log} \end{equation} \noindent where $q\geq 1$ is the \emph{entropic index}. It is not difficult to verify that $\lim_{q\rightarrow 1}\log_{(q)}\pi(s,\cdot)=\log\pi(s,\cdot)$. Subsequently, Tsallis entropy can be defined as \begin{equation} \mathcal{H}^q\left( \pi(s,\cdot) \right)= \int_{a\in\mathbb{A}}-\pi(s,a)\frac{\pi(s,a)^{q-1}-1}{q-1}\mathrm{d}a \label{eq-tsallis-entropy} \end{equation} Existing research has demonstrated the effectiveness of Tsallis entropy on RL problems with discrete actions \cite{lee2018iral}. 
Particularly, Tsallis entropy enables an agent to completely ignore unpromising actions for highly efficient environment exploration. In this paper, we will further consider the application of Tsallis entropy in continuous action spaces, a question that has never been studied in the past. Moreover, we will develop a general theory for maximum entropy RL that supports arbitrary entropy measures. To demonstrate the wide applicability of our theory, we will develop a new RL algorithm that maximizes the well-known R\'enyi entropy, as defined below. \begin{equation} \mathcal{H}^{\eta}\left( \pi(s,\cdot) \right)=\frac{1}{1-\eta}\log\left( \int_{a\in\mathbb{A}} \pi(s,a)^{\eta} \mathrm{d}a \right) \label{eq-renyi-entropy} \end{equation} \noindent where $\eta\geq 1$ is the entropic index. Although R\'enyi entropy plays an important role in many fields such as quantum physics and theoretical computer science, we are not aware of any existing RL algorithms that utilize R\'enyi entropy to control stochastic environment exploration. \section{Policy Iteration for Maximum Entropy Reinforcement Learning} \label{sec-theory} Aimed at developing a new theory for maximum entropy RL, we adopt a policy iteration framework which can be subsequently transformed into practical RL algorithms. Under this framework, RL is realized through iterative execution of \emph{policy evaluation} and \emph{policy improvement} steps. The policy evaluation step is responsible for learning the value functions of a given policy. Based on the learned value functions, improvement over existing policies can be performed subsequently during the policy improvement steps. 
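Before deriving the theory, the entropy measures in \eqref{eq-shannon-entropy}, \eqref{eq-tsallis-entropy}, and \eqref{eq-renyi-entropy} can be sanity-checked numerically. The sketch below uses a discrete distribution purely for illustration (the setting of this paper is continuous); the distribution and entropic indices are arbitrary choices. Both generalized entropies approach Shannon entropy as their entropic indices tend to 1:

```python
import numpy as np

def shannon(p):
    """Discrete analogue of the Shannon entropy H^s."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def tsallis(p, q):
    """Discrete analogue of H^q, built from the q-logarithm (p^(q-1) - 1)/(q - 1)."""
    return -np.sum(p * (p ** (q - 1) - 1.0) / (q - 1.0))

def renyi(p, eta):
    """Discrete analogue of H^eta = log(sum_a p(a)^eta) / (1 - eta)."""
    return np.log(np.sum(p ** eta)) / (1.0 - eta)

pi = np.array([0.7, 0.2, 0.1])  # an arbitrary three-action policy
print(shannon(pi), tsallis(pi, 1.0001), renyi(pi, 1.0001))
```

For entropic indices slightly above 1, all three printed values agree to several digits, mirroring the limit $\lim_{q\rightarrow 1}\log_{(q)}x=\log x$ noted above.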
In line with this framework, for any fixed policy $\pi$, the Q-function and V-function for $\pi$ can be defined respectively as \begin{equation} \begin{split} &Q^{\pi}(s,a)=\\&\E_{\begin{array}{c}(s_t,a_t)\sim\pi,\\s_0=s,a_0=a\end{array}}\left[ \sum_{t=0}^{\infty}\gamma^t r(s_t,a_t) + \gamma^{t+1} \alpha \mathcal{H}\left( \pi(s_{t+1},\cdot) \right) \right] \end{split} \label{eq-q-func} \end{equation} \noindent and \begin{equation} V^{\pi}(s)=\E_{a\sim \pi(s,\cdot)}\left[ Q^{\pi}(s,a)+\alpha \mathcal{H}\left( \pi(s,\cdot) \right)\right] \label{eq-v-func} \end{equation} \noindent In order to learn both $Q^{\pi}$ and $V^{\pi}$, the Bellman backup operator $\mathcal{T}^{\pi}$ as presented below is very useful \begin{equation} \mathcal{T}^{\pi}Q^{\pi}(s,a)=r(s,a)+\gamma \E_{s'\sim P(s,s',a)} \left[ V^{\pi}(s') \right] \label{eq-bellman} \end{equation} In fact, it can be shown that the process of learning $Q^{\pi}$ (and subsequently $V^{\pi}$) through $\mathcal{T}^{\pi}$ converges to the true value functions of policy $\pi$, as presented in Proposition \ref{proposition-1} below (see Appendix A for proof). \begin{proposition} Starting from an arbitrary Q-function $Q^0$ and defining $Q^{k+1}=\mathcal{T}^{\pi}Q^k$, repeated updating of the Q-function through the Bellman backup operator $\mathcal{T}^{\pi}$ in \eqref{eq-bellman} will result in $Q^k$ converging to $Q^{\pi}$ for any policy $\pi$ as $k\rightarrow\infty$. \label{proposition-1} \end{proposition} In the policy improvement step, a mechanism must be developed to build a new policy $\pi'$ based on an existing policy $\pi$ such that $Q^{\pi'}(s,a)\geq Q^{\pi}(s,a)$ for any $s\in\mathbb{S}$ and $a\in\mathbb{A}$. In SAC, $\pi'(s,a)$ is constructed to approximate the distribution derived from the exponential of $Q^{\pi}(s,a)$. This is achieved by minimizing the KL divergence below. 
\begin{equation} \min_{\pi^{\prime}\in\Pi} D_{KL}\left( \pi^{\prime}(s,\cdot),\exp\left( Q^{\pi}(s,\cdot)-\mathbb{C}_s \right) \right),\forall s\in\mathbb{S} \label{eq-pi-improve-sac} \end{equation} \noindent where $\Pi$ refers to a predefined family of policies and may vary significantly from one problem domain to another. $\mathbb{C}_s$ normalizes the second argument of the KL divergence in order for it to be a well-defined probability distribution. It is important to note that the policy improvement mechanism given in \eqref{eq-pi-improve-sac} only works when $\mathcal{H}(\pi)=\mathcal{H}^s(\pi)$ in \eqref{eq-lt-cum-rew-ext}. Driven by our goal to maximize arbitrary entropy measures during RL, we propose a new policy improvement mechanism in the form of a maximization problem as follows. \begin{equation} \max_{\pi^{\prime}\in\Pi} \E_{a\sim \pi^{\prime}(s,\cdot)}\left[ Q^{\pi}(s,a)+\alpha \mathcal{H}\left( \pi^{\prime}(s,\cdot) \right)\right],\forall s\in\mathbb{S} \label{eq-pi-improve-gac} \end{equation} \noindent Based on \eqref{eq-pi-improve-gac}, as formalized in Proposition \ref{proposition-2} (see Appendix B for proof), it can be confirmed that in a tabular setting the policy improvement step can always produce a new policy $\pi'$ that is either better than or equivalent in performance to an existing policy $\pi$. \begin{proposition} Let $\pi$ be an existing policy and $\pi^{\prime}$ be a new policy that optimizes \eqref{eq-pi-improve-gac}, then $Q^{\pi^{\prime}}(s,a)\geq Q^{\pi}(s,a)$ for all $s\in\mathbb{S}$ and $a\in\mathbb{A}$. \label{proposition-2} \end{proposition} Guided by Propositions \ref{proposition-1} and \ref{proposition-2}, a full policy iteration algorithm can be established to alternate between the policy evaluation and policy improvement steps. 
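For a single state with finitely many actions, the improvement guarantee behind \eqref{eq-pi-improve-gac} can be checked directly. In the Shannon-entropy case, the maximizer of \eqref{eq-pi-improve-gac} over the probability simplex has the closed form $\pi'(s,a)\propto\exp\left(Q^{\pi}(s,a)/\alpha\right)$. The sketch below uses hypothetical Q-values and an arbitrary old policy:

```python
import numpy as np

def entropy(p):
    """Discrete Shannon entropy."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def improvement_objective(pi, q_values, alpha):
    """E_{a~pi}[Q(s,a)] + alpha * H(pi(s,.)), the per-state quantity being maximized."""
    return float(np.dot(pi, q_values)) + alpha * entropy(pi)

alpha = 0.5
q_values = np.array([1.0, 2.0, 0.5])   # hypothetical Q^pi(s,.) for one state
pi_old = np.array([0.4, 0.3, 0.3])     # an arbitrary existing policy

# Closed-form maximizer under Shannon entropy: softmax of Q/alpha.
z = np.exp(q_values / alpha)
pi_new = z / z.sum()

print(improvement_objective(pi_old, q_values, alpha))
print(improvement_objective(pi_new, q_values, alpha))
```

The second printed value is never smaller than the first, which is exactly the per-state improvement that Proposition \ref{proposition-2} lifts to a statement about $Q^{\pi'}$ versus $Q^{\pi}$.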
As this process continues, we have the theoretical guarantee that, in the tabular case, RL will converge to the optimal stochastic policy among the family of policies $\Pi$, as presented in Proposition \ref{proposition-3} below (see Appendix C for proof). \begin{proposition} The policy iteration algorithm driven by the Bellman operator in \eqref{eq-bellman} and the policy improvement mechanism in \eqref{eq-pi-improve-gac} will converge to a policy $\pi^*\in\Pi$ such that $Q^{\pi^*}(s,a)\geq Q^{\pi}(s,a)$ among all $\pi\in\Pi$, and for all $s\in\mathbb{S}$ and $a\in\mathbb{A}$. \label{proposition-3} \end{proposition} The key difference of our policy iteration theory from the theory developed in \cite{haarnoja2018icml} lies in the new policy improvement mechanism in \eqref{eq-pi-improve-gac}. Particularly, \eqref{eq-pi-improve-gac} makes it straightforward to improve an existing policy $\pi$ by maximizing both its value function $Q^{\pi}$ and an arbitrary entropy measure. Meanwhile, Corollary \ref{corollary-1} (see Appendix D for proof) below shows that \eqref{eq-pi-improve-gac} and \eqref{eq-pi-improve-sac} are equivalent when the objective is to maximize Shannon entropy $\mathcal{H}^s$. Hence the theoretical results presented in this section, especially the newly proposed policy improvement mechanism, constitute an important generalization of SAC. \begin{corollary} Consider any policy $\pi\in\Pi$. When $\mathcal{H}(\pi)=\mathcal{H}^s(\pi)$ in \eqref{eq-lt-cum-rew-ext}, let the policy obtained from solving \eqref{eq-pi-improve-sac} be $\pi^{\prime}\in\Pi$ and the policy obtained from solving \eqref{eq-pi-improve-gac} be $\pi^{\prime\prime}\in\Pi$. Then $Q^{\pi^{\prime}}(s,a)=Q^{\pi^{\prime\prime}}(s,a)$ for all $s\in\mathbb{S}$ and $a\in\mathbb{A}$. 
\label{corollary-1} \end{corollary} \section{Actor-Critic Algorithms for Maximum Entropy Reinforcement Learning} \label{sec-algorithm} In this section, we will first develop two new RL algorithms to maximize respectively Tsallis entropy and R\'enyi entropy. Afterwards, we will further propose an ensemble algorithm that simultaneously trains multiple policies through actor-critic learning techniques and analyze its performance advantage. \subsection{Building Actor-Critic Algorithms to Maximize Tsallis and R\'enyi Entropy} Driven by the policy iteration theory established in Section \ref{sec-theory} and following the design of SAC in \cite{haarnoja2018icml}, our new RL algorithms will contain two main components, i.e. the \emph{actor} and the \emph{critic}. The critic manages value function learning and the actor is in charge of policy learning. On large-scale RL problems with continuous state spaces and continuous action spaces, it is infeasible to perform policy evaluation and policy improvement until convergence for every round of policy update. Instead, both the actor and the critic must learn concurrently, as shown in Algorithm \ref{algorithm-1}. \begin{algorithm}[!ht] \begin{algorithmic} \STATE {\bf Input}: three DNNs, i.e. $V_{\theta}(s)$, $Q_{\omega}(s,a)$ and $\pi_{\phi}(s,a)$, and a replay buffer $\mathcal{B}$ that stores past state-transition samples for training. \STATE {\bf for} each environment step $t$ {\bf do}: \STATE \ \ \ \ Sample and perform $a_t\sim \pi_{\phi}(s_t,\cdot)$ \STATE \ \ \ \ Add $(s_t,a_t,s_{t+1},r_t)$ to $\mathcal{B}$ \STATE \ \ \ \ {\bf for} each learning step {\bf do}: \STATE \ \ \ \ \ \ \ \ Sample a random batch $\mathcal{D}$ from $\mathcal{B}$. 
\STATE \ \ \ \ \ \ \ \ Perform critic learning: \STATE \ \ \ \ \ \ \ \ \ \ \ \ $\theta\leftarrow \theta-\lambda_{\theta}\nabla_{\theta}\epsilon_{\theta}^{\mathcal{D}}$ \STATE \ \ \ \ \ \ \ \ \ \ \ \ $\omega\leftarrow\omega-\lambda_{\omega}\nabla_{\omega}\epsilon_{\omega}^{\mathcal{D}}$ \STATE \ \ \ \ \ \ \ \ Perform actor learning: \STATE \ \ \ \ \ \ \ \ \ \ \ \ $\phi\leftarrow\phi+\lambda_{\phi}\nabla_{\phi}\epsilon_{\phi}^{\mathcal{D}}$ \end{algorithmic} \caption{The actor-critic algorithm for maximum entropy RL.} \label{algorithm-1} \end{algorithm} In line with this algorithm design, we will specifically consider parameterized value functions and policies, i.e. $V_{\theta}(s)$, $Q_{\omega}(s,a)$ and $\pi_{\phi}(s,a)$, in the form of \emph{Deep Neural Networks} (DNNs), where the network parameters are $\theta$, $\omega$ and $\phi$ respectively. As explained in \cite{haarnoja2018icml}, although it is not necessary for the critic to learn both $V_{\theta}(s)$ and $Q_{\omega}(s,a)$, the explicitly learned $V_{\theta}(s)$ can noticeably stabilize policy training by serving as the state-dependent baseline function. Hence $V_{\theta}(s)$ is also included in the design of our new actor-critic algorithms. Depending on the actual entropy measure used in \eqref{eq-lt-cum-rew-ext}, the learning rules to be employed for updating network parameters can be very different. We will derive these rules in the sequel. Similar to SAC, a replay buffer $\mathcal{B}$ is maintained consistently during RL. As shown in Algorithm \ref{algorithm-1}, at each learning step, a fixed-size batch $\mathcal{D}\subseteq\mathcal{B}$ of state-transition samples can be collected from $\mathcal{B}$ and used to define the squared error below for the purpose of training $V_{\theta}$. 
\begin{equation} \begin{split} \epsilon_{\theta}^{\mathcal{D}}=&\frac{1}{2 \|\mathcal{D}\|}\cdot \\ & \sum_{(s,a,s',r)\in\mathcal{D}} \left( V_{\theta}(s)-\E_{b\sim\pi_{\phi}(s,\cdot)}Q_{\omega}(s,b)-\alpha\mathcal{H}(\pi_{\phi}(s,\cdot))\right)^2 \label{eq-epsilon-theta} \end{split} \end{equation} \noindent where $\|\mathcal{D}\|$ is the cardinality of $\mathcal{D}$. Similarly, the Bellman residue for training $Q_{\omega}$ can be determined as \begin{equation} \epsilon_{\omega}^{\mathcal{D}}=\frac{1}{2\|\mathcal{D}\|}\sum_{(s,a,s',r)\in\mathcal{D}}\left( Q_{\omega}(s,a)-r-\gamma V_{\theta}(s') \right)^2 \label{eq-epsilon-omega} \end{equation} Based on \eqref{eq-epsilon-theta} and \eqref{eq-epsilon-omega}, $\nabla_{\theta}\epsilon_{\theta}^{\mathcal{D}}$ and $\nabla_{\omega}\epsilon_{\omega}^{\mathcal{D}}$ can be further utilized to build the learning rules for $\theta$ and $\omega$ respectively (see Algorithm \ref{algorithm-1}). Specifically, \begin{equation} \begin{split} &\nabla_{\theta}\epsilon_{\theta}^{\mathcal{D}}=\frac{1}{\|\mathcal{D}\|}\cdot \\ & \sum_{(s,a,s',r)\in\mathcal{D}}\nabla_{\theta}V_{\theta}(s)\left( \begin{array}{l} {\displaystyle V_{\theta}(s)- \E_{b\sim\pi_{\phi}(s,\cdot)}Q_{\omega}(s,b)} \\ -\alpha\mathcal{H}(\pi_{\phi}(s,\cdot))\end{array}\right) \end{split} \label{eq-nabla-epsilon-theta} \end{equation} \noindent To evaluate $\nabla_{\theta}\epsilon_{\theta}^{\mathcal{D}}$ in \eqref{eq-nabla-epsilon-theta}, we must estimate the expectation $\E_{b\sim\pi_{\phi}(s,\cdot)}Q_{\omega}(s,b)$ and the entropy term $\mathcal{H}(\pi_{\phi}(s,\cdot))$ efficiently. In practice, both quantities in \eqref{eq-nabla-epsilon-theta} can be quickly approximated through a group of $k$ actions, i.e. $a_1,\ldots,a_k$, which will be sampled independently from $\pi_{\phi}(s,\cdot)$. 
In particular, when Tsallis entropy $\mathcal{H}^q(\pi)$ in \eqref{eq-tsallis-entropy} is exploited for maximum entropy RL, then \begin{equation*} \mathcal{H}^q\left(\pi_{\phi}(s,\cdot)\right)\approx-\frac{1}{k}\sum_{i=1}^k \frac{\pi_{\phi}(s,a_i)^{q-1}-1}{q-1} \label{eq-tsallis-est} \end{equation*} \noindent On the other hand, when R\'enyi entropy $\mathcal{H}^{\eta}(\pi_{\phi})$ in \eqref{eq-renyi-entropy} is adopted, we must either calculate $\int\pi^{\eta}(s,a)\mathrm{d}a$ precisely or estimate it through multiple action samples, depending on the type of action-selection distributions supported by the corresponding policy family $\Pi$. Please refer to Appendix E for detailed discussion on the techniques we used to evaluate $\int\pi^{\eta}(s,a)\mathrm{d}a$ and $\mathcal{H}^{\eta}(\pi_{\phi})$. Besides the critic, we must develop the learning rules for the actor too. According to the policy improvement mechanism presented in \eqref{eq-pi-improve-gac}, the performance index below can be employed for training $\pi_{\phi}$. \begin{equation} \epsilon_{\phi}^{\mathcal{D}}=\frac{1}{\|\mathcal{D}\|}\sum_{(s,a,s',r)\in\mathcal{D}} \E_{b\sim\pi_{\phi}(s,\cdot)} Q_{\omega}(s,b)+\alpha\mathcal{H}(\pi_{\phi}(s,\cdot)) \label{eq-epsilon-policy} \end{equation} \noindent Depending on the entropy measure used in \eqref{eq-epsilon-policy}, $\nabla_{\phi}\epsilon_{\phi}^{\mathcal{D}}$ must be computed differently. Specifically, with the objective of maximum Tsallis entropy $\mathcal{H}^q(\pi_{\phi})$, we will first determine $\nabla_{\phi}\epsilon_{\phi}$ in every sampled state $s$, i.e. $\nabla_{\phi}\epsilon_{\phi}^s$, as given below. 
\begin{equation} \begin{split} &\nabla_{\phi}\epsilon_{\phi}^s=\int_{a\in\mathcal{A}}\pi_{\phi}(s,a) \nabla_{\phi}\log\pi_{\phi}(s,a) Q_{\omega}(s,a)\mathrm{d}a- \\ &\alpha\int_{a\in\mathcal{A}}\pi_{\phi}(s,a)\frac{q\ \pi_{\phi}(s,a)^{q-1}-1}{q-1}\nabla_{\phi}\log\pi_{\phi}(s,a)\mathrm{d}a \end{split} \label{eq-nabla-phi-tsallis} \end{equation} \noindent Because $\int_{a\in\mathcal{A}}\pi_{\phi}(s,a)\nabla_{\phi}\log\pi_{\phi}(s,a)\mathrm{d}a=0$ in any state $s\in\mathbb{S}$, the second integral term at the RHS of \eqref{eq-nabla-phi-tsallis} can be re-written as \[ q \int_{a\in\mathcal{A}}\pi_{\phi}(s,a)\nabla_{\phi}\log\pi_{\phi}(s,a)\log_{(q)}\pi_{\phi}(s,a)\mathrm{d}a \] \noindent As a consequence, $\nabla_{\phi}\epsilon_{\phi}^s$ will be approximated as \begin{equation} \begin{split} & \nabla_{\phi}\epsilon_{\phi}^s\approx \\ & \frac{1}{k}\sum_{i=1}^k \nabla_{\phi}\log\pi_{\phi}(s,a_i)\left( A(s,a_i)-\alpha q \log_{(q)}\pi_{\phi}(s,a_i) \right) \end{split} \label{eq-nabla-s-tsallis} \end{equation} \noindent where $a_1,\ldots,a_k$ are $k$ actions sampled from $\pi_{\phi}(s,\cdot)$ and \[ A(s,a_i)=Q_{\omega}(s,a_i)-V_{\theta}(s) \] \noindent Hence \begin{equation} \nabla_{\phi}\epsilon_{\phi}^{\mathcal{D}}\approx \frac{1}{\|\mathcal{D}\|}\sum_{(s,a,s',r)\in\mathcal{D}}\nabla_{\phi}\epsilon_{\phi}^s \label{eq-nabla-phi} \end{equation} In the same vein, $\nabla_{\phi}\epsilon_{\phi}^{\mathcal{D}}$ can also be computed easily when R\'enyi entropy $\mathcal{H}^{\eta}(\pi_{\phi})$ is adopted. 
Specifically, in any sampled state $s$, \begin{equation*} \begin{split} \nabla_{\phi}\epsilon_{\phi}^s=&\int_{a\in\mathcal{A}}Q_{\omega}(s,a)\pi_{\phi}(s,a)\nabla_{\phi}\log\pi_{\phi}(s,a)\mathrm{d}a+\\ &\int_{a\in\mathcal{A}} \frac{\alpha \eta \pi_{\phi}(s,a)^{\eta}}{(1-\eta)\int_{a\in\mathcal{A}}\pi_{\phi}(s,a)^{\eta}\mathrm{d}a}\nabla_{\phi}\log\pi_{\phi}(s,a)\mathrm{d}a \end{split} \end{equation*} \noindent As a result, $\nabla_{\phi}\epsilon_{\phi}^{s}$ will be estimated as \begin{equation} \begin{split} \nabla_{\phi}\epsilon_{\phi}^s &\approx \frac{1}{k}\sum_{i=1}^k \nabla_{\phi}\log\pi_{\phi}(s,a_i)\\& \left(A(s,a_i)+\frac{\alpha\eta \pi_{\phi}(s,a_i)^{\eta-1}}{(1-\eta)\int_{a\in\mathcal{A}}\pi_{\phi}(s,a)^{\eta}\mathrm{d}a} \right) \end{split} \label{eq-nabla-phi-renyi} \end{equation} Based on \eqref{eq-nabla-s-tsallis}, \eqref{eq-nabla-phi} and \eqref{eq-nabla-phi-renyi}, two alternative learning rules have been developed for the actor to train $\pi_{\phi}$. Together with the learning rules for the critic developed previously, they give rise to two separate RL algorithms: one supports RL that maximizes Tsallis entropy and will be called \emph{Tsallis entropy Actor-Critic} (TAC), and the other enables RL for maximum R\'enyi entropy and will be named \emph{R\'enyi entropy Actor-Critic} (RAC). In Appendix F, theoretical analysis will be performed to study the effectiveness of TAC and RAC. Specifically, under suitable conditions and assumptions, the performance lower bounds for both TAC and RAC can be derived analytically. Moreover, we show that the performance lower bound of TAC (when entropic index $q>1$) can be higher than that of SAC (when $q=1$). In other words, TAC can be more effective than SAC while enjoying efficient environment exploration through random action selection with maximum entropy. 
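To make the TAC gradient estimator in \eqref{eq-nabla-s-tsallis} concrete, the sketch below evaluates it by Monte Carlo for a one-dimensional Gaussian policy in a single state, differentiating only with respect to the policy mean $\mu$ so that the score function has the closed form $(a-\mu)/\sigma^2$. The advantage function and all constants are hypothetical choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_q(x, q):
    """q-logarithm (x^(q-1) - 1)/(q - 1); tends to log(x) as q -> 1."""
    return (x ** (q - 1) - 1.0) / (q - 1.0)

def tac_grad_mean(mu, sigma, advantage_fn, alpha, q, k=10000):
    """Monte Carlo estimate of the per-state TAC gradient w.r.t. the policy mean."""
    a = rng.normal(mu, sigma, size=k)
    score = (a - mu) / sigma ** 2  # grad_mu log pi(s,a) for a Gaussian policy
    pdf = np.exp(-0.5 * ((a - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return float(np.mean(score * (advantage_fn(a) - alpha * q * log_q(pdf, q))))

# A hypothetical advantage peaked at a = 1: the estimated gradient should
# push the current mean (mu = 0) upward, i.e. come out positive.
grad = tac_grad_mean(mu=0.0, sigma=1.0,
                     advantage_fn=lambda a: -(a - 1.0) ** 2, alpha=0.1, q=2.0)
print(grad)
```

In an actual implementation the score would be supplied by automatic differentiation of a policy network rather than this closed form, but the estimator structure is the same.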
On the other hand, although the performance bound for RAC does not depend on entropic index $\eta$, nevertheless we can control the influence of the maximum entropy objective in \eqref{eq-lt-cum-rew-ext} and in \eqref{eq-pi-improve-gac} through $\eta$. Policy improvement during RL can be controlled subsequently, resulting in varied trade-offs between exploration and exploitation. It paves the way for the development of a new ensemble algorithm for single-agent RL in the next subsection. \subsection{Building an Ensemble Actor-Critic Algorithm} \label{sub-sec-eac} Using TAC and RAC respectively, we can simultaneously train \emph{an ensemble of policies}. The effectiveness and reliability of RL are expected to be enhanced when an agent utilizes such an ensemble to guide its interaction with the learning environment (see Proposition \ref{proposition-ens} below). To fulfill this goal, inspired by the bootstrap mechanism for deep environment exploration \cite{osband20161}, a new \emph{Ensemble Actor-Critic} (EAC) algorithm is developed in this subsection. Algorithm \ref{algorithm-2} highlights the major steps of EAC. \begin{algorithm}[!ht] \begin{algorithmic} \STATE {\bf Input}: an ensemble of $L$ policies and a replay buffer $\mathcal{B}$ that stores past state-transition samples for training. \STATE {\bf for} each problem episode {\bf do}: \STATE \ \ \ \ Choose a policy from the ensemble randomly \STATE \ \ \ \ {\bf for} $t=1,\ldots$ until end of episode {\bf do}: \STATE \ \ \ \ \ \ \ \ Use the chosen policy to sample action $a_t$. \STATE \ \ \ \ \ \ \ \ Perform $a_t$. \STATE \ \ \ \ \ \ \ \ Insert observed state transition into $\mathcal{B}$. \STATE \ \ \ \ \ \ \ \ Sample a random batch $\mathcal{D}$ from $\mathcal{B}$. \STATE \ \ \ \ \ \ \ \ Use TAC or RAC and $\mathcal{D}$ to train all policies in \ \ \ \ \ \ \ \ the ensemble. \STATE \ \ \ \ \ \ \ \ Use $\mathcal{D}$ to train action-selection Q-network $Q_{\psi}$. 
\end{algorithmic} \caption{An Ensemble Actor-Critic (EAC) Algorithm for RL.} \label{algorithm-2} \end{algorithm} EAC trains all policies in the ensemble by using the same replay buffer $\mathcal{B}$. Meanwhile, a policy will be chosen randomly from the ensemble to guide the agent's future interaction with the learning environment during each problem episode, which starts from an initial state and ends whenever a terminal state or the maximum number of time steps has been reached. As explained in \cite{osband20161}, this bootstrap mechanism facilitates deep and efficient environment exploration. In addition to training, a new technique must be developed to guide an RL agent to select its actions during testing. We have investigated several possible techniques for this purpose, including choosing the policy with the highest training performance for action selection, as well as asking every critic in the ensemble to evaluate the actions recommended by all policies and selecting the action with the highest average Q-value. The latter option was recently introduced by the SOUP algorithm \cite{zeng2018ijcai}. However, these techniques do not allow EAC to achieve satisfactory testing performance. We found that this is because every policy in EAC is trained not only to maximize cumulative rewards but also to maximize an entropy measure. Therefore, the evaluation of any action by a critic in the ensemble will be influenced by the entropy of the corresponding policy. While this is important for training, it is not desirable for testing. In fact, during testing we must choose the most promising actions for the pure purpose of reward maximization. Guided by this understanding, we have introduced a new component for high-level action selection in EAC, i.e. an action-selection Q-network $Q_{\psi}$ parameterized by $\psi$, as highlighted in Algorithm \ref{algorithm-2}. $Q_{\psi}$ will be trained together with all policies in the ensemble.
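For illustration, the training control flow of Algorithm \ref{algorithm-2} can be sketched as follows; the environment, policies and the two training callbacks are stubs standing in for the TAC/RAC and $Q_{\psi}$ updates, so this is a structural sketch rather than the authors' implementation:

```python
import random

class ToyEnv:
    """Minimal 1-D chain environment: reach position 3 for reward 1."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos += 1 if action > 0 else -1
        done = self.pos >= 3
        return self.pos, (1.0 if done else 0.0), done

class ToyPolicy:
    """Stub policy: moves right with probability `bias`."""
    def __init__(self, bias):
        self.bias = bias

    def sample(self, state):
        return 1 if random.random() < self.bias else -1

def run_eac_episode(env, policies, buffer, train_policies, train_q,
                    batch_size=4, max_steps=50):
    """One EAC problem episode: a single randomly chosen policy guides
    exploration, while every policy and the action-selection Q-network
    are trained on batches from the shared replay buffer."""
    behaviour = random.choice(policies)      # bootstrap-style policy choice
    state = env.reset()
    for _ in range(max_steps):
        action = behaviour.sample(state)
        next_state, reward, done = env.step(action)
        buffer.append((state, action, reward, next_state, done))
        if len(buffer) >= batch_size:
            batch = random.sample(buffer, batch_size)
            train_policies(batch)            # stands in for the TAC/RAC update
            train_q(batch)                   # stands in for the Q_psi update
        state = next_state
        if done:
            break
    return buffer
```

The key structural points are that the behaviour policy is fixed for the whole episode, while every member of the ensemble (and $Q_{\psi}$) is trained at every step from the shared buffer.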
The Bellman residue below, which does not involve any entropy measures, will be exploited for training $Q_{\psi}$. \begin{equation*} \epsilon_{\psi}^{\mathcal{D}}=\frac{1}{2 \|\mathcal{D}\|}\sum_{(s,a,s',r)\in\mathcal{D}} \left( \begin{array}{l} Q_{\psi}(s,a)-r \\ -\gamma\max_{b\in\{b_1,\ldots,b_L\}}Q_{\psi}(s',b) \end{array} \right)^2 \end{equation*} \noindent where $b_1,\ldots,b_L$ refer to the $L$ actions sampled respectively from the $L$ policies in the ensemble for state $s'$. Consequently, during testing, every policy will recommend an action in each newly encountered state. The action with the highest Q-value according to $Q_{\psi}$ will be performed by the agent as a result. Proposition \ref{proposition-ens} below (see Appendix G for proof) explains why high-level action selection via $Q_{\psi}$ can be effective. \begin{proposition} For any policy $\pi\in\Pi$, assume that action selection guided by $\pi$ in any state $s\in\mathbb{S}$ follows a multivariate normal distribution. Also assume that the Q-function for policy $\pi$ is continuous, differentiable with bounded derivatives and unimodal. Let $\pi'\in\Pi$ be a new policy that maximizes \eqref{eq-pi-improve-gac} when $\alpha=0$. Meanwhile, $\pi_1,\ldots,\pi_L$ represent the $L$ individual policies in the ensemble, each of which also maximizes \eqref{eq-pi-improve-gac} when $\alpha\neq 0$. Define the joint policy $\pi^e$ below \[ \pi^e(s)=\argmax_{a\in\{\tilde{a}_1,\ldots,\tilde{a}_L\}} Q^{\pi}(s,a) \] \noindent where $\tilde{a}_1,\ldots,\tilde{a}_L$ stand for the $L$ actions sampled respectively from each policy in the ensemble. Then, as $L\rightarrow\infty$, $Q^{\pi^e}(s,a)\geq Q^{\pi'}(s,a)$ for all $s\in\mathbb{S}$ and $a\in\mathbb{A}$. \label{proposition-ens} \end{proposition} It is intuitive to consider $Q_{\psi}$ in EAC as being trained to approximate the Q-function of the joint policy $\pi^e$ in Proposition \ref{proposition-ens}.
Consequently, Proposition \ref{proposition-ens} suggests that, when $L$ is sufficiently large, high-level action selection guided by $\pi^e$ and therefore $Q_{\psi}$ can reach the optimal cumulative rewards achievable by any policy $\pi\in\Pi$. Meanwhile, we can continue to enjoy effective environment exploration during training through the bootstrap mechanism, since the trained $Q_{\psi}$ is only exploited for action selection during testing. \section{Experiments} \label{sec-exp} To examine the sample complexity and performance of TAC, RAC and EAC, we conduct experiments on six benchmark control tasks, including Ant, Half Cheetah, Hopper, Lunar Lander, Reacher and Walker2D. We rely consistently on the implementation of these benchmark problems provided by OpenAI GYM~\cite{Brockman:2016wv} and powered by the PyBullet simulator~\cite{Tan:2018us}. Many previous works utilized the MuJoCo physics engine to simulate the system dynamics of these control tasks~\cite{Todorov:2012vi}. We did not study MuJoCo problems for two reasons. First, it is widely reported that PyBullet benchmarks are tougher to solve than MuJoCo problems~\cite{Tan:2018us}. Hence, we expect to show the performance differences among all competing algorithms more clearly on PyBullet problems. Second, PyBullet is license-free and increasingly popular. In contrast, MuJoCo is only available to its license holders. To make our experiment results reproducible, the source code of TAC, RAC and EAC has been made freely available online \footnote{https://github.com/yimingpeng/sac-master}. There are eight competing algorithms involved in our experiments, i.e. SAC, TAC, RAC, EAC-TAC, EAC-RAC, TRPO, PPO and ACKTR. Among them, EAC-TAC refers to the ensemble learning algorithm developed in Subsection \ref{sub-sec-eac} where TAC is used to train every policy in the ensemble. EAC-RAC refers to the variation where policy training is realized through RAC.
Meanwhile, TRPO, PPO and ACKTR are state-of-the-art RL algorithms frequently employed for performance comparison. We used the high-quality implementation of TRPO, PPO and ACKTR provided by OpenAI Baselines\footnote{https://github.com/openai/baselines}. The source code for SAC is obtained from its inventors\footnote{https://github.com/haarnoja/sac}. We follow closely \cite{haarnoja2018icml} to determine the hyper-parameter settings of SAC, TAC, RAC, EAC-TAC and EAC-RAC. The hyper-parameter settings of TRPO, PPO and ACKTR were also obtained from the literature~\cite{schulman2015icml,schulman20171,wu2017nips}. Detailed hyper-parameter settings for all algorithms can be found in Appendix H. We first examine the influence of the entropic index on the performance of TAC and RAC. Figure \ref{fig:tac_entropic_index} depicts the learning performance of TAC with respect to three different settings of the entropic index $q$ (i.e. 1.5, 2.0 and 2.5) and also compares TAC with SAC as the baseline. As evidenced in the figure, with proper settings of the entropic index $q$, TAC can clearly outperform SAC on all six benchmark problems. Meanwhile, the best value for $q$ varies from one problem domain to another. When comparing TAC subsequently with other competing algorithms in Figure \ref{fig:performance_evaluation}, we will use the best $q$ value observed in Figure \ref{fig:tac_entropic_index} for each benchmark. Besides TAC, we have also examined the influence of the entropic index $\eta$ on the performance of RAC and witnessed similar results. Due to space limitations, please refer to Appendix I for more information.
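For reference, the two entropy measures behind TAC and RAC can be computed for a discrete distribution using their standard definitions (these helper functions are our own illustration, not code from the paper); both reduce to the Shannon entropy used by SAC as the entropic index approaches 1:

```python
import numpy as np

def tsallis_entropy(p, q):
    """S_q(p) = (1 - sum_i p_i^q) / (q - 1); q -> 1 recovers Shannon."""
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def renyi_entropy(p, eta):
    """H_eta(p) = log(sum_i p_i^eta) / (1 - eta); eta -> 1 recovers Shannon."""
    return np.log(np.sum(p ** eta)) / (1.0 - eta)

def shannon_entropy(p):
    """Shannon entropy, the q = eta = 1 limit of the two measures above."""
    return -np.sum(p * np.log(p))

# A toy 3-action distribution for comparing the measures.
p = np.array([0.5, 0.3, 0.2])
```

Evaluating these for indices near 1 makes the connection to SAC explicit, while $q>1$ (TAC) penalizes concentrated distributions less steeply than the Shannon entropy does.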
\begin{figure*}[!ht] \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/entropy/tac/Plots/AntBulletEnv-Reward.pdf} \subcaption{\tiny{Ant}} \end{minipage}% \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/entropy/tac/Plots/HalfCheetahBulletEnv-Reward.pdf} \subcaption{\tiny{Half Cheetah}} \end{minipage} \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/entropy/tac/Plots/HopperBulletEnv-Reward.pdf} \subcaption{\tiny{Hopper}} \end{minipage} \\ \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/entropy/tac/Plots/LunarLanderContinuous-Reward.pdf} \subcaption{\tiny{Lunar Lander}} \end{minipage} \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/entropy/tac/Plots/ReacherBulletEnv-Reward.pdf} \subcaption{\tiny{Reacher}} \end{minipage} \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/entropy/tac/Plots/Walker2DBulletEnv-Reward.pdf} \subcaption{\tiny{Walker2D}} \end{minipage} \caption{The influences of entropic indices ($[1.5, 2.0, 2.5]$) on the performance of TAC on six benchmark control problems, with SAC serving as the baseline.} \label{fig:tac_entropic_index} \end{figure*} \label{sub:performance_evaluation} \begin{figure*}[!ht] \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/AntBulletEnv-Reward.pdf} \subcaption{\tiny{Ant}} \end{minipage}% \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/HalfCheetahBulletEnv-Reward.pdf} \subcaption{\tiny{Half Cheetah}} \end{minipage} \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/HopperBulletEnv-Reward.pdf} \subcaption{\tiny{Hopper}} \end{minipage} \\ \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/LunarLanderContinuous-Reward.pdf} \subcaption{\tiny{Lunar Lander}} \end{minipage} \begin{minipage}[t]{0.33\textwidth} 
\includegraphics[width=\textwidth]{figures/ReacherBulletEnv-Reward.pdf} \subcaption{\tiny{Reacher}} \end{minipage} \begin{minipage}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{figures/Walker2DBulletEnv-Reward.pdf} \subcaption{\tiny{Walker2D}} \end{minipage} \\ \begin{center} \begin{minipage}[t]{0.7\textwidth} \includegraphics[width=\textwidth]{figures/Labels.pdf} \end{minipage} \end{center} \caption{Performance comparison among all competing algorithms, including SAC, TAC, RAC, EAC-TAC, EAC-RAC, ACKTR, TRPO, and PPO, on six benchmark control problems.} \label{fig:performance_evaluation} \end{figure*} We next compare the performance of all competing algorithms, as shown in Figure \ref{fig:performance_evaluation}. An inspection of this figure gives rise to two major findings. First, three algorithms proposed by us, i.e., EAC-TAC, EAC-RAC and TAC, have achieved much better performance than other algorithms. In fact, EAC-TAC performed clearly the best on Ant and outperformed all non-ensemble algorithms on Lunar Lander. The three algorithms also achieved the best performance on Walker2D. Moreover, we cannot find any problem on which other algorithms can significantly outperform the three. Second, ensemble techniques can noticeably enhance the reliability and performance of an RL algorithm. For example, EAC-TAC significantly outperformed TAC on two benchmarks (i.e., Ant and Lunar Lander) and did not perform worse than TAC on the rest. EAC-RAC outperformed RAC on five out of six benchmarks. In general, our experiments show that maximum entropy RL algorithms can be more sample efficient than other competing algorithms. Particularly, during the initial learning phase, these algorithms can learn faster to soon reach a high performance level. \section{Conclusions} In this paper, we have established a new policy iteration theory with the aim to maximize arbitrary general entropy measures through Actor-Critic RL. 
Guided by the theory, we have further developed two new Actor-Critic algorithms, i.e. TAC and RAC, to maximize Tsallis entropy and R\'enyi entropy, respectively. Theoretical analysis suggests that these algorithms can be more effective than the recently proposed SAC algorithm. TAC and RAC also inspired us to develop a new ensemble algorithm named EAC that features the use of a bootstrap mechanism for deep environment exploration as well as a new value-function based mechanism for high-level action selection. Empirically, we showed that TAC, RAC and EAC can outperform several state-of-the-art learning algorithms. Our research raises an important question regarding the mathematical properties that a general entropy measure must satisfy in order for it to promote effective environment exploration and learning. An answer to this open question will be pursued in future work. \small{ \bibliographystyle{named}
\section{Introduction} \noindent Any massive body, having a mass greater than a particular limit, collapsing under the influence of its own gravity leads to the formation of a singularity. The weak and strong cosmic censorship conjectures state that the singularity thus formed would be hidden from a distant observer by the event horizon formed during or after the gravitational collapse, and that no null geodesic can escape the singularity and reach an asymptotic/distant observer [1-4]. However, it is theoretically possible that there exist certain cases where null geodesics can escape the singularity and reach a distant observer, provided the collapsing system has a certain minimum amount of radial inhomogeneity in density. The visible singularity thus formed could be either locally or globally visible, depending upon the formation time of the trapped surfaces. For a singularity to be {\it globally} visible, the null geodesics must come out from the singularity and reach a distant observer before crossing the apparent horizon or the boundary of the trapped surfaces. Put simply, the trapped surfaces should form at infinite comoving time. It may happen that null geodesics escape the singularity but, before reaching a distant observer, encounter the trapped surface and fall back towards the singularity. Such a singularity would be {\it locally} visible to any observer who lies within the apparent horizon. However, these locally visible singularities are not astrophysically very relevant, while for globally visible ones the outgoing null geodesics can in principle reach an asymptotic observer, and thus they have a much greater relevance in the realm of astrophysics. The visibility of singularities formed by gravitational collapse has gained much interest and importance in recent times [5-17].
It has been shown that dust clouds having a minimum or sufficient inhomogeneity and collapsing spherically can result in the formation of a Naked Singularity (NS), where null geodesics can escape the trapped surfaces and reach distant observers, thus being visible to them [18-19]. The prime reason for this interest is that if these spacetime singularities are visible to a distant observer, the escaping null geodesics would bear the possible physical and detectable signatures of quantum effects (of gravity), which are expected to have strong impacts in regions of ultra-strong gravitational fields, thus providing us with the much needed insights into the physical effects of quantum gravity on a spacetime manifold. The importance of the study of quantum effects of gravity, and correspondingly of the visibility of singularities, can be well argued from the interpretation of the singularity theorem, which predicts the breakdown of general relativity (GR) at singularities formed during gravitational collapse (or otherwise), where the physical variables (like matter or radiation density) blow up, rendering the system unphysical. Thus, the formation and existence of spacetime singularities puts a limit/boundary on the validity of GR itself in regions of high gravity and curvature of the spacetime manifold. One could interpret the blowing up of physical quantities as particle production in high-curvature regions, thus inevitably bringing quantum effects into the game. It could be further argued that studying the laws of physics in such high-curvature and high-gravity regions in the context of the Standard Model of particle physics (SM) and the interactions therein would bring further insights into the dynamics of the observable/physical variables in the extreme vicinity of spacetime singularities. In this present endeavour we study a toy model of homogeneous gravitational collapse of a massless vector field in the presence of a positive cosmological constant $\Lambda$.
The toy model satisfies both the weak and strong energy conditions required for gravitational collapse. The positive cosmological constant plays an important role in the nature of the singularity formed. For $0 \le \Lambda < 1$ this model satisfies the cosmic censorship conjecture and results in the formation of a black hole (BH). Our work is organised as follows: In Sec. II, we deal with the basic dynamical equations governing the collapse on an FLRW spacetime background. We show explicitly that the energy density falls off as the inverse fourth power of the scale factor $a (t)$. Sec. III is dedicated to the study of the causal structure of the singularity. In Sec. IV we summarise our work with concluding remarks. Throughout the paper we have used geometrised units, viz. $8 \pi G = c = 1$. \section{Dynamics of Collapse}\label{sectiontwo} We begin with the Lagrangian density for a massless vector field $(A_\mu)$, \begin{equation} \label{1} {\cal L_M} \,= \,- \, \frac{1}{4}\, F_{\mu\nu}\,F^{\mu\nu}, \end{equation} where $F_{\mu\nu}$ is the electromagnetic field strength tensor and $F_{\mu\nu}\,F^{\mu\nu}$ denotes its kinetic term, and we study the gravitational collapse of the vector field $A^\mu$ on a spacetime background governed by the spatially flat Friedmann–Lemaître–Robertson–Walker (FLRW) metric given by, \begin{equation} \label{2} ds^2 \, = \, - \,\,dt^2 + a^2 \,dr^2 + R^2 \,d{\Omega}^2, \end{equation} where $R^2 d{\Omega}^2 = a^2 r^2 (d{\theta}^2 + \sin^2 \theta \, d{\phi}^2)$. Here $a \equiv a(t)$ is the scale factor with the boundary conditions $a(0) = 1$ and $a(t_s) = 0$, $t_s$ being the singularity formation time. $R \equiv R(r, t)$ is the {\it physical} radius and $r$ is the {\it comoving} radius, related by the relation: $R(r, t) = r \,a (t)$.
The energy-momentum tensor, calculated from the Lagrangian density of the vector field, is given by \begin{equation} \label{3} T_{\mu\nu} \,= \,- \, \frac{2}{\sqrt{- g}} \, \frac{\delta (\sqrt{- g}\, \cal L_M)}{\delta g^{\mu\nu}}, \end{equation} such that the components of the energy-momentum tensor are given by $T_{\mu\nu} =$ diag $\, (\rho, \, p,\, p,\, p)$, where $\rho$ and $p$ are the density and pressure, respectively, which, in the presence of the cosmological constant ($\Lambda$), are expressed by the relations \begin{eqnarray} \label{4} &&\rho \, = \, \frac{3}{2}\Big[a^2 \dot {A}^2 + 2 \, a \, \dot{a}\, A \, \dot {A} \Big] \, = \, \frac{3 \dot{a}^2}{a^2} + \Lambda, \nonumber\\ &&p \, = \,\frac{1}{2} \Big[a^2 \dot {A}^2 + 2 \, a \, \dot{a}\, A \, \dot {A} \Big] \, = \,- \,\frac{2\ddot{a}}{a} - \frac{\dot{a}^2}{a^2} - \Lambda, \end{eqnarray} where the overdot represents the derivative with respect to time. It can be easily checked that the massless vector field follows the expected equation of state for radiation, viz. $p/ \rho = \omega = 1/3$. The Klein--Gordon (KG) equation (equation of motion) is given by: \begin{equation} \label{5} \partial_\mu \big(\sqrt{- g} \, F^{\mu\nu}\big) \,= \,0, \end{equation} where $\sqrt{- g} = [a (t)]^3$ for the FLRW background. In the case of the FLRW metric, it can be seen that there are no dynamics of the `scalar potential' $\phi$. Thus, the above equation (\ref{5}) now transforms to \begin{equation} \label{6} \ddot {A} \,+ \,3 \,H \, \dot {A} \,= \,0, \end{equation} where $H = (\dot{a}/a)$ is the Hubble parameter, and we have considered the spatial components of the vector field $(A_i)$ to be functions of the scale factor $a$ and time $t$ only, all with the same value $A$ (i.e., $A_i = A$ and $\frac{\partial A}{\partial r} = 0$, $r$ being the comoving radius). The above equation (\ref{6}) can also be derived from the Einstein field equations (\ref{4}), such that $\dot A \ne 0$, identically.
Thus, we observe that the KG equation is not an independent equation and can be deduced from the expressions for the energy density ($\rho$) and pressure $(p)$. Using the chain rule $\dot {A} = A' \dot {a}$ and adding both equations from (\ref{4}), we get: \begin{equation} \label{7} \rho \, + \,p \,= \,2 \, \Big[a^2 \dot {A}^2 + 2 \, a \, \dot{a}\, A \, \dot {A} \Big]. \end{equation} We can obtain the dynamics of the collapse from the first equation in (\ref{4}) as, \begin{equation} \label{8} \dot {a} \,= \,- \, a \, \sqrt {\frac{\rho \, (a) - \Lambda}{3}}, \end{equation} and correspondingly, by differentiating the above equation, we obtain: \begin{equation} \label{9} \ddot {a} \, = \, \frac{a}{3} \, \big(\rho - \Lambda + \frac{a}{2}\,{\rho}'\big). \end{equation} Further, from equation (\ref{4}), we get the following relation, \begin{equation} \label{10} \rho \, - \,p \,= \,\Big[a^2 \dot {A}^2 + 2 \, a \, \dot{a}\, A \, \dot {A} \Big], \end{equation} and using Eqn. (\ref{8}) in Eqn. (\ref{7}), we obtain: \begin{eqnarray} \label{11} \rho\, \Big[1 - \frac{2}{3} \, (a\, A')^2 &-& \frac{4}{3}\, a^3 A A' \Big] \,+ \, p \nonumber\\ &=& \, \Big[\frac{2}{3} \, (a\, A')^2 + \frac{4}{3} a^3 A A' \Big]\Lambda. \end{eqnarray} Using equation (\ref{4}), we get: \begin{equation} \label{12} p = - \rho - \, \frac{a {\rho}'}{3} \end{equation} With a little algebra involving equations (\ref{9}), (\ref{10}), (\ref{11}) and (\ref{12}), we obtain \begin{equation} \label{13} \frac{{\rho}'}{{\rho}}\,= - \frac{4}{a}, \end{equation} which immediately gives us, \begin{equation} \label{14} \rho \, (a) \,= \, \frac{{\rho}_0}{[a (t)]^4}, \end{equation} where ${\rho}_0$ is an integration constant (${\rho} = {\rho}_0$ at $t = 0$). Consequently, we get \begin{equation} \label{15} \dot{a} = - a \sqrt{\frac{ {\rho}_0}{3\, a^4} - \frac{\Lambda}{3}}, \end{equation} which, when integrated, leads to the explicit expression of the scale factor $a (t)$.
However, it is to be noted that the above differential equation cannot be solved analytically for general $\Lambda$, and hence we resort to a numerical analysis to obtain the variation of the scale factor $a (t)$ and correspondingly the nature of the singularity. Thus, we find that for the massless vector field $A^\mu$ in the presence of a positive cosmological constant $\Lambda$, \begin{eqnarray} \label{16} \rho \, (a) \propto \, a^{- 4}, \end{eqnarray} showing that, during collapse, $\rho \,(a)$ grows as $a (t)$ decreases and ultimately diverges as $a (t) \rightarrow 0$. \section{Causal Structure of Singularity}\label{sectionfour} The dynamics of the scale factor $a(t)$ and of the energy density $\rho(a)$ for the massless vector field in the presence of the cosmological constant $\Lambda$ show that as time $t$ increases from $0$, $a(t)$ decreases and the energy density $\rho \, (a)$ increases. As $a (t) \rightarrow 0$ we approach the singularity, and the nature of this strong gravity region can be explored by investigating the formation of trapped surfaces around the singularity. At any instant of time $t$, trapped surfaces are not formed when \begin{equation} \label{17} \frac{\rho \, R^2}{3} = \frac{r^2 \,a^2}{3}\, \Big(\frac{{\rho}_0}{a^4}\Big) < 1, \end{equation} where we have considered $\rho_0 = 1$ for simplicity and $R$ is the physical radius of the collapsing cloud. The expansion scalar $\theta_l$ for the outgoing null geodesic congruence for the case of a massless vector field is then given by, \begin{eqnarray} \label{18} \theta_l = \frac{2}{R} \, \Big(1 - \sqrt{\frac{\rho}{3}}\, R\Big), \end{eqnarray} which is positive precisely when the condition in equation (\ref{17}) holds. The expansion scalar remains positive (i.e., $\theta_l > 0$) as $a(t) \to 0$ if the following condition is satisfied, \begin{eqnarray} {r_c}^2 < \lim_{a \to 0} \frac{3}{a^2\, \rho}, \end{eqnarray} where ${r_c}$ is defined as the largest comoving radius of the collapsing system.
Hence we can observe that there is always a causal connection when we have a positive expansion scalar. The nature of the singularity formed is obtained by substituting the value of $\rho$ from Eqn. (\ref{14}) in Eqn. (\ref{8}) and then solving the corresponding differential equation with the boundary condition $a (0) = 1$. The variation of $a (t)$ is plotted against $t$ for five different values of $\Lambda$, viz. 0, 0.2, 0.4, 0.8 \& 1: \begin{figure}[htbp] \centerline{\includegraphics[scale=.5]{abyt1.pdf}} \caption{Variation of the scale factor with time for different $\Lambda$} \label{fig1} \end{figure} It is interesting to note that, for $\Lambda = 0, 0.2, 0.4, \& \, 0.8 $, $a (t)$ becomes zero at a {\it finite} comoving time. For $\Lambda = 1$, no singularity is formed in finite time. The variations of $F/R$ with time $t$, for the same values of $\Lambda$, are plotted as: \begin{figure}[htbp] \centering {\includegraphics[scale=.45]{FbyRvst.pdf}} \caption{Variation of $F/R$ with time $t$ for different values of $\Lambda$ at $r = 1$} \label{fig2} \end{figure} \section{Conclusion} We study a toy model of homogeneous gravitational collapse for a massless vector field $A^\mu$ in the presence of a cosmological constant $\Lambda$, with the spatially flat FLRW metric seeded by the massless vector field $A^\mu$ and the cosmological constant $\Lambda$. From the Euler--Lagrange (EL) equation of motion (\ref{5}), we get that there are no dynamics of the scalar potential. Since we are considering homogeneous collapse, the vector field $A^\mu$ has no spatial dependence and is a function of time $t$ only. Our present endeavour provides us with some interesting insights. First, we obtained the density function in the algebraic form $\rho \, (a) \propto \, a^{- 4}$, and by solving the corresponding differential equation numerically, we obtained the corresponding variation of the scale factor $a (t)$ w.r.t. $t$ as shown in Fig. (\ref{fig1}).
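As a simple consistency check (this closed form is our addition), for $\Lambda = 0$ Eqn. (\ref{15}) becomes separable and can be integrated directly: \begin{equation*} \dot{a} \, = \, - \, \sqrt{\frac{\rho_0}{3}} \, \frac{1}{a} \quad \Longrightarrow \quad a (t) \, = \, \sqrt{1 - 2 \sqrt{\frac{\rho_0}{3}}\, t}, \qquad t_s \, = \, \frac{1}{2} \sqrt{\frac{3}{\rho_0}}, \end{equation*} so that for $\rho_0 = 1$ the singularity forms at $t_s = \sqrt{3}/2 \approx 0.87$, which serves as a check on the $\Lambda = 0$ curve obtained numerically.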
It is observed that as we increase the value of $\Lambda$, the singularity formation time increases, which shows that $\Lambda$ acts, in general, as a repulsive force, thus resisting the gravitational collapse of the vector field. As the singularities are formed in finite comoving time, the gravitational collapse of the massless vector field would always lead to the formation of black holes. It is to be noted here that the chosen values or range of $\Lambda$ depend upon the value of the integration constant $\rho_0$ (which represents the initial density of the collapsing system) appearing in Eqn. (\ref{14}). We have chosen $\rho_0 = 1$ throughout our calculations. If the chosen value of $\rho_0$ is greater than or less than 1, then the range of $\Lambda$ and the value of the respective singularity formation time $t_s$ would change correspondingly; however, the nature of the variation of $a (t)$ with $t$ would remain the same. If the value of $\Lambda$ exceeds the initial energy density, i.e. $\Lambda > \rho_0$, we do not get any physical collapsing scenario, as we would then encounter complex quantities, which are not considered physical (as evident from equation (\ref{8})), and we do not observe any gravitational collapse. Second, the presence of the cosmological constant $\Lambda$ shows that as it increases, the singularity formation time $t_s$ increases. As the scale factor falls with time, the plot of $F/R$ vs. time $t$ (Fig. (\ref{fig2})) shows that as time increases, the apparent horizon starts moving inwards (towards $R = 0$), and it crosses each comoving radius (except the comoving radius corresponding to $r = 0$) before the formation of the singularity. Hence, permanent trapped surfaces are formed, from which no null geodesic can escape after the apparent horizon crosses the comoving radius, leading to the formation of black holes.
This is a general feature of the situation for $\Lambda < 1$. When the value of $\Lambda$ equals the value of the initial density $\rho_0$, we get the straight line seen in Fig. (\ref{fig1}), meaning that the scale factor does not change (increase or decrease) with time and we get a static universe. Further, for $\Lambda = 1$, we observe that $F/R$ does not change with time and has a fixed value of around 0.63. This corresponds to the case where the presence of the cosmological constant prevents the collapse and hence the singularity. We obtain a ball of radiation forming a stable equilibrium state due to the two opposite forces at work. As, for this case, the value of $F/R$ is less than 1, this (stable) ball of radiation would be visible to a distant observer.
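The numerical integration of Eqn. (\ref{15}) discussed above can be reproduced with a simple explicit Euler scheme; the following sketch is our own illustrative re-implementation (function and variable names are ours, not the authors' code), recovering the qualitative behaviour described here: a finite singularity formation time that grows with $\Lambda$ for $\Lambda < \rho_0$, and no collapse for $\Lambda \ge \rho_0$.

```python
import math

def singularity_time(rho0=1.0, lam=0.0, dt=1e-5, t_max=10.0):
    """Integrate a' = -a*sqrt(rho0/(3 a^4) - lam/3) from a(0) = 1 with
    explicit Euler steps; return the singularity formation time t_s
    (when a reaches 0), or None when the Lambda term halts the collapse."""
    a, t = 1.0, 0.0
    while t < t_max:
        arg = rho0 / (3.0 * a ** 4) - lam / 3.0
        if arg <= 0.0:               # repulsive term balances/overcomes gravity
            return None
        a -= a * math.sqrt(arg) * dt
        t += dt
        if a <= 0.0:
            return t                 # t_s: the scale factor has reached zero
    return None
```

For $\rho_0 = 1$ and $\Lambda = 0$ this reproduces the separable-solution value $t_s = \sqrt{3}/2 \approx 0.87$, and $t_s$ increases monotonically with $\Lambda$, matching the trend discussed above.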
\section{Parametrization of lepton angular distribution} \label{sec:param} A probability density function (pdf) for a single event can be defined through the matrix element as~\cite{Alwall:2010cq} \begin{eqnarray} \rho(\mathbf{p}^{\text{vis}}|\lambda) = \dfrac{1}{\sigma_{\lambda}} \sum_{a,b} \int \d x_1 \d x_2 f_a(x_1,\mu_{\mathrm{F}}) f_b(x_2,\mu_{\mathrm{F}}) \int \d \Phi \dfrac{\d\hat{\sigma}}{\d \Phi} \prod_{i\in\text{vis}} \delta(\mathbf{p}_i-\mathbf{p}_i^{\text{vis}}), \label{eq:pdf} \end{eqnarray} where $\Phi$ represents the Lorentz invariant phase space, in our case a four-body version $\Phi_4(k_{l},k_{\bar{l}},k_{\chi},k_{\bar{\chi}})$ with $l=e,\mu$ and $\chi$ denoting the DM particle. $f_a(x,\mu_{\mathrm{F}})$ corresponds to the parton distribution function of parton $a$, with energy fraction $x$ and factorization scale $\mu_{\mathrm{F}}$. $\lambda$ stands for a set of parameters of interest. The visible part of the phase space is determined through observables, while the invisible part is integrated over. The general cross section formula is written as: \begin{eqnarray} \sigma &=& \sum_{a,b} \int \d x_1 \d x_2 f_a(x_1,\mu_{\mathrm{F}}) f_b(x_2,\mu_{\mathrm{F}}) \int \d \Phi_4(k_{l},k_{\bar{l}},k_{\chi},k_{\bar{\chi}}) \dfrac{\d \hat{\sigma}}{\d \Phi_4(k_{l},k_{\bar{l}},k_{\chi},k_{\bar{\chi}})}. \end{eqnarray} For the same process, it follows that $\rho(\mathbf{p}^{\text{vis}}|\lambda)$ is indeed a probability density function for the visible kinematics: \begin{eqnarray} \left(\prod_{i\in\text{vis}}\int\d^3p_i\right) \rho(\mathbf{p}^{\text{vis}}|\lambda) = 1.
\end{eqnarray} To calculate the production of a Z boson in association with a DM mediator, we parametrize the four-momenta as follows: \begin{eqnarray} p_1^{\mu} =& x_1 \dfrac{\sqrt{s}}{2} (1,0,0,1)^T &= \dfrac{\sqrt{\hs}}{2} \sqrt{\dfrac{x_1}{x_2}} (1,0,0,1)^T , \\ \nonumber p_2^{\mu} =& x_2 \dfrac{\sqrt{s}}{2} (1,0,0,-1)^T &= \dfrac{\sqrt{\hs}}{2} \sqrt{\dfrac{x_2}{x_1}} (1,0,0,-1)^T , \\ \nonumber \py^{\mu} =& (\py^0, -\qt, 0, \py^3)^T &= \left( \dfrac{\sqrt{s}}{2} \xy \cosh \yy, -\qt, 0, \dfrac{\sqrt{s}}{2} \xy \sinh \yy \right)^T, \\ \nonumber \pz^{\mu} =& (\pz^0, \qt, 0, \pz^3)^T &= \left( \dfrac{\sqrt{s}}{2} \xz \cosh \yz, \qt, 0, \dfrac{\sqrt{s}}{2} \xz \sinh \yz \right)^T, \end{eqnarray} where $\xz=\dfrac{2\sqrt{\sz+\qt^2}}{\sqrt{s}}$, $\xy=\dfrac{2\sqrt{\sy+\qt^2}}{\sqrt{s}}$. It is common to study the decay lepton angular distribution in the Collins-Soper (CS) frame~\cite{PhysRevD.16.2219}. The Collins-Soper frame, as shown in Fig.~\ref{fig:frames}, is a Z boson rest frame with the z-axis chosen such that it bisects the opening angle $\theta_{ab}$ between the beam and the negative target momentum directions. In this frame, the momenta of the two incoming partons become: \begin{eqnarray} \label{eq:kinCS} p^{CS}_1 &=& \dfrac{x_1}{2} \sqrt{\dfrac{s}{\sz}} \ee^{-\yz} ( \sqrt{\sz+\qt^2} , -\qt,0,\sqrt{\sz}), \\ \nonumber p^{CS}_2 &=& \dfrac{x_2}{2} \sqrt{\dfrac{s}{\sz}} \ee^{\yz} ( \sqrt{\sz+\qt^2} , -\qt,0,-\sqrt{\sz}), \end{eqnarray} where the $x_{1,2}$ and $\yz$ dependences have been factorized out. Determined by these two momenta, the z-axis of this frame treats the two incoming partons equally, and $\tan\frac{\theta_{ab}}{2}=\frac{|\bf{\qt}|}{\sqrt{\sz}}$ is invariant under longitudinal boosts. This feature makes the frame suitable for the study of effects at finite $|\bf{\qt}|$. To avoid possible dilution from processes with swapped initial states, we performed a rotation of $\pi$ around the x-axis for events with $\yz<0$~\cite{Aad:2016izn,Khachatryan:2015paa}.
This rotation makes all angular coefficients distributed symmetrically in $\yz$. \begin{figure}[htbp] \centering \includegraphics[width=9.0cm]{figs/CSframe.ps} \caption{\label{fig:frames} Sketch of the Collins-Soper frame. $\mathbf{p}_1, \mathbf{p}_2$ correspond to the three-momenta of the right- and left-moving protons.} \end{figure} In experiment, only the decay lepton pair is measurable, giving the set of visible variables $\yz,\qt,\sz,\cos\theta_{CS},\phi_{CS}$, where the latter two denote the polar and azimuthal angles of the charged lepton in the CS frame. We parametrize the Lorentz invariant phase space in a way such that the invisible part $\sy,\yy,\cos\theta_{\chi},\phi_{\chi}$ can be integrated over: \begin{eqnarray} \int \d \Phi_4(k_{l},k_{\bar{l}},k_{\chi},k_{\bar{\chi}}) &=& \int \dfrac{\d \sz}{2\pi} \dfrac{\d \sy}{2\pi} \int \d\Phi'_2(\py,\pz) \d\Phi_2(k_{l},k_{\bar{l}}) \d\Phi_2(k_{\chi},k_{\bar{\chi}}), \\ \int \d\Phi'_2(\py,\pz) &=& \int \dfrac{\d^3 \pz}{(2\pi)^3 2 \pz^0} \dfrac{\d^3 \py}{(2\pi)^3 2 \py^0} (2\pi)^4 \delta^4(p_1+p_2-\pz-\py), \\ \nonumber &=& \dfrac{1}{4\pi s} \int \d \yz \d \yy \d \qt \cdot \qt \\ \nonumber && \delta(x_1-\dfrac{\xz}{2}\ee^{\yz}-\dfrac{\xy}{2}\ee^{\yy}) \delta(x_2-\dfrac{\xz}{2}\ee^{-\yz}-\dfrac{\xy}{2}\ee^{-\yy}) \\ \int \d \Phi_2(k_1,k_2) &=& \dfrac{1}{8\pi} \bar{\beta}(\dfrac{\mathrm{m}_1^2}{s_{12}},\dfrac{\mathrm{m}_2^2}{s_{12}}) \dfrac{\d \cos\theta}{2} \dfrac{\d \phi}{2\pi}, \\ \bar{\beta}(a,b) &=& \sqrt{\lambda(1,a,b)} = \sqrt{1+a^2+b^2-2 a-2 b-2 a b}.
\nonumber \end{eqnarray} Then we factorize the decay angular distribution in terms of nine harmonic polynomials and eight angular coefficients $A_i,i=0,...,7$~\cite{Aad:2016izn,Khachatryan:2015paa}: \begin{eqnarray} \dfrac{\d\sigma}{\d\qt\d\yz\d\sz\d\cos\theta\d\phi} &&= \left( \int \d\cos\theta \d\phi \dfrac{\d\sigma}{\d\qt\d\yz\d\sz\d\cos\theta\d\phi} \right) \dfrac{3}{16 \pi} \\ \nonumber && \Bigg\{ (1+\cos^2\theta) + \dfrac{1}{2} A_0 (1-3\cos^2\theta) + A_1 \sin2\theta \cos\phi \\ \nonumber && + \dfrac{1}{2} A_2 \sin^2\theta \cos2\phi + A_3 \sin\theta \cos\phi + A_4 \cos\theta \\ \nonumber && + A_5 \sin^2\theta \sin2\phi + A_6 \sin2\theta \sin\phi + A_7 \sin\theta\sin\phi \Bigg\}, \end{eqnarray} where the polar and azimuthal angles $\theta,\phi$ are measured in the CS frame. The coefficients $A_5-A_7$ are parity-odd; they do not contribute at tree level and are found to be very small in Z boson production~\cite{Aad:2016izn,Khachatryan:2015paa}. Therefore, in this analysis we consider only $A_0-A_4$. \section{Cross checks with the {\sc MG5} program} \label{app:valid} To make the {\sc MG5} results comparable, we implemented the same setups as described in the paper. These include the coupling constants, the choice of PDF set, the renormalization and factorization scales, the Breit-Wigner cutoff, and the BL selections as described in Table~\ref{tab:selections} of the paper. Table~\ref{tab:validation1} compares our results with the {\sc MG5} ones for one on-shell Z boson in the final state. In all cases, the differences lie within the statistical uncertainties. Table~\ref{tab:validation2} compares our results with the {\sc MG5} ones with the Z boson decaying leptonically. Our program applies all the BL selections with the NWA, while the {\sc MG5} runs replace the NWA with $|\mathrm{m}_{ll}-\mz|<15\times\Gamma_{\mathrm{Z}}$. This replacement leads to slightly smaller {\sc MG5} cross sections compared to ours, but in general the differences are not large.
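The pdf normalizations can also be checked at the purely angular level: in the harmonic decomposition of Section~\ref{sec:param}, every term multiplying an $A_i$ integrates to zero over the full ranges of $\cos\theta$ and $\phi$, so the normalized angular pdf integrates to one for arbitrary coefficient values. A minimal numerical sketch (our own illustration, not part of the validation code):

```python
import numpy as np

def angular_pdf(costh, phi, A):
    """Normalized CS-frame decay pdf for coefficients A = [A0..A7]."""
    s2 = 1.0 - costh**2                       # sin^2(theta)
    s, c = np.sqrt(s2), costh
    return (3.0 / (16.0 * np.pi)) * (
        (1 + c**2)
        + 0.5 * A[0] * (1 - 3 * c**2)
        + A[1] * 2 * s * c * np.cos(phi)      # sin(2 theta) cos(phi)
        + 0.5 * A[2] * s2 * np.cos(2 * phi)
        + A[3] * s * np.cos(phi)
        + A[4] * c
        + A[5] * s2 * np.sin(2 * phi)
        + A[6] * 2 * s * c * np.sin(phi)
        + A[7] * s * np.sin(phi))

# midpoint-rule integration over cos(theta) in [-1,1], phi in [0,2pi)
n = 400
ct = np.linspace(-1.0, 1.0, n, endpoint=False) + 1.0 / n
ph = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False) + np.pi / n
CT, PH = np.meshgrid(ct, ph)
A = [0.3, -0.1, 0.2, 0.05, 0.4, 0.01, -0.02, 0.03]  # arbitrary values
norm = angular_pdf(CT, PH, A).sum() * (2.0 / n) * (2.0 * np.pi / n)
```

The same grid also illustrates the method-of-moments relations, e.g.\ weighting the pdf by $\cos\theta$ returns $A_4/4$.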
Normalizations of the signal and all of the background pdfs are also checked to be consistent with one. \begin{table}[htb] \centering \scalebox{0.8}{ \begin{tabular}{c|c|c|c|c} \hline \hline Process/Benchmark & Cross section & Cross section & Relative & Relative \\ & (fb) & from {\sc MG5} (fb) & Difference (\%) & Statistical uncertainty (\%) \\ \hline S0$_a$ & 0.1535 & 0.1536 & 0.052 & 0.34 \\ S0$_b$ & 0.1452 & 0.1454 & 0.14 & 0.29 \\ S0$_c$ & 4.436$\times 10^{-7}$ & 4.459$\times 10^{-7}$ & 0.52 & 0.14 \\ \hline S1$_a$ & 37.16 & 37.21 & 0.14 & 0.23 \\ S1$_b$ & 7.931 & 7.943 & 0.15 & 0.24 \\ S1$_c$ & 66.94 & 67.01 & 0.11 & 0.25 \\ \hline Z($\to 2\nu$)Z & 3561 & 3564 & 0.081 & 0.16 \\ W($\to e\nu$)Z & 2547 & 2556 & 0.39 & 0.26 \\ Z+jet & 1.189$\times 10^{7}$ & 1.192$\times 10^{7}$ & 0.23 & 0.23 \\ \hline \hline \end{tabular}} \caption{ Comparison of cross sections obtained by our program and by {\sc MG5}, with one on-shell Z boson in the final state. The differences and the statistical uncertainties, taken from {\sc MG5}, are quoted relative to the {\sc MG5} results.
} \label{tab:validation1} \end{table} \begin{table}[htb] \centering \scalebox{0.8}{ \begin{tabular}{c|c|c|c|c} \hline \hline Process/Benchmark & Cross section & Cross section & Relative & Relative \\ & (fb) & from {\sc MG5} (fb) & Difference (\%) & Statistical uncertainty (\%) \\ \hline S0$_a$ & 4.748$\times 10^{-3}$ & 4.688$\times 10^{-3}$ & 1.3 & 0.31 \\ S0$_b$ & 4.333$\times 10^{-3}$ & 4.382$\times 10^{-3}$ & 1.1 & 0.33 \\ S0$_c$ & 1.667$\times 10^{-8}$ & 1.649$\times 10^{-8}$ & 1.1 & 0.27 \\ \hline S1$_a$ & 1.149 & 1.034 & 11 & 0.23 \\ S1$_b$ & 0.2431 & 0.2186 & 11 & 0.27 \\ S1$_c$ & 2.070 & 1.861 & 11 & 0.23 \\ \hline ZZ$\to 2l 2\nu$ & 27.71 & 26.50 & 4.6 & 0.13 \\ WZ($\to e\nu 2l$) & 17.05 & 18.39 & 7.3 & 0.26 \\ Z($\to l^+l^-$)+jet & 36125 & 34440 & 4.9 & 0.30 \\ \hline \hline \end{tabular}} \caption{ Comparison of cross sections obtained by our program and by {\sc MG5}, with the Z boson decaying leptonically. Our program applies all the BL selections with the NWA, while the {\sc MG5} runs replace the NWA with $|\mathrm{m}_{ll}-\mz|<15\times\Gamma_{\mathrm{Z}}$. Hence the {\sc MG5} results are in general slightly smaller than ours. The differences and the statistical uncertainties, taken from {\sc MG5}, are quoted relative to the {\sc MG5} results.
} \label{tab:validation2} \end{table} \section*{Acknowledgments} This work would not have been possible without what D. Yang has learned from Kaoru Hagiwara (KEK) back in PITT PACC (U.S.) and Xinjiang Univ. (China). We have benefited from useful discussions with many people, to name a few, Kaoru Hagiwara, Tao Han, Junmo Chen, Xing Wang (PITT), Kai Ma (SUT) and Yandong Liu, Jing Li (PKU). D. Yang would also like to thank the PITT particle physics group and the Xinjiang Univ. theoretical physics group for their warm hospitality during his stay. We are also grateful to Junichi Kanzaki (KEK) and Yajuan Zheng (NTU) for useful advice on using {\sc BASES}.
This work is supported in part by the National Natural Science Foundation of China, under Grants No. 11475190 and No. 1157500, and by a short-term internship program from the graduate school of Peking University. \section{Numerical results of angular coefficients in the Collins-Soper frame} \label{sec:angC} As we are not directly searching for a resonance, $\sz$ is expected to provide no sensitivity, and the narrow width approximation (NWA) is applied for convenience. Apart from that, we have four observables from the Z boson decay: $\yz,\qt,\cos\theta_{CS},\phi_{CS}$. To study the features of these four-dimensional data, we calculate the angular coefficients in the $\yz-\qt$ plane for both the major background process, $\mathrm{Z Z}\to 2l2\nu$ production, and different dark sector models. The angular coefficients can be extracted using the method of moments~\cite{Beaujean:2015xea}. In experiments, it is more straightforward to extract them from a likelihood fit~\cite{Aad:2016izn,Khachatryan:2015paa}. Applying the NWA for the Z boson, the cross section can be calculated through the spin density matrices of the Z boson production ($\rho^{\rm{P}}$) and decay ($\rho^{\rm{D}}$): \begin{eqnarray} \dfrac{\d\sigma}{\d\yz\d\qt\d\sy\d\Phi_2(k_{\chi},k_{\bar{\chi}})\d\cos\theta\d\phi} = \dfrac{\d\sigma_{P}}{\d\yz\d\qt\d\sy\d\Phi_2(k_{\chi},k_{\bar{\chi}})} \cdot \mathrm{Br}(\mathrm{Z}\to l^+ l^-) \cdot 3 \sum_{s,s'} \rho^{\mathrm{P}}_{s s'} \rho^{\mathrm{D}}_{s s'}.
\nonumber \end{eqnarray} The Z boson production density matrix is defined in a specific range (${\cal R}$) of the $\yz-\qt$ plane as follows: \begin{eqnarray} \rm{Tr}\rho^{\rm{P}} &=& \int_{\cal R} \d\Phi'_2(\py,\pz) \d\Phi_2(k_{\chi},k_{\bar{\chi}}) \sum_{a,b} f_a(x_1,\mu_{\mathrm{F}}) f_b(x_2,\mu_{\mathrm{F}}) \dfrac{1}{2\hs} \overline{\sum_{\text{ext}}} \sum_{s} \left| {\cal M}_s \right|^2, \\ \nonumber \rho^{\rm{P}}_{s s'} &=& \dfrac{1}{\rm{Tr}\rho^{\rm{P}}} \int_{\cal R} \d\Phi'_2(\py,\pz) \d\Phi_2(k_{\chi},k_{\bar{\chi}}) \sum_{a,b} f_a(x_1,\mu_{\mathrm{F}}) f_b(x_2,\mu_{\mathrm{F}}) \dfrac{1}{2\hs} \overline{\sum_{\text{ext}}} {\cal M}_s {\cal M}^*_{s'} \end{eqnarray} where $\overline{\sum}_{\text{ext}}$ denotes a sum over the spins and colors of all external particles other than the Z boson, averaged over those of the initial state. The decay density matrix is obtained using the Z boson decay amplitudes and parametrized similarly to Ref.~\cite{Dutta:2008bh}. The production and decay density matrices are both normalized such that their traces equal one. To obtain the amplitudes, we start from the {\sc FeynRules} models implemented by the authors of Refs.~\cite{Mattelaer:2015haa,Backovic:2015soa,Neubert:2015fka,Das:2016pbk,Kraml:2017atm} and use {\sc ALOHA} in the {\sc MadGraph} framework to generate {\sc HELAS} subroutines for the helicity amplitudes~\cite{Alloul:2013bka,deAquino:2011ub,Alwall:2014hca,HAGIWARA19861,DREINER20101}. In the CS frame, we choose the z-axis as the spin quantization axis; hence a rotation is necessary to bring the helicity frame results to the CS frame ones. We choose the y-axis to be common to the two frames and find that the opening angle $\omega$ between the two frames can be obtained through \begin{eqnarray} \cos\omega &=& \dfrac{2\sqrt{\tau_{\mathrm{Z}}}\sinh\yz}{\sqrt{\xz^2\cosh^2\yz-4\tau_{\mathrm{Z}}}}, \end{eqnarray} where $\tau_{\mathrm{Z}} \equiv \dfrac{\sz}{s}$ and $\omega \in [0,\pi)$.
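The rotation angle $\omega$ and the $d^{J=1}$ matrix quoted next can be checked numerically. The sketch below (our own illustration) verifies that $\cos\omega\to 1$ as $\qt\to 0$ for $\yz>0$, that $\cos\omega=0$ at $\yz=0$, and that the rotation composes and inverts consistently in the $(+,-,0)$ ordering used in the text:

```python
import numpy as np

def cos_omega(yz, sz, qt, s):
    """Opening angle between helicity and CS z-axes (formula of the text)."""
    tau = sz / s
    xz = 2.0 * np.sqrt(sz + qt**2) / np.sqrt(s)
    return 2.0 * np.sqrt(tau) * np.sinh(yz) / np.sqrt(
        xz**2 * np.cosh(yz)**2 - 4.0 * tau)

def d1(theta):
    """Wigner d^{J=1} matrix in the (s = +, -, 0) ordering of the text."""
    c, s = np.cos(theta), np.sin(theta)
    r2 = np.sqrt(2.0)
    return np.array([[(1 + c) / 2, (1 - c) / 2, -s / r2],
                     [(1 - c) / 2, (1 + c) / 2,  s / r2],
                     [ s / r2,    -s / r2,       c     ]])
```

Because the matrix is a row/column permutation of the standard $d^{J=1}$, it inherits the composition property $d(\theta_1)d(\theta_2)=d(\theta_1+\theta_2)$, so rotating to the helicity frame and back with $\pm\omega$ is an exact inverse.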
The density matrices are then rotated according to Wigner's d-functions: \begin{eqnarray} \rho_{s s'}^{\mathrm{P},HEL} &=& \sum_{\alpha, \beta} d^{J=1}_{\alpha s} (\omega) d^{J=1}_{\beta s'} (\omega) \rho^{\mathrm{P},CS}_{\alpha \beta}, \\ \nonumber \rho_{s s'}^{\mathrm{P},CS} &=& \sum_{\alpha, \beta} d^{J=1}_{\alpha s} (-\omega) d^{J=1}_{\beta s'} (-\omega) \rho^{\mathrm{P},HEL}_{\alpha \beta}, \end{eqnarray} where we have used the following notations: \begin{eqnarray} g_{\alpha\beta} &=& - \sum_{s} \epsilon^*_{\alpha}(p,s) \epsilon_{\beta}(p,s), \\ \nonumber \epsilon^{\mu}(p,s) \epsilon_{\mu}(p,s') &=& - d^{J=1}_{s s'}(\theta_{s,s'}), \\ \nonumber d^{J=1}_{s=+,-,0;s'=+,-,0} (\theta) &=& \left( \begin{array}{ccc} \dfrac{1+\cos\theta}{2} & \dfrac{1-\cos\theta}{2} & -\dfrac{\sin\theta}{\sqrt{2}} \\ \dfrac{1-\cos\theta}{2} & \dfrac{1+\cos\theta}{2} & \dfrac{\sin\theta}{\sqrt{2}} \\ \dfrac{\sin\theta}{\sqrt{2}} & -\dfrac{\sin\theta}{\sqrt{2}} & \cos\theta \\ \end{array} \right). \end{eqnarray} The phase space is prepared analytically, and the integration is performed using {\sc BASES}~\cite{Kawabata:1995th} and the {\sc GNU Scientific Library}. We map the phase space variables to increase the integration efficiency. Specifically, for a massive propagator with mass $\mathrm{m}$ and width $\Gamma$, the invariant mass is generated with \begin{eqnarray} s &=& \mathrm{m}^2 + \mathrm{m} \Gamma \tan( x (y_{max}-y_{min}) + y_{min} ), \text{ where } \\ y_{min/max} &=& \arctan(\dfrac{s_{min/max}-\mathrm{m}^2}{\mathrm{m}\Gamma}), \\ \nonumber \mathrm{Jacobian} &=& \dfrac{y_{max}-y_{min}}{\mathrm{m}\Gamma} \left( (s-\mathrm{m}^2)^2 + (\mathrm{m}\Gamma)^2 \right), \end{eqnarray} and $x$ is a uniformly generated random number. The simulation considers $\sin\theta_W=0.23129$, $\mz=91.1876$~GeV, $\Gamma_{\mathrm{Z}}=2.4952$~GeV and $\alpha(\mz)^{-1}=127.95$~\cite{Patrignani:2016xqp}. The W boson mass is obtained through $\mz \cos\theta_W$, assuming the $\rho$ parameter equals one.
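The Breit-Wigner mapping above can be illustrated in a few lines (a self-contained sketch, not the actual {\sc BASES} implementation; the mass window and sample size are arbitrary choices). With this change of variables, the propagator factor $1/[(s-\mathrm{m}^2)^2+(\mathrm{m}\Gamma)^2]$ times the Jacobian is exactly constant, so a flat sample integrates it with essentially zero variance:

```python
import numpy as np

M, G = 91.1876, 2.4952           # resonance mass and width (GeV)
smin, smax = 60.0**2, 120.0**2   # integration window in s (GeV^2)

ymin = np.arctan((smin - M**2) / (M * G))
ymax = np.arctan((smax - M**2) / (M * G))

def sample(x):
    """Map uniform x in [0,1] to s, returning (s, Jacobian ds/dx)."""
    s = M**2 + M * G * np.tan(x * (ymax - ymin) + ymin)
    jac = (ymax - ymin) / (M * G) * ((s - M**2)**2 + (M * G)**2)
    return s, jac

x = np.random.default_rng(1).uniform(size=10000)
s, jac = sample(x)
integrand = 1.0 / ((s - M**2)**2 + (M * G)**2)
estimate = np.mean(integrand * jac)   # MC estimate of the integral
exact = (ymax - ymin) / (M * G)       # analytic value of the integral
```

For less singular integrands the weight is no longer exactly flat, but the mapping still concentrates the sample points under the resonance peak, which is the point of the remapping.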
The $\alpha_S$ is chosen to be consistent with the one in the parton distribution functions (PDF). We use the PDF set {\sc NNPDF23}~\cite{Ball:2013hta} with $\alpha_S(\mz)=0.130$ at leading order. The factorization scale is set equal to the Z boson transverse energy $E_{\mathrm{T}}=\sqrt{\qt^2+\sz}$. Cross sections in this section consider the visible Z boson decays to electrons and muons with the NWA and $\mathrm{Br}(\mathrm{Z}\to l^+ l^-)=6.73$\%~\cite{Patrignani:2016xqp}. The advantage of our program is that high statistical accuracy can be achieved through direct integration. To validate our program, we checked our angular coefficients through toy measurements based on {\sc MadGraph5\_aMC@NLO (MG5)} generated events. \subsection{SM $\mathrm{Z Z}\to 2l2\nu$ background} The SM $\mathrm{Z Z}\to 2l2\nu$ production is the major background of our DM search. It has a final state signature similar to that of the signal process, as depicted in Fig.~\ref{fig:feynzz}. Hence we first examine the angular coefficients of this process, shown in Fig.~\ref{fig:angzz}. In general, the angular coefficient $A_0$ measures the difference between the longitudinal and transverse polarizations; the Z boson becomes more longitudinal at high $\qt$. The coefficient $A_4$ measures the forward-backward asymmetry; the Z boson appears more left-handed in the forward region. The $A_2$ measures the interference between the transverse amplitudes, and the $A_{1,3}$ measure the interference between the transverse and longitudinal amplitudes. \clearpage \begin{figure}[htb] \centering \includegraphics[width=10.0cm]{figs/zz_decayed.ps} \caption{\label{fig:feynzz} Representative Feynman diagrams of the SM $\mathrm{Z Z}\to 2l2\nu$ production.
} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSsm_zz_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSsm_zz_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSsm_zz_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSsm_zz_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSsm_zz_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSsm_zz_yz.eps} \caption{\label{fig:angzz} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the SM $\mathrm{Z Z}\to 2l2\nu$ process. } \end{figure} \subsection{Spin-0 mediator} \begin{figure}[!h] \centering \includegraphics[width=5.0cm]{figs/zy0_decayed.ps} \caption{\label{fig:feyns0} Representative Feynman diagrams of the dark sector with a spin-0 mediator. For the S0$_c$ model, there is no virtual photon propagator. } \end{figure} We consider a simplified model with a scalar s-channel mediator as described in Ref.~\cite{Neubert:2015fka}. The dark sector model is constructed as follows: \begin{eqnarray} {\cal L}_{SM EW}^{Y_0} &=& \dfrac{1}{\Lambda} g^S_{h3} (D^{\mu}\phi)^{\dagger} (D_{\mu}\phi) Y_0 \\ & & + \dfrac{1}{\Lambda} B_{\mu\nu} \left( g^S_{B} B^{\mu\nu} + g^P_B \tilde{B}^{\mu\nu} \right) Y_0 + \dfrac{1}{\Lambda} W_{\mu\nu}^i \left( g^S_W W^{i,\mu\nu} + g^P_W \tilde{W}^{i,\mu\nu} \right) Y_0, \\ {\cal L}_{X}^{Y_0} &=& \mathrm{m}_{\chi_C} g^S_{X_C} \chi^*_C \chi_C Y_0 + \bar{\chi}_D ( g^S_{X_D} + i g^P_{X_D} \gamma_5 ) \chi_{D} Y_0, \end{eqnarray} where $\tilde{V}^{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}V^{\rho\sigma}$ is the dual field strength tensor of the $V$ field and $\Lambda$ is a high energy scale. As discussed in Ref.~\cite{Neubert:2015fka}, these operators can be induced by a fermion loop graph with a heavy fermion integrated out. The signature of this model is very different from that of the SM $\mathrm{Z Z}\to 2l2\nu$ process: the dark mediator is emitted from the SM gauge bosons, as depicted in Fig.~\ref{fig:feyns0}.
We consider three benchmark scenarios of the parameters, labeled S0$_{a,b,c}$. As our angular distributions are more sensitive to changes in the couplings, we fix the mass of the dark matter $\mchi=10$~GeV and the mass of the mediator $\mys=1000$~GeV. The angular distributions will not change drastically as long as $2\mchi$ is much smaller than $\mys$. The parameter values and inclusive cross sections are listed in Table~\ref{tab:exhi-s0}. Angular coefficients of the benchmark scenarios S0$_{a,b,c}$ are shown in Fig.~\ref{fig:angs0a}, Fig.~\ref{fig:angs0b} and Fig.~\ref{fig:angs0c}, respectively. Compared to the SM $\mathrm{Z Z}\to 2l2\nu$ process, the dark matter signal is produced with much higher $\qt$ and has very different angular coefficient distributions, e.g., more transverse at low $\qt$. The S0$_a$ and S0$_b$ scenarios can be distinguished through $A_0$ and $A_2$, whose $\yz$ dependences are very different. In the case of S0$_c$, $Y_0$ couples to the weak bosons like a Higgs boson and does not perturb the coupling structure of the Z boson production. Consequently, the $A_0$, $A_1$ and $A_3$ in the CS frame are all zero and hence are not shown in the figure. \begin{table}[htb] \centering \begin{tabular}{c|cccccccc} \hline \hline Benchmark & S0$_a$ & S0$_b$ & S0$_c$ \\ \hline $g^S_{X_D}$ & 1 & 0 & 0 \\ $g^P_{X_D}$ & 0 & 1 & 0 \\ $g^S_{X_C}$ & 0 & 0 & 1 \\ \hline $g^S_{W}$ & 0.25 & 0 & 0 \\ $g^P_{W}$ & 0 & 0.25 & 0 \\ $g^S_{h3}$ & 0 & 0 & 1 \\ $\Lambda$ (GeV) & 3000 & 3000 & 3000 \\ \hline Interaction & CP-even & CP-odd & CP-even \\ $\mchi$ (GeV) & 10 & 10 & 10 \\ $\mys$ (GeV) & 1000 & 1000 & 1000 \\ \hline $\Gamma_{Y_0}$ (GeV)& 41.4 & 41.4 & 1.05 \\ Cross section (fb) & 0.0103 & 0.00977 & $2.98\times 10^{-8}$ \\ \hline \hline \end{tabular} \caption{ Benchmark scenarios with a spin-0 mediator.
} \label{tab:exhi-s0} \end{table} \begin{figure}[!h] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin0_SwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin0_SwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin0_SwXd_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin0_SwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin0_SwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin0_SwXd_yz.eps} \caption{\label{fig:angs0a} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S0$_a$. } \end{figure} \begin{figure}[!h] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin0_PwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin0_PwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin0_PwXd_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin0_PwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin0_PwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin0_PwXd_yz.eps} \caption{\label{fig:angs0b} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S0$_b$. } \end{figure} \begin{figure}[!h] \centering \includegraphics[width=4.5cm]{figs/fig_A2_BSspin0_HXc_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin0_HXc_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin0_HXc_yz.eps} \caption{\label{fig:angs0c} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S0$_c$. Compared with the other figures, we extend the range of $A_2$ for a better demonstration.
} \end{figure} \subsection{Spin-1 mediator} We consider the same dark sector with a spin-1 mediator as in the LHC experiments~\cite{Abercrombie:2015wmb}, with the following interactions of the dark sector: \begin{eqnarray} {\cal L}_{X_D}^{Y_1} &=& \bar{\chi}_D \gamma_{\mu} \left( g^V_{X_D} + g^A_{X_D} \gamma_5 \right) \chi_D Y_1^{\mu} \\ \nonumber {\cal L}_{SM}^{Y_1} &=& \bar{d}_i \gamma_{\mu} \left( g^V_{d_{ij}} + g^A_{d_{ij}}\gamma_5 \right) d_j Y_1^{\mu} + \bar{u}_i \gamma_{\mu} \left( g^V_{u_{ij}} + g^A_{u_{ij}}\gamma_5 \right) u_j Y_1^{\mu} \end{eqnarray} \begin{figure}[htb] \centering \includegraphics[width=10.0cm]{figs/zy1_decayed.ps} \caption{\label{fig:feyns1} Representative Feynman diagrams of the dark sector with a spin-1 mediator. } \end{figure} The masses of the dark matter and the mediator are chosen to be the same as in the spin-0 model. A sound discussion of the impact of the choice of masses is available in Ref.~\cite{Abercrombie:2015wmb}. Since our analysis is more suitable for testing couplings, we consider the benchmark scenarios listed in Table~\ref{tab:exhi-s1}. The signal signature is close to that of the SM $\mathrm{Z Z}\to 2l2\nu$ process, as shown in Fig.~\ref{fig:feyns1}, and we include the SM $\mathrm{Z Z}\to 2l2\nu$ here as a special case with zero coupling for comparison. The S1$_b$ and S1$_c$ scenarios project out the right- and left-handed parts of the Z-q-$\mathrm{\bar{q}}$ couplings. Since the magnitudes of the left-handed couplings are larger than those of the right-handed ones, the cross section of the S1$_c$ scenario is found to be much larger than that of the S1$_b$ scenario. Angular coefficients of the benchmark scenarios S1$_{a,b,c}$ are shown in Fig.~\ref{fig:angs1a}, Fig.~\ref{fig:angs1b} and Fig.~\ref{fig:angs1c}, respectively. Compared with the SM $\mathrm{Z Z}\to 2l2\nu$ and the spin-0 dark sector models, the $A_0$ of the spin-1 models is found to be very significant.
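The left/right projection used in S1$_b$ and S1$_c$ can be made explicit. With the vertex convention $\gamma_{\mu}(g^V+g^A\gamma_5)$, the right- and left-handed strengths are $g_R=g^V+g^A$ and $g_L=g^V-g^A$ (convention-dependent; an overall sign swap exchanges the two). A tiny sketch with the benchmark values of Table~\ref{tab:exhi-s1}:

```python
import numpy as np

def chiral(gv, ga):
    """(g_R, g_L) for a vertex gamma_mu (gv + ga*gamma5)."""
    return gv + ga, gv - ga

r = 1.0 / np.sqrt(2.0)
benchmarks = {"S1_a": (1.0, 0.0),   # spin independent: g_R = g_L = 1
              "S1_b": (r, r),      # pure right-handed: g_L = 0
              "S1_c": (r, -r)}     # pure left-handed:  g_R = 0
couplings = {k: chiral(*v) for k, v in benchmarks.items()}
```

The normalization $g^V=g^A=1/\sqrt{2}$ keeps $|g_R|^2+|g_L|^2$ equal across the scenarios, so the rate differences quoted in the text come from the Z-quark coupling structure rather than the overall mediator coupling strength.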
Among the three scenarios, most signatures look similar, but $A_3$ and $A_4$ take opposite signs for S1$_b$ and S1$_c$. Hence $A_3$ and $A_4$ can be used to quantify the parity violation of the dark sector. \begin{table}[htb] \centering \begin{tabular}{c|ccccccc} \hline \hline Benchmark & S1$_a$ & S1$_b$ & S1$_c$ & S1$_0$ \\ & Spin independent & Right handed & Left handed & SM ($\mathrm{Z Z}\to2l2\nu$) \\ \hline $g^V_{X_D}$ & 1 & $1/\sqrt{2}$ & $1/\sqrt{2}$ & - \\ $g^A_{X_D}$ & 0 & $1/\sqrt{2}$ & -$1/\sqrt{2}$ & - \\ $g^V_{X_C}$ & 0 & 0 & 0 & - \\ \hline $g^V_{u}$ & 0.25 & $\sqrt{2}/8$ & $\sqrt{2}/8$ & - \\ $g^A_{u}$ & 0 & $\sqrt{2}/8$ & -$\sqrt{2}/8$ & - \\ $g^V_{d}$ & 0.25 & $\sqrt{2}/8$ & $\sqrt{2}/8$ & - \\ $g^A_{d}$ & 0 & $\sqrt{2}/8$ & -$\sqrt{2}/8$ & - \\ \hline $\mchi$ (GeV) & 10 & 10 & 10 & - \\ $\myv$ (GeV) & 1000 & 1000 & 1000 & - \\ \hline $\Gamma_{Y_1}$ (GeV)& 56.3 & 55.9 & 55.9 & - \\ Cross section (fb) & 2.50 & 0.533 & 4.50 & 239 \\ \hline \hline \end{tabular} \caption{ Benchmark scenarios with a spin-1 mediator. } \label{tab:exhi-s1} \end{table} \begin{figure}[htp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin1_VV_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin1_VV_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin1_VV_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin1_VV_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin1_VV_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin1_VV_yz.eps} \caption{\label{fig:angs1a} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S1$_a$.
} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin1_LL_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin1_LL_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin1_LL_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin1_LL_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin1_LL_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin1_LL_yz.eps} \caption{\label{fig:angs1b} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S1$_b$. } \end{figure} \begin{figure}[htp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin1_RR_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin1_RR_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin1_RR_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin1_RR_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin1_RR_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin1_RR_yz.eps} \caption{\label{fig:angs1c} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S1$_c$. } \end{figure} \clearpage \subsection{Spin-2 mediator} The dark sector with a spin-2 mediator is also tested. We consider a model as described in Ref.~\cite{Kraml:2017atm}, with the benchmark scenarios listed in Table~\ref{tab:exhi-s2}. The masses of the dark matter and the mediator are also chosen to be the same as in the spin-0 model. Despite an increase of complexity in the computation, we find that the angular coefficients look similar to those of the benchmark scenario S1$_a$. We show only the angular coefficients of the benchmark scenario S2$_a$, in Fig.~\ref{fig:angs2a}. Some visible differences from S1$_a$ can be observed in the $A_0$ and $A_2$ distributions. Since we do not measure the DM, the angular coefficients of S2$_{b,c}$ are found to be very close to those of S2$_a$.
\begin{table}[htb] \centering \begin{tabular}{c|cccccccc} \hline \hline Benchmark & S2$_a$ & S2$_b$ & S2$_c$ \\ \hline $g^T_{X_D}$ & 1 & 0 & 0 \\ $g^T_{X_R}$ & 0 & 1 & 0 \\ $g^T_{X_V}$ & 0 & 0 & 1 \\ $g^T_{SM}$ & 1 & 1 & 1 \\ \hline $\mchi$ (GeV) & 10 & 10 & 10 \\ $\myt$ (GeV) & 1000 & 1000 & 1000 \\ $\Lambda$ (GeV) & 3000 & 3000 & 3000 \\ \hline $\Gamma_{Y_2}$ (GeV)& 95.3 & 93.7 & 97.7 \\ Cross section (fb) & 2.73 & 0.0462 & 0.578 \\ \hline \hline \end{tabular} \caption{ Benchmark scenarios with a spin-2 mediator. The angular coefficients of the three scenarios look essentially identical. } \label{tab:exhi-s2} \end{table} \begin{figure}[htp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin2_Xd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin2_Xd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin2_Xd_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin2_Xd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin2_Xd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin2_Xd_yz.eps} \caption{\label{fig:angs2a} Angular coefficients and the $\yz-\qt$ differential cross section of the benchmark scenario S2$_a$. } \end{figure} \clearpage \section{Introduction} The existence of dark matter (DM) is now well established. Current measurements give a cold DM density of $25.8$\%, much more significant than the $4.84$\% baryon density~\cite{Patrignani:2016xqp,Ade:2015xua}. Despite DM being an essential constituent of the universe, its intrinsic properties, like its mass, spin and nongravitational interactions with the standard model (SM) particles, are still elusive at present. Assuming that DM interacts weakly with the SM particles, the DM annihilation cross section is constrained by the precisely measured relic DM abundance, and a weak-scale DM candidate is usually expected for consistency~\cite{PhysRevLett.39.165}.
The WIMP DM candidate can be produced at the LHC, and its escape from detection typically leads to large missing transverse energy, resulting in mono-X signatures, where X may denote a jet~\cite{Beltran:2010ww,Aaboud:2017buf,Sirunyan:2017hci}, especially a t-/b-jet~\cite{Aaboud:2017rzf,Sirunyan:2017xgm}, a photon~\cite{Sirunyan:2017ewk}, a Z boson~\cite{Carpenter:2012rg,Sirunyan:2017onm,Aaboud:2017bja}, a W boson~\cite{Bai:2012xg,Aaboud:2017efa} or a Higgs boson~\cite{Aaboud:2017yqz,Sirunyan:2017hnk}. Numerous searches for DM have been performed at the LHC; many results from 13 TeV collisions are now available~\cite{Sirunyan:2017onm,Sirunyan:2016iap,Sirunyan:2017hci,Khachatryan:2016jww,Sirunyan:2017hnk,Sirunyan:2017xgm,Sirunyan:2017ewk,Sirunyan:2017nvi,Aaboud:2017rzf,Aaboud:2017bja,Aaboud:2017buf,Aaboud:2017buh,Aaboud:2017efa,Aaboud:2017yqz}, with strategies and benchmark models described in Ref.~\cite{Abercrombie:2015wmb}. In this analysis, we explore the effectiveness of the leptonic Z boson decay in the mono-Z signature for probing properties of the dark sector. Compared with other search channels, this channel has a relatively lower cross section and may not be the most powerful one at the search stage. However, the precisely measured electrons and muons provide a clean signature and can be used to increase the signal sensitivity. The phenomenology of this channel has been explored in Refs.~\cite{Neubert:2015fka,Petriello:2008pu,Alves:2015dya,Han:1999ne,Yu:2014ula}, including higher-order QCD predictions, multivariate analyses, a search for extra dimensions and effects at electron-positron colliders. LHC measurements are also available, and limits have been set on several dark sector models~\cite{Sirunyan:2017onm,Aaboud:2017bja,Aad:2014wca}. To better exploit the power of the lepton angular distribution, we systematically study the information it carries and how it is affected by the dark sector.
The dark sector can be modeled in many ways. As there is no strong support for the correctness of any specific model, it is now popular to set limits on the parameters of effective or simplified theories~\cite{Goodman:2010ku,Goodman:2010yf,Abercrombie:2015wmb,Cao:2009uw}. Despite their simplicity, these models may not be realistic if they are not applied in suitable cases. Both oversimplification and overdressing of the theory can lead to ineffectual results. For example, going to very high energy can result in the violation of unitarity in effective theories~\cite{Abercrombie:2015wmb,Cotta:2012nj}. On the other hand, some features are general among models and depend less on the variations of model parameters, e.g., the spin and mass of the dark mediator, or the parity or charge conjugation parity (CP) of the couplings. If applied carefully, those effective or simplified models can help us better understand the phenomenology of the dark sector. Motivated by this, we look for specific variables that have discrimination power on general features of the dark sector. We consider the associated production of a Z boson and a dark mediator, where the Z boson decays to a pair of electrons or muons and the dark mediator decays to a pair of dark matter particles. As the dark matter is unmeasurable, the typical feature of the event is a single leptonically decaying Z boson, with $\pt$ balanced by the missing transverse momentum vector. With precisely measured electron or muon momenta, one can reconstruct the Z boson rest frame and study in detail the information carried by the Z boson spin density matrix. We consider simplified models with spin-0, spin-1, and spin-2 mediators~\cite{Mattelaer:2015haa,Backovic:2015soa,Neubert:2015fka,Das:2016pbk,Kraml:2017atm}. In each case, only a few benchmark scenarios are considered, with representative parameter values.
For the spin-0 model, we assume the dark mediator can only weakly interact with bosons through a set of dimension-5 operators as described in Ref.~\cite{Neubert:2015fka}. In this case, the mono-Z boson channel is advantageous, as a triple boson coupling is necessary for the production. If couplings to the SM fermions are introduced assuming minimal flavor violation, their effects are suppressed because they are proportional to the Yukawa couplings~\cite{Backovic:2015soa,Cheung:2010zf,Lin:2013sca}. The spin-1 mediator model is chosen to be consistent with the one adopted in the LHC experiments~\cite{Abercrombie:2015wmb}. A spin-2 mediator model described in Ref.~\cite{Kraml:2017atm} is also tested. To maximally exploit the statistical power of the data, we present a framework that uses the matrix element method (MEM) with a dynamical construction of the event likelihood function and sets unbinned limits on parameters of the dark sector~\cite{doi:10.1143/JPSJ.57.4126,doi:10.1143/JPSJ.60.836,Gao:2010qx,Chatrchyan:2012sn,DeRujula:2010ys}. We parametrize the test statistic in such a way that the sensitivity of the MEM can be quantified through a term proportional to the KL-divergence of two probability density functions~\cite{kullback1951}. Limits on the coupling strengths of the dark sector models are set at 95\% confidence level (CL) based on the asymptotic approximation. As the spin-2 scenarios are found to have angular coefficients similar to those of a spin-independent spin-1 model, they are not considered in the limit setting. An example application of a matrix-element kinematic discriminator is also demonstrated with simulated events. This paper is organized as follows: Section~\ref{sec:param} introduces the parametrization of the lepton angular distribution. Section~\ref{sec:angC} describes computational details and presents numerical results for the angular coefficients in the Collins-Soper frame.
Section~\ref{sec:limits} explains the statistical method for setting limits and presents results on the coupling strengths of the dark sector models. Section~\ref{sec:summary} summarizes our major findings and gives an outlook. \section{Setting limits on the coupling strength parameters of dark sector models} \label{sec:limits} In Section~\ref{sec:angC}, we have shown that the angular coefficients of the benchmark dark sector models can have distinct signatures from the SM $\mathrm{Z Z}\to 2l2\nu$ background process in the $\yz-\qt$ plane. In this section, we take advantage of these signatures and set limits on the coupling strength parameter $\lambda$ of each dark sector model, based on the observables $\mathbf{x}=(\yz,\qt,\cos\theta_{CS},\phi_{CS})$. The invisible part $(\yy,\sy,\cos\theta_{\chi},\phi_{\chi})$ is integrated out to construct the pdfs, as described in Section~\ref{sec:param}. \subsection{Statistical method} With the pdfs of the signal and background processes obtained through the MEM, one can construct an unbinned likelihood function over the $N$ events in the data sample~\cite{Barlow:1990vc}: \begin{eqnarray} {\cal L}(\text{data}|\lambda,\boldsymbol{\theta}) &=& \text{Poisson}(N|S(\lambda,\boldsymbol{\theta})+B(\boldsymbol{\theta})) \rho(\boldsymbol{\theta}) \prod_i \rho(\mathbf{x}^i|\lambda,\boldsymbol{\theta}), \\ \rho(\mathbf{x}|\lambda,\boldsymbol{\theta}) &=& \dfrac{ S(\lambda,\boldsymbol{\theta}) \rho_s(\mathbf{x},\lambda) + B(\boldsymbol{\theta}) \rho_b(\mathbf{x}) }{ S(\lambda,\boldsymbol{\theta}) + B(\boldsymbol{\theta}) }, \end{eqnarray} where $\rho_s(\mathbf{x},\lambda)$ and $\rho_b(\mathbf{x})$ represent the pdfs of the signal and background, and $S(\lambda,\boldsymbol{\theta})$ and $B(\boldsymbol{\theta})$ the expected signal and background yields. Here $\boldsymbol{\theta}$ represents the full set of nuisance parameters with pdf $\rho(\boldsymbol{\theta})$, which are designed to incorporate systematic uncertainties.
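The structure of this likelihood can be sketched numerically; the toy below uses one-dimensional Gaussian signal and exponential background shapes (purely illustrative assumptions, not the MEM pdfs of this analysis):

```python
import math

def rho_s(x):
    # toy signal pdf: Gaussian centered at 5, essentially all mass inside [0, 10]
    return math.exp(-0.5 * ((x - 5.0) / 0.5) ** 2) / (0.5 * math.sqrt(2.0 * math.pi))

def rho_b(x):
    # toy background pdf: falling exponential normalized on [0, 10]
    return math.exp(-x / 3.0) / (3.0 * (1.0 - math.exp(-10.0 / 3.0)))

def rho_mix(x, S, B):
    # rho(x|lambda) = (S*rho_s + B*rho_b) / (S + B)
    return (S * rho_s(x) + B * rho_b(x)) / (S + B)

def neg2_log_L(data, S, B):
    # -2 ln L = -2 ln Poisson(N | S+B) - 2 sum_i ln rho(x_i | lambda)
    N, mu = len(data), S + B
    n2ll = -2.0 * (N * math.log(mu) - mu - math.lgamma(N + 1))
    n2ll += -2.0 * sum(math.log(rho_mix(x, S, B)) for x in data)
    return n2ll

# sanity check: the mixture pdf integrates to one
h = 1e-3
norm = sum(rho_mix(i * h, 137.0, 4820.0) for i in range(10001)) * h
print(round(norm, 3))
```

The mixture weights $S/(S+B)$ and $B/(S+B)$ are what couple the per-event shape term to the Poisson yield term in the fit.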
To set limits on the parameters $\lambda$, we compare the compatibility of the data with the $\lambda$-fixed and $\lambda$-floated hypotheses and construct a test statistic based on the profile likelihood ratio: \begin{eqnarray} t_{\lambda} = -2\ln\dfrac{ {\cal L}(\text{data}|\lambda,\boldsymbol{\hat{\theta}}_{\lambda}) }{ {\cal L}(\text{data}|\hat{\lambda},\boldsymbol{\hat{\theta}}) }. \end{eqnarray} According to Wilks' theorem, this test statistic follows, in the large sample limit, a $\chi^2$ distribution with as many degrees of freedom as $\lambda$~\cite{wilks1938}. One can therefore set limits on $\lambda$ through a parameter-space scan and a cut on the $-2\ln\Delta{\cal L}$ values. Neglecting the pdf of the nuisance parameters, it follows that \begin{eqnarray} t_{\lambda} = -2\ln\dfrac{\text{Poisson}(N|S(\lambda)+B)}{\text{Poisson}(N|S(\hat{\lambda})+B)} -2 \sum_i \ln \dfrac{\rho(\mathbf{x}^i|\lambda)}{\rho(\mathbf{x}^i|\hat{\lambda})}. \end{eqnarray} To set limits on $\lambda$, we assume a single dataset in agreement with the $\lambda=0$ hypothesis. In the large sample limit, we have: \begin{eqnarray} t_{\lambda} &\xrightarrow{N\to\infty}& -2\ln\dfrac{\text{Poisson}(N|S(\lambda)+B)}{\text{Poisson}(N|B)} +2 N \int \d \mathbf{x} \rho(\mathbf{x}|\lambda=0) \ln \dfrac{\rho(\mathbf{x}|\lambda=0)}{\rho(\mathbf{x}|\lambda)} \\ \nonumber &=& -2\ln\dfrac{\text{Poisson}(N|S(\lambda)+B)}{\text{Poisson}(N|B)} + 2 N \cdot D(\rho(\mathbf{x}|\lambda=0) || \rho(\mathbf{x}|\lambda)). \end{eqnarray} where the first term is the test statistic of a simple counting experiment and the second term is $2N$ times a KL-divergence~\cite{kullback1951}. As the KL-divergence measures the difference between the pdfs $\rho(\mathbf{x}|\lambda)$ and $\rho(\mathbf{x}|\lambda=0)$, it quantifies the power of the MEM. For simplicity, we will refer to the first term as the normalization term and to the second as the KL-divergence term.
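The KL-divergence term can be illustrated with a toy example: for two unit-width Gaussian shapes whose means differ by $\mu$, the divergence has the closed form $D=\mu^2/2$, which a direct quadrature reproduces (the Gaussian shapes are an illustrative assumption, not the analysis pdfs):

```python
import math

def gauss(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def kl_divergence(mu, lo=-10.0, hi=10.0, n=20001):
    # D(p0 || p_mu) = \int p0(x) ln(p0(x)/p_mu(x)) dx by simple quadrature
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        p0, pl = gauss(x, 0.0), gauss(x, mu)
        total += p0 * math.log(p0 / pl) * h
    return total

mu = 0.8
d = kl_divergence(mu)
print(d, mu ** 2 / 2.0)   # the two numbers agree closely
```

In the test statistic this divergence enters multiplied by $2N$, so a fixed shape difference becomes more constraining as the sample grows.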
In our study, the likelihood function is prepared with {\sc BASES} numerical integration and HELAS subroutines for the helicity amplitudes. The evaluation of the KL-divergence term is performed using a plain integration routine provided by the {\sc GNU Scientific Library}. We validate our program by checking the normalizations of all the constructed pdfs and by comparing the angular coefficients and cross sections of all involved processes with {\sc MG5}. More details are given in Appendix~\ref{app:valid}. \subsection{Background modeling and event selections} To make our limits more realistic, we consider a few selections -- referred to as BL selections -- listed in Table~\ref{tab:selections}, which capture the major detector acceptance effects for the processes involved. The values of these selections are set referring to recent 13 TeV LHC measurements~\cite{Sirunyan:2017onm,Aaboud:2017bja}. Several additional selections are employed in experiments to improve the signal sensitivity, e.g., jet counting, a third-lepton veto, a top quark veto, and $\Delta\phi_{ll,\ptvecmiss}$, $|\etmiss-\ptll|/\ptll$ for momentum balance~\cite{Sirunyan:2017onm}. These selections reject most of the background from misidentification but lead to different acceptance efficiencies for different processes. Without detector simulation, we determine the event rate according to the CMS results (Table 3 of Ref.~\cite{Sirunyan:2017onm}), with an ancillary $A\cdot \epsilon$ incorporating the additional selections in the experiment and a scale factor normalizing to 150~$\fbinv$ of data. The signal dark matter processes are assumed to have the same ancillary $A\cdot \epsilon$ as the SM ZZ$\to 2l 2\nu$ process.
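This yield normalization amounts to cross section $\times$ ancillary $A\cdot\epsilon$ $\times$ integrated luminosity; with the values quoted in Table~\ref{tab:bkgevt} it can be checked directly for two of the processes:

```python
# expected events = sigma(BL-selections) [fb] x ancillary A*eps x 150 fb^-1
lumi = 150.0
processes = {
    "ZZ -> 2l 2nu":   (27.7, 0.488),
    "WZ(-> e nu 2l)": (17.05, 0.296),
}
for name, (sigma_fb, a_eps) in processes.items():
    print(name, round(sigma_fb * a_eps * lumi))
```

The same arithmetic, with rounded $A\cdot\epsilon$ values, reproduces the remaining rows of the table.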
\begin{table}[htb] \centering \begin{tabular}{c|c} \hline \hline Variable & Requirements \\ \hline $p^l_{\mathrm{T}}$ & $>20$~GeV \\ \sz & \text{NWA} \\ $E^{\text{miss}}_{\mathrm{T}}$ & $>80$~GeV \\ $|\eta_{l}|$ & $<2.4$ \\ $\Delta R_{ll}$ & $>0.4$ \\ $|\yz|$ & $<2.5$ \\ \hline \hline \end{tabular} \caption{ Selections considered in our computations (BL-selections), where $l=e,\mu$. Additional selection requirements are considered in experiments to improve the signal sensitivity. Their effects are included through an ancillary $A\cdot \epsilon$. } \label{tab:selections} \end{table} Our background pdf is constructed from the components summarized in Table~\ref{tab:bkgevt}. Apart from the non-resonant-$ll$ background, which is constructed using only the phase space, the components are built using matrix elements. The WZ$\to 3l\nu$ matrix element assumes W$\to e \nu$, where the electron is not identified by the detector. The Z/$\gamma^*\to l^+l^-$ background is estimated with the matrix element of Z$\to l^+l^-$ plus one-jet production; the phase space of this process reduces to three final-state particles. \begin{table}[htb] \centering \begin{tabular}{c|c|c|c} \hline \hline Process & Cross section with BL-selections (fb) & Ancillary $A\cdot \epsilon$ & Events \\ \hline ZZ$\to 2l 2\nu$ & 27.7 & 0.488 & 2028 \\ Non-resonant-$ll$ & 1.57$\times 10^{3}$ & 5.80$\times 10^{-3}$ & 1370 \\ WZ($\to e\nu 2l$) & 17.05 & 0.296 & 757 \\ Z/$\gamma^*\to l^+l^-$ & 3.61$\times 10^{4}$ & 1.23$\times 10^{-4}$ & 665 \\ \hline \hline \end{tabular} \caption{ Background estimation with cross sections calculated in a phase space with BL-selections and an ancillary $A\cdot \epsilon$ to obtain the same event rate as in Table 3 of Ref.~\cite{Sirunyan:2017onm}. The number of events corresponds to 150~$\fbinv$ of data. } \label{tab:bkgevt} \end{table} In the presence of selections, angular coefficients can be distorted.
Fig.~\ref{fig:bkgak} shows the angular coefficients $A_0-A_4$ for the background-only hypothesis. Irregular distributions on the boundaries are mainly caused by the selections on $|\eta_{l}|$ and $\Delta R_{ll}$. With the coupling strength at our expected limit, the presence of a signal only slightly perturbs the background-only shapes. \begin{figure}[htbp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_bkgak_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_bkgak_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_bkgak_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_bkgak_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_bkgak_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_bkgak_yz.eps} \caption{\label{fig:bkgak} Angular coefficients $A_0-A_4$ in the CS frame and $\yz-\qt$ differential cross section for the background-only hypothesis. Selections in Table~\ref{tab:selections} have been applied and cause irregular shapes at the kinematic boundaries.} \end{figure} \subsection{Limits on the coupling strength parameters of the dark sector models} In our dark sector models, two couplings are necessary: one for the interaction with SM particles and one for the mediator decay to DM. For conciseness, we assume that both couplings in the benchmark model are scaled by a strength parameter $\lambda$. With this assumption, the cross sections scale with two more powers of the coupling than they would if limits were set on a single coupling. We compare the upper limits set from the normalization term $-2\ln\text{Poisson}$ and from the KL-divergence term $2N\cdot D(\rho(\mathbf{x}|0)||\rho(\mathbf{x}|\lambda))$ in Fig.~\ref{fig:lims0} for the S0 benchmark scenarios and in Fig.~\ref{fig:lims1} for the S1 benchmark scenarios. The shape information provides significant improvements in all cases. The KL-divergence terms drive the final limits for the S0 benchmark scenarios and are close to the normalization terms for the S1 benchmark scenarios.
\begin{figure}[htbp] \centering \includegraphics[width=4.5cm]{figs/limitS0a.eps} \includegraphics[width=4.5cm]{figs/limitS0b.eps} \includegraphics[width=4.5cm]{figs/limitS0c.eps} \caption{\label{fig:lims0} Upper limits on the coupling strength parameters of the S0 benchmark scenarios. } \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=4.5cm]{figs/limitS1a.eps} \includegraphics[width=4.5cm]{figs/limitS1b.eps} \includegraphics[width=4.5cm]{figs/limitS1c.eps} \caption{\label{fig:lims1} Upper limits on the coupling strength parameters of the S1 benchmark scenarios. } \end{figure} We provide in Table~\ref{tab:limit} the 95\% CL upper limits on the strength parameters. In our evaluation, the numerical uncertainty of the normalization terms can easily be made negligible. However, the evaluation of the KL-divergence terms can be computationally expensive: it takes roughly $700\times 6$ CPU hours, at about 2.4 GHz, to obtain 30\%-50\% uncertainties on the KL-divergence terms around the limit values. The signal cross sections at the limit values are also reported. Since a counting experiment calculates limits based only on the signal and background yields, the results from the normalization term are almost identical across scenarios. Those from the KL-divergence terms, however, depend on the shape difference between the signal and background. As the KL-divergence is a measure of the shape difference, a lower cross section at the limit means a larger difference in shape. These quantitative results are in agreement with the qualitative features of the angular coefficients among models provided in Section~\ref{sec:angC}.
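The normalization-term limit alone can be reproduced with a few lines: for an observed count equal to the background expectation, the counting part of the test statistic reduces to $t=2S-2B\ln(1+S/B)$, and the 95\% CL yield solves $t=3.84$. Using the total background of Table~\ref{tab:bkgevt} and the ZZ ancillary $A\cdot\epsilon$, this sketch recovers a signal cross section close to the values quoted in Table~\ref{tab:limit} (the bisection setup and the use of the 3.84 quantile for one degree of freedom are our assumptions):

```python
import math

B = 2028 + 1370 + 757 + 665         # total background events (Table tab:bkgevt)

def t_norm(S):
    # counting-experiment test statistic for N_obs = B
    return 2.0 * S - 2.0 * B * math.log(1.0 + S / B)

# bisection for t_norm(S) = 3.84; t_norm is monotonically increasing in S
lo, hi = 0.0, 1000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if t_norm(mid) < 3.84:
        lo = mid
    else:
        hi = mid
S_limit = 0.5 * (lo + hi)
sigma_limit = S_limit / (0.488 * 150.0)   # undo A*eps and luminosity
print(S_limit, sigma_limit)
```

The resulting cross section is close to the 1.86-1.87~fb reported for the normalization-term limits, independently of the benchmark scenario.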
\begin{table}[htb] \centering \scalebox{1.0}{ \begin{tabular}{r|ccc|ccc} \hline \hline Benchmark ~~~~~~~~~~ & S0$_a$ & S0$_b$ & S0$_c$ & S1$_a$ & S1$_b$ & S1$_c$ \\ \hline Limit from the normalization term ($\lambda_1$) & 4.4 & 4.6 & 103 & 1.1 & 1.7 & 0.97 \\ Signal cross section at $\lambda_1$ (fb) & 1.86 & 1.87 & 1.86 & 1.87 & 1.87 & 1.87 \\ \hline Limit from the KL-divergence term ($\lambda_2$) & 3.5 & 3.6 & 81 & 1.1 & 1.7 & 0.99 \\ Signal cross section at $\lambda_2$ (fb) & 0.75 & 0.70 & 0.72 & 1.9 & 2.0 & 2.0 \\ \hline Combined limit ($\lambda_0$) & 3.5 & 3.5 & 79 & 1.0 & 1.5 & 0.89 \\ \hline \hline \end{tabular}} \caption{ Upper limits on the coupling strength parameters of the dark sector models at 95\% CL, with signal cross sections at the limit values. } \label{tab:limit} \end{table} \subsection{Example application of MEKD} Our computation considered only the parton-level matrix element at leading order (LO). We note that there are already efforts to extend the MEM to next-to-leading order (NLO)~\cite{Campbell:2012cz} and to incorporate parton-shower effects~\cite{Soper:2011cr}. There is an easier approach to exploit the LO matrix elements, called the matrix element kinematic discriminator (MEKD)~\cite{Avery:2012um,Chatrchyan:2012sn,Chatrchyan:2012jja}. This method constructs a variable, the MEKD, that can be calculated for any event with the required observables. By construction, it utilizes the matrix element and can be used to distinguish the signal from the background. The advantage of this method is that the construction of the likelihood function is independent of the application, so detector effects and theoretical uncertainties do not need to be modeled in the discriminator itself. Based on the pdfs of the signal and combined background defined as in Eq.~\ref{eq:pdf}, we define the MEKD as: \begin{eqnarray} \text{MEKD} = \ln \dfrac{\rho_s(\mathbf{x},\lambda)}{\rho_b(\mathbf{x})}, \end{eqnarray} where $\mathbf{x}=(\yz,\qt,\cos\theta_{CS},\phi_{CS})$ and the invisible part has been integrated out.
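A toy version of the MEKD illustrates its behavior: for any pair of pdfs, the expectation of $\ln(\rho_s/\rho_b)$ is positive under the signal and negative under the background (these are the KL-divergences $D(\rho_s||\rho_b)$ and $-D(\rho_b||\rho_s)$, respectively). The one-dimensional Gaussian signal and flat background below are illustrative assumptions:

```python
import math

def rho_s(x):   # toy signal pdf: Gaussian, essentially all mass inside [0, 10]
    return math.exp(-0.5 * ((x - 5.0) / 0.5) ** 2) / (0.5 * math.sqrt(2.0 * math.pi))

def rho_b(x):   # toy background pdf: flat on [0, 10]
    return 0.1

def mekd(x):
    return math.log(rho_s(x) / rho_b(x))

# <MEKD>_signal > 0 > <MEKD>_background, by simple quadrature on (0, 10)
h = 1e-3
xs = [i * h for i in range(1, 10000)]
mean_sig = sum(rho_s(x) * mekd(x) for x in xs) * h
mean_bkg = sum(rho_b(x) * mekd(x) for x in xs) * h
print(mean_sig, mean_bkg)
```

The gap between the two means is what a cut or fit on the MEKD distribution exploits.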
We then use the {\sc MG5} program to generate events for this application. For the LO simulations, we consider the same setup as used in our program. For the NLO simulations, we consider NNPDF23\_nlo with default renormalization and factorization scales, defined as half the sum of the transverse masses of all final-state particles and partons. Negatively weighted events in the NLO simulations have been incorporated consistently. Fig.~\ref{fig:mekd} shows stacked MEKD distributions of the signal and backgrounds. In the left plot, all of the processes are generated with LO accuracy (NLO in QCD for Z($\to l^+l^-$)+jet). The signal corresponds to the S0$_a$ benchmark model with $\lambda=3.5$. We multiplied the signal yield by a factor of five for better visibility. The non-resonant-$ll$ process is expected to be obtained with data-driven methods in the experiment. We mimic its contribution by using a $\mathrm{t\bar{t}}(\to 2l2\nu2 \mathrm{b-jets})$ sample. The right plot replaces the SM ZZ$\to 2l 2\nu$, WZ($\to e\nu 2l$) and Z($\to l^+l^-$)+jet samples with events simulated at NLO accuracy. In both cases, the MEKD shows good discrimination power between the signal and background. This makes clear that NLO simulated events are applicable, with a modest loss of sensitivity. \begin{figure}[!h] \centering \includegraphics[width=7.5cm]{figs/mekd_LO.eps} \includegraphics[width=7.5cm]{figs/mekd_NLO.eps} \caption{\label{fig:mekd} Example MEKD distributions with {\sc MG5} generated events. The left plot is obtained with simulated events at LO accuracy. The right plot considers events of the Z($\to l^+l^-$)+jet processes at NLO accuracy. The signal corresponds to the S0$_a$ benchmark model with $\lambda=3.5$. We multiplied the signal yield by a factor of five for better visibility. } \end{figure} \section{Summary} \label{sec:summary} In this paper, we have exploited the Z boson leptonic decay information to probe a dark sector with scalar, vector, and tensor mediators.
We obtained the angular coefficients of the SM $\mathrm{Z Z}\to 2l2\nu$ background and of the benchmark scenarios of the dark sector models in the $\yz-\qt$ plane. Our results show that the angular coefficients $A_{0}-A_{4}$ behave very differently between the SM $\mathrm{Z Z}\to 2l2\nu$ process and the dark sector signal processes. The angular coefficients of the spin-0 and spin-1 mediator models are also found to differ from scenario to scenario. Specifically, the angular coefficients are sensitive to the parity violation of the spin-1 model and to the CP violation of the spin-0 model. The angular coefficients in the spin-2 model are found to be similar to those of the spin-independent scenario of the spin-1 model, but still with minor differences. To quantify the shape information that can be used in the search for dark sectors, we consider unbinned fits to the four-dimensional $\yz-\qt-\cos\theta_{CS}-\phi_{CS}$ distributions based on dynamically constructed matrix element likelihood functions, and set 95\% CL upper limits on the coupling strength parameters of the spin-0 and spin-1 benchmark scenarios. To be realistic, we emulate the acceptance and efficiency effects referring to the 13 TeV LHC measurements~\cite{Sirunyan:2017onm,Aaboud:2017bja}. To keep our framework concise, we obtained all the results using the asymptotic approximation, without event generation. The evaluated KL-divergence term quantifies the shape effect in each case. The obtained results demonstrate significant improvements in the limits, especially for the S0 benchmark models. For easier use of experimental data, we provide an example application of the MEKD with simulated events. We show that our MEKD constructed with LO matrix elements is applicable to NLO events and preserves good discrimination power between the signal and background. We expect this kind of MEKD to be useful for exploiting the lepton angular distributions in experimental analyses.
\section{Introduction and main results} \medskip It is well known that a magnetic field can act as a shield against the effects of the solar wind and cosmic rays. The most famous instance occurs on the Earth, whose magnetic field pushes cosmic rays towards the poles, causing the aurora borealis. Analogously, one can pose the problem of how to protect a spaceship flying to Mars from the solar wind. Indeed, a prolonged exposure to the solar wind can be dangerous for the health of the crew of the spaceship. \noindent It is easy to show that, in the motion of a point charged particle, surfaces with a sufficiently singular magnetic field represent an impenetrable wall, producing the so-called magnetic shield, as can be seen in the proof of estimate (\ref{B}). In the present paper we discuss whether a similar effect can occur for a charged plasma described by the Vlasov-Poisson equation, evolving in the presence of an external magnetic field singular on the border of some region. Actually the problem is not obvious, since in the continuum case a fluid particle could in principle be pushed against the wall by the other particles, especially if the total mass of the plasma is infinite. We show rigorously that in many cases this phenomenon cannot happen. The construction of such a model of magnetic shield depends mainly on three things: the geometry of the region that we want to screen by a magnetic field, and the spatial as well as the velocity distribution of the plasma. In the sequel we discuss some emblematic cases. In other situations we are not able to prove the existence of the shield effect, and we do not know whether this is due to a technical limitation or to physical reasons. \bigskip The solar wind is composed of different species of particles, i.e.
electrons, protons, and other positive heavy ions; therefore we initially discuss the time evolution of a plasma with many species of different charge signs, moving under the influence of the self-induced electric field and a fixed external magnetic field. Then we study a non-trivial generalization of the initial data when only one species is present. Let $\Gamma$ be a smooth region of the space and $n$ the total number of species. For any $i=1,2,...,n$ let $f_i(x,v,t)$ represent the distribution function of charged particles at the point of the phase space $(x,v)$ at time $t$, and let $\sigma_i$ (which can be positive or negative) be the charge per unit mass of the $i$-th species. We describe the time evolution of this system via the $n$ Vlasov-Poisson equations: \begin{equation} \label{Eq} \left\{ \begin{aligned} &\partial_t f_i(x,v,t) +v\cdot \nabla_x f_i(x,v,t)+ \sigma_i(E(x,t)+v \wedge B(x)) \cdot \nabla_v f_i(x,v,t)=0 \\ &E(x,t)=\sum_{i=1}^n \sigma_i \int_{\mathbb{R}^3 \setminus \Gamma} \frac{x-y}{|x-y|^3} \ \rho_i(y,t) \, dy \\ &\rho_i(x,t)=\int_{\mathbb{R}^3} f_i(x,v,t) \, dv \\ &f_i(x,v,0)=f_{i,0}(x,v)\geq 0, \qquad x\in {\mathbb{R}^3 \setminus \Gamma}, \qquad v\in\mathbb{R}^3, \qquad i=1 \dots n \end{aligned} \right. \end{equation} where $\rho_i$ are the spatial densities of the species, $E$ is the electric field, and $B(x)$ is an external magnetic field that is singular on the boundary $\partial \Gamma$. \noindent The model described by these equations neglects relativistic effects; this approximation is reasonable, since the velocities of the main part of the solar wind are of the order of $4 \cdot 10^2$ km/s, while the velocity of light is of the order of $3 \cdot 10^5$ km/s.
\bigskip System (\ref{Eq}) shows that the $f_i$ are time-invariant along the solutions of the so-called characteristic equations: \begin{equation} \label{ch} \begin{cases} \displaystyle \dot{X}_i(t)= V_i(t)\\ \displaystyle \dot{V}_i(t)= \sigma_i \left[ E(X_i(t),t)+V_i(t) \wedge B(X_i(t))\right] \\ \displaystyle (X_i(t'), V_i(t'))=(x,v) \\ \displaystyle f_i(X_i(t), V_i(t), t) = f_{i,0}(x,v), \end{cases} \end{equation} where we have used the simplified notation \begin{equation} \label{2.8} (X_i(t),V_i(t))= (X_i(t,t',x,v),V_i(t,t',x,v)) \end{equation} to represent a characteristic at time $t$ passing at time $t'<t$ through the point $(x,v)$. Hence we have \begin{equation} \label{2.9} \| f_i(t)\|_{L^\infty}= \| f_{i,0} \|_{L^\infty}. \end{equation} Moreover this dynamical system preserves the measure of the phase space (Liouville's theorem). \bigskip We start the investigation when $\Gamma$ is a bounded region (which represents the spaceship). For concreteness we assume $\Gamma$ to be a torus, but the study can be extended without difficulty to other smooth regions. We have chosen a torus and not, e.g., a sphere because we want to put on the border a magnetic field that is well defined everywhere (for a sphere it would not be defined at the poles), a fact that imposes some constraints on the topology of $\Gamma$. \bigskip Let $x=(x_1,x_2,x_3)\in \mathbb{R}^3$; then $x\in \Gamma$ if $$ \left(R-\sqrt{x_1^2+x_2^2} \right)^2+x_3^2=r^2, $$ for some $r\in [0,r_0]$, with $R>r_0>0$. \bigskip \noindent The toroidal coordinates are defined by \begin{equation} \label{coord.} \begin{cases} \displaystyle x_1= (R+r\cos \alpha) \cos \theta\\ \displaystyle x_2 = (R+r\cos \alpha) \sin \theta \\ \displaystyle x_3 = r \sin \alpha \\ \displaystyle 0 \leq \alpha < 2 \pi, \quad 0 \leq \theta < 2 \pi.
\end{cases} \end{equation} \noindent We will describe the region $\mathbb{R}^3 \setminus \Gamma$ with toroidal coordinates when $r_0<r<(R+r_0)/2$ (near the external border of the torus), with a smooth switch to Cartesian coordinates elsewhere. We choose the external magnetic field $B(x)$ of the form \begin{equation} \label{m.f.} B(x) = \nabla \wedge A(x), \qquad A(x)= \frac{a(r)}{R+r \cos \alpha} \hat{e}_\theta \end{equation} where $a(r)$ is a decreasing smooth function for $r_0 < r \leq r_0+\frac{R-r_0}{4}$, divergent for $r \to r_0^+$ and vanishing otherwise, and $\hat{e}_\theta$ is the unit vector tangent to the border of the torus in the direction of increasing $\theta$ (and fixed $\alpha$). From (\ref{coord.}) one obtains \begin{equation} \label{versors} \begin{cases} \displaystyle \hat{e}_r= \cos \alpha \cos \theta \ \hat{c}_1+\cos \alpha \sin \theta \ \hat{c}_2+\sin \alpha \ \hat{c}_3\\ \displaystyle \hat{e}_\theta= - \sin \theta \ \hat{c}_1 + \cos \theta \ \hat{c}_2\\ \displaystyle \hat{e}_\alpha = -\sin \alpha \cos \theta \ \hat{c}_1-\sin \alpha \sin \theta \ \hat{c}_2+\cos \alpha \ \hat{c}_3 \end{cases} \end{equation} where $\hat{e}_\alpha $ is the unit vector defined analogously to $\hat{e}_\theta$ (and hence orthogonal to $\hat{e}_\theta$), $\hat{e}_r=\hat{e}_\theta \wedge \hat{e}_\alpha $, and $\hat{c}_1,\hat{c}_2,\hat{c}_3$ are the unit vectors of the Cartesian axes $x_1,x_2,x_3$. From (\ref{m.f.}) we have (for the curl in toroidal coordinates see for instance \cite {Bat}) \begin{equation} \label{mag.f.} B(x) = \frac{a'(r)}{R+r \cos \alpha} \hat{e}_\alpha. \end{equation} We make for convenience the explicit choice \begin{equation} \label{a(r)} a(r) = (r-r_0)^{-\tau}, \qquad \tau >\frac72, \qquad \textnormal{for} \qquad r_0< r < r_0+ \frac{R-r_0}{8}. \end{equation} \medskip We note that other choices of the magnetic field are possible, for example one directed along $\hat e_\theta$ and singular on the border of the torus.
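As a purely illustrative numerical sketch (with hypothetical values $R=3$, $r_0=1$, $\tau=3.6$, and unit charge), one can check that the modulus of the field (\ref{mag.f.}) with the choice (\ref{a(r)}) diverges approaching the border $r=r_0$, and that a test charge moving in this field with $E=0$ keeps a constant speed, the magnetic force doing no work:

```python
import math

R, r0, tau = 3.0, 1.0, 3.6   # hypothetical torus parameters, tau > 7/2

def B_field(x):
    """B = a'(r)/(R + r cos(alpha)) e_alpha with a(r) = (r - r0)^(-tau)."""
    rho = math.hypot(x[0], x[1])
    r = math.hypot(rho - R, x[2])
    alpha = math.atan2(x[2], rho - R)
    theta = math.atan2(x[1], x[0])
    ap = -tau * (r - r0) ** (-tau - 1.0)          # a'(r), divergent as r -> r0+
    e_alpha = (-math.sin(alpha) * math.cos(theta),
               -math.sin(alpha) * math.sin(theta),
               math.cos(alpha))
    c = ap / (R + r * math.cos(alpha))
    return (c * e_alpha[0], c * e_alpha[1], c * e_alpha[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# |B| grows without bound approaching the border r = r0
mags = [math.sqrt(sum(c * c for c in B_field((R + r0 + eps, 0.0, 0.0))))
        for eps in (1e-1, 1e-2, 1e-3)]
print(mags)   # strictly increasing toward the border

# RK4 push of a test charge (E = 0, sigma = 1): the speed stays constant
def deriv(x, v):
    return v, cross(v, B_field(x))

x, v, dt = (R + r0 + 0.5, 0.0, 0.0), (0.0, 1.0, 0.0), 1e-3
for _ in range(2000):
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(tuple(x[i] + 0.5 * dt * k1x[i] for i in range(3)),
                     tuple(v[i] + 0.5 * dt * k1v[i] for i in range(3)))
    k3x, k3v = deriv(tuple(x[i] + 0.5 * dt * k2x[i] for i in range(3)),
                     tuple(v[i] + 0.5 * dt * k2v[i] for i in range(3)))
    k4x, k4v = deriv(tuple(x[i] + dt * k3x[i] for i in range(3)),
                     tuple(v[i] + dt * k3v[i] for i in range(3)))
    x = tuple(x[i] + dt / 6.0 * (k1x[i] + 2 * k2x[i] + 2 * k3x[i] + k4x[i]) for i in range(3))
    v = tuple(v[i] + dt / 6.0 * (k1v[i] + 2 * k2v[i] + 2 * k3v[i] + k4v[i]) for i in range(3))
speed = math.sqrt(sum(c * c for c in v))
print(speed)   # stays at 1 up to the integrator's accuracy
```

This sketch only illustrates the single-particle mechanism; the theorems below concern the far harder continuum problem, where the self-consistent electric field can in principle push fluid particles toward the wall.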
We have considered the form (\ref{mag.f.}), which gives more difficulties in the analysis, while other choices can be studied along the same lines as the present treatment. In any case, it will be clear in what follows that the magnetic field has to be chosen with a suitably strong divergence on the boundary of $\Gamma.$ A remark that will play an important role in the sequel is the fact that the magnetic force $ V(t)\wedge B(X(t))$ does not change the modulus of the velocity, since \begin{equation} \begin{split} \frac{d}{dt}V^2=2V\cdot \dot{V}= 2\sigma V\cdot (E+V\wedge B)=2\sigma V\cdot E. \label{baa} \end{split} \end{equation} We denote by $\Lambda_i$ the spatial support of $f_{i,0}(x,v)$ for any $i=1,\dots, n.$ Moreover $\partial \Gamma$ represents the border of the torus and $\Gamma^c=\mathbb{R}^3\setminus\Gamma.$ \noindent One of the main results of the present paper is the following theorem, which is valid for charge densities $\sigma_i$ of any sign (both positive and negative). \begin{theorem} Let us fix an arbitrary positive time T. Consider initial data $f_{i,0} \in L^\infty$ such that $\Lambda_i\subset \Gamma^c\setminus \partial\Gamma$, with a distance between $\Lambda_i$ and $\Gamma$ greater than $d_0>0$. Let $f_{i,0}$ also satisfy the following hypotheses: \begin{equation} 0\leq f_{i,0}(x,v)\leq C_0 e^{- \lambda |v|^{q}}g(|x|), \qquad q>\frac{18}{7} \label{Ga1} \end{equation} where $g$ is a bounded, continuous, not increasing function such that, for $|x|\geq 1$, \begin{equation} g(|x|) \leq \frac{C}{|x|^{\alpha}} \qquad \qquad \textit{with}\,\,\,\, \alpha>3 \label{asp1} \end{equation} $\lambda,$ $C_0$ and $C$ being positive constants. Then $\forall (x,v)$ there exists a solution to equations (\ref{ch}) on $[0, T]$ such that $X_i(t)\in \Gamma^c\setminus \partial\Gamma.$ Moreover there exist positive constants $C'$ and ${\lambda}'$ such that \begin{equation} 0\leq f_i(x,v,t)\leq C'e^{- \lambda' |v|^{q}}g(|x|).
\label{dec1} \end{equation} This solution is unique in the class of those satisfying (\ref{dec1}). \label{th_02} \end{theorem} The proof will be discussed in the next section. We remark that the assumption of super-Gaussian decay of the velocities (\ref{Ga1}) is related to the fact that the magnetic lines are not straight lines, which is a consequence of the boundedness of $\Gamma.$ When studying unbounded regions (see Section \ref{General}) we are able to consider data having a Gaussian decay in the velocities. The spatial decay (\ref{asp1}) implies that the total masses are bounded: \begin{equation} \label{mass_i} M_i = \int_{\mathbb{R}^3 \setminus \Gamma} \, dx \int_{\mathbb{R}^3} \, dv \, f_{i,0}(x,v ) < \infty. \end{equation} \bigskip We would like to study cases in which some species has infinite mass, in order to highlight whether or not the results depend on the boundedness of $M_i.$ Unfortunately, we are able to carry out this investigation only if the charges of all the species have the same sign (all positive or all negative). In the proof we will discuss why. The following theorem holds: \begin{theorem} Let us fix an arbitrary positive time T. Assume that all $\sigma_i > 0$, or all $\sigma_i < 0$, $i=1,\dots, n$. Consider initial data $f_{i,0} \in L^\infty$ such that $\Lambda_i\subset \Gamma^c\setminus \partial\Gamma$, with a distance between $\Lambda_i$ and $\Gamma$ greater than $d_0>0$. Let $f_{i,0}$ also satisfy the following hypotheses: \begin{equation} 0\leq f_{i,0}(x,v)\leq C_0 e^{- \lambda |v|^{q}}g(|x|), \qquad q > \frac{45}{7} - \frac97 \alpha \quad \textit{with} \,\,\,\, \frac83<\alpha\leq 3 \label{Ga} \end{equation} where $g$ is a bounded, continuous, not increasing function such that, for $|x|\geq 1$, \begin{equation} g(|x|) \leq \frac{C}{|x|^{\alpha}} \label{asp} \end{equation} $\lambda,$ $C_0$ and $C$ being positive constants.
Then $\forall (x,v)$ there exists a solution to equations (\ref{ch}) on $[0, T]$ such that $X_i(t)\in \Gamma^c\setminus \partial\Gamma.$ Moreover there exist positive constants $C'$ and ${\lambda}'$ such that \begin{equation} 0\leq f_i(x,v,t)\leq C'e^{- \lambda' |v|^{q}}g(|x|). \label{dec} \end{equation} This solution is unique in the class of those satisfying (\ref{dec}). \label{th_03} \end{theorem} The proof will be discussed in the next section. We remark that the lower bound on $\alpha$ in (\ref{Ga}) is due to technical reasons, i.e. to control the tail of the velocity distribution in the iterative method of subsection \ref{subs_conv}, while the upper bound is needed to allow for infinite mass. The border case $\alpha=3$ should be considered separately, since some estimates change slightly (such as (\ref{Q^})), but it is simpler than $\alpha<3$ and will not be investigated explicitly. \bigskip \noindent The Vlasov-Poisson equation has been widely studied in several papers. The problem of existence and uniqueness of the solution in ${\mathbb{R}}^3$, for $L^1$ data, has been completely solved for many years (see \cite{L, Pf, S,W} and a nice review of many such results in \cite{G}). For the one-species case, spatial densities not belonging to $L^1$ have also been studied, to show that the results do not depend on the finiteness of the total mass. In particular, a system with a simplified bounded interaction was first investigated in \cite{CCMP}, and more recently the Coulomb (or Coulomb-like) interaction in \cite{CCM2, Rem, R3,inf,CCM16}. Other papers related to these problems, from many different perspectives, are \cite{Ca, Ch, ChZ, DMS, GJP, J, Lo, MMP_11, Pa1, Pa2, P, Sal09, S1, S2, S3}, to quote some of them. In particular, in some papers the confinement of the plasma by a singular magnetic field is discussed \cite {CCM12,CCM2,Rem, CCM15, inf}.
In \cite {CCM12} and \cite {CCM15} we have discussed the confinement in an infinite cylinder and in a torus of a plasma with bounded total mass and velocities, in \cite {CCM2,Rem} the confinement in an infinite cylinder of a plasma again with bounded velocities but now with infinite mass, while \cite {inf} deals with the confinement in an infinite cylinder of a plasma with infinite mass and unbounded velocities. \noindent In the present paper we discuss the opposite problem, that is, how to prevent an external plasma from entering a fixed region by means of a singular magnetic field. Thus, we combine the problems of confinement and infinite mass in the whole of $\mathbb{R}^3$ while, in the preceding articles \cite {CCM2,Rem} and \cite {inf}, we dealt separately with confined plasmas in a cylinder (an almost one-dimensional system) or free plasmas in the space. For this reason we will borrow some of the techniques already used in both cases, adapting them to the present problem and repeating some of them to make the present paper self-contained. On the other hand, we discuss in more detail some aspects that we have not yet investigated. \noindent The novelties with respect to the preceding literature consist of the following aspects. First, in Theorem \ref{th_02} we study a system with species of different charge signs. The analysis is done only in the case of finite total mass, since we are not able to deal with infinite masses, due to the lack of some properties of the local energy (\ref{e}). Secondly, in Theorem \ref{th_03} we deal with a one-species infinite-mass plasma with a curved obstacle, whose geometry leads to some difficulties, in particular in the proof of the estimate of the electric field, which is given in Proposition \ref{prop_1}. \noindent The plan of the paper is the following: in Section 2 we prove Theorem \ref{th_02} and Theorem \ref{th_03}, Section \ref{General} is devoted to some generalizations, and the Appendix contains some technical proofs.
\section{Proofs} \bigskip \subsection{Some comments on the Theorems} In Theorem \ref{th_02} the $n$ species have charges of different sign, and this difficulty forces us to consider initial data with finite total mass, while in Theorem \ref{th_03} the $n$ species have charges of the same sign, and this simplification allows us to consider initial data with infinite mass. Moreover, in this second case there is no substantial difference between $n$ species and a single one, so we will treat explicitly the case of a single species. In both cases the strategy of the proof relies on the introduction of a truncated dynamics, called \textit{partial dynamics}, in which we assume that the initial data $f_{i,0}$ have compact support in the velocities, that is, initially $|v|\leq N$. We study this truncated system and then let $N\to \infty$. \bigskip We begin with a remark on the case considered in Theorem \ref{th_02}. We put $$ f(x,v,t)=\sum_{i=1}^n f_i(x,v,t) \qquad \textnormal{and} \qquad \rho(x,t) = \sum_{i=1}^n \sigma_i \, \rho_i(x,t). $$ The total energy of the plasma is \begin{equation} \frac12 \int d x \int d v \, |v|^2 f( x, v,t) +\frac12 \int dx \, \rho( x,t)\int dy \, \frac{ \rho ( y,t)}{ | x- y|} = C, \label{finite_energy} \end{equation} where the first integral is the kinetic energy and the second one the potential energy. The total energy is finite (because the total mass is finite) and it is a constant of the motion. We recall the well-known fact that $$ \int d x \int dy \, \rho(x,t) \, \rho ( y,t) \frac{1}{| x- y|} = \int dx \, |E(x, t)|^2, $$ and hence the potential energy is positive, in spite of the fact that the spatial density $\rho$ is not of definite sign. We put $$ \Phi(x,t) = \int dy \, \rho ( y,t) \frac{1}{| x- y|}.
$$ Since $\rho(x,t) = \textnormal{div} E(x,t)$, we have $$ \int d x \int dy \, \rho(x,t) \, \rho ( y,t) \frac{1}{| x- y|} = \int dx \, \Phi (x,t) \, \textnormal{div} E(x,t) = $$ $$ \int dx \big[ \textnormal{div}(E(x,t) \Phi(x,t)) - E(x,t) \cdot \nabla \Phi(x,t) \big] = $$ $$ \int dx \, \textnormal{div}(E(x,t) \Phi(x,t)) + \int dx \, |E(x,t)|^2. $$ The first integral vanishes, as can be seen by performing the integral over a ball of radius $R$ and letting $R\to \infty$. Indeed, by the Gauss theorem, $$ \int dx \, \textnormal{div}(E(x,t) \Phi(x,t))=\int (E(x,t) \Phi(x,t)) \cdot \hat n \, dS $$ where the right-hand integral is taken over the surface of the ball. If $\rho\in L^1$, $|E|$ decays as $\frac{1}{|x|^2}$ for large $|x|$, and $\Phi$ as $\frac{1}{|x|}$. The surface area grows as $R^2$, hence the integral goes to zero. Consequently eq. (\ref{finite_energy}) becomes \begin{equation} \frac12 \int d x \int d v \, |v|^2 f( x, v,t) +\frac12 \int dx \, |E(x,t)|^2 = C, \end{equation} and hence both the potential and the kinetic energy are bounded and positive during the motion. This consideration allows us to treat the case considered in Theorem \ref{th_02} (species with charges of different sign) along the same lines as Theorem \ref{th_03}. The latter theorem allows for infinite mass, which is out of reach in the case of Theorem \ref{th_02}, since we are not able to prove Proposition \ref{propo} (strongly based on the positivity of the potential), which is the key tool to control an infinite mass. Therefore in the sequel we study explicitly Theorem \ref{th_03}, as it is the more complicated case.
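For completeness, the decay rates just quoted make the vanishing of the flux term quantitative: on the sphere of radius $R$, assuming as above $|E|\leq C|x|^{-2}$ and $|\Phi|\leq C|x|^{-1}$ for large $|x|$,

```latex
\left|\int_{\partial B_R}(E\,\Phi)\cdot\hat n\, dS\right|
\;\leq\; 4\pi R^{2}\sup_{|x|=R}|E(x,t)|\,|\Phi(x,t)|
\;\leq\; \frac{C}{R}\;\longrightarrow\;0
\qquad (R\to\infty).
```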
During the proof we will see that the proof of Theorem \ref{th_02} follows from that of Theorem \ref{th_03}, since we can bound the absolute value of the electric field acting on a characteristic by the sum of the absolute values of the electric fields produced by the different species; moreover, in several points (as (\ref{K}), (\ref{5/3}), (\ref{i5}), and the omitted proof of Lemma \ref{Lem_1}) we need to bound the kinetic energy of the plasma, which in the case of Theorem \ref{th_02} is simply bounded by a constant, as seen above. \subsection{Partial dynamics and local energy in case of Theorem \ref{th_03}} In view of the above, we consider from now on the case of species with charges of the same sign. Moreover, the number of species will be taken equal to one, since all the main bounds proved in the sequel, such as the fundamental estimate of the electric field (\ref{E_field}), are affected by the number of species only through a factor growing linearly in $n$, which is finite. Hence from now on we drop the species index $i=1,\dots, n$. Besides system (\ref{ch}) we introduce a modified differential system, called \textit{partial dynamics}, in which the initial density has compact support in the velocities.
More precisely, for any positive integer $N,$ we consider the following equations: \begin{equation} \label{chN} \begin{cases} \displaystyle \dot{X}^N(t)= V^N(t)\\ \displaystyle \dot{V}^N(t)= E^N(X^N(t),t) + V^N(t) \wedge B(X^N(t)) \\ \displaystyle ( X^N(t'), V^N(t'))=(x,v) \\ \displaystyle f^N(X^N(t),V^N(t),t)=f^N_0(x,v), \end{cases} \end{equation} where \begin{equation*} \left(X^N(t),V^N(t)\right)= \left(X^N(t,t',x,v),V^N(t,t',x,v)\right) \end{equation*} \begin{equation*} E^N(x,t)=\int \frac{x-y}{|x-y|^3} \ \rho^N(y,t) \, dy \end{equation*} \begin{equation*} \rho^N(x,t)=\int f^N(x,v,t) \, dv, \end{equation*} and the initial distribution $f_0^N$ is defined as \begin{equation} f^N_0(x,v)=f_0(x,v)\chi\left(|v|\leq N\right) \label{ic} \end{equation} with $\chi\left(\cdot\right)$ the characteristic function of the set $\left(\cdot\right)$. \bigskip For a single species, the existence of the solution to (\ref{chN}) in case the spatial density satisfies $\rho(x, 0)\leq C |x|^{-\alpha},$ with $\alpha$ in a suitable range allowing for infinite mass, follows from the analysis already carried out in \cite{CCM2} and \cite{Rem}. In what follows some positive constants will appear, generally denoted by $C$ (possibly changing from line to line). All of them depend exclusively on the initial data and on a time $T$, arbitrarily fixed once and for all, but not on $N$; any dependence on $N$ in the estimates will be explicitly indicated. \bigskip We introduce the {\textit{local energy}}, which is a fundamental tool to deal with the infinite charge of the plasma.
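The partial dynamics (\ref{chN}) is a system of characteristic ODEs. As a purely illustrative sketch (not part of the proof, with hypothetical prescribed fields in place of the self-consistent ones), the following Python snippet integrates a single characteristic with a standard Boris-type step and checks numerically that the magnetic term alone leaves the speed $|V^N|$ unchanged, a fact used below in the proof of Corollary \ref{coro}:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def boris_step(x, v, E, B, dt):
    # Half electric kick, magnetic rotation, half electric kick.
    vm = tuple(v[i] + 0.5*dt*E[i] for i in range(3))
    t  = tuple(0.5*dt*B[i] for i in range(3))
    t2 = sum(ti*ti for ti in t)
    vp = tuple(vm[i] + cross(vm, t)[i] for i in range(3))
    vr = tuple(vm[i] + (2.0/(1.0 + t2))*cross(vp, t)[i] for i in range(3))
    vn = tuple(vr[i] + 0.5*dt*E[i] for i in range(3))
    xn = tuple(x[i] + dt*vn[i] for i in range(3))
    return xn, vn

# Hypothetical fields: no electric field, uniform magnetic field.
E = (0.0, 0.0, 0.0)
B = (0.0, 0.0, 1.0)
x, v = (0.0, 0.0, 0.0), (1.0, 0.5, 0.2)
speed0 = math.sqrt(sum(c*c for c in v))
for _ in range(1000):
    x, v = boris_step(x, v, E, B, dt=0.01)
speed = math.sqrt(sum(c*c for c in v))
# The magnetic rotation is norm-preserving, so the speed is unchanged.
print(abs(speed - speed0) < 1e-9)
```

The Boris rotation preserves $|v|$ exactly in exact arithmetic, mirroring the identity $\frac{d}{dt}\frac{|V^N|^2}{2}=V^N\cdot E^N$ used in the proof of (\ref{V^N}).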
For any vector $\mu\in \mathbb{R}^3$ and any $R>1$ we define the function: \begin{equation} \varphi^{\mu,R}( x)=\varphi\Bigg(\frac{| x-\mu|}{R}\Bigg) \label{a} \end{equation} with $\varphi$ a smooth function such that: \begin{equation} \varphi(r)=1 \ \ \hbox{if} \ \ r\in[0,1] \label{b} \end{equation} \begin{equation} \varphi(r)=0 \ \ \hbox{if} \ \ r\in[2,+\infty) \label{c} \end{equation} \begin{equation} -2\leq \varphi'(r)\leq 0.\label{d} \end{equation} We define the local energy as: \begin{equation} \begin{split} & W^N( \mu,R,t)=\\&\frac12 \int_{\mathbb{R}^3\setminus \Gamma} d x\, \varphi^{\mu,R}( x)\left[\int d v \ |v|^2 f^N( x, v,t)\,+ \rho^N( x,t)\int_{\mathbb{R}^3\setminus \Gamma} dy \ \frac{ \rho^N ( y,t)}{ | x- y|}\right]. \end{split} \label{e} \end{equation} Whether $W^N$ is bounded or not depends on the properties of $f^N$. For the moment, we stress that the local energy takes into account the complete interaction of the plasma contained in the sphere of center $\mu$ and radius $2R$ with the rest of the plasma. We set \begin{equation} Q^N(R,t)=\sup_{\mu\in {\mathbb {R}}^3 }W^N( \mu,R,t).\label{Q} \end{equation} From the definition of the local energy it is easy to prove (see for instance \cite{CCM16}) the following. \begin{lemma} In the hypotheses of Theorem \ref{th_03}, $\forall R>0$ it holds \begin{equation} Q^N(R,0)\leq CR^{3-\alpha}.
\label{Q^} \end{equation} \label{Qbound} \end{lemma} Let us introduce the following functions, for any $t\in [0,T]:$ \begin{equation} {\mathcal{V}}^N(t)=\max \left\{ \widetilde C, \sup_{s\in [0,t]}\sup_{(x,v)\in \mathbb{R}^3 \times b(N)}|V^N(x,v,s)| \right\}, \label{mv} \end{equation} \begin{equation} R^N(t)= 1 +\int_0^t {\mathcal{V}}^N(s)\, ds, \label{RN} \end{equation} where $\widetilde{C}>1$ is a positive constant chosen suitably large for technical reasons, and $b(N)$ is the ball in $\mathbb{R}^3$ of center $0$ and radius $N.$ \medskip The main result on the local energy is stated in the following Proposition, whose proof is contained in \cite{CCM16} and not repeated here. \begin{proposition} In the hypotheses of Theorem \ref{th_03}, for any $ t\in [0,T]$ it holds \begin{equation*} Q^N(R^N(t), t)\leq CQ^N(R^N(t),0). \end{equation*} \label{propo} \end{proposition} \subsection{Preliminary results}\label{subs_res} As said before, we consider a single species, since the reasoning below can be repeated for each of the $n$ species. \bigskip We want to remove the cut-off $N$ on the velocities, by showing that the velocity of any characteristic can be bounded over the time interval $[0,T]$ independently of $N.$ A naive approach would be to look for a bound on the electric field valid for any time. Unfortunately we are not able to do this; we can only bound the time average of the electric field acting on a characteristic, as we will show in Propositions \ref{prop_1} and \ref{media}, and this will be enough to achieve the result. Such estimates improve on the analogous ones obtained in \cite{CCM15}. \noindent A fundamental tool in the proof of Theorem \ref{th_03} is the following estimate on the time integral of the electric field, obtained along the same lines as \cite{CCM16}.
\begin{proposition} In the hypotheses of Theorem \ref{th_02} (for $\alpha>3$) and Theorem \ref{th_03} (for $\alpha\leq 3$) respectively, we have \begin{equation} \int_0^t |E^N(X^N(s), s)|\, ds \leq C \mathcal{A}(\mathcal{V}^N(t)), \label{E_field} \end{equation} where \begin{equation} \mathcal{A}(\mathcal{V}^N(t))= \left\{ \begin{aligned} &C \left[\mathcal{V}^N(t) \right]^{\frac67} \log^2 \mathcal{V}^N(t) \qquad \textit{for} \,\,\,\,\alpha>3 \\ &C \left[\mathcal{V}^N(t)\right]^{\frac67} \log^3 \mathcal{V}^N(t) \qquad \textit{for} \,\,\,\,\alpha=3 \\ &C \left[\mathcal{V}^N(t)\right]^{\gamma} \log^2 \mathcal{V}^N(t) \qquad \textit{for} \,\,\,\,\frac{8}{3}<\alpha<3 \end{aligned} \right. \end{equation} and $\gamma = \frac{15}{7} - \frac{3}{7}\alpha$. \label{prop_1} \end{proposition} \noindent We give the proof in the more complicated case $\frac{8}{3}<\alpha<3$, the other cases being simpler. The exponent $\gamma$ above is needed in order to deal with the exponential tail (\ref{dec}) of the velocity distribution. Proposition \ref{prop_1} is a direct consequence of the following: \begin{proposition} Setting \begin{equation*} \langle E^N \rangle_{{\Delta}} := \frac{1}{{\Delta}} \int_{t}^{t+{\Delta}} |E^N(X(s), s)| \, ds, \end{equation*} there exists a positive number ${\Delta},$ depending on $N,$ such that: \begin{equation} \langle E^N \rangle_{{\Delta}} \leq C {\cal{V}}^N(T)^{\gamma} \log^2({\mathcal{V}}^N(T)), \label{averna} \end{equation} for any $t\in [0,T]$ such that $t\leq T- \Delta$. \label{media} \end{proposition} \medskip We observe that Proposition \ref{media} is sufficient to obtain Proposition \ref{prop_1}: it is enough to divide the interval $[0,T]$ into $n$ subintervals $[t_{i-1},t_{i}]$, $i=1,...,n$, with $t_0=0$, $t_n=T,$ such that ${\Delta}/2\leq t_{i}-t_{i-1}\leq {\Delta},$ and to apply Proposition \ref{media} on each of them. We postpone the proof of Proposition \ref{media} to the Appendix.
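As a quick numerical consistency check of the exponents (not part of the proof), one can verify that $\gamma=\frac{15}{7}-\frac37\alpha$ satisfies $\frac67<\gamma<1$ for $\alpha\in(\frac83,3)$, so that $\gamma+1<2$ (used in the proof of Corollary \ref{coro}), that it matches the exponent $\frac67$ continuously at $\alpha=3$, and that $\frac{7-\alpha}{3}<3\gamma$ (used in the estimate of $\mathcal{G}_1'$ below):

```python
# Sanity check of the exponents appearing in Proposition prop_1.
def gamma(alpha):
    return 15.0/7.0 - 3.0*alpha/7.0

alphas = [8.0/3.0 + k*(3.0 - 8.0/3.0)/200.0 for k in range(1, 200)]
for a in alphas:
    g = gamma(a)
    assert 6.0/7.0 < g < 1.0          # hence gamma + 1 < 2
    assert (7.0 - a)/3.0 < 3.0*g      # used for the G_1' bound
assert abs(gamma(3.0) - 6.0/7.0) < 1e-14  # continuity at alpha = 3
print("exponent checks passed")
```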
As a consequence of Proposition \ref{prop_1}, the following holds: \begin{corollary} \begin{equation} {\mathcal{V}}^N(T) \leq C N \label{V^N} \end{equation} \begin{equation} \rho^N(x, t) \leq C N^{3\gamma} \log^6 N \label{rho_t} \end{equation} \begin{equation} Q^N(R^N(t), t)\leq CN^{3-\alpha} \label{Q^N} \end{equation} \begin{equation} |B(X^N(t))|\leq CN^{\frac{2(\tau+1)}{\tau}}, \label{B} \end{equation} where $\gamma$ is the exponent in Proposition \ref{prop_1} and $\tau$ the one in eq. (\ref{a(r)}). \label{coro}\end{corollary} \begin{proof} To prove (\ref{V^N}) we observe that the external magnetic force does not affect the modulus of the particle velocity, since \begin{equation} \frac{d}{dt}\frac{|V^N(t)|^2}{2}=V^N(t) \cdot E^N(X^N(t), t). \label{magn} \end{equation} Hence \begin{equation} |V^N(t)|^2=|v|^2+2\int_0^tV^N(s) \cdot E^N(X^N(s), s)\,ds.\label{v^2} \end{equation} This fact, by Proposition \ref{prop_1} and the choice of initial data with $|v|\leq N,$ implies \begin{equation} |V^N(t)|^2 \leq N^2+C \left[{\mathcal{V}}^N(t)\right]^{\gamma+1} \log^2({\mathcal{V}}^N(t)). \end{equation} Hence, since $\gamma+1<2,$ by taking the $\sup_{t\in [0,T]}$ we obtain the claim. Now we prove (\ref{rho_t}). Putting $$ (\bar{x}, \bar{v})=\left(X^N(x,v,t),V^N(x,v,t)\right),$$ and using the invariance of the density along the characteristics, we have \begin{equation*} \rho^N(\bar{x}, t) =\int f^N(\bar{x}, \bar{v},t)\, d\bar{v}=\int f^N_0\left(x(\bar x, \bar v, t),v(\bar x, \bar v, t)\right)\, d\bar{v}. \end{equation*} We notice that, putting $$ \widetilde{V}^N(t)=\sup_{0\leq s\leq t}|V^N(s)|, $$ from (\ref{v^2}), Proposition \ref{prop_1} and (\ref{V^N}) it follows, for some positive constant $\bar{C}$, \begin{equation} \begin{split} |v|^2\geq |V^N(t)|^2- \bar{C} \widetilde{V}^N(t) N^{\gamma}\log^2 N.
\end{split}\label{VN} \end{equation} Hence, we decompose the integral as follows (writing for conciseness $x, v$ in place of $x(\bar x, \bar v, t), v(\bar x, \bar v, t)$), \begin{equation} \begin{split} &\rho^N(\bar{x}, t)\leq\\&\int_{|\bar v|\leq 2\bar{C} N^{\gamma}\log^2 N} f_0(x,v)\ d\bar{v}+C \int_{|\bar v|> 2\bar{C} N^{\gamma}\log^2 N} e^{-\lambda |v|^2}\ d\bar{v}\\& \leq CN^{3\gamma}\log^6 N+ C \int_{|\bar v|> 2\bar{C} N^{\gamma}\log^2 N} e^{-\lambda |v|^2}\ d\bar{v}. \label{f_0} \end{split} \end{equation} Since by definition $|\bar{v}|\leq \widetilde{V}^N(t)$, if $\widetilde{V}^N(t)\geq |\bar v| > 2\bar{C} N^{\gamma}\log^2 N$, then by (\ref{VN}), $$|v|^2\geq |V^N(t)|^2-\frac{[ \widetilde{V}^N(t)]^2}{2}.$$ Since the inequality holds for any $t\in [0,T],$ it holds also at the time at which $V^N$ reaches its maximal value over $[0,t],$ that is $$ |v|^2 \geq [\widetilde{V}^N(t)]^2-\frac{[ \widetilde{V}^N(t)]^2}{2}\geq \frac{| {V}^N(t)|^2}{2}=\frac{|\bar v|^2}{2}.$$ Hence from (\ref{f_0}) it follows \begin{equation} \begin{split} &\rho^N(\bar{x}, t)\leq CN^{3\gamma}\log^6 N +C \int e^{-\frac{\lambda}{2}|\bar{v}|^2}\ d\bar{v} \leq CN^{3\gamma}\log^6 N.\end{split} \end{equation} To obtain (\ref{Q^N}) we simply combine (\ref{V^N}) with Lemma \ref{Qbound} and Proposition \ref{propo}. We prove (\ref{B}) by an argument analogous to the one used in \cite{CCM15}.
\noindent We consider the time evolution of a single characteristic of the plasma in toroidal coordinates (see (\ref{tor_coor})), in particular the second equation of that system, \begin{equation} (R+r \cos\alpha) \ddot \theta + 2 \, \dot r \, \dot \theta \cos\alpha -2 \, \dot\alpha \, \dot\theta \, r \sin\alpha = E_\theta - \frac{a'(r) \, \dot r}{R+r \cos\alpha}, \end{equation} which, after multiplying by $(R+r \cos\alpha)$, becomes (following the notation in (\ref{tor_vel})), \begin{equation} a'(r) \dot r = -(R+r \cos\alpha)^2 \ddot \theta - 2 v_r v_\theta \cos\alpha + 2 v_\alpha v_\theta \sin\alpha + (R+r \cos\alpha) E_\theta. \label{confin} \end{equation} Integrating in time (\ref{confin}), for the left hand side we have \begin{equation} \int_0^t a'(r(s)) \frac{dr}{ds} \, ds = a(r(t)) - a(r(0)), \label{lhs} \end{equation} while for the right hand side we get \begin{eqnarray} &&\int_0^t \left( -(R+r \cos\alpha)^2 \ddot \theta - 2 v_r v_\theta \cos\alpha + 2 v_\alpha v_\theta \sin\alpha + (R+r \cos\alpha) E_\theta \right) ds \nonumber \\ &&= -\left[ (R+r \cos\alpha)^2 \dot \theta \right]_0^t +\int_0^t \left( 2 \dot\theta (R+r \cos\alpha) (\dot r \cos\alpha - r \dot\alpha \sin\alpha) \right) ds \, \nonumber \\ &&\,\,\,\,\,\,+ \int_0^t \left( -2 v_r v_\theta \cos\alpha + 2 v_\alpha v_\theta \sin\alpha + (R+r \cos\alpha) E_\theta \right) ds. \label{rhs} \end{eqnarray} As a by-product we have the impenetrability of the region $\Gamma$ for the partial dynamics, since (\ref{lhs}) diverges to $\infty$ for $r\to r_0^+$ (border of the torus), while (\ref{rhs}) stays finite. We have indeed, by the bound on the maximum velocity (\ref{V^N}), and by the a priori estimate on the electric field (\ref{C1}) (together with (\ref{Q^N})), $$ a(r(t)) \leq a(r(0)) + C N + C N^2. 
$$ Since $a(r(0))$ depends only on the initial conditions, and is finite by the assumption of Theorems \ref{th_02} and \ref{th_03} (that the distance between $\Lambda_i$ and $\Gamma$ is greater than $d_0>0$), by (\ref{mag.f.}) and the choice of the function $a(r)$ in (\ref{a(r)}) we obtain (\ref{B}). \end{proof} \bigskip Before going into the proof of Theorem \ref{th_03} we give some technical estimates, whose proofs can be found in \cite{CCM16}. \begin{proposition} \begin{equation} \|E^N(t)\|_{L^\infty}\leq C {\mathcal{V}}^N(t)^{\frac43}Q^N(R^N(t),t)^{\frac13}. \label{C1} \end{equation} \label{prop2}\end{proposition} \begin{lemma} \label{lemR'/R} For any $\mu\in \mathbb{R}^3$ and for any pair of positive numbers $R<R'$ we have: $$ W^N(\mu,R',t)\leq C \left(\frac{R'}{R}\right)^3 Q^N(R,t). $$ \end{lemma} \begin{lemma} For any $\mu\in \mathbb{R}^3$ and $R>0$ it holds \begin{equation*} \int_{1\leq |\mu-x|\leq R}\frac{\rho^N(x,t)}{|\mu-x|^2}\,dx\leq CQ^N(R,t)^{\frac35}. \end{equation*} \label{Lem_1} \end{lemma} \begin{lemma} Let $R^N(t)$ be defined in (\ref{RN}). Then for any $\mu\in \mathbb{R}^3$ it holds \begin{equation} \int_{3R^N(t)\leq |\mu-x| }\frac{\rho^N(x,t)}{|\mu-x|^2}\,dx\leq C. \end{equation}\label{Lem_1'} \end{lemma} \subsection{Convergence of the partial dynamics as $N \to \infty$ and proof of Theorem \ref{th_03}} \label{subs_conv} We now start the proof of Theorem \ref{th_03}.
We recall that $\Lambda$ is the spatial support of $f_0$ and $b(N)$ is the ball in $\mathbb{R}^3$ of center $0$ and radius $N.$ We fix a pair $(x,v)\in \Lambda\times b(N)$, and we consider $X^N(t)$ and $X^{N+1}(t),$ that is, the time-evolved characteristics, both starting from this initial condition, in the two dynamics corresponding to the initial distributions $f_0^N$ and $f_0^{N+1}.$ We set \begin{equation} \delta^N(t) =\sup_{(x,v)\in \Lambda\times b(N)} |X^N(t)-X^{N+1}(t)| \label{delta^N} \end{equation} \begin{equation} \eta^N(t)=\sup_{(x,v)\in \Lambda\times b(N)} |V^N(t)-V^{N+1}(t)| \end{equation} and \begin{equation} \sigma^N(t)=\delta^N(t)+\eta^N(t). \end{equation} Our goal is to prove the following estimate \begin{equation} \sigma^N(t)\leq C^{-N^c} \label{fff} \end{equation} with $c$ a positive exponent. Once we obtain this estimate, the proof of the Theorem is complete. Indeed, (\ref{fff}) implies that the sequences $X^N(t)$ and $V^N(t)$ are Cauchy sequences, uniformly on $[0,T].$ Therefore, for any fixed $(x,v)$ they converge to limit functions which we call $X(t)$ and $ V(t).$ The proof of property (\ref{dec}), and the facts that the solution is unique and satisfies system (\ref{ch}), follow directly from the analysis in Section 3 of \cite{CCM16}. \bigskip $\mathbf{Proof\, of \,estimate\, (\ref{fff})}$.
We have \begin{equation} \begin{split} &|X^N(t)-X^{N+1}(t)| =\\& \, \bigg| \int_0^t dt_1 \int_0^{t_1} dt_2 \, \Big[E^N\left(X^N(t_2), t_2\right) +V^N(t_2)\wedge B\left(X^{N}(t_2)\right) \\ & - E^{N+1}\left(X^{N+1}(t_2), t_2\right) -V^{N+1}(t_2)\wedge B\left(X^{N+1}(t_2)\right) \Big] \bigg| \leq \\ &\int_0^t dt_1 \int_0^{t_1} dt_2 \, \left[ \mathcal{G}_1(x,v,t_2) + \mathcal{G}_2(x,v,t_2) + \mathcal{G}_3(x,v,t_2)\right] \end{split} \label{iter_} \end{equation} where \begin{equation} \mathcal{G}_1(x,v,t) = \left|E^N\left(X^N(t), t\right) -E^N\left(X^{N+1}(t), t\right)\right| \end{equation} \begin{equation} \mathcal{G}_2(x,v,t) = \left|E^N\left(X^{N+1}(t), t\right) -E^{N+1}\left(X^{N+1}(t), t\right)\right| \end{equation} and \begin{equation} \mathcal{G}_3(x,v,t) = \left|V^N(t)\wedge B\left(X^N(t)\right) -V^{N+1}(t)\wedge B\left(X^{N+1}(t)\right)\right|. \end{equation} To estimate the first term $\mathcal{G}_1$ we have to prove a quasi-Lipschitz property of $E^N.$ We consider the difference $|E^N(x,t)-E^N(y,t)|$ at two generic points $x$ and $y$ (which in our case will be $X^N(t)$ and $X^{N+1}(t)$) and set: \medskip \begin{equation*} \mathcal{G}_1'(x,y,t)= |E^N(x,t)-E^N(y,t)|\chi (|x-y|\geq 1) \end{equation*} and \begin{equation*} \mathcal{G}_1''(x,y,t)= |E^N(x,t)-E^N(y,t)|\chi (|x-y|< 1). \end{equation*} Hence \begin{equation} \mathcal{G}_1(x,v,t)=\mathcal{G}_1'(X^N(t),X^{N+1}(t),t)+\mathcal{G}_1''(X^N(t),X^{N+1}(t),t). \label{F}\end{equation} \medskip By Proposition \ref{prop2} and Corollary \ref{coro} we have \begin{equation} \begin{split} \mathcal{G}_1'(x,y,t)\leq&\ |E^N(x,t)|+|E^N(y,t)|\leq C N^{\frac{7-\alpha}{3}}. \end{split}\label{b0} \end{equation} Since $\alpha\in(\frac83, 3)$, we have $\frac{7-\alpha}{3}<3\gamma$, so that, being $|x-y|\geq 1$ on the support of $\mathcal{G}_1'$, \begin{equation} \begin{split} \mathcal{G}_1'(x,y,t)\leq CN^{3\gamma}\leq CN^{3\gamma}|x-y|. \label{b1} \end{split} \end{equation} \medskip Let us now estimate the term $\mathcal{G}_1''$.
We put $\zeta=\frac{x+y}{2}$ and consider the sets: \begin{equation} \begin{split} &S_1=\{z:|\zeta-z|\leq 2|x-y|\}, \\& S_2=\Big\{z:2|x-y|\leq |\zeta-z|\leq\frac{4}{|x-y|}\Big\},\\& S_3=\Big\{z:|\zeta-z|\geq \frac{4}{|x-y|}\Big\}.\label{S'} \end{split} \end{equation} We have: \begin{equation} \begin{split} &\mathcal{G}_1''(x,y,t)\leq \int_{S_1\cup S_2\cup S_3} \left|\frac{x-z}{|x-z|^3}-\frac{y-z}{|y-z|^3}\right| \rho^N(z,t) \,dz. \end{split} \label{s} \end{equation} By (\ref{rho_t}) and the definition of $\zeta$ we have \begin{equation} \begin{split} &\int_{S_1} \left| \frac{x-z}{|x-z|^3}-\frac{y-z}{|y-z|^3}\right| \rho^N(z,t) \,dz\leq\\& CN^{3\gamma} \log^6 N \int_{S_1} \left|\frac{1}{|x-z|^2}+\frac{1}{|y-z|^2} \right|\,dz\leq C N^{3\gamma} \log^6 N |x-y|. \end{split} \label{Ai1}\end{equation} Let us pass to the integral over the set $S_2.$ By the Lagrange theorem applied to each component $i=1, 2, 3$ of $E^N(x,t)-E^N(y,t)$, there exists $\xi_z=\kappa x +(1-\kappa)y$, with $\kappa\in [0,1]$ depending on $z$, for which \begin{equation} \begin{split} \int_{S_2} \left|\frac{x_i-z_i}{|x-z|^3}-\frac{y_i-z_i}{|y-z|^3}\right|& \rho^N(z,t) dz\leq C |x-y|\int_{S_2} \frac{\rho^N(z,t)}{|\xi_z-z|^3}\,dz\\&\leq CN^{3\gamma}\log^6 N |x-y|\int_{S_2} \frac{1}{|\xi_z-z|^3}\,dz. \end{split}\label{D'} \end{equation} Since in $S_2$ we have $|\xi_z-z|\geq \frac{|\zeta-z|}{2}$, it follows that $$ \int_{S_2} \frac{1}{|\xi_z-z|^3}\,dz \leq 8\int_{S_2} \frac{1}{|\zeta-z|^3}\,dz \leq C \left(|\log |x-y||+1\right) $$ \noindent and combining the results for the three components \begin{equation} \begin{split} &\int_{S_2} \left|\frac{x-z}{|x-z|^3}-\frac{y-z}{|y-z|^3}\right| \rho^N(z,t) \,dz\leq CN^{3\gamma} \log^6 N |x-y|\, \left(|\log |x-y||+1\right).
\label{D} \end{split} \end{equation} For the last integral, over $S_3,$ again by the Lagrange theorem we have, for some $\xi_z=\kappa x +(1-\kappa)y$ with $\kappa\in [0,1],$ \begin{equation} \int_{ S_3} \left|\frac{x_i-z_i}{|x-z|^3}-\frac{y_i-z_i}{|y-z|^3}\right| \rho^N(z,t)\, dz\leq C |x-y| \int_{ S_3} \frac{\rho^N(z,t)}{|\xi_z-z|^3}\,dz.\label{S} \end{equation} Notice that, since $z\in S_3$ and $|x-y|<1$ by definition of $\mathcal{G}_1''$, we have $$ |\xi_z - z|\geq |\zeta - z| - |x-y|, $$ and, since $\frac14|\zeta-z|\geq |x-y|$ on $S_3$, $$ \frac{1}{|\xi_z - z|} \leq \frac{1}{|\zeta - z| - |x-y|} = \frac{1}{\frac34|\zeta - z| +\frac14 |\zeta-z|- |x-y|} \leq \frac43 \frac{1}{|\zeta-z|}, $$ so that \begin{equation} \begin{split} \int_{ S_3} \frac{\rho^N(z,t)}{|\xi_z-z|^3}\,dz \leq C\int_{|\zeta-z|\geq 4 } \frac{\rho^N(z,t)}{|\zeta-z|^3}\,dz. \end{split} \end{equation} Lemmas \ref{lemR'/R}, \ref{Lem_1} and \ref{Lem_1'} imply \begin{equation} \int_{|\zeta-z|\geq 4 } \frac{\rho^N(z,t)}{|\zeta-z|^3}\,dz\leq CQ^N(R^N(t),t)^{\frac35} + C \end{equation} and by (\ref{Q^N}) we get \begin{equation} \int_{|\zeta-z|\geq 4} \frac{\rho^N(z,t)}{|\zeta-z|^3}\,dz\leq CN^{\frac{9-3\alpha}{5}}. \end{equation} Using this estimate in (\ref{S}) and going back to (\ref{s}), by (\ref{Ai1}) and (\ref{D}) we have proven that \begin{equation} \mathcal{G}_1''(x,y,t)\leq CN^{3\gamma} \log^6 N |x-y| \, \left(|\log |x-y||+1\right). \label{f1} \end{equation} \bigskip In conclusion, recalling (\ref{F}), estimates (\ref{b1}) and (\ref{f1}), and the definition (\ref{delta^N}) of $\delta^N(t)$, we obtain \begin{equation} \mathcal{G}_1(x,v,t)\leq CN^{3\gamma} \log^6 N \left(1+|\log \delta^N(t)|\right)\delta^N(t). \label{f1'} \end{equation} \medskip We now concentrate on the term $\mathcal{G}_2$.
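\noindent{\it Remark.} The logarithmic factor in (\ref{f1}) originates from the elementary radial integral over the annulus $S_2$; in spherical coordinates centered at $\zeta$,

```latex
\int_{2|x-y|\leq|\zeta-z|\leq\frac{4}{|x-y|}}\frac{dz}{|\zeta-z|^{3}}
=4\pi\int_{2|x-y|}^{4/|x-y|}\frac{dr}{r}
=4\pi\log\frac{2}{|x-y|^{2}}
\leq C\left(|\log|x-y||+1\right),
```

since $|x-y|<1$ on the support of $\mathcal{G}_1''$.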
Putting $\bar{X}=X^{N+1}(t),$ we have: \begin{equation} \mathcal{G}_2(x,v,t)\leq \mathcal{G}_2'(\bar X,t)+\mathcal{G}_2''(\bar X,t), \label{I_2} \end{equation} where \begin{equation} \mathcal{G}_2'(\bar X,t)=\left|\int_{|\bar X - y|\leq 2\delta^N(t)} \frac{ \rho^N(y,t)-\rho^{N+1}(y,t)}{|\bar X-y|^2} \, dy \right| \end{equation} \begin{equation} \mathcal{G}_2''(\bar X,t)=\left|\int_{2\delta^N(t)\leq |\bar X - y| } \frac{ \rho^N(y,t)-\rho^{N+1}(y,t)}{|\bar X-y|^2} \, dy \right|. \end{equation} By estimate (\ref{rho_t}) we get \begin{equation} \mathcal{G}_2'(\bar X,t)\leq \int_{|\bar X - y|\leq 2\delta^N(t)} \frac{ \rho^N(y,t)+\rho^{N+1}(y,t)}{|\bar X-y|^2} \, dy \leq CN^{3\gamma}\log^6 N \, \delta^N(t).\label{F2'} \end{equation} Now we estimate the term $\mathcal{G}_2''.$ By the invariance of $f^N(t)$ along the characteristics, making a change of variables, we decompose the integral as follows: \begin{equation} \begin{split} \mathcal{G}_2''(\bar X,t)= \left| \int_{S^N(t)} d y \, dw\,\frac{f_0^N( y, w)}{|\bar X- Y^N(t)|^2}\ -\int_{S^{N+1}(t)}dy \, dw\,\frac{f_0^{N+1}(y,w)}{|\bar X-Y^{N+1}(t)|^{2}} \right| \end{split} \label{I_2''} \end{equation} where we put, for $i=N, N+1,$ $$(Y^i(t), W^i(t))= (X^i(y,w,t), V^i(y,w,t))$$ $$ S^{i}(t)=\{( y, w):2\delta^N(t)\leq|{\bar X}- Y^i(t)| \}. $$ We notice that it is \begin{equation} \mathcal{G}_2''(\bar X,t)\leq \mathcal{I}_1+\mathcal{I}_2+\mathcal{I}_3\label{l_i} \end{equation} where \begin{equation} \mathcal{I}_1= \int_{S^N(t)} d y \int d w \left| \frac{1}{|\bar X - Y^N(t)|^{2}} -\frac{1}{|\bar X - Y^{N+1}(t)|^{2}} \right| f_0^N( y,w), \end{equation} \begin{equation} \mathcal{I}_2= \int_{S^{N+1}(t)} d y \int d w \, \frac{ \left| f_0^N( y,w) - f_0^{N+1}( y, w) \right|}{|\bar X - Y^{N+1}(t)|^{2}} \, \end{equation} \begin{equation} \mathcal{I}_3= \int_{S^N(t)\setminus S^{N+1}(t)}dy\int dw\,\frac{f_0^N(y,w)}{\left|\bar{X}-Y^{N+1}(t)\right|^2}. 
\end{equation} We start by estimating $\mathcal{I}_1.$ By the Lagrange theorem \begin{equation} \mathcal{I}_1\leq C |Y^{N}(t)- Y^{N+1}(t)| \int_{S^N(t)} dy\int dw \, \frac{ f_0^N(y,w)}{|\bar X - \xi^N(t) |^{3}} \end{equation} where $\xi^N(t)= \kappa Y^{N}(t)+ (1-\kappa) Y^{N+1}(t)$ for a suitable $\kappa \in [0, 1]$. By putting \begin{equation*} \left(\bar{y}, \bar{w}\right)= \left(Y^N(t), W^N(t)\right) \end{equation*} and $$ {\bar{S}}^N(t)=\left\{ (\bar y, \bar w): (y, w)\in S^N(t) \right\}, $$ we get \begin{equation} \begin{split} \mathcal{I}_1\leq & \, C \delta^N(t)\int_{ {\bar{S}}^N(t)}d\bar{y}\int d\bar{w} \, \frac{ f^N(\bar{y},\bar{w}, t)}{|\bar X - \xi^N(t) |^{3}}.\end{split}\label{bar} \end{equation} If $(y,w)\in S^N(t)$ then \begin{equation*} |\bar X - \xi^N(t)| > |\bar X - \bar{y}| - |\bar{y} -Y^{N+1}(t)|\geq|\bar X - \bar{y}|-\delta^N(t)\geq \frac{|\bar X - \bar{y}|}{2},\label{>>} \end{equation*} which, by (\ref{bar}), implies \begin{equation} \begin{split} \mathcal{I}_1 \leq \ &C \delta^N(t)\int_{ \bar S^N(t)}d\bar{y}\int d\bar{w} \, \frac{ f^N(\bar{y},\bar{w}, t)}{|\bar X - \bar{y} |^{3}}\,=C \delta^N(t)\int_{\bar S^N(t)}d\bar{y}\, \frac{ \rho^N(\bar{y}, t)}{|\bar X - \bar{y} |^{3}}. \end{split} \end{equation} Now we consider the sets $$ A_1= \left\{(\bar y,\bar w):2 \delta^N(t)\leq\left|\bar{X}-\bar{y}\right|\leq \frac{4}{\delta^N(t)}\right\} $$ $$ A_2= \left\{(\bar y,\bar w): 1\leq\left|\bar{X}-\bar{y}\right|\leq 3R^N(t)\right\} $$ $$ A_3= \left\{(\bar y, \bar w):3R^N(t)\leq |\bar{X}-\bar{y}|\right\}. $$ Then $\bar S^N(t)\subset \bigcup_{i=1,2,3} A_i$ and \begin{equation} \mathcal{I}_1 \leq C \delta^N(t)\sum_{i=1}^3\int_{ A_i}d\bar{y}\frac{ \rho^N(\bar{y}, t)}{|\bar X - \bar{y} |^{3}}.
\end{equation} We estimate the integral over $A_1$ as we did in (\ref{D'}), the one over $A_2$ by means of Lemma \ref{Lem_1} and the last one by means of Lemma \ref{Lem_1'}, yielding \begin{equation} \begin{split} \mathcal{I}_1 \leq C\delta^N(t)\left[N^{3\gamma}\log^6 N \, |\log \delta^N(t)|+Q^N\left(3R^N(t),t\right)^{\frac35}+1\right]. \end{split}\label{i1} \end{equation} Lemma \ref{lemR'/R} implies that \begin{equation} \mathcal{I}_1 \leq C \delta^N(t)\left[N^{3\gamma}\log^6 N \, |\log \delta^N(t)|+Q^N\left(R^N(t),t\right)^{\frac35}+1\right] \end{equation} and by (\ref{Q^N}) in Corollary \ref{coro} we get \begin{equation} \begin{split} \mathcal{I}_1 \leq CN^{3\gamma} \log^6 N \, \delta^N(t)\left[|\log \delta^N(t)|+1\right]\end{split}.\label{i1'} \end{equation} We estimate now the term $\mathcal{I}_2.$ By the definition of $f_0^i,$ for $i=N, N+1,$ and by (\ref{Ga}) it follows \begin{equation} \begin{split} &\mathcal{I}_2 = \int_{S^{N+1}(t)} d y \int_{} d w\, \frac{ f_0^{N+1}( y, w) }{|\bar X - Y^{N+1}(t)|^{2}} \chi\left(N\leq |w|\leq N+1\right) \leq\\& C\,e^{-\lambda N^q}\int_{S^{N+1}(t)} dy \int_{} dw\,\, \frac{g(y) }{|\bar X - Y^{N+1}(t)|^{2}} \chi\left(|w|\leq N+1\right). \end{split} \label{AA} \end{equation} We evaluate the integral over $S^{N+1}(t)$ by considering the sets $$ B_1= \left\{(y,w): |\bar{X}-Y^{N+1}(t)|\leq4R^{N+1}(t)\right\} $$ $$ B_2= \left\{(y,w):4R^{N+1}(t)\leq |\bar{X}-Y^{N+1}(t)|\right\}, $$ so that \begin{equation} S^{N+1}(t)\subset \bigcup_{i=1,2}B_i. 
\label{si} \end{equation} By putting \begin{equation*} (\bar{y}, \bar{w})=\left(Y^{N+1}(t), W^{N+1}(t)\right), \end{equation*} by (\ref{V^N}) we have $|\bar{w}|\leq CN,$ so that \begin{equation} \begin{split} &\int_{B_{1}} dy \int_{} dw\, \frac{g(y) }{|\bar X - Y^{N+1}(t)|^{2}}\chi\left(|w|\leq N+1\right)\leq\\&C \int_{|\bar{w}|\leq C N} d\bar{w}\int_{ |\bar{X}-\bar{y}|\leq 4R^{N+1}(t)} d\bar{y}\, \frac{1 }{|\bar X - \bar{y}|^{2}}\leq CN^3R^{N+1}(t)\leq CN^4 .\label{g5} \end{split}\end{equation} For the integral over the set $B_2,$ we observe that, if $|\bar{X}-Y^{N+1}(t)|\geq 4R^{N+1}(t),$ then $|\bar{X}-y|\geq 3R^{N+1}(t)$ and \begin{equation*} |\bar{X}-Y^{N+1}(t)|\geq |\bar{X}-y|-R^{N+1}(t)\geq \frac23|\bar{X}-y|. \end{equation*} Hence \begin{equation} \begin{split} &\int_{B_2 } dy \int_{} dw\, \frac{g(y) }{|\bar X - Y^{N+1}(t)|^{2}}\chi\left(|w|\leq N+1\right)\leq \\&\frac94\int_{ |\bar{X}-y|\geq 3R^{N+1}(t)} dy \int_{|w|\leq N+1} dw\, \frac{g(y) }{|\bar X -y|^{2}}\leq\\& CN^3\int_{|\bar{X}-y|\geq 3R^{N+1}(t)} dy\, \frac{g(y) }{|\bar X -y|^{2}}. \end{split}\end{equation} The last integral is bounded by a constant, by Lemma \ref{Lem_1'}. Hence we conclude that \begin{equation} \int_{B_2 } dy \int_{} dw\, \frac{g(y) }{|\bar X - Y^{N+1}(t)|^{2}}\chi\left(|w|\leq N+1\right)\leq CN^3.\label{g6} \end{equation} Going back to (\ref{AA}), by (\ref{si}), (\ref{g5}) and (\ref{g6}) we have \begin{equation} \mathcal{I}_2\leq Ce^{-\lambda N^q}\left(N^4+N^3\right)\leq Ce^{-\frac{\lambda}{2} N^q}. \label{I} \end{equation} Finally, we estimate the term $\mathcal{I}_3.$ If $(y,w)\in S^N(t),$ then $$ \left|\bar{X}-Y^{N+1}(t)\right|\geq \left|\bar{X}-Y^{N}(t)\right| -\delta^N(t)\geq \delta^N(t), $$ so that \begin{equation} \begin{split} & \mathcal{I}_3 = \int_{S^N(t)\setminus S^{N+1}(t)}dy\int dw\,\frac{f_0^N(y,w)}{\left|\bar{X}-Y^{N+1}(t)\right|^2}\leq\\& \frac{1}{[\delta^N(t)]^2} \int_{\left(S^{N+1}(t)\right)^c}dy\int dw\,f_0^N(y,w).
\end{split} \end{equation} If $(y,w)\in \left(S^{N+1}(t)\right)^c,$ then $$\left|\bar{X}-Y^{N}(t)\right|\leq \left|\bar{X}-Y^{N+1}(t)\right|+ \delta^N(t)\leq 3\delta^N(t).$$ Hence, by putting \begin{equation*} (\bar{y}, \bar{w}) = \left(Y^N(t), W^N(t)\right) \end{equation*} we get \begin{equation} \begin{split} \mathcal{I}_3\leq & \,\frac{1}{[\delta^N(t)]^2} \int_{\left|\bar{X}-\bar{y}\right|\leq 3\delta^N(t)} d\bar{y}\int d\bar{w}f^N(\bar{y}, \bar{w}, t)\leq\\&\frac{1}{[\delta^N(t)]^2} \int_{ \left|\bar{X}-\bar{y}\right|\leq 3\delta^N(t)}d\bar{y}\,\rho^N(\bar{y},t)\leq \\& C\frac{N^{3\gamma}\log^6 N}{[\delta^N(t)]^2} \int_{ \left|\bar{X}-\bar{y}\right|\leq 3\delta^N(t)}d\bar{y}\,\leq CN^{3\gamma}\log^6 N\, \delta^N(t).\end{split}\label{I3} \end{equation} Going back to (\ref{l_i}), by (\ref{i1'}), (\ref{I}) and (\ref{I3}) we have \begin{equation} \mathcal{G}_2''\leq C\left[N^{3\gamma}\log^6 N \, \delta^N(t)\left(|\log \delta^N(t)|+1\right)+e^{-\frac{\lambda}{2} N^q}\right], \label{F2''} \end{equation} so that, by (\ref{I_2}), (\ref{F2'}) and (\ref{F2''}), we get \begin{equation} \mathcal{G}_2\leq C\left[N^{3\gamma}\log^6 N \, \delta^N(t)\left(|\log \delta^N(t)|+1\right)+e^{-\frac{\lambda}{2} N^q}\right].\label{f} \end{equation} Hence, going back to the estimate (\ref{f1'}) of the term $\mathcal{G}_1$, we have \begin{equation} \mathcal{G}_1+\mathcal{G}_2\leq C\left[N^{3\gamma} \log^6 N \, \delta^N(t)\left(|\log \delta^N(t)|+1\right)+e^{-\frac{\lambda}{2} N^q}\right].\label{g'} \end{equation} We now need an elementary inequality: for any $s\in (0,1)$ and $a\in (0,1)$ it holds $$ s(|\log s|+1)\leq s|\log a|+a. $$ Indeed, the function $g(s)=s(1-\log s)+s\log a-a$ satisfies $g'(s)=\log (a/s)$, so $g$ attains its maximum at $s=a$, where $g(a)=0$. Hence, for $\delta^N(t)< 1,$ we have, for any $a<1,$ $$ \mathcal{G}_1+\mathcal{G}_2\leq C\left[N^{3\gamma} \log^6 N \, \left( \delta^N(t)|\log a|+a\right)+e^{-\frac{\lambda}{2} N^q}\right].
$$ Therefore, in case $\delta^N(t)< 1,$ we choose $a =e^{-\lambda N^q}$, yielding \begin{equation} \mathcal{G}_1+\mathcal{G}_2\leq C\left[N^{3\gamma+q}\log^6 N \, \delta^N(t)+e^{-\frac{\lambda}{2} N^q}\right].\label{fdue} \end{equation}
If $\delta^N(t)\geq1,$ we come back to (\ref{g'}) (valid for any $\delta^N(t)$) and insert the bound $\delta^N(t)\leq C N$, obtaining $$ \mathcal{G}_1+\mathcal{G}_2\leq C\left[N^{3\gamma} \log^7 N \, \delta^N(t)+e^{-\frac{\lambda}{2} N^q}\right], $$ which can be included in (\ref{fdue}).
Finally, we estimate the term $\mathcal{G}_3.$ We have \begin{equation} \begin{split} \mathcal{G}_3(x,v,t) \leq &\ |V^N(t)|\,| B(X^{N}(t))-B(X^{N+1}(t))|+\\& \ |V^N(t)-V^{N+1}(t)|\,| B(X^{N+1}(t))|. \end{split}\end{equation}
By applying the mean value theorem we have \begin{equation*} | B(X^{N}(t))-B(X^{N+1}(t))|\leq C\frac{|X^{N}(t)-X^{N+1}(t)|}{|r_0- r_\xi |^{\tau+2}} \end{equation*} where $r_\xi$ is the $r$-coordinate (in toroidal coordinates) of a point $\xi^N(t)$ of the segment joining $X^N(t)$ and $X^{N+1}(t).$ Due to the bound (\ref{B}), it must be that $|r_0-r_\xi|\geq \frac{C}{N^{2/\tau}}.$ Hence \begin{equation} | B(X^{N}(t))-B(X^{N+1}(t))|\leq CN^{\frac{2(\tau +2)}{\tau}}\delta^N(t). 
\end{equation}
This, together with the bounds (\ref{V^N}) and (\ref{B}), implies \begin{equation} \begin{split} \mathcal{G}_3(x,v,t)& \leq \,C\left[N^{\frac{3\tau+4}{\tau }} \delta^N(t)+N^{\frac{2(\tau+1)}{\tau}}\eta^N(t)\right]. \label{F3} \end{split} \end{equation}
At this point, going back to (\ref{iter_}) and taking the supremum over the set $\{(x,v)\in \Lambda\times b(N)\},$ by (\ref{fdue}) and (\ref{F3}) we arrive at: \begin{equation} \begin{split} \delta^N(t) \leq \,&C\left(N^{3\gamma+q}\log^6 N +N^{\frac{3\tau+4}{\tau }}\right) \int_0^t dt_1 \int_0^{t_1} dt_2\,\delta^N(t_2)+\\ &C \int_0^t dt_1 \int_0^{t_1} dt_2\,e^{-\frac{\lambda}{2} N^q}+ CN^{\frac{2(\tau+1)}{\tau}}\int_0^t dt_1 \int_0^{t_1} dt_2 \ \eta^N(t_2),\label{dd} \end{split} \end{equation} where in (\ref{f1'}) we have taken into account the bound (\ref{V^N}), which gives $\delta^N(t)\leq CN.$
On the other hand, estimating the quantity $\eta^N(t)$ by the same method, we analogously get \begin{equation} \begin{split} \eta^N(t) \leq \,&C\left(N^{3\gamma+q}\log^6 N +N^{\frac{3\tau+4}{\tau }}\right) \int_0^t dt_1 \,\delta^N(t_1)+\\ &C \int_0^t dt_1 \,e^{-\frac{\lambda}{2} N^q}+ CN^{\frac{2(\tau+1)}{\tau}}\int_0^t dt_1 \, \eta^N(t_1). \end{split} \end{equation}
Since obviously $\delta^N(t_1)\leq \int_0^{t_1}dt_2\,\eta^N(t_2),$ from the last equation we get \begin{equation} \begin{split} &\eta^N(t) \leq C\left(N^{3\gamma+q}\log^6 N+N^{\frac{3\tau+4}{\tau }}\right) \int_0^t dt_1 \int_0^{t_1} dt_2\, \eta^N(t_2)+C \int_0^t dt_1\,e^{-\frac{\lambda}{2} N^q}\\&+CN^{\frac{2(\tau+1)}{\tau}}\int_0^t dt_1 \ \eta^N(t_1).\label{ee} \end{split} \end{equation}
Hence, summing up (\ref{dd}) and (\ref{ee}), we have: \begin{equation} \begin{split} \sigma^N(t)\leq \,&C\left(N^{3\gamma+q}\log^6 N+N^{\frac{3\tau+4}{\tau }}\right) \int_0^t dt_1 \int_0^{t_1} dt_2\,\sigma^N(t_2)+\\&CN^{\frac{2(\tau+1)}{\tau}}\int_0^t dt_1 \ \sigma^N(t_1)+C \left(t+\frac{t^2}{2}\right)e^{-\frac{\lambda}{2} N^q}. 
\end{split}\label{d+e} \end{equation}
We take \begin{equation} \nu > \frac{3\gamma+q}{2}; \label{nu} \end{equation} moreover, by Proposition \ref{prop_1} we have $\frac67<\gamma<1$, and by (\ref{Ga}) we have $q>\frac{18}{7}$, hence choosing $\tau > \frac72$ we get \begin{equation} \begin{split} \sigma^N(t)\leq \,C&\Bigg[N^{2\nu} \int_0^t dt_1 \int_0^{t_1} dt_2\,\sigma^N(t_2)+N^{\nu}\int_0^t dt_1 \ \sigma^N(t_1)+\\& \left(t+\frac{t^2}{2}\right)e^{-\frac{\lambda}{2} N^q}\Bigg]. \end{split}\label{d'} \end{equation}
We insert the same inequality for $\sigma^N(t_1)$ and $\sigma^N(t_2)$ into the integrals and iterate in time, up to $k$ iterations. By direct inspection, using in the last step the estimate $\sup_{t\in [0,T]}\sigma^N(t)\leq CN,$ we arrive at \begin{equation} \begin{split} \sigma^N(t)\leq &\,CNe^{-\frac{\lambda}{2} N^q}\sum_{i=1}^{k-1}C^i\sum_{j=0}^{i}\binom{i}{j}\frac{N^{2\nu j}t^{2j}N^{\nu (i-j)}t^{i-j}}{(2j+i-j)!} +\\&C^kN \sum_{j=0}^{k}\binom{k}{j}\frac{N^{2\nu j}t^{2j}N^{\nu (k-j)}t^{k-j}}{(2j+k-j)!}.\end{split} \label{double_sum} \end{equation}
In the double sum of (\ref{double_sum}) we have all combinations of $j$ double time integrations of $e^{-\frac{\lambda}{2}N^q}$, yielding the factor $N^{2\nu j} t^{2 j}$ and the contribution $2j$ to the factorial in the denominator, and of ($i-j$) single time integrations of $e^{-\frac{\lambda}{2}N^q}$, yielding the factor $N^{\nu (i-j)} t^{i-j}$ and the contribution ($i-j$) to the factorial in the denominator; the last single sum in (\ref{double_sum}) has the same structure, coming from terms of the iteration which avoid the integration of $e^{-\frac{\lambda}{2}N^q}$. Note that in the absence of the middle term (single time integration) in the right-hand side of (\ref{d'}) we would obtain $$ \sigma^N(t) \leq N C^k N^{2\nu k} \frac{t^{2k}}{(2k)!} + N e^{-\frac{\lambda}{2}N^q} \sum_{i=1}^{k-1} C^i N^{2\nu i} \frac{t^{2 i}}{(2 i)!} $$ as in the case of \cite{CCM16}. 
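For illustration, a single insertion of (\ref{d'}) into itself reads, schematically (writing for brevity, in this remark only, $A=CN^{2\nu}$, $B=CN^{\nu}$ and $E(t)=C\left(t+\frac{t^2}{2}\right)e^{-\frac{\lambda}{2} N^q}$):
\begin{equation*}
\sigma^N(t)\leq A\int_0^t dt_1\int_0^{t_1} dt_2\left[A\iint\sigma^N+B\int\sigma^N+E\right]+B\int_0^t dt_1\left[A\iint\sigma^N+B\int\sigma^N+E\right]+E(t),
\end{equation*}
so that after $k$ iterations each term carries $j$ factors $A$ (two time integrations each) and $i-j$ factors $B$ (one time integration each), which produces exactly the binomial combinations of double and single time integrations described above.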
By putting \begin{equation*} S_k''=\sum_{i=1}^{k-1}C^i\sum_{j=0}^{i}\binom{i}{j}\frac{N^{\nu (i+j)}t^{i+j}}{(i+j)!} \end{equation*} and \begin{equation*} S_k'=C^k\sum_{j=0}^{k}\binom{k}{j}\frac{N^{\nu (j+k)}t^{j+k}}{(j+k)!} \end{equation*} we get \begin{equation} \sigma^N(t)\leq \,CNe^{-\frac{\lambda}{2} N^q}S''_k+CNS_k'.\label{summ} \end{equation}
We start by estimating $S''_k.$ Recalling that $\binom{i}{j}<2^i,$ we get \begin{equation} \begin{split} S''_k\leq &\,\sum_{i=1}^{k-1}2^i C^i\sum_{j=0}^{i}\frac{N^{\nu (i+j)}t^{i+j}}{(i+j)!}. \end{split} \end{equation}
Using the Stirling formula $a^nn^n\leq n!\leq b^nn^n$, valid for some constants $a,b>0$, we obtain: \begin{equation} S''_k\leq \,\sum_{i=1}^{k-1}2^i\sum_{j=0}^{i}\frac{N^{\nu(i+j)}(Ct)^{i+j}}{(i+j)^{i+j}}\leq \sum_{i=1}^{k-1}2^i\frac{N^{\nu i}(Ct)^{i}}{i^{i}} \sum_{j=0}^{i}\frac{N^{\nu j}(Ct)^{j}}{j^{j}}, \end{equation} from which it follows, again by the Stirling formula and since $t\in[0,T]$, \begin{equation} S''_k\leq \left(e^{N^{\nu}Ct}\right)^2\leq e^{N^{\nu}C}.\label{sum'} \end{equation}
For the term $S_k',$ putting $j+k=\ell,$ we get \begin{equation} S_k'\leq 2^k C^k\sum_{\ell=k}^{2k}\frac{N^{\ell\nu}(Ct)^{\ell}}{\ell^\ell}\leq C^kk\frac{N^{2k\nu}}{k^k}. \end{equation}
By choosing $k=N^\zeta$ with $\zeta> 2\nu$, we have, for sufficiently large $N$, \begin{equation} S_k'\leq C^kk\left(k^{\frac{2\nu}{\zeta}-1}\right)^k\leq C^{-N^\zeta}. \label{ffinal} \end{equation}
Going back to (\ref{summ}), by (\ref{sum'}) and (\ref{ffinal}) we obtain \begin{equation} \sigma^N(t)\leq CN\left[e^{-\frac{\lambda}{2} N^q}e^{N^{\nu}C}+C^{-N^ \zeta}\right]. \end{equation}
Hence, by the choice $$ q> \frac{45}{7} - \frac97 \alpha \qquad \textnormal{and} \qquad \frac{3\gamma+q}{2}<\nu<q $$ we have proven estimate (\ref{fff}). 
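Let us make explicit the role of the condition $\nu<q$ in the last bound: since $\nu<q$, for $N$ large enough we have $CN^{\nu}\leq \frac{\lambda}{4}N^{q}$, so that
\begin{equation*}
e^{-\frac{\lambda}{2} N^q}\,e^{N^{\nu}C}\leq e^{-\frac{\lambda}{4} N^{q}},
\end{equation*}
and hence both terms in the bound for $\sigma^N(t)$ vanish, as $N\to\infty$, faster than any inverse power of $N$.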
\bigskip \bigskip
\section{Generalizations} \label{General}
Theorems \ref{th_02} and \ref{th_03} can also be proven under a weaker hypothesis on the initial data, that is, replacing conditions (\ref{asp1}) and (\ref{asp}) with the following: \begin{equation} \int_{|i-x|\leq 1}dx \, g(|x|) \leq \frac{C}{|i|^\alpha} \qquad \textnormal{for any} \,\,\,\, i\in {\mathbb{Z}}^3 \setminus \{0\}, \label{decay_den} \end{equation} with $\alpha$ in the respective intervals of Theorems \ref{th_02} and \ref{th_03}. This assumption has already been made in \cite{CCM16}, and it can be satisfied when the spatial density $\rho(x,0)$ has a suitable decay at infinity (as in (\ref{asp1}) and (\ref{asp})), but also whenever $\rho(x,0)$ is constant, or has an oscillatory character, provided it has suitable support properties. Hence hypothesis (\ref{decay_den}) allows for spatial densities which possibly do not belong to any $L^p$ space. Moreover, the initial data, both in positions and velocities, need not have spherical symmetry, provided that they satisfy the hypotheses of Theorems \ref{th_02} and \ref{th_03}.
We want to emphasize that when the magnetic lines are straight lines (a condition that forces the region $\Gamma$ to be unbounded), the previous results can be improved. In this article we have considered a region $\Gamma$ given by a torus, and we had to deal with curved magnetic lines, which considerably complicates the problem. Indeed, a particle of velocity $v$ that remains close to the boundary of $\Gamma$ feels not only the electric force and the magnetic force (which does not increase its speed), but also a centrifugal force (proportional to $|v|^2$) much greater than the electric force, which considerably affects the dynamics. In case the magnetic lines are straight lines this last force is absent, and we can obtain stronger results. In particular, we can extend the initial data to include a Gaussian (Maxwell--Boltzmann) decay in the initial velocities. 
A region $\Gamma$ for which this analysis can be performed is for example an unbounded cylinder, \begin{equation} \Gamma = \{x \in \mathbb{R}^3: r<A \} \qquad \qquad r=\sqrt{x_2^2+x_3^2} \end{equation} where the magnetic field can be chosen as in \cite{inf}, \begin{equation} B(x)= (h(r^2),0,0) \qquad \qquad h(r^2) = \frac{1}{(A^2-r^2)^\tau} \end{equation} or a half-space $\Gamma=\{ x\in \mathbb{R}^3: x_1>0\}$, with a magnetic field of the form \begin{equation} B(x)= (0,0,h(x_1)) \qquad \qquad h(x_1) = \frac{1}{x_1^\tau} \end{equation} and a suitable $\tau>0$. \bigskip \bigskip \bigskip \noindent {\textbf{Acknowledgements}} \,\, Work performed under the auspices of GNFM-INDAM and the Italian Ministry of the University (MIUR). \bigskip \bigskip \bigskip \section*{Appendix}
\section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
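For instance, a single author defined in the separate-definition form could look as follows (the name, institution, address, and e-mail below are placeholders, not a prescribed format):
\begin{verbatim}
\author{Ben Trovato}
\affiliation{%
  \institution{Institute for Clarity in Documentation}
  \city{Dublin}
  \country{Ireland}}
\email{trovato@corporation.com}
\end{verbatim}
Repeating this pattern once per author ensures that each author is associated with the correct affiliation in the extracted metadata.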
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize}
Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works.
The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search.
The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented.
CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands.
Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
\section{Math Equations}
You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{The 1907 Franklin Model D roadster.}
\end{figure}
Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{figure caption}
  \Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|'' suffix, of the \BibTeX\ file. 
Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. 
\end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmsmall]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. 
The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. 
The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. 
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.

CCS concepts and user-defined keywords are required for all articles
over two pages in length, and are optional for one- and two-page
articles (or abstracts).

\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.

Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}

\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.

Table captions are placed {\itshape above} the table.

Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.

Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
  \caption{Frequency of Special Characters}
  \label{tab:freq}
  \begin{tabular}{ccl}
    \toprule
    Non-English or Math&Frequency&Comments\\
    \midrule
    \O & 1 in 1,000& For Swedish names\\
    $\pi$ & 1 in 5& Common in math\\
    \$ & 4 in 5 & Used in business\\
    $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
    \bottomrule
  \end{tabular}
\end{table}

To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more desirable.

Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.

\begin{table*}
  \caption{Some Typical Commands}
  \label{tab:commands}
  \begin{tabular}{ccl}
    \toprule
    Command &A Number & Comments\\
    \midrule
    \texttt{{\char'134}author} & 100& Author \\
    \texttt{{\char'134}table}& 300 & For tables\\
    \texttt{{\char'134}table*}& 400& For wider tables\\
    \bottomrule
  \end{tabular}
\end{table*}

\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three is
discussed in the next sections.

\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context.
Notice how this equation:
\begin{math}
  \lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when set in
display style. (See next section).

\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.

Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
  \lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in the
\textbf{displaymath} environment. Now, we'll enter an unnumbered
equation:
\begin{displaymath}
  \sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
  \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.

\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
  \centering
  \includegraphics[width=\linewidth]{sample-franklin}
  \caption{1907 Franklin Model D roadster. Photograph by Harris \&
    Ewing, Inc. [Public domain], via Wikimedia
    Commons. (\url{https://goo.gl/VLCRBB}).}
  \Description{The 1907 Franklin Model D roadster.}
\end{figure}

Your figures should contain a caption which describes the figure to
the reader. Figure captions are placed {\itshape below} the figure.

Your figures should {\bfseries also} include a description suitable
for screen readers, to assist the visually-challenged to better
understand your work.
\subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. 
The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. 
\end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigchi-a]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. 
If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. 
Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). 
\section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{margintable} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{margintable} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. 
Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. 
First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{marginfigure} \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{marginfigure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. 
If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. 
\section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. 
A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf,authordraft]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. 
The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. 
This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. 
\section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}.
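As a concrete starting point, the skeleton below sketches a minimal \verb|acmart| source file; the style option, title, author, and affiliation are placeholders to be replaced with your own details:

\begin{verbatim}
\documentclass[acmsmall]{acmart}

\begin{document}

\title{A Placeholder Title}

\author{An Example Author}
\affiliation{%
  \institution{Example University}
  \city{Example City}
  \country{Example Country}}

\maketitle

% Body of the article goes here.

\end{document}
\end{verbatim}

The sections that follow describe how to choose the style option and fill in each of these elements.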
\subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmlarge]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
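For example, each author can be defined separately, with an affiliation and e-mail address attached to that author (all names and addresses below are placeholders; the affiliation is repeated for each author who shares it):

\begin{verbatim}
\author{Valerie Example}
\email{valerie@example.edu}
\affiliation{%
  \institution{Example University}
  \city{Springfield}
  \country{USA}}

\author{David Example}
\email{david@example.edu}
\affiliation{%
  \institution{Example University}
  \city{Springfield}
  \country{USA}}
\end{verbatim}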
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections.
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. 
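The references themselves are kept in a \BibTeX\ database (\verb|.bib|) file. A typical entry, with the complete author name recommended above, might look like the following (the citation key and field values are illustrative):

\begin{verbatim}
@book{Knuth:TAOCP1,
  author    = {Donald E. Knuth},
  title     = {The Art of Computer Programming, Volume 1:
               Fundamental Algorithms},
  publisher = {Addison-Wesley},
  edition   = {3rd},
  year      = {1997}
}
\end{verbatim}

A work is then cited in the text with \verb|\cite{Knuth:TAOCP1}|, and \BibTeX\ formats the entry in the ACM Reference Format automatically.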
Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. 
\end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmsmall]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. 
The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. 
The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. 
User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. 
\begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. 
Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. 
\subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. 
\subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
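For example, a single author with one affiliation and an e-mail address might be defined as follows; the name, institution, and address are illustrative placeholders:
\begin{verbatim}
\author{Ada Lovelace}
\affiliation{%
  \institution{University of Example}
  \city{London}
  \country{United Kingdom}}
\email{ada.lovelace@example.edu}
\end{verbatim}
Repeat this set of commands for each author of the work, even when authors share an affiliation.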
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{equation} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. 
Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. 
\end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmtog]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. 
The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. 
The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. 
User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. 
\begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. 
Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. 
\subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. 
The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. 
\end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigchi]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. 
If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. 
Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}

Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts).

\section{CCS Concepts and User-Defined Keywords}

Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source.

User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts).
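As a sketch of what the generated commands look like (the concept and keywords below are illustrative placeholders, not necessarily the ones for this article), the declarations take the following form in the preamble:

\begin{verbatim}
\ccsdesc[500]{Computing methodologies~Activity
  recognition and understanding}
\keywords{action quality assessment, sports
  video analysis, datasets}
\end{verbatim}

The CCS tool also generates a matching \verb|CCSXML| block, which should be pasted into the source alongside the \verb|\ccsdesc| command it was generated with.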
\section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. 
Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document.

\begin{table*}
  \caption{Some Typical Commands}
  \label{tab:commands}
  \begin{tabular}{ccl}
    \toprule
    Command & A Number & Comments\\
    \midrule
    \texttt{{\char'134}author} & 100 & Author\\
    \texttt{{\char'134}table} & 300 & For tables\\
    \texttt{{\char'134}table*} & 400 & For wider tables\\
    \bottomrule
  \end{tabular}
\end{table*}

\section{Math Equations}

You may want to display math equations in three distinct styles: inline, numbered display, and non-numbered display. Each of the three is discussed in the next sections.

\subsection{Inline (In-text) Equations}

A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See the next section.)

\subsection{Display Equations}

A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context.
First, consider the equation, shown as an inline equation above:
\begin{equation}
  \lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in the \textbf{equation} environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
  \sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
  \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.

\section{Figures}

The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.
\begin{figure}[h]
  \centering
  \includegraphics[width=\linewidth]{sample-franklin}
  \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
  \Description{The 1907 Franklin Model D roadster.}
\end{figure}

Your figures should contain a caption which describes the figure to the reader; figure captions are placed {\itshape below} the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work.

\subsection{The ``Teaser Figure''}

A ``teaser figure'' is an image, or set of images in one figure, that is placed after all author and affiliation information, and before the body of the article, spanning the page.
If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{figure caption}
  \Description{figure description}
\end{teaserfigure}
\end{verbatim}

\section{Citations and Bibliographies}

The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.

The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|'' suffix, of the \BibTeX\ file.
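To make the reference metadata described above concrete, a \BibTeX\ entry with complete author names and the full set of identifying features might look like the following (a hypothetical entry, for illustration only):

\begin{verbatim}
@article{Lastname2021Example,
  author  = {Ada M. Lastname and Grace B. Coauthor},
  title   = {Full Title of the Example Article},
  journal = {Example Journal Name},
  year    = {2021},
  volume  = {12},
  number  = {3},
  pages   = {45--67},
  doi     = {10.1145/0000000.0000000}
}
\end{verbatim}

Entries of this form give the ``ACM-Reference-Format'' bibliography style everything it needs to render complete, consistent references.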
\section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. 
A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. 
The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. 
This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. 
Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. 
\begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. 
Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. 
Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. 
\subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmsmall]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation:
\begin{math}
  \lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when set in display style. (See next section.)

\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment.

Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above:
\begin{equation}
  \lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in the \textbf{equation} environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
  \sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
  \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.

\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.
\begin{figure}[h]
  \centering
  \includegraphics[width=\linewidth]{sample-franklin}
  \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
  \Description{The 1907 Franklin Model D roadster.}
\end{figure}

Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work.

\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that is placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{figure caption}
  \Description{figure description}
\end{teaserfigure}
\end{verbatim}

\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.

The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{sample-base}
\end{verbatim}
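To make the recommended level of reference completeness concrete, here is a sketch of a \BibTeX\ entry; the citation key and every field value below are hypothetical placeholders, not a real reference:
\begin{verbatim}
@article{Lastname:2024,
  author  = {Ann B. Lastname and Carol D. Coauthor},
  title   = {A Hypothetical Article Title},
  journal = {Journal Name},
  year    = {2024},
  volume  = {1},
  number  = {2},
  pages   = {10--20},
  doi     = {10.1145/0000000.0000000}
}
\end{verbatim}
Such an entry is then cited in the running text with \verb|\cite{Lastname:2024}|, and the ``ACM-Reference-Format'' bibliography style takes care of the formatting.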
Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. 
\end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigchi-a]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. 
The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. 
The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. 
User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. 
\begin{margintable} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{margintable} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. 
Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{marginfigure} \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{marginfigure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. 
Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. 
The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. 
\end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf,authordraft]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. 
If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. 
Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). 
\section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. 
\section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source.
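For orientation, a minimal article skeleton using this class might look like the following sketch; every name, address, and file name here is an invented placeholder, not a prescribed value:
\begin{verbatim}
\documentclass[acmlarge]{acmart}
\begin{document}
\title{A Placeholder Title}
\author{Ada Lovelace}
\affiliation{%
  \institution{Example University}
  \city{London}
  \country{United Kingdom}}
\email{ada@example.edu}
\maketitle
The body of the article goes here.
\bibliographystyle{ACM-Reference-Format}
\bibliography{refs}
\end{document}
\end{verbatim}
The template styles and parameters described next control how this same source is rendered for different publications.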
\section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. 
A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmlarge]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. 
The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. 
This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. 
Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. 
\begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment.
Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. 
Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. 
\subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmsmall]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. 
Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. 
\end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[manuscript,screen]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. 
The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. 
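Putting the commands above together, a single author definition might look like the following sketch (the name, address, and affiliation fields are illustrative placeholders, not real contact details):
\begin{verbatim}
\author{Brooke Aster}
\email{brooke.aster@university.edu}
\affiliation{%
  \institution{University of Example}
  \city{Springfield}
  \country{USA}}
\end{verbatim}
Repeat this block, one per author, so that each author's metadata is captured separately.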
The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. 
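The commands generated by the classification tool are pasted into the preamble, before \verb|\maketitle|. They take roughly the following form; the concept shown and its numeric identifier are illustrative placeholders, not a real classifier entry --- always use the exact output of the tool:
\begin{verbatim}
\begin{CCSXML}
<ccs2012>
  <concept>
    <concept_id>00000000.0000000</concept_id>
    <concept_desc>Computer systems organization~Embedded systems
    </concept_desc>
    <concept_significance>500</concept_significance>
  </concept>
</ccs2012>
\end{CCSXML}
\ccsdesc[500]{Computer systems organization~Embedded systems}
\end{verbatim}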
User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented.

CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts).

\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands.

Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.}

\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables.

Table captions are placed {\itshape above} the table.

Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}.

Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document.
\begin{table}
  \caption{Frequency of Special Characters}
  \label{tab:freq}
  \begin{tabular}{ccl}
    \toprule
    Non-English or Math&Frequency&Comments\\
    \midrule
    \O & 1 in 1,000& For Swedish names\\
    $\pi$ & 1 in 5& Common in math\\
    \$ & 4 in 5 & Used in business\\
    $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
    \bottomrule
  \end{tabular}
\end{table}

To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document.

\begin{table*}
  \caption{Some Typical Commands}
  \label{tab:commands}
  \begin{tabular}{ccl}
    \toprule
    Command &A Number & Comments\\
    \midrule
    \texttt{{\char'134}author} & 100& Author \\
    \texttt{{\char'134}table}& 300 & For tables\\
    \texttt{{\char'134}table*}& 400& For wider tables\\
    \bottomrule
  \end{tabular}
\end{table*}

\section{Math Equations}
You may want to display math equations in three distinct styles: inline, numbered display, or unnumbered display. Each of the three is discussed in the next sections.

\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context.
Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style (see the next section).

\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment.

Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above:
\begin{equation}
  \lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in the \textbf{equation} environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
  \sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
  \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.

\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.
\begin{figure}[h]
  \centering
  \includegraphics[width=\linewidth]{sample-franklin}
  \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
  \Description{The 1907 Franklin Model D roadster.}
\end{figure}

Your figures should contain a caption which describes the figure to the reader, and should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{figure caption}
  \Description{figure description}
\end{teaserfigure}
\end{verbatim}

\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.

The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|'' suffix, of the \BibTeX\ file.
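An individual entry in the \BibTeX\ file follows the usual record syntax; for example, a complete journal-article entry, with the identifying features recommended above, might look like this:
\begin{verbatim}
@article{Knuth:1984,
  author  = {Donald E. Knuth},
  title   = {Literate Programming},
  journal = {The Computer Journal},
  volume  = {27},
  number  = {2},
  pages   = {97--111},
  year    = {1984},
  doi     = {10.1093/comjnl/27.2.97}
}
\end{verbatim}
The entry is then cited in the text with \verb|\cite{Knuth:1984}|.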
The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. 
\end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigplan,screen]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. 
If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. 
Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). 
\section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. 
Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. 
First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. 
If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. 
\section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. 
A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. 
The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. 
This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands.
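As an illustration, a typical hierarchy of numbered headings might be set up as follows; the heading titles here are placeholders, while the commands themselves are the standard \LaTeX\ sectioning commands named above:
\begin{verbatim}
\section{Evaluation}
\subsection{Experimental Setup}
\subsubsection{Datasets}
\paragraph{Preprocessing}
\end{verbatim}
Each command takes the heading text as its argument, and the class numbers the headings automatically.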
Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. 
\begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{equation} environment.
Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E.
Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. 
\subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmtog]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. 
Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. 
\end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigchi]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. 
The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. 
The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. 
User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. 
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
\section{Math Equations}
You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context.
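As a concrete illustration (the surrounding sentence is invented for this example), the same in-text formula can be entered either with the short form or with the environment form:
\begin{verbatim}
... the limit $\lim_{n\rightarrow \infty}x=0$ holds ...
... the limit \begin{math}\lim_{n\rightarrow \infty}x=0\end{math} holds ...
\end{verbatim}
Both forms produce identical output in the running text.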
Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section.)
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in the \textbf{equation} environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{The 1907 Franklin Model D roadster.}
\end{figure}
Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{figure caption}
  \Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``\verb|bibfile|'' is the name, without the ``.bib'' suffix, of the \BibTeX\ file.
\section*{Checklist}
\subsection*{Legend}
\begin{itemize}
\lgood{Green -- in good condition}
\lok{Light green -- ok}
\lpartial{Yellow -- partially done}
\lbad{Orange -- in bad condition}
\lreallybad{Red -- in really bad condition}
\lforpetr{Cyan -- for Petr}
\lforandrei{Olive -- for Andrei}
\ltodo{White -- todo}
\end{itemize}
\subsection*{Structural}
\begin{enumerate}
\lok{Abstract}
\lpartial{Introduction}
\lok{System Model}
\lok{Abstractions and Definitions}
\lok{Reconfigurable and Dynamic {\BLA}}
\lpartial{Discussion}
\lpartial{Related work}
\lok{Appendix about Access Control}
\lok{Description of full {\DBLA}}
\lok{Proofs for {\DBLA}}
\lok{Full description of the transformation}
\lok{Proofs for transformation}
\lok{Dynamic {\MaxRegister}.}
\lpartial{Proofs for {\MaxRegister}.}
\end{enumerate}
\subsection*{Important TODOs}
\begin{enumerate}
\lbad{Update the roadmap at the end of the introduction.}
\lok{Say that correct clients only propose values and configurations with valid proofs in Max-Register.}
\lok{Mention that $\fGet()$ in the max-register implementation doesn't use forward-secure signatures at all.}
\ltodo{Reference the appendices B and C properly.}
\lok{Rewrite ``our contribution'' in the intro.}
\lok{Introduce pivotal configurations in the main paper and add some text about it.}
\lok{Introduce ``{\tactive} configurations'' properly or remove it from the paper.}
\lok{Introduce the term ``comparable'' for lattice elements and sets.}
\lok{Say that public keys are part of the configuration.}
\lok{Describe how we fix the ``slow reader'' attack in the intro.}
\lok{Add proofs to max-register.}
\end{enumerate}
\subsection*{Check before submitting}
\begin{enumerate}
\ltodo{Update the roadmap at the end of the introduction.}
\ltodo{Make sure that the terminology is consistent.}
\ltodo{Check that ``todo'' or ``lots of text'' is not used anywhere.}
\ltodo{Run a spell checker.}
\end{enumerate}
\subsection*{Miscellaneous}
\begin{enumerate}
\ltodo{Simplify the properties of {\dynamic} and {\reconfigurable} items? Especially the liveness properties.}
\ltodo{Add drawings.}
\lok{Describe some complete reconfigurable systems without any external dependencies.}
\lok{Introduce forward-secure digital signatures.}
\lok{Introduce the Reliable Broadcast primitive in Section 2 (System Model).}
\ltodo{Add some motivation for ``verifiable input values'' (just one sentence), don't just throw them at the reader.}
\lok{Consider adding a strong validity property??}
\lcomment[Andrei]{The idea is to make the ``minimal implementation'' (see ``discussion'').}
\lok{Motivate why we use a ``weaker'' version of BLA: without strong validity and not generalized.}
\lok{``later'' and ``earlier'' $\to$ ``succeeds'' and ``precedes'' (or ``succeeding'', ``preceding'')??}
\lcomment[Andrei]{Decided on {\thigher}, {\tlower}, {\thighest}, and {\tlowest}.}
\ltodo{Maybe remove the ``History Comparability'' property?
Is it needed or is it a detail of our implementation?}
\lforandrei{Mention that $C \sqsubseteq \Cinst$ is used to simplify proofs and that $C \sqsubseteq \Ccurr$ would work.}
\ltodo{Add execution examples??}
\ltodo{Rename MaxElement to Highest??}
\lok{Clean up the full code, especially the comments. Remove old terminology. Check that there are no overflowed lines.}
\lok{Rename ``Global objects'' to ``Shared objects'' and ``Local variables'' to ``Global variables''.}
\ltodo{Maybe somehow remove all this ``some process knows $\sigma$ such that...''. This ``knows'' sounds really informal.}
\lok{Reformulate the Operational Liveness property and the corresponding property for max-register.}
\ltodo{Use a consistent paragraph style (in the Intro and System Model we currently use a custom one).}
\ltodo{Perhaps add some proofs of time complexity.}
\lok{Comments on communication cost.}
\ltodo{Decide whether the configurations that once were in the local history of a correct client are called ``candidate''.}
\end{enumerate}
\subsection*{From PODC reviews}
\begin{enumerate}
\item[] \textbf{From the 1st reviewer:}
\lpartial{Explain the mechanisms for quorum proofs and proof verifications in more detail.}
\lcomment{I believe we addressed this issue by explaining how {\AC} works and by a better description of the {\DBLA} algorithm.}
\ltodo{The writing in many parts is not clear enough.}
\begin{enumerate}
\ltodo{Quote: ``Section 3.1 [Verifiable inputs and outputs] was not clear to me.''}
\lpartial{Section 3.3 [RBLA: specification] had too many properties, which made it very difficult to follow.}
\lcomment[atonkikh]{This is addressed by the current structure of Chapter 3: the properties of {\dynamic} and {\reconfigurable} objects are separated from the properties of the objects themselves.}
\ltodo{Quote: ``What is an up-call?''}
\lpartial{Improve writing.
Quote: ``In the Comparability property, what do you mean by verifiable values?''}
\lcomment[atonkikh]{This is partially addressed by renaming ``verifiable values'' to ``verifiable output values''.}
\end{enumerate}
\ltodo{The ``History Convergence'' property seems to mix the definition with the implementation.}
\lok{The algorithm is very difficult to follow. An informal overview or the intuition could help.}
\lcomment{The main paper now gives the overview and the technical details are presented in the appendix. Still no examples though.}
\lgood{Potentially consider using the fault-tolerant shared memory model?}
\lcomment{No.}
\ltodo{Add a conclusion/discussion section?}
\ltodo{Add a discussion of the protocol's complexity.}
\ltodo{Add a discussion of the open problems.}
\lcomment{Open problems: a ``general approach'' to reconfiguration, ``optimal'' communication complexity, a framework to simplify proofs.}
\lforpetr{Cite ``On Liveness of Dynamic Storage'' to justify the finite number of reconfiguration requests.}
\lpartial{More adequate names for the ``I still work here'' and the ``slow reader'' attacks?}
\lcomment[Andrei]{Maybe they are fine.}
\ltodo{Present the ``I still work here'' and the ``slow reader'' attacks properly. They should be used to motivate the algorithm.}
\end{enumerate}
\begin{enumerate}
\item[] \textbf{From the 2nd reviewer:}
\ltodo{We should clearly state our contribution and why our results are novel.}
\ltodo{Maybe add more intuition and move some formalism to the Appendix?}
\ltodo{Quote: ``Since the proofs are not included in the main draft, sections 2 and 3 are unnecessarily technical.''}
\lpartial{We should add an appendix on access control and emphasize that it captures different models.}
\ltodo{Perhaps explicitly reference some paper on Sybil attacks just to show that we are aware of them.}
\ltodo{Read and, perhaps, reference the papers that consider continuous churn:}
\begin{enumerate}
\ltodo{Baldoni et al.
2013: \url{https://www.sciencedirect.com/science/article/pii/S0304397513002673}}
\ltodo{Attiya et al. 2020: \url{https://arxiv.org/abs/2003.07787}}
\ltodo{Kumar et al. 2019: \url{https://arxiv.org/abs/1910.06716}}
\end{enumerate}
\end{enumerate}
\begin{enumerate}
\item[] \textbf{From the 3rd reviewer:}
\lok{Provide a more intuitive and easy-to-read description of the protocol (perhaps less formal and technical).}
\ltodo{Add execution examples??}
\ltodo{The concepts of VerifyInputValue and VerifyOutputValue are hard to understand?}
\ltodo{Improve the writing in the property descriptions.}
\ltodo{Verification Stability $\to$ Verification Agreement: If $\fVerifyOutputValue(v, \sigma)$ returns $val$ at a correct process, then it never returns $val' \neq val$ to any correct process.}
\lcomment[atonkikh]{We probably don't have this property now because $\fVerifyOutputValue$ is now a function, not an operation. However, we need to somehow make it clear that this is a function in the mathematical sense.}
\lpartial{Quote: ``More effort should be made by authors to make the paper more readable. It is not that it is formalized poorly (quite opposite) and that english is not good (it is) it is just very difficult to follow the exposition. More example would be welcome. Also, the textual description of the protocol should help more the reader understand the pseudocode especially since proofs are not in the main body. Authors should guide better the reader through the protocols.''}
\end{enumerate}
\section{Typesetting instructions -- Summary}
\label{sec:typesetting-summary}
LIPIcs is a series of open access high-quality conference proceedings across all fields in informatics established in cooperation with Schloss Dagstuhl.
In order to do justice to the high scientific quality of the conferences that publish their proceedings in the LIPIcs series, which is ensured by the thorough review process of the respective events, we believe that LIPIcs proceedings must have an attractive and consistent layout matching the standard of the series. Moreover, the quality of the metadata, the typesetting and the layout must also meet the requirements of other external parties such as indexing service, DOI registry, funding agencies, among others. The guidelines contained in this document serve as the baseline for the authors, editors, and the publisher to create documents that meet as many different requirements as possible. Please comply with the following instructions when preparing your article for a LIPIcs proceedings volume. \paragraph*{Minimum requirements} \begin{itemize} \item Use pdflatex and an up-to-date \LaTeX{} system. \item Use further \LaTeX{} packages and custom made macros carefully and only if required. \item Use the provided sectioning macros: \verb+\section+, \verb+\subsection+, \verb+\subsubsection+, \linebreak \verb+\paragraph+, \verb+\paragraph*+, and \verb+\subparagraph*+. \item Provide suitable graphics of at least 300dpi (preferably in PDF format). \item Use BibTeX and keep the standard style (\verb+plainurl+) for the bibliography. \item Please try to keep the warnings log as small as possible. Avoid overfull \verb+\hboxes+ and any kind of warnings/errors with the referenced BibTeX entries. \item Use a spellchecker to correct typos. \end{itemize} \paragraph*{Mandatory metadata macros} Please set the values of the metadata macros carefully since the information parsed from these macros will be passed to publication servers, catalogues and search engines. Avoid placing macros inside the metadata macros. The following metadata macros/environments are mandatory: \begin{itemize} \item \verb+\title+ and, in case of long titles, \verb+\titlerunning+. 
\item \verb+\author+, one for each author, even if two or more authors have the same affiliation. \item \verb+\authorrunning+ and \verb+\Copyright+ (concatenated author names)\\ The \verb+\author+ macros and the \verb+\Copyright+ macro should contain full author names (especially with regard to the first name), while \verb+\authorrunning+ should contain abbreviated first names. \item \verb+\ccsdesc+ (ACM classification, see \url{https://www.acm.org/publications/class-2012}). \item \verb+\keywords+ (a comma-separated list of keywords). \item \verb+\relatedversion+ (if there is a related version, typically the ``full version''); please make sure to provide a persistent URL, e.\,g., at arXiv. \item \verb+\begin{abstract}...\end{abstract}+ . \end{itemize} \paragraph*{Please do not \ldots} Generally speaking, please do not override the \texttt{lipics-v2019}-style defaults. To be more specific, a short checklist also used by Dagstuhl Publishing during the final typesetting is given below. In case of \textbf{non-compliance} with these rules Dagstuhl Publishing will remove the corresponding parts of \LaTeX{} code and \textbf{replace it with the \texttt{lipics-v2019} defaults}. In serious cases, we may reject the LaTeX-source and expect the corresponding author to revise the relevant parts. \begin{itemize} \item Do not use a different main font. (For example, the \texttt{times} package is forbidden.) \item Do not alter the spacing of the \texttt{lipics-v2019.cls} style file. \item Do not use \verb+enumitem+ and \verb+paralist+. (The \texttt{enumerate} package is preloaded, so you can use \verb+\begin{enumerate}[(a)]+ or the like.) \item Do not use ``self-made'' sectioning commands (e.\,g., \verb+\noindent{\bf My+ \verb+Paragraph}+). \item Do not hide large text blocks using comments or \verb+\iffalse+ $\ldots$ \verb+\fi+ constructions. \item Do not use conditional structures to include/exclude content. 
Instead, please provide only the content that should be published -- in one file -- and nothing else. \item Do not wrap figures and tables with text. In particular, the package \texttt{wrapfig} is not supported. \item Do not change the bibliography style. In particular, do not use author-year citations. (The \texttt{natbib} package is not supported.) \end{itemize} \enlargethispage{\baselineskip} This is only a summary containing the most relevant details. Please read the complete document ``LIPIcs: Instructions for Authors and the \texttt{lipics-v2019} Class'' for all details and don't hesitate to contact Dagstuhl Publishing (\url{mailto:publishing@dagstuhl.de}) in case of questions or comments: \href{http://drops.dagstuhl.de/styles/lipics-v2019/lipics-v2019-authors/lipics-v2019-authors-guidelines.pdf}{\texttt{http://drops.dagstuhl.de/styles/lipics-v2019/\newline lipics-v2019-authors/lipics-v2019-authors-guidelines.pdf}} \section{Lorem ipsum dolor sit amet} Lorem ipsum dolor sit amet, consectetur adipiscing elit \cite{DBLP:journals/cacm/Knuth74}. Praesent convallis orci arcu, eu mollis dolor. Aliquam eleifend suscipit lacinia. Maecenas quam mi, porta ut lacinia sed, convallis ac dui. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse potenti. Donec eget odio et magna ullamcorper vehicula ut vitae libero. Maecenas lectus nulla, auctor nec varius ac, ultricies et turpis. Pellentesque id ante erat. In hac habitasse platea dictumst. Curabitur a scelerisque odio. Pellentesque elit risus, posuere quis elementum at, pellentesque ut diam. Quisque aliquam libero id mi imperdiet quis convallis turpis eleifend. \begin{lemma}[Lorem ipsum] \label{lemma:lorem} Vestibulum sodales dolor et dui cursus iaculis. Nullam ullamcorper purus vel turpis lobortis eu tempus lorem semper. Proin facilisis gravida rutrum. Etiam sed sollicitudin lorem. Proin pellentesque risus at elit hendrerit pharetra. Integer at turpis varius libero rhoncus fermentum vitae vitae metus. 
\end{lemma} \begin{proof} Cras purus lorem, pulvinar et fermentum sagittis, suscipit quis magna. \begin{claim} content... \end{claim} \begin{claimproof} content... \end{claimproof} \end{proof} \begin{corollary}[Curabitur pulvinar, \cite{DBLP:books/mk/GrayR93}] \label{lemma:curabitur} Nam liber tempor cum soluta nobis eleifend option congue nihil imperdiet doming id quod mazim placerat facer possim assum. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. \end{corollary} \begin{proposition}\label{prop1} This is a proposition \end{proposition} \autoref{prop1} and \cref{prop1} \ldots \subsection{Curabitur dictum felis id sapien} Curabitur dictum \cref{lemma:curabitur} felis id sapien \autoref{lemma:curabitur} mollis ut venenatis tortor feugiat. Curabitur sed velit diam. Integer aliquam, nunc ac egestas lacinia, nibh est vehicula nibh, ac auctor velit tellus non arcu. Vestibulum lacinia ipsum vitae nisi ultrices eget gravida turpis laoreet. Duis rutrum dapibus ornare. Nulla vehicula vulputate iaculis. Proin a consequat neque. Donec ut rutrum urna. Morbi scelerisque turpis sed elit sagittis eu scelerisque quam condimentum. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Aenean nec faucibus leo. Cras ut nisl odio, non tincidunt lorem. Integer purus ligula, venenatis et convallis lacinia, scelerisque at erat. Fusce risus libero, convallis at fermentum in, dignissim sed sem. Ut dapibus orci vitae nisl viverra nec adipiscing tortor condimentum \cite{DBLP:journals/cacm/Dijkstra68a}. Donec non suscipit lorem. Nam sit amet enim vitae nisl accumsan pretium. \begin{lstlisting}[caption={Useless code},label=list:8-6,captionpos=t,float,abovecaptionskip=-\medskipamount] for i:=maxint to 0 do begin j:=square(root(i)); end; \end{lstlisting} \subsection{Proin ac fermentum augue} Proin ac fermentum augue. 
Nullam bibendum enim sollicitudin tellus egestas lacinia euismod orci mollis. Nulla facilisi. Vivamus volutpat venenatis sapien, vitae feugiat arcu fringilla ac. Mauris sapien tortor, sagittis eget auctor at, vulputate pharetra magna. Sed congue, dui nec vulputate convallis, sem nunc adipiscing dui, vel venenatis mauris sem in dui. Praesent a pretium quam. Mauris non mauris sit amet eros rutrum aliquam id ut sapien. Nulla aliquet fringilla sagittis. Pellentesque eu metus posuere nunc tincidunt dignissim in tempor dolor. Nulla cursus aliquet enim. Cras sapien risus, accumsan eu cursus ut, commodo vel velit. Praesent aliquet consectetur ligula, vitae iaculis ligula interdum vel. Integer faucibus faucibus felis. \begin{itemize} \item Ut vitae diam augue. \item Integer lacus ante, pellentesque sed sollicitudin et, pulvinar adipiscing sem. \item Maecenas facilisis, leo quis tincidunt egestas, magna ipsum condimentum orci, vitae facilisis nibh turpis et elit. \end{itemize} \begin{remark} content... \end{remark} \section{Pellentesque quis tortor} Nec urna malesuada sollicitudin. Nulla facilisi. Vivamus aliquam tempus ligula eget ornare. Praesent eget magna ut turpis mattis cursus. Aliquam vel condimentum orci. Nunc congue, libero in gravida convallis \cite{DBLP:conf/focs/HopcroftPV75}, orci nibh sodales quam, id egestas felis mi nec nisi. Suspendisse tincidunt, est ac vestibulum posuere, justo odio bibendum urna, rutrum bibendum dolor sem nec tellus. \begin{lemma} [Quisque blandit tempus nunc] Sed interdum nisl pretium non. Mauris sodales consequat risus vel consectetur. Aliquam erat volutpat. Nunc sed sapien ligula. Proin faucibus sapien luctus nisl feugiat convallis faucibus elit cursus. Nunc vestibulum nunc ac massa pretium pharetra. Nulla facilisis turpis id augue venenatis blandit. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. \end{lemma} Fusce eu leo nisi. Cras eget orci neque, eleifend dapibus felis. Duis et leo dui. 
Nam vulputate, velit et laoreet porttitor, quam arcu facilisis dui, sed malesuada risus massa sit amet neque.
\section{Discussion}
\label{sec:discussion}
\myparagraph{The general case.}
While we do not provide any general approach for building {\dynamic} objects, we expect that most static algorithms in the asynchronous model can be adapted to the {\dynamic} case by applying the same \emph{set of techniques}. These techniques include our state transfer protocol (relying on forward-secure signatures), the use of an additional round-trip to prevent the ``slow reader'' attack, and the structure of our cryptographic proofs that ensures that {\tentative} configurations cannot create a valid certificate for any output value. To illustrate this, in Appendix~\ref{sec:max-register}, we present the {\dynamic} version of {\MaxRegister}~\cite{max-register}. We also discuss the {\dynamic} version of the {\AccessControl} abstraction in Appendix~\ref{sec:access-control}.
\myparagraph{Communication cost.}
We \emph{did not} try to provide the optimal implementations of each object or to implement the most general abstractions (such as generalized lattice agreement~\cite{gla,rla}). Instead, we focused on providing a \emph{minimal implementation} for the \emph{minimal set of abstractions} to demonstrate the ideas and the \emph{general techniques} for defining and building reconfigurable services in the harsh world of asynchrony and Byzantine failures. Therefore, we left plenty of room for optimizations. We discuss a few possible directions in Appendix~\ref{subsec:dbla-optimizations}, after providing the full implementation of {\DBLA}. Most of them aim at reducing the size of the proofs.
\myparagraph{Open questions.}
We would like to mention two directions that are most relevant to this paper.
First, with regard to active reconfiguration, it would be interesting to see more efficient algorithms for ``small'' configuration changes while still supporting completely changing the set of replicas in one reconfiguration request. In this paper we assume that the sets of replicas of all configurations might be completely disjoint. Therefore, we have this expensive quorum-to-quorum communication pattern, which seems completely unnecessary if the set of replicas changes only slightly after each reconfiguration. Second, with regard to Byzantine faults, it would be interesting to consider models with a ``weaker'' adversary. In this paper we assumed a very strong model of the adversary: no assumptions are made about the correctness of replicas in superseded configurations. This ``pessimistic'' approach leads to more complicated and expensive protocols.
\section{Max Register}\label{sec:max-register}
To exemplify our claim that our approach to building {\dynamic} objects can be used to implement different abstractions and is not limited to lattice agreement, in this section we show how to create an atomic Byzantine fault-tolerant {\MaxRegister} in the {\dynamic} setting. An atomic (a.k.a.\ linearizable) multi-writer multi-reader Byzantine {\MaxRegister} is a distributed object that has two operations, $\fRead()$ and $\fWrite(v, \sigma)$, and must be parametrized by a boolean function $\fVerifyInputValue(v, \sigma)$. As before, we say that $\sigma$ is a \emph{valid certificate for input value}~$v$ iff $\fVerifyInputValue(v, \sigma) = \iTrue$ and that value~$v$ is a \emph{verifiable input value} iff some process knows $\sigma$ such that $\fVerifyInputValue(v, \sigma) = \iTrue$. We assume that correct clients invoke $\fWrite(v, \sigma)$ only if $\fVerifyInputValue(v, \sigma) = \iTrue$. We \emph{do not} make any assumptions on the number of verifiable input values for this abstraction (i.e.\ it can be infinite).
The object must satisfy the following three properties:
\begin{itemize}
\item \textit{\pMRValidity}: if $\fRead()$ returns value $v$ to a correct process, then $v$ is a verifiable input value;
\item \textit{\pMRAtomicity}: if some correct process $p$ completed $\fWrite(v, \sigma)$ or received $v$ from $\fRead()$ strictly before some correct process $q$ invoked $\fRead()$, then the value returned to $q$ must be greater than or equal to $v$;
\item \textit{\pMRLiveness}: every call to $\fRead()$ and $\fWrite(v, \sigma)$ by a forever-correct process eventually terminates.
\end{itemize}
For simplicity, unlike {\ByzantineLatticeAgreement}, our {\MaxRegister} doesn't provide the $\fVerifyOutputValue(v, \sigma)$ function.
\subsection{{\DynamicMaxRegister} implementation}
\label{subsec:dmr-impl}
\input{algorithms/dynamic-max-register.tex}
In this section we present our implementation of the \emph{\dynamic} version of the {\MaxRegister} abstraction ({\DynamicMaxRegister} or {\DMR} for short). Overall, the ``application'' part of the implementation is very similar to the classical ABD algorithm~\cite{ABD}, and the ``{\dynamic}'' part of the implementation is almost the same as in {\DBLA}.
\paragraph*{Client implementation}
From the client's perspective, the two main procedures are $\fGet()$ and $\fSet(v, \sigma)$. $\fSet(v, \sigma)$ (\lstlinerangeref{dmr-client}{set}{set-end}) is used to store the value on a quorum of replicas of the most recent configuration. It returns $\iTrue$ iff it manages to receive signed acknowledgments from a quorum of some configuration. Forward-secure signatures are used to prevent the ``I still work here'' attack. Since $\fSet$ doesn't try to read any information from the replicas, it is not susceptible to the ``slow reader'' attack. $\fGet()$ (lines~\ref{lst:dmr-client:get}--\ref{lst:dmr-client:get-end}) is very similar to $\fSet(...)$ and is used to request information from a quorum of replicas of the most recent configuration.
Since we do not provide the $\fVerifyOutputValue(...)$ function, the replies from replicas are \emph{not signed} (\lstlineref{dmr-replica}{upon-get-send-reply}). Therefore, $\fGet()$ is susceptible to both the ``I still work here'' and the ``slow reader'' attacks when used alone. Later in this section we discuss how invoking $\fSet(...)$ right after $\fGet()$ (\lstlineref{dmr-client}{read-set}) allows us to avoid these issues.
Operation $\fWrite(v, \sigma)$ (lines~\ref{lst:dmr-client:write}--\ref{lst:dmr-client:write-end}) is used by correct clients to store values in the register. It simply performs repeated calls to $\fSet(v, \sigma)$ until some call succeeds in reaching a quorum of replicas. Retries are safe because, as in lattice agreement, write requests to a max-register are idempotent. Since we assume the total number of verifiable histories to be finite, only a finite number of retries is possible.
Operation $\fRead()$ (lines~\ref{lst:dmr-client:read}--\ref{lst:dmr-client:read-end}) is used to request the current value from the register, and it consists of repeated calls to both $\fGet()$ and $\fSet(...)$. The call to $\fGet()$ is simply used to query information from the replicas. The call to $\fSet(...)$ is usually called ``the write-back phase'' and serves two purposes here:
\begin{itemize}
\item It is used instead of the ``confirming'' phase to prevent the ``I still work here'' and the ``slow reader'' attacks. Indeed, if the configuration was superseded during the execution of $\fGet()$, $\fSet(...)$ will not succeed because it will not be able to gather a quorum of signed replies in the same configuration;
\item It is also used to order the calls to $\fRead()$ and to guarantee the {\pMRAtomicity} of the {\MaxRegister}~\cite{ABD}. Intuitively, if some correct process successfully completed $\fSet(v, \sigma)$ strictly before some other correct process invoked $\fRead()$, the latter process will receive a value that is not smaller than $v$.
\end{itemize}
\paragraph*{Replica implementation}
The replica implementation (Algorithm~\lstref{dmr-replica}) is mostly the same as for {\DBLA} (Algorithm~\lstref{dbla-short-replica}). It handles the $\mUpdateRead$ and $\mUpdateReadResp$ messages a bit differently (\lstlinerangeref{dmr-replica}{upon-update-read}{upon-read-resp-end}), by exchanging $\vcurr$ and $\sigmacurr$ instead of $\icurrentValues$ as in {\DBLA}. It also handles client requests specific to {\MaxRegister} (\lstlinerangeref{dmr-replica}{upon-get}{upon-set-end}).
\subsection{Proof of correctness}
Since our {\DynamicMaxRegister} implementation uses the same state transfer protocol as {\DBLA}, most proofs from Section~\ref{subsec:dbla-correctness-proof} that apply to {\DBLA} also apply to {\DMR} (with some minor adaptations). Here we provide only the statements of such theorems, without repeating the proofs. Then we introduce several theorems specific to {\DMR} and sketch the proofs.
\paragraph*{{\DMR} safety}
\begin{lemma}[Candidate configurations]
\label{lem:dmr-candidate-configurations}
\leavevmode
\begin{enumerate}
\item \label{itm:dmr-cand-in-hist} Each candidate configuration is present in some verifiable history.
\item \label{itm:dmr-cand-fin} There is a finite number of candidate configurations.
\item \label{itm:dmr-cand-comp} All candidate configurations are comparable with ``$\sqsubseteq$''.
\end{enumerate}
\end{lemma}
\begin{lemma}[{\Tentative} configurations]
\label{lem:dmr-tentative-configurations}
\leavevmode
\begin{enumerate}
\item No correct client will ever make a request to a {\tentative} configuration.
\item {\Tentative} configurations cannot be installed.
\item A correct process will never invoke $\fFSVerify$ with timestamp $\fHeight(C)$ for any {\tentative} configuration $C$.
\item A correct replica will never broadcast any message via the uniform reliable broadcast primitive in a {\tentative} configuration.
\end{enumerate} \end{lemma} \begin{lemma} \label{lem:dmr-pivotal-configurations} If $C \sqsubseteq \fMaxElement(h)$, where $C$ is a {\pivotal} configuration and $h$ is the local history of a correct process, then $C \in h$. \end{lemma} \begin{theorem}[\pInstallationValidity] \label{the:dmr-installation-validity} Our implementation of {\DMR} satisfies {\pInstallationValidity}. I.e., only a candidate configuration can be installed. \end{theorem} \begin{lemma}[Key update] \label{lem:dmr-key-update} If a {\pivotal} configuration $C$ is superseded, then there is no quorum of replicas in that configuration capable of signing messages with timestamp $\fHeight(C)$, i.e., $\nexists Q \in \iquorums(C) \textST \forall r \in Q: \ist_r \le \fHeight(C)$. \end{lemma} We say that a correct client \emph{completes its operation in configuration $C$} iff at the moment when the client completes its operation, the {\thighest} configuration in its local history is $C$. \begin{lemma}[State transfer correctness] \label{lem:dmr-state-transfer-correctness} \leavevmode \\ If some correct process completed $\fWrite(v, \sigma)$ in $C$ or received $v$ from a $\fRead()$ operation completed in $C$, then for each {\tactive} installed configuration $D$ such that $C \sqsubset D$, there is a quorum $Q_D \in \iquorums(D)$ such that for each correct replica in $Q_D$: $\vcurr \ge v$. \end{lemma} The following lemma is the first lemma specific to {\DMR}. \begin{lemma}[{\pMRAtomicity} in one configuration] \label{lem:dmr-atomicity-in-one-config} \leavevmode \\ If some correct process $p$ completed $\fWrite(v, \sigma)$ in $C$ or received $v$ from a $\fRead()$ operation completed in $C$, strictly before some process $q$ invoked $\fRead()$, and $q$ completed its operation in $C$, then the value returned to $q$ is greater than or equal to $v$. \end{lemma} \begin{proof} Recall that the $\fRead()$ operation consists of repeated calls to two procedures: $\fGet()$ and $\fSet(...)$.
%
If process $q$ successfully completed $\fSet(...)$ in configuration $C$, then, by the use of forward-secure signatures, configuration $C$ was {\tactive} during the execution of $\fGet()$ that preceded the call to $\fSet$.
%
This also means that configuration $C$ was {\tactive} during the execution of $\fSet(v, \sigma)$ by process $p$, since it happened before process $q$ started executing its request.
%
By the quorum intersection property, process $q$ must have received $v$ or a greater value from at least one correct replica.
\end{proof} \begin{theorem}[{\pMRAtomicity}] \label{the:dmr-atomicity} Our implementation of {\DMR} satisfies the {\pMRAtomicity} property. If some correct process $p$ completed $\fWrite(v, \sigma)$ or received $v$ from $\fRead()$ strictly before some process $q$ invoked $\fRead()$, then the value returned to $q$ must be greater than or equal to $v$. \end{theorem} \begin{proof} Let $C$ (resp., $D$) be the {\thighest} configuration in $p$'s (resp., $q$'s) local history when it completed its request. Also, let $v$ (resp., $u$) be the value that $p$ (resp., $q$) passed to the last call to $\fSet(...)$ (notice that both $\fRead()$ and $\fWrite(...)$ call $\fSet(...)$). If $C = D$, then $u \ge v$ by Lemma~\ref{lem:dmr-atomicity-in-one-config}. The case when $D \sqsubset C$ is impossible due to the use of forward-secure signatures. Correct replicas do not answer user requests in a configuration until this configuration is installed (\lstlineref{dmr-replica}{upon-get-wait}). Therefore, $C$ had to be installed before $p$ completed its request. By Lemma~\ref{lem:dmr-key-update} (Key Update) and by the use of forward-secure signatures in $\fSet(...)$, this would mean that $q$ would not be able to complete its request in $D$. Finally, let us consider the case when $C \sqsubset D$.
In this case, by Lemma~\ref{lem:dmr-state-transfer-correctness}, the quorum intersection property, and the use of forward-secure signatures in $\fSet(...)$, $q$ received $v$ or a greater value from at least one correct replica during the execution of $\fGet()$. Therefore, in this case $u$ is also greater than or equal to $v$. \end{proof} \begin{theorem}[{\DMR} safety] \label{the:dmr-safety} Our implementation satisfies the safety properties of {\DMR}: {\pMRValidity}, {\pMRAtomicity}, {\pDynamicValidity}, {\pHistoryComparability}, and {\pInstallationValidity}. \end{theorem} \begin{proof} \leavevmode \begin{itemize} \item {\pMRValidity} follows directly from the implementation: correct clients only return verifiable input values from $\fGet()$ (\lstlineref{dmr-client}{get-end}); \item {\pMRAtomicity} follows directly from Theorem~\ref{the:dmr-atomicity}. \item Proofs for {\pDynamicValidity}, {\pHistoryComparability}, and {\pInstallationValidity} are exactly the same as for {\DBLA} (see Theorem~\ref{the:dbla-safety}). \end{itemize} \end{proof} \paragraph*{{\DMR} liveness} \begin{theorem}[\pHistoryConvergence] \label{the:dmr-history-convergence} Local histories of all correct processes will eventually become identical. \end{theorem} Recall that the \emph{maximal installed configuration} is the {\thighest} installed configuration and is denoted by $\Cmax$ (see Definition~\ref{def:cmax} and Lemma~\ref{lem:cmax-existence}). \begin{lemma}[$\Cmax$ installation] \label{lem:dmr-cmax-installation} The maximal installed configuration will eventually be installed by all correct replicas. \end{lemma} \begin{lemma}[State transfer progress] \label{lem:dmr-state-transfer-progress} State transfer executed by a forever-correct replica always terminates. \end{lemma} \begin{lemma} \label{lem:dmr-cmax-is-final} $\Cmax$ will eventually become the {\thighest} configuration in the local history of each correct process.
\end{lemma} \begin{theorem}[\pNewInstallationLiveness] \label{the:dmr-installation-liveness} Our implementation of {\DMR} satisfies the {\pNewInstallationLiveness} property. That is, (1)~if configuration $C$ is in the local history of a forever-correct process, either $C$ or a {\thigher} configuration will eventually be installed;
%
(2)~if some configuration $C$ is installed, each correct replica will eventually install $C$ or a {\thigher} configuration.
\end{theorem} \begin{theorem}[\pMRLiveness] \label{the:dmr-mr-liveness} Our implementation of {\DMR} satisfies the {\pMRLiveness} property: every call to $\fRead()$ and $\fWrite(v, \sigma)$ by a forever-correct process eventually terminates. \end{theorem} \begin{proof} Let $p$ be a forever-correct client that invoked $\fRead()$ or $\fWrite(...)$. By Lemma~\ref{lem:dmr-cmax-is-final}, $\Cmax$ will eventually become the {\thighest} configuration in the local history of $p$.
%
If the client's request does not terminate by the time it learns about $\Cmax$, the client will restart its request there.
%
Since $\Cmax$ will eventually be installed by all correct replicas and will never be superseded, there will be a quorum of forever-correct replicas, and $p$ will be able to complete its request there.
\end{proof} \begin{theorem}[{\DMR} liveness] \label{the:dmr-liveness} Our implementation satisfies the liveness properties of {\DMR}: {\pMRLiveness}, {\pDynamicLiveness}, {\pHistoryConvergence}, and {\pNewInstallationLiveness}. \end{theorem} \begin{proof} \leavevmode {\pMRLiveness} follows directly from Theorem~\ref{the:dmr-mr-liveness}. Proofs for {\pDynamicLiveness}, {\pHistoryConvergence}, and {\pNewInstallationLiveness} are exactly the same as for {\DBLA} (see Theorem~\ref{the:dbla-safety}).
\end{proof} \section{Building {\dynamic} and {\reconfigurable} objects}\label{sec:rbla} In this section, we present our technique for creating \emph{\dynamic} and \emph{\reconfigurable} objects as defined in Sections~\ref{subsec:dynamic-objects} and~\ref{subsec:reconf-objects}, respectively. First, in Section~\ref{subsec:dbla-impl}, we present our implementation of the \emph{\dynamic} version of the \emph{\ByzantineLatticeAgreement} abstraction defined in Section~\ref{subsec:bla-def}, which is the main building block in our approach to creating arbitrary {\reconfigurable} objects. Second, in Section~\ref{subsec:transform-short}, we show how to construct \emph{\reconfigurable} objects out of {\dynamic} ones. To respect space limits, the complete pseudocode, the proof of correctness, and further discussion are delegated to Appendices~\ref{sec:dbla-full} and~\ref{sec:transform-full}. \subsection{\DynamicByzantineLatticeAgreement}\label{subsec:dbla-impl} The \emph{\DynamicByzantineLatticeAgreement} abstraction ({\DBLA} for short) is the main building block in our approach for constructing {\reconfigurable} objects. Its specification is a combination of the specification of {\ByzantineLatticeAgreement} (Section~\ref{subsec:bla-def}) and the specification of a {\dynamic} object (Section~\ref{subsec:dynamic-objects}). The abstraction exports two operations: $\fPropose(v, \sigma)$ and $\fUpdateHistory(h, \sigma)$, two up-calls: $\fNewHistory(h)$ and $\fInstalledConfig(C)$, and one function: $\fVerifyOutputValue(w, \tau)$. The abstraction is parameterized by two boolean functions: $\fVerifyInputValue(v, \sigma)$ and $\fVerifyInputConfig(C, \sigma)$, and must satisfy a set of properties defined in Sections~\ref{subsec:bla-def} and~\ref{subsec:dynamic-objects} such as {\pComparability}, {\pNewInstallationLiveness}, and {\pDynamicValidity}.
\input{algorithms/dbla-short} The pseudocode for our algorithm is partially presented in Algorithms~\lstref{dbla-short-client} and~\lstref{dbla-short-replica}. The complete pseudocode and the proof of correctness can be found in Appendix~\ref{sec:dbla-full}. Here we try to convey the most important ideas and design decisions. \myparagraph{The ``replica-centric'' approach.} Most papers on crash-fault tolerant reconfiguration take the \emph{client-centric approach} to state transfer~\cite{dynastore,smartmerge,rla,SKM17-reconf}. In those algorithms, the client is responsible for reading the state from the replicas of older configurations and relaying it to the replicas of a new configuration. This approach is appealing because it has small communication complexity when there are few clients. However, the client-centric approach becomes significantly more expensive and harder to implement if we assume that some clients can be Byzantine. Instead, we take the \emph{replica-centric approach} to state transfer~\cite{freestore}. That is, the replicas of a new configuration are responsible for reading the state from the replicas of prior configurations. \myparagraph{{\Pivotal} and {\tentative} configurations.} As we mentioned earlier, we use forward-secure signatures to guarantee that superseded configurations cannot affect correct clients or forge certificates for output values. However, due to asynchrony, it is impossible to access all prior configurations when installing a new one. We say that a configuration is \emph{\pivotal} if it is the last configuration in some verifiable history. Other candidate configurations are called \emph{\tentative}. Intuitively, the {\pivotal} configurations are the configurations that can potentially be installed by a correct replica and to which a correct client can potentially make a request. The (useless) {\tentative} configurations are the ``necessary evil'' of the asynchronous model.
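As a small illustration of this classification (a sketch, not part of our pseudocode; configuration identifiers and list-based histories are modelling assumptions), a configuration is {\pivotal} iff it is the last element of some verifiable history:

```python
def classify_configurations(verifiable_histories):
    """Split candidate configurations into 'pivotal' and 'tentative'.

    Each history is modelled as a chain of configuration identifiers,
    ordered from lowest to highest. A configuration is pivotal iff it is
    the last element of some verifiable history; all other candidates
    are tentative.
    """
    candidates = {c for h in verifiable_histories for c in h}
    pivotal = {h[-1] for h in verifiable_histories if h}
    return pivotal, candidates - pivotal

# Verifiable histories are related by containment, e.g.:
histories = [["C1", "C2"], ["C1", "C2", "C3"]]
pivotal, tentative = classify_configurations(histories)
assert pivotal == {"C2", "C3"}   # last elements of some verifiable history
assert tentative == {"C1"}       # never last in any verifiable history
```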
Eliminating or detecting them would require solving consensus. As we prove in Appendix~\ref{subsec:dbla-correctness-proof}, our protocol ensures that \emph{\tentative} configurations are \emph{always} ``harmless'' and that {\pivotal} configurations are ``harmless'' after they are superseded. \myparagraph{Client implementation.} Let us first take the perspective of a correct client (Algorithm~\lstref{dbla-short-client}). Since the state transfer is handled by the replicas, the client's code is rather simple, yet there are several important details. Our {\BLA} implementation builds upon the (crash fault-tolerant static) implementation of lattice agreement by Faleiro et al.~\cite{gla}. An important difference is that, instead of operating directly on the object lattice $\Lat$, internally we operate with sets of verifiable input values, in other words, with the power lattice $2^{\Lat}$, and only join them before returning the result to the client (line~\ref{lst:dbla-short-client:propose-return}). While yielding the same result, this trick helps us to preserve the {\pValidity} property of {\BLA}: every output value is, indeed, a join of some set of verifiable input values. To propose a new value, the client first needs to call the $\fRefine$ procedure (\lstlinerangeref{dbla-short-client}{refine}{refine-end}). It will add the value to the set of all verifiable input values the client is aware of ($\icurrentValues$, line~\ref{lst:dbla-short-client:refine-current-values}) and send a $\mPropose$ message to all replicas of the most recent configuration in the client's local history (line~\ref{lst:dbla-short-client:refine-send-message}). If this configuration turns out to be outdated, the client will eventually learn about a newer configuration and will simply repeat its request there (line~\ref{lst:dbla-short-client:new-history-refine}).
This is safe because lattice agreement operations are \emph{idempotent}, which means that an operation can be applied several times with the same result as if it was applied only once. In response to the client's $\mPropose$ message, the replicas will send $\mProposeResp$ messages (see \lstlinerangeref{dbla-short-client}{upon-propose-resp}{upon-propose-resp-end}), each of which will contain the set of verifiable input values known to the replica ($\ivalues$), a \emph{forward-secure signature} ($\isignature$) (see Section~\ref{sec:system-model}), and a sequence number ($\isn$). If the client learns any new verifiable input values from a response (line~\ref{lst:dbla-short-client:upon-propose-resp-refine}), it will repeat the $\fRefine$ procedure with the new values. Since we assume that the total number of verifiable input values is finite in any given infinite execution, this process will eventually converge. If the client receives a $\mProposeResp$ message without any new verifiable input values in it, it will add this message to the set of acknowledgments (line~\ref{lst:dbla-short-client:upon-propose-resp-ack}). Notice the additional check ($\ivalues = \icurrentValues$). This condition is always $\iTrue$ if the sender is correct. It is only needed to filter out malformed acknowledgments from Byzantine replicas. Upon gathering a quorum of such acknowledgments (line~\ref{lst:dbla-short-client:upon-acks-collected}), the client proceeds to the second stage of the protocol, which we will cover in the next paragraph. Now we would like to emphasize the use of \emph{forward-secure signatures} here. It is what allows us to mitigate the \textbf{``I still work here''} attack. As we will see below, our state transfer protocol ensures that before any \emph{\pivotal} configuration is superseded, all correct replicas in some quorum of that configuration will invoke $\fUpdateFSKeys$ (in other words, will remove their private keys for that configuration). 
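The mechanism can be illustrated with a toy model (hypothetical; a real forward-secure signature scheme such as the one assumed in Section~\ref{sec:system-model} is cryptographic, not a Python class): once a replica advances its key timestamp, it is unable to sign on behalf of lower timestamps, so a superseded configuration cannot gather a fresh quorum of valid replies.

```python
class ToyForwardSecureSigner:
    """Toy model of a forward-secure signer: signing is possible only at
    timestamps >= the current key timestamp, and key updates are one-way."""

    def __init__(self):
        self.timestamp = 0  # current key timestamp

    def update_keys(self, new_timestamp):
        # One-way update: private keys for lower timestamps are erased.
        self.timestamp = max(self.timestamp, new_timestamp)

    def sign(self, message, ts):
        if ts < self.timestamp:
            return None  # the private key for this timestamp no longer exists
        return ("signed", message, ts)  # stand-in for a real signature

signer = ToyForwardSecureSigner()
assert signer.sign("reply", 1) is not None   # configuration of height 1 is active
signer.update_keys(2)                        # height-1 configuration superseded
assert signer.sign("reply", 1) is None       # "I still work here" no longer possible
assert signer.sign("reply", 2) is not None
```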
If the client tries to access a configuration that was superseded long before the client's request, it simply will not receive enough responses with valid forward-secure signatures and will eventually retry its request in a {\thigher} configuration (line~\ref{lst:dbla-short-client:new-history-refine}). The primary goal of the second stage is to mitigate the \textbf{``slow reader''} attack. The client simply sends the acknowledgments it has collected in the first stage to the replicas and asks them to sign with their forward-secure signatures that they have seen these acknowledgments \emph{before they updated their private keys}, i.e., before this configuration was superseded. By this, we confirm that the configuration used in the first stage was not superseded in the meantime. \myparagraph{Replica implementation.} Now, let us take the perspective of a correct replica (Algorithm~\lstref{dbla-short-replica}). Each $\mPropose$ message has some configuration identifier attached to it. When a correct replica receives a $\mPropose$ message from a correct client, there are three possible cases (lines~\ref{lst:dbla-short-replica:upon-propose}--\ref{lst:dbla-short-replica:upon-propose-end}): \begin{enumerate} \item If the message is from the {\thighest} configuration in the replica's local history ($\fMaxElement(\ihistory)$), and this configuration is installed by this replica ($C = \Cinst$), the replica will process this message and will send a reply, signed with a forward-secure signature with timestamp $\fHeight(C)$ (line~\ref{lst:dbla-short-replica:upon-propose-send-resp}); \item If the message is from a configuration that is {\tlower} than $\fMaxElement(\ihistory)$, the replica will drop this message.
This cannot undermine the liveness of client requests because, if both the client and the replica remain correct forever, the client will either complete the request without this replica's help, or eventually learn about a {\thigher} configuration and repeat its request there; \item Otherwise, the replica ``waits'' (line~\ref{lst:dbla-short-replica:upon-propose-wait}). That is, the message is stored locally until one of the two conditions above is satisfied. \end{enumerate} A major part of the replica's implementation is the state transfer procedure (lines~\ref{lst:dbla-short-replica:state-transfer-begin}--\ref{lst:dbla-short-replica:state-transfer-end}). Whenever a replica sees that its ``current configuration'' ($\Ccurr$) is not the {\thighest} one in its local history, it tries to read the state from a quorum in each of the intermediate configurations between $\Ccurr$ and the {\thighest} configuration ($\Cnext$) one by one. If the replica succeeds before $\Cnext$ or a {\thigher} configuration is installed, it updates $\Ccurr$ to $\Cnext$ and lets the other replicas know about it. It uses the \emph{uniform reliable broadcast primitive} we introduced in Section~\ref{subsec:bcast} to disseminate message $\Message{\mUpdateComplete}$ among the correct replicas of $\Cnext$ (line~\ref{lst:dbla-short-replica:state-transfer-broadcast}). If, on the other hand, it happens that the replica is waiting for responses from some configuration $C$, and this configuration is already superseded, then it might never gather a quorum of replies from the replicas of this configuration. In this case, our algorithm ensures that this replica will eventually install some configuration {\thigher} than $C$ and will finish waiting via the other condition ($C \sqsubset \Ccurr$).
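The core of the state transfer loop can be sketched as follows (a simplification with hypothetical names; configurations are modelled as integer heights, and the callback stands in for gathering replies from a quorum of the given configuration):

```python
def state_transfer(c_curr, history, read_quorum_state):
    """Simplified state transfer: read the state from a quorum of each
    configuration between c_curr (exclusive) and the highest configuration
    in the local history (inclusive), one by one, and merge the results.

    `read_quorum_state(c)` is a hypothetical callback returning the set of
    verifiable values gathered from a quorum of configuration `c`; in the
    real protocol this step also ensures that the replicas of `c` have
    updated their forward-secure keys before replying.
    """
    c_next = max(history)
    state = set()
    for c in sorted(x for x in history if c_curr < x <= c_next):
        state |= read_quorum_state(c)
    # The caller would now set C_curr := C_next and disseminate
    # <UpdateComplete> via uniform reliable broadcast.
    return c_next, state

# A replica at configuration 1 catching up with configurations 2 and 3:
reads = {2: {"a"}, 3: {"a", "b"}}
c_next, state = state_transfer(1, {1, 2, 3}, lambda c: reads[c])
assert c_next == 3 and state == {"a", "b"}
```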
When a replica learns about a new verifiable history (line~\ref{lst:dbla-short-replica:new-history}), it updates its private keys so that it will never again be able to answer any user request in a configuration {\tlower} than the {\thighest} configuration in this new history (line~\ref{lst:dbla-short-replica:new-history-update-fs-keys}). Due to the properties of reliable broadcast, unless this replica halts or turns Byzantine, every other correct process will eventually also learn about this history and abandon {\tlower} configurations. When a correct replica reads the state from some prior configuration $C$ during state transfer (lines~\ref{lst:dbla-short-replica:state-transfer-update-read}--\ref{lst:dbla-short-replica:state-transfer-wait}), it is responsible for making sure that some quorum $Q \in \iquorums(C)$ updated their forward-secure private keys to a timestamp larger than $\fHeight(C)$. This is achieved by a simple mechanism: the replicas of $C$ will not reply to the $\mUpdateRead$ messages until they update their private keys far enough (until $C \sqsubset \fMaxElement(\ihistory)$). \subsection{Making {\dynamic} objects {\reconfigurable}} \label{subsec:transform-short} While {\dynamic} objects are important building blocks, they are not particularly useful by themselves because they assume that all verifiable input histories are comparable. In this section, we show how to combine several {\dynamic} objects to obtain a single \emph{\reconfigurable} object. As described in Section~\ref{subsec:reconf-objects}, compared to the ``static'' versions, {\reconfigurable} objects have one more operation, $\fUpdateConfig(C, \sigma)$; must be parameterized by a boolean function $\fVerifyInputConfig(C, \sigma)$; and must satisfy a few additional properties such as {\pReconfigurationValidity} and {\pNewInstallationLiveness}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.52\textwidth}
\centering
\pdftex[\textwidth]{compiled-drawings/transform.pdf_tex}
\caption{{\Reconfigurable} object}
\label{fig:transform}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.46\textwidth}
\centering
\pdftex[\textwidth]{compiled-drawings/transform-with-ac.pdf_tex}
\caption{{\Reconfigurable} object with {\AccessControl}}
\label{fig:transform-with-ac}
\end{subfigure}
\caption{ \textbf{(a):} The structure of our reconfigurable objects. All three {\dynamic} objects use $\iHistLA$ as the source of verifiable histories (including $\iHistLA$ itself). Verifiable input values for $\iConfLA$ are the verifiable input configurations of the {\reconfigurable} object. Verifiable input values for $\iHistLA$ are singleton sets of verifiable output values of $\iConfLA$. Finally, verifiable input values for $\iUObj$ are the verifiable input values of the reconfigurable object. \textbf{(b):} An example of a {\reconfigurable} object without external dependencies. $\iConfAC$ and $\iObjAC$ are instances of the \emph{\DynamicAccessControl} abstraction. }
\label{fig:transform-both}
\end{figure}
We build a {\reconfigurable} object by combining three \emph{\dynamic} ones. The first is the {\dynamic} object that we want to make {\reconfigurable} (let us call it $\iUObj$). For example, it can be an instance of {\DBLA} if one wants to make a {\reconfigurable} version of {\ByzantineLatticeAgreement}. The other two are used to build verifiable histories: $\iConfLA$ is a {\DBLA} operating on the configuration lattice $\Conf$, and $\iHistLA$ is a {\DBLA} operating on the power lattice $2^{\Conf}$. \input{algorithms/transform-short.tex} The relations between the three {\dynamic} objects are shown in Figure~\ref{fig:transform}. A short, straightforward version of the pseudocode is presented in Algorithm~\lstref{transform-short}.
To update a configuration, the client first accesses $\iConfLA$ and then submits the returned configuration (with its proof) to $\iHistLA$. The resulting history is broadcast (using the reliable-broadcast primitive) and is used to reconfigure all three {\dynamic} objects. Thus, the three {\dynamic} objects are tied together. They ``live through'' the same history of reconfigurations and can be seen as three parts of the same algorithm rather than separate entities. The complete pseudocode of the transformation and the proof of its correctness are presented in Appendix~\ref{sec:transform-full}. \myparagraph{Eliminating external dependencies.} Most abstractions that we define and implement in this paper rely on external parameters: functions $\fVerifyInputValue(v, \sigma)$ and $\fVerifyInputConfig(C, \sigma)$. This allows us to connect several objects into larger systems (as shown in Figure~\ref{fig:transform-both}) and abstracts away application-specific details. Figure~\ref{fig:transform-with-ac} shows an example of a ``self-contained'' object without any external dependencies. The idea is very simple: add two instances of the {\dynamic} version of the {\AccessControl} abstraction we introduced in Section~\ref{subsec:access-control-def} that will serve as the gateways for the object. These {\dynamic} objects will be reconfigured along with $\iConfLA$, $\iHistLA$, and $\iUObj$. We discuss different implementations of the {\DynamicAccessControl} abstraction in Appendix~\ref{sec:access-control}. \section{Abstractions and Definitions}\label{sec:definitions} We start by defining {\AccessControl} and {\ByzantineLatticeAgreement} abstractions (Sections~\ref{subsec:access-control-def} and~\ref{subsec:bla-def}). 
We then introduce the important notions of \emph{\reconfigurable} and \emph{\dynamic} objects (Sections~\ref{subsec:reconf-objects} and~\ref{subsec:dynamic-objects}, respectively), and recall the broadcast primitives that we intend to use (Section~\ref{subsec:bcast}). \subsection{{\AccessControl} abstraction}\label{subsec:access-control-def} We introduce the \emph{{\AccessControl} abstraction} ({\AC} for short), which can be used to prevent malicious, malformed, or excessively frequent requests from affecting correct clients. As we discuss later, liveness properties of our reconfigurable objects require a sensible \emph{access control policy} to prevent Byzantine clients from flooding the system with reconfiguration requests. The {\AccessControl} abstraction exports one operation and one function: \begin{itemize} \item Operation $\fRequestCert(v)$ returns a \emph{certificate} $\sigma$, which can be verified with $\fVerifyCert(v, \sigma)$, or the special value $\bot$, indicating that the permission was denied; \item Function $\fVerifyCert(v, \sigma)$ returns a boolean value. \end{itemize} The implementation must satisfy the following property: \begin{itemize} \item \textit{\pCertificateVerifiability}: If $\fRequestCert(v)$ returned $\sigma$ to a correct process, then $\fVerifyCert(v, \sigma) = \iTrue$. \end{itemize} One way to implement the {\AccessControl} abstraction in a \emph{static} system is to form a certificate where $v$, the argument of $\fRequestCert$, is signed by each member of a quorum of replicas. However, in most cases, there must be some additional application-specific logic, according to which the correct replicas decide whether they should sign the suggested value. Since access control is not the primary focus of this paper, we discuss different policies, their {\dynamic} implementations, and applications to building {\reconfigurable} services in Appendix~\ref{sec:access-control}.
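The quorum-of-signatures implementation sketched above can be made concrete as follows (a toy sketch under stated assumptions: tuples stand in for unforgeable digital signatures, $n = 4$ replicas with $f = 1$, and a trivial always-allow policy):

```python
QUORUM = 3  # e.g., n - f signatures out of n = 4 replicas with f = 1

def toy_sign(replica_id, value):
    # Stand-in for an unforgeable digital signature by `replica_id`.
    return ("sig", replica_id, value)

def policy_allows(replica_id, value):
    # Application-specific logic; here, every request is allowed.
    return True

def request_cert(value, replicas):
    """RequestCert(v): collect signatures on `value` from a quorum of
    replicas, each applying its policy first. Returns None (i.e., bot)
    if the permission is denied."""
    sigs = [toy_sign(r, value) for r in replicas if policy_allows(r, value)]
    return sigs[:QUORUM] if len(sigs) >= QUORUM else None

def verify_cert(value, cert):
    """VerifyCert(v, sigma): check that the certificate carries signatures
    on `value` from QUORUM distinct replicas."""
    return (cert is not None
            and len({r for (tag, r, v) in cert
                     if tag == "sig" and v == value}) >= QUORUM)

cert = request_cert("new-config", replicas=[0, 1, 2, 3])
assert verify_cert("new-config", cert)       # Certificate Verifiability
assert not verify_cert("other-value", cert)  # signatures bind the value
```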
\subsection{{\ByzantineLatticeAgreement} abstraction}\label{subsec:bla-def} The \emph{{\ByzantineLatticeAgreement} abstraction} ({\BLA} for short) is an adaptation of Lattice Agreement~\cite{gla} that tolerates Byzantine failures of processes (both clients and replicas). It is parameterized by a lattice $\Lat$, called the \emph{object lattice}, and a boolean function $\fVerifyInputValue: \Lat \times \Sigma \to \{\iTrue, \iFalse\}$, where $\Sigma$ is a set of possible certificates. We say that $\sigma$ is a \emph{valid certificate for input value} $v$ iff $\fVerifyInputValue(v, \sigma) = \iTrue$. This way we abstract away the application-specific notions of ``correct input values'' and ``allowed actions''. We say that $v$ is a \emph{verifiable input value} in a given run iff at some point in time in that run, some process \emph{knows} a certificate $\sigma$ that is valid for $v$, i.e., it maintains $v$ and a valid certificate $\sigma$ in its local memory. We require that the adversary is unable to invert $\fVerifyInputValue$ by computing a valid certificate for a given value. This is the case, for example, when $\sigma$ must contain a set of unforgeable digital signatures. The {\ByzantineLatticeAgreement} abstraction exports one operation and one function: \begin{itemize} \item Operation $\fPropose(v, \sigma)$ returns a response of the form $\Tupple{w, \tau}$, where $w \in \Lat$, $\sigma$ is a valid certificate for input value $v$, and $\tau$ is a certificate for output value $w$; \item Function $\fVerifyOutputValue(v, \sigma)$ returns a boolean value. \end{itemize} Similarly to input values, we say that $\tau$ is a \emph{valid certificate for output value} $w$ iff $\fVerifyOutputValue(w, \tau) = \iTrue$. We say that $w$ is a \emph{verifiable output value} in a given run iff at some point in that run, some process knows $\tau$ that is valid for $w$.
Implementations of {\ByzantineLatticeAgreement} must satisfy the following properties: \begin{itemize} \item \textit{\pValidity}: Every verifiable output value $w$ is a join of some set of verifiable input values; \item \textit{\pVerifiability}: If $\fPropose(...)$ returns $\Tupple{w, \tau}$ to a correct process, then $\fVerifyOutputValue(w, \tau) = \iTrue$; \item \textit{\pInclusion}: If $\fPropose(v, \sigma)$ returns $\Tupple{w, \tau}$ to a correct process, then $v \sqsubseteq w$; \item \textit{\pComparability}: All verifiable output values are comparable\footnote{ Recall that values $v$ and $w$ are called \emph{comparable} iff either $v \sqsubseteq w$ or $w \sqsubset v$.}; \item \textit{\pOperationalLiveness}: If the total number of verifiable input values is finite, every call to $\fPropose(v, \sigma)$ by a forever-correct process eventually terminates. \end{itemize} \ignore{ {\pValidity}, {\pVerifiability}, and {\pInclusion} are simple sanity properties. {\pComparability} is the main property of Lattice Agreement. {\pOperationalLiveness} ensures the progress of the system. } Notice that liveness is required only in the case of finitely many verifiable input values. This is sufficient for the purposes of reconfiguration. The abstraction that provides unconditional liveness is called \emph{Generalized Lattice Agreement}~\cite{gla} and is out of the scope of this paper. \subsection{{\Reconfigurable} objects}\label{subsec:reconf-objects} First, let us define the notion of a \emph{history}. In Section~\ref{sec:system-model}, we introduced the configuration lattice $\Conf$. A finite set $h \subseteq \Conf$ is called a \emph{history} iff all elements of $h$ are comparable (in other words, if they form a \emph{sequence} or \emph{chain}). 
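The chain condition on histories can be checked mechanically. In the sketch below (an illustration, not our pseudocode), each configuration is modelled as a frozenset of replica identifiers ordered by inclusion, which is one possible instantiation of the configuration lattice $\Conf$:

```python
def is_history(h):
    """Check that a finite set of configurations forms a history, i.e.,
    a chain: every two elements are comparable under the lattice order.
    Configurations are modelled here as frozensets ordered by inclusion."""
    configs = list(h)
    return all(a <= b or b <= a            # a ⊑ b or b ⊑ a
               for i, a in enumerate(configs)
               for b in configs[i + 1:])

c1 = frozenset({"r1", "r2", "r3"})
c2 = c1 | {"r4"}                  # c1 ⊑ c2
c3 = frozenset({"r1", "r2", "r5"})
assert is_history({c1, c2})       # a chain
assert not is_history({c2, c3})   # c2 and c3 are incomparable
```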
A \emph{\reconfigurable} object exports one operation $\fUpdateConfig(C, \sigma)$ and two up-calls: $\fNewHistory(h)$, used by all processes to indicate that they have updated their local histories, and $\fInstalledConfig(C)$, used by replicas to indicate that they are ready to serve user requests in configuration $C$. Each {\reconfigurable} object must be parameterized by a boolean function $\fVerifyInputConfig: \Conf \times \Sigma \to \{\iTrue, \iFalse\}$, where $\Sigma$ is a set of possible certificates. We say that $C \in \Conf$ is a \emph{verifiable input configuration} in a given run iff at some point in that run, some process knows $\sigma$ such that $\fVerifyInputConfig(C, \sigma) = \iTrue$. We require the total number of verifiable input configurations to be finite in any given infinite execution of the protocol. Intuitively, this assumption precludes the scenario in which Byzantine clients invoke an infinite sequence of valid reconfiguration requests, undermining system progress due to permanent reconfiguration. In practice, this boils down to assuming sufficiently long periods of stability when no new verifiable input configurations appear. This requirement is imposed by all asynchronous reconfigurable storage systems~\cite{tutorial2010,SKM17-reconf,rla,freestore} we are aware of, and, in fact, can be shown to be necessary~\cite{r-liveness}. The reconfiguration mechanism must provide the illusion of a unique global history. However, unambiguously agreeing on the contents of this history among several processes would require solving consensus, which is known to be impossible in asynchronous systems~\cite{flp}. Hence, in our system, each process is only aware of some subsequence of the global history. These subsequences are called \emph{local histories} of the processes. When a process updates its local history, it indicates this fact by triggering $\fNewHistory(h)$. We say that this process \emph{adopts} the history $h$.
A configuration $C$ is called \emph{candidate} if it appears in the local history of a correct process, in other words, if some correct process has triggered $\fNewHistory(h)$ such that $C \in h$. We say that a replica \emph{installs} configuration $C$ when it triggers the $\fInstalledConfig(C)$ up-call. A configuration is called \emph{installed} if some correct replica installed it. Finally, a candidate configuration is called \emph{superseded} if some {\thigher} configuration is installed, and \emph{\tactive} otherwise. Intuitively, \emph{candidate} configurations are those considered by the protocol. \emph{Installation} means that the configuration has received all the necessary information from prior configurations and has made sure that the older configurations are ``harmless''. Thereby, \emph{superseded} configurations are those on which we no longer rely. Replicas that do not belong to any {\tactive} configuration can be safely turned off or even become Byzantine. Each {\reconfigurable} object must satisfy the following properties: \begin{itemize} \item \textit{\pReconfigurationValidity}: If some correct process triggered $\fNewHistory(h)$, then $h$ is a well-formed history (i.e.,\ $h \subseteq \Conf$ and all elements of $h$ are comparable), and $\forall C \in h:$ $C$ is a join of some set of verifiable input configurations; \item \textit{\pReconfigurationLiveness}: Every call to $\fUpdateConfig(C, \sigma)$ will eventually terminate, and the client will eventually have $C$ or a {\thigher} configuration in its local history; \item \textit{\pHistoryComparability}: All local histories of correct processes are related by containment, i.e.\ if a correct process $p$ triggers $\fNewHistory(h_1)$, and a correct process $q$ triggers $\fNewHistory(h_2)$, then either $h_1 \subseteq h_2$ or $h_2 \subset h_1$.
Moreover, if some process triggers $\fNewHistory(h_1)$, and, later on, this process triggers $\fNewHistory(h_2)$, then $h_1 \subseteq h_2$; \item \textit{\pInstallationValidity}: Only a candidate configuration can be installed by a correct replica; \item \textit{\pHistoryConvergence}: Local histories of all correct processes will eventually become identical; \item \textit{\pNewInstallationLiveness}: If configuration $C$ is in the local history of a forever-correct process, either $C$ or a {\thigher} configuration will eventually be installed. \end{itemize} {\pReconfigurationValidity} and {\pInstallationValidity} are simple sanity properties. {\pHistoryComparability} ensures that all processes have consistent views of the system. {\pHistoryConvergence} and {\pNewInstallationLiveness} ensure the progress of the system. Intuitively, since the total number of verifiable input configurations is assumed to be finite, all correct processes will converge to the same ``final'' history (by {\pHistoryConvergence}), and the latest configuration in that history will eventually be installed by all correct replicas (by {\pNewInstallationLiveness}). Additionally, {\pReconfigurationLiveness} requires that every call to $\fUpdateConfig(C, \sigma)$ by a correct process eventually terminates. \subsection{{\Dynamic} objects}\label{subsec:dynamic-objects} \emph{\Dynamic} objects are similar to {\reconfigurable} objects, but instead of $\fUpdateConfig(C, \sigma)$, they have operation $\fUpdateHistory(h, \sigma)$, and instead of $\fVerifyInputConfig(C, \sigma)$, they must be parameterized by a boolean function $\fVerifyHistory: \Hist \times \Sigma \to \{\iTrue, \iFalse\}$, where $\Hist$ is the set of all histories and $\Sigma$ is the set of all possible certificates.
We say that $h$ is a \emph{verifiable history} iff at some point in time some process knows $\sigma$ such that $\fVerifyHistory(h, \sigma) = \iTrue$. As with verifiable input configurations, the total number of verifiable histories is required to be finite. Additionally, we require all verifiable histories to be related by containment, i.e.\ if $\fVerifyHistory(h_1, \sigma_1) = \iTrue$ and $\fVerifyHistory(h_2, \sigma_2) = \iTrue$, then either $h_1 \subseteq h_2$ or $h_2 \subset h_1$. We discuss how to build such histories in Section~\ref{subsec:transform-short}. Similarly to {\reconfigurable} objects, a {\dynamic} object must have $\fNewHistory(h)$ and $\fInstalledConfig(C)$ up-calls and satisfy the {\pHistoryComparability}, {\pInstallationValidity}, {\pHistoryConvergence}, and {\pNewInstallationLiveness} properties. However, instead of {\pReconfigurationValidity} and {\pReconfigurationLiveness}, it must satisfy two other properties: \begin{itemize} \item \textit{\pDynamicValidity}: A correct process may trigger up-call $\fNewHistory(h)$ only for a verifiable history $h$; \item \textit{\pDynamicLiveness}: Every invocation of $\fUpdateHistory(h)$ by a forever-correct client will eventually terminate. Moreover, the client will eventually trigger up-call $\fNewHistory(h')$ such that $h \subseteq h'$. \end{itemize} While {\reconfigurable} objects provide a general-purpose reconfiguration interface, {\dynamic} objects are more restrictive in the sense that they impose a strict comparability requirement on the verifiable histories. The main contribution of this paper is that we show how to build \emph{\dynamic} objects in a Byzantine environment and how to create \emph{\reconfigurable} objects using {\dynamic} objects as building blocks. We argue that this technique might be applicable to a large class of objects and exemplify our claim with {\dynamic} (and {\reconfigurable}) versions of {\ByzantineLatticeAgreement} and {\MaxRegister}~\cite{ABD, ByzantineQuorumSystems}.
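The containment requirement on verifiable histories, together with the monotonicity of $\fNewHistory$, can be illustrated with a small hypothetical sketch (the helper names are ours, not taken from our pseudocode):

```python
# Hypothetical sketch (names are ours): any two verifiable histories
# must be related by containment, and a process's local history only
# grows, mirroring HistoryComparability.

def comparable_histories(h1, h2):
    return h1 <= h2 or h2 <= h1

class LocalHistory:
    """The subsequence of the global history a process is aware of."""
    def __init__(self):
        self.history = frozenset()

    def adopt(self, h):
        # A verifiable history incomparable with the current one would
        # contradict the containment assumption, so we reject it.
        if not comparable_histories(self.history, h):
            raise ValueError("incomparable verifiable histories")
        # NewHistory is only triggered for supersets (monotonicity).
        if self.history < h:
            self.history = h
        return self.history

c0, c1 = frozenset({"+r1"}), frozenset({"+r1", "+r2"})
p = LocalHistory()
p.adopt(frozenset({c0}))
assert p.adopt(frozenset({c0, c1})) == frozenset({c0, c1})
```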
\subsection{Intersection, availability, and configuration sanity}\label{subsec:intersection-and-availability} Safety and liveness of our {\dynamic} and {\reconfigurable} objects rely upon specific assumptions on the quorum systems associated with candidate configurations. \begin{itemize} \item For safety, we require the \textbf{\pQuorumIntersection} property: for every \emph{\tactive} candidate configuration $C$, for all $Q_1, Q_2 \in \iquorums(C)$, $Q_1 \cap Q_2$ contains a correct process. \item For liveness, we require the \textbf{\pQuorumAvailability} property: for every candidate configuration $C$ that is \emph{never} superseded, there is always a quorum $Q \in \iquorums(C)$ that consists of correct replicas. This is equivalent to saying that there exists $Q \in \iquorums(C)$ that consists entirely of \emph{forever-correct} replicas. \end{itemize} Once a configuration is superseded, its quorums can become unavailable or compromised. \subsection{Broadcast primitives}\label{subsec:bcast} To make sure that no slow process is ``left behind'', we assume that a variant of the \emph{reliable broadcast} primitive~\cite{cachin2011introduction} is available. The primitive must ensure two properties: (1)~If a forever-correct process~$p$ broadcasts a message~$m$, then $p$ eventually delivers $m$; (2)~If some message~$m$ is delivered by a \emph{forever-correct} process, every forever-correct process eventually delivers $m$. In practice, such a primitive can be implemented by some sort of gossip protocol~\cite{gossiping}. This primitive is ``global'' in the sense that it is not bound to any particular configuration. In pseudocode we use ``{\RBBroadcast{...}}'' to denote a call to the ``global'' reliable broadcast. Additionally, we assume a ``local'' \emph{uniform reliable broadcast} primitive~\cite{cachin2011introduction}.
It has a stronger totality property: if some \emph{correct} process $p$ delivered some message $m$, then every forever-correct process will eventually deliver $m$, even if $p$ later turns Byzantine. This primitive can be implemented in a \emph{static} system, provided a quorum system. As we deal with \emph{\dynamic} systems, we associate every broadcast message with a fixed configuration and only guarantee these properties if the configuration is \emph{never superseded}. Notice that any static implementation of uniform reliable broadcast trivially guarantees this property. In pseudocode we use ``{\URBBroadcastIn{...}{$C$}}'' to denote a call to the ``local'' uniform reliable broadcast in configuration $C$. \section{Reconfigurable and Dynamic {\ByzantineLatticeAgreement}}\label{sec:rbla}\label{rbla} In this section, we present our \emph{\dynamic} and \emph{\reconfigurable} implementations of the {\ByzantineLatticeAgreement} abstraction defined in Section~\ref{subsec:bla-def}. Recall that the abstractions rely on the $\fVerifyInputValue$ and $\fVerifyInputConfig$ functions, specifying, informally, valid inputs. We discuss how to build systems without external dependencies in Section~\ref{subsec:discussion:complete-systems} and Appendix~\ref{sec:access-control}. In Section~\ref{subsec:dbla-impl}, we present our implementation of the \emph{\dynamic} version of the {\BLA} abstraction. In Section~\ref{subsec:rbla-impl}, we show how to build the \emph{\reconfigurable} {\BLA} from several instances of {\dynamic} {\BLA}. To respect space limits, the complete pseudocode and the proof of correctness are delegated to Appendices~\ref{sec:dbla-full} and~\ref{sec:rbla-full}. \subsection{{\Dynamic} {\BLA}}\label{subsec:dbla-impl} The {\DynamicByzantineLatticeAgreement} abstraction ({\DBLA} for short) is the main building block in our approach for constructing {\reconfigurable} objects. Its specification is the combination of the specification of {\ByzantineLatticeAgreement} (from Section~\ref{subsec:bla-def}) and the specification of a {\dynamic} object (from Section~\ref{subsec:dynamic-objects}). The abstraction exports two operations: $\fPropose(v, \sigma)$ and $\fUpdateHistory(h, \sigma)$, two up-calls: $\fNewHistory(h)$ and $\fInstalledConfig(C)$, and one function: $\fVerifyOutputValue(w, \tau)$. The abstraction is parameterized by two boolean functions: $\fVerifyInputValue(v, \sigma)$ and $\fVerifyInputConfig(C, \sigma)$, and must satisfy a set of properties defined in Sections~\ref{subsec:bla-def} and~\ref{subsec:dynamic-objects} such as {\pComparability}, {\pNewInstallationLiveness}, and {\pDynamicValidity}.
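Before diving into the implementation, the shape of the {\DBLA} interface can be sketched as follows (an illustrative Python skeleton with a degenerate single-process instantiation; this is not the pseudocode of our algorithm, and all names are ours):

```python
# Interface sketch of the DBLA abstraction (illustrative only).
# TrivialDBLA is a degenerate single-process instantiation, meant
# solely to show the shape of the interface.
from abc import ABC, abstractmethod

class DynamicBLA(ABC):
    @abstractmethod
    def propose(self, v, sigma): ...            # returns (w, tau)

    @abstractmethod
    def update_history(self, h, sigma): ...

    def on_new_history(self, h): ...            # up-call NewHistory(h)

    def on_installed_config(self, C): ...       # up-call InstalledConfig(C)

    @abstractmethod
    def verify_output_value(self, w, tau) -> bool: ...

class TrivialDBLA(DynamicBLA):
    def __init__(self):
        self.values = set()

    def propose(self, v, sigma):
        self.values.add(v)
        w = frozenset(self.values)              # join = union of inputs
        return w, "dummy-proof"

    def update_history(self, h, sigma):
        self.on_new_history(h)

    def verify_output_value(self, w, tau):
        return tau == "dummy-proof" and w <= frozenset(self.values)

d = TrivialDBLA()
w1, t1 = d.propose("a", None)
w2, t2 = d.propose("b", None)
assert w1 <= w2                                 # outputs form a chain
assert d.verify_output_value(w2, t2)
```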
\input{algorithms/dbla-short} The pseudocode for our algorithm is partially presented in Algorithms~\lstref{dbla-short-client} and~\lstref{dbla-short-replica}. To respect space limits, we present the full pseudocode and the proof of correctness in Appendix~\ref{sec:dbla-full}. Here we try to convey the most important ideas and design decisions. Most papers on crash-fault tolerant reconfiguration take the \emph{client-centric approach} to state transfer~\cite{dynastore,smartmerge,rla,SKM17-reconf}. In those algorithms, the client is responsible for reading the state from the replicas of an older configuration and relaying the read state to the replicas of a new configuration. This approach is appealing because it has small communication complexity when there are few clients and assumes only \emph{passive} replicas, i.e.,\ the replicas only reply to the client's messages and never initiate any communication themselves. However, the client-centric approach becomes significantly more expensive and harder to implement if we assume that some clients can be Byzantine. Instead, in this paper, we take the \emph{replica-centric approach} to state transfer. That is, the replicas of a new configuration are responsible for reading the state from the replicas of prior configurations. This approach has the benefit of lower latency and a communication cost that does not depend on the number of clients~\cite{freestore}, but, most importantly, it is easier to adapt to the case of Byzantine clients. \myparagraph{Client implementation.} Let us first take the perspective of a client (Algorithm~\lstref{dbla-short-client}). Since the state transfer is handled by the replicas, the client's code is rather simple, yet there are several important details. Our {\BLA} implementation builds upon the (crash-fault tolerant, static) implementation of lattice agreement by Faleiro et al.~\cite{gla}.
An important difference is that, instead of operating directly on the object lattice $\Lat$, internally we operate with sets of verifiable input values, in other words, with the power lattice $2^{\Lat}$, and only join them before returning the result to the client (line~\ref{lst:dbla-short-client:propose-return}). This yields the same results and helps us to preserve the {\pValidity} property of {\BLA}: every output value is, indeed, a join of some set of verifiable input values. To propose a new value, the client first needs to call the $\fRefine$ procedure (\lstlinerangeref{dbla-short-client}{refine}{refine-end}). It will add the value to the set of all verifiable input values the client is aware of ($\icurrentValues$, line~\ref{lst:dbla-short-client:refine-current-values}) and send a $\mPropose$ message to all replicas of the most recent configuration in the client's local history (line~\ref{lst:dbla-short-client:refine-send-message}). If this configuration turns out to be outdated, the client will eventually learn about a newer configuration and will simply repeat its request there (line~\ref{lst:dbla-short-client:new-history-refine}). This is safe because lattice agreement operations are \emph{idempotent}, which means that an operation can be applied several times with the same result as if it was applied only once. In response to the client's $\mPropose$ message, the replicas will send $\mProposeResp$ messages (see \lstlinerangeref{dbla-short-client}{upon-propose-resp}{upon-propose-resp-end}), each of which will contain the set of verifiable input values known to the replica ($\ivalues$), a \emph{forward-secure signature} ($\isignature$) (see Section~\ref{sec:system-model}), and a sequence number ($\isn$). If the client learns any new verifiable input values from a response (line~\ref{lst:dbla-short-client:upon-propose-resp-refine}), it will repeat the $\fRefine$ procedure with the new values. 
Since we assume that the total number of verifiable input values is finite in any given infinite execution, this process will eventually converge. If the client receives a $\mProposeResp$ message without any new verifiable input values in it, it will add this message to the set of acknowledgments (line~\ref{lst:dbla-short-client:upon-propose-resp-ack}). Notice the additional check ($\ivalues = \icurrentValues$). This condition is always $\iTrue$ if the sender is correct; it is only needed to filter out malformed acknowledgments from Byzantine replicas. Upon gathering a quorum of such acknowledgments (line~\ref{lst:dbla-short-client:upon-acks-collected}), the client proceeds to the second stage of the protocol, which we will cover in the next paragraph. Now we would like to emphasize the use of \emph{forward-secure signatures} here. This is what allows us to mitigate the \textbf{``I still work here''} attack. Our state transfer protocol ensures that before any configuration is superseded, all correct replicas in some quorum of that configuration will invoke $\fUpdateFSKeys$ (in other words, will remove their private keys for that configuration). If the client tries to access a configuration that was superseded long before the client's request, it simply won't receive enough responses with valid forward-secure signatures and will eventually retry its request in a {\thigher} configuration (line~\ref{lst:dbla-short-client:new-history-refine}). The primary goal of the second stage is to mitigate the \textbf{``slow reader''} attack. The client simply sends the acknowledgments it has collected in the first stage to the replicas and asks them to sign with their forward-secure signatures that they have seen these acknowledgments \emph{before they updated their private keys}, i.e., before this configuration was superseded. By this, we confirm that the configuration used in the first stage was not superseded in the meantime.
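To give a concrete intuition for this mechanism, here is a toy model of forward-secure keys (purely illustrative: real schemes such as Pixel~\cite{drijvers2019pixel} evolve a single key cryptographically instead of storing one secret per timestamp, and all names here are ours):

```python
# Toy model of forward-secure keys (illustration only: real schemes
# evolve a single key cryptographically instead of storing one
# secret per timestamp).

class ForwardSecureKey:
    def __init__(self, max_ts):
        self._secrets = {t: f"sk_{t}" for t in range(max_ts + 1)}
        self._current = 0

    def update(self, ts):
        """Advance to timestamp ts, destroying all earlier secrets."""
        for t in range(self._current, ts):
            self._secrets.pop(t, None)
        self._current = max(self._current, ts)

    def sign(self, msg, ts):
        if ts < self._current:
            raise PermissionError("key for this timestamp was destroyed")
        return (msg, ts, self._secrets[ts])

key = ForwardSecureKey(max_ts=10)
key.sign("reply", ts=2)      # fine while height 2 is still current
key.update(5)                # a configuration at height 5 is installed
try:
    key.sign("stale reply", ts=2)   # "I still work here" now fails
    raise AssertionError("signing with a destroyed key must fail")
except PermissionError:
    pass
```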
\myparagraph{Replica implementation.} Now, let us take the perspective of a correct replica (Algorithm~\ref{alg:dbla-short-replica}). Each $\mPropose$ message has some configuration identifier attached to it. When a correct replica receives a $\mPropose$ message from a correct client, there are three possible cases (lines~\ref{lst:dbla-short-replica:upon-propose}--\ref{lst:dbla-short-replica:upon-propose-end}): \begin{enumerate} \item If the message is from the highest configuration in the replica's local history ($\fMaxElement(\ihistory)$), and this configuration is installed by this replica ($C = \Cinst$), the replica will process this message and will send a reply, signed with a forward-secure signature with timestamp $\fHeight(C)$ (line~\ref{lst:dbla-short-replica:upon-propose-send-resp}); \item If the message is from a configuration that is {\tlower} than $\fMaxElement(\ihistory)$, the replica will drop this message. This cannot undermine the liveness of client requests because, if both the client and the replica remain correct forever, the client will either complete the request without this replica's help, or eventually learn about a {\thigher} configuration and repeat its request there; \item Otherwise, the replica ``waits'' (line~\ref{lst:dbla-short-replica:upon-propose-wait}). That is, the message is stored in some buffer until one of the two former conditions is satisfied. \end{enumerate} A major part of the replica's implementation is the state transfer procedure (lines~\ref{lst:dbla-short-replica:state-transfer-begin}--\ref{lst:dbla-short-replica:state-transfer-end}). Whenever a replica sees that its ``current configuration'' ($\Ccurr$) is not the {\thighest} one in its local history, it tries to read the state from a quorum in each of the intermediate configurations between $\Ccurr$ and the {\thighest} configuration ($\Cnext$), one by one.
If it succeeds before $\Cnext$ or a {\thigher} configuration is installed, it updates $\Ccurr$ to $\Cnext$ and lets the other replicas know about it. It uses the uniform reliable broadcast primitive we introduced in Section~\ref{subsec:bcast} to disseminate message $\Message{\mUpdateComplete}$ among the correct replicas of $\Cnext$ (line~\ref{lst:dbla-short-replica:state-transfer-broadcast}). If, on the other hand, it happens that the replica is waiting for responses from some configuration $C$, and this configuration is already superseded, then it might never gather a quorum of replies from the replicas of this configuration. In this case, our algorithm ensures that this replica will eventually install some configuration {\thigher} than $C$ and will finish waiting via the other condition ($C \sqsubset \Ccurr$). When a replica learns about a new verifiable history (line~\ref{lst:dbla-short-replica:new-history}), it updates its private keys so that it will never again be able to answer any user request in a configuration {\tlower} than the {\thighest} configuration in this new history (line~\ref{lst:dbla-short-replica:new-history-update-fs-keys}). Due to the properties of reliable broadcast, every other correct process will eventually also learn about this history and abandon {\tlower} configurations. When a correct replica reads the state from some prior configuration $C$ during state transfer (lines~\ref{lst:dbla-short-replica:state-transfer-update-read}--\ref{lst:dbla-short-replica:state-transfer-wait}), it is responsible for making sure that some quorum $Q \in \iquorums(C)$ updated their forward-secure private keys to a timestamp larger than $\fHeight(C)$. This is achieved by a simple mechanism: the replicas of $C$ won't reply to the $\mUpdateRead$ messages until they update their private keys far enough (until $C \sqsubset \fMaxElement(\ihistory)$).
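A deliberately simplified, sequential rendition of this state-transfer loop may help fix the idea (the real protocol is event-driven and reads from quorums concurrently; the helper names below are ours):

```python
# Simplified, sequential rendition of the state-transfer loop
# (illustration only; the real protocol is event-driven).

def state_transfer(history, c_curr, read_quorum_state):
    """Advance from c_curr to the highest configuration in `history`,
    merging the state read from every intermediate configuration."""
    chain = sorted(history, key=len)  # a history is a containment chain
    state = frozenset()
    for c in chain:
        if len(c) <= len(c_curr):
            continue                  # already covered by c_curr
        # read_quorum_state(c) stands for gathering UpdateRead replies
        # from a quorum of c; the replies also certify that the quorum
        # has updated its forward-secure keys past height(c).
        state |= read_quorum_state(c)
        c_curr = c
    return c_curr, state

h = [frozenset({"+r1"}),
     frozenset({"+r1", "+r2"}),
     frozenset({"+r1", "+r2", "+r3"})]
final, st = state_transfer(h, h[0], lambda c: frozenset({f"v{len(c)}"}))
assert final == h[2] and st == {"v2", "v3"}
```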
\subsection{{\Reconfigurable} {\BLA}}\label{subsec:rbla-impl} While {\DBLA} is an important building block, it is not particularly useful by itself because it relies on the comparability of verifiable histories. In this section, we show how to combine several {\DBLA} instances to obtain a single instance of \emph{\ReconfigurableByzantineLatticeAgreement} ({\RBLA} for short). As described in Section~\ref{subsec:reconf-objects}, compared to the ``static'' version of the {\ByzantineLatticeAgreement} abstraction, {\RBLA} has one more operation -- $\fUpdateConfig(C, \sigma)$, must be parameterized by a boolean function $\fVerifyInputConfig(C, \sigma)$, and must satisfy a number of additional properties such as {\pReconfigurationValidity} and {\pNewInstallationLiveness}.
\input{algorithms/rbla-short.tex}
\begin{figure}
\centering
\input{compiled-drawings/rbla-architecture.pdf_tex}
\caption{
The structure of the {\RBLA} implementation.
All three {\dynamic} objects use $\iHistLA$ as the source of verifiable histories (including $\iHistLA$ itself).
Verifiable input values for $\iConfLA$ are the verifiable input configurations of the {\RBLA} object.
Verifiable input values for $\iHistLA$ are singleton sets of verifiable output values of $\iConfLA$.
Finally, verifiable input values for $\iObjLA$ are the verifiable input values of the {\RBLA} object itself.
}
\label{fig:rbla-architecture}
\end{figure}
Our {\RBLA} implementation builds upon three {\DBLA} objects: $\iConfLA$ operating on the configuration lattice $\Conf$, $\iHistLA$ operating on the lattice $2^{\Conf}$, and $\iObjLA$ operating on the object lattice $\Lat$. The first two {\DBLA} objects are used to take care of configuration updates, and the third one serves user requests. The relations between the three {\DBLA} objects are shown in Figure~\ref{fig:rbla-architecture}. The short version of the pseudocode is presented in Algorithm~\lstref{rbla-short} and is quite straightforward.
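The composition depicted in Figure~\ref{fig:rbla-architecture} can be sketched as follows (illustrative Python with degenerate single-process stand-ins for the {\DBLA} objects; all names are ours):

```python
# Stand-ins and a driver mirroring the UpdateConfig flow; the
# objects are degenerate single-process stubs, for illustration.

class StubLA:
    """Degenerate stand-in for a DBLA object."""
    def __init__(self):
        self.acc = frozenset()      # accumulated (joined) inputs
        self.history = frozenset()  # last adopted history

    def propose(self, v, sigma):
        self.acc = self.acc | v     # join = union
        return self.acc, "proof"

    def update_history(self, h, sigma):
        self.history = h

def update_config(conf_la, hist_la, all_objects, C, sigma):
    # 1. Agree on a join of the proposed configurations.
    new_conf, proof_c = conf_la.propose(C, sigma)
    # 2. Agree on a history containing it.  Outputs of conf_la are
    #    comparable, so any set of them forms a chain.
    new_hist, proof_h = hist_la.propose(frozenset({new_conf}), proof_c)
    # 3. The resulting verifiable history reconfigures all objects.
    for obj in all_objects:
        obj.update_history(new_hist, proof_h)
    return new_hist

conf_la, hist_la, obj_la = StubLA(), StubLA(), StubLA()
C = frozenset({"+r1", "+r2"})
h = update_config(conf_la, hist_la, [conf_la, hist_la, obj_la], C, None)
assert h == frozenset({C}) and obj_la.history == h
```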
The data operations are performed directly on $\iObjLA$. To update a configuration, the client first accesses $\iConfLA$ and then submits the returned configuration (with its proof) to $\iHistLA$. The resulting history is broadcast and is used to reconfigure all three {\DBLA} objects. Thus, the three {\DBLA} objects are tied together. They ``live through'' the same history of reconfigurations and can be seen as three parts of the same algorithm rather than separate entities. By accessing $\iConfLA$ and then $\iHistLA$, we minimize the number of configurations that should be accessed for a consistent configuration shift. Indeed, due to the {\pComparability} property of {\ByzantineLatticeAgreement}, all configurations submitted to $\iHistLA$ will already be ordered, and any set of such configurations will form a sequence (i.e., a history as defined in Section~\ref{subsec:reconf-objects}). When $k$ configurations are concurrently proposed, this allows us to access only $O(k)$ configurations for state transfer, and not an exponential number as in earlier work~\cite{dynastore,parsimonious}. This is known to be optimal~\cite{SKM17-reconf}. To respect space limits, we present the full pseudocode with the proof of correctness in Appendix~\ref{sec:rbla-full}. \section{Introduction}\label{sec:introduction} \myparagraph{Replication and quorums.} Replication is a natural way to ensure availability of shared data in the presence of failures. A collection of \emph{replicas}, each holding a version of the data, ensures that the \emph{clients} get a desired service, even when some replicas become unavailable or are hacked by a malicious adversary. Consistency of the provided service requires the replicas to \emph{synchronize}: intuitively, every client should be able to operate on the most ``up-to-date'' data, regardless of the set of replicas it can reach. It always makes sense to assume as little as possible about the environment in which a system we design is expected to run.
For example, \emph{asynchronous} distributed systems do not rely on timing assumptions, which makes them extremely robust with respect to communication disruptions and computational delays. It is, however, notoriously difficult and sometimes even impossible to make such systems \emph{fault-tolerant}. The folklore CAP theorem~\cite{Bre00,GL02} states that no replicated service can combine consistency, availability, and partition-tolerance. In particular, no consistent and available read-write storage can be implemented in the presence of partitions: clients in one partition are unable to keep track of the updates taking place in another one. Therefore, fault-tolerant storage systems tend to assume that partitions are excluded, e.g., by requiring a majority of replicas to be \emph{correct}~\cite{ABD}. More generally, one can assume a \emph{quorum system}, e.g., a set of subsets of replicas satisfying the intersection and availability properties~\cite{quorums}. Every (read or write) request from a client should be \emph{accepted} by a quorum of replicas. As every two quorums have at least one replica in common, intuitively, no client can miss previously written data. Of course, failures of replicas may jeopardize the underlying quorum system. In particular, we may find ourselves in a system in which no quorum is available and, thus, no operation may be able to terminate. Even worse, if the replicas are subject to Byzantine failures, we may not be able to guarantee the very correctness of read values. \myparagraph{Asynchronous reconfiguration.} To anticipate such scenarios in a long run, we must maintain a \emph{reconfiguration} mechanism that enables replacing compromised replicas with correct ones and update the corresponding quorum assumptions. A challenge here is to find an asynchronous implementation of reconfiguration in a system where both clients and replicas are subject to Byzantine failures that can be manifested by arbitrary and even malicious behavior. 
In the world of selfishly driven blockchain users, a reconfiguration mechanism must be prepared for this. Recently, a number of reconfigurable systems were proposed for asynchronous \emph{crash}-fault environments~\cite{dynastore,parsimonious,smartmerge,freestore,SKM17-reconf,rla}; these techniques were originally applied to (read-write) storage systems~\cite{dynastore, parsimonious,freestore}, and then extended to max-registers~\cite{smartmerge,SKM17-reconf} and the more general \emph{lattice} data type~\cite{rla}. These proposals tend to ensure that the clients reach a form of ``loose'' agreement on the currently active configurations, which can be naturally expressed via the \emph{lattice agreement} abstraction~\cite{lattice-hagit,gla}. We allow clients to (temporarily) live in different worlds, as long as these worlds are ``properly ordered''. For example, we may represent a configuration as a set of \emph{updates} (additions and removals of replicas) and require that all ``installed'' configurations should be related by containment. A configuration becomes \emph{stale} as soon as it is ``subsumed'' by a new one representing a proper superset of updates. \myparagraph{Challenges of Byzantine fault-tolerant reconfiguration.} In this paper, we focus on a \emph{Byzantine fault-tolerant} reconfiguration mechanism. We had to address several challenges specific to dynamic systems with Byzantine faults, which make it hard to benefit from existing crash fault-tolerant reconfigurable solutions. First, when we build a system out of lower-level components, we need to make sure that the outputs provided by these components are ``authentic''. Whenever a (potentially Byzantine) process claims to have obtained a \emph{value} $v$ (e.g., a new configuration estimate) from an underlying object (e.g., Lattice Agreement), it should also provide a \emph{proof} $\sigma$ that can be independently verified by every correct process.
The proof typically consists of a multi-signature provided by a quorum of replicas of some configuration. We abstract this requirement out by equipping the object with a function $\fVerifyOutputValue$ that returns a boolean value, provided $v$ and $\sigma$. When invoked by a correct process, the function returns $\iTrue$ if and only if $v$ has indeed been produced by the object. When ``chaining'' the objects, i.e., adopting the output $v$ provided by an object $A$ as an input for another object $B$, which is the typical scenario in our system, a correct process invokes $A.\fVerifyOutputValue(v,\sigma)$, where $\sigma$ is the proof associated with $v$ by the implementation of $A$. Only valid values returned by $A$ can be used as inputs to $B$. Second, we face the \textbf{{\attackName} attack}~\cite{tutorial2010}. It is possible that a slow client tries to access a \emph{stale}, outdated configuration in which some quorum is entirely compromised by the Byzantine adversary. The client can therefore be provided with an inconsistent view of the shared data. Thus, before accepting a new configuration, we need to make sure that stale ones are no longer capable of processing data requests from the clients. We address this issue via forward-secure signature schemes~\cite{bellare1999forward,drijvers2019pixel}. Intuitively, every replica is provided with a distinct private key associated with each configuration. Before an ``important'' configuration is replaced with a newer one, at least a quorum of its replicas are asked to destroy their private keys. This way, even if the replicas are to become Byzantine in the future, they will not be able to provide slow clients with inconsistent values. The stale configuration simply becomes ``non-responsive'', as in crash-fault tolerant reconfigurable systems.
Unfortunately, in an asynchronous system it is impossible to make sure that replicas of \emph{all} stale configurations remove their private keys, as this would require solving consensus~\cite{flp}. However, as we show in this paper, it is possible to make sure that the configurations in which replicas \emph{do not} remove their keys are never accessed by correct clients and are incapable of creating ``proofs'' for output values. Finally, there is a subtle and quite interesting \textbf{``slow reader'' attack}. Suppose that a client accesses \emph{almost all} replicas in a quorum of the current configuration, each holding a stale state, as the only correct replica in the quorum that has the up-to-date state has not yet responded. The client then falls asleep; meanwhile, the configuration is superseded by a new one and is compromised by the adversary. In an asynchronous system, it might happen that the replica that has not yet responded is compromised, while still holding its original private keys. The replica can then ``pretend'' not to be aware of the current state. Therefore, the slow client might still be able to complete its request in the superseded configuration and return a stale state. We show that this issue can be addressed by an additional ``confirming'' round-trip executed by the client. \myparagraph{Our contribution: Byzantine fault-tolerant reconfigurable services.} We provide a systematic solution to each of the challenges described above and present a set of techniques for building reconfigurable services in an asynchronous model with Byzantine faults of both clients and replicas. We consider a very strong model of the adversary: any number of clients can be Byzantine and, as soon as some configuration is installed, no assumptions are made about the correctness of replicas in any of the prior configurations.
Moreover, in our quest for a simple solution for the Byzantine model, we devised a new approach to building asynchronous reconfigurable services by further exploring the connection between reconfiguration and the lattice agreement abstraction~\cite{smartmerge, rla}. We believe that this approach can be usefully applied to crash fault-tolerant systems as well. Instead of trying to build a complex graph of configurations ``on the fly'' while transferring the state between those configurations, we start by simply assuming that we are already given a \emph{linear history} (a.k.a.\ a sequence of configurations). We introduce the notion of a \emph{{\dynamic} object} -- an object that can transfer its own state between the configurations of a given finite linear history and serve meaningful user requests. We then provide ``{\dynamic}'' implementations of several important object types such as {\LatticeAgreement} and {\MaxRegister} and expect that other asynchronous \emph{static} algorithms can be translated to the \emph{\dynamic} model using a similar set of techniques. Finally, we present a \emph{general transformation} that allows us to combine \emph{any {\dynamic} object} with two \emph{\DynamicByzantineLatticeAgreement} objects in such a way that together they constitute a single \emph{\reconfigurable} object, which exports a general-purpose reconfiguration interface and supports all the operations of the original {\dynamic} object. \myparagraph{Roadmap.} The rest of the paper is organized as follows. We overview the model assumptions in Section~\ref{sec:system-model}, define our abstractions in Section~\ref{sec:definitions}, and describe their implementations in Section~\ref{sec:rbla}. We discuss related work in Section~\ref{sec:related-work} and conclude in Section~\ref{sec:discussion}. For space reasons, we had to put some details of the discussion in the appendix.
A discussion of the access control problem in Byzantine fault-tolerant systems is delegated to Appendix~\ref{sec:access-control}. Detailed presentations and proofs of our algorithms are delegated to Appendices~\ref{sec:dbla-full} and~\ref{sec:transform-full}. Our implementation of the dynamic max-register abstraction is presented in Appendix~\ref{sec:max-register}. \section{{\Dynamic} Access Control}\label{sec:access-control} In this section, we present several {\dynamic} implementations of the {\AccessControl} abstraction defined in Section~\ref{subsec:access-control-def} and discuss their application to managing reconfiguration requests. \subsection{Sanity-check approach} \input{algorithms/access-control-short.tex} One of the simplest access control policies in a \emph{static} system is to require at least $b+1$ replicas to sign each certificate, where $b$ is the maximal possible number of Byzantine replicas, sometimes called the \emph{resilience threshold}. The correct replicas can perform some application-specific sanity checks before approving requests. The key property of this approach is that it guarantees that each valid certificate is signed by \emph{at least one correct replica}. In many cases, this is enough to guarantee resilience both against Sybil attacks~\cite{sybilattack} and against flooding the system with reconfiguration requests. The correct replicas can check the identities of new participants and refuse to sign excessively frequent requests. In \emph{\dynamic} asynchronous systems, just providing $b+1$ signatures is not enough. Despite the use of forward-secure signatures, in a superseded {\pivotal} configuration there might be significantly more than $b$ Byzantine replicas with their private keys not removed (in fact, at least $2b$).
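The static ``$b+1$ signatures'' check can be sketched as follows (a toy model with illustrative names, not the paper's pseudocode; real certificates carry digital signatures, which we abstract as signer identities):

```python
# A certificate is valid iff at least b+1 *distinct* replicas of the
# configuration signed it. With at most b Byzantine replicas, at least
# one signer is then guaranteed to be correct.

def verify_cert(signers, b, config_replicas):
    """signers: ids claimed to have signed; config_replicas: the
    configuration's membership. Duplicates and outsiders do not count."""
    distinct = set(signers) & set(config_replicas)
    return len(distinct) >= b + 1
```

Note that the check counts distinct signers only, so a Byzantine replica cannot inflate a certificate by signing twice.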
The straightforward way to implement this policy in a \emph{\dynamic} system is to add the \emph{confirming} phase, as in our implementation of {\DynamicByzantineLatticeAgreement} (see Section~\ref{subsec:dbla-impl}), after collecting $b+1$ signed approvals. The confirming phase guarantees that during the execution of the first phase the configuration was relevant (i.e., not superseded). The state transfer protocol should be the same as for {\DBLA} with the exception that no actual state is being transferred. It is needed only to ensure that the replicas of superseded {\pivotal} configurations update their private keys before new configurations are installed. This and the following approach can be generally described as ``vote-based'' access control policies. The pseudocode for their {\dynamic} implementation is presented in Algorithm~\lstref{ac-short}. \subsection{Quorum-based approach (``on-chain governance'')} A more powerful strategy in a \emph{static} system is to require a \emph{quorum} of replicas to sign each certificate. An important property of this implementation is that it can detect and prevent conflicting requests. More formally, suppose that there are values $v_1$ and $v_2$, for which the following two properties should hold: \begin{itemize} \item Both are acceptable: $\fRequestCert(v_i)$ should not return $\bot$ unless $\fRequestCert(v_j)$ was invoked in the same execution, where $j \neq i$. \item At most one may be accepted: if some process knows $\sigma_i$ such that $\fVerifyCert(v_i, \sigma_i)$, then no process should know $\sigma_j$ such that $\fVerifyCert(v_j, \sigma_j)$. \end{itemize} Note that it is possible that neither $v_1$ nor $v_2$ is accepted by the {\AccessControl} if the requests are made concurrently. To guarantee that exactly one certificate is issued, we would need to implement consensus, which is impossible in an asynchronous model~\cite{flp}.
If a correct replica has signed a certificate for value $v_i$, it should store this fact in persistent memory and refuse signing $v_j$ if requested. Due to the quorum intersection property, this guarantees the ``at most one'' semantic in a static system. This approach can be implemented in a \emph{\dynamic} system using the same pseudocode from Algorithm~\lstref{ac-short} and the state transfer protocol from our {\DBLA} implementation (\lstlinerangeref{dbla-short-replica}{state-transfer-begin}{state-transfer-end}). Using the {\dynamic} version of this approach to certifying reconfiguration requests allows us to capture the notion of what is sometimes called ``on-chain governance''. The idea is that the participants of the system (in our case, the owners of the replicas) decide which actions or updates to allow by the means of voting. Every decision needs a quorum of signed votes to be considered valid and no two conflicting decisions can be made. \subsection{Trusted administrators (the ``centralized'' approach)} A naive yet common approach to {\dynamic} systems is to have a trusted administrator, who signs the reconfiguration requests. However, if the administrator's private key is lost, the system might lose liveness, and if it is compromised, the system might lose safety. A more viable approach is to have $n$ administrators and to require $b+1$ of them to sign every certificate, for some $n$ and $b$ such that $0 \le b < n$. In this case, the system will ``survive'' up to $b$ keys being compromised and up to $n-(b+1)$ keys being lost. An interesting problem, which we will not dive into here, is to allow changing the set of administrators. One possible solution would be to ``run a reconfiguration protocol'' among the administrators. Another approach is to include the set of administrators in the configuration lattice $\Conf$ of the reconfigurable object itself and to use normal reconfiguration requests to change the set of administrators.
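The quorum-based (``on-chain governance'') policy in a \emph{static} system can be sketched as follows (a toy model under the usual quorum-intersection assumption; all names are illustrative, not the paper's pseudocode):

```python
# A replica persistently remembers the first value it signed and refuses
# conflicting requests. Since any two quorums intersect in at least one
# correct replica, two conflicting values can never both gather a quorum
# of signatures -- the "at most one accepted" semantic.

class SigningReplica:
    def __init__(self):
        self.signed = None              # stored in persistent memory

    def request_sign(self, value):
        if self.signed is not None and self.signed != value:
            return False                # refuse a conflicting request
        self.signed = value
        return True

def try_collect_cert(replicas, value, quorum_size):
    """Try to gather a quorum of approvals for `value`."""
    approvals = sum(1 for r in replicas if r.request_sign(value))
    return approvals >= quorum_size
```

As in the text, both conflicting requests may fail if issued concurrently; guaranteeing that exactly one succeeds would require consensus.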
\section{System Model}\label{sec:system-model} \myparagraph{Processes and channels.} We consider a system that consists of two non-overlapping (possibly infinite) sets of processes: the set of \emph{replicas} $\Replicas$, and the set of \emph{clients} $\Clients$. At any point in a given execution, a process can be in one of the four states: \emph{idle}, \emph{correct}, \emph{stopped}, or \emph{Byzantine}. A process is \emph{idle} if it has not taken a single step in the execution yet. A process stops being idle by taking a step, e.g., sending or receiving a message. A process is considered \emph{correct} as long as it respects the algorithm it is assigned. A process is \emph{stopped} if it has executed the ``halt'' command and takes no further steps. Finally, a process is \emph{Byzantine} if it prematurely stops taking steps of the algorithm or takes steps that are not prescribed by it. If a process remains correct forever, we say that it is \emph{forever-correct}. We assume asynchronous \emph{reliable} \emph{authenticated} point-to-point links between each pair of processes~\cite{cachin2011introduction}. If a forever-correct process $p$ sends a message $m$ to a forever-correct process $q$, then $q$ eventually delivers $m$. Moreover, if a correct process $q$ receives a message $m$ from a process $p$ at time $t$, and $p$ is correct at time $t$, then $p$ has indeed sent $m$ to $q$ before $t$. \ignore{ \begin{figure} \centering \input{compiled-drawings/process-state-transitions.pdf_tex} \caption{ Possible process state transitions. An idle process can start participating in the protocol and become correct. A correct process may crash or be corrupted (become Byzantine). A crashed process may later be corrupted by the adversary and also become Byzantine. A crashed or Byzantine process can never become correct again.
} \label{fig:process-state-transitions} \end{figure} } We assume that the adversary is computationally bound so that it is unable to break the cryptographic techniques, such as digital signatures, forward security schemes~\cite{bellare1999forward,drijvers2019pixel} and one-way hash functions. \myparagraph{Configuration lattice.} A join semi-lattice is a tuple $(\Lat, \sqsubseteq)$, where $\Lat$ is a set partially ordered by the binary relation $\sqsubseteq$ such that for all elements $x, y \in \Lat$, there exists \emph{the least upper bound} for the set $\{x,y\}$. The least upper bound of elements $x, y \in \Lat$ is a lattice element $z \in \Lat$ such that $x, y \sqsubseteq z$ and $\forall\ w \in \Lat:$ if $x, y \sqsubseteq w$, then $z \sqsubseteq w$. The least upper bound of elements $x$ and $y$ is denoted by $x \join y$, and referred to as the \emph{join operator}. $\join$ is an associative, commutative, and idempotent binary operator on $\Lat$. We write $x \sqsubset y$ whenever $x \sqsubseteq y$ and $x \neq y$. We say that $x, y \in \Lat$ are \emph{comparable} iff either $x \sqsubseteq y$ or $y \sqsubseteq x$. For any set $A$, \emph{the powerset lattice} of $A$, denoted by $2^A$, is the lattice of subsets of $A$. $\forall\ Z_1, Z_2 \in 2^A: Z_1 \join Z_2 \bydef Z_1 \cup Z_2, Z_1 \sqsubseteq Z_2 \bydef Z_1 \subseteq Z_2$. A configuration is an element of a join semi-lattice $(\Conf, \sqsubseteq)$. We assume that every configuration is associated with a finite set of replicas via a map $\ireplicas: \Conf \to 2^{\Replicas}$, and a \emph{quorum system} via a map $\iquorums: \Conf \to 2^{2^{\Replicas}}$, such that $\forall C \in \Conf: \iquorums(C) \subseteq 2^{\ireplicas(C)}$. Additionally, we assume that there is a monotone mapping from configurations to numbers $\fHeight: \Conf \to \mathbb{Z}$, such that $\forall C_1, C_2 \in \Conf:$ if $C_1 \sqsubset C_2$, then $\fHeight(C_1) < \fHeight(C_2)$.
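As a concrete illustration, the powerset lattice defined above maps directly onto Python sets: join is union, the order is inclusion, and $\fHeight(C) = |C|$ is one valid monotone mapping.

```python
# The powerset lattice 2^A in code: elements are sets, join is union,
# the partial order is inclusion, and height(C) = |C| is monotone
# (C1 strictly below C2 implies height(C1) < height(C2)).

def join(c1, c2):
    return c1 | c2

def leq(c1, c2):                    # c1 is below or equal to c2
    return c1 <= c2

def comparable(c1, c2):
    return leq(c1, c2) or leq(c2, c1)

def height(c):
    return len(c)
```

Join is clearly associative, commutative, and idempotent here, matching the abstract definition.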
We say that a configuration $C$ is \emph{\thigher} than a configuration $D$ iff $D \sqsubset C$, and that $C$ is \emph{\tlower} than $D$ iff $C \sqsubset D$.\footnote{ Notice that ``$C$ is {\thigher} than $D$'' implies ``$\fHeight(C) > \fHeight(D)$'', but not vice versa. } As we shall see, liveness properties of our reconfigurable system require that every ``active'' configuration contains an ``available'' quorum, and safety properties require that every two quorums of an ``active'' configuration have a correct replica in common. In crash fault-tolerant systems, a configuration lattice can be naturally defined as follows~\cite{dynastore,DynReconfTutorial,SKM17-reconf,freestore}: $\Updates \bydef \{+,-\} \times \Replicas$, $\Conf \bydef 2^{\Updates}$, the join operator is the set union operation~$\cup$, $\textit{replicas}(C) \bydef \{s \in \Replicas \ST (+, s) \in C \land (-, s) \notin C\}$, $\textit{quorums}(C) \bydef \{Q \subseteq \textit{replicas}(C) \ST |Q| > \frac{1}{2}\ |\textit{replicas}(C)|\}$, and $\fHeight(C) \bydef |C|$. In Byzantine fault-tolerant systems, the last condition is replaced with $|Q| > \frac{2}{3}\ |\textit{replicas}(C)|$~\cite{ByzantineQuorumSystems} and the replicas' public keys are distributed along with their ids. \myparagraph{Forward secure digital signatures.} In the \emph{{\attackName} attack}~\cite{tutorial2010}, a client may not be aware that a configuration is already compromised by the Byzantine adversary. Therefore, the client may obtain an inconsistent view of the shared data. To anticipate this attack, before an ``important'' configuration is replaced with a new one, we obligate all correct replicas in some quorum to destroy their private keys. This way, no quorum of this configuration will be able to serve any future client's request. To this end, we use \emph{forward-secure digital signatures}~\cite{bellare1999forward}: each process $p$ maintains a \emph{signing timestamp} $\ist_p$, a non-negative integer number, initially $0$.
Each process has access to three operations: \begin{enumerate} \item $\fUpdateFSKeys(t)$ sets $\ist_p$ to $t\geq \ist_p$; \item $\fFSSign(m, t)$ returns a signature for the message $m$ and timestamp $t$, assuming $t \ge \ist_p$; \item $\fFSVerify(m, r, s, t)$ returns $\iTrue$ iff $s$ was generated by invoking $\fFSSign(m, t)$ by process $r$ when $\ist_r$ was less than or equal to $t$. \end{enumerate} Note that it is, by design, impossible even for a Byzantine process to change $\ist_p$ to a lower value or to sign a message with a timestamp lower than the current value of $\ist_p$. One way to interpret this definition is to imagine a large sequence of private keys available to the process, associated with a publicly known sequence of matching public keys. When a correct replica performs $\fUpdateFSKeys(t)$, it deletes all private keys with indices smaller than $t$, and even if it turns Byzantine later on, it will not be able to recover these keys, and other processes will still be able to trust messages signed by this replica with smaller timestamps. We use timestamps to identify distinct configurations: every process $p$ is assigned a distinct private key for each configuration. Practical implementations of the forward-secure signature scheme assume a maximal timestamp $\ist_{\max}$, taken sufficiently large to accommodate all possible configuration changes. A sequence of keys consumes $O(\mathrm{Poly}(\log(\ist_{\max})))$ bytes of local storage, which allows us to maintain sequences of very large (practically unbounded) sizes of the order of $2^{64}$. \section{Related work}\label{sec:related-work} Dynamic replicated systems with \emph{passive reconfiguration}~\cite{BaldoniBKR09,AttiyaCEKW19,KW19} do not explicitly regulate arrivals and departures of replicas. Their consistency properties are ensured under strong assumptions on the churn rate. Except for the recent work~\cite{KW19}, churn-tolerant storage systems do not tolerate Byzantine failures.
In contrast, \textit{active reconfiguration} allows the clients to explicitly propose configuration updates, e.g., sets of new replica arrivals and departures. Early proposals of (actively) reconfigurable storage systems tolerating process crashes, such as RAMBO~\cite{rambo} and reconfigurable Paxos~\cite{paxos-reconfigure}, used consensus (and, thus, assumed a certain level of synchrony) to ensure that the clients agree on the evolution of configurations. Dynastore~\cite{dynastore} was the first \emph{asynchronous} reconfigurable storage: clients propose incremental additions or removals to the system configuration. As the proposals commute, the processes can resolve their disagreements without involving consensus. The \textit{parsimonious speculative snapshot} task~\cite{parsimonious} makes it possible to resolve conflicts between concurrent configuration updates in a storage system using instances of commit-adopt~\cite{Gaf98}. The worst-case time complexity of reconfiguration, in the number of message delays, was later reduced from $O(n^2)$ to $O(n)$~\cite{SKM17-reconf}, where $n$ is the number of concurrently proposed configuration updates. SmartMerge~\cite{smartmerge} made an important step forward by treating reconfiguration as an instance of abstract \emph{lattice agreement}~\cite{gla}. However, the algorithm assumes an external (reliable) lattice agreement service which makes the system not fully reconfigurable. The recently proposed \emph{reconfigurable lattice-agreement} abstraction~\cite{rla} enables truly reconfigurable versions of a large class of objects and constructions, including state-based CRDTs~\cite{crdt}, atomic-snapshot, max-register, conflict detector and commit-adopt. We believe that the reconfiguration service we introduced in this paper can be used to derive Byzantine fault-tolerant reconfigurable implementations of objects in the class.
Byzantine quorum systems~\cite{ByzantineQuorumSystems} introduce abstractions for ensuring availability and consistency of shared data in asynchronous systems with Byzantine faults. In particular, a \emph{dissemination} quorum system ensures that every two quorums have a correct process in common and that at least one quorum only contains correct processes. In this paper, we assume that a configuration that is never superseded must be equipped with a dissemination quorum. Dynamic Byzantine quorum systems~\cite{alvisi2000dynamic} appear to be the first attempt to implement a form of active reconfiguration in a Byzantine fault-tolerant data service running on a \emph{static} set of replicas, where clients can raise or lower the resilience threshold. Dynamic Byzantine storage~\cite{martin2004framework} allows a trusted \emph{administrator} to issue ordered reconfiguration calls that might also change the set of replicas. The administrator also takes care of ``forgetting'' configuration keys to anticipate the {\attackName} attack~\cite{tutorial2010}. In this paper, we propose an implementation of a Byzantine fault-tolerant reconfiguration service that does not rely on this assumption. Forward-secure signature schemes~\cite{bellare1999forward,drijvers2019pixel} enable a decentralized way to construct a sequence of distinct private keys for each process. We use the scheme to provide each process with a unique private key for each element of the configuration lattice. To counter the {\attackName} attack, we ensure that correct processes destroy their configuration keys before a new configuration is installed, without relying on a global agreement of the configuration sequence~\cite{martin2004framework}. \section{Reconfigurable objects: complete pseudocode and proofs}\label{sec:transform-full} In Section~\ref{subsec:transform-short} we present a general transformation from a {\dynamic} object into a {\reconfigurable} one.
Here we would like to clarify some assumptions, explain some motivation behind our implementation, provide the full pseudocode, describe some possible optimizations, and, finally, prove the correctness of our transformation. \subsection{Implementation}\label{subsec:transform-full-implementation} \input{algorithms/transform-full.tex} \myparagraph{Cryptographic keys.} In Algorithm~\lstref{transform}, we use several {\dynamic} objects. We assume that correct replicas have separate public/private key pairs for each dynamic object. This prevents replay attacks across objects and allows each {\dynamic} object to manage its keys separately. \myparagraph{Bootstrapping.} We know from prior work~\cite{smartmerge,rla} that lattice agreement is very useful for reconfiguration. In particular, it can be used to build comparable configurations. We take a step further and use two separate instances of lattice agreement: one to build comparable configurations ($\iConfLA$) and the other to build histories out of them ($\iHistLA$). We can then use these two LA objects to reconfigure a single {\dynamic} object. However, this raises a question: how to reconfigure the lattice agreement objects themselves? We found the answer in the idea that is sometimes referred to as ``bootstrapping''. We use the lattice agreement objects to reconfigure \emph{themselves and at least one other object}. This implies that the lattice agreement objects share the configurations with $\iUObj$. The most natural implementation is that the code for all three dynamic objects will be executed by the same set of replicas. Bootstrapping is dangerous because, if applied badly, it can lead to infinite recursion. However, we structured our solution in such a way that there is no recursion at all: the client first makes normal requests to $\iConfLA$ and $\iHistLA$, and then uses the resulting history to reconfigure all {\dynamic} objects, as if this history were obtained by the client from outside the system.
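The recursion-free flow just described can be sketched as follows. This is a deliberately collapsed model with hypothetical names: $\iConfLA$ and $\iHistLA$ are reduced to trivial local joins, whereas in the real protocol they are replicated {\DBLA} objects.

```python
# Illustrative sketch of the bootstrapped reconfiguration flow: the
# client first talks to the two lattice-agreement objects and only then
# feeds the resulting history to *all* dynamic objects -- no recursion.

class DynamicObject:
    def __init__(self):
        self.history = []

    def update_history(self, history):      # adopt a longer history
        if len(history) > len(self.history):
            self.history = history

def reconfigure(current_config, known_history, dynamic_objects, proposal):
    # Step 1 (ConfLA, collapsed): join concurrent proposals into one config.
    agreed = frozenset(current_config | proposal)
    # Step 2 (HistLA, collapsed): extend the history with the agreed
    # configuration; comparability makes sorting by size a linearization.
    history = sorted(set(known_history) | {agreed}, key=len)
    # Step 3: hand the history to every dynamic object, as if it came
    # from outside the system.
    for obj in dynamic_objects:
        obj.update_history(history)
    return history
```

The key point mirrored here is that the lattice-agreement objects are reconfigured with the same history as the user object, so they bootstrap themselves without ever calling back into the reconfiguration logic.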
It is important to notice that liveness of the call $\iHistLA.\fVerifyOutputValue(h, \sigma)$ is not affected by reconfiguration: the function simply checks some digital signatures and is guaranteed to always terminate given enough processing time. \myparagraph{Shared parts.} All implementations of {\dynamic} objects presented in this paper have a similar structure. For example, they all share the same code for state transfer (\lstlinerangeref{dbla-replica-1}{state-transfer-begin}{state-transfer-end}). However, we do not rule out that other {\dynamic} objects might have very different implementations. Therefore, in our transformation we use $\iUObj$ as a ``black box'' and do not make any assumptions about its implementation. Moreover, for simplicity, we use the two {\DBLA} objects as ``black boxes'' as well. In fact, $\iConfLA$ and $\iHistLA$ may have different implementations and the transformation will still work as long as they satisfy the specification from Section~\ref{sec:definitions}. However, this comes at a cost. In particular, if implemented naively, a single {\reconfigurable} object will run several independent state transfer protocols. If, as in our paper, all {\dynamic} objects have similar implementations of their state transfer protocols, this can be done more efficiently by combining all state transfer protocols into one, which should transfer the states of all {\dynamic} objects and make sure that superseded configurations are harmless. Another part shared by all {\dynamic} objects presented here is the mechanism they use to disseminate new histories (\lstlineref{dbla-client-1}{update-history}), using the global reliable broadcast primitive (Section~\ref{subsec:bcast}). Moreover, we do the same on the ``higher'' level, in the pseudocode for our transformation itself (\lstlineref{transform}{update-config-broadcast}).
This is necessary to guarantee that if one {\dynamic} object installs a configuration, then all other {\dynamic} objects will do the same eventually. Of course, there is no need to broadcast the same data multiple times, and practical implementations should avoid it when possible. \subsection{Proof of correctness} \label{subsec:transform-proofs} In the following two lemmas, we show that we use the {\dynamic} objects ($\iConfLA$, $\iHistLA$, and $\iUObj$) correctly, i.e., all requirements imposed on verifiable histories are satisfied. \begin{lemma} All histories passed to the {\dynamic} objects by correct processes (\lstlinerangeref{transform}{new-history-update-dbla-begin}{new-history-update-dbla-end}) are verifiable with $\fVerifyHistory$ (\lstlineref{transform}{verify-history}). \end{lemma} \begin{proof} Follows directly from the implementation (\lstlineref{transform}{new-history-verify}). \end{proof} \begin{lemma} \label{lem:transform-verifiable-histories} All histories verifiable with $\fVerifyHistory$ (\lstlineref{transform}{verify-history}) are (1)~well-formed (that is, consist of comparable configurations) and (2)~related by containment. Moreover, (3)~in any given infinite execution there is only a finite number of histories verifiable with $\fVerifyHistory$. \end{lemma} \begin{proof} (1)~follows from the {\pComparability} property of $\iConfLA$, the {\pValidity} property of $\iHistLA$, and the definition of $\iHistLA.\fVerifyInputValue$ (\lstlineref{transform}{histla-verify-input-value}). (2)~follows directly from the {\pComparability} property of $\iHistLA$. (3)~follows from the requirement of a finite number of verifiable input configurations and the {\pValidity} property of $\iConfLA$ and $\iHistLA$. Only a finite number of configurations can be formed by $\iConfLA$ out of a finite number of verifiable input configurations. Then, only a finite number of histories can be formed by $\iHistLA$ out of configurations from $\iConfLA$.
\end{proof} \begin{theorem}[Transformation safety] \label{the:transform-safety} Our implementation satisfies the \emph{safety} properties of a {\reconfigurable} object: {\pReconfigurationValidity}, {\pHistoryComparability}, and {\pInstallationValidity}. \end{theorem} \begin{proof} \leavevmode \begin{itemize} \item {\pReconfigurationValidity} follows from the {\pComparability} property of $\iConfLA$, the {\pValidity} property of $\iConfLA$ and $\iHistLA$, and the definition of $\iHistLA.\fVerifyInputValue$ (\lstlineref{transform}{histla-verify-input-value}); \item {\pHistoryComparability} follows from the {\pComparability} property of $\iHistLA$; \item Finally, {\pInstallationValidity} follows from the {\pInstallationValidity} of the underlying {\dynamic} objects. \end{itemize} \end{proof} \begin{theorem}[{\pHistoryConvergence}] \label{the:transform-history-convergence} Local histories of all correct processes will eventually become identical. \end{theorem} \begin{proof} The same reasoning as for Theorem~\ref{the:history-convergence} ({\pHistoryConvergence} theorem for $\DBLA$) can be applied with the only modification that instead of ``verifiable histories'' we need to consider ``histories verifiable with $\fVerifyHistory$'' (see Lemma~\ref{lem:transform-verifiable-histories}). \end{proof} \begin{theorem}[Transformation liveness] \label{the:transform-liveness} Our implementation satisfies the \emph{liveness} properties of a {\reconfigurable} object: {\pReconfigurationLiveness}, {\pHistoryConvergence}, and {\pNewInstallationLiveness}. \end{theorem} \begin{proof} \leavevmode \begin{itemize} \item {\pReconfigurationLiveness} follows from the {\pOperationalLiveness} property of $\iConfLA$ and $\iHistLA$ and the liveness of reliable broadcast. \item {\pHistoryConvergence} follows from Theorem~\ref{the:transform-history-convergence}; \item Finally, {\pNewInstallationLiveness} follows from the {\pNewInstallationLiveness} of the underlying {\dynamic} objects. 
\end{itemize} \end{proof} \section{{\Dynamic} {\BLA}: complete implementation and proofs}\label{sec:dbla-full} We define and describe the implementation of our {\DynamicByzantineLatticeAgreement} abstraction in Section~\ref{subsec:dbla-impl}. Here we would like to present the complete pseudocode, clarify several details, and provide additional discussion of the protocol (Section~\ref{subsec:dbla-impl-full}), give a rigorous proof of correctness (Section~\ref{subsec:dbla-correctness-proof}), and, finally, describe possible directions for optimizations (Section~\ref{subsec:dbla-optimizations}). \subsection{Implementation} \label{subsec:dbla-impl-full} \input{algorithms/dbla-full.tex} Our implementation of the {\DBLA} abstraction is split into four parts: Algorithms~\lstref{dbla-client-1} and~\lstref{dbla-client-2} describe the behaviour of a correct client, while Algorithms~\lstref{dbla-replica-1} and~\lstref{dbla-replica-2} describe the behaviour of a correct replica. In the beginning of Algorithms~\lstref{dbla-client-1} and~\lstref{dbla-replica-1} we describe all parameters, global variables, and auxiliary functions used in the code. Additionally, we use ``{\RBBroadcast{$\MsgDesc{MessageDescriptor}$, $\imsgParams$ ...}}'' to denote the call to the \emph{global reliable broadcast} primitive introduced in Section~\ref{subsec:bcast}, and ``{\URBBroadcastIn{$\MsgDesc{MessageDescriptor}$, $\imsgParams$ ...}{$C$}}'' to denote the call to the \emph{local uniform reliable broadcast} primitive in configuration $C$ (see Section~\ref{subsec:bcast}). \myparagraph{Execution environment.} We assume single-threaded execution of the pseudocode. The lines of code are to be executed one by one in a sequential order. Some events (such as message delivery or an assignment to a global variable) may activate some handlers, but the execution of these handlers is delayed.
However, we expect some ``fairness'' in the sense that if some handler remains ``ready'' indefinitely, it will eventually be executed. Sometimes waiting is explicitly mentioned in the code (e.g.,~\lstlineref{dbla-client-1}{propose-wait}). In these places we assume that the control flow may switch to other blocks. It may later return to the line after the~``\textbf{wait for}'' statement if the condition in the statement is satisfied. \myparagraph{Notation.} We use ``let $\textit{var} = \textit{expression}$'' to denote an assignment to a local variable and ``$\textit{var} \gets \textit{expression}$'' to denote an assignment to a global variable (they are usually defined in the ``\GlobalsBlockName'' section). We use angle brackets to denote tuples and messages. They might start with a \emph{message descriptor}, which is written in bold (e.g.,~$\Tupple{\mPropose, \icurrentValues, \iseqNumber, C}$). \myparagraph{The use of powerset lattice.} In our implementation, we internally use the powerset lattice $2^{\Lat}$ and only join the values before returning the result to the client (\lstlineref{dbla-client-1}{propose-return}). This gives the right result because of one simple observation: if $A, B \in 2^{\Lat}$ and $A \subseteq B$, then, by associativity and commutativity, $\fJoinAll(B) = \fJoinAll(A) \join \fJoinAll(B \setminus A)$. By definition of ``$\join$'', $\fJoinAll(A) \sqsubseteq \fJoinAll(A) \join \fJoinAll(B \setminus A)$. This approach makes it easy to guarantee the {\pValidity} property of {\BLA}, but in some cases it might increase the communication cost (i.e., the number of bytes exchanged through the network) and the number of rounds needed to terminate. \myparagraph{Number of accessed configurations.} By accessing $\iConfLA$ and then $\iHistLA$, we minimize the number of configurations that should be accessed for a consistent configuration shift.
Indeed, due to the {\pComparability} property of {\ByzantineLatticeAgreement}, all configurations submitted to $\iHistLA$ will already be ordered and any set of such configurations will form a sequence (i.e., a history as defined in Section~\ref{subsec:reconf-objects}). As a result, when $k$ configurations are concurrently proposed, only $O(k)$ configurations are accessed for state transfer, and not an exponential number as in some earlier work on reconfiguration~\cite{dynastore,parsimonious}. This is known to be optimal~\cite{SKM17-reconf}. \subsection{Proof of correctness} \label{subsec:dbla-correctness-proof} \paragraph*{{\DBLA} safety} Recall that a configuration is called \emph{candidate} iff it appears in the local history of a correct process. The following lemma gathers some obvious yet extremely useful statements about candidate configurations. \begin{lemma}[Candidate configurations] \label{lem:candidate-configurations} \leavevmode \begin{enumerate} \item \label{itm:cand-in-hist} Each candidate configuration is present in some verifiable history. \item \label{itm:cand-fin} There is a finite number of candidate configurations. \item \label{itm:cand-comp} All candidate configurations are comparable with ``$\sqsubseteq$''. \end{enumerate} \end{lemma} \begin{proof} Only a verifiable history can be adopted by a correct process (\lstlineref{dbla-client-2}{new-history-verify} and \lstlineref{dbla-replica-2}{new-history-verify}), hence (\ref{itm:cand-in-hist}). % The total number of verifiable histories is required to be finite, and each history is finite, hence (\ref{itm:cand-fin}). % All verifiable histories are required to be related by containment and all configurations within one history are required to be comparable, hence (\ref{itm:cand-comp}). \end{proof} A configuration is called \emph{\pivotal} if it is the last configuration in some verifiable history. Non-{\pivotal} candidate configurations are called \emph{\tentative}. 
Intuitively, the next lemma states that in the rest of the proofs we can almost always consider only {\pivotal} configurations. {\Tentative} configurations are both harmless and useless. \begin{lemma}[{\Tentative} configurations] \label{lem:tentative-configurations} \leavevmode \begin{enumerate} \item No correct client will ever make a request to a {\tentative} configuration. \item {\Tentative} configurations cannot be installed. \item A correct process will never invoke $\fFSVerify$ with timestamp $\fHeight(C)$ for any {\tentative} configuration $C$. \item A correct replica will never broadcast any message via the uniform reliable broadcast primitive in a {\tentative} configuration. \end{enumerate} \end{lemma} \begin{proof} Follows directly from the algorithm. Both clients and replicas only operate on configurations that were obtained by invoking the function $\fMaxElement(h)$ on some verifiable history. \end{proof} The next lemma states that correct processes cannot ``miss'' any \emph{\pivotal} configurations in their local histories. This is crucial for the correctness of the state transfer protocol. \begin{lemma} \label{lem:pivotal-configurations} If $C \sqsubseteq \fMaxElement(h)$, where $C$ is a {\pivotal} configuration and $h$ is the local history of a correct process, then $C \in h$. \end{lemma} \begin{proof} Follows directly from the definition of a {\pivotal} configuration and the requirement that all verifiable histories are related by containment (see Section~\ref{subsec:dynamic-objects}). \end{proof} Recall that a configuration is called \emph{superseded} iff some {\thigher} configuration is installed (see Section~\ref{subsec:reconf-objects}). A configuration is \emph{installed} iff some \emph{correct} replica has triggered the $\fInstalledConfig$ up-call (\lstlineref{dbla-replica-2}{upon-update-complete-up-call}).
For this, the correct replica must receive a quorum of $\mUpdateComplete$ messages via the uniform reliable broadcast primitive (\lstlineref{dbla-replica-2}{upon-update-complete}). \begin{theorem}[\pInstallationValidity] \label{the:installation-validity} Our implementation of {\DBLA} satisfies {\pInstallationValidity}. I.e., only a candidate configuration can be installed. \end{theorem} \begin{proof} Follows directly from the implementation. A correct replica will not install a configuration until it is in the replica's local history (\lstlineref{dbla-replica-2}{upon-update-complete-wait}). \end{proof} In our algorithm it is possible for a configuration to be installed after it was superseded. There are at least two scenarios in which this might happen: \begin{enumerate} \item If a quorum of replicas broadcast $\mUpdateComplete$ messages in some configuration $C$ which is not yet installed, then a {\thigher} configuration is installed, making $C$ superseded, and then some correct replica in $C$ delivers the broadcast messages and triggers the up-call (\lstlineref{dbla-replica-2}{upon-update-complete-up-call}). \item If $C$ is superseded, but there is one ``slow'' replica that doesn't yet know about it. The other replicas might turn Byzantine and broadcast the $\mUpdateComplete$ messages in $C$. Since an $\mUpdateComplete$ message doesn't include any additional information such as digital signatures, the correct replica has no way to tell that these messages are from Byzantine replicas and will trigger the up-call (\lstlineref{dbla-replica-2}{upon-update-complete-up-call}). \end{enumerate} We could technically prevent the second scenario, but not the first one. Moreover, some ``intermediate'' scenarios are also inevitable. Let us call the configurations that were installed while being {\tactive} (i.e., not superseded) ``\emph{properly installed}''. We will use this definition to prove the next few lemmas.
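The key-update argument below relies on the forward-secure signing discipline used throughout the protocol. The following minimal sketch (our illustration; the class and method names are hypothetical, and the ``signature'' is a placeholder rather than real cryptography) models the one property the proofs depend on: once a key is updated past timestamp $t$, no signature with timestamp $t$ can ever be produced again, even by a replica that later turns Byzantine.

```python
# Hypothetical model of a forward-secure signing key (not a real scheme).
# Invariant used by the proofs: update() is irreversible, so after updating
# past timestamp t, the key can never again sign with timestamp t.

class ForwardSecureKey:
    def __init__(self):
        self.timestamp = 0  # plays the role of st_r in the paper's notation

    def update(self, new_timestamp):
        """Irreversibly advance the key; older signing power is destroyed."""
        self.timestamp = max(self.timestamp, new_timestamp)

    def sign(self, message, timestamp):
        """Signing with a given timestamp is possible only while the key
        has not been updated past it."""
        if timestamp < self.timestamp:
            raise PermissionError("key already updated past this timestamp")
        return ("signed", message, timestamp)  # placeholder signature

key = ForwardSecureKey()
key.sign("UpdateReadResp", timestamp=3)      # allowed: key not yet updated
key.update(5)                                # learned about a higher configuration
try:
    key.sign("UpdateReadResp", timestamp=3)  # impossible from now on
except PermissionError:
    pass
```

This is exactly the mechanism that lets a superseded {\pivotal} configuration become ``harmless'': the state transfer protocol forces a quorum of its replicas to update their keys, after which no quorum can produce signatures with that configuration's timestamp.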
\begin{lemma} The {\tlowest} properly installed configuration {\thigher} than configuration $C$ is the first installed configuration {\thigher} than $C$ in time order. \end{lemma} \begin{proof} Let $N$ be the {\tlowest} properly installed configuration {\thigher} than $C$. If some configuration {\thigher} than $N$ were installed earlier, then $N$ would not be properly installed. If some configuration between $C$ and $N$ were installed earlier, then $N$ would not be the {\tlowest}. \end{proof} The following lemma stipulates that our state transfer protocol makes the superseded {\pivotal} configurations ``harmless'' by leveraging a forward-secure signature scheme. \begin{lemma}[Key update] \label{lem:key-update} If a {\pivotal} configuration $C$ is superseded, then there is no quorum of replicas in that configuration capable of signing messages with timestamp $\fHeight(C)$, i.e., $\nexists Q \in \iquorums(C) \textST \forall r \in Q: \ist_r \le \fHeight(C)$. \end{lemma} \begin{proof} Let $N$ be the {\tlowest} properly installed configuration {\thigher} than $C$. Let us consider the moment when $N$ was installed. % By the algorithm, all correct replicas in some quorum $Q_N \in \iquorums(N)$ had to broadcast $\mUpdateComplete$ messages before $N$ was installed (\lstlineref{dbla-replica-2}{upon-update-complete}). % Since $N$ was not yet superseded at that moment, there was at least one correct replica $r_N \in Q_N$. By Lemma~\ref{lem:pivotal-configurations}, $C$ was in $r_N$'s local history whenever it performed state transfer to any configuration {\thigher} than $C$. % By the protocol, a correct replica only advances its $\Ccurr$ variable after executing the state transfer protocol (\lstlineref{dbla-replica-1}{state-transfer-ccurr-set}) or right before installing a configuration (\lstlineref{dbla-replica-2}{upon-update-complete-ccurr-set}).
% Since no configurations between $C$ and $N$ were yet installed, $r_N$ had to pass through $C$ in its state transfer protocol and to receive $\mUpdateReadResp$ messages from some quorum $Q_C \in \iquorums(C)$ (\lstlineref{dbla-replica-1}{state-transfer-wait}). Recall that correct replicas update their private keys whenever they learn about a {\thigher} configuration (\lstlineref{dbla-replica-2}{new-history-update-fs-keys}) and that they will only reply to a message $\Tupple{\mUpdateRead, \isn, C}$ once $C$ is \emph{not} the {\thighest} configuration in their local histories (\lstlineref{dbla-replica-2}{upon-update-read-wait}). % This means that all correct replicas in $Q_C$ actually had to update their private keys before $N$ was installed, and, hence, before $C$ was superseded. By the quorum intersection property, this means that in each quorum in $C$ at least one replica updated its private key to a timestamp higher than $\fHeight(C)$ and will not be capable of signing messages with timestamp $\fHeight(C)$ even if it becomes Byzantine. \end{proof} Notice that in a {\tentative} configuration there might be arbitrarily many Byzantine replicas with their private keys not updated. This is inevitable in an asynchronous system: forcing the replicas in {\tentative} configurations to update their private keys would require solving consensus. This doesn't affect correct processes because, as shown in Lemma~\ref{lem:tentative-configurations}, {\tentative} configurations are harmless. However, it is important to remember this when designing new {\dynamic} protocols. The following lemma formalizes that the state is correctly transferred between configurations.
\begin{lemma}[State transfer correctness] \label{lem:state-transfer-correctness} \leavevmode \\ If $\sigma = \Tupple{\ivalues, h, \sigma_h, \iproposeAcks, \iconfirmAcks}$ is a valid proof for $v$, then for each {\tactive} installed configuration $D$ such that $\fMaxElement(h) \sqsubset D$, there is a quorum $Q_D \in \iquorums(D)$ such that for each correct replica in $Q_D$: $\ivalues \subseteq \icurrentValues$. \end{lemma} \begin{proof} Let $C = \fMaxElement(h)$. We proceed by induction on the sequence of all properly installed configurations {\thigher} than $C$. Let us denote this sequence by $\Cnice$. % Other configurations are not interesting simply because there is no such moment in time when they are simultaneously {\tactive} and installed. Let $N$ be the {\tlowest} configuration in $\Cnice$. % Let $Q_C \in \iquorums(C)$ be a quorum of replicas whose signatures are in $\iproposeAcks$. % Since $N$ is installed, there is a quorum $Q_N \in \iquorums(N)$ in which all correct replicas broadcast $\Message{\mUpdateComplete, N}$. % For each correct replica $r_N \in Q_N$, $r_N$ passed through configuration $C$ in its state transfer protocol and received $\mUpdateReadResp$ messages from some quorum of replicas in $C$. Note that at that moment $N$ wasn't yet installed, and, hence, $C$ wasn't yet superseded. % By the quorum intersection property, there is at least one correct replica $r_C \in Q_C$ that sent an $\mUpdateReadResp$ message to $r_N$ (\lstlineref{dbla-replica-2}{upon-update-read-send-resp}). % Because $r_C$ will only react to $r_N$'s message after updating its private keys (\lstlineref{dbla-replica-2}{upon-update-read-wait}), it had to sign $\Tupple{\mProposeResp, \ivalues}$ (\lstlineref{dbla-replica-1}{upon-propose-sign}) before sending a reply to $r_N$, which means that the $\mUpdateReadResp$ message from $r_C$ to $r_N$ must have contained a set of values that includes all values from $\ivalues$. % This proves the base case of the induction.
Let us consider any configuration $D \in \Cnice$ such that $N \sqsubset D$. Let $M$ be the {\thighest} configuration in $\Cnice$ such that $N \sqsubseteq M \sqsubset D$ (in other words, the closest to $D$ in $\Cnice$). % Assume that the statement holds for $M$, i.e., while $M$ was {\tactive}, there was a quorum $Q_M \in \iquorums(M)$ such that for each correct replica in $Q_M$: $\ivalues \subseteq \icurrentValues$. % Similarly to the base case, let us consider a quorum $Q_D \in \iquorums(D)$ such that every correct replica in $Q_D$ reliably broadcast $\Message{\mUpdateComplete, D}$ before $D$ was installed. % For each correct replica $r_D \in Q_D$, by the quorum intersection property, there is at least one correct replica in $Q_M$ that sent an $\mUpdateReadResp$ message to $r_D$. This replica attached its $\icurrentValues$ to the message, which contained $\ivalues$. % This proves the induction step and completes the proof. \end{proof} The next lemma states that if two output values were produced in the same configuration, they are comparable. In a static system it could be proven by simply referring to the quorum intersection property. In a {\dynamic} Byzantine system, however, to use the quorum intersection, we need to prove that the configuration was {\tactive} during the whole period when the clients were exchanging data with the replicas. We need to prove that the ``slow reader'' attack is impossible. Luckily, we have the second stage of our algorithm designed for this sole purpose. \begin{lemma}[{\pComparability} in one configuration] \label{lem:comparable-in-one-config} \leavevmode \\ If $\sigma_1 = \Tupple{\ivalues_1, h_1, \sigma_{h1}, \iproposeAcks_1, \iconfirmAcks_1}$ is a valid proof for output value $v_1$, \\ and $\sigma_2 = \Tupple{\ivalues_2, h_2, \sigma_{h2}, \iproposeAcks_2, \iconfirmAcks_2}$ is a valid proof for output value $v_2$, \\ and $\fMaxElement(h_1) = \fMaxElement(h_2)$, then $v_1$ and $v_2$ are comparable.
\end{lemma} \begin{proof} Let $C = \fMaxElement(h_1) = \fMaxElement(h_2)$. % By definition, ``$\sigma$ is a valid proof for $v$'' means that $\fVerifyOutputValue(v, \sigma) = \iTrue$ (\lstlineref{dbla-client-1}{verify-output-value}). By the implementation, this means that $h_1$ and $h_2$ are verifiable histories (\lstlineref{dbla-client-1}{verify-output-value-verify-history}). Therefore, $C$ is a {\pivotal} configuration. The set $\iconfirmAcks_1$ contains signatures from a quorum of replicas of configuration $C$, with timestamp $\fHeight(C)$. Each of these signatures had to be produced after each of the signatures in $\iproposeAcks_1$ because they sign the message $\Tupple{\mConfirmResp, \iproposeAcks_1}$ (\lstlineref{dbla-replica-1}{upon-confirm-sign}). % Combining this with the statement of Lemma~\ref{lem:key-update} (Key Update), it follows that at the moment when the last signature in the set $\iproposeAcks_1$ was created, the configuration $C$ was {\tactive} (otherwise it would be impossible to gather $\iconfirmAcks_1$). % We can apply the same argument to the sets $\iproposeAcks_2$ and $\iconfirmAcks_2$. It follows that there are quorums $Q_1, Q_2 \in \iquorums(C)$ and a moment in time $t$ such that: (1) $C$ is not superseded at time $t$, (2) all \emph{correct} replicas in $Q_1$ signed the message $\Tupple{\mProposeResp, \ivalues_1}$ before $t$, and (3) all \emph{correct} replicas in $Q_2$ signed the message $\Tupple{\mProposeResp, \ivalues_2}$ before $t$. % Since $C$ is not superseded at time $t$, there must be a correct replica in $Q_1 \cap Q_2$ (by the {\pQuorumIntersection} property), which signed both $\Tupple{\mProposeResp, \ivalues_1}$ and $\Tupple{\mProposeResp, \ivalues_2}$ (\lstlineref{dbla-replica-1}{upon-propose-sign}).
% Since correct replicas only sign $\mProposeResp$ messages with comparable sets of values\footnote{ Indeed, elements are only added to the set $\icurrentValues$ of each correct replica, and the replicas only sign messages with the same set of verifiable input values as $\icurrentValues$ (see \lstlinerangeref{dbla-replica-1}{upon-propose-cur-vals}{upon-propose-sign}). }, $\ivalues_1$ and $\ivalues_2$ are comparable, i.e., either $\ivalues_1 \subseteq \ivalues_2$ or $\ivalues_2 \subseteq \ivalues_1$. Hence, $v_1 = \fJoinAll(\ivalues_1)$ and $v_2 = \fJoinAll(\ivalues_2)$ are comparable. \end{proof} Finally, let us combine the two previous lemmas to prove the {\pComparability} property of our {\DBLA} implementation. \begin{theorem}[{\pComparability}] \label{the:comparability} Our implementation of {\DBLA} satisfies the {\pComparability} property. That is, all verifiable output values are comparable. \end{theorem} \begin{proof} Let $\sigma_1 = \Tupple{\ivalues_1, h_1, \sigma_{h1}, \iproposeAcks_1, \iconfirmAcks_1}$ be a valid proof for output value $v_1$, and $\sigma_2 = \Tupple{\ivalues_2, h_2, \sigma_{h2}, \iproposeAcks_2, \iconfirmAcks_2}$ be a valid proof for output value $v_2$. Also, let $C_1 = \fMaxElement(h_1)$ and $C_2 = \fMaxElement(h_2)$. Since $h_1$ and $h_2$ are verifiable histories (\lstlineref{dbla-client-1}{verify-output-value-verify-history}), both $C_1$ and $C_2$ are {\pivotal} by definition. If $C_1 = C_2$, $v_1$ and $v_2$ are comparable by Lemma~\ref{lem:comparable-in-one-config}. Consider the case when $C_1 \neq C_2$. Without loss of generality, assume that $C_1 \sqsubset C_2$. % Let $Q_1 \in \iquorums(C_2)$ be a quorum of replicas whose signatures are in $\iproposeAcks_2$. Let $t$ be the moment when the first correct replica signed $\Message{\mProposeResp, \ivalues_2}$. Correct replicas only start processing user requests in a configuration when this configuration is installed (\lstlineref{dbla-replica-1}{upon-propose-wait}).
% Therefore, by Lemma~\ref{lem:state-transfer-correctness}, at time $t$ there was a quorum of replicas $Q_2 \in \iquorums(C_2)$ such that for every correct replica in $Q_2$: $\ivalues_1 \subseteq \icurrentValues$. % By the quorum intersection property, there must be at least one correct replica in $Q_1 \cap Q_2$. Hence, $\ivalues_1 \subseteq \ivalues_2$ and $\fJoinAll(\ivalues_1) \sqsubseteq \fJoinAll(\ivalues_2)$. \end{proof} \begin{theorem}[{\DBLA} safety] \label{the:dbla-safety} Our implementation satisfies the safety properties of {\DBLA}: {\pValidity}, {\pVerifiability}, {\pInclusion}, {\pComparability}, {\pDynamicValidity}, {\pHistoryComparability}, and {\pInstallationValidity}. \end{theorem} \begin{proof} \leavevmode \begin{itemize} \item {\pValidity} follows directly from the implementation: a correct client collects verifiable input values and joins them before returning from $\fPropose$ (\lstlineref{dbla-client-1}{propose-return}); \item {\pVerifiability} follows directly from how correct clients form and check the proofs for output values (Algorithm~\ref{alg:dbla-client-1}, lines~\ref{lst:dbla-client-1:propose-make-proof} and~\ref{lst:dbla-client-1:verify-output-value-unpack}--\ref{lst:dbla-client-1:verify-output-value-end}); \item {\pInclusion} follows from the fact that the set $\icurrentValues$ of a correct client only grows (\lstlineref{dbla-client-2}{refine-current-values}); \item {\pComparability} follows from Theorem~\ref{the:comparability}; \item {\pDynamicValidity} follows from the fact that correct processes check that the history is verifiable before adopting it (\lstlineref{dbla-client-2}{new-history-verify} and \lstlineref{dbla-replica-2}{new-history-verify}); \item The {\pHistoryComparability} property has two statements in it. One of them follows from the requirement that verifiable histories are comparable.
The other one follows from the fact that correct processes check that $\ihistory \subset h$ before adopting history $h$ (\lstlineref{dbla-client-2}{new-history-verify} and \lstlineref{dbla-replica-2}{new-history-verify}); \item Finally, {\pInstallationValidity} follows from Theorem~\ref{the:installation-validity}. \end{itemize} \end{proof} \paragraph*{{\DBLA} liveness} \begin{theorem}[\pHistoryConvergence] \label{the:history-convergence} Local histories of all correct processes will eventually become identical. \end{theorem} \begin{proof} Let $p$ and $q$ be any two forever-correct processes\footnote{ If either $p$ or $q$ eventually halts or becomes Byzantine, their local histories are not required to converge. }. % Suppose, for contradiction, that the local histories of $p$ and $q$ have diverged at some point and will never converge again. % Recall that correct processes only adopt verifiable histories, and that we require the total number of verifiable histories to be finite. % Therefore, there is some history $h_p$, which is the largest history ever adopted by $p$, and some history $h_q$, which is the largest history ever adopted by $q$. % Since all verifiable histories are required to be related by containment, and we assume that $h_p \neq h_q$, one of them must contain the other. % Without loss of generality, suppose that $h_p \subset h_q$. Since $q$ had to deliver $h_q$ through reliable broadcast (unless $h_q$ is the initial history), and $q$ remains correct forever, $p$ will eventually deliver $h_q$ as well, and will adopt it. A contradiction. \end{proof} Next, we introduce an important definition, which we will use throughout the rest of the proofs. \begin{definition}[Maximal installed configuration] \label{def:cmax} In a given infinite execution, a \emph{maximal installed configuration} is a configuration that eventually becomes installed and never becomes superseded.
\end{definition} \begin{lemma}[$\Cmax$ existence] \label{lem:cmax-existence} In any infinite execution there is a unique maximal installed configuration. \end{lemma} \begin{proof} By Lemma~\ref{lem:candidate-configurations} (Candidate configurations) and Theorem~\ref{the:installation-validity} (\pInstallationValidity), the total number of installed configurations is finite and they are comparable. Hence, we can choose a unique maximum among them, which is never superseded by definition. \end{proof} Let us denote the (unique) maximal installed configuration by $\Cmax$. \begin{lemma}[$\Cmax$ installation] \label{lem:cmax-installation} The maximal installed configuration will eventually be installed by all correct replicas. \end{lemma} \begin{proof} Since $\Cmax$ is installed, by definition, at some point some correct replica triggered the up-call $\fInstalledConfig(\Cmax)$ (\lstlineref{dbla-replica-2}{upon-update-complete-up-call}). This, in turn, means that this replica delivered a quorum of $\mUpdateComplete$ messages via the \emph{uniform reliable broadcast} in $\Cmax$ when it was correct. % Therefore, even if this replica later becomes Byzantine, by definition of the uniform reliable broadcast, either $\Cmax$ will become superseded (which is impossible), or every correct replica will eventually deliver the same set of $\mUpdateComplete$ messages and install $\Cmax$. \end{proof} \begin{lemma}[State transfer progress] \label{lem:state-transfer-progress} State transfer (\lstlinerangeref{dbla-replica-1}{state-transfer-begin}{state-transfer-end}), executed by a forever-correct replica, always terminates. \end{lemma} \begin{proof} Let $r$ be a correct replica executing state transfer. % By Lemma~\ref{lem:candidate-configurations}, the total number of candidate configurations is finite.
Therefore, it is enough to prove that there is no configuration such that $r$ waits indefinitely for replies from a quorum of that configuration (\lstlineref{dbla-replica-1}{state-transfer-wait}). % Suppose, for contradiction, that there is such a configuration $C$. If $C \sqsubset \Cmax$, then, by Lemma~\ref{lem:cmax-installation}, $r$ will eventually install $\Cmax$, and $\Ccurr$ will become not {\tlower} than $\Cmax$ (\lstlineref{dbla-replica-2}{upon-update-complete-ccurr-set}). Hence, $r$ will stop waiting through the first condition ($C \sqsubset \Ccurr$). A contradiction. Otherwise, if $\Cmax \sqsubseteq C$, then, by the definition of $\Cmax$, $C$ will never be superseded. % Since $r$ remains correct forever, by Theorem~\ref{the:history-convergence} (\pHistoryConvergence), every correct replica will eventually have $C$ in its local history. Since we assume reliable links between processes (see Section~\ref{sec:system-model}), every correct replica in $\ireplicas(C)$ will eventually receive $r$'s $\mUpdateRead$ message and will send a reply, which $r$ will receive (\lstlineref{dbla-replica-2}{upon-update-read-send-resp}). % Hence, the waiting on line~\ref{lst:dbla-replica-1:state-transfer-wait} of Algorithm~\ref{alg:dbla-replica-1} will eventually terminate through the second condition ($r$ will receive responses from some $Q \in \iquorums(C)$ with the correct sequence number). A contradiction. \end{proof} Intuitively, the following lemma states that $\Cmax$ is, in some sense, the ``final'' configuration. After some point every correct process will operate exclusively on $\Cmax$. No correct process will know about any configuration {\thigher} than $\Cmax$ or ``care'' about any configuration {\tlower} than $\Cmax$. \begin{lemma} \label{lem:cmax-is-final} $\Cmax$ will eventually become the {\thighest} configuration in the local history of each correct process.
\end{lemma} \begin{proof} By Theorem~\ref{the:history-convergence} ({\pHistoryConvergence}), the local histories of all correct processes will eventually converge to the same history $h$. Let $D = \fMaxElement(h)$. % Since $\Cmax$ is installed and never superseded, it cannot be {\thigher} than $D$ (at least one correct replica will always ``remember'' $\Cmax$). Suppose, for contradiction, that $\Cmax \sqsubset D$. % In this case, $D$ is never superseded, which means that there is a quorum $Q_D \in \iquorums(D)$ that consists entirely of forever-correct replicas. By Theorem~\ref{the:history-convergence} ({\pHistoryConvergence}), all replicas in $Q_D$ will eventually learn about $D$ and will try to perform state transfer to it. By Lemma~\ref{lem:state-transfer-progress}, they will eventually succeed and install $D$. A contradiction with the definition of $\Cmax$. \end{proof} \begin{theorem}[\pNewInstallationLiveness] \label{the:installation-liveness} Our implementation of {\DBLA} satisfies the {\pNewInstallationLiveness} property. That is, (1)~if configuration $C$ is in the local history of a forever-correct process, either $C$ or a {\thigher} configuration will eventually be installed; % (2)~if some configuration $C$ is installed, each correct replica will eventually install $C$ or a {\thigher} configuration. \end{theorem} \begin{proof} Statement~(1) follows directly from Lemma~\ref{lem:cmax-is-final} and the definition of $\Cmax$. Statement~(2) follows from Lemma~\ref{lem:cmax-installation} ($\Cmax$ installation). \end{proof} \begin{theorem}[\pOperationalLiveness] \label{the:operational-liveness} Our implementation of {\DBLA} satisfies the {\pOperationalLiveness} property: if the total number of verifiable input values is finite, every call to $\fPropose(v, \sigma)$ by a forever-correct process eventually terminates. \end{theorem} \begin{proof} Let $p$ be a forever-correct client that invoked $\fPropose(v, \sigma)$.
By Lemma~\ref{lem:cmax-is-final}, $\Cmax$ will eventually become the {\thighest} configuration in the local history of $p$. % If the client's request has not terminated by the time it learns about $\Cmax$, the client will call $\fRefine(\emptyset)$ afterwards (\lstlineref{dbla-client-2}{new-history-refine}). % By Lemma~\ref{lem:cmax-installation}, $\Cmax$ will eventually be installed by all correct replicas. Since it will never be superseded, there will be a quorum of forever-correct replicas. % This means that every round of messages from the client will eventually be responded to by a quorum of correct replicas. Since the total number of verifiable input values is finite, the client will call $\fRefine$ only a finite number of times (\lstlineref{dbla-client-2}{upon-propose-resp-refine}). % After the last call to $\fRefine$, the client will inevitably receive acknowledgments from a quorum of replicas, and will proceed to sending $\mConfirm$ messages (\lstlineref{dbla-client-2}{upon-acks-collected-send-confirm}). % Again, since there is an available quorum of correct replicas that installed $\Cmax$, the client will eventually receive enough acknowledgments and will complete the operation (\lstlineref{dbla-client-1}{propose-after-wait}). \end{proof} \begin{theorem}[{\DBLA} liveness] \label{the:dbla-liveness} Our implementation satisfies the liveness properties of {\DBLA}: {\pOperationalLiveness}, {\pDynamicLiveness}, {\pHistoryConvergence}, and {\pNewInstallationLiveness}. \end{theorem} \begin{proof} \leavevmode \begin{itemize} \item {\pOperationalLiveness} follows from Theorem~\ref{the:operational-liveness}; \item {\pDynamicLiveness} follows from the liveness of reliable broadcast: the client will eventually receive its own history and will adopt it unless it already adopted a larger history (\lstlineref{dbla-client-2}{new-history-refine}).
\item {\pHistoryConvergence} follows from Theorem~\ref{the:history-convergence}; \item Finally, {\pNewInstallationLiveness} follows from Theorem~\ref{the:installation-liveness}. \end{itemize} \end{proof} \subsection{Possible optimizations} \label{subsec:dbla-optimizations} Here we would like to mention a few possible directions for optimization. Most of them are dedicated to reducing the communication cost of the protocol. First, the proofs in our protocol include the full local history of a process. Moreover, this history comes with its own proof, which also usually contains a history, and so on. If implemented naively, the size of one proof in bytes will be at least quadratic with respect to the number of distinct candidate configurations, which is completely unnecessary. The first observation is that these histories will be related by containment. So, in fact, they can be compressed just to the size of the largest one, which is linear. But we can go further: in a practical implementation, the processes should almost never actually send full histories to each other, because every process maintains its local history and all histories with proofs are already disseminated via the reliable broadcast primitive. When one process wants to send some history to some other process, it can just send a cryptographic hash of this history. The other process can check if it already has this history and, if not, ask the sender to only send the missing parts, instead of the \emph{whole} history. Second, a naive implementation of our {\DBLA} protocol would send ever-growing sets of verifiable input values around, which is, just as with histories, completely unnecessary. The processes should just limit themselves to sending diffs between what they know and what they think the recipient knows. Third, almost every proof in our system contains signatures from a quorum of replicas. This adds another linear factor to the communication cost.
However, it can be significantly reduced by the use of forward-secure \emph{multi-signatures}, such as Pixel~\cite{drijvers2019pixel}, which was designed for similar purposes. Finally, we use a suboptimal implementation of lattice agreement as the foundation for our {\DBLA} protocol. Perhaps, we could benefit from adapting a more efficient crash fault-tolerant asynchronous solution~\cite{garg-la}. \section*{Checklist} \subsection*{Legend} \begin{itemize} \lgood{Green -- in good condition} \lok{Light green -- ok} \lpartial{Yellow -- partially done} \lbad{Orange -- in bad condition} \lreallybad{Red -- in really bad condition} \lforpetr{Cyan -- for Petr} \lforandrei{Olive -- for Andrei} \ltodo{White -- todo} \end{itemize} \subsection*{Structural} \begin{enumerate} \lok{Abstract} \lpartial{Introduction} \lok{System Model} \lok{Abstractions and Definitions} \lok{Reconfigurable and Dynamic {\BLA}} \lpartial{Discussion} \lpartial{Related work} \lok{Appendix about Access Control} \lok{Description of full {\DBLA}} \lok{Proofs for {\DBLA}} \lok{Full description of the transformation} \lok{Proofs for transformation} \lok{Dynamic {\MaxRegister}.} \lpartial{Proofs for {\MaxRegister}.} \end{enumerate} \subsection*{Important TODOs} \begin{enumerate} \lbad{Update the roadmap in the end of the introduction.} \lok{Say that correct clients only propose values and configurations with valid proofs in Max-Register.} \lok{Mention that $\fGet()$ in max-register implementation doesn't use forward-secure signatures at all.} \ltodo{Reference the appendices B and C properly.} \lok{Rewrite ``our contribution'' in intro.} \lok{Introduce pivotal configurations in the main paper and add some text about it.} \lok{Introduce ``{\tactive} configurations'' properly or remove it from the paper.} \lok{Introduce the term ``comparable'' for lattice elements and sets.} \lok{Say that public keys are part of configuration.} \lok{Describe how we fix the ``slow reader'' attack in intro.} \lok{Add proofs to
max-register.} \end{enumerate} \subsection*{Check before submitting} \begin{enumerate} \ltodo{Update the roadmap in the end of the introduction.} \ltodo{Make sure that the terminology is consistent.} \ltodo{Check that ``todo'' or ``lots of text'' is not used anywhere.} \ltodo{Run spell checker.} \end{enumerate} \subsection*{Miscellaneous} \begin{enumerate} \ltodo{Simplify properties of {\dynamic} and {\reconfigurable} items? Especially, the liveness properties.} \ltodo{Add drawings.} \lok{Describe some complete reconfigurable systems without any external dependencies.} \lok{Introduce forward-secure digital signatures.} \lok{Introduce Reliable Broadcast primitive in Section 2 (System Model).} \ltodo{Add some motivation for ``verifiable input values'' (just one sentence), don't just throw them to the reader.} \lok{Consider adding strong validity property??} \lcomment[Andrei]{The idea is to make the ``minimal implementation'' (see ``discussion'').} \lok{Motivate why we use ``weaker'' version of BLA: without strong validity and not generalized.} \lok{``later'' and ``earlier'' $\to$ ``succeeds'' and ``precedes'' (or ``succeeding'', ``preceding'')??} \lcomment[Andrei]{Decided on {\thigher}, {\tlower}, {\thighest}, and {\tlowest}.} \ltodo{Maybe remove ``History Comparability'' property? Is it needed or is it a detail of our implementation?} \lforandrei{Mention that $C \sqsubseteq \Cinst$ is used to simplify proofs and that $C \sqsubseteq \Ccurr$ would work.} \ltodo{Add execution examples??} \ltodo{Rename MaxElement to Highest??} \lok{Clean up the full code. Especially the comments. Remove old terminology. Check that there are no overflowed lines.} \lok{Rename ``Global objects'' to ``Shared objects'' and ``Local variables'' to ``Global variables''.} \ltodo{Maybe somehow remove all this ``some process knows $\sigma$ such that...''.
This ``knows'' sounds really informal.} \lok{Reformulate the Operational Liveness property and the corresponding property for max-register.} \ltodo{Use consistent paragraph style (in Intro and System Model we currently use custom).} \ltodo{Perhaps, add some proofs of time complexity.} \lok{Comments on communication cost.} \ltodo{Decide whether the configurations that once were in the local history of a correct client are called ``candidate''.} \end{enumerate} \subsection*{From PODC reviews} \begin{enumerate} \item[] \textbf{From the 1st reviewer:} \lpartial{Explain the mechanisms for quorum proofs and proof verifications in more detail.} \lcomment{I believe we addressed this issue by explaining how {\AC} works and by better description of the {\DBLA} algorithm.} \ltodo{The writing in many parts is not clear enough.} \begin{enumerate} \ltodo{Quote: ``Section 3.1 [Verifiable inputs and outputs] was not clear to me.''} \lpartial{Section 3.3 [RBLA: specification] had too many properties, which made it very difficult to follow.} \lcomment[atonkikh]{This is addressed by the current structure of chapter 3: the properties of {\dynamic} and {\reconfigurable} objects are separated from the properties of the objects themselves.} \ltodo{Quote: ``What is an up-call?''} \lpartial{Improve writing. Quote: ``In the Comparability property, what do you mean by verifiable values?''} \lcomment[atonkikh]{This is partially addressed by renaming ``verifiable values'' to ``verifiable output values''.} \end{enumerate} \ltodo{The ``History Convergence'' property seems to mix the definition with implementation} \lok{The algorithm is very difficult to follow. An informal overview or the intuition could help.} \lcomment{The main paper now gives the overview and the technical details are presented in the appendix.
Still no examples though.} \lgood{Potentially consider using the fault-tolerant shared memory model?} \lcomment{No.} \ltodo{Add conclusion/discussion section?} \ltodo{Add discussion on the protocol's complexity.} \ltodo{Add discussion on the open problems.} \lcomment{Open problems: ``General approach'' to reconfiguration, ``optimal'' communication complexity, a framework to simplify proofs.} \lforpetr{Cite ``On Liveness of Dynamic Storage'' to justify finite number of reconfiguration requests.} \lpartial{More adequate names for ``I still work here'' and the ``slow reader'' attacks?} \lcomment[Andrei]{Maybe they are fine.} \ltodo{Present the ``I still work here'' and the ``slow reader'' attacks properly. They should be used to motivate the algorithm.} \end{enumerate} \begin{enumerate} \item[] \textbf{From the 2nd reviewer:} \ltodo{We should clearly state our contribution and why our results are novel.} \ltodo{Maybe add more intuition and move some formalism to the Appendix?} \ltodo{Quote: ``Since the proofs are not included in the main draft, sections 2 and 3 are unnecessarily technical.''} \lpartial{We should add an appendix on access control and emphasize that it captures different models.} \ltodo{Perhaps, explicitly reference some paper on Sybil attacks just to show that we are aware of them.} \ltodo{Read and, perhaps, reference the papers that consider continuous churn:} \begin{enumerate} \ltodo{Baldoni et al. 2013: \url{https://www.sciencedirect.com/science/article/pii/S0304397513002673}} \ltodo{Attiya et al. 2020: \url{https://arxiv.org/abs/2003.07787}} \ltodo{Kumar et al.
2019: \url{https://arxiv.org/abs/1910.06716}} \end{enumerate} \end{enumerate} \begin{enumerate} \item[] \textbf{From the 3rd reviewer:} \lok{Provide a more intuitive and easy-to-read description of the protocol (perhaps, less formal and technical).} \ltodo{Add execution examples??} \ltodo{The concepts of VerifyInputValue and VerifyOutputValue are hard to understand?} \ltodo{Improve writing in property descriptions.} \ltodo{Verification Stability $\to$ Verification Agreement: If $\fVerifyOutputValue(v, \sigma)$ returns $val$ at a correct process, then it never returns $val' \neq val$ to any correct process.} \lcomment[atonkikh]{We probably don't have this property now because $\fVerifyOutputValue$ is now a function, not an operation. However, we need to somehow make it clear that this is a function in mathematical sense.} \lpartial{Quote: ``More effort should be made by authors to make the paper more readable. It is not that it is formalized poorly (quite opposite) and that english is not good (it is) it is just very difficult to follow the exposition. More example would be welcome. Also, the textual description of the protocol should help more the reader understand the pseudocode especially since proofs are not in the main body. Authors should guide better the reader through the protocols.''} \end{enumerate} \end{document} \section{Discussion} \label{sec:discussion} \myparagraph{Communication cost.} In this paper, \confirmA{we do not intend to provide the optimal implementations of each object or to implement the most general abstractions (such as generalized lattice agreement~\cite{gla,rla}).} Instead, we focused on providing the minimal implementation for the minimal set of abstractions to demonstrate the ideas and the general techniques for defining and building reconfigurable services in the harsh world of asynchrony and Byzantine failures. Therefore, \confirmA{our implementations leave} plenty of space for optimizations. 
We discuss a few possible directions in Appendices~\ref{subsec:dbla-optimizations} and~\ref{subsec:transform-full-implementation}. \myparagraph{Open questions.} \confirmA{We would like to mention two relevant directions for further research.} First, with regard to active reconfiguration, it would be \confirmA{interesting to devise algorithms that efficiently adapt to ``small'' configuration changes, while still supporting the option of completely changing the set of replicas \confirmed{in a single reconfiguration request}.} \confirmA{In this paper, we allow the sets of replicas of proposed configurations to be completely disjoint, which incurs an expensive quorum-to-quorum communication pattern. This might seem unnecessary for reconfiguration requests involving only slight changes to the set of replicas.} Second, with regard to Byzantine faults, it would be interesting to consider models with a ``weaker'' adversary. \confirmA{In this paper, we assumed a very strong model of the adversary: no assumptions are made about the correctness of replicas in superseded configurations.} This ``pessimistic'' approach leads to more complicated and expensive protocols. \section{Implementing {\reconfigurable} objects} \label{sec:transform-main} \input{figures/dbla-and-transform.tex} While {\dynamic} objects are important building blocks, they are not particularly useful by themselves because they require an external source of comparable verifiable histories. In this section, we show how to combine several {\dynamic} objects to obtain a single \emph{\reconfigurable} object. Similar to {\dynamic} objects, the specification of a {\reconfigurable} object can be obtained as a combination of the specification of a static object with the specification of an abstract {\reconfigurable} object from Section~\ref{subsec:reconf-objects}.
In particular, compared to static objects, {\reconfigurable} objects provide one additional operation, $\fUpdateConfig(C, \sigma)$; they must be parameterized by a boolean function $\fVerifyInputConfig(C, \sigma)$ and must satisfy {\pReconfigurationValidity}, {\pReconfigurationLiveness}, and {\pInstallationLiveness}. We build a {\reconfigurable} object by combining three \emph{\dynamic} ones. \confirmA{The first one is} the {\dynamic} object that we want to make {\reconfigurable} (let us call it $\iDObj$). \confirmed{For example, it can be an instance of {\DBLA} if we want to make a {\reconfigurable} version of {\ByzantineLatticeAgreement}.} \confirmA{The two remaining objects are used to build verifiable histories}: $\iConfLA$ is a {\DBLA} operating on the configuration lattice $\Conf$, and $\iHistLA$ is a {\DBLA} operating on the powerset lattice $2^{\Conf}$. The relationships between the three {\dynamic} objects are depicted in Figure~\ref{fig:transform}. \input{algorithms/transform-short.tex} The short version of the pseudocode is presented in Algorithm~\lstref{transform-short}. All data operations are performed directly on $\iDObj$. To update a configuration, the client first submits its proposal to $\iConfLA$ and then submits the result as a singleton set to $\iHistLA$. Due to the {\pBLAComparability} property, all verifiable output values produced by $\iConfLA$ are comparable, and any combination of them would create a well-formed history as defined in Section~\ref{subsec:dynamic-objects}. Moreover, the verifiable output values of $\iHistLA$ are related by containment, and, therefore, can be used as verifiable histories in {\dynamic} objects. We use them to reconfigure all three {\dynamic} objects (lines~\ref{lst:transform-short:update-config-update-history-begin}--\ref{lst:transform-short:update-config-update-history-end}).
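To make the control flow concrete, the reconfiguration path can be sketched in a few lines of Python. This is a toy, single-process model, not our pseudocode: all names are hypothetical, both lattice joins are modeled by set union, and the actual reconfiguration of the underlying {\dynamic} objects is elided.

```python
# Hypothetical toy model of the reconfiguration path; all names are
# stand-ins and the distributed machinery (quorums, certificates,
# reliable broadcast) is deliberately elided.

class ToyLA:
    """Toy lattice agreement with join = set union.

    Every output is the join of all inputs seen so far, so any two
    outputs are related by containment (cf. the Comparability property).
    """
    def __init__(self):
        self._acc = frozenset()

    def propose(self, value):
        self._acc = self._acc | frozenset(value)
        return self._acc


class ToyReconfigurable:
    """Models how ConfLA and HistLA cooperate to produce histories."""
    def __init__(self):
        self.conf_la = ToyLA()   # stands in for ConfLA
        self.hist_la = ToyLA()   # stands in for HistLA
        self.history = frozenset()

    def update_config(self, proposed_replicas):
        # 1. Submit the proposed configuration to the configuration LA.
        merged = self.conf_la.propose(proposed_replicas)
        # 2. Submit the result as a singleton set to the history LA;
        #    its outputs are related by containment and thus form
        #    well-formed histories.
        new_history = self.hist_la.propose({merged})
        # 3. The resulting history would then be used to reconfigure
        #    all three dynamic objects (elided in this sketch).
        self.history = new_history
        return new_history
```

Two successive calls to \texttt{update\_config} yield histories related by containment, which is the key invariant the transformation relies on.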
The full pseudocode of the transformation with a formal specification of all parameters, as well as the proof of correctness and the discussion of possible optimizations, is presented in Appendix~\ref{sec:transform-full}. In Appendix~\ref{sec:access-control}, we discuss several ways to implement access control to prevent Byzantine clients from reconfiguring the system in an undesirable way. \section{Max Register}\label{sec:max-register} To exemplify our claim that our approach to building {\dynamic} objects can be used to implement different abstractions and is not limited to lattice agreement, in this section we show how to create an atomic Byzantine fault-tolerant {\MaxRegister} in the {\dynamic} setting. An atomic (a.k.a.\ linearizable) multi-writer multi-reader Byzantine {\MaxRegister} is a distributed object that has two operations, $\fRead()$ and $\fWrite(v, \sigma)$, and must be parameterized by a boolean function $\fVerifyInputValue(v, \sigma)$. As before, we say that $\sigma$ is a \emph{valid certificate for input value}~$v$ iff $\fVerifyInputValue(v, \sigma) = \iTrue$ and that value~$v$ is a \emph{verifiable input value} iff some process knows $\sigma$ such that $\fVerifyInputValue(v, \sigma) = \iTrue$. We assume that correct clients invoke $\fWrite(v, \sigma)$ only if $\fVerifyInputValue(v, \sigma) = \iTrue$. We do not make any assumptions on the number of verifiable input values for this abstraction (i.e., it can be infinite).
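As a point of reference, the sequential semantics of this interface can be modeled in a few lines of Python. The sketch is hypothetical: \texttt{verify\_input} stands in for $\fVerifyInputValue(v, \sigma)$, and values are assumed to be totally ordered.

```python
# Hypothetical sequential model of the max-register interface;
# verify_input stands in for VerifyInputValue and is an assumption
# of this sketch, not part of the paper's pseudocode.

class SeqMaxRegister:
    def __init__(self, verify_input):
        self._verify = verify_input
        self._value = None            # nothing written yet

    def write(self, v, sigma):
        # Correct clients only write verifiable input values.
        if not self._verify(v, sigma):
            raise ValueError("invalid input certificate")
        if self._value is None or v > self._value:
            self._value = v

    def read(self):
        return self._value
```

Note that \texttt{write} is idempotent and monotone; this is what makes client-side retries safe in a distributed implementation.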
The object must satisfy the following three properties: \begin{itemize} \item \textit{\pMRValidity}: if $\fRead()$ returns value $v$ to a correct process, then $v$ is a verifiable input value; \item \textit{\pMRAtomicity}: if some correct process $p$ completed $\fWrite(v, \sigma)$ or received $v$ from $\fRead()$ strictly before some correct process $q$ invoked $\fRead()$, then the value returned to $q$ must be greater than or equal to $v$; \item \textit{\pMRLiveness}: every call to $\fRead()$ and $\fWrite(v, \sigma)$ by a forever-correct process eventually returns. \end{itemize} For simplicity, unlike {\ByzantineLatticeAgreement}, our {\MaxRegister} does not provide the $\fVerifyOutputValue(v, \sigma)$ function. \subsection{{\DynamicMaxRegister} implementation} \label{subsec:dmr-impl} \input{algorithms/dynamic-max-register.tex} In this section, we present our implementation of the \emph{\dynamic} version of the {\MaxRegister} abstraction ({\DynamicMaxRegister} or {\DMR} for short). Overall, the ``application'' part of the implementation is very similar to the classical ABD algorithm~\cite{ABD}, and the ``{\dynamic}'' part of the implementation is almost the same as in {\DBLA}. \myparagraph{Client implementation.} From the client's perspective, the two main procedures are $\fGet()$ and $\fSet(v, \sigma)$ \needsrev{(not to be confused with the $\fRead$ and $\fWrite$ operations)}. $\fSet(v, \sigma)$ (\lstlinerangeref{dmr-client}{set}{set-end}) is used to store the value on a quorum of replicas of the most recent configuration. It returns $\iTrue$ iff it manages to receive signed acknowledgments from a quorum of some configuration. Forward-secure signatures are used to prevent the ``I still work here'' attack. Since $\fSet$ does not try to read any information from the replicas, it is not susceptible to the ``slow reader'' attack.
$\fGet()$ (lines~\ref{lst:dmr-client:get}--\ref{lst:dmr-client:get-end}) is very similar to $\fSet(...)$ and is used to request information from a quorum of replicas of the most recent configuration. Since we do not provide the $\fVerifyOutputValue(...)$ function, the replies from replicas are \emph{not signed} (\lstlineref{dmr-replica}{upon-get-send-reply}). Therefore, $\fGet()$ is susceptible to both the ``I still work here'' and the ``slow reader'' attacks when used alone. Later in this section, we discuss how the invocation of $\fSet(...)$ right after $\fGet()$ (\lstlineref{dmr-client}{read-set}) allows us to avoid these issues. Operation $\fWrite(v, \sigma)$ (lines~\ref{lst:dmr-client:write}--\ref{lst:dmr-client:write-end}) is used by correct clients to store values in the register. It simply performs repeated calls to $\fSet(v, \sigma)$ until some call succeeds in reaching a quorum of replicas. Retries are safe because, as in lattice agreement, write requests to a max-register are idempotent. Since we assume the total number of verifiable histories to be finite, only a finite number of retries is possible. Operation $\fRead()$ (lines~\ref{lst:dmr-client:read}--\ref{lst:dmr-client:read-end}) is used to request the current value from the register, and it consists of repeated calls to both $\fGet()$ and $\fSet(...)$. The call to $\fGet()$ is simply used to query information from the replicas. The call to $\fSet(...)$ is usually called ``the write-back phase'' and serves two purposes here: \begin{itemize} \item It is used instead of the ``confirming'' phase to prevent the ``I still work here'' and the ``slow reader'' attacks. Indeed, if the configuration was superseded during the execution of $\fGet()$, $\fSet(...)$ will not succeed because it will not be able to gather a quorum of signed replies in the same configuration; \item Also, it is used to order the calls to $\fRead()$ and to guarantee the {\pMRAtomicity} property.
Intuitively, if some correct process successfully completed $\fSet(v, \sigma)$ strictly before some other correct process invoked $\fRead()$, the latter process will receive a value that is not smaller than $v$. \end{itemize} \myparagraph{Replica implementation.} The replica implementation (Algorithm~\lstref{dmr-replica}) is mostly the same as for {\DBLA} (Algorithm~\lstref{dbla-replica}). It handles the $\mUpdateRead$ and $\mUpdateReadResp$ messages a bit differently (\lstlinerangeref{dmr-replica}{upon-update-read}{upon-read-resp-end}), by exchanging $\vcurr$ and $\sigmacurr$ instead of $\icurrentValues$ as in {\DBLA}. It also handles client requests specific to {\MaxRegister} (\lstlinerangeref{dmr-replica}{upon-get}{upon-set-end}). \subsection{Proof of correctness} Since our {\DynamicMaxRegister} implementation uses the same state transfer protocol as {\DBLA}, most proofs from Section~\ref{subsec:dbla-correctness-proof} that apply to {\DBLA} also apply to {\DMR} (with some minor adaptations). Here we provide only the statements of such theorems, without repeating the proofs. Then we introduce several theorems specific to {\DMR} and sketch the proofs. \myparagraph{{\DMR} safety.} \begin{lemma}[Candidate configurations] \label{lem:dmr-candidate-configurations} \leavevmode \begin{enumerate} \item \label{itm:dmr-cand-in-hist} Each candidate configuration is present in some verifiable history. \item \label{itm:dmr-cand-fin} There is a finite number of candidate configurations. \item \label{itm:dmr-cand-comp} All candidate configurations are comparable with ``$\sqsubseteq$''. \end{enumerate} \end{lemma} \begin{lemma}[{\Tentative} configurations] \label{lem:dmr-tentative-configurations} \leavevmode \begin{enumerate} \item No correct client will ever make a request to a {\tentative} configuration. \item {\Tentative} configurations cannot be installed.
\item A correct process will never invoke $\fFSVerify$ with timestamp $\iheight(C)$ for any {\tentative} configuration $C$. \item A correct replica will never broadcast any message via the uniform reliable broadcast primitive in a {\tentative} configuration. \end{enumerate} \end{lemma} \begin{lemma} \label{lem:dmr-pivotal-configurations} If $C \sqsubseteq \fMaxElement(h)$, where $C$ is a {\pivotal} configuration and $h$ is the local history of a correct process, then $C \in h$. \end{lemma} \begin{theorem}[\pDynamicValidity] \label{the:dmr-dynamic-validity} Our implementation of {\DMR} satisfies {\pDynamicValidity}. I.e., only a candidate configuration can be installed. \end{theorem} \begin{lemma}[Key update] \label{lem:dmr-key-update} If a {\pivotal} configuration $C$ is superseded, then there is no quorum of replicas in that configuration capable of signing messages with timestamp $\iheight(C)$, i.e., $\nexists Q \in \iquorums(C) \textST \forall r \in Q: \ist_r \le \iheight(C)$. \end{lemma} We say that a correct client \emph{completes its operation in configuration $C$} iff at the moment when the client completes its operation, the {\thighest} configuration in its local history is $C$. \begin{lemma}[State transfer correctness] \label{lem:dmr-state-transfer-correctness} \leavevmode \\ If some correct process completed $\fWrite(v, \sigma)$ in $C$ or received $v$ from $\fRead()$ operation completed in $C$, then for each {\tactive} installed configuration $D$ such that $C \sqsubset D$, there is a quorum $Q_D \in \iquorums(D)$ such that for each correct replica in $Q_D$: $\vcurr \ge v$. \end{lemma} The following lemma is the first lemma specific to {\DMR}. 
\begin{lemma}[{\pMRAtomicity} in one configuration] \label{lem:dmr-atomicity-in-one-config} \leavevmode \\ If some correct process $p$ completed $\fWrite(v, \sigma)$ in $C$ or received $v$ from $\fRead()$ operation completed in $C$, strictly before some process $q$ invoked $\fRead()$, and $q$ completed its operation in $C$, then the value returned to $q$ is greater than or equal to $v$. \end{lemma} \begin{proof} Recall that the $\fRead()$ operation consists of repeated calls to two procedures: $\fGet()$ and $\fSet(...)$.
%
If process $q$ successfully completed $\fSet(...)$ in configuration $C$, then, by the use of forward-secure signatures, configuration $C$ was {\tactive} during the execution of $\fGet()$ that preceded the call to $\fSet$.
%
This also means that configuration $C$ was {\tactive} during the execution of $\fSet(v, \sigma)$ by process $p$, since it was before process $q$ started executing its request.
%
By the quorum intersection property, process $q$ must have received $v$ or a greater value from at least one correct replica. \end{proof} \begin{theorem}[{\pMRAtomicity}] \label{the:dmr-atomicity} Our implementation of {\DMR} satisfies the {\pMRAtomicity} property. If some correct process $p$ completed $\fWrite(v, \sigma)$ or received $v$ from $\fRead()$ strictly before some process $q$ invoked $\fRead()$, then the value returned to $q$ must be greater than or equal to $v$. \end{theorem} \begin{proof} Let $C$ (resp., $D$) be the {\thighest} configuration in $p$'s (resp., $q$'s) local history when it completed its request. Also, let $v$ (resp., $u$) be the value that $p$ (resp., $q$) passed to the last call to $\fSet(...)$ (notice that both $\fRead()$ and $\fWrite(...)$ call $\fSet(...)$). If $C = D$, then $u \ge v$ by Lemma~\ref{lem:dmr-atomicity-in-one-config}.
The case when $D \sqsubset C$ is impossible due to the use of forward-secure signatures. Correct replicas do not answer user requests in a configuration until this configuration is installed (\lstlineref{dmr-replica}{upon-get-wait}). Therefore, $C$ had to be installed before $p$ completed its request. By Lemma~\ref{lem:dmr-key-update} (Key Update) and by the use of forward-secure signatures in $\fSet(...)$, this would mean that $q$ would not be able to complete its request in $D$. Finally, let us consider the case when $C \sqsubset D$. In this case, by Lemma~\ref{lem:dmr-state-transfer-correctness}, the quorum intersection property, and the use of forward-secure signatures in $\fSet(...)$, $q$ received $v$ or a greater value from at least one correct replica during the execution of $\fGet()$. Therefore, in this case $u$ is also greater than or equal to $v$. \end{proof} \begin{theorem}[{\DMR} safety] \label{the:dmr-safety} Our implementation satisfies the safety properties of {\DMR}: {\pMRValidity}, {\pMRAtomicity}, and {\pDynamicValidity}. \end{theorem} \begin{proof} {\pMRValidity} follows directly from the implementation: correct clients only return verifiable input values from $\fGet()$ (\lstlineref{dmr-client}{get-end}). {\pMRAtomicity} follows directly from Theorem~\ref{the:dmr-atomicity}. {\pDynamicValidity} follows from Theorem~\ref{the:dmr-dynamic-validity}. \end{proof} \myparagraph{{\DMR} liveness.} \begin{lemma}[History Convergence] \label{lem:dmr-history-convergence} Local histories of all correct processes will eventually become identical. \end{lemma} Recall that the \emph{maximal installed configuration} is the {\thighest} installed configuration and is denoted by $\Cmax$ (see Definition~\ref{def:cmax} and Lemma~\ref{lem:cmax-existence}). \begin{lemma}[$\Cmax$ installation] \label{lem:dmr-cmax-installation} The maximal installed configuration will eventually be installed by all correct replicas.
\end{lemma} \begin{lemma}[State transfer progress] \label{lem:dmr-state-transfer-progress} State transfer executed by a forever-correct replica always terminates. \end{lemma} \begin{lemma} \label{lem:dmr-cmax-is-final} $\Cmax$ will eventually become the {\thighest} configuration in the local history of each correct process. \end{lemma} \begin{theorem}[\pMRLiveness] \label{the:dmr-mr-liveness} Our implementation of {\DMR} satisfies the {\pMRLiveness} property: every call to $\fRead()$ and $\fWrite(v, \sigma)$ by a forever-correct process eventually returns. \end{theorem} \begin{proof} Let $p$ be a forever-correct client that invoked $\fRead()$ or $\fWrite(...)$. By Lemma~\ref{lem:dmr-cmax-is-final}, $\Cmax$ will eventually become the {\thighest} configuration in the local history of $p$.
%
If the client's request does not terminate by the time it learns about $\Cmax$, the client will restart its request there.
%
Since $\Cmax$ will eventually be installed by all correct replicas and will never be superseded, there will be a quorum of forever-correct replicas, and $p$ will be able to complete its request there. \end{proof} \begin{theorem}[{\DMR} liveness] \label{the:dmr-liveness} Our implementation satisfies the liveness properties of {\DMR}: {\pMRLiveness}, {\pDynamicLiveness}, and {\pInstallationLiveness}. \end{theorem} \begin{proof} {\pMRLiveness} follows from Theorem~\ref{the:dmr-mr-liveness}. {\pDynamicLiveness} and {\pInstallationLiveness} follow directly from Lemmas~\ref{lem:dmr-cmax-is-final} and~\ref{lem:dmr-cmax-installation} respectively. \end{proof} \section{\DynamicByzantineLatticeAgreement}\label{sec:dbla-main} The \emph{\DynamicByzantineLatticeAgreement} abstraction ({\DBLA} for short) is the main building block \confirmed{in our construction of {\reconfigurable} objects}.
Its specification is a combination of the specification of {\ByzantineLatticeAgreement} (Section~\ref{subsec:bla-def}) and the specification of a {\dynamic} object (Section~\ref{subsec:dynamic-objects}). The interface of a {\DBLA} object is depicted in Figure~\ref{fig:dbla-interface}. To respect space limits, the complete pseudocode, the proof of correctness, and further discussion are delegated to Appendix~\ref{sec:dbla-full}. As we mentioned earlier, we use forward-secure digital signatures to guarantee that superseded configurations cannot affect correct clients or forge certificates for output values. Ideally, before a new configuration $C$ is installed (i.e., before a correct replica triggers the $\fInstalledConfig(C)$ upcall), we would like to make sure that the replicas of all candidate configurations {\tlower} than $C$ invoke $\fUpdateFSKeys(\iheight(C))$. However, this would require each replica to know the set of all candidate configurations {\tlower} than $C$. Unambiguously agreeing on this set would require solving consensus, which is known to be impossible in an asynchronous system~\cite{flp}. Instead, we classify all candidate configurations into two categories: \emph{\pivotal} and \emph{\tentative}. A candidate configuration is called \emph{\pivotal} if it is the last configuration in some verifiable history. Otherwise, it is called \emph{\tentative}. A nice property of {\pivotal} configurations is that it is impossible to ``skip'' one in a verifiable history. Indeed, if $C_1 = \fMaxElement(h_1)$ and $C_2 = \fMaxElement(h_2)$ and $C_1 \sqsubset C_2$, then, since all verifiable histories are related by containment, $h_1 \subseteq h_2$ and $C_1 \in h_2$. This allows us to make sure that, before a configuration $C$ is installed, the replicas in all {\pivotal} (and, possibly, some {\tentative}) configurations {\tlower} than $C$ update their keys.
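The classification itself is very compact. Modeling configurations by their heights (plain integers, a hypothetical stand-in) and verifiable histories by sets of heights related by containment, it can be written as:

```python
# Hypothetical toy model: configurations are represented by their
# heights (integers) and verifiable histories by sets of heights;
# the histories are assumed to be related by containment.

def pivotal(histories):
    """Pivotal configurations: the highest element of some history."""
    return {max(h) for h in histories if h}

def tentative(histories):
    """Tentative configurations: all remaining candidates."""
    candidates = set().union(*histories) if histories else set()
    return candidates - pivotal(histories)
```

For histories $\{1\}$, $\{1, 2\}$, and $\{1, 2, 3, 5\}$, configurations $1$, $2$, and $5$ are {\pivotal}, while $3$ is {\tentative}; since the histories are related by containment, the maximum of a smaller history belongs to every larger one, which is exactly the ``no skipping'' observation above.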
In order to reconfigure a {\DBLA} object, a correct client must use reliable broadcast to distribute the new verifiable history. Each correct process $p$ maintains, \confirmA{locally}, the largest (with respect to $\subseteq$) verifiable history it delivered so far through reliable broadcast. It is called \emph{the local history of process $p$} and is denoted by $\ihistory_p$. We use $\Chighest_p$ to denote the most recent configuration in $p$'s local history (i.e., $\Chighest_p = \fMaxElement(\ihistory_p)$). Whenever a replica $r$ updates $\ihistory_r$, it invokes $\fUpdateFSKeys(\iheight(\Chighest_r))$. Recall that if at least one forever-correct process delivers something via reliable broadcast, every other correct process will eventually deliver it as well. Similarly, each process $p$ keeps track of all verifiable input values it has seen, $\icurVals_p \subseteq \Lat \times \Sigma$, where $\Sigma$ is the set of all possible certificates. \confirmed{Sometimes, during the execution of the protocol,} processes exchange these sets. Whenever a process $p$ receives a message that contains a set of values with certificates $\ivalues \subseteq \Lat \times \Sigma$, it checks that the certificates are valid (i.e., $\forall\ (v, \sigma) \in \ivalues: \fVerifyInputValue(v, \sigma) = \iTrue$) and adds these values with certificates to $\icurVals_p$. \subsection{Client implementation} \input{figures/la-execution.tex} \confirmA{The client's protocol is simple.} As we mentioned earlier, the operation $\fUpdateHistory(h, \sigma)$ is implemented as {\RBBroadcast{$\mNewHistory, h, \sigma$}}. \confirmA{The rest of the reconfiguration process is handled by the replicas.} The protocol for the operation $\fPropose(v, \sigma)$ consists of two stages: \emph{proposing} a value and \emph{confirming} the result. The first stage (proposing) mostly follows the implementation of lattice agreement by Faleiro et al.~\cite{gla}.
Client $p$ repeatedly sends message \Message{$\mPropose$, $\icurVals_p$, $\iseqNumber_p$, $C$} to all replicas in $\ireplicas(C)$, where $\mPropose$ is the message descriptor, $C = \Chighest_p$, and $\iseqNumber_p$ is a sequence number used by the client to match sent messages with replies. After sending these messages to $\ireplicas(C)$, the client waits for responses of the form \Message{$\mProposeResp$, $\ivalues$, $\isignature$, $\isn$}, where $\mProposeResp$ is the message descriptor, $\ivalues$ is the set of all verifiable input values known to the replica with valid certificates (including those sent by the client), $\isignature$ is a forward-secure signature with timestamp $\iheight(C)$, and $\isn$ is the same sequence number as in the message from the client. During the first stage, three things can happen: (1)~the client learns about some new verifiable input values from one of the $\mProposeResp$ messages; (2)~the client updates its local history (by delivering it through reliable broadcast); and (3)~the client receives a quorum of valid replies with the same set of verifiable input values. \confirmA{In the last case, the client combines the responses to form a certificate} (called $\iacks_1$) and proceeds to the second stage. In the first two cases, the client simply restarts the operation. Because the number of verifiable input values, as well as the number of verifiable histories, is \confirmA{assumed} to be finite, the number of restarts will also be finite. The example in Figure~\ref{fig:two-concurrent-proposes} illustrates how the first stage of the algorithm ensures the comparability of the results when no reconfiguration is involved. In this example, clients $p$ and $q$ concurrently propose values $\{1\}$ and $\{2\}$, respectively, from the lattice $\Lat = 2^{\mathbb{N}}$. Client $p$ successfully returns the proposed value $\{1\}$ while client $q$ is forced to refine its proposal and return the combined value $\{1, 2\}$.
The quorum intersection prevents the clients from returning incomparable values (e.g., $\{1\}$ and $\{2\}$). \confirmed{%
In the second (confirming) stage of the protocol, the client simply sends the acknowledgments it has collected in the first stage to the replicas of the same configuration. The client then waits for a quorum of replicas to reply with a forward-secure signature with timestamp $\iheight(C)$.} \confirmed{%
The example in Figure~\ref{fig:slow-reader-demonstation} illustrates how reconfiguration can interfere with an ongoing $\fPropose$ operation in what we call \textit{the ``slow reader'' attack}, and how the second stage of the protocol prevents safety \confirmA{violations}.} \needsrevp{%
In this example, client $p$ should not be able to return the proposed value $\{1\}$} because all correct replicas in quorum $\{r_1, r_3, r_4\}$ store value $\{2\}$, which means that previously some other client could have returned value $\{2\}$. The client successfully reaches replicas $r_2$ and $r_3$ before the reconfiguration. Neither of them tells the client about the input value $\{2\}$, because $r_2$ is outdated and $r_3$ is Byzantine. The message from $p$ to $r_1$ is delayed. \confirmA{%
Meanwhile, a new configuration is installed, and all replicas of the original configuration become Byzantine. If $r_1$ lies to $p$, the client may finish the first stage of the protocol with value $\{1\}$. However, because replicas $r_2$ and $r_4$ updated their private keys during the reconfiguration, they are unable to send the signed confirmations with timestamp $\iheight(C)$ to the client.
The client then waits until it receives the new verifiable history via reliable broadcast and restarts the operation in the new configuration.} The certificate for the output value $v \in \Lat$ produced by the $\fPropose$ protocol in a configuration $C$ consists of: (1)~the set of verifiable input values (with certificates for them) from the first stage of the algorithm (the join of all these values must be equal to $v$); (2)~a verifiable history (with a certificate for it) that confirms that $C$ is a {\pivotal} configuration; (3)~the quorum of signatures from the first stage of the algorithm; and (4)~the quorum of signatures from the second stage of the algorithm. Intuitively, the only way for a Byzantine client to obtain such a certificate is to benignly follow the $\fPropose$ protocol. \subsection{Replica implementation} Each replica $r$ maintains, \confirmA{locally}, its \emph{current configuration} (denoted by $\Ccurr_r$) and \emph{the last configuration installed by this replica} (denoted by $\Cinst_r$). $\Cinst_r \sqsubseteq \Ccurr_r \sqsubseteq \Chighest_r$. Intuitively, $\Ccurr_r = C$ means that replica $r$ knows that there is no need to transfer state from configurations {\tlower} than $C$, either because $r$ already performed the state transfer from those configurations, or because it knows that sufficiently many other replicas did. $\Cinst_r = C$ means that the replica knows that sufficiently many replicas in $C$ have up-to-date \confirmA{states}, and that configuration $C$ is ready to serve user requests. As we saw earlier, each client message is associated with some configuration $C$. The replica only answers the message when $C = \Cinst_r = \Ccurr_r = \Chighest_r$. \confirmed{% If $C \sqsubset \Chighest_r$, the replica simply ignores the message. 
Due to the properties of reliable broadcast, the client will eventually learn about $\Chighest_r$ and will repeat its request there (or in an even {\thigher} configuration).} \confirmed{% If $\Cinst_r \sqsubset C$ and $\Chighest_r \sqsubseteq C$, the replica waits until $C$ is installed before processing the message.} \confirmed{% Finally, if $C$ is incomparable with $\Cinst_r$ or $\Chighest_r$, then the message is sent by a Byzantine process and the replica should ignore it.} \confirmA{% When a correct replica $r$ receives a {\mPropose} message,} it adds the newly learned verifiable input values to $\icurVals_r$ and sends $\icurVals_r$ to the client with a forward-secure signature with timestamp $\iheight(C)$. When a correct replica receives a {\mConfirm} message, it simply signs the set of acknowledgments in it with a forward-secure signature with timestamp $\iheight(C)$ and sends the signature to the client. \input{algorithms/dbla-state-transfer.tex} A very important part of the replica's implementation is \emph{the state transfer protocol}. The pseudocode for it is presented in Algorithm~\ref{alg:dbla-state-transfer}. Note that we omit the subscript $r$ in the pseudocode because each process can only access its own variables directly. Let $\Cnext_r$ be the {\thighest} configuration in $\ihistory_r$ such that $r \in \ireplicas(\Cnext_r)$. Whenever $\Ccurr_r \neq \Cnext_r$, the replica tries to ``move'' to $\Cnext_r$ by reading the current state from all configurations between $\Ccurr_r$ and $\Cnext_r$ one by one in ascending order (line~\ref{lst:dbla-state-transfer:state-transfer-for-begin}). In order to read the current state from configuration $C \sqsubset \Cnext_r$, replica $r$ sends message \Message{$\mUpdateRead$, $\iseqNumber_r$, $C$} to all replicas in $\ireplicas(C)$. In response, each replica $r_1 \in \ireplicas(C)$ sends $\icurVals_{r_1}$ to $r$ in an $\mUpdateReadResp$ message (line~\ref{lst:dbla-state-transfer:upon-update-read-send-resp}). 
However, $r_1$ replies only after its private key is updated to a timestamp larger than $\iheight(C)$ (line~\ref{lst:dbla-state-transfer:upon-update-read-wait}). We maintain the invariant that for any correct replica $r'$: $\ist_{r'} = \iheight(\fMaxElement(\ihistory_{r'}))$, where $\ist_{r'}$ is the timestamp of the private key of $r'$. If $r$ receives a quorum of replies from the replicas of $C$, there are two distinct cases: \begin{itemize} \item $C$ is still {\tactive}. In this case, the quorum intersection property still holds for $C$, and replica $r$ can be sure that (1) if some $\fPropose$ operation has either completed in configuration $C$ or reached the second stage, $v \sqsubseteq \fJoinAll(\icurVals_r)$, where $v$ is the value returned by the $\fPropose$ operation and $\fJoinAll(\icurVals_r)$ is the join of all verifiable input values in the set $\icurVals_r$; and (2) if some $\fPropose$ operation has not yet reached the second stage, it will not be able to complete in configuration $C$ (see the example in Figure~\ref{fig:slow-reader-demonstation}). \item $C$ is already {\tsuperseded}. In this case, by definition, a {\thigher} configuration is installed, and, intuitively, replica $r$ will get the necessary state from that {\thigher} configuration. \end{itemize} It may happen that configuration $C$ is already superseded and $r$ will not receive sufficiently many replies from the replicas of $C$. However, in this case $r$ will eventually discover that some {\thigher} configuration is installed, and it will update $\Ccurr_r$ (line~\ref{lst:dbla-state-transfer:upon-update-complete-ccurr-set}). When a correct replica \confirmA{completes transferring the state} to some configuration $C$, it notifies other replicas about it by broadcasting message $\mUpdateComplete$ in configuration $C$ (line~\ref{lst:dbla-state-transfer:state-transfer-broadcast}). 
A correct replica \emph{installs} a configuration $C$ if it receives such messages from a quorum of replicas in $C$ (line~\ref{lst:dbla-state-transfer:upon-update-complete}). Because we want our protocol to satisfy the {\pInstallationLiveness} property (if one correct replica installs a configuration, every forever-correct replica must eventually install this or a {\thigher} configuration), the $\mUpdateComplete$ messages are distributed through the uniform reliable broadcast primitive that we introduced in Section~\ref{subsec:bcast}. \subsection{Implementing other {\dynamic} objects} While we do not provide any general approach for building {\dynamic} objects, we expect that most \confirmA{asynchronous Byzantine fault-tolerant static algorithms} can be adapted to the {\dynamic} case by applying the same set of techniques. These techniques include our state transfer protocol (relying on forward-secure signatures), the use of an additional round-trip to prevent the ``slow reader'' attack, and the structure of our cryptographic proofs \confirmA{ensuring} that {\tentative} configurations cannot create valid certificates for \confirmA{output values}. To illustrate this, in Appendix~\ref{sec:max-register}, we present the {\dynamic} version of {\MaxRegister}~\cite{max-register}. We also discuss the {\dynamic} version of the {\AccessControl} abstraction in Appendix~\ref{sec:access-control}. \section{Building {\dynamic} and {\reconfigurable} objects}\label{sec:rbla} In this section, we present our technique for creating \emph{\dynamic} and \emph{\reconfigurable} objects as defined in Sections~\ref{subsec:dynamic-objects} and~\ref{subsec:reconf-objects}, respectively. First, in Section~\ref{subsec:dbla-impl}, we present our implementation of the \emph{\dynamic} version of the \emph{\ByzantineLatticeAgreement} abstraction defined in Section~\ref{subsec:bla-def}, which is the main building block in our approach to creating \emph{any} {\reconfigurable} objects. 
Second, in Section~\ref{subsec:transform-short}, we show how to construct \emph{\reconfigurable} objects out of {\dynamic} ones. To respect space limits, the complete pseudocode, the proof of correctness, and further discussion are delegated to Appendices~\ref{sec:dbla-full} and~\ref{sec:transform-full}. \subsection{\DynamicByzantineLatticeAgreement}\label{subsec:dbla-impl} The \emph{\DynamicByzantineLatticeAgreement} abstraction ({\DBLA} for short) is the main building block in our approach for constructing {\reconfigurable} objects. Its specification is a combination of the specification of {\ByzantineLatticeAgreement} (Section~\ref{subsec:bla-def}) and the specification of a {\dynamic} object (Section~\ref{subsec:dynamic-objects}). The abstraction exports two operations: $\fPropose(v, \sigma)$ and $\fUpdateHistory(h, \sigma)$, two up-calls: $\fNewHistory(h)$ and $\fInstalledConfig(C)$, and one function: $\fVerifyOutputValue(w, \tau)$. The abstraction is parameterized by two boolean functions: $\fVerifyInputValue(v, \sigma)$ and $\fVerifyInputConfig(C, \sigma)$, and must satisfy a set of properties defined in Sections~\ref{subsec:bla-def} and~\ref{subsec:dynamic-objects} such as {\pComparability}, {\pNewInstallationLiveness}, and {\pDynamicValidity}. \input{algorithms/dbla-short} The pseudocode for our algorithm is partially presented in Algorithms~\lstref{dbla-short-client} and~\lstref{dbla-short-replica}. The complete pseudocode and the proof of correctness are given in Appendix~\ref{sec:dbla-full}. Here we try to convey the most important ideas and design decisions. \myparagraph{The ``replica-centric'' approach.} Most papers on crash-fault-tolerant reconfiguration take the \emph{client-centric approach} to state transfer~\cite{dynastore,smartmerge,rla,SKM17-reconf}. In those algorithms, the client is responsible for reading the state from the replicas of older configurations and relaying it to the replicas of a new configuration. 
This approach is appealing because it has small communication complexity when there are few clients. However, the client-centric approach becomes significantly more expensive and harder to implement if we assume that some clients can be Byzantine. Instead, we take the \emph{replica-centric approach} to state transfer~\cite{freestore}. That is, the replicas of a new configuration are responsible for reading the state from the replicas of prior configurations. \myparagraph{{\Pivotal} and {\tentative} configurations.} As we mentioned earlier, we use forward-secure signatures to guarantee that superseded configurations cannot affect correct clients or forge certificates for output values. However, due to asynchrony, it is impossible to access all prior configurations when installing a new one. We say that a configuration is \emph{\pivotal} if it is the last configuration in some verifiable history. Other candidate configurations are called \emph{\tentative}. Intuitively, the {\pivotal} configurations are the configurations that can potentially be installed by a correct replica and to which a correct client can potentially make a request. The (useless) {\tentative} configurations are the ``necessary evil'' of the asynchronous model. Eliminating or detecting them would require solving consensus. As we prove in Appendix~\ref{subsec:dbla-correctness-proof}, our protocol ensures that \emph{\tentative} configurations are \emph{always} ``harmless'' and that {\pivotal} configurations are ``harmless'' after they are superseded. \myparagraph{Client implementation.} Let us first take the perspective of a correct client (Algorithm~\lstref{dbla-short-client}). Since the state transfer is handled by the replicas, the client's code is rather simple, yet there are several important details. Our {\BLA} implementation builds upon the (crash fault-tolerant static) implementation of lattice agreement by Faleiro et al.~\cite{gla}. 
An important difference is that, instead of operating directly on the object lattice $\Lat$, internally we operate with sets of verifiable input values, in other words, with the power lattice $2^{\Lat}$, and only join them before returning the result to the client (line~\ref{lst:dbla-short-client:propose-return}). While yielding the same result, this trick helps us to preserve the {\pValidity} property of {\BLA}: every output value is, indeed, a join of some set of verifiable input values. To propose a new value, the client first needs to call the $\fRefine$ procedure (\lstlinerangeref{dbla-short-client}{refine}{refine-end}). It will add the value to the set of all verifiable input values the client is aware of ($\icurrentValues$, line~\ref{lst:dbla-short-client:refine-current-values}) and send a $\mPropose$ message to all replicas of the most recent configuration in the client's local history (line~\ref{lst:dbla-short-client:refine-send-message}). If this configuration turns out to be outdated, the client will eventually learn about a newer configuration and will simply repeat its request there (line~\ref{lst:dbla-short-client:new-history-refine}). This is safe because lattice agreement operations are \emph{idempotent}, which means that an operation can be applied several times with the same result as if it was applied only once. In response to the client's $\mPropose$ message, the replicas will send $\mProposeResp$ messages (see \lstlinerangeref{dbla-short-client}{upon-propose-resp}{upon-propose-resp-end}), each of which will contain the set of verifiable input values known to the replica ($\ivalues$), a \emph{forward-secure signature} ($\isignature$) (see Section~\ref{sec:system-model}), and a sequence number ($\isn$). If the client learns any new verifiable input values from a response (line~\ref{lst:dbla-short-client:upon-propose-resp-refine}), it will repeat the $\fRefine$ procedure with the new values. 
Since we assume that the total number of verifiable input values is finite in any given infinite execution, this process will eventually converge. If the client receives a $\mProposeResp$ message without any new verifiable input values in it, it will add this message to the set of acknowledgments (line~\ref{lst:dbla-short-client:upon-propose-resp-ack}). Notice the additional check ($\ivalues = \icurrentValues$). This condition is always $\iTrue$ if the sender is correct. It is only needed to filter out malformed acknowledgments from Byzantine replicas. Upon gathering a quorum of such acknowledgments (line~\ref{lst:dbla-short-client:upon-acks-collected}), the client proceeds to the second stage of the protocol, which we will cover in the next paragraph. Now we would like to emphasize the use of \emph{forward-secure signatures} here. It is what allows us to mitigate the \textbf{``I still work here''} attack. As we will see below, our state transfer protocol ensures that before any \emph{\pivotal} configuration is superseded, all correct replicas in some quorum of that configuration will invoke $\fUpdateFSKeys$ (in other words, will remove their private keys for that configuration). If the client tries to access a configuration that was superseded long before the client's request, it simply will not receive enough responses with valid forward-secure signatures and will eventually retry its request in a {\thigher} configuration (line~\ref{lst:dbla-short-client:new-history-refine}). The primary goal of the second stage is to mitigate the \textbf{``slow reader''} attack. The client simply sends the acknowledgments it has collected in the first stage to the replicas and asks them to sign with their forward-secure signatures that they have seen these acknowledgments \emph{before they updated their private keys}, i.e., before this configuration was superseded. By this, we confirm that the configuration used in the first stage was not meanwhile superseded. 
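To make the role of forward-secure signatures more concrete, here is a minimal, illustrative Python sketch. It is a toy stand-in, not the actual cryptographic scheme assumed by our model: the class name and methods are our own invention, and a real scheme would derive genuine per-timestamp signatures. The point it demonstrates is the one used above: once a key is updated past a timestamp, no signature for a lower timestamp can ever be produced again, which is exactly what disarms the ``I still work here'' attack.

```python
# Toy model of a forward-secure signing key (illustrative only).
# After update_fs_keys(t), the key can no longer sign for any timestamp
# below t: the ability to sign "in the past" is irreversibly destroyed.
class ForwardSecureKey:
    def __init__(self):
        self.timestamp = 0  # lowest timestamp this key can still sign for

    def update_fs_keys(self, new_timestamp):
        # Irreversible: keys for lower timestamps are erased.
        self.timestamp = max(self.timestamp, new_timestamp)

    def sign(self, message, t):
        if t < self.timestamp:
            raise ValueError("private key for this timestamp was erased")
        # A real scheme would produce an unforgeable signature here;
        # we return a tagged tuple purely for illustration.
        return ("sig", message, t)


key = ForwardSecureKey()
sig = key.sign("ack", 3)        # fine: the key still covers timestamp 3
key.update_fs_keys(5)           # configurations of height < 5 are abandoned
try:
    key.sign("ack", 3)          # "I still work here" is now impossible
    stale_rejected = False
except ValueError:
    stale_rejected = True
```

A replica that invokes `update_fs_keys` upon learning a new history thus loses, permanently, the ability to answer requests on behalf of superseded configurations.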
\myparagraph{Replica implementation.} Now, let us take the perspective of a correct replica (Algorithm~\ref{alg:dbla-short-replica}). Each $\mPropose$ message has some configuration identifier attached to it. When a correct replica receives a $\mPropose$ message from a correct client, there are three possible cases (lines~\ref{lst:dbla-short-replica:upon-propose}--\ref{lst:dbla-short-replica:upon-propose-end}): \begin{enumerate} \item If the message is from the {\thighest} configuration in the replica's local history ($\fMaxElement(\ihistory)$), and this configuration is installed by this replica ($C = \Cinst$), the replica will process this message and will send a reply, signed with a forward-secure signature with timestamp $\fHeight(C)$ (line~\ref{lst:dbla-short-replica:upon-propose-send-resp}); \item If the message is from a configuration that is {\tlower} than $\fMaxElement(\ihistory)$, the replica will drop this message. This cannot undermine liveness of client requests because, if both the client and the replica remain correct forever, the client will either complete the request without this replica's help, or eventually learn about a {\thigher} configuration and repeat its request there; \item Otherwise, the replica ``waits'' (line~\ref{lst:dbla-short-replica:upon-propose-wait}). That is, the message is stored locally until one of the two conditions above is satisfied. \end{enumerate} A major part of the replica's implementation is the state transfer procedure (lines~\ref{lst:dbla-short-replica:state-transfer-begin}--\ref{lst:dbla-short-replica:state-transfer-end}). Whenever a replica sees that its ``current configuration'' ($\Ccurr$) is not the {\thighest} one in its local history, it tries to read the state from a quorum in each of the intermediate configurations between $\Ccurr$ and the {\thighest} configuration ($\Cnext$) one by one. 
If the replica succeeds before $\Cnext$ or a {\thigher} configuration is installed, it updates $\Ccurr$ to $\Cnext$ and lets the other replicas know about it. It uses the \emph{uniform reliable broadcast primitive} we introduced in Section~\ref{subsec:bcast} to disseminate message $\Message{\mUpdateComplete}$ among the correct replicas of $\Cnext$ (line~\ref{lst:dbla-short-replica:state-transfer-broadcast}). If, on the other hand, it happens that the replica is waiting for responses from some configuration $C$, and this configuration is already superseded, then it might never gather a quorum of replies from the replicas of this configuration. In this case, our algorithm ensures that this replica will eventually install some configuration {\thigher} than $C$ and will finish waiting via the other condition ($C \sqsubset \Ccurr$). When a replica learns about a new verifiable history (line~\ref{lst:dbla-short-replica:new-history}), it updates its private keys so that it will never again be able to answer any user request in a configuration {\tlower} than the {\thighest} configuration in this new history (line~\ref{lst:dbla-short-replica:new-history-update-fs-keys}). Due to the properties of reliable broadcast, unless this replica halts or turns Byzantine, every other correct process will eventually also learn about this history and abandon {\tlower} configurations. When a correct replica reads the state from some prior configuration $C$ during state transfer (lines~\ref{lst:dbla-short-replica:state-transfer-update-read}--\ref{lst:dbla-short-replica:state-transfer-wait}), it is responsible for making sure that some quorum $Q \in \iquorums(C)$ updated their forward-secure private keys to a timestamp larger than $\fHeight(C)$. This is achieved by a simple mechanism: the replicas of $C$ will not reply to the $\mUpdateRead$ messages until they update their private keys far enough (until $C \sqsubset \fMaxElement(\ihistory)$). 
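The core of the state-transfer loop can be sketched as follows. This is a deliberately simplified, synchronous Python rendition with hypothetical helper names (`read_quorum` stands for gathering $\mUpdateRead$ responses from a quorum); the actual protocol is asynchronous and message-driven, and also handles superseded configurations and key updates, which are omitted here.

```python
# Simplified sketch of the replica's state-transfer loop (illustrative).
# The replica walks through the configurations between c_curr and c_next in
# ascending order, reads the sets of verifiable input values (curVals) from
# a quorum of each, and joins them (set union) into its own state.
def transfer_state(c_curr, c_next, configs_between, read_quorum, cur_vals):
    """configs_between: configurations strictly between c_curr and c_next,
    in ascending order; read_quorum(c) yields the curVals sets reported by
    a quorum of replicas of c (in the real protocol, this step blocks until
    a quorum of valid, forward-securely signed replies arrives)."""
    for c in configs_between:
        for vals in read_quorum(c):
            cur_vals |= vals  # join = union of verifiable input values
    return c_next, cur_vals   # the replica "moves" to c_next


# Toy run with made-up quorum responses for two intermediate configurations.
responses = {
    "C2": [{"a", "b"}, {"b"}],
    "C3": [{"c"}],
}
new_curr, vals = transfer_state("C1", "C4", ["C2", "C3"],
                                lambda c: responses[c], {"a"})
```

After the loop, the replica's state contains the union of everything reported by the quorums, so no value that completed (or reached the second stage) in a prior configuration can be lost.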
\subsection{Making {\dynamic} objects {\reconfigurable}} \label{subsec:transform-short} While {\dynamic} objects are important building blocks, they are not particularly useful by themselves because they assume that all verifiable input histories are comparable. In this section we show how to combine several {\dynamic} objects to obtain a single \emph{\reconfigurable} object. As described in Section~\ref{subsec:reconf-objects}, compared to the ``static'' versions, {\reconfigurable} objects have one additional operation, $\fUpdateConfig(C, \sigma)$; they must be parameterized by a boolean function $\fVerifyInputConfig(C, \sigma)$ and must satisfy a few additional properties such as {\pReconfigurationValidity} and {\pNewInstallationLiveness}. \begin{figure} \centering \begin{subfigure}[b]{0.52\textwidth} \centering \pdftex[\textwidth]{compiled-drawings/transform.pdf_tex} \caption{{\Reconfigurable} object} \label{fig:transform} \end{subfigure} \hfill \begin{subfigure}[b]{0.46\textwidth} \centering \pdftex[\textwidth]{compiled-drawings/transform-with-ac.pdf_tex} \caption{{\Reconfigurable} object with {\AccessControl}} \label{fig:transform-with-ac} \end{subfigure} \caption{ \textbf{(a):} The structure of our reconfigurable objects. All three {\dynamic} objects use $\iHistLA$ as the source of verifiable histories (including $\iHistLA$ itself). Verifiable input values for $\iConfLA$ are the verifiable input configurations of the {\reconfigurable} object. Verifiable input values for $\iHistLA$ are singleton sets of verifiable output values of $\iConfLA$. Finally, verifiable input values for $\iUObj$ are the verifiable input values of the reconfigurable object. \textbf{(b):} An example of a {\reconfigurable} object without external dependencies. $\iConfAC$ and $\iObjAC$ are instances of the \emph{\DynamicAccessControl} abstraction. } \label{fig:transform-both} \end{figure} We build a {\reconfigurable} object by combining three \emph{\dynamic} ones. 
The first is the {\dynamic} object that we want to make {\reconfigurable} (let us call it $\iUObj$); for example, it can be an instance of {\DBLA} if we want to obtain a {\reconfigurable} version of {\ByzantineLatticeAgreement}. The other two are used to build verifiable histories: $\iConfLA$ is a {\DBLA} operating on the configuration lattice $\Conf$, and $\iHistLA$ is a {\DBLA} operating on the power lattice $2^{\Conf}$. \input{algorithms/transform-short.tex} The relations between the three {\dynamic} objects are shown in Figure~\ref{fig:transform}. A short, quite straightforward version of the pseudocode is presented in Algorithm~\lstref{transform-short}. To update a configuration, the client first accesses $\iConfLA$ and then submits the returned configuration (with its proof) to $\iHistLA$. The resulting history is broadcast (using the reliable-broadcast primitive) and is used to reconfigure all three {\dynamic} objects. Thus, the three {\dynamic} objects are tied together. They ``live through'' the same history of reconfigurations and can be seen as three parts of the same algorithm rather than separate entities. The complete pseudocode of the transformation and the proof of its correctness are presented in Appendix~\ref{sec:transform-full}. \myparagraph{Eliminating external dependencies.} Most abstractions that we define and implement in this paper rely on external parameters: functions $\fVerifyInputValue(v, \sigma)$ and $\fVerifyInputConfig(C, \sigma)$. This allows us to connect several objects into larger systems (as shown in Figure~\ref{fig:transform-both}) and abstracts away application-specific details. Figure~\ref{fig:transform-with-ac} shows an example of a ``self-contained'' object without any external dependencies. The idea is very simple: add two instances of the {\dynamic} version of the {\AccessControl} abstraction we introduced in Section~\ref{subsec:access-control-def} that will serve as the gateways for the object. 
These {\dynamic} objects will be reconfigured along with $\iConfLA$, $\iHistLA$, and $\iUObj$. We discuss different implementations of the {\DynamicAccessControl} abstraction in Appendix~\ref{sec:access-control}. \section{Abstractions and Definitions}\label{sec:definitions} \confirmA{In this section, we introduce the principal abstractions of this paper (the access-control interface, {\ByzantineLatticeAgreement}, {\Reconfigurable} and {\Dynamic} objects), state our quorum assumptions, and recall the definitions of broadcast primitives used in our algorithms.} \subsection{Access control} \confirmed{% In our implementations and definitions, we parametrize some abstractions by boolean functions $\fVerifyInputValue(v, \sigma)$ and $\fVerifyInputConfig(C, \sigma)$, where $\sigma$ is called a \emph{certificate}. Moreover, some objects also export a boolean function $\fVerifyOutputValue(v, \sigma)$, which lets anyone verify that the value $v$ was indeed produced by the object. This helps us to deal with Byzantine clients. In particular, it achieves three important goals.} \confirmA{% First, the parameter $\fVerifyInputConfig$ allows us to prevent Byzantine clients from reconfiguring the system in an undesirable way or simply flooding the system with excessively frequent reconfiguration requests. In Appendix~\ref{sec:access-control}, we propose two simple approaches: each reconfiguration request must be signed by a quorum of replicas of some configuration\footnote{% Additional care is needed to prevent the ``slow reader'' attack. See Appendix~\ref{sec:access-control} for more details.} or by a quorum of preconfigured administrators.} \confirmA{% Second, the parameter $\fVerifyInputValue(v, \sigma)$ allows us to formally capture the application-specific notions of well-formed client requests and access control. For example, in a key-value storage system, each client can be permitted to modify only the key-value pairs that were created by this client. 
In this case, the certificate $\sigma$ is just a digital signature of that client.} \confirmA{% Finally, the exported function $\fVerifyOutputValue$ allows us to \emph{chain} several distributed objects in such a way that the output of one object is passed as input for another one.} \subsection{{\ByzantineLatticeAgreement} abstraction}\label{subsec:bla-def} \confirmed{% In this section, we formally define the \emph{\ByzantineLatticeAgreement} abstraction ({\BLA} for short), which serves as one of the main building blocks for constructing reconfigurable objects.} {\ByzantineLatticeAgreement} is an adaptation of {\LatticeAgreement}~\cite{gla} that can tolerate Byzantine failures of processes (both clients and replicas). It is parameterized by a \confirmed{join semi-lattice} $\Lat$, called the \emph{object lattice}, and a boolean function $\fVerifyInputValue: \Lat \times \Sigma \to \{\iTrue, \iFalse\}$, where $\Sigma$ is a set of possible certificates. We say that $\sigma$ is a \emph{valid certificate for input value} $v$ iff $\fVerifyInputValue(v, \sigma) = \iTrue$. We say that \confirmed{$v \in \Lat$} is a \emph{verifiable input value} in a given run iff at some point in time in that run, some process \emph{knows} a certificate $\sigma$ that is valid for $v$, i.e., it maintains $v$ and a valid certificate $\sigma$ in its local memory. We require that the adversary is unable to invert $\fVerifyInputValue$ by computing a valid certificate for a given value. This is the case, for example, when $\sigma$ must contain a set of unforgeable digital signatures. 
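As an illustration, the following Python sketch instantiates $\fVerifyInputValue$ with an HMAC used as a stand-in for an unforgeable signature. All names and the key are hypothetical; a real deployment would use asymmetric signatures so that certificates can be verified by anyone, not only by the key holder.

```python
import hashlib
import hmac

# Illustrative only: a certificate scheme where sigma is an HMAC over the
# value. CLIENT_KEY is a made-up shared secret standing in for a client's
# signing key; the adversary cannot produce a valid sigma without it.
CLIENT_KEY = b"client-secret"


def make_certificate(value: bytes) -> bytes:
    """Produce a certificate sigma for an input value (the client's side)."""
    return hmac.new(CLIENT_KEY, value, hashlib.sha256).digest()


def verify_input_value(value: bytes, sigma: bytes) -> bool:
    """The VerifyInputValue(v, sigma) predicate for this toy scheme."""
    return hmac.compare_digest(make_certificate(value), sigma)
```

Under this scheme, a value becomes a verifiable input value exactly when some process holds a matching certificate, and forging one amounts to breaking the underlying primitive.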
The {\ByzantineLatticeAgreement} abstraction exports one operation and one function.\footnote{% \confirmed{The main difference between an operation and a function is that a function can be computed without communicating with other processes and it always returns the same result given the same input.}} \begin{itemize} \item Operation $\fPropose(v, \sigma)$ returns a response of the form $\Tupple{w, \tau}$, where $v, w \in \Lat$, $\sigma$ is a valid certificate for input value $v$, and $\tau$ is a certificate for output value $w$; \item Function $\fVerifyOutputValue(v, \sigma)$ returns a boolean value. \end{itemize} Similarly to input values, we say that $\tau$ is a \emph{valid certificate for output value} $w$ iff $\fVerifyOutputValue(w, \tau) = \iTrue$. We say that $w$ is a \emph{verifiable output value} in a given run iff at some point in that run, some process knows $\tau$ that is valid for $w$. Implementations of {\ByzantineLatticeAgreement} must satisfy the following properties: \begin{itemize} \item \confirmed{\textit{\pBLAValidity}}: Every verifiable output value $w$ is a join of some set of verifiable input values; \item \confirmed{\textit{\pBLAVerifiability}}: If $\fPropose(...)$ returns $\Tupple{w, \tau}$ to a correct process, then $\fVerifyOutputValue(w, \tau) = \iTrue$; \item \confirmed{\textit{\pBLAInclusion}}: If $\fPropose(v, \sigma)$ returns $\Tupple{w, \tau}$ to a correct process, then $v \sqsubseteq w$; \item \confirmed{\textit{\pBLAComparability}}: All verifiable output values are comparable; \item \confirmed{\textit{\pBLALiveness}}: If the total number of verifiable input values is finite, every call to $\fPropose(v, \sigma)$ by a forever-correct process eventually returns. \end{itemize} \confirmed{For the sake of simplicity, we only guarantee liveness when there are finitely many verifiable input values.} This is sufficient for the purposes of reconfiguration. 
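The properties above can be illustrated on the power-set lattice, where the join is set union and $v \sqsubseteq w$ means $v \subseteq w$. The concrete values in this sketch are made up for the example.

```python
# Toy illustration of the BLA properties on the power-set lattice
# (join = set union, v below w iff v is a subset of w).
inputs = [frozenset({"a"}), frozenset({"b"}), frozenset({"c"})]

# Validity: every output value is a join (union) of some verifiable inputs.
w1 = inputs[0] | inputs[1]              # output of one propose
w2 = inputs[0] | inputs[1] | inputs[2]  # output of a later propose

# Inclusion: a client that proposed inputs[0] receives a value above it.
assert inputs[0] <= w1

# Comparability: any two output values are ordered by inclusion,
# so the outputs form a chain in the lattice.
assert w1 <= w2 or w2 <= w1
```

In particular, {\pComparability} rules out two proposers ending up with incomparable outputs such as $\{a\}$ and $\{b\}$: at least one of them must learn the other's input before returning.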
The abstraction that provides unconditional liveness is called \emph{\GeneralizedLatticeAgreement}~\cite{gla}. \subsection{{\Reconfigurable} objects}\label{subsec:reconf-objects} \confirmA{% It is possible to define a \emph{\reconfigurable} version of every static distributed object by enriching its interface and imposing some additional properties. In this section, we define the notion of a {\reconfigurable} object in a very abstract way. By combining this definition with the definition of a {\ByzantineLatticeAgreement} from Section~\ref{subsec:bla-def}, we obtain a formal definition of a {\ReconfigurableByzantineLatticeAgreement}. A similar combination can be performed with the definition of any static distributed object (e.g., with the definition of a {\MaxRegister} from Appendix~\ref{sec:max-register}).} \confirmA{% A {\reconfigurable} object exports an operation $\fUpdateConfig(C, \sigma)$, which can be used to reconfigure the system,} and must be parameterized by a boolean function $\fVerifyInputConfig: \Conf \times \Sigma \to \{\iTrue, \iFalse\}$, where $\Sigma$ is a set of possible certificates. \confirmA{As with verifiable input values,} we say that $C \in \Conf$ is a \emph{verifiable input configuration} in a given run iff at some point in that run, some process knows $\sigma$ such that $\fVerifyInputConfig(C, \sigma) = \iTrue$. We require the total number of verifiable input configurations to be finite in any given infinite execution of the protocol. In practice, this boils down to assuming sufficiently long periods of stability when no new verifiable input configurations appear. This requirement is imposed by all asynchronous reconfigurable storage systems~\cite{tutorial2010,SKM17-reconf,rla,freestore} we are aware of, and, in fact, can be shown to be necessary~\cite{r-liveness}. \confirmA{% When a correct replica $r$ is ready to serve user requests in configuration $C$, it triggers upcall $\fInstalledConfig(C)$. 
We then say that $r$ \emph{installs} configuration $C$.} A configuration is called \emph{installed} if some correct replica installed it. Finally, a configuration is called \emph{\tsuperseded} if some {\thigher} configuration is installed. \confirmed{Each {\reconfigurable} object must satisfy the following properties: \begin{itemize} \item \textit{\pReconfigurationValidity}: Every installed configuration $C$ is a join of some set of verifiable input configurations. Moreover, all installed configurations are comparable; % % \item \confirmA{\textit{\pReconfigurationLiveness}: Every call to $\fUpdateConfig(C, \sigma)$ by a forever-correct client eventually returns. Moreover, $C$ or a {\thigher} configuration will eventually be installed.} \item \textit{\pInstallationLiveness}: % If some configuration $C$ is installed by some correct replica, then $C$ or a {\thigher} configuration will eventually be installed by all correct replicas. \end{itemize}} \subsection{{\Dynamic} objects}\label{subsec:dynamic-objects} \confirmA{% {\Reconfigurable} objects are hard to build because they need to solve two problems at once. First, they need to order and combine concurrent reconfiguration requests. Second, the state of the object needs to be transferred across installed configurations (we call this \emph{state transfer}). We decouple these two problems by introducing the notion of a \emph{\dynamic} object. {\Dynamic} objects solve the second problem while ``outsourcing'' the first one.} \confirmed{% Before we formally define {\dynamic} objects, let us first define the notion of a \emph{history}. In Section~\ref{sec:system-model}, we introduced the configuration lattice $\Conf$. A finite set $h \subseteq \Conf$ is called a \emph{history} iff all elements of $h$ are comparable (in other words, if they form a sequence). Let $\fMaxElement(h)$ be $C \in h$ such that $\forall\ C' \in h: C' \sqsubseteq C$. 
By definition of a history, $\fMaxElement(h)$ is unambiguously defined for any history $h$.} \confirmed{% {\Dynamic} objects must export an operation $\fUpdateHistory(h, \sigma)$ and must be parameterized by a boolean function $\fVerifyHistory: \Hist \times \Sigma \to \{\iTrue, \iFalse\}$, where $\Hist$ is the set of all histories and $\Sigma$ is the set of all possible certificates.} We say that $h$ is a \emph{verifiable history} in a given run iff at some point in that run, some process knows $\sigma$ such that $\fVerifyHistory(h, \sigma) = \iTrue$. \confirmed{A configuration $C$ is called \emph{candidate} iff it belongs to some verifiable history. Also, a candidate configuration $C$ is called \emph{\tactive} iff it is not {\tsuperseded} by a {\thigher} configuration.} As with verifiable input configurations, the total number of verifiable histories is required to be finite. Additionally, we require all verifiable histories to be related by containment \confirmA{(i.e., comparable w.r.t. $\subseteq$)}. \confirmed{More formally,} if $\fVerifyHistory(h_1, \sigma_1) = \iTrue$ and $\fVerifyHistory(h_2, \sigma_2) = \iTrue$, then $h_1 \subseteq h_2$ or $h_2 \subseteq h_1$. We discuss how to build such histories in Section~\ref{sec:transform-main}. Similarly to {\reconfigurable} objects, a {\dynamic} object must have the $\fInstalledConfig(C)$ upcall. \confirmA{% The object must satisfy the following properties: \begin{itemize} \item \textit{\pDynamicValidity}: Only a candidate configuration can be installed by a correct replica;% \item \textit{\pDynamicLiveness}: Every call to $\fUpdateHistory(h, \sigma)$ by a forever-correct client eventually returns. 
Moreover, $\fMaxElement(h)$ or a {\thigher} configuration will eventually be installed; \item \textit{\pInstallationLiveness} (the same as for {\reconfigurable} objects): If some configuration $C$ is installed by some correct replica, then $C$ or a {\thigher} configuration will eventually be installed by all correct replicas. \end{itemize} Note that {\pDynamicValidity} implies that all installed configurations are comparable, since all verifiable histories are related by containment and all configurations within one history are comparable.} While {\reconfigurable} objects provide a general-purpose reconfiguration interface, {\dynamic} objects are weaker, \confirmA{as they require an external service to build comparable verifiable histories}. \confirmA{As the main contribution of this paper, we show how to build \emph{\dynamic} objects in a Byzantine environment and how to create \emph{\reconfigurable} objects using {\dynamic} objects as building blocks.} We argue that this technique is applicable to a large class of objects. \subsection{Quorum system assumptions}\label{subsec:intersection-and-availability} \confirmA{% Most fault-tolerant implementations of distributed objects impose some requirements on the subsets of processes that can be faulty. We say that a configuration $C$ is \emph{available at time $t$} iff $\ireplicas(C)$ is a dissemination quorum system at time $t$ (as defined in Section~\ref{sec:system-model}). Correctness of our implementations of \emph{\dynamic} objects relies on the assumption that {\tactive} candidate configurations are available. Once a configuration is superseded by a {\thigher} configuration, we make no further assumptions about it.} \confirmed{% For \emph{\reconfigurable} objects we impose a slightly more conservative requirement: every combination of verifiable input configurations that is not yet superseded must be available. 
More formally, let $C_1, \dots, C_k$ be verifiable input configurations such that $C = C_1 \join \dots \join C_k$ is not superseded at time $t$.
Then we require $\iquorums(C)$ to be a dissemination quorum system at time $t$.}
\confirmA{%
Correctness of our \emph{\reconfigurable} objects relies solely on correctness of the {\dynamic} building blocks.
Formally, when $k$ configurations are concurrently proposed, we require all possible combinations, i.e., $2^k - 1$ configurations, to be available.
However, in practice, at most $k$ of them will be chosen to be put in verifiable histories, and only those configurations actually need to be available.
We impose a more conservative requirement because we do not know these configurations \emph{a priori}.}
\subsection{Broadcast primitives}\label{subsec:bcast}
To make sure that no slow process is ``left behind'', we assume that a variant of the \emph{reliable broadcast} primitive~\cite{cachin2011introduction} is available.
The primitive must ensure two properties: (1)~If a forever-correct process~$p$ broadcasts a message~$m$, then $p$ eventually delivers $m$; (2)~If some message~$m$ is delivered by a \emph{forever-correct} process, every forever-correct process eventually delivers $m$.
In practice, such a primitive can be implemented by some sort of gossip protocol~\cite{gossiping}.
This primitive is ``global'' in the sense that it is not bound to any particular configuration.
In pseudocode we use ``{\RBBroadcast{...}}'' to denote a call to the ``global'' reliable broadcast.
Additionally, we assume a ``local'' \emph{uniform reliable broadcast} primitive~\cite{cachin2011introduction}.
It has a stronger totality property: if some \emph{correct} process $p$ delivered some message $m$, then every forever-correct process will eventually deliver $m$, even if $p$ later turns Byzantine.
This primitive can be implemented in a \emph{static} system, provided a quorum system.
As we deal with \emph{\dynamic} systems, we associate every broadcast message with a fixed configuration and only guarantee these properties if the configuration is \emph{never superseded}.
Notice that any static implementation of uniform reliable broadcast trivially guarantees this property.
In pseudocode we use ``{\URBBroadcastIn{...}{$C$}}'' to denote a call to the ``local'' uniform reliable broadcast in configuration $C$.
\section{Reconfigurable and Dynamic {\ByzantineLatticeAgreement}}\label{rbla}
In this section, we present our \emph{\dynamic} and \emph{\reconfigurable} implementations of the {\ByzantineLatticeAgreement} abstraction defined in Section~\ref{subsec:bla-def}.
Recall that the abstractions rely on the $\fVerifyInputValue$ and $\fVerifyInputConfig$ functions, specifying, informally, valid inputs.
We discuss how to build systems without external dependencies in Section~\ref{subsec:discussion:complete-systems} and Appendix~\ref{sec:access-control}.
In Section~\ref{subsec:dbla-impl}, we present our implementation of the \emph{\dynamic} version of the {\BLA} abstraction.
In Section~\ref{subsec:rbla-impl}, we show how to build the \emph{\reconfigurable} {\BLA} from several instances of {\dynamic} {\BLA}.
To respect space limits, the complete pseudocode and the proof of correctness are delegated to Appendices~\ref{sec:dbla-full} and~\ref{sec:rbla-full}.
\subsection{{\Dynamic} {\BLA}}\label{subsec:dbla-impl}
The {\DynamicByzantineLatticeAgreement} abstraction ({\DBLA} for short) is the main building block in our approach for constructing {\reconfigurable} objects.
Its specification is the combination of the specification of {\ByzantineLatticeAgreement} (from Section~\ref{subsec:bla-def}) and the specification of a {\dynamic} object (from Section~\ref{subsec:dynamic-objects}).
The abstraction exports two operations: $\fPropose(v, \sigma)$ and $\fUpdateHistory(h, \sigma)$, two up-calls: $\fNewHistory(h)$ and $\fInstalledConfig(C)$, and one function: $\fVerifyOutputValue(w, \tau)$.
The abstraction is parameterized by two boolean functions: $\fVerifyInputValue(v, \sigma)$ and $\fVerifyInputConfig(C, \sigma)$, and must satisfy a set of properties defined in Sections~\ref{subsec:bla-def} and~\ref{subsec:dynamic-objects} such as {\pComparability}, {\pNewInstallationLiveness}, and {\pDynamicValidity}.
\input{algorithms/dbla-short}
The pseudocode for our algorithm is partially presented in Algorithms~\lstref{dbla-short-client} and~\lstref{dbla-short-replica}.
To respect space limits, we present the full pseudocode and the proof of correctness in Appendix~\ref{sec:dbla-full}.
Here we try to convey the most important ideas and design decisions.
Most papers on crash fault-tolerant reconfiguration take the \emph{client-centric approach} to state transfer~\cite{dynastore,smartmerge,rla,SKM17-reconf}.
In those algorithms the client is responsible for reading the state from the replicas of an older configuration and relaying the read state to the replicas of a new configuration.
This approach is appealing because it has small communication complexity when there are few clients and assumes only \emph{passive} replicas, i.e., the replicas only reply to the client's messages and never initiate any communication themselves.
However, the client-centric approach becomes significantly more expensive and harder to implement if we assume that some clients can be Byzantine.
Instead, in this paper, we take the \emph{replica-centric approach} to state transfer.
That is, the replicas of a new configuration are responsible for reading the state from the replicas of prior configurations.
This approach has the benefits of lower latency and a communication cost that does not depend on the number of clients~\cite{freestore}, but most importantly, it is easier to adapt to the case of Byzantine clients.
\myparagraph{Client implementation.}
Let us first take the perspective of a client (Algorithm~\lstref{dbla-short-client}).
Since the state transfer is handled by the replicas, the client's code is rather simple, yet there are several important details.
Our {\BLA} implementation builds upon the (crash fault-tolerant, static) implementation of lattice agreement by Faleiro et al.~\cite{gla}.
An important difference is that, instead of operating directly on the object lattice $\Lat$, internally we operate with sets of verifiable input values, in other words, with the power lattice $2^{\Lat}$, and only join them before returning the result to the client (line~\ref{lst:dbla-short-client:propose-return}).
This yields the same results and helps us to preserve the {\pValidity} property of {\BLA}: every output value is, indeed, a join of some set of verifiable input values.
To propose a new value, the client first needs to call the $\fRefine$ procedure (\lstlinerangeref{dbla-short-client}{refine}{refine-end}).
It will add the value to the set of all verifiable input values the client is aware of ($\icurrentValues$, line~\ref{lst:dbla-short-client:refine-current-values}) and send a $\mPropose$ message to all replicas of the most recent configuration in the client's local history (line~\ref{lst:dbla-short-client:refine-send-message}).
If this configuration turns out to be outdated, the client will eventually learn about a newer configuration and will simply repeat its request there (line~\ref{lst:dbla-short-client:new-history-refine}).
This is safe because lattice agreement operations are \emph{idempotent}, which means that an operation can be applied several times with the same result as if it was applied only once.
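The idempotence argument can be illustrated with a minimal Python sketch. All names and the data representation here are ours, not the paper's: a verifiable input value is modeled as a frozenset and the object-lattice join as set union.

```python
# Illustrative sketch (invented names): the client works in the power
# lattice 2^L, i.e., with sets of verifiable input values, and joins them
# only when returning a result to the caller.

def refine(current_values: frozenset, learned: frozenset) -> frozenset:
    """Merge newly learned verifiable input values into the client's set.
    Refine is idempotent: repeating it with the same values changes
    nothing, which is what makes re-sending a Propose message in a newer
    configuration safe."""
    return current_values | learned

def join(values: frozenset) -> frozenset:
    """Join the collected input values before returning them to the client."""
    out = frozenset()
    for v in values:
        out |= v
    return out
```

Repeating `refine` with the same learned values leaves the set unchanged, mirroring why a client may safely repeat its request in a newer configuration.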
In response to the client's $\mPropose$ message, the replicas will send $\mProposeResp$ messages (see \lstlinerangeref{dbla-short-client}{upon-propose-resp}{upon-propose-resp-end}), each of which will contain the set of verifiable input values known to the replica ($\ivalues$), a \emph{forward-secure signature} ($\isignature$) (see Section~\ref{sec:system-model}), and a sequence number ($\isn$).
If the client learns any new verifiable input values from a response (line~\ref{lst:dbla-short-client:upon-propose-resp-refine}), it will repeat the $\fRefine$ procedure with the new values.
Since we assume that the total number of verifiable input values is finite in any given infinite execution, this process will eventually converge.
If the client receives a $\mProposeResp$ message without any new verifiable input values in it, it will add this message to the set of acknowledgments (line~\ref{lst:dbla-short-client:upon-propose-resp-ack}).
Notice the additional check ($\ivalues = \icurrentValues$).
This condition is supposed to always be $\iTrue$ if the sender is correct.
It is only needed to filter out malformed acknowledgments from Byzantine replicas.
Upon gathering a quorum of such acknowledgments (line~\ref{lst:dbla-short-client:upon-acks-collected}), the client proceeds to the second stage of the protocol, which we will cover in the next paragraph.
Now we would like to emphasize the use of \emph{forward-secure signatures} here.
This is what allows us to mitigate the \textbf{``I still work here''} attack.
Our state transfer protocol ensures that before any configuration is superseded, all correct replicas in some quorum of that configuration will invoke $\fUpdateFSKeys$ (in other words, will remove their private keys for that configuration).
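The acknowledgment-handling logic can be sketched as follows. This is a hypothetical rendering with invented names; the quorum size assumes the usual dissemination-quorum setting (quorums of $\lceil (n+b+1)/2 \rceil$ replicas, e.g., $2b+1$ out of $n = 3b+1$), which is an assumption of the sketch, not a requirement stated here.

```python
# Hypothetical sketch of the client's handling of ProposeResp messages.
# A response either teaches the client new verifiable input values (and
# restarts refine), or, if it matches currentValues exactly, counts as an
# acknowledgment; anything else is a malformed Byzantine reply and is dropped.

def handle_propose_resp(current_values, acks, resp):
    """Return (values_to_refine_with, updated_acks)."""
    new_values = resp["values"] - current_values
    if new_values:
        return new_values, acks                  # learn and refine again
    if resp["values"] == current_values:         # filter malformed acks
        acks = acks | {resp["sender"]}
    return frozenset(), acks

def quorum_reached(acks, n, b):
    """Dissemination quorums of size ceil((n + b + 1) / 2), e.g., 2b + 1
    distinct senders when n = 3b + 1 (an assumption of this sketch)."""
    return len(acks) >= (n + b + 2) // 2
```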
If the client tries to access a configuration that was superseded long before the client's request, it simply won't receive enough responses with valid forward-secure signatures and will eventually retry its request in a {\thigher} configuration (line~\ref{lst:dbla-short-client:new-history-refine}).
The primary goal of the second stage is to mitigate the \textbf{``slow reader''} attack.
The client simply sends the acknowledgments it has collected in the first stage to the replicas and asks them to sign with their forward-secure signatures that they have seen these acknowledgments \emph{before they updated their private keys}, i.e., before this configuration was superseded.
By this, we confirm that the configuration used in the first stage was not superseded in the meantime.
\myparagraph{Replica implementation.}
Now, let us take the perspective of a correct replica (Algorithm~\ref{alg:dbla-short-replica}).
Each $\mPropose$ message has some configuration identifier attached to it.
When a correct replica receives a $\mPropose$ message from a correct client, there are three possible cases (lines~\ref{lst:dbla-short-replica:upon-propose}--\ref{lst:dbla-short-replica:upon-propose-end}):
\begin{enumerate}
\item If the message is from the highest configuration in the replica's local history ($\fMaxElement(\ihistory)$), and this configuration is installed by this replica ($C = \Cinst$), the replica will process this message and will send a reply, signed with a forward-secure signature with timestamp $\fHeight(C)$ (line~\ref{lst:dbla-short-replica:upon-propose-send-resp});
\item If the message is from a configuration that is {\tlower} than $\fMaxElement(\ihistory)$, the replica will drop this message.
This cannot undermine liveness of client requests because, if both the client and the replica remain correct forever, the client will either complete the request without this replica's help, or eventually learn about a {\thigher} configuration and repeat its request there;
\item Otherwise, the replica ``waits'' (line~\ref{lst:dbla-short-replica:upon-propose-wait}).
That is, the message is stored in some buffer until one of the two former conditions is satisfied.
\end{enumerate}
A major part of the replica's implementation is the state transfer procedure (lines~\ref{lst:dbla-short-replica:state-transfer-begin}--\ref{lst:dbla-short-replica:state-transfer-end}).
Whenever a replica sees that its ``current configuration'' ($\Ccurr$) is not the {\thighest} one in its local history, it tries to read the state from a quorum in each of the intermediate configurations between $\Ccurr$ and the {\thighest} configuration ($\Cnext$) one by one.
If it succeeds before $\Cnext$ or a {\thigher} configuration is installed, it updates $\Ccurr$ to $\Cnext$ and lets the other replicas know about it.
It uses the uniform reliable broadcast primitive we introduced in Section~\ref{subsec:bcast} to disseminate the message $\Message{\mUpdateComplete}$ among the correct replicas of $\Cnext$ (line~\ref{lst:dbla-short-replica:state-transfer-broadcast}).
If, on the other hand, it happens that the replica is waiting for responses from some configuration $C$, and this configuration is already superseded, then it might never gather a quorum of replies from the replicas of this configuration.
In this case our algorithm ensures that this replica will eventually install some configuration {\thigher} than $C$ and will finish waiting via the other condition ($C \sqsubset \Ccurr$).
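Because all configurations of a verifiable history are totally ordered by containment, this traversal walks a simple chain. A toy sketch with our own modeling (a configuration as a frozenset of updates; names invented):

```python
# Toy model of the state-transfer traversal: a replica whose current
# configuration lags behind reads state from each intermediate
# configuration of the history, lowest first, up to the highest one (Cnext).

def transfer_path(history, c_curr):
    """Configurations strictly higher than c_curr, ordered low to high.
    Histories are chains, so sorting by size orders them by containment."""
    assert all(a <= b or b <= a for a in history for b in history), "not a chain"
    return sorted((c for c in history if c_curr < c), key=len)
```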
When a replica learns about a new verifiable history (line~\ref{lst:dbla-short-replica:new-history}), it updates its private keys so that it will never again be able to answer any user request in a configuration {\tlower} than the {\thighest} configuration in this new history (line~\ref{lst:dbla-short-replica:new-history-update-fs-keys}).
Due to the properties of reliable broadcast, every other correct process will eventually also learn about this history and abandon {\tlower} configurations.
When a correct replica reads the state from some prior configuration $C$ during state transfer (lines~\ref{lst:dbla-short-replica:state-transfer-update-read}--\ref{lst:dbla-short-replica:state-transfer-wait}), it is responsible for making sure that some quorum $Q \in \iquorums(C)$ updated their forward-secure private keys to a timestamp larger than $\fHeight(C)$.
This is achieved by a simple mechanism: the replicas of $C$ won't reply to the $\mUpdateRead$ messages until they update their private keys far enough (until $C \sqsubset \fMaxElement(\ihistory)$).
\subsection{{\Reconfigurable} {\BLA}}\label{subsec:rbla-impl}
While {\DBLA} is an important building block, it is not particularly useful by itself because it relies on the comparability of verifiable histories.
In this section we show how to combine several {\DBLA} instances to obtain a single instance of \emph{\ReconfigurableByzantineLatticeAgreement} ({\RBLA} for short).
As described in Section~\ref{subsec:reconf-objects}, compared to the ``static'' version of the {\ByzantineLatticeAgreement} abstraction, {\RBLA} has one more operation -- $\fUpdateConfig(C, \sigma)$, must be parameterized by a boolean function $\fVerifyInputConfig(C, \sigma)$, and must satisfy a number of additional properties such as {\pReconfigurationValidity} and {\pNewInstallationLiveness}.
\input{algorithms/rbla-short.tex}
\begin{figure}
\centering
\input{compiled-drawings/rbla-architecture.pdf_tex}
\caption{
The structure of the {\RBLA} implementation.
%
%
All three {\dynamic} objects use $\iHistLA$ as the source of verifiable histories (including $\iHistLA$ itself).
Verifiable input values for $\iConfLA$ are the verifiable input configurations of the {\RBLA} object.
Verifiable input values for $\iHistLA$ are singleton sets of verifiable output values of $\iConfLA$.
Finally, verifiable input values for $\iObjLA$ are the verifiable input values of the {\RBLA} object itself.
}
\label{fig:rbla-architecture}
\end{figure}
Our {\RBLA} implementation builds upon three {\DBLA} objects: $\iConfLA$ operating on the configuration lattice $\Conf$, $\iHistLA$ operating on the lattice $2^{\Conf}$, and $\iObjLA$ operating on the object lattice $\Lat$.
The first two {\DBLA} objects are used to take care of configuration updates, and the third one serves user requests.
The relations between the three {\DBLA} objects are shown in Figure~\ref{fig:rbla-architecture}.
The short version of the pseudocode is presented in Algorithm~\lstref{rbla-short} and is quite straightforward.
The data operations are performed directly on $\iObjLA$.
To update a configuration, the client first accesses $\iConfLA$ and then submits the returned configuration (with its proof) to $\iHistLA$.
The resulting history is broadcast and is used to reconfigure all three {\DBLA} objects.
Thus, the three {\DBLA} objects are tied together.
They ``live through'' the same history of reconfigurations and can be seen as three parts of the same algorithm rather than separate entities.
By accessing $\iConfLA$ and then $\iHistLA$ we minimize the number of configurations that should be accessed for a consistent configuration shift.
Indeed, due to the {\pComparability} property of {\ByzantineLatticeAgreement}, all configurations submitted to $\iHistLA$ will already be ordered and any set of such configurations will form a sequence (i.e., a history as defined in Section~\ref{subsec:reconf-objects}). When $k$ configurations are concurrently proposed, this allows us to access only $O(k)$ configurations for state transfer, and not an exponential number as in earlier work~\cite{dynastore,parsimonious}. This is known to be optimal~\cite{SKM17-reconf}. To respect space limits, we present the full pseudocode with the proof of correctness in Appendix~\ref{sec:rbla-full}. \section{Introduction}\label{sec:introduction} \myparagraph{Replication and quorums.} Replication is a natural way to ensure availability of shared data in the presence of failures. A collection of \emph{replicas}, each holding a version of the data, ensure that the \emph{clients} get a desired service, even when some replicas become unavailable or hacked by a malicious adversary. Consistency of the provided service requires the replicas to \emph{synchronize}: intuitively, every client should be able to operate on the most ``up-to-date'' data, regardless of the set of replicas it can reach. It always makes sense to assume as little as possible about the environment in which a system we design is expected to run. For example, \emph{asynchronous} distributed systems do not rely on timing assumptions, which makes them extremely robust with respect to communication disruptions and computational delays. It is, however, notoriously difficult and sometimes even impossible to make such systems \emph{fault-tolerant}. The folklore CAP theorem~\cite{Bre00,GL02} states that no replicated service can combine consistency, availability, and partition-tolerance. 
In particular, no consistent and available read-write storage can be implemented in the presence of partitions: clients in one partition are unable to keep track of the updates taking place in another one.
Therefore, fault-tolerant storage systems tend to assume that partitions are excluded, e.g., by requiring a majority of replicas to be \emph{correct}~\cite{ABD}.
More generally, one can assume a \emph{quorum system}, e.g., a set of subsets of replicas satisfying the intersection and availability properties~\cite{quorums}.
Every (read or write) request from a client should be \emph{accepted} by a quorum of replicas.
As every two quorums have at least one replica in common, intuitively, no client can miss previously written data.
Of course, failures of replicas may jeopardize the underlying quorum system.
In particular, we may find ourselves in a system in which no quorum is available and, thus, no operation may be able to terminate.
Even worse, if the replicas are subject to Byzantine failures, we may not be able to guarantee the very correctness of read values.
\myparagraph{Asynchronous reconfiguration.}
To anticipate such scenarios in the long run, we must maintain a \emph{reconfiguration} mechanism that enables replacing compromised replicas with correct ones and updating the corresponding quorum assumptions.
A challenge here is to find an asynchronous implementation of reconfiguration in a system where both clients and replicas are subject to Byzantine failures that can be manifested by arbitrary and even malicious behavior.
In the world of selfishly driven blockchain users, a reconfiguration mechanism must be prepared for this.
Recently, a number of reconfigurable systems were proposed for asynchronous \emph{crash}-fault environments~\cite{dynastore,parsimonious,smartmerge,freestore,SKM17-reconf,rla} that were originally applied to (read-write) storage systems~\cite{dynastore, parsimonious,freestore}, and then extended to max-registers~\cite{smartmerge,SKM17-reconf} and the more general \emph{lattice} data type~\cite{rla}.
These proposals tend to ensure that the clients reach a form of ``loose'' agreement on the currently active configurations, which can be naturally expressed via the \emph{lattice agreement} abstraction~\cite{lattice-hagit,gla}.
We allow clients to (temporarily) live in different worlds, as long as these worlds are properly ordered.
For example, we may represent a configuration as a set of \emph{updates} (additions and removals of replicas) and require that all installed configurations should be related by containment.
A configuration becomes \emph{stale} as soon as it is subsumed by a new one representing a proper superset of updates.
\myparagraph{Challenges of Byzantine fault-tolerant reconfiguration.}
In this paper, we focus on a \emph{Byzantine fault-tolerant} reconfiguration mechanism.
Here we had to address several challenges, specific to dynamic systems with Byzantine faults, which \confirmed{make it hard to benefit} from existing crash fault-tolerant reconfigurable solutions.
First, when we build a system out of lower-level components, we need to make sure that the outputs provided by these components are ``authentic''.
Whenever a (potentially Byzantine) process claims to have obtained a \emph{value} $v$ (e.g., a new configuration estimate) from an underlying object (e.g., Lattice Agreement), it should also provide a \emph{proof} $\sigma$ that can be independently verified by every correct process.
The proof typically consists of \confirmed{digital signatures} provided by a quorum of replicas of some configuration.
We abstract this requirement out by equipping the object with a function $\fVerifyOutputValue$ that returns a boolean value, provided $v$ and $\sigma$.
When invoked by a correct process, the function returns $\iTrue$ if and only if $v$ has indeed been produced by the object.
When ``chaining'' the objects, i.e., adopting the output $v$ provided by an object $A$ as an input for another object $B$, which is the typical scenario in our system, a correct process invokes $A.\fVerifyOutputValue(v,\sigma)$, where $\sigma$ is the proof associated with $v$ by the implementation of $A$.
This way, only values actually produced by $A$ can be used as inputs to $B$.
Second, we face the \textbf{{\attackName} attack}~\cite{tutorial2010}.
It is possible that a client \confirmed{that did not log into the system for a long time} tries to access a \emph{stale}, outdated configuration in which some quorum is entirely compromised by the Byzantine adversary.
The client can therefore be provided with an inconsistent view of the shared data.
Thus, before accepting a new configuration, we need to make sure that the stale ones are no longer capable of processing data requests from the clients.
We address this issue via the use of a forward-secure signature scheme~\cite{bellare1999forward}.
Intuitively, every replica is provided with a distinct private key associated with each configuration.
Before a configuration is replaced with a newer one, at least a quorum of its replicas are asked to destroy their private keys.
Therefore, even if the replicas are to become Byzantine in the future, they will not be able to provide slow clients with inconsistent values.
The stale configuration simply becomes non-responsive, as in crash-fault-tolerant reconfigurable systems.
Unfortunately, in an asynchronous system it is impossible to make sure that replicas of \emph{all} stale configurations remove their private keys, as it would require solving consensus~\cite{flp}.
However, as we show in this paper, it is possible to make sure that the configurations in which replicas do not remove their keys are never accessed by correct clients and are incapable of creating ``proofs'' for output values.
Finally, there is a subtle and quite interesting \textbf{``slow reader'' attack}.
Suppose that a client accesses \emph{almost all} replicas in a quorum of the current configuration each holding a stale state, as the only correct replica in the quorum that has the up-to-date state has not yet responded.
The client then falls asleep; meanwhile, the configuration is superseded by a new one.
As we do not make any assumptions about the correctness of replicas in stale configurations, the replica that has not yet responded can be compromised.
Moreover, due to asynchrony, this replica can still retain its original private key.
The replica can then pretend to be unaware of the current state.
Therefore, the slow client might still be able to complete its request in the superseded configuration and return a stale state, which would violate the safety properties of the system.
We show that this issue can be addressed by an additional ``confirming'' round-trip executed by the client.
\myparagraph{Our contribution: Byzantine fault-tolerant reconfigurable services.}
We provide a systematic solution to each of the challenges described above and present a set of techniques for building reconfigurable services in an asynchronous model with Byzantine faults of both clients and replicas.
We consider a very strong model of the adversary: any number of clients can be Byzantine and, as soon as some configuration is installed, no assumptions are made about the correctness of replicas in any of the prior configurations.
Moreover, in our quest for a simple solution for the Byzantine model, we devised a new approach to building asynchronous reconfigurable services by further exploring the connection between reconfiguration and the lattice agreement abstraction~\cite{smartmerge, rla}. We believe that this approach can be usefully applied to crash fault-tolerant systems as well. Instead of trying to build a complex graph of configurations ``on the fly'' while transferring the state between those configurations, we start by simply assuming that we are already given a \emph{linear history} (i.e., a sequence of configurations). We introduce the notion of a \emph{{\dynamic} object} -- an object that can transfer its own state between the configurations of a given finite linear history and serve meaningful user requests. We then provide {\dynamic} implementations of several important object types such as {\LatticeAgreement} and {\MaxRegister} and expect that other asynchronous static algorithms can be translated to the {\dynamic} model using a similar set of techniques. Finally, we present a \emph{general transformation} that allows us to combine \emph{any} {\dynamic} object with two \emph{\DynamicByzantineLatticeAgreement} objects in such a way that together they constitute a single \emph{\reconfigurable} object, which exports a general-purpose reconfiguration interface and supports all the operations of the original {\dynamic} object. \myparagraph{Roadmap.} \confirmed{The rest of the paper is organized as follows.} We overview the model assumptions in Section~\ref{sec:system-model} and define our principal abstractions in Section~\ref{sec:definitions}. In Section~\ref{sec:dbla-main}, we describe our implementation of {\DynamicByzantineLatticeAgreement} and in Section~\ref{sec:transform-main}, \confirmed{we show how to use it to implement a reconfigurable object}. We discuss related work in Section~\ref{sec:related-work} and conclude in Section~\ref{sec:discussion}. 
\confirmed{%
Complete pseudocode and proofs of correctness are delegated to Appendices~\ref{sec:dbla-full} and~\ref{sec:transform-full}.
We discuss the problem of access control in Byzantine fault-tolerant systems in Appendix~\ref{sec:access-control}.
Finally, we provide an implementation for {\dynamic} {\MaxRegister} in Appendix~\ref{sec:max-register}.}
\section{{\AccessControl}}\label{sec:access-control}
\needsrev{%
In this section we present an auxiliary abstraction, called {\AccessControl}, and describe two possible implementations.
This abstraction can be used to prevent malicious, malformed, and excessively frequent user requests from affecting the system.
The {\AccessControl} object exports one operation and one function:
\begin{itemize}
\item Operation $\fRequestCert(v)$ returns a \emph{certificate} $\sigma$, which can be verified with $\fVerifyCert(v, \sigma)$, or the special value $\bot$, indicating that the permission was denied;
\item Function $\fVerifyCert(v, \sigma)$ returns a boolean value.
\end{itemize}
The implementation must satisfy the following property:
\begin{itemize}
\item \textit{\pCertificateVerifiability}: If $\fRequestCert(v)$ returned $\sigma$ to a correct process, then $\fVerifyCert(v, \sigma) = \iTrue$.
\end{itemize}}
\subsection{Sanity-check approach}
\input{algorithms/access-control-short.tex}
One of the simplest implementations of access control in a \emph{static} system is to require at least $b+1$ replicas to sign each certificate, where $b$ is the maximal possible number of Byzantine replicas, sometimes called the \emph{resilience threshold}.
The correct replicas can perform some application-specific sanity checks before approving requests.
The key property of this approach is that it guarantees that each valid certificate is signed by \emph{at least one correct replica}.
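A minimal sketch of this certificate check, with the per-replica signature verification left as an opaque parameter (all names here are ours, not the paper's):

```python
# Hypothetical sketch of the sanity-check policy: a certificate is a set
# of (replica_id, signature) pairs, valid iff it carries correct
# signatures from at least b + 1 distinct replicas; in a static system
# with at most b Byzantine replicas, at least one signer is then correct.

def verify_cert(value, cert, b, verify_sig):
    """verify_sig(replica_id, value, signature) -> bool is assumed to be
    the underlying cryptographic check; it is a parameter of the sketch."""
    signers = {rid for rid, sig in cert if verify_sig(rid, value, sig)}
    return len(signers) >= b + 1
```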
In many cases, this is sufficient to guarantee resilience both against Sybil attacks~\cite{sybilattack} and against attempts to flood the system with reconfiguration requests.
The correct replicas can check the identities of the new participants and refuse to sign excessively frequent requests.
In \emph{\dynamic} asynchronous systems just providing $b+1$ signatures is not sufficient.
Despite the use of forward-secure signatures, in a superseded {\pivotal} configuration there might be significantly more than $b$ Byzantine replicas with their private keys not removed (in fact, at least $2b$).
The straightforward way to implement this policy in a \emph{\dynamic} system is to add the \emph{confirming} phase, as in our implementation of {\DynamicByzantineLatticeAgreement} (see Section~\ref{sec:dbla-main}), after collecting $b+1$ signed approvals.
The confirming phase guarantees that, during the execution of the first phase, the configuration was {\tactive}.
The state transfer protocol should be the same as for {\DBLA} with the exception that no actual state is being transferred.
The only goal of the state transfer protocol in this case is to make sure that the replicas update their private keys before a new configuration is installed.
This and the following approach can be generally described as ``vote-based'' access control policies.
The pseudocode for their {\dynamic} implementation is presented in Algorithm~\lstref{ac-short}.
\subsection{Quorum-based approach (``on-chain governance'')}
A more powerful strategy in a \emph{static} system is to require a \emph{quorum} of replicas to sign each certificate.
An important property of this implementation is that it can detect and prevent conflicting requests.
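The replica-side rule behind this guarantee can be sketched as follows (hypothetical names; persistent memory is modeled as an in-memory dict):

```python
# Hypothetical sketch: once a replica signs one of two conflicting values,
# it records that choice and refuses to sign the other.  With quorum
# certificates, quorum intersection then ensures that at most one of the
# conflicting values can gather enough signatures to be certified.

class SigningReplica:
    def __init__(self, rid):
        self.rid = rid
        self.signed = {}                 # conflict-group id -> chosen value

    def sign(self, group, value):
        prev = self.signed.setdefault(group, value)
        if prev != value:
            return None                  # conflicting request: refuse
        return (self.rid, group, value)  # stand-in for a real signature
```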
More formally, suppose that there are values $v_1$ and $v_2$, for which the following two properties should hold:
\begin{itemize}
\item Both are acceptable: $\fRequestCert(v_i)$ should not return $\bot$ unless $\fRequestCert(v_j)$ was invoked in the same execution, where $j \neq i$.
\item At most one may be accepted: if some process knows $\sigma_i$ such that $\fVerifyCert(v_i, \sigma_i) = \iTrue$, then no process should know $\sigma_j$ such that $\fVerifyCert(v_j, \sigma_j) = \iTrue$.
\end{itemize}
Note that it is possible that neither $v_1$ nor $v_2$ is accepted by the {\AccessControl} if the requests are made concurrently.
To guarantee that exactly one certificate is issued, we would need to implement consensus, which is impossible in an asynchronous model~\cite{flp}.
If a correct replica has signed a certificate for value $v_i$, it should store this fact in persistent memory and refuse signing $v_j$ if requested.
Due to the quorum intersection property, this guarantees the ``at most one'' semantics in a static system.
This approach can be implemented in a \emph{\dynamic} system using the same pseudocode from Algorithm~\lstref{ac-short} and the state transfer protocol from our {\DBLA} implementation (see Algorithm~\ref{alg:dbla-state-transfer}).
Using the {\dynamic} version of this approach to certifying reconfiguration requests allows us to capture the notion of what is sometimes called ``on-chain governance''.
The idea is that the participants of the system (in our case, the owners of the replicas) decide which actions or updates to allow by the means of voting.
Every decision needs a quorum of signed votes to be considered valid and no two conflicting decisions can be made.
\subsection{Trusted administrators}
A naive yet common approach to {\dynamic} systems is to have a trusted administrator, who signs the reconfiguration requests.
However, if the administrator's private key is lost, the system might lose liveness, and if it is compromised, the system might lose even safety.
A more viable approach is to have $n$ administrators and to require $b+1$ of them to sign every certificate, for some $n$ and $b$ such that $0 \le b < n$. In this case, the system will ``survive'' up to $b$ keys being compromised and up to $n-(b+1)$ keys being lost. An interesting problem, which we will not dive into here, is how to allow changing the set of administrators. One possible solution would be to ``run a reconfiguration protocol'' among the administrators. Another approach is to include the set of administrators in the configuration lattice $\Conf$ of the reconfigurable object itself and to use normal reconfiguration requests to change the set of administrators. \subsection{Combining {\AccessControl} with other objects} \label{subsec:combining-access-control-with-other-objects} \input{figures/access-control.tex} \needsrev{% There are at least two possible ways to combine the {\AccessControl} abstraction with a {\reconfigurable} object in a practical system.} \needsrev{% The simplest and, perhaps, the most practical approach is to embed two instances of {\DynamicAccessControl} directly into the structure of a reconfigurable object, as shown in Figure~\ref{fig:access-control-inside}. In this case, the replicas that execute the code for the {\AccessControl} are the same replicas that execute the code of the other {\dynamic} objects in the implementation of this {\reconfigurable} object.} \needsrev{% Alternatively, one can apply the transformation from Section~\ref{sec:transform-main} to the {\dynamic} {\AccessControl} implementation described in this section to obtain a \emph{\reconfigurable} version of the {\AccessControl} abstraction. It can then be combined with any other {\reconfigurable} object in the structure depicted in Figure~\ref{fig:access-control-outside}.
In this case, the replicas of $\iConfRAC$ produce verifiable input configurations for themselves and for two other objects.} \section{System Model}\label{sec:system-model} \myparagraph{Processes and channels.} \confirmed{We consider a system of \emph{processes}. A process can be a \emph{replica} or a \emph{client}. Let $\Replicas$ and $\Clients$ denote the (possibly infinite) sets of replicas and clients, resp., that can potentially take part in the computation.} At any point in a given execution, a process can be in one of the four states: \emph{idle}, \emph{correct}, \emph{halted}, or \emph{Byzantine}. A process is \emph{idle} if it has not taken a single step in the execution yet. A process stops being idle by taking a step, e.g., sending or receiving a message. A process is considered \emph{correct} as long as it respects the algorithm it is assigned. A process is \emph{halted} if it has executed the special ``halt'' command and takes no further steps. Finally, a process is \emph{Byzantine} if it prematurely stops taking steps of the algorithm or takes steps that are not prescribed by it. \confirmed{A correct process can later halt or become Byzantine. However, the reverse is impossible: a halted or Byzantine process cannot become correct.} \confirmA{We assume that a process that remains correct forever (we call it \emph{forever-correct}) does not prematurely stop taking steps of its algorithm.} We assume asynchronous \emph{reliable} \emph{authenticated} point-to-point links between each pair of processes~\cite{cachin2011introduction}. If a forever-correct process $p$ sends a message $m$ to a forever-correct process $q$, then $q$ eventually delivers $m$. Moreover, if a correct process $q$ receives a message $m$ from a process $p$ at time $t$, and $p$ is correct at time $t$, then $p$ has indeed sent $m$ to $q$ before $t$.
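The process lifecycle described above can be summarized as a small state machine. The following sketch is purely illustrative (the names and the \texttt{Process} class are ours, not part of the model); in particular, we conservatively allow an idle process to turn Byzantine directly, since its very first step may already deviate from the algorithm.

```python
# Sketch of the process lifecycle from the model above: a process starts
# idle, becomes correct by taking its first step, and may later halt or
# turn Byzantine; halted and Byzantine processes never become correct
# again.  All names here are illustrative, not part of the pseudocode.

ALLOWED = {
    "idle":      {"correct", "byzantine"},  # the first step may already deviate
    "correct":   {"halted", "byzantine"},   # may halt or be corrupted
    "halted":    set(),                     # terminal
    "byzantine": set(),                     # terminal
}

class Process:
    def __init__(self):
        self.state = "idle"

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

p = Process()
p.transition("correct")  # the process takes its first prescribed step
p.transition("halted")   # it executes the special "halt" command
```

The two terminal states reflect the model's requirement that a halted or Byzantine process can never become correct again.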
We assume that the adversary is computationally bounded so that it is unable to break the cryptographic techniques, such as digital signatures, forward-secure signature schemes~\cite{bellare1999forward}, and one-way hash functions. \myparagraph{Configuration lattice.} A \emph{join semi-lattice} \confirmed{(or simply a \emph{lattice})} is a tuple $(\Lat, \sqsubseteq)$, where $\Lat$ is a set partially ordered by the binary relation $\sqsubseteq$ such that for all elements $x,y \in \Lat$, there exists the \emph{least upper bound} for the set $\{x, y\}$, i.e., the element $z \in \Lat$ such that $x, y \sqsubseteq z$ and $\forall\ w \in \Lat:$ if $x, y \sqsubseteq w$, then $z \sqsubseteq w$. The least upper bound for the set $\{x, y\}$ is denoted by $x \join y$. $\join$ is called the \emph{join operator}. It is an associative, commutative, and idempotent binary operator on $\Lat$. We write $x \sqsubset y$ whenever $x \sqsubseteq y$ and $x \neq y$. We say that $x, y \in \Lat$ are \emph{comparable} iff either $x \sqsubseteq y$ or $y \sqsubseteq x$. \confirmed{For any (potentially infinite) set $A$, $(2^A, \sqsubseteq)$ is a join semi-lattice, called \emph{the powerset lattice of $A$}. For all $Z_1, Z_2 \in 2^A$, $Z_1 \sqsubseteq Z_2 \bydef Z_1 \subseteq Z_2$ and $Z_1 \join Z_2 \bydef Z_1 \cup Z_2$.} A configuration is an element of a join semi-lattice $(\Conf, \sqsubseteq)$. We assume that every configuration is associated with a finite set of replicas via a map $\ireplicas: \Conf \to 2^{\Replicas}$, and a \emph{quorum system} via a map $\iquorums: \Conf \to 2^{2^{\Replicas}}$, such that $\forall C \in \Conf: \iquorums(C) \subseteq 2^{\ireplicas(C)}$. \confirmed{% Additionally, we assume that there is a map $\iheight: \Conf \to \mathbb{Z}$, such that $\forall C \in \Conf: \iheight(C) \ge 0$ and $\forall C_1, C_2 \in \Conf:$ if $C_1 \sqsubset C_2$, then $\iheight(C_1) < \iheight(C_2)$.
We say that a configuration $C$ is \emph{\thigher} (resp., \emph{lower}) than a configuration $D$ iff $D \sqsubset C$ (resp., $C \sqsubset D$).}\footnote{ Notice that ``$C$ is {\thigher} than $D$'' implies ``$\iheight(C) > \iheight(D)$'', but not vice versa.} We say that $\iquorums(C)$ is a \emph{dissemination quorum system} at time $t$ iff every two sets (also called \emph{quorums}) in $\iquorums(C)$ have \confirmed{at least one replica in common that is correct at time $t$}, and at least one quorum is \emph{available} (all its replicas are correct) \confirmed{at time $t$}. \confirmed{A natural (but not the only possible) way to define the lattice~$\Conf$ is as follows: let $\Updates$ be $\{+,-\} \times \Replicas$, where tuple $(+, p)$ means ``add replica $p$'' and tuple $(-, p)$ means ``remove replica $p$''. Then $\Conf$ is the powerset lattice $(2^{\Updates}, \sqsubseteq)$. The mappings $\ireplicas$, $\iquorums$, and $\iheight$ are defined as follows: $\ireplicas(C) \bydef \{s \in \Replicas \ST (+, s) \in C \land (-, s) \notin C\}$, $\iquorums(C) \bydef \{Q \subseteq \ireplicas(C) \ST |Q| > \frac{2}{3}\ |\ireplicas(C)|\}$, and $\iheight(C) \bydef |C|$. It is straightforward to verify that $\iquorums(C)$ is a dissemination quorum system when strictly less than one third of the replicas in $\ireplicas(C)$ are faulty. Notice that, when this lattice is used for configurations, once a replica is removed from the system, it cannot be added again with the same identifier. In order to add such a replica back to the system, a new identifier must be used.} \myparagraph{Forward-secure digital signatures.} \confirmA{% In a \emph{forward-secure digital signature scheme}~\cite{bellare1999forward,malkin2002efficient,boyen2006forward,drijvers2019pixel} the public key of a process is fixed while the secret key can evolve. Each signature is associated with a \emph{timestamp}. To generate a signature with timestamp $t$, the signer uses secret key $\isk_t$.
The signer can \emph{update its secret key} and get $\isk_{t_2}$ from $\isk_{t_1}$ if $t_1 < t_2 \le T$.\footnote{\confirmed{$T$ is a parameter of the scheme and can be set arbitrarily large (with some modest overhead). We believe that $T = 2^{32}$ or $T = 2^{64}$ should be sufficient for most applications.}} However, ``downgrading'' the key to a lower timestamp, from $\isk_{t_2}$ to $\isk_{t_1}$, is computationally infeasible. This way, if the signer updates its secret key to some timestamp $t$ and then removes the original secret key, it will not be able to sign new messages with a timestamp lower than $t$, even if it later turns Byzantine.} \confirmA{% For simplicity, we model a forward-secure signature scheme as an oracle which associates every process $p$ with a timestamp $\ist_p$ (initially, $\ist_p = 0$). The oracle provides $p$ with three operations: (1) $\fUpdateFSKeys(t)$ sets $\ist_p$ to $t \ge \ist_p$; (2) $\fFSSign(m, t)$ returns a signature for message $m$ and timestamp $t$ if $t \ge \ist_p$, otherwise it returns $\bot$; and (3) $\fFSVerify(m, p, s, t)$ returns $\iTrue$ iff $s$ was generated by invoking $\fFSSign(m, t)$ by process $p$.% \footnote{\confirmed{% We assume that anyone who knows the id of a process also knows its public key. For example, the public key can be directly embedded into the identifier}.}} \confirmA{In our protocols, we use the height of the configuration as the timestamp. When a replica answers requests in configuration $C$, it signs messages with timestamp $\iheight(C)$. When a higher configuration $D$ is installed, the replica invokes $\fUpdateFSKeys(\iheight(D))$. This prevents the {\attackName} attack described in Section~\ref{sec:introduction}.} \section{Related work}\label{sec:related-work} Dynamic replicated systems with \emph{passive reconfiguration}~\cite{BaldoniBKR09,AttiyaCEKW19,KW19} do not explicitly regulate arrivals and departures of replicas.
Their consistency properties are ensured under strong assumptions on the churn rate. Except for the recent work~\cite{KW19}, churn-tolerant storage systems do not tolerate Byzantine failures. In contrast, \textit{active reconfiguration} allows the clients to explicitly propose configuration updates, e.g., sets of new replica arrivals and departures. Early proposals of (actively) reconfigurable storage systems tolerating process crashes, such as RAMBO~\cite{rambo} and reconfigurable Paxos~\cite{paxos-reconfigure}, used consensus (and, thus, assumed a certain level of synchrony) to ensure that the clients agree on the evolution of configurations. DynaStore~\cite{dynastore} was the first \emph{asynchronous} reconfigurable storage: clients propose incremental additions or removals to the system configuration. As the proposals commute, the processes can resolve their disagreements without involving consensus. The \textit{parsimonious speculative snapshot} task~\cite{parsimonious} makes it possible to resolve conflicts between concurrent configuration updates in a storage system using instances of commit-adopt~\cite{Gaf98}. The worst-case time complexity of reconfiguration, in the number of message delays, was later reduced from $O(n^2)$ to $O(n)$~\cite{SKM17-reconf}, where $n$ is the number of concurrently proposed configuration updates. SmartMerge~\cite{smartmerge} made an important step forward by treating reconfiguration as an instance of abstract \emph{lattice agreement}~\cite{gla}. However, the algorithm assumes an external (reliable) lattice agreement service which makes the system not fully reconfigurable. The recently proposed \emph{reconfigurable lattice-agreement} abstraction~\cite{rla} enables truly reconfigurable versions of a large class of objects and constructions, including state-based CRDTs~\cite{crdt}, atomic-snapshot, max-register, conflict detector and commit-adopt.
We believe that the reconfiguration service we introduced in this paper can be used to derive Byzantine fault-tolerant reconfigurable implementations of objects in the class. Byzantine quorum systems~\cite{ByzantineQuorumSystems} introduce abstractions for ensuring availability and consistency of shared data in asynchronous systems with Byzantine faults. In particular, a \emph{dissemination} quorum system ensures that every two quorums have a correct process in common and that at least one quorum only contains correct processes. Dynamic Byzantine quorum systems~\cite{alvisi2000dynamic} appear to be the first attempt to implement a form of active reconfiguration in a Byzantine fault-tolerant data service running on a \emph{static} set of replicas, where clients can raise or lower the resilience threshold. Dynamic Byzantine storage~\cite{martin2004framework} allows a trusted \emph{administrator} to issue ordered reconfiguration calls that might also change the set of replicas. \confirmA{% The administrator is also responsible for generating new private keys for the replicas in each new configuration to counter the {\attackName} attack~\cite{tutorial2010}.} In this paper, we propose an implementation of a Byzantine fault-tolerant reconfiguration service that does not rely on this assumption. Forward-secure signature schemes~\cite{bellare1999forward,boyen2006forward,canetti2007forward,drijvers2019pixel,malkin2002efficient} enable a decentralized way to construct a sequence of distinct private keys for each process. We use the scheme to provide each process with a unique private key for each \confirmed{configuration}. To counter the {\attackName} attack, we ensure that \confirmed{sufficiently many} correct processes destroy their configuration keys before a new configuration is installed, without relying on a global agreement on the configuration sequence~\cite{martin2004framework}.
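As an illustration of the forward-secure signature oracle from the system model (operations $\fUpdateFSKeys$, $\fFSSign$, and $\fFSVerify$), the following toy sketch mimics its interface with a plain set of issued signatures. It models only the behavior of the oracle, involves no actual cryptography (unlike the real schemes cited above), and all names in it are ours.

```python
# Toy model of the forward-secure signature oracle: each signer has a
# timestamp that can only move forward; after an update, signatures with
# lower timestamps can no longer be produced.  Interface illustration
# only -- no real cryptography.

class FSOracle:
    def __init__(self):
        self._st = {}         # signer id -> current key timestamp st_p
        self._issued = set()  # (message, signer, timestamp) triples

    def update_fs_keys(self, signer, t):
        # Key updates are monotone: "downgrading" is infeasible.
        self._st[signer] = max(self._st.get(signer, 0), t)

    def fs_sign(self, signer, message, t):
        if t < self._st.get(signer, 0):
            return None       # models returning the special value "bottom"
        sig = (message, signer, t)
        self._issued.add(sig)
        return sig

    def fs_verify(self, message, signer, sig, t):
        return sig == (message, signer, t) and sig in self._issued

oracle = FSOracle()
s1 = oracle.fs_sign("r1", "m", 3)   # key timestamp 0 <= 3: signature issued
oracle.update_fs_keys("r1", 5)      # e.g., height of a newly installed configuration
s2 = oracle.fs_sign("r1", "m2", 4)  # 4 < 5: refused, even if r1 later turns Byzantine
```

In the protocols, the timestamp is the height of the current configuration, so once sufficiently many replicas have updated their keys to the height of a newly installed configuration, no quorum of a superseded {\pivotal} configuration can still sign on its behalf.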
\section{Reconfigurable objects: complete pseudocode and proofs}\label{sec:transform-full} In Section~\ref{sec:transform-main}, we present a general transformation from a {\dynamic} object into a {\reconfigurable} one. Here we would like to clarify some assumptions, explain some motivation behind our implementation, provide the full pseudocode, describe some possible optimizations, and, finally, prove the correctness of our transformation. \subsection{Implementation}\label{subsec:transform-full-implementation} \input{algorithms/transform-full.tex} \myparagraph{Cryptographic keys.} In Algorithm~\lstref{transform}, we use several {\dynamic} objects. We assume that correct replicas have separate public/private key pairs for each dynamic object. This prevents replay attacks across objects and allows each {\dynamic} object to manage its keys separately. \needsrev{We discuss how to avoid this assumption later in this section.} \myparagraph{Bootstrapping.} We know from prior work~\cite{smartmerge,rla} that lattice agreement is very useful for reconfiguration. In particular, it can be used to build comparable configurations. We take a step further and use two separate instances of lattice agreement: one to build comparable configurations ($\iConfLA$) and the other to build histories out of them ($\iHistLA$). We can then use these two LA objects to reconfigure a single {\dynamic} object. However, this raises a question: how to reconfigure the lattice agreement objects themselves? We found the answer in the idea that is sometimes referred to as ``bootstrapping''. We use the lattice agreement objects to reconfigure \emph{themselves and at least one other object}. This implies that the lattice agreement objects share the configurations with $\iDObj$. The most natural implementation is that the code for all three dynamic objects will be executed by the same set of replicas. Bootstrapping is a dangerous technique because, if applied badly, it can lead to infinite recursion. 
However, we structured our solution in such a way that there is no recursion at all: the client first makes normal requests to $\iConfLA$ and $\iHistLA$ and then uses the resulting history to reconfigure all {\dynamic} objects, as if this history was obtained by the client from the outside of the system. It is important to notice that the liveness of the call $\iHistLA.\fVerifyOutputValue(h, \sigma)$ is not affected by reconfiguration: the function simply checks some digital signatures and is guaranteed to always terminate given enough processing time. \myparagraph{Shared parts.} All implementations of {\dynamic} objects presented in this paper have a similar structure. For example, they all share the same state transfer implementation (see Algorithm~\ref{alg:dbla-state-transfer}). However, we do not rule out that other {\dynamic} objects might have very different implementations. Therefore, in our transformation we use $\iDObj$ as a ``blackbox'' and do not make any assumptions about its implementation. Moreover, for simplicity, we use the two {\DBLA} objects as ``blackboxes'' as well. In fact, $\iConfLA$ and $\iHistLA$ may have different implementations and the transformation will still work as long as they satisfy the specification from Section~\ref{sec:definitions}. However, this comes at a cost. In particular, if implemented naively, a single {\reconfigurable} object will run several independent state transfer protocols, \needsrev{and a single correct replica will have several private/public key pairs (as mentioned earlier in this section).} If, as in our paper, all {\dynamic} objects have similar implementations of their state transfer protocols, this can be done more efficiently by combining all state transfer protocols into one, which would need to transfer the states of all {\dynamic} objects and make sure that the superseded configurations are harmless.
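The client-side flow of the bootstrapped transformation described above (first $\iConfLA$, then $\iHistLA$, then all {\dynamic} objects) can be outlined as follows. The sketch uses a trivial single-process stand-in for the lattice-agreement objects; all names are illustrative, and the actual protocol is given in Algorithm~\lstref{transform}.

```python
# Illustrative client-side flow of the bootstrapped transformation: run
# ConfLA, then HistLA, then hand the resulting verifiable history to all
# dynamic objects, including the two lattice-agreement objects
# themselves.  StubLA and all method names are ours.

class StubLA:
    """Trivial single-process stand-in for a dynamic lattice-agreement object."""
    def __init__(self):
        self.history = []

    def propose(self, value):
        return value, "signed-proof"   # a real object returns a verifiable proof

    def update_history(self, history, proof):
        self.history = history

def reconfigure(new_config, conf_la, hist_la, dynamic_objects):
    # Step 1: ConfLA makes concurrently proposed configurations comparable.
    agreed_config, _ = conf_la.propose(new_config)
    # Step 2: HistLA assembles the comparable configurations into a history.
    history, proof = hist_la.propose([agreed_config])
    # Step 3 (no recursion): the verifiable history is handed to *all*
    # dynamic objects, including ConfLA and HistLA themselves, as if it
    # had been obtained from outside the system.
    for obj in dynamic_objects:
        obj.update_history(history, proof)
    return history

conf_la, hist_la = StubLA(), StubLA()
h = reconfigure({("+", "r4")}, conf_la, hist_la, [conf_la, hist_la])
```

In the real system the third argument list would also contain $\iDObj$, and $\fVerifyHistory$ would check the proof before any object accepts the new history.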
\subsection{Proof of correctness} \label{subsec:transform-proofs} In the following two lemmas we show that we use the {\dynamic} objects ($\iConfLA$, $\iHistLA$, and $\iDObj$) correctly, i.e., all requirements imposed on verifiable histories are satisfied. \begin{lemma} All histories passed to the {\dynamic} objects by correct processes (\lstlinerangeref{transform}{update-config-update-history-begin}{update-config-update-history-end}) are verifiable with $\fVerifyHistory$ (\lstlineref{transform}{verify-history}). \end{lemma} \begin{proof} Follows from the {\pBLAVerifiability} property of $\iHistLA$. \end{proof} \begin{lemma} \label{lem:transform-verifiable-histories} All histories verifiable with $\fVerifyHistory$ (\lstlineref{transform}{verify-history}) are (1)~well-formed (that is, consist of comparable configurations) and (2)~related by containment. Moreover, (3)~in any given infinite execution there is only a finite number of histories verifiable with $\fVerifyHistory$. \end{lemma} \begin{proof} (1)~follows from the {\pBLAComparability} property of $\iConfLA$, the {\pBLAValidity} property of $\iHistLA$, and the definition of $\iHistLA.\fVerifyInputValue$ (\lstlineref{transform}{histla-verify-input-value}). (2)~follows directly from the {\pBLAComparability} property of $\iHistLA$. (3)~follows from the requirement of a finite number of verifiable input configurations and the {\pBLAValidity} property of $\iConfLA$ and $\iHistLA$. Only a finite number of configurations can be formed by $\iConfLA$ out of a finite number of verifiable input configurations. Then, only a finite number of histories can be formed by $\iHistLA$ out of configurations from $\iConfLA$. \end{proof} \begin{theorem}[Transformation safety] \label{the:transform-safety} Our implementation satisfies the {\pReconfigurationValidity} property of a {\reconfigurable} object.
That is, (1) every installed configuration $C$ is a join of some set of verifiable input configurations; and (2) all installed configurations are comparable. \end{theorem} \begin{proof} (1) follows from the {\pBLAValidity} property of $\iConfLA$ and $\iHistLA$ and the {\pDynamicValidity} property of the underlying {\dynamic} objects. (2) follows directly from the {\pDynamicValidity} property of the underlying {\dynamic} objects. \end{proof} \begin{theorem}[Transformation liveness] \label{the:transform-liveness} Our implementation satisfies the liveness properties of a {\reconfigurable} object: {\pReconfigurationLiveness} and {\pInstallationLiveness}. \end{theorem} \begin{proof} {\pReconfigurationLiveness} follows from the {\pBLALiveness} property of $\iConfLA$ and $\iHistLA$ and the {\pDynamicLiveness} property of the underlying {\dynamic} objects. {\pInstallationLiveness} follows from line~\ref{lst:transform:installed-config-upcall} of the implementation and the {\pInstallationLiveness} of the underlying {\dynamic} objects. \end{proof} \section{{\Dynamic} {\BLA}: complete implementation and proofs}\label{sec:dbla-full} We define and describe the implementation of our {\DynamicByzantineLatticeAgreement} abstraction in Section~\ref{sec:dbla-main}. Here we would like to present the complete pseudocode, clarify several details, and provide additional discussion of the protocol (Section~\ref{sec:dbla-main-full}), give a rigorous proof of correctness (Section~\ref{subsec:dbla-correctness-proof}), and, finally, describe possible directions for optimizations (Section~\ref{subsec:dbla-optimizations}). \subsection{Implementation} \label{sec:dbla-main-full} \input{algorithms/dbla-full.tex} Our implementation of the {\DBLA} abstraction is split into two parts: Algorithm~\lstref{dbla-client} describes the behavior of a correct client while Algorithm~\lstref{dbla-replica} describes the behavior of a correct replica. 
At the beginning of each algorithm, we describe all parameters, global variables,% \footnote{\needsrev{% By ``global variable'' we mean a variable that can be accessed from anywhere in the code executed \emph{by the same process}. This is not to be confused with shared variables in the shared memory model.}} and auxiliary functions used in the pseudocode. Additionally, we use ``{\RBBroadcast{$\MsgDesc{MessageDescriptor}$, $\imsgParams$ ...}}'' to denote the call to the \emph{global reliable broadcast} primitive introduced in Section~\ref{subsec:bcast}, and ``{\URBBroadcastIn{$\MsgDesc{MessageDescriptor}$, $\imsgParams$ ...}{$C$}}'' to denote the call to the \emph{local uniform reliable broadcast} primitive in configuration $C$ (see Section~\ref{subsec:bcast}). \myparagraph{Execution environment.} We assume single-threaded execution of the pseudocode. The lines of code are to be executed one by one in sequential order. Some events (such as message delivery or an assignment to a global variable) may activate some handlers, but the execution of these handlers is delayed. However, we expect some ``fairness'' in the sense that if some handler remains ``ready'' indefinitely, it will eventually be executed. Sometimes waiting is explicitly mentioned in the code (e.g.,~\lstlineref{dbla-client}{propose-wait}). In these places we assume that the control flow may switch to other blocks. It may later return to the line after the~``\textbf{wait for}'' statement if the condition in the statement is satisfied. \myparagraph{Notation.} We use ``let $\textit{var} = \textit{expression}$'' to denote an assignment to a local variable and ``$\textit{var} \gets \textit{expression}$'' to denote an assignment to a global variable (they are usually defined in the ``\GlobalsBlockName'' section). We use triangle brackets to denote tuples and messages. They might start with a \emph{message descriptor}, which is written in bold (e.g.,~$\Tupple{\mPropose, \icurrentValues, \iseqNumber, C}$).
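The execution environment just described can be thought of as a minimal single-threaded event loop: events make handlers ready, and the loop runs ready handlers to completion, one at a time, in an order that guarantees fairness. The sketch below is our illustration of this model, not part of the pseudocode.

```python
# Minimal model of the assumed execution environment: a single-threaded
# loop that runs pending handlers to completion, one at a time.  FIFO
# order stands in for the fairness assumption (a handler that stays
# ready is eventually executed).

from collections import deque

class Scheduler:
    def __init__(self):
        self.ready = deque()  # handlers activated by events

    def activate(self, handler):
        # An event (message delivery, a write to a global variable) makes
        # a handler ready; its execution is delayed until the loop picks it.
        self.ready.append(handler)

    def run(self):
        while self.ready:
            handler = self.ready.popleft()  # FIFO implies fairness here
            handler()                       # run to completion, no preemption

log = []
sched = Scheduler()
sched.activate(lambda: log.append("deliver message"))
sched.activate(lambda: log.append("variable updated"))
sched.run()
```

A ``\textbf{wait for}'' statement corresponds, in this picture, to the current handler yielding back to the loop until its condition becomes true.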
\myparagraph{The use of powerset lattice.} In our implementation, we internally use the powerset lattice $2^{\Lat}$ and only join the values before returning the result to the client (\lstlineref{dbla-client}{propose-return}). This gives the right result because of one simple observation: if $A, B \in 2^{\Lat}$ and $A \subseteq B$, then, by associativity and commutativity, $\fJoinAll(B) = \fJoinAll(A) \join \fJoinAll(B \setminus A)$. By definition of ``$\join$'', $\fJoinAll(A) \sqsubseteq \fJoinAll(A) \join \fJoinAll(B \setminus A)$. This approach makes it easy to guarantee the {\pBLAValidity} property, but in some cases it might increase the communication cost (i.e., the number of bytes exchanged through the network) and even the number of rounds needed to terminate. \myparagraph{Number of accessed configurations.} By accessing $\iConfLA$ and then $\iHistLA$, we minimize the number of configurations that should be accessed for a consistent configuration shift. Indeed, due to the {\pBLAComparability} property of {\ByzantineLatticeAgreement}, all configurations submitted to $\iHistLA$ will already be ordered and any set of such configurations will form a sequence (i.e., a history as defined in Section~\ref{subsec:reconf-objects}). As a result, when $k$ configurations are concurrently proposed, only $O(k)$ configurations are accessed for state transfer, and not an exponential number as in some earlier work on reconfiguration~\cite{dynastore,parsimonious}. This is known to be optimal~\cite{SKM17-reconf}. \subsection{Proof of correctness} \label{subsec:dbla-correctness-proof} \myparagraph{{\DBLA} safety.} Recall that a configuration is called \emph{candidate} iff it appears in some verifiable history. The following lemma gathers some obvious yet very useful statements about candidate configurations. 
\begin{lemma}[Candidate configurations] \label{lem:candidate-configurations} \leavevmode \begin{enumerate} \item \label{itm:cand-fin} There is a finite number of candidate configurations. \item \label{itm:cand-comp} All candidate configurations are comparable with ``$\sqsubseteq$''. \end{enumerate} \end{lemma} \begin{proof} % The total number of verifiable histories is required to be finite, and each history is finite, hence (\ref{itm:cand-fin}). % All verifiable histories are required to be related by containment and all configurations within one history are required to be comparable, hence (\ref{itm:cand-comp}). \end{proof} Recall that a configuration is called \emph{\pivotal} if it is the last configuration in some verifiable history. Non-{\pivotal} candidate configurations are called \emph{\tentative}. Intuitively, the next lemma states that in the rest of the proofs we can almost always consider only {\pivotal} configurations. {\Tentative} configurations are both harmless and useless. \begin{lemma}[{\Tentative} configurations] \label{lem:tentative-configurations} \leavevmode \begin{enumerate} \item No correct client will ever make a request to a {\tentative} configuration. \item {\Tentative} configurations cannot be installed. \item A correct process will never invoke $\fFSVerify$ with timestamp $\iheight(C)$ for any {\tentative} configuration $C$. \item A correct replica will never broadcast any message via the uniform reliable broadcast primitive in a {\tentative} configuration. \end{enumerate} \end{lemma} \begin{proof} Follows directly from the algorithm. Both clients and replicas only operate on configurations that were obtained by invoking the function $\fMaxElement(h)$ on some verifiable history. \end{proof} The next lemma states that correct processes cannot ``miss'' any \emph{\pivotal} configurations in their local histories. This is crucial for the correctness of the state transfer protocol.
\begin{lemma} \label{lem:pivotal-configurations} If $C \sqsubseteq \fMaxElement(h)$, where $C$ is a {\pivotal} configuration and $h$ is the local history of a correct process, then $C \in h$. \end{lemma} \begin{proof} Follows directly from the definition of a {\pivotal} configuration and the requirement that all verifiable histories are related by containment (see Section~\ref{subsec:dynamic-objects}). \end{proof} Recall that a configuration is called \emph{superseded} iff some {\thigher} configuration is installed (see Section~\ref{subsec:reconf-objects}). A configuration is \emph{installed} iff some \emph{correct} replica has triggered the $\fInstalledConfig$ upcall (\lstlineref{dbla-replica}{upon-update-complete-upcall}). For this, the correct replica must receive a quorum of $\mUpdateComplete$ messages via the uniform reliable broadcast primitive (\lstlineref{dbla-replica}{upon-update-complete}). \begin{theorem}[\pDynamicValidity] \label{the:dynamic-validity} Our implementation of {\DBLA} satisfies {\pDynamicValidity}. I.e., only a candidate configuration can be installed. \end{theorem} \begin{proof} Follows directly from the implementation. A correct replica will not install a configuration until it is in the replica's local history (\lstlineref{dbla-replica}{upon-update-complete-wait}). \end{proof} In our algorithm, it is possible for a configuration to be installed after it was superseded. There are at least two scenarios in which this might happen: \begin{enumerate} \item If a quorum of replicas broadcast $\mUpdateComplete$ messages in some configuration $C$ which is not yet installed, then a {\thigher} configuration is installed, making $C$ superseded, and then some correct replica in $C$ delivers the broadcast messages and triggers the upcall (\lstlineref{dbla-replica}{upon-update-complete-upcall}). \item If $C$ is superseded, but there is one ``slow'' replica that does not yet know about it.
The other replicas might turn Byzantine and broadcast the $\mUpdateComplete$ messages in $C$. Since an $\mUpdateComplete$ message does not include any additional information such as digital signatures, the correct replica has no way to tell that these messages are from Byzantine replicas and will trigger the upcall (\lstlineref{dbla-replica}{upon-update-complete-upcall}). \end{enumerate} We could technically prevent the second scenario, but not the first one. Moreover, some ``intermediate'' scenarios are also inevitable. Let us call the configurations that were installed while being {\tactive} (i.e., not superseded) ``\emph{properly installed}''. We will use this definition to prove the next few lemmas. \begin{lemma} The {\tlowest} properly installed configuration {\thigher} than configuration $C$ is the first installed configuration {\thigher} than $C$ in time order. \end{lemma} \begin{proof} Let $N$ be the {\tlowest} properly installed configuration {\thigher} than $C$. If some configuration {\thigher} than $N$ were installed earlier, then $N$ would not be properly installed. If some configuration between $C$ and $N$ were installed earlier, then $N$ would not be the {\tlowest}. \end{proof} The following lemma stipulates that our state transfer protocol makes the superseded {\pivotal} configurations ``harmless'' by leveraging a forward-secure signature scheme. \begin{lemma}[Key update] \label{lem:key-update} If a {\pivotal} configuration $C$ is superseded, then there is no quorum of replicas in that configuration capable of signing messages with timestamp $\iheight(C)$, i.e., $\nexists Q \in \iquorums(C) \textST \forall r \in Q: \ist_r \le \iheight(C)$. \end{lemma} \begin{proof} Let $N$ be the {\tlowest} properly installed configuration {\thigher} than $C$. Let us consider the moment when $N$ was installed.
% By the algorithm, all correct replicas in some quorum $Q_N \in \iquorums(N)$ had to broadcast $\mUpdateComplete$ messages before $N$ was installed (\lstlineref{dbla-replica}{upon-update-complete}). % Since $N$ was not yet superseded at that moment, there was at least one correct replica $r_N \in Q_N$. By Lemma~\ref{lem:pivotal-configurations}, $C$ was in $r_N$'s local history whenever it performed state transfer to any configuration {\thigher} than $C$. % By the protocol, a correct replica only advances its $\Ccurr$ variable after executing the state transfer protocol (\lstlineref{dbla-replica}{state-transfer-ccurr-set}) or right before installing a configuration (\lstlineref{dbla-replica}{upon-update-complete-ccurr-set}). % Since no configurations between $C$ and $N$ were yet installed, $r_N$ had to pass through $C$ in its state transfer protocol and to receive $\mUpdateReadResp$ messages from some quorum $Q_C \in \iquorums(C)$ (\lstlineref{dbla-replica}{state-transfer-wait}). Recall that correct replicas update their private keys whenever they learn about a {\thigher} configuration (\lstlineref{dbla-replica}{new-history-update-fs-keys}) and that they will only reply to message $\Tupple{\mUpdateRead, \isn, C}$ once $C$ is \emph{not} the {\thighest} configuration in their local histories (\lstlineref{dbla-replica}{upon-update-read-wait}). % This means that all correct replicas in $Q_C$ actually had to update their private keys before $N$ was installed, and, hence, before $C$ was superseded. By the quorum intersection property, this means that in each quorum in $C$ at least one replica updated its private key to a timestamp higher than $\iheight(C)$ and will not be capable of signing messages with timestamp $\iheight(C)$ even if it becomes Byzantine. \end{proof} Notice that in a {\tentative} configuration there might be arbitrarily many Byzantine replicas with their private keys not updated.
This is inevitable in an asynchronous system: forcing the replicas in {\tentative} configurations to update their private keys would require solving consensus. This does not affect correct processes because, as shown in Lemma~\ref{lem:tentative-configurations}, {\tentative} configurations are harmless. However, it is important to keep this in mind when designing new {\dynamic} protocols. The following lemma formalizes that the state is correctly transferred between configurations. \begin{lemma}[State transfer correctness] \label{lem:state-transfer-correctness} \leavevmode \\ If $\sigma = \Tupple{\ivalues, h, \sigma_h, \iproposeAcks, \iconfirmAcks}$ is a valid proof for $v$, then for each {\tactive} installed configuration $D$ such that $\fMaxElement(h) \sqsubset D$, there is a quorum $Q_D \in \iquorums(D)$ such that for each correct replica in $Q_D$: $\ivalues \subseteq \icurrentValues$. \end{lemma} \begin{proof} Let $C = \fMaxElement(h)$. We proceed by induction on the sequence of all properly installed configurations {\thigher} than $C$. Let us denote this sequence by $\Cnice$. % Other configurations are not interesting simply because there is no such moment in time when they are simultaneously {\tactive} and installed. Let $N$ be the {\tlowest} configuration in $\Cnice$. % Let $Q_C \in \iquorums(C)$ be a quorum of replicas whose signatures are in $\iproposeAcks$. % Since $N$ is installed, there is a quorum $Q_N \in \iquorums(N)$ in which all correct replicas broadcast $\Message{\mUpdateComplete, N}$. % For each correct replica $r_N \in Q_N$, $r_N$ passed through configuration $C$ in its state transfer protocol and received $\mUpdateReadResp$ messages from some quorum of replicas in $C$. Note that at that moment $N$ wasn't yet installed, and, hence, $C$ wasn't yet superseded. % By the quorum intersection property, there is at least one correct replica $r_C \in Q_C$ that sent an $\mUpdateReadResp$ message to $r_N$ (\lstlineref{dbla-replica}{upon-update-read-send-resp}).
% Because $r_C$ will only react to $r_N$'s message after updating its private keys (\lstlineref{dbla-replica}{upon-update-read-wait}), it had to sign $\Tupple{\mProposeResp, \ivalues}$ (\lstlineref{dbla-replica}{upon-propose-sign}) before sending a reply to $r_N$, which means that the $\mUpdateReadResp$ message from $r_C$ to $r_N$ must have contained a set of values that includes all values from $\ivalues$. % This proves the base case of the induction. Let us consider any configuration $D \in \Cnice$ such that $N \sqsubset D$. Let $M$ be the {\thighest} configuration in $\Cnice$ such that $N \sqsubseteq M \sqsubset D$ (in other words, the closest to $D$ in $\Cnice$). % Assume that the statement holds for $M$, i.e., while $M$ was {\tactive}, there was a quorum $Q_M \in \iquorums(M)$ such that for each correct replica in $Q_M$: $\ivalues \subseteq \icurrentValues$. % Similarly to the base case, let us consider a quorum $Q_D \in \iquorums(D)$ such that every correct replica in $Q_D$ reliably broadcast $\Message{\mUpdateComplete, D}$ before $D$ was installed. % For each correct replica $r_D \in Q_D$, by the quorum intersection property, there is at least one correct replica in $Q_M$ that sent an $\mUpdateReadResp$ message to $r_D$. This replica attached its $\icurrentValues$ to the message, which contained $\ivalues$. % This proves the induction step and completes the proof. \end{proof} The next lemma states that if two output values were produced in the same configuration, they are comparable. In a static system it could be proven by simply referring to the quorum intersection property. In a {\dynamic} Byzantine system, however, to use the quorum intersection, we need to prove that the configuration was {\tactive} during the whole period when the clients were exchanging data with the replicas. In other words, we need to show that the ``slow reader'' attack is impossible. Luckily, we have the second stage of our algorithm designed for this sole purpose.
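Before the lemma itself, it may help to isolate the elementary lattice fact that the comparability argument ultimately reduces to: output values are joins of sets of input values, so if the underlying sets are ordered by containment, the joins are comparable. A minimal Python sketch; modeling the lattice as sets under union is an illustrative assumption on our part, and `join_all` merely mirrors the role played by $\fJoinAll$ in the protocol:

```python
def join_all(values):
    """Join of a collection of input values in a toy set-union lattice."""
    out = frozenset()
    for v in values:
        out |= v
    return out

def leq(a, b):
    """Lattice order for this toy model: containment."""
    return a <= b

# If values1 is contained in values2, their joins are comparable.
values1 = {frozenset({1}), frozenset({2})}
values2 = values1 | {frozenset({3})}
assert leq(join_all(values1), join_all(values2))
```

The lemma below establishes exactly this containment for the value sets gathered by two clients in the same configuration.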
\begin{lemma}[{\pBLAComparability} in one configuration] \label{lem:comparable-in-one-config} \leavevmode \\ If $\sigma_1 = \Tupple{\ivalues_1, h_1, \sigma_{h1}, \iproposeAcks_1, \iconfirmAcks_1}$ is a valid proof for output value $v_1$, \\ and $\sigma_2 = \Tupple{\ivalues_2, h_2, \sigma_{h2}, \iproposeAcks_2, \iconfirmAcks_2}$ is a valid proof for output value $v_2$, \\ and $\fMaxElement(h_1) = \fMaxElement(h_2)$, then $v_1$ and $v_2$ are comparable. \end{lemma} \begin{proof} Let $C = \fMaxElement(h_1) = \fMaxElement(h_2)$. % By definition, $\sigma$ being a valid proof for $v$ means that $\fVerifyOutputValue(v, \sigma) = \iTrue$ (\lstlineref{dbla-client}{verify-output-value}). By the implementation, this means that $h_1$ and $h_2$ are verifiable histories (\lstlineref{dbla-client}{verify-output-value-verify-history}). Therefore, $C$ is a {\pivotal} configuration. The set $\iconfirmAcks_1$ contains signatures from a quorum of replicas of configuration $C$, with timestamp $\iheight(C)$. Each of these signatures had to be produced after each of the signatures in $\iproposeAcks_1$ because they sign the message $\Tupple{\mConfirmResp, \iproposeAcks_1}$ (\lstlineref{dbla-replica}{upon-confirm-sign}). % Combining this with the statement of Lemma~\ref{lem:key-update} (Key Update), it follows that at the moment when the last signature in the set $\iproposeAcks_1$ was created, the configuration $C$ was {\tactive} (otherwise it would be impossible to gather $\iconfirmAcks_1$). % We can apply the same argument to the sets $\iproposeAcks_2$ and $\iconfirmAcks_2$. It follows that there are quorums $Q_1, Q_2 \in \iquorums(C)$ and a moment in time $t$ such that: (1) $C$ is not superseded at time $t$, (2) all \emph{correct} replicas in $Q_1$ signed message $\Tupple{\mProposeResp, \ivalues_1}$ before $t$, and (3) all \emph{correct} replicas in $Q_2$ signed message $\Tupple{\mProposeResp, \ivalues_2}$ before $t$.
% Since $C$ is not superseded at time $t$, there must be a correct replica in $Q_1 \cap Q_2$ (due to {\QuorumIntersection}), which signed both $\Tupple{\mProposeResp, \ivalues_1}$ and $\Tupple{\mProposeResp, \ivalues_2}$ (\lstlineref{dbla-replica}{upon-propose-sign}). % Since correct replicas only sign $\mProposeResp$ messages with comparable sets of values\footnote{ Indeed, elements are only added to the set $\icurrentValues$ of each correct replica, and the replicas only sign messages with the same set of verifiable input values as $\icurrentValues$ (see \lstlinerangeref{dbla-replica}{upon-propose-cur-vals}{upon-propose-sign}). }, $\ivalues_1$ and $\ivalues_2$ are comparable, i.e., either $\ivalues_1 \subseteq \ivalues_2$ or $\ivalues_2 \subset \ivalues_1$. Hence, $v_1 = \fJoinAll(\ivalues_1)$ and $v_2 = \fJoinAll(\ivalues_2)$ are comparable. \end{proof} Finally, let us combine the two previous lemmas to prove the {\pBLAComparability} property of our {\DBLA} implementation. \begin{theorem}[{\pBLAComparability}] \label{the:comparability} Our implementation of {\DBLA} satisfies the {\pBLAComparability} property. That is, all verifiable output values are comparable. \end{theorem} \begin{proof} Let $\sigma_1 = \Tupple{\ivalues_1, h_1, \sigma_{h1}, \iproposeAcks_1, \iconfirmAcks_1}$ be a valid proof for output value $v_1$, and $\sigma_2 = \Tupple{\ivalues_2, h_2, \sigma_{h2}, \iproposeAcks_2, \iconfirmAcks_2}$ be a valid proof for output value $v_2$. Also, let $C_1 = \fMaxElement(h_1)$ and $C_2 = \fMaxElement(h_2)$. Since $h_1$ and $h_2$ are verifiable histories (\lstlineref{dbla-client}{verify-output-value-verify-history}), both $C_1$ and $C_2$ are {\pivotal} by definition. If $C_1 = C_2$, $v_1$ and $v_2$ are comparable by Lemma~\ref{lem:comparable-in-one-config}. Consider the case when $C_1 \neq C_2$. Without loss of generality, assume that $C_1 \sqsubset C_2$. % Let $Q_1 \in \iquorums(C_2)$ be a quorum of replicas whose signatures are in $\iproposeAcks_2$. 
Let $t$ be the moment when the first correct replica signed $\Message{\mProposeResp, \ivalues_2}$. Correct replicas only start processing user requests in a configuration when this configuration is installed (\lstlineref{dbla-replica}{upon-propose-wait}). % Therefore, by Lemma~\ref{lem:state-transfer-correctness}, at time $t$ there was a quorum of replicas $Q_2 \in \iquorums(C_2)$ such that for every correct replica in $Q_2$: $\ivalues_1 \subseteq \icurrentValues$. % By the quorum intersection property, there must be at least one correct replica in $Q_1 \cap Q_2$. Hence, $\ivalues_1 \subseteq \ivalues_2$ and $\fJoinAll(\ivalues_1) \sqsubseteq \fJoinAll(\ivalues_2)$. \end{proof} \begin{theorem}[{\DBLA} safety] \label{the:dbla-safety} Our implementation satisfies the safety properties of {\DBLA}: {\pBLAValidity}, {\pBLAVerifiability}, {\pBLAInclusion}, {\pBLAComparability}, and {\pDynamicValidity}. \end{theorem} \begin{proof} \leavevmode \begin{itemize} \item {\pBLAValidity} follows directly from the implementation: a correct client collects verifiable input values and joins them before returning from $\fPropose$ (\lstlineref{dbla-client}{propose-return}); \item {\pBLAVerifiability} follows directly from how correct replicas form and check the proofs for output values (Algorithm~\ref{alg:dbla-client}, lines~\ref{lst:dbla-client:propose-make-proof} and~\ref{lst:dbla-client:verify-output-value-unpack}--\ref{lst:dbla-client:verify-output-value-end}); \item {\pBLAInclusion} follows from the fact that the set $\icurrentValues$ of a correct client only grows (\lstlineref{dbla-client}{refine-current-values}); \item {\pBLAComparability} follows from Theorem~\ref{the:comparability}; \item Finally, {\pDynamicValidity} follows from Theorem~\ref{the:dynamic-validity}. \end{itemize} \end{proof} \myparagraph{{\DBLA} liveness.} \begin{lemma}[History Convergence] \label{lem:history-convergence} Local histories of all correct processes will eventually become identical.
\end{lemma} \begin{proof} Let $p$ and $q$ be any two forever-correct processes\footnote{ If either $p$ or $q$ eventually halts or becomes Byzantine, their local histories are not required to converge. }. % Suppose, for contradiction, that the local histories of $p$ and $q$ have diverged at some point and will never converge again. % Recall that correct processes only adopt verifiable histories, and that we require the total number of verifiable histories to be finite. % Therefore, there is some history $h_p$, which is the {\thighest} history ever adopted by $p$, and some history $h_q$, which is the {\thighest} history ever adopted by $q$. % Since all verifiable histories are required to be comparable, and we assume that $h_p \neq h_q$, one of them must be {\thigher} than the other. % Without loss of generality, suppose that $h_p \subset h_q$. Since $q$ had to deliver $h_q$ through reliable broadcast (unless $h_q$ is the initial history), and $q$ remains correct forever, $p$ will eventually deliver $h_q$ as well, and will adopt it. A contradiction. \end{proof} Next, we introduce an important definition, which we will use throughout the rest of the proofs. \begin{definition}[Maximal installed configuration] \label{def:cmax} In a given infinite execution, a \emph{maximal installed configuration} is a configuration that eventually becomes installed and never becomes superseded. \end{definition} \begin{lemma}[$\Cmax$ existence] \label{lem:cmax-existence} In any infinite execution there is a unique maximal installed configuration. \end{lemma} \begin{proof} By Lemma~\ref{lem:candidate-configurations} (Candidate configurations) and Theorem~\ref{the:dynamic-validity} (\pDynamicValidity), the total number of installed configurations is finite and they are comparable. Hence, we can choose a unique maximum among them, which is never superseded by definition. \end{proof} Let us denote the (unique) maximal installed configuration by $\Cmax$.
\begin{lemma}[$\Cmax$ installation] \label{lem:cmax-installation} The maximal installed configuration will eventually be installed by all correct replicas. \end{lemma} \begin{proof} Since $\Cmax$ is installed, by definition, at some point some correct replica has triggered upcall $\fInstalledConfig(\Cmax)$ (\lstlineref{dbla-replica}{upon-update-complete-upcall}). This, in turn, means that this replica delivered a quorum of $\mUpdateComplete$ messages via the \emph{uniform reliable broadcast} in $\Cmax$ when it was correct. % Therefore, even if this replica later becomes Byzantine, by definition of the uniform reliable broadcast, either $\Cmax$ will become superseded (which is impossible), or every correct replica will eventually deliver the same set of $\mUpdateComplete$ messages and install $\Cmax$. \end{proof} \begin{lemma}[State transfer progress] \label{lem:state-transfer-progress} State transfer (\lstlinerangeref{dbla-replica}{state-transfer-begin}{state-transfer-end}), executed by a forever-correct replica, always terminates. \end{lemma} \begin{proof} Let $r$ be a correct replica executing state transfer. % By Lemma~\ref{lem:candidate-configurations}, the total number of candidate configurations is finite. Therefore, it is enough to prove that there is no configuration for which $r$ will wait indefinitely for replies from a quorum (\lstlineref{dbla-replica}{state-transfer-wait}). % Suppose, for contradiction, that there is such a configuration $C$. If $C \sqsubset \Cmax$, then, by Lemma~\ref{lem:cmax-installation}, $r$ will eventually install $\Cmax$, and $\Ccurr$ will become not {\tlower} than $\Cmax$ (\lstlineref{dbla-replica}{upon-update-complete-ccurr-set}). Hence, $r$ will stop waiting through the first condition ($C \sqsubset \Ccurr$). A contradiction. Otherwise, if $\Cmax \sqsubseteq C$, then, by the definition of $\Cmax$, $C$ will never be superseded.
% Since $r$ remains correct forever, by Lemma~\ref{lem:history-convergence} (History Convergence), every correct replica will eventually have $C$ in its local history. Since we assume reliable links between processes (see Section~\ref{sec:system-model}), every correct replica in $\ireplicas(C)$ will eventually receive $r$'s $\mUpdateRead$ message and will send a reply, which $r$ will receive (\lstlineref{dbla-replica}{upon-update-read-send-resp}). % Hence, the waiting on line~\ref{lst:dbla-replica:state-transfer-wait} of Algorithm~\ref{alg:dbla-replica} will eventually terminate through the second condition ($r$ will receive responses from some $Q \in \iquorums(C)$ with correct sequence number). A contradiction. \end{proof} Intuitively, the following lemma states that $\Cmax$ is, in some sense, the ``final'' configuration. After some point every correct process will operate exclusively on $\Cmax$. No correct process will know about any configuration {\thigher} than $\Cmax$ or ``care'' about any configuration {\tlower} than $\Cmax$. \begin{lemma} \label{lem:cmax-is-final} $\Cmax$ will eventually become the {\thighest} configuration in the local history of each correct process. \end{lemma} \begin{proof} By Lemma~\ref{lem:history-convergence} (History Convergence), the local histories of all correct processes will eventually converge to the same history $h$. Let $D = \fMaxElement(h)$. % Since $\Cmax$ is installed and never superseded, it cannot be {\thigher} than $D$ (at least one correct replica will always ``remember'' $\Cmax$). Suppose, for contradiction, that $\Cmax \sqsubset D$. % In this case, $D$ is never superseded, which means that there is a quorum $Q_D \in \iquorums(D)$ that consists entirely of forever-correct processes. By Lemma~\ref{lem:history-convergence} (History Convergence), all replicas in $Q_D$ will eventually learn about $D$ and will try to perform state transfer to it.
By Lemma~\ref{lem:state-transfer-progress}, they will eventually succeed and install $D$. A contradiction with the definition of $\Cmax$. \end{proof} \begin{theorem}[\pBLALiveness] \label{the:operational-liveness} Our implementation of {\DBLA} satisfies the {\pBLALiveness} property: if the total number of verifiable input values is finite, every call to $\fPropose(v, \sigma)$ by a forever-correct process eventually returns. \end{theorem} \begin{proof} Let $p$ be a forever-correct client that invoked $\fPropose(v, \sigma)$. By Lemma~\ref{lem:cmax-is-final}, $\Cmax$ will eventually become the {\thighest} configuration in the local history of $p$. % If the client's request has not terminated by the time it learns about $\Cmax$, the client will call $\fRefine(\emptyset)$ afterwards (\lstlineref{dbla-client}{new-history-refine}). % By Lemma~\ref{lem:cmax-installation}, $\Cmax$ will eventually be installed by all correct replicas. Since it will never be superseded, there will be a quorum of forever-correct replicas. % This means that every round of messages from the client will eventually be responded to by a quorum of correct replicas. Since the total number of verifiable input values is finite, the client will call $\fRefine$ only a finite number of times (\lstlineref{dbla-client}{upon-propose-resp-refine}). % After the last call to $\fRefine$, the client will inevitably receive acknowledgments from a quorum of replicas, and will proceed to sending $\mConfirm$ messages (\lstlineref{dbla-client}{upon-acks-collected-send-confirm}). % Again, since there is an available quorum of correct replicas that installed $\Cmax$, the client will eventually receive enough acknowledgments and will complete the operation (\lstlineref{dbla-client}{propose-after-wait}). \end{proof} \begin{theorem}[{\DBLA} liveness] \label{the:dbla-liveness} Our implementation satisfies the liveness properties of {\DBLA}: {\pBLALiveness}, {\pDynamicLiveness}, and {\pInstallationLiveness}.
\end{theorem} \begin{proof} {\pBLALiveness} follows from Theorem~\ref{the:operational-liveness}. {\pDynamicLiveness} and {\pInstallationLiveness} follow directly from Lemmas~\ref{lem:cmax-is-final} and~\ref{lem:cmax-installation} respectively. \end{proof} \subsection{Possible optimizations} \label{subsec:dbla-optimizations} Here we would like to mention a few possible directions for optimization. Most of them are dedicated to reducing the communication cost of the protocol. First, the proofs in our protocol include the full local history of a process. Moreover, this history comes with its own proof, which also usually contains a history, and so on. If implemented naively, the size of one proof in bytes will be at least quadratic with respect to the number of distinct candidate configurations, which is completely unnecessary. The first observation is that these histories will be related by containment. So, in fact, they can be compressed just to the size of the largest one, which is linear. But we can go further and say that, in fact, in a practical implementation, the processes should almost never actually send full histories to each other because every process maintains its local history and all histories with proofs are already disseminated via the reliable broadcast primitive. When one process wants to send some history to some other process, it can just send a cryptographic hash of this history. The other process can check if it already has this history and, if not, ask the sender to only send the missing parts, instead of the \emph{whole} history. Second, a naive implementation of our {\DBLA} protocol would send ever-growing sets of verifiable input values around, which is, just as with histories, completely unnecessary. The processes should just limit themselves to sending diffs between what they know and what they think the recipient knows. Third, almost every proof in our system contains signatures from a quorum of replicas.
This adds another linear factor to the communication cost. However, it can be significantly reduced by the use of forward-secure \emph{multi-signatures}, such as Pixel~\cite{drijvers2019pixel}, which was designed for similar purposes. Finally, we use a suboptimal implementation of lattice agreement as the foundation for our {\DBLA} protocol. Perhaps we could benefit from adapting a more efficient crash fault-tolerant asynchronous solution~\cite{garg-la}.
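The first optimization (advertise a hash of the history and transfer only what is missing) can be sketched as follows. This is a minimal sketch under simplifying assumptions of ours: histories are modeled as tuples of configuration identifiers that grow only by appending, and the names `history_digest`, `advertise`, `sync_history`, and `fetch` are illustrative, not part of the protocol:

```python
import hashlib

def history_digest(history):
    """SHA-256 digest of a local history (here: a tuple of configuration ids)."""
    h = hashlib.sha256()
    for cfg in history:
        h.update(repr(cfg).encode())
    return h.hexdigest()

def advertise(history):
    """What a sender transmits instead of the full history."""
    return {"digest": history_digest(history), "length": len(history)}

def sync_history(local, advert, fetch):
    """Receiver side: adopt the advertised history, fetching only the gap.

    `fetch(start)` stands in for asking the sender to resend history[start:];
    since histories in this toy model only grow by appending, a suffix suffices.
    """
    if history_digest(local) == advert["digest"]:
        return local                      # already up to date, nothing sent
    return local + fetch(len(local))      # transfer only the missing part
```

For example, a receiver holding `("C0", "C1")` that sees an advertisement for `("C0", "C1", "C2")` fetches only the one-element suffix.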
\section{Introduction}\label{sec:introduction} A \emph{tree} on a finite set $X$ is a connected acyclic graph with vertex set $X$. A \emph{rooted tree} is a tree with a distinguished vertex called a \emph{root}. It is well-known that the number of rooted trees on $[n]=\{1,2,\dots,n\}$ is $n^{n-1}$, see \cite[5.3.2~Proposition]{EC2}. Suppose $T$ is a rooted tree with root $r$. For a vertex $v\ne r$ of $T$ there is a unique path $(u_1,u_2,\dots,u_i)$ from $r=u_1$ to $v=u_i$. Then $u_{i-1}$ is called the \emph{parent} of $v$, and $v$ is called a \emph{child} of $u_{i-1}$. For two vertices $u$ and $v$, we say that $u$ is a \emph{descendant} of $v$ if the unique path from $r$ to $u$ contains $v$. Note that every vertex is a descendant of itself. A \emph{leaf} is a vertex with no children. A rooted tree is \emph{decreasing} if every nonleaf is greater than its children. The \emph{maximal decreasing subtree} of $T$, denoted $\MD(T)$, is the maximal subtree that has the same root as $T$ and is decreasing. If the root of $T$ has no smaller children, $T$ is called \emph{minimally rooted}. The notion of maximal decreasing subtree first appeared in \cite{Chauve2000} in order to prove the following theorem. \begin{thm}\cite[Theorem~2.1]{Chauve2000}\label{thm:CDG} The number of rooted trees on $[n+1]$ such that the root has $\ell$ smaller children equals $\binom{n}{\ell} n^{n-\ell}$. \end{thm} Recently, maximal decreasing subtrees reappeared in the study of a certain free Lie algebra over rooted trees by Bergeron and Livernet \cite{Bergeron2010}. Seo and Shin \cite{SeoShin} proved some enumeration properties of rooted trees with fixed size of maximal decreasing subtrees. We denote by $\TT_{n,k}$ the set of rooted trees on $[n+1]$ whose maximal decreasing subtrees have $k+1$ vertices. Let $\FF_{n,k}$ denote the set of functions $f:[n]\to[n]$ such that $[k]\subset f([n])$, where $[0]=\emptyset$.
Equivalently, $\FF_{n,k}$ is the set of words on $[n]$ of length $n$ such that each of $1,2,\dots,k$ appears at least once. Using the Pr\"ufer code one can easily see that $\FF_{n,k}$ is in bijection with the set of rooted trees on $[n+1]$ such that $n+1$ is a leaf and $1,2,\dots,k$ are nonleaves. Thus, we will consider $\FF_{n,k}$ as the set of such trees. Seo and Shin \cite{SeoShin} proved the following theorem. \begin{thm}\cite{SeoShin}\label{thm:SeoShin} We have \[ |\TT_{n,k}| = |\FF_{n,k}|. \] \end{thm} In \cite{SeoShin} they showed Theorem~\ref{thm:SeoShin} by finding formulas for both sides and checking that they agree. In this paper we provide a bijective proof of Theorem~\ref{thm:SeoShin}, which consists of several bijections between certain objects, see Theorem~\ref{thm:main}. In order to state the objects in Theorem~\ref{thm:main} we need the following definitions. An \emph{ordered forest} on a finite set $X$ is an ordered tuple of rooted trees whose vertex sets form a partition of $X$. We say that an ordered forest $(T_0,T_1,\dots,T_\ell)$ is \emph{$k$-good} if it satisfies the following conditions: \begin{enumerate} \item If $\ell=0$, then $T_0$ has only one vertex $v$ and we have $v\in [k]$. \item If $\ell\geq1$, then $T_1,T_2,\dots,T_\ell$ are minimally rooted, and the number of vertices of $T_0,T_1,\dots,T_i$ contained in $[k]$ is at least $i+1$ when $i=0,1,2,\dots,\ell-1$, and equal to $\ell$ when $i=\ell$. \end{enumerate} We now state the main theorem of this paper.
\begin{thm}\label{thm:main} The following sets have the same cardinality: \begin{enumerate} \item the set $\TT_{n,k}$ of rooted trees on $[n+1]$ whose maximal decreasing subtrees have $k+1$ vertices, \item the set $\AA_{n,k}$ of cycles of $k+1$ minimally rooted trees such that the vertex sets of the trees form a partition of $[n+1]$, \item the set $\BB_{n,k}$ of ordered forests on $[n]$ such that the last $k$ trees are minimally rooted, \item the set $\CC_{n,k}$ of sequences of $k$-good ordered forests such that the vertex sets of the forests form a partition of $[n]$, \item the set $\FF_{n,k}$ of rooted trees on $[n+1]$ such that $n+1$ is a leaf, and $1,2,\dots,k$ are nonleaves. \end{enumerate} \end{thm} In Section~\ref{sec:bijections} we find bijections proving Theorem~\ref{thm:main}. The ideas in the bijections have some applications. In Section~\ref{sec:some-properties-tt_n} we find a bijective proof of the following identity; finding such a proof is stated as an open problem in \cite{SeoShin}: \[ \sum_{k\ge1} \frac{1}{k} |\TT_{n,k}| = n^n. \] In Section~\ref{sec:anoth-proof-theor}, we give another bijective proof of Theorem~\ref{thm:CDG}. From now on all trees in this paper are rooted trees. \section{Bijections}\label{sec:bijections} In this section we will find four bijections to prove Theorem~\ref{thm:main}. We assume that $n$ and $k$ are fixed nonnegative integers. We will write cycles using brackets to distinguish them from sequences. For instance, $[a_1,a_2,a_3]$ is a cycle and $(a_1,a_2,a_3)$ is a sequence, thus $[a_1,a_2,a_3]=[a_2,a_3,a_1]$ and $(a_1,a_2,a_3)\ne(a_2,a_3,a_1)$. For a tree or a forest $T$, we denote by $V(T)$ the set of vertices in $T$. \subsection{A bijection $\alpha:\TT_{n,k}\to\AA_{n,k}$} We will explain the map $\alpha$ by an example. Let $T\in\TT_{19,7}$ be the following tree.
\[ \pstree{\TR{16}}{ \pstree{\TR{13}}{ \TR{17} \pstree{\TR{8}}{ \TR{19} \TR{18} } } \pstree{\TR{12}}{ \pstree{\TR{11}}{ \pstree{\TR{10}}{ \pstree{\TR{15}}{ \pstree{\TR{1}}{ \TR{3} \pstree{\TR{20}}{ \TR{4} } } } } } \pstree{\TR{7}}{ \pstree{\TR{14}}{ \TR{9} \TR{2} } } \pstree{\TR{5}}{ \TR{6} } } } \] Then we can decompose $T$ as follows: \begin{equation} \label{eq:1} T \Leftrightarrow \left( \raisebox{1.5cm}{ \pstree{\TR{16}}{ \pstree{\TR{13}}{ \TR{8} } \pstree{\TR{12}}{ \pstree{\TR{11}}{ \TR{10} } \TR{7} \TR{5} } } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{13}}{ \TR{17} } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{10}}{ \pstree{\TR{15}}{ \pstree{\TR{1}}{ \TR{3} \pstree{\TR{20}}{ \TR{4} } } } } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} } \right), \end{equation} where the first tree is $\MD(T)$, and the rest are the trees with more than one vertex in the forest obtained from $T$ by removing the edges in $\MD(T)$. We now construct a cycle $C$ corresponding to $\MD(T)$ as follows. First, let $C$ be the cycle containing only the maximal vertex $m$, which is the root of $\MD(T)$. For each remaining vertex $v$, starting from the largest vertex to the smallest vertex, we insert $v$ in $C$ after the parent of $v$. In the current example, we get the cycle $[16, 12, 5, 7, 11, 10, 13, 8]$. It is easy to see that this process is invertible. In fact this is equivalent to the well-known algorithm called the depth-first search (preorder). For each element $v$ except the largest element in this cycle, if there is a tree with root $v$ in \eqref{eq:1} replace $v$ with the tree. We then define $\alpha(T)$ to be the resulting cycle. It is not hard to see that $\alpha$ is a bijection. 
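The insertion procedure above is easy to state in code. A small Python sketch of ours (representing the decreasing tree by a parent map is our illustrative choice, not the paper's notation); on the running example it reproduces the cycle $[16, 12, 5, 7, 11, 10, 13, 8]$:

```python
def decreasing_tree_to_cycle(root, parent):
    """Encode a decreasing rooted tree as a cycle: start from [root] and
    insert each remaining vertex, largest first, right after its parent."""
    cycle = [root]
    for v in sorted(parent, reverse=True):
        cycle.insert(cycle.index(parent[v]) + 1, v)
    return cycle

# MD(T) from the running example, as a parent map of the non-root vertices.
parent = {13: 16, 12: 16, 11: 12, 10: 11, 8: 13, 7: 12, 5: 12}
decreasing_tree_to_cycle(16, parent)  # [16, 12, 5, 7, 11, 10, 13, 8]
```

Because every child is smaller than its parent in a decreasing tree, each vertex's parent is already in the cycle when the vertex is inserted, which is why the procedure is well defined.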
In the current example, we have \begin{equation} \label{eq:2} \alpha(T) = \left[ \raisebox{1.5cm}{ \TR{16} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{11} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{10}}{ \pstree{\TR{15}}{ \pstree{\TR{1}}{ \TR{3} \pstree{\TR{20}}{ \TR{4} } } } } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{13}}{ \TR{17} } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } } \right] \in \AA_{19,7}. \end{equation} \subsection{A bijection $\beta:\AA_{n,k}\to\BB_{n,k}$} In order to define the map $\beta$ we need two bijections $\phi$ and $\rho$ in the following two lemmas. These bijections will also be used in other subsections. \begin{lem}\cite{Chauve2000}\label{lem:CDG} Let $A\subset [n]$. There is a bijection $\phi$ from the set of minimally rooted trees on $A$ to the set of rooted trees on $A$ such that $\max(A)$ is a leaf. \end{lem} \begin{proof} We will briefly describe the bijection $\phi$. See \cite{Chauve2000} for the details. Consider a minimally rooted tree $T$ on $A$ with root $r$. Let $P$ be the subtree of $T$ rooted at $\max(A)$ containing all descendants of $\max(A)$, and let $Q$ be the tree obtained from $T$ by deleting $P$ (including $\max(A)$). We now consider the forest obtained from $P$ by removing all edges of $\MD(P)$. Suppose this forest has $\ell$ trees $T_1,T_2,\dots,T_\ell$ rooted at, respectively, $r_1,r_2,\dots,r_\ell$. If $V(\MD(P)) = \{u_1<u_2<\dots<u_t\}$ and $(V(\MD(P))\setminus\{\max(A)\}) \cup \{r\} = \{v_1<v_2<\dots<v_t\}$, let $T'$ be the tree obtained from $\MD(P)$ by replacing $u_i$ with $v_i$ for all $i$. Then $\phi(T)$ is the tree obtained from $T'$ by attaching $T_i$ at $r_i$ for $i=1,2,\dots,\ell$ and attaching $Q$ at $r$. \end{proof} \begin{lem}\label{lem:leaf} Let $A\subset[n]$.
There is a bijection $\rho$ from the set of rooted trees on $A$ such that $\max(A)$ is a leaf to the set of ordered forests on $A\setminus\{\max(A)\}$. \end{lem} \begin{proof} Suppose $T$ is a rooted tree on $A$ such that $\max(A)$ is a leaf. Let $r=r_1, r_2,\dots, r_{\ell+1}=\max(A)$ be the unique path from the root $r$ of $T$ to the leaf $\max(A)$. Let $R_1,R_2,\dots,R_{\ell}$ be the rooted trees with roots $r_1,r_2,\dots,r_{\ell}$ respectively in the forest obtained from $T$ by removing the edges $r_1r_2,r_2r_3,\dots,r_{\ell}r_{\ell+1}$ and the vertex $r_{\ell+1}=\max(A)$. We define $\rho(T)=(R_1,R_2,\dots,R_{\ell})$. It is easy to see that $\rho$ is the desired bijection. \end{proof} Let $[T_0,T_1,\dots,T_k]\in \AA_{n,k}$. Since $[T_0,T_1,\dots,T_k]$ is a cycle, we can assume that $n+1\in V(T_0)$. By Lemmas~\ref{lem:CDG} and \ref{lem:leaf}, the vertex $n+1$ in $\phi(T_0)$ is a leaf, and $\rho(\phi(T_0)) = (R_1,R_2,\dots,R_{\ell})$ is an ordered forest on $V(T_0)\setminus\{n+1\}$. We define $\beta([T_0,T_1,\dots,T_k])= (R_1,R_2,\dots,R_{\ell}, T_1,T_2,\dots,T_k)$. Since both $\phi$ and $\rho$ are invertible, $\beta$ is a bijection. \begin{example} Let $\F$ be the cycle in \eqref{eq:2}. Then we can write $\F$ as \[ \F = \left[ \raisebox{1.5cm}{ \pstree{\TR{10}}{ \pstree{\TR{15}}{ \pstree{\TR{1}}{ \TR{3} \pstree{\TR{20}}{ \TR{4} } } } } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{13}}{ \TR{17} } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{11} } \right].
\] Then \[ (\rho\circ\phi) \left( \raisebox{1.5cm}{ \pstree{\TR{10}}{ \pstree{\TR{15}}{ \pstree{\TR{1}}{ \TR{3} \pstree{\TR{20}}{ \TR{4} } } } } } \right) = \rho\left( \raisebox{1.5cm}{ \pstree{\TR{10}}{ \pstree{\TR{15}}{ \pstree{\TR{1}}{ \TR{3} \TR{20} } } \TR{4} } } \right)= \left( \raisebox{.4cm}{ \pstree{\TR{10}}{\TR{4}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{15} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{1}}{\TR{3}} } \right). \] Thus \[ \beta(\F) = \left( \raisebox{.8cm}{ \pstree{\TR{10}}{\TR{4}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{15} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{1}}{\TR{3}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{13}}{ \TR{17} } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{11} } \right) \in \BB_{19,7}. \] \end{example} \subsection{A bijection $\gamma:\BB_{n,k}\to\CC_{n,k}$} We call a vertex with label less than or equal to $k$ a \emph{special vertex}. For two ordered forests $\X$ and $\Y$ whose vertex sets are disjoint and contained in $[n]$, the pair $(\X,\Y)$ is called a \emph{balanced pair} if the trees in $\Y$ are minimally rooted and the number of special vertices in $\X$ and $\Y$ is equal to the number of trees in $\Y$. For two sets $A$ and $B$, the \emph{disjoint union} $A\uplus B$ is just the union of $A$ and $B$. However, if we write $A\uplus B$, it is always assumed that $A\cap B=\emptyset$. \begin{lem} There is a bijection $f$ from the set of balanced pairs $(\X,\Y)$ to the set of pairs $(\A,(\X',\Y'))$ of a $k$-good ordered forest $\A$ and a balanced pair $(\X',\Y')$ such that $V(\X)\uplus V(\Y) = V(\A) \uplus V(\X') \uplus V(\Y')$. \end{lem} \begin{proof} Suppose $\X=(X_1,X_2,\dots,X_s)$ and $\Y=(Y_1,Y_2,\dots,Y_t)$. 
We define $f(\X,\Y)=(\A,(\X',\Y'))$ as follows. Case 1: If $s\geq1$, and $X_1$ does not contain a special vertex, we define $\A=(X_1)$, $\X'=(X_2,\dots,X_s)$, and $\Y' = \Y$. Case 2: If $s\geq1$, and $X_1$ contains at least one special vertex, there is a unique integer $1\leq j\leq t$ such that $(X_1,Y_1,Y_2,\dots,Y_j)$ is a $k$-good ordered forest. Then we define $\A=(X_1,Y_1,Y_2,\dots,Y_j)$, $\X'=(X_2,X_3,\dots,X_s)$, and $\Y'=(Y_{j+1},Y_{j+2},\dots,Y_t)$. Since $\A$ is a $k$-good ordered forest, there are $j$ special vertices in $X_1,Y_1,Y_2,\dots,Y_j$. This implies that $(\X',\Y')$ is also a balanced pair. Case 3: If $s=0$, then $\X=\emptyset$ and there are $t$ special vertices in $Y_1,Y_2,\dots, Y_t$. Let $U=V(Y_1)\uplus\cdots\uplus V(Y_t)$ and let $m=\max(U)$. Suppose $Y_i$ contains $m$. We apply the map $\phi$ in Lemma~\ref{lem:CDG} to $Y_i$. Then $\phi(Y_i)$ is a rooted tree such that $m$ is a leaf. If we apply the map $\rho$ in Lemma~\ref{lem:leaf} to $\phi(Y_i)$, we get an ordered forest $\rho(\phi(Y_i)) = (T_1,T_2,\dots, T_\ell)$ on $V(Y_i)\setminus\{m\}$. Let $\overline{\X}=(T_1,T_2,\dots, T_\ell)$ and $\overline{\Y} = (Y_1,Y_2,\dots, \widehat{Y_i}, \dots,Y_t)$. Note that the set of vertices in $\overline{\X}$ and $\overline{\Y}$ is $U\setminus\{m\}$. Let $s_1<s_2<\cdots<s_t$ be the special vertices in $U$. Suppose $U\setminus\{m\} = \{ v_1<v_2<\cdots<v_{p}\}$ and $U\setminus\{s_i\} = \{ u_1<u_2<\cdots<u_{p}\}$. Then we define $\X'$ (resp.~$\Y'$) to be the ordered forest obtained from $\overline{\X}$ (resp.~$\overline{\Y}$) by replacing $v_j$ with $u_j$ for all $j$. We define $\A$ to be the rooted tree with only one vertex $s_i$. It is clear from the construction that $\A$ is a $k$-good ordered forest and $(\X',\Y')$ is a balanced pair. In all cases, we clearly have $V(\X)\uplus V(\Y) = V(\A) \uplus V(\X') \uplus V(\Y')$. We now show that $f$ is invertible by constructing the inverse map $g=f^{-1}$.
Suppose $\A=(A_1,\dots,A_r)$, $\X'=(X_1,\dots,X_s)$, and $\Y'=(Y_1,\dots,Y_t)$, where $\A$ is a $k$-good forest and $(\X',\Y')$ is a balanced pair. Then we define $g(\A,(\X',\Y'))=(\X,\Y)$ as follows. Case 1: If $r=1$, and $A_1$ does not have a special vertex, we define $\X=(A_1,X_1,\dots,X_s)$ and $\Y=\Y'$. Case 2: If $r\ge2$, we define $\X=(A_1,X_1,\dots,X_s)$ and $\Y=(A_2,\dots,A_r, Y_1,\dots,Y_t)$. Case 3: If $r=1$, and $A_1$ has a special vertex, then by definition of $k$-good forests, $A_1$ has only one vertex. Let $U$ be the set of vertices in $\A$, $\X'$, and $\Y'$, and $m=\max(U)$. Suppose $s_1<\cdots<s_{t+1}$ are the $t+1$ special vertices in $U$, and the unique vertex in $A_1$ is $s_j$. Let $U\setminus\{m\} = \{ v_1<v_2<\cdots<v_{p}\}$ and $U\setminus\{s_j\} = \{ u_1<u_2<\cdots<u_{p}\}$. Then we define $\overline{\X}=(T_1,\dots,T_s)$ and $\overline{\Y}=(R_1,\dots,R_t)$ to be the ordered forests obtained from $\X'$ and $\Y'$ by replacing $u_i$ with $v_i$ for all $i$. Then the set of vertices in $\overline{\X}$ and $\overline{\Y}$ is now $U\setminus\{m\}$. Thus we can construct the tree $T=\rho^{-1}(\overline{\X})$ with maximum label $m$, and $R=\phi^{-1}(T)$ is a minimally rooted tree. We define $\X=\emptyset$ and $\Y=(R_1,\dots,R_{j-1}, R, R_j,\dots,R_t)$. It is easy to see that $g$ is the inverse map of $f$. \end{proof} Now we are ready to define the map $\gamma$. Suppose $(T_1,T_2,\dots,T_\ell, T_{\ell+1}, T_{\ell+2}, \dots, T_{\ell+k})\in \BB_{n,k}$. Let $\X=(T_1,T_2,\dots,T_\ell)$ and $\Y=(T_{\ell+1}, T_{\ell+2}, \dots, T_{\ell+k})$. Then $(\X,\Y)$ is a balanced pair. We define $(\X_0,\Y_0),(\X_1,\Y_1),\dots,$ and $\A_1, \A_2,\dots,$ as follows. Let $(\X_0,\Y_0)=(\X,\Y)$. For $i\geq0$, if $(\X_i,\Y_i)\ne(\emptyset,\emptyset)$, we define $\A_{i+1}, \X_{i+1}, \Y_{i+1}$ by $f(\X_i, \Y_i) = (\A_{i+1}, (\X_{i+1}, \Y_{i+1}))$. Let $p$ be the smallest integer such that $\X_p=\Y_p=\emptyset$. Then we define $\gamma(\X,\Y)$ to be $(\A_1,\A_2,\dots,\A_p)\in \CC_{n,k}$.
Since $f$ is invertible, $\gamma$ is a bijection. \begin{example} Let \[ \F = \left( \raisebox{.8cm}{ \pstree{\TR{10}}{\TR{4}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{15} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{1}}{\TR{3}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{13}}{ \TR{17} } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{11} } \right) \in \BB_{19,7}. \] Note that special vertices are less than or equal to $7$. Then \[ \X=\X_0 = \left( \raisebox{.8cm}{ \pstree{\TR{10}}{\TR{4}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{15} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{1}}{\TR{3}} } \right), \quad \Y = \Y_0=\left( \raisebox{.8cm}{ \pstree{\TR{13}}{ \TR{17} } \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{11} } \right), \] \[ \A_1 = \left( \raisebox{.4cm}{ \pstree{\TR{10}}{\TR{4}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{13}}{ \TR{17} } } \right), \quad \X_1 = \left( \raisebox{.4cm}{ \TR{15} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{1}}{\TR{3}} } \right), \quad \Y_1=\left( \raisebox{.8cm}{ \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{11} } \right), \] \[ \A_2 = \left( \raisebox{.1cm}{ \TR{15}} \right), \quad \X_2 = \left( \raisebox{.4cm}{ \pstree{\TR{1}}{\TR{3}} } \right), \quad \Y_2=\left( 
\raisebox{.8cm}{ \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{11} } \right), \] \[ \A_3 = \left( \raisebox{.4cm}{ \pstree{\TR{1}}{\TR{3}}\raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} } \right), \quad \X_3 = \emptyset, \quad \Y_3=\left( \raisebox{.8cm}{ \TR{12} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{11} } \right). \] In $\Y_3$ the largest vertex $14$ is in the third tree. Using $\phi$ and $\rho$ we get \[ (\rho \circ \phi) \left( \raisebox{.8cm}{ \pstree{\TR{7}}{\pstree{\TR{14}}{\TR{9} \TR{2}}} } \right) = \rho \left( \raisebox{.8cm}{ \pstree{\TR{9}}{ \pstree{\TR{7}}{ \TR{14} } \TR{2} } } \right) = \left( \raisebox{.4cm}{ \pstree{\TR{9}}{\TR{2}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{7} } \right). \] Thus \[ \overline{\X_3} = \left( \raisebox{.4cm}{ \pstree{\TR{9}}{\TR{2}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{7} } \right), \quad \overline{\Y_3}=\left( \raisebox{.4cm}{ \TR{12} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{6}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{11} } \right). \] Since $\X_3$ and $\Y_3$ have 4 special vertices $2,5,6,7$, and $6$ is the third smallest special vertex, we replace the vertices in $U\setminus\{14\}$ with those in $U\setminus\{6\}$. 
Since \[ \begin{array}{ccccccccccc} U\setminus\{14\} &=& \{ & 2, & 5, & 6, & 7, & 9, & 11, & 12 &\}, \\ U\setminus\{6\} &=& \{ &2, & 5, & 7, & 9, & 11, & 12, & 14 &\}, \end{array} \] we get \[ \A_4 = \left( \raisebox{.1cm}{ \TR{6} } \right), \quad \X_4 = \overline{\X_3'}= \left( \raisebox{.4cm}{ \pstree{\TR{11}}{\TR{2}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{9} } \right), \quad \Y_4=\overline{\Y_3'}= \left( \raisebox{.4cm}{ \TR{14} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{5}}{\TR{7}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} } \right), \] \[ \A_5 = \left( \raisebox{.4cm}{ \pstree{\TR{11}}{\TR{2}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{14} } \right), \quad \X_5= \left( \raisebox{.1cm}{ \TR{9} } \right), \quad \Y_5= \left( \raisebox{.4cm}{ \pstree{\TR{5}}{\TR{7}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} } \right), \] \[ \A_6 = \left( \raisebox{.1cm}{ \TR{9} } \right), \quad \X_6=\emptyset, \quad \Y_6= \left( \raisebox{.4cm}{ \pstree{\TR{5}}{\TR{7}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} } \right). \] In $\Y_6$, the largest vertex $12$ is in the second tree. \[ (\rho\circ\phi)(12) = \rho(12) = \emptyset. \] Thus \[ \overline{\X_6}= \emptyset, \quad \overline{\Y_6}= \left( \raisebox{.4cm}{ \pstree{\TR{5}}{\TR{7}} } \right). \] If we replace the labels in $\{5,7\}$ with $\{5,12\}$ we get \[ \A_7 = \left( \raisebox{.1cm}{ \TR{7} } \right), \quad \X_7 =\emptyset, \quad \Y_7= \left( \raisebox{.4cm}{ \pstree{\TR{5}}{\TR{12}} } \right). \] Since \[ (\rho\circ\phi)\left(\raisebox{.4cm}{\pstree{\TR{5}}{\TR{12}}}\right) = \rho\left(\raisebox{.4cm}{\pstree{\TR{5}}{\TR{12}}}\right) = 5, \] we have $\overline{\X_7}= (5)$ and $\overline{\Y_7}= \emptyset$. Replacing $5$ with $12$ we get \[ \A_8 = \left( \raisebox{.1cm}{ \TR{5} } \right), \quad \X_8 = \left( \raisebox{.1cm}{ \TR{12} } \right), \quad \Y_8=\emptyset. \] Finally we get \[ \A_9 = \left( \raisebox{.1cm}{ \TR{12} } \right), \quad \X_9 =\emptyset, \quad \Y_9=\emptyset.
\] Thus \[ \gamma(\F)= \left( \left( \raisebox{.4cm}{ \pstree{\TR{10}}{\TR{4}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{13}}{\TR{17}}} \right), \left( \raisebox{.1cm}{ \TR{15}} \right), \left( \raisebox{.4cm}{ \pstree{\TR{1}}{\TR{3}}\raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} } \right), \left( \raisebox{.1cm}{ \TR{6} } \right), \left( \raisebox{.4cm}{ \pstree{\TR{11}}{\TR{2}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{14} } \right), \left( \raisebox{.1cm}{ \TR{9} } \right), \left( \raisebox{.1cm}{ \TR{7} } \right), \left( \raisebox{.1cm}{ \TR{5} } \right), \left( \raisebox{.1cm}{ \TR{12} } \right) \right)\in \CC_{19,7}. \] \end{example} \subsection{A bijection $\zeta:\CC_{n,k}\to\FF_{n,k}$} Recall that a special vertex is a vertex with label at most $k$. \begin{lem}\label{lem:good} For a fixed set $A\subset [n]$ with $|A|\geq2$, there is a bijection $\psi$ from the set of $k$-good ordered forests on $A$ to the set of rooted trees on $A$ whose special vertices are nonleaves. \end{lem} \begin{proof} Suppose $\F=(A_1,A_2,\dots,A_p)$ is a $k$-good ordered forest. We first set all special vertices in $\F$ \emph{active}. Find the smallest vertex $v$ among the active vertices with minimal distance from the root in $A_1$. Then exchange the subtrees attached to $v$ and those attached to the root $r$ of $A_2$, and then attach the resulting tree rooted at $r$ to $v$ as shown below. \begin{center} \psset{nodesep=2pt, levelsep=30pt} \pstree{\TR{$v$}}{ \Tcircle{$T_1$} \TR{$\cdots$} \Tcircle{$T_a$} } \qquad \pstree{\TR{$r$}}{ \Tcircle{$U_1$} \TR{$\cdots$} \Tcircle{$U_b$} } \qquad \raisebox{-.5cm}{$\Rightarrow$} \qquad \pstree{\TR{$v$}}{ \Tcircle{$U_1$} \TR{$\cdots$} \Tcircle{$U_b$} \pstree{\TR{$r$}}{ \Tcircle{$T_1$} \TR{$\cdots$} \Tcircle{$T_a$} } } \end{center} We then make $v$ inactive. Note that $v$ is a nonleaf after this procedure. 
We do the same thing with the resulting tree and $A_3$, and proceed until there are no active special vertices. Since $(A_1,A_2,\dots,A_p)$ is $k$-good, we can eventually combine all of $A_1,A_2,\dots,A_p$ into a single rooted tree in which the special vertices are nonleaves. We define $\psi(\F)$ to be the resulting tree. It is straightforward to check that $\psi$ is invertible. \end{proof} Let $(\F_1,\F_2,\dots,\F_h)\in \CC_{n,k}$. For each $k$-good forest $\F_i$ we define $T_i=\psi(\F_i)$ if $\F_i$ has at least $2$ vertices, and $T_i=X$ if $\F_i=(X)$ and $X$ has only one vertex. Then we define $\zeta(\F_1,\F_2,\dots,\F_h)=\rho^{-1}(T_1,\dots,T_h)$. Since $\rho^{-1}$ and $\psi$ are invertible, $\zeta$ is a bijection. \begin{example} Let $(\F_1,\F_2,\dots,\F_h)$ be the following: \[ \left( \left( \raisebox{.4cm}{ \pstree{\TR{10}}{\TR{4}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{13}}{\TR{17}}} \right), \left( \raisebox{.1cm}{ \TR{15}} \right), \left( \raisebox{.4cm}{ \pstree{\TR{1}}{\TR{3}}\raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} } \right), \left( \raisebox{.1cm}{ \TR{6} } \right), \left( \raisebox{.4cm}{ \pstree{\TR{11}}{\TR{2}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{14} } \right), \left( \raisebox{.1cm}{ \TR{9} } \right), \left( \raisebox{.1cm}{ \TR{7} } \right), \left( \raisebox{.1cm}{ \TR{5} } \right), \left( \raisebox{.1cm}{ \TR{12} } \right) \right)\in \CC_{19,7}. 
\] Then the map $\psi$ sends \[ \left( \raisebox{.4cm}{ \pstree{\TR{10}}{\TR{4}} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{13}}{\TR{17}}} \right) \mapsto \raisebox{.4cm}{ \pstree{\TR{10}}{ \pstree{\TR{4}}{ \TR{17} \TR{13} } } }, \] \[ \left( \raisebox{.4cm}{ \pstree{\TR{1}}{\TR{3}}\raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{8}}{ \TR{19} \TR{18} } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} } \right) \mapsto \left( \raisebox{.8cm}{ \pstree{\TR{1}}{ \TR{19} \TR{18} \pstree{\TR{8}}{ \TR{3} } }\raisebox{-4pt}{\mbox{ ,\quad}} \TR{16} } \right) \mapsto \raisebox{1.2cm}{ \pstree{\TR{1}}{ \TR{19} \TR{18} \pstree{\TR{8}}{ \pstree{\TR{3}}{ \TR{16} } } } }, \] \[ \left( \raisebox{.4cm}{ \pstree{\TR{11}}{\TR{2}} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{14} } \right) \mapsto \raisebox{.4cm}{ \pstree{\TR{11}}{\pstree{\TR{2}}{\TR{14}}} }. \] Thus we obtain $(T_1,\dots,T_h)$: \[ \left( \raisebox{1.1cm}{ \pstree{\TR{10}}{ \pstree{\TR{4}}{ \TR{17} \TR{13} } } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{15} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{1}}{ \TR{19} \TR{18} \pstree{\TR{8}}{ \pstree{\TR{3}}{ \TR{16} } } } \raisebox{-4pt}{\mbox{ ,\quad}} \TR{6} \raisebox{-4pt}{\mbox{ ,\quad}} \pstree{\TR{11}}{\pstree{\TR{2}}{\TR{14}}}\raisebox{-4pt}{\mbox{ ,\quad}} \TR{9} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{7} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{5} \raisebox{-4pt}{\mbox{ ,\quad}} \TR{12} } \right) \] If we add the vertex $n+1$, we obtain $\zeta(\F_1,\F_2,\dots,\F_h)$: \[ \pstree{\TR{10}}{ \pstree{\TR{15}}{ \pstree{\TR{1}}{ \TR{19} \TR{18} \pstree{\TR{8}}{ \pstree{\TR{3}}{ \TR{16} } } \pstree{\TR{6}}{ \pstree{\TR{11}}{ \pstree{\TR{9}}{ \pstree{\TR{7}}{ \pstree{\TR{5}}{ \pstree{\TR{12}}{ \TR{20} } } } } \pstree{\TR{2}}{ \TR{14} } } } } } \pstree{\TR{4}}{ \TR{17} \TR{13} } } \] \end{example} \section{Some properties of $|\TT_{n,k}|$}\label{sec:some-properties-tt_n} We denote the cardinality of $|\TT_{n,k}|$ by $a_{n,k}$. In \cite{SeoShin} Seo and Shin proved the following. 
\begin{thm}\cite{SeoShin} We have \begin{align} \label{eq:3} \sum_{k\ge0} \binom{m+k}{k} a_{n,k} &= (n+m+1)^n, \\ \label{eq:4} \sum_{k\ge1} \frac{1}{k} a_{n,k} &= n^n. \end{align} \end{thm} We give another proof using generating functions. \begin{proof} By Theorem~\ref{thm:SeoShin}, $a_{n,k}$ equals the number of words of length $n$ on $[n]$ with at least one $i$ for all $i\in[k]$. Thus \[ a_{n,k} = \left[ \frac{x^n}{n!}\right] (e^x-1)^k e^{(n-k)x} = \left[ \frac{x^n}{n!} \right] (1-e^{-x})^k e^{nx}, \] where $\left[ y^n\right]f(y)$ denotes the coefficient of $y^n$ in $f(y)$. Then we have \begin{align*} \sum_{k\ge0} \binom{m+k}{k} a_{n,k} & = \left[ \frac{x^n}{n!}\right] e^{nx} \sum_{k\ge0} \binom{m+k}{k} (1-e^{-x})^k \\ &= \left[ \frac{x^n}{n!}\right] e^{nx} \frac{1}{(1-(1-e^{-x}))^{m+1}}\\ &= \left[ \frac{x^n}{n!}\right] e^{(n+m+1)x} = (n+m+1)^n, \end{align*} where the following binomial theorem \cite[(1.20)]{EC1second} is used: \[ \frac{1}{(1-x)^n} = \sum_{k\geq0} \binom{n+k-1}{k} x^k. \] The second identity is proved similarly: \begin{align*} \sum_{k\ge1} \frac{1}{k} a_{n,k} &=\left[ \frac{x^n}{n!}\right] e^{nx} \sum_{k\ge1} \frac{(1-e^{-x})^k}{k}\\ &=\left[ \frac{x^n}{n!}\right] e^{nx} \ln \frac{1}{1-(1-e^{-x})}\\ &=\left[ \frac{x^n}{n!}\right] x e^{nx} = n! \left[ x^{n-1}\right] e^{nx} = n!\frac{n^{n-1}}{(n-1)!} = n^n. \end{align*} \end{proof} In \cite{SeoShin} they asked for a bijective proof of \eqref{eq:4}. We can prove it bijectively using our bijections as follows. \begin{proof}[Bijective proof of \eqref{eq:4}] By Theorem~\ref{thm:main}, $a_{n,k}$ is also equal to $|\BB_{n,k}|$, the number of ordered forests $(T_1,T_2,\dots, T_\ell, T_{\ell+1}, \dots,T_{\ell+k})$ on $[n]$ such that $T_{\ell+1}, \dots,T_{\ell+k}$ are minimally rooted.
Thus $a_{n,k}/k$ is equal to the number of pairs $(\F, C)$ of an ordered forest $\F=(T_1,T_2,\dots, T_\ell)$ and a cycle $C=[T_{\ell+1}, \dots,T_{\ell+k}]$ of $k$ minimally rooted trees such that the vertex sets of $T_1,\dots, T_{\ell+k}$ form a partition of $[n]$. Then, by Theorem~\ref{thm:main}, the number of cycles of $k$ minimally rooted trees whose vertex sets partition a subset $A$ of $[n]$ is equal to the number of rooted trees $T$ on $A$ with $|\MD(T)|=k$. Thus $a_{n,k}/k$ is equal to the number of ordered forests $(T_1,T_2,\dots, T_\ell, T)$ on $[n]$ with $|\MD(T)|=k$. The sum of $a_{n,k}/k$ for all $k$ is equal to the number of ordered forests on $[n]$. Suppose $(T_1,T_2,\dots, T_\ell)$ is an ordered forest on $[n]$ and $r_i$ is the root of $T_i$ for $i\in[\ell]$. By adding the edges $r_1r_2,r_2r_3,\dots, r_{\ell-1}r_\ell$, we get a rooted tree, say $H$. If we know the root $r_\ell$ of the last tree, then we can recover the ordered forest $(T_1,T_2,\dots, T_\ell)$ from $H$. Thus there is a bijection between the set of ordered forests on $[n]$ and the set of rooted trees on $[n]$ with a choice of $r_\ell$. The latter set has cardinality $n^n$ by the Pr\"ufer code. This proves \eqref{eq:4}. \end{proof} \section{Another proof of Theorem~\ref{thm:CDG}} \label{sec:anoth-proof-theor} Using the Pr\"ufer code one can easily see that $\binom{n}{\ell} n^{n-\ell}$ is equal to the number of rooted trees on $\{0,1,2,\dots,n+1\}$ such that $0$ is the root with $\ell+1$ children and $n+1$ is a leaf. By deleting the root $0$, such a tree is identified with a forest on $[n+1]$ with $\ell+1$ rooted trees such that $n+1$ is a leaf. Thus by replacing $n+1$ with $n$, we can rewrite Theorem~\ref{thm:CDG} as follows. \begin{thm}\cite{Chauve2000}\label{thm:children} The number of rooted trees on $[n]$ such that the root has $\ell$ smaller children equals the number of forests on $[n]$ with $\ell+1$ trees such that $n$ is a leaf.
\end{thm} \begin{proof} Let $T$ be a rooted tree on $[n]$ such that the root has $\ell$ smaller children. We will construct a forest corresponding to $T$. Recall the bijection $\alpha:\TT_{n,k}\to\AA_{n,k}$. Suppose $T\in \TT_{n-1,k}$, $\alpha(T)=[T_0,T_1,\dots,T_k]$, and $r_i$ is the root of $T_i$ for $i=0,1,2,\dots,k$. By shifting cyclically we can assume that $r_0$ is the largest root. Note that, by the construction of $\alpha$, $T$ is rooted at $r_0$. Also, from the construction of $\alpha$, it is easy to see that the smaller children of the root $r_0$ in $T$ are exactly the left-to-right maxima of $r_1,r_2,\dots,r_k$. Suppose $r_{i_1}<r_{i_2}<\cdots<r_{i_\ell}$ are the smaller children of $r_0$ in $T$. Then $1=i_1<i_2<\cdots<i_\ell\leq k$. Suppose $n$ is contained in $T_j$. Let $T_1', T_2',\dots, T_k'$ be the arrangement of the trees $T_0,T_1,\dots,\widehat{T_j}, \dots, T_k$ such that the word $r_1'r_2'\cdots r_k'$ of the roots of $T_1',\dots,T_k'$ is order-isomorphic to the word $r_1r_2\cdots r_k$. Notice that $r_{i_1}', r_{i_2}', \dots,r_{i_\ell}'$ are the left-to-right maxima of $r_1'r_2'\cdots r_k'$. Thus the following map is invertible: \begin{equation} \label{eq:5} (T_1', T_2',\dots, T_k') \mapsto \{[T_{i_1}', \dots,T_{i_2-1}'], [T_{i_2}', \dots, T_{i_3-1}'], \dots, [T_{i_\ell}', \dots, T_k']\}. \end{equation} Now we apply the inverse map $\alpha^{-1}$ of the bijection $\alpha$ to each cycle in \eqref{eq:5}. Then we get a set of rooted trees. Together with $T_j$, we obtain a forest on $[n]$. Since $T_j$ is the tree containing $n$, we can recover the original tree $T$ from the forest. This gives a bijection between the two sets in the theorem. \end{proof} The proof of Theorem~\ref{thm:children}, in fact, gives a generalization as follows. \begin{cor} Let $A(n,\ell,k)$ denote the number of rooted trees on $[n]$ such that the root has $\ell$ smaller children and the minimal decreasing subtree has $k+1$ vertices.
Let $B(n,\ell,k)$ denote the number of forests on $[n]$ with $\ell+1$ trees such that $n$ is a leaf, and the sum of $|\MD(T)|$ for all trees $T$ in the forest except the one containing $n$ is equal to $k$. Then \[ A(n,\ell,k) = B(n,\ell,k). \] \end{cor} \begin{proof} This can be checked by the following observation. Consider a cycle $C$ in \eqref{eq:5}, and $T=\alpha^{-1}(C)$. Then $|\MD(T)|$ is the number of trees in $C$. Thus the sum of $|\MD(T)|$ for all cycles $C$ in \eqref{eq:5} is $k$. \end{proof} \section*{Acknowledgement} The author would like to thank Dennis Stanton for helpful discussion and comments, especially for the idea in the proof using generating functions in Section~\ref{sec:some-properties-tt_n}.
\section{Introduction} Stochastic simulation is used to characterize the behavior of complex systems that are driven by random input processes. By ``input process'' we mean a sequence of one or more random variables with a fully specified joint distribution. In this paper we consider independent and identically distributed (i.i.d.)\ input processes that are themselves mutually independent, which means that the input processes can be fully specified by their individual marginal distributions. The distributions of these input processes are often estimated from real-world data. Thus, a complete statistical characterization of stochastic system performance requires quantifying both simulation and input estimation errors.\footnote{As with any mathematical or computer model, simulations are also subject to a host of non-statistical errors in abstraction; these are not considered in this paper.} There are robust methods that are adequate for quantifying simulation error for many practical problems. However, the impact of input estimation error (which we call ``input uncertainty'') can overwhelm the simulation error, as demonstrated in \cite{Barton_Schruben_2001}; ignoring it may lead to unfounded confidence in the simulation assessment of system performance, which could be the basis for critical and expensive decisions. What we call input and simulation uncertainty are also known as epistemic and aleatory uncertainty, respectively; see for instance \cite{Kleijnen_2008}. Our method accounts for both sources of error. In this paper we address problems with parametric input distributions that are mutually independent with parameters that are estimated from a finite sample of real-world data. Of course, there exist practical problems in which the input processes are not independent, and there may also be significant uncertainty about the correct parametric distribution as well as its parameter values.
Nevertheless, the case of i.i.d.\ input processes represented by a parametric distribution is prevalent in practice, making our contribution a useful step. We build on \cite{barton_nelson_xie_2011}, which proposed a metamodel-assisted bootstrapping approach that forms a confidence interval (CI) to account for the impact of input uncertainty when estimating the system's mean performance. In that paper, bootstrap resampling of the real-world data was used to approximate the input uncertainty, while a metamodel predicted the simulation's mean response at different parameter settings corresponding to bootstrap resampled data sets. In a metamodel-assisted bootstrapping framework there are three types of error: the input estimation error, the simulation estimation error and the error in the metamodel itself. The latter two types of error are not easily separated, so we call their combined effect ``metamodel uncertainty.'' \cite{barton_nelson_xie_2011} showed that metamodel uncertainty can be ignored when the simulation budget is not tight and an appropriate type of metamodel and experiment design are used; they provided a follow-up test to ensure that the metamodel fit is close enough. In this setting their method yields a valid CI. However, if the true mean response surface is complex, especially for high-dimensional problems (i.e., many input distributions), and the computational budget is tight, then the impact of metamodel uncertainty can no longer be ignored without underestimating the error, which manifests itself in a CI that is too short. Computationally intensive stochastic simulations are the norm in a number of fields: spatial stochastic simulations, e.g., of oil reservoirs, can take hours for a single run, and depend on many stochastic parameters (\citeauthor{Bangerth_2006}~\citeyear{Bangerth_2006}, \citeauthor{Wang_2012}~\citeyear{Wang_2012}).
Simulations of semiconductor manufacturing (\citeauthor{fowler2004grand}~\citeyear{fowler2004grand}) and biological systems (\citeauthor{Ghosh_2011}~\citeyear{Ghosh_2011}, \citeauthor{Kastner_2002}~\citeyear{Kastner_2002}) can be similarly intensive. This paper is a significant enhancement of \cite{barton_nelson_xie_2011}. Here we propose an approach to form an interval estimate that accounts for \textit{both} input and metamodel uncertainty in estimating a stochastic system's mean performance. When there is little metamodel uncertainty the new method performs like \cite{barton_nelson_xie_2011}, but it does not experience a degradation in coverage when metamodel uncertainty is significant. When the statistical error measured by our CI is too large for the estimate to be useful, then the decision maker may want to know how the error can be reduced. Our approach leads naturally to a measure of the relative contribution of input to overall uncertainty that indicates whether the error can be reduced by an additional computational investment. The next section describes other approaches to attack the input uncertainty problem and contrasts them with our method. This is followed in Section~\ref{sec:problem_Description} by a formal description of the problem of interest and a brief review of the metamodel-assisted bootstrapping approach in Section~\ref{sec:MMABS}. In Section 5 we provide an algorithm to build an interval estimator accounting for both input and metamodel uncertainty, and give a method to estimate their relative contributions. We then report results from an empirical study of a difficult problem in Section 6 and conclude the paper in Section 7. All proofs are in the Appendix. \section{Background} Various approaches to account for input uncertainty have been proposed. 
The Bayesian methods use the posterior distributions of the inputs given the real-world data to quantify the input distribution uncertainty, and the impact on the system mean is estimated by drawing samples from these posterior distributions and running simulations at each sample point (\citeauthor{Chick_2001}~\citeyear{Chick_2001}; \citeauthor{Chick_Ng_2002}~\citeyear{Chick_Ng_2002}; \citeauthor{Zouaoui_Wilson_2003}~\citeyear{Zouaoui_Wilson_2003}, \citeyear{Zouaoui_Wilson_2004}). This could be computationally expensive when the time for each simulation run is significant because simulations need to be run at a large number of posterior sample points. In addition, for each input prior distribution we need to derive a corresponding posterior distribution which might be nonstandard and complex. A second approach is based on direct bootstrapping; it quantifies the impact of input uncertainty using bootstrap resampling of the input data, and runs simulations at each bootstrap resample point to estimate the impact on the system mean (\citeauthor{Barton_Schruben_2001}~\citeyear{Barton_Schruben_1993}, \citeyear{Barton_Schruben_2001}; \citeauthor{Barton_2007}~\citeyear{Barton_2007}; \citeauthor{Cheng_Holland_1997}~\citeyear{Cheng_Holland_1997}). Compared with the Bayesian approach, the direct bootstrap can be adapted to any input process without additional analysis and it is suitable for complex and nonstandard input distributions. However, similar to the Bayesian approach, this method also runs simulations at each resample point. Since the number of bootstrap resample points to construct a CI is recommended to be a few thousand, the direct bootstrapping method is also computationally expensive. More subtly, since the statistic that is bootstrapped is the random output of a simulation it is not a smooth function of the input data; this violates the asymptotic validity of the bootstrap.
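The direct bootstrapping scheme just described can be sketched in a few lines. The following Python sketch is illustrative only: \texttt{simulate} is a hypothetical stand-in for a real stochastic simulation, and a single exponential input model (with rate estimated by its MLE) is assumed.

```python
import random
import statistics

def simulate(rate, reps=50, seed=0):
    # Hypothetical stand-in for a stochastic simulation whose mean
    # response depends on the input-model parameter `rate`.
    rng = random.Random(seed)
    return statistics.fmean(rng.expovariate(rate) for _ in range(reps))

def direct_bootstrap_ci(data, b=200, alpha=0.1, seed=1):
    """Percentile CI from direct bootstrapping: resample the real-world
    input data, re-fit the input model, and re-run the simulation at
    every resample point (the computationally expensive step)."""
    rng = random.Random(seed)
    estimates = []
    for i in range(b):
        resample = rng.choices(data, k=len(data))    # bootstrap the data
        rate_hat = 1.0 / statistics.fmean(resample)  # MLE for an exponential input
        estimates.append(simulate(rate_hat, seed=i))
    estimates.sort()
    return (estimates[int(b * alpha / 2)],
            estimates[int(b * (1 - alpha / 2)) - 1])
```

Because every bootstrap replicate triggers fresh simulation runs, the cost grows linearly in the number of resample points, which motivates the metamodel-based alternatives discussed next.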
Differing from the previous two approaches that estimate the system mean response at each sample point by running simulations, a third approach introduces an equation-based metamodel of the mean response (see \cite{Cheng_Holland_2004} and references therein). Specifically, it assumes that the parametric families of the inputs are known, uses maximum likelihood estimators (MLEs) of the unknown parameters, and represents input-parameter uncertainty by the large-sample normal distribution of the MLEs. This uncertainty is propagated to the output mean by a linear function of the parameters that is based on a Taylor series approximation. Since the metamodel can be constructed using simulation results from a small number of runs, this method does not need substantial computational effort. However, a metamodel based on a locally linear approximation is only appropriate when there is a large quantity of real-world data so that the MLEs locate in a small neighborhood of the true parameters with high probability; it is not suitable when the underlying response surface is highly non-linear and only a modest quantity of real-world data are available. In addition, the asymptotic normal approximation for the input distribution parameters can be poor with sample sizes encountered in some applications. The metamodel-assisted bootstrapping approach introduced by \cite{barton_nelson_xie_2011} addresses some of the shortcomings in the prior work. Compared with \cite{Cheng_Holland_2004}, the bootstrap provides a more accurate approximation of the input uncertainty than the asymptotic normal distribution of the parameter estimates in many situations \citep{Horowitz_2001}. Further, the use of a general-form metamodel provides higher fidelity than a locally linear approximation. 
Compared with Bayesian and direct bootstrap approaches, the use of a metamodel reduces the impact of simulation error on the accuracy of CIs and reduces the computational effort because it does not run simulations at a large number of sampled or resampled points; instead, an equation-based metamodel is constructed based on a designed experiment at a smaller number of parameter settings. In addition, employing a metamodel makes the bootstrap statistic a smooth function of the input data so that the asymptotic validity concerns faced by the direct bootstrap method disappear. The numerical results in \cite{barton_nelson_xie_2011} provide evidence that metamodel-assisted bootstrapping is effective and superior to competitors \textit{when there is little metamodel uncertainty}, motivating its extension in this paper to more general and complex input-uncertainty problems in which the impact of metamodel uncertainty can no longer be ignored. The end result is a robust method for quantifying statistical uncertainty. \section{Problem Description} \label{sec:problem_Description} To make the description of the input uncertainty problem and our solution to it easier to follow we will use the queueing network in Figure~\ref{fig:Figure1_3} as an example and return to it in our empirical study in Section~\ref{sec:empirical}. Consider estimating the steady state expected number of customers in this network. The interarrival times follow a gamma distribution, $A\sim \mbox{gamma}(\alpha_A,\beta_A)$, and the service times at the $i$th station also follow a gamma distribution, $S_i\sim\mbox{gamma}(\alpha_{S_i},\beta_{S_i})$. Customers finishing service at stations $1,2,3$ must make decisions about their next station. These routing decisions follow Bernoulli distributions $P_i\sim\mbox{Ber}(p_i), i=1,2,3$. The parameters of the input distributions, $\alpha_A,\beta_A$, $\{(\alpha_{S_i},\beta_{S_i}),i=1,2,3,4\}$ and $\{p_i,i=1,2,3\}$ are all unknown and estimated from real-world data. 
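For inputs like those in the network above, the unknown parameters would be estimated from the real-world samples. A minimal Python sketch, using method-of-moments estimates for the gamma inputs (a simple stand-in for the MLEs, whose shape estimate has no closed form) and the sample proportion for the Bernoulli routing probabilities:

```python
import statistics

def fit_gamma_moments(sample):
    """Method-of-moments estimates (shape, scale) for a gamma(alpha, beta)
    input model with mean alpha*beta and variance alpha*beta**2."""
    m = statistics.fmean(sample)
    v = statistics.variance(sample)
    return m * m / v, v / m          # (alpha_hat, beta_hat)

def fit_bernoulli(sample):
    """MLE of p for a Ber(p) routing decision: the sample proportion."""
    return sum(sample) / len(sample)
```

Plugging such estimates into the simulation yields the mean response at the estimated parameters rather than at the true ones; the gap between the two is exactly the input uncertainty to be quantified.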
Notice that the inputs include both continuous and discrete distributions. Our goal is to build a CI that covers the steady-state expected number of customers in the system when the input parameters assume their true but unknown values. We assume that at these ``true values'' the system is in fact stable, and if we have enough real-world data (which we may not) then the simulation with estimated parameters will also be stable. \begin{figure*}[tb] \vspace{0.2in} { \centering \includegraphics[scale=0.3]{Figure1_3.png} \vspace{-0.2in} \caption{Queueing network example. \label{fig:Figure1_3}} } \end{figure*} More generally, the stochastic simulation output is a function of random numbers and $L$ independent input distributions $F\equiv\{F_1,F_2,\ldots,F_L\}$. For notation simplification, we do not explicitly include the random numbers. The output from the $j$th replication of a simulation with input distribution $F$ can be written as \begin{equation} Y_j(F)=\mu(F)+\epsilon_j(F) \label{eq.Y_F} \end{equation} where $\mu(F) = {\rm E}[Y_j(F)]$ denotes the unknown output mean and $\epsilon_j(F)$ represents the simulation error with mean zero. Notice that the simulation output depends on the choice of input distributions. The true ``correct'' input distributions, denoted by $F^c\equiv\{F^c_1,F^c_2,\ldots,F^c_L\}$, are unknown and are estimated from a finite sample of real-world data. Our goal is to quantify the impact of the statistical error by finding a $(1-\alpha)100\%$ CI $[Q_L,Q_U]$ such that \begin{equation} \mbox{Pr}\{\mu(F^c)\in [Q_L,Q_U]\} = 1-\alpha. \label{eq.CI_1} \end{equation} This is a relatively general statement of the problem which could encompass multivariate input distributions (i.e., if one or more of $F_\ell^c$ are multivariate) and also unknown distribution families. However, in this paper we assume that the input distributions are univariate, the families are known, but the parameter values are not. 
Therefore, the input distributions $F$ can be completely characterized by the collection of parameters denoted by ${\bf x}$ with dimension $d$. We will define what we mean by ``parameters'' below. With a slight abuse of notation, we rewrite Equation~(\ref{eq.Y_F}) as \begin{equation} Y_j(\mathbf{x})=\mu(\mathbf{x})+\epsilon_j(\mathbf{x}) \label{eq.Y_x} \end{equation} where $\mathbf{x}\in \Psi$ and $\Psi \equiv\{{\bf x}\in\Re^d{:} \mbox{ the random variable } Y({\bf x}) \mbox{ is defined and $\mu({\bf x})$ is finite}\}$ denotes the region of interest. The collection of true parameters is denoted by ${\bf x}_c$ and is assumed to be in the interior of $\Psi$. We also assume $\mu({\bf x})$ is continuous for ${\bf x}\in\Psi$. Thus, our goal can be restated as finding a $(1-\alpha)100\%$ CI such that \begin{equation} \mbox{Pr}\{\mu({\bf x}_c)\in [Q_L,Q_U]\} = 1-\alpha. \label{eq.CI_2} \end{equation} Let $m_\ell$ denote the number of i.i.d.\ real-world observations available from the $\ell$th input process $\mathbf{Z}_{\ell,m_\ell}\equiv\left\{Z_{\ell,1},Z_{\ell,2},\ldots,Z_{\ell,m_\ell} \right\}$ with $Z_{\ell,i}\stackrel{i.i.d.}\sim F^c_\ell$, $i=1,2,\ldots,m_\ell$. Let $\mathbf{Z}_{\mathbf{m}}=\{\mathbf{Z}_{\ell,m_\ell}, \ell=1,2,\ldots,L\}$ be the collection of samples from all $L$ input distributions in $F^c$, where $\mathbf{m}=(m_1,m_2,\ldots,m_L)$. The real-world data are a particular realization of $\mathbf{Z}_{\mathbf{m}}$, say $\mathbf{z}_{\mathbf{m}}^{(0)}$. Since the unknown input distributions are estimated from $\mathbf{z}_{\mathbf{m}}^{(0)}$, we assume the parameters are functions of $\mathbf{Z}_{\mathbf{m}}$ denoted by $\mathbf{X}_{\mathbf{m}}=\mathbf{X}(\mathbf{Z}_{\mathbf{m}})$. Therefore, input uncertainty is fully captured by the sampling distribution of $\mu(\mathbf{X}_{\mathbf{m}})$. Since the underlying response surface $\mu(\cdot)$ is unknown, we approximate it by a metamodel fitted to a set of stochastic simulation outputs. 
Let $\widehat{\mu}(\cdot)$ denote the metamodel. Fitting the metamodel introduces a source of uncertainty in propagating the sampling distribution of $\mathbf{X}_{\mathbf{m}}$ to the output mean: metamodel uncertainty. \textit{The contributions of this paper are to construct an interval estimator that accounts for both input and metamodel uncertainty, and to quantify the relative impact of each contributor. } \section{Metamodel-Assisted Bootstrapping Framework} \label{sec:MMABS} \cite{barton_nelson_xie_2011} introduced the metamodel-assisted bootstrapping approach to quantify input uncertainty. We review it here. The representation of the $L$ input distributions plays an important role in the implementation of metamodel-assisted bootstrapping. Since this paper focuses on problems with independent parametric distributions having unknown parameters, $F$ can be uniquely characterized by the corresponding collection of each distribution's parameters or, in many cases, its moments. The $\ell$th input distribution includes $h_\ell$ unknown parameters. Suppose that this $h_\ell$-parameter distribution is uniquely specified by its first (finite) $h_\ell$ moments, which is true for the distributions that are most often used in stochastic simulation. The moments are chosen as the independent variables for the metamodel because when they are close, the corresponding distributions will be similar and therefore generate similar outputs. An extended argument for choosing moments instead of the natural distribution parameters as independent variables can be found in \cite{barton_nelson_xie_2011}. This characterization will not work for all distributions, including some so-called ``heavy-tailed'' distributions. Let $\mathbf{x}_{[\ell]}$ denote an $h_\ell\times 1$ vector of the first $h_\ell$ moments for the $\ell$th input distribution \textcolor{black}{and} $d=\sum_{\ell=1}^L h_\ell$. 
By stacking ${\bf x}_{[\ell]}$ with $\ell=1,2,\ldots,L$ together, we have a $d\times 1$ dimensional independent variable ${\bf x}$. Notice that $F$ is completely characterized by the collection of moments ${\bf x}$, and from here on $\mu(\cdot)$ will be a function of distribution moments organized in this way. Denote the true moments by ${\bf x}_c$. For the queueing network in Figure~\ref{fig:Figure1_3}, there are $L=8$ input distributions including arrivals, services at stations $1,2,3,4$ and the three routing processes. Thus, $h_1=h_2=h_3=h_4=h_5=2$ and $h_6=h_7=h_8=1$. The distributions for the three routing processes can be completely specified by their corresponding means. The distributions for the arrival and service processes can be uniquely characterized by the corresponding first two standardized moments: mean and standard deviation. To avoid a scaling effect, the standard deviation is used instead of the second raw moment. For the $\ell$th process, let $\tau_\ell$ denote the first moment (mean) and $\varsigma_\ell$ denote the standard deviation. Then ${\bf x}=(\tau_1,\varsigma_1,\tau_2,\varsigma_2,\tau_3,\varsigma_3,\tau_4,\varsigma_4, \tau_5,\varsigma_5,\tau_6,\tau_7,\tau_8)^\top$ with $d=13$. The true moments ${\bf x}_c$ are unknown and estimated based on a finite sample $\mathbf{Z}_{\mathbf{m}}$ from $F^c$. As noted above, $\mathbf{X}_{\mathbf{m}}$ is a $d\times 1$ dimensional moment estimator that is a function of $\mathbf{Z}_{\mathbf{m}}$ written as $\mathbf{X}_{\mathbf{m}}=\mathbf{X}(\mathbf{Z}_{\mathbf{m}})$. Specifically, $\mathbf{X}_{\ell,m_\ell}=\mathbf{X}_\ell(\mathbf{Z}_{\ell,m_\ell})$ and $\mathbf{X}_{\mathbf{m}}^T=(\mathbf{X}_{1,m_1}^T,\mathbf{X}_{2,m_2}^T,\ldots, \mathbf{X}_{L,m_L}^T)$. Let $F_{\mathbf{X}_{\mathbf{m}}}^c$ represent the true, unknown distribution of $\mathbf{X}_{\mathbf{m}}$. 
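For this example, assembling the $d=13$ moment vector from data can be sketched as follows; the synthetic samples, and the parameter values used to generate them, are stand-ins for the real-world observations $\mathbf{z}_{\mathbf{m}}^{(0)}$.

```python
import random
import statistics

rng = random.Random(7)

# Synthetic "real-world" samples standing in for z_m^(0); the generating
# parameters are hypothetical and used only to produce illustrative data.
arrival_data = [rng.gammavariate(2.0, 0.5) for _ in range(500)]
service_data = [[rng.gammavariate(2.0, 0.4) for _ in range(500)] for _ in range(4)]
routing_data = [[1 if rng.random() < p else 0 for _ in range(500)]
                for p in (0.3, 0.5, 0.7)]

def moment_vector(arrivals, services, routings):
    """Stack (mean, standard deviation) for the arrival and four service
    processes, then the mean of each routing process: the d = 13 vector x."""
    x = [statistics.mean(arrivals), statistics.stdev(arrivals)]
    for s in services:
        x += [statistics.mean(s), statistics.stdev(s)]
    for r in routings:
        x.append(statistics.mean(r))
    return x

x_hat = moment_vector(arrival_data, service_data, routing_data)
```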
Then ``input uncertainty'' refers to the distribution of $\mu(\mathbf{X}_{\mathbf{m}})$ with $\mathbf{X}_{\mathbf{m}}\sim F_{\mathbf{X}_{\mathbf{m}}}^c$. Given a finite sample of real-world data $\mathbf{z}_{\mathbf{m}}^{(0)}$, \cite{barton_nelson_xie_2011} used bootstrap resampling to approximate $F_{\mathbf{X}_{\mathbf{m}}}^c$ and a metamodel to represent $\mu({\bf x})$. \subsection{Bootstrap Resampling} We use distribution-free bootstrapping, meaning that we resample from the empirical distribution of the data rather than a fitted parametric distribution. \textcolor{black}{Under some regularity conditions, the bootstrap can provide an asymptotically consistent approximation for the sampling distribution of a moment estimator \citep{Shao_1995}. For the general performance of the bootstrap in representing the sampling distribution of an estimator, see \cite{Horowitz_2001}.} Implementation of the bootstrap in metamodel-assisted bootstrapping is as follows. \begin{enumerate} \item Draw $m_\ell$ samples with replacement from $\mathbf{z}_{\ell,m_\ell}^{(0)}$, denoted by $\mathbf{Z}_{\ell,m_\ell}^{(1)}$, and calculate the corresponding $h_\ell\times 1$ vector of bootstrap moment estimates denoted by $\widehat{{\mathbf{X}}}_{\ell,m_\ell}^{(1)}={{\bf X}}_\ell (\mathbf{Z}_{\ell,m_\ell}^{(1)})$ for $\ell=1,2,\ldots,L$. Then stack the results for all $L$ processes to obtain a $d\times 1$ vector $\widehat{{\mathbf{X}}}_{\mathbf{m}}^{(1)}$. \item Repeat the previous step $B$ times to generate $\widehat{{\mathbf{X}}}_{\mathbf{m}}^{(b)},b=1,2,\ldots,B$. \end{enumerate} The bootstrap resampled moments are drawn from the bootstrap distribution denoted by $\widehat{F}_{\mathbf{X}_\mathbf{m}}(\cdot|\mathbf{z}_{\mathbf{m}}^{(0)})$. \textcolor{black}{The most straightforward approach to propagate input uncertainty to the output mean is direct bootstrapping. 
Given a total computational budget of $N$ simulation replications, the system mean response at each of $B$ bootstrap resamples is estimated by the sample mean of $n=\lfloor N/B\rfloor$ replications, denoted by $\bar{Y}(\widehat{{\bf X}}_{\mathbf{m}}^{(b)}) = n^{-1}\sum_{j=1}^n Y_j(\widehat{{\bf X}}_{\bf m}^{(b)}), b=1,2,\ldots,B$. From these mean estimates a CI is formed. Direct bootstrapping consumes the total simulation budget to estimate the simulation mean responses and to form the CI. Thus, for direct bootstrapping the impact of Monte Carlo estimation error on the CI and the impact of simulation estimation error on the mean responses are not separable.} \cite{barton_nelson_xie_2011} assume that there is enough computational budget available so that the difference between $\widehat{\mu}({\bf x})$ and $\mu({\bf x})$ can be ignored; i.e.\ $\widehat{\mu}({\bf x})\approx\mu({\bf x})$. Let $\widehat{\mu}_b\equiv\widehat{\mu}(\widehat{\mathbf{X}}_{\mathbf{m}}^{(b)})$ for $b=1,2,\ldots,B$. \cite{barton_nelson_xie_2011} quantified input uncertainty by a $(1-\alpha)100\%$ bootstrap percentile interval induced by the sorted responses $[Q^*_L,Q^*_U]=[\widehat{\mu}_{(\lceil \frac{\alpha}{2}B\rceil)},\widehat{\mu}_{(\lceil(1-\frac{\alpha}{2})B\rceil)}]$, where $\widehat{\mu}_{(i)}$ is the $i$th smallest value of $\widehat{\mu}_1, \widehat{\mu}_2,\ldots, \widehat{\mu}_B$. Here, the superscript ``*'' indicates that the input distribution is approximated with the bootstrap. This interval provides asymptotically correct coverage when all input distributions meet certain moment conditions and the metamodel is continuously differentiable with nonzero gradient in a neighborhood of $\mathbf{x}_c$ \citep{barton_nelson_xie_2011}. Specifically, they establish the iterated limit \[ \lim_{m\rightarrow \infty}\lim_{B\rightarrow \infty}\mbox{Pr}\{\mu({\bf x}_c)\in [Q^*_L,Q^*_U]\}=1-\alpha \] where as $m\rightarrow \infty$ we have $m_\ell/m \rightarrow 1$, for $\ell=1,2,\ldots,L$. 
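The resampling and percentile-interval mechanics can be sketched for a single gamma input process characterized by its mean and standard deviation; here the function mu_hat is a hypothetical stand-in for a fitted metamodel $\widehat{\mu}(\cdot)$, since only the bootstrap steps are being illustrated.

```python
import math
import random

rng = random.Random(123)

# One real-world input sample z_m^(0); the gamma parameters are hypothetical.
z0 = [rng.gammavariate(2.0, 0.5) for _ in range(200)]

def moments(sample):
    """Return (mean, sample standard deviation) of a sample."""
    m = sum(sample) / len(sample)
    s = math.sqrt(sum((v - m) ** 2 for v in sample) / (len(sample) - 1))
    return (m, s)

def mu_hat(x):
    # Stand-in for the fitted metamodel; any smooth function of the moments
    # suffices to illustrate the percentile-interval mechanics.
    mean, sd = x
    return mean / max(1e-8, 1.0 - 0.5 * mean) + sd

B, alpha = 1000, 0.05
mu_b = []
for _ in range(B):
    resample = rng.choices(z0, k=len(z0))      # bootstrap: sample with replacement
    mu_b.append(mu_hat(moments(resample)))
mu_b.sort()
Q_L = mu_b[math.ceil(alpha / 2 * B) - 1]       # bootstrap percentile interval
Q_U = mu_b[math.ceil((1 - alpha / 2) * B) - 1]
```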
However, with a tight computational budget for building the metamodel we cannot guarantee that $\widehat{\mu}({\bf x})\approx\mu({\bf x})$ holds for arbitrarily complex systems, especially for problems with many input distributions. Thus, we desire an interval estimator that accounts for both input and metamodel uncertainty. Stochastic kriging (SK), introduced by \cite{ankenman_nelson_staum_2010}, facilitates this. SK is flexible. Unlike the locally linear approximation in \cite{Cheng_Holland_2004}, it does not require any strong assumption about the form of the underlying true response surface $\mu(\cdot)$. Based on our previous study \citep{xie_nelson_staum_2010}, a SK metamodel fit to a small number of simulation runs can provide good global predictions and also a characterization of metamodel uncertainty for a wide variety of examples. The characterization of metamodel uncertainty is a key contribution of the new method presented here. \textcolor{black}{\textit{Compared with direct bootstrapping, metamodel-assisted bootstrapping separates the choice of $B$ from the budget $N$ and reduces the influence of simulation estimation error.} Instead of doing simulations at $B$ samples from the bootstrap, we run simulations at well-chosen design points and build an equation-based metamodel $\widehat{\mu}({\bf x})$ to predict the mean response at different input distributions represented by bootstrap resampled moments. Notice that once we have the metamodel, we can use any $B$ we want to control the Monte Carlo estimation error of the interval estimator, even $B>N$. Further, the metamodel can efficiently use the computational budget to reduce the uncertainty introduced when propagating the input uncertainty to the output mean.} \subsection{Stochastic Kriging Metamodel} Kriging is a widely used interpolation method. 
Since the outputs from stochastic simulations include simulation variability that often changes significantly across the design space, SK was introduced to distinguish the uncertainty about the response surface from the simulation error inherent in stochastic simulation output. This section provides a brief review of SK. Suppose that the underlying true (but unknown) response surface can be thought of as a realization of a stationary Gaussian Process (GP). This fiction has been shown to provide a very useful framework for quantifying uncertainty about the unknown surface implied by a deterministic computer experiment and has been successfully employed in a wide variety of applications (see, for instance, \cite{Santer_2003}). SK extends this framework to include the variability of the simulation output $Y$ using the model \begin{equation} \label{eq:sk} Y_j(\mathbf{x})=\mathbf{\beta}_0+W(\mathbf{x})+\epsilon_j(\mathbf{x}). \end{equation} The independent variable ${\bf x}$ is interpreted as a location in space. In this paper, ${\bf x}$ denotes a $d\times 1$ vector of moments that uniquely characterize the input distributions. The variation in the simulation output over the $\mathbf{x}$ space is divided into extrinsic (response-surface) uncertainty $W(\mathbf{x})$ and intrinsic (simulation output) uncertainty $\epsilon_j(\mathbf{x})$. The term ``intrinsic'' uncertainty refers to the variability inherent in the sampling that generates stochastic simulation outputs and ``extrinsic'' uncertainty refers to our lack of knowledge about the response surface. SK uses a mean-zero, second-order stationary GP $W({\bf x})$ to account for the spatial dependence of the response surface. Thus, the uncertainty about the true response surface $\mu({\bf x})$ is represented by a GP $M({\bf x})\equiv\mathbf{\beta}_0+W(\mathbf{x})$ (note that $\beta_0$ can be replaced by a more general trend term $\mathbf{f}({\bf x})^\top \mbox{\boldmath $\beta$}$ without affecting our method). 
For many, but not all, simulation settings the output is an average of a large number of more basic outputs, so a normal approximation can be applied: $\epsilon({\bf x})\sim \mbox{N}(0,\sigma^2_{\epsilon}({\bf x}))$. Of course, normality of the simulation output will not always hold, but could be empirically tested if there is a concern. In SK, the covariance between $W(\mathbf{x})$ and $W(\mathbf{x}^\prime)$ quantifies how knowledge of the surface at some design points affects the prediction of the surface. A parametric form of the spatial covariance, denoted by $\Sigma(\mathbf{x},\mathbf{x}^\prime)=\mbox{Cov}[W(\mathbf{x}),W(\mathbf{x}^\prime)] =\tau^2r(\mathbf{x}-\mathbf{x}^\prime)$, is typically assumed where $\tau^2$ denotes the variance and $r(\cdot)$ is a correlation function that depends only on the distance $\mathbf{x}-\mathbf{x}^\prime$. Based on our previous study \citep{xie_nelson_staum_2010}, we use the product-form Gaussian correlation function \begin{equation} \label{eq:gauss.correlation} r(\mathbf{x}-\mathbf{x}^\prime)= \exp \bigg(-\sum_{j=1}^d \theta_j(x_j-x^\prime_j)^2 \bigg) \end{equation} for the empirical evaluation in Section~\ref{sec:empirical}; however, our results do not require it. Let $\pmb{\theta}=(\theta_1,\theta_2,\ldots,\theta_d)$ represent the correlation parameters; for different correlation functions the dimension of $\pmb{\theta}$ could change. In any event, $M({\bf x})$ can be represented by a Gaussian process $ M({\bf x})\sim \mathrm{GP}(\mathbf{\beta}_0,\tau^2 r(\mathbf{x}-\mathbf{x}^\prime)).$ \begin{sloppypar} To reduce the uncertainty about $\mu({\bf x})$ we choose an experiment design consisting of pairs $\mathcal{D} \equiv\{({\bf x}_i,n_i),i=1,2,\ldots,k\}$ at which to run simulations and collect observations, where $({\bf x}_i,n_i)$ denotes the location and the number of replications, respectively, at the $i$th design point. 
The design that we recommend is described in more detail in the Appendix, but it is not the only design that could be effective. The simulation outputs at $\mathcal{D}$ are $\mathbf{Y}_\mathcal{D}\equiv \left\{ (Y_1(\mathbf{x}_i),Y_2(\mathbf{x}_i),\ldots,Y_{n_i}(\mathbf{x}_i)); i=1,2,\ldots,k \right\}$ and the sample mean at design point ${\bf x}_i$ is $\bar{Y}(\mathbf{x}_i)=\sum_{j=1}^{n_i}Y_j(\mathbf{x}_i)/n_i$. Let the sample means at all $k$ design points be $\bar{\mathbf{Y}}_\mathcal{D}=(\bar{Y}(\mathbf{x}_1),\bar{Y}(\mathbf{x}_2), \ldots,\bar{Y}(\mathbf{x}_k))^T$. Since the use of common random numbers is detrimental to prediction (as opposed to optimization; see \cite{chen_ankenman_nelson_2010}), the simulations at different design points are independent and the variance of $\bar{\mathbf{Y}}_\mathcal{D}$ is represented by a $k\times k$ diagonal matrix $C=\mbox{diag}\left\{\sigma^2_{\epsilon}({\bf x}_1)/n_1,\sigma^2_{\epsilon}({\bf x}_2)/n_2, \ldots,\sigma^2_{\epsilon}({\bf x}_k)/n_k \right\}$. \end{sloppypar} Let $\Sigma$ be the $k\times k$ spatial covariance matrix of the design points and let $\Sigma({\bf x},\cdot)$ be the $k\times 1$ spatial covariance vector between each design point and a fixed prediction point ${\bf x}$. 
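For concreteness, the matrices $\Sigma$ and $C$ can be assembled as follows for a toy design; the design points, $\tau^2$, $\pmb{\theta}$, replication counts and intrinsic-variance estimates are all illustrative values.

```python
import math

def gauss_corr(x, xp, theta):
    """Product-form Gaussian correlation r(x - x')."""
    return math.exp(-sum(t * (a - b) ** 2 for t, a, b in zip(theta, x, xp)))

def spatial_cov(design, tau2, theta):
    """k x k spatial covariance matrix Sigma over the design points."""
    return [[tau2 * gauss_corr(xi, xj, theta) for xj in design] for xi in design]

# Toy design: k = 3 points in d = 2 dimensions with illustrative parameters.
design = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
tau2, theta = 1.5, (1.0, 2.0)
Sigma = spatial_cov(design, tau2, theta)

# Diagonal matrix C of intrinsic variances sigma_eps^2(x_i) / n_i.
n_reps = [10, 10, 10]
var_eps = [0.2, 0.3, 0.25]
C = [[var_eps[i] / n_reps[i] if i == j else 0.0 for j in range(3)]
     for i in range(3)]
```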
If the parameters $(\tau^2,\pmb{\theta}, C)$ are known, then the metamodel uncertainty can be characterized by a refined GP $M_p({\bf x})$ that denotes the conditional distribution of $M({\bf x})$ given all simulation outputs, \begin{equation} M_p({\bf x})\sim \mathrm{GP}(m_{p}({\bf x}),\sigma^2_{p}({\bf x})) \label{eq.posterior} \end{equation} where $m_p(\cdot)$ is the minimum mean squared error (MSE) linear unbiased predictor \begin{equation} m_{p}({\bf x})=\widehat{\beta}_0 +\Sigma({\bf x},\cdot)^\top(\Sigma+C)^{-1} (\bar{\mathbf{Y}}_\mathcal{D}-\widehat{\beta}_0\cdot 1_{k\times 1}), \label{eq.predictor1} \end{equation} and the corresponding variance is \begin{equation} \label{eq.MSE1} \sigma^2_{p}({\bf x}) = \tau^2-\Sigma({\bf x},\cdot)^\top(\Sigma+C)^{-1}\Sigma({\bf x},\cdot) +\mathbf{\eta}^\top[1_{k\times 1}^\top(\Sigma+C)^{-1}1_{k\times 1} ]^{-1}\mathbf{\eta} \end{equation} where $\widehat{\beta}_0=[1_{k\times 1}^\top(\Sigma+C)^{-1}1_{k\times 1}]^{-1}1_{k\times 1}^\top(\Sigma+C)^{-1}\bar{\mathbf{Y}}_\mathcal{D}$ and $\mathbf{\eta}=1-1_{k\times 1}^\top(\Sigma+C)^{-1}\Sigma({\bf x},\cdot)$ \citep{ankenman_nelson_staum_2010}. With the parameters $(\tau^2,\pmb{\theta},{C})$ known, $M_p({\bf x})$ depends on the simulation outputs only through $\bar{\mathbf{Y}}_\mathcal{D}$. Thus, $M_p({\bf x})$ is a random \textcolor{black}{function} having the conditional distribution of $M({\bf x})$ given $\bar{\mathbf{Y}}_\mathcal{D}$. \textcolor{black}{Notice that $\sigma^2_p(\mathbf{x})$ reflects both metamodel and simulation error, including the constant term $\widehat{\beta_0}$, with the intrinsic simulation sampling error affecting $\sigma^2_p(\mathbf{x})$ through the matrix $C$. 
} Since in reality the spatial correlation parameters $\tau^2$ and $\pmb{\theta}$ are unknown, MLEs are typically used for prediction \textcolor{black}{with the log-likelihood function \begin{equation} \ell(\beta_0, \tau^2,\pmb{\theta}) = -\ln[(2\pi)^{k/2}]-\frac{1}{2}\ln[|\Sigma+C|] -\frac{1}{2}(\bar{\mathbf{Y}}_{\mathcal{D}}-\beta_0\cdot 1_{k\times 1})^\top [\Sigma+C]^{-1}(\bar{\mathbf{Y}}_{\mathcal{D}}-\beta_0\cdot 1_{k\times 1}) \label{eq.loglikelihood} \end{equation} where $\Sigma$ is a function of $\tau^2$ and $\pmb{\theta}$.} The sample variance is used as an estimate for the simulation variance at design points $C$. By plugging $(\widehat{\beta}_0, \widehat{\tau}^2, \widehat{\pmb{\theta}}, \widehat{C})$ into Equations~(\ref{eq.predictor1}) and (\ref{eq.MSE1}) we can obtain the estimated mean $\widehat{m}_p(\mathbf{x})$ and variance $\widehat{\sigma}_p^2(\mathbf{x})$. Thus, the metamodel we use is $\widehat{\mu}({\bf x})=\widehat{m}_p({\bf x})$ with marginal variance estimated by $\widehat{\sigma}^2_p({\bf x})$. \textcolor{black}{\cite{ankenman_nelson_staum_2010} demonstrate that $\widehat{m}_p({\bf x})$ is still an unbiased predictor even with the plug-in estimator $\widehat{C}$, and further that the variance inflation of $\sigma^2_p({\bf x})$ caused by using $\widehat{C}$ is typically small. We performed an empirical study whose results indicate that if we use an adequate experiment design, such as the one-stage space-filling design used in this paper, then the performance of metamodel-assisted bootstrapping is also not sensitive to the estimation error in $\widehat{\tau}^2$ and $\widehat{\pmb{\theta}}$; see the Appendix. However, it is known that the estimator~(\ref{eq.MSE1}) with plug-in MLEs may sometimes underestimate the prediction variance; see \cite{DenHertog_2006}.} In the derivations that follow we will assume that the parameters $(\tau^2,\pmb{\theta},C)$ are known. 
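Treating $(\tau^2,\pmb{\theta},C)$ as known, the predictor $m_p(\mathbf{x})$ in Equation~(\ref{eq.predictor1}) and the variance $\sigma^2_p(\mathbf{x})$ in Equation~(\ref{eq.MSE1}) can be computed directly, as the following sketch on a toy one-dimensional design with illustrative values shows. With $C=0$ the predictor interpolates the observed sample means exactly and the variance vanishes at the design points.

```python
import math

def solve(A, b):
    """Solve A y = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        y[r] = (M[r][n] - sum(M[r][c] * y[c] for c in range(r + 1, n))) / M[r][r]
    return y

def sk_predict(x, design, ybar, tau2, theta, C):
    """Plug-in SK mean m_p(x) and variance sigma_p^2(x), parameters known."""
    k = len(design)
    corr = lambda a, b: math.exp(-sum(t * (u - v) ** 2 for t, u, v in zip(theta, a, b)))
    SpC = [[tau2 * corr(design[i], design[j]) + C[i][j] for j in range(k)]
           for i in range(k)]
    kx = [tau2 * corr(x, xi) for xi in design]   # Sigma(x, .)
    w = solve(SpC, kx)                           # (Sigma + C)^{-1} Sigma(x, .)
    u = solve(SpC, [1.0] * k)                    # (Sigma + C)^{-1} 1
    v = solve(SpC, ybar)                         # (Sigma + C)^{-1} Ybar
    denom = sum(u)                               # 1^T (Sigma + C)^{-1} 1
    beta0 = sum(v) / denom                       # GLS estimate of beta_0
    m_p = beta0 + sum(wi * (yi - beta0) for wi, yi in zip(w, ybar))
    eta = 1.0 - sum(w)
    s2_p = tau2 - sum(wi * ki for wi, ki in zip(w, kx)) + eta * eta / denom
    return m_p, s2_p

design = [(0.0,), (1.0,), (2.0,)]
ybar = [1.0, 2.0, 1.5]
C0 = [[0.0] * 3 for _ in range(3)]               # no intrinsic noise: interpolation
m0, s0 = sk_predict((1.0,), design, ybar, 1.0, (1.0,), C0)
```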
This is necessary (and common in the kriging literature) because including the effect of parameter estimation is mathematically intractable. To apply the methods in practice (including our empirical study below), we form plug-in estimators by inserting $\widehat{\tau}^2,\widehat{\pmb{\theta}},\widehat{C}$. \section{Confidence Interval and Variance Decomposition} Our approach is to use metamodel-assisted bootstrapping to provide a CI for the true mean performance. To be robust the CI should account for both input and metamodel uncertainty. Since $m_p({\bf x})$ is an unbiased predictor under the Gaussian process assumption, $\sigma^2_p({\bf x})=0$ for all ${\bf x}$ would imply that there is no metamodel uncertainty due either to a finite number of design points ${\bf x}_i$ or finite number of replications $n_i$; that is, $m_p({\bf x}) = \mu({\bf x})$. Unfortunately, with anything short of complete information, there will always be some metamodel uncertainty; and if the budget is tight relative to the complexity of the true response surface, then the effect of metamodel uncertainty could be substantial, resulting in significant undercoverage of the confidence interval of \cite{barton_nelson_xie_2011} as we show in Section~\ref{sec:empirical}. The new interval introduced here does not suffer this degradation, and therefore is robust to the amount of simulation effort that can be expended and can be recommended for general use. \textcolor{black}{The kriging literature is the foundation for our work; see for instance \cite{Santer_2003}. Kriging provides inference about the value of an unknown function $\mu(\cdot)$ at a fixed prediction point ${\bf x}_0$ where the function has not been evaluated based on values of the function at a set of design points. Kriging models uncertainty about the function as a GP $M(\cdot)$ by assuming $\mu(\cdot)$ is a realization of $M(\cdot)$. 
An interval constructed to cover the conditional distribution of $M({\bf x}_0)$ given the values at the design points is often interpreted as a CI for $\mu({\bf x}_0)$ (e.g., \cite{Picheny_2010}). The success of this paradigm is not because the function of interest is actually random---it is not---but because in many problems the conditional GP appears to be a robust characterization of the remaining response-surface uncertainty.} \textcolor{black}{We adopt the kriging paradigm but with two key differences: our prediction point ${\bf x}_c$ is also unknown and must be estimated from real-world data, and our function $\mu(\cdot)$ can only be evaluated in the presence of stochastic simulation noise. Given the simulation outputs $\bar{\mathbf{Y}}_{\mathcal{D}}$, the remaining uncertainty about $\mu(\cdot)$ is characterized by the conditional GP $M_p(\cdot)$. To account for the impact from both input and metamodel uncertainty, we construct an interval $[C_L,C_U]$ covering $M_p({\bf x}_c)$ with probability $(1-\alpha)100\%$; that is, \begin{equation} \mbox{Pr}\{M_p({\bf x}_c)\in [C_L,C_U]\} = 1-\alpha. \label{eq.CI_3} \end{equation} } \textcolor{black}{Since the conditional coverage is $1-\alpha$, the unconditional coverage of $M({\bf x}_c)$ is $1-\alpha$ as well. The revised objective~(\ref{eq.CI_3}) is connected to our objective~(\ref{eq.CI_2}) through the assumption that the function $\mu(\cdot)$ is a realization of the GP $M(\cdot)$. A procedure that delivers an interval satisfying~(\ref{eq.CI_3}) will be a good approximation for a CI procedure satisfying~(\ref{eq.CI_2}) if $M_p(\cdot)$ faithfully represents the remaining uncertainty about $\mu(\cdot)$. This is clearly an approximation because in any real problem $\mu(\cdot)$ is a fixed function, therefore we refer to $[C_L,C_U]$ as an approximation for the CI (ACI).} \begin{comment} Let $\mbox{P}_{\bar{\mathbf{Y}}_{\mathcal{D}}}$ denote the distribution of $\bar{\mathbf{Y}}_{\mathcal{D}}$. 
Since the interval $[C_L,C_U]$ satisfies $\Pr\{M({\bf x}_c)\in[C_L,C_U]|\bar{\mathbf{Y}}_{\mathcal{D}}\}=1-\alpha$, it covers $M({\bf x}_c)$ with probability $(1-\alpha)$ by \begin{equation} \int\Pr\{M({\bf x}_c)\in [C_L,C_U]| \bar{\mathbf{Y}}_{\mathcal{D}}\}d\mbox{P}_{\bar{\mathbf{Y}}_{\mathcal{D}}}=1-\alpha. \nonumber \end{equation} \end{comment} In a practical setting, what is the next step if the \textcolor{black}{interval $[C_L,C_U]$} is so wide that we are uncomfortable making decisions based on estimates with that level of error? We suggest gaining some sense of the relative contribution from each source of uncertainty as a guide toward either running more simulations or collecting more real-world input data or both. For many problems collecting additional input data is not feasible or we would have done so already; in such cases knowing that input uncertainty is substantial and cannot be reduced allows us to exercise caution in how we use the simulation results. In this section, we first present a procedure to build \textcolor{black}{an ACI} that satisfies Equation~(\ref{eq.CI_3}) asymptotically. The asymptotic consistency of this \textcolor{black}{interval} is proved under the assumption that the true response surface is a realization of a GP \textcolor{black}{with all parameters known except $\beta_0$}. Next a variance decomposition is proposed to measure the relative contribution of input uncertainty to overall statistical uncertainty, and we study its asymptotic properties as well. This is a measure of input uncertainty due to \textit{all} input distributions. A method for attributing the input uncertainty to the $L$ distributions is provided by \cite{SongNelson_2013}. 
Finally, we address problems that can arise when the system's mean performance fails to exist, or the system is undefined, for some values of the sample moments, and explain why metamodel-assisted bootstrapping tends to be tolerant of the former situation and can be adjusted for the latter. Assumptions that are needed for the asymptotic analysis are the following: \vspace{12pt} \noindent \textbf{Assumptions:} \begin{enumerate} \item The $\ell$th input distribution is uniquely determined by its first $h_\ell$ moments and it has finite first $4h_\ell$ moments for $\ell=1,2,\ldots,L$. \item We have i.i.d.\ observations $Z_{\ell,1}^{(0)},Z_{\ell,2}^{(0)},\ldots,Z_{\ell,m_\ell}^{(0)}$ from the $\ell$th distribution for $\ell=1,2,\ldots,L$. As $m\rightarrow \infty$, we have $m_\ell/m \rightarrow c_\ell$, $\ell=1,2,\ldots,L$, for a constant $c_\ell>0$. \item The $\epsilon_j(\mathbf{x})\stackrel{i.i.d.}\sim \mbox{N}(0,\sigma^2_{\epsilon}(\mathbf{x}))$ for any ${\bf x}$, and $M(\mathbf{x})$ is a stationary, separable GP with a continuous correlation function satisfying \begin{equation} 1-r(\mathbf{x}-\mathbf{x}^\prime)\leq\frac{c} {|\mbox{log}(\parallel\mathbf{x}-\mathbf{x}^\prime \parallel_2)|^{1+\gamma}} \mbox{ for all } \parallel\mathbf{x}-\mathbf{x}^\prime \parallel_2\leq\delta \label{eq.corrCond} \end{equation} for some $c>0$, $\gamma>0$ and $\delta<1$, where $\parallel\mathbf{x}-\mathbf{x}^\prime \parallel_2 =\sqrt{\sum_{j=1}^d({x}_j-{x}^\prime_j)^2}.$ \item The input processes $Z_{\ell j}^{(0)}$, simulation noise $\epsilon_j({\bf x})$ and GP $M(\mathbf{x})$ are mutually independent and the bootstrap process is independent of all of them. \end{enumerate} \vspace{12pt} Assumptions~1--2 give sufficient conditions for the almost sure (a.s.) consistency of bootstrap moment estimators $\widehat{\mathbf{X}}_{\mathbf{m}}\stackrel{a.s.}\rightarrow{\bf x}_c$ as $m\rightarrow\infty$ (see Lemma~1 in the Appendix). 
Under Assumption~3, a GP $M(\cdot)$ with a correlation function satisfying Condition~(\ref{eq.corrCond}) has continuous sample paths almost surely (\citeauthor{Adler_2010}~\citeyear{Adler_2010}, Theorem 3.4.1). Condition~(\ref{eq.corrCond}) is satisfied by many correlation functions used in practice, and in particular any power exponential correlation function $r({\bf x}-{\bf x}^\prime)= \exp\left(- \sum_{j=1}^d \theta_j|x_j-x_j^\prime|^p \right)$ with $0<p\leq 2$ and $\theta_j>0$ \citep{Santer_2003}. Assumption~4 indicates that input data are collected independently of the simulation model, and that our uncertainty about the mean response surface as represented by $M({\bf x})$ is independent of the stochastic simulation noise (although both can depend on ${\bf x}$). \subsection{ACI Procedure} \label{subSec:procedure} Based on a hierarchical approach, we propose the following procedure to build $(1-\alpha)100\%$ bootstrap percentile ACIs to achieve~(\ref{eq.CI_3}): \begin{enumerate} \item Given real-world data $\mathbf{z}_\mathbf{m}^{(0)}$, choose experiment design $\mathcal{D}=\{({\bf x}_i,n_i), i=1,2,\ldots, k\}$ as described in the Appendix. \item Run simulations at design points to obtain outputs $\mathbf{Y}_\mathcal{D}$. Compute the sample average $\bar{Y}({\bf x}_i)$ and sample variance $S^2({\bf x}_i)$ of the simulation outputs, $i=1,2,\ldots, k$. \textcolor{black}{Fit the SK metamodel parameters $(\beta_0, \tau^2, \pmb{\theta}, C)$} to obtain $\widehat{m}_p({\bf x})$ and $\widehat{{\sigma}}_{p}^2(\mathbf{x})$ using $\left(\bar{Y}({\bf x}_i), S^2({\bf x}_i), {\bf x}_i\right)$, $i=1,2,\ldots, k$. \item For $b = 1 \mbox{ to } B$ \begin{enumerate} \item Generate bootstrap resample $\mathbf{Z}^{(b)}_{\mathbf{m}} \stackrel{i.i.d.}\sim \mathbf{z}^{(0)}_{\mathbf{m}}$ and compute sample moments $\widehat{\mathbf{X}}_{\mathbf{m}}^{(b)}$. \item Let $\widehat{\mu}_b\equiv\widehat{m}_p(\widehat{\mathbf{X}}_{\mathbf{m}}^{(b)})$. 
\item Draw $\widehat{M}_b \sim \mbox{N}\left( \widehat{m}_p ( \widehat{\mathbf{X}}_{\mathbf{m}}^{(b)}), \widehat{\sigma}_{p}^2(\widehat{\mathbf{X}}_{\mathbf{m}}^{(b)}) \right)$. \end{enumerate} \item[] Next $b$ \item Report estimated CI and ACI, respectively, \begin{eqnarray*} \mbox{CI}_0 &\equiv& \left[ \widehat{\mu}_{(\lceil B\frac{\alpha}{2} \rceil)},\widehat{\mu}_{(\lceil B(1-\frac{\alpha}{2})\rceil)} \right] \\ \mbox{CI}_+ &\equiv& \left[ \widehat{M}_{(\lceil B\frac{\alpha}{2} \rceil)},\widehat{M}_{(\lceil B(1-\frac{\alpha}{2})\rceil)}\right] \end{eqnarray*} where $\widehat{\mu}_{(1)} \le \widehat{\mu}_{(2)} \le \cdots \le \widehat{\mu}_{(B)}$ and $\widehat{M}_{(1)} \le \widehat{M}_{(2)} \le \cdots \le \widehat{M}_{(B)}$ are the sorted values. \end{enumerate} In this procedure, Step~1 provides an experiment design to build a SK metamodel, which is central to the metamodel-assisted bootstrapping approach. Since the input uncertainty is quantified with bootstrap resampled moments, we want the metamodel to correctly predict the responses at these points $\widehat{\mathbf{X}}_{\mathbf{m}}\sim \widehat{F}_{\mathbf{X}_{\mathbf{m}}}(\cdot|\mathbf{z}_{\mathbf{m}}^{(0)})$. Thus, the metamodel needs to be accurate and precise in a design space that covers the ``most likely'' bootstrap moment estimates, which can be achieved by the experiment design proposed by \cite{barton_nelson_xie_2011}. Their design is data-driven; specifically, they first find the smallest ellipsoid denoted by $E$ that covers the most likely bootstrap resampled moments. They then generate a space-filling design that covers $E$. This design \textcolor{black}{methodology, which is summarized in the Appendix}, yielded accurate metamodels in the examples they studied. Based on the experiment design provided in Step~1, we run simulations and construct a metamodel in Step~2 \textcolor{black}{by fitting $(\beta_0, \tau^2, \pmb{\theta}, C)$}. 
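The loop in Step~3 and the intervals reported in Step~4 can be sketched as follows for a single gamma input process; the functions mhat_p and shat_p are hypothetical stand-ins for the fitted SK mean $\widehat{m}_p(\cdot)$ and standard deviation $\widehat{\sigma}_p(\cdot)$.

```python
import math
import random

rng = random.Random(2024)

# One real-world input sample z_m^(0); the gamma parameters are hypothetical.
z0 = [rng.gammavariate(2.0, 0.5) for _ in range(150)]

def moments(sample):
    m = sum(sample) / len(sample)
    s = math.sqrt(sum((v - m) ** 2 for v in sample) / (len(sample) - 1))
    return (m, s)

# Stand-ins for the fitted SK mean and standard deviation; a real
# implementation would evaluate the kriging predictor of Section 3.2.
def mhat_p(x):
    return x[0] / max(1e-8, 1.0 - 0.5 * x[0]) + x[1]

def shat_p(x):
    return 0.2 + 0.05 * abs(x[0] - 1.0)          # metamodel std dev, never zero

B, alpha = 2000, 0.05
mu_b, M_b = [], []
for _ in range(B):                               # Step 3 of the ACI procedure
    resample = rng.choices(z0, k=len(z0))        # 3(a): bootstrap moments
    xb = moments(resample)
    mu_b.append(mhat_p(xb))                      # 3(b): input uncertainty only
    M_b.append(rng.gauss(mhat_p(xb), shat_p(xb)))  # 3(c): + metamodel uncertainty
mu_b.sort()
M_b.sort()
lo = math.ceil(alpha / 2 * B) - 1
hi = math.ceil((1 - alpha / 2) * B) - 1
CI0 = (mu_b[lo], mu_b[hi])                       # input uncertainty only
CIp = (M_b[lo], M_b[hi])                         # input + metamodel uncertainty
```

Because shat_p is bounded away from zero here, CI$_+$ comes out wider than CI$_0$; as the metamodel uncertainty shrinks the two intervals coincide.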
Given the metamodel, we predict the simulation's mean responses at different input settings corresponding to bootstrap resampled moments and construct interval estimators as shown in Step~3. Notice that Step~3(a) accounts for the input uncertainty and Step~3(c) accounts for the input and metamodel uncertainty. Thus, this procedure provides two types of intervals: \begin{itemize} \item $\mbox{CI}_0$, proposed in \cite{barton_nelson_xie_2011}, returns an estimate of $[Q_L,Q_U]$ in Equation~(\ref{eq.CI_2}) by assuming $\widehat{m}_p({\bf x}) = \mu({\bf x})$; that is, it only accounts for input uncertainty and will be in error if there is substantial metamodel uncertainty. \item $\mbox{CI}_+$ returns an estimate of $[C_L,C_U]$ in Equation~(\ref{eq.CI_3}). This ACI accounts for both input and metamodel uncertainty. \end{itemize} As the metamodel uncertainty decreases, $\mbox{CI}_0$ and $\mbox{CI}_+$ become closer and closer to each other. Before evaluating the finite-sample performance of $\mbox{CI}_+$ in Section~6, we establish its asymptotic consistency \textcolor{black}{for objective~(\ref{eq.CI_3})}. \textcolor{black}{In Theorems~1--3 that follow, we replace $\widehat{\mu}_b$ and $\widehat{M}_b$ in Steps~3(b)--(c) of the ACI procedure with} \[ \textcolor{black}{\mu_b \equiv m_p(\widehat{{\bf X}}_{{\bf m}}^{(b)}) \mbox{ and }} M_b \sim \mbox{N}\left( m_p ( \widehat{\mathbf{X}}_{\mathbf{m}}^{(b)}), \sigma_{p}^2(\widehat{\mathbf{X}}_{\mathbf{m}}^{(b)}) \right) \] recalling that $m_p(\cdot)$ and $\sigma_p^2(\cdot)$ are the marginal mean and variance of the conditional GP $M_p(\cdot)$ \textcolor{black}{when $(\tau^2, \pmb{\theta}, C)$ are known}. \begin{theorem} \label{thm:consistency} Suppose that Assumptions 1--4 hold. 
Then the interval $[M_{(\lceil B\frac{\alpha}{2} \rceil)}, M_{(\lceil B(1-\frac{\alpha}{2})\rceil)}]$ is asymptotically consistent, meaning the iterated limit \begin{equation} \lim_{m\rightarrow\infty} \lim_{B\rightarrow \infty} \Pr\{M_{(\lceil B\alpha/2 \rceil)} \leq M_p(\mathbf{x}_c) \leq M_{(\lceil B(1-\alpha/2)\rceil)} \} = 1-\alpha. \label{eq.11_1} \end{equation} \end{theorem} In brief, under the assumption that $\mu({\bf x})$ is a realization of a GP, $M_p({\bf x})$ characterizes the remaining metamodel uncertainty after observing $\bar{{\mathbf{Y}}}_\mathcal{D}$. And since the input uncertainty is asymptotically correctly quantified by the bootstrap moment estimator $\widehat{\mathbf{X}}_{\mathbf{m}}$, the distribution of $M_p(\widehat{\mathbf{X}}_{\mathbf{m}})$ accounts for both input and metamodel uncertainty. \textcolor{black}{Theorem~\ref{thm:consistency} shows that this interval satisfies objective~(\ref{eq.CI_3}) asymptotically. We are particularly interested in situations when the simulation effort is limited (\cite{barton_nelson_xie_2011} addressed the ample budget case), so the consistency result in Theorem~\ref{thm:consistency} is only with respect to the real-world data.} The detailed proof is provided in the Appendix. \textcolor{black}{In practice, including our empirical evaluation in Section~\ref{sec:empirical}, $(\tau^2, \pmb{\theta}, C)$ must be estimated, and the impact of parameter estimation (other than ${\bf x}_c$ and $\beta_0$) is not covered by Theorem~1. We address sensitivity to parameter estimation in the Appendix.} \begin{comment} \textcolor{black}{Our goal is to find a CI that can quantify the impact of input and simulation estimation error on the system performance estimate and satisfy Equation~(\ref{eq.CI_2}). The revised objective~(\ref{eq.CI_3}) is mainly used to facilitate analysis. 
Therefore, the empirical test in Section~\ref{sec:empirical} evaluates the finite-sample performance in terms of the probability that the derived interval $\mbox{CI}_+$ covers $\mu({\bf x}_c)$ instead of $M_p({\bf x}_c)$. } \end{comment} \subsection{Variance Decomposition} \label{subSec:varDecomp} $\mbox{CI}_+$ accounts for input and metamodel uncertainty. When the width of $\mbox{CI}_+$ is too large to be useful, it is important to know the relative contribution from each source. Since the total output variability is the convolution of the input uncertainty and simulation/metamodel uncertainty, it is hard to separate the effects of these sources. To estimate the relative contributions, \cite{Zouaoui_Wilson_2003}, \cite{Ng_Chick_2006}, \cite{Ankenman_Nelson_2012} and \cite{SongNelson_2013} assume that the simulation noise has a constant variance. In this section, we propose a variance decomposition that does not require this homogeneity assumption. Suppose that the parameters $(\tau^2,\pmb{\theta},C)$ are known, that the metamodel uncertainty can be characterized by a GP, and that the simulation error follows a normal distribution. Then the metamodel uncertainty, given the simulation result $\bar{\mathbf{Y}}_\mathcal{D}$, is characterized by a GP $M_p({\bf x})\sim \mbox{N}(m_p(\mathbf{x}),\sigma^2_{p}(\mathbf{x}))$. Conditional on $\bar{\mathbf{Y}}_\mathcal{D}$, both $m_p(\mathbf{x})$ and $\sigma^2_{p}(\mathbf{x})$ are fixed functions. To simplify notation, all of the following derivations are conditional on the simulation outputs $\bar{\mathbf{Y}}_\mathcal{D}$, but we will suppress the ``$|\bar{\mathbf{Y}}_\mathcal{D}$''. The random variable $M_p({\mathbf{X}}_{\mathbf{m}})$ accounts for input uncertainty through the sampling distribution of $\mathbf{X}_{\mathbf{m}}$ and for metamodel uncertainty through the random function $M_p(\cdot)$.
To quantify the relative contribution of input and metamodel uncertainty, we decompose the total variance of $M_p({\mathbf{X}}_{\mathbf{m}})$ into two parts: \textcolor{black}{ \begin{eqnarray} \sigma^2_T &\equiv& \mbox{Var}[M_p(\mathbf{X}_{\mathbf{m}})] \nonumber \\[6pt] &=&\mbox{E}\{\mbox{Var}[M_p(\mathbf{X}_{\mathbf{m}})|\mathbf{X}_{\mathbf{m}}]\} +\mbox{Var}\{\mbox{E}[M_p(\mathbf{X}_{\mathbf{m}})|\mathbf{X}_{\mathbf{m}}]\} \nonumber\\[6pt] &=& \mbox{E}[\sigma^2_{p}(\mathbf{X}_{\mathbf{m}})] +\mbox{Var}[m_p(\mathbf{X}_{\mathbf{m}})]. \label{eq.varDecomp} \end{eqnarray}} The term $\sigma^2_M\equiv \mbox{E}[\sigma^2_{p}(\mathbf{X}_{\mathbf{m}})]$ is a measure of the metamodel uncertainty: the expected metamodel variance weighted by the density of the moment estimator $\mathbf{X}_{\mathbf{m}}$. This weighting makes sense because the accuracy of the metamodel in regions with higher density is more important for the estimation of system mean performance. The term $\sigma^2_I\equiv \mbox{Var}[m_p(\mathbf{X}_{\mathbf{m}})]$ is a measure of input uncertainty when we replace the unknown true response surface $\mu(\cdot)$ with its best linear unbiased estimate $m_p(\cdot)$. What is the contribution of each term to ACI coverage? If the metamodel uncertainty disappears (i.e., $\sigma^2_p(\cdot)=0$), then $\sigma^2_M=0$; $\mbox{CI}_0$ and $\mbox{CI}_+$ coincide and provide asymptotically consistent coverage \citep{barton_nelson_xie_2011}. Metamodel uncertainty is reduced by simulation effort. On the other hand, as $m\rightarrow\infty$ (more and more real-world input data), $\mathbf{X}_{\mathbf{m}}\stackrel{a.s.}\rightarrow{\bf x}_c$ and since $m_p({\bf x})$ is continuous we have $\sigma^2_I\rightarrow 0$; therefore, the width of $\mbox{CI}_0$ shrinks to zero, and so does its coverage, since in general there is remaining metamodel uncertainty. However, because $\mbox{CI}_+$ accounts for metamodel uncertainty it still provides asymptotically consistent coverage.
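A quick Monte Carlo check of Equation~(\ref{eq.varDecomp}) can be run with any fixed prediction functions. In the sketch below, \texttt{predict\_mean} and \texttt{predict\_var} are hypothetical stand-ins for $m_p(\cdot)$ and $\sigma^2_p(\cdot)$, and \texttt{x\_draws} plays the role of draws of the moment estimator $\mathbf{X}_{\mathbf{m}}$.

```python
import numpy as np

def variance_components(x_draws, predict_mean, predict_var, rng=None):
    """Monte Carlo check of Var[M_p(X_m)] = E[sigma_p^2(X_m)] + Var[m_p(X_m)]
    from draws of the input moment estimator X_m."""
    rng = np.random.default_rng(rng)
    m_vals = np.array([predict_mean(x) for x in x_draws])   # m_p(X_m)
    v_vals = np.array([predict_var(x) for x in x_draws])    # sigma_p^2(X_m)
    sigma2_M = v_vals.mean()        # metamodel component E[sigma_p^2(X_m)]
    sigma2_I = m_vals.var(ddof=1)   # input component Var[m_p(X_m)]
    # one draw of M_p per X_m gives a direct estimate of the total variance
    M_draws = rng.normal(m_vals, np.sqrt(v_vals))
    sigma2_T = M_draws.var(ddof=1)
    return sigma2_I, sigma2_M, sigma2_T
```

With enough draws the identity $\sigma^2_T=\sigma^2_I+\sigma^2_M$ is recovered up to Monte Carlo error. In particular, as the input component vanishes with more real-world data, whatever total variance remains is metamodel variance, which $\mbox{CI}_+$ retains and $\mbox{CI}_0$ discards.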
This effect is demonstrated by the empirical study in Section~\ref{sec:empirical}. Our decomposition allows us to express the total variance in Equation~(\ref{eq.varDecomp}) as the sum of two variances measuring input and metamodel uncertainty: $\sigma^2_T = \sigma^2_I+\sigma^2_M.$ In the metamodel-assisted bootstrapping framework, we can estimate each variance component as follows: \begin{itemize} \item Total variance: $\widehat{\sigma}^2_T=\sum_{b=1}^B(M_b-\bar{M})^2/(B-1)$, where $\bar{M}=\sum_{b=1}^B M_b/B.$ \item Input variance: $\widehat{\sigma}^2_I=\sum_{b=1}^B(\textcolor{black}{\mu}_b-\bar{{\mu}})^2/(B-1)$, where $\bar{{\mu}}=\sum_{b=1}^B\textcolor{black}{\mu}_b/B$. \item Metamodel variance: $\widehat{\sigma}^2_M= \sum_{b=1}^B\textcolor{black}{\sigma}_p^2 (\widehat{\mathbf{X}}^{(b)}_{\mathbf{m}})/B$. \end{itemize} The ratio $\widehat{\sigma}_I/\widehat{\sigma}_T$ provides an estimate of the relative contribution from input uncertainty on $\mbox{CI}_+$. If it is close to 1, the contribution from metamodel uncertainty can be ignored. Thus, this ratio can help a decision maker determine where to put more effort: If the input variance dominates, then get more real-world data (if possible). If the metamodel variance dominates, then it can be reduced by more simulation, which can be a combination of additional design points and additional replications at existing design points. If neither dominates, then both activities are necessary to reduce $\mbox{CI}_+$ to a practically useful size. The asymptotic properties of these variance component estimators are shown in the following theorems. \begin{theorem} \label{thm:decomposition1} Suppose that Assumptions~1--4 hold. 
Then conditional on $\bar{\mathbf{Y}}_\mathcal{D}$ the variance component estimators $\widehat{\sigma}^2_M, \widehat{\sigma}^2_I, \widehat{\sigma}^2_T$ are consistent as $m, B\rightarrow\infty$, where, as $m \rightarrow \infty$, we have $m_\ell/m \rightarrow c_\ell$, $\ell=1,2,\ldots,L$, for a constant $c_\ell>0$. Specifically, \begin{itemize} \item As $m\rightarrow\infty$, the input uncertainty disappears: \[ \lim_{m\rightarrow\infty}\sigma^2_M=\sigma^2_{p}({\bf x}_c), \lim_{m\rightarrow\infty}\sigma^2_I=0 \mbox{ and }\lim_{m\rightarrow\infty}\sigma^2_T=\sigma^2_{p}({\bf x}_c). \] \item As $m\rightarrow\infty$ and $B\rightarrow\infty$ in an iterated limit, the variance component estimators are consistent: \begin{eqnarray*} \lim_{m\rightarrow\infty}\lim_{B\rightarrow\infty}\widehat{\sigma}_M^2 &=&\lim_{m\rightarrow\infty}\sigma_M^2=\sigma_p^2({\bf x}_c), \\ \lim_{m\rightarrow\infty}\lim_{B\rightarrow\infty}\widehat{\sigma}_I^2 &=&\lim_{m\rightarrow\infty}\sigma_I^2=0, \\ \lim_{m\rightarrow\infty}\lim_{B\rightarrow\infty}\widehat{\sigma}_T^2 &=&\lim_{m\rightarrow\infty}\sigma_T^2=\sigma^2_p({\bf x}_c). \end{eqnarray*} \end{itemize} \end{theorem} Theorem~\ref{thm:decomposition1} demonstrates that the variance component estimators $\widehat{\sigma}^2_I$, $\widehat{\sigma}^2_M$ and $\widehat{\sigma}^2_T$ are consistent. However, we can see that when $m\rightarrow\infty$ the input uncertainty disappears. Since $\lim_{m\rightarrow\infty}\sigma^2_I=\lim_{m\rightarrow\infty} \lim_{B\rightarrow\infty}\widehat{\sigma}^2_I=0$ is not interesting, we study the consistency of scaled versions of $\sigma^2_I$ and $\widehat{\sigma}^2_I$ in Theorem~\ref{thm:decomposition2}, showing that $m\sigma^2_I$ and $m\widehat{\sigma}_I^2$ converge to the same non-zero constant. \begin{theorem} \label{thm:decomposition2} Suppose that Assumptions~1--4 and the following additional assumptions hold: \setcounter{enumi}{4} \begin{enumerate} \item [5.]
The first three derivatives of the correlation function of the GP $M({\bf x})$ exist and the third derivative is bounded; and \item [6.] $m_\ell/m \rightarrow 1$ for $\ell=1,2,\ldots,L$. \end{enumerate} Then $\lim_{m\rightarrow\infty}m\sigma_I^2=\lim_{m\rightarrow\infty} \lim_{B\rightarrow\infty} m\widehat{\sigma}^2_I=\sigma^2_\mu$ almost surely, where $\sigma^2_\mu$ is a positive constant. \end{theorem} Theorems~\ref{thm:decomposition1}--\ref{thm:decomposition2} give the asymptotic properties of the variance component estimators, guaranteeing $\widehat{\sigma}_I/\widehat{\sigma}_T$ is a consistent estimator for the relative contribution of input to overall uncertainty. We will empirically evaluate its finite-sample performance in Section~\ref{sec:empirical} \textcolor{black}{where we form the variance component estimators by inserting $(\widehat{\tau}^2, \widehat{\pmb{\theta}}, \widehat{C})$ for the unknown parameters $(\tau^2, \pmb{\theta}, C)$.} \subsection{Unstable and Undefined Moments} A fundamental assumption of simulation is that the expectation $\mu({\bf x}_c)$ exists. This assumption does not imply, however, that it exists for \textit{all} possible values of ${\bf x}$, ${\bf X}_{\bf m}$ or $\widehat{{\bf X}}_{\bf m}^{(b)}$ that might be realized. The prototype example is a congestion-related performance measure of a queueing system as time goes to infinity when congestion increases without bound for some values of its interarrival-time and service-time parameters. We refer to systems for which $\mu({\bf x})$ is $\pm \infty$ for some values of ${\bf x}$ as potentially \textit{unstable}. Recall that $\mu({\bf x}) = {\rm E}[Y({\bf x})]$. A second problem arises when for some values of ${\bf x}$ the random variable $Y({\bf x})$ is undefined. The prototype example is a network for which we want to estimate some start-to-finish performance measure, but the start and finish are not connected for certain values of ${\bf x}$. 
We refer to such systems as potentially \textit{undefined}. Below we use illustrative examples to describe what happens to metamodel-assisted bootstrapping in each case, why we expect it to be robust to unstable systems, and what needs to be done for undefined systems. We assume that ${\bf x}_c$ is an interior point of the space \textcolor{black}{$\Psi$ for which $\mu({\bf x}_c)$ is stable and $Y({\bf x}_c)$ is defined}, so both problems disappear asymptotically ($m \rightarrow \infty$), but they may occur when we apply the metamodel-assisted bootstrapping approach to a finite sample of real-world data. \subsubsection{Unstable Moments} \label{sec:unstable} Consider the simulation of an $M/M/1$ queue. Let $x_1$ and $x_2$ denote the mean interarrival time and mean service time, respectively, and let ${\bf x} = (x_1, x_2)^\top$. The unknown mean response $\mu({\bf x})$ is the steady-state expected number of customers in the system. The true values $x_1^c$ and $x_2^c$ are unknown and must be estimated from real-world data; $x_1^c > x_2^c$ so the system is actually stable. We denote the unstable and stable regions of ${\bf x}$ by $U = \{(x_1, x_2): 0 < x_1 \le x_2 \}$ and $\bar{U} = \{(x_1, x_2): x_1 > x_2 > 0\}$, respectively, and ${\bf x}_c$ is an interior point of $\bar{U}$. As described in the Appendix, we use an initial set of bootstrap resampled moments to define an ellipsoid in which to embed our experiment design to fit the metamodel, and then generate a second set at which we evaluate the metamodel to form a two-sided, equal-probability bootstrap percentile interval. The conditional probability that a bootstrap resampled moment $\widehat{\mathbf{X}}_{\mathbf{m}}^{(b)}$ is located in the unstable region given the real-world data is \begin{equation} P_U \equiv \mbox{Pr}\left\{\left. \widehat{\mathbf{X}}_{\mathbf{m}}^{(b)} \in U \right| \mathbf{z}_{\mathbf{m}}^{(0)} \right\}.
\label{eq.infP} \end{equation} For the $M/M/1$ queue we know $U$ so we know which bootstrap moments are in the unstable region; therefore, we could map the mean response for unstable moments (symbolically) to $\infty$ and only fit or use the metamodel to predict the mean response at stable moments. If $P_U$ is large this could lead to a one-sided lower confidence interval (infinite upper limit) that would be interpreted as ``the real system may be unstable;'' this is an appropriate conclusion if input uncertainty is substantial. Unfortunately, in general stochastic systems it could be difficult or impossible to determine which moments are in the unstable region either analytically or empirically \citep{Wieland_Pasupathy_Schmeiser_2003}. Thus, in the experiment design phase we might simulate the system at an ${\bf x} \in U$ to fit the metamodel, and in the bootstrapping phase we might evaluate the resulting metamodel at an ${\bf x} \in U$ to estimate the CI. What is the effect of doing this when $P_U>0$, possibly even large? Suppose we start each replication of the $M/M/1$ queue with an empty and idle system. Let $\mu({\bf x},t)$ denote the true expected number of customers in the system at time $t \ge 0$. Then except for the case $x_1 = x_2$, which we ignore, it can be shown that $\mu({\bf x},t)$ satisfies the differential equation \[ \frac{d\mu({\bf x},t)}{dt}=\frac{1}{x_1}-\frac{1-p_0(t)}{x_2} \] where $p_0(t)$ is the probability that the system is empty at $t$ and $p_0(0)=1$. If ${\bf x}\in \bar{U}$ then $p_0(t)\rightarrow 1-x_2/x_1$ as $t\rightarrow \infty$; however, if ${\bf x}\in U$ then $p_0(t)\rightarrow 0$ as $t\rightarrow\infty$. Thus, for large $t$, \begin{equation} \frac{d\mu({\bf x},t)}{dt} \approx \left\{\begin{array}{ll} 0, & \mbox{if ${\bf x}\in \bar{U}$} \\ \label{eq.unstable} \frac{1}{x_1}-\frac{1}{x_2} > 0, & \mbox{if ${\bf x}\in U$}. \end{array} \right. 
\end{equation} For any finite run length $T$ and warm-up period $T_0 < T$ the simulation provides an unbiased estimator of \begin{equation} \label{eq:transient.mean} \bar{\mu}({\bf x}, T_0, T) = \frac{1}{T - T_0}\int_{T_0}^T \mu({\bf x}, t)\,dt . \end{equation} Notice that this quantity is finite for any positive values of $x_1$ and $x_2$, whether stable or not. However, if ${\bf x} \in \bar{U}$ then $\bar{\mu}({\bf x}, T_0, T)$ converges to $\mu({\bf x})$ for large $T$; while if ${\bf x} \in U$ then $\bar{\mu}({\bf x}, T_0, T)$ is increasing in $T$ for $T$ large enough. \textit{The key point is this: The expected value of any simulation-based estimator will be finite, even if the true steady-state mean is not. Further, the expected value of the simulation estimator at unstable ${\bf x}$ will tend to be larger than at near-by stable ${\bf x}$. This means that the simulation estimates corresponding to unstable ${\bf x}$ will tend to be the largest ones observed, but still not infinite.} Consider the design points or bootstrap resampled moments that are in $U$. When $P_U < \alpha/2$ and the run length is long enough, the unstable design points used to fit the metamodel, or bootstrap moments at which it is evaluated, tend not to adversely affect either the metamodel or the interval estimate because they are in the right tail beyond the $\alpha/2$ quantile. On the other hand, when $P_U \ge \alpha/2$ the large estimates corresponding to unstable design points or bootstrap moments tend to lengthen the interval estimate beyond what is required to cover $\mu({\bf x}_c)$; this causes overcoverage rather than undercoverage. Thus, metamodel-assisted bootstrapping \textcolor{black}{will often be} robust to unstable moments \textcolor{black}{in the sense of not being too short, but possibly too long; we demonstrate this empirically in Section~6.} \subsubsection{Undefined Moments} \label{sec:infeasible} Consider the queueing network example in Figure~\ref{fig:Figure1_3}. 
For simplicity, suppose that the routing probabilities $p_1,p_2,p_3$ are the only input parameters. Let ${\bf x}=(x_1,x_2,x_3)^\top = (p_1,p_2,p_3)^\top$. The true parameters ${\bf x}_c$ are unknown and estimated by finite samples from Bernoulli distributions ($1$ if the customer takes a particular route, $0$ otherwise). Suppose that the mean response of interest, $\mu({\bf x}),$ is the steady-state expected time for a customer to traverse the network, which exists and is well-defined at ${\bf x}_c$. Unfortunately, $Y({\bf x})$ may not be defined for every possible bootstrap resampled moment ${\bf x}$. For instance, if $\widehat{{\bf X}}_{{\bf m}}^{(b)} = (0, 0.665, 0)^\top$ then Stations~1 (start) and~4 (end) are disconnected and no simulation output for time to traverse the network will ever be generated. Thus, the system corresponding to this bootstrap moment is undefined. In practical problems for which we can obtain real-world input data, we should know a priori that the system performance measure is well defined (e.g., we would not include a route unless we actually observed a customer take it). Further, it should not be difficult to detect moments for which the system output is undefined, either because we understand the system logic (as in this example) or because the simulation simply fails to run. Therefore, a reasonable solution to the problem of undefined systems is to reject (and sample again) bootstrap moments $\widehat{{\bf X}}_{{\bf m}}^{(b)}$ that imply an undefined output. This makes our assessment of uncertainty \textit{conditional} on the system performance measure being defined, which makes sense. \section{Empirical Study} \label{sec:empirical} In this section we use the queueing network described in Section~\ref{sec:problem_Description} to evaluate the performance of our metamodel-assisted bootstrapping approach. The performance measure is the steady-state expected number of customers in the system. 
Both interarrival and service times follow gamma distributions, and the routing decisions follow Bernoulli distributions. Thus, it is a 13-dimensional problem with $L=8$ input processes that include both continuous and discrete distributions. The true parameters of the input distributions are $\alpha_A=1,\beta_A=0.25$, $\alpha_{S_i}=1$, $\beta_{S_i}=0.2$ for $i=1,2,3,4$ and $p_1=p_2=0.5, p_3=0.75$. These parameter values imply a tractable Jackson network with steady-state expected number of customers in system $\mu({\bf x}_c) = 12.67$. The maximum traffic intensity at any station is $0.8$. In the experiments we assume that all parameters for all input distributions are unknown and are estimated from a finite sample of real-world data. Notice that $\alpha_A,\beta_A$, $\alpha_{S_i}$, $\beta_{S_i}$ for $i=1,2,3,4$ are estimated from continuous measurements, while the routing probabilities $p_1,p_2,p_3$ are estimated from binary (0 or 1) observations corresponding to customer routing decisions. The model with \textit{estimated} input parameters is almost surely not a Jackson network and it could be unstable. Our measure of uncertainty is a $95\%$ CI for $\mu({\bf x}_c)$ \textcolor{black}{as defined by~(\ref{eq.CI_2}) because this is the objective desired in practice.} To evaluate the robustness of the metamodel-assisted bootstrapping approach, we systematically examine the effect of the quantity of real-world data and the number of design points and replications per design point used to fit the metamodel; see Figure~\ref{fig:Figure2_2}. We consider a wide range for the quantity of real-world data $m=50,500,5000$, letting $m_\ell=m$ for $\ell=1,2,\ldots, L$. The levels for the number of design points are $k=20,40,80,130$. For a 13-dimensional problem $k=20$ is a very small design.
The studies by \cite{Jones_1998} and \cite{Loeppky_2009} recommend that the number of design points should be $10$ times the dimension of the problem for kriging; we take this as the maximum number of design points. The same number of replications is assigned to all design points, and we try $n=10,50,100$. \begin{figure*}[tb] \vspace{0.2in} { \centering \includegraphics[scale=0.35]{Figure2_3.png} \vspace{-0.2in} \caption{Experiment design. \label{fig:Figure2_2}} } \end{figure*} \cite{barton_nelson_xie_2011} demonstrated that $\mbox{CI}_0$ has good performance when the impact of metamodel uncertainty is negligible. In this empirical study we focus on situations where metamodel uncertainty may be significant. However, rather than creating a problem that actually takes hours or days to run, we instead construct a problem with high metamodel uncertainty by using short run lengths for each replication: 20 time units after the warm-up, which is roughly equivalent to 80 finished customers. To avoid initialization bias, all simulations start with the number of customers at each station set to its steady-state expected value (rounded) under ${\bf x}_c$. Furthermore, a long warm-up period of 200 time units is used. The net effect is that the point estimators of the steady-state number in the network have low bias, but may be quite variable. To make the description of the empirical results easy to follow, we start with overall conclusions: \begin{enumerate} \item The new ACI $\mbox{CI}_+$ is robust to different levels of real-world data $m$, number of design points $k$ and number of replications $n$. \item When metamodel uncertainty is significant, $\mbox{CI}_0$ tends to have undercoverage that becomes more serious as $m$ increases. Since $\mbox{CI}_+$ accounts for metamodel uncertainty, it does not exhibit this degradation, although it sometimes has slight overcoverage.
\item Metamodel-assisted bootstrapping \textcolor{black}{continues to deliver at least the nominal coverage} when the probability of an unstable system $P_U$ is large. \item The ratio $\widehat{\sigma}_I/\widehat{\sigma}_T$ is a useful measure of the relative contribution of input to overall statistical uncertainty. \end{enumerate} As discussed in Section~\ref{sec:unstable}, metamodel-assisted bootstrapping might behave differently when $P_U < \alpha/2$ vs.\ $P_U \ge \alpha/2$. Since $P_U$ only depends on $m$ and ${\bf x}_c$, we ran a side experiment to estimate it using \begin{equation} \label{eq.infP_hat} \widehat{P}_U=\frac{1}{B}\sum_{b=1}^B\mbox{I} \left(\widehat{\mathbf{X}}_{\mathbf{m}}^{(b)} \in U\right), \end{equation} where $\mbox{I}(\cdot)$ is the indicator function. The means and standard deviations (SD) of $\widehat{P}_U$ for $m=50,500,5000$ were estimated based on 1000 macro-replications and are displayed in Table~\ref{table:infP}. In each macro-replication we independently generated a sample of size $m$ of ``real-world data.'' Then, conditional on these data, we drew $B=2000$ bootstrap resampled moments. Finally, we calculated the estimate of $\widehat{P}_U$ using Equation~(\ref{eq.infP_hat}). As $m$ increases the bootstrap resampled moments become more closely centered around ${\bf x}_c$. Thus, both the mean and SD of $\widehat{P}_U$ decrease with increasing $m$ as shown in Table~\ref{table:infP}. When $m=50$, ${P}_U$ appears to be much larger than $\alpha/2$ so the bootstrap moments $\widehat{\mathbf{X}}_{\mathbf{m}}^{(b)}$ that correspond to the upper confidence bound are located in the unstable region $U$ with high probability. When $m=500$, ${P}_U$ appears to be close to $\alpha/2=2.5\%$, while when $m=5000$ there is little chance of getting unstable bootstrap moments. 
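When the unstable region is known in closed form, as in the $M/M/1$ example of Section~\ref{sec:unstable} where $U=\{(x_1,x_2): x_1\le x_2\}$, Equation~(\ref{eq.infP_hat}) reduces to a few lines. A sketch, assuming the real-world data arrive as separate interarrival-time and service-time samples (the queueing network studied here has a more complicated stability region):

```python
import numpy as np

def estimate_p_unstable(arrival_data, service_data, B=2000, rng=None):
    """Estimate P_U = Pr{bootstrap moments fall in the unstable region U}
    for an M/M/1-style system, where U = {(x1, x2): x1 <= x2}."""
    rng = np.random.default_rng(rng)
    m1, m2 = len(arrival_data), len(service_data)
    unstable = 0
    for _ in range(B):
        # bootstrap resample each input process and take its sample mean
        x1 = rng.choice(arrival_data, size=m1, replace=True).mean()
        x2 = rng.choice(service_data, size=m2, replace=True).mean()
        unstable += (x1 <= x2)   # indicator I(X_m^(b) in U)
    return unstable / B
```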
\begin{table}[tb] \caption{Percentage of unstable bootstrap resampled moments.} \label{table:infP} \begin{center} \begin{tabular}{|c|c|c|c|} \hline & $m=50$ & $m=500$ & $m=5000$ \\ \hline mean of $\widehat{P}_U$ & 44.4\% & 2.3\% & 0 \\ \hline SD of $\widehat{P}_U$ & 31.7\% & 7.9\% & 0 \\ \hline \end{tabular} \end{center} \end{table} In the following sections we describe the overall performance of $\mbox{CI}_0$ and $\mbox{CI}_+$, \textcolor{black}{including the situation where $P_U>0$}, and analyze the finite-sample performance of $\widehat{\sigma}_I/\widehat{\sigma}_T$ as a measure of the relative contribution of input to overall uncertainty. \subsection{Performance of CIs} \label{subSec:empirical_CI} Tables~\ref{table:HDm50}--\ref{table:HDm500-5000} show the results for $\mbox{CI}_0$ and $\mbox{CI}_+$ when $m=50,500,5000$, including the probability of covering $\mu({\bf x}_c)$, and the mean and SD of the interval widths. All results are based on $1000$ macro-replications. When $m=50$, $P_U$ is much greater than $\alpha/2$ according to Table~\ref{table:infP}. This explains the very large CI widths in Table~\ref{table:HDm50}. Nevertheless, both $\mbox{CI}_0$ and $\mbox{CI}_+$ have reasonable coverage overall, an observation we explore further in Section~\ref{subSec:empirical_Unstable}. Notice that $\mbox{CI}_0$ does exhibit undercoverage when we use a very small experiment design of $k=20$ points, while the coverage of $\mbox{CI}_+$ is much closer to the nominal value of $95\%$ in this case. If we fix the number of replications $n$ and increase the number of design points $k$, the coverage of $\mbox{CI}_0$ improves. For a fixed $k$ the effect of increasing $n$ is not as obvious. Table~\ref{table:HDm500-5000} shows the results for $m=500,5000$. Compared with the results for $m=50$, the mean and SD of the interval widths drop dramatically. The effects of $k$ and $n$ are easier to discern especially when $m=5000$, which has no unstable bootstrap moments. 
Specifically, for a fixed quantity of real-world data $m$, if either the number of design points $k$ or replications per design point $n$ is small then $\mbox{CI}_0$ tends to have undercoverage because it fails to account for substantial metamodel uncertainty, unlike $\mbox{CI}_+$. However, because $\mbox{CI}_+$ does incorporate metamodel uncertainty it sometimes has slight overcoverage. The most troubling observation about $\mbox{CI}_0$ is that, for fixed $(n,k)$, as the amount of input data $m$ increases its undercoverage becomes more serious. The diminished coverage occurs because as $m\rightarrow\infty$ the width of $\mbox{CI}_0$ shrinks to zero, which is not appropriate when there is still metamodel uncertainty. Again, $\mbox{CI}_+$ does not exhibit this degradation. As $n$ and $k$ increase, the coverages of $\mbox{CI}_0$ and $\mbox{CI}_+$ become closer to each other. \textit{The behavior of $\mbox{CI}_0$ is what we would expect based on \cite{barton_nelson_xie_2011}, which introduced $\mbox{CI}_0$. Their procedure continued to add simulation effort (design points and replications) until its effect on the confidence interval was negligible. \textcolor{black}{Compared to $\mbox{CI}_0$, the new interval, $\mbox{CI}_+$, is able to account for the effect of the remaining simulation estimation error. Therefore, it can work under more general situations where the simulated systems are complex and the simulation budget is tight.}} \begin{table}[tb] \caption{Results for $\mbox{CI}_{0}$, $\mbox{CI}_{+}$ and $\widehat{\sigma}_I/\widehat{\sigma}_T$ when $m=50$. 
} \label{table:HDm50} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $m=50$ & \multicolumn{3}{c|}{$k=20$} & \multicolumn{3}{c|}{$k=40$} \\ \cline{2-7} & $n=10$ & $n=50$ & $n=100$ & $n=10$ & $n=50$ & $n=100$ \\ \hline Coverage of $\mbox{CI}_0$ & 91.9\% & 92.3\% & 91.5\% & 93.8\% & 94.4\% & 93.4\%\\ \hline Coverage of $\mbox{CI}_{+}$ & 93.9\% & 94.9\% & 93.7\% & 94.9\% & 95.6\% & 95.9\% \\ \hline $\mbox{CI}_0$ Width (mean)& 326.4 & 332.4 & 339.5 & 319.1 & 328.6 & 326.5 \\ \hline $\mbox{CI}_{+}$ Width (mean) & 344.1 & 348.8 & 357.1 & 332.3 & 342.3 & 341.2 \\ \hline $\mbox{CI}_0$ Width (SD) & 183.1 & 173.6 & 180.7 & 176.4 & 167.6 & 175 \\ \hline $\mbox{CI}_{+}$ Width (SD) & 188 & 175.7 & 183.8 & 178.2 & 169.2 & 176.1 \\ \hline $\widehat{\sigma}_I/\widehat{\sigma}_T$ & 0.963 & 0.965 & 0.964 & 0.973 & 0.973 & 0.971 \\ \hline\hline $m=50$ & \multicolumn{3}{c|}{$k=80$} & \multicolumn{3}{c|}{$k=130$} \\ \cline{2-7} & $n=10$ & $n=50$ & $n=100$ & $n=10$ & $n=50$ & $n=100$ \\ \hline Coverage of $\mbox{CI}_0$ & 94.6\% & 96.3\% & 95.4\% & 94.2\% & 95.1\% & 95.4\%\\ \hline Coverage of $\mbox{CI}_{+}$ & 95.9\% & 96.7\% & 96.1\% & 94.5\% & 96\% & 96.1\% \\ \hline $\mbox{CI}_0$ Width (mean) & 312.1 & 314.8 & 322.7 & 322 & 321.86 & 320 \\ \hline $\mbox{CI}_{+}$ Width (mean) & 322 & 325.7 & 334 & 330.2 & 331 & 329.4 \\ \hline $\mbox{CI}_0$ Width (SD) & 169.7 & 159.1 & 164.7 & 171.5 & 169.3 & 172.3 \\ \hline $\mbox{CI}_{+}$ Width (SD) & 171.2 & 159.4 & 165 & 172.7 & 169.5 & 172.7 \\ \hline $\widehat{\sigma}_I/\widehat{\sigma}_T$ & 0.982 & 0.98 & 0.978 & 0.985 & 0.985 & 0.983 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[tb] \caption{Results for $\mbox{CI}_{0}$, $\mbox{CI}_{+}$ and $\widehat{\sigma}_I/\widehat{\sigma}_T$ when $m=500$ and $m=5000$. 
} \label{table:HDm500-5000} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $m=500$ & \multicolumn{3}{c|}{$k=20$} & \multicolumn{3}{c|}{$k=40$} \\ \cline{2-7} & $n=10$ & $n=50$ & $n=100$ & $n=10$ & $n=50$ & $n=100$ \\ \hline Coverage of $\mbox{CI}_0$ & 90.5\% & 94.6\% & 95.1\% & 94.9\% & 96.7\% & 96.4\%\\ \hline Coverage of $\mbox{CI}_{+}$ & 95.7\% & 97.7\% & 97.8\% & 96.6\% & 98.3\% & 97.8\% \\ \hline $\mbox{CI}_0$ Width (mean)& 24.8 & 28.1 & 29.4 & 27.1 & 28.5 & 28.7 \\ \hline $\mbox{CI}_{+}$ Width (mean) & 28.9 & 30.8 & 32.2 & 29.6 & 30.3 & 30.5 \\ \hline $\mbox{CI}_0$ Width (SD) & 19.9 & 19.4 & 20.6 & 19.1 & 19.2 & 19.9 \\\hline $\mbox{CI}_+$ Width (SD) & 20.6 & 20.4 & 21.7 & 19.7 & 19.9 & 20.6 \\ \hline $\widehat{\sigma}_I/\widehat{\sigma}_T$ & 0.88 & 0.932 & 0.933 & 0.932 & 0.957 & 0.958 \\ \hline\hline $m=500$ & \multicolumn{3}{c|}{$k=80$} & \multicolumn{3}{c|}{$k=130$} \\ \cline{2-7} & $n=10$ & $n=50$ & $n=100$ & $n=10$ & $n=50$ & $n=100$ \\ \hline Coverage of $\mbox{CI}_0$ & 96.5\% & 97.5\% & 95.8\% & 95.4\% & 96.5\% & 95.9\%\\ \hline Coverage of $\mbox{CI}_{+}$ & 98\% & 98.3\% & 97.3\% & 97.5\% & 97.1\% & 96.9\% \\ \hline $\mbox{CI}_0$ Width (mean) & 26.3 & 28 & 28.7 & 26.4 & 27.9 & 27.6 \\ \hline $\mbox{CI}_{+}$ Width (mean) & 28 & 29 & 29.7 & 27.9 & 28.6 & 28.2 \\ \hline $\mbox{CI}_0$ Width (SD) & 17.4 & 18 & 19.3 & 18.8 & 19.6 & 19.3 \\ \hline $\mbox{CI}_+$ Width (SD) & 17.7 & 18.4 & 19.6 & 18.9 & 19.9 & 19.5 \\ \hline $\widehat{\sigma}_I/\widehat{\sigma}_T$ & 0.952 & 0.977 & 0.978 & 0.957 &0.984 & 0.987 \\ \hline\hline\hline $m=5000$ & \multicolumn{3}{c|}{$k=20$} & \multicolumn{3}{c|}{$k=40$} \\ \cline{2-7} & $n=10$ & $n=50$ & $n=100$ & $n=10$ & $n=50$ & $n=100$ \\ \hline Coverage of $\mbox{CI}_0$ & 70.7\% & 89.2\% & 93.1\% & 81.5\% & 94.3\% & 94.8\%\\ \hline Coverage of $\mbox{CI}_{+}$ & 91.3\% & 96.3\% & 95.6\% & 96.5\% & 96.1\% & 96.3\% \\ \hline $\mbox{CI}_0$ Width (mean)& 3.29 & 3.97 & 4.14 & 3.93 & 4.23 & 4.3 \\ \hline $\mbox{CI}_{+}$ 
Width (mean) & 5.85 & 4.8 & 4.56 & 6.08 & 4.64 & 4.52 \\ \hline $\mbox{CI}_0$ Width (SD) & 1.89 & 1.2 & 1 & 1.64 & 0.87 & 0.83 \\ \hline $\mbox{CI}_+$ Width (SD) & 2.12 & 1.13 & 1 & 1.52 & 0.89 & 0.85 \\ \hline $\widehat{\sigma}_I/\widehat{\sigma}_T$ & 0.588 & 0.85 & 0.924 & 0.664 & 0.924 & 0.959 \\ \hline\hline $m=5000$ & \multicolumn{3}{c|}{$k=80$} & \multicolumn{3}{c|}{$k=130$} \\ \cline{2-7} & $n=10$ & $n=50$ & $n=100$ & $n=10$ & $n=50$ & $n=100$ \\ \hline Coverage of $\mbox{CI}_0$ & 88.9\% & 93.6\% & 94.9\% & 89.5\% & 93.7\% & 94.8\%\\ \hline Coverage of $\mbox{CI}_{+}$ & 98.1\% & 95\% & 96\% & 98\% & 95.6\% & 95.5\% \\ \hline $\mbox{CI}_0$ Width (mean) & 4.54 & 4.29 & 4.29 & 4.52 & 4.35 & 4.32 \\ \hline $\mbox{CI}_{+}$ Width (mean) & 6.1 & 4.56 & 4.42 & 5.98 & 4.64 & 4.45 \\ \hline $\mbox{CI}_0$ Width (SD) & 1.37 & 0.85 & 0.77 & 1.28 & 0.9 & 0.79 \\ \hline $\mbox{CI}_+$ Width (SD) & 1.27 & 0.85 & 0.78 & 1.13 & 0.87 & 0.77 \\ \hline $\widehat{\sigma}_I/\widehat{\sigma}_T$ & 0.757 & 0.946 & 0.974 & 0.766 & 0.945 & 0.974 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Performance of $\widehat{\sigma}_I/\widehat{\sigma}_T$} \label{subSec:empirical_Ratio} Tables~\ref{table:HDm50}--\ref{table:HDm500-5000} also demonstrate that $\widehat{\sigma}_I/\widehat{\sigma}_T$ provides a good measure of the relative contribution of input to overall uncertainty, and behaves as it should: \begin{itemize} \item For a fixed amount of real-world data $m$, increasing the number of design points and replications $(n,k)$ drives $\widehat{\sigma}_I/\widehat{\sigma}_T$ toward 1, indicating a decrease in metamodel uncertainty. \item For fixed simulation effort $(n,k)$, increasing the amount of real-world data $m$ decreases $\widehat{\sigma}_I/\widehat{\sigma}_T$, indicating that there is relatively less input uncertainty. 
Notice, however, that the relationship is not simple because as $m$ increases the design space over which we fit the metamodel becomes smaller, so that even with the same simulation effort the absolute level of metamodel uncertainty will decrease somewhat. \item When $\widehat{\sigma}_I/\widehat{\sigma}_T$ is near $1$, the behaviors (coverage and width) of $\mbox{CI}_0$ and $\mbox{CI}_{+}$ are similar and both have coverage close to the nominal level; this is illustrated in Figure~\ref{fig:coverageErr_ratio_m5000}. Recall that $\mbox{CI}_0$ does not account for metamodel uncertainty, and that $\widehat{\sigma}_I/\widehat{\sigma}_T \approx 1$ indicates that input uncertainty is large relative to metamodel uncertainty, which is when $\mbox{CI}_0$ will do best. Figure~\ref{fig:coverageErr_ratio_m5000} also illustrates the general robustness of $\mbox{CI}_+$. \end{itemize} \begin{figure*}[tb] \vspace{0.2in} { \centering \includegraphics[scale=0.7]{Coverage.png} \vspace{-0.2in} \caption{The coverage errors for $\mbox{CI}_0$ and $\mbox{CI}_+$ vs.\ $\widehat{\sigma}_I/\widehat{\sigma}_T$ when $m=5000$ across all values of $n$ and $k$. \label{fig:coverageErr_ratio_m5000}} } \end{figure*} \subsection{Robustness to Unstable Moments} \label{subSec:empirical_Unstable} Recall the observation from Table~\ref{table:HDm50} that when there is a small quantity of real-world data ($m=50$), resulting in a large probability of unstable bootstrap moments, then both $\mbox{CI}_0$ and $\mbox{CI}_+$ had large mean and SD of their widths, yet provided reasonable coverage. Examining the results, we found that most of the intervals that fail to cover the mean do so because the lower confidence bound is \textit{above} $\mu({\bf x}_c)$; this is the case for both $\mbox{CI}_0$ and $\mbox{CI}_+$. 
Using all 1000 macro-replications, the estimated probability that the lower confidence bound is above the mean (greater than $\mu({\bf x}_c)$, meaning too large) is 4.4\% for $\mbox{CI}_0$ and 3.8\% for $\mbox{CI}_+$, while the estimated probability that the upper confidence bound is below the mean (less than $\mu({\bf x}_c)$, meaning too small) is only $0.2\%$ for $\mbox{CI}_0$ and $0.1\%$ for $\mbox{CI}_+$; for two-sided equal percentile intervals we would expect these to be around $2.5\%$. \textcolor{black}{\textit{We conclude that even though the metamodel predicts a finite mean when it should be infinite, this will still tend to lead to overcoverage rather than undercoverage, and therefore is conservative.}} \begin{comment} To investigate this, we plotted the CIs using a technique suggested by \cite{Kang_Schmeiser_1990}. See Figure~\ref{fig:HDm50}, which displays results from the first 100 of the 1000 macro-replications when $m=50$, $k=130$ design points and $n=100$ replications per design point. The horizontal axis is the value of the lower confidence bound, the vertical axis is the value of the upper confidence bound, and each plot point corresponds to one CI. The vertical and horizontal lines located at $\mu({\bf x}_c)=12.67$ define Region~1 which contains intervals entirely above the true mean, Region~2 which contains intervals covering $\mu({\bf x}_c)$, and Region~3 which contains intervals entirely below the mean. Figure~\ref{fig:HDm50} shows that most of the intervals that fail to cover the mean do so because the lower confidence bound is above $\mu({\bf x}_c)$; this is the case for both $\mbox{CI}_0$ and $\mbox{CI}_+$. \begin{figure*}[] \vspace{0.2in} { \centering \includegraphics[scale=0.5]{HDm50.png} \vspace{-0.2in} \caption{A scatter plot of $\mbox{CI}_0$ and $\mbox{CI}_+$ for $m=50$ with $k=130$ and $n=100$. 
\label{fig:HDm50}} } \end{figure*} To evaluate $\mbox{CI}_0$ and $\mbox{CI}_+$ when $m=500$, Figure~\ref{fig:CI_m500k130n100} shows the corresponding plot, again with $k=130$ and $n=100$. Compared with $m=50$, when $m=500$ the percentage of unstable bootstrap resampled moments is much smaller as shown in Table~\ref{table:infP}. Thus, the behavior in Figure~\ref{fig:CI_m500k130n100} is much more stable than that in Figure~\ref{fig:HDm50}. \begin{figure*}[tb] \vspace{0.2in} { \centering \vspace{-0.2in} \caption{A scatter plot of $\mbox{CI}_0$ and $\mbox{CI}_+$ for $m=5000$ with $k=130$ and $n=100$. \label{fig:CI_m5000k130n100}} } \end{figure*} Figure~\ref{fig:CI_m5000k130n100} shows the corresponding plot when $m=5000$ with $k=130$ and $n=100$. The intervals $\mbox{CI}_0$ and $\mbox{CI}_+$ are very close to each other, and both CIs become more symmetric compared with those in Figures~\ref{fig:HDm50}--\ref{fig:CI_m500k130n100}. The plots in Figures~\ref{fig:HDm50}--\ref{fig:CI_m500k130n100} that the distribution of input parameter uncertainty, as represented by the distribution of $F_{\mathbf{X}_{\mathbf{m}}}^c$, could be \emph{asymmetric} for the quantities of real-world data sometimes encountered in practice. Compared with asymptotic normal approximation used in other methods, the bootstrap can capture higher moment information such as skewness of $F_{\mathbf{X}_{\mathbf{m}}}^c$. Thus, it can provide a better approximation; see the discussion in Section~3.3.3 of \cite{Shao_1995}. Then by using the smooth and flexible SK metamodel to provide a good \emph{global} approximation for the nonlinear surface $\mu({\bf x})$ and its remaining uncertainty, metamodel-assisted bootstrapping propagates input and metamodel uncertainty to the output mean. This explains the overall good finite-sample performance of $\mbox{CI}_+$ observed in Tables~\ref{table:HDm50}--\ref{table:HDm500-5000}. 
\end{comment} \begin{comment} \begin{figure*}[tb] \vspace{0.2in} { \centering \includegraphics[scale=0.45]{coverageErr_ratioM.png} \vspace{-0.2in} \caption{The coverage errors for $\mbox{CI}_0$ and $\mbox{CI}_+$ vs $\widehat{\sigma}_I/\widehat{\sigma}_T$ for $m=50,500,5000$ with $k=130$ and $n=100$. \label{fig:coverageErr_ratioM}} } \end{figure*} \end{comment} \section{Conclusions} In this paper, a metamodel-assisted bootstrapping approach is used for statistical uncertainty analysis. Input uncertainty is approximated by the bootstrap, an equation-based stochastic kriging metamodel is used to propagate the input uncertainty to the output mean, and the metamodel uncertainty is derived using properties of stochastic kriging. This approach delivers an interval estimator that accounts for \textit{all} statistical uncertainty, both simulation and input. The asymptotic consistency of this interval is proved under the assumption that the true response surface is a realization of a Gaussian process and certain parameters are known. An empirical study on a difficult problem demonstrates that our approach \textcolor{black}{can have} good finite-sample performance even when there are several input distributions (both discrete-valued and continuous-valued), a tight computational budget, and bootstrap moments corresponding to unstable systems. Thus, the new interval does not require a sequential experiment to make metamodel uncertainty negligible, as in \cite{barton_nelson_xie_2011}. If $\mbox{CI}_+$ is too wide, then it is important to know the relative contributions from input and metamodel uncertainty as a guide to either collecting more real-world data or doing more simulation or both. We give a measure of the relative contribution of input to overall statistical uncertainty by using a variance decomposition and analyze its asymptotic properties. 
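The overall workflow summarized above can be sketched in a few lines. The following is a toy illustration only: it assumes a single exponential input distribution, and an ordinary least-squares polynomial stands in for the stochastic kriging metamodel; none of the distributions, sample sizes, or design choices here are those used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "real-world" input data: m observations whose mean is the
# unknown input parameter driving the simulation.
m = 200
real_data = rng.exponential(scale=2.0, size=m)

def simulate(x, n, rng):
    """Stand-in stochastic simulation: average of n noisy replications at x."""
    return float(np.mean(x ** 2 + rng.normal(0.0, 1.0, size=n)))

# Run the simulation at k design points with n replications each.
design = np.linspace(1.0, 3.0, 9)
n_reps = 50
responses = np.array([simulate(x, n_reps, rng) for x in design])

# Fit a simple quadratic metamodel (OLS standing in for stochastic kriging).
metamodel = np.poly1d(np.polyfit(design, responses, deg=2))

# Bootstrap the input data, propagate each resampled moment estimate
# through the metamodel, and form a percentile interval (analogue of CI_0).
B = 1000
boot_means = np.array([
    metamodel(rng.choice(real_data, size=m, replace=True).mean())
    for _ in range(B)
])
ci_lower, ci_upper = np.percentile(boot_means, [2.5, 97.5])

# Crude analogue of the ratio sigma_I / sigma_T: bootstrap (input) variance
# over input-plus-metamodel variance, with the fit's residual variance used
# as a rough stand-in for metamodel uncertainty.
sigma_I2 = boot_means.var()
sigma_M2 = (responses - metamodel(design)).var()
ratio = np.sqrt(sigma_I2 / (sigma_I2 + sigma_M2))
```

A ratio near 1 would indicate, as in the experiments above, that input uncertainty dominates and that further simulation effort is less valuable than additional real-world data.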
\section*{Acknowledgments} This paper is based upon work supported by the National Science Foundation under Grant Nos.\ CMMI-0900354 and CMMI-1068473. The authors thank Ohad Perry, the associate editor and two anonymous referees for their help in the presentation of results and certain technical issues. \bibliographystyle{apacite}
\section{Introduction} Next-generation astronomy research facilities will generate new challenges for data storage, access, analysis and system monitoring, bringing astronomy into the Petascale Data Era. But even today, astronomical knowledge is not growing at the same rate as the data. Scientific visualization is a fundamental, enabling technology for knowledge discovery. Despite recent progress, many current astronomy visualization approaches will be seriously challenged by, or are completely incompatible with, the Petascale Data Era. With an emphasis on developing new approaches compatible with data from the Square Kilometer Array and its Pathfinders, the goal of this work is to advance the field of astrophysical visualization in preparation for the Petascale Era. \section{Challenges and Design Objectives} The main goal of this work is to enable astronomers to visualize large spectral data cubes (e.g., at least 1 TB in size), such as those that will be generated by \htmladdnormallinkfoot{the Australian SKA Pathfinder (ASKAP)}{http://www.atnf.csiro.au/projects/askap/}. Multispectral data can be considered as a 4D data volume where three dimensions are associated with position (two dimensions for the spatial position in sky coordinates and one dimension for the wavelength or frequency, both of which are related to the line-of-sight velocity) and one dimension for the flux density. The data cube can be considered as a stack of images where each image represents a portion of the sky over a small frequency range ($\Delta\nu$). To achieve this target, we designed a framework that utilizes the latest available hardware technologies combined with the latest software infrastructure.
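As a concrete illustration of this layout (with toy dimensions chosen arbitrarily), a spectral cube stored as a channels-by-sky array gives direct access to both views of the data: each slice is one of the stacked images, and a fixed sky position yields the flux densities along one line of sight.

```python
import numpy as np

# Toy spectral cube with shape (channels, y, x); each cube[i] is a sky
# image integrated over a small frequency range, as described above.
cube = np.arange(5 * 4 * 3, dtype=float).reshape(5, 4, 3)

channel_map = cube[2]      # one sky image at a single frequency slice
spectrum = cube[:, 1, 1]   # flux density along one line of sight
```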
The main design objectives and challenges for this framework were to: \begin{enumerate} \item Scale to offer better visualization outcomes when more hardware is available; \item Support heterogeneous computing systems; \item Use off-the-shelf hardware, such as graphics processing units (GPUs); \item Require minimal user intervention, to reduce the time needed by astronomers to install and run the system; \item Handle tiled display systems to provide the user with high-resolution output; and \item Handle data sizes exceeding current single-machine memory limits. \end{enumerate} To achieve these objectives, the following design decisions were taken: \begin{enumerate} \item \textbf{Use ray-casting volume rendering as our visualization technique.} Although computationally intensive and relatively hard to implement, volume rendering is an important visualization tool in our case because it gives the user a global picture of the data cube, it does not require the user to know in advance what they are searching for, and it provides visuals that are easy to understand. In addition, ray-casting enables our framework to achieve high-resolution output without visual artifacts (Schwarz 2007). \item \textbf{Build a framework based on a distributed GPU architecture.} With a theoretical peak performance greater than 4 TFLOP/s ($10^{12}$ floating point operations per second) on hardware such as NVIDIA's Tesla, general-purpose computing on graphics processing units offers a more effective and cheaper parallel architecture than existing multi-core processors. \end{enumerate} \section{Distributed GPU Framework} As shown in Figure 1, our distributed GPU framework combines the processing power of multiple GPU nodes to speed up and enhance the spectral cube visualization process.
This framework uses \htmladdnormallinkfoot{Message Passing Interface}{http://www.mcs.anl.gov/research/projects/mpi/} (MPI), multi-threading, and \htmladdnormallinkfoot{the Compute Unified Device Architecture}{http://www.nvidia.com/object/cuda_what_is.html} (CUDA) framework from NVIDIA to allow many GPUs to work on the same problem. \begin{figure} \plotone{P27.hassan.fig1.eps} \caption{Schematic diagram showing the framework's main hardware components. Each node is connected to the other nodes through a standard network interface which is managed through a separate thread. The communication threads serve as software communication channels between kernels on different GPUs. The scheduler server maps the currently available tasks to the available computing resources. The scheduler module assumes that the resources are heterogeneous and assigns the computational tasks to each of them according to their computing capabilities. The final rendering result is presented to the user through a thin client which translates the user interaction into rendering commands for the scheduler server.} \label{fig:P27_1} \end{figure} \subsection{Framework Modules} The main components of our proposed framework are partitioned based on their main functionality into: \begin{enumerate} \item Scheduling module: responsible for managing and controlling the overall process; \item GPU module: includes the different execution kernels and utilizes the parallelization paradigm of the GPUs. We chose NVIDIA's CUDA library because it has a C-like syntax, it is easy to use, and it covers all the general-purpose GPU computing functionality we require; \item Scene integration module: responsible for generating the final display output by combining the output of the contributing GPUs; and \item I/O and User interaction modules: support the user's interactivity and change the output according to user input.
\end{enumerate} The process of generating a single volume rendering view of the spectral line cube involves the following steps: \begin{enumerate} \item The spectral data cube is partitioned into a set of smaller sub-cubes; \item The scheduler module assigns these sub-cubes to the processing nodes/GPUs; \item Each GPU applies ray-casting volume rendering to map each output pixel into a color plus alpha transparency channel (RGBA) based on a pre-selected transfer function; \item The process of volume rendering produces N images with the same resolution as the output screen(s); \item The scene integration module reapplies the same transfer function to these images to combine them into the final output; and \item The final output is directed to the output display(s). \end{enumerate} \section{Conclusion} The focus of this work is to improve multi-spectral data visualization to cope with the vast amount of data to be produced by ASKAP and similar facilities. By employing GPUs combined with distributed processing, we are aiming to implement a scalable system capable of interactively visualizing ``greater than memory'' astrophysical datasets.
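The scene-integration step, in which the N per-GPU RGBA images are combined into the final output, can be sketched as standard back-to-front ``over'' compositing. This is a minimal illustrative implementation, assuming premultiplied-alpha images already sorted by depth; it is not the framework's actual code.

```python
import numpy as np

def composite_over(layers):
    """Back-to-front 'over' compositing of premultiplied-alpha RGBA images.

    layers: sequence of (H, W, 4) float arrays, ordered from the farthest
    sub-cube rendering to the nearest, with premultiplied colour in
    channels 0-2 and opacity in channel 3.
    """
    out = np.zeros_like(layers[0])
    for layer in layers:
        # Each nearer layer goes "over" everything accumulated so far.
        out = layer + out * (1.0 - layer[..., 3:4])
    return out

# Toy example: an opaque red background behind a half-transparent green layer.
back = np.zeros((4, 4, 4)); back[..., 0] = 1.0; back[..., 3] = 1.0
front = np.zeros((4, 4, 4)); front[..., 1] = 0.5; front[..., 3] = 0.5
final = composite_over([back, front])
```

Because the ``over'' operator is associative, each node can composite its own sub-cube images locally before a final merge, which is what makes the distributed design scalable.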
\section*{INTRODUCTION} It is the purpose of the present contribution to describe a comprehensive theory for the production of pions due to the interactions of incident photons with nucleons \cite{hh97}. The history of such descriptions goes back to the fifties, and indeed many of the more general basic relations have been well-known for about forty years (see, e.g., \cite{ward50,kroll54,takah57,kazes59} and references therein). In recent years, the attention has focused on approaches attempting to take into account the fact that all hadrons involved in the reaction have an internal structure \cite{gross,afnan1,ohta,friar,coester,afnan,banerjee}. The work reported here adds to the latter approaches by providing a detailed theoretical framework for the gauge-invariant interactions of physical --- i.e., fully dressed --- hadrons with photons. The description is based on an effective field theory where the (at present, in detail unknown) quark and gluon degrees of freedom are parametrized by the bare quantities of the effective Lagrangian. We assume here that QCD provides us with cutoff functions for the $N$-$N \pi$ vertices that make all integrations convergent. These vertices of the effective Lagrangian are ``bare'' at the hadronic level, i.e., they can still be dressed by purely hadronic mechanisms. The physical currents for all processes contributing to the pion production amplitude are derived via their corresponding hadronic $n$-point Green's functions by employing a mathematical operation called a ``gauge derivative'' which allows one to obtain currents directly from the momentum-space versions of the respective Green's functions. For local fields, this is equivalent to the usual minimal-substitution procedure. However, for the present nonlocal case, where we assume bare vertex functions originating from QCD, this goes beyond minimal substitution. For lack of space, we cannot go into any details here regarding the gauge derivative. 
For the same reason, we rely entirely on a --- hopefully largely self-explanatory --- diagrammatic exposition of the formalism. Complete details are given in Ref. \cite{hh97}. \section*{PION PHOTOPRODUCTION} Generically, at any level of dynamical sophistication, there are four distinctly different contributions to the current $M^{\mu}$ for the pion photoproduction process $\gamma N\rightarrow \pi N$: Three contributions from the photon interacting with the three external hadronic legs of the $N$-$N \pi$ vertex and a fourth one arising from the photon attaching itself within the vertex itself. At the simplest level where one has only (bare) tree graphs, this last contribution vanishes identically for a local pseudoscalar vertex; for a bare $N$-$N \pi$ vertex with derivative coupling, it is equal to the Kroll--Ruderman \cite{kroll54} contact term obtained from minimal substitution. This generic picture, which is shown in Fig.\ \ref{fig1}, \begin{figure}[b!] \centerline{\psfig{file=m97fig1.eps,width=4.5in,clip=,silent=}} \vspace{3mm} \caption{\label{fig1} Basic diagrams describing pion photoproduction. Solid and dashed lines describe nucleons and pions, respectively.} \end{figure} remains true even if we consider fully dressed, physical hadrons. What becomes more complicated then are the ingredients contributing to the various mechanisms depicted in Fig.\ \ref{fig1}. For the fully dressed case, the internal details of the right-most graph of Fig.\ \ref{fig1} --- now called the {\it interaction current} (where, except for the bare contact term, there is at least one hadronic $N$-$N \pi$ vertex before or after the photon interacts with the hadronic system) --- are found \cite{hh97} to be given by the mechanisms shown in Fig.\ \ref{fig2}. \begin{figure} \centerline{\psfig{file=m97fig2.eps,width=4.5in,clip=,silent=}} \vspace{3mm} \caption{\label{fig2} Interaction current. 
The first graph on the right-hand side is the bare contact term present already in the simplest tree-level case.} \end{figure} The interaction current is seen to contain a bare Kroll--Ruderman-type contact term which is present already at the tree-level. In addition, it contains an exchange current contribution $U^{\mu}$ and an auxiliary current $b^{\mu}$ dressed by hadronic final-state interactions denoted by $X$. The details of $b^{\mu}$ are given in Fig.\ \ref{fig3}. \begin{figure} \centerline{\psfig{file=m97fig3.eps,width=4.5in,clip=,silent=}} \vspace{3mm} \caption{\label{fig3} Auxiliary current $b^{\mu}$ required in Fig.\ \ref{fig2}. The exchange current $U^{\mu}$ appearing here and in Fig.\ \ref{fig2} is shown in Fig.\ \ref{figU}.} \end{figure} The remaining current pieces are depicted in Figs.\ \ref{figU} and \ref{figJ}, where the latter shows the (on {\it and} off-shell) current for a composite nucleon. \begin{figure} \centerline{\psfig{file=m97fig4.eps,width=4.5in,clip=,silent=}} \vspace{3mm} \caption[title]{\label{figU} (a) Exchange current $U^{\mu}$ due to driving term $U$ and (b) current $X^{\mu}$ appearing in (a).} \end{figure} \begin{figure} \centerline{\psfig{file=m97fig5.eps,width=4.5in,clip=,silent=}} \vspace{3mm} \caption[title]{\label{figJ} Various equivalent representations of the fully dressed electromagnetic current for the nucleon.} \end{figure} As alluded to in the Introduction, the basis for this dynamical structure is the gauge-derivative formalism of Ref. \cite{hh97}. The underlying effective hadronic field theory describing $\pi N$ scattering, which supplies the required $n$-point Green's functions, is summarized in Fig.\ \ref{fig4}. 
\begin{figure} \centerline{\psfig{file=m97fg6a.eps,width=3in,clip=,silent=}\hfill% \psfig{file=m97fg6b.eps,width=3in,clip=,silent=}} \vspace{3mm} \caption[Summary of coupled, nonlinear equations for {$\pi N\rightarrow \pi N$}]% {\label{fig4} % Summary of coupled, nonlinear equations for {$\pi N\rightarrow \pi N$} in the $P_{11}$ channel \cite{hh97} (a similar set obtains for the $P_{33}$ channel): (a) Fully dressed $N$-$N\pi$ vertex (filled circle) given in terms of bare vertex (open circle) and final-state interaction mediated by the nonpolar $\pi N$ amplitude $X$. (b) Nonlinear equation for the dressed nucleon propagator (line with solid circle) determined by the bare propagator (line with open circle) and the $\pi N$ self-energy bubble. (c) Integral equation for the auxiliary amplitude $X$ given in terms of the $\pi N$ irreducible driving term $U$ and intermediate propagation of noninteracting $\pi N$ system. (d) Full $\pi N$ Bethe--Salpeter amplitude $T$, expressed as the sum of the polar $s$-channel term (with fully dressed $\pi N$ vertices and $N$ propagator) and the solution $X$ of (c). Some of the lowest-order contributions for the driving term $U$ of the nonpolar amplitude $X$ are given in the figure on the right-hand side. The corresponding first graph subsumes the $u$-channel exchanges of both $N$ and $\Delta$, i.e., it also incorporates the crossing-symmetric partner of the $s$-channel graph of (d). The other three box graphs depict intermediate scattering processes of the nonpolar {$P_{11}$ $\pi N$} amplitude $X$ dressed by a pion, of the {$\pi \pi $} amplitude dressed by a baryon, and of the full {$P_{33}$ $\pi N$} amplitude with pion dressing, respectively.} \end{figure} The formalism is seen to be highly nonlinear at both the hadronic and electromagnetic levels. 
\section*{GAUGE INVARIANCE} The general condition for gauge invariance of all physical currents can be formulated very succinctly: All current contributions $R^{\mu}$ associated with the photon's interaction {\it within} the interaction region of any hadronic reaction $R$ must satisfy the equation \cite{hh97} \begin{eqnarray} k_{\mu} R^{\mu} + R \widehat{Q}_i - \widehat{Q}_f R = 0 \; , \label{eq1} \end{eqnarray} where $Q_i$ is the sum of all charge operators for all incoming and $Q_f$ for all outgoing particle legs; the caret signifies that for all subsequent reaction mechanisms the particle described by the respective individual charge operator will have the photon's four-momentum $k^{\mu}$ added to its own. The situation is illustrated in Fig.\ \ref{fgRmu}. \begin{figure} \centerline{\psfig{file=m97fig7.eps,width=4.5in,clip=,silent=}} \vspace{3mm}\caption[title]{\label{fgRmu} Generic representation of $R \widehat{Q}_i - \widehat{Q}_f R$ of Eq.\ (\ref{eq1}). $R$ is an arbitrary hadronic reaction mechanism where all incoming and outgoing {\it uncharged} particles have been subsumed in the upper lines and all incoming and outgoing {\it charged} particles in the lower, thicker lines. The first graph on the left sums up all contributions where the photon is attached to an incoming particle whereas the one on the right depicts the sum of all contributions from the photon being attached to an outgoing particle. $R \widehat{Q}_i - \widehat{Q}_f R$ is the difference between the purely hadronic contributions enclosed in the dashed boxes; it measures the change brought about in the hadronic reaction when a photon momentum is transmitted through the hadronic interaction region entering and leaving via charged particles.} \end{figure} This is an {\it off-shell} generalization of the well-known Ward-Takahashi identity for propagators \cite{ward50,takah57} to the case of several incoming and outgoing particles.
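As a concrete check of Eq.~(\ref{eq1}), one may take the simplest reaction $R$ to be the dressed propagator $S(p)$ itself, with $R^{\mu}=S(p+k)\,\Gamma^{\mu}(p+k,p)\,S(p)$. The following short derivation is our own rendering, with signs and operator orderings chosen to match Eq.~(\ref{eq1}); conventions differ across the literature.

```latex
% With R = S(p) and R^mu = S(p+k) Gamma^mu(p+k,p) S(p), Eq. (1) gives
%   k_mu S(p+k) Gamma^mu(p+k,p) S(p) = Q S(p) - S(p+k) Q .
% Multiplying by S^{-1}(p+k) from the left and S^{-1}(p) from the right:
\begin{equation*}
  k_{\mu}\,\Gamma^{\mu}(p+k,p)
  = S^{-1}(p+k)\,\widehat{Q} - \widehat{Q}\,S^{-1}(p)
  = Q \left[ S^{-1}(p+k) - S^{-1}(p) \right] ,
\end{equation*}
% where the last step assumes a single particle of commuting charge Q,
% recovering the standard Ward--Takahashi identity for the propagator.
```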
(Transverse magnetic moment contributions are contained in $R^{\mu}$; they are conserved by themselves, of course.) As has been shown in Ref.\ \cite{hh97}, all reaction elements appearing in Figs.\ \ref{fig1}--\ref{figJ} do satisfy the gauge condition (\ref{eq1}) and therefore all physical currents are indeed gauge-invariant. From a practical perspective, in view of the high nonlinearity of the present formalism, the question arises how this gauge invariance can be maintained if some elements of the reaction mechanisms can only be treated by approximations. Two well-known methods of ensuring gauge invariance in situations where the proper interaction currents are not available are due to Gross and Riska \cite{gross} and Ohta \cite{ohta}. The recent work of \cite{banerjee} suggests that there may be a theoretical problem with Ohta's method having to do with the fact that, in general, it requires knowledge of the vertex functions at unphysical values of its arguments which can be traced to an underlying non-Hermitian effective Lagrangian. Haberzettl \cite{hh97} has suggested another, quite general, method of maintaining gauge invariance when approximating interaction currents which essentially replaces the full interaction current $R^{\mu}$ by a sum of single-particle currents over all incoming and outgoing hadron legs of the reaction $R$, \begin{eqnarray} R^{\mu} \rightarrow R^{\mu}_{\rm approx} = \sum_{x_f} j^{\mu}_{{\rm leg},x_f} + \sum_{x_i} j^{\mu}_{{\rm leg},x_i} \;, \label{eq2} \end{eqnarray} where the individual leg currents $j^{\mu}_{{\rm leg},x}$ are obtained from a straightforward procedure using only charge conservation across the hadronic reaction in question (for details, see \cite{hh97}). 
When applied to a nonlocal vertex as, e.g., the bare $N$-$N \pi$ vertex (see Fig.\ \ref{fgfmu}) \begin{figure} \centerline{\psfig{file=m97fig8.eps,width=4.5in,clip=,silent=}} \vspace{0mm}\caption[title]{\label{fgfmu} Effective treatment of the bare current as an example of the gauge-invariant current approximation given in Eq.\ (\ref{eq2}), where the current pieces due to a photon attaching itself within a hadronic interaction region is replaced by a sum of suitably constructed currents over all incoming and outgoing hadron legs. Each of the three leg currents is given by an entire diagram on the right-hand side and can be interpreted \cite{hh97} as a product of a particle current, a propagator, and a bare vertex function, in the manner shown here. For a constant vertex with derivative coupling, the result is identical to the Kroll--Ruderman term \cite{kroll54} (see Eq. (93) of \cite{hh97}).} \end{figure} whose associated bare (QCD-based) interaction current is undetermined at the hadronic level, the resulting expressions are reminiscent of Ohta's \cite{ohta} for an extended vertex. However, they are different in detail and require only physical values of the vertex functions and, therefore, do not suffer from any of the problems discussed in Ref.\ \cite{banerjee}. We add that, of course, the approximate current (\ref{eq2}) can be amended by transverse pieces which are conserved by themselves. \bibliographystyle{unsrt}
\section{Introduction} To operate within a scene, embodied agents require a comprehensive representation of their surroundings. Such perceptual understanding should not be limited to determining object identities, but should rather capture relationships between objects and between the agent and its surroundings. An expressive scene representation should also allow for downstream task completion of interactive tasks without additional training (i.e., zero-shot inference). \begin{figure}[tp] \centering \includegraphics[width=19pc]{figs/teaser.pdf} \vspace{-0.2cm} \caption{ \textbf{Representing relationships.} Our work, Continuous Scene Representations, encodes object relationships into a graph with nodes and edges represented as feature vectors. \textbf{(a)} If an object is shuffled between two trajectories, different embeddings allow for change detection. \textbf{(b)} As the agent moves within a trajectory, new nodes and edges are populated. } \label{fig:teaser} \vspace{-3mm} \end{figure} Towards building such representations, scene graphs \cite{Johnson2015CVPR,zhao2013scene,armeni20193d} are a candidate as they provide a compact and explicit description of a scene. In a typical scene graph pipeline, it is common to define a set of relationship labels, use them to manually annotate connections between objects in frames, and train a model to infer target graphs. However, once labels are defined, there are inherent limitations. Once trained, models are restricted to a fixed set of relationships. Even if all relationships are modeled, how well can pre-defined discrete symbols represent the complex relationships between objects? Consider \textsc{[support]}, a common semantic relation used in the literature \cite{zellers2018neural,yang2018graph,li2017scene}. The relationship ``table supports mug" indicates the mug is on the table, but it does not capture \emph{where}. In practice, the agent behavior may depend on this information. 
Beyond the inherent limitations of modeling relationships as discrete symbols, scene graphs used in the literature are typically static \cite{xu2017scene,ji2020action,lu2016visual} (i.e., they represent a snapshot of a scene or a bundle of frames). In embodied settings, new objects are observed as an agent explores, and the scene representation should update on-the-fly. If an agent returns to a position, it should determine if objects have moved. Inspired by these observations, we develop a scene representation that is more suitable for embodied AI tasks. We propose Continuous Scene Representations (CSR\xspace), a novel approach to construct a scene representation from egocentric RGB images as an embodied agent explores a scene. To address limitations of traditional scene graphs, our goal is to represent relations between objects as continuous vectors and update the representation on-the-fly as the agent moves. To enable downstream interactive task execution without additional training, we also present a simple strategy for planning with respect to the representation. Constructing such a scene representation interactively poses various challenges. The graph should accommodate the new objects, and relationships with other objects should be inferred. A successful algorithm should also determine when detections correspond to the same object even in different views. To tackle these challenges, our idea is to learn \emph{object relational embeddings} via a contrastive loss to represent the nodes and edges of a scene representation. In this scheme, modeling is not constrained to a pre-defined set of symbols. The agent maintains a memory of previously encountered embeddings. As the agent extracts new embeddings from egocentric observations, it compares them to the memory to determine which embeddings are new and which already exist. This can be used to detect changes in the scene and update the representation as shown in Fig. \ref{fig:teaser}. 
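The matching step described above, deciding whether a newly extracted embedding corresponds to an object already held in memory or is new, can be sketched as a nearest-neighbour lookup under cosine similarity. The threshold value and the use of plain cosine matching are illustrative assumptions for this sketch, not the exact procedure used by CSR.

```python
import numpy as np

def match_to_memory(new_embs, memory, threshold=0.8):
    """Match relational embeddings from the current view against memory.

    new_embs: (N, D) array of L2-normalised embeddings from the new frame.
    memory:   (M, D) array of L2-normalised embeddings seen so far.
    Returns a list of length N where entry i is the index of the matched
    memory slot, or -1 if the embedding is judged to be new (its best
    cosine similarity falls below the threshold).
    """
    if len(memory) == 0:
        return [-1] * len(new_embs)
    sims = new_embs @ memory.T            # (N, M) cosine similarities
    best = sims.argmax(axis=1)
    return [int(j) if sims[i, j] >= threshold else -1
            for i, j in enumerate(best)]

# Toy example: memory holds two unit embeddings; the first new embedding
# matches slot 0, the second is unseen and is flagged as new.
memory = np.eye(3)[:2]
new_embs = np.stack([np.eye(3)[0], np.eye(3)[2]])
matches = match_to_memory(new_embs, memory)
```

Embeddings flagged as new would be appended to the memory, while a match to an existing slot at a different relative position is evidence that the object has moved.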
We perform experiments using the AI2-THOR \cite{ai2thor} framework and the YCB-Video dataset \cite{xiang2018posecnn}. We show experimentally that (1) \emph{without any task-specific training}, a simple planning approach that employs CSR\xspace as the underlying representation outperforms map-based, reinforcement, and imitation learning baselines trained directly on the task of room rearrangement~\cite{batra2020rearrangement,weihs2021visual}. (2) CSR\xspace is able to capture commonly used discrete relations that can be extracted via linear probes. (3) CSR\xspace effectively captures spatial relationships between objects within a scene. (4) Without any fine-tuning or hyperparameter tuning, a CSR\xspace trained on AI2-THOR is able to track objects over time in the real-world YCB-Video dataset~\cite{xiang2018posecnn}, which contains objects unseen during training. \section{Related Work} \noindent\textbf{Scene graphs.} There are various methods in the literature that use scene graphs to develop richer scene understanding models. It is common to build scene graphs for static images \cite{lu2016visual,li2017scene,xu2017scene,dai2017detecting,zellers2018neural,yang2018graph}. One of the major issues of these approaches is the lack of temporal information (i.e., there is no mechanism to identify or track changes in a scene over time). To overcome this issue, other approaches create scene graphs from videos \cite{shang2017video,tsai2019video,liu2020beyond,ji2020action,cong2021spatial,ost2021neural}. These types of scene graphs capture temporal information; however, they are created from pre-recorded videos and are hence not suitable for embodied tasks, whose observations depend on the actions taken on-the-fly. The mentioned image-based and video-based scene graphs capture only 2D relations between objects in a scene.
However, others propose to create 3D scene graphs that encode 3D relationships between the components of a scene \cite{armeni20193d,wald2020learning,zhou2019scenegraphnet,chen2019holistic++,fisher2011characterizing}. Our method is closer to the approaches that create scene graphs in embodied or physics-based settings \cite{rosinol20203d,du2020learning,zhu2021hierarchical,li2021embodied,Battaglia2016InteractionNF}. There are two important differences between our work and these approaches. First, we do not have a pre-defined set of relationships (e.g., \textsc{[support]}, \textsc{[next-to]}). Instead, we represent the relationships by a learned embedding. Second, our approach does not require object category information at inference. \noindent\textbf{Mapping and localization.} Scene graphs are a form of abstraction for metric-semantic maps, so we discuss examples of mapping work. Classical mapping and localization work in the robotics literature \cite{mur2017orb,engel2014lsd,davison2007monoslam} relies on low-level geometric features and does not encode semantic information. There are object-based maps that encode object-level semantics and relationships \cite{bao2012semantic,salas2013slam++,bowman2017probabilistic}. However, they still operate under the assumption of static environments. Dynamic SLAM approaches \cite{runz2017co,xu2019mid,strecke2019fusion} have been proposed to handle moving objects in a scene. However, they primarily focus on table-top scenes. Recently, approaches have been designed to handle more complex dynamic scenes \cite{Rosinol21ijrr-Kimera,wong2021rigidfusion}. There are a few major differences between these approaches and our work: (1) Our method produces an abstract sparse representation rather than a dense representation. (2) We create a continuous representation of the scene.
(3) Methods such as \cite{wong2021rigidfusion} require a full sequence of observations as input, which is not suitable for embodied applications since the observations vary depending on the taken actions. \noindent\textbf{Embodied tasks.} Recently, various embodied tasks such as navigation \cite{yang2018visual,wortsman2019learning,Wijmans2020DDPPOLN,perez2021robot,chaplot2020learning,objectnav}, instruction following \cite{anderson2018vision,Shridhar2020ALFREDAB}, embodied question answering \cite{Das2018EmbodiedQA,Gordon2018IQAVQ}, object manipulation \cite{ehsani2021manipulathor,shen2020igibson,gan2021threedworld,gadre2021act}, and room rearrangement \cite{batra2020rearrangement,weihs2021visual} have been of interest. These tasks benefit substantially from some form of memory to store the state of the scene. Neural graph-based \cite{Savarese-RSS-19,savinov2018semi,wu2019bayesian,chaplot2020neural} and dense representations \cite{gupta2017cognitive,chaplot2020learning} have been proposed to construct the map of the environment. These methods mostly focus on creating occupancy maps of free space and obstacles or the reachability of different nodes. In our approach, we propose a scene representation that encodes object relations by feature embedding and evolves based on new observations and agent actions. \section{Scene Representation} \label{sec:approach} Embodied AI requires an agent to have an understanding of the environment and to update its knowledge based on new observations. In this paper, our goal is to model scenes with our Continuous Scene Representations (CSR\xspace), which can then be used in downstream visual and embodied tasks (Fig.~\ref{fig:overview}). We start by giving an overview of CSR\xspace (Sec. \ref{sec:overview}). Then we address learning and updating the scene representation (Sec. \ref{sec:representation}). Finally, we describe strategies for linking CSR\xspace to agent embodiment (Sec. 
\ref{sec:grounding}) and detecting changes in the scene (Sec.~\ref{sec:changes}). \begin{figure}[tp] \centering \includegraphics[width=\linewidth]{figs/overview.pdf} \caption{ \textbf{Overview.} \textbf{(a)} Given an egocentric observation $I_t$, the scene representation module extracts nodes and edges to update the continuous scene representation (CSR\xspace). \textbf{(b)} References to observed nodes in the current image are stored in an embodied state graph, which links agent poses to CSR\xspace nodes that are seen from that pose. \textbf{(c)} An exploration policy infers an action to more comprehensively observe the scene and build a more complete CSR\xspace. \textbf{(d)} As the agent moves, the embodied state graph is updated with the action that leads to the next agent pose. } \label{fig:overview} \vspace{-3mm} \end{figure} \subsection{Overview} \label{sec:overview} Given a navigating agent in a scene (e.g., a robot in a kitchen), our goal is to create a CSR\xspace of the room. A CSR\xspace is a graph where nodes represent objects and edges represent the relationships between them. Both nodes and edges are represented by continuous valued features. As the agent moves, the CSR\xspace updates to accommodate observed objects and relationships. The agent is situated within the scene representation to allow for planning. \noindent\textbf{Features for nodes and edges.} To motivate continuous valued nodes and edges, consider a scene graph that models the \textsc{[support]} relation. In a kitchen scene, the graph might include a directed edge from \textsc{[stove]} to \textsc{[kettle]}, representing ``kettle is on the stove''. This representation throws away useful information like the appearance of the relationship or the relative positions of the objects. If, instead, the representation were a continuous vector, it could preserve more information.
Moreover, a continuous representation can implicitly encode many other relations (such as \textsc{[next-to]} and \textsc{[inside]}), which could be recovered when necessary using light-weight, trainable linear projection heads. Hence, learning continuous features for relations could provide a flexible representation that could conceivably adapt based on an agent's needs. \noindent\textbf{Scene representation consistency.} To motivate consistency of the representation, consider again the example of the kettle. As an agent navigates in the scene, it should determine if a kettle is the same object from previous timesteps to accurately model the underlying relationships. In the terminology of CSR\xspace, as the agent moves, it must decide if observed objects match with existing nodes or should spawn new nodes. Edges should be propagated accordingly. This is challenging, especially in our setting that uses only RGB images as input. \noindent\textbf{Adding the embodiment.} CSR\xspace does not directly model the agent or its actions. Such information must be captured if an agent is to navigate to a particular CSR\xspace node. For example, given a node corresponding to \textsc{[kettle]}, it would be useful to retrieve the sequence of actions to get to a position where the kettle is observable. \noindent\textbf{Scene representation notation.} A CSR\xspace is a graph $\mathcal{G_{\text{CSR\xspace}}(\mathcal{N_{\text{CSR\xspace}}}, \mathcal{E_{\text{CSR\xspace}}})}$, where $\mathcal{N_{\text{CSR\xspace}}}, \mathcal{E_{\text{CSR\xspace}}}$ are the sets of nodes and edges, respectively. Each node and edge is a vector in $\mathbb{R}^L$, $L$ being the feature dimension. A local scene representation $\mathcal{G}^t_{\text{CSR\xspace}}$ is the continuous scene representation constructed based on the observation of the agent at time $t$. 
There is a detection function that takes $I_t \in \mathbb{R}^{H \times W \times 3}$ and extracts bounding box masks $\{r_1, r_2, ..., r_n \}$, $r_i \in \{0, 1\}^{H \times W}$, one for each object. Each mask $r$ takes the value one within the box and zero elsewhere. A CSR\xspace encoder $f_\text{CSR\xspace} (I_t, r_i, r_j) \mapsto y_{i, j} \in \mathbb{R}^L$ is introduced to map directed pairs of objects to their feature representations, where $ y_{i, j}$ is the continuous valued vector representing the feature between node $i$ and $j$. If $i=j$, then $y_{i, j} \in \mathcal{N_{\text{CSR\xspace}}}$; otherwise, $y_{i, j} \in \mathcal{E_{\text{CSR\xspace}}}$. A matching function $M_{\text{CSR\xspace}}$ aggregates the local observation into $\mathcal{G_{\text{CSR\xspace}}}$ by taking an existing CSR\xspace and a local scene representation $\mathcal{G}^t_{\text{CSR\xspace}}$ and updating $\mathcal{G_{\text{CSR\xspace}}}$. \noindent\textbf{Embodied state graph notation.} An embodied state graph $\mathcal{G_{\text{state}}(\mathcal{S}, \mathcal{T})}$ contains states $\mathcal{S}$ pointing to nodes in the CSR\xspace, with one state for every egocentric observation, and action transitions $\mathcal{T}$ between states. The nodes $\mathcal{S}$ keep track of references to the objects and relationships in $\mathcal{G_{\text{CSR\xspace}}}$ that are seen by the agent at that state. A transition takes place via agent actions (e.g., \textsc{MoveForward}). Hence, $\mathcal{G_{\text{state}}}$ stores information about how the agent should reach particular visual observations. \subsection{Creating the Scene Representation} \label{sec:representation} In this section, we explain how to create a CSR\xspace or $\mathcal{G_{\text{CSR\xspace}}}$ (the two are used interchangeably). We first discuss building a local representation from the current egocentric view and learning node and edge representations. Since the agent explores over time, we then describe how the local information is aggregated.
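As a concrete summary of this notation, the sketch below shows one minimal way the two graphs could be stored (Python; the class names and feature dimension are our own illustrative choices, not prescribed by our implementation):

```python
import numpy as np

L = 128  # feature dimension (illustrative choice)

class CSRGraph:
    """Continuous scene representation: nodes and directed edges are L-dim vectors."""
    def __init__(self):
        self.nodes = {}  # node id i -> y_{i,i}, an (L,) feature vector
        self.edges = {}  # ordered pair (i, j), i != j -> y_{i,j}, an (L,) feature vector

class StateGraph:
    """Embodied state graph: one state per egocentric view, action-labeled transitions."""
    def __init__(self):
        self.states = {}       # state id -> set of CSRGraph node ids observed there
        self.transitions = {}  # (state id, action) -> next state id
```

Keying edges by ordered pairs keeps the representation directed, matching the distinction between $y_{i,j}$ and $y_{j,i}$ above.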
\noindent\textbf{Building the local scene representation.} In conventional scene representations, nodes are represented by class labels (e.g., \textsc{[mug]} and \textsc{[sink]}), and edges by attributes relating classes (e.g., \textsc{[under]} and \textsc{[inside]}). Instead, CSR\xspace uses continuous features for nodes and edges. To get box region proposals corresponding to nodes, a detector is employed on the current image $I_t$, which yields $n$ detections. In practice, we use a Faster-RCNN model; however, class labels are discarded. All $n^2$ pairs of regions are mapped into a latent space, using the CSR\xspace encoder $f_{\text{CSR\xspace}}$ as depicted in Fig. \ref{fig:local_graph}. A batch of $n^2$ items is fed through $f_{\text{CSR\xspace}}$, with each item in the batch containing (1) a unique ordered pair of regions and (2) $I_t$. The result is $n^2$ feature vectors, which form the local scene representation $\mathcal{G}^t_{\text{CSR\xspace}}$ with $n$ node features and $n^2-n$ edge features. While CSR\xspace feature extraction is quadratic in the number of detections, all computations are independent and hence parallelizable. \begin{figure}[tp] \centering \includegraphics[width=\linewidth]{figs/local_graph.pdf} \caption{ \textbf{Building the Local Graph.} A detector extracts $n$ boxes and all $n^2$ pairs of boxes are batched and fed to the CSR\xspace encoder. This yields the CSR\xspace representation for this image. Node features are in green and edge features are in blue. } \label{fig:local_graph} \vspace{-3mm} \end{figure} What is a good objective to learn the CSR\xspace encoder $f_{\text{CSR\xspace}}$? Our high-level idea is to embed the same relationship from different viewpoints to the same location in the embedding space, assuming objects do not move within a trajectory. Consider again the kettle on the stove. The relationship of the kettle to the stove can be observed from different angles, with a variety of objects in the background.
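As an aside, the $n^2$-pair construction of Fig.~\ref{fig:local_graph} can be sketched in a few lines (a minimal NumPy version; we assume here, for illustration, that the encoder consumes the image stacked with the two box-mask channels):

```python
import numpy as np

def build_local_csr(encoder, image, masks):
    """Encode all n^2 ordered pairs of detected boxes in one parallel batch.

    image: (H, W, 3) array; masks: list of n binary (H, W) box masks.
    Returns an (n, n, L) array: entry (i, i) is a node feature, (i, j) an edge.
    """
    n = len(masks)
    items = [np.dstack([image, masks[i][..., None], masks[j][..., None]])
             for i in range(n) for j in range(n)]
    batch = np.stack(items)  # (n^2, H, W, 5): RGB plus the two mask channels
    feats = encoder(batch)   # (n^2, L); all items are independent, so parallelizable
    return feats.reshape(n, n, -1)
```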
However, the global characteristics of the relationship do not change across views. The same argument holds for nodes. To learn features capable of registering relations from different viewpoints, we take inspiration from the momentum contrast approaches \cite{he2020momentum,chen2020improved}. Given two views of the same pair of objects, a representation is extracted from the first view via $f_{\text{CSR\xspace}}$ and from the second view via a momentum encoder, which has the same architecture and initialization as $f_{\text{CSR\xspace}}$, but is updated via a slow moving average of $f_{\text{CSR\xspace}}$ weights instead of by gradient updates. InfoNCE loss \cite{oord2018representation} is used to enforce similarity of relations in feature space, \begin{equation} \label{infonce} \mathcal{L}_{\text{CSR\xspace}} = -\log\frac{\exp({\mathsf{CosSim}(q, k^+)/\tau})}{\sum_{i=0}^K\exp({\mathsf{CosSim}(q, k^i)/\tau})}. \end{equation} Here $q$ is the feature representation of a node or edge outputted by $f_{\text{CSR\xspace}}$, $k^+$ is the momentum encoder representation for the second view of the same relationship, each $k^i$ is a negative drawn from a bank of $K$ other momentum encoded relationships, $\tau$ is a softmax temperature scaling parameter, and $\mathsf{CosSim}$ is the dot product between L2 normalized features. Notably, this objective encourages learning features that (1) have high $\mathsf{CosSim}$ when they represent the same relationship, (2) are multi-view consistent in the feature space, and (3) are different from those of other relationships. See Fig.~\ref{fig:loss} for visualization of these attributes. \begin{figure}[tp] \centering \includegraphics[width=19pc]{figs/embedding_space.pdf} \caption{ \textbf{Embedding Space.} The loss encourages the same relationship observed from different views to be close in feature space and far from other relationships.
} \label{fig:loss} \vspace{-3mm} \end{figure} \noindent\textbf{From local to global scene representation.} Since the agent navigates over time, it is not sufficient to capture information from a single view. As the agent traverses, it is necessary to determine if node representations are new objects or correspond to previously seen objects. To handle such complexity, we design a way to update nodes and edges based on the current local observation. Our insight is to use a matching function $M_{\text{CSR\xspace}}$ between the node embeddings from the current local scene representation $\mathcal{G}^t_{\text{CSR\xspace}}$ and $\mathcal{G_{\text{CSR\xspace}}}$ to update $\mathcal{G_{\text{CSR\xspace}}}$. At the first timestep, $\mathcal{G}_{\text{CSR\xspace}} \gets \mathcal{G}^t_{\text{CSR\xspace}}$. For subsequent timesteps, node features corresponding to the same object should have high $\mathsf{CosSim}$. As shown in Fig. \ref{fig:matching}, $M_{\text{CSR\xspace}}$ first computes $\mathsf{CosSim}$ between pairs of features in $\mathcal{N}_{\text{CSR\xspace}}$ and $\mathcal{N}^t_{\text{CSR\xspace}}$, which yields a score matrix. Next, a maximal linear score assignment (Hungarian match) creates an initial matching of nodes. If the score of a match exceeds a threshold, the nodes are considered a true match (more details in Sec.~\ref{sec:rearrangement}). The matched local node features are then averaged with their matches to update the CSR\xspace nodes. For edges, if a new relationship is observed, it is added; otherwise, it is averaged into the representation in an analogous fashion to the nodes. Unmatched nodes and corresponding edges from $\mathcal{G}^t_{\text{CSR\xspace}}$ are then directly added as new nodes and relations. \subsection{Linking CSR\xspace to Agent States for Planning} \label{sec:grounding} We have discussed how to create our representation, which captures objects and their relationships in the scene.
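Before turning to planning, the InfoNCE objective above can be made concrete with a small numerical sketch (a minimal, unbatched NumPy version with illustrative variable names; training in practice uses batched, momentum-encoded features):

```python
import numpy as np

def info_nce(q, k_pos, k_negs, tau=0.07):
    """Negative log softmax probability of the positive among {k_pos} and k_negs.

    q, k_pos: (L,) query and positive features; k_negs: (K, L) negative bank.
    tau is the softmax temperature (0.07 is a common MoCo-style default).
    """
    def cossim(a, b):  # dot product of L2-normalized features
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cossim(q, k_pos)] + [cossim(q, k) for k in k_negs]) / tau
    logits -= logits.max()  # shift for numerical stability; softmax is unchanged
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

Minimizing this loss pulls two views of the same relationship together in feature space while pushing other relationships away, as in Fig.~\ref{fig:loss}.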
How should this representation be used to solve embodied tasks? While using the representation as additional input for downstream training is an option, we present a simple alternative. Namely, we discuss the creation of an embodied state graph $\mathcal{G_{\text{state}}}$ to enable planning within the CSR\xspace. As the agent traverses, we keep track of (1) scene nodes that are observed in each egocentric view, which initialize states in $\mathcal{G_{\text{state}}}$ and (2) the actions that lead to changes in views, which initialize the transitions in $\mathcal{G_{\text{state}}}$. To navigate to a specific scene feature in $\mathcal{G_{\text{CSR\xspace}}}$, the agent can look up a corresponding state where this feature was observed, and navigate to it by planning actions from its current state to the target state in $\mathcal{G_{\text{state}}}$. Since the states in $\mathcal{G_{\text{state}}}$ are linked via actions, the path and actions to the target object can be recovered using a standard breadth-first search planner over the state graph. Note that this simple planning strategy is independent of CSR\xspace creation. In the presence of noise, it is also possible to employ more sophisticated closed-loop planning. \begin{figure}[tp] \centering \includegraphics[width=\linewidth]{figs/matching.pdf} \vspace{-0.4cm} \caption{ \textbf{Matching.} A function $M_{\text{CSR\xspace}}$ takes the local and the existing global CSR\xspace to update the global CSR\xspace. } \label{fig:matching} \vspace{-4mm} \end{figure} \subsection{Detecting Changes} \vspace{-0.2cm} \label{sec:changes} In practice, scenes typically change over time. For example, kitchen equipment frequently moves. Given two trajectories, each observing a different static configuration, it is important to determine the objects that changed.
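Both the within-trajectory update of Sec.~\ref{sec:representation} and the between-trajectory correspondences discussed next rely on the same machinery: a $\mathsf{CosSim}$ score matrix followed by Hungarian assignment and thresholding. A minimal sketch (the threshold value here is illustrative, not the one used in our experiments):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def match_features(feats_a, feats_b, thresh=0.75):
    """Match two feature sets by CosSim plus maximal linear score assignment.

    feats_a: (m, L), feats_b: (n, L). Returns (i, j) index pairs whose
    similarity exceeds `thresh`; unmatched entries of feats_b are treated as new.
    """
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    scores = a @ b.T  # (m, n) CosSim matrix
    rows, cols = linear_sum_assignment(scores, maximize=True)
    return [(i, j) for i, j in zip(rows, cols) if scores[i, j] > thresh]
```

Within a trajectory, matched node features are averaged into the global CSR\xspace, while unmatched local nodes (and their edges) spawn new entries.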
\noindent\textbf{Motivating between-trajectory object correspondences.} Our insight is to detect changes in the scene by leveraging object correspondences between trajectories. If a kettle is observed on a stove in one trajectory and on a countertop in another, the agent should know that this kettle has plausibly moved. While our scene representation captures nodes and edges in the context of the scene, we would also like to detect objects that shuffle over time. This motivates finding object correspondences between trajectories. \noindent\textbf{Between-trajectory object features.} Our idea is to augment the scene nodes of $\mathcal{G_{\text{CSR\xspace}}}$ with an object feature vector that is responsible for matching the object instance between trajectories. This feature is learned similarly to the scene node features. However, instead of positives for the contrastive loss coming from different views in the same static scene, they come from different views across varied initializations of the scene. The representation learner cannot rely on consistent scene information to minimize the loss, but should instead focus only on embedding characteristics of the object instances. Using a Hungarian matching scheme similar to that in Sec.~\ref{sec:representation}, object features are matched between the two trajectories and thresholded based on $\mathsf{CosSim}$. \noindent\textbf{Using low feature similarity to detect change.} Recall that, when learning node and edge features, we leveraged that the scene was static. Based on the object matches, detecting violations to the static assumption provides a signal for what has moved. Concretely, using the object feature assignments, the $\mathsf{CosSim}$ between matched scene nodes and edges (i.e., scene features before object features were introduced) is computed. If scores are lower than a threshold, there is a signal that the object has moved (i.e., the underlying spatial relationship has changed).
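Under these assumptions, change detection reduces to a few lines: given instance-level correspondences between the two trajectories, low similarity between the matched \emph{scene} features flags a move (a minimal sketch; the threshold is illustrative):

```python
import numpy as np

def detect_moved(instance_pairs, scene_feats_a, scene_feats_b, thresh=0.75):
    """Flag instances whose object features matched across trajectories but whose
    scene-node features disagree, i.e., the surrounding spatial context changed.

    instance_pairs: list of (id_a, id_b) correspondences from object-feature
    matching; scene_feats_a / scene_feats_b: dict id -> (L,) scene-node feature.
    """
    def cossim(x, y):
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

    return [(i, j) for i, j in instance_pairs
            if cossim(scene_feats_a[i], scene_feats_b[j]) < thresh]
```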
In summary, the object features are used to match object instances between trajectories, and then low scene node/edge feature similarity is used to determine what moved. \section{Experiments} \label{sec:experiments} In this section, we first present our datasets (Sec.~\ref{sec:datasets}). We then show that Continuous Scene Representations (CSR\xspace) achieve competitive performance on 2-phase visual room rearrangement~\cite{weihs2021visual,batra2020rearrangement} \emph{without training for the task} (Sec.~\ref{sec:rearrangement}), that CSR\xspace encodes spatial information that can be recovered via linear probes (Sec.~\ref{sec:probe}), that CSR\xspace can retrieve layout-consistent views of the scene against hard negatives (Sec.~\ref{sec:change}), and that CSR\xspace is effective for tracking objects in real-world data (Sec.~\ref{sec:ycb}). \subsection{Datasets} \vspace{-0.2cm} \label{sec:datasets} For rearrangement experiments (Sec.~\ref{sec:rearrangement}), we use the RoomR dataset introduced by Weihs \textit{et al.} \cite{weihs2021visual}. Its test and val sets each contain 1000 room rearrangements. To train the CSR\xspace encoder $f_{\text{CSR\xspace}}$, which embeds nodes and edges, we generate a dataset using AI2-THOR~\cite{ai2thor}. For each of the 80 train rooms, we render 20 random agent poses in 5 different room initializations with random lighting, texture, and object locations. RGB images for this and all following datasets are $224\times224$. We mine pairs of objects (i.e., edges) that are viewed from different viewpoints in each room initialization to get a dataset of 900K samples. We additionally create smaller datasets for our various experiments. We create two image datasets of $\sim$5k images in 80/20 train/test splits with spatial relationship labels for pairs of ground truth boxes (e.g., ``$x$ supports $y$''). These are used in linear probes for our edge representations (Sec. \ref{sec:probe}).
A test dataset of 4.4K triplets of images is generated, where a query and a positive image have the same static scene layout and a negative image presents the scene with one object moved. We use this set to evaluate the model's matching ability to retrieve the positive against a hard negative (Sec. \ref{sec:change}). Finally we use the YCB-Video test set \cite{xiang2018posecnn} of 12 videos to evaluate the tracking and updating components of our method on real world data (Sec. \ref{sec:ycb}). Across all datasets, train, validation, and test rooms are constant. More details on the Faster-RCNN detector and the exploration policy network can be found in Appx.~\ref{appx:rcnn}, \ref{appx:policy}. \subsection{Downstream Rearrangement} \vspace{-0.2cm} \label{sec:rearrangement} The goal is to evaluate CSR\xspace for a downstream embodied task, 2-phase visual room rearrangement~\cite{weihs2021visual}. Rearrangement serves as a difficult, but practical, end-task with applications to indoor service robots. Furthermore, performing visual room rearrangement requires capable perception, memory, navigation, and planning, thereby testing how the various components of our approach work together. \begin{table*} \small \centering \tabcolsep=0.25cm \begin{tabular}{l?cccc?cc?cc?cc} \toprule &\multicolumn{4}{c?}{\textsc{Inferred}} & \multicolumn{2}{c?}{100 $\cdot$ \textsc{Success} $\uparrow$} & \multicolumn{2}{c?}{100 $\cdot$ \textsc{\% FixedStrict} $\uparrow$} & \multicolumn{2}{c}{\textsc{\%E} $\downarrow$}\\ Experiment &plan & match & box & traj.& Val. & Test & Val. & Test & Val. 
& Test\\\midrule CSR\xspace (GT-MBT) &\ding{51}&\ding{55}&\ding{55}&\ding{55}& 8.8 & 10.0 & 26.0 & 27.0 & 0.74 & 0.73\\ CSR\xspace (GT-BT) &\ding{51}&\ding{51}&\ding{55}&\ding{55}& 3.0 & 2.2 & 7.9 & 5.9 & 0.97 & 0.98\\ CSR\xspace (GT-T) &\ding{51}&\ding{51}&\ding{51}&\ding{55}& 1.3 & 0.7 & 3.8 & 2.1 & 1.07 & 1.11\\\midrule VRR \cite{weihs2021visual} &-&-&-&-& 0.5 & 0.2 & 1.2 & 0.7 & 1.15 & 1.12\\ VRR+Map \cite{weihs2021visual} &-&-&-&-& 0.6 & 0.3 & 1.6 & 1.4 & 1.15 & \textbf{1.10}\\ CSR\xspace (Ours) &\ding{51}&\ding{51}&\ding{51}&\ding{51}& \textbf{1.2} & \textbf{0.4} & \textbf{3.3} & \textbf{1.9} & \textbf{1.13} & 1.17\\ \bottomrule \end{tabular} \vspace{-0.3cm} \caption{ \textbf{Downstream Rearrangement.} Using CSR\xspace, we are able to outperform the RL based visual room rearrangement baselines in the hard 2-phase setting (i.e., walkthrough and unshuffle phases happen sequentially). Additionally, the improved performance when including ground truth components, hints that as the various components of our approach improve, our end-to-end solution will also improve. }\label{tab:exp_rearrangement} \vspace{-0.2cm} \end{table*} \noindent\textbf{Task.} In a walkthrough phase, an agent observes objects in their target configurations, building a CSR\xspace and embodied state graph: $(^{\text{walk}} \mathcal{G_{\text{CSR\xspace}}}, ^{\text{walk}} \mathcal{G_{\text{state}}})$. The agent leaves the room and 1-5 objects change (e.g., a mug moves from a countertop to a table). In the unshuffle phase, the agent returns and must restore the room to its original layout. In our case, a second unshuffle CSR\xspace and embodied state graph: $(^{\text{un}} \mathcal{G_{\text{CSR\xspace}}}, ^{\text{un}} \mathcal{G_{\text{state}}})$ is built. 
Leveraging the fact that the agent enters the room from the same starting location, we fuse the $^{\text{walk}} \mathcal{G_{\text{state}}}$ and $^{\text{un}} \mathcal{G_{\text{state}}}$ into a unified embodied state graph $^{(\text{walk},\text{un})} \mathcal{G_{\text{state}}}$, using the initial state as a point of correspondence between the two trajectories. While this simplifying assumption is allowable due to the task construction of \cite{weihs2021visual}, it is plausible that state correspondence can be established using features, which is left to future work. By identifying target objects that moved between creation of $^{\text{walk}} \mathcal{G_{\text{CSR\xspace}}}$ and $^{\text{un}} \mathcal{G_{\text{CSR\xspace}}}$ (i.e., high object feature similarity but low node feature similarity as discussed in Sec.~\ref{sec:changes}), it is possible to query current and target locations for the moved objects in $^{(\text{walk},\text{un})} \mathcal{G_{\text{state}}}$. Using a simple breadth-first search planner over $^{(\text{walk},\text{un})} \mathcal{G_{\text{state}}}$, the agent plans to the location of the target object, executes a pick-up action, and navigates to the goal location, where the object is placed.
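The planner itself is standard; below is a minimal sketch of breadth-first search over the fused embodied state graph (state ids, action names, and the transition encoding are illustrative):

```python
from collections import deque

def plan_actions(transitions, start, goal):
    """Shortest action sequence from `start` to `goal`, or None if unreachable.

    transitions: dict state -> list of (action, next_state), as recorded while
    the agent explored during the walkthrough and unshuffle phases.
    """
    parent = {start: None}  # state -> (previous state, action taken)
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:  # walk parents back to recover the action sequence
            actions = []
            while parent[state] is not None:
                state, action = parent[state]
                actions.append(action)
            return actions[::-1]
        for action, nxt in transitions.get(state, []):
            if nxt not in parent:
                parent[nxt] = (state, action)
                frontier.append(nxt)
    return None
```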
To generate the object region proposals used to generate box mask inputs for the CSR\xspace encoder, we use a Faster-RCNN~\cite{ren2015faster} detector. This detector is trained on LVIS \cite{Gupta2019LVISAD} and AI2-THOR data. Leveraging the policy network and detector during walkthrough and unshuffle, we get CSR\xspace representations for each trajectory, with corresponding embodied state graphs. We then detect changes in the scene and execute a plan to restore objects. For more details on the exploration training, reward, detector training, loss, and hyperparameters, see Appx.~\ref{appx:rcnn}, \ref{appx:policy}, and \ref{appx:csr}. \noindent\textbf{Metrics.} We adopt rearrangement metrics \cite{weihs2021visual}: \begin{itemize}[leftmargin=*] \itemsep-0.1em \item \textsc{Success}: The percentage of episodes in which \emph{all} the objects are correctly returned to their original states. \item \textsc{\%FixedStrict}: The percentage of objects returned to their proper place over all rooms. However, if a mistake is made in a room (e.g., an object is moved that should not be), the value is 0 for that rearrangement even if other objects are returned correctly. \item \textsc{\% E}: A ratio between the energy of a room before and after rearrangement. The energy function decreases monotonically as the agent brings objects closer to their target destinations. A value of 0 corresponds to a perfect rearrangement, 1 to no change, and greater than 1 to a final configuration further from the goal than the initialization. \end{itemize} \noindent\textbf{Ablation.} We ablate CSR\xspace in the context of the rearrangement task. We substitute ground truth (GT) for key components of our model to ablate the impact of their performances on the rearrangement task. We study the effects of using GT \textbf{M}atching, \textbf{B}oxes, and \textbf{T}rajectories. 
For GT matching, we replace the matching function $M_{\text{CSR\xspace}}$ with GT object instance labels, which allow for perfect matching, both within a trajectory and between walkthrough and unshuffle trajectories. For GT boxes, we replace the Faster-RCNN detector with GT boxes. For GT trajectories, we replace the trained exploration policy with a heuristic policy that visits all objects that get shuffled in both walkthrough and unshuffle phases. Concretely, we compare against the following ablated models: \begin{itemize}[leftmargin=*] \itemsep-0.2em \item \textit{CSR\xspace (GT-MBT)}: GT object \textbf{M}atches, \textbf{B}oxes, and heuristic \textbf{T}rajectories. \item \textit{CSR\xspace (GT-BT)}: Matching is estimated within a trajectory and between the two trajectories, keeping box and trajectory GT constant from CSR\xspace (GT-MBT). \item \textit{CSR\xspace (GT-T)}: Boxes are estimated using the Faster-RCNN detector, keeping the heuristic exploration trajectories constant from CSR\xspace (GT-BT). \end{itemize} \noindent\textbf{Baselines.} We compare against the following prior work for the challenging 2-phase rearrangement setting: \begin{itemize}[leftmargin=*] \itemsep-0.2em \item \textit{VRR}: Corresponds to the (RN18, PPO+IL) from \cite{weihs2021visual}. The model uses a frozen ImageNet-pretrained ResNet-18 and a mixture of imitation learning and reinforcement learning to train \emph{directly} on the rearrangement task. \item \textit{VRR+Map}: Similar to VRR, but with the addition of Active Neural SLAM \cite{chaplot2020neural} for mapping. Corresponds to (RN18+ANM,PPO+IL) from~\cite{weihs2021visual}. \end{itemize} \noindent\textbf{Is CSR\xspace competitive vs.
sophisticated task-specific baselines?} The results in Tab.~\ref{tab:exp_rearrangement} support that using a simple exploration policy and detector in concert with CSR\xspace performs favorably on the \textsc{Success} and \textsc{\%FixedStrict} metrics compared to the sophisticated baselines trained directly on the task. This shows the effectiveness of CSR\xspace since \textit{no part of our approach was trained to rearrange rooms}. \noindent\textbf{How do improved submodules improve the solution?} We replace each CSR\xspace submodule with GT as a proxy for an improved submodule. See Tab.~\ref{tab:exp_rearrangement}. Looking at CSR\xspace (GT-T), we see that performance is comparable to CSR\xspace (Ours), suggesting that the trained exploration agent generalizes sufficiently. Comparing CSR\xspace (GT-BT), we see larger gains suggesting that with better detection, performance would improve by $\sim 3$ times. Finally when comparing to CSR\xspace (GT-MBT), we find that there is room for improvement in matching. These results suggest a high ceiling to our pipeline. \subsection{Representation Linear Probes} \label{sec:probe} We probe if CSR\xspace encodes interpretable relationships. \noindent\textbf{Tasks.} We linearly probe CSR\xspace features for two relationship classification tasks. First, given a pair of box regions of interest, predict if one object supports the other. We call this the \textsc{[support]} task. This is a three-way classification problem, where object 1 either supports object 2, vice versa, or there is no support relationship. The second task is predicting, given a pair of boxes, if they are supported by the same surface. We call this the \textsc{[sibling]} task. In this binary task, the direction of the edge does not matter. \noindent\textbf{Model.} We fit a linear layer on a frozen CSR\xspace feature extractor $f_{\text{CSR\xspace}}$. We train two linear heads for \textsc{[support]} and \textsc{[sibling]} respectively. 
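A linear probe in this sense is simply a softmax head fitted on frozen features; the sketch below shows a minimal NumPy version (the gradient-descent fitting and hyperparameters are illustrative, not the exact training recipe):

```python
import numpy as np

def fit_linear_probe(feats, labels, n_classes, lr=0.5, steps=300):
    """Train a linear softmax head on frozen features via gradient descent.

    feats: (N, L) frozen CSR features; labels: (N,) integer class labels.
    Returns the (L, n_classes) weight matrix of the probe.
    """
    n, dim = feats.shape
    W = np.zeros((dim, n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (probs - onehot) / n     # cross-entropy gradient
    return W
```

Crucially, only $W$ is updated; the feature extractor stays frozen, so probe accuracy measures what the features already encode.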
\noindent\textbf{Metrics.} We consider the mean accuracy across both tasks. \noindent\textbf{Baselines.} We compare to the following baselines: \vspace{-1mm} \begin{itemize}[leftmargin=*] \itemsep-0.3em \item \textit{Random Features}: Randomly initialized and frozen ResNet-18 receiving 5 channels as input (3 for RGB and 2 for object boxes). \item \textit{Supervised \textsc{[support]}}: a ResNet-18 backbone trained end-to-end on the \textsc{[support]} task. The representation is frozen and a separate linear head is trained for the \textsc{[sibling]} task. \item \textit{Supervised \textsc{[sibling]}}: Similar to the \textit{Supervised \textsc{[support]}} model but trained end-to-end on the \textsc{[sibling]} task. \item \textit{Fine-tuned \textsc{[support]}, Fine-tuned \textsc{[sibling]}}: Same as \textit{Supervised \textsc{[support]}, \textsc{[sibling]}}; however, encoders are initialized with CSR\xspace contrastively pre-trained weights instead of random initialization. \end{itemize} \begin{table} \scriptsize \centering \tabcolsep=0.3cm \begin{tabular}{l?c|c} \toprule Experiment & Linear Probe mAcc & Retrieval Acc.\\\midrule Random Chance & 41.7 & 50.0\\ Random Features & 51.8 & 52.4\\ Supervised \textsc{[support]} & 84.3 & 69.3\\ Supervised \textsc{[sibling]} & 70.2 & 54.9\\ Fine-tuned \textsc{[support]} & 77.7 & 64.6\\ Fine-tuned \textsc{[sibling]} & 78.3 & 57.6\\ \midrule CSR\xspace features & \textbf{85.3} & \textbf{84.3}\\ \bottomrule \end{tabular} \vspace{-0.3cm} \caption{ \textbf{Linear Probes and Retrieval.} CSR\xspace features outperform supervised and fine-tuned baseline features. } \label{tab:exp_rel_probe} \vspace{-3mm} \end{table} \noindent\textbf{Benefits to continuous instead of discrete relations.} While the baselines outperform CSR\xspace for the specific tasks they are trained for, when averaging performance over both tasks, CSR\xspace outperforms the baselines, as seen in Tab.~\ref{tab:exp_rel_probe} (left).
These results support that training on specific relations can be detrimental in transfer settings and that our features can be used to recover specific spatial relations. \subsection{Retrieving Consistent Relationships} \label{sec:change} Our CSR\xspace learning objective assumes scenes are static, enforcing the similarity of relations between objects seen from different viewpoints. We hypothesize that our features should allow for detecting violations of this assumption. \noindent\textbf{Task.} Consider triplets of images. The first is a query image containing a pair of objects. The second is the \emph{positive} image taken from a different vantage point that also captures the same pair. The third is the \emph{negative} image taken from the same pose as the second image; however, one of the objects in the pair is moved. The task then is to match the query and the positive image with $\mathsf{CosSim}$ of features as the matching criterion. Fig.~\ref{fig:retrival} shows sample data. \begin{figure}[tp] \centering \includegraphics[width=\linewidth]{figs/retrival.pdf} \vspace{-0.5cm} \caption{ \textbf{Consistent relationship sample data.} Features for edges in the same room layout should be more similar than features in different layouts even with relatively small pixel differences in the positive and negative images. } \label{fig:retrival} \vspace{-0.3cm} \end{figure} \noindent\textbf{Model.} We use our $f_{\text{CSR\xspace}}$ directly to embed the relationship in each of the three images. We compute the $\mathsf{CosSim}$ of the query features with the features from the positive and the negative images. To solve the classification task, we choose the image that has the higher $\mathsf{CosSim}$ score. \noindent\textbf{Metrics.} We measure the accuracy of the predictions. \noindent\textbf{Baselines.} We adopt the same baselines as in Sec.~\ref{sec:probe}. The Supervised \textsc{[support]} baseline is the main point of comparison.
This model determines whether two objects participate in a support relationship (i.e., one on top of the other). Because changes in the dataset involve objects shifting to different supporting surfaces, these events should correspond to drastic changes in feature space. \noindent\textbf{Results.} Based on the results in Tab.~\ref{tab:exp_rel_probe} (right), we find that CSR\xspace outperforms the baselines in terms of matching the query and positive images. See Fig.~\ref{fig:retrival}, which qualitatively illustrates the difficulty of the task. The results suggest that CSR\xspace learns some underlying spatial relationships. \subsection{Real World Tracking} \vspace{-0.2cm} \label{sec:ycb} To demonstrate the potential of the representation on real-world data, we additionally evaluate CSR\xspace features, trained in simulation, on object tracking in YCB-Video~\cite{xiang2018posecnn}. \noindent\textbf{Task.} The goal is to track objects over a trajectory. We formulate the problem as clustering, where, with each new observation, a detected object can either be assigned to an existing cluster (representing a previously observed object) or to a new cluster. The target GT clustering is one cluster for each object instance, with a count per cluster corresponding to the number of times the object was detected. \noindent\textbf{Model.} We apply CSR\xspace feature extraction and matching within a trajectory. Note that we use GT boxes as detections in lieu of a detector. Each node represents a cluster with a number of elements equal to the number of assignments to that cluster. \noindent\textbf{Metrics.} For each video, we measure the Adjusted Rand Index (ARI)~\cite{hubert1985comparing}, a common clustering accuracy metric, between the CSR\xspace and GT assignments. A value of 1 corresponds to perfect clustering and 0 to random clustering.
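A minimal sketch of this clustering formulation follows. The greedy assignment rule, the similarity threshold, and the running-mean cluster update are illustrative assumptions for the sketch, not the exact procedure of the paper; the sketch only mirrors the idea that each detection either joins a previously observed object's cluster or opens a new one.

```python
import numpy as np

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_clusters(features, threshold=0.9):
    """Greedy incremental clustering: a detection joins the most similar
    existing cluster if its CosSim exceeds `threshold` (an illustrative
    value), otherwise it opens a new cluster. Cluster features are kept
    as running means of their assigned detections."""
    centers, counts, labels = [], [], []
    for f in features:
        j = int(np.argmax([cos_sim(f, c) for c in centers])) if centers else -1
        if j >= 0 and cos_sim(f, centers[j]) >= threshold:
            counts[j] += 1
            centers[j] = centers[j] + (f - centers[j]) / counts[j]
            labels.append(j)
        else:
            centers.append(np.asarray(f, dtype=float))
            counts.append(1)
            labels.append(len(centers) - 1)
    return labels

# Two noisy detections of one object, then a detection of a second object.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.99, 0.05, 0.0])
b1 = np.array([0.0, 1.0, 0.0])
print(assign_clusters([a1, a2, b1]))  # -> [0, 0, 1]
```

Comparing the predicted label sequence against the GT instance labels with ARI then scores how well the tracker recovers one cluster per object instance.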
\noindent\textbf{Ablations/Baselines:} \vspace{-0.3cm} \begin{itemize}[leftmargin=*] \itemsep-0.2em \item \textit{No-Update}: Node features are fixed at initialization without the feature updating discussed in Sec.~\ref{sec:representation}. \item \textit{Opt.}: Oracle with feature updating and matching threshold set optimally. \end{itemize} \noindent\textbf{Results.} In Fig.~\ref{fig:real_world}, we see near-perfect matching (compared to the baseline \textit{Opt.}) and the usefulness of updating features (compared to the \textit{No-Update} baseline). These preliminary results suggest that CSR\xspace trained in simulation can be applied to real-world data distributions and that updating the representation on-the-fly is critical for tracking. \begin{figure}[tp] \centering \includegraphics[width=\linewidth]{figs/real_world.pdf} \vspace{-2em} \caption{ \textbf{Matching on YCB-Video.} Tracking performance is close to optimal and outperforms an ablated version without feature updating. } \label{fig:real_world} \vspace{-0.3cm} \end{figure} \section{Limitations} \vspace{-0.2cm} In this work, we mainly focus on capturing the pose of objects and ignore the state of the objects (e.g., open vs. closed microwave). The state of objects plays an important role in task planning. Future work will consider incorporating object state. Also, the interaction of the agents with the objects is not used to update the scene representations. Hence, future work will address agent-object interactions. Additionally, developing learned matching functions that go beyond cosine similarity is a promising direction for improving matching performance. \section{Conclusion} \vspace{-0.2cm} We present Continuous Scene Representations (CSR\xspace) to model objects and their relations as continuous-valued feature vectors. We describe an algorithm for updating the CSR\xspace as an agent moves and connect the representation to the agent itself.
We discuss the use of our pipeline to tackle visual room rearrangement. Our experiments show that (1) a simple planner using CSR\xspace outperforms sophisticated baselines on visual room rearrangement, even though our model is not trained directly for this task; (2) the relational features from CSR\xspace can be used to recover semantically meaningful spatial relations; and (3) the representation can detect subtle visual changes in the scene. We hope this work will encourage investigation of techniques in scene representation learning for other downstream embodied tasks, going beyond learning task-specific representations. \section*{Appendix} \section{Faster-RCNN} \label{appx:rcnn} Here we provide the architecture, data, and training details for the Faster-RCNN network. \noindent\textbf{Architecture.} We use the Faster-RCNN default architecture from Detectron2 \cite{wu2019detectron2}. \noindent\textbf{Data.} We use $\sim$10k images from train rooms in AI2-THOR with bounding box labels and $\sim$127k train images from LVIS \cite{Gupta2019LVISAD}. We choose this number of THOR images to ensure diversity of views without repetition of frames that would appear very similar. In total, the detector is trained on 1235 classes (accounting for the class overlap between LVIS and the $\sim$100 THOR categories). \noindent\textbf{Training.} We train using the default Detectron2 3x schedule. We train on a machine with eight GeForce GTX TITAN X NVIDIA GPUs. The training takes $\sim$2 days. \noindent\textbf{Inference.} At inference, we treat the Faster-RCNN module as a region proposal network. Hence, we only require its detections for extracting node and edge features. Our proposed algorithm does not use the predicted class labels as input to create node and edge feature representations. \section{Policy} \label{appx:policy} Here we provide the architecture, data, and training details for the policy network.
\noindent\textbf{Architecture.} The policy consists of three convolutional layers and a GRU. It is an actor-critic style network. The input is an image $I \in \mathbb{R}^{224 \times 224 \times 3}$. Note that we do not feed in the action from the last timestep. The output of the conv. backbone is a volume, $v \in \mathbb{R}^{24 \times 24 \times 512}$, which is flattened and projected by a linear layer into a feature $x \in \mathbb{R}^{512}$. $x$ is taken as input to the GRU module, which maintains a hidden state $h \in \mathbb{R}^{512}$ and outputs another feature vector $z \in \mathbb{R}^{512}$. A linear actor head takes $z$ and projects it to give logits over the eight discrete actions. A second linear critic head takes $z$ and projects it to give the critic score. \noindent\textbf{Data.} We conduct training rollouts with random starting locations drawn from the 80 train rooms in AI2-THOR. \noindent\textbf{Training.} We adopt the AllenAct \cite{Weihs2020AllenActAF} framework for training. Specifically, we use the DD-PPO \cite{Schulman2017ProximalPO, Wijmans2020DDPPOLN} algorithm to train our network. We train on machines that have 48 CPU cores and four T4 NVIDIA GPUs. We train for 200 million steps, which takes $\sim$2 days. We use default AllenAct PPO settings, with a rollout episode length of 150 steps. We employ sparse rewards, which are computed based on the known simulation state during training. The agent receives a positive reward of 0.1 if it visits a new position (agent orientation is disregarded) and a reward of 0.4 if it sees a new object within a rollout. There is also a step penalty of -0.01 and a failed action penalty of -0.03. \section{Continuous Scene Representation} \label{appx:csr} \noindent\textbf{Architecture.} We use a standard ResNet-18 architecture. We modify \texttt{conv1} to take a five-channel input (three channels for RGB, a fourth channel for the first binary box mask, and a fifth channel for the second binary box mask).
Hence, our ResNet takes input $\in \mathbb{R}^{224 \times 224 \times 5}$. After the ResNet, we have an MLP bottleneck projection head, which takes in a feature $\in \mathbb{R}^{512}$ and outputs a feature $\in \mathbb{R}^{512}$. Architectural details for the network that extracts object correspondence features are the same. \noindent\textbf{Data.} As stated in the paper, we capture 20 random agent poses in 5 different configurations (random object placement and scene textures) in the 80 different train rooms. This leads to a train dataset of $\sim$600k relations. The rest of the dataset is composed of a nearly even split between validation and test relations, yielding a total dataset size of $\sim$900k relations. We provide some train statistics to give a better understanding of the dataset. Of the $\sim$600k relations, $\sim$60k are node relations; the rest are directed edge relations. In total, there are $\sim$3k different object instances across the $\sim$100 AI2-THOR categories. \noindent\textbf{Training.} We train on a machine with eight GeForce GTX TITAN X NVIDIA GPUs. Our learning leverages the InfoNCE \cite{oord2018representation} loss and builds on the MoCo framework \cite{he2020momentum, chen2020improved}. We use a relatively small queue of size 1024 for negatives. The InfoNCE temperature parameter is 0.07 and the momentum update coefficient is 0.999. We train in minibatches of 512, with an initial learning rate of 0.1, a cosine decay schedule, and a standard SGD with momentum optimizer. Our model takes less than one day to converge. \noindent\textbf{Constants.} We must set three thresholds in our method: (1) to determine within-trajectory matches, (2) to determine object correspondence matches, and (3) to determine whether an object has moved. For (1), if the cosine similarity between two matched node features within a trajectory is greater than 0.5, we consider the instances a true match.
For (2), if the cosine similarity of matched object features between trajectories is greater than 0.4, we consider the objects to be a true match. Finally, for (3), if the cosine similarity of the node features drops below 0.8 after nodes have been matched via object features between trajectories, we consider the object a candidate for having moved. \section{Linear Probes} Here we provide further details on our linear probes. \noindent\textbf{Data.} We consider two tasks, \textsc{[support]} and \textsc{[sibling]}. For \textsc{[support]}, we create a balanced dataset with $\sim$2k positive examples of an object on top of another object. By reversing the order of the boxes for the input, we get another $\sim$2k examples of an object under another object. Finally, we create a third category of $\sim$2k examples of unrelated objects (i.e., objects that do not follow the \textsc{[support]} relationship). For the \textsc{[sibling]} relation, we create two categories, each with $\sim$2k examples, the first with examples from this relationship (i.e., two objects on the same receptacle) and the second of unrelated objects (i.e., objects that do not follow the \textsc{[sibling]} relationship). For both datasets, we use an 80/20 train/test split for each category. \noindent\textbf{Training.} For training our model, we conduct a linear probe, with a high learning rate of $0.5$, using the validation set loss to determine convergence. We use features before the MLP projection head, as is common for linear probes in the contrastive learning literature. For the end-to-end baselines, we first train with reasonable parameters (i.e., learning rate of 0.02, with a cosine schedule, SGD with momentum, and weight decay of 0.001) for 100 epochs, taking the checkpoint with the lowest validation loss. We then use the same linear probe routine discussed above to probe the baseline representations for transfer performance.
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/real_world_qual.pdf} \caption{ \textbf{Qualitative matching on YCB-Video.} All but one heavily occluded object is properly matched. } \label{fig:real_world_qual} \end{figure} \section{Exploration Heuristic} To ablate the effectiveness of our exploration policy in the visual room rearrangement pipeline, we also design a heuristic policy. Given the simulation state of the room before and after the shuffle, we can retrieve the squares in the map that are closest (in terms of Euclidean distance) to the objects that get shuffled. Hence, we get $2n$ locations, where $n$ is the number of objects that get shuffled. During the walkthrough trajectory, based on the agent's current location, the heuristic policy greedily picks the closest location and takes the shortest path to this point. During the unshuffle exploration trajectory, the locations are visited in reverse order (e.g., the waypoint visited last in the walkthrough is visited first in the unshuffle). \section{AI2-THOR Assets} AI2-THOR assets are available under Apache 2.0. \section{Rearrangement} While our setting is identical to that of Weihs \textit{et al.} \cite{weihs2021visual}, our method does not attempt to fix objects that have changed state (e.g., drawers opening). Hence, our method cannot successfully rearrange rooms with these changes. However, for fair comparison to prior work, we report numbers on the full RoomR dataset with all data points. Planning for objects that change in openness is left to future work. \section{Qualitative Real World Tracking Results} We show a qualitative example where all but one heavily occluded object is properly matched in Fig.~\ref{fig:real_world_qual}.
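To make the matching constants reported in the appendix concrete, the following hypothetical sketch combines them into one decision rule. Only the three threshold values (0.5 for within-trajectory node matches, 0.4 for between-trajectory object correspondences, 0.8 for the moved-object test) come from the text; the function, its name, and the toy features are invented for illustration.

```python
import numpy as np

# Thresholds as reported in the Constants paragraph of the appendix.
T_WITHIN, T_BETWEEN, T_MOVED = 0.5, 0.4, 0.8

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_moved(node_walk, node_unshuffle, obj_walk, obj_unshuffle):
    """Hypothetical decision rule: objects are first associated between the
    walkthrough and unshuffle trajectories via object-correspondence
    features; an associated object is flagged as moved when its
    node-feature similarity drops below the moved-object threshold."""
    if cos_sim(obj_walk, obj_unshuffle) <= T_BETWEEN:
        return None  # no correspondence established between trajectories
    return cos_sim(node_walk, node_unshuffle) < T_MOVED

# A toy object whose node feature changed after shuffling.
obj_w = np.array([1.0, 0.0]); obj_u = np.array([0.9, 0.1])
node_w = np.array([1.0, 0.0]); node_u = np.array([0.3, 1.0])
print(flag_moved(node_w, node_u, obj_w, obj_u))  # -> True
```

The same rule returns `False` when the node features remain similar, i.e., the object is matched but judged unmoved.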
\section{Introduction}\label{sec:intro} In this paper, we propose and analyze a stochastic semismooth Newton framework for solving general nonsmooth, nonconvex optimization problems of the form \begin{equation}\label{eq:prob} \mathop{\textrm{min}}\limits_{x\in\mathbb{R}^n}~\psi(x):= f(x)+r(x) \end{equation} where $f: \mathbb{R}^n \to \mathbb{R}$ is a (twice) continuously differentiable but possibly nonconvex function and $r : \mathbb{R}^n \to (-\infty,+\infty]$ is a convex, lower semicontinuous, and proper mapping. Although the function $f$ is smooth, we assume that a full evaluation of $f$ and an exact computation of the gradient and Hessian values $\nabla f(x)$ and $\nabla^2 f(x)$ is either not possible or too expensive in practice. Instead, we suppose that only noisy gradient and Hessian information is available, which can be accessed via calls to \textit{stochastic first} (${\mathcal{S}}\cF{\mathcal{O}}$) and \textit{second order oracles} (${\mathcal{S}}\cS{\mathcal{O}}$). Composite problems of the type \cref{eq:prob} arise frequently in statistics and in large-scale statistical learning, see, e.g., \cite{HTF2009,MaiBacPonSap09,BacJenMaiObo11,ShaBen14,BCN2016}, and in many other applications. In these examples and problems, the smooth mapping $f$ is typically of the form \begin{equation} \label{eq:prob-ex} f(x) := \Exp[F(x,\xi)] = \int_{\Omega} F(x,\xi(\omega))\,{\rm d}\Prob(\omega), \quad \text{or} \quad f(x) := \frac{1}{N} \sum_{i=1}^N f_i(x), \end{equation} where $\xi : \Omega \to W$ is a random variable defined on a given probability space $(\Omega,\mathcal F,\Prob)$, $W$ is a measurable space, and $F : \mathbb{R}^n \times W \to \mathbb{R}$ and the component functions $f_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1,...,N$, correspond to certain loss models.
More specifically, in the latter case, when the nonsmooth term vanishes ($r \equiv 0$), the problem \cref{eq:prob} reduces to the well-studied \textit{empirical risk minimization problem} \begin{equation} \label{eq:finite-sum} \min_{x\in\mathbb{R}^n}~ f(x),\quad f(x) := \frac{1}{N} \sum_{i=1}^N f_i(x). \end{equation} Since the distribution $\Prob$ in \cref{eq:prob-ex} might not be fully known and the number of components $N$ in \cref{eq:finite-sum} can be extremely large, stochastic approximation techniques, such as the aforementioned stochastic oracles, have become an increasingly important tool in the design of efficient and computationally tractable numerical algorithms for the problems \cref{eq:prob} and \cref{eq:finite-sum}, \cite{NemJudLanSha08,GL2013,ShaBen14,GL2016,GhaLanZha16,WMGL2017,WZ2017}. Moreover, in various interesting problems such as deep learning, dictionary learning, training of neural networks, and classification tasks with nonconvex activation functions, \cite{MasBaxBarFre99,MaiBacPonSap09,BacJenMaiObo11,DenYu14,LeCBenHin15,Sch15,GooBenCou16}, the loss function $f$ is nonconvex, which represents another major challenge for stochastic optimization approaches. For further applications and additional connections to simulation-based optimization, we refer to \cite{GL2013,GhaLanZha16}. \subsection{Related Work} The pioneering idea of utilizing stochastic approximations and the development of the associated, classical stochastic gradient descent method (SGD) for problem \cref{eq:finite-sum} and other stochastic programs can be traced back to the seminal work of Robbins and Monro \cite{RM1951}. Since then, a plethora of stochastic optimization methods, strategies, and extensions has been studied and proposed for different problem formulations and under different basic assumptions. In the following, we give a brief overview of related research directions and related work. \textit{First order methods}.
Advances in the research of stochastic first order methods for the smooth empirical risk problem \cref{eq:finite-sum} are numerous and we will only name a few recent directions here. Lately, based on the popularity and flexible applicability of the basic SGD method, a strong focus has been on the development and analysis of more sophisticated stochastic first order oracles to reduce the variance induced by gradient sampling and to improve the overall performance of the underlying SGD method. Examples of algorithms that utilize such variance reduction techniques include SVRG \cite{JZ2013}, SDCA \cite{SSZ2013}, SAG \cite{SLB2017}, and SAGA \cite{DBLJ2014}. Moreover, Friedlander and Schmidt \cite{FS2012} analyze the convergence of a mini-batch stochastic gradient method for strongly convex $f$, in which the sampling rates are increased progressively. Incorporating different acceleration strategies, the first order algorithms Catalyst \cite{LMH2015} and Katyusha \cite{AZ2017} further improve the iteration complexity of the (proximal) SGD method. Several of the mentioned algorithms can also be extended to the nonsmooth setting $r \not\equiv 0$ by using the proximity operator of $r$ and associated stochastic proximal gradient steps, see, e.g., the perturbed proximal gradient method \cite{AtcForMou17} studied by Atchad\'{e} et al$\text{.}$, prox-SVRG \cite{XZ2014}, prox-SAGA \cite{DBLJ2014}, and prox-SDCA \cite{SSZ2016}. AdaGrad \cite{DHS2011} is another extension of the classical SGD method that utilizes special adaptive step size strategies. Under the assumption that $r$ is block separable, Richt\'arik and Tak\'a\v c \cite{RT2014} develop a randomized block-coordinate descent method for \cref{eq:prob}. An accelerated variant of this approach is investigated by Lin et al$\text{.}$ \cite{LLX2015}. The methods discussed so far either require convexity of $f$ or of each of the component functions $f_i$ or even stronger assumptions. 
Ghadimi and Lan \cite{GL2013,GL2016} generalize the basic and accelerated SGD method to solve nonconvex and smooth minimization problems. Allen-Zhu and Hazan \cite{AZH2016} and Reddi et al$\text{.}$ \cite{RHSPS2016} analyze stochastic variance reduction techniques for the nonconvex version of problem \cref{eq:finite-sum}. Moreover, Reddi et al$\text{.}$ \cite{RSPS2016} and Allen-Zhu \cite{Allen2017} further extend existing stochastic first order methods to find approximate stationary points of the general nonconvex, nonsmooth model \cref{eq:prob}. In \cite{GhaLanZha16}, Ghadimi et al$\text{.}$ discuss complexity and convergence results for a mini-batch stochastic projected gradient algorithm for problem \cref{eq:prob}. Xu and Yin \cite{XuYin15} present and analyze a block stochastic gradient method for convex, nonconvex, and nonsmooth variants of the problem \cref{eq:prob}. \textit{Quasi-Newton and second order methods}. Recently, in order to accelerate and robustify the convergence of first order algorithms, stochastic second order methods have gained much attention. So far, the majority of stochastic second order methods is designed for the smooth problem \cref{eq:finite-sum} and is based on variants of the sub-sampled Newton method in which approximations of the gradient and Hessian of $f$ are generated by selecting only a sub-sample or mini-batch of the components $\nabla f_i$ and $\nabla^2 f_i$, $i = 1,..., N$. In \cite{BCNN2011,BCNW2012}, assuming positive definiteness of the sub-sampled Hessians, the authors analyze the convergence of a sub-sampled Newton-CG method and discuss strategies for selecting the sample sets. Erdogdu and Montanari \cite{EM2015} derive convergence rates of a projected sub-sampled Newton method with rank thresholding. In \cite{RKM2016I,RKM2016II}, Roosta-Khorasani and Mahoney establish non-asymptotic, probabilistic global and local convergence rates for sub-sampled Newton methods by applying matrix concentration inequalities. 
Xu et al$\text{.}$ \cite{XYRRM2016} present convergence and complexity results for a sub-sampled Newton-type approach with non-uniform sampling. Bollapragada et al$\text{.}$ \cite{BBN2016} consider a sub-sampled Newton method for problems with the more general loss function given in \cref{eq:prob-ex} and derive r-superlinear convergence rates in expectation using a ``bounded moment'' condition to overcome the nonequivalence of norms in infinite dimensions. In \cite{WZ2017}, Wang and Zhang propose an algorithm that combines the advantages of variance reduction techniques and sub-sampled Newton methods. Convergence properties are studied under the assumption that $f$ is strongly convex and the Hessians $\nabla^2 f_i$ are Lipschitz continuous (with a uniform constant). Based on the existence of a suitable square-root decomposition of the Hessian, Pilanci and Wainwright \cite{PW2017} propose a Newton sketch method for general convex, smooth programs. In \cite{BBN2017}, the numerical performance of the Newton sketch method and different sub-sampled Newton approaches is compared. Furthermore, based on unbiased estimators of the inverse Hessian, a stochastic method called LiSSA is studied in \cite{ABH2016}. A recent discussion of different stochastic second order algorithms can also be found in \cite{YeLuoZha17}. Stochastic quasi-Newton methods represent another large and important class of stochastic numerical algorithms for problem \cref{eq:finite-sum}. Typically, these methods combine specific sub-sampling schemes for $\nabla f$ with randomized BFGS or BFGS-type updates to approximate the Hessian $\nabla^2 f$. In \cite{SchYuGue07}, the authors propose a basic sto\-chas\-tic quasi-Newton algorithm for quadratic loss functions. Bordes et al$\text{.}$ \cite{BBG2009} present a quasi-Newton approach that is based on diagonal curvature estimation. Mokhtari and Ribeiro \cite{MR2014} investigate a regularized stochastic BFGS method for solving strongly convex problems. 
In \cite{BHNS2016}, Byrd et al$\text{.}$ consider a stochastic limited-memory BFGS (L-BFGS) algorithm that in\-cor\-po\-rates exact Hessian information of the functions $f_i$ to build the BFGS-type updates. The stochastic L-BFGS method discussed in \cite{MNJ2016} uses variance reduction techniques to improve its convergence and performance. Moreover, Gower et al$\text{.}$ \cite{GGR2016} establish linear convergence of a stochastic block L-BFGS method if the functions $f_i$ are strongly convex. In contrast, the number of stochastic second order algorithms for smooth but nonconvex problems seems to be still quite limited. Based on a damping strategy for BFGS-type updates introduced by Powell \cite{Pow78} and using general stochastic first order oracles, Wang et al$\text{.}$ \cite{WMGL2017} propose a stochastic L-BFGS method for smooth, nonconvex problems. Under the assumption that the full gradient of the objective function is available, Xu et al$\text{.}$ \cite{XRKM2017} derive worst-case optimal iteration complexity results for an adaptive cubic regularization method with inexact or sub-sampled Hessian information. Generalizations and further aspects of this approach have been considered very recently in \cite{XuRooMah17,YaoXuRooMah18}. Finally, in \cite{WZ2017}, Wang and Zhang mention an extension of their hybrid method to the nonsmooth setting. A similar and related idea has also been presented in \cite{ShiLiu15}. In particular, these approaches can be interpreted as stochastic variants of the proximal Newton method \cite{LeeSunSau14} for the general problem \cref{eq:prob}. Nevertheless, strong and uniform convexity assumptions are still required to guarantee convergence and well-definedness of the inner steps and subproblems. Let us note that there is also a vast literature on incremental methods for \cref{eq:prob} and \cref{eq:finite-sum}, see, e.g., \cite{Bert2015,GOP2015}, which is beyond the scope of this paper. 
\subsection{Contents and Contributions} In this paper, we develop a stochastic second order framework for the general optimization problem \cref{eq:prob}. Our basic idea is to apply a semismooth Newton method to approximately solve the nonsmooth fixed point-type equation \begin{equation} \label{eq:opt-intro} F^\Lambda(x) := x - \proxt{\Lambda}{r}(x - \Lambda^{-1} \nabla f(x)) = 0, \quad \Lambda \in \mathbb{S}^n_{++}, \end{equation} which represents a reformulation of the associated first order optimality conditions of problem \cref{eq:prob}. Specifically, we will consider stochastic variants of the nonsmooth residual \cref{eq:opt-intro} and of the semismooth Newton method, in which the gradient and Hessian of $f$ are substituted by stochastic oracles. Motivated by deterministic Newton-type approaches \cite{MilUlb14,Mil16,XLWZ2016}, our proposed method combines stochastic semismooth Newton steps, stochastic proximal gradient steps, and a globalization strategy that is based on controlling the acceptance of the Newton steps via growth conditions. In this way, the resulting stochastic algorithm can be guaranteed to converge globally in expectation and almost surely, i.e., for a generated sequence of iterates $(x^k)_k$, we have \[ \Exp[\|F^\Lambda(x^k)\|^2] \to 0 \quad \text{and} \quad F^\Lambda(x^k) \to 0 \;\; \text{almost surely}, \quad k \to \infty. \] Furthermore, inspired by \cite{RKM2016I,RKM2016II} and using matrix concentration inequalities \cite{Tro12}, we prove that transition to fast local r-linear or r-superlinear convergence can be established with high probability if the sample sizes and sampling rates are chosen appropriately and increase sufficiently fast. To the best of our knowledge, rigorous extensions of existing stochastic sec\-ond order methods to the nonsmooth, nonconvex setting considered in this work do not seem to be available. We now briefly summarize some of the main challenges and contributions. 
\vspace{.5ex} \begin{itemize} \item We provide a unified convergence theory for the proposed stochastic Newton method covering global and local aspects and transition to fast local convergence. In contrast to many other works, convexity of the smooth function $f$ or of the objective function $\psi$ is not required in our analysis. \item In order to ensure global convergence and based on an acceptance test, the algorithm is allowed to switch between Newton and proximal gradient steps. Hence, a priori, it is not clear whether the generated iterates correspond to measurable random variables or to a stochastic process. This structural mechanism is significantly different from other existing stochastic approaches and will be discussed in detail in \cref{sec:global}. \item Our algorithmic approach and theoretical results are applicable for general stochastic oracles. Consequently, a large variety of approximation schemes, such as basic sub-sampling strategies or more elaborate variance reduction techniques, \cite{JZ2013,XZ2014,RSPS2016,WMGL2017}, can be used within our framework. In particular, in our numerical experiments, we investigate a variance reduced version of our method. Similar to \cite{WMGL2017}, the numerical results indicate that the combination of second order information and variance reduction techniques is also very effective in the nonsmooth setting. We note that the proposed method (using different stochastic oracles) performs quite well in comparison with other state-of-the-art algorithms in general. \end{itemize} \subsection{Organization} This paper is organized as follows. Our specific stochastic setup, a derivation of the equation \cref{eq:opt-intro}, and the main algorithm are stated in \cref{sec:algo}. The global and local convergence results are presented in \cref{sec:global} and \cref{sec:local}, respectively. Finally, in \cref{sec:numerical}, we report and discuss our numerical comparisons and experiments in detail. 
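As a concrete, deliberately simplified illustration of the residual in \cref{eq:opt-intro}, consider the special case $\Lambda = \lambda I$ and $r = \mu\|\cdot\|_1$, for which the proximity operator reduces to componentwise soft-thresholding. The sketch below (with an invented quadratic $f$) only shows how the residual is evaluated and that it vanishes exactly at a minimizer; it is not the general setting analyzed in this paper.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximity operator of tau * ||.||_1 (closed form for the l1 term)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def residual(x, grad_f, lam=1.0, mu=0.1):
    """F^Lambda(x) = x - prox_{Lambda, r}(x - grad f(x) / lam) with
    Lambda = lam * I and r = mu * ||.||_1; x is first order stationary
    for psi = f + r if and only if the residual is zero."""
    return x - soft_threshold(x - grad_f(x) / lam, mu / lam)

# Toy smooth part f(x) = 0.5 * ||x - b||^2, so grad f(x) = x - b.
b = np.array([0.3, -0.05, 0.0])
grad_f = lambda x: x - b

# Known minimizer of 0.5*||x - b||^2 + 0.1*||x||_1 is soft_threshold(b, 0.1).
x_star = soft_threshold(b, 0.1)
print(np.linalg.norm(residual(x_star, grad_f)))  # -> 0.0
```

At any non-stationary point, e.g., $x = b$ itself, the residual is nonzero, which is exactly the quantity driven to zero (in expectation and almost surely) by the proposed method.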
\subsection{Notation} For any $n \in \mathbb{N}$, we set $[n] := \{1,...,n\}$. By $\iprod{\cdot}{\cdot}$ and $\|\cdot\| := \|\cdot\|_2$ we denote the standard Euclidean inner product and norm. The set of symmetric and positive definite $n \times n$ matrices is denoted by $\mathbb{S}^n_{++}$. For a given matrix $\Lambda \in \mathbb{S}^n_{++}$, we define the inner product $\iprod{x}{y}_\Lambda := \iprod{x}{\Lambda y} = \iprod{\Lambda x}{y}$ and $\|x\|_\Lambda := \sqrt{\iprod{x}{x}_\Lambda}$. The set $\mathrm{lev}_\alpha\,f := \{x : f(x) \leq \alpha\}$ denotes the lower level set of a function $f$ at level $\alpha \in \mathbb{R}$. For a given set $S \subset \mathbb{R}^n$, the set $\mathrm{cl}~S$ denotes the closure of $S$ and $\mathds 1_S : \mathbb{R}^n \to \{0,1\}$ is the associated characteristic function of $S$. For $p \in (0,\infty)$ the space $\ell^p_+$ consists of all sequences $(x_n)_{n\geq 0}$ satisfying $x_n \geq 0$, $n \geq 0$, and $\sum x_n^p < \infty$. Let $(\Omega,\mathcal F,\Prob)$ be a given probability space. The space $L^p(\Omega) := L^p(\Omega,\Prob)$, $p \in [1,\infty]$, denotes the standard $L^p$ space on $\Omega$. We write $\xi \in \mathcal F$ for ``$\xi$ is $\mathcal F$-measurable''. Moreover, we use $\sigma(\xi^1,...,\xi^k)$ to denote the $\sigma$-algebra generated by the family of random variables $\xi^1,...,\xi^k$. For a random variable $\xi \in L^1(\Omega)$ and a sub-$\sigma$-algebra $\cH \subseteq \mathcal F$, the conditional expectation of $\xi$ given $\cH$ is denoted by $\Exp[\xi \,|\, \cH]$. The conditional probability of $S \in \mathcal F$ given $\cH$ is defined as $\Prob(S \mid \cH) := \Exp[{\mathds 1}_S \mid \cH]$. We use the abbreviations ``$\text{a.e.}$'' and ``$\text{a.s.}$'' for ``almost everywhere'' and ``almost surely'', respectively. 
\section{A Stochastic Semismooth Newton Method} \label{sec:algo} \subsection{Probabilistic Setting and Preliminaries} \label{sec:prelim} In this section, we introduce several basic definitions and preparatory results. We start with an overview of the stochastic setting and the sampling strategy. \subsubsection{Stochastic Setup} \label{sec:setup} Although the function $f$ is smooth, we assume that an exact or full evaluation of the gradient $\nabla f$ and Hessian $\nabla^2 f$ is not possible or is simply too expensive. Hence, we will work with \textit{stochastic first} $({\mathcal{S}}\cF{\mathcal{O}})$ and \textit{second order oracles} $({\mathcal{S}}\cS{\mathcal{O}})$, \[ \cG : \mathbb{R}^n \times \Xi \to \mathbb{R}^n, \quad \cH : \mathbb{R}^n \times \Xi \to \mathbb{S}^n \] to approximate gradient and Hessian information. Specifically, given an underlying probability space $(\Omega,\cF,\Prob)$ and a measurable space $(\Xi,\mathcal X)$, we generate two mini-batches of random samples \[ {s}^k:=\{{s}^k_1,\ldots,{s}^k_{{\bf n}^{\sf g}_k}\} \quad \mbox{and}\quad {t}^k := \{{t}^k_1,\ldots,{t}^k_{{\bf n}^{\sf h}_k}\} \] and calculate the stochastic approximations $\cG(x,s^k_i) \approx \nabla f(x)$ and $\cH(x,t^k_j) \approx \nabla^2 f(x)$ in each iteration. Here, we assume that the space $\Omega$ is sufficiently rich allowing us to model and describe the (possibly independent) sample batches $s^k$, $t^k$ and other associated stochastic processes in a unified way. Moreover, each of the samples $s^k_i, t^k_j : \Omega \to \Xi$, $i \in [{\bf n}^{\sf g}_k]$, $j \in [{\bf n}^{\sf h}_k]$, corresponds to an $(\cF,\mathcal X)$-measurable, random mapping and ${\bf n}^{\sf g}_k$ and ${\bf n}^{\sf h}_k$ denote the chosen sample rates or sample sizes of $s^k$ and $t^k$, respectively. 
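For illustration, in a finite-sum setting $f(x) = \frac{1}{N}\sum_{i=1}^N f_i(x)$, the oracles $\cG$ and $\cH$ can be realized by per-sample differentiation, and the mini-batch estimators are formed by simple averaging. The following sketch uses a least-squares loss purely as an example; the data, the batch size, and the helper name `minibatch_oracles` are illustrative choices, not fixed by our setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 1000, 20
A = rng.standard_normal((N, n))
b = rng.standard_normal(N)

# Per-sample oracles for the illustrative loss f_i(x) = (a_i^T x - b_i)^2 / 2,
# so that G(x, i) estimates grad f(x) and H(x, i) estimates hess f(x).
def G(x, i):
    return A[i] * (A[i] @ x - b[i])

def H(x, i):
    return np.outer(A[i], A[i])

def minibatch_oracles(x, batch):
    """Mini-batch averages G_s(x) and H_t(x) over the drawn sample batch."""
    g = np.mean([G(x, i) for i in batch], axis=0)
    Hm = np.mean([H(x, i) for i in batch], axis=0)
    return g, Hm

x = rng.standard_normal(n)
s = rng.choice(N, size=64, replace=False)   # mini-batch s^k
g_s, H_t = minibatch_oracles(x, s)
full_grad = A.T @ (A @ x - b) / N
print(np.linalg.norm(g_s - full_grad))      # sampling error; shrinks as the batch grows
```

Taking the batch to be the full index set recovers $\nabla f$ and $\nabla^2 f$ exactly; sub-sampling trades this accuracy against per-iteration cost.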
Similar to \cite{DanLan15,XuYin15,GL2016,GhaLanZha16,WMGL2017}, we then construct a mini-batch-type, stochastic gradient $G_{{s}^k}(x)$ and Hessian $H_{t^k}(x)$ as follows % \begin{equation} \label{eq:def-subgh} G_{{s}^k}(x):=\frac{1}{{\bf n}^{\sf g}_k}\sum_{i = 1}^{{\bf n}^{\sf g}_k}\cG(x,s^k_i), \quad \quad H_{t^k}(x):=\frac{1}{{\bf n}^{\sf h}_k}\sum_{i = 1}^{{\bf n}^{\sf h}_k}\cH(x,t^k_i). \end{equation} Throughout this work, we assume that the stochastic oracles $\cG$ and $\cH$ are \textit{Carath\'{e}odory functions}\,\footnote{A mapping $F: \mathbb{R}^n \times \Xi \to \mathbb{R}$ is called \textit{Carath\'{e}odory function} if $F(\cdot,z) : \mathbb{R}^n \to \mathbb{R}$ is continuous for all $z \in \Xi$ and if $F(x,\cdot) : \Xi \to \mathbb{R}$ is measurable for all $x \in \mathbb{R}^n$.}. Further assumptions on the stochastic setting will be introduced later in \cref{sec:global-assumption} and \cref{sec:loc-condconc}. We will also sometimes drop the index $k$ from the mini-batches $s^k$, $t^k$ and sample sizes ${\bf n}^{\sf g}_k$, ${\bf n}^{\sf h}_k$ when we consider a general pair of batches $s$ and $t$. \subsubsection{Definitions and First Order Optimality} In the following, we derive first order optimality conditions for the composite problem \cref{eq:prob}. Suppose that $x^* \in \mathrm{dom}~r$ is a local solution of problem \cref{eq:prob}. Then, $x^*$ satisfies the mixed-type variational inequality \begin{equation} \label{eq:fst-opt-vip} \iprod{\nabla f(x^*)}{x-x^*} + r(x) - r(x^*) \geq 0, \quad \forall~x \in \mathbb{R}^n. \end{equation} By definition, the latter condition is equivalent to $- \nabla f(x^*) \in \partial r(x^*)$, where $\partial r$ denotes the convex subdifferential of $r$. We now introduce the well-known \textit{proximal mapping} $\proxt{\Lambda}{r} : \mathbb{R}^n \to \mathbb{R}^n$ of $r$. 
For an arbitrary parameter matrix $\Lambda \in \mathbb{S}^n_{++}$, the proximity operator $\proxt{\Lambda}{r}(x)$ of $r$ at $x$ is defined as \begin{equation} \label{eq:def-prox} \proxt{\Lambda}{r}(x) := \argmin_{y \in \mathbb{R}^n}~r(y) + \frac{1}{2} \|x-y\|_\Lambda^2. \end{equation} The proximity operator is a $\Lambda$-firmly nonexpansive mapping, i.e., it satisfies \[ \|\proxt{\Lambda}{r}(x) - \proxt{\Lambda}{r}(y)\|^2_\Lambda \leq \iprod{\proxt{\Lambda}{r}(x) - \proxt{\Lambda}{r}(y)}{x - y}_\Lambda, \quad \forall~x,y \in \mathbb{R}^n. \] Consequently, $\proxt{\Lambda}{r}$ is Lipschitz continuous with modulus 1 with respect to the norm $\|\cdot\|_\Lambda$. We refer to \cite{Mor65,ComWaj05,BacJenMaiObo11,BauCom11} for more details and (computational) properties. Let us further note that the proximity operator can also be uniquely characterized by the optimality conditions of the underlying optimization problem \cref{eq:def-prox}, i.e., \begin{equation} \label{eq:prox:sub-prox} \proxt{\Lambda}{r}(x) \in x - \Lambda^{-1} \cdot \partial r(\proxt{\Lambda}{r}(x)). \end{equation} Using this characterization, condition \cref{eq:fst-opt-vip} can be equivalently rewritten as follows: \begin{equation} \label{eq:non-eq} F^\Lambda(x^*) = 0, \quad \text{where} \quad F^\Lambda(x) := x - \proxt{\Lambda}{r}(x- \Lambda^{-1} \nabla f(x)). \end{equation} We call $x \in \mathbb{R}^n$ a \textit{stationary point} of problem \cref{eq:prob}, if it is a solution of the nonsmooth equation \cref{eq:non-eq}. If the problem is convex, e.g., if $f$ is a convex function, then every stationary point is automatically a local and global solution of \cref{eq:prob}. The fixed point-type equation \cref{eq:non-eq} forms the basis of the proximal gradient method, \cite{FukMin81-1,FukMin81,ComWaj05,ParBoy14}, which has been studied intensively during the last decades. 
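As a concrete instance (the choices $r = \mu\|\cdot\|_1$ and a diagonal $\Lambda$ are assumptions of this sketch only, and the helper names are illustrative), the proximity operator reduces to componentwise soft-thresholding, the residual from \cref{eq:non-eq} can be evaluated directly, and the $\Lambda$-firm nonexpansiveness can be checked numerically:

```python
import numpy as np

mu = 0.1                                    # weight of the illustrative r = mu * ||.||_1
lam = np.array([2.0, 1.0, 0.5, 1.5, 3.0])   # diagonal of Lambda in S^n_++

def prox_l1(x):
    """prox of r = mu*||.||_1 in the Lambda = diag(lam) metric:
    componentwise soft-thresholding with thresholds mu / lam_i."""
    return np.sign(x) * np.maximum(np.abs(x) - mu / lam, 0.0)

def residual(x, grad):
    """Natural residual F^Lambda(x) = x - prox(x - Lambda^{-1} grad f(x))."""
    return x - prox_l1(x - grad / lam)

rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
px, py = prox_l1(x), prox_l1(y)

# Lambda-firm nonexpansiveness: ||px - py||_Lambda^2 <= <px - py, x - y>_Lambda.
lhs = np.sum(lam * (px - py) ** 2)
rhs = np.sum(lam * (px - py) * (x - y))
print(lhs <= rhs + 1e-12)
```

For instance, $F^\Lambda(0) = 0$ whenever $\|\nabla f(0)\|_\infty \leq \mu$, matching the stationarity condition $-\nabla f(0) \in \partial r(0)$.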
For an arbitrary sample $s$, the corresponding stochastic residual is given by \[ {F}^{\Lambda}_{s}(x) := x - \proxt{\Lambda}{r}(x- \Lambda^{-1} \gfsub(x)). \] We will also use $u^\Lambda_{s}(x) := x - \Lambda^{-1} \gfsub(x)$ and $p^\Lambda_{s}(x) := \proxt{\Lambda}{r}(u^\Lambda_{s}(x))$ to denote the stochastic (proximal) gradient steps. \subsection{Algorithmic Framework} \label{subsec:algo} In this section, we describe our algorithmic approach in detail. The overall idea is to use a stochastic semismooth Newton method to calculate an approximate solution of the optimality system \[ F^\Lambda(x) = 0. \] The associated Newton step $d^k$ at iteration $k$ is then given by the linear system of equations % \begin{equation} \label{eq:newton-step} M_k d^k = - F^{\Lambda_k}_{{s}^k}(x^k), \quad M_k \in \cM^{\Lambda_k}_{{s}^k,t^k}(x^k). \end{equation} % Here, we consider the following set of generalized derivatives % \begin{equation} \label{eq:gen-deriv} \cM^{\Lambda}_{{s},t}(x) := \{M \in \mathbb{R}^{n \times n}: M = (I - D) + D\Lambda^{-1} H_{t}(x), \, \, D \in \partial \proxt{\Lambda}{r}(u^\Lambda_{s}(x))\}, \end{equation} % where $\partial \proxt{\Lambda}{r}(u^\Lambda_{s}(x))$ denotes the Clarke subdifferential of $ \proxt{\Lambda}{r}$ at the point $u^\Lambda_{s}(x)$. The set $\cM^{\Lambda}_{{s},t}(x)$ depends on the stochastic gradient and on the stochastic Hessian defined in \cref{eq:def-subgh}. Moreover, the samples $s^k$, $t^k$ and the matrix $\Lambda_k$ used in \cref{eq:newton-step} may change in each iteration, see also \cref{remark:ada-lambda}. We further note that, in practice, the system \cref{eq:newton-step} can be solved inexactly via iterative approaches such as the conjugate gradient or other Krylov subspace methods. In the deterministic setting, the set $\cM^{\Lambda}_{{s},t}(x)$ reduces to $\mathcal M^\Lambda(x) := \{M = (I-D) + D\Lambda^{-1}\nabla^2 f(x), D \in \partial\proxt{\Lambda}{r}(u^\Lambda(x))\}$ with $u^\Lambda(x) = x - \Lambda^{-1}\nabla f(x)$. 
In general, $\mathcal M^\Lambda(x)$ does not coincide with Clarke's subdifferential $\partial F^\Lambda(x)$. As shown in \cite{Clarke1990}, we can only guarantee $\partial F^\Lambda(x)h \subseteq \text{co}(\mathcal M^\Lambda(x)h)$ for $h \in \mathbb{R}^n$. However, the set-valued mapping $\mathcal M^\Lambda : \mathbb{R}^n \rightrightarrows \mathbb{R}^{n \times n}$ defines a so-called \textit{(strong) linear Newton approximation} at $x$ if the proximity operator $\proxt{\Lambda}{r}$ is (strongly) semismooth at $u^\Lambda(x)$. In particular, $\mathcal M^\Lambda$ is upper semicontinuous and compact-valued. We refer to \cite[Chapter 7]{FP2003II} and \cite{PatSteBem14} for more details. We also note that the chain rule for semismooth functions implies that $F^\Lambda(x)$ is semismooth at $x$ with respect to $\mathcal M^\Lambda(x)$ if $\proxt{\Lambda}{r}$ is semismooth at $u^\Lambda(x)$. Furthermore, in various important examples including, e.g., $\ell_1$- or nuclear norm-regularized optimization, group sparse problems or semidefinite programming, the associated proximal mapping $\proxt{\Lambda}{r}$ can be shown to be (strongly) semismooth and there exist explicit and computationally tractable representations of the generalized derivatives $D \in \partial \proxt{\Lambda}{r}(\cdot)$, see \cite{PatSteBem14,Mil16,XLWZ2016} for a detailed discussion. \begin{algorithm2e}[t] \caption{A Stochastic Semismooth Newton Method} \label{alg:ssn} \lnlset{alg:SSN1}{1}{Initialization: ~~Choose an initial point $x^0 \in \mathrm{dom}~r$, $\theta_0 \in \mathbb{R}_+$, and mini-batches $s^0, t^0$. Select sample sizes $({\bf n}^{\sf g}_k)_{k}$, $({\bf n}^{\sf h}_k)_{k}$, parameter matrices $(\Lambda_k)_k \subset \mathbb{S}^n_{++}$, and step sizes $(\alpha_k)_{k}$. Choose $\eta, p \in (0,1)$, $\beta > 0$, and $(\nu_k)_k$, $(\varepsilon_k^1)_k$, $(\varepsilon^2_k)_k$. 
Set iteration $k:=0$.} \\ \vspace{.5ex} \While{did not converge}{ \lnlset{alg:SSN2}{3}{Compute $F^{\Lambda_k}_{{s}^k}(x^k)$ and choose $M_k \in \cM^{\Lambda_k}_{{s}^k,t^k}(x^k)$. For all $i = 1,...,{\bf n}^{\sf g}_{k+1}$ and $j =1,...,{\bf n}^{\sf h}_{k+1}$ select new samples $s^{k+1}_i, t^{k+1}_j$.} \\ \vspace{.5ex} \lnlset{alg:SSN3}{4}{Compute the Newton step $d^k$ by solving \begin{center} $M_kd^k = -F^{\Lambda_k}_{{s}^k}(x^k)$. \end{center}} \lnlset{alg:SSN4}{5}{Set $z^k_{\sf n} = x^k + d^k$. If the conditions $z^k_{\sf n} \in \mathrm{dom}~r$, \cref{eq:growth-1}, and \cref{eq:growth-2} are satisfied, skip step \ref{alg:SSN5} and set $x^{k+1} = z^k_{\sf n}$, $\theta_{k+1} = \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\|$. Otherwise go to step \ref{alg:SSN5}.} \\ \vspace{.5ex} \lnlset{alg:SSN5}{6}{Set $v^k = - F^{\Lambda_k}_{{s}^k}(x^k)$, $x^{k+1} = x^k + \alpha_k v^k $, and $\theta_{k+1} = \theta_k$.} \\ \vspace{.5ex} \lnlset{alg:SSN6}{7}{Set $k\gets k+1$.} \\ } \end{algorithm2e} In order to control the acceptance of the Newton steps and to achieve global convergence of our algorithm, we introduce the following growth conditions for the trial step $z^k_{\sf n} = x^k + d^k$: \begin{align} \label{eq:growth-1} \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\| &\leq (\eta + \nu_k) \cdot \theta_k + \varepsilon^1_k, \\ \label{eq:growth-2} \psi(z^k_{\sf n}) &\leq \psi(x^k) + \beta \cdot \theta_k^{1-p} \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\|^p + \varepsilon_k^2. \end{align} If the trial point $z^k_{\sf n}$ satisfies both conditions and is feasible, i.e., if $z^k_{\sf n} \in \mathrm{dom}~r$, we accept it and compute the new iterate $x^{k+1}$ via $x^{k+1} = z^k_{\sf n}$. The parameter sequences $(\nu_k)_k$, $(\varepsilon^1_k)_k$, and $(\varepsilon_k^2)_k$ are supposed to be nonnegative and summable and can be chosen during the initialization or during the iteration process. 
Furthermore, the parameter $\theta_k$ keeps track of the norm of the residual $F^{\Lambda_i}_{s^i}(x^i)$ of the last \textit{accepted} Newton iterate $x^i$, $i < k$, and is updated after a successful Newton step. The parameters $\beta > 0$, $\eta, p \in (0,1)$ are given constants. If the trial point $z^k_{\sf n}$ does not satisfy the conditions \cref{eq:growth-1} and \cref{eq:growth-2}, we reject it and perform an alternative proximal gradient step using the stochastic residual $F^{\Lambda_k}_{{s}^k}(x^k)$ as an approximate descent direction. We also introduce a step size $\alpha_k$ to damp the proximal gradient step and to guarantee sufficient decrease in the objective function $\psi$. A precise bound for the step sizes $\alpha_k$ is derived in \cref{prop:prox-descent}. The details of the method are summarized in Algorithm \ref{alg:ssn}. Our method can be seen as a hybrid of the semismooth Newton method and the standard proximal gradient method generalizing the deterministic Newton approaches presented in \cite{MilUlb14,Mil16} to the stochastic setting. Our globalization technique is inspired by \cite{MilUlb14}, where a filter globalization strategy was proposed to control the acceptance of the Newton steps. Similar to \cite{MilUlb14,Mil16}, we add condition \cref{eq:growth-1} to monitor the behavior and convergence of the Newton steps. The second condition \cref{eq:growth-2} (together with the feasibility condition $z^k_{\sf n} \in \mathrm{dom}~r$) is required to bound the possible $\psi$-ascent of intermediate Newton steps. In contrast to smooth optimization problems, descent-based damping techniques or step size selections, as used in, e.g., \cite{BCNN2011,BBN2016,BHNS2016,RKM2016I,RKM2016II,WMGL2017}, cannot always guarantee sufficient $\psi$-descent of the semismooth Newton steps due to the nonsmooth nature of problem \cref{eq:prob}. This complicates the analysis and globalization of semismooth Newton methods in general.
In practice, the second growth condition \cref{eq:growth-2} can be restrictive since an evaluation of the full objective function is required. However, similar descent conditions also appeared in other globalization strategies for smooth problems, \cite{RKM2016I,XRKM2017,XuRooMah17,YaoXuRooMah18}. In the next section, we verify that Algorithm \ref{alg:ssn} using the proposed growth conditions \cref{eq:growth-1}--\cref{eq:growth-2} converges globally in expectation. Moreover, in \cref{theorem:conv-prox-strong}, we establish global convergence of Algorithm \ref{alg:ssn} without condition \cref{eq:growth-2} in a strongly convex setting. Under standard assumptions and if the sample sizes ${\bf n}^{\sf g}$ and ${\bf n}^{\sf h}$ are chosen sufficiently large, we can further show that the conditions \cref{eq:growth-1} and \cref{eq:growth-2} are satisfied locally in a neighborhood of a stationary point with high probability. This enables us to derive fast local convergence results in probability. Let us note that the growth conditions \cref{eq:growth-1}--\cref{eq:growth-2} are checked using a new sample mini-batch $s^{k+1}$. Thus, only one gradient evaluation is required per iteration if the Newton step is accepted. We note that the feasibility condition $z^k_{\sf n} \in \mathrm{dom}~r$ can be circumvented by setting $x^{k+1} = \mathcal P_{\mathrm{dom}~r}(z^k_{\sf n})$, where $\mathcal P_{\mathrm{dom}~r}$ denotes the projection onto the set $\mathrm{dom}~r$. We refer to \cref{remark:proj-loc} for a related discussion. Let us mention that an alternative globalization is analyzed in \cite{PatSteBem14,SteThePat17} where the authors propose the so-called forward-backward envelope (FBE) as a smooth merit function for problem \cref{eq:prob}. Since this framework requires an additional proximal gradient step (and thus, an additional gradient evaluation) after each iteration, we do not consider this approach here. 
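A single iteration of Algorithm \ref{alg:ssn} can then be sketched as follows. The problem data, the $\ell_1$ regularizer, the batch sizes, and all parameter values below are illustrative placeholders, not choices prescribed by the analysis:

```python
import numpy as np

# Illustrative data: f(x) = ||Ax - b||^2 / (2N), r = mu*||x||_1, Lambda = I.
rng = np.random.default_rng(3)
N, n, mu = 200, 10, 0.05
A, b = rng.standard_normal((N, n)), rng.standard_normal(N)

prox = lambda x: np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)
psi = lambda x: 0.5 * np.sum((A @ x - b) ** 2) / N + mu * np.sum(np.abs(x))

def F_s(x, batch):
    """Stochastic residual F_s(x) and the mini-batch gradient used to form it."""
    g = A[batch].T @ (A[batch] @ x - b[batch]) / batch.size
    return x - prox(x - g), g

def one_iteration(x, theta, eta=0.5, beta=10.0, p=0.5, nu=0.0,
                  eps1=1e-3, eps2=1e-3, alpha=0.5):
    s_k = rng.choice(N, 32, replace=False)           # batch s^k
    t_k = rng.choice(N, 32, replace=False)           # batch t^k
    s_next = rng.choice(N, 32, replace=False)        # fresh batch s^{k+1} for the tests
    Fk, g = F_s(x, s_k)
    d = (np.abs(x - g) > mu).astype(float)           # D for the l1-prox (Lambda = I)
    M = np.diag(1.0 - d) + d[:, None] * (A[t_k].T @ A[t_k] / 32.0)
    z = x + np.linalg.solve(M, -Fk)                  # Newton trial point z^k_n
    Fz, _ = F_s(z, s_next)
    growth1 = np.linalg.norm(Fz) <= (eta + nu) * theta + eps1
    growth2 = psi(z) <= psi(x) + beta * theta ** (1 - p) * np.linalg.norm(Fz) ** p + eps2
    if growth1 and growth2:                          # accept the Newton step
        return z, np.linalg.norm(Fz)
    return x - alpha * Fk, theta                     # fallback: damped proximal gradient

x, theta = rng.standard_normal(n), 1.0
for _ in range(30):
    x, theta = one_iteration(x, theta)
print(round(psi(x), 4))
```

Here $\mathrm{dom}~r = \mathbb{R}^n$, so the feasibility test is vacuous; for a regularizer with restricted domain it has to be checked, or enforced by a projection, before accepting $z^k_{\sf n}$.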
\section{Global Convergence} \label{sec:global} In this section, we analyze the global convergence behavior of Algorithm \ref{alg:ssn}. We first present and summarize our main assumptions. \subsection{Assumptions} \label{sec:global-assumption} Throughout this paper, we assume that $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable on $\mathbb{R}^n$ and $r : \mathbb{R}^n \to (-\infty,+\infty]$ is convex, lower semicontinuous, and proper. As already mentioned, we also assume that the oracles $\cG : \mathbb{R}^n \times \Xi \to \mathbb{R}^n$ and $\cH : \mathbb{R}^n \times \Xi \to \mathbb{S}^n$ are Carath\'{e}odory functions. In the following, we further specify the assumptions on the functions $f$ and $r$. \begin{assumption} \label{ass:lip} Let $f : \mathbb{R}^n \to \mathbb{R}$ be given as in \cref{eq:prob}. We assume: \vspace{.5ex} \begin{itemize} \item[{\rm(A.1)}] The gradient mapping $\nabla f$ is Lipschitz continuous on $\mathbb{R}^n$ with modulus $L > 0$. \vspace{.5ex} \item[{\rm(A.2)}] The objective function $\psi$ is bounded from below on $\mathrm{dom}~r$. \vspace{.5ex} \item[{\rm(A.3)}] There exist parameters $\mu_f \in \mathbb{R}$ and $\mu_r \geq 0$ with $\bar\mu := \mu_f + \mu_r > 0$ such that the shifted functions $f - \frac{\mu_f}{2}\|\cdot\|^2$ and $r - \frac{\mu_r}{2}\|\cdot\|^2$ are convex. \vspace{.5ex} \item[{\rm(A.4)}] There exists a constant $\bar g_r > 0$ such that for all $x \in \mathrm{dom}~r$ there exists $\lambda \in \partial r(x)$ with $\|\lambda\| \leq \bar g_r$. \end{itemize} \end{assumption} Assumption (A.3) implies that the function $\psi$ is strongly convex with convexity parameter $\bar\mu$. Furthermore, if both assumptions (A.1) and (A.3) are satisfied, then the parameter $\mu_f$ is bounded by the Lipschitz constant $L$, i.e., we have $|\mu_f| \leq L$. The assumptions (A.3)--(A.4) are only required for a variant of Algorithm \ref{alg:ssn} that uses a modified globalization strategy, see \cref{theorem:conv-prox-strong}.
A concrete example for $r$ that satisfies (A.4) is given in \cref{remark:example-c3}. We continue with the assumptions on the parameters used within our algorithmic framework. \begin{assumption} Let $(\Lambda_k)_k \subset \mathbb{S}^n_{++}$ be a family of symmetric, positive definite parameter matrices and let $(\nu_k)_k$, $(\varepsilon_k^1)_k$, and $(\varepsilon_k^2)_k$ be given sequences. Then, for some given parameter $p \in (0,1)$ we assume: \vspace{.5ex} \begin{itemize} \item[{\rm(B.1)}] There exist $0 < \lambda_m \leq \lambda_M < \infty$ such that $\lambda_M I \succeq \Lambda_k \succeq \lambda_m I$ for all $k \in \mathbb{N}$. \vspace{0.5ex} \item[{\rm(B.2)}] It holds $(\nu_k)_k$, $(\varepsilon_k^2)_k \in \ell_+^1$, and $(\varepsilon_k^1)_k \in \ell_+^p$. \end{itemize} \end{assumption} In the following sections, we study the convergence properties of the stochastic process $(x^k)_k$ generated by Algorithm \ref{alg:ssn} with respect to the filtrations \[ \mathcal F_k := \sigma(s^0, \ldots, s^k, t^0, \ldots, t^k), \quad \text{and} \quad \hat {\mathcal F}_k := \sigma(s^0, \ldots, s^k,s^{k+1}, t^0, \ldots, t^k). \] The filtration $\mathcal F_k$ represents the information that is collected up to iteration $k$ and that is used to compute the trial point $z^k_{\sf n}$ or a proximal gradient step \[ z^k_{\sf p} := x^k + \alpha_k v^k = x^k - \alpha_k F^{\Lambda_k}_{{s}^k}(x^k). \] The filtration $\hat {\mathcal F}_k$ has a similar interpretation, but it also contains the information produced by deciding whether the Newton step $z^k_{\sf n}$ should be accepted or rejected, i.e., it holds $\hat {\mathcal F}_k = \sigma({\mathcal F}_k \cup \sigma(s^{k+1}))$. The filtrations $\{\mathcal F_k, \hat{\mathcal F}_k\}$ naturally describe the aggregation of information generated by Algorithm \ref{alg:ssn}. We will work with the following stochastic conditions. 
\begin{assumption} We assume: \begin{itemize} \item[{\rm(C.1)}] For all $k \in \mathbb{N}_0$, the generalized derivative $M_k$, chosen in step \ref{alg:SSN2} of Algorithm \ref{alg:ssn}, is an $\mathcal F_k$-measurable mapping, i.e., the function $M_k : \Omega \to \mathbb{R}^{n \times n}$ is an $\mathcal F_k$-measurable selection of the multifunction ${\mathcal M}_k : \Omega \rightrightarrows \mathbb{R}^{n \times n}$, ${\mathcal M}_k(\omega) := \mathcal M^{\Lambda_k}_{s^k(\omega),t^k(\omega)}(x^k(\omega))$. \item[{\rm(C.2)}] The variance of the individual stochastic gradients is bounded, i.e., for all $k \in \mathbb{N}$ there exists $\sigma_k \geq 0$ such that \[ {\mathds E}[ \|\nabla f(x^k) - G_{{s}^k}(x^k)\|^2] \leq \sigma_k^2. \] \end{itemize} \end{assumption} The second condition is common in stochastic programming, see, e.g., \cite{GL2013,XuYin15,BBN2016,BHNS2016,GhaLanZha16,WMGL2017}. Since the generalized derivative $M_k$ is generated iteratively and depends on the random process $(x^k)_k$ and on the mini-batches $(s^k)_k$, $(t^k)_k$, condition (C.1) is required to guarantee that the selected matrices $M_k$ actually define $\mathcal F_k$-measurable random operators. A similar assumption was also used in \cite{WMGL2017}. Furthermore, applying the techniques and theoretical results presented in \cite{Ulb02,Ulb11} for infinite-dimensional nonsmooth operator equations, we can ensure that the multifunction ${\mathcal M}_k$ admits at least one measurable selection $M_k : \Omega \to \mathbb{R}^{n \times n}$. We discuss this important observation together with a proof of \cref{fact:one} in \cref{sec:app-fact}. Let us note that it is also possible to generalize the assumptions and allow $\mathcal F_k$-measurable parameter matrices $\Lambda_k$. However, in order to simplify our analysis, we focus on a deterministic choice of $(\Lambda_k)_k$ and do not consider this extension here.
As a consequence of condition (C.1) and of the assumptions on $f$, we can infer that the random processes $(z^k_{\sf n})_k$, $(z^k_{\sf p})_k$, and $(x^k)_k$ are adapted to the filtrations $\mathcal F_k$ and $\hat {\mathcal F}_k$. \begin{fact} \label{fact:one} Under assumption {\rm(C.1)}, it holds $z^k_{\sf n}, z^k_{\sf p} \in \mathcal F_k$ and $x^{k+1} \in \hat {\mathcal F}_k$ for all $k \in \mathbb{N}_0$. \end{fact} Since the choice of the iterate $x^{k+1}$ depends on the various criteria, the properties stated in \cref{fact:one} are not immediately obvious. In particular, we need to verify that the decision of accepting or rejecting the stochastic semismooth Newton step $z^k_{\sf n}$ is an $\hat{\mathcal F}_k$-measurable action. A proof of \cref{fact:one} is presented in \cref{sec:app-fact}. \subsection{Properties of $F^\Lambda$} In this subsection, we discuss several useful properties of the nonsmooth function $F^\Lambda$ and of its stochastic version ${F}^{\Lambda}_{s}$. The next statement shows that $\|{F}^{\Lambda}_{s}(x) \|$ does not grow too much when the parameter matrix $\Lambda$ changes. This result was first established by Tseng and Yun in \cite{TseYun09}. \begin{lemma} \label{lemma:F-bound} Let $\Lambda_1, \Lambda_2 \in \mathbb{S}^n_{++}$ be two arbitrary matrices. Then, for all $x \in \mathbb{R}^n$, for all samples $s$, and for $W := \Lambda_2^{-\frac{1}{2}}\Lambda_1\Lambda_2^{-\frac{1}{2}}$, it follows \[ \|F^{\Lambda_1}_s(x)\| \leq \frac{1 + \lambda_{\max}(W) + \sqrt{1-2\lambda_{\min}(W)+\lambda_{\max}(W)^2}}{2} \frac{\lambda_{\max}(\Lambda_2)}{\lambda_{\min}(\Lambda_1)} \|F^{\Lambda_2}_s(x)\|. \] \end{lemma} \begin{proof} The proof is identical to the proof of \cite[Lemma 3]{TseYun09} and will be omitted here. \end{proof} \begin{remark} \label{remark:ada-lambda} Let $\Lambda \in \mathbb{S}^n_{++}$ be given and let $(\Lambda_k)_k \subset \mathbb{S}^n_{++}$ be a family of symmetric, positive definite matrices satisfying assumption {\rm(B.1)}. 
Then, it easily follows \[ \scalebox{0.9}{$\displaystyle\frac{\lambda_{\max}(\Lambda)}{\lambda_m}$} I \succeq \Lambda_k^{-\frac{1}{2}}\Lambda\Lambda_k^{-\frac{1}{2}} \succeq \scalebox{0.9}{$\displaystyle\frac{\lambda_{\min}(\Lambda)}{\lambda_M}$} I \quad \text{and} \quad \scalebox{0.9}{$\displaystyle\frac{\lambda_M}{\lambda_{\min}(\Lambda)}$} I \succeq \Lambda^{-\frac{1}{2}}\Lambda_k\Lambda^{-\frac{1}{2}} \succeq \scalebox{0.9}{$\displaystyle\frac{\lambda_m}{\lambda_{\max}(\Lambda)}$} I, \] for all $k \in \mathbb{N}$, and, due to \cref{lemma:F-bound}, we obtain the following bounds \begin{equation} \label{eq:new:misc3} \underline{\lambda} \cdot \| {F}^{\Lambda}_{s}(x)\| \leq \|F^{\Lambda_k}_s(x)\| \leq \overline{\lambda} \cdot \|{F}^{\Lambda}_{s}(x)\|, \quad \forall~k \in \mathbb{N}, \end{equation} and for all mini-batches $s$, $x \in \mathbb{R}^n$. The constants $\underline{\lambda}$, $\overline{\lambda} > 0$ do not depend on $k$, $\Lambda_k$, or $s$. Thus, the latter inequalities imply: \[ F^\Lambda(x^k) \to 0 \quad \iff \quad F^{\Lambda_k}(x^k) \to 0, \quad k \to \infty. \] As indicated in the last section, this can be used in the design of our algorithm. In particular, adaptive schemes or other techniques can be applied to update $\Lambda$. \end{remark} The following result is a simple extension of \cite[Theorem 4]{TseYun09}; see also \cite[Lemma 3.7]{XZ2014} and \cite{XuYin15} for comparison. \begin{lemma} \label{lemma:str-conv} Suppose that the assumptions {\rm(A.1)}, {\rm(A.3)} are satisfied and let $\Lambda \in \mathbb{S}^n_{++}$ be given with $\lambda_{M} I \succeq \Lambda \succeq \lambda_{m} I $. 
Furthermore, let $x^*$ denote the unique solution of the problem $\min_x \psi(x)$ and for any $\tau > 0$ let us set \[ b_1 := L - 2\lambda_{m} - \mu_r, \;\, b_2 := \frac{(\lambda_{M} + \mu_r)^2}{\bar\mu}, \;\, B_1(\tau) := \frac{1+\tau}{\bar\mu}(\sqrt{b_1 + b_2 + \tau} + \sqrt{b_2})^2. \] Then, there exists some positive constant $B_2(\tau)$ that only depends on $\tau$ such that \begin{equation} \label{eq:str-conv-prop2} \|x - x^*\|^2 \leq B_1(\tau) \cdot \|{F}^{\Lambda}_{s}(x)\|^2 + B_2(\tau) \cdot \|\nabla f(x) - \gfsub(x)\|^2, \end{equation} for all $x \in \mathbb{R}^n$ and for every sample $s$. If the full gradient is used, the term $\|\nabla f(x) - \gfsub(x)\|$ vanishes for all $x$ and \cref{eq:str-conv-prop2} holds with $B_1(\tau) \equiv B_1(0)$. \end{lemma} \begin{proof} The proof of \cref{lemma:str-conv} and an explicit derivation of the constant $B_2(\tau)$ are presented in \cref{sec:app-1}. \end{proof} \subsection{Convergence Analysis} In the following, we first verify that a stochastic proximal gradient step yields approximate $\psi$-descent whenever the step size $\alpha_k$ in step \ref{alg:SSN5} of Algorithm \ref{alg:ssn} is chosen sufficiently small. We also give a bound for the step sizes $\alpha_k$. Let us note that similar results were shown in \cite{XuYin15,GhaLanZha16,GL2016} and that the proof of \cref{prop:prox-descent} mainly relies on the well-known descent lemma \begin{equation} \label{eq:lip-ineq} f(y) \leq f(x) + \iprod{\nabla f(x)}{y-x} + \frac{L}{2} \|y-x\|^2, \quad \forall~x,y \in \mathbb{R}^n, \end{equation} which is a direct consequence of assumption {\rm(A.1)}. \begin{lemma} \label{prop:prox-descent} Let $x \in \mathrm{dom}~r$ and $\Lambda \in \mathbb{S}^n_{++}$ be arbitrary and suppose that the conditions {\rm(A.1)} and {\rm(B.1)} (for $\Lambda$) are satisfied. Moreover, let $\gamma \in (0,1)$, $\rho \in (1,\gamma^{-1})$, and the mini-batch $s$ be given and set $\overline\alpha := 2(1-\gamma\rho)\lambda_m L^{-1}$.
Then, for all $\alpha \in [0,\min\{1,\overline\alpha\}]$ it holds \begin{equation} \label{eq:est-prox-step} \psi(x + \alpha v) - \psi(x) \leq -\alpha \gamma \|v\|_{\Lambda}^2 + \frac{\alpha}{4\gamma(\rho-1)\lambda_{m}} \|\nabla f(x) - \gfsub(x)\|^2, \end{equation} where $v := - {F}^{\Lambda}_{s}(x)$. \end{lemma} \begin{proof} We first define $\Delta := \iprod{\gfsub(x)}{v} + r(x + v) - r(x)$. Then, applying the optimality condition of the proximity operator \cref{eq:prox:sub-prox}, it follows \[ \Delta \leq \iprod{\gfsub(x)}{v} + \iprod{-\Lambda v - \gfsub(x)}{v} = - \|v\|_{\Lambda}^2. \] Using the descent lemma \cref{eq:lip-ineq}, the convexity of $r$, and Young's inequality, we now obtain \begin{align*} \psi(x + \alpha v) - \psi(x) + \alpha \gamma \|v\|_{\Lambda}^2 & \\ & \hspace{-25ex}\leq \alpha ( \iprod{\nabla f(x)}{v} + r(x + v) - r(x)) + \frac{L \alpha^2}{2} \|v\|^2 + \alpha \gamma \|v\|_{\Lambda}^2 \\ & \hspace{-25ex} \leq \frac{L \alpha^2}{2} \|v\|^2 - \alpha (1-\gamma) \|v\|^2_{\Lambda} + \alpha \iprod{\nabla f(x) - \gfsub(x)}{v} \\ &\hspace{-25ex} \leq \alpha \left ( \frac{1}{2}L\alpha - (1-\gamma\rho) \lambda_m \right) \|v\|^2 + \frac{\alpha}{4\gamma(\rho-1)\lambda_{m}} \|\nabla f(x) - \gfsub(x)\|^2. \end{align*} Since the first term is nonpositive for all $\alpha \leq \overline\alpha$, this establishes \cref{eq:est-prox-step}. \end{proof} In the special case $\lambda_{m} \geq L$ and $\rho \leq (2\gamma)^{-1}$, \cref{prop:prox-descent} implies that the approximate descent condition \cref{eq:est-prox-step} holds for all $\alpha \in [0,1]$. The next lemma is one of our key tools to analyze the stochastic behavior of the Newton iterates and to bound the associated residual terms $\|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\|$. \begin{lemma} \label{lemma:gen-conv} Let $(Y_k)_k$ be an arbitrary binary sequence in $\{0,1\}$ and let $a_0 \geq 0$, $\eta \in (0,1),$ $p \in (0,1]$, and $(\nu_k)_k \in \ell^1_+$, $(\varepsilon_k)_k \in \ell^p_+$ be given. 
Let the sequence $(a_k)_k$ be defined by \[ a_{k+1} := (\eta + \nu_k)^{Y_k} a_k + Y_k \varepsilon_k, \quad \forall~k \in \mathbb{N}_0. \] Then, for all $R \geq 1$, $k \geq 0$, and all $q \in [p,1]$, it holds \[ a_{k+1} \leq C_\nu \left [ a_0 + \sum_{k=0}^\infty \varepsilon_k \right ] \quad \text{and} \quad \sum_{k=0}^{R-1} Y_k a_{k+1}^q \leq \frac{C_\nu^q}{1-\eta^q} \left [ (\eta a_0)^q + \sum_{k = 0}^{\infty} \varepsilon_k^q \right ], \] where $C_\nu := \exp\left(\eta^{-1}\sum_{i=0}^\infty \nu_i\right)$. \end{lemma} \begin{proof} Using an induction, we can derive an explicit representation for $a_{k+1}$ \begin{equation} \label{eq:a-rec} a_{k+1} = \left \{ \prod_{i=0}^k (\eta + \nu_i)^{Y_i} \right \} a_0 + \sum_{j = 0}^{k-1} \left \{ \prod_{i=j+1}^k (\eta + \nu_i)^{Y_i} \right \} Y_j \varepsilon_j + Y_k\varepsilon_k, \quad \forall~k \geq 0. \end{equation} Next, using $Y_i \in \{0,1\}$, $i \in \mathbb{N}$, and $\log(1+\nu_i\eta^{-1}) \leq \nu_i\eta^{-1}$, we obtain the estimate \begin{equation} \label{eq:a-fac} \prod_{i=\ell}^k (\eta + \nu_i)^{Y_i} \leq \left \{ \prod_{i=\ell}^k \eta^{Y_i} \right \} \cdot \exp\left ( \eta^{-1} { \sum_{i=\ell}^k Y_i \nu_i } \right) \leq C_\nu \cdot \eta^{\sum_{i=\ell}^k Y_i}, \quad \ell \geq 0. \end{equation} The bound on $a_{k+1}$ now follows from \cref{eq:a-rec}, \cref{eq:a-fac}, and $\eta \leq 1$. Let us now define the set $K_\ell := \{i \in \{\ell,...,R-1\}: Y_i = 1\}$. Then, it holds \[ \sum_{k=\ell}^{R-1} Y_k \eta^{\sum_{i=\ell}^k qY_i} = \sum_{k \in K_\ell} \eta^{\sum_{i \in K_\ell, i \leq k}q} = \sum_{j=1}^{|K_\ell|} \eta^{jq} \leq \sum_{k=\ell}^{R-1} \eta^{(k-\ell+1)q}, \quad \ell \in \{0,...,R-1\}. 
\] Combining the last results and using the subadditivity of $x \mapsto x^q$, $q \in [p,1]$, we have \begin{align*} \sum_{k=0}^{R-1} Y_k a_{k+1}^q & \leq C_\nu^q \sum_{k=0}^{R-1} Y_k \eta^{\sum_{i=0}^k qY_i} a_0^q + C_\nu^q \sum_{k=0}^{R-1} \sum_{j=0}^{k-1} \eta^{\sum_{i=j+1}^k qY_i} Y_k Y_j \varepsilon_j^q+\sum_{k=0}^{R-1} Y_k\varepsilon_k^q \\ & \leq C_\nu^q \sum_{k=0}^{R-1} \eta^{q(k+1)} a_0^q + C_\nu^q \sum_{j=0}^{R-1} \left\{ \sum_{k=j}^{R-1} \eta^{\sum_{i=j}^k qY_i} Y_k \right \} \eta^{-qY_j} Y_j \varepsilon_j^q \\ & \leq \frac{C_\nu^q}{1-\eta^q} \cdot (\eta a_0)^q + C_\nu^q \sum_{j=0}^{R-1} \left\{ \sum_{k=j}^{R-1} \eta^{q(k-j)} \right \} \varepsilon_j^q \leq \frac{C_\nu^q}{1-\eta^q} \left [(\eta a_0)^q + \sum_{j=0}^{\infty} \varepsilon_j^q \right ] \end{align*} as desired. Let us also note that the inclusion $\ell^p_+ \subset \ell^q_+$ is used in the last step. \end{proof} We are now in a position to establish global convergence of Algorithm \ref{alg:ssn} in the sense that the expectation ${\mathds E}[\|F^\Lambda(x^k)\|^2]$ converges to zero as $k \to \infty$. We first show convergence of Algorithm \ref{alg:ssn} under the conditions {\rm(C.1)}--{\rm(C.2)} and under the additional assumptions that the step sizes are diminishing and that the scaled stochastic error terms $\alpha_k \sigma_k^2$, $k \in \mathbb{N}$, are summable, which is a common requirement in the analysis of stochastic methods for nonsmooth, nonconvex optimization, see, e.g., \cite{XuYin15,GhaLanZha16,GL2016}. Our basic idea is to show that both the proximal gradient and the semismooth Newton step yield approximate $\psi$-descent and that the error induced by gradient and Hessian sampling can be controlled in expectation. For a proximal gradient step this basically follows from \cref{prop:prox-descent}. For a Newton step, we combine the growth conditions \cref{eq:growth-1}--\cref{eq:growth-2} and \cref{lemma:gen-conv} to establish an estimate similar to \cref{eq:est-prox-step}.
An analogous strategy was also used in \cite{Mil16,MilUlb14}. In our situation, however, a more careful discussion of the possible effects of the semismooth Newton steps is needed to cope with the stochastic situation. More specifically, since our convergence result is stated in expectation, all possible realizations of the random mini-batches $s^k$ and $t^k$, $k \in \mathbb{N}_0$, and their influence on the conditions \cref{eq:growth-1}--\cref{eq:growth-2} have to be considered. In order to apply \cref{lemma:gen-conv}, we now set up some preparatory definitions. Let $k \in \mathbb{N}_0$ be given and let us define ${\sf Q}_k : \mathbb{R}^n \to \mathbb{R}$, ${\sf Q}_k(a) := \|a\| - (\eta+\nu_k) \theta_k$ and ${\sf P}_k : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, ${\sf P}_k(a,b) := \psi(a) - \beta \theta_k^{1-p} \|b\|^p - \psi(x^k)$, and \[ {\sf S}_k := [\mathrm{dom}~r \times \mathbb{R}^n] \; \cap \; [\mathbb{R}^n \times \mathrm{lev}_{\varepsilon_k^1}\,{\sf Q}_k] \; \cap \; {\mathrm{lev}}_{\varepsilon_k^2}\,{\sf P}_k. \] Then, setting ${\sf Y}_{k+1} := {\mathds 1}_{{\sf S}_k}(z^k_{\sf n},F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n}))$, it holds \[ {\sf Y}_{k+1} = \begin{cases} 1 & \text{if $z^k_{\sf n} = x^k +d^k$ is feasible and satisfies the conditions \cref{eq:growth-1} and \cref{eq:growth-2}}, \\ 0 & \text{otherwise} \end{cases} \] and consequently, each iterate $x^{k+1}$ can be calculated as follows \begin{equation} \label{eq:def-xkk} \begin{aligned} x^{k+1} &= (1-{\sf Y}_{k+1}) z^k_{\sf p} + {\sf Y}_{k+1} z^k_{\sf n} \\ &= (1-{\sf Y}_{k+1})[ x^k - \alpha_k F^{\Lambda_k}_{{s}^k}(x^k) ] + {\sf Y}_{k+1} [ x^k - M_k^+ F^{\Lambda_k}_{{s}^k}(x^k) ]. \end{aligned} \end{equation} Here, the matrix $M_k^+$ denotes the \textit{Moore-Penrose inverse} of the generalized derivative $M_k$. Let us note that this compact representation of our iterative scheme turns out to be particularly useful in the proof of \cref{fact:one}, see \cref{sec:app-fact}. 
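The compact representation \cref{eq:def-xkk} can be mirrored directly in code. The small NumPy sketch below (all names are ours, chosen for illustration) confirms that the indicator-weighted form reproduces exactly the selected branch: the proximal gradient step for ${\sf Y}_{k+1} = 0$ and the step $x^k - M_k^+ F^{\Lambda_k}_{{s}^k}(x^k)$ for ${\sf Y}_{k+1} = 1$:

```python
import numpy as np

def hybrid_update(x, F_val, M, alpha, Y):
    """Indicator-weighted update x_next = (1 - Y) z_p + Y z_n with Y in {0, 1}:
    z_p is the proximal gradient trial point, z_n the semismooth Newton trial
    point computed via the Moore-Penrose pseudoinverse of M."""
    z_p = x - alpha * F_val                  # proximal gradient trial point
    z_n = x - np.linalg.pinv(M) @ F_val      # semismooth Newton trial point
    return (1 - Y) * z_p + Y * z_n
```

Since ${\sf Y}_{k+1} \in \{0,1\}$, no convex combination of the two trial points ever occurs; the weighted form is merely a branch-free encoding of the acceptance test.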
We also introduce the parameters ${\sf Z}_{k}$, $k \in \mathbb{N}_0$, which are defined recursively via \[ {\sf Z}_0 := \theta_0 \in \mathbb{R}_+, \quad {\sf Z}_{k+1} := (\eta + \nu_k)^{{\sf Y}_{k+1}} {\sf Z}_k + {\sf Y}_{k+1} \varepsilon_k^1. \] By construction of Algorithm \ref{alg:ssn} and by induction, we have $\theta_k \leq {\sf Z}_k$ and thus, \begin{equation} \label{eq:new-fund-2} {\sf Y}_{k+1} \| F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\| \leq {\sf Y}_{k+1}{\sf Z}_{k+1}, \quad \forall~k \in \mathbb{N}_0. \end{equation} Moreover, by identifying $Y_k \equiv {\sf Y}_{k+1}$ and $a_k \equiv {\sf Z}_{k}$, \cref{lemma:gen-conv} yields the following sample-independent and uniform bounds \begin{equation} \label{eq:new-fund-3} \sum_{k=0}^{R-1} {\sf Y}_{k+1} {\sf Z}_{k+1}^q \leq \frac{C_\nu^q}{1-\eta^q} \left [ (\eta \theta_0)^q + \sum_{k=0}^\infty (\varepsilon_k^1)^q \right ] =: C_{z}(q) < \infty \end{equation} and ${\sf Z}_{k+1} \leq C_\nu \left[ \theta_0 + \sum_{k=0}^\infty \varepsilon_k^1 \right] =: C_z $ for all $k \geq 0$, $R \in \mathbb{N}$, and $q \in [p,1]$. We now state our main result of this section. \begin{theorem} \label{theorem:conv-prox-v2} Let the sequence $(x^k)_k$ be generated by Algorithm \ref{alg:ssn}. Suppose that the assumptions {\rm(A.1)}--{\rm(A.2)}, {\rm(B.1)}--{\rm(B.2)}, and {\rm(C.1)}--{\rm(C.2)} are satisfied. Furthermore, suppose that the step sizes $\alpha_k \in [0,1]$, $k \in \mathbb{N}$, are chosen such that the approximate descent condition \cref{eq:est-prox-step} holds for some given $\gamma$ and $\rho$. Then, under the additional assumptions \[ (\alpha_k)_k \, \text{ is monotonically decreasing}, \quad \sum~\alpha_k = \infty, \quad \sum~\alpha_k \sigma_k^2 < \infty, \] it holds $\liminf_{k \to \infty} {\mathds E}[\|F^\Lambda({{x}}^k)\|^2] = 0$ and $\liminf_{k \to \infty} \|F^\Lambda({x}^k)\| = 0$ a.s. for any $\Lambda \in \mathbb{S}^n_{++}$.
\end{theorem} \begin{proof} Assumption (A.1) implies that the gradient mapping $\nabla f(x)$ is Lipschitz continuous on $\mathbb{R}^n$ with Lipschitz constant $L$. Thus, for any matrix $\Gamma \in \mathbb{S}^n_{++}$ with $\lambda_{M} I \succeq \Gamma \succeq \lambda_{m} I$, we obtain the Lipschitz constant $1+ L \lambda_{m}^{-1}$ for $u^\Gamma(x) $. Since the proximity operator $\proxt{\Gamma}{r}$ is $\Gamma$-nonexpansive, we now have \begin{align*} \|F^\Gamma(x) - F^\Gamma(y)\| & \leq \|x-y\| + \lambda_{m}^{-\frac{1}{2}} \|\proxt{\Gamma}{r}(u^\Gamma(x)) - \proxt{\Gamma}{r}(u^\Gamma(y))\|_\Gamma \\ & \leq \|x-y\| + (\lambda_{m}^{-1}\lambda_{M})^{\frac{1}{2}} \|u^\Gamma(x) - u^\Gamma(y)\| \\ & \leq (1+({\lambda_{m}^{-1}\lambda_{M}})^\frac{1}{2} + L ({\lambda_{m}^{-3}\lambda_{M}})^\frac{1}{2}) \|x-y\| \end{align*} for all $x,y \in \mathbb{R}^n$. Hence, by assumption (B.1), the functions $x \mapsto F^{\Lambda_k}(x)$, $k \in \mathbb{N}$, are all Lipschitz continuous on $\mathbb{R}^n$ with modulus $L_{F} := 1+({\lambda_{m}^{-1}\lambda_{M}})^\frac{1}{2} + L ({\lambda_{m}^{-3}\lambda_{M}})^\frac{1}{2}$. We first consider the case where $x^{k+1} = z^k_{\sf p} = x^k + \alpha_k v^k$ is generated by the proximal gradient method in step \ref{alg:SSN5}. Then, due to (B.1) and \cref{remark:ada-lambda}, there exists a constant $\underline{\lambda} = \underline{\lambda}(\lambda_{m},\lambda_{M})$ such that \begin{align*} \|F^{\Lambda}(x^{k+1})\| & \leq {\underline{\lambda}}^{-1} ( L_{F} \alpha_k \|v^k\| + \lambda_{m}^{-\frac{1}{2}} \|F^{\Lambda_k}(x^k) - F^{\Lambda_k}_{{s}^k}(x^k)\|_{\Lambda_k} + \lambda_{m}^{-\frac{1}{2}} \|F^{\Lambda_k}_{{s}^k}(x^k)\|_{\Lambda_k} ) \\ & \leq \underline{\lambda}^{-1}\lambda_{m}^{-\frac{1}{2}} (L_F + 1) \|F^{\Lambda_k}_{{s}^k}(x^k)\|_{\Lambda_k} + (\underline{\lambda} \lambda_{m})^{-1} \|\nabla f(x^k) - G_{{s}^k}(x^k)\|. \end{align*} Here, we again used the $\Lambda_k$-nonexpansiveness of the proximity operator $\proxt{\Lambda_k}{r}$. 
Thus, applying \cref{prop:prox-descent}, using the estimate $\|a+b\|^2 \leq 2 \|a\|^2 + 2 \|b\|^2$ for $a, b \in \mathbb{R}^n$, and setting $\mathcal E_k := \|\nabla f(x^k) - G_{{s}^k}(x^k)\|$, we obtain \begin{align*} \psi(x^k) - \psi(x^{k+1}) & \\ &\hspace{-12ex} \geq \underbracket{\begin{minipage}[t][6ex][t]{12ex}\centering$\displaystyle\frac{\gamma\underline{\lambda}^2\lambda_{m}}{2(L_F+1)^2}$\end{minipage}}_{=: \, c_1} \cdot \, \alpha_k \|F^\Lambda(x^{k+1})\|^2 - \underbracket{\begin{minipage}[t][6ex][t]{30ex}\centering$\displaystyle\frac{1}{\lambda_{m}} \left( \frac{\gamma}{(L_F+1)^2} + \frac{1}{4\gamma(\rho-1)} \right)$\end{minipage}}_{=: \, c_2} \cdot \, \alpha_k\mathcal E_k^2. \end{align*} Next, we derive a similar estimate for a Newton step $x^{k+1} = z^k_{\sf n} = x^k + d^k$. As before and due to assumption (B.1) and \cref{remark:ada-lambda}, we have \begin{equation} \label{eq:flam-err} \|F^\Lambda(x^{k+1})\|^2 \leq 2 \underline{\lambda}^{-2} \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\|^2 + 2(\underline{\lambda}\lambda_{m})^{-2} \|\nabla f(x^{k+1}) - G_{{s}^{k+1}}(x^{k+1}) \|^2. \end{equation} Combining the growth condition \cref{eq:growth-2}, \cref{eq:flam-err}, and the bound $\alpha_{k+1} \leq 1$, it holds \begin{align*} \psi(x^k) - \psi(x^{k+1}) & \geq - \beta \theta_k^{1-p} \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\|^p - \varepsilon_k^2 \\ & \geq c_1 \cdot \alpha_{k+1} \|F^\Lambda(x^{k+1})\|^2 - \varepsilon_k^2 - 2 c_1(\underline{\lambda}\lambda_{m})^{-2} \cdot \alpha_{k+1}\mathcal E_{k+1}^2 \\ & \hspace{2ex} - \underbracket{\begin{minipage}[t][4ex][t]{34.5ex}\centering$\displaystyle\left( 2 c_1\underline{\lambda}^{-2} \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\|^{2-p} + \beta \theta_k^{1-p}\right)$\end{minipage}}_{=: \,{\sf T}_k} \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\|^{p}.
\end{align*} Furthermore, using $\|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\| = \theta_{k+1} \leq C_z$ and $\theta_k \leq C_z$, it can be easily shown that the term ${\sf T}_k$ is bounded by a constant $\bar {\sf T}$ that does not depend on any of the random mini-batches $s^j$, $t^j$, $j \in \mathbb{N}_0$. Now, let $R \in \mathbb{N}$ be arbitrary. Then, the monotonicity of $(\alpha_k)_k$, \cref{eq:new-fund-2}--\cref{eq:new-fund-3}, and our last results imply \begin{align*} \psi(x^0) - \psi(x^{R+1}) & \\ & \hspace{-14ex} \geq \sum_{k=0}^{R} c_1 \min\left\{{\alpha_k},{\alpha_{k+1}}\right\} \|F^{\Lambda}(x^{k+1})\|^2 - \sum_{k=0}^R (1- {\sf Y}_{k+1}) c_2 \cdot \alpha_k \mathcal E_k^2 \\ & \hspace{-10ex} - \sum_{k=0}^R {\sf Y}_{k+1} \left [ 2 c_1(\underline{\lambda}\lambda_{m})^{-2} \cdot \alpha_{k+1} \mathcal E_{k+1}^2 + \varepsilon_k^2 + \bar {\sf T} \cdot \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\|^p \right ] \\ & \hspace{-14ex} \geq \sum_{k=0}^{R} \left[ c_1 \alpha_{k+1} \|F^{\Lambda}(x^{k+1})\|^2 - c_2 \alpha_k \mathcal E_k^2 - 2 c_1(\underline{\lambda}\lambda_{m})^{-2} \alpha_{k+1} \mathcal E_{k+1}^2 \right] - \bar {\sf T}C_z(p) - \sum_{k=0}^R \varepsilon^2_k. \end{align*} Thus, taking expectation and setting $ c_3 := c_2 + 2 c_1(\underline{\lambda}\lambda_{m})^{-2}$, we obtain \[ \sum_{k=0}^R c_1{\alpha_{k+1}}{\mathds E}[\|F^{\Lambda}(x^{k+1})\|^2] \leq \psi(x^{0}) - {\mathds E}[\psi(x^{R+1})] + \bar {\sf T} C_z(p) + \sum_{k=0}^R \varepsilon_k^2+ {c_3} \sum_{k=0}^{R+1} \alpha_k{\sigma_k^2}. \] Since the objective function $\psi$ is bounded from below and the sequences $(\alpha_k\sigma_k^2)_k$ and $(\varepsilon^2_k)_k$ are summable, this obviously yields $\sum \alpha_k {\mathds E}[\|F^{\Lambda}(x^{k})\|^2] < \infty$. Consequently, our first claim follows from the assumption $\sum \alpha_k = \infty$. 
On the other hand, Fatou's lemma implies \[ \mathds E\left[ \sum_{k=0}^\infty \alpha_{k} \|F^\Lambda(x^{k})\|^2 \right] \leq \liminf_{R \to \infty} \mathds E\left[ \sum_{k=0}^R {\alpha_{k}}\|F^{\Lambda}(x^{k})\|^2 \right] < \infty \] and hence, we have $\sum \alpha_k \|F^\Lambda(x^k)\|^2 < \infty$ with probability 1. As before, we can now infer $\liminf_{k \to \infty} \|F^\Lambda(x^k)\| = 0$ with probability 1, which completes our proof. \end{proof} \begin{remark} \label{remark:sample-size} In the case $\sum \sigma_k^2 < \infty$ and if the step sizes $\alpha_k$ are fixed or bounded, our results in \cref{theorem:conv-prox-v2} can be strengthened to $\lim_{k \to \infty} \Exp [\|F^\Lambda(x^k)\|^2] = 0$ and we have $\lim_{k \to \infty} F^\Lambda(x^k) = 0$ almost surely. Let us now assume that the samples $s^k_i$, $i \in [{\bf n}^{\sf g}_k]$, $k \in \mathbb{N}$, are chosen independently of each other and that the conditions \begin{equation} \label{eq:remark-uni-exp} \Exp[\cG(x,s^k_i)] = \nabla f(x), \quad\quad \Exp[\| \nabla f(x)- \cG(x,s^k_{i})\|^2] \leq \bar\sigma^2, \end{equation} hold uniformly for all $i \in [{\bf n}^{\sf g}_k]$, $k \in \mathbb{N}_0$, and $x \in \mathbb{R}^n$ and for some $\bar\sigma > 0$. Then, as shown in \cite{GhaLanZha16,IusJofOliTho17}, it follows $\Exp[\|\nabla f(x) - G_{{s}^k}(x)\|^2] \leq \bar\sigma^2 [{\bf n}^{\sf g}_k]^{-1}$ for all $x \in \mathbb{R}^n$ and consequently, due to ${\sf Y}_k \in \{0,1\}$ and $z^{k-1}_{\sf n}, z^{k-1}_{\sf p} \in \mathcal F_{k-1}$, we have \begin{align*} \Exp[\|\nabla f(x^k) - G_{{s}^k}(x^k)\|^2] & \leq 2 \Exp[(1-{\sf Y}_k) \|\nabla f(z^{k-1}_{\sf p})- G_{{s}^k}(z^{k-1}_{\sf p})\|^2] \\ & \hspace{4ex}+ 2 \Exp[{\sf Y}_k \|\nabla f(z^{k-1}_{\sf n}) - G_{{s}^k}(z^{k-1}_{\sf n}) \|^2] \leq 4\bar\sigma^2 [{\bf n}^{\sf g}_k]^{-1}.
\end{align*} Hence, one way to guarantee summability of the error terms $\sigma_k^2$ is to asymptotically increase the sample size ${\bf n}^{\sf g}_k$ and set ${\bf n}^{\sf g}_k = \mathcal O(k^{1+\varpi})$ for some $\varpi > 0$. This observation is similar to the results in \cite{XuYin15,GhaLanZha16}. We will discuss the conditions \cref{eq:remark-uni-exp} in more detail in the next section. \end{remark} In the following, we present a situation where the approximate $\psi$-descent condition \cref{eq:growth-2} is not needed in order to guarantee global convergence of the method. In applications, this can be quite important, since calculating the full objective function $\psi$ may be similarly expensive as evaluating the full gradient $\nabla f$. The following variant of \cref{theorem:conv-prox-v2} is mainly based on the strong convexity assumption (A.3) and on the boundedness assumption (A.4). \begin{theorem} \label{theorem:conv-prox-strong} Let the sequence $(x^k)_k$ be generated by Algorithm \ref{alg:ssn} without checking the growth condition \cref{eq:growth-2}. Suppose that the assumptions {\rm(A.1)}, {\rm(A.3)}--{\rm(A.4)}, {\rm(B.1)}--{\rm(B.2)}, and {\rm(C.1)}--{\rm(C.2)} are satisfied. Furthermore, suppose that the step sizes $(\alpha_k)_k$ are chosen via $\alpha_k \in [\underline{\alpha}, \min\{1,\overline{\alpha}\}]$ for some $\underline{\alpha} > 0$ and all $k \in \mathbb{N}$. Then, under the additional assumption \[ \sum \sigma_k < \infty, \] it holds $\lim_{k \to \infty} {\Exp}[\|F^\Lambda(x^k)\|^2] = 0$ and $\lim_{k\to\infty} F^\Lambda(x^k) = 0$ a.s. for any $\Lambda \in \mathbb{S}^n_{++}$ . \end{theorem} \begin{proof} As in the proof of \cref{theorem:conv-prox-v2}, we want to derive suitable lower bounds for the $\psi$-descent $\psi(x^k) - \psi(x^{k+1})$. We first consider the case where $x^{k+1} = z^k_{\sf p}$ is generated by the proximal gradient method. 
Then, as shown in the proof of \cref{theorem:conv-prox-v2} and using the bound on $\alpha_k$, we have \[ \psi(x^k) - \psi(x^{k+1}) \geq c_1 \underline{\alpha} \|F^\Lambda(x^{k+1})\|^2 - c_2 \cdot \mathcal E_k^2. \] Next, we discuss the second case $x^{k+1} = z^k_{\sf n}$. By \cref{lemma:str-conv} and reusing the estimate $\|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\| \leq C_z$ (see again \cref{eq:new-fund-2}), it holds \[ \|z^k_{\sf n} - x^*\|^2 \leq B_1(\tau) \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\|^2 + B_2(\tau) \mathcal E_{k+1}^2 \leq B_1(\tau)C_z \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\| + B_2(\tau) \mathcal E_{k+1}^2 \] for some $\tau > 0$. By assumption (A.3), the functions $f - \frac{\mu_f}{2}\|\cdot\|^2$ and $r - \frac{\mu_r}{2}\|\cdot\|^2$ are convex (and directionally differentiable) and hence, we have \begin{equation} \label{eq:str-conv-dir} \psi(y) - \psi(x) \geq r^\prime(x;y-x) + \iprod{\nabla f(x)}{y-x} + \frac{\bar\mu}{2} \|y-x\|^2, \quad \forall~x,y \in \mathrm{dom}~r. \end{equation} Now, applying the optimality of $x^*$, \cref{eq:str-conv-dir} with $x \equiv z^k_{\sf n}$ and $y \equiv x^*$, $z^k_{\sf n} \in \mathrm{dom}~r$, the Lipschitz continuity of $\nabla f$ and $F^\Lambda$, the subadditivity of the square root, and defining \[ d_1 := \sqrt{B_1(\tau)}(\bar g_r + \|\nabla f(x^*)\|) + B_1(\tau)C_z L, \quad d_2 := \sqrt{B_2(\tau)}(\bar g_r + \|\nabla f(x^*)\|), \] we obtain \begin{align*} \psi(x^k) - \psi(x^{k+1}) & = \psi(x^k) - \psi(x^*) + \psi(x^*) - \psi(x^{k+1}) \\ & \geq r^\prime(z^k_{\sf n}; x^*-z^k_{\sf n}) + \iprod{\nabla f(z^k_{\sf n})}{x^* - z^k_{\sf n}} + \frac{\bar\mu}{2} \|z^k_{\sf n} - x^*\|^2 \\ & \geq -(\bar g_r + \|\nabla f(x^*)\|) \cdot \|z^k_{\sf n} - x^*\| + \left(\frac{\bar\mu}{2} - L \right) \|z^k_{\sf n} - x^*\|^2 \\ & \geq - d_1 \|F^{\Lambda_{k+1}}_{{s}^{k+1}}(z^k_{\sf n})\| - d_2 \mathcal E_{k+1} - B_2(\tau)L \mathcal E_{k+1}^2 + \frac{\bar\mu{\underline{\lambda}}^2}{2L_F^2} \|F^\Lambda(x^{k+1})\|^2. 
\end{align*} Combining the last inequalities, setting $d_3 := \min\{c_1\underline{\alpha}, (2L_F^2)^{-1}\bar\mu{\underline{\lambda}}^2\}$ and using again \cref{eq:new-fund-3} with $q = 1$, it holds \begin{align*} \psi(x^0) - \psi(x^{R+1}) \geq \sum_{k=1}^{R+1} \left[ d_3 \|F^\Lambda(x^{k})\|^2 - d_2 \mathcal E_k \right] - (c_2 + B_2(\tau)L) \sum_{k=0}^{R+1} \mathcal E_k^2 - d_1 C_z(1), \end{align*} for all $R \in \mathbb{N}$. Taking expectation, our first claim now follows from (C.2), Jensen's inequality, $\sum \sigma_k < \infty$, and from the lower boundedness of $\psi(x^{R+1})$. The probabilistic convergence of the sequence $(F^\Lambda(x^k))_k$ can then be inferred as in the proof of \cref{theorem:conv-prox-v2}. \end{proof} \begin{remark} \label{remark:example-c3} Let us note that assumption {\rm(A.4)} is required to derive a suitable lower bound for the difference terms $\psi(x^k) - \psi(z^k_{\sf n})$ which allows us to apply \cref{lemma:gen-conv}. Furthermore, condition {\rm(A.4)} is always satisfied in the following situation. Suppose that the mapping $r$ has the special form $r = \iota_{\mathcal C} + \varphi$, where $\varphi : \mathbb{R}^n \to \mathbb{R}$ is a real-valued, convex function and $\iota_{\mathcal C} : \mathbb{R}^n \to (-\infty,+\infty]$ is the indicator function of a nonempty, convex, and closed set $\mathcal C \subset \mathbb{R}^n$. Then, assumption {\rm(A.4)} holds if the set $\mathcal C$ is either compact or if $\varphi$ is positively homogeneous. In particular, condition {\rm(A.4)} is satisfied if $r$ is a norm. \end{remark} \begin{proof} For every feasible point $x \in \mathrm{dom}~r = \mathcal C$, we have $0 \in \partial \iota_{\mathcal C}(x)$ and thus, $\partial \varphi(x) \subset \partial r(x)$. By \cite[Proposition 16.17]{BauCom11}, the set $\bigcup_{x \in \mathcal C} \partial \varphi(x)$ is bounded if $\mathcal C$ is bounded. 
On the other hand, if $\varphi$ is positively homogeneous, then it follows $\partial \varphi(x) = \{\lambda \in \partial \varphi(0): \iprod{\lambda}{x} = \varphi(x)\} \subset \partial \varphi(0)$, see, e.g., \cite[Proposition 16.18]{BauCom11} and \cite[Example 2.5.17]{Mil16}. Since $\partial \varphi (0)$ is again a compact set, this proves our claim. \end{proof} \begin{remark} The result in \cref{theorem:conv-prox-strong} can be further improved by additionally damping the semismooth Newton step and setting $z^k_{\sf n} := x^k + \alpha_k d^k$. Then, due to the convexity of $\psi$, we have $\psi(x^k) - \psi(z^k_{\sf n}) \geq \alpha_k (\psi(x^k) - \psi(x^k+d^k))$ and we can use the weaker conditions \[ (\alpha_k)_k \, \text{ is monotonically decreasing}, \quad \sum~\alpha_k = \infty, \quad \sum~\alpha_k\sigma_k < \infty \] to guarantee $\liminf_{k \to \infty} {\mathds E}[\|F^\Lambda(x^k)\|^2] = 0$. Similar to \cite{FS2012, XuYin15,BBN2016} it is also possible to derive global convergence rates in terms of the expected distance to optimality $\mathds E[\psi(x^k) - \psi(x^*)]$. However, in our case these rates will depend on the occurrence and total number of accepted Newton steps which are of stochastic nature in general. \end{remark} Finally, let us emphasize that our global results do not explicitly depend on the sampling strategy or on any (uniform) invertibility properties of the stochastic second order oracle $H_{t}$ or of the chosen generalized derivatives $M_k$. Moreover, our results still hold if a different type of direction $d^k$ is used instead of the semismooth Newton direction $d^k = - M_k^+ F^{\Lambda_k}_{{s}^k}(x^k)$. (In our proofs, we only require $\mathcal F_k$-measurability of $d^k$). \section{Local convergence}\label{sec:local} In this part of the paper, we analyze the local convergence properties of our proposed method in detail. 
We will focus on a probabilistic setting, i.e., we consider a single trajectory of the stochastic process $(x^k)_k$ and show that transition to fast local convergence and a fast rate of convergence can be achieved with high probability if the sample sizes ${\bf n}^{\sf g}_k$ and ${\bf n}^{\sf h}_k$ are chosen appropriately. With a slight abuse of notation, we will use $(x^k)_k$ to denote either the underlying stochastic process or a corresponding trajectory generated by a single run of Algorithm \ref{alg:ssn} which should be clear from the context. Our analysis heavily relies on different second order properties of the proximity operator $\proxt{\Lambda}{r}$ and on concentration inequalities for vector- and matrix-valued martingales. In particular, these inequalities will allow us to quantify and control the errors induced by the stochastic oracles and by approximating the gradient and Hessian of $f$. A similar strategy was also used in \cite{RKM2016I,RKM2016II,XRKM2017,YeLuoZha17} for the analysis of pure, sub-sampled Newton methods for smooth optimization problems. In the next subsection, we present our local assumptions and the mentioned concentration results. \subsection{Assumptions and Conditional Concentration Inequalities} \label{sec:loc-condconc} We will mainly work with the following set of local assumptions. \begin{assumption} \label{assumption:local} Let the trajectory $(x^k)_k$ and the sequence $(\Lambda_k)_k$ be generated by Algorithm \ref{alg:ssn} and suppose that $x^*$ and $\Lambda_*$ are accumulation points of $(x^k)_k$ and $(\Lambda_k)_k$, respectively. We assume that the following conditions are satisfied. \vspace{.5ex} \begin{itemize} \item[{\rm(D.1)}] There exists $\bar k \in \mathbb{N}$ such that $\Lambda_k = \Lambda_*$ for all $k \geq \bar k$. \item[{\rm(D.2)}] The function $f : \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable on $\mathbb{R}^n$. 
\item[{\rm(D.3)}] The proximity operator $\proxt{\Lambda_*}{r}$ is semismooth at $u^{\Lambda_*}(x^*)$. \item[{\rm(D.4)}] The function $\psi$ is Lipschitz continuous in a neighborhood of $x^*$ with constant $L_\psi$. \item[{\rm(D.5)}] There exists $C > 0$ such that every generalized derivative $M \in \mathcal M^{\Lambda_*}(x^*)$ is nonsingular with $\|M^{-1}\| \leq C$. \end{itemize} If, in addition, $x^*$ is a stationary point of \cref{eq:prob}, then we assume: \begin{itemize} \item[{\rm(D.6)}] The accumulation point $x^*$ is a local minimum of the problem \cref{eq:prob}. \end{itemize} \end{assumption} Let us briefly discuss the conditions in \cref{assumption:local}. Assumption (D.5) can be interpreted as a \textit{BD-} or \textit{CD-regularity condition}, which is a common condition in the local analysis of nonsmooth optimization methods, see, e.g., \cite{QiSun93,Qi93,PanQi93}. Let us also mention that, by \cite[Corollary 8.30]{BauCom11}, the Lipschitz condition (D.4) is equivalent to $x^* \in \mathrm{int}~\mathrm{dom}~r$. Hence, in a suitable neighborhood of $x^*$, any point $x$ will be feasible with $x \in \mathrm{dom}~r$. See \cref{remark:proj-loc} for further comments. Finally, as shown in \cite[section 5.4]{Mil16}, the assumptions (D.5) and (D.6) are both satisfied if $x^*$ is a stationary point and $\nabla^2 f(x^*)$ is positive definite. In the following, we introduce three additional conditions that are connected to the variance of the error terms $\mathcal E^{\sf g}_{k}$ and $\mathcal E^{\sf h}_{k}$, \[ \mathcal E^{\sf g}_{k}(x) := \|G_{{s}^k}(x) - \nabla f(x)\|, \quad \mathcal E^{\sf h}_{k}(x) := \|H_{t^k}(x) - \nabla^2 f(x)\|, \] and that extend assumption (C.2). \begin{assumption} \label{assumption:con-var} We consider the conditions: \begin{itemize} \item[{\rm (E.1)}] The random mappings $s^k_i$, $t^k_j$ are mutually independent for all $i \in [{\bf n}^{\sf g}_k]$, $j \in [{\bf n}^{\sf h}_k]$, and $k \in \mathbb{N}_0$.
Furthermore, the stochastic oracles $\cS\cF\cO$ and $\cS\cS\cO$ generate unbiased estimators of the gradient and Hessian of $f$, i.e., for all $x \in \mathbb{R}^n$ it holds \end{itemize} \[ \Exp[\cG(x,s^k_i)] = \nabla f(x), \quad \Exp[\cH(x,t^k_j)] = \nabla^2 f(x), \quad \forall~i \in [{\bf n}^{\sf g}_k], \; \forall~j \in [{\bf n}^{\sf h}_k], \; \forall~k \in \mathbb{N}. \] \begin{itemize} \item[{\rm (E.2)}] We have $\Exp[[\mathcal E^{\sf g}_{k}(z^{k-1}_{\sf n})]^2] < \infty$, $\Exp[[\mathcal E^{\sf g}_{k}(z^{k-1}_{\sf p})]^2] < \infty$, $\Exp[[\mathcal E^{\sf h}_{k}(x^k)]^2] < \infty$ for all $k \in \mathbb{N}$ and there exist $\bar\sigma, \bar\rho \geq 0$ such that for all $k \in \mathbb{N}_0$, $i \in [{\bf n}^{\sf g}_k]$, $j \in [{\bf n}^{\sf h}_k]$, it holds \end{itemize} \begin{equation} \label{eq:var-bound-quad} \Exp[\|\cG(x,s^k_i) - \nabla f(x)\|^2] \leq \bar\sigma^2, \quad \Exp[\|\cH(x,t^k_j) - \nabla^2 f(x)\|^2] \leq \bar\rho^2, \quad \forall~x \in \mathbb{R}^n. \end{equation} \begin{itemize} \item[{\rm (E.3)}] There exist $\bar\sigma, \bar\rho > 0$ such that $\Exp[\exp([\mathcal E^{\sf g}_{k}(z^{k-1}_{\sf n})]^2/\bar\sigma^2)] < \infty$, \[ \Exp[\exp([\mathcal E^{\sf g}_{k}(z^{k-1}_{\sf p})]^2/\bar\sigma^2)] < \infty, \quad \Exp[\exp([\mathcal E^{\sf h}_{k}(x^k)]^2/\bar\rho^2)] < \infty, \quad \forall~k \in \mathbb{N}, \] and for all $x \in \mathbb{R}^n$ and all $k \in \mathbb{N}_0$, $i \in [{\bf n}^{\sf g}_k]$, $j \in [{\bf n}^{\sf h}_k]$, we have \end{itemize} \begin{equation} \label{eq:var-bound-exp} \Exp[\exp(\|\cG(x,s^k_i)- \nabla f(x)\|^2/\bar\sigma^2)] \leq e, \;\; \Exp[\exp(\|\cH(x,t^k_j) - \nabla^2 f(x)\|^2/\bar\rho^2)] \leq e. \end{equation} \end{assumption} The inequalities and properties stated in (E.1)--(E.2) have already been discussed in \cref{remark:sample-size}.
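Conditions {\rm(E.1)}--{\rm(E.2)} are what drive the $\bar\sigma^2 [{\bf n}^{\sf g}_k]^{-1}$ variance reduction used in \cref{remark:sample-size}. A minimal Monte Carlo sketch (a scalar toy oracle with Gaussian noise; all parameters are illustrative) shows the expected squared error of a mini-batch mean decaying like $1/{\bf n}^{\sf g}$:

```python
import random

def batch_mean_mse(n_batch, sigma=1.0, trials=20000, seed=0):
    """Empirical E||G_s(x) - grad f(x)||^2 when G_s averages n_batch unbiased
    oracle calls, each perturbed by independent noise of variance sigma^2
    (scalar toy model; the oracle error is simulated directly)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        # mini-batch gradient error = average of n_batch independent noises
        err = sum(rng.gauss(0.0, sigma) for _ in range(n_batch)) / n_batch
        acc += err * err
    return acc / trials
```

In this toy model the bound is sharp: the measured mean-squared error matches $\sigma^2/{\bf n}^{\sf g}$ up to sampling noise, which is exactly the mechanism exploited when increasing the sample sizes.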
As we have seen and as we will verify in \cref{lemma:tail-bound} and in \cref{sec:trans-loc} in more detail, these conditions allow us to bound and control the error terms $\mathcal E^{\sf g}_{k}$ and $\mathcal E^{\sf h}_{k}$ by means of the sample sizes ${\bf n}^{\sf g}_k$ and ${\bf n}^{\sf h}_k$. We note that the conditions given in \cref{assumption:con-var} are commonly used in the complexity and convergence analysis of stochastic optimization methods, see, e.g., \cite{GL2013,BBN2016,GhaLanZha16}. Next, we summarize several conditional large deviation bounds for vector- and matrix-valued martingales. For more information on tail bounds and additional matrix concentration inequalities, we refer to the papers \cite{JudNem08,Tro12}. To the best of our knowledge, the ``light tail'' result for symmetric random matrices presented in \cref{lemma:tail-bound} (ii) seems to be new. \begin{lemma} \label{lemma:tail-bound} Let $(\mathcal U_k)_{k=0}^m$ be a given filtration of the $\sigma$-algebra $\mathcal F$ and let $\bar \sigma \in \mathbb{R}^m$ be a given vector with $\bar\sigma_k \neq 0$ for all $k$. It holds: \begin{itemize} \item[{\rm(i)}] Let $({\sf X}_k)_{k=1}^m$, ${\sf X}_k : \Omega \to \mathbb{R}^n$, be a family of random vectors, satisfying ${\sf X}_k \in \mathcal U_k$, $\Exp[{\sf X}_k \mid \mathcal U_{k-1}] = 0$, and $\Exp[\|{\sf X}_k\|^2 \mid \mathcal U_{k-1}] \leq \bar\sigma_k^2$ $\text{a.e.}$ for all $k \in [m]$. Then, we have \end{itemize} \[ \Exp[\|{\textstyle \sum_{k=1}^m} {\sf X}_k\|^2 \mid \mathcal U_0] \leq \|\bar\sigma\|^2 \quad \text{and} \quad \Prob( \|{\textstyle \sum_{k=1}^m} {\sf X}_k\| \geq \tau \|\bar\sigma\| \mid \mathcal U_0) \leq \tau^{-2}, \quad \forall~\tau > 0 \] \begin{itemize} \item[] almost everywhere.
In addition, if it holds $\Exp[ \exp(\|{\sf X}_k\|^2/\bar\sigma_k^2) \mid \mathcal U_{k-1}] \leq \exp(1)$ $\text{a.e.}$ and for all $k \in [m]$, then with probability 1 it follows \end{itemize} \[ \Prob\left(\left\|{\textstyle\sum_{k=1}^m} {\sf X}_k\right\| \geq (1 + \tau) \|\bar\sigma\| \mid \mathcal U_0 \right) \leq \exp(-\tau^2/3), \quad \forall~\tau > 0. \] \begin{itemize} \item[\rm(ii)] Let $({\sf X}_k)_{k=1}^m$ be a sequence of symmetric random $n \times n$ matrices satisfying ${\sf X}_k \in \mathcal U_k$, $\Exp[{\sf X}_k \mid \mathcal U_{k-1}] = 0$, and $\Exp[\|{\sf X}_k\|^2 \mid \mathcal U_{k-1}] \leq \bar\sigma_k^2$ $\text{a.e.}$ for all $k \in [m]$. Then, it holds \end{itemize} \[ \Prob( \|{\textstyle \sum_{k=1}^m} {\sf X}_k\| \geq \tau \|\bar\sigma\| \mid \mathcal U_0) \leq \kappa_n \cdot \tau^{-2}, \quad \forall~\tau > 0 \] \begin{itemize} \item[] $\text{a.e.}$ with $\kappa_n :=(2\log(n+2)-1)e$. Additionally, if we have $\Exp[ \exp(\|{\sf X}_k\|^2/\bar\sigma_k^2) \mid \mathcal U_{k-1}] \leq \exp(1)$ $\text{a.e.}$ and for all $k \in [m]$, then with probability 1 it follows \end{itemize} \begin{equation} \label{eq:mat-bound-light} \Prob\left(\left\|{\textstyle\sum_{k=1}^m} {\sf X}_k\right\| \geq \tau \|\bar\sigma\| \mid \mathcal U_0 \right) \leq 2n \cdot \exp(-\tau^2/3), \quad \forall~\tau > 0. \end{equation} \end{lemma} \begin{proof} The first result in part (i) is well-known, see \cite{JudNem08,GhaLanZha16,IusJofOliTho17}. The associated probability bound directly follows from the conditional Markov inequality. Since the Euclidean norm is \textit{$1$-smooth}, the second result in part (i) follows from \cite[Theorem 4.1]{JudNem08}. In \cite{JudNem08}, Juditsky and Nemirovski also verified that the spectral norm is \textit{$\kappa_n$-regular} which implies \[ \Exp[\|{\textstyle\sum_{k=1}^m} {\sf X}_k\|^2 \mid \mathcal U_0] \leq \kappa_n \|\bar\sigma\|^2 \quad \text{a.e.} \] and establishes the first bound in part (ii). 
The remaining result can be shown by combining the techniques presented in \cite{Tro12} and \cite[Proposition 4.2]{JudNem08}. For the sake of completeness, an explicit proof is given in \cref{sec:app-3}. \end{proof} Let us now suppose that the assumptions (E.1)--(E.2) are satisfied. Then, using the integrability condition $\Exp[[\mathcal E^{\sf g}_{k}(z^{k-1}_{\sf n})]^2] < \infty$, \cref{fact:one}, and since $\cG$ is a Carath\'{e}odory function and the $\sigma$-algebras $\mathcal F_{k-1}$ and $\sigma(s^{k}_i)$ are independent, it follows \vspace{1ex} \begin{itemize} \item $\Exp[\cG(z^{k-1}_{\sf n},s^{k}_i) - \nabla f(z^{k-1}_{\sf n}) \mid \mathcal F_{k-1}] = \Exp[\cG(\cdot,s^{k}_i)](z^{k-1}_{\sf n}) - \nabla f(z^{k-1}_{\sf n}) = 0$, \vspace{.5ex} \item $\Exp[\|\cG(z^{k-1}_{\sf n},s^{k}_i) - \nabla f(z^{k-1}_{\sf n})\|^2 \mid \mathcal F_{k-1}] = \Exp[\|\cG(\cdot,s^{k}_i) - \nabla f(\cdot)\|^2](z^{k-1}_{\sf n}) \leq \bar\sigma^2$, \vspace{1.5ex} \end{itemize} $\text{a.e.}$ and for all $i = 1,...,{\bf n}^{\sf g}_k$ and $k \in \mathbb{N}$, see \cite[Theorem 2.10]{BhaWay16}. In a similar fashion (and since $\sigma(t_j^k)$ and $\hat{\mathcal F}_{k-1}$ are independent), we can derive almost sure bounds for the proximal gradient step $z^{k-1}_{\sf p}$ and for the Hessian error terms $\cH(x^k,t^k_j) - \nabla^2 f(x^k)$, $j \in [{\bf n}^{\sf h}_k]$. Hence, the results and bounds in \cref{lemma:tail-bound} are applicable in this situation. We also want to point out that the integrability conditions in \cref{assumption:con-var} are only required for the well-definedness of the conditional expectations and for the associated calculus, \cite{Wil91,BhaWay16}. \subsection{Second Order Properties and Bounded Invertibility} In this subsection, we derive a general invertibility result that can be applied to guarantee bounded invertibility of the generalized derivatives used in our stochastic semismooth Newton framework. 
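A concrete instance to keep in mind throughout this subsection is $\Lambda = I$ and $r = \mu \|\cdot\|_1$, for which the proximity operator is the componentwise soft-thresholding map. The sketch below (our own illustration, not part of the original analysis) evaluates its derivative at a point of differentiability and numerically confirms the definiteness properties recorded in \cref{lemma:prox:sec-clarke}:

```python
import numpy as np

def prox_l1(u, mu):
    """Soft-thresholding: proximity operator of r = mu * ||.||_1 for Lambda = I."""
    return np.sign(u) * np.maximum(np.abs(u) - mu, 0.0)

def prox_l1_jacobian(u, mu):
    """Derivative D of prox_l1 at a point u with |u_i| != mu for all i:
    a diagonal 0/1 matrix, so D and I - D are symmetric positive semidefinite
    and <Dh, (I - D)h> = 0 >= 0 for every h (here even D(I - D) = 0)."""
    return np.diag((np.abs(u) > mu).astype(float))
```

In this diagonal case the properties are immediate; the point of \cref{lemma:prox:sec-clarke} is that they persist for every element of the Clarke subdifferential of a general proximity operator.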
We start with the presentation of several second order properties of the proximity operator that are essential for our analysis and that are motivated by the results in \cite{HirStrNgu84,MenSunZha05}. Since the proximity operator is a Lipschitz continuous function, Rademacher's theorem implies that the proximal mapping $\proxt{\Lambda}{r}$ is Fr\'echet differentiable almost everywhere. Let $\Omega_{r}^\Lambda \subset \mathbb{R}^n$ denote the set of all points at which the proximity operator $\proxt{\Lambda}{r}$ is differentiable. Then, as shown in \cite[Section 3.3]{Mil16}, the following statements are valid: \vspace{1ex} \begin{itemize} \item For all $x \in \Omega_{r}^\Lambda$ the matrix $\Lambda{D}\proxt{\Lambda}{r}(x)$ is symmetric and positive semidefinite. \vspace{.5ex} \item For all $x \in \Omega_{r}^\Lambda$ the matrix $\Lambda(I - {D}\proxt{\Lambda}{r}(x))$ is symmetric and positive semidefinite. \end{itemize} \vspace{1ex} A continuity argument shows that the last properties are also satisfied for every generalized derivative $D \in \partial_B \proxt{\Lambda}{r}(x)$. In the following lemma, we summarize our observations and state an analogous result for the Clarke subdifferential of the proximity operator $\proxt{\Lambda}{r}$. Let us mention that Meng et al. \cite{MenSunZha05} established a similar result for metric projections onto convex, nonempty, and closed sets. Our result also extends Theorem 3.2 in \cite{PatSteBem14}. \begin{lemma} \label{lemma:prox:sec-clarke} Let $\Lambda \in \mathbb{S}^n_{++}$ and $x \in \mathbb{R}^n$ be arbitrary. Then, for every $D \in \partial \proxt{\Lambda}{r}(x)$, the following statements are true: \vspace{0.5ex} \begin{itemize} \item[\rmn{(i)}] The matrices $\Lambda D$ and $\Lambda (I - D)$ are symmetric and positive semidefinite. \vspace{0.5ex} \item[\rmn{(ii)}] It holds $\iprod{Dh}{\Lambda(I-D)h} \geq 0$ for all $h \in \mathbb{R}^n$.
\end{itemize} \end{lemma} \begin{proof} The first part is a consequence of $\partial \proxt{\Lambda}{r}(x) = {\rm conv}(\partial_B \proxt{\Lambda}{r}(x))$ and of the invertibility of $\Lambda$. The proof of the second part is identical to the proof of \cite[Proposition 1]{MenSunZha05} and is therefore omitted. \end{proof} Next, we present the promised local invertibility result. \begin{lemma} \label{lemma:invertibility} Suppose that the conditions {\rm(A.1)}, {\rm(D.2)}, and {\rm(D.5)} are satisfied and let $s$ and $t$ be arbitrary sample mini-batches. Furthermore, let $x^*$ and $\Lambda_*$ be given as in \cref{assumption:local} with $\lambda_{M} I \succeq \Lambda_* \succeq \lambda_{m} I$. Then, for all $\gamma_c \in (0,\beta_c/C)$, $\beta_c \in (0,1)$, there exists $\varepsilon_c > 0$ (that does not depend on $s$ or $t$) such that under the additional conditions \begin{equation} \label{eq:bd-inv-cond} x \in B_{\varepsilon_c}(x^*), \quad \|\nabla f(x) - \gfsub(x)\| \leq \varepsilon_c, \quad \|\nabla^2 f(x) - H_{t}(x)\| \leq 0.5\lambda_{m}\gamma_c, \end{equation} the matrices $M \in \mathcal M^{\Lambda_*}_{s,t}(x)$ are all boundedly invertible with $\|M^{-1}\| \leq C / (1-\beta_c)$. \end{lemma} \begin{proof} Let us set $K_* := \|\nabla^2 f(x^*)\|$. Since the multifunction $\partial \proxt{\Lambda_*}{r} : \mathbb{R}^n \rightrightarrows \mathbb{R}^{n \times n}$ is upper semicontinuous, there exists $\tilde\varepsilon > 0$ such that \begin{equation} \label{eq:prf-upp-semi} \partial \proxt{\Lambda_*}{r}(y) \subset \partial \proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*)) + B_{\tilde\delta}(0), \quad \forall~y \in B_{\tilde\varepsilon}(u^{\Lambda_*}(x^*)), \end{equation} where $\tilde\delta := \lambda_{m}\gamma_c / (4(\lambda_{m} + K_*))$, see, e.g., \cite[Proposition 2.6.2]{Clarke1990}. 
Moreover, by the continuity of the Hessian $\nabla^2 f$ and after possibly decreasing $\tilde\varepsilon$, we may also assume $\|\nabla^2 f(x) - \nabla^2 f(x^*)\| \leq 0.25\lambda_{m}\gamma_c$ for all $x \in B_{\tilde\varepsilon}(x^*)$. Let us now set $\varepsilon_c := \min\{(L+\lambda_{m})^{-1},1\}\frac{\tilde\varepsilon\lambda_{m}}{2}$ and let us consider an arbitrary matrix $M \in \mathcal M^{\Lambda_*}_{s,t}(x)$ with \[ M = I - D + D\Lambda^{-1}_* H_{t}(x) \quad \text{and} \quad D \in \partial \proxt{\Lambda_*}{r}(u^{\Lambda_*}_s(x)). \] Then, due to \begin{align} \nonumber \|u^{\Lambda_*}_s(x) - u^{\Lambda_*}(x^*)\| & \leq \|u^{\Lambda_*}_s(x) - u^{\Lambda_*}(x)\| + \|u^{\Lambda_*}(x) - u^{\Lambda_*}(x^*)\| \\ & \leq \lambda_{m}^{-1} \|\nabla f(x) - \gfsub(x)\| + (1+L\lambda_{m}^{-1}) \|x-x^*\| \leq \tilde\varepsilon \label{eq:bound-ulam} \end{align} and \cref{eq:prf-upp-semi}, there exists $D_* \in \partial \proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*))$ such that $\|D - D_*\| \leq \tilde \delta$. Using \cref{lemma:prox:sec-clarke} (i), we have $\|D\Lambda^{-1}_*\| = \|\Lambda_*^{-\frac{1}{2}}[\Lambda_*^{\frac{1}{2}}D\Lambda^{-\frac{1}{2}}_*]\Lambda^{-\frac{1}{2}}_*\| \leq \|\Lambda^{-1}_*\|$. Thus, defining $M_* := I - D_* + D_* \Lambda^{-1}_* \nabla^2 f(x^*) \in \mathcal M^{\Lambda_*}(x^*)$, it follows \begin{align*} \|M - M_*\| & = \|(D_* - D)(I - \Lambda_*^{-1}\nabla^2 f(x^*)) + D\Lambda_*^{-1}(H_{t}(x) - \nabla^2 f(x^*))\| \\ & \leq (1+\lambda_{m}^{-1}K_*)\tilde\delta + \lambda_{m}^{-1} (\|\nabla^2 f(x) - H_{t}(x)\| + \|\nabla^2 f(x) - \nabla^2 f(x^*)\|) \leq \gamma_c. \end{align*} Due to (D.5) and $\gamma_c \leq \beta_c/C$, it now holds $\|M_*^{-1}(M - M_*)\| \leq \beta_c < 1$. Consequently, by the Banach perturbation lemma, $M$ is invertible with \[ \|M^{-1}\| = \|(M_* + M - M_*)^{-1}\| \leq \frac{\|M_*^{-1}\|}{1- \|M_*^{-1}(M - M_*)\|} \leq \frac{C}{1-\beta_c}. \] This finishes the proof of \cref{lemma:invertibility}. 
\end{proof} \begin{comment} \begin{lemma} \label{lemma:non-str-bound-1} Let $D \in \mathbb{R}^{n \times n}$, $H \in \mathbb{S}^n$, and $\Lambda \in \mathbb{S}^n_{++}$ be given matrices. Suppose that $\Lambda D$ and $\Lambda(I-D)$ are symmetric and positive semidefinite and let us set $M := (I-D) + D\Lambda^{-1}H$. Furthermore, suppose that the condition \begin{equation} \label{eq:sec-ord-inv-2} \iprod{x}{Hx} + \iprod{x}{\Lambda(I-D)x} \geq \nu \|x\|^2, \quad \forall~x \in \mathbb{R}^n, \end{equation} is satisfied for some $\nu > 0$. Then, $M$ is invertible and we have \begin{equation} \label{eq:inv-bound-2} \|M^{-1}\| \leq {\nu^{-1}} \|\Lambda + H\|. \end{equation} \end{lemma} \begin{proof} For all $x \in \mathbb{R}^n \setminus \{0\}$, it holds \begin{align*} \iprod{(\Lambda+H)x}{Mx} & = \iprod{x}{[\Lambda(I-D) + D^\top H]x} + \iprod{x}{[H(I-D) + H\Lambda^{-1}D^\top H] x} \\ &= \iprod{x}{Hx} + \iprod{x}{\Lambda(I-D)x} + \iprod{\Lambda^{-1}Hx}{[\Lambda D]\Lambda^{-1}Hx} \geq \nu \|x\|^2, \end{align*} where we used the positive semidefiniteness of $\Lambda D$ and $D^\top = \Lambda D \Lambda^{-1}$. Consequently, the matrix $M$ must be invertible. Moreover, the matrix $\bar M := (\Lambda+H) M$ has a positive definite Hermitian part. Thus, applying \cite[Lemma 2.1]{Mat92}, it follows \[ \|M^{-1}\| = \|\bar M^{-1} (\Lambda+H)\| \leq \|[{\textstyle\frac{1}{2}}(\bar M + \bar M^\top)]^{-1}\| \|\Lambda + H\| \leq {\nu}^{-1} \|\Lambda + H\|. \] \end{proof} Since every matrix $M \in \cM^{\Lambda}_{{s},t}(x)$ has the form $M = (I - D) + D \Lambda^{-1} H_{t}(x)$ for some arbitrary $D \in \partial\proxt{\Lambda}{r}(u^\Lambda_{s}(x))$, we can apply \cref{lemma:non-str-bound-1} to study the invertibility properties of $M$. 
In particular, based on assumption (D.2), \cref{lemma:mat-conc}, and on the implication \begin{equation} \label{eq:ew-bound} \begin{aligned} &\lambda_{\min}(\nabla^2 f(x)) \geq \nu_*, \;\; \lambda_{\max}(\nabla^2 f(x)) \leq K_*, \;\; \|H_{t}(x) - \nabla^2 f(x)\| \leq \rho\nu_* \\ & \hspace{12ex} \implies \lambda_{\min}(H_{t}(x)) \geq (1-\rho)\nu_*, \;\; \lambda_{\max}(H_{t}(x)) \leq (1+\rho) K_*, \end{aligned} \end{equation} $\rho \in [0,1]$, we can derive a probabilistic version of \cref{lemma:non-str-bound-1} for our stochastic framework. A similar strategy was also used in \cite{RKM2016I,RKM2016II} for the analysis of a sub-sampled Newton method for smooth problems. We refer to \cite[Section A.1]{RKM2016II} for a detailed derivation of \cref{eq:ew-bound}. \end{comment} \subsection{Transition to Fast Local Convergence and Convergence Rates} \label{sec:trans-loc} We now present our local convergence theory. As mentioned, our analysis and results rely on the observation that the stochastic Newton step $z^k_{\sf n} = x^k + d^k$ is always accepted as a new iterate with high probability if $x^k$ is close to a local solution $x^*$ and if the sample sizes ${\bf n}^{\sf g}_k$ and ${\bf n}^{\sf h}_k$ are sufficiently large. This will be discussed in detail in the next theorem. Throughout this subsection, we will work with the following functions and constants $\mu(x) := \min\{(2L_FC)^{-1},1\} x$, ${\mu_p}(x) := \min \{x,\min\{x^{p^{-1}},x^{(1-p)^{-1}}\}\}$, \[ {\Upsilon}_k := \min \{ \mu(\varepsilon_k^1),{\mu_p}(\varepsilon_k^2)\}, \quad \text{and} \quad \Gamma_k := \min\{{\Upsilon}_{k-1},{\Upsilon}_k\}, \] where $L_F$ denotes the Lipschitz constant of $F^{\Lambda_*}$. (See the proof of \cref{theorem:conv-prox-v2}). 
Furthermore, let us set $\beta_1 := (2^pL_\psi C)^{-1} \min\{\beta,\frac{1}{2}\}$ and \begin{equation} \label{eq:def-gam} \beta_2 := \min \left\{\frac{6\eta}{4L_F C+3\eta}, \beta_1^{\frac{1}{1-p}}\right\}, \quad \gamma_f := \frac{1}{2\max\{C,1\}}\min \left \{ \frac{1}{2},\beta_2 \right \}. \end{equation} \begin{theorem} \label{theorem:mega} Assume that the conditions {\rm(A.1)}--{\rm(A.2)}, {\rm(B.1)}--{\rm(B.2)}, {\rm(C.1)}, and {\rm(E.1)}--{\rm(E.2)} are satisfied and let the sequences $(x^k)_k$, $(\Lambda_k)_k$, and $(\alpha_k)_k$ be generated by Algorithm \ref{alg:ssn}. Let $x^*$ and $\Lambda_*$ be accumulation points of $(x^k)_k$ and $(\Lambda_k)_k$ fulfilling the conditions {\rm(D.1)}--{\rm(D.6)} and let $\gamma_f \in (0,1)$ be given as in \cref{eq:def-gam}. Moreover, suppose that the selected step sizes $(\alpha_k)_k$ are bounded via $\alpha_k \in [\underline{\alpha},\min\{1,\overline{\alpha}\}]$ for some $\underline{\alpha} > 0$ and all $k \in \mathbb{N}$. Then, for a given sequence $(\delta_k)_k \subset (0,1)$, the following statements are true: \begin{itemize} \item[{\rm(i)}] Suppose that there exists $\bar \ell \in \mathbb{N}$ such that \vspace{-0.5ex} \end{itemize} \begin{equation} \label{eq:bounds-i} {\bf n}^{\sf g}_k \geq \frac{1}{\delta_k}\left[\frac{2\bar\sigma}{\lambda_{m} \Gamma_k}\right]^2, \quad {\bf n}^{\sf h}_k \geq \frac{\kappa_n}{\delta_k}\left[\frac{2\bar\rho}{\lambda_{m}\gamma_f} \right]^2, \quad \forall~k \geq \bar\ell. \vspace{-0.5ex} \end{equation} \begin{itemize} \item[] Then, with probability $\delta_* := \prod_{k=\bar\ell}^\infty(1-\delta_k)(1-2\delta_k)$, the point $x^*$ is a stationary point of \cref{eq:prob}, there exists $\ell_* \in \mathbb{N}$ such that $x^k$ results from a stochastic semismooth Newton step for all $k \geq \ell_*$, and the whole sequence $(x^k)_k$ converges to $x^*$. 
\item[{\rm(ii)}] Let $\gamma_\eta \in (0,1)$ and $\bar \ell \in \mathbb{N}$ be given constants and let us set $\Gamma_k^\circ := \min \{\Upsilon_{k-1}^\circ,\Upsilon_k^\circ\}$ and $\Upsilon_k^\circ := \min\{\mu(\min\{\varepsilon_k^1,\gamma_\eta^{k-\bar\ell}\}),\mu_p(\varepsilon_k^2)\}$. Suppose that the bounds \vspace{-0.5ex} \end{itemize} \begin{equation} \label{eq:bounds-ii} {\bf n}^{\sf g}_k \geq \frac{1}{\delta_k}\left[\frac{2\bar\sigma}{\lambda_{m} \Gamma^\circ_k}\right]^2, \quad {\bf n}^{\sf h}_k \geq \frac{\kappa_n}{\delta_k}\left[\frac{2\bar\rho}{\lambda_{m}\gamma_f} \right]^2 \vspace{-0.5ex} \end{equation} \begin{itemize} \item[] hold for all $k \geq \bar\ell$. Then, with probability $\delta_*$, the statements in part {\rm(i)} are satisfied and $(x^k)_k$ converges r-linearly to $x^*$ with rate $\max\{\gamma_\eta,\frac{1}{2}\}$. \item[{\rm(iii)}] Let $(\gamma_k)_k \subset (0,\infty)$ be a non-increasing sequence with $\gamma_k \to 0$ and let $(\rho_k)_k \subset (0,\infty)$ with $\rho_k \to 0$ and $\bar \ell \in \mathbb{N}$ be given. Let us define $\Gamma_k^\diamond := \min \{\Upsilon_{k-1}^\diamond,\Upsilon_k^\diamond\}$ and $\Upsilon_k^\diamond := \min\{\mu(\min\{\varepsilon_k^1,\gamma_k^{k-\bar\ell}\}),\mu_p(\varepsilon_k^2)\}$ and assume that the sample sizes fulfill \vspace{-0.5ex} \end{itemize} \begin{equation} \label{eq:bounds-iii} {\bf n}^{\sf g}_k \geq \frac{1}{\delta_k}\left[\frac{2\bar\sigma}{\lambda_{m} \Gamma^\diamond_k}\right]^2, \quad {\bf n}^{\sf h}_k \geq \frac{1}{\delta_k\rho_k} \vspace{-0.5ex} \end{equation} \begin{itemize} \item[] for all $k \geq \bar \ell$. Then, with probability $\delta_*$, the statements in part {\rm(i)} are satisfied (for a possibly different $\ell_*$) and $(x^k)_k$ converges r-superlinearly to $x^*$. \end{itemize} \end{theorem} \begin{proof} The proof is split into several steps. 
First, we utilize the concentration results in \cref{lemma:tail-bound} to quantify the occurrence and (conditional) probability of the events \[ {\sf G}^{\sf n}_k(\varepsilon) := \{\omega \in \Omega: {\mathcal E}^{\sf g}_{k}(z^{k-1}_{\sf n}(\omega)) \leq \varepsilon\}, \quad {\sf G}^{\sf p}_k(\varepsilon) := \{\omega\in \Omega: {\mathcal E}^{\sf g}_{k}(z^{k-1}_{\sf p}(\omega)) \leq \varepsilon \}, \] ${\sf G}_k(\varepsilon) := {\sf G}^{\sf n}_k(\varepsilon) \cap {\sf G}^{\sf p}_k(\varepsilon)$, and ${\sf H}_k(\varepsilon) := \{\omega \in \Omega: {\mathcal E}^{\sf h}_{k}(x^k(\omega)) \leq \varepsilon\}$ for appropriately chosen $\varepsilon > 0$. As a result, the local convergence analysis reduces, to some extent, to the analysis of a highly inexact (rather than stochastic) version of the deterministic semismooth Newton method for convex composite programming. In particular, we can reuse some of the strategies presented in \cite{MilUlb14,Mil16} in our proof. Based on the invertibility result in \cref{lemma:invertibility}, we establish convergence of the whole sequence $(x^k)_k$ in step 2. Afterwards, in steps 3 and 4, we show that the growth conditions \cref{eq:growth-1}--\cref{eq:growth-2} are always satisfied whenever $k$ is sufficiently large. In the last steps, we derive r-linear and r-superlinear convergence rates and prove parts (ii) and (iii) of \cref{theorem:mega}. \textit{Step 1: Probability bounds}. We want to show that the event ${\sf E} := \bigcap_{k = \bar \ell}^\infty {\sf G}_k({\lambda_{m}\Gamma_k}/{2}) \cap {\sf H}_k({\lambda_{m}\gamma_f}/{2})$ occurs with probability $\Prob({\sf E}) \geq \delta_*$. Using the assumptions (E.1)--(E.2), \cref{fact:one} and as demonstrated in the paragraph after \cref{lemma:tail-bound}, we can apply the concentration results in \cref{lemma:tail-bound} via identifying ${\sf X}_i \equiv \cG(z_{\sf n}^{k-1},s^k_i) - \nabla f(z^{k-1}_{\sf n})$, $i \in [{\bf n}^{\sf g}_k]$, etc. 
Specifically, setting $\tau = \sqrt{1/\delta_k}$ in part (i) and $\tau = \sqrt{\kappa_n/\delta_k}$ in part (ii) of \cref{lemma:tail-bound} and using the bounds \cref{eq:bounds-i}, it easily follows $\Prob({\sf G}^{\sf n}_k(\lambda_{m}\Gamma_k/2) \mid \mathcal F_{k-1}) \geq 1 - \delta_k$, \[ \Prob({\sf G}^{\sf p}_k(\lambda_{m}\Gamma_k/2) \mid \mathcal F_{k-1}) \geq 1 - \delta_k, \quad \text{and} \quad \Prob({\sf H}_k(\lambda_{m}\gamma_f/2) \mid \hat{\mathcal F}_{k-1}) \geq 1 - \delta_k \] almost everywhere and for all $k \geq \bar\ell$. Let us define $\bar\varepsilon_k := \frac{\lambda_{m}\Gamma_k}{2}$ and $\bar\gamma := \frac{\lambda_{m}\gamma_f}{2}$. Then, by the tower property of the conditional expectation and by utilizing \[ {\mathds 1}_{{\sf G}_k(\bar\varepsilon_k)}(\omega) = {\mathds 1}_{{\sf G}^{\sf n}_k(\bar\varepsilon_k) \cap {\sf G}^{\sf p}_k(\bar\varepsilon_k)}(\omega) \geq {\mathds 1}_{{\sf G}^{\sf n}_k(\bar\varepsilon_k)}(\omega) + {\mathds 1}_{{\sf G}^{\sf p}_k(\bar\varepsilon_k)}(\omega) - 1, \quad \forall~\omega \in \Omega,\] and ${\sf G}^{\sf n}_k(\bar\varepsilon_k), {\sf G}^{\sf p}_k(\bar\varepsilon_k) \in \hat{\mathcal F}_{k-1}$, ${\sf H}_k(\bar\gamma) \in \mathcal F_k$, $k \geq \bar \ell$, we inductively obtain \begin{align*} \Prob\left({\textstyle \bigcap_{k=\bar\ell}^L}~{\sf G}_k(\bar\varepsilon_k) \cap {\sf H}_k(\bar\gamma) \right) & \\ & \hspace{-20ex}= \Exp\left[{\textstyle\prod_{k=\bar\ell}^{L-1}}{\mathds 1}_{{\sf G}_k(\bar\varepsilon_k)}{\mathds 1}_{{\sf H}_k(\bar\gamma)} \left\{ \Exp[{\mathds 1}_{{\sf G}^{\sf n}_L(\bar\varepsilon_L)\cap{\sf G}^{\sf p}_L(\bar\varepsilon_L)}\Exp[{\mathds 1}_{{\sf H}_L(\bar\gamma)} \mid \hat{\mathcal F}_{L-1} ]  \mid {\mathcal F}_{L-1}] \right\} \right] \\ & \hspace{-20ex} \geq (1-2\delta_L)(1-\delta_L) \cdot \Exp\left[{\textstyle\prod_{k=\bar\ell}^{L-1}}{\mathds 1}_{{\sf G}_k(\bar\varepsilon_k)}{\mathds 1}_{{\sf H}_k(\bar\gamma)} \right] \geq \ldots \geq {\textstyle\prod_{k=\bar\ell}^L} 
(1-2\delta_k)(1-\delta_k) \end{align*} for any $L > \bar \ell$. Hence, taking the limit $L \to \infty$, this yields $\Prob({\sf E}) \geq \delta_*$. We now assume that the trajectories $(x^k)_k$, $(z^k_{\sf n})_k$, and $(z^k_{\sf p})_k$ are generated by a sample point $\bar\omega \in {\sf E}$, i.e., we have $(x^k)_k \equiv (x^k(\bar \omega))_k$ etc. (As we have just shown, this happens with probability at least $\delta_*$.) \textit{Step 2: Convergence of $(x^k)_k$}. Let us continue with the proof of the first part. Using the definition of the event ${\sf G}_k$, we can infer \begin{equation} \label{eq:bound-cg} \mathcal E_k := \mathcal E^{\sf g}_{k}(x^k) \leq \max\{\mathcal E^{\sf g}_{k}(z^{k-1}_{\sf n}),\mathcal E^{\sf g}_{k}(z^{k-1}_{\sf p})\} \leq \frac{\lambda_{m}\Gamma_k}{2} \leq \frac{\lambda_{m} \Upsilon_k}{2} \leq \frac{\lambda_{m} \varepsilon_k^1}{2} \end{equation} for all $k \geq \bar \ell$ and hence, by (B.2), the sequence $(\mathcal E_k)_k$ is summable. Following the proof of \cref{theorem:conv-prox-v2} and utilizing the boundedness of $(\alpha_k)_k$, this implies \begin{equation} \label{eq:prf-flam-conv} F^{\Lambda_*}(x^k) \to 0 \end{equation} and thus, in this situation, every accumulation point of $(x^k)_k$ is a stationary point of problem \cref{eq:prob}. Since the event ${\sf H}_k(\bar\gamma)$ occurs for all $k \geq \bar\ell$ by assumption, we can apply \cref{lemma:invertibility} with $\gamma_c := \gamma_f$ and $\beta_c := \frac{1}{4}$. Hence, there exists a constant $\varepsilon_c$ (that neither depends on the samples $s^k$, $t^k$ nor on $k$) such that the statement in \cref{lemma:invertibility} holds whenever we have $x^k \in B_{\varepsilon_c}(x^*)$ and $\mathcal E_k \leq \varepsilon_c$. Furthermore, since $(\mathcal E_k)_k$ converges to zero, there exists $\tilde \ell \geq \max\{\bar\ell,\bar k\}$ such that the inequality $\mathcal E_k \leq \varepsilon_c$ is satisfied for all $k \geq \tilde \ell$. 
Setting $\mathcal K_{\varepsilon_c} := \{k \geq \tilde \ell: x^k \in B_{\varepsilon_c}(x^*)\}$, this now implies \begin{equation} \label{eq:uni-bound-mk} \| M_k^{-1} \| \leq \frac{4C}{3}, \quad \forall~M_k \in \mathcal M_{s^k,t^k}^{\Lambda_*}(x^k), \quad \forall~k \in \mathcal K_{\varepsilon_c}. \end{equation} Next, let $(x^k)_{k \in K}$ denote an arbitrary subsequence of $(x^k)_k$ converging to $x^*$. Then, there exists $\tilde k \in K$ such that $\{k \in K: k \geq \tilde k\} \subset {\mathcal K}_{\varepsilon_c}$ and consequently, using \cref{eq:prf-flam-conv}, the estimate $\|F^{\Lambda_*}_{s^k}(x) - F^{\Lambda_*}(x)\| \leq \lambda_{m}^{-1}\mathcal E^{\sf g}_{k}(x)$, and the summability of $(\mathcal E_k)_k$, we obtain \[ \|x^{k+1} - x^k\| \leq \min\{4C/3,1\} \|F^{\Lambda_*}_{s^k}(x^k)\| \leq \min\{4C/3,1\} [ \|F^{\Lambda_*}(x^k)\| + \lambda_{m}^{-1} \mathcal E_k ] \to 0 \] as $K \ni k \to \infty$. Let us emphasize that this limit behavior holds for any arbitrary subsequence $(x^k)_K$ converging to $x^*$. Moreover, combining the assumptions (D.2) and (D.3) and as mentioned in \cref{subsec:algo}, it follows that $F^{\Lambda_*}$ is semismooth at $x^*$ with respect to the multifunction $\mathcal M^{\Lambda_*}$. Thus, as in \cite[Proposition 3]{PanQi93}, it can be shown that $x^*$ is an isolated stationary point. Since condition \cref{eq:prf-flam-conv} ensures that every accumulation point of $(x^k)_k$ is a stationary point of problem \cref{eq:prob}, this also proves that $x^*$ is an isolated accumulation point of $(x^k)_k$. Hence, a well-known result by Mor\'{e} and Sorensen \cite{MorSor83} yields convergence of the whole sequence $(x^k)_k$ to $x^*$. \textit{Step 3: Acceptance of Newton steps}. Utilizing the assumptions (D.2)--(D.4), and (D.6), there exists $\varepsilon_s > 0$ such that the following properties and inequalities hold simultaneously: \vspace{0.5ex} \begin{itemize} \item Let us set $\bar \beta := (12(1+L\lambda_{m}^{-1}))^{-1}$. 
Then, for all $u \in B_{\varepsilon_s}(u^{\Lambda_*}(x^*))$ and all generalized derivatives $D(u) \in \partial \proxt{\Lambda_*}{r}(u)$ we have \vspace{-0.5ex} \end{itemize} \begin{equation} \label{eq:prf-semism} \|\proxt{\Lambda_*}{r}(u) - \proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*)) - D(u)(u-u^{\Lambda_*}(x^*))\| \leq \bar\beta\gamma_f \|u-u^{\Lambda_*}(x^*)\|. \end{equation} \begin{itemize} \item It holds $\|\nabla f(x) - \nabla f(x^*) - \nabla^2 f(x^*)(x-x^*)\| \leq \frac{\lambda_{m}\gamma_f}{12} \|x - x^*\|$ and $\|\nabla^2 f(x) - \nabla^2 f(x^*)\| \leq \frac{\lambda_{m}\gamma_f}{12}$ for all $x \in B_{\varepsilon_s}(x^*)$. \vspace{.5ex} \item The objective function $\psi$ is Lipschitz continuous on $B_{\varepsilon_s}(x^*)$ (with constant $L_\psi$). \vspace{.5ex} \item The stationary point $x^*$ is a global minimizer of $\psi$ on $B_{\varepsilon_s}(x^*)$. \vspace{.5ex} \end{itemize} Now, let us define \[ \varepsilon_e := \min \left\{ \frac{2\lambda_{m}\varepsilon_s}{3\lambda_{m}+2L}, \varepsilon_c \right \}, \,\, \varepsilon_\psi := \min\left \{\frac{1}{2}(2L_\psi C^{1-p})^{-\frac{1}{p}}, \frac{1}{2L_F}\right\}, \,\, \varepsilon := \min\left\{\varepsilon_e,\varepsilon_\psi \right\}. \] Then, since $(x^k)_k$ converges to $x^*$ and (B.2) implies $\Gamma_k \to 0$, there exists $\hat\ell \geq \tilde \ell$ such that $x^k \in B_\varepsilon(x^*)$ and $\min\{(\max\{C,1\})^{-1}\varepsilon,\Gamma_k\} = \Gamma_k$ for all $k \geq \hat\ell$. Next, let $x^k$, $k \geq \hat\ell$, be an arbitrary iterate and let us consider the associated semismooth Newton step $z^k_{\sf n} = x^k + d^k$, $d^k = - M_k^{+}F^{\Lambda_*}_{s^k}(x^k)$ with $M_k = I-D_k + D_k \Lambda_*^{-1}H_{t^k}(x^k)$ and $D_k \in \partial \proxt{\Lambda_*}{r}(u^{\Lambda_*}_{s^k}(x^k))$. By \cref{lemma:invertibility} and \cref{eq:uni-bound-mk}, $M_k$ is invertible with $\|M_k^{-1}\| \leq 2C$ for all $k \geq \hat \ell$. 
Since the events ${\sf G}_k(\lambda_{m}\Gamma_k/2)$ and ${\sf G}_{k+1}^{\sf n}(\lambda_{m}\Gamma_{k+1}/2)$ occur for all $k \geq \bar \ell$ by assumption, we can reuse the bounds \cref{eq:bound-cg}, i.e., we have \begin{equation} \label{eq:est-G} \max\{\mathcal E_k, \mathcal E_{k+1}^{\sf g}(z^k_{\sf n})\} \leq \frac{\lambda_{m}}{2} \min \left\{\frac{\varepsilon}{\max\{C,1\}}, \Upsilon_k \right\}. \end{equation} Consequently, setting $w^k := u^{\Lambda_*}_{s^k}(x^k) - u^{\Lambda_*}(x^*)$ and as shown in the proof of \cref{lemma:invertibility}, it holds \begin{equation} \label{eq:est-wk} \|w^k\| \leq (1+L\lambda_{m}^{-1})\|x^k-x^*\|+\lambda_{m}^{-1}\mathcal E_k \leq (1 +L\lambda_{m}^{-1})\varepsilon_e + \frac{\varepsilon_e}{2} \leq \varepsilon_s. \end{equation} Moreover, combining \cref{eq:prf-semism}, $\mathcal E^{\sf h}_{k}(x^k) \leq \bar\gamma$, $\bar\beta\gamma_f \leq 1/48$, and the last estimates, we can infer \begin{align} \nonumber \|z^k_{\sf n} - x^*\| &= \|M_k^{-1} [F^{\Lambda_*}_{s^k}(x^k) - F^{\Lambda_*}(x^*) - M_k(x^k-x^*)]\| \nonumber \\ \nonumber & \leq (4C/3) \|\proxt{\Lambda_*}{r}(u^{\Lambda_*}_{s^k}(x^k)) - \proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*)) \\ \nonumber &\hspace{6ex}- D_k(I-\Lambda_*^{-1}H_{t^k}(x^k))(x^k - x^*)\| \\ \nonumber & \leq (4C/3) \|\proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*)+w^k) - \proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*)) - D_k w^k\| \\  \nonumber &\hspace{6ex}+ (4C/3) \|D_k\Lambda_*^{-1}[G_{{s}^k}(x^k)-\nabla f(x^*) - H_{t^k}(x^k)(x^k-x^*)]\| \\ \nonumber &\leq (4C/3)\bar\beta\gamma_f \|w^k\| + (4C/3)\lambda_{m}^{-1} [\|\nabla f(x^k)-\nabla f(x^*) - \nabla^2 f(x^*)(x^k-x^*)\| \\ \nonumber &\hspace{6ex}+ (\|\nabla^2 f(x^k) - \nabla^2 f(x^*)\| + \mathcal E^{\sf h}_{k}(x^k))\|x^k-x^*\|)+\mathcal E_k] \\ \nonumber & \leq \frac{4C}{3} \left[\bar\beta (1+L\lambda_{m}^{-1}) + \frac{1}{12} + \frac{1}{12} + \frac{1}{2} \right] \gamma_f \|x^k-x^*\| + \frac{4C}{3}\lambda_{m}^{-1} \left[\frac{1}{48}+1\right] \mathcal E_k \\ \label{eq:loc-est-1} & 
\leq C \gamma_f \|x^k-x^*\| + 1.5C \lambda_{m}^{-1}\mathcal E_k. \end{align} Let us note that due to \cref{eq:loc-est-1}, \cref{eq:est-G}, $C\gamma_f \leq \frac{1}{4}$, and (D.4), we also have $z^k_{\sf n} \in B_\varepsilon(x^*) \subset \mathrm{dom}~r$. The next steps essentially follow the proofs of \cite[Theorem 4.8]{MilUlb14} and \cite[Theorem 4.3.10]{Mil16}. In particular, our last result implies \[ \|x^k-x^*\| \leq \|z^k_{\sf n}-x^*\| + \|d^k\| \leq C \gamma_f \|x^k-x^*\| + 1.5C\lambda_{m}^{-1}\mathcal E_k + (4C/3) \|F^{\Lambda_*}_{s^k}(x^k)\| \] and thus, it follows \[ \|x^k - x^*\| \leq \frac{4C}{3(1-C\gamma_f)} \|F^{\Lambda_*}_{s^k}(x^k)\| + \frac{3C\lambda_{m}^{-1}}{2(1 - C \gamma_f)} \mathcal E_k. \] Furthermore, using the Lipschitz continuity of $F^{\Lambda_*}$, $F^{\Lambda_*}(x^*) = 0$, $(1-C\gamma_f)^{-1} \leq \frac{4}{3}$, the definition of $\gamma_f$, and \cref{eq:est-G}, we obtain \begin{align*} \| F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n}) \| & \leq \|F^{\Lambda_*}(z^k_{\sf n})\| + \|F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n}) - F^{\Lambda_*}(z^k_{\sf n})\| \leq L_F \|z^k_{\sf n} - x^*\| + \lambda_{m}^{-1} \mathcal E_{k+1}^{\sf g}(z^k_{\sf n}) \\ & \leq L_F C \gamma_f \|x^k - x^*\| + 1.5L_F C \lambda_{m}^{-1} \mathcal E_k + \lambda_{m}^{-1} \mathcal E_{k+1}^{\sf g}(z^k_{\sf n}) \\ & \leq \frac{ 4L_F C^2 \gamma_f}{3(1 - C \gamma_f)} \| F^{\Lambda_*}_{s^k}(x^k)\| + \frac{3L_F C\lambda_{m}^{-1}}{2(1 - C\gamma_f)} \mathcal E_k + \lambda_{m}^{-1} \mathcal E_{k+1}^{\sf g}(z^k_{\sf n}) \\ & \leq \eta \|F^{\Lambda_*}_{s^k}(x^k)\| + 2L_FC\lambda_{m}^{-1} \mathcal E_k + \lambda_{m}^{-1} \mathcal E_{k+1}^{\sf g}(z^k_{\sf n}) \leq \eta \|F^{\Lambda_*}_{s^k}(x^k)\| + \varepsilon_k^1. \end{align*} Since the Newton step $z^k_{\sf n}$ is contained in $B_{\varepsilon}(x^*)$, we can utilize the semismoothness condition \cref{eq:prf-semism} for $z^k_{\sf n}$. 
Moreover, \cref{lemma:invertibility} is also applicable for any matrix $M_* = I - \bar D(z^k_{\sf n}) + \bar D(z^k_{\sf n})\Lambda_*^{-1}\nabla^2 f(x^*)$ with $\bar D(z^k_{\sf n}) \in \partial \proxt{\Lambda_*}{r}(u^{\Lambda_*}(z^k_{\sf n}))$. This yields $\|M_*^{-1}\| \leq (4C)/3$ and similar to the estimates in \cref{eq:loc-est-1} (but simpler), we can get \begin{align*} \| z^k_{\sf n} - x^*\| & = \|M_*^{-1} [F^{\Lambda_*}(z^k_{\sf n}) - F^{\Lambda_*}(x^*) - M_*(z^k_{\sf n} - x^*) - F^{\Lambda_*}(z^k_{\sf n})]\| \\ & \leq (4C/3)[ \|\proxt{\Lambda_*}{r}(u^{\Lambda_*}(z^k_{\sf n})) - \proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*)) - \bar D(z^k_{\sf n})(I-\Lambda_*^{-1}\nabla^2 f(x^*)) \\ & \hspace{4ex} \cdot (z^k_{\sf n} - x^*)\| + \|F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n}) \| + {\lambda_{m}^{-1}} \mathcal E_{k+1}^{\sf g}(z^k_{\sf n})] \\ & \leq(4C/3) [ \|\proxt{\Lambda_*}{r}(u^{\Lambda_*}(z^k_{\sf n})) - \proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*)) - \bar D(z^k_{\sf n})\bar w^k\| + \|F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n}) \| \\ & \hspace{4ex} +\|\bar D(z^k_{\sf n})\Lambda^{-1}_* [\nabla f(z^k_{\sf n}) - \nabla f(x^*) - \nabla^2 f(x^*)(z^k_{\sf n} - x^*)]\| + {\lambda_{m}^{-1}} \mathcal E_{k+1}^{\sf g}(z^k_{\sf n})] \\ & \leq \frac{2}{9} C\gamma_f \|z^k_{\sf n} - x^*\| + \frac{4C}{3} \|F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n}) \| + \frac{4C}{3} \lambda_{m}^{-1} \mathcal E_{k+1}^{\sf g}(z^k_{\sf n}), \end{align*} where we used $\bar w^k := u^{\Lambda_*}(z^k_{\sf n}) - u^{\Lambda_*}(x^*)$, $\|\bar w^k\| \leq (1+L\lambda_{m}^{-1})\|z^k_{\sf n} - x^*\|$, \cref{eq:prf-semism}, and the differentiability of $\nabla f$. This implies \[ \|z^k_{\sf n} - x^*\| \leq \frac{12C}{9 - 2C\gamma_f} \|F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n})\| + \frac{12C\lambda_{m}^{-1}}{9 - 2C\gamma_f} \mathcal E_{k+1}^{\sf g}(z^k_{\sf n}). 
\] Finally, using condition (D.4), $(9-2C\gamma_f)^{-1} \leq \frac{1}{8}$, the subadditivity of the mapping $x\mapsto x^q$, $q \in \{p,1-p\} \subset (0,1)$, $0.5^q + 1.5^q \leq 2$ for all $q \in [0,1]$, and the fact that the stationary point $x^*$ is a global minimum of the problem $\min_ {x \in B_{\varepsilon}(x^*)}~\psi(x)$, it follows \begin{align*} \psi(z^k_{\sf n}) - \psi(x^k) & \\ &\hspace{-12ex}\leq \psi(z^k_{\sf n}) - \psi(x^*) \leq L_\psi \|z^k_{\sf n} - x^*\| = L_\psi \|z^k_{\sf n} - x^* \|^{1-p} \|z^k_{\sf n} - x^*\|^{p} \\ &\hspace{-12ex}\leq L_\psi C^{1-p} [\gamma_f^{1-p} \|x^k - x^*\|^{1-p} + (1.5\lambda_{m}^{-1}\mathcal E_k)^{1-p}] \|z^k_{\sf n}-x^*\|^p \\ &\hspace{-12ex}\leq L_\psi C^{1-p} \left[ (2C\gamma_f \|F^{\Lambda_*}_{s^k}(x^k)\|)^{1-p} + ((2C\gamma_f)^{1-p}+1.5^{1-p})(\lambda_{m}^{-1} \mathcal E_k)^{1-p} \right] \|z^k_{\sf n}-x^*\|^p \\ &\hspace{-12ex} \leq L_\psi (2C^2\gamma_f)^{1-p} \|F^{\Lambda_*}_{s^k}(x^k)\|^{1-p} [ (1.5C)^p \|F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n})\|^p + (1.5C\lambda_{m}^{-1} \mathcal E_{k+1}^{\sf g}(z^k_{\sf n}))^{p} ] \\ & \hspace{-10ex} + 2L_\psi(C\lambda_{m}^{-1})^{1-p}\mathcal E_k^{1-p} \|z^k_{\sf n}-x^*\|^p \\ & \hspace{-12ex} \leq 2L_\psi C (C\gamma_f)^{1-p} \|F^{\Lambda_*}_{s^k}(x^k)\|^{1-p} \|F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n})\|^p + 0.5{\varepsilon^2_k}\\ & \hspace{-10ex} + 2L_\psi C(C \gamma_f)^{1-p}(L_F \|x^k -x^*\| + \lambda_{m}^{-1} \mathcal E_k)^{1-p} \lambda_{m}^{-p} [\mathcal E_{k+1}^{\sf g}(z^k_{\sf n})]^p \\ & \hspace{-12ex} \leq \beta \|F^{\Lambda_*}_{s^k}(x^k)\|^{1-p} \|F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n})\|^p + \frac{\varepsilon^2_k}{2} + \frac{1}{2} (2^{p-1} + 2^{p-1}\mu_p(\varepsilon^2_k)^{1-p}) 2^{-p} \mu_p(\varepsilon^2_k)^p \\ & \hspace{-12ex} \leq \beta \|F^{\Lambda_*}_{s^k}(x^k)\|^{1-p} \|F^{\Lambda_*}_{s^{k+1}}(z^k_{\sf n})\|^p + \varepsilon^2_k . \end{align*} \textit{Step 4: Transition to fast local convergence}. 
Let us point out that the bounds and inequalities derived in the last part primarily depend on the occurrence of the events ${\sf G}_k$ and ${\sf H}_k$ and hold for any $k \geq \hat\ell$. Now, let $k$ be any index $k \geq \hat \ell$ with $\|F^{\Lambda_*}_{s^k}(x^k)\| \leq \theta_k$. Since the algorithm does not terminate after a finite number of steps and we have $F^{\Lambda_*}_{s^k}(x^k) \to 0$, there exist infinitely many such indices. Let $\ell_* - 1$ be the smallest such index. Then, as shown in step 3, the semismooth Newton step $z^{\ell_*-1}_{\sf n}$ satisfies all of the acceptance criteria and it follows $x^{\ell_*} = z^{\ell_*-1}_{\sf n}$ and $\theta_{\ell_*} = \|F^{\Lambda_*}_{s^{\ell_*}}(x^{\ell_*})\|$. Inductively, this implies $x^{k+1} = z^k_{\sf n}$ for all $k \geq \ell_* - 1$. Since the success probability of the event ${\sf E}$ is at least $\delta_*$, this finishes the proof of the first part. \textit{Step 5: Proof of part (ii)}. Revisiting the derivations in step 1, it is easy to see that the event ${\sf E}^\circ := \bigcap_{k=\bar\ell}^\infty {\sf G}_k(\lambda_{m}\Gamma_k^\circ/2) \cap {\sf H}_k(\lambda_{m}\gamma_f/2)$ also occurs with probability $\delta_*$. Moreover, due to ${\sf E}^\circ \subset {\sf E}$, all of the results and inequalities shown in steps 2--4 remain valid. In particular, the estimate \cref{eq:loc-est-1} holds for all $k \geq \ell_* - 1$ with $x^{k+1} = z^k_{\sf n}$ and using $L_F \geq 1$, we obtain \[ \|x^{k+1} - x^*\| \leq C\gamma_f \|x^k-x^*\| + \frac{3}{4}C \Upsilon_k^\circ \leq \frac{1}{2} \left [ \frac{1}{2} \|x^k - x^*\| + \gamma_\eta^{k-\bar\ell} \right]. \] Let us set $\bar\tau :=\max\{\|x^{\ell_*-1}-x^*\|,\gamma_\eta^{\ell_*-\bar\ell-2}\} $ and $\tau_k := \bar\tau \max \{\frac{1}{2},\gamma_\eta\}^{k-\ell_*+1}$. We now prove $\|x^k - x^*\| \leq \tau_k$ for all $k \geq \ell_*-1$ by induction. For $k = \ell_*-1$, it follows $\tau_{\ell_*-1} = \bar\tau$ and the latter inequality is obviously true. 
Furthermore, it holds \begin{align*} \|x^{k+1}-x^*\| & \leq {\textstyle\frac{1}{2}}\max\{{\textstyle\frac{1}{2}},\gamma_\eta\} [ \tau_k + \gamma_\eta^{k-\bar \ell-1}] \\ & \leq {\textstyle\frac{1}{2}}\max\{{\textstyle\frac{1}{2}},\gamma_\eta\}^{k-\ell_*+2} [\bar\tau + \gamma_\eta^{\ell_*-\bar\ell-2}] \leq \tau_{k+1}. \end{align*} Consequently, due to $\tau_k \to 0$ and $\tau_{k+1}/\tau_k = \max\{\frac{1}{2},\gamma_\eta\} < 1$, the sequence $(x^k)_k$ converges r-linearly to $x^*$ with rate $\max\{\frac{1}{2},\gamma_\eta\}$. \textit{Step 6: Proof of part (iii)}. Again, following our previous discussions, it can easily be shown that the event ${\sf E}^\diamond := \bigcap_{k=\bar\ell}^\infty {\sf G}_k(\lambda_{m}\Gamma_k^\diamond/2) \cap {\sf H}_k(\bar\rho\sqrt{\kappa_n\rho_k})$ occurs with probability $\delta_*$. Since $(\rho_k)_k$ converges to zero, we have ${\sf H}_k(\bar\rho\sqrt{\kappa_n\rho_k}) \subset {\sf H}_k(\lambda_{m}\gamma_f/2)$ for all $k$ sufficiently large and hence, the results derived in steps 2--4 still hold after possibly adjusting the constant $\ell_*$. In particular, we have $x^k \to x^*$ as $k\to \infty$. Next, let us set \begin{align*} \beta^1_k & = \frac{\|\proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*)+w^k) - \proxt{\Lambda_*}{r}(u^{\Lambda_*}(x^*)) - D_k w^k\|}{\|w^k\|}, \\ \beta_k^2 & = \frac{\|\nabla f(x^k) - \nabla f(x^*) - \nabla^2 f(x^*)(x^k-x^*)\|}{\|x^k-x^*\|}, \end{align*} and $\beta_k^3 = \|\nabla^2 f(x^k) - \nabla^2 f(x^*)\| + \mathcal E^{\sf h}_{k}(x^k)$, where $w^k$ and $D_k$ have been defined in step 3. Since $(w^k)_k$ converges to zero, the semismoothness of $\proxt{\Lambda_*}{r}$ implies $\beta_k^1 \to 0$. Moreover, by the differentiability of $\nabla f$ and using $\rho_k \to 0$, we obtain $\beta^2_k \to 0$ and $\beta_k^3 \to 0$. 
Without loss of generality, we now may assume $x^{k+1} = z^k_{\sf n}$ for all $k \geq \ell_* - 1$ and as a consequence, by \cref{eq:est-wk} and \cref{eq:loc-est-1} and defining $\vartheta_k := (4C/3)\lambda_{m}^{-1}[(\lambda_{m}+L)\beta_k^1 + \beta_k^2 + \beta_k^3]$, we have \begin{align*} \|x^{k+1} - x^*\| &\leq ({4C}/3)\lambda_{m}^{-1} [(\lambda_{m}+L)\beta_k^1 + \beta_k^2+\beta_k^3] \|x^k - x^*\| + ({4C}/3)\lambda_{m}^{-1} (\bar\beta\gamma_f+1) \mathcal E_k \\ & \leq \vartheta_k \|x^k - x^*\| + (3C/4)\Upsilon_k^\diamond \leq \vartheta_k \|x^k - x^*\| + \gamma_k^{k-\bar\ell} \end{align*} for all $k \geq \ell_*-1$. Next, due to $\vartheta_k,\gamma_k \to 0$, there exists a constant $\ell_\diamond \geq \ell_*$ such that $\vartheta_k, \gamma_k \leq \frac{1}{2}$ for all $k \geq \ell_\diamond$. Let us set \[ \tau_{\ell_\diamond+1} := \max\left\{\|x^{\ell_\diamond}-x^*\|,\gamma_{\ell_\diamond}^{(\ell_\diamond-\bar\ell)/2}\right\}, \,\, \tau_{k+1} := \max\left\{(\vartheta_k+\gamma_k^{(k-\bar\ell)/2})\tau_k,\gamma_{k+1}^{(k+1-\bar\ell)/2}\right\} \] for all $k > \ell_\diamond$. Then, by induction and by using $\tau_k \geq \gamma_k^{(k-\bar\ell)/2}$, it follows \[ \|x^{k+1}-x^*\| \leq \vartheta_k \|x^k-x^*\| + \gamma_k^{(k-\bar\ell)/2}\tau_k \leq (\vartheta_k + \gamma_k^{(k-\bar\ell)/2}) \tau_k \leq \tau_{k+1} \] for all $k > \ell_\diamond$. Due to $\vartheta_k,\gamma_k \leq \frac{1}{2}$, this also establishes $\|x^{\ell_\diamond+1}-x^*\| \leq \tau_{\ell_{\diamond}+1}$. Finally, utilizing the boundedness of $(\tau_k)_{k>{\ell_\diamond}}$ and the monotonicity of $(\gamma_k)_k$, we obtain $\tau_k \to 0$ and \[ \frac{\tau_{k+1}}{\tau_k} = \max\left\{ \vartheta_k + \gamma_k^{\frac{k-\bar\ell}{2}}, \gamma_{k+1}^{\frac{1}{2}} \left[\frac{\gamma_{k+1}}{\gamma_k}\right]^{{(k-\bar\ell)}/{2}} \right\} \leq \max\left\{ \vartheta_k + \gamma_k^{\frac{k-\bar\ell}{2}}, \gamma_{k+1}^{\frac{1}{2}} \right\} \to 0, \] which concludes the proof of \cref{theorem:mega}. 
\end{proof} Neglecting the dependence on $\delta_k$ and on $\varepsilon_k^1$, $\varepsilon_k^2$ for a moment, the results in \cref{theorem:mega} can be summarized as follows. In order to guarantee r-linear convergence of the sequence $(x^k)_k$, it suffices to increase the sample size ${\bf n}^{\sf g}_k$ at a geometric rate and to choose ${\bf n}^{\sf h}_k$ sufficiently large. If ${\bf n}^{\sf g}_k$ is increased at a rate that is faster than geometric and we have ${\bf n}^{\sf h}_k \to \infty$, then we obtain r-superlinear convergence with high probability. Similar results were established in \cite{BBN2016,RKM2016II,YeLuoZha17} for stochastic Newton-type methods for smooth optimization problems. Clearly, the rate of convergence in \cref{theorem:mega} (ii) can be further improved if $\gamma_f$ is adjusted appropriately. Moreover, if the full gradient is used in the algorithm eventually, i.e., we have $G_{{s}^k}(x^k) \equiv \nabla f(x^k)$ for all $k$ sufficiently large, then the gradient-related error terms $\mathcal E_k^{\sf g}$ vanish and we can derive q-linear and q-superlinear convergence, respectively. We continue with an additional remark. \begin{remark} \label{remark:proj-loc} Similar to \cref{remark:example-c3}, let us again assume that the function $r$ has the special form $r = \iota_{\mathcal C} + \varphi$, where $\varphi$ is a real-valued, convex mapping and $\iota_{\mathcal C}$ is the indicator function of a closed, convex, and nonempty set $\mathcal C$. In this case, as has already been discussed in \cref{sec:algo}, we can perform a projected Newton step $z^k = \mathcal P_{\mathrm{dom}~r}(x^k+d^k) = \mathcal P_{\mathcal C}(x^k+d^k)$ to obtain a feasible trial point $z^k \in \mathrm{dom}~r$. Due to the nonexpansiveness of the projection, this additional operation also does not affect our local convergence results. Moreover, since $\varphi$ is locally Lipschitz continuous, assumption {\rm(D.4)} is no longer needed in this situation.
\end{remark} Based on the ``light tail'' assumption (E.3), we now present a straightforward variant of \cref{theorem:mega} with an improved dependence on the probabilities $\delta_k$. \begin{corollary} \label{corollary:mega} Consider the setup discussed in \cref{theorem:mega} and let us assume that the conditions in \cref{theorem:mega} are fulfilled. Suppose that assumption {\rm(E.3)} is satisfied. Then, the statements in \cref{theorem:mega} {\rm(i)} hold under the following improved sample size bounds \begin{equation} \label{eq:bounds-iv} {\bf n}^{\sf g}_k \geq \left[\left(1+\sqrt{3\log(\delta_k^{-1})}\right)\frac{2\bar\sigma}{\lambda_{m}\Gamma_k}\right]^2, \quad {\bf n}^{\sf h}_k \geq 3\log(2n\delta_k^{-1})\left[ \frac{2\bar\rho}{\lambda_{m}\gamma_f} \right]^2, \quad k \geq \bar \ell. \end{equation} Furthermore, if \cref{eq:bounds-iv} holds with $\Gamma_k \equiv \Gamma_k^\circ$ or with $\Gamma_k \equiv \Gamma_k^\diamond$ and ${\bf n}^{\sf h}_k \geq \log(2n\delta_k^{-1})\rho_k^{-1}$, then the statements in \cref{theorem:mega} {\rm(ii)} and {\rm(iii)} are satisfied, respectively. \end{corollary} \cref{corollary:mega} can be shown directly by applying \cref{lemma:tail-bound}. Finally, let us note that our local results can be further improved in the situation that was considered in \cref{theorem:conv-prox-strong}. We conclude this section with an example and discuss a specific choice of the parameters and sample sizes satisfying the assumptions in \cref{theorem:mega} and \cref{corollary:mega}. \begin{example} Suppose that the conditions in \cref{corollary:mega} are satisfied and let $C_1, C_2 > 0$, and $\varpi > 0$ be given. Let the parameter sequences $(\varepsilon_k^1)_k$, $(\varepsilon_k^2)_k$ be chosen via \[ \varepsilon_k^1 = {C_1}{k^{-(2+\frac{\varpi}{4})}}, \quad \varepsilon_k^2 = {C_2}{k^{-(1+\frac{\varpi}{8})}}, \quad \forall~k \in \mathbb{N}, \] and let us set $p = \frac{1}{2}$ and $\delta_k = \frac{1}{2k^{8}}$.
Then, setting ${\bf n}^{\sf g}_k =k^{4+\varpi}\log(k)$ and ${\bf n}^{\sf h}_k = \log(k)^{1+\varpi}$, it can be shown that the statements in \cref{theorem:mega} {\rm(i)} hold with probability \[ \textstyle\delta_* \geq \left[\prod_{k=2}^\infty \left(1-\frac{1}{k^{8}}\right)\right]^2 = \left[\frac{\sinh(\pi)(\cosh(\pi\sqrt{2})-\cos(\pi\sqrt{2}))}{16\pi^3}\right]^2 \geq 0.99, \] see \cite{Wei17}. Additionally, if the gradient sample size increases geometrically, i.e., if we have ${\bf n}^{\sf g}_k = C_3 \ell^{k}$ for some $\ell > 1$, $C_3 > 0$, the conditions in \cref{theorem:mega} {\rm(ii)} are satisfied and we can guarantee r-linear convergence with rate $\max\{\frac{1}{2},\frac{1}{\ell}\}$ with probability $99\%$. \end{example} \section{Numerical Results}\label{sec:numerical} In this section, we demonstrate the efficiency of the proposed stochastic semismooth Newton framework and compare it with several state-of-the-art algorithms on a variety of test problems. All numerical experiments are performed using MATLAB R2017b on a desktop computer with Intel(R) Core(TM) i7-7700T 2.90GHz and 8GB memory. \subsection{Logistic Regression} \label{sec:logloss} In our first experiment, we consider the well-known empirical $\ell_1$-logistic regression problem \begin{equation}\label{eq:LR} \min_{x\in\mathbb{R}^n}~\psi(x) := f(x)+\mu\|x\|_1,\quad f(x):=\frac{1}{N}\sum_{i=1}^Nf_i(x), \end{equation} where $f_i(x):=\log(1+\exp(-b_i \cdot \iprod{a_i}{x}))$ denotes the logistic loss function and the data pairs $(a_i,b_i) \in \mathbb{R}^n \times \{-1,1\}$, $i \in [N]$, correspond to a given dataset or are drawn from a given distribution. The regularization parameter $\mu > 0$ controls the level of sparsity of a solution of problem \cref{eq:LR}. In our numerical tests, we always choose $\mu=0.01$. \subsubsection{Algorithmic details and implementation} Next, we describe the implementational details of our method and of the state-of-the-art algorithms used in our numerical comparison. 
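Before doing so, and purely as an illustrative reference, the objective \cref{eq:LR} and a mini-batch gradient of its smooth part can be sketched as follows (a Python/NumPy sketch of our own; the actual experiments were implemented in MATLAB and all function names here are ours):

```python
import numpy as np

def logistic_smooth(x, A, b):
    """f(x) = (1/N) * sum_i log(1 + exp(-b_i <a_i, x>)),
    computed with the numerically stable logaddexp."""
    return np.mean(np.logaddexp(0.0, -b * (A @ x)))

def logistic_objective(x, A, b, mu):
    """psi(x) = f(x) + mu * ||x||_1, cf. problem (eq:LR)."""
    return logistic_smooth(x, A, b) + mu * np.abs(x).sum()

def logistic_gradient(x, A, b, idx):
    """Mini-batch gradient of f over the sample indices idx,
    using grad f_i(x) = -b_i * sigma(-b_i <a_i, x>) * a_i."""
    Ai, bi = A[idx], b[idx]
    s = 1.0 / (1.0 + np.exp(bi * (Ai @ x)))   # sigma(-b_i <a_i, x>)
    return Ai.T @ (-bi * s) / len(idx)
```

With `idx` equal to the full index set, `logistic_gradient` returns the exact gradient $\nabla f(x)$.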
\textit{Stochastic oracles.} In each iteration, similar to other stochastic second order methods \cite{BCNN2011,BBN2016,BHNS2016,RKM2016I}, we generate stochastic approximations of the gradient and Hessian of $f$ by first selecting two sub-samples ${\mathcal{S}}_k, \mathcal T_k \subset [N]$ uniformly at random and without replacement from the index set $\{1,...,N\}$. We then define the following mini-batch-type stochastic oracles \begin{equation} \label{eq:exp-oracle} G_{{s}^k}(x) := \frac{1}{|{\mathcal{S}}_k|} \sum_{i \in {\mathcal{S}}_k} \nabla f_i(x), \quad H_{t^k}(x) := \frac{1}{|\mathcal T_k|} \sum_{j \in \mathcal T_k} \nabla^2 f_j(x). \end{equation} We refer to the variant of Algorithm \ref{alg:ssn} using the stochastic oracles \cref{eq:exp-oracle} as S4N (\textbf{s}ub-\textbf{s}ampled \textbf{s}emi\textbf{s}mooth \textbf{N}ewton method). Furthermore, motivated by the recent success of variance reduction techniques \cite{JZ2013, XZ2014, RHSPS2016, AZH2016, WMGL2017}, we will also work with a variance reduced stochastic gradient that can be calculated as follows \begin{equation} \label{eq:var-oracle} \begin{cases} {\scriptsize \textbf{1}} \quad \textbf{if} \quad {k}~\mathrm{mod}~{m} = 0 \quad \textbf{then} \quad \text{set $\tilde x := x^k$ and calculate $\tilde u := \nabla f(\tilde x)$.} \\ {\scriptsize \textbf{2}} \quad \text{Compute $G_{{s}^k}(x^k) := \frac{1}{|{\mathcal{S}}_k|} \sum_{i \in {\mathcal{S}}_k} (\nabla f_i(x^k) - \nabla f_i(\tilde x)) + \tilde u$.} \end{cases} \end{equation} Here, $k \in \mathbb{N}$ is the current iteration and $m \in \mathbb{N}$ denotes the number of iterations after which the full gradient $\nabla f$ is evaluated at the auxiliary variable $\tilde x$. As in \cite{RHSPS2016, AZH2016, WMGL2017}, this additional noise-free information is stored and utilized in the computation of the stochastic oracles for the following iterations. \textit{Overview of the tested methods.} \begin{itemize} \item \textbf{Adagrad}, \cite{DHS2011}.
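A minimal sketch of the variance reduced oracle \cref{eq:var-oracle} (illustrative Python; the callback names `grad_i`, `full_grad` and the `state` dictionary are our own constructions, not part of the implementation described below):

```python
import numpy as np

def svrg_oracle(k, m, x_k, sample, grad_i, full_grad, state):
    """Variance reduced mini-batch gradient, cf. (var-oracle).

    Every m iterations the full gradient is recomputed at the anchor
    point x_tilde (step 1); in between, the mini-batch correction
    around the stored full gradient is used (step 2).
    grad_i(x, idx) is the mini-batch gradient over the indices idx.
    """
    if k % m == 0:
        state["x_tilde"] = x_k.copy()
        state["u_tilde"] = full_grad(x_k)
    return (grad_i(x_k, sample) - grad_i(state["x_tilde"], sample)
            + state["u_tilde"])
```

If `sample` is the full index set, the oracle reduces to the exact gradient, which makes the construction easy to sanity-check.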
Adagrad is a stochastic proximal gradient method with a specific strategy for choosing the matrices $\Lambda_k$. We use the mini-batch gradient \cref{eq:exp-oracle} as first order oracle in our implementation. This leads to the following update rule \end{itemize} \begin{equation} \label{eq:adagrad-update} x^{k+1} = \proxt{\Lambda_k}{\varphi}(x^k - \Lambda_k^{-1} G_{{s}^k}(x^k)), \quad \Lambda_k := \lambda^{-1} \diag(\delta\mathds 1 + \sqrt{G_k}), \end{equation} \begin{itemize} \item[] where $\delta, \lambda > 0$, $G_k := G_{k-1} + G_{{s}^k}(x^k) \odot G_{{s}^k}(x^k)$, and the multiplication ``$\odot$'' and the square root ``$\sqrt{\,\cdot\,}$'' are performed component-wise. \item \textbf{prox-SVRG}, \cite{XZ2014}. Prox-SVRG is a variance reduced, stochastic proximal gradient method. Similar to \cite{RHSPS2016,RSPS2016,WZ2017}, we substitute the basic variance reduction technique proposed in \cite{JZ2013,XZ2014} with the mini-batch version \cref{eq:var-oracle} to improve the performance of prox-SVRG. \vspace{.5ex} \item \textbf{S2N-D}. S2N-D is the deterministic version of the stochastic semismooth Newton method using the full gradient and Hessian of $f$ instead of stochastic oracles. \vspace{.5ex} \item \textbf{S4N-HG}. S4N with both sub-sampled gradient and Hessian \cref{eq:exp-oracle}. In the numerical experiments, the maximum sample size $|{\mathcal{S}}_k|$ of the stochastic oracle $G_{s^k}$ is limited to 10\%, 50\% and 100\% of the training data size $N$, respectively. \vspace{.5ex} \item \textbf{S4N-H}. This version of S4N uses the full gradient $\nabla f$ and the sub-sampled Hessian $H_{t^k}$ as defined in \cref{eq:exp-oracle}. \vspace{.5ex} \item \textbf{S4N-VR}. S4N-VR is a variant of S4N combining the variance reduced stochastic oracle \cref{eq:var-oracle} with the basic sub-sampling strategy \cref{eq:exp-oracle} for the Hessian of $f$. 
\vspace{.5ex} \end{itemize} Adagrad and prox-SVRG are two popular and efficient first order stochastic optimization approaches for solving nonsmooth and possibly nonconvex problems of the form \cref{eq:prob}. We compare them with four different versions of our S4N method: S2N-D (deterministic), S4N-HG (sub-sampled gradient and Hessian), S4N-H (full gradient and sub-sampled Hessian), and S4N-VR (variance reduced stochastic gradient and sub-sampled Hessian). \textit{Implementational details.} For Adagrad, the sample size $|{\mathcal{S}}_k|$ of the stochastic gradient is fixed to 5\% of the training data size $N$ and we set $\delta = 10^{-7}$. The parameter $\lambda$ varies for the different tested datasets and is chosen from the set $\{i\cdot10^j : i \in [9], j \in \{-2,-1,0,1\}\}$ to guarantee optimal performance. The iterative scheme of prox-SVRG basically coincides with \cref{eq:adagrad-update}. Here, we also use a fixed sample size $|{\mathcal{S}}_k| = \lfloor 0.01 N \rfloor$ and we set $m = 10$. The parameter matrix $\Lambda_k$ is defined via $\Lambda_k := (1/\lambda_k)I$ and based on the full gradient values $\nabla f(\tilde x)$, $\lambda_k$ is chosen adaptively to approximate the Lipschitz constant of the gradient $\nabla f$. In S4N-HG, the initial sample size of the stochastic gradient is set to $|{\mathcal{S}}_0| = \lfloor 0.01 N \rfloor$. The size of the mini-batch ${\mathcal{S}}_k$ is then increased by a factor of 3.375 every 30 iterations until $|{\mathcal{S}}_k|$ reaches the maximum sizes $\lfloor 0.1 N \rfloor$, $\lfloor 0.5 N \rfloor$, and $N$, respectively. In the following, we will use S4N-HG 10\%, S4N-HG 50\%, and S4N-HG 100\% to denote the different variants of S4N-HG. In S4N-VR, we use the fixed sample size $|{\mathcal{S}}_k| = \lfloor 0.01 N \rfloor$ for all $k$ and $m = 6$. The mini-batch sizes of the stochastic Hessians are adjusted in a similar way. 
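Since the metric $\Lambda_k$ in \cref{eq:adagrad-update} is diagonal, the proximity operator of $\mu\|\cdot\|_1$ in this metric acts componentwise, and one Adagrad step can be sketched as follows (illustrative Python; the function names are ours):

```python
import numpy as np

def soft_threshold(u, thresh):
    """Componentwise prox of the l1-norm with thresholds `thresh`."""
    return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)

def adagrad_step(x, g, G_acc, lam, delta, mu):
    """One Adagrad proximal step, cf. (adagrad-update):
    G_k = G_{k-1} + g (*) g and Lambda_k^{-1} = lam / (delta + sqrt(G_k)),
    so the prox reduces to soft-thresholding with thresholds mu*step."""
    G_acc = G_acc + g * g
    step = lam / (delta + np.sqrt(G_acc))      # diagonal of Lambda_k^{-1}
    return soft_threshold(x - step * g, mu * step), G_acc
```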
More specifically, in S4N-HG, S4N-H, and S4N-VR, we first set $|\mathcal T_0| = \lfloor 0.01 N \rfloor$. As soon as the sample ${\mathcal{S}}_k$ reaches its maximum size, we repeatedly increase the size of the set $\mathcal T_k$ by a factor of 3.375 after 15 iterations. The upper limit of the Hessian sample size is set to 10\% of the training data size, i.e., we have \[ |\mathcal T_k| \leq t_{\max}, \quad t_{\max} := \lfloor 0.1 N \rfloor, \quad \forall~k. \] In S4N-HG 10\% and different from the other methods, the size of $\mathcal T_k$ is not changed, i.e., it holds $t_{\max} = \lfloor 0.01 N \rfloor$. As in prox-SVRG, we use $\Lambda_k := (1/\lambda_k)I$ and choose $\lambda_k$ adaptively to estimate the Lipschitz constant of the gradient. In particular, we compute \[ \lambda_k^1 = \frac{\|x^k - x^{k-1}\|}{\|G_{s^k}(x^k) - G_{s^{k-1}}(x^{k-1})\|}, \quad \lambda_k^2 = \max\{10^{-3},\min\{10^4,\lambda_k^1\}\}. \] In order to prevent outliers, we calculate a weighted mean of $\lambda_k^2$ and of the previous parameters $\lambda_j$, $j \in [k-1]$. This mean is then used as the new step size parameter $\lambda_k$. The initial step size is set to $\lambda_0 = 0.1$. The proximity operator of the $\ell_1$-norm has the explicit representation $\proxt{\Lambda_k}{\mu\|\cdot\|_1}(u) = u - \mathcal P_{[-\mu\lambda_k,\mu\lambda_k]}(u)$ and is also known as the shrinkage operator or soft-thresholding function. Similar to \cite{MilUlb14,PatSteBem14,XLWZ2016}, we will work with the following generalized Jacobian of $\proxt{\Lambda_k}{\mu\|\cdot\|_1}$ at some $u \in \mathbb{R}^n$ \[ D(u) := \diag(d(u)), \quad d(u) \in \mathbb{R}^n, \quad d(u)_i := \begin{cases} 1 & |u_i| > \mu\lambda_k, \\ 0 & \text{otherwise.} \end{cases} \] The generalized derivatives of $F^{\Lambda_k}_{{s}^k}$ are then built as in \cref{eq:gen-deriv}.
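The shrinkage operator, the generalized Jacobian $D(u)$, and the safeguarded step size $\lambda_k^2$ from above can be sketched as (illustrative Python; all helper names are ours):

```python
import numpy as np

def prox_l1(u, t):
    """Shrinkage operator u - P_[-t,t](u), with t = mu*lambda_k."""
    return u - np.clip(u, -t, t)

def prox_l1_jacobian(u, t):
    """Element D(u) = diag(d(u)) of the generalized Jacobian,
    with d(u)_i = 1 if |u_i| > t and 0 otherwise."""
    return np.diag((np.abs(u) > t).astype(float))

def safeguarded_stepsize(dx, dg, lo=1e-3, hi=1e4):
    """lambda_k^2 = max(lo, min(hi, ||dx|| / ||dg||))."""
    return max(lo, min(hi, np.linalg.norm(dx) / np.linalg.norm(dg)))
```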
As described in \cite{MilUlb14,PatSteBem14,XLWZ2016}, we can exploit the structure of the resulting semismooth Newton system and reduce it to a smaller and symmetric linear system of equations. We utilize an early terminated conjugate gradient (CG) method to solve this system approximately. The maximum number of iterations and the desired accuracy of the CG method are adjusted adaptively depending on the computed residual $\|F^{\Lambda_k}_{{s}^k}(x^k)\|$. When the residual is large, only a few iterations are performed to save time. The initial relative tolerance and the initial maximum number of iterations are set to 0.01 and 2, respectively. The total maximum number of CG-iterations is restricted to 12. In order to numerically robustify the computation of the Newton step $z^k_{\sf n}$, we also consider the following, regularized version of the Newton system \[\left(M_k + \rho_k I\right) \cdot d^k = -F^{\Lambda_k}_{{s}^k}(x^k), \quad M_k \in \cM^{\Lambda_k}_{{s}^k,t^k}(x^k), \] where $\rho_k>0$ is a small positive number. We adjust $\rho_k$ according to the norm of the residual $F^{\Lambda_k}_{{s}^k}(x^k)$ so that $\rho_k \to 0$ as $\|F^{\Lambda_k}_{{s}^k}(x^k)\| \to 0.$ Finally, in our implementation of S4N, we only check the first growth condition \cref{eq:growth-1} to measure the quality of the Newton step $z^k_{\sf n}$. Although both growth conditions \cref{eq:growth-1} and \cref{eq:growth-2} are generally required to guarantee global convergence, this adjustment does not affect the globalization process and convergence of S4N in the numerical experiments. Moreover, as we have shown in \cref{theorem:conv-prox-strong} and in \cref{theorem:mega}, the condition \cref{eq:growth-2} is actually not necessary for strongly convex problems and it is satisfied locally close to a stationary point of problem \cref{eq:prob} under certain assumptions.
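A minimal sketch of the regularized Newton solve with an early terminated CG method (illustrative Python; the concrete rule $\rho_k = \min\{1, \|F\|\}$ is our own choice, included only because it satisfies $\rho_k \to 0$ as $\|F\| \to 0$):

```python
import numpy as np

def cg(Amat, rhs, tol=1e-2, maxit=12):
    """Early terminated conjugate gradient for a symmetric positive
    definite matrix; stops once ||r|| <= tol * ||rhs||."""
    d = np.zeros_like(rhs)
    r = rhs.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = Amat @ p
        alpha = rs / (p @ Ap)
        d = d + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(rhs):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

def regularized_newton_step(M, F_res):
    """Approximately solve (M + rho*I) d = -F with a residual-dependent
    regularization, cf. the regularized Newton system above."""
    rho = min(1.0, np.linalg.norm(F_res))   # our illustrative choice of rho_k
    return cg(M + rho * np.eye(len(F_res)), -F_res)
```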
These different observations motivate us to restrict the acceptance test of the semismooth Newton steps to the cheaper condition \cref{eq:growth-1}. We use the following parameters $\eta = 0.85$, $\nu_k =\varepsilon_k^1 = c_\nu k^{-1.1}$, $c_\nu = 500$, and $\alpha_k = 10^{-2}$. \subsubsection{Numerical comparison} The datasets tested in our numerical comparison are summarized in \cref{table:datasets}. We linearly scale the entries of the data-matrix $ (a_1,...,a_N)$ to $[0,1]$ for each dataset. The datasets for multi-class classification have been manually divided into two types or features. For instance, the MNIST dataset is used for classifying even and odd digits. For all methods, we choose $x^0 = 0$ as initial point. \begin{table}[t] \centering \begin{tabular}{|c||c|c|c|c|} \hline Data Set & Data Points $N$ & Variables $n$ & {Density} & Reference \\ \hline $\mathtt{CINA}$ & 16033 & 132 & 29.56\% &\cite{CINA}\\ $\mathtt{gisette}$ & 6000 & 5000 & 12.97\% &\cite{gisette}\\ $\mathtt{MNIST}$ & 60000 & 784 & 19.12\% &\cite{MNIST}\\ $\mathtt{rcv1}$ & 20242 & 47236 & 0.16\% &\cite{RCV1}\\ \hline \end{tabular} \caption{A description of binary datasets used in the experiments} \label{table:datasets} \end{table} \begin{figure} \centering \begin{tabular}{cc} \subfloat[$\mathtt{CINA}$]{ \includegraphics[width=6cm]{results_l1_logloss_BASE_CINA_epoch_rel_err_id_1.eps}} & \subfloat[$\mathtt{gisette}$]{ \includegraphics[width=6cm]{results_l1_logloss_BASE_gisette_epoch_rel_err_id_1.eps}}\\ \subfloat[$\mathtt{MNIST}$]{ \includegraphics[width=6cm]{results_l1_logloss_BASE_MNIST_epoch_rel_err_id_1.eps}} & \subfloat[$\mathtt{rcv1}$]{ \includegraphics[width=6cm]{results_l1_logloss_BASE_rcv1_epoch_rel_err_id_1.eps}} \end{tabular} \caption{Change of the relative error with respect to the required epochs for solving the $\ell_1$-logistic regression problem \cref{eq:LR}. 
(Averaged over 50 independent runs).} \label{figure:logistic_epoch} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \subfloat[$\mathtt{CINA}$]{ \includegraphics[width=6cm]{results_l1_logloss_BASE_CINA_time_rel_err_id_1.eps}} & \subfloat[$\mathtt{gisette}$]{ \includegraphics[width=6cm]{results_l1_logloss_BASE_gisette_time_rel_err_id_1.eps}}\\ \subfloat[$\mathtt{MNIST}$]{ \includegraphics[width=6cm]{results_l1_logloss_BASE_MNIST_time_rel_err_id_1.eps}} & \subfloat[$\mathtt{rcv1}$]{ \includegraphics[width=6cm]{results_l1_logloss_BASE_rcv1_time_rel_err_id_1.eps}} \end{tabular} \caption{Change of the relative error with respect to the cpu-time for solving the $\ell_1$-logistic regression problem \cref{eq:LR}. (Averaged over 50 independent runs).} \label{figure:logistic_time} \end{figure} In \cref{figure:logistic_epoch} and \cref{figure:logistic_time}, we show the performances of all methods for solving the logistic regression problem \cref{eq:LR}. The change of the \textit{relative error} $(\psi(x) - \psi(x^*)) / \max\{1,|\psi(x^*)| \}$ is reported with respect to \textit{epochs} and \textit{cpu-time}, respectively. Here, $x^*$ is a reference solution of problem \cref{eq:LR} generated by S2N-D with stopping criterion $\|F^I(x)\| \leq 10^{-12}$. Moreover, one epoch denotes a full pass over a dataset. The results presented in \cref{figure:logistic_epoch} and \cref{figure:logistic_time} are averaged over 50 independent runs. At first, we observe that S2N-D, S4N-HG 100\%, S4N-H, and S4N-VR outperform the first order method Adagrad both with respect to the required number of epochs and cpu-time. Furthermore, the different variants S4N and S2N-D seem to be especially well-suited for recovering high accuracy solutions. The deterministic semismooth Newton method S2N-D decreases slowly in the early stage of the iteration process, but converges rapidly when the iterates are close to an optimal solution. 
The results show that in the early stage the performance of S2N-D is inferior to the performance of the other stochastic methods. If a higher precision is required, then $\text{S2N-D}$ becomes more efficient and behaves similarly to S4N-HG 100\%. Overall, S2N-D is not competitive with the stochastic variants S4N-H and S4N-VR and converges slower. These observations indicate the strength of stochastic algorithms in general. Our numerical experiments show that the different performances of the stochastic methods can be roughly split into two categories. The first category includes Adagrad, S4N-HG 10\%, and S4N-HG 50\%, while the second category consists of S4N-H, S4N-HG 100\%, and S4N-VR. The performance of prox-SVRG depends on the tested datasets. While in $\mathtt{gisette}$ and $\mathtt{MNIST}$, it converges slowly and performs similarly to Adagrad, prox-SVRG shows much faster convergence on the datasets $\mathtt{CINA}$ and $\mathtt{rcv1}$. Comparing the results in \cref{figure:logistic_epoch}, it appears that the performance of S4N-HG 10\% and S4N-HG 50\% is comparable to that of the first order method Adagrad. Since the maximum sample set size of the stochastic gradient in S4N-HG 10\% and S4N-HG 50\% is limited to $\lfloor0.1N\rfloor$ and $\lfloor0.5N\rfloor$, the associated gradient error terms still might be too large, preventing transition to fast local convergence and causing stagnation of the methods. Thus, similar to the observations for stochastic quasi-Newton methods \cite{BHNS2016,WMGL2017}, the performance of S4N is greatly affected by the sampling strategy and the accuracy of the gradient approximation. This is also partly illustrated by the acceptance of the growth condition \cref{eq:growth-1}. While in S2N-D, S4N-HG 100\%, S4N-H, and S4N-VR usually every semismooth Newton step is accepted as a new iterate, a small number of Newton steps is rejected in S4N-HG 10\% and S4N-HG 50\%.
This situation typically occurs in the $\mathtt{rcv1}$ dataset, when either S4N-HG 10\% or S4N-HG 50\% stagnates and the stochastic Newton step does not provide sufficient progress to be accepted. The results in \cref{figure:logistic_epoch} and \cref{figure:logistic_time} demonstrate that the performance of S4N can be further improved by increasing the sample size of the gradient gradually to its full size, as in S4N-HG 100\%, or by introducing an additional variance reduction technique as in S4N-VR. We also observe that S4N-VR outperforms most of the other methods (especially with respect to the number of required epochs), which indicates that the combination of second order information and variance reduction is advantageous and very promising. \begin{figure} \centering \begin{tabular}{cc} \subfloat[$\mathtt{CINA}$]{ \includegraphics[width=6cm]{results_l1_nonc_sigmoid_BASE_CINA_epoch_res_id_3.eps}} & \subfloat[$\mathtt{gisette}$]{ \includegraphics[width=6cm]{results_l1_nonc_sigmoid_BASE_gisette_epoch_res_id_1.eps}} \\ \subfloat[$\mathtt{MNIST}$]{ \includegraphics[width=6cm]{results_l1_nonc_sigmoid_BASE_MNIST_epoch_res_id_2.eps}} & \subfloat[$\mathtt{rcv1}$]{ \includegraphics[width=6cm]{results_l1_nonc_sigmoid_BASE_rcv1_epoch_res_id_2.eps}} \end{tabular} \caption{Change of the residual $\|F^I(x)\|$ with respect to epochs for solving the nonconvex binary classification problem \cref{eq:nonconvex2}.
(Averaged over 50 independent runs).} \label{figure:sigmoid_epoch} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \subfloat[$\mathtt{CINA}$]{ \includegraphics[width=6cm]{results_l1_nonc_sigmoid_BASE_CINA_time_res_id_3.eps}} & \subfloat[$\mathtt{gisette}$]{ \includegraphics[width=6cm]{results_l1_nonc_sigmoid_BASE_gisette_time_res_id_1.eps}} \\ \subfloat[$\mathtt{MNIST}$]{ \includegraphics[width=6cm]{results_l1_nonc_sigmoid_BASE_MNIST_time_res_id_2.eps}} & \subfloat[$\mathtt{rcv1}$]{ \includegraphics[width=6cm]{results_l1_nonc_sigmoid_BASE_rcv1_time_res_id_2.eps}} \end{tabular} \caption{Change of the residual $\|F^I(x)\|$ with respect to cpu-time for solving the nonconvex binary classification problem \cref{eq:nonconvex2}. (Averaged over 50 independent runs). \vspace{-1ex}} \label{figure:sigmoid_time} \end{figure} \subsection{Nonconvex Binary Classification} In this subsection, we consider the following nonconvex, binary classification problem \cite{MasBaxBarFre99,WMGL2017} \begin{equation}\label{eq:nonconvex2} \min_{x\in\mathbb{R}^n} \frac{1}{N}\sum_{i=1}^N\,[1-\tanh(b_i\cdot\iprod{a_i}{x})]+\mu \|x\|_1, \end{equation} where $f_i(x) := 1-\tanh(b_i\iprod{a_i}{x})$ is the sigmoid loss function and $\mu = 0.01$ is a regularization parameter. We test the same datasets as in \cref{sec:logloss} to evaluate the performance of the different versions of S4N. The sampling strategy and parameters are adjusted as follows. For all S4N methods, the initial mini-batch size of the stochastic gradient is increased to $|{\mathcal{S}}_0| = \lfloor 0.05N \rfloor$. For S4N-HG 10\%, S4N-HG 50\%, and S4N-VR, we set $|\mathcal T_0| = \lfloor 0.025N \rfloor$. The other variants of S4N start with the initial sample size $|\mathcal T_0| = \lfloor 0.05N \rfloor$. We set $t_{\max} = \lfloor 0.25N \rfloor$, except for S4N-HG 10\% where $t_{\max} = \lfloor 0.05N \rfloor$ is used.
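Using $\frac{d}{ds}[1-\tanh(s)] = -\operatorname{sech}^2(s)$, the sigmoid loss in \cref{eq:nonconvex2} and the gradient of its smooth part can be sketched as (illustrative Python; function names are ours):

```python
import numpy as np

def sigmoid_loss(x, A, b, mu):
    """(1/N) * sum_i [1 - tanh(b_i <a_i, x>)] + mu * ||x||_1."""
    return np.mean(1.0 - np.tanh(b * (A @ x))) + mu * np.abs(x).sum()

def sigmoid_loss_grad(x, A, b):
    """Gradient of the smooth part:
    -(1/N) * sum_i b_i * sech(b_i <a_i, x>)^2 * a_i."""
    t = b * (A @ x)
    w = -b / np.cosh(t) ** 2
    return A.T @ w / len(b)
```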
We utilize the minimal residual method (MINRES) to solve the reduced Newton system and the maximum number of MINRES-iterations is set to 32. We choose $c_\nu = 2500$ and in S4N-VR, the parameter $m$ is adjusted to $m = 8$. For the $\mathtt{gisette}$ dataset, we changed the initial value for $\lambda_0$ to 5, which significantly improved the performance of the S4N methods. All remaining parameters and strategies follow the setup discussed in the last subsection. The numerical results are presented in \cref{figure:sigmoid_epoch} and \cref{figure:sigmoid_time}. We report the change of the residual $\|F^I(x)\|$ with respect to the required epochs and cpu-time. In general, the overall performance of the different methods is similar to the results shown in the last subsection. However, in contrast to the convex logistic regression problem, the more accurate approximation of the Hessian seems to be beneficial and can accelerate convergence. This observation is also supported by the slightly improved performance of the deterministic semismooth Newton method S2N-D. Our results show that prox-SVRG now consistently outperforms S4N-HG 10\% and S4N-HG 50\% in recovering high precision solutions. Similar to the convex example, S4N-HG 100\% manages to significantly reduce the residual $\|F^I(x)\|$ in the first iterations. As soon as the stochastic error in the gradient approximation becomes negligible, the behavior of S4N-HG 100\% changes and fast local convergence can be observed. The methods S4N-H and S4N-VR still compare favorably with the other stochastic approaches. With respect to the epochs, S4N-VR again outperforms the other methods (except for the dataset $\mathtt{gisette}$). Regarding the required cpu-time, S4N-H achieves good results and converges quickly to highly accurate solutions. \section{Conclusion} In this paper, we investigate a stochastic semismooth Newton method for solving nonsmooth and nonconvex minimization problems.
In the proposed framework, the gradient and Hessian of the smooth part of the objective function are approximated by general stochastic first and second order oracles. This allows the application of various sub-sampling and variance reduction techniques or other stochastic approximation schemes. The method is based on stochastic semismooth Newton steps, stochastic proximal gradient steps, and growth conditions; a detailed discussion of the global convergence properties is provided. Under suitable assumptions, transition to fast local convergence is established. More specifically, we show that the method converges locally with high probability with an r-linear or r-superlinear rate if the sample sizes of the stochastic gradient and Hessian are increased sufficiently fast. The approach is tested on an $\ell_1$-logistic regression and a nonconvex binary classification problem on a variety of datasets. The numerical comparisons indicate that our algorithmic framework and especially the combination of (generalized) second order information and variance reduction are promising and competitive.
\section{\label{sec:Introduction}Introduction} The main motivation for the present paper is the study of functional traits in evolutionary biology and quantitative genetics. Evolutionary biology investigates the change of physical traits (phenotypes) across generations. Some traits are univariate or multivariate, but others are functional, like growth curves or thermal performance curves (Kirkpatrick and Heckman, 1989; Heckman, 2003; Kingsolver et al., 2002; Meyer and Kirkpatrick, 2005; Ragland and Carter, 2004). Understanding the modes of variability of these curves is important in order to understand the biological processes behind the trait, and in particular the genetic aspects of it. Consider for example the flour-beetle growth curves shown in Figure \ref{fig:curves}(a) (see Irwin and Carter, 2013, for details about these data). They are mass measurements of larvae from hatching to pupation. The dataset consists of 122 half-siblings sired by 29 fathers and different mothers. A distinct characteristic of these curves is an inflection point around day 15; this is the time when larvae stop eating and begin searching for a place to pupate. This process is triggered by hormonal mechanisms whose timing varies from individual to individual; determining what proportion of the time variability can be attributed to genetic factors and what proportion can be attributed to environmental factors is important for understanding the evolution of development and growth. Similarly, in the study of thermal performance curves (which are functions of temperature, not time), the optimal temperature varies from individual to individual and characterizing the sources of this variability is important for understanding thermal adaptations (Huey and Kingsolver, 1989; Izem and Kingsolver, 2005).
We can see, then, that functional samples usually present two types of variability: what we can denominate \textquotedblleft horizontal\textquotedblright\ or \textquotedblleft phase\textquotedblright\ variability (e.g.~variability in the location of the mass peaks in Figure \ref{fig:curves}(a)) and \textquotedblleft vertical\textquotedblright\ or \textquotedblleft amplitude\textquotedblright\ variability (e.g.~variability in the mass magnitude at the peak in Figure \ref{fig:curves}(a)). It is important to point out that for a given data set there is often some ambiguity about what constitutes amplitude variability and what constitutes phase variability (this will be discussed in more depth in Section \ref{sec:background}). The problem of decomposing functional variability into amplitude and phase variability has been addressed by many authors (Kneip and Engel, 1995; Ramsay and Li, 1998; Wang and Gasser, 1999; Kneip et al., 2000; Gervini and Gasser, 2004, 2005; Kneip and Ramsay, 2008; Tang and M\"{u}ller, 2008; Telesca and Inoue, 2008; Bigot and Gadat, 2010; Claeskens et al., 2010). All of these papers, however, have focused on independent and identically distributed samples of curves, but for the type of applications we have in mind the curves are not independent. For example, the growth curves in Figure \ref{fig:curves}(a) are correlated for individuals with the same father. This type of design is common in evolutionary biology and quantitative genetics for the following reason. The variability observed in physical traits has two sources: genetic and environmental. Because environmental factors generally are not passed from one generation to the next, the evolution of phenotypes is driven largely by genetic variability (but see Skinner et al., 2010 and Manikkam et al., 2012 for a discussion of epigenetic effects).
Examining samples of genetically related individuals, like siblings or half-siblings, makes the genetic and environmental sources of variability mathematically identifiable and therefore estimable, allowing biologists to predict the evolution of traits in response to selection (Gomulkiewicz and Beder, 1996; Kingsolver et al., 2002). Therefore, it is important to possess statistical tools for the study of amplitude and phase variability of non-independent functional data. Some existing functional-data methods handle non-independent or non-identically distributed curves, such as mixed-effects ANOVA models (Guo, 2002; Morris and Carroll, 2006; Di et al., 2009; Chen and Wang, 2011), but they do not specifically address phase variability. To date, the problem of amplitude/phase variability of functional traits has been addressed mostly in an ad-hoc way, by first aligning the curves with respect to some trait, and then studying amplitude variability of the aligned curves. (This process of aligning curves is variously known as \textquotedblleft time warping\textquotedblright\ or \textquotedblleft curve registration\textquotedblright\ in the Functional Data literature.) But evolutionary biologists frequently must make decisions about how to align or register individual curves from a population of individuals. For example, when studying growth curves in a population of animals that undergo metamorphosis from one life history state to another (usually from a non-reproductive larval form to a reproductive adult form), it is not necessarily clear how to align the individual curves. The default choice for most biologists is to align the curves at the date of birth or hatching, but an equally valid choice might be the date of metamorphosis or the peak body mass prior to metamorphosis. For example, Ragland and Carter (2004) chose to align the growth curves of larval salamanders by date of metamorphosis and then reset the growth period to a fractional scale.
Although this approach was effective, it was unsophisticated and ad hoc; more rigorous methods would be beneficial. In this paper we propose a functional ANOVA approach that explicitly models time variability. For simplicity, we consider only the one-way random factor model, but the ideas can be easily extended to more complex ANOVA models. We follow a likelihood-based approach that uses the raw data directly, without pre-smoothing. Therefore the method can be applied to irregularly sampled trajectories, with possibly different starting points and endpoints. The fact that pre-smoothing is not necessary makes the method applicable to longitudinal data, where a smooth latent process is assumed but the observed data themselves are not smooth (Rice, 2004; M\"{u}ller, 2008). The paper is organized as follows: a brief background on random processes is given in Section \ref{sec:background}; the warped ANOVA model is presented in Section \ref{sec:Model}; the asymptotic distribution of the main parameter estimators is derived in Section \ref{sec:Inference}; the small-sample behavior of the estimators is studied by simulation in Section \ref{sec:Simulations}; finally, the beetle growth data are analyzed in detail in Section \ref{sec:Example}. \section{\label{sec:background}Brief background on random processes} Before we present the warped ANOVA model, it is useful to review some basic properties of stochastic processes. Let $x:I\rightarrow \mathbb{R}$ be a random function defined on a finite interval $I\subset \mathbb{R}$. Suppose $x(t)$ is square-integrable with probability one, and has finite variance. Let $\mu (t)=\mathrm{E}\{x(t)\}$ and $\rho (s,t)=\mathrm{cov}\{x(s),x(t)\}$.
Then $x(t)$ admits the decomposition \begin{equation} x(t)=\mu (t)+\sum_{k=1}^{\infty }Z_{k}\phi _{k}(t), \label{eq:KL_decomp} \end{equation} which is known as the Karhunen--Lo\`{e}ve decomposition (Ash and Gardner, 1975), where the $Z_{k}$s are uncorrelated random variables with $\mathrm{E}(Z_{k})=0$ and $\mathrm{var}(Z_{k})=\lambda _{k}$ (without loss of generality we can assume $\lambda _{1}\geq \lambda _{2}\geq \cdots \geq 0$). The $\phi _{k}$s form an orthonormal system in $\mathcal{L}^{2}(I)$ and are eigenfunctions of the covariance function $\rho $ with eigenvalues $\lambda _{k}$; that is, $\int \rho (s,t)\phi _{k}(s)ds=\lambda _{k}\phi _{k}(t)$, which implies \begin{equation} \rho (s,t)=\sum_{k=1}^{\infty }\lambda _{k}\phi _{k}(s)\phi _{k}(t). \label{eq:spectral_decomp} \end{equation} If the covariance function $\rho $ is continuous then (\ref{eq:KL_decomp}) and (\ref{eq:spectral_decomp}) converge pointwise; otherwise the convergence is only in the sense of the $\mathcal{L}^{2}(I)$ norm (Gohberg \emph{et al.}, 2003). In either case, $\sum_{k=1}^{\infty }\lambda _{k}<\infty $, so the sequence of eigenvalues converges to zero. The Karhunen--Lo\`{e}ve decomposition is the functional equivalent of the multivariate principal-component decomposition. Although from a mathematical point of view decomposition (\ref{eq:KL_decomp}) always holds, from a statistical point of view it is not always the most parsimonious model. It is often the case that the sample curves present a few distinct peaks and valleys that systematically repeat themselves across curves, albeit at somewhat different locations.
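In practice, the Karhunen--Lo\`{e}ve decomposition is computed from a sample of curves observed on a common grid, by an eigendecomposition of the discretized sample covariance. The following is a minimal illustrative sketch (our own code, not part of the paper's implementation; the function name `empirical_kl` is ours):

```python
import numpy as np

def empirical_kl(X, t):
    """Empirical Karhunen-Loeve decomposition of sampled curves.

    X : (n, m) array of n curves sampled on a common uniform grid t of m points.
    Returns the mean curve, eigenvalues, eigenfunctions (columns), and scores.
    """
    dt = t[1] - t[0]                      # grid spacing (assumed uniform)
    mu = X.mean(axis=0)                   # pointwise mean function
    Xc = X - mu
    rho = Xc.T @ Xc / (X.shape[0] - 1)    # sample covariance rho(s, t) on the grid
    lam, phi = np.linalg.eigh(rho * dt)   # discretized eigenproblem
    idx = np.argsort(lam)[::-1]           # sort eigenvalues decreasingly
    lam, phi = lam[idx], phi[:, idx] / np.sqrt(dt)  # L2-normalized phi_k
    scores = Xc @ phi * dt                # scores Z_k = <x - mu, phi_k>
    return mu, lam, phi, scores
```

The rescaling by `dt` makes the discrete eigenproblem approximate the integral equation $\int \rho (s,t)\phi _{k}(s)ds=\lambda _{k}\phi _{k}(t)$, so the eigenfunctions are orthonormal in the $\mathcal{L}^{2}$ sense and the eigenvalues estimate $\mathrm{var}(Z_{k})$.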
It may take a lot of terms in (\ref{eq:KL_decomp}) to explain this kind of variability, but Kneip and Ramsay (2008, Proposition 1) show that if the process $x(t)$ has at most $K$ peaks and valleys and its derivative $x^{\prime }(t)$ has at most $K$ zeros, then $x(t)$ admits the decomposition \begin{equation} x(t)=\sum_{j=1}^{p}C_{j}\xi _{j}\{v(t)\} \label{eq:Kneip_exp} \end{equation} for some $p\leq K+2$, where the $\xi _{j}$s are non-random basis functions, the $C_{j}$s are random coefficients, and $v:I\rightarrow I$ is a monotone increasing stochastic process such that $\mathrm{E}\{v(t)\}=t$ (or alternatively $\mathrm{E}\{w(s)\}=s$, where $w(t)$ is the inverse function of $v(t)$). We can re-express (\ref{eq:Kneip_exp}) as \begin{equation} x\{w(s)\}=\mu ^{\ast }(s)+\sum_{k=1}^{p}Z_{k}^{\ast }\phi _{k}^{\ast }(s), \label{eq:warped_KL_decomp} \end{equation} which is just the Karhunen--Lo\`{e}ve decomposition of the warped process $z=x\circ w$. The process $w$ is called the warping process, and it explains the \textquotedblleft horizontal\textquotedblright\ variability in the location of the peaks and valleys of $x$. It is important to point out that the Karhunen--Lo\`{e}ve decomposition (\ref{eq:KL_decomp}) is essentially unique (up to the usual indeterminacy of eigenfunctions for multiple eigenvalues), and so is (\ref{eq:warped_KL_decomp}) for a given warping process $w$; but the warping process $w$ itself is not unique. For a given $x(t)$, different warping processes $w(t)$ can be chosen that will give rise to different decompositions (\ref{eq:warped_KL_decomp}). In general, it is not possible to uniquely define what constitutes amplitude variability and what constitutes phase variability for a given process $x(t)$.
The approach usually followed in the literature is to specify a warping family $\mathcal{W}$ where $w(t)$ is constrained to live, and then define as phase variability whatever is accounted for by the family $\mathcal{W}$ and as amplitude variability whatever is accounted for by the residual decomposition (\ref{eq:warped_KL_decomp}). This may sound vague, but in fact it is possible to give simple conditions for model (\ref{eq:warped_KL_decomp}) to be identifiable given a family $\mathcal{W}$; see the discussion in Web Appendix E. Some authors choose very rigid warping families $\mathcal{W}$, like linear warping functions (Sangalli \emph{et al.}, 2010), while others use extremely flexible nonparametric families (Telesca and Inoue, 2008; Ramsay and Li, 1998). We will follow an intermediate approach, using the semiparametric family of interpolating monotone Hermite splines (Fritsch and Carlson, 1980), although the proposed method can be implemented with any other warping family. Monotone interpolating Hermite splines are defined as follows (more details are given in Web Appendix C). For a subject $i$, let $\mathbf{\tau }_{i}\in \mathbb{R}^{r}$ be a vector of \textquotedblleft landmarks\textquotedblright\ in $I=[a,b]$, with $a<\tau _{i1}<\cdots <\tau _{ir}<b$; the $\tau $s can be, for example, the locations of the peaks and valleys of the observed curve. Let $\mathbf{\tau }_{0}\in \mathbb{R}^{r}$ be a knot vector, usually taken as the mean of the $\mathbf{\tau }_{i}$s. For given values $s_{i0},\ldots ,s_{i,r+1}$, there exists a unique piecewise cubic function $w_{i}(t)$ such that $w_{i}(a)=a$, $w_{i}(b)=b$, $w_{i}(\tau _{0j})=\tau _{ij}$ for all $j$, $w_{i}^{\prime }(a)=s_{i0}$, $w_{i}^{\prime }(b)=s_{i,r+1}$, and $w_{i}^{\prime }(\tau _{0j})=s_{ij}$ for all $j$.
This function $w_{i}(t)$ aligns the individual features $\mathbf{\tau }_{i}$ with the average features $\mathbf{\tau }_{0}$ in a smooth way, so it is useful for \textquotedblleft landmark registration\textquotedblright\ (Bookstein, 1997). For $w_{i}(t)$ to be strictly monotone increasing the derivatives $s_{ij}$s must satisfy certain conditions, given in Fritsch and Carlson (1980). But for curve-alignment purposes only the $\mathbf{\tau }_{i}$s are specified; in that case Fritsch and Carlson (1980) provide an algorithm that produces a vector of derivatives $\mathbf{s}_{i}$ that satisfy the sufficient conditions for $w_{i}(t)$ to be monotone increasing. Since the algorithm is deterministic, $\mathbf{s}_{i}$ is a function of $\mathbf{\tau }_{i}$ and $\mathbf{\tau }_{0}$, therefore $w_{i}(t)$ is entirely parameterized by $\mathbf{\tau }_{i}$ and $\mathbf{\tau }_{0}$. In this paper, instead of specifying $\mathbf{\tau }_{i}$ for each curve and taking $\mathbf{\tau }_{0}=\mathbf{\bar{\tau}}$, we will specify $\mathbf{\tau }_{0}$ and treat the $\mathbf{\tau }_{i}$s as unobserved random effects. Our family of warping functions $\mathcal{W}_{\mathbf{\tau }_{0}}$, then, is an $r$-dimensional space ($r$ will usually be small). In general, it is not problematic to specify a reasonable $\mathbf{\tau }_{0}$ for a given data set; for example, for the curves in Figure \ref{fig:curves}(a) a single knot at $\tau _{0}=15$ will provide reasonable warping flexibility, and the rest of the variation will be considered amplitude variability. For other warping families, such as monotone B-splines (Telesca and Inoue, 2008) or smooth monotone transformations (Ramsay and Li, 1998), the number and placement of the knots are harder to specify because they are not directly associated with curve features.
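For readers who wish to experiment with this warping family: SciPy's \texttt{PchipInterpolator} implements a monotone piecewise-cubic Hermite scheme in the spirit of Fritsch and Carlson (1980), so a warp that carries the knots $\mathbf{\tau }_{0}$ to the landmarks $\mathbf{\tau }_{i}$ while fixing the endpoints can be sketched as follows (the helper name \texttt{make\_warp} is ours):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def make_warp(tau0, tau_i, a=0.0, b=1.0):
    """Monotone piecewise-cubic warp w with w(a)=a, w(b)=b, w(tau0_j)=tau_ij.

    Uses SciPy's PCHIP interpolator, which chooses Hermite derivatives that
    guarantee monotonicity for monotone data (Fritsch-Carlson style).
    """
    knots = np.concatenate(([a], np.atleast_1d(tau0), [b]))
    values = np.concatenate(([a], np.atleast_1d(tau_i), [b]))
    return PchipInterpolator(knots, values)

# A warp on [0, 21] that moves the landmark at t = 15 to t = 17:
w = make_warp(15.0, 17.0, a=0.0, b=21.0)
```

Since both the knot vector and the landmark vector are strictly increasing, the resulting interpolant is monotone increasing, which is exactly the property required of a warping function.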
\section{\label{sec:Model}The warped ANOVA model} Let us go back now to the original problem of a one-factor design, where the sample of $n$ individuals can be separated into $I$ groups, with group $i$ containing $J_{i}$ individuals. For subject $j$ in group $i$ we observe a certain variable (e.g.~mass) at time points $t_{ij1},\ldots ,t_{ij\nu _{ij}}$, obtaining observations $y_{ij1},\ldots ,y_{ij\nu _{ij}}$. The number of observations $\nu _{ij}$ as well as the time points may change from individual to individual. We assume \begin{equation} y_{ijk}=x_{ij}(t_{ijk})+\varepsilon _{ijk}, \label{eq:raw_data_model} \end{equation} where $\{x_{ij}(t)\}$ are underlying smooth curves, not directly observable, and $\{\varepsilon _{ijk}\}$ are i.i.d.$~N(0,\sigma ^{2})$ random errors independent of the underlying $x_{ij}(t)$s. Observational model (\ref{eq:raw_data_model}), which treats the smooth curves $\{x_{ij}(t)\}$ as latent variables, is the usual way to bridge functional data analysis and longitudinal data analysis (M\"{u}ller, 2008). As discussed in Section \ref{sec:background}, we can write $x_{ij}(t)=z_{ij}\{w_{ij}^{-1}(t)\}$ for a warped process $z_{ij}(t)$ and a warping function $w_{ij}(t)$. These will inherit the dependence structure of the $x_{ij}$s, so we can assume \begin{equation} z_{ij}(t)=\mu (t)+\alpha _{i}(t)+\beta _{ij}(t),\ \ j=1,\ldots ,J_{i},\ \ i=1,\ldots ,I, \label{eq:z_model} \end{equation} with $\{\alpha _{i}(t)\}$ and $\{\beta _{ij}(t)\}$ zero-mean random factors independent of each other and among themselves.
For the main factor $\alpha (t)$ and the residual term $\beta (t)$ we assume expansions analogous to (\ref{eq:warped_KL_decomp}): \begin{equation} \alpha (t)=\sum_{k=1}^{p}U_{k}\phi _{k}(t), \label{eq:KL_alpha} \end{equation} \begin{equation} \beta (t)=\sum_{k=1}^{q}V_{k}\psi _{k}(t), \label{eq:KL_beta} \end{equation} where $\{\phi _{k}(t)\}$ and $\{\psi _{k}(t)\}$ are orthonormal functions in $\mathcal{L}^{2}(I)$, the $U_{k}$s are uncorrelated with $\mathrm{E}(U_{k})=0$ and $\mathrm{var}(U_{k})=\gamma _{k}$, and the $V_{k}$s are uncorrelated with $\mathrm{E}(V_{k})=0$ and $\mathrm{var}(V_{k})=\lambda _{k}$. Without loss of generality we assume $\gamma _{1}\geq \cdots \geq \gamma _{p}>0$ and $\lambda _{1}\geq \cdots \geq \lambda _{q}>0$. From (\ref{eq:z_model}), (\ref{eq:KL_alpha}) and (\ref{eq:KL_beta}) it follows that the total variance of $z_{ij}(t)$, defined as $\mathrm{E}(\Vert z_{ij}-\mu \Vert ^{2})$ with $\left\Vert \cdot \right\Vert $ the usual $\mathcal{L}^{2}$-norm, can be decomposed as $\mathrm{E}(\Vert \alpha \Vert ^{2})+\mathrm{E}(\Vert \beta \Vert ^{2})$, where $\mathrm{E}(\Vert \alpha \Vert ^{2})=\sum_{k=1}^{p}\gamma _{k}$ is the main-factor variance and $\mathrm{E}(\Vert \beta \Vert ^{2})=\sum_{k=1}^{q}\lambda _{k}$ is the residual-factor variance. The ratio \begin{equation} h_{z}=\frac{\sum_{k=1}^{p}\gamma _{k}}{\sum_{k=1}^{p}\gamma _{k}+\sum_{k=1}^{q}\lambda _{k}} \label{eq:h_z} \end{equation} is then the proportion of amplitude variability explained by the main factor. In Section \ref{sec:Inference} we will derive asymptotic confidence intervals for $h_{z}$. The mean function $\mu (t)$ and the Karhunen--Lo\`{e}ve components $\{\phi _{k}(t)\}$ and $\{\psi _{k}(t)\}$ are functional parameters that must be estimated from the data, using for instance semiparametric spline models.
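The ratio $h_{z}$ just defined is a simple function of the component variances, so once $\gamma _{1},\ldots ,\gamma _{p}$ and $\lambda _{1},\ldots ,\lambda _{q}$ are estimated it can be computed directly; a minimal sketch (the function name is ours):

```python
def variance_ratio(gammas, lambdas):
    """h_z: proportion of amplitude variability explained by the main factor,
    sum(gamma_k) / (sum(gamma_k) + sum(lambda_k))."""
    g, l = sum(gammas), sum(lambdas)
    return g / (g + l)

# With gamma_1 = .2^2 and lambda_1 = .1^2 (the values used later in the
# simulations), the ratio is h_z = .04 / .05 = .80:
hz = variance_ratio([0.2 ** 2], [0.1 ** 2])
```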
Let $\mathbf{b}(t)=(b_{1}(t),\ldots ,b_{s}(t))^{T}$ be a spline basis in $\mathcal{L}^{2}(I)$ (for simplicity we will use the same spline basis for all functional parameters, but this is not strictly necessary); then we assume $\mu (t)=\mathbf{b}(t)^{T}\mathbf{m}$, $\phi _{k}(t)=\mathbf{b}(t)^{T}\mathbf{c}_{k}$, and $\psi _{k}(t)=\mathbf{b}(t)^{T}\mathbf{d}_{k}$, for parameters $\mathbf{m}$, $\mathbf{c}_{k}$ and $\mathbf{d}_{k}$ in $\mathbb{R}^{s}$. Let $\mathbf{C=[c}_{1},\ldots ,\mathbf{c}_{p}]\in \mathbb{R}^{s\times p}$, $\mathbf{D=[d}_{1},\ldots ,\mathbf{d}_{q}]\in \mathbb{R}^{s\times q}$ and $\mathbf{J}=\int_{a}^{b}\mathbf{b}(t)\mathbf{b}(t)^{T}dt\in \mathbb{R}^{s\times s}$. The orthogonality conditions on the $\phi _{k}$s and the $\psi _{k}$s translate into the conditions $\mathbf{C}^{T}\mathbf{JC}=\mathbf{I}_{p}$ and $\mathbf{D}^{T}\mathbf{JD}=\mathbf{I}_{q}$ for $\mathbf{C}$ and $\mathbf{D}$. Regarding the $U_{k}$s and $V_{k}$s in (\ref{eq:KL_alpha}) and (\ref{eq:KL_beta}), we assume that $\mathbf{U}=(U_{1},\ldots ,U_{p})^{T}$ follows a multivariate $N(\mathbf{0},\mathbf{\Gamma })$ distribution with $\mathbf{\Gamma }=\mathrm{diag}(\gamma _{1},\ldots ,\gamma _{p})$ and that $\mathbf{V}=(V_{1},\ldots ,V_{q})^{T}$ follows a multivariate $N(\mathbf{0},\mathbf{\Lambda })$ distribution with $\mathbf{\Lambda }=\mathrm{diag}(\lambda _{1},\ldots ,\lambda _{q})$. To summarize, the warped process (\ref{eq:z_model}) is parameterized by $\mathbf{m}$, $\mathbf{C}$, $\mathbf{D}$, $\mathbf{\Gamma }$ and $\mathbf{\Lambda }$. For the warping functions $w_{ij}(t)$ we cannot simply assume an additive model like (\ref{eq:z_model}) with Gaussian factors, because there are no monotone Gaussian processes. Therefore, a more indirect approach is needed. We will assume the $w_{ij}$s belong to the family of interpolating Hermite cubic splines $\mathcal{W}_{\mathbf{\tau }_{0}}$ for a specified knot vector $\mathbf{\tau }_{0}$.
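Numerically, a constraint of the form $\mathbf{C}^{T}\mathbf{JC}=\mathbf{I}_{p}$ can be restored after an unconstrained update by rescaling the columns of $\mathbf{C}$ in the $\mathbf{J}$-metric with an inverse symmetric square root. The sketch below is only an illustration of this idea (not necessarily the scheme used in Web Appendix A; the function name is ours):

```python
import numpy as np

def j_orthonormalize(C, J):
    """Rescale the columns of C so that C^T J C = I_p, where
    J = int b(t) b(t)^T dt is the Gram matrix of the spline basis."""
    M = C.T @ J @ C                       # current p x p Gram of the columns
    lam, Q = np.linalg.eigh(M)            # M is symmetric positive definite
    M_inv_half = Q @ np.diag(lam ** -0.5) @ Q.T   # inverse symmetric sqrt
    return C @ M_inv_half
```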
We have seen in Section \ref{sec:background} that a $w_{ij}\in \mathcal{W}_{\mathbf{\tau }_{0}}$ is parameterized by a vector $\mathbf{\tau }_{ij}$ that can be treated as a random effect. However, due to the restriction $a<\tau _{ij1}<\cdots <\tau _{ijr}<b$ we cannot assume $\mathbf{\tau }_{ij}$ is Normal. So we follow the approach of Brumback and Lindstrom (2004) and use the Jupp (1978) transform $\mathbf{\theta }_{ij}=\mathcal{J}(\mathbf{\tau }_{ij})$, defined as $\theta _{ijk}=\log \{(\tau _{ij,k+1}-\tau _{ijk})/(\tau _{ijk}-\tau _{ij,k-1})\}$ for $k=1,\ldots ,r$, which is an invertible transformation that maps vectors $\mathbf{\tau }_{ij}$ with increasing coordinates into unconstrained vectors $\mathbf{\theta }_{ij}$. For the unconstrained vector $\mathbf{\theta }_{ij}$ we can assume a multivariate Normal distribution and an additive ANOVA model: \begin{equation} \mathbf{\theta }_{ij}=\mathbf{\theta }_{0}+\mathbf{\eta }_{i}+\mathbf{\xi }_{ij}, \label{eq:model_th} \end{equation} with $\mathbf{\eta }_{i}\sim N(\mathbf{0},\mathbf{\Sigma })$ and $\mathbf{\xi }_{ij}\sim N(\mathbf{0},\mathbf{\Omega })$ independent of each other and among themselves. We will also assume the $\mathbf{\theta }_{ij}$s are independent of the amplitude factors $\alpha _{i}(t)$ and $\beta _{ij}(t)$, although a model with correlations between amplitude and warping factors can be set up (see below). We take $\mathbf{\theta }_{0}=\mathcal{J}(\mathbf{\tau }_{0})$; the covariance matrices $\mathbf{\Sigma }$ and $\mathbf{\Omega }$ will be estimated from the data. In analogy with (\ref{eq:h_z}) we define \begin{equation} h_{w}=\frac{\mathrm{tr}(\mathbf{\Sigma })}{\mathrm{tr}(\mathbf{\Sigma }+\mathbf{\Omega })}, \label{eq:h_w} \end{equation} which is the proportion of the warping variability explained by the main factor.
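The Jupp transform and its inverse (with the endpoints $a$ and $b$ playing the role of $\tau _{0}$ and $\tau _{r+1}$) are straightforward to implement; a small sketch with illustrative function names of our own choosing:

```python
import numpy as np

def jupp(tau, a, b):
    """Jupp (1978) transform: maps a < tau_1 < ... < tau_r < b to an
    unconstrained theta in R^r via log-ratios of consecutive gaps."""
    full = np.concatenate(([a], tau, [b]))
    gaps = np.diff(full)                      # r + 1 positive gaps
    return np.log(gaps[1:] / gaps[:-1])       # theta_k, k = 1, ..., r

def jupp_inverse(theta, a, b):
    """Invert the Jupp transform: recover tau from theta given (a, b)."""
    # Gaps are proportional to exp(theta_1 + ... + theta_k), k = 0, ..., r
    w = np.concatenate(([0.0], np.cumsum(theta)))
    gaps = np.exp(w)
    gaps *= (b - a) / gaps.sum()              # rescale so gaps sum to b - a
    return a + np.cumsum(gaps)[:-1]
```

Because the gap ratios determine the landmarks only up to a common scale, the rescaling step in the inverse pins them down using the known interval length $b-a$.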
Putting together the models for $z_{ij}(t)$, $w_{ij}(t)$ and the observational model (\ref{eq:raw_data_model}), we can derive the likelihood function for the observed data vectors $\mathbf{y}_{ij}=(y_{ij1},\ldots ,y_{ij\nu _{ij}})$. Given a realization of the random effect $\mathbf{\theta }_{ij}$, which is determined by realizations of $\mathbf{\eta }_{i}$ and $\mathbf{\xi }_{ij}$, the corresponding warped time grids are $t_{ijk}^{\ast }(\mathbf{\theta }_{ij})=w_{ij}^{-1}(t_{ijk})$, $k=1,\ldots ,\nu _{ij}$, and the corresponding warped B-spline matrices $\mathbf{B}_{ij}^{\ast }(\mathbf{\theta }_{ij})\in \mathbb{R}^{\nu _{ij}\times s}$ are given by $[\mathbf{B}_{ij}^{\ast }(\mathbf{\theta }_{ij})]_{kl}=b_{l}\{t_{ijk}^{\ast }(\mathbf{\theta }_{ij})\}$. Then \begin{equation*} \mathbf{y}_{ij}|(\mathbf{u}_{i},\mathbf{v}_{ij},\mathbf{\eta }_{i},\mathbf{\xi }_{ij})\sim N\left\{ \mathbf{B}_{ij}^{\ast }(\mathbf{\theta }_{ij})\mathbf{m}+\mathbf{B}_{ij}^{\ast }(\mathbf{\theta }_{ij})\mathbf{Cu}_{i}+\mathbf{B}_{ij}^{\ast }(\mathbf{\theta }_{ij})\mathbf{Dv}_{ij},\sigma ^{2}\mathbf{I}_{\nu _{ij}}\right\} , \end{equation*} and the $\mathbf{y}_{ij}$s are conditionally independent given $(\mathbf{u}_{i},\mathbf{v}_{ij},\mathbf{\eta }_{i},\mathbf{\xi }_{ij})$. If $\mathbf{y}_{i\cdot }=(\mathbf{y}_{i1},\ldots ,\mathbf{y}_{iJ_{i}})$, we have \begin{equation} f(\mathbf{y}_{i\cdot })=\iint g(\mathbf{u}_{i},\mathbf{\eta }_{i})f(\mathbf{u}_{i})f(\mathbf{\eta }_{i})\mathrm{d}\mathbf{u}_{i}\mathrm{d}\mathbf{\eta }_{i} \label{eq:f_y} \end{equation} with $g(\mathbf{u}_{i},\mathbf{\eta }_{i})=\prod_{j=1}^{J_{i}}\iint f(\mathbf{y}_{ij}|\mathbf{u}_{i},\mathbf{v}_{ij},\mathbf{\eta }_{i},\mathbf{\xi }_{ij})f(\mathbf{v}_{ij})f(\mathbf{\xi }_{ij})\mathrm{d}\mathbf{v}_{ij}\mathrm{d}\mathbf{\xi }_{ij}$, and the log-likelihood function is $\ell =\sum_{i=1}^{I}\log f(\mathbf{y}_{i\cdot })$.
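The integrals in (\ref{eq:f_y}) have no closed form because the random effects enter the mean nonlinearly through the warping; the paper handles them within an EM algorithm. Purely as a sanity-check device, the marginal likelihood of a latent-effect model of this kind can also be approximated by plain Monte Carlo, averaging the conditional likelihood over simulated effects. The sketch below is our own illustration, not the paper's implementation:

```python
import numpy as np
from scipy.stats import norm

def mc_marginal_loglik(y, draw_effects, cond_loglik, M=100000, seed=0):
    """Monte Carlo approximation of log f(y) = log E_b{ f(y | b) }:
    average the conditional likelihood over M simulated effect draws,
    using a log-sum-exp step for numerical stability."""
    rng = np.random.default_rng(seed)
    b = draw_effects(rng, M)       # M draws of the random effects
    ll = cond_loglik(y, b)         # vectorized conditional log-densities
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))

# Check on a case with a known answer: y = b + eps with b ~ N(0, 1) and
# eps ~ N(0, 1), so marginally y ~ N(0, 2).
approx = mc_marginal_loglik(0.5,
                            lambda rng, M: rng.normal(0.0, 1.0, M),
                            lambda y, b: norm.logpdf(y, loc=b, scale=1.0))
exact = norm.logpdf(0.5, loc=0.0, scale=np.sqrt(2.0))
```

For the full model the effect vector would collect $(\mathbf{u}_{i},\mathbf{v}_{ij},\mathbf{\eta }_{i},\mathbf{\xi }_{ij})$ and the conditional density would be the warped-spline Normal above; the EM approach is of course far more efficient than this brute-force average.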
The maximum likelihood estimators are $(\mathbf{\hat{m}},\mathbf{\hat{C}},\mathbf{\hat{D}},\mathbf{\hat{\Lambda}},\mathbf{\hat{\Gamma}},\allowbreak \mathbf{\hat{\Sigma}},\mathbf{\hat{\Omega}},\hat{\sigma}^{2})=\arg \max \ell $. We compute them via the EM algorithm. The implementation of the EM algorithm presents certain complications arising from the orthogonality restrictions on $\mathbf{C}$ and $\mathbf{D}$, which are discussed in detail in Web Appendix A. In the rest of the paper we will use the estimators as presented above, but to conclude the section we discuss a few possible generalizations. First, it is possible to use other families of warping functions, such as B-splines with monotone increasing coefficients (Brumback and Lindstrom, 2004; Telesca and Inoue, 2008) or smooth monotone transformations (Ramsay and Li, 1998). The problem is that the spline coefficients for these families cannot be directly related to features of the sample curves in the way interpolating Hermite spline coefficients can; therefore, one may have to use a relatively large number of knots placed at somewhat arbitrary locations (equally spaced, for example). This may result in a warping family that is too flexible and lead to overwarping (i.e., produce warping functions with flat parts that are close to singular). To prevent this, the warping variability must somehow be penalized. This can be done by adding a penalty term to the log-likelihood function and maximizing $\ell _{\lambda }=\sum_{i=1}^{I}\log f(\mathbf{y}_{i\cdot })-\lambda \mathrm{tr}(\mathbf{\Sigma }+\mathbf{\Omega })$, where $\lambda \geq 0$ is a penalty parameter chosen by the user. Second, it is possible to incorporate correlations between the warping process $w_{ij}(t)$ and the amplitude process $z_{ij}(t)$. This can be done by assuming that $(\mathbf{u}_{i},\mathbf{\eta }_{i})$ and $(\mathbf{v}_{ij},\mathbf{\xi }_{ij})$ have joint Normal distributions, for instance.
The only change in (\ref{eq:f_y}) would be that $f(\mathbf{u}_{i})f(\mathbf{\eta }_{i})$ is replaced by the joint density $f(\mathbf{u}_{i},\mathbf{\eta }_{i})$ and $f(\mathbf{v}_{ij})f(\mathbf{\xi }_{ij})$ by $f(\mathbf{v}_{ij},\mathbf{\xi }_{ij})$. From the computational point of view this does not have a big impact, because the EM algorithm can be easily modified to accommodate this (a Matlab implementation is available as supplementary material). But from a statistical point of view the results may be harder to interpret, and the extra $rp+rq$ covariance parameters that need to be estimated may affect the precision of the rest of the estimators if the sample size is not very large. Finally, we note that the assumption of normality of the random effects is mostly a working assumption to derive estimators. It is usually the case that properties like consistency and asymptotic normality of maximum likelihood estimators hold for broader families of distributions than the one they were derived for. But such a thorough asymptotic analysis is beyond the scope of this paper. Instead, in Section \ref{sec:Simulations} we will study by simulation the robustness of the estimators to at least some mild departures from normality. On the other hand, if robustness to outliers is desired, this may be attained by substituting the Normal distributions by multivariate $t$ distributions, as in Gervini (2009); the Normal EM algorithm is easy to adapt for multivariate $t$ distributions. \section{\label{sec:Inference}Asymptotics and inference} It is usually of interest in applications to determine if the main-factor variance is significantly different from zero or not. To this end, we derive in this section the asymptotic distributions of the maximum likelihood estimators and the variance ratios (\ref{eq:h_z}) and (\ref{eq:h_w}), which can then be used to construct asymptotic confidence intervals and tests for $h_{z}$ and $h_{w}$.
For simplicity, we assume that \emph{(i)} the true functional parameters $\mu (t)$, $\{\phi _{k}(t)\}$ and $\{\psi _{k}(t)\}$ belong to the spline space used for estimation, which is fixed, and \emph{(ii)} the $\mathbf{y}_{i\cdot }$s are identically distributed, so $J_{i}=J$ for all $i$ and the time grid $(t_{1},\ldots ,t_{\nu })$ is the same for all individuals. The asymptotic distribution of the estimators will be derived for $I\rightarrow \infty $ and $J$ fixed, or in practical terms, for \textquotedblleft large $I$ and small $J$\textquotedblright ; this is the usual situation in random-effect one-way ANOVA models. Under these conditions the standard maximum likelihood asymptotic theory applies: if $\mathbf{\omega }=(\gamma _{1},\ldots ,\gamma _{p},\lambda _{1},\ldots ,\lambda _{q})$, then $\sqrt{I}(\mathbf{\hat{\omega}}-\mathbf{\omega })\overset{D}{\longrightarrow }N(\mathbf{0},\mathbf{F}^{-1})$, where $\mathbf{F}=\mathrm{E}[\{\frac{\partial }{\partial \mathbf{\omega }}\log f(\mathbf{y}_{i\cdot })\}\allowbreak \{\frac{\partial }{\partial \mathbf{\omega }}\log f(\mathbf{y}_{i\cdot })\}^{T}]$ is the Fisher Information Matrix for the parameter $\mathbf{\omega }$. Straightforward differentiation of (\ref{eq:f_y}), which is carried out in detail in Web Appendix B, gives \begin{equation*} \frac{\partial }{\partial \gamma _{k}}\log f(\mathbf{y}_{i\cdot })=-\frac{1}{2\gamma _{k}}+\frac{\mathrm{E}(u_{ik}^{2}|\mathbf{y}_{i\cdot })}{2\gamma _{k}^{2}},\ \ k=1,\ldots ,p, \end{equation*} and \begin{equation*} \frac{\partial }{\partial \lambda _{k}}\log f(\mathbf{y}_{i\cdot })=-\frac{J}{2\lambda _{k}}+\frac{1}{2\lambda _{k}^{2}}\sum_{j=1}^{J}\mathrm{E}(v_{ijk}^{2}|\mathbf{y}_{i\cdot }),\ \ k=1,\ldots ,q. \end{equation*} Let $\widehat{u_{ik}^{2}}=\mathrm{E}(u_{ik}^{2}|\mathbf{y}_{i\cdot })$ and $\widehat{v_{ijk}^{2}}=\mathrm{E}(v_{ijk}^{2}|\mathbf{y}_{i\cdot })$.
Since $\mathrm{E}(\widehat{u_{ik}^{2}})=\mathrm{E}(u_{ik}^{2})=\gamma _{k}$ and $\mathrm{E}(\widehat{v_{ijk}^{2}})=\mathrm{E}(v_{ijk}^{2})=\lambda _{k}$, we obtain the following expressions: \begin{equation*} F_{kl}=-\frac{1}{4\gamma _{k}\gamma _{l}}+\frac{\mathrm{E}(\widehat{u_{ik}^{2}}\widehat{u_{il}^{2}})}{4\gamma _{k}^{2}\gamma _{l}^{2}},\ \ \text{for }k=1,\ldots ,p\text{ and }l=1,\ldots ,p, \end{equation*} \begin{equation*} F_{k,p+l}=-\frac{J}{4\gamma _{k}\lambda _{l}}+\frac{\mathrm{E}(\widehat{u_{ik}^{2}}\sum_{j=1}^{J}\widehat{v_{ijl}^{2}})}{4\gamma _{k}^{2}\lambda _{l}^{2}},\ \ \text{for }k=1,\ldots ,p\text{ and }l=1,\ldots ,q, \end{equation*} and \begin{equation*} F_{p+k,p+l}=-\frac{J^{2}}{4\lambda _{k}\lambda _{l}}+\frac{\mathrm{E}(\sum_{j=1}^{J}\widehat{v_{ijk}^{2}}\sum_{j=1}^{J}\widehat{v_{ijl}^{2}})}{4\lambda _{k}^{2}\lambda _{l}^{2}},\ \ \text{for }k=1,\ldots ,q\text{ and }l=1,\ldots ,q. \end{equation*} The estimator $\mathbf{\hat{F}}$ is obtained replacing expectations by averages over $i=1,\ldots ,I$. The asymptotic distribution of (\ref{eq:h_z}) is derived via the Delta Method: since $h_{z}$ is a differentiable function of $\mathbf{\omega }$, $\sqrt{I}(\hat{h}_{z}-h_{z})\overset{D}{\longrightarrow }N\left( 0,\left\{ \partial h_{z}/\partial \mathbf{\omega }\right\} ^{T}\mathbf{F}^{-1}\left\{ \partial h_{z}/\partial \mathbf{\omega }\right\} \right) $ with \begin{equation*} \frac{\partial h_{z}}{\partial \gamma _{k}}=\frac{\sum_{l=1}^{q}\lambda _{l}}{\left( \sum_{l=1}^{p}\gamma _{l}+\sum_{l=1}^{q}\lambda _{l}\right) ^{2}},\ \ k=1,\ldots ,p, \end{equation*} and \begin{equation*} \frac{\partial h_{z}}{\partial \lambda _{k}}=-\frac{\sum_{l=1}^{p}\gamma _{l}}{\left( \sum_{l=1}^{p}\gamma _{l}+\sum_{l=1}^{q}\lambda _{l}\right) ^{2}},\ \ k=1,\ldots ,q.
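Given an estimate of $\mathbf{F}^{-1}$, the delta-method variance of $\hat{h}_{z}$ is just a quadratic form in the gradient of $h_{z}$; a minimal sketch (an illustration with a function name of our own, not the paper's code):

```python
import numpy as np

def avar_hz(gammas, lambdas, F_inv):
    """Delta-method asymptotic variance of h_z_hat.

    gammas (length p) and lambdas (length q) are the component variances;
    F_inv is the (p+q) x (p+q) inverse Fisher information for
    omega = (gamma_1, ..., gamma_p, lambda_1, ..., lambda_q).
    """
    g, l = np.sum(gammas), np.sum(lambdas)
    tot2 = (g + l) ** 2
    # Gradient of h_z: dh/dgamma_k = l / tot2, dh/dlambda_k = -g / tot2
    grad = np.concatenate((np.full(len(gammas), l / tot2),
                           np.full(len(lambdas), -g / tot2)))
    return grad @ F_inv @ grad
```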
\end{equation*} The asymptotic variance of $\hat{h}_{z}$ is then given by \begin{eqnarray*} \mathrm{avar}(\hat{h}_{z}) &=&\frac{\left( \sum_{k=1}^{q}\lambda _{k}\right) ^{2}}{\left( \sum_{k=1}^{p}\gamma _{k}+\sum_{k=1}^{q}\lambda _{k}\right) ^{4}}\cdot \sum_{k=1}^{p}\sum_{l=1}^{p}\left( \mathbf{F}^{-1}\right) _{kl} \\ &&-\frac{2\left( \sum_{k=1}^{p}\gamma _{k}\right) \left( \sum_{k=1}^{q}\lambda _{k}\right) }{\left( \sum_{k=1}^{p}\gamma _{k}+\sum_{k=1}^{q}\lambda _{k}\right) ^{4}}\cdot \sum_{k=1}^{p}\sum_{l=1}^{q}\left( \mathbf{F}^{-1}\right) _{k,p+l} \\ &&+\frac{\left( \sum_{k=1}^{p}\gamma _{k}\right) ^{2}}{\left( \sum_{k=1}^{p}\gamma _{k}+\sum_{k=1}^{q}\lambda _{k}\right) ^{4}}\cdot \sum_{k=1}^{q}\sum_{l=1}^{q}\left( \mathbf{F}^{-1}\right) _{p+k,p+l}. \end{eqnarray*} The asymptotic distribution of (\ref{eq:h_w}) is derived in a similar way. If $\mathbf{\zeta }=(\mathrm{diag}(\mathbf{\Sigma }),\mathrm{diag}(\mathbf{\Omega }))$, then $\sqrt{I}(\mathbf{\hat{\zeta}}-\mathbf{\zeta })\overset{D}{\longrightarrow }N(\mathbf{0},\mathbf{G}^{-1})$ with $\mathbf{G}=\mathrm{E}[\{\frac{\partial }{\partial \mathbf{\zeta }}\log f(\mathbf{y}_{i\cdot })\}\{\frac{\partial }{\partial \mathbf{\zeta }}\log f(\mathbf{y}_{i\cdot })\}^{T}]$, and $\sqrt{I}(\hat{h}_{w}-h_{w})\overset{D}{\longrightarrow }N(0,\mathrm{avar}(\hat{h}_{w}))$ with $\mathrm{avar}(\hat{h}_{w})=\left( \partial h_{w}/\partial \mathbf{\zeta }\right) ^{T}\mathbf{G}^{-1}\left( \partial h_{w}/\partial \mathbf{\zeta }\right) $, where \begin{equation*} \frac{\partial h_{w}}{\partial \Sigma _{kk}}=\frac{\mathrm{tr}(\mathbf{\Omega })}{\left\{ \mathrm{tr}(\mathbf{\Sigma })+\mathrm{tr}(\mathbf{\Omega })\right\} ^{2}},\ \ \text{for }k=1,\ldots ,r, \end{equation*} and \begin{equation*} \frac{\partial h_{w}}{\partial \Omega _{kk}}=-\frac{\mathrm{tr}(\mathbf{\Sigma })}{\left\{ \mathrm{tr}(\mathbf{\Sigma })+\mathrm{tr}(\mathbf{\Omega })\right\} ^{2}},\ \ \text{for }k=1,\ldots ,r.
\end{equation*} Then \begin{eqnarray*} \mathrm{avar}(\hat{h}_{w}) &=&\frac{\left\{ \mathrm{tr}(\mathbf{\Omega })\right\} ^{2}}{\left\{ \mathrm{tr}(\mathbf{\Sigma })+\mathrm{tr}(\mathbf{\Omega })\right\} ^{4}}\cdot \sum_{k=1}^{r}\sum_{l=1}^{r}\left( \mathbf{G}^{-1}\right) _{kl} \\ &&-\frac{2\mathrm{tr}(\mathbf{\Sigma })\mathrm{tr}(\mathbf{\Omega })}{\left\{ \mathrm{tr}(\mathbf{\Sigma })+\mathrm{tr}(\mathbf{\Omega })\right\} ^{4}}\cdot \sum_{k=1}^{r}\sum_{l=1}^{r}\left( \mathbf{G}^{-1}\right) _{k,r+l} \\ &&+\frac{\left\{ \mathrm{tr}(\mathbf{\Sigma })\right\} ^{2}}{\left\{ \mathrm{tr}(\mathbf{\Sigma })+\mathrm{tr}(\mathbf{\Omega })\right\} ^{4}}\cdot \sum_{k=1}^{r}\sum_{l=1}^{r}\left( \mathbf{G}^{-1}\right) _{r+k,r+l}. \end{eqnarray*} As shown in Web Appendix B, differentiation of (\ref{eq:f_y}) gives \begin{equation*} \frac{\partial }{\partial \Sigma _{kk}}\log f(\mathbf{y}_{i\cdot })=-\frac{1}{2}\left( \mathbf{\Sigma }^{-1}\right) _{kk}+\frac{1}{2}\left( \mathbf{\Sigma }^{-1}\right) _{\cdot k}^{T}\mathrm{E}\left( \mathbf{\eta }_{i}\mathbf{\eta }_{i}^{T}|\mathbf{y}_{i\cdot }\right) \left( \mathbf{\Sigma }^{-1}\right) _{\cdot k} \end{equation*} and \begin{equation*} \frac{\partial }{\partial \Omega _{kk}}\log f(\mathbf{y}_{i\cdot })=-\frac{J}{2}\left( \mathbf{\Omega }^{-1}\right) _{kk}+\frac{1}{2}\left( \mathbf{\Omega }^{-1}\right) _{\cdot k}^{T}\sum_{j=1}^{J}\mathrm{E}\left( \mathbf{\xi }_{ij}\mathbf{\xi }_{ij}^{T}|\mathbf{y}_{i\cdot }\right) \left( \mathbf{\Omega }^{-1}\right) _{\cdot k}, \end{equation*} where $\left( \mathbf{\Sigma }^{-1}\right) _{\cdot k}$ and $\left( \mathbf{\Omega }^{-1}\right) _{\cdot k}$ denote the $k$th columns of $\mathbf{\Sigma }^{-1}$ and $\mathbf{\Omega }^{-1}$, respectively.
Then, if we define $\widehat{\mathbf{\eta }_{i}^{\otimes 2}}=\mathrm{E}\left( \mathbf{\eta }_{i}\otimes \mathbf{\eta }_{i}|\mathbf{y}_{i\cdot }\right) $ and $\widehat{\mathbf{\xi }_{ij}^{\otimes 2}}=\mathrm{E}\left( \mathbf{\xi }_{ij}\otimes \mathbf{\xi }_{ij}|\mathbf{y}_{i\cdot }\right) $, after some algebra we obtain: \begin{eqnarray*} G_{kl} &=&-\frac{1}{4}\left( \mathbf{\Sigma }^{-1}\right) _{kk}\left( \mathbf{\Sigma }^{-1}\right) _{ll} \\ &&+\frac{1}{4}\left\{ \left( \mathbf{\Sigma }^{-1}\right) _{\cdot k}^{T}\otimes \left( \mathbf{\Sigma }^{-1}\right) _{\cdot k}^{T}\right\} \mathrm{E}\Bigl(\widehat{\mathbf{\eta }_{i}^{\otimes 2}}\widehat{\mathbf{\eta }_{i}^{\otimes 2}}^{T}\Bigr)\left\{ \left( \mathbf{\Sigma }^{-1}\right) _{\cdot l}\otimes \left( \mathbf{\Sigma }^{-1}\right) _{\cdot l}\right\} , \end{eqnarray*} \begin{eqnarray*} G_{k,r+l} &=&-\frac{J}{4}\left( \mathbf{\Sigma }^{-1}\right) _{kk}\left( \mathbf{\Omega }^{-1}\right) _{ll} \\ &&+\frac{1}{4}\left\{ \left( \mathbf{\Sigma }^{-1}\right) _{\cdot k}^{T}\otimes \left( \mathbf{\Sigma }^{-1}\right) _{\cdot k}^{T}\right\} \mathrm{E}\Bigl(\widehat{\mathbf{\eta }_{i}^{\otimes 2}}\sum_{j=1}^{J}\widehat{\mathbf{\xi }_{ij}^{\otimes 2}}^{T}\Bigr)\left\{ \left( \mathbf{\Omega }^{-1}\right) _{\cdot l}\otimes \left( \mathbf{\Omega }^{-1}\right) _{\cdot l}\right\} , \end{eqnarray*} and \begin{eqnarray*} G_{r+k,r+l} &=&-\frac{J^{2}}{4}\left( \mathbf{\Omega }^{-1}\right) _{kk}\left( \mathbf{\Omega }^{-1}\right) _{ll} \\ &&+\frac{1}{4}\left\{ \left( \mathbf{\Omega }^{-1}\right) _{\cdot k}^{T}\otimes \left( \mathbf{\Omega }^{-1}\right) _{\cdot k}^{T}\right\} \mathrm{E}\Bigl(\sum_{j=1}^{J}\widehat{\mathbf{\xi }_{ij}^{\otimes 2}}\sum_{j=1}^{J}\widehat{\mathbf{\xi }_{ij}^{\otimes 2}}^{T}\Bigr)\left\{ \left( \mathbf{\Omega }^{-1}\right) _{\cdot l}\otimes \left( \mathbf{\Omega }^{-1}\right) _{\cdot l}\right\} , \end{eqnarray*} for $k=1,\ldots ,r$ and $l=1,\ldots ,r$.
As before, $\mathbf{\hat{G}}$ is obtained by replacing expectations with averages. Since the random-effect estimators $\widehat{u_{ik}^{2}}$, $\widehat{v_{ijk}^{2}}$, $\widehat{\mathbf{\eta }_{i}^{\otimes 2}}$ and $\widehat{\mathbf{\xi }_{ij}^{\otimes 2}}$ are by-products of the EM algorithm, no extra computational cost is incurred in computing $\mathbf{\hat{F}}$ and $\mathbf{\hat{G}}$. Finally, we note that since $\hat{h}_{z}$ and $\hat{h}_{w}$ live in the interval $[0,1]$, a transformation like $\arcsin \sqrt{h}$ usually provides a better Normal approximation when $\hat{h}_{z}$ or $\hat{h}_{w}$ are close to the boundaries. The asymptotic variance of $\arcsin \sqrt{\hat{h}}$ is given by $\mathrm{avar}(\hat{h})/\{4\hat{h}(1-\hat{h})\}$. The simplest procedure to derive a confidence interval for $h$ in that case is to construct a standard confidence interval for $\arcsin \sqrt{\hat{h}}$ and then back-transform the endpoints.

\section{\label{sec:Simulations}Simulations}

In this section we study the finite-sample behavior of the new estimators by simulation. The main goals are to determine whether the new method \emph{(i)} represents a substantial improvement over common functional ANOVA in the presence of time variability, \emph{(ii)} is at least comparable to the naive approach of pre-warping the data using an existing warping method, \emph{(iii)} is robust to mild departures from the normality assumptions, and \emph{(iv)} does not overfit, i.e.~is not worse than common functional ANOVA in the absence of time variability. To this end we generated data from ten different models, all balanced, with $I=10$ groups and $J=5$ observations per group. The raw data (\ref{eq:raw_data_model}) was sampled on an equally-spaced time grid of $\nu =20$ points in $[0,1]$, and the noise variance was $\sigma ^{2}=.1^{2}$ in all cases. The mean function was $\mu (t)=.6\varphi (t,.3,.1)+.4\varphi (t,.6,.1)$ in all cases, where $\varphi (t,a,b)$ denotes the $N(a,b^{2})$ density function.
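For illustration, the common mean function and a noisy raw trajectory (before adding the amplitude and warping components specific to each model) can be generated as follows; this is a sketch of the setup just described, not the authors' simulation code:

```python
import numpy as np

def phi(t, a, b):
    """N(a, b^2) density, the building block of the simulation mean."""
    return np.exp(-0.5 * ((t - a) / b) ** 2) / (b * np.sqrt(2 * np.pi))

t = np.linspace(0.0, 1.0, 20)                # nu = 20 equally spaced points
mu = 0.6 * phi(t, 0.3, 0.1) + 0.4 * phi(t, 0.6, 0.1)   # two-peaked mean

rng = np.random.default_rng(0)
y = mu + rng.normal(scale=0.1, size=t.size)  # sigma = .1 measurement noise
```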
The models considered were the following:

\begin{enumerate}
\item One-component models (\ref{eq:KL_alpha}) and (\ref{eq:KL_beta}) with no warping and $\phi _{1}(t)=\psi _{1}(t)=\varphi (t,.3,.1)/1.68$. The variances were $\gamma _{1}=.2^{2}$ and $\lambda _{1}=.1^{2}$, so $h_{z}=.80$.

\item One-component models (\ref{eq:KL_alpha}) and (\ref{eq:KL_beta}) with no warping but with different components for $\alpha (t)$ and $\beta (t)$: $\phi _{1}(t)$ as in Model 1, but $\psi _{1}(t)=\varphi (t,.6,.1)/1.68$. The variances $\gamma _{1}$ and $\lambda _{1}$ were as in Model 1.

\item Same $\alpha (t)$ and $\beta (t)$ as in Model 1, but with a Hermite-spline warping process $w(t)$ with knot $\tau _{0}=.3$ and variances $\Sigma =.2^{2}$ and $\Omega =.1^{2}$, so $h_{w}=.80$.

\item Same $\alpha (t)$ and $\beta (t)$ as in Model 2, with warping $w(t)$ as in Model 3.

\item Same $\alpha (t)$ and $\beta (t)$ as in Model 1, but with a warping process $w(t)$ with knots $\mathbf{\tau }_{0}=(.3,.6)$ and covariance matrices $\mathbf{\Sigma }=.2^{2}\mathbf{I}_{2}$ and $\mathbf{\Omega }=.1^{2}\mathbf{I}_{2}$, so $h_{w}=.80$ as before.

\item Same $\alpha (t)$ and $\beta (t)$ as in Model 2, with warping $w(t)$ as in Model 5.

\item Same as Model 4, but the random factors $U$ and $V$ in (\ref{eq:KL_alpha}) and (\ref{eq:KL_beta}) have Student's $t$ distributions with 4 degrees of freedom and scale parameters $\gamma _{1}^{1/2}=.2$ and $\lambda _{1}^{1/2}=.1$ (so the variance ratio is still $h_{z}=.80$).

\item Same as Model 4, but the random factors $U$ and $V$ in (\ref{eq:KL_alpha}) and (\ref{eq:KL_beta}) have contaminated Normal distributions $(1-\varepsilon )N(0,\gamma _{1})+\varepsilon N(0,k\gamma _{1})$ and $(1-\varepsilon )N(0,\lambda _{1})+\varepsilon N(0,k\lambda _{1})$ respectively, with $\varepsilon =.10$ and $k=5$ (the variance ratio is still $h_{z}=.80$).
\item Two-component models (\ref{eq:KL_alpha}) and (\ref{eq:KL_beta}), with $\phi _{1}(t)=\psi _{1}(t)=\varphi (t,.3,.1)/1.68$, $\phi _{2}(t)=\psi _{2}(t)=(\varphi (t,.6,.1)/1.68-.105\phi _{1}(t))/.99$ (so that each pc is associated with amplitude variation at each peak), variances $\gamma _{1}=.2^{2}$, $\gamma _{2}=.1^{2}$, $\lambda _{1}=.1^{2}$, $\lambda _{2}=.05^{2}$ (so $h_{z}=.80$ as in previous models), and a one-knot warping process as in Model 3.

\item Same $\alpha (t)$ and $\beta (t)$ as in Model 9, with two-knot warping as in Model 5.
\end{enumerate}

For each sample we computed the common (un-warped) ANOVA estimator, the warped ANOVA estimator proposed in this paper, and a naive two-step warped ANOVA estimator. The latter is computed as follows: first the curves are aligned by least-squares registration (i.e.~minimizing $\sum_{i=1}^{n}\Vert x_{i}\circ w_{i}-\mu \Vert ^{2}$ over $w_{i}$s in $\mathcal{W}_{\mathbf{\tau }_{0}}$) and then the common ANOVA estimators are computed on the warped data. We used cubic B-splines with 10 equispaced knots as basis functions for the functional parameters. As warping families we used interpolating Hermite splines with $\tau _{0}=.3$ for models 1--4 and 7--9, and $\tau _{0}=(.3,.6)$ for models 5--6 and 10. As error measures we used the bias, the standard deviation and the root mean squared error, defined as follows: if $f_{0}\in \mathcal{L}^{2}(I)$ and $\hat{f}$ is the estimator, then $\mathrm{bias}(\hat{f})=[\int \{\mathrm{E}\hat{f}(t)-f_{0}(t)\}^{2}dt]^{1/2}$, $\mathrm{sd}(\hat{f})=[\int \mathrm{E}\{\hat{f}(t)-\mathrm{E}\hat{f}(t)\}^{2}dt]^{1/2}$ and $\mathrm{rmse}(\hat{f})=\{\mathrm{bias}^{2}(\hat{f})+\mathrm{sd}^{2}(\hat{f})\}^{1/2}$.
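In a Monte Carlo study these error measures are computed by replacing $\mathrm{E}$ with an average over replications and the integral with a quadrature rule on the sampling grid. A sketch of that computation (trapezoidal rule; function and variable names are ours):

```python
import numpy as np

def error_measures(f_hats, f0, t):
    """Monte Carlo bias, sd and rmse of a functional estimator.

    f_hats: (replications x grid) array of estimated curves;
    f0: true curve evaluated on the grid t.
    """
    def trapz(y):  # trapezoidal approximation of the integral over t
        return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2)

    f_bar = f_hats.mean(axis=0)                        # pointwise E f_hat
    bias2 = trapz((f_bar - f0) ** 2)                   # integrated squared bias
    var = trapz(((f_hats - f_bar) ** 2).mean(axis=0))  # integrated variance
    return np.sqrt(bias2), np.sqrt(var), np.sqrt(bias2 + var)
```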
Some care must be taken with the principal components, because their sign is undefined: to determine the \textquotedblleft right\textquotedblright\ sign, we multiplied $\hat{\phi}_{1}$ and $\hat{\psi}_{1}$ by $\langle \hat{\phi}_{1},\phi _{1}\rangle $ and $\langle \hat{\psi}_{1},\psi _{1}\rangle $, respectively. The estimation errors based on 200 Monte Carlo replications for each model are shown in Table \ref{tab:simulations}. The effect of warping is seen most clearly in the estimators of the principal components $\phi $ and $\psi $. The common ANOVA estimators, as expected, have the largest biases; lacking a specific mechanism to handle time variability, the common ANOVA estimators $\hat{\phi}$ and $\hat{\psi}$ attempt to fit amplitude and phase variability at the same time and become severely distorted compared to the true $\phi $ and $\psi $. The two warped estimators, on the other hand, handle phase variability well. The maximum-likelihood estimator proposed in this paper always has smaller bias than the naive two-step approach; this is to be expected, since the warping step of the two-step estimator minimizes variation about the mean $\mu $ without taking into account amplitude variability or the dependence structure in the data, whereas the maximum likelihood estimator explicitly models $\phi $ and $\psi $. The downside of the new estimators is that, as always, the bias reduction provided by the more complex model is accompanied by a higher variance. However, looking at the total root mean squared errors, we see that the new estimators outperform the naive two-step estimators in almost all cases. This is also true for the non-normal models 7 and 8, so the warped maximum likelihood estimators are robust to mild departures from normality.

\section{\label{sec:Example}Example: beetle growth data}

In this section we study mass growth curves of flour beetles from birth to pupation, from Irwin and Carter (2013). A total of 122 insects are considered.
This is a subset of a larger dataset that includes both siblings and half-siblings, but in order to apply the one-way ANOVA model, which assumes independence between groups, we consider only the half-siblings. (The full data set can be modeled as a nested two-way ANOVA, with the mother factor nested within the father factor.) The insects were sired by 29 different fathers, which will constitute the grouping variable. The number of insects per father varies between 2 and 5, with a median of 4. Part of the raw data is shown in Figure \ref{fig:curves}(a); for better visualization we only plotted half of the sample curves. The mass measures were taken about every 3 days early in the growth curve, and up to once per day late in the growth curve. However, only 18 of the 122 larvae were measured for mass for the first time on the day they hatched; 76 were first measured on the second day, 22 on the third day, 5 on the fourth day, and one was not measured until the seventh day. Therefore the starting points of the curves are unequal. The endpoints are also irregular, because larvae reached pupation at different points between days 16 and 25. However, while the unequal starting points are due to missing data, the unequal endpoints are due to a well-defined biological landmark which is reached at different times. Therefore we rescaled the time grids so that all trajectories end at the median pupation day 19, but we did not align the starting points at day 1. We also took logarithms to stabilize the error variance. The rescaled log-data is shown in Figure \ref{fig:curves}(b). These curves have a noticeable inflection point around day 15. This is because, in response to hormonal changes occurring prior to pupation, larvae stop eating and start wandering in search of a place to pupate, and so lose body mass. Therefore we fitted warped ANOVA models with a single warping knot at $\tau _{0}=15$.
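One simple implementation of this preprocessing, consistent with the description above (a linear rescaling of each larva's grid to the common endpoint, followed by logs of mass; the function name and exact rescaling are our choices, not the paper's):

```python
import numpy as np

def preprocess(t, mass, target_end=19.0):
    """Rescale a larva's time grid so the trajectory ends at the median
    pupation day, and log-transform mass to stabilize the error variance.
    Starting points are left unaligned, as in the text."""
    t = np.asarray(t, dtype=float)
    mass = np.asarray(mass, dtype=float)
    return t * (target_end / t[-1]), np.log(mass)
```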
As spline basis we used cubic B-splines with 7 equispaced knots; this gives a total of 9 basis functions, providing enough flexibility without excessive irregularity. We considered several ANOVA models with equal numbers of components for the main factor and the residual term, ranging from 0 (mean-only model) to 3 components. The resulting parameter estimators were:

\begin{itemize}
\item For $p=q=0$ (mean-only model): $\hat{\Sigma}=.013$, $\hat{\Omega}=.031$, $\hat{\sigma}=.181$.

\item For $p=q=1$: $\hat{\Sigma}=.010$, $\hat{\Omega}=.035$, $\hat{\gamma}=.323$, $\hat{\lambda}=.128$, $\hat{\sigma}=.138$.

\item For $p=q=2$: $\hat{\Sigma}=.010$, $\hat{\Omega}=.051$, $\mathbf{\hat{\gamma}}=(.344,.021)$, $\mathbf{\hat{\lambda}}=(.168,.010)$, $\hat{\sigma}=.124$.

\item For $p=q=3$: $\hat{\Sigma}=.005$, $\hat{\Omega}=.035$, $\mathbf{\hat{\gamma}}=(.426,.022,.005)$, $\mathbf{\hat{\lambda}}=(.186,.028,.012)$, $\hat{\sigma}=.121$.
\end{itemize}

Overall, it seems that a single principal component is sufficient to explain the amplitude variability, so we chose the model with $p=q=1$. The fitted curves $\hat{x}_{ij}(t)$ are shown in Figure \ref{fig:fits}(a) and we see that they provide a good approximation to the data in Figure \ref{fig:curves}(b). The estimated warping functions $\hat{w}_{ij}(t)$ are shown in Figure \ref{fig:fits}(b); the time variability around day 15, which is substantial, is captured well by these curves. The amplitude principal components $\hat{\phi}(t)$ and $\hat{\psi}(t)$ are shown in Figure \ref{fig:fits}(c); to facilitate interpretation of the components we plotted $\hat{\mu}(t)$ together with $\hat{\mu}(t)\pm \hat{\phi}(t)$ in Figure \ref{fig:fits}(d). We see that $\hat{\phi}(t)$ and $\hat{\psi}(t)$, which are very similar, explain variation in overall mass: individuals with positive pc scores tend to have trajectories above the mean and individuals with negative pc scores tend to have trajectories below the mean.
The similarity between $\hat{\phi}(t)$ and $\hat{\psi}(t)$ has a biological explanation: the main factor of the ANOVA model represents the genetic contribution of the father, while the residual term represents the genetic contribution of the mother together with environmental factors (see e.g.~Heckman, 2003, sec.~3). For a population in Hardy--Weinberg equilibrium, the genetic contribution of both parents is identical, so the $\hat{\phi}_{k}(t)$s and the $\hat{\psi}_{k}(t)$s will be similar if the environmental factors are not very strong. Supporting this result is the fact that Irwin and Carter (2013) showed that most of the phenotypic variance was explained by genetic effects in most parts of the growth curve. The amplitude principal components reveal a very interesting biological result that was not apparent in the original analysis of the raw data in Irwin and Carter (2013): very little variation in amplitude exists at the inflection point at day 15 (Figures \ref{fig:fits}(c) and \ref{fig:fits}(d)). This indicates that the beetles have a target peak mass that is reached prior to entry into the wandering phase, which suggests that the target peak mass must be reached before pupation can begin, and that selection for that peak mass (or a related physiological trait) may occur. Interestingly, the warping functions shown in Figure \ref{fig:fits}(b), as well as the original analysis in Irwin and Carter (2013), demonstrate that there is substantial variation in the age at which peak mass is reached. In combination, these two results provide a basis for future experiments investigating the physiological mechanisms, genetic underpinnings and evolutionary implications of size and age of peak mass. The variance ratios for the amplitude and warping components are $\hat{h}_{z}=.72$ and $\hat{h}_{w}=.23$, with respective asymptotic standard deviations $.15$ and $.13$.
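Given a point estimate and its asymptotic standard deviation, the $\arcsin \sqrt{h}$ back-transformed interval described earlier can be computed with the standard library alone; a sketch (names are ours):

```python
import math
from statistics import NormalDist

def ci_ratio(h_hat, sd_h, level=0.90):
    """CI for a variance ratio h in [0,1]: build a standard Normal interval
    for arcsin(sqrt(h)) and back-transform the endpoints.

    sd_h is the asymptotic standard deviation of h_hat; the standard error
    on the transformed scale follows from avar(h)/{4 h (1 - h)}.
    """
    z = NormalDist().inv_cdf(0.5 + level / 2)
    g = math.asin(math.sqrt(h_hat))                    # transformed estimate
    se_g = sd_h / math.sqrt(4 * h_hat * (1 - h_hat))   # delta method
    lo = max(g - z * se_g, 0.0)                        # keep endpoints in range
    hi = min(g + z * se_g, math.pi / 2)
    return math.sin(lo) ** 2, math.sin(hi) ** 2
```

With $\hat{h}_{z}=.72$ and standard deviation $.15$, this gives (up to rounding) the 90\% interval $(.45,.92)$ reported below.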
The bootstrap distributions of $\hat{h}_{z}$ and $\hat{h}_{w}$ are shown in Web Appendix D; the bootstrap standard deviations are $.20$ and $.16$ respectively, not far from the asymptotic values, but the Normal approximation is more accurate for the transformations $\arcsin \sqrt{\hat{h}_{z}}$ and $\arcsin \sqrt{\hat{h}_{w}}$. The 90\% asymptotic confidence intervals obtained by the back-transformation method are $(.45,.92)$ for $h_{z}$ and $(.06,.47)$ for $h_{w}$. Clearly the father effect is strong on the amplitude component, but weak on the warping component. This can be cross-checked by applying the classical ANOVA $F$-test to the estimated random effects $\hat{\theta}_{ij}$s: it yields a $p$-value of $0.058$ for the hypothesis of no father effect (the reasonableness of the normality assumption on the random factors is also discussed in Web Appendix D). The reason the father effect is weak on the warping component is that we removed a lot of time variability by aligning the endpoints at the median pupation day. In fact, the ANOVA $F$-test on the original endpoints yields a $p$-value of $0.020$, indicating that there is a significant father effect on the date of pupation; this is also supported by Irwin and Carter (2013), who demonstrate a highly significant heritability (genetic variance ratio) for date of pupation in the full sample. But once the endpoints are aligned, the time variability that remains, although still substantial, does not have a strong father effect. If we assume $\phi =\psi $, which is not unreasonable given Figure \ref{fig:fits}(c), then (\ref{eq:z_model}) comes down to $z_{ij}(t)=\mu (t)+(U_{i}+V_{ij})\phi (t)$ and the classical ANOVA $F$-test can be applied to $\{\hat{U}_{i}+\hat{V}_{ij}\}$. This gives a very significant $F$-value of $11.03$ with $p$-value $0.00$, confirming that the father effect is very strong on the amplitude variability of the growth curves.
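The one-way $F$ statistic used in these cross-checks is elementary to compute from the estimated random effects; a self-contained sketch (the grouping by father and the input values are whatever estimates one plugs in):

```python
import numpy as np

def oneway_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, for a list of 1-D arrays (one per group)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_w = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_b / (k - 1)) / (ss_w / (n - k))
```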
As indicated at the end of Section \ref{sec:Model}, a more general model with correlations between amplitude and warping components can be set up. We fitted a one-component model with correlations for these data and obtained estimators $\hat{\phi}_{1}$ and $\hat{\psi}_{1}$ very similar to the ones obtained above, and correlations $\widehat{\mathrm{corr}}(U_{i},\mathbf{\eta }_{i})=.18$ and $\widehat{\mathrm{corr}}(V_{ij},\mathbf{\xi }_{ij})=.17$, which do not seem very significant. The statistical significance of these correlations could be studied, for instance, by bootstrap confidence intervals, but for brevity's sake we will not do it here. Finally, it is important to note that for the unaligned raw data, variation in the length of the larval period and the onset of the wandering phase resulted in crossing of family curves late in the larval period (Irwin and Carter, 2013). After application of the warping method, the warped curves are aligned by peak body mass at the onset of the wandering phase, resulting in family curves late in the larval period that maintain relative positions similar to early in the larval period. This realignment undoubtedly will facilitate estimation of genetic components of variance, a proposition that we can test in the future. \section{Supplementary materials} Web Appendices referenced in Sections 3--6 and Matlab programs implementing the new estimators are available with this paper at the Biometrics website on Wiley Online Library. \section*{Acknowledgements} This research was supported by National Science Foundation grant DMS-1006281 to Daniel Gervini, NSF grant EF-0328594 to Patrick A.~Carter and a grant from the National Institute for Mathematical and Biological Synthesis to Patrick A.~Carter. \section*{References} \begin{description} \item Ash, R.B., and Gardner, M.F. (1975). \emph{Topics in Stochastic Processes}. Academic Press. \item Bigot, J., and Gadat, S. (2010). 
A deconvolution approach to estimation of a common shape in a shifted curves model. \emph{The Annals of Statistics} \textbf{38} 2422--2464. \item Bookstein, F. L. (1997). \emph{Morphometric tools for landmark data: geometry and biology}. Cambridge University Press. \item Brumback, L.C., and Lindstrom, M. (2004). Self modeling with flexible, random time transformations. \emph{Biometrics} \textbf{60} 461--470. \item Chen, H., and Wang, Y. (2011). A penalized spline approach to functional mixed effects model analysis. \emph{Biometrics} \textbf{67} 861--870. \item Claeskens, G., Silverman, B. W., and Slaets, L. (2010). A multiresolution approach to time warping achieved by a Bayesian prior--posterior transfer fitting strategy. \emph{Journal of the Royal Statistical Society Series B} \textbf{72} 673--694. \item Di, C. Z., Crainiceanu, C. M., Caffo, B. S., and Punjabi, N. M. (2009). Multilevel functional principal component analysis. \emph{The Annals of Applied Statistics} \textbf{3} 458--488. \item Fritsch, F. N., and Carlson, R. E. (1980). Monotone piecewise cubic interpolation. \emph{SIAM Journal of Numerical Analysis} \textbf{17} 238--246. \item Gervini, D. (2009). Detecting and handling outlying trajectories in irregularly sampled functional datasets. \emph{The Annals of Applied Statistics} \textbf{3} 1758--1775. \item Gervini, D., and Gasser, T. (2004). Self-modelling warping functions. \emph{Journal of the Royal Statistical Society Series B} \textbf{66} 959--971. \item Gervini, D., and Gasser, T. (2005). Nonparametric maximum likelihood estimation of the structural mean of a sample of curves. \emph{Biometrika} \textbf{92} 801--820. \item Gohberg, I., Goldberg, S., and Kaashoek, M. A. (2003). \emph{Basic Classes of Linear Operators}. Basel: Birkh\"{a}user Verlag. \item Gomulkiewicz, R., and Beder, J. H. (1996). The selection gradient of an infinite-dimensional trait. \emph{SIAM Journal of Applied Mathematics} \textbf{56} 509--523. \item Guo, W. (2002). 
Functional mixed effects models. \emph{Biometrics} \textbf{58} 121--128. \item Heckman, N. E. (2003). Functional data analysis in evolutionary biology. In \emph{Recent Advances and Trends in Nonparametric Statistics.} Elsevier. \item Huey, R. B., and Kingsolver, J. G. (1989). Evolution of thermal sensitivity of ectotherm performance. \emph{Trends in Ecology and Evolution} \textbf{4} 131--135. \item Irwin, K.K., and Carter, P.A. (2013). Constraints on the evolution of function-valued traits: a study of growth in \emph{Tribolium castaneum}. \emph{Journal of Evolutionary Biology} (in press). \item Izem, R., and Kingsolver, J. G. (2005). Variation in continuous reaction norms: quantifying directions of biological interest. \emph{The American Naturalist} \textbf{166} 277--289. \item Jupp, D. L. B. (1978). Approximation to data by splines with free knots. \emph{SIAM Journal of Numerical Analysis} \textbf{15} 328--343. \item Kingsolver, J. G., Gomulkiewicz, R., and Carter, P. A. (2002). Variation, selection and evolution of function-valued traits. In \emph{Microevolution Rate, Pattern, Process}, pp.~87--104. Springer. \item Kirkpatrick, M., and Heckman, N. (1989). A quantitative genetic model for growth, shape, reaction norms, and other infinite-dimensional characters. \emph{Journal of Mathematical Biology} \textbf{27} 429--450. \item Kneip, A., and Engel, J. (1995). Model estimation in nonlinear regression under shape invariance. \emph{The Annals of Statistics} \textbf{23} 551--570. \item Kneip, A., Li, X., MacGibbon, K. B., and Ramsay, J. O. (2000). Curve registration by local regression. \emph{Canadian Journal of Statistics} \textbf{28} 19--29. \item Kneip, A., and Ramsay, J. O. (2008). Combining registration and fitting for functional models. \emph{Journal of the American Statistical Association} \textbf{103} 1155--1165. \item Manikkam, M., Guerrero-Bosagna, C., Tracey, R., Haque, M., and Skinner, M. (2012).
Transgenerational actions of environmental compounds on reproductive disease and identification of epigenetic biomarkers of ancestral exposures. \emph{PLoS One} \textbf{7}. \item Meyer, K., and Kirkpatrick, M. (2005). Up hill, down dale: quantitative genetics of curvaceous traits. \emph{Philosophical Transactions of the Royal Society B: Biological Sciences} \textbf{360} 1443--1455. \item Morris, J. S., and Carroll, R. J. (2006). Wavelet-based functional mixed models. \emph{Journal of the Royal Statistical Society Series B} \textbf{68} 179--199. \item M\"{u}ller, H. G. (2008). Functional modeling of longitudinal data. In \emph{Longitudinal data analysis. Handbooks of modern statistical methods}. Chapman \& Hall/CRC, New York, pp.~223--252. \item Ragland, G.J., and Carter, P.A. (2004). Genetic constraints on the evolution of growth and life history traits in the salamander \emph{Ambystoma macrodactylum}. \emph{Heredity} \textbf{92} 569--578. \item Ramsay, J. O., and Li, X. (1998). Curve registration. \emph{Journal of the Royal Statistical Society Series B} \textbf{60} 351--363. \item Rice, J. A. (2004). Functional and longitudinal data analysis: Perspectives on smoothing. \emph{Statistica Sinica} \textbf{14} 631--648. \item Sangalli, L.M., Secchi, P., Vantini, S., and Vitelli, V. (2010). k-mean alignment for curve clustering. \emph{Computational Statistics and Data Analysis} \textbf{54} 1219--1233. \item Skinner, M. K., Manikkam, M., and Guerrero-Bosagna, C. (2010). Epigenetic transgenerational actions of environmental factors in disease etiology. \emph{Trends in Endocrinology and Metabolism} \textbf{21} 214--222. \item Tang, R., and M\"{u}ller, H. G. (2008). Pairwise curve synchronization for functional data. \emph{Biometrika} \textbf{95} 875--889. \item Telesca, D., and Inoue, L. Y. (2008). Bayesian hierarchical curve registration. \emph{Journal of the American Statistical Association} \textbf{103} 328--339. \item Wang, K., and Gasser, T. (1999).
Synchronizing sample curves nonparametrically. \emph{The Annals of Statistics} \textbf{27} 439--460.
\end{description}

\newpage

\bigskip

\begin{table}[h]
\centering
\begin{sideways}
\begin{tabular}{lccccccllllccccccccc}
& \multicolumn{9}{c}{\small Model 1} & & \multicolumn{9}{c}{\small Model 2} \\
& \multicolumn{3}{c}{\small bias} & \multicolumn{3}{c}{\small sd} & \multicolumn{3}{c}{\small rmse} & & \multicolumn{3}{c}{\small bias} & \multicolumn{3}{c}{\small sd} & \multicolumn{3}{c}{\small rmse} \\
& {\small C} & {\small 2s} & {\small ML} & {\small C} & {\small 2s} & {\small ML} & {\small C} & {\small 2s} & {\small ML} & & {\small C} & {\small 2s} & {\small ML} & {\small C} & {\small 2s} & {\small ML} & {\small C} & {\small 2s} & {\small ML} \\
${\small \hat{\mu}}$ & \multicolumn{1}{l}{\small .008} & {\small .016} & \multicolumn{1}{l}{\small .008} & \multicolumn{1}{|l}{\small .055} & {\small .055} & \multicolumn{1}{l}{\small .055} & \multicolumn{1}{|l}{\small .055} & {\small .057} & {\small .056} & & \multicolumn{1}{l}{\small .008} & {\small .017} & \multicolumn{1}{l}{\small .008} & \multicolumn{1}{|l}{\small .051} & {\small .052} & \multicolumn{1}{l}{\small .057} & \multicolumn{1}{|c}{\small .051} & {\small .054} & {\small .058} \\
${\small \hat{\phi}}_{1}$ & \multicolumn{1}{l}{\small .007} & {\small .127} & \multicolumn{1}{l}{\small .048} & \multicolumn{1}{|l}{\small .064} & {\small .067} & \multicolumn{1}{l}{\small .069} & \multicolumn{1}{|l}{\small .064} & {\small .143} & {\small .084} & & \multicolumn{1}{l}{\small .172} & {\small .219} & \multicolumn{1}{l}{\small .130} & \multicolumn{1}{|l}{\small .078} & {\small .077} & \multicolumn{1}{l}{\small .101} & \multicolumn{1}{|c}{\small .189} & {\small .233} & {\small .165} \\
${\small \hat{\psi}}_{1}$ & \multicolumn{1}{l}{\small .014} & {\small .127} & \multicolumn{1}{l}{\small .032} & \multicolumn{1}{|l}{\small .120} & {\small .121} & \multicolumn{1}{l}{\small .116} & \multicolumn{1}{|l}{\small .121} & {\small .175}
& {\small .121} & & \multicolumn{1}{l}{\small .013} & {\small .184} & \multicolumn{1}{l}{\small .050} & \multicolumn{1}{|l}{\small .112} & {\small .111} & \multicolumn{1}{l}{\small .136} & \multicolumn{1}{|c}{\small .113} & {\small .215} & {\small .145} \\
& \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & & & \\
& \multicolumn{9}{c}{\small Model 3} & & \multicolumn{9}{c}{\small Model 4} \\
${\small \hat{\mu}}$ & \multicolumn{1}{l}{\small .073} & {\small .033} & \multicolumn{1}{l}{\small .013} & \multicolumn{1}{|l}{\small .081} & {\small .083} & \multicolumn{1}{l}{\small .088} & \multicolumn{1}{|l}{\small .109} & {\small .089} & {\small .089} & & \multicolumn{1}{r}{\small .071} & \multicolumn{1}{r}{\small .032} & \multicolumn{1}{r}{\small .014} & \multicolumn{1}{|r}{\small .085} & \multicolumn{1}{r}{\small .085} & \multicolumn{1}{r}{\small .093} & \multicolumn{1}{|r}{\small .111} & \multicolumn{1}{r}{\small .091} & \multicolumn{1}{r}{\small .094} \\
${\small \hat{\phi}}_{1}$ & \multicolumn{1}{l}{\small .175} & {\small .129} & \multicolumn{1}{l}{\small .051} & \multicolumn{1}{|l}{\small .530} & {\small .109} & \multicolumn{1}{l}{\small .145} & \multicolumn{1}{|l}{\small .559} & {\small .168} & {\small .154} & & \multicolumn{1}{r}{\small .109} & \multicolumn{1}{r}{\small .213} & \multicolumn{1}{r}{\small .087} & \multicolumn{1}{|r}{\small .422} & \multicolumn{1}{r}{\small .108} & \multicolumn{1}{r}{\small .173} & \multicolumn{1}{|r}{\small .436} & \multicolumn{1}{r}{\small .239} & \multicolumn{1}{r}{\small .194} \\
${\small \hat{\psi}}_{1}$ & \multicolumn{1}{l}{\small .327} & {\small .126} & \multicolumn{1}{l}{\small .027} & \multicolumn{1}{|l}{\small .736} & {\small .145} & \multicolumn{1}{l}{\small .167} & \multicolumn{1}{|l}{\small .806} & {\small .192} & {\small .169} & & \multicolumn{1}{r}{\small 1.095} &
\multicolumn{1}{r}{\small .192} & \multicolumn{1}{r}{\small .057} & \multicolumn{1}{|r}{\small .293} & \multicolumn{1}{r}{\small .135} & \multicolumn{1}{r}{\small .226} & \multicolumn{1}{|r}{\small 1.134} & \multicolumn{1}{r}{\small .235} & \multicolumn{1}{r}{\small .233} \\ & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & & & \\ & \multicolumn{9}{c}{\small Model 5} & & \multicolumn{9}{c}{\small Model 6} \\ ${\small \hat{\mu}}$ & \multicolumn{1}{l}{\small .106} & {\small .041} & \multicolumn{1}{l}{\small .024} & \multicolumn{1}{|l}{\small .090} & {\small .094} & \multicolumn{1}{l}{\small .119} & \multicolumn{1}{|l}{\small .139} & {\small .103} & {\small .122} & & \multicolumn{1}{r}{\small .104} & \multicolumn{1}{r}{\small .048} & \multicolumn{1}{r}{\small .023} & \multicolumn{1}{|r}{\small .088} & \multicolumn{1}{r}{\small .094} & \multicolumn{1}{r}{\small .113} & \multicolumn{1}{|r}{\small .136} & \multicolumn{1}{r}{\small .106} & \multicolumn{1}{r}{\small .116} \\ ${\small \hat{\phi}}_{1}$ & \multicolumn{1}{l}{\small .226} & {\small .233} & \multicolumn{1}{l}{\small .114} & \multicolumn{1}{|l}{\small .603} & {\small .135} & \multicolumn{1}{l}{\small .302} & \multicolumn{1}{|l}{\small .644} & {\small .269} & {\small .323} & & \multicolumn{1}{r}{\small .179} & \multicolumn{1}{r}{\small .282} & \multicolumn{1}{r}{\small .047} & \multicolumn{1}{|r}{\small .521} & \multicolumn{1}{r}{\small .130} & \multicolumn{1}{r}{\small .245} & \multicolumn{1}{|r}{\small .551} & \multicolumn{1}{r}{\small .310} & \multicolumn{1}{r}{\small .250} \\ ${\small \hat{\psi}}_{1}$ & \multicolumn{1}{l}{\small .468} & {\small .230} & \multicolumn{1}{l}{\small .082} & \multicolumn{1}{|l}{\small .838} & {\small .163} & \multicolumn{1}{l}{\small .263} & \multicolumn{1}{|l}{\small .960} & {\small .282} & {\small .275} & & 
\multicolumn{1}{r}{\small 1.013} & \multicolumn{1}{r}{\small .380} & \multicolumn{1}{r}{\small .154} & \multicolumn{1}{|r}{\small .329} & \multicolumn{1}{r}{\small .281} & \multicolumn{1}{r}{\small .362} & \multicolumn{1}{|r}{\small 1.065} & \multicolumn{1}{r}{\small .473} & \multicolumn{1}{r}{\small .393} \\
& & & & & & & & & & & & & & & & & & & \\
& \multicolumn{9}{c}{\small Model 7} & & \multicolumn{9}{c}{\small Model 8} \\
${\small \hat{\mu}}$ & {\small .070} & {\small .042} & {\small .020} & \multicolumn{1}{|c}{\small .093} & {\small .099} & {\small .129} & \multicolumn{1}{|l}{\small .116} & {\small .108} & {\small .130} & & {\small .072} & {\small .036} & {\small .016} & \multicolumn{1}{|c}{\small .083} & {\small .086} & {\small .100} & \multicolumn{1}{|c}{\small .110} & {\small .093} & {\small .101} \\
${\small \hat{\phi}}_{1}$ & {\small .082} & {\small .244} & {\small .080} & \multicolumn{1}{|c}{\small .322} & {\small .122} & {\small .195} & \multicolumn{1}{|l}{\small .332} & {\small .273} & {\small .211} & & {\small .132} & {\small .242} & {\small .096} & \multicolumn{1}{|c}{\small .385} & {\small .111} & {\small .182} & \multicolumn{1}{|c}{\small .407} & {\small .266} & {\small .206} \\
${\small \hat{\psi}}_{1}$ & {\small .874} & {\small .205} & {\small .043} & \multicolumn{1}{|c}{\small .389} & {\small .154} & {\small .221} & \multicolumn{1}{|l}{\small .957} & {\small .257} & {\small .225} & & {\small 1.001} & {\small .199} & {\small .069} & \multicolumn{1}{|c}{\small .354} & {\small .126} & {\small .205} & \multicolumn{1}{|c}{\small 1.061} & {\small .235} & {\small .217} \\
& & & & & & & & & & & & & & & & & & & \\
& \multicolumn{9}{c}{\small Model 9} & & \multicolumn{9}{c}{\small Model 10} \\
${\small \hat{\mu}}$ & \multicolumn{1}{r}{\small .072} & \multicolumn{1}{r}{\small .035} & \multicolumn{1}{r}{\small .019} & \multicolumn{1}{|r}{\small .086} & \multicolumn{1}{r}{\small .084} & \multicolumn{1}{r}{\small .105} & \multicolumn{1}{|r}{\small .112} &
\multicolumn{1}{r}{\small .091} & \multicolumn{1}{r}{\small .106} & & \multicolumn{1}{r}{\small .106} & \multicolumn{1}{r}{\small .059} & \multicolumn{1}{r}{\small .042} & \multicolumn{1}{|r}{\small .097} & \multicolumn{1}{r}{\small .100} & \multicolumn{1}{r}{\small .136} & \multicolumn{1}{|r}{\small .143} & \multicolumn{1}{r}{\small .116} & \multicolumn{1}{r}{\small .143} \\ ${\small \hat{\phi}}_{1}$ & \multicolumn{1}{r}{\small .242} & \multicolumn{1}{r}{\small .206} & \multicolumn{1}{r}{\small .172} & \multicolumn{1}{|r}{\small .633} & \multicolumn{1}{r}{\small .337} & \multicolumn{1}{r}{\small .559} & \multicolumn{1}{|r}{\small .678} & \multicolumn{1}{r}{\small .395} & \multicolumn{1}{r}{\small .585} & & \multicolumn{1}{r}{\small .346} & \multicolumn{1}{r}{\small .262} & \multicolumn{1}{r}{\small .236} & \multicolumn{1}{|r}{\small .663} & \multicolumn{1}{r}{\small .353} & \multicolumn{1}{r}{\small .637} & \multicolumn{1}{|r}{\small .748} & \multicolumn{1}{r}{\small .439} & \multicolumn{1}{r}{\small .679} \\ ${\small \hat{\phi}}_{2}$ & \multicolumn{1}{r}{\small .357} & \multicolumn{1}{r}{\small .602} & \multicolumn{1}{r}{\small .418} & \multicolumn{1}{|r}{\small .571} & \multicolumn{1}{r}{\small .903} & \multicolumn{1}{r}{\small .801} & \multicolumn{1}{|r}{\small .673} & \multicolumn{1}{r}{\small 1.085} & \multicolumn{1}{r}{\small .904} & & \multicolumn{1}{r}{\small .503} & \multicolumn{1}{r}{\small .708} & \multicolumn{1}{r}{\small .469} & \multicolumn{1}{|r}{\small .742} & \multicolumn{1}{r}{\small .825} & \multicolumn{1}{r}{\small .845} & \multicolumn{1}{|r}{\small .896} & \multicolumn{1}{r}{\small 1.087} & \multicolumn{1}{r}{\small .967} \\ ${\small \hat{\psi}}_{1}$ & \multicolumn{1}{r}{\small .383} & \multicolumn{1}{r}{\small .258} & \multicolumn{1}{r}{\small .139} & \multicolumn{1}{|r}{\small .761} & \multicolumn{1}{r}{\small .209} & \multicolumn{1}{r}{\small .427} & \multicolumn{1}{|r}{\small .852} & \multicolumn{1}{r}{\small .332} & 
\multicolumn{1}{r}{\small .449} & & \multicolumn{1}{r}{\small .521} & \multicolumn{1}{r}{\small .312} & \multicolumn{1}{r}{\small .156} & \multicolumn{1}{|r}{\small .835} & \multicolumn{1}{r}{\small .232} & \multicolumn{1}{r}{\small .510} & \multicolumn{1}{|r}{\small .984} & \multicolumn{1}{r}{\small .389} & \multicolumn{1}{r}{\small .533} \\ ${\small \hat{\psi}}_{2}$ & \multicolumn{1}{r}{\small .839} & \multicolumn{1}{r}{\small .321} & \multicolumn{1}{r}{\small .387} & \multicolumn{1}{|r}{\small .888} & \multicolumn{1}{r}{\small .284} & \multicolumn{1}{r}{\small .771} & \multicolumn{1}{|r}{\small 1.222} & \multicolumn{1}{r}{\small .429} & \multicolumn{1}{r}{\small .863} & & \multicolumn{1}{r}{\small .780} & \multicolumn{1}{r}{\small .552} & \multicolumn{1}{r}{\small .448} & \multicolumn{1}{|r}{\small .910} & \multicolumn{1}{r}{\small .471} & \multicolumn{1}{r}{\small .826} & \multicolumn{1}{|r}{\small 1.211} & \multicolumn{1}{r}{\small .726} & \multicolumn{1}{r}{\small .940} \\ \end{tabular} \end{sideways} \caption{Simulation Results. Bias, standard deviation and root mean squared error for common (C), two-step (2s) and maximum likelihood (ML) ANOVA estimators.}\label{tab:simulations} \end{table} \newpage \FRAME{fhFU}{6.4636in}{2.2329in}{0pt}{\Qcb{Flour Beetle Growth Example. (a) Raw mass trajectories; (b) log-trajectories re-scaled to common endpoint.}}{\Qlb{fig:curves}}{curves.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 6.4636in;height 2.2329in;depth 0pt;original-width 8.3385in;original-height 6.1609in;cropleft "0.0663";croptop "1";cropright "1";cropbottom "0";filename 'Curves.eps';file-properties "XNPEU";}} \newpage \FRAME{fhFU}{6.4454in}{5.0246in}{0pt}{\Qcb{Flour Beetle Growth Example. (a) Fitted trajectories using warped ANOVA model; (b) warping functions; (c) principal component of the main factor, $\hat{\protect\phi}(t)$ (solid line), and of the residual term, $\hat{\protect\psi}(t)$ (dashed line); (d) estimated mean $\hat{\protect\mu}(t)$ (solid line), $\hat{\protect\mu}(t)+\hat{\protect\phi}(t)$ (dash-dot line), and $\hat{\protect\mu}(t)-\hat{\protect\phi}(t)$ (dotted line).}}{\Qlb{fig:fits}}{fits.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 6.4454in;height 5.0246in;depth 0pt;original-width 9.4537in;original-height 3.022in;cropleft "0.0728";croptop "1";cropright "1";cropbottom "0";filename 'Fits.eps';file-properties "XNPEU";}} \end{document}
\section{Introduction} \indent \par The operators of fractional differentiation were introduced by Leibniz, Liouville, Riemann, Gr\"{u}nwald and Letnikov \cite{Cresson}. Fractional derivatives and integrals are used in the description of models in mechanics, physics \cite{Cresson}, economics \cite{Caputo} and medicine \cite{broasca}. The fractional variational calculus \cite{Agrawal} is an important instrument in the analysis of such models, in which the Euler-Lagrange equations are non-autonomous fractional differential equations. \indent \par In this paper we introduce the fractional jet fibre bundle of order $k$ on a differentiable manifold as $J^{\alpha k}({\mathbb R}, M)={\mathbb R}\times Osc^{\alpha k}(M)$, $\alpha \in (0,1)$, $k\in {\mathbb N}^{*}$. The fibre bundle $J^{\alpha k}$ is built in a manner similar to the fibre bundle $E^{k}$ of R. Miron \cite{Miron}. Among the geometrical structures defined on $J^{\alpha}({\mathbb R}, M)$ we consider the dynamical fractional connection and the fractional Euler-Lagrange equations associated with a function defined on $J^{\alpha k}({\mathbb R}, M)$. \\ \indent In Section 2 we describe the fractional operators on ${\mathbb R}$ and those of their properties which are used in the paper. In Section 3 we describe the fractional osculator bundle of order $k$. In Section 4 the fractional jet fibre bundle $J^{\alpha}({\mathbb R}, M)$ is defined, the fractional dynamical connection is built, and the fractional Euler-Lagrange equations are established using the notions of fractional extremal value and classical extremal value on $J^{\alpha k}({\mathbb R}, M)$. In Section 5 we consider some examples and applications. \section{Elements of fractional integration and differentiation on $\mathbb{R}$} \indent \par Let $f:[a, b]\to {\mathbb{R}}$ be an integrable function and $\alpha \in (0, 1)$.
The left-sided (right-sided) fractional derivative of $f$ is the function \renewcommand{\arraystretch}{2} \begin{equation} \label{1)} \begin{array}{l} {({}_{-} D_{t}^{\alpha } f)(t)=\frac{1}{\Gamma (1-\alpha )} \frac{d}{dt} \int _{a}^{t}\frac{f(s)-f(a)}{(t-s)^{\alpha } } ds } \\ {({}_{+} D_{t}^{\alpha } f)(t)=\frac{1}{\Gamma (1-\alpha )} \frac{d}{dt} \int _{t}^{b}\frac{f(b)-f(s)}{(s-t)^{\alpha } } ds, } \end{array} \end{equation} where $t\in [a,\; b)$ and $\Gamma$ is Euler's gamma function. \newtheorem{theorem}{Theorem} \newtheorem{proposition}[theorem]{Proposition} \renewcommand{\arraystretch}{1.5} \begin{proposition} \normalfont(see \cite{Cresson}) The operators ${}_{-} D_{t}^{\alpha }$ and ${}_{+} D_{t}^{\alpha }$ have the following properties:\\ 1. If $f_{1}$ and $f_{2}$ are defined on $[a, b]$ and ${}_{-} D_{t}^{\alpha}$, ${}_{+} D_{t}^{\alpha }$ exist, then \begin{equation} \label{2)} {}_{-} D_{t}^{\alpha } (c_{1} f_{1} +c_{2} f_{2} )(t)=c_{1} ({}_{-} D_{t}^{\alpha } f_{1} )(t)+c_{2} ({}_{-} D_{t}^{\alpha } f_{2} )(t). \end{equation} 2. If $\{ \alpha _{n} \} _{n\ge 0}$ is a sequence of real numbers with $\mathop{\lim }\limits_{n\to \infty } \alpha _{n} =1$, then \begin{equation} \label{3)} \mathop{\lim }\limits_{n\to \infty } ({}_{-} D_{t}^{\alpha _{n} } f)(t)=({}_{-} D_{t}^{1} f)(t)=\frac{d}{dt} f(t). \end{equation} 3. a) If $f(t)=c$, $t\in [a, b]$, $c\in {\mathbb{R}}$, then \begin{equation} \label{4)} ({}_{-} D_{t}^{\alpha } f)(t)=0. \end{equation} b) If $f(t)=t^{\gamma }$, $t\in (a, b]$, $\gamma \in {\mathbb{R}}$, then \begin{equation} \label{5)} ({}_{-} D_{t}^{\alpha } f)(t)=\frac{t^{\gamma -\alpha } \Gamma (1+\gamma )}{\Gamma (1+\gamma -\alpha )} . \end{equation} c) If $f(t)=\frac{t^{\alpha } }{\Gamma (1+\alpha )}$, then \begin{equation} \label{6)} ({}_{-} D_{t}^{\alpha } f)(t)=1. \end{equation} 4.
If $f_{1}$ and $f_{2}$ are analytic functions on $[a, b]$, then \begin{equation} \label{7)} ({}_{-} D_{t}^{\alpha } (f_{1} f_{2} ))(t)=\sum _{k=0}^{\infty }\left(\begin{array}{c} {\alpha } \\ {k} \end{array}\right) ({}_{-} D_{t}^{\alpha -k} f_{1} )(t)\frac{d^{k} }{(dt)^{k} } f_{2} (t), \end{equation} where $\frac{d^{k} }{(dt)^{k} } =\frac{d}{dt} \circ \frac{d}{dt} \circ ...\circ \frac{d}{dt}$. \\ 5. The following integration-by-parts formula holds: \begin{equation} \label{8)} \int _{a}^{b}f_{1} (t)({}_{-} D_{t}^{\alpha } f_{2} )(t)dt=-\int _{a}^{b}f_{2} (t)({}_{+} D_{t}^{\alpha } f_{1} )(t)dt. \end{equation} 6. a) If $f:[a, b]\to {\mathbb R}$ admits fractional derivatives of order $a\alpha $, $a\in {\mathbb N}$, then \begin{equation} \label{9)} f(t+h)=E_{\alpha}(h^{\alpha } \, {}_{-} D_{t}^{\alpha } )f(t)=\sum _{a=0}^{\infty }\frac{h^{\alpha a} }{\Gamma (1+\alpha a)} ({}_{-} D_{t}^{\alpha a} f)(t), \end{equation} where $E_{\alpha }$ is the Mittag-Leffler function given by \begin{equation} \label{10)} E_{\alpha } (t)=\sum _{a=0}^{\infty }\frac{t^{\alpha a} }{\Gamma (1+\alpha a)}. \end{equation} b) If $f:[a, b]\to {\mathbb R}$ is analytic and $0\in (a, b)$, then the fractional Maclaurin series is \begin{equation} \label{ZEqnNum520508} f(t)=\sum _{a=0}^{\infty }\frac{t^{\alpha a} }{\Gamma (1+\alpha a)} ({}_{-} D_{t}^{\alpha a} f)(t)\left|_{t=0} \right..
\end{equation} \end{proposition} \indent The physical and geometrical interpretation of the fractional derivative on ${\mathbb R}$ is suggested by that of the Stieltjes integral, because the integral used in the definition of the fractional derivative is a Riemann-Stieltjes integral \cite{Podlubny}.\\ \indent By definition, the left-sided (right-sided) fractional derivative of $f$, of order $\alpha$, where $m=[\alpha]+1$, is the function \begin{equation} \label{eq12} \begin{array}{l} {D_{t}^{\alpha } f(t)=\frac{1}{\Gamma (m-\alpha )} \left(\frac{d}{dt} \right)^{m} \int _{-\infty }^{t}\frac{f(s)-f(0)}{(t-s)^{\alpha } } ds , \; 0\in(-\infty,t) } \\ {{}^{*} D_{t}^{\alpha } f(t)=\frac{1}{\Gamma (m-\alpha )} \left(-\frac{d}{dt} \right)^{m} \int _{t}^{\infty }\frac{f(s)-f(0)}{(s-t)^{\alpha } } ds , \; 0\in(t,\infty).} \end{array} \end{equation} If $\overline{\operatorname{supp} f}\subset [a,b]$, then $D_{t}^{\alpha}f={}_{-} D_{t}^{\alpha } f$ and ${}^{*} D_{t}^{\alpha } f={}_{+} D_{t}^{\alpha }f$.\\ \indent Let us consider the seminorms \begin{equation*} \begin{array}{l} {\left|x\right|_{J_{L}^{\alpha } ({\mathbb R})} =\left\| D_{t}^{\alpha } x\right\| _{L^{2} ({\mathbb R})} \; } \\ {\left|x\right|_{J_{R}^{\alpha } ({\mathbb R})} =\left\| {}^{*} D_{t}^{\alpha } x\right\| _{L^{2} ({\mathbb R})} ,} \end{array} \end{equation*} the norms \begin{equation*} \begin{array}{l} {\left\| x\right\| _{J_{L}^{\alpha } ({\mathbb R})} =\left(\left\| x\right\| _{L^{2} ({\mathbb R})}^{2} +\left|x\right|_{J_{L}^{\alpha } ({\mathbb R})}^{2} \right)^{1/2} \; } \\ {\left\| x\right\| _{J_{R}^{\alpha } ({\mathbb R})} =\left(\left\| x\right\| _{L^{2} ({\mathbb R})}^{2} +\left|x\right|_{J_{R}^{\alpha } ({\mathbb R})}^{2} \right)^{1/2},} \end{array} \end{equation*} and let $J_{_{0}L}^{\alpha } ({\mathbb R})$ and $J_{_{0} R}^{\alpha } ({\mathbb R})$ be the closures of $C_{0}^{\infty} ({\mathbb R})$ with respect to these two norms, respectively.
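The power rule \eqref{5)} can also be checked numerically. The following sketch (an added illustration, not part of the original development) assumes $a=0$ and $f(0)=0$, so that the derivative \eqref{1)} coincides with the Riemann-Liouville one, which is approximated here by a first-order Gr\"{u}nwald-Letnikov sum; the function names are illustrative.

```python
import math

def gl_fractional_derivative(f, t, alpha, n=4000):
    # First-order Grunwald-Letnikov approximation of the left-sided
    # fractional derivative of order alpha on [0, t]; for f(0) = 0 it
    # converges to the Riemann-Liouville value as n -> infinity.
    h = t / n
    total, w = 0.0, 1.0              # w_j = (-1)^j * binom(alpha, j), w_0 = 1
    for j in range(n + 1):
        total += w * f(t - j * h)
        w *= (j - alpha) / (j + 1)   # recursion for the GL weights
    return total / h ** alpha

def mittag_leffler(alpha, t, terms=60):
    # Truncated series (10): E_alpha(t) = sum_a t^(alpha a) / Gamma(1 + alpha a).
    return sum(t ** (alpha * a) / math.gamma(1 + alpha * a) for a in range(terms))

# Property 3b: D^alpha t^gamma = Gamma(1+gamma)/Gamma(1+gamma-alpha) * t^(gamma-alpha)
alpha, gam, t = 0.5, 2.0, 1.0
exact = math.gamma(1 + gam) / math.gamma(1 + gam - alpha) * t ** (gam - alpha)
approx = gl_fractional_derivative(lambda s: s ** gam, t, alpha)
print(exact, approx)                 # both close to 1.5045

# For alpha = 1 the series (10) reduces to sum t^a / a! = e^t.
print(mittag_leffler(1.0, 1.0))      # close to e
```

For $\alpha =1$ the approximation reduces to a backward difference quotient of $f'$, in agreement with property 2 of Proposition 1.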
In \cite{Cresson} it is proved that the operators $D_{t}^{\alpha }$ and ${}^{*} D_{t}^{\alpha}$ satisfy the following properties: \begin{proposition} Let $I\subset {\mathbb R}$ and let $J_{_{0}L }^{\alpha } (I)$ and $J_{_{0} R} ^{\alpha } (I)$ be the closures of $C_{0}^{\infty } (I)$ with respect to the norms from above. For any $x\in J_{_{0}L }^{\beta } (I)$, $0<\alpha <\beta $, the following relation holds: \begin{equation*} D_{t}^{\beta } x(t)=D_{t}^{\alpha } D_{t}^{\beta -\alpha } x(t). \end{equation*} For any $x\in J_{_{0} R }^{\beta } (I)$, $0<\alpha <\beta $, it also holds that \begin{equation*} {}^{*} D_{t}^{\beta } x(t)={}^{*} D_{t}^{\alpha } {}^{*} D_{t}^{\beta -\alpha } x(t). \end{equation*} \end{proposition} In the following we shall consider the fractional derivatives defined above. \section{The fractional osculator bundle of order $k$ on a differentiable manifold} \indent \par Let $\alpha \in (0, 1]$ be fixed and let $M$ be a differentiable manifold of dimension $n$. Two curves $\rho ,\; \sigma :I\to M$, with $\rho(0) = \sigma(0)=x_{0} \in M$, $0\in I$, have a fractional $\alpha$-contact of order $k\in {\mathbb N}^{*}$ at $x_{0} $ if, for any $f\in {\mathcal F}(U)$, $x_{0} \in U$, $U$ a chart on $M$, it holds that \begin{equation} \label{ZEqnNum286695} D_{t}^{\alpha a} (f\circ \rho )\left|_{t=0} \right. =D_{t}^{\alpha a} (f\circ \sigma )\left|_{t=0} \right. \end{equation} where $a=\overline{1,\; k}$. The relation \eqref{ZEqnNum286695} is an equivalence relation. The equivalence class $[\rho ]_{x_{0} }^{\alpha k}$ is called the fractional \textit{k}-osculator space of $M$ at $x_{0}$ and it will be denoted by $Osc_{x_{0} }^{\alpha k} (M)$.
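For instance (an illustrative example added here, using \eqref{4)}--\eqref{6)} and the composition rule of Proposition 2): let $M={\mathbb R}$, $f=\mathrm{id}_{{\mathbb R}}$, and consider the curves \begin{equation*} \rho (t)=\frac{t^{\alpha } }{\Gamma (1+\alpha )} ,\qquad \sigma (t)=\frac{t^{\alpha } }{\Gamma (1+\alpha )} +\frac{t^{2\alpha } }{\Gamma (1+2\alpha )} . \end{equation*} Then $D_{t}^{\alpha } \rho =1$ and $D_{t}^{\alpha } \sigma =1+\frac{t^{\alpha } }{\Gamma (1+\alpha )} $, so $D_{t}^{\alpha } \rho \left|_{t=0} \right. =D_{t}^{\alpha } \sigma \left|_{t=0} \right. =1$, while $D_{t}^{2\alpha } \rho \left|_{t=0} \right. =0\ne 1=D_{t}^{2\alpha } \sigma \left|_{t=0} \right. $. Hence $\rho $ and $\sigma $ have a fractional contact of order $1$ at $0$, but not of order $2$.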
If the curve $\rho :I\to M$ is given by $x^{i} =x^{i} (t)$, $t\in I$, $i=\overline{1,\; n}$, then, considering the formula \eqref{ZEqnNum520508}, the class $[\rho ]_{x_{0} }^{\alpha k}$ may be written as \begin{equation} \label{14)} x^{i} (t)=x^{i} (0)+\frac{t^{\alpha } }{\Gamma (1+\alpha )} D_{t}^{\alpha } x^{i} (t)\left|_{t=0} \right. +...+\frac{t^{\alpha k} }{\Gamma (1+\alpha k)} D_{t}^{\alpha k} x^{i} (t)\left|_{t=0} \right. , \end{equation} where $t\in (-\varepsilon , \varepsilon )$. We shall use the notation \begin{equation} \label{ZEqnNum283756} x^{i} (0)=x^{i},\quad y^{i(\alpha a)} =\frac{1}{\Gamma (1+\alpha a)} D_{t}^{\alpha a} x^{i} (t)\left|_{t=0} \right. , \end{equation} for $i=\overline{1, n}$ and $a=\overline{1, k}$.\\ By definition, the fractional osculator bundle of order $k$ is the fibre bundle $(Osc^{\alpha k}(M), \pi _{0}^{\alpha k}, M)$, where $Osc^{\alpha k} (M)=\bigcup _{x_{0} \in M}Osc_{x_{0} }^{\alpha k}(M)$ and \\ $\pi _{0}^{\alpha k}:Osc^{\alpha k} (M)\to M$ is defined by $\pi_{0} ^{\alpha k} ([\rho ]_{x_{0} }^{\alpha k} )=x_{0}$, $(\forall )[\rho ]_{x_{0} }^{\alpha k} \in Osc^{\alpha k} (M)$. \\ \indent For $f\in {\mathcal F}(U)$, the fractional derivative of order $\alpha $, $\alpha \in (0,\; 1)$, with respect to the variable $x^{i} $ is defined by \begin{equation} \label{16)} \begin{array}{l} {(D_{x^{i} }^{\alpha } f)(x)=} \\ {\frac{1}{\Gamma (1-\alpha )} \frac{\partial }{\partial x^{i} } \int _{a^{i} }^{x^{i} }\frac{f(x^{1} ,...,x^{i-1} ,s,x^{i+1} ,...,x^{n} )-f(x^{1} ,...,x^{i-1} ,a^{i} ,x^{i+1} ,...,x^{n} )}{(x^{i} -s)^{\alpha } } ds,} \end{array} \end{equation} where $x^{i}$ are the coordinate functions on $U$, $\frac{\partial }{\partial x^{i} } $, $i=\overline{1, n}$, is the canonical base of the vector fields on $U$ and $U_{ab} =\{ x\in U,\; a^{i} \le x^{i} \le b^{i} ,\; i=\overline{1,n}\} \subset U$.
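As a simple added illustration of the coordinates \eqref{ZEqnNum283756}: for the curve $x^{i} (t)=x_{0}^{i} +\frac{t^{\alpha } }{\Gamma (1+\alpha )} v^{i} $ with constant $v^{i} \in {\mathbb R}$, formulas \eqref{4)} and \eqref{6)}, together with the composition rule of Proposition 2, give $D_{t}^{\alpha } x^{i} =v^{i} $ and $D_{t}^{\alpha a} x^{i} =0$ for $a\ge 2$, hence \begin{equation*} x^{i} (0)=x_{0}^{i} ,\qquad y^{i(\alpha )} =\frac{v^{i} }{\Gamma (1+\alpha )} ,\qquad y^{i(\alpha a)} =0,\; a=\overline{2,\; k}, \end{equation*} and the expansion \eqref{14)} terminates after its first-order term.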
\\ Let $U,\; U'\subset M$ be two charts on $M$, $U\cap U'\ne \emptyset$, and consider the change of variables \begin{equation} \label{ZEqnNum197194} \bar{x}^{i} =\bar{x}^{i} (x^{1} ,...,x^{n} ) \end{equation} with $\det \left(\frac{\partial \bar{x}^{i} }{\partial x^{j} } \right)\ne 0$. Let $\left\{dx^{i} \right\}_{i=\overline{1,\; n}}$ be the canonical base of 1-forms of ${\mathcal D}^{1} (U)$ and let us define the 1-forms $d(x^{i} )^{\alpha } =\alpha (x^{i} )^{\alpha -1} dx^{i} $, $i=\overline{1, n}$. The exterior differential $d^{\alpha } :{\mathcal F}(U\cap U')\to {\mathcal D}^{1} (U\cap U')$ is defined by \begin{equation} \label{ZEqnNum198901} d^{\alpha } =d(x^{j} )^{\alpha }D_{x^{j} }^{\alpha }=d(\bar{x}^{j} )^{\alpha} D_{\bar{x}^{j} }^{\alpha } . \end{equation} Using \eqref{ZEqnNum198901} and the property $D_{x^{i} }^{\alpha } \left(\frac{(x^{i} )^{\alpha } }{\Gamma (1+\alpha )} \right)=1$, it follows that \begin{equation} \label{ZEqnNum965179} d(x^{j} )^{\alpha } =\frac{1}{\Gamma (1+\alpha )} D_{\bar{x}^{i} }^{\alpha } (x^{j} )^{\alpha } d(\bar{x}^{i} )^{\alpha }. \end{equation} Using the notation \begin{equation} \label{20)} \mathop{J_{i}^{j} }\limits^{\alpha } (x,\; \bar{x})=\frac{1}{\Gamma (1+\alpha )} D_{\bar{x}^{i} }^{\alpha } (x^{j} )^{\alpha }, \end{equation} from \eqref{ZEqnNum965179} we get \begin{equation} \label{ZEqnNum547724} d(x^{j} )^{\alpha } =\mathop{J_{i}^{j} }\limits^{\alpha } (x,\; \bar{x})d(\bar{x}^{i} )^{\alpha }. \end{equation} From \eqref{ZEqnNum547724} it follows that \begin{equation} \label{22)} \mathop{J_{i}^{j} }\limits^{\alpha } (x,\; \bar{x})\mathop{J_{h}^{i} }\limits^{\alpha } (\bar{x},\; x)=\delta _{h}^{j} . \end{equation} Consider $x^{i} =x^{i} (t)$ and $\bar{x}^{i} (t)=\bar{x}^{i} (x(t))$, $i=\overline{1, n}$, $t\in I$.
Applying the operator $D_{t}^{\alpha }$ we get \begin{equation} \label{23)} (D_{t}^{\alpha } \bar{x}^{i} )(t)=D_{x^{j} }^{\alpha } \bar{x}^{i} (x)(D_{t}^{\alpha } x^{j} )(t)=\mathop{J_{j}^{i} }\limits^{\alpha } (\bar{x},\; x)(D_{t}^{\alpha } x^{j} )(t). \end{equation} Considering the notation from \eqref{ZEqnNum283756} we have \begin{equation} \label{ZEqnNum957067} \bar{y}^{i(\alpha )} =\mathop{J_{j}^{i} }\limits^{\alpha } (\bar{x},\; x)y^{j(\alpha )} . \end{equation} Also, from \eqref{ZEqnNum283756} we deduce \begin{equation} \label{25)} D_{t}^{\alpha } y^{i(\alpha a)} =\frac{\Gamma (\alpha a)}{\Gamma (\alpha (a-1))} y^{i(\alpha a)} , \end{equation} where $i=\overline{1, n}$. Applying the operator $D_{t}^{\alpha }$ in the relation \eqref{ZEqnNum957067} we find \begin{equation} \label{ZEqnNum440443} \begin{array}{l} {\frac{\Gamma (\alpha (a-1))}{\Gamma (\alpha )} \bar{y}^{i(\alpha a)} =\Gamma (1+\alpha )\mathop{J_{j}^{i} }\limits^{\alpha } (\bar{y}^{\alpha (a-1)} ,\; x)y^{j(\alpha )} +} \\ {\frac{\Gamma (2\alpha )}{\Gamma (\alpha )} \mathop{J_{j}^{i} }\limits^{\alpha } (y^{(\alpha (a-1))} ,\; y^{\alpha } )y^{j(2\alpha )} +...+\frac{\Gamma (2\alpha )}{\Gamma (\alpha )} \mathop{J_{j}^{i} }\limits^{\alpha } (\bar{y}^{\alpha (a-1)} ,\; y^{\alpha b} )y^{j((b+1)\alpha )} +} \\ {...+\frac{\Gamma (\alpha (a-1))}{\Gamma (\alpha )} y^{i(\alpha a)} ,} \end{array} \end{equation} where $a=\overline{1, k}$. \begin{proposition} \normalfont(see \cite{Albu}, \cite{Cottrill}) \\ a) The coordinate transformations on $Osc^{(\alpha k)}(M)$, \\ $(x^{i} ,y^{i(\alpha )} ,...,y^{i(\alpha k)} )\to (\bar{x}^{i} ,\bar{y}^{i(\alpha )} ,...,\bar{y}^{i(\alpha k)} )$, are given by the formulas \eqref{ZEqnNum197194} and \eqref{ZEqnNum440443}.
\\ b) The operators $D_{x^{i} }^{\alpha }$ and the 1-forms $d(x^{i} )^{\alpha } $, $i=\overline{1, n}$, transform by the formulas \begin{equation} \label{27)} \begin{array}{l} {D_{\bar{x}^{i} }^{\alpha } =\mathop{J_{i}^{j} }\limits^{\alpha } (x,\; \bar{x})D_{x^{j} }^{\alpha } } \\ {d(\bar{x}^{i})^{\alpha} =\mathop{J_{j}^{i} }\limits^{\alpha } (\bar{x},\; x)d(x^{j} )^{\alpha } .} \end{array} \end{equation} \end{proposition} \section{The fractional jet bundle of order $k$\\ on a differentiable manifold; geometrical objects } \indent \par By definition, the $k$-order fractional jet bundle is the space $J^{\alpha k} ({\mathbb R},\; M)={\mathbb R}\times Osc^{k\alpha } (M).$ A system of local coordinates on $J^{\alpha k} ({\mathbb R},\; M)$ will be denoted by $(t,x,y^{(\alpha )} ,y^{(2\alpha )} ,...,y^{(k\alpha )} )$. Consider the projection $\pi _{0}^{\alpha k} :J^{\alpha k}({\mathbb R},\; M)\to M$ defined by \begin{equation} \label{28)} \begin{array}{l} {\pi _{0}^{\alpha k} (t,x,y^{(\alpha )} ,...,y^{(\alpha k)} )=x.} \end{array} \end{equation} Let $U,\; U'\subset M$ be two charts on $M$ with $U\cap U'\ne \emptyset $, and let $(\pi _{0}^{\alpha } )^{-1} (U),\; (\pi _{0}^{\alpha } )^{-1} (U')\subset J^{\alpha } ({\mathbb R},\; M)$ be the corresponding charts on $J^{\alpha } ({\mathbb R},\; M)$, with coordinates $(x^{i} )$, $(\bar{x}^{i} )$ and $(t,x^{i} ,y^{i(\alpha )} )$, $(t,\bar{x}^{i} ,\bar{y}^{i(\alpha )} )$, respectively.
The transformations of coordinates are given by \begin{equation} \label{ZEqnNum534725} \begin{array}{l} {\bar{x}^{i} =\bar{x}^{i} (x^{1} ,...,x^{n} )} \\ {\bar{y}^{i(\alpha )} =\mathop{J_{j}^{i} }\limits^{\alpha } (\bar{x},\; x)y^{j(\alpha )} .} \end{array} \end{equation} \indent Consider the functions $(t)^{\alpha } $, $(x^{i} )^{\alpha }$, $(y^{i(\alpha )} )^{\alpha } \in {\mathcal F}((\pi_{0} ^{\alpha } )^{-1} (U))$, the 1-forms $\frac{1}{\Gamma (1+\alpha )} d(t)^{\alpha } $, $\frac{1}{\Gamma (1+\alpha )} d(x^{i} )^{\alpha } $, $\frac{1}{\Gamma (1+\alpha )} d(y^{i(\alpha )} )^{\alpha } \in {\mathcal D}^{1} ((\pi_{0} ^{\alpha } )^{-1} (U))$ and the operators $D_{t}^{\alpha } $, $D_{x^{i} }^{\alpha } $, $D_{y^{i(\alpha )} }^{\alpha } $ on $(\pi_{0} ^{\alpha } )^{-1} (U)$, $i=\overline{1, n}$. The following relations hold: \begin{equation} \label{ZEqnNum227932} \begin{array}{l} {D_{t}^{\alpha } (\frac{1}{\Gamma (1+\alpha )} t^{\alpha } )=1,\; \; D_{x^{i} }^{\alpha } (\frac{1}{\Gamma (1+\alpha )} (x^{j} )^{\alpha } )=\delta _{i}^{j} ,} \\ {D_{y^{i(\alpha )} }^{\alpha } (\frac{1}{\Gamma (1+\alpha )} (y^{j(\alpha )} )^{\alpha } )=\delta _{i}^{j} ,\; \; \frac{1}{\Gamma (1+\alpha )} d(t^{\alpha } )(D_{t}^{\alpha } )=1,} \\ {\frac{1}{\Gamma (1+\alpha )} d(x^{i} )^{\alpha } (D_{x^{j} }^{\alpha } )=\delta _{j}^{i} ,\; \; \frac{1}{\Gamma (1+\alpha )} d(y^{i(\alpha )} )^{\alpha } (D_{y^{j(\alpha )} }^{\alpha } )=\delta _{j}^{i} .} \end{array} \end{equation} On $J^{\alpha } ({\mathbb R},\; M)$ we may define the canonical structures \begin{equation} \label{ZEqnNum331309} \begin{array}{l} {\mathop{\theta_{1} }\limits^{\alpha } =d(t^{\alpha } )\otimes (D_{t}^{\alpha } +y^{i(\alpha )} D_{x^{i} }^{\alpha } )} \\ {\mathop{\theta_{2} }\limits^{\alpha } =\mathop{\theta ^{i} }\limits^{\alpha } \otimes D_{x^{i} }^{\alpha } ,\; \; \mathop{\theta ^{i} }\limits^{\alpha } =\frac{1}{\Gamma (1+\alpha )} (d(x^{i} )^{\alpha } -y^{i(\alpha )} d(t)^{\alpha } )} \\ {\mathop{S}\limits^{\alpha }
=\mathop{\theta ^{i} }\limits^{\alpha } \otimes D_{y^{i(\alpha )} }^{\alpha } } \\ {\mathop{V_{i} }\limits^{\alpha } =D_{y^{i(\alpha )} }^{\alpha } .} \end{array} \end{equation} Using \eqref{ZEqnNum534725} it is easy to show that the structures \eqref{ZEqnNum331309} have geometrical character. The space of the operators generated by the operators $\{ D_{t}^{\alpha } ,D_{x^{i} }^{\alpha } ,D_{y^{i(\alpha )} }^{\alpha } \} $, $i=\overline{1,\; n}$, will be denoted by $\chi ^{\alpha } ((\pi _{0}^{\alpha } )^{-1} (U))$. For $\alpha \to 1$ the space of these operators represents the space of the vector fields on $\pi _{0}^{-1} (U)$. \\ \indent A vector field $\mathop{\Gamma }\limits^{\alpha } \in \chi ^{\alpha } ((\pi _{0}^{\alpha } )^{-1} (U))$ is called \textit{FODE} (fractional ordinary differential equation) iff \begin{equation} \label{32)} \begin{array}{l} {d(t)^{\alpha } (\mathop{\Gamma }\limits^{\alpha } )=1} \\ {\mathop{\theta ^{i} }\limits^{\alpha } (\mathop{\Gamma }\limits^{\alpha } )=0,} \end{array} \end{equation} for $i=\overline{1, n}$. In local coordinates \textit{FODE} is given by \begin{equation} \label{33)} \mathop{\Gamma }\limits^{\alpha } =D_{t}^{\alpha } +y^{i(\alpha )} D_{x^{i} }^{\alpha } +F^{i} D_{y^{i(\alpha )} }^{\alpha } , \end{equation} where $F^{i} \in C^{\infty } ((\pi _{0}^{\alpha } )^{-1} (U))$, $i=\overline{1, n}$. The integral curves of the field \textit{FODE} are the solutions of the fractional differential equation (\textit{EDF}) \begin{equation} \label{34)} D_{t}^{2\alpha } x^{i} (t)=F^{i} (t,x(t),D_{t}^{\alpha } x(t)), \qquad i=\overline{1, n}. 
\end{equation} \indent The fractional dynamical connection on $J^{\alpha } ({\mathbb R},\; M)$ is defined by a fractional tensor field $\mathop{H}\limits^{\alpha }$ of type $(1,1)$ which satisfies the conditions \begin{equation} \label{35)} \begin{array}{l} {\mathop{\theta _{1} }\limits^{\alpha } \circ \mathop{H}\limits^{\alpha } =0\; } \\ {\mathop{\theta _{2} }\limits^{\alpha } \circ \mathop{H}\limits^{\alpha } =\mathop{\theta _{2} }\limits^{\alpha } \; } \\ {\mathop{H}\limits^{\alpha }\left|_{\mathop{V}\limits^{\alpha } } \right. =-id\left|_{\mathop{V}\limits^{\alpha } } \right. ,\; } \end{array} \end{equation} where $\mathop{V}\limits^{\alpha }$ is formed by operators generated by $\{ D_{y^{i(\alpha )} }^{\alpha } \} _{i=\overline{1,n}}$. In the chart $(\pi _{0}^{\alpha } )^{-1} (U)$ the fractional tensor field $\mathop{H}\limits^{\alpha }$ has the expression \begin{equation} \label{36)} \begin{array}{l} {\mathop{H}\limits^{\alpha } =(\mathop{H}\limits^{1} d(t)^{\alpha } +\mathop{H_{i} }\limits^{2} d(x^{i} )^{\alpha } +\mathop{H_{i} }\limits^{3} d(y^{i(\alpha )} )^{\alpha } )\otimes D_{t}^{\alpha } +} \\ {(\mathop{H^{i} }\limits^{4} d(t)^{\alpha } +\mathop{H_{j}^{i} }\limits^{5} d(x^{j} )^{\alpha } +\mathop{H_{j}^{i} }\limits^{6} d(y^{j(\alpha )} )^{\alpha } )\otimes D_{x^{i} }^{\alpha } +} \\ {(\mathop{H^{i} }\limits^{7} d(t)^{\alpha } +\mathop{H_{j}^{i} }\limits^{8} d(x^{j} )^{\alpha } +\mathop{H_{j}^{i} }\limits^{9} d(y^{j(\alpha )} )^{\alpha } )\otimes D_{y^{i(\alpha )} }^{\alpha } .} \end{array} \end{equation} The tensor field $\mathop{H}\limits^{\alpha }$ has a geometrical character, which follows by using the relations \eqref{ZEqnNum534725}, and it is called a $d^{\alpha }$-tensor field.
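A minimal added example for the field \textit{FODE} of \eqref{33)}: taking $F^{i} =0$, the field reduces to $\mathop{\Gamma }\limits^{\alpha } =D_{t}^{\alpha } +y^{i(\alpha )} D_{x^{i} }^{\alpha } $ and the equation \eqref{34)} becomes $D_{t}^{2\alpha } x^{i} (t)=0$. By \eqref{4)}, \eqref{6)} and the composition rule of Proposition 2, its solutions are the ``fractional lines'' \begin{equation*} x^{i} (t)=x^{i} (0)+\frac{t^{\alpha } }{\Gamma (1+\alpha )} D_{t}^{\alpha } x^{i} (t)\left|_{t=0} \right. , \end{equation*} which, by \eqref{3)}, reduce to the classical straight lines for $\alpha \to 1$.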
Using the relations \eqref{ZEqnNum227932} and \eqref{ZEqnNum331309} we get \begin{proposition} a) The fractional dynamical connection $\mathop{H}\limits^{\alpha }$, in the chart $(\pi _{0}^{\alpha } )^{-1} (U)$, is given by \begin{equation} \label{37)} \begin{array}{l} {\mathop{H}\limits^{\alpha } =\frac{1}{\Gamma (1+\alpha )} [(-y^{i(\alpha )} D_{x^{i} }^{\alpha } +H^{i} D_{y^{i(\alpha )} }^{\alpha } )\otimes d(t)^{\alpha } +} \\ {(D_{x^{i} }^{\alpha } +H_{i}^{j} D_{y^{j(\alpha )} }^{\alpha } )\otimes d(x^{i} )^{\alpha } -D_{y^{i(\alpha )} }^{\alpha }\otimes d(y^{i(\alpha )} )^{\alpha } ].} \end{array} \end{equation} b) The fractional dynamical connection $\mathop{H}\limits^{\alpha } $ defines a $f(3,-1)$ fractional structure on $J^{\alpha } ({\mathbb R},\; M)$, i.e., $\left(\mathop{H}\limits^{\alpha } \right)^{3} =\mathop{H}\limits^{\alpha } $. \\ c) The fractional tensor fields $\mathop{l}\limits^{\alpha } $ and $\mathop{m}\limits^{\alpha } $ which are defined by \begin{equation} \label{38)} \begin{array}{l} {\mathop{l}\limits^{\alpha } =\mathop{H}\limits^{\alpha } \circ \mathop{H}\limits^{\alpha } } \\ {\mathop{m}\limits^{\alpha } =-\mathop{H}\limits^{\alpha } \circ \mathop{H}\limits^{\alpha } +I,} \end{array} \end{equation} where $I$ is the identity map, satisfy the relations \begin{equation} \label{39)} \begin{array}{l} {\mathop{l}\limits^{\alpha } \circ \mathop{l}\limits^{\alpha } =\mathop{l}\limits^{\alpha } ,\; \; \mathop{m}\limits^{\alpha } \circ \mathop{m}\limits^{\alpha } =\mathop{m}\limits^{\alpha } ,\; \; \mathop{l}\limits^{\alpha } \circ \mathop{m}\limits^{\alpha } =\mathop{m}\limits^{\alpha } \circ \mathop{l}\limits^{\alpha } =0,\; \; \mathop{l}\limits^{\alpha } +\mathop{m}\limits^{\alpha } =I} \\ {\mathop{l}\limits^{\alpha } (D_{t}^{\alpha } )=-y^{i(\alpha )} D_{x^{i} }^{\alpha } -(y^{i(\alpha )} \mathop{H_{i}^{j} }\limits^{\alpha } +\mathop{H^{j} }\limits^{\alpha } )D_{y^{j(\alpha )} }^{\alpha } } \\ {\mathop{l}\limits^{\alpha } (D_{x^{i} }^{\alpha } )=D_{x^{i} }^{\alpha } ,\; \; \mathop{l}\limits^{\alpha } (D_{y^{i(\alpha )} }^{\alpha } )=D_{y^{i(\alpha )} }^{\alpha } } \\ {\mathop{m}\limits^{\alpha } (D_{t}^{\alpha } )=D_{t}^{\alpha } +y^{i(\alpha )} D_{x^{i} }^{\alpha } +(y^{i(\alpha )} \mathop{H_{i}^{j} }\limits^{\alpha } +\mathop{H^{j} }\limits^{\alpha } )D_{y^{j(\alpha )} }^{\alpha } } \\ {\mathop{m}\limits^{\alpha } (D_{x^{i} }^{\alpha } )=0,\; \; \mathop{m}\limits^{\alpha } (D_{y^{i(\alpha )} }^{\alpha } )=0.} \end{array} \end{equation} d) The fractional vector field $\mathop{\Gamma }\limits^{\alpha } \in \chi ^{\alpha } (J^{\alpha } ({\mathbb R},\; M))$ given by \begin{equation} \label{40)} \mathop{\Gamma }\limits^{\alpha } =\mathop{m}\limits^{\alpha}(D_{t}^{\alpha } )=D_{t}^{\alpha } +y^{i(\alpha )} D_{x^{i} }^{\alpha } +(y^{i(\alpha )} \mathop{H_{i}^{j} }\limits^{\alpha } +\mathop{H^{j} }\limits^{\alpha } )D_{y^{j(\alpha )} }^{\alpha } \end{equation} defines a field \textit{FODE} associated to the fractional dynamical connection. The integral curves are the solutions of the \textit{EDF} \begin{equation} \label{41)} D_{t}^{2\alpha } x^{j} (t)=D_{t}^{\alpha } x^{i} (t)\mathop{H_{i}^{j} }\limits^{\alpha } +\Gamma (1+\alpha )\mathop{H^{j} }\limits^{\alpha } \end{equation} where $\mathop{H_{i}^{j} }\limits^{\alpha } $ and $\mathop{H^{j} }\limits^{\alpha } $ are functions of $(t,\; x(t),\; y^{(\alpha) } (t))$. \end{proposition} Let $L\in C^{\infty } (J^{\alpha } ({\mathbb R},\; M))$ be a fractional Lagrange function. By definition, the Cartan fractional 1-form is the 1-form $\mathop{\theta _{L} }\limits^{\alpha }$ given by \begin{equation} \label{42)} \mathop{\theta _{L} }\limits^{\alpha } =Ld(t)^{\alpha } +\mathop{S}\limits^{\alpha } (L).
\end{equation} The Cartan fractional 2-form is the 2-form $\mathop{\omega _{L} }\limits^{\alpha }$ given by \begin{equation} \label{43)} \mathop{\omega _{L} }\limits^{\alpha } =d^{\alpha } \mathop{\theta _{L} }\limits^{\alpha } \end{equation} where $d^{\alpha } $ is the fractional exterior differential \begin{equation} \label{44)} d^{\alpha } =d(t)^{\alpha } D_{t}^{\alpha } +d(x^{i} )^{\alpha } D_{x^{i} }^{\alpha } +d(y^{i(\alpha )} )^{\alpha } D_{y^{i(\alpha )} }^{\alpha } . \end{equation} In the chart $(\pi _{0}^{\alpha } )^{-1} (U)$, $\mathop{\theta _{L} }\limits^{\alpha } $ and $\mathop{\omega _{L} }\limits^{\alpha } $ are given by \begin{equation} \label{45)} \begin{array}{l} {\mathop{\theta _{L} }\limits^{\alpha } =(L-\frac{1}{\Gamma (1+\alpha )} y^{i(\alpha )} D_{y^{i(\alpha )} }^{\alpha } (L))d(t)^{\alpha } +\frac{1}{\Gamma (1+\alpha )} D_{y^{i(\alpha )} }^{\alpha } (L)d(x^{i} )^{\alpha } } \\ {\mathop{\omega _{L} }\limits^{\alpha } =A_{i} d(t)^{\alpha } \wedge d(x^{i} )^{\alpha } +B_{i} d(t^{\alpha } )\wedge d(y^{i(\alpha )} )^{\alpha } +} \\ {\; \; \; \; \; \; \; A_{ij} d(x^{i} )^{\alpha } \wedge d(x^{j} )^{\alpha } +B_{ij} d(x^{i} )^{\alpha } \wedge d(y^{j(\alpha )} )^{\alpha } ,} \end{array} \end{equation} where \begin{equation} \label{46)} \begin{array}{l} {A_{i} =\frac{1}{\Gamma (1+\alpha )} D_{t}^{\alpha } D_{y^{i(\alpha )} }^{\alpha } (L)+\frac{1}{\Gamma (1+\alpha )} y^{j(\alpha )} D_{x^{i} }^{\alpha } D_{y^{j(\alpha )} }^{\alpha } (L)-D_{x^{i} }^{\alpha } (L)} \\ {B_{i} =\frac{1}{\Gamma (1+\alpha )} D_{y^{i(\alpha )} }^{\alpha } (y^{j(\alpha )} D_{y^{j(\alpha )} }^{\alpha } (L))} \\ {A_{ij} =D_{x^{i} }^{\alpha } D_{y^{j(\alpha )} }^{\alpha } (L),\; \; B_{ij} =-D_{y^{j(\alpha )} }^{\alpha } D_{y^{i(\alpha )} }^{\alpha } (L).} \end{array} \end{equation} \begin{proposition} If $L$ is regular (i.e., $\det \left(\frac{\partial ^{2} L}{\partial y^{i(\alpha )} \partial y^{j(\alpha )} } \right)\ne 0$), then there exists a fractional field \textit{FODE}
$\mathop{\Gamma _{L} }\limits^{\alpha }$ such that $i_{\mathop{\Gamma _{L} }\limits^{\alpha } } \mathop{\omega _{L} }\limits^{\alpha } =0$. In the chart $(\pi _{0}^{\alpha } )^{-1} (U)$ we have \begin{equation} \label{47)} \mathop{\Gamma _{L} }\limits^{\alpha } =D_{t}^{\alpha } +y^{i(\alpha )} D_{x^{i} }^{\alpha } +\mathop{M^{i} }\limits^{\alpha } D_{y^{i(\alpha )} }^{\alpha } , \end{equation} where \begin{equation} \label{48)} \begin{array}{l} {\mathop{M^{i} }\limits^{\alpha } =g^{ik} (D_{x^{k} }^{\alpha } (L)-d_{t}^{\alpha } (D_{y^{k(\alpha )} }^{\alpha } L))} \\ {d_{t}^{\alpha } =D_{t}^{\alpha } +y^{i(\alpha )} D_{x^{i} }^{\alpha } } \\ {(g^{ik} )=(D_{y^{i(\alpha )} }^{\alpha } D_{y^{k(\alpha )} }^{\alpha } (L))^{-1} .} \end{array} \end{equation} \end{proposition} An important structure on $J^{\alpha } ({\mathbb R},\; M)$ is described by the fractional Euler-Lagrange equations. Let $c:t\in [0,1]\to (x^{i} (t))\in M$ be a parameterized curve, such that $\mathrm{Im}\, c\subset U\subset M$. The extension of the curve $c$ to $J^{\alpha } ({\mathbb R},\; M)$ is the curve $c^{\alpha } :t\in [0,1]\to (t,x^{i} (t),y^{i(\alpha )} (t))\in J^{\alpha } ({\mathbb R},\; M)$. Consider $L\in C^{\infty } (J^{\alpha } ({\mathbb R},\; M))$. The action of $L$ along the curve $c^{\alpha }$ is defined by \begin{equation} \label{49)} {\mathcal A}(c^{\alpha } )=\int _{0}^{1}L(t,x(t),y^{\alpha } (t))dt . \end{equation} Let $c_{\varepsilon } :t\in [0,1]\to (x^{i} (t,\varepsilon ))\in M$ be a family of curves, where $\varepsilon $ is sufficiently small so that $\mathrm{Im}\, c_{\varepsilon } \subset U$, $c_{0} (t)=c(t)$, $D_{\varepsilon }^{\alpha } c_{\varepsilon }(0)=D_{\varepsilon }^{\alpha } c_{\varepsilon}(1)=0$.
The action of $L$ along the curves $c_{\varepsilon } $ is \begin{equation} \label{ZEqnNum353506} {\mathcal A}(c_{\varepsilon }^{\alpha } )=\int _{0}^{1}L(t,x(t,\varepsilon ),y^{\alpha } (t,\varepsilon ))dt , \end{equation} where $y^{i(\alpha )} (t,\varepsilon )=\frac{1}{\Gamma (1+\alpha )} D_{t}^{\alpha } x^{i} (t,\varepsilon )$. The action \eqref{ZEqnNum353506} has a fractional extremal value if \begin{equation} \label{51)} D_{\varepsilon }^{\alpha } {\mathcal A}(c_{\varepsilon }^{\alpha } )\left|_{\varepsilon =0} \right. =0. \end{equation} The action \eqref{ZEqnNum353506} has an extremal value if \begin{equation} \label{52)} D_{\varepsilon }^{1} {\mathcal A}(c_{\varepsilon }^{\alpha } )\left|_{\varepsilon =0} \right. =0. \end{equation} Using the properties of the fractional derivative we obtain \begin{proposition} a) A necessary condition for the action \eqref{ZEqnNum353506} to reach a fractional extremal value is that $c(t)$ satisfies the fractional Euler-Lagrange equations \begin{equation} \label{ZEqnNum157890} \begin{array}{l} {D_{x^{i} }^{\alpha } L-d_{t}^{\alpha } (D_{y^{i(\alpha )} }^{\alpha } L)=0} \\ {d_{t}^{\alpha } =D_{t}^{\alpha } +y^{i(\alpha )} D_{x^{i} }^{\alpha } +y^{i(2\alpha )} D_{y^{i(\alpha )} }^{\alpha } ,} \end{array} \end{equation} where $i=\overline{1, n}$. \\ b) A necessary condition for the action \eqref{ZEqnNum353506} to reach an extremal value is that $c(t)$ satisfies the Euler-Lagrange equations \begin{equation} \label{ZEqnNum578225} \begin{array}{l} {D_{x^{i} }^{1} L-d_{t}^{1} (D_{y^{i(\alpha )} }^{1} L)=0} \\ {d_{t}^{1} =D_{t}^{1} +y^{i(\alpha )} D_{x^{i} }^{1} +y^{i(2\alpha )} D_{y^{i(\alpha )} }^{1} ,} \end{array} \end{equation} where $i=\overline{1, n}$.
\end{proposition} The equations \eqref{ZEqnNum157890} may be written in the form \begin{equation} \label{ZEqnNum577970} D_{x^{i} }^{\alpha } L-d_{t}^{\alpha } (D_{y^{i(\alpha )} }^{\alpha } L)-y^{j(2\alpha )} D_{y^{j(\alpha )} }^{\alpha } (D_{y^{i(\alpha )} }^{\alpha } L)=0, \end{equation} for $i=\overline{1, n}$. The equations \eqref{ZEqnNum578225} may be written as \begin{equation} \label{56)} \frac{\partial L}{\partial x^{i} } -d_{t}^{\alpha } \left(\frac{\partial L}{\partial y^{i(\alpha )} } \right)-y^{j(2\alpha )} \frac{\partial ^{2} L}{\partial y^{i(\alpha )} \partial y^{j(\alpha )} } =0, \end{equation} where $i=\overline{1, n}$. Let us denote by \begin{equation} \label{57)} g_{ij}^{\alpha } =D_{y^{i(\alpha )} }^{\alpha } (D_{y^{j(\alpha )} }^{\alpha } L), \end{equation} and by $\left(\mathop{g^{ik} }\limits^{\alpha } \right)=\left(g_{ij}^{\alpha } \right)^{-1} $, if $\det (g_{ij}^{\alpha } )\ne 0$. From \eqref{ZEqnNum577970} and from Proposition 5, we get the fractional field \textit{FODE} $\mathop{\Gamma _{L} }\limits^{\alpha }$ associated to $L$. \\ Let $c:t\in [0,1]\to (x^{i} (t))\subset U$ be a parameterized curve. The extension of $c$ to $J^{\alpha k} ({\mathbb R},\; M)$ is the curve $c^{\alpha k} :t\in [0,1]\to (t,x^{i} (t),y^{\alpha a} (t))\in J^{\alpha k} ({\mathbb R},\; M)$, $a=\overline{1,k}$. Let $L:J^{\alpha k} ({\mathbb R},\; M)\to {\mathbb R}$ be a Lagrange function. The action of $L$ along the curve $c^{\alpha k} $ is \begin{equation} \label{ZEqnNum533362} {\mathcal A}(c^{\alpha k} )=\int _{0}^{1}L(t,x(t),y^{\alpha a} (t))dt . \end{equation} Let $c_{\varepsilon } :t\in [0,1]\to (x^{i} (t,\varepsilon ))\in M$ be a family of curves, where the absolute value of $\varepsilon $ is sufficiently small so that $Imc_{\varepsilon } \subset U\subset M$, $c_{0} (t)=c(t)$, $D_{\varepsilon }^{\alpha } c(\varepsilon )\left|_{\varepsilon =0} \right. =D_{\varepsilon }^{\alpha } c(\varepsilon )\left|_{\varepsilon =1} \right. =0$. 
The action of $L$ on the curve $c_{\varepsilon } $ is given by \begin{equation} \label{ZEqnNum472103} {\mathcal A}(c_{\varepsilon }^{\alpha k} )=\int _{0}^{1}L(t,x(t,\varepsilon ),y^{\alpha a} (t,\varepsilon ))dt \end{equation} where $y^{i(\alpha a)} (t,\varepsilon )=\frac{1}{\Gamma (1+\alpha a)} D_{t}^{\alpha a} x^{i} (t,\varepsilon )$, $a=\overline{1,k}$. The action \eqref{ZEqnNum472103} has a fractional extremal value if \begin{equation} \label{60)} D_{\varepsilon }^{\alpha } ({\mathcal A}(c_{\varepsilon }^{\alpha k} ))\left|_{\varepsilon =0} \right. =0. \end{equation} The action \eqref{ZEqnNum472103} has an extremal value if \begin{equation} \label{61)} D_{\varepsilon }^{1} ({\mathcal A}(c_{\varepsilon }^{\alpha k} ))\left|_{\varepsilon =0} \right. =0. \end{equation} \begin{proposition} a) A necessary condition for the action \eqref{ZEqnNum533362} to reach a fractional extremal value is that $c(t)$ satisfies the fractional Euler-Lagrange equations \begin{equation} \label{eq62} D_{x^{i} }^{\alpha } L+\sum _{a=1}^{k}(-1)^{a} d_{t}^{\alpha a} (D_{y^{i(\alpha a)} }^{\alpha } L)=0 , \end{equation} where \begin{equation} \label{63)} d_{t}^{\alpha a} =D_{t}^{\alpha } +y^{i(\alpha )} D_{x^{i} }^{\alpha } +y^{i(2\alpha )} D_{y^{i(\alpha )} }^{\alpha } +...+y^{i(\alpha a)} D_{y^{i(\alpha (a-1))} }^{\alpha } , \end{equation} and $i=\overline{1, n}$. \\ b) A necessary condition that the action \eqref{ZEqnNum533362} reaches an extremal value is that $c(t)$ satisfies the Euler-Lagrange equations \begin{equation} \label{eq64} \frac{\partial L}{\partial x^{i} } +\sum _{a=1}^{k}(-1)^{a} d_{t}^{a} (D_{y^{i(\alpha a)} }^{\alpha } L)=0 , \end{equation} where \begin{equation} \label{ZEqnNum420605} d_{t}^{a} =D_{t}^{1} +y^{i(\alpha )} D_{x^{i} }^{1} +...+y^{i(\alpha a)} D_{y^{i(\alpha (a-1))} }^{1} . 
\end{equation} \end{proposition} \textbf{Example.} Consider the fractional differential equation \begin{equation} \label{eq66} \begin{array}{l} {\frac{c\Gamma (1+\gamma )}{\Gamma (1+\gamma -\alpha )} x^{\gamma -\alpha } (t)f(t)+a_{1} \Gamma (1+2\alpha )y^{(2\alpha )} +} \\ {a_{2} \Gamma (1+3\alpha )y^{(3\alpha )} =0.} \end{array} \end{equation} The equation \eqref{eq66} is the fractional Euler-Lagrange equation \eqref{eq62} for the function \begin{equation*} \begin{array}{l} {L=\frac{c}{1+\gamma -\alpha } x^{\gamma } -a_{1} \Gamma (1+2\alpha )(y^{\alpha } )^{\alpha } +} \\ {a_{2} \Gamma (1+3\alpha )(y^{2\alpha } )^{\alpha } .} \end{array} \end{equation*} The equation \eqref{eq66} is the fractional Euler-Lagrange equation \eqref{eq64} for the function \begin{equation*} L=\frac{c\Gamma (1+\gamma )x^{\gamma -\alpha +1} }{\Gamma (1+\gamma -\alpha )^{(1+\gamma -\alpha )} } f-\frac{a_{1} }{2} \Gamma (1+2\alpha )(y^{\alpha } )^{2} +\frac{a_{2} }{2} \Gamma (1+3\alpha )(y^{2\alpha } )^{2} . \end{equation*} \section{Examples and applications} 1. \textbf{The nonhomogeneous Bagley-Torvik equation} \\ \indent The dynamics of a flat rigid body immersed in a Newtonian fluid is described by the equation \begin{equation} \label{ZEqnNum785447} aD_{t}^{2}x(t)+bD_{t}^{3/2} x(t)+cx(t)-f(t)=0, \end{equation} where $a,b,c\in {\mathbb R}$ and the initial conditions are $x(0)=0$, $D_{t}^{1}x(0)=0$. The equation \eqref{ZEqnNum785447} is a fractional differential equation on the bundle $J^{\alpha } ({\mathbb R},\; {\mathbb R})$ for $\alpha =\frac{1}{4} $. Indeed, let us consider the fractional differential equation \begin{equation} \label{ZEqnNum769966} aD_{t}^{8\alpha } x(t)+bD_{t}^{6\alpha } x(t)+cx(t)-f(t)=0, \end{equation} with $\alpha >0$. For $\alpha =\frac{1}{4}$ the equation \eqref{ZEqnNum769966} reduces to \eqref{ZEqnNum785447}. 
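To make the reduction explicit (a one-line check, using only the substitution $\alpha =\frac{1}{4} $): since $8\alpha =2$ and $6\alpha =\frac{3}{2} $, we have \begin{equation*} aD_{t}^{8\alpha } x(t)+bD_{t}^{6\alpha } x(t)+cx(t)-f(t)=aD_{t}^{2} x(t)+bD_{t}^{3/2} x(t)+cx(t)-f(t), \end{equation*} so \eqref{ZEqnNum769966} recovers \eqref{ZEqnNum785447} term by term. 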
With the notations \eqref{ZEqnNum283756}, the equation \eqref{ZEqnNum769966} becomes \begin{equation} \label{68)} a\Gamma (1+8\alpha )y^{(8\alpha )} (t)+b\Gamma (1+6\alpha )y^{(6\alpha )} (t)+cx(t)-f(t)=0. \end{equation} On the bundle $J^{4\alpha } ({\mathbb R},\; {\mathbb R})$ let us consider the Lagrange function \begin{equation} \label{ZEqnNum983897} \begin{array}{l} {L(t,x,y^{(3\alpha )} ,y^{(4\alpha )} )=\frac{1}{2} cx^{2} -fx-\frac{b}{2} \Gamma (1+6\alpha )(y^{(3\alpha )} )^{2} +} \\ {\frac{a}{2} \Gamma (1+8\alpha )(y^{(4\alpha )} )^{2} .} \end{array} \end{equation} Using the relation \eqref{ZEqnNum420605}, the Euler-Lagrange equation for \eqref{ZEqnNum983897} is \begin{equation} \label{70)} \begin{array}{l} {D_{x}^{1} L-D_{t}^{3\alpha } (D_{y^{(3\alpha )} }^{1} L)+D_{t}^{4\alpha } (D_{y^{(4\alpha )} }^{1} L)=} \\ {cx-f+b\Gamma (1+6\alpha )D_{t}^{3\alpha } y^{(3\alpha )} +a\Gamma (1+8\alpha )D_{t}^{4\alpha } y^{(4\alpha )} =} \\ {cx-f+b\Gamma (1+6\alpha )y^{(6\alpha )} +a\Gamma (1+8\alpha )y^{(8\alpha )} =0.} \end{array} \end{equation} \begin{proposition} The equation \eqref{ZEqnNum785447} represents the Euler-Lagrange equation on the bundle $J^{4\alpha } ({\mathbb R},\; {\mathbb R})$ for $\alpha =\frac{1}{4} $, with the Lagrange function given by \begin{equation} \label{71)} \begin{array}{l} {L(t,x,y^{(3/2)} ,y^{(2)} )=\frac{1}{2} cx^{2} -fx-\frac{b}{2} \Gamma (5/2)(y^{(3/2)} )^{2} +} \\ {\frac{a}{2} \Gamma (3)(y^{(2)} )^{2} .} \end{array} \end{equation} \end{proposition} 2. 
\textbf{Differential equations of order one, two and three which admit fractional Lagrangians} \\ The following differential equations do not admit classical Lagrangians whose Euler-Lagrange equations reproduce them: \begin{equation} \label{ZEqnNum860560} \dot{x}(t)+V_{1} (t,x)=0,\; \; V_{1} (t,x)=\frac{\partial U_{1} (t,x)}{\partial x} , \end{equation} \begin{equation} \label{ZEqnNum277412} \ddot{x}(t)+a_{1} \dot{x}(t)+V_{2} (t,x)=0,\; \; V_{2} (t,x)=\frac{\partial U_{2} (t,x)}{\partial x} , \end{equation} \begin{equation} \label{ZEqnNum426478} \dddot{x}(t)+a_{2} \ddot{x}(t)+a_{1} \dot{x}(t)+V_{3} (t,x)=0,\; \; V_{3} (t,x)=\frac{\partial U_{3} (t,x)}{\partial x} . \end{equation} To the equations \eqref{ZEqnNum860560}, \eqref{ZEqnNum277412} and \eqref{ZEqnNum426478} we associate, respectively, the following fractional equations: \begin{equation} \label{75)} D_{t}^{2\alpha } x(t)+V_{1} (t,x)=0, \end{equation} \begin{equation} \label{76)} D_{t}^{4\alpha } x(t)+a_{1} D_{t}^{2\alpha } x(t)+V_{2} (t,x)=0, \end{equation} \begin{equation} \label{77)} D_{t}^{6\alpha } x(t)+a_{2} D_{t}^{4\alpha } x(t)+a_{1} D_{t}^{2\alpha } x(t)+V_{3} (t,x)=0. \end{equation} \begin{proposition} a) Let $J^{\alpha}({\mathbb R},\; {\mathbb R})\to {\mathbb R}$ be the fractional bundle and consider $L:J^{\alpha}({\mathbb R},\; {\mathbb R})\to {\mathbb R}$ given by \begin{equation} \label{ZEqnNum972723} L(t,x,y^{(\alpha )} )=U_{1} (t,x)-\frac{1}{2} \Gamma (1+2\alpha )(y^{\alpha } )^{2} . 
\end{equation} The Euler-Lagrange equation of \eqref{ZEqnNum972723} is \begin{equation} \label{79)} \begin{array}{l} {\frac{\partial L}{\partial x} -D_{t}^{\alpha } \left(\frac{\partial L}{\partial y^{\alpha } } \right)=\frac{\partial U_{1} (t,x)}{\partial x} +\Gamma (1+2\alpha )y^{(2\alpha )} =} \\ {V_{1} (t,x)+D_{t}^{2\alpha } x(t)=0.} \end{array} \end{equation} b) Let $J^{2\alpha } ({\mathbb R},\; {\mathbb R})\to {\mathbb R}$ be the fractional bundle and the Lagrangian $L:J^{2\alpha } ({\mathbb R},\; {\mathbb R})\to {\mathbb R}$ given by \begin{equation} \label{ZEqnNum400853} \begin{array}{l} {L(t,x,y^{(\alpha )} ,y^{(2\alpha )} )=U_{2} (t,x)-\frac{1}{2} a_{1} \Gamma (1+2\alpha )(y^{\alpha } )^{2} +} \\ {\frac{1}{2} \Gamma (1+4\alpha )(y^{(2\alpha )} )^{2} .} \end{array} \end{equation} The Euler-Lagrange equation of \eqref{ZEqnNum400853} is \begin{equation} \label{81)} \begin{array}{l} {\frac{\partial L}{\partial x} -D_{t}^{\alpha } \left(\frac{\partial L}{\partial y^{\alpha } } \right)+D_{t}^{2\alpha } \left(\frac{\partial L}{\partial y^{2\alpha } } \right)=} \\ {V_{2} (t,x)+a_{1} \Gamma (1+2\alpha )y^{(2\alpha )} +} \\ {\Gamma (1+4\alpha )y^{(4\alpha )} =} \\ {V_{2} (t,x)+a_{1} D_{t}^{2\alpha } x(t)+D_{t}^{4\alpha } x(t)=0.} \end{array} \end{equation} c) Let $J^{3\alpha } ({\mathbb R},\; {\mathbb R})\to {\mathbb R}$ be the fractional bundle and $L:J^{3\alpha } ({\mathbb R},\; {\mathbb R})\to {\mathbb R}$ given by \begin{equation} \label{ZEqnNum619704} \begin{array}{l} {L(t,x,y^{(\alpha )} ,y^{(2\alpha )} ,y^{(3\alpha )} )=U_{3} (t,x)-\frac{a_{1} }{2} \Gamma (1+2\alpha )(y^{(\alpha )} )^{2} +} \\ {\frac{a_{2} }{2} \Gamma (1+4\alpha )(y^{(2\alpha )} )^{2} -\frac{1}{2} \Gamma (1+6\alpha )(y^{(3\alpha )} )^{2} .} \end{array} \end{equation} The Euler-Lagrange equation of \eqref{ZEqnNum619704} is \begin{equation} \label{83)} \begin{array}{l} {\frac{\partial L}{\partial x} -D_{t}^{\alpha } \left(\frac{\partial L}{\partial y^{\alpha } } \right)+D_{t}^{2\alpha } 
\left(\frac{\partial L}{\partial y^{(2\alpha )} } \right)-D_{t}^{3\alpha } \left(\frac{\partial L}{\partial y^{(3\alpha )} } \right)=V_{3} (t,x)+} \\ {a_{1} \Gamma (1+2\alpha )y^{(2\alpha )} +a_{2} \Gamma (1+4\alpha )y^{(4\alpha )} +\Gamma (1+6\alpha )y^{(6\alpha )} =} \\ {V_{3} (t,x)+a_{1} D_{t}^{2\alpha } x(t)+a_{2} D_{t}^{4\alpha } x(t)+D_{t}^{6\alpha } x(t)=0.} \end{array} \end{equation} d) For $\alpha =\frac{1}{2} $ we obtain the fractional Lagrangians that describe the equations \eqref{ZEqnNum860560}, \eqref{ZEqnNum277412}, \eqref{ZEqnNum426478}, respectively \begin{equation} \label{84)} \begin{array}{l} {L(t,x,y^{(1/2)} )=U_{1} (t,x)-\frac{1}{2} \Gamma (2)(y^{(1/2)} )^{2} } \\ {L(t,x,y^{(1/2)} ,y^{(1)} )=U_{2} (t,x)-\frac{1}{2} a_{1} \Gamma (2)(y^{(1/2)} )^{2} +\frac{1}{2} \Gamma (3)(y^{(1)} )^{2} } \\ {L(t,x,y^{(1/2)} ,y^{(1)} ,y^{(3/2)} )=U_{3} (t,x)-\frac{a_{1} }{2} \Gamma (2)(y^{(1/2)} )^{2} +} \\ {\frac{a_{2} }{2} \Gamma (3)(y^{(1)} )^{2} -\frac{1}{2} \Gamma (4)(y^{(3/2)} )^{2} .} \end{array} \end{equation} \end{proposition} Among the equations of the form \eqref{ZEqnNum277412} and \eqref{ZEqnNum426478} are: \\ a) the nonhomogeneous classical friction equation \begin{equation} \label{85)} m\ddot{x}(t)+\gamma \dot{x}(t)-\frac{\partial U(t,x)}{\partial x} =0, \end{equation} b) the nonhomogeneous model of Phillips \cite{Lorenz} \begin{equation} \label{86)} \ddot{x}(t)+a_{1} \dot{x}(t)+b_{1} x(t)+f(t)=0, \end{equation} c) the nonhomogeneous business cycle with innovation \cite{Lorenz} \begin{equation} \label{87)} \dddot{x}(t)+a_{2} \ddot{x}(t)+a_{1} \dot{x}(t)+b_{1} x(t)+f(t)=0. \end{equation} \section*{Conclusions} The paper presents the main differentiable structures on $J^{\alpha } ({\mathbb R},\; M)$, in order to describe fractional differential equations and ordinary differential equations, using Lagrange functions defined on $J^{\alpha } ({\mathbb R},\; M)$. 
\\ \indent Using the methods presented here, other models may be analyzed as well, such as those found in \cite{Caputo} and \cite{broasca}.
\section{Introduction} Simple rings are so-called because their two-sided ideal structure is as simple as possible. A natural way of generalizing the notion of a simple ring is to allow the presence of two-sided ideals, but to insist that the nonzero ones be very ``large.'' Specifically, we consider algebras for which all nonzero ideals have finite codimension. We begin, as always, with a definition. \begin{definition}\em{ A $k$-algebra $A$ is called \emph{just infinite\xspace dimensional}, or \emph{just infinite\xspace} for short, if $\dim_{k}(A) = \infty$ and each of its nonzero two-sided ideals has finite codimension.} \end{definition} The nomenclature which has been adopted is to call such rings \emph{projectively simple} in case the ring is $\mathbb{N}$-graded. There are many examples of just infinite algebras. Aside from any infinite dimensional simple ring, the most immediate example is a polynomial ring in one variable $k[x]$. Reichstein, Rogalski, and Zhang \cite{ReichsteinRogalskiZhang:Projectively-simple-rings} give non-PI examples constructed using abelian varieties and Bartholdi \cite{Bartholdi:Branch-Rings-Thinned} constructs interesting just infinite\xspace rings using groups acting on infinite trees. Much is known for just infinite algebras over uncountable fields; for example, Farkas and Small \cite{FarkasSmall:Algebras-which-are-nearly} show they are either primitive, have nonzero Jacobson radical, or satisfy a polynomial identity. Over countable fields much less is known and many of the basic questions that are easily answered over uncountable fields are still open. For this reason we are interested in when we can extend scalars in a just infinite algebra and remain just infinite. We thus give the following definition. 
\begin{definition}\em{ A just infinite\xspace $k$-algebra $A$ is called \emph{stably just infinite\xspace} if $A\otimes_{k}K$ is just infinite\xspace over $K$, for every field extension $K/k$.} \end{definition} Just infinite algebras, even affine ones, may behave rather poorly upon extension of scalars. For example, $\mathbb{C}[x]$ is an affine just infinite\xspace $\mathbb{R}$-algebra, but $\mathbb{C}[x]\otimes_{\mathbb{R}}\mathbb{C}$ contains $\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C}$, which is not a domain. Thus $\mathbb{C}[x]\otimes_{\mathbb{R}}\mathbb{C}$ is not even prime and hence $\mathbb{C}[x]$ fails to be just infinite\xspace over $\mathbb{R}$ \cite{FarinaPendergrass:A-Few-Properties-of-Just-Infinite}. This example shows that in case the ground field is not algebraically closed, things can go horribly awry even upon extending scalars by a finite dimensional field extension! Bergman proved the tensor product of any two prime $k$-algebras is again prime when $k$ is algebraically closed \cite[Proposition 17.2]{Bergman:Zero-divisors-in-tensor-products}. It is therefore reasonable to restrict our attention to the case that the ground field is algebraically closed. In this way we can, at the very least, avoid the unpleasantness of having a just infinite\xspace algebra $A$ for which extension of scalars results in a ring which fails to be prime. We summarize our main results: We show that right noetherian just infinite\xspace algebras are well-behaved under extension of scalars. \begin{theorem}\label{thm:non-pi-noetherian-ji-implies-sji} Let $k$ be an algebraically closed field and let $A$ be a non-PI right noetherian just infinite\xspace $k$-algebra. Then $A$ is stably just infinite\xspace. \end{theorem} In addition, we show that the hypothesis that $A$ is a right noetherian ring cannot be weakened when the ground field is countable---even if the algebra is finitely generated\xspace. 
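The zero divisors in $\mathbb{C}\otimes _{\mathbb{R}}\mathbb{C}$ mentioned above can be exhibited directly (a standard computation): the cross terms $i\otimes i$ cancel, so \begin{equation*} (1\otimes i-i\otimes 1)(1\otimes i+i\otimes 1)=1\otimes i^{2} -i^{2} \otimes 1=-(1\otimes 1)+(1\otimes 1)=0, \end{equation*} although neither factor is zero. 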
\begin{theorem}\label{thm:affine-ji-non-sji-rings-over-countable-fields-exist} Let $k$ be a countable algebraically closed field. Then there exists an affine non-simple right Goldie just infinite\xspace $k$-algebra which is not stably just infinite\xspace. \end{theorem} It is necessary to take $k$ to be a countable algebraically closed field in this theorem. In fact, we show that if $A$ is a countably generated just infinite\xspace $k$-algebra over an uncountable algebraically closed field, then $A$ is stably just infinite\xspace. \begin{theorem}\label{thm:cg-ji-over-big-fields-are-sji} Let $k$ be an uncountable algebraically closed field and let $A$ be a countably generated just infinite\xspace $k$-algebra. Then $A$ is stably just infinite\xspace. \end{theorem} We remark that this gives an analog of a result of Reichstein et al. \cite[Lemma 1.8]{ReichsteinRogalskiZhang:Projectively-simple-rings} for ungraded rings. To complete our study of stably just infinite\xspace algebras, we look at the \textup{PI}\xspace case and show that a \textup{PI}\xspace just infinite\xspace $k$-algebra $A$ is stably just infinite\xspace if and only if the center of the quotient division algebra of $A$ is a finitely generated\xspace field extension of $k$ of transcendence degree $1$. This paper is organized as follows. In \S \ref{sec:ji-rings} and \S \ref{sec:central-closure}, we give some useful facts about just infinite\xspace algebras and extended centers, which we later use in our consideration of stably just infinite\xspace algebras. In \S \ref{sec:noetherian-ji-rings}, we prove Theorem \ref{thm:non-pi-noetherian-ji-implies-sji}. In \S \ref{sec:pi-ji-rings}, we present our characterization of \textup{PI}\xspace stably just infinite\xspace algebras. In \S \ref{sec:uncountable-fields}, we turn our attention to just infinite\xspace algebras over uncountable fields and prove Theorem \ref{thm:cg-ji-over-big-fields-are-sji}. 
Finally, in \S \ref{sec:examples}, we construct interesting just infinite\xspace algebras that are not stably just infinite\xspace to obtain Theorem \ref{thm:affine-ji-non-sji-rings-over-countable-fields-exist}. All rings under consideration will be associative algebras, with $1$, over a field $k$, and homomorphisms are assumed to be unital. \section{Just Infinite Algebras}\label{sec:ji-rings} Reichstein et al. \cite{ReichsteinRogalskiZhang:Projectively-simple-rings} show that a non-\textup{PI}\xspace projectively simple algebra has a unique maximal ideal, namely the augmentation ideal. A similar result holds for just infinite\xspace algebras with nonzero Jacobson radical. \begin{proposition}\label{thm:ji-not-semiprimitive-implies-finite-spec} Let $A$ be a just infinite\xspace $k$-algebra with nonzero Jacobson radical. Then $A$ has only finitely many prime ideals. \end{proposition} \noindent {\bf Proof.} We denote the Jacobson radical of the algebra $A$ by $J(A)$. Since $A$ is just infinite\xspace and $J(A)\neq (0)$, $A/J(A)$ is finite dimensional. It follows that there are only finitely many prime ideals in $A$ that properly contain $J(A)$ \cite[Theorem 2.3.9 (ii)]{Rowen:Ring-theory-Vol1}. But every nonzero prime ideal in $A$ is necessarily maximal and thus primitive and so it contains $J(A)$. The result follows. \qed The hypothesis that $J(A) \neq (0)$ is necessary, as $\mathbb{C}[x]$ demonstrates. Proposition \ref{thm:ji-not-semiprimitive-implies-finite-spec} motivates the following \begin{question}\em{ If $A$ is a finitely generated\xspace non-\textup{PI}\xspace just infinite\xspace $k$-algebra, does $A$ have only finitely many prime ideals? } \end{question} It is known that a \textup{PI}\xspace just infinite\xspace algebra is a finitely generated\xspace module over its center and the center is itself just infinite\xspace \cite{FarinaPendergrass:A-Few-Properties-of-Just-Infinite}. 
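As a minimal illustration of this fact (our example, not drawn from the cited paper), take $A=M_{n}(k[x])$: every two-sided ideal of $A$ has the form $M_{n}(I)$ for an ideal $I$ of $k[x]$, and for $I\neq (0)$ \begin{equation*} \dim _{k} \left(A/M_{n}(I)\right)=n^{2} \dim _{k} \left(k[x]/I\right)<\infty , \end{equation*} so $A$ is a \textup{PI}\xspace just infinite\xspace algebra which is a free module of rank $n^{2}$ over its center $Z(A)\cong k[x]$, and $Z(A)$ is itself just infinite\xspace. 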
In stark contrast to the \textup{PI}\xspace case, a just infinite\xspace algebra that doesn't satisfy a polynomial identity necessarily has a small center (assuming the ring isn't simple). In fact, we next show that the center of any non-simple non-\textup{PI}\xspace just infinite\xspace algebra is a finite dimensional field extension of the ground field. This result is essentially a consequence of two lemmas of Farkas and Small \cite{FarkasSmall:Algebras-which-are-nearly}. For a fixed positive integer $n$, we denote by $I_{n}(A)$ the two-sided ideal of the algebra $A$ generated by all specializations in $A$ of all polynomial identities of $n \times n$ matrices. \begin{proposition}\label{thm:center-of-non-pi-ji} Let $A$ be a non-\textup{PI}\xspace just infinite\xspace $k$-algebra that is not simple. Then $Z(A)$ is a finite dimensional field extension of $k$. \end{proposition} \noindent {\bf Proof.} Since $A$ is just infinite\xspace, it is prime \cite{FarinaPendergrass:A-Few-Properties-of-Just-Infinite}, and so $Z(A)$ is a domain. Choose $0\neq z\in Z(A)$ and set $J = \bigcap_{d\in \mathbb{N}}Az^{d}$. Since $A$ is just infinite\xspace, the two-sided ideal $Az$ has finite codimension in $A$. A result of Farkas and Small \cite[Lemma 1.1]{FarkasSmall:Algebras-which-are-nearly} gives $I_{n}(A) \subseteq J$ for some $n$. Since $A$ doesn't satisfy a polynomial identity, $I_{n}(A)$ is a nonzero ideal of $A$, and so $J$ is nonzero as well. It follows that $z$ is a unit of $A$ \cite[Lemma 1.2]{FarkasSmall:Algebras-which-are-nearly} and thus is invertible in $Z(A)$. Thus $Z(A)$ is a field extension of $k$. Next, since $A$ is not simple, there is a nonzero proper ideal $I$ of $A$. Since $Z(A)$ is a field, $I\cap Z(A) = (0)$ and so the natural projection map $A \rightarrow A/I$ induces an injection from $Z(A)$ to $A/I$, and the latter is finite dimensional. 
\qed The fact that the center of a (non-simple) non-\textup{PI}\xspace just infinite\xspace algebra is a finite dimensional field extension of $k$ shows that non-\textup{PI}\xspace just infinite\xspace algebras are a reasonable generalization of central simple algebras. In particular, showing that a just infinite\xspace algebra is stably just infinite\xspace can be regarded as an analog of the fact that a central simple algebra remains simple under extension of scalars. \section{Central Closure}\label{sec:central-closure} To prove our results we make use of the so-called \emph{extended center}. This idea was largely inspired by some of the methods of Resco \cite{Resco:A-reduction-theorem-for-the-primitivity}, to which we owe a great debt. Let $A$ be a prime ring and consider the set of all right $A$-module homomorphisms $f\colon I \rightarrow A$, where $I$ ranges over all nonzero two-sided ideals of $A$. Martindale \cite{Martindale:Prime-rings-satisfying} shows how one can endow this set of maps with a useful algebra structure. For the reader's convenience we include the full definition of Martindale's ring of quotients, but we refer to other sources \cite{:Handbook-of-algebra.-Vol., Lam:Lectures-on-modules-and-rings, Rowen:Ring-theory-Vol1} for the proof that this construction actually yields a ring with the stated properties. \begin{definition}\label{def:martindale-ring-of-quotients}\em{ Let $A$ be a prime $k$-algebra. The \emph{(right) Martindale ring of quotients of $A$}, denoted $\mart{A}$, consists of equivalence classes of pairs $(I, f)$ where $I \lhd A$, $I\neq (0)$, and $f\in \operatorname{Hom}_{A}(I_{A}, A_{A})$. Here two pairs $(I,f)$, $(J, g)$ are defined to be equivalent if $f = g$ on the intersection $I\cap J$. Addition and multiplication are given by} \begin{align*} (I, f) + (J, g) &= (I\cap J, f+g), \\ (I,f)\cdot(J,g) &= (JI, f\circ g). \end{align*} \end{definition} \begin{definition}\em{ Let $A$ be a prime $k$-algebra. 
The \emph{extended center of $A$}, written $\extcent{A}$, is defined to be $Z(\mart{A})$. } \end{definition} \begin{remark}\em{ $\extcent{A}$ is also called the \emph{extended centroid} in the literature. Hopefully our terminology won't cause confusion, as the centroid of a unital ring coincides with its center. $\extcent{A}$ is a field extension of $k$ and $\extcent{A}\cap A = Z(A)$. The \emph{central closure of $A$}, denoted $\centclose{A}$, is the $\extcent{A}$-linear subspace of $\mart{A}$ generated by $A$. Lastly, $A$ is called \emph{centrally closed} if $\extcent{A} = Z(A)$, (equivalently, if $\centclose{A} = A$). Note that the central closure of any ring is itself centrally closed. } \end{remark} There is an entirely internal characterization of $\extcent{A}$ which bears mentioning. The extended center of $A$ consists precisely of those pairs $(I, f)$ where $f\colon I \rightarrow A$ is an $(A,A)$-bimodule homomorphism: \[ \extcent{A} = \setmid{(I,f)}{(0) \neq I\lhd A, f\in \operatorname{Hom}_{A}(_{A}I_{A}, _{A}A_{A})}. \] Centrally closed prime algebras behave particularly well with respect to extension of scalars, and as a result we can show that a non-\textup{PI}\xspace just infinite\xspace algebra is stably just infinite\xspace if and only if it is centrally closed. \begin{theorem}\label{thm:non-pi-sji-iff-centrally-closed} Let $k$ be algebraically closed, and let $A$ be a non-\textup{PI}\xspace just infinite\xspace algebra with $Z(A) = k$. Then $A$ is stably just infinite\xspace iff $A$ is centrally closed. \end{theorem} \noindent {\bf Proof.} When $A$ is simple (and hence centrally closed) the result follows from the well-known fact that central simple rings remain simple upon extension of scalars. Thus we may assume that $A$ is not simple. Suppose that $A$ is centrally closed. Choose a nonzero ideal $I \lhd A\otimes_{k}K$ and set $J = I\cap A$, so $J \otimes_{k}K \subseteq I$. 
Since $A$ is centrally closed, $J$ is a nonzero ideal of $A$ \cite[Lemma 3.4]{EricksonMartindale:Prime-nonassociative-algebras}. Since \[ (A\otimes_{k}K)/(J\otimes_{k}K) \cong A/J \otimes_{k}K \] and the latter is finite dimensional over $K$, we see that $I$ has finite codimension in $A\otimes_{k}K$. For the other direction, suppose that $A$ is stably just infinite\xspace but not centrally closed. Choose $(\varphi,I) \in \extcent{A} \setminus k$. We may assume that $I$ is generated by a single element $a$, and so $\varphi$ is completely determined by $b:= \varphi(a)$. Since $(\varphi,I) \notin k$, we may further assume that $b \neq \lambda a$ for any $\lambda \in k$. Set $B := A\otimes_{k} k(t)$ where $t$ is an indeterminate. We identify $B$ with the localization of $A[t]$ obtained by inverting all nonzero polynomials in $t$ with coefficients in $k$. Consider the ideal $(a+tb)$ of $B$ generated by $a+tb$. We will show that $A\cap (a+tb) = (0)$, so suppose there is some nonzero element $c \in A \cap (a+tb)$. We may write $c = \sum_{i} r_{i}(a + tb) s_{i}$, with $r_{i}, s_{i} \in B$. If $q(t)\in k[t]$ is a common right denominator for the $r_{i}$ and $s_{i}$, we see that \[ c q(t) = \sum_{i} f_{i}(t)(a+tb) g_{i}(t), \; \mbox{ for some } f_{i}, g_{i} \in A[t]. \] Now, writing $f_{i}(t) = \sum_{j} p_{ij} t^{j}$ and $g_{i}(t) = \sum_{l} q_{il} t^{l}$ where $p_{ij}, q_{il} \in A$, we see that \begin{align*} c q(t) &= \sum_{i,j,l} p_{ij} a q_{il} t^{j+l} + p_{ij} b q_{il} t^{j+l +1} \\ &= \sum_{i,j,l} p_{ij} a q_{il} t^{j+l} + t \sum_{i,j,l} p_{ij} \varphi(a) q_{il} t^{j+l}. \end{align*} Set $u(t):= \sum_{i,j,l} p_{ij} a q_{il} t^{j+l} \in A[t]$, so \[ c q(t) = u(t) + t \varphi(u(t)), \] where here $\varphi$ denotes the $(A,A)$-bimodule endomorphism of $A[t]$ defined by $\varphi(a) = b$ and $\varphi(t) = t$. 
Note that for all $\lambda \in k$, \[ cq(\lambda) = u(\lambda) + \lambda \varphi(u(\lambda)) \in A, \] and so for all $\lambda \in k$ we have \[ c (q(\lambda) - q(0)) = u(\lambda) + \lambda \varphi(u(\lambda)) - u(0).\tag{\dag} \] Let $V$ denote the $k$-span of $\setmid{u(\lambda)}{\lambda \in k}$, $W = V + kc$, and note that $W$ is a nonzero subspace of $A$ with $\dim_{k}(W) < \infty$. Moreover, by equation $(\dag)$, \[ \varphi(W) \subseteq W, \] and so $\varphi$ is algebraic over $k$ by the Cayley-Hamilton theorem. Since $k$ is algebraically closed, we see that $\varphi \in k$. Thus $A \cap (a + tb) = (0)$ and hence $A$ embeds in $B/(a+tb)$. However, $A$ is stably just infinite\xspace so $B/(a+tb)$ is finite dimensional over $k(t)$, and thus $A$ is \textup{PI}\xspace. This contradiction completes the proof. \qed The careful reader might note that we only used the non-\textup{PI}\xspace hypothesis in one direction of the preceding proof. But note that if $A$ is \textup{PI}\xspace and centrally closed, then the center of $Q(A)$ is reduced to scalars. Thus $Q(A)$ is finite dimensional over $k$ by Kaplansky's theorem and hence $A$ cannot be just infinite\xspace. If we remove the condition that $Z(A) = k$ in Theorem \ref{thm:non-pi-sji-iff-centrally-closed}, then it seems that to decide whether or not a non-\textup{PI}\xspace just infinite\xspace algebra is stably just infinite\xspace requires consideration of arbitrary field extensions $K/k$. In fact, the next proposition shows that one only need consider the extended center. \begin{proposition}\label{thm:sji-iff-tensor-with-extcent} Let $A$ be a non-\textup{PI}\xspace just infinite\xspace $k$-algebra, and let $\extcent{A}$ denote the extended center of $A$. Then $A$ is stably just infinite\xspace iff $A\otimes_{k}\extcent{A}$ is just infinite\xspace over $\extcent{A}$. \end{proposition} \noindent {\bf Proof.} One direction is trivial. 
For the other, suppose that $A\otimes_{k}\extcent{A}$ is just infinite\xspace over $\extcent{A}$. Note that we have a surjective $k$-algebra homomorphism from $A\otimes_{k}\extcent{A}$ onto the central closure of $A$ (given by multiplication). Since $A$ is just infinite\xspace and not \textup{PI}\xspace, $\dim_{\extcent{A}}\centclose{A} = \infty$, and thus the above map is an isomorphism: $A\otimes_{k}\extcent{A} \cong \centclose{A}$. We see that $\centclose{A}$ is just infinite\xspace over $\extcent{A}$. We claim then that $\centclose{A}$ is stably just infinite\xspace over $\extcent{A}$. If $A$ is simple then $\extcent{A} = Z(A)$, so $\centclose{A} = A$ is central simple over $\extcent{A}$ and thus $A$ is stably just infinite\xspace. If $A$ is not simple, then, as in the first part of the proof of Theorem \ref{thm:non-pi-sji-iff-centrally-closed}, it is easy to see that $\centclose{A}$ is stably just infinite\xspace over $\extcent{A}$. Now, let $K/k$ be any field extension, and let $L$ denote a compositum (over $k$) of $K$ and $\extcent{A}$. We then have that \[A\otimes_{k}L \cong (A\otimes_{k}\extcent{A})\otimes_{\extcent{A}}L \] is just infinite\xspace over $L$, and Lemma \ref{lem:sji-descends-to-subfields} shows that $A\otimes_{k}K$ is just infinite\xspace over $K$, completing the proof. \qed Theorem \ref{thm:non-pi-sji-iff-centrally-closed} and Proposition \ref{thm:sji-iff-tensor-with-extcent} give entirely internal characterizations of non-\textup{PI}\xspace stably just infinite\xspace algebras. The difficulty is that in practice it is often hard to compute the extended center. \section{Noetherian Just Infinite\xspace Algebras}\label{sec:noetherian-ji-rings} In this section we prove that a non-\textup{PI}\xspace right noetherian just infinite\xspace algebra over an algebraically closed field is stably just infinite\xspace. To obtain this result, we use central closure. 
\begin{proposition} Let $A$ be a right noetherian non-\textup{PI}\xspace just infinite\xspace algebra with center $k$. Then $\extcent{A}$ is an algebraic extension of $k$. \end{proposition} \noindent {\bf Proof.} Since $A$ is right Goldie, the right maximal ring of quotients of $A$ coincides with its right Goldie quotient ring, which we denote by $Q(A)$. Since $A$ is prime, $\extcent{A} = Z(Q(A))$ \cite[Prop 14.17]{Lam:A-first-course-in-noncommutative}. Choose a nonzero element $t \in \extcent{A}$ and let $B$ be the subalgebra of $Q(A)$ generated by $A$ and $t$. Let $p = p(t)$ be any nonzero polynomial in $t$ with coefficients in $k$ and let $\pi : B \rightarrow B/pB$ denote the natural projection map. There are nonzero elements $a,b\in A$ with $b$ regular such that $p = ab^{-1}$ and so $pb = a$ in $B$. Hence $pB \cap A\neq (0)$. Moreover, \[ B/pB \cong \left(\frac{A}{pB \cap A}\right) \set{\overline{t}}, \; \mbox{ where } \overline{t} = t + pB. \] Thus every element of $B/pB$ can be written as a polynomial in $\overline{t}$ of degree $< \deg(p)$ with coefficients in $A/(pB \cap A)$. Since $A$ is just infinite\xspace, this shows that $\dim_{k}(B/pB) < \infty$. Set $n = \dim_{k}(B/pB)$. By a result of Farkas and Small \cite[Lemma 1.1]{FarkasSmall:Algebras-which-are-nearly}, $I_n(B) \subseteq \textstyle\bigcap\limits_{d\geq 1}p^{d}B$, and since $A$, and hence also $B$, is not \textup{PI}\xspace, $I_{n}(B) \neq (0)$. Choose any nonzero regular element $$y \in \textstyle\bigcap\limits_{d\geq 1} p^{d}B$$ (such a regular element exists because $B$ is prime right Goldie). Since $p$ is central, we can then find regular elements $a_{1}, a_{2}, \ldots \in B$ with \[ y = a_{d}p^{d}, \;\mbox{ for all } d\in \mathbb{N}, \] and since $p$ is regular in $B$, we have $a_{d} = a_{d+1}p$ for all $d$. In particular, $a_{d}B \subseteq a_{d+1}B$. 
Since $B$ is right noetherian, the chain of right ideals \[ a_{1}B \subseteq a_{2}B \subseteq \cdots \] must terminate, so $a_{i}B = a_{i+1}B$ for some $i$. In particular, $a_{i}B = a_{i+1}B = a_{i+1}p B$, and since $a_{i+1}$ is regular, the second equality shows that $p$ has a right inverse in $B$. Since $B$ is right noetherian, $p$ is a unit in $B$. Let $\mf{m}\in \operatorname{mspec}(B)$ be any nonzero maximal ideal of $B$. We have a $k$-algebra isomorphism \[ \frac{B}{ \mf{m} } \cong \left(\frac{A}{\mf{m}\cap A}\right)\set{\overline{t}}, \mbox{ where } \overline{t} = t + \mf{m}. \] Since $A$ is just infinite\xspace, $A/(\mf{m}\cap A)$ is finite dimensional over $k$, and since $\overline{t}$ is central, $B/\mf{m}$ is a simple \textup{PI}\xspace ring, and thus a finite module over its center, $Z$, by Kaplansky's theorem. Since $B/\mf{m}$ is clearly an affine $k$-algebra, the Artin-Tate lemma then shows that $Z$ is affine as well. But $Z$ is a field, and thus $\dim_{k}(Z) < \infty$. This shows that $\dim_{k}(B/\mf{m}) < \infty$. Finally, since $B/\mf{m} \cong (A/(\mf{m}\cap A))\set{\overline{t}}$ is finite dimensional, we see that $\overline{t}$ is algebraic over $k$, and we claim that this implies that $t$ is algebraic over $k$ as well. To see this, suppose that $t$ were transcendental. Since any nonzero polynomial $p(t)\in k\set{t}$ is a unit in $B$, $B$ contains the field $k(t)$, which has transcendence degree $1$. Since $\mf{m}\neq B$, $\mf{m} \cap k(t) = (0)$, and so $B/\mf{m}$ contains an isomorphic copy of $k(t)$, namely $k(\overline{t})$. Since $\overline{t}$ is algebraic over $k$, this is a contradiction. \qed Since being centrally closed and being stably just infinite\xspace are equivalent properties for non-\textup{PI}\xspace just infinite\xspace algebras, we obtain as a corollary Theorem \ref{thm:non-pi-noetherian-ji-implies-sji}. 
\begin{corollary}\label{cor:noetherian-non-pi-ji-implies-sji} Let $k$ be an algebraically closed field, and let $A$ be a right noetherian non-\textup{PI}\xspace just infinite\xspace algebra with $Z(A) = k$. Then $A$ is stably just infinite\xspace. \end{corollary} We can obtain further stability results by combining the above ideas with the Nullstellensatz. \begin{proposition}\label{thm:primitive-null-implies-sji} Let $k$ be an algebraically closed field and let $A$ be a primitive non-\textup{PI}\xspace just infinite\xspace $k$-algebra. If $A$ satisfies the Nullstellensatz, then $A$ is stably just infinite\xspace. \end{proposition} \noindent {\bf Proof.} By a result of Martindale \cite{Martindale:Lie-isomorphisms-of-prime-rings}, the extended center $\extcent{A}$ embeds in $\operatorname{End}_{A}(M)$ for any faithful simple $A$-module $M$. Since $A$ satisfies the Nullstellensatz, this implies that $\extcent{A} = k$, so $A$ is centrally closed and the result follows from Theorem \ref{thm:non-pi-sji-iff-centrally-closed}. \qed \section{\textup{PI}\xspace Just Infinite\xspace Algebras}\label{sec:pi-ji-rings} In this section, we characterize \textup{PI}\xspace stably just infinite\xspace rings over algebraically closed fields. We begin by considering the commutative case. \begin{lemma}\label{lem:noetherian-domains-of-trdeg-1-are-ji} Let $k$ be an algebraically closed field and let $A$ be a commutative noetherian domain such that $Q(A)$ is a finitely generated\xspace field extension of $k$ with $\operatorname{trdeg}(Q(A)) \leq 1$. Then $A$ is just infinite\xspace. \end{lemma} \noindent {\bf Proof.} We denote the (classical) Krull dimension of a ring $A$ by $\operatorname{Kdim}(A)$. Since $A$ is a domain with $\operatorname{Kdim}(A) \leq 1$, $\operatorname{Kdim}(A/P) = 0$ for every nonzero $P\in\operatorname{spec}(A)$. Thus $A/P$ is a field extension of $k$. 
If $A$ is not just infinite\xspace, then, since $A$ is noetherian, we may choose an ideal $I$ maximal with respect to $\dim_{k}(A/I) = \infty$. It is easy to see that $I$ is then prime, and it suffices to show that $A/P = k$ for every nonzero prime ideal $P$ of $A$. Fix a nonzero $P\in \operatorname{spec}(A)$ and suppose there is some element $x + P\in A/P$ which is transcendental over $k$. Since $k$ is algebraically closed and $A$ is a domain, every nonzero element of $P$ is transcendental over $k$. Fix a nonzero $t\in P$. Then $x$ and $t$ are algebraically independent over $k$ since $x$ is regular mod $P$, and so $\operatorname{trdeg}(Q(A)) \geq 2$, a contradiction. Thus we see that $A/P$ is an algebraic extension of $k$. Since $k$ is an algebraically closed field, we see that $A/P\cong k$. \qed To characterize commutative stably just infinite\xspace rings, we first require another lemma. \begin{lemma}\label{lem:sji-descends-to-subfields} Let $A$ be a $k$-algebra and let $K/k$ be a field extension. If $A\otimes_{k}K$ is just infinite\xspace over $K$, then $A$ is just infinite\xspace over $k$. \end{lemma} \noindent {\bf Proof.} Choose a nonzero ideal $I \lhd A$. Then $I\otimes_{k}K$ is a nonzero two-sided ideal of $A\otimes_{k}K$ with $(A\otimes_{k}K)/(I\otimes_{k}K) \cong (A/I)\otimes_{k}K$. Taking dimensions then yields \begin{align*} \dim_{k}(A/I) &= \dim_{K}( (A/I)\otimes_{k} K ) \\ &= \dim_{K} ( (A\otimes_{k}K)/(I\otimes_{k}K) ) < \infty. \qed \end{align*} \begin{proposition}\label{thm:commutative-sji-characterization} Let $k$ be an algebraically closed field and let $A$ be a commutative just infinite\xspace $k$-algebra. Then $A$ is stably just infinite\xspace iff $Q(A)$ is a finitely generated\xspace field extension of $k$ of transcendence degree at most $1$. \end{proposition} \noindent {\bf Proof.} Suppose first that $A$ is stably just infinite\xspace. Set $F = Q(A)$, the quotient field of $A$. 
If $A$ is stably just infinite\xspace, then $A\otimes_k K$ is noetherian for every field extension $K/k$ since a commutative just infinite ring is necessarily noetherian. Since $F\otimes_{k} K$ is a localization of $A\otimes_k K$, it is also noetherian for every field extension $K/k$. By a result of V\'{a}mos \cite{Vamos:On-the-minimal-prime-ideal}, $F$ is a finitely generated\xspace field extension of $k$, and it suffices to show that $\operatorname{trdeg}_{k}(F) \leq 1$. Note that $\operatorname{Kdim}(A\otimes_{k} F) \leq 1$ since $A\otimes_{k} F$ is a just infinite\xspace $F$-algebra. Since $F\otimes_{k} F$ is a localization of $A\otimes_{k} F$, $F\otimes_{k} F$ is a just infinite\xspace $F$-algebra as well, and so $\operatorname{Kdim}(F\otimes_{k} F) \leq 1$, and thus $\operatorname{trdeg}_{k}(F)\leq 1$ \cite[6.4.8 \& 6.6.17]{McConnellRobson:Noncommutative-Noetherian-rings}. Next suppose that $F$ is a finitely generated\xspace field extension of $k$ with $\operatorname{trdeg}_{k}(F) \leq 1$. We claim that $A$ is stably just infinite\xspace. Let $K/k$ be a field extension, and let $\overline{K}$ denote the algebraic closure of $K$. Since $F$ is finitely generated\xspace, there is a finitely generated\xspace purely transcendental extension $E/k$ with $E\subseteq F$ and $[F:E]<\infty$. Then $\operatorname{trdeg}_k(E)\le 1$ and so the field of fractions of $E\otimes_k \overline{K}$ is a purely transcendental extension of $\overline{K}$ of transcendence degree at most one. But $Q(F\otimes_{k} \overline{K})$ is a finite extension of $Q(E\otimes_k \overline{K})$ and thus also has transcendence degree at most one. Since $Q(A\otimes_k \overline{K})=Q(F\otimes_k \overline{K})$, $A\otimes_{k} \overline{K}$ has Krull dimension at most $1$ over $\overline{K}$. Applying Lemma \ref{lem:noetherian-domains-of-trdeg-1-are-ji}, we see that $A\otimes_{k} \overline{K}$ is just infinite\xspace over $\overline{K}$. 
Lemma \ref{lem:sji-descends-to-subfields} then implies that $A\otimes_{k} K$ is just infinite\xspace over $K$, completing the proof. \qed \begin{corollary} Affine commutative just infinite\xspace algebras over algebraically closed fields are stably just infinite\xspace. \end{corollary} Suppose now that $A$ is a \textup{PI}\xspace just infinite\xspace algebra over an algebraically closed field $k$ such that $Q(Z(A))$ is a finitely generated\xspace field extension of $k$ with $\operatorname{trdeg}_{k}(Q(Z(A))) \leq 1$. Then $A$ is a finitely generated\xspace module over $Z(A)$, which is itself just infinite\xspace \cite[Corollary 2]{FarinaPendergrass:A-Few-Properties-of-Just-Infinite}. By Proposition \ref{thm:commutative-sji-characterization}, $Z(A)$ is stably just infinite\xspace. If $K/k$ is a field extension, then $Z(A)\otimes_{k}K$ is a (noetherian) just infinite\xspace $K$-algebra. Since $A\otimes_{k}K$ is a finitely generated\xspace $Z(A)\otimes_{k}K$-module, $A\otimes_{k}K$ is also noetherian. Choose a nonzero two-sided ideal $I\lhd A\otimes_{k}K$. As $A\otimes_{k}K$ is prime noetherian, $I$ contains a nonzero regular element $x$. Since $A\otimes_{k}K$ is a finite $Z(A)\otimes_{k}K$-module, $x$ is algebraic over $Z(A)\otimes_{k}K$. If \[ x^{t} + z_{t-1}x^{t-1} + \ldots + z_{0} = 0, \quad z_{j}\in Z(A)\otimes_{k}K \] has $t$ minimal, then $z_{0} \neq 0$. It follows that $z_{0}\in I\cap (Z(A)\otimes_{k}K)$, and so $I \cap (Z(A)\otimes_{k}K) \neq (0)$. Finally, $(A\otimes_{k}K)/I$ is a finitely generated\xspace module over $(Z(A)\otimes_{k}K)/(I\cap (Z(A)\otimes_{k}K))$, hence $A\otimes_{k}K$ is just infinite\xspace over $K$. To summarize, we have the following: \begin{theorem}\label{thm:pi-sji-characterization} Let $k$ be an algebraically closed field, and let $A$ be a \textup{PI}\xspace just infinite\xspace $k$-algebra. 
Then $A$ is stably just infinite\xspace iff $Q(Z(A))$ is a finitely generated\xspace field extension of $k$ with $\operatorname{trdeg}_{k}(Q(Z(A))) \leq 1$. \end{theorem} \begin{corollary}\label{cor:affine-pi-over-alg-closed-is-sji} Affine \textup{PI}\xspace just infinite\xspace algebras over algebraically closed fields are stably just infinite\xspace. \end{corollary} \section{Algebras over Uncountable Fields}\label{sec:uncountable-fields} In this section we show that countably generated just infinite\xspace algebras over uncountable algebraically closed fields are stably just infinite\xspace, proving Theorem \ref{thm:cg-ji-over-big-fields-are-sji}. We first consider the \textup{PI}\xspace case. \begin{proposition}\label{thm:cg-comm-ji-rings-over-uncountable-are-sji} Let $k$ be an uncountable algebraically closed field and let $A$ be a countably generated \textup{PI}\xspace just infinite\xspace $k$-algebra. Then $A$ is stably just infinite\xspace. \end{proposition} \noindent {\bf Proof.} We begin by showing we can reduce to the commutative case. Since $A$ is just infinite\xspace and \textup{PI}\xspace, we have that $A$ is a finite module over its center and that its center is just infinite\xspace \cite{FarinaPendergrass:A-Few-Properties-of-Just-Infinite}. Moreover, since an algebra is countably generated iff it is countable dimensional, we see that $Z(A)$ is countably generated as well. By Theorem \ref{thm:pi-sji-characterization}, we may then assume that $A$ is commutative. Note that $\operatorname{Kdim}(A) \leq 1$ since $A$ is just infinite\xspace. In fact, $\operatorname{Kdim}(A) = 1$ because there are no proper field extensions of $k$ which are countably generated as algebras. By Proposition \ref{thm:commutative-sji-characterization} it is enough to show that $Q(A)$ is a finitely generated\xspace field extension of $k$ of transcendence degree at most $1$. $A$ is noetherian, so $A\otimes_{k} Q(A)$ is noetherian \cite[Theorem 1.2]{Bell:Noetherian-algebras-over}. 
Thus $Q(A)\otimes_{k} Q(A)$ is noetherian since it is a localization of $A\otimes_{k}Q(A)$. It follows from a result of V\'{a}mos \cite{Vamos:On-the-minimal-prime-ideal} that $Q(A)$ is a finitely generated\xspace field extension of $k$. Let $R$ be any affine commutative $k$-algebra with $Q(R) = Q(A)$. Since $A$ is countably generated, we may choose elements $a_{1}b_{1}^{-1}, a_{2}b_{2}^{-1}, \ldots$, with $a_{i}, b_{i} \in R$, which generate $A$ as a $k$-algebra. Set $B = RS^{-1}$, where $S$ is the multiplicatively closed subset of $R$ generated by the set $\setmid{b_{i}}{i \in \mathbb{N}}$. Note that $B$ satisfies the Nullstellensatz since it is countably generated and $k$ is uncountable. Since $k$ is algebraically closed, $B/M \cong k$ for some maximal ideal $M$ of $B$. Set $P = M \cap R$. Then $P$ is a maximal ideal of $R$. Note also that $R_{P}$, the localization of $R$ at $P$, contains $B$ and thus also contains $A$. Hence $\operatorname{Kdim}(A) = \operatorname{trdeg}_{k}(A)$ \cite[Theorem 1.5]{GilmerNashierNichols:The-prime-spectra-of-subalgebras}, and since $\operatorname{Kdim}(A) = 1$, we conclude that $\operatorname{trdeg}_{k}(Q(A)) = 1$, completing the proof. \qed Next we dispose of the non-\textup{PI}\xspace case. The techniques are rather different, and we need a preliminary result which may be of independent interest. \begin{proposition}\label{thm:extended-center-is-countable-dimensional} Let $A$ be a countably generated non-\textup{PI}\xspace just infinite\xspace $k$-algebra. Then $\extcent{A}$ is a countable dimensional field extension of $k$. \end{proposition} \noindent {\bf Proof.} If $A$ is simple, then $Z(A)$ is a field. But $A$ is countable dimensional over $k$, so $Z(A)$ is countable dimensional over $k$ as well, and in a simple ring, $\extcent{A} = Z(A)$. Now suppose that $A$ is not simple. Since $A$ does not satisfy any polynomial identity, each $I_{n}(A)$ is nonzero. Moreover, $A$ is just infinite\xspace, and so every nonzero ideal $I \lhd A$ contains some $I_{n}(A)$. 
Suppose there is an uncountable collection of elements \[ \setmid{ (I_{\alpha}, f_{\alpha}) }{ \alpha \in \mathcal{I} } \subseteq \extcent{A} \] which is $k$-linearly independent. Note that if $(I,f) \in \extcent{A}$, then \[ (I,f) = (I_{n}(A), f|_{I_{n}(A)}), \] so every bimodule map is determined by its action on some $I_{n}$. Thus for each $\alpha$, we may assume that $(I_{\alpha}, f_{\alpha}) = (I_n(A), f_{\alpha})$ for some $n$. Since there are only countably many $I_{n}(A)$, there is some $d\in \mathbb{N}$ for which the uncountable set $\setmid{ (I_{d}(A), f_{\alpha}) }{ \alpha\in \mathcal{I} }$ is $k$-linearly independent. Next, choose any nonzero element $a\in I_{d}(A)$ and note that by replacing $I_{d}(A)$ with $AaA$, we may assume that there is an uncountable $k$-linearly independent set of the form $\setmid{ (AaA, f_{\alpha}) }{\alpha \in \mathcal{I}}$. Being an $(A,A)$-bimodule map, each $f_{\alpha}$ is completely determined by $f_{\alpha}(a)$. However, since $A$ is a countably generated $k$-algebra, $\dim_{k}(A)$ is countable, and it follows that the set \[ \setmid{ f_{\alpha}(a) }{ \alpha \in \mathcal{I}} \] is $k$-linearly dependent, a contradiction. This proves the proposition. \qed \begin{remark}\em{ Though Proposition \ref{thm:extended-center-is-countable-dimensional} requires $A$ to be non-\textup{PI}\xspace and just infinite\xspace, the conclusion still holds for any countably generated prime algebra with the property that there is a countable set of nonzero elements such that every nonzero ideal contains at least one of these elements; this is often referred to as having a \emph{countable separating set} in the literature. Note that, for example, any primitive homomorphic image of an enveloping algebra of a finite dimensional Lie algebra has a countable separating set, as does any countably generated prime non-\textup{PI}\xspace right Goldie ring of $\operatorname{GKdim}$ $2$. 
We also remark that the preceding proof, with the obvious modifications, yields a generalization of \cite[Theorem 1]{RowenSmall:Primitive-ideals-of-algebras}. } \end{remark} \begin{corollary}\label{cor:cg-non-PI-ji-over-big-fields-are-sji} Let $k$ be an uncountable algebraically closed field and let $A$ be a countably generated non-\textup{PI}\xspace just infinite\xspace $k$-algebra. Then $A$ is stably just infinite\xspace. \end{corollary} \noindent {\bf Proof.} If $A$ is simple then $\extcent{A} = k$ and the result follows from Theorem \ref{thm:non-pi-sji-iff-centrally-closed}. Otherwise $A$ is not simple, and so $\extcent{A}$ is a countable dimensional field extension of $k$ by Proposition \ref{thm:extended-center-is-countable-dimensional}. Since $k$ is uncountable, $\extcent{A}$ must be algebraic over $k$. But $k$ is algebraically closed, so $A$ is centrally closed, and the result follows from Theorem \ref{thm:non-pi-sji-iff-centrally-closed}. \qed Combining Proposition \ref{thm:cg-comm-ji-rings-over-uncountable-are-sji} and Corollary \ref{cor:cg-non-PI-ji-over-big-fields-are-sji}, we obtain Theorem \ref{thm:cg-ji-over-big-fields-are-sji}. \section{An Example}\label{sec:examples} In this short section we prove Theorem \ref{thm:affine-ji-non-sji-rings-over-countable-fields-exist}. We construct an interesting example showing that the right noetherian hypothesis in Theorem \ref{thm:non-pi-noetherian-ji-implies-sji} and the uncountability hypothesis in Theorem \ref{thm:cg-ji-over-big-fields-are-sji} are necessary. Let $k$ be a countable algebraically closed field. There exists a simple Ore domain $R$, affine over $k$, with $\operatorname{trdeg}_{k}(Z(R)) = \infty$ \cite{Irving:Finitely-generated-simple}. Thus $R$ is affine, simple, and infinite dimensional, and so $R$ cannot satisfy a polynomial identity. The ring $R$ is obtained as a skew Laurent extension of an Ore domain with respect to an automorphism of infinite order \cite{Irving:Finitely-generated-simple}. 
In particular, $R$ is not a division ring. Fix a nonzero element $x\in R$ which is not a unit, and set $A = k + xR$. The algebra $A$ is affine \cite[Proposition 2]{Resco:Affine-domains-of-finite}, and since $xR$ is simple as a (non-unital) ring, $A$ is just infinite\xspace. As $R$ is not \textup{PI}\xspace, neither is $A$, and so we see that $Z(A)$ is a finite dimensional field extension of $k$ by Proposition \ref{thm:center-of-non-pi-ji}. But note that $Z(A) \cap xR = (0)$ because $xR$ is a proper right ideal of $R$. Hence $Z(A) = k$. Finally, it is easy to see that $A$ is right Ore, and so the extended center of $A$ coincides with the center of its right Goldie quotient ring. Since $Q(A) = R$, we see that $\extcent{A} = Z(R)$. Thus $A$ is not centrally closed, and so $A$ is not stably just infinite\xspace by Theorem \ref{thm:non-pi-sji-iff-centrally-closed}. \section*{Acknowledgments} We thank Lance Small for many helpful conversations. \bibliographystyle{abbrv}
\chapter{Introduction} \label{intro} Physical theories are usually formulated as differential equations. Some such examples are Hamilton's equations of motion (describing the classical motion of a point particle) \begin{align} \frac{\dd}{\dd t} \left ( % \begin{matrix} q \\ p \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} + p \\ - \nabla_q V \\ \end{matrix} % \right ) , \label{intro:eqn:hamiltons_eom} \end{align} where $\nabla_q V = (\partial_{q_1} V , \ldots , \partial_{q_n} V)$ is the gradient of the potential $V$, the Schrödinger equation \begin{align} \ii \partial_t \Psi &= \bigl ( - \tfrac{1}{2m} \Delta_x + V \bigr ) \Psi , \label{intro:eqn:Schroedinger_eqn} \end{align} where $\Delta_x = \sum_{j = 1}^n \partial_{x_j}^2$ is the Laplace operator, the wave equation \begin{align} \partial_t^2 u - \Delta_x u = 0 , \label{intro:eqn:wave_eqn} \end{align} the heat equation (with source) \begin{align} \partial_t u &= + \Delta_x u + f(t) \label{intro:eqn:heat_eqn} \end{align} and the Maxwell equations ($\nabla_x \times \mathbf{E}$ is the curl of $\mathbf{E}$) \begin{align} \frac{\dd}{\dd t} \left ( % \begin{matrix} \mathbf{E} \\ \mathbf{H} \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} + \eps^{-1} \nabla_x \times \mathbf{H} \\ - \mu^{-1} \nabla_x \times \mathbf{E} \\ \end{matrix} % \right ) - \left ( % \begin{matrix} \eps^{-1} j \\ 0 \\ \end{matrix} % \right ) \label{intro:eqn:Maxwell_dynamical_eqns} \\ \left ( % \begin{matrix} \nabla_x \cdot \eps \mathbf{E} \\ \nabla_x \cdot \mu \mathbf{H} \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} \rho \\ 0 \\ \end{matrix} % \right ) \label{intro:eqn:Maxwell_conservation_laws} . \end{align} Of course, there are many, many more examples, so it helps to classify them systematically. 
In the zoology of differential equations, the first distinction we need to make is between \emph{ordinary differential equations} (ODEs) and \emph{partial differential equations} (PDEs), \ie differential equations which involve derivatives of only one variable (\eg \eqref{intro:eqn:hamiltons_eom}) or several variables (all other examples). The order of a differential equation is determined by the highest derivative involved in it. On a second axis, we need to distinguish between non-linear (\eg \eqref{intro:eqn:hamiltons_eom} for $H(q,p) = \tfrac{1}{2m} p^2 + V(q)$ where $V$ is not a second-order polynomial) or linear equations (\eg \eqref{intro:eqn:hamiltons_eom} for $V(q) = \omega^2 \, q^2$ or \eqref{intro:eqn:Maxwell_conservation_laws}). We can write any differential equation in the form \begin{align*} L(u) = f \end{align*} where the contribution $L(u)$ which depends on the solution $u$ is collected on the left-hand side and the $u$-\emph{in}dependent part $f$ is on the right-hand side. \emph{Linear} differential equations are the distinguished case where the \emph{operator} $L$ satisfies \begin{align} L \bigl ( \alpha_1 u_1 + \alpha_2 u_2 \bigr ) &= \alpha_1 \, L(u_1) + \alpha_2 \, L(u_2) \label{intro:eqn:linearity} \end{align} for all scalars $\alpha_1 , \alpha_2$ and suitable functions $u_1 , u_2$; otherwise, the differential equation is \emph{non-linear}. Among \emph{linear} differential equations we further distinguish between \emph{homogeneous} ($f = 0$) and \emph{inhomogeneous} ($f \neq 0$) linear differential equations. Solving linear differential equations is much, much easier, because linear combinations of solutions of the homogeneous equation $L(u) = 0$ are once again solutions of the homogeneous equation. 
In other words, \emph{the solutions form a vector space.} This makes it easier to find solutions which satisfy the correct initial conditions by, for instance, systematically finding all solutions to $L(u) = 0$ and then forming suitable linear combinations. However, there are cases when solving a non-linear problem may be more desirable. In case of many-particle quantum mechanics, a non-linear problem on a lower-dimensional space is often preferable to a high-dimensional linear problem. Secondly, one can often relate easier-to-solve \emph{ordinary} differential equations to \emph{partial} differential equations in a systematic fashion, \eg by means of semiclassical limits which relate \eqref{intro:eqn:hamiltons_eom} and \eqref{intro:eqn:Schroedinger_eqn}, by “diagonalizing” a PDE or considering an associated “eigenvalue problem”. A last important weapon in the arsenal of a mathematical physicist is to systematically exploit \emph{symmetries}. \paragraph{Well-posedness} The most fundamental question one may ask about a differential equation on a domain of functions (which may include boundary conditions and such) is whether it is a \emph{well-posed problem}, \ie \begin{enumerate}[(1)] \item whether \emph{a} (non-trivial) solution \emph{exists}, \item whether the solution is \emph{unique} and \item whether the solution depends on the initial conditions in a continuous fashion (\emph{stability}). \end{enumerate} Solving this sounds rather like an exercise, but it can be tremendously hard. (Proving the \href{http://www.claymath.org/millennium/Navier-Stokes_Equations/}{well-posedness of the Navier-Stokes equation} is one of the -- unsolved -- Millenium Prize Problems, for instance.) Indeed, there are countless examples where a differential equation either has no non-trivial solution\footnote{For linear-homogeneous differential equations, if the zero function is in the domain, it is automatically a solution. 
Hence, the zero function is often referred to as “trivial solution”.} or the solution is not unique; we will come across some of these cases in the exercises. \paragraph{Course outline} This course is located at the intersection of mathematics and physics, so one of the tasks is to establish a dictionary between the mathematics and physics community. Both communities have benefitted from each other tremendously over the course of history: physicists would often generate new problems for mathematicians while mathematicians build and refine new tools to analyze problems from physics. Concretely, the course is structured so as to show the interplay between different fields of mathematics (\eg functional analysis, harmonic analysis and the theory of Schrödinger operators) as well as to consider different aspects of solving differential equations. However, the course is \emph{not} meant to be a comprehensive introduction to any of these fields in particular, but just give an overview, elucidate some of the connections and whet the appetite for more. \chapter{Ordinary differential equations} \label{odes} An ordinary differential equation -- or ODE for short -- is an equation of the form \begin{align} \tfrac{\dd}{\dd t} x &= \dot{x} = F(x) , && x(0) = x_0 \in U \subseteq \R^n , \label{odes:eqn:primordial_ode} \end{align} defined in terms of a \emph{vector field} $F = (F_1 , \ldots , F_n) : U \subseteq \R^n \longrightarrow \R^n$. Its solutions are curves $x = (x_1 , \ldots , x_n) : (-T,+T) \longrightarrow U$ in $\R^n$ for times up to $0 < T \leq +\infty$. You can also consider ODEs on other spaces, \eg $\C^d$. The solution to an ODE is a \emph{flow} $\Phi : (-T,+T) \times U \longrightarrow U$, \ie the map which satisfies \begin{align*} \Phi_t(x_0) = x(t) \end{align*} where $x(t)$ is the solution of \eqref{odes:eqn:primordial_ode}. 
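Numerically, the flow can be approximated by any one-step integrator. The following minimal Python sketch (function names are ours; the classical Runge-Kutta scheme RK4 stands in for an exact solver) approximates $\Phi_t$ for the pendulum vector field $F(q,p) = (p, -\sin q)$ and checks the group property $\Phi_{t_1} \circ \Phi_{t_2} = \Phi_{t_1 + t_2}$ up to integration error.

```python
# Approximating the flow map Phi_t of x' = F(x) with a classical
# fourth-order Runge-Kutta (RK4) integrator and checking the group
# property Phi_{t1+t2} = Phi_{t1} o Phi_{t2} numerically.
# Vector field: the mathematical pendulum F(q, p) = (p, -sin q).
import math

def F(x):
    q, p = x
    return (p, -math.sin(q))

def rk4_step(x, h):
    """One RK4 step of size h for x' = F(x)."""
    def add(u, v, c):  # componentwise u + c*v
        return tuple(ui + c * vi for ui, vi in zip(u, v))
    k1 = F(x)
    k2 = F(add(x, k1, h / 2))
    k3 = F(add(x, k2, h / 2))
    k4 = F(add(x, k3, h))
    return tuple(
        xi + h / 6 * (a + 2 * b + 2 * c + d)
        for xi, a, b, c, d in zip(x, k1, k2, k3, k4)
    )

def flow(t, x0, n=1000):
    """Approximate Phi_t(x0) with n RK4 steps."""
    x, h = x0, t / n
    for _ in range(n):
        x = rk4_step(x, h)
    return x

x0 = (1.0, 0.0)
lhs = flow(0.7, x0)             # Phi_{0.3 + 0.4}(x0)
rhs = flow(0.4, flow(0.3, x0))  # (Phi_{0.4} o Phi_{0.3})(x0)
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```

Because RK4 is only approximate, the identity holds up to a small discretization error; shrinking the step size tightens the achievable tolerance.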
\emph{Locally}, $\Phi$ is a “group representation”\footnote{If $T = +\infty$, it really is a group representation of $(\R,+)$.}: \begin{enumerate}[(i)] \item $\Phi_0 = \id_U$ \item $\Phi_{t_1} \circ \Phi_{t_2} = \Phi_{t_1 + t_2}$ as long as $t_1 , t_2 , t_1 + t_2 \in (-T,+T)$ \item $\Phi_t \circ \Phi_{-t} = \Phi_0 = \id_U$ for all $t \in (-T,+T)$ \end{enumerate} The ODE \eqref{odes:eqn:primordial_ode} uniquely determines the flow $\Phi$ and vice versa: on the one hand, we can \emph{define} the flow map $\Phi_t (x_0) = x(t)$ for each initial condition $x_0$ and $t$ as the solution of \eqref{odes:eqn:primordial_ode}. Property~(i) follows from $x(0) = x_0$. The other two properties follow from the explicit construction of the solution later on in the proof of Theorem~\ref{odes:thm:Picard_Lindeloef}. On the other hand, assume we are given a flow $\Phi$ with properties (i)--(iii) above which is differentiable in $t$. Then we can recover the vector field $F$ at the point $x_0$ by taking the time derivative of the flow, \begin{align*} \left . \frac{\dd}{\dd t} \Phi_t(x_0) \right \vert_{t = 0} &= \lim_{\delta \to 0} \tfrac{1}{\delta} \bigl ( \Phi_{\delta}(x_0) - \Phi_0(x_0) \bigr ) \\ & = \lim_{\delta \to 0} \tfrac{1}{\delta} \bigl ( \Phi_{\delta}(x_0) - x_0 \bigr ) \\ & =: F \bigl ( \Phi_0(x_0) \bigr ) = F(x_0) . 
\end{align*} Now by property (ii), the above also holds at any later point in time: \begin{align*} \frac{\dd}{\dd t} \Phi_t(x_0) &= \lim_{\delta \to 0} \tfrac{1}{\delta} \bigl ( \Phi_{t + \delta}(x_0) - \Phi_t(x_0) \bigr ) \\ & \overset{(ii)}{=} \lim_{\delta \to 0} \frac{1}{\delta} \Bigl ( \Phi_{\delta} \bigl ( \underbrace{\Phi_t(x_0)}_{= y} \bigr ) - \Phi_t(x_0) \Bigr ) \\ & = \lim_{\delta \to 0} \tfrac{1}{\delta} \bigl ( \Phi_{\delta}(y) - y \bigr ) \\ &= F(y) = F \bigl ( \Phi_t(x_0) \bigr ) \end{align*} In other words, we have just shown \begin{align*} \frac{\dd}{\dd t} \Phi_t(x_0) &= F \bigl ( \Phi_t(x_0) \bigr ) , && \Phi_0(x_0) = x_0 , \\ \Leftrightarrow \; \frac{\dd}{\dd t} \Phi_t &= F \circ \Phi_t , && \Phi_0 = \id . \end{align*} Clearly, there are three immediate questions: \marginpar{2013.09.10} \begin{enumerate}[(1)] \item When does the flow exist? \item Is it unique? \item How large can we make $T$? \end{enumerate} \section{Linear ODEs} \label{odes:examples} The simplest ODEs are \emph{linear} where the vector field \begin{align*} F(x) = H x \end{align*} is defined in terms of a $n \times n$ matrix $H = \bigl ( H_{jk} \bigr )_{1 \leq j , k \leq n} \in \mathrm{Mat}_{\C}(n)$. The flow $\Phi_t = \e^{t H}$ is given in terms of the \emph{matrix exponential} \begin{align} \e^{t H} = \exp(t H) := \sum_{k = 0}^{\infty} \frac{t^k}{k!} \, H^k . 
\end{align} This series converges in the vector space $\mathrm{Mat}_{\C}(n) \cong \C^{n^2}$: if we choose \begin{align*} \norm{H} := \max_{1 \leq j , k \leq n} \abs{H_{jk}} \end{align*} as norm\footnote{We could have picked \emph{any} other norm on $\C^{n^2}$ since all norms on finite-dimensional vector spaces are equivalent.}, we can see that the sequence of partial sums $S_N := \sum_{k = 0}^N \frac{t^k}{k!} \, H^k$ is a Cauchy sequence: using $\norm{H^k} \leq n^{k-1} \, \norm{H}^k \leq \bigl ( n \, \norm{H} \bigr )^k$, we estimate \begin{align*} \bnorm{S_N - S_K} &\leq \norm{\sum_{k = K+1}^N \frac{t^k}{k!} \, H^k} \leq \sum_{k = K+1}^N \frac{\abs{t}^k}{k!} \, \norm{H^k} \\ &\leq \sum_{k = K+1}^N \frac{\abs{t}^k}{k!} \, \bigl ( n \, \norm{H} \bigr )^k \leq \sum_{k = K+1}^{\infty} \frac{\abs{t}^k}{k!} \, \bigl ( n \, \norm{H} \bigr )^k \\ &\xrightarrow{K \to \infty} 0 . \end{align*} Using the completeness of $\C^{n^2}$ with respect to the norm $\norm{\cdot}$ (which follows from the completeness of $\C$ with respect to the absolute value), we now deduce that $S_N \rightarrow \e^{t H}$ as $N \to \infty$. Moreover, we also obtain the norm bound $\bnorm{\e^{t H}} \leq \e^{\abs{t} \, n \, \norm{H}}$. \medskip \noindent From linear algebra we know that \emph{any} matrix $H$ is similar to a matrix of the form \begin{align*} J = U^{-1} \, H \, U = \left ( % \begin{matrix} J_{r_1}(\lambda_1) & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & J_{r_N}(\lambda_N) \\ \end{matrix} % \right ) \end{align*} where the $\{ \lambda_j \}_{j = 1}^N$ are the eigenvalues of $H$ and the \begin{align*} J_{r_j}(\lambda_j) = \left ( % \begin{matrix} \lambda_j & 1 & 0 & 0 \\ 0 & \ddots & \ddots & 0 \\ \vdots & \ddots & \lambda_j & 1 \\ 0 & \cdots & 0 & \lambda_j \\ \end{matrix} % \right ) =: \lambda_j \, \id_{\C^{r_j}} + N_{r_j} \in \mathrm{Mat}_{\C}(r_j) \end{align*} are the $r_j$-dimensional Jordan blocks associated to $\lambda_j$. 
Now the exponential \begin{align*} \e^{t H} &= \sum_{k = 0}^{\infty} \frac{t^k}{k!} \, \bigl ( U \, J \, U^{-1} \bigr )^k = U \, \Biggl ( \sum_{k = 0}^{\infty} \frac{t^k}{k!} \, J^k \Biggr ) \, U^{-1} \\ &= U \, \e^{t J} \, U^{-1} \end{align*} is also similar to the exponential of $t J$. Moreover, one can see that \begin{align*} \e^{t J} &= \left ( % \begin{matrix} \e^{t J_{r_1}(\lambda_1)} & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \e^{t J_{r_N}(\lambda_N)} \\ \end{matrix} % \right ) \end{align*} and the matrix exponential of a Jordan block can be computed explicitly (\cf problem~6 on Sheet~02), \begin{align*} \e^{t J_r(\lambda)} &= \e^{t \lambda} \, \left ( % \begin{matrix} 1 & t & \frac{t^2}{2} & \cdots & \cdots & \frac{t^{r-1}}{(r-1)!} \\ 0 & 1 & t & \frac{t^2}{2} & \cdots & \frac{t^{r-2}}{(r-2)!} \\ 0 & 0 & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & 1 & t & \frac{t^2}{2} \\ 0 & 0 & 0 & 0 & 1 & t \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{matrix} % \right ) . \end{align*} In case $H$ is similar to a diagonal matrix, \begin{align*} H = U \, \left ( % \begin{matrix} \lambda_1 & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \lambda_n \\ \end{matrix} % \right ) \, U^{-1} , \end{align*} this formula simplifies to \begin{align*} \e^{t H} = U \, \left ( % \begin{matrix} \e^{t \lambda_1} & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \e^{t \lambda_n} \\ \end{matrix} % \right ) \, U^{-1} . \end{align*} \begin{example}[Free Schrödinger equation] The solution to % \begin{align*} \ii \tfrac{\dd}{\dd t} \widehat{\psi}(t) &= \tfrac{1}{2} k^2 \, \widehat{\psi}(t) , && \widehat{\psi}(0) = \widehat{\psi}_0 \in \C , \end{align*} % is $\widehat{\psi}(t) = \e^{- \ii t \, \frac{1}{2} k^2} \widehat{\psi}_0$. 
\end{example} \paragraph{Inhomogeneous linear ODEs} In case the linear ODE is inhomogeneous, \begin{align*} \dot{x}(t) = A x(t) + f(t) , && x(0) = x_0 , \end{align*} it turns out we can still find a closed-form solution: \begin{align} x(t) &= \e^{t A} x_0 + \int_0^t \dd s \, \e^{(t-s) A} \, f(s) \label{odes:eqn:inhomogeneous_linear_solution} \end{align} \paragraph{Higher-order ODEs} \begin{align} \frac{\dd^n}{\dd t^n} x &= F(x) + f(t) , && x^{(j)}(0) = \alpha^{(j)} , \; j = 0 , \ldots , n-1 , \label{odes:eqn:higher_order_ode} \end{align} can be expressed as a system of \emph{first}-order ODEs. If $x \in \R$ for simplicity, we can write \begin{align*} y_1 &:= x \\ y_2 &:= \tfrac{\dd}{\dd t} x \\ y_j &:= \tfrac{\dd}{\dd t} y_{j-1} = \tfrac{\dd^{j-1}}{\dd t^{j-1}} x \\ \tfrac{\dd}{\dd t} y_n &= \tfrac{\dd^n}{\dd t^n} x = F(x) + f(t) = F(y_1) + f(t) . \end{align*} Thus, assuming the vector field $F(x) = \lambda x$, $\lambda \in \R$, is linear and the dimension of the underlying space $1$, we obtain the first-order linear equation \begin{align} \frac{\dd}{\dd t} \left ( % \begin{matrix} y_1 \\ \vdots \\ \vdots \\ \vdots \\ y_n \\ \end{matrix} % \right ) = \underbrace{\left ( % \begin{matrix} 0 & 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & & & \ddots & 1 \\ \lambda & 0 & \cdots & \cdots & 0 \\ \end{matrix} % \right )}_{=: H} \left ( % \begin{matrix} y_1 \\ \vdots \\ \vdots \\ \vdots \\ y_n \\ \end{matrix} % \right ) + \left ( % \begin{matrix} 0 \\ \vdots \\ \vdots \\ 0 \\ f(t) \\ \end{matrix} % \right ) \label{odes:eqn:higher_order_ode_reduced_first_order} \end{align} and the solution is given by \eqref{odes:eqn:inhomogeneous_linear_solution} with initial condition $y_0 = \bigl ( \alpha^{(0)} , \alpha^{(1)} , \ldots , \alpha^{(n-1)} \bigr )$. The solution to \eqref{odes:eqn:higher_order_ode} is then just the first component $y_1(t)$. 
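The variation-of-constants formula above can be sanity-checked numerically in the scalar case. In the Python sketch below (function names are ours) the integral is approximated by the trapezoidal rule and compared with the exact solution of $\dot{x} = x + 1$, $x(0) = 2$, which is $x(t) = 3 \e^{t} - 1$.

```python
# Numerical check of the variation-of-constants formula
#   x(t) = e^{ta} x0 + \int_0^t e^{(t-s)a} f(s) ds
# for the scalar inhomogeneous ODE x' = a x + f(t).
import math

def duhamel(a, f, x0, t, n=2000):
    """Evaluate e^{ta} x0 + int_0^t e^{(t-s)a} f(s) ds via the trapezoidal rule."""
    h = t / n
    integral = 0.0
    for i in range(n + 1):
        s = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal endpoint weights
        integral += w * math.exp((t - s) * a) * f(s)
    integral *= h
    return math.exp(t * a) * x0 + integral

# x' = x + 1, x(0) = 2  has the exact solution  x(t) = 3 e^t - 1.
a, x0, t = 1.0, 2.0, 1.5
approx = duhamel(a, lambda s: 1.0, x0, t)
exact = 3.0 * math.exp(t) - 1.0
assert abs(approx - exact) < 1e-5
```

The same check works for any continuous $f$; only the quadrature error limits the agreement.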
\begin{example}[Newton's equation of motion for a free particle] The dynamics of a free particle of mass $m$ starting at $x_0$ with initial momentum $p_0$ is % \begin{align*} \ddot{x} = 0 , && x(0) = x_0 , \; \dot{x}(0) = \tfrac{p_0}{m} . \end{align*} % This leads to the first-order equation % \begin{align*} \frac{\dd}{\dd t} \left ( % \begin{matrix} y_1 \\ y_2 \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} 0 & 1 \\ 0 & 0 \\ \end{matrix} % \right ) \left ( % \begin{matrix} y_1 \\ y_2 \\ \end{matrix} % \right ) =: H y . \end{align*} % The matrix $H$ is nilpotent, $H^2 = 0$, and thus the exponential series % \begin{align*} \e^{t H} = \sum_{k = 0}^{\infty} \frac{t^k}{k!} \, H^k = \id + t \, H + 0 = \left ( % \begin{matrix} 1 & t \\ 0 & 1 \\ \end{matrix} % \right ) \end{align*} % terminates after finitely many terms. Hence, the solution is % \begin{align*} x(t) = y_1(t) = x_0 + t \, \tfrac{p_0}{m} . \end{align*} % \end{example} \section{Existence and uniqueness of solutions} \label{odes:existence_uniqueness} Now we turn back to the non-linear case. One important example of non-linear ODEs is Hamilton's equations of motion~\eqref{intro:eqn:hamiltons_eom} which describe the motion of a classical particle; this will be the content of Chapter~\ref{classical_mechanics}. The standard existence and uniqueness result is the Picard-Lindelöf theorem. The crucial idea is to approximate the solution to the ODE and to improve this approximation iteratively. To have a notion of convergence of solutions, we need to introduce the concept of a metric space. \begin{definition}[Metric space] Let $\mathcal{X}$ be a set. A mapping $d : \mathcal{X} \times \mathcal{X} \longrightarrow [0,+\infty)$ with properties % \begin{enumerate}[(i)] \item $d(x,y) = 0$ exactly if $x = y$ (definiteness), \item $d(x,y) = d(y,x)$ (symmetry), and \item $d(x,z) \leq d(x,y) + d(y,z)$ (triangle inequality), \end{enumerate} % for all $x,y,z \in \mathcal{X}$ is called a \emph{metric}. 
We refer to $(\mathcal{X},d)$ as a \emph{metric space} (often only denoted as $\mathcal{X}$). A metric space $(\mathcal{X},d)$ is called \emph{complete} if all Cauchy sequences $(x_{(n)})$ (with respect to the metric) converge to some $x \in \mathcal{X}$. \end{definition} A metric gives a notion of distance -- and thus a notion of convergence, continuity and open sets (a topology): quite naturally, one considers the topology generated by open balls defined in terms of $d$. There are more general ways to study convergence, and alternative topologies (\eg Fréchet topologies or weak topologies) can be both useful and necessary. \begin{example} % \begin{enumerate}[(i)] \item Let $\mathcal{X}$ be a set and define % \begin{align*} d(x,y) := % \begin{cases} 1 \qquad x \neq y \\ 0 \qquad x = y \\ \end{cases} % . \end{align*} % It is easy to see that $d$ satisfies the axioms of a metric and that $\mathcal{X}$ is complete with respect to $d$. This particular choice leads to the discrete topology. \item Let $\mathcal{X} = \Cont([a,b],U)$, $U \subseteq \R^n$, be the space of continuous functions on an interval. Then one naturally considers the metric % \begin{align*} d_{\infty}(f,g) := \sup_{x \in [a,b]} \babs{f(x) - g(x)} = \max_{x \in [a,b]} \babs{f(x) - g(x)} \end{align*} % with respect to which $\Cont([a,b],U)$ is complete. \end{enumerate} % \end{example} One important result we need concerns the existence of so-called fixed points in complete metric spaces: \begin{theorem}[Banach's fixed point theorem]\label{odes:thm:Banach_fixed_point} Let $(\mathcal{X},d)$ be a complete metric space and $P$ a \emph{contraction}, \ie a map for which there exists $C \in [0,1)$ so that for all $x , y \in \mathcal{X}$ % \begin{align} d \bigl ( P(x) , P(y) \bigr ) \leq C \, d(x,y) \label{odes:eqn:contraction_map} \end{align} % holds. 
Then there exists a \emph{unique} fixed point $x_{\ast} = P(x_{\ast})$ so that for any $x_0 \in \mathcal{X}$, the sequence % \begin{align*} \bigl \{ P^n(x_0) \bigr \}_{n \in \N} = \bigl \{ \underbrace{P \circ \cdots \circ P}_{\mbox{$n$ times}} (x_0) \bigr \}_{n \in \N} \end{align*} % converges to $x_{\ast} \in \mathcal{X}$. \end{theorem} \begin{proof} Let us define $x_n := P^n(x_0)$ for brevity. To show existence of \emph{a} fixed point, we will prove that $\{ x_n \}_{n \in \N}$ is a Cauchy sequence. First of all, the distance between \emph{neighboring} sequence elements goes to $0$, % \begin{align} d(x_{n+1} , x_n) &= d \bigl ( P(x_n) , P(x_{n-1}) \bigr ) \leq C \, d(x_n , x_{n-1}) \notag \\ &\leq \ldots \leq C^n \, d(x_1,x_0) . \label{odes:eqn:contraction_estimate_neighbors} \end{align} % Without loss of generality, we shall assume that $m < n$. Then, we use the triangle inequality to estimate the distance between $x_n$ and $x_m$ by distances to neighbors, % \begin{align*} d(x_n,x_m) &\leq d(x_n , x_{n-1}) + d(x_{n-1},x_{n-2}) + \ldots + d(x_{m+1},x_m) = \sum_{j = m+1}^n d(x_j,x_{j-1}) . \end{align*} % Hence, we can plug in the estimate on the distance between neighbors, \eqref{odes:eqn:contraction_estimate_neighbors}, and sum over $j$, % \begin{align*} d(x_n,x_m) &\leq \sum_{j = m+1}^n C^{j-1} \, d(x_1,x_0) = C^m \, \sum_{j = 0}^{n-m-1} C^j \, d(x_1,x_0) \\ &\leq C^m \, d(x_1,x_0) \, \sum_{j = 0}^{\infty} C^j = \frac{C^m}{1-C} \, d(x_1,x_0) . \end{align*} % If we choose $m$ large enough, we can make the right-hand side as small as we want and thus, $\{ x_n \}_{n \in \N}$ is a Cauchy sequence. By assumption the space $(\mathcal{X},d)$ is complete, and thus, there exists $x_{\ast}$ so that % \begin{align*} \lim_{n \to \infty} x_n = \lim_{n \to \infty} P^n(x_0) = x_{\ast} . \end{align*} % It remains to show that this fixed point is unique. Pick another initial point $x_0'$ with limit $x_{\ast}' = \lim_{n \to \infty} P^n(x_0')$; this limit exists by the arguments above. 
Both fixed points satisfy $P(x_{\ast}) = x_{\ast}$ and $P(x_{\ast}') = x_{\ast}'$, because % \begin{align*} P(x_{\ast}) &= P \bigl ( \lim_{n \to \infty} P^n(x_0) \bigr ) = \lim_{n \to \infty} P^{n+1}(x_0) = x_{\ast} \end{align*} % holds by continuity of $P$ (any contraction is continuous), and analogously for $x_{\ast}'$. Using the contractivity property \eqref{odes:eqn:contraction_map}, we then estimate the distance between $x_{\ast}$ and $x_{\ast}'$ by % \begin{align*} d(x_{\ast},x_{\ast}') &= d \bigl ( P(x_{\ast}) , P(x_{\ast}') \bigr ) \leq C \, d(x_{\ast},x_{\ast}') . \end{align*} % Since $0 \leq C < 1$, the above equation can only hold if $d(x_{\ast},x_{\ast}') = 0$ which implies $x_{\ast} = x_{\ast}'$. Hence, the fixed point is also unique. \marginpar{2013.09.12} \end{proof} To ensure the existence of the flow, we need to impose conditions on the vector field. One common choice is to require $F$ to be \emph{Lipschitz}, meaning there exists a constant $L > 0$ so that \begin{align} \babs{F(x) - F(x')} \leq L \babs{x - x'} \label{odes:eqn:Lipschitz_condition} \end{align} holds for all $x$ and $x'$ in some open neighborhood $U_{x_0}$ of the initial point $x_0 = x(0)$. The Lipschitz condition has two implications: first of all, it states that the vector field grows at most linearly. Secondly, if the vector field is continuously differentiable, $L$ is also a bound on the norm of the differential, $\sup_{x \in U_{x_0}} \norm{DF(x)} \leq L$. However, not all vector fields which are Lipschitz need to be continuously differentiable; condition~\eqref{odes:eqn:Lipschitz_condition} is in fact weaker than requiring that $\sup_{x \in U_{x_0}} \norm{DF(x)}$ is bounded. For instance, the vector field $F(x) = - \abs{x}$ in one dimension is Lipschitz on all of $\R$, but not differentiable at $x = 0$. 
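Banach's fixed point theorem can be observed at work numerically: $\cos$ maps $[0,1]$ into itself and is a contraction there, since $\abs{\cos'(x)} = \abs{\sin(x)} \leq \sin(1) < 1$. A minimal Python sketch (the names are purely illustrative):

```python
import math

def fixed_point(P, x0, n=100):
    # Banach iteration x_{k+1} = P(x_k); converges whenever P is a contraction
    x = x0
    for _ in range(n):
        x = P(x)
    return x

# cos maps [0, 1] into itself and |cos'(x)| = |sin(x)| <= sin(1) < 1 there,
# so it is a contraction; the fixed point solves cos(x) = x
x_star = fixed_point(math.cos, 0.0)
x_star2 = fixed_point(math.cos, 1.0)  # independence of the starting point
```

As the theorem predicts, both starting points produce the same fixed point, and the convergence is geometric with rate $C = \sin(1)$.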
So if the vector field is locally Lipschitz, the flow exists at least for some time interval: \begin{theorem}[Picard-Lindelöf]\label{odes:thm:Picard_Lindeloef} Let $F$ be a continuous vector field, $F \in \Cont(U,\R^n)$, $U \subseteq \R^n$ open, which defines a system of differential equations, % \begin{align} \dot{x} &= F(x) \label{odes:eqn:ode_Picard_Lindeloef} . \end{align} % Assume for a certain initial condition $x_0 \in U$ there exists a closed ball % \begin{align*} B_{\rho}(x_0) := \bigl \{ x \in \R^n \; \vert \; \sabs{x - x_0} \leq \rho \bigr \} \subseteq U , && \rho > 0 , \end{align*} % such that $F$ is Lipschitz on $B_{\rho}(x_0)$ with Lipschitz constant $L$ (\cf equation~\eqref{odes:eqn:Lipschitz_condition}). Then the initial value problem, equation~\eqref{odes:eqn:ode_Picard_Lindeloef} with $x(0) = x_0$, has a unique solution $t \mapsto x(t)$ for times $\abs{t} \leq T := \min \bigl ( \nicefrac{\rho}{v_{\max}} , \nicefrac{1}{2L} \bigr )$ where the maximal velocity is defined as $v_{\max} := \sup_{x \in B_{\rho}(x_0)} \sabs{F(x)}$. \end{theorem} \begin{proof} The proof consists of three steps: \medskip \noindent \textbf{Step 1:} We can rewrite the initial value problem equation~\eqref{odes:eqn:ode_Picard_Lindeloef} with $x(0) = x_0$ as % \begin{align*} x(t) = x_0 + \int_0^t \dd s \, F \bigl ( x(s) \bigr ) . \end{align*} % This equation can be solved iteratively: we define $x_{(0)}(t) := x_0$ and the $(n+1)$-th iterate $x_{(n+1)}(t) := \bigl ( P (x_{(n)}) \bigr )(t)$ in terms of the so-called Picard map % \begin{align*} \bigl ( P (x_{(n)}) \bigr )(t) := x_0 + \int_0^t \dd s \, F \bigl ( x_{(n)}(s) \bigr ) . \end{align*} % \textbf{Step 2:} We will determine $T > 0$ small enough so that $P : \mathcal{X} \longrightarrow \mathcal{X}$ is a contraction on the space of trajectories which start at $x_0$, % \begin{align*} \mathcal{X} := \Bigl \{ y \in \Cont \bigl ([-T,+T] , B_{\rho}(x_0) \bigr ) \; \big \vert \; y(0) = x_0 \Bigr \} . 
\end{align*} % First, we note that $\mathcal{X}$ is a \emph{complete} metric space if we use % \begin{align*} \mathrm{d} (y,z) := \sup_{t \in [-T,+T]} \babs{y(t) - z(t)} , && y,z \in \mathcal{X} , \end{align*} % to measure distances between trajectories. Hence, Banach's fixed point theorem~\ref{odes:thm:Banach_fixed_point} applies, and once we show that $P$ is a contraction, we know that $x_{(n)}$ converges to the (unique) fixed point $x = P(x) \in \mathcal{X}$; this fixed point, in turn, is the solution to the ODE \eqref{odes:eqn:ode_Picard_Lindeloef} with $x(0) = x_0$. To ensure $P$ is a contraction with $C = \nicefrac{1}{2}$, we choose $T \leq \nicefrac{1}{2L}$. Then the Lipschitz property implies that for any $y,z \in \mathcal{X}$, we have % \begin{align*} \mathrm{d} \bigl ( P(y) , P(z) \bigr ) &= \sup_{t \in [-T,+T]} \abs{\int_0^t \dd s \, \bigl [ F \bigl ( y(s) \bigr ) - F \bigl ( z(s) \bigr ) \bigr ]} \\ &\leq T \, L \sup_{t \in [-T,+T]} \babs{y(t) - z(t)} \leq \tfrac{1}{2L} L \, \mathrm{d} (y,z) = \tfrac{1}{2} \mathrm{d} (y,z) . \end{align*} % \textbf{Step 3:} We need to ensure that the trajectory does not leave the ball $B_{\rho}(x_0)$ for all $t \in [-T,+T]$: For any $y \in \mathcal{X}$, we have \marginpar{2013.09.17} % \begin{align*} \babs{\bigl ( P(y) \bigr )(t) - x_0} = \abs{\int_0^t \dd s \, F \bigl ( y(s) \bigr )} \leq \abs{t} \, v_{\max} \leq T \, v_{\max} \leq \rho . \end{align*} % Hence, as long as $T \leq \min \bigl \{ \nicefrac{1}{2L} , \nicefrac{\rho}{v_{\max}} \bigr \}$, trajectories exist and do not leave $U$. This concludes the proof. \end{proof} This existence result for a \emph{single} initial condition implies immediately that \begin{corollary}\label{odes:cor:Picard_Lindeloef_flow} Assume we are in the setting of Theorem~\ref{odes:thm:Picard_Lindeloef}. Then for any $x_V \in U$ there exists an open neighborhood $V$ of $x_V$ so that the flow exists as a map $\Phi : [-T,+T] \times V \longrightarrow U$. \end{corollary} \begin{proof} Pick any $x_V \in U$. 
Then according to the Picard-Lindelöf Theorem~\ref{odes:thm:Picard_Lindeloef} there exists an open ball $B_{\rho}(x_V) \subseteq U$ around $x_V$ so that the trajectory $t \mapsto x(t)$ with $x(0) = x_V$ exists for all times $t \in [-T,+T]$ where $T = \min \bigl \{ \nicefrac{\rho}{v_{\max}} , \nicefrac{1}{2L} \bigr \}$ and $v_{\max} = \sup_{x \in B_{\rho}(x_V)} \sabs{F(x)}$. Now consider initial conditions $x_0 \in B_{\nicefrac{\rho}{2}}(x_V)$: since the vector field satisfies the same Lipschitz condition as before, the arguments from Step~3 of the proof of the Picard-Lindelöf Theorem tell us that for times $t \in [-\nicefrac{T}{2} , +\nicefrac{T}{2}]$, the particles will not leave $B_{\rho}(x_V)$. Put more concretely: the maximum velocity of the particle (which is the maximum of the vector field) dictates how far it can go. So even if we start at the border of the ball with radius $\nicefrac{\rho}{2}$, for short enough times (\eg $\nicefrac{T}{2}$) we can make sure it never reaches the boundary of $B_{\rho}(x_V)$. This means the flow map $\Phi : [-\nicefrac{T}{2} , + \nicefrac{T}{2}] \times B_{\nicefrac{\rho}{2}}(x_V) \longrightarrow B_{\rho}(x_V)$ exists. Note that since $B_{\nicefrac{\rho}{2}}(x_V)$ is contained in $B_{\rho}(x_V) \subseteq U$, we know that the smaller ball is also a subset of $U$. \end{proof} Another important fact is that the flow $\Phi$ inherits the smoothness of the vector field which generates it.% \begin{theorem}\label{odes:thm:smoothness_flow} Assume the vector field $F$ is $k$ times continuously differentiable, $F \in \Cont^k ( U , \R^n )$, $U \subseteq \R^n$. Then the flow $\Phi$ associated to \eqref{odes:eqn:ode_Picard_Lindeloef} is also $k$ times continuously differentiable, \ie $\Phi \in \Cont^k \bigl ( [-T,+T] \times V , U \bigr )$ where $V \subset U$ is suitable. \end{theorem} \begin{proof} We refer to Chapter 3, Section 7.3 in \cite{Arnold:ode:2006}. 
\end{proof} \subsection{Interlude: the Grönwall lemma} \label{odes:existence_uniqueness:Groenwall} To show \emph{global} uniqueness of the flow, we need to make use of the exceedingly useful Grönwall lemma. It is probably the simplest “differential inequality”. \begin{lemma}[Grönwall]\label{odes:lem:Groenwall} Let $u$ be differentiable on the interior of $I = [a,b]$ or $I = [a,+\infty)$, and satisfy the differential inequality % \begin{align*} \dot{u}(t) \leq \beta(t) \, u(t) \end{align*} % where $\beta$ is a real-valued, continuous function on $I$. Then % \begin{align} u(t) \leq u(a) \, \e^{\int_a^t \dd s \, \beta(s)} \label{odes:eqn:Groenwall_estimate} \end{align} % holds for all $t \in I$. \end{lemma} \begin{proof} Define the function % \begin{align*} v(t) := \e^{\int_a^t \dd s \, \beta(s)} . \end{align*} % Then $\dot{v}(t) = \beta(t) \, v(t)$ and $v(a) = 1$ hold, and we can use the assumption on $u$ to estimate % \begin{align*} \frac{\dd}{\dd t} \frac{u(t)}{v(t)} &= \frac{\dot{u}(t) \, v(t) - u(t) \, \dot{v}(t)}{v(t)^2} \\ & = \frac{\dot{u}(t) \, v(t) - u(t) \, \beta(t) \, v(t)}{v(t)^2} \\ &\leq \frac{\beta(t) \, u(t) \, v(t) - \beta(t) \, u(t) \, v(t)}{v(t)^2} = 0 . \end{align*} % Hence, equation~\eqref{odes:eqn:Groenwall_estimate} follows from the mean value theorem, % \begin{align*} \frac{u(t)}{v(t)} \leq \frac{u(a)}{v(a)} = u(a) . \end{align*} % \end{proof} One important application is to transfer information on the vector fields to information on the flows themselves. For instance, if the vector fields of two ODEs are close, then also their flows remain close -- but only for logarithmic times at best. After that, the flows will usually diverge \emph{exponentially}. If one applies this reasoning to the \emph{same} ODE for different initial conditions, then one observes the same effect: no matter how close initial conditions are picked, usually, the trajectories will diverge exponentially. This fact gives rise to \emph{chaos}. 
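This exponential divergence can be seen already for the linear vector field $F(x) = x$ with Lipschitz constant $L = 1$, where the Grönwall bound $u(t) \leq u(0) \, \e^{L t}$ is essentially saturated. A small Python sketch (forward Euler, purely illustrative):

```python
import math

def euler_flow(x0, t=5.0, n=100000):
    # forward-Euler integration of x' = x (Lipschitz constant L = 1)
    h = t / n
    x = x0
    for _ in range(n):
        x += h * x
    return x

eps, t, L = 1e-6, 5.0, 1.0
gap = abs(euler_flow(1.0 + eps, t) - euler_flow(1.0, t))
# Groenwall: |x(t) - x~(t)| <= |x0 - x~0| e^{L t}
bound = eps * math.exp(L * t)
```

For this linear ODE the distance between the trajectories grows exactly like $\e^{L t}$, so the gap nearly coincides with the Grönwall bound.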
\begin{proposition}\label{odes:prop:closeness_vector_fields_closeness_flows} Suppose the vector field $F_{\eps} = F_0 + \eps \, F_1$ satisfies the Lipschitz condition~\eqref{odes:eqn:Lipschitz_condition} for some open set $U \subseteq \R^n$ with Lipschitz constant $L > 0$, consisting of a leading-order term $F_0$ that is also Lipschitz with constant $L$, and a small, bounded perturbation $\eps \, F_1$, \ie $0 < \eps \ll 1$ and $C := \sup_{x \in U} \sabs{F_1(x)} < \infty$. Then the flows $\Phi^{\eps}$ and $\Phi^0$ associated to $\dot{x}^{\eps} = F_{\eps}$ and $\dot{x}^{(0)} = F_0$ exist for the same times $t \in [-T,+T]$, and the two are $\order(\eps)$ close in the following sense: there exists an open neighborhood $V \subseteq U$ so that % \begin{align} \sup_{x \in V}\babs{\Phi^{\eps}_t(x) - \Phi^0_t(x)} \leq \eps \, \frac{C}{L} \, \bigl ( \e^{L \abs{t}} - 1 \bigr ) \label{odes:eqn:closeness_vector_fields_closeness_flows} \end{align} % holds. \end{proposition} An important observation is that this result is in some sense optimal: while there are a few specific ODEs (in particular linear ones) for which estimates analogous to \eqref{odes:eqn:closeness_vector_fields_closeness_flows} hold for longer times, in general the Proposition really mirrors what happens: solutions will diverge exponentially. In physics, the time scale at which chaos sets in, $\order(\abs{\ln \eps})$, is known as \emph{Ehrenfest time scale}; classical chaos is the reason why semiclassics (approximating the \emph{linear} quantum evolution by \emph{non-linear} classical dynamics) is limited to the Ehrenfest time scale. \begin{proof} First of all, due to Corollary~\ref{odes:cor:Picard_Lindeloef_flow}, there exist $T > 0$ and an open neighborhood $V$ so that for all initial conditions $x_0 \in V$, the trajectories do not leave $U$. Let $x^{\eps}$ and $x^{(0)}$ be the trajectories which solve $\dot{x}^{\eps} = F_{\eps}$ and $\dot{x}^{(0)} = F_0$ for the initial condition $x^{\eps}(0) = x_0 = x^{(0)}(0)$. 
Moreover, let us define the difference vector $X(t) := x^{\eps}(t) - x^{(0)}(t)$ and set $u(t) := \sabs{X(t)}$. Using $\abs{x} - \abs{y} \leq \abs{x-y}$ for vectors $x , y \in \R^n$ (which follows from the triangle inequality), we obtain % \begin{align} \abs{\frac{\dd}{\dd t} X(t)} &= \abs{\lim_{\delta \to 0} \frac{X(t+\delta) - X(t)}{\delta}} = \lim_{\delta \to 0} \frac{\babs{X(t+\delta) - X(t)}}{\sabs{\delta}} \notag \\ &\geq \lim_{\delta \to 0} \frac{\sabs{X(t+\delta)} - \sabs{X(t)}}{\sabs{\delta}} = \frac{\dd}{\dd t} u(t) . \label{odes:eqn:abs_ddt_X_geq_ddt_abs_X} \end{align} % Hence, we can estimate the derivative of $u(t)$ from above by % \begin{align*} \frac{\dd}{\dd t} u(t) &\leq \abs{\frac{\dd}{\dd t} \bigl ( x^{\eps}(t) - x^{(0)}(t) \bigr )} = \Babs{F_{\eps} \bigl ( x^{\eps}(t) \bigr ) - F_0 \bigl ( x^{(0)}(t) \bigr )} \\ &\leq \Babs{F_0 \bigl ( x^{\eps}(t) \bigr ) - F_0 \bigl ( x^{(0)}(t) \bigr )} + \eps \, \babs{F_1 \bigl ( x^{\eps}(t) \bigr )} \\ &\leq L \, \babs{x^{\eps}(t) - x^{(0)}(t)} + \eps \, C = L \, u(t) + \eps \, C . \end{align*} % The above inequality is in fact equivalent to % \begin{align*} \frac{\dd}{\dd t} \bigl ( \e^{- L t} \, u(t) \bigr ) &= \e^{- L t} \, \bigl ( \dot{u}(t) - L \, u(t) \bigr ) \leq \eps \, C \, \e^{- L t} . \end{align*} % Now we integrate left- and right-hand side: since we assume both trajectories to start at the same point $x_0$, we have $u(0) = \babs{x^{\eps}(0) - x^{(0)}(0)} = \babs{x_0 - x_0} = 0$. Integrating the inequality from $0$ to $t > 0$, we obtain % \begin{align*} \int_0^t \dd s \, \frac{\dd}{\dd s} \bigl ( \e^{- L s} \, u(s) \bigr ) &= \bigl [ \e^{- L s} \, u(s) \bigr ]_0^t = \e^{- L t} \, u(t) \\ &\leq \int_0^t \dd s \, \eps \, C \, \e^{- L s} = - \eps \, \frac{C}{L} \, \bigl [ \e^{- L s} \bigr ]_0^t = \eps \, \frac{C}{L} \, \bigl ( 1 - \e^{-L t} \bigr ) . \end{align*} % A similar result is obtained for $t < 0$. 
Hence, we get % \begin{align*} u(t) \leq \eps \, \frac{C}{L} \, \bigl ( \e^{L \abs{t}} - 1 \bigr ) . \end{align*} % Since the Lipschitz constant $L$ and $C$ were independent of the initial point $x_0$, this estimate holds for all $x_0 \in U$, and we have shown equation~\eqref{odes:eqn:closeness_vector_fields_closeness_flows}. \end{proof} Note that the estimate~\eqref{odes:eqn:closeness_vector_fields_closeness_flows} also holds \emph{globally} (meaning for all times $t \in \R$ and initial conditions $x_0 \in \R^n$) if the vector fields are \emph{globally} Lipschitz, although we need the \emph{global} existence and uniqueness theorem, Corollary~\ref{frameworks:classical_mechanics:cor:existence_flow}, from the next subsection. \subsection{Conclusion: global existence and uniqueness} \label{odes:existence_uniqueness:global} The Grönwall lemma is necessary to prove the uniqueness of the solutions in case the vector field is globally Lipschitz. \begin{corollary}\label{frameworks:classical_mechanics:cor:existence_flow} If the vector field $F$ satisfies the Lipschitz condition \emph{globally}, \ie there exists $L > 0$ such that % \begin{align*} \babs{F(x) - F(x')} \leq L \babs{x - x'} \end{align*} % holds for all $x , x' \in \R^n$, then $t \mapsto \Phi_t(x_0)$ exists globally for all $t \in \R$ and $x_0 \in \R^n$. \end{corollary} \begin{proof} For every $x_0 \in \R^n$, we can solve the initial value problem at least for $\abs{t} \leq \nicefrac{1}{2 L}$. Since we do not require the particle to remain in a neighborhood of the initial point as in Theorem~\ref{odes:thm:Picard_Lindeloef}, the other condition on $T$ is void. Using $\Phi_{t_1} \circ \Phi_{t_2} = \Phi_{t_1 + t_2}$ we obtain a \emph{global} trajectory $x : \R \longrightarrow \R^n$ for all times. However, we potentially lose uniqueness of the trajectory. So assume $\tilde{x}$ is another trajectory. 
Then we define $u(t) := \babs{x(t) - \tilde{x}(t)}$ and use the Lipschitz property~\eqref{odes:eqn:Lipschitz_condition} to deduce % \begin{align*} \frac{\dd}{\dd t} u(t) &= \frac{\dd}{\dd t} \babs{x(t) - \tilde{x}(t)} \\ & \overset{\eqref{odes:eqn:abs_ddt_X_geq_ddt_abs_X}}{\leq} \babs{\dot{x}(t) - \dot{\tilde{x}}(t)} = \babs{F \bigl ( x(t) \bigr ) - F \bigl ( \tilde{x}(t) \bigr )} \\ &\leq L \, \babs{x(t) - \tilde{x}(t)} . \end{align*} % Hence, the Grönwall lemma applies with $a = 0$ and % \begin{align*} u(0) = \babs{x(0) - \tilde{x}(0)} = \babs{x_0 - x_0} = 0 , \end{align*} % and we obtain the estimate % \begin{align*} 0 \leq u(t) = \babs{x(t) - \tilde{x}(t)} \leq u(0) \, \e^{L t} = 0 , \end{align*} % \ie the two trajectories coincide and the solution is unique. \marginpar{2013.09.19} \end{proof} \section{Stability analysis} \label{odes:stability} What if an ODE is not analytically solvable? Beyond using numerics, what information can we extract? A simple way to learn something about the qualitative behavior is to linearize the vector field near fixed points, \ie those $x_0 \in \R^n$ for which \begin{align*} F(x_0) = 0 . \end{align*} Writing the solution as $x(t) = x_0 + \delta y(t)$ for a small parameter $\delta$ and Taylor expanding the vector field, we obtain another \emph{linear} ODE involving the differential $DF(x_0) = \bigl ( \partial_{x_j} F_k(x_0) \bigr )_{1 \leq j , k \leq n}$, \begin{align*} \frac{\dd}{\dd t} x(t) = \delta \, \frac{\dd}{\dd t} y(t) & = F \bigl ( x_0 + \delta y(t) \bigr ) \\ & = F(x_0) + \delta \, DF(x_0) \, y(t) + \order(\delta^2) = \delta \, DF(x_0) \, y(t) \\ \Longrightarrow \; \frac{\dd}{\dd t} y(t) &= DF(x_0) \, y(t) . \end{align*} The latter can be solved explicitly, namely $y(t) = \e^{t \, DF(x_0)} \, y_0$. Now the stability of the solutions near fixed points is determined by the \emph{eigenvalues} $\{ \lambda_j \}_{j = 1}^N$ of $DF(x_0)$. 
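To make the linearization recipe concrete, consider the (purely illustrative) one-dimensional vector field $F(x) = x - x^3$ with fixed points $x_0 \in \{ 0 , \pm 1 \}$ and differential $DF(x) = 1 - 3 x^2$. A small Python sketch confirms the stability predictions numerically:

```python
def F(x):
    return x - x**3           # fixed points: x = 0, x = +1, x = -1

def DF(x):
    return 1.0 - 3.0 * x**2   # the differential; here a scalar derivative

def evolve(x, t=10.0, n=10000):
    # crude forward-Euler integration of x' = F(x)
    h = t / n
    for _ in range(n):
        x += h * F(x)
    return x

# eigenvalue of DF at the fixed point decides stability:
# DF(0) = 1 > 0 -> unstable;  DF(+1) = DF(-1) = -2 < 0 -> stable
```

A perturbation of the stable point $x_0 = 1$ decays back to it, while a tiny perturbation of the unstable point $x_0 = 0$ escapes and is eventually captured by the stable fixed point.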
\begin{definition}[Stability of fixed points] We call an ODE near a fixed point $x_0$ % \begin{enumerate}[(i)] \item \emph{stable} if $\Re \lambda_j < 0$ holds for all eigenvalues $\lambda_j$ of $DF(x_0)$, \item \emph{marginally stable} (or Liapunov stable) if $\Re \lambda_j \leq 0$ holds for all $j$ and $\Re \lambda_j = 0$ for at least one $j$, and \item \emph{unstable} otherwise. \end{enumerate} % \end{definition} There is also another characterization of fixed points: \begin{definition} An ODE near a fixed point $x_0$ is called % \begin{enumerate}[(i)] \item \emph{elliptic} if $\Re \lambda_j = 0$ for all $j = 1 , \ldots , N$, and \item \emph{hyperbolic} if $\Im \lambda_j = 0$ for all $j = 1 , \ldots , N$ and $\Re \lambda_j > 0$ for some $j$. \end{enumerate} % \end{definition} \begin{example}[Lorenz equation] The Rayleigh-Bénard equation describes the behavior of a liquid in between two plates of different temperatures: % \begin{align*} \left ( % \begin{matrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} \sigma \, (x_2 - x_1) \\ - x_1 \, x_3 + r x_1 - x_2 \\ x_1 \, x_2 - b \, x_3 \\ \end{matrix} % \right ) =: F(x) \end{align*} % Here, the variables $x_1$ and $x_2$ are temperatures, $x_3$ is a speed, the constant $\sigma$ is the so-called \emph{Prandtl number}, $b$ is a geometric factor, and $r \geq 0$ is a control parameter. Typical values for the parameters are $\sigma \simeq 10$ and $b \simeq \nicefrac{8}{3}$. This vector field may have several fixed points (depending on the choice of the other parameters), but $x = 0$ is always a fixed point independently of the values of $\sigma$, $b$ and $r$. Let us analyze the stability of the ODE near the fixed point at $x = 0$: % \begin{align*} DF(0) = \left ( % \begin{matrix} - \sigma & + \sigma & 0 \\ r & -1 & 0 \\ 0 & 0 & -b \\ \end{matrix} % \right ) \end{align*} % The block structure of the matrix yields that one of the eigenvalues is $-b$. 
The other two eigenvalues can be inferred from finding the zeros of % \begin{align*} \chi(\lambda) &= \mathrm{det} \left ( % \begin{matrix} \lambda + \sigma & -\sigma \\ -r & \lambda + 1 \\ \end{matrix} % \right ) = (\lambda + \sigma)(\lambda + 1) - r \sigma \\ & = \lambda^2 + (\sigma + 1) \, \lambda + (1-r) \, \sigma , \end{align*} % namely % \begin{align*} \lambda_{\pm} = - \frac{\sigma + 1}{2} \pm \frac{1}{2} \sqrt{(\sigma + 1)^2 - 4 \, (1 - r) \, \sigma} . \end{align*} % If $r > 1$, then $(\sigma + 1)^2 - 4 \, (1 - r) \, \sigma > (\sigma + 1)^2$, and thus $\lambda_- < 0 < \lambda_+$. Hence, the fixed point at $x = 0$ is unstable and hyperbolic. For the other case, $0 \leq r < 1$, the discriminant satisfies $(\sigma + 1)^2 - 4 \, (1 - r) \, \sigma \geq (\sigma - 1)^2 \geq 0$, so both eigenvalues are real and negative, $\lambda_{\pm} < 0$, and the fixed point is stable (but neither elliptic nor hyperbolic). \marginpar{2013.09.24} \end{example} \chapter{Classical mechanics} \label{classical_mechanics} This chapter serves to give a short introduction to the hamiltonian point of view of classical mechanics and some of its beautiful mathematical structures. For simplicity, we only treat classical mechanics of a \emph{spinless point particle moving in $\R^n$}; for the more general theory, we refer to \cite{Marsden_Ratiu:intro_mechanics_symmetry:1999,Arnold:classical_mechanics:1997}. A more thorough introduction to this topic would easily take up a whole lecture, so we content ourselves with introducing the key precepts and applying our knowledge from Section~\ref{odes}. We will only treat \emph{hamiltonian} mechanics here: the dynamics is generated by the so-called hamilton function $H : \Pspace \longrightarrow \R$ which describes the energy of the system for a given configuration. Here, $\Pspace$ is also known as \emph{phase space}. Since only \emph{energy differences} are measurable, the hamiltonian $H' := H + E_0$, $E_0 \in \R$, generates the \emph{same dynamics} as $H$. 
This is obvious from \emph{Hamilton's equations of motion}, \begin{subequations}\label{classical_mechanics:eqn:hamiltons_eom_simple} % \begin{align} \dot{q}(t) &= + \nabla_p H \bigl ( q(t),p(t) \bigr ) , \\ \dot{p}(t) &= - \nabla_q H \bigl ( q(t),p(t) \bigr ) , \end{align} % \end{subequations} which can be rewritten in matrix notation as \begin{align} J \left ( % \begin{matrix} \dot{q}(t) \\ \dot{p}(t) \\ \end{matrix} % \right ) :& \negmedspace= \left ( % \begin{matrix} 0 & - \id_{\R^n} \\ + \id_{\R^n} & 0 \\ \end{matrix} % \right ) \left ( % \begin{matrix} \dot{q}(t) \\ \dot{p}(t) \\ \end{matrix} % \right ) \label{classical_mechanics:eqn:hamiltons_eom} \\ &= \left ( % \begin{matrix} \nabla_q H \\ \nabla_p H \\ \end{matrix} % \right ) \bigl ( q(t),p(t) \bigr ) =: X_H \bigl ( q(t),p(t) \bigr ) \notag . \end{align} The matrix $J$ appearing on the left-hand side is often called \emph{symplectic form} and leads to a geometric point of view of classical mechanics. For fixed initial condition $(q_0,p_0) \in \Pspace$ at time $t_0 = 0$, \ie initial position and momentum, the \emph{hamiltonian flow} \begin{align} \Phi : \R \times \Pspace \longrightarrow \Pspace \end{align} maps $(q_0,p_0)$ onto the trajectory which solves Hamilton's equations of motion, \begin{align*} \Phi_t(q_0,p_0) = \bigl ( q(t),p(t) \bigr ) , && \bigl ( q(0),p(0) \bigr ) = (q_0,p_0) . \end{align*} If the flow exists for all $t \in \R$, it has the following nice properties: for all $t , t' \in \R$ and $(q_0,p_0) \in \Pspace$, we have \begin{enumerate}[(i)] \item $\Phi_t \bigl ( \Phi_{t'}(q_0,p_0) \bigr ) = \Phi_{t + t'}(q_0,p_0)$, \item $\Phi_0(q_0,p_0) = (q_0,p_0)$, and \item $\Phi_{t} \bigl ( \Phi_{-t}(q_0,p_0) \bigr ) = \Phi_{t - t}(q_0,p_0) = (q_0,p_0)$. \end{enumerate} Mathematically, this means $\Phi$ is a \emph{group action} of $\R$ (with respect to time translations) on phase space $\Pspace$. 
This is a fancy way of saying: \begin{enumerate}[(i)] \item If we first evolve for time $t$ and then for time $t'$, this is the same as evolving for time $t + t'$. \item If we do not evolve at all in time, nothing changes. \item The system can be evolved forwards or backwards in time. \end{enumerate} The above results immediately apply to the Hamilton's equations of motion: \begin{corollary} Let $H(q,p) = \tfrac{1}{2m} p^2 + V(q)$ be the hamiltonian which generates the dynamics according to equation~\eqref{classical_mechanics:eqn:hamiltons_eom} such that $\nabla_q V$ satisfies a global Lipschitz condition % \begin{align*} \babs{\nabla_q V(q) - \nabla_q V(q')} \leq L \, \babs{q - q'} && \forall q , q' \in \R^n . \end{align*} % Then the hamiltonian flow $\Phi$ exists for all $t \in \R$. \end{corollary} Obviously, if $\nabla_q V$ is only \emph{locally} Lipschitz, we have \emph{local} existence of the flow. \begin{remark} Note that if all second-order derivatives of $V$ are bounded, then the hamiltonian vector field $X_H$ is Lipschitz. \end{remark} \section{The trinity of physical theories} \label{classical_mechanics:trinity} It turns out to be useful to take a step back and analyze the \emph{generic structure} of most physical theories. They usually consist of \emph{three ingredients:} \begin{enumerate}[(i)] \item A notion of \emph{state} which encodes the configuration of the system, \item a notion of \emph{observable} which predicts the outcome of measurements and \item a \emph{dynamical equation} which governs how the physical system evolves. \end{enumerate} \subsection{States} \label{classical_mechanics:trinity:states} The classical particle always moves in \emph{phase space} $\R^{2n} \simeq \R^n_q \times \R^n_p$ of positions and momenta. Pure states in classical mechanics are simply points in phase space: a point particle's state at time $t$ is characterized by its position $q(t)$ and its momentum $p(t)$. 
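To illustrate, a pure state $(q_0,p_0)$ can be propagated numerically under Hamilton's equations. The Python sketch below assumes the harmonic hamiltonian $H(q,p) = \nicefrac{p^2}{2m} + \nicefrac{q^2}{2}$ and a symplectic-Euler discretization (both choices are purely illustrative, not part of the general theory); the energy $H \bigl ( q(t),p(t) \bigr )$ stays approximately constant along the flow:

```python
def hamiltonian(q, p, m=1.0):
    # H(q, p) = p^2 / 2m + V(q) with the (assumed) potential V(q) = q^2 / 2
    return p**2 / (2 * m) + 0.5 * q**2

def evolve_state(q, p, t=10.0, n=10000, m=1.0):
    # symplectic-Euler integration of Hamilton's equations
    #   q' = +grad_p H = p / m,   p' = -grad_q H = -q
    h = t / n
    for _ in range(n):
        p -= h * q       # momentum update uses -grad_q H
        q += h * p / m   # position update uses +grad_p H
    return q, p

q0, p0 = 1.0, 0.0        # a pure state: one point in phase space
qT, pT = evolve_state(q0, p0)
```

The symplectic-Euler scheme is chosen because it respects the phase-space structure; a naive forward-Euler integration would let the energy drift systematically.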
More generally, one can consider \emph{distributions of initial conditions} which are relevant in statistical mechanics, for instance. \begin{definition}[Classical states]\label{classical_mechanics:defn:state} A classical state is a probability measure $\mu$ on phase space, that is a positive Borel measure\footnote{Unfortunately we do not have time to define Borel sets and Borel measures in this context. We refer the interested reader to chapter~1 of \cite{Lieb_Loss:analysis:2001}. Essentially, a Borel measure assigns a “volume” to “nice” sets, \ie Borel sets. } which is normed to $1$, % \begin{align*} \mu(U) &\geq 0 && \mbox{for all Borel sets $U \subseteq \Pspace$} \\ \mu(\Pspace) &= \int_{\R^{2n}} \dd \mu = 1 . \end{align*} % Pure states are \emph{point measures}, \ie if $(q_0,p_0) \in \Pspace$, then the associated pure state is given by $\mu_{(q_0,p_0)}(\cdot) := \delta_{(q_0,p_0)}(\cdot) = \delta \bigl ( \cdot - (q_0,p_0) \bigr )$.\footnote{Here, $\delta$ is the Dirac distribution which we will consider in detail in Chapter~\ref{S_and_Sprime}. } \end{definition} \subsection{Observables} \label{classical_mechanics:trinity:observables} Observables $f$ such as position, momentum, angular momentum and energy describe the outcome of measurements. \begin{definition}[Classical observables] Classical observables $f \in \Cont^{\infty}(\R^{2n},\R)$ are smooth functions on $\R^{2n}$ with values in $\R$. \end{definition} Of course, there are cases when observables are functions which are not smooth on all of $\R^{2n}$, \eg the Hamiltonian which describes Newtonian gravity in three dimensions, \begin{align*} H(q,p) = \frac{1}{2m} p^2 - \frac{g}{\abs{q}} , \end{align*} has a singularity at $q = 0$, but $H \in \Cont^{\infty} \bigl ( \R^{2n} \setminus \{ 0 \} , \R \bigr )$. 
Intimately linked is the concept of \begin{definition}[Spectrum of an observable] The spectrum of a classical observable, \ie the set of possible outcomes of measurements, is given by % \begin{align*} \spec f := f(\Pspace) = \image f . \end{align*} % \end{definition} If we are given a classical state $\mu$, then the \emph{expectation value $\mathbb{E}_{\mu}$} of an observable $f$ for the distribution of initial conditions $\mu$ is given by \begin{align*} \mathbb{E}_{\mu}(f) :=& \int_{\Pspace} \dd \mu(q,p) \, f(q,p) . \end{align*} \subsection{Dynamics: Schrödinger vs{.} Heisenberg picture} \label{classical_mechanics:trinity:dynamics} There are two equivalent ways to prescribe dynamics: either we evolve states in time and keep observables fixed or we keep states fixed in time and evolve observables. In \emph{quantum} mechanics, these two points of view are known as the \emph{Schrödinger} and \emph{Heisenberg picture} (after the famous physicists of the same name). Typically, one point of view is more convenient than the other in a given situation. In both cases the crucial ingredient in the dynamical equation is the \emph{energy observable} $H(q,p)$ also known as \emph{Hamilton function}, or \emph{Hamiltonian} for short. The prototypical form for a non-relativistic particle of mass $m > 0$ subjected to a potential $V$ is \begin{align*} H(q,p) = \frac{1}{2m} p^2 + V(q) . \end{align*} The first term, $\frac{1}{2m} p^2$, is also known as \emph{kinetic energy} while $V$ is the \emph{potential}. We have juxtaposed the Schrödinger and Heisenberg point of view in Table~\ref{classical_mechanics:table:Schroedinger_vs_Heisenberg}: in the Schrödinger picture, the states \begin{align} \mu(t) := \mu \circ \Phi_{-t} \end{align} are evolved \emph{backwards} in time while observables $f$ remain constant. (That may seem unintuitive at first, but we ask the skeptic to continue reading until the end of Section~\ref{classical_mechanics:equivalence_S_H}.)
Conversely, in the Heisenberg picture, \emph{observables} \begin{align} f(t) := f \circ \Phi_t \end{align} evolve forwards in time whereas states $\mu$ remain fixed. In both cases, the dynamical equations can be written in terms of the so-called \emph{Poisson bracket} \begin{align} \bigl \{ f , g \bigr \} := \sum_{j = 1}^n \bigl ( \partial_{p_j} f \, \partial_{q_j} g - \partial_{q_j} f \, \partial_{p_j} g \bigr ) . \label{classical_mechanics:eqn:Poisson_bracket} \end{align} These equations turn out to be equivalent to proposing Hamilton's equations of motion~\eqref{classical_mechanics:eqn:hamiltons_eom} (\cf Proposition~\ref{classical_mechanics:prop:equations_of_motion_Poisson_bracket}). \begin{table} \hfil % \begin{tabular}{l | c | c} & \emph{Schrödinger picture} & \emph{Heisenberg picture} \\ \hline \emph{States} & $\mu(t) = \mu \circ \Phi_{-t}$ & $\mu$ \\ \emph{Observables} & $f$ & $f(t) = f \circ \Phi_t$ \\ \emph{Dynamical equation} & $\frac{\dd}{\dd t} \mu(t) = - \bigl \{ H , \mu(t) \bigr \}$ & $\frac{\dd}{\dd t} f(t) = \bigl \{ H , f(t) \bigr \}$ \end{tabular} % \hfil \caption{Comparison between Schrödinger and Heisenberg picture. } \label{classical_mechanics:table:Schroedinger_vs_Heisenberg} \end{table} \begin{example} For the special observables position $q_j$ and momentum $p_j$ equation~\eqref{classical_mechanics:eqn:eom_observables} reduces to the components of Hamilton's equations of motion~\eqref{classical_mechanics:eqn:hamiltons_eom}, % \begin{align*} \dot{q}_j &= \bigl \{ H , q_j \bigr \} = + \partial_{p_j} H \\ \dot{p}_j &= \bigl \{ H , p_j \bigr \} = - \partial_{q_j} H . \end{align*} % \end{example} \section{Conservation of probability and the Liouville theorem} \label{classical_mechanics:Liouville} In the Schrödinger picture, states are time-dependent while observables remain fixed in time. 
If $\mu$ is a state, we develop it \emph{backwards} in time, $\mu(t) = \mu \circ \Phi_{-t}$, using the hamiltonian flow $\Phi$ associated to \eqref{classical_mechanics:eqn:hamiltons_eom}. A priori, it is \emph{not} obvious that $\mu(t)$ is still a classical state in the sense of Definition~\ref{classical_mechanics:defn:state}. The first requirement, $\bigl ( \mu(t) \bigr )(U) \geq 0$, is still satisfied since $\Phi_{-t}(U)$ is again a subset of $\R^{2n}$.\footnote{The fact that $\Phi_{-t}(U)$ is again Borel measurable follows from the continuity of $\Phi_t$ in $t$.} What is not obvious is whether $\mu(t)$ is still normed, \ie whether \begin{align*} \bigl ( \mu(t) \bigr )(\R^{2n}) &= \int_{\R^{2n}} \dd \mu(t) = 1 ? \end{align*} \begin{proposition}\label{classical_mechanics:prop:states_stay_states} Let $\mu$ be a state on phase space $\Pspace$ and $\Phi_t$ the flow generated by a hamiltonian $H \in \Cont^{\infty}(\Pspace)$ which we assume to exist for $\abs{t} \leq T$ where $0 < T \leq \infty$ is suitable. Then $\mu(t)$ is again a state. \end{proposition} The proof of this relies on a very deep result of classical mechanics, the so-called \begin{theorem}[Liouville]\label{classical_mechanics:thm:Liouville} The hamiltonian vector field is divergence free, \ie the hamiltonian flow preserves the phase space volume of bounded subsets $V$ of $\Pspace$ with smooth boundary $\partial V$. In particular, the functional determinant of the flow is constant and equal to % \begin{align*} \mathrm{det} \, \bigl ( D \Phi_t(q,p) \bigr ) = 1 \end{align*} % for all $t \in \R$ and $(q,p) \in \Pspace$. \end{theorem} \begin{figure} \hfil\includegraphics[height=4cm]{./images/cat_flow}\hfil \caption{Phase space volume is preserved under the hamiltonian flow.
} \end{figure} \begin{remark}\label{classical_mechanics:remark:Liouville} We will need a fact from the theory of dynamical systems: if $\Phi_t$ is the flow associated to a differential equation $\dot{x} = F(x)$ with $F \in \Cont^1(\R^n,\R^n)$, then % \begin{align*} \frac{\dd }{\dd t} D \Phi_t(x) = DF \bigl ( \Phi_t(x) \bigr ) \, D \Phi_t(x) , && D \Phi_t \big \vert_{t = 0} = \id_{\R^n} , \end{align*} % holds for the differential of the flow. As a consequence, one can prove % \begin{align*} \frac{\dd }{\dd t} \bigl ( \mathrm{det} \, D \Phi_t(x) \bigr ) &= \mathrm{tr} \, \bigl ( DF \bigl (\Phi_t(x) \bigr ) \bigr ) \, \mathrm{det} \, \bigl ( D \Phi_t(x) \bigr ) \\ &= \mathrm{div} \, F \bigl ( \Phi_t(x) \bigr ) \, \mathrm{det} \, \bigl ( D \Phi_t(x) \bigr ) , \end{align*} % and $\mathrm{det} \, \bigl ( D \Phi_t \bigr ) \big \vert_{t = 0} = 1$. There are more elegant, general and geometric proofs of this fact, but they are beyond the scope of our short introduction. \end{remark} \begin{proof}[Theorem~\ref{classical_mechanics:thm:Liouville}] Let $H$ be the hamiltonian which generates the flow $\Phi_t$. Let us denote the hamiltonian vector field by % \begin{align*} X_H = \left ( % \begin{matrix} + \nabla_p H \\ - \nabla_q H \\ \end{matrix} % \right ) . \end{align*} % Then a direct calculation yields % \begin{align*} \mathrm{div} \, X_H = \sum_{j = 1}^n \Bigl ( \partial_{q_j} \bigl ( + \partial_{p_j} H \bigr ) + \partial_{p_j} \bigl ( - \partial_{q_j} H \bigr ) \Bigr ) = 0 \end{align*} % and the hamiltonian vector field is divergence free. This implies the hamiltonian flow $\Phi_t$ preserves volumes in phase space: let $V \subseteq \Pspace$ be a bounded region in phase space (a Borel subset) with smooth boundary.
Then for all $- T \leq t \leq T$ for which the flow exists, we have % \begin{align*} \frac{\dd}{\dd t} \mathrm{Vol} \, \bigl ( \Phi_t(V) \bigr ) &= \frac{\dd}{\dd t} \int_{\Phi_t(V)} \dd q \, \dd p \\ &= \frac{\dd}{\dd t} \int_{V} \dd q' \, \dd p' \, \mathrm{det} \, \bigl ( D \Phi_t(q',p') \bigr ) . \end{align*} % Since $V$ is bounded, we can bound $\mathrm{det} \, \bigl ( D \Phi_t \bigr )$ and its time derivative uniformly. Thus, we can interchange integration and differentiation and apply Remark~\ref{classical_mechanics:remark:Liouville}, % \begin{align*} \frac{\dd}{\dd t} \int_{V} \dd q' \, \dd p' \, \mathrm{det} \, \bigl ( D \Phi_t(q',p') \bigr ) &= \int_V \dd q' \, \dd p' \, \frac{\dd}{\dd t} \mathrm{det} \, \bigl ( D \Phi_t(q',p') \bigr ) \\ & = \int_V \dd q' \, \dd p' \, \underbrace{\mathrm{div} \, X_H \bigl ( \Phi_t(q',p') \bigr )}_{= 0} \, \mathrm{det} \, \bigl ( D \Phi_t(q',p') \bigr ) \\ & = 0 . \end{align*} % Hence $\frac{\dd}{\dd t} \mathrm{Vol} \, \bigl ( \Phi_t(V) \bigr ) = 0$ and the hamiltonian flow conserves phase space volume. The functional determinant of the flow is constant as the time derivative vanishes, % \begin{align*} \frac{\dd}{\dd t} \mathrm{det} \, \bigl ( D \Phi_t(q',p') \bigr ) = 0 , \end{align*} % and equal to $1$, % \begin{align*} \mathrm{det} \, \bigl ( D \Phi_t (q',p') \bigr ) \big \vert_{t = 0} = \mathrm{det} \, \id_{\Pspace} = 1 . \end{align*} % This concludes the proof. \end{proof} With a different proof relying on alternating multilinear forms, the requirements on $V$ can be lifted, see \eg \cite[Theorem on pp.~204--207]{Arnold:classical_mechanics:1997}. \begin{proof}[Proposition~\ref{classical_mechanics:prop:states_stay_states}] Since $\Phi_t$ is continuous, it is also measurable. Thus $\mu(t) = \mu \circ \Phi_{-t}$ is also a Borel measure on $\Pspace$ ($\Phi_{-t}$ exists by assumption on $t$). In fact, $\Phi_t$ is a diffeomorphism on phase space.
Liouville's theorem~\ref{classical_mechanics:thm:Liouville} not only ensures that the measure $\mu(t)$ remains positive, but also that it is normalized to 1: let $U \subseteq \Pspace$ be a Borel set. Then we conclude % \begin{align*} \bigl ( \mu(t) \bigr ) (U) &= \int_{U} \dd \bigl ( \mu(t) \bigr )(q,p) = \int_{U} \dd \mu \bigl ( \Phi_{-t}(q,p) \bigr ) \\ &= \int_{\Phi_{-t}(U)} \dd \mu (q,p) \, \mathrm{det} \bigl ( D \Phi_t(q,p) \bigr ) = \int_{\Phi_{-t}(U)} \dd \mu (q,p) \geq 0 \end{align*} % where we have used the positivity of $\mu$ and the fact that $\Phi_{-t}(U)$ is again a Borel set by continuity of $\Phi_{-t}$. If we set $U = \Pspace$ and use the fact that the flow is a diffeomorphism, we see that $\Pspace$ is mapped onto itself, $\Phi_{-t}(\Pspace) = \Pspace$, and the normalization of $\mu$ leads to % \begin{align*} \bigl ( \mu(t) \bigr ) (\Pspace) &= \int_{\Phi_{-t}(\Pspace)} \dd \mu (q,p) = \int_{\Pspace} \dd \mu (q,p) = 1 . \end{align*} % This concludes the proof. \end{proof} \section{Equation of motion for observables and Poisson algebras} \label{classical_mechanics:Poisson} Viewed in the Heisenberg picture, observables move in time, $f(t) = f \circ \Phi_t$, while states remain fixed. Seeing as $\Phi_t$ is invertible, it maps $\R^{2n}$ onto itself, and thus the spectrum of the observable does not change in time, \begin{align*} \spec f(t) = \spec f . \end{align*} For many applications and arguments, it will be helpful to find a dynamical equation for $f(t)$ directly: \begin{proposition}\label{classical_mechanics:prop:equations_of_motion_Poisson_bracket} Let $f \in \Cont^{\infty}(\Pspace,\R)$ be an observable and $\Phi$ the hamiltonian flow which solves the equations of motion~\eqref{classical_mechanics:eqn:hamiltons_eom} associated to a hamiltonian $H \in \Cont^{\infty}(\Pspace,\R)$ which we assume to exist globally in time for all $(q_0,p_0) \in \Pspace$.
Then % \begin{align} \frac{\dd }{\dd t} f(t) = \bigl \{ H , f(t) \bigr \} \label{classical_mechanics:eqn:eom_observables} \end{align} % holds where $\bigl \{ f , g \bigr \} := \sum_{j = 1}^n \bigl ( \partial_{p_j} f \, \partial_{q_j} g - \partial_{q_j} f \, \partial_{p_j} g \bigr )$ is the so-called \emph{Poisson bracket}. \end{proposition} \begin{proof} Theorem~\ref{odes:thm:smoothness_flow} implies the smoothness of the flow from the smoothness of the hamiltonian. This means $f(t) \in \Cont^{\infty}(\Pspace,\R)$ is again a classical observable. By assumption, all initial conditions lead to trajectories that exist globally in time.\footnote{A slightly more sophisticated argument shows that the Proposition holds if the hamiltonian flow exists only locally in time. } For $(q_0,p_0)$, we compute the time derivative of $f(t)$ to be % \begin{align*} \left ( \frac{\dd }{\dd t} f(t) \right )(q_0,p_0) &= \frac{\dd }{\dd t} f \bigl ( q(t) , p(t) \bigr ) \\ &= \sum_{j = 1}^n \Bigl ( \partial_{q_j} f \circ \Phi_t(q_0,p_0) \, \dot{q}_j(t) + \partial_{p_j} f \circ \Phi_t(q_0,p_0) \, \dot{p}_j(t) \Bigr ) \\ &\overset{\ast}{=} \sum_{j = 1}^n \Bigl ( \partial_{q_j} f \circ \Phi_t(q_0,p_0) \, \partial_{p_j} H \circ \Phi_t(q_0,p_0) + \\ &\qquad \qquad + \partial_{p_j} f \circ \Phi_t(q_0,p_0) \, \bigl ( - \partial_{q_j} H \circ \Phi_t(q_0,p_0) \bigr ) \Bigr ) \\ &= \bigl \{ H(t) , f(t) \bigr \}(q_0,p_0) . \end{align*} % In the step marked with $\ast$, we have inserted Hamilton's equations of motion. Compared to equation~\eqref{classical_mechanics:eqn:eom_observables}, we have $H(t)$ instead of $H$ as argument in the Poisson bracket. However, by setting $f(t) = H(t)$ in the above equation, we see that energy is a \emph{conserved quantity}, % \begin{align*} \frac{\dd }{\dd t} H(t) = \bigl \{ H(t) , H(t) \bigr \} = 0 . \end{align*} % Hence, we can replace $H(t)$ by $H$ in the Poisson bracket with $f$ and obtain equation~\eqref{classical_mechanics:eqn:eom_observables}.
\end{proof} \begin{remark}\label{classical_mechanics:remark:uniqueness_solution_eom_observables} One can prove that $f(t) = f \circ \Phi_t$ is the \emph{only} solution to equation~\eqref{classical_mechanics:eqn:eom_observables}, but that requires a little more knowledge about symplectic geometry (\cf Proposition~5.4.2 and Proposition~5.5.2 of \cite{Marsden_Ratiu:intro_mechanics_symmetry:1999}). \marginpar{2013.09.26} \end{remark} \paragraph{Conserved quantities} The proof immediately leads to the notion of conserved quantity: \begin{definition}[Conserved quantity/constant of motion]\label{classical:defn:conserved_quantity} An observable $f \in \Cont^{\infty}(\Pspace,\R)$ which is invariant under the flow $\Phi$ generated by the hamiltonian $H \in \Cont^{\infty}(\Pspace,\R)$, \ie % \begin{align*} f(t) = f(0) , \end{align*} % or equivalently satisfies % \begin{align*} \frac{\dd }{\dd t} f(t) = \bigl \{ H , f(t) \bigr \} = 0 , \end{align*} % is called \emph{conserved quantity} or \emph{constant of motion}. \end{definition} As happens very often in physics and mathematics, we have come full circle: starting from Hamilton's equations of motion, we have proven that the time evolution of observables is given by the Poisson bracket. Alternatively, we could have \emph{started} by postulating \begin{align*} \frac{\dd}{\dd t} f(t) = \bigl \{ H , f(t) \bigr \} \end{align*} for observables and we would have \emph{arrived} at Hamilton's equations of motion by plugging in $q$ and $p$ as observables. Seemingly, to check whether an observable is a constant of motion requires one to \emph{solve} the equation of motion, but this is not so.
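In fact, the criterion $\{ H , f \} = 0$, made precise in the corollary below, can be evaluated pointwise without solving any differential equation. The following Python sketch approximates the Poisson bracket~\eqref{classical_mechanics:eqn:Poisson_bracket} by central finite differences; the helper \texttt{pbracket}, the Hamiltonian and the observables are our illustrative choices, not part of the text.

```python
def pbracket(f, g, q, p, h=1e-6):
    """{f,g} = sum_j (d_{p_j}f d_{q_j}g - d_{q_j}f d_{p_j}g) at (q,p),
    with central finite differences for the partial derivatives."""
    def dq(F, j):
        qp, qm = list(q), list(q)
        qp[j] += h; qm[j] -= h
        return (F(qp, p) - F(qm, p)) / (2 * h)
    def dp(F, j):
        pp, pm = list(p), list(p)
        pp[j] += h; pm[j] -= h
        return (F(q, pp) - F(q, pm)) / (2 * h)
    return sum(dp(f, j) * dq(g, j) - dq(f, j) * dp(g, j)
               for j in range(len(q)))

# H with a rotationally symmetric potential, and angular momentum L3
H = lambda q, p: 0.5 * sum(x * x for x in p) + 0.5 * sum(x * x for x in q)
L3 = lambda q, p: q[0] * p[1] - q[1] * p[0]

q0, p0 = [0.4, -1.1, 0.3], [0.7, 0.2, -0.5]
bHL = pbracket(H, L3, q0, p0)                       # ~ 0: L3 is conserved
bHq = pbracket(H, lambda q, p: q[0], q0, p0)        # = p_1, not conserved
```

At the sample point, $\{ H , L_3 \}$ vanishes up to discretization error, so the angular momentum component $L_3$ is (numerically) a constant of motion for this rotationally symmetric Hamiltonian, while $\{ H , q_1 \} = \partial_{p_1} H = p_1 \neq 0$.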
A very important property of the Poisson bracket is the following: \begin{proposition}[Properties of the Poisson bracket]\label{classical_mechanics:prop:properties_Poisson_bracket} If $\Phi_t$ is the flow associated to $H \in \Cont^{\infty}(\R^{2n})$, then for any $f , g , h \in \Cont^{\infty}(\R^{2n})$, the following statements hold true: % \begin{enumerate}[(i)] \item $\{ \cdot \, , \cdot \} : \Cont^{\infty}(\R^{2n}) \times \Cont^{\infty}(\R^{2n}) \longrightarrow \Cont^{\infty}(\R^{2n})$ \item $\{ f , g \} = - \{ g , f \}$ (antisymmetry) \item $\{ f , g \} \circ \Phi_t = \bigl \{ f \circ \Phi_t , g \circ \Phi_t \bigr \}$ \item $\bigl \{ f , \{ g , h \} \bigr \} + \bigl \{ h , \{ f , g \} \bigr \} + \bigl \{ g , \{ h , f \} \bigr \} = 0$ (Jacobi identity) \item $\{ f g , h \} = f \, \{ g , h \} + g \, \{ f , h \}$ (derivation property) \end{enumerate} % \end{proposition} \begin{proof} % \begin{enumerate}[(i)] \item $\{ f , g \}$ consists of products of derivatives of $f$ and $g$, and thus their Poisson bracket is in the class $\Cont^{\infty}(\R^{2n})$. \item The antisymmetry is obvious. \item The fact that the time evolution and the Poisson bracket commute is a very deep result of Hamiltonian mechanics (see \eg \cite[Proposition~5.4.2]{Marsden_Ratiu:intro_mechanics_symmetry:1999}), but that goes far beyond our current capabilities. \item This follows from either a straightforward (and boring) calculation or one can use (iii): we compute the derivative of this equation and obtain % \begin{align*} \frac{\dd}{\dd t} &\{ f , g \} \circ \Phi_t - \frac{\dd}{\dd t} \bigl \{ f \circ \Phi_t , g \circ \Phi_t \bigr \} = \\ &= \bigl \{ H , \{ f , g \} \circ \Phi_t \bigr \} - \bigl \{ \{ H , f \circ \Phi_t \} , g \circ \Phi_t \bigr \} - \bigl \{ f \circ \Phi_t , \{ H , g \circ \Phi_t \} \bigr \} \\ &= \bigl \{ H , \{ f , g \} \circ \Phi_t \bigr \} + \bigl \{ g \circ \Phi_t , \{ H , f \circ \Phi_t \} \bigr \} + \bigl \{ f \circ \Phi_t , \{ g \circ \Phi_t , H \} \bigr \} .
\end{align*} % Setting $t = 0$ yields the Jacobi identity. \item The derivation property follows directly from the product rule for partial derivatives and the definition of $\{ \cdot \, , \cdot \}$. \end{enumerate} % \end{proof} Even though we cannot prove (iii) with our current means, under the \emph{assumption} that the solution to equation~\eqref{classical_mechanics:eqn:eom_observables} with initial condition $f(0) = 0$ is unique and given by $f(t) = 0$, we can deduce (iii) using the Jacobi identity (iv): $\{ f , g \} \circ \Phi_t = \bigl \{ f \circ \Phi_t , g \circ \Phi_t \bigr \}$ holds if this equality is true for $t = 0$ (which follows directly from $\Phi_0 = \id_{\R^{2n}}$) and if the time derivative of this equality is satisfied. Hence, we compare \begin{align*} \frac{\dd}{\dd t} \{ f , g \} \circ \Phi_t &= \bigl \{ H , \{ f , g \} \circ \Phi_t \bigr \} \end{align*} which holds by Proposition~\ref{classical_mechanics:prop:equations_of_motion_Poisson_bracket} to \begin{align*} \frac{\dd}{\dd t} \bigl \{ f \circ \Phi_t , g \circ \Phi_t \bigr \} &= \bigl \{ \{ H , f \circ \Phi_t \} , g \circ \Phi_t \bigr \} + \bigl \{ f \circ \Phi_t , \{ H , g \circ \Phi_t \} \bigr \} \\ &= - \bigl \{ f \circ \Phi_t , \{ g \circ \Phi_t , H \} \bigr \} - \bigl \{ g \circ \Phi_t , \{ H , f \circ \Phi_t \} \bigr \} \\ &= + \bigl \{ H , \{ f \circ \Phi_t , g \circ \Phi_t \} \bigr \} . \end{align*} Hence, the difference $\Delta(t) := \{ f , g \} \circ \Phi_t - \{ f \circ \Phi_t , g \circ \Phi_t \}$ satisfies equation~\eqref{classical_mechanics:eqn:eom_observables}, \begin{align*} \frac{\dd}{\dd t} \Delta(t) &= \bigl \{ H , \Delta(t) \bigr \} , \end{align*} with initial condition $\Delta(0) = 0$. We have assumed this equation has the unique solution $\Delta(t) = 0$ which yields (iii).
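The algebraic identities (ii), (iv) and (v) can likewise be sanity-checked numerically on concrete observables. The Python sketch below does this for one degree of freedom with a finite-difference bracket; all names are our illustrative choices (the third observable is called \texttt{k} so that it does not clash with the step size \texttt{h}), and this is of course a consistency check, not a proof.

```python
def pb(f, g, h=1e-4):
    """Return the function {f,g} for n = 1, via central finite differences."""
    def bracket(q, p):
        dpf = (f(q, p + h) - f(q, p - h)) / (2 * h)
        dqf = (f(q + h, p) - f(q - h, p)) / (2 * h)
        dpg = (g(q, p + h) - g(q, p - h)) / (2 * h)
        dqg = (g(q + h, p) - g(q - h, p)) / (2 * h)
        return dpf * dqg - dqf * dpg
    return bracket

# some smooth (polynomial) observables
f = lambda q, p: q * q * p
g = lambda q, p: p * p + q
k = lambda q, p: q * p

q0, p0 = 0.6, -0.8

# (ii) antisymmetry: {f,g} + {g,f} = 0
antisym = pb(f, g)(q0, p0) + pb(g, f)(q0, p0)

# (iv) Jacobi identity: {f,{g,k}} + {k,{f,g}} + {g,{k,f}} = 0
jacobi = (pb(f, pb(g, k))(q0, p0)
          + pb(k, pb(f, g))(q0, p0)
          + pb(g, pb(k, f))(q0, p0))

# (v) derivation property: {f g, k} = f {g,k} + g {f,k}
fg = lambda q, p: f(q, p) * g(q, p)
leibniz = pb(fg, k)(q0, p0) - (f(q0, p0) * pb(g, k)(q0, p0)
                               + g(q0, p0) * pb(f, k)(q0, p0))
```

All three residuals vanish up to discretization and rounding error at the chosen point.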
\medskip \noindent Proposition~\ref{classical_mechanics:prop:properties_Poisson_bracket}~(iii) simplifies \emph{computing} the solution $f(t)$ and finding constants of motion: \begin{corollary} % \begin{enumerate}[(i)] \item Equation~\eqref{classical_mechanics:eqn:eom_observables} is equivalent to % \begin{align*} \frac{\dd}{\dd t} f \circ \Phi_t &= \bigl \{ H , f \bigr \} \circ \Phi_t , && f \circ \Phi_0 = f . \end{align*} % \item $f$ is a constant of motion if and only if % \begin{align*} \{ H , f \} = 0 . \end{align*} % \end{enumerate} % \end{corollary} \begin{proof} % \begin{enumerate}[(i)] \item Since $H = H(t)$ is a constant of motion, the right-hand side of \eqref{classical_mechanics:eqn:eom_observables} can be written as % \begin{align*} \bigl \{ H , f \circ \Phi_t \bigr \} &= \bigl \{ H \circ \Phi_t , f \circ \Phi_t \bigr \} \\ &= \bigl \{ H , f \bigr \} \circ \Phi_t \end{align*} % using Proposition~\ref{classical_mechanics:prop:properties_Poisson_bracket}~(iii). \item This follows directly from (i) and the definition of constant of motion. \end{enumerate} % \end{proof} Moreover, it turns out to be useful to formalize these properties of the Poisson bracket in the following notion: \marginpar{2013.10.01} \begin{definition}[Poisson algebra] Let $\mathcal{P} \subseteq \Cont^{\infty}(\R^{2n})$ be a subalgebra of the smooth functions (\ie $\mathcal{P}$ is closed under taking linear combinations and products). Moreover, assume that the Poisson bracket has the derivation property, $\{ f g , h \} = f \, \{ g , h \} + g \, \{ f , h \}$, and % \begin{align*} \{ \cdot \, , \cdot \} : \mathcal{P} \times \mathcal{P} \longrightarrow \mathcal{P} \end{align*} % maps $\mathcal{P} \times \mathcal{P}$ into $\mathcal{P}$. Then $\bigl ( \mathcal{P} , \{ \cdot \, , \cdot \} \bigr )$ is a Poisson algebra. \end{definition} \section{Equivalence of Schrödinger and Heisenberg picture} \label{classical_mechanics:equivalence_S_H} We are still missing the proof that Heisenberg and Schrödinger picture equally describe the physics.
The main observation is that taking expectation values in either picture leads to the same result, \ie that for any observable $f$ and state $\mu$ \begin{align*} \mathbb{E}_{\mu(t)}(f) &= \int_{\R^{2n}} \dd q \, \dd p \, \bigl ( \mu(t) \bigr )(q,p) \, f(q,p) \\ & = \int_{\R^{2n}} \dd q \, \dd p \, \mu \circ \Phi_{-t}(q,p) \, f(q,p) \\ &= \int_{\Phi_t(\R^{2n})} \dd q \, \dd p \, \mu(q,p) \, f \circ \Phi_t(q,p) \, \mathrm{det} \bigl ( D \Phi_t(q,p) \bigr ) \\ & \overset{\ast}{=} \int_{\R^{2n}} \dd q \, \dd p \, \mu(q,p) \, \bigl ( f(t) \bigr )(q,p) \\ & = \mathbb{E}_{\mu} \bigl ( f(t) \bigr ) \end{align*} holds. Note that the crucial ingredient in the step marked with $\ast$ is again the Liouville theorem~\ref{classical_mechanics:thm:Liouville}. Moreover, we see why states need to be evolved backwards in time. \section{The inclusion of magnetic fields} \label{classical_mechanics:magnetic} Magnetic fields can only be defined in dimension $2$ or higher; usually, one considers the case $n = 3$. There are two main ways to include magnetic fields into classical mechanics: \emph{minimal substitution} and a more geometric way where the magnetic field enters into the geometric structure of phase space. Both descriptions are equivalent, though. The two-dimensional case can be obtained by restricting the three-dimensional case to the $q_1 \, q_2$-plane. The starting point from a physical point of view is the \emph{Lorentz force law}: an electric field $\mathbf{E}$ and a magnetic field $\mathbf{B}(q) = \bigl ( \mathbf{B}_1(q) , \mathbf{B}_2(q) , \mathbf{B}_3(q) \bigr )$ exert a force on a particle with charge $e$ at $q$ moving with velocity $v$ that is given by \begin{align} F_{\mathrm{L}} = e \, \mathbf{E} + v \times e \, \mathbf{B} . \label{classical_mechanics:eqn:Lorentz_force_law} \end{align} For simplicity, from now on, we set the charge $e = 1$. \medskip \noindent The goal of this section is to include magnetic fields in the framework of Hamiltonian mechanics.
Electric fields $\mathbf{E} = - \nabla_q V$ appear as potentials in the Hamilton function $H(q,p) = \frac{1}{2m} p^2 + V(q)$, but magnetic fields are not gradients of a potential. Instead, one can express the magnetic field \begin{align*} \mathbf{B} = \nabla_q \times \mathbf{A} \end{align*} as the curl of a vector potential $\mathbf{A} = (\mathbf{A}_1 , \mathbf{A}_2 , \mathbf{A}_3)$. \subsection{Minimal substitution} \label{classical_mechanics:magnetic:minimal_substitution} The standard way to add the interaction with a magnetic field is to consider the equations of motion for position $q$ and \emph{kinetic momentum} \begin{align} p^A(q,p) &= p - \mathbf{A}(q) . \label{classical_mechanics:eqn:kinetic_momentum} \end{align} Moreover, $p$ is replaced by $p^A = p - \mathbf{A}$ in the Hamiltonian, \begin{align*} H^A(q,p) := H \bigl ( q , p - \mathbf{A}(q) \bigr ) . \end{align*} One then proposes the usual equations of motion: \begin{align*} \left ( % \begin{matrix} 0 & - \id_{\R^3} \\ + \id_{\R^3} & 0 \\ \end{matrix} % \right ) \left ( % \begin{matrix} \dot{q} \\ \dot{p} \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} \nabla_q H^A \\ \nabla_p H^A \\ \end{matrix} % \right ) \end{align*} Taking the time-derivative of kinetic momentum yields \begin{align} \frac{\dd}{\dd t} p^A_j &= \dot{p}_j - \sum_{k = 1}^3 \partial_{q_k} \mathbf{A}_j(q) \, \dot{q}_k \notag \\ &= - \tfrac{\dd}{\dd q_j} H^A - \sum_{k = 1}^3 \partial_{q_k} \mathbf{A}_j(q) \, \partial_{p_k} H^A \notag \\ &= - \bigl ( \partial_{q_j} H \bigr )^A - \sum_{k = 1}^3 \partial_{p_k} H^A \, \partial_{q_j} \bigl ( p_k - \mathbf{A}_k \bigr ) - \sum_{k = 1}^3 \partial_{q_k} \mathbf{A}_j \, \partial_{p_k} H^A \notag \\ &= - \bigl ( \partial_{q_j} H \bigr )^A + \sum_{k = 1}^3 \partial_{p_k} H^A \, \bigl ( \partial_{q_j} \mathbf{A}_k - \partial_{q_k} \mathbf{A}_j \bigr ) .
\label{classical_mechanics:eqn:eom_kinetic_momentum} \end{align} If we set $B_{jk} := \partial_{q_j} \mathbf{A}_k - \partial_{q_k} \mathbf{A}_j$, then we obtain the magnetic field matrix \begin{align*} B := \bigl ( B_{jk} \bigr )_{1 \leq j , k \leq 3} = \left ( % \begin{matrix} 0 & +\mathbf{B}_3 & -\mathbf{B}_2 \\ -\mathbf{B}_3 & 0 & +\mathbf{B}_1 \\ +\mathbf{B}_2 & -\mathbf{B}_1 & 0 \\ \end{matrix} % \right ) . \end{align*} Using \begin{align*} B p &= \left ( % \begin{matrix} 0 & +\mathbf{B}_3 & -\mathbf{B}_2 \\ -\mathbf{B}_3 & 0 & +\mathbf{B}_1 \\ +\mathbf{B}_2 & -\mathbf{B}_1 & 0 \\ \end{matrix} % \right ) \left ( % \begin{matrix} p_1 \\ p_2 \\ p_3 \\ \end{matrix} % \right ) \\ &= \left ( % \begin{matrix} \mathbf{B}_3 \, p_2 - \mathbf{B}_2 \, p_3 \\ - \mathbf{B}_3 \, p_1 + \mathbf{B}_1 \, p_3 \\ \mathbf{B}_2 \, p_1 - \mathbf{B}_1 \, p_2 \\ \end{matrix} % \right ) = p \times \mathbf{B} , \end{align*} we can simplify the equation for $p^A$ to \begin{align*} \frac{\dd}{\dd t} p^A &= - (\nabla_q H)^A + \nabla_p H^A \times \mathbf{B} = - (\nabla_q H)^A + \dot{q} \times \mathbf{B} . \end{align*} For a non-relativistic particle with Hamiltonian $H(q,p) = \frac{1}{2m} p^2 + V(q)$, these equations reduce to the Lorentz force law~\eqref{classical_mechanics:eqn:Lorentz_force_law}: \begin{align*} \dot{p}^A &= - \nabla_q V + \frac{p^A}{m} \times \mathbf{B} \\ &= - (\nabla_q H)^A + B \, \nabla_p H^A = - (\nabla_q H)^A + B \, \dot{q} \end{align*} \paragraph{Changes of gauge} A choice of vector potential $\mathbf{A}$ is also called a \emph{choice of gauge}.
If $\chi : \R^3 \longrightarrow \R$ is a scalar function then $\mathbf{A}$ and \begin{align*} \mathbf{A}' &= \mathbf{A} + \nabla_q \chi \end{align*} are both vector potentials associated to $\mathbf{B}$, because $\nabla_q \times \nabla_q \chi = 0$, \begin{align*} \nabla_q \times \mathbf{A}' &= \nabla_q \times \bigl ( \mathbf{A} + \nabla_q \chi \bigr ) \\ &= \nabla_q \times \mathbf{A} + \nabla_q \times \nabla_q \chi \\ &= \nabla_q \times \mathbf{A} = \mathbf{B} . \end{align*} Thus, we can choose either the gauge $\mathbf{A}$ or the gauge $\mathbf{A}'$; either one describes the same physical situation, because the equation of motion for $p^A$ only involves $\mathbf{B}$ rather than $\mathbf{A}$. Hence, gauges are usually chosen for convenience (\eg a particular “symmetry” or being divergence free, $\nabla_q \cdot \mathbf{A} = 0$). \subsection{Magnetic symplectic form} \label{classical_mechanics:magnetic:magnetic_symplectic_form} A second way to include the interaction with a magnetic field is to integrate it into the symplectic form: \begin{align} \left ( % \begin{matrix} B & - \id_{\R^3} \\ + \id_{\R^3} & 0 \\ \end{matrix} % \right ) \left ( % \begin{matrix} \dot{q} \\ \dot{p} \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} \nabla_q H \\ \nabla_p H \\ \end{matrix} % \right ) \label{classical_mechanics:eqn:magnetic_eom} \end{align} Note that neither the Hamiltonian nor the momentum are altered. Instead, in this variant, $q$ is position and $p$ is \emph{kinetic} momentum.
Solving the above equation for $\dot{q}$ and $\dot{p}$ yields \eqref{classical_mechanics:eqn:eom_kinetic_momentum}, \begin{align*} \left ( % \begin{matrix} B & - \id_{\R^3} \\ + \id_{\R^3} & 0 \\ \end{matrix} % \right ) \left ( % \begin{matrix} \dot{q} \\ \dot{p} \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} B \, \dot{q} - \dot{p} \\ \dot{q} \\ \end{matrix} % \right ) = \left ( % \begin{matrix} \nabla_q H \\ \nabla_p H \\ \end{matrix} % \right ) \\ & \Leftrightarrow \\ \left ( % \begin{matrix} \dot{q} \\ \dot{p} \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} + \nabla_p H \\ - \nabla_q H + B \, \dot{q} \\ \end{matrix} % \right ) = \left ( % \begin{matrix} + \nabla_p H \\ - \nabla_q H + \dot{q} \times \mathbf{B} \\ \end{matrix} % \right ) \end{align*} In other words, we have again recovered the Lorentz force law~\eqref{classical_mechanics:eqn:Lorentz_force_law}. From a mathematical perspective, the advantage of this formulation is that magnetic fields are always “nicer” functions than associated vector potentials. \section{Stability analysis} \label{classical_mechanics:stability} It turns out that hamiltonian systems are \emph{never} stable: we start by linearizing the hamiltonian vector field, \begin{align*} X_H(q,p) &= \left ( % \begin{matrix} + \nabla_p H(q,p) \\ - \nabla_q H(q,p) \\ \end{matrix} % \right ) \; \Longrightarrow \; D X_H(q,p) = \left ( % \begin{matrix} \nabla_q^T \nabla_p H(q,p) & \nabla_p^T \nabla_p H(q,p) \\ - \nabla_q^T \nabla_q H(q,p) & - \nabla_p^T \nabla_q H(q,p) \\ \end{matrix} % \right ) . 
\end{align*} Thus, the linearized vector field is always trace-less, \begin{align*} \trace D X_H(q,p) &= \sum_{j = 1}^n \bigl ( \partial_{q_j} \partial_{p_j} H(q,p) - \partial_{p_j} \partial_{q_j} H(q,p) \bigr ) = 0 , \end{align*} and using that the sum of eigenvalues $\{ \lambda_j \}_{j = 1}^{2n}$ of $D X_H(q,p)$ equals the trace of $D X_H(q,p)$, we know that the $\lambda_j$ (repeated according to their multiplicity) sum to $0$, \begin{align*} \trace D X_H(q,p) = 0 = \sum_{j = 1}^{2n} \lambda_j . \end{align*} Moreover, seeing as the entries of $D X_H(q,p)$ are real, the eigenvalues come in complex conjugate pairs $\{ \lambda_j , \overline{\lambda_j} \}$. Combined with the fact that $D X_H(q,p)$ is $2n \times 2n$-dimensional, one can show that if $\lambda_j$ is an eigenvalue of $D X_H(q,p)$, then so is $- \lambda_j$. This suggests that hamiltonian systems tend to be either elliptic or hyperbolic. \paragraph{Electric fields and other gradient fields} To better understand the influence which interactions with electromagnetic fields (and other forces which can be expressed as the gradient of a potential) have on the dynamics of a particle, let us start by considering the purely electric case for a non-relativistic particle. Here, the interaction is given by the standard Hamiltonian \begin{align*} H(q,p) = \tfrac{1}{2m} p^2 + V(q) , \end{align*} which encodes the interaction with the electric field $\mathbf{E} = - \nabla_q V$. The vector field \begin{align*} X_H(q,p) = \left ( % \begin{matrix} \frac{p}{m} \\ - \nabla_q V(q) \\ \end{matrix} % \right ) \end{align*} vanishes at $(q_0,0)$ where $q_0$ is a critical point of $V$, \ie $\nabla_q V(q_0) = 0$.
Its linearization \begin{align*} D X_H(q,p) &= \left ( % \begin{matrix} 0 & \frac{1}{m} \, \id_{\R^n} \\ - \mathrm{Hess} V(q) & 0 \\ \end{matrix} % \right ) \end{align*} involves the Hessian \begin{align*} \mathrm{Hess} V = \left ( % \begin{matrix} \partial_{q_1}^2 V & \cdots & \partial_{q_1} \partial_{q_n} V \\ \vdots & & \vdots \\ \partial_{q_1} \partial_{q_n} V & \cdots & \partial_{q_n}^2 V \\ \end{matrix} % \right ) \end{align*} of the potential. The block structure allows us to simplify the characteristic polynomial using \begin{align*} \mathrm{det} \left ( % \begin{matrix} A & B \\ C & D \\ \end{matrix} % \right ) &= \mathrm{det} A \; \mathrm{det} \, \bigl ( D - C \, A^{-1} \, B \bigr ) . \end{align*} Then the zeros of the characteristic polynomial \begin{align*} \chi_{q_0}(\lambda) &= \mathrm{det} \, \bigl ( \lambda \, \id_{\R^{2n}} - D X_H(q_0,0) \bigr ) \\ &= \mathrm{det} \left ( % \begin{matrix} \lambda \, \id_{\R^n} & - \frac{1}{m} \, \id_{\R^n} \\ + \mathrm{Hess} \, V(q_0) & \lambda \, \id_{\R^n} \\ \end{matrix} % \right ) \\ & = \mathrm{det} \bigl ( \lambda \, \id_{\R^n} \bigr ) \; \mathrm{det} \, \bigl ( \lambda \, \id_{\R^n} + \lambda^{-1} \, m^{-1} \, \mathrm{Hess} \, V(q_0) \bigr ) \\ &= \mathrm{det} \, \bigl ( \lambda^2 \, \id_{\R^n} + m^{-1} \, \mathrm{Hess} \, V(q_0) \bigr ) \end{align*} are the square roots of the eigenvalues of $- m^{-1} \, \mathrm{Hess} \, V(q_0)$. If $q_0$ is a non-degenerate local \emph{minimum}, for instance, then \begin{align} \mathrm{Hess} \, V(q_0) > 0 \; \; \Leftrightarrow \; \; x \cdot \mathrm{Hess} \, V(q_0) x > 0 \quad \forall x \in \R^n \setminus \{ 0 \} \end{align} holds in the sense of matrices; this condition is equivalent to requiring all eigenvalues $\omega_j$ of the Hessian to be positive. Hence, the eigenvalues of the linearized vector field are \begin{align*} \lambda_{\pm j} &= \pm \sqrt{- \tfrac{\omega_j}{m}} = \pm \ii \, \sqrt{\tfrac{\omega_j}{m}} , \end{align*} which means $(q_0,0)$ is a marginally stable, elliptic fixed point.
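For a single degree of freedom this dichotomy is explicit: the characteristic polynomial of $D X_H(q_0,0)$ reduces to $\lambda^2 + V''(q_0)/m = 0$. The short Python sketch below (names and parameter values are our illustrative choices) reproduces the two cases.

```python
import cmath

def linearization_eigenvalues(d2V, m=1.0):
    """Eigenvalues of D X_H(q0, 0) = [[0, 1/m], [-V''(q0), 0]]
    for H(q,p) = p^2/(2m) + V(q) with n = 1, i.e. the two solutions
    of lambda^2 + V''(q0)/m = 0."""
    root = cmath.sqrt(-d2V / m)
    return root, -root

# local minimum of V (V'' > 0): purely imaginary pair, elliptic fixed point
elliptic = linearization_eigenvalues(4.0)

# local maximum of V (V'' < 0): real pair of opposite sign, hyperbolic
hyperbolic = linearization_eigenvalues(-4.0)
```

For $V'' > 0$ both eigenvalues are purely imaginary (marginal stability), while for $V'' < 0$ one eigenvalue is real and positive, so the fixed point is unstable.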
\paragraph{Magnetic fields} In case \emph{only} a magnetic field is present, we consider the Hamilton function $H(q,p) = \frac{1}{2m} p^2$ together with the \emph{magnetic} equations of motion~\eqref{classical_mechanics:eqn:magnetic_eom}. The corresponding vector field \begin{align*} X_H(q,p) &= \frac{1}{m} \, \left ( % \begin{matrix} p \\ B(q) \, p \\ \end{matrix} % \right ) \end{align*} linearizes to \begin{align*} D X_H(q,p) &= \frac{1}{m} \, \left ( % \begin{matrix} 0 & \id_{\R^3} \\ B'(q) \, p & B(q) \\ \end{matrix} % \right ) \end{align*} where \begin{align*} B' \, p := \nabla_q^{\mathrm{T}} \bigl ( B \, p \bigr ) . \end{align*} Any of the fixed points are of the form $(q_0,0)$, so that we need to find the eigenvalues of \begin{align*} D X_H(q_0,0) &= \frac{1}{m} \, \left ( % \begin{matrix} 0 & \id_{\R^3} \\ 0 & B(q_0) \\ \end{matrix} % \right ) . \end{align*} Using the block form of the matrix, we see right away that three eigenvalues are $0$ while the others are, up to a factor of $\nicefrac{1}{m}$, the eigenvalues of $B(q_0)$: \begin{align*} \chi_{q_0}(\lambda) :& \negmedspace= \mathrm{det} \, \bigl ( \lambda \, \id_{\R^6} - D X_H(q_0,0) \bigr ) \\ &= \mathrm{det} \, \left ( % \begin{matrix} \lambda \, \id_{\R^3} & - \tfrac{1}{m} \, \id_{\R^3} \\ 0 & \lambda \, \id_{\R^3} - \tfrac{1}{m} \, B(q_0) \\ \end{matrix} % \right ) \\ &= \lambda^3 \; \mathrm{det} \, \bigl ( \lambda \, \id_{\R^3} - \tfrac{1}{m} \, B(q_0) \bigr ) . \end{align*} The eigenvalues of $B(q_0)$ are $0$ and $\pm \ii \, \babs{\mathbf{B}(q_0)}$. This can be seen from $B = \overline{B} = - B^T$, $\mathrm{tr} B = 0$ and $\mathrm{det} \, B = 0$, which implies: (i) $\lambda_1 = 0$, (ii) $\lambda_2 = \overline{\lambda_3}$ and $\lambda_2 + \lambda_3 = 0$. \marginpar{2013.10.03} Hence, magnetic fields have \emph{metastable, elliptic} fixed points which are all of the form $(q_0,0)$.
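Again a quick numerical sanity check of this eigenvalue structure (the field vector and the mass are illustrative values, not from the text): the linearization has a fourfold eigenvalue $0$ and a pair $\pm \ii \, \abs{\mathbf{B}(q_0)} / m$.

```python
import numpy as np

# Linearization at a fixed point (q0, 0) for a pure magnetic field:
#     DX_H(q0, 0) = (1/m) [[0, Id], [0, B(q0)]],
# with B the antisymmetric 3x3 matrix built from the field vector b
# (the sign convention of B is irrelevant for the spectrum).
m = 1.0
b = np.array([0.3, -1.2, 0.5])       # field vector B(q0), illustrative values
B = np.array([[0.0,   b[2], -b[1]],
              [-b[2], 0.0,   b[0]],
              [b[1], -b[0],  0.0]])

DX = np.block([[np.zeros((3, 3)), np.eye(3)],
               [np.zeros((3, 3)), B]]) / m

eigvals = np.linalg.eigvals(DX)
print(np.sort(np.abs(eigvals)))      # four (numerically) zero moduli, two |b|/m
```

The tolerance in the check below is loose because the zero eigenvalue is defective (one Jordan block of size two), which limits its numerical accuracy.
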
That means, we are confronted with two problems: first of all, $4$ of the eigenvalues are $0$, so there are many metastable directions. The second one is much more serious: linearization is a \emph{local} technique, and studying the stability via linearization hinges on the fact that you can separate fixed points by open neighborhoods. But here, none of the fixed points can be isolated from the others (in the sense that there does not exist an open neighborhood which contains only a single fixed point). So one needs more care: for instance, it is crucial to look at how the direction of $\mathbf{B}$ changes, looking at the linearization of the vector field is insufficient. \chapter{Banach \& Hilbert spaces} \label{spaces} This section intends to introduce some fundamental notions on \emph{Banach spaces}; those are vector spaces of functions where the notion of distance is compatible with the linear structure. In addition, \emph{Hilbert spaces} also allow one to introduce a notion of angle via a \emph{scalar product}. Those notions are crucial to understand PDEs and ODEs, because these are defined on a \emph{domain} (similar to domains of functions). For instance, one may ask: \begin{enumerate}[(i)] \item How does the existence of solutions depend on the domain, \eg by imposing different boundary conditions? \item How well can I approximate a solution with a given set of base vectors? This is important for numerics, because one needs to approximate elements of infinite-dimensional spaces by finite linear combinations. \end{enumerate} \section{Banach spaces} \label{spaces:Banach} Many vector spaces $\mathcal{X}$ can be equipped with a \emph{norm} $\norm{\cdot}$, and if they are complete with respect to that norm, the pair $\bigl ( \mathcal{X} , \norm{\cdot} \bigr )$ is a Banach space. 
\subsection{Abstract Banach spaces} \label{spaces:Banach:generic} The abstract definition of Banach spaces is quite helpful when we construct Banach spaces from other Banach spaces (\eg Banach spaces of integrable, vector-valued functions).\footnote{We will only consider vector spaces over $\C$, although much of what we do works just fine if the field of scalars is $\R$.} \begin{definition}[Normed space] Let $\mathcal{X}$ be a vector space. A mapping $\norm{\cdot} : \mathcal{X} \longrightarrow [0,+\infty)$ with the properties % \begin{enumerate}[(i)] \item $\norm{x} = 0$ if and only if $x = 0$, \item $\norm{\alpha x} = \abs{\alpha} \, \norm{x}$, and \item $\norm{x + y} \leq \norm{x} + \norm{y}$, \end{enumerate} % for all $x,y \in \mathcal{X}$, $\alpha \in \C$, is called a norm. The pair $(\mathcal{X},\norm{\cdot})$ is then referred to as a normed space. \end{definition} A norm on $\mathcal{X}$ quite naturally induces a metric by setting \begin{align*} d(x,y) := \norm{x-y} \end{align*} for all $x,y \in \mathcal{X}$. Unless specifically mentioned otherwise, one always works with the metric induced by the norm. \begin{definition}[Banach space] A complete normed space is a Banach space. \end{definition} \begin{example} The space $\mathcal{X} = \Cont([a,b],\C)$ from the previous list of examples has a norm, the sup norm % \begin{align*} \norm{f}_{\infty} = \sup_{x \in [a,b]} \abs{f(x)} . \end{align*} % Since $\Cont([a,b],\C)$ is complete, it is a Banach space. \end{example} \subsection{Prototypical Banach spaces: $L^p(\Omega)$ spaces} \label{spaces:Banach:prototypical} The prototypical examples of Banach spaces are the so-called \emph{$L^p$ spaces} or $p$-integrable spaces; a rigorous definition requires a bit more care, so we refer the interested reader to \cite{Lieb_Loss:analysis:2001}. When we say integrable, we mean integrable with respect to the Lebesgue measure \cite[p.~6~ff.]{Lieb_Loss:analysis:2001}.
For any open or closed set $\Omega \subseteq \R^n$, the space of \emph{$p$-integrable functions} $\mathcal{L}^p(\Omega)$ is a $\C$-vector space, but \begin{align*} \norm{\varphi}_{L^p(\Omega)} := \left ( \int_{\Omega} \dd x \, \abs{\varphi(x)}^p \right )^{\nicefrac{1}{p}} \end{align*} is not a norm: there are functions $\varphi \neq 0$ for which $\norm{\varphi} = 0$. Instead, $\norm{\varphi} = 0$ only ensures \begin{align*} \varphi(x) = 0 \mbox{ almost everywhere (with respect to the Lebesgue measure $\dd x$).} \end{align*} Almost everywhere is sometimes abbreviated with a.~e. and the terms “almost surely” and “for almost all $x \in \Omega$” can be used synonymously. If we introduce the equivalence relation \begin{align*} \varphi \sim \psi :\Leftrightarrow \norm{\varphi - \psi} = 0 , \end{align*} then we can define the vector space $L^p(\Omega)$: \begin{definition}[$L^p(\Omega)$] Let $1 \leq p < \infty$. Then we define % \begin{align*} \mathcal{L}^p(\Omega) := \Bigl \{ f : \Omega \longrightarrow \C \; \big \vert \; \mbox{$f$ measurable, } \int_{\Omega} \dd x \, \abs{f(x)}^p < \infty \Bigr \} \end{align*} % as the vector space of functions whose $p$th power is integrable. Then $L^p(\Omega)$ is the vector space % \begin{align*} L^p(\Omega) := \mathcal{L}^p(\Omega) / \sim \end{align*} % consisting of equivalence classes of functions that agree almost everywhere. With the $p$ norm % \begin{align*} \norm{f}_p := \biggl ( \int_{\Omega} \dd x \, \abs{f(x)}^p \biggr )^{\nicefrac{1}{p}} \end{align*} % it forms a normed space. \end{definition} In practice, one usually does not distinguish between equivalence classes of functions $[f]$ (which make up $L^p(\Omega)$) and functions $f$. This abuse of notation is pervasive in the literature and it is perfectly acceptable to write $f \in L^p(\Omega)$ even though strictly speaking, one should write $[f] \in L^p(\Omega)$. 
Only when necessary, one takes into account that $f = 0$ actually means $f(x) = 0$ for almost all $x \in \Omega$. In case $p = \infty$, we have to modify the definition a little bit. \begin{definition}[$L^{\infty}(\Omega)$] We define % \begin{align*} \mathcal{L}^{\infty}(\Omega) := \Bigl \{ f : \Omega \longrightarrow \C \; \big \vert \; \mbox{$f$ measurable, } \exists 0 < K < \infty : \abs{f(x)} \leq K \mbox{ almost everywhere} \Bigr \} \end{align*} % to be the space of functions that are bounded almost everywhere and % \begin{align*} \norm{f}_{\infty} := \mathrm{ess} \sup_{x \in \Omega} \babs{f(x)} := \inf \bigl \{ K \geq 0 \; \big \vert \abs{f(x)} \leq K \mbox{ for almost all $x \in \Omega$} \bigr \} . \end{align*} % Then the space $L^{\infty}(\Omega) := \mathcal{L}^{\infty}(\Omega) / \sim$ is defined as the vector space of equivalence classes where two functions are identified if they agree almost everywhere.\marginpar{2013.10.08} \end{definition} \begin{theorem}[Riesz-Fischer] For any $1 \leq p \leq \infty$, $L^p(\Omega)$ is complete with respect to the $\norm{\cdot}_p$ norm and thus, a Banach space. \end{theorem} \begin{definition}[Separable Banach space] A Banach space $\mathcal{X}$ is called separable if there exists a countable dense subset. \end{definition} For Hilbert spaces, this condition is equivalent to the existence of a countable orthonormal basis (\cf Section~\ref{spaces:Hilbert}); for general Banach spaces, the existence of a countable (Schauder) basis implies separability, but the converse fails. \begin{theorem} For any $1 \leq p < \infty$, the Banach space $L^p(\Omega)$ is separable. \end{theorem} \begin{proof} We refer to \cite[Lemma~2.17]{Lieb_Loss:analysis:2001} for an explicit construction. The idea is to approximate arbitrary functions by functions which are constant on cubes and take only values in the rational complex numbers. \end{proof} For future reference, we collect a few facts on $L^p(\Omega)$ spaces. In particular, we will make use of dominated convergence frequently. We will give them without proof, they can be found in standard text books on analysis, see \eg \cite{Lieb_Loss:analysis:2001}.
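Before moving on, note that $p$-norms of explicit functions can be approximated by simple quadrature; the following minimal numerical sketch (the test function $f(x) = x$ on $[0,1]$ and the implementation are purely illustrative) uses that $\norm{f}_p = (p+1)^{-\nicefrac{1}{p}}$ for this $f$.

```python
import numpy as np

def lp_norm(f, a, b, p, num=200001):
    """Approximate the L^p norm of f on [a, b] with the trapezoidal rule."""
    x = np.linspace(a, b, num)
    vals = np.abs(f(x)) ** p
    dx = (b - a) / (num - 1)
    integral = dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return integral ** (1.0 / p)

# ||f||_p for f(x) = x on [0, 1] equals (p + 1)^(-1/p).
for p in (1, 2, 4):
    print(p, lp_norm(lambda x: x, 0.0, 1.0, p), (p + 1) ** (-1.0 / p))
```

Changing $f$ on a set of measure zero changes neither the integral nor the norm, which is precisely why one passes to equivalence classes.
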
\begin{theorem}[Monotone Convergence] Let $(f_k)_{k \in \N}$ be a non-decreasing sequence of functions in $L^1(\Omega)$, \ie $f_k \leq f_{k+1}$ almost everywhere, with pointwise limit $f$ defined almost everywhere. Define $I_k := \int_{\Omega} \dd x \, f_k(x)$; then the sequence $(I_k)$ is non-decreasing as well. If $I := \lim_{k \to \infty} I_k < \infty$, then $I = \int_{\Omega} \dd x \, f(x)$, \ie % \begin{align*} \lim_{k \to \infty} \int_{\Omega} \dd x \, f_k(x) = \int_{\Omega} \dd x \, \lim_{k \to \infty} f_k(x) = \int_{\Omega} \dd x \, f(x) \end{align*} % holds. \end{theorem} \begin{theorem}[Dominated Convergence] Let $(f_k)_{k \in \N}$ be a sequence of functions in $L^1(\Omega)$ that converges almost everywhere pointwise to some $f : \Omega \longrightarrow \C$. If there exists a non-negative $g \in L^1(\Omega)$ such that $\abs{f_k(x)} \leq g(x)$ holds almost everywhere for all $k \in \N$, then $g$ also bounds $\abs{f}$, \ie $\abs{f(x)} \leq g(x)$ almost everywhere, and $f \in L^1(\Omega)$. Furthermore, the limit $k \to \infty$ and integration with respect to $x$ commute and we have % \begin{align*} \lim_{k \to \infty} \int_{\Omega} \dd x \, f_k(x) = \int_{\Omega} \dd x \, \lim_{k \to \infty} f_k(x) = \int_{\Omega} \dd x \, f(x) . \end{align*} % \end{theorem} \begin{example}[First half of Riemann-Lebesgue lemma]\label{spaces:example:Riemann-Lebesgue_half} We define the Fourier transform of $f \in L^1(\R^n)$ as % \begin{align*} (\Fourier f)(\xi) := \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \int_{\R^n} \dd x \, \e^{- \ii \xi \cdot x} \, f(x) .
\end{align*} % The integrability of $f$ implies that $\Fourier f$ is uniformly bounded: % \begin{align*} \babs{(\Fourier f)(\xi)} &\leq \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \int_{\R^n} \dd x \, \babs{\e^{- \ii \xi \cdot x} \, f(x)} = \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \, \norm{f}_{L^1(\R^n)} . \end{align*} % In fact, $\Fourier f$ is continuous, and the crucial tool in the proof is dominated convergence: to see that $\Fourier f$ is continuous in $\xi_0 \in \R^n$, let $(\xi_n)_{n \in \N}$ be any sequence converging to $\xi_0$. Since we can bound the integrand uniformly in $\xi$, % \begin{align*} \babs{\e^{- \ii \xi \cdot x} \, f(x)} \leq \abs{f(x)} , \end{align*} % dominated convergence applies along this sequence, and we may interchange integration and the limit $\xi \to \xi_0$, % \begin{align*} \lim_{\xi \to \xi_0} (\Fourier f)(\xi) &= \lim_{\xi \to \xi_0} \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \int_{\R^n} \dd x \, \e^{- \ii \xi \cdot x} \, f(x) \\ &= \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \int_{\R^n} \dd x \, \lim_{\xi \to \xi_0} \bigl ( \e^{- \ii \xi \cdot x} \, f(x) \bigr ) \\ &= \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \int_{\R^n} \dd x \, \e^{- \ii \xi_0 \cdot x} \, f(x) = (\Fourier f)(\xi_0) . \end{align*} % This means $\Fourier f$ is continuous in $\xi_0$. Since $\xi_0$ was chosen arbitrarily, we have in fact $\Fourier f \in L^{\infty}(\R^n) \cap \Cont(\R^n)$. \end{example} \subsection{Boundary value problems} \label{spaces:boundary_value_problems} Now let us consider the wave equation \begin{align} \partial_t^2 u - \partial_x^2 u &= 0 \label{spaces:eqn:wave_equation} \end{align} in one dimension for the initial conditions \begin{align*} u(0,x) &= \varphi(x) , \\ \partial_t u(0,x) &= \psi(x) . \end{align*} This formal description defines only \emph{half} of the wave equation; the other half is to state clearly what space $u$ is taken from, and if it should satisfy additional conditions. A priori it is not clear whether $u$ is a function on all of $\R$ or on a subset of $\R$, say, an interval $[a,b]$.
The derivatives appearing in \eqref{spaces:eqn:wave_equation} need only exist in the interior of the spatial domain, \eg $(a,b)$. Moreover, we could impose integrability conditions on $u$, \eg $u \in L^1(\R)$. If $u$ is a function from $[0,L]$ to $\C$, for instance, it turns out that we need to specify the behavior of $u$ or $u'$ at the \emph{boundary}, \eg \begin{enumerate}[(i)] \item Dirichlet boundary conditions: $u(t,0) = 0 = u(t,L)$ \item Neumann boundary conditions: $\partial_x u(t,0) = 0 = \partial_x u(t,L)$ \item Mixed or Robin boundary conditions: $\alpha_0 u(t,0) + \beta_0 \partial_x u(t,0) = 0$, $\alpha_L u(t,L) + \beta_L \partial_x u(t,L) = 0$ \end{enumerate} If one of the boundaries is located at $\pm \infty$, then the corresponding boundary condition often becomes meaningless because “$u(t,\pm\infty)$” usually makes no sense. \medskip \noindent Let us start by solving \eqref{spaces:eqn:wave_equation} via a \emph{product ansatz}, \ie we assume $u$ is of the form \begin{align*} u(t,x) &= \tau(t) \, \xi(x) \end{align*} for suitable functions $\xi$ and $\tau$. Plugging the product ansatz into the wave equation and assuming that $\tau(t)$ and $\xi(x)$ are non-zero yields \begin{align*} \ddot{\tau}(t) \, \xi(x) - \tau(t) \, \xi''(x) &= 0 \; \; \Longleftrightarrow \; \; \frac{\ddot{\tau}(t)}{\tau(t)} = \frac{\xi''(x)}{\xi(x)} = \lambda \in \C . \end{align*} This means $\tau$ and $\xi$ each need to satisfy the harmonic oscillator equation, \begin{align*} \ddot{\tau} - \lambda \tau &= 0 \, , \\ \xi'' - \lambda \xi &= 0 . \end{align*} Note that these two equations are \emph{coupled} via the constant $\lambda$ which has yet to be determined.
The solutions to these equations are \begin{align*} \tau(t) &= % \begin{cases} a_1(0) + a_2(0) \, t & \lambda = 0 \\ a_1(\lambda) \, \e^{+ t \sqrt{\lambda}} + a_2(\lambda) \, \e^{- t \sqrt{\lambda}} & \lambda \neq 0 \\ \end{cases} % \end{align*} and \begin{align*} \xi(x) &= % \begin{cases} b_1(0) + b_2(0) \, x & \lambda = 0 \\ b_1(\lambda) \, \e^{+ x \sqrt{\lambda}} + b_2(\lambda) \, \e^{- x \sqrt{\lambda}} & \lambda \neq 0 \\ \end{cases} % \end{align*} for $a_1(\lambda) , a_2(\lambda) , b_1(\lambda) , b_2(\lambda) \in \C$ where we always choose the root whose imaginary part is positive. These solutions are smooth functions on $\R \times \R$. \paragraph{The wave equation on $\R$} As mentioned in problem~4 on sheet~1, conditions on $u$ now restrict the admissible values for $\lambda$. For instance, if we ask that \begin{align*} u \in L^{\infty}(\R \times \R) , \end{align*} then this condition only allows $\lambda \leq 0$ and excludes the linear solutions, \ie $a_2(0) = 0 = b_2(0)$. \paragraph{The wave equation on $[0,L]$ with Dirichlet boundary conditions} Now assume we are interested in the case where the wave equation is considered on the interval $[0,L]$ with Dirichlet boundary conditions, $u(t,0) = 0 = u(t,L)$. It turns out that the boundary condition only allows for a discrete set of negative $\lambda$: one can see easily that for $\lambda = 0$, only the trivial spatial solution satisfies the Dirichlet boundary conditions, \ie $b_1(0) = 0 = b_2(0)$. For $\lambda \neq 0$, the first boundary condition \begin{align*} \xi_{\lambda}(0) &= b_1(\lambda) \, \e^{+ 0 \sqrt{\lambda}} + b_2(\lambda) \, \e^{- 0 \sqrt{\lambda}} \\ &= b_1(\lambda) + b_2(\lambda) \overset{!}{=} 0 \end{align*} implies $b_2(\lambda) = - b_1(\lambda)$.
Plugging that back into the second equation yields \begin{align*} \xi_{\lambda}(L) &= b_1(\lambda) \bigl ( \e^{+ L \sqrt{\lambda}} - \e^{- L \sqrt{\lambda}} \bigr ) \overset{!}{=} 0 \end{align*} which is equivalent to \begin{align*} \e^{2 L \sqrt{\lambda}} = 1 . \end{align*} The solutions to this equation are of the form \begin{align*} \sqrt{\lambda} = \ii \frac{n \pi}{L} \end{align*} for some $n \in \N$, \ie $\lambda < 0$. Moreover, as discussed in problem~4, the only admissible solutions of the spatial equation are \begin{align*} \xi_n(x) &= \sin n \tfrac{\pi x}{L} , \qquad \qquad n \in \N . \end{align*} That means the solutions are indexed by an integer $n \in \N$, \begin{align*} u_n(t,x) = \Bigl ( a_1(n) \, \e^{+ \ii n \frac{\pi t}{L}} + a_2(n) \, \e^{- \ii n \frac{\pi t}{L}} \Bigr ) \, \sin n \tfrac{\pi x}{L} , \end{align*} and a generic solution is of the form \begin{align} u(t,x) &= \sum_{n \in \N} \Bigl ( a_1(n) \, \e^{+ \ii n \frac{\pi t}{L}} + a_2(n) \, \e^{- \ii n \frac{\pi t}{L}} \Bigr ) \, \sin n \tfrac{\pi x}{L} . \label{spaces:eqn:solution_wave_equation_Fourier} \end{align} To obtain the coefficients, we need to solve \begin{align*} u(0,x) &= \sum_{n \in \N} \bigl ( a_1(n) + a_2(n) \bigr ) \, \sin n \tfrac{\pi x}{L} \overset{!}{=} \varphi(x) \\ \partial_t u(0,x) &= \sum_{n \in \N} \frac{\ii \, n \, \pi}{L} \, \bigl ( a_1(n) - a_2(n) \bigr ) \, \sin n \tfrac{\pi x}{L} \overset{!}{=} \psi(x) . \end{align*} Later on, we will see that all integrable functions on intervals can be expanded in terms of $\e^{\pm \ii n \lambda x}$ for suitable choices of $\lambda$ (\cf Section~\ref{Fourier}).
Hence, we can expand \begin{align*} \varphi(x) &= \sum_{n \in \N} b_{\varphi}(n) \, \sin n \tfrac{\pi x}{L} \\ \psi(x) &= \sum_{n \in \N} b_{\psi}(n) \, \sin n \tfrac{\pi x}{L} \end{align*} in the same fashion as $u$, and we obtain the following relation between the coefficients: \begin{align*} b_{\varphi}(n) &= a_1(n) + a_2(n) \\ b_{\psi}(n) &= \frac{\ii \, n \, \pi}{L} \, \bigl ( a_1(n) - a_2(n) \bigr ) \end{align*} For each $n$, we obtain two linear equations with two unknowns, and we can solve them explicitly. Now the question is whether the sum in $u(t,\cdot)$ is also integrable: if we assume for now that the coefficients of $\varphi$ and $\psi$ are absolutely summable\footnote{In Chapter~\ref{Fourier} we will give explicit conditions on $\varphi$ and $\psi$ which ensure the absolute summability of the Fourier coefficients.},\marginpar{2013.10.10} \begin{align*} \sum_{n \in \N} \babs{a_1(n) + a_2(n)} &< \infty \, , \\ \sum_{n \in \N} \sabs{n} \, \babs{a_1(n) - a_2(n)} &< \infty \, , \end{align*} then we deduce \begin{align*} 2 \babs{a_1(n)} &= \babs{a_1(n) + a_1(n)} \\ & = \babs{a_1(n) - a_2(n) + a_2(n) + a_1(n)} \\ &\leq \babs{a_1(n) + a_2(n)} + \babs{a_1(n) - a_2(n)} \\ & \leq \babs{a_1(n) + a_2(n)} + \abs{n} \, \babs{a_1(n) - a_2(n)} , \end{align*} and analogously for $a_2(n)$. Thus, the series on the right-hand side of \eqref{spaces:eqn:solution_wave_equation_Fourier} converges absolutely and uniformly, and in particular in $L^1([0,L])$, \begin{align*} \babs{u(t,x)} &\leq \biggl \lvert \sum_{n \in \N} \bigl ( a_1(n) \, \e^{+ \ii \frac{n \pi}{L} t} + a_2(n) \, \e^{- \ii \frac{n \pi}{L} t} \bigr ) \, \sin n \tfrac{\pi x}{L} \biggr \rvert \\ &\leq \sum_{n \in \N} \bigl ( \sabs{a_1(n)} + \sabs{a_2(n)} \bigr ) < \infty . \end{align*} Since the bound is independent of $t$, we deduce that $u(t,\cdot)$ exists for all $t \in \R$.
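The truncated series \eqref{spaces:eqn:solution_wave_equation_Fourier} is easy to synthesize numerically; the following sketch (coefficients chosen arbitrarily for illustration, with $a_2(n) = \overline{a_1(n)}$ so that $u$ is real) checks that the Dirichlet boundary conditions hold at every time.

```python
import numpy as np

L = 1.0
a1 = np.array([0.5, -0.2, 0.1, 0.0, 0.05])   # a_1(n), illustrative values
a2 = a1.conj()                               # a_2(n) = conj(a_1(n)) => real solution

def u(t, x):
    """Truncated Fourier series solution of the wave equation on [0, L]."""
    x = np.asarray(x, dtype=float)
    total = np.zeros(x.shape, dtype=complex)
    for n in range(1, len(a1) + 1):
        time_factor = (a1[n - 1] * np.exp(1j * n * np.pi * t / L)
                       + a2[n - 1] * np.exp(-1j * n * np.pi * t / L))
        total += time_factor * np.sin(n * np.pi * x / L)
    return total

x = np.linspace(0.0, L, 101)
for t in (0.0, 0.3, 1.7):
    vals = u(t, x)
    print(t, abs(vals[0]), abs(vals[-1]))    # Dirichlet: u(t,0) = 0 = u(t,L)
```
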
Overall, we have shown the following: if we place enough conditions on the initial values $u(0) = \varphi$ and $\partial_t u(0) = \psi$ (here: $\varphi, \psi \in L^1([0,L])$ and absolutely convergent Fourier series), then in fact $u(t) \in L^1([0,L])$ is integrable for all times (bounded functions on bounded intervals are integrable). \section{Hilbert spaces} \label{spaces:Hilbert} \emph{Hilbert spaces} $\Hil$ are Banach spaces with a \emph{scalar product} $\scpro{\cdot \,}{\cdot}$ which allows one to measure the “angle” between two vectors. Most importantly, it yields a characterization of vectors which are orthogonal to one another, giving rise to the notion of \emph{orthonormal bases} (ONB). This type of basis is particularly efficient to work with and has some rather nice properties (\eg a Pythagorean theorem holds). \subsection{Abstract Hilbert spaces} \label{spaces:Hilbert:generic} First, let us define a Hilbert space in the abstract, starting with \begin{definition}[pre-Hilbert space and Hilbert space] A \emph{pre-Hilbert space} is a complex vector space $\Hil$ with scalar product % \begin{align*} \scpro{\cdot \, }{\cdot} : \Hil \times \Hil \longrightarrow \C , \end{align*} % \ie a mapping with properties % \begin{enumerate}[(i)] \item $\scpro{\varphi}{\varphi} \geq 0$ and $\scpro{\varphi}{\varphi} = 0$ implies $\varphi = 0$ (positive definiteness), \item $\scpro{\varphi}{\psi}^* = \scpro{\psi}{\varphi}$, and \item $\scpro{\varphi}{\alpha \psi + \chi} = \alpha \scpro{\varphi}{\psi} + \scpro{\varphi}{\chi}$ \end{enumerate} % for all $\varphi , \psi , \chi \in \Hil$ and $\alpha \in \C$. This induces a natural norm $\norm{\varphi} := \sqrt{\sscpro{\varphi}{\varphi}}$ and metric $d(\varphi,\psi) := \norm{\varphi - \psi}$, $\varphi , \psi \in \Hil$.
If $\Hil$ is complete with respect to the induced metric, it is a \emph{Hilbert space}. \end{definition} \begin{example} % \begin{enumerate}[(i)] \item $\C^n$ with scalar product % \begin{align*} \scpro{z}{w} := \sum_{j = 1}^n z_j^* \, w_j \end{align*} % is a Hilbert space. \item $\Cont([a,b],\C)$ with scalar product % \begin{align*} \scpro{f}{g} := \int_a^b \dd x \, f(x)^* \, g(x) \end{align*} % is just a pre-Hilbert space, since it is not complete. \end{enumerate} % \end{example} \subsection{Orthonormal bases and orthogonal subspaces} \label{spaces:Hilbert:orthonormal} Hilbert spaces have the important notion of orthonormal vectors and sequences, which does not exist in general Banach spaces. \begin{definition}[Orthonormal set] Let $\mathcal{I}$ be a countable index set. A family of vectors $\{ \varphi_k \}_{k \in \mathcal{I}}$ is called an orthonormal set if for all $k,j \in \mathcal{I}$ % \begin{align*} \scpro{\varphi_k}{\varphi_j} = \delta_{kj} \end{align*} % holds. \end{definition} As we will see, all vectors in a separable Hilbert space can be written in terms of a countable orthonormal basis. Especially when we want to approximate elements in a Hilbert space by elements in a proper closed subspace, the vector of best approximation can be written as a linear combination of basis vectors. \begin{definition}[Orthonormal basis]\label{spaces:Hilbert:defn:ONB} Let $\mathcal{I}$ be a countable index set. An orthonormal set of vectors $\{ \varphi_k \}_{k \in \mathcal{I}}$ is called an orthonormal basis if and only if for all $\psi \in \Hil$, we have % \begin{align*} \psi = \sum_{k \in \mathcal{I}} \sscpro{\varphi_k}{\psi} \, \varphi_k .
\end{align*} % If $\mathcal{I}$ is countably infinite, $\mathcal{I} \cong \N$, then this means the sequence of partial sums $\psi_n := \sum_{j = 1}^n \sscpro{\varphi_j}{\psi} \, \varphi_j$ converges in norm to $\psi$, % \begin{align*} \lim_{n \rightarrow \infty} \Bnorm{\psi - \mbox{$\sum_{j = 1}^n$} \sscpro{\varphi_j}{\psi} \, \varphi_j} = 0 . \end{align*} % \end{definition} \begin{example} Hermitian matrices $H = H^* \in \mathrm{Mat}_{\C}(n)$ give rise to an orthonormal set of vectors, namely a suitable set of eigenvectors. To see that this is so, let $v_k$ be an eigenvector to $\lambda_k$ (the eigenvalues are repeated according to their multiplicity). Then for all eigenvectors $v_j$ and $v_k$, we compute % \begin{align*} 0 = \scpro{v_j}{H v_k} - \scpro{v_j}{H v_k} &= \lambda_k \, \scpro{v_j}{v_k} - \scpro{H^* v_j}{v_k} \\ &= \lambda_k \, \scpro{v_j}{v_k} - \scpro{H v_j}{v_k} = (\lambda_k - \lambda_j) \, \scpro{v_j}{v_k} \end{align*} % where we have used that the eigenvalues of hermitian matrices are real (this follows from repeating the above argument for $j = k$). Hence, either $\scpro{v_j}{v_k} = 0$ if $\lambda_j \neq \lambda_k$ or $\lambda_j = \lambda_k$. In the latter case, we obtain a higher-dimensional subspace of $\C^n$ associated to the eigenvalue $\lambda_k$ for which we can construct an orthonormal basis using the Gram-Schmidt procedure. Thus, we obtain a basis of eigenvectors $\{ v_j \}_{j = 1}^n$. This basis is particularly convenient when working with the matrix $H$ since % \begin{align*} H w &= H \, \sum_{j = 1}^n \scpro{v_j}{w} \, v_j \\ &= \sum_{j = 1}^n \lambda_j \, \scpro{v_j}{w} \, v_j . \end{align*} % We will extend these arguments in the next chapter to operators on infinite-dimensional Hilbert spaces where one needs to take a little more care.
\end{example} With this general notion of orthogonality, we have a Pythagorean theorem: \begin{theorem}[Pythagoras] Given a finite orthonormal family $\{ \varphi_1 , \ldots , \varphi_n \}$ in a pre-Hilbert space $\Hil$ and $\varphi \in \Hil$, we have % \begin{align*} \bnorm{\varphi}^2 = \mbox{$\sum_{k = 1}^n$} \babs{\sscpro{\varphi_k}{\varphi}}^2 + \bnorm{\varphi - \mbox{$\sum_{k = 1}^n$} \sscpro{\varphi_k}{\varphi} \, \varphi_k}^2 . \end{align*} % \end{theorem} \begin{proof} It is easy to check that $\psi := \sum_{k = 1}^n \sscpro{\varphi_k}{\varphi} \, \varphi_k$ and $\psi^{\perp} := \varphi - \sum_{k = 1}^n \sscpro{\varphi_k}{\varphi} \, \varphi_k$ are orthogonal and $\varphi = \psi + \psi^{\perp}$. Hence, we obtain % \begin{align*} \norm{\varphi}^2 &= \sscpro{\varphi}{\varphi} = \sscpro{\psi+\psi^{\perp}}{\psi+\psi^{\perp}} = \sscpro{\psi}{\psi} + \sscpro{\psi^{\perp}}{\psi^{\perp}} \\ &= \bnorm{\mbox{$\sum_{k = 1}^n$} \sscpro{\varphi_k}{\varphi} \, \varphi_k}^2 + \bnorm{\varphi - \mbox{$\sum_{k = 1}^n$} \sscpro{\varphi_k}{\varphi} \, \varphi_k}^2 . \end{align*} % This concludes the proof. \end{proof} Simple corollaries are Bessel's inequality and the Cauchy-Schwarz inequality. \begin{theorem} Let $\Hil$ be a pre-Hilbert space. % \begin{enumerate}[(i)] \item Bessel's inequality holds: let $\bigl \{ \varphi_1 , \ldots , \varphi_n \bigr \}$ be a finite orthonormal sequence. Then % \begin{align*} \snorm{\psi}^2 \geq \sum_{j = 1}^n \sabs{\sscpro{\varphi_j}{\psi}}^2 \end{align*} % holds for all $\psi \in \Hil$. \item The Cauchy-Schwarz inequality holds, \ie % \begin{align*} \sabs{\sscpro{\varphi}{\psi}} \leq \snorm{\varphi} \snorm{\psi} \end{align*} % is valid for all $\varphi , \psi \in \Hil$. \end{enumerate} % \end{theorem} \begin{proof} % \begin{enumerate}[(i)] \item This follows trivially from the previous Theorem as $\snorm{\psi^{\perp}}^2 \geq 0$. \item Pick $\varphi , \psi \in \Hil$. In case $\varphi = 0$, the inequality holds.
So assume $\varphi \neq 0$ and define % \begin{align*} \varphi_1 := \frac{\varphi}{\norm{\varphi}} \end{align*} % which has norm $1$. We can apply (i) for $n = 1$ to conclude % \begin{align*} \norm{\psi}^2 \geq \abs{\sscpro{\varphi_1}{\psi}}^2 = \frac{1}{\norm{\varphi}^2} \abs{\sscpro{\varphi}{\psi}}^2 . \end{align*} % This is equivalent to the Cauchy-Schwarz inequality. \end{enumerate} % \end{proof} An important corollary says that the scalar product is continuous with respect to the norm topology. This is not at all surprising, after all the norm is induced by the scalar product! \begin{corollary}\label{hilbert_spaces:onb:cor:continuity_scalar_product} Let $\Hil$ be a Hilbert space. Then the scalar product is continuous with respect to the norm topology, \ie for two sequences $(\varphi_n)_{n \in \N}$ and $(\psi_m)_{m \in \N}$ that converge to $\varphi$ and $\psi$, respectively, we have % \begin{align*} \lim_{n,m \rightarrow \infty} \sscpro{\varphi_n}{\psi_m} &= \sscpro{\varphi}{\psi} . \end{align*} % \end{corollary} \begin{proof} Let $(\varphi_n)_{n \in \N}$ and $(\psi_m)_{m \in \N}$ be two sequences in $\Hil$ that converge to $\varphi$ and $\psi$, respectively. Then by Cauchy-Schwarz, we have % \begin{align*} \lim_{n,m \rightarrow \infty} \babs{\sscpro{\varphi}{\psi} - \sscpro{\varphi_n}{\psi_m}} &= \lim_{n,m \rightarrow \infty} \babs{\sscpro{\varphi - \varphi_n}{\psi} - \sscpro{\varphi_n}{\psi_m - \psi}} \\ &\leq \lim_{n,m \rightarrow \infty} \babs{\sscpro{\varphi - \varphi_n}{\psi}} + \lim_{n,m \rightarrow \infty} \babs{\sscpro{\varphi_n}{\psi_m - \psi}} \\ &\leq \lim_{n,m \rightarrow \infty} \snorm{\varphi - \varphi_n} \, \snorm{\psi} + \lim_{n,m \rightarrow \infty} \snorm{\varphi_n} \, \snorm{\psi_m - \psi} = 0 \end{align*} % since there exists some $C > 0$ such that $\norm{\varphi_n} \leq C$ for all $n \in \N$. 
\end{proof} Before we prove that a Hilbert space is separable exactly if it admits a \emph{countable} basis, we need to introduce the notion of orthogonal complement: if $A$ is a subset of a pre-Hilbert space $\Hil$, then we define \begin{align*} A^{\perp} := \bigl \{ \varphi \in \Hil \; \vert \; \sscpro{\varphi}{\psi} = 0 \; \forall \psi \in A \bigr \} . \end{align*} The following few properties of the orthogonal complement follow immediately from its definition: \begin{enumerate}[(i)] \item $\{ 0 \}^{\perp} = \Hil$ and $\Hil^{\perp} = \{ 0 \}$. \item $A^{\perp}$ is a closed linear subspace of $\Hil$ for \emph{any} subset $A \subseteq \Hil$. \item If $A \subseteq B$, then $B^{\perp} \subseteq A^{\perp}$. \item If we denote the subspace spanned by the elements in $A$ by $\mathrm{span} \, A$, we have % \begin{align*} A^{\perp} = \bigl ( \mathrm{span} \, A \bigr )^{\perp} = \bigl ( \overline{\mathrm{span} \, A} \bigr )^{\perp} \end{align*} % where $\overline{\mathrm{span} \, A}$ is the closure of $\mathrm{span} \, A$ with respect to the norm topology. \marginpar{2013.10.15} \end{enumerate} \subsection{Prototypical Hilbert spaces} \label{spaces:Hilbert:prototypical} We have already introduced the \emph{Banach} space of square-integrable functions on a set $\Omega \subseteq \R^n$, \begin{align*} \mathcal{L}^2(\Omega) := \Bigl \{ \varphi : \Omega \longrightarrow \C \; \big \vert \; \varphi \mbox{ measurable, } \int_{\Omega} \dd x \, \abs{\varphi(x)}^2 < \infty \Bigr \} , \end{align*} and this space can be naturally equipped with a scalar product, \begin{align} \scpro{f}{g} &= \int_{\Omega} \dd x \, f(x)^* \, g(x) . \label{spaces:eqn:L2_scalar_product} \end{align} When talking about wave functions in quantum mechanics, the Born rule states that $\abs{\psi(x)}^2$ is to be interpreted as a probability density on $\Omega$ for position (provided $\psi$ is normalized, $\norm{\psi} = 1$). Hence, we are interested in solutions to the Schrödinger equation which are also square integrable.
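For the concrete scalar product \eqref{spaces:eqn:L2_scalar_product}, the Cauchy-Schwarz inequality from the previous subsection can be verified numerically; a small sketch with arbitrarily chosen functions on $[0,1]$ (for these particular choices, $\norm{f} = 1$ and $\norm{g} = 3^{-\nicefrac{1}{2}}$ by direct integration).

```python
import numpy as np

def inner(f, g, a=0.0, b=1.0, num=100001):
    """Approximate <f, g> = integral of conj(f) * g with the trapezoidal rule."""
    x = np.linspace(a, b, num)
    vals = np.conj(f(x)) * g(x)
    dx = (b - a) / (num - 1)
    return dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

f = lambda x: np.exp(1j * x)     # |f(x)| = 1, so ||f|| = 1 on [0, 1]
g = lambda x: x                  # ||g||^2 = 1/3

ip = inner(f, g)
norm_f = np.sqrt(inner(f, f).real)
norm_g = np.sqrt(inner(g, g).real)
print(abs(ip), norm_f * norm_g)  # |<f,g>| <= ||f|| ||g||
```
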
If $\psi_1 \sim \psi_2$ are two normalized functions in $\mathcal{L}^2(\Omega)$, then we get the same probabilities for both: if $\Lambda \subseteq \Omega \subseteq \R^n$ is a measurable set, then \begin{align*} \mathbb{P}_1(X \in \Lambda) = \int_{\Lambda} \dd x \, \abs{\psi_1(x)}^2 = \int_{\Lambda} \dd x \, \abs{\psi_2(x)}^2 = \mathbb{P}_2(X \in \Lambda) . \end{align*} This is proven via the triangle inequality and the Cauchy-Schwarz inequality: \begin{align*} 0 &\leq \babs{\mathbb{P}_1(X \in \Lambda) - \mathbb{P}_2(X \in \Lambda)} = \abs{\int_{\Lambda} \dd x \, \abs{\psi_1(x)}^2 - \int_{\Lambda} \dd x \, \abs{\psi_2(x)}^2} \\ & = \abs{\int_{\Lambda} \dd x \, \bigl ( \psi_1(x) - \psi_2(x) \bigr )^* \, \psi_1(x) + \int_{\Lambda} \dd x \, \psi_2(x)^* \, \bigl ( \psi_1(x) - \psi_2(x) \bigr )} \\ &\leq \int_{\Lambda} \dd x \, \babs{\psi_1(x) - \psi_2(x)} \, \babs{\psi_1(x)} + \int_{\Lambda} \dd x \, \babs{\psi_2(x)} \, \babs{\psi_1(x) - \psi_2(x)} \\ &\leq \norm{\psi_1 - \psi_2} \, \norm{\psi_1} + \norm{\psi_2} \, \norm{\psi_1 - \psi_2} = 0 , \end{align*} where the last equality holds because $\psi_1 \sim \psi_2$ means $\norm{\psi_1 - \psi_2} = 0$. Very often, another space is used in applications (\eg in tight-binding models): \begin{definition}[$\ell^2(S)$]\label{spaces:Hilbert:defn:ell2} Let $S$ be a countable set. Then % \begin{align*} \ell^2(S) := \Bigl \{ c : S \longrightarrow \C \; \big \vert \; \mbox{$\sum_{j \in S} c_j^* c_j < \infty$} \Bigr \} \end{align*} % is the Hilbert space of square-summable sequences with scalar product $\scpro{c}{c'} := \sum_{j \in S} c_j^* c'_j$. \end{definition} On $\ell^2(S)$ the scalar product induces the norm $\norm{c} := \sqrt{\scpro{c}{c}}$. With respect to this norm, $\ell^2(S)$ is complete. \subsection{Best approximation} \label{spaces:Hilbert:best_approx} If $(\Hil,d)$ is a metric space, we can define the distance between a point $\varphi \in \Hil$ and a subset $A \subseteq \Hil$ as \begin{align*} d(\varphi,A) := \inf_{\psi \in A} d(\varphi,\psi) .
\end{align*} If there exists $\varphi_0 \in A$ which minimizes the distance, \ie $d(\varphi,A) = d(\varphi,\varphi_0)$, then $\varphi_0$ is called \emph{element of best approximation} for $\varphi$ in $A$. This notion is helpful to understand why and how elements in an infinite-dimensional Hilbert space can be approximated by finite linear combinations -- something that is used in numerics all the time. If $A \subset \Hil$ is a closed convex subset of a Hilbert space $\Hil$, then one can show that there always exists a unique element of best approximation. In case $A$ is a linear subspace of $\Hil$, it is given by projecting an arbitrary $\psi \in \Hil$ down to the subspace $A$. \begin{theorem}\label{hilbert_spaces:onb:thm:best_approx} Let $A$ be a closed convex subset of a Hilbert space $\Hil$. Then there exists for each $\varphi \in \Hil$ exactly one $\varphi_0 \in A$ such that % \begin{align*} d(\varphi,A) = d(\varphi,\varphi_0) . \end{align*} % \end{theorem} \begin{proof} We choose a sequence $(\psi_n)_{n \in \N}$ in $A$ with $d(\varphi,\psi_n) = \norm{\varphi - \psi_n} \rightarrow d(\varphi,A)$. This sequence is also a Cauchy sequence: we add and subtract $\varphi$ to get % \begin{align*} \bnorm{\psi_n - \psi_m}^2 &= \bnorm{(\psi_n - \varphi) + (\varphi - \psi_m)}^2 . \end{align*} % If $\Hil$ were merely a normed space, we would have to use the triangle inequality to estimate the right-hand side from above. However, $\Hil$ is a Hilbert space and by using the parallelogram identity,\footnote{For all $\varphi , \psi \in \Hil$, the identity $2 \norm{\varphi}^2 + 2 \norm{\psi}^2 = \norm{\varphi + \psi}^2 + \norm{\varphi - \psi}^2$ holds.
} we see that the right-hand side is actually \emph{equal} to % \begin{align*} \bnorm{\psi_n - \psi_m}^2 &= 2 \bnorm{\psi_n - \varphi}^2 + 2 \bnorm{\psi_m - \varphi}^2 - \bnorm{\psi_n + \psi_m - 2 \varphi}^2 \\ & = 2 \bnorm{\psi_n - \varphi}^2 + 2 \bnorm{\psi_m - \varphi}^2 - 4 \bnorm{\tfrac{1}{2} (\psi_n + \psi_m) - \varphi}^2 \\ &\leq 2 \bnorm{\psi_n - \varphi}^2 + 2 \bnorm{\psi_m - \varphi}^2 - 4 \, d(\varphi,A)^2 \\ & \xrightarrow{n , m \rightarrow \infty} 2 \, d(\varphi,A)^2 + 2 \, d(\varphi,A)^2 - 4 \, d(\varphi,A)^2 = 0 . \end{align*} % Here we have used that, by convexity, $\frac{1}{2} (\psi_n + \psi_m)$ is again an element of $A$, so that $\bnorm{\tfrac{1}{2} (\psi_n + \psi_m) - \varphi} \geq d(\varphi,A)$; this observation is crucial once more in the uniqueness argument. Letting $n,m \rightarrow \infty$, we see that $(\psi_n)_{n \in \N}$ is a Cauchy sequence in $A$ which converges in $A$ as it is a closed subset of $\Hil$. Let us call the limit point $\varphi_0 := \lim_{n \rightarrow \infty} \psi_n$. Then $\varphi_0$ is \emph{an} element of best approximation, % \begin{align*} \bnorm{\varphi - \varphi_0} = \lim_{n \rightarrow \infty} \bnorm{\varphi - \psi_n} = d(\varphi,A) . \end{align*} % To show uniqueness, we assume that there exists another element of best approximation $\varphi_0' \in A$. Define the sequence $(\tilde{\psi}_n)_{n \in \N}$ by $\tilde{\psi}_{2n} := \varphi_0$ for even indices and $\tilde{\psi}_{2n + 1} := \varphi_0'$ for odd indices. By assumption, we have $\norm{\varphi - \varphi_0} = d(\varphi,A) = \norm{\varphi - \varphi_0'}$ and thus, by repeating the steps above, we conclude that $(\tilde{\psi}_n)_{n \in \N}$ is a Cauchy sequence and hence converges. Since the sequence alternates between $\varphi_0$ and $\varphi_0'$, both must coincide with its limit, \ie $\varphi_0 = \varphi_0'$. \end{proof} As we have seen, the condition that the set is convex and closed is crucial in the proof. Otherwise the minimizer may not be unique or even contained in the set. This is all very abstract.
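In finite dimensions the theorem can be tested numerically. The following sketch (with names of our own choosing) takes the closed convex set $A = [-1,1]^3 \subset \R^3$, for which the element of best approximation is coordinatewise clipping, and checks that no sampled point of $A$ comes closer.

```python
import numpy as np

def best_approximation(phi, lo=-1.0, hi=1.0):
    """Element of best approximation in the closed convex box
    A = {psi in R^3 : lo <= psi_i <= hi}: coordinatewise clipping."""
    return np.clip(phi, lo, hi)

phi = np.array([2.0, -3.0, 0.5])
phi0 = best_approximation(phi)           # -> [1.0, -1.0, 0.5]
d0 = np.linalg.norm(phi - phi0)          # = d(phi, A) = sqrt(5)

# No randomly sampled psi in A beats phi0:
rng = np.random.default_rng(0)
for _ in range(1000):
    psi = rng.uniform(-1.0, 1.0, size=3)
    assert np.linalg.norm(phi - psi) >= d0
```

Dropping closedness (an open box) or convexity (a two-point set) breaks existence or uniqueness of the minimizer, exactly as remarked above.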
For the case of a closed subvector space $E \subseteq \Hil$, we can express the element of best approximation in terms of the basis: not surprisingly, it is given by the projection of $\varphi$ onto $E$. \begin{theorem}\label{hilbert_spaces:onb:thm:best_approx_explicit} Let $E \subseteq \Hil$ be a closed subspace of a Hilbert space that is spanned by countably many orthonormal basis vectors $\{ \varphi_k \}_{k \in \mathcal{I}}$. Then for any $\varphi \in \Hil$, the element of best approximation $\varphi_0 \in E$ is given by % \begin{align*} \varphi_0 = \sum_{k \in \mathcal{I}} \sscpro{\varphi_k}{\varphi} \, \varphi_k . \end{align*} % \end{theorem} \begin{proof} It is easy to show that $\varphi - \varphi_0$ is orthogonal to any $\psi = \sum_{k \in \mathcal{I}} \lambda_k \, \varphi_k \in E$: we focus on the more difficult case when $E$ is not finite-dimensional. Then, we have to approximate $\varphi_0$ and $\psi$ by finite linear combinations and take limits. We call $\varphi_0^{(n)} := \sum_{k = 1}^n \sscpro{\varphi_k}{\varphi} \, \varphi_k$ and $\psi^{(m)} := \sum_{l = 1}^m \lambda_l \, \varphi_l$. With that, we have % \begin{align*} \bscpro{\varphi - \varphi_0^{(n)}}{\psi^{(m)}} &= \Bscpro{\varphi - \mbox{$\sum_{k = 1}^n$} \sscpro{\varphi_k}{\varphi} \, \varphi_k}{\mbox{$\sum_{l = 1}^m$} \lambda_l \, \varphi_l} \\ & = \sum_{l = 1}^m \lambda_l \, \sscpro{\varphi}{\varphi_l} - \sum_{k = 1}^n \sum_{l = 1}^m \lambda_l \, \sscpro{\varphi_k}{\varphi}^* \, \sscpro{\varphi_k}{\varphi_l} \\ &= \sum_{l = 1}^m \lambda_l \, \sscpro{\varphi}{\varphi_l} \, \Bigl ( 1 - \mbox{$\sum_{k = 1}^n$} \, \delta_{kl} \Bigr ) . \end{align*} % By continuity of the scalar product, Corollary~\ref{hilbert_spaces:onb:cor:continuity_scalar_product}, we can take the limit $n,m \rightarrow \infty$. The term in parentheses containing the sum is $0$ exactly when $l \in \{ 1 , \ldots , n \}$ and $1$ otherwise. Specifically, if $n \geq m$, the right-hand side vanishes identically.
Hence, we have % \begin{align*} \bscpro{\varphi - \varphi_0}{\psi} &= \lim_{n,m \rightarrow \infty} \bscpro{\varphi - \varphi_0^{(n)}}{\psi^{(m)}} = 0 , \end{align*} % in other words $\varphi - \varphi_0 \in E^{\perp}$. This, in turn, implies by the Pythagorean theorem that % \begin{align*} \norm{\varphi - \psi}^2 &= \norm{\varphi - \varphi_0}^2 + \norm{\varphi_0 - \psi}^2 \geq \norm{\varphi - \varphi_0}^2 \end{align*} % and hence $\norm{\varphi - \varphi_0} = d(\varphi,E)$. Put another way, $\varphi_0$ is \emph{an} element of best approximation. Let us now show uniqueness. Assume, there exists another element of best approximation $\varphi'_0 = \sum_{k \in \mathcal{I}} \lambda'_k \, \varphi_k$. Then we know by repeating the previous calculation backwards that $\varphi - \varphi'_0 \in E^{\perp}$ and the scalar product with respect to any of the basis vectors $\varphi_k$ which span $E$ has to vanish, % \begin{align*} 0 = \bscpro{\varphi_k}{\varphi - \varphi'_0} &= \sscpro{\varphi_k}{\varphi} - \sum_{l \in \mathcal{I}} {\lambda'_l} \, \sscpro{\varphi_k}{\varphi_l} = \sscpro{\varphi_k}{\varphi} - \sum_{l \in \mathcal{I}} {\lambda'_l} \, \delta_{kl} \\ &= \sscpro{\varphi_k}{\varphi} - {\lambda'_k} . \end{align*} % This means the coefficients with respect to the basis $\{ \varphi_k \}_{k \in \mathcal{I}}$ all agree with those of $\varphi_0$. Hence, the element of approximation is unique, $\varphi_0 = \varphi_0'$, and given by the projection of $\varphi$ onto $E$. \end{proof} \begin{theorem}\label{hilbert_spaces:onb:thm:direct_sum_decomp} Let $E$ be a closed linear subspace of a Hilbert space $\Hil$. Then % \begin{enumerate}[(i)] \item $\Hil = E \oplus E^{\perp}$, \ie every vector $\varphi \in \Hil$ can be uniquely decomposed as $\varphi = \psi + \psi^{\perp}$ with $\psi \in E$, $\psi^{\perp} \in E^{\perp}$. \item $E^{\perp \perp} = E$. 
\end{enumerate} % \end{theorem} \begin{proof} % \begin{enumerate}[(i)] \item By Theorem~\ref{hilbert_spaces:onb:thm:best_approx}, for each $\varphi \in \Hil$, there exists $\varphi_0 \in E$ such that $d(\varphi,E) = d(\varphi,\varphi_0)$. From the proof of the previous theorem, we see that $\varphi_0^{\perp} := \varphi - \varphi_0 \in E^{\perp}$. Hence, $\varphi = \varphi_0 + \varphi_0^{\perp}$ is \emph{a} decomposition of $\varphi$. To show that it is unique, assume $\varphi'_0 + {\varphi'_0}^{\perp} = \varphi = \varphi_0 + \varphi_0^{\perp}$ is another decomposition. Then by subtracting, we are led to conclude that % \begin{align*} E \ni \varphi'_0 - \varphi_0 = \varphi_0^{\perp} - {\varphi'_0}^{\perp} \in E^{\perp} \end{align*} % holds. On the other hand, $E \cap E^{\perp} = \{ 0 \}$ and thus $\varphi_0 = {\varphi'_0}$ and $\varphi_0^{\perp} = {\varphi'_0}^{\perp}$, the decomposition is unique. \item It is easy to see that $E \subseteq E^{\perp \perp}$. Let $\tilde{\varphi} \in E^{\perp \perp}$. By the same arguments as above, we can decompose $\tilde{\varphi} \in E^{\perp \perp} \subseteq \Hil$ into % \begin{align*} \tilde{\varphi} = \tilde{\varphi}_0 + {\tilde{\varphi}}_0^{\perp} \end{align*} % with $\tilde{\varphi}_0 \in E \subseteq E^{\perp \perp}$ and ${\tilde{\varphi}}_0^{\perp} \in E^{\perp}$. Hence, ${\tilde{\varphi}} - \tilde{\varphi}_0 \in E^{\perp \perp} \cap E^{\perp} = (E^{\perp})^{\perp} \cap E^{\perp} = \{ 0 \}$ and thus $\tilde{\varphi} = \tilde{\varphi}_0 \in E$. \end{enumerate} % \end{proof} Now we are in a position to prove the following important Proposition: \begin{proposition} A Hilbert space $\Hil$ is separable if and only if there exists a countable orthonormal basis. 
\end{proposition} \begin{proof} $\Leftarrow$: The set generated by the orthonormal basis $\{ \varphi_j \}_{j \in \mathcal{I}}$, $\mathcal{I}$ countable, and coefficients $z = q + i p$, $q , p \in \Q$, is countable and dense in $\Hil$, % \begin{align*} \Bigl \{ \mbox{$\sum_{j = 1}^n$} z_j \varphi_j \in \Hil \; \big \vert \; \N \ni n \leq \abs{\mathcal{I}}, \; \varphi_j \in \{ \varphi_k \}_{k \in \mathcal{I}} , \; z_j = q_j + i p_j , \; q_j , p_j \in \Q \Bigr \} . \end{align*} % \medskip \noindent $\Rightarrow$: Assume there exists a countable dense subset $\mathcal{D}$, \ie $\overline{\mathcal{D}} = \Hil$. If $\Hil$ is finite dimensional, the induction terminates after finitely many steps and the proof is simpler. Hence, we will assume $\Hil$ to be infinite dimensional. Pick a vector $\tilde{\varphi}_1 \in \mathcal{D} \setminus \{ 0 \}$ and normalize it. The normalized vector is then called $\varphi_1$. Note that $\varphi_1$ need not be in $\mathcal{D}$. By Theorem~\ref{hilbert_spaces:onb:thm:direct_sum_decomp}, we can split any $\psi \in \mathcal{D}$ into $\psi_1$ and $\psi_1^{\perp}$ such that $\psi_1 \in \mathrm{span} \, \{ \varphi_1 \} := E_1$, $\psi_1^{\perp} \in \mathrm{span} \, \{ \varphi_1 \}^{\perp} := E_1^{\perp}$ and % \begin{align*} \psi = \psi_1 + \psi_1^{\perp} . \end{align*} % Next, pick a second vector $\tilde{\varphi}_2 \in \mathcal{D} \setminus E_1$ (which is non-empty). Now we apply Theorem~\ref{hilbert_spaces:onb:thm:best_approx_explicit} (which is in essence Gram-Schmidt orthonormalization) to $\tilde{\varphi}_2$, \ie we pick the part which is orthogonal to $\varphi_1$, % \begin{align*} \varphi'_2 := \tilde{\varphi}_2 - \sscpro{\varphi_1}{\tilde{\varphi}_2} \, \varphi_1 \end{align*} % and normalize to $\varphi_2$, % \begin{align*} \varphi_2 := \frac{\varphi'_2}{\snorm{\varphi'_2}} . \end{align*} % This defines $E_2 := \mathrm{span} \, \{ \varphi_1 , \varphi_2 \}$ and $\Hil = E_2 \oplus E_2^{\perp}$.
Now we proceed by induction: assume we are given $E_n = \mathrm{span} \, \{ \varphi_1 , \ldots , \varphi_n \}$. Take $\tilde{\varphi}_{n+1} \in \mathcal{D} \setminus E_n$ and apply Gram-Schmidt once again to yield $\varphi_{n+1}$, which is obtained by normalizing the vector % \begin{align*} \varphi'_{n+1} := \tilde{\varphi}_{n+1} - \sum_{k = 1}^n \sscpro{\varphi_k}{\tilde{\varphi}_{n+1}} \, \varphi_k . \end{align*} % This induction yields an orthonormal sequence $\{ \varphi_n \}_{n \in \N}$ which is by definition an orthonormal basis of $E_{\infty} := \overline{\mathrm{span} \, \{ \varphi_n \}_{n \in \N}}$, a closed subspace of $\Hil$. If $E_{\infty} \subsetneq \Hil$, we can split the Hilbert space into $\Hil = E_{\infty} \oplus E_{\infty}^{\perp}$. Then either $\mathcal{D} \cap (\Hil \setminus E_{\infty}) = \emptyset$ -- in which case $\mathcal{D}$ cannot be dense in $\Hil$ -- or $\mathcal{D} \cap (\Hil \setminus E_{\infty}) \neq \emptyset$ -- but then we would have terminated the induction prematurely. Either way we arrive at a contradiction, so $E_{\infty} = \Hil$. \end{proof} \subsection{Best approximation on $L^2([-\pi,+\pi])$ using $\{ \e^{+ \ii n x} \}_{n \in \Z}$} Let us consider the free Schrödinger equation \begin{align} \ii \, \partial_t \psi(t) &= - \partial_x^2 \psi(t) \, , && \psi(0) = \psi_0 \in L^2([-\pi,+\pi]) \, , \label{spaces:eqn:free_Schroedinger_interval} \end{align} for a suitable initial condition $\psi_0$ (we will be more specific in a moment). Since $[-\pi,+\pi]$ has a boundary, we need to impose boundary conditions. We pick periodic boundary conditions, \ie we consider functions for which $\psi(-\pi) = \psi(+\pi)$. It is often convenient to think of functions with periodic boundary conditions as $2\pi$-periodic functions on $\R$, \ie $\psi(x + 2\pi) = \psi(x)$. Periodic functions are best analyzed using their \emph{Fourier series} (which we will discuss in detail in Chapter~\ref{Fourier}), \ie their expansion in terms of the exponentials $\{ \e^{+ \ii n x} \}_{n \in \Z}$.
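Before diving into the computation, here is a small numerical sketch (our own, not part of the lecture) of the two facts exploited below: the exponentials are orthonormal with respect to the normalized scalar product, and truncating the Fourier series yields better and better approximations in norm.

```python
import numpy as np

M = 4096
x = -np.pi + 2 * np.pi * np.arange(M) / M        # equispaced grid on [-π, π)
scpro = lambda f, g: np.mean(np.conj(f) * g)     # discretizes (1/2π) ∫ f* g dx

e = lambda n: np.exp(1j * n * x)
print(abs(scpro(e(3), e(3))))                    # 1: orthonormal
print(abs(scpro(e(3), e(5))))                    # ~0

f = x                                            # a 2π-periodic sawtooth
def partial_sum(N):                              # truncation to |n| <= N
    return sum(scpro(e(n), f) * e(n) for n in range(-N, N + 1))

# The L² truncation error decreases as N grows:
errs = [np.sqrt(abs(scpro(f - partial_sum(N), f - partial_sum(N))))
        for N in (4, 16, 64)]
print(errs)
```

The decreasing errors are exactly the tail sums $\sum_{\abs{n} > N} \abs{\scpro{\e^{+\ii n x}}{f}}^2$ discussed in this subsection, computed on a grid.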
To simplify computations, let us choose a convenient prefactor for the scalar product, \begin{align*} \scpro{\varphi}{\psi} := \frac{1}{2\pi} \int_{-\pi}^{+\pi} \dd x \, \varphi(x)^* \, \psi(x) . \end{align*} Then a quick computation yields that $\{ \e^{+ \ii n x} \}_{n \in \Z}$ is indeed an orthonormal set: \begin{align*} \scpro{\e^{+ \ii j x}}{\e^{+ \ii n x}} &= \frac{1}{2\pi} \int_{-\pi}^{+\pi} \dd x \, \e^{+ \ii (n - j) x} = % \begin{cases} 1 & j = n \\ 0 & j \neq n \\ \end{cases} % \end{align*} As we will see later, this set of orthonormal vectors is also a \emph{basis}, and we can expand \emph{any} vector $\psi$ in terms of its Fourier components, \begin{align*} \psi(x) &= \sum_{n \in \Z} \scpro{\e^{+ \ii n x}}{\psi} \, \e^{+ \ii n x} . \end{align*} Using the \emph{product ansatz} $\psi(t,x) = \tau(t) \, \varphi(x)$ (separation of variables) from Chapter~\ref{spaces:boundary_value_problems}, we obtain two equations which are coupled only through the separation constant $\lambda$: \begin{align*} \ii \, \dot{\tau}(t) \, \varphi(x) &= - \tau(t) \varphi''(x) \; \; \Longleftrightarrow \; \; \ii \, \frac{\dot{\tau}(t)}{\tau(t)} = - \frac{\varphi''(x)}{\varphi(x)} = \lambda \in \C \end{align*} The periodic boundary conditions $\varphi(-\pi) = \varphi(+\pi)$ as well as the condition $\varphi \in L^2([-\pi,+\pi])$ eliminate many choices of $\lambda \in \C$. The equation for $\varphi$ is just a harmonic oscillator equation, and the periodicity requirement means that only $\lambda = n^2$ with $n \in \N_0 = \{ 0 , 1 , \ldots \}$ are admissible. The equation for $\tau$ can be solved by elementary integration, \begin{align*} \ii \, \dot{\tau}_n(t) &= n^2 \tau_n(t) \; \; \Rightarrow \; \; \tau_n(t) = \tau_n(0) \, \e^{- \ii n^2 t} .
\end{align*} Hence, the solution to $\lambda = n^2$ is a scalar multiple of $\e^{- \ii n^2 t} \, \e^{+ \ii n x}$, and the solution to \eqref{spaces:eqn:free_Schroedinger_interval} can formally be written down as the sum \begin{align*} \psi(t,x) &= \sum_{n \in \Z} \e^{- \ii n^2 t} \, \scpro{\e^{+ \ii n x}}{\psi_0} \, \e^{+ \ii n x} . \end{align*} \emph{A priori} it is not clear in what sense this sum converges. It turns out the correct notion of convergence is to require the finiteness of \begin{align*} \norm{\psi(t)}^2 &= \sum_{n \in \Z} \babs{\e^{- \ii n^2 t} \, \scpro{\e^{+ \ii n x}}{\psi_0}}^2 \\ &= \sum_{n \in \Z} \babs{\scpro{\e^{+ \ii n x}}{\psi_0}}^2 = \norm{\psi_0}^2 < \infty . \end{align*} This condition, however, is automatically satisfied since we have assumed $\psi_0$ is an element of $L^2([-\pi,+\pi])$ from the start. In other words, the dynamics even \emph{preserve} the $L^2$-norm. In the next section, we will see that this is not at all accidental, but by design. Now to the part about best approximation: one immediate idea is to keep only the frequencies with $\abs{n} \leq N$, because for $\sum_{n \in \Z} \babs{\scpro{\e^{+ \ii n x}}{\psi_0}}^2$ to converge, a necessary condition is that $\babs{\scpro{\e^{+ \ii n x}}{\psi_0}}^2 \to 0$ as $\abs{n} \to \infty$. So for any upper bound for the error $\eps > 0$ (which we shall also call \emph{precision}), we can find $N(\eps)$ so that the initial condition can be approximated in norm up to an error $\eps$, \begin{align*} \norm{\psi_0}^2 - \sum_{n = -N(\eps)}^{+N(\eps)} \babs{\scpro{\e^{+ \ii n x}}{\psi_0}}^2 &= \sum_{\abs{n} > N(\eps)} \babs{\scpro{\e^{+ \ii n x}}{\psi_0}}^2 < \eps^2 .
\end{align*} Since all the vectors $\{ \e^{+ \ii n x} \}_{n \in \Z}$ are orthonormal, the vector of best approximation is \begin{align*} \psi_{\mathrm{best}}(x) = \sum_{n = -N(\eps)}^{+N(\eps)} \scpro{\e^{+ \ii n x}}{\psi_0} \, \e^{+ \ii n x} , \end{align*} and we can repeat the above arguments to see that the time-evolved $\psi_{\mathrm{best}}(t)$ stays $\eps$-close to $\psi(t)$ \emph{for all times} in norm, \begin{align*} \norm{\psi(t) - \psi_{\mathrm{best}}(t)}^2 &= \sum_{\abs{n} > N(\eps)} \babs{\e^{- \ii n^2 t} \, \scpro{\e^{+ \ii n x}}{\psi_0}}^2 \\ &= \sum_{\abs{n} > N(\eps)} \babs{\scpro{\e^{+ \ii n x}}{\psi_0}}^2 < \eps^2 . \end{align*} In view of the Grönwall lemma~\ref{odes:lem:Groenwall}, one may ask why the two are close for all times. The crucial ingredient here is \emph{linearity} of \eqref{spaces:eqn:free_Schroedinger_interval}. \section{Linear functionals, dual space and weak convergence} \label{hilbert_spaces:dual_space} A very important notion is that of a functional. We have already gotten to know the free energy functional \begin{align*} E_{\mathrm{free}} : &\mathcal{D}(E_{\mathrm{free}}) \subset L^2(\R^n) \longrightarrow [0,+\infty) \subset \C , \\ &\varphi \mapsto E_{\mathrm{free}}(\varphi) = \frac{1}{2m} \sum_{l = 1}^d \bscpro{(- i \hbar \partial_{x_l} \varphi)}{(- i \hbar \partial_{x_l} \varphi)} . \end{align*} This functional, however, is not linear, and it is not defined for all $\varphi \in L^2(\R^n)$. Let us restrict ourselves to a smaller class of functionals: \begin{definition}[Bounded linear functional] Let $\mathcal{X}$ be a normed space. Then a map % \begin{align*} L : \mathcal{X} \longrightarrow \C \end{align*} % is a bounded linear functional if and only if % \begin{enumerate}[(i)] \item there exists $C>0$ such that $\abs{L(x)} \leq C \norm{x}$ and \item $L(x + \mu y) = L(x) + \mu L(y)$ \end{enumerate} % hold for all $x,y \in \mathcal{X}$ and $\mu \in \C$. 
\end{definition} A very basic fact is that boundedness of a linear functional is equivalent to its continuity. \marginpar{2013.10.17} \begin{theorem}\label{hilbert_spaces:dual_space:thm:bounded_functionals_continuous} Let $L : \mathcal{X} \longrightarrow \C$ be a linear functional on the normed space $\mathcal{X}$. Then the following statements are equivalent: % \begin{enumerate}[(i)] \item $L$ is continuous at $x_0 \in \mathcal{X}$. \item $L$ is continuous. \item $L$ is bounded. \end{enumerate} % \end{theorem} \begin{proof} (i) $\Leftrightarrow$ (ii): This follows immediately from the linearity. \medskip \noindent (ii) $\Rightarrow$ (iii): Assume $L$ to be continuous. Then it is continuous at $0$ and for $\eps = 1$, we can pick $\delta > 0$ such that % \begin{align*} \abs{L(x)} \leq \eps = 1 \end{align*} % for all $x \in \mathcal{X}$ with $\norm{x} \leq \delta$. By linearity, this implies for any $y \in \mathcal{X} \setminus \{ 0 \}$ that % \begin{align*} \babs{L \bigl ( \tfrac{\delta}{\norm{y}} y \bigr )} = \tfrac{\delta}{\norm{y}} \, \babs{L(y)} \leq 1 . \end{align*} % Hence, $L$ is bounded with bound $\nicefrac{1}{\delta}$, % \begin{align*} \babs{L(y)} \leq \tfrac{1}{\delta} \norm{y} . \end{align*} % (iii) $\Rightarrow$ (ii): Conversely, if $L$ is bounded by $C > 0$, % \begin{align*} \babs{L(x) - L(y)} \leq C \norm{x-y} , \end{align*} % holds for all $x,y \in \mathcal{X}$. This means, $L$ is continuous: for $\eps > 0$ pick $\delta = \nicefrac{\eps}{C}$ so that % \begin{align*} \babs{L(x) - L(y)} \leq C \norm{x-y} \leq C \tfrac{\eps}{C} = \eps \end{align*} % holds for all $x,y \in \mathcal{X}$ such that $\norm{x - y} \leq \nicefrac{\eps}{C}$. \end{proof} \begin{definition}[Dual space]\label{spaces:defn:dual_space} Let $\mathcal{X}$ be a normed space. 
The dual space $\mathcal{X}'$ is the vector space of bounded linear functionals endowed with the norm % \begin{align*} \norm{L}_* := \sup_{x \in \mathcal{X} \setminus \{ 0 \}} \frac{\abs{L(x)}}{\norm{x}} = \sup_{\substack{x \in \mathcal{X} \\ \norm{x} = 1}} \abs{L(x)} . \end{align*} % \end{definition} Independently of whether $\mathcal{X}$ is complete, $\mathcal{X}'$ is a Banach space. \begin{proposition}\label{hilbert_spaces:dual_space:prop:completeness_Xstar} The dual space to a normed linear space $\mathcal{X}$ is a Banach space. \end{proposition} \begin{proof} Let $( L_n )_{n \in \N}$ be a Cauchy sequence in $\mathcal{X}'$, \ie a sequence for which % \begin{align*} \norm{L_k - L_j}_* \xrightarrow{k,j \rightarrow \infty} 0 . \end{align*} % We have to show that $( L_n )_{n \in \N}$ converges to some $L \in \mathcal{X}'$. For any $\eps > 0$, there exists $N(\eps) \in \N$ such that % \begin{align*} \norm{L_k - L_j}_* < \eps \end{align*} % for all $k,j \geq N(\eps)$. This also implies that for any $x \in \mathcal{X}$, $\bigl ( L_n(x) \bigr )_{n \in \N}$ is a Cauchy sequence in $\C$, % \begin{align*} \babs{L_k(x) - L_j(x)} \leq \bnorm{L_k - L_j}_* \, \norm{x} < \eps \norm{x} . \end{align*} % The field of complex numbers is complete, and hence $\bigl ( L_n(x) \bigr )_{n \in \N}$ converges to some $L(x) \in \C$. We now \emph{define} % \begin{align*} L(x) := \lim_{n \rightarrow \infty} L_n(x) \end{align*} % for any $x \in \mathcal{X}$. Clearly, $L$ inherits the linearity of the $(L_n)_{n \in \N}$. The map $L$ is also bounded: for any $\eps > 0$, there exists $N(\eps) \in \N$ such that $\norm{L_j - L_n}_* < \eps$ for all $j , n \geq N(\eps)$. Then % \begin{align*} \babs{(L - L_n)(x)} &= \lim_{j \rightarrow \infty} \babs{(L_j - L_n)(x)} \leq \lim_{j \to \infty} \bnorm{L_j - L_n}_* \, \bnorm{x} \\ &\leq \eps \norm{x} \end{align*} % holds for all $n \geq N(\eps)$.
Since we can write $L$ as $L = L_n + (L - L_n)$, we can estimate the norm of the linear map $L$ by $\snorm{L}_* \leq \snorm{L_n}_* + \eps < \infty$. This means $L$ is a bounded linear functional on $\mathcal{X}$. \end{proof} In case of Hilbert spaces, the dual $\Hil'$ can be canonically identified with $\Hil$ itself: \begin{theorem}[Riesz' Lemma]\label{hilbert_spaces:dual_space:thm:Riesz_Lemma} Let $\Hil$ be a Hilbert space. Then for all $L \in \Hil'$ there exists a unique $\psi_L \in \Hil$ such that % \begin{align*} L(\varphi) = \sscpro{\psi_L}{\varphi} . \end{align*} % In particular, we have $\norm{L}_* = \snorm{\psi_L}$. \end{theorem} \begin{proof} Let $\ker L := \bigl \{ \varphi \in \Hil \; \vert \; L(\varphi) = 0 \bigr \}$ be the kernel of the functional $L$; by continuity of $L$, it is a closed linear subspace of $\Hil$. If $\ker L = \Hil$, then $0 \in \Hil$ is the associated vector, % \begin{align*} L(\varphi) = 0 = \sscpro{0}{\varphi} . \end{align*} % So assume $\ker L \subsetneq \Hil$ is a proper subspace. Then we can split $\Hil = \ker L \oplus (\ker L)^{\perp}$. Pick $\varphi_0 \in (\ker L)^{\perp} \setminus \{ 0 \}$; then $L(\varphi_0) \neq 0$, for otherwise $\varphi_0 \in \ker L \cap (\ker L)^{\perp} = \{ 0 \}$. Then define % \begin{align*} \psi_L := \frac{L(\varphi_0)^*}{\snorm{\varphi_0}^2} \, \varphi_0 . \end{align*} % We will show that $L(\varphi) = \sscpro{\psi_L}{\varphi}$. If $\varphi \in \ker L$, then $L(\varphi) = 0 = \sscpro{\psi_L}{\varphi}$. One easily checks that also for $\varphi = \alpha \, \varphi_0$, $\alpha \in \C$, the two sides agree: % \begin{align*} \sscpro{\psi_L}{\varphi} &= \Bscpro{\tfrac{L(\varphi_0)^*}{\snorm{\varphi_0}^2} \varphi_0}{\alpha \, \varphi_0} = \alpha \, L(\varphi_0) \, \frac{\sscpro{\varphi_0}{\varphi_0}}{\snorm{\varphi_0}^2} \\ &= \alpha \, L(\varphi_0) = L(\alpha \, \varphi_0) = L(\varphi) . \end{align*} % Every $\varphi \in \Hil$ can be written as % \begin{align*} \varphi = \biggl ( \varphi - \frac{L(\varphi)}{L(\varphi_0)} \, \varphi_0 \biggr ) + \frac{L(\varphi)}{L(\varphi_0)} \, \varphi_0 .
\end{align*} % Then the first term is in the kernel of $L$ while the second one is in the orthogonal complement of $\ker L$. Hence, $L(\varphi) = \sscpro{\psi_L}{\varphi}$ for all $\varphi \in \Hil$. If there exists a second $\psi_L' \in \Hil$, then for any $\varphi \in \Hil$ % \begin{align*} 0 = L(\varphi) - L(\varphi) = \sscpro{\psi_L}{\varphi} - \sscpro{\psi_L'}{\varphi} = \sscpro{\psi_L - \psi_L'}{\varphi} . \end{align*} % This implies $\psi_L' = \psi_L$ and thus the element $\psi_L$ is unique. To show $\snorm{L}_* = \snorm{\psi_L}$, assume $L \neq 0$. Then, we have % \begin{align*} \norm{L}_* &= \sup_{\norm{\varphi} = 1} \babs{L(\varphi)} \geq \babs{L \bigl ( \tfrac{\psi_L}{\snorm{\psi_L}} \bigr )} \\ &= \bscpro{\psi_L}{\tfrac{\psi_L}{\snorm{\psi_L}}} = \snorm{\psi_L} . \end{align*} % On the other hand, the Cauchy-Schwarz inequality yields % \begin{align*} \norm{L}_* &= \sup_{\norm{\varphi} = 1} \babs{L(\varphi)} = \sup_{\norm{\varphi} = 1} \babs{\sscpro{\psi_L}{\varphi}} \\ &\leq \sup_{\norm{\varphi} = 1} \snorm{\psi_L} \snorm{\varphi} = \snorm{\psi_L} . \end{align*} % Putting these two together, we conclude $\snorm{L}_* = \snorm{\psi_L}$. \end{proof} \begin{definition}[Weak convergence]\label{spaces:defn:weak_convergence} Let $\mathcal{X}$ be a Banach space. Then a sequence $(x_n)_{n \in \N}$ in $\mathcal{X}$ is said to converge weakly to $x \in \mathcal{X}$ if for all $L \in \mathcal{X}'$ % \begin{align*} L(x_n) \xrightarrow{n \rightarrow \infty} L(x) \end{align*} % holds. In this case, one also writes $x_n \rightharpoonup x$. \end{definition} Weak convergence, as the name suggests, is really weaker than convergence in norm. The reason why “more” sequences converge is that, in a sense, uniformity is lost. If $\mathcal{X}$ is a Hilbert space, then applying a functional is the same as computing the inner product with respect to some vector $\psi_L$.
If the “non-convergent part” lies in the orthogonal complement to $\{ \psi_L \}$, then this particular functional does not notice that the sequence has not converged yet. \begin{example} Let $\Hil$ be a separable infinite-dimensional Hilbert space and $\{ \varphi_n \}_{n \in \N}$ an ortho\-normal basis. Then the sequence $(\varphi_n)_{n \in \N}$ does not converge in norm, for as long as $n \neq k$ % \begin{align*} \norm{\varphi_n - \varphi_k} = \sqrt{2} , \end{align*} % but it does converge weakly to $0$: for any functional $L = \sscpro{\psi_L}{\cdot}$, the sequence $\bigl ( \abs{L(\varphi_n)} \bigr )_{n \in \N}$ in $\R$ converges to $0$. Since $\{ \varphi_n \}_{n \in \N}$ is a basis, we can write % \begin{align*} \psi_L = \sum_{n = 1}^{\infty} \sscpro{\varphi_n}{\psi_L} \, \varphi_n \end{align*} % and for the sequence of partial sums to converge to $\psi_L$, the sequence of coefficients % \begin{align*} \bigl ( \sscpro{\varphi_n}{\psi_L} \bigr )_{n \in \N} = \bigl ( L(\varphi_n)^* \bigr )_{n \in \N} \end{align*} % must converge to $0$. Since this is true for any $L \in \Hil'$, we have proven that $\varphi_n \rightharpoonup 0$ (\ie $\varphi_n \rightarrow 0$ weakly). \end{example} In case of $\mathcal{X} = L^p(\Omega)$, there are three basic mechanisms for when a sequence of functions $(f_k)$ does not converge in norm, but only weakly: \begin{enumerate}[(i)] \item \emph{$f_k$ oscillates to death:} take $f_k(x) = \sin (kx)$ for $0 \leq x \leq 1$ and zero otherwise. \item \emph{$f_k$ goes up the spout:} pick $g \in L^p(\R)$ and define $f_k(x) := k^{\nicefrac{1}{p}} \, g(kx)$. This sequence explodes near $x = 0$ for large $k$. \item \emph{$f_k$ wanders off to infinity:} this is the case when for some $g \in L^p(\R)$, we define $f_k(x) := g(x+k)$. \end{enumerate} All of these sequences converge weakly to $0$, but do not converge in norm.
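The orthonormal-basis example can be made concrete in a finite truncation of $\ell^2(\N)$. The sketch below (dimension, vectors and names are our own choices) shows the pairings with a fixed vector tending to $0$ while mutual distances stay $\sqrt{2}$.

```python
import numpy as np

D = 2000                                   # finite-dimensional truncation of ℓ²(ℕ)
psi_L = 1.0 / np.arange(1, D + 1)          # a fixed ℓ² vector, representing L = <psi_L, ·>

def e(n):
    """n-th orthonormal basis vector (0-indexed)."""
    v = np.zeros(D)
    v[n] = 1.0
    return v

# L(e_n) = <psi_L, e_n> = 1/(n+1) → 0: weak convergence to 0 ...
pairings = [abs(psi_L @ e(n)) for n in (10, 100, 1000)]
print(pairings)                            # decreasing towards 0

# ... but ||e_n - e_k|| = √2 for n ≠ k: no convergence in norm.
print(np.linalg.norm(e(10) - e(1000)))     # √2
```

Any other choice of square-summable `psi_L` gives the same qualitative picture, which is the content of the example above.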
\marginpar{2013.10.22} \chapter{Linear operators} \label{operators} Linear operators appear quite naturally in the analysis of \emph{linear} PDEs. Many PDEs share a common structure: for instance, the Schrödinger equation \begin{align} \ii \, \partial_t \psi(t) &= \bigl ( - \Delta + V \bigr ) \psi(t) \, , && \psi(0) = \psi_0 \, , \label{operators:eqn:Schroedinger_equation} \end{align} and the heat equation \begin{align} \partial_t \psi(t) &= - \bigl ( - \Delta + V \bigr ) \psi(t) \, , && \psi(0) = \psi_0 \, , \label{operators:eqn:heat_equation} \end{align} both involve the same operator \begin{align*} H = - \Delta + V \end{align*} on the right-hand side. \emph{Formally}, we can solve these equations in closed form: \begin{align*} \psi(t) &= \e^{- \ii t H} \psi_0 \end{align*} solves the Schrödinger equation~\eqref{operators:eqn:Schroedinger_equation} while $\psi(t) = \e^{- t H} \psi_0$ solves the heat equation, because we can \emph{formally} compute the derivative \begin{align*} \ii \frac{\dd}{\dd t} \psi(t) &= \ii \frac{\dd}{\dd t} \bigl ( \e^{- \ii t H} \psi_0 \bigr ) = - \ii^2 \, H \, \e^{- \ii t H} \psi_0 \\ &= H \psi(t) \end{align*} and verify that the initial condition is also satisfied, $\psi(0) = \e^{0} \, \psi_0 = \psi_0$. A priori, these are just formal manipulations, though: if $H$ were a matrix, we would know how to give rigorous meaning to these expressions, but in the case of operators on infinite-dimensional spaces, this is much more involved. However, we see that the dynamics of both the Schrödinger and the heat equation are generated by the \emph{same} operator $H$. As we will see in Chapter~\ref{operators:Maxwell}, the Maxwell equations can also be recast in the form~\eqref{operators:eqn:Schroedinger_equation}. This gives one access to all the powerful tools for the analysis of Schrödinger operators in order to gain understanding of the dynamics of electromagnetic waves (light).
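For a matrix, the formal solutions above are perfectly rigorous. The following numerical sketch (a stand-in of our own devising, not the operator $-\Delta + V$ itself) builds $\e^{-\ii t H}$ and $\e^{-t H}$ via the spectral decomposition of a symmetric positive semidefinite matrix and checks that the Schrödinger flow preserves the norm while the heat flow contracts it.

```python
import numpy as np

# Finite-dimensional stand-in for H = -Δ + V: symmetric, positive semidefinite.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
H = A.T @ A                                     # H = H^T, eigenvalues >= 0

lam, U = np.linalg.eigh(H)                      # spectral decomposition H = U diag(lam) U^T

def schroedinger(t, psi0):                      # psi(t) = e^{-i t H} psi0
    return U @ (np.exp(-1j * t * lam) * (U.T @ psi0))

def heat(t, psi0):                              # psi(t) = e^{-t H} psi0
    return U @ (np.exp(-t * lam) * (U.T @ psi0))

psi0 = rng.standard_normal(6)
n0 = np.linalg.norm(psi0)
print(np.linalg.norm(schroedinger(2.7, psi0)) / n0)   # ≈ 1: unitary flow
print(np.linalg.norm(heat(2.7, psi0)) / n0)           # <= 1: heat flow contracts
```

The phases $\e^{-\ii t \lambda_k}$ have modulus one, which is exactly the finite-dimensional shadow of the selfadjointness argument sketched in the next paragraph.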
Moreover, one can see that the selfadjointness of $H = H^*$ leads to $U(t) := \e^{- \ii t H}$ being a \emph{unitary} operator, an operator which preserves the norm on the Hilbert space (“$\e^{- \ii t H}$ is just a phase”). Lastly, states will be closely related to (orthogonal) \emph{projections} $P$ which satisfy $P^2 = P$. \medskip \noindent This chapter will give a zoology of operators and expound on three particularly important classes of operators -- selfadjoint operators, orthogonal projections and unitary operators -- as well as their relations. Given the brevity of this chapter, much of what we do will not be rigorous. In fact, some of these results (\eg Stone's theorem) require extensive preparation until one can understand all these facets. For us, the important aspect is to elucidate the connections between these fundamental results and PDEs. \section{Bounded operators} \label{operators:bounded} The simplest operators are bounded operators. \begin{definition}[Bounded operator] Let $\mathcal{X}$ and $\mathcal{Y}$ be normed spaces. A linear operator $T : \mathcal{X} \longrightarrow \mathcal{Y}$ is called bounded if there exists $M \geq 0$ with $\norm{T x}_{\mathcal{Y}} \leq M \norm{x}_{\mathcal{X}}$ for all $x \in \mathcal{X}$. \end{definition} Just as in the case of linear functionals, we have \begin{theorem} Let $T : \mathcal{X} \longrightarrow \mathcal{Y}$ be a linear operator between two normed spaces $\mathcal{X}$ and $\mathcal{Y}$. Then the following statements are equivalent: % \begin{enumerate}[(i)] \item $T$ is continuous at $x_0 \in \mathcal{X}$. \item $T$ is continuous. \item $T$ is bounded. \end{enumerate} % \end{theorem} \begin{proof} We leave it to the reader to modify the proof of Theorem~\ref{hilbert_spaces:dual_space:thm:bounded_functionals_continuous}. \end{proof} We can introduce a norm on the operators which leads to a natural notion of convergence:% \begin{definition}[Operator norm] Let $T : \mathcal{X} \longrightarrow \mathcal{Y}$ be a bounded linear operator between normed spaces.
Then we define the operator norm of $T$ as % \begin{align*} \norm{T} := \sup_{\substack{x \in \mathcal{X} \\ \norm{x} = 1}} \norm{T x}_{\mathcal{Y}} . \end{align*} % The space of all bounded linear operators between $\mathcal{X}$ and $\mathcal{Y}$ is denoted by $\mathcal{B}(\mathcal{X},\mathcal{Y})$. \end{definition} One can show that the operator norm coincides with the optimal constant in the definition of boundedness, \begin{align*} \norm{T} = \inf \bigl \{ M \geq 0 \; \vert \; \norm{T x}_{\mathcal{Y}} \leq M \norm{x}_{\mathcal{X}} \; \forall x \in \mathcal{X} \bigr \} . \end{align*} The product of two bounded operators $T \in \mathcal{B}(\mathcal{Y},\mathcal{Z})$ and $S \in \mathcal{B}(\mathcal{X},\mathcal{Y})$ is again a bounded operator and its norm can be estimated from above by \begin{align*} \norm{T S} \leq \norm{T} \norm{S} . \end{align*} If $\mathcal{Y} = \mathcal{X} = \mathcal{Z}$, this implies that the product is jointly continuous with respect to the norm topology on $\mathcal{B}(\mathcal{X}) := \mathcal{B}(\mathcal{X},\mathcal{X})$. For Hilbert spaces, the following useful theorem holds: \begin{theorem}[Hellinger-Toeplitz] Let $A$ be a linear operator on a Hilbert space $\Hil$ with dense domain $\mathcal{D}(A)$ such that $\scpro{\psi}{A \varphi} = \scpro{A \psi}{\varphi}$ holds for all $\varphi , \psi \in \mathcal{D}(A)$. Then $\mathcal{D}(A) = \Hil$ if and only if $A$ is bounded. \end{theorem} \begin{proof} $\Leftarrow$: If $A$ is bounded, then $\norm{A \varphi} \leq M \norm{\varphi}$ for some $M \geq 0$ and all $\varphi \in \Hil$ by definition of the norm. Hence, the domain of $A$ is all of $\Hil$. \medskip \noindent $\Rightarrow$: This direction relies on a rather deep result of functional analysis, the so-called Open Mapping Theorem and its corollary, the Closed Graph Theorem. The interested reader may look it up in Chapter~III.5 of \cite{Reed_Simon:M_cap_Phi_1:1972}. \end{proof} Let $T,S$ be bounded linear operators between the normed spaces $\mathcal{X}$ and $\mathcal{Y}$.
If we define \begin{align*} (T + S) x := T x + S x \end{align*} as addition and \begin{align*} \bigl ( \lambda \cdot T \bigr ) x := \lambda T x \end{align*} as scalar multiplication, the set of bounded linear operators forms a vector space. \begin{proposition}\label{operators:prop:B_X_Y_complete} The vector space $\mathcal{B}(\mathcal{X},\mathcal{Y})$ of bounded linear operators between normed spaces $\mathcal{X}$ and $\mathcal{Y}$ with operator norm forms a normed space. If $\mathcal{Y}$ is complete, $\mathcal{B}(\mathcal{X},\mathcal{Y})$ is a Banach space. \end{proposition} \begin{proof} The fact $\mathcal{B}(\mathcal{X},\mathcal{Y})$ is a normed vector space follows directly from the definition. To show that $\mathcal{B}(\mathcal{X},\mathcal{Y})$ is a Banach space whenever $\mathcal{Y}$ is, one has to modify the proof of Theorem~\ref{hilbert_spaces:dual_space:prop:completeness_Xstar} to suit the current setting. This is left as an exercise. \end{proof} Very often, it is easy to \emph{define} an operator $T$ on a ``nice'' dense subset $\mathcal{D} \subseteq \mathcal{X}$. Then the next theorem tells us that \emph{if}\ the operator is bounded, there is a unique bounded extension of the operator to the whole space $\mathcal{X}$. For instance, this allows us to instantly extend the Fourier transform from Schwartz functions to $L^2(\R^n)$ functions (see~Proposition~\ref{Fourier:R:thm:Parseval_Plancherel}). \begin{theorem}\label{operators:bounded:thm:extensions_bounded_operators} Let $\mathcal{D} \subseteq \mathcal{X}$ be a dense subset of a normed space and $\mathcal{Y}$ be a Banach space. Furthermore, let $T : \mathcal{D} \longrightarrow \mathcal{Y}$ be a bounded linear operator. Then there exists a unique bounded linear extension $\tilde{T} : \mathcal{X} \longrightarrow \mathcal{Y}$ and $\snorm{\tilde{T}} = \norm{T}_{\mathcal{D}}$. \end{theorem} \begin{proof} We construct $\tilde{T}$ explicitly: let $x \in \mathcal{X}$ be arbitrary. 
Since $\mathcal{D}$ is dense in $\mathcal{X}$, there exists a sequence $(x_n)_{n \in \N}$ in $\mathcal{D}$ which converges to $x$. Then we set % \begin{align*} \tilde{T} x := \lim_{n \to \infty} T x_n . \end{align*} % First of all, $\tilde{T}$ is linear. It is also well-defined: $(T x_n)_{n \in \N}$ is a Cauchy sequence in $\mathcal{Y}$, % \begin{align*} \bnorm{T x_n - T x_k}_{\mathcal{Y}} \leq \snorm{T}_{\mathcal{D}} \, \snorm{x_n - x_k}_{\mathcal{X}} \xrightarrow{n,k \to \infty} 0 , \end{align*} % where the norm of $T$ is defined as % \begin{align*} \snorm{T}_{\mathcal{D}} := \sup_{x \in \mathcal{D} \setminus \{ 0 \}} \frac{\norm{T x}_{\mathcal{Y}}}{\norm{x}_{\mathcal{X}}} . \end{align*} % This Cauchy sequence in $\mathcal{Y}$ converges to some unique $y \in \mathcal{Y}$ as the target space is complete. Let $(x_n')_{n \in \N}$ be a second sequence in $\mathcal{D}$ that converges to $x$ and assume the sequence $(T x_n')_{n \in \N}$ converges to some $y' \in \mathcal{Y}$. We define a third sequence $(z_n)_{n \in \N}$ which alternates between elements of the first sequence $(x_n)_{n \in \N}$ and the second sequence $(x_n')_{n \in \N}$, \ie % \begin{align*} z_{2n - 1} &:= x_n \\ z_{2n} &:= x_n' \end{align*} % for all $n \in \N$. Then $(z_n)_{n \in \N}$ also converges to $x$ and $\bigl ( T z_n \bigr )$ forms a Cauchy sequence that converges to, say, $\zeta \in \mathcal{Y}$. Subsequences of convergent sequences are also convergent and they must converge to the same limit point. Hence, we conclude that % \begin{align*} \zeta &= \lim_{n \to \infty} T z_n = \lim_{n \to \infty} T z_{2n} = \lim_{n \to \infty} T x_n = y \\ &= \lim_{n \to \infty} T z_{2n - 1} = \lim_{n \to \infty} T x_n' = y' \end{align*} % holds and $\tilde{T} x$ does not depend on the particular choice of sequence which approximates $x$ in $\mathcal{D}$. 
It remains to show that $\snorm{\tilde{T}} = \snorm{T}_{\mathcal{D}}$: we can calculate the norm of $\tilde{T}$ on the dense subset $\mathcal{D}$ and use that $\tilde{T} \vert_{\mathcal{D}} = T$ to obtain % \begin{align*} \snorm{\tilde{T}} &= \sup_{\substack{x \in \mathcal{X} \\ \norm{x} = 1}} \snorm{\tilde{T} x} = \sup_{x \in \mathcal{X} \setminus \{ 0 \}} \frac{\snorm{\tilde{T} x}}{\norm{x}} = \sup_{x \in \mathcal{D} \setminus \{ 0 \}} \frac{\snorm{\tilde{T} x}}{\norm{x}} \\ &= \sup_{x \in \mathcal{D} \setminus \{ 0 \}} \frac{\norm{T x}}{\norm{x}} = \norm{T}_{\mathcal{D}} . \end{align*} % Hence, the norm of the extension $\tilde{T}$ is equal to the norm of the original operator $T$. \end{proof} The spectrum of an operator is related to the set of possible outcomes of measurements in quantum mechanics. \begin{definition}[Spectrum]\label{operators:defn:spectrum} Let $T \in \mathcal{B}(\mathcal{X})$ be a bounded linear operator on a Banach space $\mathcal{X}$. We define: % \begin{enumerate}[(i)] \item The resolvent of $T$ is the set $\rho(T) := \bigl \{ z \in \C \; \vert \; T - z \, \id \mbox{ is bijective} \bigr \}$. \item The spectrum $\sigma(T) := \C \setminus \rho(T)$ is the complement of $\rho(T)$ in $\C$. \item The set of all eigenvalues is called \emph{point spectrum} % \begin{align*} \sigma_{\mathrm{p}}(T) := \bigl \{ z \in \C \; \vert \; T - z \, \id \mbox{ is not injective} \bigr \} . \end{align*} % \item The \emph{continuous spectrum} is defined as % \begin{align*} \sigma_{\mathrm{c}}(T) := \bigl \{ z \in \C \; \vert \; T - z \, \id \mbox{ is injective, } \im (T - z \, \id) \subseteq \mathcal{X} \mbox{ dense} \bigr \} . 
\end{align*} % \item The remainder of the spectrum is called \emph{residual spectrum}, % \begin{align*} \sigma_{\mathrm{r}}(T) := \bigl \{ z \in \C \; \vert \; T - z \, \id \mbox{ is injective, } \im (T - z \, \id) \subseteq \mathcal{X} \mbox{ not dense} \bigr \} \end{align*} % \end{enumerate} % \end{definition} One can show that for all $z \in \rho(T)$, the map $(T - z \, \id)^{-1}$ is a bounded operator and that the spectrum is a closed subset of $\C$; in fact, $\sigma(T)$ is \emph{compact} and contained in $\bigl \{ \lambda \in \C \; \vert \; \abs{\lambda} \leq \norm{T} \bigr \} \subset \C$. \begin{example}[Spectrum of $H = -\partial_x^2$] % \begin{enumerate}[(i)] \item The spectrum of $- \partial_x^2$ on $L^2(\R)$ is $\sigma(- \partial_x^2) = \sigma_{\mathrm{c}}(- \partial_x^2) = [0,\infty)$; it is purely continuous. \item Restricting $- \partial_x^2$ to a bounded domain (\eg an interval $[a,b]$ with suitable boundary conditions) yields an operator whose spectrum is purely discrete, $\sigma(- \partial_x^2) = \sigma_{\mathrm{p}}(- \partial_x^2)$. \end{enumerate} % \end{example} Note that $- \partial_x^2$ need not have point spectrum. If $\lambda$ lies in the continuous spectrum of an operator $H$ on a Banach space $\mathcal{X}$, then there exists no corresponding eigenvector \emph{in} $\mathcal{X}$. For instance, the eigenfunctions of $- \partial_x^2$ to $\lambda^2 \neq 0$ are of the form $\e^{\pm \ii \lambda x}$. However, none of these plane waves is square integrable on $\R$, $\norm{\e^{\pm \ii \lambda x}}_{L^2(\R)} = \infty$. \section{Adjoint operator} \label{operators:adjoint} If $\mathcal{X}$ is a normed space, then we have defined $\mathcal{X}'$, the space of bounded linear functionals on $\mathcal{X}$.
If $T : \mathcal{X} \longrightarrow \mathcal{Y}$ is a bounded linear operator between two normed spaces, it naturally defines the \emph{adjoint operator} $T' : \mathcal{Y}' \longrightarrow \mathcal{X}'$ via \begin{align} (T' L)(x) := L(Tx) \label{operators:adjoint:eqn:adjoint_operator_functional} \end{align} for all $x \in \mathcal{X}$ and $L \in \mathcal{Y}'$. In case of Hilbert spaces, one can associate the \emph{Hilbert space adjoint}. We will almost exclusively work with the latter and thus drop “Hilbert space” most of the time. \begin{definition}[Adjoint and selfadjoint operator] Let $\Hil$ be a Hilbert space and $A \in \mathcal{B}(\Hil)$ be a bounded linear operator on the Hilbert space $\Hil$. Then for any $\varphi \in \Hil$, the equation % \begin{align*} \scpro{A \psi}{\varphi} = \scpro{\psi}{\phi} && \forall \psi \in \Hil \end{align*} % defines a unique vector $\phi$. For each $\varphi \in \Hil$, we set $A^* \varphi := \phi$ and $A^*$ is called the (Hilbert space) \emph{adjoint} of $A$. In case $A^* = A$, the operator is called \emph{selfadjoint}. \end{definition} Hilbert and Banach space adjoint are related through the map $C \psi := \sscpro{\psi}{\cdot \,} = L_{\psi}$, because then the Hilbert space adjoint is defined as \begin{align*} A^* := C^{-1} A' C . \end{align*} \begin{example}[Adjoint of the time-evolution group] $\bigl ( \e^{- \ii t H} \bigr )^* = \e^{+ \ii t H^*} = \e^{+ \ii t H}$ \end{example} \begin{proposition} Let $A , B \in \mathcal{B}(\Hil)$ be two bounded linear operators on a Hilbert space $\Hil$ and $\alpha \in \C$. Then, we have: % \begin{enumerate}[(i)] \item $(A + B)^* = A^* + B^*$ \item $(\alpha A)^* = \alpha^* \, A^*$ \item $(A B)^* = B^* A^*$ \item $\norm{A^*} = \norm{A}$ \item $A^{**} = A$ \item $\norm{A^* A} = \norm{A A^*} = \norm{A}^2$ \item $\ker A = (\im A^*)^{\perp}$, $\ker A^* = (\im A)^{\perp}$ \end{enumerate} % \end{proposition} \begin{proof} Properties (i)-(iii) follow directly from the definition.
To show (iv), we note that $\norm{A} \leq \norm{A^*}$ follows from % \begin{align*} \norm{A \varphi} &= \abs{\Bscpro{\tfrac{A \varphi}{\snorm{A \varphi}}}{A \varphi}} \overset{*}{=} \sup_{\norm{L}_* = 1} \abs{L(A \varphi)} \\ &= \sup_{\norm{\psi_L} = 1} \abs{\scpro{A^* \psi_L}{\varphi}} \leq \norm{A^*} \norm{\varphi} \end{align*} % where in the step marked with $\ast$, we have used that we can calculate the norm from picking the functional associated to $\tfrac{A \varphi}{\snorm{A \varphi}}$: for a functional with norm 1, $\norm{L}_* = 1$, the norm of $L(A \varphi)$ cannot exceed that of $A \varphi$ % \begin{align*} \abs{L(A \varphi)} &= \sabs{\sscpro{\psi_L}{A \varphi}} \leq \snorm{\psi_L} \snorm{A \varphi} = \snorm{A \varphi} . \end{align*} % Here, $\psi_L$ is the vector such that $L = \sscpro{\psi_L}{\cdot \,}$ which exists by Theorem~\ref{hilbert_spaces:dual_space:thm:Riesz_Lemma}. This theorem also ensures $\norm{L}_* = \snorm{\psi_L}$. On the other hand, from % \begin{align*} \bnorm{A^* \psi_L} &= \bnorm{L_{A^* \psi_L}}_* = \sup_{\norm{\varphi} = 1} \babs{\bscpro{A^* \psi_L}{\varphi}} \\ &\leq \sup_{\norm{\varphi} = 1} \norm{\psi_L} \norm{A \varphi} = \norm{A} \norm{L}_* = \norm{A} \snorm{\psi_L} \end{align*} % we conclude $\norm{A^*} \leq \norm{A}$. Hence, $\norm{A^*} = \norm{A}$. (v) is clear. For (vi), we remark % \begin{align*} \norm{A}^2 &= \sup_{\norm{\varphi} = 1} \norm{A \varphi}^2 = \sup_{\norm{\varphi} = 1} \bscpro{\varphi}{A^* A \varphi} \\ &\leq \sup_{\norm{\varphi} = 1} \norm{A^* A \varphi} = \norm{A^* A} . \end{align*} % This means % \begin{align*} \norm{A}^2 \leq \norm{A^* A} \leq \snorm{A^*} \norm{A} = \norm{A}^2 . \end{align*} % which combined with (iv), % \begin{align*} \norm{A}^2 = \norm{A^*}^2 \leq \norm{A A^*} \leq \snorm{A} \norm{A^*} = \norm{A}^2 \end{align*} % implies $\norm{A^* A}= \norm{A}^2 = \norm{A A^*}$. (vii) is left as an exercise. \end{proof} \begin{definition} Let $\Hil$ be a Hilbert space and $A \in \mathcal{B}(\Hil)$. 
Then $A$ is called % \begin{enumerate}[(i)] \item selfadjoint (or hermitian) if $A^* = A$, \item unitary if $A^* = A^{-1}$, \item an orthogonal projection if $A^2 = A$ and $A^* = A$, and \item positive semidefinite (or non-negative) iff $\bscpro{\varphi}{A \varphi} \geq 0$ for all $\varphi \in \Hil$ and positive (definite) if the inequality is strict for all $\varphi \neq 0$. \end{enumerate} % \end{definition} This leads to a particular characterization of the spectrum as a set \cite[Theorem~VII.12]{Reed_Simon:M_cap_Phi_1:1972}: \begin{theorem}[Weyl's criterion]\label{operators:thm:Weyl_criterion} Let $H$ be a bounded selfadjoint operator on a Hilbert space $\Hil$. Then $\lambda \in \sigma(H)$ holds if and only if there exists a sequence $\{ \psi_n \}_{n \in \N}$ so that $\snorm{\psi_n} = 1$ and % \begin{align*} \lim_{n \to \infty} \bnorm{H \psi_n - \lambda \, \psi_n}_{\Hil} = 0 . \end{align*} % \end{theorem} \begin{example}[Weyl's criterion for $H = -\partial_x^2$ on $L^2(\R)$] For any $\lambda \in \R \setminus \{ 0 \}$, one can choose a sequence $\{ \psi_n \}_{n \in \N}$ of normalized, smoothly cut-off plane waves $\e^{\pm \ii \lambda x}$ whose cutoff widens with $n$; this yields a Weyl sequence for $\lambda^2 \in \sigma(- \partial_x^2)$. To remain normalized as they spread out, the amplitude of the $\psi_n$ has to decrease, so pointwise $\psi_n(x) \to 0$ as $n \to \infty$. \marginpar{2013.10.24} \end{example} \section{Unitary operators} \label{operators:unitary} Unitary operators $U$ have the nice property that \begin{align*} \scpro{U \varphi}{U \psi} = \scpro{\varphi}{U^* U \psi} = \scpro{\varphi}{U^{-1} U \psi} = \scpro{\varphi}{\psi} \end{align*} for all $\varphi , \psi \in \Hil$. In quantum mechanics, we are interested in solutions to the Schrödinger equation \begin{align*} \ii \frac{\dd }{\dd t} \psi(t) = H \psi(t) , && \psi(0) = \psi_0 , \end{align*} for a hamiltonian which satisfies $H^* = H$. Assume that $H$ is bounded (this is indeed the case for some simple quantum systems, \eg spin systems).
Then the unitary group generated by $H$, \begin{align*} U(t) = \e^{- \ii t H} , \end{align*} can be written as a power series, \begin{align*} \e^{- \ii t H} = \sum_{n = 0}^{\infty} \frac{1}{n!} (-\ii t)^n \, H^n \, , \end{align*} where $H^0 := \id$ by convention. The sequence of partial sums converges in the operator norm to $\e^{- \ii t H}$, \begin{align*} \sum_{n = 0}^N \frac{1}{n!} (-\ii t)^n \, H^n \xrightarrow{N \to \infty} \e^{- \ii t H} , \end{align*} since we can make the simple estimate \begin{align*} \norm{\sum_{n = 0}^{\infty} \frac{1}{n!} (- \ii t)^n \, H^n \psi} &\leq \sum_{n = 0}^{\infty} \frac{1}{n!} \abs{t}^n \norm{H^n \psi} \leq \sum_{n = 0}^{\infty} \frac{1}{n!} \abs{t}^n \norm{H}^n \, \norm{\psi} \\ &= \e^{\abs{t} \norm{H}} \norm{\psi} < \infty . \end{align*} This shows that the power series of the exponential converges in the operator norm independently of the choice of $\psi$ to a bounded operator. Given a unitary evolution group, it is suggestive to obtain the hamiltonian which generates it by differentiating $U(t) \psi$ with respect to time. This is indeed the correct idea. The left-hand side of the Schrödinger equation (modulo a factor of $\ii$) can be expressed as a limit \begin{align*} \frac{\dd}{\dd t} \psi(t) = \lim_{\delta \to 0} \tfrac{1}{\delta} \bigl ( \psi(t+\delta) - \psi(t) \bigr ) .
\end{align*} This limit really exists, but before we compute it, we note that since \begin{align*} \psi(t+\delta) - \psi(t) = \e^{- \ii (t+\delta) H} \psi_0 - \e^{- \ii t H} \psi_0 = \e^{- \ii t H} \bigl ( \e^{- \ii \delta H} - 1 \bigr ) \psi_0 \, , \end{align*} it suffices to consider differentiability at $t = 0$: taking limits in norm of $\Hil$, we get \begin{align*} \frac{\dd}{\dd t} \psi(0) &= \lim_{\delta \to 0} \, \tfrac{1}{\delta} \bigl ( \psi(\delta) - \psi_0 \bigr ) = \lim_{\delta \to 0} \, \frac{1}{\delta} \left ( \sum_{n = 0}^{\infty} \frac{(- \ii)^n}{n!} \delta^n H^n \psi_0 - \psi_0 \right ) \\ & = \lim_{\delta \to 0} \sum_{n = 1}^{\infty} \frac{(- \ii)^n}{n!} \delta^{n-1} H^n \psi_0 = - \ii H \psi_0 . \end{align*} Hence, we have established that $\e^{- \ii t H} \psi_0$ solves the Schrödinger equation with $\psi(0) = \psi_0$, \begin{align*} \ii \frac{\dd}{\dd t} \psi(t) = H \psi(t) . \end{align*} However, this procedure \emph{does not work} if $H$ is unbounded (\ie the generic case)! Before we proceed, we need to introduce several different notions of convergence of sequences of operators which are necessary to define derivatives of $U(t)$. \begin{definition}[Convergence of operators] Let $\{ A_n \}_{n \in \N} \subset \mathcal{B}(\Hil)$ be a sequence of bounded operators. We say that the sequence converges to $A \in \mathcal{B}(\Hil)$ % \begin{enumerate}[(i)] \item uniformly/in norm if $\lim_{n \to \infty} \bnorm{A_n - A} = 0$. \item strongly if $\lim_{n \to \infty} \bnorm{A_n \psi - A \psi} = 0$ for all $\psi \in \Hil$. \item weakly if $\lim_{n \to \infty} \bscpro{\varphi}{A_n \psi - A \psi} = 0$ for all $\varphi , \psi \in \Hil$. \end{enumerate} % \end{definition} Convergence of a sequence of operators in norm implies strong and weak convergence, but not the other way around. In the tutorials, we will also show explicitly that weak convergence does not necessarily imply strong convergence.
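For a finite-dimensional toy model, the convergence of the exponential series and the unitarity of $U(t)$ can be checked directly; the following Python sketch (a numerical illustration only -- the $4 \times 4$ hermitian matrix is an ad-hoc stand-in for a bounded $H$) compares the partial sums against $\e^{- \ii t H}$ computed via the spectral decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random hermitian "hamiltonian" H = A + A^* on C^4
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = A + A.conj().T
t = 1.0

# Exact U(t) = exp(-i t H) via the spectral decomposition H = V diag(lam) V^*
lam, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * t * lam)) @ V.conj().T

# Partial sums of the exponential series sum_n (-i t)^n H^n / n!
S = np.zeros_like(H)
term = np.eye(4, dtype=complex)
for n in range(60):
    S = S + term
    term = term @ (-1j * t * H) / (n + 1)

# The series converges to U(t) in the operator norm ...
print(np.linalg.norm(S - U, 2))          # tiny

# ... and U(t) is unitary, so it preserves norms
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
print(np.allclose(U.conj().T @ U, np.eye(4)))
print(np.isclose(np.linalg.norm(U @ psi), np.linalg.norm(psi)))
```

In infinite dimensions the same computation works verbatim for bounded $H$; the point of the next paragraphs is precisely that it fails for unbounded $H$.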
\begin{example} With the arguments above, we have shown that if $H = H^*$ is selfadjoint and bounded, then $t \mapsto \e^{- \ii t H}$ is \emph{uniformly} continuous. \end{example} If $\norm{H} = \infty$ on the other hand, uniform continuity is too strong a requirement. If $H = - \frac{1}{2} \Delta_x$ is the free Schrödinger operator on $L^2(\R^n)$, then the Fourier transform $\Fourier$ links the position representation on $L^2(\R^n)$ to the momentum representation on $L^2(\R^n)$. In this representation, the free Schrödinger operator $H$ simplifies to the multiplication operator \begin{align*} \hat{H} = \tfrac{1}{2} \hat{k}^2 \end{align*} acting on $L^2(\R^n)$. This operator is not bounded since $\sup_{k \in \R^n} \tfrac{1}{2} k^2 = \infty$ (\cf problem~24). More elaborate mathematical arguments show that for any $t \neq 0$, the norm of the difference between $\hat{U}(t) = \e^{- \ii t \frac{1}{2} \hat{k}^2}$ and $\hat{U}(0) = \id$ \begin{align*} \bnorm{\hat{U}(t) - \id} = \sup_{k \in \R^n} \babs{\e^{- \ii t \frac{1}{2} k^2} - 1} = 2 \end{align*} is exactly $2$ and $\hat{U}(t)$ \emph{cannot} be uniformly continuous in $t$. However, if $\widehat{\psi} \in L^2(\R^n)$ is a wave function, the estimate \begin{align*} \bnorm{\hat{U}(t) \widehat{\psi} - \widehat{\psi}}^2 &= \int_{\R^n} \dd k \, \babs{\e^{- \ii t \frac{1}{2} k^2} - 1}^2 \, \babs{\widehat{\psi}(k)}^2 \\ &\leq 2^2 \int_{\R^n} \dd k \, \babs{\widehat{\psi}(k)}^2 = 4 \bnorm{\widehat{\psi}}^2 \end{align*} shows that we can invoke the Theorem of Dominated Convergence to conclude $\hat{U}(t)$ is \emph{strongly continuous} in $t \in \R$.
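Both statements can be probed numerically on a discretized momentum grid; in the following Python sketch (the grid size and the Gaussian profile are ad-hoc choices for illustration), the sup-norm distance stays close to $2$ for every $t \neq 0$, while the $L^2$-distance for a fixed normalized Gaussian $\widehat{\psi}$ shrinks with $t$.

```python
import numpy as np

# Momentum grid; wide and fine enough that the phase t k^2 / 2 sweeps past pi
k = np.linspace(-100.0, 100.0, 400_001)
dk = k[1] - k[0]

# Normalized Gaussian in the momentum representation
psi_hat = np.pi ** (-0.25) * np.exp(-k**2 / 2)

sup_norm, l2_dist = {}, {}
for t in (1.0, 0.1, 0.01):
    phase = np.exp(-1j * t * k**2 / 2) - 1.0
    sup_norm[t] = np.abs(phase).max()      # stays ~ 2 for every t != 0
    l2_dist[t] = np.sqrt(np.sum(np.abs(phase * psi_hat)**2) * dk)
    print(f"t = {t}: sup = {sup_norm[t]:.4f}, L2 distance = {l2_dist[t]:.4f}")
```

The sup over $k$ does not decay with $t$, but against the fixed Gaussian the mass of the integrand sits where $t k^2 / 2$ is small, which is the mechanism behind strong (but not uniform) continuity.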
\begin{definition}[Strongly continuous one-parameter unitary evolution group] A family of unitary operators $\{ U(t) \}_{t \in \R}$ on a Hilbert space $\Hil$ is called a strongly continuous one-parameter unitary group -- or unitary evolution group for short -- if % \begin{enumerate}[(i)] \item $t \mapsto U(t)$ is strongly continuous and \item $U(t) U(t') = U(t+t')$ as well as $U(0) = \id_{\Hil}$ \end{enumerate} % hold for all $t,t' \in \R$. \end{definition} This is again a \emph{group representation of $\R$} just as in the case of the classical flow $\Phi$. The form of the Schrödinger equation, \begin{align*} \ii \frac{\dd }{\dd t} \psi(t) = H \psi(t) , \end{align*} also suggests that strong continuity/differentiability is the correct notion. Let us once more consider the free hamiltonian $H = - \frac{1}{2} \Delta_x$ on $L^2(\R^n)$. We will show that its domain is \begin{align*} \mathcal{D}(H) = \bigl \{ \varphi \in L^2(\R^n) \; \vert \; - \Delta_x \varphi \in L^2(\R^n) \bigr \} . \end{align*} In Chapter~\ref{S_and_Sprime}, we will learn that $\mathcal{D}(H)$ is mapped by the Fourier transform onto \begin{align*} \mathcal{D}(\hat{H}) = \bigl \{ \widehat{\psi} \in L^2(\R^n) \; \vert \; \hat{k}^2 \widehat{\psi} \in L^2(\R^n) \bigr \} . \end{align*} Dominated Convergence can once more be used to make the following claims rigorous: for any $\widehat{\psi} \in \mathcal{D}(\hat{H})$, we have \begin{align} \lim_{t \to 0} &\Bnorm{\tfrac{\ii}{t} \bigl ( \hat{U}(t) - \id \bigr ) \widehat{\psi} - \tfrac{1}{2} \hat{k}^2 \widehat{\psi}} \leq \lim_{t \to 0} \Bnorm{\tfrac{\ii}{t} \bigl ( \hat{U}(t) - \id \bigr ) \widehat{\psi}} + \bnorm{\tfrac{1}{2} \hat{k}^2 \widehat{\psi}} \label{operators:unitary:eqn:free_evolution_strongly_continuous} . \end{align} The second term is finite since $\widehat{\psi} \in \mathcal{D}(\hat{H})$ and we have to focus on the first term. 
On the level of functions, \begin{align*} \lim_{t \to 0} \tfrac{\ii}{t} \bigl ( \e^{- \ii t \frac{1}{2} k^2} - 1 \bigr ) = \ii \frac{\dd}{\dd t} \e^{- \ii t \frac{1}{2} k^2} \Big \vert_{t = 0} = \tfrac{1}{2} k^2 \end{align*} holds pointwise. Furthermore, by the mean value theorem, for any $t \in \R$ with $\abs{t} \leq 1$, say, there exists $t_0$ between $0$ and $t$ such that \marginpar{2013.10.29} \begin{align*} \tfrac{1}{t} \bigl ( \e^{- \ii t \frac{1}{2} k^2} - 1 \bigr ) = \partial_t \e^{- \ii t \frac{1}{2} k^2} \big \vert_{t = t_0} = - \ii \, \tfrac{1}{2} k^2 \, \e^{- \ii t_0 \frac{1}{2} k^2} . \end{align*} This can be bounded uniformly in $t$ by $\frac{1}{2} k^2$. Thus, also the first term can be bounded by $\bnorm{\frac{1}{2} \hat{k}^2 \widehat{\psi}}$ uniformly. By Dominated Convergence, we can interchange the limit $t \to 0$ and integration with respect to $k$ on the left-hand side of equation~\eqref{operators:unitary:eqn:free_evolution_strongly_continuous}. But then the integrand vanishes in the limit, and thus the domain where the free evolution group is differentiable coincides with the domain of the Fourier transformed hamiltonian, \begin{align*} \lim_{t \to 0} \norm{\tfrac{\ii}{t} \bigl ( \hat{U}(t) - \id \bigr ) \widehat{\psi} - \tfrac{1}{2} \hat{k}^2 \widehat{\psi}} = 0 . \end{align*} This suggests the following definition: \begin{definition}[Generator of a unitary group] A densely defined linear operator on a Hilbert space $\Hil$ with domain $\mathcal{D}(H) \subseteq \Hil$ is called generator of a \emph{unitary evolution group} $U(t)$, $t \in \R$, if % \begin{enumerate}[(i)] \item the domain coincides with % \begin{align*} \widetilde{\mathcal{D}(H)} = \Bigl \{ \varphi \in \Hil \; \big \vert \; t \mapsto U(t) \varphi \mbox{ differentiable} \Bigr \} = \mathcal{D}(H) \end{align*} % \item and for all $\psi \in \mathcal{D}(H)$, the Schrödinger equation holds, % \begin{align*} \ii \frac{\dd}{\dd t} U(t) \psi = H U(t) \psi .
\end{align*} % \end{enumerate} % \end{definition} This is only one of the two implications: usually we are given a hamiltonian $H$ and we would like to know under which circumstances this operator generates a unitary evolution group. We will answer this question conclusively in the next section with Stone's Theorem. \begin{theorem} Let $H$ be the generator of a strongly continuous evolution group $U(t)$, $t \in \R$. Then the following holds: % \begin{enumerate}[(i)] \item $\mathcal{D}(H)$ is invariant under the action of $U(t)$, \ie $U(t) \mathcal{D}(H) = \mathcal{D}(H)$ for all $t \in \R$. \item $H$ commutes with $U(t)$, \ie $[U(t) , H] \psi := U(t) \, H \psi - H \, U(t) \psi = 0$ for all $t \in \R$ and $\psi \in \mathcal{D}(H)$. \item $H$ is \emph{symmetric}, \ie $\scpro{H \varphi}{\psi} = \scpro{\varphi}{H \psi}$ holds for all $\varphi , \psi \in \mathcal{D}(H)$. \item $U(t)$ is uniquely determined by $H$. \item $H$ is uniquely determined by $U(t)$. \end{enumerate} % \end{theorem} \begin{proof} % \begin{enumerate}[(i)] \item Let $\psi \in \mathcal{D}(H)$. To show that $U(t) \psi$ is still in the domain, we have to show that the norm of $H U(t) \psi$ is finite. Since $H$ is the generator of $U(t)$, it is equal to % \begin{align*} H \psi = \ii \frac{\dd}{\dd s} U(s) \psi \bigg \vert_{s = 0} = \lim_{s \to 0} \tfrac{\ii}{s} \bigl ( U(s) - \id \bigr ) \psi . \end{align*} % Let us start with $s > 0$ and omit the limit. Then % \begin{align*} \Bnorm{\tfrac{\ii}{s} \bigl ( U(s) - \id \bigr ) U(t) \psi} &= \Bnorm{U(t) \tfrac{\ii}{s} \bigl ( U(s) - \id \bigr ) \psi} = \Bnorm{\tfrac{\ii}{s} \bigl ( U(s) - \id \bigr ) \psi} < \infty \end{align*} % holds for all $s > 0$. Taking the limit on left and right-hand side yields that we can estimate the norm of $H U(t) \psi$ by the norm of $H \psi$ -- which is finite since $\psi$ is in the domain. This means $U(t) \mathcal{D}(H) \subseteq \mathcal{D}(H)$. 
To show the converse, we repeat the proof for $U(-t) = U(t)^{-1} = U(t)^*$ to obtain % \begin{align*} \mathcal{D}(H) = U(t) U(-t) \mathcal{D}(H) \subseteq U(t) \mathcal{D}(H) . \end{align*} % Hence, $U(t) \mathcal{D}(H) = \mathcal{D}(H)$. \item This follows from an extension of the proof of (i): since the domain $\mathcal{D}(H)$ coincides with the set of vectors on which $U(t)$ is strongly differentiable and is left invariant by $U(t)$, taking limits on left- and right-hand side of % \begin{align*} \Bnorm{\tfrac{\ii}{s} \bigl ( U(s) - \id \bigr ) U(t) \psi - U(t) \tfrac{\ii}{s} \bigl ( U(s) - \id \bigr ) \psi} = 0 \end{align*} % leads to $[H,U(t)] \psi = 0$. \item This follows from differentiating $\scpro{U(t) \varphi}{U(t) \psi}$ for arbitrary $\varphi , \psi \in \mathcal{D}(H)$ and using $\bigl [ U(t) , H \bigr ] = 0$ as well as the unitarity of $U(t)$ for all $t \in \R$. \item Assume that both unitary evolution groups, $U(t)$ and $\tilde{U}(t)$, have $H$ as their generator. For any $\psi \in \mathcal{D}(H)$, we can calculate the time derivative of $\bnorm{(U(t) - \tilde{U}(t)) \psi}^2$, % \begin{align*} \frac{\dd }{\dd t} \Bnorm{\bigl ( U(t) - \tilde{U}(t) \bigr ) \psi}^2 &= 2 \frac{\dd }{\dd t} \bigl ( \norm{\psi}^2 - \Re \bscpro{U(t) \psi}{\tilde{U}(t) \psi} \bigr ) \\ &= - 2 \Re \Bigl ( \bscpro{- \ii \, H U(t) \psi}{\tilde{U}(t) \psi} + \bscpro{U(t) \psi}{- \ii \, H \tilde{U}(t) \psi} \Bigr ) \\ &= 0 . \end{align*} % Since $U(0) = \id = \tilde{U}(0)$, this means $U(t)$ and $\tilde{U}(t)$ agree at least on $\mathcal{D}(H)$. Using the fact that there is only one bounded extension of a bounded operator to all of $\Hil$, Theorem~\ref{operators:bounded:thm:extensions_bounded_operators}, we conclude they must be equal on all of $\Hil$. \item This follows from the definition of the generator and the density of the domain.
\end{enumerate} % \end{proof} Now that we have collected a few facts on unitary evolution groups, one could think that \emph{symmetric} operators generate evolution groups, but \emph{this is false!} The standard example to showcase this fact is the group of translations on $L^2([0,1])$. Since we would like $T(t)$ to conserve ``mass'' -- or, more accurately, probability --, we define for $\varphi \in L^2([0,1])$ and $0 \leq t < 1$ \begin{align*} \bigl ( T(t) \varphi \bigr )(x) := \left \{ % \begin{matrix} \varphi(x - t) & x - t \in [0,1] \\ \varphi(x - t + 1) & x - t + 1 \in [0,1] \\ \end{matrix} % \right . . \end{align*} For all other $t \in \R$, we extend this operator periodically, \ie we plug in the fractional part of $t$. Clearly, $\bscpro{T(t) \varphi}{T(t) \psi} = \bscpro{\varphi}{\psi}$ holds for all $\varphi , \psi \in L^2([0,1])$. Locally, the infinitesimal generator is $- \ii \partial_x$ as a simple calculation shows: \begin{align*} \biggl ( \ii \frac{\dd}{\dd t} \bigl ( T(t) \varphi \bigr ) \biggr )(x) \bigg \vert_{t = 0} &= \ii \frac{\dd}{\dd t} \varphi(x - t) \bigg \vert_{t = 0} = - \ii \partial_x \varphi(x) \end{align*} However, $T(t)$ does not preserve the maximal domain of $- \ii \partial_x$, \begin{align*} \mathcal{D}_{\max}(- \ii \partial_x) = \bigl \{ \varphi \in L^2([0,1]) \; \vert \; -\ii \partial_x \varphi \in L^2([0,1]) \bigr \} . \end{align*} Any element of the maximal domain has a continuous representative, but if $\varphi(0) \neq \varphi(1)$, then for $t > 0$, $T(t) \varphi$ will have a discontinuity at $x = t$. We will denote the operator $- \ii \partial_x$ on $\mathcal{D}_{\max}(- \ii \partial_x)$ with $\mathsf{P}_{\max}$.
Let us check whether $\mathsf{P}_{\max}$ is symmetric: for any $\varphi , \psi \in \mathcal{D}_{\max}(- \ii \partial_x)$, we compute \begin{align} \bscpro{\varphi}{-\ii \partial_x \psi} &= \int_0^1 \dd x \, \varphi^*(x) \, (- \ii \partial_x \psi)(x) = \Bigl [ - \ii \varphi^*(x) \, \psi(x) \Bigr ]_0^1 - \int_0^1 \dd x \, (-\ii) \partial_x \varphi^*(x) \, \psi(x) \notag \\ & = \ii \bigl ( \varphi^*(0) \, \psi(0) - \varphi^*(1) \, \psi(1) \bigr ) + \int_0^1 \dd x \, (- \ii \partial_x \varphi)^*(x) \, \psi(x) \notag \\ &= \ii \bigl ( \varphi^*(0) \, \psi(0) - \varphi^*(1) \, \psi(1) \bigr ) + \bscpro{- \ii \partial_x \varphi}{\psi} \label{operators:unitary:eqn:symmetry_translations_interval} . \end{align} In general, the boundary terms do not disappear and the maximal domain is “too large” for $- \ii \partial_x$ to be symmetric. Thus, it is not at all surprising that $T(t)$ does not leave $\mathcal{D}_{\max}(- \ii \partial_x)$ invariant. Let us try another domain: one way to make the boundary terms disappear is to choose \begin{align*} \mathcal{D}_{\min}(- \ii \partial_x) := \Bigl \{ \varphi \in L^2([0,1]) \; \big \vert \; - \ii \partial_x \varphi \in L^2([0,1]) , \; \varphi(0) = 0 = \varphi(1) \Bigr \} . \end{align*} We denote $- \ii \partial_x$ on this “minimal” domain with $\mathsf{P}_{\min}$. In this case, the boundary terms in equation~\eqref{operators:unitary:eqn:symmetry_translations_interval} vanish, which tells us that $\mathsf{P}_{\min}$ is symmetric. Alas, the domain is still not invariant under translations $T(t)$, even though $\mathsf{P}_{\min}$ is symmetric. This is an example of a symmetric operator which \emph{does not} generate a unitary group.
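The integration by parts in equation~\eqref{operators:unitary:eqn:symmetry_translations_interval}, boundary terms included, is easy to verify numerically; the following Python sketch (with two arbitrarily chosen smooth, non-periodic functions on $[0,1]$) evaluates both sides by the trapezoidal rule.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20_001)

def trap(y):
    # trapezoidal rule on the uniform grid x
    return np.sum((y[1:] + y[:-1]) / 2) * (x[1] - x[0])

# Two smooth test functions with phi(0) != phi(1) and psi(0) != psi(1)
phi  = (1 + 1j) * x**2 + 1.0
dphi = 2 * (1 + 1j) * x
psi  = np.exp(1j * x)
dpsi = 1j * np.exp(1j * x)

# <phi, -i psi'> versus  i (phi*(0) psi(0) - phi*(1) psi(1)) + <-i phi', psi>
lhs = trap(np.conj(phi) * (-1j) * dpsi)
boundary = 1j * (np.conj(phi[0]) * psi[0] - np.conj(phi[-1]) * psi[-1])
rhs = boundary + trap(np.conj((-1j) * dphi) * psi)

print(abs(lhs - rhs))   # agree up to quadrature error
print(abs(boundary))    # the boundary terms do not vanish here
```

For functions violating periodic-type boundary conditions, the boundary contribution is visibly nonzero, which is exactly why the maximal domain is too large.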
There is another thing we have missed so far: the translations allow for an additional phase factor, \ie for $\varphi , \psi \in L^2([0,1])$ and $\vartheta \in [0,2\pi)$, we define for $0 \leq t < 1$ \begin{align*} \bigl ( T_{\vartheta}(t) \varphi \bigr )(x) := \left \{ % \begin{matrix} \varphi(x - t) & x - t \in [0,1] \\ \e^{\ii \vartheta} \varphi(x - t + 1) & x - t + 1 \in [0,1] \\ \end{matrix} % \right . . \end{align*} while for all other $t$, we plug in the fractional part of $t$. The additional phase factor cancels in the inner product, $\bscpro{T_{\vartheta}(t) \varphi}{T_{\vartheta}(t) \psi} = \bscpro{\varphi}{\psi}$ still holds true for all $\varphi , \psi \in L^2([0,1])$. In general $T_{\vartheta}(t) \neq T_{\vartheta'}(t)$ if $\vartheta \neq \vartheta'$ and the unitary groups are genuinely different. Repeating the simple calculation from before, we see that the local generator still is $- \ii \partial_x$ and it would seem we can generate a family of unitary evolutions from a \emph{single} generator. The confusion is resolved if we focus on \emph{invariant domains}: choosing $\vartheta \in [0,2\pi)$, we define $\mathsf{P}_{\vartheta}$ to be the operator $- \ii \partial_x$ on the domain \begin{align*} \mathcal{D}_{\vartheta}(- \ii \partial_x) := \Bigl \{ \varphi \in L^2([0,1]) \; \big \vert \; - \ii \partial_x \varphi \in L^2([0,1]) , \; \varphi(0) = \e^{- \ii \vartheta} \varphi(1) \Bigr \} . \end{align*} A quick look at equation~\eqref{operators:unitary:eqn:symmetry_translations_interval} reassures us that $\mathsf{P}_{\vartheta}$ is symmetric and a quick calculation shows it is also \emph{invariant} under the action of $T_{\vartheta}(t)$. Hence, $\mathsf{P}_{\vartheta}$ is the generator of $T_{\vartheta}$, and the \emph{definition of an unbounded operator is incomplete without spelling out its domain}. 
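That each $\vartheta$ defines a genuinely different operator can also be seen numerically: a central-difference discretization of $- \ii \partial_x$ on $[0,1]$ with the twisted wrap-around $\varphi(1) = \e^{\ii \vartheta} \varphi(0)$ is a hermitian matrix whose eigenvalues near zero approximate $\vartheta + 2 \pi n$ for integer $n$. The following Python sketch (the grid size $N = 400$ is an ad-hoc choice) illustrates this.

```python
import numpy as np

def twisted_momentum(theta, N=400):
    """Central-difference discretization of -i d/dx on [0, 1] with the
    twisted boundary condition psi(1) = e^{i theta} psi(0)."""
    h = 1.0 / N
    P = np.zeros((N, N), dtype=complex)
    for j in range(N):
        P[j, (j + 1) % N] += -1j / (2 * h)
        P[j, (j - 1) % N] -= -1j / (2 * h)
    # The wrap-around entries pick up the twist phase
    P[N - 1, 0] *= np.exp(1j * theta)
    P[0, N - 1] *= np.exp(-1j * theta)
    return P

theta = 1.0
P = twisted_momentum(theta)
assert np.allclose(P, P.conj().T)      # hermitian, hence real eigenvalues
ev = np.linalg.eigvalsh(P)

# The low-lying eigenvalues approximate theta + 2 pi n
for target in (theta, theta - 2 * np.pi, theta + 2 * np.pi):
    print(target, ev[np.argmin(np.abs(ev - target))])
```

Shifting $\vartheta$ rigidly shifts the low-lying spectrum, so the operators $\mathsf{P}_{\vartheta}$ are pairwise distinct even though the differential expression $- \ii \partial_x$ is the same.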
\begin{example}[The wave equation with boundary conditions] Another example where the domain is crucial to the properties of the solutions is the wave equation on $[0,L]$, % \begin{align*} \partial_t^2 u(x,t) - \partial_x^2 u(x,t) = 0 , && u \in \Cont^2([0,L] \times \R) . \end{align*} % Here, $u$ is the amplitude of the vibration, \ie the lateral deflection. If we choose Dirichlet boundary conditions at both ends, \ie $u(0) = 0 = u(L)$, we model a closed pipe; if we choose Dirichlet boundary conditions on one end, $u(0) = 0$, and Neumann boundary conditions on the other, $u'(L) = 0$, we model a half-closed pipe. Choosing domains is a question of physics! \end{example} \section{Selfadjoint operators} \label{operators:selfadjoint_operators} Although we do not have time to explore this very far, the crucial difference between $\mathsf{P}_{\min}$ and $\mathsf{P}_{\vartheta}$ is that the former is only symmetric while the latter is also selfadjoint. We first recall the definition of the adjoint of a possibly unbounded operator: \begin{definition}[Adjoint operator] Let $A$ be a densely defined linear operator on a Hilbert space $\Hil$ with domain $\mathcal{D}(A)$. Let $\mathcal{D}(A^*)$ be the set of $\varphi \in \Hil$ for which there exists $\phi \in \Hil$ with % \begin{align*} \scpro{A \psi}{\varphi} = \scpro{\psi}{\phi} && \forall \psi \in \mathcal{D}(A) . \end{align*} % For each $\varphi \in \mathcal{D}(A^*)$, we define $A^* \varphi := \phi$ and $A^*$ is called the adjoint of $A$. \end{definition} \begin{remark} By Riesz Lemma, $\varphi$ belongs to $\mathcal{D}(A^*)$ if and only if % \begin{align*} \babs{\scpro{A \psi}{\varphi}} \leq C \norm{\psi} && \forall \psi \in \mathcal{D}(A) . \end{align*} % This is equivalent to saying $\varphi \in \mathcal{D}(A^*)$ if and only if $\psi \mapsto \sscpro{A \psi}{\varphi}$ is continuous on $\mathcal{D}(A)$. As a matter of fact, we could have used the latter to \emph{define} the adjoint operator.
\end{remark} One word of caution: even if $A$ is densely defined, $A^*$ need not be. \begin{example} Let $f \in L^{\infty}(\R)$, but $f \not\in L^2(\R)$, and pick $\psi_0 \in L^2(\R) \setminus \{ 0 \}$. Define % \begin{align*} \mathcal{D}(T_f) := \Bigl \{ \psi \in L^2(\R) \; \vert \; \int_{\R} \dd x \, \abs{f(x) \, \psi(x)} < \infty \Bigr \} . \end{align*} % Then the adjoint of the operator % \begin{align*} T_f \psi := \sscpro{f}{\psi} \, \psi_0 , && \psi \in \mathcal{D}(T_f) , \end{align*} % has domain $\mathcal{D}(T_f^*) = \{ 0 \}$. Let $\psi \in \mathcal{D}(T_f)$. Then for any $\varphi \in \mathcal{D}(T_f^*)$ % \begin{align*} \bscpro{T_f \psi}{\varphi} &= \bscpro{\sscpro{f}{\psi} \, \psi_0}{\varphi} = \bscpro{\psi}{f} \, \bscpro{\psi_0}{\varphi} \\ &= \bscpro{\psi}{\sscpro{\psi_0}{\varphi} f} . \end{align*} % Hence $T_f^* \varphi = \sscpro{\psi_0}{\varphi} f$. However $f \not\in L^2(\R)$ and thus $\varphi = 0$ is the only possible choice for which $T_f^* \varphi$ is well defined. \end{example} Symmetric operators, however, are special: since $\scpro{H \varphi}{\psi} = \scpro{\varphi}{H \psi}$ holds by definition for all $\varphi , \psi \in \mathcal{D}(H)$, the domain of $H^*$ contains that of $H$, $\mathcal{D}(H^*) \supseteq \mathcal{D}(H)$. In particular, $\mathcal{D}(H^*) \subseteq \Hil$ is also dense. Thus, $H^*$ is an \emph{extension of $H$}. \begin{definition}[Selfadjoint operator] Let $H$ be a symmetric operator on a Hilbert space $\Hil$ with domain $\mathcal{D}(H)$. $H$ is called selfadjoint, $H^* = H$, iff $\mathcal{D}(H^*) = \mathcal{D}(H)$. \end{definition} One word regarding notation: if we write $H^* = H$, we do not just imply that the “operator prescription” of $H$ and $H^*$ is the same, but that the \emph{domains} of both coincide. \begin{example} In this sense, $\mathsf{P}_{\min}^* \neq \mathsf{P}_{\min}$.
\end{example} The central theorem of this section is Stone's Theorem: \begin{theorem}[Stone] To every strongly continuous one-parameter unitary group $U$ on a Hilbert space $\Hil$, there exists a selfadjoint operator $H = H^*$ which generates $U(t) = \e^{- \ii t H}$. Conversely, every selfadjoint operator $H$ generates the unitary evolution group $U(t) = \e^{- \ii t H}$. \end{theorem} A complete proof \cite[Chapter~VIII.3]{Reed_Simon:M_cap_Phi_1:1972} is beyond our capabilities. \marginpar{2013.10.31} \section{Recasting the Maxwell equations as a Schrödinger equation} \label{operators:Maxwell} The Maxwell equations in a medium with electric permittivity $\eps$ and magnetic permeability $\mu$ are given by the two \emph{dynamical Maxwell equations} \begin{subequations} \label{operators:eqn:dynamical_Maxwell} % \begin{align} \partial_t \mathbf{E}(t) &= + \eps^{-1} \, \nabla_x \times \mathbf{H}(t) \\ \partial_t \mathbf{H}(t) &= - \mu^{-1} \, \nabla_x \times \mathbf{E}(t) \end{align} % \end{subequations} and the two \emph{kinetic Maxwell equations} \begin{subequations} \label{operators:eqn:source_Maxwell_eqns} % \begin{align} \nabla_x \cdot \eps \mathbf{E}(t) &= \rho \\ \nabla_x \cdot \mu \mathbf{H}(t) &= j . \end{align} % \end{subequations} Here, the source terms in the kinetic equations are the \emph{charge density} $\rho$ and the \emph{current density} $j$; in the absence of sources, $\rho = 0$ and $j = 0$, the Maxwell equations are homogeneous.
We can rewrite the dynamical equations~\eqref{operators:eqn:dynamical_Maxwell} as a Schrödinger-type equation, \begin{align} \ii \frac{\dd}{\dd t} \left ( % \begin{matrix} \mathbf{E}(t) \\ \mathbf{H}(t) \\ \end{matrix} % \right ) &= M(\eps,\mu) \left ( % \begin{matrix} \mathbf{E}(t) \\ \mathbf{H}(t) \\ \end{matrix} % \right ) \, , && \left ( % \begin{matrix} \mathbf{E}(0) \\ \mathbf{H}(0) \\ \end{matrix} % \right ) = \left ( % \begin{matrix} \mathbf{E}^{(0)} \\ \mathbf{H}^{(0)} \\ \end{matrix} % \right ) \in L^2(\R^3,\C^6) \, , \end{align} where the \emph{Maxwell operator} \begin{align} M(\eps,\mu) := W \, \Rot := \left ( % \begin{matrix} \eps^{-1} & 0 \\ 0 & \mu^{-1} \\ \end{matrix} % \right ) \left ( % \begin{matrix} 0 & + \ii \nabla_x^{\times} \\ - \ii \nabla_x^{\times} & 0 \\ \end{matrix} % \right ) \end{align} takes the role of the Schrödinger operator $H = - \Delta_x + V$. It can be conveniently written as the product of the multiplication operator $W$ which contains the \emph{material weights} $\eps$ and $\mu$, and the \emph{free Maxwell operator} $\Rot$. Here, $\nabla_x^{\times}$ is just a short-hand for the curl, $\nabla_x^{\times} \mathbf{E} := \nabla_x \times \mathbf{E}$. The solution can now be expressed just like in the Schrödinger case, \begin{align*} \left ( % \begin{matrix} \mathbf{E}(t) \\ \mathbf{H}(t) \\ \end{matrix} % \right ) = \e^{- \ii t M(\eps,\mu)} \left ( % \begin{matrix} \mathbf{E}^{(0)} \\ \mathbf{H}^{(0)} \\ \end{matrix} % \right ) \, , \end{align*} where the initial conditions must satisfy the no sources condition (equations~\eqref{operators:eqn:source_Maxwell_eqns} for $\rho = 0$ and $j = 0$); one can show that this is enough to ensure that also the time-evolved fields $\mathbf{E}(t)$ and $\mathbf{H}(t)$ satisfy the no sources conditions for all times. 
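For constant $\eps$ and $\mu$ this last claim is easy to verify numerically: after a Fourier transform the curl becomes multiplication by $\ii k \, \times$, and at each fixed wave vector $k$ the dynamical equations reduce to a $6 \times 6$ matrix ODE. The sketch below (our illustration, with made-up constant weights) checks that divergence-free initial data stay divergence-free under the evolution and that the field energy is conserved.

```python
import numpy as np
from scipy.linalg import expm

def cross_matrix(k):
    """Matrix representation of the map v -> k x v."""
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

# hypothetical constant material weights (any positive constants will do)
eps, mu = 2.0, 3.0
k = np.array([1.0, 2.0, -1.0])
K = cross_matrix(k)
Z = np.zeros((3, 3))

# In Fourier space the curl becomes (i k) x, so the dynamical equations read
#   d/dt E_hat = +eps^{-1} (i k) x H_hat ,  d/dt H_hat = -mu^{-1} (i k) x E_hat ,
# i.e. d/dt u = A u with the block matrix A below; u(t) = expm(t A) u(0).
A = np.block([[Z, (1j / eps) * K],
              [(-1j / mu) * K, Z]])

rng = np.random.default_rng(0)
E0 = rng.normal(size=3) + 1j * rng.normal(size=3)
H0 = rng.normal(size=3) + 1j * rng.normal(size=3)
# project onto divergence-free fields: k . E_hat = 0 = k . H_hat
E0 -= (k @ E0) / (k @ k) * k
H0 -= (k @ H0) / (k @ k) * k
u0 = np.concatenate([E0, H0])

ut = expm(2.5 * A) @ u0
E_t, H_t = ut[:3], ut[3:]

# the source-free constraints survive the time evolution ...
assert abs(k @ E_t) < 1e-10 and abs(k @ H_t) < 1e-10
# ... and so does the field energy
energy = lambda u: 0.5 * (eps * np.linalg.norm(u[:3])**2 + mu * np.linalg.norm(u[3:])**2)
assert abs(energy(ut) - energy(u0)) < 1e-9
```

The conservation of $k \cdot \hat{\mathbf{E}}$ and $k \cdot \hat{\mathbf{H}}$ reflects the fact that $k \cdot (k \times v) = 0$ for any $v$.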
Physically, the condition that $\mathbf{E}$ and $\mathbf{H}$ be square-integrable stems from the requirement that the \emph{field energy} \begin{align} \mathcal{E}(\mathbf{E},\mathbf{H}) := \frac{1}{2} \int_{\R^3} \dd x \Bigl ( \eps(x) \, \babs{\mathbf{E}(x)}^2 + \mu(x) \, \babs{\mathbf{H}(x)}^2 \Bigr ) \end{align} be finite. Moreover, the field energy is a \emph{conserved quantity}, \begin{align*} \mathcal{E} \bigl ( \mathbf{E}(t) , \mathbf{H}(t) \bigr ) = \mathcal{E} \bigl ( \mathbf{E}^{(0)} , \mathbf{H}^{(0)} \bigr ) . \end{align*} It is not coincidental that the expression for $\mathcal{E}$ looks like the square of a weighted $L^2$-norm: if we assume that $\eps , \mu \in L^{\infty}(\R^3)$ are bounded away from $0$ and $+\infty$, \ie there exist $c , C > 0$ for which \begin{align*} 0 < c \leq \eps(x) , \mu(x) \leq C < + \infty \end{align*} holds almost everywhere in $x \in \R^3$, then $\eps^{-1}$ and $\mu^{-1}$ are also bounded away from $0$ and $+\infty$ in the above sense. Hence, we deduce \begin{align*} \Psi = (\psi^E , \psi^H) \in L^2(\R^3,\C^6) \; \; \Longleftrightarrow \; \; \Psi \in \Hil(\eps,\mu) := L^2_{\eps}(\R^3,\C^3) \oplus L^2_{\mu}(\R^3,\C^3) \end{align*} where $L^2_{\eps}(\R^3,\C^3)$ and $L^2_{\mu}(\R^3,\C^3)$ are defined analogously to problem~22. By definition, $\Psi$ is an element of $\Hil(\eps,\mu)$ if and only if the norm $\norm{\Psi}_{\Hil(\eps,\mu)}$ induced by the weighted scalar product \begin{align} \scpro{\Psi}{\Phi}_{\Hil(\eps,\mu)} :& \negmedspace= \bscpro{(\psi^E,\psi^H)}{(\phi^E,\phi^H)}_{\Hil(\eps,\mu)} = \scpro{\psi^E}{\phi^E}_{\eps} + \scpro{\psi^H}{\phi^H}_{\mu} \\ :& \negmedspace= \int_{\R^3} \dd x \, \eps(x) \, \overline{\psi^E(x)} \cdot \phi^E(x) + \int_{\R^3} \dd x \, \mu(x) \, \overline{\psi^H(x)} \cdot \phi^H(x) \notag \end{align} is finite. Adapting the arguments from problem~22, we conclude that $L^2(\R^3,\C^6)$ and $\Hil(\eps,\mu)$ can be canonically identified as \emph{Banach} spaces.
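In discretized toy form, the identification of the two spaces is just the two-sided bound coming from $0 < c \leq \eps , \mu \leq C$. A quick sanity check (our illustration, with made-up weights):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# made-up weights, bounded away from 0 and infinity: 1 <= eps, mu <= 2
eps = 1.0 + rng.random(n)
mu = 1.0 + rng.random(n)
c, C = 1.0, 2.0

psi_E = rng.normal(size=n) + 1j * rng.normal(size=n)
psi_H = rng.normal(size=n) + 1j * rng.normal(size=n)

# unweighted and weighted squared norms of Psi = (psi_E, psi_H)
unweighted_sq = np.sum(np.abs(psi_E)**2 + np.abs(psi_H)**2)
weighted_sq = np.sum(eps * np.abs(psi_E)**2 + mu * np.abs(psi_H)**2)

# the two norms are equivalent, hence the spaces carry the same topology
assert c * unweighted_sq <= weighted_sq <= C * unweighted_sq
```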
Now the field energy can be expressed as \begin{align*} \mathcal{E}(\mathbf{E},\mathbf{H}) = \tfrac{1}{2} \, \bnorm{(\mathbf{E},\mathbf{H})}_{\Hil(\eps,\mu)}^2 := \tfrac{1}{2} \bscpro{(\mathbf{E},\mathbf{H})}{(\mathbf{E},\mathbf{H})}_{\Hil(\eps,\mu)} \, , \end{align*} and the conservation of field energy suggests that $\e^{- \ii t M(\eps,\mu)}$ is unitary with respect to the \emph{weighted} scalar product $\scpro{\cdot \,}{\cdot}_{\Hil(\eps,\mu)}$. Indeed, this is the case: the scalar product can alternatively be expressed in terms of $W^{-1}$ and the unweighted scalar product on $L^2(\R^3,\C^6)$, \begin{align} \scpro{\Psi}{\Phi}_{\Hil(\eps,\mu)} &= \bscpro{\Psi}{W^{-1} \Phi}_{L^2(\R^3,\C^6)} = \bscpro{W^{-1} \Psi}{\Phi}_{L^2(\R^3,\C^6)} . \label{operators:eqn:weighted_scalar_product_W_unweighted} \end{align} The last equality holds true, because $W^{\pm 1}$ are multiplication operators with scalar real-valued functions in the electric and magnetic component, and thus \begin{align*} \scpro{\psi^E}{\eps \phi^E}_{L^2(\R^3,\C^3)} = \scpro{\eps \psi^E}{\phi^E}_{L^2(\R^3,\C^3)} \end{align*} holds for all $\psi^E , \phi^E \in L^2(\R^3,\C^3)$, for instance. 
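Identity~\eqref{operators:eqn:weighted_scalar_product_W_unweighted} has an obvious discrete analogue in which $W^{-1}$ is a positive diagonal matrix acting entrywise; the following sketch (our illustration, with made-up weights) confirms that the weight can be moved freely between the two arguments:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
eps = 0.5 + rng.random(n)   # made-up real, positive weights
mu = 0.5 + rng.random(n)
W_inv = np.concatenate([eps, mu])   # W^{-1} = diag(eps, mu), entrywise

Psi = rng.normal(size=2 * n) + 1j * rng.normal(size=2 * n)
Phi = rng.normal(size=2 * n) + 1j * rng.normal(size=2 * n)

# np.vdot conjugates its first argument, matching the convention in the text
weighted = np.sum(W_inv * np.conj(Psi) * Phi)   # the weighted scalar product
right = np.vdot(Psi, W_inv * Phi)               # <Psi, W^{-1} Phi>
left = np.vdot(W_inv * Psi, Phi)                # <W^{-1} Psi, Phi>

assert abs(weighted - right) < 1e-9 and abs(weighted - left) < 1e-9
```

The computation works precisely because the weights are real and scalar, just as argued above.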
Under the assumption that the free Maxwell operator $\Rot$ is selfadjoint on $L^2(\R^3,\C^6)$, one can also show the selfadjointness of the Maxwell operator $M(\eps,\mu) = W \, \Rot$ by using its product structure and equation~\eqref{operators:eqn:weighted_scalar_product_W_unweighted}: \begin{align*} \bscpro{\Psi}{M(\eps,\mu) \, \Phi}_{\Hil(\eps,\mu)} &= \bscpro{\Psi}{W^{-1} \, W \, \Rot \, \Phi}_{L^2(\R^3,\C^6)} = \bscpro{\Psi}{\Rot \, \Phi}_{L^2(\R^3,\C^6)} \\ &= \bscpro{\Rot \, \Psi}{\Phi}_{L^2(\R^3,\C^6)} = \bscpro{W^{-1} \, W \, \Rot \, \Psi}{\Phi}_{L^2(\R^3,\C^6)} \\ & = \bscpro{M(\eps,\mu) \, \Psi}{W^{-1} \, \Phi}_{L^2(\R^3,\C^6)} = \bscpro{M(\eps,\mu) \, \Psi}{\Phi}_{\Hil(\eps,\mu)} \, . \end{align*} These arguments \emph{imply} that $\e^{- \ii t M(\eps,\mu)}$ is unitary with respect to $\scpro{\cdot \,}{\cdot}_{\Hil(\eps,\mu)}$, and thus, we deduce that the dynamics conserve energy, \begin{align*} \mathcal{E} \bigl ( \mathbf{E}(t) , \mathbf{H}(t) \bigr ) &= \frac{1}{2} \, \Bnorm{\e^{- \ii t M(\eps,\mu)} \bigl ( \mathbf{E}^{(0)} , \mathbf{H}^{(0)} \bigr )}_{\Hil(\eps,\mu)}^2 \\ & = \frac{1}{2} \, \Bnorm{\bigl ( \mathbf{E}^{(0)} , \mathbf{H}^{(0)} \bigr )}_{\Hil(\eps,\mu)}^2 = \mathcal{E} \bigl ( \mathbf{E}^{(0)} , \mathbf{H}^{(0)} \bigr ) . \end{align*} Moreover, the formulation of the Maxwell equations as a Schrödinger equation also allows us to prove that the dynamics $\e^{- \ii t M(\eps,\mu)}$ map real-valued fields onto real-valued fields: define complex conjugation $(C \Psi)(x) := \overline{\Psi(x)}$ on $\Hil(\eps,\mu)$ component-wise.
Then the fact that $\eps$ and $\mu$ are real-valued implies $C$ commutes with $W$, \begin{align*} \bigl ( C \, W \, \Psi \bigr )(x) &= \overline{\bigl ( W \, (\psi^E,\psi^H) \bigr )(x)} = \overline{\bigl ( \eps^{-1}(x) \, \psi^E(x) \, , \, \mu^{-1}(x) \, \psi^H(x) \bigr )} \\ &= \Bigl ( \eps^{-1}(x) \, \overline{\psi^E(x)} \, , \, \mu^{-1}(x) \, \overline{\psi^H(x)} \Bigr ) = \bigl ( W \, C \, \Psi \bigr )(x) \, . \end{align*} In problem~23, we have shown that $C \, \Rot \, C = - \Rot$, and thus \begin{align*} C \, M(\eps,\mu) \, C &= C \, W \, \Rot \, C = W \, C \, \Rot \, C \\ &= - W \, \Rot = - M(\eps,\mu) \end{align*} holds just as in the case of the free Maxwell equations. This means the unitary evolution operator and complex conjugation \emph{commute,} \begin{align*} C \, \e^{- \ii t M(\eps,\mu)} \, C &= \e^{+ \ii t \, C \, M(\eps,\mu) \, C} = \e^{- \ii t M(\eps,\mu)} \, , \end{align*} as does the real part operator $\Re := \tfrac{1}{2} \bigl ( \id_{\Hil(\eps,\mu)} + C \bigr )$, \begin{align*} \Bigl [ \e^{- \ii t M(\eps,\mu)} , \Re \Bigr ] &= 0 \, . \end{align*} Now if the initial state $\bigl ( \mathbf{E}^{(0)} , \mathbf{H}^{(0)} \bigr ) = \Re \, \bigl ( \mathbf{E}^{(0)} , \mathbf{H}^{(0)} \bigr )$ is real-valued, then so is the time-evolved state, \begin{align*} \bigl ( \mathbf{E}(t) , \mathbf{H}(t) \bigr ) &= \e^{- \ii t M(\eps,\mu)} \, \bigl ( \mathbf{E}^{(0)} , \mathbf{H}^{(0)} \bigr ) \\ & = \e^{- \ii t M(\eps,\mu)} \, \Re \, \bigl ( \mathbf{E}^{(0)} , \mathbf{H}^{(0)} \bigr ) \\ & = \Re \, \e^{- \ii t M(\eps,\mu)} \, \bigl ( \mathbf{E}^{(0)} , \mathbf{H}^{(0)} \bigr ) \\ & = \Re \, \bigl ( \mathbf{E}(t) , \mathbf{H}(t) \bigr ) .
\end{align*} The reformulation of the Maxwell equations as a Schrödinger-type equation was first made rigorous by \cite{Birman_Solomyak:L2_theory_Maxwell_operator:1987}; it allows one to adapt and apply many of the techniques first developed for the analysis of Schrödinger operators to Maxwell operators, \eg under suitable conditions one can derive ray optics as a “semiclassical limit” of the above equations. \marginpar{2013.11.05} \chapter{The Fourier transform} \label{Fourier} The wave equation, the free Schrödinger equation and the heat equation all admit the same class of “fundamental solutions”, namely exponential functions $\e^{- \ii \xi \cdot x}$. In some cases, the boundary conditions impose restrictions on the admissible values of $\xi$. This is because these equations share a common \emph{symmetry}, namely \emph{invariance under translations} (we will be more precise in Chapters~\ref{Fourier:T:periodic_operators} and \ref{Fourier:R:heat}--\ref{Fourier:R:Schroedinger}), and the Fourier transform converts a \textbf{P}DE into an \textbf{O}DE. Moreover, certain properties of the function are tied to certain properties of the Fourier transform, the most famous being that the regularity of $f$ is linked to the decay rate of $\Fourier f$. \section{The Fourier transform on $\T^n$} \label{Fourier:T} Let us consider the Fourier transform on the torus $\T^n := (\Sone)^n$ which will be identified with $[-\pi,+\pi]^n$. Moreover, we will identify functions on $\T^n$ with $2\pi \Z^n$-periodic functions on $\R^n$ whenever convenient. Now we proceed to define the central notion of this section: \begin{definition}[Fourier transform on $\T^n$]\label{Fourier:T:defn:Fourier_transform} For all $f \in L^1(\T^n)$, we set % \begin{align*} (\Fourier f)(k) := \hat{f}(k) := \frac{1}{(2\pi)^n} \int_{\T^n} \dd x \, \e^{- \ii k \cdot x} \, f(x) \end{align*} % for $k \in \Z^n$. The \emph{Fourier series} is the formal sum % \begin{align} \sum_{k \in \Z^n} \hat{f}(k) \, \e^{+ \ii k \cdot x} .
\label{Fourier:eqn:T_Fourier_series} \end{align} % \end{definition} If all we know is that $f$ is integrable, then the question of whether the Fourier series converges turns out to be surprisingly hard. In fact, Kolmogorov has shown that there exist integrable functions for which the Fourier series diverges for almost all $x \in \T^n$. However, if we have additional information on $f$, \eg if $f \in \Cont^r(\T^n)$ for $r \geq n+1$, then we can show that \eqref{Fourier:eqn:T_Fourier_series} exists as an absolutely convergent sum. \begin{example} To compute the Fourier coefficients for $f(x) = x \in L^1([-\pi,+\pi])$, we need to distinguish the cases $k = 0$, % \begin{align*} (\Fourier x)(0) &= \frac{1}{2\pi} \int_{-\pi}^{+\pi} \dd x \, x = \left [ \frac{1}{4\pi} x^2 \right ]_{-\pi}^{+\pi} = 0 , \end{align*} % and $k \neq 0$, % \begin{align*} (\Fourier x)(k) &= \frac{1}{2\pi} \int_{-\pi}^{+\pi} \dd x \, \e^{- \ii k x} \, x \\ &= \left [ \frac{\ii}{2\pi k} \, x \, \e^{- \ii k x} \right ]_{-\pi}^{+\pi} - \frac{\ii}{2\pi k} \int_{-\pi}^{+\pi} \dd x \, \e^{- \ii k x} \cdot 1 \\ &= (-1)^k \, \frac{\ii}{k} . \end{align*} % Thus, the Fourier coefficients decay like $\nicefrac{1}{\abs{k}}$ for large $\abs{k}$, % \begin{align*} (\Fourier x)(k) &= % \begin{cases} 0 & k = 0 \\ (-1)^k \, \frac{\ii}{k} & k \in \Z \setminus \{ 0 \} \\ \end{cases} % . \end{align*} % We will see later on that this is because $f(x) = x$ has a discontinuity at $x = +\pi$ (which is identified with the point $x = -\pi$). \end{example} Before we continue, it is useful to introduce \emph{multiindex notation}: for any $\alpha \in \N_0^n$, we set \begin{align*} \partial_x^{\alpha} f := \partial_{x_1}^{\alpha_1} \cdots \partial_{x_n}^{\alpha_n} f \end{align*} and similarly $x^{\alpha} := x_1^{\alpha_1} \cdots x_n^{\alpha_n}$. The integer $\abs{\alpha} := \sum_{j = 1}^n \alpha_j$ is the degree of $\alpha$.
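The coefficients computed in the example of $f(x) = x$ above are easy to confirm numerically. The following sketch (our illustration, not part of the original notes) approximates the defining integral with the trapezoidal rule and compares against $(-1)^k \, \ii / k$:

```python
import numpy as np

def fourier_coefficient(f, k, num=200_001):
    """Approximate (F f)(k) = (1/2pi) * integral over [-pi, pi] of
    e^{-ikx} f(x) dx with the trapezoidal rule on a uniform grid."""
    x = np.linspace(-np.pi, np.pi, num)
    y = np.exp(-1j * k * x) * f(x)
    h = x[1] - x[0]
    integral = h * (y.sum() - 0.5 * (y[0] + y[-1]))
    return integral / (2 * np.pi)

# k = 0: the integrand x is odd, so the coefficient vanishes
assert abs(fourier_coefficient(lambda x: x, 0)) < 1e-8

# k != 0: the coefficients are (-1)^k * i / k
for k in [1, 2, 3, -5, 10]:
    exact = (-1) ** k * 1j / k
    assert abs(fourier_coefficient(lambda x: x, k) - exact) < 1e-5
```

The slow $\nicefrac{1}{\abs{k}}$ decay caused by the jump at $x = \pm \pi$ is clearly visible in the computed values.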
\subsection{Fundamental properties} \label{Fourier:T:fundamentals} First, we will enumerate various fundamental properties of the Fourier transform on $\T^n$: \begin{proposition}[Fundamental properties of $\Fourier$]\label{Fourier:T:prop:fundamentals} Let $f \in L^1(\T^n)$. % \begin{enumerate}[(i)] \item $\Fourier : L^1(\T^n) \longrightarrow \ell^{\infty}(\Z^n)$ \item $\Fourier$ is linear. \item $\Fourier \bar{f}(k) = \overline{(\Fourier f)(-k)}$ \item $\bigl ( \Fourier f(- \, \cdot \,) \bigr )(k) = (\Fourier f)(-k)$ \item $\bigl ( \Fourier (T_y f) \bigr )(k) = \e^{- \ii k \cdot y} \, (\Fourier f)(k)$ where $(T_y f)(x) := f(x-y)$ for $y \in \T^n$ \item $(\Fourier f)(k - j) = \bigl ( \Fourier (\e^{+ \ii j \cdot x} f) \bigr )(k)$, $j \in \Z^n$ \item For all $f \in \Cont^r(\T^n)$, we have $\bigl ( \Fourier (\partial_x^{\alpha} f) \bigr )(k) = \ii^{\abs{\alpha}} \, k^{\alpha} \, (\Fourier f)(k)$ for all $\abs{\alpha} \leq r$. \end{enumerate} % \end{proposition} \begin{proof} (i) can be deduced from the estimate % \begin{align*} \bnorm{\Fourier f}_{\ell^{\infty}(\Z^n)} &= \sup_{k \in \Z^n} \babs{\Fourier f(k)} \\ & \leq \sup_{k \in \Z^n} \frac{1}{(2\pi)^n} \int_{\T^n} \dd x \, \babs{\e^{- \ii k \cdot x} \, f(x)} = (2\pi)^{-n} \, \snorm{f}_{L^1(\T^n)} . \end{align*} % (ii)--(vi) follow from direct computation. For (vii), we note that continuous functions on $\T^n$ are also integrable, and thus we see that if $f \in \Cont^r(\T^n)$, then also $\partial_x^{\alpha} f \in L^1(\T^n)$ holds for any $\abs{\alpha} \leq r$. This means $\Fourier \bigl ( \partial_x^{\alpha} f \bigr )$ exists, and we obtain by partial integration % \begin{align*} \bigl ( \Fourier ( \partial_x^{\alpha} f ) \bigr )(k) &= \frac{1}{(2\pi)^n} \int_{\T^n} \dd x \, \e^{- \ii k \cdot x} \, \partial_x^{\alpha} f(x) \\ & = \frac{(-1)^{\abs{\alpha}}}{(2\pi)^n} \int_{\T^n} \dd x \, \bigl ( \partial_x^{\alpha} \e^{- \ii k \cdot x} \bigr ) \, f(x) \\ &= \ii^{\abs{\alpha}} \, k^{\alpha} \, (\Fourier f)(k) .
\end{align*} % Note that the periodicity of $f$ and its derivatives implies the boundary terms vanish. This finishes the proof. \end{proof} \begin{example}[$\Fourier$ representation of heat equation] Let us consider the heat equation % \begin{align*} \partial_t u = \Delta_x u \end{align*} % on $[-\pi,+\pi]^n$. We will see that we can write % \begin{align} u(t,x) &= \sum_{k \in \Z^n} \hat{u}(t,k) \, \e^{+ \ii k \cdot x} \label{Fourier:T:eqn:Fourier_series_heat_equation} \end{align} % in terms of Fourier coefficients $\hat{u}(t,k)$. If we \emph{assume} we can interchange taking derivatives and the sum, then this induces an equation involving the coefficients, % \begin{align*} \Fourier \bigl ( \partial_t u \bigr ) = \Fourier \bigl ( \Delta_x u \bigr ) \; \; \Longrightarrow \; \; \partial_t \hat{u}(t,k) &= - k^2 \hat{u}(t,k) . \end{align*} % And \emph{if} the sum \eqref{Fourier:T:eqn:Fourier_series_heat_equation} converges to an integrable function $u(t)$, then clearly % \begin{align*} (\Fourier u)(t,k) &= \hat{u}(t,k) \end{align*} % holds. \end{example} We have indicated before that it is not at all clear in what sense the Fourier series~\eqref{Fourier:eqn:T_Fourier_series} exists. The simplest type of convergence is \emph{absolute} convergence, and here Dominated Convergence gives a sufficient condition under which we can interchange taking limits and summation. This helps to prove that the Fourier series yields a continuous or $\Cont^r$ function if the Fourier coefficients decay fast enough. \begin{lemma}[Dominated convergence for sums]\label{Fourier:T:lem:dominated_convergence} Let $a^{(j)} \in \ell^1(\Z^n)$ be a sequence of absolutely summable sequences so that the pointwise limits $\lim_{j \to \infty} a^{(j)}(k) = a(k)$ exist. Moreover, assume there exists a non-negative sequence $b = \bigl ( b(k) \bigr )_{k \in \Z^n} \in \ell^1(\Z^n)$ so that % \begin{align*} \babs{a^{(j)}(k)} \leq b(k) \end{align*} % holds for all $k \in \Z^n$ and $j \in \N$. 
Then summing over $k \in \Z^n$ and taking the limit $\lim_{j \to \infty}$ commute, \ie % \begin{align*} \lim_{j \to \infty} \sum_{k \in \Z^n} a^{(j)}(k) = \sum_{k \in \Z^n} \lim_{j \to \infty} a^{(j)}(k) \, , \end{align*} % and $a = \bigl ( a(k) \bigr )_{k \in \Z^n} \in \ell^1(\Z^n)$. \end{lemma} \begin{proof} Let $\eps > 0$. Then we deduce from the triangle inequality and $\babs{a^{(j)}(k)} \leq b(k)$, $\babs{a(k)} \leq b(k)$ that % \begin{align*} \abs{\sum_{k \in \Z^n} a^{(j)}(k) - \sum_{k \in \Z^n} a(k)} &\leq \sum_{\abs{k} \leq N} \babs{a^{(j)}(k) - a(k)} + \sum_{\abs{k} > N} \babs{a^{(j)}(k) - a(k)} \\ &\leq \sum_{\abs{k} \leq N} \babs{a^{(j)}(k) - a(k)} + 2 \, \sum_{\abs{k} > N} b(k) . \end{align*} % If we choose $N \in \N_0$ large enough, we can estimate the second term \emph{independently of $j$} and make it less than $\nicefrac{\eps}{2}$. The first term is a \emph{finite} sum converging to $0$, and hence, we can make it $< \nicefrac{\eps}{2}$ if we choose $j \geq K$ large enough. Hence, % \begin{align*} \abs{\sum_{k \in \Z^n} a^{(j)}(k) - \sum_{k \in \Z^n} a(k)} < \nicefrac{\eps}{2} + \nicefrac{\eps}{2} = \eps \end{align*} % holds for $j \geq K$. Moreover, $a = \bigl ( a(k) \bigr )_{k \in \Z^n}$ is absolutely summable since $\babs{a(k)} \leq b(k)$ and $b = \bigl ( b(k) \bigr )_{k \in \Z^n}$ is in $\ell^1(\Z^n)$. \end{proof} \begin{corollary}[Continuity, smoothness and decay properties]\label{Fourier:T:prop:continuity_smoothness_decay} % \begin{enumerate}[(i)] \item Assume the Fourier coefficients $\Fourier f \in \ell^1(\Z^n)$ of $f \in L^1(\T^n)$ are absolutely summable. Then % \begin{align*} f(x) &= \sum_{k \in \Z^n} (\Fourier f)(k) \, \e^{+ \ii k \cdot x} \end{align*} % holds almost everywhere and $f$ has a continuous representative. \item Assume the Fourier coefficients of $f \in L^1(\T^n)$ are such that $\bigl ( \abs{k}^s \hat{f}(k) \bigr )_{k \in \Z^n}$ is absolutely summable for some $s \in \N$.
Then % \begin{align*} \partial_x^{\alpha} f(x) &= \sum_{k \in \Z^n} \ii^{\abs{\alpha}} \, k^{\alpha} \, (\Fourier f)(k) \, \e^{+ \ii k \cdot x} \end{align*} % holds almost everywhere for all $\abs{\alpha} \leq s$ and $f$ has a $\Cont^s(\T^n)$ representative. Moreover, the Fourier series of $\partial_x^{\alpha} f$, $\abs{\alpha} \leq s$, exist as absolutely convergent sums. \end{enumerate} % \end{corollary} \begin{proof} % \begin{enumerate}[(i)] \item This follows from Dominated Convergence, observing that % \begin{align*} b(k) := \babs{\hat{f}(k) \, \e^{+ \ii k \cdot x}} = \babs{\hat{f}(k)} \end{align*} % is a summable sequence which dominates each term of the sum on the right-hand side of \eqref{Fourier:eqn:T_Fourier_series} independently of $x \in \T^n$. \item For any multiindex $\alpha \in \N_0^n$ with $\abs{\alpha} \leq s$, we estimate each term in the sum from above by $\babs{\hat{f}(k)}$ times % \begin{align*} \Babs{\partial_x^{\alpha} \bigl ( \e^{+ \ii k \cdot x} \bigr )} &= \babs{k^{\alpha} \, \e^{+ \ii k \cdot x}} = \babs{k^{\alpha}} \\ &\leq C \, \sabs{k}^{\abs{\alpha}} \leq C \, \sabs{k}^s . \end{align*} % By assumption, $\bigl ( \sabs{k}^s \hat{f}(k) \bigr )_{k \in \Z^n}$ and thus also $\bigl ( \sabs{k}^{\abs{\alpha}} \hat{f}(k) \bigr )_{k \in \Z^n}$, $\abs{\alpha} \leq s$, are elements of $\ell^1(\Z^n)$, and hence, we have found the sum which dominates the right-hand side of % \begin{align*} \partial_x^{\alpha} f(x) &= \sum_{k \in \Z^n} \ii^{\abs{\alpha}} \, k^{\alpha} \, \hat{f}(k) \, \e^{+ \ii k \cdot x} \end{align*} % for all $x \in \T^n$. Thus, a Dominated Convergence argument implies we can interchange differentiation with summation and that the sum depends on $x$ in a continuous fashion. \end{enumerate} % \end{proof} In what follows, we will need a multiplication, the convolution, defined on $L^1(\T^n)$ and $\ell^1(\Z^n)$.
The convolution $\ast$ is intrinsically linked to the Fourier transform: similar to the case of $\R^n$, we define \begin{align*} (f \ast g)(x) := \int_{\T^n} \dd y \, f(x-y) \, g(y) . \end{align*} where we have used the identification between periodic functions on $\R^n$ and functions on $\T^n$. Moreover, a straightforward modification to the arguments in problem~16 show that \begin{align} \norm{f \ast g}_{L^1(\T^n)} \leq \norm{f}_{L^1(\T^n)} \, \norm{g}_{L^1(\T^n)} . \label{Fourier:eqn:convolution_Tn} \end{align} There is also a convolution on $\ell^1(\Z^n)$: for any two sequences $a , b \in \ell^1(\Z^n)$, we set \begin{align} (a \ast b)(k) := \sum_{j \in \Z^n} a(k-j) \, b(j) . \label{Fourier:eqn:convolution_Zn} \end{align} More careful arguments allow one to generalize the convolution as a map between different spaces, \eg $\ast : L^1(\T^n) \times L^p(\T^n) \longrightarrow L^p(\T^n)$. The Fourier transform intertwines pointwise multiplication of functions or Fourier coefficients with the convolution, a fact that will be eminently useful in applications. \begin{proposition} % \begin{enumerate}[(i)] \item $f , g \in L^1(\T^n)$ $\; \Longrightarrow \;$ $\Fourier (f \ast g) = (2\pi)^n \, \Fourier f \, \Fourier g$ \item $f , g \in L^1(\T^n)$, $\hat{f} , \hat{g} \in \ell^1(\Z^n)$ $\; \Longrightarrow \;$ $\displaystyle \sum_{k \in \Z^n} \bigl ( \Fourier f \ast \Fourier g \bigr )(k) \, \e^{+ \ii k \cdot x} = f(x) \, g(x)$ \end{enumerate} % \end{proposition} \begin{proof} % \begin{enumerate}[(i)] \item The convolution of two $L^1(\T^n)$ functions is integrable, and thus, the Fourier transform of $f \ast g$ exists.
A quick computation yields the claim: % \begin{align*} \bigl ( \Fourier (f \ast g) \bigr )(k) &= \frac{1}{(2\pi)^n} \int_{\T^n} \dd x \, \e^{- \ii k \cdot x} \, (f \ast g)(x) \\ &= \frac{1}{(2\pi)^n} \int_{\T^n} \dd x \, \int_{\T^n} \dd y \, \e^{- \ii k \cdot x} \, f(x-y) \, g(y) \\ &= \frac{1}{(2\pi)^n} \int_{\T^n} \dd x \, \int_{\T^n} \dd y \, \e^{- \ii k \cdot (x-y)} \, f(x-y) \, \e^{- \ii k \cdot y} \, g(y) \\ &= (2\pi)^n \, (\Fourier f)(k) \, (\Fourier g)(k) . \end{align*} % \item By assumption on the Fourier series of $f$ and $g$, the sequence $\hat{f} \ast \hat{g}$ is absolutely summable, and hence % \begin{align*} \sum_{k \in \Z^n} \bigl ( \hat{f} \ast \hat{g} \bigr )(k) \, \e^{+ \ii k \cdot x} &= \sum_{k , j \in \Z^n} \hat{f}(k-j) \, \hat{g}(j) \, \e^{+ \ii (k-j) \cdot x} \, \e^{+ \ii j \cdot x} \\ &= \Biggl ( \sum_{k \in \Z^n} \hat{f}(k) \e^{+ \ii k \cdot x} \Biggr ) \, \Biggl ( \sum_{j \in \Z^n} \hat{g}(j) \e^{+ \ii j \cdot x} \Biggr ) \end{align*} % exists for all $x \in \T^n$. We will show later in Proposition~\ref{Fourier:thm:T_Fourier_inversion} that for almost all $x \in \T^n$, the sum $\sum_{k \in \Z^n} \hat{f}(k) \, \e^{+ \ii k \cdot x}$ equals $f(x)$ and similarly for $g$. \end{enumerate} % \end{proof} One helpful fact in applications is that the $L^p(\T^n)$ spaces are nested (\cf Section~\ref{Fourier:R:Lp}): \begin{lemma}\label{Fourier:T:lem:nesting_Lp_spaces} $L^q(\T^n) \subseteq L^p(\T^n)$ for $1 \leq p \leq q \leq +\infty$ \end{lemma} That means it suffices to define $\Fourier$ on $L^1(\T^n)$; life on $\R^n$ is not so simple, though. \begin{proof} We content ourselves with a sketch: the main idea is to split the integral of $f$ into the regions where $\abs{f} \leq 1$ and where $\abs{f} > 1$, and then use the compactness of $\T^n$.
\marginpar{2013.11.07} \end{proof} \subsection{Approximating Fourier series by trigonometric polynomials} \label{Fourier:T:trig_pol} The idea of the Fourier series is to approximate $L^1(\T^n)$ functions by \begin{definition}[Trigonometric polynomials] A trigonometric polynomial on $\T^n$ is a function of the form % \begin{align*} P(x) = \sum_{k \in \Z^n} a(k) \, \e^{+ \ii k \cdot x} \end{align*} % where $\{ a(k) \}_{k \in \Z^n}$ is a finitely supported sequence on $\Z^n$. The \emph{degree of $P$} is the largest number $\sum_{j = 1}^n \abs{k_j}$ so that $a(k) \neq 0$ where $k = (k_1, \ldots , k_n)$. We denote the set of trigonometric polynomials by $\mathrm{Pol}(\T^n)$. \end{definition} Writing a function in terms of its Fourier series is initially just an ansatz, \ie we do not know whether the formal sum $\sum_{k \in \Z^n} \hat{f}(k) \, \e^{+ \ii k \cdot x}$ converges in any meaningful way. In some sense, this is akin to approximating functions using the Taylor series: only \emph{analytic} functions can be locally expressed in terms of a power series in $x - x_0$, but a smooth function need not have a convergent or useful Taylor series at a point. The situation is similar for Fourier series: we need \emph{additional conditions on $f$} to ensure that its Fourier series converges in a strong sense (\eg absolute convergence). However, suitable resummations \emph{do} converge, and one particularly convenient way to approximate $f \in L^1(\T^n)$ by trigonometric polynomials is to convolve it with an \begin{definition}[Approximate identity]\label{Fourier:T:defn:approximate_id} An approximate identity or \emph{Dirac sequence} is a family of non-negative functions $(\delta_{\eps})_{\eps \in (0,\eps_0)} \subset L^1(\T^n)$, $\eps_0 > 0$, with the following two properties: % \begin{enumerate}[(i)] \item $\norm{\delta_{\eps}}_{L^1(\T^n)} = 1$ holds for all $\eps \in (0,\eps_0)$.
\item For any $R > 0$ we have $\displaystyle \lim_{\eps \to 0} \int_{\abs{x} \leq R} \dd x \, \delta_{\eps}(x) = 1$. \end{enumerate} % \end{definition} Sometimes the assumption that the $\delta_{\eps}$ are non-negative is dropped. Dirac sequences are also called approximate identities because of the following \begin{theorem}\label{Fourier:T:thm:approximate_id_convolution} Let $(\delta_{\eps})_{\eps \in (0,\eps_0)}$ be an approximate identity. Then for all $f \in L^1(\T^n)$ we have % \begin{align*} \lim_{\eps \to 0} \bnorm{\delta_{\eps} \ast f - f}_{L^1(\T^n)} = 0 . \end{align*} % \end{theorem} The proof is somewhat tedious: one needs to localize $\delta_{\eps}$ close to $0$ and away from $0$, approximate $f$ by a step function near $x = 0$ and use property~(ii) of approximate identities away from $x = 0$. The interested reader may look it up in \cite[Theorem~1.2.19~(i)]{Grafakos:Fourier_analysis:2008}. \medskip \noindent One standard example of an approximate identity in this context is the \emph{Fejér kernel} which is constructed from the \emph{Dirichlet kernel}. In one dimension, the Dirichlet kernel is \begin{align} d_N(x_1) := \frac{1}{2\pi} \, \sum_{\abs{k} \leq N} \e^{+ \ii k x_1} = \frac{1}{2\pi} \, \frac{\sin \bigl ( (2N+1) \, \nicefrac{x_1}{2} \bigr )}{\sin \nicefrac{x_1}{2}} . \end{align} Now the one-dimensional Fejér kernel is the Cesàro mean of $d_N$, \begin{align} f_N(x_1) := \frac{1}{2\pi} \, \sum_{k = -N}^{+N} \left ( 1 - \frac{\abs{k}}{N+1} \right ) \, \e^{+ \ii k x_1} = \frac{1}{2\pi} \, \frac{1}{N+1} \left ( \frac{\sin \bigl ( (N+1) \, \nicefrac{x_1}{2} \bigr )}{\sin \nicefrac{x_1}{2}} \right )^2 .
\end{align} The higher-dimensional Dirichlet and Fejér kernels are then defined as \begin{align} D_N(x) := \prod_{j = 1}^n d_N(x_j) \end{align} and \begin{align} F_N(x) :& \negmedspace= \frac{1}{(N+1)^n} \sum_{k_1 = 0}^N \cdots \sum_{k_n = 0}^N d_{k_1}(x_1) \cdots d_{k_n}(x_n) \\ & = \frac{1}{(2\pi)^n} \, \sum_{\substack{k \in \Z^n \\ \abs{k_j} \leq N}} \left ( \prod_{j = 1}^n \left ( 1 - \frac{\abs{k_j}}{N+1} \right ) \right ) \, \e^{+ \ii k \cdot x} \notag \\ &= \frac{1}{(2\pi)^n} \frac{1}{(N+1)^n} \prod_{j = 1}^n \left ( \frac{\sin \bigl ( (N+1) \, \nicefrac{x_j}{2} \bigr )}{\sin \nicefrac{x_j}{2}} \right )^2 . \end{align} One can see that \begin{lemma} $(F_N)_{N \in \N}$ is an approximate identity. \end{lemma} \begin{proof} Since $F_N$ is the product of one-dimensional Fejér kernels and all the integrals factor, it suffices to consider the one-dimensional case: we note that $f_N$ is non-negative. Then the fact that $\int_{-\pi}^{+\pi} \dd x \, \e^{+ \ii k x} = 0$ for $k \in \Z \setminus \{ 0 \}$ and $\int_{-\pi}^{+\pi} \dd x \, \e^{0} = 2 \pi$ implies (i). (ii) is equivalent to proving $\lim_{N \to \infty} \int_{\abs{x} > R} \dd x \, f_N(x) = 0$. But this follows from writing $f_N$ in terms of sines. \end{proof} Moreover, if we convolve $f \in L^1(\T^n)$ with $F_N$, then \begin{align*} (F_N \ast f)(x) &= \sum_{\substack{k \in \Z^n \\ \abs{k_j} \leq N}} \left ( 1 - \frac{\abs{k_1}}{N+1} \right ) \cdots \left ( 1 - \frac{\abs{k_n}}{N+1} \right ) \, \hat{f}(k) \, \e^{+ \ii k \cdot x} \end{align*} is a trigonometric polynomial of degree at most $n N$, and thus \begin{proposition}\label{Fourier:T:prop:Pol_dense_Lp} $\mathrm{Pol}(\T^n)$ is dense in $L^p(\T^n)$ for any $1 \leq p < \infty$. \end{proposition} \begin{proof} Since $(F_N)_{N \in \N}$ is an approximate identity and the analogue of Theorem~\ref{Fourier:T:thm:approximate_id_convolution} also holds in $L^p(\T^n)$ for $1 \leq p < \infty$, we have $\lim_{N \to \infty} \bnorm{F_N \ast f - f}_{L^p(\T^n)} = 0$, and thus the trigonometric polynomial $F_N \ast f$ approximates $f$ arbitrarily well in norm.
\end{proof} \subsection{Decay properties of Fourier coefficients} \label{Fourier:T:decay_properties} A fundamental question is to ask about the behavior of the Fourier coefficients for large $\abs{k}$. This is important in many applications, because it may mean that certain bases are more efficient than others. The simplest of these criteria is the \begin{lemma}[Riemann-Lebesgue lemma]\label{Fourier:T:cor:Riemann_Lebesgue} $f \in L^1(\T^n)$ $\Rightarrow$ $\displaystyle \lim_{\abs{k} \to \infty} \hat{f}(k) = 0$ \end{lemma} \begin{proof} By Proposition~\ref{Fourier:T:prop:Pol_dense_Lp}, we can approximate any $f \in L^1(\T^n)$ arbitrarily well by a trigonometric polynomial $P$, \ie for any $\eps > 0$, there exists a $P \in \mathrm{Pol}(\T^n)$ so that $\norm{f - P}_{L^1(\T^n)} < \eps$. Since the Fourier coefficients of $P$ satisfy $\lim_{\abs{k} \to \infty} \hat{P}(k) = 0$ (only finitely many are non-zero), this also implies that the Fourier coefficients of $f$ satisfy % \begin{align*} 0 \leq \limsup_{\abs{k} \to \infty} \babs{\hat{f}(k)} \leq \limsup_{\abs{k} \to \infty} \Bigl ( \babs{\hat{f}(k) - \hat{P}(k)} + \babs{\hat{P}(k)} \Bigr ) < \eps + 0 = \eps . \end{align*} % Since $\eps$ can be chosen arbitrarily small, the above in fact implies $\displaystyle \lim_{\abs{k} \to \infty} \hat{f}(k) = 0$. \end{proof} \begin{proposition}\label{Fourier:T:prop:F_injective} $\Fourier : L^1(\T^n) \longrightarrow \ell^{\infty}(\Z^n)$ is injective, \ie % \begin{align*} \Fourier f = \Fourier g \; \; \Longleftrightarrow \; \; f = g \in L^1(\T^n) . \end{align*} % \end{proposition} \begin{proof} Given that $\Fourier$ is linear, it suffices to consider the case $f = 0$: “$\Rightarrow$:” Assume $\Fourier f = 0$. Then $F_N \ast f = 0$. Since $\{ F_N \}_{N \in \N}$ is an approximate identity, $0 = F_N \ast f \to f$ as $N \to \infty$, \ie $f = 0$. “$\Leftarrow$:” In case $f = 0$, also all of the Fourier coefficients vanish, $\Fourier f = 0$.
\end{proof} \begin{proposition}[Fourier inversion]\label{Fourier:thm:T_Fourier_inversion} Suppose $f \in L^1(\T^n)$ has an absolutely convergent Fourier series, \ie $\Fourier f \in \ell^1(\Z^n)$, % \begin{align*} \bnorm{\Fourier f}_{\ell^1(\Z^n)} = \sum_{k \in \Z^n} \babs{\hat{f}(k)} < \infty . \end{align*} % Then % \begin{align} f(x) = \sum_{k \in \Z^n} \hat{f}(k) \, \e^{+ \ii k \cdot x} \label{Fourier:T:eqn:f_eq_Fourier_series} \end{align} % holds almost everywhere, and $f$ is almost-everywhere equal to a continuous function. \end{proposition} \begin{proof} Clearly, left- and right-hand side of equation~\eqref{Fourier:T:eqn:f_eq_Fourier_series} have the same Fourier coefficients, and thus, by Proposition~\ref{Fourier:T:prop:F_injective} they are equal as elements of $L^1(\T^n)$. The continuity of the right-hand side follows from Corollary~\ref{Fourier:T:prop:continuity_smoothness_decay}~(i). \end{proof} \begin{theorem}[Regularity $f$ $\leftrightarrow$ decay $\Fourier f$]\label{Fourier:T:thm:decay_properties} % \begin{enumerate}[(i)] \item Let $s \in \N_0$, $\delta \in (0,1)$ and assume that the Fourier coefficients of $f \in L^1(\T^n)$ decay as % \begin{align} \babs{\hat{f}(k)} \leq C (1 + \abs{k})^{-n-s-\delta} . \label{Fourier:T:eqn:decay_assumption_Fourier_coefficients} \end{align} % Then $f \in \Cont^s(\T^n)$. \item The Fourier coefficients of $f \in \Cont^s(\T^n)$ satisfy $\displaystyle \lim_{\abs{k} \to \infty} \bigl ( \sabs{k}^r \, \hat{f}(k) \bigr ) = 0$ for $r \leq s$. \item $f \in \Cont^{\infty}(\T^n)$ holds if and only if for all $r \geq 0$ there exists $C_r > 0$ such that % \begin{align*} \babs{\hat{f}(k)} \leq C_r \, \bigl ( 1 + \abs{k} \bigr )^{-r} . 
\end{align*} % \end{enumerate} % \end{theorem} \begin{proof} % \begin{enumerate}[(i)] \item The decay assumption \eqref{Fourier:T:eqn:decay_assumption_Fourier_coefficients} ensures that $\bigl ( \ii^{\abs{\alpha}} \, k^{\alpha} \hat{f}(k) \bigr )_{k \in \Z^n}$ is absolutely summable if $\abs{\alpha} \leq s$, and thus, by Proposition~\ref{Fourier:T:prop:fundamentals}~(vii) and \ref{Fourier:thm:T_Fourier_inversion} left- and right-hand side of % \begin{align*} \partial_x^{\alpha} f(x) &= \sum_{k \in \Z^n} \ii^{\abs{\alpha}} \, k^{\alpha} \hat{f}(k) \, \e^{+ \ii k \cdot x} \end{align*} % are equal and continuous in $x$. \item Clearly, $\abs{k}^r \leq \abs{k}^s$ holds for $r \leq s$ and $k \in \Z^n$. Moreover, $\partial^{\alpha}_x f \in L^1(\T^n)$ for $\abs{\alpha} \leq s$ with Fourier coefficients $\ii^{\abs{\alpha}} \, k^{\alpha} \, \hat{f}(k)$, and thus, the Riemann-Lebesgue lemma~\ref{Fourier:T:cor:Riemann_Lebesgue} applied to the derivatives implies $\lim_{\abs{k} \to \infty} \bigl ( \sabs{k}^r \, \hat{f}(k) \bigr ) = 0$. \item “$\Rightarrow$:” If $f \in \Cont^{\infty}(\T^n)$, then (ii) implies that $\lim_{\abs{k} \to \infty} \bigl ( \sabs{k}^r \, \hat{f}(k) \bigr ) = 0$ holds for all $r \geq 0$. “$\Leftarrow$:” Conversely, if $\hat{f}(k)$ decays faster than any polynomial, then for each $s \in \N_0$ we may apply (i) with $r = n + s + 1$, which yields $f \in \Cont^{s}(\T^n)$; since $s$ was arbitrary, $f \in \Cont^{\infty}(\T^n)$. \marginpar{2013.11.14} \end{enumerate} % \end{proof} \subsection{The Fourier transform on $L^2(\T^n)$} \label{Fourier:T:L2} We will dedicate a little more time to the example of the free Schrödinger equation on $\T^n \cong [-\pi,+\pi]^n$, \begin{align*} \ii \partial_t \psi = - \Delta_x \psi \, , \end{align*} where we equip $\Delta_x := \partial_{x_1}^2 + \ldots + \partial_{x_n}^2$ with periodic boundary conditions.
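A short computation shows why Fourier series are the natural tool here: each exponential $\e^{+ \ii k \cdot x}$, $k \in \Z^n$, satisfies the periodic boundary conditions and is an eigenfunction of the Laplacian,
\begin{align*}
	- \Delta_x \e^{+ \ii k \cdot x} = - \sum_{j = 1}^n \partial_{x_j}^2 \e^{+ \ii k \cdot x} = \sum_{j = 1}^n k_j^2 \, \e^{+ \ii k \cdot x} = k^2 \, \e^{+ \ii k \cdot x} ,
\end{align*}
\ie with respect to the family $\{ \e^{+ \ii k \cdot x} \}_{k \in \Z^n}$ the operator $- \Delta_x$ acts by multiplication with $k^2$.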
If we can show that $\{ \e^{+ \ii k \cdot x} \}_{k \in \Z^n}$ is an orthonormal \emph{basis}, then any $\psi \in L^2(\T^n)$ has a \emph{unique} Fourier expansion \begin{align} \psi(x) &= \sum_{k \in \Z^n} \widehat{\psi}(k) \, \e^{+ \ii k \cdot x} \label{Fourier:T:eqn:Fourier_expansion} \end{align} where the sum converges in the $L^2$-sense and \begin{align*} \widehat{\psi}(k) = \bscpro{\e^{+ \ii k \cdot x}}{\psi}_{L^2(\T^n)} = \frac{1}{(2\pi)^n} \int_{\T^n} \dd x \, \e^{- \ii k \cdot x} \, \psi(x) \end{align*} is the $k$th Fourier coefficient. Note that we have normalized the scalar product \begin{align*} \scpro{f}{g}_{L^2(\T^n)} := \frac{1}{(2\pi)^n} \int_{\T^n} \dd x \, \overline{f(x)} \, g(x) \end{align*} so that the $\e^{+ \ii k \cdot x}$ have $L^2$-norm $1$. Lemma~\ref{Fourier:T:lem:nesting_Lp_spaces} tells us that $\widehat{\psi} = \Fourier \psi$ is well-defined, because $\psi \in L^2(\T^n) \subset L^1(\T^n)$. Hence, if $\{ \e^{+ \ii k \cdot x} \}_{k \in \Z^n}$ is a basis, any $L^2(\T^n)$ function can be expressed as a Fourier series. \begin{lemma} $\{ \e^{+ \ii k \cdot x} \}_{k \in \Z^n}$ is an orthonormal basis of $L^2(\T^n)$. \end{lemma} \begin{proof} The orthonormality of $\{ \e^{+ \ii k \cdot x} \}_{k \in \Z^n}$ follows from a straightforward calculation analogous to the one-dimensional case. To see that the orthonormal set is in fact a basis, suppose $\psi \in L^2(\T^n)$ is orthogonal to all of the $\e^{+ \ii k \cdot x}$; then all of its Fourier coefficients vanish, \ie $\Fourier \psi = 0$. The injectivity of $\Fourier : L^2(\T^n) \subset L^1(\T^n) \longrightarrow \ell^{\infty}(\Z^n)$ (Proposition~\ref{Fourier:T:prop:F_injective}) then implies $\psi = 0$. Hence, $\{ \e^{+ \ii k \cdot x} \}_{k \in \Z^n}$ is a basis. \end{proof} \begin{proposition}\label{Fourier:R:prop:Parseval_Plancherel} Let $f , g \in L^2(\T^n)$.
Then the following holds true: % \begin{enumerate}[(i)] \item \emph{Parseval's identity:} $\displaystyle \scpro{f}{g}_{L^2(\T^n)} = \scpro{\Fourier f}{\Fourier g}_{\ell^2(\Z^n)}$ \item \emph{Plancherel's identity:} $\displaystyle \snorm{f}_{L^2(\T^n)} = \snorm{\Fourier f}_{\ell^2(\Z^n)}$ \item $\Fourier : L^2(\T^n) \longrightarrow \ell^2(\Z^n)$ is unitary. \end{enumerate} % \end{proposition} \begin{proof} (i) follows from the fact that $\{ \e^{+ \ii k \cdot x} \}_{k \in \Z^n}$ is an orthonormal basis and that the coefficients of the basis expansion % \begin{align*} \bscpro{\e^{+ \ii k \cdot x}}{\psi}_{L^2(\T^n)} = (\Fourier \psi)(k) \end{align*} % coincide with the Fourier coefficients. (ii) and (iii) are immediate consequences of (i). \end{proof} \subsection{Periodic operators} \label{Fourier:T:periodic_operators} An important class of operators consists of those which commute with translations, \begin{align*} \bigl [ H , T_y \bigr ] = H \, T_y - T_y \, H = 0 , \qquad \qquad \forall y \in \T^n , \end{align*} because this symmetry implies the exponential functions $\e^{+ \ii k \cdot x}$ are eigenfunctions of $H$. \begin{theorem} Suppose that $H : L^p(\T^n) \longrightarrow L^q(\T^n)$, $1 \leq p,q \leq \infty$, is a bounded linear operator which commutes with translations. Then there exists $\{ h(k) \}_{k \in \Z^n} \in \ell^{\infty}(\Z^n)$ so that % \begin{align} (H f)(x) = \sum_{k \in \Z^n} h(k) \, \hat{f}(k) \, \e^{+ \ii k \cdot x} \label{Fourier:T:thm:periodic_op} \end{align} % holds for all $f \in \Cont^{\infty}(\T^n)$. Moreover, we have $\bnorm{\bigl ( h(k) \bigr )_{k \in \Z^n}}_{\ell^{\infty}(\Z^n)} \leq \norm{H}_{\mathcal{B}(L^p(\T^n),L^q(\T^n))}$.
\end{theorem} An important class of examples are \emph{differential operators}, \begin{align*} H := \sum_{\substack{a \in \N_0^n \\ \abs{a} \leq N}} \beta(a) \, \partial_x^a , && \beta(a) \in \C , \end{align*} whose eigenvalues are \begin{align*} H \e^{+ \ii k \cdot x} &= \biggl ( \sum_{\substack{a \in \N_0^n \\ \abs{a} \leq N}} \beta(a) \, \ii^{\abs{a}} \, k^a \biggr ) \, \e^{+ \ii k \cdot x} = h(k) \, \e^{+ \ii k \cdot x} . \end{align*} The most famous example so far was $- \Delta_x$ whose eigenvalues are $k^2$. \begin{proof} We already know that the Fourier series of $f \in \Cont^{\infty}(\T^n)$ converges absolutely, and its Fourier coefficients decay faster than any power of $\abs{k}$ (Theorem~\ref{Fourier:T:thm:decay_properties}). So consider the functions $\varphi_k(x) := \e^{+ \ii k \cdot x}$, $k \in \Z^n$. The exponential functions are eigenfunctions of the translation operator, % \begin{align*} (T_y \varphi_k)(x) &= \varphi_k(x-y) = \e^{+ \ii k \cdot (x-y)} \\ & = \e^{- \ii k \cdot y} \, \varphi_k(x) , \end{align*} % and thus, the fact that $H$ commutes with translations implies % \begin{align*} \bigl ( T_y H \varphi_k \bigr )(x) &= (H \varphi_k)(x - y) = \bigl ( H (T_y \varphi_k) \bigr )(x) \\ & = \e^{- \ii k \cdot y} \, (H \varphi_k)(x) = \varphi_k(-y) \, (H \varphi_k)(x) . \end{align*} % Now writing $x = y - (y - x)$ and interchanging the roles of $x$ and $y$ yields that $\varphi_k(x) = \e^{+ \ii k \cdot x}$ is an eigenfunction of $H$, % \begin{align*} (H \varphi_k)(x) &= (H \varphi_k)(x - y + y) = \e^{- \ii k \cdot (y-x)} \, (H \varphi_k)(y) \\ & = \e^{+ \ii k \cdot x} \, \bigl ( \e^{- \ii k \cdot y} \, (H \varphi_k)(y) \bigr ) = \bigl ( \e^{- \ii k \cdot y} \, (H \varphi_k)(y) \bigr ) \, \varphi_k(x) \\ & =: h(k) \, \varphi_k(x) . \end{align*} % It is easy to see that $h(k)$ is in fact independent of the choice of $y \in \T^n$.
The above also means $\abs{h(k)} \leq \norm{H}_{\mathcal{B}(L^p(\T^n),L^q(\T^n))}$ holds for all $k \in \Z^n$, and taking the supremum over $k$ yields $\norm{h}_{\ell^{\infty}(\Z^n)} \leq \norm{H}_{\mathcal{B}(L^p(\T^n),L^q(\T^n))}$. Hence, equation~\eqref{Fourier:T:thm:periodic_op} holds for all $f \in \Cont^{\infty}(\T^n)$, in particular for all trigonometric polynomials. Since those are dense in $L^p(\T^n)$ and $H$ restricted to $\Cont^{\infty}(\T^n)$ is bounded, there exists a unique bounded extension on all of $L^p(\T^n)$ (Theorem~\ref{operators:bounded:thm:extensions_bounded_operators}). \marginpar{2013.11.19} \end{proof} \subsection{Solving quantum problems using $\Fourier$} \label{sub:solving_quantum} The Fourier transform helps to simplify solving quantum problems. The idea is to convert the Hamiltonian to a multiplication operator. We start with a very simple example: \subsubsection{The free Schrödinger equation} \label{Fourier:T:periodic_operators:free_schroedinger} Let us return to the example of the Schrödinger equation: if we denote the free Schrödinger operator in the position representation by $H := - \Delta_x$ acting on $L^2(\T^n)$ and impose periodic boundary conditions (\cf the discussion about the shift operator on the interval in Chapter~\ref{operators:unitary}), then the Fourier transform $\Fourier : L^2(\T^n) \longrightarrow \ell^2(\Z^n)$ connects position and momentum representation, \ie \begin{align} H^{\Fourier} := \Fourier \, H \, \Fourier^{-1} = \hat{k}^2 \label{Fourier:T:eqn:free_Schroedinger_Fourier} \end{align} acts on suitable vectors from $\ell^2(\Z^n)$ by multiplication with $k^2$. The names position and momentum representation originate from physics, because here, the variable $x \in \T^n$ is interpreted as a position while $k \in \Z^n$ is a momentum.
To arrive at \eqref{Fourier:T:eqn:free_Schroedinger_Fourier}, we use that any $\psi \in L^2(\T^n)$ has a Fourier expansion since the orthonormal set $\{ \e^{+ \ii k \cdot x} \}_{k \in \Z^n}$ is also a basis and that the $k$th Fourier coefficient of \begin{align*} - \Delta_x \psi(x) &= - \Delta_x \sum_{k \in \Z^n} \widehat{\psi}(k) \, \e^{+ \ii k \cdot x} = \sum_{k \in \Z^n} k^2 \, \widehat{\psi}(k) \, \e^{+ \ii k \cdot x} \end{align*} is just $k^2 \, \widehat{\psi}(k)$, provided that the sum on the right-hand side is square summable. The latter condition just means that $- \Delta_x \psi$ must exist in $L^2(\T^n)$. Clearly, the solution to the free Schrödinger equation in momentum representation is the multiplication operator \begin{align*} U^{\Fourier}(t) = \e^{- \ii t \hat{k}^2} , \end{align*} and thus we obtain the solution in position representation as well, \begin{align*} U(t) = \Fourier^{-1} \, U^{\Fourier}(t) \, \Fourier . \end{align*} Applied to a vector, this yields \begin{align*} \psi(t) &= \sum_{k \in \Z^n} \e^{- \ii t k^2} \, \widehat{\psi}_0(k) \, \e^{+ \ii k \cdot x} . \end{align*} Note that this sum exists in $L^2(\T^n)$ if and only if the initial state $\psi_0$ is square-integrable. \subsubsection{Tight-binding models in solid state physics} \label{Fourier:T:periodic_operators:tight_binding} Quantum mechanical models are encoded by choosing a hamiltonian and a Hilbert space on which it acts. In the previous section, we have started in a situation where the position variable took values in $\T^n$, \ie wave functions were elements of $L^2(\T^n)$. Tight-binding models are but one example where the position variable is a lattice vector and the wave function $\psi \in \ell^2(\Z^n)$ a square-summable sequence. Tight-binding models describe conduction in semiconductors and insulators: here, the electron may jump from its current position to neighboring atoms with a certain amplitude.
One usually restricts oneself to the case where only the hopping amplitudes to \emph{nearest} and sometimes \emph{next-nearest neighbors} are included: in many cases, one can prove that these hopping amplitudes decay exponentially with the distance, and hence, one only needs to include the leading-order terms. \paragraph{Single-band model} Let us consider a two-dimensional lattice. First of all, we note that the number of nearest neighbors actually depends on the crystal structure. For a simple square lattice, the number of nearest neighbors is $4$ while for a hexagonal lattice, there are only $3$. Let us start with the square lattice: then the hamiltonian \begin{align} H = \id_{\ell^2(\Z^2)} + q_1 \, \mathfrak{s}_1 + q_2 \, \mathfrak{s}_2 + \overline{q_1} \, \mathfrak{s}_1^* + \overline{q_2} \, \mathfrak{s}_2^* = H^* \label{Fourier:T:tight_binding_square} \end{align} which includes only nearest-neighbor hopping with amplitudes $q_1 , q_2 \in \C$ is defined in terms of the \emph{shift operators} \begin{align*} (\mathfrak{s}_j \psi)(\gamma) := \psi(\gamma - e_j) , \qquad \qquad \psi \in \ell^2(\Z^2) . \end{align*} Here, $e_j$ stands for either $e_1 = (1,0)$ or $e_2 = (0,1)$ and $\gamma \in \Z^2$. 
If one sees $\psi(\gamma)$ as the Fourier coefficients for \begin{align*} (\Fourier^{-1} \psi)(k) = \sum_{\gamma \in \Z^2} \psi(\gamma) \, \e^{+ \ii \gamma \cdot k} , \end{align*} then one can see that the shift operator in momentum representation is just the multiplication operator $\mathfrak{s}_j^{\Fourier} := \Fourier^{-1} \mathfrak{s}_j \Fourier = \e^{+ \ii \hat{k}_j}$: \begin{align*} \bigl ( \Fourier^{-1} \mathfrak{s}_j \psi \bigr )(k) &= \sum_{\gamma \in \Z^2} \psi(\gamma - e_j) \, \e^{+ \ii \gamma \cdot k} = \sum_{\gamma \in \Z^2} \psi(\gamma) \, \e^{+ \ii (\gamma + e_j) \cdot k} \\ &= \e^{+ \ii k_j} \, (\Fourier^{-1} \psi)(k) \end{align*} Note that $\mathfrak{s}_j^{\Fourier} = \Fourier^{-1} \mathfrak{s}_j \Fourier$ makes sense as a composition of bounded linear operators and that $\mathfrak{s}_j^{\Fourier} : L^2(\T^2) \longrightarrow L^2(\T^2)$ is again unitary. Hence, the Hamilton operator \eqref{Fourier:T:tight_binding_square} in momentum representation transforms to \begin{align*} H^{\Fourier} &= 1 + q_1 \, \e^{+ \ii \hat{k}_1} + q_2 \, \e^{+ \ii \hat{k}_2} + \overline{q_1} \, \e^{- \ii \hat{k}_1} + \overline{q_2} \, \e^{- \ii \hat{k}_2} \\ &= 1 + 2 \Re \bigl ( q_1 \, \e^{+ \ii \hat{k}_1} \bigr ) + 2 \Re \bigl ( q_2 \, \e^{+ \ii \hat{k}_2} \bigr ) . \end{align*} It turns out that in the absence of magnetic fields, the hopping amplitudes can be chosen to be real, and then $H^{\Fourier}$ becomes the multiplication operator associated to the \emph{band function} \begin{align*} E(k) = 1 + 2 q_1 \, \cos k_1 + 2 q_2 \, \cos k_2 . \end{align*} In other words, the Fourier transform converts an operator of shifts into a multiplication operator. 
That means we can solve the Schrödinger equation in momentum representation, \begin{align*} \ii \partial_t \widehat{\psi}(t) &= H^{\Fourier} \widehat{\psi}(t) , \qquad \qquad \widehat{\psi}(0) = \widehat{\psi}_0 \in L^2(\T^2) , \end{align*} because the unitary evolution group is again just a multiplication operator, \begin{align*} U^{\Fourier}(t) &= \e^{- \ii t E(\hat{k})} . \end{align*} Moreover, the unitary evolution group associated to the Schrödinger equation in position representation \begin{align*} \ii \partial_t \psi(t) = H \psi(t) , \qquad \qquad \psi(0) = \psi_0 \in \ell^2(\Z^2) , \end{align*} is obtained by changing back to the position representation with $\Fourier$, \begin{align*} U(t) &= \Fourier \, U^{\Fourier}(t) \, \Fourier^{-1} \\ &= \Fourier \, \e^{- \ii t E(\hat{k})} \, \Fourier^{-1} . \end{align*} \paragraph{Two-band model} The situation is more interesting and more complicated for hexagonal lattices: here, there are three nearest neighbors and two atoms per unit cell. The following operators have been studied as simplified models for graphene and boron-nitride (see \eg \cite{Hasegawa_Konno_Nakano_Kohomoto:zero_modes_honeycomb:2006,DeNittis_Lein:piezo_graphene:2013}). Here, the relevant Hilbert space is $\ell^2(\Z^2,\C^2)$ where the “internal” $\C^2$ degree of freedom corresponds to the two atoms in the unit cell (black and white atoms in Figure~\ref{Fourier:R:figure:honeycomb}).
Here, nearest neighbors are atoms of a “different color”, and the relevant operator takes the form \begin{align*} H = \left ( % \begin{matrix} 0 & 1_{\ell^2(\Z^2)} + q_1 \, \mathfrak{s}_1 + q_2 \, \mathfrak{s}_2 \\ 1_{\ell^2(\Z^2)} + \overline{q_1} \, \mathfrak{s}_1^* + \overline{q_2} \, \mathfrak{s}_2^* & 0 \\ \end{matrix} % \right ) = H^* , \end{align*} \begin{figure} \hfil\resizebox{80mm}{!}{\includegraphics{./images/honeycomb.pdf}}\hfil \caption{The honeycomb lattice is the superposition of two triangular lattices where the fundamental cell contains two atoms, one black and one white.} \label{Fourier:R:figure:honeycomb} \end{figure} and acts on $\psi = \bigl ( \psi^{(0)} , \psi^{(1)} \bigr ) \in \ell^2(\Z^2,\C^2)$ as \begin{align*} (H \psi)(\gamma) = \left ( % \begin{matrix} \psi^{(1)}(\gamma) + q_1 \, \psi^{(1)}(\gamma - e_1) + q_2 \, \psi^{(1)}(\gamma - e_2) \\ \psi^{(0)}(\gamma) + \overline{q_1} \, \psi^{(0)}(\gamma + e_1) + \overline{q_2} \, \psi^{(0)}(\gamma + e_2) \\ \end{matrix} % \right ) . \end{align*} One can again use the Fourier transform to relate $H$ to a matrix-valued multiplication operator on $L^2(\T^2,\C^2)$, namely \begin{align*} H^{\Fourier} = \left ( % \begin{matrix} 0 & \overline{\varpi(\hat{k})} \\ \varpi(\hat{k}) & 0 \\ \end{matrix} % \right ) = T(\hat{k}) , \end{align*} where we have defined \begin{align*} \varpi(k) = 1 + \overline{q_1} \, \e^{- \ii k_1} + \overline{q_2} \, \e^{- \ii k_2} , \end{align*} and one can conveniently write $T$ in terms of the Pauli matrices as \begin{align*} T(k) &= \Re \bigl ( \varpi(k) \bigr ) \, \sigma_1 + \Im \bigl ( \varpi(k) \bigr ) \, \sigma_2 .
\end{align*} The advantage is that there exist closed formulas for the eigenvalues \begin{align*} E_{\pm}(k) &= \pm \babs{\varpi(k)} \end{align*} of $T(k)$ which are interpreted as the \emph{upper} and \emph{lower band functions} and the two eigenprojections \begin{align*} P_{\pm}(k) &= \frac{1}{2} \left ( \id_{\C^2} \pm \frac{\Re \bigl ( \varpi(k) \bigr ) \, \sigma_1 + \Im \bigl ( \varpi(k) \bigr ) \, \sigma_2}{\abs{\varpi(k)}} \right ) \end{align*} associated to $E_{\pm}(k)$. Now if one wants to solve the associated Schrödinger equation in momentum representation, \begin{align*} \ii \partial_t \widehat{\psi}(t) &= H^{\Fourier} \widehat{\psi}(t) , \qquad \qquad \widehat{\psi}(0) = \widehat{\psi}_0 \in L^2(\T^2,\C^2) , \end{align*} we can express the unitary evolution group in terms of the eigenvalues and projections as \begin{align*} U^{\Fourier}(t) = \e^{- \ii t \abs{\varpi(\hat{k})}} \, P_+(\hat{k}) + \e^{+ \ii t \abs{\varpi(\hat{k})}} \, P_-(\hat{k}) . \end{align*} Also here, the Fourier transform connects the evolution group in momentum and position representation, $U(t) = \Fourier \, U^{\Fourier}(t) \, \Fourier^{-1}$. \marginpar{2013.11.21} \section{The Fourier transform on $\R^n$} \label{Fourier:R} There also exists a Fourier transform on $\R^n$ which is defined analogously to Definition~\ref{Fourier:T:defn:Fourier_transform}. In spirit, the $L^1(\R^n)$ theory is very similar to that of the Fourier series on $\T^n$. \subsection{The Fourier transform on $L^1(\R^n)$} \label{Fourier:R:L1} The Fourier transform on $\R^n$ is first defined on $L^1(\R^n)$ as follows: \begin{definition}[Fourier transform]\label{Fourier:R:defn:Fourier_transform} For any $f \in L^1(\R^n)$, we define its Fourier transform % \begin{align*} (\Fourier f)(\xi) := \hat{f}(\xi) := \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \int_{\R^n} \dd x \, \e^{- \ii \xi \cdot x} \, f(x) . \end{align*} % \end{definition} The prefactor $(2\pi)^{-\nicefrac{n}{2}}$ is a matter of convention.
Our choice is motivated by the fact that $\Fourier$ will define a \emph{unitary} map $L^2(\R^n) \longrightarrow L^2(\R^n)$. \subsubsection{Fundamental properties} \label{Fourier:R:L1:fundamental_properties} Let us begin by investigating some of the fundamental properties. First, just like in the case of the Fourier transform on $\T^n$, the Fourier transform decays at $\infty$. \begin{lemma}[Riemann-Lebesgue lemma]\label{Fourier:R:lem:Riemann_Lebesgue} The Fourier transform of any $f \in L^1(\R^n)$ is an element of $\Cont_{\infty}(\R^n)$, \ie $\Fourier f$ is continuous, bounded and decays at infinity, % \begin{align*} \lim_{\abs{\xi} \to \infty} \Fourier f(\xi) = 0 . \end{align*} % \end{lemma} \begin{proof} The first part, $\Fourier f \in L^{\infty}(\R^n) \cap \Cont(\R^n)$, has already been shown on page~\pageref{spaces:example:Riemann-Lebesgue_half}. It remains to show that $\Fourier f$ decays at infinity. But that follows from the fact that any integrable function can be approximated arbitrarily well by a finite linear combination of characteristic functions % \begin{align*} 1_A(x) := % \begin{cases} 1 & x \in A \\ 0 & x \not\in A \\ \end{cases} % \end{align*} % of bounded cubes $A \subset \R^n$, for which the Fourier transform can be computed explicitly and satisfies $\lim_{\abs{\xi} \to \infty} (\Fourier 1_A)(\xi) = 0$. Thus, $\lim_{\abs{\xi} \to \infty} (\Fourier f)(\xi) = 0$ follows. \end{proof} We now enumerate a few important properties of the Fourier transform. These are tremendously helpful in computations. \begin{proposition}[Fundamental properties of $\Fourier$]\label{Fourier:R:prop:fundamentals} Let $f \in L^1(\R^n)$.
% \begin{enumerate}[(i)] \item $\Fourier \bar{f}(\xi) = \overline{(\Fourier f)(-\xi)}$ \item $\bigl ( \Fourier f(- \, \cdot \,) \bigr )(\xi) = (\Fourier f)(-\xi)$ \item $\bigl ( \Fourier (T_y f) \bigr )(\xi) = \e^{- \ii \xi \cdot y} \, (\Fourier f)(\xi)$ where $(T_y f)(x) := f(x-y)$ for $y \in \R^n$ \item $(\Fourier f)(\xi - \eta) = \bigl ( \Fourier (\e^{+ \ii \eta \cdot x} f) \bigr )(\xi)$ \item $\bigl ( \Fourier (S_{\lambda} f) \bigr )(\xi) = \lambda^n \, (\Fourier f)(\lambda \xi)$ where $(S_{\lambda} f)(x) := f(\nicefrac{x}{\lambda})$, $\lambda > 0$ \item For all $f \in \Cont^r(\R^n)$ with $\partial_x^{\alpha} f \in L^1(\R^n)$, $\abs{\alpha} \leq r$, we have $\bigl ( \Fourier (\partial_x^{\alpha} f) \bigr )(\xi) = \ii^{\abs{\alpha}} \, \xi^{\alpha} \, (\Fourier f)(\xi)$ for all $\abs{\alpha} \leq r$. \end{enumerate} % \end{proposition} \begin{lemma}[Fourier transform of a Gaußian]\label{Fourier:R:lem:Fourier_Gaussian} The Fourier transform of the Gaußian function $g_{\lambda}(x) := \e^{- \frac{\lambda}{2} x^2}$ is % \begin{align*} (\Fourier g_{\lambda})(\xi) = \lambda^{- \nicefrac{n}{2}} \, \e^{- \frac{1}{2 \lambda} \xi^2} = \lambda^{- \nicefrac{n}{2}} \, g_{\nicefrac{1}{\lambda}}(\xi) . \end{align*} % \end{lemma} \begin{proof} By the scaling relation in Proposition~\ref{Fourier:R:prop:fundamentals}, it suffices to prove the Lemma for $\lambda = 1$. Moreover, we may set $n = 1$, because % \begin{align*} g_1(x) = \prod_{j = 1}^n \e^{- \frac{1}{2} x_j^2} \end{align*} % is just the product of one-dimensional Gaußians and the integral defining $\Fourier g_1$ factorizes.
Completing the square, we can express the Fourier transform % \begin{align*} (\Fourier g_1)(\xi) &= \frac{1}{\sqrt{2\pi}} \int_{\R} \dd x \, \e^{- \ii \xi \cdot x} \, \e^{- \frac{1}{2} x^2} \\ &= \frac{1}{\sqrt{2\pi}} \int_{\R} \dd x \, \e^{- \frac{1}{2} \xi^2} \, \e^{- \frac{1}{2} (x + \ii \xi)^2} \\ &= g_1(\xi) \, f(\xi) \end{align*} % as the product of a Gaußian $g_1(\xi)$ with % \begin{align*} f(\xi) &= \frac{1}{\sqrt{2\pi}} \int_{\R} \dd x \, \e^{- \frac{1}{2} (x + \ii \xi)^2} . \end{align*} % A simple limiting argument shows that we can differentiate $f$ under the integral sign as often as we would like, \ie $f \in \Cont^{\infty}(\R)$, and that its first derivative vanishes, % \begin{align*} \frac{\dd}{\dd \xi} f(\xi) &= \frac{1}{\sqrt{2\pi}} \int_{\R} \dd x \, \frac{\dd}{\dd \xi} \Bigl ( \e^{- \frac{1}{2} (x + \ii \xi)^2} \Bigr ) \\ & = \frac{1}{\sqrt{2\pi}} \int_{\R} \dd x \, \bigl ( - \ii \, (x + \ii \xi) \bigr ) \, \e^{- \frac{1}{2} (x + \ii \xi)^2} \\ &= \frac{1}{\sqrt{2\pi}} \int_{\R} \dd x \, \ii \frac{\dd}{\dd x} \Bigl ( \e^{- \frac{1}{2} (x + \ii \xi)^2} \Bigr ) \\ &= \frac{\ii}{\sqrt{2\pi}} \Bigl [ \e^{- \frac{1}{2} (x + \ii \xi)^2} \Bigr ]_{-\infty}^{+\infty} = 0 . \end{align*} % But a smooth function whose first derivative vanishes everywhere is constant, and its value is % \begin{align*} f(0) &= \frac{1}{\sqrt{2\pi}} \int_{\R} \dd x \, \e^{- \frac{1}{2} x^2} = 1 . \end{align*} % \end{proof} The Fourier transform also has an inverse: \marginpar{2013.11.26} \begin{proposition} Assume $f \in L^1(\R^n)$ is such that also its Fourier transform $\hat{f} = \Fourier f$ is integrable. Then for this function, the inverse Fourier transform % \begin{align*} (\Fourier^{-1} \hat{f})(x) &= \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \int_{\R^n} \dd \xi \, \e^{+ \ii \xi \cdot x} \, \hat{f}(\xi) = (\Fourier \hat{f})(-x) = f \end{align*} % agrees with $f \in L^1(\R^n)$. 
\end{proposition} We will postpone the proof until the next subsection, but the idea is that in the sense of distributions (\cf Section~\ref{S_and_Sprime}) one has \begin{align*} \int_{\R^n} \dd x \, \e^{+ \ii \xi \cdot x} = (2\pi)^n \, \delta(\xi) \end{align*} where $\delta$ is the Dirac distribution. A rigorous argument is more involved, though. \subsubsection{The convolution} \label{Fourier:R:L1:convolution} The convolution arises naturally from the group structure (a discussion for another time) and it appears naturally in the discussion, because the Fourier transform intertwines the pointwise product of functions and the convolution (\cf Proposition~\ref{Fourier:R:prop:Fourier_convolution}). \begin{definition}[Convolution] We define the convolution of $f , g \in L^1(\R^n)$ to be % \begin{align*} (f \ast g)(x) := \int_{\R^n} \dd y \, f(x-y) \, g(y) . \end{align*} % \end{definition} We have seen in the exercises that $\ast : L^1(\R^n) \times L^1(\R^n) \longrightarrow L^1(\R^n)$, \ie the convolution of two $L^1$ functions is again integrable. The Fourier transform intertwines the convolution and the pointwise product of functions: \begin{proposition}\label{Fourier:R:prop:Fourier_convolution} $\Fourier (f \ast g) = (2\pi)^{\nicefrac{n}{2}} \, \Fourier f \; \Fourier g$ holds for all $f , g \in L^1(\R^n)$. \end{proposition} \begin{proof} For any $f , g \in L^1(\R^n)$ also their convolution is integrable. Then we obtain the claim from direct computation: % \begin{align*} \bigl ( \Fourier ( f \ast g ) \bigr )(\xi) &= \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \int_{\R^n} \dd x \, \e^{- \ii \xi \cdot x} \, (f \ast g)(x) \\ &= \frac{1}{(2\pi)^{\nicefrac{n}{2}}} \int_{\R^n} \dd x \int_{\R^n} \dd y \, \e^{- \ii \xi \cdot (x-y)} \, \e^{- \ii \xi \cdot y} \, f(x-y) \, g(y) \\ &= (2\pi)^{\nicefrac{n}{2}} \, \Fourier f(\xi) \; \Fourier g(\xi) \end{align*} % \end{proof} Also convolutions on $L^1(\R^n)$ have approximate identities. 
\begin{definition}[Approximate identity]\label{Fourier:R:defn:approximate_id} An approximate identity or \emph{Dirac sequence} is a family of non-negative functions $(\delta_{\eps})_{\eps \in (0,\eps_0)} \subset L^1(\R^n)$, $\eps_0 > 0$, with the following two properties: % \begin{enumerate}[(i)] \item $\norm{\delta_{\eps}}_{L^1(\R^n)} = 1$ holds for all $\eps \in (0,\eps_0)$. \item For any $R > 0$ we have $\displaystyle \lim_{\eps \to 0} \int_{\abs{x} \leq R} \dd x \, \delta_{\eps}(x) = 1$. \end{enumerate} % \end{definition} The name “approximate identity” again derives from the following \begin{theorem}\label{Fourier:R:thm:approximate_id_convolution} Let $(\delta_{\eps})_{\eps \in (0,\eps_0)}$ be an approximate identity. Then for all $f \in L^1(\R^n)$ we have % \begin{align*} \lim_{\eps \to 0} \bnorm{\delta_{\eps} \ast f - f}_{L^1(\R^n)} = 0 . \end{align*} % \end{theorem} The interested reader may look up the proof in \cite[Chapter~2.16]{Lieb_Loss:analysis:2001}. \begin{example} Let $\chi \in L^1(\R^n)$ be a non-negative function normalized to $1$, $\norm{\chi}_{L^1(\R^n)} = 1$. Then one can show that % \begin{align*} \delta_k(x) := k^n \, \chi(k x) \end{align*} % is an approximate identity as $k \to \infty$, \ie one sets $\eps = \nicefrac{1}{k}$. \end{example} With this in mind, we can show that \begin{lemma}\label{Fourier:R:lem:Cinfty_c_dense_L1} $\Cont^{\infty}_{\mathrm{c}}(\R^n)$ is dense in $L^1(\R^n)$. \end{lemma} \begin{proof} We will only sketch the proof: By the linearity of the convolution and the decomposition % \begin{align*} f = \bigl ( f_{\Re +} - f_{\Re -} \bigr ) + \ii \, \bigl ( f_{\Im +} - f_{\Im -} \bigr ) \in L^1(\R^n) , \end{align*} % we may assume $f \geq 0$ without loss of generality. First, we smoothen $f$ by convolving it with a smooth approximate identity, because % \begin{align*} \partial_x^a \bigl ( \delta_k \ast f \bigr ) = (\partial_x^a \delta_k) \ast f \end{align*} % holds as shown in the homework assignments. Clearly, the convolution of two non-negative functions is non-negative.
One may start with $\chi \in \Cont^{\infty}_{\mathrm{c}}(\R^n)$ and then scale it like in the example. To make the support compact, we multiply with a cutoff function, \eg pick a second function $\mu \in \Cont^{\infty}_{\mathrm{c}}(\R^n)$ taking values between $0$ and $1$ which satisfies % \begin{align*} \mu(x) = % \begin{cases} 1 & \abs{x} \leq 1 \\ 0 & \abs{x} \geq 2 \\ \end{cases} % . \end{align*} % Clearly, $\mu_j(x) := \mu(\nicefrac{x}{j}) \xrightarrow{j \to \infty} 1$ almost everywhere in $x$, and thus $\mu_j \, (\delta_k \ast f)$ converges to $\delta_k \ast f$ in $L^1(\R^n)$ as $j \to \infty$ by the Dominated Convergence theorem. Thus, $f_k := \mu_k \, (\delta_k \ast f) \in \Cont^{\infty}_c(\R^n)$ converges to $f$ in $L^1(\R^n)$. \end{proof} \subsection{Decay of Fourier transforms} \label{Fourier:R:decay} We can prove an analog of Theorem~\ref{Fourier:T:thm:decay_properties}. \begin{theorem}[Regularity $f$ $\leftrightarrow$ decay $\Fourier f$]\label{Fourier:R:thm:decay_properties} % \begin{enumerate}[(i)] \item Let $s \in \N_0$, $\delta \in (0,1)$ and assume that the Fourier transform of $f \in L^1(\R^n)$ decays as % \begin{align} \babs{\hat{f}(\xi)} \leq C (1 + \abs{\xi})^{-n-s-\delta} . \label{Fourier:R:eqn:decay_assumption_Fourier_coefficients} \end{align} % Then $f \in \Cont^s(\R^n)$ and $\partial_x^a f \in L^{\infty}(\R^n)$ holds for all $\abs{a} \leq s$. \item Assume $f \in \Cont^s(\R^n)$ is such that all derivatives up to order $s$ are integrable. Then the Fourier transform $\Fourier f$ satisfies $\displaystyle \lim_{\abs{\xi} \to \infty} \bigl ( \sabs{\xi}^r \, \hat{f}(\xi) \bigr ) = 0$ for $r \leq s$. \item $f \in \Cont^{\infty}(\R^n)$, $\partial_x^{\alpha} f \in L^1(\R^n)$ for all $\alpha \in \N_0^n$ holds if and only if for all $r \geq 0$ there exists $C_r > 0$ such that % \begin{align*} \babs{\hat{f}(\xi)} \leq C_r \, \bigl ( 1 + \abs{\xi} \bigr )^{-r} .
\end{align*} % \end{enumerate} % \end{theorem} \begin{proof} % \begin{enumerate}[(i)] \item The decay~\eqref{Fourier:R:eqn:decay_assumption_Fourier_coefficients} implies $\hat{f}$ and $\xi^{\alpha} \, \hat{f}$, $\abs{\alpha} \leq s$, are in fact integrable. Thus, the inverse % \begin{align*} \bigl ( \Fourier^{-1} (\ii^{\abs{\alpha}} \, \xi^{\alpha} \, \hat{f}) \bigr )(x) = \partial_x^{\alpha} (\Fourier \hat{f})(-x) = \partial_x^{\alpha} f \end{align*} % exists as long as $\abs{\alpha} \leq s$ and is an element of $L^{\infty}(\R^n)$ by the Riemann-Lebesgue lemma~\ref{Fourier:R:lem:Riemann_Lebesgue}. \item This is a consequence of Proposition~\ref{Fourier:R:prop:fundamentals} and the Riemann-Lebesgue lemma~\ref{Fourier:R:lem:Riemann_Lebesgue}. \item Just like in the discrete case, this follows immediately from (i) and (ii). \end{enumerate} % \end{proof} \subsection{The Fourier transform on $L^2(\R^n)$} \label{Fourier:R:Lp} The difficulty of \emph{defining} the Fourier transform on $L^p(\R^n)$ spaces is that, in contrast to Lemma~\ref{Fourier:T:lem:nesting_Lp_spaces}, these spaces are not nested. Hence, we will need some preparation. First of all, it is easy to see that the convolution can also be seen as a continuous map \begin{align*} \ast : L^1(\R^n) \times L^p(\R^n) \longrightarrow L^p(\R^n) . \end{align*} Moreover, convolving with a suitable approximate identity is a standard method to regularize $L^p(\R^n)$ functions: \begin{lemma}\label{Fourier:R:lem:approximate_identity_Lp} If $(\delta_{\eps})$ is an approximate identity and $f \in L^p(\R^n)$, $1 \leq p < \infty$, then $\delta_{\eps} \ast f$ converges to $f$ in $L^p(\R^n)$. \end{lemma} The interested reader may look up the proof in \cite[Chapter~2.16]{Lieb_Loss:analysis:2001}. Hence, we can generalize Lemma~\ref{Fourier:R:lem:Cinfty_c_dense_L1} to \begin{lemma} $\Cont^{\infty}_{\mathrm{c}}(\R^n)$ is dense in $L^p(\R^n)$ for $1 \leq p < \infty$.
\end{lemma} An immediate consequence of the Lemma is that $L^1(\R^n) \cap L^p(\R^n)$ is dense in $L^p(\R^n)$, because $\Cont^{\infty}_{\mathrm{c}}(\R^n) \subset L^1(\R^n) \cap L^p(\R^n) \subset L^p(\R^n)$ lies densely in $L^p(\R^n)$. The Gaußian can be conveniently used to define the Fourier transform on $L^2(\R^n)$. \begin{theorem}[Plancherel's theorem]\label{Fourier:R:thm:Parseval_Plancherel} If $f \in L^1(\R^n) \cap L^2(\R^n)$, then $\hat{f}$ is in $L^2(\R^n)$ and has the same $L^2(\R^n)$-norm as $f$, % \begin{align*} \norm{f} = \bnorm{\hat{f}} . \end{align*} % Hence, $f \mapsto \Fourier f$ has a unique continuous extension to a \emph{unitary} map $\Fourier : L^2(\R^n) \longrightarrow L^2(\R^n)$. Moreover, Parseval's formula % \begin{align*} \scpro{f}{g} = \bscpro{\Fourier f}{\Fourier g} \end{align*} % holds for all $f , g \in L^2(\R^n)$. \end{theorem} \begin{proof} To begin with, pick any $f \in L^1(\R^n) \cap L^2(\R^n)$. Then according to the Riemann-Lebesgue lemma~\ref{Fourier:R:lem:Riemann_Lebesgue}, $\Fourier f$ is essentially bounded, and hence % \begin{align} \int_{\R^n} \dd \xi \, \babs{\hat{f}(\xi)}^2 \, \e^{- \frac{\eps}{2} \xi^2} \label{Fourier:R:eqn:Plancherel_proof_lhs} \end{align} % is finite. Since $f \in L^1(\R^n)$ by assumption, the function $f(x) \, \overline{f(y)} \, \e^{- \frac{\eps}{2} \xi^2}$ depending on the three variables $(x,y,\xi)$ is an element of $L^1(\R^{3n})$. 
Then writing out the Fourier transforms in the above integral and integrating over $\xi$, we obtain % \begin{align*} \eqref{Fourier:R:eqn:Plancherel_proof_lhs} &= \frac{1}{(2\pi)^n} \int_{\R^n} \dd \xi \int_{\R^n} \dd x \, \int_{\R^n} \dd y \, \e^{- \ii \xi \cdot (x-y)} \, f(x) \, \overline{f(y)} \, \e^{- \frac{\eps}{2} \xi^2} \\ &= \frac{1}{(2\pi)^n} \int_{\R^n} \dd x \, \int_{\R^n} \dd y \, f(x) \, \overline{f(y)} \, (2\pi)^{\nicefrac{n}{2}} \, \eps^{-\nicefrac{n}{2}} \, \e^{- \frac{1}{2\eps} (x-y)^2} \\ &= \int_{\R^n} \dd y \, \overline{f(y)} \, \bigl ( (2\pi \eps)^{-\nicefrac{n}{2}} \e^{- \frac{1}{2\eps} x^2} \ast f \bigr )(y) . \end{align*} % Then by Lemma~\ref{Fourier:R:lem:approximate_identity_Lp} the function $(2\pi \eps)^{-\nicefrac{n}{2}} \e^{- \frac{1}{2\eps} x^2} \ast f$ converges to $f$ in $L^2$ as $\eps \to 0$, and by the Cauchy-Schwarz inequality the above expression approaches $\norm{f}^2$. On the other hand, the above is equal to \eqref{Fourier:R:eqn:Plancherel_proof_lhs}; moreover, we may interchange integration and limit $\eps \to 0$ in \eqref{Fourier:R:eqn:Plancherel_proof_lhs} by the Monotone Convergence theorem, and thus we have proven $\norm{f} = \bnorm{\hat{f}}$ if $f \in L^1(\R^n) \cap L^2(\R^n)$. By density of $L^1(\R^n) \cap L^2(\R^n)$, this equality extends to all $f \in L^2(\R^n)$ and with the help of Theorem~\ref{operators:bounded:thm:extensions_bounded_operators}, we deduce that the Fourier transform extends to a continuous map $\Fourier : L^2(\R^n) \longrightarrow L^2(\R^n)$. Parseval's formula $\scpro{f}{g} = \bscpro{\Fourier f}{\Fourier g}$ follows from the polarization identity % \begin{align*} \scpro{f}{g} = \frac{1}{2} \Bigl ( \bnorm{f + g}^2 - \ii \, \bnorm{f + \ii \, g}^2 - (1 - \ii) \norm{f}^2 - (1 - \ii) \norm{g}^2 \Bigr ) . \end{align*} % Parseval's formula also implies the unitarity of $\Fourier$. \end{proof} The above Theorem also gives a \emph{definition} of the Fourier transform on $L^2(\R^n)$. 
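Plancherel's theorem can also be observed numerically. The following sketch (the grid, the window, and the use of a discrete Fourier transform as a stand-in for $\Fourier$ are ad hoc choices, not part of the text) compares $\norm{f}$ and $\bnorm{\hat{f}}$ for a Gaussian.

```python
import numpy as np

# Sample a Gaussian f ∈ L¹(ℝ) ∩ L²(ℝ); outside the window f is numerically
# zero, so Riemann sums approximate the integrals well.
n, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
f = np.exp(-x**2 / 2)

# Discrete stand-in for the unitary convention of these notes:
# fhat(ξ_k) ≈ (2π)^{-1/2} Σ_j exp(-i x_j ξ_k) f(x_j) dx.
xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)
f_hat = np.fft.fft(f) * dx * np.exp(-1j * xi * x[0]) / np.sqrt(2 * np.pi)

# Plancherel: ‖f‖_{L²} = ‖fhat‖_{L²} (here both equal π^{1/4}).
dxi = 2 * np.pi / L
norm_f = np.sqrt(np.sum(np.abs(f) ** 2) * dx)
norm_f_hat = np.sqrt(np.sum(np.abs(f_hat) ** 2) * dxi)
print(norm_f, norm_f_hat)
```

In this discretization the two norms agree to machine precision, which is the discrete Parseval identity of the FFT in disguise.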
\marginpar{2013.11.28} \subsection{The solution of the heat equation on $\R^n$} \label{Fourier:R:heat} Just like in the discrete case, the Fourier transform converts the heat equation, a linear \textbf{P}DE, into a linear \textbf{O}DE. This connection explains how to arrive at the solution to the heat equation given in problem~18: we first Fourier transform left- and right-hand side of \begin{align} \partial_t u(t) &= + D \, \Delta_x u(t) , \qquad \qquad u(0) = u_0 \in L^1(\R^n) , \label{Fourier:R:eqn:heat_equation_position_rep} \end{align} and obtain the heat equation in Fourier representation, \begin{align} \partial_t \hat{u}(t) &= - D \, \hat{\xi}^2 \, \hat{u}(t) , \qquad \qquad \hat{u}(0) = \hat{u}_0 \in \Cont_{\infty}(\R^n) . \label{Fourier:R:eqn:heat_equation_Fourier_rep} \end{align} Here, $\hat{u}$ and $\hat{u}_0$ are the Fourier transforms of $u$ and $u_0$, respectively, and $D > 0$ is the diffusion constant. $\hat{\xi}^2$ stands for the multiplication operator associated to the function $\xi \mapsto \xi^2$, \ie we set \begin{align*} \bigl ( \hat{\xi}^2 \hat{u} \bigr )(\xi) = \xi^2 \, \hat{u}(\xi) . \end{align*} Since the Laplacian in Fourier representation is just a multiplication operator, the solution to \eqref{Fourier:R:eqn:heat_equation_Fourier_rep} is \begin{align*} \hat{u}(t) &= \e^{- t D \hat{\xi}^2} \hat{u}_0 \in \Cont_{\infty}(\R^n) . \end{align*} The first factor $\e^{- t D \xi^2}$ is just a Gaußian, hence $\hat{u}(t)$ is integrable for $t > 0$ and its inverse Fourier transform exists. 
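The Fourier-representation solution $\hat{u}(t) = \e^{- t D \hat{\xi}^2} \hat{u}_0$ lends itself to a quick numerical experiment. The sketch below (grid, window, and parameter values are ad hoc choices, and the FFT stands in for $\Fourier$) evolves a Gaussian initial datum and compares the result with the exact solution, a Gaussian whose variance grows from $\sigma_0^2$ to $\sigma_0^2 + 2 D t$.

```python
import numpy as np

# Solve ∂_t u = D Δ_x u via û(t) = exp(-t D ξ²) û(0) on a periodic grid.
n, L, D, t = 2048, 60.0, 0.5, 3.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
sigma0 = 1.0
u0 = np.exp(-x**2 / (2 * sigma0**2)) / np.sqrt(2 * np.pi * sigma0**2)

xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
u_t = np.fft.ifft(np.exp(-t * D * xi**2) * np.fft.fft(u0)).real

# Exact solution: heat flow adds 2 D t to the variance of a Gaussian.
sigma_t2 = sigma0**2 + 2 * D * t
u_exact = np.exp(-x**2 / (2 * sigma_t2)) / np.sqrt(2 * np.pi * sigma_t2)
err = np.max(np.abs(u_t - u_exact))
print(err)
```

Because both the initial datum and the solution are negligible at the boundary of the window, the periodic (FFT) evolution agrees with the free evolution to near machine precision, and the total mass $\norm{u(t)}_{L^1}$ is conserved.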
The solution in position representation is just the inverse Fourier transform applied to $\hat{u}(t)$: Proposition~\ref{Fourier:R:prop:Fourier_convolution} tells us that $u(t)$ can be written as the convolution of the inverse Fourier transform of $\e^{- t D \xi^2}$ with $u_0$, and using Lemma~\ref{Fourier:R:lem:Fourier_Gaussian} we compute \begin{align} u(t) &= \Fourier^{-1} \bigl ( \e^{- t D \hat{\xi}^2} \hat{u}_0 \bigr ) \notag \\ &= (2\pi)^{- \nicefrac{n}{2}} \, \Bigl ( \Fourier^{-1} \bigl ( \e^{- t D \xi^2} \bigr ) \Bigr ) \ast u_0 \notag \\ &= \frac{1}{(4 \pi D t)^{\nicefrac{n}{2}}} \, \e^{- \frac{x^2}{4 D t}} \ast u_0 =: G(t) \ast u_0 . \label{Fourier:R:eqn:fundamental_solution_heat_equation} \end{align} Note that the right-hand side exists in $L^1(\R^n)$ as the convolution of two $L^1(\R^n)$ functions. Moreover, one can show that $\frac{1}{(4 \pi D t)^{\nicefrac{n}{2}}} \, \e^{- \frac{x^2}{4 D t}}$ is a Dirac sequence as $t \searrow 0$ so that $\lim_{t \searrow 0} u(t) = u_0$ holds where the limit is understood in the $L^1(\R^n)$-sense. \paragraph{Uniqueness of solutions to the heat equation} Let us revisit a topic that has not seen much attention up to now, namely whether solutions exist for all times and whether they are unique. \begin{theorem}\label{Fourier:R:thm:uniqueness_solution_heat_equation_L1} The initial value problem \eqref{Fourier:R:eqn:heat_equation_position_rep} has \eqref{Fourier:R:eqn:fundamental_solution_heat_equation} as its \emph{unique} solution if we require $u(t) , \partial_t u(t) \in L^1(\R^n)$ to hold for $t > 0$. \end{theorem} \begin{proof} The arguments preceding this theorem show that $u(t)$ as given by \eqref{Fourier:R:eqn:fundamental_solution_heat_equation} defines \emph{a} solution which is integrable for $t > 0$; moreover, computing $\Delta_x G(t)$ shows that it is also integrable, and hence, $\partial_t u(t) \in L^1(\R^n)$ is also established. 
So assume $\tilde{u}(t)$ is another integrable solution to \eqref{Fourier:R:eqn:heat_equation_position_rep} with $\partial_t \tilde{u}(t) \in L^1(\R^n)$ and define the difference % \begin{align*} g(t) := u(t) - \tilde{u}(t) . \end{align*} % Clearly, this difference vanishes at $t = 0$, \ie $g(0) = 0$. Since $u(t)$ and $\tilde{u}(t)$ as well as their time-derivatives are integrable, also $g(t) , \partial_t g(t) \in L^1(\R^n)$. The Riemann-Lebesgue Lemma~\ref{Fourier:R:lem:Riemann_Lebesgue} implies that the Fourier transform of the difference $\hat{g}(t) := \Fourier g(t)$ is an element of $\Cont_{\infty}(\R^n)$. Hence, equations~\eqref{odes:eqn:abs_ddt_X_geq_ddt_abs_X} and \eqref{Fourier:R:eqn:heat_equation_Fourier_rep} yield % \begin{align*} \frac{\dd}{\dd t} \babs{\hat{g}(t,\xi)} \leq \abs{\frac{\dd}{\dd t} \hat{g}(t,\xi)} = D \, \xi^2 \, \babs{\hat{g}(t,\xi)} , && \forall \xi \in \R^n , \end{align*} % which is then the initial estimate for the Grönwall lemma~\ref{odes:lem:Groenwall}, % \begin{align*} 0 \leq \babs{\hat{g}(t,\xi)} \leq \babs{\hat{g}(0,\xi)} \, \e^{D \int_0^t \dd s \, \xi^2} = \babs{\hat{g}(0,\xi)} \, \e^{D t \xi^2} = 0 . \end{align*} % Since $\hat{g}(t,\xi)$ is continuous in $\xi$, this shows that \eqref{Fourier:R:eqn:fundamental_solution_heat_equation} is the \emph{unique} solution in $L^1(\R^n)$. \marginpar{2014.01.07} \end{proof} To show that the condition $u(t) \in L^1(\R^n)$ is crucial for the uniqueness, we give a concrete counterexample first found by Tychonoff \cite{Tychonoff:nonuniqueness_solution_heat_equation:1935}. For simplicity, we reduce to the one-dimensional case and set the diffusion constant $D$ to $1$. Define the function \begin{align*} u(t,x) := \frac{1}{\sqrt{1-t}} \, \e^{+ \frac{x^2}{4(1-t)}} \end{align*} for $t \in [0,1)$. 
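That this $u$ solves the heat equation with $D = 1$ can be confirmed numerically before doing the computation by hand; the sketch below (step size and sample points are ad hoc choices) compares central difference quotients for $\partial_t u$ and $\partial_x^2 u$.

```python
import numpy as np

def u(t, x):
    # u(t, x) = (1 - t)^{-1/2} exp(x² / (4 (1 - t))) for 0 ≤ t < 1
    return np.exp(x**2 / (4 * (1 - t))) / np.sqrt(1 - t)

# Central finite differences for ∂_t u and ∂_x² u at a few sample points.
h = 1e-4
t0 = 0.3
x0 = np.array([-1.5, 0.0, 0.7, 2.0])
du_dt = (u(t0 + h, x0) - u(t0 - h, x0)) / (2 * h)
d2u_dx2 = (u(t0, x0 + h) - 2 * u(t0, x0) + u(t0, x0 - h)) / h**2
print(np.max(np.abs(du_dt - d2u_dx2)))
```

The two difference quotients agree up to the expected $\mathcal{O}(h^2)$ discretization error.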
A simple computation yields that $u(t)$ satisfies the heat equation \begin{align*} \partial_t u(t) = + \partial_x^2 u(t) \end{align*} with initial condition $u(0,x) = \e^{\frac{x^2}{4}}$: \begin{align*} \partial_t u(t,x) &= + \tfrac{1}{2} \, (1-t)^{- \nicefrac{3}{2}} \, \e^{\frac{x^2}{4(1-t)}} + (1 - t)^{- \nicefrac{1}{2}} \cdot (+1) \cdot \frac{x^2}{4(1-t)^2} \, \e^{\frac{x^2}{4(1-t)}} \\ &= \frac{2 (1-t) + x^2}{4 (1-t)^{\nicefrac{5}{2}}} \, \e^{\frac{x^2}{4(1-t)}} \\ \partial_x u(t,x) &= \frac{x}{2 (1-t)^{\nicefrac{3}{2}}} \, \e^{\frac{x^2}{4(1-t)}} \\ \Rightarrow \; \partial_x^2 u(t,x) &= \frac{1}{2 (1-t)^{\nicefrac{3}{2}}} \, \e^{\frac{x^2}{4(1-t)}} + \frac{x^2}{4 (1-t)^{\nicefrac{5}{2}}} \, \e^{\frac{x^2}{4(1-t)}} \\ &= \frac{2 (1-t) + x^2}{4 (1-t)^{\nicefrac{5}{2}}} \, \e^{\frac{x^2}{4(1-t)}} . \end{align*} Clearly, the solution explodes as $t \nearrow 1$ and is not integrable for $t \in [0,1)$. More careful study shows that asking for $u(t) \in L^1(\R^n)$ is a lot stronger than necessary. In fact, we will see in the next chapter that requiring the solution $u(t)$ to remain a tempered distribution suffices to make the solution unique. \subsection{The solution of the free Schrödinger equation on $\R^n$} \label{Fourier:R:Schroedinger} Now we will apply this to the free Schrödinger operator $H = - \tfrac{1}{2} \Delta_x$ where the natural space of solutions is $L^2(\R^n)$. Rewriting the free Schrödinger equation in the \emph{momentum representation} yields \begin{align*} \Fourier \bigl ( \ii \partial_t \psi(t) \bigr ) &= \ii \partial_t \Fourier \psi(t) = \Fourier \bigl ( - \tfrac{1}{2} \Delta_x \psi(t) \bigr ) \\ & = \Fourier \bigl ( - \tfrac{1}{2} \Delta_x \bigr ) \Fourier^{-1} \Fourier \psi(t) . 
\end{align*} Parseval's theorem~\ref{Fourier:R:thm:Parseval_Plancherel} tells us that $\Fourier : L^2(\R^n) \longrightarrow L^2(\R^n)$ is a unitary, and thus we can equivalently look for $\widehat{\psi}(t) = \Fourier \bigl ( \psi(t) \bigr ) \in L^2(\R^n)$ which solves the free Schrödinger equation in the momentum representation, \begin{align*} \ii \partial_t \widehat{\psi}(t) &= \Fourier \, \bigl ( - \tfrac{1}{2} \Delta_x \bigr ) \, \Fourier^{-1} \widehat{\psi}(t) , && \widehat{\psi}(0) = \widehat{\psi}_0 . \end{align*} If we compute the right-hand side at $t = 0$ and assume $\psi_0 \in L^1(\R^n) \cap L^2(\R^n)$, by Proposition~\ref{Fourier:R:prop:fundamentals}~(vi) this leads to \begin{align*} \Fourier \bigl ( - \tfrac{1}{2} \Delta_x \psi_0 \bigr ) (\xi) = \tfrac{1}{2} \xi^2 \, (\Fourier \psi_0)(\xi) =: \bigl ( H^{\Fourier} \Fourier \psi_0 \bigr )(\xi) . \end{align*} We will revisit this point in Chapter~\ref{S_and_Sprime:schwartz_functions}. Note that $- \Delta_x \psi_0 \in L^2(\R^n)$ since this is precisely the domain of definition of $H$, and thus $\mathcal{D}(H^{\Fourier})$ consists of those $L^2(\R^n)$-functions $\widehat{\psi}$ for which $\hat{\xi}^2 \, \widehat{\psi}$ is also in $L^2(\R^n)$. Again, the Fourier transform converts the PDE into the linear ODE \begin{align} \ii \partial_t \widehat{\psi}(t) &= \tfrac{1}{2} \hat{\xi}^2 \, \widehat{\psi}(t) , && \widehat{\psi}(0) = \Fourier \psi_0 = \widehat{\psi}_0 , \end{align} which can be solved explicitly by \begin{align} \widehat{\psi}(t) = U^{\Fourier}(t) \widehat{\psi}_0 := \e^{- \ii t \, \frac{1}{2} \hat{\xi}^2} \, \widehat{\psi}_0 . \end{align} The unitary evolution group associated to $H^{\Fourier} = \frac{1}{2} \hat{\xi}^2$ is the unitary multiplication operator $U^{\Fourier}(t) = \e^{- \ii t \, \frac{1}{2} \hat{\xi}^2}$, and hence, the evolution group generated by $H = - \frac{1}{2} \Delta_x$ is \begin{align*} U(t) = \Fourier^{-1} \, \e^{- \ii t \, \frac{1}{2} \hat{\xi}^2} \Fourier . 
\end{align*} $U(t)$ is also unitary: $U^{\Fourier}(t)$ and $\Fourier$ are unitary, and thus \begin{align*} U(t) \, U(t)^* &= \Bigl ( \Fourier^{-1} \, \e^{- \ii t \, \frac{1}{2} \hat{\xi}^2} \Fourier \Bigr ) \, \Bigl ( \Fourier^{-1} \, \e^{- \ii t \, \frac{1}{2} \hat{\xi}^2} \Fourier \Bigr )^* \\ &= \Fourier^* \, \e^{- \ii t \, \frac{1}{2} \hat{\xi}^2} \Fourier \, \Fourier^* \, \e^{+ \ii t \, \frac{1}{2} \hat{\xi}^2} \, \Fourier = \id_{L^2(\R^n)} . \end{align*} Similarly, one deduces $U(t)^* \, U(t) = \id_{L^2(\R^n)}$. One may be tempted to follow the computation leading up to \eqref{Fourier:R:eqn:fundamental_solution_heat_equation}, replace $t$ by $- \ii t$ and write the solution \begin{align} \psi(t) = p(t) \ast \psi_0 \label{Fourier:R:eqn:solution_free_Schroedinger_convolution} \end{align} as a convolution of the initial condition $\psi_0$ with the function \begin{align*} p(t,x) := \frac{\e^{+ \ii \frac{x^2}{2 t}}}{(2 \pi \ii t)^{\nicefrac{n}{2}}} . \end{align*} From a technical point of view, the derivation of \eqref{Fourier:R:eqn:solution_free_Schroedinger_convolution} is more delicate and will be postponed to Chapter~\ref{S_and_Sprime}. \paragraph{The uncertainty principle} One of the fundamentals of quantum mechanics is \emph{Heisenberg's uncertainty principle}, namely that one cannot arbitrarily localize wave functions in position \emph{and} momentum space simultaneously. This is a particular case of a much more general fact about non-commuting (quantum) observables: \begin{theorem}[Heisenberg's uncertainty principle] Let $A , B : \Hil \longrightarrow \Hil$ be two bounded selfadjoint operators on the Hilbert space $\Hil$. 
We define the expectation value % \begin{align*} \mathbb{E}_{\psi}(A) := \bscpro{\psi}{A \psi} \end{align*} % with respect to $\psi \in \Hil$ with $\norm{\psi} = 1$ and the variance % \begin{align*} \sigma_{\psi}(A)^2 := \mathbb{E}_{\psi} \Bigl ( \bigl (A - \mathbb{E}_{\psi}(A) \bigr )^2 \Bigr ) \end{align*} % Then Heisenberg's uncertainty relation holds: % \begin{align} \tfrac{1}{2} \babs{\mathbb{E}_{\psi} \bigl ( \ii [A,B] \bigr )} \leq \sigma_{\psi}(A) \, \sigma_{\psi}(B) \label{Fourier:R:eqn:uncertainty_principle} \end{align} % \end{theorem} \begin{proof} Let $\psi \in \Hil$ be an arbitrary normalized vector. Due to the selfadjointness of $A$ and $B$, the expectation values are real, % \begin{align*} \mathbb{E}_{\psi}(A) &= \bscpro{\psi}{A \psi} = \bscpro{A^* \psi}{\psi} = \bscpro{A \psi}{\psi} \\ &= \overline{\bscpro{\psi}{A \psi}} = \overline{\mathbb{E}_{\psi}(A)} . \end{align*} % In general $A$ and $B$ will not have mean $0$, but % \begin{align*} \tilde{A} := A - \mathbb{E}_{\psi}(A) \end{align*} % and $\tilde{B} := B - \mathbb{E}_{\psi}(B)$ do. Hence, we can express the variance of $A$ as an expectation value: % \begin{align*} \sigma_{\psi}(A)^2 &= \mathbb{E}_{\psi} \Bigl ( \bigl ( A - \mathbb{E}_{\psi}(A) \bigr )^2 \Bigr ) = \mathbb{E}_{\psi} \bigl ( \tilde{A}^2 \bigr ) \end{align*} % Moreover, the commutator of $\tilde{A}$ and $\tilde{B}$ coincides with that of $A$ and $B$, % \begin{align*} \bigl [ \tilde{A} , \tilde{B} \bigr ] &= \bigl [ A , B \bigr ] - \bigl [ \mathbb{E}_{\psi}(A) , B \bigr ] - \bigl [ A , \mathbb{E}_{\psi}(B) \bigr ] + \bigl [ \mathbb{E}_{\psi}(A) , \mathbb{E}_{\psi}(B) \bigr ] \\ &= [A , B] . 
\end{align*} % Then expressing the left-hand side of \eqref{Fourier:R:eqn:uncertainty_principle} in terms of the shifted observables $\tilde{A}$ and $\tilde{B}$, and using the Cauchy-Schwarz inequality as well as the selfadjointness yields Heisenberg's inequality, % \begin{align*} \babs{\mathbb{E}_{\psi} \bigl ( \ii [ A , B ] \bigr )} &= \babs{\mathbb{E}_{\psi} \bigl ( [ \tilde{A} , \tilde{B} ] \bigr )} = \Babs{\bscpro{\psi}{\tilde{A} \tilde{B} \psi} - \bscpro{\psi}{\tilde{B} \tilde{A} \psi}} \\ & \leq \babs{\bscpro{\tilde{A} \psi}{\tilde{B} \psi}} + \babs{\bscpro{\tilde{B} \psi}{\tilde{A} \psi}} \leq 2 \, \bnorm{\tilde{A} \psi} \, \bnorm{\tilde{B} \psi} \\ & = 2 \, \sqrt{\bscpro{\tilde{A} \psi}{\tilde{A} \psi}} \, \sqrt{\bscpro{\tilde{B} \psi}{\tilde{B} \psi}} = 2 \, \sqrt{\bscpro{\psi}{\tilde{A}^2 \psi}} \, \sqrt{\bscpro{\psi}{\tilde{B}^2 \psi}} \\ & = 2 \, \sigma_{\psi}(A) \, \sigma_{\psi}(B) . \end{align*} % \end{proof} Often Heisenberg's inequality is just stated for the \emph{position observable} $x_j$ (multiplication by $x_j$) and the \emph{momentum observable} $- \ii \hbar \partial_{x_k}$: even though these are unbounded selfadjoint operators (\cf the discussion in Chapters~\ref{operators:unitary}--\ref{operators:selfadjoint_operators}), this introduces only technical complications. For instance, the above arguments hold verbatim if we require in addition $\psi \in \Cont^{\infty}_{\mathrm{c}}(\R^n) \subset L^2(\R^n)$, and vectors of this type lie dense in $L^2(\R^n)$. Then for $j = k$ the left-hand side of Heisenberg's inequality reduces to $\nicefrac{\hbar}{2}$ because \begin{align*} \bigl [ x_j , (- \ii \hbar \partial_{x_k}) \bigr ] \psi &= x_j \, (- \ii \hbar \partial_{x_k} \psi) - (- \ii \hbar) \partial_{x_k} \bigl ( x_j \, \psi \bigr ) \\ &= \ii \hbar \, \delta_{kj} \, \psi \end{align*} and $\psi$ is assumed to have norm $1$, \begin{align} \sigma_{\psi}(x_j) \, \sigma_{\psi} \bigl ( - \ii \hbar \partial_{x_j} \bigr ) \geq \tfrac{\hbar}{2} . 
\label{Fourier:R:eqn:uncertainty_principle_x_p} \end{align} Skipping over some of the details (there are technical difficulties defining the commutator of two unbounded operators), we see that one cannot do better than $\nicefrac{\hbar}{2}$ but there are cases when the left-hand side of \eqref{Fourier:R:eqn:uncertainty_principle_x_p} is not even finite. The physical interpretation of \eqref{Fourier:R:eqn:uncertainty_principle} is that one cannot measure non-commuting observables simultaneously with arbitrary precision. In his original book on quantum mechanics \cite{Heisenberg:prinzipien_quantentheorie:1930}, Heisenberg takes great care to explain why in specific experiments position and momentum along the same direction cannot be measured simultaneously with arbitrary precision, \ie why increasing the resolution of the position measurement increases the error of the momentum measurement and vice versa. \chapter{Schwartz functions and tempered distributions} \label{S_and_Sprime} Often one wants to find more general solutions to PDEs, \eg one may ask whether the heat equation still makes sense if the initial condition is merely an element of $L^{\infty}(\R^n)$. A very fruitful ansatz which we will explore in Chapter~\ref{Greens_functions} is to ask whether “weak solutions” to a PDE exist. Weak means that the solution may be a so-called \emph{distribution}, \ie a continuous linear functional on a space of \emph{test functions}. \emph{Schwartz functions} $\Schwartz(\R^d)$ are such a space of test functions, \ie a space of “very nicely behaved functions”. The dual of this space of test functions, the tempered distributions, allows us to extend common operations such as Fourier transforms and derivatives to objects which may not even be functions. 
\section{Schwartz functions} \label{S_and_Sprime:schwartz_functions} The motivation to define Schwartz functions on $\R^d$ comes from dealing with Fourier transforms: our class of test functions $\Schwartz(\R^d)$ has three defining properties: \begin{enumerate}[(i)] \item $\Schwartz(\R^d)$ forms a vector space. \item \emph{Stability under derivation}, $\partial_x^{\alpha} \Schwartz(\R^d) \subset \Schwartz(\R^d)$: for all multiindices $\alpha \in \N_0^d$ and $f \in \Schwartz(\R^d)$, we have $\partial_x^{\alpha} f \in \Schwartz(\R^d)$. \item \emph{Stability under Fourier transform}, $\Fourier \Schwartz(\R^d) \subseteq \Schwartz(\R^d)$: for all $f \in \Schwartz(\R^d)$, the Fourier transform % \begin{align} \Fourier^{\pm 1} f : \xi \mapsto \frac{1}{(2\pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd x \, \e^{\mp \ii x \cdot \xi} \, f(x) \in \Schwartz(\R^d) \label{S_and_Sprime:schwartz_functions:eqn:Fourier_transform} \end{align} % is also a test function. \end{enumerate} These relatively simple requirements have surprisingly rich implications: \begin{enumerate}[(i)] \item $\Schwartz(\R^d) \subset L^1(\R^d)$, \ie any $f \in \Schwartz(\R^d)$ and all of its derivatives are integrable. \item $\Fourier : \Schwartz(\R^d) \longrightarrow \Schwartz(\R^d)$ acts bijectively: if $f \in \Schwartz(\R^d) \subset L^1(\R^d)$, then $\Fourier f \in \Schwartz(\R^d) \subset L^1(\R^d)$. \item For all $\alpha \in \N_0^d$, we have $\Fourier \bigl ( (- \ii \partial_x)^{\alpha} f \bigr ) = \xi^{\alpha} \, \Fourier f \in \Schwartz(\R^d)$. This holds as all derivatives are integrable. \item Hence, for all $a , \alpha \in \N_0^d$, we have $x^a \partial_x^{\alpha} f \in \Schwartz(\R^d)$. \item Translations of Schwartz functions are again Schwartz functions, $f( \cdot - x_0) \in \Schwartz(\R^d)$; this follows from $\Fourier f(\cdot - x_0) = \e^{- \ii \xi \cdot x_0} \, \Fourier f \in \Schwartz(\R^d)$ for all $x_0 \in \R^d$. 
\end{enumerate} This leads to the following definition: \begin{definition}[Schwartz functions] The space of Schwartz functions % \begin{align*} \Schwartz(\R^d) := \Bigl \{ f \in \Cont^{\infty}(\R^d) \; \big \vert \; \forall a , \alpha \in \N_0^d : \norm{f}_{a \alpha} < \infty \Bigr \} \end{align*} % is defined in terms of the family of seminorms\footnote{A seminorm has all properties of a norm except that $\norm{f} = 0$ does not necessarily imply $f = 0$. } indexed by $a , \alpha \in \N_0^d$ % \begin{align*} \norm{f}_{a \alpha} := \sup_{x \in \R^d} \babs{x^a \partial_x^{\alpha} f(x)} , && f \in \Cont^{\infty}(\R^d) . \end{align*} % \end{definition} The family of seminorms defines a so-called \emph{Fréchet topology:} put in simple terms, to make sure that sequences in $\Schwartz(\R^d)$ converge to rapidly decreasing smooth functions, we need to control all derivatives as well as the decay. This is also the reason why there is \emph{no norm on $\Schwartz(\R^d)$} which generates the same topology as the family of seminorms. However, since $\norm{f}_{a \alpha} = 0$ for all $a , \alpha \in \N_0^d$ ensures $f = 0$, all seminorms put together can distinguish points. \begin{example} Two simple examples of Schwartz functions are % \begin{align*} f(x) = \e^{- a x^2} , && a > 0 , \end{align*} % and % \begin{align*} g(x) = \left \{ % \begin{matrix} \e^{- \frac{1}{1 - x^2} + 1} & \abs{x} < 1 \\ 0 & \abs{x} \geq 1 \\ \end{matrix} % \right . . \end{align*} % The second one even has compact support. \end{example} The first major fact we will establish is completeness. \begin{theorem} The space of Schwartz functions endowed with % \begin{align*} \mathrm{d}(f,g) := \sum_{n = 0}^{\infty} 2^{-n} \sup_{\abs{a} + \abs{\alpha} = n} \frac{\norm{f - g}_{a \alpha}}{1 + \norm{f - g}_{a \alpha}} \end{align*} % is a complete metric space. \end{theorem} \begin{proof} $\mathrm{d}$ is positive and symmetric. 
It also satisfies the triangle inequality as $x \mapsto \frac{x}{1 + x}$ is concave on $\R^+_0$ and all of the seminorms satisfy the triangle inequality. Hence, $\bigl ( \Schwartz(\R^d) , \mathrm{d} \bigr )$ is a metric space. \marginpar{2014.01.09} To show completeness, take a Cauchy sequence $(f_n)$ with respect to $\mathrm{d}$. By definition and positivity, this means $(f_n)$ is also a Cauchy sequence with respect to all of the seminorms $\norm{\cdot}_{a \alpha}$. Each of the $\bigl ( x^a \partial_x^{\alpha} f_n \bigr )$ converges to some $g_{a \alpha}$ as the space of bounded continuous functions $\BCont(\R^d)$ with $\sup$ norm is complete. It remains to show that $g_{a \alpha} = x^a \partial_x^{\alpha} g_{00}$. Clearly, only taking derivatives is problematic: we will prove this for $\abs{\alpha} = 1$, the general result follows from a simple induction. Assume we are interested in the sequence $(\partial_{x_k} f_n)$, $k \in \{ 1 , \ldots , d \}$. With $\alpha_k := (0 , \ldots , 0 , 1 , 0, \ldots)$ as the multiindex that has a $1$ in the $k$th entry and $e_k := (0 , \ldots , 0 , 1 , 0, \ldots) \in \R^d$ as the $k$th canonical base vector, we know that % \begin{align*} f_n(x) = f_n(x - x_k e_k) + \int_0^{x_k} \dd s \, \partial_{x_k} f_n \bigl ( x + (s - x_k) e_k \bigr ) \end{align*} % as well as % \begin{align*} g_{00}(x) = g_{00}(x - x_k e_k) + \int_0^{x_k} \dd s \, g_{0 \alpha_k} \bigl ( x + (s - x_k) e_k \bigr ) \end{align*} % hold since $f_n \to g_{00}$ and $\partial_{x_k} f_n \to g_{0\alpha_k}$ uniformly. Hence, $g_{00} \in \Cont^1(\R^d)$ and the derivative of $g_{00}$ coincides with $g_{0\alpha_k}$, $\partial_{x_k} g_{00} = g_{0 \alpha_k}$. We then proceed by induction to show $g_{00} \in \Cont^{\infty}(\R^d)$. This means $\mathrm{d}(f_n,g_{00}) \to 0$ as $n \to \infty$ and $\Schwartz(\R^d)$ is complete. 
\end{proof} The $L^p$ norm of each element in $\Schwartz(\R^d)$ can be dominated by finitely many seminorms: \begin{lemma}\label{S_and_Sprime:schwartz_functions:lem:Lp_estimate} Let $f \in \Schwartz(\R^d)$. Then for each $1 \leq p < \infty$, the $L^p$ norm of $f$ can be dominated by a finite number of seminorms, % \begin{align*} \norm{f}_{L^p(\R^d)} \leq C_1(d) \norm{f}_{00} + C_2(d) \max_{\abs{a} = 2n(d)} \norm{f}_{a 0} , \end{align*} % where $C_1(d) , C_2(d) \in \R^+$ and $n(d) \in \N_0$ only depend on the dimension of $\R^d$. Hence, $f \in L^p(\R^d)$. \end{lemma} \begin{proof} We split the integral on $\R^d$ into an integral over the unit ball centered at the origin and its complement: let $B_n := \max_{\abs{a} = 2n} \norm{f}_{a0}$, then % \begin{align*} \norm{f}_{L^p(\R^d)} &= \biggl ( \int_{\R^d} \dd x \, \abs{f(x)}^p \biggr )^{\nicefrac{1}{p}} \leq \biggl ( \int_{\abs{x} \leq 1} \dd x \, \abs{f(x)}^p \biggr )^{\nicefrac{1}{p}} + \biggl ( \int_{\abs{x} > 1} \dd x \, \abs{f(x)}^p \biggr )^{\nicefrac{1}{p}} \\ &\leq \norm{f}_{00} \, \biggl ( \int_{\abs{x} \leq 1} \dd x \, 1 \biggr )^{\nicefrac{1}{p}} + \biggl ( \int_{\abs{x} > 1} \dd x \, \abs{f(x)}^p \frac{\abs{x}^{2np}}{\abs{x}^{2np}} \biggr )^{\nicefrac{1}{p}} \\ &\leq \mathrm{Vol}(B_1(0))^{\nicefrac{1}{p}} \, \norm{f}_{00} + d^n \, B_n \, \biggl ( \int_{\abs{x} > 1} \dd x \, \frac{1}{\abs{x}^{2np}} \biggr )^{\nicefrac{1}{p}} . \end{align*} % Here we used $\abs{x}^{2n} \leq d^n \, \max_{\abs{a} = 2n} \abs{x^a}$. If we choose $n$ large enough, $\abs{x}^{-2np}$ is integrable and can be computed explicitly, and we get % \begin{align*} \norm{f}_{L^p(\R^d)} \leq C_1(d) \, \norm{f}_{00} + C_2(d) \, \max_{\abs{a} = 2n} \norm{f}_{a0} . \end{align*} % This concludes the proof. \end{proof} \begin{lemma}\label{S_and_Sprime:schwartz_functions:lem:density_of_Cinfty_compact} The smooth functions with compact support $\Cont^{\infty}_c(\R^d)$ are dense in $\Schwartz(\R^d)$. 
\end{lemma} \begin{proof} Take any $f \in \Schwartz(\R^d)$ and choose % \begin{align*} g(x) = \left \{ % \begin{matrix} \e^{- \frac{1}{1 - x^2} + 1} & \abs{x} < 1 \\ 0 & \abs{x} \geq 1 \\ \end{matrix} % \right . . \end{align*} % Then $f_n := g(\nicefrac{\cdot}{n}) \, f$ converges to $f$ in $\Schwartz(\R^d)$, \ie % \begin{align*} \lim_{n \to \infty} \bnorm{f_n - f}_{a \alpha} = 0 \end{align*} % holds for all $a , \alpha \in \N_0^d$. \end{proof} Next, we will show that $\Fourier : \Schwartz(\R^d) \longrightarrow \Schwartz(\R^d)$ is a continuous and bijective map from $\Schwartz(\R^d)$ onto itself. \begin{theorem}\label{S_and_Sprime:schwartz_functions:thm:Fourier_is_bijection} The Fourier transform $\Fourier$ as defined by equation~\eqref{S_and_Sprime:schwartz_functions:eqn:Fourier_transform} maps $\Schwartz(\R^d)$ continuously and bijectively onto itself. The inverse $\Fourier^{-1}$ is continuous as well. Furthermore, for all $f \in \Schwartz(\R^d)$ and $a , \alpha \in \N_0^d$, we have % \begin{align} \Fourier \bigl ( x^a (- \ii \partial_x)^{\alpha} f \bigr ) = (+ \ii \partial_{\xi})^a \xi^{\alpha} \Fourier f . \label{S_and_Sprime:schwartz_functions:eqn:Fourier_is_bijection} \end{align} % \end{theorem} \begin{proof} We need to prove $\Fourier \bigl ( x^a (- \ii \partial_x)^{\alpha} f \bigr ) = (+ \ii \partial_{\xi})^a \xi^{\alpha} \Fourier f$ first: since $x^a \partial_x^{\alpha} f$ is integrable, its Fourier transform exists and is continuous by Dominated Convergence. 
For any $a , \alpha \in \N_0^d$, we have % \begin{align*} \Bigl ( \Fourier \bigl ( x^a (- \ii \partial_x)^{\alpha} f \bigr ) \Bigr )(\xi) &= \frac{1}{(2\pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd x \, \e^{- \ii x \cdot \xi} \, x^a \, (- \ii \partial_x)^{\alpha} f(x) \\ &= \frac{1}{(2\pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd x \, (+ \ii \partial_{\xi})^a \e^{- \ii x \cdot \xi} \, (- \ii \partial_x)^{\alpha} f(x) \\ & \overset{\ast}{=} \frac{1}{(2\pi)^{\nicefrac{d}{2}}} (+ \ii \partial_{\xi})^a \int_{\R^d} \dd x \, \e^{- \ii x \cdot \xi} \, (- \ii \partial_x)^{\alpha} f(x) . \end{align*} % In the step marked with $\ast$, we have used Dominated Convergence to interchange integration and differentiation. Now we integrate by parts $\abs{\alpha}$ times and use that the boundary terms vanish, % \begin{align*} \Bigl ( \Fourier \bigl ( x^a (- \ii \partial_x)^{\alpha} f \bigr ) \Bigr )(\xi) &= \frac{1}{(2\pi)^{\nicefrac{d}{2}}} (+ \ii \partial_{\xi})^a \int_{\R^d} \dd x \, (+ \ii \partial_x)^{\alpha} \e^{- \ii x \cdot \xi} \, f(x) \\ & = \frac{1}{(2\pi)^{\nicefrac{d}{2}}} (+ \ii \partial_{\xi})^a \int_{\R^d} \dd x \, \xi^{\alpha} \e^{- \ii x \cdot \xi} \, f(x) \\ &= \bigl ( (+ \ii \partial_{\xi})^a \xi^{\alpha} \Fourier f \bigr )(\xi) . \end{align*} % To show $\Fourier$ is continuous, we need to estimate the seminorms of $\Fourier f$ by those of $f$: for any $a , \alpha \in \N_0^d$, it holds % \begin{align*} \bnorm{\Fourier f}_{a \alpha} &= \sup_{\xi \in \R^d} \babs{\bigl ( \xi^a \partial_{\xi}^{\alpha} \Fourier f \bigr )(\xi)} = \sup_{\xi \in \R^d} \Babs{\Bigl ( \Fourier \bigl ( \partial_x^a x^{\alpha} f \bigr ) \Bigr )(\xi)} \\ &\leq \frac{1}{(2\pi)^{\nicefrac{d}{2}}} \bnorm{\partial_x^a x^{\alpha} f}_{L^1(\R^d)} . \end{align*} % In particular, this implies $\Fourier f \in \Schwartz(\R^d)$. 
Since $\partial_x^a x^{\alpha} f \in \Schwartz(\R^d)$, we can apply Lemma~\ref{S_and_Sprime:schwartz_functions:lem:Lp_estimate} and estimate the right-hand side by a finite number of seminorms of $f$. Hence, $\Fourier$ is continuous: if $f_n$ is a Cauchy sequence in $\Schwartz(\R^d)$ that converges to $f$, then $\Fourier f_n$ has to converge to $\Fourier f \in \Schwartz(\R^d)$. \marginpar{2014.01.14} To show that $\Fourier$ is a bijection with continuous inverse, we note that it suffices to prove $\Fourier^{-1} \Fourier f = f$ for functions $f$ in a dense subset, namely $\Cont^{\infty}_c(\R^d)$ (see Lemma~\ref{S_and_Sprime:schwartz_functions:lem:density_of_Cinfty_compact}). Pick $f$ so that its support is contained in a cube $W_n = [-n,+n]^d$ with sides of length $2n$. We can write $f$ on $W_n$ as a uniformly convergent Fourier series, % \begin{align*} f_n(x) = \sum_{\xi \in \frac{\pi}{n} \Z^d} \hat{f}_n(\xi) \, \e^{\ii x \cdot \xi} , \end{align*} % with % \begin{align*} \hat{f}_n(\xi) &= \frac{1}{\mathrm{Vol}(W_n)} \int_{W_n} \dd x \, \e^{- \ii x \cdot \xi} \, f(x) = \frac{(2 \pi)^{\nicefrac{d}{2}}}{(2n)^d} \frac{1}{(2\pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd x \, \e^{- \ii x \cdot \xi} \, f(x) . \end{align*} % The second equality holds if $n$ is large enough so that $\supp \, f$ fits into the cube $[-n,+n]^d$. Hence, $f_n$ can be expressed as % \begin{align*} f_n(x) = \sum_{\xi \in \frac{\pi}{n} \Z^d} \frac{1}{(2\pi)^{\nicefrac{d}{2}}} \frac{\pi^d}{n^d} \, (\Fourier f)(\xi) \, \e^{\ii x \cdot \xi} \end{align*} % which is a Riemann sum that converges to % \begin{align*} f(x) &= \frac{1}{(2\pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd \xi \, \e^{\ii x \cdot \xi} \, (\Fourier f)(\xi) = \bigl ( \Fourier^{-1} \Fourier f \bigr )(x) \end{align*} % as $\Fourier f \in \Schwartz(\R^d)$. This concludes the proof. \end{proof} Hence, we have shown that $\Schwartz(\R^d)$ has the defining properties that suggested its motivation in the first place. 
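Both the exchange formula \eqref{S_and_Sprime:schwartz_functions:eqn:Fourier_is_bijection} and the inversion $\Fourier^{-1} \Fourier f = f$ can be tested numerically. The sketch below (grid and window are ad hoc choices, and the discrete Fourier transform stands in for $\Fourier$) does so for the Schwartz function $f(x) = x \, \e^{- \nicefrac{x^2}{2}}$.

```python
import numpy as np

n, L = 4096, 50.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
f = x * np.exp(-x**2 / 2)
df = (1 - x**2) * np.exp(-x**2 / 2)   # f'(x), computed by hand

xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)
phase = np.exp(-1j * xi * x[0])       # the grid starts at x[0], not at 0

def F(g):
    # Discrete stand-in for (2π)^{-1/2} ∫ exp(-i x ξ) g(x) dx
    return np.fft.fft(g) * dx * phase / np.sqrt(2 * np.pi)

# Exchange formula for a = 0, |α| = 1: 𝓕((-i ∂_x) f) = ξ 𝓕f.
err_exchange = np.max(np.abs(F(-1j * df) - xi * F(f)))

# Round trip 𝓕⁻¹ 𝓕 f = f on the sample grid.
err_roundtrip = np.max(np.abs(np.fft.ifft(np.fft.fft(f)) - f))
print(err_exchange, err_roundtrip)
```

Since $f$ and $\hat{f}$ decay rapidly, both errors are of the order of machine precision.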
The Schwartz functions also have other nice properties whose proofs are left as an exercise. \begin{proposition} The Schwartz functions have the following properties: % \begin{enumerate}[(i)] \item With pointwise multiplication $\cdot : \Schwartz(\R^d) \times \Schwartz(\R^d) \longrightarrow \Schwartz(\R^d)$, the space $\Schwartz(\R^d)$ forms a Fréchet algebra (\ie the multiplication is continuous in both arguments). \item For all $a , \alpha$, the map $f \mapsto x^a \partial_x^{\alpha} f$ is continuous on $\Schwartz(\R^d)$. \item For any $x_0 \in \R^d$, the map $\tau_{x_0} : f \mapsto f(\cdot - x_0)$ is continuous on $\Schwartz(\R^d)$. \item For any $f \in \Schwartz(\R^d)$, $\frac{1}{h} \bigl ( \tau_{h e_k} f - f \bigr )$ converges to $- \partial_{x_k} f$ as $h \to 0$ where $e_k$ is the $k$th canonical base vector of $\R^d$. \end{enumerate} % \end{proposition} The next important fact will be mentioned without proof: \begin{theorem}\label{S_and_Sprime:schwartz_functions:thm:S_dense_in_Lp} $\Schwartz(\R^d)$ is dense in $L^p(\R^d)$, $1 \leq p < \infty$. \end{theorem} This means we can approximate any $L^p(\R^d)$ function by a test function. We will use this and the next theorem to extend the Fourier transform to $L^2(\R^d)$. \begin{theorem}\label{S_and_Sprime:schwartz_functions:thm:unitarity_Fourier_on_S} For all $f , g \in \Schwartz(\R^d)$, we have % \begin{align*} \int_{\R^d} \dd x \, (\Fourier f)(x) \, g(x) = \int_{\R^d} \dd x \, f(x) \, (\Fourier g)(x) . \end{align*} % This implies $\sscpro{\Fourier f}{g} = \sscpro{f}{\Fourier^{-1} g}$ and $\sscpro{\Fourier f}{\Fourier g} = \sscpro{f}{g}$ where $\scpro{\cdot}{\cdot}$ is the usual scalar product on $L^2(\R^d)$.
\end{theorem} \begin{proof} By Fubini's theorem, we may interchange the order of integration and carry out the $x$ integration first, % \begin{align*} \int_{\R^d} \dd x \, (\Fourier f)(x) \, g(x) &= \int_{\R^d} \dd x \, \frac{1}{(2\pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd \xi \, \e^{- \ii x \cdot \xi} \, f(\xi) \, g(x) \\ & = \int_{\R^d} \dd \xi \, f(\xi) \, \frac{1}{(2\pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd x \, \e^{- \ii x \cdot \xi} g(x) = \int_{\R^d} \dd \xi \, f(\xi) \, (\Fourier g)(\xi) . \end{align*} % To prove the second part, we remark that compared to the scalar product on $L^2(\R^d)$, we are missing a complex conjugation of the first function. Furthermore, $(\Fourier f)^* = \Fourier^{-1} f^*$ holds. From this, it follows that $\sscpro{\Fourier f}{g} = \sscpro{f}{\Fourier^{-1} g}$ and upon replacing $g$ with $\Fourier g$, that $\sscpro{\Fourier f}{\Fourier g} = \sscpro{f}{\Fourier^{-1} \Fourier g} = \sscpro{f}{g}$. \end{proof} Consequently, the convolution defines a multiplication on $\Schwartz(\R^d)$: \begin{corollary} $\Schwartz(\R^d) \ast \Schwartz(\R^d) \subseteq \Schwartz(\R^d)$ \end{corollary} \begin{proof} Let $f , g \in \Schwartz(\R^d)$. Because Schwartz functions are also integrable, $f \ast g$ exists in $L^1(\R^d)$ and satisfies $\Fourier (f \ast g) = (2\pi)^{\nicefrac{d}{2}} \Fourier f \, \Fourier g$ (Proposition~\ref{Fourier:R:prop:Fourier_convolution}). This means we can rewrite $f \ast g = (2 \pi)^{\nicefrac{d}{2}} \Fourier^{-1} \bigl ( \Fourier f \, \Fourier g \bigr )$ as the inverse Fourier transform of a product of Schwartz functions, and thus $f \ast g \in \Schwartz(\R^d)$. \end{proof} Now we will apply this to the free Schrödinger operator $H = - \tfrac{1}{2} \Delta_x$.
First of all, we conclude from Theorem~\ref{S_and_Sprime:schwartz_functions:thm:S_dense_in_Lp} that the domain of $H$, \begin{align*} \Schwartz(\R^d) \subset \mathcal{D}(H) = \bigl \{ \varphi \in L^2(\R^d) \; \vert \; - \Delta_x \varphi \in L^2(\R^d) \bigr \} \subset L^2(\R^d) , \end{align*} is dense. Since derivatives of Schwartz functions are Schwartz functions, $H$ maps $\Schwartz(\R^d)$ to itself, and we deduce that the solution \begin{align*} \psi(t) = U(t) \psi_0 = \Fourier^{-1} \, \e^{- \ii t \frac{1}{2} \hat{\xi}^2} \, \Fourier \psi_0 \end{align*} to initial conditions $\psi_0 \in \Schwartz(\R^d) \subset L^2(\R^d)$ remains a Schwartz function: $\Fourier^{\pm 1}$ leaves $\Schwartz(\R^d)$ invariant (Theorem~\ref{S_and_Sprime:schwartz_functions:thm:Fourier_is_bijection}) as does multiplication by $\e^{- \ii t \, \frac{1}{2} \xi^2}$, because derivatives of that function are of the form polynomial times $\e^{- \ii t \, \frac{1}{2} \xi^2}$. For these initial conditions, we can also rigorously prove equation~\eqref{Fourier:R:eqn:solution_free_Schroedinger_convolution}: \begin{proposition}\label{S_and_Sprime:schwartz_functions:prop:free_Schroedinger} Let $\psi_0 \in \Schwartz(\R^d) \subset L^2(\R^d)$. Then for $t \neq 0$ the global solution of the free Schrödinger equation with initial condition $\psi_0$ is given by % \begin{align} \psi(t,x) &= \frac{1}{(2 \pi \ii t)^{\nicefrac{d}{2}}} \int_{\R^d} \dd y \, \e^{\ii \frac{(x-y)^2}{2 t}} \, \psi_0(y) =: \int_{\R^d} \dd y \, p(t,x-y) \, \psi_0(y) . \label{S_and_Sprime:schwartz_functions:eqn:free_propagator} \end{align} % This expression converges in the $L^2$ norm to $\psi_0$ as $t \to 0$. \end{proposition} \begin{proof} We denote the multiplication operator $\e^{- \ii t \frac{1}{2} \hat{\xi}^2}$ conjugated by the Fourier transform by % \begin{align*} U(t) := \Fourier^{-1} \e^{- \ii t \frac{1}{2} \hat{\xi}^2} \Fourier .
\end{align*} % If $t = 0$, then the bijectivity of the Fourier transform on $\Schwartz(\R^d)$, Theorem~\ref{S_and_Sprime:schwartz_functions:thm:Fourier_is_bijection}, yields $U(0) = \id_{\Schwartz}$. So let $t \neq 0$. If $\widehat{\psi}_0$ is a Schwartz function, so is $\e^{- \ii t \frac{1}{2} \xi^2} \widehat{\psi}_0$. As the Fourier transform is a unitary map on $L^2(\R^d)$ (Proposition~\ref{Fourier:R:prop:Parseval_Plancherel}) and maps Schwartz functions onto Schwartz functions (Theorem~\ref{S_and_Sprime:schwartz_functions:thm:Fourier_is_bijection}), $U(t)$ also maps Schwartz functions onto Schwartz functions. Plugging in the definition of the Fourier transform, for any $\psi_0 \in \Schwartz(\R^d)$ and $t \neq 0$ we can write out $U(t) \psi_0$ as % \begin{align} \bigl ( \Fourier^{-1} &\e^{- \ii t \frac{1}{2} \hat{\xi}^2} \Fourier \psi_0 \bigr )(x) = \frac{1}{(2\pi)^d} \int_{\R^d} \dd \xi \, \e^{+ \ii x \cdot \xi} \, \e^{- \ii t \frac{1}{2} \xi^2} \, \int_{\R^d} \dd y \, \e^{- \ii y \cdot \xi} \, \psi_0(y) \notag \\ &= \frac{1}{(2 \pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd \xi \int_{\R^d} \dd y \, \e^{\ii \frac{(x - y)^2}{2 t}} \, \left ( \frac{1}{(2 \pi)^{\nicefrac{d}{2}}} \, \e^{- \ii \frac{t}{2} ( \xi - \nicefrac{(x - y)}{t})^2} \right ) \, \psi_0(y) . \label{S_and_Sprime:schwartz_functions:eqn:proof_free_propagator} \end{align} % We need to regularize the integral: if we write the right-hand side of the above as % \begin{align*} \mbox{r. h. 
s.} &= \lim_{\eps \searrow 0} \frac{1}{(2 \pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd \xi \int_{\R^d} \dd y \, \e^{\ii \frac{(x - y)^2}{2 t}} \, \left ( \frac{1}{(2 \pi)^{\nicefrac{d}{2}}} \, \e^{- (\eps + \ii) \frac{t}{2} ( \xi - \nicefrac{(x - y)}{t})^2} \right ) \, \psi_0(y) \\ &= \lim_{\eps \searrow 0} \frac{1}{(2 \pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd y \, \e^{\ii \frac{(x - y)^2}{2 t}} \, \left ( \frac{1}{(2 \pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd \xi \, \e^{- (\eps + \ii) \frac{t}{2} ( \xi - \nicefrac{(x - y)}{t})^2} \right ) \, \psi_0(y) , \end{align*} % we can use Fubini to change the order of integration. The inner integral can be computed by interpreting it as an integral in the complex plane, % \begin{align*} \frac{1}{(2 \pi)^{\nicefrac{d}{2}}} \int_{\R^d} \dd \xi \, \e^{- (\eps + \ii) \frac{t}{2} ( \xi - \nicefrac{(x - y)}{t})^2} = \frac{1}{\bigl ( (\eps + \ii) t \bigr )^{\nicefrac{d}{2}}} . \end{align*} % Plugged back into equation~\eqref{S_and_Sprime:schwartz_functions:eqn:proof_free_propagator} and combined with the Dominated Convergence Theorem, this yields equation~\eqref{S_and_Sprime:schwartz_functions:eqn:free_propagator}. \end{proof} \section{Tempered distributions} \label{S_and_Sprime:Sprime} Tempered distributions are linear functionals on $\Schwartz(\R^d)$. \begin{definition}[Tempered distributions] The tempered distributions $\Schwartz'(\R^d)$ are the continuous linear functionals on the Schwartz functions $\Schwartz(\R^d)$. If $L \in \Schwartz'(\R^d)$ is a linear functional, we will often write % \begin{align*} \bigl ( L , f \bigr ) := L(f) && \forall f \in \Schwartz(\R^d) . \end{align*} % \end{definition} \begin{example} % \begin{enumerate}[(i)] \item The $\delta$ distribution defined via % \begin{align*} \delta(f) := f(0) \end{align*} % is a linear continuous functional on $\Schwartz(\R^d)$. (See exercise sheet~12.)
\item Let $g \in L^p(\R^d)$, $1 \leq p < \infty$, then for $f \in \Schwartz(\R^d)$, we define % \begin{align} L_g(f) = \int_{\R^d} \dd x \, g(x) \, f(x) =: (g , f) . \label{S_and_Sprime:Sprime:eqn:distributions_of_functions} \end{align} % As $f \in \Schwartz(\R^d) \subset L^q(\R^d)$, $\frac{1}{p} + \frac{1}{q} = 1$, by Hölder's inequality, we have % \begin{align*} \babs{\bigl ( g , f \bigr )} \leq \norm{g}_p \, \norm{f}_q . \end{align*} % Since $\norm{f}_q$ can be bounded by a finite linear combination of Fréchet seminorms of $f$, $L_g$ is continuous, and the inclusion map $\imath : L^p(\R^d) \longrightarrow \Schwartz'(\R^d)$ is continuous. \item Equation~\eqref{S_and_Sprime:Sprime:eqn:distributions_of_functions} is the \emph{canonical way to interpret less nice functions as distributions}: we identify a suitable function $g : \R^d \longrightarrow \C$ with the distribution $L_g$. For instance, polynomially bounded smooth functions (think of $g(x) = x^2$) define continuous linear functionals in this manner since for any $g \in \Cont^{\infty}_{\mathrm{pol}}(\R^d)$, there exists $n \in \N_0$ such that $\sqrt{1 + x^2}^{\, -n} g(x)$ is bounded. Hence, for any $f \in \Schwartz(\R^d)$, Hölder's inequality yields % \begin{align*} \babs{\bigl ( g , f \bigr )} &= \abs{\int_{\R^d} \dd x \, g(x) \, f(x)} = \abs{\int_{\R^d} \dd x \, \sqrt{1 + x^2}^{\, -n} \, g(x) \, \sqrt{1 + x^2}^{\, n} f(x)} \\ & \leq \bnorm{\sqrt{1 + x^2}^{\, -n} \, g(x)}_{L^{\infty}} \, \bnorm{\sqrt{1 + x^2}^{\, n} f(x)}_{L^1} . \end{align*} % Later on, we will see that this point of view, interpreting “not so nice” functions as distributions, helps us extend operations from test functions to much broader classes of functions. \end{enumerate} % \end{example} Similar to the case of normed spaces, we see that continuity implies “boundedness”. 
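As a quick illustration (a one-line check added here), the $\delta$ distribution from the example above is “bounded” in this sense: for every $f \in \Schwartz(\R^d)$, we have
%
\begin{align*}
	\babs{\delta(f)} = \abs{f(0)} \leq \sup_{x \in \R^d} \abs{f(x)} = \norm{f}_{0 0} ,
\end{align*}
%
\ie the estimate of the next proposition holds with $C = 1$ and $k = n = 0$.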
\begin{proposition} A linear functional $L : \Schwartz(\R^d) \longrightarrow \C$ is a tempered distribution (\ie continuous) if and only if there exist constants $C > 0$ and $k,n \in \N_0$ such that % \begin{align*} \babs{L(f)} \leq C \sum_{\substack{\abs{a} \leq k \\ \abs{\alpha} \leq n}} \norm{f}_{a \alpha} \end{align*} % for all $f \in \Schwartz(\R^d)$. \end{proposition} Even though we will not give a proof, let us at least sketch its idea: because one has no control over the growth or decay of the seminorms $\norm{f}_{a \alpha}$, maxima or sums of seminorms are finite if and only if only finitely many of them enter. \medskip \noindent As mentioned before, we can interpret suitable functions $g$ as tempered distributions. In particular, every Schwartz function $g \in \Schwartz(\R^d)$ defines a tempered distribution so that \begin{align*} \bigl ( \partial_{x_k} g , f \bigr ) &= \int_{\R^d} \dd x \, \partial_{x_k} g(x) \, f(x) = - \int_{\R^d} \dd x \, g(x) \, \partial_{x_k} f(x) = \bigr ( g , - \partial_{x_k} f \bigr ) \end{align*} holds for any $f \in \Schwartz(\R^d)$. We can use the right-hand side to \emph{define} derivatives of distributions: \begin{definition}[Weak derivative] For $\alpha \in \N_0^d$ and $L \in \Schwartz'(\R^d)$, we define the weak or distributional derivative of $L$ as % \begin{align*} \bigl ( \partial_x^{\alpha} L , f \bigr ) := \bigl ( L , (-1)^{\abs{\alpha}} \partial_x^{\alpha} f \bigr ) , && \forall f \in \Schwartz(\R^d) . \end{align*} % \end{definition} \begin{example} % \begin{enumerate}[(i)] \item The weak derivative of $\delta$ is % \begin{align*} \bigl ( \partial_{x_k} \delta , f \bigr ) &= \bigl ( \delta , - \partial_{x_k} f \bigr ) = - \partial_{x_k} f(0) . \end{align*} % \item Let $g \in \Cont^{\infty}_{\mathrm{pol}}(\R^d)$. 
Then the weak derivative coincides with the usual derivative: integrating by parts, we get % \begin{align*} \bigl ( \partial_{x_k} g , f \bigr ) &= - \bigl ( g , \partial_{x_k} f \bigr ) = - \int_{\R^d} \dd x \, g(x) \, \partial_{x_k} f(x) \\ & = + \int_{\R^d} \dd x \, \partial_{x_k} g(x) \, f(x) . \end{align*} % \end{enumerate} % \end{example} Similarly, the Fourier transform can be extended to a bijection $\Schwartz'(\R^d) \longrightarrow \Schwartz'(\R^d)$. Theorem~\ref{S_and_Sprime:schwartz_functions:thm:unitarity_Fourier_on_S} tells us that if $g , f \in \Schwartz(\R^d)$, then \begin{align*} \bigl ( \Fourier g , f \bigr ) = \bigl ( g , \Fourier f \bigr ) \end{align*} holds. If we replace $g$ with an arbitrary tempered distribution, the right-hand side again serves as \emph{definition} of the left-hand side: \begin{definition}[Fourier transform on $\mathcal{S}'(\R^d)$] For any tempered distribution $L \in \Schwartz'(\R^d)$, we define its Fourier transform to be % \begin{align*} \bigl ( \Fourier L , f \bigr ) := \bigl ( L , \Fourier f \bigr ) && \forall f \in \Schwartz(\R^d) . \end{align*} % \end{definition} \begin{example} % \begin{enumerate}[(i)] \item The Fourier transform of $\delta$ is the constant function $(2\pi)^{-\nicefrac{d}{2}}$, % \begin{align*} \bigl ( \Fourier \delta , f \bigr ) &= \bigl ( \delta , \Fourier f \bigr ) = \Fourier f (0) = \frac{1}{(2\pi)^{\nicefrac{d}{2}}}\int_{\R^d} \dd x \, f(x) \\ &= \bigl ( (2\pi)^{-\nicefrac{d}{2}} , f \bigr ) .
\end{align*} % \item The Fourier transform of $x^2$ makes sense as a tempered distribution on $\R$: $x^2$ is a polynomially bounded function and thus defines a tempered distribution via equation~\eqref{S_and_Sprime:Sprime:eqn:distributions_of_functions}: % \begin{align*} \bigl ( \Fourier x^2 , f \bigr ) &= \bigl ( x^2 , \Fourier f \bigr ) = \int_{\R} \dd x \, x^2 \, \frac{1}{(2\pi)^{\nicefrac{1}{2}}} \int_{\R} \dd \xi \, \e^{- \ii x \cdot \xi} \, f(\xi) \\ &= \frac{1}{(2\pi)^{\nicefrac{1}{2}}} \int_{\R} \dd \xi \int_{\R} \dd x \, (+\ii)^2 \partial_{\xi}^2 \e^{- \ii x \cdot \xi} \, f(\xi) \\ &= (-1)^2 \cdot (-1) \, \int_{\R} \dd \xi \left ( \frac{1}{(2\pi)^{\nicefrac{1}{2}}} \int_{\R} \dd x \, \e^{- \ii x \cdot \xi} \right ) \, \partial_{\xi}^2 f(\xi) \\ &= - \int_{\R} \dd \xi \, (2\pi)^{\nicefrac{1}{2}} \, \delta(\xi) \, \partial_{\xi}^2 f(\xi) \\ &= - (2\pi)^{\nicefrac{1}{2}} \, \partial_{\xi}^2 f(0) = \bigl ( (2\pi)^{\nicefrac{1}{2}} \delta , - \partial_{\xi}^2 f \bigr ) = \bigl ( - (2\pi)^{\nicefrac{1}{2}} \delta'' , f \bigr ) \end{align*} % This is consistent with what we have shown earlier in Theorem~\ref{S_and_Sprime:schwartz_functions:thm:Fourier_is_bijection}, namely % \begin{align*} \Fourier \bigl ( x^2 f \bigr ) = (+ \ii \partial_{\xi})^2 \Fourier f = - \partial_{\xi}^2 \Fourier f . \end{align*} % \end{enumerate} % \end{example} We have just computed Fourier transforms of functions that do not have Fourier transforms in the usual sense. We can apply the idea we have used to define the derivative and Fourier transform on $\Schwartz'(\R^d)$ to other operators initially defined on $\Schwartz(\R^d)$. Before we do that though, we need to introduce the appropriate notion of continuity on $\Schwartz'(\R^d)$. \begin{definition}[Weak-$\ast$ convergence] Let $\Schwartz$ be a metric space with dual $\Schwartz'$. 
A sequence $(L_n)$ in $\Schwartz'$ is said to converge to $L \in \Schwartz'$ in the weak-$\ast$ sense if % \begin{align*} L_n(f) \xrightarrow{n \to \infty} L(f) \end{align*} % holds for all $f \in \Schwartz$. We will write $\wastlim_{n \to \infty} L_n = L$. \end{definition} This notion of convergence implies a notion of continuity and is crucial for the next theorem. \begin{theorem}\label{S_and_Sprime:Sprime:thm:weak-star_continuity} Let $A : \Schwartz(\R^d) \longrightarrow \Schwartz(\R^d)$ be a linear continuous map. Then the map $A' : \Schwartz'(\R^d) \longrightarrow \Schwartz'(\R^d)$ defined for all $L \in \Schwartz'(\R^d)$ by % \begin{align} \bigl ( A' L , f \bigr ) := \bigl ( L , A f \bigr ) , && f \in \Schwartz(\R^d) , \end{align} % is a weak-$\ast$ continuous linear map. \marginpar{2014.01.16} \end{theorem} Put in the terms of Chapter~\ref{operators:adjoint}, $A'$ is the \emph{adjoint} of $A$. \begin{proof} First of all, $A'$ is linear and well-defined: $A' L = L \circ A$ is the composition of continuous linear maps and hence itself a continuous linear functional on $\Schwartz(\R^d)$. To show weak-$\ast$ continuity, let $(L_n)$ be a sequence of tempered distributions which converges in the weak-$\ast$ sense to $L \in \Schwartz'(\R^d)$. Then % \begin{align*} \bigl ( A' L_n , f \bigr ) &= \bigl ( L_n , A f \bigr ) \xrightarrow{n \to \infty} \bigl ( L , A f \bigr ) = \bigl ( A' L , f \bigr ) \end{align*} % holds for all $f \in \Schwartz(\R^d)$ and $A'$ is weak-$\ast$ continuous. \end{proof} As a last consequence, we can extend the convolution from $\ast : \Schwartz(\R^d) \times \Schwartz(\R^d) \longrightarrow \Schwartz(\R^d)$ to \begin{align*} \ast &: \Schwartz'(\R^d) \times \Schwartz(\R^d) \longrightarrow \Schwartz'(\R^d) \\ \ast &: \Schwartz(\R^d) \times \Schwartz'(\R^d) \longrightarrow \Schwartz'(\R^d) .
\end{align*} For any $f , g , h \in \Schwartz(\R^d)$, we can push the convolution from one argument of the duality bracket to the other, \begin{align*} \bigl ( f \ast g , h \bigr ) &= \bigl ( g \ast f , h \bigr ) = \int_{\R^d} \dd y \, (f \ast g)(y) \, h(y) = \int_{\R^d} \dd y \int_{\R^d} \dd x \, f(x) \, g(y-x) \, h(y) \\ & = \int_{\R^d} \dd x \, f(x) \, \bigl ( g(- \; \cdot) \ast h \bigr )(x) = \bigl ( f , g(- \; \cdot) \ast h \bigr ) . \end{align*} Thus, we define \begin{definition}[Convolution on $\Schwartz'(\R^d)$] Let $L \in \Schwartz'(\R^d)$ and $f \in \Schwartz(\R^d)$. Then the convolution of $L$ and $f$ is defined as % \begin{align} \bigl ( L \ast f , g \bigr ) := \bigl ( L , f(- \; \cdot) \ast g \bigr ) && \forall g \in \Schwartz(\R^d) . \label{S_and_Sprime:eqn:convolution_Sprime} \end{align} % \end{definition} By Theorem~\ref{S_and_Sprime:Sprime:thm:weak-star_continuity}, this extension of the convolution is weak-$\ast$ continuous. Moreover, the convolution has a neutral element in $\Schwartz'(\R^d)$, the delta distribution $\delta = \delta_0$: for all $f , g \in \Schwartz(\R^d)$ \begin{align*} \bigl ( \delta \ast f , g \bigr ) &= \bigl ( \delta , f(- \; \cdot) \ast g \bigr ) = \bigl ( f(- \; \cdot) \ast g \bigr )(0) \\ &= \int_{\R^d} \dd y \, f \bigl ( - (0 -y) \bigr ) \, g(y) = (f , g) \end{align*} holds, and thus we can succinctly write \begin{align} \delta \ast f = f . \label{Fourier:R:eqn:delta_unit_convolution} \end{align} In view of this, we can better understand what Dirac sequences are (\cf Definitions~\ref{Fourier:T:defn:approximate_id} and \ref{Fourier:R:defn:approximate_id}): since integrable functions define tempered distributions (\cf equation~\eqref{S_and_Sprime:Sprime:eqn:distributions_of_functions}), any Dirac sequence $\bigl ( \delta_{\eps} \bigr )_{\eps \in (0,\eps_0)}$ can be seen as a sequence of tempered distributions. 
Moreover, since the inclusion $\imath : L^1(\R^d) \longrightarrow \Schwartz'(\R^d)$ is continuous and $\delta_{\eps} \ast f$ converges to $f$ in $L^1(\R^d)$, this sequence also converges in the distributional sense. Hence, $\delta_{\eps} \rightarrow \delta$ holds in the distributional sense as $\eps \to 0$. \section{Partial differential equations on $\Schwartz'(\R^d)$} \label{S_and_Sprime:PDEs_on_Sprime} We have extended the most common operations, taking Fourier transform, derivatives and the convolution, from Schwartz functions to tempered distributions. Hence, we have managed to ascribe meaning to the partial differential equation \begin{align*} L U := \sum_{\abs{\alpha} \leq N} c(\alpha) \, \partial_x^{\alpha} U = F \end{align*} even if $U , F \in \Schwartz'(\R^d)$ are tempered distributions, and we can ask whether $L U = F$ has a solution. More precisely, the above equation means that \begin{align*} \bigl ( L U , \varphi \bigr ) &= \sum_{\abs{\alpha} \leq N} (-1)^{\abs{\alpha}} \, c(\alpha) \, \bigl ( U , \partial_x^{\alpha} \varphi \bigr ) = \bigl ( F , \varphi \bigr ) \end{align*} holds for all test functions $\varphi \in \Schwartz(\R^d)$. This point of view is commonly used when considering differential equations and applying them to functions for which derivatives in the ordinary sense do not exist. \medskip \noindent To give an explicit example, let us reconsider the heat equation \begin{align*} \partial_t u(t) = D \Delta_x u(t) , && u(0) = u_0 . \end{align*} In the context of $L^1(\R^d)$, the unique solution $u(t) = G(t) \ast u_0$ to the initial value problem (Theorem~\ref{Fourier:R:thm:uniqueness_solution_heat_equation_L1}) involves the fundamental solution \begin{align*} G(t,x) &= \frac{1}{(4 \pi D t)^{\nicefrac{d}{2}}} \, \e^{- \frac{x^2}{4 D t}} . \end{align*} Seeing as $G(t)$ is a Gaußian for $t > 0$, it is also an element of $\Schwartz(\R^d)$, and thus convolving it with a \emph{bona fide} tempered distribution makes sense.
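For instance (a one-line computation added for illustration), convolving $G(t)$ with the $\delta$ distribution reproduces the fundamental solution itself: by equation~\eqref{S_and_Sprime:eqn:convolution_Sprime} and the symmetry $G(t,-x) = G(t,x)$, for all $\varphi \in \Schwartz(\R^d)$
%
\begin{align*}
	\bigl ( G(t) \ast \delta , \varphi \bigr ) &= \bigl ( \delta , G(t) \ast \varphi \bigr ) = \bigl ( G(t) \ast \varphi \bigr )(0) = \int_{\R^d} \dd y \, G(t,-y) \, \varphi(y) = \bigl ( G(t) , \varphi \bigr )
\end{align*}
%
holds, \ie $G(t) \ast \delta = G(t)$.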
It stands to reason that \begin{align*} U(t) := G(t) \ast U_0 \end{align*} solves the heat equation for the initial condition $U(0) = U_0 \in \Schwartz'(\R^d)$: first of all, $U(t)$ satisfies the initial condition $U(0) = U_0$, because for all $\varphi \in \Schwartz$ \begin{align*} \lim_{t \searrow 0} \bigl ( U(t) , \varphi \bigr ) &= \lim_{t \searrow 0} \bigl ( G(t) \ast U_0 , \varphi \bigr ) \\ & = \lim_{t \searrow 0} \bigl ( U_0 , G(t) \ast \varphi \bigr ) \end{align*} holds. Going from the first to the second line involves the definition of the convolution on $\Schwartz'(\R^d)$, equation~\eqref{S_and_Sprime:eqn:convolution_Sprime}, as well as $G(t,-x) = G(t,x)$. Given that $G(t)$ is a Dirac sequence, the limit $\lim_{t \searrow 0} G(t) \ast \varphi = \varphi$ exists in $L^1(\R^d)$; in fact, $G(t) \ast \varphi$ converges to the \emph{Schwartz function} $\varphi$, and we deduce \begin{align*} \lim_{t \searrow 0} \bigl ( U(t) , \varphi \bigr ) &= \bigl ( U_0 , \varphi \bigr ) . \end{align*} Moreover, $U(t)$ solves the heat equation on $\Schwartz'(\R^d)$: \begin{align*} \Bigl ( \tfrac{\dd}{\dd t} U(t) , \varphi \Bigr ) &= \frac{\dd}{\dd t} \bigl ( U_0 , G(t) \ast \varphi \bigr ) = \bigl ( U_0 , D \, \Delta_x G(t) \ast \varphi \bigr ) \\ &= \bigl ( D \, \Delta_x G(t) \ast U_0 , \varphi \bigr ) . \end{align*} Here, we have used that $\partial_t G(t) = D \, \Delta_x G(t)$ holds pointwise for $t > 0$. Hence, $U(t)$ is a solution to the heat equation with initial condition $U_0$. Note that just like in the case of integrable functions, showing uniqueness involves additional conditions on $\partial_t U(t)$ and the initial condition $U_0$. \marginpar{2014.01.21} \section{Other common spaces of distributions} \label{S_and_Sprime:others} The ideas outlined in the last two sections can be applied to other spaces of test functions: one starts with a space of “nice” functions and the distributions are then comprised of the linear continuous functionals on that space. Operations such as derivatives are extended to distributions via the adjoint operator.
Instead of Schwartz functions, often $\Cont^{\infty}_{\mathrm{c}}(\R^d)$ is used. However, working with this space is a little bit more unwieldy as it is not stable under the Fourier transform -- which also implies that $\Fourier$ does \emph{not} extend to a map $\Fourier : \Cont^{\infty}_{\mathrm{c}}(\R^d)' \longrightarrow \Cont^{\infty}_{\mathrm{c}}(\R^d)'$. Moreover, one often works on bounded domains, \ie sufficiently regular bounded subsets $\Omega \subset \R^d$. Here, the distributions are the dual of $\Cont^{\infty}(\Omega)$. Smoothness is also optional: for instance, the Dirac distribution $\delta_{x_0}(f) = f(x_0)$ is also defined on the bounded continuous functions $\Cont_{\mathrm{b}}(\R^d)$. \chapter{Green's functions} \label{Greens_functions} The basis of this chapter is equation~\eqref{Fourier:R:eqn:delta_unit_convolution}: assume we are interested in solutions of the \emph{inhomogeneous} equation \begin{align} L u = f \label{Greens_functions:eqn:fundamental_PDE} \end{align} where we may take the differential operator $L$ to be of the form \begin{align*} L = \sum_{\abs{\alpha} \leq N} c(\alpha) \, \partial_x^{\alpha} , \end{align*} for instance. In addition, we may impose boundary conditions if this equation is defined on a subset of $\R^d$ with boundary. Formally, the solution to \eqref{Greens_functions:eqn:fundamental_PDE} can be written as $u = L^{-1} f$ in case $L$ is invertible, but clearly, this is not very helpful as is. A more fruitful approach starts with the observation that also in the distributional sense \begin{align*} \partial_x^{\alpha} \bigl ( G \ast f \bigr ) &= (\partial_x^{\alpha} G) \ast f = G \ast (\partial_x^{\alpha} f) \end{align*} holds true for any $G \in \Schwartz'(\R^d)$ and $f \in \Schwartz(\R^d)$.
Hence, \emph{if} we can write the solution $u = G \ast f$ as the convolution of the inhomogeneity $f$ with some tempered distribution $G$, then $G$ necessarily satisfies \begin{align*} (L u)(x) &= L \bigl ( G \ast f \bigr )(x) = \bigl ( L G \ast f \bigr )(x) \\ &\overset{!}{=} f(x) = \bigl ( \delta_x , f \bigr ) , \end{align*} and we immediately obtain an equation for $G$, the \emph{Green's function} or \emph{fundamental solution}, that is \emph{independent} of $f$: \begin{align} L G(x) = \delta_x \label{Greens_functions:eqn:Greens_ansatz} \end{align} Once we solve this equation for $G$, we obtain a solution of \eqref{Greens_functions:eqn:fundamental_PDE} by setting \begin{align*} u(x) &= (G \ast f)(x) = \int_{\R^d} \dd y \, G(x-y) \, f(y) . \end{align*} The partial differential operator $L$ as given above has \emph{translational symmetry}. In a more general setting, where translational symmetry is absent, the Green's function depends on \emph{two} variables $G(x,y)$ and the solution here is related to the inhomogeneity via \begin{align*} u(x) = \int_{\R^d} \dd y \, G(x,y) \, f(y) . \end{align*} The purpose of this chapter is to compute Green's functions for specific cases and explore some of the caveats. Even though \emph{a priori} it is not clear that Green's functions are actually defined in terms of a \emph{function} (as opposed to a \emph{bona fide} distribution), in many cases it turns out they are. \section{Matrix equations as a prototypical example} \label{Greens_functions:matrix_eqns} Another way to understand Green's functions is to appeal to the theory of matrices: assume $A \in \mathrm{Mat}_{\C}(n)$ is invertible and we are looking for solutions $x \in \C^n$ of the equation \begin{align*} A x = y \end{align*} for some fixed $y \in \C^n$. 
We can expand $y = \sum_{j = 1}^n y_j \, e_j$ in terms of the canonical basis vectors $e_j = ( 0 , \ldots , 0 , 1 , 0 , \ldots , 0 )$, and if we solve \begin{align*} A g_j = e_j \end{align*} for all $j = 1 , \ldots , n$, then we obtain $x = \sum_{j = 1}^n y_j \, g_j$ as the solution of $A x = y$. Moreover, the matrix $G := \bigl ( g_1 \vert \cdots \vert g_n \bigr )$ whose columns are comprised of the vectors $g_j$ satisfies \begin{align*} x = G y . \end{align*} Put another way, here $G$ is just the \emph{matrix inverse} of $A$. This already points to one fundamental obstacle for the existence of Green's functions, namely it hinges on the invertibility of $A$. The story is more complicated if $A$ is not invertible. For instance, the equation $A x = y$ also makes sense in case $A \in \mathrm{Mat}_{\C}(n,m)$ is a rectangular matrix, and the vectors $x \in \C^m$ and $y \in \C^n$ are from vector spaces of different dimension. Here, it is not clear whether a unique \emph{left}-inverse $G \in \mathrm{Mat}_{\C}(m,n)$ exists: there may be cases when one can find no such $G$ ($A x = y$ has no solution) or when there is a family of left-inverses ($A x = y$ does not have a unique solution). \medskip \noindent In the same vein, the Green's \emph{function} $G(x,y)$ gives the response at $x$ to a unit impulse at $y$. The solution $u(x) = G(x) \ast f$ at $x$ can be seen as the “infinite superposition” of $G(x,y) \, f(y)$ where the unit impulse at $y$ is scaled by the inhomogeneity $f(y)$. \section{Simple examples} \label{Greens_functions:simple_examples} To exemplify the general method, we solve \eqref{Greens_functions:eqn:Greens_ansatz} for a particularly simple case, the \emph{one-dimensional Poisson equation} \begin{align} - \partial_x^2 u = f . \label{Greens_functions:eqn:1d_Poisson} \end{align} According to our discussion, we first need to solve \begin{align} - \partial_x^2 G(x,y) = \delta(x-y) \label{Greens_functions:eqn:1d_Poisson_fundamental} \end{align} for $G$.
Put another way, $G$ is the second anti-derivative of the Dirac distribution $\delta$, which can be found “by hand”, namely \begin{align*} G(x,y) &= - \rho(x-y) + a x + b \end{align*} where \begin{align*} \rho(x) &= % \begin{cases} 0 & x \in (-\infty,0) \\ x & x \in [0,+\infty) \\ \end{cases} % . \end{align*} It is instructive to verify \eqref{Greens_functions:eqn:1d_Poisson_fundamental}: clearly, the term $a x + b$ is harmless, because for smooth functions weak derivatives (meaning derivatives on $\Schwartz'(\R^d)$) and ordinary derivatives (of $\Cont^k(\R^d)$ functions) coincide. Thus, $- \partial_x^2 G(x,y) = \partial_x^2 \rho(x-y)$ holds and \eqref{Greens_functions:eqn:1d_Poisson_fundamental} follows from the fact that the derivative of the \emph{Heaviside function} \begin{align*} \theta(x) &= % \begin{cases} 0 & x \in (-\infty,0) \\ 1 & x \in [0,+\infty) \\ \end{cases} % \end{align*} is the Dirac distribution. Note that it does not matter how we define $\rho(0)$ and $\theta(0)$: this is just a modification on a set of measure $0$. For instance, it is the size of the jump \begin{align*} \lim_{x \searrow 0} \theta(x) - \lim_{x \nearrow 0} \theta(x) = 1 \end{align*} which matters, and its size is independent of how we define $\theta(0)$ and $\rho(0)$. So far we have derived the Green's function $G$ for the one-dimensional Poisson equation on all of $\R$. $G$ depends on the two parameters $a , b \in \R$, \ie $G$ is \emph{not} unique. Additional conditions, \eg \emph{boundary conditions}, are needed to nail down a \emph{unique} solution. For instance, we could restrict \eqref{Greens_functions:eqn:1d_Poisson} to the interval $[0,1]$ and use Dirichlet boundary conditions $u(0) = 0 = u(1)$. Note that we need \emph{two} conditions to fix the values of the \emph{two} parameters $a$ and $b$.
To determine the values of $a$ and $b$ (which depend on the second variable $y$), we solve $G(0,y) = b = 0$ and \begin{align*} G(1,y) &= - (1 - y) + a = 0 \end{align*} for $a$ and $b$, and obtain \begin{align*} G(x,y) &= - \rho(x-y) + (1-y) \, x = % \begin{cases} (1-y) \, x & x \in [0,y] \\ y \, (1 - x) & x \in (y,1] \\ \end{cases} % . \end{align*} The solution $u(x) = G(x) \ast f$ satisfies the boundary conditions: from $G(0,y) = 0$ we immediately deduce \begin{align*} u(0) = \int_0^1 \dd y \, G(0,y) \, f(y) = 0 \end{align*} and similarly $u(1) = 0$ follows from $G(1,y) = 0$. \medskip \noindent Another example is the PDE from homework problem 42, \begin{align*} L u := \bigl ( \partial_{x_1}^2 + 2 \partial_{x_2}^2 + 3 \partial_{x_1} - 4 \bigr ) u = f , \end{align*} where the Fourier representation of $L$ is the multiplication operator associated to the polynomial $P(\xi) = - \xi_1^2 - 2 \xi_2^2 + \ii \, 3 \xi_1 - 4$. The inverse of this polynomial enters the solution \begin{align*} u(x) = \frac{1}{2 \pi} \, \bigl ( \Fourier^{-1} (\nicefrac{1}{P}) \bigr ) \ast f (x) , \end{align*} one can then read off the Green's function as \begin{align*} G(x,y) = \frac{1}{2 \pi} \, \bigl ( \Fourier^{-1} (\nicefrac{1}{P}) \bigr )(x-y) . \end{align*} \section{Green's functions on $\R^d$} \label{Greens_functions:Rd} The second example gives a strategy to \emph{compute} Green's functions on $\R^d$: the crucial ingredient here is the translational symmetry of the differential operator $L$, \ie $L$ commutes with the translation operator $(T_y u)(x) := u(x-y)$, $y \in \R^d$. Explicitly, we will discuss the Poisson equation \begin{align} - \Delta_x u = f \label{Greens_functions:eqn:Poisson_eqn} \end{align} in dimension $2$ and $3$ as well as the three-dimensional wave equation \begin{align} \bigl ( \tfrac{1}{c^2} \partial_t^2 - \Delta_x \bigr ) u = f . 
\label{Greens_functions:eqn:wave_eqn} \end{align} Since the strategy to solve these equations is identical, we will outline how to solve $L u = f$ for a differential operator of the form \begin{align*} L = \sum_{\abs{\alpha} \leq N} c(\alpha) \, \partial_x^{\alpha} . \end{align*} Then in Fourier representation, \begin{align*} \Fourier \bigl ( L u \bigr ) &= \Fourier \, L \, \Fourier^{-1} \, \Fourier u = P \hat{u} \overset{!}{=} \hat{f} , \end{align*} the differential operator $L$ transforms to multiplication by the polynomial \begin{align*} P(\xi) = \sum_{\abs{\alpha} \leq N} \ii^{\abs{\alpha}} \, c(\alpha) \, \xi^{\alpha} , \end{align*} and $u(x) = G(x) \ast f$ can be expressed as the convolution of the Green's function \begin{align*} G(x,y) := (2\pi)^{-\nicefrac{d}{2}} \, \bigl ( \Fourier^{-1} (\nicefrac{1}{P}) \bigr )(x-y) \end{align*} with the inhomogeneity $f$. In other words, the problem of \emph{finding the Green's function reduces to computing the inverse Fourier transform of the rational function $\tfrac{1}{P}$}. For instance, one can interpret $\Fourier^{-1} (\nicefrac{1}{P})$ as an integral in the complex plane, use the method of partial fractions and employ Cauchy's integral formula. \medskip \noindent For the special case of the Poisson equation $- \Delta_x u = f$, another way to obtain the Green's function relies on Green's formula \begin{align} \int_{V} \dd x \, \bigl ( \Delta_x u(x) \; v(x) - u(x) \; \Delta_x v(x) \bigr ) = \int_{\partial V} \dd S \cdot \bigl ( \partial_n u(x) \; v(x) - u(x) \; \partial_n v(x) \bigr ) . \label{Greens_functions:eqn:Greens_formula} \end{align} Here, $V$ is a subset of $\R^d$ with boundary $\partial V$ and $u$ or $v$ has compact support.
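The Fourier-multiplier recipe can be tried out numerically on the example operator $L = \partial_{x_1}^2 + 2 \partial_{x_2}^2 + 3 \partial_{x_1} - 4$ from above: on a periodic grid the FFT plays the role of $\Fourier$, and since $\mathrm{Re} \, P(\xi) \leq -4 < 0$, division by the symbol is always safe. A sketch (grid size and the manufactured solution are illustrative choices):

```python
import numpy as np

# Sketch: solve L u = (d²/dx1² + 2 d²/dx2² + 3 d/dx1 - 4) u = f on the
# periodic box [0,2π)² by dividing by the symbol P in Fourier space.
# Since Re P(ξ) ≤ -4, the division is always safe.  The grid size and
# the manufactured solution are illustrative choices.
N = 32
s = 2*np.pi*np.arange(N)/N
x1, x2 = np.meshgrid(s, s, indexing="ij")

u_exact = np.sin(x1) * np.cos(2*x2)
# f = L u_exact, computed by hand for this trigonometric polynomial
f = -13*np.sin(x1)*np.cos(2*x2) + 3*np.cos(x1)*np.cos(2*x2)

k = 2*np.pi*np.fft.fftfreq(N, d=2*np.pi/N)    # integer wavenumbers
K1, K2 = np.meshgrid(k, k, indexing="ij")
P = -K1**2 - 2*K2**2 + 1j*3*K1 - 4            # symbol of L; never zero

u = np.real(np.fft.ifft2(np.fft.fft2(f) / P))
print(np.max(np.abs(u - u_exact)))            # ~ machine precision here
```

Because the manufactured solution is a trigonometric polynomial, the spectral solve recovers it up to rounding.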
\begin{theorem}[Green's function for the Poisson equation]\label{Greens_functions:thm:Poisson_Green} The Green's function for the Poisson equation is % \begin{align*} G(x,y) = % \begin{cases} \tfrac{1}{C_d} \, \abs{x-y}^{2-d} & d \neq 2 \\ - \frac{1}{2\pi} \, \ln \abs{x-y} & d = 2 \\ \end{cases} % \end{align*} % where $C_d = (d-2) \, \mathrm{Area}(\mathbb{S}^{d-1})$ and $\mathrm{Area}(\mathbb{S}^{d-1})$ is the surface area of the $d-1$-dimensional sphere. \end{theorem} \begin{proof} Let us prove the case $d \neq 2$; the arguments for the two-dimensional case are virtually identical, one only needs to replace $\abs{x}^{2-d}$ with $\ln \abs{x}$. First of all, $\abs{x}^{2-d}$ and $\ln \abs{x}$ define tempered distributions (use polar coordinates to show integrability, and arguments analogous to Lemma~\ref{S_and_Sprime:schwartz_functions:lem:Lp_estimate} to prove continuity). Moreover, away from $x = 0$, $\Delta_x \abs{x}^{2-d} = 0$ and $\Delta_x \ln \abs{x} = 0$ hold in the ordinary sense. Now let $\varphi \in \Schwartz(\R^d)$ be arbitrary. Since smooth functions with compact support are dense in $\Schwartz(\R^d)$, we may assume $\varphi \in \Cont^{\infty}_{\mathrm{c}}(\R^d)$. Define $V_{\eps} := \R^d \setminus B_{\eps}$ where $B_{\eps} := \bigl \{ x \in \R^d \; \vert \; \abs{x} < \eps \bigr \}$. On $V_{\eps}$ where we have cut out a small hole around the origin, $\Delta_x \abs{x}^{2-d} = 0$ holds and we can write \marginpar{2014.01.23} % \begin{align*} \Bigl ( - \Delta_x \abs{x}^{2-d} \, , \, \varphi \Bigr ) &= - \Bigl ( \abs{x}^{2-d} \, , \, \Delta_x \varphi \Bigr ) = - \lim_{\eps \searrow 0} \int_{V_{\eps}} \dd x \, \abs{x}^{2-d} \, \Delta_x \varphi(x) \\ &= \lim_{\eps \searrow 0} \int_{V_{\eps}} \dd x \, \Bigl ( \Delta_x \abs{x}^{2-d} \, \varphi(x) - \abs{x}^{2-d} \, \Delta_x \varphi(x) \Bigr ) .
\end{align*} % Now Green's formula applies, % \begin{align*} \ldots &= \lim_{\eps \searrow 0} \int_{\partial V_{\eps}} \dd S \cdot \Bigl ( \partial_r \abs{x}^{2-d} \, \varphi(x) - \abs{x}^{2-d} \, \partial_r \varphi(x) \Bigr ) \\ &= - \lim_{\eps \searrow 0} \int_{\partial B_{\eps}} \dd S \cdot \Bigl ( (2-d) \, \eps^{1-d} \, \varphi(x) - \eps^{2-d} \, \partial_r \varphi(x) \Bigr ) , \end{align*} % and we obtain an integral with respect to $\dd S$, the surface measure of the sphere of radius $\eps$. Note that the minus sign is due to the difference in orientation (the outward normal on $\partial V_{\eps}$ points towards the origin while the surface normal of $\partial B_{\eps}$ points away from it). Since the surface area of $\partial B_{\eps}$ scales like $\eps^{d-1}$, the second term vanishes while the first term converges to % \begin{align*} \Bigl ( - \Delta_x \abs{x}^{2-d} \, , \, \varphi \Bigr ) &= (d-2) \, \mathrm{Area}(\mathbb{S}^{d-1}) \, \varphi(0) \\ &= (d-2) \, \mathrm{Area}(\mathbb{S}^{d-1}) \, (\delta , \varphi) . \end{align*} % \end{proof} \section{Green's functions on domains: implementing boundary conditions} \label{Greens_functions:domains} Predictably, the presence of boundaries complicates things. To exemplify some of the hurdles, let us discuss the Poisson equation \begin{align} - \Delta_x u = f , \qquad \qquad u \vert_{\partial \Omega} = h \in \Cont^1(\partial \Omega) , \label{Greens_function:eqn:Poisson_boundary} \end{align} on a bounded subset $\Omega$ of $\R^2$ with Dirichlet boundary conditions, \ie we prescribe the value of $u$ on the boundary (rather than that of its normal derivative). Imposing boundary conditions is necessary, because $- \Delta_x$ is not injective: the kernel of $- \Delta_x$ is made up of \emph{harmonic functions}, \ie functions which satisfy \begin{align*} - \Delta_x h = 0 . \end{align*} For instance, the function $h(x) = \e^{x_1} \sin x_2$ is harmonic.
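As a quick numerical confirmation that $h(x) = \e^{x_1} \sin x_2$ is harmonic (indeed $\partial_{x_1}^2 h = h$ and $\partial_{x_2}^2 h = - h$ cancel exactly), one can evaluate a five-point finite-difference Laplacian at a few sample points; the step size and sample points are arbitrary choices:

```python
import numpy as np

# Sketch: numerically confirm that h(x) = exp(x1) sin(x2) is harmonic.
# The five-point finite-difference Laplacian should vanish up to O(d²);
# the step size d and the sample points are arbitrary choices.
def h(x1, x2):
    return np.exp(x1) * np.sin(x2)

d = 1e-2
pts = np.random.default_rng(0).uniform(-1, 1, size=(5, 2))
laps = [abs((h(x1 + d, x2) + h(x1 - d, x2) + h(x1, x2 + d) + h(x1, x2 - d)
             - 4*h(x1, x2)) / d**2)
        for x1, x2 in pts]
print(max(laps))    # O(d²): about 1e-4 or smaller here
```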
Consequently, if $u$ is \emph{a} solution to \eqref{Greens_function:eqn:Poisson_boundary}, then $u + h$ is another. However, it turns out that fixing $u$ on the boundary $\partial \Omega$ singles out a \emph{unique} solution. The first step in this direction is the following representation of a sufficiently regular function on $\Omega$: \begin{proposition}\label{Greens_functions:prop:rep_solution_Poisson_domain} Let $\Omega \subset \R^2$ be a bounded domain with piecewise $\Cont^1$ boundary $\partial \Omega$ and pick $u \in \Cont^2(\Omega) \cap \Cont^1(\bar{\Omega})$. Then for any $x \in \Omega$, we can write % \begin{align*} u(x) &= - \int_{\Omega} \dd y \, G(x,y) \, \Delta_y u(y) \, + \\ &\qquad + \int_{\partial \Omega} \dd S_y \cdot \Bigl ( G(x,y) \, \partial_{n_y} u(y) - \partial_{n_y} G(x,y) \, u(y) \Bigr ) \end{align*} % where the index $y$ in the surface measure $\dd S_y$ and the surface normal $n_y$ indicates that they are associated to the variable $y$. \end{proposition} The idea here is to exploit $- \Delta_x G(x,y) = \delta(x-y)$ as well as Green's formula~\eqref{Greens_functions:eqn:Greens_formula}, and \emph{formally}, we immediately obtain \begin{align*} \int_{\Omega} \dd y \, &\Bigl ( \Delta_y G(x,y) \; u(y) - G(x,y) \; \Delta_y u(y) \Bigr ) = \\ &\qquad \qquad \qquad = - u(x) - \int_{\Omega} \dd y \, G(x,y) \; \Delta_y u(y) \\ &\qquad \qquad \qquad = \int_{\partial \Omega} \dd S \cdot \Bigl (\partial_n G(x,y) \; u(y) - G(x,y) \; \partial_n u(y) \Bigr ) \end{align*} To make this rigorous, one has to cut a small hole around $x$ and adapt the strategy from the proof of Theorem~\ref{Greens_functions:thm:Poisson_Green}. However, we will skip the proof.
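The representation formula of the proposition can at least be checked numerically on the unit disk for the harmonic function $u(x) = x_1^2 - x_2^2$, for which the volume term drops out; on the unit circle $u = \cos 2\theta$ and $\partial_n u = 2 \cos 2\theta$. The evaluation point and quadrature resolution below are arbitrary choices:

```python
import numpy as np

# Sketch: check the representation formula on the unit disk for the
# harmonic u(x) = x1² - x2² (so the volume term vanishes).  On the
# unit circle u = cos(2θ) and ∂_n u = 2 cos(2θ); x0 is arbitrary.
M = 4096
theta = 2*np.pi*np.arange(M)/M
y = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # boundary points
x0 = np.array([0.3, 0.2])

diff = y - x0
r2 = np.sum(diff**2, axis=1)
G = -np.log(r2) / (4*np.pi)                     # G(x0,y) = -ln|x0-y|/(2π)
dnG = -np.sum(diff * y, axis=1) / (2*np.pi*r2)  # outward normal derivative in y

u_bdry = np.cos(2*theta)
dnu = 2*np.cos(2*theta)

u_x0 = (2*np.pi/M) * np.sum(G*dnu - dnG*u_bdry)
print(u_x0)    # close to u(x0) = 0.3² - 0.2² = 0.05
```

Since $x_0$ lies strictly inside the disk, the integrand is smooth and periodic, so the trapezoid rule converges extremely fast.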
\begin{theorem}\label{Greens_functions:thm:solution_Poisson_domain} The Green's function for the Dirichlet boundary value problem for the Poisson equation on a bounded domain $\Omega \subset \R^2$ with $\Cont^1$ boundary $\partial \Omega$ has the form % \begin{align*} G_{\Omega}(x,y) = G(x,y) + b(x,y) \end{align*} % where $G(x,y) = - \frac{1}{2\pi} \, \ln \abs{x-y}$ is the Green's function of the free problem and $b(x,y)$ is the solution to the boundary value problem % \begin{align} \Delta_x b = 0 , \qquad \qquad b(x,y) = - G(x,y) \; \forall x \in \partial \Omega \mbox{ or } \forall y \in \partial \Omega . \label{Greens_function:eqn:Greens_function_harmonic_term} \end{align} % Then the solution $u$ to the inhomogeneous Dirichlet problem~\eqref{Greens_function:eqn:Poisson_boundary} is given by % \begin{align} u(x) &= \int_{\Omega} \dd y \, G_{\Omega}(x,y) \, f(y) - \int_{\partial \Omega} \dd S_y \cdot \partial_{n_y} G_{\Omega}(x,y) \, h(y) . \label{Greens_functions:eqn:solution_Poisson_domain} \end{align} % \end{theorem} A priori it is not clear whether \eqref{Greens_function:eqn:Greens_function_harmonic_term} has a solution; we postpone a more in-depth discussion of harmonic functions and proceed under the assumption it does. \begin{proof} Clearly, seeing as $G_{\Omega}$ is the sum of the free fundamental solution $G$ and a harmonic function, $- \Delta_x G_{\Omega}(x,y) = \delta(x-y)$ still holds. Moreover, the Green's function implements the boundary conditions, namely % \begin{align*} G_{\Omega}(x,y) = G(x,y) + b(x,y) = 0 \end{align*} % is satisfied by construction on the boundary. Then Green's identity~\eqref{Greens_functions:eqn:Greens_formula} and $\Delta_x b = 0$ imply % \begin{align*} 0 &= - \int_{\Omega} \dd y \, b(x,y) \, \Delta_y u(y) + \int_{\partial \Omega} \dd S_y \cdot \Bigl ( b(x,y) \; \partial_{n_y} u(y) - \partial_{n_y} b(x,y) \; u(y) \Bigr ) .
\end{align*} % The function $u$ solves the inhomogeneous Poisson equation $- \Delta_x u = f$; on the boundary, $b(x,y)$ coincides with $- G(x,y)$ and $u(y) = h(y)$ holds, and we deduce % \begin{align*} \int_{\partial \Omega} \dd S_y \cdot G(x,y) \; \partial_{n_y} u(y) &= \int_{\Omega} \dd y \, b(x,y) \, f(y) - \int_{\partial \Omega} \dd S_y \cdot \partial_{n_y} b(x,y) \; h(y) . \end{align*} % This term cancels one of the boundary terms in the integral representation of Proposition~\ref{Greens_functions:prop:rep_solution_Poisson_domain}, and we recover equation~\eqref{Greens_functions:eqn:solution_Poisson_domain}, % \begin{align*} u(x) &= - \int_{\Omega} \dd y \, G(x,y) \, \Delta_y u(y) \, + \\ &\qquad + \int_{\partial \Omega} \dd S_y \cdot \Bigl ( G(x,y) \, \partial_{n_y} u(y) - \partial_{n_y} G(x,y) \, u(y) \Bigr ) \\ &= \int_{\Omega} \dd y \, G_{\Omega}(x,y) \, f(y) - \int_{\partial \Omega} \dd S_y \cdot \partial_{n_y} G_{\Omega}(x,y) \, h(y) . \end{align*} % \end{proof} \chapter{Quantum mechanics} \label{quantum} The explanation of the photoelectric effect through light \emph{quanta} is the namesake for quantum mechanics. Quantization here refers to the idea that energy stored in light comes in “chunks” known as \emph{photons}, and that the energy per photon depends only on the frequency. This is quite a departure from the classical theory of light through Maxwell's equations (\cf Chapter~\ref{operators:Maxwell}). The reader can only get a glimpse of quantum theory in this chapter. A good standard physics textbook on the subject is \cite{Sakurai:quantum_mechanics:1994} while the mathematics of quantum mechanics is covered in more depth in \cite{Teschl:quantum_mechanics:2009,Gustafson_Sigal:quantum_mechanics:2011}.
\section{Paradigms} \label{quantum:paradigms} The simplest \emph{bona fide} quantum system is that of a quantum spin, and it can be used to give an effective description of the \emph{Stern-Gerlach experiment} where a beam of neutral atoms with magnetic moment $g$ is sent through a magnet with inhomogeneous magnetic field $B = (B_1 , B_2 , B_3)$. It was observed experimentally that the beam splits in two rather than fanning out with a continuous distribution. Hence, the system behaves as if only two spin configurations, spin-up $\uparrow$ and spin-down $\downarrow$, are realized. A simplified (effective) model neglects the translational degree of freedom and focuses only on the internal spin degree of freedom. Then the energy observable, the \emph{hamiltonian}, is the matrix \begin{align*} H = g B \cdot S \end{align*} which involves the spin operator $S_j := \frac{\hbar}{2} \sigma_j$ defined in terms of Planck's constant $\hbar$ and the three Pauli matrices \begin{align*} \sigma_1 = \left ( % \begin{matrix} 0 & 1 \\ 1 & 0 \\ \end{matrix} % \right ) , \qquad \qquad \sigma_2 = \left ( % \begin{matrix} 0 & - \ii \\ + \ii & 0 \\ \end{matrix} % \right ) , \qquad \qquad \sigma_3 = \left ( % \begin{matrix} +1 & 0 \\ 0 & -1 \\ \end{matrix} % \right ) , \end{align*} and the magnetic moment $g$ and the magnetic field $B$. The prefactors of the Pauli matrices are real, and thus $H = H^*$ is a hermitian matrix. For instance, assume $B = (0,0,b)$ points in the $x_3$-direction. Then spin-up and spin-down (seen from the $x_3$-direction) are the \emph{eigenvectors} of \begin{align*} H = \left ( % \begin{matrix} + \frac{\hbar g b}{2} & 0 \\ 0 & - \frac{\hbar g b}{2} \\ \end{matrix} % \right ) , \end{align*} \ie $\psi_{\uparrow} = (1,0)$ and $\psi_{\downarrow} = (0,1)$. The dynamical equation is the \emph{Schrödinger equation} \begin{align} \ii \, \hbar \, \frac{\partial}{\partial t} \psi(t) &= H \psi(t) , && \psi(0) = \psi_0 \in \Hil .
\label{quantum:eqn:Schroedinger_eqn} \end{align} The vector space $\Hil = \C^2$ becomes a Hilbert space if we equip it with the scalar product \begin{align*} \scpro{\psi}{\varphi}_{\C^2} := \sum_{j = 1 , 2} \overline{\psi_j} \, \varphi_j . \end{align*} Moreover, the hermitian matrix $H$ can always be diagonalized (\cf exercise~35--36), and the eigenvectors to distinct eigenvalues are orthogonal. The complex-valued \emph{wave function} $\psi$ encapsulates probabilities: for any $\psi \in \C^2$ normalized to $1 = \norm{\psi}_{\C^2}$, the probability to find the particle in the spin-up configuration is \begin{align*} \mathbb{P}(\mathsf{S}_3 = \uparrow) = \sabs{\psi_1}^2 = \babs{\sscpro{\psi_{\uparrow}}{\psi}}^2 \end{align*} since $\psi_{\uparrow} = (1,0)$. The above notation comes from probability theory and means “the probability of finding the random observable spin $\mathsf{S}_3$ in the spin-$\uparrow$ configuration $+ \frac{\hbar}{2}$”. \marginpar{2014.02.04} \medskip \noindent The second exemplary quantum system describes a non-relativistic particle of mass $m$ subjected to an electric field generated by the potential $V$. The classical Hamilton \emph{function} $h(q,p) = \tfrac{1}{2m} p^2 + V(q)$ is then “quantized” to \begin{align*} h \bigl ( \hat{x},- \ii \hbar \nabla_x \bigr ) = H = \frac{1}{2 m} \, \bigl ( - \ii \hbar \nabla_x \bigr )^2 + V(\hat{x}) \end{align*} by replacing momentum $p$ by the momentum \emph{operator} $\mathsf{P} = - \ii \hbar \nabla_x$ and position $q$ by the multiplication operator $\mathsf{Q} = \hat{x}$.\footnote{To find a consistent quantization procedure is highly non-trivial. One possibility is to use Weyl quantization \cite{Weyl:qm_gruppentheorie:1927,Wigner:Wigner_transform:1932,Moyal:Weyl_calculus:1949,Folland:harmonic_analysis_hase_space:1989,Lein:quantization_semiclassics:2010}. 
Such a quantization procedure also yields a formulation of a semiclassical limit, and the names for various operators (\eg position, momentum and angular momentum) are then justified via a semiclassical limit. For instance, the momentum operator is $- \ii \hbar \nabla_x$, because in the semiclassical limit it plays the role of the classical momentum observable $p$ (\cf \eg \cite[Theorem~1.0.1]{Lein:quantization_semiclassics:2010} and \cite[Theorem~7.0.1]{Lein:quantization_semiclassics:2010}).} The hamiltonian is now an operator on the Hilbert space $L^2(\R^d)$ whose action on suitable vectors $\psi$ is \begin{align*} (H \psi)(x) &= - \frac{\hbar^2}{2 m} (\Delta_x \psi)(x) + V(x) \, \psi(x) . \end{align*} \begin{figure} \hfil\includegraphics[height=4cm]{./images/low_intensity_double_slit}\hfil \caption{Images of a low-intensity triple slit experiment with photons (taken from \cite{Crowell:modern_revolution_physics:2008}). } \label{quantum:fig:low_intensity} \end{figure} Quantum particles simultaneously have wave and particle character: the Schrödinger equation~\eqref{quantum:eqn:Schroedinger_eqn} is structurally very similar to a wave equation. The physical constant $\hbar$ relates the energy of a particle with the associated wavelength and has units $[\mathrm{energy} \cdot \mathrm{time}]$. The particle aspect comes in when one measures outcomes of experiments: consider a version of the Stern-Gerlach experiment where the intensity of the atomic beam is so low that single atoms pass through the magnet. If the modulus squared of the wave function $\sabs{\psi(t,x)}^2$ were to describe the intensity of a \emph{matter} wave, then one would expect that the two peaks build up slowly, but \emph{simultaneously}. In actuality, one registers single impacts of atoms and only if one waits long enough do two peaks emerge (similar to what one sees in a low-intensity triple slit experiment in Figure~\ref{quantum:fig:low_intensity}).
This is akin to tossing a coin: one cannot see the probabilistic nature in a few coin tosses, let alone a single one. Probabilities emerge only after repeating the experiment often enough. These experiments show that $\sabs{\psi(t,x)}^2$ is to be interpreted as a probability distribution, but more on that below. Pure states are described by wave functions, \ie complex-valued, square integrable functions. Put more precisely, we are considering $L^2(\R^d)$ made up of equivalence classes of functions with scalar product \begin{align*} \scpro{\varphi}{\psi} = \int_{\R^d} \dd x \, \overline{\varphi(x)} \, \psi(x) \end{align*} and norm $\norm{\psi} := \sqrt{\scpro{\psi}{\psi}}$. In physics textbooks, one usually encounters the \emph{bra-ket} notation: here $\ket{\psi}$ is a state and $\ipro{x}{\psi}$ is $\psi(x)$. The scalar product of $\phi,\psi \in L^2(\R^d)$ is denoted by $\ipro{\phi}{\psi}$ and corresponds to $\scpro{\phi}{\psi}$. Although bra-ket notation can be ambiguous, it is sometimes useful and in fact used in mathematics every once in a while. The fact that $L^2(\R^d)$ consists of \emph{equivalence classes} of functions is only natural from a physical perspective: if $\psi_1 \sim \psi_2$ are in the same equivalence class (\ie they differ on a set of measure $0$), then the arguments in Chapter~\ref{spaces:Banach:prototypical} state that the associated probabilities coincide. Physically, $\abs{\psi(x,t)}^2$ is interpreted as the \emph{probability to measure a particle at time $t$ in (an infinitesimally small box located in) location $x$}. If we are interested in the probability that we can measure a particle in a region $\Lambda \subseteq \R^d$, we have to integrate $\abs{\psi(x,t)}^2$ over $\Lambda$, \begin{align} \mathbb{P}(X(t) \in \Lambda) = \int_{\Lambda} \dd x \, \abs{\psi(x,t)}^2 .
\end{align} If we want to interpret $\abs{\psi}^2$ as \emph{probability} density, the wave function has to be \emph{normalized}, \ie \begin{align*} \norm{\psi}^2 = \int_{\R^d} \dd x \, \abs{\psi(x)}^2 = 1 . \end{align*} This point of view is called the \emph{Born rule}: $\abs{\psi}^2$ could either be a \emph{mass} or \emph{charge density} -- or a \emph{probability density}. To settle this, physicists have performed the double slit experiment with an electron source of low flux. If $\abs{\psi}^2$ were a density, one would see the whole interference pattern building up slowly. Instead, one \emph{measures} “single impacts” of electrons and the result is similar to the data obtained from experiments in statistics (\eg the Galton board). Hence, we speak of \emph{particles}. \section{The mathematical framework} \label{quantum:framework} To identify the common structures, let us study quantum mechanics in the abstract. Just like in the case of classical mechanics, we have to identify \emph{states}, \emph{observables} and \emph{dynamical equations} in \emph{Schrödinger and Heisenberg picture}. \subsection{Quantum observables} \label{quantum:frameworks:observables} Quantities that can be measured are represented by selfadjoint (hermitian in physics parlance) operators $F$ on the Hilbert space $\Hil$ (typically $L^2(\R^d)$), \ie special linear maps \begin{align*} F : \mathcal{D}(F) \subseteq \Hil \longrightarrow \Hil . \end{align*} Here, $\mathcal{D}(F)$ is the \emph{domain} of the operator since \emph{typical} observables are not defined for all $\psi \in \Hil$.
\emph{This is not a mathematical subtlety with no physical content}, quite the contrary: consider the observable energy, typically given by \begin{align*} H = \frac{1}{2m} (- \ii \hbar \nabla_x)^2 + V(\hat{x}) , \end{align*} then states in the domain \begin{align*} \mathcal{D}(H) := \Bigl \{ \psi \in L^2(\R^d) \; \big \vert \; H \psi \in L^2(\R^d) \Bigr \} \subseteq L^2(\R^d) \end{align*} are those of \emph{finite energy}. For all $\psi$ in the domain of the hamiltonian $\mathcal{D}(H) \subseteq L^2(\R^d)$, the expectation value \begin{align*} \bscpro{\psi}{H \psi} < \infty \end{align*} is finite. Well-defined observables have domains that are \emph{dense} in $\Hil$. Similarly, states in the domain $\mathcal{D}(\hat{x}_l)$ of the $l$th component of the position operator are those that are “localized in a finite region” in the sense of expectation values. Boundary conditions may also enter the definition of the domain: as seen in the example of the momentum operator on $[0,1]$, different boundary conditions yield different momentum operators (see Chapter~\ref{operators:unitary} for details). The energy observable is just a specific example, but it contains all the ingredients which enter the definition of a quantum observable: \begin{definition}[Observable] A quantum observable $F$ is a densely defined, selfadjoint operator on a Hilbert space. The spectrum $\sigma(F)$ (\cf Definition~\ref{operators:defn:spectrum}) is the set of outcomes of measurements. \end{definition} Physically, results of measurements are real which is reflected in the selfadjointness of operators (\cf Chapter~\ref{operators:selfadjoint_operators}), $H^* = H$. (A symmetric operator is selfadjoint if $\mathcal{D}(H^{\ast}) = \mathcal{D}(H)$.) The set of possible outcomes of measurements is the spectrum $\sigma(H) \subseteq \R$ (the spectrum is defined as the set of complex numbers such that $H - z$ is not invertible, \cf Definition~\ref{operators:defn:spectrum}).
Spectra of selfadjoint operators are necessarily subsets of the reals (\cf Theorem~\ref{quantum:thm:spectrum_selfadjoint_real}). Typically one “guesses” quantum observables from classical observables: in $d = 3$, the angular momentum operator is given by \begin{align*} L = \hat{x} \wedge (- \ii \hbar \nabla_x) . \end{align*} In the simplest case, one uses Dirac's recipe (replace $x$ by $\hat{x}$ and $p$ by $\hat{p} = - \ii \hbar \nabla_x$) on the classical observable angular momentum $L(x,p) = x \wedge p$. In other words, \emph{many quantum observables are obtained as quantizations of classical observables}: examples are position, momentum and energy. Moreover, the \emph{interpretation} of, say, the angular momentum operator as angular momentum is taken from classical mechanics. In the definition of the domain, we have already used the definition of expectation value: the expectation value of an observable $F$ with respect to a state $\psi$ (which we assume to be normalized, $\norm{\psi} = 1$) is given by \begin{align} \mathbb{E}_{\psi}(F) := \bscpro{\psi}{F \psi} . \end{align} The expectation value is finite if the state $\psi$ is in the domain $\mathcal{D}(F)$. The Born rule of quantum mechanics tells us that if we repeat an experiment measuring the observable $F$ many times for a particle that is prepared in the state $\psi$ each time, the statistical average calculated according to the relative frequencies converges to the expectation value $\mathbb{E}_{\psi}(F)$. Hence, quantum observables, selfadjoint operators on Hilbert spaces, are \emph{bookkeeping devices} that have two components: \begin{enumerate}[(i)] \item a \emph{set of possible outcomes} of measurements, the spectrum $\sigma(F)$, and \item \emph{statistics}, \ie how often a possible outcome occurs. 
\end{enumerate} \subsection{Quantum states} \label{quantum:frameworks:states} Pure states are wave functions $\psi \in \Hil$, or rather, wave functions up to a total phase: just like one can measure only energy \emph{differences}, only phase \emph{shifts} are accessible to measurements. Hence, one can think of pure states as orthogonal \emph{projections} \begin{align*} P_{\psi} := \sopro{\psi}{\psi} = \sscpro{\psi}{\cdot} \, \psi , \end{align*} if $\psi$ is normalized to $1$, $\norm{\psi} = 1$. Here, one can see the elegance of bra-ket notation vs. the notation that is “mathematically proper”. A generalization of this concept is given by \emph{density operators} $\rho$ (often called density matrices): density matrices are defined via the trace. If $\rho$ is a suitable linear operator and $\{ \varphi_n \}_{n \in \N}$ an orthonormal basis of $\Hil$, then we define \begin{align*} \trace \, \rho := \sum_{n \in \N} \sscpro{\varphi_n}{\rho \varphi_n} . \end{align*} One can easily check that this definition is independent of the choice of basis (see homework problem~28). Clearly, $P_{\psi}$ has trace $1$ and it is also positive in the sense that \begin{align*} \bscpro{\varphi}{P_{\psi} \varphi} \geq 0 \end{align*} for all $\varphi \in \Hil$. This is also the right definition for quantum states: \begin{definition}[Quantum state] A quantum state (or density operator/matrix) $\rho = \rho^*$ is a non-negative operator of trace $1$, \ie % \begin{align*} \bscpro{\psi}{\rho \psi} &\geq 0 , && \forall \psi \in \Hil , \\ \trace \, \rho &= 1 . \end{align*} % If $\rho$ is also an orthogonal projection, \ie $\rho^2 = \rho$, it is a pure state.\footnote{Note that the condition $\trace \, \rho = 1$ implies that $\rho$ is a \emph{bounded} operator while the positivity implies the selfadjointness. Hence, if $\rho$ is a projection, \ie ${\rho}^2 = \rho$, it is automatically also an orthogonal projection. } Otherwise $\rho$ is a mixed state.
\end{definition} Density operators are projections if and only if they are rank-$1$ projections, \ie $\rho = \sopro{\psi}{\psi}$ for some $\psi \in \Hil$ of norm $1$ (see problem~28). \begin{example} Let $\psi_j \in \Hil$ be two wave functions normalized to $1$. Then for any $0 < \alpha < 1$ % \begin{align*} \rho = \alpha P_{\psi_1} + (1 - \alpha) P_{\psi_2} = \alpha \sopro{\psi_1}{\psi_1} + (1 - \alpha) \sopro{\psi_2}{\psi_2} \end{align*} % is a mixed state as % \begin{align*} \rho^2 &= \alpha^2 \sopro{\psi_1}{\psi_1} + (1 - \alpha)^2 \sopro{\psi_2}{\psi_2} + \\ &\qquad \qquad + \alpha (1 - \alpha) \bigl ( \sopro{\psi_1}{\psi_1} \sopro{\psi_2}{\psi_2} + \sopro{\psi_2}{\psi_2} \sopro{\psi_1}{\psi_1} \bigr ) \\ &\neq \rho . \end{align*} % Even if $\psi_1$ and $\psi_2$ are orthogonal to each other, since $\alpha^2 \neq \alpha$ and similarly $(1 - \alpha)^2 \neq (1 - \alpha)$, $\rho$ cannot be a projection. Nevertheless, it is a state since $\trace \, \rho = \alpha + (1 - \alpha) = 1$. Keep in mind that $\rho$ does not project on $\alpha \psi_1 + (1 - \alpha) \psi_2$! \end{example} Also the expectation value of an observable $F$ with respect to a state $\rho$ is defined in terms of the trace, \begin{align*} \mathbb{E}_{\rho}(F) := \trace (\rho \, F) , \end{align*} which for pure states $\rho = \sopro{\psi}{\psi}$ reduces to $\bscpro{\psi}{F \psi}$. \subsection{Time evolution} \label{quantum:frameworks:time_evolution} The time evolution is determined through the \emph{Schrödinger equation}, \begin{align} \ii \hbar \frac{\partial}{\partial t} \psi(t) = H \psi(t) , && \psi(t) \in \Hil, \; \psi(0) = \psi_0, \; \norm{\psi_0} = 1 . \end{align} Alternatively, one can write $\psi(t) = U(t) \psi_0$ with $U(0) = \id_{\Hil}$. Then, we have \begin{align*} \ii \hbar \frac{\partial}{\partial t} U(t) = H U(t) , && U(0) = \id_{\Hil} . 
\end{align*} If $H$ \emph{were} a number, one would immediately use the ansatz \begin{align} U(t) = \e^{- \ii \frac{t}{\hbar} H} \end{align} as solution to the Schrödinger equation. If $H$ is a selfadjoint operator, this is \emph{still true}, but takes a lot of work to justify rigorously if the domain of $H$ is not all of $\Hil$ (the case of unbounded operators, the \emph{generic} case). As has already been mentioned, we can evolve either states or observables in time and one speaks of the Schrödinger or Heisenberg picture, respectively. In the Schrödinger picture, states evolve according to \begin{align*} \psi(t) = U(t) \psi_0 \end{align*} while observables remain fixed. Conversely, in the Heisenberg picture, states are kept fixed in time and observables evolve according to \begin{align} F(t) := U(t)^* \, F \, U(t) = \e^{+ \ii \frac{t}{\hbar} H} F \, \e^{- \ii \frac{t}{\hbar} H} . \end{align} Heisenberg observables satisfy \emph{Heisenberg's equation of motion}, \begin{align} \frac{\dd}{\dd t} F(t) = \frac{\ii}{\hbar} \bigl [ H , F(t) \bigr ] , && F(0) = F , \end{align} which can be checked by plugging in the definition of $F(t)$ and elementary \emph{formal} manipulations. It is no coincidence that this equation looks structurally similar to equation~\eqref{classical_mechanics:eqn:eom_observables}! Just like in the classical case, density operators have to be evolved \emph{backwards} in time, meaning that $\rho(t) = U(t) \, \rho \, U(t)^*$ satisfies \begin{align*} \frac{\dd}{\dd t} \rho(t) = - \frac{\ii}{\hbar} \bigl [ H , \rho(t) \bigr ] , && \rho(0) = \rho . 
\end{align*} The equivalence of Schrödinger and Heisenberg picture is seen by comparing expectation values just as in Chapter~\ref{classical_mechanics:equivalence_S_H}: the cyclicity of the trace, $\trace (AB) = \trace (BA)$, yields \marginpar{2014.02.06} \begin{align*} \mathbb{E}_{\rho(t)}(F) &= \trace \bigl ( \rho(t) \, F \bigr ) = \trace \Bigl ( U(t) \, \rho \, U(t)^* \, F \Bigr ) \\ &= \trace \Bigl ( \rho \, U(t)^* \, F \, U(t) \Bigr ) = \trace \bigl ( \rho \, F(t) \bigr ) = \mathbb{E}_{\rho} \bigl ( F(t) \bigr ) . \end{align*} As a last point, we mention the conservation of probability: if $\psi(t)$ solves the Schrödinger equation for some selfadjoint $H$, then we can check at least formally that the time evolution is unitary and thus preserves probability, \begin{align*} \frac{\dd}{\dd t} \bnorm{\psi(t)}^2 &= \frac{\dd}{\dd t} \bscpro{\psi(t)}{\psi(t)} = \bscpro{\tfrac{1}{\ii \hbar} H \psi(t)}{\psi(t)} + \bscpro{\psi(t)}{\tfrac{1}{\ii \hbar} H \psi(t)} \\ & = \frac{\ii}{\hbar} \Bigl ( \bscpro{\psi(t)}{H^* \psi(t)} - \bscpro{\psi(t)}{H \psi(t)} \Bigr ) \\ & = \frac{\ii}{\hbar} \bscpro{\psi(t)}{(H^* - H) \psi(t)} = 0 . \end{align*} Conservation of probability is reminiscent of Proposition~\ref{classical_mechanics:prop:states_stay_states}. We see that the condition $H^* = H$ is the key here: selfadjoint operators generate unitary evolution groups. As a matter of fact, there are cases when one \emph{wants} to violate conservation of probability: one has to introduce so-called \emph{optical potentials} which simulate particle creation and annihilation. The time evolution $\e^{- \ii \frac{t}{\hbar} H}$ is not the only unitary group of interest, other commonly used examples are \emph{translations} in position or momentum which are generated by the momentum and position operator, respectively (the order is reversed!), as well as rotations which are generated by the angular momentum operators.
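The interplay of unitarity and the Heisenberg picture can be illustrated with the spin example from the beginning of the chapter: for $H = \frac{\omega}{2} \sigma_3$ (setting $\hbar = 1$; the value of $\omega$ is an arbitrary choice), the Heisenberg observable $\sigma_1(t) = \e^{+ \ii t H} \sigma_1 \e^{- \ii t H}$ precesses, $\sigma_1(t) = \cos(\omega t) \, \sigma_1 - \sin(\omega t) \, \sigma_2$. A numerical sketch:

```python
import numpy as np

# Sketch: spin-1/2 Heisenberg evolution (hbar set to 1, ω arbitrary).
# U(t) = exp(-itH) is built from the eigendecomposition of H = (ω/2)σ3;
# σ1(t) = U† σ1 U should equal cos(ωt) σ1 - sin(ωt) σ2 (spin precession).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

omega, t = 1.3, 0.7
H = 0.5 * omega * s3
lam, V = np.linalg.eigh(H)                      # H = V diag(lam) V†
U = V @ np.diag(np.exp(-1j * t * lam)) @ V.conj().T

F_t = U.conj().T @ s1 @ U                       # Heisenberg observable
F_exact = np.cos(omega*t)*s1 - np.sin(omega*t)*s2
print(np.allclose(U @ U.conj().T, np.eye(2)))   # unitarity: True
print(np.allclose(F_t, F_exact))                # precession formula: True
```

Building $U(t)$ from the eigendecomposition works for any hermitian matrix, not just for diagonal $H$.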
\subsection{Comparison of the two frameworks} \label{quantum:frameworks:comparison} Now that we have an understanding of the structures of classical and quantum mechanics, juxtaposed in Table~\ref{frameworks:comparison:table:overview_frameworks}, we can elaborate on the differences and similarities of both theories. \begin{table} % \begin{tabularx}{\textwidth}{>{\small\raggedright\hsize=3cm}X | >{\small\raggedright}X >{\small\raggedright}X >{\hsize=0cm}X} \makebox[1cm]{} & \textit{Classical} & \textit{Quantum} & \\ \hline \textit{Observables} & $f \in \Cont^{\infty}(\Pspace,\R)$ & selfadjoint operators acting on Hilbert space $\Hil$ & \\ [0.5ex] \textit{Building block observables} & position $x$ and momentum $p$ & position $\hat{x}$ and momentum $\hat{p}$ operators & \\ [0.5ex] \textit{Possible results of measurements} & $\mathrm{im}(f)$ & $\sigma(F)$ & \\ [0.5ex] \textit{States} & probability measures $\mu$ on phase space $\Pspace$ & density operators $\rho$ on $\Hil$ & \\ [0.5ex] \textit{Pure states} & points in phase space $\Pspace$ & wave functions $\psi \in \Hil$ & \\ [0.5ex] \textit{Generator of evolution} & hamiltonian function $H : \Pspace \longrightarrow \R$ & hamiltonian operator $H$ & \\ [0.5ex] \textit{Infinitesimal time evolution equation} & $\frac{\dd}{\dd t} f(t) = \{ H , f(t) \}$ & $\frac{\dd }{\dd t} F(t) = \frac{\ii}{\hbar} [ H , F(t) ]$ & \\ [0.5ex] \textit{Integrated time evolution} & hamiltonian flow $\phi_t$ & $\e^{+ \ii \frac{t}{\hbar} H} \, \Box \, \e^{- \ii \frac{t}{\hbar} H}$ & \\ \end{tabularx} % \caption{Comparison of classical and quantum framework} \label{frameworks:comparison:table:overview_frameworks} \end{table} For instance, observables form an \emph{algebra} (a vector space with multiplication): in classical mechanics, we use the \emph{pointwise product} of functions, \begin{align*} \cdot :& \, \Cont^{\infty}(\Pspace) \times \Cont^{\infty}(\Pspace) \longrightarrow \Cont^{\infty}(\Pspace) , \; (f , g) \mapsto f \cdot g \\ & (f 
\cdot g)(x,p) := f(x,p) \, g(x,p) , \end{align*} which is obviously commutative. We also admit \emph{complex}-valued functions and add \emph{complex conjugation} as involution (\ie $f^{\ast \ast} = f$). Lastly, we add the Poisson bracket to make $\Cont^{\infty}(\Pspace)$ into a so-called Poisson algebra. As we have seen, the notion of Poisson bracket gives rise to dynamics as soon as we choose an energy function (hamiltonian). On the quantum side, bounded operators (see~Chapter~\ref{operators:bounded}) form an algebra. This algebra is non-commutative, \ie \begin{align*} F \cdot G \neq G \cdot F . \end{align*} \emph{This is exactly what makes quantum mechanics different.} Taking adjoints is the involution here and the commutator plays the role of the Poisson bracket. Again, once a hamiltonian (operator) is chosen, the dynamics of Heisenberg observables $F(t)$ is determined by the commutator of $F(t)$ with the hamiltonian $H$. If an operator commutes with the hamiltonian, \emph{it is a constant of motion}. This is in analogy with Definition~\ref{classical:defn:conserved_quantity} where a classical observable is a constant of motion if and only if its Poisson bracket with the hamiltonian (function) vanishes. \subsection{Representations} \label{quantum:framework:representations} Linear algebra distinguishes abstract linear maps $H : \mathcal{X} \longrightarrow \mathcal{Y}$ and their representations as matrices using a basis in initial and target space: any pair of bases $\{ x_n \}_{n = 1}^N$ and $\{ y_k \}_{k = 1}^K$ of $\mathcal{X} \cong \C^N$ and $\mathcal{Y} \cong \C^K$ induces a matrix representation $h = (h_{nk}) \in \mathrm{Mat}_{\C}(N,K)$ of $H$ (called \emph{basis representation}) via \begin{align*} H x_n = \sum_{k = 1}^K h_{nk} \, y_k .
\end{align*} The basis now identifies coordinates on the vector spaces: $x = \sum_{n = 1}^N \xi_n \, x_n \in \mathcal{X}$ has the coordinate $\xi = (\xi_1 , \ldots , \xi_N) \in \C^N$, and similarly $y = \sum_{k = 1}^K \eta_k \, y_k \in \mathcal{Y}$ is expressed in terms of the coordinate $\eta \in \C^K$. Using these coordinates, the equation $H x = y$ becomes the matrix equation $h \xi = \eta$. A change of basis can now be described in the same way: if $\{ x_n' \}_{n = 1}^N$ and $\{ y_k' \}_{k = 1}^K$ are two other orthonormal bases, then the coordinate representations of the maps \begin{align*} U_{xx'} &: x_n \mapsto x_n' \\ U_{yy'} &: y_k \mapsto y_k' \end{align*} are unitary matrices $u_{xx'} \in \mathcal{U}(\C^N)$ and $u_{yy'} \in \mathcal{U}(\C^K)$, and these matrices connect the coordinate representations of $H$ with respect to $\{ x_n \}_{n = 1}^N$, $\{ y_k \}_{k = 1}^K$ and $\{ x_n' \}_{n = 1}^N$, $\{ y_k' \}_{k = 1}^K$, \begin{align*} h' = u_{yy'} \, h \, u_{xx'}^{-1} . \end{align*} $u_{xx'}^{-1}$ maps $\xi'$ onto $\xi$, $h$ maps $\xi$ onto $\eta$ and $u_{yy'}$ maps $\eta$ onto $\eta'$. We can represent operators on \emph{infinite}-dimensional Hilbert spaces such as $L^2(\R^d)$ in much the same way: for instance, consider the free Schrödinger operator $H = - \tfrac{1}{2} \Delta_x : \mathcal{D} \subset L^2(\R^d) \longrightarrow L^2(\R^d)$. Then the Fourier transform $\Fourier : L^2(\R^d) \longrightarrow L^2(\R^d)$ is such a unitary which changes from one “coordinate system” to another, and the free Schrödinger operator in this new representation becomes a simple multiplication operator \begin{align*} H^{\Fourier} := \Fourier \, H \, \Fourier^{-1} = \tfrac{1}{2} \hat{\xi}^2 . \end{align*} Because initial and target space are one and the same, $\Fourier$ appears twice.
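As a toy illustration of such a change of representation (an added numerical aside, with the discrete Fourier transform standing in for $\Fourier$ and a finite periodic grid standing in for $\R^d$): the unitary DFT matrix diagonalizes the discrete Laplacian with periodic boundary conditions, turning a circulant matrix into a multiplication operator with symbol $2 - 2 \cos(2 \pi k / N)$.

```python
import numpy as np

N = 16
# discrete Laplacian -Δ with periodic boundary conditions (a circulant matrix)
H = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
H[0, -1] = H[-1, 0] = -1.0

# unitary DFT matrix, the finite-dimensional stand-in for the Fourier transform
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
assert np.allclose(F @ F.conj().T, np.eye(N))        # F is unitary

# in the Fourier representation H^F = F H F^{-1} is a multiplication (diagonal) operator
HF = F @ H @ F.conj().T
assert np.allclose(HF, np.diag(np.diag(HF)))
# with the known symbol 2 - 2 cos(2 pi k / N)
k = np.arange(N)
assert np.allclose(np.sort(np.diag(HF).real), np.sort(2 - 2 * np.cos(2 * np.pi * k / N)))
```

The continuum analogue replaces $2 - 2\cos(2\pi k/N)$ by the symbol $\xi^2$ of $-\Delta_x$ (up to the factor $\tfrac{1}{2}$ and the grid spacing).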
Another unitary is a rescaling which can be seen as a change of units: for $\lambda > 0$ one defines \begin{align*} (U_{\lambda} \varphi)(x) := \lambda^{\nicefrac{d}{2}} \, \varphi(\lambda x) \end{align*} where the scaling factor $\lambda$ relates the two scales. Similarly, other linear changes of the underlying configuration space $\R^d$ (\eg rotations) induce a unitary operator on $L^2(\R^d)$. \marginpar{2014.02.11} \medskip \noindent One can exploit this freedom of representation to simplify a problem: Just like choosing spherical coordinates for a problem with spherical symmetry, we can work in a representation which simplifies the problem. For instance, the Fourier transform exploits the \emph{translational symmetry} of the free Schrödinger operator ($H$ commutes with translations). Another example would be to use an \emph{eigenbasis}: assume $H = H^* \geq 0$ has a set of eigenvectors $\{ \psi_n \}_{n \in \N}$ which span all of $\Hil$, \ie the $\psi_n$ are linearly independent and $H \psi_n = E_n \, \psi_n$ where $E_n \in \R$ is the eigenvalue. The eigenvalues are enumerated in increasing order and repeated according to their multiplicity, \ie $E_1 \leq E_2 \leq \ldots$. Just like in the case of hermitian matrices, the eigenvectors to distinct eigenvalues of selfadjoint operators are orthogonal, and hence, we can choose the $\{ \psi_n \}_{n \in \N}$ to be orthonormal. Then the suitable unitary is \begin{align*} U : \Hil \longrightarrow \ell^2(\N) , \; \; \psi = \sum_{n = 1}^{\infty} \widehat{\psi}(n) \, \psi_n \mapsto \widehat{\psi} \in \ell^2(\N) \end{align*} where $\widehat{\psi} = \bigl ( \widehat{\psi}(1) , \widehat{\psi}(2) , \ldots \bigr )$ is the sequence of coefficients and $\ell^2(\N)$ is the prototypical Hilbert space defined in Definition~\ref{spaces:Hilbert:defn:ell2}; moreover, the definition of orthonormal basis (Definition~\ref{spaces:Hilbert:defn:ONB}) implies that $\widehat{\psi}$ is necessarily square summable.
In this representation, $H$ can be seen as an “infinite diagonal matrix” \begin{align*} H = \sum_{n = 1}^{\infty} E_n \, P_{\psi_n} \mapsto H^U = U \, H \, U^{-1} = \left ( % \begin{matrix} E_1 & 0 & \cdots & \cdots \\ 0 & E_2 & 0 & \cdots \\ \vdots & & \ddots & \ddots \\ \end{matrix} % \right ) \end{align*} where $P_{\psi} \varphi := \scpro{\psi}{\varphi} \, \psi$ are the rank-$1$ projections onto $\psi$. Put another way, $H^U$ acts on $\widehat{\psi} \in \ell^2(\N)$ as \begin{align*} H^U \widehat{\psi} = \bigl ( E_1 \, \widehat{\psi}(1) , E_2 \, \widehat{\psi}(2) , \ldots \bigr ) . \end{align*} The simple structure of this operator allows one to compute the unitary evolution group explicitly in terms of the projections $P_{\psi_n}$, \begin{align*} \e^{- \ii \frac{t}{\hbar} H} = \sum_{n = 1}^{\infty} \e^{- \ii \frac{t}{\hbar} E_n} \, P_{\psi_n} . \end{align*} Sadly, most Schrödinger operators $H$ do not have a basis of eigenvectors. \section{Spectral properties of hamiltonians} \label{quantum:spectrum} The spectrum of an operator is the generalization of the set of eigenvalues for matrices. According to Definition~\ref{operators:defn:spectrum} the spectrum can be divided up into three parts, the \emph{point spectrum} \begin{align*} \sigma_{\mathrm{p}}(H) := \bigl \{ z \in \C \; \vert \; H - z \mbox{ is not injective} \bigr \} , \end{align*} the \emph{continuous spectrum} \begin{align*} \sigma_{\mathrm{c}}(H) := \bigl \{ z \in \C \; \vert \; H - z \mbox{ is injective, } \im (H - z) \subseteq \Hil \mbox{ dense} \bigr \} , \end{align*} and the \emph{residual spectrum} \begin{align*} \sigma_{\mathrm{r}}(H) := \bigl \{ z \in \C \; \vert \; H - z \mbox{ is injective, } \im (H - z) \subseteq \Hil \mbox{ not dense} \bigr \} . \end{align*} The point spectrum consists of the eigenvalues of $H$, \ie those $z$ for which an eigenvector exists in $\Hil$. Compared to matrices, the occurrence of continuous and residual spectra is new. The residual spectrum is not important for our discussion as it is empty for selfadjoint operators.
The continuous spectrum can be attributed to cases where the eigenvectors are not elements of the Hilbert space. For instance, in the case of the free Schrödinger operator $H = - \tfrac{1}{2} \Delta_x$ on $L^2(\R^d)$, the spectrum is $\sigma(H) = \sigma_{\mathrm{c}}(H) = [0,+\infty)$. Here, the eigenvectors are plane waves, $\e^{+ \ii \xi \cdot x}$, which are smooth, bounded functions; however, plane waves are not square integrable. Similarly, multiplication operators have Dirac distributions as eigen“functions”. Note that this distinction between the spectral components goes further than looking at the spectrum as a set: for instance, it is known that certain random Schrödinger operators have dense point spectrum which “looks” the same as continuous spectrum. The spectrum can be probed by means of approximate eigenfunctions (“Weyl's Criterion”, see Theorem~\ref{operators:thm:Weyl_criterion}). \medskip \noindent There is also a second helpful classification of spectrum which cannot be made rigorous with the tools we have at hand, and that is the distinction between \emph{essential} spectrum $\sigma_{\mathrm{ess}}(H)$ and \emph{discrete} spectrum $\sigma_{\mathrm{disc}}(H)$. The essential spectrum is stable under local, short-range perturbations while the discrete spectrum may change. One has the following characterization for the essential spectrum: \begin{theorem}[Theorem VII.10 in \cite{Reed_Simon:M_cap_Phi_1:1972}] $\lambda \in \sigma_{\mathrm{ess}}(H)$ iff \emph{one} or more of the following holds: % \begin{enumerate}[(i)] \item $\lambda \in \sigma_{\mathrm{cont}}(H)$. \item $\lambda$ is a limit point of $\sigma_{\mathrm{p}}(H)$. \item $\lambda$ is an eigenvalue of infinite multiplicity.
\end{enumerate} % \end{theorem} The discrete spectrum has an analogous characterization: \begin{theorem}[Theorem VII.11 in \cite{Reed_Simon:M_cap_Phi_1:1972}] $\lambda \in \sigma_{\mathrm{disc}}(H)$ if and only if \emph{both} of the following hold: % \begin{enumerate}[(i)] \item $\lambda$ is an isolated point of $\sigma(H)$, \ie for some $\eps > 0$ we have $\bigl ( \lambda - \eps , \lambda + \eps \bigr ) \cap \sigma(H) = \{ \lambda \}$. \item $\lambda$ is an eigenvalue of \emph{finite} multiplicity. \end{enumerate} % \end{theorem} \subsection{Spectra of common selfadjoint operators} \label{quantum:framework:common} Quite generally, the spectrum of selfadjoint operators is purely real. But before we prove that, let us discuss some examples from physics: \paragraph{Multiplication operators} The spectrum of the multiplication operator \begin{align*} \bigl ( f(\hat{x}) \psi \bigr )(x) := f(x) \, \psi(x) \end{align*} is given by the closure of the range, $\sigma \bigl ( f(\hat{x}) \bigr ) = \overline{\ran f}$, where $f : \R^d \longrightarrow \R$ is a piecewise-continuous function.\footnote{This condition can be relaxed and is chosen just for ease of use.} To see this claim, we rely on the Weyl criterion: in order to show $\sigma \bigl ( f(\hat{x}) \bigr ) \supseteq \overline{\ran f}$, pick any $\lambda \in \overline{\ran f}$. Then there exists a sequence $x_n$ such that $\abs{\lambda - f(x_n)} < \nicefrac{1}{n}$. By shifting an $L^2$-Dirac sequence by $x_n$ (\eg scaled Gaußians), we obtain a sequence of normalized vectors $\psi_n$ with $\bnorm{\bigl ( f(\hat{x}) - \lambda \bigr ) \psi_n} \xrightarrow{n \to \infty} 0$. Hence, this reasoning shows $\overline{\ran f} \subseteq \sigma \bigl ( f(\hat{x}) \bigr )$. To show the converse inclusion, let $\lambda \in \sigma \bigl ( f(\hat{x}) \bigr )$. Then there exists a Weyl sequence $\{ \psi_n \}_{n \in \N}$ with $\bnorm{\bigl ( f(\hat{x}) - \lambda \bigr ) \psi_n} \rightarrow 0$ as $n \to \infty$.
Assume $\lambda \not \in \overline{\ran f}$, \ie $\inf_{x \in \R^d} \babs{f(x) - \lambda} = c > 0$; then $\{ \psi_n \}$ cannot be a Weyl sequence to $\lambda$, \begin{align*} \bnorm{\bigl ( f(\hat{x}) - \lambda \bigr ) \psi_n} \geq \inf_{x \in \R^d} \babs{f(x) - \lambda} \, \snorm{\psi_n} \geq c > 0 , \end{align*} which is absurd. \medskip \noindent Should $f$ be constant and equal to $\lambda_0$ on a set of positive measure, there are infinitely many eigenfunctions associated to the eigenvalue $\lambda_0$. Otherwise, $f(\hat{x})$ has continuous spectrum. In any case, the spectrum of $f(\hat{x})$ is purely essential. Clearly, this takes care of any operator which is unitarily equivalent to a multiplication operator, \eg the free Laplacian on $\R^d$, $\T^d$ or the tight-binding hamiltonians from Chapter~\ref{Fourier:T:periodic_operators:tight_binding}. \paragraph{The hydrogen atom} One of the earliest celebrated successes of quantum mechanics is the explanation of the spectral lines by Schrödinger \cite{Schroedinger:quantisierung_ew_problem_1:1926,Schroedinger:quantisierung_ew_problem_2:1926,Schroedinger:quantisierung_ew_problem_3:1926,Schroedinger:quantisierung_ew_problem_4:1926}. Here, the operator \begin{align*} H := - \frac{\hbar^2}{2 m} \Delta_x - \frac{e}{\sabs{\hat{x}}} \end{align*} acts on a dense subspace of $L^2(\R^3)$. A non-obvious fact is that this operator is bounded from below, \ie there exists a constant $c > 0$ such that $H \geq - c$. This is false for the corresponding classical system, because the function $h(q,p) = \tfrac{1}{2m} p^2 - \tfrac{e}{\abs{q}}$ is \emph{not} bounded from below. The reason is that states of low potential energy (\ie wave functions which are sharply peaked around $0$) must pay an ever larger price in kinetic energy (sharply peaked means large gradient).
One heuristic way to see that is to compute the energy expectation value of $\psi_{\lambda} := \lambda^{\nicefrac{3}{2}} \, \psi(\lambda x)$ for $\lambda \gg 1$ where $\psi \in \Schwartz(\R^3)$: \begin{align*} \mathbb{E}_{\psi_{\lambda}}(H) &= \frac{\hbar^2}{2m} \, \bscpro{\psi_{\lambda}}{- \Delta_x \psi_{\lambda}} - e \, \bscpro{\psi_{\lambda}}{\sabs{\hat{x}}^{-1} \psi_{\lambda}} \\ &= \frac{\hbar^2}{2m} \, \lambda^2 \, \int_{\R^3} \dd x \, \lambda^3 \, \babs{\nabla_x \psi(\lambda x)}^2 - e \, \lambda \, \int_{\R^3} \dd x \, \lambda^3 \, \frac{\sabs{\psi(\lambda x)}^2}{\lambda \, \sabs{x}} \\ &= \lambda^2 \, \scpro{\psi}{- \tfrac{\hbar^2}{2m} \Delta_x \psi} - \lambda \, \bscpro{\psi}{e \, \sabs{\hat{x}}^{-1} \psi} . \end{align*} The kinetic energy grows quadratically in $\lambda$ while the potential energy only grows linearly, so the total energy stays bounded from below. Clearly, if one replaces the Coulomb potential by $- \abs{x}^{-3}$, the potential energy wins and the quantum particle can “fall down the well”. The negative potential gives rise to a family of eigenvalues (the spectral lines) while $-\Delta_x$ contributes continuous spectrum $[0,+\infty)$, \begin{align*} \sigma(H) &= \{ E_n \}_{n \in \N} \cup [0,+\infty) , \\ \sigma_{\mathrm{cont}}(H) &= [0,+\infty) = \sigma_{\mathrm{ess}}(H) , \\ \sigma_{\mathrm{p}}(H) &= \{ E_n \}_{n \in \N} = \sigma_{\mathrm{disc}}(H) . \end{align*} \subsection{The spectrum of selfadjoint operators is real} \label{quantum:framework:selfadjoint_operator_real} As a side note, let us show that the spectrum of selfadjoint operators is purely real. \begin{theorem}\label{quantum:thm:spectrum_selfadjoint_real} Let $H = H^*$ be a selfadjoint operator on the Hilbert space $\Hil$. Then the following holds true: % \begin{enumerate}[(i)] \item $\sigma(H) \subseteq \R$ \item $H \geq 0$ $\Rightarrow$ $\sigma(H) \subseteq [0,+\infty)$ \end{enumerate} % \end{theorem} To prove this, we use the following \begin{lemma} Let $\Hil_j$, $j = 1 , 2$, be Hilbert spaces.
Then an operator $T \in \mathcal{B}(\Hil_1,\Hil_2)$ is invertible if and only if there exists a constant $C > 0$ such that $T^* \, T \geq C \, \id_{\Hil_1}$ and $T \, T^* \geq C \, \id_{\Hil_2}$ hold. \end{lemma} \begin{proof} “$\Rightarrow$:” Assume $T$ is invertible. Then $T^* : \Hil_2 \longrightarrow \Hil_1$ is also invertible with inverse ${T^*}^{-1} = {T^{-1}}^*$. Set $C := \norm{T^{-1}}^{-2} = \norm{{T^*}^{-1}}^{-2}$. Then the inequality % \begin{align*} \norm{\psi} &= \norm{T^{-1} T \psi} \leq \norm{T^{-1}} \, \norm{T \psi} \end{align*} % proves $\norm{T \psi} \geq \norm{T^{-1}}^{-1} \, \norm{\psi}$, and thus also % \begin{align} \scpro{\psi}{T^* T \psi} &= \norm{T \psi}^2 \geq \norm{T^{-1}}^{-2} \, \norm{\psi}^2 = C \norm{\psi}^2 \label{quantum:eqn:Tast_T_geq_C_id} , \end{align} % \ie we have shown $T^* \, T \geq C \, \id_{\Hil_1}$. The bound for $T \, T^*$ is shown analogously. “$\Leftarrow$:” Suppose there exists $C > 0$ such that $T^* \, T \geq C \, \id_{\Hil_1}$ and $T \, T^* \geq C \, \id_{\Hil_2}$. Then from \eqref{quantum:eqn:Tast_T_geq_C_id} we deduce that $\norm{T \psi} \geq \sqrt{C} \, \norm{\psi}$ holds for all $\psi \in \Hil_1$. First of all, this proves that $T$ is injective, and secondly $T$ has closed range in $\Hil_2$ (one can see the latter by considering convergence of $T \psi_n$ for any Cauchy sequence $\{ \psi_n \}_{n \in \N}$). Moreover, one can easily see % \begin{align*} \ran T = \overline{\ran T} = \bigl ( \ker T^* \bigr )^{\perp} . \end{align*} % Since we can make the same arguments for $T^*$, we also know that $T^*$ is injective, and thus $\ker T^* = \{ 0 \}$. This shows that $T$ is surjective, \ie it is bijective, and hence, invertible.
\end{proof} With the proof of the Lemma complete, we can now prove the statement: \begin{proof}[Theorem~\ref{quantum:thm:spectrum_selfadjoint_real}] % \begin{enumerate}[(i)] \item Let $H = H^*$ be selfadjoint and $z = \lambda + \ii \mu \in \C \setminus \R$ be a complex number with non-vanishing imaginary part $\mu$. We will show that $z \not\in \sigma(H)$, \ie that $H - z$ is invertible: a quick computation shows % \begin{align*} \bigl ( H - z \bigr )^* \, \bigl ( H - z \bigr ) &= H^2 - 2 \, (\Re z) \, H + \sabs{z}^2 = H^2 - 2 \lambda \, H + (\lambda^2 + \mu^2) \\ &= \mu^2 + \bigl ( H - \lambda \bigr )^2 . \end{align*} % The last term is non-negative, and thus, we have shown % \begin{align*} \bigl ( H - z \bigr )^* \, \bigl ( H - z \bigr ) \geq \mu^2 ; \end{align*} % the same computation with $\bar{z}$ in place of $z$ yields the same lower bound for $\bigl ( H - z \bigr ) \, \bigl ( H - z \bigr )^* = \bigl ( H - \bar{z} \bigr )^* \, \bigl ( H - \bar{z} \bigr )$. By the Lemma, this means $H - z$ is necessarily invertible, and $z \not \in \sigma(H)$. \item We have to show that for $\lambda \in (-\infty,0)$, the operator $H - \lambda$ is invertible. This follows as before from % \begin{align*} \bigl ( H - \lambda \bigr )^* \, \bigl ( H - \lambda \bigr ) &= H^2 - 2 \lambda \, H + \lambda^2 \geq \lambda^2 , \end{align*} % the non-negativity of $- 2 \lambda \, H = 2 \sabs{\lambda} \, H$ and the Lemma. \end{enumerate} % \end{proof} \subsection{Eigenvalues and bound states} \label{quantum:spectrum:bound_states} The hydrogen atom is a prototypical example of the type of problem we are interested in, namely Schrödinger operators on $L^2(\R^d)$ of the form \begin{align*} H = - \Delta_x + V \end{align*} where $V \leq 0$ is a non-positive potential decaying at infinity ($\lim_{\abs{x} \to \infty} V(x) = 0$). Under suitable technical conditions on the potential, $H$ defines a selfadjoint operator which is bounded from below, that is $H \geq c$ holds for some $c \in \R$, and we have \begin{align*} \sigma_{\mathrm{ess}}(H) &= \sigma(-\Delta_x) = [0,+\infty) .
\end{align*} Now the question is whether $\sigma_{\mathrm{p}}(H) = \emptyset$ or \begin{align*} \sigma_{\mathrm{p}}(H) &= \{ E_n \}_{n = 0}^{N} \subset (-\infty,0) \end{align*} for some $N \in \N_0 \cup \{ \infty \}$. We shall always assume that the eigenvalues are ordered increasingly, \begin{align*} E_0 \leq E_1 \leq \ldots \end{align*} The \emph{ground state} $\psi_0$ is the eigenfunction to the lowest eigenvalue $E_0$. Eigenfunctions $\psi$ are localized: the weakest form of localization is $\psi \in L^2(\R^d)$, but usually one can expect \emph{exponential} localization. So there are two natural questions which we will answer in turn: \begin{enumerate}[(1)] \item Do eigenvalues below the essential spectrum \emph{exist}? \item Can we give \emph{estimates} on their numerical values? \end{enumerate} \subsubsection{The Birman-Schwinger principle} \label{quantum:spectrum:bound_states:Birman_Schwinger} We begin with the Birman-Schwinger principle, \emph{the} standard tool for showing the existence or absence of eigenvalues at a specific energy level. Assume $\varphi$ is an eigenvector of $H$ to the eigenvalue $-E < 0$. Then the eigenvalue equation is equivalent to \begin{align*} \bigl ( - \Delta_x + E \bigr ) \varphi &= - V \varphi = \abs{V} \varphi . \end{align*} If we define the vector $\psi := \abs{V}^{\nicefrac{1}{2}} \varphi$ and use that $-E \not \in \sigma(-\Delta_x) = [0,+\infty)$, we obtain \begin{align*} \abs{V}^{\nicefrac{1}{2}} \bigl ( - \Delta_x + E \bigr )^{-1} \abs{V}^{\nicefrac{1}{2}} \psi &= \psi .
\end{align*} In other words, we have just shown the \begin{theorem}[Birman-Schwinger principle] The function $\varphi \in L^2(\R^d)$ is an eigenvector of $H = - \Delta_x + V$ to the eigenvalue $-E < 0$ if and only if $\psi = \abs{V}^{\nicefrac{1}{2}} \varphi$ is an eigenvector of the Birman-Schwinger operator % \begin{align} K_E := \abs{V}^{\nicefrac{1}{2}} \bigl ( - \Delta_x + E \bigr )^{-1} \, \abs{V}^{\nicefrac{1}{2}} \label{quantum:eqn:Birman_Schwinger_operator} \end{align} % to the eigenvalue $1$. \marginpar{2014.02.13} \end{theorem} The only assumption we have glossed over is the boundedness of $K_E$. One may think that solving $K_E \psi = \psi$ is just as difficult as $H \varphi = - E \varphi$, but it is not. For instance, we immediately obtain the following \begin{corollary} Assume the Birman-Schwinger operator $K_E \in \mathcal{B} \bigl ( L^2(\R^d) \bigr )$ is bounded. Then for $\lambda_0 > 0$ small enough, $H_{\lambda} = - \Delta_x + \lambda V$ has no eigenvalue at $-E$ for all $0 \leq \lambda < \lambda_0$. \end{corollary} \begin{proof} Replacing $V$ with $\lambda V$ in equation~\eqref{quantum:eqn:Birman_Schwinger_operator} yields that the Birman-Schwinger operator for $H_{\lambda}$ is $\lambda K_E$. Thus, for $0 \leq \lambda < \lambda_0 := \norm{K_E}^{-1}$ we have $\lambda \, \norm{K_E} < 1$, and since $\sup \abs{\sigma \bigl ( \lambda \, K_E \bigr )} \leq \lambda \, \norm{K_E}$,\footnote{This is a general fact: if $T \in \mathcal{B}(\mathcal{X})$ is an operator on a Banach space, then $\sup \abs{\sigma(T)} \leq \norm{T}$ holds \cite[Chapter~VIII.2, Theorems~3 and 4]{Yoshida:functional_analysis:1980}.} the number $1$ cannot be an eigenvalue of $\lambda \, K_E$. Hence, by the Birman-Schwinger principle there cannot exist an eigenvalue at $-E$. \end{proof} Another advantage is that we have an explicit expression for the \emph{operator kernel} of $K_E$, the \emph{Birman-Schwinger kernel}, which allows us to make explicit estimates.
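To make the principle concrete, here is a small numerical sketch (an added illustration; the square well $V = -1$ on $[-1,1]$, the coupling $\lambda = \nicefrac{1}{2}$ and all grid parameters are assumptions of this example, not taken from the text): we compute the lowest eigenvalue $-E$ of $H_{\lambda}$ by finite differences and then verify that $\lambda \, K_E$ has eigenvalue $1$ precisely at that energy.

```python
import numpy as np

# Toy setup: square well V = -1 on [-1,1], coupling lam (assumed example values)
lam = 0.5
x = np.linspace(-15.0, 15.0, 1201)
dx = x[1] - x[0]
V = -1.0 * (np.abs(x) < 1.0)

# Direct route: ground state energy of H_lam = -d^2/dx^2 + lam V via finite differences
main = 2.0 / dx**2 + lam * V
H = np.diag(main) \
    - np.diag(np.ones(len(x) - 1) / dx**2, 1) \
    - np.diag(np.ones(len(x) - 1) / dx**2, -1)
E0 = np.linalg.eigvalsh(H)[0]
assert E0 < 0                                   # there is a bound state

# Birman-Schwinger route: at E = -E0 > 0 the operator lam K_E must have eigenvalue 1,
# with kernel K_E(x,y) = |V(x)|^{1/2} e^{-sqrt(E)|x-y|} / (2 sqrt(E)) |V(y)|^{1/2}
mu = np.sqrt(-E0)
s = np.sqrt(np.abs(V))
K = (s[:, None] * s[None, :]) \
    * np.exp(-mu * np.abs(x[:, None] - x[None, :])) / (2 * mu) * dx
top = np.linalg.eigvalsh(lam * K)[-1]
assert abs(top - 1.0) < 0.08                    # eigenvalue 1 up to discretization error
```

Since this shallow well has exactly one bound state, the \emph{largest} eigenvalue of $\lambda \, K_E$ is the one that crosses $1$ at the bound state energy; the remaining tolerance only accounts for the finite-difference and quadrature errors of the grid.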
In general, an operator kernel $K_T$ for an operator $T$ is a distribution on $\R^d \times \R^d$ so that \begin{align*} (T \psi)(x) = \int_{\R^d} \dd y \, K_T(x,y) \, \psi(y) . \end{align*} For the sake of brevity, we will also write $T(x,y)$ for $K_T(x,y)$. We have dedicated Chapter~\ref{Greens_functions} to one specific example: assume the operator $L$ is invertible and $L u = f$; then \begin{align*} u(x) = \int_{\R^d} \dd y \, G(x,y) \, f(y) = \bigl ( L^{-1} f \bigr )(x) \end{align*} holds. In other words, the Green's function $G$ is the operator kernel of $L^{-1}$. Seeing as $K_E$ is the resolvent $\bigl ( - \Delta_x + E \bigr )^{-1}$ sandwiched between two copies of the multiplication operator $\abs{V}^{\nicefrac{1}{2}}$, the dimension-dependent, explicit expression of the Birman-Schwinger kernel involves only the Green's function of $- \Delta_x + E$ in that particular dimension, \begin{align*} K_E(x,y) = \abs{V(x)}^{\nicefrac{1}{2}} \bigl ( - \Delta_x + E \bigr )^{-1}(x,y) \abs{V(y)}^{\nicefrac{1}{2}} . \end{align*} In odd dimensions, there exist closed expressions for $\bigl ( - \Delta_x + E \bigr )^{-1}(x,y)$ while for even $d$, no such neat formulas exist. Nevertheless, its behavior can be characterized. \medskip \noindent Let us return to the original question: Can we show the \emph{existence} of eigenvalues as well via the Birman-Schwinger principle? The answer is yes, and we will treat a particular case: \begin{theorem}[\cite{Simon:bound_states_low_d_Schroedinger:1976}]\label{quantum:thm:existence_bound_state_1d_Schroedinger} Consider the Schrödinger operator $H_{\lambda} = - \partial_x^2 + \lambda V$ on $L^2(\R)$ where $\lambda > 0$ and the potential satisfies $V \in L^1(\R)$, $V \neq 0$, $V \leq 0$, and % \begin{align*} \int_{\R} \dd x \, \bigl ( 1 + x^2 \bigr ) \, \abs{V(x)} < \infty .
\end{align*} % Then there exists $\lambda_0 > 0$ small enough so that $H_{\lambda}$ has exactly one eigenvalue % \begin{align} E_{\lambda} &= - \frac{\lambda^2}{4} \, \left ( \int_{\R} \dd x \, \abs{V(x)} \right )^2 + \order(\lambda^4) \label{quantum:eqn:BS_neg_eigenvalue} \end{align} % for all $\lambda \in (0,\lambda_0)$. \end{theorem} The eigenvalue gives an intuition on the shape of the eigenfunction: it has few oscillations to minimize kinetic energy and is approximately constant in the region where $V$ is appreciably different from $0$ (this region is not too large because of the decay assumption $\int_{\R} \dd x \, x^2 \, \abs{V(x)} < \infty$). Hence, the eigenfunction sees only the average value of the potential. This intuition neither explains why other eigenvalues may appear nor that for $d \geq 3$, the theorem is false. \begin{proof} The arguments in \cite[Section~2]{Simon:bound_states_low_d_Schroedinger:1976} ensure the boundedness of the Birman-Schwinger operator. Moreover, in one dimension the Green's function for $-\partial_x^2 + E$ exists ($-E \not\in \sigma(-\partial_x^2)$) and can be computed explicitly, namely % \begin{align*} \bigl ( -\partial_x^2 + E \bigr )^{-1}(x,y) = \sqrt{2\pi} \, \bigl ( \Fourier (\xi^2 + E)^{-1} \bigr )(x-y) = \frac{\e^{- \sqrt{E} \abs{x-y}}}{2 \sqrt{E}} . \end{align*} % To simplify notation, let us define $\mu := \sqrt{E}$. Thus, the Birman-Schwinger kernel is the function % \begin{align*} K_{\mu^2}(x,y) &= \frac{1}{2 \mu} \, \abs{V(x)}^{\nicefrac{1}{2}} \, \e^{- \mu \abs{x-y}} \, \abs{V(y)}^{\nicefrac{1}{2}} . \end{align*} % In addition, define the operators % \begin{align*} L_{\mu} := \frac{1}{2 \mu} \, \bopro{\abs{V}^{\nicefrac{1}{2}}}{\abs{V}^{\nicefrac{1}{2}}} \end{align*} % and $M_{\mu} := K_{\mu^2} - L_{\mu}$. Clearly, given that $V \in L^1(\R)$, we have $\abs{V}^{\nicefrac{1}{2}} \in L^2(\R)$ and $L_{\mu}$ is a bounded rank-$1$ operator.
Moreover, the operator kernel % \begin{align*} M_{\mu}(x,y) = \abs{V(x)}^{\nicefrac{1}{2}} \, \frac{\e^{- \mu \abs{x-y}} - 1}{2 \mu} \, \abs{V(y)}^{\nicefrac{1}{2}} \end{align*} % is well-defined in the limit $\mu \rightarrow 0$ and analytic for $\mu \in \C$ with $\Re \mu > 0$. The Birman-Schwinger principle tells us that $H_{\lambda}$ has an eigenvalue at $-\mu^2$ if and only if $1 \in \sigma_{\mathrm{p}} \bigl ( \lambda \, K_{\mu^2} \bigr )$: for $\lambda$ small enough we have $\bnorm{\lambda \, M_{\mu}} < 1$ which means the Neumann series\footnote{In this context, the geometric series is usually referred to as Neumann series.} % \begin{align} \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} &= \sum_{n = 0}^{\infty} \lambda^n \, M_{\mu}^n = 1 + \lambda \, M_{\mu} + \order(\lambda^2) \label{quantum:eqn:Neuman_series_M_mu} \end{align} % exists in $\mathcal{B} \bigl ( L^2(\R) \bigr )$. Hence, the invertibility of % \begin{align*} 1 - \lambda \, K_{\mu^2} &= 1 - \lambda \, M_{\mu} - \lambda \, L_{\mu} \\ &= \bigl ( 1 - \lambda \, M_{\mu} \bigr ) \, \Bigl ( 1 - \lambda \, \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \, L_{\mu} \Bigr ) \end{align*} % hinges on whether $1$ is an eigenvalue of \marginpar{2014.02.25} % \begin{align*} \lambda \, \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \, L_{\mu} &= \opro{\tfrac{\lambda}{2 \mu} \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \, \abs{V}^{\nicefrac{1}{2}}}{\abs{V}^{\nicefrac{1}{2}}} . \end{align*} % This is again a rank-$1$ operator, and thus, we can read off the eigenvector % \begin{align*} \psi_{\lambda,\mu} &= \frac{\lambda}{2 \mu} \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \, \abs{V}^{\nicefrac{1}{2}} \in L^2(\R) \end{align*} % to its only non-zero eigenvalue.
Moreover, we can compute this eigenvalue, % \begin{align*} \scpro{\abs{V}^{\nicefrac{1}{2}}}{ \; \tfrac{\lambda}{2 \mu} \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \, \abs{V}^{\nicefrac{1}{2}}} , \end{align*} % and this is equal to $1$ if and only if $\mu$ satisfies the self-consistent equation % \begin{align*} \mu &= G(\mu) := \frac{\lambda}{2} \, \scpro{\abs{V}^{\nicefrac{1}{2}}}{ \; \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \, \abs{V}^{\nicefrac{1}{2}}} . \end{align*} % Given that $\norm{\lambda \, M_{\mu}} < 1$ for $\lambda$ small enough, we can express $\bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1}$ in terms of \eqref{quantum:eqn:Neuman_series_M_mu}. Keeping only the first term of the expansion~\eqref{quantum:eqn:Neuman_series_M_mu}, we approximate $G$ by the integral of the potential % \begin{align} G(\mu) &= \tfrac{\lambda}{2} \, \bscpro{\abs{V}^{\nicefrac{1}{2}}}{\abs{V}^{\nicefrac{1}{2}}} + \order(\lambda^2) = \frac{\lambda}{2} \, \int_{\R} \dd x \, \abs{V(x)} + \order(\lambda^2) . \label{quantum:eqn:approximate_eigenvalue_sqrt} \end{align} % Hence, $G(\mu) = \mu$ has \emph{a} solution $\mu_{\ast}$ provided $\lambda$ is small enough; additionally \emph{any} solution to this equation satisfies $\mu^{-1} \leq C_1 \, \lambda^{-1}$ for some constant $C_1 > 0$ and $\lambda$ small. \medskip \noindent Now that we know that \emph{a} solution exists, we need to show uniqueness: Suppose we have found two solutions $\mu_1 \leq \mu_2$. Then they both solve the self-consistent equation $G(\mu_j) = \mu_j$, and assuming for a moment that $G$ is continuously differentiable in $\mu$, we use the fundamental theorem of calculus to obtain % \begin{align*} \babs{\mu_2 - \mu_1} &= \babs{G(\mu_2) - G(\mu_1)} = \abs{\int_{\mu_1}^{\mu_2} \dd \mu \, \partial_{\mu} G(\mu)} \\ & \leq \sup_{\mu \in [\mu_1,\mu_2]} \babs{\partial_{\mu} G(\mu)} \, \babs{\mu_2 - \mu_1} .
\end{align*} % \emph{If} we can show $G$ is continuously differentiable \emph{and} its derivative is bounded by $\nicefrac{1}{2}$ for $\lambda$ small enough, then the above inequality reads $\babs{\mu_2 - \mu_1} \leq \tfrac{1}{2} \babs{\mu_2 - \mu_1}$. This is only possible if $\mu_1 = \mu_2$, and the solution is unique. To show the last bit, we note that $M_{\mu}$ and $(1-z)^{-1}$ are real-analytic in $\mu$ so that their composition $\bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1}$ is also real-analytic. The analyticity of $M_{\mu}$ for $\mu \in \C$, $\Re \mu > 0$, also yields the bound % \begin{align} \bnorm{\partial_{\mu} M_{\mu}} \leq C_2 \, \mu^{-1} \label{quantum:eqn:BS_proof_a_priori_estimate_G_mu} \end{align} % via the Cauchy integral formula, because the maximal radius of the circular contour is less than $\mu$. The derivative of the resolvent can be related to $\partial_{\mu} M_{\mu}$ via the useful trick % \begin{align*} 0 = \partial_{\mu} (\id) &= \partial_{\mu} \Bigl ( \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \; \bigl ( 1 - \lambda \, M_{\mu} \bigr ) \Bigr ) \\ &= \partial_{\mu} \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \; \bigl ( 1 - \lambda \, M_{\mu} \bigr ) - \lambda \, \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \; \partial_{\mu} M_{\mu} \end{align*} % which yields % \begin{align*} \babs{\partial_{\mu} G(\mu)} &= \abs{\frac{\lambda^2}{2} \scpro{\abs{V}^{\nicefrac{1}{2}}}{ \; \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \, \partial_{\mu} M_{\mu} \, \bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1} \, \abs{V}^{\nicefrac{1}{2}}}} . \end{align*} % The right-hand side can be estimated with the help of the Cauchy-Schwarz inequality % \begin{align*} \ldots &\leq \lambda^2 \, \bnorm{\abs{V}^{\nicefrac{1}{2}}}_{L^2(\R)}^2 \, \norm{\bigl ( 1 - \lambda \, M_{\mu} \bigr )^{-1}}^2 \, \bnorm{\partial_{\mu} M_{\mu}} =: C_3 \, \lambda^2 \, \bnorm{\partial_{\mu} M_{\mu}} .
\end{align*} % Combining \eqref{quantum:eqn:BS_proof_a_priori_estimate_G_mu} with $\mu^{-1} \leq C_1 \, \lambda^{-1}$ (which we obtained from $\mu = G(\mu)$), we find % \begin{align*} C_3 \, \lambda^2 \, \bnorm{\partial_{\mu} M_{\mu}} \leq C_3 \, \lambda^2 \, C_2 \, \mu^{-1} \leq C_1 C_2 C_3 \, \lambda . \end{align*} % Put another way, we have deduced the bound $\babs{\partial_{\mu} G(\mu)} \leq C \, \lambda$ which means that for $\lambda$ small enough, we can ensure that the derivative is less than $\nicefrac{1}{2}$. Thus, the eigenvalue is unique and we have shown the theorem. \end{proof} \subsubsection{The min-max principle} \label{quantum:spectrum:bound_states:the_min_max_principle} Now that we have established criteria for the \emph{existence} of bound states below the continuous spectrum for operators of the form $H = - \Delta_x + V$, we proceed to find other ways to give estimates of their numerical values. Crucially, we shall always assume $H \geq c$ for some $c \in \R$. Most of the methods of this chapter do not depend on the particular form of the hamiltonian. \medskip \noindent So let us assume we have established the existence of a \emph{ground state} $\psi_0$, \ie there exists an eigenvalue $E_0 = \inf \sigma(H) < 0 = \inf \sigma_{\mathrm{ess}}(H)$ at the bottom of the spectrum, the \emph{ground state energy}, whose eigenfunction is $\psi_0$. The simplest estimate is obtained by minimizing the \emph{Rayleigh quotient} \begin{align*} \frac{\mathbb{E}_{\psi}(H)}{\norm{\psi}^2} &= \frac{\scpro{\psi}{H \psi}}{\norm{\psi}^2} \end{align*} for a family of trial wave functions (see also homework problem~54). Clearly, the Rayleigh quotient is bounded from below by $E_0$, for otherwise $E_0$ would not be the infimum of the spectrum. \begin{proposition}[The Rayleigh-Ritz principle] Let $H$ be a densely defined, selfadjoint operator which is bounded from below, \ie there exists $c \in \R$ such that $H \geq c$.
Then % \begin{align} \inf \sigma(H) \leq \frac{\scpro{\psi}{H \psi}}{\norm{\psi}^2} \label{quantum:eqn:Rayleigh_Ritz} \end{align} % holds for all $\psi \in \Hil \setminus \{ 0 \}$. \end{proposition} A rigorous proof of this innocent-looking fact (see \eg \cite[Theorem~XIII.1]{Reed_Simon:M_cap_Phi_4:1978}) requires machinery that is not yet available to us. A non-obvious fact is that we can also give a \emph{lower} bound on the ground state energy: \begin{theorem}[Temple's inequality, Theorem~XIII.5 in \cite{Reed_Simon:M_cap_Phi_4:1978}] Let $H$ be a selfadjoint operator that is bounded from below with ground state $E_0 \in \sigma_{\mathrm{p}}(H)$, $E_0 < 0$. Suppose in addition $E_0 < E_1$ where $E_1$ is either the second eigenvalue (in case more eigenvalues exist) or the bottom of the essential spectrum. Then for $\mu \in (E_0,E_1)$ and $\psi$ with $\norm{\psi} = 1$ and $\scpro{\psi}{H \psi} < \mu$, Temple's inequality holds: % \begin{align*} E_0 \geq \scpro{\psi}{H \psi} - \frac{\scpro{\psi}{H^2 \psi} - \scpro{\psi}{H \psi}^2}{\mu - \scpro{\psi}{H \psi}} = \scpro{\psi}{H \psi} - \frac{\mathrm{Var}_{\psi}(H)}{\mu - \scpro{\psi}{H \psi}} \end{align*} % \end{theorem} Temple's inequality gives an energy window for the ground state energy: if $\psi$ is close to the ground state wave function, then the right-hand side is also close to $E_0$. On the other hand, one needs to know a lower bound on the \emph{second} eigenvalue $E_1$. \begin{proof} By assumption, $E_0$ is an isolated eigenvalue of finite multiplicity (otherwise $E_0 = E_1 = E_n$ for all $n \in \N$), and thus the operator $(H - E_0) (H - \mu) \geq 0$ is non-negative: the product is $= 0$ if applied to the ground state and $> 0$ otherwise because $\mu < E_1$. 
Consequently, \begin{align} \bscpro{\psi}{(H - \mu) \, H \psi} \geq E_0 \, \bscpro{\psi}{(H - \mu) \psi} \end{align} holds, which, combined with the hypothesis $\bscpro{\psi}{(H - \mu) \psi} < 0$, yields \begin{align*} E_0 \geq \frac{\mu \, \scpro{\psi}{H \psi} - \scpro{\psi}{H^2 \psi}}{\mu - \scpro{\psi}{H \psi}} . \end{align*} \end{proof} What about other bound states below the essential spectrum (the ionization threshold)? Usually, we do not know \emph{whether} and \emph{how many} eigenvalues exist. Nevertheless, we can define a sequence of non-decreasing real numbers that coincides with the eigenvalues if they exist: the Rayleigh quotient suggests using \begin{align*} E_0 := \inf_{\substack{\varphi \in \mathcal{D}(H), \norm{\varphi} = 1}} \scpro{\varphi}{H \varphi} \end{align*} as the definition of the ground state energy. Note that even if $H$ does not have eigenvalues, $E_0$ is still well-defined and yields $\inf \sigma(H)$ (use a Weyl sequence). A priori, we do \emph{not} know whether $E_0$ is an eigenvalue, so we do not know whether an eigenvector exists. However, \emph{if} $E_0$ is an eigenvalue, then the eigenvector $\psi_1$ to the next eigenvalue $E_1$ (if it exists) would necessarily have to be orthogonal to $\psi_0$. Then the next eigenvalue satisfies \begin{align*} E_1 = \sup_{\varphi_0 \in \domain(H) \setminus \{ 0 \}} \inf_{\substack{\varphi \in \mathcal{D}(H), \norm{\varphi} = 1 \\ \varphi \in \{ \varphi_0 \}^{\perp}}} \scpro{\varphi}{H \varphi} . \end{align*} It turns out that this is the right definition even if $E_0 \not \in \sigma_{\mathrm{disc}}(H)$ is not an eigenvalue of finite multiplicity, because then $E_0 = E_1$.
Quite generally, the candidate for the $n$th eigenvalue is \begin{align*} E_n := \sup_{\substack{\varphi_1 , \ldots , \varphi_n \in \mathcal{D}(H) \\ \scpro{\varphi_j}{\varphi_k} = \delta_{jk}}} \inf_{\substack{\varphi \in \mathcal{D}(H), \norm{\varphi} = 1 \\ \varphi \in \{ \varphi_1 , \ldots , \varphi_n \}^{\perp}}} \scpro{\varphi}{H \varphi} . \end{align*} Thus, we obtain a sequence of non-decreasing real numbers \begin{align*} E_0 \leq E_1 \leq E_2 \leq \ldots \end{align*} which -- if they exist -- are the eigenvalues repeated according to their multiplicities. One can show rigorously that if $E_n = E_{n+1} = E_{n + 2} = \ldots$, then $E_n = \inf \sigma_{\mathrm{ess}}(H)$ is the bottom of the essential spectrum. Otherwise, the $E_n < \inf \sigma_{\mathrm{ess}}(H)$ are \emph{eigenvalues of finite multiplicity}. In the former case, there are at most $n$ eigenvalues below the essential spectrum. One may object that quite generally, it is impossible to evaluate $E_n$. Here is where the \emph{min-max principle} comes into play: assume we have chosen $n$ trial wave functions. Then this family of trial wave functions is a good candidate for the first few eigenfunctions if the eigenvalues $\lambda_j$ of the matrix $h := \Bigl ( \bscpro{\varphi_j}{H \varphi_k} \Bigr )_{0 \leq j , k \leq n-1} $ (ordered by size) are close to the $E_j$. \begin{theorem}[The min-max principle] Suppose $H$ is a selfadjoint operator on the Hilbert space $\Hil$ with domain $\mathcal{D}(H)$. Moreover, assume $H$ is bounded from below. Let $\bigl \{ \varphi_0 , \ldots , \varphi_{n-1} \bigr \} \subset \mathcal{D}(H)$ be an orthonormal system of $n$ functions and consider the $n \times n$ matrix % \begin{align*} h := \Bigl ( \bscpro{\varphi_j}{H \varphi_k} \Bigr )_{0 \leq j , k \leq n-1} \end{align*} % with eigenvalues $\lambda_0 \leq \lambda_1 \leq \ldots \leq \lambda_{n-1}$. Then we have that \marginpar{2014.02.27} % \begin{align*} E_j \leq \lambda_j \qquad \qquad \forall j = 0 , \ldots , n-1 .
\end{align*} % \end{theorem} \begin{proof} We proceed by induction over $k$ (which enumerates the eigenvalues of $h$): denote the normalized eigenvector to the lowest eigenvalue $\lambda_0$ with $v_0 = \bigl ( v_{0,0} , \ldots v_{0,n-1} \bigr )$. Then the normalized vector $\chi_0 := \sum_{j = 0}^{n-1} v_{0,j} \, \varphi_j$ satisfies % \begin{align*} \lambda_0 = \bscpro{v_0}{h v_0}_{\C^n} = \bscpro{\chi_0}{H \chi_0} \geq E_0 \end{align*} % by the Rayleigh-Ritz principle. Now assume we have shown that $E_l \leq \lambda_l$ holds for all $l = 0 , \ldots , k \leq n-2$. Clearly, the eigenvectors $v_0 , \ldots , v_k$ of $h$ are orthonormal, and the space spanned by the corresponding normalized $\chi_l = \sum_{j = 0}^{n-1} v_{l,j} \, \varphi_j$ is $(k+1)$-dimensional. Hence, for any % \begin{align*} \chi = \sum_{j = 0}^{n-1} w_j \, \chi_j \in \span \bigl \{ \chi_0 , \ldots , \chi_k \bigr \}^{\perp} \end{align*} % with coefficients $w \in \{ v_0 , \ldots , v_k \}^{\perp}$ we obtain % \begin{align*} \scpro{w}{h w} = \scpro{\chi}{H \chi} \geq E_{k+1} \end{align*} % because $\chi$ is orthogonal to a $(k+1)$-dimensional subspace of $\mathcal{D}(H)$. The left-hand side can be minimized by setting $w = v_{k+1}$, the eigenvector to $\lambda_{k+1}$, and thus, $E_{k+1} \leq \lambda_{k+1}$. This concludes the proof. \end{proof} One can use the min-max principle to make the following intuition rigorous: Assume one is given an operator $H(V) = -\Delta_x + V$ whose potential vanishes sufficiently rapidly at $\infty$, and one knows that $H(V)$ has a certain number of eigenvalues $\{ E_j(V) \}_{j \in \mathcal{I}}$, $\mathcal{I} \subseteq \N_0$. The decay conditions on $V$ ensure $\sigma_{\mathrm{ess}} \bigl ( H(V) \bigr ) = [0,+\infty)$. Then if $W \leq V$ is a second potential of the same type, the min-max principle implies \begin{align*} E_j(W) \leq E_j(V) . \end{align*} In particular, $H(W)$ has \emph{at least} as many eigenvalues as $H(V)$.
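After discretization, the Rayleigh-Ritz and min-max principles become statements about finite matrices and can be sanity-checked numerically. The following sketch (the grid, the attractive potential and the Gaussian trial family are illustrative choices, not taken from the text) compresses a discretized one-dimensional Schrödinger operator to a $4 \times 4$ matrix $h$ and verifies $E_j \leq \lambda_j$:

```python
import numpy as np

# Sketch with assumed ingredients (grid, potential, Gaussian trial basis):
# discretize H = -d^2/dx^2 + V on [-10, 10], build the compressed matrix
# h_jk = <phi_j, H phi_k> for an orthonormal 4-dimensional trial family,
# and check the min-max bounds E_j <= lambda_j.
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# finite-difference Laplacian (Dirichlet) plus an attractive potential
main = np.full(n, 2.0) / dx**2
off = np.full(n - 1, -1.0) / dx**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # -d^2/dx^2
H += np.diag(-2.0 * np.exp(-x**2 / 4))                   # V(x)

E = np.linalg.eigvalsh(H)                # eigenvalues E_0 <= E_1 <= ...

# orthonormal trial family: Gaussians times monomials, orthonormalized by QR
trial = np.stack([x**k * np.exp(-x**2 / 2) for k in range(4)], axis=1)
Q, _ = np.linalg.qr(trial)               # columns are orthonormal
lam = np.linalg.eigvalsh(Q.T @ H @ Q)    # lambda_0 <= ... <= lambda_3

# Rayleigh-Ritz is the j = 0 case; min-max covers all j
assert np.all(E[:4] <= lam + 1e-9)
```

The inequality $E_j \leq \lambda_j$ holds here as a pure matrix fact (Poincaré separation), independently of how well the discretization approximates the continuum operator.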
This fact combined with Theorem~\ref{quantum:thm:existence_bound_state_1d_Schroedinger} immediately yields \begin{corollary} Suppose we are in the setting of Theorem~\ref{quantum:thm:existence_bound_state_1d_Schroedinger}. Then for all $\lambda > 0$ the Schrödinger operator $H = - \partial_x^2 + \lambda V$ has at least one eigenvalue $E_0 < 0$. \end{corollary} \section{Magnetic fields} \label{quantum:magnetic} Classically, there are \emph{two} ways to include magnetic fields (\cf Chapter~\ref{classical_mechanics:magnetic}): either by minimal substitution $p \mapsto p - A(x)$ which involves the magnetic vector \emph{potential} $A$ or one modifies the symplectic form to include the magnetic \emph{field} $B = \nabla_x \times A$. Note that the physical observable is the magnetic \emph{field} rather than the vector \emph{potential}, because there are many vector potentials which represent the same magnetic field. For instance, if $A$ is a vector potential to the magnetic field $B = \nabla_x \times A$, then also $A' = A + \nabla_x \phi$ is another vector potential to $B$, because $\nabla_x \times \nabla_x \phi = 0$. The scalar function $\phi$ generates a \emph{gauge transformation}. In contrast, one always needs to choose a vector potential in quantum mechanics, and the hamiltonian for a non-relativistic particle subjected to an electromagnetic field $(E,B) = \bigl ( - \nabla_x V , \nabla_x \times A \bigr )$ is obtained by minimal substitution as well, \begin{align} H^A = \bigl ( - \ii \nabla_x - A \bigr )^2 + V . \end{align} What happens if we choose an \emph{equivalent} gauge $A' = A + \nabla_x \phi$? 
It turns out that $H^A$ and $H^{A + \nabla_x \phi}$ are \emph{unitarily equivalent} operators, and the unitary which connects the two is $\e^{- \ii \phi}$, \begin{align*} \e^{+ \ii \phi} \, H^A \, \e^{- \ii \phi} = H^{A + \nabla_x \phi} . \end{align*} Using the lingo of Chapter~\ref{quantum:framework:representations}, $\e^{- \ii \phi}$ is a unitary that connects two different representations. This has several very important ramifications. The spectrum $\sigma(H^A)$, for instance, only depends on the magnetic field $B = \nabla_x \times A$ because unitarily equivalent operators necessarily have the same spectrum. Moreover, the gauge freedom is essential to solving problems, because \emph{some gauges are nicer to work with than others}. One such condition is $\nabla_x \cdot A = 0$, known as Coulomb gauge. \marginpar{2014.03.04} The natural domains of these operators are the magnetic Sobolev spaces. \begin{definition}[Magnetic Sobolev spaces $H^m_A(\R^d)$]\label{quantum:defn:mag_Sobolev} Suppose $A \in \Cont^1(\R^d,\R^d)$. Then we define the magnetic Sobolev space of order $m$ to be \begin{align} H_A^m(\R^d) := \bigl \{ \psi \in L^2(\R^d) \; \; \vert \; \; \norm{\psi}_{H^m_A} < \infty \bigr \} \end{align} where the $m$th magnetic Sobolev norm is \begin{align} \norm{\psi}_{H^m_A}^2 := \sum_{\abs{\gamma} \leq m} \norm{\bigl ( - \ii \nabla_x - A \bigr )^{\gamma} \psi}_{L^2}^2 . \end{align} For $A = 0$, we abbreviate the (ordinary) Sobolev space with $H^m(\R^d) := H^m_{A = 0}(\R^d)$. \end{definition} The definition just means we are looking at those $\psi \in L^2(\R^d)$ whose weak derivatives of up to $m$th order are all in $L^2(\R^d)$.\footnote{The weak derivative is well-defined, because we can view $L^2(\R^d)$ as a subspace of the tempered distributions $\Schwartz'(\R^d)$.} One can see that Sobolev spaces are complete and can be equipped with a scalar product (see \eg \cite[Theorem~7.3]{Lieb_Loss:analysis:2001} for the case $m = 1$ and $A = 0$).
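The gauge covariance $\e^{+ \ii \phi} \, H^A \, \e^{- \ii \phi} = H^{A + \nabla_x \phi}$ survives discretization \emph{exactly} if the vector potential enters through link phases. The following numerical sketch uses a Peierls-type lattice model; the model and all parameters are illustrative assumptions, not the continuum operator from the text:

```python
import numpy as np

# Sketch (an assumption, not from the text): on a lattice, the covariant
# difference (D psi)_j = (e^{-i theta_j} psi_{j+1} - psi_j)/h with link
# phases theta_j (standing in for the integral of A over a lattice cell)
# makes the gauge covariance e^{+i phi} H^A e^{-i phi} = H^{A + grad phi}
# exact: H = D^dagger D transforms by conjugation with diag(e^{i phi}).
rng = np.random.default_rng(0)
n, h = 60, 0.1
theta = rng.uniform(-1, 1, n - 1)        # link variables, playing the role of A
phi = rng.uniform(-np.pi, np.pi, n)      # gauge function on the lattice sites

def covariant_H(theta):
    D = np.zeros((n - 1, n), dtype=complex)
    for j in range(n - 1):
        D[j, j + 1] = np.exp(-1j * theta[j]) / h
        D[j, j] = -1.0 / h
    return D.conj().T @ D                # H = D^dagger D >= 0

H1 = covariant_H(theta)
H2 = covariant_H(theta + np.diff(phi))   # gauge-transformed "potential"
U = np.diag(np.exp(1j * phi))            # the unitary e^{+i phi}

assert np.allclose(U @ H1 @ U.conj().T, H2)
assert np.allclose(np.linalg.eigvalsh(H1), np.linalg.eigvalsh(H2))
```

The second assertion illustrates the first ramification mentioned above: gauge-equivalent hamiltonians have identical spectra.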
Magnetic fields have the property that they induce oscillations, and these induced oscillations, in turn, \emph{increase the kinetic energy}. The diamagnetic inequality makes this intuition rigorous: \begin{theorem}[Diamagnetic inequality]\label{quantum:thm:diamag_inequality} Let $A : \R^d \longrightarrow \R^d$ be in $\Cont^1(\R^d,\R^d)$ and $\psi$ be in $H^1_A(\R^d)$. Then $\abs{\psi}$, the absolute value of $\psi$, is in $H^1(\R^d)$ and the \emph{diamagnetic inequality}, % \begin{align} \babs{\nabla_x \abs{\psi}(x)} \leq \babs{\bigl (- \ii \nabla_x - A(x) \bigr ) \psi(x)} , \label{quantum:eqn:diamag_inequality} \end{align} % holds pointwise for almost all $x \in \R^d$. \end{theorem} \begin{proof} Since $\psi \in L^2(\R^d)$ and each component of $A$ is in $\Cont^1(\R^d,\R^d) \subset L^2_{\mathrm{loc}}(\R^d,\R^d)$, the distributional gradient of $\psi$ is in $L^1_{\mathrm{loc}}(\R^d)$. The distributional derivative of the absolute value can be computed explicitly, % \begin{align} \partial_{x_j} \negthinspace \abs{\psi}(x) = % \begin{cases} \Re \Bigl ( \tfrac{\overline{\psi(x)}}{\abs{\psi(x)}} \, \partial_{x_j} \psi(x) \Bigr ) & \psi(x) \neq 0 \\ 0 & \psi(x) = 0 \\ \end{cases} % , \label{quantum:eqn:diamag_ineq_derivative_gradient} \end{align} % and the right-hand side is again a function in $L^1_{\mathrm{loc}}(\R^d)$. Given that $A$ and $\abs{\psi}$ are real, % \begin{align*} \Re \left ( \frac{\overline{\psi(x)}}{\abs{\psi(x)}} \, \ii A_j(x) \, \psi(x) \right ) &= \Re \bigl ( \ii A_j(x) \, \abs{\psi(x)} \bigr ) = 0 , \end{align*} % and we can add this term to equation~\eqref{quantum:eqn:diamag_ineq_derivative_gradient} free of charge to obtain % \begin{align*} \partial_{x_j} \abs{\psi}(x) = % \begin{cases} \Re \Bigl ( \tfrac{\overline{\psi(x)}}{\abs{\psi(x)}} \, \bigl ( \partial_{x_j} + \ii A_j(x) \bigr ) \, \psi(x) \Bigr ) & \psi(x) \neq 0 \\ 0 & \psi(x) = 0 \\ \end{cases} % . 
\end{align*} % The diamagnetic inequality now follows from $\babs{\Re z} \leq \abs{z}$, $z \in \C$. The left-hand side of \eqref{quantum:eqn:diamag_inequality} is in $L^2(\R^d)$ since the right-hand side is by assumption on $\psi$. \end{proof} The simplest example of a magnetic Hamilton operator $H^A = (- \ii \nabla_x - A)^2$ is the so-called \emph{Landau hamiltonian} where $d = 2$, $B$ is constant and $V = 0$. For instance, one can choose the \emph{symmetric gauge} \begin{align} A(x) = \frac{B}{2} \left ( \begin{matrix} - x_2 \\ + x_1 \\ \end{matrix} \right ) \end{align} or the \emph{Landau gauge} \begin{align} A(x) = B \left ( \begin{matrix} - x_2 \\ 0 \\ \end{matrix} \right ) . \end{align} The spectrum $\sigma(H^A) = \{ (2 n + 1) \, \abs{B} \; \vert \; n \in \N_0 \} = \sigma_{\mathrm{ess}}(H^A)$ consists of the \emph{Landau levels}, a collection of \emph{infinitely degenerate} eigenvalues accumulating at $+ \infty$. Physically, the origin of this massive degeneracy is translation-covariance: if we have a bound state $\psi_0$, then $\psi_0(\cdot - x_0)$ is an eigenvector to a possibly gauge-transformed hamiltonian $H^{A + \nabla_x \phi}$. From a classical perspective, the existence of bound states as well as translational symmetry are also clear: a constant magnetic field traps a particle in a circular orbit, and the analog of this \emph{classical} bound state is a \emph{quantum} bound state, an eigenvector. \section{Bosons vs{.} fermions} \label{quantum:bosons_fermions} The extension of \emph{single}-particle quantum mechanics to \emph{multi}-particle quantum mechanics is highly non-trivial. To clarify the presentation, let us focus on \emph{two} identical particles moving in $\R^d$. Two options arise: either the compound wave function $\Psi$ is a function on $\R^d$, \ie it acts like a \emph{density}, or it is a function on $\R^d \times \R^d$ where each set of coordinates $x = (x_1,x_2)$ is associated to one particle.
It turns out that wave functions are functions on $\R^{N d}$ where $N$ is the number of particles. However, that is not all; there is an added complication: classically, we can label identical particles by tracking their trajectory. This is impossible in the quantum framework, because the uncertainty principle forbids any such tracking procedure. Given that $\babs{\Psi(x_1,x_2)}^2$ is a physical observable, the inability to distinguish particles implies \begin{align*} \babs{\Psi(x_1,x_2)}^2 &= \babs{\Psi(x_2,x_1)}^2 , \end{align*} and hence, $\Psi(x_1,x_2) = \e^{+ \ii \theta} \, \Psi(x_2,x_1)$. However, given that exchanging variables twice must give the same wave function, the only two admissible phase factors are $\e^{+ \ii \theta} = \pm 1$. Particles for which $\Psi(x_1,x_2) = \Psi(x_2,x_1)$ holds are \emph{bosons} (integer spin) while those for which $\Psi(x_1,x_2) = - \Psi(x_2,x_1)$ are \emph{fermions} (half-integer spin). Examples are bosonic photons and fermionic electrons. This innocent-looking fact has very, very strong consequences on the physical and mathematical properties of quantum systems. The most immediate implication is \emph{Pauli's exclusion principle} for fermions, \begin{align*} \Psi(x,x) = 0 , \end{align*} a fact that is colloquially summarized by saying that bosons are social (because they like to bunch together) while sociophobic fermions tend to avoid one another. To make this more rigorous, let us consider the splitting \begin{align*} L^2(\R^d \times \R^d) \cong L^2_{\mathrm{s}}(\R^d \times \R^d) \oplus L^2_{\mathrm{as}}(\R^d \times \R^d) \end{align*} into symmetric and antisymmetric part induced via $f = f_{\mathrm{s}} + f_{\mathrm{as}}$ where \begin{align*} f_{\mathrm{s}}(x_1,x_2) :& \negmedspace= \tfrac{1}{2} \bigl ( f(x_1,x_2) + f(x_2,x_1) \bigr ) , \\ f_{\mathrm{as}}(x_1,x_2) :& \negmedspace= \tfrac{1}{2} \bigl ( f(x_1,x_2) - f(x_2,x_1) \bigr ) .
\end{align*} Then one can proceed and restrict the two-particle Schrödinger operator \begin{align*} H = \sum_{j = 1 , 2} \bigl ( - \Delta_{x_j} + V(x_j) \bigr ) \end{align*} to either the bosonic space $L^2_{\mathrm{s}}(\R^d \times \R^d)$ or the fermionic space $L^2_{\mathrm{as}}(\R^d \times \R^d)$. The kinetic energy $- \sum_{j = 1 , 2} \Delta_{x_j}$ preserves the (anti-)symmetry, \eg in the antisymmetric (fermionic) case it defines a bounded linear map \begin{align*} H : L^2_{\mathrm{as}}(\R^d \times \R^d) \cap H^2(\R^d \times \R^d) \longrightarrow L^2_{\mathrm{as}}(\R^d \times \R^d) . \end{align*} \section{Perturbation theory} \label{quantum:perturbations} One last, but important remark concerns perturbation theory. Almost none of the systems one encounters in “real life” has a closed-form solution, so it is natural to study perturbations of known systems first. The physics literature usually contents itself with studying approximations of \emph{eigenvalues} of the hamiltonian, but the more fundamental question is what happens to the \emph{dynamics}. In other words, does $H_1 \approx H_2$ imply $\e^{- \ii t H_1} \approx \e^{- \ii t H_2}$? The answer is \emph{yes} and uses a very, very nifty trick, the \emph{Duhamel formula}. The idea is to write the difference \begin{align} \e^{- \ii t H_1} - \e^{- \ii t H_2} &= \int_0^t \dd s \, \frac{\dd}{\dd s} \Bigl ( \e^{- \ii s H_1} \, \e^{- \ii (t-s) H_2} \Bigr ) \notag \\ &= - \ii \, \int_0^t \dd s \, \e^{- \ii s H_1} \, \bigl ( H_1 - H_2 \bigr ) \, \e^{- \ii (t-s) H_2} \end{align} as the integral of a total derivative. So if we assume $H_2 = H_1 + \eps \, W$, then one has to estimate \begin{align*} \e^{- \ii s H_1} \, \bigl ( H_1 - H_2 \bigr ) \, \e^{- \ii (t-s) H_2} &= \eps \, \e^{- \ii s H_1} \, W \, \e^{- \ii (t-s) H_2} = \order(\eps) . \end{align*} Note that this holds for \emph{all} times, because quantum mechanics is a \emph{linear} theory.
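On finite-dimensional toy hamiltonians, where $\e^{- \ii t H}$ can be computed by diagonalization, the resulting bound $\bnorm{\e^{- \ii t H_1} - \e^{- \ii t H_2}} \leq \eps \, \abs{t} \, \norm{W}$ can be tested directly. The random hermitian matrices, the value of $\eps$ and the times below are illustrative assumptions:

```python
import numpy as np

# Sketch (an assumption, not the text's setting): for Hermitian matrices
# H_1 and H_2 = H_1 + eps*W, the Duhamel formula yields the operator-norm
# bound ||e^{-itH_1} - e^{-itH_2}|| <= eps * |t| * ||W||, uniformly in the
# initial state, since the propagators are unitary.
rng = np.random.default_rng(1)

def propagator(H, t):
    E, V = np.linalg.eigh(H)                 # H = V diag(E) V^dagger
    return V @ np.diag(np.exp(-1j * t * E)) @ V.conj().T

A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
H1 = (A + A.T) / 2                           # unperturbed hamiltonian
W = (B + B.T) / 2                            # hermitian perturbation
eps = 1e-3
H2 = H1 + eps * W

for t in (0.5, 5.0, 50.0):
    diff = np.linalg.norm(propagator(H1, t) - propagator(H2, t), 2)
    assert diff <= eps * t * np.linalg.norm(W, 2) + 1e-12
```

The error grows at most linearly in $t$; for large $t$ the bound is still valid, merely no longer small.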
Otherwise, we would have to use the Grönwall Lemma~\ref{odes:lem:Groenwall} that places restrictions on the time scale for which $\psi_1(t) = \e^{- \ii t H_1} \psi$ and $\psi_2(t) = \e^{- \ii t H_2} \psi$ remain close. \chapter{Variational calculus} \label{variation} Functionals $\mathcal{E} : \Omega \subseteq \mathcal{X} \longrightarrow \C$ are maps from a subset $\Omega$ of a Banach space $\mathcal{X}$ over the field $\C$ (or $\R$) to $\C$ (or $\R$). In case $\mathcal{X}$ is finite-dimensional, a functional is just a function $\C^n \longrightarrow \C$, and so the cases we are really interested in are when $\mathcal{X}$ is \emph{infinite}-dimensional. Functionals arise very often in physics as a way to formulate certain fundamental principles (\eg energy, action and the like); their analysis often produces linear and non-linear PDEs which are interesting in their own right. For instance, the energy functional \begin{align} \mathcal{E}(\psi) := \left ( \int_{\R^d} \dd x \, \babs{\psi(x)}^2 \right )^{-1} \; \int_{\R^d} \dd x \, \Bigl ( \babs{\nabla_x \psi(x)}^2 + V(x) \, \babs{\psi(x)}^2 \Bigr ) \label{variation:eqn:energy_functional} \end{align} associated to the Schrödinger operator $H = - \Delta_x + V$ can be seen as a map $H^1(\R^d) \ni \psi \mapsto \mathcal{E}(\psi) \in \R$. Here, $H^1(\R^d)$ is the first Sobolev space, \cf Definition~\ref{quantum:defn:mag_Sobolev}. Let us assume $H$ is selfadjoint, bounded from below and has a ground state $\psi_0$. Then if we minimize $\mathcal{E}$ under the constraint $\norm{\psi} = 1$, the functional has a \emph{global minimum} at $\psi_0$. Alternatively, we can view it as the minimizer of the Rayleigh-Ritz quotient. So let us perturb the Rayleigh-Ritz quotient in the vicinity of the minimizer $\psi_0$, \ie we define \begin{align*} F(s) := \mathcal{E}(\psi_0 + s \varphi) \geq F(0) = \mathcal{E}(\psi_0) \end{align*} where $\varphi \in H^1(\R^d)$ is arbitrary, but fixed.
One can express denominator and numerator explicitly as quadratic polynomials in $s$, and one finds \begin{align*} \frac{\dd}{\dd s} F(0) &= 2 \, \snorm{\psi_0}^{-2} \, \Re \scpro{\varphi}{\bigl ( - \Delta_x + V - E_0 \bigr ) \psi_0} = 0 , \end{align*} where the derivative vanishes because $F$ has a global minimum at $s = 0$ independently of the choice of function $\varphi$. Put more succinctly: \emph{if it exists,} the minimizer $\psi_0$ of the Rayleigh-Ritz quotient is the eigenfunction of the Schrödinger operator $H = -\Delta_x + V$ at $E_0 = \inf \sigma(H)$. The energy functional only serves as an \emph{amuse gueule}. Among other things, it suggests that one can ask the same questions for functionals that one asks for (ordinary) functions: \begin{enumerate}[(1)] \item Existence of local and global \emph{extrema}, \emph{convexity} properties and the existence of extrema under \emph{constraints}. \item One can study ODEs where the vector field is connected to derivatives of a functional; in the simplest case, we want to look at gradient flows. The same fundamental questions arise: Where are the \emph{fixed points}? Are these fixed points \emph{stable or unstable}? \end{enumerate} \section{Extremals of functionals} \label{variation:extremals} Given that functionals are just functions on infinite-dimensional spaces, it is not surprising that the same type of questions are raised as with ordinary functions: continuity, differentiability, existence of local and global extrema. As one can guess, a rigorous treatment of functionals is a lot more technical. \subsection{The Gâteaux derivative} \label{variation:extremals:derivatives} Apart from continuity, the most fundamental property a function has is that of differentiability. For functionals, the starting point is the \emph{directional derivative} which then gives rise to the \emph{Gâteaux derivative}.
Similar to functions on $\R^n$ vs{.} functions on $\C^n$, we start with the real case first and postpone the discussion of complex derivatives to Chapter~\ref{variation:extremals:complex_derivatives}. \begin{definition}[Gâteaux derivative]\label{variation:defn:Gateaux_derivative} Let $\mathcal{E} : \Omega \subset \mathcal{X} \longrightarrow \R$ be a continuous functional defined on an \emph{open} subset $\Omega$ of the real Banach space $\mathcal{X}$. Then the Gâteaux derivative $\dd \mathcal{E}(\psi)$ at $\psi \in \Omega$ is defined as the linear functional on $\mathcal{X}$ for which % \begin{align} \dd \mathcal{E}(\psi) \varphi := \left . \frac{\dd }{\dd s} \mathcal{E} \bigl ( \psi + s \varphi \bigr ) \right \vert_{s = 0} \label{variation:eqn:Gateaux_derivative} \end{align} % holds for all $\varphi \in \mathcal{X}$. If the Gâteaux derivative exists for all $\psi \in \Omega$, we say $\mathcal{E}$ is $\Cont^1$. \end{definition} Higher derivatives are multilinear forms which are defined iteratively. \subsection{Extremal points and the Euler-Lagrange equations} \label{variation:extremals:Euler_Lagrange} \emph{Critical points} are those $\psi_{\ast} \in \mathcal{X}$ for which the Gâteaux derivative vanishes, $\dd \mathcal{E}(\psi_{\ast}) = 0$. To illustrate the connection between PDEs and critical points, let us consider the functional \begin{align*} \mathcal{E}(u) := \int_{\R^d} \dd x \, \Bigl ( \tfrac{1}{2} \babs{\nabla_x u}^2 + u \, f \Bigr ) \end{align*} where $f : \R^d \longrightarrow \R$ is some fixed function and $u \in \Cont^{\infty}_{\mathrm{c}}(\R^d) \subset H^1(\R^d,\R)$.
A quick computation yields the Gâteaux derivative, and if we set the right-hand side of \begin{align} \dd \mathcal{E}(u) v &= \int_{\R^d} \dd x \, \bigl ( \nabla_x u \cdot \nabla_x v + f \, v \bigr ) \notag \\ & = \int_{\R^d} \dd x \, \bigl ( - \Delta_x u + f \bigr ) \, v \overset{!}{=} 0 \label{variation:eqn:Euler_Lagrange_integral} \end{align} to zero for all $v \in H^1(\R^d,\R)$, we obtain the condition for a local extremum: $u$ is a critical point if and only if $u$ satisfies the Poisson equation \begin{align} - \Delta_x u + f = 0 . \label{variation:eqn:Euler_Lagrange_differential} \end{align} Depending on the context, we call either \eqref{variation:eqn:Euler_Lagrange_integral} or \eqref{variation:eqn:Euler_Lagrange_differential} the \emph{Euler-Lagrange equation} to $\mathcal{E}$. So if $\mathcal{X}$ is comprised of functions, then \emph{the search for critical points is equivalent to solving a linear or non-linear PDE.} \marginpar{2014.03.06} \subsection{Functionals on submanifolds} \label{variation:extremals:tangent} Very often the functional is defined on a subset $\Omega \subset \mathcal{X}$ which lacks a linear structure (\eg Lagrangian mechanics below) so that $\mathcal{E}(\psi + s \varphi)$ need not make sense, because $\psi + s \varphi \not\in \Omega$. Instead, one has to replace the simple linear combination $\psi + s \varphi$ with differentiable \emph{paths} $(-\delta,+\delta) \ni s \mapsto \psi_s \in \Omega$. Then a tangent vector $\xi$ at $\psi$ is an equivalence class of paths so that $\psi_0 = \psi$ and $\frac{\dd}{\dd s} \psi_s \big \vert_{s = 0} = \xi$; the tangent space $T_{\psi} \Omega$ is then the vector space spanned by these tangent vectors. Hence, we proceed as in Definition~\ref{variation:defn:Gateaux_derivative} and set \begin{align*} \dd \mathcal{E}(\psi) \xi := \left . \frac{\dd}{\dd s} \mathcal{E}(\psi_s) \right \vert_{s = 0} . 
\end{align*} Clearly, $\dd \mathcal{E}(\psi) \in \bigl ( T_{\psi} \Omega \bigr )'$ is an element of the dual space since it maps a tangent vector onto a scalar (\cf Definition~\ref{spaces:defn:dual_space} of the dual space). In general, this is just an abstract vector space, but here we can identify $T_{\psi} \Omega$ with a subvector space of $\mathcal{X}$. For a detailed description of the mathematical structures (manifolds and tangent bundles), we refer to \cite[Chapter~4.1]{Marsden_Ratiu:intro_mechanics_symmetry:1999}. \subsection{Lagrangian mechanics} \label{variation:extremals:lagrangian_mechanics} One extremely important example where the variation takes place over a \emph{non-linear} space is that of classical mechanics. Here we start with the space of paths \begin{align} \mathcal{D}(x_0,x_1) := \Bigl \{ q : [0,T] \longrightarrow \mathcal{X} \; \; \big \vert \; \; q \in \Cont^2 \bigl ( [0,T] , \mathcal{X} \bigr ) , \; \; q(0) = x_0 , \; \; q(T) = x_1 \Bigr \} \label{variation:eqn:path_space} \end{align} which join $x_0$ and $x_1$ in the Banach space $\mathcal{X}$; initially, let us concentrate on the case $\mathcal{X} = \R^d$ but more general choices are also admissible. As we will see later on, the choices of $x_0$, $x_1$ and $T$ are completely irrelevant: the Euler-Lagrange equations will be independent of them. The idea is to \emph{derive classical equations of motion from the Euler-Lagrange equations of the action functional} \begin{align} S(q) := \int_0^T \dd t \, L \bigl ( q(t) , \dot{q}(t) \bigr ) \end{align} where $L \in \Cont^2 \bigl ( \R^d \times \R^d \bigr )$ is the \emph{Lagrange function}; $L(x,v)$ depends on position $x$ and velocity $v$ (as opposed to momentum). Physicists call this the \emph{principle of stationary action}.
We exploit the linear structure of $\mathcal{X} = \R^d$ and propose that we can canonically identify the tangent space $T_q \mathcal{D}(x_0,x_1)$ with $\mathcal{D}(0,0)$: if $h \in \mathcal{D}(0,0)$, then by definition $q + s h \in \mathcal{D}(x_0,x_1)$ is a path which joins $x_0$ and $x_1$ for all $s \in \R$ with tangent vector $h$. The Euler-Lagrange equations can now be derived easily: by definition $\dd S(q) = 0$ means $\dd S(q) h = 0$ holds for all $h \in \mathcal{D}(0,0)$, and we compute \begin{align} \dd S(q) h &= \left . \frac{\dd }{\dd s} S(q + s h) \right \vert_{s = 0} = \int_0^T \dd t \, \left . \frac{\dd }{\dd s} L \Bigl ( q(t) + s h(t) \, , \, \dot{q}(t) + s \dot{h}(t) \Bigr ) \right \vert_{s = 0} \notag \\ &= \int_0^T \dd t \, \Bigl ( \nabla_x L \bigl ( q(t) , \dot{q}(t) \bigr ) \cdot h(t) + \nabla_v L \bigl ( q(t) , \dot{q}(t) \bigr ) \cdot \dot{h}(t) \Bigr ) \notag \\ &= \int_0^T \dd t \, \left ( \nabla_x L \bigl ( q(t) , \dot{q}(t) \bigr ) - \frac{\dd}{\dd t} \nabla_v L \bigl ( q(t) , \dot{q}(t) \bigr ) \right ) \cdot h(t) . \label{variation:eqn:Euler_Lagrange_with_L_integral} \end{align} Note that the boundary terms vanish, because $h(0) = 0 = h(T)$. Clearly, the Euler-Lagrange equations \begin{align} \nabla_x L \bigl ( q(t) , \dot{q}(t) \bigr ) - \frac{\dd}{\dd t} \nabla_v L \bigl ( q(t) , \dot{q}(t) \bigr ) &= 0 \label{variation:eqn:Euler_Lagrange_Lagrangian_mechanics} \end{align} are independent of $x_0$, $x_1$ and $T$ -- as promised. Moreover, the linear nature of $\mathcal{X} = \R^d$ is not crucial in the derivation, but a rigorous definition is more intricate in the case where $\mathcal{X}$ is a manifold (\cf \cite[Chapter~7]{Marsden_Ratiu:intro_mechanics_symmetry:1999}). 
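The derivation can be mirrored numerically: discretizing the action, plugging in a solution of the Euler-Lagrange equations and differentiating should produce a vanishing variation. The sketch below does this for the harmonic oscillator Lagrangian $L(x,v) = \tfrac{1}{2} v^2 - \tfrac{1}{2} x^2$; the example, the grid and the endpoints are our own choices:

```python
import numpy as np

# Sketch: discretized action S(q) = sum ( (1/2) v^2 - (1/2) q^2 ) dt for
# L(x, v) = v^2/2 - x^2/2 (harmonic oscillator); the Euler-Lagrange
# equation qddot = -q with q(0) = 0, q(T) = 1 is solved by sin(t)/sin(T).
N, T = 400, 1.0
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]

def action(q):
    v = np.diff(q) / dt                    # velocities on the intervals
    qm = 0.5 * (q[1:] + q[:-1])            # midpoint values of the path
    return float(np.sum(0.5 * v**2 - 0.5 * qm**2) * dt)

q_star = np.sin(t) / np.sin(T)             # EL solution joining 0 and 1

# dS(q_star) h = 0 for variations h with h(0) = 0 = h(T): check the
# partial derivatives with respect to the interior points q_1, ..., q_{N-1}
grad = np.empty(N - 1)
for j in range(1, N):
    e = np.zeros(N + 1)
    e[j] = 1e-6
    grad[j - 1] = (action(q_star + e) - action(q_star - e)) / 2e-6
assert np.max(np.abs(grad)) < 1e-4

# perturbed paths with the same endpoints have larger action (here T < pi,
# so the stationary point is in fact a minimum)
bump = 0.1 * np.sin(np.pi * t / T)         # vanishes at both endpoints
assert action(q_star) < action(q_star + bump)
```

Note that the endpoint values of the path never enter the gradient check, in line with the observation that $x_0$, $x_1$ and $T$ drop out of the Euler-Lagrange equations.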
\subsubsection{Classical mechanics on $\R^d$} \label{variation:extremals:lagrangian_mechanics:classical_mechanics_Rd} \emph{The} standard example for a Lagrangian is $L(x,v) = \frac{m}{2} v^2 - U(x)$ where $U$ is the potential (we use $U$ rather than $V$ to avoid the ambiguity between the velocity $v$ and the potential $V$). A simple computation yields \begin{align*} 0 &= \nabla_x L \bigl ( q(t) , \dot{q}(t) \bigr ) - \frac{\dd}{\dd t} \nabla_v L \bigl ( q(t) , \dot{q}(t) \bigr ) \\ &= - \nabla_x U \bigl ( q(t) \bigr ) - m \ddot{q}(t) \end{align*} or $m \ddot{q} = - \nabla_x U$. This second-order equation can be reduced to a first-order ODE by setting $\dot{q} = v$, and one obtains \begin{align*} \frac{\dd}{\dd t} \left ( % \begin{matrix} q \\ v \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} v \\ - m^{-1} \, \nabla_x U \\ \end{matrix} % \right ) . \end{align*} Glancing back at the beginning of Chapter~\ref{classical_mechanics}, we guess the simple change of variables $p := m v$ and recover Hamilton's equations of motion~\eqref{classical_mechanics:eqn:hamiltons_eom}. In fact, this innocent change of variables is an instance of a much deeper fact, namely that momentum can be \emph{defined} as \begin{align*} p := \nabla_v L . \end{align*} \subsubsection{Derivation of Maxwell's equations} \label{variation:extremals:lagrangian_mechanics:Maxwell} The idea to \emph{derive the dynamical equations of a physical theory as a critical point from an action functional} is extremely successful; almost any physical theory (\eg general relativity, quantum electrodynamics and fluid dynamics) can be derived in this formalism, and hence, a better understanding of functionals gives one access to a richly stocked toolbox. Moreover, they yield equations of motion in situations where one wants to couple degrees of freedom of a different nature (\eg fluid dynamics and electrodynamics). 
To illustrate this, we will derive the vacuum Maxwell equations (\cf Chapter~\ref{operators:Maxwell} for $\eps = 1 = \mu$). It all starts with a clever choice of Lagrange function, in this case \begin{align*} L \bigl ( t , A , \phi , \alpha , \varphi \bigr ) &= \int_{\R^3} \dd x \, \mathbb{L} \bigl ( t \, , \, A(x) \, , \, \phi(x) \, , \, \alpha(x) \, , \, \varphi(x) \bigr ) \\ &= \int_{\R^3} \dd x \, \Bigl ( \tfrac{1}{2} \babs{- \alpha(x) - \nabla_x \phi(x)}^2 - \tfrac{1}{2} \babs{\nabla_x \times A(x)}^2 \, + \Bigr . \\ &\qquad \qquad \quad \Bigl . + \, j(t,x) \cdot A(x) - \rho(t,x) \, \phi(x) \Bigr ) \end{align*} where $A$ is the vector potential, $\phi$ the scalar potential, $j$ the current density and $\rho$ the charge density; the variables $\alpha$ and $\varphi$ are the slots into which the time derivatives $\partial_t A$ and $\partial_t \phi$ will be inserted. Because of \emph{charge conservation} \begin{align} \nabla_x \cdot j + \partial_t \rho = 0 , \label{variation:eqn:charge_conservation} \end{align} current and charge density are linked. The potentials are linked to the electromagnetic field via \begin{align*} \mathbf{E} &= - \partial_t A - \nabla_x \phi , \\ \mathbf{B} &= \nabla_x \times A . \end{align*} Given that $L$ is defined in terms of a quadratic polynomial in the fields, we can easily deduce the equations of motion from the action functional \begin{align} S(A,\phi) &= \int_0^T \dd t \, L \bigl ( t , A(t) , \phi(t) , \partial_t A(t) , \partial_t \phi(t) \bigr ) \label{variation:eqn:Maxwell_action} \end{align} where $(A,\phi)$ is a path in the space of potentials.
Physicists usually use $\delta$ to denote (what they call) “functional differentiation”, and the Euler-Lagrange equations~\eqref{variation:eqn:Euler_Lagrange_Lagrangian_mechanics} are expressed as \begin{subequations}\label{variation:eqn:Maxwell_Euler_Lagrange} % \begin{align} \frac{\dd}{\dd t} \frac{\delta \mathbb{L}}{\delta \dot{\phi}} - \frac{\delta \mathbb{L}}{\delta \phi} &= 0 , \label{variation:eqn:Maxwell_Euler_Lagrange_phi} \\ \frac{\dd}{\dd t} \frac{\delta \mathbb{L}}{\delta \dot{A}} - \frac{\delta \mathbb{L}}{\delta A} &= 0 . \label{variation:eqn:Maxwell_Euler_Lagrange_A} \end{align} % \end{subequations} These can be computed by pretending that the integrand is just a polynomial in $A$, $\partial_t A$, $\phi$ and $\partial_t \phi$. We postpone a proper derivation until after the discussion of these two equations.\marginpar{2014.03.12} The \emph{Lagrange density $\mathbb{L}$} is independent of $\dot{\phi} = \partial_t \phi$, so that equation~\eqref{variation:eqn:Maxwell_Euler_Lagrange_phi} combined with partial integration yields \emph{Gauß's law} (charges are the sources of electric fields), \begin{align} \nabla_x \cdot \bigl ( - \partial_t A - \nabla_x \phi \bigr ) &= \nabla_x \cdot \mathbf{E} = \rho . \label{variation:eqn:edyn_constraint_equation} \end{align} This equation acts as a constraint and is not a dynamical equation of motion. Note that since $\mathbf{B} = \nabla_x \times A$ is the curl of a vector field, it is automatically divergence-free, $\nabla_x \cdot \mathbf{B} = 0$. This takes care of the two constraints, equations~\eqref{operators:eqn:source_Maxwell_eqns} for $\eps = 1 = \mu$. Equation~\eqref{variation:eqn:Maxwell_Euler_Lagrange_A} yields the first of the two dynamical Maxwell equations, \begin{align} - \partial_t^2 A - \nabla_x \partial_t \phi &= \partial_t \mathbf{E} = \nabla_x \times \nabla_x \times A - j = \nabla_x \times \mathbf{B} - j . 
\label{variation:eqn:edyn_dynamical_equation} \end{align} To obtain the other dynamical Maxwell equation, we differentiate $\mathbf{B} = \nabla_x \times A$ with respect to time and use $\nabla_x \times \nabla_x \phi = 0$: \begin{align*} \partial_t \mathbf{B} &= \nabla_x \times \partial_t A = - \nabla_x \times \bigl ( - \partial_t A - \nabla_x \phi \bigr ) \\ &= - \nabla_x \times \mathbf{E} \end{align*} Hence, we obtain the usual Maxwell equations after introducing a pair of new variables, the electric field $\mathbf{E}$ and the magnetic field $\mathbf{B}$. These fields are \emph{independent} of the choice of gauge, but more on that below. Solutions to the Maxwell equations are stationary points of the action functional~\eqref{variation:eqn:Maxwell_action}, and a proper derivation involves computing the functional derivative: \begin{align*} \bigl ( \dd S(A,\phi) \bigr )(a,\varphi) &= \left . \frac{\dd}{\dd s} S \bigl ( A + s a \, , \, \phi + s \varphi \bigr ) \right \vert_{s = 0} \\ &= \int_0^T \dd t \int_{\R^3} \dd x \, \frac{\dd}{\dd s} \left ( \frac{1}{2} \Babs{- \partial_t A - \nabla_x \phi - s \, \partial_t a - s \, \nabla_x \varphi}^2 + \right . \\ &\qquad \qquad \qquad \qquad \qquad \left . - \frac{1}{2} \Babs{\nabla_x \times A + s \, \nabla_x \times a}^2 + \right . \\ &\qquad \qquad \qquad \qquad \qquad + \biggl . j \cdot A - \rho \, \phi + s \, j \cdot a - s \, \rho \, \varphi \biggr ) \bigg \rvert_{s = 0} \\ &= \int_0^T \dd t \int_{\R^3} \dd x \, \Bigl ( \bigl ( - \partial_t A - \nabla_x \phi \bigr ) \cdot \bigl ( - \partial_t a - \nabla_x \varphi \bigr ) \, + \Bigr . \\ &\qquad \qquad \qquad \qquad \Bigl . - \bigl ( \nabla_x \times A \bigr ) \cdot \bigl ( \nabla_x \times a \bigr ) + j \cdot a - \rho \, \varphi \Bigr ) \\ &= \int_0^T \dd t \int_{\R^3} \dd x \, \biggl ( \Bigl ( - \partial_t^2 A - \nabla_x \partial_t \phi - \nabla_x \times \nabla_x \times A + j \Bigr ) \cdot a + \biggr . \\ &\qquad \qquad \qquad \qquad \biggl . 
+ \Bigl ( \nabla_x \cdot \bigl ( - \partial_t A - \nabla_x \phi \bigr ) - \rho \Bigr ) \, \varphi \biggr ) \end{align*} Setting $\dd S(A,\phi) = 0$ yields equations \eqref{variation:eqn:edyn_dynamical_equation} and \eqref{variation:eqn:edyn_constraint_equation}. \paragraph{Eliminating the constraints} The presence of the constraint equations \begin{align*} \nabla_x \cdot \mathbf{E} &= \rho \\ \nabla_x \cdot \mathbf{B} &= 0 \end{align*} means we can in fact eliminate some variables. The idea is to decompose the electromagnetic fields \begin{align*} \mathbf{E} &= \mathbf{E}_{\parallel} + \mathbf{E}_{\perp} , \\ \mathbf{B} &= \mathbf{B}_{\parallel} + \mathbf{B}_{\perp} \end{align*} into longitudinal ($\parallel$) and transversal ($\perp$) components. Transversal fields are those for which $\nabla_x \cdot \mathbf{E}_{\perp} = 0$ holds while the longitudinal component is simply the remainder $\mathbf{E}_{\parallel} = \mathbf{E} - \mathbf{E}_{\perp} = \nabla_x \chi$, which can always be written as the gradient of some function; on $\R^3$ this \emph{Helmholtz decomposition} of fields is unique. \begin{theorem}[Helmholtz-Hodge-Weyl-Leray decomposition] The Hilbert space % \begin{align*} L^2(\R^3,\C^3) = J \oplus G \end{align*} % decomposes into the orthogonal subspaces % \begin{align} J := \ker \mathrm{div} = \ran \mathrm{curl} \label{variation:eqn:definition_J_Helmholtz} \end{align} % and % \begin{align} G := \ran \mathrm{grad} = \ker \mathrm{curl} . \label{variation:eqn:definition_G_Helmholtz} \end{align} % In other words, any vector field $\mathbf{E} = \mathbf{E}_{\parallel} + \mathbf{E}_{\perp}$ can be uniquely decomposed into $\mathbf{E}_{\parallel} \in G$ and $\mathbf{E}_{\perp} \in J$. \end{theorem} \begin{proof}[Sketch] We will disregard most technical questions and content ourselves with showing orthogonality of the subspaces and $J \cap G = \{ 0 \}$. 
Note that since $\Cont^{\infty}_{\mathrm{c}}(\R^3,\C^3)$ is dense in $L^2(\R^3,\C^3)$, it suffices to work with vectors from that dense subspace. The proof of equation~\eqref{variation:eqn:definition_J_Helmholtz} can be found in \cite[Chapter~I, Theorem~1.4, equation~(1.34)]{Temam:theory_Navier_Stokes:2001}; equation~\eqref{variation:eqn:definition_G_Helmholtz} is shown in \cite[Chapter~I, Theorem~1.4, equation~(1.33) and Remark~1.5]{Temam:theory_Navier_Stokes:2001} and \cite[Theorem~1.1]{Picard:selfadjointness_curl:1998}. Let us start with orthogonality: Pick $\phi = \nabla_x \varphi \in G \cap \Cont^{\infty}_{\mathrm{c}}(\R^3,\C^3)$ where $\varphi \in \Cont^{\infty}_{\mathrm{c}}(\R^3)$ and $\psi \in J = \ker \mathrm{div}$. Then partial integration yields % \begin{align*} \bscpro{\psi}{\phi} = \bscpro{\psi}{\nabla_x \varphi} = - \bscpro{\nabla_x \cdot \psi}{\varphi} = 0 , \end{align*} % meaning that the vectors are necessarily orthogonal. It is crucial here that the space of \emph{harmonic vector fields} % \begin{align*} \mathrm{Har}(\R^3,\R^3) := \ker \mathrm{div} \cap \ker \mathrm{curl} = J \cap G = \{ 0 \} \end{align*} % is trivial, because by % \begin{align*} \nabla_x \times \nabla_x \times \phi = \nabla_x \bigl ( \nabla_x \cdot \phi \bigr ) - \Delta_x \phi \end{align*} % the subvector space consists of functions which component-wise satisfy $\Delta_x \phi = 0$. But on $\Cont^{\infty}_{\mathrm{c}}(\R^3,\R^3)$ and $L^2(\R^3,\R^3)$, this equation has no non-trivial solution. \end{proof} On bounded subsets of $\R^3$, there \emph{are} harmonic vector fields because then at least the constant function is square integrable, and the Helmholtz splitting is more subtle. Clearly, by definition $\mathbf{B}_{\parallel} = 0$ and $\nabla_x \cdot \mathbf{E} = \nabla_x \cdot \mathbf{E}_{\parallel} = \rho$. 
Moreover, $\nabla_x \times \mathbf{E} = \nabla_x \times \mathbf{E}_{\perp}$ holds so that we obtain \begin{align*} \mathbf{B}_{\parallel}(t) &= 0 , \\ \nabla_x \cdot \mathbf{E}_{\parallel}(t) &= \rho(t) . \end{align*} Note that the last equation can be solved with the help of the Fourier transform. The two dynamical contributions now only involve the \emph{transversal} components of the fields, \begin{align*} \partial_t \mathbf{E}_{\perp} &= \nabla_x \times \mathbf{B}_{\perp} - j , \\ \partial_t \mathbf{B}_{\perp} &= - \nabla_x \times \mathbf{E}_{\perp} . \end{align*} \paragraph{Emergence of $\mathbf{E}$ and $\mathbf{B}$} One may wonder what motivates one to set $\mathbf{E} = - \partial_t A - \nabla_x \phi$ and $\mathbf{B} = \nabla_x \times A$. Let us re-examine the Euler-Lagrange equation for the second variable, \begin{align*} - \partial_t^2 A - \nabla_x \partial_t \phi &= \nabla_x \times \nabla_x \times A - j . \end{align*} The left-hand side involves second-order time-derivatives, and if we want to write it as a first-order equation, we can introduce the new variable $\mathbf{E} = - \partial_t A - \nabla_x \phi$ to obtain \begin{align*} \frac{\dd}{\dd t} \left ( % \begin{matrix} A \\ \mathbf{E} \\ \end{matrix} % \right ) &= \left ( % \begin{matrix} - \mathbf{E} - \nabla_x \phi \\ \nabla_x \times \nabla_x \times A - j \\ \end{matrix} % \right ) . \end{align*} \paragraph{Gauge symmetry} The constraint equation \eqref{variation:eqn:edyn_constraint_equation} is related to a continuous symmetry and leads to a \emph{conserved quantity}. The relation between the two, a continuous symmetry and a conserved quantity, is made precise by \emph{Noether's theorem} (\cf \cite[Theorem~11.4.1]{Marsden_Ratiu:intro_mechanics_symmetry:1999}). 
Here, the \emph{gauge symmetry} of the action functional leads to \eqref{variation:eqn:edyn_constraint_equation}: if $\chi : \R \times \R^3 \longrightarrow \R$ is a smooth function depending on time and space, then a quick computation shows \begin{align*} S \bigl ( A , \phi \bigr ) = S \bigl ( A + \nabla_x \chi \, , \, \phi - \partial_t \chi \bigr ) \end{align*} holds: the two field terms are individually gauge-invariant, while the extra contributions of the two source terms cancel after partial integration because of \emph{charge conservation}~\eqref{variation:eqn:charge_conservation}. \subsection{Constraints} \label{variation:extremals:constraints} Two of the four Maxwell equations describe \emph{constraints}, and we have seen how to factor out the constraints in the dynamical equations by splitting $\mathbf{E}$ and $\mathbf{B}$ into longitudinal and transversal components. This is one way to deal with constraints: one introduces a \emph{suitable parametrization} (coordinates) to factor them out. Unfortunately, such a parametrization is often non-obvious or even practically impossible to find. Fortunately, one can adapt a technique which is well-known from finding extrema of functions on subsets of $\R^d$ under constraints: \emph{Lagrange multipliers}. For simplicity, let us reconsider the case of functions on $\R^d$. There, the idea of Lagrange multipliers is very simple: assume one wants to find the extrema of $f : \R^d \longrightarrow \R$ under the constraints \begin{align*} g(x) = \bigl ( g_1(x) , \ldots , g_n(x) \bigr ) = 0 \in \R^n . \end{align*} Then this problem is equivalent to finding the (ordinary) extrema of the function \begin{align*} F : \R^d \times \R^n \longrightarrow \R , \; \; F(x,\lambda) := f(x) + \lambda \cdot g(x) \end{align*} and the conditions for an extremum, \begin{align*} \nabla_x f(x) + \lambda \, \nabla_x g(x) &= 0 , \\ g(x) &= 0 , \end{align*} show how the recipe of Lagrange multipliers works behind the scenes. 
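This recipe can be tried out on a toy problem. The sketch below (a hypothetical choice: extremize $f(x,y) = x + y$ on the unit circle $g(x,y) = x^2 + y^2 - 1 = 0$) locates the constrained maximum by brute force and then verifies that $\nabla_x f$ and $\nabla_x g$ are indeed parallel there.

```python
import math

def f(x, y): return x + y                 # function to extremize (hypothetical choice)
def g(x, y): return x**2 + y**2 - 1       # constraint g = 0: the unit circle

# Locate the maximum of f on the constraint set by parametrizing the circle.
N = 100_000
k_best = max(range(N), key=lambda k: f(math.cos(2 * math.pi * k / N),
                                       math.sin(2 * math.pi * k / N)))
x0, y0 = math.cos(2 * math.pi * k_best / N), math.sin(2 * math.pi * k_best / N)

# At a constrained extremum, grad f + lambda * grad g = 0, i.e. the gradients
# are parallel; in two dimensions the cross product measures parallelism.
grad_f = (1.0, 1.0)
grad_g = (2 * x0, 2 * y0)
cross = grad_f[0] * grad_g[1] - grad_f[1] * grad_g[0]
lam = -grad_f[0] / grad_g[0]              # read off the multiplier from one component

assert abs(cross) < 1e-3                  # gradients are parallel at the maximum
assert abs(grad_f[1] + lam * grad_g[1]) < 1e-3
assert abs(g(x0, y0)) < 1e-12             # the candidate satisfies the constraint
```

As expected, the maximum sits at $(x_0, y_0) \approx (1/\sqrt{2}, 1/\sqrt{2})$ with multiplier $\lambda \approx -1/\sqrt{2}$.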
There is also a simple visualization in case $n = 1$: setting $\nabla_x f(x) + \lambda \, \nabla_x g(x) = 0$ means $\nabla_x f(x)$ and $\nabla_x g(x)$ are parallel to one another. Assume, for instance, that $x_0$ is a local maximum, but that $\nabla_x f(x_0)$ and $\nabla_x g(x_0)$ are not parallel. First of all, $\nabla_x g(x)$ is always \emph{normal} to the surface defined by $g(x) = 0$, and we can split $\nabla_x f(x_0) = v_{\parallel} + v_{\perp}$ where $v_{\parallel} \parallel \nabla_x g(x_0)$ and $v_{\perp}$ is orthogonal. Then $v_{\perp} \neq 0$ means we can increase the value of $f(x)$ along $g(x) = 0$ by going in the direction of $v_{\perp}$, because \begin{align*} v_{\perp} \cdot \partial_{v_{\perp}} f(x_0) = v_{\perp}^2 > 0 . \end{align*} The argument for a local minimum is analogous: one simply has to walk in the direction opposite to $v_{\perp}$ to lower $f(x)$. Put another way, tangential to the surface, all components of $\nabla_x f(x)$ need to vanish, but along the surface normal $\nabla_x f(x)$ need not be zero; it is, however, prohibited to travel along this direction, because one would leave the surface $\{ g(x) = 0 \}$. To go back to the realm of functionals, we merely need to translate all these ideas properly: consider a functional $\mathcal{E} : \Omega \subseteq \mathcal{X} \longrightarrow \R$ restricted to \begin{align*} U := \bigl \{ x \in \Omega \; \; \vert \; \; \mathcal{J}(x) = 0 \bigr \} \end{align*} described by the constraint functional $\mathcal{J}$. \begin{theorem} Let $\mathcal{E}$ and $\mathcal{J}$ both be $\Cont^1$ functionals and assume $x_0 \in U$ is a critical point of $\mathcal{E} \vert_U$. Then there exists $\lambda \in \R$ such that % \begin{align*} \dd \mathcal{E}(x_0) + \lambda \, \dd \mathcal{J}(x_0) = 0 . \end{align*} % \end{theorem} \begin{proof} Let $x_s$ be a differentiable path in $U$ with tangent vector $\xi = \partial_s x_s \vert_{s = 0}$, \ie a differentiable path in $\Omega$ such that $\mathcal{J}(x_s) = 0$. 
Then $\mathcal{J}(x_s) = 0$ implies automatically $\dd \mathcal{J}(x_0) \xi = 0$ for all such paths, \ie $\xi \in \ker \dd \mathcal{J}(x_0)$ (“$\dd \mathcal{J}(x_0)$ is normal to $\mathcal{J}(x) = 0$”). The fact that $x_0$ is a critical point of $\mathcal{E}$ in $U$ means % \begin{align*} \dd \mathcal{E}(x_0) \xi &= \left . \frac{\dd}{\dd s} \mathcal{E}(x_s) \right \vert_{s = 0} = 0 , \end{align*} % meaning that at critical points $\xi \in \ker \dd \mathcal{J}(x_0)$ implies $\xi \in \ker \dd \mathcal{E}(x_0)$ (“$\dd \mathcal{E}(x_0)$ and $\dd \mathcal{J}(x_0)$ are parallel”). \end{proof} The proof once again states that in the direction tangential to $U$, one needs to have $\dd \mathcal{E}(x_0) = 0$ while in the normal direction, $\dd \mathcal{E}(x_0) \neq 0$ is permissible, because it is not possible to travel in this direction without leaving the “surface” $U = \{ \mathcal{J}(x) = 0 \}$.\marginpar{2014.03.13} \begin{example} Assume we would like to find the equations of motions of a particle of mass $1$ moving on the surface of the sphere $\mathbb{S}^2 \subset \R^3$ with radius $1$. Then the Lagrange function describing this motion is just the free Lagrange function in $\R^3$, namely $L(x,v) = \frac{1}{2} v^2$. The constraint functional is described in terms of the function $J(x,v) = \tfrac{1}{4} \bigl ( x^2 - 1 \bigr )^2$, % \begin{align*} \mathcal{J}(q) := \int_0^T \dd t \, J \bigl (q(t) , \dot{q}(t) \bigr ) . \end{align*} % Paths on the sphere satisfy $\mathcal{J}(q) = 0$. 
Equation~\eqref{variation:eqn:Euler_Lagrange_with_L_integral} now yields a very efficient way to compute $\dd S + \lambda \, \dd \mathcal{J}$, namely for any path $q$ on the surface of the sphere, we obtain % \begin{align*} \bigl ( \dd S + \lambda \, \dd \mathcal{J} \bigr ) h &= \int_0^T \dd t \, \left ( \nabla_x \bigl ( L + \lambda \, J \bigr ) \bigl ( q(t) , \dot{q}(t) \bigr ) - \frac{\dd}{\dd t} \nabla_v \bigl ( L + \lambda \, J \bigr ) \bigl ( q(t) , \dot{q}(t) \bigr ) \right ) \cdot h(t) \\ &= \int_0^T \dd t \, \bigl ( - \ddot{q}(t) + \lambda \, (q(t)^2 - 1) \, q(t) \bigr ) \cdot h(t) . \end{align*} % By construction, $h(t)$ is tangent to the sphere at $q(t)$, \ie $h(t)$ needs to be perpendicular to $q(t)$, and thus the second term vanishes (the term also vanishes because $q(t)^2 = 1$, but the other argument still holds true if we change the constraint function). In fact, $q(t)$ is always normal to the tangent plane, and thus saying that $\ddot{q}(t)$ is perpendicular to the plane really means % \begin{align} \ddot{q}(t) = \lambda(t) \, q(t) . \label{variation:eqn:ddotq_free_particle_sphere} \end{align} % Writing this equation as a first-order equation, we introduce the variable $v(t) := \dot{q}(t)$ which by definition is tangent to $\mathbb{S}^2$ at $q(t)$. Because we are in three dimensions, there exists a unique vector $\omega(t)$ with $\omega(t) \cdot q(t) = 0$ ($\omega(t)$ must lie in the tangent plane), $\omega(t) \cdot \dot{q}(t) = 0$ and % \begin{align*} \dot{q}(t) = \omega(t) \times q(t) , \end{align*} % because $\omega(t)$ is the vector which completes $\{ q(t) , \dot{q}(t) \}$ to an orthogonal basis of $\R^3$ and is proportional to $q(t) \times \dot{q}(t)$. 
In fact, the above equation implies $\omega(t) = q(t) \times \dot{q}(t)$ since % \begin{align*} q(t) \times \dot{q}(t) &= q(t) \times \bigl ( \omega(t) \times q(t) \bigr ) = \underbrace{\abs{q(t)}^2}_{= 1} \, \omega(t) - \underbrace{q(t) \cdot \omega(t)}_{= 0} \, q(t) = \omega(t) . \end{align*} % Differentiating the left-hand side with respect to time yields an equation of motion for $\omega$, % \begin{align*} \frac{\dd}{\dd t} \omega(t) &= \frac{\dd}{\dd t} \bigl ( q(t) \times \dot{q}(t) \bigr ) = \dot{q}(t) \times \dot{q}(t) + q(t) \times \ddot{q}(t) = q(t) \times \ddot{q}(t) , \end{align*} % and with the help of equation \eqref{variation:eqn:ddotq_free_particle_sphere}, we deduce $\frac{\dd}{\dd t} \omega(t) = 0$. That means $\omega = \omega(0) = \omega(t) = q(0) \times \dot{q}(0)$ is constant in time and the equations of motion reduce to % \begin{align*} \dot{q}(t) &= \omega \times q(t) . \end{align*} % We have solved these equations numerous times in the course; the solutions are rotations around the axis along $\omega$ with angular velocity $\abs{\omega}$, just as expected. \end{example} \subsection{Complex derivatives} \label{variation:extremals:complex_derivatives} To frame the discussion, we will quickly recap why complex derivatives of functions are fundamentally different from real derivatives. In what follows let us denote complex numbers with $z = z_{\Re} + \ii z_{\Im} \in \C$ where $z_{\Re} , z_{\Im} \in \R$ are the real and imaginary parts. We also split any function $f : \C \longrightarrow \C$ into real and imaginary parts as $f = f_{\Re} + \ii f_{\Im}$ where now $f_{\Re} , f_{\Im} : \C \longrightarrow \R$. The identification of $\C \cong \R^2$ via $z \mapsto \vec{z} := \bigl ( z_{\Re} , z_{\Im} \bigr )$ allows us to think of \begin{align} \partial_z f(z_0) = \lim_{z \to z_0} \frac{f(z) - f(z_0)}{z - z_0} \label{variation:eqn:complex_derivative} \end{align} as a limit in $\R^2$. 
As we are in more than one (real) dimension, the above equation implicitly assumes that the limit is \emph{independent of the path} taken as $z \to z_0$. To simplify the notation a little, we will assume without loss of generality that $z_0 = 0$ and $f(0) = 0$. Then the existence of the limit~\eqref{variation:eqn:complex_derivative} implies that \begin{align*} \partial_z f(0) &= \lim_{z_{\Re} \to 0} \frac{f_{\Re}(z_{\Re}) + \ii f_{\Im}(z_{\Re})}{z_{\Re}} = \partial_{z_{\Re}} f(0) \\ &= \lim_{z_{\Im} \to 0} \frac{f_{\Re}(\ii z_{\Im}) + \ii f_{\Im}(\ii z_{\Im})}{\ii z_{\Im}} = - \ii \, \partial_{z_{\Im}} f(0) \\ &= \tfrac{1}{2} \bigl ( \partial_{z_{\Re}} - \ii \partial_{z_{\Im}} \bigr ) f(0) \end{align*} are in fact equal. Thus, equating real and imaginary parts, we immediately get the Cauchy-Riemann equations, \begin{subequations}\label{variation:eqn:Cauchy_Riemann} % \begin{align} \partial_{z_{\Re}} f_{\Re}(0) &= + \partial_{z_{\Im}} f_{\Im}(0) , \\ \partial_{z_{\Im}} f_{\Re}(0) &= - \partial_{z_{\Re}} f_{\Im}(0) . \end{align} % \end{subequations} This reasoning shows that complex differentiability of $f$ (\ie the limit in \eqref{variation:eqn:complex_derivative} exists) implies \eqref{variation:eqn:Cauchy_Riemann}. In fact, these two are equivalent: a function is complex differentiable or \emph{holomorphic} if and only if \begin{align*} \partial_{\bar{z}} f(z) = \tfrac{1}{2} \bigl ( \partial_{z_{\Re}} + \ii \partial_{z_{\Im}} \bigr ) f(z) = 0 . \end{align*} So let us turn our attention back to functionals. The idea here is the same: we can identify each complex Banach space $\mathcal{X} \cong \mathcal{X}_{\R} \oplus \ii \mathcal{X}_{\R}$ as a Banach space over $\R$ whose dimension is twice as large via the identification $x \mapsto \vec{x} = \bigl ( \Re x , \Im x \bigr ) = \bigl ( x_{\Re} , x_{\Im} \bigr )$. 
Hence, we can associate to any functional $\mathcal{E} : \Omega \subseteq \mathcal{X} \longrightarrow \C$ another functional on $\mathcal{X}_{\R} \oplus \ii \mathcal{X}_{\R}$ via \begin{align*} \mathcal{E}_{\R}(\vec{x}) = \mathcal{E} \bigl ( x_{\Re} + \ii x_{\Im} \bigr ) . \end{align*} For this functional, we can take (partial) derivatives with respect to $x_{\Re}$ and $x_{\Im}$ as before (we are in the setting of functionals over a real Banach space again) as well as define the gradient $\partial_{\vec{x}}$. Then analogously to derivatives on $\C$, we define the complex partial derivatives \begin{subequations} % \begin{align} \partial_x \mathcal{E}(x) &:= \partial_{x_{\Re}} \mathcal{E}(x) - \ii \partial_{x_{\Im}} \mathcal{E}(x) , \\ \partial_{\bar{x}} \mathcal{E}(x) &:= \partial_{x_{\Re}} \mathcal{E}(x) + \ii \partial_{x_{\Im}} \mathcal{E}(x) . \end{align} % \end{subequations} and the complex Gâteaux derivatives \begin{subequations} % \begin{align} \dd \mathcal{E}(x) \equiv \dd_x \mathcal{E}(x) &:= \dd_{x_{\Re}} \mathcal{E}(x) - \ii \dd_{x_{\Im}} \mathcal{E}(x) , \\ \bar{\dd} \mathcal{E}(x) \equiv \dd_{\bar{x}} \mathcal{E}(x) &:= \dd_{x_{\Re}} \mathcal{E}(x) + \ii \dd_{x_{\Im}} \mathcal{E}(x) . \end{align} % \end{subequations} Now that derivatives have been extended, let us define the notion of a critical point. \begin{definition}[Critical point] Let $\mathcal{E} : \Omega \subseteq \mathcal{X} \longrightarrow \C$ be a $\Cont^1$ functional. Then $x_0 \in \Omega$ is a critical point if and only if $\bar{\dd} \mathcal{E}(x_0) = 0$. \end{definition} One way to compute $\bar{\dd} \mathcal{E}(x)$ is to treat $x$ and $\bar{x}$ as independent variables and compute the corresponding partial Gâteaux derivative. 
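For ordinary functions on $\C$, the criterion $\partial_{\bar z} f = 0$ from the beginning of this subsection is easy to probe numerically. A minimal sketch (at a hypothetical sample point) compares the holomorphic $f(z) = z^2$ with $f(z) = \abs{z}^2 = z \bar z$, for which $\partial_{\bar z} f = z \neq 0$:

```python
h = 1e-6

def d_zbar(f, z):
    """Central-difference approximation of the Wirtinger derivative
    (1/2)(d/dx + i d/dy) f at the point z = x + iy."""
    dx = (f(z + h) - f(z - h)) / (2 * h)            # derivative w.r.t. the real part
    dy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # derivative w.r.t. the imaginary part
    return 0.5 * (dx + 1j * dy)

z0 = 0.8 - 0.3j   # hypothetical sample point
holomorphic = d_zbar(lambda z: z * z, z0)
anti_part = d_zbar(lambda z: abs(z) ** 2, z0)

assert abs(holomorphic) < 1e-6       # z^2 is holomorphic: d/dzbar vanishes
assert abs(anti_part - z0) < 1e-6    # |z|^2 = z zbar: d/dzbar equals z
```

For these quadratic examples the finite-difference quotients are exact up to rounding, so the tolerances can be chosen tightly.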
In the case of polynomials, this amounts to ordinary partial differentiation with respect to $\bar{x}$. The reason why we have chosen to take the derivative $\bar{\dd}$ instead of $\dd$ is just a matter of convention and convenience: in the context of Hilbert spaces, we can write \begin{align*} \bar{\dd} \mathcal{E}(\psi) \varphi = \bscpro{\varphi}{\mathcal{E}'(\psi)} \end{align*} as a scalar product; the vector $\mathcal{E}'(\psi)$ is defined via the Riesz representation theorem~\ref{hilbert_spaces:dual_space:thm:Riesz_Lemma}. If we had used $\dd$ instead, then $\mathcal{E}'(\psi)$ would appear in the first, the \emph{anti-linear} argument of the scalar product. \paragraph{Derivation of the Ginzburg-Landau equations} As a simple example, let us derive the Euler-Lagrange equation for the Ginzburg-Landau energy functional, \begin{align} \mathcal{E}_{\Omega}(\psi,A) := \int_{\Omega} \dd x \, \Bigl ( \babs{\bigl ( - \ii \nabla_x - A(x) \bigr ) \psi(x)}^2 + \tfrac{\kappa^2}{2} \bigl ( \sabs{\psi(x)}^2 - 1 \bigr )^2 + \bigl ( \nabla_x \times A(x) \bigr )^2 \Bigr ) . \label{variation:eqn:Ginzburg_Landau_functional} \end{align} It describes the difference in Helmholtz free energy between the normal and the superconducting phase of a conventional superconductor in the bounded region $\Omega \subseteq \R^3$ which is subjected to a magnetic field $B = \nabla_x \times A$. Here, $\psi$ is an \emph{order parameter} which describes whether the material is an ordinary conductor ($\psi = 0$) or whether the electrons have formed Cooper pairs which carry a superconducting current ($\psi \neq 0$). Admissible states are by definition critical points of the Ginzburg-Landau functional, \begin{align} \bigl ( \dd \mathcal{E}_{\Omega}(\psi,A) \bigr )(\varphi,a) &= \left . \frac{\dd}{\dd s} \mathcal{E}_{\Omega} \bigl ( \psi + s \varphi \, , \, A + s a \bigr ) \right \vert_{s = 0} \notag \\ &= 2 \Re \int_{\Omega} \dd x \, \Bigl ( a \cdot \Bigl ( - \overline{\psi} \, \bigl ( - \ii \nabla_x - A \bigr ) \psi + \nabla_x \times \nabla_x \times A \Bigr ) + \Bigr . 
\notag \\ &\qquad \qquad \qquad \; \Bigl . + \, \overline{\varphi} \, \Bigl ( \bigl ( - \ii \nabla_x - A \bigr )^2 \psi - \kappa^2 \bigl ( 1 - \sabs{\psi}^2 \bigr ) \, \psi \Bigr ) \Bigr ) \overset{!}{=} 0 , \label{variation:eqn:Ginzburg_Landau_critical_point} \end{align} and leads to the \emph{Ginzburg-Landau equations}, \begin{subequations} % \begin{align} \nabla_x \times \nabla_x \times A &= \Re \bigl ( \overline{\psi} \, \bigl ( - \ii \nabla_x - A \bigr ) \psi \bigr ) =: j(\psi,A) , \\ 0 &= \bigl ( - \ii \nabla_x - A \bigr )^2 \psi - \kappa^2 \bigl ( 1 - \sabs{\psi}^2 \bigr ) \, \psi . \end{align} % \end{subequations} We could have also obtained these equations by differentiating the integrand with respect to $\overline{\psi}$. Clearly, for all $A$ which describe \emph{constant} magnetic fields $B = \nabla_x \times A$, the normal conducting phase $\psi = 0$ as well as the perfect superconductor $\psi = 1$ are solutions, and the question is for which values of $\kappa$ and $B$ there are other solutions (a mixed phase where normal and superconducting states coexist). We will continue this example later on in the chapter. \subsection{Second derivatives} \label{variation:extremals:second_derivatives} When we introduced the Gâteaux derivative for functionals, we claimed that one can compute second- and higher-order derivatives in the same fashion. 
We choose to illustrate the general principle with the Ginzburg-Landau equations: the Euler-Lagrange equations can either be understood as the integral expression~\eqref{variation:eqn:Ginzburg_Landau_critical_point} or the \emph{$L^2$ gradient} \begin{align} \mathcal{E}_{\Omega}'(\psi,A) &= \left ( % \begin{matrix} \bigl ( - \ii \nabla_x - A \bigr )^2 \psi - \kappa^2 \bigl ( 1 - \sabs{\psi}^2 \bigr ) \, \psi \\ \nabla_x \times \nabla_x \times A - \Re \bigl ( \overline{\psi} \, \bigl ( - \ii \nabla_x - A \bigr ) \psi \bigr ) \\ \end{matrix} % \right ) \end{align} defined by $\bscpro{(\varphi,a)}{\mathcal{E}_{\Omega}'(\psi,A)} := \bigl ( \dd \mathcal{E}_{\Omega}(\psi,A) \bigr )(\varphi,a)$. The \emph{Hessian} $\mathcal{E}_{\Omega}''$ of $\mathcal{E}_{\Omega}$ is now just the \emph{linearization} of $\mathcal{E}_{\Omega}'(\psi,A)$, \begin{align} \bigl ( \mathcal{E}_{\Omega}''(\psi,A) \bigr )(\varphi,a) := \left . \frac{\dd}{\dd s} \mathcal{E}_{\Omega}' \bigl ( \psi + s \varphi \, , \, A + s a \bigr ) \right \vert_{s = 0} . \end{align} Equivalently, we could have used the quadratic form \begin{align*} \bscpro{(\eta,\alpha)}{\mathcal{E}_{\Omega}''(\psi,A) (\varphi,a)} &= \frac{\partial^2}{\partial s \, \partial t} \mathcal{E}_{\Omega} \bigl ( \psi + s \varphi + t \eta \, , \, A + s a + t \alpha \bigr ) \bigg \vert_{s = t = 0} \end{align*} as a definition. 
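The definition of the Hessian as the linearization of the gradient can be made concrete in finite dimensions. The sketch below (a hypothetical one-dimensional caricature of a Ginzburg-Landau-type energy on a grid with zero boundary values; all parameters are illustrative) computes the gradient of the discretized functional by central differences and the Hessian as the finite-difference linearization of that gradient; the symmetry of mixed second derivatives serves as a consistency check.

```python
import random

kappa, h, n = 1.0, 0.2, 5     # hypothetical grid parameters
random.seed(0)
u = [random.uniform(-1.0, 1.0) for _ in range(n)]

def E(u):
    """Discretized caricature of a Ginzburg-Landau-type energy with u = 0
    at both ends: gradient term plus double-well potential."""
    v = [0.0] + list(u) + [0.0]
    grad_term = sum((v[i + 1] - v[i]) ** 2 / (2 * h) for i in range(n + 1))
    well_term = sum(h * kappa ** 2 / 4 * (x ** 2 - 1) ** 2 for x in u)
    return grad_term + well_term

eps = 1e-4

def gradient(u):
    g = []
    for i in range(n):
        up, um = list(u), list(u)
        up[i] += eps; um[i] -= eps
        g.append((E(up) - E(um)) / (2 * eps))
    return g

def hessian(u):
    # the Hessian is the linearization of the gradient map u -> E'(u)
    H = []
    for i in range(n):
        up, um = list(u), list(u)
        up[i] += eps; um[i] -= eps
        gp, gm = gradient(up), gradient(um)
        H.append([(gp[j] - gm[j]) / (2 * eps) for j in range(n)])
    return H

H = hessian(u)
asymmetry = max(abs(H[i][j] - H[j][i]) for i in range(n) for j in range(n))
assert asymmetry < 1e-5                     # mixed second derivatives commute
assert all(H[i][i] > 0 for i in range(n))   # diagonal dominated by 2/h > 0
```

The diagonal entries agree with the analytic value $2/h + h \kappa^2 (3 u_i^2 - 1)$ of the linearized operator, the discrete analogue of probing the Hessian with basis vectors.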
A somewhat lengthier computation yields an explicit expression, \begin{align*} &\bigl ( \mathcal{E}_{\Omega}''(\psi,A) \bigr ) (\varphi,a) = \\ &\, = \left ( % \begin{matrix} \bigl ( - \ii \nabla_x - A \bigr )^2 \varphi + \kappa^2 \, \bigl ( 2 \, \sabs{\psi}^2 - 1 \bigr ) \, \varphi + \kappa^2 \, \psi^2 \, \overline{\varphi} - 2 \, \bigl ( - \ii \nabla_x - A \bigr ) \psi \cdot a + \ii \, \psi \, \nabla_x \cdot a \\ \nabla_x \times \nabla_x \times a + \sabs{\psi}^2 \, a - \Re \bigl ( \overline{\varphi} \, \bigl ( - \ii \nabla_x - A \bigr ) \psi + \overline{\psi} \, \bigl ( - \ii \nabla_x - A \bigr ) \varphi \bigr ) \\ \end{matrix} % \right ) , \end{align*} and the take-away message here is that $\mathcal{E}_{\Omega}''(\psi,A)$ is an $\R$-linear map, and thus we can use the tools of functional analysis to probe the properties of the Hessian. \medskip \noindent Quite generally, the Hessian of a functional $\mathcal{E}$ is just the second-order Gâteaux derivative, \begin{align*} \bscpro{\beta}{\mathcal{E}''(\psi) \alpha} &= \frac{\partial^2}{\partial s \, \partial t} \mathcal{E} \bigl ( \psi + s \alpha + t \beta \bigr ) \bigg \vert_{s = t = 0} , \end{align*} which implicitly defines a \emph{linear} partial differential operator. The properties of the Hessian characterize the behavior of the critical point: is it a local maximum, a local minimum or a saddle point? In short, we have the following hierarchy: we would like to find the critical points of a given functional $\mathcal{E}$ and characterize them. The critical point equation is then equivalent to a \emph{non-linear PDE} whose solutions are the critical points. The linearization of this PDE at a critical point yields a \emph{linear PDE} which serves as the starting point for the \emph{stability analysis} of that critical point. 
This is exactly what we have done for ODEs in Chapter~\ref{odes:stability}: the non-linear vector field determines the fixed points while its linearization can be used to classify them. Hence, functionals generate many interesting linear and non-linear PDEs. \section{Key points for a rigorous analysis} \label{variation:rigorous} Up to now, we have not been very rigorous. A careful reader will have noticed that even though we have defined a notion of differentiability, we have \emph{not} defined \emph{continuity}, which in real analysis precedes differentiability. That was a conscious choice, because the question of continuity for functionals turns out to be much more involved; continuity is, however, not necessary to define the Gâteaux derivative. So let us sketch what is involved in a rigorous analysis of functionals. The purpose here is not to completely cover and overcome all of the difficulties, but merely to point them out. So let us consider a functional $\mathcal{E}$ on $\Omega \subseteq \mathcal{X}$, and assume we would like to find a \emph{minimizer}, \ie an element $x_0 \in \Omega$ for which \begin{align*} \mathcal{E}(x_0) = \inf_{x \in \Omega} \mathcal{E}(x) , \end{align*} and to determine whether this minimizer is \emph{unique}. Clearly, we have to assume that the functional is \emph{bounded from below}, \begin{align*} E_0 := \inf_{x \in \Omega} \mathcal{E}(x) > - \infty , \end{align*} otherwise such a minimizer cannot exist. Now let us consider the case where $\Omega$ is a closed subset of $\R^d$, \ie we are looking at functions (in the ordinary sense) from $\R^d$ to $\R$. We split this minimization procedure into three steps; the same steps will serve as a blueprint for the infinite-dimensional case below. \begin{enumerate}[(1)] \item Pick a \emph{minimizing sequence $\{ x_n \}_{n \in \N}$} for which $\lim_{n \to \infty} \mathcal{E}(x_n) = E_0$. Such a sequence exists, because $\mathcal{E}$ is bounded from below. \item Investigate whether $\{ x_n \}_{n \in \N}$ or at least a subsequence converges. 
Just imagine if $\mathcal{E}$ has two minima $x_0$ and $x_0'$. Then the alternating sequence would certainly be a minimizing sequence which does not converge. Another situation may occur, namely that the minimizer is located at “$\infty$”: take $\mathcal{E}(x) = \e^{- x^2}$ on $\R^d$, then no minimizer exists. So how do we show that $\{ x_n \}_{n \in \N}$ has a convergent subsequence? Let us first assume in addition that $\mathcal{E}$ is \emph{coercive}, \ie $\mathcal{E}(x) \rightarrow \infty$ whenever $\snorm{x} \rightarrow \infty$. In that case, we may assume that all the elements in the minimizing sequence satisfy $\mathcal{E}(x_n) \leq E_0 + 1$ (just discard all the others). Then since $\mathcal{E}$ is coercive, also $\snorm{x_n} \leq C$ holds for some $C > 0$, and the existence of a convergent subsequence follows from Bolzano-Weierstrass (every bounded sequence in $\R^d$ has a convergent subsequence). For simplicity, we denote this convergent subsequence by $\{ x_n \}_{n \in \N}$. \item Then the limit point of this convergent subsequence, $x_0 = \lim_{n \to \infty} x_n$, is a candidate for a minimizer. \emph{If} $\mathcal{E}$ is continuous, then by % \begin{align*} E_0 = \lim_{n \to \infty} \mathcal{E}(x_n) = \mathcal{E} \Bigl ( \lim_{n \to \infty} x_n \Bigr ) = \mathcal{E}(x_0) \end{align*} % $x_0$ is a minimizer of $\mathcal{E}$. \end{enumerate} Not surprisingly, things are more complicated if $\mathcal{X}$ is infinite-dimensional. \begin{enumerate}[(i)] \item First of all, showing that a functional is bounded from below is not as immediate as in the case of functions. For instance, consider the energy functional~\eqref{variation:eqn:energy_functional} associated to $H = - \Delta_x + V$ where $V$ is not bounded from below (think of something like the Coulomb potential). Then it is a priori \emph{not} clear whether $H$ -- and thus $\mathcal{E}$ -- is bounded from below. 
\item In point (2) the Bolzano-Weierstrass theorem was crucial to extract a convergent subsequence. Things are not as easy when going to infinite dimensions, because the unit ball $\bigl \{ x \in \mathcal{X} \; \vert \; \snorm{x} \leq 1 \bigr \}$ is \emph{no longer compact}. However, if the bidual $\mathcal{X}''$ is isometrically isomorphic to $\mathcal{X}$ (\ie $\mathcal{X}$ is reflexive), then by the Banach-Alaoglu theorem \cite[Theorem~IV.21]{Reed_Simon:M_cap_Phi_1:1972} every bounded sequence has a \emph{weakly} convergent subsequence (\cf Definition~\ref{spaces:defn:weak_convergence}). For instance, if $\mathcal{X}$ is a Hilbert space, it is reflexive. \item The essential ingredient in (3) was the continuity of the functional, but this is usually either very hard to prove or even wrong. However, for the purpose of finding minimizers, \emph{weak lower semi-continuity} (w-lsc) suffices, \ie % \begin{align*} x_n \rightharpoonup x_0 \; \; \Longrightarrow \; \; \liminf_{n \to \infty} \mathcal{E}(x_n) \geq \mathcal{E}(x_0) . \end{align*} % For if $\{ x_n \}_{n \in \N}$ is a minimizing sequence which converges weakly to $x_0$, then % \begin{align} \lim_{n \to \infty} \mathcal{E}(x_n) = E_0 \geq \mathcal{E}(x_0) \geq E_0 \label{variation:eqn:wlsc_existence_minimizer} \end{align} % shows that $x_0$ in fact minimizes the functional. \marginpar{2014.03.25} \end{enumerate} \begin{theorem}\label{variation:thm:key_theorem_existence_minimizer} Assume that % \begin{enumerate}[(i)] \item $\Omega \subseteq \mathcal{X}$ is closed under weak limits (weak limits of sequences in $\Omega$ again lie in $\Omega$), \item $\mathcal{E}$ is weakly lower semi-continuous, and \item $\mathcal{E}$ is coercive. \end{enumerate} % Then $\mathcal{E}$ is bounded from below and attains its minimum in $\Omega$, \ie there exists a possibly non-unique minimizer in $\Omega$.
\end{theorem} \begin{proof} Set $E_0 := \inf_{x \in \Omega} \mathcal{E}(x)$ and pick a minimizing sequence $\{ x_n \}_{n \in \N}$, \ie a sequence with $E_0 = \lim_{n \to \infty} \mathcal{E}(x_n)$. At this point, we do not know whether $E_0$ is finite or $-\infty$. In case $E_0$ is finite, we may assume without loss of generality that $\mathcal{E}(x_n) \leq E_0 + 1$ (simply discard all elements for which this does not hold). In either case, the sequence $\bigl \{ \mathcal{E}(x_n) \bigr \}_{n \in \N}$ is bounded from above, and the coercivity of $\mathcal{E}$ implies $\snorm{x_n} \leq C$ for some $C > 0$. By the Banach-Alaoglu theorem, the bounded sequence $\{ x_n \}_{n \in \N}$ contains a weakly convergent subsequence $\{ x_{n_k} \}_{k \in \N}$ for which $x_{n_k} \rightharpoonup x_0$; by assumption (i), the limit point $x_0$ lies in $\Omega$ and is the candidate for the minimizer. Given that $\mathcal{E}$ is w-lsc, we conclude $\liminf_{k \to \infty} \mathcal{E}(x_{n_k}) \geq \mathcal{E}(x_0)$. Hence, in view of \eqref{variation:eqn:wlsc_existence_minimizer} we obtain $\mathcal{E}(x_0) = E_0$, which not only shows the existence of \emph{a} minimizer, but also that $E_0 > -\infty$. \end{proof} One \emph{application} of this theorem is to show the existence of solutions to a PDE, and the strategy is as follows: Find a functional so that its Euler-Lagrange equation is the PDE in question. Then the existence of a minimizer implies the existence of a solution to the PDE, because the PDE characterizes the critical points of the functional. To get uniqueness of solutions, we have to impose additional assumptions. One of the standard ones is \emph{convexity} which is defined analogously to the case of functions. \begin{definition}[Convex functional] Let $\Omega \subset \mathcal{X}$ be a convex subset and $\mathcal{E} : \Omega \longrightarrow \R$ a functional. % \begin{enumerate}[(i)] \item $\mathcal{E}$ is \emph{convex} iff $\mathcal{E} \bigl ( s x + (1 - s) y \bigr ) \leq s \, \mathcal{E}(x) + (1-s) \, \mathcal{E}(y)$ for all $x , y \in \Omega$ and $s \in [0,1]$.
\item $\mathcal{E}$ is \emph{strictly convex} iff $\mathcal{E} \bigl ( s x + (1 - s) y \bigr ) < s \, \mathcal{E}(x) + (1-s) \, \mathcal{E}(y)$ for all $x \neq y$ and $s \in (0,1)$. \end{enumerate} % \end{definition} Convexity has strong implications on minimizers just like in the case of functions. \begin{proposition}\label{variation:prop:minimizer_convexity} % \begin{enumerate}[(i)] \item Every local minimum of a convex functional is a global minimum. \item A strictly convex functional has at most one minimizer. \end{enumerate} % \end{proposition} \begin{proof} % \begin{enumerate}[(i)] \item Assume $x_0$ is just a local, but not a global minimum. Then there exists a neighborhood $V$ of $x_0$ such that $\mathcal{E}(x_0) \leq \mathcal{E}(x)$ for all $x \in V$ (local minimum) as well as a point $y \not\in V$ with $\mathcal{E}(y) < \mathcal{E}(x_0)$ (local but not global minimum). Connecting $x_0$ and $y$ by a line segment, convexity in conjunction with $\mathcal{E}(y) < \mathcal{E}(x_0)$ implies % \begin{align*} \mathcal{E} \bigl ( s y + (1 - s) x_0 \bigr ) \leq s \, \mathcal{E}(y) + (1-s) \, \mathcal{E}(x_0) < \mathcal{E}(x_0) \end{align*} % for all $s \in (0,1]$. Hence, for $s$ small enough, $x' = s y + (1-s) x_0 \in V$ and we have found points in $V$ for which $\mathcal{E}(x') < \mathcal{E}(x_0)$ -- in contradiction to the assumption that $x_0$ is a local minimum. Hence, every local minimum is also a global minimum. \item Assume there exist two distinct minimizers $x_0$ and $x_0'$ with $\mathcal{E}(x_0) = E_0 = \mathcal{E}(x_0')$. Then for $s \in (0,1)$ strict convexity % \begin{align*} E_0 \leq \mathcal{E} \bigl ( s x_0 + (1-s) x_0' \bigr ) < s \, \mathcal{E}(x_0) + (1-s) \, \mathcal{E}(x_0') = E_0 \end{align*} % leads to a contradiction, and the minimum -- if it exists -- is unique. \end{enumerate} % \end{proof} Note that the definition of convexity does not require $\mathcal{E}$ to be once or twice Gâteaux-differentiable.
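To make the preceding notions concrete, let us record a standard illustration (the quadratic model; the operator $A$ and the vector $b$ below are auxiliary objects introduced solely for this example): on a real Hilbert space $\mathcal{X}$, consider
%
\begin{align*}
\mathcal{E}(x) = \tfrac{1}{2} \, \bscpro{x}{A x} - \bscpro{b}{x}
\end{align*}
%
where $A = A^*$ is bounded and satisfies $\bscpro{x}{A x} > 0$ for all $x \neq 0$. Since the linear term drops out, a direct computation yields
%
\begin{align*}
s \, \mathcal{E}(x) + (1-s) \, \mathcal{E}(y) - \mathcal{E} \bigl ( s x + (1-s) y \bigr ) = \frac{s \, (1-s)}{2} \, \bscpro{x - y}{A (x-y)} > 0
\end{align*}
%
for all $x \neq y$ and $s \in (0,1)$, \ie $\mathcal{E}$ is strictly convex. Hence, by Proposition~\ref{variation:prop:minimizer_convexity} a minimizer -- if it exists -- is the unique global minimum; it is characterized by the Euler-Lagrange equation $A x = b$.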
However, if we assume in addition that the functional is once or twice differentiable, we obtain additional characterizations of convexity. \begin{proposition}\label{variation:prop:convexity_differential_characterization} Assume $\mathcal{E} : \Omega \longrightarrow \R$ is Gâteaux differentiable. % \begin{enumerate}[(i)] \item $\mathcal{E}$ is convex iff $\mathcal{E}(x) \geq \mathcal{E}(y) + \bigl ( \dd \mathcal{E}(y) \bigr )(x-y)$ for all $x,y \in \Omega$. \item $\mathcal{E}$ is convex iff $\bigl ( \dd \mathcal{E}(x) - \dd \mathcal{E}(y) \bigr )(x-y) \geq 0$ for all $x,y \in \Omega$. \item If $\mathcal{E}$ is twice Gâteaux differentiable, then $\mathcal{E}$ is convex iff $\bscpro{y}{\dd^2 \mathcal{E}(x) y} \geq 0$ for all $x \in \Omega$ and all directions $y$. \end{enumerate} % \end{proposition} \begin{proof} % \begin{enumerate}[(i)] \item “$\Rightarrow$:” The convexity of $\mathcal{E}$ can be expressed by regrouping the terms in the definition, % \begin{align*} \mathcal{E} \bigl ( y + s (x - y) \bigr ) \leq \mathcal{E}(y) + s \bigl ( \mathcal{E}(x) - \mathcal{E}(y) \bigr ) , \end{align*} % which is equivalent to % \begin{align*} \mathcal{E}(x) - \mathcal{E}(y) \geq \frac{\mathcal{E} \bigl ( y + s (x-y) \bigr ) - \mathcal{E}(y)}{s} . \end{align*} % Taking the limit $s \searrow 0$ yields % \begin{align*} \mathcal{E}(x) - \mathcal{E}(y) \geq \bigl ( \dd \mathcal{E}(y) \bigr )(x-y) \end{align*} % which implies $\mathcal{E}(x) \geq \mathcal{E}(y) + \bigl ( \dd \mathcal{E}(y) \bigr )(x-y)$.
“$\Leftarrow$:” Substituting $x \mapsto x$, $y \mapsto y + s (x-y)$ as well as $x \mapsto y$, $y \mapsto y + s (x-y)$ into the inequality from (i) yields % \begin{align*} \mathcal{E}(x) &\geq \mathcal{E} \bigl ( y + s (x-y) \bigr ) + (1-s) \, \Bigl ( \dd \mathcal{E} \bigl ( y + s (x-y) \bigr ) \Bigr )(x-y) , \\ \mathcal{E}(y) &\geq \mathcal{E} \bigl ( y + s (x-y) \bigr ) - s \, \Bigl ( \dd \mathcal{E} \bigl ( y + s (x-y) \bigr ) \Bigr )(x-y) , \end{align*} % and if we multiply the first inequality by $s$, the second with $1-s$ and add the two, we obtain $\mathcal{E} \bigl ( s x + (1-s) y \bigr ) \leq s \, \mathcal{E}(x) + (1-s) \, \mathcal{E}(y)$. \item “$\Rightarrow$:” From (i) we deduce $\mathcal{E}(x) \geq \mathcal{E}(y) + \bigl ( \dd \mathcal{E}(y) \bigr )(x-y)$ and $\mathcal{E}(y) \geq \mathcal{E}(x) + \bigl ( \dd \mathcal{E}(x) \bigr )(y-x)$; adding the two yields the claimed inequality. “$\Leftarrow$:” Set $f(s) := \mathcal{E} \bigl ( y + s (x-y) \bigr )$. Then $f'(s) = \Bigl ( \dd \mathcal{E} \bigl ( y + s(x-y) \bigr ) \Bigr ) (x-y)$, and by assumption % \begin{align*} f'(s) - f'(0) &= \Bigl ( \dd \mathcal{E} \bigl ( y + s(x-y) \bigr ) - \dd \mathcal{E}(y) \Bigr ) (x-y) \geq 0 \end{align*} % holds. Integrating this inequality with respect to $s$ over $[0,1]$, we obtain % \begin{align*} f(1) - f(0) - f'(0) = \mathcal{E}(x) - \mathcal{E}(y) - \bigl ( \dd \mathcal{E}(y) \bigr )(x-y) \geq 0 , \end{align*} % which by virtue of (i) is equivalent to convexity. \item “$\Rightarrow$:” By (ii) the convexity of $\mathcal{E}$ implies % \begin{align*} \Bigl ( \dd \mathcal{E} \bigl ( x + s y \bigr ) - \dd \mathcal{E}(x) \Bigr )(sy) \geq 0 , \end{align*} % and consequently, % \begin{align*} \bscpro{y}{\bigl ( \dd^2 \mathcal{E}(x) \bigr )(y)} &= \lim_{s \searrow 0} \frac{\Bigl ( \dd \mathcal{E} \bigl ( x + s y \bigr ) - \dd \mathcal{E}(x) \Bigr )(sy)}{s^2} \geq 0 .
\end{align*} % “$\Leftarrow$:” The convexity follows from a second-order Taylor expansion of $\mathcal{E}$ in conjunction with (i). \end{enumerate} % \end{proof} There is an analogous version of the proposition characterizing \emph{strict} convexity; since the proofs are virtually identical, we leave it to the reader to modify them appropriately. \begin{corollary} Assume $\mathcal{E} : \Omega \longrightarrow \R$ is Gâteaux differentiable. % \begin{enumerate}[(i)] \item $\mathcal{E}$ is strictly convex iff $\mathcal{E}(x) > \mathcal{E}(y) + \bigl ( \dd \mathcal{E}(y) \bigr )(x-y)$ for all $x \neq y$. \item $\mathcal{E}$ is strictly convex iff $\bigl ( \dd \mathcal{E}(x) - \dd \mathcal{E}(y) \bigr )(x-y) > 0$ for all $x \neq y$. \item If $\mathcal{E}$ is twice Gâteaux differentiable, then $\mathcal{E}$ is strictly convex if $\bscpro{y}{\dd^2 \mathcal{E}(x) y} > 0$ for all $x \in \Omega$ and all directions $y \neq 0$. \end{enumerate} % \end{corollary} Now we go back to the problem at hand, the existence of minimizers. The first step is the following \begin{lemma}\label{variation:lem:convex_functional_w-lsc} Suppose $\mathcal{E}$ is a convex functional defined on a closed subset $\Omega \subseteq \mathcal{X}$ of a reflexive Banach space $\mathcal{X}$; moreover, we assume its Gâteaux derivative exists for all $x \in \Omega$. Then $\mathcal{E}$ is w-lsc. \end{lemma} \begin{proof} Pick an arbitrary $x \in \Omega$ which we leave fixed. Moreover, let $\{ x_n \}_{n \in \N}$ be a sequence in $\Omega$ which converges weakly to $x$. Then the characterization of convexity in Proposition~\ref{variation:prop:convexity_differential_characterization}~(i) yields % \begin{align*} \mathcal{E}(x_n) \geq \mathcal{E}(x) + \bigl ( \dd \mathcal{E}(x) \bigr ) (x_n - x) . \end{align*} % Upon taking the limit $n \to \infty$, the last term on the right-hand side vanishes since $x_n \rightharpoonup x$, and we deduce that $\mathcal{E}$ is weakly lower semicontinuous, % \begin{align*} \liminf_{n \to \infty} \mathcal{E}(x_n) \geq \mathcal{E}(x) .
\end{align*} % \end{proof} \begin{theorem}\label{variation:thm:existence_uniquness_convex_functional} Assume % \begin{enumerate}[(i)] \item $\Omega \subseteq \mathcal{X}$ is closed under weak limits (weak limits of sequences in $\Omega$ again lie in $\Omega$), \item $\mathcal{E}$ is Gâteaux differentiable and strictly convex, and \item $\mathcal{E}$ is coercive. \end{enumerate} % Then there exists a \emph{unique} minimizer. \end{theorem} \begin{proof} The convexity of $\mathcal{E}$ implies weak lower semicontinuity (Lemma~\ref{variation:lem:convex_functional_w-lsc}). Hence, Theorem~\ref{variation:thm:key_theorem_existence_minimizer} applies and we know there exists \emph{a} minimizer, and in view of Proposition~\ref{variation:prop:minimizer_convexity}~(ii) this minimizer is necessarily unique. \end{proof} As mentioned earlier, variational calculus can be used to show existence of solutions if the PDE in question coincides with the Euler-Lagrange equation of a functional. If the functional is in addition strictly convex, then our arguments here show that the solution is \emph{unique}. \section{Bifurcations} \label{variation:bifurcation} The idea of this section is to understand certain physical phenomena as incarnations of a “bifurcation”, \ie a point where certain properties of a system change abruptly. Before we show how this meshes with the preceding content of the chapter, let us quickly explore one incarnation from physics in order to introduce some of the necessary terminology. Phase transitions, for instance, are points where some \emph{order parameter} changes abruptly when an \emph{external parameter} is changed. In a ferromagnet, the order parameter is the macroscopic \emph{magnetization} while the external parameter is temperature. Below a critical temperature, the Curie temperature, the microscopic magnets can align to produce a non-zero macroscopic magnetization. Another magnet, for instance, can be used to align these microscopic magnets.
This magnetization persists even when the magnet is heated -- up to the \emph{critical temperature}, at which the magnetization suddenly drops to $0$. In summary, below the critical temperature, there are two states, the unmagnetized state where the macroscopic magnetization $M(T) = 0$ vanishes and the magnetized state where $M(T) \neq 0$. Above the critical temperature, only the $M(T) = 0$ state persists. Other physical effects such as superconductivity can be explained along the exact same lines. To tie this section to the theme of the chapter, let us consider a \emph{parameter-dependent functional} $\mathcal{E} : \R \times \mathcal{D} \longrightarrow \R$ where $\mathcal{D}$ is dense in some Hilbert space $\Hil$; we will denote the external parameter with $\mu$ and the Hilbert space variable with $x$, \ie $\mathcal{E}(\mu,x)$. The bifurcation analysis starts with \begin{align*} F(\mu,x) := \partial_x \mathcal{E}(\mu,x) \end{align*} which enters the Euler-Lagrange equations and determines the stationary points of the functional. Here, the partial derivative is defined in terms of the scalar product as \begin{align*} \bigl ( \dd_x \mathcal{E}(\mu,x) \bigr )(y) = \bscpro{y}{\partial_x \mathcal{E}(\mu,x)} . \end{align*} In what follows, we assume that the “normal solution” $x = 0$ solves $F(\mu,0) = 0$ for all $\mu \in \R$, and we want to know whether there is a bifurcation solution $x(\mu) \neq 0$. In the present context, we define the notion of a bifurcation point as follows. \begin{definition}[Bifurcation point] $(\mu_0,0)$ is a bifurcation point if there exists $x(\mu)$ on an interval $[\mu_0,\mu_0 + \delta)$ so that $x(\mu) \neq 0$ on $(\mu_0,\mu_0 + \delta)$ and $F \bigl ( \mu , x(\mu) \bigr ) = 0$. \end{definition} A consequence of the Implicit Function Theorem \cite[Theorem~1.6]{Teschl:nonlinear_functional_analysis:2010} is the following \begin{proposition} If $(\mu_0,0)$ is a bifurcation point, then $\dd_x F(\mu_0,0)$ does not have a bounded inverse.
\end{proposition} \begin{proof}[Sketch] If $\dd_x F(\mu_0,0)$ had a bounded inverse, then by the Implicit Function Theorem \cite[Theorem~1.6]{Teschl:nonlinear_functional_analysis:2010} the equation $F(\mu,x) = 0$ would have a \emph{unique} solution $x(\mu)$ in a vicinity of $(\mu_0,0)$ -- namely the trivial solution $x(\mu) = 0$. This contradicts the existence of a branching solution $x(\mu) \neq 0$, and hence, $\dd_x F(\mu_0,0)$ cannot be boundedly invertible. \end{proof} Note that $\dd_x F(\mu_0,0)$ not being invertible is just \emph{necessary} but \emph{not sufficient} for $(\mu_0,0)$ to be a bifurcation point. Hence, we need to impose additional conditions, and one of the standard results in this direction is a theorem due to Krasnoselski \cite{Krasnoselski:non_linear_functional_analysis:1984} \begin{theorem}[Krasnoselski] Assume $\mathcal{D} \subseteq \Hil$ is a dense subset of a Hilbert space and the $\Cont^1$ map $F : \R \times \mathcal{D} \longrightarrow \Hil$ is such that % \begin{enumerate}[(i)] \item $\dd_x F(\mu,x)$ is a $\Cont^1$ map at $(\mu_0,0)$, \item $\dd_x F(\mu_0,0)$ is selfadjoint, \item $0$ is an isolated eigenvalue of $\dd_x F(\mu_0,0)$ of \emph{odd} and finite multiplicity, and \item there exists $v \in \ker \dd_x F(\mu_0,0)$ such that $\bscpro{v}{\partial_{\mu} \dd_x F(\mu_0,0) v} \neq 0$. \end{enumerate} % Then $(\mu_0,0)$ is a bifurcation point. \marginpar{2014.03.27} \end{theorem} The main ingredient is a procedure called \emph{Lyapunov-Schmidt reduction}: define the linear operator $L(\mu) := \dd_x F(\mu,0)$ and the orthogonal projection $P$ onto $\ker L(\mu_0)$. The projection $P$ and the complementary projection $P^{\perp} = 1 - P$ induce a splitting of the Hilbert space \begin{align*} \Hil = \ran P \oplus \ran P^{\perp} . \end{align*} Hence, any $x \in \Hil$ can be uniquely decomposed as $x = x_{\parallel} + x_{\perp}$ with $x_{\parallel}$ from the finite-dimensional space $\ran P$ and $x_{\perp} \in \ran P^{\perp}$.
Consequently, the equation $F(\mu,x) = 0 \in \Hil$ can equivalently be seen as \begin{subequations} % \begin{align} F_{\parallel}(\mu,x_{\parallel},x_{\perp}) := P F \bigl ( \mu , x_{\parallel} + x_{\perp} \bigr ) &= 0 \in \ran P , \label{variation:eqn:defn_F_parallel} \\ F_{\perp}(\mu,x_{\parallel},x_{\perp}) := P^{\perp} F \bigl ( \mu , x_{\parallel} + x_{\perp} \bigr ) &= 0 \in \ran P^{\perp} . \label{variation:eqn:defn_F_perp} \end{align} % \end{subequations} The idea of the Lyapunov-Schmidt reduction is to solve these equations iteratively: Given $(\mu,x_{\parallel})$ we first solve the \emph{branching equation} \begin{align*} F_{\perp}(\mu,x_{\parallel},x_{\perp}) = 0 \end{align*} in the vicinity of the branching point $(\mu_0,0)$. By assumption (iii), $\dd_{x_{\perp}} F_{\perp}(\mu_0,0,0)$ has a bounded inverse, and thus, the Implicit Function Theorem \cite[Theorem~1.6]{Teschl:nonlinear_functional_analysis:2010} yields a solution $x_{\perp}(\mu,x_{\parallel})$. We then proceed by inserting $x_{\perp}(\mu,x_{\parallel})$ into $F_{\parallel}$, thereby obtaining a function \begin{align*} f(\mu,x_{\parallel}) := F_{\parallel} \bigl ( \mu , x_{\parallel} , x_{\perp}(\mu,x_{\parallel}) \bigr ) \end{align*} that depends only on finitely many variables ($n = \dim \ker L(\mu_0)$ is finite by assumption), \ie $f : \R \times \R^n \longrightarrow \R^n$. The remaining equation $f(\mu,x_{\parallel}) = 0$ then needs to be solved by other means. \begin{proof} We will use the notation introduced in the preceding paragraphs. First of all, the map % \begin{align*} F_{\perp} : \bigl ( \R \times P \mathcal{D} \bigr ) \times P^{\perp} \mathcal{D} \longrightarrow \ran P^{\perp} \end{align*} % satisfies the assumptions of the Implicit Function Theorem \cite[Theorem~1.6]{Teschl:nonlinear_functional_analysis:2010}: evidently, it inherits the $\Cont^1$ property from $F$, and $F_{\perp}(\mu,0,0) = 0$ holds for all $\mu \in \R$.
Moreover, $\dd_{x_{\perp}} F_{\perp}(\mu_0,0,0)$ has a bounded inverse because $0 \in \sigma \bigl ( \dd_x F(\mu_0,0) \bigr )$ is an isolated eigenvalue. Consequently, there exists a neighborhood of $(\mu_0,0) \in \R \times P \mathcal{D}$ and a function $x_{\perp}(\mu,x_{\parallel})$ on that neighborhood that is uniquely determined by % \begin{align*} F_{\perp} \bigl ( \mu , x_{\parallel} , x_{\perp}(\mu,x_{\parallel}) \bigr ) = 0 . \end{align*} % Later on, we will crucially need the technical estimate % \begin{align} x_{\perp} , \partial_{\mu} x_{\perp} = \order \bigl ( \snorm{x_{\parallel}} \, \sabs{\mu - \mu_0} \bigr ) + o(\snorm{x_{\parallel}}) \label{variation:eqn:asymptotics_x_perp} \end{align} % as $\mu \rightarrow \mu_0$ and $x_{\parallel} \rightarrow 0$, but we postpone the proof of \eqref{variation:eqn:asymptotics_x_perp} until the end. For simplicity, let us only prove Krasnoselski's Theorem for $\dim \ker L(\mu_0) = 1$. Then the kernel is spanned by a single normalized vector $v \in \ker L(\mu_0)$, and we can write $x_{\parallel} = s v$ for some $s \in \R$. Combining \eqref{variation:eqn:defn_F_parallel} with $x_{\perp}(\mu,sv)$ yields a \emph{scalar} function $f : \R \times \R \longrightarrow \R$, and we are looking for solutions to % \begin{align} f(\mu,s) := \frac{1}{s} \, \Bscpro{v}{F \bigl ( \mu, sv + x_{\perp}(\mu,sv) \bigr )} = 0 . \label{variation:eqn:defn_f_mu_s} \end{align} % We have added the prefactor $\nicefrac{1}{s}$ to make $\lim_{s \searrow 0} \partial_{\mu} f(\mu_0,s) \neq 0$, but more on that later. As in the case of functions, we can Taylor expand % \begin{align} F(\mu,x) &= F(\mu,0) + \bigl ( \dd_x F(\mu,0) \bigr ) x + R(\mu,x) = L(\mu) x + R(\mu,x) \label{variation:eqn:Taylor_expansion_F} \end{align} % to first order; here, we have used $F(\mu,0) = 0$, and the remainder $R(\mu,x) = o(\snorm{x})$ vanishes faster than linearly as $x \to 0$.
Plugging \eqref{variation:eqn:Taylor_expansion_F} into \eqref{variation:eqn:defn_f_mu_s} yields % \begin{align*} f(\mu,s) &= \bscpro{v}{L(\mu) v} + \bscpro{v}{L(\mu) s^{-1} x_{\perp}(\mu,sv)} + \Bscpro{v}{s^{-1} \, R \bigl ( \mu , sv + x_{\perp}(\mu,sv) \bigr )} . \end{align*} % We obtain the branching solution through the Implicit Function Theorem: once we have proven $\partial_{\mu} f(\mu_0,0) \neq 0$, we obtain a function $s(\mu)$ in the vicinity of $\mu_0$ so that $f \bigl ( \mu , s(\mu) \bigr ) = 0$. And this function $s(\mu)$ also defines the branching solution % \begin{align} x(\mu) := s(\mu) \, v + x_{\perp} \bigl ( \mu , s(\mu) v \bigr ) \neq 0 \label{variation:eqn:branching_solution} \end{align} % satisfying $F \bigl ( \mu , x(\mu) \bigr ) = 0$ by construction. Hence, we compute the derivative % \begin{align} \partial_{\mu} f(\mu,s) &= \bscpro{v}{\partial_{\mu} L(\mu) v} + \bscpro{v}{\partial_{\mu} L(\mu) s^{-1} x_{\perp}(\mu,sv)} + \bscpro{v}{L(\mu) s^{-1} \, \partial_{\mu} x_{\perp}(\mu,sv)} + \notag \\ &\quad + \Bscpro{v}{s^{-1} \, \partial_{\mu} R \bigl ( \mu , sv + x_{\perp}(\mu,sv) \bigr )} + \notag \\ &\quad + \Bscpro{v}{\bigl ( \dd_x R \bigl ( \mu , sv + x_{\perp}(\mu,sv) \bigr ) \bigr ) \, s^{-1} \, \partial_{\mu} x_{\perp}(\mu,sv)} , \label{variation:eqn:partial_mu_f} \end{align} % and use \eqref{variation:eqn:asymptotics_x_perp} in conjunction with assumption (iv), % \begin{align*} \bscpro{v}{\partial_{\mu} L(\mu_0) v} = \bscpro{v}{\partial_{\mu} \dd_x F(\mu_0,0) v} \neq 0 , \end{align*} % to deduce that all but the first term vanish in the limit $\mu \to \mu_0$ and $s \searrow 0$. First of all, \eqref{variation:eqn:asymptotics_x_perp} tells us that % \begin{align*} s^{-1} x_{\perp}(\mu,sv) , s^{-1} \, \partial_{\mu} x_{\perp}(\mu,sv) = \order(\sabs{\mu - \mu_0}) + o(1) \end{align*} % goes to $0$ in the limit $\mu \to \mu_0$ and $s \searrow 0$. 
Moreover, the assumption that $\partial_{\mu} F$ exists as a Gâteaux derivative means we can Taylor expand this function to first order, and the terms are just the $\mu$-derivatives of \eqref{variation:eqn:Taylor_expansion_F}. Consequently, $\partial_{\mu} L(\mu_0)$ is bounded and $\partial_{\mu} R(\mu,x) = o(\snorm{x})$, and we deduce that the second and third term in \eqref{variation:eqn:partial_mu_f} vanish. \marginpar{2014.04.01} Moreover, the terms involving $R$ also vanish: by assumption also $\partial_{\mu} F$ has a Taylor expansion at $(\mu,0)$, so that % \begin{align*} s^{-1} \, \norm{\partial_{\mu} R \bigl ( \mu , sv + x_{\perp}(\mu,sv) \bigr )} = s^{-1} \, o \bigl ( \bnorm{sv + x_{\perp}(\mu,sv)} \bigr ) \end{align*} % holds, and since $\bnorm{sv + x_{\perp}(\mu,sv)} = \order(s)$ by \eqref{variation:eqn:asymptotics_x_perp}, this term vanishes as $\mu \to \mu_0$ and $s \searrow 0$. The last term can be treated along the same lines: $\dd_x F(\mu,x)$ is $\Cont^1$ at $(\mu_0,0)$ by assumption so that % \begin{align*} \dd_x R \bigl ( \mu , sv + x_{\perp}(\mu,sv) \bigr ) = \dd_x F \bigl ( \mu , sv + x_{\perp}(\mu,sv) \bigr ) - L(\mu) \xrightarrow{\mu \to \mu_0 , \; s \searrow 0} \dd_x F(\mu_0,0) - L(\mu_0) = 0 . \end{align*} % In conclusion, we have just shown $\partial_{\mu} f(\mu_0,0) \neq 0$, the Implicit Function Theorem applies, and we obtain the branching solution~\eqref{variation:eqn:branching_solution}. All that remains is the proof of the estimates in \eqref{variation:eqn:asymptotics_x_perp}. We use the Taylor expansion~\eqref{variation:eqn:Taylor_expansion_F} to rewrite $F_{\perp}(\mu,x_{\parallel},x_{\perp}) = 0$ as % \begin{align*} P^{\perp} L(\mu) P^{\perp} x_{\perp}(\mu,sv) + P^{\perp} L(\mu) sv + P^{\perp} R \bigl ( \mu , sv + x_{\perp}(\mu,sv) \bigr ) = 0 .
\end{align*} % $L^{\perp}(\mu) := P^{\perp} L(\mu) P^{\perp}$ is invertible on $\ran P^{\perp}$ in a neighborhood of $\mu_0$, because $\mu \mapsto L(\mu)$ is continuous and thus, the spectral gap around $0$ does not suddenly collapse \cite[Chapter~VII.3]{Kato:perturbation_theory:1995}. Now we can solve for $x_{\perp}$ and insert $L(\mu_0) sv = 0$ free of charge, % \begin{align} x_{\perp}(\mu,sv) &= - L^{\perp}(\mu)^{-1} \, P^{\perp} \Bigl ( L(\mu) sv + R \bigl ( \mu , sv + x_{\perp}(\mu,sv) \bigr ) \Bigr ) \notag \\ & = - L^{\perp}(\mu)^{-1} \, P^{\perp} \Bigl ( \bigl ( L(\mu) - L(\mu_0) \bigr ) sv + R \bigl ( \mu , sv + x_{\perp}(\mu,sv) \bigr ) \Bigr ) \label{variation:eqn:selfconsistent_x_perp} \\ &= \order \bigl ( \snorm{sv} \, \sabs{\mu - \mu_0} \bigr ) + o \bigl ( \bnorm{sv + x_{\perp}(\mu,sv)} \bigr ) . \notag \end{align} % The first term is clearly $\order \bigl ( s \, \sabs{\mu - \mu_0} \bigr )$. Initially, we merely obtain $o \bigl ( \bnorm{sv + x_{\perp}(\mu,sv)} \bigr )$ for the second term, but if $x_{\perp}(\mu,sv) \rightarrow \tilde{x} \neq 0$ as $\mu \to \mu_0$ and $s \searrow 0$, we would obtain % \begin{align*} \tilde{x} = o(\snorm{\tilde{x}}) , \end{align*} % which is absurd. Thus, the asymptotic behavior of $x_{\perp}(\mu,sv)$ is described by \eqref{variation:eqn:asymptotics_x_perp}. The proof for $\partial_{\mu} x_{\perp}(\mu,sv)$ is analogous: one starts from equation~\eqref{variation:eqn:selfconsistent_x_perp} and uses % \begin{align*} \partial_{\mu} L^{\perp}(\mu)^{-1} = - L^{\perp}(\mu)^{-1} \, \partial_{\mu} L^{\perp}(\mu) \, L^{\perp}(\mu)^{-1} \end{align*} % as well as the assumption that $\mu \mapsto \dd_x F(\mu,x)$ is $\Cont^1$ at $(\mu_0,0)$. This concludes the proof.\marginpar{2014.04.03} \end{proof}
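To see the theorem at work, let us check its assumptions in the simplest possible setting, a one-dimensional toy model (the pitchfork; the concrete functional below is chosen solely for illustration): take $\Hil = \mathcal{D} = \R$ and
%
\begin{align*}
\mathcal{E}(\mu,x) = - \frac{\mu}{2} \, x^2 + \frac{1}{4} \, x^4 ,
\qquad
F(\mu,x) = \partial_x \mathcal{E}(\mu,x) = - \mu \, x + x^3 .
\end{align*}
%
Then $x = 0$ solves $F(\mu,0) = 0$ for all $\mu \in \R$, and $L(\mu) = \dd_x F(\mu,0) = - \mu$ acts as multiplication by $-\mu$, which is trivially selfadjoint. At $\mu_0 = 0$, the value $0$ is an isolated eigenvalue of multiplicity $1$ (odd and finite), and for the normalized $v = 1$ we compute $\bscpro{v}{\partial_{\mu} \dd_x F(0,0) v} = - 1 \neq 0$. Hence, all assumptions of Krasnoselski's Theorem are satisfied and $(\mu_0,0) = (0,0)$ is a bifurcation point; indeed, solving $F \bigl ( \mu , x(\mu) \bigr ) = 0$ by hand yields the branching solutions $x(\mu) = \pm \sqrt{\mu}$ for $\mu > 0$. In the language of the ferromagnet from the beginning of the section, $\mu$ plays the role of $T_{\mathrm{c}} - T$ and $x$ that of the magnetization $M$: for $\mu \leq 0$ only the unmagnetized state exists while for $\mu > 0$ two magnetized states branch off.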
\section{Introduction} For a given continuous-time plant $\dot{x}(t)=Ax(t)+Bu(t)$ and a sampled-data state-feedback controller $u(t)=Kx(t_k),t\in[t_k,t_{k+1})$, we define the maximum sampling interval (MSI) as the largest possible $h>0$ such that the closed loop is asymptotically stable for any sampling sequence with $t_{k+1}-t_k\leq h$. Computing possibly large lower bounds on the MSI is an interesting and relevant research question since its knowledge allows to reduce the sampling frequency as much as possible without affecting stability. Furthermore, the stability analysis of sampled-data systems can be applied to many practical problems, e.g., to study the behavior of networked control systems with transmission delays or packet dropouts. Since the MSI is an essential concept in sampled-data control (compare~\cite{hetel2017recent,hespanha2007survey}), a wide variety of methods to compute such MSI bounds have been developed in the literature, using different viewpoints such as the time-delay approach (\cite{fridman2004robust,fridman2010refined}), the hybrid system approach (\cite{carnevale2007lyapunov,nesic2009explicit}), the discrete-time approach (\cite{fujioka2009stability,hetel2011delay}), and the input/output stability approach (\cite{mirkin2007some}), which are all discussed extensively in the survey~\cite{hetel2017recent}. Typically, these methods are model-based, i.e., they require an (accurate) model for their implementation. While measured data of a system is typically simple to obtain, deriving a model via first principles can be complicated and time-consuming. If no model is available, \emph{finding} a controller with guaranteed stability for all sampling intervals in a desired range is particularly important since, without model knowledge, it is not even clear how to design a stabilizing controller under continuous state-feedback. 
Nevertheless, to the best of our knowledge, computing guaranteed MSI bounds and designing sampled-data controllers based only on measured data has not been considered in the literature thus far. In this paper, we develop a direct approach for a) computing MSI bounds for unknown systems with a given controller and b) designing controllers with possibly large MSI bounds, both based only on a noisy data trajectory of finite length. Our approach is inspired by recent results of~\cite{waarde2020from} on discrete-time data-driven control, which are based on a data-dependent parametrization of all matrices consistent with the measured data and a given noise bound. Our work is also closely related to~\cite{berberich2020combining}, which uses similar ideas for combining data with additional prior knowledge for robust controller design. In the present paper, we first derive a result similar to~\cite{waarde2020from} to parametrize unknown \emph{continuous-time} systems using one a priori collected trajectory. We combine this parametrization with well-established robust control techniques (\cite{scherer2000linear,scherer2001lpv}) to verify existing sampled-data stabilization conditions of~\cite{fridman2010refined} robustly for all systems which are consistent with the measured data and the assumed noise bound. While most existing results for sampled-data control of uncertain systems using time-delay arguments only apply to restricted classes of uncertainties, our results lead to a general robust sampled-data control formulation with larger flexibility. Another interpretation of our work is that of providing tools for analyzing unknown systems w.r.t. a particular sampled-data system property (the MSI) using only measured data, similar to other recent works on data-driven system analysis such as, e.g.,~\cite{maupong2017lyapunov,koch2020provably}. 
A well-established alternative to the results presented in this paper is to use the measured data to identify the unknown system and then apply model-based tools as presented in~\cite{hetel2017recent} to the estimated system. Our approach is more direct and does not rely on an intermediate estimation step while at the same time being non-conservative w.r.t. the model-based conditions in~\cite{fridman2010refined}. On the other hand, providing tight estimation bounds using noisy data of finite length is in general non-trivial for stochastic noise (\cite{matni2019tutorial,matni2019self}) and can be computationally expensive for deterministic noise (\cite{milanese1991optimal}). Another closely related paper by~\cite{rueda2020delay} on data-driven control for time-delay systems using delayed measurements addresses a conceptually similar control goal (stability under uncertain delays) but considers a discrete-time scenario which leads to an entirely different problem formulation. \subsection*{Notation} For a symmetric matrix $P$, we write $P\succ0$ ($P\succeq0$) or $P\prec0$ ($P\preceq0$) if $P$ is positive (semi-) definite or negative (semi-) definite, respectively. Further, in matrix inequalities, $\star$ denotes entries which can be inferred from symmetry. We denote the set of nonnegative integers by $\mathbb{N}_0$. \section{Preliminaries} We consider an unknown linear time-invariant (LTI) system \begin{align}\label{eq:sys_cont} \dot{x}(t)=A_{\text{tr}}x(t)+B_{\text{tr}}u(t),\>\>x(0)=x_0, \end{align} with $x_0,x(t)\in\mathbb{R}^n$, $u(t)\in\mathbb{R}^m$, $t\geq0$, where the true system matrices $A_{\text{tr}},B_{\text{tr}}$ are unknown. 
Our goal is to design a sampled-data state-feedback controller $u(t)=Kx(t_k)$ for $t\in[t_k,t_{k+1})$, where $\{t_k\}_{k=0}^\infty$ with $t_0=0$ and $\lim_{k\to\infty}t_k=\infty$ are the sampling times, such that the closed loop is exponentially stable for all sampling times satisfying $t_{k+1}-t_k\in(0,h],k\in\mathbb{N}_0$, with a possibly large $h>0$. We call the maximum $h$ such that the sampled-data system is exponentially stable the MSI. We assume that $N\in\mathbb{N}_0$ measurements $\{\dot{x}(\tau_k),x(\tau_k),u(\tau_k)\}_{k=1}^N$, sampled at not necessarily equidistant sampling times $\{\tau_k\}_{k=1}^N$, of the perturbed system \begin{align}\label{eq:sys_cont_dist} \dot{x}(t)=A_{\text{tr}}x(t)+B_{\text{tr}}u(t)+B_dd(t),\>\>x(0)=x_0, \end{align} for some unknown disturbance signal $d(t)\in\mathbb{R}^{m_d}$, $t\geq0$, are available. The matrix $B_d\in\mathbb{R}^{n\times m_d}$ has full column rank and is known which allows us to model the influence of the disturbance on the system. We define the data matrices \begin{align*} \dot{X}&\coloneqq\begin{bmatrix}\dot{x}(\tau_1)&\dot{x}(\tau_2)&\dots&\dot{x}(\tau_N)\end{bmatrix},\\ X&\coloneqq\begin{bmatrix}x(\tau_1)&x(\tau_2)&\dots&x(\tau_N)\end{bmatrix},\\ U&\coloneqq\begin{bmatrix}u(\tau_1)&u(\tau_2)&\dots&u(\tau_N)\end{bmatrix}. \end{align*} We assume that the (unknown) disturbance sequence affecting the measured data, denoted by $\{\hat{d}(\tau_k)\}_{k=1}^N$, satisfies a bound of the form $\hat{D}\coloneqq\begin{bmatrix}\hat{d}(\tau_1)&\hat{d}(\tau_2)&\dots&\hat{d}(\tau_N)\end{bmatrix}\in\mathcal{D}$ with \begin{align}\label{eq:noise_bound} \mathcal{D}\coloneqq\Big\{D\in\mathbb{R}^{m_d\times N}\Bigm| \begin{bmatrix}D^\top\\I\end{bmatrix}^\top \begin{bmatrix}Q_d&S_d\\S_d^\top&R_d\end{bmatrix} \begin{bmatrix}D^\top\\I\end{bmatrix}\succeq0\Big\} \end{align} for some $Q_d\in\mathbb{R}^{N\times N}$, $S_d\in\mathbb{R}^{N\times m_d}$, $R_d\in\mathbb{R}^{m_d\times m_d}$ with $Q_d\prec0$. 
This includes a wide range of practically relevant noise bounds (compare~\cite{persis2020formulas,berberich2020design,waarde2020from,koch2020provably,berberich2020combining} where similar noise bounds are considered in a discrete-time setting). A key practical issue in obtaining data as above is that time derivatives $\dot{X}$ of the state are typically not accessible directly. However, after having recorded a densely-sampled trajectory of the state $x$, one could obtain a good approximation of $\dot{x}$ in a post-processing step, e.g., using finite differences or even a non-causal filter. Alternatively, various methods for derivative estimation have been developed in the field of continuous-time system identification, compare, e.g.,~\cite{garnier2003continuous}. Since the data are recorded in an offline experiment, the mentioned approaches are also applicable to networked control systems: Therein, a sensor located at the plant could measure the state with a high sampling rate and store the data, which would then be extracted after the experiment. Note that the network is only required for closed-loop operation, such that the achievable sampling rate of the experiment depends only on the sensor and not on the network. Further, we do not assume that $\dot{x}(\tau_k)$ can be measured exactly. In particular, including the disturbance $d$ in~\eqref{eq:sys_cont_dist} allows for measurement noise on $\dot{x}(\tau_k)$. For instance, if we assume that the deviation of the measured value of $\dot{x}(\tau_k)$ from the true value is bounded in the $2$-norm by $\bar{d}>0$ for each $k=1,\dots,N$, then we can choose $B_d=I$, $Q_d=-I$, $S_d=0$, $R_d=\bar{d}^2NI$. In Appendix A, we show that an estimate of $\dot{x}(\tau_k)$ as well as an error bound can be obtained from measured data via Euler discretization if norm bounds on the matrices $A_{\text{tr}}$ and $B_{\text{tr}}$ are available.
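As a minimal illustration of such a post-processing step, the sketch below (Python/NumPy; the system matrix is a hypothetical placeholder used only to generate a densely sampled trajectory) recovers $\dot{x}$ at interior samples via central differences:

```python
import numpy as np

# Hypothetical stable system, used only to generate a dense trajectory.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
dt, T = 1e-4, 1.0                      # fast sensor sampling during the experiment
steps = int(T / dt)

x = np.zeros((2, steps + 1))
x[:, 0] = [1.0, 0.0]
for k in range(steps):                 # fine-grid Euler integration (stand-in for the plant)
    x[:, k + 1] = x[:, k] + dt * (A @ x[:, k])

# Post-processing: central differences recover x_dot at the interior samples.
x_dot_est = (x[:, 2:] - x[:, :-2]) / (2.0 * dt)
x_dot_true = A @ x[:, 1:-1]
err = np.max(np.abs(x_dot_est - x_dot_true))
assert err < 2e-3
```

The resulting approximation error can then be absorbed into the disturbance bound~\eqref{eq:noise_bound} as described above.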
It is an interesting issue for future research to relax the latter assumption, extend this approach to more general discretization schemes, and validate it with a realistic example. Let us now define the set of all systems consistent with the measured data and the noise bound \begin{align*} \Sigma_c\coloneqq\{(A,B)\mid \dot{X}=AX+BU+B_dD,D\in\mathcal{D}\}. \end{align*} On a technical level, our contribution can be summarized as follows: 1) We derive a simple, data-dependent parametrization of the set $\Sigma_c$ (Section~\ref{sec:param}), which we then employ to 2) \emph{compute} a lower bound on the MSI under a given state-feedback (Section~\ref{sec:analysis}) and to 3) \emph{design} a state-feedback controller for~\eqref{eq:sys_cont} with a possibly large MSI (Section~\ref{sec:design}). The MSI bounds are guaranteed robustly for all system matrices $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_c$ which are consistent with the measured data and the noise bound. Thus, our results provide an alternative to model-based control under aperiodic sampling as surveyed in~\cite{hetel2017recent}, requiring no model knowledge but only measured data. Finally, we note that it is possible to extend the results of this paper to output-feedback control under aperiodic sampling by considering an extended state vector based on the first $n$ time derivatives of the input and output, similar to the discrete-time output-feedback formulation in~\cite{berberich2020combining}. \section{Data-driven system parametrization}\label{sec:param} In this section, we provide a purely data-driven parametrization of all continuous-time LTI systems as in~\eqref{eq:sys_cont} which are consistent with the measured data and the noise bound. The presented approach provides a variation of~\cite{waarde2020from}, where analogous results are obtained for discrete-time systems. 
To this end, we define $Z\coloneqq\begin{bmatrix}X\\U\end{bmatrix}$ as well as \begin{align}\label{eq:bound_def} P_c=\begin{bmatrix}Q_c&S_c\\S_c^\top&R_c\end{bmatrix}\coloneqq\begin{bmatrix}-Z&0\\\dot{X}&B_d\end{bmatrix} \begin{bmatrix}Q_d&S_d\\S_d^\top&R_d\end{bmatrix} \begin{bmatrix}-Z&0\\\dot{X}&B_d\end{bmatrix}^\top. \end{align} \begin{theorem}\label{thm:cont_param} It holds that \begin{align}\label{eq:thm_cont_param} \Sigma_c=\Big\{(A,B)\Bigm| \begin{bmatrix}\begin{bmatrix}A&B\end{bmatrix}^\top\\I\end{bmatrix}^\top P_c \begin{bmatrix}\begin{bmatrix}A&B\end{bmatrix}^\top\\I\end{bmatrix}\succeq0\Big\}. \end{align} \end{theorem} \textbf{Proof}$\quad$ The proof follows similar arguments as~\cite{waarde2020from} and hence we only give a sketch for the special case $B_d=I$ and refer to~\cite{waarde2020from} for details. If $B_d=I$, then $(A,B)$ lies in the set on the right-hand side of~\eqref{eq:thm_cont_param} if and only if \begin{align}\label{eq:thm_cont_param_proof1} \begin{bmatrix}(\dot{X}-AX-BU)^\top\\I\end{bmatrix}^\top \begin{bmatrix}Q_d&S_d\\S_d^\top&R_d\end{bmatrix} \begin{bmatrix}\star\\\star\end{bmatrix}\succeq0. \end{align} It is straightforward to see that~\eqref{eq:thm_cont_param_proof1} is in turn equivalent to the existence of $D\in\mathcal{D}$ satisfying $\dot{X}=AX+BU+D$, thus concluding the proof. $\hfill\blacksquare$ Theorem~\ref{thm:cont_param} provides a direct data-driven parametrization of all continuous-time systems consistent with the data using noisy input-state measurements, which need not be sampled at equidistant sampling instants. The proof is analogous to that of the discrete-time results by~\cite{waarde2020from} and it relies on simple algebraic manipulations of the data equation. The system matrices $(A,B)$ can be interpreted as an \emph{uncertainty} satisfying the quadratic matrix inequality bound~\eqref{eq:thm_cont_param}. 
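The parametrization can also be verified numerically. The sketch below (Python/NumPy; the `true' system, the dimensions, and the noise level are illustrative assumptions, with $B_d=I$) builds $P_c$ from noisy data and checks that the data-generating pair $(A_{\text{tr}},B_{\text{tr}})$ satisfies the quadratic matrix inequality in~\eqref{eq:thm_cont_param}:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N, d_bar = 2, 1, 50, 0.05       # illustrative choices

# Hypothetical "true" system, used only to generate the data; B_d = I.
A_tr = np.array([[0.0, 1.0], [0.0, -0.1]])
B_tr = np.array([[0.0], [0.1]])

X = rng.uniform(-1.0, 1.0, (n, N))
U = rng.uniform(-1.0, 1.0, (m, N))
D_hat = rng.uniform(-1.0, 1.0, (n, N))
D_hat *= d_bar / np.linalg.norm(D_hat, axis=0)   # ||d(tau_k)||_2 <= d_bar
X_dot = A_tr @ X + B_tr @ U + D_hat

Z = np.vstack([X, U])
Q_d, S_d, R_d = -np.eye(N), np.zeros((N, n)), d_bar**2 * N * np.eye(n)

# P_c built from the data and the noise-bound matrices (here B_d = I)
left = np.block([[-Z, np.zeros((n + m, n))], [X_dot, np.eye(n)]])
P_c = left @ np.block([[Q_d, S_d], [S_d.T, R_d]]) @ left.T

# The data-generating pair must satisfy the quadratic matrix inequality.
W = np.vstack([np.hstack([A_tr, B_tr]).T, np.eye(n)])
qmi = W.T @ P_c @ W
assert np.min(np.linalg.eigvalsh(qmi)) >= -1e-9
```

As expected, the true system matrices act as one admissible uncertainty satisfying the bound.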
Indeed, the remainder of this paper makes extensive use of this interpretation by applying S-procedure-based linear matrix inequality (LMI) relaxations to robustify existing model-based LMIs for computing MSI bounds against the uncertainty class $\Sigma_c$. In~\cite{persis2020formulas}, an alternative approach to data-driven control of continuous-time systems is outlined. Controller design based on Theorem~\ref{thm:cont_param} has multiple advantages if compared to the approach by~\cite{persis2020formulas} such as guaranteed robustness with respect to noise and a lower computational complexity (the number of decision variables for controller design in the present paper is independent of the data length). It is an interesting issue for future research to employ Theorem~\ref{thm:cont_param} in order to develop further continuous-time data-driven control methods beyond the sampled-data formulations considered in the present paper. \section{Stability analysis under aperiodic sampling}\label{sec:analysis} In this section, we analyze stability of unknown sampled-data systems and compute lower bounds on the MSI by modeling the closed-loop system as a time-delay system. To this end, our work builds on the model-based time-delay approach to sampled-data control which is discussed, e.g., in~\cite{hetel2017recent,fridman2004robust,fridman2010refined,liu2012wirtingers}. The key idea of the time-delay approach to sampled-data control is to write the controller as \begin{align}\label{eq:sampled_data_controller} u(t)=Kx(t_k)=Kx(t-\tau(t)),\quad \tau(t)\coloneqq t-t_k \end{align} for all $t\in[t_k,t_{k+1})$, where $\tau$ is a delay with $\tau(t_k)=0$ and $\dot{\tau}(t)=1$ if $t\neq t_k$. The closed-loop sampled-data control system can then be written as \begin{align}\label{eq:sys_delay} \dot{x}(t)&=Ax(t)+BKx(t-\tau(t)) \end{align} for any $t\geq0$, i.e., as a linear time-delay system. 
The literature contains various methods to compute values $h$ such that~\eqref{eq:sys_delay} is exponentially stable with time-varying delay $\tau(t)$ for any $\tau(t)\in(0,h]$, see e.g.~\cite{hetel2017recent,fridman2004robust,fridman2010refined}. Throughout this paper, we focus on the conditions stated in~\cite[Proposition 1]{fridman2010refined} which rely on the Lyapunov-Krasovskii functional \begin{align}\label{eq:LKF} V(t,x(t),\dot{x}(t))=&\>x(t)^\top P_1x(t)\\\nonumber &+(h-\tau(t))\int_{t-\tau(t)}^t \dot{x}(s)^\top R\dot{x}(s)\text{d}s, \end{align} where $P_1\succ0,R\succ0,\tau(t)=t-t_k$. Using~\eqref{eq:LKF},~\cite{fridman2010refined} computes an MSI bound based on model knowledge as follows. \begin{proposition}\label{prop:fridman} (\cite{fridman2010refined}) The system $\dot{x}=Ax(t)+Bu(t)$ with controller~\eqref{eq:sampled_data_controller} is exponentially stable for any sampling sequence $\{t_k\}_{k=0}^{\infty}$ with $t_{k+1}-t_k\in(0,h]$, $t_0=0$, $\lim_{k\to\infty}t_k=\infty$, if there exist $R\succ0$, $P_1\succ0$, $P_2$ and $P_3$ such that \begin{align}\label{eq:time_delay1} &\begin{bmatrix}P_2^\top(A+BK)+(A+BK)^\top P_2&\star\\ P_1-P_2+P_3^\top (A+BK)&-P_3-P_3^\top+hR \end{bmatrix}\prec0,\\\label{eq:time_delay2} &\begin{bmatrix}P_2^\top(A+BK)+(A+BK)^\top P_2&\star&\star\\ P_1-P_2+P_3^\top(A+BK)&-P_3-P_3^\top&\star\\ -h(BK)^\top P_2&-h(BK)^\top P_3&-hR \end{bmatrix}\prec0. \end{align} \end{proposition} Throughout this paper, we make the following technical assumption on the matrix $P_c$ involved in the data-dependent parametrization provided by Theorem~\ref{thm:cont_param}. \begin{assumption}\label{ass:inertia} The matrix $P_c$ is invertible and has exactly $m_d$ positive eigenvalues. \end{assumption} This assumption is not restrictive and it holds if the available measurements are sufficiently rich, i.e., the matrix $Z$ has full row rank, under mild additional assumptions. 
To be precise, we show in Appendix B that Assumption~\ref{ass:inertia} holds for the common special case $S_d=0$ if (i) $Z$ has full row rank, (ii) $B_d$ is invertible, i.e., $m_d=n$, and (iii) the disturbance $\hat{D}$ generating the data satisfies a strict version of the quadratic matrix inequality in~\eqref{eq:noise_bound}, i.e., $\hat{D}Q_d\hat{D}^\top+R_d\succ0$. Note that $Z$ having full row rank does not imply that the underlying system can be uniquely identified from the available data due to the influence of the disturbance. Defining \begin{align*} \begin{bmatrix}\tilde{Q}_c&\tilde{S}_c\\\tilde{S}_c^\top&\tilde{R}_c\end{bmatrix}\coloneqq \begin{bmatrix}Q_c&S_c\\S_c^\top&R_c\end{bmatrix}^{-1},\> \tilde{P}_c\coloneqq\begin{bmatrix}-\tilde{R}_c&\tilde{S}_c^\top\\\tilde{S}_c&-\tilde{Q}_c \end{bmatrix}, \end{align*} we now provide a data-based stability certificate for the considered sampled-data system. \begin{theorem}\label{thm:analysis} If Assumption~\ref{ass:inertia} holds, then $\dot{x}(t)=Ax(t)+Bu(t)$ with controller~\eqref{eq:sampled_data_controller} is exponentially stable for any $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_{c}$ and any sampling sequence $\{t_k\}_{k=0}^{\infty}$ with $t_{k+1}-t_k\in(0,h]$, $t_0=0$, and $\lim_{k\to\infty}t_k=\infty$, if there exist ${\lambda_1>0}$, ${\lambda_2>0}$, ${P_1\succ0}$, $R\succ0$, $P_2$, and $P_3$ such that $P=\begin{bmatrix}P_1&0\\P_2&P_3\end{bmatrix}$, ${P_R=\text{diag}(P,R)}$, $P_{R2}=\begin{bmatrix}P_1&0\\P_2&P_3\\0&R\end{bmatrix}$ and \begin{align}\label{eq:thm_time_delay_LMI1} &\left[ \begin{array}{cc} I&0\\ \begin{bmatrix}0&I\\0&-I\\0&\frac{1}{2}hI\end{bmatrix}&\begin{bmatrix}0\\I\\0\end{bmatrix}\\\hline 0&I\\ \begin{bmatrix}I&0\\K&0\end{bmatrix}&\begin{bmatrix}0\\0\end{bmatrix} \end{array} \right]^\top \left[ \begin{array}{c|c} \begin{matrix} 0&P_{R2}^\top\\P_{R2}&0 \end{matrix}&0\\\hline 0&\lambda_1\tilde{P}_c \end{array} \right] \left[ \begin{array}{cc} \star&\star\\\star&\star\\\hline 
\star&\star\\\star&\star \end{array} \right]\prec0, \end{align} \begin{align}\label{eq:thm_time_delay_LMI2} &\left[ \begin{array}{cc} I&0\\ \begin{bmatrix}0&I&0\\0&-I&0\\0&0&-\frac{1}{2}hI\end{bmatrix}&\begin{bmatrix}0\\I\\0\end{bmatrix}\\\hline 0&I\\ \begin{bmatrix}I&0&0\\K&0&-hK\end{bmatrix}&\begin{bmatrix}0\\0\end{bmatrix} \end{array} \right]^\top \left[ \begin{array}{c|c} \begin{matrix} 0&P_{R}^\top\\P_{R}&0 \end{matrix}&0\\\hline 0&\lambda_2\tilde{P}_c \end{array} \right] \left[ \begin{array}{cc} \star&\star\\\star&\star\\\hline \star&\star\\\star&\star \end{array} \right]\prec0. \end{align} \end{theorem} \textbf{Proof}$\quad$ Applying the dualization lemma~\cite[Lemma 4.9]{scherer2000linear} to~\eqref{eq:thm_cont_param} (the required inertia properties hold by Assumption~\ref{ass:inertia}), we have that $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_c$ if and only if \begin{align}\label{eq:AB_bound_primal} \begin{bmatrix}\begin{bmatrix}A&B\end{bmatrix}\\I\end{bmatrix}^\top \begin{bmatrix}-\tilde{R}_c&\tilde{S}_c^\top\\\tilde{S}_c&-\tilde{Q}_c\end{bmatrix} \begin{bmatrix}\begin{bmatrix}A&B\end{bmatrix}\\I\end{bmatrix}\succeq0. \end{align} We note that this step is analogous to the equivalence of~\cite[Conditions (4.4.1) and (4.4.4)]{scherer2000linear}, where the strict and non-strict inequalities can be interchanged. Note that \begin{align*} \begin{bmatrix}0&I\\A+BK&-I\\0&\frac{1}{2}h\end{bmatrix}= \begin{bmatrix}0&I\\0&-I\\0&\frac{1}{2}hI\end{bmatrix}+\begin{bmatrix}0\\I\\0\end{bmatrix} \begin{bmatrix}A&B\end{bmatrix}\begin{bmatrix}I&0\\K&0\end{bmatrix}. 
\end{align*} Therefore, using the full-block S-procedure~(\cite{scherer2001lpv}), \eqref{eq:thm_time_delay_LMI1} implies that for any $\begin{bmatrix}A&B\end{bmatrix}$ satisfying~\eqref{eq:AB_bound_primal} the matrix \begin{align*} \begin{bmatrix}P_1&0\\P_2&P_3\\0&R\end{bmatrix}^\top\begin{bmatrix}0&I\\A+BK&-I\\0&\frac{1}{2}hI\end{bmatrix} +\begin{bmatrix}0&I\\A+BK&-I\\0&\frac{1}{2}hI\end{bmatrix}^\top \begin{bmatrix}P_1&0\\P_2&P_3\\0&R\end{bmatrix} \end{align*} is negative definite, i.e.,~\eqref{eq:time_delay1} holds. Using that~\eqref{eq:AB_bound_primal} holds if and only if $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_c$, this in turn implies that~\eqref{eq:time_delay1} holds for any $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_c$. Similarly, we can show that~\eqref{eq:thm_time_delay_LMI2} implies \begin{align*} &\begin{bmatrix}P_1&0&0\\P_2&P_3&0\\0&0&R\end{bmatrix}^\top \begin{bmatrix}0&I&0\\A+BK&-I&-hBK\\0&0&-\frac{1}{2}hI\end{bmatrix}\\ +&\begin{bmatrix}0&I&0\\A+BK&-I&-hBK\\0&0&-\frac{1}{2}hI\end{bmatrix}^\top\begin{bmatrix}P_1&0&0\\P_2&P_3&0\\0&0&R\end{bmatrix}\prec0 \end{align*} for any $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_c$, i.e.,~\eqref{eq:time_delay2} holds. Thus, by Proposition~\ref{prop:fridman}, we can deduce exponential stability for any $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_c$. $\hfill\blacksquare$ Theorem~\ref{thm:analysis} provides a purely data-dependent approach to verify closed-loop stability of a sampled-data system with aperiodic sampling. For any fixed MSI bound $h$, the conditions~\eqref{eq:thm_time_delay_LMI1}--\eqref{eq:thm_time_delay_LMI2} are LMIs and thus, a possibly large $h$ leading to closed-loop robust stability can be found via a simple bisection algorithm. The derived stability guarantees hold for an \emph{arbitrary} sampling sequence $\{t_k\}_{k=0}^{\infty}$ of infinite length satisfying $t_0=0$, $t_{k+1}-t_k\in(0,h]$, and $\lim_{k\to\infty}t_k=\infty$. 
This only requires availability of a noisy open-loop data trajectory of finite length which is generated by a sampling sequence $\{\tau_k\}_{k=1}^N$ \emph{not} necessarily satisfying $\tau_{k+1}-\tau_k\leq h$. Under mild technical assumptions, the statement is non-conservative in the sense that the proposed conditions are equivalent to~\eqref{eq:time_delay1}-\eqref{eq:time_delay2} holding for all $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_{c}$. This is due to the fact that the employed S-procedure only uses one matrix inequality constraint~\eqref{eq:AB_bound_primal}, for which relaxations with a scalar multiplier are exact, compare~\cite{waarde2020from} for details. The proof relies on an application of robust control arguments from~\cite{scherer2000linear,scherer2001lpv} to verify the model-based LMIs~\eqref{eq:time_delay1}-\eqref{eq:time_delay2} robustly for all systems consistent with the measured data. Since the parametrization of $\Sigma_c$ in Theorem~\ref{thm:cont_param} takes a ``dual'' form (involving $\begin{bmatrix}A&B\end{bmatrix}^\top$ instead of $\begin{bmatrix}A&B\end{bmatrix}$), we use the dualization lemma to derive the ``primal'' parametrization~\eqref{eq:AB_bound_primal} which we then combine with Inequalities~\eqref{eq:time_delay1}-\eqref{eq:time_delay2}. It is worth noting that, to robustify Inequality~\eqref{eq:time_delay1} w.r.t. $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_c$, we use as a technical argument a \emph{non-square} matrix $P_{R2}$ which plays the role of the Lyapunov matrix in robust control. Finally, we conjecture that an extension of Theorem~\ref{thm:design} to include performance criteria for the closed loop is straightforward, e.g., by robustly verifying the performance conditions in~\cite{fridman2010refined}. 
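The bisection over $h$ mentioned above can be sketched as follows (Python; \texttt{lmi\_feasible} is a hypothetical stand-in for an SDP solver call checking feasibility of~\eqref{eq:thm_time_delay_LMI1}--\eqref{eq:thm_time_delay_LMI2} for a fixed $h$, e.g., via YALMIP or an equivalent Python modeling layer):

```python
def max_h_bisection(lmi_feasible, h_max=10.0, tol=1e-4):
    """Largest h in (0, h_max] for which lmi_feasible(h) holds, assuming
    feasibility is monotone (feasible for every smaller h as well)."""
    if lmi_feasible(h_max):
        return h_max
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lmi_feasible(mid):
            lo = mid      # certified: search above
        else:
            hi = mid      # infeasible: search below
    return lo

# Hypothetical oracle standing in for "the LMIs are feasible for this h";
# the threshold 1.62 mimics the model-based bound reported in Section 5.
h_star = max_h_bisection(lambda h: h <= 1.62)
assert abs(h_star - 1.62) < 1e-3
```

Each oracle evaluation amounts to solving one semidefinite program, so the overall cost is a logarithmic number of SDP solves.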
\section{Controller design for closed-loop stability under aperiodic sampling}\label{sec:design} While Theorem~\ref{thm:analysis} provides an effective way to compute MSI bounds of an unknown plant, the result requires that a state-feedback gain $K$ is given. In case no controller is available, a matrix $K$ with $A+BK$ Hurwitz for all $(A,B)\in\Sigma_c$ can be found by combining Theorem~\ref{thm:cont_param} with a Lyapunov LMI and the S-procedure, similar to Theorem~\ref{thm:analysis} or the discrete-time results by~\cite{waarde2020from}. However, applying the analysis conditions in Theorem~\ref{thm:analysis} for the resulting controller may lead to a small MSI such that stability can only be achieved under an unnecessarily high sampling rate. In particular, if achieving a small sampling rate constitutes the main control objective, then we want to find a possibly large MSI bound $h$ holding for \emph{some} controller, which needs to be found concurrently with $h$. Since the matrix $\tilde{Q}_c$ is generally not negative definite, Inequalities~\eqref{eq:thm_time_delay_LMI1} and~\eqref{eq:thm_time_delay_LMI2} are not convex in the state-feedback gain $K$ and hence, we cannot optimize over $K$ efficiently. The following result provides a reformulation of Inequalities~\eqref{eq:thm_time_delay_LMI1}-\eqref{eq:thm_time_delay_LMI2} which is convex in $K$ for fixed variables $P_1^{-1}$, $R$ and hence, it can be used to search for a controller while simultaneously guaranteeing a bound on the corresponding MSI. 
\begin{theorem}\label{thm:design} If Assumption~\ref{ass:inertia} holds, then $\dot{x}(t)=Ax(t)+Bu(t)$ with controller~\eqref{eq:sampled_data_controller} is exponentially stable for any $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_{c}$ and any sampling sequence $\{t_k\}_{k=0}^{\infty}$ with $t_{k+1}-t_k\in(0,h]$, $t_0=0$, and $\lim_{k\to\infty}t_k=\infty$, if there exist ${\tilde{\lambda}_1>0}$, ${\tilde{\lambda}_2>0}$, ${Q_1\succ0}$, $R\succ0$, $Q_2$, and $Q_3$ with $Q_3+Q_3^\top\succ0$ such that $Q=\begin{bmatrix}Q_1&0\\Q_2&Q_3\end{bmatrix}$, $Q_R=\text{diag}(Q,R),\bar{Q}_R=\text{diag}(Q,R^{-1})$,~\eqref{eq:thm_time_delay_LMI3_dual}, and~\eqref{eq:thm_time_delay_LMI2_dual} hold. \begin{figure* \vspace{2pt} \begin{align}\label{eq:thm_time_delay_LMI3_dual} \left[ \begin{array}{cc} I&0\\ \begin{bmatrix}0&0&0\\I&-I&R\\0&0&-\frac{1}{2h}I\end{bmatrix}& \begin{bmatrix}I&K^\top\\0&0\\0&0\end{bmatrix}\\\hline 0&I\\ \begin{bmatrix}0&I&0\end{bmatrix}&\begin{bmatrix}0&0\end{bmatrix} \end{array} \right]^\top &\left[ \begin{array}{c|c} \begin{matrix} 0&Q_R^\top\\Q_R&0 \end{matrix}&0\\\hline0&\tilde{\lambda}_1P_c \end{array} \right] \left[ \begin{array}{cc} I&0\\ \begin{bmatrix}0&0&0\\I&-I&R\\0&0&-\frac{1}{2h}I\end{bmatrix}& \begin{bmatrix}I&K^\top\\0&0\\0&0\end{bmatrix}\\\hline 0&I\\ \begin{bmatrix}0&I&0\end{bmatrix}&\begin{bmatrix}0&0\end{bmatrix} \end{array} \right]\prec0\\\label{eq:thm_time_delay_LMI2_dual} \left[ \begin{array}{cc} I&0\\ \begin{bmatrix}0&0&0\\I&-I&0\\0&0&-\frac{1}{2}hI\end{bmatrix}& \begin{bmatrix}I&K^\top\\0&0\\0&-hK^\top\end{bmatrix}\\\hline 0&I\\ \begin{bmatrix}0&I&0\end{bmatrix}&\begin{bmatrix}0&0\end{bmatrix} \end{array} \right]^\top &\left[ \begin{array}{c|c} \begin{matrix} 0&\bar{Q}_R^\top\\\bar{Q}_R&0 \end{matrix}&0\\\hline0&\tilde{\lambda}_2P_c \end{array} \right] \left[ \begin{array}{cc} I&0\\ \begin{bmatrix}0&0&0\\I&-I&0\\0&0&-\frac{1}{2}hI\end{bmatrix}& \begin{bmatrix}I&K^\top\\0&0\\0&-hK^\top\end{bmatrix}\\\hline 0&I\\ 
\begin{bmatrix}0&I&0\end{bmatrix}&\begin{bmatrix}0&0\end{bmatrix} \end{array} \right]\prec0 \end{align} \noindent\makebox[\linewidth]{\rule{\textwidth}{0.4pt}} \end{figure*} \end{theorem} \textbf{Proof}$\quad$ Applying the dualization lemma~\cite[Lemma 4.9]{scherer2000linear} and performing straightforward algebraic manipulations, we infer that~\eqref{eq:thm_time_delay_LMI3_dual} is equivalent to \begin{align}\label{eq:thm_time_delay_LMI3} \left[ \begin{array}{cc} I&0\\ \begin{bmatrix}0&I&0\\0&-I&0\\0&R&-\frac{1}{2h}I\end{bmatrix}&\begin{bmatrix}0\\I\\0\end{bmatrix}\\\hline 0&I\\ \begin{bmatrix}I&0&0\\K&0&0\end{bmatrix}&\begin{bmatrix}0\\0\end{bmatrix} \end{array} \right]^\top \left[ \begin{array}{c|c} \begin{matrix} 0&P_R^\top\\P_R&0 \end{matrix}&0\\\hline0&\lambda_1\tilde{P}_c \end{array} \right] \left[ \begin{array}{cc} \star&\star\\\star&\star\\\hline \star&\star\\\star&\star \end{array} \right]\prec0 \end{align} with $\lambda_1=\frac{1}{\tilde{\lambda}_1},P=\begin{bmatrix}P_1&0\\P_2&P_3\end{bmatrix}\coloneqq Q^{-1},P_R\coloneqq\text{diag}(P,R^{-1})$, where invertibility of $Q$ follows from $Q_1\succ0,Q_3+Q_3^\top\succ0$. Similar to the proof of Theorem~\ref{thm:analysis}, the full-block S-procedure~(\cite{scherer2001lpv}) together with~\eqref{eq:AB_bound_primal} implies that \begin{align*} &\begin{bmatrix}P_1&0&0\\P_2&P_3&0\\0&0&R^{-1}\end{bmatrix}^\top \begin{bmatrix}0&I&0\\A+BK&-I&0\\0&R&-\frac{1}{2h}I\end{bmatrix}\\ &+\begin{bmatrix}0&I&0\\A+BK&-I&0\\0&R&-\frac{1}{2h}I\end{bmatrix}^\top \begin{bmatrix}P_1&0&0\\P_2&P_3&0\\0&0&R^{-1}\end{bmatrix}\\ =&\begin{bmatrix}P_2^\top(A+BK)+(A+BK)^\top P_2&\star&\star\\ P_1-P_2+P_3^\top(A+BK)&-P_3-P_3^\top&\star\\ 0&I&-\frac{1}{h}R^{-1}\end{bmatrix}\\ \prec&\>0 \end{align*} for any $\begin{bmatrix}A&B\end{bmatrix}\in\Sigma_c$. Applying the Schur complement to the right-lower block, this in turn implies~\eqref{eq:time_delay1}. 
Similarly, using the dualization lemma~\cite[Lemma 4.9]{scherer2000linear},~\eqref{eq:thm_time_delay_LMI2_dual} implies~\eqref{eq:thm_time_delay_LMI2} which in turn implies~\eqref{eq:time_delay2} by Theorem~\ref{thm:analysis}, thus concluding the proof. $\hfill\blacksquare$ It is difficult to render Inequalities~\eqref{eq:thm_time_delay_LMI3_dual}--\eqref{eq:thm_time_delay_LMI2_dual} convex in all decision variables since i) they contain both $R$ and $R^{-1}$ and ii) the state-feedback gain $K$ is multiplied by both $Q_1$ and $R^{-1}$. Nevertheless, for fixed $Q_1\succ0$ and $R\succ0$, they are convex in the remaining decision variables and thus, Theorem~\ref{thm:design} allows us to use measured data for designing controllers which lead to a possibly large MSI bound $h$. While fixing the matrices $Q_1,R$ leads to conservatism, a practical remedy is to alternate between solving the LMIs in Theorems~\ref{thm:analysis} and~\ref{thm:design} for fixed $K$ and $Q_1,R$, respectively. The proof of Theorem~\ref{thm:design} exploits dualization arguments to render the involved matrix inequalities convex in $K$, similar to the (discrete-time) results in~\cite{berberich2020combining}. In particular, the dualization lemma directly implies that~\eqref{eq:thm_time_delay_LMI2} and~\eqref{eq:thm_time_delay_LMI2_dual} are equivalent. In order to render Inequality~\eqref{eq:thm_time_delay_LMI1} convex in $K$, we additionally need to perform a Schur complement w.r.t. $R$. \begin{remark} In addition to being purely data-driven methods, Theorems~\ref{thm:analysis} and~\ref{thm:design} also extend existing model-based methods for stability analysis and controller design for uncertain sampled-data systems. 
Many existing time-delay approaches such as~\cite{fridman2010refined,gao2010robust,seuret2012novel} can only handle polytopic uncertainty and verify the nominal LMIs on all vertices of the uncertainty set, which can become intractable for too many vertices (e.g., in high-dimensional systems). Alternatively, the approach in~\cite{orihuela2010delay} allows for simple quadratic uncertainty bounds but only stability analysis conditions are provided and no controller design is considered. On the other hand, the presented approach utilizes powerful robust control tools to employ the data-dependent uncertainty bound derived in Theorem~\ref{thm:cont_param}. Additionally, it is straightforward to include prior model knowledge in order to reduce conservatism and improve the MSI bound: Suppose the LTI system~\eqref{eq:sys_cont} is given as a linear fractional transformation (compare~\cite{zhou1996robust}) \begin{align}\label{eq:LFT} \dot{x}(t)&=Ax(t)+Bu(t)+B_ww(t),\\\nonumber z(t)&=Cx(t)+Du(t),\\\nonumber w(t)&=\Delta z(t), \end{align} where all matrices except for the uncertainty $\Delta$ are known. Following the arguments in~\cite{berberich2020combining}, all results in this paper can be directly extended to compute MSI bounds and design controllers for~\eqref{eq:LFT} robustly for all uncertainties $\Delta$ which are consistent with the measured data (potentially affected by noise) and the prior knowledge given by the matrices $A,B,C,D,B_w$. This also allows us to include prior knowledge on uncertainty bounds or structure such as \begin{align*} \Delta=\mathrm{diag}(\Delta_1,\dots,\Delta_{\ell}),\quad \begin{bmatrix}\Delta_i^\top\\I\end{bmatrix}^\top P_i \begin{bmatrix}\Delta_i^\top\\I\end{bmatrix}\succeq0 \end{align*} for some matrices $P_i$, which can lead to a dramatic reduction of uncertainty and therefore to an increase of the computed MSI bound. 
Finally, for the special case that only prior knowledge and no data is available, this leads to a model-based robust time-delay approach for uncertain systems with an improved flexibility if compared to the existing literature mentioned above. \end{remark} \begin{remark} As an obvious alternative to the results in this paper, one could estimate the matrices $A_{\text{tr}}$, $B_{\text{tr}}$ together with uncertainty bounds and then apply the model-based time-delay results for uncertain systems in~\cite{fridman2010refined,gao2010robust,seuret2012novel,orihuela2010delay}. However, as discussed above, these results mainly focus on polytopic uncertainty descriptions. It is possible to construct a polytope which is guaranteed to contain the unknown system matrices $A_{\text{tr}}$, $B_{\text{tr}}$ using measured data affected by noise, e.g., via set membership estimation (\cite{milanese1991optimal}). However, the number of vertices of this polytope grows rapidly with the system dimension and the number of data points and hence, if the polytope is not over-approximated (leading to conservatism), then this approach quickly becomes intractable. \end{remark} Finally, to the best of our knowledge, the presented work provides the first results to compute MSI bounds and design controllers with a possibly high MSI based only on measured data. A highly interesting issue for future research is whether and, if so, how further approaches to sampled-data control under aperiodic sampling as, e.g., surveyed in~\cite{hetel2017recent} can be translated into a purely data-driven formulation. 
To this end, following the input-output approach (compare~\cite{mirkin2007some}), which relies on an integral quadratic constraint description (\cite{megretski1997system}) of the delay operator, in such a scenario is immediate, simply by considering both the unknown system matrices and the delay as uncertainties in a robust control formulation (compare~\cite{berberich2020combining} for how additional uncertainty bounds can be combined with a data-dependent bound of the form~\eqref{eq:thm_cont_param} to design robust controllers). \section{Example}\label{sec:example} In the following, we illustrate the practical applicability of the proposed approach by applying it to a standard example from~\cite{zhang2001stability}. To be precise, we consider \begin{align}\label{eq:sys_ex} A_{\text{tr}}=\begin{bmatrix}0&1\\0&-0.1\end{bmatrix},\quad B_{\text{tr}}=\begin{bmatrix}0\\0.1\end{bmatrix},\quad B_d=I, \end{align} where $A_{\text{tr}},B_{\text{tr}}$ are \emph{unknown}. We assume that $N=100$ measurements $\{\dot{x}(\tau_k),x(\tau_k),u(\tau_k)\}_{k=1}^N$ are available with (not equidistant) sampling intervals \begin{align*} &\tau_{k+1}-\tau_k=1.5\quad\text{if}\>k\in[1,49],\\ &\tau_{k+1}-\tau_k=3\quad\text{if}\>k\in[50,99], \end{align*} where the input generating the data is sampled uniformly from $u(\tau_k)\in[-1,1]$. These measurements are perturbed by a disturbance $\{\hat{d}(\tau_k)\}_{k=1}^N$ sampled uniformly from $\hat{d}(\tau_k)^\top\hat{d}(\tau_k)\leq\bar{d}^2$, $k=1,\dots,N$, for some $\bar{d}>0$, and we assume the (valid) noise bound~\eqref{eq:noise_bound} with $Q_d=-I$, $S_d=0$, $R_d=\bar{d}^2NI$. We use MATLAB together with YALMIP~(\cite{lofberg2004yalmip}) and MOSEK~(\cite{MOSEK15}) for computations. 
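While the computations in this section are carried out in MATLAB, the data-generation step can be sketched in Python/NumPy as follows; applying the input via a zero-order hold and holding the disturbance constant between samples are simplifying assumptions made only for this illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A_tr = np.array([[0.0, 1.0], [0.0, -0.1]])   # known here only to simulate the plant
B_tr = np.array([[0.0], [0.1]])
d_bar, N, dt = 0.01, 100, 1e-3

# Sampling instants: 49 intervals of length 1.5 followed by 50 intervals of length 3.
gaps = np.concatenate([np.full(49, 1.5), np.full(50, 3.0)])
tau = np.concatenate([[0.0], np.cumsum(gaps)])

x = np.array([1.0, 1.0])
X, U, D, X_dot = [], [], [], []
for k in range(N):
    u = rng.uniform(-1.0, 1.0, 1)                    # input drawn uniformly from [-1, 1]
    d = rng.uniform(-1.0, 1.0, 2)
    d *= d_bar * rng.uniform() / np.linalg.norm(d)   # ||d||_2 <= d_bar
    X.append(x.copy()); U.append(u); D.append(d)
    X_dot.append(A_tr @ x + B_tr @ u + d)
    if k < N - 1:
        # zero-order hold on u and d between samples (a simplifying assumption)
        for _ in range(int(round(gaps[k] / dt))):
            x = x + dt * (A_tr @ x + B_tr @ u + d)
X, U, D, X_dot = (np.array(v).T for v in (X, U, D, X_dot))
assert X.shape == (2, N) and U.shape == (1, N) and X_dot.shape == (2, N)
```

The resulting matrices $\dot{X},X,U$ then enter the LMIs of Theorems~\ref{thm:analysis} and~\ref{thm:design} through $P_c$.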
We first apply Theorem~\ref{thm:analysis} to compute an MSI bound for a given controller $K=-\begin{bmatrix}3.75&11.5\end{bmatrix}$ which is also considered in~\cite{fridman2010refined} and for which solving the model-based LMIs~\eqref{eq:time_delay1}-\eqref{eq:time_delay2} leads to a maximum $h$ of $1.62$. On the other hand, the MSI bounds computed based on the available data without any model knowledge via Theorem~\ref{thm:analysis} are listed in Table~1 for varying noise levels. As expected, the MSI bound which can be guaranteed decreases for increasing noise levels since the corresponding set $\Sigma_c$ of systems consistent with the data grows and hence, the sampling frequency required for stability increases. Since the bound~\eqref{eq:noise_bound} with $Q_d=-I$, $S_d=0$, $R_d=\bar{d}^2NI$ is not a tight representation of $\hat{d}(\tau_k)^\top\hat{d}(\tau_k)\leq\bar{d}^2$, the size of $\Sigma_c$ need not decrease in general when the data length $N$ increases. This phenomenon has been pointed out and analyzed in more detail in the recent literature, compare, e.g.,~\cite{martin2020dissipativity,berberich2020combining,bisoffi2021trade}. Nevertheless, for the application of Theorems~\ref{thm:analysis} and~\ref{thm:design} in the above example, we observe that increasing the data length $N$ has (approximately) a similar effect as reducing the noise bound $\bar{d}$, i.e., a longer data trajectory allows us to compute larger MSI bounds in most cases.
\begin{table}\label{tab:ex_analysis} \begin{center} \begin{tabular}{c|ccccccc} noise bound $\bar{d}$&$0.001$&$0.005$&$0.01$&$0.02$&$0.03$&$0.04$&$0.05$\\\hline MSI bound $h$& $1.59$&$1.49$&$1.38$&$1.17$&$1$&$0.86$&$0.67$ \end{tabular} \vskip6.5pt \caption{MSI bound $h$ for~\eqref{eq:sys_ex} with $K=-\big[3.75\quad11.5\big]$, computed via Theorem~\ref{thm:analysis}.} \end{center} \end{table} Further, we optimize for a controller gain $K$ with a possibly large MSI bound by iteratively optimizing over the LMIs in Theorems~\ref{thm:analysis} and~\ref{thm:design} (fixing $K$ and $Q_1,R$, respectively) while gradually increasing $h$. We initialize this iterative approach via the controller $K=-\begin{bmatrix}3.75&11.5\end{bmatrix}$. In Table~2, the MSI bounds $h$ are displayed which hold for the corresponding state-feedback gains that emerged from this iteration. Note that, by choosing a different controller, the MSI bound can be increased significantly if compared to the controller given above. \begin{table}\label{tab:ex_design} \begin{center} \begin{tabular}{c|ccccccc} noise bound $\bar{d}$&$0.001$&$0.005$&$0.01$&$0.02$&$0.03$&$0.04$&$0.05$\\\hline MSI bound $h$& $142.6$&$28.5$&$13.8$&$6.3$&$4$&$2.9$&$2.2$ \end{tabular} \vskip6.5pt \caption{MSI bound $h$ for~\eqref{eq:sys_ex} with $K$ computed via an iteration using Theorems~\ref{thm:analysis} and~\ref{thm:design}.} \end{center} \end{table}
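The alternating scheme used above can be sketched generically as follows (Python; \texttt{analyze} and \texttt{design} are hypothetical callbacks standing in for solving the LMIs of Theorem~\ref{thm:analysis} with fixed $K$ and of Theorem~\ref{thm:design} with fixed $Q_1,R$, respectively; the scalar stand-ins at the end are purely illustrative):

```python
def iterate_design(analyze, design, K0, h0, h_step=1.05, max_iter=50):
    """Alternate analysis (fixed gain) and design (fixed Lyapunov-type
    variables) steps while gradually increasing the target MSI bound h.
    `analyze(K, h)` returns certificate data or None if infeasible;
    `design(cert, h)` returns an updated gain (or None to keep K)."""
    K, h = K0, h0
    for _ in range(max_iter):
        cert = analyze(K, h * h_step)
        if cert is None:
            break                      # cannot certify a larger h; stop
        h *= h_step
        K_new = design(cert, h)
        if K_new is not None:
            K = K_new
    return K, h

# Toy scalar stand-ins for the two LMI solvers (purely illustrative):
analyze = lambda K, h: (K,) if h <= K else None
design = lambda cert, h: min(cert[0] * 1.2, 3.0)
K, h = iterate_design(analyze, design, K0=1.0, h0=0.5)
assert 2.5 < h <= 3.0
```

In the actual computation, both callbacks are semidefinite programs and every accepted step yields a certified MSI bound for the current gain.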
\section{Introduction} \setcounter{equation}{0} The modern era of high-throughput data collection creates data in abundance. Some devices -- spectrometers and gene chips come to mind -- automatically generate measurements on thousands of standard features of each observational unit. What hasn't changed in science is the difficulty of obtaining good observational units. High-throughput devices don't help us to find and enroll qualified patients, or catch exotic butterflies, or observe primate mating behaviors. Hence, in many fields the number of observational units -- e.g., patients, butterflies, or matings -- has not increased, and stays today in the dozens or hundreds. But each of those few observational units can now be subjected to a large battery of automatic feature measurements. Many of those automatically measured features will have little relevance to any given project. This new era of {\it feature glut} poses a needle-in-a-haystack problem: we must detect a relatively few valuable features among many useless ones. Unfortunately, the combination of small sample sizes (few observational units) and high dimensions (many feature measurements) makes it hard to tell needles from straw. Orthodox statistical methods assumed a quite different set of conditions: more observations than features, and all features highly relevant. Modern statistical research is intensively developing new tools and theory to address the new unorthodox setting; such research comprised much of the activity in the recent 6-month Newton Institute program {\it Statistical Theory and Methods for Complex, High-Dimensional Data}. In this article we focus on this new setting, this time addressing the challenges that modern high-dimensional data pose to linear classification schemes. New data analysis tools and new types of mathematical analysis of those tools will be introduced. \subsection{Multivariate normal classification} \label{sec-simple} Consider a simple model of linear classifier training.
We have a set of labelled training samples $(Y_i, X_i)$, $i=1,\dots,n$, where each label $Y_i$ is $ \pm 1$ and each feature vector $X_i \in R^p$. For simplicity, we assume the training set contains equal numbers of $1$'s and $-1$'s and that the feature vectors $X_i \in R^p$ obey $X_i \sim N( Y_i \mu , \Sigma )$, $i=1,\dots, n$, for an unknown {\it mean contrast vector} $\mu \in R^p$; here $\Sigma$ denotes the feature covariance matrix and $n$ is the training set size. In this simple setting, one ordinarily uses linear classifiers to classify an unlabeled test vector $X$, taking the general form $L(X) = \sum_{j=1}^p w(j) X(j)$, for a sequence of `feature weights' $w = (w(j): j=1,\dots,p)$. Classical theory going back to RA Fisher \cite{Anderson} shows that the optimal classifier has feature weights $w \propto \Sigma^{-1} \mu$; at first glance linear classifier design seems straightforward and settled. However, in many of today's most active application areas, it is a major challenge to construct linear classifiers which work well. \subsection{$p$ larger than $n$} \label{sec-p-gt-n} The culprit can be called the ``$p > n$ problem''. A large number $p$ of measurements is automatically made on thousands of standard features, but in a given project, the number of observational units, $n$, might be in the dozens or hundreds. The fact that $p \gg n$ makes it difficult or impossible to estimate the feature covariance $\Sigma$ with any precision. It is well known that naive application of the formula $w \propto \Sigma^{-1} \mu$ to empirical data in the $p > n$ setting is problematic; at a minimum, because the matrix of empirical feature covariances $\widehat{Cov}_{n,p}(X)$ is not invertible. But even if we use the generalized inverse $\widehat{Cov}_{n,p}(X)^\dag$, the resulting naive classification weights, $\hat{w}_{naive}\propto \widehat{Cov}_{n,p}(X)^{\dag} \widehat{Cov}_{n,p}(Y,X)$, often give very ``noisy'' classifiers with low accuracy. 
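The singularity at the heart of the ``$p > n$ problem'' is easy to see numerically. The following sketch (ours, with arbitrary illustrative dimensions) shows that the empirical covariance matrix built from $n$ samples has rank at most $n-1$, far below $p$, and so cannot be inverted in the usual way:

```python
import numpy as np

# Illustrative sketch (not the paper's software): with n samples in p dimensions,
# the empirical covariance matrix has rank at most n - 1, hence is singular
# whenever p > n and the formula w ~ Sigma^{-1} mu cannot be applied directly.
rng = np.random.default_rng(0)
n, p = 20, 100                     # p >> n, as in the "p > n problem"
X = rng.standard_normal((n, p))    # n feature vectors in R^p

Xc = X - X.mean(axis=0)            # center the columns
Sigma_hat = Xc.T @ Xc / n          # p x p empirical covariance

rank = np.linalg.matrix_rank(Sigma_hat)
print(rank)                        # at most n - 1 = 19, far below p = 100
```

Any weights of the form $\widehat{Cov}^{-1}\mu$ are therefore unavailable, which is what motivates the generalized inverse and, below, the identity-covariance simplification.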
The modern feature glut thus seriously damages the applicability of `textbook' approaches. A by-now standard response to this problem is to simply ignore feature covariances, and standardize the features to mean zero and variance one. One in effect pretends that the feature covariance matrix $\Sigma$ is the identity matrix, and uses the formula $w(j) \propto Cov(Y,X(j)) $ \cite{Bickel,Fan}. Even after this reduction, further challenges remain. \subsection{When features are rare and weak} \label{sec-simple-model} In fields such as genomics, with a glut of feature measurements per observational unit, it is expected that few measured features will be useful in any given project; nevertheless, they all get measured, because researchers can't say in advance which ones will be useful. Moreover, reported misclassification rates are relatively high. Hence there may be numerous useful features, but they are relatively rare and individually quite weak. Such thinking motivated the following {\it rare/weak feature model} {\it (RW Feature Model)} in \cite{PNAS}. Under this model, \begin{itemize} \item {\bf Useful features are rare:} the contrast vector $\mu$ is nonzero in only $k$ out of $p$ elements, where $ \epsilon = k/p$ is small, i.e. close to zero. As an example, think of $p = 10,000$, $k =100$, so $\epsilon = k/p = .01$. In addition, \item {\bf Useful features are weak:} the nonzero elements of $\mu$ have {\it common} amplitude $\mu_0$, which is assumed not to be `large'. `Large' can be measured using $\tau = \sqrt{n} \mu_0$; values of $\tau$ in the range $2$ to $4$ imply that corresponding values of $\mu_0$ are not large. \end{itemize} Since the elements $X(j)$ of the feature vector where the class contrast $\mu(j) = 0$ are entirely uninformative about the value of $Y$, only the $k$ features where $\mu(j) = \mu_0$ are useful. The problem is how to identify and benefit from those rare, weak features.
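A minimal simulation sketch of the $RW(\epsilon,\tau)$ model (our illustration; the training-set size $n$ is chosen arbitrarily, and labels are balanced only on average for simplicity):

```python
import numpy as np

# Sketch of the rare/weak feature model: k of p features carry a common
# contrast mu_0 = tau / sqrt(n); the remaining p - k features carry none.
rng = np.random.default_rng(1)
p, k, n, tau = 10_000, 100, 40, 3.0            # epsilon = k/p = 0.01
mu = np.zeros(p)
useful = rng.choice(p, size=k, replace=False)  # random locations of useful features
mu[useful] = tau / np.sqrt(n)                  # weak per-feature amplitude

Y = np.where(rng.random(n) < 0.5, 1, -1)       # class labels +-1
X = Y[:, None] * mu[None, :] + rng.standard_normal((n, p))   # X_i ~ N(Y_i mu, I)

# Feature Z-scores Z(j) = n^{-1/2} sum_i Y_i X_i(j): useful coordinates
# center near tau, useless coordinates near 0, each with unit variance.
Z = (Y @ X) / np.sqrt(n)
print(Z[useful].mean(), Z.std())
```

With these choices the useful coordinates of $Z$ behave like $\tau$ plus standard normal noise, the picture exploited by the thresholding rules introduced next.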
We speak of $\epsilon$ and $\tau$ as the {\it sparsity} and {\it strength} parameters for the Rare/Weak model, and refer to the $RW(\epsilon,\tau)$ model. Models with a `sparsity' parameter $\epsilon$ are common in estimation settings \cite{DJ,MENBO,Needles}, but not with the feature strength constraint $\tau$. Also closely related to the RW model is work in multiple testing by Ingster and the authors \cite{Ingster97,HC,JinThesis}. \subsection{Feature selection by thresholding} Feature selection -- i.e. working only with an empirically-selected subset of features -- is a standard response to feature glut. We are supposing, as announced in Section \ref{sec-p-gt-n}, that feature correlations can be ignored and that features are already standardized to variance one. We therefore focus on the vector of feature $Z$-scores, with components $ Z (j) = n^{-1/2} \sum_i Y_i X_i(j)$, $j=1,\dots,p$. These are the $Z$-scores of two-sided tests of $H_{0,j}$: $Cov(Y,X(j)) = 0$. Under our assumptions $Z \sim N(\theta , I_p)$ with $\theta=\sqrt{n} \mu$ and $\mu$ the feature contrast vector. Features with nonzero $\mu(j)$ typically have significantly nonzero $Z(j)$ while all other features will have $Z(j)$ values largely consistent with the null hypothesis $\mu(j)= 0$. In such a setting, selecting features with $Z$-scores above a threshold makes sense. We identify three useful threshold functions: $\eta_t^{\star}(z)$, $\star \in \{ clip, hard, soft\}$. These are: {\it Clipping} -- $ \eta^{clip}_t(z) = \mbox{sgn}(z) \cdot 1_{\{|z| > t\}}$, which ignores the size of the $Z$-score, provided it is large; {\it Hard Thresholding} -- $ \eta^{hard}_t(z) = z\cdot 1_{\{ |z| > t \}}$, which uses the size of the $Z$-score, provided it is large; and {\it Soft Thresholding} -- $ \eta^{soft}_t(z) = \mbox{sgn}(z) (|z| - t)_+$, which uses a shrunken $Z$-score, provided it is large. \begin{definition} Let $\star \in \{ soft, hard, clip\}$.
The threshold feature selection classifier makes its decision based on $\hat{L}_t^\star <> 0$ where $ \hat{L}_t^\star(X) = \sum_{j=1}^p \hat{w}^\star_t(j) X(j)$, and $ \hat{w}^{\star}_t(j) = \eta^\star_t ( Z(j)), j = 1,\dots,p$. \end{definition} In words, the classifier sums across features with large training-set $Z$-scores, and a simple function of the $Z$-score generates the feature weight. Several methods for linear classification in bioinformatics follow this approach: the Shrunken Centroids method \cite{Tibs} is a variant of soft thresholding in this two-class setting; the highly-cited methods in \cite{Leukemia} and \cite{BreastCancer} are variants of hard thresholding. Clipping makes sense in the theoretical setting of the RW model (since there the useful features all have the same strength) and is simpler to analyse than the other nonlinearities. Thresholding has been popular in estimation for more than a decade \cite{DJ}; it is known to be successful in `sparse' settings where the estimand has many coordinates, of which only a relatively few coordinates are significantly nonzero. However, classification is not the same as estimation, and performance characteristics are driven by quite different considerations. One crucial question remains: how to choose the threshold based on the data? Popular methods for threshold choice include cross-validation \cite{Tibs}; control of the false discovery rate \cite{AB,ABDJ,BH95,FDR}; and control of the local false discovery rate \cite{Efron}. \subsection{Higher Criticism} In \cite{PNAS} we proposed a method of threshold choice based on recent work in the field of multiple comparisons. We now very briefly mention work in that field and then introduce the threshold choice method. \subsubsection{HC testing} Suppose we have a collection of $N$ $P$-values $\pi_i$ which under the global null hypothesis are uniformly distributed: $\pi_i \sim_{iid} U[0,1]$.
Consider the order statistics: $ \pi_{(1)} \leq \pi_{(2)} \leq \dots \leq \pi_{(N)}$. Under the null hypothesis, these order statistics have the usual properties of uniform order statistics, including the asymptotic normality $\pi_{(i)} \sim_{approx} \mathrm{Normal}( i/N, i/N ( 1 - i/N)/N)$. HC forms a $Z$-score comparing $\pi_{(i)}$ with its mean under the null, and then maximizes over a wide range of $i$. Formally: \begin{definition} ({\bf HC Testing}) \cite{HC}. The Higher Criticism objective is \begin{equation} \label{HCobj} HC(i; \pi_{(i)}) = \sqrt{N} \frac{i/N - \pi_{(i)}}{\sqrt{i/N(1-i/N)}} . \end{equation} Fix $\alpha_0 \in (0,1)$ (eg $\alpha_0 = 1/10$). The HC test statistic is $HC^* = \max_{1 \leq i \leq \alpha_0 N } HC(i; \pi_{(i)})$. \end{definition} HC seems insensitive to the selection of $\alpha_0$, in Rare/Weak situations; here we always use $\alpha_0 = .10$. In words, we look for the largest standardized discrepancy for any $\pi_{(i)}$ between the observed behavior and the expected behavior under the null. When this is large, the whole collection of $P$-values is not consistent with the global null hypothesis. The phrase ``Higher Criticism'' is due to John Tukey, and reflects the shift in emphasis from single test results to the whole collection of tests; see discussion in \cite{HC}. Note: there are several variants of the HC statistic; we discuss only one variant in this brief note; the main results of \cite{HC} still apply to this variant; for full discussion see \cite{HC,PNAS}. \subsubsection{HC thresholding} \label{ssec-HCTDEF} Return to the classification setting in previous sections. We have a vector of feature $Z$-scores $(Z(j), j=1,\dots, p)$. To apply HC notions, we translate $Z$-scores into two-sided $P$-values and maximize the HC objective over index $i$ in the appropriate range.
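A minimal numerical sketch of the HC test statistic (ours, with $\alpha_0 = 0.10$ and illustrative $P$-value collections):

```python
import math

# Sketch of the HC test statistic: standardize each ordered P-value against
# its null expectation i/N, and maximize over 1 <= i <= alpha0 * N.
def hc_statistic(pvalues, alpha0=0.10):
    N = len(pvalues)
    psorted = sorted(pvalues)
    imax = max(1, int(alpha0 * N))      # search range of the maximization
    best = -math.inf
    for i in range(1, imax + 1):
        u = i / N
        hc = math.sqrt(N) * (u - psorted[i - 1]) / math.sqrt(u * (1 - u))
        best = max(best, hc)
    return best

# Under the global null, HC* stays modest; a batch of tiny P-values inflates it.
null_like = [(i + 0.5) / 100 for i in range(100)]     # roughly uniform grid
spiked = [1e-6] * 5 + null_like[:-5]                  # five very small P-values
print(hc_statistic(null_like), hc_statistic(spiked))
```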
Define the {\it feature $P$-values} $\pi_i = Prob \{ |N(0,1)| > |Z(i)|\}$, $i=1,\dots, p$; and define the increasing rearrangement $\pi_{(i)}$, the HC objective function $HC(i; \pi_{(i)})$, and the decreasing rearrangement $|Z|_{(i)}$ correspondingly, so that $\pi_{(i)}$ is the $P$-value of $|Z|_{(i)}$. Here is our proposal. \begin{definition} ({\bf HC Thresholding}). \label{hct-def} Apply the HC procedure to the feature $P$-values. Let the maximum HC objective be achieved at index $\hat{i}$. The {\bf Higher Criticism threshold} (HCT) is the value $\hat{t}^{HC} = |Z|_{(\hat{i})}$. The {\bf HC threshold feature selector} selects features with $Z$-scores exceeding $\hat{t}^{HC}$ in magnitude. \end{definition} Figure \ref{fig:IllustHC} illustrates the procedure. Panel (a) shows a sample of $Z$-scores, Panel (b) shows a PP-plot of the corresponding ordered $P$-values versus $i/p$, and Panel (c) shows a standardized PP-plot. The standardized PP-plot has its largest deviation from zero at $\hat{i}$, and this generates the threshold value. \begin{figure} \begin{centering} \includegraphics[height=2.75 in,width=3.15in]{HC-RoySoc} \caption{Illustration of HC thresholding. Panel (a) the ordered $|Z|$-scores. Panel (b) the corresponding ordered $P$-values in a PP plot. Panel (c) the HC objective function in (\ref{HCobj}); this is largest at $\hat{i} \approx 0.007 p $ ($x$-axes are $i/p$). Vertical lines indicate $\pi_{(\hat{i})}$ in Panel (b), and $|Z|_{(\hat{i})}$ in Panel (a). } \label{fig:IllustHC} \end{centering} \end{figure} \subsubsection{Previously-reported results for HCT} Our article \cite{PNAS} reported several findings about the behavior of HCT based on numerical and empirical evidence. In the RW model, we can define an ideal threshold, i.e. a threshold based on full knowledge of the RW parameters $\epsilon$ and $\tau$ and chosen to minimize the misclassification rate of the threshold classifier -- see Section \ref{sec-ideal} below.
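The recipe of Definition \ref{hct-def} can be sketched in a few lines (ours, with artificial $Z$-scores; `NormalDist` supplies the standard normal CDF):

```python
from statistics import NormalDist
import math

# Sketch of HC thresholding: find the index i_hat maximizing the HC objective
# over the feature P-values, then threshold at |Z|_(i_hat).
def hc_threshold(zscores, alpha0=0.10):
    N = len(zscores)
    absz = sorted((abs(z) for z in zscores), reverse=True)   # largest first
    # two-sided P-value of the i-th largest |Z| = the i-th smallest P-value
    pval = [2.0 * (1.0 - NormalDist().cdf(a)) for a in absz]
    imax = max(1, int(alpha0 * N))
    best, i_hat = -math.inf, 1
    for i in range(1, imax + 1):
        u = i / N
        hc = math.sqrt(N) * (u - pval[i - 1]) / math.sqrt(u * (1 - u))
        if hc > best:
            best, i_hat = hc, i
    return absz[i_hat - 1]          # threshold t_hat = |Z|_(i_hat)

# Three strong features among 100: HCT lands at the weakest of the three,
# so the selector keeps exactly the strong features.
zs = [6.0, 5.5, 5.0] + [0.1] * 97
t_hat = hc_threshold(zs)
print(t_hat)      # 5.0
```

The selected set is then $\{j: |Z(j)| \geq \hat{t}^{HC}\}$, exactly the HC threshold feature selector of the definition.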
We showed in \cite{PNAS} that: \begin{itemize} \item HCT gives a threshold value which is numerically very close to the ideal threshold. \item In the case of very weak feature $Z$-scores, HCT has a False Feature Discovery Rate (FDR) substantially higher than other popular approaches, but a Feature Missed Detection Rate (MDR) substantially lower than those of the other approaches; \item At the same time, HCT has FDR and MDR very closely matching those of the ideal threshold. \end{itemize} In short, HCT has very different operating characteristics from those other thresholding schemes like FDR thresholding \cite{AB,ABDJ} and Bonferroni thresholding, but very similar operating characteristics to the ideal threshold. \subsection{Asymptotic RW model, and the phase diagram} In this paper we further support the findings reported in \cite{PNAS}, this time using an asymptotic analysis. In our analysis the number of observations $n$ and the number of features $p$ tend to infinity in a linked fashion, with $n$ remaining very small compared to $p$. (Empirical results in \cite{PNAS} show our large-$p$ theory is applicable at moderate $n$ and $p$). More precisely, we consider a sequence of problems with more and more features, increasingly rare useful features, and relatively small numbers of observations compared to the number of features. \begin{definition} \label{def-ARW} The phrase {\bf asymptotic RW model} refers to the following combined assumptions. \begin{itemize} \item {\bf Asymptotic Setting.} We consider a sequence of problems, where the number of observations $n$ and the number of features $p$ both tend to $\infty$ along the sequence. \item {\bf $p$ dramatically larger than $n$}. Along this sequence, $n \sim c \cdot \log(p)^\gamma $, so there are dramatically more features per observational unit than there are observational units. \item {\bf Increasing Rarity.} The sparsity $\epsilon$ varies with $n$ and $p$ according to $\epsilon = p^{-\beta}$, $0 < \beta < 1$.
\item {\bf Decreasing Strength.} The strength $\tau$ varies with $n$ and $p$ according to $\tau = \sqrt{2r \log(p)}$, $0 < r < 1$. \end{itemize} The symbol $ARW(r,\beta;c,\gamma)$ refers to the model combining these assumptions. \end{definition} In this model, because $r < 1$, useful features are individually too weak to detect, and because $0 < \beta < 1$, useful features are increasingly rare with increasing $p$, while increasing in total number with $p$. It turns out that $c$ and $\gamma$ are incidental, while $r$ and $\beta$ are the driving parameters. Hence we always simply write $ARW(r,\beta)$ below. There is a large family of choices of $(r,\beta)$ where successful classification is possible, and another large family of choices where it is impossible. To understand this fact, we use the concept of {\it phase space}, the two-dimensional domain $0 < r , \beta < 1$. We show that this domain is partitioned into two regions or `phases'. In the ``impossible'' phase, useful features are so rare and so weak that classification is asymptotically impossible {\it even with the ideal choice of threshold}. In the ``possible'' phase, successfully separating the two groups is indeed possible -- {\it if} one has access to the ideal threshold. Figure \ref{fig:phase} displays this domain and its partition into phases. Because of the partition into two phases, we also call this display the {\it phase diagram}. An explicit formula for the graph $r = \rho^*(\beta)$ bounding these phases is given in (\ref{eq-rho-def}) below. { The phase diagram provides a convenient platform for comparing different procedures. A threshold choice is {\bf optimal} if it gives the same partition of phase space as the one obtained with the ideal choice of threshold}. How does HCT compare to the ideal threshold, and what partition in the phase space does HCT yield?
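To fix ideas about the scalings in Definition \ref{def-ARW}, a small numerical sketch (our illustration; $c$ and $\gamma$ are set arbitrarily, since they are incidental):

```python
import math

# Numerical sketch of the ARW(r, beta) scalings: rarity epsilon = p^(-beta),
# strength tau = sqrt(2 r log p), and n growing only polylogarithmically in p.
def arw_parameters(p, beta, r, c=1.0, gamma=1.0):
    eps = p ** (-beta)                      # useful-feature fraction
    tau = math.sqrt(2 * r * math.log(p))    # per-feature Z-score strength
    n = c * math.log(p) ** gamma            # number of observational units
    return eps, tau, n

eps, tau, n = arw_parameters(p=10**6, beta=2/3, r=1/2)
print(round(eps, 6), round(tau, 3), round(n, 1))
```

Here $p\epsilon = 100$ useful features, each of strength $\tau \approx 3.7$: numerous in absolute terms, yet vanishingly rare and individually undetectable.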
For reasons of space, we focus in this paper on the {\it Ideal HC threshold}, which is obtained upon replacing the empirical distribution of feature $Z$-scores by its expected value. The Ideal HC threshold is thus the threshold which HCT is `trying to estimate'; in the companion paper \cite{AoS} we give a full analysis showing that the ideal HCT and HCT are close. The central surprise of our story is just how well HC behaves: the partition of phase space describing the two regions where ideal thresholding fails and/or succeeds also describes the two regions where Ideal HCT fails and/or succeeds in classifying accurately. The situation is depicted in the table below: \begin{table}[h] \begin{center} \begin{tabular}{|l|l|l|} \hline Region & Property of Ideal Threshold & Property of Ideal HCT \\ \hline $r < \rho^*(\beta)$ & Ideal Threshold Classifier Fails & Ideal HCT Fails \\ $r > \rho^*(\beta)$ & Ideal Threshold Classifier Succeeds & Ideal HCT Succeeds \\ \hline \end{tabular} \end{center} \end{table} Here by `succeeds', we mean asymptotically zero misclassification rate and by `fails', we mean asymptotically 50\% misclassification rate. In this sense of size of regions of success, HCT is just as good as the ideal threshold. Such statements cannot be made for some other popular thresholding schemes, such as False Discovery Rate threshold selection. As will be shown in \cite{AoS}, even the very popular Cross-Validated choice of Threshold will fail if the training set size is bounded, while HCT will still succeed in the RW model in that case. The full proof of a broader set of claims -- with a considerably more general treatment -- will appear elsewhere. { The whole story on HCT needs three connected papers: \cite{PNAS}, \cite{AoS}, and the current one. In \cite{PNAS}, we reported numerical results both with simulated data from the RW model and with certain real data often used as standard benchmarks for classifier performance.
In \cite{AoS}, we will develop a more mathematical treatment of many results we cite here and in \cite{PNAS}. The current article, logically second in the trilogy, develops an analysis of Ideal HCT which is both transparent and provides the key insights underlying our lengthy arguments in \cite{AoS}}. We also take the time to explain the notions of phase diagram and phase regions. We believe this paper will be helpful to readers who want to understand HCT and its performance, but who would be overwhelmed by the epsilontics of the analysis in \cite{AoS}. { The paper is organized as follows. Section \ref{sec-ideal} introduces a functional framework and several ideal quantities. These include the proxy classification error where Fisher's separation (SEP) \cite{Anderson} plays a key role, the ideal threshold as a proxy for the optimal threshold, and the ideal HCT as a proxy of the HCT. Section \ref{sec-HCT} introduces the main results on the asymptotic behavior of the HC threshold under the asymptotic RW model, and the focal point is the phase diagram. Section \ref{sec-clipping} outlines the basic idea behind the main results, followed by the proofs. Section \ref{sec-HCobjective} discusses the connection between the ideal threshold and the ideal HCT. Section \ref{sec-FDR} discusses the ideal behavior of Bonferroni threshold feature selection and FDR-controlling feature selection. Section \ref{sec-discussion} discusses the link between ideal HCT and ordinary HCT, the finite $p$ phase diagram, and other appearances of HC in recent literature. } \section{Sep functional and ideal threshold} \label{sec-ideal} \setcounter{equation}{0} Suppose $L$ is a fixed, {\it nonrandom} linear classifier, with decision boundary $L <> 0$. Will $L$ correctly classify a future realization $(Y,X)$ from the simple model of Section \ref{sec-simple-model}? There, $Y = \pm 1$ equiprobable and $X \sim N(Y\mu,I_p)$.
The misclassification probability can be written \begin{equation} \label{Relationship1} P\{ YL(X) < 0 | \mu\} = \Phi(- \frac{1}{2} Sep(L;\mu)), \end{equation} where $\Phi$ denotes the standard normal distribution function and $ Sep(L;\mu)$ measures the standardized interclass distance: \begin{eqnarray} \label{DefineSep1} Sep(L;\mu) &=& \frac{ E\{L(X) | Y=1\} - E\{L(X) | Y=-1\} }{SD(L(X))}\\ &=& \frac{ 2 \sum w(j) \mu(j) }{(\sum w(j)^2)^{1/2}} = \frac{ 2 \langle w, \mu\rangle }{\|w\|_2}. \nonumber \end{eqnarray} The {\it ideal linear classifier} $L_\mu$, with feature weights $w \propto \mu$ and decision rule $L_\mu <> 0$, implements the likelihood ratio test. It also maximizes $Sep$, since for every other linear classifier $L$, $Sep(L; \mu) \leq Sep(L_\mu; \mu) = 2 \|\mu\|_2$. \subsection{Certainty-equivalent heuristic} Threshold selection rules give {\it random} linear classifiers: the classifier weight vector $w$ is a {\it random} variable, because it depends on the $Z$-scores of the realized training sample. If $L_Z$ denotes a linear classifier constructed based on such a realized vector of $Z$-scores, then the misclassification error can be written as \begin{equation} \label{Relationship2} Err(L_Z,\mu) = \Phi(- \frac{1}{2} Sep(L_Z;\mu)); \end{equation} this is a random variable depending on $Z$ and on $\mu$. Heuristically, because there is a large number of coordinates, some statistical regularity appears, and we anticipate that random quantities can be replaced by expected values. We proceed as if \begin{equation} \label{SepHeuristic1} Sep(L_Z;\mu) \approx \frac{ 2 E_{Z|\mu} \langle w, \mu\rangle }{( E_{Z|\mu} w^2)^{1/2} } \end{equation} where the expectation is over the conditional distribution of $Z$ conditioned on $\mu$. Our next step derives from the fact that $\mu$ itself is random, having about $\epsilon \cdot p$ nonzero coordinates, in random locations.
Now, as $w(j) = \eta_t(Z(j))$, we write heuristically \[ E_{Z|\mu} \langle w, \mu\rangle \approx p \cdot \epsilon \cdot \mu_0 \cdot E \{ \eta_t(Z_1) | \mu_1=\mu_0 \}, \] while \[ E_{Z|\mu} \langle w,w \rangle \approx p \cdot \left( \epsilon \cdot E\{ \eta_t^2(Z_1) | \mu_1 = \mu_0 \} + (1-\epsilon) \cdot E\{ \eta_t^2(Z_1) | \mu_1 =0 \} \right) . \] \begin{definition} Let the threshold $t$ be fixed and chosen independently of the training set. In the $RW(\epsilon,\tau)$ model we use the following expressions for {\bf proxy separation} \[ \widetilde{Sep}(t; \epsilon,\tau) = \frac{2A}{\sqrt{B}}, \] where \[ A(t,\epsilon,\tau) = \epsilon \cdot \tau \cdot E \{ \eta_t( \tau+ W ) \} , \] \[ B(t,\epsilon,\tau) = \epsilon \cdot E \{ \eta_t^2( \tau+ W ) \} + (1-\epsilon) E \{ \eta_t^2( W ) \} , \] and $W$ denotes a standard normal random variable. By {\bf proxy classification error} we mean \[ \widetilde{Err}(t;\epsilon,\tau,p,n)= \Phi( - \frac{1}{2} \sqrt{ \frac{p}{n}} \cdot \widetilde{Sep}(t,\epsilon,\tau)). \] \end{definition} Normalizations are chosen here so that, in large samples \[ Sep(L_Z,\mu) \approx \sqrt{\frac{p}{n}} \cdot \widetilde{Sep}(t,\epsilon,\tau) . \] While ordinarily we expect averages to ``behave like expectations'' in large samples, we use the word {\it proxy} to remind us that there is a difference (presumably small). Software to compute these proxy expressions has been developed by the authors, and some numerical results using them were reported in \cite{PNAS}. Of course, the rationale for our interest in these proxy expressions is our heuristic understanding that they accurately describe the exact large-sample behavior of certain threshold selection schemes. This issue is settled in the affirmative, after considerable effort, in \cite{AoS}. \subsection{Certainty-equivalent threshold functionals} In general the best threshold to use in a given instance of the RW model depends on both $\epsilon$ and $\tau$.
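For the clipping rule the expectations in $A$ and $B$ reduce to normal tail probabilities, so $\widetilde{Sep}$ can be evaluated and maximized over $t$ on a grid. A numerical sketch (ours, with illustrative $\epsilon$ and $\tau$; not the authors' software):

```python
from statistics import NormalDist
import math

def Phi_bar(x):
    # standard normal upper tail probability P(W > x)
    return 1.0 - NormalDist().cdf(x)

# For clipping: E eta_t(tau + W)   = P(tau + W > t) - P(tau + W < -t)
#               E eta_t^2(tau + W) = P(|tau + W| > t),  E eta_t^2(W) = 2 P(W > t)
def proxy_sep(t, eps, tau):
    A = eps * tau * (Phi_bar(t - tau) - Phi_bar(t + tau))
    B = eps * (Phi_bar(t - tau) + Phi_bar(t + tau)) + (1 - eps) * 2 * Phi_bar(t)
    return 2 * A / math.sqrt(B)

# Grid approximation to the threshold maximizing the proxy separation
eps, tau = 0.01, 3.0
grid = [i / 100 for i in range(1, 801)]        # thresholds t in (0, 8]
t_best = max(grid, key=lambda t: proxy_sep(t, eps, tau))
print(round(t_best, 2))
```

With these illustrative values the maximizer falls near $t \approx 3$, i.e. at the feature strength $\tau$ itself; the best threshold indeed depends on both $\epsilon$ and $\tau$.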
It also depends on the specific realization of $\mu$ and even of $Z$. However, dependence on $\mu$ and $Z$ is simply ``noise'' that goes away in large samples, while the dependence on $\epsilon$ and $\tau$ remains. \begin{definition} The {\it ideal threshold functional} $T_{ideal}(\epsilon,\tau)$ maximizes the proxy separation \[ T_{ideal}(\epsilon,\tau) = \mbox{arg max}_t \widetilde{Sep}(t,\epsilon,\tau). \] \end{definition} Heuristically, $T_{ideal}$ represents a near-optimal threshold in all sufficiently large samples; it is what we ``ought'' to be attempting to use. \begin{definition} {\bf Folding.} \label{def-fold} The following concepts and notations will be used in connection with distributions of absolute values of random variables. \begin{itemize} \item The Half Normal distribution function $\Psi(t) = P \{ |N(0,1)| \leq t \}$. \item The noncentral Half-Normal distribution $\Psi_\tau(t) = P \{ |N(\tau,1)| \leq t \}$. \item Given a distribution function $F$, the {\it folded distribution} is $G(t) = F(t) - F(-t)$. The Half Normal is the folded version of the standard Normal, and the noncentral Half Normal is the folded version of a Normal with unit standard deviation and nonzero mean equal to the noncentrality parameter. \item Let $F_{\epsilon,\tau}$ denote the 2-point mixture \[ F_{\epsilon,\tau} (t) = (1-\epsilon) \Phi(t) + \epsilon \Phi(t-\tau); \] $G_{\epsilon,\tau}$ denotes the corresponding folded distribution: \[ G_{\epsilon,\tau} (t) = (1-\epsilon) \Psi_0(t) + \epsilon \Psi_\tau(t), \] \end{itemize} \end{definition} We now define an HCT functional representing the target that HC thresholding aims for. \begin{definition} Let $F$ be a distribution function which is not the standard normal $\Phi$. At such a distribution, we define the HCT functional by \[ T_{HC}(F) = \mbox{argmax}_{t > t_0} \frac{ \bar{G}(t) - \bar{\Psi}(t)}{\sqrt{G(t)\cdot \bar{G}(t)}} ; \] here $G$ is the folding of $F$, and $t_0$ is a fixed parameter of the HCT method (eg. 
$t_0 = \Phi^{-1}(0.1)$). The HC threshold in the $RW(\epsilon,\tau)$ model may be written, in an abuse of notation, \[ T_{HC}(\epsilon,\tau) \] meaning $T_{HC}(F_{\epsilon,\tau})$. \end{definition} Let $F_{n,p}$ denote the usual empirical distribution of the feature $Z$-scores $Z_i$. The HCT of Definition \ref{hct-def} can be written as $\hat{t}_{n,p}^{HC} = T_{HC}(F_{n,p})$. Let $F$ denote the expected value of $F_{n,p}$; then $T_{HC}(F)$ will be called the {\it ideal HC threshold}. Heuristically, we expect the usual sampling fluctuations and that \[ T_{HC}(F_{n,p}) \approx T_{HC}(F) \] with a discrepancy decaying as $p$ and $n$ increase. This issue is carefully considered in the companion paper \cite{AoS}, which shows that the empirical HC threshold in the ARW model indeed closely matches the ideal HC threshold. For comparison purposes, we considered two other threshold schemes. First, (ideal) False-Discovery Rate thresholding. For a threshold $t$, and parameters $(p,\epsilon,\tau)$, the expected number of useful features selected is \[ E(TP) (t;\epsilon,\tau,p ) = p \cdot \epsilon \cdot \bar{\Psi}_{\tau}(t) ; \] and the expected number of useless features selected is \[ E(FP) (t;\epsilon,\tau,p ) = p \cdot (1-\epsilon) \cdot \bar{\Psi}(t) . \] Let $TPR(t) = p^{-1} E(TP)(t)$ denote the expected {\it rate} of useful features above threshold and $FPR(t) = p^{-1} E(FP)(t)$ denote the expected {\it rate} of useless features above threshold. In analogy with our earlier heuristic, we define the proxy False Discovery Rate (FDR) \[ \widetilde{FDR}(t;\epsilon,\tau,p) = \frac{FPR(t)}{TPR(t) + FPR(t)} \] (The term ``proxy'' reminds us that \[ \frac{E (FP)(t)}{E(TP)(t) + E(FP)(t)} \neq E \frac{FP(t)}{TP(t) + FP(t)}, \] although for large $p$ the difference will often be small.) We define the FDRT-$\alpha$ functional by \[ T_{FDR,\alpha}(\epsilon,\tau) = \min \{ t : \widetilde{FDR} < \alpha,\quad t > t_0 \} .
\] Heuristically, this is the threshold that FDRT is `trying' to learn from noisy empirical data. We will also need the {\it proxy Local FDR}. \[ \widetilde{Lfdr}(t;\epsilon,\tau,p) = \frac{FPR'(t)}{TPR'(t) + FPR'(t)}. \] Here $FPR'$ denotes the derivative of $FPR$, which exists by smoothness of $\Psi_0$; similarly for $TPR'$ and $\Psi_\tau$. Intuitively, $ \widetilde{Lfdr}(t)$ denotes the expected fraction of useless features among those features having observed $Z$-scores near level $t$. Second, we considered Bonferroni-based thresholding. \[ T_{Bon} = \bar{\Phi}^{-1}(p^{-1}). \] This threshold is set at the level that would cause on average one false alarm in a set of $p$ null cases. In \cite[Figures 2-3]{PNAS}, we presented numerical calculations of all these functionals and their separation behavior in two cases. \begin{itemize} \item $p = 10,000$, and $\epsilon = .01$. \item $p = 10^6$ and $\epsilon = .0001$. \end{itemize} Although our calculations are exact numerical finite-$p$ calculations, we remark that they correspond to sparsity exponents $\beta = 1/2$ and $\beta = 2/3$, respectively. The figures show the following. \begin{itemize} \item There is a very close numerical approximation of the HCT to the ideal threshold, not just at large $\tau$ but even at quite small $\tau$, $2 < \tau < 3$. \item FDR and Bonferroni thresholds behave very differently from the ideal and from HC. \item The separation behavior of the HCT is nearly ideal. For the constant FDR rules, the separation behavior is close to ideal at some $\tau$ but becomes noticeably sub-ideal at other $\tau$. \item The false discovery rate behavior of HCT and Ideal thresholding depends on $\tau$. At small $\tau$, both rules tolerate a high FDR, while at large $\tau$, both rules obtain a small FDR. \item The missed detection rate of HCT and Ideal thresholding also depends on $\tau$. At small $\tau$, the missed detection rate is high, but noticeably less than 100\%.
At large $\tau$, the missed detection rate falls, but remains noticeably above $0\%$. In contrast, the MDR for FDR procedures is essentially 100\% for small $\tau$ and falls below that of HCT/ideal for large $\tau$. \end{itemize} These numerical examples illustrate the idealized behavior of different procedures. We can think of the HCT functional as the threshold which is being estimated by the actual HCT rule. On an actual dataset sampled from the underlying $F$, the HC threshold will behave differently, primarily due to stochastic fluctuations, since $F_{n,p} \approx F$ only approximately. Nevertheless, the close approximation of the HCT threshold to the ideal one is striking and, to us, compelling. \section{Behavior of ideal threshold, asymptotic RW model} \label{sec-HCT} \setcounter{equation}{0} We now study the ideal threshold in the asymptotic RW model of Definition \ref{def-ARW}. That is, we fix parameters $r$, $\beta$ in that model and study the choice of threshold $t$ maximizing class separation. We now make precise a structural fact about the ideal threshold, first observed informally in \cite{PNAS}. \begin{definition} {\bf ROC Curve.} The {\em feature detection receiver operating characteristic curve} (ROC) is the curve parameterized by $(FPR(t),TPR(t))$. The tangent to this curve at $t$ is \[ tan(t) = \frac{TPR'(t)}{FPR'(t)} \] and the secant is \[ sec(t) = \frac{TPR(t)}{FPR(t)}. \] Note that in the $RW(\epsilon,\tau)$ model, $TPR$, $FPR$, $tan$ and $sec$ all depend on $t$, $\epsilon$,$\tau$ and $p$, although we may, as here, indicate only dependence on $t$. \end{definition} \begin{theorem} {\bf Tangent-Secant Rule.} In the $ARW(r,\beta)$ model, we have \begin{equation} \label{SecTanRule} \frac{tan(T_{Ideal})}{1 + tan(T_{Ideal})} \sim \frac{1}{2} \cdot \frac{sec(T_{Ideal})}{1 + sec(T_{Ideal})}, \qquad p \rightarrow \infty.
\end{equation} Here $\epsilon = p^{-\beta}$, $\tau = \sqrt{2r \log(p)}$ and $n \sim c \log(p)^\gamma$, $p \rightarrow \infty$ as in Definition \ref{def-ARW}. \end{theorem} \begin{definition} {\bf Success Region.} The {\em{region of asymptotically successful ideal threshold feature selection}} in the $(\beta,r)$ plane is the interior of the subset where the ideal threshold choice $T_{ideal}(\epsilon,\tau)$ obeys \[ \widetilde{Err}(T_{ideal}(\epsilon,\tau);\epsilon,\tau,p,n) \rightarrow 0, \;\;\; p \rightarrow \infty ; \] here we are in the $ARW(r,\beta)$ model of Definition \ref{def-ARW}. \end{definition} The interesting range involves $(\beta,r) \in [0,1]^2$. The following function is important for our analysis, and has previously appeared in central roles in other (seemingly unrelated) problems; see Section \ref{sec-OtherHC}. \begin{equation} \label{eq-rho-def} \rho^*(\beta) = \left\{ \begin{array}{ll} 0, &\qquad 0 < \beta \leq 1/2, \\ \beta - 1/2, &\qquad 1/2 < \beta \leq 3/4, \\ (1 - \sqrt{1 - \beta})^2, & \qquad 3/4 < \beta < 1. \end{array} \right. \end{equation} As it turns out, it marks the boundary between success and failure for threshold feature selection. \begin{theorem} {\bf Existence of Phases.} The success region is precisely $r > \rho^*(\beta)$, $0 < \beta < 1$. In the interior of the complementary region $r < \rho^*(\beta)$, $1/2 < \beta < 1$, even the ideal threshold cannot send the proxy separation to infinity with increasing $(n,p)$. \end{theorem} \begin{definition} \label{def-regions} {\bf Regions I, II, III.} The Success Region can be split into three regions, referred to here and below as Regions I-III. The interiors of the regions are as follows: \begin{description} \item[I.] $\beta - 1/2 < r \leq \beta/3$ and $1/2 < \beta < 3/4$; $r > \rho^*(\beta)$. \item[II.] $\beta/3 < r \leq \beta$ and $1/2 < \beta <1$; $r > \rho^*(\beta)$. \item[III.] $\beta< r < 1$ and $1/2 < \beta <1$; $r > \rho^*(\beta)$.
\end{description} See Figure \ref{fig:phase}. \end{definition} \begin{figure}[h] \centering \includegraphics[height = 4 in, width = 5 in]{Phase5.pdf} \caption{Phase diagram. The curve $r = \rho^*(\beta)$ splits the phase space into the failure region and the success region, and the latter further splits into three different regions I, II, III. Numbers in the brackets show limits of $\widetilde{FDR}$ and $\widetilde{Lfdr}$ at the ideal HCT as in Theorem 4. } \label{fig:phase} \end{figure} In the asymptotic RW model, the optimal threshold must behave asymptotically like $\sqrt{2q \log(p)}$ for a certain $q = q(r,\beta)$. Surprisingly we need not have $q = r$. \begin{theorem} {\bf Formula for Ideal Threshold.} Under the Asymptotic RW model $ARW(r,\beta)$, with $r > \rho^*(\beta)$, the ideal threshold has the form $T_{ideal}(\epsilon,\tau) \sim \sqrt{2q^*\log(p)}$ where \begin{equation} \label{qformula} q^* = \left\{ \begin{array}{ll} 4r, &\qquad \mbox{Region I}, \\ \frac{(\beta + r)^2}{4r}, &\qquad \mbox{Region II,III}. \\ \end{array} \right. \end{equation} \end{theorem} Note in particular that in Regions I and II, $q^* > r$, and hence $T_{ideal}(\epsilon,\tau) > \tau$. Although the features truly have strength $\tau$, the threshold is best set {\it higher than} $\tau$. We now turn to FDR properties. The Tangent-Secant rule implies immediately \begin{equation} \label{eq-LFDR-FDR} \frac{Lfdr(T_{Ideal})}{(1 + FDR(T_{Ideal}))/2} \rightarrow 1, \qquad p \rightarrow \infty. \end{equation} Hence any result about FDR is tied to one about local FDR, and vice versa. \begin{theorem} Under the Asymptotic RW model $ARW(r,\beta)$, at the ideal threshold $T_{ideal}(\epsilon,\tau)$ proxy FDR obeys \begin{equation} \label{Relationship5} \widetilde{FDR}(T_{ideal},\epsilon,\tau) \rightarrow \left\{ \begin{array}{ll} 1, &\qquad \mbox{Region I} \\ \frac{\beta - r}{2r}, &\qquad \mbox{Region II} \qquad (\mbox{note $\beta/3 < r < \beta$}) \\ 0, &\qquad \mbox{Region III} \end{array} \right. 
\end{equation} as $p \rightarrow \infty$, and the proxy local FDR obeys \begin{equation} \label{Relationship6} \widetilde{Lfdr}(T_{ideal},\epsilon,\tau) \rightarrow \left\{ \begin{array}{ll} 1, &\qquad \mbox{Region I} \\ \frac{ r + \beta }{4r}, &\qquad \mbox{Region II} \qquad (\mbox{note $\beta/3 < r < \beta$}) \\ 1/2, &\qquad \mbox{Region III} \end{array} \right. \end{equation} as $p \rightarrow \infty$. \end{theorem} Several aspects of the above solution are of interest. \begin{itemize} \item {\it Threshold Elevation.} The threshold $\sqrt{2q^* \log(p)}$ is significantly higher than $\sqrt{2r\log(p)}$ in Regions I and II. Instead of looking for features at the amplitude they can be expected to have, we look for them at much higher amplitudes. \item {\it Fractional Harvesting.} Outside of Region III, we are selecting only a small fraction of the truly useful features. \item {\it False Discovery Rate.} Outside Region III, we actually have a very large false discovery rate, which is very close to $1$ in Region I. Surprisingly, {\it even though most of the selected features are useless}, we still correctly classify! \item {\it Training versus Test performance.} The quantity $\sqrt{q^*/r} = \min\{ \frac{\beta + r}{2r}, 2\}$ can be interpreted as a ratio: the strength of the useful features in training divided by the strength of those features in test. From (\ref{qformula}) we learn that, in Region I, the selected useful features perform about half as well in training as we might expect from their performance in test. \end{itemize} \section{Behavior of ideal clipping threshold} \label{sec-clipping} \setcounter{equation}{0} We now sketch some of the arguments involved in the full proof of the theorems stated above. In the RW model, it makes particular sense to use the clipping threshold function $\eta_t^{clip}$, since all nonzeros are known to have the same amplitude. The ideal clipping threshold is also very easy to analyze heuristically.
But it turns out that all the statements in Theorems 1-3 are equally valid for all three types of threshold functions, so we prefer to explain the derivations using clipping. \subsection{$\widetilde{Sep}$ in terms of true and false discoveries} In the RW model, we can express the components of the proxy separation very simply when using the clipping threshold: \[ A_{clip}(t,\epsilon,\tau) = \epsilon \cdot \tau \cdot E \mbox{sgn}(\tau + W) 1_{\{ |\tau + W | > t \}}. \] \[ B_{clip}(t,\epsilon,\tau) = \epsilon\cdot E 1_{\{ |\tau + W | > t \}} + (1-\epsilon) \cdot E 1_{\{ | W | > t \}} \] where $W$ denotes an $N(0,1)$ random variable. Recall the definitions of useful selections TP and useless selections FP; we also must count {\it Inverted Detections}, for the case where $\mu_i > 0$ but $\eta_t^{clip}(Z_i) < 0$. Put \[ E(ID)(t;\epsilon,\tau,p) = \epsilon \cdot p \cdot \Phi(-t-\tau), \] where again $\Phi$ denotes the standard normal distribution function, and define the {\it inverted detection rate} by $IDR = p^{-1} E(ID)$. Then \[ A_{clip}(t;\epsilon,\tau) = \tau \cdot \left( TPR(t) - 2\, IDR(t) \right) . \] \[ B_{clip}(t;\epsilon,\tau) = TPR(t) + FPR(t) . \] We arrive at an identity for $\widetilde{Sep}$ in the case of clipping: \[ \widetilde{Sep}(t ; \epsilon,\tau) = \frac{(2 \sqrt{p} \tau) \cdot \left(TPR(t) - 2IDR(t) \right) }{\sqrt{ TPR(t) + FPR(t)}}. \] We now explain Theorem 1, the Tangent-Secant rule. Consider the alternate proxy \[ \overline{Sep}(t) = \frac{ (2 \tau) \cdot TPR(t) }{\sqrt{ TPR(t) + FPR(t)}} = \frac{2 A(t) }{B^{1/2}(t) }, \mbox{ say}; \] i.e.\ drop the term $IDR$. It turns out that for the alternate proxy, the Tangent-Secant rule and the resulting FDR-Lfdr balance equation are exact identities. \begin{lemma} Let $\epsilon > 0$ and $\tau > 0$.
The threshold $t_{alt}$ maximizing $ \overline{Sep}(t) $ as a function of $t$ satisfies the Tangent-Secant rule as an exact identity; at this threshold we have \begin{equation} \label{alt-fdr-lfdr} Lfdr(t_{alt}) = \frac{1}{2} \left( 1 + FDR(t_{alt}) \right). \end{equation} \end{lemma} {\bf Proof.} Now, $A$ and $B$ are both smooth functions of $t$, so at the $t$ optimizing $A B^{- 1/2}$ we have \[ 0 = B^{-1/2} \left( A' - \frac{1}{2} \frac{A}{B} B' \right) = \frac{B'}{B^{1/2}} \cdot \left( \frac{A'}{B'} - \frac{1}{2} \frac{A}{B} \right). \] By inspection $B'(t) < 0$ for every $t > 0$. Hence, \begin{equation} \label{eq-AB-identity} \frac{A'}{\tau B'} = \frac{1}{2} \frac{A}{\tau B} . \end{equation} The Tangent-Secant Rule follows. We now remark that \[ FDR(t) = 1 - \frac{ TPR(t) }{ TPR(t) + FPR(t)} = 1 - \frac{A}{\tau \cdot B} , \] and \[ Lfdr(t) = 1 - \frac{ TPR'(t) }{ TPR'(t) + FPR'(t)} = 1 - \frac{A'}{\tau \cdot B'} . \] Display (\ref{alt-fdr-lfdr}) follows. \hfill $\Box$

The full proof of Theorem 1, which we omit, simply shows that the discrepancy caused by $ \overline{Sep} \neq \widetilde{Sep}$ has an asymptotically negligible effect; the two objectives have very similar maximizers. \subsection{Analysis in the asymptotic RW model} We now invoke the $ARW(r,\beta)$ model: $\epsilon = p^{-\beta}$, $\tau = \sqrt{2 r \log(p)}$, $n \sim c \log(p)^\gamma$ as $p \rightarrow \infty$. Let $t_q(p) = \sqrt{2 q \log(p)}$. The classical Mills' ratio can be written in terms of the Normal survival function as: \[ \bar{\Phi}(t_q(p)) \sim \frac{p^{-q}}{\sqrt{2\pi}\, t_q(p)}, \qquad p \rightarrow \infty. \] Correspondingly, the half-Normal obeys: \begin{equation} \label{mills} \bar{\Psi}_0(t_q(p)) \sim \frac{2\, p^{-q}}{\sqrt{2\pi}\, t_q(p)}, \qquad p \rightarrow \infty. \end{equation} We also need a notation for poly-log terms.
\begin{definition} \label{def-PL} Any occurrence of the symbol $\mbox{\sc PL}(p)$ denotes a term which is $O(\log(p)^\zeta)$ and $\Omega(\log(p)^{-\zeta})$ as $p \rightarrow \infty$ for some $\zeta > 0$. Different occurrences of this symbol may stand for different such terms. \end{definition} In particular, we may well have $T_1(p) = \mbox{\sc PL}(p)$, $T_2(p) = \mbox{\sc PL}(p)$, as $p \rightarrow \infty$, and yet $\frac{T_1(p)}{T_2(p)} \not \rightarrow 1$ as $p \rightarrow \infty$. However, certainly $\frac{T_1(p)}{T_2(p)} = \mbox{\sc PL}(p)$, $p \rightarrow \infty$. The following Lemma exposes the main phenomena driving Theorems 1-4. It follows by simple algebra and several uses of Mills' Ratio (\ref{mills}) in the convenient form $ \bar{\Psi}_0(t_q(p)) = \mbox{\sc PL}(p) \cdot p^{-q}$. \begin{lemma} \label{MainLemma} In the asymptotic RW model $ARW(r,\beta)$, we have: \begin{enumerate} \item {\it Quasi power-law for useful feature discoveries}: \begin{equation} E(TP)(t_q(p),\epsilon,\tau) = \mbox{\sc PL}(p) \cdot p^{\delta(q;\beta,r)} , \qquad p \rightarrow \infty, \end{equation} where the useful feature discovery exponent $\delta$ obeys \[ \delta(q; \beta,r) \equiv \left\{ \begin{array}{ll} {1 - \beta}, &\qquad 0 < q \leq r, \\ {1 - \beta - (\sqrt{q} - \sqrt{r})^2}, &\qquad r < q < 1. \end{array} \right. \] \item {\it Quasi power-law for useless feature discoveries}: \begin{equation} E(FP)(t_q(p),\epsilon,\tau) = \mbox{\sc PL}(p) \cdot p^{1-q}, \qquad p \rightarrow \infty, \end{equation} \item {\it Negligibility of inverted detections}: \begin{equation} E(ID)(t_q(p),\epsilon,\tau) = o( E(TP) (t_q(p),\epsilon,\tau) ) , \qquad p \rightarrow \infty.
\end{equation} \end{enumerate} \end{lemma} As an immediate corollary, under $ARW(r,\beta)$, we have: \begin{equation}\label{eq-asysep} \widetilde{Sep}(t_q(p),\epsilon,\tau) \cdot \frac{1}{2\sqrt{n}} = \mbox{\sc PL}(p) \cdot \frac{p^{\delta(q;\beta,r)}}{\sqrt{p^{\delta(q;\beta,r)} + p^{1 - q}}} , \qquad p \rightarrow \infty. \end{equation} On the right side of this display, the poly-log term is relatively unimportant. The driving effect is the power-law behavior of the fraction. The following Lemma contains the core idea behind the appearance of $\rho^*$ in Theorems 2 and 3, and the distinction between Region I and Regions II, III. \begin{lemma} \label{lem-ratiolemma} Let $\beta \in (\frac{1}{2},1)$ and $r \in (0,1)$. Let $\gamma(q;r,\beta)$ denote the exponent of the rate at which \[ \frac{p^{\delta(q;\beta,r)}}{\sqrt{p^{\delta(q;\beta,r)} + p^{1 - q}}} \] grows as $p \rightarrow \infty$, for fixed $q$, $r$, and $\beta$. Then $\gamma(q;r,\beta) > 0$ for some $q$ if and only if $r > \rho^*(\beta)$. A choice of $q$ maximizing this ratio is given by (\ref{qformula}). \end{lemma} {\bf Proof Sketch.} By inspection of $\delta$, it is enough to consider $q \leq 1$. The ratio grows like $p^{\gamma}$, where \[ \gamma(q;r,\beta) = \delta(q;\beta,r) - \max((1-q),\delta(q;\beta,r))/2 ; \] it tends to infinity precisely when there exists $q \in [0,1]$ obeying $\gamma(q;r,\beta) > 0$. Let $q^*(r,\beta)$ denote the value of $q$ maximizing the rate of separation: \[ q^*(r,\beta) = \mbox{argmax}_{q \in [0,1]} \gamma(q;r,\beta) ; \] in the event of ties, we take the smallest value of $q$. Let's define $\rho^*(\beta)$ without recourse to the earlier formula (\ref{eq-rho-def}) but instead by the functional role claimed for it by this lemma: \begin{equation} \label{alt-rho-def} \rho^*(\beta) = \inf \{ r : \gamma(q^*(r,\beta);r,\beta) > 0, r > 0 \}. \end{equation} We will derive the earlier formula (\ref{eq-rho-def}) from this.
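Before working through the case analysis, the maximin problem can be checked numerically. The following sketch (ours, purely illustrative and not part of the proof; plain Python assumed as the language of convenience) maximizes $\gamma(q;r,\beta)$ over a grid of $q$-values and compares the result against $\rho^*(\beta)$ from (\ref{eq-rho-def}) and the formula (\ref{qformula}):

```python
import math

def delta(q, r, b):
    # useful-feature discovery exponent delta(q; beta, r)
    return 1 - b if q <= r else 1 - b - (math.sqrt(q) - math.sqrt(r)) ** 2

def gamma(q, r, b):
    # growth exponent of p^delta / sqrt(p^delta + p^(1-q))
    d = delta(q, r, b)
    return d - max(1 - q, d) / 2

def rho_star(b):
    # claimed phase boundary (eq-rho-def)
    if b <= 0.5:
        return 0.0
    if b <= 0.75:
        return b - 0.5
    return (1 - math.sqrt(1 - b)) ** 2

def q_star(r, b):
    # claimed maximizer (qformula): Region I versus Regions II, III
    return 4 * r if r <= b / 3 else (b + r) ** 2 / (4 * r)

grid = [i / 10000 for i in range(10001)]
for b, r in [(0.6, 0.15), (0.6, 0.3), (0.7, 0.8), (0.7, 0.1)]:
    g_max = max(gamma(q, r, b) for q in grid)
    # a positive separation exponent is achievable exactly when r > rho*(beta)
    assert (g_max > 1e-6) == (r > rho_star(b) + 1e-6)
    if r > rho_star(b) + 1e-6:
        # the closed-form q* attains the maximal exponent
        assert abs(g_max - gamma(q_star(r, b), r, b)) < 1e-3
```

The four test points fall in Regions I, II, III and the failure region, respectively.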
Now $\gamma = \min(\gamma_1,\gamma_2)$ where \[ \gamma_1(q) = \delta(q;\beta,r) - (1-q)/2; \qquad \mbox{ and } \gamma_2(q) = \delta(q;\beta,r) /2 . \] We have \begin{equation} \label{eq-max-rep} \gamma(q^*(r,\beta);r,\beta) = \max_{q \in [0,1]} \min_{i=1,2} \gamma_i(q;r,\beta) . \end{equation} In dealing with this maximin, two special choices of $q$ will recur below. \begin{itemize} \item $q_1$: Viewed as a function of $q$, $\gamma_1$ is maximized on the interval $[r,1]$ (use calculus!) at $q_1(r,\beta) \equiv 4r$, and is monotone on either side of the maximum. \item $q_2$: On the other hand, $\gamma_2$ is monotone decreasing as a function of $q$ on $[r,1]$. Hence the maximizing value of $q$ in (\ref{eq-max-rep}) {\it over the set of $q$-values where $ \gamma_2$ achieves the minimum} will occur at the minimal value of $q$ achieving the minimum, i.e.\ at the solution to \begin{equation} \label{eq-qdef} (1-q) = \delta(q;\beta,r) . \end{equation} Equation (\ref{eq-qdef}) is satisfied uniquely on $[r,1]$ by $q_2(r,\beta) = (\beta+r)^2/4r$. \end{itemize} The behavior of $\min(\gamma_1 ,\gamma_2)$ varies by cases; see Table \ref{table-maximin}. To see how the table was derived, note that \[ \gamma_1 < \gamma_2 \mbox{ iff } \delta < 1-q. \] Consider the first and second rows. For $q > r$, $\delta = 1 - q - \beta -r + 2\sqrt{rq}$. Hence $\delta < 1-q$ on $[r,1]$ iff $- \beta -r + 2\sqrt{rq} < 0$ iff $ q < q_2$. Consider the third and fourth rows. For $q < r$, $\delta = 1-\beta$. Hence $\delta < 1-q$ on $[0,r]$ iff $\beta > q$.
\begin{table}[h] \begin{center} \begin{tabular}{|l|l|} \hline Range & Minimizing $\gamma_i$\\ \hline $r \leq q \leq q_2$ & $\gamma_1$ \\ $\max(r,q_2) < q $ & $\gamma_2$ \\ $\beta < q < r $ & $\gamma_2$ \\ $q < \min(r,\beta) $ & $\gamma_1$ \\ \hline \end{tabular} \caption{Minimizing $\gamma_i$ in (\ref{eq-max-rep})} \end{center} \label{table-maximin} \end{table} Derivatives $\dot{\gamma}_i = \frac{\partial}{\partial q} \gamma_i$, $i =1,2$, are laid out in Table \ref{table-gammadot}. \begin{table}[h] \begin{center} \begin{tabular}{|l|l|l|} \hline & $q < r$& $q > r$ \\ \hline $\dot{\gamma}_1$ &$\frac{1}{2}$ & $-\frac{1}{2} + \sqrt{r/q}$ \\ $\dot{\gamma}_2$ & 0 & $-\frac{1}{2} + \frac{1}{2} \sqrt{r/q}$ \\ \hline \end{tabular} \caption{Derivatives of $\gamma_i$} \end{center} \label{table-gammadot} \end{table} Table \ref{table-maxq} presents results of formally combining the two previous tables. There are four different cases, depending on the ordering of $q_1$, $q_2$, $\beta$ and $r$. In only one case does the above information leave $q^*$ undefined. (We note that this is a purely formal calculation; Lemma \ref{lem-algebra}, Display (\ref{eq-r-q2}) below, shows that {\it rows 2 and 4 never occur}.) To see how Table \ref{table-maxq} is derived, consider the first row. Using the derivative table above, we see that $\min(\gamma_1,\gamma_2)$ is increasing on $[0,\beta]$, constant on $[\beta,r]$ and decreasing on $[r,1]$. Hence the maximin value is achieved at any $q \in [\beta,r]$. For row 2, $\min(\gamma_1,\gamma_2)$ is increasing on $[0,\beta]$, constant on $[\beta,r]$, increasing on $[r,\min(q_1,q_2)]$ and decreasing on $[\min(q_1,q_2),1]$. For row 3, $\min(\gamma_1,\gamma_2)$ is increasing on $[0,r]$ and on $[r,\min(q_1,q_2)]$, and monotone decreasing on $[\min(q_1,q_2),1]$.
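The dichotomy driving both tables, $\gamma_1 < \gamma_2$ iff $\delta < 1-q$, together with its resolution ($q < q_2$ when $q > r$; $q < \beta$ when $q < r$), is easy to confirm numerically. A small sketch (ours, illustrative only, in Python):

```python
import math

def delta(q, r, b):
    return 1 - b if q <= r else 1 - b - (math.sqrt(q) - math.sqrt(r)) ** 2

def gamma1(q, r, b):
    return delta(q, r, b) - (1 - q) / 2

def gamma2(q, r, b):
    return delta(q, r, b) / 2

b = 0.6
for r in (0.2, 0.3, 0.8):
    q2 = (b + r) ** 2 / (4 * r)
    for i in range(1, 1000):
        q = i / 1000
        if min(abs(q - r), abs(q - q2), abs(q - b)) < 2e-3:
            continue  # skip the transition points themselves
        smaller = gamma1(q, r, b) < gamma2(q, r, b)
        if q > r:
            assert smaller == (q < q2)   # gamma_1 is the minimum iff q < q_2
        else:
            assert smaller == (q < b)    # gamma_1 is the minimum iff q < beta
```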
\begin{table}[h] \begin{center} \begin{tabular}{|l|l|l|l|} \hline Case & Minimizing $\gamma_i$ & Maximin $q$ & Maximin value \\ \hline $\beta < r$, $q_2 < r$ & $ \left\{ \begin{array}{ll} \gamma_1 & (0,\beta) \\ \gamma_2 & (\beta,1) \\ \end{array} \right. $ & $q^* \in[\beta,r]$ & $\gamma_1(\beta), \gamma_2(q_2)$ \\ \hline $\beta < r$, $q_2 > r$ & $ \left\{ \begin{array}{ll} \gamma_1 & (0,\beta) \\ \gamma_2 & (\beta,r) \\ \gamma_1 & (r,q_2) \\ \gamma_2 & (q_2,1) \\ \end{array} \right. $ & $q^* = \min(q_1,q_2)$ & $\gamma_1(q^*)$ \\ \hline $\beta > r$, $q_2 > r$ & $ \left\{ \begin{array}{ll} \gamma_1 & (0,r) \\ \gamma_1 & (r,q_2) \\ \gamma_2 & (q_2,1) \\ \end{array} \right. $ & $q^* = \min(q_1,q_2)$ & $\gamma_1(q^*)$ \\ \hline $\beta > r$, $q_2 < r$ & $ \left\{ \begin{array}{ll} \gamma_1 & (0,r) \\ \gamma_2 & (r,1) \\ \end{array} \right. $ & $q^* =r$ & $\gamma_1(r)$ \\ \hline \end{tabular} \caption{Maximin Behavior in (\ref{eq-max-rep})} \end{center} \label{table-maxq} \end{table} We are trying to find circumstances where $\gamma \leq 0$. In the above table, we remarked that the hypotheses of rows 2 and 4 can never occur. We can see that in row 1, $\gamma_1(\beta; r,\beta) > 0$ for $\beta \in (0,1)$, $r > \beta$. This leaves only row 3 where we might have $\gamma \leq 0$; in that case either $q^* = q_1$ or $q^*=q_2$. Writing out explicitly, \[ \gamma_1(q;r,\beta) = 1 - \beta - ( \sqrt{q} - \sqrt{r})_+^2 - (1-q)/2 . \] \begin{itemize} \item Case $q^* = q_1$. \[ \gamma_1(q_1;r,\beta) = 1/2 - \beta + r . \] Hence $\gamma_1(q_1;r,\beta) = 0$ along $r = \beta - 1/2$, and $\gamma_1(q_1;r,\beta) < 0$ for $r < \beta -1/2$. Consider $1/2 < \beta < 3/4$. In this range, Lemma \ref{lem-algebra} shows $r < q_1( \beta -1/2,\beta) < q_2(\beta-1/2,\beta)$. Hence $q^*(r,\beta) = q_1(r,\beta)$ for $r \leq \beta - 1/2$, $\beta \in (1/2,3/4)$. We conclude that \begin{equation} \label{rho-formula-1} \rho^*(\beta) = \beta - 1/2, \qquad 1/2 \leq \beta \leq 3/4.
\end{equation} \item Case $q^* = q_2$. \[ \gamma_1(q_2;r,\beta) = 1/2 - (\beta+r)^2/8r. \] We have $\gamma_1(q_2;r,\beta) = 0$ along $r = (1 - \sqrt{1-\beta})^2 $, and $\gamma_1(q_2;r,\beta) < 0$ for $0 < r < (1 - \sqrt{1-\beta})^2 $. Consider $3/4 < \beta < 1$. In this range, Lemma \ref{lem-algebra} shows $r < q_2((1 - \sqrt{1-\beta})^2,\beta) < q_1((1 - \sqrt{1-\beta})^2,\beta)$. We conclude that \begin{equation} \label{rho-formula-2} \rho^*(\beta) = (1 - \sqrt{1-\beta})^2, \qquad 3/4 \leq \beta < 1. \end{equation} \end{itemize} Together (\ref{alt-rho-def}), (\ref{rho-formula-1}), and (\ref{rho-formula-2}) establish that the formula (\ref{eq-rho-def}) has the properties implied by the Lemma. To complete the Lemma, we need to validate formula (\ref{qformula}). This follows from rows 1 and 3 of Table \ref{table-maxq} and Lemma \ref{lem-algebra}. \hfill $\Box$

\begin{lemma} \label{lem-algebra} Let $q_1(r,\beta) = 4r$ and $q_2(r,\beta) = (\beta+r)^2/4r$ just as in the previous lemma. For $0 < \beta < 1$ we have \begin{eqnarray} q_1 < q_2 & \mbox{iff} & r < \beta /3 \nonumber \\ q_2 < 1 & \mbox{iff} & r > (1 - \sqrt{1-\beta})^2 \nonumber \\ r < q_2 & \mbox{iff} & r < \beta \label{eq-r-q2} \end{eqnarray} \end{lemma} {\bf Proof.} School algebra. \hfill $\Box$

Lemmas \ref{lem-ratiolemma} and \ref{lem-algebra} show that (\ref{qformula}) gives us {\it one choice} of threshold maximizing the rate at which $\widetilde{Sep}$ tends to infinity; is it the {\it only choice}? Except in the case $r > \beta$ and $q_2 < r$, this is indeed the only choice. As Table \ref{table-maxq} shows, in the case $r > \beta$ and $q_2 < r$, {\it any} $q \in [\beta,r]$ optimizes the {\it rate} of separation. It turns out that in that case, our formula $q^*$ not only maximizes the rate of separation, it correctly describes the leading-order asymptotic behavior of $T_{Ideal}$.
The key point is the Tangent-Secant Formula, which picks out from among all $q \in [\beta,r]$ uniquely $q_2$. This is shown by the next two lemmas, which thereby complete the proof of Theorem 3. \begin{lemma} Set $\gamma_0(q;r,\beta) = -\beta - r + 2 \sqrt{rq} $, for $q \in (0,1)$. In the $ARW(r,\beta)$ model, consider the threshold $t_q(p)$. Suppose that $q > r$. Then \begin{equation} \label{eq-TP2FP} \frac{TPR(t_q(p))}{FPR(t_q(p))} \sim p^{\gamma_0(q;r,\beta)} \cdot \frac{\sqrt{q}}{2\sqrt{q}-2\sqrt{r}}, \qquad p \rightarrow \infty. \end{equation} Suppose that $q < r$. Then \begin{equation} \label{eq-TP2FPa} \frac{TPR(t_q(p))}{FPR(t_q(p))} \sim p^{q-\beta} \cdot \sqrt{\frac{\pi}{2}} \cdot t_q(p), \qquad p \rightarrow \infty. \end{equation} Suppose that $q \neq r$. Then \begin{equation} \label{eq-LTP2LFP} \frac{TPR'(t_q(p))}{FPR'(t_q(p))} = \frac{1}{2} \cdot p^{\gamma_0(q;r,\beta)} . \end{equation} \end{lemma} \noindent {\bf Proof.} Simple manipulations with Mills' ratio, this time {\it not} simply grouping polylog terms together with the symbol $\mbox{\sc PL}$, give that, for the threshold $t_q(p)$ under the ARW model, if $q > 0$, \[ FPR(t_q(p)) \sim \sqrt{\frac{2}{\pi}} \cdot \frac{p^{-q}}{ \sqrt{2q \log(p)}}, \qquad p \rightarrow \infty. \] If $q > r$, \[ TPR(t_q(p)) \sim \sqrt{\frac{1}{2\pi}} \cdot \frac{p^{-\beta - (\sqrt{q}-\sqrt{r})^2}}{ (\sqrt{q} - \sqrt{r}) \sqrt{ 2\log(p)}}. \] Display (\ref{eq-TP2FP}) follows. If $0 < q < r$, \[ TPR(t_q(p)) \sim \epsilon, \qquad p \rightarrow \infty. \] Display (\ref{eq-TP2FPa}) follows. If $q \neq r$ we have the exact identities: \[ TPR'(t_q(p)) = \sqrt{\frac{1}{2\pi}} \cdot p^{-\beta - (\sqrt{q} - \sqrt{r})^2 } , \qquad FPR'(t_q(p)) = \sqrt{\frac{2}{\pi}} \cdot p^{-q}. \] Display (\ref{eq-LTP2LFP}) follows. \hfill $\Box$

\begin{lemma} $q_2(r,\beta)$ is the unique solution of $\gamma_0(q;r,\beta) = 0$. Suppose $r > \beta$ and $\beta < q_2 < r$. Then
\begin{equation} \label{eq-ideal-lim} T_{Ideal} / t_{q_2}(p) \rightarrow 1, \qquad p \rightarrow \infty. \end{equation} \end{lemma} \noindent {\bf Proof.} By inspection, for $q < q_2$, $\gamma_0 < 0$, while for $q > q_2$, $\gamma_0 > 0$. So from (\ref{eq-LTP2LFP}), $Lfdr(t_q(p))$ tends to $1$ or $0$ depending on $q < q_2$ or $q > q_2$. Fix $\eta > 0$. If $T_{Ideal} < t_{q_2 - \eta}(p)$ infinitely often as $p \rightarrow \infty$, then $1$ would be a cluster point of $Lfdr(T_{Ideal})$. Similarly, if $T_{Ideal} > t_{q_2 + \eta}(p)$ infinitely often as $p \rightarrow \infty$, then $0$ would be a cluster point of $Lfdr(T_{Ideal})$. From $q_2 > \beta$ and (\ref{eq-TP2FPa}) we know that $FDR(T_{Ideal}) \rightarrow 0$. From the Tangent-Secant rule we know that $Lfdr(T_{Ideal}) \rightarrow 1/2$. (\ref{eq-ideal-lim}) follows. \hfill $\Box$

\subsection{FDR/Lfdr properties of ideal threshold} We now turn to Theorem 4. We begin with an important observation: \begin{quotation} {\it The asymptotic $T_{Ideal} = t_{q^*}(p)\cdot (1+o(1))$ is simply too crude to determine the FDR and Lfdr properties of $T_{Ideal}$; it is necessary to consider the second-order effects implicit in the $(1+o(1))$ term. For this, the Tangent-Secant formula is essential.} \end{quotation} Indeed (\ref{eq-LTP2LFP}) shows that the only possibilities for the limiting local FDR of a threshold of the {\it exact} form $t_q(p)$ are $0, 1/2, 1$. The actual local FDR of $T_{Ideal}$ spans a continuum in $[0,1]$, due to the fact that small perturbations of $t_q(p)(1+o(1))$ implicit in the $o(1)$ can cause a change in the local FDR. To understand this, for $q \neq r$ and $s \in (0,\infty)$, put \[ \tilde{q}(q,r,s,p) = \left( r^{1/2} \pm \sqrt{ (q^{1/2} - r^{1/2})^2 - \log(s)/\log(p)} \right)^2 , \] where the sign of $\pm$ is $+$ if $q > r$. Clearly, $\tilde{q}$ is well-defined for all sufficiently large $p$. As an example, $\tilde{q}(q,r,1,p) = q$.
The peculiar definition ensures that \[ \phi(t_{\tilde{q}}(p) - t_r(p)) = \phi(t_q(p)-t_r(p)) \cdot s. \] By simple algebra one can show \begin{lemma} Let $q$, $r$, and $s$ be fixed. With $\tilde{q} = \tilde{q}(q,r,s,p)$, \[ \frac{ t_{\tilde{q}}(p)}{t_q(p)} \rightarrow 1, \qquad p \rightarrow \infty, \] and \[ \frac{\phi( t_{\tilde{q}}(p))}{\phi(t_q(p))} \rightarrow 1, \qquad p \rightarrow \infty. \] \end{lemma} Let's put for short $F_s = FPR(t_{\tilde{q}(q,r,s,p)}(p))$ and $T_s = TPR(t_{\tilde{q}(q,r,s,p)}(p))$. Then, the last few displays show \[ T'_s = s \cdot T'_1 , \qquad F'_s \sim F'_1, \qquad p \rightarrow \infty, \] \[ T_s \sim s \cdot T_1 , \qquad F_s \sim F_1, \qquad p \rightarrow \infty. \] Hence \[ FDR(t_{\tilde{q}(q,r,s,p)}(p)) = \frac{F_s}{T_s + F_s} \sim \frac{F_1}{s \cdot T_1 + F_1} , \qquad p \rightarrow \infty ; \] \[ Lfdr(t_{\tilde{q}(q,r,s,p)}(p)) = \frac{F'_s}{T'_s + F'_s} \sim \frac{F'_1}{s \cdot T'_1 + F'_1}, \qquad p \rightarrow \infty. \] Choosing $s$ appropriately, we can therefore obtain a perturbed threshold whose Lfdr and FDR are perturbed accordingly. In fact there is a unique choice of $s$ needed to ensure the Tangent-Secant formula. \begin{lemma} For given values $F_1$, $F'_1$, $T_1 \neq 0 $, $T'_1 \neq 0$, put \[ s^* = \frac{F'_1}{T'_1} - 2 \frac{F_1}{T_1}. \] This choice of $s$ obeys the Tangent-Secant rule: \[ \frac{F'_1}{s^* \cdot T'_1 + F'_1} = \frac{1}{2} \left( 1 + \frac{F_1}{s^* \cdot T_1 + F_1} \right). \] \end{lemma} To use this recall (\ref{eq-TP2FP})-(\ref{eq-TP2FPa})-(\ref{eq-LTP2LFP}). These formulas give expressions for $T_1/F_1$ and $T'_1/F'_1$. Plugging in $q=q^*$ we get \begin{corollary} We have \[ T_{Ideal} \sim t_{\tilde{q}(q^*,r,s^*,p)}(p) , \] where $s^*$ is obtained by setting $T_1 = (\sqrt{q^*}-\sqrt{r})^{-1}$, $F_1 = 2/\sqrt{q^*}$, $T'_1=1$ and $F'_1= 2$. Moreover, if $\beta/3 < r < \beta$, \[ FDR(T_{Ideal}) \sim \frac{\beta-r}{2r}, \qquad Lfdr(T_{Ideal}) \sim \frac{r+\beta}{4r} .
\] \end{corollary} \section{Connection of HC objective with $\widetilde{Sep}$} \label{sec-HCobjective} \setcounter{equation}{0} Let $F = F_{\epsilon,\tau}$ be the two-point mixture of Definition \ref{def-fold} and $G = G_{\epsilon,\tau}$ the corresponding folded distribution. Then in the asymptotic RW model we have, for $t = t_q(p)$: \begin{eqnarray} HC(t; F_{\epsilon,\tau}) &=& \frac{G(t) - \Psi_0(t)}{\sqrt{G(t)\bar{G}(t)}} \nonumber \\ &=& \frac{\epsilon (\Psi_\tau - \Psi_0)(t)}{[((1-\epsilon)\Psi_0 + \epsilon \Psi_\tau)((1-\epsilon){\bar{\Psi}}_0 + \epsilon {\bar{\Psi}}_\tau)]^{1/2}} \nonumber \\ &\sim& \frac{\epsilon (\Psi_\tau - \Psi_0)(t)}{[((1-\epsilon){\bar{\Psi}}_0 + \epsilon {\bar{\Psi}}_\tau)]^{1/2}} \label{sim1} \\ & \propto & \frac{TPR(t) - \frac{\epsilon}{1-\epsilon} FPR(t) }{[FPR(t)+ TPR(t) ]^{1/2}} \nonumber \\ & \sim & \frac{TPR(t) }{[ TPR(t) + FPR(t)]^{1/2}} \label{sim2} \\ &\sim & \widetilde{Sep}(t;\epsilon,\tau), \qquad p \rightarrow \infty . \label{sim3} \end{eqnarray} Step (\ref{sim1}) follows from $G_{\epsilon,\tau}(t_q(p)) \rightarrow 1$ as $p \rightarrow \infty$. Step (\ref{sim2}) follows from $\epsilon FPR(t_q(p)) = o(TPR(t_q(p)))$ as $p \rightarrow \infty$. Step (\ref{sim3}) follows from $IDR(t_q(p)) = o(TPR(t_q(p)))$ as $p \rightarrow \infty$. This derivation can be made rigorous when $q = q^*(r,\beta)$, where $q^*$ is as announced in Theorem 3. With extra work, not shown here, one obtains: \begin{theorem} The statements made for the ideal threshold in Theorems 1-3 are equally valid for the ideal HC threshold. \end{theorem} \section{Suboptimality of phase diagram for other methods} \label{sec-FDR} \setcounter{equation}{0} Setting the threshold by control of the False Discovery Rate (FDR thresholding, or FDRT) is a popular approach. Another approach, more conservative and more classical, and probably even more popular, is the Bonferroni method, which controls the expected total number of false features selected.
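To fix ideas, both recipes are easy to write down concretely. A minimal numerical sketch (ours, illustrative only; Python's standard library assumed) of the Bonferroni threshold $-\Phi^{-1}(1/p)$ and of a threshold at exponent $q_2$, for an example Region II point:

```python
import math
from statistics import NormalDist

p = 10 ** 6
log_p = math.log(p)

# Bonferroni threshold: -Phi^{-1}(1/p); asymptotically t_1(p) = sqrt(2 log p)
t_bonf = -NormalDist().inv_cdf(1 / p)
t_1 = math.sqrt(2 * log_p)

# threshold at exponent q_2 = (beta + r)^2 / (4 r), an example Region II point
beta, r = 0.6, 0.3
q2 = (beta + r) ** 2 / (4 * r)
t_q2 = math.sqrt(2 * q2 * log_p)

print(round(t_bonf, 2), round(t_1, 2), round(t_q2, 2))
```

Even at $p = 10^6$ the Bonferroni threshold (about $4.75$) is still visibly below $t_1(p) \approx 5.26$, a reminder that the $(1+o(1))$ factors in this section decay only logarithmically.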
Theorems 2 and 3 implicitly show the suboptimality of these approaches. The implications include an attractive relationship to the regions of Definition \ref{def-regions}. \begin{theorem} Under the Asymptotic RW model $ARW(r,\beta)$, the region of the $(r,\beta)$ phase space in which ideal FDRT and Bonferroni thresholding yield successful classification is: \[ r > (1 - \sqrt{1 - \beta})^2, \qquad 0 < \beta < 1. \] \end{theorem} {\bf Remark.} The success region of ideal FDRT and Bonferroni is smaller than that of HCT. Moreover, even in regions where FDRT or Bonferroni would be successful, HCT still yields more accurate classifications in terms of the convergence rate. This is especially important for finite-$p$ performance. In short, Bonferroni and FDRT fail to adapt to the difficulty level of the classification problem, as measured by $(\epsilon, \tau)$: one picks a fixed threshold, the other a fixed false discovery rate. {\bf Proof Sketch.} We continue to use the notations $q^*(r,\beta)$, $q_i(r,\beta)$, $i=1,2$, $\gamma(q;r,\beta)$, and $\gamma_i(q;r,\beta)$, $i=1,2$, from the proof of Lemma \ref{lem-ratiolemma}. The Bonferroni threshold obeys $-\Phi^{-1}(1/p) = t_1(p)(1 + o(1))$ as $p \rightarrow \infty$. In this proof sketch, we analyze $t_1(p)$ {\it as if} it were the Bonferroni threshold. Applying (\ref{eq-asysep}) and the definition of $\gamma$, we have \[ \widetilde{Sep}(t_1(p),\epsilon,\tau) \cdot \frac{1}{2\sqrt{n}} = \mbox{\sc PL}(p) \cdot p^{\gamma(1;r,\beta) }, \qquad p \rightarrow \infty. \] Now \[ \gamma(1;r,\beta) = \min_{i=1,2} \gamma_i (1 ; r, \beta ), \] while $\gamma_1(1;r,\beta) = \delta(1;r,\beta)$ and $ \gamma_2(1;r,\beta) = \delta(1;r,\beta)/2$. Hence $\gamma(1;r,\beta) > 0 $ iff $ \delta(1;r,\beta) > 0 $. But \[ \delta(1 ; r, \beta ) = -\beta -r + 2 \sqrt{r}, \] which is positive iff $r > (1 - \sqrt{1-\beta})^2$. \begin{comment} {\bf The right answer is the following.
Recall that at $t = t_p(q)$, \[ FP \sim \sqrt{\frac{2}{\pi}} \frac{p^{1 - q}}{\sqrt{2 q \log p}}, \qquad TP \sim \left\{ \begin{array}{ll} p^{1 - \beta}, &\qquad 0 < q < r, \\ \sqrt{\frac{1}{2\pi}} \frac{p^{1 - \beta - (\sqrt{q} - \sqrt{r})^2}}{(\sqrt{q} - \sqrt{r}) \sqrt{2 \log p}}, &\qquad q > r. \\ \end{array} \right. \] So by similar argument as before, we should take \[ t_q(p) \sim \left\{ \begin{array}{ll} t_{q_2}(p), &\qquad 0 < r < \beta, \\ \sqrt{2 \beta \log p}, &\qquad \beta < r < 1. \end{array} \right. \] } \end{comment} Precise FDR control at level $\alpha$ requires that $TP/FP = (1-\alpha)/\alpha$. Suppose that $r < \beta$. Equation (\ref{eq-TP2FP}) relates the TP/FP ratio to $\gamma_0$, $q$ and $r$. Clearly, the only way to keep $TP/FP$ bounded away from 0 and infinity is to choose $q$ so that $\gamma_0(q;r,\beta) = 0$. We note that $q_2(r,\beta)$ exactly solves this problem: \[ \gamma_0(q_2(r,\beta) ;r,\beta) = 0. \] It follows that in the $ARW(r,\beta)$ model, the FDRT functional obeys $T_{FDR,\alpha}(\epsilon,\tau) = t_{q_2}(p)(1 + o(1))$, $p \rightarrow \infty$. In this proof sketch, we analyze $ t_{q_2}(p)$ {\it as if} it were exactly the FDR threshold. \[ \gamma(q_2(r,\beta);r,\beta) = \min_{i=1,2} \gamma_i (q_2(r,\beta) ; r, \beta ), \] while $\gamma_1(q_2;r,\beta) = \gamma_2(q_2;r,\beta) = \frac{1}{2} - \frac{(\beta+r)^2}{8r}$, since $\delta = 1 - q$ at $q = q_2$. Hence $\gamma(q_2;r,\beta)$ is positive iff $r > (1 - \sqrt{1-\beta})^2$. The last paragraph assumed $r < \beta$. On intuitive grounds, the region $r > \beta$ offers even better performance, so it must lie entirely above the phase transition. We omit details. \hfill $\Box$

Table \ref{table:exponents} compares the exponents in SEP for different methods. Also see Figure \ref{fig:exponents} for a comparison of the exponents for $\beta = 1/2$ and $\beta = 5/8$.
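The comparison of success boundaries can also be checked directly from the closed forms. The sketch below (ours, illustrative only) verifies that the HCT/ideal boundary $\rho^*(\beta)$ never exceeds the common FDRT/Bonferroni boundary $(1 - \sqrt{1-\beta})^2$, with equality exactly when $\beta \geq 3/4$:

```python
import math

def rho_star(b):
    # success boundary for the ideal threshold / HCT
    if b <= 0.5:
        return 0.0
    if b <= 0.75:
        return b - 0.5
    return (1 - math.sqrt(1 - b)) ** 2

def classical_boundary(b):
    # common success boundary for ideal FDRT and Bonferroni
    return (1 - math.sqrt(1 - b)) ** 2

for i in range(1, 100):
    b = i / 100
    if b < 0.75:
        # strictly larger success region for ideal/HC thresholding
        assert rho_star(b) < classical_boundary(b) - 1e-12
    else:
        assert abs(rho_star(b) - classical_boundary(b)) < 1e-12
```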
\begin{table}[htb] \begin{center} \begin{tabular}{|r|l|l|l|} \hline Method & SEP Exponent & Boundary, $3/4 < \beta$ & Boundary, $1/2 < \beta < 3/4$ \\ \hline Ideal, HCT & $\gamma$ & $(1 - \sqrt{1-\beta})^2$ & $\beta - 1/2$ \\ FDRT & $\frac{1}{2} - \frac{(\beta+r)^2}{8r} $ & $(1 - \sqrt{1-\beta})^2$ & $(1 - \sqrt{1-\beta})^2$ \\ Bonferroni & $ -\beta -r + 2 \sqrt{r}$ & $(1 - \sqrt{1-\beta})^2$ & $(1 - \sqrt{1-\beta})^2$ \\ \hline \end{tabular} \end{center} \caption{Comparison of the exponents in SEP.} \label{table:exponents} \end{table} \begin{figure} \begin{centering} \includegraphics[height=3.5 in,width=4 in]{Gamma5-625} \caption{Comparison of SEP exponents for HCT, FDRT, and Bonferroni. Top: $\beta = 1/2$. Bottom: $\beta = 5/8$. The $x$-axes display $r$-values from $0$ to $\beta$; the $y$-axes display the corresponding exponents. The solid blue horizontal bar indicates that $r$ falls in the failure region, where all exponents are $0$. } \label{fig:exponents} \end{centering} \end{figure} \section{Discussion and conclusions} \label{sec-discussion} \setcounter{equation}{0} \subsection{Beyond ideal performance} We consider here only asymptotic, ideal behavior. Conceptually, the ideal threshold envisions a situation with an oracle, who, knowing $\epsilon$ and $\tau$ and $n$ and $p$, chooses the very best threshold possible under those given parameters. In this paper we have analyzed the behavior of this threshold within a certain asymptotic framework. However, clearly, no empirical procedure can duplicate the performance of the ideal threshold. We have seen that ideal HC thresholding comes close. This ideal HC threshold does not involve optimal exploitation of knowledge of $\epsilon$ and $\tau$, but merely the availability of the underlying distribution of feature $Z$-scores $F_{\epsilon,\tau}$. In this paper, we analyzed the behavior of $t^{HC} = T_{HC}(F_{\epsilon,\tau})$.
This is an ideal procedure because we never would know $F_{\epsilon,\tau}$; instead we would have the empirical CDF $F_{n,p}$ defined by \[ F_{n,p}(z) = \frac{1}{p} \sum_{j=1}^p 1_{\{ Z(j) \leq z \} } . \] The (non-ideal) HCT that we defined in Section \ref{ssec-HCTDEF} is then simply \[ \hat{t}^{HC}_{n,p} = T_{HC}(F_{n,p}) . \] Because $F_{\epsilon,\tau}(z) = E(F_{n,p})(z)$, we are conditioned to expect that $t^{HC} \approx \hat{t}^{HC}_{n,p}$; indeed there are generations of experience for {\it other} functionals $T$ showing that we typically have $T(F_{n,p}) \approx T(E(F_{n,p}))$ for large $n$, $p$ for those functionals. Proving this for $T = T_{HC}$ is more challenging than one might anticipate; the problem is that $T_{HC}$ is not continuous at $F = \Phi$, and yet $F_{\epsilon(p),\tau(p)} \rightarrow \Phi$ as $p \rightarrow \infty$. After considerable effort, we justify the approximation of HCT by ideal HCT in \cite{AoS}. Hence the analysis presented here only partially proves that HCT gives near-optimal threshold feature selection; it explains the connection at the level of ideal quantities but not at the level of fluctuations in random samples. \subsection{Other asymptotic settings} In the analysis here we consider only the case that $n \sim c \log(p)^\gamma$. The phase diagram will be slightly different in case $n$ is bounded and does not go to infinity with $p$, and will again be slightly different in case $n \sim c p^\gamma$. The full details are presented in \cite{AoS}. \subsection{Phase diagram for finite sample sizes} { While the main focus of our paper has been asymptotic analysis, we mention that the phase diagram also reflects finite-sample behavior. In Figure \ref{fig:finitep}, we consider $p = 3 \times 10^3 \times N$, $N = (1, 10, 100)$. For such $p$, we take $n = \log(p)/2$ and display the boundary of the set of $(\beta, r)$ where ideal HCT yields a classification error between $10\%$ and $40\%$.
The figure illustrates that as $p$ grows, both the upper bound and the lower bound migrate towards the common limit curve $r = \rho^*(\beta)$. \begin{figure}[h] \centering \includegraphics[height =4 in, width = 5 in]{Finitep.pdf} \caption{Classification boundaries for $p = 3 \times 10^3 \times (1, 10, 100)$. For each $p$, both an upper boundary and a lower boundary are calculated, corresponding to proxy classification errors of $10\%$ and $40\%$ using ideal HCT. The gap between the two boundaries gets smaller as $p$ increases. The solid black curve displays the asymptotic boundary $r = \rho^*(\beta)$. } \label{fig:finitep} \end{figure} \subsection{Other work on HC} \label{sec-OtherHC} HC was originally proposed for use in a detection problem which has nothing to do with threshold feature selection: testing an intersection null hypothesis $\mu(j) = 0 \; \forall j$ \cite{HC}. The literature has developed since then. Papers \cite{HJ1, HJ2} extend the optimality of Higher Criticism in detection to correlated settings. Wellner and his collaborators investigated Higher Criticism in the context of goodness-of-fit; see for example \cite{Wellner}. Hall and his collaborators have investigated Higher Criticism for robustness; see for example \cite{Hall-robust}. HC has been applied to data analysis in astronomy \cite{Cayon, Starck} and computational biology \cite{Goeman}. HC has been used as a principle to synthesize new procedures: Cai, Jin, Low in \cite{CJL} (see also Meinshausen and Rice \cite{Rice}) use Higher Criticism to motivate estimators for the proportion $\epsilon$ of non-null effects. \subsection{ HC in classification} Higher Criticism was previously applied to high-dimensional classification in Hall, Pittelkow, Ghosh (2008) \cite{Hall}, but there is a key conceptual difference from the present paper.
Our approach uses HC in classifier design -- it selects features in designing a linear classifier; the actual classification decisions are then made by the linear classifier when presented with specific test feature vectors. The approach in \cite{Hall} uses HC to directly make decisions from feature vector data. Let's call these two strategies {\it HC-based feature selection} (used here) and {\it HC-based decision} (used in \cite{Hall}). In the ARW model, for a given classifier performance level, the HC-based decision strategy requires much stronger feature contrasts to reach that level than the HC-based feature selection strategy. In this paper, we have shown that HC-based feature selection requires useful features to have contrasts exceeding \[ \sqrt{2 \rho^*(\beta) \log p }/\sqrt{n} \cdot (1+o(1)), \qquad (n,p) \rightarrow \infty. \] HC-based decision requires useful features to have contrasts exceeding \[ \sqrt{2 \rho^*(\beta) \log p } \cdot (1 + o(1)), \qquad p \rightarrow \infty. \] Therefore, whenever the number of training samples $n > 1$, HC-based feature selection has an advantage. For instance, with $n=36$ training samples, HC-based feature selection can asymptotically detect features of roughly $1/6$ the strength required when using HC directly for decision.
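For concreteness, the gap between the two displayed contrast requirements is exactly a factor of $\sqrt{n}$; writing out the ratio of the two thresholds above:
```latex
\[
\frac{\sqrt{2 \rho^*(\beta) \log p}}
     {\sqrt{2 \rho^*(\beta) \log p} \, / \sqrt{n}}
\;=\; \sqrt{n},
\qquad \text{e.g. } n = 36 \;\Longrightarrow\; \sqrt{n} = 6 .
\]
```
This is just a rearrangement of the two asymptotic contrast requirements, and recovers the factor of $1/6$ in the $n = 36$ example.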
\section{Introduction} Many simulations of physical, biological, and/or mathematical phenomena rely on computations over the real numbers. Hence, it is important to precisely characterize the computability and computational complexity of real processes and operators. The discrete case has been well studied and is now well understood: the Church-Turing thesis asserts the equivalence of all reasonable models of computation over discrete objects. However, there exist several models of computation over the real numbers which define various notions of computability. The BSS model \cite{BluCucShu98,BluShuSma89}, named after its discoverers, Blum, Shub and Smale, considers processes of computation using reals in the same way classical models use digits. While there is no hope to physically realize a BSS machine, there exist physical machines that compute over the reals, for example, the Differential Analyzer \cite{Bus31}, which was modeled by Shannon as the General Purpose Analog Computer ({\sc gpac}) \cite{Sha41}. We can also cite algebraically defined classes of computable functions \cite{Moo96}, and recursive analysis. This paper explores the question of complexity in the latter framework: recursive analysis. Recursive analysis is the most accepted model for continuous computation, as it does not suffer from physical hopelessness and is as old as the Turing machine. It was indeed introduced by A. Turing \cite{Tur36}. It was also presented by some of the pioneers of computability theory as a natural way of computing over the reals using classical machines \cite{Grz57,Lac55}. In this model, a real number is represented as a converging sequence of rational numbers; this sequence can be encoded on an infinite tape that is bounded from the left. Hence, a real function $f$ is computable if there exists an algorithm that can produce a sequence representing $f(x)$ given a sequence representing $x$. The essentials of recursive analysis are presented in Section \ref{sec:recana}.
For a complete recent view of the area, the reader may refer to the monograph \cite{Wei00} or the tutorial \cite{BraHerWei08}. There is no hope to unify all approaches of continuous computation. However, recent work by some authors illustrates that relationships can be established between what is computed using Shannon's {\sc gpac} and recursive analysis \cite{JOC2007}. It was also possible to give characterizations of computable functions, in the context of recursive analysis, with function algebras {\`a} la Moore \cite{FundamentaInformatica2006}. See survey \cite{CIEChapter2007} for more insights on the relative equivalences among different continuous models. However, discussions have mainly focused on the computability level. Connecting at the complexity level models that are known to be related at the computability level is an even more ambitious goal. An immediate deep problem is that of defining complexity notions for some of the models, such as the {\sc gpac}. The main difficulty of this line of research is that there is no common viewpoint of what continuous time and continuous space mean, as shown by several attempts \cite{Moo96,AMP95,Ruo94,CIEChapter2007}. To illustrate this point, note that no notion of complexity is defined for the {\sc gpac}, whereas BSS and recursive analysis both have such a notion. In the {\sc gpac}, the continuity of time is responsible for this difficulty. In recursive analysis and BSS, time is discrete, whereas space (in BSS) is continuous; hence, the notion of time complexity exists in those two models. However, as expected, there is no relationship between what is polynomially computable for BSS and what is polynomially computable in recursive analysis. In recursive analysis, complexity relates the precision needed for the function value with both the time and the precision needed on the input value. K-I.
Ko \cite{Ko91} defines this notion of complexity, as well as the induced results on what it means to belong to a given complexity class and what the complexity of certain mathematical operators is. Our aim is to characterize the complexity classes of real functions defined over arbitrary domains using function algebras in the way of \cite{Moo96}. This line of research has already produced significant results such as \cite{GCharacterizing,Ijuc}, which respectively compare the notion of complexity over rational and real numbers and present a framework for characterizing complexity in recursive analysis over compact domains. This paper is based on numerous recent developments in the field of computability in recursive analysis and extends or reproves those results. For example, the results from \cite{CMC02} or from \cite{FundamentaInformatica2006} are revisited and reproved as applications of our framework. What this paper aims to provide can be seen from several viewpoints. First, it is a step in the direction of implicit complexity for real computation. In this context, it can be understood as a precursor for research in the line of \cite{Hof99,Jones00,MM00} for the context of recursive analysis. Second, it gives an algebraic characterization of what is computable in polynomial time (as well as higher complexity classes) for real functions. It is a porting of \cite{Cob65,BelCoo92,Clo98} to the reals. In fact, as the main contribution is a framework for translating discrete characterizations into continuous ones, we use Bellantoni and Cook's characterization and plug it into the reals in order to capture continuous polynomial time computability. It would be immediate to have a similar characterization using Cobham's work and/or all the discrete function algebras presented in \cite{Clo98}. Third, this paper ventures from computability to complexity. In that sense, this article is similar to \cite{CamOja06} when applied to complexity.
Finally, this paper has a pedagogical value insofar as the algebraic characterizations we provide make the domain of complexity over the reals clearer; such characterizations do not rely on any kind of machine definitions. For example, they can give easier ways of proving that a given function is not polynomial time computable. We believe that this is a very natural and intuitive paradigm that avoids discrete machinery (such as Type-2 Turing machines) when talking about continuous computation. The research presented in the current paper started a couple of years ago. A preliminary version of this work was presented at the Third International Workshop on Physics and Computation and later published in the International Journal of Unconventional Computing \cite{Ijuc}. The current paper is a comprehensive extension of \cite{Ijuc} as follows: (1) whereas the work in \cite{Ijuc} is restricted to functions defined over the compact interval $[0,1]$, the current paper considers real functions defined over any arbitrary domain, though, for simplicity, the proofs are given only for functions defined over the whole real line; and (2) many of the proofs in \cite{Ijuc} are either eliminated or just outlined, whereas the current paper provides a detailed complete proof of every result. The paper is organized as follows. Section 1 is an introduction. Section 2 presents a quick review of the related work done in the area of algebraically characterizing continuous complexity classes. Section 3 introduces the basic concepts and results of recursive analysis, the framework for continuous computation that we adopt in this paper. Section 4 is the core of the paper. It starts with a simple preliminary result in Subsection 4.1, followed, in Subsection 4.2, by integer characterizations of special classes of real functions, namely the Lipschitz functions and the locally poly-Lipschitz functions.
Subsection 4.3 avoids the Lipschitz assumptions and generalizes these results to all polynomial time computable real functions. In Section 5 we apply the obtained results to algebraically characterize some computability and complexity classes of real functions. First, in Subsections 5.1 and 5.2, we obtain some restatements and extensions of already known results using our framework. In Subsection 5.3 we provide new results that give algebraic, machine independent characterizations of polynomial time computable real functions. \section{Related Work} We prove our results by relating the notion of polytime (polynomial time) computable functions over the reals (arbitrary domains) to the corresponding notion over the integers. This gives a direct way to lift algebraic characterizations of integer computability and complexity to algebraic characterizations of the corresponding analog notions. Our setting is actually proved to be robust to approximations: one does not need to compute the corresponding computability or complexity class over the integers exactly, but only some defined approximation of it, in order to capture the corresponding class over the reals. This can be seen as a way to nicely reformulate and reprove some constructions already used in \cite{JOC2007,FundamentaInformatica2006}. Hence, our framework gives a way to rely on algebraic machine-independent characterizations of computable functions over the integers. Several such characterizations are known \cite{Clo98}. In particular, Kleene's functions are well known to capture exactly the integer functions computable by Type-1 Turing machines. Cobham \cite{Cob65}, and later Bellantoni and Cook \cite{BelCoo92}, were among the first to propose algebraically defined classes of polytime computable integer functions. Our main theorem relies on Bellantoni and Cook's ideas in \cite{BelCoo92}.
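As a reminder of the discrete tool used later, the recursion scheme at the heart of \cite{BelCoo92} can be sketched in a few lines. This is our illustrative rendering (the names `safe_rec` and `popcount` are ours, not taken from the paper), and it deliberately does not enforce the safe-composition growth conditions under which the scheme captures exactly the polytime functions.

```python
def safe_rec(g, h0, h1):
    """Predicative ("safe") recursion on notation, Bellantoni-Cook style sketch.

    Defines f by
        f(0, x, a)      = g(x, a)
        f(2w + i, x, a) = h_i(w, x, a, f(w, x, a))   for i in {0, 1},
    where x stands for the normal arguments, a for the safe ones, and the
    recursive value is passed only in a safe position (the last argument).
    The growth-rate side conditions of safe composition are not checked here.
    """
    def f(z, x=None, a=None):
        if z == 0:
            return g(x, a)
        w, i = divmod(z, 2)          # strip the last binary digit of z
        h = h1 if i else h0
        return h(w, x, a, f(w, x, a))
    return f

# Example: the number of 1-bits of z, via f(0) = 0 and f(2w + i) = f(w) + i.
popcount = safe_rec(lambda x, a: 0,
                    lambda w, x, a, prev: prev,
                    lambda w, x, a, prev: prev + 1)
assert popcount(13) == 3   # 13 = 1101 in binary
```

The number of recursive unfoldings is the binary length of $z$, which is the feature that makes recursion on notation (rather than on magnitude) the right discrete analogue of polynomial time.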
Other machine independent characterizations of classical discrete computability and complexity classes (see the survey \cite{Clo98}) could also be considered: our results yield a direct understanding of how algebraic characterizations of computability and complexity classes over the integers can be lifted to algebraic characterizations of the corresponding classes over the reals. Notice that our framework is different from the one proposed by Campagnolo and Ojakian in \cite{CamOja06}; in particular, it has the main advantage of allowing us to talk not only about the computability level but also about the complexity level. As recursive analysis relies on Type-2 Turing machines, it is natural to wonder whether the complexity results we obtain are in any way related to works such as \cite{kapron1996,constable1973}, which characterize the Basic Feasible Functionals (BFFs), a class analogous to polytime computable functions. First, note that BFFs do not exactly correspond to polytime computable functions in the context of recursive analysis; they are in fact incomparable, even if we restrict BFFs to functions that make sense in the context of recursive analysis. Furthermore, the objects in question are not the same: only very specific sequences make sense in the context of recursive analysis, and restricting Type-2 functions to those that represent real functions is far from easy.
However, our work could be related to \cite{Isaac10} which seeks to characterize both BFFs and polytime computable functions in the sense of recursive analysis using polynomial interpretations. \section{Essentials of Recursive Analysis} \label{sec:recana} In this section, we recall some basic definitions from recursive analysis: see \cite{Wei00,Ko91} for a full detailed presentation. Let $\N,\Z,\mathbb{Q},\R$ denote the set of natural numbers, integer numbers, rational numbers, and real numbers respectively. Let $\mathbb{D}$ denote the set of dyadic numbers, that is, $\mathbb{D}=\{r\in\mathbb{Q}\colon r=\frac{a}{2^b},a\in \Z,b\in\N\}$. These are the rationals with finite binary representation. For any real number $x$ let $|x|$ denote the absolute value of $x$. \begin{definition}[Representation of real numbers] Assume $x\in\R$. A Cauchy sequence representing $x$ is a function $\varphi_x\colon\mathbb{N}\rightarrow\mathbb{D}$ that converges at a binary rate: $\forall n\in\mathbb{N}\colon|x-\varphi_x(n)|\le2^{-n}$. Given $x\in\R$, let $CF_x$ denote the class of Cauchy functions that represent $x$. \end{definition} Based on this representation we can now define the computability of functions over the real numbers. \begin{definition}[Computability of real functions] \label{dfn:computability_of_real_functions} Assume a function\\ $f\colon D\subseteq\R\rightarrow\mathbb{R}$. We say that $f$ is computable if there exists an oracle Turing machine $M^{^{()}}$ such that for every $x\in D$, for every $\varphi_x\in CF_x$, and for every $n\in\mathbb{N}$ the following holds: \begin{myequation} |M^{^{\varphi_x}}(n) - f(x)|\le 2^{-n} \end{myequation} \end{definition} For $x\not\in D$, and for any $\varphi_x\in CF_x$, the behavior of the machine $M^{^{\varphi_x}}$ is undefined. Note that for the rest of this article we only concentrate on two cases: either $D=[0,1]$ or $D=\R$. 
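These two definitions can be illustrated by a small executable sketch, using exact dyadic arithmetic in place of the oracle tape. The helper names (`cauchy_rep`, `square_machine`) and the choice of $f(x)=x^2$ on $[0,1]$ are ours, introduced only for illustration.

```python
import math
from fractions import Fraction

def cauchy_rep(x):
    """A Cauchy representation of x: phi(n) is a dyadic rational
    (an element of D) with |x - phi(n)| <= 2^-n."""
    x = Fraction(x)
    def phi(n):
        # floor(x * 2^n) / 2^n is within 2^-n of x
        return Fraction(math.floor(x * 2**n), 2**n)
    return phi

def square_machine(phi, n):
    """Oracle machine for f(x) = x^2 on [0, 1], accurate to 2^-n.

    On [0, 1] we have |x^2 - d^2| = |x - d| * |x + d| <= 2 |x - d|,
    so a single oracle query at precision n + 1 suffices."""
    d = phi(n + 1)
    return d * d

x = Fraction(7, 10)                         # any x in [0, 1]
approx = square_machine(cauchy_rep(x), 20)
assert abs(approx - x * x) <= Fraction(1, 2**20)
```

The machine never sees $x$ itself, only finitely many dyadic approximations `phi(i)`, which is exactly the access discipline imposed by Definition \ref{dfn:computability_of_real_functions}.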
\begin{definition}[Polytime computability of real functions] If $D=[0,1]$, then we say $f$ is \emph{polytime computable} if there exists a machine $M$ such that the computation time of $M^{^{\varphi_x}}(n)$ is bounded by $p(n)$ for some polynomial $p$. If $D=\R$, then we say $f$ is \emph{polytime computable} if the computation time of $M^{^{\varphi_x}}(n)$ is bounded by $p(k,n)$ for some polynomial $p$, where $k=\min\{j\colon x\in[-2^j,2^j]\}$. We will typically call $k$ the extension parameter and $n$ the precision parameter. \end{definition} As is evident from the above definitions, in the context of recursive analysis continuity is a necessary condition for computability, though it is not sufficient. The following definition introduces the notion of a `modulus of continuity', which in some sense quantifies the concept of continuity and provides a useful tool in the investigation of real continuous computation \cite{GCharacterizing}. \begin{definition}[Modulus of continuity] Consider a function $f\colon\R\to\R$. Then $f$ has a modulus of continuity if there exists a function $m\colon\mathbb{N}^2\to\mathbb{N}$ such that for all $k,n\in\mathbb{N}$ and for all $x,y\in[-2^k,2^k]$ the following holds: \begin{myequation} \text{if } |x-y|\le 2^{-m(k,n)}, \text{ then } |f(x)-f(y)|\le 2^{-n} \end{myequation} If $f$ is defined over $[0,1]$ the same definition holds, except that the parameter $k$ is no longer necessary; that is, $m\colon\N\to\N$. \end{definition} Notice that the existence of a modulus of continuity for a function $f$ implies that this function is continuous. In analogy with \cite[Corollary 2.21]{Ko91}, polytime computability over unbounded domains can be characterized as follows \cite{GCharacterizing}. \begin{proposition} \label{thm:characterizing_ptime_over_real_functions_case} Assume a function $f\colon\R\rightarrow\mathbb{R}$.
Then $f$ is polytime computable iff there exist two functions $m:\mathbb{N}^2\rightarrow\mathbb{N}$ and $\psi:\mathbb{D}\times\mathbb{N}\rightarrow\mathbb{D}$ such that \begin{enumerate} \item{$m$ is a polynomial function and is a modulus for $f$,} \item{$\psi$ is an approximation function for $f$, that is, for every $d\in\mathbb{D}$ and $n\in\mathbb{N}$ the following holds: \begin{myequation} |\psi(d,n)-f(d)|\le 2^{-n} \end{myequation}} \item{$\psi(d,n)$ is computable in time $p(|d|+n)$ for some polynomial $p$.} \end{enumerate} \end{proposition} \begin{proof} The proof is an extension of the proof of Corollary 2.21 in \cite{Ko91}. Assume the existence of $m$ and $\psi$ that satisfy the given conditions. Without loss of generality assume that $m(k,n)=(k+n)^a$ for some $a\in\mathbb{N}$. Assume an $f$-input $x\in\R$ and let $\varphi\in CF_x$. Assume $n\in\mathbb{N}$. Let $M^{^{\varphi}}(n)$ be an oracle Turing machine that does the following: \begin{enumerate} \item{let $d=\varphi(2)$,} \item{from $d$ determine the least $k$ such that $x\in[-2^k,2^k]$,} \item{let $\alpha=m(k,n+1)$,} \item{let $d'=\varphi(\alpha)$,} \item{let $e=\psi(d',n+1)$ and output $e$.} \end{enumerate} Note that every step of the above procedure can be performed in polynomial time with respect to both $k$ and $n$. We now verify the correctness of $M^{^{\varphi}}(n)$: \begin{myequation} |e-f(x)|&\le|e-f(d')|+|f(d')-f(x)|\\ &\le 2^{-(n+1)}+|f(d')-f(x)|,\qquad \mbox{by definition of $\psi$}\\ & \le 2^{-(n+1)}+2^{-(n+1)},\qquad \mbox{$|d'-x|\le 2^{-m(k,n+1)}$ and definition of $m$}\\ & =2^{-n} \end{myequation} This completes the first part of the proof. Now assume $f$ is polytime computable. We adopt the following notation: for every $x\in\R$ let $\varphi_x^*(n)=\frac{\lfloor 2^nx\rfloor}{2^n}$.
Fix some large enough $k$ and consider any $x\in\R$ such that $len(\lfloor x\rfloor)=k$ ($len(j)$ denotes the length of the binary representation of the integer $j$), hence $x\in[-2^k,2^k]$. Since $f$ is polytime computable, there exists an oracle Turing machine $M^{^{()}}$ such that the computation time of $M^{^{\varphi_x^*}}(n)$ is bounded by $q(k,n)$ for some polynomial $q$. Fix some large enough $n\in\mathbb{N}$. Let \begin{myequation} n_x=\max\{j\colon\varphi_x^*(j)\textit{ is queried during the computation of } M^{^{\varphi_x^*}}(n+3)\} \end{myequation} Let $d_x=\varphi_x^*(n_x)$. Note that $\varphi_{d_x}^*(j)=\varphi_x^*(j)$ for every $j\le n_x$. Let $\ell_x=d_x-2^{-n_x}$ and $r_x=d_x+2^{-n_x}$. Then $\{(\ell_x,r_x)\colon x\in[-2^k,2^k]\}$ is an \emph{open covering} of the compact interval $[-2^k,2^k]$. By the \emph{Heine-Borel Theorem}, $[-2^k,2^k]$ has a finite subcovering $\mathcal{C}=\{(\ell_{x_i},r_{x_i})\colon i=1,\ldots,w\}$. Note that $x_i\in[-2^k,2^k]$ for $i=1,\ldots,w$. Define $m'\colon\mathbb{N}^2\to\mathbb{N}$ by \begin{myequation} m'(k,n)=\max\{n_{x_i}\colon i=1,\ldots,w\} \end{myequation} We need to show that $m'$ is a polynomial modulus for $f$. From the above assumptions, $n_x\le q(k,n+3)$, which is polynomial in $k$ and $n$. Now assume some $x,y\in[-2^k,2^k]$ such that $x<y$ and $|x-y|\le 2^{-m'(k,n)}$.\\ \noindent\underline{case 1:} $x,y\in(\ell_{x_i},r_{x_i})$ for some $i\in\{1,\ldots,w\}$. Then $|x-d_{x_i}|< 2^{-n_{x_i}}$ which implies that $\varphi_x^*(j)=\varphi_{d_{x_i}}^*(j)$ for every $j\le n_{x_i}$, hence $M^{^{\varphi_x^*}}(n+3)=M^{^{\varphi_{d_{x_i}}^*}}(n+3)$. Now \begin{myequation} |f(x)-f(d_{x_i})|&\le|f(x)-M^{^{\varphi_x^*}}(n+3)|+|M^{^{\varphi_x^*}}(n+3)-f(d_{x_i})|\\ &=|f(x)-M^{^{\varphi_x^*}}(n+3)|+|M^{^{\varphi_{d_{x_i}}^*}}(n+3)-f(d_{x_i})|\\ &\le 2^{-(n+3)}+2^{-(n+3)}\\ &=2^{-(n+2)} \end{myequation} Similarly, we can deduce that $|f(y)-f(d_{x_i})|\le 2^{-(n+2)}$.
Hence, $|f(x)-f(y)|\le|f(x)-f(d_{x_i})|+|f(d_{x_i})-f(y)| \le 2^{-(n+2)}+2^{-(n+2)}=2^{-(n+1)}$.\\ \noindent\underline{case 2:} There is no $i$ such that $x,y\in(\ell_{x_i},r_{x_i})$. Notice that $\mathcal{C}$ is a covering and by assumption $|x-y|\le\min\{\frac{1}{2}(r_{x_i}-\ell_{x_i})\colon i=1,\ldots,w\}$. Hence, there must exist $i,j$ such that $x\in(\ell_{x_i},r_{x_i})$, $y\in(\ell_{x_j},r_{x_j})$, and $\ell_{x_j}<r_{x_i}$. Choose an arbitrary $z\in(\ell_{x_j},r_{x_i})$. Then \begin{align*} |f(x)-f(y)|&\le |f(x)-f(z)|+|f(z)-f(y)|\\ & \le 2^{-(n+1)}+|f(z)-f(y)|,\quad\textit{applying case 1 to $x,z\in(\ell_{x_i},r_{x_i})$}\\ &\le 2^{-(n+1)}+2^{-(n+1)},\quad\textit{applying case 1 to $y,z\in(\ell_{x_j},r_{x_j})$}\\ &=2^{-n} \end{align*} Hence, $m'$ is a polynomial modulus function for $f$. The approximation function can be defined as follows: for $d\in\mathbb{D}$ and $n\in\N$, let $\psi(d,n)=M^{^{\varphi_d^*}}(n)$. This completes the proof of the proposition. \end{proof} \section{Characterizing Polytime Real Complexity} In this section, we prove that it is possible to relate polytime computability over the reals to polytime computability over the integers. We do that in two steps. In the first step, we consider the special case of Lipschitz functions. In the second step, we discuss how to avoid the Lipschitz hypothesis and consider general functions. Let us first provide a preliminary result to help explain what we would like to obtain. \subsection{A preliminary first result} A function over the real line can be characterized by the integer projection of a function over the real plane. The extra dimension can be viewed as representing the precision of the computed approximation. \begin{proposition}[Complexity over $\R$ vs.
Complexity over $\R^2$] \label{prop:char_comp_over_R_2_comp} The following are equivalent: \begin{enumerate} \item{a function $f:\R \to \mathbb{R}$ is polytime computable,} \item{there exists a polytime computable function $g:\R \times \RP \to \R$ such that the following holds \begin{equation} \label{eq:4} \forall x\in\R,\;\forall y\in\N\colon|g(x,y)-yf(x)|\le 1 \end{equation}} \end{enumerate} \end{proposition} \begin{proof} $(1)\Rightarrow(2):$ is obtained directly by letting $g(x,y)=yf(x)$. By hypothesis, $f$ is polytime computable, and multiplication can be done in polynomial time; hence, $g(x,y)$ is polytime computable. Clearly, Equation \eqref{eq:4} holds.\\ \noindent$(2)\Rightarrow(1):$ Since $g$ is polytime computable, there exists an oracle machine $N^{^{()}}$ that computes $g$ in polynomial time. Assume an input $x\in\R$ and a Cauchy sequence $\varphi_x\in CF_x$. Assume $n\in\mathbb{N}$ and consider an oracle machine $M^{^{\varphi_x}}(n)$ which does the following: \begin{enumerate} \item{Simulate the computation of $N^{^{\varphi_x,\varphi_y}}(0)$ (for $\varphi_y(i)=2^{n+1}$): \begin{enumerate} \item{whenever $N^{^{()}}$ queries $\varphi_x(i)$, $M^{^{()}}$ queries its own oracle with the same argument $i$ and returns $d=\varphi_x(i)$,} \item{whenever $N^{^{()}}$ queries $\varphi_y(j)$, $M^{^{()}}$ returns $2^{n+1}$,} \end{enumerate}} \item{Repeat the last step as long as $N^{^{()}}$ keeps querying,} \item{Let $e$ be the output of the simulation of $N^{^{()}}$,} \item{Output $2^{-(n+1)}e$.} \end{enumerate} From this procedure we have \begin{myequation} \label{eqn:prop:different_way_of_characterizing_computability_over_R_eqn_1} &|e-g(x,2^{n+1})|\le 1\\ &|2^{-(n+1)}e-2^{-(n+1)}g(x,2^{n+1})|\le 2^{-(n+1)} \end{myequation} From the proposition hypothesis: \begin{myequation} \label{eqn:prop:different_way_of_characterizing_computability_over_R_eqn_2} &|g(x,2^{n+1})-2^{n+1}f(x)|\le 1\\ &|2^{-(n+1)}g(x,2^{n+1})-f(x)|\le 2^{-(n+1)} \end{myequation} Then
\begin{align*} |M^{^{\varphi_x}}(n)-&f(x)|\le\\ &|M^{^{\varphi_x}}(n)-2^{-(n+1)}g(x,2^{n+1})|+|2^{-(n+1)}g(x,2^{n+1})-f(x)|\\ &=|2^{-(n+1)}e-2^{-(n+1)}g(x,2^{n+1})|+|2^{-(n+1)}g(x,2^{n+1})-f(x)|\\ &\le 2^{-(n+1)}+|2^{-(n+1)}g(x,2^{n+1})-f(x)|\qquad\textit{from Eq. \eqref{eqn:prop:different_way_of_characterizing_computability_over_R_eqn_1}}\\ &\le2^{-(n+1)}+2^{-(n+1)}\qquad\textit{from Eq. \eqref{eqn:prop:different_way_of_characterizing_computability_over_R_eqn_2}}\\ &\le 2^{-n} \end{align*} Hence, the machine $M^{^{\varphi_x}}$ correctly computes the function $f(x)$. Furthermore, it is clear that $M^{^{\varphi_x}}(n)$ operates in polytime in terms of the precision parameter $n$ and the length of $\lfloor x\rfloor$. \end{proof} We would like to talk about functions $g$ satisfying assertions like the above, but with quantification done only over the integers; that is, assertions like \eqref{eq:4} with $\forall x \in \Z$ in place of $\forall x \in \R$. Moving to such a fully integer characterization, we face the problem of how the notion of continuity of real functions can be transferred to the domain of integers. \subsection{Lipschitz functions} For Lipschitz functions this is facilitated by the fact that such functions provide us with free information about their continuity properties. \begin{definition}[Lipschitz functions] \label{dfn:lipschitz_fun} A real function $f\colon D\subseteq\R\to\R$ is Lipschitz if there exists a constant $K\ge 0$ such that for all $x_1,x_2\in D$ the following holds: \begin{myequation} |f(x_1)-f(x_2)|\le K|x_1-x_2| \end{myequation} \end{definition} Then we have the following characterization of polytime computable Lipschitz functions. \begin{proposition}[Complexity over $\R$ vs Complexity over $\R^2$] \label{prop:maversion_comp} Fix any arbitrary constant $\epsilon\ge0$. Let $f:\R\to\R$ be a Lipschitz function.
Then the following are equivalent: \begin{enumerate} \item{$f$ is polytime computable,} \item{there exists a polytime computable function $g\colon\R\times \R\to\R$ such that: \begin{equation} \label{eq:etoileepsilon} \forall x\in \Z,\forall y\in\N^{\ge 1}\colon|g(x,y)-yf(\frac{x}{y})|\le \epsilon \end{equation}} \end{enumerate} \end{proposition} \begin{proof} \noindent$(1)\Rightarrow(2):$ Assume $f$ is polytime computable. Define $g$ as follows: \begin{myequation} g(x,y)= \begin{cases} 0 & y=0 \\ y f(\frac{x}{y}) & y \in\N^{\ge1}\\ \text{piecewise linear} & \text{otherwise} \end{cases} \end{myequation} $g$ is polytime computable since $f$ is polytime computable and arithmetic operations on the reals are polytime computable. Clearly, $g$ satisfies Eq. \eqref{eq:etoileepsilon}.\\ \noindent $(2)\Rightarrow(1):$ Assume that (2) holds and for simplicity assume $\epsilon=1$. Let $K$ be a Lipschitz constant for $f$ (see Definition \ref{dfn:lipschitz_fun}). Let $a\in\N$ be such that $K\le2^a$. Hence, for all $x,y\in\R$ the following holds: $|f(x)-f(y)|\le 2^a|x-y|$. Since $g$ is polytime computable, there exists an oracle machine $N^{^{()}}$ which computes $g$ in polynomial time. Assume an input $x\in\R$ and a Cauchy function $\varphi_x\in CF_x$.
Assume $n\in\mathbb{N}$ and consider an oracle machine $M^{^{\varphi_x}}(n)$ that does the following: \begin{enumerate} \item{Let $n'=n+2+a$ and let $d=\varphi_x(n')$,} \item{Then $|d-x|\le2^{-n'}$; hence, it can be assumed, without loss of generality, that $d=\frac{k_1}{2^{n'}}$ for some $k_1\in\Z$,} \item{Simulate the operation of $N^{^{\varphi_x,\varphi_y}}(0)$ (for $\varphi_y(j)=2^{n'}$): \begin{enumerate} \item{whenever $N^{^{()}}$ queries $\varphi_x(i)$, $M^{^{()}}$ returns $k_1$,} \item{whenever $N^{^{()}}$ queries $\varphi_y(j)$, $M^{^{()}}$ returns $2^{n'}$,} \end{enumerate}} \item{Repeat the last step as long as $N^{^{()}}$ keeps querying,} \item{Let $e$ be the output of $N^{^{()}}$,} \item{Output $2^{-n'}e$.} \end{enumerate} It can be easily seen that the computation time of $M^{^{\varphi_x}}(n)$ is bounded by a polynomial in terms of $n$ and $k$, where $n$ is the precision parameter and $k$ is the least positive integer such that $x\in [-2^k,2^k]$. Now we want to verify that $M^{^{\varphi_x}}(n)$ computes $f(x)$ within a precision $2^{-n}$. From the computation of $N^{^{()}}$ we have \begin{myequation} \label{eqn:char_over_R_3_1} &|e-g(k_1,2^{n'})|\le 1\\ &|2^{-n'}e-2^{-n'}g(k_1,2^{n'})|\le 2^{-n'} \end{myequation} From Eq. \eqref{eq:etoileepsilon} with $\epsilon=1$ \begin{myequation} \label{eqn:char_over_R_3_2} &|g(k_1,2^{n'})-2^{n'}f(\frac{k_1}{2^{n'}})|\le 1\\ &|2^{-n'}g(k_1,2^{n'})-f(\frac{k_1}{2^{n'}})|\le 2^{-n'} \end{myequation} From Eq. \eqref{eqn:char_over_R_3_1} and \eqref{eqn:char_over_R_3_2} we have \begin{myequation} \label{eqn:char_over_R_3_3} |2^{-n'}e-f(\frac{k_1}{2^{n'}})|\le 2^{-(n'-1)} \end{myequation} From the fact that $f$ is Lipschitz we have \begin{myequation} \label{eqn:char_over_R_3_4} |f(\frac{k_1}{2^{n'}})-f(x)|\le 2^a2^{-n'}=2^{-(n+2)} \end{myequation} From Eq.
\eqref{eqn:char_over_R_3_3} and \eqref{eqn:char_over_R_3_4} we have \begin{myequation} |2^{-n'}e-f(x)|\le 2^{-(n'-1)}+2^{-(n+2)}\le 2^{-n} \end{myequation} This completes the proof that $f$ is polytime computable. \end{proof} The previous proposition can be generalized to locally Lipschitz functions as follows. \begin{definition}[Locally Lipschitz functions] Assume a function $f\colon\R^k\to\R$. \begin{enumerate} \item{We say that $f$ is locally Lipschitz if $f$ is Lipschitz on every compact subset of its domain. That is, for every compact set $C\subseteq\R^k$ there exists a constant $K_C$ such that for every $\vec{x},\vec{y}\in C$ the following holds: \begin{myequation} |f(\vec{x})-f(\vec{y})| \le K_C \norm{\vec{x}-\vec{y}} \end{myequation} where $\norm{\cdot}$ is any norm, for example the Euclidean norm.} \item{Let $C_i = \bar{B}(\vec{0},2^i)$, where $\bar{B}(\vec{0},2^i)$ is the closed ball centered at the origin $\vec{0}$ with radius $2^i$. We say that $f$ is locally poly-Lipschitz if $f$ is locally Lipschitz and there exists a sequence of Lipschitz constants $\{K_{C_i}\in\N\}_{i\in\N}$ that is polytime computable; that is, there exists a Turing machine $M(i)$ that uniformly computes $K_{C_i}$ in time $p(i)$, where $p$ is a polynomial function.} \end{enumerate} \end{definition} Then we have the following version of Proposition \ref{prop:maversion_comp} for locally poly-Lipschitz functions. \begin{proposition}[Complexity over $\R$ vs Complexity over $\R^2$] \label{prop:maversion_comp_2} Fix an arbitrary constant $\epsilon\ge0$. Let $f:\R\to\R$ be a locally poly-Lipschitz function.
Then the following are equivalent: \begin{enumerate} \item{$f$ is polytime computable,} \item{there exists a polytime computable function $g\colon\R\times \R\to\R$ such that: \begin{equation} \label{eq:etoileepsilon_1} \forall x\in \Z,\forall y\in\N^{\ge 1}\colon|g(x,y)-yf(\frac{x}{y})|\le \epsilon \end{equation}} \end{enumerate} \end{proposition} \begin{proof} \noindent$(1)\Rightarrow(2):$ Exactly as the proof of $(1)\Rightarrow(2)$ of Proposition \ref{prop:maversion_comp}.\\ \noindent$(2)\Rightarrow(1):$ Similar to the proof of $(2)\Rightarrow(1)$ of Proposition \ref{prop:maversion_comp}. Only the operation of $M^{^{\varphi_x}}(n)$ needs to be modified as follows. \begin{enumerate} \item{Let $j$ be such that $4x\in[-2^j,2^j]$,} \item{Compute the Lipschitz constant $K_{C_j}$ of the function $f$ (by assumption on $f$ this can be done in polynomial time),} \item{Let $a\in\N$ be such that $K_{C_j}\le 2^a$,} \item{Let $n'=n+2+a$ and let $d=\varphi_x(n')$,} \item{Then $|d-x|\le2^{-n'}$, hence, it can be assumed without loss of generality that $d=\frac{k_1}{2^{n'}}$ for some $k_1\in\Z$,} \item{Simulate the operation of $N^{^{\varphi_x,\varphi_y}}(0)$: (for $\varphi_y(j)=2^{n'}$) \begin{enumerate} \item{whenever $N^{^{()}}$ queries $\varphi_x(i)$, $M^{^{()}}$ returns $k_1$,} \item{whenever $N^{^{()}}$ queries $\varphi_y(j)$, $M^{^{()}}$ returns $2^{n'}$,} \end{enumerate}} \item{Repeat the last step as long as $N^{^{()}}$ keeps querying,} \item{Let $e$ be the output of $N^{^{()}}$,} \item{Output $2^{-n'}e$.} \end{enumerate} It can be easily verified that $M^{^{\varphi_x}}(n)$ operates in polynomial time and correctly computes $f(x)$, as was done in Proposition \ref{prop:maversion_comp}. \end{proof} To relate our results to discrete integer complexity classes, we employ the following notion of \emph{approximation}. \begin{definition}[Approximation] \label{dfn:approximation_1} Let $\mC$ be a class of functions from $\R^2$ to $\R$.
Let $\mD$ be a class of functions from $\Z^2$ to $\Z$. Assume a function $f\colon\R\to\R$. \begin{enumerate} \item{We say that $\mC$ \emph{approximates} $\mD$ if for any function $g \in \mD$, there exists some function $\tilde{g} \in \mC$ such that for all $x,y\in\Z$ we have \footnote{Notice that the choice of the constants $\frac{1}{4}$ and $3$ in this definition is arbitrary.} \begin{myequation} |\tilde{g}(x,y)-g(x,y)| \le 1/4 \end{myequation}} \item{We say that $f$ is \emph{$\mathcal{C}$-definable} if there exists a function $\tilde{g} \in \mC$ such that the following holds \begin{myequation} \label{eq:etoile} \forall x\in \Z,\forall y\in\N^{\ge 1}\colon|\tilde{g}(x,y)-yf(\frac{x}{y})|\le 3 \end{myequation}} \end{enumerate} \end{definition} We then have the following result. \begin{theorem}\textbf{\emph{(Complexity over $\R$ vs approximate complexity over $\Z^2$)}} \label{th:lip} Consider a class $\mC$ of polytime computable real functions that approximate the class of polytime computable integer functions. Assume that $f:\R \to \mathbb{R}$ is Lipschitz. Then $f$ is polytime computable iff $f$ is $\mathcal{C}$-definable. \end{theorem} \begin{proof} Assume that $f$ is polytime computable. By Proposition \ref{prop:maversion_comp}, there exists a polytime computable function $g$ such that \eqref{eq:etoileepsilon} holds with $\epsilon=\frac{3}{4}$ \begin{myequation} \label{eqn:th_1_1} \forall x\in \Z,\forall y\in\N^{\ge 1}\colon|g(x,y)-yf(\frac{x}{y})|\le \frac{3}{4} \end{myequation} Since $g$ is polytime computable, there exists an oracle machine $M^{^{()}}$ which efficiently computes $g$. Consider a function $h:\Z^2\to\Z$ where $h(x,y)$ is defined as follows: (i) simulate the computation of $M^{^{\varphi_x,\varphi_y}}(1)$ where the exact values of $x$ and $y$ are used to answer the oracle queries, (ii) let $e$ be the output of that simulation, and (iii) return $\lfloor e\rfloor$ as the value of $h(x,y)$. 
By the definition of $h$ we have $|h(x,y)-\lfloor g(x,y)\rfloor|\le 1$. In addition $h$ is polytime computable, hence from the theorem hypothesis there exists some $\tilde{g} \in \mC$ such that $$\forall x,y\in\Z:|\tilde{g}(x,y) - h(x,y)| \le 1/4$$ Hence, \begin{equation} \forall x,y\in \Z\colon|\tilde{g}(x,y)-\lfloor g(x,y)\rfloor|\le 1+\frac14 = \frac{5}{4} \end{equation} We also have $|g(x,y)-\lfloor g(x,y) \rfloor|\le 1$, hence \begin{myequation} \label{eqn:th_1_2} \forall x,y\in\Z:|\tilde{g}(x,y)-g(x,y)|\le\frac{9}{4} \end{myequation} Finally, from Eq. \eqref{eqn:th_1_1} and Eq. \eqref{eqn:th_1_2} we have the desired result \begin{equation} \forall x\in \Z,\forall y\in\N^{\ge 1}\colon|\tilde{g}(x,y)-yf(\frac{x}{y})|\le \frac{9}{4}+\frac{3}{4}=3 \end{equation} The other direction follows from Proposition \ref{prop:maversion_comp} with $\epsilon=3$, observing that the functions in $\mC$ are polytime computable. \end{proof} In the right-to-left direction of the previous theorem, Eq. \eqref{eq:etoile} implicitly provides a way to efficiently approximate $f$ from $\tilde{g}\upharpoonright\Z^2$. Computability of $f$, in particular at the limit points, follows from the fact that $f$ is Lipschitz (hence continuous); efficiency follows from the fact that $\tilde{g}$ is polytime computable. The left-to-right direction relates polytime computability of real functions to the corresponding classical discrete notion. Using Proposition \ref{prop:maversion_comp_2}, the previous theorem can be generalized to locally poly-Lipschitz functions as follows. \begin{theorem}\textbf{\emph{(Complexity over $\R$ vs approximate complexity over $\Z^2$)}} \label{th:lip_2} Consider a class $\mC$ of polytime computable real functions that approximate the class of polytime computable integer functions. Assume that $f:\R \to \mathbb{R}$ is locally poly-Lipschitz. Then $f$ is polytime computable iff $f$ is $\mathcal{C}$-definable.
\end{theorem} \subsection{Avoiding the Lipschitz hypothesis} \newcommand\mysharp[1]{\#_#1} \newcommand\mysharpa[2]{\#_#1[#2]} The major obstacle to avoiding the Lipschitz hypothesis is how to implicitly encode the continuity of $f$ in discrete computations. This is done in two steps: (1) encoding the modulus of continuity, which provides information at arbitrarily small rational intervals (however, it says nothing about the irrational limit points), and (2) bounding the behavior of the characterizing function $g$ both at compact subintervals of its domain and at the integers. We need another notion of `approximation' that is a kind of converse to that given in Definition \ref{dfn:approximation_1}. \begin{definition}[Polytime computable integer approximation] A function $g: \R^d \to \R$ is said to have a polytime computable integer approximation if there exists some polytime computable integer function $h: \Z^d \to \Z$ such that \begin{myequation} \forall \bar{x} \in \Z^d\colon |h(\bar{x})- g(\bar{x})|\le 1 \end{myequation} \end{definition} A sufficient condition is that the restriction of $g$ to the integers ($g\upharpoonright\Z^d$) is polytime computable. The choice of the constant $1$ reflects the best error estimate obtainable when computing the floor of $g(\bar{x})$. Now we define a special class of functions that will be used to implicitly describe information about the smoothness of real functions. Their role can be compared to that of the modulus of continuity. \begin{definition}[$\mysharp{T}$] Consider a function $T\colon\N\to \N$ and define $\mysharp{T}:\R^{\ge 1} \to \R$ by $\mysharpa{T}{x}=2^{T(\lfloor\log_2 x\rfloor)}$. When $T$ is the polynomial $T(n)=n^k$ we write $\mysharp{k}$ to simplify the notation, and we let $\mysharp{k}[x]=2^{\lfloor\log_2 x\rfloor^k}$. \end{definition} For $x\in\R$ let $\len{x}$ denote the length of the binary representation of $\lfloor x\rfloor$.
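To make the notation concrete, $\mysharp{T}$, its special case $\mysharp{k}$, and $\len{\cdot}$ can be sketched as follows (a small Python sketch; the use of floating-point $\log_2$ is an assumption of the demo, and exact integer arithmetic would be needed for large arguments):

```python
import math

def sharp(T, x):
    """#_T[x] = 2**T(floor(log2 x)) for real x >= 1."""
    return 2 ** T(math.floor(math.log2(x)))

def sharp_k(k, x):
    """Special case #_k[x] = 2**(floor(log2 x)**k), i.e. T(n) = n**k."""
    return sharp(lambda n: n ** k, x)

def length(x):
    """len(x): length of the binary representation of floor(x), for x >= 0."""
    return math.floor(x).bit_length()
```

For example, $\mysharp{2}[8]=2^{3^2}=512$, $\mysharp{1}[5]=2^{\lfloor\log_2 5\rfloor}=4$, and $\len{5}=3$.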
Then the following proposition is the non-Lipschitz version of Proposition \ref{prop:maversion_comp}. \begin{proposition}\textbf{\emph{(Complexity over $\R$ vs complexity over $\R^3$)}} \label{prop:maversiond} Fix an arbitrary constant $\epsilon\ge0$. Then the following are equivalent: \begin{enumerate} \item{a function $f:\R \to \mathbb{R}$ is polytime computable,} \item{ there exists some function $g\colon\R^3\to\R$ such that \begin{enumerate} \item{\label{itemaa} $g$ has a polytime computable integer approximation,} \item{\label{itema} for some integer $k$, \begin{myequation} \label{eq:etoileepsilond} \forall x\in\R\;\forall y,z\in\R^{\ge1},& z> 4|x| \colon\\ & |g(x\mysharpa{k}{yz},y,z)-yf(x)|\le \epsilon \end{myequation}} \item{\label{itemc} \label{itemb} for some constant $M$, \begin{myequation} \label{eqn:smoothness_of_g} \forall x_1,x_2&\in\R\;\forall y,z\in\R^{\ge1}, z>\frac{4|x_1|}{\mysharpa{k}{yz}}\colon \\ & |x_1-x_2|\le 1 \Rightarrow |g(x_1,y,z)-g(x_2,y,z)|\le M \end{myequation}} \end{enumerate}} \end{enumerate} \end{proposition} \begin{proof} $(2)\Rightarrow(1):$ For simplicity, assume $\epsilon=1$. Assume there exists a function $g$ that satisfies the given conditions. Assume some $x\in\R$ and $n\in\N$. Let $y=2^n$ and $z=2^{\len{x}+b}$ for some arbitrary fixed constant $b\ge4$. Then $y,z\ge 1$ and $z>4|x|$. From condition \eqref{itema} we have \begin{myequation} \label{eqn:char_of_compact_1a} &|g(2^{(\len{x}+n+b)^k}x,y,z)-yf(x)|\le 1\\ &|y^{-1}g(2^{(\len{x}+n+b)^k}x,y,z)-f(x)|\le y^{-1} \end{myequation} Let $h$ be a polytime computable integer function with $$|h(x,y,z)- g(x,y,z) |\le 1$$ for all $x,y,z \in \Z$. Such a function exists by condition \eqref{itemaa}. 
Hence, \begin{equation} \label{eqn:char_of_compact_2a} |g(\lfloor 2^{(\len{x}+n+b)^k}x\rfloor,y,z)-h(\lfloor 2^{(\len{x}+n+b)^k}x\rfloor,y,z)|\le 1 \end{equation} Note that $\frac{2^{(\len{x}+n+b)^k} 4|x|}{\mysharpa{k}{yz}} =\frac{2^{(\len{x}+n+b)^k}4 |x|}{2^{\lfloor\log {yz}\rfloor^k}} =\frac{2^{(\len{x}+n+b)^k}4 |x|}{2^{\lfloor\log {2^{n+\len{x}+b}}\rfloor^k}}=4|x|<z$, hence condition \eqref{itemc} can be applied to get \begin{equation} \label{eqn:char_of_compact_3a} |g(\lfloor 2^{(\len{x}+n+b)^k} x\rfloor,y,z)-g(2^{(\len{x}+n+b)^k} x,y,z)|\le M \end{equation} From Equation \eqref{eqn:char_of_compact_2a} and Equation \eqref{eqn:char_of_compact_3a} we have \begin{myequation} \label{eqn:char_of_compact_4a} &|g( 2^{(\len{x}+n+b)^k} x,y,z)-h(\lfloor 2^{(\len{x}+n+b)^k}x\rfloor,y,z)|\le M+1\\ &|y^{-1}g( 2^{(\len{x}+n+b)^k} x,y,z)-y^{-1}h(\lfloor 2^{(\len{x}+n+b)^k}x\rfloor,y,z)|\le (M+1)y^{-1} \end{myequation} From Equation \eqref{eqn:char_of_compact_1a} and Equation \eqref{eqn:char_of_compact_4a} \begin{myequation} \label{eqn:char_of_compact_5} &|f(x)-y^{-1}h(\lfloor 2^{(\len{x}+n+b)^k}x\rfloor,y,z)|\le(M+2)y^{-1}\\ &|f(x)-2^{-n}h(\lfloor 2^{(\len{x}+n+b)^k}x\rfloor,2^n,2^{\len{x}+b})|\le(M+2)2^{-n} \end{myequation} Using this last equation we can build a polytime oracle Turing machine that computes $f(x)$ as follows. Assume some $\varphi\in CF_x$. Consider a machine $M^{^{\varphi}}(n)$ that does the following: \begin{enumerate} \item{let $d'=\varphi(2)$,} \item{let $\len{d}$ denote the length of the binary representation of $\lfloor d'+1\rfloor$,} \item{let $d=\varphi((\len{d}+n+4)^k+1)$,} \item{let $w=h(\lfloor 2^{(\len{d}+n+4)^k}d\rfloor,2^n,2^{\len{d}+4})$,} \item{output $2^{-n}w$.} \end{enumerate} It is clear that the computation time of $M^{^\varphi}(n)$ is bounded by a polynomial in terms of $n$ and $\len{x}$. So it remains to show the correctness of $M^{^\varphi}(n)$. There are two cases. Assume first that $\len{x}=\len{d}$. 
In such a case let $b=4$ in the above equations. By definition of Cauchy sequences we have: \begin{myequation} \label{eqn:main_1} &|d-x|\le 2^{-((\len{d}+n+4)^k+1)}\\ &|2^{(\len{d}+n+4)^k}d-2^{(\len{d}+n+4)^k}x|\le1/2\\ &|\lfloor2^{(\len{d}+n+4)^k}d\rfloor-\lfloor2^{(\len{d}+n+4)^k}x\rfloor|\le1 \end{myequation} From Equation \eqref{eqn:char_of_compact_2a}, and the fact that $\len{x}=\len{d}$ and $b=4$, we have \begin{myequation} \label{eqn:main_2} &|g(\lfloor 2^{(\len{d}+n+4)^k}x\rfloor,2^n,2^{\len{d}+4})-h(\lfloor 2^{(\len{d}+n+4)^k}x\rfloor,2^n,2^{\len{d}+4})|\le 1\\ &|g(\lfloor 2^{(\len{d}+n+4)^k}d\rfloor,2^n,2^{\len{d}+4})-h(\lfloor 2^{(\len{d}+n+4)^k}d\rfloor,2^n,2^{\len{d}+4})|\le 1 \end{myequation} Now we want to apply condition \eqref{itemc} with \begin{align*} x_1 = \lfloor2^{(\len{d}+n+4)^k}d\rfloor,\qquad & x_2 = \lfloor2^{(\len{d}+n+4)^k}x\rfloor\\ y = 2^n,\qquad & z = 2^{\len{d}+4}\qquad \end{align*} From Eq. \eqref{eqn:main_1} we have $|x_1-x_2|\le1$. And \begin{myequation} \frac{4|x_1|}{\mysharpa{k}{yz}} &= \frac{4|\lfloor2^{(\len{d}+n+4)^k}d\rfloor|}{2^{{\lfloor\log_2yz\rfloor}^k}} = \frac{4|\lfloor2^{(\len{d}+n+4)^k}d\rfloor|}{2^{{\lfloor\log_2 (2^n 2^{\len{d}+4})\rfloor}^k}}\\ & = \frac{4|\lfloor2^{(\len{d}+n+4)^k}d\rfloor|}{2^{{(n+\len{d}+4)}^k}} \le \frac{4\lfloor2^{(\len{d}+n+4)^k}2^{\len{d}}\rfloor}{2^{{(n+\len{d}+4)}^k}}\\ & = \frac{4\cdot 2^{(\len{d}+n+4)^k}2^{\len{d}}}{2^{{(n+\len{d}+4)}^k}} = 4\cdot 2^{\len{d}} < 2^{\len{d}+4} = z \end{myequation} Accordingly \begin{equation} \label{eqn:main_4} |g(\lfloor2^{(\len{d}+n+4)^k}d\rfloor,2^n,2^{\len{d}+4})-g(\lfloor2^{(\len{d}+n+4)^k}x\rfloor,2^n,2^{\len{d}+4})|\le M \end{equation} From Equation \eqref{eqn:main_2} and Equation \eqref{eqn:main_4} we have: \begin{myequation} \label{eqn:main_5} |h(\lfloor 2^{(\len{d}+n+4)^k}d\rfloor,2^n,2^{\len{d}+4})-g(\lfloor2^{(\len{d}+n+4)^k}x\rfloor,2^n,2^{\len{d}+4})|\le M+1 \end{myequation} From Equation \eqref{eqn:main_2} and Equation \eqref{eqn:main_5} we 
have \begin{myequation} \label{eqn:main_6} &|h(\lfloor 2^{(\len{d}+n+4)^k}x\rfloor,2^n,2^{\len{d}+4})-h(\lfloor 2^{(\len{d}+n+4)^k}d\rfloor,2^n,2^{\len{d}+4})|\le M+2\\ &|2^{-n}h(\lfloor 2^{(\len{d}+n+4)^k}x\rfloor,2^n,2^{\len{d}+4})-\\ &\qquad\qquad\qquad 2^{-n}h(\lfloor 2^{(\len{d}+n+4)^k}d\rfloor,2^n,2^{\len{d}+4})|\le (M+2)2^{-n}\\ \end{myequation} From Equation \eqref{eqn:char_of_compact_5} and Equation \eqref{eqn:main_6} and the fact that $b=4$ and $\len{x}=\len{d}$: \begin{equation} \label{eqn:avoid_lip_2} |f(x)-2^{-n}h(\lfloor 2^{(\len{d}+n+4)^k}d\rfloor,2^n,2^{\len{d}+4})|\le (M+2)2^{-(n-1)} \end{equation} Note that the above algorithm outputs $2^{-n}h(\lfloor 2^{(\len{d}+n+4)^k}d\rfloor,2^n,2^{\len{d}+4})$. Hence, the algorithm is correct for the case $\len{x}=\len{d}$. The only other possibility about the relationship between $\len{x}$ and $\len{d}$ is that $\len{d}=\len{x}+1$. Such a case is equivalent to letting $b=5$ and following the same line of reasoning as above. Hence, $M^{^\varphi}(n)$ correctly approximates $f(x)$.\\ \noindent$(1)\Rightarrow(2):$ Assume that $f\colon\R\to\R$ is polytime computable. Hence $f$ has a polynomial modulus $m(n',n)=(n'+n)^{k-1}$ for some constant $k\in\N^{\ge2}$, where $n'$ and $n$ are the extension and precision parameters respectively. Define $g\colon\R^3\to\R$ as follows. \begin{myequation} g(x,y,z)= \begin{cases} y f(\frac{x}{\mysharpa{k}{yz}}) & y,z\ge 1\\ y f(\frac{x}{\mysharpa{k}{y}}) & y\ge1,z<1\\ y f(\frac{x}{\mysharpa{k}{z}}) & y<1,z\ge1\\ y f(x) & y,z<1\\ \end{cases} \end{myequation} Then for every $x\in\R$, every $y,z\in \R^{\ge 1}$, and every $z > 4|x|$ we have \begin{align*} |g(x \mysharpa{k}{yz},y,z)-yf(x)|=|yf(\frac{x \mysharpa{k}{yz}}{\mysharpa{k}{yz}})-yf(x)|=0 \end{align*} Hence, condition \eqref{itema} is satisfied. Now assume $x_1,x_2 \in \R$ and $y,z\in\R^{\ge1}$ such that $|x_1-x_2|\le1$ and $z>\frac{4|x_1|}{\mysharpa{k}{yz}}$. 
Then \begin{myequation} \label{eqn:avoid_lip_1} |g(x_1,y,z)-g(x_2,y,z)|&=|yf(\frac{x_1}{\mysharpa{k}{yz}})-yf(\frac{x_2}{\mysharpa{k}{yz}})|\\ & = y|f(\frac{x_1}{\mysharpa{k}{yz}})-f(\frac{x_2}{\mysharpa{k}{yz}})| \end{myequation} We have \begin{myequation} |\frac{x_1}{\mysharpa{k}{yz}}-\frac{x_2}{\mysharpa{k}{yz}}|&=\frac{1}{\mysharpa{k}{yz}}|x_1-x_2|\\ &\le\frac{1}{\mysharpa{k}{yz}}=2^{-\lfloor\log{z}+\log{y}\rfloor^k} \end{myequation} Since $z>\frac{4|x_1|}{\mysharpa{k}{yz}}$, we have $\log{z}\ge len(\lfloor \frac{|x_1|}{\mysharpa{k}{yz}}\rfloor)$ and $\log{z}\ge len(\lfloor \frac{|x_2|}{\mysharpa{k}{yz}}\rfloor)$. Hence, $\log{z}$ is an upper bound on the extension parameter of the input to $f$ as given by Equation \eqref{eqn:avoid_lip_1}. In addition $\log{y}$ represents the precision parameter. Hence, by applying the modulus of continuity of $f$, $m(n',n)=(n'+n)^{k-1}$, we have \begin{myequation} |g(x_1,y,z)-g(x_2,y,z)|\le|y2^{-\log{y}}|=1 \end{myequation} Hence, condition \eqref{itemb} is satisfied with $M=1$. Assume $i_1,i_2,i_3\in\Z$. The following procedure computes a function $h\colon\Z^3\to\Z$ such that $|h(i_1,i_2,i_3)- g(i_1,i_2,i_3)|\le 1$: \begin{enumerate} \item{if $i_2,i_3\ge 1$, then let $j=i_2 i_3$,} \item{if $i_2\ge1$ and $i_3 < 1$, then let $j=i_2$,} \item{if $i_2<1$ and $i_3 \ge 1$, then let $j=i_3$,} \item{if $i_2,i_3< 1$, then let $j=1$,} \item{let $l=len(j)-1$,} \item{Shift right the binary representation of $i_1$ by $l^k$ positions; the result is a dyadic rational $d$ (this corresponds to dividing $i_1$ by $2^{l^k}$),} \item{Simulate the computation of $f(d)$ assuming a large enough, though fixed, precision. When simulating the oracle, $d$ is presented exactly,} \item{Multiply the output of the previous step by $i_2$. Finally, truncate the result to extract the integer part.} \end{enumerate} It is clear that all of these steps can be performed in polynomial time in terms of $len(|i_1|)$, $len(|i_2|)$, and $len(|i_3|)$.
The fixed precision in step 7 can be calculated from the modulus of $f$ given the fact that the error in the output of the above procedure should be at most $1$. Hence, condition \eqref{itemaa} is satisfied. This completes the proof of the proposition. \end{proof} \begin{remark}~ \begin{enumerate} \item{The previous proposition can be generalized to any function $f\colon\R^n\to\R$ by appropriately adjusting the arities of the functions $g$ and $h$ (as well as slightly modifying Definition \ref{dfn:approximation_1} to include functions with any arity).} \item{Given a unary function $f\colon\R\to\R$ that is polytime computable the previous proposition essentially characterizes $f$ with an integer function $h$ that is polytime computable (see Equation \eqref{eqn:avoid_lip_2} and the associated algorithm). $h$ takes three arguments: one argument that can be interpreted as the extension parameter (the last argument, $2^{\len{d}+4}$, in Equation \eqref{eqn:avoid_lip_2}), another that can be interpreted as the precision parameter, (the middle argument, $2^n$, in Equation \eqref{eqn:avoid_lip_2}), and the third argument is the value itself (the first argument, $\lfloor 2^{(\len{d}+n+4)^k} d\rfloor$, in Equation \eqref{eqn:avoid_lip_2}).} \end{enumerate} \end{remark} We need to consider real functions that are well behaved relative to their restriction to the integers. For ease of notation, we will use $[a,b]$ to denote either the interval $[a,b]$ or the interval $[b,a]$, according to whether or not $a<b$. 
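The eight-step integer-approximation procedure in the proof of Proposition \ref{prop:maversiond} can be sketched as follows (a toy Python sketch; passing the sample function $f$ as a parameter, and evaluating $f(d)$ exactly in place of the fixed-precision oracle simulation of step 7, are assumptions of the demo):

```python
import math

def integer_approximation(i1, i2, i3, f, k):
    """Sketch of h(i1, i2, i3) with |h - g| <= 1, where for i2, i3 >= 1
    the characterizing function is g(i1, i2, i3) = i2 * f(i1 / 2**(l**k))."""
    # steps 1-4: choose the scaling base j from the cases on i2, i3
    if i2 >= 1 and i3 >= 1:
        j = i2 * i3
    elif i2 >= 1:
        j = i2
    elif i3 >= 1:
        j = i3
    else:
        j = 1
    # step 5: l = len(j) - 1, i.e. floor(log2 j)
    l = j.bit_length() - 1
    # step 6: shift i1 right by l**k positions, yielding a dyadic rational d
    d = i1 / 2 ** (l ** k)
    # step 7: evaluate f at d (exact evaluation stands in for the simulation)
    # step 8: multiply by i2 and truncate to the integer part
    return math.floor(i2 * f(d))
```

For instance, with the identity function and $k=1$, the call `integer_approximation(40, 4, 2, lambda x: x, 1)` shifts $40$ right by $3$ positions ($j=8$, $l=3$) and returns $\lfloor 4\cdot 5\rfloor=20$, which equals $g(40,4,2)$ exactly.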
\begin{definition}[Peaceful functions]~ \begin{enumerate} \item{A function $g: \R^3 \to \R$ is said to be peaceful if \begin{equation} \forall x\in\R,\forall y,z\in \N^{\ge 1} \colon g(x,y,z) \in [g(\lfloor x \rfloor ,y,z) ,g(\lceil x \rceil ,y,z)] \end{equation}} \item{We say that a class $\mC$ of real functions peacefully approximates some class $\mD$ of integer functions if the subclass of peaceful functions of $\mC$ approximates $\mD$.} \end{enumerate} \end{definition} \begin{definition} Let $\mC$ be a class of functions from $\R^3$ to $\R$. Let us consider a function $f:\R \to \mathbb{R}$ and a function $T\colon\N\to \N$. \begin{enumerate} \item{We say that $f$ is $T$-$\mathcal{C}$-definable if there exists some peaceful function $g \in \mathcal{C}$ such that \begin{myequation} \forall x\in \Z\;\forall y,z\in \N^{\ge 1}, z>\frac{4|x|}{\mysharpa{T}{yz}} \colon|g(x,y,z)-yf(\frac{x}{\mysharpa{T}{yz}})|\le 2 \end{myequation}} \item{We say that $f$ is $T$-smooth if there exists some integer $M$ such that \begin{myequation} \label{eqn:for_continuity_of_f} \forall x_1,x_2&\in\R\;\forall y,z\in\R^{\ge1}, z>\frac{4|x_1|}{\mysharpa{T}{yz}}:\\ & |x_1-x_2|\le1 \Rightarrow y |f(\frac{x_1}{\mysharpa{T}{yz}})-f(\frac{x_2}{\mysharpa{T}{yz}})|\le M \end{myequation}} \end{enumerate} \end{definition} Notice the similarity between the role that $\mysharpa{T}{yz}$ plays in the previous definition and that of the modulus of continuity of $f$. Now we can have the non-Lipschitz version of Theorem \ref{th:lip}. \begin{theorem}\textbf{\emph{(Complexity over $\R$ vs approximate complexity over $\Z^3$)}} \label{th:two} Consider a class $\mC$ of real functions that peacefully approximates the class of polytime computable integer functions and whose functions have polytime computable integer approximations.\footnote{A sufficient condition is that the restrictions to the integers of functions from $\mC$ are polytime computable.} Then the following are equivalent.
\begin{enumerate} \item{a function $f\colon\R\to\R$ is polytime computable,} \item{there exists some positive integer $k$ such that \begin{enumerate} \item{\label{itemat} $f$ is $n^k$-$\mathcal{C}$-definable, } \item{\label{itemf} $f$ is $n^k$-smooth.} \end{enumerate}} \end{enumerate} \end{theorem} \begin{proof} $(1)\Rightarrow(2):$ By Proposition \ref{prop:maversiond} there exists a function $g\colon\R^3\to\R$ such that Equation \eqref{eq:etoileepsilond} holds with $\epsilon=3/4$. Such equation can be rewritten as follows (through change of variables). \begin{myequation} \label{eqn:avoid_lip_5} \forall x\in\R\;\forall y,z\in\R^{\ge1},& z> \frac{4|x|}{\mysharpa{k}{yz}} \colon\\ & |g(x,y,z)-yf(\frac{x}{\mysharpa{k}{yz}})|\le \frac{3}{4} \end{myequation} Also, using Proposition \ref{prop:maversiond}, $g$ has a polytime computable integer approximation $h$, that is, \begin{myequation} \label{eqn:avoid_lip_4} \forall x\in\Z, \forall y,z\in\N^{\ge1}: |g(x,y,z)-h(x,y,z)| \le 1 \end{myequation} Now, by hypothesis, there exists some peaceful real function $\tilde{h} \in \mC$ that approximates $h$ \begin{myequation} \label{eqn:avoid_lip_3} \forall x\in\Z,\forall y,z\in \N^{\ge1}:|\tilde{h}(x,y,z) - h(x,y,z)| \le 1/4 \end{myequation} Then, from Equation \eqref{eqn:avoid_lip_4} and Equation \eqref{eqn:avoid_lip_3} we have \begin{myequation} \label{eqn:avoid_lip_6} \forall x\in\Z,\forall y,z\in \N^{\ge1}\colon |\tilde{h}(x,y,z)- g(x,y,z)|\le 1+\frac14 = \frac{5}{4} \end{myequation} From Equation \eqref{eqn:avoid_lip_5} and Equation \eqref{eqn:avoid_lip_6} we have \begin{myequation} \label{eqn:from_R_to_approx_Z3_1} \forall x\in \Z\;&\forall y,z\in \N^{\ge 1}, z>\frac{4|x|}{\mysharpa{k}{yz}}\colon \\ & |\tilde{h}(x,y,z)-yf(\frac{x}{\mysharpa{k}{yz}})|\le \frac{5}{4}+\frac{3}{4}=2 \end{myequation} This proves (2a) of the current theorem. Now assume $x_1,x_2\in\R$ such that $|x_1-x_2|\le1$ and assume $y,z\in\R ^{\ge1}$ such that $z>\frac{4|x_1|}{\mysharpa{k}{yz}}$. 
Then \begin{align*} |yf(\frac{x_1}{\mysharpa{k}{yz}})&-yf(\frac{x_2}{\mysharpa{k}{yz}})|\le |yf(\frac{x_1}{\mysharpa{k}{yz}})-g(x_1,y,z)|+\\ & |g(x_1,y,z)-g(x_2,y,z)|+|g(x_2,y,z)-yf(\frac{x_2}{\mysharpa{k}{yz}})|\\ & \le\frac{3}{4} + |g(x_1,y,z)-g(x_2,y,z)| + \frac{3}{4},\qquad \textit{applying Equation \eqref{eqn:avoid_lip_5}}\\ & \le \frac{3}{2} + M, \qquad \textit{applying Equation \eqref{eqn:smoothness_of_g}} \end{align*} This proves \eqref{itemf} of the current theorem (with the integer constant $M+2$).\\ \noindent $(2)\Rightarrow(1):$ This is proven through the use of part (2) of Proposition \ref{prop:maversiond} as follows. Since $f$ is $n^k$-$\mathcal{C}$-definable, there exists a peaceful function $g\in\mathcal{C}$ such that \begin{myequation} \label{eqn:avoid_lip_7} \forall x\in \Z\;\forall y,z\in \N^{\ge 1}, z>\frac{4|x|}{\mysharpa{k}{yz}} \colon|g(x,y,z)-yf(\frac{x}{\mysharpa{k}{yz}})|\le 2 \end{myequation} From the hypothesis of this theorem, $g$ has a polytime computable integer approximation. Hence, condition \eqref{itemaa} of Proposition \ref{prop:maversiond} is satisfied. Assume $x_1,x_2\in\Z$ such that $|x_1-x_2|\le 1$ and assume $y,z\in\N^{\ge1}$ such that $z>\frac{4\max\{|x_1|,|x_2|\}}{\mysharpa{k}{yz}}$.
From the hypothesis of the theorem we have $f$ is $n^k$-smooth, hence there exists some positive integer $M$ such that \begin{myequation} \label{eqn:avoid_lip_10} y |f(\frac{x_1}{\mysharpa{k}{yz}})-f(\frac{x_2}{\mysharpa{k}{yz}})|\le M \end{myequation} Applying Equation \eqref{eqn:avoid_lip_7} to $x_1$ and $x_2$ we have \begin{myequation} \label{eqn:avoid_lip_8} &|g(x_1,y,z)-yf(\frac{x_1}{\mysharpa{k}{yz}})|\le 2\\ &|g(x_2,y,z)-yf(\frac{x_2}{\mysharpa{k}{yz}})|\le 2 \end{myequation} Then \begin{myequation} \label{eqn:two_successive_values_of_h} |g(x_1&,y,z)-g(x_2,y,z)|\le |g(x_1,y,z)-yf(\frac{x_1}{\mysharpa{k}{yz}})| + \\ & y |f(\frac{x_1}{\mysharpa{k}{yz}})-f(\frac{x_2}{\mysharpa{k}{yz}})| + |yf(\frac{x_2}{\mysharpa{k}{yz}})-g(x_2,y,z)|\\ & \le 2 + y |f(\frac{x_1}{\mysharpa{k}{yz}})-f(\frac{x_2}{\mysharpa{k}{yz}})| + 2,\;\textit{from Eq. \eqref{eqn:avoid_lip_8}}\\ & = 4 + y |f(\frac{x_1}{\mysharpa{k}{yz}})-f(\frac{x_2}{\mysharpa{k}{yz}})|\\ & \le 4 + M = M' ,\; \textit{from Eq. \eqref{eqn:avoid_lip_10}} \end{myequation} Now assume $u,v\in\R$ such that $|u-v|\le1$. And assume $y,z\in\N^{\ge1}$ such that $z>\frac{4\max\{|u|,|v|\}}{\mysharpa{k}{yz}}$. Then \begin{myequation} \label{eqn:avoid_lip_11} |g&(u,y,z)-g(v,y,z)| \le |g(u,y,z)-g(\lfloor u\rfloor,y,z)| + \\ &\quad\; |g(\lfloor u\rfloor,y,z) - g(\lfloor v\rfloor,y,z)| + |g(\lfloor v\rfloor,y,z) - g(v,y,z)| \\ & \le |g(\lceil u\rceil,y,z)-g(\lfloor u\rfloor,y,z)| + |g(\lfloor u\rfloor,y,z) - g(\lfloor v\rfloor,y,z)| + \\ & \quad\; |g(\lfloor v\rfloor,y,z) - g(\lceil v\rceil,y,z)|,\;\textit{since g is peaceful}\\ & \le M' + |g(\lfloor u\rfloor,y,z) - g(\lfloor v\rfloor,y,z)| + M',\; \textit{from Eq. \eqref{eqn:two_successive_values_of_h}}\\ & \le M' + M' + M', \; \textit{from Eq. 
\eqref{eqn:two_successive_values_of_h} and the fact that $|u-v|\le1$}\\ & \le 3M' \end{myequation} Hence, condition \eqref{itemc} of Proposition \ref{prop:maversiond} is satisfied (note that only the integer values of $y$ and $z$ are used throughout the proof of $(2)\Rightarrow(1)$ in Proposition \ref{prop:maversiond}). Assume $w\in\R$ and $y,z\in\N^{\ge 1}$ such that $z>\frac{4|w|}{\mysharpa{k}{yz}}$. Then \begin{myequation} |g&(w,y,z)-yf(\frac{w}{\mysharpa{k}{yz}})|\le |g(w,y,z)-g(\lfloor w\rfloor,y,z)| + \\ &\quad\;|g(\lfloor w\rfloor,y,z)-yf(\frac{\lfloor w\rfloor}{\mysharpa{k}{yz}})| + |yf(\frac{\lfloor w\rfloor}{\mysharpa{k}{yz}})-yf(\frac{w}{\mysharpa{k}{yz}})|\\ & \le 3M'+ |g(\lfloor w\rfloor,y,z)-yf(\frac{\lfloor w\rfloor}{\mysharpa{k}{yz}})| + \\ & \quad\; |yf(\frac{\lfloor w\rfloor}{\mysharpa{k}{yz}})-yf(\frac{w}{\mysharpa{k}{yz}})|, \; \textit{from Eq. \eqref{eqn:avoid_lip_11}}\\ & \le 3M'+ 2 + |yf(\frac{\lfloor w\rfloor}{\mysharpa{k}{yz}})-yf(\frac{w}{\mysharpa{k}{yz}})|,\;\textit{from Eq. \eqref{eqn:avoid_lip_7}}\\ & \le 3M'+2+M=M'',\;\textit{$f$ is $n^k$-smooth and using Eq. \eqref{eqn:avoid_lip_10}} \end{myequation} Hence, condition \eqref{itema} of Proposition \ref{prop:maversiond} is satisfied. This completes the proof of the theorem. \end{proof} The previous results can be generalized to any complexity class as indicated by the following corollary. \begin{corollary} \label{machinchose} Let $\mathcal{D}$ be some class of functions from $\N$ to $\N$ that contains the class of polynomial functions and is closed under composition. Consider a class $\mC$ of real functions that peacefully approximates integer functions computable in time $\mathcal{D}$ and whose functions have integer approximations computable in time $\mathcal{D}$.\footnote{A sufficient condition is that the restrictions to the integers of functions from $\mC$ are computable in time $\mathcal{D}$.} Then the following are equivalent.
\begin{enumerate} \item{a function $f\colon\R\to\R$ is computable in time $\mathcal{D}$,} \item{there exists some $T \in \mathcal{D}$ such that \begin{enumerate} \item{$f$ is $T$-$\mathcal{C}$-definable,} \item{$f$ is $T$-smooth.} \end{enumerate}} \end{enumerate} \end{corollary} \begin{proof} The proof is similar to that of the previous theorem. It should be noted that if $f$ is computable in time bounded by $\mathcal{D}$ then it has a modulus in $\mathcal{D}$. This is a direct consequence of the generalization of Theorem 2.19 in \cite{Ko91} to functions over the whole real line $\R$. \end{proof} \section{Applications} In this section we apply the above results to algebraically characterize some computability and complexity classes of real functions. We first obtain some restatements and extensions of already known results, using our framework. We then provide new results, in particular the main result given by Theorems \ref{th:w1} and \ref{th:w2} and Corollary \ref{cor:w3}, which give algebraic machine-independent characterizations of polytime computable analysis functions. Note that we obtain characterizations that are valid for functions defined on any closed interval (potentially infinite), including the whole real line. In the case of the polytime computable functions, this is a new result. In the case of computable functions, this proves the generality of our framework. A \emph{function algebra} $\mathcal{F}=[\mathcal{B};\mathcal{O}]$ is the smallest class of functions containing a set of basic functions $\mathcal{B}$ and closed under a set of operations $\mathcal{O}$.
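To illustrate the notion of a function algebra, the following toy Python sketch closes a set of unary basic functions under composition up to a fixed depth (the depth bound, the function names, and the restriction to unary functions and to composition alone are assumptions of the demo; the classes used below also close under richer operations such as $LI$):

```python
def close_under_composition(basics, depth):
    """Iteratively add all compositions f∘g of functions already in the class."""
    funcs = dict(basics)  # name -> unary callable
    for _ in range(depth):
        new = {}
        for n1, f1 in funcs.items():
            for n2, f2 in funcs.items():
                # bind f1, f2 now so the closure does not capture loop variables
                new[f"{n1}*{n2}"] = (lambda f, g: lambda x: f(g(x)))(f1, f2)
        funcs.update(new)
    return funcs

# basic functions reminiscent of the classes below: theta_3 and a unit shift
basics = {
    "theta3": lambda x: max(0.0, x ** 3),
    "inc": lambda x: x + 1,
}
algebra = close_under_composition(basics, 1)
```

Here `algebra["theta3*inc"]` is the composition $\theta_3(x+1)$, so `algebra["theta3*inc"](1)` evaluates to $8$.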
\subsection{Elementarily computable functions: class $\mathcal{L}$ } Let us now consider the class $\mathcal{L}$ defined in \cite{CMC02}: $$\mathcal{L} = [0, 1, -1, \pi, U,\theta_3; COMP, LI]$$ where $\pi$ is the mathematical constant $\pi = 3.14..$, $U$ is the set of projection functions, $\theta_3(x)=\max\{0, x^3\}$, $COMP$ is the classical composition operator, and $LI$ is Linear Integration (solution to a system of linear differential equations). From the constructions in \cite{CMC02}, we know that this class captures the elementary integer functions. In addition the following lemma follows from the constructions in \cite{FundamentaInformatica2006}. \begin{lemma} $\mathcal{L}$ is a class of real functions computable in elementary time that peacefully approximates total elementarily computable integer functions. \end{lemma} Again using the above results we can obtain characterizations of the class of elementarily computable analysis functions. \begin{proposition}[Variation of \cite{CMC02}] A Lipschitz function $f:\R \to \R$ is computable in elementary time iff it is $\mathcal{L}$-definable. \end{proposition} \begin{proposition}[Extension of \cite{CMC02}] Let $f:\R \to \R$ be some $T$-smooth function, for some elementary function $T\colon\N\to\N$. Then $f$ is computable in elementary time iff it is $T$-$\mathcal{L}$-definable. \end{proposition} As in \cite{CMC02,FundamentaInformatica2006}, we can also characterize in a similar way the functions computable in time $\mathcal{E}_n$ for $n\geq 3$, where $\mathcal{E}_n$ represents the $n$-th level of the Grzegorczyk hierarchy. \subsection{Recursive functions: class $\mathcal{L}_{\mu}$} Let us now consider the class $\mathcal{L}_{\mu}$ defined in \cite{FundamentaInformatica2006}: $$\mathcal{L}_{\mu} = [0, 1, U, \theta_3; COMP, LI, \mathit{UMU}]$$ where a zero-finding operator $\mathit{UMU}$ has been added. 
This class is known (see \cite{FundamentaInformatica2006}) to extend the class of total (integer) recursive functions. From the constructions in this latter paper one can show: \begin{lemma} $\mathcal{L}_{\mu}$ is a class of computable functions that peacefully approximate the class of total integer recursive functions. \end{lemma} Hence, as a consequence of Theorem \ref{th:lip} and Corollary \ref{machinchose}, we obtain the following result for functions defined on a product of closed (potentially infinite) intervals $\mathcal{D}$: \begin{proposition}[Variation of \cite{FundamentaInformatica2006}] A Lipschitz function $f:\mathcal{D} \to \R$ is computable iff it is $\mathcal{L}_{\mu}$-definable. \end{proposition} \begin{proposition}[Extension of \cite{FundamentaInformatica2006}] Let $f:\mathcal{D} \to \R$ be some $T$-smooth function, for some total recursive function $T:\N\to\N$. Then $f$ is computable iff it is $T$-$\mathcal{L}_{\mu}$-definable. \end{proposition} \subsection{Polynomial Time Computable Functions} We are now ready to provide our main result: an algebraic characterization of polytime computable functions over the reals. To do so, we define a class of real functions which are essentially extensions to $\R$ of the Bellantoni-Cook class \cite{BelCoo92}. The latter class was developed to exactly capture classical discrete polytime computability in an algebraic machine-independent way. In the next definition, any function $f(x_1,\ldots,x_m;y_1,\ldots,y_n)$ has two types of arguments (see \cite{BelCoo92}): \emph{normal} arguments, which come first, followed by \emph{safe} arguments, using `$;$' for separation. For any $n\in\Z$ we call $[2n,2n+1]$ an even interval and $[2n+1,2n+2]$ an odd interval.
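Since $\mathcal{W}$ is modeled on the Bellantoni-Cook class, it may help to first see predicative recursion on notations in purely discrete form. The following sketch is our own illustration (the choice $h_i(n;y)=y+1$ is an arbitrary example):

```python
# Sketch (our illustration): Bellantoni-Cook predicative recursion on
# notations, the discrete operator that the continuous safe integration
# operator SI of the next definition is designed to mimic.  Arguments
# before ';' are "normal", those after it "safe"; we keep one of each.

def safe_rec(g, h0, h1):
    """f(0) = g, f(2n) = h0(n; f(n)) for n >= 1, f(2n+1) = h1(n; f(n))."""
    def f(x):
        if x == 0:
            return g
        n, i = divmod(x, 2)
        return h1(n, f(n)) if i else h0(n, f(n))
    return f

# Example: with h_i(n; y) = y + 1 the recursion counts binary digits.
length = safe_rec(0, lambda n, y: y + 1, lambda n, y: y + 1)
assert all(length(x) == x.bit_length() for x in range(64))
```

The recursion descends along the binary notation of its argument, which is why closure under it stays within polynomial time.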
\begin{definition} Define the function algebra $$\mathcal{W}=[0,1,+,-,U,c,parity,p;SComp,SI]$$ with the following basic functions and operators: \begin{enumerate} \item{zero-ary functions for the constants $0$ and $1$,} \item{a binary addition function: $+(;x,y)=x+y$,} \item{a binary subtraction function: $-(;x,y)=x-y$,} \item{a set of projection functions $U=\{U_i^j\colon i,j\in\N,i\le j\}$ where: \\$U_i^{m+n}(x_1,\ldots,x_m;x_{m+1},\ldots,x_{m+n})=x_i$,} \item{a polynomial conditional function $c$ defined by: \footnote{If $x=1$, the conditional is equal to $y$; if $x=0$, it is equal to $z$. Between $0$ and $1$, it stays between $y$ and $z$.} \begin{myequation} c(;x,y,z)=xy+(1-x)z \end{myequation}} \item{a continuous parity function: \begin{myequation} parity(;x)=\max\{0, \frac{\pi}{2}\sin(\pi x)\} \end{myequation} Hence, $parity(;x)$ is non-zero if and only if $x$ lies inside an even interval. Furthermore, for any $n\in\Z$ the following holds: $\int_{2n}^{2n+1}parity(;x)dx=1$.} \item{a continuous predecessor function $p$ defined by: \begin{myequation} p(;x) = \int_0^{x-1}parity(;t)dt \end{myequation} Notice that when $x$ belongs to an even interval $[2n,2n+1]$, $p(;x)$ acts exactly like $\lfloor\frac{x}{2}\rfloor$. On an odd interval $[2n+1, 2n+2]$, it grows continuously and monotonically from $n$ to $n+1$.} \item{a safe composition operator $SComp$: Assume a vector of functions $\bar{g}_1(\bar{x};)\in \mathcal{W}$, a vector of functions $\bar{g}_2(\bar{x};\bar{y})\in \mathcal{W}$, and a function $h\in \mathcal{W}$ of arity $dim(\bar{g}_1)+dim(\bar{g}_2)$ (where $dim$ denotes the vector length).
Define a new function \begin{myequation} f(\bar{x};\bar{y})=h(\bar{g}_1(\bar{x};);\bar{g}_2(\bar{x};\bar{y})) \end{myequation} It is clear from the asymmetry in this definition that normal arguments can be repositioned in safe places, whereas the opposite cannot happen.} \item{a safe integration operator\footnote{Notice that for simplicity we misuse the basic functions (and $p'$) so that their arguments are now in normal positions (the alternative is to redefine a new set of basic functions with arguments in normal positions). }$SI$: Assume functions $g,h_0,h_1\in\mathcal{W}$. Let $p'(;x)=p(;x-1)+1$. Define a new function as the solution of the following ODE: \begin{myequation} f(0,\bar{y};\bar{z})&=g(\bar{y};\bar{z})\\ \partial_x f(x,\bar{y};\bar{z})&=parity(x;)[h_1(p(x;),\bar{y};\bar{z},f(p(x;),\bar{y};\bar{z}))\\ &\qquad\qquad\qquad -f(2p(x;),\bar{y};\bar{z})]\\ &+parity(x-1;)[h_0(p'(x;),\bar{y};\bar{z},f(p'(x;),\bar{y};\bar{z}))\\ &\qquad\qquad\qquad\qquad -f(2p'(x;)-1,\bar{y};\bar{z})] \end{myequation} This operator closely matches Bellantoni and Cook's predicative recursion on notations: if $x$ is an even integer, we apply $h_0$ to $p'(x;)=\frac{x}{2}$; if $x$ is an odd integer, we apply $h_1$ to $p(x;)=\lfloor \frac{x}{2} \rfloor$.} \end{enumerate} \end{definition} This class $\mathcal{W}$ is based on Bellantoni-Cook's constructions and the dichotomy of normal/safe arguments in order to have the following properties, proved by induction. \begin{proposition} \label{prop:properties_of_class_W} \begin{enumerate} \item{Class $\mathcal{W}$ preserves the integers, that is, for every $f\in\mathcal{W}$ of arity $n$, $f\upharpoonright\Z^n\colon\Z^n\to\Z$.} \item{Every function in $\mathcal{W}$ is polytime computable.} \item{Every polytime computable integer function has a peaceful extension in $\mathcal{W}$.} \end{enumerate} \end{proposition} \begin{proof} \underline{Part I:} Proof is by induction on the construction of functions in $\mathcal{W}$.
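The pointwise facts about the basic functions that the induction repeatedly uses, namely that $parity$ vanishes at the integers, integrates to $1$ over each even interval, and that $p$ agrees with $\lfloor x/2\rfloor$ at the non-negative integers, can be checked numerically. The sketch below is ours; the quadrature scheme and tolerances are arbitrary choices.

```python
import math

# Numerical sanity check (ours) for the basic functions of the class W:
# parity(x) = max(0, (pi/2) sin(pi x)),  p(x) = integral_0^{x-1} parity,
# and the polynomial conditional c(x, y, z) = x*y + (1 - x)*z.

def parity(x):
    return max(0.0, (math.pi / 2.0) * math.sin(math.pi * x))

def integral(f, a, b, steps=20_000):
    """Midpoint-rule quadrature, accurate enough for these checks."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

def p(x):
    return integral(parity, 0.0, x - 1.0)

def c(x, y, z):
    return x * y + (1 - x) * z

# parity vanishes at the integers and integrates to 1 over even intervals.
assert all(parity(n) < 1e-12 for n in range(-4, 5))
assert abs(integral(parity, 2.0, 3.0) - 1.0) < 1e-6
# p agrees with floor(x/2) at the non-negative integers.
assert all(abs(p(n) - n // 2) < 1e-3 for n in range(8))
# c interpolates between its two branches.
assert c(1, 7, 9) == 7 and c(0, 7, 9) == 9
```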
It is easy to see that the constant functions $0$ and $1$, addition, subtraction, and projections all preserve $\Z$. Given $n\in\Z$ we have $p(;n)=\lfloor\frac{n}{2}\rfloor$ which is an integer. Given $i,j,k\in\Z$ it is clear that $c(;i,j,k)=ij+(1-i)k$ is an integer. The parity function is always $0$ at the integer points. Hence, all the basic functions preserve the integers. Trivially composition preserves the integers. Let $g,h_0,h_1\in\mathcal{W}$ be functions that preserve $\Z$ and consider the application of the safe integration operator to define a new function $f\in\mathcal{W}$. We use strong induction over the discrete values of the integration variable to show that $f$ preserves $\N$ (for simplicity we restrict to non-negative integers; also we neglect the arguments $\bar{y}$ and $\bar{z}$ and drop the `;' in case all arguments of the function are normal). The base case $f(0)=g$ holds by assumption on $g$. Let $n\in\N^{\ge1}$ and assume $f(j)\in\N$ for every $j\le 2n$, then \begin{myequation} \label{eq:class_W_1} f(2n+1)&=f(2n)+\int_{2n}^{2n+1}parity(x)[h_1(p(x);f(p(x)))-f(2p(x))]dx\\ &=f(2n)+\int_{2n}^{2n+1}parity(x)[h_1(n;f(n))-f(2n)]dx\\ &=f(2n)+[h_1(n;f(n))-f(2n)]\int_{2n}^{2n+1}parity(x)dx\\ &=f(2n)+[h_1(n;f(n))-f(2n)]\cdot 1\\ &=h_1(n;f(n)) \end{myequation} which is an integer value by the assumption on $h_1$ and the induction hypothesis on $f$. Similarly, it can be shown that \begin{myequation} \label{eq:class_W_2} f(2n+2)=h_0(n+1;f(n+1)) \end{myequation} which is also an integer value. This completes the proof of the first part of the proposition.\\ \noindent\underline{Part II:} Proof is by induction on the construction of functions in $\mathcal{W}$. It is easy to see that all the basic functions are polytime computable. Composition preserves polytime computability. Consider a function $f\in\mathcal{W}$ that is defined by safe integration from $g,h_0,h_1$ where these latter functions are polytime computable. 
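As a numerical sanity check (entirely ours, with arbitrary sample choices of $g$, $h_0$, $h_1$), one can integrate the defining ODE of the $SI$ operator with a simple Euler scheme and verify that the integer values obey the recurrences of Eqs. \eqref{eq:class_W_1} and \eqref{eq:class_W_2}:

```python
import math

# Check (ours) that safe integration reproduces, at the integer points,
# the discrete recursion f(2n+1) = h1(n; f(n)), f(2n+2) = h0(n+1; f(n+1)).
# The concrete g, h0, h1 below are arbitrary sample choices.

def parity(x):
    return max(0.0, (math.pi / 2.0) * math.sin(math.pi * x))

g = 1.0
h0 = lambda n, y: 2.0 * y
h1 = lambda n, y: y + n + 1.0

def discrete_f(x):
    """The Bellantoni-Cook-style recursion that SI is meant to mimic."""
    if x == 0:
        return g
    n, i = divmod(x, 2)
    return h1(n, discrete_f(n)) if i else h0(n, discrete_f(n))

def euler_f(x_max, steps_per_unit=4000):
    """Euler-integrate the defining ODE of SI, recording f at integers."""
    f_int = {0: g}                  # values of f at integers passed so far
    h = 1.0 / steps_per_unit
    x, f = 0.0, g
    for step in range(int(round(x_max * steps_per_unit))):
        n = int(math.floor(x + 1e-12))          # current unit interval
        if n % 2 == 0:                          # even interval [2m, 2m+1]
            m = n // 2                          # here p(x) = m
            df = parity(x) * (h1(m, f_int[m]) - f_int[2 * m])
        else:                                   # odd interval [2m+1, 2m+2]
            m = (n - 1) // 2                    # here p'(x) = m + 1
            df = parity(x - 1.0) * (h0(m + 1, f_int[m + 1]) - f_int[2 * m + 1])
        f += h * df
        x = (step + 1) * h                      # avoids accumulation drift
        k = int(round(x))
        if abs(x - k) < h / 2.0 and k not in f_int:
            f_int[k] = f
    return f_int

f_int = euler_f(6.0)
for k in range(7):
    assert abs(f_int[k] - discrete_f(k)) < 0.05
```

Within each unit interval only one of the two terms of the ODE is active, which is what the dictionary of previously recorded integer values exploits.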
At $x=0$ we have $f(0)=g$, which is polytime computable by assumption on $g$. Assume $x\in[2n,2n+1]$ for some $n\in\N$, then \begin{myequation} \label{eq:class_W_3} f(x)&=g+\int_0^{x}\partial_u f(u)\,du\\ &=f(2n)+\int_{2n}^{x}parity(u)[h_1(n;f(n))-f(2n)]du\\ &=f(2n)+[h_1(n;f(n))-f(2n)]\int_{2n}^{x}parity(u)du\\ &=f(2n)+[f(2n+1)-f(2n)]\int_{2n}^{x}parity(u)du, \textit{\; from Eq. \eqref{eq:class_W_1}}\\ &=f(2n)+[f(2n+1)-f(2n)](p(x+1) - p(2n+1))\\ &=f(2n)+[f(2n+1)-f(2n)](p(x+1) - n) \end{myequation} Similarly, over odd intervals $[2n+1,2n+2]$ we have \begin{myequation} \label{eq:class_W_4} f(x)=f(2n+1)+[f(2n+2)-f(2n+1)](p(x)-n) \end{myequation} From Eq. \eqref{eq:class_W_1} and Eq. \eqref{eq:class_W_2} we see that the safe integration operator exactly simulates the behavior of the safe recursion operator of the Bellantoni-Cook class. So given the assumption that $g,h_0,h_1$ are polytime computable, we have that $f\upharpoonright\N$ is polytime computable. Furthermore, $p$ is polytime computable, hence by Eqs. \eqref{eq:class_W_3} and \eqref{eq:class_W_4} the values of $f$ between the integers are polytime computable as well. This completes the proof of the second part of the proposition.\\ \noindent\underline{Part III:} First we use induction to show that every function in the Bellantoni-Cook class (which captures integer polytime computability) has an extension in $\mathcal{W}$. The Bellantoni-Cook class is defined by $B=[0,U,s_0,s_1,pr,cond;SComp,SRec]$; see \cite{BelCoo92}. The functions $0,U\in\mathcal{W}$ are extensions of the corresponding functions in $B$. Define functions $\tilde{s}_i\in\mathcal{W}$ by $\tilde{s}_i(;x)=2x+i$ where $i\in\{0,1\}$. Then $\tilde{s}_i$ are extensions of the successor functions $s_i$. The predecessor function is defined as follows: $pr(;n)=\lfloor\frac{n}{2}\rfloor$. From our definition of the class $\mathcal{W}$ we have $p\upharpoonright\N=pr$, hence $p$ is an extension of the predecessor function $pr$. Define a function $c_d\in\mathcal{W}$ as follows.
\begin{myequation} c_d(;x,y,z)=c(;x-2p(;x),z,y) \end{myequation} Assume $x=2n$ for $n\in\N$. Then $c_d(;2n,y,z)=c(;2n-2p(;2n),z,y)=c(;0,z,y)=y$. Now assume $x=2n+1$. Then $c_d(;2n+1,y,z)=c(;2n+1-2p(;2n+1),z,y)=c(;2n+1-2n,z,y)=c(;1,z,y)=z$. So $c_d\upharpoonright\N^3=cond$, hence it is an extension of the conditional function $cond$. The case for safe composition $SComp$ is easy. Now assume $f\in B$ is defined by safe recursion from $g,h_0,h_1\in B$. Assume $\tilde{g},\tilde{h}_0,\tilde{h}_1\in\mathcal{W}$ are extensions of $g,h_0,h_1$. Define the function $\tilde{f}\in\mathcal{W}$ by safe integration from $\tilde{g},\tilde{h}_0,\tilde{h}_1$. We claim that $\tilde{f}$ is an extension of $f$. Proof is by strong induction on the recursion/integration variable. At the base case we have $\tilde{f}(0)=\tilde{g}=g=f(0)$. Let $n\in\N$, then from the proof of the first part of the proposition we have: \begin{myequation} \tilde{f}(2n+1)&=\tilde{h}_1(n;\tilde{f}(n))=\tilde{h}_1(n;f(n)),\;\textit{from induction over $n$}\\ &=h_1(n;f(n)),\;\textit{by assumption on } \tilde{h}_1\\ &=f(2n+1),\;\textit{by definition of safe recursion} \end{myequation} Similarly, it can be shown that $\tilde{f}(2n+2)=f(2n+2)$, hence $\tilde{f}$ is an extension of $f$. We have shown that every function in $B$ has an extension in $\mathcal{W}$. It now remains to show that we can find a peaceful extension inside $\mathcal{W}$. Consider Eq. \eqref{eq:class_W_3} with $x\in[2n,2n+1]$. We have \begin{myequation} f(x) &= f(2n)+[f(2n+1)-f(2n)](p(x+1) - n)\\ & = f(2n)+[f(2n+1)-f(2n)]\cdot\epsilon, \quad \text{for}\; \epsilon\in[0,1]\\ & = \epsilon f(2n+1) + (1-\epsilon) f(2n) \end{myequation} This latter equation shows that $f(x)\in[f(2n),f(2n+1)]$. Similarly, from Eq.
\eqref{eq:class_W_4} for $x\in[2n+1,2n+2]$ we have \begin{myequation} f(x) &= f(2n+1)+[f(2n+2)-f(2n+1)](p(x)-n)\\ & = f(2n+1)+[f(2n+2)-f(2n+1)]\cdot\delta, \quad \text{for}\; \delta\in[0,1]\\ & = \delta f(2n+2) + (1-\delta) f(2n+1) \end{myequation} This latter equation shows that $f(x)\in[f(2n+1),f(2n+2)]$. The latter two equations then imply that every function generated by the safe integration operator is peaceful. Now consider an arbitrary function $f\in\mathcal{W}$. Define the following functions in $\mathcal{W}$: \begin{myequation} &\hat{g}()=f(0)\\ &\hat{h}_0(x;y)=f(2x)\\ &\hat{h}_1(x;y)=f(2x+1) \end{myequation} Now define a function $\hat{f}\in\mathcal{W}$ by safe integration using the functions $\hat{g},\hat{h}_0$, and $\hat{h}_1$. It can easily be seen that $\hat{f}(n)=f(n)$ for every $n\in\N$. In addition, $\hat{f}$ is peaceful. This completes the proof of the third part of the proposition. \end{proof} The previous proposition indicates that $\mathcal{W}$ is a class of polytime computable real functions that peacefully approximates polytime computable integer functions. Hence, using Theorem \ref{th:lip} the following result is obtained. \begin{theorem}\label{th:w1} A Lipschitz function $f:\R \to \R$ is polytime computable iff it is $\mathcal{W}$-definable. \end{theorem} Additionally, the previous proposition implies that any function in $\mathcal{W}$ has a polytime computable integer approximation (since $\mathcal{W}$ preserves the integers), hence using Corollary \ref{machinchose} we obtain the following result. \begin{theorem}\label{th:w2} Let $f:\R \to \R$ be some $n^k$-smooth function for some $k$. Then $f$ is polytime computable iff it is $n^k$-$\mathcal{W}$-definable. \end{theorem} Notice that $\mathcal{C}$-definability of a function can be seen as a schema that builds a function $f$ from a function $\tilde{g}\in\mathcal{C}$ (see Definition \ref{dfn:approximation_1}). Let $\mathit{Def}[\mathcal{C}]$ stand for $\mathcal{C}$-definability.
That is, $\mathit{Def}[\mathcal{C}]$ is the class of functions $f$ such that $f$ is $\mathcal{C}$-definable. Similarly, given a function $T\colon\N\to\N$, let $T\text{-}\textit{Def}[\mathcal{C}]$ denote $T\text{-}\mathcal{C}$-definability, that is, $T\text{-}\textit{Def}[\mathcal{C}]$ is the class of functions $f$ such that $f$ is $T\text{-}\mathcal{C}$-definable. Then, the class of polytime computable functions can be algebraically characterized in a machine-independent way as follows. \begin{corollary} \label{cor:w3} A function $f\colon\R\to\R$ is polytime computable iff either (1) $f$ is Lipschitz and belongs to $\mathit{Def}[\mathcal{W}]$, (2) $f$ is locally poly-Lipschitz and belongs to $\mathit{Def}[\mathcal{W}]$, or (3) $f$ is $n^k$-smooth and belongs to $n^k\text{-}\textit{Def}[\mathcal{W}]$ for some $k\in\N$. \end{corollary} \bibliographystyle{elsarticle-num}
\section{Introduction} \vspace{5mm} Our understanding of closed string tachyon condensation has been gradually improving. In recent research, it has been recognized that there is a class of tachyons whose condensation can be understood rather easily. It is the class in which tachyons are localized in a non-compact space. A typical and well-studied example of such a tachyon is the one appearing in string theory on a non-compact and non-supersymmetric orbifold \cite{APS}. Condensations of other tachyons have been studied recently in \cite{Suyama1}\cite{recent2}\cite{recent1}. One property which seems to be common to the latter tachyons is that they inevitably couple to the dilaton, and the endpoints of their condensations would have linear dilaton backgrounds \cite{YZ1}\cite{MIT}\cite{Suyama1}. The analysis of this phenomenon has been done by using a spacetime effective action which is valid when the size of the string is negligible, but $\alpha'$-corrections are not always small, especially when the tachyon mass$^2$ is of order $(\alpha')^{-1}$. There are also analyses using string field theory \cite{YZ2} and worldsheet RG flows \cite{Suyama2}. In this paper, we construct $N=2$ SCFT's which describe an $\alpha'$-exact background of string theory, in which a tachyon varies along a spatial direction. They are obtained as the IR limit of gauged linear sigma models (GLSM's) \cite{GLSM}. The GLSM has already been used in \cite{Vafa} to discuss the non-supersymmetric orbifolds, where it provides control over an RG flow which is assumed to describe the corresponding tachyon condensation. Our construction is, roughly speaking, to promote the RG scale to one of the target space coordinates, and to make the whole tachyon background on-shell. We show that there is such a CFT in which the dilaton gradient varies as the tachyon varies. This can be regarded as an explicit realization of the claim made in \cite{Suyama2}\cite{Suyama1}.
Throughout this paper, we focus only on tree-level string theory, and possible problems in the strong coupling region will, hopefully, be discussed elsewhere. This paper is organized as follows. In section \ref{GLSM}, we briefly review \cite{Vafa}, and then explain how to obtain an on-shell background from the corresponding RG flow. Some applications of our construction to tachyon condensations are shown in section \ref{example}. The examples include the vanishing of the target space, and the pinching off of a cylinder discussed in \cite{Silverstein}. Section \ref{W} deals with tachyon condensations in which the dilaton varies with the tachyon. We comment on properties of our CFT in section \ref{charge}. Section \ref{discuss} is devoted to discussion. Our conventions for the superfields and the detailed construction of the $N=2$ superconformal algebra, which is a review of \cite{SCA1}\cite{SCA2}\cite{HoriKapustin} in a general setup, are shown in the appendices. \vspace{1cm} \section{On-shell tachyon condensation in GLSM} \label{GLSM} \vspace{5mm} \subsection{Tachyon condensation and RG flow} \vspace{5mm} First let us recall the description of a closed string tachyon condensation by a GLSM \cite{Vafa}. The GLSM is a $U(1)$ gauge theory in two dimensions with $(2,2)$ supersymmetry. Suppose that there are $n$ chiral superfields $\Phi_i$ whose charges for the $U(1)$ gauge symmetry are $q_i$. The Lagrangian of this GLSM is \begin{equation} L = \frac1{2\pi}\int d^4\theta\ \Bigl( \sum_{i=1}^n\bar{\Phi}_ie^{2q_iV}\Phi_i -\frac1{2e^2}\bar{\Sigma}\Sigma \Bigr) -\frac1{2\pi}\Bigl[ \int d^2\tilde{\theta}\ t\Sigma +\mbox{h.c.}\ \Bigr], \label{Vafa} \end{equation} where $\Sigma=\bar{D}_+D_-V$ is a twisted chiral superfield constructed from $V$, and $t=t_R+it_I$ is a complex parameter. For conventions see Appendix \ref{convention}. The classical vacua are determined by the D-term potential.
The minima of the potential are the solutions of \begin{equation} \sum_{i=1}^nq_i|\phi_i|^2 = 2t_R, \label{target} \end{equation} where $\phi_i$ are the lowest components of $\Phi_i$. By taking the $e\to\infty$ limit, fluctuations perpendicular to this vacuum manifold (\ref{target}) become infinitely heavy, and the GLSM approaches a non-linear sigma model whose target space is the vacuum manifold (\ref{target}). To see the geometry of (\ref{target}) in more detail, suppose that $q_1,\cdots,q_l$ are positive, and the remaining $q_i$ are negative. If $t_R$ is negative, then $\phi_{l+1},\cdots,\phi_n$ cannot all vanish simultaneously. These can be regarded as homogeneous coordinates of the weighted projective space $WCP_{q_{l+1},\cdots,q_n}$. The total space of (\ref{target}) is a non-compact bundle over this $WCP_{q_{l+1},\cdots,q_n}$, which is topologically equivalent to ${\bf C}^l\times({\bf C}^{n-l}\backslash\{\vec{0}\})/\sim$ where the identification is defined as follows, \begin{equation} (\phi_1, \cdots, \phi_n) \sim (\lambda^{q_1}\phi_1,\cdots,\lambda^{q_n}\phi_n), \hspace{5mm} \lambda\in{\bf C}^*. \end{equation} If $t_R$ is positive, then (\ref{target}) determines a similar non-compact bundle over $WCP_{q_{1},\cdots,q_l}$. Note that if all $q_i$ are positive (negative), then there is no solution to (\ref{target}) when $t_R$ is negative (positive), respectively. In those cases, there are no supersymmetric vacua for such $t_R$ regions. \vspace{5mm} The GLSM is super-renormalizable, and thus the renormalization procedure is very simple. In fact, all divergences are canceled by the shift of $t_R$ as follows, \begin{equation} t_R(\Lambda) = t_R+\frac12\sum_{i=1}^nq_i \log\Lambda, \label{t-shift} \end{equation} where $\Lambda$ is a UV cut-off scale. If $\sum_iq_i>0$, then $t_R(\Lambda)$ decreases monotonically as the scale $\Lambda$ decreases, and vice versa.
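The direction of the flow in Eq. (\ref{t-shift}) amounts to simple arithmetic, which the following sketch (with sample charges of our own choosing) makes explicit:

```python
import math

# t_R(Lambda) = t_R + (1/2) * sum(q) * log(Lambda): with sum(q) > 0,
# t_R decreases monotonically as the cut-off Lambda is lowered (UV -> IR).
q = [1, 2, -1]              # arbitrary sample charges with sum(q) = 2 > 0
t_R0 = 0.0

def t_R(Lam):
    return t_R0 + 0.5 * sum(q) * math.log(Lam)

vals = [t_R(L) for L in (10.0, 1.0, 0.1)]   # decreasing scales
assert vals[0] > vals[1] > vals[2]          # t_R decreases toward the IR
```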
Recalling that the shape (even the topology) of the vacuum manifold, in other words the target space of the corresponding non-linear sigma model, depends on the sign of $t_R$, it is concluded that the target space varies drastically along the RG flow. To see what is described by this RG flow, it is convenient to dualize $\Phi_i$, according to \cite{HoriVafa}. The twisted superpotential of the dual theory is \begin{equation} \tilde{W} = \Sigma\Bigl[ \frac12\sum_{i=1}^nq_iY_i-t \Bigr]+\mu\sum_{i=1}^ne^{-Y_i}, \end{equation} where $Y_i$ are twisted chiral superfields dual to $\Phi_i$, and $\mu$ is a scale parameter related to $\Lambda$ and $t$. The superpotential is absent in the dual theory. Now $\Sigma$ can be integrated out, and this results in a constraint \begin{equation} \sum_{i=1}^nq_iY_i-2t = 0. \end{equation} Let us consider a simple case: all $q_i$ except $q_n=-q$ are positive and $\sum_{i=1}^nq_i<0$. Then the twisted superpotential, after integrating out $\Sigma$, is \begin{equation} \tilde{W} = \mu\sum_{i=1}^{n-1}e^{-Y_i} +\mu e^{\frac2{q}t}\prod_{i=1}^{n-1}e^{-\frac{q_i}{q}Y_i}. \end{equation} Let us define $u_i=e^{-\frac1qY_i}$. Then $\tilde{W}$ is \begin{equation} \tilde{W} = \mu\sum_{i=1}^{n-1}u_i^q+\mu e^{\frac2qt}\prod_{i=1}^{n-1}u_i^{q_i}. \label{tildeW} \end{equation} In the UV limit ($t_R\to-\infty$), the second term on the RHS vanishes. Naively, this limit of the GLSM is expected to be described by an $N=2$ SCFT which is specified by $\tilde{W}|_{t\to-\infty}$ (up to an orbifolding)\footnote{ The natural variables are $Y_i$, not $u_i$ itself. So the UV limit is not a product of minimal models. }. The second term on the RHS of (\ref{tildeW}) would indicate a perturbation by a tachyon background which grows at low energy $(t_R\to+\infty)$. In fact, the $U(1)$ charge shows that $\prod_{i=1}^{n-1}u_i^{q_i}$ is relevant, since $\sum_{i=1}^{n-1}q_i<q$ by assumption.
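The algebra behind this change of variables can be checked numerically. In the sketch below (the sample values of the charges, of $t$, and of the $Y_i$ are arbitrary), the constraint is solved for $Y_n$ and both identities $e^{-Y_i}=u_i^q$ and $e^{-Y_n}=e^{2t/q}\prod_i u_i^{q_i}$ are verified:

```python
import math

# Numerical check (ours) of the substitution u_i = exp(-Y_i/q): with
# q_n = -q, the constraint sum_i q_i Y_i - 2t = 0 gives
#   Y_n = (sum_{i<n} q_i Y_i - 2t)/q,
# and then exp(-Y_n) = exp(2t/q) * prod_i u_i^{q_i},  exp(-Y_i) = u_i^q.

q_pos = [1, 2, 3]          # q_1, ..., q_{n-1} > 0 (arbitrary sample values)
q = 7                      # q_n = -q, chosen so that sum of all q_i < 0
t = 0.3
Y = [0.5, -1.2, 2.0]       # arbitrary sample values of Y_1, ..., Y_{n-1}

Yn = (sum(qi * yi for qi, yi in zip(q_pos, Y)) - 2 * t) / q
u = [math.exp(-yi / q) for yi in Y]

lhs = math.exp(-Yn)
rhs = math.exp(2 * t / q) * math.prod(ui ** qi for ui, qi in zip(u, q_pos))
assert abs(lhs - rhs) < 1e-12
assert all(abs(math.exp(-yi) - ui ** q) < 1e-12 for yi, ui in zip(Y, u))
```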
Therefore, one may naturally expect that the GLSM describes an RG flow which is induced by a tachyon perturbation. The endpoint of the RG flow is expected to describe a background in which the tachyon condenses, and, as a result, the target space is drastically deformed by the tachyon condensation, even at the level of topology. This approach to closed string tachyon condensations has been applied to non-supersymmetric orbifolds \cite{Vafa}. See also the review paper \cite{review}. Note that for the other choice $\sum_iq_i>0$, $\prod_{i=1}^{n-1}u_i^{q_i}$ is irrelevant, while for this choice $t_R\to-\infty$ is the IR limit, so the irrelevant perturbation decays as $\Lambda$ decreases, as it should. However, in this case it is not obvious which tachyons would induce this RG flow. \vspace{5mm} \subsection{Promoting $t$ to a field} \vspace{5mm} The off-shell analysis based on an RG flow seems to be in conflict with the c-theorem when one would like to discuss a generic tachyon which is not localized in a non-compact space. In such a case, for example a tachyon localized in a compact space \cite{Suyama2}, there is no reason for the c-theorem not to be applicable. Then the central charge must decrease along the RG flow induced by the tachyon, and therefore the consistency of string theory becomes suspicious. One may think that the endpoint of such a tachyon condensation would be a non-critical string theory, which is possible when a non-trivial dilaton background is induced by the tachyon condensation. One such mechanism was discussed in \cite{MIT}. Some recent studies \cite{Suyama2}\cite{MIT}\cite{Suyama1} focus on on-shell processes of closed string tachyon condensations. Since they are time-dependent processes, understanding them is usually difficult. However, if one is able to analyze an on-shell process of tachyon condensation, the endpoint of the condensation can be identified as a state in the string theory with which one started.
In this paper, we would like to show that it is possible to construct an exact CFT background in which a tachyon varies along a {\it spatial} direction, not the temporal one. It would be very interesting if the spatial direction could be Wick-rotated, enabling us to discuss the corresponding time-dependent processes. The construction is a rather straightforward generalization of \cite{Vafa}; it is achieved by promoting the parameter $t$ to a twisted chiral superfield $Y$, that is, the Lagrangian is \begin{equation} L = \frac1{2\pi}\Bigl[ \int d^4\theta\ \Bigl( \sum_{i=1}^n\bar{\Phi}_ie^{2q_iV}\Phi_i -\frac1{2e^2}\bar{\Sigma}\Sigma -k\bar{Y}Y \Bigr) -\int d^2\tilde{\theta}\ Y\Sigma -\int d^2\bar{\tilde{\theta}}\ \bar{Y}\bar{\Sigma} \Bigr]. \label{promoted} \end{equation} Then the shift (\ref{t-shift}) due to the RG flow becomes an additive renormalization of $Y$, leaving the total Lagrangian intact. The dual theory of (\ref{promoted}) is almost the same as before, and in particular, the twisted superpotential is (\ref{tildeW}), with $t$ replaced by $Y$. The term $e^{\frac2qY}\prod_{i=1}^{n-1}u_i^{q_i}$ in the twisted superpotential can be regarded as a ``dressed'' operator, which varies in the $Y$-direction. Therefore, the target space of (\ref{promoted}) should describe an on-shell background in which a tachyon varies along the $Y$-direction. On one side of the $Y$-direction, (\ref{promoted}) describes a background without the tachyon, and on the other side it describes the endpoint of the tachyon condensation. Note that the GLSM (\ref{promoted}) is a mirror version of the one discussed in \cite{HoriKapustin}. See also \cite{twistedcircle} for its application to tachyon condensation. \vspace{5mm} One advantage of introducing $Y$ is that the whole process of tachyon condensation can be described by one CFT which is obtained by taking the IR limit of the GLSM.
In arguments relying on RG flows, one can only suggest that a CFT, which is an IR fixed point of a flow, would be a possible endpoint of a tachyon condensation. In our case, a tachyon varies in a spatial direction, keeping the on-shell condition, since our model is a CFT as a whole, not only in an asymptotic region. Therefore, the corresponding state with a non-trivial tachyon is connected to the original tachyonic vacuum via a physical process in string theory. It is also interesting that $Y$, along which the tachyon varies, looks like the Liouville field. This may have something to do with our previous works \cite{Suyama1}\cite{Suyama2}. In addition, $N=2$ superconformal symmetry is realized in our model. The $N=2$ algebra has a $U(1)$ current which is a combination of the $U(1)_V$ and $U(1)_A$ symmetries in the GLSM. By regarding the superfields in two dimensions as those obtained from four-dimensional ones via dimensional reduction, $U(1)_V$ and $U(1)_A$ both have their origins in four dimensions; $U(1)_V$ comes from the R-symmetry, and $U(1)_A$ is the rotation in the plane which is dimensionally reduced. Since the $U(1)_A$ symmetry has a chiral anomaly in the GLSM (\ref{Vafa}) when $\sum_iq_i\ne0$, it is not obvious whether $N=2$ symmetry is realized even in the IR limit, which seems to be generally expected. When the FI-parameter $t$ is promoted to $Y$, the chiral anomaly can be canceled. The chiral anomaly appears for the following transformation of fermions \begin{equation} \psi_\pm \to e^{\mp ia}\psi_\pm, \end{equation} under which the path-integral measure is not invariant, and the variation of the measure is \begin{equation} {\cal D}\psi{\cal D}\bar{\psi} \to {\cal D}\psi{\cal D}\bar{\psi}\exp\Bigl[ -\frac{iq}{\pi}\int d^2x\ av_{01} \Bigr], \end{equation} where $q$ is the gauge charge of $\psi_\pm$. All fermions in $\Phi_i$ give this variation with appropriate charges.
All of them can be canceled if the chiral transformation is accompanied by the shift of $Y$, \begin{equation} Y \to Y+ia\sum_{i=1}^nq_i, \end{equation} since the twisted superpotential in terms of component fields includes \begin{equation} \frac1{\pi}\int d^2x\ y_Iv_{01}, \end{equation} where $y=y_R+iy_I$ is the lowest component of $Y$. Note that the above term indicates that $Y$ is periodic with period $2\pi i$. In fact, one can construct generators of $N=2$ algebra acting on a subspace of the full Hilbert space of the CFT, according to \cite{SCA1}\cite{SCA2}\cite{HoriKapustin}. The details of construction of the generators are summarized in Appendix \ref{N=2}. The ambiguity of the choice of the $U(1)$ current mentioned there can be fixed for the GLSM (\ref{promoted}). Since the GLSM can be described by a non-linear sigma model in the IR limit, the $U(1)$ current of the $N=2$ algebra should be just a fermion number current (contribution from $Y$ can include a shift in $y_I$-direction). Therefore, one should choose $p_i=0$, since the lowest component of $J_2^i$ generates a transformation of $\phi_i$ which may act non-trivially on the target space. As a result, the central charge of this CFT is \begin{equation} c=3(n-1)+3\Bigl( 1+k\gamma^2 \Bigr), \end{equation} as expected from the dimensionality of the target space (plus linear dilaton background). \vspace{1cm} \section{Examples} \label{example} \vspace{5mm} In this section, we show explicit examples of GLSM's describing space-dependent tachyon backgrounds. The first example is (\ref{promoted}) with $n=1$. This is nothing but a mirror dual of the GLSM studied in \cite{HoriKapustin}, in which it has been shown that this GLSM is equivalent (mirror dual) to both the $N=2$ Liouville theory and $SL(2,{\bf R})/U(1)$ coset. The latter CFT describes a Euclidean black hole in two dimensions \cite{BH}, and its geometry is a semi-infinite cigar-like one. 
The shape of this target space can be seen from the classical vacuum manifold determined by \begin{equation} |\phi|^2 = 2y_R. \label{cigar} \end{equation} For simplicity, we have chosen $q=1$. For a positive $y_R$, (\ref{cigar}) has a solution, and its shape is locally a cylinder. For a negative $y_R$, however, there is no supersymmetric solution. Therefore the cylinder in the $y_R>0$ region is ``capped'' at $y_R=0$, resulting in the cigar geometry. The twisted superpotential of the dual theory is \begin{equation} \tilde{W} = e^{-2Y}. \end{equation} This is the $N=2$ Liouville potential. This can also be regarded as a tachyon vertex operator dressed by the Liouville field. If this operator is interpreted as a space-dependent tachyon background, it represents a tachyon growing toward $Y\to-\infty$. Then the original GLSM, which is a non-linear sigma model on the cigar, indicates that the target space ``disappears'' where the tachyon condenses. The situation would be similar for the general $n$ case with $q_i>0$. The target space is asymptotically $WCP_{q_1,\cdots,q_n}\times {\bf R}\times S^1$, which is terminated at $y_R=0$. The mirror dual theory has a tachyon background which grows where the target space disappears. \vspace{5mm} The second example is the GLSM with $n=2$ and $q_1q_2<0$. For simplicity, let $q_1=1$, $q_2=-2$. The corresponding twisted superpotential is \begin{equation} \tilde{W} = e^{-Y_1}+e^{Y}e^{-\frac12Y_1}. \end{equation} This can be regarded as describing a small tachyon perturbation if $Y\to-\infty$. Since there is a Liouville potential for $Y_1$, the region with large negative $Y_1$ is not accessible from the asymptotic $Y_1\to+\infty$ region. In the original variables, $Y_1\to-\infty$ corresponds to $\Phi_1\to0$ \cite{HoriVafa}. The vacuum manifold is determined by \begin{equation} |\phi_1|^2-2|\phi_2|^2 = 2y_R. \label{tube} \end{equation} We fix the gauge by imposing $\phi_1\ge0$.
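The topology change across $y_R=0$ can be made explicit by solving (\ref{tube}) for the radius $|\phi_2|$ at fixed $\phi_1\ge0$; the following small numerical sketch is our own:

```python
import math

# For the vacuum manifold |phi_1|^2 - 2|phi_2|^2 = 2 y_R with phi_1 >= 0,
# solve for the "radius" |phi_2| as a function of phi_1 (our sketch).

def radius_phi2(phi1, y_R):
    """|phi_2| on the vacuum manifold; None if there is no solution."""
    r2 = (phi1 ** 2 - 2.0 * y_R) / 2.0
    return math.sqrt(r2) if r2 >= 0.0 else None

# y_R < 0: a solution exists for every phi_1 >= 0 (a cylinder); the
# radius is minimal at phi_1 = 0, where it equals sqrt(-y_R).
y_R = -0.5
assert all(radius_phi2(p / 10.0, y_R) is not None for p in range(50))
assert abs(radius_phi2(0.0, y_R) - math.sqrt(-y_R)) < 1e-12

# y_R > 0: the circle shrinks to a point at phi_1 = sqrt(2 y_R) and no
# solution exists below it, so the target space is topologically a plane.
y_R = 0.5
assert radius_phi2(0.0, y_R) is None
assert radius_phi2(math.sqrt(2.0 * y_R), y_R) == 0.0
```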
For the case $y_R<0$, this condition is not appropriate since $\phi_1=0$ is a possible solution of (\ref{tube}). However, $\phi_1=0$ is not physically relevant in this case, as mentioned above, so the gauge fixing condition $\phi_1\ge0$ would make sense. For $y_R<0$, the target space is a half of a one-sheeted hyperboloid, topologically a cylinder, and the radius of the cylinder $|\phi_2|$ decreases as $\phi_1$ decreases. The minimum value of the radius becomes small as $|y_R|$ becomes small. The tachyon vertex $e^{-\frac12Y_1}$ in the twisted superpotential is localized around $\phi_1=0$ and its amplitude $e^{Y}$ grows with $Y$. On the other hand, for $y_R>0$, the target space is instead a half of a two-sheeted hyperboloid, topologically a plane. Therefore, this can be understood as the thin part of the one-sheeted hyperboloid being pinched off by the tachyon condensation, which is the phenomenon discussed in \cite{Silverstein}. It seems very interesting that the GLSM discussed here is the same as the one employed in \cite{twistedcircle} for application to the twisted circle geometry, but it is analyzed here with a different gauge choice, and the $Y$-direction is regarded here as a physical coordinate. \vspace{1cm} \section{GLSM with superpotential} \label{W} \vspace{5mm} So far, we have discussed GLSM's without superpotential and their application to closed string tachyon condensations. In this section, we turn our attention to GLSM's with superpotential. \vspace{5mm} One example which we would like to discuss is the GLSM (\ref{general}) with $n=2$, and the superpotential is \begin{equation} W = \Phi_1^{n_1}\Phi_2^{n_2}, \label{eg1} \end{equation} where $n_1,n_2>3$ are integers. This superpotential must be compatible with the gauge symmetry, which implies \begin{equation} n_1q_1+n_2q_2 = 0. \label{gaugeinv} \end{equation} For definiteness, we assume $q_1>0>q_2$ and $q_1+q_2<0$. The D-term condition is \begin{equation} q_1|\phi_1|^2+q_2|\phi_2|^2 = 2y_R.
\end{equation} For $y_R<0$, $\phi_2$ cannot vanish, while for $y_R>0$, $\phi_1$ is non-zero. There is also the F-term condition which imposes $\phi_1=0$ for $y_R<0$ and $\phi_2=0$ for $y_R>0$. Summarizing, \begin{eqnarray} \phi_1=0, &\hspace{5mm}& q_2|\phi_2|^2 = 2y_R<0, \\ \phi_2=0, &\hspace{5mm}& q_1|\phi_1|^2 = 2y_R>0. \end{eqnarray} Then the vector superfield becomes massive via the Higgs mechanism. For $y_R<0$, for example, \begin{equation} \bar{\Phi}_2e^{2q_2V}\Phi_2 = |\Phi_2|^2+2q_2|\Phi_2|^2V+2q_2^2|\Phi_2|^2V^2, \end{equation} where the second term of the RHS cancels the term coming from the twisted superpotential. $\Phi_1$ and $Y$ remain massless. The superpotential for $\Phi_1$ is now \begin{equation} W = a'\Phi_1^{n_1}. \label{minimal} \end{equation} Therefore, around a vacuum with $y_R<0$, the GLSM describes, in the low energy limit, a CFT which consists of an $N=2$ minimal model specified by (\ref{minimal}) and a free field, with a possible linear dilaton background. Similarly, around a vacuum with $y_R>0$, the low energy limit is a product of another $N=2$ minimal model and a free field. Since there remains a residual gauge symmetry, the minimal model is in fact an orbifold. Consider the $y_R<0$ case. In this case, $\phi_2$ has a non-zero vev, so the gauge symmetry is broken to a discrete symmetry whose action on $\Phi_1$ is \begin{equation} \Phi_1 \to e^{2\pi i\frac{q_1}{q_2}}\Phi_1 = e^{-2\pi i\frac{n_2}{n_1}}\Phi_1. \end{equation} If $n_1,n_2$ are coprime, the minimal model is orbifolded by ${\bf Z}_{n_1}$, otherwise the orbifold group is a subgroup of ${\bf Z}_{n_1}$. \vspace{5mm} The dual theory provides a more detailed description of the model. The mirror dual of a GLSM with a superpotential was discussed in \cite{HoriVafa}.
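The branch structure just described amounts to elementary arithmetic, and can be checked directly. The sketch below is a minimal illustration in plain Python; the exponents $(n_1,n_2)=(5,4)$ and charges $(q_1,q_2)=(4,-5)$ are hypothetical sample values chosen to satisfy (\ref{gaugeinv}) and $q_1+q_2<0$. It verifies that both F-term equations $\partial_i W=0$ hold on each branch, and that the sign of $y_R$ selects which field vanishes.

```python
from fractions import Fraction

# Hypothetical sample data: W = Phi1^n1 * Phi2^n2 with charges (q1, q2).
n1, n2 = 5, 4
q1, q2 = 4, -5          # q1 = m*n2, q2 = -m*n1 with m = 1

# Gauge invariance of the superpotential: n1*q1 + n2*q2 = 0.
assert n1 * q1 + n2 * q2 == 0
assert q1 + q2 < 0       # the assumption made in the text

def grad_W(p1, p2):
    """F-term equations: partial derivatives of W = p1^n1 * p2^n2."""
    return (n1 * p1**(n1 - 1) * p2**n2, n2 * p1**n1 * p2**(n2 - 1))

# Branch with phi1 = 0: both F-terms vanish, and the D-term
# q2*|phi2|^2 = 2*y_R forces y_R < 0.
assert grad_W(0, 3) == (0, 0)
y_R = Fraction(q2 * 3**2, 2)
assert y_R < 0

# Branch with phi2 = 0: again the F-terms vanish, and now y_R > 0.
assert grad_W(2, 0) == (0, 0)
assert Fraction(q1 * 2**2, 2) > 0
```

Any other exponents obeying (\ref{gaugeinv}) pass the same checks; only the overall scale $m$ changes.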
Assuming that their prescription is applicable to our case, the dual superpotential is \begin{equation} \tilde{W} = X_1^{|q_2|}+e^{\frac2{|q_2|}Y}X_1^{q_1}, \label{mirror} \end{equation} which is suitable for the $Y\to-\infty$ limit, and \begin{equation} \tilde{W} = X_2^{q_1}+e^{-\frac2{q_1}Y}X_2^{|q_2|}, \end{equation} for the $Y\to+\infty$ limit. The difference from the cases in section \ref{GLSM} is that the fundamental variables are $X_{1,2}$, not $\log X_{1,2}$. So the dual theory is really a minimal model, with a relevant (irrelevant) perturbation for $y_R\to-\infty$ ($y_R\to+\infty$), respectively (recall that $y_R\to-\infty$ is the IR limit when $q_1+q_2<0$). The minimal model appearing in (\ref{mirror}) is the same as (\ref{minimal}), since the gauge invariance condition (\ref{gaugeinv}) is solved as follows, \begin{equation} q_1 = m n_2, \hspace{5mm} q_2 = -m n_1, \end{equation} where $m$ can be absorbed by a rescaling of the gauge coupling $e$. Recall our assumption $q_1+q_2<0$. This implies $n_1>n_2$, which means that the central charge coming from the minimal model decreases by the tachyon condensation, as it should be. \vspace{5mm} One can construct an $N=2$ superconformal algebra acting on a subspace of the Hilbert space. See Appendix \ref{N=2} for the details. The presence of such an algebra strongly suggests that the IR limit of the GLSM is actually an $N=2$ SCFT with central charge \begin{equation} c = 3(1-p_1)+3(1-p_2)+(-3)+3\Bigl( 1+k\gamma^2 \Bigr), \end{equation} where $p_1,p_2$ are the $U(1)_V$ charges of $\Phi_1,\Phi_2$, respectively. It should be emphasized that the construction of the superconformal algebra does not depend on a particular limit, like $Y\to\pm\infty$. This implies that the IR limit is a CFT describing the following background: the background has one spatial direction corresponding to $y_R$.
There is a non-trivial tachyon field which grows exponentially in the $y_R\to-\infty$ region, and which looks like a massive field, exponentially damped, in the $y_R\to+\infty$ region. The existence of such a CFT is very interesting since this can be regarded as an $\alpha'$-exact version of a solution of the equation of motion of classical string theory, which has been discussed in \cite{Suyama1}\cite{YZ1}. Recall that the central charge coming from the minimal model part decreases after a tachyon condensation. Since the total central charge must be the same in both $y_R\to\pm\infty$ regions, which are parts of the same theory, the central charge coming from the $Y$ field must increase after the tachyon condensation. This means that the dilaton gradient varies as the tachyon condenses, which has been observed in the analysis of a solution of a low energy effective theory \cite{Suyama1}\cite{YZ1}\cite{MIT} and also in \cite{Suyama2}. \vspace{5mm} Another example is the GLSM with \begin{equation} W = \Phi_{0}G(\Phi_1,\cdots,\Phi_n), \end{equation} where $G(x_1,\cdots,x_n)$ is a quasi-homogeneous polynomial satisfying \begin{equation} G(\lambda^{q_1}x_1,\cdots,\lambda^{q_n}x_n) = \lambda^{-q_0}G(x_1,\cdots,x_n), \end{equation} for $\lambda\in{\bf C}^*$. $G(x_1,\cdots,x_n)$ is also required to be regular, that is, $G=0$ and $\partial_iG=0$ have no common solution away from the origin. The charges are assumed to satisfy \begin{equation} q_1,\cdots,q_n<0<q_0, \hspace{5mm} \sum_{i=1}^n|q_i| > q_0. \end{equation} The IR limit in the $y_R\to-\infty$ region is analogous to the Calabi-Yau phase \cite{GLSM}, that is, that region is described by a non-linear sigma model with target space $M\times{\bf R}$ where $M$ is a hypersurface $G=0$ in $WCP_{q_1,\cdots,q_n}$. On the other hand, the $y_R\to+\infty$ region is described by an LG orbifold with superpotential $W=G(\Phi_1,\cdots,\Phi_n)$.
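The quasi-homogeneity condition on $G$ is easy to check numerically. The sketch below is a hypothetical example in plain Python: the cubic $G=\sum_{i=1}^4 x_i^3$ with charges $q_1=\cdots=q_4=-1$ and $q_0=3$, chosen so that $\sum_i|q_i|=4>3=q_0$; it verifies $G(\lambda^{q_i}x_i)=\lambda^{-q_0}G(x)$ for a generic complex $\lambda$.

```python
# Hypothetical example: G(x) = x1^3 + x2^3 + x3^3 + x4^3,
# quasi-homogeneous with q_1 = ... = q_4 = -1 and q_0 = 3.
# Regularity also holds: the gradient (3*x1^2, ..., 3*x4^2)
# vanishes only at the origin.
q = [-1, -1, -1, -1]
q0 = 3
assert sum(abs(qi) for qi in q) > q0   # the condition assumed in the text

def G(x):
    return sum(xi**3 for xi in x)

lam = 1.7 + 0.3j                        # a generic element of C*
x = (0.2 + 0.1j, -1.1, 0.5, 0.9 - 0.4j)

scaled = tuple(lam**qi * xi for qi, xi in zip(q, x))
# G(lambda^{q_i} x_i) = lambda^{-q_0} G(x):
assert abs(G(scaled) - lam**(-q0) * G(x)) < 1e-9
```

A Fermat polynomial of any degree with equal charges passes the same check, as long as the number of fields exceeds $q_0$.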
Since one can construct an $N=2$ superconformal algebra in the previous example, it would be reasonable to expect that there is an $N=2$ SCFT which interpolates between the above two theories. This CFT may describe a decay of $M$ into an LG orbifold via a tachyon condensation. This may be an example in which a decay of a compact manifold would result in a final state different from ``nothing''. The dual superpotential is, however, not what would be expected. One would obtain the following superpotential \begin{equation} \tilde{W} = \sum_{i=1}^nX_i^{q_0}+e^{-\frac2{q_0}Y}\prod_{i=1}^nX_i^{|q_i|}. \end{equation} This is suitable for the $Y\to+\infty$ limit, but this limit should be described by a different superpotential, as mentioned above. It is very interesting to understand this situation better. \vspace{1cm} \section{$U(1)_V$ charges, central charge and gauge symmetry} \label{charge} \vspace{5mm} We have discussed various GLSM's in which $\gamma=\sum_iq_i\ne0$, and the FI-parameter is promoted to a twisted chiral superfield. This kind of model has some properties which are absent in GLSM's with $\gamma=0$. It might seem strange that the central charge (\ref{center}) depends on the $U(1)_V$ charges $p_i$ of $\Phi_i$. The definition of $p_i$ is ambiguous, since the $U(1)_V$ current can be modified by adding the gauge current. As a result, $p_i'=p_i+mq_i$ can also be regarded as $U(1)_V$ charges. However, this modification does not keep the central charge (\ref{center}) fixed, unless $\gamma=0$. One might think that this is related to the fact that the gauge current \begin{equation} j_g = \sum_{i=1}^nq_i\Bigl[ 2i{\cal D}_-\bar{\phi}_i\phi_i-\bar{\psi}_{i,-}\psi_{i,-} \Bigr], \end{equation} which acts on the $\bar{Q}_+$-closed subspace, is not a primary field, \begin{equation} T(x)j_g(0) \sim \frac{i\gamma}{(x^-)^3}-\frac1{(x^-)^2}j_g(0)-\frac1{x^-}\partial_-j_g(0).
\end{equation} It is known that the presence of the $(x^-)^{-3}$ term is a signal of the appearance of the mixed anomaly, \begin{equation} \partial_\mu j_g^\mu = a R^{(2)}, \end{equation} where $R^{(2)}$ is the scalar curvature of the worldsheet. However, contributions to the coefficient $a$ come not only from $\gamma$ but also from the coefficient of the $(x^+)^{-3}$ term of the $\bar{T}\bar{j}_g$ OPE. By exchanging $-$ and $+$, one can similarly construct the $N=2$ algebra acting on the $\bar{Q}_-$-closed subspace of the Hilbert space, and its OPE's are the same as those we have discussed. In particular, the coefficient of $(x^+)^{-3}$ in the $\bar{T}\bar{j}_g$ OPE is $i\gamma$. Since $a$ is proportional to the difference of these coefficients, the mixed anomaly is absent in our GLSM. Note that, of course, the zero mode of $j_g$ commutes with the generators of the $N=2$ algebra, so the gauge symmetry is preserved, although it is not promoted to an affine symmetry. If $\gamma=0$, $j_g$ is primary. Moreover, the following modified operators \begin{eqnarray} T_\beta &=& T+\frac i4\beta\partial j_g, \\ G_\beta &=& G, \\ \bar{G}_\beta &=& \bar{G}+i\beta\sum_{i=1}^nq_i\partial(\phi_i\bar{\psi}_i), \\ j_\beta &=& j-\frac12\beta j_g, \end{eqnarray} form the $N=2$ algebra with the same central charge. Therefore, the ambiguity of the choice of $U(1)_V$ charges is absent in the $\gamma=0$ case, in the sense that all choices provide the same central charge. In fact, the dependence of the central charge on such an ambiguity already exists in ungauged chiral models.
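How a second monomial removes this kind of ambiguity can be made concrete with a few lines of exact arithmetic. The sketch below is a hypothetical illustration in plain Python (`fractions`; the exponent pairs $(n_1,n_2)=(5,2)$ and $(m_1,m_2)=(3,3)$ are sample values): it solves the condition $W=\sum_i\alpha_i\Phi_i\partial_iW$ monomial by monomial, which for a two-monomial superpotential $\Phi_1^{n_1}\Phi_2^{n_2}+\xi\Phi_1^{m_1}\Phi_2^{m_2}$ is the linear system $\alpha_1n_1+\alpha_2n_2=1$, $\alpha_1m_1+\alpha_2m_2=1$, and then evaluates the central charge $c=3\sum_i(1-2\alpha_i)$; the coupling $\xi$ never enters.

```python
from fractions import Fraction as F

def charges(e1, e2):
    """Solve a1*e1[0] + a2*e1[1] = 1 and a1*e2[0] + a2*e2[1] = 1
    exactly, for two monomials with exponent vectors e1, e2."""
    det = e1[0] * e2[1] - e1[1] * e2[0]
    if det == 0:
        raise ValueError("degenerate case n1*m2 = n2*m1: charges not unique")
    a1 = F(e2[1] - e1[1], det)
    a2 = F(e1[0] - e2[0], det)
    return a1, a2

# Hypothetical exponents for W = Phi1^5 Phi2^2 + xi * Phi1^3 Phi2^3.
a1, a2 = charges((5, 2), (3, 3))
assert (a1, a2) == (F(1, 9), F(2, 9))
assert 5 * a1 + 2 * a2 == 1 and 3 * a1 + 3 * a2 == 1

# Central charge c = 3 * sum_i (1 - 2*alpha_i); xi drops out.
c = 3 * ((1 - 2 * a1) + (1 - 2 * a2))
assert c == 4
```

Taking $\xi\to0$ keeps $c$ fixed at the value set by $(m_1,m_2)$, which is the degeneration described in the text.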
In \cite{SCA1}, it is shown that a two-dimensional theory with the Lagrangian \begin{equation} L = \int d^4\theta\ \sum_{i=1}^n\bar{\Phi}_i\Phi_i+\int d^2\theta\ W(\Phi) \label{chiral} \end{equation} reduces in the IR limit to an $N=2$ SCFT with central charge \begin{equation} c = 3\sum_{i=1}^n(1-2\alpha_i), \end{equation} where $\alpha_i$ are determined by \begin{equation} W = \sum_{i=1}^n\alpha_i\Phi_i\partial_iW(\Phi). \label{euler} \end{equation} Therefore, if (\ref{chiral}) has a $U(1)$ symmetry, that is, $W$ also satisfies \begin{equation} 0 = \sum_{i=1}^nq_i\Phi_i\partial_iW(\Phi), \end{equation} then the $\alpha_i$ are determined only up to shifts proportional to $q_i$, and the central charge cannot be fixed. To see the origin of this ambiguity, consider the following superpotential, \begin{equation} W_{\xi,m} = \Phi_1^{n_1}\Phi_2^{n_2}+\xi\Phi_1^{m_1}\Phi_2^{m_2}. \end{equation} For this superpotential with $\xi\ne0$, (\ref{euler}) uniquely determines the $\alpha_i$, unless $n_1m_2=n_2m_1$. Then the central charge depends on $m_i$, but not on $\xi$. One can take the $\xi\to0$ limit while keeping the central charge fixed, so various CFT's with different central charges degenerate at $W=\Phi_1^{n_1}\Phi_2^{n_2}$, which we discussed in the previous section. Therefore, the IR limit would not be uniquely determined for such a ``degenerate'' superpotential, and there would be {\it the} $U(1)_V$ charges $\alpha_i$ (or $p_i$ in our notation) which should be distinguished from the $U(1)_V$ charges dictating the transformation properties of the fields. As mentioned above, any GLSM with a superpotential may have such an ambiguity, since the superpotential must preserve the gauge symmetry. It is very interesting to know how to fix this ambiguity. \vspace{1cm} \section{Discussion} \label{discuss} \vspace{5mm} We have discussed various GLSM's in which the FI-parameter is promoted to a twisted chiral superfield $Y$, and its application to closed string tachyon condensations.
The presence of $Y$ enables us to cancel a possible chiral anomaly, which makes it possible to have an $N=2$ SCFT in the IR limit. It is remarkable that any RG flow described by a GLSM with an FI-parameter can be used to construct an $N=2$ SCFT. Since it is conformal, it can be used as a background of string theory, which is an $\alpha'$-exact solution of the equations of motion at tree level. From the mirror description, at least some of them can be regarded as describing a background in which a tachyon grows along the $y_R$-direction. Therefore, the GLSM accomplishes the construction of an $\alpha'$-exact description of an on-shell tachyon condensation from the corresponding worldsheet RG flow. By using such GLSM's, we have described tachyon condensations in which a part of a target space disappears or the topology of the target space changes. In other cases, we have shown that the dilaton gradient varies as a tachyon condenses, when the condensation decreases the central charge of a part of the system. The latter phenomenon has been observed in \cite{Suyama1}\cite{YZ1}\cite{MIT} at the level of the spacetime effective theory, and in this paper we have shown that this is indeed the case also at the level of $\alpha'$-exact solutions. It is also interesting that the RG scale (FI-parameter in GLSM) is related to the field $Y$ in order to obtain a consistent background for tachyon condensations, and the $Y$ field really looks like the Liouville field. This may be understood as an explicit realization of the description of tachyon condensations proposed in \cite{Suyama2}. It may seem curious that the explicit expression of the energy-momentum tensor (\ref{EM}) shows a simple linear dilaton background for $y$, although we claimed that the dilaton gradient varies along $y_R$. In fact, our claim is that a non-trivial dilaton appears after some degrees of freedom, having become massive, are integrated out.
Note that the contribution to the central charge from the vector superfield $V$ is $-3$, which is the right amount to cancel the contribution from a chiral superfield with zero $U(1)_V$ charge. So the Higgs mechanism and the subsequent integrating out of massive fields do not change the total central charge, if the $U(1)_V$ charges of the fields are suitably chosen. For example, if one chooses $p_1=\frac2{n_1}, p_2=0$ for the case (\ref{eg1}), then in the $y_R\to-\infty$ region where $\Phi_2$ becomes massive, the contributions to the central charge from $\Phi_2$ and $V$ cancel each other. So we can simply ignore those fields, and therefore the dilaton gradient is the one appearing in the energy-momentum tensor (\ref{EM}). Then what happens in the $y_R\to+\infty$ region? As mentioned in section \ref{charge}, one cannot redefine the $U(1)_V$ charges as $p_1=0,p_2=\frac2{n_2}$ by using the gauge symmetry while keeping the central charge fixed. Instead, one can regard the $N=2$ algebra for $(p_1,p_2)=(\frac2{n_1},0)$ as the algebra for $(p_1,p_2)=(0,\frac2{n_2})$ ``twisted'' by the gauge current. For example, \begin{equation} T_{p_1=\frac2{n_1},p_2=0} = T_{p_1=0,p_2=\frac2{n_2}}+\frac i4\beta\partial_-j_g, \end{equation} where $\beta=\frac2{n_1q_1}=-\frac2{n_2q_2}$. The central charge coming from $\Phi_1$ through $T_{p_1=0,p_2=\frac2{n_2}}$ is canceled by $V$, but $j_g$ also contributes to the central charge, due to the presence of the $(x^-)^{-3}$ term, and this part would be absorbed into the Liouville part by a non-trivial field redefinition, resulting in a dilaton gradient different from that in (\ref{EM}). It is worth emphasizing again that for any RG flow described by a GLSM one can construct an $N=2$ SCFT in which the RG scale is replaced with a spatial coordinate. Therefore, in principle, we have obtained various CFT's describing tachyon condensations of various kinds. Since these are on-shell backgrounds, they can indeed be realized in string theory. Several examples were discussed in this paper.
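The bookkeeping in this paragraph is a short exact computation. The sketch below is a hypothetical illustration in plain Python (`fractions`; $(n_1,n_2)=(5,4)$ with $m=1$, hence $(q_1,q_2)=(4,-5)$, are sample values): it checks that the two expressions for $\beta$ agree precisely because of the gauge-invariance condition (\ref{gaugeinv}), and that the contribution $3(1-p)$ of a zero-charge chiral superfield cancels the $-3$ of the vector superfield.

```python
from fractions import Fraction as F

# Hypothetical sample values: q1 = m*n2, q2 = -m*n1 with m = 1.
n1, n2 = 5, 4
q1, q2 = 4, -5
assert n1 * q1 + n2 * q2 == 0          # gauge invariance of W

# beta = 2/(n1*q1) = -2/(n2*q2): the two expressions coincide
# exactly when the gauge-invariance condition holds.
beta = F(2, n1 * q1)
assert beta == F(-2, n2 * q2)

# Central-charge bookkeeping with (p1, p2) = (2/n1, 0): a chiral
# superfield of U(1)_V charge p contributes 3*(1 - p), the vector
# superfield contributes -3, so Phi2 (p2 = 0) and V cancel.
def chiral(p):
    return 3 * (1 - p)

assert chiral(0) + (-3) == 0
# What survives is the minimal-model value 3*(1 - 2/n1) = 9/5 here:
assert chiral(F(2, n1)) == F(9, 5)
```

Repeating the check with $(p_1,p_2)=(0,\frac2{n_2})$ cancels $\Phi_1$ against $V$ instead, which is the twisted assignment used in the text.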
It is expected that a systematic study of these CFT's and the corresponding tachyon condensations would provide a deeper understanding of closed string tachyon condensations. In particular, it would help in understanding which kinds of tachyons make the target space disappear and which do not. Note that, as mentioned at the end of section \ref{W}, not all CFT's obtained from GLSM's describe target space dynamics induced by tachyon condensations. It is very important to clarify this issue. The relation between a two-dimensional black hole \cite{BH} and a tachyon condensation, mentioned in section \ref{example}, seems to be interesting. Naively, by Wick-rotating the $y_I$-direction, not $y_R$, one might obtain the Lorentzian black hole background, and the region where the tachyon condenses is behind the horizon. Therefore, it is tempting to guess that an inhomogeneous tachyon condensation would result in the formation of a black hole. Then a homogeneous tachyon condensation might be understood as the whole target space falling inside the horizon. Another interesting point of this relation is that the two-dimensional black hole has a matrix model description \cite{KKK}. It would be very interesting if one could use such a matrix model technique to analyze a closed string tachyon condensation non-perturbatively. A non-perturbative analysis of closed string tachyon condensation is very important since a typical endpoint of the condensation would be strongly coupled. A relation between tachyon condensation and the two-dimensional black hole was already mentioned in \cite{Olsson}. So far, we have discussed static space-dependent tachyon backgrounds. To study the time evolution of a tachyon condensation, one should perform the Wick rotation of the spacelike Liouville direction into a timelike one. There may be another way to extract a time-dependent process from our CFT. Let us discuss the spacetime effective action employed in \cite{Suyama1}\cite{YZ1}.
To obtain a space-dependent solution, one would make an ansatz \begin{equation} ds^2 = e^{A(y)}\eta_{\mu\nu}dx^\mu dx^\nu+dy^2, \hspace{5mm} \Phi=\Phi(y), \hspace{5mm} T=T(y). \end{equation} Then the equations of motion reduce to the following, \begin{eqnarray} \frac12(T')^2-V(T) &=& 2(\Phi')^2, \\ \Phi'' &=& \frac12(T')^2, \\ A &=& \mbox{const.} \end{eqnarray} At a local minimum of the potential below zero, the tachyon $T$ can stop its rolling, and the dilaton is linear in the $y$-direction. Consider a geodesic motion in this background in the Einstein frame. The geodesic equation is \begin{equation} \frac{d^2y}{dt^2} = -\frac12\phi'\Bigl[ 1-\Bigl( \frac{dy}{dt} \Bigr)^2 \Bigr], \end{equation} where $\phi=-\frac4{D-1}\Phi$ and $t=x^0$. This indicates that all test particles are accelerated toward the positive $y$-direction, and their speed approaches the speed of light. The monotonicity of the geodesics seems to be related to the monotonicity of the RG flow behind the tachyon condensation. It would be interesting if the target space which an observer moving along the geodesic sees were a Penrose limit of the above solution. It is known that a Penrose limit of any solution in string theory and M-theory preserves at least 16 supersymmetries, so the properties of the endpoint of the condensation might be understood, although the corresponding background is strongly coupled and the metric becomes singular. \vspace{2cm} \begin{flushleft} {\Large \bf Acknowledgments} \end{flushleft} \vspace{5mm} I would like to thank S.Iso, T.Takayanagi and T.Tokunaga for valuable discussions. This work was supported in part by JSPS Research Fellowships for Young Scientists. \newpage
\section{Introduction} Hyperbolically embedded subgroups were introduced in~\cite{osin2} by Dahmani, Guirardel and Osin; their definition is recalled in Section~\ref{sec:HypEmbDef}. Given a group $G$, $X\subset G$ and $H\leq G$, let $H\hookrightarrow_h (G,X)$ denote that $H$ is a hyperbolically embedded subgroup of $G$ with respect to $X$. The class of groups containing a hyperbolically embedded subgroup includes hyperbolic groups, relatively hyperbolic groups, and some non-relatively hyperbolic examples including most mapping class groups of surfaces. Let $\Gamma$ be a simplicial graph, let $v$ be a vertex of $\Gamma$, and let \begin{align*} T_v \Gamma = \{w \in V(\Gamma) \mid \{v,w\}\in E(\Gamma)\} \end{align*} denote the set of vertices adjacent to $v$. For $x,y \in T_v \Gamma$, the \emph{angle metric} $\angle_v (x,y)$ is the combinatorial length of the shortest path in the graph $\Gamma - \{v\}$ between $x$ and $y$, with $\angle_v (x,y) = \infty$ if there is no such path. The graph $\Gamma$ is \emph{fine at $v$} if $(T_v \Gamma,\angle_v)$ is a locally finite metric space. The graph $\Gamma$ is \emph{fine} if it is fine at every vertex; this notion was introduced by Bowditch~\cite{bowditch}. There is an approach to relatively hyperbolic groups by Bowditch based on the notion of \emph{fine graph}~\cite{bowditch}. The main result of the note is a characterization of hyperbolically embedded subgroups that generalizes Bowditch's cited approach. \begin{definition}\label{def:GHgraph} Let $G$ be a group and let $H$ be a subgroup.
A graph $\Gamma$ is a $(G,H)$-graph if $G$ acts on $\Gamma$ and \begin{enumerate} \item $\Gamma$ is connected and hyperbolic, \item there are finitely many $G$-orbits of vertices, \item $G$-stabilizers of vertices are finite or conjugates of $H$, and there is a vertex with $G$-stabilizer equals $H$, \item $G$-stabilizers of edges are finite, and \item \label{item:05} $\Gamma$ is fine at $V_\infty (\Gamma) = \{v \in V(\Gamma) \mid v \text{ has infinite stabilizer} \}$. \end{enumerate} \end{definition} Bowditch's definition~\cite[Definition 2]{bowditch} of relative hyperbolicity can be phrased as follows: A group {$G$ is hyperbolic relative to a subgroup $H$} if and only if there exists a $(G,H)$-graph that has finitely many $G$-orbits of edges. This note proves the following result. \begin{thm}\label{thm:main} An infinite subgroup $H$ is hyperbolically embedded in $G$ if and only if there exists a $(G,H)$-graph. \end{thm} The ``only if'' direction of Theorem~\ref{thm:main} is straightforward, namely, if $H\hookrightarrow_h (G,X)$ then the coned-off Cayley graph $\hat\Gamma(G,H,X)$, defined below, is a $(G,H)$-graph by Proposition~\ref{q2}, which is discussed in Section~\ref{sec:HypEmbDef}. \begin{definition}[Coned-off Cayley Graph] Let $G$ be a group, $X\subset G$ and $H\leq G$. The \emph{Coned-off Cayley graph $\hat \Gamma(G,H,X)$} is the graph with vertex set the $G$-set $G\cup G/H$, where $G/H$ is the set of left cosets of $H$ in $G$; and edge set $\{\{g,gx\}\colon g\in G \text{ and } x\in X\} \cup \{ \{g, gH\} \colon g\in G \}$. Vertices of $\hat \Gamma(G,H,X)$ in $G/H$ are called \emph{cone vertices}. The notion of coned-off Cayley graph was introduced by Farb~\cite{farb}. \end{definition} \begin{proposition} \label{q2} Let $G$ be a group, $X\subset G$ and $H\leq G$. Then $H \hookrightarrow_h (G,X) $ if and only if $\hat{\Gamma}(G,H,X)$ is connected, hyperbolic, and fine at cone vertices.
\end{proposition} The converse of Theorem~\ref{thm:main} follows from Proposition~\ref{q2} and the main technical result of the note: \begin{prop}\label{fin} Let $H$ be an infinite subgroup of $G$. If there is a $(G,H)$-graph then there is a subset $X$ of $G$ such that the Coned-off Cayley graph $\hat{\Gamma} (G,H,X)$ is a $(G,H)$-graph. \end{prop} The skeleton of the proof of Proposition~\ref{fin} is as follows. We define the notion of \emph{thick} $(G,H)$-graph, see Definition~\ref{def:thick}. In Section~\ref{sec:05}, we prove that if there is a $(G,H)$-graph, then there is a thick $(G,H)$-graph, see Proposition~\ref{prop:Existencethick}; and that if there is a thick $(G,H)$-graph then there is $X\subset G$ such that $\hat{\Gamma} (G,H,X)$ is a $(G,H)$-graph, see Proposition~\ref{prop:end}. These two results yield Proposition~\ref{fin} and the main result follows. Roughly speaking, a thick $(G,H)$-graph is a $(G,H)$-graph $\Delta$ that contains as a $G$-subgraph the coned-off Cayley graph $\hat\Gamma(G,H,S)$ for $S$ a finite relative generating set of $G$ with respect to $H$. To construct a thick graph from a $(G,H)$-graph we perform a finite number of attachments of $G$-orbits of edges and vertices. Proving that attaching a new $G$-orbit of edges to a $(G,H)$-graph produces a new $(G,H)$-graph is a non-elementary argument which is the content of Section~\ref{sec:ModifyingGH-graphs}. Specifically, Proposition~\ref{lem2.2hs} generalizes results from Wise and the first author~\cite[Lemma 2.9]{MPW}, and Bowditch~\cite[Lemma 2.3]{bowditch}; the arguments in those references do not seem to generalize to our broader context and a new strategy was required. \subsection*{Acknowledgments.} Some of the results of this note are based on the Master's thesis of the second author at Memorial University of Newfoundland under the supervision of the first author~\cite{Rashid20}.
In that work an attempt to prove a weaker version of Theorem~\ref{thm:main} is outlined, using a more restrictive notion of $(G,\mathcal{P})$-graph. The second author acknowledges funding by the Natural Sciences and Engineering Research Council of Canada (NSERC). \section{Preliminaries} Most graphs considered in this note are $1$-dimensional simplicial complexes. A \emph{graph} is an ordered pair $(V,E)$, where $V$ is a set, and $E$ is a relation on $V$ that is anti-reflexive and symmetric. Elements of the set $V$ are called \textit{vertices}, and elements of the set $E$ are called \textit{edges}. For a graph $\Gamma$, we denote by $V(\Gamma)$ and $E(\Gamma)$ its vertex and edge set, respectively. If $v\in V(\Gamma)$, $e\in E(\Gamma)$ and $v\in e$, then $v$ is \textit{incident} to $e$. Vertices incident to the same edge are called \textit{adjacent}. For a vertex $w \in V(\Gamma)$, the graph $\Gamma - w$ is defined as the graph with vertex set $V(\Gamma) - \{w\}$ and edge set $E(\Gamma)- \{\{v,w\} \mid v \in V(\Gamma) \}$. A \emph{path} or an \emph{edge-path} from a vertex $v_0$ to a vertex $v_n$ of $\Gamma$ is a sequence of vertices $[v_0, v_1 \dots , v_n]$, where ${v_i}$ and $v_{i+1}$ are adjacent for all $i \in \{0, \dots , n-1 \}$. Its \emph{reverse-path} is $[v_n, v_{n-1} \dots , v_0]$. A \emph{subpath} of the path $[v_0,v_1,\ldots,v_n]$ is a path of the form $[v_i,v_{i+1},\ldots ,v_j]$ for some $0\leq i < j\leq n$, or of the form $[v_i]$ for some $0\leq i\leq n$. The length of a path is one less than the total number of vertices in the sequence. If no vertex on a path appears in the sequence more than once, the path is called an \emph{embedded path}. The concatenation of two paths $\alpha = [u_0, u_1 \dots , u_n]$ and $\beta = [v_0, v_1 \dots , v_m] $ such that $u_n=v_0$ is $[\alpha,\beta] = [u_0, u_1 \dots , u_n, v_1 \dots , v_m]$.
Analogously, the concatenation of a vertex $a$ and a path $\alpha = [u_0, u_1 \dots , u_n]$ such that $a$ and $u_0$ are adjacent is $[a,\alpha]=[a,u_0, u_1 \dots , u_n]$. A graph is \emph{connected} if there is a path between any two vertices. In a connected graph, the \emph{path-distance} between vertices is the length of the shortest path between them; this defines a metric on the set of vertices called the \emph{path metric}. For a graph $\Gamma$, we denote this distance by $\dist_\Gamma$. An \emph{$(L,C)$-quasi-isometry} $q\colon \Gamma\to \Delta$ between connected graphs is a function $q\colon V(\Gamma) \to V(\Delta)$ such that \[ \frac{1}{L}\dist_\Gamma(x,y) - C \leq \dist_\Delta(q(x),q(y)) \leq L\dist_\Gamma(x,y)+C\] for any $x,y\in V(\Gamma)$, and for any $z\in V(\Delta)$ there is $x\in V(\Gamma)$ such that $\dist_\Delta(q(x),z)\leq C$. A graph with a $G$-action is called \emph{a $G$-graph}. For a vertex $v$ of $\Gamma$, the \emph{$G$-stabilizer} of $v$ is $G_v=\{g \in G \colon g.v = v\}$, and the $G$-orbit of $v$ is $G.v=\{g.v\colon g\in G\}$. Define analogously $G$-stabilizers and $G$-orbits of edges. \section{Hyperbolically Embedded Subgroups and Coned-off Cayley Graphs}\label{sec:HypEmbDef} \begin{definition}[Hyperbolically Embedded Subgroups]\cite{osin2} Let $G$ be a group, let $H$ be a subgroup, and let $X\subset G$. Suppose that $G$ is generated by $X\cup H$. Let $\Gamma(G, X\sqcup H)$ be the Cayley graph of $G$ whose edges are labeled by letters from the alphabet $X\sqcup H$. Note that $\Gamma(G, X\sqcup H)$ is not a simplicial graph. For $h,k\in H$, let $\hat d_H (h,k)$ be the length of the shortest edge-path from $h$ to $k$ with the property that if an edge is labeled by an element of $H$ then the endpoints of the edge are not elements of the subgroup $H$ (this type of path is called admissible); if there is no admissible path between $h$ and $k$, let $\hat d_H (h,k)=\infty$.
The subgroup $H$ is \emph{hyperbolically embedded in $G$ with respect to $X$}, denoted as $H \hookrightarrow_h (G,X)$, if \begin{enumerate} \item $G$ is generated by $X\cup H$, \item $\Gamma(G,X\sqcup H)$ is a hyperbolic graph, and \item the metric space $(H, \hat d_H)$ has the property that any ball of finite radius has finitely many elements. \end{enumerate} If $H \hookrightarrow_h (G,X)$ for some $X$, then we write $H \hookrightarrow_h G$. \end{definition} The conclusion of Proposition~\ref{q2} is contained in the following lemma. \begin{lem} \label{prop:ConedOffCayleyGraph} \label{khat} \label{lem3.2} \label{cqc} \label{ccg} Let $G$ be a group, $X\subseteq G$, and $H\leq G$ an infinite subgroup. Then \begin{enumerate} \item $G=\langle X \cup H \rangle$ if and only if $\hat{\Gamma}(G,H,X)$ is connected if and only if $\Gamma(G, H\cup X)$ is connected. \item If $G=\langle X\cup H \rangle$, then $\hat{\Gamma}(G,H,X)$ is quasi-isometric to ${\Gamma}(G,H \cup X)$. \item $\hat\Gamma(G,H,X)$ is fine at $G/H$ if and only if $(H,\hat d_H)$ is locally finite. \item $H\hookrightarrow_h (G,X)$ if and only if $\hat\Gamma(G,H,X)$ is connected, hyperbolic and fine at cone vertices. \end{enumerate} \end{lem} \begin{proof} Consider the inclusion of the vertex set of $\Gamma(G, H\cup X)$ into the vertex set of $\hat \Gamma(G,H,X)$. For any $a,b\in G$, they are adjacent by an edge of $\Gamma(G, H\cup X)$ with label in $H$ if and only if $a$ and $b$ are both adjacent to the same cone vertex of $\hat\Gamma(G,H,X)$; and they are adjacent by an edge with label in $X$ if and only if they are adjacent in $\hat\Gamma(G,H,X)$. It follows that $\hat\Gamma (G,H,X)$ and $\Gamma(G,H\cup X)$ are both connected, or both disconnected. Hence, if both graphs are connected, \[ \frac12\dist_{\Gamma}(x,y)\leq \dist_{\hat\Gamma}(x,y) \leq 2\dist_\Gamma(x,y) \] for any $x,y\in G$.
In particular, the inclusion $G\hookrightarrow G\cup G/H$ is a quasi-isometry $\Gamma(G,H\cup X) \to \hat\Gamma (G,H,X)$. For the third statement, observe that the metric spaces $(H, \hat d_H)$ and $(T_H \hat \Gamma, \angle_H)$ have the same underlying set and \[ \frac12 \hat d_H (h,k) \leq \angle_H(h,k) \leq 2\hat d_H (h,k) \] for any $h,k\in H$. Moreover, the metric spaces $(T_{gH}\hat\Gamma, \angle_{gH})$ for $gH\in G/H$ are all isometric. Therefore $(H, \hat d_H)$ is a locally finite metric space if and only if $\hat \Gamma$ is fine at $G/H$. \end{proof} \section{Edge-attachments to $(G,H)$-graphs} \label{sec:ModifyingGH-graphs} In this section, we prove Proposition~\ref{lem2.2hs} and Corollary~\ref{attach} stated below. \begin{definition} Let $\Gamma$ and $\Gamma'$ be $G$-graphs such that $V(\Gamma)=V(\Gamma')$ as $G$-sets. Let $u,v\in V(\Gamma)$ be distinct vertices such that $\{u,v\} \not \in E(\Gamma)$. If \begin{align*} V(\Gamma') &= V(\Gamma),\\ E(\Gamma') &= E(\Gamma) \cup \left\{\{g.u,g.v\} \mid g \in G \right\}, \end{align*} then we say that $\Gamma'$ is \emph{obtained from $\Gamma$ by attaching a $G$-orbit of edges with representative $\{u,v\}$}. \end{definition} \begin{proposition} \label{lem2.2hs} Let $\Gamma$ be a connected $G$-graph such that the $G$-stabilizers of edges are finite, and let $u,v \in V(\Gamma)$ be such that $u \neq v$. Suppose that $\Gamma'$ is a $G$-graph obtained from $\Gamma$ by attaching a $G$-orbit of edges with representative $\{u,v\}$. Then \begin{enumerate} \item If $a \in V(\Gamma)$ and $\Gamma$ is fine at $a$, then $\Gamma'$ is fine at $a$. \item The graph $\Gamma'$ is connected and the inclusion $\imath\colon\Gamma \hookrightarrow \Gamma'$ is a quasi-isometry. \item If $a\in V(\Gamma)$ and $T_a\Gamma$ has finitely many $G_a$-orbits, then $T_a\Gamma'$ has finitely many $G_a$-orbits. \end{enumerate} \end{proposition} The first statement of Proposition~\ref{lem2.2hs} is the main technical result of this note.
It is a generalization of~\cite[Lemma 2.9]{MPW} and~\cite[Lemma 2.3]{bowditch} where the results are proved under the assumption that the graph $\Gamma$ is fine. The weaker assumption of having fineness at only a subset of vertices requires a different and non-trivial proof strategy. We record a corollary that is used in the proof of the main result of the note. \begin{corollary}\label{attach} Let $\Gamma$ be a $(G,H)$-graph and let $u,v \in V(\Gamma)$ be such that $u \neq v$. If $\Gamma'$ is a $G$-graph obtained from $\Gamma$ by the attachment of a $G$-orbit of edges with representative $\{u,v\}$, then $\Gamma'$ is a $(G,H)$-graph and the inclusion $\imath\colon\Gamma \hookrightarrow \Gamma'$ is a quasi-isometry. \end{corollary} The proof of Proposition~\ref{lem2.2hs} is discussed in subsection~\ref{subsec:proofAttaching}, and the proof of Corollary~\ref{attach} is in subsection~\ref{subsec:CorAttaching}. \subsection{A couple of Lemmas on Fine Graphs} Let $\Gamma$ be a graph. A path $[u, u_1 \dots , u_n]$ in $\Gamma$ is an \emph{escaping path from $u$ to $v$} if $v=u_i$ for some $i\in \{1, \dots , n\}$, and $u_i \neq u$ for every $i \in \{1, \dots , n\}$. For vertices $u$ and $v$ of $\Gamma$ and $k \in \mathbb{Z}_{+}$, define: \[ \vec{uv} (k)_\Gamma = \{w \in T_u \Gamma \mid w \text{ belongs to an escaping path from }u\text{ to }v\text{ of length } \leq k\}. \] \begin{lemma} \label{fatv} A graph $\Gamma$ is fine at $u \in V(\Gamma)$ if and only if $\vec{uv}(k)_\Gamma$ is a finite set for every integer $k>0$ and every vertex $v \in V(\Gamma)$. \end{lemma} \begin{proof} Observe that for any $u,v\in V(\Gamma)$, $k>0$ and any $w\in \vec{uv}(k)_\Gamma$, \[ \vec{uv}(k)_\Gamma \subseteq B_{T_u \Gamma}(w, 2k-2) \quad \text{ and } \quad B_{T_u \Gamma}(w, k) \subseteq \vec{uw}(k+1)_\Gamma,\] where $B_{T_u \Gamma}(w, r)$ denotes the closed ball in $(T_u \Gamma,\angle_u)$ centered at $w$ of radius $r$. The statement of the lemma is then an immediate consequence.
\end{proof} \begin{lemma} \label{lem2.1hs} Let $\Gamma$ be a connected $G$-graph with finite edge stabilizers. Suppose that $\Gamma$ is fine at $u \in V(\Gamma)$. Then for any vertex $v \in V(\Gamma)$, $G_u \cap G_v$ is finite or $u = v$. \end{lemma} \begin{proof} Let $H$ denote $G_u\cap G_v$. Since $\Gamma$ is connected, there is a minimum length path $[u_0,u_1,\ldots,u_k]$ from $u$ to $v$. Let $U=\{g.u_1 \colon g\in H\}$ and observe that this is an $H$-set. Since $H$ fixes both $u$ and $v$, it follows that $U \subseteq B_{T_u\Gamma}(u_1, 2k-2)$. Therefore, by the assumption that $\Gamma$ is fine at $u$, $U$ is a finite $H$-set. It follows that for any vertex $w \in U$, the edge $e=\{u,w\}$ is fixed by a finite index subgroup $H\cap G_e$ of $H$. Since edge $G$-stabilizers are finite, $G_e\cap H$ is a finite subgroup and therefore $H$ is finite. \end{proof} \subsection{Proof of Proposition~\ref{lem2.2hs} } \label{subsec:proofAttaching} \begin{proof}[Proof of Proposition~\ref{lem2.2hs}] By Lemma~\ref{fatv}, to prove that $\Gamma'$ is fine at $a$ it is enough to show that for every integer $k\geq 1$ and for every $b \in V(\Gamma')$ with $b \neq a$, the set $\vec{ab}(k)_{\Gamma'}$ is finite. Fix $b\in V(\Gamma')$, $b\neq a$, and $k\geq1$. Let $\alpha$ be a minimal length embedded path from $u$ to $v$ in $\Gamma$; such a path exists since $\Gamma$ is connected. Suppose that the length of $\alpha$ is $\ell$, \[\alpha=[u,u_1,\ldots, u_{\ell-1}, v] .\] Observe that if $\ell=1$ then $\Gamma=\Gamma'$ and there is nothing to prove. Assume that $\ell>1$. Let $\hat{\alpha}$ be the reverse path from $v$ to $u$. Let \[ n = k\ell\] and let \[X_0 = \{ x\in \vec{ab}(k)_{\Gamma'} \mid x \not\in T_a\Gamma \}.\] To prove that $\vec{ab}(k)_{\Gamma'}$ is finite, we show that $X_0$ is a finite set, define inductively a sequence of finite subsets $\vec{ab}(n)_\Gamma=W_n\subseteq W_{n-1} \subseteq \cdots \subseteq W_1 $, and prove that $\vec{ab}(k)_{\Gamma'} \subseteq W_1 \cup X_0$.
For the rest of the proof, we use the following terminology: a subpath of length two of a path $P$ is called a \emph{corner of $P$}. Define subsets $W_1,W_2,\dots , W_{n}$ and $Z_1,Z_2,\dots , Z_{n-1}$ of $T_a \Gamma$ as follows. Let \begin{align*} W_n &= \vec{ab} (n) _\Gamma. \end{align*} Suppose $W_j$ has been defined, and let \begin{align*} Z_{j-1} &= W_j \cup \{z \in T_a \Gamma \mid \exists \text{ corner } c \text{ of } \alpha \text{ or } \hat{\alpha} \text{ such that } g.c = [z,a,w] \text{ for } g \in G, w \in W_j \}. \\ W_{j-1} &= W_j \cup \{w \in T_a \Gamma \mid \exists z \in Z_{j-1} \text{ such that } \angle_{T_a \Gamma} (z,w) \leq n \}. \end{align*} It is immediate that \begin{align} \label{w} W_j \subseteq Z_{j-1} \subseteq W_{j-1}, \end{align} in particular if $z \in Z_{j-1}$ then $\angle_{T_a \Gamma} (z,z) = 0$ and hence $z \in W_{j-1}$. \begin{lem}\label{g1} For $j \leq n$, $Z_{j-1}$ and $W_j$ are finite sets. In particular, $W_1$ is finite. \end{lem} \begin{proof} \emph{If $W_j$ is finite, then $Z_{j-1}$ is finite.} This is a consequence of the assumption that the $G$-stabilizers of edges in $\Gamma$ are finite. Suppose to the contrary that there are infinitely many $z \in T_a \Gamma$ for which there exist a corner $c$ of $\alpha$ or $\hat{\alpha}$, an element $w \in W_j$ and $g \in G$ such that $g.c=[z,a,w]$. Since $W_j$ is finite and $\alpha$ and $\hat{\alpha}$ have only finitely many corners, there are a fixed $w \in W_j$ and a fixed corner $c$ such that $g.c=[z,a,w]$ for infinitely many $z \in Z_{j-1}$ and corresponding elements $g \in G$; in particular there are infinitely many such $g$. Pick any two of them, say $g_1$ and $g_2$. Then $g_2 g_{1}^{-1}$ stabilizes the edge $\{a,w\} \in E(\Gamma)$. This implies that the stabilizer of the edge $\{a,w\}$ is infinite, which contradicts the hypothesis. Hence, $Z_{j-1}$ is finite.
\emph{If $Z_j$ is finite then $W_j$ is finite.} Indeed, since $\Gamma$ is fine at $a$, balls in $(T_a\Gamma, \angle_a)$ are finite; therefore if $Z_j$ is finite then $\{w \in T_a \Gamma \mid \exists z \in Z_{j} \text{ such that } \angle_{T_a \Gamma} (z,w) \leq n \}$ is finite and hence $W_j$ is finite. The lemma follows since Lemma~\ref{fatv} implies that $\vec{ab}(n)_\Gamma = W_n$ is finite. \end{proof} Let $\gamma'$ be a path in $\Gamma'$. Let $\gamma$ be the path in $\Gamma$ obtained by replacing each subpath of length one of the form $[g.u,g.v]$ or $[g.v,g.u]$ for some $g \in G$ by the path $g. \alpha$ or $g. \hat{\alpha}$ (make a choice of $g$ if necessary), respectively. Since this construction is used once more in the proof, we refer to the path $\gamma$ as the \emph{$\alpha$-replacement of $\gamma'$}. \begin{lem} \label{g} Let $\gamma'$ be an escaping path in $\Gamma'$ from $a$ to $b$ of length at most $k$, and let $\gamma$ be its $\alpha$-replacement. Then \[ \gamma'\cap T_a\Gamma \subseteq \gamma \cap T_a\Gamma \subseteq W_1,\] where $\gamma' \cap T_a\Gamma$ is the set of vertices of $\gamma'$ that belong to $T_a \Gamma$, and $\gamma\cap T_a\Gamma$ is defined analogously. \end{lem} \begin{proof} By construction, the set of vertices of $\gamma'$ is a subset of the set of vertices of $\gamma$. Hence $\gamma'\cap T_a\Gamma \subseteq \gamma \cap T_a\Gamma$. Observe that $\gamma$ is of the form $[a,\gamma_1, a,\gamma_2, a, \dots ,a, \gamma_m]$, where each $\gamma_i$ does not contain the vertex $a$. Note that $m\leq n$ and that $\gamma$ is not escaping when $m>1$. Let $w_i$ and $z_i$ denote the initial and terminal vertices of $\gamma_i$, respectively. The inclusions~\eqref{w}, together with the observation that $\gamma \cap T_a\Gamma =\{w_1,z_1,\dots,z_{m-1}, w_m\}$, show that to conclude the proof of the lemma it is enough to show that $z_{i-1} \in Z_{i-1}$ and $w_i \in W_i$ for $i \leq m$. This is done inductively using the following four claims. \emph{Claim 1.
If $z_{i+1} \in Z_{i+1}$ then $w_{i+1}\in W_{i+1}$}. Indeed, since $\angle_{T_a \Gamma} (z_{i+1}, w_{i+1}) \leq n$ and $z_{i+1} \in Z_{i+1}$, it follows that $w_{i+1} \in W_{i+1}$. \emph{Claim 2. If $w_{i+1}\in W_{i+1}$ then $z_i\in Z_i$.} As $[z_{i},a,w_{i+1}]$ is the translation of a corner of $\alpha$ or $\hat{\alpha}$, if $w_{i+1} \in W_{i+1}$, then by definition $z_{i} \in Z_{i}$. \emph{Claim 3. $w_m \in W_m$.} Note that $[a,\gamma_m]$ is an escaping path of length at most $n$ from $a$ to $b$ in $\Gamma$. Therefore $w_m \in \vec{ab}(n)_\Gamma = W_n$. Since $m\leq n$, it follows that $w_m \in W_n \subseteq W_m$. \emph{Claim 4. $z_{m-1} \in Z_{m-1}$.} Note that $[z_{m-1},a,w_m]$ is the translation of a corner of $\alpha$ or $\hat{\alpha}$; since $w_m \in W_m$, by definition, $z_{m-1} \in Z_{m-1}$. \end{proof} \begin{lemma}\label{lem:eduardo} $X_0$ is a finite set. \end{lemma} \begin{proof} Suppose that $X_0$ is an infinite set. Note that if $x\in X_0$ then there is $g\in G$ such that either $g.u=a$ and $g.v=x$, or $g.u=x$ and $g.v=a$. Without loss of generality, assume that the set $Y=\{g\in G \mid \text{ $g.u=a$ and $g.v \in \vec{ab}(k)_{\Gamma'}$} \}$ is infinite. For each $g\in Y$, let $\gamma_g'$ be an escaping path from $a$ to $b$ of length $\leq k$ in $\Gamma'$ that contains $[a,g.v]$ as an initial subpath. Let $\gamma_g$ be the $\alpha$-replacement of $\gamma_g'$. Since $\alpha=[u,u_1,\ldots, u_{\ell-1}, v]$, observe that $g.\alpha = [a,g.u_1,\ldots, g.u_{\ell-1}, g.v]$ is an initial subpath of $\gamma_g$. By Lemmas~\ref{g1} and~\ref{g}, $W_1$ is finite and $\gamma_g \cap T_a\Gamma \subseteq W_1$ for any $g\in Y$. It follows that there exists a vertex $w \in W_1$ such that $[a,w]$ is an initial subpath of $\gamma_g$ for infinitely many distinct elements $g\in Y$.
Therefore, there are infinitely many distinct elements $g\in Y$ that map the edge $\{u,u_1\}$ to $\{a,w\}$, but that means that the edge $\{a,w\}$ of $\Gamma$ has infinite $G$-stabilizer. This contradicts the hypotheses on the $G$-graph $\Gamma$. \end{proof} \begin{lemma}\label{lem:Farhan} $\vec{ab}(k)_{\Gamma'}\subseteq W_1\cup X_0$. \end{lemma} \begin{proof} Let $\gamma'$ be an escaping path from $a$ to $b$ in $\Gamma'$ of length $\leq k$. Then Lemma~\ref{g} implies that $\gamma'\cap T_a\Gamma \subseteq W_1$. Since any vertex in $\gamma'\cap T_a\Gamma'$ is either in $T_a\Gamma$ or in $X_0$, it follows that $\gamma'\cap T_a\Gamma' \subseteq W_1\cup X_0$, which proves the statement. \end{proof} By Lemmas~\ref{g1} and~\ref{lem:eduardo}, the set $W_1\cup X_0$ is finite, hence Lemma~\ref{lem:Farhan} implies that $\vec{ab}(k)_{\Gamma'}$ is finite; this completes the proof that $\Gamma'$ is fine at $a$. It is left to address the last two items of Proposition~\ref{lem2.2hs}. \begin{lemma} $\Gamma'$ is connected and the inclusion $\Gamma \to \Gamma'$ is a quasi-isometry. \end{lemma} \begin{proof} Since $V(\Gamma)=V(\Gamma')$, $E(\Gamma)\subseteq E(\Gamma')$ and $\Gamma$ is connected, the graph $\Gamma'$ is also connected. Let $x,y \in V(\Gamma)$. Let $\gamma'$ be a minimal length path from $x$ to $y$ in $\Gamma'$, and let $\gamma$ be the $\alpha$-replacement of $\gamma'$. Then the length of $\gamma$ is at most $\ell \dist_{\Gamma'}(x,y)$. It follows that \[ \dist_{\Gamma'}(x,y) \leq \dist_\Gamma (x,y) \leq \ell \dist_{\Gamma'}(x,y),\] and hence $\imath\colon \Gamma \hookrightarrow \Gamma'$ is a quasi-isometry. \end{proof} \begin{lemma} If $a\in V(\Gamma)$ and $T_a\Gamma$ has finitely many $G_a$-orbits, then $T_a\Gamma'$ has finitely many $G_a$-orbits. \end{lemma} \begin{proof} If $T_a\Gamma=T_a\Gamma'$ there is nothing to prove. Suppose that $T_a\Gamma \subsetneq T_a\Gamma'$. Observe that $D=T_a\Gamma'-T_a\Gamma$ is a $G_a$-set.
For any $w\in D$ there is a $g\in G$ such that $[g.u, g.v]=[a,w]$ or $[g.v, g.u]=[a,w]$. Note that all elements $w\in D$ for which there is $g\in G$ such that $[g.u, g.v]=[a,w]$ are in the same $G_a$-orbit, and analogously for the other case. Hence the $G_a$-set $D$ has at most two distinct $G_a$-orbits. Since $T_a\Gamma'=T_a\Gamma \cup D$, the lemma follows. \end{proof} This completes the proof of Proposition~\ref{lem2.2hs}. \end{proof} \subsection{Proof of Corollary~\ref{attach}}\label{subsec:CorAttaching} \begin{proof}[Proof of Corollary~\ref{attach}] Definition~\ref{def:GHgraph} of $(G,H)$-graph has five items that need to be verified for $\Gamma'$. Items (1) and (5) follow from Proposition~\ref{lem2.2hs}. Items (2) and (3) are immediate from the definition of $\Gamma'$. Item (4) follows from item (5) and Lemma~\ref{lem2.1hs}. \end{proof} \section{Proof of the Main Result}\label{sec:05} \begin{definition}\label{def:thick} A $(G,H)$-graph $\Gamma$ is \emph{thick} if it satisfies the following conditions: \begin{enumerate} \item \label{eq:minimal} It contains vertices $u_0$ and $v_0$ such that \[ \{u_0,v_0\} \in E(\Gamma),\qquad G_{u_0}=1,\quad \text{ and }\quad G_{v_0}=H.\] \item \label{eq:minimal02} There are a finite relative generating set $S$ of $G$ with respect to $H$ and a collection $\{u_0,\ldots , u_\ell\}$ of representatives of $G$-orbits of vertices of $\Gamma$ with finite $G$-stabilizers such that \[ \{u_0,s.u_0\} \in E(\Gamma)\quad \text{for all $s\in S$, } \quad \text{ and }\quad \{u_0, u_j\} \in E(\Gamma) \quad \text{for all $1\leq j\leq \ell$}.\] \end{enumerate} \end{definition} \begin{proposition}\label{prop:Existencethick} Let $H$ be an infinite subgroup of a finitely generated group $G$. If there exists a $(G,H)$-graph $\Gamma$ then there are a thick $(G,H)$-graph $\Delta$ and a $G$-equivariant quasi-isometry $\Gamma\to \Delta$.
\end{proposition} \begin{remark}\label{rem:trivialStabilizer} Let $\Gamma$ be a $(G,H)$-graph, let $\Gamma'$ be a $G$-graph, and suppose $\Gamma$ is a $G$-subgraph of $\Gamma'$. If there is an edge $\{u,v\}\in E(\Gamma')$ such that $G_v$ is trivial, and \begin{align*} V(\Gamma') &= V(\Gamma)\cup G.v,\\ E(\Gamma') &= E(\Gamma) \cup \left\{\{g.u,g.v\} \mid g \in G \right\}, \end{align*} then it is an easy observation that $\Gamma'$ is a $(G,H)$-graph and the inclusion $\imath\colon\Gamma \hookrightarrow \Gamma'$ is a quasi-isometry. Hence, one can always assume that a $(G,H)$-graph contains a vertex with trivial stabilizer. \end{remark} \begin{proof}[Proof of Proposition~\ref{prop:Existencethick}] By Remark~\ref{rem:trivialStabilizer}, we can assume that $\Gamma$ contains a vertex $u_0$ with trivial stabilizer. Let $v_0\in V_\infty(\Gamma)$ be such that $G_{v_0}=H$. Now we will perform a finite sequence of attachments of $G$-orbits of edges to $\Gamma$. By Corollary~\ref{attach} the resulting graph $\Delta$ will be a $(G,H)$-graph with a $G$-equivariant quasi-isometry $\Gamma\to \Delta$. By attaching a $G$-orbit of edges if necessary, assume that $\{u_0,v_0\}\in E(\Gamma)$. Let $S$ be a finite relative generating set of $G$ with respect to $H$, and let $\{u_0,\ldots , u_\ell\}$ be a collection of representatives of $G$-orbits of vertices of $\Gamma$ with finite $G$-stabilizers. By adding finitely many orbits of edges if necessary, assume that $\{u_0,s.u_0\} \in E(\Gamma)$ for all $s\in S$ and $\{u_0, u_j\} \in E(\Gamma)$ for all $1\leq j\leq \ell$. Observe that the resulting graph is a thick $(G,H)$-graph. \end{proof} \begin{proposition}\label{prop:end} Suppose that $H$ is an infinite subgroup of $G$. Let $\Gamma$ be a thick $(G,H)$-graph, and let $v_0$, $\{u_0,\ldots , u_\ell\}$ and $S$ be as in Definition~\ref{def:thick}.
Let \[X=\{g\in G \colon \text{ $\dist_{\Gamma}(u_i, g.u_j)=1$ or $\dist_{\Gamma}(u_i, g.v_0)=1$ for some $0\leq i, j\leq\ell$ } \}.\] Then $q\colon \hat\Gamma(G,H,X) \to \Gamma$ given by $g\mapsto g.u_0$ and $gH\mapsto g.v_0$ is a quasi-isometry. Moreover, $\hat\Gamma(G,H,X)$ is fine at cone vertices. In particular $\hat\Gamma(G,H,X)$ is a $(G,H)$-graph. \end{proposition} \begin{proof} Note that if $f,g\in G$ and $\{f,g\}$ is an edge in $\hat\Gamma$ then $\dist_\Gamma(q(f),q(g))\leq 3$. Indeed, by definition of $X$, $\{f.u_i, g.u_j\}$ is an edge in $\Gamma$ for some $i,j$ and hence $[f.u_0, f.u_i, g.u_j, g.u_0]$ is a path of length at most $3$ in $\Gamma$. Moreover for any edge of the form $\{g,gH\}$ of $\hat\Gamma$, we have that $\{g.u_0,g.v_0\}$ is an edge of $\Gamma$ and hence $\dist_\Gamma(q(g),q(gH))=1\leq 3$. It follows that \[ \dist_{\Gamma}(q(a),q(b))\leq 3\dist_{\hat\Gamma}(a,b) \] for any $a,b\in V(\hat\Gamma)$. On the other hand, any vertex of $V(\Gamma)$ is of the form $g.u_j$ or $g.v_0$ for some $g\in G$ and $0\leq j\leq\ell$. If $\{g_1.u_i,g_2.u_j\}$ is an edge in $\Gamma$, then $g_1^{-1}g_2\in X$, or equivalently $\{g_1, g_2\}$ is an edge of $\hat\Gamma$. Analogously, if $\{g_1.u_i,g_2.v_0\}$ is an edge in $\Gamma$, then $g_1^{-1}g_2\in X$ and hence $[g_1,g_2,g_2H]$ is a path in $\hat\Gamma$. It follows that \[ \dist_{\hat\Gamma}(a,b) \leq 2\dist_{\Gamma}(q(a),q(b))\] for any $a,b\in V(\hat\Gamma)$. Hence $q\colon \hat\Gamma \to \Gamma$ is a $G$-equivariant quasi-isometry. It is left to prove that $\hat\Gamma$ is fine at cone vertices. The argument above shows that every edge in $\hat\Gamma$ of the form $\{g,gx\}$ with $g\in G$ and $x\in X$ corresponds to an (undirected) path of length at most three in $\Gamma$ between $g.u_0$ and $gx.u_0$ that does not contain any vertex in $V_\infty(\Gamma)$. Moreover, every edge in $\hat\Gamma$ of the form $\{g,gH\}$ corresponds to the edge $\{g.u_0, g.v_0\}$ in $\Gamma$.
In this way, to every path $\alpha$ in $\hat\Gamma$ from $e$ to $g$ corresponds a path $q(\alpha)$ from $u_0$ to $g.u_0$ with the property that $\alpha$ passes through the cone vertex $fH$ if and only if $q(\alpha)$ passes through the vertex $f.v_0$. Moreover $|q(\alpha)|\leq 3|\alpha|$. Suppose that $\hat\Gamma(G,H,X)$ is not fine at cone vertices. Then there is $r>0$ such that the set $\Omega=\{h\in H\colon \angle_H(e,h)\leq r\}$ is an infinite set. It follows that the set $\{h.u_0\in T_{v_0}\Gamma \colon h\in \Omega\}$ is an infinite set. For each $h\in \Omega$ there is a path $\alpha_h$ from $e$ to $h$ that does not pass through the cone vertex $H$, and $|\alpha_h|\leq r$. Hence $\angle_{v_0}(u_0, h.u_0)\leq |q(\alpha_h)|\leq 3r$ for every $h\in \Omega$. Therefore $\Gamma$ is not fine at $v_0$. This is a contradiction, and therefore $\hat\Gamma$ is fine at cone vertices. \end{proof} \bibliographystyle{alpha}
\section{Introduction} Let $M$ be a compact symplectic manifold with symplectic form $\omega$. On the space $\mathcal J$ of all $\omega$-compatible complex structures $J$ there is a natural symplectic form with respect to which the scalar curvature $S(J)$ of the K\"ahler manifold $(M, \omega, J)$ becomes a moment map for the action of the group of all Hamiltonian diffeomorphisms of $(M, \omega)$ acting on $\mathcal J$ (cf. \cite{donaldson97}, \cite{fujiki92}). This means that the problem of finding extremal K\"ahler metrics can be set in the framework of stability in the sense of geometric invariant theory. It was shown in \cite{futaki06} that, perturbing the symplectic form on $\mathcal J$ and the scalar curvature by incorporating the higher Chern classes with a small real parameter $t$, the perturbed scalar curvature $S(J,t)$ becomes a moment map with respect to the perturbed symplectic form on $\mathcal J$. Note that the unperturbed scalar curvature is the trace of the first Chern class. See section 2 for the precise definitions. Recall that a K\"ahler metric $g$ is called an extremal K\"ahler metric if the $(1,0)$-part of the gradient vector field of the scalar curvature $S$ $$ \mathrm{grad}' S = g^{i{\overline j}}\frac{\partial S}{\partial {\overline z}^j}\frac{\partial}{\partial z^i}$$ is a holomorphic vector field. Extremal K\"ahler metrics are critical points of two functionals. One is the so-called Calabi functional. This is a functional $\Psi$ on the space $\mathcal K_{\omega_0}$ of all K\"ahler forms in a fixed de Rham class $\omega_0$ with fixed complex structure $J$. If $\omega \in \mathcal K_{\omega_0}$ and $S(\omega)$ denotes the scalar curvature of $\omega$ then $$ \Psi(\omega) = \int_M S(\omega)^2 \omega^m$$ where $m = \dim_{{\mathbb C}}M$. Calabi originally defined extremal K\"ahler metrics to be the critical points of $\Psi$. The other functional $\Phi$ is defined on $\mathcal J$.
If $S(J)$ denotes the scalar curvature of the K\"ahler manifold $(M,\omega,J)$ for $J \in \mathcal J$ then $$ \Phi(J) = \int_M S(J)^2\omega^m.$$ It is easy to see that the extremal K\"ahler metrics are exactly the critical points of $\Phi$ from the fact that the scalar curvature is the moment map on $\mathcal J$ for the action of Hamiltonian diffeomorphisms as mentioned above. Inspired by a work of Bando \cite{bando83}, the author defined in \cite{futaki06} perturbed extremal K\"ahler metrics as follows: the K\"ahler metric $g$ for $(M, \omega, J)$ is called a perturbed extremal K\"ahler metric if the $(1,0)$-part of the gradient vector field $$ \mathrm{grad}' S(J,t) = g^{i{\overline j}}\frac{\partial S(J,t)}{\partial {\overline z}^j}\frac{\partial}{\partial z^i}$$ is a holomorphic vector field. From the fact that $S(J,t)$ becomes a moment map on $\mathcal J$ with respect to the perturbed symplectic structure, one can see that the critical points of the functional $$\Phi(J) = \int_M S(J,t)^2 \omega^m$$ are the $J$'s for which the K\"ahler metric of $(M,\omega,J)$ is a perturbed extremal K\"ahler metric. However it is not true for $t \ne 0$ that perturbed extremal K\"ahler metrics are the critical points of the functional $\Psi$ on $\mathcal K_{\omega_0}$ defined by $$ \Psi(\omega) = \int_M S(\omega,t)^2 \omega^m$$ where $S(\omega,t)$ is the perturbed scalar curvature of $(M, \omega, J)$; see Remark 3.3 in \cite{futaki06}. This is the significant difference between the perturbed case and the unperturbed case. In \cite{xwang04} Xiaowei Wang explains how one gets the decomposition theorem of Calabi \cite{calabi85} for the structure of the Lie algebra of all holomorphic vector fields on compact K\"ahler manifolds with extremal K\"ahler metrics in the finite dimensional setting of the framework of moment maps, see also \cite{futaki05}.
On the other hand, Lijing Wang \cite{Lijing06} explains how one gets the Hessian formulae for the Calabi functional and the functional $\Phi$ in the finite dimensional setting of the framework of moment maps. Recall that the Hessian formula for the Calabi functional plays the key role in the proof of Calabi's decomposition theorem of the Lie algebra of all holomorphic vector fields on compact K\"ahler manifolds with extremal K\"ahler metrics. Because of the above mentioned difference between the perturbed case and the unperturbed case, one cannot expect that the same proof as in the unperturbed case by Calabi can be applied to the perturbed case. The purpose of this paper is to show that L.-J. Wang's finite dimensional arguments provide us with a rigorous proof of Calabi's decomposition theorem for compact K\"ahler manifolds with perturbed extremal K\"ahler metrics. Thus we obtain a similar statement of the decomposition theorem: \begin{theorem}\label{main1} Let $M$ be a compact K\"ahler manifold with a perturbed extremal K\"ahler metric. Let $\mathfrak h(M)$ be the Lie algebra of all holomorphic vector fields and $\mathfrak k$ be the real Lie algebra of all Killing vector fields of $M$. Then \begin{enumerate} \item[(a)] \ \ $\mathfrak h_0(M) := \mathfrak k \otimes {\mathbb C}$ is the maximal reductive subalgebra of $\mathfrak h(M)$. \item[(b)]\ \ The $(1,0)$-part of the gradient vector field $$ \mathrm{grad}' S(J,t) = g^{i{\overline j}}\frac{\partial S(J,t)}{\partial {\overline z}^j}\frac{\partial}{\partial z^i}$$ of $S(J,t)$ belongs to the center of $\mathfrak h_0(M)$. \item[(c)]\ \ $\mathfrak h(M)$ has the structure of a semi-direct decomposition $$\mathfrak h(M) = \mathfrak h_0(M) + \sum_{\lambda\ne 0} \mathfrak h_{\lambda}(M)$$ where $\mathfrak h_{\lambda}(M)$ is the $\lambda$-eigenspace of the adjoint action of $\mathrm{grad}'S(J,t) $. \end{enumerate} \end{theorem} We will follow the arguments of L.-J. Wang almost word for word.
Throughout this paper Hermitian inner products are anti-linear in the first component and linear in the second component. \section{Perturbed extremal K\"ahler metric} Let $M$ be a compact symplectic manifold of dimension $2m$ with symplectic form $\omega$, and let $\mathcal J$ be the space of all $\omega$-compatible complex structures on $M$. Then for each $J \in \mathcal J$, $(M,\omega, J)$ becomes a K\"ahler manifold. For a pair $(J, t)$, $t$ being a small real number, we define a smooth function $S(J,t)$ on $M$ by \begin{equation}\label{S(J,t)} S(J,t)\, \omega^m = c_1(J) \wedge \omega^{m-1} + t c_2(J) \wedge \omega^{m-2} + \cdots + t^{m-1} c_m(J) \end{equation} where $c_i(J)$ is the $i$-th Chern form defined by \begin{equation}\label{Chern} \det(I + \frac{i}{2\pi}t\Theta) = 1 + tc_1(J) + \cdots + t^m c_m(J), \end{equation} $\Theta$ being the curvature form with respect to $\omega$. Note that we use $S(J,t)$ in place of $S(J,T)/2m\pi$ in \cite{futaki06} to avoid the clumsy constant $1/2m\pi$. \begin{definition}\label{PExt} The K\"ahler metric $g$ of the K\"ahler manifold $(M, \omega, J)$ is called a $t$-perturbed extremal K\"ahler metric, or simply a perturbed extremal metric, if \begin{equation}\label{grad} \mathrm{grad}'S(J,t) = \sum_{i,j = 1}^m g^{i{\overline j}}\frac{\partial S(J,t)}{\partial {\overline z}^j} \frac{\partial} {\partial z^i} \end{equation} is a holomorphic vector field. \end{definition} The following was proved in \cite{futaki06}, Proposition 3.2. \begin{proposition}\label{critical} The critical points of the functional $\Phi$ on $\mathcal J$ defined by \begin{equation}\label{Phi} \Phi(J) = \int_M S(J,t)^2 \omega^m \end{equation} are the perturbed extremal K\"ahler metrics. \end{proposition} The proof of this proposition essentially follows from the fact that the perturbed scalar curvature $S(J,t)$ gives the moment map for the action of the group of Hamiltonian diffeomorphisms with respect to a perturbed symplectic structure on $\mathcal J$.
This perturbed symplectic structure is described as follows. The tangent space of $\mathcal J$ at $J$ is identified with a subspace of $\mathrm{Sym}(\otimes^2 T^{\prime\prime\ast}M)$. For a small real number $t$, we define an Hermitian structure on $\mathrm{Sym}(\otimes^2 T^{\prime\prime\ast}M)$ by \begin{equation}\label{inner} (\nu, \mu)_t = \int_M mc_m(\overline{\nu}_{jk} \,\mu^i{}_{\overline \ell} \frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta, \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta) \end{equation} for $\mu$ and $\nu$ in the tangent space $T_J\mathcal J$, where $c_m$ is the polarization of the determinant viewed as a $GL(m,{\mathbb C})$-invariant polynomial, i.e. $c_m(A_1, \cdots, A_m)$ is the coefficient of $m!\, t_1 \cdots t_m$ in $\det(t_1A_1 + \cdots + t_mA_m)$, where $I$ denotes the identity matrix and $\Theta = \overline{\partial}(g^{-1}\partial g)$ is the curvature form of the Levi-Civita connection, and where $\overline{\nu}_{jk}\,\mu^i{}_{\overline \ell}$ should be understood as the endomorphism of $T'_JM$ which sends $\partial/\partial z^j$ to $\overline{\nu}_{jk}\,\mu^i{}_{\overline \ell}\,\partial/\partial z^i$. When $t = 0$, (\ref{inner}) gives the usual $L^2$-inner product. The perturbed symplectic form $\Omega_{J,t}$ at $J \in \mathcal J$ is then given by \begin{eqnarray}\label{symp} &&\Omega_{J,t}(\nu,\mu) = \Re (\nu,\sqrt{-1}\mu)_t \\ &&= \Re \int_M mc_m(\overline{\nu}_{jk} \,\sqrt{-1}\mu^i{}_{\overline \ell} \frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta, \nonumber \\ && \hspace{6.5cm}\cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta) \nonumber \end{eqnarray} where $\Re$ means the real part. In \cite{futaki06} we proved the following: \begin{theorem}[\cite{futaki06}] If $\delta J = \mu$ then \begin{equation}\label{moment1} \delta \int_M u\ S(J,t)\omega^m = \Omega_{J,t}(2\sqrt{-1}\nabla^{\prime\prime}\nabla^{\prime\prime}u,\mu).
\end{equation} Namely the perturbed scalar curvature $S(J,t)$ gives a moment map with respect to the perturbed symplectic form $\Omega_{J,t}$ for the action of the group of Hamiltonian diffeomorphisms on $\mathcal J$. \end{theorem} Now we can prove Proposition \ref{critical}. From (\ref{moment1}) we have \begin{eqnarray}\label{moment1.5} \delta \int_M S(J,t)^2\, \omega^m &=& 2 \int_M S(J,t) \delta S(J,t)\, \omega^m \\ &=& 2\Omega_{J,t}(2\sqrt{-1}\nabla^{\prime\prime}\nabla^{\prime\prime}S(J,t), \mu).\nonumber \end{eqnarray} This shows that $J$ is a critical point if and only if \begin{equation}\label{moment2} \nabla^{\prime\prime}\mathrm{grad}^{\prime}S(J,t) = 0, \end{equation} i.e. the K\"ahler metric of $(M, \omega, J)$ is a perturbed extremal K\"ahler metric. Let $\frak g$ be the complexification of the Lie algebra of the group of Hamiltonian diffeomorphisms. Then $\frak g$ is simply the set of all complex valued smooth functions $u$ with the normalization $$ \int_M u\, \omega^m = 0$$ with the Lie algebra structure given by the Poisson bracket. The infinitesimal action of $u$ on $\mathcal J$ is given by $2i\nabla^{\prime\prime}\nabla^{\prime\prime}u$, see Lemma 10 in \cite{donaldson97} or Lemma 2.3 in \cite{futaki06}. Define $L : C^{\infty}(M)\otimes {\mathbb C}\ (\cong \frak g)\ \to C^{\infty}(M)\otimes {\mathbb C} $ by \begin{eqnarray}\label{L} &&(v,Lu)_{L^2} = (\nabla^{\prime\prime}\nabla^{\prime\prime}v,\nabla^{\prime\prime} \nabla^{\prime\prime}u)_t \\ && = \int_M mc_m({\overline v}_{jk}u^i{}_{\overline \ell} \frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta, \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta). 
\nonumber \end{eqnarray} More explicitly $L$ is expressed as \begin{equation}\label{L'} Lu = mc_m(u^i{}_{{\overline \ell} jk}\frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta, \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta)/\omega^m. \end{equation} We define ${\overline L} : C^{\infty}(M)\otimes {\mathbb C} \to C^{\infty}(M)\otimes {\mathbb C} $ by $\overline{L}u := \overline{L {\overline u}}$. Then $\overline{L}$ satisfies \begin{eqnarray}\label{Lbar} &&(v,\overline{L} u)_{L^2} = (\nabla^{\prime\prime}\nabla^{\prime\prime}{\overline u}, \nabla^{\prime\prime}\nabla^{\prime\prime}{\overline v})_t \\ && = \int_M mc_m(u_{jk}{\overline v}^i{}_{\overline \ell} \frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta, \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta). \nonumber \end{eqnarray} and \begin{equation}\label{Lbar'} \overline{L} u = mc_m(u_{jk}{}^i{}_{{\overline \ell}}\frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta, \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta)/\omega^m. \end{equation} \begin{lemma}\label{deltaS} If $v$ is a real smooth function and $\delta J = \nabla^{\prime\prime}\nabla^{\prime\prime}v$ then $$ \delta S(J,t) = Lv + \overline{L}v.$$ \end{lemma} \begin{proof} Let $u$ be also a real smooth function. Then by (\ref{moment1}) \begin{eqnarray*} \int_M u\,\delta S(J,t) \omega^m &=& \Re (2\sqrt{-1}\nabla^{\prime\prime}\nabla^{\prime\prime}u,\sqrt{-1} \mu)_t \\ &=& (\nabla^{\prime\prime}\nabla^{\prime\prime}u, \nabla^{\prime\prime}\nabla^{\prime\prime}v)_t + (\nabla^{\prime\prime}\nabla^{\prime\prime}v,\nabla^{\prime\prime}\nabla^{\prime\prime}u)_t\\ &=& (u,Lv)_{L^2} + (u,\overline{L}v)_{L^2}. 
\end{eqnarray*} \end{proof} \begin{lemma}\label{bracket} Let $u$ and $v$ be real smooth functions and put $\mathcal X_u = 2\sqrt{-1}\nabla^{\prime\prime}\nabla^{\prime\prime}u$ and $\mathcal X_v = 2\sqrt{-1}\nabla^{\prime\prime}\nabla^{\prime\prime}v$. Then we have $$ \Omega_{J,t}(\mathcal X_u, \mathcal X_v) = (\{u,v\},S(J,t))_{L^2}.$$ \end{lemma} \begin{proof} Consider $\mathcal X_u$ and $\mathcal X_v$ as the infinitesimal actions of the real Hamiltonian functions $u$ and $v$ on $\mathcal J$. Since $S(J,t)$ gives an equivariant moment map, \begin{equation}\label{moment3} \int_M u S(\sigma J,t)\, \omega^m = \int_M (\sigma^{-1\ast} u) S(J,t)\, \omega^m \end{equation} holds for any Hamiltonian diffeomorphism $\sigma$. If $\sigma$ is generated by the Hamiltonian vector field of a Hamiltonian function $v$ then (\ref{moment1}) and (\ref{moment3}) show \begin{equation}\label{moment4} \Omega_{J,t}(2\sqrt{-1}\nabla^{\prime\prime}\nabla^{\prime\prime}u, 2\sqrt{-1}\nabla^{\prime\prime}\nabla^{\prime\prime}v) = - \int_M S(J,t) \{v,u\}\omega^m. \end{equation} Since $-\{v,u\} = \{u,v\}$ and the functions involved are real, this is the statement of the lemma. \end{proof} \begin{lemma}\label{Lbar-L} For any smooth complex valued function $u$ we have $$(\overline{L} - L) u = - \frac 12 (S(J,t)^{\alpha}u_{\alpha} - u^{\alpha}S(J,t)_{\alpha})$$ where $z^{\alpha}$'s are local holomorphic coordinates. \end{lemma} \begin{proof} It is sufficient to prove the lemma when $u$ is a real valued function. Let $v$ also be a real valued smooth function. From (\ref{L}) and (\ref{Lbar}) we have \begin{eqnarray*} (v, \overline{L}u - Lu)_{L^2} &=& (\nabla^{\prime\prime}\nabla^{\prime\prime}u, \nabla^{\prime\prime}\nabla^{\prime\prime}v)_t - (\nabla^{\prime\prime}\nabla^{\prime\prime}v,\nabla^{\prime\prime}\nabla^{\prime\prime}u)_t\\ &=& \overline{(\nabla^{\prime\prime}\nabla^{\prime\prime}v, \nabla^{\prime\prime}\nabla^{\prime\prime}u)_t } - (\nabla^{\prime\prime}\nabla^{\prime\prime}v, \nabla^{\prime\prime}\nabla^{\prime\prime}u)_t .
\end{eqnarray*} It follows from this that \begin{eqnarray*} 2\Re (\nabla^{\prime\prime}\nabla^{\prime\prime}v, i\nabla^{\prime\prime}\nabla^{\prime\prime}u)_t &=& i(\nabla^{\prime\prime}\nabla^{\prime\prime}v, \nabla^{\prime\prime}\nabla^{\prime\prime}u)_t + \overline{i(\nabla^{\prime\prime}\nabla^{\prime\prime}v, \nabla^{\prime\prime}\nabla^{\prime\prime}u)_t}\\ &=&-i(v,(\overline{L} - L)u)_{L^2}. \end{eqnarray*} Let $X_u$ denote the Hamiltonian vector field of $u$: $ i(X_u)\omega = du$. Then $X_u = J\mathrm{grad}\, u$ and $\{u,S\} = X_uS$. It then follows that \begin{eqnarray*} (v,(\overline{L} - L)u)_{L^2} &=& 2i\Re(\nabla^{\prime\prime} \nabla^{\prime\prime} v, i\nabla^{\prime\prime} \nabla^{\prime\prime} u)_t \\ &=& \frac i2 \Re(\mathcal X_v, i\mathcal X_u)_t = \frac i2\Omega_{J,t}(\mathcal X_v, \mathcal X_u)\\ &=& -\frac i2 (\{u,v\},S(J,t))_{L^2} = \frac i2 (v,\{u, S(J,t)\})_{L^2}\\ &=& \frac i2 (v,X_uS(J,t))_{L^2} = \frac i2 (v,g(X_u, J\mathrm{grad}\,S(J,t)))_{L^2}\\ &=& \frac i2 (v,du(J\mathrm{grad}\,S(J,t)))_{L^2} = -\frac 12 (v,S(J,t)^{\alpha}u_{\alpha} - u^{\alpha} S(J,t)_{\alpha})_{L^2}. \end{eqnarray*} \end{proof} \begin{lemma}\label{d1} Let $u$ be a real smooth function and suppose $\delta J = \nabla^{\prime\prime} \nabla^{\prime\prime}u$. Then $$ \delta \int_M S(J,t)^2 \omega^m = 4(u,LS(J,t))_{L^2} = 4(u,\overline{L} S(J,t))_{L^2}.$$ \end{lemma} \begin{proof} By (\ref{moment1.5}) \begin{eqnarray*} \delta \int_M S(J,t)^2\omega^m &=& 2\Omega_{J,t}(2i\nabla^{\prime\prime} \nabla^{\prime\prime} S(J,t), \nabla^{\prime\prime} \nabla^{\prime\prime}u) \\ &=& 4\Re (\nabla^{\prime\prime} \nabla^{\prime\prime} S(J,t), \nabla^{\prime\prime} \nabla^{\prime\prime}u)_t \\ &=& 2(\nabla^{\prime\prime} \nabla^{\prime\prime} S(J,t), \nabla^{\prime\prime} \nabla^{\prime\prime}u)_t + 2(\nabla^{\prime\prime} \nabla^{\prime\prime}u, \nabla^{\prime\prime} \nabla^{\prime\prime} S(J,t))_t\\ &=& 2(u,LS(J,t))_{L^2} + 2(u,\overline{L}S(J,t))_{L^2}.
\end{eqnarray*} But from Lemma \ref{Lbar-L} we have $$ \overline{L}S(J,t) = LS(J,t),$$ from which the lemma follows. \end{proof} \begin{lemma}\label{d2} Suppose that $(\omega, J)$ is a perturbed extremal K\"ahler metric, so that the gradient vector field of $S(J,t)$ is a holomorphic vector field. If $\delta J = \nabla^{\prime\prime} \nabla^{\prime\prime} u$ for a real smooth function $u$ then $$ (\delta L)S(J,t) = -\frac12 L(S(J,t)^\alpha u_{\alpha} - u^{\alpha}S(J,t)_{\alpha}) = L(\overline{L} - L)u.$$ \end{lemma} \begin{proof}\ \ Recall that by Lemma 2.3 in \cite{futaki06} $$ \mathcal L_XJ = 2i\nabla^{\prime\prime}_JX^{\prime} - 2i\nabla^{\prime}_JX^{\prime\prime} .$$ Therefore \begin{eqnarray*} \mathcal L_{JX}J &=& 2i\nabla^{\prime\prime}_JiX^{\prime} - 2i\nabla^{\prime}_J(-i)X^{\prime\prime}\\ &=& -2(\nabla^{\prime\prime}_JX^{\prime} - \nabla^{\prime}_JX^{\prime\prime}).\\ \end{eqnarray*} This shows that $\mathcal L_{JX}J \in T_J \mathcal J$ corresponds to $-2\nabla^{\prime\prime} \nabla^{\prime\prime} u \in \mathrm{Sym} \otimes^2T^{\prime\prime\ast}M$ via the identification $T_J \mathcal J \cong \mathrm{Sym} \otimes^2T^{\prime\prime\ast}M$. Thus $\mathcal L_{-\frac12 JX_u}J$ corresponds to $\nabla^{\prime\prime}\nabla^{\prime\prime}u$. On the other hand \begin{equation}\label{d3} \mathcal L_{\frac12 JX_u}\omega = d(i(\frac12 JX_u)\omega) \end{equation} and \begin{eqnarray}\label{d4} (i(\frac12 JX_u)\omega)(Y) &=& \omega(\frac12 JX_u,Y) = \omega(-\frac12 \mathrm{grad}\,u,Y)\\ &=& \omega(-\frac12 X_u,JY) = -\frac12 (du\circ J)(Y) = (d^cu)(Y) \nonumber \end{eqnarray} where $d^c = \frac i2(\overline{\partial} - \partial)$. From (\ref{d3}) and (\ref{d4}) it follows that \begin{equation}\label{d5} \mathcal L_{\frac12 JX_u}\omega = dd^c u = i\partial\overline{\partial} u. \end{equation} Let $f_s$ be the flow generated by $-\frac 12 JX_u$.
Suppose that $S$ is a smooth function such that $\mathrm{grad}^{\prime}S$ is a holomorphic vector field and that $S_s$ is a function such that $$ \mathrm{grad}_s^{\prime} S_s = \mathrm{grad}^{\prime} S,\qquad \int_M S_s\, (f_{-s}^{\ast}\omega)^m = \int_M S\, \omega^m$$ where $\mathrm{grad}_s^{\prime} S_s$ is the $(1,0)$-part of the gradient vector field of $S_s$ with respect to $f_{-s}^{\ast}\omega$. It is easy to see that if $f_{-s}^{\ast} \omega = \omega + i\partial\overline{\partial} \varphi$ then $S_s = S + S^{\alpha} \varphi_{\alpha}$. Then (\ref{d5}) shows \begin{equation}\label{d6} S_s = S + sS^{\alpha}u_{\alpha} + O(s^2). \end{equation} We have \begin{equation}\label{d7} L(f_sJ,\omega)f_s^{\ast}S_s = f_s^{\ast}(L(J,f_{-s}^{\ast}\omega)S_s) = 0. \end{equation} Taking the derivative of (\ref{d7}) with respect to $s$ at $s=0$ we obtain \begin{equation}\label{d8} \delta L\cdot S + L(-\frac 12 (JX_u)S + S^{\alpha}u_{\alpha}) = 0. \end{equation} On the other hand \begin{eqnarray}\label{d9} JX_u\cdot S &=& g(JX_u,\mathrm{grad}\, S) = \omega(X_u,\mathrm{grad}\,S) = du(\mathrm{grad}\,S)\\ &=& (\partial u + \overline{\partial} u)(\nabla^{\prime} S + \nabla^{\prime\prime} S) = u^{\alpha}S_{\alpha} + S^{\alpha}u_{\alpha}.\nonumber \end{eqnarray} It follows from (\ref{d8}) and (\ref{d9}) that \begin{eqnarray*} \delta L\cdot S &=& - L(-\frac 12 (u^{\alpha}S_{\alpha} + S^{\alpha}u_{\alpha}) + S^{\alpha}u_{\alpha})\\ &=& - L(\frac 12 (S^{\alpha}u_{\alpha} - u^{\alpha} S_{\alpha})). \end{eqnarray*} Applying this with $S = S(J,t)$ and using Lemma \ref{Lbar-L} completes the proof of Lemma \ref{d2}. \end{proof} \begin{theorem}\label{Hessian} Let $J$ be a critical point of $\Phi$, i.e. $(\omega, J)$ gives a perturbed extremal K\"ahler metric, and let $u$ and $v$ be real smooth functions on $M$.
Then the Hessian of $\Phi$ at $J$ in the directions of $\nabla^{\prime\prime}\nabla^{\prime\prime} u$ and $\nabla^{\prime\prime}\nabla^{\prime\prime} v$ is given by $$ \mathrm{Hess}(\Phi)_J(\nabla^{\prime\prime}\nabla^{\prime\prime} u, \nabla^{\prime\prime}\nabla^{\prime\prime} v) = 8(u,L\overline{L}v) = 8(u,\overline{L}Lv).$$ \end{theorem} \begin{proof}\ \ Let $\delta J = \nabla^{\prime\prime}\nabla^{\prime\prime} v$. By using Lemma \ref{d1}, Lemma \ref{d2} and Lemma \ref{deltaS} successively one obtains \begin{eqnarray*} \mathrm{Hess}(\Phi)_J(\nabla^{\prime\prime}\nabla^{\prime\prime} u, \nabla^{\prime\prime}\nabla^{\prime\prime} v) &=& 4\delta (u,LS(J,t))\\ &=& 4(u,\delta L\cdot S(J,t) + L\delta S(J,t))\\ &=& 4(u,L(\overline{L} - L)v + L(L+ \overline{L})v)\\ &=& 8(u,L\overline{L}v). \end{eqnarray*} If one uses the third term in Lemma \ref{d1} and $\delta \overline{L} = L - \overline{L}$ then one gets the third term of Theorem \ref{Hessian}. This completes the proof. \end{proof} \section{Proof of Theorem \ref{main1}} In this section we give a proof of Theorem \ref{main1}. Suppose that $g$ is a perturbed extremal K\"ahler metric on $(M, \omega, J)$. Let $X$ be a holomorphic vector field and $\alpha$ be the dual $1$-form to $X$, that is $$ \alpha(Y) = g(X,Y), \qquad \alpha = \alpha_{{\overline i}} d\overline{z}^i = g_{j{\overline i}}X^jd{\overline z}^i. $$ Since $X$ is a holomorphic vector field, $$ \overline{\partial}\alpha = (\nabla_{\overline i} \alpha_{{\overline j}} - \nabla_{\overline j} \alpha_{{\overline i}} )d{\overline z}^i \wedge d{\overline z}^j = 0.$$ Let $\alpha = H\alpha + \overline{\partial}\psi$ be the harmonic decomposition, where $H\alpha$ denotes the harmonic part.
Then \begin{eqnarray*} L\psi &=& mc_m (\psi^i{}_{{{\overline \ell}}jk}\frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta, \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta)\\ &=& mc_m ((X^i - (H\alpha)^i)_{{\overline \ell} jk}\frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta,\\ &&\hspace{7cm}\cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta)\\ &=& - mc_m((H\alpha)^i{}_{{\overline \ell} jk}\frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta, \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta)\\ &=& -mc_m((H\alpha)^i{}_{j{\overline \ell} k} + (R_{j{\overline \ell}}{}^i{}_p(H\alpha)^p)_k)\frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta,\\ &&\hspace{7cm} \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta). \end{eqnarray*} Note that being $\overline{\partial}$-harmonic and being $\partial$-harmonic are equivalent on compact K\"ahler manifolds, and thus $$(H\alpha)_{{\overline q} j} = \nabla_j(H\alpha)_{{\overline q}} = 0.$$ This implies $(H\alpha)^i{}_j = 0$. 
It follows that \begin{eqnarray}\label{pr1} L\psi &=& -mc_m( R_{j{\overline \ell}}{}^i{}_{p,k}(H\alpha)^p\frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta,\\ &&\hspace{6cm} \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta)\nonumber\\ &=& -mc_m( R_{j{\overline \ell}}{}^i{}_{k,p}(H\alpha)^p\frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta,\nonumber\\ &&\hspace{6cm} \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta)\nonumber\\ &=& - (H\alpha)^p\nabla_pS(J,t) = - (H\alpha)_{{\overline q}}\nabla^{{\overline q}}S(J,t)\nonumber \end{eqnarray} where we have used the second Bianchi identity $R_{j{\overline \ell}}{}^i{}_{p,k} = R_{j{\overline \ell}}{}^i{}_{k,p}$ and \begin{eqnarray*} \nabla_pS(J,t) &=& \nabla_p \frac 1t(c_m(\omega\otimes I + \frac i{2\pi}t\Theta) - \omega^m)\\ &=& \frac 1t \nabla_pc_m(\omega \otimes I + \frac i{2\pi}t\Theta)\\ &=& mc_m(R_{j{\overline \ell}}{}^i{}_{k,p}\frac {\sqrt{-1}}{2\pi}\, dz^k \wedge d\overline{z^{\ell}}, \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta,\\ &&\hspace{4cm} \cdots, \ \omega\otimes I + \frac {\sqrt{-1}}{2\pi}\,t\Theta). \end{eqnarray*} Note that $\nabla^{{\overline q}}S(J,t)\frac{\partial}{\partial {\overline z}^q}$ is a conjugate holomorphic vector field and that $(H\alpha)_{{\overline q}}d{\overline z}^q$ is a conjugate holomorphic $1$-form because $H\alpha$ is a $\partial$-harmonic $(0,1)$-form. It follows from (\ref{pr1}) that $L\psi = \mathrm{constant}$. But since $\int_M L\psi \omega^m = 0$ by (\ref{L}) we obtain $L\psi = 0$. This implies that $\mathrm{grad}^{\prime}\psi$ is a holomorphic vector field. Then $(H\alpha)^i\frac{\partial}{\partial z^i} = X - \mathrm{grad}^{\prime}\psi$ is also holomorphic. 
It then follows that $$ \nabla_{{\overline k}}(H\alpha)_{{\overline j}} = 0.$$ But since $(H\alpha)$ is $\partial$-harmonic we also have $\nabla_k(H\alpha)_{{\overline j}} = 0$. Thus $H\alpha$ is parallel. This proves the direct sum decomposition as a vector space $$ \frak h(M) = \frak a(M) + \frak h^{\prime}(M)$$ where $$\frak h^{\prime}(M) = \{X \in \frak h(M)\ |\ X = \mathrm{grad}^{\prime}u\ \mathrm{for\ some}\ u \in C^{\infty}_{{\mathbb C}}(M)\}.$$ It is easy to see $$ [\frak a(M), \frak a(M)] = 0;$$ $$ [\frak a(M),\frak h^{\prime}(M)] \subset \frak h^{\prime}(M);$$ $$ [\frak h^{\prime}(M),\frak h^{\prime}(M)] \subset \frak h^{\prime}(M).$$ Now by Theorem \ref{Hessian} we have $L\overline{L} = \overline{L}L$. Thus $\overline{L}$ preserves ${\rm Ker}\, L$. Let $E_{\lambda}$ denote the $\lambda$-eigenspace of $2\overline{L}|_{{\rm Ker}\,L}$. If $u \in E_{\lambda}$ then $\mathrm{grad}^{\prime}u \in \frak h^{\prime}(M)$ and \begin{eqnarray*} \lambda u &=& 2\overline{L} u \\ &=& 2(\overline{L} - L)u \\ &=& S(J,t)^{\alpha}u_{\alpha} - u^{\alpha}S(J,t)_{\alpha}. \end{eqnarray*} This implies $[\mathrm{grad}^{\prime}S(J,t),\mathrm{grad}^{\prime}u] = \lambda\, \mathrm{grad}^{\prime}u$. We put $$\mathrm{grad}^{\prime}(E_{\lambda}) := \frak h_{\lambda}(M)\ \mathrm{for}\ \lambda \ne 0,$$ $$\mathrm{grad}^{\prime}(E_0) := \frak h_0^{\prime}(M),$$ $$\frak h_0 = \frak a(M) + \frak h_0^{\prime}(M).$$ Then we obtain the decomposition $$ \frak h(M) = \sum_{\lambda} \frak h_{\lambda}(M)$$ where $\frak h_{\lambda}(M)$ is the $\lambda$-eigenspace of $\mathrm{ad} ( \mathrm{grad}^{\prime} S(J,t))$. Note that the real and imaginary parts of an element of $\frak a(M)$ are parallel and Killing and hence $[\mathrm{grad}^{\prime}S(J,t), \frak a(M)] = 0$. 
Finally, since $E_0 = {\rm Ker}\,L \cap {\rm Ker}\, \overline{L}$, the real and imaginary parts of an element of $E_0$ lie again in $E_0$; that is, $E_0$ is the complexification of the space of purely imaginary functions $u$ such that $\mathrm{grad}^{\prime}u$ is holomorphic. The real parts of such $\mathrm{grad}^{\prime}u$'s are Killing vector fields, see Lemma 2.3.8 in \cite{futaki88}. The real parts of the elements of $\mathfrak a(M)$ are also Killing vector fields. Thus $\mathfrak h_0(M)$ is reductive. This completes the proof of Theorem \ref{main1}.
\section{Introduction} A major contributing factor behind recent success in deep learning is the availability of large-scale annotated datasets \cite{russakovsky2015imagenet}. Despite the existing performance gap to humans, deep-learning-based computer vision methods have become essential to the advancement of real-world systems. A particularly challenging and emerging application is autonomous driving, which requires system performance with extreme reliability~\cite{liu2020video, 9206716, Cui_2021_ICCV, 9411961}. However, leveraging the power of deep learning for autonomous driving is nontrivial, due to the lack of suitable datasets. \indent Consequently, significant research efforts have been invested into autonomous driving datasets such as KITTI~\cite{geiger2013vision}, Cityscapes~\cite{cordts2015cityscapes}, and BDD100K~\cite{yu2020bdd100k}, which serve as the driving force behind the development of visual technologies for understanding complex traffic scenes and driving scenarios. As the computer vision community has made impressive advances in increasingly difficult tasks ($i.e.,$ object detection, instance segmentation, and multi-object tracking) in recent years, a new task named multi-object tracking and segmentation (MOTS) has been proposed in order to consider detection, segmentation, and tracking together as interconnected problems~\cite{voigtlaender2019mots}. Accordingly, the KITTI MOTS dataset~\cite{voigtlaender2019mots} was introduced to assess the proposed visual task. \begin{figure*}[!ht] \centering \includegraphics[width=16.6cm]{img/ffl.png} \caption{Illustration of DG-Labeler architecture. We craft our DG-Labeler on TrackR-CNN. In the feature extraction phase, we replace the original heavy 3D convolution with a more efficient flow network to increase the feature representation. In the task phase, we devise the collaborative connections to propagate information across each task so the upper-level task heads (a.k.a.
the mask and tracking head) can perform accurate and efficient predictions on instance-centric features. Moreover, we propose a depth-granularity module, which greatly improves the segmentation behavior of our DG-Labeler.} \label{Frame} \end{figure*} \indent Although the existing MOTS datasets~\cite{voigtlaender2019mots,yu2020bdd100k} fill the data gap for the MOTS task, they suffer from two significant drawbacks in the training data: \textit{1) they include no challenging cases (i.e., motion blur or defocus) that arise in general driving settings; 2) they focus on local roads of inner cities and thus lack diversity.} These limits may cause problems in training as the data does not fully capture the real-world traffic complexity. In addition, \textit{the annotation for MOTS data is highly labor-intensive as it requires the pixel-level mask as well as the temporal tracking label across frames.} In order to produce the MOTS data, \cite{voigtlaender2019mots} uses a refinement network to generate an initial segmentation mask followed by human corrections. Afterward, tracking labels are created by delineating the temporal coherence based on instance masks across video frames~\cite{voigtlaender2019mots}. \\ \indent To this end, we propose the DGL-MOTS dataset, tailored to the MOTS task in autonomous driving. In order to improve the annotation quality and reduce the annotation cost, we devise DG-Labeler to produce fine-grained instance masks. Concretely, our work delivers the following contributions: \begin{itemize} \item We create a \textbf{D}epth-\textbf{G}ranularity \textbf{L}abeled \textbf{MOTS} dataset, thus the name of our dataset, \textbf{DGL-MOTS} (Figure~\ref{Frame}). Compared to KITTI MOTS and BDD100K, DGL-MOTS significantly exceeds these previous efforts in terms of annotation quality, data diversity, and temporal representation, which boosts the training accuracy and efficiency. \item We perform cross-dataset evaluations.
Extensive experimental results indicate the benefits of our dataset. Networks trained on our dataset outperform their counterparts (with the same architecture) trained on KITTI MOTS~\cite{voigtlaender2019mots} and BDD100K~\cite{yu2020bdd100k} on the same test set. Moreover, networks trained on our dataset reach this improvement with a shorter training schedule (Table~\ref{trainS}). \item We propose an end-to-end annotator named DG-Labeler (Figure~\ref{Frame}), whose architecture includes a novel depth-granularity module to model the spatial relation of instances and help produce fine-grained instance masks. With limited correction iterations, DG-Labeler can generate high-quality MOTS annotations. \item DG-Labeler leverages the depth information to depict the instance spatial relation and retain finer details at the instance boundary (Figure~\ref{DGM}). On both the KITTI MOTS and DGL-MOTS datasets, DG-Labeler outperforms TrackR-CNN~\cite{voigtlaender2019mots} in accuracy by a significant margin. Given its simplicity, we hope DG-Labeler can also serve as a new strong baseline for the MOTS task. \end{itemize} \section{Related Work} This section summarizes the related datasets for autonomous driving, multi-object tracking and segmentation, as well as annotation methods for dataset creation. \\ \indent \textbf{MOTS dataset.} Multi-object tracking (MOT) is a critical task for autonomous driving, as it needs to perform object detection as well as object tracking in a video. A large array of datasets have been created focusing on driving scenarios, for example, KITTI tracking~\cite{geiger2012we}, MOTChallenge~\cite{milan2016mot16}, UA-DETRAC~\cite{wen2015ua}, PathTrack~\cite{manen2017pathtrack}, and PoseTrack~\cite{andriluka2018posetrack}. None of these datasets provide segmentation masks for the annotated objects and thus do not depict pixel-level representations and complex interactions like MOTS data.
More progressive datasets come from Cityscapes~\cite{cordts2015cityscapes}, ApolloScape~\cite{huang2018apolloscape}, BDD100K~\cite{yu2020bdd100k}, and the KITTI MOTS dataset~\cite{voigtlaender2019mots}, which provide instance segmentation data for autonomous driving. However, Cityscapes only provides instance annotations for a small subset (i.e., 5,000 images) while ApolloScape offers no temporal object descriptions over time. Thus, the two datasets cannot be utilized for joint training of MOTS algorithms. In contrast, KITTI MOTS~\cite{voigtlaender2019mots} is the first public dataset to fill the data gap for the MOTS task, but it only includes a few thousand training examples; to date, BDD100K has the largest data scale, but it is drawn from densely sampled sequential frames, which are redundant for training. Compared to the aforementioned two datasets, our DGL-MOTS dataset includes more diverse data and fine-grained annotations.\\ \indent \textbf{Multi-object tracking and segmentation.} The majority of MOTS methods~\cite{bertasius2020classifying,wang2020towards,zhou2020tracking,kim2020video,lin2020video,yang2019video,mohamed2020instancemotseg} intuitively extend Mask R-CNN. Although the extension paradigm is simple, it encounters several performance bottlenecks: 1) feature sharing across each task is insufficient for joint optimization; 2) the mask head struggles to produce fine-grained instance boundaries; 3) the RoI representation (proposal-based features) is redundant, which impacts the inference speed with the increasing number of proposals. Compared to the existing methods, our method explicitly models the spatial relations of instances, which helps achieve high granularity in instance masks. We circumvent RoI operations for high-level tasks (a.k.a. tracking and segmentation) by using collaborative connections, which link each task head interdependently for joint optimization and boost the computational tractability.
\\ \indent \textbf{Data annotator.} MOTS data requires the instance mask as well as the temporal tracking label across frames. So far, many attempts at semi-automated annotation~\cite{arun2020weakly,gupta2019lvis,xu2018srda,fang2019instaboost,khoreva2017simple,xu2016deep} have been made to reduce the annotation overhead. Aside from requiring heavy human correction efforts~\cite{vondrick2013efficiently}, these methods generally have arduous implementations for the annotator and require multiple steps to achieve a desirable result. Moreover, their annotators cannot operate on a tracking level and only create instance masks in a single image. To produce the MOTS data, \cite{voigtlaender2019mots} leverages a refinement network to generate an initial segmentation mask followed by human corrections. The tracking labels are then created by delineating the temporal coherence across video frames~\cite{voigtlaender2019mots}. To the best of our knowledge, \cite{voigtlaender2019mots,porzi2020learning} are the two methods available for MOTS annotation, both built on Mask R-CNN by simply adding a tracking branch~\cite{he2017mask}. Compared to~\cite{voigtlaender2019mots,porzi2020learning}, our DG-Labeler explicitly models spatial relations to achieve fine-grained instance boundaries. We also devise a collaborative connection, which uses the detection results to guide the high-level tasks (segmentation and tracking) to accurately fire on the task-relevant pixels. With limited human corrections, our annotation protocol can produce MOTS labels with appealing quality. \begin{figure*}[!ht] \centering \includegraphics[width=13cm]{img/DGM.PNG} \caption{In the depth-granularity module, the base mask and depth map are first divided into $2\times 2$ sub-regions.
Then, each pair of corresponding sub-regions from the base mask and depth map is blended to produce the final fine-grained mask.} \label{DGM} \end{figure*} \section{DG-Labeler} \subsection{Overall Architecture} Built on TrackR-CNN~\cite{voigtlaender2019mots}, our DG-Labeler consists of a feature extraction stage, task heads, and a depth-granularity module, which collaboratively perform detection, segmentation, and tracking. Our overall architecture is shown in Figure~\ref{Frame}.\\ \indent \textbf{Feature extraction} uses the ResNet~\cite{he2016deep} backbone $\textbf{N}_{fm}$ to compute per-frame feature maps $F$ and leverages a flow network $\textbf{N}_{fl}$ to model temporal features over the video. Feature maps from the previous moment $F_{t-i}$ are warped into the current time $t$ based on the flow field $\Delta D_{t-i\rightarrow{t}}$ to obtain $F'_{t-i\rightarrow{t}}$. Afterwards, $F_{t}$ and $F'_{t-i\rightarrow{t}}$ are aggregated into $F'_{t}$ to increase the feature representation of the current frame. By default, the temporal range is three, i.e., the adjacent frames within this range are used for the feature aggregation.\\ \indent \textbf{Task heads} consist of four heads (a.k.a. the classification head $\textbf{N}_{cl}$, the bounding box head $\textbf{N}_{bb}$, the mask head $\textbf{N}_{mk}$, and the tracking head $\textbf{N}_{tr}$). The aforementioned task heads follow the implementation in TrackR-CNN. In addition, we add a new depth head $\textbf{N}_{dp}$~\cite{godard2019digging} to our network to model the spatial relation of each detected object in the video frame. Since our feature extraction is built on ResNet~\cite{he2016deep}, we replace the U-Net architecture in~\cite{godard2019digging} with feature pyramid networks (FPNs)~\cite{lin2017feature} to predict depth maps. Other implementation details are the same as in~\cite{godard2019digging}.\\ \indent \textbf{Depth-granularity module} is the key component of our DG-Labeler.
The next section will detail this module. \subsection{Depth-Granularity Module} Inspired by~\cite{liu2021sg}, we blend the base mask $B$ with the corresponding depth map $D$ to generate the final fine-grained mask in the depth-granularity module (Figure~\ref{DGM}). In our implementation, both $B$ and $D$ have the same shape of $H\times W\times 1$. We crop out each region of interest (RoI) on the base mask $B$ and the depth map $D$ based on the detected bounding boxes and divide each RoI into $k\times k$ regions of the same size. The division factor is arbitrary, but we find that $k=2$ has the best speed-accuracy tradeoff. Afterward, each sub-region of the depth map is normalized by: \begin{equation} \hat{D_{i}}=\frac{D_{i}-\min(D_{i})}{\max(D_{i})-\min(D_{i})}, \quad {\forall i\in\{1,\ldots,k\times k\}}, \label{Equation 1} \end{equation} where $D_{i}$ and $\hat{D_{i}}$ represent one sub-region of the depth map and its normalized version, respectively. $\hat{D_{i}}$ renders the spatial relation (foreground and background) and boundary details of the target instance (Figure~\ref{DGM}). \\ \indent Finally, we apply element-wise products between the base mask and the normalized depth map for each corresponding sub-region, and sum along the $k\times k$ regions to obtain the final mask $M_j$ of the $j^{th}$ instance in the frame: \begin{equation} M_j = \sum_{i=1}^{k\times k}\sigma(B_{i}\times \hat{D_{i}}), \label{Equation 2} \end{equation} where $B_{i}$ and $\hat{D_{i}}$ are one sub-region of the base mask and the normalized depth map respectively, and $\sigma$ is the \texttt{sigmoid} activation. In our implementation, our base mask $B_{i}$ uses floating point and the depth map $\hat{D_{i}}$ encodes the relative spatial relation, not the absolute depth values. With the spatial relation modeling, our final mask is more fine-grained.
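To make the blending concrete, Eqs.~(\ref{Equation 1}) and (\ref{Equation 2}) can be sketched in a few lines of NumPy. This is our own illustrative sketch, not the released implementation: the function name is ours, we assume $H$ and $W$ are divisible by $k$, and we read the sum in Eq.~(\ref{Equation 2}) as assembling the $k\times k$ disjoint sub-regions.

```python
import numpy as np

def depth_granularity_mask(base_mask, depth, k=2):
    """Blend one RoI's base mask with its depth map (sketch of Eqs. 1-2).

    base_mask, depth: (H, W) float arrays cropped to a single RoI,
    with H and W assumed divisible by k.
    """
    H, W = base_mask.shape
    out = np.zeros((H, W), dtype=float)
    hs, ws = H // k, W // k
    for i in range(k):
        for j in range(k):
            sl = (slice(i * hs, (i + 1) * hs), slice(j * ws, (j + 1) * ws))
            d = depth[sl]
            rng = d.max() - d.min()
            # Eq. (1): per-sub-region min-max normalization of the depth map
            d_hat = (d - d.min()) / rng if rng > 0 else np.zeros_like(d)
            # Eq. (2): sigmoid of the element-wise product; summing the
            # disjoint sub-regions amounts to assembling the tiles
            out[sl] = 1.0 / (1.0 + np.exp(-base_mask[sl] * d_hat))
    return out
```

For a constant base mask the blended output stays between $\sigma(0)=0.5$ and $\sigma(1)\approx 0.73$, with the normalized depth modulating the confidence near instance boundaries.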
\subsection{Collaborative Connections} Unlike TrackR-CNN and its variants~\cite{voigtlaender2019mots,luiten2020unovost,lin2019agss}, whose task heads operate independently and ignore the intrinsic correlations among the tasks, we devise collaborative connections (the red lines in Figure~\ref{Frame}) across the detection, segmentation, and tracking heads to facilitate the information propagation across tasks. Compared to TrackR-CNN and its variants, this implementation offers two improvements in network behavior: 1) our segmentation and tracking heads fire on instance-centric features (the blue lines in Figure~\ref{Frame}) governed by the bounding boxes and the mask-guided boxes respectively, and thus can perform more accurate predictions; 2) we improve the runtime performance by avoiding encoding redundant features based on proposals produced by the RPN and thus reducing the computational cost per instance.\\ \subsection{Training Objective} GIoU learning~\cite{rezatofighi2019generalized} is used in our training. Since our method follows the top-down paradigm, we argue that the improved bounding box regression can benefit the instance segmentation and tracking tasks. Bearing this in mind, we leverage the \texttt{GIoU} loss in~\cite{liu2021sg} in our learning. Particularly, we propose a modified \texttt{GIoU} loss using a logarithmic function to increase the bounding box losses in order to facilitate hard sample learning ($i.e.,$ small GIoU): \begin{equation} \mathcal{L}_{box} =-\ln\frac{1+GIoU}{2} \label{box} \end{equation} \indent Consequently, our overall loss can be defined as: \begin{equation} \mathcal{L}_{all} = \mathcal{L}_{box}+\mathcal{L}_{cls}+\mathcal{L}_{mask}+\mathcal{L}_{track}+\mathcal{L}_{depth} \label{all} \end{equation} where $\mathcal{L}_{cls}, \mathcal{L}_{mask}$, and $\mathcal{L}_{track}$ are from \cite{voigtlaender2019mots}, $\mathcal{L}_{box}$ is the modified \texttt{GIoU} loss from Eq.
\ref{box}, and $\mathcal{L}_{depth}$ is the average of the per-pixel smoothness and masked photometric loss in \cite{godard2019digging}. Our architecture is trained in an end-to-end fashion. \begin{table*}[!ht] \centering \begin{tabular}{c|c|c|c|c|c} \toprule Datasets & Video clips & Total frames & Identities & Instances & Ins./Fr. \\ \hline KITTI MOTS & 21 & 8K & 749 & 38K & 4.78 \\ \hline BDD100K MOTS & 70 & 14K & 6.3K & 129K & 9.20 \\ \hline Ours & 40 & 12K & 1.6K & 68K & 6.23 \\ \bottomrule \end{tabular} \caption{Annotation statistics. Our dataset outperforms KITTI MOTS in annotation volume and density. BDD100K offers the most training data but is selected sequentially from video frames, which include redundant temporal information.} \label{stats} \end{table*} \section{DGL-MOTS Dataset} \subsection{Data Acquisition} Our data acquisition is carefully designed to capture the high variability of driving scenarios, such as highway, local, residential, and parking. Our raw data is acquired from a moving vehicle over a span of two months, covering different lighting conditions in four different states in the USA. Images are recorded with a GoPro HERO8 at a frame rate of 17~\texttt{Hz}, behind the windshield of the vehicle. We deliberately skip post-processing ($i.e.,$ rectification or calibration) and keep data with motion blur and defocus to increase data diversity. We argue that data with low-degree motion blur and defocus can better reflect real driving scenarios. However, severely compromised video frames are excluded from annotations. In total, 40 video sequences are manually selected for dense annotations, aiming for a high diversity of foreground objects (vehicles and pedestrians) and overall scene layout. Our annotation is elaborated in the next section. \subsection{Annotation Protocol} To keep our annotation effort manageable, we use an iterative semi-automated annotation protocol based on our DG-Labeler.
At the first iteration, we use the pre-trained DG-Labeler to automatically perform annotations for our data, followed by a manual correction step. Per iteration, we fine-tune our DG-Labeler using the annotated data after manual corrections. We iterate the aforementioned process until pixel-level accuracy for all instances has been reached.\\ \indent To initialize DG-Labeler, we use ResNet-101~\cite{he2016deep} pre-trained on the COCO~\cite{lin2014microsoft} and Mapillary~\cite{MVD2017} datasets as our feature extraction backbone; FlowNet pre-trained on the Flying Chairs dataset~\cite{dosovitskiy2015flownet} is used to predict the flow field; and the depth network~\cite{godard2019digging} pre-trained on the KITTI depth dataset~\cite{uhrig2017sparsity} is used to predict depth maps. Note that the depth network~\cite{godard2019digging} is trained in a self-supervised manner and only needs video sequences (3 consecutive frames without ground truth). During the initial training, the weights of ResNet-101 and FlowNet are fixed, and the other weights related to the different task heads are updated by learning on KITTI MOTS and BDD100K. We train the initial model for 40 epochs with a learning rate of $5 \times {10}^{-7}$ using the Adam~\cite{kingma2014adam} optimizer and a mini-batch size of 8. After each correction, the refined annotations are used to fine-tune our DG-Labeler.\\ \indent Eventually, we use 6 iterations to finalize the annotation process. \textit{We perform further processing on the annotated data and select learning examples for training and testing every 5 frames. Following this design, our dataset generally has longer temporal representations and descriptions.} Instead of splitting our annotated data randomly, we want to ensure that the training, validation, and test sets include the data representation for different driving scenarios, such as highway, local, residential, and parking areas (Figure 1).
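Since each fine-tuning round of DG-Labeler is driven by the training objective above, the modified \texttt{GIoU} box loss of Eq.~(\ref{box}) is easy to state in code. The sketch below is ours (axis-aligned boxes, illustrative function names), not the exact released implementation:

```python
import math

def giou(a, b):
    """Generalized IoU of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    # area of the smallest enclosing box C
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c_area - union) / c_area

def box_loss(a, b):
    """Modified GIoU loss of Eq. (3): -ln((1 + GIoU) / 2)."""
    return -math.log((1.0 + giou(a, b)) / 2.0)
```

Identical boxes give $GIoU=1$ and zero loss, while the logarithm grows quickly as $GIoU$ approaches $-1$, which is what amplifies the contribution of hard samples during learning.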
\begin{table*}[ht] \centering \scriptsize \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Ep.} & Training&Testing & \multicolumn{4}{c|}{Cars} & \multicolumn{4}{c}{Pedestrians} \\ \cline{5-12} \rule{0pt}{8pt}& & Dataset &Dataset & HOTA $\uparrow$ & sMOTSA$\uparrow$ & MOTSA$\uparrow$ & IDS$\downarrow$ & HOTA $\uparrow$ & sMOTSA$\uparrow$ & MOTSA$\uparrow$ & IDS$\downarrow$\\ \midrule PointTrack++ & 40 & KITTI & KITTI& $67.28$ & $82.82$ & $92.61$ & $\textcolor{blue}{\textbf{36}}$ & $56.67$ & $68.13$ & $83.67$ & $36$\\ PointTrack++ & 40 & DGL-MOTS&KITTI & $\textcolor{red}{\textbf{68.40}}_{\textcolor{red}{\uparrow \textbf{1.12}}}$ & $\textcolor{red}{\textbf{84.33}}_{\textcolor{red}{\uparrow \textbf{1.51}}}$ & $\textcolor{red}{\textbf{93.68}}_{\textcolor{red}{\uparrow \textbf{1.07}}}$ & $\textcolor{red}{\textbf{32}}_{\textcolor{red}{\downarrow \textbf{4}}}$ & $\textcolor{blue}{\textbf{57.87}}_{\textcolor{blue}{\uparrow \textbf{1.2}}}$ & $\textcolor{red}{\textbf{69.30}}_{\textcolor{red}{\uparrow \textbf{1.17}}}$ & $\textcolor{red}{\textbf{84.51}}_{\textcolor{red}{\uparrow \textbf{0.84}}}$ & $\textcolor{red}{\textbf{33}}_{\textcolor{red}{\downarrow \textbf{3}}}$\\ PointTrack++ & 20 & DGL-MOTS &KITTI& $\textcolor{blue}{\textbf{67.42}}_{\uparrow \textcolor{blue}{\textbf{0.14}}}$ & $\textcolor{blue}{\textbf{82.99}}_{\uparrow \textcolor{blue}{\textbf{0.6}}}$ & $\textcolor{blue}{\textbf{92.67}}_{\uparrow \textcolor{blue}{\textbf{0.6}}}$ & $\textcolor{blue}{\textbf{36}}_{\textcolor{blue}{\downarrow \textbf{0}}}$ & $\textcolor{red}{\textbf{57.89}}_{\textcolor{red}{\uparrow \textbf{1.22}}}$ & $\textcolor{blue}{\textbf{68.20}}_{\textcolor{blue}{\uparrow \textbf{0.07}}}$ & $\textcolor{blue}{\textbf{83.76}}_{\textcolor{blue}{\uparrow \textbf{0.09}}}$ & $\textcolor{blue}{\textbf{35}}_{\downarrow \textcolor{blue}{\textbf{1}}}$\\ \midrule TrackRCNN & 40 & KITTI & KITTI & $56.92$ & $77.20$ & $87.92$ & $92$ & $42.08$ & $47.43$ & $67.12$ & $78$ 
\\ TrackRCNN & 40 & DGL-MOTS& KITTI & $58.02_{\uparrow 1.1}$ & $78.80_{\uparrow 1.60}$ & $88.90_{\uparrow 1.00}$& $80_{\downarrow 12}$ & $43.11_{\uparrow 1.03}$ & $48.61_{\uparrow 1.18}$ & $68.33_{\uparrow 1.21}$ & $62_{\downarrow 16}$\\ TrackRCNN& 20 & DGL-MOTS& KITTI & $57.12_{\uparrow 0.20}$ & $77.31_{\uparrow 0.11}$ & $88.15_{\uparrow 0.23}$ & $90_{\downarrow 2}$ & $42.21_{\uparrow 0.13}$ & $47.50_{\uparrow 0.07}$ & $68.46_{\uparrow 1.34}$ & $76_{\downarrow 2}$\\ \midrule STEm-Seg & 40 & KITTI& KITTI & $56.36$ & $76.30$ & $86.63$ & $76$ & $43.10$ & $51.02$ & $66.60$ & $74$\\ STEm-Seg & 40 & DGL-MOTS&KITTI & $57.50_{\uparrow 1.14}$ & $77.35_{\uparrow 1.05}$ & $87.92_{\uparrow 1.29}$ & $56_{\downarrow 20}$ & $45.10_{\uparrow2.00}$ & $52.70_{\uparrow 1.68}$ & $68.00_{\uparrow 1.4}$& $60_{\downarrow 14}$\\ STEm-Seg & 20 & DGL-MOTS&KITTI & $56.70_{\uparrow 0.34}$ & $76.36_{\uparrow 0.06}$ & $86.70_{\uparrow 0.07}$ & $66_{\downarrow 10}$ & $43.45_{\uparrow 0.35}$ & $51.42_{\uparrow 0.4}$ & $66.99_{\uparrow 0.39}$& $70_{\downarrow 4}$\\ \midrule PointTrack++ & 40 &BDD100K&BDD100K& $\textcolor{blue}{\textbf{68.33}}$ & $\textcolor{blue}{\textbf{84.60}}$ & $93.20$ & $\textcolor{blue}{\textbf{49}}$ & $55.42$ & $\textcolor{blue}{\textbf{64.56}}$ & $\textcolor{blue}{\textbf{80.29}}$ & $45$ \\ PointTrack++ & 40 & DGL-MOTS&BDD100K & $\textcolor{red}{\textbf{69.28}}_{\textcolor{red}{\uparrow \textbf{0.95}}}$ & $\textcolor{red}{\textbf{85.59}}_{\textcolor{red}{\uparrow \textbf{0.99}}}$ & $\textcolor{red}{\textbf{94.32}}_{\textcolor{red}{\uparrow \textbf{1.12}}}$ & $\textcolor{red}{\textbf{38}}_{ \textcolor{red}{\downarrow\textbf{11}}}$ & $\textcolor{red}{\textbf{56.89}}_{\textcolor{red}{\uparrow \textbf{1.47}}}$ & $\textcolor{red}{\textbf{65.28}}_{\textcolor{red}{\uparrow \textbf{0.72}}}$ & $\textcolor{red}{\textbf{81.05}}_{\textcolor{red}{\uparrow \textbf{0.76}}}$ & $\textcolor{red}{\textbf{34}}_{\textcolor{red}{\downarrow \textbf{11}}}$\\ PointTrack++ & 20 & DGL-MOTS 
&BDD100K& $68.26_{{\downarrow 0.07}}$ & $84.43_{{\downarrow 0.17 }}$ & $\textcolor{blue}{\textbf{93.27}}_{\textcolor{blue}{\uparrow \textbf{0.07}}}$ & $52_{{\uparrow 3}}$ & $\textcolor{blue}{\textbf{55.37}}_{{\downarrow \textcolor{blue}{\textbf{0.05}}} }$ & $64.23_{{\downarrow 0.33 }}$ & $80.26_{{\downarrow 0.03}}$ & $50_{{\uparrow 5}}$\\ \midrule TrackRCNN & 40 &BDD100K&BDD100K & $57.91$ & $78.10$ & $88.62$ & $85$ & 46.37 & $55.93$ & $70.18$& $88$\\ TrackRCNN & 40 & DGL-MOTS& BDD100K & $59.22_{\uparrow 1.31}$ & $79.82_{\uparrow 1.72}$ & $89.90_{\uparrow 1.08}$& $68_{\downarrow 17}$ & $47.49_{\uparrow 1.42}$ & $56.61_{\uparrow 0.68}$ & $71.80_{\uparrow 1.62}$ & $78_{\downarrow 10 }$\\ TrackRCNN& 20 & DGL-MOTS& BDD100K & $58.09_{\uparrow 0.18}$ & $78.20_{\uparrow 0.10}$ & $88.69_{\uparrow 0.07}$ & $80_{\downarrow 5}$ & $46.52_{\uparrow 0.15}$ & $56.07_{\uparrow 0.14}$ & $70.32_{\uparrow 0.14}$& $84_{\downarrow 4}$\\ \midrule STEm-Seg & 40 &BDD100K &BDD100K& $57.39$ & $77.24$ &$87.65$ & $66$ & $47.65$ & $56.30$ & $71.03$ & $48$\\ STEm-Seg & 40 & DGL-MOTS&BDD100K & $58.62_{\uparrow 1.23}$ & $78.50_{\uparrow 1.20}$ & $88.96_{\uparrow 1.30}$ & $56_{\downarrow 10}$ & $49.00_{\uparrow 1.35}$ & $57.72_{\uparrow 1.42}$ & $72.20_{\uparrow 1.17}$ & $\textcolor{blue}{\textbf{38}}_{\downarrow \textcolor{blue}{\textbf{10}}}$\\ STEm-Seg & 20 & DGL-MOTS&BDD100K & $57.70_{\uparrow 0.31}$ & $77.78_{\uparrow 0.54}$ & $88.04_{\uparrow 0.39}$ & $64_{\downarrow 2}$ & $47.95_{\uparrow 0.30}$ & $56.98_{\uparrow 0.68}$ & $71.50_{\uparrow 0.47}$ & $44_{\downarrow 4}$\\ \bottomrule \end{tabular} \caption{The results for cross-dataset evaluation on KITTI, BDD100K, and our DGL-MOTS. ${\uparrow}$ and ${\downarrow}$ indicate the change of performance on the metrics. 
The \textcolor{red}{best} and the \textcolor{blue}{second-best} methods on KITTI and BDD100K are highlighted.} \label{trainS} \end{table*} \begin{figure}[h] \centering \includegraphics[width=8.5cm]{img/stats.png} \caption{Distribution of the instance size (left), based on the bounding box size, and of the track length (right), based on the duration for which instances appear in the videos. Our dataset is more diverse in object scale and tracking length than its counterparts. } \label{DStats} \end{figure} \section{Experiments} \subsection{Implementation Details} In order to evaluate the proposed dataset and annotator, we perform cross-dataset evaluations and compare against three recent state-of-the-art MOTS methods\footnote{All the compared methods use ResNet101~\cite{he2016deep}.}, namely, PointTrack++~\cite{xu2020segment}, TrackRCNN~\cite{voigtlaender2019mots}, and STEm-Seg~\cite{athar2020stem}, which only require instance-level labels for training\footnote{There are more recent methods \cite{qiao2020vip,kimeagermot,luiten2020track}. However, these methods need extra information (e.g., flow fields and LiDAR measurements) for supervision.}. All the methods are trained on the KITTI MOTS, BDD100K, and DGL-MOTS train sets separately and cross-validated on each dataset. In training, we designate no fixed number of total iterations and allow each method to be trained until its performance plateaus. The evaluation metrics are sMOTSA, MOTSA, IDS, and HOTA from~\cite{voigtlaender2019mots}. All experiments are conducted on one TITAN RTX GPU. \subsection{Dataset Statistics} \textbf{Annotation volume} is summarized in Table~\ref{stats}. We compare DGL-MOTS with BDD100K and KITTI MOTS in terms of the number of video clips, video frames, unique identities, instances, and instances per frame. In this comparison, DGL-MOTS surpasses KITTI MOTS in all of these statistics.
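For reference, the mask-based metrics reported in the following tables can be computed from a handful of per-frame match statistics. The Python sketch below is our own minimal illustration (not the official evaluation code); it assumes the standard definitions MOTSA $=(|TP|-|FP|-|IDS|)/|M|$ and sMOTSA $=(\widetilde{TP}-|FP|-|IDS|)/|M|$, where $\widetilde{TP}$ is the sum of the mask IoUs of the true positives and $|M|$ is the number of ground-truth masks.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def motsa_smotsa(tp_ious, num_fp, num_ids, num_gt):
    """MOTSA and sMOTSA from the IoUs of the true-positive matches
    (mask IoU > 0.5), the false-positive and ID-switch counts, and the
    total number of ground-truth masks."""
    soft_tp = float(sum(tp_ious))        # soft true positives: sum of IoUs
    motsa = (len(tp_ious) - num_fp - num_ids) / num_gt
    smotsa = (soft_tp - num_fp - num_ids) / num_gt
    return motsa, smotsa

# toy frame: 3 ground-truth masks, 2 true positives (IoU 0.9 and 0.8),
# 1 false positive, no ID switches
motsa, smotsa = motsa_smotsa([0.9, 0.8], num_fp=1, num_ids=0, num_gt=3)
```

Note that sMOTSA penalizes coarse masks even for correct matches, which is why it is the more discriminative of the two scores for annotation quality.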
Particularly, the number of instances per frame in our dataset is around 1.5\% higher than that of KITTI MOTS, which indicates a higher degree of scene complexity. BDD100K has the largest data volume among the three datasets, but its data are densely selected from sequential video frames, which introduces redundant learning examples.\\ \indent \textbf{Instance variations} are represented by the instance appearance change as well as the temporal description (as shown in Figure \ref{DStats}). The left panel shows the distribution of the square-root bounding-box size $\sqrt{wh}$ (where $w$ and $h$ denote the box width and height), while the right panel shows the distribution of the tracking length per instance. Figure \ref{DStats} demonstrates that our dataset is not only more diverse in visual scale but also covers a longer temporal range for tracking.\\ \indent \textbf{Scene diversity} is well-represented in our DGL-MOTS dataset, which includes more diverse driving scenes (Figure 1). Since DGL-MOTS provides recordings from four different states, it covers significantly more areas than KITTI MOTS, which contains driving footage from a single city (Karlsruhe, Germany). Compared to BDD100K, our dataset includes more road settings, such as parking, residential, local, and highway roads, while BDD100K mainly collects inner-city data from populous areas in the US \cite{yu2020bdd100k}. \begin{figure*}[!ht] \centering \includegraphics[width=16.6cm]{img/cdt.PNG} \caption{Qualitative examples of different methods tested on the KITTI test set. The left column shows methods trained on KITTI, while the right column shows methods trained on our dataset. We can see the improvements in segmentation and tracking brought by our dataset.
Masks of the same color indicate the tracking of the same instance.} \label{cdt} \end{figure*} \begin{table*}[ht] \centering \scriptsize \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Dataset} & \multicolumn{4}{c|}{Cars} & \multicolumn{4}{c}{Pedestrians} \\ \cline{3-10} \rule{0pt}{12pt} & & HOTA$\uparrow$ & sMOTSA$\uparrow$ & MOTSA$\uparrow$ & IDS$\downarrow$ & HOTA$\uparrow$ &sMOTSA$\uparrow$ & MOTSA$\uparrow$ & IDS$\downarrow$ \\ \midrule PointTrack++ & KITTI & \textcolor{blue}{\textbf{67.28}} & \textcolor{blue}{\textbf{82.82}} & \textcolor{red}{\textbf{92.61}} & \textcolor{blue}{\textbf{36}} & \textcolor{red}{\textbf{56.67}} & \textcolor{blue}{\textbf{68.13}} & \textcolor{red}{\textbf{83.67}} & \textcolor{red}{\textbf{36}}\\ TrackRCNN & KITTI & 56.92 & 77.20 & 87.92 & 92 & 42.08 & 47.43 &67.12 & 78\\ STEm-Seg & KITTI & 56.36 & 76.30 &86.63 & 76 & 43.10 & 51.02 & 66.60&74\\ \textbf{DG-Labeler (Ours)} & KITTI & $\textcolor{red}{\textbf{69.72}}$ & $\textcolor{red}{\textbf{83.68}}$ & $\textcolor{blue}{\textbf{90.72}}$ & $\textcolor{red}{\textbf{35}}$ & $\textcolor{blue}{\textbf{55.90}}$ & $\textcolor{red}{\textbf{69.36}}$ & $\textcolor{blue}{\textbf{83.40}}$ & $\textcolor{blue}{\textbf{50}}$\\ \midrule PointTrack++ & BDD100K & \textcolor{red}{\textbf{68.33}} & \textcolor{blue}{\textbf{84.60}} & \textcolor{red}{\textbf{93.20}} & \textcolor{red}{\textbf{49}} & \textcolor{blue}{\textbf{55.42}} & \textcolor{blue}{\textbf{64.56}} & \textcolor{blue}{\textbf{80.29}}& \textcolor{red}{\textbf{45}}\\ TrackRCNN & BDD100K & 57.91 & 78.10 & 88.62 & 85 & 46.37 & 55.93 &70.18 & 88\\ STEm-Seg & BDD100K & 57.39 & 77.24 &87.65 & 60 & 47.65 & 56.30 & 71.03 & 48\\ \textbf{DG-Labeler (Ours)} & BDD100K & $\textcolor{blue}{\textbf{67.89}}$ & $\textcolor{red}{\textbf{85.30}}$ & $\textcolor{blue}{\textbf{91.70}}$ & $\textcolor{blue}{\textbf{58}}$ & $\textcolor{red}{\textbf{56.23}}$ & $\textcolor{red}{\textbf{65.43}}$ & $\textcolor{red}{\textbf{81.40}}$
& $\textcolor{blue}{\textbf{48}}$\\ \midrule PointTrack++ & DGL-MOTS & \textcolor{blue}{\textbf{68.10}} & \textcolor{blue}{\textbf{83.62}} & \textcolor{red}{\textbf{92.39}} & \textcolor{blue}{\textbf{42}} & \textcolor{blue}{\textbf{59.10}} & \textcolor{blue}{\textbf{71.90}} & \textcolor{blue}{\textbf{86.60}} & \textcolor{blue}{\textbf{32}}\\ TrackRCNN & DGL-MOTS & 58.63 & 78.8 & 88.9 & 88 & 48.23 & 60.29 & 76.10 & 77\\ STEm-Seg & DGL-MOTS & 57.90 & 77.99 &87.9 & 78 & 47.88 & 59.82 & 67.70 & 58\\ \textbf{DG-Labeler (Ours)} & DGL-MOTS & \textcolor{red}{\textbf{69.35}} & \textcolor{red}{\textbf{84.10}} & \textcolor{blue}{\textbf{91.43}} & \textcolor{red}{\textbf{40}} & \textcolor{red}{\textbf{61.20}} & \textcolor{red}{\textbf{73.17}} & \textcolor{red}{\textbf{87.14}} & \textcolor{red}{\textbf{28}}\\ \bottomrule \end{tabular} \caption{Comparison with the state-of-the-art methods on the KITTI MOTS, BDD100K, and DGL-MOTS. Each method is trained on KITTI MOTS, BDD100K, and DGL-MOTS separately. The \textcolor{red}{best} and the \textcolor{blue}{second-best} methods are highlighted.} \label{trainD} \end{table*} \subsection{Cross-Dataset Evaluations} Table~\ref{trainS} reports the results for the cross-dataset evaluations to assess our DGL-MOTS dataset. For the same method trained on different datasets, the performance gaps stem from the quality of the dataset ($i.e.,$ annotation quality, data diversity, and temporal representation). Essentially, we observe two benefits of using our dataset in training over its counterparts. First, methods trained on DGL-MOTS all outperform their counterparts (with the same network architecture) on all metrics (Table~\ref{trainS}). The improved performance indicates that, compared to KITTI and BDD100K, our dataset captures more general road settings and driving scenarios in training. Second, the DGL-MOTS dataset can train methods to achieve improved performance with a shorter schedule than methods trained on KITTI and BDD100K.
For instance, TrackRCNN~\cite{voigtlaender2019mots} and STEm-Seg~\cite{athar2020stem} trained on DGL-MOTS with 20 epochs outperform their counterparts trained on KITTI and BDD100K with 40 epochs, respectively.\\ \indent In addition, we display qualitative examples of each method from KITTI MOTS in Figure~\ref{cdt}. The selected results also resonate with our quantitative analysis that methods trained on our DGL-MOTS dataset generally achieve better performance in instance mask generation and tracking than their counterparts (with the same architecture) trained on KITTI MOTS. Both quantitative and qualitative results demonstrate the advantages of the proposed DGL-MOTS dataset over the concurrent datasets. \begin{figure*}[!ht] \centering \includegraphics[width=16.2cm]{img/cmp.png} \caption{Qualitative examples of different methods on the DGL-MOTS dataset. Compared to other methods, our DG-Labeler offers fine-grained instance masks. All methods are trained and tested on our DGL-MOTS dataset.} \label{qe} \end{figure*} \begin{table*}[ht] \centering \scriptsize \begin{tabular}{c|c|c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Cars} & \multicolumn{4}{c}{Pedestrians} \\ \cline{2-9} \rule{0pt}{12pt} & HOTA$\uparrow$ & sMOTSA$\uparrow$ & MOTSA$\uparrow$ & IDS$\downarrow$ & HOTA$\uparrow$ & sMOTSA$\uparrow$ & MOTSA$\uparrow$ & IDS$\downarrow$\\ \midrule TrackRCNN & 57.91 & 78.10 & 88.62 & 85 & 46.37 & 55.93 & 70.18 & 88\\ TrackRCNN+CC & $63.51_{\uparrow 5.6}$ & $79.9_{\uparrow 1.8}$ & $89.42_{\uparrow 0.8}$ & $67_{\downarrow 18}$ & $48.77_{\uparrow 2.4}$ & $58.73_{\uparrow 2.8}$ & $72.18_{\uparrow 2.0}$ & $62_{\downarrow 26}$\\ TrackRCNN+DGM & $66.81_{\uparrow 8.9}$ & $82.7_{\uparrow 4.6}$ & $90.92_{\uparrow 2.3}$ & $74_{\downarrow 11}$ & $53.07_{\uparrow 6.7}$ & $62.03_{\uparrow 6.1}$ & $77.48_{\uparrow 7.3}$ & $74_{\downarrow 14}$\\ TrackRCNN+CC+GL & $64.51_{\uparrow 6.6}$ & $80.90_{\uparrow 2.8}$ & $90.92_{\uparrow 2.3}$ & $65_{\downarrow 20}$ &
$50.27_{\uparrow 3.9}$ & $59.93_{\uparrow 4.0}$ & $73.78_{\uparrow 3.6}$ & $\textcolor{blue}{\textbf{55}}_{\textcolor{blue}{\downarrow \textbf{33}}}$\\ TrackRCNN+CC+DGM & $\textcolor{blue}{\textbf{68.51}}_{\textcolor{blue}{\uparrow \textbf{10.6}}}$ & $\textcolor{blue}{\textbf{84.68}}_{\textcolor{blue}{\uparrow \textbf{6.4}}}$ & $\textcolor{blue}{\textbf{91.42}}_{\textcolor{blue}{\uparrow \textbf{2.8}}}$ & $\textcolor{blue}{\textbf{63}}_{\textcolor{blue}{\downarrow \textbf{22}}}$ & $\textcolor{blue}{\textbf{54.77}}_{\textcolor{blue}{\uparrow \textbf{8.4}}}$ & $\textcolor{blue}{\textbf{64.83}}_{\textcolor{blue}{\uparrow \textbf{8.9}}}$ & $\textcolor{blue}{\textbf{80.08}}_{\textcolor{blue}{\uparrow \textbf{9.9}}}$ & $60_{\downarrow 28}$\\ \midrule \textbf{DG-Labeler (Ours)} & $\textcolor{red}{\textbf{69.35}}_{\textcolor{red}{\uparrow \textbf{11.44}}}$ & $\textcolor{red}{\textbf{85.30}}_{\textcolor{red}{\uparrow \textbf{7.2}}}$ & $\textcolor{red}{\textbf{91.70}}_{\textcolor{red}{\uparrow \textbf{3.08}}}$ & $\textcolor{red}{\textbf{58}}_{\textcolor{red}{\downarrow \textbf{27}}}$ & $\textcolor{red}{\textbf{56.23}}_{\textcolor{red}{\uparrow \textbf{9.86}}}$ & $\textcolor{red}{\textbf{65.43}}_{\textcolor{red}{\uparrow \textbf{9.5}}}$ & $\textcolor{red}{\textbf{81.40}}_{\textcolor{red}{\uparrow \textbf{11.22}}}$ & $\textcolor{red}{\textbf{48}}_{\textcolor{red}{\downarrow \textbf{40}}}$\\ \bottomrule \end{tabular} \caption{Ablation study results on the BDD100K. All methods are trained on the BDD100K training set. CC, DGM, and GL stand for collaborative connections, depth-granularity module, and GIoU loss respectively. We use the best models in training for testing. ${\downarrow}$ and ${\uparrow}$ indicate the performance gain to the baseline. 
The \textcolor{red}{best} and the \textcolor{blue}{second-best} methods are highlighted.} \label{ab} \end{table*} \subsection{Comparison to the State of the Art} This section presents the state-of-the-art comparison of our DG-Labeler on KITTI, BDD100K, and DGL-MOTS.\\ \indent \textbf{Quantitative results} show that DG-Labeler achieves appealing performance on all metrics (HOTA, sMOTSA, MOTSA, and IDS) among all methods (Table~\ref{trainD}). For instance, our margins over the strong methods (TrackRCNN~\cite{voigtlaender2019mots} and STEm-Seg~\cite{athar2020stem}) are around 3.53-13.36\% for the car class and 9.5-21.93\% for the pedestrian class on HOTA, sMOTSA, and MOTSA. The improvements suggest that DG-Labeler exhibits superior segmentation behavior compared with other recent methods. Meanwhile, DG-Labeler performs on par with the top-performing concurrent method, PointTrack++~\cite{xu2020segment}, on all metrics. The reported results indicate that our DG-Labeler is competitive with the existing best approaches.\\ \indent \textbf{Qualitative examples} demonstrate the improved instance mask quality of our DG-Labeler over the counterpart methods (as shown in Figure~\ref{qe}). To demonstrate our advantage, we select samples that other methods have trouble dealing with. Those cases include 1) objects with complex shapes ($e.g.,$ wing mirrors or pedestrians), whose sharp borders are hard to depict; 2) overlapping objects of the same class, where other methods often confuse the borders and fail to segment accurate boundaries; and 3) objects split into separate parts ($e.g.,$ occluded or truncated objects), where other methods may segment one target into several objects or include occlusions as false positives.
Based on the results, our DG-Labeler achieves an improved segmentation behavior in these cases because our depth-granularity module models the object spatial relations which offer more accurate descriptions of instance details and boundaries. Besides, our collaborative connections allow our segmentation and tracking head to accurately fire on the pixel of the instance instead of using the candidate proposals. \subsection{Ablation Study} We perform an ablation study on the BDD100K test set. Note our method is crafted on TrackRCNN~\cite{voigtlaender2019mots}, thus our baseline. By progressively integrating different contributing components: collaborative connections (CC), depth-granularity module (DGM), and GIoU loss (GL) (Sec. 3.4), to the baseline, we assess the contribution of each new component in DG-Labeler to TrackRCNN~\cite{voigtlaender2019mots}.\\ \indent We present the results in Table~\ref{ab}. All of our components (CC, DGM, GL) assist in achieving improved performance. Particularly for a single module, the baseline with CC avoids inefficient proposal-based operations and performs predictions on the accurate RoIs, thus achieving improved performance in accuracy; DGM contributes the largest improvements in dense prediction (HOTA, sMOTSA, and MOTSA). Compared to the strong baseline TrackRCNN, our full model integrating all contributions obtains absolute gains of 11.44\%, 7.2\%, 3.08\%, and 27 in terms of HOTA, sMOTSA, MOTSA, and IDS for car class and 9.86\%, 9.5\%, 11.22\%, and 40 for pedestrian class respectively. More results are displayed in the supplementary materials. \section{Conclusion} In this work, we offer the DGL-MOTS Dataset for training MOTS algorithm as well as DG-Labeler for data annotation. We believe that our work holds valuable potentials to facilitate the progress of the MOTS studies. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \label{sec:introduction} \noindent In this paper, we shall develop a data-driven method to solve the following multiscale elliptic PDEs with random coefficients $a(x,\omega)$, \begin{align} \mathcal{L}(x,\omega) u(x,\omega) \equiv -\nabla\cdot\big(a(x,\omega)\nabla u(x,\omega)\big) &= f(x), \quad x\in D, \quad \omega\in\Omega, \label{MsStoEllip_Eq}\\ u(x,\omega)&= 0, \quad \quad x\in \partial D, \label{MsStoEllip_BC} \end{align} where $D \subset \mathbb{R}^d$ is a bounded spatial domain and $\Omega$ is a sample space. The forcing function $f(x)$ is assumed to be in $L^2(D)$. We also assume that the problem is uniformly elliptic almost surely; see Section \ref{sec:randomproblem} for a precise definition of the problem. In recent years, there has been an increased interest in quantifying the uncertainty in systems with randomness, i.e., in solving stochastic partial differential equations (SPDEs; PDEs driven by Brownian motion) or partial differential equations with random coefficients (RPDEs). Uncertainty quantification (UQ) is an emerging research area to address these issues; see \cite{Ghanem:91,Xiu:03,babuska:04,matthies:05,WuanHou:06,Wan:06,Babuska:07,Webster:08,Xiu:09,Najm:09,sapsis:09,Zabaras:13,Grahamquasi:2015} and references therein. However, when SPDEs or RPDEs involve multiscale features and/or high-dimensional random inputs, the problems become challenging to solve due to the high computational cost. Recently, some progress has been made in developing numerical methods for multiscale PDEs with random coefficients; see \cite{Kevrekidis:2003,Zabaras:06,Ghanem:08,graham2011quasi,abdulle2013multilevel,hou2015heterogeneous,ZhangCiHouMMS:15,efendiev2015multilevel,chung2018cluster} and references therein. For example, data-driven stochastic methods to solve PDEs with random and/or multiscale coefficients were proposed in \cite{ChengHouYanZhang:13,ZhangCiHouMMS:15,ZhangHouLiu:15,hou2019model}.
They demonstrated through numerical experiments that those methods were efficient in solving RPDEs with many different forcing functions. However, the polynomial chaos expansion \cite{Ghanem:91,Xiu:03} is used to represent the randomness in the solutions. Although the polynomial chaos expansion is general, it is chosen a priori rather than being problem specific. Hence, many terms may be required in practice for an accurate approximation, which induces the curse of dimensionality. We aim to develop a new data-driven method to solve multiscale elliptic PDEs with random coefficients based on intrinsic dimension reduction. The underlying low-dimensional structure for elliptic problems is implied by the work \cite{bebendorf2003}, in which high separability of the Green's function for uniformly elliptic operators with $L^{\infty}$ coefficients and the structure of blockwise low-rank approximation to the inverses of FEM matrices were established. We show that under the uniform ellipticity assumption, the family of Green's functions parametrized by a random variable $\omega$ is still highly separable, which reveals the approximate low-dimensional structure of the family of solutions to \eqref{MsStoEllip_Eq} (again parametrized by $\omega$) and motivates our method. Our method consists of two stages. In the offline stage, a set of data-driven basis functions is constructed from solution samples. For example, the data can be generated by solving \eqref{MsStoEllip_Eq}-\eqref{MsStoEllip_BC} corresponding to a sampling of the coefficient $a(x,\omega)$. Here, different sampling methods can be applied, including the Monte Carlo (MC) method and the quasi-Monte Carlo (qMC) method. The sparse-grid based stochastic collocation method \cite{Griebel:04,Xiu:05,Webster:08} also works when the dimension of the random variables in $a(x,\omega)$ is moderate. Alternatively, the data may come directly from field measurements.
Then the low-dimensional structure and the corresponding basis will be extracted using model reduction methods, such as the proper orthogonal decomposition (POD) \cite{HolmesLumleyPOD:1993,Sirovich:1987,Willcox2015PODsurvey}, a.k.a. principal component analysis (PCA). The basis functions are data-driven and problem-specific. The key point is that once the dimension reduction is achieved, the online stage of computing the solution corresponding to a new coefficient becomes finding a linear combination of the (few) basis functions to approximate the solution. However, the mapping from the input coefficients of the PDE to the expansion coefficients of the solution in terms of the data-driven basis is highly nonlinear. We propose a few possible online strategies (see Section \ref{sec:DerivationNewMethod}). For example, if the coefficient is in parametric form, one can approximate the nonlinear map from the parameter domain to the expansion coefficients. Alternatively, one can apply the Galerkin method with the extracted basis to solve \eqref{MsStoEllip_Eq}-\eqref{MsStoEllip_BC} for a new coefficient. In practice, the random coefficient of the PDE may not be available, but sensors can be deployed to record the solution at certain locations. In this case, one can compute the expansion coefficients of a new solution by least-squares fitting of the measurements at designed locations. We also provide analysis and guidelines for sampling, dimension reduction, and other implementations of our methods. The rest of the paper is organized as follows. In Section 2, we introduce the high separability of the Green's function of deterministic elliptic PDEs and present its extension to elliptic problems with random coefficients. In Section 3, we describe our new data-driven method and its detailed implementation. In Section 4, we present numerical results to demonstrate the efficiency of our method. Concluding remarks are made in Section 5.
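To make the offline/online split concrete, the following sketch (a toy construction of our own, not the code used for the experiments in this paper) generates snapshot solutions of a 1D finite-difference analogue of the problem with random piecewise-constant coefficients, extracts a POD basis by SVD in the offline stage, and then solves for a new, unseen coefficient by Galerkin projection onto that basis in the online stage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                       # interior grid points on (0, 1)
h = 1.0 / (n + 1)
f = np.ones(n)                # forcing f(x) = 1

def stiffness(a_half):
    """Finite-difference stiffness matrix of -(a u')' with zero Dirichlet
    data; a_half holds the coefficient at the n + 1 half points."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (a_half[i] + a_half[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = A[i - 1, i] = -a_half[i] / h**2
    return A

def sample_coeff():
    """Random coefficient, piecewise constant on 10 cells, in [0.5, 2]."""
    vals = rng.uniform(0.5, 2.0, size=10)
    return vals[np.arange(n + 1) * 10 // (n + 1)]

# offline stage: snapshot solutions and POD basis via SVD
S = np.column_stack(
    [np.linalg.solve(stiffness(sample_coeff()), f) for _ in range(100)])
U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
Ur = U[:, :r]                 # data-driven basis

# online stage: Galerkin projection for a new, unseen coefficient
A_new = stiffness(sample_coeff())
u_full = np.linalg.solve(A_new, f)
c = np.linalg.solve(Ur.T @ A_new @ Ur, Ur.T @ f)
rel_err = np.linalg.norm(Ur @ c - u_full) / np.linalg.norm(u_full)
```

The reduced system is only $r\times r$, so once the basis is available the online solve is much cheaper than the full solve, while the relative error stays small.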
\section{Low-dimensional structures in the solution space} \label{sec:LowDimStructures} \subsection{High separability of the Green's function of deterministic elliptic operators} \noindent Let $\mathcal{L}(x): V \to V' $ be a uniformly elliptic operator in divergence form \begin{align} \mathcal{L}(x)u(x) \equiv -\nabla\cdot(a(x)\nabla u(x))\label{DeterministicEllipticPDE} \end{align} in a bounded Lipschitz domain $D \subset \mathbb{R}^d$, where $V = H_0^1(D)$. The uniform ellipticity assumption means that there exist $a_{\min}, a_{\max}>0$, such that $a_{\min}<a(x)<a_{\max}$ for almost all $x \in D$. The contrast ratio $\kappa_a=\frac{a_{\max}}{a_{\min}}$ is an important factor in the stability and convergence analysis. We consider the Dirichlet boundary value problem defined as \begin{align} \mathcal{L}(x)u(x)=f(x), \quad \text{in}~ D, \quad u(x)=0, \quad \text{on} ~ \partial D. \label{DeterministicDirichletProblem} \end{align} For all $x,y\in D$, the Green's function $G(x,y)$ is the solution of \begin{align} \mathcal{L}G(\cdot,y)=\delta(\cdot,y), \quad \text{in}~ D, \quad G(\cdot,y)=0, \quad \text{on} ~ \partial D, \label{DeterministicGreenFunction} \end{align} where $\mathcal{L}$ acts on the first variable $\cdot$ and $\delta(\cdot,y)$ is the Dirac delta function denoting an impulse source point at $y\in D$. The Green's function $G(x,y)$ is the Schwartz kernel of the inverse $\mathcal{L}^{-1}$, i.e., the solution of \eqref{DeterministicDirichletProblem} is represented by \begin{align} u(x)=\mathcal{L}^{-1}f(x)=\int_{D}G(x,y)f(y)dy. \label{solutionrepresentationGreen} \end{align} Since the coefficient $a(x)$ is only bounded, $G(x,y)$ has lower regularity than the Green's function associated with Poisson's equation.
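The representation formula \eqref{solutionrepresentationGreen} has an exact discrete analogue that is easy to check numerically. In the toy 1D finite-difference sketch below (our own construction, with $a\equiv 1$ so that $G$ is known in closed form), the scaled inverse of the stiffness matrix reproduces the Green's function, $A^{-1}_{ij}=h\,G(x_i,x_j)$ with $G(x,y)=\min(x,y)(1-\max(x,y))$, and $u=A^{-1}f$ realizes $u(x)=\int_D G(x,y)f(y)\,dy$.

```python
import numpy as np

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# finite-difference stiffness matrix of -u'' on (0, 1), zero Dirichlet data
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# discrete Green's function: G_h[i, j] = h * G(x_i, x_j) in this setting
G_h = np.linalg.inv(A)

# exact Green's function of -u'': G(x, y) = min(x, y) * (1 - max(x, y))
X, Y = np.meshgrid(x, x, indexing="ij")
G_exact = np.minimum(X, Y) * (1.0 - np.maximum(X, Y))
green_err = np.max(np.abs(G_h / h - G_exact))

# representation u(x) = \int G(x, y) f(y) dy  <->  u = G_h @ f
f = np.sin(np.pi * x)
u_green = G_h @ f
u_solve = np.linalg.solve(A, f)
```

For a variable coefficient the same identification holds, except that no closed form for $G$ is available; the discrete Green's matrix is then simply the inverse of the assembled stiffness matrix.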
In \cite{gruter1982green}, the authors proved the existence of the Green's function for $ d \geq 3 $ and the estimate $|G(x,y)|\leq \frac{C(d,\kappa_a)}{a_{\min}}|x-y|^{2-d}$, where $C(d,\kappa_a)$ is a constant depending on $d$ and $\kappa_a$. For $d=2$ the existence of the Green's function was proved in \cite{dolzmann1995estimates} together with the estimate $|G(x,y)|\leq \frac{C(\kappa_a)}{a_{\min}}\big|\log|x-y|\big|$. Thus, when $\mathcal{L}$ is a uniformly elliptic operator, $\mathcal{L}^{-1}$ exists and $||\mathcal{L}^{-1}||\leq Ca_{\min}^{-1}$, where $C$ depends on $d$ and $\kappa_a$. Under mild assumptions, one can prove that the solution $u(x)$ to Eq.\eqref{DeterministicDirichletProblem} has finite dimensional approximations as follows. \begin{proposition}\label{FiniteDimensionalApprox} Let $D \subset \mathbb{R}^d$ be a convex domain and $X$ be a closed subspace of $L^2(D)$. Then for any integer $k \in \mathbb{N}$ there is a subspace $V_k \subset X$ satisfying $\dim V_k \leq k$ such that \begin{align} \text{\normalfont dist}_{L^2(D)}(u,V_k) \le C \frac{\text{\normalfont diam}(D)}{\sqrt[d]{k}}\| \nabla u \| _{L^2(D)}, \quad \text{\normalfont for all } u \in X \cap H^1(D), \label{ApproxByFiniteDimensionSubspace} \end{align} where the constant $C$ depends only on the spatial dimension $d$. \end{proposition} The proof is based on the Poincar\'{e} inequality; see \cite{BebendorfHackbusch:2003}. All distances and diameters use the Euclidean norm in $\mathbb{R}^d$ except the distance between functions, which uses the $L^2(D)$-norm. We emphasize that in Prop.~\ref{FiniteDimensionalApprox} one can choose the finite dimensional space $V_k$ to be the space of piecewise constant functions defined on a grid with grid size $\frac{\text{\normalfont diam}(D)}{\sqrt[d]{k}}$. Now we present the definition of an $\mathcal{L}$-harmonic function on a domain $E\subset D$ introduced in \cite{BebendorfHackbusch:2003}.
A function $u$ is $\mathcal{L}$-harmonic on $E$ if $u\in H^1(\hat{E}), \forall \hat{E} \subset E$ with $dist(\hat{E}, \partial E)>0$ and satisfies \[ a(u, \varphi) = \int_{E} a(x)\nabla u(x)\cdot \nabla \varphi(x) dx =0 \quad \forall \varphi \in C_0^{\infty} (E). \] Denote the space of $\mathcal{L}$-harmonic functions on $E$ by $X(E)$, which is closed in $L^2(E)$. The following key Lemma shows that the space of $\mathcal{L}$-harmonic functions has an approximate low dimensional structure. \begin{lemma}[Lemma 2.6 of \cite{BebendorfHackbusch:2003}]\label{lemma1} Let $\hat{E}\subset E \subset D$ in $R^d$ and assume that $\hat{E}$ is convex such that \[ dist(\hat{E}, \partial E)\ge \rho~ diam(\hat{E})>0, \quad \mbox{for some constant } \rho >0. \] Then for any $1>\epsilon>0$, there is a subspace $W\subset X(\hat{E})$ so that for all $u\in X(\hat{E})$, \[ dist_{L^2(\hat{E})}(u, W)\le \epsilon \|u\|_{L^2(E)} \] and \[ dim(W)\le c^d(\kappa_a,\rho) (|\log \epsilon |)^{d+1}, \] where $c(\kappa_a,\rho) >0 $ is a constant that depends on $\rho$ and $\kappa_a$. \end{lemma} In other words, the above Lemma says the Kolmogorov n-width of the space of $\mathcal{L}$-harmonic functions $X(\hat{E})$ is of order $O(\exp(-cn^{\frac{1}{d+1}}))$ for some constant $c>0$. The key property of $\mathcal{L}$-harmonic functions used to prove the above result is the Caccioppoli inequality, which provides the estimate $\|\nabla u\|_{L^2(\hat{E})} \le C(\kappa_a, \rho)\|u\|_{L^2(E)}$. Moreover, the projection of the space of piecewise constant functions defined on a multi-resolution rectangular mesh onto $X(\hat{E})$ can be constructed as a candidate for $W$ based on Prop. \ref{FiniteDimensionalApprox}. In particular, the Green's function $G(\cdot,y)$ is $\mathcal{L}$-harmonic on $E$ if $y\notin E$. Moreover, given two disjoint subdomains $D_1, D_2$ of $D$, the Green's function $G(x,y)$ with $x\in D_1, y\in D_2$ can be viewed as a family of $\mathcal{L}$-harmonic functions on $D_1$ parametrized by $y\in D_2$.
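This low-dimensional structure can be observed directly in a discretization. The sketch below (a toy 1D finite-difference construction of our own, with a rough random coefficient) extracts the block of the discrete Green's matrix coupling two disjoint intervals and inspects its singular values. A caveat: in 1D the off-diagonal block is exactly rank one (inverses of irreducible tridiagonal matrices are semiseparable), so the decay is extreme; in higher dimensions the $\epsilon$-rank instead grows like $|\log\epsilon|^{d+1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 199
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# stiffness matrix of -(a u')' with a rough random coefficient in [0.5, 2]
a_half = rng.uniform(0.5, 2.0, size=n + 1)   # a at the n + 1 half points
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = (a_half[i] + a_half[i + 1]) / h**2
    if i > 0:
        A[i, i - 1] = A[i - 1, i] = -a_half[i] / h**2
G = np.linalg.inv(A)                          # discrete Green's function

# off-diagonal block: responses x in D1 = (0.1, 0.3), sources y in D2 = (0.6, 0.9)
I = np.nonzero((x > 0.1) & (x < 0.3))[0]
J = np.nonzero((x > 0.6) & (x < 0.9))[0]
s = np.linalg.svd(G[np.ix_(I, J)], compute_uv=False)
rel_decay = s[1] / s[0]                       # ~ machine precision in 1D
```

The same experiment on a 2D grid would show fast, but no longer one-step, singular value decay, consistent with the logarithmic rank bound.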
From the above Lemma one can easily deduce the following result which shows the high separability of the Green's function for the elliptic operator \eqref{DeterministicEllipticPDE}. \begin{figure}[tbph] \centering \begin{tikzpicture}[scale=0.9] \coordinate [label={[xshift=0.7cm, yshift=0.3cm]$D$}] (a1) at (0,0); \coordinate (b1) at (0,4); \coordinate (c1) at (8,4); \coordinate (d1) at (8,0); \draw(a1)--(b1)--(c1)--(d1)--cycle; \coordinate (a2) at (1,0.8); \coordinate (b2) at (1,3.2); \coordinate (c2) at (3,3.2); \coordinate (d2) at (3,0.8); \draw(a2)--(b2)--(c2)--(d2)--cycle; \coordinate (a3) at (5,0.8); \coordinate (b3) at (5,3.2); \coordinate (c3) at (7,3.2); \coordinate (d3) at (7,0.8); \draw(a3)--(b3)--(c3)--(d3)--cycle; \tikzstyle{textnode} = [thick, fill=white, minimum size = 0.1cm] \node[textnode] (D1) at (2,2) {$D_1$}; \node[textnode] (D2) at (6,2) {$D_2$}; \node[textnode] (Gf) at (4,3.3) {$G(x,y)$}; \path [->] (Gf) edge node {} (D1); \path [->] (Gf) edge node {} (D2); \end{tikzpicture} \caption{Green's function $G(x,y)$ with dependence on $x\in D_1$ and $y\in D_2$.} \label{fig:Greenfunction1} \end{figure} \begin{proposition}[Theorem 2.8 of \cite{BebendorfHackbusch:2003}]\label{GreenFuncSepaApp} Let $D_1, D_2 \subset D$ be two subdomains and $D_1$ be convex (see Figure \ref{fig:Greenfunction1}). Assume that there exists $\rho>0$ such that \begin{align} 0 < \text{ \normalfont diam} (D_1) \leq \rho\text{ \normalfont dist} (D_1, D_2). \label{AdmissiblePairs} \end{align} Then for any $\epsilon \in (0,1)$ there is a separable approximation \begin{align} G_k(x,y) = \sum_{i=1}^k u_i(x) v_i(y) \quad \text{with } k \leq c^d(\kappa_a, \rho) |\log \epsilon|^{d+1}, \label{GreenFuncSepaApp1} \end{align} so that for all $y\in D_2$ \begin{align} \| G( \cdot,y) - G_k(\cdot,y) \|_{L^2 (D_1)} \leq \epsilon \| G(\cdot,y) \| _{L^2(\hat{D}_1)}, \end{align} where $\hat{D}_1 := \{ x \in D : 2\rho~\text{\normalfont dist} (x, D_1) \leq \text{\normalfont diam} (D_1)\}$. 
\end{proposition} \begin{remark} In the recent work \cite{EngquistZhao:2018}, it is shown that the Green's function for the high-frequency Helmholtz equation is not highly separable due to the highly oscillatory phase. \end{remark} \subsection{Extension to elliptic PDEs with random coefficients}\label{sec:randomproblem} \noindent Let us consider the following elliptic PDEs with random coefficients: \begin{align} \mathcal{L}(x,\omega) u(x,\omega) \equiv -\nabla\cdot\big(a(x,\omega)\nabla u(x,\omega)\big) &= f(x), \quad x\in D, \quad \omega\in\Omega, \label{MsStoEllip_ModelEq}\\ u(x,\omega)&= 0, \quad \quad x\in \partial D, \label{MsStoEllip_ModelBC} \end{align} where $D \subset \mathbb{R}^d$ is a bounded spatial domain and $\Omega$ is a sample space. The forcing function $f(x)$ is assumed to be in $L^2(D)$. The above equation can be used to model the flow pressure in porous media such as water aquifers and oil reservoirs, where the permeability field $a(x,\omega)$ is a random field whose exact values are infeasible to obtain in practice due to the low resolution of seismic data. We also assume that the problem is uniformly elliptic almost surely, namely, there exist $a_{\min}, a_{\max}>0$, such that \begin{align} P\big(\omega\in \Omega: a(x, \omega)\in [a_{\min},a_{\max}], \forall x \in D\big) = 1. \label{asUniformlyElliptic1} \end{align} Note that we do not make any assumption on the regularity of the coefficient $a(x,\omega)$ in the physical space, which can be arbitrarily rough for each realization. For the problem \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC}, the corresponding Green's function is defined as \begin{align} \mathcal{L}(x,\omega)G(x,y,\omega) \equiv-\nabla_x\cdot(a(x,\omega)\nabla_x G(x,y,\omega)) &= \delta(x,y), \quad x\in D,\quad \omega\in\Omega,\\ G(x,y,\omega) &= 0, \quad \quad x\in \partial D, \end{align} where $y\in D$ and $\delta(x,y)$ is the Dirac delta function. A key observation for the proof of Lemma \ref{lemma1} and Prop.
\ref{FiniteDimensionalApprox} is that the projection of the space of piecewise constant functions defined on a multi-resolution rectangular mesh, depending only on the geometry of $D_1, D_2$, $\kappa_a$, and $\rho$, onto the space of $\mathcal{L}$-harmonic functions provides a candidate for the finite dimensional subspace $W$. Based on this observation, one can easily extend the statement in Prop. \ref{FiniteDimensionalApprox} to the family of Green's functions $G(x,y,\omega)$ parametrized by $\omega$ under the uniform ellipticity assumption \eqref{asUniformlyElliptic1}. \begin{theorem}\label{ThmRandomGreenFuncSepaApp} Let $D_1, D_2 \subset D$ be two subdomains and $D_1$ be convex. Assume that there is $\rho>0$ such that $0 < \text{ \normalfont diam} (D_1) \leq \rho\text{ \normalfont dist} (D_1, D_2)$. Then for any $\epsilon \in (0,1)$ there is a separable approximation \begin{align} G_k(x,y,\omega) = \sum_{i=1}^k u_i(x) v_i(y,\omega) \quad \text{with } k \leq c^d(\kappa_a, \rho) |\log \epsilon|^{d+1}, \label{RandomGreenFuncSepaApp} \end{align} so that for all $y\in D_2$ \begin{align} \| G(\cdot,y,\omega) - G_k(\cdot,y, \omega) \|_{L^2 (D_1)} \leq \epsilon \| G(\cdot,y, \omega) \| _{L^2(\hat{D}_1)} \quad \text{a.s. in } \Omega, \end{align} where $\hat{D}_1 := \{ x \in D : 2\rho\text{\normalfont dist} (x, D_1) \leq \text{\normalfont diam} (D_1)\}$. \end{theorem} The above theorem shows that there exists a low dimensional linear subspace, e.g., spanned by $u_i(\cdot)$, that can approximate the family of functions $G(\cdot,y,\omega)$ well in $L^2(D_1)$ uniformly with respect to $y\in D_2$ and a.s. in $\omega$. Moreover, if $\mathrm{supp}(f)\subset D_2$, one can approximate the solution to \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC} by the same space well in $L^2(D_1)$ uniformly with respect to $f$ and a.s. in $\omega$.
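As a quick numerical illustration of this separability (a hedged sketch, not the construction used in the proof), one can discretize a one-dimensional analogue of \eqref{MsStoEllip_ModelEq} with a rough coefficient and inspect the singular values of the Green's function restricted to two well-separated subdomains. Note that in one dimension the off-diagonal block of the Green's function is exactly separable with $k=1$; in dimension $d$ the rank grows like $|\log \epsilon|^{d+1}$. The grid size, subdomains, and coefficient below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400                       # interior grid points on (0, 1)
h = 1.0 / (n + 1)
a = 0.1 + 9.9 * rng.random(n + 1)   # rough coefficient, a(x) in [0.1, 10]

# three-point stiffness matrix for -(a u')' with zero Dirichlet BCs
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = (a[i] + a[i + 1]) / h**2
    if i + 1 < n:
        A[i, i + 1] = A[i + 1, i] = -a[i + 1] / h**2

G = np.linalg.inv(A) / h      # discrete Green's function G(x_i, x_j)
x = np.arange(1, n + 1) * h
D1 = (x > 0.1) & (x < 0.3)    # target subdomain D_1
D2 = (x > 0.6) & (x < 0.9)    # well-separated subdomain D_2
s = np.linalg.svd(G[np.ix_(D1, D2)], compute_uv=False)
print(s / s[0])               # decays to round-off level after the first singular value
```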
Let \begin{equation} u_f(x,\omega)=\int_{D_2} G(x,y,\omega)f(y) dy \end{equation} and \begin{equation} u^{\epsilon}_f(x,\omega)=\int_{D_2} G_k(x,y,\omega)f(y) dy=\sum_{i=1}^k u_i(x)\int_{D_2} v_i(y,\omega) f(y) dy. \end{equation} Hence \begin{equation} \begin{array}{l} \|u_f(\cdot,\omega)-u^{\epsilon}_f(\cdot,\omega)\|^2_{L^2(D_1)}=\int_{D_1} \left[\int_{D_2} (G(x,y,\omega)-G_k(x,y,\omega))f(y) dy\right]^2 dx \\ \\ \le \|f\|_{L^2(D_2)}^2 \int_{D_2}\| G(\cdot,y,\omega) - G_k(\cdot,y, \omega) \|^2_{L^2 (D_1)} dy\le C(D_1, D_2, \kappa_a, d)\epsilon^2\|f\|_{L^2(D_2)}^2, \end{array} \end{equation} a.s. in $\omega$ since $\| G(\cdot,y, \omega) \| _{L^2(\hat{D}_1)}$ is bounded by a positive constant that depends on $D_1, D_2, \kappa_a, d$ a.s. in $\omega$ due to uniform ellipticity \eqref{asUniformlyElliptic1}. Although the proof of the high separability of the Green's function requires $x\in D_1$, $y\in D_2$ for well-separated $D_1$ and $D_2$, i.e., it avoids the singularity of the Green's function at $x=y$, the above approximation of the solution $u$ on a domain disjoint from the support of $f$ appears to remain valid for $u$ on the whole domain, even when $f$ is a globally supported smooth function, as shown in our numerical tests. \begin{remark}\label{remark1} It is important to note that both the linear subspace $W$ and the bound for its dimension are independent of the randomness. Moreover, it is often possible to find a problem-specific and data-driven subspace with a dimension much smaller than the theoretical upper bound for $W$ (as demonstrated by our experiments). This key observation motivates our data-driven approach which can achieve a significant dimension reduction in the solution space.
\end{remark} \begin{remark} Although we present the problem and our data-driven approach for the elliptic problem \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC} with scalar random coefficients $a(x,\omega)$, all the statements can be directly extended when the random coefficient is replaced by a symmetric positive definite tensor $a_{i,j}(x,\omega)$, $i,j=1, \ldots, d$, with uniform ellipticity. \end{remark} \begin{remark} In the recent work \cite{BrysonZhaoZhong:2019}, it is shown that a random field can have a large intrinsic complexity if it is rough, i.e., $a(x_1,\omega)$ and $a(x_2, \omega)$ decorrelate quickly in terms of $\|x_1-x_2\|$. However, when a random field, as rough as it may be, is used as the coefficient of an elliptic PDE, the intrinsic complexity of the resulting solution space, which depends on the coefficient highly nonlinearly and nonlocally, is greatly reduced. This phenomenon can also be used to explain the severe ill-posedness of the inverse problem in which one tries to recover the coefficient of an elliptic PDE from boundary measurements, such as in electrical impedance tomography (EIT). \end{remark} Before we end this subsection, we give a short review of existing methods for solving problem \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC} involving random coefficients. There are basically two types of methods. In intrusive methods, one represents the solution of \eqref{MsStoEllip_ModelEq} by $u(x,\omega)= \sum_{\alpha \in J} u_{\alpha}(x)H_{\alpha}(\omega)$, where $J$ is an index set, and $H_{\alpha}(\omega)$ are certain basis functions (e.g., orthogonal polynomials). Typical examples are the Wiener chaos expansion (WCE) and the polynomial chaos expansion (PCE) methods. Then one uses a Galerkin method to compute the expansion coefficients $u_{\alpha}(x)$; see \cite{Ghanem:91,Xiu:03,babuska:04,matthies:05,WuanHou:06,Najm:09} and references therein. These methods have been successfully applied to many UQ problems, where the dimension of the random input is small.
However, the number of basis functions increases exponentially with the dimension of the random input, i.e., they suffer from the curse of dimensionality of both the input space and the output (solution) space. Among non-intrusive methods, one can use the Monte Carlo (MC) method or the quasi-Monte Carlo (qMC) method to solve \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC}. However, the convergence rate is slow and the method becomes more expensive when the coefficient $a(x,\omega)$ contains multiscale features. Stochastic collocation methods explore the smoothness of the solutions in the random space and use certain quadrature points and weights to compute the solutions \cite{Xiu:05,Babuska:07}. Exponential convergence can be achieved for smooth solutions, but the number of quadrature points grows exponentially as the number of random variables increases. Sparse grids \cite{Griebel:04,Webster:08} can reduce the number of quadrature points to some extent \cite{Griebel:04}. However, the sparse grid method still becomes very expensive when the dimension of randomness is moderately high. Instead of building random basis functions a priori or choosing collocation quadrature points based on the random coefficient $a(x,\omega)$ (see Eq.\eqref{ParametrizeRandomCoefficient}), we extract the low dimensional structure and a set of basis functions in the solution space directly from the data (or sampled solutions). Notice that the dimension of the extracted low dimensional space mainly depends on $\kappa_a$ (namely $a_{\min}$ and $a_{\max}$), and very mildly on the dimension of the random input in $a(x,\omega)$. Therefore, the curse of dimensionality can be alleviated. \section{Derivation of the new data-driven method} \label{sec:DerivationNewMethod} In many physical and engineering applications, one needs to obtain the solution of Eq.\eqref{MsStoEllip_ModelEq} on a subdomain $\hat{D}\subseteq D$.
For instance, in reservoir simulation one is interested in computing the pressure value $u(x,\omega)$ on a specific subdomain $\hat{D}$. Our method consists of offline and online stages. In the offline stage, we extract the low dimensional structure and a set of data-driven basis functions from solution samples. For example, a set of solution samples $\{u(x,\omega_i)\}_{i=1}^{N}$ can be obtained from measurements or generated by solving \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC} with coefficient samples $\{a(x,\omega_i)\}_{i=1}^{N}$. Let $V_l=\{u|_{\hat{D}}(x,\omega_1),...,u|_{\hat{D}}(x,\omega_N)\}$ denote the solution samples. We use POD \cite{HolmesLumleyPOD:1993,Sirovich:1987,Willcox2015PODsurvey}, also known as PCA, to find the optimal subspace and its orthonormal basis to approximate $V_l$ to certain accuracy. Define the correlation matrix $\sigma_{ij}=<u(\cdot,\omega_i), u(\cdot,\omega_j)>_{\hat{D}}$, $i, j= 1, \ldots, N$. Let the eigenvalues and corresponding eigenfunctions of the correlation matrix be $\lambda_1\ge \lambda_2 \ge \ldots \ge \lambda_N \ge 0$ and $\phi_{1}(x)$, $\phi_{2}(x), \ldots, \phi_N(x)$, respectively. The space spanned by the leading $K$ eigenfunctions has the following approximation property with respect to $V_l$. \begin{proposition}\label{POD_proposition} \begin{align} \frac{\sum_{i=1}^{N}\Big|\Big|u(x,\omega_{i})- \sum_{j=1}^{K}<u(\cdot,\omega_{i}), \phi_j(\cdot)>_{\hat{D}}\phi_j(x)\Big|\Big|_{L^2(\hat{D})}^{2} }{\sum_{i=1}^{N}\Big|\Big|u(x,\omega_{i})\Big|\Big|_{L^2(\hat{D})}^{2}}=\frac{\sum_{s=K+1}^{N} \lambda_s}{\sum_{s=1}^{N} \lambda_s}. \label{Prop_PODError} \end{align} \end{proposition} First, we expect a fast decay in $\lambda_s$ so that a small $K\ll N$ will suffice to approximate the solution samples well in the root-mean-square sense.
Secondly, based on the existence of low dimensional structure implied by Theorem \ref{ThmRandomGreenFuncSepaApp}, we expect that the data-driven basis, $\phi_{1}(x)$, $\phi_{2}(x), \ldots, \phi_{K}(x)$, can almost surely approximate the solution $u|_{\hat{D}}(x,\omega)$ well too under some sampling condition (see Section \ref{sec:DetermineNumberOfSamples}) by \begin{align} u|_{\hat{D}}(x,\omega) \approx \sum_{j=1}^{K}c_{j}(\omega)\phi_{j}(x), \quad \text{a.s. } \omega \in \Omega, \label{RB_expansion} \end{align} where the data-driven basis functions $\phi_{j}(x)$, $j=1,...,K$ are defined on $\hat{D}$. Prop.~\ref{POD_proposition} remains valid in the case $\hat{D}=D$, where the data-driven basis $\phi_{j}(x)$, $j=1,...,K$ can be used in the Galerkin approach to solve \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC} on the whole domain $D$ (see Section \ref{sec:GlobalProblem}). Now the problem is how to find $c_{j}(\omega)$ through an efficient online process given a new realization of $a(x,\omega)$. We describe several strategies for different setups. \subsection{Parametrized randomness}\label{sec:parametrized} In many applications, $a(x,\omega)$ is parameterized by $r$ independent random variables, i.e., \begin{align} a(x,\omega) = a(x,\xi_{1}(\omega),...,\xi_{r}(\omega)). \label{ParametrizeRandomCoefficient} \end{align} Thus, the solution can be represented as a function of these random variables as well, i.e., $u(x,\omega) = u(x,\xi_{1}(\omega),...,\xi_{r}(\omega))$. Let $\gvec{\xi}(\omega)=[\xi_1(\omega),\cdots,\xi_r(\omega)]^T$ denote the random input vector and $\textbf{c}(\omega)=[c_{1}(\omega),\cdots,c_{K}(\omega)]^T$ denote the vector of solution coefficients in \eqref{RB_expansion}. Now, the problem can be viewed as constructing a map from $\gvec{\xi}(\omega)$ to $\textbf{c}(\omega)$, denoted by $\textbf{F}:\gvec{\xi}(\omega)\mapsto \textbf{c}(\omega)$, which is nonlinear. We approximate this nonlinear map through the sample solution set.
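The offline construction behind Prop.~\ref{POD_proposition} and the expansion \eqref{RB_expansion} can be sketched as follows. This is a schematic Python implementation in which the snapshot matrix `U`, the quadrature weight `h`, and the truncation tolerance `tol` are illustrative placeholders, not part of the original formulation:

```python
import numpy as np

def pod_basis(U, h, tol=1e-4):
    """Data-driven basis via POD from solution snapshots.

    U   : (J, N) array; column i samples u(., omega_i) at J mesh points of D_hat
    h   : quadrature weight approximating the L^2(D_hat) inner product
    tol : relative energy threshold for truncation
    """
    C = h * (U.T @ U)                     # correlation matrix sigma_ij
    lam, V = np.linalg.eigh(C)            # eigenvalues in ascending order
    lam, V = lam[::-1], V[:, ::-1]        # reorder: lam_1 >= ... >= lam_N
    energy = np.cumsum(lam) / np.sum(lam)
    K = int(np.searchsorted(energy, 1.0 - tol)) + 1
    Phi = U @ V[:, :K]                    # leading eigenfunctions on the mesh
    Phi /= np.sqrt(h) * np.linalg.norm(Phi, axis=0)   # L^2-normalize columns
    return Phi, lam, K

def coefficients(U, Phi, h):
    """Projection coefficients c_j(omega_i) = <u(., omega_i), phi_j>_D_hat."""
    return Phi.T @ U * h                  # (K, N)
```

With the orthonormal basis `Phi` in hand, each snapshot is recovered as `Phi @ coefficients(U, Phi, h)` up to the truncated energy.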
Given a set of solution samples $\{u(x,\omega_i)\}_{i=1}^{N}$ corresponding to $\{\gvec{\xi}(\omega_i)\}_{i=1}^{N}$, e.g., by solving \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC} with $a(x,\xi_{1}(\omega_i),...,\xi_{r}(\omega_i))$, from which the set of data-driven basis $\phi_{j}(x), j=1,...,K$ is obtained using POD as described above, we can easily compute the projection coefficients $\{\textbf{c}(\omega_i)\}_{i=1}^{N}$ of $u|_{\hat{D}}(x,\omega_i)$ on $\phi_{j}(x)$, $j=1,...,K$, i.e., $c_j(\omega_i)=<u(x,\omega_i), \phi_{j}(x)>_{\hat{D}}$. From the data set, $\textbf{F}(\gvec{\xi}(\omega_i))= \textbf{c}(\omega_i)$, $i=1,...,N$, we construct the map $\textbf{F}$. Note the significant dimension reduction by reducing the map $\gvec{\xi}(\omega)\mapsto u(x,\omega)$ to the map $\gvec{\xi}(\omega)\mapsto \textbf{c}(\omega)$. We provide a few ways to construct $\textbf{F}$. \begin{itemize} \item Interpolation. \\ When the dimension of the random input $r$ is small or moderate, one can use interpolation. In particular, if the solution samples correspond to $\gvec{\xi}$ located on a (sparse) grid, standard polynomial interpolation can be used to approximate the coefficient $c_j$ at a new point of $\gvec{\xi}$. If the solution samples correspond to $\gvec{\xi}$ at scattered points or the dimension of the random input $r$ is moderate or high, one can first find a few nearest neighbors of a new point efficiently using a $k$-d tree \cite{wald2006building} and then use a moving least squares approximation centered at the new point. \item Neural network. \\ When the dimension of the random input $r$ is high, the interpolation approach becomes expensive and less accurate; in this case, a neural network seems to provide a satisfactory solution. \end{itemize} More implementation details will be explained in Section \ref{sec:NumericalExperiments}, where the map $\textbf{F}$ is plotted based on interpolation.
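As a concrete (and deliberately simplified) sketch of the interpolation option, brute-force nearest-neighbor search with inverse-distance weighting can stand in for the $k$-d tree and moving least squares mentioned above; the function names and the choice of weights are illustrative only:

```python
import numpy as np

def fit_map(Xi, C):
    """Store the training data of the map F : xi -> c.

    Xi : (N, r) sampled random inputs xi(omega_i)
    C  : (N, K) corresponding projection coefficients c(omega_i)
    """
    return np.asarray(Xi, float), np.asarray(C, float)

def eval_map(model, xi, k=4, eps=1e-12):
    """Approximate F(xi) from the k nearest sampled inputs by
    inverse-distance weighting (a stand-in for moving least squares)."""
    Xi, C = model
    d = np.linalg.norm(Xi - xi, axis=1)   # brute force; a k-d tree scales better
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    w /= w.sum()
    return w @ C[idx]                     # (K,) coefficients c(xi)
```

The online cost is a neighbor search plus a weighted average, independent of the fine-mesh degrees of freedom.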
In the online stage, one can compute the solution $u(x,\omega)$ to \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC} using the constructed mapping $\textbf{F}$. Given a new realization of $a(x,\xi_{1}(\omega),...,\xi_{r}(\omega))$, we plug $\gvec{\xi}(\omega)$ into the constructed map $\textbf{F}$ and directly obtain $\textbf{c}(\omega)=\textbf{F}(\gvec{\xi}(\omega))$, which is the vector of projection coefficients of the solution on the data-driven basis. We can then quickly obtain the new solution $u|_{\hat{D}}(x,\omega)$ using Eq.\eqref{RB_expansion}, where the computational time is negligible. Once we obtain the numerical solutions, we can use them to compute statistical quantities of interest, such as mean, variance, and joint probability distributions. \begin{remark} In Prop.\ref{POD_proposition} we construct the data-driven basis functions from the eigen-decomposition of the correlation matrix associated with the solution samples. Alternatively, we can subtract the mean from the solution samples, compute the covariance matrix, and construct the basis functions from the eigen-decomposition of the covariance matrix. \end{remark} \subsection{Galerkin approach} \label{sec:GlobalProblem} \noindent In the case $\hat{D}=D$, we can solve \eqref{MsStoEllip_ModelEq}-\eqref{MsStoEllip_ModelBC} on the whole domain $D$ by the standard Galerkin formulation using the data-driven basis for a new realization of $a(x,\omega)$. Once the data-driven basis $\phi_{j}(x)$, $j=1,...,K$, which are defined on the domain $D$, are obtained from solution samples in the offline stage, given a new realization of the coefficient $a(x,\omega)$, we approximate the corresponding solution as \begin{align} u(x,\omega) \approx \sum_{j=1}^{K}c_{j}(\omega)\phi_{j}(x), \quad \text{a.s.
} \omega \in \Omega, \label{RB_expansion2} \end{align} and use the Galerkin projection to determine the coefficients $c_{j}(\omega)$, $j=1,...,K$ by solving the following linear system in the online stage, \begin{align} \sum_{j=1}^K \int_{D}a(x,\omega)c_{j}(\omega)\nabla\phi_{j}(x)\cdot\nabla\phi_{l}(x)dx = \int_{D}f(x)\phi_{l}(x)dx, \quad l=1,...,K. \label{GalerkinSystem} \end{align} \begin{remark} The computational cost of solving the linear system \eqref{GalerkinSystem} is small compared to using a Galerkin method, such as the finite element method, directly for $u(x,\omega)$ because $K$ is much smaller than the degrees of freedom needed to discretize $u(x,\omega)$. \end{remark} If the coefficient $a(x,\omega)$ has the affine parameter dependence property \cite{RozzaPatera:2007}, i.e., $ a(x,\omega) = \sum_{n=1}^{r} a_{n}(x)\xi_{n}(\omega) $, we compute the terms that do not depend on randomness, including $\int_{D}a_{n}(x)\nabla\phi_{j}(x)\cdot\nabla\phi_{l}(x)dx$, $\int_{D}f(x)\phi_{l}(x)dx$, $j,l=1,...,K$ and save them in the offline stage. This leads to considerable savings in assembling the stiffness matrix for each new realization of the coefficient $a(x,\omega)$ in the online stage. Of course, a coefficient in affine form is automatically parametrized. Hence, one can also construct the map $\textbf{F}:\gvec{\xi}(\omega)\mapsto \textbf{c}(\omega)$ as described in Section \ref{sec:parametrized}. If the coefficient $a(x,\omega)$ does not admit an affine form, we can apply the empirical interpolation method (EIM) \cite{PateraMaday:2004} to convert $a(x,\omega)$ into an affine form. \subsection{Least square fitting from direct measurements at selected locations}\label{sec:LS} In many applications, only samples (data) or measurements of $u(x,\omega)$ are available while the model of $a(x,\omega)$ or its realization is not known.
In this case, we propose to compute the coefficients $\textbf{c}$ by least square fitting the measurements (values) of $u(x,\omega)$ at appropriately selected locations. First, as before, from a set of solution samples, $u(x_j, \omega_i)$, measured on a mesh $x_j \in \hat{D}, j=1, \ldots, J$, one finds a set of data-driven basis functions $\phi_1(x_j), \ldots, \phi_K(x_j)$, e.g., using POD. For a new solution $u(x,\omega)$ measured at $x_1, x_2, \ldots, x_M$, one can set up the following least square problem to find $\vec{c}=[c_1, \ldots, c_K]^T$ such that $u(x,\omega)\approx \sum_{k=1}^K c_k\phi_k(x)$: \begin{equation} \label{eq:LS} B \vec{c}=\vec{y}, \quad \vec{y}=[u(x_1,\omega), \ldots, u(x_M,\omega)]^T, B=[\boldsymbol{\phi}^M_1, \ldots, \boldsymbol{\phi}^M_K]\in R^{M\times K}, \end{equation} where $\boldsymbol{\phi}^M_k=[\phi_k(x_1), \ldots, \phi_k(x_M)]^T$. The key issue in practice is the conditioning of the least square problem \eqref{eq:LS}. One way is to select the measurement (sensor) locations $x_1, \ldots, x_M$ such that rows of $B$ are as decorrelated as possible. We adopt the approach proposed in \cite{Kutz2017Sensor} in which a QR factorization with pivoting for the matrix of data-driven basis is used to determine the measurement locations. More specifically, let $\Phi=[\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_K]\in R^{J\times K}$, $\boldsymbol{\phi}_k=[\phi_k(x_1), \ldots, \phi_k(x_J)]^T$. If $M=K$, QR factorization with column pivoting is performed on $\Phi^T$. If $M>K$, QR factorization with pivoting is performed on $\Phi\Phi^T$. The first $M$ pivoting indices provide the measurement locations. More details can be found in \cite{Kutz2017Sensor} and Section \ref{sec:NumericalExperiments}. \subsection{Extension to problems with parameterized force functions} \label{sec:ExtensionTOManyFx} \noindent In many applications, we are interested in solving multiscale elliptic PDEs with random coefficients in the multiquery setting.
A model problem is given as follows, \begin{align} -\nabla\cdot\big(a(x,\omega)\nabla u(x,\omega)\big) &= f(x,\theta), \quad x\in D, \quad \omega\in\Omega, \quad \theta \in \Theta, \label{MsStoEllipMultiquery_Eq}\\ u(x,\omega)&= 0, \quad \quad x\in \partial D, \label{MsStoEllipMultiquery_BC} \end{align} where the setting of the coefficient $a(x,\omega)$ is the same as in \eqref{ParametrizeRandomCoefficient}. Notice that the force function $f(x,\theta)$ is parameterized by $\theta\in \Theta$ and $\Theta$ is a parameter set. In practice, we often need to solve the problem \eqref{MsStoEllipMultiquery_Eq}-\eqref{MsStoEllipMultiquery_BC} with multiple force functions $f(x,\theta)$, which is known as the multiquery problem. It is computationally expensive to solve this kind of problem using traditional methods. Some attempts have been made in \cite{ZhangCiHouMMS:15,hou2019model}, where a data-driven stochastic method has been proposed to solve PDEs with random and multiscale coefficients. When the number of random variables in the coefficient $a(x,\omega)$ is small, say less than 10, the methods developed in \cite{ZhangCiHouMMS:15,hou2019model} can provide considerable savings in solving multiquery problems. However, they suffer from the curse of dimensionality of both the input space and the output (solution) space. Our method using the data-driven basis, which is based on extracting a low dimensional structure in the output space, can be directly applied to this situation. Numerical experiments are presented in Section \ref{sec:NumericalExperiments}. \subsection{Determine a set of good learning samples} \label{sec:DetermineNumberOfSamples} \noindent A set of good solution samples is important for the construction of the data-driven basis in the offline stage. Here we provide an error analysis which is based on the finite element formulation. However, the results extend to general Galerkin formulations. First, we make a few assumptions.
\begin{assumption} \label{assumption2} Suppose $a(x,\omega)$ has the following property: given $ \delta_1 > 0$, there exists an integer $N_{\delta_1}$ and a choice of snapshots $\{a(x,\omega_i)\}$, $i=1,...,N_{\delta_1}$ such that \begin{align} \mathds{E}\left[\inf_{1\le i\le N_{\delta_1}} \big|\big|a(x,\omega) - a(x,\omega_i)\big|\big|_{L^\infty(D)}\right] \le \delta_1. \label{asd} \end{align} \end{assumption} Let $\{a(x,\omega_i)\}_{i=1}^{N_{\delta_1}}$ denote the samples of the random coefficient. When the coefficient has an affine form, we can verify Asm. \ref{assumption2} and provide a constructive way to sample snapshots $\{a(x,\omega_i)\}_{i=1}^{N_{\delta_1}}$ if we know the distribution of the random variables $\xi_{i}(\omega)$, $i=1,...,r$. Let $V_h\subset H_{0}^{1}(D)$ denote a finite element space that is spanned by nodal basis functions on a mesh with size $h$ and $\tilde{V}_h \subset V_h$ denote the space spanned by the data-driven basis $\{\phi_{j}(x)\}_{j=1}^{K}$. We assume the mesh size is fine enough so that the finite element space can approximate the solutions to the underlying PDEs well. For each $a(x,\omega_i)$, let $u_h(x,\omega_i)\in V_h$ denote the FEM solution and $\tilde{u}_h(x,\omega_i)\in \tilde{V}_h$ denote the projection on the data-driven basis $\{\phi_{j}(x)\}_{j=1}^{K}$. \begin{assumption} \label{assumption3} Given $\delta_2 > 0$, we can find a set of data-driven basis functions, $\phi_1, \ldots, \phi_{K_{\delta_2}}$, such that \begin{align} ||u_h(x,\omega_i)-\tilde{u}_h(x,\omega_i)||_{L^2(D)} \le \delta_2,\ \forall 1\le i \le N_{\delta_1}, \label{equation_asumption2} \end{align} where $\tilde{u}_h(x,\omega_i)$ is the $L^2$ projection of $u_h(x,\omega_i)$ onto the space spanned by $\phi_1, \ldots, \phi_{K_{\delta_2}}$. \end{assumption} Asm.\ref{assumption3} can be verified by setting the threshold in the POD method; see Prop.\ref{POD_proposition}. Now we present the following error estimate.
\begin{theorem} \label{error_theorem1} Under Assumptions \ref{assumption2}-\ref{assumption3}, for any $\delta_i > 0$, $i=1,2$, we can choose the samples of the random coefficient $\{a(x,\omega_i)\}_{i=1}^{N_{\delta_1}}$ and the threshold in constructing the data-driven basis accordingly, such that \begin{align} \mathds{E}\left[\big|\big|u_h(x,\omega) - \tilde{u}_h(x,\omega)\big|\big|_{L^2(D)}\right] \leq C\delta_1 + \delta_2, \label{error_theorem} \end{align} where $C$ depends on $a_{\min}$, $f(x)$ and the domain $D$. \end{theorem} \begin{proof} Given a coefficient $a(x,\omega)$, let $u_h(x,\omega)$ and $\tilde{u}_h(x,\omega)$ be the corresponding FEM solution and data-driven solution, respectively. We have \begin{align} \label{proof_basis_error} &\big|\big|u_h(x,\omega) - \tilde{u}_h(x,\omega)\big|\big|_ {L^2(D)} \nonumber\\ \le &\big|\big|u_h(x,\omega) - u_h(x,\omega_i)\big|\big|_{L^2(D)} + \big|\big|u_h(x,\omega_i) - \tilde{u}_h(x,\omega_i)\big|\big|_{L^2(D)} + \big|\big|\tilde{u}_h(x,\omega_i) - \tilde{u}_h(x,\omega)\big|\big|_{L^2(D)}, \nonumber\\ :=& I_1 + I_2+ I_3, \end{align} where $u_h(x,\omega_i)$ is the solution corresponding to the coefficient $a(x,\omega_i)$ and $\tilde{u}_h(x,\omega_i)$ is its projection. Now we estimate the error term $I_1$ first. In the sense of weak form, we have \begin{align} \int_{D}a(x,\omega)\nabla u_h(x,\omega)\cdot \nabla v_h(x)dx=\int_{D}f(x)v_h(x), \quad \text{for all} \quad v_h(x)\in V_h, \label{FEMsolutionWeakForm1} \end{align} and \begin{align} \int_{D}a(x,\omega_i)\nabla u_h(x,\omega_i)\cdot\nabla v_h(x)dx=\int_{D}f(x)v_h(x), \quad \text{for all} \quad v_h(x)\in V_h. 
\label{FEMsolutionWeakForm2} \end{align} Subtracting the variational formulations \eqref{FEMsolutionWeakForm1}-\eqref{FEMsolutionWeakForm2} for $u_h(x,\omega)$ and $u_h(x,\omega_i)$, we find that for all $v_h(x)\in V_h$, \begin{align} \int_{D}a(x,\omega)\nabla (u_h(x,\omega)-u_h(x,\omega_i))\cdot\nabla v_h(x)dx =-\int_{D}(a(x,\omega)-a(x,\omega_i))\nabla u_h(x,\omega_i)\cdot\nabla v_h(x). \label{FEMsolutionWeakForm3} \end{align} Let $w_h(x)=u_h(x,\omega)-u_h(x,\omega_i)$ and $L(v_h)=-\int_{D}(a(x,\omega)-a(x,\omega_i))\nabla u_h(x,\omega_i)\cdot\nabla v_h(x)$ denote the linear form. Eq.\eqref{FEMsolutionWeakForm3} means that $w_h(x)$ is the solution of the weak form $\int_{D}a(x,\omega)\nabla w_h\cdot\nabla v_h(x)dx=L(v_h)$. Therefore, we have \begin{align} \big|\big|w_h(x)\big|\big|_ {H^1(D)}\leq \frac{||L||_{H^1(D)}}{a_{\min}}. \label{EstimateError} \end{align} Notice that \begin{align} ||L||_{H^1(D)} =\max_{||v_h||_{H^1(D)}=1}|L(v_h)|&\leq ||a(x,\omega)-a(x,\omega_i)||_{L^\infty(D)} ||u_h(x,\omega_i)||_{H^1(D)},\nonumber \\ &\leq ||a(x,\omega)-a(x,\omega_i)||_{L^\infty(D)}\frac{||f(x)||_{H^1(D)}}{a_{\min}}. \label{EstimateError2} \end{align} Since $w_h(x)=0$ on $\partial D$, combining Eqns.\eqref{EstimateError}-\eqref{EstimateError2} and using the Poincar\'e inequality on $w_h(x)$, we obtain an estimate for the term $I_1$ as \begin{align} \big|\big|u_h(x,\omega) - u_h(x,\omega_i)\big|\big|_{L^2(D)} &\leq C_1\big|\big|u_h(x,\omega) - u_h(x,\omega_i)\big|\big|_{H^1(D)} \nonumber \\ &\leq C_1||a(x,\omega)-a(x,\omega_i)||_{L^\infty(D)}\frac{||f(x)||_{H^1(D)}}{a_{\min}^2}, \label{EstimateError3} \end{align} where $C_1$ only depends on the domain $D$. For the term $I_3$ in Eq.\eqref{proof_basis_error}, we can similarly get \begin{align} \big|\big|\tilde{u}_h(x,\omega_i) - \tilde{u}_h(x,\omega)\big|\big|_{L^2(D)} \leq C_1||a(x,\omega)-a(x,\omega_i)||_{L^\infty(D)}\frac{||f(x)||_{H^1(D)}}{a_{\min}^2}.
\label{I3} \end{align} The term $I_2$ in Eq.\eqref{proof_basis_error} can be controlled according to Asm.\ref{assumption3}. Combining the estimates for terms $I_1$, $I_2$ and $I_3$ and integrating over the random space, we prove the theorem. \end{proof} Theorem \ref{error_theorem1} indicates that the error between $u_h(x,\omega)$ and its approximation $\tilde{u}_h(x,\omega)$ using the data-driven basis consists of two parts. The first part depends on how well the random coefficient is sampled, while the second part depends on the truncation threshold in constructing the data-driven basis from the solution samples. In practice, a balance of these two factors and the discretization error (of the numerical method used to solve the PDEs) gives guidance on how to choose solution samples and the truncation threshold in the POD method to achieve optimal accuracy. Again, the key advantage of our data-driven approach for this class of elliptic PDEs is the low dimensional structure of the solution space, which provides a significant dimension reduction. \section{Numerical experiments} \label{sec:NumericalExperiments} \noindent In this section we will present various numerical experiments to demonstrate the accuracy and efficiency of our proposed data-driven method. \subsection{An example with five random variables}\label{sec:Example1} \noindent We consider a multiscale elliptic PDE with a random coefficient that is defined on a square domain $D=[0,1]\times[0,1]$, \begin{align}\label{randommultiscaleelliptic} \begin{split} -\nabla\cdot(a(x,y,\omega)\nabla u(x,y,\omega)) &= f(x,y), \quad (x,y)\in D, \omega\in\Omega,\\ u(x,y,\omega)&=0, \quad \quad (x,y)\in\partial D.
\end{split} \end{align} In this example, the coefficient $a(x,y,\omega)$ is defined as \begin{align} a(x,y,\omega) =& 0.1 + \frac{2+p_1\sin(\frac{2\pi x}{\epsilon_1})}{2-p_1\cos(\frac{2\pi y}{\epsilon_1})} \xi_1(\omega) + \frac{2+p_2\sin(\frac{2\pi (x+y)}{\sqrt{2}\epsilon_2})}{2-p_2\sin(\frac{2\pi (x-y)}{\sqrt{2}\epsilon_2})}\xi_2(\omega) + \frac{2+p_3\cos(\frac{2\pi (x-0.5)}{\epsilon_3})}{2-p_3\cos(\frac{2\pi (y-0.5)}{\epsilon_3})}\xi_3(\omega) \nonumber \\ &+ \frac{2+p_4\cos(\frac{2\pi (x-y)}{\sqrt{2}\epsilon_4})}{2-p_4\sin(\frac{2\pi (x+y)}{\sqrt{2}\epsilon_4})}\xi_4(\omega) + \frac{2+p_5\cos(\frac{2\pi (2x-y)}{\sqrt{5}\epsilon_5})}{2-p_5\sin(\frac{2\pi (x+2y)}{\sqrt{5}\epsilon_5})}\xi_5(\omega), \label{coefficientofexample1} \end{align} where $[\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4,\epsilon_5]=[\frac{1}{47},\frac{1}{29},\frac{1}{53},\frac{1}{37},\frac{1}{41}]$, $[p_1,p_2,p_3,p_4,p_5]=[1.98,1.96,1.94,1.92,1.9]$, and $\xi_i(\omega)$, $i=1,...,5$ are i.i.d. uniform random variables in $[0,1]$. The contrast ratio in the coefficient \eqref{coefficientofexample1} is $\kappa_a\approx 4.5\times 10^3$. The force function is $f(x,y) = \sin(2\pi x)\cos(2\pi y)\cdot I_{D_2}(x,y)$, where $I_{D_2}$ is an indicator function defined on $D_2=[\frac{1}{4},\frac{3}{4}]\times[\frac{1}{16},\frac{5}{16}]$. The coefficient \eqref{coefficientofexample1} is highly oscillatory in the physical space. Therefore, one needs a fine discretization to resolve the small-scale variations in the problem. We shall show results for the solution to \eqref{randommultiscaleelliptic} with coefficient \eqref{coefficientofexample1} in: (1) a restricted subdomain $D_1=[\frac{1}{4},\frac{3}{4}]\times[\frac{11}{16},\frac{15}{16}]$ away from the support $D_2$ of the source term $f(x,y)$; and (2) the full domain $D$. 
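For concreteness, the coefficient \eqref{coefficientofexample1} can be transcribed directly into code; the following Python function is a plain transcription of the formula above (only the vectorized form is our choice):

```python
import numpy as np

EPS = np.array([1/47, 1/29, 1/53, 1/37, 1/41])   # [eps_1, ..., eps_5]
P   = np.array([1.98, 1.96, 1.94, 1.92, 1.90])   # [p_1, ..., p_5]

def coefficient(x, y, xi):
    """Random coefficient a(x, y, omega) of the example;
    xi holds the five i.i.d. Uniform[0, 1] random variables."""
    s2, s5, tp = np.sqrt(2.0), np.sqrt(5.0), 2.0 * np.pi
    terms = [
        (2 + P[0]*np.sin(tp*x/EPS[0]))           / (2 - P[0]*np.cos(tp*y/EPS[0])),
        (2 + P[1]*np.sin(tp*(x+y)/(s2*EPS[1])))  / (2 - P[1]*np.sin(tp*(x-y)/(s2*EPS[1]))),
        (2 + P[2]*np.cos(tp*(x-0.5)/EPS[2]))     / (2 - P[2]*np.cos(tp*(y-0.5)/EPS[2])),
        (2 + P[3]*np.cos(tp*(x-y)/(s2*EPS[3])))  / (2 - P[3]*np.sin(tp*(x+y)/(s2*EPS[3]))),
        (2 + P[4]*np.cos(tp*(2*x-y)/(s5*EPS[4])))/ (2 - P[4]*np.sin(tp*(x+2*y)/(s5*EPS[4]))),
    ]
    # each term is positive since p_i < 2, so a >= 0.1; summing the per-term
    # maxima (2+p_i)/(2-p_i) recovers the contrast kappa_a ~ 4.5e3 quoted above
    return 0.1 + sum(t * xi_i for t, xi_i in zip(terms, xi))
```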
In all of our numerical experiments, we use the same uniform triangulation to implement the standard FEM and choose mesh size $h=\frac{1}{512}$ in order to resolve the multiscale information. We use $N=2000$ samples in the offline stage to construct the data-driven basis and determine the number of basis functions $K$ according to the decay rate of the eigenvalues of the correlation matrix of the solution samples, i.e., $\sigma_{ij}=<u(x,\omega_i),u(x,\omega_j)>$, $i,j=1, \dots, N$. In Figure \ref{fig:Example1localeigenvalues}, we show the decay property of the eigenvalues. Specifically, we show the magnitude of the eigenvalues in Figure \ref{fig:Example1localeigenvalues1a} and the ratio of the accumulated sum of the leading eigenvalues over the total sum in Figure \ref{fig:Example1localeigenvalues1b}. These results and Prop.\ref{POD_proposition} imply that a few leading eigenvectors will provide a set of data-driven basis functions that can approximate all solution samples well. \begin{figure}[tbph] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_local_eigenvalues-eps-converted-to.pdf} \caption{Decay of eigenvalues.} \label{fig:Example1localeigenvalues1a} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_local_acc_eigenvalues-eps-converted-to.pdf} \caption{$1-\sqrt{\sum_{j=n+1}^{N}\lambda_{j}/\sum_{j=1}^{N}\lambda_{j}}$, $n=1,2,...$.} \label{fig:Example1localeigenvalues1b} \end{subfigure} \caption{The decay properties of the eigenvalues in the local problem of Sec.\ref{sec:Example1}.} \label{fig:Example1localeigenvalues} \end{figure} After we construct the data-driven basis, we use spline interpolation to approximate the mapping $\textbf{F}:\gvec{\xi} \mapsto \textbf{c}(\gvec{\xi})$. Notice that the coefficient \eqref{coefficientofexample1} is parameterized by five i.i.d. random variables.
We can partition the random space $[\xi_1(\omega),\xi_2(\omega),\cdots,\xi_5(\omega)]^T\in [0,1]^5$ into a set of uniform grids in order to construct the mapping $\textbf{F}$. Here we choose $N_1=9^5$ samples. We remark that we can choose other sampling strategies, such as sparse-grid points and Latin hypercube points. In Figure \ref{fig:Example1localbasismapping}, we show the profiles of the first two data-driven basis functions $\phi_{1}$ and $\phi_{2}$ and the plots of the mappings $c_1(\xi_1,\xi_2;\xi_3,\xi_4,\xi_5)$ and $c_2(\xi_1,\xi_2;\xi_3,\xi_4,\xi_5)$ with fixed $[\xi_3,\xi_4,\xi_5]^T=[0.25, 0.5, 0.75]^T$. One can see that the data-driven basis functions contain multiscale features and the mapping $c_1(\xi_1,\xi_2;\xi_3,\xi_4,\xi_5)$ and $c_2(\xi_1,\xi_2;\xi_3,\xi_4,\xi_5)$ are smooth with respect to $\xi_i$, $i=1,2$. The behaviors of other data-driven basis functions and the mappings are similar (not shown here). \begin{figure}[tbph] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_local_basis_zeta1-eps-converted-to.pdf}\\ \includegraphics[width=1.0\linewidth]{ex1_local_mapping_c1-eps-converted-to.pdf}\\ \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_local_basis_zeta2-eps-converted-to.pdf}\\ \includegraphics[width=1.0\linewidth]{ex1_local_mapping_c2-eps-converted-to.pdf}\\ \end{subfigure} \caption{Plots of data-driven basis $\phi_{1}$ and $\phi_{2}$ and mappings $c_1(\xi_1,\xi_2;\xi_3,\xi_4,\xi_5)$ and $c_2(\xi_1,\xi_2;\xi_3,\xi_4,\xi_5)$ with fixed $[\xi_3,\xi_4,\xi_5]^T=[0.25, 0.5, 0.75]^T$.} \label{fig:Example1localbasismapping} \end{figure} Once we get the mapping $\textbf{F}$, the solution corresponding to a new realization $a(x,\gvec{\xi}(\omega))$ can be constructed easily by finding $ \textbf{c}(\gvec{\xi})$ and plugging in the approximation \eqref{RB_expansion}. 
In Figure \ref{fig:Example1locall2err}, we show the mean relative testing and projection errors in the $L^2$ and $H^1$ norms. The testing error is the error between the numerical solution obtained by our mapping method and the reference solution obtained by the FEM on the same fine mesh used to compute the sample solutions. The projection error is the error between the FEM solution and its projection on the space spanned by the data-driven basis, i.e., the best possible approximation error. In this experiment, only four data-driven basis functions are needed to achieve a relative error less than $1\%$ in the $L^2$ norm and less than $2\%$ in the $H^1$ norm. Moreover, the numerical solution obtained by our mapping method is close to the projection solution, which is the best approximation of the reference solution by the data-driven basis. This is due to the smoothness of the mapping. Notice that the computational time of the mapping method is almost negligible. In practice, when the number of basis functions is 10, it takes about $0.0022s$ to get a new solution by the mapping method, whereas the standard FEM takes $0.73s$. \begin{figure}[tbph] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_local_L2err-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_local_H1err-eps-converted-to.pdf} \end{subfigure} \caption{ Relative $L^2$ and $H^1$ errors with an increasing number of basis functions for the local problem of Sec.\ref{sec:Example1}.} \label{fig:Example1locall2err} \end{figure} In Figure \ref{fig:Example1localdiffN}, we show the accuracy of the proposed method when we use different numbers of samples $N$ in constructing the data-driven basis. Although the numerical error generally decreases as the sample number $N$ increases, the difference is very mild.
\begin{figure}[tbph] \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.8\linewidth]{ex1_meanL2err_diffN.pdf} \caption{Testing errors in $L^2$ norm.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.8\linewidth]{ex1_meanL2proj_diffN.pdf} \caption{Projection errors in $L^2$ norm.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.8\linewidth]{ex1_meanH1err_diffN.pdf} \caption{Testing errors in $H^1$ norm.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.8\linewidth]{ex1_meanH1proj_diffN.pdf} \caption{Projection errors in $H^1$ norm.} \end{subfigure} \caption{ The relative testing/projection errors in the $L^2$ and $H^1$ norms with different numbers of samples $N$ for the local problem of Sec.\ref{sec:Example1}.} \label{fig:Example1localdiffN} \end{figure} Next, we test our method on the whole computational domain for \eqref{randommultiscaleelliptic} with coefficient \eqref{coefficientofexample1}. Figure \ref{fig:Example1globaleigenvalues} shows the decay property of the eigenvalues. Similarly, we show the magnitudes of the leading eigenvalues in Figure \ref{fig:Example1globaleigenvalues3a} and the ratio of the accumulated sum of the eigenvalues over the total sum in Figure \ref{fig:Example1globaleigenvalues3b}. We observe behaviors similar to those before. Since we approximate the solution in the whole computational domain, we take the Galerkin approach described in Section \ref{sec:GlobalProblem} using the data-driven basis. In Figure \ref{fig:Example1globall2engerr}, we show the mean relative error between our numerical solution and the reference solution in the $L^2$ and $H^1$ norms, respectively. In practice, when the number of basis functions is 15, it takes about $0.084s$ to compute a new solution by our method, whereas the standard FEM costs about $0.82s$ per solution.
\begin{figure}[tbph] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_global_eigenvalues-eps-converted-to.pdf} \caption{ Decay of the eigenvalues.} \label{fig:Example1globaleigenvalues3a} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_global_acc_eigenvalues-eps-converted-to.pdf} \caption{ $1-\sqrt{\sum_{j=n+1}^{N}\lambda_{j}/\sum_{j=1}^{N}\lambda_{j}}$, $n=1,2,...$.} \label{fig:Example1globaleigenvalues3b} \end{subfigure} \caption{The decay properties of the eigenvalues for the global problem of Sec.\ref{sec:Example1}.} \label{fig:Example1globaleigenvalues} \end{figure} \begin{figure}[tbph] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_global_L2err-eps-converted-to.pdf} \caption{ Relative error in $L^2$ norm.} \label{fig:Example1globall2err} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex1_global_H1err-eps-converted-to.pdf} \caption{ Relative error in $H^1$ norm.} \label{fig:Example1globalengerr} \end{subfigure} \caption{The relative errors with an increasing number of basis functions for the global problem of Sec.\ref{sec:Example1}.} \label{fig:Example1globall2engerr} \end{figure} \subsection{An example with an exponential type coefficient}\label{sec:Example2} \noindent We now solve the problem \eqref{randommultiscaleelliptic} with an exponential type coefficient. The coefficient is parameterized by eight random variables and has the following form \begin{align} a(x,y,\omega) =&\exp\Big( \sum_{i=1}^8 \sin(\frac{2\pi (9-i)x}{9\epsilon_i})\cos(\frac{2\pi iy}{9\epsilon_i})\xi_i(\omega) \Big), \label{coefficientofexample2} \end{align} where the multiscale parameters $[\epsilon_1,\epsilon_2,\cdots,\epsilon_{8}] =[\frac{1}{43},\frac{1}{41},\frac{1}{47},\frac{1}{29},\frac{1}{37},\frac{1}{31},\frac{1}{53},\frac{1}{35}]$ and $\xi_i(\omega)$, $i=1,...,8$ are i.i.d.
uniform random variables in $[-\frac{1}{2},\frac{1}{2}]$. Hence the contrast ratio of the coefficient \eqref{coefficientofexample2} is $\kappa_a\approx 3.0\times 10^3$. The force function is $f(x,y) = \cos(2\pi x)\sin(2\pi y)\cdot I_{D_2}(x,y)$, where $I_{D_2}$ is an indicator function defined on $D_2=[\frac{1}{4},\frac{3}{4}]\times[\frac{1}{16},\frac{5}{16}]$. In the local problem, the subdomain of interest is $D_1=[\frac{1}{4},\frac{3}{4}]\times[\frac{11}{16},\frac{15}{16}]$. In Figure \ref{fig:Example2eigenvalues}, we show the decay property of the eigenvalues. Specifically, in Figure \ref{fig:Example2eigenvalues-a} we show the magnitude of the leading eigenvalues and in Figure \ref{fig:Example2eigenvalues-b} we show the ratio of the accumulated sum of the eigenvalues over the total sum. These results imply that the solution space has a low-dimensional structure, which can be approximated by the data-driven basis functions. \begin{figure}[tbph] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex2_local_eigenvalues-eps-converted-to.pdf} \caption{ Decay of eigenvalues.} \label{fig:Example2eigenvalues-a} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex2_local_acc_eigenvalues-eps-converted-to.pdf} \caption{ $1-\sqrt{\sum_{j=n+1}^{N}\lambda_{j}/\sum_{j=1}^{N}\lambda_{j}}$, $n=1,2,...$.} \label{fig:Example2eigenvalues-b} \end{subfigure} \caption{The decay properties of the eigenvalues in the problem of Sec.\ref{sec:Example2}.} \label{fig:Example2eigenvalues} \end{figure} Since the coefficient $a(x,y,\omega)$ is parameterized by eight random variables, it is expensive to construct the mapping $\textbf{F}:\gvec{\xi}(\omega)\mapsto \textbf{c}(\omega)$ using the interpolation method with uniform grids. Instead, we use a sparse grid polynomial interpolation approach to approximate the mapping $\textbf{F}$.
Specifically, we use Legendre polynomials with total order less than or equal to 4 to approximate the mapping, where the total number of nodes is $N_1=2177$; see \cite{Griebel:04}. Figure \ref{fig:Example2errors-a} shows the relative testing and projection errors in the $L^2$ norm. Figure \ref{fig:Example2errors-b} shows the corresponding relative errors in the $H^1$ norm. The sparse grid polynomial interpolation approach gives an error comparable to the best approximation error. We observe similar convergence results in solving the global problem \eqref{randommultiscaleelliptic} with the coefficient \eqref{coefficientofexample2} (not shown here). Therefore, we can use the sparse grid method to construct mappings for problems with a moderate number of random variables. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex2_local_L2err-eps-converted-to.pdf} \caption{ Relative error in $L^2$ norm.} \label{fig:Example2errors-a} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex2_local_H1err-eps-converted-to.pdf} \caption{ Relative error in $H^1$ norm.} \label{fig:Example2errors-b} \end{subfigure} \caption{ The relative errors with an increasing number of basis functions in the problem of Sec.\ref{sec:Example2}.} \label{fig:Example2errors} \end{figure} \subsection{An example with a discontinuous coefficient}\label{sec:ExampleInterface} \noindent We solve the problem \eqref{randommultiscaleelliptic} with a discontinuous coefficient, which is an interface problem.
The coefficient is parameterized by twelve random variables and has the following form \begin{align} a(x,y,\omega) =& \exp\Big(\sum_{i=1}^{6} \sin(2\pi \frac{x\sin(\frac{i\pi}{6}) +y\cos(\frac{i\pi}{6}) }{\epsilon_i} )\xi_i(\omega) \Big)\cdot I_{D\setminus D_3}(x,y)\nonumber\\ &+\exp\Big(\sum_{i=1}^{6} \sin(2\pi \frac{x\sin(\frac{(i+0.5)\pi}{6}) +y\cos(\frac{(i+0.5)\pi}{6}) }{\epsilon_{i+6}} )\xi_{i+6}(\omega) \Big)\cdot I_{D_3}(x,y), \label{coefficientofexampleInterface} \end{align} where $\epsilon_i=\frac{1+i}{100}$ for $i=1,\cdots,6$, $\epsilon_{i}=\frac{i+13}{100}$ for $i=7,\cdots,12$, $\xi_i(\omega)$, $i=1,\cdots,12$ are i.i.d. uniform random variables in $[-\frac{2}{3},\frac{2}{3}]$, and $I_{D_3}$ and $I_{D\setminus D_3}$ are indicator functions. The subdomain $D_3$ consists of three small rectangles, each with width $10h$ and height $0.8$ and with edges parallel to the edges of the domain $D$; their lower left vertices are located at $(0.3,0.1)$, $(0.5,0.1)$, and $(0.7,0.1)$, respectively. The contrast ratio in the coefficient \eqref{coefficientofexampleInterface} is $\kappa_a\approx 3\times 10^3$. In Figure \ref{fig:ExampleInterfaceRealizations} we show two realizations of the coefficient \eqref{coefficientofexampleInterface}. \begin{figure}[htbp] \centering \includegraphics[width=0.49\linewidth]{Ex5_DiffCoef_realization1.pdf} \includegraphics[width=0.49\linewidth]{Ex5_DiffCoef_realization5.pdf} \caption{Two realizations of the coefficient \eqref{coefficientofexampleInterface} in the interface problem.} \label{fig:ExampleInterfaceRealizations} \end{figure} We now solve the local problem of \eqref{randommultiscaleelliptic} with the coefficient \eqref{coefficientofexampleInterface}, where the domain of interest is $D_1=[\frac{1}{4},\frac{3}{4}]\times[\frac{11}{16},\frac{15}{16}]$. The force function is $f(x,y) = \cos(2\pi x)\sin(2\pi y)\cdot I_{D_2}(x,y)$, where $D_2=[\frac{1}{4},\frac{3}{4}]\times[\frac{1}{16},\frac{5}{16}]$.
In Figure \ref{fig:ExampleInterfaceeigenvalues-a} and Figure \ref{fig:ExampleInterfaceeigenvalues-b} we show the magnitude of the dominant eigenvalues and the corresponding approximation accuracy. These results show that only a few data-driven basis functions are enough to approximate all solution samples well. \begin{figure}[tbph] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{Ex5channel_local_eigenvalues.pdf} \caption{ Decay of eigenvalues.} \label{fig:ExampleInterfaceeigenvalues-a} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{Ex5channel_local_acc_eigenvalues.pdf} \caption{$1-\sqrt{\sum_{j=n+1}^{N}\lambda_{j}/\sum_{j=1}^{N}\lambda_{j}}$, $n=1,2,...$.} \label{fig:ExampleInterfaceeigenvalues-b} \end{subfigure} \caption{The decay properties of the eigenvalues in the problem of Sec.\ref{sec:ExampleInterface}.} \label{fig:ExampleInterfaceeigenvalues} \end{figure} Since the coefficient \eqref{coefficientofexampleInterface} is parameterized by twelve random variables, constructing the mapping $\textbf{F}:\gvec{\xi}(\omega)\mapsto \textbf{c}(\omega)$ using the sparse grid polynomial interpolation also becomes very expensive. Here we use the least squares method combined with the $k$-d tree algorithm for nearest neighbor search to approximate the mapping $\textbf{F}$. In our method, we first generate $N_1=5000$ data pairs $\{(\gvec{\xi}^n(\omega),\textbf{c}^n(\omega))\}_{n=1}^{N_1}$ that will be used as training data. Then, we use $N_2=200$ samples for testing in the online stage. For each new testing data point $\gvec{\xi}(\omega)=[\xi_1(\omega),\cdots,\xi_r(\omega)]^T$ (here $r=12$), we run the $k$-d tree algorithm to find its $n$ nearest neighbors in the training data set and apply the least squares method to compute the corresponding mapped value $\vec{c}(\omega)=[c_1(\omega), \ldots, c_K(\omega)]^T$. The complexity of constructing a $k$-d tree is $O(N_1\log N_1)$.
Given the $k$-d tree, for each testing point the complexity of finding its $n$ nearest neighbors is $O(n\log N_1)$ \cite{wald2006building}. Since the $n$ training data points are close to the testing data point $\gvec{\xi}(\omega)$, for each training pair $(\gvec{\xi}^m(\omega),\textbf{c}^m(\omega))$, $m=1,\dots,n$, we compute the first-order Taylor expansion of each component $c^m_j(\omega)$ at $\gvec{\xi}(\omega)$ as \begin{align} c^m_j(\omega)\approx c_j(\omega)+\sum_{i=1}^{r}(\xi^m_i-\xi_i)\frac{\partial c_j}{\partial \xi_i}(\omega),\quad j=1,2,\cdots,K, \label{least-square-system} \end{align} where $\xi^m_i$, $i=1,\dots,r$, and $c^m_j(\omega)$, $j=1,\dots,K$, are given training data, while $c_j(\omega)$ and $\frac{\partial c_j}{\partial \xi_i}(\omega)$, $j=1,\dots,K$, are unknowns associated with the testing data point $\gvec{\xi}(\omega)$. In the $k$-d tree algorithm, we choose $n=20$, which is slightly greater than $r+1=13$. By solving \eqref{least-square-system} using the least squares method, we get the mapped value $\vec{c}(\omega)=[c_1(\omega), \ldots, c_K(\omega)]^T$. Finally, we use the formula \eqref{RB_expansion} to get the numerical solution of Eq.\eqref{randommultiscaleelliptic} with the coefficient \eqref{coefficientofexampleInterface}. Because of the discontinuity and the high-dimensional random input in the coefficient \eqref{coefficientofexampleInterface}, the problem \eqref{randommultiscaleelliptic} is more challenging. The nearest-neighbor-based least squares method provides an efficient way to construct mappings and achieves relative errors less than $3\%$ in both the $L^2$ and $H^1$ norms; see Figure \ref{fig:ExampleInterfacelocalerrors}. Alternatively, one can use the neural network method to construct mappings for this type of challenging problem; see Section \ref{sec:Example3}.
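The online nearest-neighbor least squares step can be sketched as follows. This is a minimal illustration, not the authors' code; the linear toy maps standing in for the components of $\textbf{c}(\omega)$ are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

# For a test point xi, fit the local first-order model
#   c_j^m ~ c_j(xi) + (xi^m - xi) . grad c_j(xi)
# to its n nearest training pairs and read off c(xi) from the constant term.
def local_ls_predict(Xi_train, C_train, xi, n=20):
    tree = cKDTree(Xi_train)               # O(N1 log N1) build, reusable
    _, idx = tree.query(xi, k=n)           # n nearest neighbors of xi
    D = Xi_train[idx] - xi                 # n x r matrix of offsets
    A = np.hstack([np.ones((n, 1)), D])    # unknowns: [c_j(xi), grad c_j(xi)]
    sol, *_ = np.linalg.lstsq(A, C_train[idx], rcond=None)
    return sol[0]                          # first row = predicted c(xi)

rng = np.random.default_rng(2)
r = 12
Xi_train = rng.uniform(-2 / 3, 2 / 3, size=(5000, r))
C_train = np.stack([Xi_train @ np.arange(1.0, r + 1),  # two linear toy maps
                    Xi_train.sum(axis=1)], axis=1)
xi = rng.uniform(-0.5, 0.5, size=r)
c_pred = local_ls_predict(Xi_train, C_train, xi)
```

For these linear toy maps the first-order model is exact, so the prediction matches the true values; for the smooth nonlinear maps in the experiment it is only a local approximation.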
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{Ex5channel_local_L2err.pdf} \label{fig:ExampleInterfaceerrors-a} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{Ex5channel_local_H1err.pdf} \label{fig:ExampleInterfaceerrors-b} \end{subfigure} \caption{ The relative errors with an increasing number of basis functions in the local problem of Sec.\ref{sec:ExampleInterface}.} \label{fig:ExampleInterfacelocalerrors} \end{figure} \subsection{An example with high-dimensional random coefficient and force function}\label{sec:Example3} \noindent We solve the problem \eqref{randommultiscaleelliptic} with an exponential type coefficient and a random force function, where the total number of random variables is twenty. Specifically, the coefficient is parameterized by eighteen i.i.d. random variables, i.e. \begin{align} a(x,y,\omega) = \exp\Big(\sum_{i=1}^{18} \sin(2\pi \frac{x\sin(\frac{i\pi}{18}) +y\cos(\frac{i\pi}{18}) }{\epsilon_i} )\xi_i(\omega) \Big), \label{coefficientofexample3} \end{align} where $\epsilon_i=\frac{1}{2i+9}$, $i=1,2,\cdots,18$ and $\xi_i(\omega)$, $i=1,...,18$ are i.i.d. uniform random variables in $[-\frac{1}{5},\frac{1}{5}]$. The force function is a Gaussian density function $f(x,y) = \frac{1}{2\pi\sigma^2}\exp(-\frac{(x-\theta_1)^2+(y-\theta_2)^2}{2\sigma^2})$ with width $\sigma=0.01$ and a random center $(\theta_1,\theta_2)$ uniformly distributed in the subdomain $D_2=[\frac{1}{4},\frac{3}{4}]\times[\frac{1}{16},\frac{5}{16}]$. When $\sigma$ is small, the Gaussian density function $f(x,y)$ can be used to approximate the Dirac-$\delta$ function, for example, to model wells in reservoir simulations. We first solve the local problem of \eqref{randommultiscaleelliptic} with the coefficient \eqref{coefficientofexample3}, where the subdomain of interest is $D_1=[\frac{1}{4},\frac{3}{4}]\times[\frac{11}{16},\frac{15}{16}]$.
In Figures \ref{fig:Example3eigenvalues-a} and \ref{fig:Example3eigenvalues-b}, we show the magnitude of the leading eigenvalues and the ratio of the accumulated sum of the eigenvalues over the total sum, respectively. We observe similar exponential decay of the eigenvalues even though the force function contains randomness. These results show that we can still build a set of data-driven basis functions to solve problem \eqref{randommultiscaleelliptic} with coefficient \eqref{coefficientofexample3}. \begin{figure}[tbph] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex3_local_eigenvalues.pdf} \caption{ Decay of eigenvalues.} \label{fig:Example3eigenvalues-a} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex3_local_acc_eigenvalues.pdf} \caption{ $1-\sqrt{\sum_{j=n+1}^{N}\lambda_{j}/\sum_{j=1}^{N}\lambda_{j}}$, $n=1,2,...$.} \label{fig:Example3eigenvalues-b} \end{subfigure} \caption{The decay properties of the eigenvalues in the problem of Sec.\ref{sec:Example3}.} \label{fig:Example3eigenvalues} \end{figure} Notice that both the coefficient and the force contain randomness here. We put the random variables $\gvec{\xi}(\omega)$ in the coefficient and the random variables $\gvec{\theta}(\omega)$ in the force together when we construct the mapping $\textbf{F}$. Moreover, the dimension of the randomness, $18+2=20$, is too large even for sparse grids. Here we construct the mapping $\textbf{F}:(\gvec{\xi}(\omega),\gvec{\theta}(\omega))\mapsto \textbf{c}(\omega)$ using a neural network, as depicted in Figure \ref{fig:DNNstructure2}. The neural network has 4 hidden layers, and each layer has 50 units. Naturally, the number of input units is 20 and the number of output units is $K$. The map between the input units and the first hidden layer is an affine transform, and so is the map between the last hidden layer and the output units.
Consecutive hidden layers are connected by an affine transform, a tanh (hyperbolic tangent) activation, and a residual connection, i.e., $\textbf{h}_{l+1}=\tanh(\textbf{A}_l \textbf{h}_l+\textbf{b}_l)+\textbf{h}_l$, $l=1,2,3$, where $\textbf{h}_l$ is the $l$-th hidden layer, $\textbf{A}_l$ is a 50-by-50 matrix, and $\textbf{b}_l$ is a 50-by-1 vector. Under the same network setting, if the rectified linear unit (ReLU), which is piecewise linear, is used as the activation function, we observe a much larger error. Therefore we choose the hyperbolic tangent activation function and implement the residual neural network (ResNet) here \cite{he2016deep}. \begin{figure}[tbph] \tikzset{global scale/.style={ scale=#1, every node/.append style={scale=#1} } } \centering \begin{tikzpicture}[global scale=0.6] \tikzstyle{inputvariables} = [circle, very thick, fill=yellow,draw=black, minimum height=0.1cm, text width = 0.2cm] \tikzstyle{hiddenvariables} = [circle, thick, draw =black, fill=blue,minimum height=0.1cm, text width = 0.2cm] \tikzstyle{outputvariables} = [circle, very thick, draw=black, fill=red,minimum height=0.1cm, text width = 0.2cm] \tikzstyle{dottedvariables} = [thick, fill=white, minimum size = 0.2cm] \tikzstyle{textnode} = [thick, fill=white, minimum size=0.1cm ] \node[inputvariables] (x1) at (0,0) {$\xi_1$}; \node[inputvariables, below=0.4cm of x1] (x2) {$\xi_2$}; \node[dottedvariables, below=0.1cm of x2] (x3) {$\vdots$}; \node[inputvariables, below=0.1cm of x3] (x4) {$\xi_{r_1}$}; \node[inputvariables, below=0.4cm of x4] (x5) {$\theta_{1}$}; \node[inputvariables, below=0.1cm of x5] (x6) {$\theta_{2}$}; \node[dottedvariables, below=0.1cm of x6] (x7) {$\vdots$}; \node[inputvariables, below=0.4cm of x7] (x8) {$\theta_{r_2}$}; \draw[-,thick,decorate, decoration={brace, raise=0.3cm}] (x4.south west)--(x1.north west); \node[textnode, above left of=x3,left=0.2cm] {$\gvec{\xi}(\omega)$}; \draw[-,thick,decorate, decoration={brace, raise=0.3cm}] (x8.south
west)--(x5.north west); \node[textnode, above left of=x7,left=0.2cm] {$\gvec{\theta}(\omega)$}; \node[hiddenvariables] (h1) at (3,-1) {}; \node[hiddenvariables, below=0.4cm of h1] (h2) {}; \node[dottedvariables, below=0.1cm of h2] (h3) {$\vdots$}; \node[dottedvariables, below=0.1cm of h3] (h4) {$\vdots$}; \node[hiddenvariables, below=0.4cm of h4] (h5) {}; \node[hiddenvariables, below=0.4cm of h5] (h6) {}; \node[dottedvariables] (h31) at (5.5,-1) {$\cdots$}; \node[dottedvariables, below=0.5cm of h31] (h32) {$\cdots$}; \node[dottedvariables, below=0.5cm of h32] (h33) {$\cdots$}; \node[dottedvariables, below=0.5cm of h33] (h34) {$\dots$}; \node[dottedvariables, below=0.5cm of h34] (h35) {$\cdots$}; \node[dottedvariables, below=0.5cm of h35] (h36) {$\cdots$}; \node[hiddenvariables] (h21) at (8,-1) {}; \node[hiddenvariables, below=0.4cm of h21] (h22) {}; \node[dottedvariables, below=0.1cm of h22] (h23) {$\vdots$}; \node[dottedvariables, below=0.1cm of h23] (h24) {$\vdots$}; \node[hiddenvariables, below=0.4cm of h24] (h25) {}; \node[hiddenvariables, below=0.4cm of h25] (h26) {}; \node[outputvariables] (y1) at (11,0) {$c_1$}; \node[outputvariables, below=0.4cm of y1] (y2) {$c_2$}; \node[outputvariables, below=0.4cm of y2] (y3) {$c_3$}; \node[dottedvariables, below=0.2cm of y3] (y4) {$\vdots$}; \node[dottedvariables, below=0.2cm of y4] (y5) {$\vdots$}; \node[dottedvariables, below=0.2cm of y5] (y6) {$\vdots$}; \node[dottedvariables, below=0.2cm of y6] (y7) {$\vdots$}; \node[outputvariables, below=0.2cm of y7] (y8) {$c_k$}; \draw[-,thick,decorate, decoration={brace, raise=0.3cm}] (y1.north east)--(y8.south east); \node[textnode, above right of=y5, right=0.2cm] {$\mathbf{c}(\omega)$}; \path [-] (x1) edge node {} (h1); \path [-] (x2) edge node {} (h1); \path [-] (x4) edge node {} (h1); \path [-] (x5) edge node {} (h1); \path [-] (x6) edge node {} (h1); \path [-] (x8) edge node {} (h1); \path [-] (x1) edge node {} (h2); \path [-] (x2) edge node {} (h2); \path [-] (x4) edge 
node {} (h2); \path [-] (x5) edge node {} (h2); \path [-] (x6) edge node {} (h2); \path [-] (x8) edge node {} (h2); \path [-] (x1) edge node {} (h5); \path [-] (x2) edge node {} (h5); \path [-] (x4) edge node {} (h5); \path [-] (x5) edge node {} (h5); \path [-] (x6) edge node {} (h5); \path [-] (x8) edge node {} (h5); \path [-] (x1) edge node {} (h6); \path [-] (x2) edge node {} (h6); \path [-] (x4) edge node {} (h6); \path [-] (x5) edge node {} (h6); \path [-] (x6) edge node {} (h6); \path [-] (x8) edge node {} (h6); \path [-] (h1) edge node {} (h31); \path [-] (h2) edge node {} (h31); \path [-] (h5) edge node {} (h31); \path [-] (h6) edge node {} (h31); \path [-] (h1) edge node {} (h32); \path [-] (h2) edge node {} (h32); \path [-] (h5) edge node {} (h32); \path [-] (h6) edge node {} (h32); \path [-] (h1) edge node {} (h33); \path [-] (h2) edge node {} (h33); \path [-] (h5) edge node {} (h33); \path [-] (h6) edge node {} (h33); \path [-] (h1) edge node {} (h34); \path [-] (h2) edge node {} (h34); \path [-] (h5) edge node {} (h34); \path [-] (h6) edge node {} (h34); \path [-] (h1) edge node {} (h35); \path [-] (h2) edge node {} (h35); \path [-] (h5) edge node {} (h35); \path [-] (h6) edge node {} (h35); \path [-] (h1) edge node {} (h36); \path [-] (h2) edge node {} (h36); \path [-] (h5) edge node {} (h36); \path [-] (h6) edge node {} (h36); \path [-] (h21) edge node {} (h31); \path [-] (h22) edge node {} (h31); \path [-] (h25) edge node {} (h31); \path [-] (h26) edge node {} (h31); \path [-] (h21) edge node {} (h32); \path [-] (h22) edge node {} (h32); \path [-] (h25) edge node {} (h32); \path [-] (h26) edge node {} (h32); \path [-] (h21) edge node {} (h33); \path [-] (h22) edge node {} (h33); \path [-] (h25) edge node {} (h33); \path [-] (h26) edge node {} (h33); \path [-] (h21) edge node {} (h34); \path [-] (h22) edge node {} (h34); \path [-] (h25) edge node {} (h34); \path [-] (h26) edge node {} (h34); \path [-] (h21) edge node {} (h35); \path [-] (h22) edge 
node {} (h35); \path [-] (h25) edge node {} (h35); \path [-] (h26) edge node {} (h35); \path [-] (h21) edge node {} (h36); \path [-] (h22) edge node {} (h36); \path [-] (h25) edge node {} (h36); \path [-] (h26) edge node {} (h36); \path [-] (h21) edge node {} (y1); \path [-] (h22) edge node {} (y1); \path [-] (h25) edge node {} (y1); \path [-] (h26) edge node {} (y1); \path [-] (h21) edge node {} (y2); \path [-] (h22) edge node {} (y2); \path [-] (h25) edge node {} (y2); \path [-] (h26) edge node {} (y2); \path [-] (h21) edge node {} (y3); \path [-] (h22) edge node {} (y3); \path [-] (h25) edge node {} (y3); \path [-] (h26) edge node {} (y3); \path [-] (h21) edge node {} (y8); \path [-] (h22) edge node {} (y8); \path [-] (h25) edge node {} (y8); \path [-] (h26) edge node {} (y8); \node[textnode, font=\fontsize{15}{6}\selectfont, above=1.0cm of h31] (Text1){Hidden units}; \node[textnode, font=\fontsize{15}{6}\selectfont, left =1.8cm of Text1] (Text2) {Input units}; \node[textnode, font=\fontsize{15}{6}\selectfont, right = 1.8cm of Text1] {Output units}; \end{tikzpicture} \caption{Structure of neural network, where $r_1=18$ and $r_2=2$.} \label{fig:DNNstructure2} \end{figure} We use $N_1=5000$ samples for network training in the offline stage and use $N_2=200$ samples for testing in the online stage. The sample data pairs for training are $\{(\gvec{\xi}^n(\omega),\gvec{\theta}^n(\omega)),\textbf{c}^n(\omega)\}_{n=1}^{N_1}$, where $\gvec{\xi}^n(\omega)\in [-\frac{1}{5},\frac{1}{5}]^{18}$, $\gvec{\theta}^n(\omega)\in [\frac{1}{4},\frac{3}{4}]\times[\frac{1}{16},\frac{5}{16}]$, and $\textbf{c}^n(\omega)\in R^{K}$. We define the loss function for network training as \begin{align} loss\big(\{\textbf{c}^n\},\{\textbf{\^{c}}^n\}\big) = \frac{1}{N_1}\sum_{n=1}^{N_1}\frac{1}{K}|\textbf{c}^{n}-\textbf{\^{c}}^{n}|^2, \end{align} where $\textbf{c}^{n}$ are the training data and $\textbf{\^{c}}^n$ are the output of the neural network.
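The forward pass and loss described above can be sketched in a few lines. This is a minimal, untrained numpy illustration; the random weights are hypothetical, and the actual training would use automatic differentiation.

```python
import numpy as np

# Affine input layer, three tanh residual blocks
# h_{l+1} = tanh(A_l h_l + b_l) + h_l, then an affine output layer.
def resnet_forward(x, params):
    W_in, b_in, blocks, W_out, b_out = params
    h = W_in @ x + b_in
    for A, b in blocks:                    # residual connections keep width 50
        h = np.tanh(A @ h + b) + h
    return W_out @ h + b_out

def mse_loss(C, C_hat):
    # (1/N1) sum_n (1/K) |c^n - c_hat^n|^2, as in the text
    return np.mean((C - C_hat) ** 2)

rng = np.random.default_rng(3)
width, r, K = 50, 20, 10
params = (rng.standard_normal((width, r)) * 0.1, np.zeros(width),
          [(rng.standard_normal((width, width)) * 0.1, np.zeros(width))
           for _ in range(3)],
          rng.standard_normal((K, width)) * 0.1, np.zeros(K))
x = rng.uniform(-0.2, 0.2, size=r)         # one realization of (xi, theta)
c_hat = resnet_forward(x, params)
```

The residual connection keeps gradients well behaved during training, which is one reason the tanh ResNet outperforms the plain ReLU network reported above.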
Figure \ref{fig:Example3locall2err-a} shows the value of the loss function during the training procedure. Figure \ref{fig:Example3locall2err-b} shows the corresponding mean relative error of the testing samples in the $L^2$ norm. Eventually, the relative error of the neural network reaches about $1.5\times 10^{-2}$. Figure \ref{fig:Example3locall2err-c} shows the corresponding mean relative error of the testing samples in the $H^1$ norm. We remark that many existing methods become extremely expensive or infeasible when the problem is parameterized by high-dimensional random variables, as in this example. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.32\textwidth} $K=5$\\ \includegraphics[width=1.0\linewidth]{ex3_local_trainingloss_5.pdf} \\ $K=10$\\ \includegraphics[width=1.0\linewidth]{ex3_local_trainingloss_10.pdf} \\ $K=20$\\ \includegraphics[width=1.0\linewidth]{ex3_local_trainingloss_20.pdf} \caption{ Loss.} \label{fig:Example3locall2err-a} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} ~~\\ \includegraphics[width=1.0\linewidth]{ex3_local_trainingL2err_5.pdf} \\ ~~\\ \includegraphics[width=1.0\linewidth]{ex3_local_trainingL2err_10.pdf} \\ ~~\\ \includegraphics[width=1.0\linewidth]{ex3_local_trainingL2err_20.pdf} \caption{ Relative $L^2$ error.} \label{fig:Example3locall2err-b} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} ~~\\ \includegraphics[width=1.0\linewidth]{ex3_local_trainingH1err_5.pdf}\\ ~~\\ \includegraphics[width=1.0\linewidth]{ex3_local_trainingH1err_10.pdf}\\ ~~\\ \includegraphics[width=1.0\linewidth]{ex3_local_trainingH1err_20.pdf} \caption{ Relative $H^1$ error.} \label{fig:Example3locall2err-c} \end{subfigure} \caption{ First column: the value of the loss function during the training procedure.
Second and third columns: the mean relative errors of the testing set during the training procedure in the $L^2$ and $H^1$ norms, respectively.} \label{fig:Example3locall2err} \end{figure} \subsection{An example with unknown random coefficient and source function}\label{sec:Example4} \noindent Here we present an example where the models of the random coefficient and source are unknown. Only a set of sample solutions is provided, and a few sensors can be placed at certain locations for solution measurements. This kind of scenario appears often in practice. We use the least squares fitting method described in Section \ref{sec:LS}. Our numerical experiment is still based on \eqref{randommultiscaleelliptic}, which is used to generate solution samples (instead of experiments or measurements in real practice). But once the data are generated, we do not assume any knowledge of the coefficient or the source when computing a new solution. To be specific, the coefficient takes the form \begin{align} a(x,y,\omega) = \exp\Big(\sum_{i=1}^{24} \sin(2\pi \frac{x\sin(\frac{i\pi}{24}) +y\cos(\frac{i\pi}{24}) }{\epsilon_i} )\xi_i(\omega) \Big), \label{coefficientofexample4} \end{align} where $\epsilon_i=\frac{1+i}{100}$, $i=1,2,\cdots,24$ and $\xi_i(\omega)$, $i=1,...,24$ are i.i.d. uniform random variables in $[-\frac{1}{6},\frac{1}{6}]$. The force function is a random function $f(x,y) = \sin(\pi(\theta_1x+2\theta_2))\cos(\pi(\theta_3y+2\theta_4))\cdot I_{D_2}(x,y)$ with i.i.d. uniform random variables $\theta_1,\theta_2,\theta_3,\theta_4$ in $[0,2]$. We first generate $N=2000$ solution samples (using the standard FEM) $u(x_j, \omega_i)$, $i=1, \ldots, N$, $j=1, \ldots, J$, where $x_j$ are the points where solution samples are measured. Then a set of $K$ data-driven basis functions $\phi_k(x_j)$, $j=1, \ldots, J$, $k=1, \dots, K$, is extracted from the solution samples as before.
Next we determine $M$ good sensing locations from the data-driven basis so that the least squares problem \eqref{eq:LS} is not ill-conditioned. We follow the method proposed in \cite{Kutz2017Sensor}. Define $\Phi=[\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_K]\in R^{J\times K}$, where $\boldsymbol{\phi}_k=[\phi_k(x_1), \ldots, \phi_k(x_J)]^T$. If $M=K$, QR factorization with column pivoting is performed on $\Phi^T$. If $M>K$, QR factorization with pivoting is performed on $\Phi\Phi^T$. The first $M$ pivoting indices provide the measurement locations. Once a new solution is measured at these $M$ selected locations, the least squares problem \eqref{eq:LS} is solved to determine the coefficients $c_1, c_2, \ldots, c_K$, and the new solution is approximated by $u(x_j,\omega)=\sum_{k=1}^K c_k\phi_k(x_j)$. In Figure \ref{fig:Example4localerrors} and Figure \ref{fig:Example4globalerrors}, we show the results of the local problem and the global problem, respectively. In these numerical results, we compare the errors between the reconstructed solutions and the reference solutions. We find that our proposed method also works well for problem \eqref{randommultiscaleelliptic} with a non-parametric coefficient or source.
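The pivoted-QR sensor selection and least squares reconstruction can be sketched as follows. This is a minimal illustration, not the authors' code; a random orthonormal matrix stands in for the data-driven basis $\Phi$.

```python
import numpy as np
from scipy.linalg import qr

# Choose M measurement locations by column-pivoted QR: on Phi^T if M = K,
# on Phi Phi^T if M > K; the first M pivots are the sensor indices.
def select_sensors(Phi, M):
    K = Phi.shape[1]
    target = Phi.T if M == K else Phi @ Phi.T
    _, _, piv = qr(target, pivoting=True)
    return piv[:M]

rng = np.random.default_rng(4)
J, K = 400, 6
Phi, _ = np.linalg.qr(rng.standard_normal((J, K)))  # stand-in basis columns
sensors = select_sensors(Phi, K)

# A "solution" lying in the basis span, measured only at the sensors,
# is recovered by least squares and re-expanded in the basis.
u = Phi @ rng.standard_normal(K)
c, *_ = np.linalg.lstsq(Phi[sensors], u[sensors], rcond=None)
u_rec = Phi @ c
```

Pivoting tends to select rows that keep the measurement matrix $\Phi[\text{sensors}]$ well conditioned, which is exactly the requirement stated above for the least squares problem.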
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex4_local_L2err.pdf} \label{fig:Example4errors-a} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex4_local_H1err.pdf} \label{fig:Example4errors-b} \end{subfigure} \caption{ The relative errors with an increasing number of basis functions in the local problem of Sec.\ref{sec:Example4}.} \label{fig:Example4localerrors} \end{figure} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex4_global_L2err.pdf} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1.0\linewidth]{ex4_global_H1err.pdf} \end{subfigure} \caption{ The relative errors with an increasing number of basis functions in the global problem of Sec.\ref{sec:Example4}.} \label{fig:Example4globalerrors} \end{figure} \section{Conclusion} \label{sec:Conclusion} \noindent In this paper, we propose a data-driven approach to solve elliptic PDEs with multiscale and random coefficients, which arise in various applications, such as heterogeneous porous media flow problems in water aquifer and oil reservoir simulations. The key idea of our method, which is motivated by the highly separable approximation of the underlying Green's function, is to extract a problem-specific low-dimensional structure in the solution space and construct its basis from the data. Once the data-driven basis is available, depending on different setups, we design several ways to compute a new solution efficiently. Error analysis based on the sampling error of the coefficients and the projection error of the data-driven basis is presented to provide guidance for the implementation of our method. Numerical examples show that the proposed method is very efficient, especially when the problem has relatively high-dimensional random input. \section*{Acknowledgements} \noindent The research of S.
Li is partially supported by the Doris Chen Postgraduate Scholarship. The research of Z. Zhang is supported by the Hong Kong RGC General Research Funds (Projects 27300616, 17300817, and 17300318), National Natural Science Foundation of China (Project 11601457), Seed Funding Programme for Basic Research (HKU), and Basic Research Programme (JCYJ20180307151603959) of The Science, Technology and Innovation Commission of Shenzhen Municipality. The research of H. Zhao is partially supported by NSF grant DMS-1622490 and DMS-1821010. This research is made possible by a donation to the Big Data Project Fund, HKU, from Dr Patrick Poon whose generosity is gratefully acknowledged. \bibliographystyle{siam}
\section{Introduction} \label{sec:introduction} Approximate solution of PDEs using machine learning techniques has been considered in various forms in the past thirty years. For instance, \cite{lagaris1998,lagaris2000,lee1990,malek2006} propose to use neural networks to solve PDEs and ODEs. These articles compute neural network solutions by using an a priori fixed mesh. In recent years, there has been an incipient development of mesh-free numerical methods to solve PDEs by using neural networks. Although the approaches have been diverse, most of these algorithms aim to train a neural network to approximate the unknown function, forcing the fulfillment of the PDE and its boundary conditions through a suitable loss functional. In this regard, among other works, let us mention \cite{E18, DGM18, He20, Wang20, Zang20}. Deep neural networks are not necessarily suitable for solving PDEs in low dimensions, where they may be outperformed by classical methods specifically tailored for the problems under consideration. However, neural network methods have proven to be effective in some circumstances where the application of classical methods becomes impractical. Such is the case of high-dimensional PDEs. The method we propose in this work aims to overcome some disadvantages of the algorithms available in the literature. Using a first-order formulation we are able to avoid the computation of second-order derivatives in cost functionals, thereby saving a significant computational cost in high dimensions. Avoiding second-order derivatives also allows us to use piecewise-linear activation functions and, a priori, gives us the possibility of approximating weak solutions. On the other hand, counting on explicit representations of the gradients simplifies the strong imposition of Neumann-type boundary conditions. Namely, we can impose boundary conditions without adding penalty terms in the loss function. This results in a reduction in training time.
Let $\Omega \subset {\mathbb{R}^d}$ be an open domain. In this work, we shall make use of the spaces \[ \begin{aligned} &H^1(\Omega) = \{ v \in L^2(\Omega) \colon \nabla v \in L^2(\Omega)\}, \\ & H(\Div; \Omega ) = \{ {{\bm \psi}} \in [L^2(\Omega)]^d \colon \Div {{\bm \psi}} \in L^2(\Omega)\}. \end{aligned} \] We assume there exists a disjoint partition $\pp \Omega = \Gamma_\calD \cup \Gamma_\calN$, with $|\Gamma_\calD| > 0$, and let ${{\bm \nu}}$ denote the outward normal to $\Omega$. Given sufficiently regular functions $f, g_\calD, g_\calN$, we aim to solve the problem \begin{equation}\label{eq:problem} \left\lbrace \begin{aligned} - \mbox{div}({{\bm A}} \nabla u) + Bu & = f &\mbox{ in } \Omega, \\ u & = g_\calD &\mbox{ on } \Gamma_\calD, \\ {{\bm A}} \nabla u \cdot {{\bm \nu}} & = g_\calN &\mbox{ on } \Gamma_\calN, \\ \end{aligned} \right. \end{equation} where we assume ${{\bm A}} \in [L^{\infty}(\Omega)]^{d\times d}$ is a.e. symmetric and uniformly positive definite: there exist constants $\lambda, \Lambda$ such that \[ 0 < \lambda \le \lambda_{\min}({{\bm A}}(x)) \le \lambda_{\max}({{\bm A}}(x)) \le \Lambda, \quad \mbox{for a.e. } x \in \Omega, \] where $\lambda_{\min}({{\bm A}}(\cdot))$ (resp. $\lambda_{\max}({{\bm A}}(\cdot))$) denotes the minimum (resp. maximum) eigenvalue of ${{\bm A}}(\cdot)$. The linear operator $B \colon H^1(\Omega) \to L^2(\Omega)$ in \eqref{eq:problem} is at most of linear order, and we assume it satisfies \begin{equation*} \label{eq:hyp-B} \| B v \|_{L^2(\Omega)} \le C \| \nabla v \|_{L^2(\Omega)} \quad \forall v \in H^1(\Omega) \mbox{ such that } v = 0 \mbox{ on } \Gamma_\calD. \end{equation*} Examples satisfying this condition include $Bv = \Div({{\bm \beta}} v)$, with ${{\bm \beta}} \in [W^{1,\infty}(\Omega)]^d$, and $Bv = {{\bm \beta}} \cdot \nabla v + \gamma v$ for some ${{\bm \beta}} \in [L^{\infty}(\Omega)]^d$, $\gamma \in L^{\infty}(\Omega)$. 
We thus remark that \eqref{eq:problem} can accommodate, for example, stationary convection-reaction-diffusion problems and Helmholtz problems. We introduce the flux variable ${{\bm \phi}} = {{\bm A}} \nabla u$ and rewrite \eqref{eq:problem} as a first-order system: \begin{equation}\label{eq:FOS} \left\lbrace \begin{aligned} {{\bm \phi}} - {{\bm A}} \nabla u & = 0 & \mbox{ in } \Omega, \\ - \Div({{\bm \phi}}) + B u - f & = 0 & \mbox{ in } \Omega, \\ u & = g_\calD & \mbox{ on } \Gamma_\calD, \\ {{\bm \phi}} \cdot {{\bm \nu}} & = g_\calN & \mbox{ on } \Gamma_\calN. \\ \end{aligned} \right. \end{equation} Our approach is based on seeking minimizers of the loss function \begin{equation} \label{eq:LS-loss} \mathcal{L} (u, {{\bm \phi}}) := \| {{\bm \phi}} - {{\bm A}} \nabla u \|_{L^2(\Omega)}^2 + \| \Div({{\bm \phi}}) - B u + f \|_{L^2(\Omega)}^2 \end{equation} on a suitable set of admissible functions \[ \mathcal{A} := \{ {{\bm q}} = (u, {{\bm \phi}}) \in H^1(\Omega) \times H(\Div; \Omega) \colon u = g_\calD \mbox{ on } \Gamma_\calD, \ {{\bm \phi}} \cdot {{\bm \nu}} = g_\calN \mbox{ on } \Gamma_\calN \}. \] Clearly, if \eqref{eq:problem} has a unique solution $u \in H^1(\Omega)$, then the unique minimizer of $\mathcal{L}$ in $\mathcal{A}$ is ${{\bm q}} :=~(u, {{\bm A}} \nabla u)$. Our goal is to compute approximations to such a minimizer within a suitable finite dimensional space $\mathcal{A}_m \subset \mathcal{A}$. Particularly, in our method we consider a space $\mathcal{A}_m$ composed of neural networks with a fixed architecture and parameters ${{\bm \Theta}} \in \mathbb{R}^m$. Some efforts in this direction include the deep FOSLS method from \cite{Cai20} and the deep mixed residual method proposed in \cite{Lyu20}. Reference \cite{Cai20} proposes the use of a partition of $\Omega$ and a mid-point quadrature rule for the evaluation of the discrete loss functional; instead, our algorithm is meshfree and uses random quadrature points. 
This yields a significant advantage in high-dimensional problems. Our method can be understood in the setting of the mixed residual methods in \cite{Lyu20}. However, a significant difference between our work and \cite{Lyu20} is that here we propose a strong imposition of the boundary conditions instead of the inclusion of penalization terms in the loss functional. We pre-train neural networks to accommodate boundary data, which results in a reduction in the number of iterations required in the solution of the PDE \cite{Berg18, Sheng20}. To the best of our knowledge, the error in the approximation of continuous functionals with their discrete counterparts is not taken into account in any numerical method based on neural networks available in the literature. In other words, the focus is generally on the convergence of the minimizers of functionals such as $\mathcal{L}$ in \eqref{eq:LS-loss} over certain neural network spaces towards the minimizer of the same functional at the continuous level. However, in practice one does not compute $\mathcal{L}$ exactly but rather approximates it by means of quadrature rules. Let us call $\mathcal{L}_N$ such an approximation to the functional $\mathcal{L}$, where $N$ is, for example, the number of quadrature points. The computation of $ \mathcal{L}_N $ instead of $ \mathcal{L} $ can introduce important changes in the nature of the minimization problem, such as the loss of convexity of the associated functional \cite{E18}. A major contribution of this work is to present a convergence analysis that considers the discretization of the functional $ \mathcal{L} $. Specifically, we prove the almost-sure $\Gamma$-convergence of the discrete loss functions towards the continuous one. As stated in Theorem \ref{teo:conv}, this implies the almost-sure convergence of the solutions computed numerically to the solution of the continuous problem. 
The techniques we develop for this purpose are not restricted to the method we propose: we generalize and apply them to the convergence analysis of a broad class of methods, including the Deep Ritz \cite{E18} and the Deep Galerkin \cite{DGM18} Methods (DRM and DGM, respectively; see Remarks \ref{rem:DRM} and \ref{rem:DGM}). \subsection*{Organization of the paper.} The rest of the paper is organized as follows. Section \ref{sec:description} describes the method we propose for dealing with \eqref{eq:FOS}, including the treatment of Dirichlet and Neumann boundary conditions in a strong form, and discusses some aspects pertaining to its implementation. We perform a convergence analysis for our method in Section \ref{sec:analysis}. This analysis takes into account the approximation of the loss functional by means of Monte Carlo integration, and establishes the convergence of the discrete minimization problem towards the continuous one in the sense of almost sure $\Gamma$-convergence. Section \ref{sec:general} generalizes the analysis to include some other well-known methods, thereby establishing their convergence as well. We illustrate the performance of our method through computational examples in Section \ref{sec:numerical}, and provide some concluding remarks in Section \ref{sec:conclusion}. \section{Description of the method} \label{sec:description} The goal of the method we propose is to approximate the unique minimizer $(u, {{\bm \phi}})$ of the functional in \eqref{eq:LS-loss}. A natural first approach would consist in seeking a set of parameters ${{\bm \Theta}}_0 \in \mathbb{R}^m$ such that \[ \mathcal{L} (u_{ {{\bm \Theta}}_0}, {{\bm \phi}}_{ {{\bm \Theta}}_0 }) = \min_{{{\bm \Theta}} \in \mathbb{R}^m} \mathcal{L} (u_{{{\bm \Theta}}}, {{\bm \phi}}_{{{\bm \Theta}}}), \] with the functions $(u_{{{\bm \Theta}}}, {{\bm \phi}}_{{{\bm \Theta}}})$ belonging to a suitable neural network space.
The use of neural networks in this setting has the advantage that one can easily implement meshfree methods by randomly sampling collocation points (see \cite{DGM18}, for example), and thereby be able to deal with high-dimensional problems, where most classical numerical PDE methods become unfeasible. The enforcement of boundary conditions is a non-trivial aspect to take into account in this approach. A typical way to tackle this issue is to incorporate boundary conditions by adding a penalization term \cite{ E18, DGM18, Zang20}. However, in practice it is observed that enforcing discrete functions to satisfy the boundary conditions gives rise to a faster training process \cite{Berg18,Sheng20}. We shall first create suitable auxiliary functions with the purpose of imposing the boundary conditions in a strong fashion. In this way, we ensure $(u_{{{\bm \Theta}}},{{\bm \phi}}_{{{\bm \Theta}}}) \in \mathcal{A}_m \subset \mathcal{A}$, for all ${{\bm \Theta}} \in \mathbb{R}^m$. Then, the optimization procedure consists of sampling $N$ points $\{x_k \}_{k=1}^N \subset \Omega$ uniformly, and approximating $\mathcal{L} (u_{ {{\bm \Theta}}}, {{\bm \phi}}_{ {{\bm \Theta}}}) \approx \mathcal{L}_N (u_{ {{\bm \Theta}}}, {{\bm \phi}}_{ {{\bm \Theta}}}),$ at every step of a gradient descent algorithm, with $\mathcal{L}_N$ defined as \begin{equation}\label{eq:discrete_cost} \mathcal{L}_N (u, {{\bm \phi}}) := \frac{|\Omega|}{N} \sum_{k=1}^N \big| {{\bm \phi}}(x_k) - {{\bm A}} \nabla u(x_k) \big|^2 + \big|\Div{{\bm \phi}}(x_k) - B u(x_k) + f(x_k) \big|^2. \end{equation} We present the details below. \subsection{Strong imposition of boundary conditions}\label{subsec:strong_b} We follow the ideas from \cite{Berg18} about the imposition of Dirichlet boundary conditions, and extend the approach to include Neumann boundary conditions.
Instead of trying to compute either $u$ or ${{\bm \phi}}$ directly and incorporate the boundary conditions by a penalization term, we shall enforce them in the construction of the neural network approximations. For that purpose, we make use of the following notion. \begin{definition}[smooth distance function] \label{def:distance-function} Let $\Gamma_* \subset \overline \Omega$ be a closed set. We say that a Lipschitz continuous function $d_* \colon \Omega \to \mathbb{R}$ is a {\em smooth distance function} if it satisfies $d_* \ge 0$ and $d_* (x) = 0$ if and only if $x \in \Gamma_*$. \end{definition} We briefly comment on the use of smooth distance functions in the strong imposition of Dirichlet and Neumann boundary conditions. In the computation of $u$ in \eqref{eq:LS-loss}, we restrict the class of functions to be \begin{equation} \label{eq:def-u} u (x) := G_\calD(x) + d_\calD(x) \, v(x), \end{equation} where the unknown is the function $v \colon \Omega \to \mathbb{R}$, $G_\calD$ is a lifting of the Dirichlet datum, and $d_\calD$ is a smooth distance function to $\Gamma_\calD$. In a similar fashion, we can incorporate normal boundary conditions on the flux variable ${{\bm \phi}}$ in a strong way. We first construct a vector field ${{\bm n}} \colon \Omega \to \mathbb{R}^d$ such that ${{\bm n}} |_{\Gamma_\calN} = {{\bm \nu}}$ and $| {{\bm n}} (x) | = 1$ for a.e. $x \in \Omega$, and consider \begin{equation} \label{eq:def-vphi} {{\bm \phi}} (x) := {{\bm \psi}}(x) + \left(G_\calN (x) - \frac{ {{\bm \psi}}(x) \cdot {{\bm n}}(x) }{1 + d_\calN(x)} \right) {{\bm n}}(x). \end{equation} Above, $G_\calN$ is a lifting of the Neumann boundary condition, $d_\calN$ is a smooth distance function to $\Gamma_\calN$, and the unknown is the function ${{\bm \psi}} \colon \Omega \to \mathbb{R}^{d}$. By its definition, the function ${{\bm \phi}}$ satisfies the boundary condition ${{\bm \phi}} \cdot {{\bm \nu}} = g_\calN$ at $\Gamma_\calN$. 
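As a quick sanity check of constructions \eqref{eq:def-u} and \eqref{eq:def-vphi}, consider a toy one-dimensional setting of our own (not from the text above): $\Omega = (0,1)$, $\Gamma_\calD = \{0\}$, $\Gamma_\calN = \{1\}$, $d_\calD(x) = x$, $d_\calN(x) = 1-x$, ${{\bm n}} \equiv 1$, and constant liftings. The boundary values are then attained exactly, irrespective of the stand-ins chosen for $v$ and ${{\bm \psi}}$:

```python
import numpy as np

# Toy 1D setting (an assumption for illustration): Omega = (0, 1),
# Gamma_D = {0}, Gamma_N = {1}, d_D(x) = x, d_N(x) = 1 - x, n(x) = 1,
# constant liftings G_D = g_D and G_N = g_N.
g_D, g_N = 2.0, -3.0
v = lambda x: np.cos(5.0 * x)       # arbitrary stand-in for the network v
psi = lambda x: np.exp(x)           # arbitrary stand-in for the network psi

def u(x):                           # u = G_D + d_D * v
    return g_D + x * v(x)

def phi(x):                         # phi = psi + (G_N - psi.n / (1 + d_N)) n
    d_N = 1.0 - x
    return psi(x) + (g_N - psi(x) / (1.0 + d_N)) * 1.0

u0 = u(0.0)        # equals g_D, regardless of v
phi1 = phi(1.0)    # equals g_N, regardless of psi
```

At $x = 1$ one has $d_\calN = 0$, so the normal component of ${{\bm \phi}}$ collapses to $g_\calN$ exactly, mirroring the argument in the text.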
We remark that we do not require any smoothness on ${{\bm n}}$: in particular, this field may be discontinuous at some points in the domain. Therefore, in the construction of approximate solutions we shall first compute the vector field ${{\bm n}}$ and the scalar functions $d_\calD$, $d_\calN$, $G_\calD$, $G_\calN$. Then, we seek ${{\bm y}} = (v, {{\bm \psi}})$ such that the corresponding pair $(u,{{\bm \phi}})$, given by \eqref{eq:def-u} and \eqref{eq:def-vphi}, minimizes the loss function $\mathcal{L}$. The computation of the auxiliary functions ${{\bm n}}$, $d_\calD$, $d_\calN$, $G_\calD$, $G_\calN$ typically requires fewer degrees of freedom and iterations than the computation of $(v, {{\bm \psi}})$, depending on the complexity of the domain or the boundary data. Consequently, we shall frequently use a simpler architecture to represent them. Below, we give details on the computation of the auxiliary functions. \subsubsection{Computation of smooth distance functions} Loosely, for $* \in \{\mathcal{D},\mathcal{N}\}$, a smooth distance function to $\Gamma_*$ is a function $d_* \colon \Omega \to [0,\infty)$ that approximates the distance to $\Gamma_*$, cf. Definition \ref{def:distance-function}. To construct such functions, we first randomly choose $N_d$ points $\{x_{i}\}_{i=1}^{N_d} \subset \Omega$ (the same set of points can be used for both $* = \mathcal{D}$ and $* = \mathcal{N}$) and compute \[ d^{(*)}(x_i) \approx \mbox{dist} (x_i, \Gamma_*). \] This can be done by choosing points on $\Gamma_*$ and using efficient nearest-neighbor search strategies. Once we have computed the quantities $\{d^{(*)}(x_i)\}_{i=1}^{N_d}$, we train a neural network for $d_*$ by using the cost function \[ \mathcal{L}_{*}(d) = \frac{1}{N_d} \sum_{i=1}^{N_d} |d(x_i) - d^{(*)}(x_i)|^2 + \frac{1}{N_{d,*}} \sum_{i=1}^{N_{d,*}} |d(x_{*,i})|^2, \] where $\{x_{*,i}\}_{i=1}^{N_{d,*}}$ is a random batch of points on $\Gamma_*$.
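The nearest-neighbor step can be sketched with a $k$-d tree; the geometry below (a unit square with $\Gamma_* = \{x_1 = 0\}$) is an assumption for illustration only:

```python
import numpy as np
from scipy.spatial import cKDTree

# Distance targets d^(*)(x_i) ~ dist(x_i, Gamma_*) via nearest-neighbor
# search. Toy domain (an assumption): unit square, Gamma_* = {x_1 = 0},
# sampled densely along the boundary segment.
rng = np.random.default_rng(2)
boundary = np.stack([np.zeros(5000), rng.uniform(0.0, 1.0, 5000)], axis=1)
interior = rng.uniform(0.0, 1.0, size=(1000, 2))

tree = cKDTree(boundary)             # build once, query all interior points
d_targets, _ = tree.query(interior)  # approximate dist(x_i, Gamma_*)
```

For this flat boundary the exact distance is just the first coordinate, so the quality of the targets is easy to verify; the targets then feed the regression loss $\mathcal{L}_*$ above.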
In the setting of $d_\calD$ and $d_\calN$, we use neural networks with a single hidden layer and significantly fewer parameters than the networks employed in the PDE resolution. \subsubsection{Boundary data liftings and normal field} \label{sec:boundary-data} We approximate liftings of the boundary data to $\Omega$ by {\em smooth liftings} \cite{Berg18}: in both \eqref{eq:def-u} and \eqref{eq:def-vphi}, we require $G_\calD$ and $G_\calN$ to coincide with $g_\calD$ on $\Gamma_\calD$ and with $g_\calN$ on $\Gamma_\calN$, respectively, and to be smooth enough so that we can apply the differential operator to them pointwise. A natural way to enforce the former is to set as loss functions the $L^2$-norms of the discrepancies on the corresponding boundary subsets, namely \[ \mathcal{L}_\mathcal{D} (G) = \| G - g_\calD \|_{L^2(\Gamma_\calD)}^2 , \quad \mathcal{L}_\mathcal{N} (G) = \| G - g_\calN \|_{L^2(\Gamma_\calN)}^2. \] In practice, we consider sets of boundary nodes $\{ z^\mathcal{D}_i \}_{i=1}^{M_\mathcal{D}} \subset \Gamma_\calD,$ $\{ z^\mathcal{N}_i \}_{i=1}^{M_\mathcal{N}} \subset \Gamma_\calN$ and define the quadratic cost functionals \[ \mathcal{L}_\mathcal{D} (G) = \frac1{M_\mathcal{D}}\sum_{i=1}^{M_\mathcal{D}} | G(z^\mathcal{D}_i) - g_\calD(z^\mathcal{D}_i) |^2, \qquad \mathcal{L}_\mathcal{N} (G) = \frac1{M_\mathcal{N}}\sum_{i=1}^{M_\mathcal{N}} | G(z^\mathcal{N}_i) - g_\calN(z^\mathcal{N}_i) |^2. \] In the same fashion as for the smooth distance functions, we consider neural networks with a single hidden layer to compute the functions $G_\calD$ and $G_\calN$.
Analogously, for the computation of the vector field ${{\bm n}}$ we start from the loss function \[ \mathcal{L}_{{{\bm n}}} ({{\bm m}}) = \| {{\bm m}} - {{\bm \nu}} \|_{L^2(\Gamma_\calN)}^2 + \| |{{\bm m}}|^2 - 1 \|_{L^2(\Omega)}^2 , \] consider a set of randomly selected points $\{z^{\mathcal{N}, {{\bm n}}}_{i}\}_{i=1}^{M_{\mathcal{N}, {{\bm n}}}} \subset \Gamma_\calN$ and $\{z^{{{\bm n}}}_i\}_{i=1}^{M_{{{\bm n}}}} \subset \Omega$, and minimize the cost functional \[ \mathcal{L}_{{{\bm n}}} ({{\bm m}}) = \frac{1}{M_{\mathcal{N}, {{\bm n}}}} \sum_{i=1}^{M_{\mathcal{N}, {{\bm n}}}} | {{\bm m}}(z^{\mathcal{N}, {{\bm n}}}_{i}) - {{\bm \nu}}(z^{\mathcal{N}, {{\bm n}}}_{i}) |^2 + \frac1{M_{{{\bm n}}}}\sum_{i=1}^{M_{{{\bm n}}}} | |{{\bm m}}(z^{{{\bm n}}}_i)|^2 - 1 |^2. \] We point out that, in practice, the set of auxiliary points $\{z^{\mathcal{N}, {{\bm n}}}_{i}\}_{i=1}^{M_{\mathcal{N}, {{\bm n}}}}$ can be the same as the set $\{ z^\mathcal{N}_i \}_{i=1}^{M_\mathcal{N}}$ used in the approximation of $\mathcal{L}_\mathcal{N}$. \subsection{Computational aspects} Once we have built the auxiliary functions, we proceed to compute $u$ and ${{\bm \phi}}$. For this purpose, we consider a set of random points $\{x_k\}_{k=1}^N \subset \Omega$, and seek to minimize the cost functional \begin{equation}\label{eq:loss_discrete} \mathcal{L}_N (u, {{\bm \phi}}) := \frac{|\Omega|}{N} \sum_{k=1}^N \big| {{\bm \phi}}(x_k) - {{\bm A}} \nabla u(x_k) \big|^2 + \big|\Div{{\bm \phi}}(x_k) - B u(x_k) + f(x_k) \big|^2. \end{equation} From the construction of $u$ and ${{\bm \phi}}$ (see \eqref{eq:def-u} and \eqref{eq:def-vphi}), the trainable parameters $ {{\bm \Theta}} $ arise in the computation of the auxiliary functions $v$ and ${{\bm \psi}}$. In broad terms, the method we propose can be summarized as follows: \begin{itemize} \item {\bf Stage 1:} Train auxiliary functions $d_\calD$, $d_\calN$, $G_\calD$, $G_\calN$, and ${{\bm n}}$.
\item {\bf Stage 2:} Until some stop criterion is reached, do: \begin{itemize} \item Select random points $\{x_k\}_{k=1}^N \subset \Omega$. \item For some learning rate $\ell$, do: $${{\bm \Theta}} = {{\bm \Theta}} - \ell \nabla_{{{\bm \Theta}}} \mathcal{L}_N (u_{{\bm \Theta}}, {{\bm \phi}}_{{\bm \Theta}}).$$ \item Update learning rate. \end{itemize} \end{itemize} The computation of $\mathcal{L}_N (u, {{\bm \phi}})$ requires computing the derivatives of $u$ and ${{\bm \phi}}$ with respect to the input variables, evaluated at $\{x_k\}_{k=1}^N$. Since we constructed our auxiliary functions as neural networks, it is possible to compute these derivatives efficiently by means of the Back-Propagation algorithm. Packages like TensorFlow allow this kind of computation. Additionally, our least-squares loss function \eqref{eq:loss_discrete} only involves first-order derivatives in space. We discretize such derivatives by using finite-difference quotients. Namely, for any function ${{\bm \varphi}} \colon {\mathbb{R}^d} \to \mathbb{R}^n$ we let $h > 0$ be a fixed constant and consider the second-order (with respect to $h$) formula \[ \partial_i {{\bm \varphi}} (x_k) \simeq \frac{{{\bm \varphi}} (x_k + h{{\bm e}}_i) - {{\bm \varphi}} (x_k - h{{\bm e}}_i)}{2h}, \] where ${{\bm e}}_i \in \mathbb{R}^d$ is the $i$-th canonical basis vector in ${\mathbb{R}^d}$. We employ this formula for the approximation of $\nabla u$, $\Div {{\bm \phi}}$ and the first-order derivatives involved in $B$. For the numerical examples we implemented our algorithm by using PyTorch and discretizing the differential operators by means of finite differences. We typically use about 10,000 steps of gradient descent, sampling between 1,000 and 5,000 random points in $\Omega$ at each step. A step-type decrease in the learning rate showed good results in practice. In particular, we start from a learning rate $\ell = 10^{-2}$, which we halve every 1,000--2,500 gradient descent steps.
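To illustrate, the sketch below evaluates the discrete loss with the central-difference formula above on a manufactured example of our own (not from the text): ${{\bm A}} = I$, $B = 0$, $u = \sin(\pi x_1)\sin(\pi x_2)$ on the unit square, with ${{\bm \phi}} = \nabla u$ and $f = 2\pi^2 u$. For the exact pair, the loss vanishes up to the $O(h^2)$ finite-difference error:

```python
import numpy as np

# Manufactured example (toy, A = I, B = 0): u = sin(pi x) sin(pi y),
# phi = grad u, f = -div(phi) = 2 pi^2 u, on the unit square.
u = lambda p: np.sin(np.pi * p[:, 0]) * np.sin(np.pi * p[:, 1])
phi = lambda p: np.stack(
    [np.pi * np.cos(np.pi * p[:, 0]) * np.sin(np.pi * p[:, 1]),
     np.pi * np.sin(np.pi * p[:, 0]) * np.cos(np.pi * p[:, 1])], axis=1)
f = lambda p: 2.0 * np.pi**2 * u(p)

def central_diff(func, p, i, h=1e-3):
    # second-order central difference in the i-th coordinate
    e = np.zeros((1, 2)); e[0, i] = 1.0
    return (func(p + h * e) - func(p - h * e)) / (2.0 * h)

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(2000, 2))   # random collocation points

grad_u = np.stack([central_diff(u, pts, i) for i in range(2)], axis=1)
div_phi = sum(central_diff(lambda q, i=i: phi(q)[:, i], pts, i)
              for i in range(2))

# Discrete least-squares loss L_N (|Omega| = 1, B = 0, so the second
# residual is div(phi) + f); ~0 up to O(h^2) finite-difference error.
L_N = np.mean(np.sum((phi(pts) - grad_u) ** 2, axis=1)
              + (div_phi + f(pts)) ** 2)
```

This is only a check of the residual evaluation, not of the training loop; in the actual method the derivatives are those of the network pair $(u_{{\bm \Theta}}, {{\bm \phi}}_{{\bm \Theta}})$.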
No particular type of architecture was chosen for the functions involved. We use three-layer neural networks with the piecewise-linear activation function (ReLU) for the auxiliary functions, and five-layer networks for the main variables $v$ and ${{\bm \psi}}$. The ADAM \cite{ADAM} optimization algorithm showed good results in numerical experiments. Further details about the implementation of the method can be found in Section \ref{sec:numerical}. Regarding the training of auxiliary functions $d_\calD$ and $d_\calN$, the following procedure showed good results in practice: \begin{itemize} \item Select $N_d$ random points $\{x_k\}_{k=1}^{N_d} \subset \Omega$. \item Initialize a vector ${{\bm D}}$ as ${{\bm D}}_i = \infty$ for $i = 1,...,N_d$. \item Until some stop criterion is reached, do: \begin{itemize} \item Select $M_{*}$ random points $\{z^*_i\}_{i=1}^{M_*} \subset \Gamma_*$. \item Update ${{\bm D}}$ as: ${{\bm D}}_k =\min\{ \min_{i= 1, \ldots, M_{*}}{|x_k - z^*_i|} , {{\bm D}}_k\}$ for $k = 1, \ldots, N_d$. \item Define the loss function: $$\mathcal{L}_{*}(d_*) = \frac{1}{N_d} \sum_{k=1}^{N_d} |d_*(x_k) - {{\bm D}}_k|^2 + \frac{1}{M_*} \sum_{i=1}^{M_{*}} |d_*(z_i^*)|^2.$$ \item For some learning rate $\ell$, do: $${{\bm \Theta}}_{d_*} = {{\bm \Theta}}_{d_*} - \ell \nabla_{{{\bm \Theta}}_{d_*}} \mathcal{L}_* (d_*).$$ \item Update learning rate. \end{itemize} Here $* \in \{\mathcal{D},\mathcal{N}\}$, and ${{\bm \Theta}}_{d_*}$ denotes the trainable parameters of $d_*$. \end{itemize} \section{Analysis of the method} \label{sec:analysis} In this section, we prove the convergence of our method by using two main ingredients. First, we put the discretization in a $\Gamma$-convergence framework. More precisely, the sequence of functionals we consider is related to the use of meshfree methods in the computation of a regularized version of the discrete loss functional $\mathbb{R}^m \ni {{\bm \Theta}} \mapsto \mathcal{L} (u_{{\bm \Theta}}, {{\bm \phi}}_{{\bm \Theta}})$; see Theorem \ref{teo:gamma_conv} below.
Second, we exploit the coercivity of the least-squares functional and approximation properties of neural networks to conclude that the sequence of minimizers of the regularized discrete loss functionals converges to the solution of \eqref{eq:problem} as the number of neural network parameters $m \to \infty$. For the sake of simplicity, we consider problem \eqref{eq:FOS} with $g_\calD = g_\calN = 0$. Otherwise, one could consider $G_\calD$ and $G_\calN$ such that $G_\calD = g_\calD$ on $\Gamma_\calD$ and $G_\calN = g_\calN$ on $\Gamma_\calN$, a smooth normal field ${{\bm n}}$ such that ${{\bm n}} = {{\bm \nu}}$ on $\Gamma_\calN$, and then the auxiliary functions $u_0 = u - G_\calD$ and ${{\bm \phi}}_0 = {{\bm \phi}} - G_\calN {{\bm n}}$ would solve the first-order system \[ \left\lbrace \begin{array}{rll} {{\bm \phi}}_0 - {{\bm A}} \nabla u_0 = & {{\bm A}} \nabla G_\calD - G_\calN {{\bm n}} & \mbox{in } \Omega, \\ - \Div({{\bm \phi}}_0) + B u_0 = & f + \Div(G_\calN {{\bm n}}) - B G_\calD & \mbox{in } \Omega, \\ u_0 = & 0 & \mbox{on } \Gamma_\calD, \\ {{\bm \phi}}_0 \cdot {{\bm \nu}} = & 0 & \mbox{on } \Gamma_\calN. \\ \end{array} \right. \] Naturally, the solution to this system corresponds to the minimum of the least-squares functional \[ (u, {{\bm \phi}}) \mapsto \| {{\bm \phi}} - {{\bm A}} \nabla u + \widetilde{g} \|_{L^2(\Omega)}^2 + \| \Div({{\bm \phi}}) - B u + \widetilde{f} \|_{L^2(\Omega)}^2, \] with $\widetilde{g} = - {{\bm A}} \nabla G_\calD + G_\calN {{\bm n}} $ and $\widetilde{f} = f + \Div(G_\calN {{\bm n}}) - B G_\calD$. This functional can be dealt with by using the same tools as for \eqref{eq:LS-loss}, the only difference being the presence of the zero-order correction term $\widetilde{g}$ in the first $L^2$-norm. In the following proof of convergence, we restrict ourselves to one hidden layer neural networks with $n$ neurons. 
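For concreteness, a one-hidden-layer pair of the kind considered below can be sketched as follows (the sizes and the $\tanh$ activation are illustrative choices of ours); the total parameter count is $n(d+2) + n(2d+1) = 3n(d+1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 3                                   # neurons, spatial dimension

# v_Theta(x) = B_v sigma(A_v x + c_v), with sigma = tanh elementwise;
# psi_Theta(x) = B_p sigma(A_p x + c_p) is the vector-valued analogue.
A_v, c_v, B_v = (rng.normal(size=(n, d)), rng.normal(size=(n, 1)),
                 rng.normal(size=(1, n)))
A_p, c_p, B_p = (rng.normal(size=(n, d)), rng.normal(size=(n, 1)),
                 rng.normal(size=(d, n)))

def v(x):                                      # scalar network, x in R^{d x 1}
    return (B_v @ np.tanh(A_v @ x + c_v)).item()

def psi(x):                                    # vector network, output in R^d
    return (B_p @ np.tanh(A_p @ x + c_p)).ravel()

# Parameter count: n(d+2) for v plus n(2d+1) for psi, i.e. 3n(d+1).
m = sum(w.size for w in (A_v, c_v, B_v, A_p, c_p, B_p))

x0 = rng.normal(size=(d, 1))
out_v, out_psi = v(x0), psi(x0)
```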
We define the set of discrete functions \begin{equation*} \mathcal{C}_m := \Big\{ (v_{{\bm \Theta}},{{\bm \psi}}_{{\bm \Theta}}) : v_{{\bm \Theta}} = B_v \sigma( A_v x + c_v ), {{\bm \psi}}_{{\bm \Theta}} = B_{{{\bm \psi}}} \sigma( A_{{{\bm \psi}}} x + c_{{{\bm \psi}}} ) \Big\}, \end{equation*} with $A_v,A_{{{\bm \psi}}} \in \mathbb{R}^{n \times d}$, $c_v,c_{{{\bm \psi}}} \in \mathbb{R}^{n \times 1}$, $B_v \in \mathbb{R}^{1\times n}$, $B_{{{\bm \psi}}} \in \mathbb{R}^{d \times n }$, and $\sigma \colon \mathbb{R} \to \mathbb{R}$ a smooth, bounded, non-constant activation function applied elementwise. We collect all the parameters in ${{\bm \Theta}} \in \mathbb{R}^m$ with $m = 3n(d+1)$. We remark that, whenever we state that $m \to \infty$, we mean that the number of neurons $n$ is growing to infinity. Assuming that we are able to construct smooth auxiliary functions $d_\calD$, $d_\calN$ and ${{\bm n}}$ as in Section \ref{subsec:strong_b}, we define the set of discrete admissible functions \begin{equation} \label{eq:admissible-class} \mathcal{A}_m := \Big\{ {{\bm q}}_{{{\bm \Theta}}}= (u_{{{\bm \Theta}}},{{\bm \phi}}_{{{\bm \Theta}}}) : u_{{{\bm \Theta}}} = d_\calD v_{{{\bm \Theta}}} \mbox{ and } {{\bm \phi}}_{{{\bm \Theta}}} = {{\bm \psi}}_{{{\bm \Theta}}} - \Big( \frac{ {{\bm \psi}}_{{{\bm \Theta}}} \cdot {{\bm n}} }{1 + d_\calN} \Big) {{\bm n}}, \mbox{ }(v_{{{\bm \Theta}}},{{\bm \psi}}_{{{\bm \Theta}}})\in \mathcal{C}_m \Big\}. \end{equation} We remark that the fulfillment of the boundary conditions is guaranteed within the set $\mathcal{A}_m$, in the sense that $u_{{\bm \Theta}} = 0$ if $d_\calD = 0$ and ${{\bm \phi}}_{{\bm \Theta}} \cdot {{\bm n}} = 0$ if $d_\calN =0$. \subsection{Approximation properties of neural networks} Let ${{\bm q}}_0 = (u_0,{{\bm \phi}}_0) \in \mathcal{A}$ be the unique minimizer of \eqref{eq:LS-loss}. We shall make the assumption that ${{\bm q}}_0$ can be approximated by the neural network spaces.
Namely, let us assume that \begin{equation} \label{eq:hypothesis} d({{\bm q}}_0,\mathcal{A}_m) := \inf_{{{\bm q}}_{{\bm \Theta}} \in \mathcal{A}_m} \|{{\bm q}}_0-{{\bm q}}_{{\bm \Theta}}\|_{H^1(\Omega) \times H(\Div; \Omega)} \to 0 \quad \mbox{as } m\to\infty. \end{equation} We briefly comment on this hypothesis. First, there are several by now classical results \cite{Cybenko89, hornik1991, Barron93} regarding the approximation properties of neural networks, although without the incorporation of boundary conditions. We additionally point to \cite{Yarotsky17,He_etal20} for recent results regarding approximation capabilities of ReLU neural networks, including approximation rates. Therefore, if we use a nonconstant activation function $\sigma$, then we expect $d({{\bm q}},\mathcal{C}_m) \to 0$ when $m\to\infty$ for any ${{\bm q}} \in H^1(\Omega) \times H(\Div; \Omega)$. Condition \eqref{eq:hypothesis} further assumes that the solution ${{\bm q}}_0$ can be approximated through the admissible classes $\mathcal{A}_m$ that incorporate boundary conditions. This hypothesis holds, for example, if one assumes certain regularity of solutions to \eqref{eq:FOS}. For instance, if $u_0 \in C^1(\overline\Omega)$, then it satisfies (recall $g_\calD = 0$) \[ \left| \lim_{t \to 0^+} \frac{u_0(z-t{{\bm \nu}})}{t} \right| = \left| \frac{\pp u_0 }{\pp {{\bm \nu}}} (z) \right| < \infty, \quad z \in \Gamma_\calD. \] If we write $x = z - t {{\bm \nu}}$, then $t \approx \mbox{dist}(x, \Gamma_\calD) \approx d_\calD(x)$ and the finiteness of the limit above essentially means that $u_0/d_\calD$ is a bounded function. Additionally, if we can construct auxiliary functions $d_\calD$, $d_\calN$, and ${{\bm n}}$ in such a way that \begin{equation}\label{eq:hipotesis} \frac{u_0}{d_\calD} \in H^1(\Omega), \mbox{ and } \frac{({{\bm \phi}}_0\cdot{{\bm n}}){{\bm n}}}{d_\calN} \in H(\Div; \Omega).
\end{equation} then there exists a sequence $\{(v_m,{{\bm \psi}}_m)\}_{m\in\mathbb{N}}$ with $(v_m,{{\bm \psi}}_m) \in \mathcal{C}_m$ for all $m$, such that \[ \left\|v_m - \frac{u_0}{d_\calD} \right\|_{H^1(\Omega)}\to 0 \quad \mbox{and} \quad \left\|{{\bm \psi}}_m - \sum^{d-1}_{i=1}({{\bm \phi}}_0\cdot{{\bm t}}_i){{\bm t}}_i - \frac{1+d_\calN}{d_\calN}({{\bm \phi}}_0\cdot{{\bm n}}){{\bm n}}\right\|_{H(\Div; \Omega)}\to 0 \] as $m\to\infty$, where $\{{{\bm t}}_i\}_{i=1}^{d-1}$ denotes an orthonormal tangential frame complementing ${{\bm n}}$ at each point. Defining the sequence $\{(u_m,{{\bm \phi}}_m)\}_{m\in\mathbb{N}}$ as $u_m = d_\calD v_m$ and ${{\bm \phi}}_m = {{\bm \psi}}_m - \Big( \frac{ {{\bm \psi}}_m \cdot {{\bm n}} }{1 + d_\calN} \Big) {{\bm n}}$, we would have $(u_m,{{\bm \phi}}_m) \in \mathcal{A}_m$ for all $m$, and $(u_m,{{\bm \phi}}_m) \to (u_0,{{\bm \phi}}_0)$ in $\| \cdot \|_{H^1(\Omega)\times H(\Div;\Omega)}$ and therefore \eqref{eq:hypothesis} would hold. Clearly, \eqref{eq:hipotesis} is a regularity assumption on the solution of \eqref{eq:FOS}, and in turn it translates into its approximability by neural networks. \subsection{$\Gamma$-convergence} We aim to prove the convergence of the neural network approximations computed by our method towards minimizers of the least-squares functional $\mathcal{L}$ in \eqref{eq:LS-loss}. For this purpose, we shall make use of $\Gamma$-convergence theory, that provides a framework for the convergence of functionals. In particular, if one has proven the $\Gamma$-convergence of a sequence of functionals and has a converging sequence of minimizers, then one can guarantee the existence of solutions to the limit problem, as well as the convergence of either minimum values and minimizers. We next briefly review the definition and some basic results pertaining to $\Gamma$-convergence and refer to \cite{braides2006} for further details.
\begin{definition}[sequential $\Gamma$-convergence] \label{def:gamma-convergence} Let $X$ be a metric space and let $F_{n}$, $F: X \to \overline{\mathbb{R}}$, where $\overline{\mathbb{R}}:= [-\infty,+\infty]$. We say that $F_{n}$ $\Gamma$-converges to $F$ (and write $F_n \xrightarrow[]{\Gamma} F$) if, for every $x \in X$ we have \begin{itemize} \item \emph{(lim-inf inequality)} for every sequence $\{x_n\}_{n \in \mathbb{N}} \subset X$ converging to $x$, \[ F(x) \le \liminf_{n\to\infty} F_n(x_n) ; \] \item \emph{(lim-sup inequality)} there exists a sequence $\{x_n\}_{n \in \mathbb{N}}$ converging to $x$ such that \[ F(x) \ge \limsup_{n\to\infty} F_n(x_n) .\] \end{itemize} \end{definition} \begin{definition}[equi-coercivity]\label{def:equicoercividad} Let $\{F_n\}_{n \in \mathbb{N}}$ be a sequence of functions $F_n: X \to \overline{\mathbb{R}}$. We say that $\{F_n\}$ is equi-coercive if for all $t \in \mathbb{R}$ there exists a compact set $K_t \subset X$ such that $\{F_n \le t\} \subset K_t$. \end{definition} \begin{theorem}[fundamental theorem of $\Gamma$-convergence]\label{teo:fund_teo_gamma_conv} Let $(X, d)$ be a metric space, $\{F_n\}_{n\in\mathbb{N}}$ be an equi-coercive sequence of functions on $X$, and $F$ be such that $F_n \xrightarrow[]{\Gamma} F$. Then, $$\exists \min_{X} F = \lim_{n \to\infty} \inf_{X} F_n.$$ Moreover, if $\{x_n\}_{n\in\mathbb{N}}$ is a precompact sequence in $X$ such that $\lim_{n\to\infty} F_n(x_n) = \lim_{n\to\infty} \inf_{X} F_n$, then every limit of a subsequence of $\{x_n\}$ is a minimum point for $F$. \end{theorem} \subsection{Convergence of the method} We split the proof of convergence of our method into several steps. We start by proving the following auxiliary lemma, that shows the continuity of the neural network functions with respect to the parameters. 
\begin{lemma}[continuity with respect to neural network parameters] \label{lem:continuidad_tita} The map \[{{\bm \Theta}} \mapsto {{\bm q}}_{{\bm \Theta}} = (u_{{{\bm \Theta}}},{{\bm \phi}}_{ {{\bm \Theta}}}) \in (\mathcal{A}_m,\| \cdot \|_{H^1(\Omega) \times H(\Div; \Omega)})\] is continuous. Moreover, defining the functions $G_1, G_2: \mathbb{R}^m \times \Omega \to \mathbb{R}$, \begin{equation}\label{def:func_aux} G_1({{\bm \Theta}},x):=| {{\bm \phi}}_{{{\bm \Theta}}}(x) - {{\bm A}} \nabla u_{{{\bm \Theta}}}(x) |^2, \quad G_2({{\bm \Theta}},x):=|\Div{{\bm \phi}}_{{{\bm \Theta}}}(x) - B u_{{{\bm \Theta}}}(x) + f(x) |^2, \end{equation} for any $R>0$ we have $G_1 \in L^{\infty}(B(0,R) \times \Omega)$ and, assuming $f \in L^{\infty}(\Omega)$, $G_2 \in L^{\infty}(B(0,R) \times \Omega)$. \end{lemma} \begin{proof} Let us first focus on a generic neural network $v_{{{\bm \Theta}}}: \mathbb{R}^d \to \mathbb{R}$ with one hidden layer, \[v_{{{\bm \Theta}}}(x) = B \sigma( A x + c ).\] Above, we assume $\sigma$ is a Lipschitz continuous activation function, and the parameters $B \in \mathbb{R}^{1\times n}$, $A \in \mathbb{R}^{n\times d}$ and $c \in \mathbb{R}^{n \times 1}$ are collected in ${{\bm \Theta}} \in \mathbb{R}^m$, $m = n (d+2)$. Using the fact that $v_{{{\bm \Theta}}}$ and its derivatives depend continuously on the parameters, one can easily verify that the map $\mathbb{R}^m \to W^{1,\infty}(\Omega)$ such that ${{\bm \Theta}} \mapsto v_{{{\bm \Theta}}}$ is continuous. Moreover, the function $G: \mathbb{R}^m \times \Omega \to \mathbb{R}$, defined as $G({{\bm \Theta}},x) := v_{{{\bm \Theta}}}(x)$, is locally Lipschitz continuous; therefore it is bounded on $B(0,R) \times \Omega$ and its (weak) derivatives are essentially bounded on the same set as well.
For arbitrary neural network functions $(u_{{{\bm \Theta}}},{{\bm \phi}}_{ {{\bm \Theta}}})$ in the space $\mathcal{A}_m$, defined by \eqref{eq:admissible-class}, we exploit the idea above together with the fact that the auxiliary functions $d_\calD, d_\calN$ and ${{\bm n}}$ are smooth to conclude the desired result. \end{proof} The following lemma guarantees that, for the loss functional $\mathcal{L}$ defined in \eqref{eq:LS-loss}, minimizers over $\mathcal{A}_m$ converge towards the minimizer ${{\bm q}}_0 \in \mathcal{A}$ as $m \to \infty$. \begin{lemma}[approximation properties of $\mathcal{A}_m$]\label{lemma:aprox-inf} For every $m \in \mathbb{N}$ there exists ${{\bm q}}_m \in \mathcal{A}_m$ such that $\mathcal{L}({{\bm q}}_m) \leq \mathcal{L}({{\bm q}}_{{\bm \Theta}})$ for all ${{\bm q}}_{{\bm \Theta}}\in\mathcal{A}_m$. Moreover, if ${{\bm q}}_0$ is the unique minimizer of $\mathcal{L}$ in $\mathcal{A}$, defining the sequence $\{{{\bm q}}_m\}_{m\in\mathbb{N}}$ with ${{\bm q}}_m \in \mathcal{A}_m$ being a minimizer of $\mathcal{L}$ in $\mathcal{A}_m$, we have $\|{{\bm q}}_m - {{\bm q}}_0\|_{H^1(\Omega) \times H(\Div; \Omega)} \to 0$ as $m \to \infty$. \end{lemma} \begin{proof} From \cite{Cai94}, we know that $\mathcal{L}$ is elliptic with respect to the $H^1(\Omega) \times H(\Div; \Omega)$ norm. Namely, there exist positive constants $\alpha$ and $\beta$ such that \begin{equation}\label{eq:funcional_eliptico} \alpha\|(u,{{\bm \phi}})\|^2_{H^1(\Omega) \times H(\Div; \Omega)} \le \| {{\bm \phi}} - {{\bm A}} \nabla u \|_0^2 + \| \Div({{\bm \phi}}) - B u \|_0^2 \le \beta\|(u,{{\bm \phi}})\|^2_{H^1(\Omega) \times H(\Div; \Omega)}, \end{equation} for all $(u,{{\bm \phi}}) \in H^1(\Omega) \times H(\Div; \Omega)$. Let $\{{{\bm q}}_{{{\bm \Theta}}_i}\}_{i \in \mathbb{N}} \subset \mathcal{A}_m$ be a minimizing sequence in $\mathcal{A}_m$, namely, $\mathcal{L}({{\bm q}}_{{{\bm \Theta}}_i}) \to \inf_{{{\bm q}} \in \mathcal{A}_m} \mathcal{L}({{\bm q}})$.
It follows from \eqref{eq:funcional_eliptico} that $\{{{\bm q}}_{{{\bm \Theta}}_i}\}_{i \in \mathbb{N}}$ is bounded in $H^1(\Omega) \times H(\Div; \Omega)$. Lemma \ref{lem:continuidad_tita} implies that the map ${{\bm \Theta}} \mapsto (u_{{{\bm \Theta}}},{{\bm \phi}}_{ {{\bm \Theta}}}) \in (\mathcal{A}_m,\| \cdot \|_{H^1(\Omega) \times H(\Div; \Omega)})$ is continuous and, because $\mathcal{A}_m$ is parametrized by the finite-dimensional space $\mathbb{R}^m$, we can extract a subsequence $\{{{\bm q}}_{{{\bm \Theta}}_j}\}_{j \in \mathbb{N}} \subset \{{{\bm q}}_{{{\bm \Theta}}_i}\}_{i \in \mathbb{N}}$ in such a way that ${{\bm q}}_{{{\bm \Theta}}_j} \to {{\bm q}}_m \in \mathcal{A}_m$, with ${{\bm q}}_m \in \arg \min_{{{\bm q}} \in \mathcal{A}_m} \mathcal{L}({{\bm q}})$. Next, let $\varepsilon > 0$. For every $m \in \mathbb{N}$, we let ${{\bm q}}_m = (u_m, {{\bm \phi}}_m) \in \mathcal{A}_m$ be a minimizer of $\mathcal{L}$ in $\mathcal{A}_m$ and ${{\bm q}}^*_m = (u^*_m, {{\bm \phi}}^*_m) \in \mathcal{A}_m$ be such that $d({{\bm q}}_0, \mathcal{A}_m) \ge \|{{\bm q}}^*_m-{{\bm q}}_0\|_{H^1(\Omega) \times H(\Div; \Omega)} - \varepsilon$. Then, using that the solution ${{\bm q}}_0 = (u_0,{{\bm \phi}}_0)$ of \eqref{eq:FOS} satisfies the conditions ${{\bm \phi}}_0 = {{\bm A}} \nabla u_0$ and $-\Div({{\bm \phi}}_0) + B u_0 = f$ a.e. in $\Omega$ and exploiting the upper bound in \eqref{eq:funcional_eliptico}, we deduce \[ \begin{aligned} 0 & \le \mathcal{L}({{\bm q}}_m) \le \mathcal{L}({{\bm q}}^*_m) = \| {{\bm \phi}}^*_m - {{\bm A}} \nabla u^*_m \|_0^2 + \| \Div({{\bm \phi}}^*_m) - B u^*_m +f\|_0^2 \\ & = \| {{\bm \phi}}^*_m - {{\bm \phi}}_0 - {{\bm A}} \nabla (u^*_m - u_0) \|_0^2 + \| \Div({{\bm \phi}}^*_m - {{\bm \phi}}_0) - B (u^*_m - u_0) \|_0^2 \\ & \le \beta \|{{\bm q}}^*_m-{{\bm q}}_0\|^2_{H^1(\Omega) \times H(\Div; \Omega)} \le \beta ( d({{\bm q}}_0, \mathcal{A}_m) + \varepsilon)^2.
\end{aligned} \] This, together with \eqref{eq:hypothesis}, shows that $\mathcal{L}({{\bm q}}_m) \to 0 = \mathcal{L}({{\bm q}}_0)$ as $m \to \infty$. Finally, by combining the lower bound in \eqref{eq:funcional_eliptico} with the fact that ${{\bm q}}_0$ satisfies \eqref{eq:FOS} a.e. in $\Omega$, we reach the estimate \[ \begin{aligned} \|{{\bm q}}_m - {{\bm q}}_0\|^2_{H^1(\Omega) \times H(\Div; \Omega)} & \le \frac1\alpha \Big( \| {{\bm \phi}}_m - {{\bm \phi}}_0 - {{\bm A}} \nabla (u_m - u_0) \|_0^2 + \| \Div({{\bm \phi}}_m - {{\bm \phi}}_0) - B (u_m - u_0) \|_0^2 \Big) \\ & = \frac1\alpha \Big( \| {{\bm \phi}}_m - {{\bm A}} \nabla u_m \|_0^2 + \| \Div({{\bm \phi}}_m) - B u_m +f\|_0^2 \Big) = \frac1\alpha \mathcal{L}({{\bm q}}_m) \to 0. \end{aligned} \] This concludes the proof. \end{proof} The result above assumes that, given ${{\bm \Theta}} \in \mathbb{R}^m$, one can compute $\mathcal{L}(u_{{{\bm \Theta}}},{{\bm \phi}}_{ {{\bm \Theta}}})$ exactly. This is not the case in general, because we resort to Monte Carlo integration for the computation of the $L^2$ norms in \eqref{eq:LS-loss}; cf. the discrete loss functional \eqref{eq:loss_discrete}. To deal with this issue, we consider regularized versions of the loss functionals $\mathcal{L}$ and $\mathcal{L}_N$, now defined on $\mathbb{R}^m$ and taking values in $\overline\mathbb{R}$. Given $R>0$, we define the regularized functional $L: \mathbb{R}^m \to \overline\mathbb{R}$ as \begin{equation}\label{eq:reg_func} L({{\bm \Theta}}) := \left\lbrace \begin{array}{ll} \mathcal{L}( u_{{{\bm \Theta}}}, {{\bm \phi}}_{{{\bm \Theta}}} ) & \mbox{if } |{{\bm \Theta}}| \le R, \\ +\infty & \mbox{otherwise. } \\ \end{array} \right. \end{equation} Next, we let $\{X_i\}_{i \in \mathbb{N}}$ be an i.i.d. sequence of random variables on a probability space $(\Lambda,\Sigma,P)$, with $X_i:\Lambda \to \Omega$ for all $i \in \mathbb{N}$, uniformly distributed on $\Omega$.
Given $\lambda \in \Lambda$, $R>0$, and $N \in \mathbb{N}$ we set $V_N(\lambda) := \cup_{i \le N} \{X_i(\lambda)\} $, and define the regularized discrete functional $L_{\lambda,N}: \mathbb{R}^m \to \overline\mathbb{R}$ as \begin{equation}\label{eq:reg_func_dis} L_{\lambda,N}({{\bm \Theta}}) := \left\lbrace \begin{aligned} & \frac{|\Omega|}{N}\sum_{ x \in V_N(\lambda) } \big( G_1({{\bm \Theta}},x) + G_2({{\bm \Theta}},x) \big) & \mbox{ if } |{{\bm \Theta}}| \le R, \\ & +\infty & \mbox{otherwise, } \\ \end{aligned} \right. \end{equation} with $G_1$ and $G_2$ as in \eqref{def:func_aux}. With these definitions, we can prove the pointwise $P$-almost sure convergence of the sequence $\{L_{\lambda,N}\}_{N \in \mathbb{N}}$ towards $L$. \begin{lemma}[almost sure convergence of regularized discrete loss functions]\label{lemma:conv_puntual} Let $R>0$, let $L$ be as in \eqref{eq:reg_func}, let $\{X_i\}_{i \in \mathbb{N}}$ be an i.i.d. family of random variables on the probability space $(\Lambda,\Sigma,P)$, uniformly distributed on $\Omega$, and let $L_{\lambda,N}$ be as in \eqref{eq:reg_func_dis}. Then $L_{\lambda,N}({{\bm \Theta}}) \to L({{\bm \Theta}})$ as $N \to \infty$ $P$-almost surely, for all ${{\bm \Theta}} \in \mathbb{R}^m$. \end{lemma} \begin{proof} Since we are using the same parameter $R$ in the definitions of $L$ and $L_{\lambda,N}$, if $|{{\bm \Theta}}| > R$ we have $L({{\bm \Theta}}) = L_{\lambda,N}({{\bm \Theta}}) = +\infty$ and there is nothing to be proven. We therefore assume $|{{\bm \Theta}}| \le R$.
Recalling $V_N(\lambda) = \cup_{i \le N} \{X_i(\lambda)\} $ with $\lambda \in \Lambda$ and the definition \eqref{def:func_aux} of $G_1$ and $G_2$, an application of the strong law of large numbers yields \[ \frac{|\Omega|}{N}\sum_{ x \in V_N(\lambda) } | {{\bm \phi}}(x) - {{\bm A}} \nabla u(x) |^2 \xrightarrow[N \to \infty]{a.s.} \int_{\Omega} | {{\bm \phi}} - {{\bm A}} \nabla u |^2, \] and \[ \frac{|\Omega|}{N}\sum_{ x \in V_N(\lambda) } | \Div{{\bm \phi}}(x) - B u(x) + f(x) |^2 \xrightarrow[N \to \infty]{a.s.} \int_{\Omega} | \Div{{\bm \phi}} - B u + f |^2 \] for all $(u,{{\bm \phi}}) \in \mathcal{A}_m$. It follows immediately that $L_{\lambda,N}({{\bm \Theta}}) \to L({{\bm \Theta}})$ $P$-almost surely as $N \to \infty$. \end{proof} We are now in a position to prove the almost sure $\Gamma$-convergence of $L_{\lambda,N}$ to $L$ as the number of quadrature points $N \to \infty$. \begin{theorem}[almost sure $\Gamma$-convergence]\label{teo:gamma_conv} Let $R>0$, let $L$ be as in \eqref{eq:reg_func}, let $\{X_i\}_{i \in \mathbb{N}}$ be an i.i.d. family of random variables on the probability space $(\Lambda,\Sigma,P)$, uniformly distributed on $\Omega$, and let $L_{\lambda,N}$ be as in \eqref{eq:reg_func_dis}. Then, assuming $f \in L^{\infty}(\Omega)$, it holds that $L_{\lambda,N} \xrightarrow[]{\Gamma} L$ as $N \to \infty$ $P$-almost surely. \end{theorem} \begin{proof} We first observe that the lim-sup inequality is a trivial corollary of Lemma \ref{lemma:conv_puntual}. Indeed, it suffices to consider the recovery sequence $\{{{\bm \Theta}}_N\}_{N \in \mathbb{N}} \subset \mathbb{R}^m$, ${{\bm \Theta}}_N \equiv {{\bm \Theta}}$, and by Lemma \ref{lemma:conv_puntual} we have $L_{\lambda,N}({{\bm \Theta}}_N) \to L({{\bm \Theta}})$ as $N \to \infty$ $P$-almost surely. We next prove the lim-inf inequality. Given ${{\bm \Theta}} \in \mathbb{R}^m$, let $\{{{\bm \Theta}}_N\}_{N \in \mathbb{N}} \subset \mathbb{R}^m$ be a sequence of parameters such that ${{\bm \Theta}}_{N} \to {{\bm \Theta}}$.
We aim to prove that \begin{equation}\label{eq:liminf} L({{\bm \Theta}}) \le \liminf_{N \to \infty} L_{\lambda,N}({{\bm \Theta}}_N). \end{equation} We observe that, if $|{{\bm \Theta}}|>R$, then there exists $N_0 \in \mathbb{N}$ such that $L({{\bm \Theta}})=L_{\lambda,N}({{\bm \Theta}}_N) = +\infty$ for all $N>N_0$, and \eqref{eq:liminf} trivially holds. Therefore, without loss of generality we assume $\{{{\bm \Theta}}_N\}_{N \in \mathbb{N}} \subset \overline{B(0,R)}$. In that case, we extract a subsequence in such a way that $L_{\lambda,N}({{\bm \Theta}}_N) \to \liminf_{N \to \infty} L_{\lambda,N}({{\bm \Theta}}_N)$ and, for the sake of simplicity, we omit the relabeling. By Lemma \ref{lem:continuidad_tita}, the map ${{\bm \Theta}} \mapsto (u_{{{\bm \Theta}}},{{\bm \phi}}_{ {{\bm \Theta}}}) \in (\mathcal{A}_m,\| \cdot \|_{H^1(\Omega) \times H(\Div; \Omega)})$ is continuous and therefore $(u_{{{\bm \Theta}}_N}, {{\bm \phi}}_{{{\bm \Theta}}_N}) \to (u_{{{\bm \Theta}}}, {{\bm \phi}}_{{{\bm \Theta}}})$ in the $H^1(\Omega) \times H(\Div; \Omega)$ norm. Because $\Omega$ is bounded, this implies \[ \begin{aligned} & \|u_{{{\bm \Theta}}_N}-u_{{{\bm \Theta}}}\|_{L^1(\Omega)} \to 0, & \quad \|\nabla u_{{{\bm \Theta}}_N}-\nabla u_{{{\bm \Theta}}}\|_{L^1(\Omega)} \to 0,\\ & \|{{\bm \phi}}_{{{\bm \Theta}}_N}-{{\bm \phi}}_{{{\bm \Theta}}}\|_{L^1(\Omega)} \to 0, \quad \mbox{and} & \ \|\Div {{\bm \phi}}_{{{\bm \Theta}}_N}-\Div {{\bm \phi}}_{{{\bm \Theta}}}\|_{L^1(\Omega)} \to 0. \end{aligned}\] Then, defining $G_1$ and $G_2$ as in \eqref{def:func_aux}, we extract another subsequence in such a way that $G_1({{\bm \Theta}}_N,x) + G_2({{\bm \Theta}}_N,x) \to G_1({{\bm \Theta}},x) + G_2({{\bm \Theta}},x)$ almost everywhere in $\Omega$, and, as before, we omit the relabeling. In order to prove \eqref{eq:liminf}, we are going to show that the latter subsequence satisfies $L_{\lambda,N}({{\bm \Theta}}_N) \to L({{\bm \Theta}})$ as $N \to \infty$ $P$-almost surely.
Let $\varepsilon>0$ be an arbitrary number, and \[ M := \|G_1\|_{L^{\infty}(B(0,R) \times \Omega)} + \|G_2\|_{L^{\infty}(B(0,R) \times \Omega)}; \] we have $M <\infty$ by Lemma \ref{lem:continuidad_tita}. Using the triangle inequality, we split \begin{equation} \label{eq:split-L} |L_{\lambda,N}({{\bm \Theta}}_N) - L({{\bm \Theta}})| \le |L_{\lambda,N}({{\bm \Theta}}_N) - L_{\lambda,N}({{\bm \Theta}})| + |L_{\lambda,N}({{\bm \Theta}}) - L({{\bm \Theta}})|. \end{equation} From Lemma \ref{lemma:conv_puntual}, it follows that $|L_{\lambda,N}({{\bm \Theta}}) - L({{\bm \Theta}})| \to 0$ $P$-almost surely. Thus, there exists $N_0=N_0(\lambda)$ such that $|L_{\lambda,N}({{\bm \Theta}}) - L({{\bm \Theta}})|\le \varepsilon/4$ for all $N>N_0$. In order to bound the first term on the right-hand side of \eqref{eq:split-L}, we apply Egorov's Theorem to construct a set $\mathcal{K} \subset \Omega$ such that $|\mathcal{K}|<\frac{\varepsilon}{16M}$ and $G_1({{\bm \Theta}}_N, \cdot) + G_2({{\bm \Theta}}_N,\cdot) \to G_1({{\bm \Theta}},\cdot) + G_2({{\bm \Theta}},\cdot)$ uniformly in $\Omega \setminus \mathcal{K}$. We bound \[ |L_{\lambda,N}({{\bm \Theta}}_N) - L_{\lambda,N}({{\bm \Theta}})| \le A_1 + A_2, \] where \[ \begin{aligned} A_1 & = \frac{|\Omega|}{N}\sum_{ x \in V_N(\lambda) \cap (\Omega\setminus \mathcal{K}) } \big| G_1({{\bm \Theta}}_N,x) + G_2({{\bm \Theta}}_N,x) - G_1({{\bm \Theta}},x) - G_2({{\bm \Theta}},x) \big|, \\ A_2 & = \frac{|\Omega|}{N}\sum_{ x \in V_N(\lambda) \cap \mathcal{K} } \big| G_1({{\bm \Theta}}_N,x) + G_2({{\bm \Theta}}_N,x) - G_1({{\bm \Theta}},x) - G_2({{\bm \Theta}},x) \big|. \end{aligned} \] Using the uniform convergence in $\Omega \setminus \mathcal{K}$, there exists $N_1 \in \mathbb{N}$ such that, if $N > N_1$, then $\big| G_1({{\bm \Theta}}_N,x) + G_2({{\bm \Theta}}_N,x) - G_1({{\bm \Theta}},x) - G_2({{\bm \Theta}},x) \big| < \frac{\varepsilon}{4|\Omega|}$ for all $x \in \Omega \setminus \mathcal{K}$.
Then, it follows that $A_1 < \varepsilon/2$ if $N>N_1$. On the other hand, we use the uniform boundedness of $G_1$ and $G_2$ to derive \[ A_2 \le 2M\frac{|\Omega|}{N}\sum_{ x \in V_N(\lambda) } \chi_\mathcal{K}(x). \] By the strong law of large numbers, we have \[ \frac{|\Omega|}{N}\sum_{ x \in V_N(\lambda) } \chi_\mathcal{K}(x) \xrightarrow[N \to \infty]{a.s.} \int_{\Omega} \chi_\mathcal{K}(x) = |\mathcal{K}| < \frac{\varepsilon}{16M}. \] Therefore, $P$-almost surely there exists $N_2 = N_2(\lambda)$ such that, if $N>N_2$ then \[\Big| \frac{|\Omega|}{N}\sum_{ x \in V_N(\lambda) } \chi_\mathcal{K}(x) - \int_{\Omega} \chi_\mathcal{K}(x) \Big| < \frac{\varepsilon}{16M}, \] which implies that $\frac{|\Omega|}{N}\sum_{ x \in V_N(\lambda) } \chi_\mathcal{K}(x) \le \frac{\varepsilon}{8M}$. Consequently, we have $A_2 < \frac{\varepsilon}{4}.$ Collecting the estimates above, it follows that $P$-almost surely we can choose $N' = N'(\lambda) = \max \{N_0,N_1,N_2\}$ such that $$|L_{\lambda,N}({{\bm \Theta}}_N) - L({{\bm \Theta}})| \le |L_{\lambda,N}({{\bm \Theta}}_N) - L_{\lambda,N}({{\bm \Theta}})| + |L_{\lambda,N}({{\bm \Theta}}) - L({{\bm \Theta}})| \leq \varepsilon,$$ for all $N > N'$. This shows that \eqref{eq:liminf} holds, and concludes the proof. \end{proof} The following theorem is the main result of this section and it roughly states that, if we have a reasonable procedure for the minimization of $L_{\lambda,N}$ over $\mathbb{R}^m$, then we can expect convergence to the solution ${{\bm q}}_0$. \begin{theorem}[convergence] \label{teo:conv} Suppose that for any fixed $m \in \mathbb{N}$ and $R>0$ we can construct a sequence $\{{{\bm \Theta}}_N\}_{N \in \mathbb{N}} \subset B(0,R) \subset \mathbb{R}^m$ such that $\lim_{N \to \infty} L_{\lambda,N}({{\bm \Theta}}_N) = \lim_{N \to \infty} \inf_{{{\bm \Theta}} \in \mathbb{R}^m} L_{\lambda,N}({{\bm \Theta}})$, with $L_{\lambda,N}$ defined as in \eqref{eq:reg_func_dis}.
Let $(u_0,{{\bm \phi}}_0) = {{\bm q}}_0 = \arg \min_{{{\bm q}} \in \mathcal{A}} \mathcal{L}({{\bm q}})$. Given $\varepsilon>0$, there $P$-almost surely exist $m_0=m_0(\varepsilon) \in \mathbb{N}$, $R=R(m_0)>0$ and $N_0 = N_0(m_0) \in \mathbb{N}$ such that, if one constructs a sequence $\{{{\bm \Theta}}_N\}_{N \in \mathbb{N}}$ as above, then \[\|(u_0,{{\bm \phi}}_0) - (u_{{{\bm \Theta}}_N} , {{\bm \phi}}_{{{\bm \Theta}}_N})\|_{H^1(\Omega) \times H(\Div; \Omega)} \le \varepsilon \quad \mbox{for all } N>N_0, \] where $(u_{{{\bm \Theta}}_N} , {{\bm \phi}}_{{{\bm \Theta}}_N})$ is the neural network function defined by the parameters ${{\bm \Theta}}_N$. \end{theorem} \begin{proof} Let $\varepsilon > 0$. From Lemma \ref{lemma:aprox-inf}, we know that there exists $m_0$ such that \begin{equation}\label{eq:teo_conv_1} \| {{\bm q}}_0 - {{\bm q}}_{m_0}\|_{H^1(\Omega) \times H(\Div; \Omega)} < \varepsilon/2, \end{equation} where ${{\bm q}}_{m_0} \in \arg \min_{{{\bm q}} \in \mathcal{A}_{m_0}} \mathcal{L}({{\bm q}})$. We next fix $R_0>0$ large enough in such a way that there exists ${{\bm \Theta}} \in B(0,R_0)$ with $(u_{{{\bm \Theta}}},{{\bm \phi}}_{ {{\bm \Theta}}}) \in \arg \min_{{{\bm q}} \in \mathcal{A}_{m_0}} \mathcal{L}({{\bm q}})$. For this choice of $m_0$ and $R_0$, from Theorem \ref{teo:gamma_conv} we have $L_{\lambda,N} \xrightarrow[]{\Gamma} L$ $P$-almost surely. From the definition \eqref{eq:reg_func_dis} of $L_{\lambda,N}$, it follows immediately that $\{L_{\lambda , N}\}_{N \in \mathbb{N}}$ is an equi-coercive sequence, according to Definition \ref{def:equicoercividad}. Therefore, we deduce that $P$-almost surely there exists $N_0 > 0$ such that \begin{equation}\label{eq:teo_conv_2} \| (u_{{{\bm \Theta}}_N},{{\bm \phi}}_{{{\bm \Theta}}_N}) - {{\bm q}}_{m_0} \|_{H^1(\Omega) \times H(\Div; \Omega)} < \varepsilon/2 \end{equation} for all $N>N_0$.
This bound follows by Theorem \ref{teo:fund_teo_gamma_conv} because every cluster point of $\{ {{\bm \Theta}}_N\}$ is a minimum point for $L$, and because of the continuity of the map ${{\bm \Theta}} \mapsto (u_{{{\bm \Theta}}},{{\bm \phi}}_{ {{\bm \Theta}}})$. The proof concludes upon combining \eqref{eq:teo_conv_1} and \eqref{eq:teo_conv_2}. \end{proof} \section{General framework} \label{sec:general} In this section, we extend the theoretical analysis we performed in Section \ref{sec:analysis} and put it into an abstract framework. Afterwards, we illustrate how such a framework applies to some well-established unstructured neural-network methods for the approximation of PDEs. Let $\Omega \subset \mathbb{R}^d$ and $\gamma \in \mathbb{N}$. We assume our problem is posed in some admissible vector space \[ \mathcal{A} \subset W^{\gamma,1}_{loc}(\Omega; \mathbb{R}^n), \] namely, that every function ${{\bm q}} \in \mathcal{A}$ has locally integrable weak derivatives of order up to $\gamma$. The space $\mathcal{A}$ may or may not include boundary conditions or constraints of any type. In the setting we described in Section \ref{sec:introduction}, the target dimension is $n = 1+d$, the differentiability index is $\gamma=1$, and we identify $\mathcal{A} \ni {{\bm q}} = (u, {{\bm \phi}})$. Additionally, we assume the space $\mathcal{A}$ is furnished with some norm $\| \cdot \|_{\mathcal{A}}$, which in our setting corresponds to the $H^1(\Omega) \times H(\Div; \Omega)$-norm. We consider Borel subsets $\omega_1,\ldots,\omega_K$ of $\overline\Omega$, each $\omega_i$ furnished with a finite Radon measure $\mu_i$, and some given functions $f_1,\ldots,f_{n_f}$ with $f_i:\Omega \to \mathbb{R}$.
Given some integrable functions $F_i: \mathbb{R}^{n_\gamma n +n_f+d} \to \mathbb{R}$, $1 \le i \le K$, we define the loss functional % \begin{equation*} \mathcal{L}({{\bm q}}) := \sum^K_{i=1} \int_{\omega_i} F_i(D^{\alpha_1}{{\bm q}}, \ldots ,D^{\alpha_{n_\gamma}}{{\bm q}}, f_1, \ldots ,f_{n_f},x) \, d\mu_i, \end{equation*} where the $F_i$ are such that all the integrals involved are well defined. Namely, we assume the loss functional consists of $K$ terms, each of which may be defined on a different subdomain of $\overline\Omega$. Each of these terms involves certain partial derivatives of ${{\bm q}}$ of order up to $\gamma$. The subdomains $\omega_i$ need not be open; for example, we could allow for $\omega_i \subset \partial\Omega$ and the corresponding term would be able to accommodate boundary data. In such a case, the corresponding trace operator must be bounded on the space $\mathcal{A}$. Consider now a space $\mathcal{A}_m \subset \mathcal{A}$ such that we have a surjective map $\mathbb{R}^m \ni {{\bm \Theta}} \mapsto {{\bm q}}_{{{\bm \Theta}}} \in \mathcal{A}_m$. In the setting from Section \ref{sec:analysis}, this space consists of the functions obtained through a neural network with a modification to account for boundary conditions, cf. \eqref{eq:admissible-class}. We denote by ${{\bm q}}_{{{\bm \Theta}}}$ a generic element of $\mathcal{A}_m$. For $1 \le i \le K$, we define $G_i: \mathbb{R}^m \times \omega_i \to \mathbb{R}$ as \begin{equation*} G_i({{\bm \Theta}},x) = F_i(D^{\alpha_1}{{\bm q}}_{{{\bm \Theta}}},\ldots,D^{\alpha_{n_\gamma}}{{\bm q}}_{{{\bm \Theta}}},f_1,\ldots, f_{n_f},x) \end{equation*} and, given $R>0$, we define the regularized loss functional $L: \mathbb{R}^m \to \overline\mathbb{R}$ \begin{equation}\label{eq:reg_func_gen} L({{\bm \Theta}}) := \left\lbrace \begin{array}{ll} \mathcal{L}( {{\bm q}}_{{{\bm \Theta}}} ) & \mbox{if } |{{\bm \Theta}}| \le R, \\ +\infty & \mbox{otherwise}. \\ \end{array} \right.
\end{equation} Let $\{X^1_j\}_{j \in \mathbb{N}},\ldots,\{X^K_j\}_{j \in \mathbb{N}}$ be i.i.d. sequences of random variables on the probability space $(\Lambda,\Sigma,P)$, with $X^i_j:\Lambda \to \omega_i$ for all $j \in \mathbb{N}$, $1 \le i \le K$, in such a way that the distribution $\overline\mu^i$ of $X^i_j$ is the normalization of $\mu_i$ on $\omega_i$, that is \[ \overline\mu^i(E) = \frac{\mu_i(E)}{\mu_i(\omega_i)} \quad \mbox{for every Borel set } E \subset \omega_i . \] Given $\lambda \in \Lambda$, $R>0$, and $N \in \mathbb{N}$ we define the sampling nodes $V^i_N(\lambda) := \cup_{j \le N} \{X^i_j(\lambda)\} $, and the regularized discrete loss functional $L_{\lambda,N}: \mathbb{R}^m \to \overline\mathbb{R}$, \begin{equation}\label{eq:reg_func_dis_gen} L_{\lambda,N}({{\bm \Theta}}) := \left\lbrace \begin{aligned} & \sum^K_{i=1}\frac{\mu_i(\omega_i)}{N} \sum_{ x \in V^i_N(\lambda) } G_i({{\bm \Theta}},x) & \mbox{if } |{{\bm \Theta}}| \le R, \\ & +\infty & \mbox{otherwise.} \\ \end{aligned} \right. \end{equation} In order to extend our convergence estimates in Section \ref{sec:analysis} to a general framework, we consider the following hypotheses: \begin{itemize} \item[(H1)] The map $\mathbb{R}^m \to (\mathcal{A}_m,\|\cdot\|_{\mathcal{A}})$ with ${{\bm \Theta}} \mapsto {{\bm q}}_{{\bm \Theta}}$ is continuous. \item [(H2)] For all $1 \le i \le K$ and every convergent sequence $\{{{\bm q}}_{{{\bm \Theta}}_n}\}_{n\in\mathbb{N}} \subset \mathcal{A}_m$, with ${{\bm q}}_{{{\bm \Theta}}_n} \to {{\bm q}}_{{{\bm \Theta}}} \in \mathcal{A}_m$ with respect to the $\mathcal{A}$-norm, there exists a subsequence $\{{{\bm q}}_{{{\bm \Theta}}_{n_j}}\}_{j\in\mathbb{N}}$ such that $G_i({{\bm \Theta}}_{n_j},x) \to G_i({{\bm \Theta}},x)$ $\mu_i$-almost everywhere. \item [(H3)] For every $R>0$, there exists a positive constant $M$ such that $|G_i({{\bm \Theta}},x)|<M$ for all $1 \le i \le K$, ${{\bm \Theta}} \in B(0,R)$ and $\mu_i$-almost every $x \in \omega_i$.
\item [(H4)] The loss functional $\mathcal{L}$ has a unique minimizer ${{\bm q}}_0 \in \mathcal{A}$, and at least one minimizer in $\mathcal{A}_m$ for all $m \in \mathbb{N}$. \item [(H5)] Let $\{{{\bm q}}_m\}_{m \in \mathbb{N}} \subset \mathcal{A}$ be a sequence of minimizers of $\mathcal{L}$, namely, ${{\bm q}}_m \in \arg \min_{{{\bm q}} \in \mathcal{A}_m} \mathcal{L}({{\bm q}})$ for all $m \in \mathbb{N}$. Then, $\|{{\bm q}}_m - {{\bm q}}_0\|_{\mathcal{A}} \to 0$ as $m \to \infty$. \end{itemize} Let us comment on these assumptions and how they relate to our analysis in the previous section. Hypothesis (H1) corresponds to the first part of the conclusion of Lemma \ref{lem:continuidad_tita}, and guarantees the stability of neural network functions with respect to the parameters. Hypothesis (H2) roughly states that, for neural network functions, one can pass from convergence in $\mathcal{A}$ to almost everywhere convergence (up to a subsequence). In our setting, we showed this condition to hold in the proof of Theorem \ref{teo:gamma_conv}. Our assumption (H3) requires the pointwise boundedness of the differential operators on the neural network functions, as well as the pointwise boundedness of the source functions $f_1,\ldots, f_{n_f}$. This condition appeared in the second part of Lemma \ref{lem:continuidad_tita}. The ellipticity of the loss functional $\mathcal{L}$ guarantees that hypothesis (H4) is satisfied. Finally, hypothesis (H5) involves the approximability of the solution to the continuous problem by the neural network minimizers of $\mathcal{L}$. In our setting, this appeared in Lemma \ref{lemma:aprox-inf}, and is a consequence of ellipticity and assumption \eqref{eq:hypothesis}. The following two results extend Theorem \ref{teo:gamma_conv} and Theorem \ref{teo:conv}, respectively; we outline the main steps of their proofs. We first address the $\Gamma$-convergence of the regularized discrete functionals.
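To make the objects entering \eqref{eq:reg_func_dis_gen} concrete, the following minimal sketch assembles a two-term discrete loss on $\Omega = (0,1)^2$, with $\omega_1 = \Omega$ (an interior PDE-residual term with the Lebesgue measure) and $\omega_2 = (0,1) \times \{0\}$ (a Neumann-residual term with the one-dimensional surface measure). The one-parameter ansatz, the data, and the closed-form reference value below are illustrative assumptions, chosen so that the exact loss is available; they are not the networks or problems used elsewhere in this paper.

```python
import numpy as np

# Discrete loss L_{lambda,N} with K = 2 terms on Omega = (0,1)^2.
# "Network": the one-parameter ansatz u(x,y) = theta*sin(pi x)*sin(pi y),
# so that all residuals and the exact continuous loss are in closed form.

def u(theta, x, y):
    return theta * np.sin(np.pi * x) * np.sin(np.pi * y)

def G1(theta, x, y):
    # squared residual of -Laplace(u) = f with f = 2*pi^2*sin(pi x)*sin(pi y)
    lap = -2.0 * np.pi**2 * u(theta, x, y)          # exact Laplacian of ansatz
    f = 2.0 * np.pi**2 * np.sin(np.pi * x) * np.sin(np.pi * y)
    return (-lap - f) ** 2

def G2(theta, x):
    # squared residual of du/dy = pi*sin(pi x) on the edge y = 0
    dudy = theta * np.pi * np.sin(np.pi * x)        # exact normal derivative
    return (dudy - np.pi * np.sin(np.pi * x)) ** 2

def discrete_loss(theta, N, rng):
    x1, y1 = rng.random(N), rng.random(N)           # uniform samples in omega_1
    x2 = rng.random(N)                              # uniform samples in omega_2
    # mu_1(omega_1) = mu_2(omega_2) = 1, so both weights reduce to 1/N
    return G1(theta, x1, y1).mean() + G2(theta, x2).mean()

def exact_loss(theta):
    # closed-form value of the continuous loss for this ansatz
    return (2.0 * np.pi**2 * (theta - 1.0)) ** 2 / 4.0 \
        + (theta - 1.0) ** 2 * np.pi**2 / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = 0.7
    for N in (1_000, 100_000):
        print(N, discrete_loss(theta, N, rng), exact_loss(theta))
```

For a fixed parameter value, the Monte Carlo estimate approaches the exact loss at the usual $O(N^{-1/2})$ rate, which is precisely the pointwise convergence that the $\Gamma$-convergence analysis upgrades to convergence of minimizers.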
\begin{theorem}[almost sure $\Gamma$-convergence, general case]\label{teo:gamma_conv_gen} Let $R>0$, and $L$, $L_{\lambda,N}$ be as in \eqref{eq:reg_func_gen} and \eqref{eq:reg_func_dis_gen}, respectively. Then, under assumptions (H1), (H2), and (H3), it holds that $L_{\lambda,N} \xrightarrow[]{\Gamma} L$ as $N \to \infty$ $P$-almost surely. \end{theorem} \begin{proof} The arguments used in the proof of Theorem \ref{teo:gamma_conv} can be easily adapted to this case. Indeed, the lim-sup inequality follows trivially by taking the recovery sequence $\{{{\bm \Theta}}_N\}_{N \in \mathbb{N}}$, ${{\bm \Theta}}_N \equiv {{\bm \Theta}}$, and using the strong law of large numbers. To prove the lim-inf inequality, we start from a bounded sequence of parameters $\{ {{\bm \Theta}}_N\}_{N \in \mathbb{N}}$ and use (H1) to extract a converging subsequence $\{ {{\bm q}}_{{{\bm \Theta}}_N}\}_{N \in \mathbb{N}}$ in the $\mathcal{A}$-norm. Then, by (H2) we can extract another subsequence such that $G_i({{\bm \Theta}}_{n_j},x) \to G_i({{\bm \Theta}},x)$ $\mu_i$-almost everywhere for all $1 \le i \le K$ and by (H3) we know that all the $G_i$'s are in $L^\infty(B(0,R)\times \Omega)$. The conclusion then follows by applying Egorov's Theorem on every subset $\omega_1, \ldots, \omega_K$. \end{proof} Once we have the almost sure $\Gamma$-convergence of the regularized discrete functionals, the convergence of the neural network minimizers can be proved by arguing as in Theorem \ref{teo:conv}. \begin{theorem}[convergence, general case] \label{teo:conv_general} Assume hypotheses (H1)--(H5) are satisfied, and suppose that for any fixed $m \in \mathbb{N}$ and $R>0$ we can construct a sequence $\{{{\bm \Theta}}_N\}_{N \in \mathbb{N}} \subset B(0,R) \subset \mathbb{R}^m$ such that $\lim_{N \to \infty} L_{\lambda,N}({{\bm \Theta}}_N) = \lim_{N \to \infty} \inf_{{{\bm \Theta}} \in \mathbb{R}^m} L_{\lambda,N}({{\bm \Theta}})$, with $L_{\lambda,N}$ defined as in \eqref{eq:reg_func_dis_gen}.
Let ${{\bm q}}_0 = \arg \min_{{{\bm q}} \in \mathcal{A}} \mathcal{L}({{\bm q}})$. Given $\varepsilon>0$, there $P$-almost surely exist $m_0=m_0(\varepsilon) \in \mathbb{N}$, $R=R(m_0)>0$ and $N_0 = N_0(m_0) \in \mathbb{N}$ such that \[ \| {{\bm q}}_0 - {{\bm q}}_{{{\bm \Theta}}_N}\|_{\mathcal{A}} \le \varepsilon \quad \mbox{for all } N>N_0, \] where ${{\bm q}}_{{{\bm \Theta}}_N} \in \mathcal{A}_{m_0}$ is the neural network function defined by the parameters ${{\bm \Theta}}_N$. \end{theorem} \begin{proof} We first remark that hypothesis (H4) is needed to guarantee that ${{\bm q}}_0 \in \mathcal{A}$ is well defined and therefore that (H5) is meaningful. Given $\varepsilon > 0$, we use hypothesis (H5) to find $m_0 \in \mathbb{N}$ such that, if ${{\bm q}}_{m_0} \in \arg \min_{{{\bm q}} \in \mathcal{A}_{m_0}} \mathcal{L}({{\bm q}})$ then $ \| {{\bm q}}_0 - {{\bm q}}_{m_0}\|_{\mathcal{A}} < \varepsilon/2$. Next, we fix $R_0$ sufficiently large so that there exists ${{\bm \Theta}} \in B(0,R_0)$ with ${{\bm q}}_{ {{\bm \Theta}}} \in \arg \min_{{{\bm q}} \in \mathcal{A}_{m_0}} \mathcal{L}({{\bm q}})$, and we use this $R_0$ in Theorem \ref{teo:gamma_conv_gen} to deduce that $L_{\lambda,N} \xrightarrow[]{\Gamma} L$ as $N \to \infty$ $P$-almost surely. The result then follows from the equi-coercivity of the sequence $\{L_{\lambda , N}\}_{N \in \mathbb{N}}$ together with the fundamental theorem of $\Gamma$-convergence (Theorem \ref{teo:fund_teo_gamma_conv}). \end{proof} We next discuss how two well-known methods fit into the framework in hypotheses (H1)--(H5), and thus Theorem \ref{teo:conv_general} establishes their convergence. \begin{remark}[Deep Ritz Method] \label{rem:DRM} The DRM was proposed by E and Yu in \cite{E18}, and is tailored for numerically solving variational problems.
A prototypical example is the homogeneous Dirichlet problem, which corresponds to the minimization of the energy $\mathcal{L} \colon H^1_0(\Omega) \to \mathbb{R}$, \[ \mathcal{L}(u) = \frac12 \int_\Omega |\nabla u|^2 - \int_\Omega f u. \] We assume $ \| f \|_{L^{\infty}(\Omega)} <\infty $, consider $\mathcal{A} = H^1_0(\Omega)$, and define the neural network spaces $\mathcal{A}_m$ as in \eqref{eq:admissible-class}. Arguing as in Section \ref{sec:analysis}, it is possible to show that hypotheses (H1)--(H4) hold for this loss functional. Indeed, (H1) and (H3) can be proved in the same fashion as Lemma \ref{lem:continuidad_tita}, while (H2) follows because for every bounded sequence in $H^1_0(\Omega)$ we can extract an almost everywhere convergent subsequence, and (H4) is a standard PDE result. Finally, hypothesis (H5) can be obtained from classical approximation results \cite{Cybenko89, hornik1991, Barron93, Yarotsky17,He_etal20}. \end{remark} \begin{remark}[Deep Galerkin Method] \label{rem:DGM} The DGM was introduced by Sirignano and Spiliopoulos in \cite{DGM18}, and uses as loss functional the $L^2$-norm of the PDE residual on the neural network functions. Within the convergence framework in \cite[Section 7]{DGM18}, and the conditions assumed there, we set $\mathcal{A} := \mathcal{C}^{0, \delta, \delta/2}(\overline {\Omega}_{T}) \cap L^2((0, T]; W^{1,2}_0 (\Omega)) \cap W^{(1,2), 2}_0 (\Omega_T ') $, where $ \delta> 0 $ and $ \Omega_T'$ is any interior subdomain of $\Omega_T$, cf. \cite[Theorem 7.3]{DGM18}. We furnish this space with the $\| \cdot \|_{H^2 (\Omega_T)}$ norm, and define $\mathcal{A}_m$ according to \eqref{eq:admissible-class}. Then, assumptions (H1) and (H3) can be verified by arguing as in Lemma \ref{lem:continuidad_tita} and requiring the initial and boundary data and parameters of the equation to be bounded.
Hypothesis (H2) can be proved by using the boundedness of $\Omega_T$ and arguing as in the proof of Theorem \ref{teo:gamma_conv} to exploit the convergence properties of the $H^2(\Omega_T)$-norm. Finally, hypotheses (H4) and (H5) are addressed in \cite[Theorem 7.3]{DGM18}. We point out, however, that the convergence of discrete minimizers of $\mathcal{L}$ is proven in the weaker norm $ \| \cdot \|_{L^{\rho}(\Omega_T)}$, with $\rho <2$. Therefore, our conclusion in Theorem \ref{teo:conv_general} is valid if we measure convergence in such a norm. \end{remark} \section{Numerical experiments} \label{sec:numerical} In this section, we present numerical results for the method we proposed in Section \ref{sec:description}. We did not prioritize any particular neural network architecture, and used five-layer networks with sigmoidal activation functions to construct $u_{{{\bm \Theta}}}$ and ${{\bm \phi}}_{{{\bm \Theta}}}$. For the construction of the auxiliary functions ${{\bm n}}$, $d_\calD$, $d_\calN$, $G_\calD$, $G_\calN$, we used three-layer networks with fewer neurons per layer. In the training process, we used the ADAM \cite{ADAM} algorithm to update the parameters, with a decaying learning rate schedule. We observe an improvement in the method's performance when explicit approximations of the auxiliary functions $d_\calD$ and $d_\calN$ are used. These functions, which depend on the geometry of the domain, are often explicitly available in practice. We recall that, as explained in Sections \ref{sec:analysis} and \ref{sec:general}, the numerical solution depends on the number of degrees of freedom $m$ and the number of collocation points $N$. Both must go to infinity to guarantee convergence. In all the numerical examples we show below, these quantities remain fixed. Therefore, in these examples the convergence as a function of the iterations occurs towards the minimizer of the discrete loss functional $L_{\lambda,N}$ (cf.
\eqref{eq:reg_func_dis}) corresponding to the values of $m$ and $N$ we have set. \begin{example}[Laplace operator] We consider the following problem in arbitrary dimension. Let $\Omega = \{ x\in {\mathbb{R}^d} : \ -1< x_1,..., x_d< 1\}$, $\Gamma_\calN = [-1,1]^{d-1} \times \{1\},$ and $k \in \mathbb{N}$. We seek $u \colon \Omega \to \mathbb{R}$ such that \begin{equation}\label{eq:example1} \left\lbrace \begin{aligned} -\Delta u & = \prod^{d-1}_{i=1}\sin(k \pi x_i) \left( (d-1)k^2 \pi^2 (1-x_d^2) + 2 \right) \quad & \mbox{in } \Omega,\\ u & = 0 \quad & \mbox{on } \partial\Omega \setminus \Gamma_\calN,\\ \nabla u \cdot {{\bm \nu}} & = -2\prod^{d-1}_{i=1}\sin(k \pi x_i) \quad & \mbox{on } \Gamma_\calN.\\ \end{aligned} \right. \end{equation} Here, we have ${{\bm \nu}} = (0,...,0,1)$ on $\Gamma_\calN$, and the solution to \eqref{eq:example1} is \[ u = \prod^{d-1}_{i=1} \sin(k \pi x_i)(1-x^2_d). \] We point out that the parameter $k$ is a frequency that allows us to choose how oscillatory the exact solution $u$ is. We first tested the method in a two-dimensional domain ($d=2$). Figure \ref{fig:dim2} displays the results we obtained for $k=1$ and by constructing $u_{{{\bm \Theta}}}$ and ${{\bm \phi}}_{{{\bm \Theta}}}$ using neural networks with $15$ sigmoidal activation functions per layer. At the end of the stochastic gradient descent algorithm we computed the value $L_N ({{\bm \Theta}}) = 0.0450$. Taking into account the ellipticity of the loss function $\mathcal{L}$, arguing as in Lemma \ref{lemma:aprox-inf} we deduce \[ \mathcal{L} ({{\bm q}}_m) \simeq \| {{\bm q}}_m - {{\bm q}}_0 \|_{H^1(\Omega) \times H(\Div; \Omega)} , \] and therefore this quantity serves as an error estimator. Figure \ref{fig:dim2_k2} corresponds to $k=2$, and we used a similar architecture, but with $18$ sigmoidal activation functions per layer. We observed a fast convergence in the number of iterations, reaching $L_N ({{\bm \Theta}}) = 1.89$ by the end of the minimization algorithm. 
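As a quick independent sanity check (not part of the method itself), the manufactured right-hand side in \eqref{eq:example1} can be verified numerically. The following sketch (Python with NumPy, for $d=2$ and $k=1$; the step size, sample count, and tolerance are arbitrary illustrative choices) compares a second-order finite-difference Laplacian of the exact solution against $f$ at random interior points:

```python
import numpy as np

# Finite-difference check of the manufactured solution of (eq:example1)
# for d = 2, k = 1: u = sin(k*pi*x1) * (1 - x2^2).
k, h = 1, 1e-4

def u(x1, x2):
    return np.sin(k * np.pi * x1) * (1.0 - x2**2)

def f(x1, x2):
    # Right-hand side of (eq:example1) for d = 2
    return np.sin(k * np.pi * x1) * (k**2 * np.pi**2 * (1.0 - x2**2) + 2.0)

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-0.9, 0.9, size=(2, 1000))  # random interior points

# Second-order central differences for the Laplacian of u.
lap = (u(x1 + h, x2) + u(x1 - h, x2) + u(x1, x2 + h) + u(x1, x2 - h)
       - 4.0 * u(x1, x2)) / h**2
residual = np.max(np.abs(-lap - f(x1, x2)))  # should be O(h^2)
```

The residual is at the level of the finite-difference truncation error, confirming that $-\Delta u = f$ for the stated pair.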
Finally, Figure \ref{fig:altadim} reports the results we obtained in case $d = 5$, $k=1$. In this case, we used networks with $25$ sigmoidal activation functions per layer and obtained $L_N ({{\bm \Theta}}) = 2.32$. \begin{figure}[h] \centering \begin{tabular}{|c|c|c|c|} \hline \subf{\includegraphics[width=50mm]{solucion_dim2.png}} {$ u_{{{\bm \Theta}}}$} & \subf{\includegraphics[width=50mm]{loss_dim2.png}} {Loss function vs. iterations.} \\ \hline \subf{\includegraphics[width=50mm]{error_dim2.png}} { $| u_{{{\bm \Theta}}} - u | $} & \subf{\includegraphics[width=50mm]{errorL2_dim2.png}} { $\|u_{{{\bm \Theta}}} - u\|_{L^2(\Omega)}$ vs. iterations.} \\ \hline \end{tabular} \caption{Top left: computational solution $u_{{{\bm \Theta}}}$ to \eqref{eq:example1} in case $k = 1$ and $d = 2$. In the computation, we used an initial learning rate $\ell = 0.005$, with 4,000 collocation points, 5,000 optimization steps, and 2453 degrees of freedom (including auxiliary functions). Every 1,000 optimization steps, we halved the learning rate. The panel in bottom left exhibits the pointwise discrepancy $|u-u_{{\bm \Theta}}|$. We also report on the evolution of the loss function (top right) and the $L^2$ error (bottom right).} \label{fig:dim2} \end{figure} \begin{figure}[h] \centering \begin{tabular}{|c|c|c|c|} \hline \subf{\includegraphics[width=50mm]{solucion_k2.png}} {$ u_{{{\bm \Theta}}}$} & \subf{\includegraphics[width=50mm]{loss_k2.png}} {Loss function vs. iterations.} \\ \hline \subf{\includegraphics[width=50mm]{error_k2.png}} { $| u_{{{\bm \Theta}}} - u | $} & \subf{\includegraphics[width=50mm]{errorL2_k2.png}} { $\|u_{{{\bm \Theta}}} - u\|_{L^2(\Omega)}$ vs. iterations.} \\ \hline \end{tabular} \caption{Computational solution $u_{{{\bm \Theta}}}$ (top left), evolution of the loss function (top right), pointwise error (bottom left), and evolution of the $L^2$-error (bottom right) for \eqref{eq:example1} with $k = 2$ and $d = 2$. 
We used an initial learning rate $\ell = 0.005$, with 5,000 collocation points, 10,000 optimization steps, and 2901 degrees of freedom. We halved the learning rate every 2,500 optimization steps.} \label{fig:dim2_k2} \end{figure} \begin{figure}[h] \centering \begin{tabular}{|c|c|c|c|} \hline \subf{\includegraphics[width=50mm]{feta_05.png}} {$ \restr{u_{{{\bm \Theta}}}}{\{x_2,\ldots,x_5 = 0.5\}}$} & \subf{\includegraphics[width=50mm]{perdida_altadim.png}} {Loss function vs. iterations.} \\ \hline \subf{\includegraphics[width=50mm]{feta_error.png}} { $|\restr{u_{{{\bm \Theta}}}}{\{x_2,\ldots,x_5 = 0.5\}} - \restr{u}{\{x_2,\ldots,x_5 = 0.5\}}|$} & \subf{\includegraphics[width=50mm]{errorL2_altadim.png}} { MSE vs. iterations.} \\ \hline \end{tabular} \caption{Slice of the solution $u_{{{\bm \Theta}}}$ (top left), evolution of the loss function (top right), pointwise error (bottom left), and evolution of the Mean Squared Error (MSE) (bottom right) for \eqref{eq:example1} with $k = 1$ and $d = 5$. We used an initial learning rate $\ell = 0.005$, with 12,000 collocation points, 20,000 optimization steps, and 5656 degrees of freedom. We halved the learning rate every 4,000 optimization steps. We estimated the MSE by using 5,000 random points in $\Omega$ (re-sampled at every step).} \label{fig:altadim} \end{figure} \end{example} \begin{example}[singularly perturbed problem] \label{ex:singularly-perturbed} Let $\varepsilon > 0$, $\Omega = (0,1)^2$, ${{\bm b}} = (-1+2\varepsilon,-1+2\varepsilon)$, $c = 2(1-\varepsilon)$, and the function $f \colon \Omega \to \mathbb{R}$, \[ f(x,y) = - \left[ x - \left( \frac{1-e^{-x/\varepsilon}}{1-e^{-1/\varepsilon}}\right) + y - \left( \frac{1-e^{-y/\varepsilon}}{1-e^{-1/\varepsilon}}\right) \right]e^{x+y}. 
\] We consider the singularly perturbed problem: find $u \colon \Omega \to \mathbb{R}$ such that \begin{equation} \label{eq:example2} \left\lbrace \begin{aligned} -\varepsilon \Delta u + {{\bm b}}\cdot \nabla u +cu = f \quad & \mbox{in } \Omega,\\ u = 0 \quad & \mbox{on } \partial\Omega. \end{aligned} \right. \end{equation} The exact solution to \eqref{eq:example2} is \[ u(x,y) = \left( x - \frac{1-e^{-x/\varepsilon}}{1-e^{-1/\varepsilon}}\right) \left( y - \frac{1-e^{-y/\varepsilon}}{1-e^{-1/\varepsilon}}\right) e^{x+y}. \] Figure \ref{fig:sing_pert} exhibits our computed solutions for this example with $\varepsilon = 0.05$. In that case, we observed a fast convergence towards the solution, reaching $L_N ({{\bm \Theta}}) = 0.0112$, as well as a good adaptation of the discrete solution to the boundary layers. \begin{figure}[h] \centering \begin{tabular}{|c|c|c|c|} \hline \subf{\includegraphics[width=50mm]{sol_sing_pert.png}} {$u_{{{\bm \Theta}}}(x)$} & \subf{\includegraphics[width=50mm]{loss_sing_pert.png}} {Loss function vs. iterations.} \\ \hline \subf{\includegraphics[width=50mm]{dist_err_sing_pert.png}} {$|u_{{{\bm \Theta}}}(x) - u(x)|$} & \subf{\includegraphics[width=50mm]{error_sing_pert.png}} {$\|u_{{{\bm \Theta}}} - u\|_{L^2(\Omega)}$ vs. iterations.} \\ \hline \end{tabular} \caption{ Computational solution $u_{{{\bm \Theta}}}$ (top left), evolution of the loss function (top right), pointwise error (bottom left), and evolution of the $L^2$-error (bottom right) for \eqref{eq:example2} with $\varepsilon=0.05$. We used an initial learning rate $\ell = 0.005$, with 5,000 collocation points, 10,000 optimization steps, and 3543 degrees of freedom. The auxiliary functions were given by their exact explicit expressions. } \label{fig:sing_pert} \end{figure} \end{example} \section{Concluding remarks} \label{sec:conclusion} In this work, we have proposed a First-Order System Least Squares (FOSLS) method based on deep learning for numerically solving second-order elliptic PDEs.
This method is meshless, which is naturally advantageous for high-dimensional problems but implies, as a consequence, that the loss functionals cannot be computed exactly. Taking into account this practical issue, we proved the almost sure convergence of the neural network minimizers towards the PDE solutions. We furthermore extended the theoretical framework to incorporate other methods based on Monte Carlo quadrature. \begin{remark}[almost-everywhere solutions]\label{rem:regularization} The convergence proofs in Sections \ref{sec:analysis} and \ref{sec:general} are based on the use of regularized versions of the cost functionals and their discretizations. Regularization consists in restricting the size of the parameters, namely, imposing that $| {{\bm \Theta}} | < R$ for some $R < \infty$. This ensures that any neural network function with large derivatives is penalized, thereby preventing minimizers from approximating non-smooth functions. Far from being an artificial condition of the proof, regularization mechanisms of this kind are necessary in the implementation to avoid convergence towards functions that satisfy the PDE almost everywhere but are not weak solutions of the target problem. To illustrate this point, consider the following example, which is just \eqref{eq:FOS} in a simplified setting: seek $u,\phi: (0,1) \to \mathbb{R}$ such that \begin{equation}\label{eq:remark} \left\lbrace \begin{aligned} \phi - u' & = 0 & \mbox{in } (0,1), \\ \phi'& = 0 & \mbox{in } (0,1), \\ u(0) & = 0, \\ u(1) & = 1. \end{aligned} \right. \end{equation} Naturally, the unique minimizer of the least-squares functional (cf. \eqref{eq:LS-loss}) \[ \mathcal{L} (u, \phi) := \| \phi - u' \|_{L^2(\Omega)}^2 + \| \phi' \|_{L^2(\Omega)}^2 \] in the corresponding admissible set $\mathcal{A} = \{ (u,\phi) \in [H^1(\Omega)]^2 \colon u(0) = 0, \ u(1) = 1 \}$ is $u(x) = x$ and $\phi(x) = 1$.
Let $\delta \in (0,1/2)$ be a small number, and consider the functions \begin{equation} \label{eq:def-u-phi} u_\delta(x) = \left\lbrace \begin{aligned} & 0 & \mbox{in } (0,1/2-\delta) \\ & \frac{x-1/2+\delta}{2\delta} & \mbox{in } (1/2-\delta,1/2+\delta) \\ & 1 & \mbox{in } (1/2+\delta,1) \end{aligned} \right., \qquad \phi_\delta(x) = \left\lbrace \begin{aligned} & 0 & \mbox{in } (0,1/2-\delta) \\ & \frac1{2\delta} & \mbox{in } (1/2-\delta,1/2+\delta) \\ & 0 & \mbox{in } (1/2+\delta,1) \end{aligned} \right. . \end{equation} We notice $\phi_\delta = u'_\delta$ a.e. in $(0,1)$ and $\mathcal{L} (u_\delta, \phi_\delta) = 0$, although $(u_\delta, \phi_\delta) \notin \mathcal{A}$, because $\phi_\delta$ is not an $H^1$ function. If we utilize the discrete functional \eqref{eq:discrete_cost} with collocation points, and none of these points lies in the interval $(1/2-\delta, 1/2+\delta)$, then for these two functions we would have \[ \mathcal{L}_N (u_\delta, \phi_\delta) = 0. \] We remark that, independently of the number of collocation points $N$, one can always take $\delta > 0$ sufficiently small such that the probability that none of the sampling points lies in $(1/2-\delta, 1/2+\delta)$ is significant. Therefore, if our neural network is capable of producing functions $(u_{{{\bm \Theta}}}, \phi_{{{\bm \Theta}}})$ approximating $(u_\delta, \phi_\delta)$ in \eqref{eq:def-u-phi} (cf. Figure \ref{fig:remark}), then during the optimization process the descent algorithm may choose to approximate the pair $(u, \phi) = (\chi_{(1/2,1)}, 0)$. This pair satisfies the differential equations in \eqref{eq:remark} almost everywhere, but is not a meaningful solution. The issue of approximating bad solutions of this kind is mitigated by applying classic regularization techniques that penalize large parameters, because $|{{\bm \Theta}}|$ must be large in order for $u_{{\bm \Theta}}'$ to be large on some portion of the domain.
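The collocation blindness described above can be reproduced in a few lines. The sketch below (Python/NumPy; the values $N=100$ and $\delta=10^{-6}$ are arbitrary, and a deterministic midpoint grid replaces random samples for reproducibility) evaluates the pointwise residuals of $(u_\delta, \phi_\delta)$ from \eqref{eq:def-u-phi} and finds a vanishing discrete loss, even though the pair is far from the true solution $u(x)=x$:

```python
import numpy as np

# Collocation residuals of (u_delta, phi_delta); N and delta are
# illustrative choices, and the midpoint grid deterministically
# misses the thin layer (1/2 - delta, 1/2 + delta).
N, delta = 100, 1e-6
x = (np.arange(N) + 0.5) / N          # midpoint grid on (0, 1)

in_layer = np.abs(x - 0.5) < delta
assert not in_layer.any()             # the grid misses the layer entirely

# A.e. values outside the layer: u_delta' = 0, phi_delta = 0, phi_delta' = 0.
u_prime = np.where(in_layer, 1.0 / (2.0 * delta), 0.0)
phi = np.where(in_layer, 1.0 / (2.0 * delta), 0.0)
phi_prime = np.zeros_like(x)

loss_N = np.mean((phi - u_prime) ** 2 + phi_prime ** 2)  # exactly 0

# Yet u_delta is nowhere near the true solution u(x) = x:
u_delta = np.where(x < 0.5 - delta, 0.0,
                   np.where(x > 0.5 + delta, 1.0,
                            (x - 0.5 + delta) / (2.0 * delta)))
gap = np.max(np.abs(u_delta - x))     # close to 1/2
```

With random collocation points the same conclusion holds with probability close to one once $\delta \ll 1/N$, which is precisely the failure mode the regularization is designed to exclude.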
\begin{figure}[h] \centering \includegraphics[scale=0.45]{example.png} \caption{In red, the function $u_{\delta}$ defined in \eqref{eq:def-u-phi}; the dashed line shows the solution of problem \eqref{eq:remark}.} \label{fig:remark} \end{figure} This difficulty extends to all methods based on the minimization of cost functionals similar to \eqref{eq:discrete_cost}, such as DGM \cite{DGM18} or DRM \cite{E18}. The issue stems from the fact that the functional \eqref{eq:discrete_cost} is unable to distinguish regular solutions (belonging to a suitable Sobolev space) from other functions that satisfy the equation almost everywhere. As far as we know, this problem has not been addressed in the literature, and the question of how to develop suitable regularization techniques for these approaches remains open. \end{remark} \begin{remark}[approximation of non-smooth solutions] \label{rem:non-smooth} There are, however, problems in which the solution exhibits large gradients in regions of the domain. One can typically think of singularly perturbed problems, such as \eqref{eq:example2}, or singularities arising due to poor boundary regularity, such as for the Poisson problem on an $L$-shaped domain. In those problems, regularization can limit the approximation capabilities of the algorithm. For algebraic boundary singularities, if the boundary conditions are imposed in a strong fashion, as discussed in Section \ref{sec:boundary-data}, one could aim to modify the rate at which the corresponding auxiliary function $d_\calD$ or $d_\calN$ decreases to zero near the singularity. This could potentially avoid $v_{{{\bm \Theta}}}$ having to approximate a singular function and lead to a faster convergence. Nevertheless, this requires a priori knowledge about the location and behavior of the singularities of the solution, which is not available in general.
We emphasize that the theory we developed in Section \ref{sec:general} does not make any regularity assumption on the PDE, and therefore includes the case of non-smooth solutions. \end{remark} \bibliographystyle{abbrv}
\section{Introduction} Ultracold atoms provide an ideal platform to investigate and simulate many-body problems of condensed-matter physics, e.g., the Mott insulator transition \cite{Bloch} and the magnetic phase transition \cite{Lewenstein}, because of their unprecedented controllability in purity and interatomic interactions. There are a number of tools now available to tune the interatomic interactions, such as magnetic and optical Feshbach resonances, optical lattices, etc. The magnetic Feshbach resonance, resulting from the hyperfine coupling between two atomic states (i.e., the open channel) and a molecular state (the closed channel) near zero energy, has been widely used \cite{Kohler,Chin}, allowing the realization of the long-sought crossover from a molecular Bose-Einstein condensate (BEC) to a Bardeen-Cooper-Schrieffer (BCS) superfluid and the investigation of interesting few-body physics such as Efimov effects \cite{Braaten,Kraemer}. The interaction between atoms can also be tuned by laser light near a photo-association transition, when two free atoms couple to an excited molecular state \cite{Fedichev,Bohn,Bohn2}. This so-called optical Feshbach resonance has been experimentally realized \cite{Fatemi,Theis}. Compared with the magnetic Feshbach resonance, the optical Feshbach resonance can be used to control the interatomic interaction with high temporal and spatial flexibility. In addition, the optical transition between atomic states and molecular states is available for most atomic species. Hence, the optical Feshbach resonance becomes crucial for controlling the interatomic interaction of alkaline-earth atoms, because of the lack of magnetic structure in their ground states \cite{Ciurylo,Enomoto,Thalhammer}. In a recent experiment on optical Feshbach resonance, an optical standing wave is used to couple atomic and molecular states of ytterbium-174 atoms, leading to a spatially modulated interaction between atoms \cite{Yamazaki}.
Theoretically, this spatially modulated interatomic interaction was understood by using a two-channel model \cite{Qi}. Future experiments on, e.g., the simulation of Hawking radiation in cold atoms \cite{Garay,Carusotto}, the emission of solitons \cite{Rodas-Verde}, the dynamics of BEC collapse \cite{Dong,Yan}, the localized to delocalized transition of solitons \cite{Abdullaev}, and the phase separation of Bose and Fermi gases \cite{Chien}, all resulting from spatially varying interactions, may benefit from the control of interatomic interactions with high spatial and temporal resolutions. However, due to the large light-induced atomic loss, the use of optical Feshbach resonance is greatly limited. To reduce the loss, it has been proposed to use alkaline-earth atoms with narrow inter-combination line-width \cite{Blatt}. Alternatively, one may optically control a magnetic Feshbach resonance by using a bound-to-bound transition between two molecular states \cite{Bauer,Bauer2,Fu} or by using a molecular dark state \cite{Wu}. Experimentally, the shift of the magnetic Feshbach resonance position and the modification of the two-body $s$-wave scattering length due to the bound-to-bound transition have been demonstrated for both atomic Bose \cite{Bauer,Bauer2} and Fermi gases \cite{Fu}, by using spatially uniform laser light. In this work, we investigate the optical control of a magnetic Feshbach resonance by using spatially \emph{varying} (i.e., standing-wave-like) laser light, which drives the bound-to-bound transition between a ground molecular state and an excited molecular state. This scheme can directly be implemented in current experiments \cite{Bauer,Fu} by replacing the uniform laser light with a standing-wave light. It offers the ability to tune the interatomic interactions with a spatial modulation at the sub-micron level. 
Compared with the previous spatial modulation of interatomic interactions with optical Feshbach resonance \cite{Yamazaki,Qi}, the major advantage of our scheme is that the optically induced atomic loss would be significantly suppressed \cite{Bauer,Fu}. As a result, our proposal provides a practical way to experimentally realize spatially modulated interatomic interactions, for the purpose of simulating related many-body problems. As we shall see, our scheme also has the advantage of tuning the \emph{width} of Feshbach resonances, with great flexibility. The rest of the paper is organized as follows. In the next section (Sec. II), we introduce the model Hamiltonian and calculate the energy bands of bound states. The scattering states are also investigated and a series of scattering resonances are obtained. In Sec. III, we present a detailed analysis and discussion of our results. Sec. IV is devoted to a summary of this work. \section{Theoretical framework} \subsection{Model Hamiltonian} In the absence of the bound-to-bound molecular transition, the system can be described by the following atom-molecule Hamiltonian \cite{Drummond,Timmermans,Timmermans2,Holland}, \begin{eqnarray} & H & =H_{0a}+H_{Ia}+H_{g}+H_{ag},\notag\\ & H_{0a} & =\sum_{\sigma}\int d\vec{r}\psi_{\sigma}^{\dagger}(\vec{r})\left[-\frac{\hbar^{2}\nabla^{2}}{2m}-\mu\right]\psi_{\sigma}(\vec{r}),\notag\\ & H_{Ia} & =U\int d\vec{r}\psi_{\uparrow}^{\dagger}(\vec{r})\psi_{\downarrow}^{\dagger}(\vec{r})\psi_{\downarrow}(\vec{r})\psi_{\uparrow}(\vec{r}),\notag\\ & H_{g} & =\int d\vec{R}\phi^{\dagger}(\vec{R})\left[-\frac{\hbar^{2}\nabla^{2}}{2M}-2\mu+v_{g}\right]\phi(\vec{R}),\notag\\ & H_{ag} & =\chi\int d\vec{R}\left[\phi^{\dagger}(\vec{R})\psi_{\uparrow}(\vec{R}/2)\psi_{\downarrow}(\vec{R}/2)+\textrm{H.c.}\right], \end{eqnarray} where $H_{0a}$ and $H_{Ia}$ are respectively the kinetic Hamiltonian and interaction Hamiltonian of atoms with the field operator $\psi_{\sigma}(\vec{r})$
($\sigma=\uparrow,\downarrow$); $H_{g}$ is the Hamiltonian of molecules in their ground state with the field operator $\phi(\vec{R})$ and $v_{g}$ denotes the energy difference between the molecular state and the atomic state; $H_{ag}$ describes the atom-molecule coupling and models the conversion between atoms and molecules. The mass of molecules is twice the atomic mass, $M=2m$. $\mu$ is the chemical potential. $\textrm{H.c.}$ denotes the Hermitian conjugate. Note that we have assumed short-range contact interactions for both the interatomic interaction $U(\vec{r}-\vec{r}')=U\delta(\vec{r}-\vec{r}')$ and the atom-molecule coupling $\chi(\vec{R};\vec{r},\vec{r}')=\chi\delta[\vec{R}-(\vec{r}+\vec{r}')/2]\delta(\vec{r}-\vec{r}')$. We now consider the molecular bound-to-bound transition driven by a standing-wave laser field $\Omega\cos(\vec{K}\cdot\vec{R})/2$, where $\Omega$ is the related Rabi frequency and $\vec{K}$ is the wave-vector of the light. By using the field operator $\phi_{e}(\vec{R})$ for the excited molecular state and taking the rotating-wave approximation, we obtain the following two additional terms \cite{Bauer,Fu}: \begin{eqnarray} H_{e} & = & \int d\vec{R}\phi_{e}^{\dagger}(\vec{R})\left[-\frac{\hbar^{2}\nabla^{2}}{2M}-2\mu+v_{e}-\Delta-i\frac{\gamma}{2}\right]\phi_{e}(\vec{R}),\notag\\ H_{ge} & = & \int d\vec{R}\frac{\Omega\cos(KX)}{2}\left[\phi^{\dagger}(\vec{R})\phi_{e}(\vec{R})+\phi_{e}^{\dagger}(\vec{R})\phi(\vec{R})\right], \end{eqnarray} where $H_{e}$ is the kinetic Hamiltonian of the excited molecular state, $v_{e}$ is the energy of the excited state relative to the atomic state, $\Delta$ is the detuning of the molecular transition, $\gamma$ describes the decay of the excited state, and $H_{ge}$ is the coupling between the ground and excited states through the optical standing wave. We have assumed that the laser light is applied along the $x$-direction so that $\cos(\vec{K}\cdot\vec{R})=\cos(KX)$.
In the case of large detuning ($\Delta\gg v_{e}$, $\gamma$), we may safely neglect the decay of the excited molecular state (i.e., $\gamma=0$) and eliminate the field operator $\phi_{e}(\vec{R})$. The coupling ($H_{ge}$) between molecular states leads to a Stark energy shift $\Omega^{2}\cos^{2}(KX)/(4\Delta)$ for the molecular ground state and consequently we have a modified Hamiltonian for ground-state molecules, \begin{align} \tilde{H}_{g} & =\int d\vec{R}\phi^{\dagger}\left[-\frac{\hbar^{2}\nabla^{2}}{2M}-2\mu+v_{g}+\frac{\Omega^{2}\cos{}^{2}(KX)}{4\Delta}\right]\phi(\vec{R}). \end{align} It is obvious that the Stark energy shift plays the role of optical lattices for ground-state molecules \cite{Fedichev2,Orso}. By taking a Fourier transformation, the total Hamiltonian can be rewritten in momentum space as \begin{eqnarray} H & = & H_{0a}+H_{Ia}+\tilde{H}_{g}+H_{ag},\notag\\ H_{0a} & = & \sum_{\vec{k}\sigma}(\epsilon_{\vec{k}}-\mu)C_{\vec{k}\sigma}^{\dagger}C_{\vec{k}\sigma},\notag\\ H_{Ia} & = & U\sum_{\vec{k},\vec{k}^{\prime},\vec{q}}C_{\vec{q}/2+\vec{k},\uparrow}^{\dagger}C_{\vec{q}/2-\vec{k},\downarrow}^{\dagger}C_{\vec{q}/2-\vec{k}^{\prime},\downarrow}C_{\vec{q}/2+\vec{k}^{\prime},\uparrow},\notag\\ \tilde{H}_{g} & = & \sum_{\vec{q}}\left(\frac{\epsilon_{\vec{q}}}{2}-2\mu+v_{g}+\frac{\Omega^{2}}{8\Delta}\right)b_{\vec{q}}^{+}b_{\vec{q}}\notag\\ & & -\sum_{\vec{q}}\frac{\Omega^{2}}{16\Delta}\left(b_{\vec{q}}^{\dagger}b_{\vec{q}+2K}+\textrm{H.c.}\right),\notag\\ H_{ag} & = & \chi\sum_{\vec{k},\vec{q}}\left(b_{\vec{q}}^{\dagger}C_{\vec{q}/2-\vec{k},\downarrow}C_{\vec{q}/2+\vec{k},\uparrow}+\textrm{H.c.}\right). \end{eqnarray} Here $\epsilon_{\vec{k}}=\vec{k}^{2}/2$ is the kinetic energy (in the units of $m=1$ and $\hbar=1$). The above Hamiltonian will be our starting point. In the following, we will solve the two-particle problem of the Hamiltonian. 
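As a short intermediate step (not spelled out above), the lattice terms in Eq. (4) follow from expanding the squared cosine of the Stark shift in Eq. (3):

```latex
\frac{\Omega^{2}\cos^{2}(KX)}{4\Delta}
  =\frac{\Omega^{2}}{8\Delta}+\frac{\Omega^{2}}{8\Delta}\cos(2KX)
  =\frac{\Omega^{2}}{8\Delta}
   +\frac{\Omega^{2}}{16\Delta}\left(e^{2iKX}+e^{-2iKX}\right).
```

In momentum space, the constant part shifts each $b_{\vec{q}}^{\dagger}b_{\vec{q}}$ term by $\Omega^{2}/(8\Delta)$, while the oscillating part couples molecular momenta differing by $2K$ with matrix elements of magnitude $\Omega^{2}/(16\Delta)$; the overall sign of this coupling depends on the choice of origin of the standing wave and can be absorbed by the shift $X\to X+\pi/(2K)$.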
Note that, in the case of large detuning, the molecular excited state $|e\rangle$ does not appear in the above Hamiltonian. Note also that, here the lattice potential only appears for the ground molecular state, unlike the case of an optical Feshbach resonance, where the spatial modulation appears in the atom-molecule coupling $\chi$ \cite{Qi}. \subsection{Two-body bound states } Here we focus on the two-body problem, so the chemical potential $\mu=0$. Due to the presence of the lattice potential, eigenstates can be classified according to \emph{quasi-momentum} $q\in[-K,K]$ (Note that the period of the lattice in Eq. (3) is half of the wave length of laser beam). Hereafter, $q$ and $K$ are understood as along the $x$-direction unless explicitly specified. It is expected that the eigen-energy would form a band structure. The two-body wave function can be written as \begin{align} |\psi\rangle & =\sum_{n}A_{n}|nK+q,g\rangle\notag\\ & +\sum_{n\vec{k}}B_{n,\vec{k}}|(nK+q)/2+\vec{k},\uparrow;(nK+q)/2-\vec{k},\downarrow\rangle,\label{eq:BoundState} \end{align} where $|nK+q,g\rangle$ is the molecular state with a center-of-mass momentum $nK+q$, $|(nK+q)/2+\vec{k},\uparrow;(nK+q)/2-\vec{k},\downarrow\rangle$ is the state of a pair of atoms with total momentum $nK+q$ and relative momentum $\vec{k}$ and with un-like spins. 
The two-particle Schr\"{o}dinger equation reads \begin{align} & H|\psi\rangle=E|\psi\rangle, \end{align} from which we determine coupled equations for the coefficients $A_{n}$ and $B_{n,k}$, \begin{eqnarray} EA_{n} & = & \left[\epsilon_{nK+q,g}+v_{g}+\frac{\Omega^{2}}{8\Delta}\right]A_{n}\notag\\ & & -\frac{\Omega^{2}}{16\Delta}\left[A_{n+2}+A_{n-2}\right]+\chi\sum_{\vec{k}}B_{n,\vec{k}},\notag\\ EB_{n,\vec{k}} & = & \left[\epsilon_{(nK+q)/2-\vec{k},a}+\epsilon_{(nK+q)/2+\vec{k},a}\right]B_{n,\vec{k}}\notag\\ & & +U\sum_{\vec{k}^{\prime}}B_{n\vec{k}^{\prime}}+\chi A_{n},\label{eq:AnBnBoundState} \end{eqnarray} where the molecular kinetic energy $\epsilon_{nK+q,g}=(nK+q)^{2}/4$ and the atomic kinetic energy $\epsilon_{(nK+q)/2\pm\vec{k},a}=[(nK+q)/2\pm\vec{k}]^{2}/2$. The above equation demonstrates that the molecular amplitudes of different momenta $A_{n}$ are coupled by the lattice potential. After eliminating the atomic amplitude $B_{n,\vec{k}}$, we obtain, \begin{eqnarray} EA_{n} & = & \left[\epsilon_{nK+q,g}+v_{g}+Z_{n}+\frac{\Omega^{2}}{8\Delta}\right]A_{n}\notag\\ & & -\frac{\Omega^{2}}{16\Delta}[A_{n+2}+A_{n-2}], \end{eqnarray} where, \begin{align} & Z_{n}=\frac{\chi^{2}f_{n}}{1-Uf_{n}},\notag\\ & f_{n}=\Sigma_{\vec{k}}\frac{1}{E-(\epsilon_{(nK+q)/2-\vec{k},a}+\epsilon_{(nK+q)/2+\vec{k},a})}.\notag \end{align} The bare parameters ($\chi$, $U$ and $v_{g}$) need to be renormalized to real physical observables \cite{Fu}, for example, \begin{align} v_{g}+Z_{n}\rightarrow v_{g0}+Z_{n0}=v_{g0}+\frac{\chi_{0}^{2}f_{n0}}{1-U_{0}f_{n0}}, \end{align} where \begin{align} f_{n0} & =\sum_{\vec{k}}\left[\frac{1}{E-[\epsilon_{(nK+q)/2-\vec{k},a}+\epsilon_{(nK+q)/2+\vec{k},a}]}+\frac{1}{\vec{k}^{2}}\right]\notag\\ & =\sum_{\vec{k}}\left[\frac{1}{E-[(nK+q)^{2}/4+\vec{k}^{2}]}+\frac{1}{\vec{k}^{2}}\right]\notag\\ & =\frac{\sqrt{-E+(nK+q)^{2}/4}}{4\pi}.\notag \end{align} Detailed expressions for real observables $v_{g0}$, $\chi_{0}$ and $U_{0}$ are given in the next section. 
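Since $E$ also enters Eq. (8) through $Z_{n0}(E)$, the bound-state energy must be determined self-consistently. A minimal numerical sketch (Python/NumPy; the truncation $|n|\le 4$, all parameter values, and the simplifications $q=0$, $U_0=0$ are illustrative assumptions in dimensionless units, using the renormalized $Z_{n0}$ with $f_{n0}=\sqrt{-E+(nK+q)^{2}/4}/(4\pi)$) locates $E<0$ such that the lowest eigenvalue of the $E$-dependent matrix equals $E$, by bisection:

```python
import numpy as np

# Toy, truncated version of Eq. (8) at q = 0 with illustrative parameters;
# n runs over the even integers |n| <= 4, which are the momenta coupled
# by the lattice term.
K, v_g, chi, U = 1.0, -2.0, 1.0, 0.0
lattice = 0.25                         # plays the role of Omega^2/(16 Delta)
ns = np.arange(-4, 5, 2)

def f0(E, n):
    # f_{n0} = sqrt(-E + (nK)^2/4) / (4 pi), valid for E < 0
    return np.sqrt(-E + (n * K) ** 2 / 4.0) / (4.0 * np.pi)

def H(E):
    Z = chi**2 * f0(E, ns) / (1.0 - U * f0(E, ns))
    M = np.diag((ns * K) ** 2 / 4.0 + v_g + Z + 2.0 * lattice)
    for i in range(len(ns) - 1):       # couple n and n + 2
        M[i, i + 1] = M[i + 1, i] = -lattice
    return M

def g(E):
    # Self-consistency mismatch: lowest eigenvalue of H(E) minus E
    return np.linalg.eigvalsh(H(E))[0] - E

# Bisection on (-10, 0); g changes sign there for these parameters.
lo, hi = -10.0, -1e-6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
E_bound = 0.5 * (lo + hi)              # self-consistent bound-state energy
```

The resulting eigenvector of $H(E_{\mathrm{bound}})$ gives the molecular amplitudes $A_n$, from which the atomic amplitudes follow via the expression for $\beta_n$ below.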
Eq. (8) differs from a standard eigenvalue problem in that the eigenvalue $E$ appears on both sides of the equation. Dividing both sides by $E$, we obtain, \begin{align} A_{n}= & \frac{\left[\epsilon_{nK+q,g}+v_{g0}+Z_{n0}+\Omega^{2}/(8\Delta)\right]}{E}A_{n}\notag\\ & -\frac{\Omega^{2}}{16\Delta E}\left[A_{n+2}+A_{n-2}\right]. \end{align} The above equation has the form, \begin{align} & |\psi\rangle=K(E)|\psi\rangle, \end{align} where the matrix elements of the kernel $K(E)$ depend on the eigenvalue $E$. By adjusting $E$ so that the kernel $K(E)$ has an eigenvalue equal to 1, we can determine all the eigenvalues and eigenvectors numerically. Then, from the molecular amplitudes ($A_{n}$), one can obtain the atomic amplitudes \begin{align} B_{n,\vec{k}}=\frac{\beta_{n}}{-E_{b}-[\epsilon_{(nK+q)/2-\vec{k},a}+\epsilon_{(nK+q)/2+\vec{k},a}]},\notag \end{align} where $E_{b}\equiv-E>0$ is the binding energy of the bound state and \begin{equation} \beta_{n}\equiv\frac{U_{0}\chi_{0}f_{n0}A_{n}}{1-U_{0}f_{n0}}+\chi_{0}A_{n}. \end{equation} \subsection{Radio-frequency spectroscopy of two-particle bound states} Two-particle bound states may be detected by the radio-frequency (rf) spectroscopy technique. The Hamiltonian of the rf process can be written as \cite{Chin2,Bartenstein,Hu} \begin{align} V_{rf} & =V_{0}\int d\vec{r}\left[\psi_{3}^{\dagger}(\vec{r})\psi_{\downarrow}(\vec{r})+\textrm{H.c.}\right],\notag\\ & =V_{0}\sum_{\vec{q}}\left[C_{\vec{q},3}^{\dagger}C_{\vec{q},\downarrow}+\textrm{H.c.}\right]. \end{align} It represents a transition process, where the atoms in the state $|\vec{q},\downarrow\rangle$ are transferred to a third, unoccupied state $|\vec{q},3\rangle$. Recall that the atomic part of the wave function of a two-particle bound state is given by, \begin{align} |\psi,a\rangle & =\sum_{n\vec{k}}B_{n,\vec{k}}|(nK+q)/2+\vec{k},\uparrow;(nK+q)/2-\vec{k},\downarrow\rangle.
\end{align} By acting $V_{rf}$ on this wave function, we obtain, \begin{align} & V_{rf}|\psi,a\rangle\notag\\ & =-V_{0}\sum_{n\vec{k}\vec{q}^{\prime}}B_{n\vec{k}}C_{\vec{q}^{\prime},3}^{\dagger}C_{(nK+q)/2+\vec{k},\uparrow}^{\dagger}C_{\vec{q}^{\prime},\downarrow}C_{(nK+q)/2-\vec{k},\downarrow}^{\dagger}|0\rangle,\notag\\ & =-V_{0}\sum_{n\vec{k}}B_{n\vec{k}}C_{(nK+q)/2-\vec{k},3}^{\dagger}C_{(nK+q)/2+\vec{k},\uparrow}^{\dagger}|0\rangle, \end{align} which gives us the final two-particle state after the rf pulse. Using Fermi's Golden Rule, the transfer strength of the rf process is given by the following Franck-Condon factor, \begin{align} \Gamma(\omega) & =\frac{1}{\mathcal{C}}\sum_{n,\vec{k}}|B_{n,\vec{k}}|^{2}\delta\left(\omega-\left[\frac{(nK+q)^{2}}{4}+\vec{k}^{2}+E_{b}\right]\right), \end{align} where the $\delta$-function guarantees energy conservation during the rf process and $\mathcal{C}=\sum_{n,\vec{k}}|B_{n,\vec{k}}|^{2}$ is the normalization constant. By introducing $E_{n}=E_{b}+(nK+q)^{2}/4$, we find that $f_{n0}=\sqrt{E_{b}+(nK+q)^{2}/4}/4\pi=\sqrt{E_{n}}/4\pi$, $|B_{n\vec{k}}|^{2}=\beta_{n}^{2}/(E_{n}+\vec{k}^{2})^{2}$, and $\mathcal{C}=\sum_{n}\beta_{n}^{2}/[8\pi\sqrt{E_{n}}]$. The Franck-Condon factor can then be rewritten as, \begin{equation} \Gamma(\omega)=\sum_{n}\frac{\beta_{n}^{2}}{4\pi^{2}\mathcal{C}}\frac{\sqrt{\omega-E_{n}}}{\omega^{2}}\theta(\omega-E_{n}),\label{eq:FC} \end{equation} where $\theta(x)$ is the Heaviside step function. Therefore, once we obtain $E_{b}$ and $A_{n}$, the rf transfer strength can be calculated straightforwardly. \subsection{Two-particle scattering states} We now consider the low-energy scattering state with energy $E>0$ and $E\ll K^{2}$. Here we focus on the isotropic $s$-wave scattering at the quasi-momentum $q=0$. Without loss of generality, we assume that the incident wave propagates along the $z$-direction.
The scattering wave function can be written as \begin{eqnarray} |\psi\rangle & = & |k_{z},\uparrow;-k_{z},\downarrow\rangle+\sum_{n}A_{n}|nK,g\rangle\notag\\ & & +\sum_{n,\vec{k}}B_{n,\vec{k}}|nK/2+\vec{k},\uparrow;nK/2-\vec{k},\downarrow\rangle, \end{eqnarray} where the first term on the right-hand side $|k_{z},\uparrow;-k_{z},\downarrow\rangle$ stands for the incident state of two atoms with the total momentum $0$, relative momentum $k_{z}$ and energy $E=k_{z}^{2}$. By substituting the wave function into the two-particle Schr\"{o}dinger equation, we obtain, \begin{eqnarray} EA_{n} & = & \left[\epsilon_{nK,g}+v_{g}+\frac{\Omega^{2}}{8\Delta}\right]A_{n}-\frac{\Omega^{2}}{16\Delta}\left[A_{n+2}+A_{n-2}\right]\notag\\ & & +\chi\sum_{\vec{k}}B_{n,\vec{k}},\notag\\ EB_{n,\vec{k}} & = & \left[\epsilon_{nK/2-\vec{k},a}+\epsilon_{nK/2+\vec{k},a}\right]B_{n,\vec{k}}\notag\\ & & +U\sum_{\vec{k}^{\prime}}B_{n\vec{k}^{\prime}}+\chi A_{n}+U\delta_{n,0}. \end{eqnarray} Here, compared with Eq. (\ref{eq:AnBnBoundState}), the extra term $U\delta_{n,0}$ in the last line comes from the incident state with zero total momentum. It is important to note that, traditionally, in the absence of optical lattices the atomic and molecular states are referred to as the open and closed channels, respectively. In our case with the lattice potential, this two-channel viewpoint should be generalized, as the dispersion relation is now folded into discrete energy bands (i.e., different $n$). That is, we may classify any atomic states with a nonzero band index $n\neq0$ as a closed channel \cite{Qi}. As a result, with the lattice potential we are now dealing with a \emph{multi-channel} scattering problem, instead of the usual two-channel problem. As we shall see later, this multi-channel viewpoint is crucial for understanding the width of scattering resonances.
By adopting a similar strategy of eliminating the atomic amplitudes $B_{n,\vec{k}}$ as in the bound-state calculation, we obtain, \begin{eqnarray} EA_{n} & = & \left[\epsilon_{nK,g}+v_{g}+Z_{n}+\frac{\Omega^{2}}{8\Delta}\right]A_{n}\notag\\ & & -\frac{\Omega^{2}}{16\Delta}\left[A_{n+2}+A_{n-2}\right]+\frac{Z_{n}U}{\chi}\delta_{n,0}. \end{eqnarray} After the renormalization, the equation becomes \begin{eqnarray} EA_{n} & = & \left[\epsilon_{nK,g}+v_{g0}+Z_{n0}+\frac{\Omega^{2}}{8\Delta}\right]A_{n}\notag\\ & & -\frac{\Omega^{2}}{16\Delta}\left[A_{n+2}+A_{n-2}\right]+\left(\frac{Z_{n0}U_{0}}{\chi_{0}}+\chi_{0}\right)\delta_{n,0}. \end{eqnarray} We can solve the above linear equation to obtain the molecular amplitudes $A_{n}$, and then the atomic amplitudes $B_{n\vec{k}}$ through the expression $B_{n\vec{k}}=\beta_{n}^{\prime}/[-(E_{n}^{\prime}+\vec{k}^{2})]$, where $E_{n}^{\prime}=-E+(nK)^{2}/4$ and \begin{equation} \beta_{n}^{\prime}=\frac{U_{0}^{2}f_{n0}\delta_{n,0}+U_{0}\chi_{0}f_{n0}A_{n}}{1-U_{0}f_{n0}}+\chi_{0}A_{n}+U_{0}\delta_{n,0}. \end{equation} \subsection{Spatially modulated interatomic interactions} In coordinate space, the atomic part of the scattering wave function can be written as, \begin{align} \langle r|\psi,a\rangle & =e^{ik_{z}z}+\sum_{n,\vec{k}}B_{n,\vec{k}}e^{inKX+i\vec{k}\cdot\vec{r}},\notag\\ & =e^{ik_{z}z}+\sum_{n}\frac{1}{(2\pi)^3}\int d^3\vec{k} \frac{-\beta^{\prime}_{n}}{E^{\prime}_n+\vec{k}^2}e^{inKX+i\vec{k}\cdot\vec{r}},\notag\\ & =e^{ik_{z}z}-\frac{\beta_{0}^{\prime}e^{i\sqrt{E}r}}{4\pi r}-\sum_{n\neq0}\frac{\beta_{n}^{\prime}e^{inKX}e^{-\sqrt{E_{n}^{\prime}}r}}{4\pi r}.\label{eq:wfs1} \end{align} We can see that the wave functions of the closed channels ($n\neq0$) all decrease exponentially with increasing $r$. As the incident energy $E\rightarrow0$, the $s$-wave scattering amplitude is given by $f_{0}=-\beta_{0}^{\prime}/4\pi$, from which we determine the $s$-wave scattering length \begin{equation} a_{eff}=-f_{0}=\frac{\beta_{0}^{\prime}}{4\pi}.
\end{equation} On the other hand, at short range ($r\rightarrow0$), the atomic part of the scattering wave function can be expressed as \begin{align} \langle r|\psi,a\rangle & \propto1/r-1/a_{loc}(X)+o(r),\label{eq:wfs2} \end{align} where $o(r)$ denotes a quantity of the same order of magnitude as $r$, and $a_{loc}(X)$ can be interpreted as the \emph{local} $s$-wave scattering length. Comparing Eq. (\ref{eq:wfs1}) with Eq. (\ref{eq:wfs2}), we obtain the expression of the local $s$-wave scattering length \cite{Qi} \begin{align} a_{loc}(X) & =\frac{1-\sum_{n\neq0}U_{n}\cos(nKX)/U_{0}}{1/a_{eff}-\sum_{n\neq0}U_{n}|n|K\cos(nKX)/U_{0}}, \end{align} where \begin{eqnarray} U_{0} & = & +\frac{\beta_{0}^{\prime}}{4\pi},\notag\\ U_{n} & = & -\frac{\beta_{n}^{\prime}}{4\pi}. \end{eqnarray} Note that, when we construct an effective many-body Hamiltonian of our system, the interaction Hamiltonian may be modeled by using the local scattering length $a_{loc}(X)$ \cite{Qi}, which is position dependent. Thereby, the lattice potential gives rise to a spatially modulated interatomic interaction. \section{Results and discussion} Taking an ultracold Fermi gas of $^{40}$K atoms as an example \cite{Fu}, at the magnetic Feshbach resonance $B_{0}=202.20\pm0.02$ G the background scattering length is $a_{bg}\simeq174a_{B}$ ($a_{B}$ is the Bohr radius), the difference between the magnetic moments of the atoms and of the ground-state molecules is $\mu_{ag}=2\mu_{a}-\mu_{g}\simeq2\mu_{B}$ ($\mu_{B}$ is the Bohr magneton), and the width of the resonance is $W\simeq7.04\pm0.10$ G. In the following calculations, we take the natural units: the mass of atoms $m=1$, the background scattering length $a_{bg}=1$, and $\hbar=1$. Therefore, energy is measured in units of $\hbar^{2}/ma_{bg}^{2}$. We take the parameters: $B-B_{0}=-0.6$ G; $\Omega=2\pi\hbar\times0.07$ GHz; the wavelength of the laser $\lambda=780$ nm; the wave vector $K=2\pi/\lambda$.
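The spatial profile of $a_{loc}(X)$ above is easy to tabulate once $a_{eff}$ and the ratios $U_{n}/U_{0}$ are known. The following sketch uses illustrative placeholder values for these inputs (they are not the fitted values of the text); it also checks that the modulation period is set by the lattice, e.g., $\pi/K$ when only the $n=\pm2$ harmonics contribute:

```python
import math

def a_loc(X, a_eff, K, u_ratio):
    """Local s-wave scattering length a_loc(X).

    u_ratio -- dict mapping band index n (n != 0) to the ratio U_n / U_0
    """
    num = 1.0 - sum(u * math.cos(n * K * X) for n, u in u_ratio.items())
    den = 1.0 / a_eff - sum(u * abs(n) * K * math.cos(n * K * X)
                            for n, u in u_ratio.items())
    return num / den

# With all U_n = 0 the modulation vanishes and a_loc reduces to a_eff;
# with only n = +/-2 terms the period of a_loc(X) is pi/K.
```

A weak modulation produces a small cosine-like variation around $a_{eff}$, matching the qualitative behavior described for the dashed line of Fig. \ref{fig5}.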
The physical observables mentioned earlier are related to the above experimental parameters by the expressions, \begin{eqnarray} U_{0} & = & 4\pi\hbar^{2}a_{bg}/m,\notag\\ \chi_{0} & = & 2\hbar\sqrt{\pi a_{bg}W\mu_{ag}/m},\notag\\ v_{g0} & = & \mu_{ag}(B-B_{0}). \end{eqnarray} \begin{figure}[H] \centering \includegraphics[scale=0.5]{Figure11}\\[0pt] \caption{(Color online). The bound-state energy spectrum with optical coupling $\Omega^{2}/16\Delta=0.0342$ (dotted blue lines) and $-0.0342$ (solid red lines), respectively. The dashed line indicates the lowest energy of the bound states without the lattice potential (i.e., $\Omega^{2}/16\Delta=0$). } \label{fig1} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.5]{Figure22}\\[0pt] \caption{(Color online). The Franck-Condon factor. The dashed blue, dotted black, and solid red lines correspond to bound states 1, 2, and 3 in Fig. \ref{fig3}(a), respectively {[}see the three bound states denoted by arrow heads on the rightmost line of the panel (a) of Fig. \ref{fig3}{]}. } \label{fig2} \end{figure} In Fig. \ref{fig1} we show the bound-state energies. The dotted blue and solid red lines correspond to the blue ($\Delta>0$) and red detunings ($\Delta<0$), respectively. As anticipated, overall the red detuning gives rise to a lower energy for the two-particle bound states. Fig. \ref{fig2} reports the rf spectroscopy of the three lowest bound states located at quasi-momentum $q=0$ for different lattice depths. We find that the smaller the binding energy is, the sharper the rf signal is (see the dashed blue line in Fig. \ref{fig2}). This is because, when the binding energy approaches zero, the bound-state wave function extends widely in coordinate space. Accordingly, the wave function in momentum space concentrates near zero momentum, so the overlap of wave functions, which gives the Franck-Condon factor, reaches a large value near zero energy.
Due to the coupling of different total momenta, for each bound state the atomic part of its wave function is a linear superposition of components with different total momenta, as shown in Eq. (\ref{eq:BoundState}). This results in additional bumps in the rf spectroscopy; see, for example, Eq. (\ref{eq:FC}). Therefore, we may identify the bumps as a unique characteristic of the energy band structure due to the lattice potential. With increasing lattice potential strength, the bumps become more evident. The result of Fig. \ref{fig2} can be directly verified in current cold-atom experiments by using rf spectroscopy \cite{Bartenstein,Fu}. \begin{figure}[H] \centering \includegraphics[scale=0.5]{Figure33}\\[0pt] \caption{(Color online). The bound states and their corresponding Feshbach resonances at the quasi-momentum $q=0$. Panel (a) gives the evolution of the bound-state energies with increasing lattice depth. Panel (b) shows the $s$-wave scattering length ($a_{eff}$). The solid red (dashed blue) lines correspond to the case of $\Delta<0$ ($\Delta>0$). In (a), the dotted lines show the energy branches that do not induce a resonance when they cross zero energy. } \label{fig3} \end{figure} To show the evolution of the energy band structure as a function of the lattice depth $\Omega^{2}/16\Delta$, we report $E(q=0)$ in Fig. \ref{fig3}. With increasing lattice depth, in the case of red detuning ($\Delta<0$), more bound states emerge {[}see panel (a) of Fig. \ref{fig3}{]}, while in the case of blue detuning ($\Delta>0$), the energies of the bound states move upward and cross zero energy. The corresponding evolution of the $s$-wave scattering length is shown in panel (b) of Fig. \ref{fig3}. A resonance occurs in the $s$-wave scattering length when the energy of a bound state crosses zero, as one may anticipate. However, not all the energy branches induce a resonance when they cross zero energy.
In the panel (a), the wave function of the bound states shown in the dotted lines is antisymmetric with respect to the momentum $nK=0$ (i.e., $\beta_{n}^{\prime}=-\beta_{-n}^{\prime}$), implying $\beta_{0}^{\prime}=0$. As a result, the $s$-wave scattering length $a_{eff}=\beta_{0}^{\prime}/4\pi=0$ and hence these bound states do not result in any Feshbach resonance. We note that the resonance induced by a spatially modulated atom-molecule coupling has been previously discussed in the case of optical Feshbach resonance \cite{Qi}. The appearance of a resonance was similarly found to depend on the symmetry of the bound state. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline No. & A & B & C & D & E\tabularnewline \hline $(\Omega^{2}/16\Delta)_{0}$ & $-1.385$ & $-0.78$ & $-0.3279$ & $0.01399$ & $0.7807$ \tabularnewline \hline $W$ & $0.3526$ & $0.2535$ & $0.1403$ & $0.0045$ & $10.4948$ \tabularnewline \hline $E_{\text{uni}}$ & $5.1\times10^{-5}$ & $3.3\times10^{-5}$ & $1.1\times10^{-5}$ & $7.6\times10^{-9}$ & $0.0046$ \tabularnewline \hline \end{tabular} \caption{The resonance positions and widths of the five Feshbach resonances shown in panel (b) of Fig. \ref{fig3} (in units of $\hbar^{2}/ma_{bg}^{2}$). $E_{\text{uni}}$ is an energy scale associated with the regime for universal two-body bound states [see Eq. (A.13) in the Appendix].} \end{table} \begin{figure}[H] \centering \includegraphics[scale=0.5]{Figure44}\\[0pt] \caption{(Color online). The atomic amplitudes as a function of the band index $n$ near the Feshbach resonances \textbf{D} and \textbf{E} (corresponding to the two resonances with blue detuning in panel (b) of Fig. \ref{fig3}). From the upper panel (a) for the resonance \textbf{D}, we see that the closed channels ($n=\pm2$) have the largest amplitudes. Therefore, we interpret the resonance \textbf{D} as a closed-channel-dominated resonance.
On the contrary, the lower panel (b) for the resonance \textbf{E} corresponds to an entrance-channel-dominated resonance.} \label{fig4} \end{figure} Different from the case of optical Feshbach resonance \cite{Qi}, however, in our case the width of the spatial-modulation-induced resonance varies significantly upon changing the depth of the optical lattice potential. Near resonance, the scattering length can be written as \begin{equation} a_{eff}=a_{bg}\left[1-\frac{W}{(\Omega^{2}/16\Delta)-(\Omega^{2}/16\Delta)_{0}}\right], \end{equation} where $(\Omega^{2}/16\Delta)_{0}$ and $W$ are the resonance position and width, respectively. In Table I, we calculate the widths of the five Feshbach resonances shown in panel (b) of Fig. \ref{fig3} (for details see the Appendix). The resonance position is obtained by fitting our numerical data. Generally, the widths of the Feshbach resonances are greatly influenced by the other atomic closed channels (see Table I). In the absence of the optical lattice potential, the resonance width of $^{40}$K atoms near the magnetic field $B_{0}=202.20\pm0.02$ G is about $W\sim3.3$, in the energy unit of $\hbar^{2}/ma_{bg}^{2}$. From Table I, we find that, in the presence of the lattice potential, the resonance width can be one order of magnitude larger or smaller than that without the lattice potential. For large blue detuning, the width of resonance \textbf{E} is extremely large. For red detuning, we find that the width becomes larger with increasing optical coupling depth $|\Omega^{2}/\Delta|$. As a result, we can access a very wide Feshbach resonance by choosing the zero-energy bound state at large lattice depth. Fig. \ref{fig4} reports the atomic amplitudes $\left|\beta_{n}^{\prime}\right|$ near the Feshbach resonances \textbf{D} and \textbf{E} {[}see panel (b) of Fig. \ref{fig3}{]}. The width of the resonance \textbf{D} is very small.
It is a closed-channel-dominated resonance, in the sense that the atomic amplitudes of the closed channels $\beta_{n=\pm2}^{\prime}$ take the largest values relative to the open channel ($\beta_{n=0}^{\prime}$) {[}see Fig. \ref{fig4}(a){]}. On the contrary, the resonance \textbf{E} has a very large resonance width and the atomic amplitude peaks at $n=0$. \begin{figure}[H] \centering \includegraphics[scale=0.5]{Figure55}\\[0pt] \caption{(Color online). The local $s$-wave scattering length near the resonance \textbf{D}. The solid and dashed lines have the scattering lengths $a_{eff}=12.88a_{bg}$ and $a_{eff}=12.81a_{bg}$, respectively.} \label{fig5} \end{figure} It is worth noting that, although there is a modulated lattice, universal two-body bound states near the zero-energy threshold still exist (see the Appendix), whose energy is approximately $E\propto -1/a^2_{eff}$. However, the universal regime may be extremely small because of the influence of the other atomic closed channels. In Table I, we calculate a characteristic energy scale $E_{\text{uni}}$ for each resonance, which determines the size of the universal regime. Only when the energy satisfies $|E|\ll E_{\text{uni}}$ is the universal expression $E\propto -1/a^2_{eff}$ valid (see the Appendix). From Table I, we find that the universal regimes for the Feshbach resonances \textbf{A}, \textbf{B}, \textbf{C}, and \textbf{D} are all extremely small. This explains why we cannot see the universal behavior in Fig. \ref{fig3}. However, the Feshbach resonance \textbf{E} has a relatively large universal regime compared with the others. As a result, the corresponding energy curve looks like a quadratic parabola near the Feshbach resonance \textbf{E}. Fig. \ref{fig5} shows the spatial dependence of the local $s$-wave scattering length. It is easy to see that the variation period of the scattering length $a_{loc}(X)$ is directly determined by the optical lattice potential.
For a weak lattice potential (dashed line), the variation of the local scattering length follows a cosine function. The mean value of the local scattering length is roughly equal to the $s$-wave scattering length $a_{eff}$. For a stronger lattice potential (solid line), although $a_{eff}$ is nearly the same, the value of the local scattering length changes drastically, from positive to negative, as the position $X$ changes. This implies that a reasonably large lattice potential has crucial effects on spatially modulated interatomic interactions, similar to what has already been seen in the case of an optical Feshbach resonance \cite{Qi}. \section{Summary} In conclusion, we have investigated how to tune a magnetic Feshbach resonance by using standing-wave laser light that drives a molecular bound-to-bound transition. The two-particle bound states and scattering states (or scattering lengths) are significantly affected by the standing-wave light. A band structure is formed and a series of zero-energy scattering states appears. As a result, a number of laser-induced Feshbach resonances emerge, whose positions and widths can be tuned by changing the depth of the standing-wave laser. The resulting $s$-wave scattering length near resonance shows a strong spatial dependence. This provides a new tool to control interatomic interactions and therefore opens a new route to study many interesting many-body phenomena, for example, exotic solitons, spatially inhomogeneous BCS superfluidity or BEC-BCS crossover, and the self-trapping of BECs induced by spatially modulated interatomic interactions. Our proposed scheme can be directly examined in current experiments with an ultracold Fermi gas of $^{40}$K atoms. Indeed, the optical control of the interaction between $^{40}$K atoms near the broad Feshbach resonance at $B_{0}=202.20\pm0.02$ G has recently been demonstrated \cite{Fu}, by using a spatially homogeneous laser.
Our scheme is straightforward to implement by replacing the homogeneous laser with a standing-wave laser. The predicted energy band structure and the series of laser-induced Feshbach resonances could be easily observed by using radio-frequency spectroscopy and atomic loss spectroscopy. We note that our calculations apply to bosonic systems as well. In that case, the spatially modulated interatomic interaction can be observed through the measurement of the mean-field energy of BECs \cite{Yamazaki}. \begin{acknowledgments} This work was supported by the NKBRSFC under Grants Nos. 2011CB921502 and 2012CB821305, the NSFC under Grants Nos. 61227902, 61378017, and 11434015, SKLQOQOD under Grant No. KF201403, and SPRPCAS under Grant No. XDB01020300. H.H. was supported by the Australian Research Council (ARC) Discovery Projects (Grant Nos. FT130100815 and DP140103231). \end{acknowledgments}
\section{Introduction} The investigation of wormholes is a very hypothetical but interesting research topic. One may study wormholes in the Lorentzian signature; if the wormhole throat is time-like and traversable, then the averaged null energy condition should be violated in a certain way, e.g., by introducing exotic matter or quantum effects \cite{Morris:1988cz}. One may also study wormholes in the Euclidean signature \cite{Hawking:1988ae}; this topic then has applications for quantum gravity \cite{Coleman:1988tj} and quantum cosmology \cite{Chen:2016ask}. In both cases, Lorentzian or Euclidean, one needs to introduce exotic matter in order to obtain a wormhole throat. However, there are two troubles with this (for more details, see \cite{Visser:1995cc}). \begin{itemize} \item[(1)] Can we introduce the (at least, effective) exotic matter within a well-known, safe field theory without the menace of phantoms \cite{Cline:2003gs}? \item[(2)] Why should the two separated spaces be joined at the throat, i.e., is there a mechanism to bring the two spaces to one place? \end{itemize} These two are challenging but difficult questions. Recently, however, it was suggested, especially by string theorists, that wormholes play an important role within string theory. For example, in order to overcome the inconsistency \cite{Yeom:2008qw,Almheiri:2012rt} of black hole complementarity \cite{Susskind:1993if} and resolve the firewall paradox \cite{Hwang:2012nn}, the Einstein-Rosen/Einstein-Podolsky-Rosen (ER=EPR) conjecture has been suggested \cite{Maldacena:2013xja}, which is related to a space-like wormhole, the Einstein-Rosen bridge. Later, it was shown that semi-classical effects can render the Einstein-Rosen bridge traversable \cite{Chen:2016nvj}. Moreover, it was recently reported that the existence of wormholes can be allowed within string theory once suitable semi-classical quantum effects are turned on \cite{Gao:2016bin}.
Therefore, it is interesting to study whether there can be other time-like wormhole solutions within string theory or not. In this paper, we focus on the Gauss-Bonnet-dilaton gravity, which can be embedded in string theory \cite{Metsaev:1987zx}. Previously, it was reported that there exist Lorentzian wormhole solutions within this theory \cite{Kanti:2011jz}. This is quite impressive, since now we can say that string theory seems to allow time-like wormholes even at the classical level. Also, energy conditions will be effectively violated, but we may trust the model since string theory can be regarded as a candidate for the UV-completion of gravity. On the other hand, for all models of wormholes, the dynamical construction was not yet known. In this paper, we argue that Euclidean wormholes can be analytically continued to a time-like Lorentzian wormhole. We find a Euclidean wormhole solution in the Gauss-Bonnet-dilaton gravity, and hence the solution is allowed in the string theory context. Therefore, based on our model, we can address the two difficult questions above: (1) we use a string-inspired model which can be regarded as a UV-completion, and (2) we explain the dynamical creation of a wormhole \textit{ex nihilo} by quantum mechanical processes. This paper is organized as follows. In Sec.~\ref{sec:mod}, we describe the details of the Gauss-Bonnet-dilaton gravity model. In Sec.~\ref{sec:sol}, we describe the details of the solution as well as its properties. In Sec.~\ref{sec:dis}, we give some comments on the applications of wormholes and discuss possible future work.
\section{\label{sec:mod}Model} The four-dimensional string effective action can be expanded as follows \cite{Metsaev:1987zx}: \begin{eqnarray} S = \int d^{4}x \sqrt{-g} \left[ \frac{R}{2\kappa^{2}} - \frac{1}{2} \left(\nabla \phi\right)^{2} - V(\phi) + \frac{\lambda}{2} e^{-c \phi} \left( \mathcal{C}_{1} R_{\mathrm{GB}}^{2} + \mathcal{C}_{2} \left( \nabla \phi \right)^{4} \right) \right], \end{eqnarray} where $R$ is the Ricci scalar, $\kappa^{2} = 8\pi$, $\phi$ is the dilaton field with potential $V(\phi)$, $c$ is a constant, $\lambda$ is the coupling constant proportional to the $\alpha'$ parameter, and \begin{eqnarray} R_{\mathrm{GB}}^{2} = R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} - 4 R_{\mu\nu}R^{\mu\nu} + R^{2} \end{eqnarray} is the Gauss-Bonnet term. Here, $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ are model-dependent parameters, whose ratio $\mathcal{C}_{2}/\mathcal{C}_{1}$ takes the values $2$, $0$, and $1$ for the three types of string theories, i.e., the bosonic string theory, the type-II superstring theory, and the heterotic superstring theory, respectively \cite{Kanti:1995vq}. Therefore, the type-II superstring theory includes the Gauss-Bonnet-dilaton gravity model: \begin{eqnarray} S = \int d^{4}x \sqrt{-g} \left[ \frac{R}{2\kappa^{2}} - \frac{1}{2} \left(\nabla \phi\right)^{2} - V(\phi) + \frac{1}{2} \xi(\phi) R_{\mathrm{GB}}^{2} \right], \end{eqnarray} where \begin{eqnarray} \xi(\phi) = \lambda e^{- c (\phi-\phi_{0})} \end{eqnarray} with constants $\lambda$, $c$, and $\phi_{0}$, without loss of generality. In order to investigate non-perturbative effects of the model, we introduce the Hartle-Hawking wave function as a Euclidean path integral \cite{Hartle:1983ai}: \begin{eqnarray} \Psi\left[ h_{\mu\nu}, \psi \right] = \int \mathcal{D}g_{\mu\nu} \mathcal{D} \phi \;\; e^{- S_{\mathrm{E}}\left[g_{\mu\nu}, \phi \right]}, \end{eqnarray} where we sum over all regular Euclidean field combinations that satisfy $\partial g_{\mu\nu} = h_{\mu\nu}$ and $\partial \phi = \psi$.
By using the steepest-descent approximation, this path integral is well approximated by Euclidean on-shell solutions, the so-called instantons \cite{Hartle:1983ai,Hartle:2007gi,Hwang:2011mp}. Especially, we investigate instantons with the following $O(4)$-symmetric metric ansatz: \begin{eqnarray} ds_{\mathrm{E}}^{2} = d\tau^{2} + a^{2}(\tau) d\Omega_{3}^{2}, \end{eqnarray} where \begin{eqnarray} d\Omega_{3}^{2} = d\chi^{2} + \sin^{2}{\chi} \left( d\theta^{2} + \sin^{2}{\theta} d\varphi^{2} \right) \end{eqnarray} is the three-sphere. \subsection{Equations of motion} The equations of motion in the Euclidean signature are \cite{Koh:2014bka}: \begin{eqnarray} \label{Eq1} H^{2} &=& \frac{\kappa^{2}}{3} \left[ \frac{1}{2} \dot{\phi}^{2} - V + \frac{3K}{\kappa^{2} a^{2}} - 12 \dot{\xi} H \left( - H^{2} + \frac{K}{a^{2}} \right) \right],\\ \label{Eq2} \dot{H} &=& -\frac{\kappa^{2}}{2} \left[ \dot{\phi}^{2} + \frac{2K}{\kappa^{2} a^{2}} + 4\ddot{\xi} \left(-H^{2} + \frac{K}{a^{2}} \right) + 4 \dot{\xi} H \left(-2 \dot{H} + H^{2} - \frac{3K}{a^{2}} \right) \right],\\ \label{Eq3} 0 &=& \ddot{\phi} + 3 H \dot{\phi} - V' - 12 \xi' \left( - H^{2} + \frac{K}{a^{2}} \right) \left( \dot{H} + H^{2} \right), \end{eqnarray} where $H = \dot{a}/a$, $K = +1$, and $\kappa^{2} = 8\pi$. This set of equations is consistent with \cite{Ro:2016kyu}. Eqs.~(\ref{Eq2}) and (\ref{Eq3}) will be used to numerically solve for the variables. Eq.~(\ref{Eq1}) is the constraint equation, which simplifies (if $a \neq 0$) to \begin{eqnarray} 0 = 6a \left( K - \dot{a}^{2} \right) - 24 \kappa^{2} \dot{a} \left( K - \dot{a}^{2} \right) \dot{\phi} \xi' + \kappa^{2} a^{3} \left( \dot{\phi}^{2} - 2V \right). \end{eqnarray} We solve Eqs.~(\ref{Eq2}) and (\ref{Eq3}) for $\ddot{a}$ and $\ddot{\phi}$:
\begin{eqnarray} \label{eq:a}\ddot{a} &=& - \frac{a^{2}}{2} \mathcal{F},\\ \label{eq:phi}\ddot{\phi} &=& V' - 3 \frac{\dot{a}}{a} \dot{\phi} - \frac{6 \xi' \left(K - \dot{a}^{2}\right)}{a} \mathcal{F}, \end{eqnarray} where \begin{equation} \mathcal{F} \equiv \frac{2a (K-\dot{a}^{2}) + \kappa^{2} a^{3} \dot{\phi}^{2} - 4\kappa^{2}\xi''(- K a \dot{\phi}^{2} + a \dot{a}^{2} \dot{\phi}^{2} ) - 4\kappa^{2} \xi'( - K a V' + a \dot{a}^{2} V' + 6 K \dot{a} \dot{\phi} - 6 \dot{a}^{3} \dot{\phi})}{a^{4} - 4\kappa^{2} a^{3} \dot{a} \dot{\phi} \xi' + 24 K^{2} \kappa^{2} \xi'^{2} - 48 K \kappa^{2} \dot{a}^{2} \xi'^{2} + 24 \kappa^{2} \dot{a}^{4} \xi'^{2}}. \end{equation} These expressions are useful since the right-hand sides of Eqs.~(\ref{eq:a}) and (\ref{eq:phi}) are functions of the variables and their first derivatives only, i.e., $a$, $\dot{a}$, $\phi$, and $\dot{\phi}$. \subsection{Solving techniques} Usually, one gives the initial condition at a certain point and solves the equations. One important point is to impose regular boundary conditions at the ends of the solution if the solution is compact. In order to make this boundary value problem simpler, we solve $V$ as a function of $\tau$ rather than directly solving $\phi$. This means that we first fix the solution $\phi$ and solve the equations for $a$ and $V$. As we solve $V$ instead of $\phi$, one can choose the shape of the field arbitrarily in principle, and we use \begin{eqnarray} \phi(\tau) = \phi_{0} + \frac{\left(\phi_{1} - \phi_{0} \right)}{12\pi} \left[ 12 \frac{\tau}{\Delta} - 8\sin \frac{2\tau}{\Delta} + \sin \frac{4\tau}{\Delta} \right] \end{eqnarray} for $0 \leq \tau \leq \pi\Delta$, while $\phi(\tau) = \phi_{1}$ for $\tau > \pi\Delta$ and $\phi(\tau) = \phi_{0}$ for $\tau < 0$ (Fig.~\ref{fig:phi}). This function is continuous up to third-order derivatives. Here, $\phi_{0}$, $\phi_{1}$, and $\Delta$ are free parameters.
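As a cross-check, the interpolating profile can be coded directly; the sketch below (the function name is ours) uses the parameter values of Fig.~\ref{fig:phi} as defaults and verifies the smooth matching at $\tau=0$ and $\tau=\pi\Delta$:

```python
import math

def phi_profile(tau, phi0=0.0, phi1=0.08, delta=0.8):
    """Interpolating dilaton profile: phi0 for tau < 0, phi1 for tau > pi*delta.

    The combination 12x - 8 sin 2x + sin 4x makes phi continuous up to
    third-order derivatives at both matching points.
    """
    if tau <= 0.0:
        return phi0
    if tau >= math.pi * delta:
        return phi1
    x = tau / delta
    return phi0 + (phi1 - phi0) / (12.0 * math.pi) * (
        12.0 * x - 8.0 * math.sin(2.0 * x) + math.sin(4.0 * x))
```

At $\tau=\pi\Delta$ the bracket equals $12\pi$, so the profile ends exactly at $\phi_{1}$; expanding $12x-8\sin 2x+\sin 4x = 6.4\,x^{5}+O(x^{7})$ shows that the first three derivatives vanish at the matching points.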
\begin{figure} \begin{center} \includegraphics[scale=0.75]{phi} \caption{\label{fig:phi}An example of $\phi(\tau)$, where $\phi_{0} = 0$, $\phi_{1} = 0.08$, and $\Delta = 0.8$.} \end{center} \end{figure} Finally, the equation for $V$ becomes $\dot{V} = \dot{\phi} V'$, where \begin{equation} V' = \frac{a^{5} \ddot{\phi} + 6 \kappa^{2} a^{3} (K - 3 \dot{a}^{2}) \dot{\phi}^{2} \xi' - 12 \xi' (K - \dot{a}^{2})^{2} ( 6 \kappa^{2} \dot{a} \dot{\phi} \xi' - a - 2 a \kappa^{2} \dot{\phi}^{2} \xi'' - 2 a \kappa^{2} \ddot{\phi} \xi') + a^{4} \dot{a} \dot{\phi} (3 - 4 \kappa^{2} \ddot{\phi} \xi')}{a^{4} (a - 4\kappa^{2} \dot{a} \dot{\phi} \xi')}. \end{equation} At first glance, this may not seem legitimate, since the potential is determined by the shape of the solution rather than chosen beforehand. However, this is still a good strategy (for example, see \cite{Kanno:2012zf}) if (1) we do not know the exact shape of the potential of the theory from first principles and (2) we would like to know whether a specific type of solution exists or not. Once one can construct a solution with some shape of the potential, the theory does not disallow such a solution in principle; the next task is then the justification of the potential that allows such a solution. In this paper, we will mainly focus on the first part of the question: the existence of the solution in the Gauss-Bonnet-dilaton gravity. \subsection{Conditions for wormholes} In order to have a Euclidean wormhole, we need $\ddot{a} > 0$ when $\dot{a} = 0$ and $a > 0$. The necessary conditions are \begin{eqnarray} \ddot{a} &=& - \frac{a^{3}}{2} \left( \frac{2 K + \kappa^{2} \dot{\phi}^{2} (a^{2} + 4 \xi'' K) + 4 \kappa^{2} \xi' K V'}{a^{4} + 24 K^{2} \kappa^{2} \xi'^{2}} \right),\\ \dot{\phi}^{2} &=& - \frac{6K}{\kappa^{2} a^{2}} + 2 V.
\end{eqnarray} These are equivalent to \begin{eqnarray} \frac{1}{2\kappa^{2}} + \xi' V' + \left( \frac{a^{2}}{4K} + \xi'' \right) \dot{\phi}^{2} &<& 0,\\ V - \frac{3 K}{\kappa^{2} a^{2}} &\geq& 0. \end{eqnarray} These conditions are useful to figure out the properties of wormholes. In particular, the Euclidean wormhole throat can exist only if $V > 0$ and $\xi' V' < 0$ are satisfied. \begin{figure} \begin{center} \includegraphics[scale=0.75]{atau} \includegraphics[scale=0.75]{dotatau} \caption{\label{fig:atau}$a(\tau)$ (left) and $\dot{a}(\tau)$ (right).} \includegraphics[scale=0.75]{Vtau} \caption{\label{fig:Vtau}$V(\tau)$.} \end{center} \end{figure} \subsection{Initial conditions} For the numerical solving procedure, except for the model parameters that we have mentioned, we need to choose the initial conditions: \begin{eqnarray} a(0) &=& \sqrt{\frac{6K}{\kappa^{2} \left(2 V_{0} - \dot{\phi}^{2}(0) \right)}},\\ \dot{a}(0) &=& 0,\\ V(0) &=& V_{0}, \end{eqnarray} where, for convenience, we choose \begin{eqnarray} \phi(0) &=& \phi_{0},\\ \dot{\phi}(0) &=& 0. \end{eqnarray} Hence, the $\tau < 0$ region is a pure de Sitter space with $V = V_{0}$, and $\tau = 0$ is the half-way point ($\dot{a} = 0$ and $\ddot{a} < 0$) of the de Sitter space. $\phi_{0}$ can be chosen to be zero without loss of generality. From this initial condition, we can check whether the Euclidean wormhole solution can appear at $\tau > 0$ or not. \section{\label{sec:sol}Solutions and properties} By choosing suitable initial conditions and model parameters, one can obtain a Euclidean wormhole with regular boundary conditions at the compact boundaries $a = 0$. We demonstrate an explicit example. \subsection{Quantum creation of time-like wormholes} \begin{figure} \begin{center} \includegraphics[scale=0.75]{concept} \caption{\label{fig:concept}The analytic continuation to Lorentzian signatures. Here, $r$ is the areal radius. The triangular part is the region where the metric $ds_{I}$ is applied.
In the orange colored part, the spacetime includes a time-like wormhole. The green colored parts satisfy the metric $ds_{II}$, where the left one is de Sitter and the right one is anti-de Sitter.} \end{center} \end{figure} We can obtain a Euclidean wormhole with smooth boundary conditions, i.e., $\dot{a} = \pm 1$ for $a = 0$, as in Figs.~\ref{fig:atau} and \ref{fig:Vtau}, where we have chosen $V_{0} = 1$. The left end, $\tau < 0$, of the solution is de Sitter and the right end of the solution, $\tau \sim 2$, is anti-de Sitter. The two compact regions are connected by a Euclidean wormhole, i.e., there exists a point with $\ddot{a} > 0$ and $\dot{a} = 0$. After the inhomogeneous Wick rotation $d\chi = i dT$, one can obtain a time-like wormhole in the Lorentzian signatures \cite{Hawking:1998bn}. If we consider the analytic continuation $\chi = \pi/2 + iT$, then the Lorentzian metric becomes \begin{eqnarray}\label{eq:metric1} ds_{I}^{2} = d\tau^{2} + a^{2}(\tau) \left( - dT^{2} + \cosh^{2} T \; d\Omega_{2}^{2} \right). \end{eqnarray} In these coordinates, $0 \leq T < \infty$ is the time-like parameter (constant-$T$ surfaces are space-like) and $\tau$ is the space-like parameter (constant-$\tau$ surfaces are time-like). Note that both ends with $a = 0$ correspond to null surfaces. Beyond the null surfaces, one can further analytically continue by choosing $\tau = it$, $T = i\pi/2 +\chi$, and $\alpha(t) = -ia(it)$: \begin{eqnarray} ds_{II}^{2} = - dt^{2} + \alpha^{2}(t) \left( d\chi^{2} + \sinh^{2} \chi \; d\Omega_{2}^{2} \right). \end{eqnarray} The analytically continued causal structures are summarized in Fig.~\ref{fig:concept}. The bottle-neck of the wormhole is located in the metric $ds_{I}$, where it corresponds to $\dot{a} = 0$ and $\ddot{a} > 0$.
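As a small consistency check of the numbers quoted here, the initial radius follows from the expression for $a(0)$ given earlier: with $K=1$, $\kappa^{2}=8\pi$, $V_{0}=1$, and $\dot{\phi}(0)=0$ it reduces to the de Sitter radius $\sqrt{3/(\kappa^{2}V_{0})}\approx0.345$, consistent with the first local maximum of $a(\tau)$ ($a\sim0.35$) discussed below. A one-line numerical sketch:

```python
import math

K = 1.0
kappa2 = 8.0 * math.pi   # kappa^2 = 8*pi
V0 = 1.0                 # value chosen in the text
phidot0 = 0.0            # dilaton initially at rest

a0 = math.sqrt(6.0 * K / (kappa2 * (2.0 * V0 - phidot0 ** 2)))
# For phidot(0) = 0 this is the de Sitter radius sqrt(3/(kappa^2 * V0)).
print(round(a0, 3))  # -> 0.345
```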
For a constant space-like hypersurface $T = \mathrm{const.}$, the areal radius is proportional to $a(\tau)$, and hence the point with $\dot{a} = 0$ and $\ddot{a} > 0$ satisfies the flare-out condition, which is the correct condition for the bottle-neck of the wormhole. In addition, $\tau = \mathrm{const.}$ is manifestly time-like, and hence the bottle-neck is traversable. One additional comment is that this bottle-neck does not connect two asymptotic infinities; rather, it connects two maxima of the areal radius (of the de Sitter background). \begin{figure} \begin{center} \includegraphics[scale=0.6]{atau08} \includegraphics[scale=0.6]{Vtau08} \caption{\label{fig:demonstrate1}$a(\tau)$ (left) and $V(\tau)$ (right) for $\Delta = 0.8$.} \includegraphics[scale=0.6]{atau1} \includegraphics[scale=0.6]{Vtau1} \caption{\label{fig:demonstrate2}$a(\tau)$ (left) and $V(\tau)$ (right) for $\Delta = 1.0$.} \includegraphics[scale=0.6]{atau18} \includegraphics[scale=0.6]{Vtau18} \caption{\label{fig:demonstrate3}$a(\tau)$ (left) and $V(\tau)$ (right) for $\Delta = 1.8$.} \end{center} \end{figure} \subsection{Parameter dependences} In this subsection, we discuss more of the technical details of the solutions. Since there are many parameters that we can handle, there may be no straightforward way to finely tune the initial conditions in order to satisfy the boundary conditions at both ends. However, by changing several parameters around the solution that we have obtained, we can gain some technical intuition. As an example, in Figs.~\ref{fig:demonstrate1}, \ref{fig:demonstrate2}, and \ref{fig:demonstrate3}, we demonstrate three examples by varying $\Delta$, where Fig.~\ref{fig:demonstrate1} is the same as Figs.~\ref{fig:atau} and \ref{fig:Vtau}. As we increase $\Delta$, the slope of $\phi$ becomes more and more gentle. Therefore, the contribution from the kinetic energy decreases and $V$ should be modified according to the back-reactions.
By comparing $V(\tau)$ of Figs.~\ref{fig:demonstrate1}, \ref{fig:demonstrate2}, and \ref{fig:demonstrate3}, we notice that the scale of $V$ increases drastically. In Fig.~\ref{fig:demonstrate2}, $V$ crosses from positive to negative values in a relatively steep way, and hence there appears a cusp-like point due to the sharp dynamics near $\tau \sim 1.7$. Another tendency is that as $\Delta$ increases, the values of $a$ at the local minimum ($\dot{a} = 0$ and $\ddot{a} > 0$) and the second local maximum ($\dot{a} = 0$ and $\ddot{a} < 0$) become relatively lower than that of the first local maximum ($a \sim 0.35$). This means that as $\Delta$ increases, $a$ at the local minimum and the second local maximum decreases continuously, and eventually there exists a limit such that $\dot{a} \rightarrow 0$ as $a \rightarrow 0$ (Fig.~\ref{fig:demonstrate3})\footnote{This behavior is quite similar to instantons in loop quantum gravity, e.g., see \cite{lqc}.}. In this paper, we cannot report all possible variations of the parameters, but these examples show that there are plenty of solutions that will have interesting physical implications. \subsection{Energy conditions} The energy conditions arise from the Raychaudhuri equation for the expansion~\cite{Raychaudhuri:1953yv}, which is given by \begin{eqnarray}\label{eq:RayEq1} \frac{d\theta}{d\eta}=-\frac{1}{2}\theta^2-\sigma_{\mu\nu}\sigma^{\mu\nu}+\omega_{\mu\nu}\omega^{\mu\nu}-R_{\mu\nu}k^{\mu}k^\nu\,, \end{eqnarray} where $\eta$ is the affine parameter of an observer moving along a null geodesic and $R_{\mu\nu}$ is the Ricci tensor. In addition, $\theta$, $\sigma^{\mu\nu}$, and $\omega^{\mu\nu}$ are the expansion, shear, and rotation, respectively, associated with the congruence defined by the null vector field $k^\mu$. The characteristic property of attractive gravity is expressed by the condition $d\theta/d\eta<0$.
For infinitesimal distortions, the Frobenius theorem \cite{ref:book1} shows that $\omega_{\mu\nu}=0$ when the congruence curves (time-like, space-like, or null) are hypersurface orthogonal. We therefore ignore the third term on the right-hand side of Eq.~(\ref{eq:RayEq1}). Since the first and second terms are always negative, attractive gravity is guaranteed if \begin{eqnarray}\label{eq:NEC1} R_{\mu\nu}k^\mu k^\nu\geq0\,. \end{eqnarray} This condition is called the null energy condition, which is equivalent to $T_{\mu\nu}k^\mu k^\nu \geq 0$, where $T_{\mu\nu}$ is the (effective) energy-momentum tensor. After the Wick-rotation, null vectors should be proportional to $k^\mu=(\pm1, a, 0, 0)$, and hence, by using Eq.~(\ref{eq:metric1}), the null energy condition Eq.~(\ref{eq:NEC1}) reduces to \begin{eqnarray} 1-\dot{a}^2+a\ddot{a}\leq0\,. \end{eqnarray} This is equivalent to \begin{eqnarray} \ddot{a}\leq \frac{1}{a}\left(\dot{a}^2-1\right). \end{eqnarray} This implies that, if $\ddot{a}>0$ for $\dot{a}=0$ and $a > 0$, the null energy condition must be violated, as expected. \begin{figure} \begin{center} \includegraphics[scale=0.75]{lag} \caption{\label{fig:lag}$L_{\mathrm{E}}(\tau)$ for the Euclidean wormhole solution.} \end{center} \end{figure} \subsection{Probabilities} Since the solution is compact, the Euclidean action should be finite and well-defined. The mini-superspace Euclidean action is \cite{Cai:2008ht} \begin{eqnarray} S_{\mathrm{E}} = \int L_{\mathrm{E}} d\tau, \end{eqnarray} where \begin{eqnarray} L_{\mathrm{E}} = 2\pi^{2} a^{3} \left[ \frac{\dot{\phi}^{2}}{2} + V(\phi) - \frac{3 (1 - \dot{a}^{2} - a \ddot{a})}{\kappa^{2} a^{2}} + 12 \xi(\phi) \frac{\ddot{a}}{a^{3}} \left( 1 - \dot{a}^{2} \right) \right].
\end{eqnarray} By applying integration by parts, we obtain \begin{eqnarray} L_{\mathrm{E}} = 2\pi^{2} a^{3} \left[ \frac{\dot{\phi}^{2}}{2} + V(\phi) - \frac{3}{\kappa^{2}a^{2}} \left(1 + \dot{a}^{2} \right) - 12 \xi' \dot{\phi} \frac{\dot{a} (1 - \dot{a}^{2})}{a^{3}} + 24 \xi \frac{\ddot{a} \dot{a}^{2}}{a^{3}} \right]. \end{eqnarray} Finally, by plugging in Eq.~(\ref{Eq1}), we obtain \begin{eqnarray} L_{\mathrm{E}}^{\mathrm{on-shell}} = 4\pi^{2} \left[ a^{3} \tilde{V} - \frac{3}{\kappa^{2}} a \right], \end{eqnarray} where \begin{eqnarray} \tilde{V} = V + 12 \xi \frac{\dot{a}^{2} \ddot{a}}{a^{3}}. \end{eqnarray} This action is equivalent to that of Einstein gravity if $\xi = 0$. An example of the Euclidean Lagrangian is numerically demonstrated in Fig.~\ref{fig:lag}. After the integration, the Euclidean action is negative ($-306.492$), which is quite usual for compact instantons. One may further consider the tunneling from the de Sitter space with the vacuum energy $V = V_{0}$ and $\phi = \phi_{0}$, where the solution is $a(\tau) = H_{0}^{-1} \sin H_{0} \tau$ with $H_{0}^{2} = \kappa^{2} V_{0} / 3$. For this case, the Euclidean action is \begin{eqnarray} S_{\mathrm{E}} = - \frac{3}{8V_{0}} \left( 1 + \frac{256 \pi^{2}}{3} \xi(\phi_{0}) V_{0} \right). \end{eqnarray} Hence, in our numerical setup, the action is $-316.202$. Therefore, after subtracting the background solution, the nucleation rate $\Gamma \propto e^{-B}$ is exponentially suppressed, i.e., \begin{eqnarray} B = S_{\mathrm{E}} (\mathrm{solution}) - S_{\mathrm{E}} (\mathrm{background}) \end{eqnarray} is positive definite. Of course, there is a subtlety in choosing the background solution. However, it is fair to say that our solution strongly supports the existence of a well-defined tunneling process from a pure de Sitter space to a time-like wormhole spacetime within the framework of string-inspired Gauss-Bonnet-dilaton gravity.
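The suppression claim can be checked directly from the quoted numbers; a minimal arithmetic sketch (the two action values are those given above):

```python
import math

S_solution = -306.492    # Euclidean action of the wormhole solution (quoted above)
S_background = -316.202  # Euclidean action of the de Sitter background (quoted above)

# B = S_E(solution) - S_E(background)
B = S_solution - S_background
assert abs(B - 9.71) < 1e-9  # B = 9.71 > 0
assert B > 0                 # the nucleation rate Gamma ~ exp(-B) is suppressed

suppression = math.exp(-B)   # of order 1e-4 or smaller
assert suppression < 1e-4
```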
\section{\label{sec:dis}Discussion} In this paper, we investigated the nucleation of a Euclidean wormhole and its analytic continuation to the Lorentzian signatures in Gauss-Bonnet-dilaton gravity. This model can be embedded in the type-II superstring theory. We showed that there exists a Euclidean wormhole solution in this model by choosing a suitable shape of the dilaton potential. After the analytic continuation, this explains a quantum creation of a time-like wormhole \textit{ex nihilo}. This work is new since we could deal with the following two topics at the same time: (1) we embedded the model in the context of string theory\footnote{If we do not restrict ourselves to string-inspired models, there may be further examples, e.g., Eddington-inspired-Born-Infeld gravity \cite{EiBI}.} and (2) we explained a quantum mechanical creation process of a wormhole. Based on this solution, there may be several applications of the wormhole solution. \begin{itemize} \item[(1)] This solution opens the possibility that a time-like wormhole can exist in de Sitter space, e.g., in our universe. The causal structure near the throat will give specific signals in terms of gravitational waves \cite{Volkel:2018hwb}. Also, this has an implication in the context of the open inflation scenario \cite{Yamamoto:1995sw}. \item[(2)] In the orange colored region in Fig.~\ref{fig:concept}, there appears a hyperbolic time-like wormhole, where the other two time-like boundaries of the orange colored region correspond to boundaries of increasing areal radius. On this background, one may test some ideas of holography \cite{Maldacena:1997re}. \end{itemize} In addition, applications to the information loss problem \cite{Hawking:1976ra,Chen:2014jwq} would be interesting. By injecting matter from both sides of the wormhole throat, this wormhole can be trapped by apparent horizons. Then, can the ER=EPR conjecture still be true?
Since we only relied on classical field combinations in order to obtain the traversable wormhole throat, we cannot rely on quantum teleportation arguments to rescue the principle \cite{Gao:2016bin}. Also, the information loss problem is deeply related to the non-perturbative contributions of the Euclidean path-integral \cite{Hawking:2005kf,Sasaki:2014spa}, and hence we need to investigate the possible effects of Euclidean wormholes \cite{ArkaniHamed:2007js}, e.g., the violation of unitarity \cite{Hawking:1987mz}. Since the Gauss-Bonnet-dilaton model can violate the effective null energy condition, there may be some hope of resolving the singularities inside black holes. We leave these topics for future investigations. \newpage \section*{Acknowledgment} We would like to thank Daeho Ro for fruitful discussions. GT thanks the Asia Pacific Center for Theoretical Physics (APCTP) for its hospitality during the completion of this work. DY was supported by the Korea Ministry of Education, Science and Technology, Gyeongsangbuk-Do and Pohang City for Independent Junior Research Groups at the Asia Pacific Center for Theoretical Physics and the National Research Foundation of Korea (Grant No.: 2018R1D1A1B07049126). GT was supported by Institute for Basic Science (IBS) under the project code, IBS-R018-D1.
\subsection*{1. Introduction} Perhaps the most difficult problem of supersymmetric grand unified theories (GUTs) is the `doublet--triplet splitting' problem. The heart of the problem is that in the GUT context the Higgs doublets of the minimal supersymmetric standard model (MSSM), $H, \bar H$, $inevitably$ get accompanied by their coloured triplet partners $T,\bar T$. The GUT symmetry ($G$) forces the coloured triplet to be coupled to the quark and lepton superfields ($Q,u_c,d_c,L,e_c,\nu_c$) by the Yukawa coupling constant $Y_{T(\bar T)}$, which is equal (in the unbroken $G$ limit) to that of the doublet, $Y_{H(\bar H)}$. In such a situation the coloured triplet exchange can lead to unacceptably rapid proton decay unless $T, \bar T$ are superheavy. This heaviness is certainly possible since, unless forbidden by some symmetry, the triplet can get a large ($\sim M_{GUT}$) mass from the couplings with the vacuum expectation values (VEVs) that break $G$. What is much more difficult, however, is to protect the doublet partners from getting a mass of the same order. This is the famous doublet--triplet splitting problem. Most of the attempts reported in the literature deal with this difficulty. In certain approaches, such as the `missing partner' or the `missing VEV' \cite{mp}, the doublet appears light for group-theoretical reasons or because of the VEV structure. In the `pseudo--Goldstone picture' \cite{pp} its mass is protected by the Goldstone theorem and is fully controlled by the scale of SUSY breaking. The common feature of these approaches is that they try (though in different ways) to make the colour triplet very heavy in order to suppress the proton decay. Certainly there is a loophole that may avoid such an approach: there is no need for a heavy triplet if its effective Yukawa coupling constant $Y_T$ is suppressed by many orders of magnitude with respect to that of the doublet, $Y_H$.
Say, if $Y_T/Y_H \sim M_W/M_{GUT}$, such a triplet can never lead to an observable proton decay even if its mass is comparable with the weak scale $M_W \sim $100 GeV--TeV. As suggested in \cite{dv}, such a situation can occur without any fine--tuning if the matter fermion masses originate from effective high--dimensional operators induced by the physics at $M_{GUT}$. To be more explicit, consider the following $SO(10)$ invariant operator \cite{dv} \begin{equation} {Y_{\alpha, \beta} \over M} 10_i 45_{ik}16^{\alpha}\gamma_k 16^{\beta}, \end{equation} where $16^{\alpha}$ ($\alpha =1,2,3$) are the three families of the matter fermions, $10_i$ ($i=1,..., 10$) is the multiplet in which reside the $H,\bar H \in 10_i$ ($i=7,..., 10$) and $T, \bar T \in 10_i$ ($i= 1,..., 6$) states, and $45$ is the GUT Higgs in the adjoint representation of $SO(10)$. We have written the $SO(10)$ tensor indices ($i,k$) explicitly, since the way of their contraction is important for us, and $\gamma_i$ are the matrices of the $SO(10)$ Clifford algebra. $M$ is a certain regulator scale $\sim M_{GUT}$. The coupling $(1)$ has to be understood as an effective operator obtained by integrating out some heavy states at $M_{GUT}$. Below we show explicitly how the above structure can result automatically from the tree--level exchanges \cite{bd} of heavy `scalar' or `fermionic' superfields with purely $renormalizable$ interactions. Before doing this, let us simply assume for a moment that the coupling $(1)$ exists for whatever reason and see its role in the D--T splitting problem. For this we require the $45$-plet Higgs to have the VEV of the form \begin{equation} \langle 45_{ik}\rangle = diag[0,0,0,A,A]\otimes \epsilon \end{equation} where $A \sim M_{GUT}$ and each element is assumed to be proportional to the $2 \times 2$ antisymmetric matrix $\epsilon$.
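The block structure of the VEV (2) can be written out explicitly; a small numerical sketch (purely illustrative, with $A$ set to $1$ in arbitrary units):

```python
import numpy as np

A = 1.0                                      # GUT-scale VEV, arbitrary units
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])    # 2x2 antisymmetric matrix epsilon

# <45_{ik}> = diag[0,0,0,A,A] (x) eps : a 10x10 antisymmetric matrix
vev45 = np.kron(np.diag([0.0, 0.0, 0.0, A, A]), eps)

assert vev45.shape == (10, 10)
assert np.allclose(vev45, -vev45.T)        # lies in the adjoint of SO(10)
assert np.allclose(vev45[:6, :], 0.0)      # rows i = 1,...,6 (triplet indices) vanish
assert not np.allclose(vev45[6:, :], 0.0)  # rows i = 7,...,10 (doublet indices) survive
```

Since the operator (1) couples $10_i$ only through $\langle 45_{ik}\rangle$, the vanishing of the first six rows means the effective triplet Yukawa couplings vanish, while the doublet ones are $\propto A/M$.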
This VEV breaks the $G_{LR} = SU(2)_L\otimes SU(2)_R \otimes SU(4)$ subgroup of $SO(10)$ down to $SU(2)_L\otimes U(1)_R \otimes SU(4)$ and is thus oriented along the $T_R^3$ generator of $SU(2)_R$. In combination with the other VEVs, say the $16$-plet with a non-zero $SU(5)$-singlet VEV ($\nu_c$), it leads to the desired breaking $SO(10) \rightarrow G_W =SU(3)_c\otimes SU(2)_L \otimes U(1)_Y$. Now, inserting (2) in (1), we do not need much effort to be convinced that the effective Yukawa coupling constants of the Higgs doublets are $Y_{\alpha,\beta}^H = Y_{\alpha,\beta} {A \over M}$, whereas the triplets have no couplings at all! Such a decoupled triplet cannot lead to unacceptable proton decay, even if it is as light as its doublet partner. Thus, there is no need to couple the $10$-plet to the GUT Higgses and the unnatural fine-tuning can be avoided. Of course, in this approach the standard coupling $16^{\alpha}16^{\beta}10$ must be forbidden by some symmetry. Therefore, we see that the $10$-plet must transform under a certain symmetry $G_{10}$ such that: (1) it prevents the $10$ from coupling with the GUT Higgses; (2) it allows the $10$-plet to couple with $16^{\alpha}$ $only$ in combination with the $45$-plet. Below we will show that the solution of the $\mu$ problem fixes $G_{10} = Z_2\otimes Z_3$. What are the motivations for the above approach? First, we see at least three interesting model-independent consequences: (1) It automatically solves the problem of the coloured-Higgsino-mediated proton decay via $d=5$ operators \cite{d5}: in contrast with the standard case, the suppression occurs not because of a large mass, but because of a small coupling. As a result, both $d=5$ and $d=6$ operators are suppressed by factors of at least $\sim (M_W/M_{GUT})^2$ and the proton is practically stable; (2) In contrast with the standard GUTs, the $\mu$ problem can be solved through the coupling of the Higgs doublet to a light gauge singlet field.
In the normal case this would lead to a destabilization of the hierarchy through the well-known `tadpole' diagram \cite{tp}. For obvious reasons, no such diagram exists in the present case. (3) Prediction of the existence of long-lived $T,\bar T$ supermultiplets in the $100$ GeV--TeV mass region, which can be the subject of an experimental search. Last but not least, this approach is an alternative to the existing schemes for the D--T splitting problem and is certainly worth studying, especially since no fully satisfactory solution exists at present. In this paper we construct two realistic $SO(10)$ GUTs which explicitly realize the above scenario. Both models have superpotentials most general under symmetries and do not require any fine-tuning or unnaturally small parameters. \subsection*{2. The mechanism} In this section we study the origin of the operator (1) in somewhat more detail. As was mentioned, it can be generated by the exchange of intermediate `fermionic' or `scalar' superfields. The reader should not be confused by these names, since they simply refer to the components propagating in the internal line of the tree (super)diagram once we fix the external legs to be fermions for $16^{\alpha}$ and scalars for $10$ and $45$. The `fermionic' exchange was already studied in \cite{dv}, but we will consider it in detail, since it is an essential ingredient of our models. \subsubsection*{2.1 Fermionic exchange} In order to generate (1) through the heavy-fermion exchange we introduce three pairs of superfields $144_{\alpha}, \overline{144}_{\alpha}$ ($\alpha = 1,2,3$), where their representation content under $SO(10)$ is indicated explicitly. The relevant piece of the superpotential has the form \begin{equation} W_{Yukawa} = g_{\alpha}^{\beta}16^{\alpha}\overline{144}_{\beta}45 + M^{\alpha\beta}144_{\alpha}\overline{144}_{\beta} + g_{\alpha}^{'\beta}16^{\alpha}144_{\beta}10.
\end{equation} This superpotential is invariant under the discrete $Z_2\otimes Z_3$ symmetry acting in the following way: under $Z_2$, only $10, 45, 144$ and $\overline{144}$ change sign; under $Z_3$, $10 \rightarrow e^{i2\theta} 10$, $(16, 144) \rightarrow e^{-i\theta} (16, 144)$, and $\overline{144} \rightarrow e^{i\theta}\overline{144}$. In fact, any $\theta$ leaves $W_{Yukawa}$ invariant, but as shown below only $\theta= 2\pi/3$ is allowed by the solution of the $\mu$ problem. This $Z_3$ symmetry ensures that $10$ cannot couple bilinearly with any of the GUT Higgses in the superpotential, and both the doublet and the triplet are light. For energies much below $M$, this superpotential is effectively equivalent to the operator (1), due to the structure of the $144$ representation. In order to be sure that the above structure can indeed suppress the proton decay, we have to find the effective couplings of $T,\bar T$ with the quarks and leptons. The strategy is straightforward: (1) insert the VEV of $45$ in (3); (2) find the light superpositions (express the initial states through the final mass eigenstates); (3) insert the answer in the operator $g_{\alpha}^{'\beta}16^{\alpha}144_{\beta}10$ and select only the potentially dangerous couplings $T(\bar T)$--light--light. The $G_{LR}$ decompositions of some $SO(10)$ representations are very useful \cite{sl}: \begin{eqnarray} 10 &=& (2,2,1) + (1,1,6)\nonumber\\ 16 &=& (2,1,4) + (1,2,\bar 4)\nonumber\\ 144 &=& (2,1,4) + (1,2,\bar 4) + (3,2,\bar 4) + (2,3,4) + (2,1,20) + (1,2,\overline{20})\nonumber\\ 45 &=& (1,3,1) + \dots \end{eqnarray} The $T,\bar T$ triplets live in $(1,1,6)_{10}$.
Thus, its $G_{LR}$-invariant couplings with $16$ and $144$ are \begin{eqnarray} && g_{\alpha}^{'\beta} (1,1,6)_{10} \bigl[ (2,1,4)^{\alpha}_{16} (2,1,4)_{\beta}^{144} + (2,1,4)^{\alpha}_{16} (2,1,20)_{\beta}^{144}\nonumber\\ && + (1,2,\bar 4)^{\alpha}_{16} (1,2,\bar {20})_{\beta}^{144} + (1,2,\bar 4)^{\alpha}_{16} (1,2,\bar 4)_{\beta}^{144}\bigr] \end{eqnarray} We now have to find the light admixture in these fragments. First of all, we immediately notice that there is no state in $16$ to which $(2,1,4)^{144}, (2,1,20)^{144}$ and $(1,2,\overline{20})^{144}$ can mix via the $(1,3,1)_{45}$ VEV (the would-be states would have to transform as $(2,3,4), (2,3,20), (1,2,\overline{20})$ and $(1,4,\bar {20})$, respectively). Thus, $(2,1,4)^{144}, (2,1,20)^{144}$ and $(1,2,\overline{20})^{144}$ are purely heavy states and the only potentially dangerous coupling in (5) is the last one. Before discussing its strength, note that this coupling involves only the $SU(2)_R$-doublet quarks and leptons. Thus, the only possible $T$-light-light couplings are: \begin{equation} \bar T u_cd_c + T u_ce_c \end{equation} Even if not suppressed, these couplings can lead to the proton decay only if there is a $T\bar T$ mass insertion somewhere. This is not necessary in general, since $T,\bar T$ can get masses from mixing with the other states. The latter can be an interesting possibility $per~se$, but it is not necessary in our case, since the above couplings can be naturally absent. To see this, notice that since the mixing goes through the $(1,3,1)$ VEV, the resulting coupling in terms of the light mass eigenstates has the form: \begin{equation} (1,1,6)_{10}\langle(1,3,1)_{45}\rangle (1,2,\bar 4)^{\alpha}_{light} (1,2,\bar 4)_{\beta}^{light}. \end{equation} This coupling is antisymmetric in the $SU(4)$ indices and symmetric in the $SU(2)_R$ ones. So it will automatically vanish if the Yukawa coupling constants are symmetric in $\alpha,\beta$.
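The vanishing reduces to an elementary fact: a flavour-symmetric coupling matrix contracted with a flavour-antisymmetric structure gives zero. A toy numerical sketch (three families; the antisymmetric kernel $K$ is a stand-in for the net antisymmetry of the gauge contraction and is not a quantity defined in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# flavour-symmetric Yukawa couplings Y_{ab} = Y_{ba}
Y = rng.normal(size=(3, 3))
Y = Y + Y.T

# antisymmetric kernel K_{ab} = -K_{ba}, modelling the antisymmetry
# forced by the gauge structure of the coupling above
K = rng.normal(size=(3, 3))
K = K - K.T

# the contraction sum_{ab} Y_{ab} K_{ab} vanishes identically
assert abs(np.sum(Y * K)) < 1e-12
```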
This can be ensured by some flavour symmetry, which sooner or later probably has to be invented anyway in order to solve the fermion-mass problem. Since we are not going to address this issue here, we will give just one possible example of such a flavour symmetry: $SU(3)_f$, under which $16^{\alpha}$ and $144_{\alpha}, \overline{144}_{\alpha}$ are antitriplets and triplets, respectively, and which is broken only by the Higgses in the symmetric representation ($6$-plets). In such a case $g_{\alpha}^{\beta} = g\delta_{\alpha}^{\beta},~ g_{\alpha}^{'\beta} = g'\delta_{\alpha}^{\beta}$ and $M_{\alpha\beta}$ has to be understood as a VEV of the symmetric representation of $SU(3)_f$. \subsubsection*{2.2 Scalar exchange} In order to generate Eq.~(1) via the heavy scalar exchange let us (instead of $144,\overline {144}$) introduce a pair of $10',10''$-plets. The superpotential of the Yukawa sector now becomes: \begin{equation} W_{Yukawa} = g_{\alpha\beta} 16^{\alpha}16^{\beta} 10''+ g'4510'10 + M10'10''. \end{equation} This superpotential is also invariant under a $Z_2\otimes Z_3$ symmetry such that: under $Z_2$, $45$ and $10$ change sign, and under $Z_3$, \begin{equation} 16 \rightarrow e^{-i\theta} 16,~~~ (10,10'') \rightarrow e^{i2\theta} (10,10''),~~~ 10' \rightarrow e^{-i2\theta} 10'. \end{equation} After the insertion of the $45$ VEV the mass matrices of the doublets and triplets become \begin{equation} (g'AH + MH'')\bar H' + (- g'A \bar H + M\bar H'') H' + M(T'\bar T'' + T''\bar T'). \end{equation} We see that there is an admixture $\sim g'A/((g'A)^2 + M^2)^{1/2}$ of the light doublet in the $H''$ state, whereas the light triplet is simply decoupled. \subsection*{3. Solution of the $\mu$ problem} The present approach allows for a simple solution of the $\mu$ problem through the introduction of a light gauge singlet superfield $N$ \cite{nm}. The VEV of $N$ induced after SUSY breaking plays the role of an effective mass term $\mu H \bar H$ in the low-energy theory.
The corresponding part of the superpotential has the following form: \begin{equation} W_{\mu} = \lambda N10^2 + \lambda'N^3/3. \end{equation} This form is the most general under the $Z_2\otimes Z_3$ symmetry introduced above, provided $N$ is invariant under $Z_2$, whereas it transforms in the same way as $10$ under $Z_3$. In order to guarantee the decoupling of $N$ and $10$ from the heavy GUT Higgs fields, we require that none of the latter transform under $Z_3$. Such a simple solution of the $\mu$ problem is very difficult to implement in the standard cases with a large D--T mass hierarchy, since the introduction of a light singlet normally leads to a disastrous destabilization of the hierarchy through the well-known one-loop `tadpole' diagram \cite{tp}. The same difficulty appears in different versions of the `sliding singlet' scenario \cite{sp}. The source of the trouble is that the light singlet couples to both the light ($H,\bar H$) and the heavy ($T, \bar T$) states; because of this, its exchange immediately induces masses and VEVs of $H, \bar H$ of the order of the geometric mean $\sim (M_{GUT}m_s)^{1/2}$, where $m_s$ is the SUSY-breaking scale in the low-energy sector. This is a serious problem for any scenario (with light singlets) in which the D--T masses are split. In contrast, such a difficulty never occurs in our case, since the doublet and the triplet are both light `by definition' and $N$ does not couple to the heavy states. So the troublesome `tadpole' is absent, allowing for a simple solution of the $\mu$ problem. It is worth pointing out that even in the standard GUTs with a heavy triplet partner the $\mu$ problem may be solved by some other mechanism. For example, when embedded in the minimal supergravity with hidden sector SUSY breaking \cite{hs}, the $\mu$ problem is automatically solved in the `pseudo-Goldstone picture' \cite{pp}; $\mu = m_{3/2}$ is induced by a shift of the heavy VEVs triggered by the SUSY breaking.
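The $Z_2\otimes Z_3$ selection rules used above can be verified by simple charge bookkeeping. A sketch (the $Z_3$ charges are quoted in units of $\theta = 2\pi/3$, read off from the transformation rules given for $W_{Yukawa}$ and $W_{\mu}$):

```python
# Z2 parities (+1 even, -1 odd) and Z3 charges (mod 3, in units of theta = 2*pi/3)
z2 = {'16': +1, '10': -1, '45': -1, '144': -1, '144bar': -1, 'N': +1}
z3 = {'16': -1, '10': +2, '45':  0, '144': -1, '144bar': +1, 'N': +2}

def invariant(term):
    """A superpotential term is allowed iff its Z2 parities multiply to +1
    and its Z3 charges sum to 0 mod 3."""
    parity, charge = 1, 0
    for field in term:
        parity *= z2[field]
        charge += z3[field]
    return parity == 1 and charge % 3 == 0

# terms present in W_Yukawa and W_mu are invariant
for term in (['16', '144bar', '45'], ['144', '144bar'], ['16', '144', '10'],
             ['N', '10', '10'], ['N', 'N', 'N']):
    assert invariant(term)

# dangerous terms are forbidden
assert not invariant(['16', '16', '10'])  # direct Yukawa coupling of the 10 (Z2-odd)
assert not invariant(['10', '10'])        # bare mass term (Z3 charge 4 = 1 mod 3)
assert not invariant(['10', '45', '45'])  # mass from GUT Higgs VEVs (Z2-odd)
```

This bookkeeping is exactly the statement that the $10$-plet, and hence both $H,\bar H$ and $T,\bar T$, stays massless before SUSY breaking, while $\langle N\rangle$ generates the effective $\mu$ term.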
In the other schemes, the solution may be achieved by going beyond the minimal supergravity and introducing couplings of the $10$-plet with the hidden sector fields in a non-minimal K\"ahler potential \cite{gm} (although one may need some effort in order to explain why similar couplings are absent from the superpotential). However, the solution with a light singlet can work equally well even in schemes with a much lower scale of SUSY breaking, where the above supergravity solutions do not work. \subsection*{4. SO(10) examples} Now, let us turn to model building and produce two realistic $SO(10)$ examples. The sectors of the theory that are responsible for the proton stability, the fermion masses and the $\mu$ problem have been discussed above in a more or less model-independent way (apart from the fermion-mass structure, of course, which we believe has to be addressed in the framework of some specific flavour symmetry). In fact, what we need now is to take care of the Higgs sector that breaks the GUT symmetry and does not participate in $W_{Yukawa}$ or $W_{\mu}$ due to the $Z_2\otimes Z_3$ symmetry. As we know, the only GUT Higgs allowed to communicate with $W_{Yukawa}$ is the $45$-plet with the VEV (2). Below we will denote it as $A$. The requirements that the GUT Higgs superpotential ($W_{GUT}$) has to obey are the following: ($a$) $W_{GUT}$ should be most general under the symmetries; ($b$) no `fine-tuning'; ($c$) it should allow for a $G_W$-symmetric SUSY minimum in which $A$ has the VEV $(2)$ and all particles, except for one pair of $L,\bar L$-type states + complete $SU(5)$ multiplets + (possibly) some $G_W$-singlets, have GUT-scale masses in order to keep the successful unification of gauge couplings \cite{uc} intact. We present below two models.
\subsubsection*{4.1 Model I} $W_{GUT}$ includes the chiral superfields in the following $SO(10)$ representations: $S,X,Y \equiv$ singlets; $\Sigma \equiv 54$-plet; $A,B,C,\Phi \equiv$ 45-plets; $\chi, \bar {\chi}, \psi, \bar {\psi} \equiv 16,\overline{16}$-plets and $F \equiv 10$-plet (not to be confused with the $10$-plet in $W_{Yukawa}$). The superpotential has the form: \begin{eqnarray} W_{GUT} &=& {\sigma \over 4}STr\Sigma^2 + {h \over 6} Tr\Sigma^3 + {1 \over 4}Tr(a\Sigma + M_a + a'S)A^2 + {1 \over 4}Tr(b\Sigma + M_b + b'S)B^2 \nonumber\\ &+& {1 \over 2}Tr(a''XA + b''YB)C + {g_c \over 2}\bar {\chi}C\chi + g_f\chi F \chi + \bar {g_f}\bar {\chi} F\bar {\chi} + g_{\Phi}\bar {\psi} \Phi \chi\nonumber\\ &+& \bar {g_{\Phi}}\bar {\chi} \Phi \psi + g_a\bar {\psi}A \psi + \rho X Tr\Phi^2 + M^2S + {M' \over 2}S^2 + {\kappa \over 3}S^3 \end{eqnarray} This form is strictly natural, since it is the most general compatible with the $Z_4^A\otimes Z_2^B \otimes U(1)^C$ global symmetry under which the chiral superfields transform as follows: under $Z_4^A$ \begin{eqnarray} && (A,X,10) \rightarrow -(A,X,10)\nonumber\\ && (\psi,\bar {\psi}) \rightarrow i (\psi,\bar {\psi})\nonumber\\ && \Phi \rightarrow -i\Phi \end{eqnarray} under $Z_2^B$ \begin{equation} (B,Y) \rightarrow -(B,Y) \end{equation} and under $U(1)^C$ \begin{eqnarray} (C,F) &\rightarrow& e^{i2\alpha} (C,F)\nonumber\\ (\chi,\bar {\chi}) &\rightarrow& e^{-i\alpha}(\chi,\bar {\chi})\nonumber\\ (X,Y) &\rightarrow& e^{-i2\alpha} (X,Y)\nonumber\\ \Phi &\rightarrow& e^{i\alpha} \Phi \end{eqnarray} As the reader can observe, $Z_4^A$ acts as $Z_2$ on $A$ and $10$. This is precisely the same $Z_2$ symmetry as was introduced in section 2, which forces the $10$-plet to be coupled with the matter superfields only in combination with $A$. We assume that all mass scales in $W_{GUT}$ are $\sim M_{GUT}$ and all coupling constants are of the order of 1.
The standard procedure shows that the above superpotential admits the following supersymmetric ($F$-flat and $D$-flat) minimum with an unbroken $G_W$ symmetry: \begin{eqnarray} \Sigma &=& diag(2,2,2,2,2,2,-3,-3,-3,-3)\Sigma~~~~where~ \Sigma = {b'M_a - a'M_b \over 3ab' + 2ba'}\nonumber\\ A &=& diag[0,0,0,A,A]\otimes \epsilon\nonumber\\ B &=& diag[B,B,B,0,0]\otimes \epsilon\nonumber\\ \chi &=& \bar {\chi} = \chi|+,+,+,+,+\rangle~~ where~\chi^2 = -{a'' \over g_c} XA = -{b'' \over g_c} YB\nonumber\\ S &=& -{2bM_a + 3aM_b \over 3ab' + 2ba'}\nonumber\\ \psi &=& \bar {\psi} = F = \Phi = C = 0 \end{eqnarray} According to the standard notations (e.g. see \cite{wz}), the $SU(5)$ singlet component of $16$ is denoted by $|+,+,+,+,+\rangle$, where each `$+$' refers to an eigenvalue of the respective Cartan subalgebra generator. The two quantities $A$ and $B$ are determined from the two equations: \begin{eqnarray} && 10(S\sigma \Sigma - h\Sigma^2) - aA^2 + bB^2 =0\nonumber\\ && 15\sigma\Sigma^2 + a'A^2 + {3 \over 2}b'B^2 + M^2 +M'S + \kappa S^2 =0 \end{eqnarray} Note that the absolute VEVs of the singlets $X$ and $Y$ are undetermined in the SUSY limit; only their ratio, ${X \over Y} = {b''B \over a''A}$, is fixed. It is not difficult to check that in the given vacuum $W_{GUT}$ delivers a pair of light doublets (with the quantum numbers of $L, \bar L$) from the $\psi, \bar {\psi}$ multiplets. This is because the $\psi, \bar {\psi}$ states get their masses only from two sources: through the VEV of $A$ and via mixing with the heavy $\Phi$ through the $\chi, \bar {\chi}$ VEVs. Now, the $A$ VEV leaves all $SU(2)_L$-doublet states in $\psi,\bar {\psi}$ massless. These are the states with the quantum numbers of $Q, \bar Q$ and $L, \bar L$, which in the $SU(5)$ language belong to the $10,\overline{10}$ and $5, \bar 5$ representations, respectively; the $10,\overline{10}$ components are mixed with similar fragments of $\Phi$ through the $SU(5)$ singlet VEVs of $\chi, \bar {\chi}$ and become heavy.
In contrast, the $5, \bar 5$ states cannot do so, since they have no partners in the $45$-plet $\Phi$. Thus, the $L, \bar L$ states are massless. All other $G_W$ non-singlet states from $W_{GUT}$ have GUT-scale masses. If we recall now that in the $W_{Yukawa}$ sector we already had one light triplet pair ($T, \bar T$) on top of the MSSM particle content, it will be clear that the new light states form complete $SU(5)$ multiplets ($5,\bar 5$) and the unification of couplings is thus unaltered. \subsubsection*{4.2 Model II} In this version the $SO(10)$ content of $W_{GUT}$ is the same as in Model I, except that we exclude the $10$-plet $F$, the $45$-plet $\Phi$, and the two singlets $X$ and $Y$ from the theory. The superpotential becomes: \begin{eqnarray} W_{GUT} &=& {\sigma \over 4}STr\Sigma^2 + {h \over 6} Tr\Sigma^3 + {1 \over 4}Tr(a\Sigma + M_a + a'S)A^2 + {1 \over 4}Tr(b\Sigma + M_b + b'S)B^2\nonumber\\ &+& {1 \over 4}Tr(c\Sigma + M_c + c'S)C^2 + g_a\bar {\psi} A \psi + {1 \over 2} g_b\bar {\chi} B \chi + \bar {\chi}(M'' + \gamma S)\chi\nonumber\\ &+& M^2S + {M' \over 2}S^2 + {\kappa \over 3}S^3 \end{eqnarray} Again, this form is natural in the strong sense, as it is the most general under the $Z_4^A\otimes Z_2^C$ symmetry, which acts on the chiral superfields in the following way: under $Z_4^A$ (as before) \begin{equation} (A,10) \rightarrow - (A,10),~~~~~ (\psi,\bar {\psi}) \rightarrow i (\psi,\bar {\psi}) \end{equation} under $Z_2^C$ \begin{equation} C \rightarrow - C. \end{equation} All other superfields are invariant under the given symmetries.
Again, by straightforwardly solving the standard $F$-flatness and $D$-flatness conditions, we can find the following $G_W$-preserving supersymmetric vacuum: \begin{eqnarray} \Sigma &=& diag(2,2,2,2,2,2,-3,-3,-3,-3)\Sigma {}~~~~where ~\Sigma = {c'M_a - a'M_c \over 3ac' + 2ca'}\nonumber\\ A &=& diag[0,0,0,A,A]\otimes \epsilon\nonumber\\ B &=& diag[B,B,B,B',B']\otimes \epsilon\nonumber\\ C &=& diag[C,C,C,0,0]\otimes \epsilon\nonumber\\ \chi &=& \bar {\chi} = \chi|+,+,+,+,+\rangle\nonumber\\ \psi &=& \bar {\psi} = 0\nonumber\\ S &=& -{2cM_a + 3aM_c \over 3ac' + 2ca'} \end{eqnarray} The five remaining quantities $A,B,B',C,$ and $\chi$ are determined from the five equations: \begin{eqnarray} && (2b\Sigma + M_b + b'S)B + g_b\chi^2 = 0\nonumber\\ && (-3b\Sigma + M_b + b'S)B'+ g_b\chi^2 = 0\nonumber\\ && \gamma S + M'' + {g_b \over 2}(3B + 2B') = 0\nonumber\\ && 10(\sigma S\Sigma - h\Sigma^2) - aA^2 + b(B^2 - B^{'2}) + cC^2 =0\nonumber\\ && 15\sigma\Sigma^2 + a'A^2 + b({3 \over 2} B^2 + B^{'2}) + {3 \over 2}c'C^2 + \gamma \chi^2 + M^2 +M'S + \kappa S^2 =0 \end{eqnarray} Again, in the above vacuum there is a set of $G_W$ non-singlet massless states delivered by $W_{GUT}$. First of all, there are the $Q,\bar Q, L,\bar L$ states from $\psi, \bar {\psi}$, which are zero eigenstates of the $T_R^3$ generator and cannot get masses from the $A$ VEV. On top of this, there are pseudo-Goldstone-type massless (in the SUSY limit) states resulting from the continuous degeneracy of the given vacuum. This degeneracy occurs because one can continuously rotate the $A$ and $B$ VEVs by arbitrary $independent$ global $SO(4)$ transformations and/or the $C$ and $B$ VEVs by $independent$ global $SO(6)$ transformations without violating any of the conditions $F = 0$ or $D = 0$.
This happens because $A,B,$ and $C$ do not communicate directly with one another in the superpotential, but only through $\Sigma$, to which all $45$-plets are coupled bilinearly; since the $\Sigma$ VEV is invariant under $G_{LR}$, the vacuum automatically acquires a larger degeneracy under $SO(6)_C\otimes SO(6)_B$ and $SO(4)_A\otimes SO(4)_B$ global transformations. Thus, there are pseudo-Goldstone modes with the quantum numbers of the $SO(6)/SU(3)\otimes U(1)$ and $SU(2)_R/U(1)_R$ generators ($u_c \bar {u_c}$ and $e_c \bar {e_c}$ states) which are not eaten up by the gauge superfields. Thus again, as in Model I, we end up with complete $SU(5)$ multiplets beyond the MSSM particle content, but now these new light states effectively constitute a fourth vector-like family $5 +\bar 5, 10 + \overline{10}$. This preserves the successful unification of the gauge couplings. \subsection*{5. Conclusions} We have presented an alternative approach to the D--T splitting problem which, in contrast to the standard schemes, $does~not$ require the heavy coloured triplet Higgs. The crucial point is that, independently of the triplet mass, proton decay can be extremely suppressed for group-theoretical reasons if the quark and lepton masses are induced from the high-dimensional operators of the form (1). In this case, the light coloured triplet $automatically$ gets decoupled after the GUT symmetry breaking. We have shown how the desired operators can be naturally induced (purely due to a symmetry and the field content) after integrating out some heavy states at $M_{GUT}$. Two serious problems of the standard approach, the colour-Higgsino-mediated proton decay and the $\mu$ problem, can receive a natural solution. The first one is automatic: both Higgsino- and Higgs-mediated proton decays are suppressed by the same factor $(\sim M_W/M_{GUT})^2$, so that the proton is practically stable. 
The second problem can be easily solved by introducing a light gauge singlet superfield without causing the standard `light singlet' problem. Another model-independent consequence is the existence of some decoupled long-lived particles in the low-energy theory. These necessarily include a coloured triplet Higgs pair and at least one extra doublet pair, which together automatically preserve the successful unification of couplings. These new particles can be the subject of experimental searches. The extremely small Yukawa coupling constant (suppressed at least by a factor $\sim {M_W \over M_{GUT}}$ with respect to that of the ordinary doublet) makes the lightest member of the supermultiplets $T,\bar T$ long-lived enough to appear stable in the detector, so that the colour-singlet bound states which they form with ordinary quarks should behave as heavy stable hadrons with mass $\sim M_W$. In this respect their phenomenology is very similar to that of the coloured pseudo-Goldstone states discussed in \cite{lt}, although their origin is very different. Finally, we have presented two $SO(10)$ examples which naturally accommodate the above scenario. Both have superpotentials that are most general under the imposed symmetries and do not suffer from any fine-tuning problem.
\section{Hyperfinite Collections} Although we will not go into detail, Schramm \cite{hgl} discusses the concept of a hyperfinite collection, whose definition is presented below. In this section, we merely provide an example of such a collection. \begin{defn} A collection of finite graphs $\mathcal{G}$ is $(k,\varepsilon)$-\emph{hyperfinite} for some positive integer $k$ and $\varepsilon > 0$ if for every $G \in \mathcal{G}$, there exists a set $S \subseteq E(G)$ such that $|S| \leq \varepsilon|V(G)|$, and each connected component of $G \setminus S$ has at most $k$ vertices. The collection $\mathcal{G}$ is \emph{hyperfinite} if for every $\varepsilon > 0$, there is a positive integer $k$ such that $\mathcal{G}$ is $(k,\varepsilon)$-hyperfinite. \end{defn} The collection $\mc{S}$ of finite paths is hyperfinite. Given $\varepsilon > 0$, choose \[ k = \ceil{1/\varepsilon}. \] Let $P_{n+1} = e_1 \cdots e_n$ be a finite path for some positive integer $n$ written as a sequence of its edges. Consider the set $S = \{e_i \in E(P_{n+1}) ~:~ k \mid i\}$. Then, since $k \geq 1/\varepsilon$, \[ |S| = \floor{\frac{n}{k}} \leq \frac{n}{k} \leq \varepsilon n \leq \varepsilon|V(P_{n+1})|, \] and every connected component of $P_{n+1} \setminus S$ has at most $k$ vertices. \begin{prop} The union of a finite number of hyperfinite collections is hyperfinite. \end{prop} \section{Introduction} In recent years, several mathematicians have studied a concept known as unimodularity. This report is an introductory survey of this theory. The reader will encounter known results, new observations, and detailed examples. Our presentation is independent of any particular source, and we include rigorous proofs of facts whose demonstrations have been omitted elsewhere. The results presented here may well lead to nontrivial developments. In particular, with every proposition discovered and proved, we learn more about unimodularity, and so come closer to resolving the current open problems and finding new ones. 
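The path example in the hyperfinite section above can be verified mechanically. The sketch below is our own illustration (the helper names are hypothetical, not taken from any cited source): it removes every $k$-th edge of $P_{n+1}$ and checks both the size bound $|S| \leq \varepsilon|V(P_{n+1})|$ and the bound of $k$ vertices per component.

```python
import math

def hyperfinite_cut(n, eps):
    """For the path P_{n+1} with edges e_1, ..., e_n, remove every k-th
    edge, where k = ceil(1/eps).  Returns (k, S), with S the indices of
    the removed edges."""
    k = math.ceil(1 / eps)
    S = [i for i in range(1, n + 1) if i % k == 0]
    return k, S

def check_hyperfinite(n, eps):
    """Verify |S| = floor(n/k) <= eps * (n + 1), and that every component
    of P_{n+1} minus S has at most k vertices."""
    k, S = hyperfinite_cut(n, eps)
    assert len(S) == n // k
    assert len(S) <= eps * (n + 1)
    # Removing edge e_i disconnects vertex i-1 from vertex i, so the
    # vertex line 0, ..., n splits into consecutive runs.
    sizes, prev = [], 0
    for i in S:
        sizes.append(i - prev)       # vertices prev, ..., i-1
        prev = i
    sizes.append(n + 1 - prev)       # the final run
    assert all(s <= k for s in sizes)
    return True
```

For instance, `check_hyperfinite(10, 0.3)` uses $k = 4$ and removes $e_4$ and $e_8$, splitting $P_{11}$ into components of sizes $4$, $4$, and $3$.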
Throughout this report, we will assume that the reader has a basic understanding of metric spaces and graph theory. \section{Possible Directions of Research} There is much left to uncover in the realm of unimodular measures. Although it was not shown in this report, it is true that the Dirac measure $\delta_{[P_\infty,\cdot]}$ satisfies the iMTP. Indeed, the infinite path may be ``rotated'' by 180 degrees, which means $[P_\infty,x,y]$ is symmetric in $x$ and $y$. A more general question that we wish to answer is whether a Dirac measure on \emph{any} vertex-transitive graph is unimodular. Note that Dirac measures of finite vertex-transitive graphs are laws, which we now know are unimodular, so the question is directed at infinite graphs. We may also consider the open problem mentioned earlier, which claims that every unimodular measure is a weak limit of a sequence of laws. Elek \cite{otlolggs} has provided a partial solution to this problem, and we wish to understand his reasoning. In fact, we have yet to prove the converse of this conjecture, which, according to Aldous and Lyons \cite{pourn} and Schramm \cite{hgl}, is trivial. Perhaps the convexity of the closure of laws offers interesting results as well. Hence its further study is also warranted. \section{Preliminaries} Unless mentioned otherwise, the letters $k$, $m$, $n$, and their uppercase variants denote positive integers. Furthermore, $\ol{\mathbb{N}} = \mathbb{N} \cup \{\infty\}$ and $\mathbb{N}^\ast = \mathbb{N} \setminus \{0\}$. All of the graphs in this paper are simple. That is, they have no loops, and there is at most one edge between any two vertices. A graph is \emph{finite} if its vertex set is finite. On the other hand, it is \emph{infinite} if it is not finite. If the vertex set of a graph is nonempty, the graph itself is said to be \emph{nonempty}. An edge in a graph is an unordered pair of vertices $\{u,v\}$, which we will denote by $uv$ instead. 
In this report, a \emph{measure} on a space $X$ is a countably additive function from a $\sigma$-algebra of subsets of $X$ to the interval $[0,\infty]$. If $\mu$ is a measure on $X$ and $\mu(X) = 1$, then $\mu$ is a \emph{probability measure}. If a probability measure is defined only on a subset of a space, we extend it to a measure over the entire space by setting it equal to zero elsewhere. The vertex set of every graph is equipped with the \emph{shortest path metric} $d$. That is, for all vertices $x$ and $y$ in a graph $G$, $d(x,y)$ is the length of the shortest path from $x$ to $y$; if no such path exists, $d(x,y) = \infty$. \begin{defn} Let $G$ be a graph; let $X$ be a subset of the vertices of $G$. The \emph{subgraph of $G$ induced by $X$} is the graph whose vertex set is $X$ and whose edge set is \[ \{uv \in E(G) ~:~ u,v \in X\}. \] This subgraph is known as an \emph{induced subgraph} of $G$. It is denoted by $G[X]$. \end{defn} Note that the notion of an induced subgraph is stronger than that of a subgraph. \begin{defn} If $u$ and $v$ are vertices of a graph $G$, they are \emph{similar} if $\sigma(u) = v$ for some graph automorphism $\sigma$ on $G$. A graph is \emph{vertex-transitive} if any two of its vertices are similar. \end{defn} \begin{defn} A $k$-\emph{rooted graph} is a $(k+1)$-tuple $(G,o_1,\ldots,o_k)$ where $G$ is a graph and $\{o_1,\ldots,o_k\} \subseteq V(G)$. Each $o_i$ is known as a \emph{root}; we do not assume that the roots are pairwise distinct. For convenience, we will refer to $1$-rooted graphs as rooted graphs, and $2$-rooted graphs as birooted graphs. Two $k$-rooted graphs $(G,o_1,\ldots,o_k)$ and $(G',o_1',\ldots,o_k')$ are \emph{isomorphic}, written \[ (G,o_1,\ldots,o_k) \cong (G',o_1',\ldots,o_k'), \] if there exists a graph isomorphism $\varphi : G \to G'$ such that \[ \varphi(o_i) = o_i' \] for each $i \in \{1,\ldots,k\}$. 
\end{defn} As the reader can verify, $\cong$ is an equivalence relation; its equivalence classes are called \emph{isomorphism classes}. \begin{defn} A graph is \emph{locally finite} if the degree of each of its vertices is finite. \end{defn} Let $\wh{\mathcal{G}}$ be the collection of all isomorphism classes of locally finite connected rooted graphs. The elements of $\wh{\mathcal{G}}$ are of the form $[G,o]$. The subcollection $\wh{\mathcal{G}}_M$ of $\wh{\mathcal{G}}$ consists of rooted graphs whose maximal degree is at most $M$. From this point on, we will simply write \emph{rooted graph} when referring to its isomorphism class. Given a rooted graph $[G,o] \in \wh{\mathcal{G}}$ and a nonnegative real number $r$, let $B_G(o,r)$ be the subgraph of $G$ induced by the vertices at a distance of at most $r$ from $o$. That is, \[ B_G(o,r) = G[B(o,r)] \] where $B(o,r) = \{v \in V(G) ~:~ d(o,v) \leq r\}$. The reader may think of the graphs $B_G(o,r)$ as closed balls. If $G$ is a graph, then $G_x$ is the connected component of $G$ that contains $x$. The rooted graph $[G_x,x]$ is a \emph{rooted connected component} of $G$. Vertex-transitive graphs only have one rooted connected component, so it suffices to write $[G,\cdot]$ in this case. For clarity, we will write $(G_k,o_k)$ instead of $(G_{o_k},o_k)$. This notation will be especially useful when dealing with sequences of graphs. When the reader encounters a figure of a rooted graph, the roots are the solid circles; all other vertices are empty circles. \section{Sustained Probability Measures} Although laws are interesting objects, their domain consists only of finite graphs. This section attempts to view laws in a slightly more general setting. \begin{defn} A probability measure $\mu$ on $\wh{\mathcal{G}}_M$ is \emph{sustained} by a graph $G$ if the support of $\mu$ is a subset of rooted connected components of $G$. It is \emph{strictly} sustained by $G$ if every rooted connected component has a positive measure. 
\end{defn} \begin{lem}\label{imtp_sustained_implies_strictly_sustained} If a unimodular measure $\mu \in \mathcal{U}$ is sustained by a finite connected graph $G$, then $\mu$ is strictly sustained by $G$. \end{lem} \begin{proof} Let $\{[G,i] ~:~ 1 \leq i \leq k\}$ be the set of rooted connected components of $G$, and let $\mu[G,i] = p_i$ for all $i \in \{1,\ldots,k\}$. Suppose that $p_j = 0$ for some $j \in \{1,\ldots,k\}$. Consider the function \[ f[H,a,b] = \begin{cases} 1 & \text{ if $[H,a] = [G,j]$,}\\ 0 & \text{ otherwise,} \end{cases} \] which is well-defined. Observe that \[ \sum_{x \in V(G)} f[G,x,i] = |\mathrm{Aut}(G)j| \] for all $i \in \{1,\ldots,k\}$. Since $\mu$ is unimodular, \begin{align*} |\mathrm{Aut}(G)j| &= \sum_{i=1}^k p_i |\mathrm{Aut}(G)j|\\ &= \sum_{i=1}^k \sum_{x \in V(G)} f[G,x,i] p_i\\ &= \int \sum_{x \in V(G)} f[G,x,o]~d\mu[G,o]\\ &= \int \sum_{x \in V(G)} f[G,o,x]~d\mu[G,o]\\ &= \sum_{i=1}^k \sum_{x \in V(G)} f[G,i,x] p_i\\ &= 0, \end{align*} which is a contradiction. \end{proof} \begin{lem}\label{arithmetic_harmonic_means} Let $\{a_1,\ldots,a_n\}$ be a set of positive real numbers. If $\sum_{i=1}^n a_i = 1$ and $\sum_{i=1}^n a_i^{-1} = n^2$, then $a_i = n^{-1}$ for all $i \in \{1,\ldots,n\}$. \end{lem} \begin{proof} Recall that \[ \frac{\sum_{i=1}^n a_i}{n} \] is the arithmetic mean of the given set, and \[ \frac{n}{\sum_{i=1}^n a_i^{-1}} \] is its harmonic mean. In this case, both means are equal to $n^{-1}$. It is known that the arithmetic and harmonic means coincide if and only if \[ a_1 = \cdots = a_n, \] and so $a_i = n^{-1}$ for all $i \in \{1,\ldots,n\}$. \end{proof} \begin{theo}\label{law_characterization} If $\mu \in \mathfrak{M}_M$ is sustained by a finite connected graph $G$ and satisfies the iMTP, then $\mu$ is the law of $G$. \end{theo} \begin{proof} Let $\{[G,i] ~:~ 1 \leq i \leq k\}$ be the set of rooted connected components of $G$, and let $\mu[G,i] = p_i$ for all $i \in \{1,\ldots,k\}$. 
The functions \[ f : [G,x,y] \mapsto \frac{p_x}{|[G,x]|} \] and \[ h : [G,x,y] \mapsto \frac{p_x|[G,y]|}{p_y|[G,x]|} \] are nonnegative and measurable. Note that Lemma \ref{imtp_sustained_implies_strictly_sustained} guarantees that every $p_i$ is positive, so $h$ is well-defined. Since $\mu$ satisfies the iMTP, \begin{align*} 1 &= \left(\sum_{i=1}^k p_i\right)\left(\sum_{j=1}^k p_j\right)\\ &= \sum_{i=1}^k \sum_{x \in V(G)} \left(\frac{p_x}{|[G,x]|}\right) p_i\\ &= \int \sum_{x \in V(G)} f[G,x,o]~d\mu\\ &= \int \sum_{x \in V(G)} f[G,o,x]~d\mu\\ &= \sum_{i=1}^k \sum_{x \in V(G)} \frac{p_i^2}{|[G,i]|}\\ &= |V(G)|\sum_{i=1}^k \frac{p_i^2}{|[G,i]|}, \end{align*} and so \[ \sum_{i=1}^k \frac{p_i^2}{|[G,i]|} = \frac{1}{|V(G)|}. \] Using this equation and similar reasoning, \begin{align*} &\int \sum_{x \in V(G)} h[G,x,o]~d\mu = \int \sum_{x \in V(G)} h[G,o,x]~d\mu\\ \Rightarrow ~~ &\sum_{i=1}^k \sum_{x \in V(G)} h[G,x,i]p_i = \sum_{i=1}^k \sum_{x \in V(G)} h[G,i,x]p_i\\ \Rightarrow ~~ &\sum_{i=1}^k \sum_{x \in V(G)} \frac{p_x|[G,i]|}{|[G,x]|} = \sum_{i=1}^k \sum_{x \in V(G)} \frac{p_i^2|[G,x]|}{p_x|[G,i]|}\\ \Rightarrow ~~ &\left(\sum_{i=1}^k |[G,i]|\right)\left(\sum_{x \in V(G)} \frac{p_x}{|[G,x]|}\right) = \left(\sum_{i=1}^k \frac{p_i^2}{|[G,i]|}\right)\left(\sum_{x \in V(G)} \frac{|[G,x]|}{p_x}\right)\\ \Rightarrow ~~ &|V(G)| = \frac{1}{|V(G)|} \sum_{x \in V(G)} \frac{|[G,x]|}{p_x}, \end{align*} which means \[ \sum_{x \in V(G)} \frac{|[G,x]|}{p_x} = |V(G)|^2. \] By applying Lemma \ref{arithmetic_harmonic_means} to the family \[ \left\{\frac{p_x}{|[G,x]|} ~:~ x \in V(G)\right\}, \] whose $|V(G)|$ members are listed with multiplicity, it follows that \[ p_x = \frac{|[G,x]|}{|V(G)|} \] for all $x \in V(G)$, as required. \end{proof} As stated in Proposition \ref{laws_are_unimodular}, laws are unimodular. Since laws themselves are sustained probability measures, the converse of Theorem \ref{law_characterization} is also true. Hence this theorem is a new characterization of laws of finite connected graphs. Aldous and Lyons \cite{pourn} briefly mention this characterization without proof. 
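The conclusion of the theorem above, $p_x = |[G,x]|/|V(G)|$, can be computed directly for small graphs by enumerating automorphisms. The following sketch is our own illustration (hypothetical helper names; the brute-force permutation search is feasible only for very small graphs):

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """All vertex permutations that map the edge set onto itself
    (brute force, so only sensible for tiny graphs)."""
    edge_set = {frozenset(e) for e in edges}
    autos = []
    for perm in permutations(vertices):
        sigma = dict(zip(vertices, perm))
        if {frozenset((sigma[u], sigma[v])) for u, v in edges} == edge_set:
            autos.append(sigma)
    return autos

def law(vertices, edges):
    """Return {orbit representative: |Aut(G)o| / |V(G)|} for a small
    finite graph G."""
    autos = automorphisms(vertices, edges)
    probs, seen = {}, set()
    for o in vertices:
        if o in seen:
            continue
        orbit = {sigma[o] for sigma in autos}
        seen |= orbit
        probs[o] = len(orbit) / len(vertices)
    return probs

# The path on three vertices 0 - 1 - 2: Aut(G) = {identity, reflection},
# so the endpoints form one orbit and the centre vertex another.
p3 = law([0, 1, 2], [(0, 1), (1, 2)])
# p3 == {0: 2/3, 1: 1/3}, and the values sum to 1 as required.
```

Because each vertex $x$ contributes mass $|\mathrm{Aut}(G)x|/|V(G)|$ and the orbits partition $V(G)$, the returned values always sum to $1$.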
A silly but instructive application of Lemma \ref{imtp_sustained_implies_strictly_sustained} is the following proposition. \begin{prop} If the Dirac measure $\delta_{[G,o]}$ is unimodular for some $[G,o] \in \wh{\mathcal{G}}_M$ with $G$ finite, then $G$ is vertex-transitive. \end{prop} \begin{proof} Suppose that $p \notin \mathrm{Aut}(G)o$ for some $p \in V(G)$. Then $[G,o] \neq [G,p]$, and so $\delta_{[G,o]}[G,p] = 0$. By the assumption that $\delta_{[G,o]}$ is unimodular and Lemma \ref{imtp_sustained_implies_strictly_sustained}, this Dirac measure is strictly sustained by $G$; a contradiction. \end{proof} \section{The Law} Let $\mathfrak{M}_M$ be the space of all probability measures on $\wh{\mathcal{G}}_M$. Denote by $\mb{C}(\wh{\mathcal{G}}_M)$ the set of real-valued bounded continuous functions whose domain is $\wh{\mathcal{G}}_M$. \begin{defn} A sequence $(\mu_n)_{n=1}^\infty$ of probability measures on $\wh{\mathcal{G}}_M$ \emph{converges weakly} to some $\mu \in \mathfrak{M}_M$ if \[ \lim_{n \to \infty} \int f~d\mu_n = \int f~d\mu \] for all $f \in \mb{C}(\wh{\mathcal{G}}_M)$. The \emph{integral of $f$ with respect to the measure $\mu$} is the expression on the right-hand side, which may also be written as $\mu[f]$. Whenever we integrate a function, we will assume that it is measurable. The measure $\mu$ is known as the \emph{weak limit} of the given sequence. Occasionally, we will use $\mu_n \Rightarrow \mu$ to denote weak convergence of measures. \end{defn} Define $\mathcal{G}_M^0$ to be the collection of all isomorphism classes of nonempty finite graphs whose maximal degree is at most $M$. We use the same notation for a finite graph and its isomorphism class; this will not cause confusion. For convenience, the elements of $\mathcal{G}_M^0$ will just be called finite graphs. We equip the vertex set of every graph in $\mathcal{G}_M^0$ with the uniform probability measure. 
That is, if $G \in \mathcal{G}_M^0$, then the probability of choosing a vertex $v \in V(G)$ is $1/|V(G)|$. What follows is perhaps the most important definition in this paper. \begin{defn} The \emph{law} is a function $\Psi : \mathcal{G}_M^0 \to \mathfrak{M}_M$ defined as follows: for every graph $G \in \mathcal{G}_M^0$, \[ \Psi(G)[G_o,o] = \frac{|\mathrm{Aut}(G)o|}{|V(G)|} \] if $G_o$ is a connected component of $G$ for some $o \in V(G)$, and $\Psi(G) = 0$ elsewhere. Here $\mathrm{Aut}(G)$ is the group of automorphisms on $G$, and $\mathrm{Aut}(G)o$ is the \emph{orbit} of the vertex $o$ in $G$: \[ \mathrm{Aut}(G)o = \{v \in V(G) ~:~ \exists \sigma \in \mathrm{Aut}(G) ~~ \sigma(v) = o\}. \] The image $\Psi(G)$ of a finite graph $G \in \mathcal{G}_M^0$ is a probability measure on $\wh{\mathcal{G}}_M$ called \emph{the law of $G$}. Usually, we will simply write \emph{the law} when no reference to a specific graph is necessary. The space of laws is the image $\Psi(\mathcal{G}_M^0)$ equipped with the topology of weak convergence; its closure is denoted by $\mathfrak{M}_M^0$. \end{defn} Occasionally, $|[G_o,o]|$ may be used to denote $|\mathrm{Aut}(G)o|$. This is justified by an equivalent definition of the law of a finite graph that arises from the following fact. \begin{prop} If $G \in \mathcal{G}_M^0$, then \[ |\{(H,p) \in [G_o,o] ~:~ V(H) \subseteq V(G)\}| = |\mathrm{Aut}(G)o| \] whenever $[G_o,o]$ is a rooted connected component of $G$. \end{prop} \begin{proof} Observe that $(H,p) \in [G_o,o]$ and $V(H) \subseteq V(G)$ imply that $H = G_p$. This means the function \begin{align*} f : \{(H,p) \in [G_o,o] ~:~ V(H) \subseteq V(G)\} &\to \mathrm{Aut}(G)o\\ (H,p) &\mapsto p \end{align*} is injective. To see that $f$ is surjective, assume that $p \in \mathrm{Aut}(G)o$. Then $\sigma(p) = o$ for some automorphism $\sigma : G \to G$. Graph automorphisms are isometries because they preserve paths. 
Since $\sigma(p) = o$, its inverse $\tau = \sigma^{-1}$ is an automorphism of $G$ with $\tau(o) = p$. If $r$ is a nonnegative integer, then \begin{align*} \tau B(o,r) &= \{\tau(x) \in V(G) ~:~ d(x,o) \leq r\}\\ &= \{\tau(x) \in V(G) ~:~ d(\tau(x),p) \leq r\}\\ &= \{y \in V(G) ~:~ d(y,p) \leq r\}\\ &= B(p,r) \end{align*} because $\tau$ is a surjective isometry. Hence $(B_G(o,r),o) \cong (B_G(p,r),p)$ for all nonnegative integers $r$. In particular, $(G_o,o) \cong (G_p,p)$. Since $V(G_p) \subseteq V(G)$, the rooted graph $(G_p,p)$ is an element of the domain of $f$. Furthermore, $f(G_p,p) = p$. Thus $f$ is a bijection, and so the domain and codomain have the same cardinality. \end{proof} Please note that $[G_o,o]$ is an infinite set, so $|[G_o,o]|$ is just a symbol that denotes the number \[ |\{(H,p) \in [G_o,o] ~:~ V(H) \subseteq V(G)\}|. \] This notation is convenient, though, as demonstrated in the example below. \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.75] \GraphInit[vstyle=Classic] \Vertex[x=0,y=0,Lpos=240,style={line width=1pt,fill=white,minimum size=5pt}]{6} \Vertex[x=4,y=0,Lpos=330,style={line width=1pt,fill=white,minimum size=5pt}]{7} \Vertex[x=2,y=2,style={line width=1pt,fill=white,minimum size=5pt}]{5} \Vertex[x=2,y=4,style={line width=1pt,fill=white,minimum size=5pt}]{4} \Vertex[x=2,y=7,Lpos=90,style={line width=1pt,fill=white,minimum size=5pt}]{2} \Vertex[x=0,y=6,Lpos=120,style={line width=1pt,fill=white,minimum size=5pt}]{1} \Vertex[x=4,y=6,Lpos=60,style={line width=1pt,fill=white,minimum size=5pt}]{3} \Edges(6,5) \Edges(7,5) \Edges(5,4) \Edges(4,2) \Edges(4,1) \Edges(4,3) \Vertex[x=6,y=2,Lpos=240,style={line width=1pt,fill=white,minimum size=5pt}]{8} \Vertex[x=9,y=2,Lpos=330,style={line width=1pt,fill=white,minimum size=5pt}]{10} \Vertex[x=7.5,y=4.5,Lpos=90,style={line width=1pt,fill=white,minimum size=5pt}]{9} \Edges(8,10,9,8) \end{tikzpicture} \end{center} \caption[The computation of a law]{A finite graph $G$ with 10 vertices.} \label{fig:example_of_law} \end{figure} To visualize the definition of the law, consider the graph 
$G$ shown in Figure \ref{fig:example_of_law}. Note that $[G_6,6] = [G_7,7]$. In fact, $|[G_6,6]| = 2$. Furthermore, $[G_8,8] = [G_9,9] = [G_{10},10]$ and $|[G_8,8]| = 3$. Similarly, we compute that $\Psi(G)$ is defined by \begin{align*} \Psi(G)[G_1,1] &= 3/10\\ \Psi(G)[G_4,4] &= 1/10\\ \Psi(G)[G_5,5] &= 1/10\\ \Psi(G)[G_6,6] &= 2/10\\ \Psi(G)[G_8,8] &= 3/10. \end{align*} Before becoming too excited about $\Psi$, we must be certain that $\Psi(G)$ is actually a probability measure for every finite graph $G$. The following proposition establishes this certainty. \begin{prop} Let $G$ be a finite graph. The law $\Psi(G)$ is a well-defined probability measure. \end{prop} \begin{proof} If $[G_o,o] = [G_{o'},o']$, then $\mathrm{Aut}(G)o = \mathrm{Aut}(G)o'$, and so $\Psi(G)$ is well-defined. Let $\{[G_1,o_1],\ldots,[G_k,o_k]\}$ be the set of rooted connected components of $G$. Recall that the orbits of $\mathrm{Aut}(G)$ partition $V(G)$, which means $V(G)$ is the disjoint union of the orbits of the roots of the rooted connected components. Then \[ \Psi(G)(\wh{\mathcal{G}}_M) = \sum_{i=1}^k \Psi(G)[G_i,o_i] = \sum_{i=1}^k \frac{|\mathrm{Aut}(G)o_i|}{|V(G)|} = \frac{|V(G)|}{|V(G)|} = 1. \] Since $\Psi(G)$ has finite support, it is trivially countably additive. Hence $\Psi(G)$ is a probability measure. \end{proof} The remainder of this section will include various properties of $\Psi$. Most of the interesting results lie in the later sections, but some may use the propositions that follow. \begin{prop} The restriction of $\Psi$ to the set of graphs in $\mathcal{G}_M^0$ whose connected components are pairwise nonisomorphic is injective. \end{prop} \begin{proof} Let $G$ and $H$ be graphs in $\mathcal{G}_M^0$ whose connected components are pairwise nonisomorphic. Suppose that $\Psi(G) = \Psi(H)$. Given a root $o \in V(G)$, consider the rooted connected component $[G_o,o]$ of $G$. By definition of $\Psi(G)$, we know that $\Psi(G)[G_o,o] \neq 0$, and so $\Psi(H)[G_o,o] \neq 0$. 
Hence $[G_o,o]$ is a rooted connected component of $H$. That is, $[G_o,o] = [H_p,p]$ for some $p \in V(H)$. Thus every rooted connected component of $G$ corresponds to a rooted connected component of $H$. Furthermore, $[H_p,p]$ is unique because the components in $H$ are pairwise nonisomorphic. By the same argument with the roles of $G$ and $H$ exchanged, every rooted connected component of $H$ corresponds to a unique rooted connected component of $G$. It follows that $G$ is isomorphic to $H$. \end{proof} Note that $\Psi$ itself is not injective. For example, if $G$ consists of two disjoint copies of $K_3$ and $H$ is just one copy, then $\Psi(G) = \Psi(H)$, but the graphs $G$ and $H$ are not isomorphic. The preceding proposition shows that to calculate the law of a certain graph, it suffices to consider only one copy of each component. A similar simplification occurs in the case of vertex-transitive graphs. A vertex-transitive graph $G \in \mathcal{G}_M^0$ has exactly one rooted connected component $[G_o,o]$. Its law is defined by $\Psi(G)[G_o,o] = 1$. That is, $\Psi(G)$ is the Dirac measure on the point $[G_o,o]$. Such measures will appear in a slightly more general context later on in the report. We will use the next proposition several times to simplify the computation of integrals. \begin{prop} If $\{[G_1,o_1],\ldots,[G_k,o_k]\}$ is the set of the rooted connected components of a graph $G \in \mathcal{G}_M^0$, then \[ \int f~d\Psi(G) = \frac{1}{|V(G)|} \sum_{x \in V(G)} f[G_x,x] \] for every real-valued function $f$ whose domain is $\wh{\mathcal{G}}_M$. \end{prop} \begin{proof} Note that \[ \sum_{x \in V(G)} f[G_x,x] = \sum_{i=1}^k f[G_i,o_i] \cdot |\mathrm{Aut}(G)o_i|. \] Hence \begin{align*} \int f~d\Psi(G) &= \sum_{i=1}^k f[G_i,o_i] \cdot \Psi(G)[G_i,o_i]\\ &= \frac{1}{|V(G)|} \sum_{i=1}^k f[G_i,o_i] \cdot |\mathrm{Aut}(G)o_i|\\ &= \frac{1}{|V(G)|} \sum_{x \in V(G)} f[G_x,x], \end{align*} as required. \end{proof} As there are now two types of convergence, the theorem below is a new observation that connects them when dealing with the special case of the Dirac measure. 
\begin{theo}\label{weak_conv_implies_graph_conv} If $(\Psi(G_n))_{n=1}^\infty$ converges weakly to the Dirac measure $\delta_{[G,o]}$ for some $[G,o] \in \wh{\mathcal{G}}_M$, then there exists a sequence of vertices $(o_n)_{n=1}^\infty$ such that $([G_n,o_n])_{n=1}^\infty$ converges\footnote{Here $G_n$ is the connected component of $o_n$. It is not the entire graph. That is, $[G_n,o_n]$ is actually $[(G_n)_{o_n},o_n]$. As the reader can see, this modification greatly simplifies the notation.} to $[G,o]$. \end{theo} \begin{proof} For each positive integer $i$, choose $o_i \in V(G_i)$ such that \[ \rho([G,o],[G_i,o_i]) \] is minimized. Recall that the function \[ \rho_{[G,o]} : \wh{\mathcal{G}}_M \to \mathbb{R} \] defined by \[ \rho_{[G,o]}[G',o'] = \rho([G,o],[G',o']) \] for all $[G',o'] \in \wh{\mathcal{G}}_M$ is continuous. By assumption that $(\Psi(G_n))_{n=1}^\infty$ converges weakly to $\delta_{[G,o]}$, \[ \lim_{n \to \infty} \int \rho_{[G,o]}~d\Psi(G_n) = \int \rho_{[G,o]}~d\delta_{[G,o]}. \] Note that \[ \int \rho_{[G,o]}~d\delta_{[G,o]} = 0 \] and \[ \int \rho_{[G,o]}~d\Psi(G_n) = \sum_{x \in V(G_n)} \frac{\rho_{[G,o]}[(G_n)_x,x]}{|V(G_n)|} \] for each positive integer $n$. Hence the average value of $\rho_{[G,o]}$ converges to $0$, and so \[ \rho([G,o],[G_n,o_n]) \to 0 \] by definition of the sequence $(o_n)_{n=1}^\infty$. \end{proof} The reader may notice that the converse of this theorem is not true, but this will be demonstrated later on. Something can be said about the convexity of the space of laws and its closure. To streamline the statements, a bit of notation is required. If $G$ and $H$ are graphs, then $G + H$ is their disjoint union. Similarly, $nG$ is a disjoint union of $n$ copies of $G$ for some nonnegative integer $n$. By convention, $0G$ is the graph with no vertices. The next example and the proposition that follows put this new notation to good use. The measure defined by $\mu[K_1,\cdot] = 1/2$ and $\mu[K_2,\cdot] = 1/2$ yields a law. 
Indeed \[ \mu = \Psi(2K_1 + K_2). \] \begin{prop} If $m$ and $n$ are nonnegative integers with $(m,n) \neq (0,0)$, and $G$ and $H$ are finite graphs, then \begin{align}\label{law_linearity} \Psi(mG + nH) = \frac{m|V(G)|\Psi(G) + n|V(H)|\Psi(H)}{m|V(G)| + n|V(H)|}. \end{align} \end{prop} \begin{proof} Suppose that $[I_q,q]$ is a rooted connected component of $G$ and $H$. That is, $[I_q,q] = [G_o,o] = [H_p,p]$ for some $o \in V(G)$ and $p \in V(H)$. Then \[ \Psi(mG + nH)[I_q,q] = \frac{|\mathrm{Aut}(mG + nH)q|}{|V(mG + nH)|} = \frac{|\mathrm{Aut}(mG + nH)q|}{m|V(G)| + n|V(H)|}. \] The cardinality of the orbit of $q$ in $mG + nH$ may be expressed as \[ |\mathrm{Aut}(mG + nH)q| = m|\mathrm{Aut}(G)o| + n|\mathrm{Aut}(H)p|. \] The result is \[ \Psi(mG + nH)[I_q,q] = \frac{m|\mathrm{Aut}(G)o| + n|\mathrm{Aut}(H)p|}{m|V(G)| + n|V(H)|}. \] On the other hand, \[ \frac{m|V(G)|\Psi(G)[G_o,o] + n|V(H)|\Psi(H)[H_p,p]}{m|V(G)| + n|V(H)|} = \frac{m|\mathrm{Aut}(G)o| + n|\mathrm{Aut}(H)p|}{m|V(G)| + n|V(H)|}. \] Hence Equation \ref{law_linearity} holds. Similar reasoning applies to the three remaining cases where $[I_q,q]$ is a rooted connected component of $G$ or $H$, but not both. \end{proof} As the following lemma and theorem demonstrate, the line segment between a pair of weak limits of laws lies in the closure of the space of laws. \begin{lem} The set of laws is rationally convex. That is, for all $G,H \in \mathcal{G}_M^0$ and $t \in [0,1] \cap \mathbb{Q}$, the measure $t\Psi(G) + (1 - t)\Psi(H)$ is a law. \end{lem} \begin{proof} Let $t = p/q$ for some nonnegative integers $p \leq q$ with $q \neq 0$. By Equation \ref{law_linearity}, \begin{align*} \Psi(p|V(H)|G &+ (q - p)|V(G)|H)\\ &= \frac{p|V(H)||V(G)|\Psi(G) + (q - p)|V(G)||V(H)|\Psi(H)}{p|V(H)||V(G)| + (q - p)|V(G)||V(H)|}\\ &= \left(\frac{p}{q}\right)\Psi(G) + \left(\frac{q - p}{q}\right)\Psi(H)\\ &= t\Psi(G) + (1 - t)\Psi(H). \end{align*} Hence $t\Psi(G) + (1 - t)\Psi(H)$ is the law of the graph $p|V(H)|G + (q - p)|V(G)|H$. 
\end{proof} \begin{theo} The closure of the set of laws is convex. \end{theo} \begin{proof} Let $\mu_G$ and $\mu_H$ be limits of laws: $\Psi(G_n) \Rightarrow \mu_G$ and $\Psi(H_n) \Rightarrow \mu_H$. Given a real number $a \in [0,1]$, there exists a sequence $(a_n)_{n=1}^\infty$ of rational numbers in $[0,1]$ such that $a_n \to a$. Suppose that $f$ is a bounded continuous function on $\wh{\mathcal{G}}_M$. Note that \[ \int f~d(a_n\Psi(G_n) + (1 - a_n)\Psi(H_n)) = a_n\int f~d\Psi(G_n) + (1 - a_n)\int f~d\Psi(H_n) \] for all positive integers $n$, and \[ \int f~d(a\mu_G + (1 - a)\mu_H) = a\int f~d\mu_G + (1 - a)\int f~d\mu_H. \] Then \begin{align*} \Bigg|\bigg(a_n\int f ~d\Psi(G_n) &+ (1 - a_n)\int f ~d\Psi(H_n)\bigg) - \bigg(a\int f ~d\mu_G + (1 - a)\int f ~d\mu_H\bigg)\Bigg|\\ &\leq |a_n\Psi(G_n)[f] - a\mu_G[f]| + |(1 - a_n)\Psi(H_n)[f] - (1 - a)\mu_H[f]|. \end{align*} The right-hand side is at most \begin{align*} |\Psi(G_n)[f]||a_n - a| &+ |a||\Psi(G_n)[f] - \mu_G[f]|\\ &+ |a_n - a||\Psi(H_n)[f]| + |1 - a||\Psi(H_n)[f] - \mu_H[f]|, \end{align*} which tends to zero because the sequences $(\Psi(G_n)[f])$ and $(\Psi(H_n)[f])$ are bounded. Hence \[ \lim_{n \to \infty} \int f ~d(a_n\Psi(G_n) + (1 - a_n)\Psi(H_n)) = \int f ~d(a\mu_G + (1 - a)\mu_H), \] and since each $a_n\Psi(G_n) + (1 - a_n)\Psi(H_n)$ is a law by the preceding lemma, $a\mu_G + (1 - a)\mu_H$ is the weak limit of a sequence of laws. \end{proof} \section{The Metric $\rho$} By itself, the set $\wh{\mathcal{G}}$ is not very interesting for our purposes. To benefit from it, we will equip it with a certain metric. Given $[G,o],[G',o'] \in \wh{\mathcal{G}}$, we define the distance $\rho : \wh{\mathcal{G}} \times \wh{\mathcal{G}} \to \mathbb{R}$ as follows: \[ \rho([G,o],[G',o']) = \begin{cases} 0 & \text{ if $[G,o] = [G',o']$,}\\ \frac{1}{1 + r} & \text{ otherwise,} \end{cases} \] where \[ r = \sup \{s \in \mathbb{N} ~:~ [B_G(o,s),o] = [B_{G'}(o',s),o']\}. \] \begin{prop} The function $\rho$ is a well-defined metric on $\wh{\mathcal{G}}$. Moreover, $\rho$ is an \emph{ultrametric}. 
That is, it satisfies a stronger version of the triangle inequality: \[ \rho([G_1,o_1],[G_2,o_2]) \leq \max\{\rho([G_1,o_1],[G_3,o_3]),\rho([G_3,o_3],[G_2,o_2])\} \] for all $[G_1,o_1],[G_2,o_2],[G_3,o_3] \in \wh{\mathcal{G}}$. \end{prop} \begin{proof} Let $[G_1,o_1],[G_1',o_1'],[G_2,o_2],[G_2',o_2'] \in \wh{\mathcal{G}}$ be arbitrary. Suppose that \[ [G_1,o_1] = [G_2,o_2] \] and \[ [G_1',o_1'] = [G_2',o_2'], \] and consider the following two cases. \noindent\textbf{Case 1.} If $[G_1,o_1] = [G_1',o_1']$, then $[G_2,o_2] = [G_2',o_2']$, and so \[ \rho([G_1,o_1],[G_1',o_1']) = 0 = \rho([G_2,o_2],[G_2',o_2']). \] \noindent\textbf{Case 2.} If $[G_1,o_1] \neq [G_1',o_1']$, then $[G_2,o_2] \neq [G_2',o_2']$. Let $r$ and $s$ be the largest integers such that \[ [B_{G_1}(o_1,r),o_1] = [B_{G_1'}(o_1',r),o_1'] \] and \[ [B_{G_2}(o_2,s),o_2] = [B_{G_2'}(o_2',s),o_2']. \] Then \[ [B_{G_1}(o_1,s),o_1] = [B_{G_2}(o_2,s),o_2] = [B_{G_2'}(o_2',s),o_2'] = [B_{G_1'}(o_1',s),o_1'], \] which means $s \leq r$. Similarly, $r \leq s$. Hence $r = s$, and so $\rho$ is well-defined. To see that $\rho$ is a metric, it suffices to show that it satisfies the triangle inequality because the other conditions follow immediately from the definition. In fact, we will prove that \[ \rho([G_1,o_1],[G_2,o_2]) \leq \max\{\rho([G_1,o_1],[G_3,o_3]),\rho([G_3,o_3],[G_2,o_2])\}. \] This inequality holds if any two of the three rooted graphs are equal. Suppose that \[ \rho([G_1,o_1],[G_2,o_2]) = \frac{1}{1 + r}, \] \[ \rho([G_1,o_1],[G_3,o_3]) = \frac{1}{1 + s}, \] and \[ \rho([G_3,o_3],[G_2,o_2]) = \frac{1}{1 + t} \] where $r$, $s$, and $t$ are the largest integers as defined previously. Since \[ [B_{G_1}(o_1,s),o_1] = [B_{G_3}(o_3,s),o_3] \] and \[ [B_{G_3}(o_3,t),o_3] = [B_{G_2}(o_2,t),o_2], \] it follows that \[ [B_{G_1}(o_1,\min\{s,t\}),o_1] = [B_{G_2}(o_2,\min\{s,t\}),o_2]. 
\] By definition, $\min\{s,t\} \leq r$, which means \[ \frac{1}{1 + r} \leq \frac{1}{1 + \min\{s,t\}} = \max\left\{\frac{1}{1 + s},\frac{1}{1 + t}\right\}, \] and the result follows. \end{proof} \begin{figure} \begin{minipage}[b]{0.48\linewidth} \begin{center} \begin{tikzpicture}[scale=0.75] \GraphInit[vstyle=Simple] \Vertex[x=0,y=0,style={line width=1pt,fill=black,minimum size=5pt}]{A} \Vertex[x=2.5,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{B} \Vertex[x=1.25,y=2,style={line width=1pt,fill=white,minimum size=5pt}]{C} \Edges(A,B,C,A) \end{tikzpicture} \end{center} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \begin{center} \begin{tikzpicture}[scale=0.75] \GraphInit[vstyle=Simple] \Vertex[x=0,y=0,style={line width=1pt,fill=black,minimum size=5pt}]{A} \Vertex[x=2.5,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{B} \Vertex[x=1.25,y=2,style={line width=1pt,fill=white,minimum size=5pt}]{C} \Vertex[x=5,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{D} \Edges(A,B,C,A) \Edges(B,D) \end{tikzpicture} \end{center} \end{minipage} \caption[An example of distance]{An example of distance.} \label{fig:example_of_distance} \end{figure} As with most concepts, examples are an excellent way to understand this metric space. Consider the graphs $[G,o]$ and $[H,p]$ shown in Figure \ref{fig:example_of_distance}. Then \[ [B_G(o,1),o] = [B_H(p,1),p], \] and 1 is the largest integer for which this is true. Hence $\rho([G,o],[H,p]) = 1/2$. Note that, in either case, a ball of radius 2 is the entire graph. When a metric space is defined, it is helpful to look at functions out of the space. An important example of such a function is given in the following proposition. \begin{prop}\label{degree_function} The \emph{degree function} $\deg$ defined by $[G,o] \mapsto \deg_G(o)$ for all $[G,o] \in \wh{\mathcal{G}}_M$ is $M$-Lipschitz. \end{prop} \begin{proof} Let $[G,o],[G',o'] \in \wh{\mathcal{G}}_M$ be arbitrary. 
If $\deg_G(o) = \deg_{G'}(o')$, the result follows. Suppose that the two values are distinct. Then \[ \rho([G,o],[G',o']) = 1, \] and so \[ |\deg_G(o) - \deg_{G'}(o')| \leq M = M \rho([G,o],[G',o']). \qedhere \] \end{proof} Note that the domain of the degree function is $\wh{\mathcal{G}}_M$, not all of $\wh{\mathcal{G}}$. Indeed this subspace is better-behaved than $\wh{\mathcal{G}}$. The theorem below reinforces this opinion. \begin{theo} The space $\wh{\mathcal{G}}_M$ is complete and totally bounded. That is, $\wh{\mathcal{G}}_M$ is compact. \end{theo} \begin{proof}[Proof of completeness.] Let $([G_n,o_n])_{n=1}^\infty$ be a Cauchy sequence of graphs in $(\wh{\mathcal{G}}_M,\rho)$. For every positive integer $n$, define \[ r_n = \sup\{r \in \mathbb{N} ~:~ \forall k \in \mathbb{N} ~~ [B_{G_n}(o_n,r),o_n] = [B_{G_{n+k}}(o_{n+k},r),o_{n+k}]\}. \] To simplify the notation, let $B_n = B_{G_n}(o_n,r_n)$. Furthermore, let \[ (\mathcal{V},\mathcal{E}) = \left(\bigcup_{i=1}^\infty V(B_i) \times \{i\},\bigcup_{i=1}^\infty E(B_i) \times \{i\}\right). \] Note that there exists an isometric embedding $j_n : B_n \hookrightarrow B_{n+1}$, which we use to define an equivalence relation $\approx$ as follows: \[ (u,n) \approx (v,m) \] if and only if \[ j_{n+m} \circ j_{n+m-1} \circ \cdots \circ j_n(u) = j_{n+m} \circ j_{n+m-1} \circ \cdots \circ j_m(v) \] for all $(u,n),(v,m) \in \mc{V}$. Following that, let $G$ be the graph with $V(G) = \mathcal{V}/\hspace{-0.5em}\approx$, and whose edge set is defined as follows: \[ \{[u,n],[v,m]\} \in E(G) \] if and only if \[ \{j_{n+m} \circ \cdots \circ j_n(u),j_{n+m} \circ \cdots \circ j_m(v)\} \in E(B_{n+m}). \] Having completed the construction of the rooted graph $[G,o]$ where $o = [o_1,1]$, we will now show that this is the limit of the Cauchy sequence defined at the beginning of the proof. Throughout, we will assume that $\varepsilon \leq 1$ because the distance $\rho$ is bounded above by 1. 
Given $\varepsilon > 0$, there exists a nonnegative integer $N$ such that for all $k \in \mathbb{N}$, \[ \rho([G_N,o_N],[G_{N+k},o_{N+k}]) < \varepsilon \] because $([G_n,o_n])_{n=1}^\infty$ is Cauchy. If $n \geq N$, then \begin{align*} \rho([G_n,o_n],[G,o]) &\leq \max\{\rho([G_n,o_n],[G_N,o_N]),\rho([G_N,o_N],[G,o])\}\\ &< \max\left\{\varepsilon,\frac{1}{1 + r_N}\right\}. \end{align*} Let $R_k$ be the radius of the ball corresponding to $\rho([G_N,o_N],[G_{N+k},o_{N+k}])$. As stated above, \[ \frac{1}{\varepsilon} - 1 < R_k \] for all $k \geq 0$. Consider the following cases. If \[ \ceil{\frac{1}{\varepsilon} - 1} < R_k \] for all $k \geq 0$, then \[ [B_{G_N}(o_N,\ceil{1/\varepsilon - 1} + 1),o_N] = [B_{G_{N+k}}(o_{N+k},\ceil{1/\varepsilon - 1} + 1),o_{N+k}] \] for all $k \geq 0$, and so \[ \ceil{\frac{1}{\varepsilon} - 1} + 1 \leq r_N, \] which means $1/\varepsilon - 1 < r_N$. That is, \[ \frac{1}{1 + r_N} < \varepsilon. \] On the other hand, suppose that \[ \ceil{\frac{1}{\varepsilon} - 1} = R_{k_0} \] for some $k_0 \geq 0$. Since $\ceil{1/\varepsilon - 1} \leq R_k$, \[ [B_{G_N}(o_N,R_{k_0}),o_N] = [B_{G_{N+k}}(o_{N+k},R_{k_0}),o_{N+k}] \] for all $k \geq 0$, and so \[ R_{k_0} \leq r_N. \] By definition of $r_N$, \[ [B_{G_N}(o_N,r_N),o_N] = [B_{G_{N+k_0}}(o_{N+k_0},r_N),o_{N+k_0}]. \] Then \[ \frac{1}{1 + R_{k_0}} \leq \frac{1}{1 + r_N}, \] which means $r_N \leq R_{k_0}$. It follows that $R_{k_0} = r_N$. Then $1/\varepsilon - 1 < R_{k_0}$ implies that \[ \frac{1}{1 + r_N} < \varepsilon. \] In either case, \[ \rho([G_n,o_n],[G,o]) < \varepsilon. \] Thus $([G_n,o_n])_{n=1}^\infty$ converges to $[G,o]$. \end{proof} \begin{proof}[Proof of total boundedness.] To show that $\wh{\mathcal{G}}_M$ is totally bounded, let $\varepsilon > 0$ and let \[ r = \ceil{\frac{1}{\varepsilon} - 1}. \] Denote by $F$ the set of all rooted graphs in $\wh{\mathcal{G}}_M$ of radius at most $r$ where the radius is the supremum of the distances between each vertex and the root.
Observe that $F$ is a finite set because any graph of radius $r$ has at most \[ 1 + M \sum_{i=1}^{r} (M - 1)^{i-1} \leq (M + 1)^r \] vertices, and so $|F|$ is at most the number of graphs on $(M + 1)^r$ or fewer vertices. Now, given $[G,o] \in \wh{\mathcal{G}}_M$, the rooted graph $[B_G(o,r),o]$ is of radius $r$, and so it belongs to $F$. Furthermore, \[ \rho([G,o],[B_G(o,r),o]) \leq \frac{1}{1 + r} \leq \varepsilon. \] Hence $\wh{\mathcal{G}}_M$ is totally bounded. \end{proof} The larger space $(\wh{\mathcal{G}},\rho)$ is \emph{not} compact. Consider the sequence of $(1,n)$-bipartite graphs $K_{1,n}$ each rooted at the vertex of degree $n$. Figure \ref{fig:star_graph_sequence} depicts a part of this sequence. This sequence has no convergent subsequence because the degree of the root is increasing. Note that such a sequence would not exist in the subspace $(\wh{\mathcal{G}}_M,\rho)$ where the degree of the root is at most $M$. \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.75] \GraphInit[vstyle=Simple] \Vertex[x=0,y=0,style={line width=1pt,fill=black,minimum size=5pt}]{A} \Vertex[x=0,y=1,style={line width=1pt,fill=white,minimum size=5pt}]{A1} \Vertex[x=4,y=0,style={line width=1pt,fill=black,minimum size=5pt}]{B} \Vertex[x=4,y=1,style={line width=1pt,fill=white,minimum size=5pt}]{B1} \Vertex[x=4,y=-1,style={line width=1pt,fill=white,minimum size=5pt}]{B2} \Vertex[x=8,y=0,style={line width=1pt,fill=black,minimum size=5pt}]{C} \Vertex[x=8,y=1,style={line width=1pt,fill=white,minimum size=5pt}]{C1} \Vertex[x=8,y=-1,style={line width=1pt,fill=white,minimum size=5pt}]{C2} \Vertex[x=7,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{C3} \Vertex[x=12,y=0,style={line width=1pt,fill=black,minimum size=5pt}]{D} \Vertex[x=12,y=1,style={line width=1pt,fill=white,minimum size=5pt}]{D1} \Vertex[x=12,y=-1,style={line width=1pt,fill=white,minimum size=5pt}]{D2} \Vertex[x=11,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{D3} \Vertex[x=13,y=0,style={line 
width=1pt,fill=white,minimum size=5pt}]{D4} \Vertex[x=16,y=0,style={line width=1pt,fill=black,minimum size=5pt}]{E} \Vertex[x=16,y=1,style={line width=1pt,fill=white,minimum size=5pt}]{E1} \Vertex[x=16,y=-1,style={line width=1pt,fill=white,minimum size=5pt}]{E2} \Vertex[x=15,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{E3} \Vertex[x=17,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{E4} \Vertex[x=16.71,y=0.71,style={line width=1pt,fill=white,minimum size=5pt}]{E5} \Edges(A,A1) \Edges(B,B1) \Edges(B,B2) \Edges(C,C1) \Edges(C,C2) \Edges(C,C3) \Edges(D,D1) \Edges(D,D2) \Edges(D,D3) \Edges(D,D4) \Edges(E,E1) \Edges(E,E2) \Edges(E,E3) \Edges(E,E4) \Edges(E,E5) \end{tikzpicture} \end{center} \caption[The space $\wh{\mathcal{G}}$ is not compact]{The space $\wh{\mathcal{G}}$ is not compact.} \label{fig:star_graph_sequence} \end{figure} Before moving to the next section, there are a few interesting results concerning continuous functions on $\wh{\mathcal{G}}_M$. Denote by $\wh{\mathcal{G}}_M^0$ the subspace of $\wh{\mathcal{G}}_M$ of finite rooted graphs. \begin{prop} The set $\wh{\mathcal{G}}_M^0$ is a dense subspace of $\wh{\mathcal{G}}_M$. \end{prop} \begin{proof} Let $[G,o] \in \wh{\mathcal{G}}_M$ be arbitrary. Consider the sequence of finite rooted graphs defined by \[ [G_n,o_n] = [B_G(o,n),o] \] for all positive integers $n$. This sequence converges to $[G,o]$, which means $[G,o]$ lies in the closure of $\wh{\mathcal{G}}_M^0$. The result follows. \end{proof} This fact is helpful when searching for continuous functions on the space of rooted graphs. By a result from analysis, a uniformly continuous function on $\wh{\mathcal{G}}_M^0$ induces a unique continuous extension on $\wh{\mathcal{G}}_M$. On the other hand, the next fact implies that all functions on $\wh{\mathcal{G}}_M^0$ are continuous--though not necessarily uniformly continuous. \begin{lem} The isolated points of $\wh{\mathcal{G}}_M$ are precisely the finite rooted graphs. 
\end{lem} \begin{proof} If $[G,o] \in \wh{\mathcal{G}}_M$ is an isolated point, then any sequence that converges to it is eventually constant. In particular, this includes the sequence of balls rooted at $o \in V(G)$, so $[G,o] = [B_G(o,n),o]$ for some nonnegative integer $n$, which means $G$ is finite. Conversely, assume that $[G,o] \in \wh{\mathcal{G}}_M^0$, and let \[ R = \inf\{r \in \mathbb{N} ~:~ [G,o] = [B_G(o,r),o]\}, \] which is the \emph{radius} of $[G,o]$. If $[H,p] \in \wh{\mathcal{G}}_M$ and $[G,o] \neq [H,p]$, then \[ \rho([G,o],[H,p]) = \frac{1}{1 + r} \] where $r$ is the largest nonnegative integer for which $[B_G(o,r),o] = [B_H(p,r),p]$. If $r > R$, the radius of $[H,p]$ must be at most $R$, but this means \[ [G,o] = [B_G(o,R),o] = [B_H(p,R),p] = [H,p], \] which is a contradiction. Hence $r \leq R$, and so \[ \rho([G,o],[H,p]) \geq \frac{1}{1 + R}. \] That is, $[G,o]$ is an isolated point. \end{proof} \begin{cor} The space $\wh{\mathcal{G}}_M^0$ is topologically discrete. \end{cor} \section{The Space of Paths} This section will cover the case of $M = 2$. Specifically, we will concentrate on a special subspace of $\wh{\mathcal{G}}_2$. Even though this is only a small portion of the general theory, the insight we gain will be helpful in understanding $\wh{\mathcal{G}}_M$. \begin{defn} A \emph{finite rooted path} is an element of the set \[ \left\{[P_k,o] \in \wh{\mathcal{G}}_2 ~:~ k \in \mathbb{N}^\ast\right\} \] where $P_k$ is a path of length $k - 1$ with $k$ vertices. These may also be denoted by $[P(u,v),o]$ where $u$ and $v$ are the ends of the path $P$. Consider the graph whose vertex set is $\mathbb{Z}$, and in which consecutive integers are adjacent. Such a graph is known as the \emph{bi-infinite} path $P_\infty$. An initial segment $[N,\infty) \cap \mathbb{Z}$ where $N$ is a nonpositive integer induces the rooted graph $[P(N,\infty),0]$, which is called a \emph{semi-infinite} rooted path. Note that $P_\infty$ is vertex-transitive, meaning it has exactly one rooted connected component, which we will denote by $[P_\infty,\cdot]$.
An instance of each of these rooted graphs is shown in Figure \ref{fig:types_of_paths}. \end{defn} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.75] \GraphInit[vstyle=Classic] \Vertex[NoLabel,x=0,y=4,style={line width=1pt,fill=white,minimum size=5pt}]{A} \Vertex[Lpos=90,x=2,y=4,style={line width=1pt,fill=black,minimum size=5pt}]{$u_2$} \Vertex[NoLabel,x=4,y=4,style={line width=1pt,fill=white,minimum size=5pt}]{C} \Vertex[NoLabel,x=6,y=4,style={line width=1pt,fill=white,minimum size=5pt}]{D} \Vertex[NoLabel,x=8,y=4,style={line width=1pt,fill=white,minimum size=5pt}]{E} \Edges(A,$u_2$,C,D,E) \Vertex[Lpos=90,x=0,y=2,style={line width=1pt,fill=white,minimum size=5pt}]{$-2$} \Vertex[Lpos=90,x=2,y=2,style={line width=1pt,fill=white,minimum size=5pt}]{$-1$} \Vertex[Lpos=90,x=4,y=2,style={line width=1pt,fill=black,minimum size=5pt}]{$0$} \Vertex[Lpos=90,x=6,y=2,style={line width=1pt,fill=white,minimum size=5pt}]{$1$} \Vertex[Lpos=90,x=8,y=2,style={line width=1pt,fill=white,minimum size=5pt}]{$2$} \Vertex[NoLabel,x=10,y=2,style={line width=1pt,fill=white,minimum size=0pt}]{K} \Edges($-2$,$-1$,$0$,$1$,$2$) \Edges[style={dotted}]($2$,K) \Vertex[NoLabel,x=-2,y=0,style={line width=1pt,fill=white,minimum size=0pt}]{L} \Vertex[NoLabel,x=0,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{M} \Vertex[NoLabel,x=2,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{N} \Vertex[NoLabel,x=4,y=0,style={line width=1pt,fill=black,minimum size=5pt}]{O} \Vertex[NoLabel,x=6,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{P} \Vertex[NoLabel,x=8,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{Q} \Vertex[NoLabel,x=10,y=0,style={line width=1pt,fill=white,minimum size=0pt}]{R} \Edges(M,N,O,P,Q) \Edges[style={dotted}](L,M) \Edges[style={dotted}](Q,R) \end{tikzpicture} \end{center} \caption[Three types of rooted paths]{The rooted paths $[P_5,u_2]$, $[P(-2,\infty),0]$, and $[P_\infty,\cdot]$.} \label{fig:types_of_paths} \end{figure} \begin{prop} The semi-infinite 
and bi-infinite rooted paths are the limit points of the set of finite rooted paths. \end{prop} \begin{proof} As the reader can verify, each of the mentioned rooted graphs is indeed a limit point. On the other hand, $[G,o]$ is a limit point only if there is a sequence of finite rooted paths that converges to $[G,o]$. Using the same construction as in the proof of completeness of the space $\wh{\mathcal{G}}_M$, the limit of the finite rooted paths will be a rooted path, bi-infinite or semi-infinite. \end{proof} With the proposition above, we can define the special subspace mentioned at the beginning of this section. The \emph{space of paths} $\mc{P}$ is the closure of the set of finite rooted paths. \subsection{Basics and Properties} One of the main attractions of the finite rooted paths is the ability to calculate the law explicitly. \begin{prop} Let $k$ be a positive integer. Write $P_{2k} = u_1 \cdots u_{2k}$ and $P_{2k+1} = u_1 \cdots u_{2k+1}$. Then \[ \Psi(P_{2k})[P_{2k},u_i] = \frac{1}{k} \] for all $i \in \{1,\ldots,k\}$ and \[ \Psi(P_{2k+1})[P_{2k+1},u_i] = \begin{cases} \frac{2}{2k + 1} & \text{ if $i \in \{1,\ldots,k\}$},\\ \frac{1}{2k + 1} & \text{ if $i = k + 1$}. \end{cases} \] \end{prop} \begin{proof} Given a positive integer $n$, let $P_n = u_1 \cdots u_n$. Then \[ [P_n,u_i] = [P_n,u_{n-i+1}] \] for all $i \in \{1,\ldots,\floor{n/2}\}$, and the result follows. \end{proof} To illustrate the usefulness of these explicit formulae, consider the following example. Let $\mu_n = \Psi(P_n)$ and let $\mu$ be the Dirac measure on the bi-infinite path rooted at some vertex. Recall that the degree function $\deg$ defined in Proposition \ref{degree_function} is continuous. Then \begin{align*} \int \deg~d\mu_{2k} &= \frac{\sum_{i=1}^k \deg[P_{2k},u_i]}{k}\\ &= \frac{\sum_{i=1}^k \deg_{P_{2k}}(u_i)}{k}\\ &= \frac{2k - 1}{k}\\ &= 2 - \frac{1}{k} \end{align*} and \[ \int \deg~d\mu = \deg[P_\infty,\cdot] = 2.
\] Hence \[ \lim_{k \to \infty} \int \deg~d\mu_{2k} = \int \deg~d\mu. \] In fact, this is not only true for the function $\deg$, but for all continuous functions whose domain is $\wh{\mathcal{G}}_M$. Indeed, we will soon demonstrate this result. A slightly stronger variation of something that has already been mentioned is summarized in the following lemma. Using this lemma, we will show that the sequence of laws of finite paths converges to the Dirac measure on the bi-infinite path. \begin{lem}\label{convergence_of_paths} The sequence $\left([P_{2k},u_k]\right)_{k=1}^\infty$ converges to $[P_\infty,\cdot]$. Furthermore, \[ \rho\left([P_{2(k+l)},u_k],[P_\infty,\cdot]\right) = \rho\left([P_{2k},u_k],[P_\infty,\cdot]\right) \] for every nonnegative integer $l$. \end{lem} \begin{prop} The sequence $(\mu_{2k})_{k=1}^\infty$ converges weakly to $\mu$. \end{prop} \begin{proof} Let $g : \wh{\mathcal{G}}_M \to \mathbb{R}$ be an arbitrary continuous function with $|g| \leq B$ for some real number $B$. Given $\varepsilon > 0$, there exists $\delta > 0$ such that for all $[G,o] \in \wh{\mathcal{G}}_M$, \[ \rho\left([G,o],[P_\infty,\cdot]\right) < \delta ~~ \Rightarrow ~~ |g[G,o] - g[P_\infty,\cdot]| < \frac{\varepsilon}{2}. \] By Lemma \ref{convergence_of_paths}, there exists $N_1$ such that \[ \forall k \geq N_1 ~~ \forall l \geq 0 ~~ \rho\left([P_{2(k+l)},u_k],[P_\infty,\cdot]\right) < \delta. \] Furthermore, there exists $N_2$ such that \[ \forall k \geq N_2 ~~ \frac{1}{k} < \frac{\varepsilon}{4(B + 1)N_1}. \] Choose $N = \max\{N_1,N_2\}$.
Then \begin{align*} \left|\int g~d\mu_{2k} - \int g~d\mu\right| &= \frac{1}{k}\left|\sum_{i=1}^k \left(g[P_{2k},u_i] - g[P_\infty,\cdot]\right)\right|\\ &\leq \frac{1}{k}\sum_{i=1}^k \left|g[P_{2k},u_i] - g[P_\infty,\cdot]\right|\\ &\leq \frac{1}{k}\sum_{i=1}^{N_1-1} \left|g[P_{2k},u_i] - g[P_\infty,\cdot]\right| + \frac{1}{k}\sum_{i=N_1}^k \left|g[P_{2k},u_i] - g[P_\infty,\cdot]\right|\\ &\leq \frac{2B(N_1 - 1)}{k} + \frac{k - N_1 + 1}{k} \cdot \frac{\varepsilon}{2}\\ &< \frac{\varepsilon}{2} + \frac{\varepsilon}{2}\\ &= \varepsilon \end{align*} for all $k \geq N$. Hence $\mu_{2k} \Rightarrow \mu$. \end{proof} \begin{cor} The same is true for the sequence $(\mu_{2k+1})_{k=1}^\infty$. \end{cor} The two facts above may be combined using the following lemma. \begin{lem} Let $(a_n)_{n=1}^\infty$ be a sequence of real numbers. If $a_{2k} \to a$ and $a_{2k+1} \to a$, then $a_n \to a$. \end{lem} \begin{proof} Given $\varepsilon > 0$, there exists $N_1$ such that for all $k \geq N_1$, $|a_{2k} - a| < \varepsilon$, and $N_2$ such that for all $k \geq N_2$, $|a_{2k+1} - a| < \varepsilon$. Choose $N = \max\{2N_1,2N_2 + 1\}$. Then $|a_n - a| < \varepsilon$ for all $n \geq N$. \end{proof} \begin{theo}\label{weak_limit_of_paths} The Dirac measure on $[P_\infty,\cdot]$ is the weak limit of $(\Psi(P_n))_{n=1}^\infty$. \end{theo} Something that is much more surprising is that this is the only possible limit in the case of rooted paths. \begin{prop} Suppose that $\mu$ is a probability measure in $\mathfrak{M}_M$ whose support is a subset of $\mc{P}$. If $(\mu_n)_{n=1}^\infty$ is a sequence of laws of finite paths that weakly converges to $\mu$, then either this sequence is eventually constant or it contains a subsequence of $(\Psi(P_n))_{n=1}^\infty$. \end{prop} \begin{proof} Suppose that $(\mu_n)_{n=1}^\infty$ is not eventually constant. Define a function $f$ as follows: for every $n \geq 1$, there exists an $m \geq 1$ such that $\mu_n = \Psi(P_m)$; let $f(n) = m$.
We will show that \[ \forall N ~~ \exists n > N ~~ f(N) < f(n). \] To derive a contradiction, assume that there exists $N$ such that $f(N) \geq f(n)$ for all $n > N$. It follows that \[ (\mu_k)_{k=N+1}^\infty = (\Psi(P_{f(k)}))_{k=N+1}^\infty \] has only a finite number of distinct terms. Since this sequence also converges to $\mu$, we know it is eventually constant, and so $(\mu_n)_{n=1}^\infty$ is too; a contradiction. Using this fact, first choose $n_1 > 1$ such that $f(1) < f(n_1)$; then choose $n_2 > n_1$ such that $f(n_1) < f(n_2)$; and so forth. With this construction, \[ n_1 < n_2 < \cdots \] and \[ f(n_1) < f(n_2) < \cdots \] which means $(\mu_{n_k})_{k=1}^\infty$ is a subsequence of $(\mu_n)_{n=1}^\infty$, and $(\Psi(P_{f(n_k)}))_{k=1}^\infty$ is a subsequence of $(\Psi(P_m))_{m=1}^\infty$. \end{proof} \begin{cor} If $\mathfrak{M}_M^0(\mc{P})$ is the set of measures in $\mathfrak{M}_M^0$ whose support is a subset of $\mc{P}$, then \[ \mathfrak{M}_M^0(\mc{P}) = \Psi(\{P_n ~:~ n \in \mathbb{N}^\ast\}) \cup \{\delta_{[P_\infty,\cdot]}\}. \] \end{cor} Essentially, this means the space of rooted paths is not interesting. Indeed, to better understand the concept of the iMTP, a space that offers more variety with regard to measures is necessary. Before we consider such a space, we will attempt to demonstrate why paths themselves are still intriguing. \subsection{A Natural Representation of $\mc{P}$} Although the title of this subsection includes the word ``natural,'' nothing explicitly categorical is happening. Its use is justified because the space of rooted paths may be viewed as a subspace of the plane $\ol{\mathbb{N}}^2$. What we construct is an explicit metric space model of $\mc{P}$. To simplify our arguments, we will adopt the following convention: \[ \frac{1}{1 + \infty} = 0. \] Let $\wt{\mc{P}} = \{(x,y) \in \ol{\mathbb{N}}^2 ~:~ x \leq y\}$.
Define the distance $\tilde{\rho} : \wt{\mc{P}} \times \wt{\mc{P}} \to \mathbb{R}$ as follows: \[ \tilde{\rho}\left((x,y),(x',y')\right) = \frac{1}{1 + r} \] where \[ r = \begin{cases} \infty & \text{ if $x = x'$ and $y = y'$},\\ \min\{x,x'\} & \text{ if $x \neq x'$ and $y = y'$},\\ \min\{y,y'\} & \text{ if $x = x'$ and $y \neq y'$},\\ \min\{x,x',y,y'\} & \text{ if $x \neq x'$ and $y \neq y'$} \end{cases} \] for all $(x,y),(x',y') \in \wt{\mc{P}}$. \begin{theo} The metric space $(\mc{P},\rho)$ is isometric to $(\wt{\mc{P}},\tilde{\rho})$. \end{theo} \begin{proof} Define a function $f : \mc{P} \to \wt{\mc{P}}$ by \[ f[P(u,v),o] = \left(\min\{d(u,o),d(o,v)\},\max\{d(u,o),d(o,v)\}\right) \] for all $[P(u,v),o] \in \mc{P}$. It suffices to show that $f$ is an isometry, a well-defined surjective function that preserves distances. It is straightforward to verify that $f$ is well-defined and preserves distances. To see that $f$ is surjective, let $(x,y) \in \wt{\mc{P}}$ be arbitrary. Consider the path defined by the sequence of integers \[ (-x,-x+1,\ldots,-1,0,1,\ldots,y-1,y). \] The isomorphism class of this path when rooted at zero is mapped to $(x,y)$, as required. \end{proof} By keeping this result in mind, we may use the notation of natural numbers instead of paths and vertices. For convenience, we will not distinguish between $\mc{P}$ and $\wt{\mc{P}}$ for the remainder of this section. The following example uses this simpler notation when deriving a formula for the measure of a vertical strip in the space $\mc{P}$. A \emph{vertical strip} is a set of the form \[ A_m = \{(x,y) \in \mc{P} ~:~ m \leq x\} \] where $m$ is a positive integer with $m \leq k$. 
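Under the identification above, the root $u_i$ of $P_n$ has arm lengths $(i-1, n-i)$ and carries mass $1/n$, so strip measures can be verified by brute force before they are computed analytically. The following Python sketch does exactly that (the function name \texttt{strip\_measure} is our own, and exact arithmetic is used via the standard \texttt{fractions} module); the asserted closed form $1 - 2m/n$ for $m \leq \floor{n/2}$ is the one derived below:

```python
from fractions import Fraction

def strip_measure(n, m):
    """Psi(P_n)(A_m) by enumeration: the root u_i of the path u_1 ... u_n has
    arm lengths (i - 1, n - i), carries mass 1/n, and lies in the vertical
    strip A_m iff its shorter arm has length at least m."""
    count = sum(1 for i in range(1, n + 1) if min(i - 1, n - i) >= m)
    return Fraction(count, n)

# Agreement with the closed form 1 - 2m/n for m <= floor(n/2), and 0 otherwise.
for n in range(1, 60):
    for m in range(1, n + 2):
        expected = 1 - Fraction(2 * m, n) if m <= n // 2 else Fraction(0)
        assert strip_measure(n, m) == expected
```

The exhaustive loop over small $n$ is, of course, no substitute for a proof, but it is a convenient sanity check on the case analysis.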
Observe that \begin{align*} \Psi(P_{2k+1})(A_m) &= \sum_{i=m}^{k-1} \Psi(P_{2k+1})(i,2k - i) + \Psi(P_{2k+1})(k,k)\\ &= \sum_{i=m}^{k-1} \frac{2}{2k + 1} + \frac{1}{2k + 1}\\ &= \frac{2(k - m)}{2k + 1} + \frac{1}{2k + 1}\\ &= \frac{2k + 1 - 2m}{2k + 1}\\ &= 1 - \frac{2m}{2k + 1} \end{align*} and \begin{align*} \Psi(P_{2k})(A_m) &= \sum_{i=m}^{k-1} \Psi(P_{2k})(i,2k - 1 - i)\\ &= \sum_{i=m}^{k-1} \frac{1}{k}\\ &= \frac{k - m}{k}\\ &= 1 - \frac{m}{k}. \end{align*} In general, \[ \Psi(P_n)(A_m) = \begin{cases} 1 - \frac{2m}{n} & \text{ if $m \leq \floor{\frac{n}{2}}$,}\\ 0 & \text{ otherwise} \end{cases} \] for all positive integers $m$ and $n$. \begin{lem}\label{finite_paths_are_discrete} The subspace $(\mathbb{N}^2,\rho)$ is topologically discrete. That is, every point is isolated. \end{lem} \begin{proof} If $(x,y) \in \mathbb{N}^2$, then \[ \{(x,y)\} = B_\rho\left((x,y),\min\left\{\frac{1}{x + 1},\frac{1}{y + 1}\right\}\right). \] To see this, suppose that \[ \rho((x,y),(x',y')) < \min\left\{\frac{1}{x + 1},\frac{1}{y + 1}\right\}. \] If $x = x'$ and $y \neq y'$, then \[ \rho((x,y),(x',y')) = \frac{1}{\min\{y,y'\} + 1} \geq \frac{1}{y + 1}, \] which is a contradiction. A similar argument establishes the other cases. \end{proof} \begin{theo} As a topological space, $(\mc{P},\rho)$ is homeomorphic to $(\alpha\mathbb{N}^2,d_\infty)$ where $d_\infty$ is the $\ell_\infty$ distance and \[ \alpha\mathbb{N} = \left\{\frac{1}{n} ~:~ n \in \mathbb{N}^\ast\right\} \cup \{0\}. \] \end{theo} \begin{proof} Define the function $f : \ol{\mathbb{N}}^2 \to \alpha\mathbb{N}^2$ by \[ f(x,y) = \left(\frac{1}{x + 1},\frac{1}{y + 1}\right) \] for all $(x,y) \in \ol{\mathbb{N}}^2$. We will show that $f$ is continuous. It is known that every function is continuous at the isolated points in its domain. By Lemma \ref{finite_paths_are_discrete}, $f$ is continuous on $\mathbb{N}^2$. Let $x \in \mathbb{N}$ and let $\varepsilon > 0$.
Choose \[ \delta = \min\left\{\frac{1}{x + 1},\varepsilon\right\}. \] Suppose that $(x',y') \in \ol{\mathbb{N}}^2 \setminus \{(x,\infty)\}$. The only case in which $\rho((x,\infty),(x',y')) < \delta$ is when $x = x'$ and $y' \in \mathbb{N}$. Then \begin{align*} d_\infty(f(x,\infty),f(x',y')) &= \max\left\{\left|\frac{1}{x + 1} - \frac{1}{x' + 1}\right|,\left|0 - \frac{1}{y' + 1}\right|\right\}\\ &= \frac{1}{y' + 1}\\ &= \rho((x,\infty),(x',y'))\\ &< \delta\\ &\leq \varepsilon. \end{align*} Similarly, the same is true for $(\infty,y)$ for all $y \in \mathbb{N}$. To see that $f$ is continuous at $(\infty,\infty)$, note that \begin{align*} d_\infty(f(\infty,\infty),f(x',y')) &= \max\left\{\frac{1}{x' + 1},\frac{1}{y' + 1}\right\}\\ &= \frac{1}{\min\{x',y'\} + 1}\\ &= \rho((\infty,\infty),(x',y')) \end{align*} if $(x',y') \neq (\infty,\infty)$. In the case of equality, the result follows as well. Hence $f$ is continuous on $\ol{\mathbb{N}}^2$. Furthermore, it is a bijection. Upon applying a result from topology, which states that a continuous bijection from a compact space to a Hausdorff space is a homeomorphism, the proof is complete. \end{proof} Before leaving this section, the reader may be interested to know what else is in the space $\wh{\mathcal{G}}_2$. Although it is somewhat surprising, it is not very difficult to prove that \[ \wh{\mathcal{G}}_2 = \mc{P} \cup \{[C_n,\cdot] ~:~ n \geq 3\}. \] In fact, $([C_n,\cdot])_{n=3}^\infty$ converges to $[P_\infty,\cdot]$. \section{The Trees Inside $\wh{\mathcal{G}}_3$} As we have seen in the previous section, the laws of paths are not interesting enough. To remedy this situation, we will move to--in some sense--a higher dimension. Throughout this section, the object of study will be the 3-regular infinite tree $T_\infty$. Since $T_\infty$ is vertex-transitive, we will fix a root $t$ and use it for the remainder of this section. For each $n$, let \[ T_n = B_{T_\infty}(t,n). 
\] These trees will mimic the finite paths seen in the previous section. Recall Theorem \ref{weak_conv_implies_graph_conv}, which related the weak convergence of laws to the convergence of rooted graphs. As mentioned then, the converse of this theorem is not true. Indeed this section yields a simple counterexample. The sequence $([T_n,t])_{n=0}^\infty$ converges to $[T_\infty,t]$, but $\Psi(T_n) \not \Rightarrow \delta_{[T_\infty,t]}$. To see this, let $\deg$ be the degree function defined earlier. Suppose that $\Psi(T_n) \Rightarrow \delta_{[T_\infty,t]}$. In particular, \[ \lim_{n \to \infty} \int \deg~d\Psi(T_n) = \int \deg~d\delta_{[T_\infty,t]}. \] Note that \[ \int \deg~d\Psi(T_n) = \frac{1}{|V(T_n)|} \sum_{x \in V(T_n)} \deg_{T_n}(x) = \frac{2|E(T_n)|}{|V(T_n)|} = 2 - \frac{2}{|V(T_n)|} \] for all nonnegative integers $n$, and so \[ \lim_{n \to \infty} \int \deg~d\Psi(T_n) = 2. \] On the other hand, \[ \int \deg~d\delta_{[T_\infty,t]} = \deg[T_\infty,t] = 3; \] a contradiction. Recall that there is a unique path between two vertices in a tree. With this in mind, let $u_1 \cdots u_k t$ be such a path in $T_k$ from a leaf $u_1$ to the vertex $t$. \begin{theo}\label{trees_converge_to_s} The sequence $(\Psi(T_k))_{k=0}^\infty$ converges weakly to a measure $\mu$ on $\wh{\mathcal{G}}_M$ defined by \[ \mu[S,u_i] = \frac{1}{2^i} \] where $[S,u_i]$ is the limit of the sequence $([T_k,u_i])_{k=i}^\infty$ for all positive integers $i$. \end{theo} \begin{proof} Let $f \in \mb{C}(\wh{\mathcal{G}}_M)$ be arbitrary. Since $f$ is bounded, there exists a real number $L$ such that \[ \sup\left\{|f[G,o]| \in \mathbb{R} ~:~ [G,o] \in \wh{\mathcal{G}}_M\right\} \leq L. 
\] To simplify the proof, observe that \begin{align*} \lim_{k \to \infty} \int f ~d\Psi(T_k) &= \lim_{k \to \infty} \left(\sum_{i=1}^k \frac{f[T_k,u_i] \cdot |[T_k,u_i]|}{|V(T_k)|} + \frac{f[T_k,t]}{|V(T_k)|}\right)\\ &= \lim_{k \to \infty} \left(\sum_{i=1}^k \frac{f[T_k,u_i] \cdot 3 \cdot 2^{k-i}}{3 \cdot 2^k - 2} + \frac{f[T_k,t]}{3 \cdot 2^k - 2}\right)\\ &= \lim_{k \to \infty} \left(\frac{3 \cdot 2^k}{3 \cdot 2^k - 2}\right) \lim_{k \to \infty} \left(\sum_{i=1}^k \frac{f[T_k,u_i]}{2^i}\right) + \lim_{k \to \infty} \left(\frac{f[T_k,t]}{3 \cdot 2^k - 2}\right)\\ &= \lim_{k \to \infty} \left(\sum_{i=1}^k \frac{f[T_k,u_i]}{2^i}\right). \end{align*} Now it suffices to prove that the expression in the previous line is the integral of $f$ with respect to $\mu$. To demonstrate this, let $\varepsilon > 0$. We know that \[ \frac{L}{2^N} < \frac{\varepsilon}{4} \] for some positive integer $N$. By definition of $[S,u_i]$, the sequence $(f[T_k,u_i])_{k=i}^\infty$ converges to $f[S,u_i]$ for each $i \in \{1,\ldots,N\}$ because $f$ is continuous. That is, there exists a positive integer $n_i$ such that \[ |f[T_k,u_i] - f[S,u_i]| < \frac{\varepsilon}{2} \] for all $k \geq \max\{n_i,N + 1\}$. Choose $n = \max\{n_1,n_2,\ldots,n_N,N + 1\}$ and let $k$ be an integer with $k \geq n$. 
Then \begin{align*} \Bigg|\sum_{i=1}^k \frac{f[T_k,u_i]}{2^i} &- \sum_{i=1}^\infty \frac{f[S,u_i]}{2^i}\Bigg|\\ &\leq \sum_{i=1}^N \frac{|f[T_k,u_i] - f[S,u_i]|}{2^i} + \sum_{i=N+1}^k \frac{|f[T_k,u_i]|}{2^i} + \sum_{i=N+1}^\infty \frac{|f[S,u_i]|}{2^i}\\ &< \sum_{i=1}^N \frac{\varepsilon}{2^{i+1}} + \sum_{i=N+1}^k \frac{L}{2^i} + \sum_{i=N+1}^\infty \frac{L}{2^i}\\ &\leq \left(\frac{\varepsilon}{2}\right) \sum_{i=1}^\infty \frac{1}{2^i} + \frac{2L}{2^N}\\ &< \frac{\varepsilon}{2} + \frac{\varepsilon}{2}\\ &= \varepsilon, \end{align*} and so \[ \lim_{k \to \infty} \int f ~d\Psi(T_k) = \lim_{k \to \infty} \left(\sum_{i=1}^k \frac{f[T_k,u_i]}{2^i}\right) = \sum_{i=1}^\infty \frac{f[S,u_i]}{2^i} = \int f ~d\mu. \] Hence $(\Psi(T_k))_{k=0}^\infty$ converges weakly to $\mu$. \end{proof} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.75] \GraphInit[vstyle=Classic] \Vertex[Lpos=-90,x=0,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{$u_1$} \Vertex[Lpos=-90,x=4,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{$u_2$} \Vertex[Lpos=-90,x=8,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{$u_3$} \Vertex[Lpos=-90,x=12,y=0,style={line width=1pt,fill=white,minimum size=5pt}]{$u_4$} \Vertex[NoLabel,x=16,y=0,style={line width=1pt,fill=white,minimum size=0pt}]{A} \Edges($u_1$,$u_2$,$u_3$,$u_4$) \Edges[style={dotted}]($u_4$,A) \Vertex[NoLabel,x=4,y=2.5,style={line width=1pt,fill=white,minimum size=5pt}]{B} \Vertex[NoLabel,x=8,y=2.5,style={line width=1pt,fill=white,minimum size=5pt}]{C} \Vertex[NoLabel,x=6.9,y=4,style={line width=1pt,fill=white,minimum size=5pt}]{C1} \Vertex[NoLabel,x=9.1,y=4,style={line width=1pt,fill=white,minimum size=5pt}]{C2} \Vertex[NoLabel,x=12,y=2.5,style={line width=1pt,fill=white,minimum size=5pt}]{D} \Vertex[NoLabel,x=10.9,y=4,style={line width=1pt,fill=white,minimum size=5pt}]{D1} \Vertex[NoLabel,x=13.1,y=4,style={line width=1pt,fill=white,minimum size=5pt}]{D2} \Vertex[NoLabel,x=10.2,y=5,style={line 
width=1pt,fill=white,minimum size=5pt}]{D11} \Vertex[NoLabel,x=11.6,y=5,style={line width=1pt,fill=white,minimum size=5pt}]{D12} \Vertex[NoLabel,x=12.4,y=5,style={line width=1pt,fill=white,minimum size=5pt}]{D21} \Vertex[NoLabel,x=13.8,y=5,style={line width=1pt,fill=white,minimum size=5pt}]{D22} \Edges($u_2$,B) \Edges($u_3$,C) \Edges($u_4$,D) \Edges(C,C1) \Edges(C,C2) \Edges(D,D1) \Edges(D,D2) \Edges(D1,D11) \Edges(D1,D12) \Edges(D2,D21) \Edges(D2,D22) \end{tikzpicture} \end{center} \caption[The graph $S$]{The graph $S$.} \label{fig:the_graph_S} \end{figure} The graph $S$ defined above is illustrated in Figure \ref{fig:the_graph_S}. \section{Unimodularity} Having defined the law of a finite graph, the reader may be curious to see why these maps are useful. \begin{defn} A measure $\mu$ on $\wh{\mathcal{G}}_M$ is \emph{unimodular} if \[ \int \sum_{x \in V(G)} f[G,x,o]~d\mu[G,o] = \int \sum_{x \in V(G)} f[G,o,x]~d\mu[G,o] \] for all nonnegative functions $f$ whose domain is the set of birooted connected graphs. Such a measure is also said to satisfy the \emph{intrinsic Mass Transport Principle}, the iMTP. The set of these measures is denoted by $\mathcal{U}$. \end{defn} The following fact appears in a paper by Schramm \cite{hgl}, but its proof is omitted. \begin{prop}\label{laws_are_unimodular} Every law is unimodular. \end{prop} \begin{proof} Suppose that $G$ is a finite graph. Denote by $\omega$ the number of connected components of $G$. If $\omega = 1$, then \begin{align*} \int \sum_{x \in V(G)} f[G,x,o]~d\Psi(G)[G,o] &= \frac{1}{|V(G)|} \sum_{y \in V(G)} \sum_{x \in V(G)} f[G,x,y]\\ &= \frac{1}{|V(G)|} \sum_{x \in V(G)} \sum_{y \in V(G)} f[G,x,y]\\ &= \int \sum_{y \in V(G)} f[G,o,y]~d\Psi(G)[G,o] \end{align*} for all rooted connected components $[G,o]$ of $G$. Hence the result holds for connected graphs. Suppose that $\omega = 2$. Write $G = H + I$ where $H$ and $I$ are its connected components. 
By Equation \ref{law_linearity}, \[ \Psi(G) = \frac{|V(H)|\Psi(H) + |V(I)|\Psi(I)}{|V(G)|}. \] Since $\Psi(H)$ and $\Psi(I)$ are unimodular and the integral is linear, it follows that $\Psi(G)$ is unimodular. The general statement holds by induction on $\omega$. \end{proof} Aldous and Lyons \cite{pourn}, and Schramm \cite{hgl} conjectured that every unimodular measure is the weak limit of a sequence of laws, but this remains open.
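Proposition~\ref{laws_are_unimodular} can also be seen concretely: for the law of a finite connected graph, the two sides of the iMTP are the same double sum with the order of summation exchanged. A minimal numerical sketch (the path graph and the transport function $f$ below are illustrative choices, not taken from the text):

```python
# Path graph on four vertices; its law puts mass 1/4 on each rooted copy.
V = [0, 1, 2, 3]
E = {(0, 1), (1, 2), (2, 3)}

def adj(x, y):
    return (x, y) in E or (y, x) in E

def deg(x):
    return sum(adj(x, y) for y in V)

# An arbitrary asymmetric transport function on birooted graphs:
# vertex x sends mass deg(x) to each of its neighbours.
def f(x, y):
    return deg(x) if adj(x, y) else 0.0

# iMTP for the law: expected mass received by the root equals
# expected mass sent by the root, since both are the same double sum.
lhs = sum(f(x, o) for o in V for x in V) / len(V)
rhs = sum(f(o, x) for o in V for x in V) / len(V)
assert lhs == rhs
```

Any choice of $f$ works here, since both integrals against $\Psi(G)$ reduce to $\frac{1}{|V(G)|}\sum_{x,y} f[G,x,y]$.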
\subsection{Myopic Algorithm}\label{sec:greedy-algorithm} The non-convexity of \probref{problem:BASIS-optimization-problem} is due to the expression of the stationary distribution $\boldsymbol{\pi}$ that appears in the objective function. As mentioned, the stationary distribution captures the long-term behavior of a system where users sequentially consume many contents. To simplify the objective, one could consider a coarse approximation where the recommendation impact is retained, but the algorithm ``greedily'' optimizes the access cost \textit{only for the next content access}. In other words, it is as if a user initially requests a content $i$, then requests another content $j$ (recommended or not), and then leaves the system. In this case, the objective becomes \begin{equation} (\mathbf{p_{0}}^{T}\cdot P) \cdot \mathbf{x}, \end{equation} where the first term of \eq{cost-finite-steps} is dropped (because it is independent of the control variables), and only the second term is kept. This gives rise to the following optimization problem. \begin{problem}[Myopic Cache-Friendly Recommendations]\label{problem:single-step-optimization} \begin{small} \begin{align} \underset{Y}{\mbox{minimize}}~~ \; & \mathbf{p_{0}}^{T} \cdot (a\cdot Y + (1-a)\cdot P_0) \cdot \mathbf{x}, \label{eq:objective-single-step}\\ s.t.~~~ &\text{Eqs. (\ref{eq:y-box-constraint})--(\ref{eq:quality-constraint})} \nonumber \label{eq:constraint-set-single-step} \end{align} \end{small} \end{problem} In the above problem, the constraints remain the same as in Problem~\ref{problem:BASIS-optimization-problem}. However, the objective is now linear in $Y$. This is a linear program (LP) with affine and box constraints, which can be solved efficiently in polynomial time, using e.g. interior-point methods~\cite{boyd2004convex}. \myitemit{Remark.} The single-step approach can be interpreted as a projection of the recent work of~\cite{chatzieleftheriou2017caching} to our framework.
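As an illustration of how \probref{problem:single-step-optimization} can be solved in practice, the sketch below poses it as an LP over the flattened matrix $Y$ on a toy instance (all sizes, popularity values, and the similarity matrix are illustrative, and \texttt{scipy} is assumed to be available):

```python
import numpy as np
from scipy.optimize import linprog  # scipy assumed available

# Toy instance (all numbers illustrative, not from the paper's datasets).
K, N, a, q = 4, 2, 0.8, 0.5
p0 = np.array([0.4, 0.3, 0.2, 0.1])        # direct-request popularity
x = np.array([0.0, 0.0, 1.0, 1.0])         # access cost: contents 0, 1 cached
U = np.array([[0, 1, 1, 0],                # binary content-similarity matrix
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Decision vector: Y flattened row-major. The Y-dependent part of the
# objective p0^T (aY + (1-a)P0) x is a * sum_ij p0_i y_ij x_j.
c_obj = (a * np.outer(p0, x)).ravel()

# Row-stochasticity: sum_j y_ij = 1 for every content i.
A_eq = np.kron(np.eye(K), np.ones(K))
b_eq = np.ones(K)

# Quality: sum_j u_ij y_ij >= q, rewritten as -sum_j u_ij y_ij <= -q.
A_ub = np.zeros((K, K * K))
for i in range(K):
    A_ub[i, i * K:(i + 1) * K] = -U[i]
b_ub = -q * np.ones(K)

# Box constraints 0 <= y_ij <= 1/N, with y_ii pinned to zero.
bounds = [(0.0, 0.0) if i == j else (0.0, 1.0 / N)
          for i in range(K) for j in range(K)]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
Y = res.x.reshape(K, K)
```

On this toy instance the LP places half of each row's recommendation mass on a cached similar content, and only rows whose remaining feasible options are all non-cached incur any cost.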
Specifically, the authors solve a similar ``single-step'' problem, jointly optimizing the caching and recommendation policy (which is formulated as a Knapsack problem). Omitting the caching decisions of~\cite{chatzieleftheriou2017caching}, for the recommendations the authors solve a problem similar to \probref{problem:single-step-optimization}. \subsection{Cache-Aware Recommendations for Sequential content access (CARS)}\label{sec:admm-algorithm} The above ``myopic'' approach does not exploit the full structure of the Markov chain $P$. For example, assume there are two contents $A$ and $B$ that are both cached and both have high similarity with the content currently consumed, but $B$ has slightly higher similarity. The Myopic scheme will choose to recommend $B$. However, assume that $A$ is similar to many contents that happen to be cached, while $B$ is not. This suggests that, if $B$ is recommended, then in the next step there will be very few good options (hence the algorithm's name): the myopic algorithm will either have to recommend cached contents of low quality, or high-quality contents that lead to cache misses. To be able to foresee such situations and take the right decisions, we need to go back to the objective of \probref{problem:BASIS-optimization-problem}. To circumvent the problem of having the inverse of the control matrix in the objective, we formulate an \textit{equivalent} optimization problem by introducing the stationary vector $\boldsymbol{\pi}$ as an explicit (``auxiliary'') control variable.
\begin{problem}[Cache-Friendly Recommendations: Equivalent Problem]\label{problem:infinite-step-admm} \begin{small} \begin{subequations}\label{eq:equivalent-objective-infinite-step} \begin{align} \underset{\boldsymbol{\pi},Y}{\mbox{minimize}}~~ \; & \boldsymbol{\pi}^{T}\cdot \mathbf{x}, \tag{\ref{eq:equivalent-objective-infinite-step}}\\ s.t.~~~ &\text{Eqs. (\ref{eq:y-box-constraint})--(\ref{eq:quality-constraint})} \nonumber \\ & \boldsymbol{\pi}^{T} = \boldsymbol{\pi}^{T} \cdot (a\cdot Y+(1-a)\cdot P_{0})\label{eq:stationarity-hard-con}\\ & \sum_{j =1}^{K} \pi_{j} = 1 \label{eq:stationarity-sum-pi}\\ & \pi_{j} \geq 0,~~\forall~j~\in \mathcal{K}. \label{eq:stationarity-positive-pi} \end{align} \end{subequations} \end{small} \end{problem} \probref{problem:infinite-step-admm} contains three new (sets of) constraints. \eq{eq:stationarity-sum-pi} and \eq{eq:stationarity-positive-pi} simply ensure that $\boldsymbol{\pi}$ is a probability distribution. \eq{eq:stationarity-hard-con} is the important constraint that makes the two problems equivalent, by forcing $\boldsymbol{\pi}$ to be the stationary distribution of the transition matrix $P = a\cdot Y+(1-a)\cdot P_{0}$. It is easy to see that the two problems have the same set of optimal solutions. The objective function is now linear in the control variables $\boldsymbol{\pi}$. However, constraint \eq{eq:stationarity-hard-con} is a quadratic equality constraint, and thus the problem remains non-convex. Nevertheless, observe that the problem is now \textit{bi-convex} in the variables $Y$ and $\boldsymbol{\pi}$. Bi-convex problems can often be efficiently tackled with Alternating Convex Search (ACS) methods, which iteratively solve the convex sub-problems for each set of control variables.
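The claimed equivalence is easy to verify numerically: since $Y$ is row-stochastic and every row of $P_{0}$ equals $\mathbf{p_{0}}^{T}$, constraints (\ref{eq:stationarity-hard-con})--(\ref{eq:stationarity-positive-pi}) pin down $\boldsymbol{\pi}^{T} = (1-a)\,\mathbf{p_{0}}^{T}(I-aY)^{-1}$, so the linear objective $\boldsymbol{\pi}^{T}\mathbf{x}$ coincides with the original matrix-inverse objective. A quick sanity check with random toy matrices (sizes and cost vector illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
K, a = 5, 0.8
p0 = rng.random(K); p0 /= p0.sum()        # direct-request (restart) distribution
Y = rng.random((K, K)); np.fill_diagonal(Y, 0.0)
Y /= Y.sum(axis=1, keepdims=True)         # row-stochastic recommendation matrix
x = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # costs: first two contents cached

P0 = np.outer(np.ones(K), p0)             # every row of P0 is p0^T
P = a * Y + (1 - a) * P0                  # user transition matrix

# pi^T (I - aY) = (1-a) p0^T  (using pi^T 1 = 1), hence the closed form:
pi = (1 - a) * p0 @ np.linalg.inv(np.eye(K) - a * Y)

assert np.isclose(pi.sum(), 1.0)          # pi is a probability distribution
assert np.allclose(pi @ P, pi)            # pi is stationary for P
cost = pi @ x                             # equals the original objective
```

The closed form also shows why the original objective involves $(I-aY)^{-1}$: the inverse is the Neumann series over recommendation walks of every length.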
Unfortunately, such approaches fail here, as the $Y$ subproblem is simply a feasibility problem ($Y$ does not appear in the objective), and ACS would not converge (our implementation confirms this observation). What is more, having the quadratic equality constraint as a hard constraint does not facilitate such an iterative solution. Instead, we propose to use a Lagrangian relaxation for that constraint, moving it to the objective.
To ensure the strong convexity of the new objective, we form the \textit{Augmented Lagrangian}~\cite{boyd2011distributed}. Let us first define the function $c(\boldsymbol{\pi},Y)$ as \begin{equation} c(\boldsymbol{\pi},Y) = \boldsymbol{\pi}^T - \boldsymbol{\pi}^T \cdot (a\cdot Y + (1-a)\cdot P_{0}), \end{equation} so that the constraint of \eq{eq:stationarity-hard-con} can be written as \begin{equation} c(\boldsymbol{\pi},Y) = 0. \end{equation} The augmented Lagrangian is then given by \begin{equation}\label{eq:equivalent-objective-infinite-step-AL} f_{\rho}(\boldsymbol{\pi},Y) = \boldsymbol{\pi}^{T}\cdot \mathbf{x} + c(\boldsymbol{\pi},Y) \cdot \boldsymbol{\lambda}+ \frac{\rho}{2}\cdot ||c(\boldsymbol{\pi},Y)||_{2}^{2}, \end{equation} where $\boldsymbol{\lambda}$ is the column vector of length $K$ of Lagrange multipliers (one multiplier per quadratic equality), $\rho$ is a positive constant scalar, and $||\cdot ||_{2}$ is the Euclidean norm. This objective is still subject to the remaining constraints of \probref{problem:infinite-step-admm}, all of which are now affine. What is more, the problem remains bi-convex in the control variables $Y$ and $\boldsymbol{\pi}$. We can thus apply an ADMM-like method, where we iteratively solve the convex subproblems with respect to $Y$ and $\boldsymbol{\pi}$, but now with the above augmented objective, so that when $c(\boldsymbol{\pi},Y)$ deviates substantially from $0$, the subproblem solutions in the inner loop are penalized. We also update the Lagrange multipliers $\lambda_i$ at each iteration. Our detailed algorithm is described in Algorithm~\ref{alg:ADMM}.
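In code, the residual and the augmented Lagrangian are direct transcriptions, with $c$ written so that $c(\boldsymbol{\pi},Y)=0$ is exactly the stationarity condition of \eq{eq:stationarity-hard-con}. The sketch below (toy dimensions, illustrative values) also confirms that at a stationary $\boldsymbol{\pi}$ the penalty vanishes and $f_{\rho}$ collapses to the plain objective $\boldsymbol{\pi}^{T}\mathbf{x}$:

```python
import numpy as np

def c(pi, Y, a, P0):
    # Stationarity residual: zero iff pi^T = pi^T (a*Y + (1-a)*P0).
    return pi - pi @ (a * Y + (1 - a) * P0)

def f_rho(pi, Y, x, lam, rho, a, P0):
    # Augmented Lagrangian: linear cost + multiplier term + quadratic penalty.
    r = c(pi, Y, a, P0)
    return pi @ x + r @ lam + 0.5 * rho * (r @ r)

# Toy instance: at the exact stationary distribution the penalty vanishes.
K, a, rho = 4, 0.8, 10.0
rng = np.random.default_rng(1)
p0 = rng.random(K); p0 /= p0.sum()
Y = rng.random((K, K)); np.fill_diagonal(Y, 0.0)
Y /= Y.sum(axis=1, keepdims=True)
P0 = np.outer(np.ones(K), p0)
x = np.array([0.0, 1.0, 1.0, 0.0])
lam = np.zeros(K)

pi_star = (1 - a) * p0 @ np.linalg.inv(np.eye(K) - a * Y)
assert np.allclose(c(pi_star, Y, a, P0), 0.0)
assert np.isclose(f_rho(pi_star, Y, x, lam, rho, a, P0), pi_star @ x)
```

For any other $\boldsymbol{\pi}$, the quadratic term penalizes the violation of stationarity, which is what steers the alternating subproblem solutions in the inner loop.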
\begin{algorithm} \begin{algorithmic} [1] \caption{CARS (Cache-Aware Recommendations for Sequential content access)}\label{alg:ADMM} \Statex {$Input: N, U, q, \mathbf{x}, a, \mathbf{p_{0}}$} \Comment (system parameters) \Statex {$Input: Acc_1, Acc_2, maxIter, \rho, \lambda_0, Y_0$} \Statex\Comment (algorithm tuning parameters) \State {$i \gets 1$} \State $COST_0 \gets \infty$ \State $V \gets True$ \While{$V$} \State $\boldsymbol{\pi}_i = \underset{\boldsymbol{\pi}\in C_{\pi}}{\operatorname{argmin}} \{f_{\rho}(\boldsymbol{\pi},Y_{i-1}) \}$ \State $Y_i = \underset{Y\in C_{Y}}{\operatorname{argmin}} \{f_{\rho}(\boldsymbol{\pi}_{i},Y) \}$ \State $\lambda \gets \lambda + (\frac{\rho}{2})\cdot c(\boldsymbol{\pi}_{i},Y_{i})$ \State $COST_i \gets (1-a)\cdot \mathbf{p_{0}^{T}} \cdot (I-a\cdot Y_{i})^{-1} \cdot \mathbf{x}$ \State $\epsilon_1 \gets ||c(\boldsymbol{\pi}_i,Y_i)||_{2}^{2}$ \State $\epsilon_2 \gets |COST_i - COST_{i-1}|$ \State $V = ((\epsilon_1>Acc_1)\lor(\epsilon_2>Acc_2))\land(i \leq maxIter)$ \State $i \gets i+1$ \EndWhile \State $j\gets \underset{\ell=1,...,i-1}{\operatorname{argmin}} \{COST_{\ell}\}$ \State $return~~Y_{j}$ \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:ADMM} receives as input the system parameters $N, U, q, \mathbf{x}, a, \mathbf{p_{0}}$, as well as the desired accuracy levels and initialization parameters $Acc_1, Acc_2, maxIter, \rho, \lambda_0, Y_0$. It initializes the objective value ($COST_{0}$) to infinity and starts the iterative loop that solves the convex subproblems (lines 4--13). In the first leg of the loop (line 5), the augmented Lagrangian $f_{\rho}(\boldsymbol{\pi},Y)$ is minimized over $\boldsymbol{\pi}$, with the variables $Y$ held constant (at their previous value). Line 6 then treats the value of $\boldsymbol{\pi}$ returned by line 5 as constant and minimizes the Lagrangian over the variables $Y$. Both minimization sub-problems are convex and can be efficiently solved.
The solution space of the sub-problems, $C_{Y}$ and $C_{\pi}$, is given by Eqs.~(\ref{eq:y-box-constraint})--(\ref{eq:quality-constraint}) and Eqs.~(\ref{eq:stationarity-sum-pi})--(\ref{eq:stationarity-positive-pi}), respectively. After the long-term $COST$ obtained from $Y_i$ is calculated in line 8, the status of the current iteration is captured by (a) the primal residual of the problem (line 9) and (b) the change in $COST$ compared to the previous iteration (line 10). The algorithm exits the while loop when the primal residual and the improvement in the $COST$ are both smaller than the required accuracy, or when the maximum allowable number of iterations is reached (line 11). As a final note, the above problem can also be cast into a non-convex QCQP (quadratically constrained quadratic program). State-of-the-art heuristic methods for approximately solving generic QCQP problems~\cite{park2017general} are unfortunately of too high computational complexity for problems of this size. Indeed, we transformed the problem to a standard QCQP formulation and applied methods based on~\cite{park2017general}, but the resulting algorithms were only capable of solving small instances of the problem (a few tens of contents). \myitem{Convergence of CARS.} Finally, we investigate the performance of CARS (Algorithm~\ref{alg:ADMM}) as a function of its computational cost, i.e., the maximum number of iterations needed. Fig.~\ref{fig:admm-convergence} shows the achieved \textit{actual cost} (red line, circle markers) at each iteration, and the \textit{virtual cost} (gray line, triangle markers) calculated from the current value of the auxiliary variable $\boldsymbol{\pi}$, in a simulation scenario (see details in Section~\ref{sec:sims}). It can be seen that within $5$ iterations, CARS converges to its maximum achieved cache hit ratio. This is particularly important for cases with large content catalogue sizes that require an online implementation of CARS.
\begin{figure} \centering \includegraphics[width=0.6\columnwidth]{convergence} \caption{Convergence of CARS.} \label{fig:admm-convergence} \end{figure} \subsection{Datasets}\label{sec:datasets} We collect two datasets that contain ratings of multimedia content. We use this information to build similarity matrices $U$, which are later used in the selection of recommendations, e.g., to satisfy a minimum recommendation quality $q$ (as defined in \secref{sec:optimization-problem}). \myitem{MovieLens.} We use the $100k$ subset of the \textit{latest MovieLens} movie-rating dataset from the MovieLens website~\cite{movielens-related-dataset}, containing $69162$ ratings (from 0.5 to 5 stars) by $671$ users for $9066$ movies.
To generate the matrix $U$ of movie similarities from the raw information of user ratings, we apply a standard collaborative filtering method~\cite{survey-collaborative-filtering}. Specifically, we first apply item-to-item collaborative filtering (using the 10 most similar items) to predict the missing user ratings, and then use the cosine similarity ($\in[-1,1]$) of each pair of contents based on their common ratings \begin{equation*} sim(i,j) = \frac{\sum_{n=1}^{\#users} r_{n}(i)\cdot r_{n}(j)}{\sqrt{\sum_{n=1}^{\#users} r_{n}^{2}(i)} \cdot \sqrt{\sum_{n=1}^{\#users} r_{n}^{2}(j)}} \end{equation*} where the ratings $r_{n}(i)$ are normalized by subtracting from each rating the average rating of the corresponding item. We build the matrix $U$ by saturating values above $0.6$ to $1$, and setting the rest to zero, so that $u_{ij}\in \{0,1\}$. \myitem{Last.fm.} We use the subset of \textit{The Million Song Dataset} from the Last.fm database~\cite{Bertin-Mahieux2011}, containing $10k$ song IDs. The dataset was built with the method ``getSimilar'', and thus it contains a $K\times K$ matrix with the similarity scores (in $[0,1]$) between each pair of songs in the dataset, which we use as the matrix $U$. As the Last.fm dataset is quite sparse, we set the nonzero values $u_{ij}$ to one, to obtain a binary $U$ for this dataset as well. To facilitate simulations, we process both datasets by removing rows and columns of the respective $U$ matrices with $\sum_{j \in \mathcal{K}}u_{ij} \leq N$ (where $N=4$ is the total number of recommendations). After this preprocessing, we end up with content catalogues of size $K=1060$ and $K=757$ for the MovieLens and Last.fm traces, respectively.
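The construction of $U$ from ratings can be sketched as follows (the $4\times 4$ rating matrix is a toy illustration; the real pipeline additionally predicts missing ratings via item-to-item collaborative filtering before thresholding):

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items); 0 = no rating.
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def item_similarity(R):
    """Adjusted cosine similarity between item columns, using only
    co-rated entries and item-mean-centred ratings."""
    rated = R > 0
    counts = rated.sum(axis=0)
    means = np.where(counts > 0, R.sum(axis=0) / np.maximum(counts, 1), 0.0)
    Rc = np.where(rated, R - means, 0.0)   # centre each item's ratings
    K = R.shape[1]
    S = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            mask = rated[:, i] & rated[:, j]        # co-rating users only
            ni = np.linalg.norm(Rc[mask, i])
            nj = np.linalg.norm(Rc[mask, j])
            if ni > 0 and nj > 0:
                S[i, j] = Rc[mask, i] @ Rc[mask, j] / (ni * nj)
    return S

S = item_similarity(R)
U = (S > 0.6).astype(int)   # saturate: similarities above 0.6 become 1
np.fill_diagonal(U, 0)      # ignore self-similarity
```

On this toy matrix, the two "action-movie" items (0, 1) and the two "drama" items (2, 3) end up mutually similar, while cross-group entries are zero.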
\subsection{Simulation Setup}\label{sec:simulation-setup} \myitem{Content Demand.} The users generate $40000$ requests for contents in a catalogue $\mathcal{K}$; requests are either \textit{direct}, with probability $\mathbf{p_{0}} \sim Zipf(s)$ ($s$ being the exponent of the Zipf law) for any content, or \textit{recommended}, with probability $\frac{1}{N}$ for each of the recommended contents. We consider scenarios with exponent $s\in [0.4, 0.8]$ and $N=4$. Unless otherwise stated, we set the default value $a=0.8$, in line with the statistics in~\cite{gomez2016netflix}. \myitem{Caching Policy.} We consider a popularity-based caching policy, where the $C$ most popular (w.r.t. $\mathbf{p_{0}}$) contents are locally cached in the base station. This policy is optimal in a single-cache network when no recommendation system is employed. \myitem{Recommendation policy.} We simulate scenarios under the following three recommendation policies: \begin{itemize}[leftmargin=*] \item \emph{No Recommendation}: This is the baseline scenario, where users request contents based only on $\mathbf{p_{0}}$ (or, equivalently, $a=0$). \item \emph{Myopic policy}: Cache-aware recommendations using the algorithm of \secref{sec:greedy-algorithm}, which optimizes recommendations assuming single-step content requests. This policy relates to the previous works of~\cite{sermpezis-sch-globecom,chatzieleftheriou2017caching}. \item \emph{Proposed Policy/CARS}: Cache-aware recommendations using \textit{CARS}, which optimizes recommendations for sequential content consumption.
\end{itemize} \begin{figure} \centering \subfigure[MovieLens,~$s = 0.7$]{\includegraphics[width=0.6\columnwidth]{wowmom_movielens_chr_vs_qual}\label{fig:movieles-quality}} \subfigure[Last.fm,~$s=0.4$]{\includegraphics[width=0.6\columnwidth]{wowmom_LastFM_chr_vs_qual}\label{fig:lastfm-quality}} \caption{Cache Hit Ratio vs Quality~$N=4,~C/K=5\%$.} \end{figure} \subsection{Results}\label{sec:simulation-results} We compare the three recommendation policies in scenarios with varying $q$ (minimum quality of recommendations; see \secref{sec:optimization-problem}), cache size $C$, probability $a$ to request a recommended content, and number of recommended contents $N$. For simplicity, we assume costs $x=0$ for cached contents and $x=1$ for non-cached ones. Hence, the cost $(1-a)\cdot \mathbf{p_{0}}^{T} \cdot (I-aY)^{-1} \cdot \mathbf{x}$ becomes the cache miss ratio, and we use the complementary cache hit ratio ($CHR$) as the metric to measure the achieved performance in our simulations. \myitem{Impact of Quality of Recommendations.} Recommending only cached contents becomes trivial if no quality in recommendations is required. However, the primary goal of a content provider is to satisfy its users, which translates to high-quality recommendations. In the following, we present results showing that the proposed \textit{CARS can always achieve a good trade-off between cache hit ratio and quality of recommendation, significantly outperforming baseline approaches}. In Figures~\ref{fig:movieles-quality} and~\ref{fig:lastfm-quality} we present the achieved cache hit ratio (y-axis) of the three recommendation policies for the MovieLens and Last.fm datasets, respectively, in scenarios where the recommender quality is imposed to be above a predefined threshold $q$ (x-axis). The first observation is that \textit{Myopic} and \textit{CARS} achieve their goal to increase the CHR compared to the baseline case of \textit{NoRec}.
The absolute gains of both policies increase for lower values of $q$, because lower $q$ allows more flexibility in the recommendations. For high values of $q$, close to $100\%$, fewer recommendations that ``show the cache'' are allowed, and this leads to lower gains. However, even when the quality reaches almost $100\%$, the gains of \textit{CARS} remain significant. In fact, the relative performance of \textit{CARS} over \textit{Myopic} increases with $q$, which indicates that non-\textit{Myopic} policies are more efficient when high recommendation quality is required. Moreover, comparing Figures~\ref{fig:movieles-quality} and~\ref{fig:lastfm-quality} reveals that the achievable gains also depend on the similarity matrix $U$. While in Fig.~\ref{fig:lastfm-quality} both cache-aware recommendation policies follow a similar trend (for varying $q$), in Fig.~\ref{fig:movieles-quality}, for the larger MovieLens dataset, the performance of \textit{CARS} decreases much more slowly with $q$ than that of \textit{Myopic}. \begin{figure} \centering \subfigure[MovieLens,~$s=0.5$]{\includegraphics[width=0.6\columnwidth]{wowmom_movielens_chr_vs_cache_size}\label{fig:movieles-cache-size}} \subfigure[Last.fm,~$s=0.4$]{\includegraphics[width=0.6\columnwidth]{wowmom_LastFM_chr_vs_cache_size}\label{fig:lastfm-cache-size}} \caption{Cache Hit Ratio vs Relative Cache size,~$Q=80\%,~N=4$.} \end{figure} \myitem{Impact of Caching Capacity.} In Figures~\ref{fig:movieles-cache-size} and~\ref{fig:lastfm-cache-size} we investigate the performance of the recommendation policies with respect to the cache size, for a fixed value of the recommender quality $q$. The proposed algorithm significantly outperforms the other two policies. For example, in Fig.~\ref{fig:lastfm-cache-size}, for $C/K=8\%$ it achieves a $25\%$ improvement over the \textit{Myopic} algorithm.
Even in the case of the MovieLens dataset (Fig.~\ref{fig:movieles-cache-size}), where the \textit{Myopic} algorithm can only marginally improve the cache hit ratio, \textit{CARS} still achieves significant gains. In total, in all scenarios we considered, \textit{the relative caching gains from the proposed cache-aware recommendation policy (over the no-recommendation case) are consistent and even increase with the caching size.} \myitem{Impact of Sequential Content Consumption.} \textit{CARS} takes into account the fact that users consume more than one content sequentially, and optimizes recommendations accordingly. On the contrary, the \textit{Myopic} algorithm (similarly to the previous works~\cite{sermpezis-sch-globecom},~\cite{chatzieleftheriou2017caching}) considers single content requests. Therefore, Algorithm~\ref{alg:ADMM} is expected to perform better as the average number of consecutive requests by a user increases. The simulation results in Fig.~\ref{fig:num-access} validate this argument. We simulate a scenario with a small catalogue ($K=100, C=4, N=3, s=0.6, q=90\%$) and a $U$ matrix with $\overline{R}=4$ related contents per content on average, where we vary the number of consecutive requests by each user. It can be seen that the \textit{Myopic} algorithm increases the cache hit ratio when the users make a few consecutive requests (e.g., 3 or 4); after this point the cache hit ratio remains constant. Under \textit{CARS}, however, not only is the increase in the cache hit ratio higher, but it keeps growing as the number of consecutive requests increases. This is a promising message for real content services (such as YouTube, Netflix, Spotify, etc.), where users tend to consume many contents sequentially. \myitem{Impact of Probability $a$.} The probability $a$ represents the \textit{frequency} with which a user follows a recommendation rather than requesting an arbitrary content (restart probability, e.g., through the search bar in YouTube).
The value of $a$ indicates the influence of the recommendation system on users; in the cases of YouTube and Netflix it is approximately 0.5 and 0.8, respectively~\cite{RecImpact-IMC10},~\cite{gomez2016netflix}. In Fig.~\ref{fig:restart} we present the performance of the two cache-aware recommendation policies for varying values of $a$. The higher the value of $a$, the more frequently a user follows a recommendation, and thus the higher the gains from the cache-aware recommendation policies. However, while the gain of the \textit{Myopic} algorithm increases linearly with $a$, the gain of the proposed \textit{CARS} increases superlinearly. This is because Algorithm~\ref{alg:ADMM} takes into account the effect of the probability $a$ when selecting the recommendations (e.g., see the objective function of \probref{problem:infinite-step-admm}). \begin{figure} \centering \subfigure[vs \# of Accesses (synthetic)]{\includegraphics[width=0.6\columnwidth]{wowmom_chrEvol_vs_nb_of_accesses}\label{fig:num-access}} \subfigure[vs Probability (a) (synthetic)]{\includegraphics[width=0.6\columnwidth]{wowmom_chr_vs_teleport}\label{fig:restart}} \caption{CHR for $N=3$,~(synthetic scenario)\\(a)~$q=85\%,~s=0.2,~C/K=4\%$\\~(b)~$q=90\%,~s=0.6,~C/K=2.5\%$} \end{figure} \section{Introduction} \label{sec:intro} \input{introduction_akis} \section{Problem Setup} \label{sec:problem-setup} \input{problem_setup_akis} \section{Problem Formulation}\label{sec:optimization-problem} \input{optimization_akis} \section{Optimization Algorithm} \label{sec:optim_algorithm} \input{optim_algorithm_akis} \section{Performance Evaluation} \label{sec:sims} \input{performance_eval} \section{Related Work} \label{sec:related} \input{related} \section{Conclusions} \label{sec:conclusions} \input{conclusion} \bibliographystyle{ieeetr} \input{wowmom2018_draft_theo.bbl} \end{document}
\section{Introduction} \label{sec:intro} Sound generation by spinning modes is a central problem in many applications. Devices such as propellers and fans obviously produce a rotating source system, but the termination of a circular duct also radiates like a spinning source and jets have a source system which can be decomposed into spinning modes. The problem of the relationship between a source distribution and its spinning acoustic field has thus attracted considerable attention in the literature. This paper examines one part of the problem, the relationship between the radial structure of the source and the form of the corresponding acoustic field. Earlier work, using exact and asymptotic analysis \citep{carley10b,carley10c}, has fixed an upper limit on the information which is radiated into the field, but without considering the effect of the source distribution. In this paper, the theory is extended to include the relationship between the source and the field, allowing a discussion of the implications for a number of problems. A first area where the analysis is relevant is that of control of noise from rotors. One approach to this problem is to fit an inverse model to the measured noise and then use this model to compute a noise field which cancels the rotor noise at some point. In a recent study \cite[][]{gerard-berry-masson05a,gerard-berry-masson05b,gerard-berry-masson-gervais07} it was found that a very low order acoustic model of a cooling fan was sufficiently accurate for control purposes. At first glance, it is not clear why a low order model should give a good match to the results from a finite rotor, beyond considerations of acoustical compactness. An analysis of the information content of the field, however, shows that the field is generated by a set of low order modes with the higher order modes being cut-off and generating exponentially small noise \cite[][]{carley10b,carley10c}.
The second application of the approach of this paper is to inverse methods, in which a `source' is determined from acoustic field measurements. There have been numerous applications of such techniques, but of particular interest here is that of jet noise. It is known that the noise field of a turbulent jet is represented by a much lower order model than is the flow \citep{jordan-schlegel-stalnov-noack-tinney07}. This is partly explained by axial interference effects \citep[for example]{freund01} but relatively little attention has been paid to the influence of the radial structure of the source until quite recently \citep{michel09}. This paper presents an analysis of the general problem of radiation from a disc source and fixes limits on the proportion of the source which actually radiates into the acoustic field, with no approximation other than the standard acoustic assumption of linearity. In particular, the far-field assumption is not required, making the results applicable over a wide range of parameters. The implications of the results are then discussed with respect to rotor and jet noise. \section{Spinning sound fields} \label{sec:spin} \begin{figure} \centering \centerline{\includegraphics{jfm10-figs.2}} \caption{Coordinate system for disc radiation calculations} \label{fig:coordinate} \end{figure} The problem is formulated as that of calculating the acoustic field radiated by a monopole source distributed over a circular disc. This disc may be viewed as the source proper, such as in the case of rotor noise, or as part of a distributed three-dimensional source, as in jet noise. The system for the analysis is shown in Figure~\ref{fig:coordinate} with cylindrical coordinates $(r,\theta,z)$. All lengths are non-dimensionalized on disc radius and the disc lies in the plane $z=0$. 
The field from one azimuthal mode of the acoustic source, specified as $s_{n}(r_{1})\exp\J n \theta_{1}$, has the form $P_{n}(k,r,z)\exp\J n \theta$, with $P_{n}$ given by the Rayleigh integral: \begin{eqnarray} \label{equ:disc} P_{n}(k,r,z) &=& \int_{0}^{1} \int_{0}^{2\upi} \frac{\E^{\J(kR'+n\theta_{1})}}{4\upi R'}\,\D \theta_{1} s_{n}(r_{1}) r_{1}\,\D r_{1},\\ R' &=& \left[ r^{2} + r_{1}^{2} - 2rr_{1}\cos\theta_{1} + z^{2} \right]^{1/2},\nonumber \end{eqnarray} where $k$ is wavenumber and subscript $1$ indicates variables of integration. The field due to higher order sources, such as dipoles and quadrupoles, would be found by differentiation of (\ref{equ:disc}). The analysis to be presented does not include such sources but the conclusions drawn should still be valid. \subsection{Equivalent line source expansion} \label{sec:line} \begin{figure} \centering \centerline{\includegraphics{jfm10-figs.1}} \caption{Transformation to equivalent line source} \label{fig:sideline} \end{figure} The analysis of the nature of the sound field from an arbitrary disc source is based on a transformation of the disc to an exactly equivalent line source, an approach which has been used to study transient radiation from pistons \cite[][]{oberhettinger61a,pierce89}, rotor noise \cite[][]{chapman93,carley99} and source identification methods \citep{carley09}. The transformation to a line source is shown in Figure~\ref{fig:sideline}, which shows the new coordinate system $(r_{2},\theta_{2},z)$ centred on a sideline of constant radius $r$. 
Under this transformation: \begin{eqnarray} \label{equ:transformed} P_{n}(r,z) &=& \int_{r-1}^{r+1} \frac{\E^{\J kR'}}{R'} K(r,r_{2})r_{2}\,\D r_{2},\\ R' &=& \left(r_{2}^{2} + z^{2}\right)^{1/2},\nonumber\\ \label{equ:kfunc} K(r,r_{2}) &=& \frac{1}{4\upi} \int_{\theta_{2}^{(0)}}^{2\upi-\theta_{2}^{(0)}} \E^{\J n\theta_{1}}s_{n}(r_{1})\,\D\theta_{2}, \end{eqnarray} for observer positions with $r>1$, with the limits of integration given by: \begin{equation} \label{equ:theta} \displaystyle \theta_{2}^{(0)} = \cos^{-1}\frac{1-r^{2}-r_{2}^{2}}{2rr_{2}}. \end{equation} Functions of the form of $K(r,r_{2})$ have been analyzed in previous work \citep{carley99}. In this paper, it is sufficient to note that the source function can be written \citep{carley09}: \begin{eqnarray} \label{equ:kfunc:exp} K(r,r_{2}) &=& \sum_{q=0}^{\infty} u_{q}(r)U_{q}(s)(1-s^2)^{1/2}, \end{eqnarray} where $U_{q}(s)$ is a Chebyshev polynomial of the second kind, $s=r_{2}-r$ and the coefficients $u_{q}(r)$ are functions of $r$ but not of $z$. Inserting (\ref{equ:kfunc:exp}) into (\ref{equ:transformed}): \begin{eqnarray} \label{equ:line:ip} P_{n}(k,r,z) &=& \sum_{q=0}^{\infty} u_{q}(r) \int_{-1}^{1} \frac{\E^{\J kR'}}{R'} U_{q}(s) (r+s)(1-s^2)^{1/2}\,\D s,\\ R' &=& \left[(r+s)^{2} + z^{2}\right]^{1/2}. \end{eqnarray} The radiation properties of the integrals of (\ref{equ:line:ip}) have been examined in some detail elsewhere \citep[][]{carley10b,carley10c}, giving an exact result for the in-plane case $z=0$: \begin{equation} \label{equ:line:series} P_{n}(k,r,0) = \upi \E^{\J k r} \sum_{q=0}^{\infty} u_{q}(r) \J^{q} (q+1) \frac{\besselj{q+1}(k)}{k}. \end{equation} For large order $q$, the Bessel function $J_{q}(k)$ decays exponentially for $k<q$ so that the line source modes with order $q>k$ are `cut-off' and generate exponentially small noise fields. 
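The cut-off behaviour of the modal weights $(q+1)J_{q+1}(k)/k$ in (\ref{equ:line:series}) is easy to check numerically. The following sketch (Python; the ascending-series Bessel implementation, the chosen $k$ and the truncation limits are ours, for illustration only) shows the collapse of the weights once $q$ exceeds $k$:

```python
from math import factorial

def bessel_j(m, x, terms=60):
    # Ascending series for the integer-order Bessel function:
    # J_m(x) = sum_s (-1)^s (x/2)^(2s+m) / (s! (s+m)!)
    return sum((-1) ** s * (x / 2.0) ** (2 * s + m)
               / (factorial(s) * factorial(s + m)) for s in range(terms))

def mode_weight(q, k):
    # Weight of line source mode q in the in-plane series for P_n(k, r, 0)
    return (q + 1) * bessel_j(q + 1, k) / k

k = 5.0
weights = [abs(mode_weight(q, k)) for q in range(24)]
# Modes with q > k contribute exponentially little: the weights collapse.
```

For $k=5$ the weight of mode $q=20$ is roughly ten orders of magnitude below that of the cut-on modes, illustrating why only the first few line source modes reach the field.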
Since the integrals take their maximum value in the plane $z=0$, (\ref{equ:line:series}) shows that the field of any mode $u_{q}(r)$ with $q>k$ is exponentially small everywhere: such modes are cut off throughout the field. This gives an indication of how much of a given source distribution radiates into the acoustic field, near or far. It only remains to establish the relationship between the radial source $s_{n}(r_{1})$ and the line source coefficients $u_{q}(r)$. \subsection{Series expansion for spinning sound fields} \label{sec:series} A recently derived series \citep{carley10} for the field radiated by a ring source of radius $r_{1}$ can be used to find an expression for the sound radiated by a disc source with arbitrary radial variation: \begin{eqnarray} R_{n} &=& \int_{0}^{2\upi} \frac{\E^{\J(kR'+n\theta_{1})}}{4\upi R'}\,\D \theta_{1}, \nonumber\\ &=& \J^{2n+1}\frac{\upi}{4} \frac{1}{(r_{1}R)^{1/2}} \sum_{m=0}^{\infty} (-1)^m \frac{(2n+4m+1)(2m-1)!!}{(2n+2m)!!}\nonumber\\ \label{equ:ring} &\times& \hankel{n+2m+1/2}(kR) P_{n+2m}^{n}(\cos\phi) \besselj{n+2m+1/2}(kr_{1}), \end{eqnarray} with $\hankel{\nu}(x)$ the Hankel function of the first kind of order $\nu$, $\besselj{\nu}$ the Bessel function of the first kind and $P_{n}^{m}$ the associated Legendre function. The observer position is specified in spherical polar coordinates $R=[r^{2}+z^{2}]^{1/2}$, $\phi=\tan^{-1}r/z$. Multiplication by the radial source term $r_{1}s_{n}(r_{1})$ and integration gives an expression for the field radiated by a general source of unit radius and azimuthal order $n$: \begin{eqnarray*} P_{n}(k,r,z) &=& \J^{2n+1}\frac{\upi}{4} \sum_{m=0}^{\infty} (-1)^m \frac{(2n+4m+1)(2m-1)!!}{(2n+2m)!!} P_{n+2m}^{n}(\cos\phi)S_{n+2m},\\ S_{n+2m}(k,r,z) &=& \int_{0}^{1} s_{n}(r_{1}) \besselj{n+2m+1/2}(kr_{1})\hankel{n+2m+1/2}(kR) \left( \frac{r_{1}}{R} \right)^{1/2}\,\D r_{1}. 
\end{eqnarray*} Setting $z=0$ ($\phi=\upi/2$, $R=r$): \begin{eqnarray} \label{equ:series:ip} P_{n}(k,r,0) &=& \frac{\J\upi}{4} \sum_{m=0}^{\infty} \frac{1}{m!} \frac{(2n+4m+1)(2n+2m-1)!!(2m-1)!!}{2^{m} (2n+2m)!!} S_{n+2m}, \end{eqnarray} where use has been made of the expression~\citep[8.756.1]{gradshteyn-ryzhik80}: \begin{equation} P_{n+2m}^{n}(0) = \frac{(-1)^{m+n}}{2^{m}}\frac{(2n+2m-1)!!}{m!}. \end{equation} \subsection{Line source coefficients} \label{sec:coefficients} The expressions for $P_{n}$ from \S\,\ref{sec:series} and~\S\,\ref{sec:line} are both exact and can be equated to derive a system of equations relating the coefficients $u_{q}(r)$ to the radial source distribution $s_{n}(r_{1})$: \begin{eqnarray} \frac{\J}{4} \sum_{m=0}^{\infty} \frac{1}{m!} \frac{(2n+4m+1)(2n+2m-1)!!(2m-1)!!}{2^{m}(2n+2m)!!} S_{n+2m} &=&\nonumber\\ \label{equ:system} \E^{\J k r} \sum_{q=0}^{\infty} u_{q}(r) \J^{q} (q+1) \frac{\besselj{q+1}(k)}{k}. \end{eqnarray} Under repeated differentiation, (\ref{equ:system}) becomes a lower triangular system of linear equations which connects the coefficients $u_{q}(r)$ and $S_{n+2m}$: \begin{eqnarray} \frac{\J}{4} \sum_{m=0}^{\infty} \frac{1}{m!} \frac{(2n+4m+1)(2n+2m-1)!!(2m-1)!!}{2^{m}(2n+2m)!!} S_{n+2m}^{(v)} &=&\nonumber\\ \label{equ:diff} \sum_{q=0}^{\infty} u_{q}(r) \J^{q} (q+1) \left[ \E^{\J k r} \frac{\besselj{q+1}(k)}{k} \right]^{(v)}, \end{eqnarray} where superscript $(v)$ denotes the $v$th partial derivative with respect to $k$, evaluated at $k=0$. 
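The double-factorial form of $P_{n+2m}^{n}(0)$ quoted above can be checked against the general value $P_{\nu}^{\mu}(0)=2^{\mu}\pi^{1/2}/[\Gamma((\nu-\mu)/2+1)\Gamma((1-\nu-\mu)/2)]$. A small numerical comparison (Python; the helper names are ours):

```python
from math import factorial, gamma, pi, sqrt

def double_factorial(n):
    # n!! with the usual convention (-1)!! = 1
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def legendre_zero_gamma(n, m):
    # P^n_{n+2m}(0) from the general Gamma-function expression
    nu, mu = n + 2 * m, n
    return 2 ** mu * sqrt(pi) / (gamma((nu - mu) / 2 + 1)
                                 * gamma((1 - nu - mu) / 2))

def legendre_zero_df(n, m):
    # The double-factorial form used in the text, G&R 8.756.1
    return ((-1) ** (m + n) * double_factorial(2 * n + 2 * m - 1)
            / (2 ** m * factorial(m)))
```

The two forms agree to machine precision over the first few azimuthal and radial orders.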
Using standard series \citep{gradshteyn-ryzhik80}, the products of special functions can be written: \begin{eqnarray} \label{equ:prod:exp:j} \E^{\J k r}\frac{\besselj{q+1}(k)}{k} &=& \frac{1}{\J^{q}} \sum_{t=0}^{\infty}(\J k)^{t+q} E_{t,q}(r),\\ E_{t,q}(r) &=& \frac{1}{2^{q+1}} \sum_{s=0}^{[t/2]} \frac{r^{t-2s}}{4^{s}s!(s+q+1)!(t-2s)!},\nonumber \end{eqnarray} where $[t/2]$ is the largest integer less than or equal to $t/2$, and \begin{eqnarray} \left( \frac{r_{1}}{r} \right)^{1/2} \hankel{n+1/2}(kr)\besselj{n+1/2}(kr_{1}) &=& \left( \frac{r}{2} \right)^{2n+1} \sum_{t=0}^{\infty} \frac{k^{2t+2n+1}}{t!} \left( -\frac{r^{2}}{4} \right)^{t}V_{n,t}(r_{1}/r) \nonumber\\ &-& (-1)^n\J \sum_{t=0}^{\infty} \frac{k^{2t}}{t!} \left( -\frac{r^{2}}{4} \right)^{t}W_{n,t}(r_{1}/r), \label{equ:prod:h:j} \end{eqnarray} with the polynomials $V_{n,t}$ and $W_{n,t}$ given by: \begin{subequations} \label{equ:vwpoly} \begin{eqnarray} V_{n,t}(x) &=& \sum_{s=0}^{t} {t \choose s} \frac{x^{2s+n+1}}{\Gamma(n+s+3/2)\Gamma(t-s+n+3/2)},\\ W_{n,t}(x) &=& \sum_{s=0}^{t} {t \choose s} \frac{x^{2s+n+1}}{\Gamma(n+s+3/2)\Gamma(t-s-n+1/2)}. \end{eqnarray} \end{subequations} Given the power series, the derivatives at $k=0$ are readily found: \begin{subequations} \label{equ:derivatives} \begin{eqnarray} \J^{q}\left. \frac{\partial^{v}}{\partial k^{v}} \E^{\J k r}\frac{\besselj{q+1}(k)}{k} \right|_{k=0} &=& \left\{ \begin{array}{ll} 0, & v < q;\\ \J^{v}v!E_{v-q,q}(r), & v \geq q. \end{array} \right.\\ \left. \frac{\partial^{v}}{\partial k^{v}} (r_{1}/r)^{1/2} \hankel{n+1/2}(kr)\besselj{n+1/2}(kr_{1}) \right|_{k=0} &=& \nonumber\\ \left\{ \begin{array}{lll} \displaystyle 0, & v=2v'+1,& v' < n;\\ \displaystyle \left( \frac{r}{2} \right)^{2n+1} \left( -\frac{r^{2}}{4} \right)^{v'-n} \frac{v!}{(v'-n)!}V_{n,v'-n}(r_{1}/r), & v=2v'+1, & v'\geq n;\\ \displaystyle -(-1)^{n}\J \frac{(2v')!}{v'!} \left( -\frac{r^{2}}{4} \right)^{v'}W_{n,v'}(r_{1}/r), &v=2v'. \end{array} \right. 
\end{eqnarray} \end{subequations} Setting $v=0,1,\ldots$ yields an infinite lower triangular system of equations for $u_{q}(r)$: \begin{equation} \label{equ:system:1} \mathsfbi{E}\mathbf{U} = \mathbf{B}, \end{equation} with $\mathbf{U}=[u_{0}\,u_{1}\,\ldots]^{T}$ and the elements of matrix $\mathsfbi{E}$ and vector $\mathbf{B}$ given by: \begin{subequations} \label{equ:entries} \begin{eqnarray} E_{vq} &=& \left\{ \begin{array}{ll} \J^{v}(q+1)v!E_{v-q,q}(r), & q\leq v;\\ 0, & q > v. \end{array} \right.\\ B_{v} &=& \frac{\J}{4} \sum_{m=0}^{\infty} \frac{1}{m!} \frac{(2n+4m+1)(2n+2m-1)!!(2m-1)!!}{2^{m}(2n+2m)!!} S_{n+2m}^{(v)}, \end{eqnarray} \end{subequations} where \begin{eqnarray} \label{equ:integrals} S_{n+2m}^{(v)} &=& \left\{ \begin{array}{lll} \displaystyle 0 & v = 2v'+1, & v' < n+2m;\\ \displaystyle (-1)^{n+v'} \frac{v!}{(v'-n-2m)!} \left( \frac{r}{2} \right)^{v} \int_{0}^{1}V_{n+2m,v'-n-2m}(r_{1}/r) s_{n}(r_{1})\,\D r_{1}, & v = 2v'+1, & v' \geq n+2m;\\ \displaystyle -(-1)^{n+v'} \frac{\J v!}{v'!} \left( \frac{r}{2} \right)^{v} \int_{0}^{1}W_{n+2m,v'}(r_{1}/r) s_{n}(r_{1})\,\D r_{1}, & v = 2v'. \end{array} \right. \end{eqnarray} Given a radial source term $s_{n}(r_{1})$, (\ref{equ:system:1}) can be solved to find the coefficients $u_{q}(r)$ of the equivalent line source modes. Since it is lower triangular, the first few values of $u_{q}$ can be reliably estimated, although ill-conditioning prevents accurate solution for arbitrarily large $q$. \subsection{Radiated field} \label{sec:radiated} From the relationship between the radial source term and the line source coefficients, some general properties of the acoustic field can be stated. The first result, already found by \cite{carley10b,carley10c}, is that, since the line source modes with $q+1>k$ are cut off, the acoustic field has no more than $k$ degrees of freedom, in the sense that the radiated field is given by a weighted sum of the fields due to no more than $k$ elementary sources. 
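Because the system (\ref{equ:system:1}) is lower triangular, its solution reduces to forward substitution. The sketch below (Python; the truncation size and the test value of $B_{0}$ are arbitrary illustrations, not values from the paper) builds the entries $E_{vq}$ from (\ref{equ:entries}) and recovers the first coefficients:

```python
from math import factorial

def e_poly(t, q, r):
    # E_{t,q}(r) from the power series of exp(i k r) J_{q+1}(k)/k
    return sum(r ** (t - 2 * s)
               / (4 ** s * factorial(s) * factorial(s + q + 1)
                  * factorial(t - 2 * s))
               for s in range(t // 2 + 1)) / 2 ** (q + 1)

def solve_line_coefficients(b, r):
    # Forward substitution on the lower triangular system E U = B,
    # with E_{vq} = i^v (q+1) v! E_{v-q,q}(r) for q <= v
    u = []
    for v in range(len(b)):
        acc = b[v]
        for q in range(v):
            acc -= 1j ** v * (q + 1) * factorial(v) * e_poly(v - q, q, r) * u[q]
        u.append(acc / (1j ** v * (v + 1) * factorial(v) * e_poly(0, v, r)))
    return u

r, b0 = 1.25, 0.3 + 0.1j
u = solve_line_coefficients([b0, 0.0, 0.0, 0.0], r)
```

The first two coefficients reproduce the closed-form result $u_{0}=2B_{0}$, $u_{1}=-2ru_{0}$ of (\ref{equ:system:sol}).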
From (\ref{equ:system:1}), this result can be extended. The first extension comes from the fact that $B_{2v'+1}\equiv0$ for $v'<n$ on the right hand side of (\ref{equ:system:1}). This means that $u_{q}$, $q=2v'+1$, is uniquely defined by the lower order coefficients with $q\leq 2v'$. The result is that the acoustic field of azimuthal order $n$, whatever its radial structure, has no more than $k-n$ degrees of freedom. A second extension comes from examination of (\ref{equ:system:1}). The first few rows of the system of equations are: \begin{equation} \label{equ:system:a} \left[ \begin{array}{rrrrr} 1/2 & 0 & 0 & 0 & \cdots \\ r/2 & 1/4 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots & \\ \end{array} \right] \left( \begin{array}{c} u_{0} \\ u_{1} \\ \vdots \end{array} \right) = \left( \begin{array}{c} B_{0} \\ 0 \\ \vdots \end{array} \right), \end{equation} resulting in the solution: \begin{equation} \label{equ:system:sol} u_{0} = 2B_{0};\quad u_{1} = -2ru_{0} = -4rB_{0}, \end{equation} so that the ratio of $u_{0}$ and $u_{1}$ is constant for arbitrary $s_{n}(r_{1})$. This means that low frequency sources of the same radius and azimuthal order generate fields which vary only by a scaling factor, since the higher order terms are cut off. Finally, if we attempt to isolate a source $s_{n}(r_{1})$ associated with a single line source mode, by setting $u_{q}\equiv1$ for some $q$, with all other $u_{q}\equiv0$, we find that the line modes must occur in pairs, since if $u_{2v'}\equiv1$, then $u_{2v'+1}\neq0$, being fixed by the condition $B_{2v'+1}\equiv0$. \section{Results} \label{sec:results} To illustrate the application of the results of the previous section, we present some results for the calculation of the line source coefficients and for the use of the method to modify the radiating part of a source. We also discuss qualitatively the implications of the results for studies of jet noise. 
\subsection{Line source coefficient evaluation} \label{sec:coefficient} \begin{figure} \centering \begin{tabular}{cc} \includegraphics{jfm10-figs.3} & \includegraphics{jfm10-figs.4} \\ $n=2$ & $n=4$ \\ \includegraphics{jfm10-figs.5} \\ $n=16$ \end{tabular} \caption{Line source mode coefficients computed using the method of \S\,\ref{sec:coefficients} (solid lines) and directly from analytical formulae (symbols) for $r=5/4$, $s=r_{1}^{a}$, $a=0$ (circles), $a=2$ (squares) and $a=4$ (diamonds) for $n=2,4,16$.} \label{fig:cfft:compare} \end{figure} The first results are a comparison of coefficients $u_{q}(r)$ computed using (\ref{equ:system:1}) and those computed directly from exact closed-form expressions for $K(r,r_{2})$ in the case when the radial source term is a monomial in radius, $s_{n}=r_{1}^{a}$ \citep{carley99}. Figure~\ref{fig:cfft:compare} compares the two sets of coefficients for $a=0,2,4$, with the plots terminated at the value of $q$ where the difference between the two sets of results becomes noticeable, $q\approx20$. This gives an indication of the effect of the ill-conditioning of (\ref{equ:system:1}). For $q\lesssim 20$, the computed values of $u_{q}$ are reliable. It is noteworthy that for small $q$, the coefficients are practically equal for all values of $a$, so that for low frequency radiation the radiated fields will be practically indistinguishable. \subsection{Radial cancellation} \label{sec:cancellation} \begin{figure} \centering \begin{tabular}{cc} \includegraphics{jfm10-figs.6} & \includegraphics{jfm10-figs.8} \\ \includegraphics{jfm10-figs.7} & \includegraphics{jfm10-figs.9} \end{tabular} \caption{Cancellation effects for radial source terms: top row, radiated field from the original source $s_{n}(r_{1})$ (solid) and the modified source $s_{n}(r_{1})-\beta s_{n}'(r_{1})$ (dashed); bottom row, source terms $s_{n}(r_{1})$ (solid) and $\beta s_{n}'(r_{1})$ (dashed). The left hand column covers the case $n=2$ and the right hand $n=8$. 
In both cases, $k=1$, $r=5/4$.} \label{fig:field:compare} \end{figure} The analysis so far has identified that part of a source distribution which radiates, as a function of wavenumber $k$. From the results, it appears that only a small part of the source is responsible for the acoustic field, with most of the line source modes being cut off throughout the field, except at high frequency. Indeed, asymptotic analysis \citep{carley10b,carley10c} shows that even the cut-on modes radiate efficiently only into a small part of the acoustic field, with the exception of those of low order. In any case, this offers a method for examination of the radiation properties of a source. Given a source term $s_{n}(r_{1})$, the approach is to impose a secondary source $s_{n}'(r_{1})$ which generates the same set of line source modes up to some required order. In the simplest case, we match the first line source coefficient $u_{0}$, which automatically matches $u_{1}$. If $u_{0}$ is known for both $s_{n}(r_{1})$ and for $s_{n}'(r_{1})$, then the source $s_{n}(r_{1})-\beta s_{n}'(r_{1})$ will have $u_{0}=u_{1}=0$, if $\beta$ is taken as the ratio of $u_{0}$ for the primary and secondary sources $s_{n}(r_{1})$ and $s_{n}'(r_{1})$. Figure~\ref{fig:field:compare} shows the results of such a procedure using sources $s_{n}(r_{1})=J_{n}(\alpha r_{1})$, with $\alpha$ the first extremum of $J_{n}$ (similar to a duct mode), and $s_{n}'(r_{1})\equiv 1$. The wavenumber $k=1$ and values $n=2,8$ have been used. In the first case, $n=2$, the noise reduction in the plane $z=0$ is quite large, about 20\deci\bel, but there is a slight increase around $z=1$. This is because, as seen in Figure~\ref{fig:cfft:compare}, the coefficient $u_{2}$ is quite large and is multiplied by a Bessel function of order~3, which is not of high enough order relative to $k$ for the exponential decay which cuts off the mode. The cut-off behaviour is seen more clearly in the $n=8$ case, where the reduction at $z=0$ is~40\deci\bel. 
At larger $z$, the reduction is much smaller, but this is because, as found from asymptotic analysis \citep{carley10b,carley10c}, the field in this region only contains contributions from the remaining lower order modes, starting with $q=2$. \subsection{Degrees of freedom in jet noise fields} \label{sec:jets} The results of \S\,\ref{sec:radiated}, regarding the number of degrees of freedom in the acoustic field, can help explain some features of experiments on jet noise. Despite the lack of consensus on what is meant by the `source' of jet noise \citep[][]{jordan-gervais08,suzuki10}, some progress has been made by assuming that the source of jet noise can be identified with some combination of flow quantities. An open question, however, is which part of the source term radiates, since it is clear that only a small fraction of the flow generates the acoustic field. Two recent sets of results, one experimental, the other numerical, illustrate the issues. In one, \cite{freund01} has used direct numerical simulation to compute the flow and noise of a Mach~0.9 jet, validating the noise prediction against experiments and showing that a \cite{lighthill52} source term accurately reproduces the acoustic field. Spatial filtering of the source, using a wavenumber criterion to remove the non-radiating part, left ``a set of modes capable of radiating to the far field'', with the caveat that ``additional cancellation may occur due to the radial structure of the source which is not accounted for in this analysis''. Indeed, the radial structure of jet noise sources has not received much attention until quite recently \citep{michel09}. An experimental result of some interest is that of \cite{jordan-schlegel-stalnov-noack-tinney07} who performed a modal decomposition of a jet flow field and a proper orthogonal decomposition optimized for the resolution of the far field noise. 
They found that more than~350 modes were needed to capture half of the flow energy while~24 modes sufficed for~90\% of the far-field noise. As they note, passage to the far field acts as a filter passing only a low-dimensional representation of the flow. From these observations, it is plausible that the relatively low order structure of jet noise can be explained by the results of this paper. In the notation of this paper, $k=\upi \St M$, where $\St$ is Strouhal number based on jet diameter and $M$ is jet Mach number. For the range of Strouhal number important for jet noise $\St<2$ \citep{michalke-fuchs75}, $k<2\upi M$. For the $M=0.9$ jet studied by \cite{jordan-schlegel-stalnov-noack-tinney07}, for example, this yields $k\lesssim5.7$ and no more than about six line source modes radiate from the axisymmetric source modes at the highest frequency of interest. This estimate would need to be modified to take account of axial interference as in the far field analysis of \cite{michel09} but does offer the possibility of establishing some reasonable limits on the detail to be expected from acoustic measurements on jets and the requirements for low order models used in noise control. \section{Conclusions} \label{sec:conclusions} The radiation properties of disc sources of arbitrary radial variation have been analyzed to establish the part of the source which radiates into the acoustic field, without recourse to a far field approximation. Limits have been established on the number of degrees of freedom of the part of the source which radiates and the implications of these limits have been discussed for the problems of rotor noise and studies of source mechanisms in jets. Future work will consider the use of the findings of this paper to study the radiating portion of full jet source distributions, including axial interference effects.
\section{Analysis} \subsection{Correctness}\label{sec:correctness} The private training protocol does not alter any of the computations of the original training algorithm and therefore results in the same output. The additive randomization $r_i$ introduced in Step 3 is removed in Step 6, leaving the results unchanged. Similarly, the multiplicative randomization $q_i$ introduced in Step 7 is removed in Step 9. As discussed in Section~\ref{sec:homomorphic-encryption}, the only source of error is the truncation of less significant digits in the finite precision representation of real numbers. In practice, we observe that the error in computing the weight vector $w$ is negligibly small and does not result in any loss of accuracy. \input{security} \subsection{Complexity}\label{sec:complexity} We analyze the encryption/decryption and the data transmission costs for a single execution of the protocol, as these consume the vast majority of the running time. There are 6 steps of the protocol where encryption or decryption operations are carried out. \begin{enumerate} \item In Step 1, Bob encrypts the $d$-dimensional vector $w_{(t)}$. \item In Step 3, Alice encrypts the $n$ random numbers $r_i$. \item In Step 4, Bob decrypts the $n$ inner products obtained from Alice. \item In Step 5, Bob encrypts the exponentiation of the $n$ inner products. \item In Step 8, Bob decrypts, takes a reciprocal, and encrypts the $n$ multiplicatively scaled quantities. \item In Step 12, Bob decrypts the $d$-dimensional updated weight vector obtained from Alice. \end{enumerate} \noindent \textbf{Total:} $3n + 2d$ encryptions and decryptions.~\\ Similarly, there are 6 steps of the protocol where Alice and Bob transfer data to each other. \begin{enumerate} \item In Step 1, Bob transfers the $d$-dimensional vector $w_{(t)}$ to Alice. \item In Step 3, Alice transfers $n$ randomized inner products to Bob. \item In Step 5, Bob transfers the $n$ encrypted exponentials to Alice. 
\item In Step 7, Alice transfers $n$ scaled quantities to Bob. \item In Step 8, Bob transfers the $n$ encrypted reciprocals to Alice. \item In Step 11, Alice transfers the $d$-dimensional encrypted updated weight vector to Bob. \end{enumerate} \textbf{Total:} Transmitting $4n + 2d$ elements.~\\ The speed of performing the encryption and decryption operations depends directly on the size of the key of the cryptosystem. Similarly, when we are transferring encrypted data, the size of an individual element also depends on the size of the encryption key. As the security of the encryption function is largely determined by the size of the encryption key, this reflects a direct trade-off between security and efficiency. \section{Conclusion} We developed protocols for training and evaluating a logistic regression based spam filtering classifier over emails belonging to multiple parties while preserving the privacy constraints. We presented an information theoretic analysis of the security of the protocol and also found that both the encryption/decryption and data transmission costs of the protocol are linear in the number of training instances and the dimensionality of the data. We also experimented with a prototype implementation of the protocol on a large scale email dataset and demonstrated that our protocol is able to achieve close to state of the art performance in a feasible amount of execution time. Future directions of this work include applying our methods to other spam filtering classification algorithms. We also plan to extend our protocols to make extensive use of parallel architectures such as GPUs to further increase the speed and scalability. \section{Experiments}\label{sec:experiments} We provide an experimental evaluation of our approach for the task of email spam filtering. The privacy preserving training protocol requires a substantially larger running time as compared to the non-private algorithm. 
In this section, we analyze the training protocol for running time and accuracy. As the execution of the protocol on the original dataset requires an infeasible amount of time, we show how data-independent dimensionality reduction can be used to effectively reduce the running time while still achieving comparable accuracy. As is conventional in spam filtering research, we report AUC scores.\footnote{Area under the ROC curve.} It is considered a more appropriate metric for this task than alternatives such as classification accuracy or F-measure because it averages the performance of the classifier over different precision-recall operating points, which correspond to different thresholds on the prediction confidence of the classifier. The AUC score of a random classifier is 0.5 and that of a perfect classifier is 1. We compared the AUC performance of the classifier given by the privacy preserving training protocol with the non-private training algorithm and in all cases the numbers were identical up to five significant digits. Therefore, the error due to the finite precision representation mentioned in Section~\ref{sec:correctness} is negligible for practical purposes. \begin{table} \centering \caption{Email spam dataset summary.} \label{tab:dataset} \begin{tabular}{|c|c|c|c|} \hline Section & Spam & Non-spam & Total \\ \hline Training & 2466 (82\%) & 534 (18\%) & 3000 \\ \hline Testing & 2383 (79\%) & 617 (21\%) & 3000 \\ \hline \end{tabular} \end{table} \subsection{Email Spam Dataset} We used the public spam email corpus from the CEAS 2008 spam filtering challenge.\footnote{The dataset is available at \url{http://plg.uwaterloo.ca/~gvcormac/ceascorpus/}. The part of the dataset we have used corresponds to the \texttt{pretrain-nofeedback} task.} For generality, we refer to emails as documents. Performance of various algorithms on this dataset is reported in \cite{emailspam}. 
The dataset consists of 3,067 training and 206,207 testing documents manually labeled as spam or ham (\emph{i.e.}, not spam). To simplify the benchmark calculations, we used the first 3000 documents from each set (Table~\ref{tab:dataset}). The accuracy of the baseline majority classifier, which labels all documents as spam, is 0.79433. \subsection{Spam Filter Implementation}\label{sec:filterimpl} Our classification approach is based on online logistic regression \cite{Goodman06}, as described in Section \ref{sec:lr}. The features are overlapping character four-grams which are extracted from the documents by a sliding window of four characters. The features are binary, indicating the presence or absence of the given four-gram. The documents are in \texttt{ASCII} or \texttt{UTF-8} encoding, which represents each character in 8 bits, so the space of possible four-gram features is $2^{32}$. Following previous work, we used modulo $10^{6}$ to reduce the four-gram feature space to one million features, and only the first 35 \texttt{KB} of each document is used to compute the features. For all experiments, we use a step size of $\eta=0.001$, with no regularization and no noise of the kind required for differential privacy. 
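The feature extraction and online update described above can be sketched as follows (Python; the byte-packing hash is one concrete choice of $2^{32}$-to-$10^{6}$ reduction and all names are ours, not the exact prototype code):

```python
import math

NUM_FEATURES = 10 ** 6  # hash space after the modulo reduction
ETA = 0.001             # step size used in the experiments

def extract_features(text):
    # Overlapping character four-grams mapped from the raw 2^32 space
    # into 10^6 binary features; presence only, so a set suffices.
    return {int.from_bytes(text[i:i + 4].encode("utf-8"), "big") % NUM_FEATURES
            for i in range(len(text) - 3)}

def predict(w, feats):
    # Logistic response on binary features: sigma(sum of active weights)
    return 1.0 / (1.0 + math.exp(-sum(w.get(j, 0.0) for j in feats)))

def train_step(w, feats, label):
    # Online gradient step: w_j <- w_j + eta * (y - p) for active features
    p = predict(w, feats)
    for j in feats:
        w[j] = w.get(j, 0.0) + ETA * (label - p)
```

Storing the weights sparsely in a dictionary keeps the update cost proportional to the number of four-grams in the document rather than to the full feature space.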
\begin{table}[ht] \centering \caption{Running time comparison of online training of logistic regression (LR) and the privacy preserving logistic regression (PPLR) for one document.} \label{tab:speed} \begin{tabular}{|c|r|r|} \hline Feature Count & LR & PPLR\\ \hline Original: $10^{6}$ & 0.5 s & 1.14 hours \\ \hline Reduced: $10^{4}$ & 5 ms & 41 s \\ \hline \end{tabular} \end{table} \begin{table}[ht] \centering \caption{Running time of privacy preserving logistic regression for one document of $10^4$ features with different encryption key sizes.} \label{tab:encspeed} \begin{tabular}{|c|r|} \hline Encryption Key Size & Time\\ \hline 256 bit & 41 s \\ \hline 1024 bit & 2013 s\\ \hline \end{tabular} \end{table} \begin{table}[ht] \centering \caption{Time requirement for steps of the protocol for random matrices of the dimensions shown (documents $\times$ features).} \label{tab:steptime} \begin{tabular}{|c|r|r|} \hline Steps&Time (s) - 200$\times$20&Time (s) - 200$\times$100\\ \hline 1&0.06&0.31\\ \hline 2, 3&2.59&10.14\\ \hline 4, 5&0.82&0.73\\ \hline 6, 7&0.46&0.41\\ \hline 8&0.84&0.73\\ \hline 9, 10&1.81&8.33\\ \hline 11&0.05&0.18\\ \hline\hline Total&6.61&20.81\\ \hline \end{tabular} \end{table} \subsection{Protocol Implementation} We created a prototype implementation of the protocol in C++ and used the variable precision arithmetic libraries provided by OpenSSL~\cite{openssl} to implement the Paillier cryptosystem. We used the GSL libraries~\cite{gsl} for matrix operations. We performed the experiments on a 3.2 GHz Intel Pentium 4 machine with 2 GB RAM running 64-bit Ubuntu. The original dataset has $10^6$ features, as described in Section~\ref{sec:filterimpl}. Consistent with the complexity analysis of the training protocol (Section~\ref{sec:complexity}), we observed that the time required for the training protocol is linear in the number of documents and the number of features. 
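The observed linearity matches the operation counts from the complexity analysis. As a quick sketch of the cost model (function names are ours):

```python
def crypto_ops(n_docs, n_features):
    # Encryptions/decryptions per protocol execution: 3n + 2d
    return 3 * n_docs + 2 * n_features

def transmitted(n_docs, n_features):
    # Ciphertexts exchanged between Alice and Bob per execution: 4n + 2d
    return 4 * n_docs + 2 * n_features
```

Both counts scale linearly and homogeneously, so doubling the number of documents and features doubles the cryptographic work.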
Table~\ref{tab:speed} compares the time required to train a logistic regression classifier with and without the privacy preserving protocol using 256-bit encryption for one document. It can be seen that the protocol is slower than the non-private version by a factor of $10^4$, mainly due to the encryption in each step of the protocol. Also, we observe that the running time is drastically reduced with the dimensionality reduction. While the execution time for the training protocol over the original feature space would be infeasible for most applications, the execution time for the reduced feature space is seen to be usable in spam filtering applications. This motivated us to consider various dimensionality reduction schemes, which we discuss in Section \ref{sec:dimensionality}. To further analyze the behavior of various steps of the protocol, in Table~\ref{tab:steptime} we report the running time of individual steps of the protocol outlined in Section~\ref{sec:training-protocol} on two test datasets of random vectors. It can be observed that encryption is the main bottleneck among the operations in the protocol. We use the Paillier cryptosystem with 256-bit keys in the following experiments. As shown in Table~\ref{tab:encspeed}, using the more secure 1024-bit encryption keys resulted in a slowdown by a factor of about 50 as compared to using 256-bit encryption keys. This is a constant factor which can be applied to all our timing results if the stronger level of security provided by 1024-bit keys is desired. Using a pre-computed value of the encrypted weight vector $E[w]$, the private evaluation protocol took 210.956 seconds for one document using $10^6$ features and 2.059 seconds for one document using $10^4$ features, which again highlights the necessity of dimensionality reduction to make the private computation feasible. 
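The additive homomorphism that the protocol relies on can be illustrated with a toy Paillier implementation (Python; the primes are far too small for real security and stand in for the OpenSSL big-number version used in our prototype):

```python
import math
import random

def paillier_keygen(p=1_000_003, q=1_000_033):
    # Toy primes for illustration only; real keys are 256-1024+ bits.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # the standard simple choice of generator
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(2, n)     # random blinding factor coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
pk, sk = paillier_keygen()
total = (encrypt(pk, 12) * encrypt(pk, 30)) % (pk[0] ** 2)
```

Decrypting `total` recovers $12+30=42$ without either summand ever being revealed in the clear, which is exactly the property the protocol steps exploit.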
\subsection{Dimensionality Reduction} \label{sec:dimensionality} Since the time requirement of the privacy preserving protocol varies linearly with the data dimensionality, we can improve it by dimensionality reduction, principally because data with fewer features require fewer encryptions and decryptions. On the other hand, reducing the dimensionality of the features, particularly for sparse features such as $n$-gram counts, can affect the classification performance. We study this behavior by experimenting with six different dimensionality reduction techniques, comparing the running time and AUC of the classifier learned by the training protocol. We consider PCA, which is a data-dependent dimensionality reduction technique, and five others which are data-independent. The latter techniques are much more suitable in our setting as they can be used by multiple parties on their individual documents without violating privacy. \begin{table}[ht] \centering \caption{Performance of PCA for dimensionality reduction.} \label{tab:pca} \begin{tabular}{|c|r|r|} \hline Dimension&Time (s) & AUC~~ \\ \hline 5&18&0.96159\\ \hline 10&37&0.99798\\ \hline 50&242&0.99944\\ \hline 100&599&0.99967\\ \hline 300&5949&0.99981\\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Time and space requirement for dimensionality reduction methods for reduction from $10^6$ to $10^4$ features.} \label{tab:drtimespace} \begin{tabular}{|c|c|c|} \hline Method & Time (s)& Space (GB) \\ \hline PCA & 7 $\times 10^6$&41\\ \hline LSH & 50 $\times 10^3$&40\\ \hline Hash Space & 41 & -- \\ \hline Document Frequency&1& -- \\ \hline Sample Uniform&2& -- \\ \hline Sample Multinomial&490& -- \\ \hline \end{tabular} \end{table} \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figures/Figure1} \caption{Time comparison for the dimensionality reduction approaches when reducing from $10^6$ to $10^4$ dimensions.} \label{fig:timeauc} \end{figure} \begin{figure}[ht] 
\centering \includegraphics[width=.9\columnwidth]{figures/Figure2-matlab} \caption{Performance of one iteration of logistic regression training on 300 dimensional PCA feature vectors with different batch sizes.} \label{fig:batching} \end{figure} \begin{enumerate} \item {\bf Principal Component Analysis (PCA):} PCA is perhaps the most commonly used dimensionality reduction technique; it computes the lower dimensional projection of the data based on the most dominant eigenvectors of the covariance matrix of the original data. Since we only compute a small number of eigenvectors, PCA is efficient for our sparse binary dataset. Table~\ref{tab:pca} summarizes the running time and the AUC of the classifier trained on the reduced dimension data. While the performance of PCA is excellent, it has the following disadvantages, motivating us to look at other techniques. \begin{enumerate} \item When training in a multiparty setting, all the parties are required to use a common feature representation. Among the methods we considered, only PCA computes a projection matrix which is data dependent. This projection matrix cannot be computed over the private training data because it reveals information about the data. \item For many classification tasks, reduction to an extremely small subspace hurts the performance much more significantly than in our case. Furthermore, computing PCA with high dimensional data is not efficient, and we are interested in efficient and scalable dimensionality reduction techniques. \end{enumerate} \item {\bf Locality Sensitive Hashing (LSH):} In LSH~\cite{lsh}, we choose $k$ random hyperplanes in the original $d$ dimensional space, each of which represents one dimension in the target space. The reduced dimensions are binary and indicate the side of the hyperplane on which the original point lies. \item {\bf Hash Space Reduction:} As mentioned in Section \ref{sec:filterimpl}, we reduce the original feature space by mapping feature indices modulo $10^{6}$. 
We experimented with different sizes of this hash space. \item {\bf Document Frequency Based Pruning:} We select features which occur in at least $k$ documents. This is a common approach for removing rarely-occurring features, although some of those features could be discriminative, especially in a spam filtering task. \item {\bf Uniform Sampling:} In this approach, we draw from the uniform distribution until the desired number of unique features is selected. \item {\bf Multinomial Sampling:} This approach is similar to the uniform sampling approach except that we first fit a multinomial distribution based on the document frequency of the features and then draw from this distribution. This biases the sampling toward features with higher variance, which are often the more informative features. \end{enumerate} We ran each of these algorithms on 6000 documents of $10^6$ dimensions. Table~\ref{tab:drtimespace} summarizes the time and space requirement of each algorithm for reducing the dimensionality to $10^4$. We trained the logistic regression classifier on 3000 training documents with various reduced dimensions and measured the running time and AUC of the learned classifier on the 3000 test documents. The results are shown in Figure~\ref{fig:timeauc}. We observe that data independent dimensionality reduction techniques such as LSH, multinomial sampling, and hash space reduction achieve close to perfect AUC. \subsubsection*{Classifier Performance for Varying Batch Size} As we discussed in Section~\ref{sec:security}, another important requirement of our protocol is to train on batches of documents rather than on one document at a time. We have shown that the extra information gained by Bob about any party's data decreases with increasing batch size. On the other hand, increasing the batch size gives the optimization procedure of the training algorithm fewer chances to correct itself in a single pass over the entire training dataset. 
In Figure~\ref{fig:batching}, we see that the trade-off in AUC is negligible even with batch sizes of around 1000 documents. \subsection{Parallel Processing} An alternative approach to address the performance issue is parallelization. We experimented with a multi-threaded implementation of the algorithm. On average, we observed a 6.3\% speed improvement on a single core machine. We expect the improvement to be more significant on a multi-core architecture. A similar scheme can be used to parallelize the protocol across a cluster of machines, such as in a MapReduce framework. In both of these cases, the accuracy of the online algorithm will decrease slightly as the number of threads or machines increases, because the gradient $\nabla L(w_{(t)},x,y)$ computed in each of the parallel processes is based on an older value of the weight vector $w_{(t)}$. A more promising approach which does not impact the accuracy is encrypting vectors in parallel. In the present implementation of the protocol, we encrypt vectors serially, and the procedure used for the individual elements is identical. We can potentially reduce the encryption time of a feature vector substantially by using a parallel processing infrastructure such as GPUs. We leave experiments with such an implementation for future work. \subsection{Homomorphic Encryption} \label{sec:homomorphic-encryption} In a homomorphic cryptosystem, operations performed on encrypted data (\emph{ciphertext}) map to corresponding operations performed on the original unencrypted data (\emph{plaintext}). If $+$ and $\cdot$ are two operators and $x$ and $y$ are two plaintexts, a homomorphic encryption function $E$ satisfies \[ E[x] \cdot E[y] = E[x + y]. \] This allows one party to encrypt the data using a homomorphic encryption scheme and another party to perform operations on it without being able to observe the plaintext data. This property forms the fundamental building block of our privacy preserving protocol. 
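For concreteness, the additive homomorphism can be exercised with a toy Paillier implementation; the primes below are tiny values chosen only for illustration and provide no security.

```python
import math
import random

# Toy Paillier cryptosystem (tiny illustrative primes -- NOT secure).
def keygen(p=293, q=433):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1                                          # standard choice of g
    mu = pow(lam, -1, n)       # valid because L(g^lam mod n^2) = lam mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                         # r must be invertible
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    ell = (pow(c, lam, n * n) - 1) // n                # L(u) = (u - 1) / n
    return (ell * mu) % n

pub, priv = keygen()
n_sq = pub[0] ** 2
cx, cy = encrypt(pub, 12), encrypt(pub, 30)
assert decrypt(pub, priv, (cx * cy) % n_sq) == 42      # E[x] E[y] = E[x + y]
assert decrypt(pub, priv, pow(cx, 5, n_sq)) == 60      # E[x]^y  = E[x y]
```

The two assertions correspond directly to the additive property and its corollary used throughout the protocol.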
In this work we use the additively homomorphic Paillier cryptosystem~\cite{Paillier99}, which also satisfies semantic security. The Paillier key generation algorithm produces a pair of $b$-bit numbers $(N,g)$ constituting the public key corresponding to the encryption function $E : \mathbb{Z}_N \mapsto \mathbb{Z}_{N^2}$, and another pair of $b$-bit numbers $(\lambda,\mu)$ constituting the private key corresponding to the decryption function $D : \mathbb{Z}_{N^2} \mapsto \mathbb{Z}_N$. Given a plaintext $x \in \mathbb{Z}_N$, the encrypted text is given by: \[ E[x] = g^x r^N \mod N^2, \] where $r$ is a random number sampled uniformly from the invertible elements of $\mathbb{Z}_N$. Using a different value of the random number $r$ provides semantic security, \emph{i.e.}, two different encryptions of a number $x$, say $E[x;r_1]$ and $E[x;r_2]$, will have different values, but decrypting each of them will result in the same number $x$. It can be easily verified that the above encryption function satisfies the following properties: \begin{enumerate} \item For any two ciphertexts $E[x]$ and $E[y]$, \[ E[x] ~ E[y] \mod N^2 = E[x + y\mod N]. \] \item And as a corollary, for any ciphertext $E[x]$ and plaintext $y$, \[ E[x]^y \mod N^2 = E[x ~ y\mod N]. \] \end{enumerate} \subsubsection*{Extending the Encryption Function to Real Numbers} Paillier encryption, like most other cryptosystems, is defined over the finite ring $\mathbb{Z}_N=\{0,\ldots,N-1\}$. However, in our protocol we need to encrypt real numbers, such as the training data and model parameters. We make the following modifications to the encryption function to support this. \begin{enumerate} \item Real numbers are converted to a fixed precision representation. For a large constant $C$, a real number $x$ is represented as $\lfloor Cx \rfloor=\bar{x}$. \[ E[\bar{x}] = E[\lfloor Cx \rfloor], \quad D[E[\bar{x}]]/C = \lfloor Cx \rfloor/C \approx x. \] \item The encryption of a negative integer is represented by the encryption of its modular additive inverse. 
If $-x$ is a negative integer, \[ E[-x] = E[N-x]. \] \item Exponentiation of an encrypted number by a negative integer is represented as the exponentiation of the multiplicative inverse of the ciphertext in the ring $\mathbb{Z}_{N^2}$ by the corresponding positive integer. We represent the exponentiation\footnote{We slightly abuse the notation to represent the non-modular exponentiation of the ciphertext by $E[x]^{a}$ to refer to $E[x] \cdot E[x] \cdots $ ($a$ times).} of the ciphertext $E[x]$ by a negative integer $-y$ as \[ E[x]^{-y} = \left(E[x]^{-1} \mod N^2\right)^y. \] \end{enumerate} Representing real numbers by fixed precision numbers introduces a small truncation error, which is inversely proportional to the value of $C$. This representation also reduces the domain of the encryption function from $\{0,\ldots,N-1\}$ to $\{0,\ldots,\lfloor\frac{N-1}{C}\rfloor\}$. We need to ensure that the results of homomorphic operations on encrypted numbers do not overflow this range, so we need to increase the bit-size $b$ of the encryption keys proportionally with $C$. As the computational cost of the encryption operations is also proportional to $b$, this creates a trade-off between accuracy and computation cost. The representation of negative integers, on the other hand, does not introduce any error, but further halves the domain of the encryption function from $\{0,\ldots,\lfloor\frac{N-1}{C}\rfloor\}$ to $\{0,\ldots,\lfloor\frac{N-1}{2C}\rfloor\}$, which we denote by $\mathcal{D}$. \subsection{Classification Model: Logistic Regression in the Batch and Online Settings}\label{sec:lr} The training dataset consisting of $n$ documents classified by the user as spam or ham (\emph{i.e.}, not spam) is represented as the labeled data instances $(x,y)=\{(x_1,y_1),\ldots,(x_n,y_n)\}$, where $x_i \in \mathbb{R}^d$ and $y_i \in \{-1,1\}$. In the batch learning setting, we assume that the complete dataset is available at a given time. 
In the logistic regression classification algorithm, we model the class probabilities by a sigmoid function \[ P(y_i \mid x_i,w) = \frac{1}{1+e^{-y_iw^Tx_i}}. \] We denote the log-likelihood for the weight vector $w$ computed over the data instances $(x,y)$ by $L(w,x,y)$. Assuming the data instances to be i.i.d., the data log-likelihood $L(w,x,y)$ is equal to \[ L(w,x,y) = \log \prod_i \frac{1}{1+e^{-y_iw^Tx_i}} = -\sum_i\log[1+e^{-y_iw^Tx_i}]. \] We maximize the data log-likelihood $L(w,x,y)$ using gradient ascent to obtain the classifier with the optimal weight vector $w^*$. Starting with a uniformly initialized vector $w_{(0)}$, in the $t^{\rm th}$ iteration, we update $w_{(t)}$ as \begin{align} w_{(t+1)} = w_{(t)} + \eta \nabla L(w_{(t)},x,y) = w_{(t)} + \eta \sum_i \frac{y_ix_i^T}{1+e^{y_iw_{(t)}^Tx_i}}, \end{align} where $\eta$ is the pre-defined step size. We terminate the procedure on convergence between consecutive values of $w_{(t)}$. In the online learning setting, the data instances are obtained incrementally rather than being completely available at a given instance of time. In this case, we start with a model with the uniformly random weight vector $w_{(0)}$. A model $w_{(t)}$ learned using the first $t$ instances is updated after observing a small block of $k$ instances, with the gradient of the log-likelihood computed over that block. \section{Privacy Preserving Classifier \\ Training and Evaluation}\label{sec:protocol} \subsection{Data Setup and Privacy Conditions} We define the party ``Bob'' who is interested in training a logistic regression classifier with weight vector $w\in\mathbb{R}^d$. In the online learning setting, multiple users interact with Bob at one time using their private training data as input. 
As all these parties play the same role in their interactions with Bob in one update step, we represent them by a generic user ``Alice''. Later on we see how Bob privately aggregates the encrypted gradients provided by individual parties. Alice has a sequence of labeled training data instances $(x,y)=\{(x_1,y_1),\ldots,(x_n,y_n)\}$. Bob is interested in training a logistic regression classifier with weight vector $w\in\mathbb{R}^d$ over $(x,y)$ as discussed in Section~\ref{sec:lr}. The privacy constraint implies that Alice should not be able to observe $w$ and Bob should not be able to observe $(x_i,y_i)$. The parties are assumed to be \emph{semi-malicious}, \emph{i.e.}, they correctly execute the steps of the protocol and do not attempt to cheat by using fraudulent data as input in order to extract additional information about the other parties. The parties are assumed to be curious, \emph{i.e.}, they keep a transcript of all intermediate results and can use that to gain as much information as possible. \subsection{Private Training Protocol}\label{sec:training-protocol} Bob generates a public and private key pair for a $b$-bit Paillier cryptosystem and provides the public key to Alice. In this cryptosystem, Bob is able to perform both encryption and decryption operations while Alice can perform only encryption. As mentioned before, we use the homomorphic properties of Paillier encryption to allow the parties to perform computations using private data. The update rule requires Bob to compute the gradient of the data log-likelihood function $\nabla L(w_{(t)},x,y)$ which involves exponentiation and division and cannot be done using only homomorphic additions and multiplications. We supplement the homomorphic operations with Bob performing those operations on multiplicative shares to maintain the privacy constraints. 
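The share arithmetic underlying these operations can be sanity-checked in plaintext (a simulation with all encryption omitted): an additive blinding term lets Bob exponentiate his share of an inner product, and a multiplicative blinding term lets him invert a blinded quantity, with Alice cancelling the blinds.

```python
import math
import random

# Plaintext simulation of the blinding algebra (encryption omitted;
# only the share arithmetic is checked). z stands for an inner product
# y_i * w^T x_i; r and q are Alice's blinding terms.
z = 1.7
r = random.uniform(-10.0, 10.0)          # additive blind
bob_share = z - r                        # the value Bob would decrypt

# Exponentiation on additive shares: e^(z - r) * e^r = e^z
exp_z = math.exp(bob_share) * math.exp(r)
assert abs(exp_z - math.exp(z)) < 1e-9

# Reciprocal via a multiplicative blind: Bob inverts q*(1 + e^z),
# and Alice cancels q to recover the logistic term 1/(1 + e^z).
q = random.uniform(1.0, 100.0)
bob_reciprocal = 1.0 / (q * (1.0 + exp_z))
logistic_term = bob_reciprocal * q
assert abs(logistic_term - 1.0 / (1.0 + math.exp(z))) < 1e-9
```

In the actual protocol these same identities are applied to ciphertexts via the homomorphic operations, so neither party sees the unblinded intermediate values.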
As mentioned in Section~\ref{sec:homomorphic-encryption}, the domain of the encryption function is $\mathcal{D}=\{0,\ldots,\lfloor\frac{N-1}{2C}\rfloor\}$. We sample the randomizations uniformly from this set. Bob initiates the protocol with a uniform $w_{(0)}$, and the gradient step size $\eta$ is publicly known. We describe the $t^{\rm th}$ iteration of the protocol below.~\\ \noindent \textbf{Input:} Alice has $(x,y)$ and the encryption key, \\ Bob has $w_{(t)}$ and both encryption and decryption keys. \\ \textbf{Output:} Bob has $w_{(t+1)}$. \begin{enumerate} \item\label{step:init} Bob encrypts $w_{(t)}$ and transfers $E[w_{(t)}]$ to Alice. \item For each training instance $x_i$, $i=1,\ldots,n$, Alice computes \[ \prod_{j=1}^d E[w_{(t)j}]^{y_ix_{ij}} = E\left[\sum_{j=1}^d y_iw_{(t)j}x_{ij}\right] = E\left[y_i w_{(t)}^Tx_{i}\right]. \] \item Alice samples $n$ numbers $r_1,\ldots,r_n$ uniformly from $\{1,\ldots,N-1\}$ and computes \[ E\left[y_i w_{(t)}^Tx_{i}\right] \cdot E[-r_i] = E\left[y_i w_{(t)}^Tx_{i} -r_i\right]. \] Alice transfers $E\left[y_i w_{(t)}^Tx_{i} -r_i\right]$ to Bob. \item Bob decrypts this to obtain $y_i w_{(t)}^Tx_{i} -r_i$. In this way, Alice and Bob hold additive shares of the inner products $y_i w_{(t)}^Tx_{i}$. \item Bob exponentiates and encrypts his shares of the inner products. He transfers $E\left[e^{y_i w_{(t)}^Tx_{i}-r_i}\right]$ to Alice. \item Alice homomorphically multiplies the quantities she obtained from Bob by the exponentiations of her corresponding random shares to obtain the encryption of the exponentiations of the inner products.\footnote{In some cases, the exponentiation might cause the plaintext to overflow the domain of the encryption function. This can be handled by computing the sigmoid function homomorphically using a piecewise linear sum of components.} \[ E\left[e^{y_i w_{(t)}^Tx_{i}-r_i}\right]^{e^{r_i}} = E\left[e^{y_i w_{(t)}^Tx_{i}}\right]. 
\] Alice homomorphically adds $E[1]$ to these quantities to obtain $E\left[1+e^{y_i w_{(t)}^Tx_{i}}\right]$. \item Alice samples $n$ numbers $q_1,\ldots,q_n$ from $\mathcal{D}$ using a bounded power law distribution\footnote{We require that $q$ has the pdf $P(q) \propto 1/q$ for $1 \le q \le |\mathcal{D}|$. $q$ can be generated using inverse transform sampling. We discuss the reasons for this in Section~\ref{sec:security}.}. She then homomorphically computes \[ E\left[1+e^{y_iw_{(t)}^Tx_{i}}\right]^{q_i} = E\left[q_i\left(1+e^{y_iw_{(t)}^Tx_{i}}\right)\right]. \] She transfers these quantities to Bob. \item Bob decrypts these quantities and computes the reciprocals $\frac{1}{q_i\left(1+e^{y_iw_{(t)}^Tx_{i}}\right)}$. He then encrypts the reciprocals and sends them to Alice. \item Alice homomorphically multiplies the encrypted reciprocals by $q_i$ to cancel out her multiplicative share. \[ E\left[\frac{1}{q_i\left(1+e^{y_iw_{(t)}^Tx_{i}}\right)}\right]^{q_i} = E\left[\frac{1}{1+e^{y_iw_{(t)}^Tx_{i}}}\right]. \] \item Alice then homomorphically multiplies the encrypted reciprocal by each component of $y_ix_i^T$ to obtain the encrypted $d$-dimensional vector \[ E\left[\frac{1}{1+e^{y_iw_{(t)}^Tx_{i}}}\right]^{y_ix_i^T} = E\left[\frac{y_ix_i^T}{1+e^{y_iw_{(t)}^Tx_i}}\right]. \] She homomorphically adds these vectors over all $i$ to obtain \[ \prod_i E\left[\frac{y_ix_i^T}{1+e^{y_iw_{(t)}^Tx_{i}}}\right] = E\left[\sum_i \frac{y_ix_i^T}{1+e^{y_iw_{(t)}^Tx_{i}}}\right]. \] This is the encrypted gradient vector $E\left[\nabla L(w_{(t)},x,y)\right]$. \item\label{step:update} Alice homomorphically updates the encrypted weight vector she obtained in Step \ref{step:init} with the gradient. \begin{align*} E[w_{(t+1)}] &= E[w_{(t)}] ~ E\left[\nabla L(w_{(t)},x,y)\right]^\eta \\ &= E\left[w_{(t)} + \eta \nabla L(w_{(t)},x,y)\right]. \end{align*} \item Alice then sends the updated weight vector $E[w_{(t+1)}]$ to Bob, who decrypts it to obtain his output. 
\end{enumerate} In this way, Bob is able to update his weight vector using Alice's data while maintaining the privacy constraints. In the batch setting, Alice and Bob repeat Steps 2 to 11 to perform the iterative gradient ascent. Bob can check for convergence in the value of $w$ between iterations by performing Step 12. In the online setting, Alice and Bob execute the protocol only once, with Alice using a typically small block of $k$ data instances as input. \subsubsection*{Extensions to the Training Protocol} \begin{enumerate} \item \emph{Training on private data horizontally split across multiple parties.} In the online setting we do not make any assumption about which data holding party is participating in the protocol. Just as Alice uses her data to update $w$ privately, other parties can then use their data to perform the online update using the same protocol. In the batch setting, multiple parties can execute one iteration of the protocol individually with Bob to compute the encrypted gradient on their own data. Finally, Bob can receive the encrypted gradients from all the parties and update the weight vector as follows. \[ w_{(t+1)} = w_{(t)} + \eta \sum_k \nabla L(w_{(t)},x^k,y^k), \] where $(x^1,y^1),\ldots,(x^K,y^K)$ are the individual datasets belonging to the $K$ parties. \item \emph{Training a regularized classifier.} The protocol can easily be extended to introduce $\ell_2$ regularization, which is a commonly used method to prevent over-fitting. In this case the update rule becomes \[ w_{(t+1)} = w_{(t)} + \eta \nabla L(w_{(t)},x,y) - 2 \eta \lambda w_{(t)}, \] where $\lambda$ is the regularization parameter. This can be accommodated by Alice homomorphically adding the term $-2 \eta \lambda w_{(t)}$ to the update in Step 11. \begin{align*} E[w_{(t+1)}] &= E[w_{(t)}]^{1-2\eta\lambda} ~ E\left[\nabla L(w_{(t)},x,y)\right]^\eta \\ &= E\left[(1-2\eta\lambda)w_{(t)} + \eta \nabla L(w_{(t)},x,y)\right]. 
\end{align*} In order to identify the appropriate value of $\lambda$ to use, Alice and Bob can perform $m$-fold cross-validation by repeatedly executing the private training and evaluation protocols over different subsets of the data belonging to Alice. \end{enumerate} \subsection{Private Evaluation Protocol}\label{sec:evaluation-protocol} Another party ``Carol'' having one test data instance $x' \in \mathbb{R}^d$ is interested in applying the classification model with weight vector $w$ belonging to Bob. Here, the privacy constraints require that Bob should not be able to observe $x'$ and Carol should not be able to observe $w$. Similar to the training protocol, Bob generates a public and private key pair for a $b$-bit Paillier cryptosystem and provides the public key to Carol. In order to label the data instance as $y'=1$, Carol needs to check if $P(y'=1|x',w) = \frac{1}{1+e^{-w^Tx'}} > \frac{1}{2}$, and vice-versa for $y'=-1$. This is equivalent to checking if $w^Tx' > 0$. We develop the following protocol for this purpose.~\\ \noindent \textbf{Input:} Bob has $w$ and generates a public-private key pair. \\ Carol has $x'$ and Bob's public key. \\ \textbf{Output:} Carol knows if $w^Tx' > 0$. \begin{enumerate} \item Bob encrypts $w$ and transfers $E[w]$ to Carol. \item Carol homomorphically computes the encrypted inner product. \[ \prod_{j=1}^d E[w_j]^{x_j'} = E\left[\sum_{j=1}^d w_jx_j'\right] = E\left[w^Tx'\right]. \] \item Carol generates a random number $r$ and homomorphically computes $E\left[w^Tx' - r\right]$, which she sends to Bob. \item Bob decrypts it to obtain his additive share $w^Tx'-r$. Let us denote it by $-s$, so that $r - s = w^Tx'$. \item Bob and Carol execute a variant of the secure millionaire protocol~\cite{Yao82} with inputs $r$ and $s$, and both learn whether $r > s$. \\ If $r > s$, Carol concludes $w^Tx' > 0$, and if $r < s$, she concludes $w^Tx' < 0$. 
\end{enumerate} In this way, Carol and Bob are able to perform the classification operation while maintaining the privacy constraints. If Bob has to repeatedly execute the same protocol, he can pre-compute $E[w]$ to be used in Step 1. \section{Related Work} Email spam filtering is a well established area of research. The accuracy of the best systems in the 2007 CEAS spam filtering competition was better than 0.9999~\cite{Cormack07}. Our system is an online logistic regression classifier inspired by \cite{Goodman06}, which, on application to binary character four-gram features, was shown to have near state of the art accuracy~\cite{Cormack07}. The application of privacy preserving techniques to large scale real world problems of practical importance, such as spam filtering, is an emerging area of research. Li et al.~\cite{Li2009} present a distributed framework for privacy aware spam filtering. Their method is based on applying a one-way fingerprinting transformation~\cite{shingles} to the message text and comparing two emails using a Hamming distance metric, and does not involve statistical learning. Additionally, this method requires that the spam emails belonging to all users be revealed, which does not match our privacy criteria. We consider all emails to be private, as the nature of the spam emails a user receives might be correlated to the user's online and offline activities. There has also been recent work on constructing privacy preserving protocols for general data mining tasks, including decision trees~\cite{VaidyaCKP08}, clustering~\cite{LinCZ05}, naive Bayes~\cite{VaidyaKC08}, and support vector machines~\cite{jvppsvm}. To the best of our knowledge, this paper is the first to describe a practical privacy-preserving framework using a logistic regression classifier applied to a real world spam filtering task. 
\subsection{Security}\label{sec:security} The principal requirement of a valid secure multiparty computation (SMC) protocol is that no party should learn anything about the input data provided by the other parties apart from what can be inferred from the result of the computation itself. As we mentioned earlier, we assume that the parties are semi-malicious. From this perspective, it can be seen that the private training protocol (Section~\ref{sec:training-protocol}) is demonstrably secure. ~\\ \noindent{\em Alice/Carol:} In the private training protocol, Alice only observes encrypted inputs from Bob and hence she does not learn anything about the weight vector used by Bob. In the private classifier evaluation protocol, the party Carol with the test email only receives the final outcome of the classifier in plaintext. Thus, the only additional information available to her is the output of the classifier itself, which, being the output, is permissible under the privacy criteria of the problem.~\\ \noindent{\em Bob:} In the training stage, Bob receives unencrypted data from Alice in Steps 3, 8 and 12. \begin{itemize} \item{} {\em Step 3}: Bob receives $y_i w^T_{(t)}x_i - r_i$. Let us denote this quantity by $v$ and $y_iw^T_{(t)}x_i$ by $z$, giving us $v = z - r_i$. Since $r_i$ is drawn from a uniform distribution over the entire ring $\mathbb{Z}_N$, for any $v$ and for every value of $z$ there exists a unique value of $r_i$ such that $v = z - r_i$. Thus, $P_z(z|v) \propto P_z(z)P_r(z-v) = P_z(z)$.\footnote{The notation $P_x(X)$ denotes the probability with which the random variable $x$ takes the value $X$.} The conditional entropy $H(z|v) = H(z)$, {\em i.e.}, Bob receives no information from the operation. \item{}{\em Step 8}: A similar argument can be made for this step. Here Bob receives $v = q z$, where $z = 1 + e^{y_i w^T_{(t)} x_i}$. It can be shown that for any value $v$ that Bob receives, $P_z(z|v) \propto \frac{P_z(z) P_q(v/z)}{z}$. 
Since $q$ is drawn from a power law distribution, {\em i.e.}, $P_q(q) \propto 1/q$, for all $v < |\mathcal{D}|$ we have $P_z(z|v) = P_z(z)$. Once again, the conditional entropy $H(z|v) = H(z)$, {\em i.e.}, Bob receives no information from the operation. \item{} {\em Step 12}: The information Bob receives in this step is the updated weight vector, which is the result of the computation that Bob is permitted to receive by the basic premise of the SMC protocol. \end{itemize} \subsubsection*{Information Revealed by the Output} We assume that all the parties agree with Bob receiving the updated classifier at the end of the training protocol; this forms the premise behind their participation in the protocol to start with. If the parties use the modified training protocol which results in a differentially private classifier, no information about the data can be gained from the output classifier. In case the parties use the original training protocol, the output classifier does reveal information about the input data, which we quantify and present ways to minimize in the following analysis. At the end of Step 12 in each iteration, Bob receives the updated weight vector $w_{(t+1)} = w_{(t)} + \eta \nabla L(w_{(t)},x,y)$. As he also has the previous weight vector $w_{(t)}$, he effectively observes the gradient $\nabla L(w_{(t)},x,y) = \sum_i y_ix^T_i \left(1+e^{y_iw^T_{(t)}x_i}\right)^{-1}$. In the online setting, we normally use one training data instance at a time to update the classifier. If Alice participates in the training protocol using only one document $(x_1,y_1)$, the gradient observed by Bob will be $y_1x_1\left(1+e^{y_1w^T_{(t)}x_1}\right)^{-1}$, which is simply a scaling of the data vector $y_1x_1$. As Bob knows $w_{(t)}$, he effectively knows $y_1x_1$. In particular, if $x_1$ is a vector of non-negative counts, as is the case for $n$-grams, the knowledge of $y_1x_1$ is equivalent to knowing $x_1$. Although the protocol itself is secure, the output reveals Alice's data completely. 
Alice can prevent this by updating the classifier using {\em blocks} of $K$ document vectors $(x,y)$ at a time. The protocol ensures that for each block of $K$ vectors Bob only receives the gradient computed over them \begin{align*} \nabla L(w_{(t)},x,y) &= \sum_{i=1}^K y_ix^T_i \left(1+e^{y_iw^T_{(t)}x_i}\right)^{-1} \\ &= \sum_{i=1}^K g(w_{(t)},x_i,y_i)x_i, \end{align*} where $g(w_{(t)},x_i,y_i)$ is a scalar function of the data instance such that $g(w_{(t)},x_i,y_i)x_i$ has a one-to-one mapping to $x_i$. Assuming that all data vectors $x_i$ are i.i.d., using Jensen's inequality, we can show that the conditional entropy \begin{align}\label{eqn:info-loss} H\left[x_i | \nabla L(w_{(t)},x,y)\right] \le \frac{K-1}{K} H[x_i] + \log(K). \end{align} In other words, while Bob gains some information about the data belonging to Alice, the amount of this information is inversely proportional to the block size. In the online learning setting, choosing a large block size decreases the accuracy of the classifier. Therefore, the choice of the block size effectively becomes a parameter that Alice can control to trade off giving away some information about her data with the accuracy of the classifier. In Section 6.2, we empirically analyze the performance of the classifier for varying batch sizes. We observe that in practice, the accuracy of the classifier is not reduced even after choosing substantially large batches of 1000 documents, which would hardly cause any loss of information as given by Equation~\ref{eqn:info-loss}.
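As a numerical illustration of Equation~\ref{eqn:info-loss}, the gap between the per-document entropy and the right-hand side of the bound, $H[x_i] - \left(\frac{K-1}{K}H[x_i] + \log K\right) = \frac{H[x_i]}{K} - \log K$, can be evaluated for a hypothetical per-document entropy (the value of 50 nats below is an arbitrary illustrative choice, not a measurement from our data).

```python
import math

# Gap between H[x_i] and the bound (K-1)/K * H[x_i] + log K, i.e.
# H[x_i]/K - log K, clipped at zero. H_X = 50 nats is an arbitrary
# illustrative value standing in for the entropy of one document.
H_X = 50.0

def leakage_gap(K, H=H_X):
    return max(0.0, H / K - math.log(K))

assert leakage_gap(1) == 50.0        # a single document is fully exposed
assert round(leakage_gap(10), 3) == 2.697
assert leakage_gap(1000) == 0.0      # large blocks leave essentially no gap
```

Consistent with the discussion above, the gap vanishes rapidly as the block size grows, so batches of around 1000 documents leave essentially no per-document exposure under this measure.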
\section{Introduction} In the recent paper \emph{The third homology of} $SL_{2}(\mathbb{Q})$ (\cite{ArticlehomologyQprint}), Hutchinson calculates the structure of $H_{3}(\mathrm{SL}_{2}(\mathbb{Q}),\mathbb{Z})$ by calculating the surjective homomorphism $H_{3}(\mathrm{SL}_{2}(\mathbb{Q}),\mathbb{Z})\rightarrow K_{3}^{\mathrm{ind}}(\mathbb{Q})$. He asked whether the structure theorem proved in his paper generalizes to number fields and even global fields. In this paper, we answer this question for the case of imaginary quadratic number fields whose ring of integers is a Euclidean domain with respect to the norm of the field (see Section \ref{Section_proof_of_the_main_theorem} below). \medskip Let $F$ be a field. There is a natural induced map from $H_{3}(\mathrm{SL}_{2}(F), \mathbb{Z})$ to the indecomposable quotient, $K_{3}^{\mathrm{ind}}(F)$, of the third Quillen $K$-group of $F$. It can be shown that this map is surjective (see \cite{HutchinsonLiqunTao}). In \cite{sah:dupont}, Johan Dupont and Chih-Han Sah showed that this map is an isomorphism in the case $F=\mathbb{C}$ (and more generally when $F^{\times}=(F^{\times})^{2}$). Furthermore, in \cite{ChinHanSah}, Chih-Han Sah proved this result for the case of $\mathbb{R}$. However, for general fields, the map $H_{3}(\mathrm{SL}_{2}(F), \mathbb{Z})\rightarrow K_{3}^{\mathrm{ind}}(F)$ has a large kernel (\cite{ArticleBloch}). In fact, $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})$ is naturally a module over the integral group ring $R_{F}:=\mathbb{Z}[F^{\times}/(F^{\times})^2]$ of the group of square classes of the field $F$, and the above homomorphism is a homomorphism of $R_{F}$-modules if $K_{3}^{\mathrm{ind}}(F)$ is given the trivial module structure. In \cite{ArticleBloch}, Hutchinson shows that the action of $R_{F}$ on $H_{3}(\mathrm{SL}_{2}(F), \mathbb{Z})$ is highly nontrivial for fields with (many) discrete valuations. 
When $F^{\times}$ acts nontrivially on $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})$, the map $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\rightarrow K_{3}^{\mathrm{ind}}(F)$ has a nontrivial kernel. Mirzaii has shown (\cite{mirzaii:third}) that the kernel of the induced homomorphism $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})_{F^{\times}}\rightarrow K_{3}^{\mathrm{ind}}(F)$ consists of at most $2$-primary torsion. In other words, $\mathrm{Ker}\left(H_{3}\left(\mathrm{SL}_{2}(F), \mathbb{Z}\left[\frac{1}{2}\right]\right)\rightarrow K_{3}^{\mathrm{ind}}(F)\left[\frac{1}{2}\right]\right)$ is $\mathcal{I}_{F}H_{3}(\mathrm{SL}_{2}(F), \mathbb{Z})$ (up to some $2$-torsion), where $\mathcal{I}_{F}$ is the augmentation ideal of $R_{F}$. \medskip For $F=\mathbb{Q}(\sqrt{-m})$, where $m\in \{1,2,3,7,11\}$, the main proposition of this article (see Proposition \ref{Main_Proposition_Isomorphism}) describes the structure of $H_{0}\left(\mathcal{O}^{\times}_{F}, \mathcal{I}_{F}H_{3}\left(\mathrm{SL}_{2}(F), \mathbb{Z}\left[\frac{1}{2}\right]\right)\right)$ as an $R_{\mathbb{Q}(\sqrt{-m})}$-module. The proposition states that this module is isomorphic - via a natural residue homomorphism $S_{p}$ - to the direct sum of the \emph{scissors congruence groups} $\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]$ of the residue fields $k(\upsilon_{p})$, where the sum is over all the primes of the field $F$ (see Section \ref{Section_Fields_with_a_valuation} and Section \ref{Section_proof_of_the_main_theorem} for more details). The square class of $-1$ acts trivially on each factor, while the square class of $p$ acts by multiplication by $-1$ on the corresponding factor $\mathcal{P}(k(\upsilon_{p}))$. 
When $m\not=1$, it follows that as an abelian group \begin{center} $H_{3}\left(\mathrm{SL}_{2}(\mathbb{Q}(\sqrt{-m})),\mathbb{Z}\left[\frac{1}{2}\right]\right)\cong K_{3}^{\mathrm{ind}}(\mathbb{Q}(\sqrt{-m}))\left[\frac{1}{2}\right]\oplus\left(\displaystyle\bigoplus_{p\in\Pi}\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\right)$ \end{center} where $\Pi$ denotes the set of primes of $F$ (since $\mathcal{O}_{F}^{\times}$ acts trivially in this case); the structure of the indecomposable $K_{3}$ of number fields and the \emph{scissors congruence groups} of finite fields are known (see \cite{KtheoryCharlesWeibel} and \cite{ArticleBlochWignerComplex} respectively). \medskip One of the main tools we use is the description of $H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)$ in terms of the \emph{refined scissors congruence group}. The \emph{scissors congruence group} $\mathcal{P}(F)$ of a field $F$ was introduced by Dupont and Sah in their study of Hilbert's third problem in hyperbolic $3$-space (\cite{sah:dupont}). It is an abelian group defined by an explicit presentation (see Section \ref{Section_Bloch_group_of_fields}), and these authors showed, in the case $F=\mathbb{C}$ (and more generally when $F$ is algebraically closed), that $K_{3}^{\mathrm{ind}}(\mathbb{C})\cong H_{3}(\mathrm{SL}_{2}(\mathbb{C}),\mathbb{Z})$. Soon afterwards, Suslin (\cite{Suslin1}) proved that, for any infinite field $F$, the associated Bloch group is closely connected with $K_{3}^{\mathrm{ind}}(F)$, the indecomposable $K_{3}$ of $F$ (see Theorem \ref{Suslinmaintheorem}). As noted above, $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})$ is naturally a module over the integral group ring $R_{F}:=\mathbb{Z}[F^{\times}/(F^{\times})^2]$ of the group of square classes of the field $F$.
The \emph{refined scissors congruence group} of the field $F$ - introduced by Hutchinson in \cite{ArticleBlochWignerComplex} - is defined by generators and relations analogously to the \emph{scissors congruence group}, but as a module over $R_{F}$ (see Section \ref{Section_Refined_Bloch_group_of_fields}). In \cite{ArticleBlochWignerComplex}, the relation of the \emph{refined scissors congruence group} $\mathcal{RP}(F)$ to $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})$ and $K_{3}^{\mathrm{ind}}(F)$ is elucidated (see Theorem \ref{TheoremBlochWignerComplex} for a precise statement). Our starting point in this article is essentially an isomorphism $\mathcal{I}_{F}H_{3}\left(\mathrm{SL}_{2}(F), \mathbb{Z}\left[\frac{1}{2}\right]\right)\cong \mathcal{I}_{F}\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]$, where $\mathcal{RP}_{+}(F)$ is a certain quotient of $\mathcal{RP}(F)$ (see Proposition \ref{PropsitionRP+(F)andH3(SL2)}). \subsection{Layout of this article} In Section \ref{Section_Bloch_group_of_fields}, we review some known results about scissors congruence groups and their relation to the third homology of $\mathrm{SL}_{2}$ of fields. In Section \ref{Section_Characters}, we review the character-theoretic local-global principle for modules over a group ring (\cite{Articlediscretevaluation}). We recall some applications to scissors congruence groups (\cite{ArticlehomologyQprint}), and generalize some of these results. In Section \ref{Section_Fields_with_a_valuation}, we review some algebraic properties of the refined scissors congruence groups of fields $F$ which have a discrete valuation. In Section \ref{Section_Quadratic_fields}, we review relevant facts about quadratic number fields. Section \ref{Section_proof_of_the_main_theorem} contains the proof of the main proposition (Proposition \ref{Main_Proposition_Isomorphism}) for the fields $\mathbb{Q}(\sqrt{-m})$, where $m\in \{1,2,3,7,11\}$.
\subsection{Some notation} For a field $F$, we let $F^{\times}$ denote the group of units of $F$. For $x\in F^{\times}$ we will let $\langle x\rangle\in F^{\times}/(F^{\times})^{2}$ denote the corresponding square class. Let $R_{F}$ denote the integral group ring $\mathbb{Z}\left[F^{\times}/(F^{\times})^{2}\right]$ of the group $F^{\times}/(F^{\times})^{2}$. We will use the notation $\langle\langle x\rangle\rangle$ for the basis elements $\langle x\rangle-1$ of the augmentation ideal $\mathcal{I}_{F}$ of $R_{F}$. For any $a\in F^{\times}$, we will let $p^{a}_{+}$ and $p^{a}_{-}$ denote the elements $1+\langle a\rangle$ and $1-\langle a\rangle$ of $R_{F}$ respectively. For any abelian group $G$ we will let $G\left[\frac{1}{2}\right]$ denote $G\otimes\mathbb{Z}\left[\frac{1}{2}\right]$.\newline \subsection{$H_{\bullet}(\mathrm{SL}_{2}(F), \mathbb{Z})$ is an $R_{F}$-module} Let us recall that the group extension \begin{center} $\xymatrix{1\ar[r]& \mathrm{SL}_{n}(F)\ar[r]& \mathrm{GL}_{n}(F)\ar[r]&F^{\times}\ar[r]&1}$ \end{center} induces an action - by conjugation - of $F^{\times}$ on the homology groups $H_{\bullet}(\mathrm{SL}_{n}(F), \mathbb{Z})$. Since the determinant of a scalar matrix is an $n$-th power, the subgroup $(F^{\times})^{n}$ acts trivially. In particular, the groups $H_{\bullet}(\mathrm{SL}_{2}(F), \mathbb{Z})$ are modules over the integral group ring $R_{F}:=\mathbb{Z}[F^{\times}/(F^{\times})^2]$. \section{Bloch Groups of fields.}\label{Section_Bloch_group_of_fields} In this section we will recall the definition and applications of the classical pre-Bloch group $\mathcal{P}(F)$ and the refined pre-Bloch group $\mathcal{RP}(F)$.
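As a quick sanity check on this notation (the example is ours, not from the paper): in $R_{F}$ one has $\langle\langle x\rangle\rangle^{2}=\langle x^{2}\rangle-2\langle x\rangle+1=-2\langle\langle x\rangle\rangle$ and $p^{a}_{+}\,p^{a}_{-}=1-\langle a^{2}\rangle=0$. For $F=\mathbb{Q}$ these identities can be verified mechanically by representing a square class by the squarefree part of a rational number; the sketch below (all function names are illustrative) models elements of $R_{\mathbb{Q}}$ as dictionaries from squarefree integers to coefficients.

```python
from fractions import Fraction
from collections import defaultdict

def squarefree_part(q):
    """Squarefree integer representing the square class <q> of q in Q*/(Q*)^2."""
    q = Fraction(q)
    n = q.numerator * q.denominator          # same square class as q
    sign, n, d = (1 if n > 0 else -1), abs(n), 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
        d += 1
    return sign * n

def rmul(u, v):
    """Product of two elements of R_Q, stored as {square class: coefficient}."""
    w = defaultdict(int)
    for g, c in u.items():
        for h, e in v.items():
            w[squarefree_part(g * h)] += c * e   # <g><h> = <gh>
    return {g: c for g, c in w.items() if c != 0}

def bracket(x):
    """<<x>> = <x> - 1, a generator of the augmentation ideal I_Q."""
    w = defaultdict(int)
    w[squarefree_part(x)] += 1
    w[1] -= 1
    return {g: c for g, c in w.items() if c != 0}
```

For instance, `rmul(bracket(3), bracket(3))` gives $2-2\langle 3\rangle=-2\langle\langle 3\rangle\rangle$, and the product $p^{5}_{+}\,p^{5}_{-}$ vanishes.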
\subsection{Classical Bloch Group $\mathcal{B}(F)$.} For a field $F$ with at least $4$ elements, the \emph{pre-Bloch group} or \emph{scissors congruence group}, $\mathcal{P}(F)$, is the group generated by the elements $[x]$, with $x\in F^{\times}\setminus\{1\}$, subject to the relations \begin{center} $R_{x,y}:\; 0=[x]-[y]+[y/x]-[(1-x^{-1})/(1-y^{-1})]+[(1-x)/(1-y)]$, \;\; $x\not=y$. \end{center} Let $S^{2}_{\mathbb{Z}}(F^{\times})$ denote the group \begin{center} $\frac{F^{\times}\otimes_{\mathbb{Z}}F^{\times} }{\langle x\otimes y +y\otimes x\; |\;x,y\in F^{\times}\rangle}$ \end{center} and denote by $x\circ y$ the image of $x\otimes y$ in $S^{2}_{\mathbb{Z}}(F^{\times})$. The map \begin{center} $\lambda: \mathcal{P}(F)\rightarrow S^{2}_{\mathbb{Z}}(F^{\times})$, \;\; $[x]\mapsto (1-x)\circ x$ \end{center} is well-defined, and the \emph{Bloch group} of $F$, $\mathcal{B}(F)\subset\mathcal{P}(F)$, is defined to be the kernel of $\lambda$. \smallskip For the fields $\mathbb{F}_{2}$ and $\mathbb{F}_{3}$ the following definitions allow us to include these fields in the statements of some of our results: \smallskip $\mathcal{P}(\mathbb{F}_{2})=\mathcal{B}(\mathbb{F}_{2})$ is a cyclic group of order $3$ with generator denoted $C_{\mathbb{F}_{2}}$. \smallskip $\mathcal{P}(\mathbb{F}_{3})$ is cyclic of order $4$ with generator $[-1]$. $\mathcal{B}(\mathbb{F}_{3})$ is the subgroup generated by $2[-1]$.
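To illustrate the defining relation with a concrete instance (our example, not from the paper): specializing $R_{x,y}$ at $x=2$, $y=4$ over $\mathbb{Q}$ gives $0=2[2]-[4]-[2/3]+[1/3]$ in $\mathcal{P}(\mathbb{Q})$. The short sketch below (helper names are ours) computes the five arguments with exact rational arithmetic and collects them into a formal $\mathbb{Z}$-linear combination.

```python
from fractions import Fraction
from collections import Counter

def five_term_arguments(x, y):
    """Arguments of the five symbols in R_{x,y}, listed with signs +,-,+,-,+."""
    x, y = Fraction(x), Fraction(y)
    return [x, y, y / x,
            (1 - 1 / x) / (1 - 1 / y),
            (1 - x) / (1 - y)]

def relation_as_combination(x, y):
    """Collect R_{x,y} as a formal Z-linear combination {argument: coefficient}."""
    comb = Counter()
    for sign, arg in zip([1, -1, 1, -1, 1], five_term_arguments(x, y)):
        comb[arg] += sign
    return {a: c for a, c in comb.items() if c != 0}
```

Here `relation_as_combination(2, 4)` returns the combination $2[2]-[4]-[2/3]+[1/3]$, i.e. the relation displayed above.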
The Bloch group $\mathcal{B}(F)$ of a general field $F$ is of interest because of the following result of Suslin on $K_{3}^{\mathrm{ind}}$: \begin{teo}\cite[Theorem 5.2]{Suslin1}\label{Suslinmaintheorem} Let $F$ be an infinite field. Then there is a short exact sequence \begin{center} $\xymatrix{0\ar[r]&\widetilde{\mathrm{Tor}_{1}^{\mathbb{Z}}(\mu_{F},\mu_{F})}\ar[r]&K_{3}^{\mathrm{ind}}(F)\ar[r]&\mathcal{B}(F)\ar[r]&0}$ \end{center} where $\widetilde{\mathrm{Tor}_{1}^{\mathbb{Z}}(\mu_{F},\mu_{F})}$ is the unique nontrivial extension of $\mathrm{Tor}_{1}^{\mathbb{Z}}(\mu_{F},\mu_{F})$ by $\mathbb{Z}/2$ when $\mathrm{Char}(F)\not=2$ (and $\widetilde{\mathrm{Tor}_{1}^{\mathbb{Z}}(\mu_{F},\mu_{F})}=\mathrm{Tor}_{1}^{\mathbb{Z}}(\mu_{F},\mu_{F})$ if $\mathrm{Char}(F)=2$). \end{teo} \begin{flushright} $\Box$ \end{flushright} \subsection{The refined Bloch Group $\mathcal{RB}(F)$.}\label{Section_Refined_Bloch_group_of_fields} The \emph{refined pre-Bloch group} $\mathcal{RP}(F)$ of a field $F$ which has at least $4$ elements is the $R_{F}$-module with generators $[x]$, $x\in F^{\times}$, subject to the relations $[1]=0$ and \begin{center} $S_{x,y}:\; 0=[x]-[y]+\langle x\rangle[y/x]-\langle x^{-1}-1\rangle[(1-x^{-1})/(1-y^{-1})]+\langle1-x\rangle[(1-x)/(1-y)]$, \;\; $x,y\not=1$. \end{center} From the definitions, it follows that $\mathcal{P}(F)=(\mathcal{RP}(F))_{F^{\times}}$.
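Continuing the concrete instance $x=2$, $y=4$ over $F=\mathbb{Q}$ (our illustration, with square classes again represented by squarefree parts): in $S_{2,4}$ the five symbols carry the square-class coefficients $1$, $1$, $\langle 2\rangle$, $\langle 2^{-1}-1\rangle=\langle-2\rangle$ and $\langle 1-2\rangle=\langle-1\rangle$, in addition to the alternating signs. A self-contained sketch (helper names are illustrative) computes these coefficient/argument pairs:

```python
from fractions import Fraction

def squarefree_part(q):
    """Squarefree integer representing the square class <q> in Q*/(Q*)^2."""
    q = Fraction(q)
    n = q.numerator * q.denominator
    sign, n, d = (1 if n > 0 else -1), abs(n), 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
        d += 1
    return sign * n

def refined_five_term(x, y):
    """(square-class coefficient, argument) for each symbol of S_{x,y},
    in the order of the text, with the signs +,-,+,-,+ kept separate."""
    x, y = Fraction(x), Fraction(y)
    return [(1, x),
            (1, y),
            (squarefree_part(x), y / x),
            (squarefree_part(1 / x - 1), (1 - 1 / x) / (1 - 1 / y)),
            (squarefree_part(1 - x), (1 - x) / (1 - y))]
```

Killing the square classes (i.e. setting every coefficient to $1$) recovers the classical relation $R_{x,y}$, in accordance with $\mathcal{P}(F)=(\mathcal{RP}(F))_{F^{\times}}$.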
Let $\Lambda=(\lambda_{1},\lambda_{2})$ be the $R_{F}$-module homomorphism \begin{center} $\mathcal{RP}(F)\rightarrow \mathcal{I}_{F}^{2}\oplus S^{2}_{\mathbb{Z}}(F^{\times})$ \end{center} where $S^{2}_{\mathbb{Z}}(F^{\times})$ has the trivial $R_{F}$-module structure, $\lambda_{1}:\mathcal{RP}(F)\rightarrow \mathcal{I}^{2}_{F}$ is the map $[x]\mapsto \langle\langle1-x\rangle\rangle\langle\langle x\rangle\rangle$ and $\lambda_{2}$ is the composite \begin{center} $\xymatrix{ \mathcal{RP}(F)\ar[r]&\mathcal{P}(F)\ar[r]^{\lambda} &S^{2}_{\mathbb{Z}}(F^{\times})}$. \end{center} The \emph{refined Bloch group} of $F$ is the module \begin{center} $\mathcal{RB}(F):=\mathrm{Ker}\left(\Lambda: \mathcal{RP}(F)\rightarrow \mathcal{I}_{F}^{2}\oplus S^{2}_{\mathbb{Z}}(F^{\times} )\right)$. \end{center} Furthermore, the \emph{refined scissors congruence group} of $F$ is the $R_{F}$-module \begin{center} $\mathcal{RP}_{1}(F):=\mathrm{Ker}(\lambda_{1}:\mathcal{RP}(F)\rightarrow\mathcal{I}^{2}_{F})$. \end{center} Thus $\mathcal{RB}(F)=\mathrm{Ker}(\lambda_{2}:\mathcal{RP}_{1}(F)\rightarrow S^{2}_{\mathbb{Z}}(F^{\times}))$. \bigskip The refined Bloch group is of interest because of the following result on the third homology of $\mathrm{SL}_{2}$ over a general field $F$: \begin{teo}\cite[Theorem 4.3]{ArticleBlochWignerComplex}\label{TheoremBlochWignerComplex} Let $F$ be a field with at least $4$ elements. \begin{itemize} \item[(1)]If $F$ is infinite, there is a natural complex of $R_{F}$-modules \begin{center} $\xymatrix{0\ar[r]&\mathrm{Tor}_{1}^{\mathbb{Z}}(\mu_{F},\mu_{F})\ar[r]&H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\ar[r]& \mathcal{RB}(F)\ar[r]&0}$, \end{center} which is exact except at the middle term, where the homology is annihilated by $4$.
\item[(2)]If $F$ is finite of odd characteristic, there is a complex \begin{center} $\xymatrix{0\ar[r]&H_{3}(B,\mathbb{Z})\ar[r]&H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\ar[r]& \mathcal{RB}(F)\ar[r]&0}$, \end{center} which is exact except at the middle term, where the homology has order $2$. \item[(3)]If $F$ is finite of characteristic $2$, there is an exact sequence \begin{center} $\xymatrix{0\ar[r]&H_{3}(B,\mathbb{Z})\ar[r]&H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\ar[r]& \mathcal{RB}(F)\ar[r]&0}$. \end{center} \end{itemize} \end{teo} \begin{flushright} $\Box$ \end{flushright} Now for $x\in F^{\times}$, we define the following elements of $\mathcal{RP}(F)$: \begin{center} $\psi_{1}(x):=[x]+\langle-1\rangle[x^{-1}]$ \;\; and\;\; $\psi_{2}(x):=\left\{ \begin{array}{ll} \langle x^{-1}-1\rangle[x]+\langle1-x\rangle[x^{-1}], & \hbox{$x\not=1$;} \\ 0, & \hbox{$x=1$.} \end{array} \right. $ \end{center} From the definitions of the elements $\psi_{i}(x)$, we get that $\langle-1\rangle\psi_{i}(-1)=\psi_{i}(-1)$ for $i\in\{1,2\}$. We define $\widetilde{\mathcal{RP}}(F)$ to be $\mathcal{RP}(F)$ modulo the submodule generated by the elements $\psi_{1}(x)$, $x\in F^{\times}$. \smallskip In Section 3.2 of \cite{Articlediscretevaluation}, it is shown that the elements \begin{center} $C(x)=[x]+\langle-1\rangle[1-x]+ \langle\langle1-x\rangle\rangle\psi_{1}(x) \in \mathcal{RP}(F)$ \end{center} are constant for a field with at least $4$ elements; i.e. $C(x)=C(y)$ for all $x,y\in F\setminus\{0,1\}$. Therefore we have the following definition. \begin{defi} Let $F$ be a field with at least $4$ elements. We will denote by $C_{F}$ the common value of the expression $C(x)$ for $x\in F\setminus\{0,1\}$; i.e. \begin{center} $C_{F}:=[x]+\langle-1\rangle[1-x]+ \langle\langle1-x\rangle\rangle\psi_{1}(x)$ \;\;in\;\; $\mathcal{RP}(F)$.
\end{center} \end{defi} For the fields with $2$ and $3$ elements the following definitions allow us to include these fields in the statements of some results: \smallskip $\mathcal{RP}(\mathbb{F}_{2})=\mathcal{RP}_{1}(\mathbb{F}_{2})=\mathcal{RB}(\mathbb{F}_{2})=\mathcal{P}(\mathbb{F}_{2})$; i.e. it is a cyclic module generated by $C_{\mathbb{F}_{2}}$. \smallskip $\mathcal{RP}(\mathbb{F}_{3})$ is a cyclic $R_{\mathbb{F}_{3}}$-module generated by $[-1]$ subject to the relation $[1]=0$ and \begin{center} $0=2\psi_{1}(-1)=2([-1]+\langle-1\rangle[-1])$. \end{center} $\mathcal{RB}(\mathbb{F}_{3})=\mathcal{RP}_{1}(\mathbb{F}_{3})$ is the submodule generated by $\psi_{1}(-1)=[-1]+\langle-1\rangle[-1]$. \smallskip For any field $F$, there is a natural surjective homomorphism $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\rightarrow K_{3}^{\mathrm{ind}}(F)$; it can be shown that this is a homomorphism of $R_{F}$-modules, where $F^{\times}/(F^{\times})^{2}$ acts trivially on $K_{3}^{\mathrm{ind}}(F)$. By \cite{mirzaii:third}, this map induces an isomorphism \begin{center} $H_{0}\left(F^{\times}/(F^{\times})^{2},H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)\right)\cong K_{3}^{\mathrm{ind}}(F)\left[\frac{1}{2}\right]$. \end{center} Now let $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})_{0}$ denote the kernel of the surjective homomorphism $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\rightarrow K^{\mathrm{ind}}_{3}(F)$. Note that the isomorphism above implies \begin{center} $H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)_{0}=\mathcal{I}_{F}H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)$. \end{center} \begin{Rec}\label{Recall_Number_fields_K3_ind} Let us recall that when $F$ is a number field the surjective homomorphism $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\rightarrow K^{\mathrm{ind}}_{3}(F)$ is split as a map of abelian groups.
In fact $K_{3}^{\mathrm{ind}}(F)$ is a finitely generated abelian group, and it is enough to show that there is a torsion subgroup of $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})$ mapping isomorphically to the cyclic torsion subgroup of $K_{3}^{\mathrm{ind}}(F)$. This latter statement follows from the explicit calculations of Christian Zickert in \cite[Section 8]{CZickert}. It follows that, as an abelian group, \begin{center} $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\cong K_{3}^{\mathrm{ind}}(F)\oplus H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})_{0}$ \end{center} for any number field $F$. \end{Rec} Furthermore, we have the following proposition: \begin{prop}\cite[Corollary 2.8, Corollary 4.4]{Articlediscretevaluation}\label{PropsitionRP+(F)andH3(SL2)} For any field $F$, the map $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\rightarrow \mathcal{RP}(F)$ induces an isomorphism of $R_{F}$-modules \begin{center} $H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)_{0}=\mathcal{I}_{F}H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)\cong\mathcal{I}_{F}\mathcal{RP}_{1}(F)\left[\frac{1}{2}\right]$ \end{center} and furthermore \begin{center} $\mathcal{RP}_{1}(F)\left[\frac{1}{2}\right]=\widetilde{\mathcal{RP}}_{1}(F)\left[\frac{1}{2}\right]=e_{+}^{-1}\widetilde{\mathcal{RP}}(F)\left[\frac{1}{2}\right]$ \end{center} where $e_{+}^{-1}:=\frac{p_{+}^{-1}}{2}=\frac{1+\langle-1\rangle}{2}\in R_{F}\left[\frac{1}{2}\right]$. \end{prop} \begin{flushright} $\Box$ \end{flushright} Note that from $\mathcal{RB}(F)\subseteq \mathcal{RP}_{1}(F)$ and Theorem \ref{TheoremBlochWignerComplex}, it follows that the square class $\langle-1\rangle$ acts trivially on $H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)$ (see \cite[Corollary 4.6]{Articlediscretevaluation} for more details). Now we define $\mathcal{RP}_{+}(F)$ to be $\widetilde{\mathcal{RP}}(F)$ modulo the submodule generated by the elements $(1-\langle-1\rangle)[x]$, $x\in F^{\times}$.
Thus $\mathcal{RP}_{+}(F)$ is the $R_{F}$-module generated by the elements $[x]$, $x\in F^{\times}$, subject to the relations \begin{itemize} \item[1)] $[1]=0$. \item[2)] $S_{x,y}=0$ for $x,y\not=1$. \item[3)] $\langle-1\rangle[x]=[x]$ for all $x\in F^{\times}$. \item[4)] $[x]=-[x^{-1}]$ for all $x\in F^{\times}$. \end{itemize} The proposition above implies that the map $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})\rightarrow \mathcal{RP}(F)$ induces an isomorphism \begin{center} $H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)_{0}\cong\mathcal{I}_{F}\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]$. \end{center} \begin{Rem} It will be convenient below to introduce the following notation in $\mathcal{RP}_{+}(F)$: \begin{center} $[1]:=0$,\;\;\;\;$[0]:=C_{F}$ \;\; and\;\; $[\infty]:=-C_{F}$. \end{center} Thus the symbol $[x]\in \mathcal{RP}_{+}(F)$ is defined for all $x\in \mathbb{P}^{1}(F)$. Furthermore $[x]+[1-x]=C_{F}$ and $[x]=-[x^{-1}]$ in $\mathcal{RP}_{+}(F)$ for all $x\in \mathbb{P}^{1}(F)$. \end{Rem} \section{Characters}\label{Section_Characters} In this section we will review the character-theoretic method introduced by Hutchinson (\cite{Articlediscretevaluation}) for calculating with modules over the group ring of an elementary abelian $2$-group. Furthermore, we prove some new related results about subrings of number fields which will be useful in Section \ref{Section_proof_of_the_main_theorem}. \bigskip Let $G$ be an abelian group satisfying $g^{2}=1$ for all $g\in G$. Let $\mathcal{R}$ denote the group ring $\mathbb{Z}[G]$. For a character $\chi\in \widehat{G}:=\mathrm{Hom}(G,\mu_{2})$, let $\mathcal{R}^{\chi}$ be the ideal of $\mathcal{R}$ generated by the elements $\{g-\chi(g)\;|\; g\in G\}$; i.e. $\mathcal{R}^{\chi}$ is the kernel of the ring homomorphism $\rho(\chi):\mathcal{R}\rightarrow \mathbb{Z}$ sending $g$ to $\chi(g)$ for any $g\in G$. We let $\mathcal{R}_{\chi}$ denote the associated $\mathcal{R}$-algebra structure on $\mathbb{Z}$.
In other words, $\mathcal{R}_{\chi}:=\mathcal{R}/\mathcal{R}^{\chi}$. \bigskip If $M$ is an $\mathcal{R}$-module, we let $M^{\chi}=\mathcal{R}^{\chi}M$ and we let \begin{center} $M_{\chi}:=M/M^{\chi}=(\mathcal{R}/\mathcal{R}^{\chi})\otimes_{\mathcal{R}}M=\mathcal{R}_{\chi}\otimes_{\mathcal{R}}M$. \end{center} Thus $M_{\chi}$ is the largest quotient module of $M$ with the property that $g\cdot m=\chi(g)\cdot m$ for all $g\in G$. In particular, if $\chi=\chi_{0}$ is the trivial character, then $\mathcal{R}^{\chi_{0}}$ is the augmentation ideal $\mathcal{I}_{G}$, $M^{\chi_{0}}=\mathcal{I}_{G}M$ and $M_{\chi_{0}}=M_{G}$. \bigskip Given $m\in M$ and $\chi\in \widehat{G}$, we denote the image of $m$ in $M_{\chi}$ by $m_{\chi}$. For example, for any character $\chi\in \widehat{F^{\times}/(F^{\times})^{2}}$, we can give a presentation of the $R_{F}$-module $\mathcal{RP}_{+}(F)_{\chi}$, which is our main object of study. $\mathcal{RP}_{+}(F)_{\chi}$ is the $R_{F}$-module with generators $[x]_{\chi}$, $x\in F^{\times}$, subject to the relations \begin{itemize} \item[1)] $\langle a\rangle\cdot[x]_{\chi}:=\chi(a)\cdot[x]_{\chi}$ for all $a,x\in F^{\times}$. \item[2)] $[1]_{\chi}=0$. \item[3)] The five term relation \begin{center} $0=[x]_{\chi}-[y]_{\chi}+\chi(x)\left[\frac{y}{x}\right]_{\chi}-\chi(x^{-1}-1)\left[\frac{1-x^{-1}}{1-y^{-1}}\right]_{\chi}+\chi(1-x)\left[\frac{1-x}{1-y}\right]_{\chi}$ \end{center} for all $x,y\not=1$. \item[4)] $\chi(-1)\cdot[x]_{\chi}=[x]_{\chi}$ for all $x\in F^{\times}$. \item[5)] $[x]_{\chi}=-[x^{-1}]_{\chi}$ for all $x\in F^{\times}$. \end{itemize} \begin{Rec} Let us recall that for an $\mathcal{R}$-module $M$, $M_{\chi}=\mathcal{R}_{\chi}\otimes_{\mathcal{R}}M$.
Hence any $\mathcal{R}$-homomorphism $f:M\rightarrow N$ induces a map \begin{center} $f_{\chi}=Id_{\mathcal{R}_{\chi}}\otimes f:M_{\chi}=\mathcal{R}_{\chi}\otimes_{\mathcal{R}}M\rightarrow \mathcal{R}_{\chi}\otimes_{\mathcal{R}}N=N_{\chi}$. \end{center} \end{Rec} The following character-theoretic local-global principle is central to the calculations below: \begin{prop}\cite[Section 3]{Articlediscretevaluation}\label{Proposition_Local_global_principle_character} \begin{itemize} \item[(1)] For any $\chi\in\widehat{G}$, $M\mapsto M_{\chi}$ is an exact functor on the category of $\mathcal{R}\left[\frac{1}{2}\right]$-modules. \item[(2)] Let $f:M\rightarrow N$ be an $\mathcal{R}\left[\frac{1}{2}\right]$-module homomorphism. For any $\chi\in\widehat{G}$, let $f_{\chi}:M_{\chi}\rightarrow N_{\chi}$ be the induced map. Then $f$ is bijective (resp. injective, surjective) if and only if $f_{\chi}$ is bijective (resp. injective, surjective) for all $\chi\in\widehat{G}$. \end{itemize} \end{prop} \begin{flushright} $\Box$ \end{flushright} \begin{lema}\cite[Lemma 5.3]{ArticlehomologyQprint} Let $F$ be a field and let $\chi\in \widehat{F^{\times}/(F^{\times})^{2}}$. Suppose that $a\in F^{\times}$ satisfies $\chi(1-a)=-1$ and $\chi(a)=1$. Then $[a]_{\chi}=0$ in $\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]_{\chi}$. \end{lema} \begin{flushright} $\Box$ \end{flushright} \begin{cor}\cite[Lemma 5.4, Corollary 5.5]{ArticlehomologyQprint}\label{ArticlehomologyQprint(1-l)a} Let $F$ be a field. Let $\chi\in \widehat{F^{\times}/(F^{\times})^{2}}$ with $\chi(-1)=1$. Suppose that $\ell\in F^{\times}$ satisfies $\chi(\ell)=-1$ and $\chi(1-\ell)=1$. Then $[a]_{\chi}=[(1-\ell)^{m}a]_{\chi}$ in $\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]_{\chi}$ for all $a\in \mathbb{P}^{1}(F)$ and all $m\in\mathbb{Z}$.
\end{cor} \begin{flushright} $\Box$ \end{flushright} \begin{Rem}\label{Remark[a]=[la]} For a field $F$, let $\chi\in \widehat{F^{\times}/(F^{\times})^{2}}$ be such that $\chi(-1)=1$. If $\ell\in F^{\times}$ satisfies $\chi(\ell)=-1$ and $\chi(1-\ell)=-1$, then $\chi\left(\frac{1}{\ell}\right)=-1$ and $\chi\left(1-\frac{1}{\ell}\right)=1$. Therefore, by Corollary \ref{ArticlehomologyQprint(1-l)a}, we can deduce the following for all $a\in \mathbb{P}^{1}(F)$: \begin{itemize} \item[a)] If $\chi(\ell)=1$ and $\chi(1-\ell)=-1$ then $[a]_{\chi}=[\ell a]_{\chi}$. \item[b)] If $\chi(\ell)=-1$ and $\chi(1-\ell)=-1$ then $[a]_{\chi}=\left[\left(1-\frac{1}{\ell}\right)a\right]_{\chi}$. \end{itemize} \end{Rem} From Corollary \ref{ArticlehomologyQprint(1-l)a}, we can deduce the following result: \begin{cor}\cite[Lemma 5.6]{ArticlehomologyQprint}\label{CorollarySumQprint} Let $F$ be a field with at least $4$ elements. Let $\chi\in \widehat{F^{\times}/(F^{\times})^{2}}$ with $\chi(-1)=1$. Suppose that $\ell\in F^{\times}$ satisfies $\chi(\ell)=-1$ and $\chi(1-\ell)=1$. Then $[a]_{\chi}=[a+t\ell]_{\chi}$ in $\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]_{\chi}$ for all $a\in F^{\times}$ and all $t\in\mathbb{Z}$. \end{cor} \begin{flushright} $\Box$ \end{flushright} Now let $R$ be a commutative ring and $x\in R$. Let us recall that $\mathbb{Z}[x]$ is the subring \begin{center} $\{a_{0}+a_{1}x+\cdots+a_{n}x^{n}\;|\; n\geq0,\;\; a_{i}\in\mathbb{Z}\}\subset R$. \end{center} Now $x\mathbb{Z}[x]=\{xp(x)\;|\; p(x)\in \mathbb{Z}[x] \}$; i.e. $x\mathbb{Z}[x]$ is the ideal of $\mathbb{Z}[x]\subset R$ generated by $x\in R$. \bigskip The following proposition generalizes Corollary \ref{CorollarySumQprint} from $t\in\mathbb{Z}\ell$ to $t\in\ell\mathbb{Z}[\ell]$: \begin{prop}\label{Propositionelement_lZ[l]} Let $F$ be a field with at least $4$ elements. Let $\chi\in \widehat{F^{\times}/(F^{\times})^{2}}$ with $\chi(-1)=1$, and suppose that $\ell\in F\setminus\{0,1\}$ satisfies $\chi(\ell)=-1$ and $\chi(1-\ell)=1$.
Then $[a]_{\chi}=[a+t]_{\chi}$ in $\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]_{\chi}$ for all $a\in F^{\times}$ and all $t\in \ell\mathbb{Z}[\ell]$. \end{prop} $\textbf{\emph{Proof}}.$ Let $t\in \ell\mathbb{Z}[\ell]$. Then there exist $m\geq1$ and $b_{1},\ldots, b_{m}\in\mathbb{Z}$ with $t=b_{1}\ell+\cdots+b_{m}\ell^{m}$. We prove the statement by induction on $m$. If $m=1$, the result follows from Corollary \ref{CorollarySumQprint}. Now suppose it is valid for all $m\leq k$; we prove the case $k+1$. First, note that $\chi\left(\frac{1}{1-\ell}\right)=1$ and $\chi\left(1-\frac{1}{1-\ell}\right)=\chi(-1)\chi(\ell)\chi\left(\frac{1}{1-\ell}\right)=-1$. Thus \begin{eqnarray} \nonumber [a]_{\chi} &= &\left[\frac{a}{1-\ell}\right]_{\chi} \mbox{ (by Remark \ref{Remark[a]=[la]})}\\ \nonumber &=&\left[\frac{a}{1-\ell}+f_{1}\ell+\cdots +f_{k}\ell^{k}\right]_{\chi} \mbox{ (for any $f_{1},\ldots, f_{k}\in\mathbb{Z}$, by the induction hypothesis)}\\ \nonumber &=&\left[a+(1-\ell)(f_{1}\ell+\cdots +f_{k}\ell^{k})\right]_{\chi}\mbox{ (by Corollary \ref{ArticlehomologyQprint(1-l)a})}\\ \nonumber &=&\left[a+(1-\ell)(f_{1}\ell+\cdots +f_{k}\ell^{k})+ c_{1}\ell+\cdots +c_{k}\ell^{k} \right]_{\chi}\mbox{ (for any $c_{1},\ldots, c_{k}\in\mathbb{Z}$, by the induction hypothesis)}\\ \nonumber &=&\left[a+ (f_{1}+c_{1})\ell+ (f_{2}-f_{1}+c_{2})\ell^{2}+\cdots +(f_{k}-f_{k-1}+c_{k})\ell^{k}-f_{k}\ell^{k+1} \right]_{\chi}. \end{eqnarray} Since $f_{1},\ldots, f_{k}, c_{1},\ldots, c_{k}$ are arbitrary integers, for any $b_{1},\ldots, b_{k+1}\in\mathbb{Z}$ we can solve the following system of equations: \begin{eqnarray} \nonumber f_{1}+c_{1} &=&b_{1}\\ \nonumber f_{2}-f_{1}+c_{2}&=&b_{2}\\ \nonumber &\vdots&\\ \nonumber f_{k}-f_{k-1}+c_{k}&=&b_{k}\\ \nonumber -f_{k}&=&b_{k+1} \end{eqnarray} Therefore $[a]_{\chi}=\left[a+b_{1}\ell+\cdots +b_{k+1}\ell^{k+1}\right]_{\chi}$ for any $b_{1},\ldots, b_{k+1}\in\mathbb{Z}$.
\begin{flushright} $\Box$ \end{flushright} \begin{lema}\label{Corollary_lZ[l]_NumberField} Let $F$ be a number field with $[F:\mathbb{Q}]=d$ and let $\mathcal{O}_{F}$ be its ring of algebraic integers. Let $\ell\in \mathcal{O}_{F}$ be of degree $d$ and $N=N_{F/\mathbb{Q}}(\ell)\in \mathbb{Z}$. Then \begin{center} $\ell\mathbb{Z}[\ell]=\mathbb{Z}N+\mathbb{Z}\ell+\cdots+\mathbb{Z}\ell^{d-1}\subseteq \mathcal{O}_{F}$. \end{center} \end{lema} $\textbf{\emph{Proof}}.$ Let $p(t)=N+b_{1}t+\cdots + b_{d-1}t^{d-1}+t^{d}$ be the minimal polynomial of $\ell$ (its constant term equals $N$ up to sign; since $\mathbb{Z}N=\mathbb{Z}(-N)$, we may assume it is $N$). Then \begin{center} $0=p(\ell)=N+b_{1}\ell+\cdots +b_{d-1}\ell^{d-1}+\ell^{d}$. \end{center} Since $b_{1}\ell+\cdots +b_{d-1}\ell^{d-1}+\ell^{d}\in \ell\mathbb{Z}[\ell]$, it follows that $N\in \ell\mathbb{Z}[\ell]$. Therefore \begin{center} $\mathbb{Z}N+\mathbb{Z}\ell+\cdots+\mathbb{Z}\ell^{d-1}\subseteq \ell\mathbb{Z}[\ell]$. \end{center} Now note that $\ell^{k}\in \mathbb{Z}N+\mathbb{Z}\ell+\cdots+\mathbb{Z}\ell^{d-1}$ for $k\in\{1,2,\ldots,d-1\}$, and from the equation $0=p(\ell)$ we get that $\ell^{d}\in \mathbb{Z}N+\mathbb{Z}\ell+\cdots+\mathbb{Z}\ell^{d-1}$. \bigskip Now, multiplying the equation $0=p(\ell)$ by $\ell$, we obtain that \begin{center} $\ell^{d+1}=-N\ell-b_{1}\ell^{2}-\cdots -b_{d-1}\ell^{d}$. \end{center} Since $\ell^{d}\in \mathbb{Z}N+\mathbb{Z}\ell+\cdots+\mathbb{Z}\ell^{d-1}$, it follows that $\ell^{d+1}\in\mathbb{Z}N+\mathbb{Z}\ell+\cdots+\mathbb{Z}\ell^{d-1}\subseteq \ell\mathbb{Z}[\ell]$. Proceeding recursively, we get that $\ell^{m}\in\mathbb{Z}N+\mathbb{Z}\ell+\cdots+\mathbb{Z}\ell^{d-1}\subseteq \ell\mathbb{Z}[\ell]$ for all $m\geq d+1$. Therefore it follows that $\ell\mathbb{Z}[\ell] \subseteq\mathbb{Z}N+\mathbb{Z}\ell+\cdots+\mathbb{Z}\ell^{d-1}$. \begin{flushright} $\Box$ \end{flushright} \begin{lema}\label{corollary_1_isintheset} Let $F$ be a number field with $[F:\mathbb{Q}]=d$ and let $\mathcal{O}_{F}$ be its ring of algebraic integers.
Let $\ell\in \mathcal{O}_{F}$ be of degree $d$ and $N=N_{F/\mathbb{Q}}(\ell)\in \mathbb{Z}$. Then $\mathbb{Z}[\ell] \subset\frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$. \end{lema} $\textbf{\emph{Proof}}.$ Let $p(t)=N+b_{1}t+\cdots +b_{d-1}t^{d-1}+t^{d}$ be the minimal polynomial of $\ell$ (normalized as in Lemma \ref{Corollary_lZ[l]_NumberField}). Then \begin{center} $0=p(\ell)=N+b_{1}\ell+\cdots +b_{d-1}\ell^{d-1}+\ell^{d}$. \end{center} Multiplying the last equation by $\ell^{-d}$, we get \begin{center} $0=\frac{N}{\ell^{d}}+\frac{b_{1}}{\ell^{d-1}}+\cdots +\frac{b_{d-1}}{\ell}+1$. \end{center} Since $\frac{N}{\ell^{d}}+\frac{b_{1}}{\ell^{d-1}}+\cdots +\frac{b_{d-1}}{\ell}\in \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$, it follows that $1\in \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$, whence we obtain that $\mathbb{Z}\subset \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$. \bigskip Now we can prove that $\mathbb{Z}[\ell] \subset\frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$. For that, it is enough to prove that $\ell^{m}\in \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$ for all $m\geq1$. We will prove this by induction on $m$. If $m=1$: since $1\in \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$, we can write $1=\frac{a_{1}}{\ell}+\frac{a_{2}}{\ell^{2}}+\cdots +\frac{a_{d}}{\ell^{d}}$ with $a_{1},a_{2},\ldots,a_{d}\in\mathbb{Z}$; multiplying this equation by $\ell$ we get \begin{center} $\ell=a_{1}+\frac{a_{2}}{\ell}+\cdots + \frac{a_{d-1}}{\ell^{d-2}}+\frac{a_{d}}{\ell^{d-1}}$. \end{center} Since $a_{1}\in\mathbb{Z}\subset \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$, it follows that $\ell\in \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$. Now suppose that $\ell^{m}\in \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$ for all $m\leq k$; we prove the case $m=k+1$. Multiplying the last equation by $\ell^{k}$, we get \begin{center} $\ell^{k+1}=a_{1}\ell^{k}+a_{2}\ell^{k-1}+\cdots + a_{d}\ell^{k-d+1}$.
\end{center} By the induction hypothesis (the terms involving nonpositive powers of $\ell$ lie in $\frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$ by the first part of the proof), it follows that $\ell^{k+1}\in \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$. Therefore, as required, $\mathbb{Z}[\ell] \subset\frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$. \begin{flushright} $\Box$ \end{flushright} \begin{Rem}\label{Remark_for_corollary_l_isintheset} From the proof of Lemma \ref{corollary_1_isintheset}, we have that $\frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$ is closed under multiplication by $\ell$. \end{Rem} \section{Fields with a valuation}\label{Section_Fields_with_a_valuation} Given a field $F$ and a surjective valuation $\upsilon:F^{\times}\rightarrow \Gamma$, where $\Gamma$ is a totally ordered additive abelian group, we let $\mathcal{O}_{\upsilon}:=\{x\in F^{\times}\;|\;\upsilon(x)\geq0\}\cup\{0\}$ be the associated valuation ring, with maximal ideal $\mathcal{M}_{\upsilon}=\{x\in F^{\times}\;|\;\upsilon(x)>0\}\cup\{0\}$, group of units $U_{\upsilon}=U:=\mathcal{O}_{\upsilon}\backslash\mathcal{M}_{\upsilon}$ and residue field $k=k(\upsilon):=\mathcal{O}_{\upsilon}/\mathcal{M}_{\upsilon}$. \smallskip Since $\Gamma$ is a torsion-free group, we have a short exact sequence of $\mathbb{F}_{2}$-vector spaces \begin{center} $\xymatrix{1\ar[r]&\frac{U}{U^{2}}\ar[r]&\frac{F^{\times}}{(F^{\times})^{2}}\ar[r]&\frac{\Gamma}{2}\ar[r]&1}$. \end{center} We have homomorphisms of commutative rings \begin{center} $\xymatrix{\mathbb{Z}\left[\frac{U}{U^{2}}\right]\ar@{^{(}->}[r] \ar@{>>}[d]&R_{F}\\ R_{k}&}$ \end{center} \smallskip Therefore, given an $R_{k}$-module $M$, we denote by $\mathrm{Ind}_{k}^{F}M$ the $R_{F}$-module $R_{F}\otimes_{\mathbb{Z}\left[U/U^{2}\right]}M$.
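For $F=\mathbb{Q}$ and the $p$-adic valuation $\upsilon_{p}$, all of the above is concrete: $\mathcal{O}_{\upsilon}$ consists of the fractions with denominator prime to $p$, and $U$ of those of valuation $0$. The sketch below (ours; the function names are illustrative) computes $\upsilon_{p}$ and checks that the sign $(-1)^{\upsilon_{p}(x)}$ depends only on the square class of $x$, so that it defines a character of $F^{\times}/(F^{\times})^{2}$:

```python
from fractions import Fraction

def val_p(x, p):
    """The p-adic valuation v_p: Q* -> Z."""
    x = Fraction(x)
    assert x != 0, "v_p is only defined on nonzero elements"
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return v

def in_valuation_ring(x, p):
    """Membership in O_v = {x in F* : v(x) >= 0} together with 0."""
    return x == 0 or val_p(x, p) >= 0

def chi_v(x, p):
    """The sign (-1)^{v_p(x)}; it is trivial on squares, hence constant on
    square classes."""
    return -1 if val_p(x, p) % 2 else 1
```

For instance `val_p(Fraction(3, 8), 2)` is $-3$, and `chi_v` is multiplicative and trivial on squares.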
We have the following result \cite[Section 5]{Articlediscretevaluation}: \begin{lema} There is a natural homomorphism of $R_{F}$-modules $S_{\upsilon}:\widetilde{\mathcal{RP}}(F)\rightarrow \mathrm{Ind}_{k}^{F}\widetilde{\mathcal{RP}}(k)$ given by \begin{center} $S_{\upsilon}([x])=\left\{ \begin{array}{ll} 1\otimes[\overline{x}], & \hbox{$\upsilon(x)=0$;} \\ 1\otimes C_{k}, & \hbox{$\upsilon(x)>0$;} \\ -(1\otimes C_{k}), & \hbox{$\upsilon(x)<0$.} \end{array} \right. $ \end{center} \end{lema} \begin{flushright} $\Box$ \end{flushright} \subsection{Discrete valuations} Suppose that $\upsilon:F^{\times}\rightarrow \mathbb{Z}$ is a discrete valuation on the field $F$ with residue field $k=k(\upsilon)$. Let $\chi_{\upsilon}:F^{\times}/(F^{\times})^{2}\rightarrow \mu_{2}$ denote the associated character defined by $\chi_{\upsilon}(a)=(-1)^{\upsilon(a)}$. For an abelian group $M$, we let $M\{\upsilon\}$ denote the $R_{F}$-module $R_{\chi_{\upsilon}}\otimes_{\mathbb{Z}}M$. Equivalently we equip $M$ with the $R_{F}$-module structure $\langle a\rangle m:=(-1)^{\upsilon(a)}m$ for all $a\in F^{\times}$ and $m\in M$. We have the following result: \begin{prop}\cite[Theorem 3.7]{ArticlehomologyQprint}\label{NaturalIsomorphismRP+(F)} Let $F$ be a field with discrete valuation $\upsilon:F^{\times}\rightarrow \mathbb{Z}$ and residue field $k$. 
Then we have a natural isomorphism \begin{center} $\xymatrix{\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]_{\chi_{\upsilon}}\ar[r]^{S_{\upsilon}}&\left(\mathrm{Ind}^{F}_{k}\mathcal{RP}_{+}(k)\left[\frac{1}{2}\right]\right)_{\chi_{\upsilon}}\ar[r]^{\cong}& \mathcal{P}(k)\left[\frac{1}{2}\right]\{\upsilon\} }$ \end{center} \end{prop} \begin{flushright} $\Box$ \end{flushright} For any field $F$ with discrete valuation $\upsilon$, we let $\overline{S}_{\upsilon}$ denote the composite $R_{F}$-module homomorphism \begin{center} $H_{3}\left(\mathrm{SL}_{2}(F), \mathbb{Z}\left[\frac{1}{2}\right]\right)\rightarrow \left(\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]\right)_{\chi_{\upsilon}}\cong\mathcal{P}(k(\upsilon))\left[\frac{1}{2}\right]\{\upsilon\}$. \end{center} By abuse of notation, we will use the same symbol to denote $\overline{S}_{\upsilon}$ restricted to $H_{3}\left(\mathrm{SL}_{2}(F), \mathbb{Z}\left[\frac{1}{2}\right]\right)_{0}$. \begin{Rem}\label{squareclassuniformizer} Let us recall that if $\pi$ is a uniformizer for the valuation $\upsilon$, then $\upsilon(\pi)=1$. Therefore the square class of $\pi$ acts as $-1$ on the factor $\mathcal{P}(k(\upsilon))\{\upsilon\}$ on the right. \end{Rem} \begin{Rec} Given a family $\mathcal{V}$ of discrete valuations of $F$, we obtain a map \begin{center} $\xymatrix{H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z} )\ar[r]& \mathcal{RP}_{+}(F)\ar[r]& \displaystyle\prod_{\upsilon\in \mathcal{V}}\mathcal{P}(k(\upsilon))\{\upsilon\}}$. \end{center} Note that when we restrict to $H_{3}(\mathrm{SL}_{2}(F),\mathbb{Z})_{0}$ and tensor with $\mathbb{Z}\left[\frac{1}{2}\right]$, the image lies in the direct sum instead (for more details see \cite[Theorem 5.1]{ArticleBloch}).
\end{Rec} Let $\mathcal{V}$ be a set of discrete valuations; then we get a homomorphism $\overline{S}=\{\overline{S}_{\upsilon}\}_{\upsilon\in\mathcal{V}}$ of $R_{F}$-modules \begin{center} $\overline{S}:H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)_{0}\rightarrow \displaystyle\bigoplus_{\upsilon\in\mathcal{V}}\mathcal{P}(k(\upsilon))\{\upsilon\}$. \end{center} Recall that if $\upsilon(x)$ is odd, then $\langle x\rangle$ acts as $-1$ on $M\{\upsilon\}$. \section{Quadratic fields}\label{Section_Quadratic_fields} Let us recall that if $a,b\in R$ then $a$ is associated to $b$ ($a\sim b$) if there exists $u\in R^{\times}$ such that $a=ub$. For the field $\mathbb{Q}(\sqrt{m})$, we have the ring of algebraic integers $\mathcal{O}_{\mathbb{Q}(\sqrt{m})}=\mathbb{Z}[\omega_{m}]=\{a+\omega_{m}b\;|\; a,b\in\mathbb{Z}\}$ where \begin{center} $\omega_{m}=\left\{ \begin{array}{ll} \sqrt{m}, & \hbox{$m\;\equiv\;2,3\;(\mathrm{mod}\;4)$;} \\ \frac{1+\sqrt{m}}{2}, & \hbox{$m\;\equiv\;1\;(\mathrm{mod}\;4)$.} \end{array} \right. $ \end{center} For $\alpha\in \mathbb{Q}(\sqrt{m})$, let $\overline{\alpha}$ denote its conjugate. Then if $\alpha=a+\omega_{m}b\in \mathcal{O}_{\mathbb{Q}(\sqrt{m})}$, we get that: \begin{center} $\overline{\alpha}=\left\{ \begin{array}{ll} a-\omega_{m}b, & \hbox{$m\;\equiv\;2,3\;(\mathrm{mod}\;4)$;} \\ (a+b)-\omega_{m}b, & \hbox{$m\;\equiv\;1\;(\mathrm{mod}\;4)$.} \end{array} \right.$ \end{center} Therefore in any quadratic field $\mathbb{Q}(\sqrt{m})$, the norm of $\alpha\in\mathcal{O}_{\mathbb{Q}(\sqrt{m})}$ is given by \begin{center} $N_{\mathbb{Q}(\sqrt{m})/\mathbb{Q}}(\alpha)=N(\alpha):=\alpha\overline{\alpha}=\left\{ \begin{array}{ll} a^{2}-mb^{2}, & \hbox{$m\;\equiv\;2,3\;(\mathrm{mod}\;4)$;} \\ a^{2}+ab+\left(\frac{1-m}{4}\right)b^{2}, & \hbox{$m\;\equiv\;1\;(\mathrm{mod}\;4)$.} \end{array} \right.$
\end{center} Let us recall that the \emph{discriminant} $\delta$ of any quadratic field $\mathbb{Q}(\sqrt{m})$ is: \begin{center} $\delta=\left\{ \begin{array}{ll} m, & \hbox{$m\;\equiv\;1\;(\mathrm{mod}\;4)$;} \\ 4m, & \hbox{$otherwise$.} \end{array} \right. $ \end{center} Now, the prime elements in $\mathcal{O}_{\mathbb{Q}(\sqrt{m})}$ are found by applying the following three primality criteria: \begin{itemize} \item If $N(\alpha)$ is a rational prime, then $\alpha$ is a prime. \item An odd rational prime $p$ is a prime in $\mathcal{O}_{\mathbb{Q}(\sqrt{m})}$ if and only if $p$ does not divide $\delta$ and $\left(\frac{\delta}{p}\right)=-1$. \item $2$ is prime in $\mathcal{O}_{\mathbb{Q}(\sqrt{m})}$ if and only if $m\;\equiv\;5\;(\mathrm{mod}\;8)$. \end{itemize} We introduce the following notation. For $\alpha=a+b\omega_{m}\in \mathbb{Z}[\omega_{m}]$, we let $R(\alpha):=a$ and $I(\alpha):=b$. Then with this notation we have: \begin{itemize} \item[i)] If $m\;\equiv\;2,3\;(\mathrm{mod}\;4)$ then \begin{center} $R(\overline{\alpha})=R(\alpha)$\;\;\; and\;\;\; $I(\overline{\alpha})=-I(\alpha)$. \end{center} \item[ii)] If $m\;\equiv\;1\;(\mathrm{mod}\;4)$ then \begin{center} $R(\overline{\alpha})=R(\alpha)+I(\alpha)$\;\;\; and\;\;\; $I(\overline{\alpha})=-I(\alpha)$. \end{center} \end{itemize} Thus \begin{center} $\overline{\alpha}=\left\{ \begin{array}{ll} R(\alpha)-\omega_{m}I(\alpha), & \hbox{$m\;\equiv\;2,3\;(\mathrm{mod}\;4)$;} \\ (R(\alpha)+I(\alpha))-\omega_{m}I(\alpha), & \hbox{$m\;\equiv\;1\;(\mathrm{mod}\;4)$.} \end{array} \right.$ \end{center} We will require the following observation below. \begin{lema}\label{R(alpha)_and__N(alpha)_are_not_coprimes} Let $F=\mathbb{Q}(\sqrt{-m})$ with $m>0$ be such that $m\;\equiv\;1,2\;(\mathrm{mod}\;4)$ and let $\alpha=R(\alpha)+\omega_{m}I(\alpha)\in \mathcal{O}_{\mathbb{Q}(\sqrt{-m})}$ be a prime element such that $N(\alpha)=p$ is a rational prime. If $p$ divides $R(\alpha)$ or $m$ in $\mathbb{Z}$ then $m=p$.
In particular $\alpha\sim\sqrt{-p}$. \end{lema} $\textbf{\emph{Proof}}.$ If $p|R(\alpha)$, we have $R(\alpha)=pn$ for some $n\in\mathbb{Z}$. Thus \begin{center} $p=N(\alpha)=p^{2}n^{2}+m(I(\alpha))^{2}$. \end{center} From the last equation, we get $m=p$ and $R(\alpha)=0$, $I(\alpha)=\pm1$, as otherwise $N(\alpha)=(R(\alpha))^{2}+m(I(\alpha))^{2}>p$. \smallskip If $p|m$ then $m=p\ell$ for some $\ell\in\mathbb{Z}$. Thus $p=N(\alpha)=(R(\alpha))^{2}+p\ell(I(\alpha))^{2}$. It follows that $p|R(\alpha)$ and, from above, we get our result. \begin{flushright} $\Box$ \end{flushright} \section{The main theorem}\label{Section_proof_of_the_main_theorem} Let $F$ be a number field. We consider $\Pi$, the set of discrete valuations on $F$. As we have seen, the maps $S_{p}$ induce \begin{center} $\overline{S}:H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)_{0}\cong\mathcal{I}_{F}\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]\rightarrow \displaystyle\bigoplus_{p\in\Pi}\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}$ \end{center} In \cite{ArticlehomologyQprint}, Hutchinson proved that this is an isomorphism for $F=\mathbb{Q}$. Our main theorem (see Proposition \ref{Main_Proposition_Isomorphism}) is that it is an isomorphism for $F=\mathbb{Q}(\sqrt{-m})$ for $m\in \{2,3,7,11\}$. Furthermore we will prove a slightly weaker statement for the case $m=1$. In all cases, we will use the local-global principle for characters; i.e. we will prove that \begin{center} $\overline{S}_{\chi}:H_{0}\left(\mathcal{O}^{\times}_{F}, \mathcal{I}_{F}\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]\right)_{\chi}\rightarrow \displaystyle\bigoplus_{p\in\Pi}\left(\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\right)_{\chi}$ \end{center} is an isomorphism for all $\chi\in \widehat{F^{\times}/(F^{\times})^{2}}$.
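Lemma \ref{R(alpha)_and__N(alpha)_are_not_coprimes} is also easy to sanity-check numerically: for $m\equiv1,2\;(\mathrm{mod}\;4)$ the norm form is $N(\alpha)=R(\alpha)^{2}+m\,I(\alpha)^{2}$, and one can brute-force the statement over a finite box. The following Python sketch (ours, purely illustrative; it is not part of the proof) does exactly that.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def check_lemma(m_bound: int = 30, box: int = 20) -> bool:
    """Check on a finite sample: whenever N(alpha) = a^2 + m*b^2 is a
    rational prime p dividing a = R(alpha), we must have m = p, a = 0
    and b = I(alpha) = +-1, i.e. alpha ~ sqrt(-p)."""
    for m in range(1, m_bound):
        if m % 4 not in (1, 2):
            continue
        for a in range(-box, box + 1):
            for b in range(-box, box + 1):
                n = a * a + m * b * b
                if is_prime(n) and a % n == 0:
                    if not (m == n and a == 0 and abs(b) == 1):
                        return False
    return True
```

Of course this only tests finitely many cases; the proof above covers all of them.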
\smallskip We will use the following observation repeatedly below: \begin{lema}\label{Lemma_Action_of_square_classes_different_sign_characters} Let $M$ be an $R_{F}$-module. Let $\chi\in\widehat{F^{\times}/(F^{\times})^{2}}$ and let $\varepsilon\in\{-1, 1\}$. Suppose there exists $a\in F^{\times}$ such that $\langle a\rangle$ acts as $\varepsilon$ on $M$ and $\chi(\langle a\rangle)=-\varepsilon$. Then $M\left[\frac{1}{2}\right]_{\chi}=0$. \end{lema} $\textbf{\emph{Proof}}.$ Let $m\in M$. In $M\left[\frac{1}{2}\right]_{\chi}$ we get \begin{center} $-\varepsilon m_{\chi}=\chi(\langle a\rangle) m_{\chi}=\langle a\rangle m_{\chi}=\varepsilon m_{\chi}$. \end{center} Then $2\varepsilon m_{\chi}=0$. Therefore $m_{\chi}=0$ for all $m_{\chi}\in M\left[\frac{1}{2}\right]_{\chi}$. \begin{flushright} $\Box$ \end{flushright} To start with, we consider the trivial character $\chi_{0}$. By definition $\mathcal{I}_{F}\mathcal{RP}_{+}(F)$ is generated by elements of the form $\langle\langle x\rangle\rangle[y]$. Note that the square class of $x$ acts as multiplication by $-1$ on such an element (since $\langle x\rangle\langle\langle x\rangle\rangle=-\langle\langle x\rangle\rangle$). On the other hand, $\langle x\rangle$ acts as $1$ on any element of $\mathcal{I}_{F}\mathcal{RP}_{+}(F)_{\chi_{0}}$. By Lemma \ref{Lemma_Action_of_square_classes_different_sign_characters} we get $\left(\mathcal{I}_{F}\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]\right)_{\chi_{0}}=0$. \smallskip Similarly for $\mathcal{P}(k(\upsilon_{p}))\{p\}$, the element $\langle p\rangle$ acts as $-1$ (see Remark \ref{squareclassuniformizer}) and since $\chi_{0}(p)=1$, by Lemma \ref{Lemma_Action_of_square_classes_different_sign_characters} we deduce that $\left(\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\right)_{\chi_{0}}=0$. \bigskip Next, we consider characters $\chi$ satisfying $\chi(-1)=-1$.
Let us recall that $\langle-1\rangle$ acts as multiplication by $1$ on $\mathcal{I}_{F}\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]$ by Proposition \ref{PropsitionRP+(F)andH3(SL2)} and that $\langle-1\rangle$ also acts trivially on $\mathcal{P}(k(\upsilon_{p}))$, since $-1$ is a unit for all $p\in\Pi$. By Lemma \ref{Lemma_Action_of_square_classes_different_sign_characters}, it follows that \begin{center} $\left(\mathcal{I}_{F}\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]\right)_{\chi}=0= \displaystyle\bigoplus_{p\in\Pi}\left(\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\right)_{\chi}$. \end{center} Hence we only need to consider $\chi\in \widehat{F^{\times}/(F^{\times})^{2}}$ such that $\chi\not=\chi_{0}$ and $\chi(-1)=1$. \begin{lema}\label{LemmaTrivialActionOfSquareClass} Let $F=\mathbb{Q}(\sqrt{-m})$ be an imaginary quadratic number field with $m>0$ and $m\not=1$. Also let $\mathcal{O}_{F}$ be the ring of integers of $F$. Then $\langle u\rangle$ acts trivially on $H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)$ for all $u\in U=(\mathcal{O}_{F})^{\times}$. \end{lema} $\textbf{\emph{Proof}}.$ We have the following cases: \begin{itemize} \item[a)] If $m\not=3$, then $U=(\mathcal{O}_{F})^{\times}=\{1,-1\}$. Since $\langle -1\rangle$ acts trivially on $H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)$, the result follows. \item[b)] If $m=3$, then $U=(\mathcal{O}_{F})^{\times}=\{\pm1,\pm\omega,\pm\omega^{2}\}$ where $\omega^{3}=1$. It follows that $\omega=\omega^{4}$ and therefore $\langle\omega\rangle=\langle1\rangle$. Clearly we have that $\langle\omega^{2}\rangle=\langle1\rangle$ and therefore we get our result. \end{itemize} \begin{flushright} $\Box$ \end{flushright} Now let $F=\mathbb{Q}(\sqrt{-m})$ be an imaginary quadratic number field with $m\not=1$ and let $\chi\in \widehat{F^{\times}/(F^{\times})^{2}}$.
If $\chi(u)=-1$ for some $u\in (\mathcal{O}_{F})^{\times}$, then by Lemma \ref{Lemma_Action_of_square_classes_different_sign_characters} and Lemma \ref{LemmaTrivialActionOfSquareClass}, we get $\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]_{\chi}=0$. Furthermore, by Lemma \ref{Lemma_Action_of_square_classes_different_sign_characters} we also deduce $\left(\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\right)_{\chi}=0$. \bigskip If the class number of $F$ is $1$ then each discrete valuation $\upsilon$ is associated to a prime element $p$ and \begin{center} $\frac{F^{\times}}{(F^{\times})^{2}(\mathcal{O}_{F})^{\times}}\cong\displaystyle\bigoplus_{p\in\Pi}p^{\mathbb{Z}/2}$. \end{center} (In fact, if $F$ has odd class number this latter isomorphism still holds, if for each prime ideal $\mathcal{P}$ we choose $\mathcal{P}^{m}=\langle p\rangle$ for some odd $m$.) \smallskip It follows that \begin{center} $\widehat{\frac{F^{\times}}{(F^{\times})^{2}(\mathcal{O}_{F})^{\times}}}\cong\widehat{\displaystyle\bigoplus_{p\in\Pi}p^{\mathbb{Z}/2}}=\mathrm{Hom}_{\mathbb{Z}}\left(\displaystyle\bigoplus_{p\in\Pi}p^{\mathbb{Z}/2}, \mu_{2}\right)=\mathrm{Hom}_{\mathbf{Sets}}(\Pi,\mu_{2})$. \end{center} Thus $\widehat{\frac{F^{\times}}{(F^{\times})^{2}(\mathcal{O}_{F})^{\times}}}$ is naturally parameterized by the subsets of the set $\Pi$: If $S\subseteq\Pi$ then the corresponding character $\chi_{S}$ is defined by \begin{center} $\chi_{S}(p)=\left\{ \begin{array}{ll} -1, & \hbox{$p\in S$;} \\ 1, & \hbox{$p\not\in S$.} \end{array} \right. $ \end{center} for all $p\in \Pi$ or, equivalently, letting $\mathcal{L}(x):=\displaystyle\sum_{p\in S}\upsilon_{p}(x)$, \begin{center} $\chi_{S}(x)=(-1)^{\mathcal{L}(x)}$ \end{center} for all $x\in F^{\times}$. Conversely, any character $\chi$ is equal to $\chi_{S}$ where \begin{center} $S:=\mathrm{Supp}(\chi):=\{p\in \Pi \;|\; \chi(p)=-1\}$.
\end{center} Thus, for a prime element $p$ in $\mathcal{O}_{F}$, $\chi_{p}$ is the unique character satisfying $\mathrm{Supp}(\chi_{p})=\{p\}$. \bigskip We consider three kinds of characters: \begin{itemize} \item $\chi=\chi_{0}$ ($\mathrm{Supp}(\chi)=\emptyset$). \item $\chi=\chi_{p}$ for some prime element $p\in \mathcal{O}_{F}$; i.e. $|\mathrm{Supp}(\chi)|=1$. \item $|\mathrm{Supp}(\chi)|\geq2$. \end{itemize} The following lemmas are immediate from the definition of the $R_{F}$-structure on $\mathcal{P}(k(\upsilon_{p}))\{p\}$. \begin{lema}\label{LemmaQuotientCharacterBlochgroup} Let $\chi\in\widehat{F^{\times}/(F^{\times})^{2}}$. Let $p$ be a prime element. Then \begin{center} $\left(\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\right)_{\chi}=\left\{ \begin{array}{ll} \mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}, & \hbox{$\chi=\chi_{p}$;} \\ 0, & \hbox{$otherwise$.} \end{array} \right. $ \end{center} \end{lema} $\textbf{\emph{Proof}}.$ If $\chi\not=\chi_{p}$, then there exists $\langle a\rangle\in F^{\times}/(F^{\times})^{2}$ such that $\chi(\langle a\rangle)\not=\chi_{p}(\langle a\rangle)$; i.e. $\chi(\langle a\rangle)=-\chi_{p}(\langle a\rangle)$. Therefore, the element $\langle a\rangle$ acts on $\left(\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\right)_{\chi}$ as multiplication by $\chi(\langle a\rangle)$ and also as multiplication by $\chi_{p}(\langle a\rangle)=-\chi(\langle a\rangle)$. By Lemma \ref{Lemma_Action_of_square_classes_different_sign_characters} we get $\left(\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\right)_{\chi}=0$. \smallskip On the other hand, when $\chi=\chi_{p}$, we have $\mathcal{P}(k(\upsilon_{p}))\{p\}=\mathcal{R}_{\chi_{p}}\otimes\mathcal{P}(k(\upsilon_{p}))= \mathcal{P}(k(\upsilon_{p}))_{\chi_{p}}$ by definition.
\begin{flushright} $\Box$ \end{flushright} \begin{cor}\label{CorollaryQuotientCharacterBlochgroupDirectSum} For $\chi\in\widehat{F^{\times}/(F^{\times})^{2}}$, we get \begin{center} $\left(\displaystyle\bigoplus_{p\in\Pi}\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\right)_{\chi}=\left\{ \begin{array}{ll} \mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}, & \hbox{$\chi=\chi_{p}\; for\; some\; prime\; element \; p$;} \\ 0, & \hbox{$otherwise$.} \end{array} \right. $ \end{center} \end{cor} \begin{flushright} $\Box$ \end{flushright} The following lemmas will allow us to prove that $\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]_{\chi}=0$ when $|\mathrm{Supp}(\chi)|\geq2$ (see Proposition \ref{Proposition_RP+(F)=0_Quotient_Characters}). \begin{lema}\label{Z[omega_m]contained_in_sum} Let $F=\mathbb{Q}(\sqrt{m})$ be a quadratic number field, let $\alpha\not\sim\beta$ be prime elements of $\mathcal{O}_{F}=\mathbb{Z}[\omega_{m}]$. Also let $x\in\{\alpha, \alpha^{-1}\}$, $y\in\{\omega_{m}\alpha, (\omega_{m}\alpha)^{-1}\}$, $z\in\{\beta,\beta^{-1}\}$ and $w\in\{\omega_{m}\beta,(\omega_{m}\beta)^{-1}\}$. Then: \begin{center} $\mathcal{O}_{F}\subseteq x\mathbb{Z}[x]+y\mathbb{Z}[y]+z\mathbb{Z}[z]+w\mathbb{Z}[w]$. \end{center} \end{lema} $\textbf{\emph{Proof}}.$ Note that by Lemma \ref{corollary_1_isintheset} and Remark \ref{Remark_for_corollary_l_isintheset}, $\ell\mathbb{Z}[\ell]\subseteq \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$ for all nonzero $\ell\in \mathbb{Z}[\omega_{m}]$. So it is enough to prove the result for the case when $x=\alpha$, $y=\omega_{m}\alpha$, $z=\beta$ and $w=\omega_{m}\beta$. We have the following cases: \begin{itemize} \item[a)] $\alpha=p\in \mathbb{Z}$ and $\beta=q\in \mathbb{Z}$; i.e. $\alpha$ and $\beta$ are primes in $\mathbb{Z}$ that are inert in $\mathcal{O}_{F}$. \smallskip Note that $p\mathbb{Z}[p]=p\mathbb{Z}$. Thus \begin{eqnarray} \nonumber p\mathbb{Z}[p]+ q\mathbb{Z}[q] &=&p\mathbb{Z}+q\mathbb{Z}\\ \nonumber &=&\mathbb{Z}.
\end{eqnarray} Also note that $\omega_{m}p\mathbb{Z}[\omega_{m}p]=p^{2}N(\omega_{m})\mathbb{Z}+p\omega_{m}\mathbb{Z}$ from Lemma \ref{Corollary_lZ[l]_NumberField}. Therefore \begin{eqnarray} \nonumber \omega_{m}p\mathbb{Z}[\omega_{m}p]+ \omega_{m}q\mathbb{Z}[\omega_{m}q]&=&p^{2}N(\omega_{m})\mathbb{Z}+p\omega_{m}\mathbb{Z}+q^{2}N(\omega_{m})\mathbb{Z}+q\omega_{m}\mathbb{Z}\\ \nonumber &\supset& p\omega_{m}\mathbb{Z}+ q\omega_{m}\mathbb{Z}\\ \nonumber &=&\omega_{m}\mathbb{Z} \end{eqnarray} Hence \begin{eqnarray} \nonumber p\mathbb{Z}[p]+ q\mathbb{Z}[q]+\omega_{m}p\mathbb{Z}[\omega_{m}p]+ \omega_{m}q\mathbb{Z}[\omega_{m}q] &\supset& \mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &=&\mathcal{O}_{F}. \end{eqnarray} \item[b)] $N(\alpha)$ is a prime number and $\beta=p\in \mathbb{Z}$; i.e. $\beta$ is a prime in $\mathbb{Z}$ that is inert in $\mathcal{O}_{F}$. \smallskip Let $\alpha=R(\alpha)+\omega_{m}I(\alpha)$. Note that \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]&=&N(\alpha)\mathbb{Z}+\alpha\mathbb{Z}\\ \nonumber &=&\alpha(\overline{\alpha}\mathbb{Z}+\mathbb{Z})\\ \nonumber &=&\alpha(\mathbb{Z}+I(\overline{\alpha})\omega_{m}\mathbb{Z})\\ \nonumber &\supset&\alpha\mathbb{Z}. \end{eqnarray} Replacing $\alpha$ by $\omega_{m}\alpha$, we get that \begin{eqnarray} \nonumber \omega_{m}\alpha\mathbb{Z}[\omega_{m}\alpha]&=&N(\omega_{m})N(\overline{\alpha})\mathbb{Z}+\alpha\omega_{m}\mathbb{Z}\\ \nonumber &=&\alpha(N(\omega_{m})\overline{\alpha}\mathbb{Z}+\omega_{m}\mathbb{Z})\\ \nonumber &=&\alpha(\mathrm{R}(\overline{\alpha})N(\omega_{m})\mathbb{Z}+I(\overline{\alpha})N(\omega_{m})\omega_{m}\mathbb{Z}+\omega_{m}\mathbb{Z})\\ \nonumber &\supset&\alpha\omega_{m}\mathbb{Z}. \end{eqnarray} Therefore \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]+\omega_{m}\alpha\mathbb{Z}[\omega_{m}\alpha]&\supset&\alpha\mathbb{Z}+\alpha\omega_{m}\mathbb{Z}\\ \nonumber &=&\alpha\mathcal{O}_{F} \end{eqnarray} Recall that $p\mathbb{Z}[p]=p\mathbb{Z}$.
We get that \begin{eqnarray} \nonumber p\mathbb{Z}[p]+ \omega_{m}p\mathbb{Z}[\omega_{m}p]&=&p\mathbb{Z}+ p^{2}N(\omega_{m})\mathbb{Z}+\omega_{m}p\mathbb{Z}\\ \nonumber &\supset&p\mathbb{Z}+\omega_{m}p\mathbb{Z}\\ \nonumber &=&p(\mathbb{Z}+\omega_{m}\mathbb{Z})\\ \nonumber &=&p\mathcal{O}_{F}. \end{eqnarray} Therefore \begin{eqnarray} \nonumber p\mathbb{Z}[p]+ \omega_{m}p\mathbb{Z}[\omega_{m}p]+\alpha\mathbb{Z}[\alpha]+\omega_{m}\alpha\mathbb{Z}[\omega_{m}\alpha]&\supset&p\mathcal{O}_{F}+\alpha\mathcal{O}_{F}\\ \nonumber &=&\langle p, \alpha\rangle\\ \nonumber &=&\mathcal{O}_{F}. \end{eqnarray} \item[c)] $N(\alpha), N(\beta)$ are prime integers. \smallskip From case $b)$ we get that: \begin{center} $\alpha\mathbb{Z}[\alpha]+\omega_{m}\alpha\mathbb{Z}[\omega_{m}\alpha]\supset \alpha\mathcal{O}_{F}$ \end{center} and \begin{center} $\beta\mathbb{Z}[\beta]+\omega_{m}\beta\mathbb{Z}[\omega_{m}\beta]\supset \beta\mathcal{O}_{F}$ \end{center} Therefore \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]+\omega_{m}\alpha\mathbb{Z}[\omega_{m}\alpha]+ \beta\mathbb{Z}[\beta]+\omega_{m}\beta\mathbb{Z}[\omega_{m}\beta]&\supset&\alpha\mathcal{O}_{F}+\beta\mathcal{O}_{F}\\ \nonumber &=&\langle\alpha,\beta\rangle\\ \nonumber &=&\mathcal{O}_{F}. \end{eqnarray} \end{itemize} \begin{flushright} $\Box$ \end{flushright} \begin{lema}\label{Z[omega_m]contained_in_sum for_X(omega)=-1} Let $F$ be an imaginary quadratic number field, let $\alpha\not\sim\beta$ be prime elements of $\mathcal{O}_{F}$. Also let $x\in\{\alpha,\alpha^{-1}\}$, $y\in\{\beta,\beta^{-1}\}$ and $z\in\{\omega_{m},\omega_{m}^{-1}\}$. Then: \begin{center} $\mathcal{O}_{F}\subseteq x\mathbb{Z}[x]+y\mathbb{Z}[y]+z\mathbb{Z}[z]$. 
\end{center} \end{lema} $\textbf{\emph{Proof}}.$ Similarly to Lemma \ref{Z[omega_m]contained_in_sum}, by Lemma \ref{corollary_1_isintheset} and Remark \ref{Remark_for_corollary_l_isintheset}, $\ell\mathbb{Z}[\ell]\subseteq \frac{1}{\ell}\mathbb{Z}\left[\frac{1}{\ell}\right]$ for all nonzero $\ell\in \mathbb{Z}[\omega_{m}]$. So it is enough to prove the result for the case when $x=\alpha$, $y=\beta$ and $z=\omega_{m}$. We have the following cases: \begin{itemize} \item[a)] $\alpha=p\in \mathbb{Z}$ and $\beta=q\in \mathbb{Z}$; i.e. $\alpha$ and $\beta$ are primes in $\mathbb{Z}$ that are inert in $\mathcal{O}_{F}$. \smallskip Note that $p\mathbb{Z}[p]=p\mathbb{Z}$. It follows that $p\mathbb{Z}[p]+q\mathbb{Z}[q]=p\mathbb{Z}+q\mathbb{Z}=\mathbb{Z}$. Hence \begin{eqnarray} \nonumber p\mathbb{Z}[p]+q\mathbb{Z}[q]+ \omega_{m}\mathbb{Z}[\omega_{m}]&=&\mathbb{Z}+N(\omega_{m})\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &\supset&\mathbb{Z}+ \omega_{m}\mathbb{Z}\\ \nonumber &=&\mathcal{O}_{F}. \end{eqnarray} \item[b)] $N(\alpha)$ is a prime number and $\beta=p\in \mathbb{Z}$; i.e. $\beta$ is a prime in $\mathbb{Z}$ that is inert in $\mathcal{O}_{F}$. \smallskip Let $\alpha=R(\alpha)+\omega_{m}I(\alpha)$. Since $\alpha\not\sim \beta=p$, it follows that $N(\alpha)\not=p$ and thus $N(\alpha)\mathbb{Z}+p\mathbb{Z}=\mathbb{Z}$. Therefore \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]+\beta\mathbb{Z}[\beta]+\omega_{m}\mathbb{Z}[\omega_{m}]&=&N(\alpha)\mathbb{Z}+\alpha\mathbb{Z}+p\mathbb{Z}+N(\omega_{m})\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &=&\left(N(\alpha)\mathbb{Z}+p\mathbb{Z}\right)+N(\omega_{m})\mathbb{Z}+\alpha\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &=&\mathbb{Z}+N(\omega_{m})\mathbb{Z}+\alpha\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &\supset&\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &=&\mathcal{O}_{F}. \end{eqnarray} \item[c)] $N(\alpha), N(\beta)$ are prime integers. \smallskip We will consider the following cases: \begin{itemize} \item[1)] $N(\alpha)\not=N(\beta)$.
Then \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]+\beta\mathbb{Z}[\beta]&=& N(\alpha)\mathbb{Z}+\alpha \mathbb{Z}+N(\beta)\mathbb{Z}+\beta\mathbb{Z}\\ \nonumber &=&\mathbb{Z}+ \alpha \mathbb{Z}+\beta\mathbb{Z}\\ \nonumber &\supset&\mathbb{Z} \end{eqnarray} Therefore \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]+\beta\mathbb{Z}[\beta]+ \omega_{m}\mathbb{Z}[\omega_{m}]&\supset&\mathbb{Z}+ N(\omega_{m})\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &\supset&\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &=&\mathcal{O}_{F}. \end{eqnarray} \item[2)] $N(\alpha)=N(\beta)=p$. Then for $\alpha=R(\alpha)+I(\alpha)\omega_{m}$, we have $\beta=\overline{\alpha}$. We consider the following subcases: \begin{itemize} \item[a)] $m\;\equiv\;1\;(\mathrm{mod}\;4)$. Then $\beta=(R(\alpha)+I(\alpha))-I(\alpha)\omega_{m}$. Thus \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]+\beta\mathbb{Z}[\beta]&=&N(\alpha)\mathbb{Z}+\alpha\mathbb{Z}+N(\beta)\mathbb{Z}+\beta\mathbb{Z}\\ \nonumber &=& N(\alpha)\mathbb{Z}+\alpha\mathbb{Z}+\beta\mathbb{Z} \;\;\;\mbox{(Since $N(\alpha)=N(\beta)$)}\\ \nonumber &=&N(\alpha)\mathbb{Z}+ R(\alpha)\mathbb{Z}+ I(\alpha)\mathbb{Z}+ I(\alpha)\omega_{m}\mathbb{Z}\\ \nonumber &\supset&\mathbb{Z} + I(\alpha)\omega_{m}\mathbb{Z}\;\;\; \mbox{(Since $p\not|R(\alpha)$ or $p\not|I(\alpha)$ )}\\ \nonumber &\supset&\mathbb{Z}. \end{eqnarray} Therefore \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]+\beta\mathbb{Z}[\beta]+ \omega_{m}\mathbb{Z}[\omega_{m}]&\supset&\mathbb{Z}+ N(\omega_{m})\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &\supset&\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &=&\mathcal{O}_{F}. \end{eqnarray} \item[b)] $m\;\equiv\;2,3\;(\mathrm{mod}\;4)$. Then $\beta=R(\alpha)-I(\alpha)\omega_{m}$. 
Hence \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]+\beta\mathbb{Z}[\beta]&=&N(\alpha)\mathbb{Z}+\alpha\mathbb{Z}+N(\beta)\mathbb{Z}+\beta\mathbb{Z}\\ \nonumber &=& N(\alpha)\mathbb{Z}+\alpha\mathbb{Z}+\beta\mathbb{Z} \;\;\;\mbox{(Since $N(\alpha)=N(\beta)$)}\\ \nonumber &=&N(\alpha)\mathbb{Z}+ R(\alpha)\mathbb{Z}+ I(\alpha)\omega_{m}\mathbb{Z}. \end{eqnarray} Note that either $N(\alpha)$ and $R(\alpha)$ are coprime or $N(\alpha)$ and $N(\omega_{m})$ are coprime, since otherwise by Lemma \ref{R(alpha)_and__N(alpha)_are_not_coprimes} we would get $\alpha\sim\beta$. Hence it follows that \begin{eqnarray} \nonumber \alpha\mathbb{Z}[\alpha]+\beta\mathbb{Z}[\beta]+ \omega_{m}\mathbb{Z}[\omega_{m}]&=&N(\alpha)\mathbb{Z}+ R(\alpha)\mathbb{Z}+ I(\alpha)\omega_{m}\mathbb{Z} + N(\omega_{m})\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &\supset&\mathbb{Z}+\omega_{m}\mathbb{Z}\\ \nonumber &=&\mathcal{O}_{F}. \end{eqnarray} \end{itemize} \end{itemize} \end{itemize} \begin{flushright} $\Box$ \end{flushright} \begin{prop}\label{Propositionsumalgebraiccintegergaussian} Let $F$ be an imaginary quadratic number field of class number $1$ and let $\chi\in\widehat{\frac{F^{\times}}{(F^{\times})^{2}(\mathcal{O}_{F})^{\times}}}$. Suppose that $|\mathrm{Supp}(\chi)|\geq2$. Then \begin{center} $[a]_{\chi}=[a+t]_{\chi}$ \end{center} in $\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]_{\chi}$ for all $a\in F$ and $t\in\mathcal{O}_{F}$. \end{prop} $\textbf{\emph{Proof}}.$ Let $p,q\in \mathrm{Supp}(\chi)$. We consider the following cases: \begin{itemize} \item[a)] $\chi(\omega_{m})=1$. \smallskip Then $\chi(\omega_{m}p)=\chi(\omega_{m}q)=-1$.
Therefore, depending on the values of $\chi(1-p)$, $\chi(1-q)$, $\chi(1-\omega_{m}p)$ and $\chi(1-\omega_{m}q)$, by Remark \ref{Remark[a]=[la]} there is a choice of $\epsilon_{p},\epsilon_{q},\epsilon'_{p},\epsilon'_{q}\in \{\pm 1\}$ such that each of $\ell=p^{\epsilon_{p}}, q^{\epsilon_{q}}, (\omega_{m}p)^{\epsilon'_{p}},(\omega_{m}q)^{\epsilon'_{q}}$ satisfies $\chi(\ell)=-1$ and $\chi(1-\ell)=1$. Therefore the result follows from Proposition \ref{Propositionelement_lZ[l]} and Lemma \ref{Z[omega_m]contained_in_sum}. \item[b)] $\chi(\omega_{m})=-1$. \smallskip Then, depending on the values of $\chi(1-p)$, $\chi(1-q)$ and $\chi(1-\omega_{m})$, by Remark \ref{Remark[a]=[la]} there is a choice of $\epsilon_{p},\epsilon_{q},\epsilon_{\omega_{m}}\in \{\pm 1\}$ such that each of $\ell=p^{\epsilon_{p}}, q^{\epsilon_{q}}, (\omega_{m})^{\epsilon_{\omega_{m}}}$ satisfies $\chi(\ell)=-1$ and $\chi(1-\ell)=1$. Therefore from Proposition \ref{Propositionelement_lZ[l]} and Lemma \ref{Z[omega_m]contained_in_sum for_X(omega)=-1}, we get our result. \end{itemize} \begin{flushright} $\Box$ \end{flushright} \begin{prop}\label{Proposition_RP+(F)=0_Quotient_Characters} Let $F=\mathbb{Q}(\sqrt{-m})$ be an imaginary quadratic number field with $m=1,2,3,7,11$ and $\chi\in\widehat{\frac{F^{\times}}{(F^{\times})^{2}(\mathcal{O}_{F})^{\times}}}$. Suppose that $|\mathrm{Supp}(\chi)|\geq2$. Then $\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]_{\chi}=0$. \end{prop} $\textbf{\emph{Proof}}.$ First, let us recall that for these fields the ring of algebraic integers $\mathcal{O}_{F}$ is a Euclidean domain with respect to the norm of $F$. \smallskip From Proposition \ref{Propositionsumalgebraiccintegergaussian}, we get $[x]_{\chi}=[x+a]_{\chi}$ for all $x\in F$ and $a\in \mathcal{O}_{F}$. Then it follows that $[a]_{\chi}=[1]_{\chi}=0$ for all $a\in \mathcal{O}_{F}$ and therefore $\left[\frac{1}{a}\right]_{\chi}=0$ for all $a\in \mathcal{O}_{F}$.
\bigskip Now, any $x\in F$ can be written as $x=\frac{a}{b}$ with $a,b \in \mathcal{O}_{F}$; we will prove that $[x]_{\chi}=0$ by induction on $m=\mathrm{min}(N(a),N(b))$. If $m=1$, then $a\in \mathcal{O}^{\times}_{F}$ or $b\in \mathcal{O}^{\times}_{F}$. Thus either $x\in \mathcal{O}_{F}$ or $x^{-1}\in \mathcal{O}_{F}$. Therefore, it follows from the previous paragraph that $[x]_{\chi}=0$. \bigskip Since $[x]_{\chi}=-\left[\frac{1}{x}\right]_{\chi}$, we may suppose that $N(b)<N(a)$. Suppose now that the result holds whenever $m=N(b)\leq n$. We will prove it for $\mathrm{min}(N(a),N(b))=N(b)=n+1$. Note that there exist $q,r\in \mathcal{O}_{F}$ such that $a=qb+r$ and $N(r)<N(b)$. Taking $t=-q$ in Proposition \ref{Propositionsumalgebraiccintegergaussian}, we obtain that $[x]_{\chi}=\left[\frac{r}{b}\right]_{\chi}$. Note that $\mathrm{min}(N(r),N(b))=N(r)\leq n$. Hence by the induction hypothesis we have that $[x]_{\chi}=0$. \begin{flushright} $\Box$ \end{flushright} \begin{lema} If $f:M\rightarrow N$ is a map of $R_{F}$-modules and if $f_{\chi}:M_{\chi}\rightarrow N_{\chi}$ is an isomorphism for all $\chi\in\widehat{\frac{F^{\times}}{(F^{\times})^{2}(\mathcal{O}_{F})^{\times}}}$, then the induced map \begin{center} $H_{0}\left(\mathcal{O}^{\times}_{F}, M\left[\frac{1}{2}\right]\right)\rightarrow H_{0}\left(\mathcal{O}^{\times}_{F}, N\left[\frac{1}{2}\right]\right)$ \end{center} is an isomorphism. \end{lema} $\textbf{\emph{Proof}}.$ Let $\chi\in\widehat{\frac{F^{\times}}{(F^{\times})^{2}}}$. If $\chi\in \widehat{\frac{F^{\times}}{(F^{\times})^{2}(\mathcal{O}_{F})^{\times}}}$, then $H_{0}\left(\mathcal{O}^{\times}_{F}, M\left[\frac{1}{2}\right]\right)_{\chi}=M\left[\frac{1}{2}\right]_{\chi}\cong N\left[\frac{1}{2}\right]_{\chi} =H_{0}\left(\mathcal{O}^{\times}_{F}, N\left[\frac{1}{2}\right]\right)_{\chi}$. \bigskip Otherwise, $\chi(u)=-1$ for some $u\in \mathcal{O}^{\times}_{F}$.
But the element $\langle u\rangle$ acts trivially on $H_{0}\left(\mathcal{O}^{\times}_{F}, M\left[\frac{1}{2}\right]\right)$ and $H_{0}\left(\mathcal{O}^{\times}_{F}, N\left[\frac{1}{2}\right]\right)$. By Lemma \ref{Lemma_Action_of_square_classes_different_sign_characters}, \begin{center} $H_{0}\left(\mathcal{O}^{\times}_{F}, M\left[\frac{1}{2}\right]\right)_{\chi}=0=H_{0}\left(\mathcal{O}^{\times}_{F}, N\left[\frac{1}{2}\right]\right)_{\chi}$. \end{center} The result follows by the local-global principle for characters (Proposition \ref{Proposition_Local_global_principle_character}). \begin{flushright} $\Box$ \end{flushright} \begin{prop}\label{Main_Proposition_Isomorphism} Let $F=\mathbb{Q}(\sqrt{-m})$ be an imaginary quadratic number field with $m=1, 2, 3, 7, 11$. Then the map \begin{center} $\overline{S}:H_{0}\left(\mathcal{O}_{F}^{\times}, H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)_{0}\right)\cong H_{0}\left(\mathcal{O}_{F}^{\times},\mathcal{I}_{F}\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]\right)\rightarrow \displaystyle\bigoplus_{p\in\Pi}\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}$ \end{center} is an isomorphism of $R_{F}$-modules. \end{prop} $\textbf{\emph{Proof}}.$ By the local-global principle for characters (Proposition \ref{Proposition_Local_global_principle_character}) and by Corollary \ref{CorollaryQuotientCharacterBlochgroupDirectSum}, we must show that \begin{center} $\left(\mathcal{I}_{F}\mathcal{RP}_{+}(F)\left[\frac{1}{2}\right]\right)_{\chi}=\left\{ \begin{array}{ll} \mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}, & \hbox{$if \; \chi=\chi_{p}$;} \\ 0, & \hbox{$if\; \chi\not=\chi_{p}\; for\; any\; p$.} \end{array} \right. $ \end{center} If $\chi=\chi_{p}$, this follows from Proposition \ref{NaturalIsomorphismRP+(F)}. If $\chi=\chi_{0}$, or $\chi(u)=-1$ for some $u\in(\mathcal{O}_{F})^{\times}$, this follows from above.
Otherwise $\chi\in\widehat{\frac{F^{\times}}{(F^{\times})^{2}(\mathcal{O}_{F})^{\times}}}$ and $\mathrm{Supp}(\chi)$ contains at least two non-associated prime elements of $\mathcal{O}_{F}$, and hence from Proposition \ref{Proposition_RP+(F)=0_Quotient_Characters} we get our result. \begin{flushright} $\Box$ \end{flushright} From Proposition \ref{Main_Proposition_Isomorphism} and the definition of $H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)_{0}$, we get: \begin{cor}\label{Corollary_Short_exact_sequence_quadratic_extension} Let $F=\mathbb{Q}(\sqrt{-m})$ be an imaginary quadratic number field with $m=1,2,3,7,11$. Then we have a short exact sequence of $R_{F}$-modules \begin{center} $\xymatrix{0\ar[r]&\displaystyle\bigoplus_{p\in\Pi}\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\ar[r]&H_{0}\left(\mathcal{O}_{F}^{\times},H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)\right)\ar[r]&K_{3}^{\mathrm{ind}}(F)\left[\frac{1}{2}\right]\ar[r]&0}$. \end{center} \end{cor} \begin{flushright} $\Box$ \end{flushright} \begin{cor} Let $F=\mathbb{Q}(\sqrt{-m})$ be an imaginary quadratic number field with $m=2,3,7,11$. Then we have a short exact sequence of $R_{F}$-modules \begin{center} $\xymatrix{0\ar[r]&\displaystyle\bigoplus_{p\in\Pi}\mathcal{P}(k(\upsilon_{p}))\left[\frac{1}{2}\right]\{p\}\ar[r]&H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)\ar[r]&K_{3}^{\mathrm{ind}}(F)\left[\frac{1}{2}\right]\ar[r]&0}$. \end{center} \end{cor} $\textbf{\emph{Proof}}.$ By Lemma \ref{LemmaTrivialActionOfSquareClass}, $\mathcal{O}^{\times}_{F}$ acts trivially on $H_{3}\left(\mathrm{SL}_{2}(F),\mathbb{Z}\left[\frac{1}{2}\right]\right)$, so the result follows from Corollary \ref{Corollary_Short_exact_sequence_quadratic_extension}. \begin{flushright} $\Box$ \end{flushright}
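As an aside, the primality criteria recalled in Section \ref{Section_Quadratic_fields} are easy to put into practice. The following Python sketch (ours, purely illustrative; the function names are our own) implements the inert-prime criterion via an Euler-criterion Legendre symbol, together with the criterion for $2$.

```python
def legendre(a: int, p: int) -> int:
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def odd_prime_is_inert(p: int, m: int) -> bool:
    """An odd rational prime p stays prime in O_{Q(sqrt(m))} iff p does not
    divide the discriminant delta and (delta/p) = -1."""
    delta = m if m % 4 == 1 else 4 * m
    return delta % p != 0 and legendre(delta, p) == -1

def two_is_inert(m: int) -> bool:
    """2 is prime in O_{Q(sqrt(m))} iff m = 5 (mod 8)."""
    return m % 8 == 5
```

For instance, $3$ stays prime in $\mathbb{Z}[i]$ while $5=(2+i)(2-i)$ does not, and $2$ is prime in $\mathcal{O}_{\mathbb{Q}(\sqrt{-11})}$ since $-11\equiv5\;(\mathrm{mod}\;8)$.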
\section{Introduction} \label{sec:intro} Hypotheses on the suppression of $\chi_c$ and $\chi_b$ production in nucleus-nucleus collisions play a crucial role in the interpretation of the J/$\psi$ and $\Upsilon(1S)$ measurements from SPS~\cite{bib:SPS}, RHIC~\cite{bib:RHIC} and LHC~\cite{bib:LHC} in terms of evidence of quark-gluon plasma (QGP) formation. The observation of the $\chi_c$ and $\chi_b$ suppression patterns in Pb-Pb collisions at the LHC could confirm or falsify the ``sequential quarkonium melting'' scenario~\cite{bib:sqm,bib:Karsch} and, therefore, discriminate between the QGP interpretation and other options. However, a direct observation of the $\chi_c$ and $\chi_b$ signals in their radiative decays to ${\rm J}/\psi$ and $\Upsilon(1S)$ is practically impossible in heavy-ion collisions, given the very large number of background photons produced in such events. In this paper we show that this important missing piece of information can be obtained by measuring how the dilepton decay distributions of prompt J/$\psi$ and $\Upsilon(1S)$ change from proton-proton (or peripheral nucleus-nucleus) to central nucleus-nucleus collisions, bypassing the difficulty of explicitly identifying the events containing $\chi_c$ and $\chi_b$ decays. The demonstration of the result proceeds in steps. In Sec.~\ref{sec:basics} we describe how the polarization is transferred in the decays to lighter quarkonium states and summarize the present knowledge of the feed-down contributions in collider experiments. In Sec.~\ref{sec:scenarios} we consider two illustrative polarization scenarios, complementing the existing experimental information on prompt-J/$\psi$ and $\Upsilon(1S)$ polarizations with educated guesses about the still unknown polarizations of the $P$-states. In Sec.~\ref{sec:formulas} we formalize the relations between the $P$-to-$S$-state feed-down fractions and the observable polarizations of directly and indirectly produced states.
Using these ingredients, in Sec.~\ref{sec:suppression} we illustrate the main result of the paper, discussing the feasibility of the $\chi$ suppression measurement using dilepton angular distributions at the LHC. The notations used in this paper for angles and polarization parameters are those defined in Ref.~\cite{bib:EPJC}. We report here for convenience the most general form of the observable angular distribution of J/$\psi$ and $\Upsilon$ decays into lepton pairs: \begin{eqnarray} W(\cos \vartheta, \varphi) & \propto & \, \frac{1}{(3 + \lambda_{\vartheta})} \, (1 + \lambda_{\vartheta} \cos^2 \vartheta \label{eq:observable_ang_distr} \\ & + & \lambda_{\varphi} \sin^2 \vartheta \cos 2 \varphi + \lambda_{\vartheta \varphi} \sin 2 \vartheta \cos \varphi ) \, ,\nonumber \end{eqnarray} where $\vartheta$ and $\varphi$ are the polar and azimuthal emission angles of one of the leptons with respect to a system of axes defined in the dilepton rest frame and $\lambda_{\vartheta}, \lambda_{\varphi}, \lambda_{\vartheta \varphi}$ are the anisotropy parameters. \section{Polarization contributions from feed-down decays} \label{sec:basics} Many of the prompt J/$\psi$ and $\Upsilon$ mesons produced in hadronic collisions result from the decay of heavier $S$- or $P$-wave quarkonia. However, the existing polarization measurements at collider energies make no distinction between directly and indirectly produced states. The role of the feed-down from heavier $S$ states (responsible, for example, for about $8\%$ of J/$\psi$ production at low $p_{\rm T}$~\cite{bib:feeddown}) is rather well understood. 
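As a side remark, the $1/(3 + \lambda_{\vartheta})$ prefactor in Eq.~\ref{eq:observable_ang_distr} makes the integral of $W$ over the full solid angle equal to $4\pi/3$ for any values of the anisotropy parameters, since the azimuthal terms vanish upon $\varphi$ integration. A minimal numerical check of this property, in Python with NumPy (our illustration, not part of the original analysis):

```python
import numpy as np

def decay_distribution(costh, phi, lth, lph, lthph):
    """Dilepton decay angular distribution W(cos(theta), phi), including
    the 1/(3 + lambda_theta) normalization prefactor."""
    sinth2 = 1.0 - costh**2
    sin2th = 2.0 * np.sqrt(sinth2) * costh          # sin(2*theta)
    return (1.0 + lth * costh**2
            + lph * sinth2 * np.cos(2.0 * phi)
            + lthph * sin2th * np.cos(phi)) / (3.0 + lth)

def solid_angle_integral(lth, lph, lthph, n=800):
    """Midpoint-rule integral of the distribution over the full sphere."""
    costh = -1.0 + (np.arange(n) + 0.5) * (2.0 / n)
    phi = (np.arange(n) + 0.5) * (2.0 * np.pi / n)
    c, p = np.meshgrid(costh, phi, indexing="ij")
    return decay_distribution(c, p, lth, lph, lthph).sum() \
        * (2.0 / n) * (2.0 * np.pi / n)

# The integral equals 4*pi/3 for any anisotropy parameters, so yields of
# differently polarized samples are directly comparable:
for lams in [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (-0.6, 0.2, 0.1)]:
    assert abs(solid_angle_integral(*lams) - 4.0 * np.pi / 3.0) < 1e-4
```

This common normalization is what allows the event fractions entering the feed-down relations below to be combined with simple $1/(3+\lambda_{\vartheta})$ weights.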
Data of the BES~\cite{bib:BES_psiprime} and CLEO~\cite{bib:CLEO_upsilon} experiments in $e^+e^-$ collisions indicate that in the decays $\psi^{\prime} \rightarrow \mathrm{J}/\psi \pi \pi$ and $\Upsilon(2S) \rightarrow \Upsilon(1S) \pi \pi$ the di-pion system is produced predominantly in the spatially isotropic ($S$-wave) configuration, meaning that no angular momentum is transferred to it. Consequently, the angular momentum alignment is preserved in the transition from the $2S$ to the $1S$ state. This allows us to assume that the dilepton decay angular distribution of the J/$\psi$ [$\Upsilon(1S)$] mesons resulting from $\psi^{\prime}$ [$\Upsilon(2/3S)$] decays is the same as the one of the $\psi^{\prime}$ [$\Upsilon(2/3S)$], provided that a common polarization axis is chosen for the two particles. At high momentum, when the J/$\psi$ and $\psi^{\prime}$ directions with respect to the centre of mass of the colliding hadrons practically coincide, $\psi^{\prime}$ mesons and J/$\psi$ mesons from $\psi^{\prime}$ decays have the same observable polarization with respect to any system of axes defined on the basis of the directions of the colliding hadrons. In the case of the polar anisotropy parameter $\lambda_\vartheta$, for instance, the relative error, $|\Delta\lambda_\vartheta/\lambda_\vartheta|$, induced by the approximation of considering the J/$\psi$ and $\psi^{\prime}$ directions as coinciding is $\mathcal{O}[(\Delta m / p)^2]$, where $\Delta m$ is the $2S-1S$ mass difference and $p$ the total laboratory momentum of the dilepton. For $p>5$~GeV/$c$ this error is of order 1\%. Moreover, the directly produced J/$\psi$ [$\Upsilon(1S)$] and $\psi^\prime$ [$\Upsilon(2/3S)$] are expected to have the same production mechanisms and, therefore, very similar polarizations. 
As a consequence, the polarization of J/$\psi$ [$\Upsilon(1S)$] from $\psi^\prime$ [$\Upsilon(2/3S)$] can be considered to be almost equal to the polarization of directly produced J/$\psi$ [$\Upsilon(1S)$], so that, at least in first approximation, the two contributions can be treated as one. \begin{table}[t] \begin{center} \begin{tabular}{cccc} \hline $J_z$ & $\lambda_\vartheta(S)$ & $\lambda_\vartheta(P_1)$ & $\lambda_\vartheta(P_2)$ \\ \hline $0$ & $-1$ & $+1$ & $-3/5$ \\ $\pm 1$ & $+1$ & $-1/3$ & $-1/3$ \\ $\pm 2$ & - & - & $+1$ \\ \hline \end{tabular} \caption{ Values of the observable polar anisotropy parameter $\lambda_\vartheta$ of the J/$\psi$ [$\Upsilon(1S)$] dilepton decay distribution, corresponding to pure angular momentum states ($J_z$) of the \emph{directly} produced particle (J/$\psi$ [$\Upsilon(1S)$] itself, $\chi_1$ or $\chi_2$). The results are obtained in the high-momentum approximation (for $p>5$~GeV/$c$ they are affected by an error smaller than $1\%$) and are valid not only in the E1 approximation (usually assumed in the literature), but also including all orders of the photon radiation expansion~\cite{bib:chi_polarization}. } \label{tab:lambda_vs_helicity} \end{center} \end{table} On the contrary, the J/$\psi$ [$\Upsilon(1S)$] mesons resulting from $\chi_{cJ}$ [$\chi_{bJ}$] radiative decays can have very different polarizations with respect to the directly produced ones. Directly produced $P$ and $S$ states can originate from different partonic and long-distance processes, given their different angular momentum and parity properties. Moreover, the emission of the spin-1 and always transversely polarized photon necessarily changes the angular momentum projection of the $q \bar{q}$ system in the $P \to S$ radiative transition. 
As a result, the relation between the ``spin-alignment'' of the directly produced $P$ or $S$ state and the shape of the observed dilepton angular distribution is totally different in the two cases: for example, if directly produced $\chi_{c1}$ and J/$\psi$ both had ``longitudinal'' polarization (angular momentum projection $J_z = 0$ along a given quantization axis), the shape of the dilepton distribution would be of the kind $\propto 1 - \cos^2\!\vartheta$ for the direct J/$\psi$ and $\propto 1 + \cos^2\!\vartheta$ for the J/$\psi$ from $\chi_{c1}$. Table~\ref{tab:lambda_vs_helicity} lists values of the observable polar anisotropy parameter $\lambda_\vartheta$ of the J/$\psi$ [$\Upsilon(1S)$] dilepton decay distribution, corresponding to pure angular momentum states of the \emph{directly} produced particle. In particular, while for directly produced $S$ states $-1 < \lambda_\vartheta < +1$, for those from decays of $P_1$ and $P_2$ states the lower bound is $-1/3$ and $-3/5$, respectively. More detailed constraints on the three anisotropy parameters $\lambda_\vartheta$, $\lambda_\varphi$ and $\lambda_{\vartheta \varphi}$ in the cases of directly produced $S$ state and $S$ states from decays of $P_1$ and $P_2$ states can be found in Ref.~\cite{bib:chi_polarization}. Figure 3 of that work shows that the allowed parameter space of the decay anisotropy parameters for the directly produced J/$\psi$ [$\Upsilon(1S)$] strictly includes the one of the $S$-states from $P_2$ decays, which, in turn, strictly includes the one of the $S$-states from $P_1$ decays. The feed-down fractions are not well-known experimentally. In the charmonium case, the $\chi_{c}$-to-J/$\psi$ and $\chi_{c2}$-to-$\chi_{c1}$ yield ratios have been measured by CDF~\cite{bib:cdf_chic} in the rapidity interval $|y|< 0.6$, with insufficient precision to indicate or exclude important $p_{\rm T}$ dependencies. 
The $p_{\rm T}$-averaged results, \begin{align} \begin{split} R(\chi_{c1}) + R(\chi_{c2}) & = 0.30 \pm 0.06 \, , \\ \quad R(\chi_{c2}) / R(\chi_{c1}) & = 0.40 \pm 0.02 \, , \label{eq:Rchic} \end{split} \end{align} where $R(\chi_{c1})$ and $R(\chi_{c2})$ are the fractions of prompt J/$\psi$ yield due to the radiative decays of $\chi_{c1}$ and $\chi_{c2}$, effectively correspond to a phase-space region (low $p_{\rm T}$ and central rapidity), much smaller than the one covered by the LHC experiments. CDF also measured~\cite{bib:cdf_chib} the fractions of $\Upsilon(1S)$ mesons coming from radiative decays of $1P$ and $2P$ states as, respectively, $R(\chi_{b1})+R(\chi_{b2}) = (27 \pm 8)\%$ and $R(\chi_{b1}^\prime) + R(\chi_{b2}^\prime) = (11 \pm 5)\%$, for $p_{\rm T} > 8$~GeV$/c$ and without discrimination between the $J=1$ and $J=2$ states. These results tend to indicate that the contribution of the feed-down from $P$ states to $\Upsilon(1S)$ production is at least as large as in the corresponding charmonium case, even if the experimental error is quite large. The same indication is provided with higher significance by the $\Upsilon$ polarization measurement of E866~\cite{bib:e866_upsilon}, at low $p_{\rm T}$, as discussed in the next section. \section{Two example scenarios} \label{sec:scenarios} In this section we derive, using available experimental and theoretical information, two illustrative scenarios for the polarizations of the charmonium and bottomonium families. 
In our considerations we will use the following addition rule~\cite{bib:invariants}: \begin{equation} \vec{\lambda}^\mathrm{prt} \, = \, \frac{ \frac{ [1 - R(P_1) - R(P_2)] \, \vec{\lambda}^\mathrm{dir} } { 3 + \lambda_\vartheta^\mathrm{dir} } + \frac{ R(P_1) \, \vec{\lambda}^{P_1} } { 3 + \lambda_\vartheta^{P_1} } + \frac{ R(P_2) \, \vec{\lambda}^{P_2} } { 3 + \lambda_\vartheta^{P_2} } } { \frac{ [1 - R(P_1) - R(P_2)] } { 3 + \lambda_\vartheta^\mathrm{dir} } + \frac{ R(P_1) } { 3 + \lambda_\vartheta^{P_1} } + \frac{ R(P_2) } { 3 + \lambda_\vartheta^{P_2} } } \, , \label{eq:addition_rule} \end{equation} where $\vec{\lambda}^\mathrm{prt}$ are the observable polarization parameters of the promptly produced $S$ state, with $\vec{\lambda} = (\lambda_\vartheta, \lambda_\varphi, \lambda_{\vartheta \varphi})$, $\vec{\lambda}^\mathrm{dir}$ are the polarization parameters of the directly produced $S$ state, $R(P_1)$ and $R(P_2)$ the fractions of events produced by the decays of $P_1$ and $P_2$ states and $\vec{\lambda}^{P_1}$ and $\vec{\lambda}^{P_2}$ the corresponding polarizations. Figure~\ref{fig:CDF_direct_extrapolation} illustrates how the CDF measurement of prompt-J/$\psi$ polarization~\cite{bib:CDF_JpsiPol} can be translated into a range of possible values of the direct-J/$\psi$ polarization, using Eq.~\ref{eq:addition_rule}, the available information about the feed-down fractions and all possible combinations of hypotheses of pure polarization states for $\chi_{c1}$ and $\chi_{c2}$. The feed-down fraction is set to 0.42, two standard deviations higher than the central CDF value (Eq.~\ref{eq:Rchic}); using 0.30 simply decreases the spread between the curves. The $R(\chi_{c2}) / R(\chi_{c1})$ ratio is set to 0.40; changes remaining compatible with the CDF measurement give almost identical curves. \begin{figure}[tb!]
\centering \includegraphics[width=0.45\textwidth]{fig1.pdf} \caption{ Direct-J/$\psi$ polarizations ($\lambda_{\vartheta}$) extrapolated from the CDF measurement of prompt-J/$\psi$ polarization (in the helicity frame), using several scenarios for the $\chi_c$ polarizations. The order of the lines from top to bottom is the same as in the legend. The NNLO* prediction of direct-J/$\psi$ polarization including only colour-singlet contributions~\cite{bib:CSM_directQQpol} is also shown. } \label{fig:CDF_direct_extrapolation} \end{figure} In the scenario in which $\chi_{c1}$ and $\chi_{c2}$ are produced with, respectively, $J_z = 0$ and $J_z = \pm 2$ polarizations, the CDF measurement is seen to be described by partial next-to-next-to-leading order Colour Singlet Model predictions (NNLO$^*$ CSM) for directly produced $S$-state quarkonia~\cite{bib:CSM_directQQpol}. This is the J/$\psi$ polarization scenario that we will adopt in the following considerations. Its validity can be probed by experiments able to discriminate if the J/$\psi$ is produced together with a photon such that the two are compatible with being $\chi_{c1}$ or $\chi_{c2}$ decay products. Such dilepton events, resulting from $\chi_c$ decays, should show a fully transverse polarization ($\lambda_\vartheta^{\chi_{c1}} = \lambda_\vartheta^{\chi_{c2}} = +1$), while the directly produced J/$\psi$ mesons should have a strong longitudinal polarization ($\lambda_\vartheta^\mathrm{dir} \simeq -0.6$). We will not discuss here a possible scenario based on non-relativistic QCD (NRQCD) calculations, which include non-perturbative contributions and, in particular, colour-octet processes.
In fact, the large transverse polarization predicted by current calculations for the directly produced $S$-states~\cite{bib:NRQCD_promptQQpol} could be reconciled with the prompt CDF data only assuming a huge deviation from the measured feed-down from $\chi_c$ [$R(\chi_{c}) \simeq 70\%$] and, at the same time, the maximum possible longitudinal polarization for the J/$\psi$ from $\chi_c$ (this latter assumption would be in contradiction with the corresponding prediction of NRQCD itself). \begin{figure}[tb!] \centering \includegraphics[width=0.46\textwidth]{fig2.pdf} \caption{ The E866 measurement of $\Upsilon(1S)$ and $\Upsilon(2S)+\Upsilon(3S)$ polarizations (in the Collins-Soper frame~\cite{bib:CollinsSoper}) as a function of $p_{\rm T}$ (a), the deduced ranges for the fraction of $\Upsilon(1S)$ mesons coming from $\chi_{b}$ decays (b) and the deduced range of their possible polarizations (c). An additional systematic uncertainty of $\pm 0.06$ is not included in the error bars of the $\lambda_{\vartheta}$ data points in (a). The error bars in the derived lower limit of the range for $R(\chi_{b})$ (b) reflect the uncertainty in the $\lambda_{\vartheta}$ measurements, assuming that the global systematic uncertainty affects the $\Upsilon(1S)$ and $\Upsilon(2S)+\Upsilon(3S)$ measurements in a fully correlated way. The lower limits for $R(\chi_{b})$ and $\lambda_{\vartheta}(\Upsilon\mathrm{\:from\:}\chi_{b})$ depend on the ratio $R(\chi_{b2})/R(\chi_{b1})$, for which three different values are assumed. } \label{fig:E866_feeddown} \end{figure} We will base our second scenario, for the bottomonium family, on the precise and detailed measurement of E866~\cite{bib:e866_upsilon}, shown in Fig.~\ref{fig:E866_feeddown}a. This result offers several interesting cues. It is remarkable that the $\Upsilon(2S)$ and $\Upsilon(3S)$ are found to be almost fully polarized, while the $\Upsilon(1S)$ is only weakly polarized. 
The most reasonable explanation of this fact is that the fraction of $\Upsilon(1S)$ mesons coming from $\chi_{b}$ decays is large and its polarization is very different from that of the directly produced $\Upsilon(1S)$. In fact, under the assumption that all directly produced $S$ states have the same polarization, we can translate the E866 measurement into a lower limit for the feed-down fraction $R(\chi_{b})$ from $P$ states, summing together $1P_1$, $1P_2$, $2P_1$, $2P_2$ contributions. We use Eq.~\ref{eq:addition_rule} and the values in Table~\ref{tab:lambda_vs_helicity} (the average longitudinal momentum of the $\Upsilon(1S)$ in the E866 data is $\sim 4.5$~GeV$/c$) assuming that the $\Upsilon(2S)+\Upsilon(3S)$ result has a negligible contamination from $\chi_{b}^\prime \rightarrow \Upsilon(2S) \gamma$ decays and, therefore, provides a good evaluation of the polarization of the directly produced $S$ states (a conservative assumption for this specific calculation). The lower limit for $R(\chi_{b})$ corresponds to the case $J_z(\chi_{b1}) = J_z(\chi_{b1}^\prime) = \pm 1$, $J_z(\chi_{b2}) = J_z(\chi_{b2}^\prime) = 0$, in which the $\Upsilon(1S)$ mesons from $\chi_{b}$ decays have the largest negative value of $\lambda_{\vartheta}$. The result, depending slightly on the assumed ratio between $P_2$ and $P_1$ feed-down contributions, is shown in Fig.~\ref{fig:E866_feeddown}b as a function of $p_{\rm T}$. More than $50\%$ of the $\Upsilon(1S)$ are produced from $P$ states for $\langle p_{\mathrm{T}} \rangle \simeq 0.5$~GeV$/c$, and more than $30\%$ for $\langle p_{\mathrm{T}} \rangle \simeq 2.3$~GeV$/c$. These limits are appreciably higher than the value of the feed-down fraction of J/$\psi$ from $\chi_{c}$ measured at similar energy, low $p_{\rm T}$ and mid rapidity~\cite{bib:HERAB_chic}.
We recall that we have obtained only a lower limit (no upper limit is implied by the data), corresponding to the case in which $\chi_{b1}$ and $\chi_{b2}$ are always produced in the same very specific and pure angular momentum configurations. Any deviation from this extreme case would lead to higher values of the indirectly determined feed-down fraction. The E866 data also set an upper limit on the combined polarization of $\chi_{b1}$ and $\chi_{b2}$. Figure~\ref{fig:E866_feeddown}c shows the derived range of possible polarizations of $\Upsilon(1S)$ coming from $\chi_{b}$. The upper bound, corresponding to $R(\chi_{b}) = 1$, coincides with the measured $\Upsilon(1S)$ polarization. The lower bound, slightly depending on the relative contribution of $\chi_{b1}$ and $\chi_{b2}$, is not influenced by the E866 data and corresponds to the minimum ($p_{\mathrm{T}}$ dependent) value of $R(\chi_{b})$ represented in Fig.~\ref{fig:E866_feeddown}b. The second strong indication of the E866 data is, therefore, that at low $p_{\rm T}$ the $\Upsilon(1S)$ coming from $\chi_{b}$ decays has a longitudinal component in the Collins-Soper frame larger than $\sim 30\%$ ($\lambda_{\vartheta} \lesssim 0.1$), with $\sim 60\%$ ($\lambda_{\vartheta} \sim -0.5$) being the maximum amount of longitudinal polarization that the $\Upsilon(1S)$ produced in this way is allowed to have. CDF has also measured~\cite{bib:upsCDF}, at higher $p_{\rm T}$, an almost unpolarized production of the $\Upsilon(1S)$ mesons. However, the precision of the $\Upsilon(2S)$ and $\Upsilon(3S)$ data does not allow us to draw any conclusion about the difference between the polarizations of directly and indirectly produced states and, therefore, to infer a possible scenario of polarizations for the $\Upsilon(1S)$ coming from $\chi_{b}$ decays.
A comparison with the theory predictions for the directly produced $\Upsilon(1S)$~\cite{bib:CSM_directQQpol,bib:NRQCD_promptQQpol} would lead to conjectures identical to those made in the J/$\psi$ case, and, therefore, to a bottomonium polarization scenario completely analogous to the charmonium scenario described above. \section{Basic procedures and tools} \label{sec:formulas} The E866 example suggests an alternative method to determine the polarization of the $P$-states, particularly suited to certain experimental conditions and always useful as a cross-check of direct determinations. In fact, referring again to the scenario of Fig.~\ref{fig:E866_feeddown}, a measurement of $R(\chi_{b})$ would transform the upper bound on the polarization of $\Upsilon(1S)$ from $\chi_{b}$ decays into an unambiguous determination. We can formulate a general way of measuring the combined polarization of $P_1$ and $P_2$ states, consisting of the following set of measurements: 1) polarization, $\vec{\lambda}^{1S}$, of the inclusively produced prompt-$1S$ state; 2) polarization, $\vec{\lambda}^{2S}$, of the $2S$ and/or $3S$ states, assumed to be mostly directly produced; 3) fraction, $R(P)$, of $1S$ states produced in the decays of $P$ states. The polarization of the $1S$ states coming from $P$ states can then be determined using the expression \begin{equation} \vec{\lambda}^{P} \, = \, \frac{ (3 + \lambda_\vartheta^{2S}) \vec{\lambda}^{1S} - [1- R(P)] (3 + \lambda_\vartheta^{1S}) \vec{\lambda}^{2S} } { R(P) ( 3 + \lambda_\vartheta^{1S} ) + \lambda_\vartheta^{2S} - \lambda_\vartheta^{1S} } \, , \label{eq:lambdaP_measurement} \end{equation} obviously defined only for $R(P) > 0$ ($\vec{\lambda}^{2S} - \vec{\lambda}^{1S} \to 0$ for $R(P) \to 0$).
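As a consistency sketch of Eq.~\ref{eq:lambdaP_measurement} (our illustration, in Python, with toy input values rather than measured ones): mixing a direct component with a $P$-state feed-down through the two-component case of Eq.~\ref{eq:addition_rule}, and then applying Eq.~\ref{eq:lambdaP_measurement}, recovers the input $\lambda_{\vartheta}^{P}$:

```python
def mix(lam_dir, lam_p, r_p):
    """Observable prompt lambda_theta for a direct component mixed with a
    fraction r_p of P-state feed-down (two-component addition rule)."""
    a = (1.0 - r_p) / (3.0 + lam_dir)
    b = r_p / (3.0 + lam_p)
    return (a * lam_dir + b * lam_p) / (a + b)

def lam_p_from_measurements(l_1s, l_2s, r_p):
    """lambda_theta of 1S states from P decays, given the prompt-1S and
    2S measurements and the feed-down fraction (2S assumed direct)."""
    return ((3.0 + l_2s) * l_1s - (1.0 - r_p) * (3.0 + l_1s) * l_2s) / (
        r_p * (3.0 + l_1s) + l_2s - l_1s)

# Toy inputs (ours): longitudinal direct production and fully transverse
# feed-down, with a 42% feed-down fraction.
l_dir, l_p, r_p = -0.6, 1.0, 0.42
l_1s = mix(l_dir, l_p, r_p)              # "measured" prompt value
assert abs(lam_p_from_measurements(l_1s, l_dir, r_p) - l_p) < 1e-9
```

The same algebra applies component-wise to $\lambda_{\varphi}$ and $\lambda_{\vartheta\varphi}$, since the mixing weights involve only $\lambda_{\vartheta}$.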
As discussed in Ref.~\cite{bib:chi_polarization}, $\vec{\lambda}^{P_1}$ and $\vec{\lambda}^{P_2}$, anisotropy parameters of the dilepton decay distribution of the daughter $1S$ state, unambiguously reflect the average angular momentum configurations in which the $P_1$ and $P_2$ states are produced. A measurement of $\vec{\lambda}^{P}$, merging the $P_1$ and $P_2$ polarizations, can give significant indications, especially if its value is close to the boundaries of the parameter space and, therefore, does not suffer from cancellation effects (as the E866 example suggests). This method is convenient if the event sample becomes too small after the requirement of a photon coming from the $P \to S$ transition, precluding a detailed angular analysis. For our main result, presented in the next section, we will make use of the inverse procedure, in which a determination of $R(P)$ is obtained by performing dilepton polarization measurements. For example, from measurements of $\lambda_\vartheta^{1S}$, $\lambda_\vartheta^{2S}$ and $\lambda_\vartheta^{P}$ the $\chi$ feed-down is determined as \begin{equation} R(P) \, = \, \frac{ (3 + \lambda_\vartheta^{P}) ( \lambda_\vartheta^{2S} - \lambda_\vartheta^{1S} ) } {(3 + \lambda_\vartheta^{1S}) ( \lambda_\vartheta^{2S} - \lambda_\vartheta^{P} ) } \, . \label{eq:RP_measurement} \end{equation} The significance of this indirect determination will, in general, depend on the choice of the polarization frame and will be higher in a frame where the differences between the three $\lambda_\vartheta$ parameters are more significant. An analysis considering also the azimuthal anisotropy parameters $\lambda_\varphi$ and $\lambda_{\vartheta\varphi}$ (formulas similar to Eq.~\ref{eq:RP_measurement}, but depending also on these two parameters, can be easily deduced from Eq.~\ref{eq:addition_rule}) would lead to the maximum significance independently of the reference frame.
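The inverse procedure can be checked in the same spirit (again a toy Python sketch, with illustrative numbers of our choosing): generating $\lambda_{\vartheta}^{1S}$ from a known feed-down fraction and applying Eq.~\ref{eq:RP_measurement} recovers $R(P)$ exactly in this idealized, error-free setting:

```python
def mix(lam_dir, lam_p, r_p):
    """Prompt lambda_theta from direct plus P-feed-down components."""
    a = (1.0 - r_p) / (3.0 + lam_dir)
    b = r_p / (3.0 + lam_p)
    return (a * lam_dir + b * lam_p) / (a + b)

def r_p_from_polarizations(l_1s, l_2s, l_p):
    """Feed-down fraction from the three lambda_theta measurements
    (2S assumed to track the direct-1S polarization)."""
    return (3.0 + l_p) * (l_2s - l_1s) / ((3.0 + l_1s) * (l_2s - l_p))

# Toy inputs (ours): direct lambda_theta = -0.6, feed-down lambda = +1.
l_2s, l_p, r_true = -0.6, 1.0, 0.42
l_1s = mix(l_2s, l_p, r_true)
assert abs(r_p_from_polarizations(l_1s, l_2s, l_p) - r_true) < 1e-9
```

With these numbers $\lambda_{\vartheta}^{1S}\simeq -0.12$, well separated from both $\lambda_{\vartheta}^{2S}$ and $\lambda_{\vartheta}^{P}$, which is the favourable situation for this inversion.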
A simpler alternative to achieve the same result is to use the frame-independent parameter $\mathcal{F} = (1 + \lambda_\vartheta + 2 \lambda_\varphi)/(3+\lambda_\vartheta)$ introduced in Ref.~\cite{bib:invariants}: \begin{equation} R(P) \, = \, \frac{ \mathcal{F}^{2S} - \mathcal{F}^{1S} } { \mathcal{F}^{2S} - \mathcal{F}^{P} } \, . \label{eq:RP_measurement_F} \end{equation} \section{Nuclear dissociation of \boldmath{$P$} states} \label{sec:suppression} \begin{figure}[tb!] \centering \includegraphics[width=0.4\textwidth]{fig3top.pdf} \includegraphics[width=0.4\textwidth]{fig3bot.pdf} \caption{ A hypothetical variation of $R(\chi_c)$ (normalized to the pp value) with the centrality of the Pb-Pb collision (top) and the consequent variation of the polarization parameter $\lambda_\vartheta$ of the prompt-J/$\psi$ decay distribution (bottom), according to the charmonium polarization scenario discussed in the text. } \label{fig:pol_seq_suppr} \end{figure} \begin{figure}[tb!] \centering \includegraphics[width=0.4\textwidth]{fig4.pdf} \caption{ Results of pseudo-measurements of the prompt-J/$\psi$ and $\Upsilon(1S)$ dilepton decay distributions in Pb-Pb collisions, according to the corresponding scenarios discussed in the text. The background fraction ($f_{BG}$) is defined within $\pm 2 \sigma$ from the maximum of the invariant mass distribution, where $\sigma$, the experimental width, is, respectively, $4$ and $10$~MeV$/c^2$. The contours represent $68$ and $99\%$ confidence levels. The shaded area is physically forbidden. } \label{fig:pol_measurements} \end{figure} The possibility of determining the feed-down fraction from $P$ states purely on the basis of \emph{dilepton} properties is particularly valuable in the perspective of quarkonium measurements in heavy-ion collisions, where a direct determination of the $\chi$ yields is essentially impossible. Figure~\ref{fig:pol_seq_suppr} illustrates the concept of the method. 
The top panel shows a hypothetical $R(\chi_c)$ pattern inspired by the sequential charmonium suppression scenario, in which the $\chi_c$ yield disappears rapidly beyond a critical value of the number of nucleons participating in the interaction ($N_{\rm part}$). This effect would be reflected by a change in the observed prompt-J/$\psi$ polarization. As shown in the bottom panel, according to the scenario presented in Sec.~\ref{sec:scenarios} the polarization should become significantly more longitudinal (in the helicity frame) after the disappearance of the transversely polarized feed-down contribution due to $\chi_c$ decays. We are assuming that the ``base'' polarizations of the directly produced $S$ and $P$ states remain essentially unaffected by the nuclear medium and are, therefore, not distinguishable from those measurable in pp collisions. In general, the determination of $R(\chi_c)$ as a function of the number of participants from the corresponding polarization measurements requires the knowledge of the polarization of the J/$\psi$ coming from $\chi_c$ decays (Eq.~\ref{eq:RP_measurement}). This measurement can be made in pp collisions, merging the $\chi_{c1}$ and $\chi_{c2}$ contributions and using the simplified procedure illustrated in Sec.~\ref{sec:formulas} (Eq.~\ref{eq:lambdaP_measurement}). As a simpler option, a test of the sequential suppression pattern can be made by comparing the prompt-J/$\psi$ polarization measured in pp (or peripheral nucleus-nucleus) collisions with the one measured in central nucleus-nucleus collisions and checking that the latter tends to the polarization of the $\psi^\prime$, also measured in pp collisions. In this illustration we have neglected the role of the $\psi^\prime$ suppression, which, in the sequential suppression scenario, would lead to a slight increase of $R(\chi_c)$ (and, therefore, to a slight reduction of the prompt-J/$\psi$ polarization) \emph{before} the $\chi_c$ disappearance.
This would make the change in polarization due to $\chi_c$ suppression more drastic as a function of $N_{\rm part}$, but would not modify the difference between the prompt-J/$\psi$ polarizations observed in proton-proton and central nucleus-nucleus collisions. Moreover, we are assuming that the parton recombination into low-$p_{\rm T}$ J/$\psi$ in central collisions does not play a role. For this reason, in what follows we will consider relatively high-$p_{\rm T}$ measurements. However, we remark that recombination would probably change the above picture in a distinctive way, leading to a second observable change in the J/$\psi$ polarization as a function of centrality. The same method can be applied to the measurement of $\chi_b$ suppression using $\Upsilon(1S)$ polarization. According to the scenario based on the E866 data (Sec.~\ref{sec:scenarios}), in pp (and peripheral Pb-Pb) collisions the $\Upsilon(1S)$ should be only slightly polarized, reflecting the mixture of directly and indirectly produced states with opposite polarizations. In central Pb-Pb collisions the $\Upsilon(1S)$ would acquire the fully transverse polarization characteristic of the directly produced $S$ states, indicating the suppression of the $P$ states. Also in this case the simple stepwise behaviour as a function of $N_{\rm part}$ would be slightly contaminated, but not made less visible, by the suppression of $\Upsilon(2S)$ and $\Upsilon(3S)$. On the other hand, the presence of the $2P$ states in the bottomonium family would add an intermediate step in the pattern of polarization change. Figure~\ref{fig:pol_measurements} shows the results of pseudo-measurements of the prompt-J/$\psi$ and $\Upsilon(1S)$ polarizations in central Pb-Pb collisions, based on, respectively, about $30$k and $10$k reconstructed signal events in the dimuon channel and assuming in both cases a background fraction of $40\%$. 
The dimuon $p_{\rm T}$ and rapidity are in the ranges $10 < p_{\rm T} < 20$~GeV$/c$ and $|y| < 2$. Only events where both muons have $p_{\rm T} > 5$~GeV$/c$ are included in the reconstruction, in order to simulate the typical reduction of angular phase-space affecting this kind of measurement at the LHC. The central values of the measurements correspond to the expected polarizations, strongly longitudinal and fully transverse in the two respective scenarios, after the melting of the $\chi$ states. The results exclude a large part of the $(\lambda_\vartheta, \lambda_\varphi)$ plane and, in particular, the region around the origin, containing the (precisely determined) pp values ($\lambda_\vartheta \sim -0.15$ in the J/$\psi$ scenario and $\lambda_\vartheta \sim 0.3$ in the $\Upsilon(1S)$ scenario, assuming $R(\chi_{b}) \simeq 0.4$). Such measurements would represent a significant indication of the nuclear dissociation of the $\chi$ states. \section{Summary} \label{sec:summary} We have demonstrated that it is possible to determine the nuclear suppression of the $\chi_c$ and $\chi_b$ states through measurements of the inclusive dilepton decay distributions of prompt J/$\psi$ and $\Upsilon(1S)$. In a preliminary discussion we have illustrated how the polarizations of the directly produced $S$- and $P$-state quarkonia are likely to differ significantly from one another. Given that the feed-down contributions of $P$ states to the prompt $S$ states are large, this means that there must be a measurable difference between the decay distributions of indirectly and directly produced $S$ states. The hypothesis is strongly supported by the E866 $\Upsilon$ data and is seen to reconcile the CDF prompt-J/$\psi$ and $\Upsilon(1S)$ measurements with perturbative-QCD predictions for the polarizations of the directly produced states.
These are interesting indications for the understanding of quarkonium production and should be verified with detailed polarization measurements distinguishing between the properties of directly and indirectly produced states. We have also proposed an alternative and simplified way of determining the polarizations of J/$\psi$ and $\Upsilon(1S)$ coming from decays of $P$ states, using measurements of $R(P)$ and of the polarizations of the $1S$ and $2/3S$ states, instead of studying directly the angular distribution of events identified with the presence of a radiated photon. With the above premises, we have shown that a change in the relative yield of $S$ and $P$ states from proton-proton to nucleus-nucleus collisions is directly reflected in an observable change of the prompt-J/$\psi$ [or $\Upsilon(1S)$] polarization. The sequential dissociation scenario has a particularly clean polarization signature: the melting of the $\chi$ states would be signalled by the observation of a significantly larger prompt-J/$\psi$ [$\Upsilon(1S)$] polarization than the one measured in pp collisions. After the complete suppression of $\chi$ production, the polarization should approach the one measured (in pp collisions) for the $\psi^\prime$ [$\Upsilon(2/3S)$]. In conclusion, quarkonium polarization can be used as a new probe for the formation of a deconfined medium. This method, based on the study of dilepton kinematics alone, provides a feasible and clean alternative to the direct measurement of the $\chi$ yields through reconstruction of radiative decays. With sizeable J/$\psi$ and $\Upsilon(1S)$ event samples to be collected in nucleus-nucleus collisions, the LHC experiments have the potential to provide a clear insight into the role of the $\chi$ states in the dissociation of quarkonia, taking a crucial step forward in establishing the validity of the sequential melting mechanism.
\bigskip We acknowledge support from Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal, under contracts SFRH/BPD/42343/2007, CERN/FP/116367/2010 and CERN/FP/116379/2010. We also acknowledge interesting discussions with C. Louren\c{c}o.
\section{Introduction} \noindent This paper is devoted to the study of perturbations of the Crapper waves through gravity and concentrated vorticity. The problem that we analyze is the free-boundary stationary Euler equation with vorticity \begin{subequations}\label{v-euler} \begin{align}[left=\empheqlbrace\,] &v\cdot\nabla v+\nabla p+ge_2=0\hspace{1cm}\textrm{in}\hspace{0.3cm}\Omega\label{v-euler1}\\[2mm] &\nabla\cdot v=0\hspace{3.35cm}\textrm{in}\hspace{0.3cm}\Omega\label{v-euler2}\\[2mm] &\nabla^{\perp}\cdot v=\omega \hspace{3.05cm}\textrm{in}\hspace{0.3cm}\Omega\label{v-euler3}\\[2mm] &v\cdot n =0 \hspace{3.45cm}\textrm{on}\hspace{0.3cm} \mathcal{S}\label{v-euler4}\\[2mm] &p=TK\hspace{3.5cm}\textrm{on}\hspace{0.3cm}\mathcal{S}\label{v-euler5} \end{align} \end{subequations} \medskip \noindent Here $v$ and $p$ are the velocity and the pressure, respectively; $g$ is the gravity, $e_2$ is the second vector of the Cartesian basis, $K$ is the curvature of the free boundary, $T$ the surface tension and $\omega$ the vorticity, which we will specify later. Moreover, since $\Omega$ is defined as a fluid region and $\mathbb{R}^2\setminus \Omega$ as a vacuum region, there exists an interface $\mathcal{S}$ that separates the two regions and $n$ is the normal vector at the interface. We parametrize the interface with $z(\alpha)=(z_1(\alpha),z_2(\alpha))$, for $\alpha\in[-\pi,\pi]$. Thus $\Omega$ is defined for $-\pi< x< \pi$ and $y$ below the interface $z(\alpha)$ with $z_2(\pm \pi)=1$.\\ \noindent The Crapper waves are exact solutions of the water waves problem with surface tension at infinite depth. In \cite{Crapper1957}, Crapper proves the existence of pure capillary waves with an overhanging profile. His result has been extended in \cite{Kinnersley1976} by Kinnersley for the finite depth case and in \cite{AAW2013} by Akers-Ambrose-Wright by adding a small gravity. 
In \cite{ASW2014} Ambrose-Strauss-Wright analyze the global bifurcation problem for traveling waves, considering the presence of two fluids, and in \cite{CEG2016} and \cite{CEG2019} C\'ordoba-Enciso-Grubic add, beyond the small gravity, a small density in the vacuum region in order to prove the existence of self-intersecting Crapper solutions with two fluids.\\ \noindent In the present paper we will deal with rotational waves. The literature about these waves is very recent and the first important result is the one by Constantin and Strauss \cite{CS2004}. They study the rotational gravity water waves problem without surface tension at finite depth and they are able to prove the existence of large amplitude waves. Later, in \cite{CV2011}, Constantin and Varvaruca extend the Babenko equation for irrotational flow \cite{Babenko87} to the gravity water waves with constant vorticity at finite depth. They remark that the new formulation opens the possibility of using global bifurcation theory to show the existence of large amplitude and possibly overhanging profiles. Furthermore, in a recent paper \cite{CSV2016}, the same authors construct waves of large amplitude via global bifurcation. Such waves could have overhanging profiles but their explicit existence is still an open problem.\\ \noindent Furthermore, there are some new results by Hur and Vanden-Broeck \cite{Hur-Vanden2020} and by Hur and Wheeler \cite{Hur-Wheeler2020}, where the authors prove, first numerically and then analytically, the existence of a new exact solution for periodic traveling waves in constant vorticity flows of infinite depth, in the absence of gravity and surface tension. 
They show that the free surface is the same as that of Crapper's capillary waves in an irrotational flow.\\ \noindent Concerning the presence of surface tension in a rotational fluid, we recall the works by Wahl\'en: in \cite{Wahlen2006-1} the author proves the existence of symmetric regular capillary waves for arbitrary vorticity distributions, provided that the wavelength is small enough, while in \cite{Wahlen2006-2} he adds a gravity force acting at the interface and proves the existence of steady periodic capillary-gravity waves. As far as we know, there is no proof of the existence of overhanging waves in either the capillary or the gravity-capillary rotational setting, with a fixed period. In \cite{deBoeck2014}, De Boeck shows that Crapper waves are a limiting configuration both for gravity-capillary water waves in infinite depth (see also \cite{AAW2013}) and for gravity-capillary water waves with constant vorticity at finite depth. His formulation comes from the one introduced in \cite{CV2011} and the idea is based on taking a small period, which implies that Crapper's waves govern both gravity-capillary waves and gravity-capillary waves with constant vorticity at finite depth. \noindent In contrast to his work, we will consider a fixed period and a small, concentrated vorticity, such as the point vortex and the vortex patch. \\ \noindent In \cite{SWZ2013}, Shatah, Walsh and Zheng study the capillary-gravity water waves with concentrated vorticity and they extend their work in \cite{EWZ2019} by considering an exponentially localized vorticity; in both cases they perturb from the flat interface and do not consider overhanging profiles.\\ \noindent However, the technique we will use is completely different from those of the cited papers, since we aim to show the existence of a perturbation of Crapper's waves with both small concentrated vorticity and small gravity. 
\subsection{Outline of the paper} \noindent In section \ref{settings} we describe the setting in which we work and we introduce a new formulation for the problem \eqref{v-euler}, through the stream function and a proper change of coordinates to fix the domain. In section \ref{point-case} we describe the point vortex formulation and the principal operators that identify our problem. At the end of the section we prove the main theorem \ref{point-existence}, which shows the existence of a perturbation of Crapper's waves with a small point vortex. In the last section we introduce the problem \eqref{v-euler} with a vortex patch, which we identify through three operators; the implicit function theorem allows us to prove the existence of a perturbation of Crapper's waves also with a small vortex patch, theorem \ref{patch-existence}. \bigskip \section{Setting of the problem}\label{settings} \noindent The interface $\mathcal{S}=\partial\Omega$, between the fluid region with density $\rho=1$ and the vacuum region, has a parametrization $z(\alpha)$ which satisfies the periodicity conditions $$z_1(\alpha+2\pi)=z_1(\alpha) +2\pi, \hspace{1cm} z_2(\alpha+2\pi)=z_2(\alpha),$$ \medskip \noindent and it is symmetric with respect to the $y-$axis \begin{equation}\label{z-parity} z_1(\alpha)=-z_1(-\alpha), \hspace{1cm} z_2(\alpha)=z_2(-\alpha). \end{equation} \medskip \noindent The aim of this paper is to prove the existence of perturbations of the Crapper waves with vorticity through the techniques developed in \cite{OS2001}, in \cite{AAW2013} and \cite{CEG2016}. First of all we will rewrite the system \eqref{v-euler} in terms of the stream function and then we will perform some changes of variables in order to modify the fluid region and to analyze the problem in a more manageable domain. The key point is the use of the implicit function theorem to show that in a neighborhood of the Crapper solutions there exists a perturbation due to the presence of gravity and vorticity. 
\subsection{The stream formulation with vorticity} \noindent The fluid flow is governed by the incompressible stationary Euler equations \eqref{v-euler}. The incompressibility condition \eqref{v-euler2} implies the existence of a stream function $\psi:\Omega\rightarrow\mathbb{R}$, with $v=\nabla^{\perp}\psi$, and the kinematic boundary condition \eqref{v-euler4} implies $\psi=0$ on $\mathcal{S}$. In addition, by rewriting equation \eqref{v-euler1} at the interface, using the condition \eqref{v-euler5} and the fact that the vorticity we consider is concentrated inside the domain $\Omega$, we end up with the Bernoulli equation \begin{equation} \frac{1}{2} |v|^2+TK+gy=\textrm{constant}. \end{equation} \medskip \noindent We can write the system \eqref{v-euler} in terms of the stream function as follows \begin{subequations}\label{psi-euler} \begin{align}[left=\empheqlbrace\,] &\Delta\psi=\omega\hspace{6cm}\textrm{in}\hspace{0.3cm}\Omega\label{psi-eul1}\\ &\psi=0\hspace{6.4cm}\textrm{on}\hspace{0.3cm}\mathcal{S}\label{psi-eul2}\\ &\frac{1}{2}|\nabla\psi|^2+gy+TK=\textrm{constant}\hspace{1.9cm}\textrm{on}\hspace{0.3cm}\mathcal{S}\label{psi-eul3}\\ &\frac{\partial\psi}{\partial x}=0 \hspace{6.1cm}\textrm{on}\hspace{0.3cm} x=\pm\pi\label{psi-eul4}\\ &\lim_{y\rightarrow -\infty}\left(\frac{\partial\psi}{\partial y},-\frac{\partial\psi}{\partial x}\right)=(c,0)\label{psi-eul5} \end{align} \end{subequations} \medskip \noindent where the condition \eqref{psi-eul4} comes from the periodicity and symmetry assumptions, the condition \eqref{psi-eul5} means that the flow becomes uniform at the infinite bottom, and $c\in\mathbb{R}$ is the wave speed. The main difficulty we have to face is the absence of a potential, which is due to the rotational nature of the problem. We will treat the point vortex and the vortex patch in two different ways, since the nature of the singularity is different in the two cases, but before dealing with our problem we will focus on the general framework. 
\bigskip \subsection{The general vorticity case}\label{subsec-change-variables}\label{general-vorticity} \noindent The main difficulties of the problem \eqref{psi-euler} are the presence of a moving interface and the absence of a potential, since the fluid is not irrotational. We recall Zeidler's theory \cite{Zeidler} of pseudo-potentials and introduce the function $\phi$, which satisfies the following equations \begin{equation}\label{pseudo-potential} \left\{\begin{array}{lll} \displaystyle\frac{\partial\phi}{\partial x}=W(x,y)\frac{\partial\psi}{\partial y}\\[2mm] \displaystyle\frac{\partial\phi}{\partial y}=-W(x,y)\frac{\partial\psi}{\partial x}, \end{array}\right. \end{equation} \medskip \noindent where $W(x,y)$ is exactly equal to $1$ when the fluid is irrotational and satisfies \begin{equation}\label{x-y-W} \frac{\partial W}{\partial x}\frac{\partial\psi}{\partial x}+\frac{\partial W}{\partial y}\frac{\partial\psi}{\partial y}+W\Delta\psi=0. \end{equation} \medskip \noindent We transform the problem from the $(x,y)$-plane into the $(\phi,\psi)$-plane, by taking advantage of the fact that the stream function is zero at the interface, see fig. \ref{omega-to-disk}. Furthermore, since we consider the case of symmetric waves, it follows that \begin{equation}\label{symmetry} \left\{\begin{array}{lll} \phi(x,y)=-\phi(-x,y)\\[3mm] \psi(x,y)=\psi(-x,y), \end{array}\right. \end{equation} \medskip \noindent and they satisfy the following relations, coming from \eqref{pseudo-potential}, \begin{equation}\label{x-y-Jacobian} \begin{pmatrix} \displaystyle\frac{\partial x}{\partial\phi} & \displaystyle\frac{\partial x}{\partial\psi}\\ \displaystyle\frac{\partial y}{\partial\phi} & \displaystyle\frac{\partial y}{\partial\psi}\\ \end{pmatrix} =\frac{1}{W(v_1^2+v_2^2)} \begin{pmatrix} v_1 & -W v_2\\ v_2 & W v_1 \end{pmatrix}, \end{equation} \medskip \noindent where $v_1, v_2$ are the components of the velocity field. 
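\medskip \noindent Note that the compatibility condition \eqref{x-y-W} is nothing but the equality of the mixed partial derivatives of the pseudo-potential $\phi$: from \eqref{pseudo-potential},
\begin{equation*}
\frac{\partial}{\partial y}\left(W\frac{\partial\psi}{\partial y}\right)=\frac{\partial}{\partial x}\left(-W\frac{\partial\psi}{\partial x}\right)\quad\Longrightarrow\quad \frac{\partial W}{\partial x}\frac{\partial\psi}{\partial x}+\frac{\partial W}{\partial y}\frac{\partial\psi}{\partial y}+W\Delta\psi=0,
\end{equation*}
which is exactly \eqref{x-y-W}; in particular, for an irrotational flow $W\equiv 1$ solves it and $\phi$ reduces to the usual velocity potential conjugate to $\psi$.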
Moreover, we want to write the system in a non-dimensional setting; thus the new variables are $$(\phi,\psi)=\frac{1}{c}(\phi^*,\psi^*), \quad (v_1,v_2)=\frac{1}{c} (v_1^*,v_2^*), \quad \omega=\frac{1}{c}\omega^*,$$ \noindent where the variables with the star are the dimensional ones and $c$ is the wave speed. The properties of our problem allow us to pass from $\Omega$ into $\tilde{\Omega}$, defined as follows \begin{equation}\label{Omega-tilde} \tilde{\Omega}:=\{(\phi,\psi): -\pi<\phi<\pi, -\infty<\psi<0\}. \end{equation} \medskip \noindent We have to transform the system \eqref{psi-euler} and the equation \eqref{x-y-W} in the new coordinates. So we take the derivative with respect to $\phi$ of the condition \eqref{psi-eul3} and we get \begin{equation}\label{phi-psi-DBernoulli} \frac{\partial}{\partial\phi}\left(\frac{v_1^2+v_2^2}{2}\right)+p\frac{v_2}{v_1^2+v_2^2}-q\frac{\partial}{\partial\phi}\left[\frac{W}{\sqrt{v_1^2+v_2^2}}\left(v_1\frac{\partial v_2}{\partial\phi}-v_2\frac{\partial v_1}{\partial\phi}\right)\right]=0, \end{equation} \medskip \noindent where $\displaystyle p=\frac{g}{c^2}$ and $\displaystyle q=\frac{T}{c^2}$ and \eqref{x-y-W} becomes \begin{equation}\label{phi-psi-W} (v_1^2+v_2^2)\frac{\partial W}{\partial\psi}=W\omega. \end{equation} \medskip \noindent The problem we study is periodic, so it is more natural to do the analysis in a circular domain. We introduce the independent variable $\zeta=e^{-i\phi+\psi}$, where $\phi+i\psi$ runs in $\tilde{\Omega}$ and $\zeta$ in the unit disk, so $\zeta=\rho e^{i\alpha}$. The relation between $(\phi,\psi)$ and the variable in the disk $(\alpha,\rho)$ is the following $(\phi,\psi)=(-\alpha,\log(\rho))$, where $-\pi<\alpha<\pi$ and $0 <\rho<1$. Thus, we pass from $\tilde{\Omega}$ into the unit disk, see fig. \ref{omega-to-disk}. 
\begin{figure}[htbp] \centering \includegraphics[scale=0.5]{omega-to-disk} \caption{The domains $\Omega$, $\tilde{\Omega}$ and the disk.}\label{omega-to-disk} \end{figure} \medskip \noindent Furthermore, we define the dependent variables $\tau(\alpha,\rho)$ and $\theta(\alpha,\rho)$ as follows \begin{equation}\label{tau-theta} \tau=\frac{1}{2}\log(v_1^2+v_2^2),\quad \theta=\arctan\left(\frac{v_2}{v_1}\right). \end{equation} \medskip \noindent Thanks to \eqref{tau-theta}, the equation \eqref{phi-psi-W} for $W$ becomes \begin{equation*} e^{2\tau}\rho\frac{\partial W}{\partial\rho}=W\omega, \end{equation*} \medskip \noindent then we have \begin{equation}\label{alpha-rho-W} \displaystyle W(\alpha,\rho)=\exp\left(\int_{0}^{\rho}\omega\frac{e^{-2\tau(\alpha,\rho')}}{\rho'}\,d\rho'\right). \end{equation} \medskip \noindent The derivative of the Bernoulli equation \eqref{phi-psi-DBernoulli}, computed at the interface $z(\alpha)$, which corresponds to $\rho=1$, becomes \begin{equation}\label{alpha-rho-DBernoulli} \frac{\partial}{\partial\alpha}\left(\frac{1}{2}e^{2\tau(\alpha,1)}\right)-p\frac{e^{-\tau(\alpha,1)}\sin(\theta(\alpha,1))}{W(\alpha,1)}+q\frac{\partial}{\partial\alpha}\left(W(\alpha,1)e^{\tau(\alpha,1)}\frac{\partial\theta}{\partial\alpha}\right)=0. \end{equation} \bigskip \section{The point vortex case}\label{point-case} \subsection{The point vortex framework} We consider a point vortex, which does not touch the interface $z(\alpha)$, defined as $\omega=\omega_0 \delta((x,y)-(0,0))$, where $\delta((x,y)-(0,0))$ is a Dirac delta centered at the point $(0,0)$ and $\omega_0$ is a small constant. 
In addition, since we have a fluid with density $1$ inside the domain $\Omega$ and the vacuum in $\mathbb{R}^2\setminus\Omega$, there is a discontinuity of the velocity field at the interface and a concentration of vorticity $\tilde{\omega}(\alpha)\delta((x,y)-(z_1(\alpha),z_2(\alpha)))$, where $\tilde{\omega}(\alpha)$ is the amplitude of the vorticity along the interface. This implies that the stream function $\psi$ in $\Omega$ is the sum of a harmonic part \begin{equation}\label{harmonic-point-stream} \psi_H(x,y)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\log\left|(x,y)-(z_1(\alpha'), z_2(\alpha'))\right|\tilde{\omega}(\alpha')\,d\alpha', \end{equation} \noindent which is continuous across the interface, and a part related to the point vortex. The velocity can be obtained by taking the orthogonal gradient of the stream function and we have\\ \begin{equation}\label{velocity-point} v(x,y)=(\partial_y\psi_{H}(x,y), -\partial_x\psi_H(x,y))+\frac{\omega_0}{2\pi}\frac{(y,-x)}{x^2+y^2}. \end{equation} \medskip \noindent However, in order to describe the point vortex problem we have to adapt the kinematic boundary condition \eqref{v-euler4} and the Bernoulli equation \eqref{v-euler5}, equivalent to \eqref{psi-eul3}. 
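\medskip \noindent The second term in \eqref{velocity-point} is the orthogonal gradient of the stream function of the point vortex, $\psi_P(x,y)=\frac{\omega_0}{2\pi}\log|(x,y)|$: since here $\nabla^{\perp}=(\partial_y,-\partial_x)$,
\begin{equation*}
\nabla^{\perp}\psi_P=\frac{\omega_0}{2\pi}\left(\frac{y}{x^2+y^2},\frac{-x}{x^2+y^2}\right),\qquad \Delta\psi_P=\omega_0\,\delta((x,y)-(0,0)),
\end{equation*}
consistently with \eqref{psi-eul1}.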
First, let us compute the velocity at the interface by taking the limit in the normal direction; we get \begin{equation} \begin{split} v(z(\alpha))&=(\partial_{z_2}\psi_H,-\partial_{z_1}\psi_H)+\frac{1}{2}\frac{\tilde{\omega}(\alpha)}{|\partial_{\alpha}z|^2}\partial_{\alpha}z+\frac{\omega_0}{2\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}\\[3mm] &=BR(z(\alpha),\tilde{\omega}(\alpha))+\frac{1}{2}\frac{\tilde{\omega}(\alpha)}{|\partial_{\alpha}z|^2}\partial_{\alpha}z+\frac{\omega_0}{2\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}, \end{split} \end{equation} \medskip \noindent where $BR(z(\alpha),\tilde{\omega}(\alpha))$ is the Birkhoff-Rott integral \begin{equation*} BR(z(\alpha),\tilde{\omega}(\alpha))=\frac{1}{2\pi}\textrm{P.V.}\int_{-\pi}^{\pi}\frac{(z(\alpha)-z(\alpha'))^{\perp}}{|z(\alpha)-z(\alpha')|^2}\,\tilde{\omega}(\alpha')\,d\alpha'. \end{equation*} \medskip \noindent Thus the condition \eqref{v-euler4} becomes \begin{equation}\label{point-kbc} \begin{split} &v(z(\alpha))\cdot(\partial_{\alpha}z(\alpha))^{\perp}=\left(BR(z(\alpha),\tilde{\omega}(\alpha))+\frac{\omega_0}{2\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}\right)\cdot (\partial_{\alpha}z(\alpha))^{\perp}=0. \end{split} \end{equation} \medskip \noindent To deal with the Bernoulli equation and to reach a manageable formulation, we have to use the change of variables described in section \ref{general-vorticity}. We pass from the domain $\Omega$ in the $(x,y)$ variables, fig. \ref{omega-to-disk}, into $\tilde{\Omega}$ in $(\phi, \psi)$, and finally into the unit disk. In order to pass from $\Omega$ into $\tilde{\Omega}$ we use the pseudo-potential defined in \eqref{pseudo-potential} and \eqref{x-y-W}. Moreover, as one can see in fig. \ref{disk} (left), the interface $z(\alpha)$ is sent to the line $\psi=0$, thanks to condition \eqref{psi-eul2}, and the point vortex is still a point $(0,\psi_0)$ on the vertical axis due to the oddness of $\phi$. 
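\medskip \noindent We remark that the principal value in the Birkhoff-Rott integral of \eqref{point-kbc} is amenable to direct numerical evaluation. The following sketch (illustrative only, not part of the proof) approximates $BR$ with a punctured trapezoidal rule on a closed-curve analogue: on a circle of radius $R$ with constant amplitude $\tilde{\omega}=\gamma$, the integral is purely tangential with tangential component $-\gamma/(2R)$, in the convention $(a,b)^{\perp}=(b,-a)$ used in \eqref{velocity-point}.

```python
import numpy as np

# Punctured-trapezoid approximation of the principal-value
# Birkhoff-Rott integral at every node of a closed curve.
# Vectors (a,b) are encoded as complex numbers a+ib, so that the
# perpendicular (a,b)^perp = (b,-a) is -1j times the complex number.
def birkhoff_rott(z, omega_t):
    N = len(z)
    h = 2*np.pi/N
    BR = np.zeros(N, dtype=complex)
    for j in range(N):
        dz = z[j] - z                      # z(alpha) - z(alpha')
        dz[j] = 1.0                        # dummy value, excluded below
        kernel = -1j*dz/np.abs(dz)**2      # (z - z')^perp / |z - z'|^2
        kernel[j] = 0.0                    # punctured rule: skip alpha' = alpha
        BR[j] = (h/(2*np.pi))*np.sum(kernel*omega_t)
    return BR

# Sanity check on a circle of radius R with constant amplitude gamma:
# BR is purely tangential, with tangential component -gamma/(2R).
N, R, gamma = 400, 1.0, 1.0
alpha = 2*np.pi*np.arange(N)/N
z = R*np.exp(1j*alpha)
BR = birkhoff_rott(z, gamma*np.ones(N))
tangent = 1j*np.exp(1j*alpha)              # unit tangent
normal = np.exp(1j*alpha)                  # unit outward normal
tang_comp = (BR*np.conj(tangent)).real
norm_comp = (BR*np.conj(normal)).real
```

The punctured rule is adequate here because the odd part of the singular kernel cancels pairwise on a uniform grid; more accurate alternate-point quadratures exist but are not needed for this illustration.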
In order to pass from $\tilde{\Omega}$ into the unit disk, see fig. \ref{disk} (right), we use the function $e^{\psi-i\phi}=\rho e^{i\alpha}$ and the point vortex $(0,\psi_0)$ becomes a point $(0,\rho_0)$; it does not depend on the angle $\alpha$. \medskip \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{disk} \caption{The domains $\tilde{\Omega}$ and $D_1$.}\label{disk} \end{figure} \medskip \noindent After this change of variables, we rewrite the equation \eqref{alpha-rho-W} for $W(\alpha,\rho)$, by substituting $\omega=\omega_0\delta((\alpha,\rho)-(0,\rho_0))$, and we have \begin{equation}\label{W_0} W(\alpha,\rho)= \left\{ \begin{array}{rl} 1 & \alpha\neq 0\\[2mm] \displaystyle \exp\left(\frac{\omega_0 e^{-2\tau(0,\rho_0)}}{\rho_0}\right) & \alpha=0,\hspace{0.3cm}\rho_0\in (0,\rho). \end{array}\right. \end{equation} \medskip \noindent We immediately point out that in this case the function $W(\alpha,\rho)=W_{\omega_0,\rho_0}=W_0\in\mathbb{R}$ is constant, and this constant is exactly one when there is no vorticity.\\ \noindent The derivative of the Bernoulli equation \eqref{alpha-rho-DBernoulli} becomes \begin{equation}\label{point-DBernoulli} \frac{\partial}{\partial\alpha}\left(\frac{1}{2}e^{2\tau(\alpha,1)}\right)-p\frac{e^{-\tau(\alpha,1)}\sin(\theta(\alpha,1))}{W_0}+q W_0\frac{\partial}{\partial\alpha}\left(e^{\tau(\alpha,1)}\frac{\partial\theta}{\partial\alpha}\right)=0. \end{equation} \medskip \noindent By integrating with respect to $\alpha$, we get \begin{equation*} \frac{1}{2}e^{2\tau(\alpha,1)}-\frac{p}{W_0}\int_{-\pi}^{\alpha} e^{-\tau(\alpha',1)}\sin(\theta(\alpha',1))\,d\alpha'+q W_0 e^{\tau(\alpha,1)}\frac{\partial\theta}{\partial\alpha}(\alpha,1)=\tilde{\gamma}. \end{equation*} \medskip \noindent In the pure capillarity case the constant is exactly $\frac{1}{2}$. For this reason we take $\tilde{\gamma}=\frac{1}{2}+B$, where $B$ is a perturbation of the Crapper constant, see \cite{OS2001}. 
We multiply the equation by $e^{-\tau(\alpha,1)}$ and we get a new formulation for the Bernoulli equation. \begin{equation}\label{point-Bernoulli-tau-theta} \begin{split} &\sinh(\tau(\alpha,1))-\frac{p}{W_0} e^{-\tau(\alpha,1)}\left(\int_{-\pi}^{\alpha}e^{-\tau(\alpha',1)}\sin(\theta(\alpha',1))\,d\alpha'{-1}\right)\\[3mm] &+q W_0\frac{\partial\theta(\alpha,1)}{\partial\alpha}-Be^{-\tau(\alpha,1)}=0. \end{split} \end{equation} \medskip \noindent We can solve our problem by finding $2\pi$-periodic functions $\tau(\alpha)$ even and $\theta(\alpha)$ odd, and an even function $\tilde{\omega}(\alpha)$, that satisfy the equations \eqref{point-kbc} and \eqref{point-Bernoulli-tau-theta}. Recall that in subsection \ref{subsec-change-variables} we explained the change of variables needed to fix the domain. We observe that at the interface $z(\alpha)$, we have $\psi(z(\alpha))=0$ and $\rho=1$, respectively. Thus, \begin{equation}\label{phi-parametrization} \phi(z(\alpha))=-\alpha\quad\Longrightarrow\quad \nabla\phi(z(\alpha))\cdot\partial_{\alpha}z(\alpha)= -1, \end{equation} \medskip \noindent and from \eqref{pseudo-potential} we get that \eqref{phi-parametrization} can be written as follows \begin{equation}\label{v-tangential} W_0 v(z(\alpha))\cdot\partial_{\alpha}z(\alpha)=-1. \end{equation} \medskip \noindent Since the equation \eqref{v-tangential} has been obtained by using the kinematic boundary condition \eqref{point-kbc}, to solve our problem we will use the Bernoulli equation \eqref{point-Bernoulli-tau-theta} and the equation \eqref{v-tangential}. \bigskip \subsection{Crapper formulation}\label{Crapper formulation} \noindent Our goal is to prove the existence of overhanging waves in the presence of concentrated vorticity, such as a point vortex or a vortex patch (Section \ref{patch}). It is well-known that without vorticity ($\omega_0=0$, equivalent to $W_0=1$), in \cite{AAW2013} the authors prove the existence of gravity-capillary overhanging waves. 
If we also remove the gravity, then we have the seminal result of Crapper \cite{Crapper1957}, where the problem is to find a $2\pi$-periodic, analytic function $f_c=\theta_c+i\tau_c$ in the lower half plane which solves the Bernoulli equation \begin{equation}\label{Crapper-Bernoulli} \sinh(\tau_c)+q \frac{\partial\theta_c}{\partial\alpha}=0, \end{equation} \medskip \noindent where $q=\frac{T}{c^2}$. Furthermore, the analyticity of the function $f_c$ implies that $\tau_c$ can be written as the Hilbert transform of $\theta_c$ at the boundary $\rho=1$, so the equation above reduces to an equation in the variable $\theta_c$, \begin{equation}\label{Crapper-Bernoulli-only-theta} \sinh(\mathcal{H}\theta_c)+q \frac{\partial\theta_c}{\partial\alpha}=0. \end{equation} \medskip \noindent This problem admits a family of exact solutions, \begin{equation}\label{Crapper-solutions} f_c(w)=2i\log\left(\frac{1+Ae^{-iw}}{1-Ae^{-iw}}\right), \end{equation} \medskip \noindent where $w=\phi+i\psi$ and in this case $(\phi,\psi)$ are harmonic conjugates. The parameter $A$ is defined in $(-1,1)$ and for $|A|<A_0=0.45467\ldots$ the interface does not have self-intersections. Moreover, by substituting \eqref{Crapper-solutions} into \eqref{Crapper-Bernoulli-only-theta} for $\rho=1$, we get $q=\frac{1+A^2}{1-A^2}$. This implies \begin{equation}\label{surface-tension} T=\frac{1+A^2}{1-A^2}c^2. \end{equation} \medskip \noindent By using \eqref{x-y-Jacobian} in the Crapper case, i.e. with $W=1$, together with $\phi=-\alpha$ and $\rho=1$, we get \begin{equation}\label{Crapper-z-derivative} \partial_{\alpha}z^c(\alpha)=- e^{-\tau_c(\alpha)+i\theta_c(\alpha)}. \end{equation} \medskip \noindent We focus on this kind of waves because, for some values of the parameter $A$, they are overhanging. 
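\medskip \noindent The relation $q=\frac{1+A^2}{1-A^2}$ can be checked numerically. At $\rho=1$, where $\phi=-\alpha$, the solution \eqref{Crapper-solutions} gives the explicit boundary trace $$\theta_c(\alpha)=-2\left[\arctan\left(\frac{A\sin\alpha}{1+A\cos\alpha}\right)+\arctan\left(\frac{A\sin\alpha}{1-A\cos\alpha}\right)\right],\qquad \tau_c(\alpha)=\log\frac{1+2A\cos\alpha+A^2}{1-2A\cos\alpha+A^2},$$ and \eqref{Crapper-Bernoulli-only-theta} can then be verified with an FFT-based Hilbert transform. A minimal sketch (assuming the convention $\mathcal{H}e^{ik\alpha}=-i\,\mathrm{sgn}(k)\,e^{ik\alpha}$):

```python
import numpy as np

N = 512
alpha = 2*np.pi*np.arange(N)/N - np.pi
A = 0.3
q = (1 + A**2)/(1 - A**2)          # surface-tension relation to be tested

# theta_c at rho = 1 (phi = -alpha), from the explicit Crapper solution
s, c = np.sin(alpha), np.cos(alpha)
theta = -2*(np.arctan2(A*s, 1 + A*c) + np.arctan2(A*s, 1 - A*c))

k = np.fft.fftfreq(N, d=1.0/N)     # integer wavenumbers
th_hat = np.fft.fft(theta)
tau = np.fft.ifft(-1j*np.sign(k)*th_hat).real   # Hilbert transform of theta
dtheta = np.fft.ifft(1j*k*th_hat).real          # spectral derivative

# residual of the Bernoulli equation sinh(H theta_c) + q theta_c' = 0
residual = np.sinh(tau) + q*dtheta
```

The residual is at the level of machine precision, and the computed Hilbert transform agrees with the closed-form $\tau_c$, consistently with \eqref{surface-tension}.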
\bigskip \subsection{Perturbation of Crapper waves with a point of vorticity} \noindent In our formulation, the main difference with respect to the Crapper waves \cite{Crapper1957} is in the function $f=\theta+i\tau$, which is not analytic because of the presence of vorticity. The idea is to prove that our solutions are perturbations of the Crapper waves. If we recall the Crapper solution with small gravity but without vorticity, $(\theta_A,\tau_A)$, we know that $f_A=\theta_A+i\tau_A$ is analytic and $(\theta_A,\tau_A)$ satisfy the following relations in both $(\phi,\psi)$ and $(\alpha,\rho)$ variables. \begin{equation} \label{tauCR-thetaCR} \begin{cases} \displaystyle\frac{\partial\theta_A}{\partial\phi}=\frac{\partial\tau_A}{\partial\psi} \\[4mm] \displaystyle\frac{\partial\theta_A}{\partial\psi}=-\frac{\partial\tau_A}{\partial\phi} \end{cases}\Longrightarrow \begin{cases} \displaystyle\frac{\partial\theta_A}{\partial\alpha}=-\rho\frac{\partial\tau_A}{\partial\rho} \\[4mm] \displaystyle\rho\frac{\partial\theta_A}{\partial\rho}=\frac{\partial\tau_A}{\partial\alpha}. \end{cases} \end{equation} \medskip \noindent Moreover, $\tau_A=\mathcal{H}\theta_A$ at the interface. The idea is to write our dependent variables $\tau$ and $\theta$ as the sum of a Crapper part and a small perturbation, due to the small vorticity. So we have \begin{equation}\label{point-tau-theta-perturbations} \tau=\tau_A+\omega_0\tilde{\tau},\quad \theta=\theta_A+\omega_0\tilde{\theta}. 
\end{equation} \medskip \noindent So the Bernoulli equation \eqref{point-Bernoulli-tau-theta} reduces to \begin{equation}\label{point-Bernoulli} \begin{split} &\sinh(\mathcal{H}\theta_A+\omega_0\tilde{\tau})-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}}\left(\frac{1}{W_0}\int_{-\pi}^{\alpha}e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}}\sin(\theta_A+\omega_0\tilde{\theta})\,d\alpha'{-1}\right)\\[2mm] &+q\frac{\partial(\theta_A+\omega_0\tilde{\theta})}{\partial\alpha}W_0-Be^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}}=0\hspace{1cm}\textrm{at}\hspace{0.3cm}\rho=1. \end{split} \end{equation} \medskip \noindent We will show that $\tilde{\tau}$ and $\tilde{\theta}$ are functions of $\theta_A$, so that \eqref{point-Bernoulli} will be an equation in the variable $\theta_A$ alone. In order to reach this conclusion we need to use some properties of our problem. We use the incompressibility and rotational conditions and we get the following relations for $(\tau,\theta)$ \begin{equation}\label{point-theta-tau-quasi-Cauchy-Riemann-1} \left\{\begin{array}{lll} \displaystyle\frac{\partial\theta}{\partial\psi}=-W_0\frac{\partial\tau}{\partial\phi}\\[5mm] \displaystyle\frac{\partial\theta}{\partial\phi}=\frac{\omega_0 e^{-2\tau(0,\psi_0)}\delta((\phi,\psi)-(0,\psi_0))}{W_0}+\frac{1}{W_0}\frac{\partial\tau}{\partial\psi} \end{array}\right. 
\end{equation} \medskip \noindent By substituting \eqref{point-tau-theta-perturbations} in \eqref{point-theta-tau-quasi-Cauchy-Riemann-1}, we get \begin{equation}\label{point-tilde-tau-theta-quasi-Cauchy-Riemann-1} \left\{\begin{array}{lll} \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\psi}=-W_0\omega_0\frac{\partial\tilde{\tau}}{\partial\phi}-W_0\frac{\partial\tau_A}{\partial\phi}-\frac{\partial\theta_A}{\partial\psi}\\[5mm] \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\phi}=\frac{\omega_0 e^{-2\mathcal{H}\theta_A(0,\psi_0)-2\omega_0\tilde{\tau}(0,\psi_0)}\delta((\phi,\psi)-(0,\psi_0))}{W_0}+\frac{1}{W_0}\left(\omega_0\frac{\partial\tilde{\tau}}{\partial\psi}+\frac{\partial\tau_A}{\partial\psi}\right)-\frac{\partial\theta_A}{\partial\phi} \end{array}\right. \end{equation} \medskip \noindent If we combine system \eqref{tauCR-thetaCR} with system \eqref{point-tilde-tau-theta-quasi-Cauchy-Riemann-1}, we obtain \begin{equation}\label{point-tilde-theta-phi-psi} \left\{\begin{array}{lll} \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\psi}=-W_0\omega_0\frac{\partial\tilde{\tau}}{\partial\phi}+(W_0-1)\frac{\partial\theta_A}{\partial\psi}\\[5mm] \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\phi}=\frac{\omega_0 e^{-2\mathcal{H}\theta_A(0,\psi_0)-2\omega_0\tilde{\tau}(0,\psi_0)}\delta((\phi,\psi)-(0,\psi_0))}{W_0}+\frac{\omega_0}{W_0}\frac{\partial\tilde{\tau}}{\partial\psi}+\left(\frac{1}{W_0}-1\right)\frac{\partial\theta_A}{\partial\phi} \end{array}\right. 
\end{equation} \medskip \noindent By taking the derivative with respect to $\phi$ in the first equation and the derivative with respect to $\psi$ in the second equation, and then taking the difference, we get an elliptic equation \begin{equation}\label{point-elliptic-phi-psi} \displaystyle W_0\omega_0\frac{\partial^2\tilde{\tau}}{\partial\phi^2}+\frac{1}{W_0}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\psi^2}+\frac{\omega_0 e^{-2\tau(0,\psi_0)}}{W_0}\frac{\partial}{\partial\psi}\delta((\phi,\psi)-(0,\psi_0)) +\left(\frac{1-W_0^2}{W_0}\right)\frac{\partial^2\theta_A}{\partial\phi\partial\psi}=0. \end{equation} \medskip \noindent We can repeat the computations \eqref{point-theta-tau-quasi-Cauchy-Riemann-1}, \eqref{point-tilde-tau-theta-quasi-Cauchy-Riemann-1} and \eqref{point-tilde-theta-phi-psi} in the variables $(\alpha,\rho)$, and the elliptic equation is the following \begin{equation*}\label{point-elliptic-alpha-rho} \begin{split} &\frac{\rho}{W_0}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\rho^2}+\frac{1}{W_0}\omega_0\frac{\partial\tilde{\tau}}{\partial\rho}+\frac{W_0}{\rho}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\alpha^2}+\frac{\partial}{\partial\rho}\left(\frac{\omega_0 e^{-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau}}}{W_0}\right)+\left(W_0-\frac{1}{W_0}\right)\frac{\partial^2\theta_A}{\partial\alpha\partial\rho}=0. \end{split} \end{equation*} \medskip \noindent Once we solve the elliptic equation, we obtain $\tilde{\tau}$ as a function of $\theta_A$; thanks to the relations \eqref{point-tilde-theta-phi-psi}, $\tilde{\theta}$ is then a function of $\theta_A$ as well. \subsection{The elliptic problem} \noindent In this section we want to show how to solve the elliptic problem. 
For simplicity, we will study the problem in the $(\phi,\psi)$ coordinates; thus, from \eqref{point-elliptic-phi-psi}, the equation is \begin{equation*}\label{point-elliptic-phi-psi-system} \displaystyle W_0\omega_0\frac{\partial^2\tilde{\tau}}{\partial\phi^2}+\frac{1}{W_0}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\psi^2}+\frac{\omega_0 e^{-2\tau(0,\psi_0)}}{W_0}\frac{\partial}{\partial\psi}\delta((\phi,\psi)-(0,\psi_0)) +\left(\frac{1-W_0^2}{W_0}\right)\frac{\partial^2\theta_A}{\partial\phi\partial\psi}=0. \end{equation*} \medskip \noindent The equation above is a linear elliptic equation with constant coefficients $\displaystyle W_0, \frac{1}{W_0}$. With a change of variables we obtain a Poisson equation. Specifically, if we define $\phi=W_0\phi'$, then we have \begin{equation*} \begin{split} &\frac{\partial f}{\partial\phi'}(\phi',\psi)=\frac{\partial f}{\partial\phi}\frac{\partial\phi}{\partial\phi'}=W_0\frac{\partial f}{\partial\phi}\Longrightarrow\frac{\partial f}{\partial\phi}=\frac{1}{W_0}\frac{\partial f}{\partial\phi'}\\[3mm] &\frac{\partial^2 f}{\partial\phi^2}=\frac{1}{W_0^2}\frac{\partial^2 f}{\partial\phi'^2}. \end{split} \end{equation*} \medskip \noindent The domain becomes $\tilde{\Omega'}=\{(\phi',\psi):-\frac{\pi}{W_0}<\phi'<\frac{\pi}{W_0}, -\infty<\psi<0\}$. By substituting in \eqref{point-elliptic-phi-psi}, we have \begin{equation}\label{point-elliptic-phi-psi-NEW} \displaystyle \omega_0\frac{\partial^2\tilde{\tau}}{\partial\phi'^2}+\omega_0\frac{\partial^2\tilde{\tau}}{\partial\psi^2}+\omega_0 e^{-2\tau(0,\psi_0)}\frac{\partial}{\partial\psi}\delta((\phi,\psi)-(0,\psi_0)) +\left(\frac{1-W_0^2}{W_0}\right)\frac{\partial^2\theta_A}{\partial\phi'\partial\psi}=0. \end{equation} \medskip \noindent Since we are looking for $\tau\in H^2$, we know that $\tilde{\tau}\in H^2$, so that its Laplacian is in $L^2(\tilde{\Omega'})$; then, by elliptic theory, there exists a weak solution and we can invert the Laplace operator, \cite[Theorem 9.25]{Brezis}. 
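\medskip \noindent The inversion of the Laplacian can be illustrated by a minimal spectral sketch (illustrative only: the actual problem is set on the semi-infinite strip $\tilde{\Omega'}$ with a distributional right-hand side, while the sketch below uses a fully periodic box and a smooth manufactured source, for which division by $|k|^2$ in Fourier space inverts $-\Delta$ exactly):

```python
import numpy as np

# Toy Poisson solve -Delta u = f on a 2*pi-periodic box via FFT:
# u_hat = f_hat / |k|^2 for zero-mean data.
N = 64
x = 2*np.pi*np.arange(N)/N
X, Y = np.meshgrid(x, x, indexing='ij')
kx = np.fft.fftfreq(N, d=1.0/N)
KX, KY = np.meshgrid(kx, kx, indexing='ij')
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                     # avoid division by zero on the mean mode

u_exact = np.sin(3*X)*np.cos(2*Y)  # manufactured solution
f = 13.0*u_exact                   # -Delta u_exact = (9 + 4) u_exact

f_hat = np.fft.fft2(f)
u_hat = f_hat/K2
u_hat[0, 0] = 0.0                  # fix the free additive constant
u = np.fft.ifft2(u_hat).real
```

In the problem at hand the same inversion is expressed through the Green function $G_2$ of the Poisson equation in $\tilde{\Omega'}$.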
We have \begin{equation}\label{point-tilde-tau} \omega_0\tilde{\tau}=\left(-\omega_0 e^{-2\tau(0,\psi_0)}\frac{\partial}{\partial\psi}\delta(\phi,\psi-\psi_0) -\left(\frac{1-W_0^2}{W_0}\right)\frac{\partial^2\theta_A}{\partial\phi'\partial\psi}\right)*G_2(\phi',\psi), \end{equation} \medskip \noindent where $G_2$ is the Green function of the Poisson equation in $\tilde{\Omega'}$. \subsection{Existence of gravity rotational perturbed Crapper waves}\label{point-existence-Crapper} \noindent The main theorem we want to prove is the following \begin{theorem}\label{point-existence} Let us consider the water waves problem \eqref{v-euler}, with a small point vortex and a small gravity $g$. Then, for some values of $|A|<A_0$, defined in \eqref{Crapper-solutions}, there exist periodic solutions to \eqref{v-euler} with overhanging profile. \end{theorem} \noindent In order to prove the existence of perturbed rotational Crapper waves we will apply the implicit function theorem around the Crapper solutions. \begin{theorem}[Implicit function theorem]\label{IFT} Let $X, Y, Z$ be Banach spaces and let $\zeta:X\times Y\rightarrow Z$ be of class $C^k$, with $k\geq 1$. If $\zeta(x_*, y_*)=0$ and $D_x\zeta(x_*, y_*)$ is a bijection from $X$ to $Z$, then there exist $\varepsilon>0$ and a unique $C^k$ map $\chi:Y\rightarrow X$ such that $\chi(y_*)=x_*$ and $\zeta(\chi(y), y)=0$ when $\|y-y_*\|_Y\leq\varepsilon$.
\end{theorem} \medskip \noindent The operators that identify the water waves problem with a point vortex are the following \begin{subequations} \label{point-water-waves} \begin{align} \begin{split} &\mathcal{F}_1(\theta_A,\tilde{\omega}; B,p,\omega_0):=\sinh(\mathcal{H}\theta_A+\omega_0\tilde{\tau}(\theta_A))\\[2mm] &\hspace{2cm}-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)}\left(\frac{1}{W_0}\int_{-\pi}^{\alpha}e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)}\sin(\theta_A+\omega_0\tilde{\theta}(\theta_A))\,d\alpha'-1\right)\\[2mm] &\hspace{2cm}+q\frac{\partial(\theta_A+\omega_0\tilde{\theta}(\theta_A))}{\partial\alpha}W_0-Be^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)} \end{split}\label{point-water-waves-1}\\[5mm] \begin{split} &\mathcal{F}_2(\theta_A,\tilde{\omega};B, p,\omega_0):= W_0\left(2BR(z(\alpha),\tilde{\omega}(\alpha))\cdot\partial_{\alpha}z(\alpha)+\tilde{\omega}(\alpha)\right.\\[2mm] &\hspace{2cm}\left.+\frac{\omega_0}{\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}\cdot\partial_{\alpha}z(\alpha)\right)+2 \end{split}\label{point-water-waves-2} \end{align} \end{subequations} \medskip \noindent We have that $$(\mathcal{F}_1,\mathcal{F}_2)(\theta_A,\tilde{\omega};B, p,\omega_0): H^{2}_{odd}\times H^{1}_{even}\times\mathbb{R}^{3}\rightarrow H^{1}_{even}\times H^{1}_{even}.$$ \medskip \subsubsection{Proof of Theorem \ref{point-existence}}\label{point-existence-proof} We have to analyze the two operators. First, we show that the operators vanish at the point $(\theta_c,\tilde{\omega}_c;0,0,0)$. \begin{equation}\label{F1} \mathcal{F}_1(\theta_c,\tilde{\omega}_c;0,0,0)=\sinh(\mathcal{H}\theta_c)+q\frac{\partial\theta_c}{\partial\alpha}=0, \end{equation} \noindent since this is exactly \eqref{Crapper-Bernoulli-only-theta}.
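\medskip \noindent Indeed, setting $(B,p,\omega_0)=(0,0,0)$ in \eqref{point-water-waves-1} removes the terms multiplied by $p$ and $B$, while $W_0=1$ for $\omega_0=0$ and the corrections $\omega_0\tilde{\tau}(\theta_A)$, $\omega_0\tilde{\theta}(\theta_A)$ vanish, so that for every $\theta_A$ \begin{equation*} \mathcal{F}_1(\theta_A,\tilde{\omega};0,0,0)=\sinh(\mathcal{H}\theta_A)+q\frac{\partial\theta_A}{\partial\alpha}. \end{equation*}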
\medskip \noindent The second operator, related to the kinematic boundary condition, satisfies \begin{equation}\label{F2} \mathcal{F}_2(\theta_c,\tilde{\omega}_c;0,0,0)= 2BR(z^c(\alpha),\tilde{\omega}_c(\alpha))\cdot\partial_{\alpha}z^c(\alpha)+\tilde{\omega}_c(\alpha)+2=0, \end{equation} \medskip \noindent where $z^c(\alpha)$ is the parametrization of the Crapper interface, and the identity holds by construction, see \eqref{phi-parametrization}. \medskip \noindent Now, we compute all the Fr\'echet derivatives. We will take the derivatives with respect to $\theta_A$ and $\tilde{\omega}$, then we will compute them at the point $(\theta_c,\tilde{\omega}_c;0,0,0)$ and we will show their invertibility. For the operator $\mathcal{F}_1$ we observe that $$D_{\tilde{\omega}}\mathcal{F}_1(\theta_c,\tilde{\omega}_c;0,0,0)=0.$$ \medskip \noindent It remains to compute the derivative with respect to $\theta_A$. \begin{equation*} \begin{split} &D_{\theta_A}\mathcal{F}_1=\left[\frac{d}{d\mu}\mathcal{F}_1(\theta_A+\mu\theta_1,\tilde{\omega};B,p,\omega_0)\right]_{|\mu=0}=\left[\frac{d}{d\mu}\left[\sinh(\mathcal{H}\theta_A+\mu\mathcal{H}\theta_1+\omega_0\tilde{\tau}(\theta_A+\mu\theta_1))\right.\right.\\[3mm] &-p e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\left(\frac{1}{W_0}\int_{-\pi}^{\alpha}e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\sin(\theta_A+\mu\theta_1+\omega_0 \tilde{\theta})\,d\alpha'-1\right)\\[2mm] &\left.\left.+q\frac{\partial(\theta_A+\mu\theta_1+\omega_0\tilde{\theta})}{\partial\alpha}W_0-Be^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\right]\right]_{|\mu=0}=\\[3mm] &=\cosh(\mathcal{H}\theta_A+\omega_0\tilde{\tau}(\theta_A))\left(\mathcal{H}\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\tau}(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\\[3mm]
&-p\left[\frac{d}{d\mu}\left[e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\cdot\right.\right.\\[3mm] &\hspace{2cm}\left.\left.\cdot\left(\frac{1}{W_0}\int_{-\pi}^{\alpha}e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\sin(\theta_A+\mu\theta_1+\omega_0 \tilde{\theta})\,d\alpha'-1\right)\right]\right]_{|\mu=0}\\[3mm] &+qW_0\frac{\partial\theta_1}{\partial\alpha}-B\left[\frac{d}{d\mu}\left[e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\right]\right]_{|\mu=0} \end{split} \end{equation*} \medskip \noindent In order to compute $\frac{d}{d\mu}\tilde{\tau}$ we refer to \eqref{point-tilde-tau}; since it is multiplied by $\omega_0$, which we will take equal to zero, it will disappear, as will the terms multiplied by $p$ and $B$. Thus the Fr\'echet derivative computed at $(\theta_c,\tilde{\omega}_c;0,0,0)$ is \begin{equation}\label{D-theta-F1} D_{\theta_A}\mathcal{F}_1(\theta_c,\tilde{\omega}_c;0,0,0)=\cosh(\mathcal{H}\theta_c)\mathcal{H}\theta_1+q\frac{\partial\theta_1}{\partial\alpha}. \end{equation} \bigskip \noindent The Fr\'echet derivative of $\mathcal{F}_2$ with respect to $\theta_A$ can be obtained by substituting the definition of the interface $z(\alpha)$ into the operator. Indeed, from the equations \eqref{x-y-Jacobian}, we get \begin{equation}\label{interface-derivative} \left\{\begin{array}{lll} \displaystyle\frac{\partial z_1}{\partial\alpha}=-\frac{e^{-\tau(\alpha,1)}\cos(\theta(\alpha,1))}{W(\alpha,1)}\\[4mm] \displaystyle\frac{\partial z_2}{\partial\alpha}=-\frac{e^{-\tau(\alpha,1)}\sin(\theta(\alpha,1))}{W(\alpha,1)}, \end{array}\right.
\end{equation} \medskip \noindent By substituting the value of $W(\alpha,1)$ for the point vortex and by rewriting $\tau, \theta$ as the sum of Crapper and a perturbation, we have \begin{equation}\label{point-interface} \left\{\begin{array}{lll} \displaystyle z_1(\alpha)=-\frac{1}{W_0}\int_{-\pi}^{\alpha} e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)}\cos(\theta_A+\omega_0 \tilde{\theta}(\theta_A))\,d\alpha'\\[4mm] \displaystyle z_2(\alpha)=-\frac{1}{W_0}\int_{-\pi}^{\alpha} e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)}\sin(\theta_A+\omega_0 \tilde{\theta}(\theta_A))\,d\alpha'-1 \end{array}\right. \end{equation} \medskip \noindent In compact form, the interface $z(\alpha)$ is \begin{equation}\label{point-z} \displaystyle z(\alpha)=-\frac{1}{W_0}\int_{-\pi}^{\alpha} e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)+i(\theta_A+\omega_0\tilde{\theta}(\theta_A))}\,d\alpha' -e_2. \end{equation} \medskip \noindent The main Fr\'echet derivative for the operator $\mathcal{F}_2$ is the one with respect to $\tilde{\omega}$. \begin{equation*} \begin{split} &D_{\tilde{\omega}}\mathcal{F}_2=\left[\frac{d}{d\mu}\mathcal{F}_2(\theta_A,\tilde{\omega}+\mu\omega_1;B,p,\omega_0)\right]_{|\mu=0}=\left[\frac{d}{d\mu}\left[2 W_0 BR(z(\alpha),\tilde{\omega}(\alpha)+\mu\omega_1)\cdot\partial_{\alpha}z(\alpha)\right.\right.\\[3mm] &\left.\left.+W_0 (\tilde{\omega}(\alpha)+\mu\omega_1(\alpha))+W_0\frac{\omega_0}{\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}\cdot\partial_{\alpha}z(\alpha)+2\right]\right]_{|\mu=0}\\[3mm] &=2W_0 \textrm{P.V.}\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(z(\alpha)-z(\alpha'))^{\perp}}{|z(\alpha)-z(\alpha')|^2}\, \omega_1(\alpha')\,d\alpha'\cdot \partial_{\alpha}z(\alpha)+W_0\omega_1(\alpha).
\end{split} \end{equation*} \medskip \noindent When we compute this derivative at the point $(\theta_c,\tilde{\omega}_c;0,0,0)$ we get \begin{equation}\label{D-omega-F2} D_{\tilde{\omega}}\mathcal{F}_2(\theta_c,\tilde{\omega}_c;0,0,0)=2BR(z^c(\alpha),\omega_1(\alpha))\cdot \partial_{\alpha}z^c(\alpha)+\omega_1(\alpha), \end{equation} \medskip \noindent where $z^c(\alpha)$ is the parametrization of the Crapper interface coming from \eqref{Crapper-z-derivative}. \bigskip \noindent The final step of this proof is to show the invertibility of the matrix of derivatives, which acts as follows \begin{equation}\label{point-frechet-derivative} D\mathcal{F}(\theta_c,\tilde{\omega}_c;0,0,0)\begin{pmatrix} \theta_1\\ \omega_1 \end{pmatrix}= \begin{pmatrix} D_{\theta_A}\mathcal{F}_1 & 0 \\ D_{\theta_A}\mathcal{F}_2 & D_{\tilde{\omega}}\mathcal{F}_2 \end{pmatrix}\cdot \begin{pmatrix} \theta_1\\ \omega_1 \end{pmatrix}= \begin{pmatrix} \Gamma & 0 \\ D_{\theta_A}\mathcal{F}_2 & \mathcal{A}(z^c(\alpha))+\mathcal{I} \end{pmatrix}\cdot \begin{pmatrix} \theta_1\\ \omega_1 \end{pmatrix} \end{equation} \medskip \noindent where \begin{align*} &\Gamma\theta_1=\cosh(\mathcal{H}\theta_c)\mathcal{H}\theta_1+q\frac{d}{d\alpha}\theta_1\\[3mm] &(\mathcal{A}(z^c(\alpha))+\mathcal{I})\omega_1=2BR(z^c(\alpha),\omega_1)\cdot\partial_{\alpha}z^c(\alpha)+\omega_1. \end{align*} \medskip \noindent The invertibility of \eqref{point-frechet-derivative} is related to the invertibility of the diagonal, since the matrix is triangular. Hence we have to analyze the invertibility of the operators $\Gamma$ and $\mathcal{A}+\mathcal{I}$, where $\mathcal{I}$ stands for the identity operator. Below, we summarize the properties of the operator $\Gamma$; for details, see \cite{AAW2013} and \cite{CEG2016}. \begin{lemma} The operator $$D_{\theta_A}\mathcal{F}_1(\theta_c, \tilde{\omega}_c;0,0,0)=\cosh(\mathcal{H}\theta_c)\mathcal{H}\theta_1+q\frac{d}{d\alpha}\theta_1=\Gamma\theta_1,$$ defined as $\Gamma: H^{1}_{odd}\rightarrow L^{2}_{even}$, is injective.
\end{lemma} \begin{proof} The injectivity follows from the fact that $\Gamma\theta_1=0$ if and only if $\theta_1$ is a multiple of $\displaystyle\frac{d\theta_c}{d\alpha}$, see \cite[Lemma 2.1]{OS2001}. Moreover, since $\theta_c$ is an odd function, $\displaystyle\frac{d\theta_c}{d\alpha}$ is even. Hence the kernel of $\Gamma$ contains no nontrivial odd function, and $\Gamma$ is injective on $H^{1}_{odd}$. \end{proof} \noindent The problem concerning the invertibility of this operator is related to its surjectivity. \begin{lemma}\label{DF1-invertible} Let $f\in L^{2}_{even}$. Then there exists $\theta_1\in H^{1}_{odd}$ with $\Gamma\theta_1=f$ if and only if $$(f,\cos\theta_c)=\int_{-\pi}^{\pi} f(\alpha)\cos\theta_c(\alpha)\,d\alpha=0$$ \end{lemma} \begin{proof} The complete proof can be found in \cite[Proposition 3.3]{AAW2013}. Here we will prove that the cokernel has dimension one and it is spanned by $\cos\theta_c$.\\ \noindent If we consider the operator $\mathcal{F}_1$ with $(p,\omega_0,B)=(0,0,0)$, we have \begin{align*} &\int_{-\pi}^{\pi}\mathcal{F}_1\cos\theta\,d\alpha=\int_{-\pi}^{\pi}\left(\sinh\mathcal{H}\theta + q\frac{d\theta}{d\alpha}\right)\cos\theta\,d\alpha=0, \end{align*} \noindent because the second term equals $q\left[\sin\theta\right]_{-\pi}^{\pi}=0$ by periodicity, and the first term is $0$ because of the Cauchy integral theorem. In particular, if we take the derivative with respect to $\theta$ and we evaluate it at $\theta_c$ we get \begin{align*} &\int_{-\pi}^{\pi}\Gamma\theta_1\cos\theta_c\,d\alpha -\int_{-\pi}^{\pi} \mathcal{F}_1\,\theta_1\sin\theta_c\,d\alpha=\int_{-\pi}^{\pi}\Gamma\theta_1\cos\theta_c\,d\alpha-\int_{-\pi}^{\pi}\left(q\frac{d\theta_c}{d\alpha}+\sinh\mathcal{H}\theta_c\right)\theta_1\sin\theta_c\,d\alpha=0, \end{align*} \noindent since the quantity in the brackets is $0$ by \eqref{Crapper-Bernoulli}; thus it follows \begin{equation}\label{Gamma-coseno} \int_{-\pi}^{\pi}\Gamma\theta_1\cos\theta_c\,d\alpha=0.
\end{equation} \end{proof} \bigskip \noindent For the operator $\mathcal{A}+\mathcal{I}$ we have the following result, proved in \cite{CCG2011}. \begin{lemma}\label{DF2-invertible} Let $z\in H^{3}$ be a curve without self-intersections. Then $$\mathcal{A}(z)\omega=2 BR(z,\omega)\cdot\partial_{\alpha}z$$ \medskip \noindent defines a compact linear operator $$\mathcal{A}(z): H^{1}\rightarrow H^{1}$$ \medskip \noindent whose eigenvalues are strictly smaller than $1$ in absolute value. In particular, the operator $\mathcal{A}+\mathcal{I}$ is invertible. \end{lemma} \medskip \noindent In conclusion, the system \begin{equation} \begin{pmatrix} \Gamma & 0 \\ D_{\theta_A}\mathcal{F}_2 & D_{\tilde{\omega}}\mathcal{F}_2 \end{pmatrix}\cdot \begin{pmatrix} \theta_1\\ \omega_1 \end{pmatrix}= \begin{pmatrix} f\\ g \end{pmatrix}, \end{equation} \medskip \noindent computed at the point $(\theta_c,\tilde{\omega}_c;0,0,0)$, has a solution if and only if $|A|<A_0$ and $(f,\cos\theta_c)=0$. \bigskip \noindent To prove Theorem \ref{point-existence}, we cannot directly use the implicit function theorem \ref{IFT}, since the Fr\'echet derivative $D\mathcal{F}$ is not surjective. Following \cite{AAW2013} and also \cite{CEG2016}, we use an adaptation of the Lyapunov-Schmidt reduction argument. Define \begin{equation*} \Pi\theta_1:=(\cos\theta_c,\theta_1)\frac{\cos\theta_c}{\|\cos\theta_c\|^2_{L^2}}, \end{equation*} \medskip \noindent where $\Pi$ is the $L^2$ projector onto the linear span of $\cos\theta_c$, and from \eqref{Gamma-coseno} we have $\Pi\Gamma=0$. Thus, we define the projector onto $\Gamma(H^{2}_{odd})$ as $\mathcal{I}-\Pi$ and \begin{equation}\label{tilde-mathcal-F} \tilde{\mathcal{F}}=((\mathcal {I}-\Pi)\mathcal{F}_1,\mathcal{F}_2):H^2_{odd}\times H^1_{even}\times\mathbb{R}^3\rightarrow \Gamma(H^2_{odd})\times L^2, \end{equation} \medskip \noindent where $\mathcal{F}=(\mathcal{F}_1,\mathcal{F}_2)$ is defined in \eqref{point-water-waves}.
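\medskip \noindent We remark that $\Pi$ is indeed an $L^2$-orthogonal projection, since \begin{equation*} \Pi^2\theta_1=\left(\cos\theta_c,\theta_1\right)\frac{\left(\cos\theta_c,\cos\theta_c\right)}{\|\cos\theta_c\|_{L^2}^{2}}\,\frac{\cos\theta_c}{\|\cos\theta_c\|_{L^2}^{2}}=\left(\cos\theta_c,\theta_1\right)\frac{\cos\theta_c}{\|\cos\theta_c\|_{L^2}^{2}}=\Pi\theta_1. \end{equation*}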
The Fr\'echet derivative of \eqref{tilde-mathcal-F} in $(\theta_A,\tilde{\omega})$ at the Crapper point $(\theta_c,\tilde{\omega}_c;0,0,0)$ is now invertible. So we can apply the implicit function theorem to $\tilde{\mathcal{F}}$: there exists a smooth function $\Theta_c:U_{B,p,\omega_0}\rightarrow H^2_{odd}\times H^1_{even}$, where $U_{B,p,\omega_0}$ is a small neighborhood of $(0,0,0)$, such that $\Theta_c(0,0,0)=(\theta_c,\tilde{\omega}_c)$ and for all $(B,p,\omega_0)\in U_{B,p,\omega_0}$ $$\tilde{\mathcal{F}}(\Theta_c(B,p,\omega_0);B,p,\omega_0)=0.$$ \medskip \noindent But now, if we consider $\mathcal{F}(\Theta_c(B,p,\omega_0);B,p,\omega_0)$, defined in \eqref{point-water-waves}, it need not be $0$. So we introduce a differentiable function on $U_{B,p,\omega_0}$: $$f(B;p,\omega_0)=(\cos\theta_c,\mathcal{F}_1(\Theta_c(B,p,\omega_0);B,p,\omega_0)).$$ \medskip \noindent We have that $\Pi\mathcal{F}_1=f(B;p,\omega_0)\frac{\cos\theta_c}{\|\cos\theta_c\|^2_{L^2}}$, and if we find a point $(B^*;p^*,\omega_0^*)$ such that $f(B^*;p^*,\omega_0^*)=0$, then $\mathcal{F}_1(\Theta_c(B^*,p^*,\omega_0^*);B^*,p^*,\omega_0^*)=0$ and so our problem is solved.\\ \noindent We note that, choosing $(B;p,\omega_0)=(0,0,0)$, we have $f(0;0,0)=0$. Its derivative with respect to $B$ is \begin{equation*} D_B f(0;0,0)=\left(\cos\theta_c,\Gamma\partial_B\Theta_c-e^{-\mathcal{H}\theta_c}\right)=-\left(\cos\theta_c,e^{-\mathcal{H}\theta_c}\right)=-2\pi, \end{equation*} \medskip \noindent where we have used \eqref{Gamma-coseno} and the Cauchy integral theorem. Hence, we can apply the implicit function theorem \ref{IFT} to the function $f$ and there exists a smooth function $B^*(p,\omega_0)$ that satisfies $f(B^*(p,\omega_0);p,\omega_0)=0$, for $(p,\omega_0)$ in $U_{p,\omega_0}$, a small neighborhood of $(0,0)$. \\ \noindent We can summarize these results in the following theorem. \begin{theorem} Let $|A|<A_0$.
There exist neighborhoods of the origin $U_{p,\omega_0}\subset\mathbb{R}^2$ and $U_B\subset\mathbb{R}$, a unique smooth function $B^*:U_{p,\omega_0}\rightarrow U_B$, such that $B^*(0,0)=0$, and a unique smooth function $$\Theta_c: U_{B,p,\omega_0}\rightarrow H^2_{odd}\times H^1_{even},$$ such that $\Theta_c(0,0,0)=(\theta_c,\tilde{\omega}_c)$, which satisfy $$\mathcal{F}(\Theta_c(B^*(p,\omega_0), p,\omega_0);B^*(p,\omega_0),p,\omega_0)=0.$$ \end{theorem} \medskip \noindent The main Theorem \ref{point-existence} is a direct consequence of the theorem above. \bigskip \section{The vortex patch case}\label{patch} \subsection{Framework} \noindent We consider a patch of vorticity $\omega(x,y)=\omega_0\chi_{D}(x,y)$, where $\omega_0\in\mathbb{R}$ and $\chi_{D}$ is the indicator function of the vortex domain $D$ near the origin, symmetric with respect to the $y$-axis, satisfying \begin{equation}\label{small-distance-cond} \max_{(x,y)\in \partial D}\textrm{dist}((x,y),(0,0))\ll 1, \end{equation} \noindent which is equivalent to considering a small vortex patch. In this case, as for the previous one, the fluid is incompressible, so we introduce a stream function, which is the sum of a harmonic part $\psi_H$, defined in \eqref{harmonic-point-stream}, and a part related to the vortex patch \begin{equation}\label{patch-stream-function} \begin{split} \psi_{VP}(x,y)&=\frac{\omega_0}{2\pi}\int_{D}\log|(x,y)-(x',y')|\,dx'\,dy'. \end{split} \end{equation} \medskip \noindent We introduce the parametrization of the boundary $\partial D=\{\gamma(\alpha),\alpha\in [-\pi,\pi]\}$, which for now is a generic parametrization satisfying the condition \eqref{small-distance-cond}. The representation of $D$ in fig. \ref{new-annulus} is just an example, since it depends on the choice of the parametrization.
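\medskip \noindent We also recall that, since $\frac{1}{2\pi}\log|\cdot|$ is the fundamental solution of the Laplacian in the plane, the stream function \eqref{patch-stream-function} satisfies, in the sense of distributions, \begin{equation*} \Delta\psi_{VP}(x,y)=\omega_0\chi_{D}(x,y), \end{equation*} \noindent consistently with the prescribed vorticity.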
Additionally, we obtain the velocity by taking the orthogonal gradient of the stream function; the velocity associated to \eqref{patch-stream-function} is \begin{equation}\label{patch-velocity} \begin{split} v_{VP}(x,y)&=\frac{\omega_0}{2\pi}\int_{\partial D}\log|(x,y)-(\gamma_1(\alpha'),\gamma_2(\alpha'))|\partial_{\alpha}\gamma(\alpha')\,d\alpha'. \end{split} \end{equation} \medskip \noindent The velocity is the sum of \eqref{patch-velocity} and the orthogonal gradient of the harmonic stream function \eqref{harmonic-point-stream} \begin{equation}\label{velocity-patch} v(x,y)=(-\partial_y\psi_{H}(x,y), \partial_x\psi_H(x,y))+v_{VP}(x,y). \end{equation} \medskip \noindent As for the case of the point vortex, we have to adapt the problem \eqref{v-euler}. One of the conditions is the kinematic boundary condition, so we need the velocity at the interface, \begin{equation} \begin{split} v(z(\alpha))=&BR(z(\alpha),\tilde{\omega}(\alpha))+\frac{1}{2}\frac{\tilde{\omega}(\alpha)}{|\partial_{\alpha}z|^2}\partial_{\alpha}z+\frac{\omega_0}{2\pi}\int_{-\pi}^{\pi}\log|z(\alpha)-\gamma(\alpha')|\,\partial_{\alpha}\gamma(\alpha')\,d\alpha', \end{split} \end{equation} \medskip \noindent where $z(\alpha)$ is the parametrization of the interface $\partial\Omega$, and we can write the kinematic boundary condition as follows \begin{equation}\label{patch-kbc} \begin{split} v(z(\alpha))\cdot(\partial_{\alpha}z(\alpha))^{\perp}&=BR(z(\alpha),\tilde{\omega}(\alpha))\cdot(\partial_{\alpha}z(\alpha))^{\perp}\\[3mm] &+\frac{\omega_0}{2\pi}\int_{-\pi}^{\pi}\log|z(\alpha)-\gamma(\alpha')|\,\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot (\partial_{\alpha}z(\alpha))^{\perp}=0. \end{split} \end{equation} \medskip \noindent In the analysis of this case we observe that the patch satisfies an elliptic equation and has a moving boundary.
We impose that the patch is fixed through the following condition $$v(\gamma(\alpha))\cdot (\partial_{\alpha}\gamma(\alpha))^{\perp}=0.$$ \noindent This is equivalent to requiring \begin{equation}\label{fix-patch} \begin{split} v(\gamma(\alpha))\cdot (\partial_{\alpha}\gamma(\alpha))^{\perp}&=\frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{(\gamma(\alpha)-z(\alpha'))^{\perp}}{|\gamma(\alpha)-z(\alpha')|^2}\cdot\tilde{\omega}(\alpha')\,d\alpha'\cdot (\partial_{\alpha}\gamma(\alpha))^{\perp}\\[3mm] &+\frac{\omega_0}{2\pi} P.V. \int_{\partial D}\log|\gamma(\alpha)-\gamma(\alpha')|\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot (\partial_{\alpha}\gamma(\alpha))^{\perp}=0. \end{split} \end{equation} \medskip \noindent Furthermore, we need another condition to completely identify our problem. This condition is related to the Bernoulli equation \eqref{v-euler5}, equivalent to \eqref{psi-eul3}. The most important issue is to fix the interface $\partial\Omega$, but for the vortex patch case we will slightly change the idea presented in subsection \ref{general-vorticity} and used for the point vortex case. To pass from the domain $\Omega(x,y)$ into $\tilde{\Omega}(\phi,\psi)$ we will consider an approximate stream function $\tilde{\psi}$ such that \begin{equation}\label{approx-stream} \left\{\begin{array}{lll} \displaystyle\frac{\partial\tilde{\psi}}{\partial x}=W(x,y)\frac{\partial\psi}{\partial x}\\[2mm] \displaystyle\frac{\partial\tilde{\psi}}{\partial y}=W(x,y)\frac{\partial\psi}{\partial y}, \end{array}\right. \end{equation} \medskip \noindent hence, in this way, $(\phi,\tilde{\psi})$ are related through the Cauchy-Riemann equations \begin{equation}\label{potential-approx-stream} \left\{\begin{array}{lll} \displaystyle\frac{\partial\phi}{\partial x}=\frac{\partial\tilde{\psi}}{\partial y}=W v_1\\[2mm] \displaystyle\frac{\partial\phi}{\partial y}=-\frac{\partial\tilde{\psi}}{\partial x}=W v_2, \end{array}\right.
\end{equation} \medskip \noindent where $W(x,y)$ has to satisfy \eqref{x-y-W}; it is equal to $1$ in the case of an irrotational fluid, in which case $\Delta\tilde{\psi}=0$. Moreover, we point out that the new domain $\tilde{\Omega}(\phi,\tilde{\psi})$ is also the lower half plane, see fig. \ref{new-annulus}, because the approximate stream function satisfies $\tilde{\psi}(z(\alpha))=0$, due to the kinematic boundary condition \eqref{v-euler4} and the positivity of $W$.\\ \noindent In view of the fact that we will use the new coordinate system $(\phi,\tilde{\psi})$, we have to rewrite \eqref{x-y-Jacobian} and \eqref{phi-psi-DBernoulli}. Let us start by writing the relation between $(x,y)$ and $(\phi,\tilde{\psi})$, so the system \eqref{x-y-Jacobian} becomes \begin{equation}\label{x-y-phi-tilde_psi} \begin{pmatrix} \displaystyle\frac{\partial x}{\partial\phi} & \displaystyle\frac{\partial x}{\partial\tilde{\psi}}\\ \displaystyle\frac{\partial y}{\partial\phi} & \displaystyle\frac{\partial y}{\partial\tilde{\psi}}\\ \end{pmatrix} =\frac{1}{W(v_1^2+v_2^2)} \begin{pmatrix} v_1 & - v_2\\ v_2 & v_1 \end{pmatrix}. \end{equation} \medskip \noindent Now, we have to rewrite the Bernoulli equation \eqref{psi-eul3} in the new coordinates. We will differentiate \eqref{psi-eul3} with respect to $\phi$, so that the constant on the RHS disappears. Thus we have \begin{equation}\label{DBernoulli-patch2} \frac{1}{2}\frac{\partial}{\partial\phi}\left(v_1^2+v_2^2\right)+p\frac{v_2}{W(v_1^2+v_2^2)}-q\frac{\partial}{\partial\phi}\left(\frac{W}{\sqrt{v_1^2+v_2^2}}\left(v_1\frac{\partial v_2}{\partial\phi}-v_2\frac{\partial v_1}{\partial\phi}\right)\right)=0. \end{equation} \medskip \noindent Finally, it is natural to bring the equations into a disk, because of the periodicity of the problem. As we did for the point vortex, we define \begin{equation}\label{patch-annulus} \left\{\begin{array}{lll} \phi=-\alpha\\[2mm] \tilde{\psi}=\log\rho, \end{array}\right.
\end{equation} \medskip \noindent In this way, we pass from $\tilde{\Omega}(\phi,\tilde{\psi})$ into the unit disk, see fig. \ref{new-annulus}. We observe that the patch has been chosen symmetric with respect to the vertical axis, so in the coordinates $(\phi,\tilde{\psi})$ it remains symmetric with respect to the $\tilde{\psi}-$axis (due to the symmetry of the functions \eqref{symmetry}). In the coordinates $(\alpha,\rho)$, by using \eqref{patch-annulus}, it will be symmetric with respect to the horizontal axis and contained in a circular sector, where $\pm\alpha_1$ are defined through $\pm\phi_1$, in such a way that $$\textrm{dist}((\pm\phi_1,\tilde{\psi}_1),(0,\tilde{\psi}))>\textrm{dist}((\phi,\tilde{\psi}),(0,\tilde{\psi})), \quad \forall (\phi,\tilde{\psi})\in \partial{\tilde{D}}.$$ \medskip \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{patch-domains} \caption{The transformation of the patch $D(x,y)$, $\tilde{D}(\phi,\tilde{\psi})$ and $\mathcal{A}$.}\label{new-annulus} \end{figure} \medskip \noindent In addition, by using the independent variables $(\tau,\theta)$, defined in \eqref{tau-theta}, we write the equation $\Delta\tilde{\psi}=0$ in the new coordinates $(\phi,\tilde{\psi})$, by using the relations \eqref{approx-stream} and \eqref{potential-approx-stream}, and we get an equation for $W(\phi,\tilde{\psi})$ $$\frac{\partial W}{\partial\tilde{\psi}}(v_1^2+v_2^2)-\omega_0\chi_{\tilde{D}}=0\quad\Rightarrow\quad W(\phi,\tilde{\psi})-W\left(\phi,-\infty\right) = \omega_0\int_{-\infty}^{\tilde{\psi}} e^{-2\tau(\phi,\tilde{\psi}')}\chi_{\tilde{D}}(\phi,\tilde{\psi}')\,d\tilde{\psi}'. $$ \medskip \noindent Since the value of $W$ at infinity is $1$, in the variables $(\phi,\tilde{\psi})$ we have \begin{equation}\label{W-phi-tilde-psi} W(\phi,\tilde{\psi})=1+\omega_0\int_{-\infty}^{\tilde{\psi}} e^{-2\tau(\phi,\tilde{\psi}')}\chi_{\tilde{D}}(\phi,\tilde{\psi}')\,d\tilde{\psi}'.
\end{equation} \medskip \noindent By using the change of variables \eqref{patch-annulus}, we have \begin{equation}\label{patch-W-alpha-rho} W(\alpha,\rho)=1+\omega_0\int_{0}^{\rho} \frac{e^{-2\tau(\alpha,\rho')}}{\rho'}\chi_{\mathcal{A}}(\alpha,\rho')\,d\rho'. \end{equation} \medskip \noindent Concerning the derivative of the Bernoulli equation \eqref{DBernoulli-patch2}, we get \begin{equation}\label{DBernoulli-patch3} \frac{\partial}{\partial\alpha}\left( \frac{1}{2} e^{2\tau}\right)-p\frac{e^{-\tau}\sin\theta}{W}+q\frac{\partial}{\partial\alpha}\left(W e^{\tau}\frac{\partial\theta}{\partial\alpha}\right)=0. \end{equation} \medskip \noindent By integrating with respect to $\alpha$ and taking the constant that appears on the RHS as $\frac{1}{2}+B$, as explained for the point vortex case, we get \begin{equation*} \frac{1}{2} e^{2\tau}-p\left(\int_{-\pi}^{\alpha}\frac{e^{-\tau(\alpha',1)}\sin\theta(\alpha',1)}{W(\alpha',1)}\,d\alpha'-1\right)+q W(\alpha,1) e^{\tau}\frac{\partial\theta}{\partial\alpha}=\frac{1}{2}+B. \end{equation*} \medskip \noindent We multiply by $e^{-\tau}$ and we obtain the equation \begin{equation}\label{patch-Bernoulli-tau-theta} \sinh(\tau(\alpha,1))-p e^{-\tau(\alpha,1)}\left(\int_{-\pi}^{\alpha}\frac{e^{-\tau(\alpha',1)}\sin\theta(\alpha',1)}{W(\alpha',1)}\,d\alpha'-1\right)+q W(\alpha,1) \frac{\partial\theta(\alpha,1)}{\partial\alpha}-Be^{-\tau(\alpha,1)}=0. \end{equation} \medskip \noindent We can solve our problem by finding $2\pi$-periodic functions $\tau(\alpha)$ even and $\theta(\alpha)$ odd, an even function $\tilde{\omega}(\alpha)$, and a curve $\gamma(\alpha)$, the parametrization of the vortex patch, satisfying \eqref{patch-kbc}, \eqref{fix-patch} and \eqref{patch-Bernoulli-tau-theta}.
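\medskip \noindent For clarity, we note that \eqref{patch-W-alpha-rho} follows from \eqref{W-phi-tilde-psi} by the substitution \eqref{patch-annulus}: since $\tilde{\psi}'=\log\rho'$ gives $d\tilde{\psi}'=\frac{d\rho'}{\rho'}$, the integration range $(-\infty,\tilde{\psi})$ becomes $(0,\rho)$ and the patch $\tilde{D}$ corresponds to $\mathcal{A}$, so \begin{equation*} \omega_0\int_{-\infty}^{\tilde{\psi}} e^{-2\tau(\phi,\tilde{\psi}')}\chi_{\tilde{D}}(\phi,\tilde{\psi}')\,d\tilde{\psi}'=\omega_0\int_{0}^{\rho} \frac{e^{-2\tau(\alpha,\rho')}}{\rho'}\chi_{\mathcal{A}}(\alpha,\rho')\,d\rho'. \end{equation*}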
As we did for the point vortex, the kinematic boundary condition \eqref{patch-kbc} can be replaced by \begin{equation}\label{patch-v-tangential} W(\alpha,1) v(z(\alpha))\cdot\partial_{\alpha}z(\alpha)=-1, \end{equation} \medskip \noindent so our problem reduces to analyzing the equations \eqref{fix-patch}, \eqref{patch-Bernoulli-tau-theta} and \eqref{patch-v-tangential}. \bigskip \subsection{Perturbation of the Crapper formulation with a vortex patch} In this section, we want to write our variables as a perturbation of the Crapper variables. First of all, we get a relation between $(\tau,\theta)$ in both the $(\phi,\tilde{\psi})$ and $(\alpha,\rho)$ variables, by using the rotational and the divergence-free conditions, \begin{equation}\label{tau-theta-patch} \begin{cases} \displaystyle\frac{\partial\theta}{\partial\phi}=\frac{\omega_0\chi_{\tilde{D}} e^{-2\tau}}{W}+\frac{\partial\tau}{\partial\tilde{\psi}}\\[4mm] \displaystyle\frac{\partial\theta}{\partial\tilde{\psi}}=-\frac{\partial\tau}{\partial\phi} \end{cases}\Longrightarrow \begin{cases} \displaystyle -\frac{\partial\theta}{\partial\alpha}=\frac{\omega_0\chi_{\mathcal{A}} e^{-2\tau}}{W}+\rho\frac{\partial\tau}{\partial\rho}\\[4mm] \displaystyle\rho\frac{\partial\theta}{\partial\rho}=\frac{\partial\tau}{\partial\alpha}. \end{cases} \end{equation} \medskip \noindent Once we find the values of $(\tau, \theta)$, we can use the relations \eqref{x-y-phi-tilde_psi} and \eqref{patch-annulus} to obtain the parametrization of the interface \begin{equation}\label{patch-interface-derivative} \left\{\begin{array}{lll} \displaystyle\frac{\partial z_1}{\partial\alpha}=-\frac{e^{-\tau(\alpha,1)}\cos(\theta(\alpha,1))}{W(\alpha,1)}\\[4mm] \displaystyle\frac{\partial z_2}{\partial\alpha}=-\frac{e^{-\tau(\alpha,1)}\sin(\theta(\alpha,1))}{W(\alpha,1)}, \end{array}\right. \end{equation} \medskip \noindent where $W(\alpha,1)$ is defined in \eqref{patch-W-alpha-rho}.\\ \noindent In the case of rotational waves, $(\tau,\theta)$ do not satisfy the Cauchy-Riemann equations.
For this reason we define $\tau=\tau_A+\omega_0\tilde{\tau}$ and $\theta=\theta_A+\omega_0\tilde{\theta}$, where $(\tau_A,\theta_A)$ is the Crapper solution with small gravity but without vorticity; the corresponding flow is incompressible and irrotational, and $(\tau_A,\theta_A)$ satisfies the Cauchy-Riemann equations in the variables $(\phi,\psi)$, as explained in \eqref{tauCR-thetaCR} \begin{equation*} \left\{\begin{array}{lll} \displaystyle\frac{\partial\theta_A}{\partial\phi}=\frac{\partial\tau_A}{\partial{\psi}}\\[4mm] \displaystyle\frac{\partial\theta_A}{\partial{\psi}}=-\frac{\partial\tau_A}{\partial\phi}. \end{array}\right. \end{equation*} \medskip \noindent This implies that on the interface $\mathcal{S}$, i.e. $\psi=0$, one variable can be written as the Hilbert transform of the other: $\tau_A=\mathcal{H}\theta_A$. Hence, in the $(\phi,\tilde{\psi})$ variables, we have \begin{equation}\label{Crapper-C-R-patch} \begin{cases} \displaystyle \frac{\partial\theta_A}{\partial\phi}=W\frac{\partial\mathcal{H}\theta_A}{\partial\tilde{\psi}}\\[4mm] \displaystyle\frac{\partial\theta_A}{\partial\tilde{\psi}}=-\frac{1}{W}\frac{\partial\mathcal{H}\theta_A}{\partial\phi} \end{cases}\Longrightarrow \begin{cases} \displaystyle -\frac{\partial\theta_A}{\partial\alpha}=W\rho\frac{\partial\mathcal{H}\theta_A}{\partial\rho}\\[4mm] \displaystyle\rho\frac{\partial\theta_A}{\partial\rho}=\frac{1}{W}\frac{\partial\mathcal{H}\theta_A}{\partial\alpha}.
\end{cases} \end{equation} \medskip \noindent By substituting \eqref{Crapper-C-R-patch} in \eqref{tau-theta-patch}, we have \begin{equation}\label{patch-tilde-tau-theta-quasi-Cauchy-Riemann-1} \left\{\begin{array}{lll} \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\phi}=\left(\frac{1}{W}-1\right)\frac{\partial\theta_A}{\partial\phi}+\frac{\omega_0\chi_{\tilde{D}}e^{-2\tau}}{W}+\omega_0\frac{\partial\tilde{\tau}}{\partial\tilde{\psi}}\\[5mm] \displaystyle \omega_0\frac{\partial\tilde{\theta}}{\partial\tilde{\psi}}=-\omega_0\frac{\partial\tilde{\tau}}{\partial\phi}+(W-1)\frac{\partial\theta_A}{\partial\tilde{\psi}} \end{array}\right. \end{equation} \medskip \begin{center} $\Downarrow$ \end{center} \medskip \begin{equation}\label{patch-tilde-tau-theta-quasi-Cauchy-Riemann-2} \left\{\begin{array}{lll} \displaystyle-\omega_0\frac{\partial\tilde{\theta}}{\partial\alpha}=\left(1-\frac{1}{W}\right)\frac{\partial\theta_A}{\partial\alpha}+\frac{\omega_0\chi_{\mathcal{A}}e^{-2\tau}}{W}+\rho\omega_0\frac{\partial\tilde{\tau}}{\partial\rho}\\[5mm] \displaystyle \omega_0\frac{\partial\tilde{\theta}}{\partial\rho}=\frac{1}{\rho}\omega_0\frac{\partial\tilde{\tau}}{\partial\alpha}+\left(W-1\right)\frac{\partial\theta_A}{\partial\rho} \end{array}\right.
\end{equation} \medskip \noindent By differentiating with respect to the opposite variable and taking the difference, we obtain the following elliptic equation in $(\alpha,\rho)$ \begin{equation}\label{elliptic-alpha-rho-patch} \begin{split} \omega_0\frac{\partial^2\tilde{\tau}}{\partial\rho^2}+\frac{1}{\rho^2}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\alpha^2}+\frac{1}{\rho}\omega_0\frac{\partial\tilde{\tau}}{\partial\rho}=&-\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\frac{\omega_0\chi_{\mathcal{A}}(\alpha,\rho)e^{-2\tau}}{W}\right)+\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\frac{1}{W}\right)\frac{\partial\theta_A}{\partial\alpha}\\[3mm] &-\frac{1}{\rho}\frac{\partial W}{\partial\alpha}\frac{\partial\theta_A}{\partial\rho}+\frac{1}{\rho}\frac{\partial^2\theta_A}{\partial\alpha\partial\rho}\left(\frac{1-W^2}{W}\right). \end{split} \end{equation} \medskip \noindent However, we are interested in the elliptic equation in $(\phi,\tilde{\psi})$-coordinates, since it will be easier to study; it reads \begin{equation}\label{elliptic-phi-tilde_psi-patch} \begin{split} \omega_0\Delta\tilde{\tau}=-\frac{\partial}{\partial\tilde{\psi}}\left(\frac{\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi}) e^{-2\tau}}{W}\right)+\frac{1}{W^2}\frac{\partial W}{\partial\tilde{\psi}}\frac{\partial\theta_A}{\partial\phi}+\frac{\partial W}{\partial\phi}\frac{\partial\theta_A}{\partial\tilde{\psi}}+\left(\frac{W^2-1}{W}\right)\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}. \end{split} \end{equation} \bigskip \noindent We want to find a solution $\tilde{\tau}$ of the elliptic problem \eqref{elliptic-phi-tilde_psi-patch}. First of all, let us rewrite the equation with all terms explicit.
\begin{equation*} \begin{split} \Delta\tilde{\tau}(\phi,\tilde{\psi})=&-\frac{\partial}{\partial\tilde{\psi}}\left(\frac{\chi_{\tilde{D}}(\phi,\tilde{\psi}) e ^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\right)\\[4mm] &+\frac{\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}}{\left(1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2}\frac{\partial\theta_A}{\partial\phi}\\[4mm] &+\frac{\partial}{\partial\phi}\left(\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)\frac{\partial\theta_A}{\partial\tilde{\psi}}\\[4mm] &+\frac{2\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}\\[4mm] &+\frac{\omega_0\left(\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}=f((\phi,\tilde{\psi}),\tilde{\tau}). 
\end{split} \end{equation*} \bigskip \noindent where we use that $\displaystyle\frac{\partial W}{\partial\tilde{\psi}}=\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}.$ Now, we define a solution in the following way \begin{equation}\label{tau-tilde-solution} \tilde{\tau}(\phi,\tilde{\psi})=f((\phi,\tilde{\psi}),\tilde{\tau})*G_2(\phi,\tilde{\psi}), \end{equation} \medskip \noindent where $G_2(\phi,\tilde{\psi})$ is the Green's function of the domain $\tilde{\Omega}$. We will show that \eqref{tau-tilde-solution} solves the elliptic equation, thanks to the smallness of the parameters involved.\\ \noindent Using the commutativity and differentiation properties of the convolution, together with an integration by parts and the fact that $W(\pm\pi,\tilde{\psi}')=1$, we are able to eliminate the derivative of $\tilde{\tau}$ and we obtain \begin{equation}\label{patch-tilde-tau-omega0} \begin{split} \tilde{\tau}(\phi,\tilde{\psi})=&-\frac{\chi_{\tilde{D}}(\phi,\tilde{\psi}) e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}*\frac{\partial}{\partial\tilde{\psi}} G_2(\phi,\tilde{\psi})\\[4mm] &+\frac{\chi_{\tilde{D}}(\phi,\tilde{\psi}) e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}}{\left(1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2}\cdot\frac{\partial\theta_A}{\partial\phi}*G_2\\[4mm] &-\frac{\partial G_2}{\partial\phi}*\frac{\partial\theta_A}{\partial\tilde{\psi}}\cdot\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\\[4mm]
&-G_2*\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}\cdot\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\\[4mm] &+\frac{2\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\cdot \frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2\\[4mm] &+\frac{\omega_0\left(\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\cdot \frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2 \end{split} \end{equation} \medskip \noindent Since we are looking for a solution with small $\omega_0$, we expand \eqref{patch-tilde-tau-omega0} around $\omega_0=0$ and keep only the first-order terms \begin{equation}\label{approxim-tilde-tau} \begin{split} \tilde{\tau}&=-\chi_{\tilde{D}}(\phi,\tilde{\psi}) e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}*\frac{\partial G_2}{\partial\tilde{\psi}}+\chi_{\tilde{D}}(\phi,\tilde{\psi}) e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\cdot\frac{\partial\theta_A}{\partial\phi}*G_2\\[4mm] &-\frac{\partial G_2}{\partial\phi}*\frac{\partial\theta_A}{\partial\tilde{\psi}}\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}') e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}' \end{split} \end{equation} \begin{equation*} \begin{split} &-G_2*\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}') e
^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'+2\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\cdot \frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2\\[4mm] &+2\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\tilde{\tau}(\phi,\tilde{\psi})*\frac{\partial G_2}{\partial\tilde{\psi}}\\[4mm] &+\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'*\frac{\partial G_2}{\partial\tilde{\psi}}\\[4mm] &-2\omega_0 \chi_{\tilde{D}}(\phi,\tilde{\psi})e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\tilde{\tau}(\phi,\tilde{\psi})\cdot \frac{\partial\theta_A}{\partial\phi}*G_2\\[4mm] &-2\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\cdot \frac{\partial\theta_A}{\partial\phi}*G_2\\[4mm] &+\omega_0\frac{\partial G_2}{\partial\phi}*\frac{\partial\theta_A}{\partial\tilde{\psi}}\cdot 2\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\tilde{\tau}(\phi,\tilde{\psi}')\,d\tilde{\psi}'\\[4mm] &+\omega_0 G_2*\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}\cdot 2 \int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\tilde{\tau}(\phi,\tilde{\psi}')\,d\tilde{\psi}'\\[4mm] &-4\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\tilde{\tau}(\phi,\tilde{\psi}')\,d\tilde{\psi}'\cdot\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2\\[4mm]
&-\omega_0\left(\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2\cdot\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2+o(\omega_0^2)\\[4mm] &\equiv\omega_0\mathcal{A}_1(\tilde{\tau},\theta_A)+\omega_0\mathcal{A}_2(\theta_A)+b(\theta_A)+o(\omega_0^2). \end{split} \end{equation*} \medskip \noindent We define the operator \begin{equation}\label{G-operator} \mathcal{G}(\tilde{\tau}; \omega_0,\theta_A)=\tilde{\tau}- \omega_0\mathcal{A}_1(\tilde{\tau},\theta_A)-\omega_0\mathcal{A}_2(\theta_A)-b(\theta_A)+o(\omega_0^2), \end{equation} \medskip \noindent where $\mathcal{G}(\tilde{\tau}; \omega_0,\theta_A):H^2_{even}\times\mathbb{R}\times H^2_{odd}\rightarrow H^2$; to invert this operator in a neighborhood of $\omega_0=0$ we will use the implicit function theorem \ref{IFT}. We observe that \begin{equation}\label{operator-tilde-tau} \left\{\begin{array}{lll} \mathcal{G}(\tilde{\tau};0,\theta_A) =0\\[4mm] D_{\tilde{\tau}}\mathcal{G}(\tilde{\tau};0,\theta_A)=\tau_1. \end{array}\right. \end{equation} \medskip \noindent The equations \eqref{operator-tilde-tau} guarantee that in a neighborhood of $(\omega_0=0,\theta_A)$ there exists a smooth function $\tilde{\tau}^*(\omega_0,\theta_A)$, such that $\tilde{\tau}^*(0,\theta_A)=\tilde{\tau}$. \bigskip \subsection{Existence of Crapper waves in the presence of a small vortex patch} In this section we prove the existence of a perturbation of the Crapper waves, with small gravity and small vorticity. We will prove the existence theorem (Theorem \ref{patch-existence}) by means of the implicit function theorem. However, to prove it we need an explicit parametrization of $\gamma(\alpha)$ such that the operator related to \eqref{fix-patch} fulfills the hypotheses of the implicit function theorem.
We define $\gamma(\alpha)$ as follows \begin{equation}\label{gamma} \gamma(\alpha)= \left\{ \begin{array}{lll} \displaystyle r\left(-\frac{\alpha+\pi}{\sin\alpha}\cos\alpha,-\alpha-\pi\right)\hspace{1.5cm}-\pi\leq \alpha<-\frac{\pi}{2}\\[5mm] \displaystyle r\left(\frac{\alpha}{\sin\alpha}\cos\alpha,\alpha\right)\hspace{3.3cm}-\frac{\pi}{2}\leq \alpha\leq\frac{\pi}{2}\\[5mm] \displaystyle r\left(-\frac{\alpha-\pi}{\sin\alpha}\cos\alpha,-\alpha+\pi\right)\hspace{2cm}\frac{\pi}{2}<\alpha\leq\pi, \end{array}\right. \end{equation} \medskip \noindent where $r\in \mathbb{R}$ is a small radius. Its derivative is then \begin{equation} \partial_{\alpha}\gamma(\alpha)= \left\{ \begin{array}{lll} \displaystyle r\left(\frac{(\alpha+\pi)-\cos\alpha\sin\alpha}{\sin^2\alpha},-1\right)\hspace{1.5cm}-\pi\leq \alpha<-\frac{\pi}{2}\\[5mm] \displaystyle r\left(\frac{\cos\alpha\sin\alpha-\alpha}{\sin^2\alpha},1\right)\hspace{3.3cm}-\frac{\pi}{2}\leq \alpha\leq\frac{\pi}{2}\\[5mm] \displaystyle r\left(\frac{(\alpha-\pi)-\cos\alpha\sin\alpha}{\sin^2\alpha},-1\right)\hspace{2cm}\frac{\pi}{2}<\alpha\leq\pi \end{array}\right. \end{equation} \medskip \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{My-patch} \caption{The choice of $\gamma(\alpha)$.}\label{my-patch} \end{figure} \medskip \noindent Since we will work in a neighborhood of $\omega_0=0$, we substitute $\tau=\mathcal{H}\theta_A+\omega_0 \tilde{\tau}^*$. \begin{subequations} \label{patch-water-waves} \begin{align} \begin{split} &\mathcal{F}_1(\theta_A, \tilde{\omega}, r; B,p,\omega_0):=\sinh(\mathcal{H}\theta_A+\omega_0\tilde{\tau}^*)\\[3mm] &\hspace{2cm}-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*}\left(\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*}\sin\theta(\alpha',1)}{W(\alpha',1)}\,d\alpha'-1\right)\\[3mm] &\hspace{2cm}+q W(\alpha,1) \frac{\partial(\theta_A+\omega_0\tilde{\theta}^*)}{\partial\alpha}-Be^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*}.
\end{split}\label{patch-water-waves-1}\\[5mm] \begin{split} &\mathcal{F}_2(\theta_A,\tilde{\omega}, r; B,p,\omega_0):=W(\alpha,1)\left(2BR(z(\alpha),\tilde{\omega}(\alpha))\cdot\partial_{\alpha}z(\alpha)+\tilde{\omega}(\alpha)\right.\\[3mm] &\hspace{2cm}+\left.\frac{\omega_0}{2\pi}\int_{-\pi}^{\pi}\log|z(\alpha)-\gamma(\alpha')|\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot\partial_{\alpha}z(\alpha)\right)+2 \end{split}\label{patch-water-waves-2}\\[5mm] \begin{split} &\mathcal{F}_3(\theta_A,\tilde{\omega}, r; B, p,\omega_0):=\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(\gamma(\alpha)-z(\alpha'))^{\perp}}{|\gamma(\alpha)-z(\alpha')|^2}\cdot\tilde{\omega}(\alpha')\,d\alpha'\cdot \partial_{\alpha}\gamma(\alpha)^{\perp}\\[3mm] &\hspace{2cm}+\frac{\omega_0}{2\pi}\textrm{P.V.}\int_{-\pi}^{\pi}\log|\gamma(\alpha)-\gamma(\alpha')|\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot\partial_{\alpha}\gamma(\alpha)^{\perp} \end{split}\label{patch-water-waves-3} \end{align} \end{subequations} \bigskip \noindent We have that $$(\mathcal{F}_1,\mathcal{F}_2,\mathcal{F}_3)(\theta_A, \tilde{\omega}, r; B,p,\omega_0):H^{2}_{odd}\times H^{1}_{even}\times\mathbb{R}^{4}\rightarrow H^{1}_{even}\times H^{1}_{even}\times H^{1}.$$ \medskip \noindent The main theorem we want to prove is the following \begin{theorem}\label{patch-existence} Let us consider the water waves problem \eqref{v-euler}, with a small vortex patch and a small gravity $g$. Then, for some values of $A<A_0$, defined in \eqref{Crapper-solutions}, there exist periodic solutions to \eqref{v-euler} with overhanging profile. \end{theorem} \bigskip \subsubsection{Proof of Theorem \ref{patch-existence}} We will analyse the three operators \eqref{patch-water-waves} that identify our problem and show that they satisfy the hypotheses of the implicit function theorem. First of all, we have to show that \begin{equation}\label{operators-zero} (\mathcal{F}_1,\mathcal{F}_2,\mathcal{F}_3)(\theta_c,\tilde{\omega}_c,0;0,0,0)=(0,0,0).
\end{equation} \medskip \noindent For $\mathcal{F}_1$, we use \eqref{Crapper-Bernoulli} \begin{equation*} \begin{split} \mathcal{F}_1(\theta_c,\tilde{\omega}_c,0;0,0,0)&=\sinh(\mathcal{H}\theta_c)+q\frac{\partial\theta_c}{\partial\alpha}=0 \end{split} \end{equation*} \medskip \noindent For $\mathcal{F}_2$, \eqref{operators-zero} holds by construction, see \eqref{patch-v-tangential}. For $\mathcal{F}_3$, we write $\gamma(\alpha)$ explicitly as in \eqref{gamma} and take the radius $r$ equal to $0$. Thus $\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)$ satisfies \eqref{operators-zero}.\\ \noindent The most delicate part is to prove the invertibility of the derivatives. We observe that $D_{\tilde{\omega}}\mathcal{F}_1=D_{r}\mathcal{F}_1=0$, so it remains to compute $D_{\theta_A}\mathcal{F}_1$. \begin{equation*} \begin{split} &D_{\theta_A}\mathcal{F}_1=\frac{d}{d\mu}\left[\sinh(\mathcal{H}\theta_A+\mu\mathcal{H}\theta_1+\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1))+qW_{\mu}\frac{\partial(\theta_A+\mu\theta_1+\omega_0\tilde{\theta}^*(\theta_A+\mu\theta_1))}{\partial\alpha}\right.\\[3mm] &\left.-pe^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}\sin(\theta_A+\mu\theta_1+\omega_0\tilde{\theta}^*(\theta_A+\mu\theta_1))}{W_\mu} \,d\alpha'\right.\\[3mm] &\left.-Be^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}\right]_{|\mu=0}\\[5mm] &=\cosh(\mathcal{H}\theta_A+\omega_0\tilde{\tau}^*(\theta_A))\cdot\left(\mathcal{H}\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\\[3mm] &+ q\left[\frac{d}{d\mu}W_{\mu}\right]_{|\mu=0}\frac{\partial(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))}{\partial\alpha}+q W\frac{\partial}{\partial\alpha}\left(\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\theta}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\\[3mm] &+p
e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\left(\mathcal{H}\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\cdot\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\sin(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))}{W} \,d\alpha'\\[5mm] &-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}}{W}\left(-\mathcal{H}\theta_1-\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\sin(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))\,d\alpha'\\[5mm] &-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}}{W}\cos(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))\left(\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\theta}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\,d\alpha'\\[5mm] &+p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\sin(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))}{W^2}\left[\frac{d}{d\mu}W_{\mu}\right]_{|\mu=0}\,d\alpha'\\[3mm] &-Be^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\left(-\mathcal{H}\theta_1-\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right). 
\end{split} \end{equation*} \medskip \begin{remark}\label{tilde-tau-frechet-derivative} \noindent The equation for $W_{\mu}(\alpha,1)$ is the following \begin{equation}\label{patch-W-mu} W_{\mu}(\alpha,1)=1+\int_{0}^{1} \omega_0\chi_{\tilde{D}_{\mathcal{A}}}(\alpha,\rho') \frac{e^{-2\mathcal{H}\theta_A-2\mu\mathcal{H}\theta_1-2\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}}{\rho'}\,d\rho'. \end{equation} \bigskip \noindent Now we have to compute $\displaystyle\frac{d W_{\mu}}{d\mu}$, that is \begin{equation}\label{patch-DW-mu} \begin{split} &\left[\frac{d W_{\mu}}{d\mu}\right]_{|\mu=0}=\left[\frac{d}{d\mu}\left(1+\int_{0}^{1} \omega_0\chi_{\tilde{D}_{\mathcal{A}}}(\alpha,\rho') \frac{e^{-2\mathcal{H}\theta_A-2\mu\mathcal{H}\theta_1-2\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}}{\rho'}\,d\rho'\right)\right]_{|\mu=0}\\[5mm] &=\left[\int_{0}^{1}\omega_0\chi_{\tilde{D}_{\mathcal{A}}}(\alpha,\rho') \frac{e^{-2\mathcal{H}\theta_A-2\mu\mathcal{H}\theta_1-2\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}}{\rho'}\cdot\right.\\[4mm] &\hspace{3cm}\left.\cdot\left(-2\mathcal{H}\theta_1-2\omega_0\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right)\,d\rho'\right]_{|\mu=0}\\[5mm] &=\omega_0\int_{0}^{1}\chi_{\tilde{D}_{\mathcal{A}}}(\alpha,\rho') \frac{e^{-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau}^*(\theta_A)}}{\rho'}\left(-2\mathcal{H}\theta_1-2\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{\mu=0}\right)\,d\rho' \end{split} \end{equation} \medskip \noindent It remains to observe that for our purpose it is sufficient to have the existence of $\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)$, which comes from the elliptic equation \eqref{elliptic-alpha-rho-patch}.
Indeed we must compute the Fr\'echet derivative at the point $(\theta_c,\tilde{\omega}_c,0;0,0,0)$ and, as we can see in $D_{\theta_A}\mathcal{F}_1$, the term $\displaystyle\left[\frac{d}{d\mu}\tilde{\tau}(\theta_A+\mu\theta_1)\right]_{|\mu=0}$ is always multiplied by $\omega_0$, which is taken equal to zero. Hence we can state that $\displaystyle \left[\frac{d W_{\mu}}{d\mu}\right]_{|\mu=0}$ is also zero. \end{remark} \medskip \noindent Remark \ref{tilde-tau-frechet-derivative} implies that \begin{equation*} D_{\theta_A}\mathcal{F}_1(\theta_c,\tilde{\omega}_c,0;0,0,0)= \cosh(\mathcal{H}\theta_c)\cdot\mathcal{H}\theta_1+q\frac{\partial\theta_1}{\partial\alpha}. \end{equation*} \medskip \noindent For the second operator, we observe that in order to compute $D_{\theta_A}\mathcal{F}_2$ we need the equation for $z(\alpha)$, which can be obtained by integrating \eqref{patch-interface-derivative}, and we define $$z_\mu(\alpha)=-\int_{-\pi}^{\alpha} \frac{e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)+i(\theta_A+\mu\theta_1+\omega_0 \tilde{\theta}^*(\theta_A+\mu\theta_1))}}{W_{\mu}(\alpha',1)}\,d\alpha'-e_2$$ \medskip \noindent where $W_{\mu}(\alpha',1)$ is defined in \eqref{patch-W-mu}. \\ \noindent In the same way as for $D_{\theta_A}\mathcal{F}_1$, we can compute $D_{\theta_A}\mathcal{F}_2$ and evaluate it at the point $(\theta_c,\tilde{\omega}_c,0;0,0,0)$ to get $D_{\theta_A}\mathcal{F}_2(\theta_c,\tilde{\omega}_c,0;0,0,0)$.\\ \noindent It is also important to compute $D_{\tilde{\omega}}\mathcal{F}_2$.
\begin{equation*} \begin{split} &D_{\tilde{\omega}}\mathcal{F}_2=\left[\frac{d}{d\mu}\mathcal{F}_2(\theta_A,\tilde{\omega}+\mu\omega_1,r;B,p,\omega_0)\right]_{|\mu=0}=\left[\frac{d}{d\mu}\left[2 W(\alpha,1) BR(z(\alpha),\tilde{\omega}(\alpha)+\mu\omega_1)\cdot\partial_{\alpha}z(\alpha)\right.\right.\\[3mm] &\left.\left.+W(\alpha,1) (\tilde{\omega}(\alpha)+\mu\omega_1(\alpha))+W(\alpha,1)\frac{\omega_0}{2\pi}\int_{-\pi}^{\pi}\log|z(\alpha)-\gamma(\alpha')|\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot\partial_{\alpha}z(\alpha)+2\right]\right]_{|\mu=0}\\[3mm] &=2W(\alpha,1)\textrm{P.V.}\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(z(\alpha)-z(\alpha'))^{\perp}}{|z(\alpha)-z(\alpha')|^2}\cdot \omega_1(\alpha')\,d\alpha'\cdot \partial_{\alpha}z(\alpha)+W(\alpha,1)\omega_1(\alpha). \end{split} \end{equation*} \medskip \noindent At the Crapper point we have $$D_{\tilde{\omega}}\mathcal{F}_2(\theta_c,\tilde{\omega}_c,0;0,0,0)=2\textrm{P.V.}\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(z(\alpha)-z(\alpha'))^{\perp}}{|z(\alpha)-z(\alpha')|^2}\cdot \omega_1(\alpha')\,d\alpha'\cdot \partial_{\alpha}z(\alpha)+\omega_1(\alpha).$$ \bigskip \noindent It remains to compute the last derivative \begin{equation*} \begin{split} &D_{r}\mathcal{F}_2=\frac{d}{d\mu}\left[\mathcal{F}_2(\theta_A,\tilde{\omega},r+\mu r_1;B,p,\omega_0)\right]_{|\mu=0}\\[3mm] &=\left[\frac{d}{d\mu}\left[2 W(\alpha,1) BR(z(\alpha),\tilde{\omega}(\alpha))\cdot\partial_{\alpha}z(\alpha)+W(\alpha,1)\tilde{\omega}(\alpha)\right.\right.\\[3mm] &+W(\alpha,1) \frac{\omega_0}{2\pi}\int_{-\pi}^{-\frac{\pi}{2}}\log\sqrt{\left(z_1(\alpha)+(r+\mu r_1)\frac{\alpha'+\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+(r+\mu r_1)(\alpha'+\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot (r+\mu r_1)\left(\frac{(\alpha'+\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\log\sqrt{\left(z_1(\alpha)-(r+\mu
r_1)\frac{\alpha'}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)-(r+\mu r_1)(\alpha')\right)^2}\\[3mm] &\hspace{3cm}\cdot (r+\mu r_1)\left(\frac{\cos\alpha'\sin\alpha'-\alpha'}{\sin^2\alpha'},1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[4mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\int_{\frac{\pi}{2}}^{\pi}\log\sqrt{\left(z_1(\alpha)+(r+\mu r_1)\frac{\alpha'-\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+(r+\mu r_1)(\alpha'-\pi)\right)^2}\\[3mm] &\hspace{3cm}\left.\cdot (r+\mu r_1)\left(\frac{(\alpha'-\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\right]_{|\mu=0}\\[4mm] \end{split} \end{equation*} \begin{equation*} \begin{split} &=W(\alpha,1)\frac{\omega_0}{2\pi}r\cdot r_1\int_{-\pi}^{-\frac{\pi}{2}}\frac{z_1(\alpha)+r\frac{\alpha'+\pi}{\sin\alpha'}\cos\alpha'+z_2(\alpha)+r(\alpha'+\pi)}{\left(z_1(\alpha)+r\frac{\alpha'+\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+r(\alpha'+\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{(\alpha'+\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\cdot r_1\int_{-\pi}^{-\frac{\pi}{2}}\log\sqrt{\left(z_1(\alpha)+r\frac{\alpha'+\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+r(\alpha'+\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{(\alpha'+\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}r\cdot r_1\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{z_1(\alpha)-r\frac{\alpha'}{\sin\alpha'}\cos\alpha'+z_2(\alpha)-r(\alpha')}{\left(z_1(\alpha)-r\frac{\alpha'}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)-r\alpha'\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{\cos\alpha'\sin\alpha'-\alpha'}{\sin^2\alpha'},1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\cdot 
r_1\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\log\sqrt{\left(z_1(\alpha)-r\frac{\alpha'}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)-r\alpha'\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{\cos\alpha'\sin\alpha'-\alpha'}{\sin^2\alpha'},1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha) \end{split} \end{equation*} \begin{equation*} \begin{split} &+W(\alpha,1)\frac{\omega_0}{2\pi}r\cdot r_1\int_{\frac{\pi}{2}}^{\pi}\frac{z_1(\alpha)+r\frac{\alpha'-\pi}{\sin\alpha'}\cos\alpha'+z_2(\alpha)+r(\alpha'-\pi)}{\left(z_1(\alpha)+r\frac{\alpha'-\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+r(\alpha'-\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{(\alpha'-\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\cdot r_1\int_{\frac{\pi}{2}}^{\pi}\log\sqrt{\left(z_1(\alpha)+r\frac{\alpha'-\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+r(\alpha'-\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{(\alpha'-\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha) \end{split} \end{equation*} \medskip \noindent When we evaluate this derivative at $(\theta_c,\tilde{\omega}_c,0;0,0,0)$, we get $$D_{r}\mathcal{F}_2(\theta_c,\tilde{\omega}_c,0;0,0,0)=0.$$ \medskip \noindent For the last operator $\mathcal{F}_3$ we have to compute the derivatives; for $D_{\theta_A}\mathcal{F}_3$ and $D_{\tilde{\omega}}\mathcal{F}_3$ we just have to substitute $\theta_A\mapsto \theta_A+\mu\theta_1$ and $\tilde{\omega}\mapsto\tilde{\omega}+\mu\omega_1$, respectively, and compute the derivatives as we did for the previous operators. Then we compute them at the Crapper point, so that we get $D_{\theta_A}\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)$ and $D_{\tilde{\omega}}\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)$. In order to apply the implicit function theorem, the relevant derivative for the third operator is the one with respect to $r$.
The dependence on $r$ enters through the definition of $\gamma(\alpha)$ in \eqref{gamma}, so we rewrite $\mathcal{F}_3$ in a convenient way. \begin{equation*} \begin{split} &\mathcal{F}_3(\theta_A,\tilde{\omega},r;B,p,\omega_0)=\frac{-\partial_{\alpha}\gamma_2(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{-\gamma_2(\alpha)+z_2(\alpha')}{(\gamma_1(\alpha)-z_1(\alpha'))^2+(\gamma_2(\alpha)-z_2(\alpha'))^2}\tilde{\omega}(\alpha')\,d\alpha'\\[3mm] &\hspace{1cm}+\frac{\partial_{\alpha}\gamma_1(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{\gamma_1(\alpha)-z_1(\alpha')}{(\gamma_1(\alpha)-z_1(\alpha'))^2+(\gamma_2(\alpha)-z_2(\alpha'))^2}\tilde{\omega}(\alpha')\,d\alpha'\\[3mm] &\hspace{1cm}-\frac{\omega_0}{2\pi}\partial_{\alpha}\gamma_2(\alpha)\textrm{P.V.}\int_{-\pi}^{\pi}\log\sqrt{(\gamma_1(\alpha)-\gamma_1(\alpha'))^2+(\gamma_2(\alpha)-\gamma_2(\alpha'))^2}\hspace{0.2cm}\partial_{\alpha}\gamma_1(\alpha')\,d\alpha'\\[3mm] &\hspace{1cm}+\frac{\omega_0}{2\pi}\partial_{\alpha}\gamma_1(\alpha)\textrm{P.V.}\int_{-\pi}^{\pi}\log\sqrt{(\gamma_1(\alpha)-\gamma_1(\alpha'))^2+(\gamma_2(\alpha)-\gamma_2(\alpha'))^2}\hspace{0.2cm}\partial_{\alpha}\gamma_2(\alpha')\,d\alpha'. \end{split} \end{equation*} \bigskip \noindent In order to simplify the computation we will define $\gamma(\alpha)=r\hspace{0.1cm}(\tilde{\gamma}_1(\alpha),\tilde{\gamma}_2(\alpha))$ and $\partial_{\alpha}\gamma(\alpha)=r\hspace{0.1cm}(\partial_{\alpha}\tilde{\gamma}_1(\alpha),\partial_{\alpha}\tilde{\gamma}_2(\alpha))$.
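\medskip \noindent As a quick consistency check on this parametrization (a verification only, not needed in what follows), note that the rescaled curve $\tilde{\gamma}(\alpha)=\gamma(\alpha)/r$ defined by \eqref{gamma} is continuous and closed. Indeed, since $\cos\left(\pm\frac{\pi}{2}\right)=0$, the one-sided limits of the branches match at $\alpha=\pm\frac{\pi}{2}$, \begin{equation*} \lim_{\alpha\to\left(\frac{\pi}{2}\right)^{\pm}}\tilde{\gamma}(\alpha)=\left(0,\frac{\pi}{2}\right),\qquad \lim_{\alpha\to\left(-\frac{\pi}{2}\right)^{\pm}}\tilde{\gamma}(\alpha)=\left(0,-\frac{\pi}{2}\right), \end{equation*} \noindent while at the endpoints, using $\sin\alpha\sim-(\alpha\mp\pi)$ as $\alpha\to\pm\pi$, we get $\tilde{\gamma}(-\pi)=\tilde{\gamma}(\pi)=(-1,0)$, so the curve closes up.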
\medskip \begin{equation*} \begin{split} &D_r\mathcal{F}_3=\frac{d}{d\mu}\left[\mathcal{F}_3(\theta_A,\tilde{\omega},r+\mu r_1;B,p,\omega_0)\right]_{|\mu=0}=\\[3mm] &=-\frac{r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{-r\tilde{\gamma}_2(\alpha)+z_2(\alpha')+1}{(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2}\tilde{\omega}(\alpha')\,d\alpha'\\[3mm] &-\frac{r\partial_{\alpha}\tilde{\gamma}_2(\alpha)}{2\pi}\left\{\int_{-\pi}^{\pi}\frac{-r_1\tilde{\gamma}_2(\alpha)}{(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2}\tilde{\omega}(\alpha')\,d\alpha'\right.\\[3mm] &\hspace{2cm}-\int_{-\pi}^{\pi}\frac{-r\tilde{\gamma}_2(\alpha)+z_2(\alpha')+1}{\left[(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2\right]^2}\tilde{\omega}(\alpha')\\[3mm] &\hspace{3cm}\left.\cdot\left[2(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))r_1\tilde{\gamma}_1(\alpha)+2(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)r_1\tilde{\gamma}_2(\alpha)\right]\,d\alpha'\right\}\\[4mm] &+\frac{r_1\partial_{\alpha}\tilde{\gamma}_1(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{r\tilde{\gamma}_1(\alpha)-z_1(\alpha')}{(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2}\tilde{\omega}(\alpha')\,d\alpha'\\[3mm] &+\frac{r\partial_{\alpha}\tilde{\gamma}_1(\alpha)}{2\pi}\left\{\int_{-\pi}^{\pi}\frac{r_1\tilde{\gamma}_1(\alpha)}{(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2}\tilde{\omega}(\alpha')\,d\alpha'\right.\\[3mm] &\hspace{2cm}-\int_{-\pi}^{\pi}\frac{r\tilde{\gamma}_1(\alpha)-z_1(\alpha')}{\left[(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2\right]^2}\tilde{\omega}(\alpha')\\[3mm] &\hspace{3cm}\left.\cdot\left[2(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))r_1\tilde{\gamma}_1(\alpha)+2(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)r_1\tilde{\gamma}_2(\alpha)\right]\,d\alpha'\right\} 
\end{split} \end{equation*} \begin{equation*} \begin{split} &-\frac{\omega_0}{2\pi}2 r r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)\textrm{P.V.}\int_{-\pi}^{\pi}\log\left(r\sqrt{(\tilde{\gamma}_1(\alpha)-\tilde{\gamma}_1(\alpha'))^2+(\tilde{\gamma}_2(\alpha)-\tilde{\gamma}_2(\alpha'))^2}\right)\partial_{\alpha}\tilde{\gamma}_1(\alpha')\,d\alpha'\\[3mm] &-\frac{\omega_0}{2\pi}r r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)\int_{-\pi}^{\pi}\partial_{\alpha}\tilde{\gamma}_1(\alpha')\,d\alpha'\\[3mm] &+\frac{\omega_0}{2\pi}2rr_1\partial_{\alpha}\tilde{\gamma}_1(\alpha)\textrm{P.V.}\int_{-\pi}^{\pi}\log\left(r\sqrt{(\tilde{\gamma}_1(\alpha)-\tilde{\gamma}_1(\alpha'))^2+(\tilde{\gamma}_2(\alpha)-\tilde{\gamma}_2(\alpha'))^2}\right)\partial_{\alpha}\tilde{\gamma}_2(\alpha')\,d\alpha'\\[3mm] &+\frac{\omega_0}{2\pi}r r_1\partial_{\alpha}\tilde{\gamma}_1(\alpha)\int_{-\pi}^{\pi}\partial_{\alpha}\tilde{\gamma}_2(\alpha')\,d\alpha' \end{split} \end{equation*} \bigskip \begin{remark} We notice that, for $r=0$, all the terms above vanish except the first and the third. Moreover, when computed at the Crapper point $(\theta_c, \tilde{\omega}_c)$, the third term also vanishes because of the parity of the Crapper curve $z^c(\alpha)$ (see \eqref{z-parity}) and of $\tilde{\omega}(\alpha)$, which is even. Hence, in order to have a Fr\'echet derivative different from zero for every $\alpha\in[-\pi,\pi]$, we choose $\gamma(\alpha)$ as in \eqref{gamma}, so that the first term is always different from zero. \end{remark} \bigskip \noindent Then we end up with \begin{equation*} \begin{split} D_r\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)=&-\frac{r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{z^c_2(\alpha')+1}{(z^c_1(\alpha'))^2+(z^c_2(\alpha')+1)^2}\tilde{\omega}_c(\alpha')\,d\alpha'. \end{split} \end{equation*} \medskip \noindent It remains to prove the invertibility of the derivatives.
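\medskip \noindent Let us note, as a sketch of the strategy rather than a complete argument, why the invertibility of the diagonal entries is what matters: since $D_{\tilde{\omega}}\mathcal{F}_1=D_{r}\mathcal{F}_1=0$ and $D_{r}\mathcal{F}_2(\theta_c,\tilde{\omega}_c,0;0,0,0)=0$, the linearization at the Crapper point is lower triangular and, assuming each diagonal entry can be inverted, the system $D\mathcal{F}\,(\theta_1,\omega_1,r_1)=(f_1,f_2,f_3)$ is formally solved by back-substitution \begin{align*} \theta_1&=\left(D_{\theta_A}\mathcal{F}_1\right)^{-1}f_1,\\ \omega_1&=\left(D_{\tilde{\omega}}\mathcal{F}_2\right)^{-1}\left(f_2-D_{\theta_A}\mathcal{F}_2\,\theta_1\right),\\ r_1&=\left(D_{r}\mathcal{F}_3\right)^{-1}\left(f_3-D_{\theta_A}\mathcal{F}_3\,\theta_1-D_{\tilde{\omega}}\mathcal{F}_3\,\omega_1\right). \end{align*} \noindent This scheme is only formal: inverting the first diagonal entry requires the additional arguments of Section \ref{point-existence-proof}.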
In particular, the matrix of derivatives is the following \begin{equation}\label{frechet-derivative} D\mathcal{F}(\theta_c,\tilde{\omega}_c, 0;0,0,0)= \begin{pmatrix} D_{\theta_A}\mathcal{F}_1 & 0 & 0 \\ D_{\theta_A}\mathcal{F}_2 & D_{\tilde{\omega}}\mathcal{F}_2 & 0\\ D_{\theta_A}\mathcal{F}_3 & D_{\tilde{\omega}}\mathcal{F}_3 & D_r \mathcal{F}_3 \end{pmatrix}= \begin{pmatrix} \Gamma & 0 & 0\\ D_{\theta_A}\mathcal{F}_2 & \mathcal{A}(z^c(\alpha))+\mathcal{I}& 0\\ D_{\theta_A}\mathcal{F}_3 & D_{\tilde{\omega}}\mathcal{F}_3 & D_r\mathcal{F}_3 \end{pmatrix}\cdot \begin{pmatrix} \theta_1\\ \omega_1\\ r_1 \end{pmatrix} \end{equation} \medskip \noindent where \begin{align*} &\Gamma\theta_1=\cosh(\mathcal{H}\theta_c)\mathcal{H}\theta_1+q\frac{d}{d\alpha}\theta_1\\[3mm] &(\mathcal{A}(z^c(\alpha))+\mathcal{I})\omega_1=2BR(z^c(\alpha),\omega_1)\cdot\partial_{\alpha}z^c(\alpha)+\omega_1.\\[3mm] &D_r\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)=-\frac{r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{z^c_2(\alpha')+1}{(z^c_1(\alpha'))^2+(z^c_2(\alpha')+1)^2}\omega(\alpha')\,d\alpha'. \end{align*} \medskip \noindent We single out these three operators since the matrix is lower triangular, and hence it is invertible as soon as the diagonal entries are invertible.\\ \noindent We observe immediately that the choice of the curve $\gamma(\alpha)$ is crucial, since the second component of $\partial_{\alpha}\tilde{\gamma}(\alpha)$ is non-zero for every $\alpha\in [-\pi,\pi]$. So we can invert $D_r\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)$, as required. For the other two operators we have to use Lemma \ref{DF1-invertible} and Lemma \ref{DF2-invertible} to overcome the problem of the non-invertibility of $\Gamma$, see section \ref{point-existence-proof}. Hence, we state the following result. \begin{theorem} Let $|A|<A_0$.
Then \medskip \begin{enumerate} \item there exist $(\omega_0,\theta_A)$ and a unique smooth function $\tilde{\tau}^*:U_{\omega_0, \theta_A}\rightarrow H^2_{even}$, such that $\tilde{\tau}^*(0,\theta_A)=\tilde{\tau}$ (see \eqref{G-operator}),\\[3mm] \item there exist $(B,p,\omega_0)$ and a unique smooth function $B^*:U_{p,\omega_0}\rightarrow U_B$, such that $B^*(0,0)=0$,\\[3mm] \item there exists a unique smooth function $\Theta_c: U_{B,p,\omega_0}\rightarrow H^2_{odd}\times H^1_{even}\times\mathbb{R}$, such that $\Theta_c(0,0,0)=(\theta_c,\tilde{\omega}_c,0)$ \end{enumerate} \medskip and these functions satisfy $$\mathcal{F}(\Theta_c(B^*(p,\omega_0),p,\omega_0);B^*(p,\omega_0),p,\omega_0)=0.$$ \end{theorem} \medskip \noindent The proof of Theorem \ref{patch-existence} follows directly from this theorem. \bigskip \subsection*{Acknowledgements.} This work is supported in part by the Spanish Ministry of Science and Innovation, through the “Severo Ochoa Programme for Centres of Excellence in R\&D (CEX2019-000904-S)” and MTM2017-89976-P. DC and EDI were partially supported by the ERC Advanced Grant 788250. \printbibliography \end{document}
\section{Introduction} We work over a field $k$ with algebraic closure $\bar{k}$. A rational curve is a smooth, projective, geometrically connected curve of genus $0$. \subsection*{The field of moduli} Let $D\subset \PP^{1}_{\bar{k}}$ be an effective reduced divisor. If $k$ is perfect, consider the subgroup $H\subset\operatorname{Gal}(\bar{k}/k)$ of elements $\sigma$ such that there exists an automorphism of $\PP^{1}_{\bar{k}}/\bar{k}$ mapping $D$ to $\sigma(D)$. The \emph{field of moduli} $k(D)$ of $D$ is the subfield of $\bar{k}$ of elements fixed by $H$. In our joint paper with A. Vistoli \cite{giulio-angelo-moduli} we have generalized the definition of the field of moduli to arbitrary $k$; we work in this generality. If $\bar{k}/k'/k$ is a sub-extension and there exists a rational curve $P$ over $k'$ with a divisor $D_{0}\subset P$ such that $(P_{\bar{k}},D_{0,\bar{k}})\simeq(\PP^{1}_{\bar{k}},D)$, we say that $k'$ is a field of definition for $(\PP^{1}_{\bar{k}},D)$. The field of moduli is contained in every field of definition. \subsection*{Marinatto's results} Let $n$ be the degree of $D$ and assume $n\ge 3$. Assuming $k$ perfect of characteristic $\neq 2$, A. Marinatto \cite{marinatto} showed that, if $n$ is odd or $n=4$, then $D$ descends to a divisor of $\PP^{1}_{k(D)}$. Hence, if $n$ is odd or $n=4$, $k(D)$ is a field of definition for $(\PP^{1}_{\bar{k}},D)$. For every even integer $n\ge 6$, Marinatto constructed examples of divisors $D\subset\PP^{1}_{\bar{k}}$ of degree $n$ which do not descend to $\PP^{1}_{k(D)}$. His examples do, however, descend to a non-trivial rational curve over $k(D)$; hence $k(D)$ is a field of definition for $(\PP^{1}_{\bar{k}},D)$ in these cases. \subsection*{Our results} Because of this, the problem of fields of definition versus fields of moduli for $n$ even and $\ge 6$ is still open. We solve it, and generalize Marinatto's results for $n$ odd and $n=4$ to an arbitrary base field.
The following theorem sums up the various results we obtain. \begin{theorem}\label{thm:main} Let $k$ be any field, $n\ge 3$ an integer and $D\subset \PP^{1}_{\bar{k}}$ a reduced, effective divisor of degree $n$. If at least one of the following conditions holds, then $(\PP^{1}_{\bar{k}},D)$ is defined over its field of moduli. \begin{itemize} \item $\aut(\PP^{1},D)$ is not cyclic of even degree prime to $\operatorname{char} k$. \item $n=4$. \item $n=6$. \item $n$ is odd. \end{itemize} In particular, the first condition implies that $(\PP^{1}_{\bar{k}},D)$ is always defined over its field of moduli if $\operatorname{char} k=2$. If $n=4$ or $n$ is odd, we can say more precisely that $D$ descends to a divisor of $\PP^{1}$ over the field of moduli. On the other hand, if $\operatorname{char} k\neq 2$, $n\ge 8$ is even and the $2$-torsion $\on{Br}(k)[2]$ of the Brauer group of $k$ is non-trivial, then we may choose $D$ such that the field of moduli is $k$ and $(\PP^{1}_{\bar{k}},D)$ is not defined over $k$. \end{theorem} For $n=4$ or $n$ odd, Theorem~\ref{thm:main} is a generalization of Marinatto's theorem to arbitrary fields. For $n$ even and $\ge 6$, it is new. In order to prove all the various sub-statements of Theorem~\ref{thm:main}, we first characterize the divisors $D\subset\PP^{1}_{\bar{k}}$ such that the field of moduli is a field of definition for $(\PP^{1}_{\bar{k}},D)$; see Theorem~\ref{thm:divisors}. \subsection*{Discussion of proofs} Marinatto's proof for $n$ odd is based on a case-by-case analysis of the finite subgroups of $\mathrm{PGL}_{2}$. Our approach is more conceptual: for $n$ odd, we only use general techniques for problems about fields of moduli (not specific to divisors or rational curves) plus a parity counting. For $n$ even and $\ge 8$, we also need some elementary geometric arguments.
If $\aut(\PP^{1},D)$ is not cyclic of degree even and prime with $\operatorname{char} k$, we have a conceptual argument based on the Riemann-Hurwitz formula; from this case we get the cases $n=4,6$, too. Marinatto also uses a theorem of Huggins \cite[Theorem 5.3]{huggins} about fields of moduli of hyperelliptic curves. Huggins' proof is based on a case-by-case analysis of the finite subgroups of $\mathrm{PGL}_{2}$, too. Our argument with Riemann-Hurwitz directly yields a short, conceptual proof of Huggins' result; we include it at the end of the paper. Our version is slightly more general than the original one since we do not assume the base field to be perfect. While we use the general framework constructed in \cite{giulio-angelo-moduli}, we mention the fact that, in dimension $1$ (which is our case), the main ideas are due to P. Dèbes and M. Emsalem \cite{debes-emsalem}. \subsection*{Acknowledgements} This paper was born as part of my recent joint articles with A. Vistoli \cite{giulio-angelo-valuative} and \cite{giulio-angelo-moduli}. I am grateful to him for many useful discussions, as well as for pointing out Marinatto's article to me. \section{The residual gerbe and the compression} For the convenience of the reader, we recall some objects and constructions from our joint paper with A. Vistoli \cite{giulio-angelo-moduli}. Consider the functor $\cG:\on{Sch}/k\to\on{Sets}$ sending a scheme $S$ over $k$ to the set of projective bundles $Q\to S$ of relative dimension $1$ with an effective divisor $E\subset Q$ finite étale over $S$ such that each geometric fiber is isomorphic to $(\PP^{1}_{\bar{k}},D)$, possibly after enlarging the base field. It can be checked that $\cG$ is an algebraic stack of finite type over $k$ with finite, étale inertia. 
The algebraic stack $\cG$ is a gerbe over a field $k(D)$ finite over $k$ which is, by definition, the field of moduli of $(\PP^{1}_{\bar{k}},D)$, and the base change $\cG\times_{k(D)}\bar{k}$ is isomorphic to the classifying stack $\cB_{\bar{k}}\aut(\PP^{1}_{\bar{k}},D)$ \cite[Proposition 3.10]{giulio-angelo-moduli}. It is called the \emph{residual gerbe} of $(\PP^{1}_{\bar{k}},D)$. By definition, $(\PP^{1}_{\bar{k}},D)$ is defined over $k(D)$ if and only if the residual gerbe $\cG$ has a $k(D)$-rational point. If $k$ is finite then this is always true \cite[Theorem 8.1]{diproietto-tonini-zhang}. There is a universal projective bundle $\cP\to\cG$ over $\cG$ with a universal divisor $\cD\subset\cP$ finite étale over $\cG$. The coarse moduli space $P$ of $\cP$ is called the \emph{compression} of $(\PP^{1}_{\bar{k}},D)$; it is a rational curve over $k(D)$ whose pullback to $\bar{k}$ identifies naturally with $\PP^{1}_{\bar{k}}/\aut(\PP^{1}_{\bar{k}},D)$ \cite[\S 5]{giulio-angelo-moduli}. Let us call the coarse moduli space $\bD\subset\bP$ of $\cD$ the \emph{compressed divisor}; since $\cD$ is finite étale over $\cG$, $\bD$ is finite étale over $\spec k$. If $k$ is perfect, we can give a more down-to-earth construction of the compression: $\operatorname{Gal}(\bar{k}/k(D))$ acts naturally on $\PP^{1}_{\bar{k}}/\aut(\PP^{1}_{\bar{k}},D)$, which thus descends to a rational curve $\bP$ over $k(D)$, and similarly $D/\aut(\PP^{1}_{\bar{k}},D)$ descends to a divisor $\bD\subset\bP$. There is a rational map $P\dashrightarrow \cG$ whose pullback $P_{\bar{k}}=\PP^{1}_{\bar{k}}/\aut(\PP^{1},D)\dashrightarrow \cG_{\bar{k}}=\cB_{\bar{k}}\aut(\PP^{1}_{\bar{k}},D)$ to $\bar{k}$ is associated with the projection $\PP^{1}_{\bar{k}}\to\PP^{1}_{\bar{k}}/\aut(\PP^{1}_{\bar{k}},D)$, see the proof of \cite[Theorem 5.4]{giulio-angelo-moduli}. In particular, if $P\simeq \PP^{1}_{k(D)}$ then $\cG$ has a $k(D)$-rational point.
\section{Rational maps from rational curves to gerbes} As we have seen above, rational maps $P\dashrightarrow \cG$ where $P$ is a rational curve, $\cG$ is a finite étale gerbe and such that the geometric fibers are connected of genus $0$ play a crucial role in our problem. In this section, we study such maps. \begin{proposition}\label{prop:cyclicgerbe} Let $k$ be a field, $P$ a rational curve with $P(k)=\emptyset$, $\Phi$ a finite, étale gerbe with a rational map $P\dashrightarrow\Phi$. Assume that the geometric fibers of $P\dashrightarrow\Phi$ are irreducible curves of genus $0$. The gerbe $\Phi$ is abelian with cyclic band of order prime to $\operatorname{char}k$, and there exists a separable extension $k'/k$ of degree $2$ and a point $p\in P(k')$ such that $P\dashrightarrow\Phi$ restricts to a morphism $P\setminus\{p\}\to \Phi$. \end{proposition} Let us first sketch the proof in the particular case in which $k$ is perfect. Write $\Phi_{\bar{k}}=\cB_{\bar{k}}G$ for some finite group $G$, the base change of $P\dashrightarrow\Phi$ to $\bar{k}$ corresponds to a $G$-covering $\PP^{1}_{\bar{k}}\to\PP^{1}_{\bar{k}}$. The branch locus of the $G$-covering descends to $P$: if $P(k)=\emptyset$, this forces every branch point $p$ of $\PP^{1}_{\bar{k}}$ to have at least one Galois conjugate $\bar{p}$ such that the ramification data over $p$ and $\bar{p}$ are equal. Using the Riemann-Hurwitz formula, it can be easily checked that this only happens for $G$ cyclic. \begin{proof} Let $\bar{k}$ be an algebraic closure and $\bar{k}\supseteq k^{s}\supseteq k$ be the separable closure, we have $P_{k^{s}}\simeq\PP^{1}_{k^{s}}$, in particular $\Phi_{k^{s}}$ is neutral since there is a rational map $P\dashrightarrow\Phi$. Choose a section $\spec k^{s}\to \Phi$ and let $G$ be its automorphism group, it is a finite constant group and there is an induced branched $G$-covering $f:\PP^{1}_{k^{s}}\to P_{k^{s}}\simeq \PP^{1}_{k^{s}}$. 
Let $R$ be the ramification divisor of $f$ as in \cite[\href{https://stacks.math.columbia.edu/tag/0C1B}{Tag 0C1B}]{stacks-project}, we think of $R$ as a non-reduced, finite scheme over $k^{s}$. There is an action of $G$ on $R$, in particular on the underlying set. Given a point $r\in R$ denote by $d_{r}$ the length of $R$ at $r$, $e_{r}$ the ramification index, $o_{r}$ the cardinality of the set-theoretic $G$-orbit of $r$. Since $f$ has degree $n$, then $n=o_{r}e_{r}[k^{s}(r):k^{s}(f(r))]$. By Riemann-Hurwitz \cite[\href{https://stacks.math.columbia.edu/tag/0C1F}{Lemma 0C1F}]{stacks-project} we have \[2n-2=\sum_{r\in R}d_{r}[k^{s}(r):k^{s}]\] with $d_{r}=e_{r}-1$ if $f$ is tamely ramified at $r$ and $d_{r}\ge e_{r}$ otherwise. There exists a largest open subset $U\subset P$ such that $U\to\Phi$ is defined \cite[Corollary A.2]{giulio-ed}. Write $Z=P\setminus U$, then $Z_{k^{s}}$ is the branch locus of $f$ \cite[Lemma A.1, Corollary A.2.iv]{giulio-ed}, in particular $f(R)=Z_{k^{s}}$ set-theoretically. Since $P$ is a non-trivial Brauer-Severi variety, $Z$ has no rational points. This implies that, if $r\in R$ is a point such that $k^{s}(f(r))=k^{s}$, there exists a point $r'\in R$ such that $f(r')\neq f(r)$ is a Galois conjugate of $f(r)$, and hence $o_{r}=o_{r'}$, $d_{r}=d_{r'}$, $e_{r}=e_{r'}$, $k^{s}(r)\simeq k^{s}(r')$. Let us show first that $f$ is tamely ramified at every point. Assume that $f$ has wild ramification at some point $r$, in particular $d_{r}\ge e_{r}$. If $k^{s}(f(r))=k^{s}$, by what we have said above we have \[2n-2\ge 2o_{r}e_{r}[k^{s}(r):k^{s}]=2n,\] which is absurd. If $k^{s}(f(r))\neq k^{s}$, then \[2n-2\ge o_{r}e_{r}[k^{s}(r):k^{s}]\ge 2o_{r}e_{r}[k^{s}(r):k^{s}(f(r))]=2n,\] hence we get a contradiction in this case, too. We may thus assume that $f$ has tame ramification: in particular, $k^{s}(r)=k^{s}(f(r))$ for every $r$, $n=o_{r}e_{r}$ and $d_{r}=e_{r}-1$ for every $r\in R$. 
Since $f$ is ramified at every $r\in R$, we have $e_{r}\ge 2$ and $o_{r}\le n/2$. For every $z\in Z_{k^{s}}$, write $e_{z}$, $o_{z}$ for $e_{r}$, $o_{r}$, where $r\in R$ is some point with $f(r)=z$. We have \[2n-2=\sum_{r\in R}(e_{r}-1)[k^{s}(r):k^{s}]=\sum_{z\in Z_{k^{s}}}o_{z}(e_{z}-1)[k^{s}(z):k^{s}]=\] \[=\sum_{z\in Z_{k^{s}}}(n-o_{z})[k^{s}(z):k^{s}]\ge \deg Z\cdot n/2\] which implies $\deg Z\le 3$. Since $P$ is a non-trivial Brauer-Severi variety of dimension $1$, $\deg Z$ is even, hence we may assume $\deg Z=2$ (if $\deg Z=0$, then $G$ is trivial). Assume first that $Z_{k^{s}}$ contains only one point $z$ with $[k^{s}(z): k^{s}]=2$; since $k^{s}$ is separably closed, $k^{s}(z)/k^{s}$ is then purely inseparable, so in particular $\operatorname{char}k=2$. Then \[2n-2=o_{z}(e_{z}-1)[k^{s}(z):k^{s}]=2n-2o_{z},\] hence $o_{z}=1$ and $n=e_{z}$ is prime to $\operatorname{char} k=2$, i.e. it is odd. Since $n$ is odd, the base change $f_{\bar{k}}$ of $f$ to $\bar{k}$ is tamely ramified, and since $f$ has only one branch point the same is true for $f_{\bar{k}}$, and the ramification index must be $n$. By Riemann-Hurwitz applied to $f_{\bar{k}}$, we have $2n-2=n-1$, i.e. $n=1$, which is in contradiction with $\deg Z=2$. If $Z_{k^{s}}$ contains two $k^{s}$-rational points, the Galois action swaps them because $Z$ has no rational points, hence \[2n-2=2o_{z}(e_{z}-1)=2n-2o_{z},\] which implies $o_{z}=1$ and $n=e_{z}$ is prime to $\operatorname{char}k$. If $r\in R$ is one of the ramification points, then $r$ is fixed by $G$ since $|G|=n=e_{z}$. Since $|G|=n$ is prime to $\operatorname{char} k$, $G$ is linearly reductive: this implies that $G$ acts faithfully on the tangent space at $r$ and is thus cyclic. \end{proof} \begin{corollary}\label{cor:cyclic} Let $k$ be a field and $D\subset \PP^{1}_{\bar{k}}$ an effective, reduced divisor of degree $n\ge 3$ defined over $\bar{k}$.
If the compression of $(\PP^{1}_{\bar{k}},D)$ is not isomorphic to $\PP^{1}$ over the field of moduli, then $\aut(\PP^{1}_{\bar{k}},D)$ is cyclic of order prime to the characteristic. \end{corollary} \begin{lemma}\label{lem:BSgerbe} Let $P\dashrightarrow \Phi$ be as in Proposition~\ref{prop:cyclicgerbe}. Then $\Phi$ is neutral if and only if it has odd degree. \end{lemma} \begin{proof} By Proposition~\ref{prop:cyclicgerbe}, $\Phi$ is abelian with cyclic band $A$ of order prime to $\operatorname{char} k$. We have that $\Phi$ corresponds to a class $\phi\in\H^{2}(k,A)$, and $\Phi$ is neutral if and only if $\phi$ is trivial. Let $n$ be the degree of $\Phi$, which corresponds to the degree of $A$. The group $\H^{2}(k,A)$ is abelian and $n$-torsion. Let $k'$ be the residue field of the point $p$ given by Proposition~\ref{prop:cyclicgerbe}; then $k'/k$ is a separable extension of degree $2$ and $\phi_{k'}=0$ since $k'$ splits $P$. We have $2\phi=\operatorname{cor}_{k'/k}(\phi_{k'})=0$: if $n$ is odd, then $\phi=\frac{n+1}{2}2\phi=0$. If $\Phi$ has even degree, by Proposition~\ref{prop:cyclicgerbe} $\operatorname{char} k$ is prime to $n$ and hence $\operatorname{char} k\neq 2$. In particular, the $2$-adic étale fundamental group of $\GG_{m}$ over $k^{s}$ is $\ZZ_{2}$. Write $U=P\setminus\{p\}$; to prove that $\Phi$ is not neutral it is then enough to show that there exists a morphism $U\to\Gamma$ where $\Gamma$ is a non-neutral gerbe banded by $\mu_{2}$, since the fact that the degree of $\Phi$ is even gives us a factorization $U\to\Phi\to\Gamma$. Such a morphism $U\to\Gamma$ is constructed in the proof of \cite[Proposition 13.2]{borne-vistoli} using parameters $p=r=2$ and $X=U$. \end{proof} \section{Which divisors descend to a rational curve} We characterize which pairs $(\PP^{1}_{\bar{k}},D)$ descend to the field of moduli. We then prove Theorem~\ref{thm:main} using this characterization.
\begin{theorem}\label{thm:divisors} Let $k$ be a field and $D\subset\PP^{1}_{\bar{k}}$ an effective, reduced divisor of degree $n\ge 3$ on $\PP^{1}_{\bar{k}}$. The following are equivalent. \begin{itemize} \item $(\PP^{1}_{\bar{k}},D)$ is not defined over its field of moduli. \item The compression has no rational points and $\aut(\PP^{1}_{\bar{k}},D)$ has even degree. \item The compression has no rational points and $\aut(\PP^{1}_{\bar{k}},D)$ is cyclic of even degree prime to $\operatorname{char}k$. \end{itemize} \end{theorem} \begin{proof} Thanks to \cite[Proposition 4.1]{giulio-angelo-moduli}, we may assume that $k$ is infinite. Furthermore, by base change we may assume that $k$ is the field of moduli. Let $\cG,\bP$ be the residual gerbe and the compression respectively; there is a rational map $\bP\dashrightarrow\cG$ whose geometric fibers are irreducible of genus $0$. If $(\PP^{1}_{\bar{k}},D)$ is not defined over $k$, i.e. $\cG$ is not neutral, then $\bP$ is not isomorphic to $\PP^{1}$, and hence $\cG$ is abelian with cyclic band of even degree prime to $\operatorname{char} k$ by Proposition~\ref{prop:cyclicgerbe} and Lemma~\ref{lem:BSgerbe}. Since $\cG_{\bar{k}}=\cB_{\bar{k}}\aut(\PP^{1}_{\bar{k}},D)$, we obtain that the first condition implies the third, which in turn clearly implies the second. If $\bP(k)=\emptyset$ and $\aut(\PP^{1}_{\bar{k}},D)$ has even degree, then by Lemma~\ref{lem:BSgerbe} the residual gerbe $\cG$ is not neutral, hence we conclude. \end{proof} \begin{corollary}\label{cor:cha2} Let $k$ be a field of characteristic $2$ and $D\subset \PP^{1}_{\bar{k}}$ an effective, reduced divisor of degree $n\ge 3$. Then $(\PP^{1}_{\bar{k}},D)$ is defined over its field of moduli. \end{corollary} \section{Divisors of degree $6$} \begin{proposition}\label{prop:deg6} Let $k$ be a field and $D\subset \PP^{1}_{\bar{k}}$ an effective, reduced divisor of degree $6$. Then $(\PP^{1}_{\bar{k}},D)$ is defined over its field of moduli.
\end{proposition} \begin{proof} Thanks to Corollary~\ref{cor:cha2}, we may assume $\operatorname{char} k\neq 2$. Up to base change, we may assume that $k$ is the field of moduli. Let $\bP$ be the compression and $\bD\subset\bP$ the compressed divisor; then $\bD$ is finite étale over $k$. Assume by contradiction that $(\PP^{1}_{\bar{k}},D)$ is not defined over $k$; then by Theorem~\ref{thm:divisors} $\aut(\PP^{1}_{\bar{k}},D)$ is cyclic of even degree prime to $\operatorname{char} k$, and $\bP$ is a non-trivial rational curve. Since $\bP$ is non-trivial, $\bD$ has even degree. Since $\operatorname{char} k\neq 2$, $\bD_{\bar{k}}=D/\aut(\PP^{1},D)$ has either $2$, $4$ or $6$ points. Since $\aut(\PP^{1}_{\bar{k}},D)$ is cyclic of even degree prime to $\operatorname{char} k$, its action on $\PP^{1}_{\bar{k}}$ has two fixed points, while all the other orbits have even cardinality equal to $|\aut(\PP^{1}_{\bar{k}},D)|$. It then follows that $D$ contains the $2$ fixed points, $\bD_{\bar{k}}$ has $4$ points and $\aut(\PP^{1}_{\bar{k}},D)$ has order $2$. Choose coordinates on $\PP^{1}_{\bar{k}}$ such that $0,\infty\in D$ are the two fixed points, and $1$ is another point of $D$. With respect to these coordinates, the only non-trivial element of $\aut(\PP^{1}_{\bar{k}},D)$ is the map $x\mapsto -x$; we may thus write $D=\{0,\infty,1,-1,\lambda,-\lambda\}$. One immediately checks that $x\mapsto \lambda /x$ defines another non-trivial automorphism of $(\PP^{1},D)$, hence we get a contradiction. \end{proof} \section{Divisors of even degree $n\ge 8$}\label{sec:n8} In order to construct divisors of even degree $n\ge 8$ not defined over their field of moduli, we first construct a divisor over a non-trivial rational curve with certain special properties. \begin{lemma}\label{lem:prep} Let $k$ be an infinite field of characteristic $\neq 2$, $P$ a non-trivial rational curve, $n\ge 8$ an even integer.
There exists a reduced, effective divisor $E\subset P$, a quadratic extension $k'/k$, a point $p\in P(k')$ with Galois conjugate $\bar{p}\in P(k')$ and a cyclic cover $f:\PP^{1}_{\bar{k}}\to P_{\bar{k}}$ of degree $2$ ramified over $p,\bar{p}$ such that the divisor $D=f^{-1}(E_{\bar{k}})$ with the reduced structure has degree $n$ and $\aut(f)$ is its own centralizer in $\aut(\PP^{1}_{\bar{k}},D)$. \end{lemma} \begin{proof} Let $\omega$ be the canonical bundle of $P$. Write either $n=4m$ or $n=4m-2$ with $m\ge 2$, depending on the class of $n$ modulo $4$. Consider the natural map $\PP(\H^{0}(\omega^{-1}))\times\PP(\H^{0}(\omega^{-m+1}))\to\PP(\H^{0}(\omega^{-m}))$ corresponding to the sum of divisors; it is dominant. If $n\ge 10$, i.e. $m\ge 3$, a generic rational point of $\PP(\H^{0}(\omega^{-m}))$ corresponds to a divisor étale over $k$ with trivial automorphism group scheme. Moreover, any rational point of $\PP(\H^{0}(\omega^{-1}))$ corresponds to a point of $P$ whose residue field is a quadratic extension of $k$. Taking a generic rational point of $\PP(\H^{0}(\omega^{-1}))\times\PP(\H^{0}(\omega^{-m+1}))$, we can thus find a divisor $E\subset P$ of degree $2m$ étale over $k$ with trivial automorphism group scheme and a quadratic extension $k'/k$ such that $E(k')\neq\emptyset$. If $n=8$, with an analogous argument we get a reduced divisor $E\subset P$ of degree $4$ such that $\aut(\PP^{1},E_{\bar{k}})$ is the Klein group. If $p\in P(k')$ is a $k'$-rational point, denote by $\bar{p}$ its Galois conjugate. If $n=4m-2$, choose $p\in E(k')$, otherwise $p\in (P\setminus E)(k')$. If $n=8$, for every non-trivial $g\in\aut(P_{\bar{k}},E_{\bar{k}})$ the set of points $p\in P(k')$ such that either $g(p)=p$ or $g(p)=\bar{p}$ is not Zariski-dense; since $k$ is infinite, we may choose $p\in (P\setminus E)(k')$ so that $g(p)\neq p$ and $g(p)\neq\bar{p}$ for every non-trivial $g\in\aut(P_{\bar{k}},E_{\bar{k}})$.
Consider a cyclic cover $f:\PP^{1}_{\bar{k}}\to P_{\bar{k}}$ of degree $2$ which ramifies at $p$ and $\bar{p}$ and let $D\subset \PP^{1}_{\bar{k}}$ be the inverse image of $E_{\bar{k}}$ with the reduced structure; it is an effective divisor of degree $n$. The group of automorphisms $\aut(\PP^{1},D)$ has a cyclic subgroup $\aut(f)$ of order $2$. Let $C\subset\aut(\PP^{1},D)$ be the centralizer of $\aut(f)$; then $C/\aut(f)$ acts faithfully on $(P_{\bar{k}},E_{\bar{k}})$. If $n\ge 10$, $\aut(P_{\bar{k}},E_{\bar{k}})$ is trivial and hence $C=\aut(f)$. If $n=8$ and $g\in C/\aut(f)\subset \aut(P_{\bar{k}},E_{\bar{k}})$, then $g(p)$ is a branch point for $f$ and hence either $g(p)=p$ or $g(p)=\bar{p}$. By our choice of $p$, we get that $g$ is trivial, hence $C=\aut(f)$ in this case, too. \end{proof} \begin{lemma}\label{lem:grp} Let $G$ be a finite group and $g\in G$ a non-trivial element with $g^2=\id$ and such that the centralizer of $g$ is $\langle g\rangle$. Then $|G|/2$ is odd. \end{lemma} \begin{proof} Let $P\subset G$ be a $2$-Sylow subgroup containing $g$; the center $Z$ of $P$ is non-trivial. Since the centralizer of $g$ is $\langle g\rangle$, we get that $Z\subset \langle g\rangle$ and hence $Z=\langle g\rangle$. It follows that $P$ centralizes $g$, hence $P=Z=\langle g\rangle$ and $|G|/2=[G:P]$ is odd. \end{proof} \begin{proposition}\label{prop:deg8} Let $k$ be a field of characteristic $\neq 2$, and assume that the $2$-torsion $\operatorname{Br}(k)[2]$ of the Brauer group is non-trivial. Let $n\ge 8$ be an even integer. There exists an effective, reduced divisor $D\subset \PP^{1}_{\bar{k}}$ of degree $n$ with field of moduli equal to $k$ such that $(\PP^{1}_{\bar{k}},D)$ is not defined over $k$. \end{proposition} \begin{proof} Since the $2$-torsion of the Brauer group is non-trivial, in particular $k$ is infinite, and there exists a non-trivial rational curve $P$. Let $E\subset P$, $p\in P(k')$, $f:\PP^{1}_{\bar{k}}\to P_{\bar{k}}$, $D\subset \PP^{1}_{\bar{k}}$ be as in Lemma~\ref{lem:prep}.
Let $k(D)/k$ be the field of moduli of $(\PP^{1}_{\bar{k}},D)$ and $\cG$ the residual gerbe. Consider $p$ as a divisor of degree $2$, let $\sqrt[2]{P,p}$ be the $2$nd root stack and $\sqrt[2]{P,p}\to\Pi$ its étale fundamental gerbe. We have that $\Pi$ is an abelian gerbe banded by $\mu_{2}$ and $\sqrt[2]{P,p}\to\Pi$ is representable; this can be checked after base changing to $\bar{k}$. Let $E'\subset \sqrt[2]{P,p}$ be the inverse image of $E\subset P$ with the reduced structure; since $\sqrt[2]{P,p}\to\Pi$ is representable, $E'$ is finite étale over $\Pi$ and the geometric fibers of $(\sqrt[2]{P,p},E')\to\Pi$ are isomorphic to $(\PP^{1}_{\bar{k}},D)$. Let $\cP\to\cG$ be the universal family of $(\PP^{1}_{\bar{k}},D)$. By definition of the residual gerbe, we have a $2$-cartesian diagram \[\begin{tikzcd} \sqrt[2]{P,p}\rar\dar & \cP\dar \\ \Pi\rar & \cG. \end{tikzcd}\] Since $\Pi$ is a gerbe over $k$ and there is a factorization $\Pi\to\cG\to\spec k(D)\to\spec k$, we get that $k(D)=k$. By Lemma~\ref{lem:grp}, $\aut(f)$ has odd index in $\aut(\PP^{1},D)$. Let $2a$ with $a$ odd be the order of $\aut(\PP^{1},D)$. Let $\bP$ be the compression of $(\PP^{1}_{\bar{k}},D)$; by the diagram above there is a natural induced map $P\to \bP$ of degree $a$. Since $a$ is odd and $P$ is a non-trivial rational curve, it follows that $\bP$ is a non-trivial rational curve, since otherwise $P$ would have a divisor of odd degree. By Corollary~\ref{cor:cyclic}, we get that $\aut(\PP^{1},D)$ is cyclic of even order and hence $(\PP^{1}_{\bar{k}},D)$ is not defined over $k$ by Theorem~\ref{thm:divisors}. \end{proof} \section{Generalizations of results of A. Marinatto and B. Huggins} The following lemma connects our results to those of Marinatto. \begin{lemma}\label{lem:compoint} Let $k$ be a field and $D\subset\PP^{1}_{\bar{k}}$ an effective, reduced divisor of degree $n\ge 3$ on $\PP^{1}_{\bar{k}}$. The following are equivalent.
\begin{itemize} \item $D$ descends to a divisor of $\PP^{1}$ over the field of moduli. \item The universal family $\cP\to\cG$ has a rational point. \item The compression $\bP$ has a rational point. \end{itemize} \end{lemma} \begin{proof} By base change, we may assume that $k$ is the field of moduli. If $D$ descends to a divisor of $\PP^{1}$ over $k$, then by definition there is a morphism $\PP^{1}_{k}\to\cP$, hence the universal family $\cP$ has a rational point, which in turn implies that the compression $\bP$ has a rational point. Now assume that $\bP(k)$ is non-empty, we want to show that $D$ descends to a divisor of $\PP^{1}$ over $k$. Assume first that $k$ is finite. By \cite[Theorem 8.1]{diproietto-tonini-zhang}, $\cG(k)$ is non-empty, hence there exists a model $(P,D_{0})$ of $(\PP^{1}_{\bar{k}},D)$ over $k$. Since $k$ is finite, then $P=\PP^{1}_{k}$. Assume now that $k$ is infinite. As shown in the proof of \cite[Theorem 5.4]{giulio-angelo-moduli}, there exists an open subset $U\subset\cP$ which is a scheme. Since $k$ is infinite and $U(k)\subset\bP(k)$ is open, then $U(k)$ is non-empty, hence there exists a rational point $p\in \cP(k)$. The image of $p$ in $\cG(k)$ gives, by definition of $\cG$ and $\cP$, a model $(P,D_{0})$ of $(\PP^{1}_{\bar{k}},D)$ over $k$ such that $P(k)\neq\emptyset$. \end{proof} \begin{theorem}[{\cite[Theorem 1]{marinatto}}]\label{thm:marinatto1} Let $k$ be a field and $D\subset\PP_{\bar{k}}^{1}$ a reduced, effective divisor of $\PP^{1}_{\bar{k}}$ of degree $n\ge 3$. If $n$ is odd, then $D$ descends to a divisor of $\PP^{1}$ over the field of moduli. \end{theorem} \begin{proof} Thanks to \cite[Proposition 4.1]{giulio-angelo-moduli} and Corollary~\ref{cor:cha2}, we may assume that $k$ is infinite with $\operatorname{char} k\neq 2$. By base change, we can also assume that $k$ is the field of moduli of $(\PP_{\bar{k}}^{1},D)$. 
Let $\bP$ be the compression and $\bD\subset\bP$ the compressed divisor; then $\bD$ is a finite étale scheme over $k$. By Lemma~\ref{lem:compoint} it is enough to show that, if $\bP(k)=\emptyset$, then $n$ is even. If $p,q\in \bD(k^{s})=\bD(\bar{k})$ are in the same Galois orbit, then the fibers of $D\to \bD_{\bar{k}}=D/\aut(\PP^{1},D)$ over $p$ and $q$ have the same cardinality. Since $\bP$ is non-trivial and $\bD$ is étale over $k$, the Galois orbits of $\bD(k^{s})$ have even cardinality. It follows that $D$ has even cardinality, too. \end{proof} \begin{proposition}[{\cite[Proposition 2.12]{marinatto}}]\label{prop:marinatto2} Let $k$ be a field and $D\subset\PP^{1}_{\bar{k}}$ an effective, reduced divisor of degree $4$. Then $D$ descends to a divisor of $\PP^{1}$ over the field of moduli. \end{proposition} \begin{proof} For every such divisor, there is a copy of $\ZZ/2\times\ZZ/2$ inside $\aut(\PP^{1}_{\bar{k}},D)$, hence the compression has a rational point by Proposition~\ref{prop:cyclicgerbe}. We conclude by Lemma~\ref{lem:compoint}. \end{proof} \begin{theorem}[{\cite[Theorem 5.3]{huggins}}] Let $k$ be a field of characteristic $\neq 2$ and $H$ a hyperelliptic curve over $\bar{k}$ with hyperelliptic involution $\iota$. If $H$ is not defined over its field of moduli, then $\mathop{\underline{\mathrm{Aut}}}\nolimits(H)/\langle\iota\rangle$ is cyclic of order prime to $\operatorname{char}k$. \end{theorem} \begin{proof} Let $D\subset\PP^{1}_{\bar{k}}=H/\iota$ be the branch divisor; we have $\aut(H)/\langle\iota\rangle=\aut(\PP^{1}_{\bar{k}},D)$. Let $\cG_{H},\cP_{H},\cG_{D},\cP_{D}$ be the residual gerbes and the universal families of $H$ and $D$ respectively, cf. \cite[\S 3, \S 5]{giulio-angelo-moduli}; there is a cartesian diagram \[\begin{tikzcd} \cP_{H}\rar\dar & \cP_{D}\dar \\ \cG_{H}\rar & \cG_{D} \end{tikzcd}\] where the morphism $\cG_{H}\to\cG_{D}$ is an abelian gerbe banded by $\mu_{2}$.
It follows that $(\PP^{1}_{\bar{k}},D)$ and $H$ have equal fields of moduli and equal compression $\bP$. Thanks to \cite[Proposition 4.1]{giulio-angelo-moduli}, we may assume that $k$ is infinite. Since $H$ is not defined over the field of moduli, i.e. $\cG_{H}(k)=\emptyset$, and there is a rational map $\bP\dashrightarrow\cG_{H}$ \cite[Proof of Theorem 5.4]{giulio-angelo-moduli}, we get that $\bP(k)=\emptyset$. We conclude by applying Corollary~\ref{cor:cyclic} to $\bP\dashrightarrow\cG_{D}$. \end{proof} \bibliographystyle{amsalpha}
\section{Introduction} Increasingly complex and diverse ML models, as well as hardware/software stacks, are being proposed each day. This has led to a vibrant but overwhelming ML landscape: in~\cite{dean2018new} the authors show that the number of ML \textit{arXiv} papers published has outpaced Moore's law. Researchers thus struggle to keep up to date and to analyze, study, and adapt these solutions. This is exacerbated by the complicated and sometimes non-reproducible procedures for ML evaluation. To facilitate and accelerate the adoption of ML innovations, ML evaluation must be easily reproducible, and a better way of sharing ML artifacts is needed. The current practice of sharing ML artifacts is to publish source code to repositories such as GitHub. Model authors post their ad-hoc code and some documentation. We often find that authors fail to reveal critical information for others to reproduce their results. Some authors also release Dockerfiles. However, Docker only guarantees the software stack; it does not help model users examine or modify the artifact to adapt it to other environments. In short, one often fails to reproduce artifact authors' claims, not to mention adapt the models to one's own use. This paper discusses the challenges of ML evaluation and benchmarking, and outlines common pitfalls model users often encounter when attempting to replicate model authors' claims. In~\cite{dakkak2018mlmodelscope}, we present MLModelScope\xspace, an open-source ML evaluation system which lowers the cost and effort of performing model evaluation and benchmarking, making it easier to reproduce, evaluate, and analyze accuracy, performance, and resilience claims of ML artifacts.
This paper documents some of the lessons we learned while developing MLModelScope\xspace, and aims to inform both model authors, on the critical information they must reveal for others to reproduce their claims, and system developers, on how to benchmark or design an ML system in a reproducible manner. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{figures/cpu_gpu.pdf} \caption{ResNet\_v1\_50 using TensorFlow 1.13 on CPU and GPU systems with varying batch sizes.} \label{fig:cpu_v_gpu} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{figures/volta_pascal.pdf} \caption{POWER8 with Pascal GPU and NVLink vs X86 with Volta for a ``cold-start'' inference using Caffe AlexNet for batch size $64$. The color coding of layers and runtime functions signifies that they have the same kernel implementation, but does not imply that the parameters are the same.} \label{fig:alexnet_detail} \end{figure*} \section{Challenges and Pitfalls} Complicated and sometimes non-reproducible procedures for evaluating ML artifacts are currently a big hurdle to research adoption. The lack of standard and efficient systems for specifying and provisioning ML evaluations is the main cause of this \textit{pain point}. There are many factors that must work in unison within an ML model workflow, including hardware, programming language, pre/post-processing, model, dataset, software stack, and hardware configurations. Researchers who publish and share ML artifacts are often unaware of some of these factors, and fail to reveal the information critical for others to reproduce their results. In the process of developing MLModelScope\xspace we identified a few common pitfalls and handled them in the model manifest specification and the platform's design. This section details the factors that affect ML evaluation, how the pitfalls arise, and provides suggested solutions.
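The factors listed above can be pinned down in a single self-describing record. A minimal, hypothetical sketch in Python follows; the field names are illustrative and are not MLModelScope\xspace's actual manifest schema.

```python
# Hypothetical evaluation "manifest": every pitfall-prone factor from the
# text above is recorded explicitly so a run can be reproduced.
manifest = {
    "model": {"name": "Inception-v3", "format": "TensorFlow frozen graph"},
    "preprocessing": {
        "color_mode": "RGB",                 # vs. BGR (Caffe/OpenCV default)
        "layout": "NHWC",                    # vs. NCHW
        "crop_fraction": 0.875,
        "resize": {"size": [299, 299], "interpolation": "bilinear"},
        "normalize": {"mean": 127.5, "stddev": 127.5, "domain": "float"},
    },
    "software": {"framework": "TensorFlow 1.13", "compiled_from_source": True},
    "hardware": {"gpu": "V100", "interconnect": "PCIe"},
}

def missing_factors(m):
    """Return which of the top-level factor groups are left unspecified."""
    required = {"model", "preprocessing", "software", "hardware"}
    return sorted(required - set(m))
```

A checker like `missing_factors` can then reject an evaluation request before it runs with silently defaulted settings.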
\subsection{Hardware} Different hardware architectures can result in varying performance and accuracy, since system and ML libraries leverage features of the hardware architecture. \pitfall{ Pitfall 1: Only look at part of the hardware, not the entire system. E.g., inference on a Volta GPU must be faster than that on a Pascal GPU.} Figure~\ref{fig:cpu_v_gpu} compares inference performance across systems. Volta (V100) is faster than Pascal (P100) in this case. One often assumes this to be always true. However, looking only at the GPU or CPU compute sections when comparing performance is a common pitfall. Figure~\ref{fig:alexnet_detail} shows that a Pascal system can perform better than a Volta system because of a faster CPU-GPU interconnect. One therefore should consider the entire system and its end-to-end latency under different workload scenarios when reporting system performance results. With MLModelScope\xspace's profiling capabilities, one can discern why there is a performance difference. Figure~\ref{fig:alexnet_detail} shows the layer and GPU kernel breakdown of the model inference on the two systems. We ``zoom into'' the longest running layer (\texttt{FC6}) and show the model inference choke point. The difference in model performance mainly comes from the \texttt{FC6} layer. On identifying this issue, we were able to look at the Caffe source code and observe that Caffe does lazy copy, meaning the layer weights get copied from CPU to GPU only when they are needed. For \texttt{FC6}, $144MB$ of weights needs to be transferred. As we can see in the GPU kernel breakdown, even though the V100 performs better for the \texttt{SGEMM} computation, with NVLink~\cite{nvlink} (faster than PCIe) between CPU and GPU the IBM P8 system achieves higher memory bandwidth and thus a $2\times$ speedup for the \texttt{FC6} layer. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{figures/language.png} \caption{Execution time (normalized to C/C++) vs.
batch size of Inception-v3 inference on CPU and GPU using TensorFlow with C++, Python using NumPy data types, and Python using native lists.} \label{fig:c_vs_python} \end{figure} \subsection{Programming Language} Core ML algorithms within frameworks are written in C/C++ for performance, and in practice low-latency inference uses C/C++. It is common for developers to use NumPy for numerical computation (NumPy arrays store their data in contiguous C buffers rather than as native Python objects). ML frameworks optimize the execution for NumPy arrays, and avoid memory copy overhead when interfacing with C/C++ code. \pitfall{Pitfall 2: Use the Python API to measure and report bare-metal benchmark results or to deploy latency-sensitive production code.} While no one claims Python to be as fast as C++, we find researchers believe that the glue code that binds Python to C++ takes negligible time. For example, benchmarks such as MLPerf are implemented in Python and report the latency and throughput of Python code. We show in Figure~\ref{fig:c_vs_python} that the performance difference between Python and C++ in model evaluation is not negligible, and one should use C++ for latency-sensitive production code or when reporting bare-metal benchmark results. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figures/colormode.png} \caption{Top 5 predictions using Inception-v3 with RGB or BGR color layout.} \label{fig:lemon} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth]{figures/layout.png} \caption{Top 1 predictions using Inception-v3 with \texttt{NCHW} or \texttt{NHWC} data layout.} \label{fig:platapus} \end{figure} \subsection{Pre/Post-Processing} Pre-processing transforms the user input into a form that can be consumed by the model. Post-processing transforms the model output into a form that can be evaluated using metrics or consumed by subsequent components in the application pipeline. The processing parameters, methods, and their order affect accuracy and performance.
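As a concrete instance of a processing parameter that silently changes the model input, consider center cropping (used with fraction $87.5\%$ in the Inception-v3 pipeline discussed below). The following sketch assumes a NumPy H$\times$W$\times$C image; the helper is illustrative, not taken from any particular framework.

```python
import numpy as np

def center_crop(img, fraction):
    """Center-crop an HxWxC image to the given fraction of its size."""
    h, w = img.shape[:2]
    ch, cw = int(h * fraction), int(w * fraction)
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]

img = np.arange(16 * 16 * 3, dtype=np.int32).reshape(16, 16, 3)

# With fraction 0.875 the border pixels never reach the model;
# a user who skips the crop feeds the model different data.
cropped = center_crop(img, 0.875)
assert cropped.shape == (14, 14, 3)
assert cropped[0, 0, 0] == img[1, 1, 0]  # the surviving top-left pixel shifted
```

The evaluation still runs either way; only the predictions (and hence the reported accuracy) change.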
\pitfall{Pitfall 3: Model authors typically fail to reveal some pre/post-processing details that are needed to reproduce their claims.} Among all the factors that affect model evaluation accuracy, pre/post-processing is the one that can result in the biggest differences. The input dimension of a model is usually reported by the model author, since without the right input dimensions the model evaluation does not run and gives an error. Even if the input dimension is not explicitly given, model users can inspect the model architecture to figure it out. \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth]{figures/pil_opencv.png} \caption{PIL vs OpenCV decoding implementations.} \label{fig:pil_vs_opencv_impl} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{figures/opencv_pil.png} \caption{Image decoding difference between PIL and OpenCV.} \label{fig:pil_vs_opencv_diff} \end{figure} However, there is some critical pre/post-processing information that, if not explicitly reported by the model authors, can easily lead model users into an incorrect evaluation setup and ``silent errors'' in accuracy --- the evaluation runs but the prediction results for some cases are incorrect. These ``silent errors'' are difficult to debug. Here we take computer vision models as an example and discuss what model users might struggle with when reproducing others' results. \subsubsection{Color Mode} Models are trained with decoded images that are in either RGB or BGR color mode. For legacy reasons, OpenCV decodes images in BGR mode by default and, consequently, both Caffe and Caffe2 use BGR. Other frameworks such as TensorFlow, PyTorch, and MXNet use RGB mode~\cite{caffebgr}. Figure~\ref{fig:lemon} shows the Inception-v3 inference results for the same image using different color modes, with everything else being the same.
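The red-blue swap can be reproduced with a toy example. This is a minimal sketch assuming NumPy arrays in channel-last layout; it mimics OpenCV's BGR ordering with a channel flip rather than calling OpenCV itself.

```python
import numpy as np

# A 1x2 "image": one pure-red pixel and one pure-blue pixel, in RGB order.
img_rgb = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# OpenCV-style decoding yields the channels reversed (BGR). Feeding this
# to a model trained on RGB silently swaps red and blue.
img_bgr = img_rgb[..., ::-1]

assert img_bgr[0, 0, 2] == 255           # red now lives in the last channel
assert not np.array_equal(img_rgb, img_bgr)

# The fix is a one-line channel flip before inference:
assert np.array_equal(img_bgr[..., ::-1], img_rgb)
```

Nothing in the pipeline errors out, which is exactly why the wrong color mode produces a silent accuracy drop rather than a crash.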
\begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{figures/cropping.pdf} \vspace{-10pt} \caption{Differences in the prediction results due to cropping using TensorFlow Inception-v3.} \label{fig:diff_cropping} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{figures/preprocessing.pdf} \caption{Differences due to order of operations for data type conversion and normalization using TensorFlow Inception-v3.} \label{fig:diff_output_process} \end{figure*} \subsubsection{Data Layout} The data layout for a two-dimensional image (to be fed into the model as tensors) is represented by four letters: \begin{itemize} \item N: Batch size, the number of inputs processed together by the model \item C: Channels, $3$ for computer vision models \item W: Width, the number of pixels in the horizontal dimension \item H: Height, the number of pixels in the vertical dimension \end{itemize} Models are trained with input in either \texttt{NCHW} or \texttt{NHWC} data layout. Figure~\ref{fig:platapus} shows the Top 1 inference results of TensorFlow Inception-v3 using different layouts for the same input image. The model was trained with the \texttt{NHWC} layout. As can be seen, the predictions are very different. \subsubsection{Image Decoding} It is typical for authors to use JPEG as the image data format (with ImageNet being stored as JPEG images). There are different decoding methods for JPEG. One usually uses \texttt{opencv.imread}, \texttt{PIL.Image.open}, or \texttt{tf.image.decode\_jpeg} to decode a JPEG image. TensorFlow uses libJPEG with either \texttt{INTEGER\_FAST} or \texttt{INTEGER\_ACCURATE} as the default (this varies across systems); PIL maps to the \texttt{INTEGER\_ACCURATE} method, while OpenCV may not use libJPEG at all. Even for the same method, ML libraries may have different implementations. For example, JPEG is stored on disk in YCbCr format, and the standard does not require bit-by-bit decoding accuracy.
The implementation is defined differently across libraries, as shown in Figure~\ref{fig:pil_vs_opencv_impl}. Figure~\ref{fig:pil_vs_opencv_diff} shows the difference between decoding an image using the Python Imaging Library (PIL) and OpenCV. We find that edge pixels (having high or low intensity) are not decoded consistently across libraries, even though these are the more interesting pixels for vision algorithms such as object detection. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/resizing_method.png} \caption{Top 5 predictions using Inception-v3 with nearest interpolation or bilinear interpolation for resizing.} \label{fig:resizing_method} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/format.png} \caption{Table of model inference formats for different frameworks.} \label{fig:data_format_table} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/alexnet.png} \caption{AlexNet performance difference across frameworks on Volta.} \label{fig:alexnet_machine_compare} \end{figure} \begin{figure*}[h] \centering \includegraphics[width=0.9\textwidth]{figures/timeline.pdf} \caption{Digging deep into AlexNet performance.} \label{fig:alexnet_framework_compare} \end{figure*} \subsubsection{Cropping} For image classification, accuracy is sometimes reported for cropped validation datasets. The cropping method and parameters are often overlooked by model evaluators, which results in different accuracy numbers. For Inception-v3, for example, the input images are center-cropped with fraction $87.5\%$ and then resized to $299\times299$. Figure~\ref{fig:diff_cropping} shows the effect of omitting cropping from pre-processing: (a) is the original image; (b) is the result of center cropping the image with $87.5\%$ and then resizing; (c) is the result of just resizing; (d) and (f) show the prediction results using the processed images from (b) and (c).
Intuitively, cropping differences are more pronounced for input images where the marginal regions are meaningful (e.g., paintings within frames). \subsubsection{Resizing Method} The image input size is fixed per model, but the resizing method is rarely described by model authors. Multiple interpolation methods are available (nearest, bilinear, Lanczos), and implementations of the same method can differ across libraries or frameworks. Figure~\ref{fig:resizing_method} shows the difference in TensorFlow Inception-v3 Top 5 predictions between using nearest interpolation and using bilinear interpolation for resizing. \subsubsection{Type Conversion and Normalization} After decoding, the image data is in bytes and is converted to FP32 (assuming an FP32 model) before being fed to the model. As part of the pre-processing, the input may also need to be normalized to have zero mean and unit variance ($\frac{pixel-mean}{stddev}$). Mathematically, float to byte conversion is $float\_to\_byte(x) = 255x$, and byte to float conversion is $byte\_to\_float(x) = x/255.0$. Because of programming language semantics, however, the executed behavior of float to byte conversion is $float\_to\_byte(x) = \floor{255x}$. We find that the order of operations for type conversion and normalization matters.
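A minimal NumPy sketch of why the order matters: subtracting the mean while the data is still in unsigned bytes wraps around for pixels darker than the mean. This is a simplified stand-in for the byte-domain vs float-domain paths, not the exact pipeline used in our experiments.

```python
import numpy as np

img = np.array([[200, 30, 128]], dtype=np.uint8)
mean, std = 127.5, 127.5

# (1) Normalize while the data is still uint8: 127.5 is not even
# representable in bytes, and the subtraction wraps around for dark
# pixels (30 - 127 becomes 159 in unsigned 8-bit arithmetic).
wrong = (img - np.uint8(127)).astype(np.float32) / std

# (2) Convert to float first, then normalize.
right = (img.astype(np.float32) - mean) / std

assert not np.allclose(wrong, right)
assert right[0, 1] < 0 < wrong[0, 1]   # the dark pixel even changed sign
```

Again, both orderings produce tensors of the correct shape and dtype, so the mistake surfaces only as degraded predictions.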
Figure~\ref{fig:diff_output_process} shows the image processing results using different orders of operations for $meanByte = stddevByte = 127.5$ and $meanFloat = stddevFloat = 0.5$, where: (a) is the original image, (b) is the result of reading the image in bytes and then normalizing it with the mean and standard deviation in bytes, $byte2float(\frac{imgByte - meanByte}{stddevByte})$, (c) is the result of reading the image in floats and then normalizing it with the mean and standard deviation in floats, $\frac{byte2float(imgByte) - meanFloat}{stddevFloat}$, and (d) is the difference between (b) and (c)\footnote{\small To increase the contrast of the differences on paper, we dilate the image (with radius $2$) and rescale its pixel values to cover the range between $0$ and $1$.}. The inference results of Figure~\ref{fig:diff_output_process} (b,c) are shown in Figure~\ref{fig:diff_output_process} (e,f). \subsection{Model and Data Formats} There are a variety of formats used by ML frameworks to store models and data on disk, for inference and training. Some frameworks define models as Protocol Buffers~\cite{protobuf} and others use custom data formats. Figure~\ref{fig:data_format_table} shows the model format used for inference by different frameworks. Some data formats such as TensorFlow's TFRecord~\cite{tfrecord} or MXNet's RecordIO~\cite{recordio} are optimized for static datasets. One can achieve a $7\times$ speedup with TFRecord and the TF Dataset Iterator API for ImageNet evaluation. \pitfall{Pitfall 4: Use an inappropriate format when measuring end-to-end performance.} \subsection{Software Stack} The major software components affecting reproducibility are the ML framework (TensorFlow, MXNet, PyTorch, etc.) and the underlying libraries (MKL-DNN, OpenBLAS, cuDNN, etc.). Both impact not only the performance but also the accuracy of the model.
\pitfall{Pitfall 5a: If frameworks A and B use the same cuDNN and other ML libraries, they give the same performance and accuracy for the same model.} Figure~\ref{fig:alexnet_machine_compare} shows AlexNet performance across different frameworks. All the frameworks are compiled with GCC 5.5 and use the same software stack (cuDNN and other libraries), but the performance is very different. With MLModelScope\xspace, we can dig deeper into the inference processes of the frameworks to identify the bottlenecks and overheads of each framework. Figure~\ref{fig:alexnet_framework_compare} shows that ML layers across frameworks have different implementations or dispatch to different library functions. Take the \texttt{conv2} and the following \texttt{relu} layers, for example. In TensorRT, these two layers are merged together and are mapped to two \texttt{trt\_volta\_scudnn\_128x128\_relu\_small\_nn\_v1} kernels, while in the other three frameworks the two layers are not merged. The \texttt{conv2} layer in MXNet is also executed very differently from the other frameworks. \pitfall{Pitfall 5b: The same version of a framework gives the same performance.} Framework installation and compilation affect model performance. Benchmark results should report numbers with frameworks installed from source (with optimal compilation flags) for fair comparison. Researchers usually have the choice to install an ML framework from source or from a binary. Even though installation from a binary is much easier, binary versions of a framework may not use the CPU vectorization instructions (e.g., AVX, AVX2). For example, TensorFlow 1.13 with vectorization is $40\%$ faster than without for Inception-v3 using batch size 1 on CPUs. Compilation options for the framework and the underlying libraries matter.
For example, we compile Caffe using GCC 5.5 with (1) the Caffe-default compiler flags in Figure~\ref{fig:caffe_install_opts}, and (2) the Caffe-default flags plus the Caffe-Single-Threaded-No-SIMD environment variables in Figure~\ref{fig:caffe_install_opts}. Figure~\ref{fig:caffe_install_diff_plot} shows the SphereFace-20 performance comparison on an Intel NUC system between the two Caffe installations. As can be seen, Caffe-default is almost $2\times$ more performant than the other due to multithreading and vectorization. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/compileflags.png} \caption{Caffe installation options.} \label{fig:caffe_install_opts} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/compilation.png} \caption{Performance comparison between Caffe compiled with different options.} \label{fig:caffe_install_diff_plot} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/multithreading.png} \caption{TensorFlow inference latency varying the number of threads.} \label{fig:tensorflow_vary_num_threads} \end{figure} \subsection{Hardware Configuration} Hardware configurations, such as CPU scaling, multi-threading, and vectorization, affect model evaluation performance. \pitfall{Pitfall 6: Always use the default hardware configurations without tuning the system for performance.} Take multi-threading, for example. Modern CPUs have simultaneous multi-threading (also known as SMT or Hyper-Threading), which allows multiple threads to run on the same core with the idea that each thread will not fully utilize the ALUs. As a study, we vary the number of threads used by TensorFlow through the \texttt{intra\_op\_parallelism\_threads} and \texttt{inter\_op\_parallelism\_threads} variables. The defaults for the two variables are the number of logical CPU cores and are effective for systems ranging from CPUs with $4$ to $70+$ combined logical cores~\cite{tfperf}.
Figure~\ref{fig:tensorflow_vary_num_threads} shows Inception-v3 performance using different numbers of threads. On the system used, which has 16 logical cores with 2-way SMT, performance varies with the number of threads, and the best performance is achieved using 16. \section{Conclusion} This article discusses some of the common pitfalls that one can encounter when trying to reproduce model evaluations. To address the outlined challenges and pitfalls, we propose MLModelScope\xspace in~\cite{dakkak2018mlmodelscope}, an effective system solution for specifying and running model evaluations. MLModelScope\xspace addresses the challenges and pitfalls with a model specification (referred to as the model manifest), a distributed runtime to set up the required environments for running model evaluations, and an aggregation and summarization pipeline that captures application traces and explains the model execution process. To learn more, visit \href{http://mlmodelscope.org}{mlmodelscope.org}.
\section{Introduction} Increasing the accuracy of the approximations to the exchange-correlation energy functional $E_{xc}[\rho]$ of Kohn-Sham (KS) density functional theory (DFT) is of crucial importance for research areas ranging from theoretical chemistry and biochemistry to solid-state and surface physics (for a recent review, see, e.g., Ref.~\onlinecite{CohMorYan-CR-12}). A piece of {\em exact} information on $E_{xc}[\rho]$ is provided by the strong-interaction limit of DFT, in which the coupling constant of the electron-electron interaction becomes infinitely large while the one-electron density $\rho({\bf r})$ does not change.\cite{Sei-PRA-99,SeiPerLev-PRA-99,SeiPerKur-PRA-00,SeiPerKur-PRL-00} This defines a fictitious system with the same density as the physical one and the maximum possible correlation between the relative electronic positions, useful for describing situations in which restricted Kohn-Sham DFT encounters problems, such as low-density many-particle scenarios and the breaking of the chemical bond.\cite{GorSeiVig-PRL-09,GorSei-PCCP-10} The exact mathematical structure of this limit has been uncovered only recently,\cite{SeiGorSav-PRA-07,GorVigSei-JCTC-09,ButDepGor-PRA-12} and exact calculations (at least for simple systems) have started to become available.\cite{GorSeiVig-PRL-09,GorSei-PCCP-10,RasSeiGor-PRB-11} The aim of this paper is to make a step forward in the inclusion of this new piece of exact information into approximations to the exchange-correlation energy functional $E_{xc}[\rho]$. Previous attempts in this direction\cite{SeiPerLev-PRA-99,SeiPerKur-PRA-00,SeiPerKur-PRL-00} focused on global (i.e., integrated over all space) quantities, introducing size-consistency errors.
The exact solution of the strong-interaction limit, now available, makes accessible not only global, but also {\em local} quantities, from which it is easier to construct size-consistent approximations\cite{Bec-JCP-05,BecJoh-JCP-07,PerStaTaoScu-PRA-08} (for a critical review on size consistency of approximate energy density functionals see, e.g., Ref.~\onlinecite{GorSav-JPCS-08} and, especially, Ref.~\onlinecite{Sav-CP-09}). Local quantities are in general not uniquely defined (for a complete discussion see, e.g., Ref.~\onlinecite{BurCruLam-JCP-98}). Here we focus on the conventional, physically transparent definition in terms of the electrostatic energy of the exchange-correlation hole. We derive the exact expression for this energy density in the strong-interaction limit, and we evaluate it for small atoms and small model quantum dots, making comparisons with available approximations within the same definition (same {\em gauge}). We then discuss the idea of a local interpolation along the adiabatic connection by comparing energy densities in the physical case and in the weak- and strong-interaction limits. As a byproduct, our results allow us to analyze the local version of the Lieb-Oxford bound, a condition widely used to construct approximate exchange-correlation functionals. As is well known, the Lieb-Oxford bound is an exact condition\cite{Lie-PLA-79,LieOxf-IJQC-81,ChaHan-PRA-99} on the global $E_{xc}[\rho]$. Many non-empirical approximate functionals, however, use its local version, which is a sufficient but not necessary condition to ensure the global bound (see, e.g., Refs.~\onlinecite{PerBurErn-PRL-96,TaoPerStaScu-PRL-03} and~\onlinecite{HauOdaScuPerCap-JCP-12}). Our analysis strongly suggests that the local version of the Lieb-Oxford bound should be formulated in the gauge of the electrostatic potential of the exchange-correlation hole.
This, in turn, implies that the local bound is certainly violated at least in the tail region of a molecular or atomic density, and in the bond region of a stretched molecule. The paper is organized as follows. In the next Sec.~\ref{sec_inter} we review the DFT adiabatic connection as a tool to build approximate $E_{xc}[\rho]$, highlighting the role of the strong-interaction limit and discussing the size-consistency problem of interpolations based on global quantities. In Sec.~\ref{sec_energydens} we discuss energy densities in general and introduce the gauge of the electrostatic energy associated with the exchange-correlation hole. The exact expression for this quantity in the strong-interaction limit is derived in Sec.~\ref{sec_strong}, where approximations are also discussed. In particular, we analyze the ``point-charge plus continuum'' (PC) model functional,\cite{SeiPerKur-PRA-00} showing that it is meant to be an approximation to the energy density within the same conventional definition considered here. Energy densities along the adiabatic connection are discussed and analyzed in Sec.~\ref{sec_enedenslambda}. We then use our results to discuss the local version of the Lieb-Oxford bound in Sec.~\ref{sec_LO}. The last Sec.~\ref{sec_conc} is devoted to conclusions and perspectives. Finally, a simple illustration for the case of the uniform electron gas is reported in the Appendix.
\section{Interpolation along the adiabatic connection} \label{sec_inter} Within the framework of the adiabatic connection\cite{HarJon-JPF-74, LanPer-SSC-75, GunLun-PRB-76} the exchange-correlation energy can be expressed by the coupling constant integration \begin{align} E_{xc}[\rho]&=\int_0^1 d\lambda\big\langle\Psi_\lambda[\rho]\big\vert\hat{V}_{ee}\big\vert\Psi_\lambda[\rho]\big\rangle-U[\rho]\label{eq:cci}\\ &=:\int_0^1 d\lambda\; W_\lambda[\rho], \end{align} with $\Psi_\lambda[\rho]$ being the ground state wave function of a fictitious system with scaled electron-electron interaction \begin{align} \hat{H}=\hat{T} +\lambda\hat{V}_{ee} +\hat{V}_{ext}^\lambda. \label{eq:Hlambda} \end{align} The external potential $\hat{V}_{ext}^\lambda$ is adjusted to keep the density $\rho_\lambda({\bf r})$ obtained from $\Psi_\lambda[\rho]$ equal to the physical density, $\rho_\lambda({\bf r})=\rho_{1}({\bf r})\equiv \rho({\bf r})$. In the weak-interaction limit ($\lambda=0$) we encounter the Kohn-Sham\cite{KohSha-PR-65} reference system and the integrand $W_0[\rho]$ becomes the exact exchange energy $E_x[\rho]$. In the strong-interaction limit ($\lambda\rightarrow\infty$) a reference system within the strictly correlated electrons concept (SCE)\cite{SeiPerLev-PRA-99, Sei-PRA-99, SeiGorSav-PRA-07,GorVigSei-JCTC-09} can be defined.
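The coupling-constant integration in Eq.~\eqref{eq:cci} follows from the Hellmann-Feynman theorem. Defining $F_\lambda[\rho]=\big\langle\Psi_\lambda[\rho]\big\vert\hat{T}+\lambda\hat{V}_{ee}\big\vert\Psi_\lambda[\rho]\big\rangle$, the stationarity of $\Psi_\lambda[\rho]$ at fixed density implies
\begin{align}
\frac{d F_\lambda[\rho]}{d\lambda}=\big\langle\Psi_\lambda[\rho]\big\vert\hat{V}_{ee}\big\vert\Psi_\lambda[\rho]\big\rangle,
\end{align}
so that, with $F_0[\rho]=T_s[\rho]$ (the Kohn-Sham kinetic energy) and $F_1[\rho]$ evaluated on the physical wave function,
\begin{align}
E_{xc}[\rho]=F_1[\rho]-F_0[\rho]-U[\rho]=\int_0^1 d\lambda\,\big\langle\Psi_\lambda[\rho]\big\vert\hat{V}_{ee}\big\vert\Psi_\lambda[\rho]\big\rangle-U[\rho],
\end{align}
which is Eq.~\eqref{eq:cci}.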
The asymptotic expansions of $W_{\lambda}[\rho]$ are \begin{align} W_{\lambda\rightarrow 0}[\rho]=E_x[\rho]+2\,\lambda\, E_c^{\rm GL2}[\rho]+O(\lambda^2)\label{eq:expl0}\\ W_{\lambda\rightarrow\infty}[\rho]=W_\infty[\rho]+\frac{W'_\infty[\rho]}{\sqrt{\lambda}}+O(\lambda^{-p}),\label{eq:explinf} \end{align} where $E_c^{\rm GL2}[\rho]$ is the correlation energy given by second-order G\"{o}rling-Levy perturbation theory (GL2)\cite{GorLev-PRB-93} and $p\geq 5/4$.\cite{GorVigSei-JCTC-09} Exact expressions for the functionals $W_\infty[\rho]$ and $W'_\infty[\rho]$ are given, respectively, in Refs.~\onlinecite{SeiGorSav-PRA-07} and \onlinecite{GorVigSei-JCTC-09}. Expression \eqref{eq:cci} for the exchange-correlation energy is exact as long as the exact dependence of the integrand on $\lambda$ is known.\cite{TeaCorHel-JCP-09,TeaCorHel-JCP-10} As this is obviously not the case, Eq.~\eqref{eq:cci} still enables approximate exchange-correlation energies to be obtained by modeling $W_\lambda[\rho]$ along the adiabatic connection. A first attempt towards an approximate $W_\lambda[\rho]$ was undertaken by Becke,\cite{Bec-JCP-93a} who introduced the half-and-half functional. A model is defined by assuming a linear dependence of $W_\lambda[\rho]$ on $\lambda$ and setting $W_0[\rho]$ equal to exact exchange and $W_1[\rho]$ to LSDA exchange-correlation. This results in a functional with 50\% exact exchange and 50\% LSDA exchange-correlation. Further adjustment of the portion of exact exchange by semi-empirical arguments gives rise to hybrid functionals like B3LYP.\cite{Bec-JCP-93, SteDevChaFri-JPC-94, LeeYanPar-PRB-88, VosWilNus-CJP-80} The adiabatic connection may also be used for the construction of non-empirical hybrids, as in Ref.~\onlinecite{BurErnPer-CPL-97}. Here a model for $W_\lambda[\rho]$ is defined consisting of two intersecting straight lines fixed by exact exchange, GGA exchange, and GGA exchange-correlation.
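In the half and half construction, the linear model for the integrand makes the coupling-constant integration in Eq.~\eqref{eq:cci} elementary:
\begin{align}
W_\lambda[\rho]=W_0[\rho]+\lambda\big(W_1[\rho]-W_0[\rho]\big)
\quad\Longrightarrow\quad
\int_0^1 d\lambda\, W_\lambda[\rho]=\frac{1}{2}W_0[\rho]+\frac{1}{2}W_1[\rho],
\end{align}
which is precisely the 50\% mixture of exact exchange ($W_0$) and LSDA exchange-correlation (approximating $W_1$) mentioned above.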
Ernzerhof\cite{Ern-CPL-96} introduced a curved model by proposing a Pad\'{e} interpolation for the integrand, using as input exact exchange and $E_c^{\rm GL2}[\rho]$ in the weak-interaction limit and GGA exchange-correlation for $\lambda=1$. The models mentioned (i.e., all except B3LYP) have in common that exact exchange is employed in the weak-interaction limit and that, for the physical situation with $\lambda=1$, information from approximate DFT (DFA) is used. The argument for the recourse to exact exchange is that DFA exchange works well only if combined with DFA correlation, due to error cancellation. Consequently, DFA exchange-correlation can be used for the physical case, where exchange and correlation are employed together. As error cancellation in DFA exchange-correlation might not be satisfactory, a continuation of the ansatz of Ernzerhof\cite{Ern-CPL-96} is possible by taking DFA exchange-correlation at some intermediate $\lambda$ instead of $\lambda=1$. This would allow one to balance the exchange error against the correlation error. Along this line, Mori-S\'{a}nchez, Cohen and Yang\cite{MorCohYan-JCPa-06} constructed their MCY1 functional: a Pad\'{e} interpolation is undertaken with exact exchange and meta-GGA exchange input in the weak-interaction limit and meta-GGA exchange-correlation for an intermediate $\lambda$ (chosen semi-empirically). The discussed models clearly outperform the stand-alone DFAs they are based on.\cite{Bec-JCP-93a,BurErnPer-CPL-97,Ern-CPL-96,MorCohYan-JCPa-06} Nonetheless, the employment of DFA quantities in their construction can lead to serious misbehavior in the curvature of the integrand, as demonstrated by Peach, Teale and Tozer\cite{PeaTeaToz-JCP-07} by comparison of the MCY1 approximation with accurate quantities along the adiabatic connection (see, e.g., Fig. 3 in Ref.~\onlinecite{PeaTeaToz-JCP-07}).
In the same paper the authors show that accurate exchange-correlation energies can be recovered by establishing an interpolation with accurate full-CI ingredients. A model that avoids unfavorable DFA bias is the interaction strength interpolation (ISI).\cite{SeiPerLev-PRA-99, Sei-PRA-99, SeiPerKur-PRA-00, GorVigSei-JCTC-09} Here information from the weak-interaction limit is employed, namely exact exchange and GL2, together with information from the strong-interaction limit through the point-charge plus continuum (PC) model, which provides approximate expressions for $W_\infty[\rho]$ and $W'_\infty[\rho]$. The $\lambda$ dependence of $W_\lambda[\rho]$ is then modeled by an interpolation between the two limits. Nowadays, the functionals $W_\infty[\rho]$ and $W'_\infty[\rho]$ can be accurately computed within the SCE many-electron formalism.\cite{SeiGorSav-PRA-07, GorVigSei-JCTC-09} Refs.~\onlinecite{SeiGorSav-PRA-07} and \onlinecite{GorVigSei-JCTC-09} compare the PC solutions with the exact SCE values for small atoms: while $W^{\rm PC}_\infty[\rho]$ is a very reasonable approximation to its exact counterpart,\cite{SeiGorSav-PRA-07} the original $W'^{\rm PC}_\infty[\rho]$ turned out to be much less accurate.\cite{GorVigSei-JCTC-09} The exact results could be used to propose a revised PC approximation $W'^{\rm revPC}_\infty[\rho]$ with accuracy similar to that of $W^{\rm PC}_\infty[\rho]$. Further comparison is undertaken in Sec.~\ref{sec:pc} of this paper for a more refined quantity, the local energy density as defined in the next sections. Although free of any DFA bias (if we use exact input quantities), an unpleasant feature of the ISI is the violation of size consistency. This is due to the nonlinear way in which the (size-consistent) ingredients $W_0[\rho]$, $W'_0[\rho]$, $W_\infty[\rho]$ and $W'_\infty[\rho]$ enter the interpolation.
For example, the revised ISI (which behaves better in the $\lambda\to\infty$ limit than the original ISI) reads\cite{GorVigSei-JCTC-09} \begin{align} W_\lambda^{\rm revISI}[\rho]=\frac{\partial}{\partial\lambda}\left(a[\rho]\lambda+\frac{b[\rho]\lambda}{\sqrt{1+c[\rho]\lambda}+d[\rho]}\right) \end{align} where $a$, $b$, $c$ and $d$ are nonlinear functions of $W_0[\rho]$, $W'_0[\rho]$, $W_\infty[\rho]$ and $W'_\infty[\rho]$, determined by imposing the asymptotic expansions of Eqs.~\eqref{eq:expl0} and \eqref{eq:explinf}: \begin{eqnarray} a[\rho] & = & W_{\infty}[\rho]\\ b[\rho] & = & -\frac{8\,E_c^{\rm GL2}[\rho] W'_{\infty}[\rho]^2}{(E_x[\rho]-W_{\infty}[\rho])^2} \\ c[\rho] & = & \frac{16\,E_c^{\rm GL2}[\rho]^2 W'_{\infty}[\rho]^2}{(E_x[\rho]-W_{\infty}[\rho])^4} \\ d[\rho] & = & -1-\frac{8\,E_c^{\rm GL2}[\rho] W'_{\infty}[\rho]^2}{(E_x[\rho]-W_{\infty}[\rho])^3}. \end{eqnarray} Notice that the lack of size consistency is shared by all functionals in which the exact exchange energy (or any global energy) enters in a nonlinear way; this is thus also the case, for example, of MCY1. As a final remark on the revISI functional, we note that if one makes the approximation $E_c^{\rm GL2}\approx E_c^{\rm MP2}$, it can be viewed as a double-hybrid functional (see, e.g., Refs.~\onlinecite{Gri-JCP-06,ShaTouSav-JCP-11,BreAda-JCP-11,TouShaBreAda-JCP-11}). With respect to available double hybrids, the revISI lacks size consistency, but it has the advantage of being able to deal with the small-gap systems that are problematic for perturbation theory. The practical impact of the lack of size consistency of the revISI functional still needs to be tested. \section{Energy densities: definitions} \label{sec_energydens} A possible way to recover size consistency in the ISI framework is to use a {\em local} integrand in Eq.~\eqref{eq:cci}: \begin{align} E_{xc}[\rho]=\int_0^1 d\lambda \int d {\bf r} \,\rho({\bf r})\,w_\lambda[\rho]({\bf r}). 
\label{eq:enrgdens} \end{align} The idea is then to build a local model, $w^{\rm ISI}_\lambda[\rho]({\bf r})$, by interpolating between the $\lambda\to 0$ and the $\lambda\to\infty$ limits. As said, the energy density $w_\lambda[\rho]({\bf r})$ is not uniquely defined, so that an important requirement here is that the input local quantities in the weak- and in the strong-interaction limits are defined in the same way (same gauge). One of the most widely used definitions of the energy density in DFT is in terms of the exchange-correlation hole (see, e.g., Refs.~\onlinecite{Bec-JCP-05,BecJoh-JCP-07} and \onlinecite{PerStaTaoScu-PRA-08}) $h_{xc}^{\lambda}({\bf r},{\bf r}')$, \begin{equation} w_\lambda[\rho]({\bf r})=\frac{1}{2}\int \frac{h_{xc}^\lambda({\bf r},{\bf r}')}{|{\bf r}-{\bf r}'|}\,d{\bf r}', \label{eq:defenergydensity} \end{equation} where \begin{equation} h_{xc}^{\lambda}({\bf r},{\bf r}')=\frac{P_2^\lambda({\bf r},{\bf r}')}{\rho({\bf r})}-\rho({\bf r}'), \label{eq_hxclambda} \end{equation} and the pair density $P_2^\lambda({\bf r},{\bf r}')$ is obtained from the wave function $\Psi_{\lambda}[\rho]$ of Eqs.~\eqref{eq:cci}-\eqref{eq:Hlambda}: \begin{eqnarray} & & P_2^\lambda({\bf r},{\bf r}')= N(N-1)\times \nonumber \\ & & \sum_{\sigma_1...\sigma_N}\int|\Psi_\lambda({\bf r}\sigma_1,{\bf r}'\sigma_2,{\bf r}_3\sigma_3,...,{\bf r}_N\sigma_N)|^2d{\bf r}_3...d{\bf r}_N.\qquad \label{eq_P2} \end{eqnarray} In the definition of Eq.~\eqref{eq:defenergydensity}, $w_\lambda[\rho]({\bf r})$ is the electrostatic potential of the exchange-correlation hole (a negative charge distribution normalized to $-1$) around a reference electron at ${\bf r}$. This quantity at the physical coupling strength $\lambda=1$ (plus the Hartree potential) has also been called $v_{\rm cond}({\bf r})$ in the literature (see, e.g., Ref.~\onlinecite{BuiBaeSni-PRA-89}). 
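Before moving on, the size-consistency issue that motivates this local approach can be made concrete with a short numerical sketch. Integrating the revISI form of the previous section over $\lambda$ gives $E_{xc}=a+b/(\sqrt{1+c}+d)$; the Python fragment below evaluates this for two hypothetical fragments A and B and for their non-interacting combination AB, whose ingredients are simply additive. The ingredient values are purely illustrative and do not correspond to any real system.

```python
import math

def revisi_exc(ex, ec_gl2, w_inf, wp_inf):
    """E_xc from the revISI interpolation, integrated over lambda.

    Since W_lambda = d/dlambda [a*l + b*l/(sqrt(1+c*l)+d)],
    the integral of W_lambda from 0 to 1 is the bracket evaluated at l=1.
    """
    x = ex - w_inf                                  # E_x - W_inf (> 0)
    a = w_inf
    b = -8.0 * ec_gl2 * wp_inf**2 / x**2
    c = 16.0 * ec_gl2**2 * wp_inf**2 / x**4
    d = -1.0 - 8.0 * ec_gl2 * wp_inf**2 / x**3
    return a + b / (math.sqrt(1.0 + c) + d)

def w_lambda0(ex, ec_gl2, w_inf, wp_inf):
    """Integrand at lambda -> 0; must reduce to exact exchange."""
    x = ex - w_inf
    b = -8.0 * ec_gl2 * wp_inf**2 / x**2
    d = -1.0 - 8.0 * ec_gl2 * wp_inf**2 / x**3
    return w_inf + b / (1.0 + d)

# Hypothetical ingredients (E_x, E_c^GL2, W_inf, W'_inf), in hartree.
A = (-0.5, -0.50, -1.0, 0.5)   # a strongly correlated toy fragment
B = (-2.0, -0.04, -2.4, 0.2)   # a weakly correlated toy fragment
AB = tuple(xa + xb for xa, xb in zip(A, B))  # ingredients are additive

e_a, e_b, e_ab = revisi_exc(*A), revisi_exc(*B), revisi_exc(*AB)
print(e_a + e_b, e_ab)   # the two differ: size consistency is violated
```

The endpoint identity $W_{\lambda\to0}=E_x$ is satisfied exactly by construction, while the last line exposes the size-consistency violation. One can also check from the scaling of $b$, $c$ and $d$ that for two *identical* fragments the error happens to cancel; it is unlike fragments, as in this sketch, that break additivity.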
The energy density $w_\lambda[\rho]({\bf r})$ in the $\lambda\to\infty$ limit in the gauge of Eq.~\eqref{eq:defenergydensity} is the central quantity of this paper: in the next section we will derive an exact expression for it using the strictly-correlated electron concept, and we will evaluate it for small atoms and quantum dots. Notice that the relevance of $w_\infty[\rho]({\bf r})$ for constructing a new generation of approximate $E_{xc}[\rho]$ has also been pointed out very recently by Becke.\cite{Bec-cecam-11} \section{Energy densities in the strong interaction limit} \label{sec_strong} \subsection{Exact}\label{sec:SCEexact} When $\lambda\to\infty$ the wave function $\Psi_\lambda[\rho]$ tends to the strictly-correlated electron state, $\Psi_{\lambda\to\infty}[\rho]\to\Psi_{\rm SCE}[\rho]$, with\cite{SeiGorSav-PRA-07,GorVigSei-JCTC-09} \begin{eqnarray} & & |\Psi_{\rm SCE}({\bf r}_1,...,{\bf r}_N)|^2 = \frac{1}{N!}\sum_{\mathcal{P}}\int d{\bf s} \frac{\rho({\bf s})}{N} \delta({\bf r}_1-{\bf f}_{\mathcal{P}(1)}({\bf s})) \nonumber \\ & & \times \delta({\bf r}_2-{\bf f}_{\mathcal{P}(2)}({\bf s}))...\delta({\bf r}_N-{\bf f}_{\mathcal{P}(N)}({\bf s})), \label{eq_PsiSCE} \end{eqnarray} where ${\bf f}_1,..,{\bf f}_N$ are ``co-motion functions'', with ${\bf f}_1({\bf r})\equiv{\bf r}$, and $\mathcal{P}$ denotes a permutation of $\{1,...N\}$. This means that the $N$ points ${\bf r}_1,...,{\bf r}_N$ in 3D space found upon simultaneous measurement of the $N$ electronic positions in the SCE state always obey the $N-1$ relations \begin{equation} {\bf r}_i={\bf f}_i({\bf r}_1)\qquad(i=2,...,N). \label{cofs} \end{equation} In other words, the position of one electron determines the positions of all the other $N-1$ electrons (limit of {\em strict correlation}). 
All the $N-1$ co-motion functions ${\bf f}_i({\bf s})$ satisfy the differential equation \begin{equation} \rho({\bf f}_i({\bf r}))d {\bf f}_i({\bf r})=\rho({\bf r})d {\bf r}, \label{eq_fdiff} \end{equation} which, together with the group properties\cite{SeiGorSav-PRA-07,ButDepGor-PRA-12} of the ${\bf f}_i({\bf r})$, ensures that the SCE wavefunction of Eq.~(\ref{eq_PsiSCE}) yields the given density $\rho({\bf r})$. Equation \eqref{eq_fdiff} also has a simple physical interpretation: since the position of one electron determines the positions of all the others, the probability of finding one electron in the volume element $d{\bf r}$ about ${\bf r}$ must be the same as that of finding the $i^{\rm th}$ electron in the volume element $d{\bf f}_i({\bf r})$ about ${\bf f}_i({\bf r})$. Inserting Eq.~\eqref{eq_PsiSCE} into Eq.~\eqref{eq_P2} we obtain for the pair density $P_2^{\lambda\to\infty}({\bf r}_1,{\bf r}_2)=P_2^{\rm SCE}({\bf r}_1,{\bf r}_2)$ in the strong-interaction limit \begin{equation} P_2^{\rm SCE}({\bf r}_1,{\bf r}_2)=\sum_{\substack{i,j=1 \\ i\neq j}}^N\int d{\bf s} \frac{\rho({\bf s})}{N}\delta({\bf r}_1-{\bf f}_i({\bf s}))\delta({\bf r}_2-{\bf f}_j({\bf s})), \label{eq_P2SCE} \end{equation} which also has a transparent physical meaning: two electrons can only be found at strictly correlated relative positions. We first compute \begin{equation} \int \frac{P_2^{\rm SCE}({\bf r},{\bf r}')}{|{\bf r}-{\bf r}'|}d{\bf r}'=\sum_{\substack{i,j=1 \\ i\neq j}}^N\int d{\bf s}\frac{\rho({\bf s})}{N}\frac{\delta({\bf r}-{\bf f}_i({\bf s}))}{|{\bf r}-{\bf f}_j({\bf s})|}, \label{eq_eleSCE1} \end{equation} where, in the right-hand-side, we have already integrated over the variable ${\bf r}'$. 
From the properties of the Dirac delta distribution and of the co-motion functions, Eq.~\eqref{eq_eleSCE1} becomes \begin{eqnarray} \int \frac{P_2^{\rm SCE}({\bf r},{\bf r}')}{|{\bf r}-{\bf r}'|}d{\bf r}' & = & \frac{1}{N}\sum_{\substack{i,j=1 \\ i\neq j}}^N \frac{\rho\left({\bf f}_i^{-1}({\bf r})\right)|{\rm det}\, \partial_\alpha f_{i,\beta}^{-1}({\bf r})|}{|{\bf r}-{\bf f}_j\left({\bf f}_i^{-1}({\bf r})\right)|} \nonumber \\ & = & \frac{\rho({\bf r})}{N}\sum_{\substack{i,j=1 \\ i\neq j}}^N \frac{1}{|{\bf r}-{\bf f}_j\left({\bf f}_i^{-1}({\bf r})\right)|}, \label{eq_doublesum} \end{eqnarray} where $|{\rm det}\, \partial_\alpha f_{i,\beta}^{-1}({\bf r})|$ (with $\alpha,\beta=x,y,z$) is the determinant of the Jacobian of the transformation ${\bf r}\to{\bf f}_i^{-1}({\bf r})$, and we have used the fact that all the ${\bf f}_i({\bf r})$ (and their inverses, which, by virtue of the group properties of the co-motion functions are also co-motion functions for the same configuration \cite{SeiGorSav-PRA-07,GorVigSei-JCTC-09}) satisfy Eq.~\eqref{eq_fdiff}. Now we can use once more the group properties of the co-motion functions to recognize that for all $i\neq j$ the function ${\bf f}_j\left({\bf f}_i^{-1}({\bf r})\right)$ must be another co-motion function with the exclusion of ${\bf f}_1({\bf r})= {\bf r}$ (the identity can arise only if $i=j$). The double sum in the last term of Eq.~\eqref{eq_doublesum} is then exactly equal to $N$ times a single sum over all the co-motion functions ${\bf f}_k({\bf r})$ with $k\ge 2$, so that \begin{equation} \int \frac{P_2^{\rm SCE}({\bf r},{\bf r}')}{|{\bf r}-{\bf r}'|}d{\bf r}'=\rho({\bf r})\sum_{k=2}^N\frac{1}{|{\bf r}-{\bf f}_k({\bf r})|}. 
\label{eq_vcondinf} \end{equation} Inserting Eq.~\eqref{eq_vcondinf} into Eqs.~\eqref{eq:defenergydensity}-\eqref{eq_hxclambda} we finally obtain \begin{equation} w_\infty({\bf r})=\frac{1}{2}\sum_{k=2}^N\frac{1}{|{\bf r}-{\bf f}_k({\bf r})|}-\frac{1}{2}v_H({\bf r}), \label{eq_endensSCE} \end{equation} where $v_H({\bf r})$ is the Hartree potential. Notice that in previous work the exact $W_\infty[\rho]$ was given as\cite{SeiGorSav-PRA-07,GorVigSei-JCTC-09,GorSei-PCCP-10} \begin{equation} W_\infty[\rho]=\frac{1}{2}\int d{\bf r}\, \frac{\rho({\bf r})}{N} \, \sum_{\substack{i,j=1 \\ i\neq j}}^N \frac{1}{|{\bf f}_i({\bf r})-{\bf f}_j({\bf r})|}-U[\rho], \end{equation} suggesting a corresponding energy density \begin{equation} \tilde{w}_\infty({\bf r})=\frac{1}{N}\sum_{i=1}^N\left(\frac{1}{2}\sum_{\substack{j=1 \\ j\neq i}}^N \frac{1}{|{\bf f}_i({\bf r})-{\bf f}_j({\bf r})|}-\frac{1}{2}v_H({\bf f}_i({\bf r}))\right). \label{eq_endensSCEold} \end{equation} Equations \eqref{eq_endensSCEold} and \eqref{eq_endensSCE} yield the same $W_\infty[\rho]$ when integrated with the density $\rho({\bf r})$, but are locally different. This reflects a general feature of the co-motion functions: any given energy density $w_\infty^a({\bf r})$ can always be transformed into a different energy density $w_\infty^b({\bf r})$ defined as \begin{equation} w_\infty^b({\bf r})=\frac{1}{N}\sum_{i=1}^N w_\infty^a({\bf f}_i({\bf r})). \label{eq_changegauge} \end{equation} When multiplied by the density $\rho({\bf r})$, $w_\infty^a({\bf r})$ and $w_\infty^b({\bf r})$ integrate to the same quantity, because all the co-motion functions (and their inverses, which are also co-motion functions) satisfy Eq.~\eqref{eq_fdiff}. Only Eq.~\eqref{eq_endensSCE} corresponds to the gauge of the exchange-correlation hole defined by Eqs.~\eqref{eq:defenergydensity}-\eqref{eq_P2}. 
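For $N=2$ electrons in a spherical density, Eq.~\eqref{eq_endensSCE} is particularly simple to evaluate: the single co-motion function places the second electron on the opposite side of the center, at the radius $f(r)$ where the cumulative electron number satisfies $N_e(f(r))=2-N_e(r)$, so that $w_\infty(r)=1/[2(r+f(r))]-v_H(r)/2$. The sketch below (Python) constructs $f(r)$ by bisection for the exponential model density $\rho(r)=(2/\pi)e^{-2r}$, a toy density chosen for illustration rather than an accurate atomic one.

```python
import math

def cum_num(r):
    """N_e(r) for rho(r) = (2/pi) exp(-2r): two 1s-type electrons."""
    return 2.0 * (1.0 - math.exp(-2.0 * r) * (1.0 + 2.0 * r + 2.0 * r * r))

def v_hartree(r):
    """Analytic Hartree potential of the same density."""
    return 2.0 / r - 2.0 * math.exp(-2.0 * r) * (1.0 / r + 1.0)

def comotion(r, lo=1e-12, hi=60.0):
    """Co-motion function f(r) from N_e(f) = 2 - N_e(r), by bisection."""
    target = 2.0 - cum_num(r)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cum_num(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def w_inf(r):
    """Eq. (eq_endensSCE) for N=2: the electrons sit on opposite sides."""
    return 0.5 / (r + comotion(r)) - 0.5 * v_hartree(r)

for r in (0.1, 0.5, 1.0, 2.0):
    print(r, comotion(r), w_inf(r))
```

As a consistency check, the co-motion function is an involution for $N=2$, $f(f(r))=r$, which the bisection reproduces to high accuracy.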
\subsection{Approximations: the PC model}\label{sec:pc} The point-charge-plus-continuum (PC) model\cite{Ons-JPC-39,SeiPerKur-PRA-00} is a physically sound approximation to the $\lambda\to\infty$ indirect electron-electron repulsion energy $W_\infty[\rho]$. The idea is to rewrite the indirect Coulomb interaction energy $W_\lambda[\rho]$ as the electrostatic energy $E_{es}[\Psi_\lambda,\rho]$ of a system of $N$ electrons in the state $\Psi_\lambda[\rho]$ embedded in a smeared background of positive charge $\rho_+({\bf r})=\rho({\bf r})$.\cite{SeiPerKur-PRA-00} In fact, this total electrostatic energy $E_{es}[\Psi_\lambda,\rho]$ is just the sum of the electron-electron repulsion energy, $E_{ee}=\langle\Psi_\lambda|\hat{V}_{ee}|\Psi_\lambda\rangle$, the electron-background attraction energy, $E_{eb}=-2U[\rho]$, and the background-background repulsion energy $E_{bb}=U[\rho]$, thus yielding exactly $E_{es}[\Psi_\lambda,\rho]=E_{ee}+E_{eb}+E_{bb}= W_\lambda[\rho]$. This relation is valid for every $\lambda$, but in the $\lambda\to\infty$ limit, when $\Psi_\lambda\to\Psi_{\rm SCE}$, we expect that the electrons minimize $E_{es}[\Psi_\lambda,\rho]$ by occupying relative positions that divide the space into neutral cells with possibly zero (or weak) lowest-order electrostatic multipole moments.\cite{SeiPerKur-PRA-00} The idea is then that for one of the SCE configurations $\{{\bf r},{\bf f}_2({\bf r}),...,{\bf f}_N({\bf r})\}$ we may approximate the indirect electron-electron repulsion by the sum of the electrostatic energies of all the cells (i.e., we neglect the cell-cell interaction in view of their neutrality and low multipole moments): \begin{equation} \epsilon_{es}({\bf r},{\bf f}_2({\bf r}),...,{\bf f}_N({\bf r}))\approx\sum_{i=1}^N E_{\rm cell}([\rho];{\bf f}_i({\bf r})), \label{eq_esofr} \end{equation} where $E_{\rm cell}([\rho];{\bf r}_i)$ is the electrostatic energy of the cell around an electron at position ${\bf r}_i$, equal to the sum of the attraction between 
the electron and the background contained in the cell and the background-background repulsion inside the cell.\cite{SeiPerKur-PRA-00} Notice that, for a given SCE configuration, the electrostatic energy $\epsilon_{es}({\bf r},{\bf f}_2({\bf r}),...,{\bf f}_N({\bf r}))$ of Eq.~\eqref{eq_esofr} is equal to $N\tilde{w}_\infty({\bf r})$, where $\tilde{w}_\infty({\bf r})$ is given in Eq.~\eqref{eq_endensSCEold}. The PC model is then trying to approximate $N\tilde{w}_\infty({\bf r})$ by constructing the electrostatic energy $E_{\rm cell}([\rho];{\bf r}_i)$ of a cell around the electron at position ${\bf r}_i$. However, and this is a crucial step for understanding the gauge of the PC model, once an approximation for $E_{\rm cell}([\rho];{\bf r}_i)$ has been built, the sum over the $N$ electrons on the right-hand side of Eq.~\eqref{eq_esofr} is replaced by $N E_{\rm cell}([\rho];{\bf r})$.\cite{SeiPerKur-PRA-00} In the original derivation of the PC model \cite{SeiPerKur-PRA-00} this step was seen as a further approximation. We now know, thanks to the exact SCE formulation, that this is not an approximation, but an exact feature of the $\lambda\to\infty$ limit, clarified in Eq.~\eqref{eq_changegauge}. Because of this transformation, the local electrostatic energy that the PC model is trying to approximate is then exactly the same as that of the exchange-correlation hole of Eq.~\eqref{eq_endensSCE}. It is important to stress that the PC cell {\em is not} an approximation to the exchange-correlation hole in the $\lambda\to\infty$ limit.\cite{SeiPerKur-PRA-00} However, we have now proved that its electrostatic energy (electron-background attraction plus background-background repulsion) {\em is} an approximation to the electrostatic potential of the exchange-correlation hole, Eq.~\eqref{eq:defenergydensity}. This concept is further clarified in the Appendix, where the case of the uniform electron gas at low density is treated explicitly. 
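The cell electrostatic energy invoked here can be checked explicitly in the most symmetric case: for a spherical cell of uniform density with the electron at its center, the electron-background attraction is $-3/(2r_s)$ and the background-background repulsion is $3/(5r_s)$, summing to $-9/(10 r_s)$, which with $r_s=(4\pi\rho/3)^{-1/3}$ is precisely the PC-LDA energy density introduced next. A small numerical sketch by radial quadrature (Python; the radius value is arbitrary and purely illustrative):

```python
import math

def cell_energy(rs, n=20000):
    """Electrostatic energy of a neutral uniform cell of radius rs
    with the electron at its center: attraction E_eb plus self-repulsion E_bb."""
    rho = 3.0 / (4.0 * math.pi * rs**3)   # uniform background density
    h = rs / n
    e_eb = 0.0                            # electron-background attraction
    e_bb = 0.0                            # background-background repulsion
    q = 0.0                               # background charge inside radius r
    for i in range(n):
        r = (i + 0.5) * h
        shell = 4.0 * math.pi * r * r * rho * h   # charge in this shell
        e_eb -= shell / r                  # attraction to central electron
        e_bb += shell * q / r              # repulsion with the inner charge
        q += shell
    return e_eb, e_bb

rs = 1.7
e_eb, e_bb = cell_energy(rs)
print(e_eb, e_bb, e_eb + e_bb)  # -> approx -3/(2 rs), 3/(5 rs), -9/(10 rs)
```

The midpoint quadrature reproduces the analytic values essentially exactly for the attraction and to a few parts in $10^5$ for the self-repulsion.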
The simplest approximation to the PC cell is a sphere of uniform density $\rho({\bf r})$ around the electron at position ${\bf r}$ with a radius $r_s({\bf r})=\left(\frac{4 \pi}{3}\rho({\bf r})\right)^{-1/3}$ fixed by the condition that the fictitious positive background exactly neutralizes the electron at its center. This leads to the simple PC-LDA approximation\cite{SeiPerKur-PRA-00} \begin{equation} w_{\rm PC}^{\rm LDA}({\bf r})=-\frac{9}{10}\left(\frac{4\pi}{3}\right)^{1/3}\rho({\bf r})^{1/3}. \label{eq_PCLDA} \end{equation} If we approximate the dipole moment of the cell in terms of the gradient of the density and set it equal to zero, we obtain the PC-GGA expression\cite{SeiPerKur-PRA-00} \begin{equation} w_{\rm PC}^{\rm GGA}({\bf r})=w_{\rm PC}^{\rm LDA}({\bf r})+\frac{3}{350}\left(\frac{3}{4\pi}\right)^{1/3}\frac{|\nabla\rho({\bf r})|^2}{\rho({\bf r})^{7/3}}. \label{eq_PCGGA} \end{equation} In Fig.~\ref{fig_SCEandPC} we compare the exact $\lambda\to\infty$ energy densities of Eq.~\eqref{eq_endensSCE} with the PC-LDA and PC-GGA approximations of Eqs.~\eqref{eq_PCLDA}-\eqref{eq_PCGGA} for the He atom, the sphericalized B and C atoms and for the Ne atom, using accurate Hylleraas and quantum Monte Carlo densities.\cite{FreHuxMor-PRA-84,AlsResUmr-PRA-98}\footnote{J. Toulouse, private communication. 
The sphericalized densities of the B and C atoms were obtained from variational Monte Carlo using accurate optimized wavefunctions as described in Refs.~\onlinecite{TouUmr-JCP-07} and~\onlinecite{UmrTouFilSorHen-PRL-07}.} \begin{figure} \includegraphics[width=5.cm,angle=-90]{He_ePC.pdf} \includegraphics[width=5.5cm,angle=-90,clip,trim= 2.8cm 0cm -.3cm 6cm]{B_ePC.pdf} \includegraphics[width=5.5cm,angle=-90,clip,trim= 2.8cm 0cm -.3cm 6cm]{C_ePC.pdf} \includegraphics[width=5.5cm,angle=-90,clip,trim= 2.8cm 0cm -.3cm 6cm]{Ne_ePC.pdf} \caption{Energy density in the gauge of the electrostatic potential of the exchange-correlation hole, Eq.~\eqref{eq:defenergydensity}, in the $\lambda\to\infty$ limit. The exact SCE result of Eq.~\eqref{eq_endensSCE} is compared with the PC-LDA and PC-GGA approximations of Eqs.~\eqref{eq_PCLDA}-\eqref{eq_PCGGA}.} \label{fig_SCEandPC} \end{figure} We see that the PC model becomes a rather good approximation in the valence region of B, C and Ne, while being quite poor in the core region, and especially at the nucleus. The PC-LDA energy density is actually a better local approximation except close to the nucleus. The PC-GGA performs better globally (see Table \ref{tab_PCglobal}), but we clearly see that this is due to an error compensation between the core region and the intershell region. The approximations done in the PC model are {\it i}) neglecting the cell-cell interaction, and {\it ii}) the gradient expansion of Eqs.~\eqref{eq_PCLDA}-\eqref{eq_PCGGA} which assumes a slowly varying density. At the nucleus, we can easily construct what would be the ``exact'' PC cell, so that we can at least remove approximation {\it ii}) and check the effect of approximation {\it i}) alone. 
The ``exact'' PC cell around the nucleus is the sphere $\Omega_1$ of radius $a_1$, with \begin{equation} \int_0^{a_1}4 \pi r^2\,\rho(r)\,d r=1, \end{equation} and the ``exact'' value of $w_{\rm PC}(r=0)$ is \begin{equation} w_{\rm PC}(r=0)=-\int_{\Omega_1}d{\bf r}\frac{\rho({\bf r})}{r}+\frac{1}{2}\int_{\Omega_1}d{\bf r}\int_{\Omega_1}d{\bf r}'\frac{\rho({\bf r})\rho({\bf r}')}{|{\bf r}-{\bf r}'|}. \label{eq_wPC0} \end{equation} In Table~\ref{tab_PCr0} we compare the values at the nucleus from the exact SCE, the PC-LDA or PC-GGA (they become equal at the nucleus) and the result of Eq.~\eqref{eq_wPC0} for several atoms. We see that Eq.~\eqref{eq_wPC0} is very accurate for $N=2$ electrons: in this case, in fact, when the reference electron is at the nucleus, the other one is at infinity, so that the cell-cell interaction indeed vanishes. For $N>2$, we see that the ``smearing hypothesis,'' i.e. the idea that the cell-cell interaction is negligible, leads to some errors, although the result improves on the gradient expansion of Eqs.~\eqref{eq_PCLDA}-\eqref{eq_PCGGA}, reducing the relative error by about a factor of 2. Along these lines, one might try to construct an improved PC model that performs locally better than the PC-GGA, which, as said, achieves good global accuracy at the price of error compensation between different regions of space. \begin{table} \begin{tabular}{llll} \hline \hline & SCE & PC-LDA & PC-GGA\\ \hline H$^-$ & $-0.569$ & $-0.664$ & $-0.559$\\ He & $-1.498$ & $-1.735$ & $-1.468$ \\ Li & $-2.596$ & $-2.983$ & $-2.556$\\ Be & $-4.021$ & $-4.561$ & $-3.961$\\ B & $-5.706$ & $-6.412$ & $-5.650$ \\ C & $-7.781$ & $-8.650$ & $-7.719$\\ Ne & $-19.993$ & $-21.647 $ & $-19.999$\\ \hline \hline \end{tabular} \caption{Global value $W_{\infty}[\rho]=\int\rho({\bf r})w_{\infty}[\rho]({\bf r}) d {\bf r}$ for small atoms at different levels of approximation. 
The SCE corresponds to the exact value, Eq.~\eqref{eq_endensSCE}, while PC-LDA and PC-GGA correspond, respectively, to Eqs.~\eqref{eq_PCLDA} and \eqref{eq_PCGGA}.} \label{tab_PCglobal} \end{table} \begin{table} \begin{tabular}{llll} \hline \hline & $w_{\rm SCE}(r=0)$ & $w_{\rm PC}^{\rm GGA}(r=0)$ & $w_{\rm PC}(r=0)$\\ \hline H$^-$ & $-0.6825$ & $-0.9671$ & $-0.7157$\\ He & $-1.6883$ & $-2.1729$ & $-1.6672$ \\ Li & $-2.2041$ & $-3.4019$ & $-2.6396$\\ Be & $-3.1568$ & $-4.6578$ & $-3.6354$\\ B & $-3.8230$ & $-5.8995$ & $-4.6190$ \\ C & $-4.7727$ & $-7.1446$ & $-5.6050$\\ Ne & $-8.0276$ & $-12.119 $ & $-9.5463$\\ \hline \hline \end{tabular} \caption{Comparison of the values at the nucleus of the energy density in the gauge of the exchange-correlation hole potential in the strong-interaction limit for small atoms. The value $w_{\rm SCE}(r=0)$ corresponds to the exact expression of Eq.~\eqref{eq_endensSCE}, the value $w_{\rm PC}^{\rm GGA}(r=0)$ is the PC gradient expansion approximation of Eqs.~\eqref{eq_PCLDA}-\eqref{eq_PCGGA} (the PC-LDA and PC-GGA are equal at the nucleus), and $w_{\rm PC}(r=0)$ is the value from the ``exact'' PC cell of Eq.~\eqref{eq_wPC0}. } \label{tab_PCr0} \end{table} \section{Energy densities along the adiabatic connection} \label{sec_enedenslambda} \subsection{Kohn-Sham ($\lambda=0$)} At \textit{zero} coupling strength the wave function $\Psi_0$ reduces to a Slater determinant $\Phi=|\phi_1...\phi_N\rangle$, and the energy density $w_0[\rho]({\bf r})$ in the gauge of the exchange-correlation hole is given by the electrostatic potential of the KS exchange hole $h_{x}({\bf r},{\bf r}')$, \begin{equation} w_0[\rho]({\bf r})=\frac{1}{2}\int \frac{h_{x}({\bf r},{\bf r}')}{|{\bf r}-{\bf r}'|}\,d{\bf r}', \label{eq:energydensityKS} \end{equation} since the pair density takes the simple form \begin{equation} P_2^0({\bf r},{\bf r}')=\rho({\bf r})\rho({\bf r}')+\rho({\bf r})h_{x}({\bf r},{\bf r}'). 
\label{eq:PdensKS} \end{equation} One can use in Eq.~\eqref{eq:energydensityKS} the exact exchange hole built from a Hartree-Fock-like expression in terms of the KS orbitals $\phi_i$, or a density functional approximation for $h_{x}({\bf r},{\bf r}')$, e.g., the one of Becke and Roussel.\cite{BecRou-PRA-89} These two choices would correspond, respectively, to constructing a hyper-GGA or a meta-GGA functional from a local interpolation along the adiabatic connection. The aim of the present work is a preliminary study of {\em exact} energy densities along the adiabatic connection. The exact KS orbitals and the corresponding non-interacting potential $\hat{V}^0_{\rm ext}$ for a given physical density can be found in an exact way, e.g., by inversion of the KS equations\cite{WanPar-PRA-93,vanBae-PRA-94,GrivanBae-PRA-95,PeiNecWar-PRA-03,KadSto-PRA-04,AstSto-PRB-06} or by the use of Lieb's Legendre transform DFT formalism.\cite{Lie-IJQC-83,ColSav-JCP-99,TeaCorHel-JCP-10} For an ISI-like interpolation on the energy density, $w_0[\rho]({\bf r})$ will be a key ingredient. Additionally, knowledge of the next leading order in the asymptotic expansion of the local energy density around $\lambda=0$ is needed; it is not yet available and constitutes an active field of research in our group (see also the discussion in Sec.~\ref{sec_conc}). \subsection{Physical ($\lambda=1$)} To compute the exact energy density at coupling strength $\lambda=1$ we resort to \begin{equation} w_1[\rho]({\bf r})=\frac{1}{2\rho({\bf r})}\int \frac{P_2^1({\bf r},{\bf r}')}{|{\bf r}-{\bf r}'|}\,d{\bf r}'-\frac{1}{2}\int \frac{\rho({\bf r}')}{|{\bf r}-{\bf r}'|}\,d{\bf r}', \label{eq:energydensityphys} \end{equation} with the pair density given by the full many-body wave function $\Psi_1$ in Eq.~\eqref{eq_P2}. 
The density $\rho_1({\bf r})$ obtained from $P_2^1({\bf r},{\bf r}')$ defines the density $\rho({\bf r})=\rho_1({\bf r})$ that is held fixed along the adiabatic connection. The exact $w_1[\rho]({\bf r})$ can serve as a benchmark for models of $w_\lambda[\rho]({\bf r})$, but it also gives an estimate of the importance of the strong-interaction limit in the $w_\lambda[\rho]({\bf r})$ model. If the physical system is close to the KS one, correlation is weak and Hartree-Fock should already perform well. In this case, we expect that inclusion of the $\lambda\rightarrow\infty$ information in the $w_\lambda[\rho]({\bf r})$ model does not lead to a major improvement. In contrast, for more strongly correlated systems the physical energy density should tend more towards the $\lambda\rightarrow\infty$ limit, and the SCE concept can provide useful input for an accurate model of $w_\lambda[\rho]({\bf r})$. The relevance of the strong-interaction limit will be discussed in the next section. \subsection{Results} \subsubsection{Coulomb external potential} We have performed full-CI calculations in an aug-cc-pVTZ basis for some two- and four-electron atoms within the GAMESS-US package\cite{GAMESS} to obtain an accurate ground-state wave function for the physical interaction strength. Starting from this, we are able to calculate the energy density in the gauge of the exchange-correlation hole for $\lambda=0,1,\infty$. At $\lambda=1$ we calculate the energy density from the full-CI pair density, Eq.~\eqref{eq:energydensityphys}, with a program similar to the one used for the calculation of $v_{\rm cond}$ in Ref.~\onlinecite{BuiBaeSni-PRA-89}. For the energy density at $\lambda=0$, Eq.~\eqref{eq:energydensityKS}, we first have to compute the single-particle KS orbitals corresponding to the full-CI density. In the case of two-electron atoms they are readily constructed through the simple relation \begin{equation} \phi({\bf r})=\sqrt{\dfrac{\rho({\bf r})}{2}}. 
\end{equation} For the four-electron atoms we choose the scheme of van~Leeuwen, Baerends and Gritsenko\cite{vanBae-PRA-94,GrivanBae-PRA-95} to invert the KS equations. In the strong-interaction limit we calculate the energy density within the SCE concept, see Sec.~\ref{sec:SCEexact} and Refs.~\onlinecite{SeiGorSav-PRA-07,GorVigSei-JCTC-09}. \begin{figure} \begin{center} \includegraphics[width=5.cm,angle=-90]{He_e.pdf} \includegraphics[width=5.cm,angle=-90]{H-_e.pdf} \includegraphics[width=5.5cm,angle=-90,clip,trim= 2.8cm 0cm -.3cm 6cm]{Be_e.pdf} \includegraphics[width=5.5cm,angle=-90,clip,trim= 2.8cm 0cm -.3cm 6cm]{Li-_e.pdf} \caption{Energy densities in the gauge of the electrostatic potential of the exchange-correlation hole for accurate full-CI densities (aug-cc-pVTZ) and coupling strength $\lambda=0,1$ and $\infty$.} \label{fig_l=0,1,inf} \end{center} \end{figure} In Fig.~\ref{fig_l=0,1,inf} we show the energy densities for $\lambda=0,1,\infty$ for two- and four-electron atoms. As expected, He and Be are relatively weakly correlated, and their $\lambda=1$ energy densities are much closer to the KS ones than to the SCE ones. Here a description at the Hartree-Fock level is very reasonable and indeed gives at least $98.5\%$ of the total energy. The anion H$^-$, instead, being a system with a more diffuse density and thus more correlated, has a physical energy density that lies much more in between the KS and the SCE curves, with a Hartree-Fock treatment giving only $94\%$ of the total energy. Here we expect the inclusion of the information from the strong-interaction limit to be important. The valence regions of Be and Li$^-$ (see the insets in Fig.~\ref{fig_l=0,1,inf}) can also be better described by a proper inclusion of the $\lambda=\infty$ information. 
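For the two-electron systems, the $\lambda=0$ energy density has a simple closed form that makes a useful sanity check: with both electrons in the same orbital, the exchange hole is $h_x({\bf r},{\bf r}')=-\rho({\bf r}')/2$, so that Eq.~\eqref{eq:energydensityKS} reduces to $w_0({\bf r})=-v_H({\bf r})/4$. The sketch below verifies this numerically for the exponential model density $\rho(r)=(2/\pi)e^{-2r}$ (an illustrative toy density, not one of the full-CI densities used above), comparing a radial-quadrature Hartree potential against its analytic form:

```python
import math

def rho(r):
    return (2.0 / math.pi) * math.exp(-2.0 * r)

def v_h_numeric(r, rmax=40.0, n=40000):
    """v_H(r) = N_e(r)/r + 4*pi*int_r^rmax s*rho(s) ds, by midpoint rule."""
    h = rmax / n
    inner, outer = 0.0, 0.0
    for i in range(n):
        s = (i + 0.5) * h
        if s < r:
            inner += 4.0 * math.pi * s * s * rho(s) * h
        else:
            outer += 4.0 * math.pi * s * rho(s) * h
    return inner / r + outer

def v_h_analytic(r):
    return 2.0 / r - 2.0 * math.exp(-2.0 * r) * (1.0 / r + 1.0)

def w0(r):
    """lambda = 0 energy density for a doubly occupied orbital."""
    return -0.25 * v_h_analytic(r)

for r in (0.5, 1.0, 2.0):
    print(r, v_h_numeric(r), v_h_analytic(r), w0(r))
```

The quadrature agrees with the closed form to better than $10^{-3}$ hartree, so the same machinery can be trusted for densities without an analytic Hartree potential.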
\subsubsection{Harmonic external potential} Another useful class of systems to investigate the impact of the strong-interaction limit on the physical energy density is given by model quantum dots, where electrons are confined by a harmonic potential and correlation gains importance as the spring constant is lowered. We have computed the energy density for $N=2$ electrons in three dimensions for spring constants for which an analytic solution for the wave function can be found.\cite{Tau-PRA-93} The results are displayed in Fig.~\ref{fig:hooke} for the largest and smallest spring constant considered. \begin{figure} \begin{center} \includegraphics[width=5.5cm,angle=-90,clip,trim= 2.8cm 0cm -.3cm 6cm]{n=1_e.pdf} \includegraphics[width=5.5cm,angle=-90]{n=10_e.pdf} \caption{Energy densities in the gauge of the electrostatic potential of the exchange-correlation hole for a quantum dot with $N=2$ electrons with less pronounced correlation ($\omega=0.5$) and pronounced correlation ($\omega=0.0014$).} \label{fig:hooke} \end{center} \end{figure} As expected, the physical energy density comes closer to the SCE energy density in the stronger correlated case. Additionally, a remarkable feature we observe is that the physical energy density crosses the SCE energy density. By intuition one would expect the physical energy density to be always in between the KS and SCE energy densities, as the KS energy density represents the weakest possible correlation and the SCE energy density the strongest possible correlation in the given density. 
However, the wave functions are chosen according to the global quantities \begin{eqnarray} \min_{\Psi\rightarrow\rho}\langle\Psi|\hat{T}|\Psi\rangle & \Rightarrow & \Psi_{\rm KS} \\ \min_{\Psi\rightarrow\rho}\langle\Psi|\hat{V}_{ee}|\Psi\rangle & \Rightarrow & \Psi_{\rm SCE} \label{eq_VeeSCEconstr}\\ \min_{\Psi\rightarrow\rho}\langle\Psi|\hat{T}+\hat{V}_{ee}|\Psi\rangle & \Rightarrow & \Psi_{\lambda=1}, \end{eqnarray} yielding the global inequalities \begin{equation} \langle\Psi_{\rm SCE}|\hat{V}_{ee}|\Psi_{\rm SCE}\rangle\leq\langle\Psi_{\lambda=1}|\hat{V}_{ee}|\Psi_{\lambda=1}\rangle\leq \langle\Psi_{\rm KS}|\hat{V}_{ee}|\Psi_{\rm KS}\rangle. \end{equation} Locally, these inequalities can be violated without violating the global ones, and hence the physical energy density can go below the SCE energy density. The crossing feature can be attributed to polarization effects, which are present in the physical case but are not contained in the KS and SCE pair densities. In the KS case we rely on the independent-particle picture, and the polarization of a particle due to the other particles is not described by this model. For the SCE, by contrast, we build on point particles that are a priori unpolarizable. To underline this argument we have computed the asymptotic behavior of the physical energy density for the quantum dot with $\omega=0.5$ by use of the asymptotic expansion of the physical pair density\cite{ErnBurPer-JCP-96} \begin{equation} \dfrac{P({\bf r},{\bf r}')}{\rho({\bf r})\rho_{N-1}({\bf r}')}\rightarrow1-2\dfrac{r'}{r}\cos(\Delta\Omega)+\cdots\quad(r\rightarrow\infty), \label{eq:Passympt} \end{equation} where $\rho_{N-1}({\bf r}')$ is the density of the $(N-1)$-particle system and $\Delta\Omega$ is the angle between ${\bf r}$ and ${\bf r}'$. The second-order term in Eq.~\eqref{eq:Passympt} represents the polarization correction. 
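The dipolar structure of the second term in Eq.~\eqref{eq:Passympt}, $\propto (r'/r)\cos(\Delta\Omega)$, is the same one that appears in the multipole expansion $1/|{\bf r}-{\bf r}'|=1/r+(r'/r^2)\cos(\Delta\Omega)+O(1/r^3)$, through which the polarization correction enters the energy density. A quick numerical check of this expansion (Python, with arbitrary illustrative values of $r$, $r'$ and the angle):

```python
import math

def coulomb(r, rp, theta):
    """Exact 1/|r - r'|, with theta the angle between the two vectors."""
    d2 = r * r + rp * rp - 2.0 * r * rp * math.cos(theta)
    return 1.0 / math.sqrt(d2)

def expansion(r, rp, theta, order):
    """Multipole expansion: order 0 is the monopole, order 1 adds the dipole."""
    val = 1.0 / r
    if order >= 1:
        val += rp * math.cos(theta) / r**2
    return val

r, rp, theta = 50.0, 1.3, 0.7
exact = coulomb(r, rp, theta)
mono = expansion(r, rp, theta, 0)
dip = expansion(r, rp, theta, 1)
print(exact - mono, exact - dip)  # the dipole term removes most of the error
```

The residual after including the dipole term is of quadrupolar size, $O(r'^2/r^3)$, consistent with the truncation of Eq.~\eqref{eq:Passympt} at second order.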
As can be seen from Fig.~\ref{fig:hooke_asympt}, for large distances $|{\bf r}-{\bf r}'|$ the energy density in the KS and SCE cases behaves like $\nicefrac{-1}{r}$, which corresponds to the first order in the expansion of the physical pair density. Inclusion of the second-order term in the energy density gives essentially the physical behavior and deviates from the KS and SCE energy densities. \begin{figure} \begin{center} \includegraphics[width=5.5cm,angle=-90,]{n=1_asympt.pdf} \caption{Energy density in the gauge of the electrostatic potential of the exchange-correlation hole for an $N=2$ electron model quantum dot calculated with the full KS, physical and SCE pair density and with the asymptotic expansion of the physical pair density, Eq.~\eqref{eq:Passympt}. The KS and 1st-order curves lie on top of each other.} \label{fig:hooke_asympt} \end{center} \end{figure} Although the crossing happens in a region in which the density is very small, and which thus gives an almost negligible energetic contribution, the analysis presented here can be helpful in constructing models for $w_\lambda[\rho]({\bf r})$. Notice that, instead, with the Coulomb external potential we have so far always observed the expected behavior $w_{\lambda\to\infty}({\bf r})\le w_{\lambda=1}({\bf r})\le w_{\lambda=0}({\bf r})$ everywhere. \section{The local form of the Lieb-Oxford bound} \label{sec_LO} The Lieb-Oxford (LO) bound\cite{Lie-PLA-79,LieOxf-IJQC-81,ChaHan-PRA-99} is a rigorous lower bound to the indirect part of the electron-electron repulsion energy $\tilde{W}[\Psi]$ associated with a given many-electron wave function $\Psi$, \begin{equation} \tilde{W}[\Psi]\equiv\langle\Psi|\hat{V}_{ee}|\Psi\rangle-U[\rho_\Psi]\ge -C \int d{\bf r}\,\rho_\Psi({\bf r})^{4/3}, \end{equation} where $\rho_\Psi({\bf r})$ is the density obtained from the wave function $\Psi$. The positive constant $C$ is rigorously known to satisfy\cite{LieOxf-IJQC-81,ChaHan-PRA-99} $C\le 1.679$. 
It has been suggested\cite{Per-INC-91,RasPitCapPro-PRL-09} that a tighter bound can be obtained by taking the value of $C$ that corresponds to the low-density limit of the uniform electron gas, $C\approx1.44$, since the bound is known to be more challenged when the number of electrons increases\cite{LieOxf-IJQC-81} and when the system has low density.\cite{LevPer-PRB-93} The LO bound translates into a lower bound for the exchange and exchange-correlation functionals,\cite{Per-INC-91,LevPer-PRB-93} \begin{equation} E_x[\rho]\ge E_{xc}[\rho]\ge -C \int d{\bf r}\,\rho({\bf r})^{4/3}, \label{eq_LOExc} \end{equation} simply because $E_x[\rho]=W_{\lambda=0}[\rho]$ is the indirect Coulomb repulsion of the Slater determinant of KS orbitals, and $E_{xc}[\rho]$ is the sum of the indirect Coulomb repulsion of the physical wave function, $W_{\lambda=1}[\rho]$, plus the correlation correction to the kinetic energy, which is always positive. The way the LO bound is used in the construction of approximate functionals is, usually (with the exception of Ref.~\onlinecite{OdaCap-PRA-09}), by imposing it locally (see, e.g., Refs.~\onlinecite{PerBurErn-PRL-96,HauOdaScuPerCap-JCP-12}). That is, a given approximate exchange-correlation functional, $E_{x(c)}^{\rm DFA}[\rho]=\int \rho({\bf r})\,\epsilon_{x(c)}^{\rm DFA}({\bf r})\,d{\bf r}$, is required to satisfy \begin{equation} \epsilon_{x(c)}^{\rm DFA}({\bf r})\ge -C \,\rho({\bf r})^{1/3}. \label{eq_localLO} \end{equation} This is a sufficient condition to ensure the global bound of Eq.~\eqref{eq_LOExc}, but it is by no means necessary (see, e.g., Ref.~\onlinecite{ZhaYan-PRL-98}). In other words, there is no proof that a local version of the LO bound should hold. Actually, before even asking whether a local version of the LO bound should hold or not, we need to understand to which definition (gauge) of the energy density the local LO bound of Eq.~\eqref{eq_localLO} should apply.
In fact, since energy densities are not uniquely defined, the inequality \eqref{eq_localLO} should be satisfied only for a well defined gauge: one can indeed always add to $\epsilon_{x(c)}^{\rm DFA}({\bf r})$ a quantity that integrates to zero and violates Eq.~\eqref{eq_localLO} in some region of space. We argue here that {\em i}) the gauge of the local LO bound is the conventional one of the electrostatic energy of the exchange-correlation hole, and {\em ii}) that the local LO bound is certainly violated, at least in the tail region of an atom or of a molecule, and in the bonding region of a stretched molecule. The argument behind point {\em i}) is the following. For a given density $\rho$, the wave function $\Psi[\rho]$ that maximally challenges\cite{RasSeiGor-PRB-11} the LO bound is the one that minimizes the expectation $\langle\Psi[\rho]|\hat{V}_{ee}|\Psi[\rho]\rangle$, i.e., by definition, $\Psi_{\rm SCE}[\rho]$. In fact, we also have \begin{equation} E_x[\rho]\ge E_{xc}[\rho]\ge W_\infty[\rho]\ge -C \int d{\bf r}\,\rho({\bf r})^{4/3}. \end{equation} In Sec.~\ref{sec_strong} we have discussed the energy density associated to $W_\infty[\rho]$ in the gauge of the electrostatic potential of the exchange-correlation hole. We have also shown that this energy density can be approximated by the PC model, which considers the electrostatic energy of a cell around the reference electron with positive charge density $\rho_+({\bf r})=\rho({\bf r})$. The LDA version of this approximation has exactly the same form as the local LO bound. Moreover, the recently suggested value\cite{RasPitCapPro-PRL-09} $C\approx 1.44$ is extremely close to that of the PC-LDA model, $C_{\rm PC}\approx1.45$. Notice that the fact that the PC model is in the gauge of the electrostatic energy of the exchange-correlation hole follows from the properties of the strong-interaction limit of DFT, in particular Eq.~\eqref{eq_changegauge}.
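For reference, the value $C_{\rm PC}\approx1.45$ quoted above can be obtained in a few lines (this short derivation is ours, included for completeness). Placing a point electron at the center of a sphere of uniform positive charge density $\rho$ that contains exactly one electron, so that its radius is $r_s=\left(3/4\pi\rho\right)^{1/3}$, the electron-background attraction is $-3/(2r_s)$ and the self-repulsion of the positive background is $+3/(5r_s)$, giving \begin{equation} w_{\rm PC}({\bf r})=-\frac{3}{2r_s}+\frac{3}{5r_s}=-\frac{9}{10\,r_s}=-\frac{9}{10}\left(\frac{4\pi}{3}\right)^{1/3}\rho({\bf r})^{1/3}\approx-1.45\,\rho({\bf r})^{1/3}. \end{equation}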
If the PC model is an approximation in this gauge, and if the LO bound is locally equal to it, then conclusion {\em i}) should follow. We then easily see that the local LO bound of Eq.~\eqref{eq_localLO} is certainly violated in the tail region of an atom or a molecule, where the exact energy density in the conventional exchange-correlation hole gauge goes like $-1/r$ while the right-hand side of Eq.~\eqref{eq_localLO} decays exponentially. The local bound is also violated in the bond region of a stretched molecule. As an example, we show in Fig.~\ref{fig_h2LO} the energy densities of the stretched H$_2$ molecule for $\lambda=0$ and $\lambda=1$: with $C=1.44$ the local bound is violated in the bond region when the internuclear distance is $R\gtrsim 7$~a.u., and with $C=1.67$ when $R\gtrsim 8$~a.u. \begin{figure} \begin{center} \includegraphics[width=5.5cm,angle=-90]{LO2.pdf} \includegraphics[width=5.5cm,angle=-90]{LO3.pdf} \caption{Violation of the local form of the Lieb-Oxford bound for the stretched H$_2$ molecule.} \label{fig_h2LO} \end{center} \end{figure} As a concluding remark, we can say that it is very difficult, or maybe even impossible, to find a rigorous local lower bound for the energy density. In fact, we have just seen in Sec.~\ref{sec_enedenslambda} that, at least for the harmonic external potential, it is not even true that $w_{\lambda=1}({\bf r})\ge w_{\infty}({\bf r})$ everywhere. This means that even if we maximize the correlation between the electrons we do not construct a rigorous local lower bound, but only a global one. \section{Conclusions and Perspectives} \label{sec_conc} We have derived an exact expression for the energy density in the strong-interaction limit of DFT in the gauge of the exchange-correlation-hole electrostatic potential, and we have computed it for small atoms and model quantum dots. 
A careful analysis of the point-charge plus continuum (PC) model showed that this approximation is formulated in the same gauge, and a comparison with the exact results showed that it is locally reasonable. Our formalism also allowed us to analyze the local version of the Lieb-Oxford bound: we were able to assess to which gauge the local LO bound should correspond, and then to show that it is certainly violated. Our findings are in agreement with (and give formal support to) the very recent results of Vilhena {\it et al.}\cite{VilRasLehMar-arxiv-12} (which only appeared when this manuscript was completed). More generally, our results suggest that it is very difficult (if not impossible) to derive a rigorous local lower bound for the energy density. We have also discussed the idea of a local interpolation along the adiabatic connection. The values of the local energy density in the same gauge at $\lambda=0$ and $\lambda=\infty$ are now available, either exactly or in an approximate way. Even if we have found that in the harmonic external potential the physical energy density is not always in between the $\lambda=0$ and the $\lambda=\infty$ curves, the regions of space in which the expected order is reversed are energetically not important. In the external Coulomb potential we have found, instead, the expected behavior $w_{\lambda\to\infty}({\bf r})\le w_{\lambda=1}({\bf r})\le w_{\lambda=0}({\bf r})$ everywhere. To really be able to build a local interpolation, at least the slope at $\lambda=0$, and possibly the next leading term at $\lambda=\infty$, are also needed in a local form, and in the same gauge. A first step towards the construction of a local slope at $\lambda=0$ is to produce exact results for this quantity, which are crucial to assess approximations. This can be achieved with the Legendre transform techniques developed in Refs.~\onlinecite{TeaCorHel-JCP-09,TeaCorHel-JCP-10} and is the object of a current project.
A possible way, then, to construct an approximate local slope is to use the so-called ``extended Overhauser model''\cite{GorPer-PRB-01,DavPolAsgTos-PRB-02,GorSav-PRA-05} locally, in a perturbative way. A local next leading term at $\lambda=\infty$ can also be constructed by deriving the exact exchange-correlation hole corresponding to the wave function of the zero-point oscillations, discussed in Ref.~\onlinecite{GorVigSei-JCTC-09}. All this will be investigated in future work. \section*{Acknowledgments} The authors thank Robert van Leeuwen for sharing the program for the inversion of the KS equations. We are grateful to Oleg Gritsenko for helpful assistance in developing the program for the calculation of the physical energy density. AM acknowledges Evert Jan Baerends for his warm hospitality at the Postech University in Pohang, Korea, where part of this work was done. PG-G thanks Gaetano Senatore for useful discussions. This work was supported by the Netherlands Organization for Scientific Research (NWO) through a Vidi grant.
\section{Introduction} \label{sec:introduction} The detection of merging black hole (BH) binaries through gravitational waves \citep[e.g.,][]{abbott_2016_aa, ligo18b} has opened an observational window on the most massive stellar BHs in the Universe. Stellar evolution theory predicts the existence of a gap in the BH mass distribution due to pair-instability evolution \citep[][]{fowler:64, barkat:67, woosley:17}, and the current population of detected binary BHs is consistent with a lack of BHs with masses $\geq 45\ensuremath{\rm{M}_\odot}$ \citep{abbott_2016_ab,abbott_2016_aa,abbott_2017_ac,abbott_2017_ah,abbott_2017_ai}. So far, the most massive BH found is the primary of GW170729, with a mass of $50.6^{+16.6}_{-10.2}\,\ensuremath{\rm{M}_\odot}$ \citep{ligo18a}. This object is at the edge of the theoretically predicted mass gap. \cite{fishbach:17} showed that the existence of the gap and the maximum BH mass at its lower edge can be significantly constrained with the detections expected during the third LIGO/Virgo observing run. The existence of this pair-instability BH mass gap is expected because of the occurrence of pair-instability supernovae (PISN), which can completely disrupt the progenitor star, leaving no compact remnant behind \citep[][]{rakavy:67,fraley:68, woosley:02}. However, it is the mass loss during the pulsations in a pulsational pair instability supernova (PPISN) that sets the lower edge of this PISN BH mass gap. PPISN are predicted for stars slightly less massive than PISN progenitors, and they leave behind a BH, but only after having experienced several episodes of pulsational mass loss, which reduce the mass of the final BH. Here, we investigate how robust the location of the lower edge of the BH mass gap due to PPISN is \citep{rakavy:67,fraley:68}, and in particular how massive the most massive BH below the gap can be.
Single stars with initial masses $100\,\ensuremath{\rm{M}_\odot}\lesssim M_\mathrm{ZAMS} \lesssim 140\,\ensuremath{\rm{M}_\odot}$ (or equivalently final helium core masses of $32\,\ensuremath{\rm{M}_\odot}\lesssim\ensuremath{\rm{M_{He}}}\lesssim60\,\ensuremath{\rm{M}_\odot}$) are expected to undergo pulsational pair instabilities (PPI) \citep[][Renzo et al subm.]{woosley:02,chen:14,yoshida:16, woosley:17}. This instability results in a series of pulses, each removing mass from the star. Eventually, the core stabilizes, the pulses cease, and the star ends its evolution in an iron core collapse (CC), most likely producing a BH \citep{barkat:67, woosley:17}. More massive stars are fully disrupted instead of producing ever more massive BHs: for initial masses $140\,\ensuremath{\rm{M}_\odot}\lesssim M_\mathrm{ZAMS}\lesssim260\,\ensuremath{\rm{M}_\odot}$ (metallicity dependent), corresponding roughly to final helium cores $60\,\ensuremath{\rm{M}_\odot}\lesssim \ensuremath{\rm{M_{He}}} \lesssim 140\,\ensuremath{\rm{M}_\odot}$ \citep{heger02}, the first pulse is so violent that the entire star is fully disrupted in a PISN \citep{woosley:02,heger:03}, without any BH remnant formed. For even higher initial masses, corresponding to final $M_\mathrm{He}\gtrsim130\ensuremath{\rm{M}_\odot}$, the photodisintegration instability again allows for BH formation \citep[][]{heger:03}, closing the PISN BH mass gap from above. From a population of binary BH mergers, we can determine their rate \citep{abbott_2016_ab} and their mass distribution \citep{cutler94,kovetz17}. However, the time-scale for binary BHs to merge can be of the order of gigayears \citep{paczynski67}. Therefore, even if determining the host galaxy is possible despite the limited spatial localization of binary BH mergers, the locally observed population of stars may have formed later, and hence have a different metallicity than that of the BH progenitor.
This complicates estimating the rate of BH formation \citep{Portegies00,dominik:12,abbott_2016_ab}, since this estimate requires knowing the star formation rate and metallicity evolution of the Universe \citep{madau14,neijssel19,mapelli19}. The maximum BH mass below the PISN gap, however, can be determined more easily \citep{ligo18b}, as it is independent of the rate of BH formation. We can thus use it without knowing the metallicity dependent star formation rate of the Universe. In this study we explore how sensitive the maximum BH mass is to uncertainties in the metallicity of the progenitors, as well as to other known uncertainties in our understanding of stellar physics. In section \ref{sec:method} we describe the evolution of PPISN and PISN while introducing our computational approach. We outline the parameter variations we consider in section \ref{sec:parameters}. In section \ref{sec:met} we discuss the sensitivity of the maximum BH mass to changes in the metallicity of the stellar progenitors. Section \ref{sec:physics} explores how uncertain the maximum BH mass below the PISN mass gap is because of uncertainties in the assumed input physics. We discuss the implications of the maximum BH mass in section \ref{sec:implicat}. We conclude and summarize our results in sections \ref{sec:desc} \& \ref{sec:conc}. \section{Evolution through the pulses} \label{sec:method} Using \code{MESA}{} version\footnote{This version is not an official release, but it is publicly available from \url{http://mesa.sourceforge.net/}} 11123 \citep{paxton:11,paxton:13,paxton:15,paxton:18,paxton:19}, we evolve a series of single bare helium cores until they undergo either PPI followed by a core collapse supernova (PPISN) or the more violent pair instability that fully disrupts the star in a PISN.
Input files necessary to reproduce this work and the resulting output files are made freely available at \href{www.mesastar.org}{www.mesastar.org}\footnote{As well as \href{http://doi.org/10.5281/zenodo.3346593}{http://doi.org/10.5281/zenodo.3346593} }. Based on the results of \citet{marchant:18}, we evolve systems around the lower edge of the PISN BH mass gap, with initial helium core masses between 30 and 105\,\ensuremath{\rm{M}_\odot}. We choose to evolve bare helium cores, as stars in this mass range are expected to lose their hydrogen-rich envelope long before their death. This could happen either through binary interactions \citep{kobulnicky:07,sana:12,almeida:17}, strong stellar winds \citep{vink:05, renzo:17}, LBV-like mass loss \citep{humphreys94}, opacity-driven pulsations in the envelope \citep{moriya15}, or because of chemically homogeneous evolution due to fast rotation \citep{maeder:00, yoon:06,demink:09, mandel:16b, marchant:16}. As stars evolve from the zero age helium branch (ZAHB), they burn helium convectively in a core which encompasses $\sim90\%$ of the mass, taking $\sim 10^{5}\,\rm{years}$. Once helium has been burnt in the core, convection ceases, leaving behind a carbon/oxygen (CO) core with an outer helium burning shell surrounded by a helium-rich surface layer. For sufficiently massive cores, an inner region of the star will enter the pair instability region. The production of \ensuremath{e^\pm}{} pairs softens the equation of state, and the resulting dynamical instability causes the core to begin contracting and heating up. Eventually this region will heat up sufficiently to ignite the residual carbon and explosively ignite the oxygen \citep{fowler:64,rakavy:67,barkat:67}. This ignition will reverse the contraction and may generate an outwardly propagating pulse if the star is sufficiently massive. As this pulse propagates outwards, the inner region of the star expands and cools.
Once the pulse reaches the surface, it steepens into a shock wave which can then accelerate material beyond the escape velocity. This removes between a few tenths and a few tens of solar masses of material in a pulsational mass loss episode (PPI) \citep{yoshida:16,marchant:18,woosley19}. Some stars undergo ``weak'' pulsations: they experience the PPI but do not drive a shock sufficient to remove mass \citep{woosley:17, marchant:18}. To focus on the impact that this process has on the BH masses, in this study we define only systems which can drive mass loss as undergoing a pulse. We define weak pulses as ones only able to drive small amounts of mass loss, $\approx0.1\ensuremath{\rm{M}_\odot}$ per pulse, while strong pulses drive up to several tens of solar masses lost per pulse. The star then contracts and cools, either via neutrinos or, in the most massive cores undergoing PPI, via radiative energy losses \citep{woosley:17,marchant:18}. This cycle of contraction and ignition can occur multiple times. This contraction and expansion process is hydrodynamical in nature, generating multiple shocks. To model these shocks we use \code{MESA}'s HLLC contact solver \citep{Toro1994,paxton:18}. However, for computational reasons, we do not use the HLLC solver while the star is in hydrostatic equilibrium. Instead, only as the star evolves away from hydrostatic equilibrium do we switch to using the HLLC solver. We then follow the hydrodynamics through the ignition and expansion of the star. Once all secondary shocks have reached the surface, we excise any material that has a velocity greater than the escape velocity \citep{yoshida:16,marchant:18}. We then create a new stellar model with the same mass, chemical composition, and entropy as the previous model had (minus the excised material). At this point we switch back to using \code{MESA}'s hydrostatic solver, as the star can be approximated as being in hydrostatic equilibrium.
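The excision criterion just described can be sketched schematically (this is only an illustration of the criterion, not the actual \code{MESA}{} implementation; the toy profile numbers are invented):

```python
import math

G = 6.674e-8  # gravitational constant, cgs

def outermost_bound_cell(m_enc, r, v):
    """Scan a profile (arrays ordered center -> surface, cgs units) from the
    surface inward and return the index of the outermost cell whose velocity
    is below the local escape velocity; everything outside it is excised."""
    for k in range(len(r) - 1, -1, -1):
        v_esc = math.sqrt(2.0 * G * m_enc[k] / r[k])
        if v[k] <= v_esc:
            return k
    return -1  # the entire star is unbound (a PISN)

# toy profile: two slow inner cells plus a fast outer shell at 5000 km/s
Msun, Rsun = 1.989e33, 6.957e10
m_enc = [0.5 * Msun, 1.0 * Msun, 1.001 * Msun]
r     = [0.5 * Rsun, 1.0 * Rsun, 1.2 * Rsun]
v     = [0.0, 0.0, 5.0e8]
keep = outermost_bound_cell(m_enc, r, v)
```

In this toy profile only the outer shell, moving at $5\,000\,\mathrm{km\ s^{-1}}$, exceeds its local escape velocity of roughly $560\,\mathrm{km\ s^{-1}}$ and is removed.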
This model is then evolved until the next pulse, where this process repeats, or on to core collapse, which we define as the point when any part of the star infalls with $v>8\,000\,{\mathrm{km\ s^{-1}}}$. Stars which undergo a PISN are evolved until all stellar material becomes unbound. We define the time just before a pulse to be when the pressure-weighted average of the adiabatic index satisfies $\langle\Gamma_1\rangle<4/3$ \citep{stothers:99,marchant:18}. A special case occurs once the core temperature ($\rm{T_c}$) exceeds $10^{9.6}\,\rm{K}$, when we continue using the HLLC solver as the star is approaching CC. During the hydrodynamical phases we turn off mass loss from winds. Given the short amount of physical time spent by our models during the hydrodynamical phase of evolution and the typical wind mass loss rates of $\approx10^{-5}\,\ensuremath{\rm{M}_\odot}\ \rm{yr}^{-1}$, this does not significantly influence the final BH masses. We define the mass of the BH formed to be the mass enclosed with a binding energy $>10^{48}\ensuremath{\rm{ergs}}{}$ (e.g., \citealt{nadezhin:80,lovegrove:13,fernandez18}) and velocities less than the escape velocity, measured at iron core collapse. Stars which undergo a PISN are expected to be fully disrupted and thus leave no remnant behind. The final BH mass may depend on the mass of neutrinos lost during the collapse, assuming they are not accreted into the BH \citep{coughlin:18}. Without a fully consistent theory for BH formation, we use this simple value based on the binding energy, which provides an upper limit on the BH mass. This value of $10^{48}\ensuremath{\rm{ergs}}{}$ is a conservative estimate for the minimum energy released when a star collapses into a BH, due to neutrino emission \citep{nadezhin:80,lovegrove:13,fernandez18}. In general this limit is $\approx0.01\ensuremath{\rm{M}_\odot}$ smaller than the total mass of bound material at core collapse.
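Our reading of this binding-energy criterion can be sketched as follows (a schematic with Newtonian shell energies and an invented toy profile, not the actual implementation):

```python
import math

G = 6.674e-8       # gravitational constant, cgs
E_NU = 1.0e48      # erg; assumed minimum energy radiated in neutrinos

def bh_mass(m_enc, r, dm, v):
    """Upper limit on the BH mass: summing Newtonian shell binding energies
    (G m/r - v^2/2) dm from the surface inward over sub-escape-velocity
    material, the BH mass is the enclosed mass at which the cumulative
    binding energy first exceeds E_NU; more weakly bound outer layers can
    be ejected in response to the neutrino losses."""
    e_cum = 0.0
    for k in range(len(r) - 1, -1, -1):
        if v[k] > math.sqrt(2.0 * G * m_enc[k] / r[k]):
            continue  # unbound material never ends up in the BH
        e_cum += (G * m_enc[k] / r[k] - 0.5 * v[k] ** 2) * dm[k]
        if e_cum > E_NU:
            return m_enc[k]
    return 0.0

# toy profile: a compact core plus two loosely bound outer shells
m_enc = [1.000e34, 1.001e34, 1.002e34]   # g
r     = [1.0e9, 1.1e9, 1.2e9]            # cm
dm    = [1.000e34, 1.0e30, 1.0e30]       # g
v     = [0.0, 0.0, 0.0]                  # cm/s
m_bh = bh_mass(m_enc, r, dm, v)
```

Here the outermost shell alone is bound by only $\approx6\times10^{47}\,\ensuremath{\rm{ergs}}{}$, below the $10^{48}\,\ensuremath{\rm{ergs}}{}$ threshold, so it is excluded and the estimate falls slightly below the total bound mass, as stated above.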
We define the edge of the CO core, at the end of core helium burning, as the location in mass where $X\left(^{12}\rm{C}\right) > 0.01$ and $X\left(^{4}\rm{He}\right) < 0.01$. \section{Choice of parameters} \label{sec:parameters} There are many uncertain ingredients in the modelling of stars. These can either be algorithmic parameters that are insufficiently constrained by experiments or observations (e.g., convective mixing) or physical quantities that can only be measured in regimes much different from the stellar case, requiring complicated and uncertain extrapolation for their application to stars (e.g., nuclear reaction rates). Thus we model a range of systems, with differing environmental, physical, and numerical parameters, to test the sensitivity of our results to these parameters. \subsection{Metallicity} Since LIGO can detect stellar mass BH mergers out to redshifts $\approx1$, and since the progenitor stars may come from even earlier epochs, it can probe the history of star formation across the Universe \citep{ligo18b}. Thus we evolve a series of models with varying metallicities ($\rm{Z}$) between $10^{-5}$ and $3\times10^{-3}$. Metallicity primarily affects the evolution of a helium core by varying the amount of mass lost via winds (see section \ref{sec:winds}), due to the strong dependence of the wind mass loss prescriptions on the metallicity \citep{vink:01,Mokiem+2007a}. The lower limit results in stars that do not lose any significant amount of mass through winds. The upper limit is set by the requirement that we be able to robustly model the PPISN and PISN region. The upper limit used is comparable to the physical upper limit found in \citet{langer:07} for H-rich PPISN progenitors.
At higher metallicities stars lose sufficient mass that they do not enter the pair instability region and instead evolve in hydrostatic equilibrium through carbon, oxygen, and silicon burning, then undergo direct collapse, likely forming a BH at iron core collapse. Our fiducial metallicity, when varying other physics parameters, is $\rm{Z=10^{-3}}$. \subsection{Wind mass loss}\label{sec:winds} The total mass a star loses during its evolution plays a critical role in its fate; however, just as important is how and when the mass is lost. Mass loss via winds is not self-consistently solved in 1D stellar evolution models, but is instead set by a mass loss prescription, whose functional form can have a large impact on the star's evolution \citep{renzo:17}. We investigate three different wind mass loss algorithms, each having a different dependence on the stellar properties: the prescription of \citet{hamann:82, hamann:95, hamann:98} (H); the prescription of \citet{nugis:00} (N\&L); the prescription of \citet{tramper:16} (T); as well as no mass loss ($\dot{\rm{M}}=0$). The helium cores we investigate have surface luminosities $\approx10^6\,\rm{L_\odot}$, which is at the upper edge of the currently known Wolf-Rayet stars used to derive these prescriptions. Thus we also apply a free scaling factor $\eta$ to test possible uncertainties in our knowledge of mass loss rates in high luminosity helium cores. This free scaling parameter can be related to the inhomogeneities in the wind structure (so-called ``clumpiness'') with $\eta=\sqrt{\langle\rho\rangle^2/\langle \rho^2\rangle}$, where $\rho$ is the wind mass density, and the angle brackets indicate the spatial average over the stellar surface. We vary $\eta$ between 0.1 and 1.0 \citep{smith:14}, with our fiducial wind being the (H) rate with $\eta=0.1$ \citep{yoon:10}. We assume a value of $\rm{Z}_{\odot}=0.014$ \citep{asplund:09}.
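To make the connection between $\eta$ and clumpiness concrete, consider a toy two-phase wind in which clumps of density $\rho_c$ fill a volume fraction $f$ and the interclump medium is empty (this illustration is ours, using the standard convention that a smooth-wind rate is reduced by the square root of the clumping factor $D=\langle\rho^2\rangle/\langle\rho\rangle^2$):

```python
import math

def eta_two_phase(f_vol):
    """Two-phase wind: <rho> = f*rho_c and <rho^2> = f*rho_c^2, so the
    clumping factor is D = <rho^2>/<rho>^2 = 1/f and the mass loss
    scaling is eta = 1/sqrt(D) = sqrt(f)."""
    clumping_factor = 1.0 / f_vol
    return 1.0 / math.sqrt(clumping_factor)

print(eta_two_phase(0.01))  # 0.1, the lower end of the range explored here
```

A volume filling fraction of 1\% thus corresponds to $\eta=0.1$, the most clumpy wind considered here.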
\subsection{Neutrino physics} The evolution of massive stars is governed by neutrino losses: as the star evolves to higher core temperatures and densities, the rate of thermal neutrino losses increases. Stars undergoing pulsational instabilities are also sensitive to the neutrino cooling rates, since the \ensuremath{e^\pm}{} pairs they generate produce copious amounts of neutrinos upon annihilation, which cools the core. The stronger the cooling, the more energy is required from nuclear burning to overcome these losses. \code{MESA}{} implements the analytic fits to neutrino losses from \citet{itoh96} for pair, photo, plasma, bremsstrahlung and recombination neutrino processes. These fits have quoted fitting errors of $\approx 10\%$ for pair, $\approx 1\%$ for photo, $\approx 5\%$ for plasma, and $\approx 10\%$ for recombination neutrinos, compared to the detailed calculations in the regions where these processes are dominant \citep{itoh96}. Outside of the dominant regions the error increases rapidly. Bremsstrahlung neutrino losses have no quoted error, thus we assume a $\approx 10\%$ error, similar to the other processes. We test the uncertainty due to this fitting error by increasing (decreasing) the neutrino loss rate by multiples of the quoted fitting error. While \citet{itoh96} states that the analytic fits will generally underpredict the true value, we test both over and under estimates for completeness. A second important factor for the rate of neutrino loss in stars is the Weinberg angle, the weak mixing angle from the Weinberg--Salam theory of the electroweak interaction \citep{weinberg67,salam68}. In the analytical fits of \citet{itoh96}, the Weinberg angle sets the relative rate of neutrino production between neutral current and charged current neutrino reactions. Increasing the Weinberg angle increases the neutrino cooling rate, by increasing the fraction of charged current reactions.
While individual measurements of the Weinberg angle have small quoted uncertainties, there is a systematic offset between different measured values that is larger than the quoted uncertainties. Thus we model three values for the Weinberg angle: 0.2319 \citep{itoh96} (our fiducial value), 0.23867 \citep{erler05}, and 0.2223 \citep{codata14}. Over the range of Weinberg angles considered here, we find the neutrino rates vary by up to $\approx3\%$, with the greatest change being in the pair-creation region. \subsection{Mixing} Convection inside a star is a difficult process to model \citep{bohmvitense:58,canuto96,meakin07}, especially during dynamical phases of a star's evolution \citep{Chatzopoulos14,Chatzopoulos16}. Thus, we take a simpler approach and restrict ourselves to testing uncertainties within the framework of mixing length theory (MLT). Specifically, we vary the MLT efficiency parameter \ensuremath{\alpha_{MLT}}{} between 1.5 and 2.0, with 2.0 being our fiducial value. While this may not capture the true uncertainty due to convection, it can provide bounds on the result. We use the prescription of convective velocities from \citet{marchant:18} to limit the acceleration of convective regions. At the convective boundaries we assume convective overshoot mixing with an exponential profile. This is parameterized by two terms, $\ensuremath{f_{ov}}$ and $f_0$: the first dictates the decay scale of the convective overshoot, in units of the pressure scale height, while the second dictates the starting point inside the convective boundary from which the overshoot begins, also in pressure scale heights \citep{paxton:11}. We assume the value of $f_0=0.005$, and vary $\ensuremath{f_{ov}}$ between 0.0 (no overshooting) and 0.05, with $\ensuremath{f_{ov}}=0.01$ being our fiducial value.
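The exponential overshoot profile described above can be sketched as follows (a minimal illustration of the scheme; the numerical values of $D_0$ and $H_P$ are invented):

```python
import math

def d_overshoot(z, D0, f_ov, H_P):
    """Diffusion coefficient in the overshoot region: D0 is the convective
    diffusion coefficient taken at a distance f0*H_P inside the convective
    boundary, and it decays exponentially with distance z beyond that point
    on a length scale set by f_ov times the pressure scale height H_P."""
    return D0 * math.exp(-2.0 * z / (f_ov * H_P))

D0, f_ov, H_P = 1.0e15, 0.01, 1.0e10   # toy cgs-like numbers
```

With $\ensuremath{f_{ov}}=0$ this reduces to no overshooting, while larger $\ensuremath{f_{ov}}$ extends the mixed region further beyond the formal convective boundary.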
\subsection{Nuclear physics} Nuclear reaction rates are highly sensitive to the temperature at which the reaction occurs, and due to this sensitivity the uncertainty in the rate is also highly temperature dependent \citep{iliadis_2010_ab,iliadis_2010_aa,longland_2010_aa}. Varying a nuclear reaction rate within its known uncertainties has been shown to have a large impact on the structure of a star \citep{hoffman99,iliadis02,fields16,fields18}. We vary several nuclear reaction rates between their $\pm1\sigma$ uncertainties with data from \code{STARLIB}{} \citep{sallaska13}. \code{MESA}'s default rate set is a combination of \code{NACRE}{} \citep{angulo99} and \code{REACLIB}{} \citep{cyburt10}. To sample the rates, we take the median value from \code{STARLIB}{} and, by taking the uncertainty on a rate to be a log-normal distribution, we compute both an upper and lower rate (given by $\pm1\sigma$) to cover 68\% of the rate's probability distribution. These bounds vary as a function of temperature, reflecting the varying uncertainty in the underlying experimental data. When sampling the rates, we vary only one rate at a time, with the remainder of the rates being taken from \code{NACRE}{} and \code{REACLIB}{}. Correlations between rates can impact the structure of a star and deserve further study \citep{fields16,fields18}. We test variations in three rates: $3\alpha$, the triple-$\alpha$ reaction; $\rm{C12\alpha}$, the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} reaction; and $\rm{O16\alpha}$, the $\rm{^{16}O\left(\alpha,\gamma\right)^{20}Ne}$ reaction. We choose to vary only a few rates over their $1\sigma$ uncertainties to limit the computational cost. We also investigate the effect of changing the nuclear network used, which can have a large impact on the evolution of massive stars, due to changes in both which isotopes and which reactions are followed \citep{farmer16}.
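The $\pm1\sigma$ sampling of a single rate can be sketched as follows (schematic; the median value and factor uncertainty below are invented and are not actual \code{STARLIB}{} numbers):

```python
import math

def rate_bounds(median, factor_uncertainty):
    """For a log-normally distributed rate with the given median and factor
    uncertainty f.u. = exp(sigma), the +/- 1 sigma bounds that enclose 68%
    of the probability are median/f.u. and median*f.u."""
    return median / factor_uncertainty, median * factor_uncertainty

# hypothetical rate at a single temperature point
median = 2.0e-9          # N_A<sigma*v>, cm^3 mol^-1 s^-1 (made up)
fu = math.exp(0.2)       # i.e. sigma = 0.2 in ln-space
lo, hi = rate_bounds(median, fu)
```

Because the factor uncertainty is tabulated per temperature point, the resulting upper and lower rate curves widen and narrow with temperature, mirroring the experimental constraints.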
By default we use \code{approx21.net}, which follows alpha-chain reactions from carbon to iron, and includes compound reactions to follow $\left(\alpha,\rm{p}\right)\left(\rm{p},\gamma\right)$ reactions (assuming that the intermediate isotope is in a steady state equilibrium) \citep{Timmes99,Timmes00}. We also evolve models with both \code{mesa\_75.net}, which has 75 isotopes up to $^{60}\rm{Zn}$, and \code{mesa\_128.net}, which has 128 isotopes up to $^{60}\rm{Zn}$, including more neutron-rich nuclei than the \code{mesa\_75.net} network; neither of these networks includes any compound reactions. \subsection{Other physics} \code{MESA}{} is built upon a range of other physics, which we do not vary here but which can provide other uncertainties in the modelling of massive stars. \code{MESA}'s equation of state (EOS) for massive stars is a blend of the OPAL \citep{Rogers2002} and HELM \citep{Timmes2000} EOSes. Radiative opacities are primarily from OPAL \citep{Iglesias1993,Iglesias1996}, with low-temperature data from \citet{Ferguson2005} and the high-temperature, Compton-scattering dominated regime by \citet{Buchler1976}. Electron conduction opacities are from \citet{Cassisi2007}. Nuclear screening corrections come from \citet{salpeter:54,dewitt:73,alastuey:78,itoh:79}. \begin{figure}[!htb] \centering \includegraphics[width=0.5\textwidth]{./bh_2d_mass_met} \caption{The mass of the BH formed as a function of the metallicity of the star and its initial helium star mass. The hatched region denotes those models which undergo pulsational mass loss. Grey regions indicate models which do not reach CC due to numerical issues. } \label{fig:bh_z} \end{figure} \section{Robustness of the gap to metallicity}\label{sec:met} Figure \ref{fig:bh_z} shows the predicted mass of the BH formed from a helium star with a mass between $30$ and $100\,\ensuremath{\rm{M}_\odot}$ and initial metallicities between $Z=10^{-5}$ and $3\times10^{-3}$.
At first, as the helium core mass increases, so does the resulting BH mass, due to the larger initial mass of the star. However, once the star enters the pulsational regime, it begins to lose mass, and eventually the amount of mass lost via pulses is sufficient to lower the final BH mass. This turnover occurs due to changes in the behavior of the PPI pulses. As the core mass increases, the pulses decrease in number but become more energetic, driving off more mass in each pulse. At the edge of the PISN region, the helium cores can lose $\approx10\,\ensuremath{\rm{M}_\odot}$ of material in a single pulse. As the core mass is increased further, the first pulse becomes energetic enough for the star to be completely disrupted in a PISN. At the lower edge of the BH gap, the most massive helium stars undergoing PPI mass loss without being disrupted lose several tens of solar masses of material per pulse, leaving behind BHs of $\approx15\ensuremath{\rm{M}_\odot}$. The lowest mass a BH may have, after undergoing PPISN, is set by the production of \ensuremath{^{56}\mathrm{Ni}}{} inside the star. As the initial mass of the star increases, more \ensuremath{^{56}\mathrm{Ni}}{} is produced inside the star. Eventually sufficient \ensuremath{^{56}\mathrm{Ni}}{} is produced to unbind any material that was not initially driven away by the pulses \citep{marchant:18}. However, the exact edge of the PPISN/PISN boundary, and thus the minimum BH mass produced by PPISN, is not resolved given our grid spacing. \begin{figure*}[htp] \centering \includegraphics[width=1.0\textwidth]{./bh_co_mass_met} \caption{Mass of the final BH as a function of the CO core mass, for different metallicities. Circles denote models that underwent at least one pulse, pluses models that evolved directly to CC, and crosses models that undergo a PISN. The left blue region denotes where models undergo CC, the middle green region denotes PPISN, while the right yellow region denotes PISN, as determined by stars with $Z=10^{-5}$.
Points in the right panel show the current median mass estimates for the double compact objects detected by LIGO/VIRGO with their 90\% confidence intervals \citep{ligo18b}. Dashed horizontal lines emphasize the maximum spread in the locations of the edge of the BH mass gap, or in other words the spread in the maximum BH mass below the PISN BH mass gap.} \label{fig:bh_comass_z} \end{figure*} As the initial metallicity of the star increases, the mass of the BH decreases for a fixed initial helium core mass. This is due to the increase in the amount of mass lost via winds before the star enters the PPI region, which decreases the final mass of the star before collapse. The progressive shift to the right of the hatched region in Figure \ref{fig:bh_z} with increasing Z shows that the minimum (and maximum) initial helium core mass needed to undergo pulsations also increases as the metallicity increases. Models with $Z\le3\times10^{-3}$ fully populate the regime for pair instability pulsations. Numerical limitations prevent us from fully populating the PPISN/PISN region at higher metallicities. Thus it may be possible to have PPISN at higher metallicities; however, this would require helium cores initially more massive than $\approx100\,\ensuremath{\rm{M}_\odot}$. Again, this is due to the winds: as the winds become stronger, an initially more massive progenitor star is required to retain sufficient mass to undergo pulsations. Figure \ref{fig:bh_comass_z} shows the BH mass as a function of the CO core mass over our metallicity range. Here we see a much tighter relationship between the CO core mass and the final BH mass than between the initial helium core mass and the final BH mass in Figure \ref{fig:bh_z}. We find strong PPI pulses that remove a significant amount of mass for CO core masses between $\ensuremath{\rm{M_{\rm CO}}}\approx38\,\ensuremath{\rm{M}_\odot}$ and $\ensuremath{\rm{M_{\rm CO}}}\approx60\,\ensuremath{\rm{M}_\odot}$.
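The outcome windows in CO core mass described above can be condensed into a small classifier. This is only an illustrative sketch: the boundary values ($\approx38$ and $\approx60\,\ensuremath{\rm{M}_\odot}$) are the approximate values quoted in the text for $Z=10^{-5}$, they shift somewhat with metallicity and input physics, and the function name is our own.

```python
def expected_fate(m_co):
    """Classify the fate of a stripped helium star by its CO core mass (Msun).

    The boundaries (~38 and ~60 Msun) are the approximate values quoted in
    the text for Z = 1e-5; they shift slightly with metallicity and with
    the assumed input physics, so this is an illustrative sketch only.
    """
    if m_co < 38.0:
        return "CC"      # core collapse without pulses
    elif m_co < 60.0:
        return "PPISN"   # pulsational mass loss, then collapse to a BH
    else:
        return "PISN"    # complete disruption, no remnant
```

For example, \code{expected\_fate(50.0)} returns \code{"PPISN"}, the regime that produces the most massive BHs below the gap.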
The upper edge of the PPISN region slightly decreases to $\ensuremath{\rm{M_{\rm CO}}}=56\,\ensuremath{\rm{M}_\odot}$ as the metallicity increases. The most massive BHs come from stars with $\ensuremath{\rm{M_{\rm CO}}}\approx50\,\ensuremath{\rm{M}_\odot}$, not from those with the most massive CO cores that undergo a PPI (in Figure \ref{fig:bh_z} these are $\ensuremath{\rm{M_{\rm CO}}}\approx60\,\ensuremath{\rm{M}_\odot}$). This is due to the pulses becoming stronger and thus driving more mass loss. We attribute the differences arising from changes in metallicity primarily to the differences in wind mass loss rates. Higher metallicity stars have higher wind mass loss rates, and thus lose more mass before collapse \citep{castor75,vink:01}. This increased mass loss forces the convective core to recede, leaving behind a smoother composition gradient in the outer layers of the star. At the highest metallicities the stellar winds have also removed all remaining helium from the star and have begun ejecting C/O-rich material before the pulses. Thus these progenitors would likely look like carbon- or oxygen-rich Wolf-Rayet (WC/WO) stars before pulsating. This justifies our choice of the CO core mass over the He core mass as a better proxy for the final BH masses. We note that while the CO-BH mass distribution is relatively constant over the metallicities considered here, the BH formation rate, and hence the merger rate, will vary as a function of metallicity. This is due to changes in the initial stellar mass needed to form such massive CO cores. The right panel of Figure \ref{fig:bh_comass_z} also shows a comparison with the LIGO/VIRGO BH masses detected by the end of the second observing run \citep{ligo18a,ligo18b}. We find that the most massive BH LIGO/VIRGO has so far detected is consistent with the upper edge of the BH masses we find. This is due, in part, to the large 90\% confidence intervals on the individual BH masses from GW detections.
Nevertheless, even when considering the much better determined chirp mass of GW170729, it remains within the maximum chirp mass predicted assuming random pairing of BHs with mass ratio $\rm{q=M_2/M_1>0.5}$ \citep{marchant:18}. Figure \ref{fig:max_bh_z} shows, as a function of Z, the final fate of the mass inside the progenitor star forming the most massive BH. At low metallicities, the weakness of the stellar winds results in most of the initial stellar mass ending up in the BH. At higher metallicities wind mass loss is able to drive approximately half of the initial mass away before the star collapses to form a BH. The stars making the most massive BHs lose only $1-5\,\ensuremath{\rm{M}_\odot}$ of material in the pulsations. Our models span 2.5 orders of magnitude in metallicity, but over such a wide range the maximum BH mass varies only slightly, between $\rm{M_{BH,Max}}=43-46\,\ensuremath{\rm{M}_\odot}$. This corresponds to a $7\%$ variation over the metallicity range considered here, for BHs whose progenitor underwent a PPISN. The initial helium core mass which forms the most massive BHs at each Z increases from $\approx54\ensuremath{\rm{M}_\odot}$ at $\rm{Z}=10^{-5}$ to $100\,\ensuremath{\rm{M}_\odot}$ at $\rm{Z}=3\times10^{-3}$. This increase in mass is not due to changes in pulse behavior, but instead to the increased mass loss due to winds (seen as the yellow shaded region in Figure \ref{fig:max_bh_z}). Thus with a change of only 6\ensuremath{\rm{M}_\odot}{} in BH mass, the initial mass needed to produce the BH changes by $\approx50\,\ensuremath{\rm{M}_\odot}$ due to changing the metallicity over 2.5 orders of magnitude. \begin{figure}[htp] \centering \includegraphics[width=0.5\textwidth]{./maxbh_met2} \caption{Fate of the mass of the progenitors of the most massive BH below the PISN BH mass gap as a function of metallicity. The gray region denotes mass which becomes the BH. The green region denotes mass that is lost via pulsations.
The yellow region denotes mass lost via winds. Black points denote the final BH mass. Blue points denote the edge between mass loss in pulsations and in winds. Red points mark the initial helium core mass of the star. } \label{fig:max_bh_z} \end{figure} \section{Physics dependence of the gap}\label{sec:physics} In Figure \ref{fig:bh_param}, we show the variations in the BH mass distribution for multiple assumptions of stellar physics, varied within either their theoretical or experimentally derived uncertainties. Each model is computed at a fixed metallicity of $\rm{Z}=10^{-3}$, with only one parameter varied in each model. \begin{figure*}[ht] \centering \subfigure[Winds]{\label{fig:bh_wind_pres}\includegraphics[width=0.49\linewidth]{./bh_co_mass_wind}} \subfigure[Neutrinos]{\label{fig:bh_neu}\includegraphics[width=0.49\linewidth]{./bh_co_neu}} \subfigure[Mixing]{\label{fig:bh_mixing}\includegraphics[width=0.49\linewidth]{./bh_co_mixing}} \subfigure[Nuclear reaction rates]{\label{fig:bh_rates}\includegraphics[width=0.49\linewidth]{./bh_co_rates}} \caption{BH mass as a function of CO core mass for different physics assumptions. Panel a shows variations in the wind mass loss prescription; H is the prescription of \citet{hamann:98}, N\&L is the prescription of \citet{nugis:00}, and T is from \citet{tramper:16}, while $\eta$ varies between 0.1 and 1.0. Panel b shows variations in the neutrino physics, where each $\Delta$ represents a scaling of the numerical uncertainty in the fits of \citet{itoh96}, as well as variations in the Weinberg angle. Panel c shows variations in the convective treatment, with varying MLT scale heights $\alpha_{MLT}$ and convective overshoot values $\ensuremath{f_{ov}}$.
Panel d shows variations in a select set of nuclear reaction rates; \code{MESA}'s default rates are from \code{NACRE}{} \citep{angulo99} and \code{REACLIB}{} \citep{cyburt10}, while the other rates come from \code{STARLIB}{} \citep{sallaska13} as either the median or the $\pm1\sigma$ uncertainties; $3\alpha$ is the triple alpha reaction, $\rm{C12\alpha}$ is the $\rm{^{12}C\left(\alpha,\gamma\right)^{16}O}$ reaction, and $\rm{O16\alpha}$ is the $\rm{^{16}O\left(\alpha,\gamma\right)^{20}Ne}$ reaction. Plot symbols have the same meaning as in Figure \ref{fig:bh_comass_z}. A star represents our default model assumptions for each physics variation. Dashed lines indicate the range of locations for the edge of the BH mass gap. Colour shading shows the regions between the CC, PPISN, and PISN outcomes for our fiducial set of physics assumptions.} \label{fig:bh_param} \end{figure*} \subsection{Wind prescription} Figure \ref{fig:bh_wind_pres} shows the effect of different mass loss prescriptions on the CO-BH mass distribution. Overall, the difference in masses between the different prescriptions (and $\eta$ values) is small. The different prescriptions bifurcate into two groups: those where $\rm{M}_{\rm{BH,max}}\approx44\ensuremath{\rm{M}_\odot}$ (H $\eta=0.1$ and N\&L $\eta=0.1$) and those with $\rm{M}_{\rm{BH,max}}\approx48\ensuremath{\rm{M}_\odot}$ ($\dot{\rm{M}}=0.0$, $\rm{N\&L}$ $\eta=1.0$, and $\rm{T}$ with both values of $\eta$). The models producing smaller maximum BH masses also shift their transition to PISN to smaller CO core masses. These models lose more mass via winds and come from $\ensuremath{\rm{M_{He,int}}}\approx64\ensuremath{\rm{M}_\odot}$ cores. The second group, which makes $\rm{M}_{\rm{BH,max}}\approx48\ensuremath{\rm{M}_\odot}$, comes from $\ensuremath{\rm{M_{He,int}}}\approx58\ensuremath{\rm{M}_\odot}$ cores and loses less mass via winds.
As the strength of mass loss increases, either through changing the wind prescription or increasing the metallicity, the CO-BH mass distribution flattens and the maximum BH mass decreases. No set of models with the (H) prescription and $\eta=1.0$ is shown, as the amount of mass loss when using this prescription is sufficient that no model enters the pulsation region. \subsection{Neutrino Physics} Figure \ref{fig:bh_neu} shows the BH mass as a function of the CO core mass for variations in the neutrino rate and the Weinberg angle. Over the range of variations in neutrino rates considered here, the effect on the maximum BH mass is small. As the rate increases we find little change in the BH mass distribution, with the maximum BH mass varying by $\approx1\ensuremath{\rm{M}_\odot}$ and a trend for less massive BHs as the neutrino rate increases. As the Weinberg angle varies, the CO-BH mass function again remains approximately constant. Smaller Weinberg angles result in a slightly lower maximum BH mass, with a variation of $\sim1.5\ensuremath{\rm{M}_\odot}$ for the range of $\ensuremath{\sin^2\theta_{W}}$ considered here. \subsection{Convective mixing} Figure \ref{fig:bh_mixing} shows variations in \ensuremath{\alpha_{MLT}}{} between 1.5 and 2.0, with our default assumption being $\ensuremath{\alpha_{MLT}}=2.0$. Within these limits there is very little change in the behavior of the BH masses, with the BH masses slightly decreasing as \ensuremath{\alpha_{MLT}}{} increases. Figure \ref{fig:bh_mixing} also shows that the effect of varying $\ensuremath{f_{ov}}$ is small. The maximum BH mass varies within $1\ensuremath{\rm{M}_\odot}$ over the range considered here. The most significant difference occurs at the PPISN/CC boundary, where $\ensuremath{f_{ov}}=0.05$ decreases the final BH mass relative to the lower \ensuremath{f_{ov}}{} models. This is due to a change in behavior of the burning and convection regions at the center of the star.
When \ensuremath{f_{ov}}{} is small, the star has separate off-center and central burning regions, both of which drive convection zones. When \ensuremath{f_{ov}}{} increases, these convection zones can merge, which increases the available fuel supply and causes the pulses to become stronger, driving increased mass loss. \subsection{Nuclear reaction rates}\label{sec:rates} Figure \ref{fig:bh_rates} shows the CO-BH mass function for different rates computed from \code{STARLIB}{} and our default rates from \code{NACRE}{} and \code{REACLIB}. Overall, the effect of the $\rm{^{16}O\left(\alpha,\gamma\right)^{20}Ne}$ rate is minimal on both the BH mass distribution and the maximum BH mass. However, both the $3\alpha$ rate and the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate have a large impact on both the BH mass distribution and the maximum BH mass formed. As the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate decreases, the maximum BH mass increases: for $+1\sigma$ we find $\ensuremath{\rm{M}_{\rm{BH,max}}}=40\ensuremath{\rm{M}_\odot}$, while at $-1\sigma$ we find $\ensuremath{\rm{M}_{\rm{BH,max}}}=58\ensuremath{\rm{M}_\odot}$. Thus, within the 68\% confidence interval for the $\rm{C12\alpha}$ rate, the maximum BH mass varies by $\approx18\ensuremath{\rm{M}_\odot}$\footnote{For \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} reactions with the $+1\sigma$ rate, we burn sufficient $^{12}\rm{C}$ during core helium burning that we never trigger the CO core mass definition of Section \ref{sec:method}. Thus we relax our CO core mass definition to be the mass coordinate at the maximum extent of the core helium burning convection zone.}. The median \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate from \code{STARLIB}{}, from \citet{kunz02}, is smaller than the \code{NACRE}{} rate, thus \code{STARLIB}{} predicts a more massive maximum BH mass.
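To illustrate how a measured maximum BH mass could be inverted into a constraint on the rate, consider a toy interpolation between the two endpoints quoted above ($+1\sigma \to 40\,\ensuremath{\rm{M}_\odot}$, $-1\sigma \to 58\,\ensuremath{\rm{M}_\odot}$). The assumption of linearity between the endpoints, and the function names, are ours and serve purely as an illustration; the actual constraint is deferred to Farmer et al. (in prep).

```python
def mbh_max_from_c12ag_shift(sigma):
    """Toy model: maximum BH mass (Msun) as a function of the shift of the
    12C(a,g)16O rate, in units of its 1-sigma uncertainty.

    Uses only the two endpoints quoted in the text (+1 sigma -> 40 Msun,
    -1 sigma -> 58 Msun) and assumes linearity in between (an assumption
    made purely for illustration).
    """
    return 49.0 - 9.0 * sigma

def c12ag_shift_from_mbh(m_bh_max):
    """Invert the toy relation: an observed maximum BH mass implies a
    shift of the 12C(a,g)16O rate relative to its median value."""
    return (49.0 - m_bh_max) / 9.0
```

In this toy picture an observed edge at $43\,\ensuremath{\rm{M}_\odot}$ would map to a rate roughly $0.67\sigma$ above the median.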
\citet{deboer17} also provide an updated \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate which is smaller, over the core helium burning temperature range, than \code{NACRE}{}. Models with this rate showed a similar increase in the maximum BH mass. As the $3\alpha$ rate \citep{angulo99} increases, the maximum BH mass also increases. This correlates with the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate behavior: as the $3\alpha$ rate increases or the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate decreases, the mass fraction of $^{12}\rm{C}$ in the core increases. For the values tested here, this increases from $\approx10\%$ to $\approx30\%$. We find that as the mass fraction of carbon in the core increases, the maximum BH mass also increases, and the behavior of the pulses is altered. Higher carbon fractions decrease the range in CO core mass within which models undergo pulsations. This would translate into a smaller predicted rate of PPISN in the Universe, as there is a smaller range of possible progenitors. Increasing the mass fraction of carbon also decreases the fraction of models with strong pulsational mass loss, by weakening the pulses such that they do not eject mass. As the carbon fraction increases, the BH mass distribution sharpens (similar to what is seen with no mass loss in Figure \ref{fig:bh_wind_pres}). The boundaries between CC/PPISN and between PPISN/PISN also shift to higher masses as the carbon fraction increases. Moving the boundary between PPISN/PISN to higher CO core masses would translate to needing a more massive initial star, and thus would decrease the predicted rate of PPISN and PISN.
We performed additional tests varying the $\rm{^{12}C+^{12}C}$ and $\rm{^{16}O+^{16}O}$ reaction rates\footnote{In the \code{approx21.net} nuclear network these reaction rates are compound rates where the different output channels have been combined.} between 0.1 and 10 times their default \code{MESA}{} values, as \code{STARLIB}{} does not have temperature dependent uncertainties for them. These rates showed variations in the maximum BH mass of $\sim4\ensuremath{\rm{M}_\odot}$, with the $\rm{^{12}C+^{12}C}$ rate having the larger effect on the maximum BH mass. Due to the sensitivity of the maximum BH mass to the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate, the measured value of the maximum BH mass (below the PISN mass gap) can be used to place constraints on the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate (Farmer et al., in prep). \subsection{Model resolution} \code{MESA}{} has a number of ways to control the spatial and temporal resolution of a model. Here we vary \code{MESA}'s \code{mesh\_delta\_coeff}, which controls the maximum allowed change in stellar properties between adjacent mesh points during the hydrostatic evolution, between 0.8 and 0.3; decreasing the value increases the resolution. This range corresponds to roughly a factor of two increase in the number of grid points. We also vary \code{MESA}'s adaptive mesh refinement (AMR) parameters, which set the resolution during hydrodynamical evolution. We vary \code{split\_merge\_amr\_nz\_baseline} between 6000 and 10000 and \code{split\_merge\_amr\_nz\_MaxLong} between 1.25 and 1.15, where the second value denotes a higher resolution. This leads to an increase by a factor of two in the number of spatial zones during the evolution of a pulse.
We have also varied \code{MESA}'s \code{varcontrol\_target}, which sets the allowed change in stellar properties between timesteps, between $5\times10^{-4}$ and $5\times10^{-5}$, and varied the \code{max\_timestep\_factor}, which sets the maximum factor by which \code{MESA}{} can increase a timestep, between $1.025$ and $1.05$. This leads to an increase of $\approx30\%$ in the number of timesteps taken. Over the ranges considered here we find changes of $\approx1\ensuremath{\rm{M}_\odot}$ in the maximum BH mass. Over the range of nuclear networks considered here (\code{approx21.net}, \code{mesa\_75.net}, and \code{mesa\_128.net}), there is little change in the BH mass for a given CO core mass, at most $\approx1\ensuremath{\rm{M}_\odot}$. There is a trend for larger nuclear networks to produce slightly more massive BHs. \citet{woosley:17} suggest that PPI systems need large nuclear networks which can adequately follow weak interactions, which \code{approx21.net} does not. However, both the CO-BH mass relationship and the maximum BH mass vary within $\approx1\ensuremath{\rm{M}_\odot}$ over the networks considered here. Changing the isotopes evolved will have an effect on the composition and final structure of the star as well as the composition of the ejecta from the pulses. However, we find that much of the behavior that determines the final BH mass is set by the conditions at the initial pulse. This is set by the CO core mass and carbon mass fraction, both of which are set by core helium burning, which is not affected by the lack of weak reactions in \code{approx21.net}. \section{The maximum black hole mass and its implications}\label{sec:implicat} Figure \ref{fig:bh_massrange} summarizes the range in the maximum BH mass below the PISN gap due to the variations considered in Sections \ref{sec:met} and \ref{sec:physics}.
These include those affected by the environment (metallicity), and thus varying across the Universe; those for which we have incomplete or uncertain physics (rates, winds, $\ensuremath{\alpha_{MLT}}$, $\ensuremath{f_{ov}}$, $\ensuremath{\nu_{\rm{rate}}}$, and $\ensuremath{\sin^2\theta_{W}}$) but which we expect to be constant across the Universe; and those that are model dependent (spatial, temporal, and nuclear network resolution). For most of the physics for which we are uncertain ($\ensuremath{\alpha_{MLT}}$, $\ensuremath{f_{ov}}$, $\ensuremath{\nu_{\rm{rate}}}$, and $\ensuremath{\sin^2\theta_{W}}$) and for the model resolution (spatial, temporal, and in number of isotopes) there is a limited effect on the maximum BH mass. These terms place $\approx2\ensuremath{\rm{M}_\odot}$ uncertainties on the maximum BH mass, over the ranges considered here, contingent on how the different uncertainties are combined. The next most significant factors are the metallicity and winds. We consider these together, since the metallicity dependence of wind mass loss rates introduces a degeneracy between the two. As we observe a population of BHs from progenitor stars with varying metallicities, this 7\% variation in the maximum BH mass places a minimum level of uncertainty on what we can learn from the most massive BHs detected. Given a sufficiently large population of binary BHs (at multiple redshifts) it may be possible to disentangle the effects of the star formation and metallicity evolution of the Universe on the BH population \citep{dominik:13,dvorkin16}. However, this uncertainty, which varies across the Universe, is small compared to the current measurement uncertainties. From a gravitational-wave detection we can infer the luminosity distance to the source. We also obtain the chirp mass in the detector frame, i.e., the redshifted true chirp mass.
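This argument can be made concrete with a short sketch. The chirp-mass definition and the $(1+z)$ redshifting of masses in the detector frame are standard relations; the function names and the numerical values in the example are our own illustrative choices.

```python
def chirp_mass(m1, m2):
    """Source-frame chirp mass in Msun: (m1*m2)**(3/5) / (m1+m2)**(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def redshift_from_chirp(mc_detector, mc_source):
    """Masses measured from the waveform are redshifted,
    Mc_det = (1 + z) * Mc_src, so pinning the source-frame chirp mass
    (e.g. to the edge of the PISN mass gap) yields the redshift directly."""
    return mc_detector / mc_source - 1.0

# Illustrative numbers: an equal-mass binary at a gap edge of 45 Msun
# has Mc_src = 45 / 2**0.2, roughly 39.2 Msun, so a detector-frame chirp
# mass of 47 Msun would correspond to z of about 0.2.
```

Combined with the luminosity distance inferred from the same signal, such a redshift estimate is what enables the ``standardizable siren'' application discussed below.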
Knowledge of the true source mass would therefore also provide the redshift to the source, and so allows the use of gravitational-wave events to measure the expansion history of the Universe without the need for electromagnetic detections to supply the redshift of the event. Knowledge of the edge of the PISN BH mass gap allows BH mergers to act as ``standardizable sirens'' for cosmology (demonstrated by \citet{farr19}, following \citet{schutz:86,holz15}). The sharper the edge of the PISN mass gap is, the smaller the uncertainty in the derived cosmological parameters that can be achieved \citep{farr19}. The most significant physics variation considered here is due to the nuclear physics uncertainties, primarily the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate, leading to a 40\% variation in the maximum BH mass. Models with lower \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rates lose less mass in pulsations and thus produce more massive BHs. Thus, even with a lack of knowledge about the environment in which any individual BH formed, we can still use the detection of sufficiently massive BHs to constrain nuclear physics. The most massive detected BH indicates the maximum value for the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate over the core helium burning temperature range. Given the sensitivity of the BH mass to the CO core mass, the maximum BH mass formed is effectively independent of its stellar origin. Assuming that both chemically homogeneous evolution and common envelope evolution can produce a sufficiently massive, H-poor He core, we would expect these evolutionary scenarios for merging BHs to result in similar final BH masses. \begin{figure}[htp] \centering \includegraphics[width=0.5\textwidth]{./bhmassrange} \caption{Range of maximum BH masses, for different environment and stellar physics assumptions.
See Figure \ref{fig:bh_comass_z} for the range of metallicities considered here, and Figure \ref{fig:bh_param} for the ranges of each physics assumption.} \label{fig:bh_massrange} \end{figure} \section{Comparisons to other work}\label{sec:desc} \citet{yoshida:16} studied PPISN from stars with initial masses between 140 and 250\ensuremath{\rm{M}_\odot}{} and Z=0.004. They find the final masses of their stars to be between $50$ and $53\ensuremath{\rm{M}_\odot}$ at collapse, broadly consistent with the masses we find. For our models at Z=0.004 we would expect slightly smaller BHs, due to the winds stripping the outer CO layers of the stars. Another possible source of differences may be their choice of the \citet{caughlan88} \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate \citep{yoshida11}. Our models agree with the wind-less, metal-free, helium-core models of \citet{woosley:17}, who finds a maximum final BH mass of $48\ensuremath{\rm{M}_\odot}$. This agrees with our wind-less models, where we also find a maximum BH mass of $48\ensuremath{\rm{M}_\odot}$ (though we evolve them at a non-zero metallicity). \citet{woosley:17} also finds a maximum BH mass of $52\ensuremath{\rm{M}_\odot}$ for models which did not remove their entire hydrogen envelope. Although they are not directly comparable to our results, which assume all hydrogen has been removed, they provide bounds on the variation in the maximum BH mass, if the H-envelope is not completely removed, of $\approx4\ensuremath{\rm{M}_\odot}$. \citet{woosley19} investigated the evolution of naked He cores, finding a maximum BH mass below the gap of $46\ensuremath{\rm{M}_\odot}$, in agreement with our results for our default \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate and assumptions on metallicity. \citet{takahashi18} looked at the effect of varying the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate on the boundary between PPISN and PISN.
Our results are not directly comparable, as they used the \citet{caughlan88} rate, while by default we use the \code{NACRE}{} rate. They found that as the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} rate decreases, which increases the $^{12}\rm{C}$ fraction, the PISN boundary shifts to higher initial masses, similar to our findings. \citet{takahashi18} also find a narrowing of the PPISN region (in initial mass space), consistent with our results. \citet{leung19} studied the evolution of helium cores between $80-140\ensuremath{\rm{M}_\odot}$ and $\rm{Z}=10^{-3}-1.0\rm{Z_{\odot}}$, with a previous \code{MESA}{} release (version 8118) and a different treatment of the hydrodynamics. Their results are broadly in agreement with ours. They find a maximum BH mass of $\approx50\ensuremath{\rm{M}_\odot}$, which is larger than what we predict, likely due to their lowest metallicity models ($\rm{Z=10^{-2}Z_{\odot}}$) having no wind mass loss. Our no-mass-loss models at $\rm{Z=10^{-3}}$ have a maximum BH mass of $48\ensuremath{\rm{M}_\odot}$. At higher metallicities we find that wind mass loss is sufficient to make the maximum-mass BH come from a CC event rather than a PPISN. The remaining differences may be due to other differences in the choice of parameters and/or changes between the two \code{MESA}{} versions. In this work we did not consider the effects of rotation on the stellar progenitors, which is especially important if they came from chemically homogeneous evolution \citep{demink:09,demink16,marchant:16}. However, \citet{chatzopoulos:12,chatzopoulos:12b} have previously studied the impact of rotation on zero metallicity models evolved first with \code{MESA}{} and then with \code{FLASH}, in 1-D, for the dynamical evolution. They found that as the rotation rate increases, the initial mass needed for a PPISN or PISN decreases. This is due to the increase in CO core mass caused by rotational mixing.
They find PPISN from stars with core masses between $\approx40-65\ensuremath{\rm{M}_\odot}$, which agrees with our results. However, the impact of centrifugal support on the resulting BH masses from PPISN is an open problem. Rotation may also affect the final BH mass depending on how the BH is formed and whether material with high angular momentum is accreted onto the BH. This may, however, be more relevant for the final spin of the BH than for the final mass \citep{rockefeller:06,fryer04,batta17}. \section{Summary and Conclusions} \label{sec:conc} The prediction of a gap in the mass distribution of BHs from the collapse of stars dates back to the sixties, when the theory of pair instability evolution was first developed \citep{fowler:64, barkat:67}. However, it is only recently that the possibility of testing this prediction directly with gravitational waves has opened up. As the presently observed population of binary BHs is compatible with having a stellar origin \citep[][]{ligo18a,ligo18b}, rather than a dynamical or primordial one, we can use stellar evolution models to interpret the upper end of the BH mass distribution. We find that the evolution of single bare He cores robustly predicts a maximum BH mass of $\approx45\ensuremath{\rm{M}_\odot}$, and that this value is relatively insensitive to variations in the input physics, the algorithmic approach, and the metallicity of the models. In particular, despite the uncertain wind mass loss rates of massive stars, we find a variation of the maximum BH mass of only $\approx7\%$ (from $\approx43\ensuremath{\rm{M}_\odot}$ to $\approx46\ensuremath{\rm{M}_\odot}$) over 2.5 orders of magnitude in metallicity. This implies that detailed knowledge of the host galaxy of merging binary BHs is not required to use gravitational wave detections to probe the physics of the unobserved stellar progenitors. The insensitivity to metallicity of the maximum BH mass below the gap might also allow for cosmological applications.
If its value can be accurately determined, it can provide a ``standard siren'' \citep{schutz:86,holz15,farr19}, allowing estimates of both a redshift and a luminosity distance to the mergers from just the gravitational wave detection. Assuming a stellar origin, the most massive BHs detected below the pair instability mass gap might be used to further constrain nuclear physics, specifically the \ensuremath{^{12}\rm{C\left(\alpha,\gamma\right)^{16}O}}{} reaction in the core helium burning regime. In particular, the maximum BH mass puts an upper limit on this reaction rate. Other physics variations, including neutrino physics, wind algorithms, and chemical mixing, have sub-dominant effects on the maximum BH mass and negligible contributions to the uncertainty compared to the typical observational uncertainties. We note however that our estimates of the BH mass may be over-predicted if, for instance, a significant amount of mass is lost via neutrinos during the final collapse \citep{coughlin:18}. Also, our simulations do not account self-consistently for binary interactions between the progenitor stars, which deserve further attention \citep{gotberg:17,marchant:18}. If BHs with masses inside the predicted PISN mass gap are detected, they could either have non-stellar origins, be the result of multiple mergers in a cluster \citep{rodriguez16a, stone:17,dicarlo19}, or form from a star that was H-rich at the time of collapse and then merged in a cluster \citep{vigna19,spera2019}. However, the expected rate of mergers that include a BH in the mass gap is small, due to the requirement that the BHs themselves be the result of previous mergers and that they stayed bound to the cluster after the merger \citep{gerosa19}. Whether BHs are ejected from clusters also depends strongly on whether they are born spinning; if they do not spin, they are more likely to stay bound to the cluster \citep{rodriguez19}.
If the star can retain its H-rich envelope by evolving as an isolated single star, and then merges with another BH in a dense cluster, it might be massive enough to enter the mass gap \citep{woosley:17}. However, the expected rate of mergers in dense clusters is a factor of 40 less than that of isolated binary mergers \citep{rodriguez16a,belczynski:16nat}. The present and upcoming detections of binary BH mergers might provide evidence constraining the deaths of the most massive stars before we are able to unequivocally observe these phenomena in the electromagnetic spectrum \citep{stevenson:19}. Our results suggest that, with a large population of merger events, we can put constraints on uncertain nuclear physics and provide a new tool for cosmology. \begin{acknowledgements} We acknowledge helpful discussions with W.~Farr, D.~Brown, B.~Paxton, F.~Timmes, I.~Mandel, Y.~G\"otberg, D.~Hendriks, E.~C.~Laplace, and E.~Zapartas. RF is supported by the Netherlands Organization for Scientific Research (NWO) through a top module 2 grant with project number 614.001.501 (PI de Mink). SdM and MR acknowledge funding by the European Union's Horizon 2020 research and innovation programme from the European Research Council (ERC) (Grant agreement No.\ 715063), and by the Netherlands Organization for Scientific Research (NWO) as part of the Vidi research program BinWaves with project number 639.042.728. SdM acknowledges the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation to Harvard University. PM acknowledges support from NSF grant AST-1517753 and the Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) program in Gravity and Extreme Universe, both granted to Vassiliki Kalogera at Northwestern University. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
This work was carried out on the Dutch national e-infrastructure with the support of SURF Cooperative. This research has made use of NASA's Astrophysics Data System. \end{acknowledgements} \software{ \texttt{mesaPlot} \citep{mesaplot}, \texttt{mesaSDK} \citep{mesasdk}, \texttt{ipython/jupyter} \citep{perez_2007_aa,kluyver_2016_aa}, \texttt{matplotlib} \citep{hunter_2007_aa}, \texttt{NumPy} \citep{der_walt_2011_aa}, and \code{MESA} \citep{paxton:11,paxton:13,paxton:15,paxton:18,paxton:19}. }
\section{Introduction} With the rapid development of modern communication and multimedia technologies, the dissemination and processing of digital multimedia products are becoming more and more popular, which inevitably gives rise to a variety of piracy and infringement issues. Watermarking techniques have received significant research attention as a means to efficiently protect the copyright of digital multimedia products \cite{Cox2007}. While watermarks can be embedded into media formats including but not limited to documents, images, audio, and video, in this paper we focus on watermarking for audio signals, which are functions of time. Recently, the authors in \cite{Hua2016_SP} reviewed the research, development, and commercialization achievements of digital audio watermarking technology over the past twenty years. Generally, the existing audio watermarking techniques can be classified according to the domains in which the watermarks are embedded. More specifically, time domain methods either modify the raw audio samples frame by frame \cite{Nishimura2012,Hua2015_TASLP,Hua2015_TIFS}, or change the histogram of host audio signals \cite{Xiang2007_TM}. On the other hand, transform domain methods, which have received much more research attention, can be classified into spread spectrum (SS) \cite{Cox1997_TIP,Kirovski2003_TSP,Malvar2003_TSP,Xiang2015_TASLP}, patchwork \cite{Xiang2014_TASLP}, quantization index modulation (QIM) \cite{Lei2013_TASLP}, and a special case based on over-complete transform dictionaries \cite{Cancelli2009,Hua2016_SPL}.
It can be seen from the literature that although audio watermarking solutions have been extensively studied in the time domain or in a transform domain individually, less effort has been devoted to the case in which the host audio signal is analyzed and represented jointly in the time-frequency (TF) domain, based on well-established TF analysis techniques (transforms) \cite{SejdiDjurovi2009,StankoviKrishnan2010,Boashash2016,Flandrin2018}. TF analysis is a generalization of Fourier analysis to the case when the signal frequency characteristics are time-varying. Since many practical signals of interest, such as speech and music, have time-varying frequency characteristics, TF analysis has a broad scope of applications. One of the most basic forms of TF analysis is the short-time Fourier transform (STFT) \cite{haijianDSP2016,haijianSC2017}, while more sophisticated techniques have also been developed \cite{Zhang2015_TAES,Zhang2016_SPL,Zhang2020_TIFS}. In \cite{Stankovic2008,Orovic2010}, the authors proposed two efficient approaches to speech watermarking based on the STFT and the S-method \cite{Stankovic1994}. \begin{figure*}[t!] \centering \subfigure[]{\includegraphics [width=4.5in]{drawing1.eps}} \subfigure[]{\includegraphics [width=4.5in]{drawing2.eps}} \subfigure[]{\includegraphics [width=5.2in]{drawing3.eps}} \caption{Commonly used and the proposed watermark embedding schemes: (a) Transform after framing. (b) Transform before framing. (c) Proposed scheme.}\label{Schemes} \end{figure*} In this paper, we discuss audio watermarking from the perspective of TF analysis, and propose an audio watermarking framework that operates directly on the TF representation of the host signal. Specifically, we propose to embed the watermark signal into a set of low-energy points in the TF representation, which correspond to noise-only or silent segments in audio signals.
The selected points form non-overlapping 2-dimensional (2D) feature frames, each of which is composed of TF domain samples across multiple time and frequency bins. To this end, a method to automatically determine these low-energy TF positions is introduced, and the energy invariance before and after watermark embedding is exploited. The proposed scheme only modifies a few frames within the feature space, while the other frames are kept intact. The imperceptibility of the watermarking system is hence improved, in that the segments of the host audio signal containing strong audio content are modified less. Therefore, while robustness against host signal interference is ensured via the improved spread spectrum (ISS) method \cite{Malvar2003_TSP}, the inability of the ISS method to control imperceptibility is remedied by the proposed localized watermark embedding. Furthermore, the proposed system enjoys improved robustness against a series of signal processing attacks, including adding noise, amplitude scaling, and lossy compression, thanks to the appropriately designed 2D embedding frames. In general, the proposed framework can be considered a design for feature domain audio watermarking, in which the features correspond to the appropriately selected embedding locations. We concretize the above framework via realizations based on the basic STFT and a similarly formed short-time cosine transform (STCT), with both SS and ISS embedding and extraction mechanisms. Conventional SS and ISS schemes with a uniform embedding rule across different frequency bands are also implemented for comparison. Extensive experiments are carried out to evaluate the proposed framework and demonstrate its performance advantages. \section{Audio Watermarking in TF Feature Domain} \subsection{General Frameworks} We use the following notations in this paper.
The host audio signal is denoted by vector $\mathbf{x}\in\mathbb{R}^{N\times 1}$, and its $i$th frame after time domain non-overlapping framing is $\mathbf{x}_i\in\mathbb{R}^{M_0 \times 1}$, where $N$ is the number of samples of the host signal, $M_0$ is the number of samples per frame, and $i\in\{0,1,\ldots,\lceil {N/M_0} \rceil-1\}$, where $\lceil {\cdot} \rceil$ is the ceiling function. Similarly, the host audio signal in the transform domain is denoted by $\mathbf{y}$ with the same length as $\mathbf{x}$, and the $i$th frame of $\mathbf{y}$ after transform domain non-overlapping framing is $\mathbf{y}_i$. We use the subscript $\{\cdot\}_{\textnormal{w}}$ to denote the watermarked version of a signal; thus the representations of the watermarked signal in the time and transform domains, in terms of frame and ensemble, are denoted by $\mathbf{x}_{\textnormal{w},i}$, $\mathbf{x}_{\textnormal{w}}$, $\mathbf{y}_{\textnormal{w},i}$, and $\mathbf{y}_{\textnormal{w}}$, respectively. Since this paper mainly utilizes SS based watermark embedding and extraction mechanisms, the corresponding spreading sequence is a pseudo-random noise sequence $\mathbf{p}\in\{+1,-1\}^{L\times 1}$. In conventional full spectrum SS settings, we have $L=M_0$ so that the spreading sequence can be additively embedded into host signal frames. Audio watermark embedding in the transform domain can be carried out under two basic schemes, i.e., transform after framing and transform before framing, which are shown in Fig. \ref{Schemes} (a) and (b). Transform after framing is the most widely adopted processing flow in the existing literature, as summarized in \cite{Hua2016_SP}. On the other hand, the host audio signal has also been considered as a single frame to calculate its corresponding transform domain representation, directly obtaining $\mathbf{y}$ from $\mathbf{x}$. Good examples of such works can be found in \cite{Xiang2015_TASLP,Xiang2014_TASLP,Kang2016_TM}.
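As a minimal illustrative sketch of the framing and per-frame transform notation above (assuming NumPy; all names and parameter values here are hypothetical, not the paper's implementation), non-overlapping framing followed by an orthonormal FFT preserves the energy of each frame, since the transform matrix $\mathbf{H}$ is unitary:

```python
import numpy as np

def frame_signal(x, M0):
    """Partition x into non-overlapping frames of M0 samples (zero-pad the tail)."""
    n_frames = int(np.ceil(len(x) / M0))
    padded = np.zeros(n_frames * M0)
    padded[:len(x)] = x
    return padded.reshape(n_frames, M0)  # row i is frame x_i

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)          # toy host signal, N = 10000
frames = frame_signal(x, M0=1024)        # ceil(N / M0) = 10 frames

# Orthonormal FFT per frame: y_i = H x_i with H unitary, so ||y_i|| = ||x_i||
Y = np.fft.fft(frames, norm="ortho", axis=1)
assert np.allclose(np.linalg.norm(Y, axis=1), np.linalg.norm(frames, axis=1))
```

The energy-preservation check is what later allows patch energies to be compared consistently in either domain.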
In this paper, we propose an alternative watermark embedding framework, as depicted in Fig. \ref{Schemes} (c), which is similar to the transform before framing case. Specifically, instead of calculating the transform $\mathbf{y}$, we first obtain the TF representation of the host signal $\mathbf{x}$, denoted by matrix $\mathbf{Y}\in\mathbb{C}^{M_0 \times \lceil {N/M_0} \rceil }$ composed of both time and frequency bins. In this way, the proposed framework differs from most existing ones by considering watermark embedding based on a 2D TF image. Furthermore, the TF domain features, denoted by $\mathbf{f}_i\in\mathbb{C}^{W^2 \times 1}$ (where $W$ is a window dimension), are selected as the patches with low energy values, which correspond to noise-only or silent locations. One advantage of the proposed framework over the similar framework in Fig. \ref{Schemes} (b) is that a modification of the host signal in one area will not affect the host signal in other areas, whereas for a system in Fig. \ref{Schemes} (b), any modification of the transform domain samples causes changes in all time domain samples. In addition, since each of the selected feature vectors contains multiple time and frequency bins, the correlation of the host signal across both different time intervals and frequency ranges is considered for watermark embedding, which leads to improved robustness against a series of processing attacks. Details of the proposed framework are provided in the next subsection. \begin{figure}[!t] \centering \includegraphics[width=2.5in]{Tune.eps} \caption{Heuristic tuning mechanism to control imperceptibility based on ODG. The dashed line is validated only if the decision is ``Yes'' (stopping criterion).} \label{Tune} \end{figure} \subsection{Watermark Embedding and Extraction Schemes} In this subsection, the STFT is used as an example for TF analysis.
To ensure well controlled imperceptibility, heuristic tuning is incorporated in the embedding scheme, in which the objective difference grade (ODG) \cite{ODG} quantifies the current imperceptibility condition, and the watermark embedding strength parameter $\alpha$ is adjusted according to the feedback of the ODG value. The ODG value is a real non-positive number in the interval $[-4,0]$, with $0$ corresponding to imperceptible and $-4$ corresponding to very annoying. The heuristic tuning mechanism is depicted in Fig. \ref{Tune}. The proposed watermark embedding scheme is detailed as follows. \begin{enumerate} \item Partition $\mathbf{x}$ into non-overlapping frames $\mathbf{x}_i$ with $M_0$ samples\footnote{Here, the frames cannot be overlapped, because otherwise the watermarked patches would affect multiple overlapped frames and the inverse transform would become unstable. This is slightly different from the TF analysis literature. However, such a treatment will not cause performance degradation, since we are not interested in the resolution or accuracy of TF analysis in the context of watermarking.}. Perform the Hilbert transform on $\mathbf{x}_i$ to remove the symmetry of the frequency spectrum within the $2\pi$ range \cite{Grafakos2004_book}. For simplicity of notation, we still use $\mathbf{x}_i$ to denote the Hilbert transform output, \begin{equation}\label{Hilbert} {{\mathbf{x}}_i}\in\mathbb{C}^{M_0\times 1} \leftarrow \textnormal{Hilbert} \left( {{{\mathbf{x}}_i}} \right), i={0,1,\cdots,\lceil {N/M_0} \rceil-1}, \end{equation} but note that $\mathbf{x}_i$ now becomes a complex quantity. \item Compute the non-symmetric STFT of the host audio signal $\mathbf{x}$, and obtain the TF representation $\mathbf{Y}$. Specifically, perform the fast Fourier transform (FFT) for each frame, \begin{equation} {{\mathbf{y}}_i} = {\mathbf{H}}{{\mathbf{x}}_i}, \end{equation} where $\mathbf{H}$ is the orthonormal FFT matrix.
Then we have \begin{equation} {\mathbf{Y}} = [{{\mathbf{y}}_0},{{\mathbf{y}}_1}, \ldots ,{{\mathbf{y}}_{\left\lceil {N/M_0} \right\rceil - 1}}]. \end{equation} Further, select low to middle frequency bins bounded by $f_1$ and $f_2$, e.g., $f_1=60$ Hz and $f_2=2800$ Hz, as the feasible watermark embedding region. The exact dimension, $M$, resulting from this process depends on $f_1$, $f_2$, the sampling frequency, and the length of the FFT. We simply denote the refined 2D TF image as $\tilde{\mathbf{Y}}\in\mathbb{C}^{M\times \lceil {N/M_0} \rceil}$. Therefore, vertically, the $M$ samples correspond to the frequency region within $[f_1,f_2]$. \item Partition $\tilde{\mathbf{Y}}$ into square patches using a $W\times W$ window, and index the patches in raster scanning order. Usually, we have \begin{equation} W < M < M_0 \ll N. \end{equation} For convenience, we further assume $W$ is chosen such that both $M$ and ${\left\lceil {N/M_0} \right\rceil }$ are divisible by $W$; hence there is no residual after partitioning, and $M{\left\lceil {N/M_0} \right\rceil}/{W^2}$ patches are obtained in total. \item Calculate the average energy of each patch. Denote each patch as $\mathbf{P}_j$, $j\in\{0,1,\ldots,M{\left\lceil {N/M_0} \right\rceil}/{W^2}-1\}$; then the average energy is given by \begin{equation} {E_j} = \frac{1}{{{W^2}}}\sum\limits_{{m_1} = 0}^{W - 1} {\sum\limits_{{m_2} = 0}^{W - 1} {\left| {{{\mathbf{P}}_j}({m_1},{m_2})} \right|^2} }. \end{equation} \begin{figure}[t!] \centering \includegraphics[width=3.6in]{Fig_method_a}\\[-1em] \caption{Demonstration of the proposed STFT-based watermark embedding scheme. (a) STFT of host audio signal. (b) Partition of the 2D TF image. (c) Energy image of the patches of the 2D TF image. (d) Selected feature patches (red, payload size $P=32$).} \label{Embed} \end{figure} \item Sort $E_j$ in ascending order.
According to the binary payload vector $\mathbf{w}\in\{+1,-1\}^{P\times 1}$, where usually \begin{equation} P<M{\left\lceil {N/M_0} \right\rceil }/{W^2}, \end{equation} select the first $P$ patches with the minimum average energies as features for watermark embedding. Vectorize the selected patches into feature vectors $\mathbf{f}_i\in\mathbb{C}^{W^2 \times 1}$, $i\in\{0,1,\ldots,P-1\}$. The embedding order is column-wise and from top to bottom, i.e., the feature patches in the first column are embedded first, from high frequency bands to low frequency bands, followed by the patches in the second column, and so on. \item For each feature vector ordered as above, generate the PN sequence $\mathbf{p}\in\{+1,-1\}^{W^2 \times 1}$ as the spreading code, and perform SS or ISS watermark embedding additively, i.e., \begin{equation}\label{EmbedFun} {{\mathbf{f}}_{{\textnormal{w,}}i}} = {{\mathbf{f}}_i} + \left( {\alpha {\mathbf{w}}(i) - I\Upphi} \right){\mathbf{p}}, \end{equation} where \begin{equation} \Upphi \triangleq \frac{{{\mathbf{f}}_i^T{\mathbf{p}}}}{{\left\| {\mathbf{p}} \right\|_2^2}}, \end{equation} $\{\cdot\}^T$ is the transpose operator, and $0<\alpha<1$ controls the watermark embedding strength. For simplicity, parameter $I$ is a binary indicator, i.e., if $I=0$, the scheme is based on SS, while if $I=1$, the scheme is based on ISS. \item After embedding the payload $\mathbf{w}$, $\mathbf{Y}_\textnormal{w}$ is obtained by simply replacing its subset $\tilde{\mathbf{Y}}$ with $\tilde{\mathbf{Y}}_\textnormal{w}$. Then, perform the inverse STFT according to the same framing rule as used in Step 1, reorder the output into vector form, and discard the imaginary part to obtain $\mathbf{x}_\textnormal{w}$. \item Calculate the ODG value according to $\mathbf{x}$ and $\mathbf{x}_\textnormal{w}$.
Adjust parameter $\alpha$ according to a desired ODG level, i.e., if the ODG value is greater than the desired value (more imperceptible), then $\alpha$ can be slightly increased as long as the resultant ODG is within a tolerable distance of the desired value; if the ODG value is smaller than the desired value (less imperceptible), then $\alpha$ should be reduced accordingly. This process is shown in Fig. \ref{Tune}. \end{enumerate} The watermark embedding process is visualized in Fig. \ref{Embed}. At the receiving end, assuming an error-free channel, the extraction of the payload, in terms of the detection of each embedded information bit, is carried out as follows. \begin{enumerate} \item Partition $\mathbf{x}_\textnormal{w}$ into non-overlapping frames $\mathbf{x}_{\textnormal{w},i}$ with $M_0$ samples, and perform the Hilbert transform similar to (\ref{Hilbert}). \item Compute the non-symmetric STFT of $\mathbf{x}_{\textnormal{w}}$, and obtain the TF representation $\mathbf{Y}_{\textnormal{w}}$. Specifically, perform the FFT for each frame, \begin{equation} {{\mathbf{y}}_{\textnormal{w},i}} = {\mathbf{H}}{{\mathbf{x}}_{\textnormal{w},i}}, \end{equation} then we have \begin{equation} {\mathbf{Y}_\textnormal{w}} = [{{\mathbf{y}}_{\textnormal{w},0}},{{\mathbf{y}}_{\textnormal{w},1}}, \ldots ,{{\mathbf{y}}_{\textnormal{w}, {\left\lceil {N/M_0} \right\rceil - 1}}}]. \end{equation} Further, according to $f_1$ and $f_2$, construct the sub-matrix $\tilde{\mathbf{Y}}_\textnormal{w}\in\mathbb{C}^{M\times \lceil {N/M_0} \rceil}$. \item Partition $\tilde{\mathbf{Y}}_\textnormal{w}$ into square patches using a $W\times W$ window, and index the patches in raster scanning order. \item Calculate the average energy of each patch.
Denote each patch as $\mathbf{P}_{\textnormal{w},j}$, $j\in\{0,1,\ldots,M{\left\lceil {N/M_0} \right\rceil}/{W^2}-1\}$; then the average energy is given by \begin{equation} {E_{\textnormal{w},j}} = \frac{1}{{{W^2}}}\sum\limits_{{m_1} = 0}^{W - 1} {\sum\limits_{{m_2} = 0}^{W - 1} {\left| {{{\mathbf{P}}_{\textnormal{w},j}}({m_1},{m_2})} \right|^2} } . \end{equation} \item Sort ${E_{\textnormal{w},j}}$ in ascending order, and find the $P$ patches with the least energy values. Vectorize these patches to form $\mathbf{f}_{\textnormal{w},i}$, ordered column-wise and from top to bottom. The embedded information bit is estimated by the following function \begin{eqnarray}\label{Extract} \hat{\mathbf{w}}(i) & = & \textnormal{sgn} \left\langle {\Re({{\mathbf{f}}_{\textnormal{w},i}}),{\mathbf{p}}} \right\rangle /\left\| {\mathbf{p}} \right\|_2^2 \nonumber\\ & = & \textnormal{sgn} \left( {\frac{{\Re({\mathbf{f}}_i)^T{\mathbf{p}} + \left( {\alpha {\mathbf{{w}}}(i) - I\Re({\Upphi}) } \right)\left\| {\mathbf{p}} \right\|_2^2}}{{\left\| {\mathbf{p}} \right\|_2^2}}} \right) \nonumber\\ & = & \textnormal{sgn} \left( {(1 - I)\Re({\Upphi}) + \alpha {\mathbf{{w}}}(i)} \right), \end{eqnarray} where $\Re(\cdot)$ denotes the real part. \end{enumerate} It can be seen from (\ref{Extract}) that the SS scheme ($I=0$) suffers from the host signal interference $\Upphi$, while the ISS scheme ($I=1$) is able to remove the interference term in a closed-loop environment. Note that the above embedding and extraction schemes can be adapted by simply replacing the STFT with the STCT or another transform. \subsection{Feature Invariance} In this subsection, we address an important issue in validating the proposed system, i.e., feature invariance before and after watermark embedding. It can be seen from the embedding function (\ref{EmbedFun}) that the energy of a watermarked feature vector is altered.
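The per-feature embedding rule and the correlation detector above can be sketched numerically as follows (a hedged illustration assuming NumPy; the feature size and strength $\alpha$ are arbitrary toy values, not the paper's settings). With $I=1$ (ISS), the host term $\Upphi$ cancels exactly, so the bit is always recovered in a closed loop:

```python
import numpy as np

rng = np.random.default_rng(1)
W2, alpha = 256, 0.05                     # W^2 samples per feature, embedding strength
p = rng.choice([-1.0, 1.0], size=W2)      # spreading sequence p
f = 0.01 * (rng.standard_normal(W2) + 1j * rng.standard_normal(W2))  # low-energy TF feature

def embed(f, w_bit, p, alpha, I):
    """f_w = f + (alpha*w - I*Phi) * p  (I=0: SS, I=1: ISS)."""
    Phi = (f @ p) / (p @ p)               # host-interference term
    return f + (alpha * w_bit - I * Phi) * p

def extract(f_w, p):
    """w_hat = sgn(<Re(f_w), p> / ||p||^2)."""
    return np.sign((f_w.real @ p) / (p @ p))

for w_bit in (+1, -1):
    # ISS cancels the host term Phi, so the bit is always recovered
    assert extract(embed(f, w_bit, p, alpha, I=1), p) == w_bit
```

With `I=0` (plain SS) the residual term `Phi` can outweigh `alpha` for strong host content, which is exactly the host-interference weakness the ISS variant removes.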
Therefore, after the whole embedding process, the $P$ selected patches should still be the ones with the lowest energy levels among the feature patches in $\mathbf{Y}_\textnormal{w}$. To study the feature recovery property of the proposed system under different TF transforms, the recovery results using a sample audio clip in a closed-loop environment are depicted in Fig. \ref{FeatureRecover}, where the audio clip has a duration of $10$ seconds, and $P=32$. It can be seen that the STFT based method is more suitable for the proposed framework. The reason is that the DCT tends to compact the signal's energy into a smaller frequency band and also to de-correlate the signal in the frequency domain, making the low-energy regions more ambiguous in terms of energy differences. Therefore, we only consider the STFT as the TF analysis tool in the following experiments. Extensive experimental results will be provided later to demonstrate how additive noise and other processing attacks affect the effectiveness of the proposed framework. It is worth noting that there exists another solution to the requirement of feature invariance, which can be obtained using an indexing array that points to $P$ randomly chosen patches to identify which patches are watermarked. This array can be considered a private key shared among the authority and trusted parties. \textbf{Note that the system proposed in the previous subsection is strictly a blind watermarking scheme whose watermark extraction does not require any auxiliary information, but if the random indexing array is introduced, then this array, serving as a key, should be transmitted to authorized receivers via some secure channel.} The study of random index key based TF feature domain watermarking is noted here for future research attention. \begin{figure}[!t] \centering \subfigure[\small Top: STFT. Mid.: Feature patches.
Bot.: Recovered patches.]{\includegraphics [width=3.7in]{Fig_method_b.eps}}\\[-.5em] \subfigure[\small Top: STCT. Mid.: Feature patches. Bot.: Recovered patches.]{\includegraphics [width=3.7in]{Fig_method_c.eps}}\\[-.5em] \caption{Detection results of watermark positions via (a) STFT and (b) STCT.} \label{FeatureRecover} \end{figure} \begin{figure}[!t] \centering \subfigure[DR versus DWR.]{\includegraphics [width=2.8in]{Fig_dwr.eps}}\\[-.5em] \subfigure[DR versus ODG.]{\includegraphics [width=2.8in]{Fig_odg.eps}}\\[-.5em] \caption{Watermark detection rates averaged over $3$ audio samples: (a) Detection Rate versus DWR. (b) Detection Rate versus ODG.} \label{Imp} \end{figure} \section{Evaluations and Experimental Results} In this section, we carry out extensive experiments to evaluate the proposed framework in terms of imperceptibility and robustness. The imperceptibility is measured quantitatively by the document-to-watermark ratio (DWR) and the ODG. For comparison, the counterpart systems based on the DCT and the scheme in Fig. \ref{Schemes} (b) are also implemented. Therefore, the experiments and comparisons are conducted on four systems, i.e., STFT-SS, STFT-ISS, DCT-SS, and DCT-ISS. Some measurement metrics are defined as follows. First, the DWR is given by \begin{equation} \textnormal{DWR} = 10{\log _{10}}\frac{{\left\| {\mathbf{x}} \right\|_2^2}}{{\left\| {{{\mathbf{x}}_\textnormal{w}} - {\mathbf{x}}} \right\|_2^2}}. \end{equation} The signal-to-noise ratio (SNR) is defined by \begin{equation} \textnormal{SNR} = 10{\log _{10}}\frac{{\left\| {{{\mathbf{x}}_{\textnormal{w}}}} \right\|_2^2}}{{{\sigma ^2}}}, \end{equation} where $\sigma^2$ is the variance of the additive white Gaussian noise (AWGN). To characterize the watermark extraction performance, the detection rate (DR), i.e., the percentage of correctly detected bits, is defined by \begin{equation} \textnormal{DR} = \left(1 - \frac{1}{{2P}}\sum\limits_{i = 0}^{P - 1} {\left| {{\mathbf{w}}(i) - \hat{\mathbf{w}}(i)} \right|}\right) \times 100\%.
\end{equation} Parameters are set as follows: $M_0=1024$, $f_1=60$ Hz, $f_2=2800$ Hz, $W=16$, $P=32$ or $32^2$, $L=W^2$, and $\alpha$ is heuristically tuned such that the ODG is no less than $-1$. During comparison, the ODG values are tuned to be similar across systems for fairness. In our simulation, four music samples are selected, including a male song ($240$ s), a female song ($10$ s), a violin and piano duet ($10$ s), and electronic music ($10$ s). All the sample audio files have 16-bit quantization and a sampling frequency of $44.1$ kHz. \subsection{Imperceptibility} The imperceptibility of the implemented systems is demonstrated in Fig. \ref{Imp}, where three of the four audio samples with $10$ s duration are used to generate the performance curves, and AWGN with $\textnormal{SNR} = 30$ dB is considered. It can be observed from both sub-figures that the proposed schemes consistently yield better imperceptibility when the DR values are the same. In terms of ODG, the proposed systems obtain ODG values between $-0.5$ and $0$ with DRs above $90\%$. Further, the DWR and ODG values of the four audio watermarking systems applied to the four audio samples are summarized in Tables \ref{Table1} and \ref{Table2}. It can be seen that the performance improvement in imperceptibility is consistent across different samples, and the inability of ISS based methods to control imperceptibility is resolved, thanks to the localized embedding in selected features. The robustness testing results will be provided in the next subsection, also based on the four audio samples, whose corresponding imperceptibility information is shown in the two tables. We will demonstrate that while the proposed systems achieve improved imperceptibility, the robustness against several common processing attacks is also improved.
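For concreteness, the DWR and DR metrics above can be computed as in the following sketch (assuming NumPy; the signals and bit errors are synthetic toy data, not the paper's samples). Here two bit errors out of $P=32$ give a DR of $93.75\%$:

```python
import numpy as np

def dwr_db(x, x_w):
    """Document-to-watermark ratio (dB): host energy over watermark energy."""
    return 10 * np.log10(np.sum(x**2) / np.sum((x_w - x)**2))

def detection_rate(w, w_hat):
    """Percentage of correctly detected bits, w and w_hat in {+1, -1}."""
    P = len(w)
    return (1 - np.sum(np.abs(w - w_hat)) / (2 * P)) * 100

rng = np.random.default_rng(2)
x = rng.standard_normal(44_100)                 # 1 s of toy host signal
x_w = x + 0.003 * rng.standard_normal(44_100)   # toy watermark perturbation
w = rng.choice([-1, 1], size=32)                # payload, P = 32
w_hat = w.copy()
w_hat[:2] *= -1                                 # two detection errors

print(round(dwr_db(x, x_w), 1))                 # roughly 50 dB at this noise level
print(detection_rate(w, w_hat))                 # 93.75
```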
\begin{table}[!t] \renewcommand{\arraystretch}{1.2} \tabcolsep 4mm \caption{Imperceptibility of DCT-SS and DCT-ISS methods} \centering \vspace{-0.5em} \begin{tabular}{c|c c| c c} \firsthline \multirow{ 2}{*}{{Data}} & \multicolumn{2}{c|}{{DCT-SS}} & \multicolumn{2}{c}{{DCT-ISS}} \\ \cline{2-3} \cline{4-5} & {DWR (dB)} & {ODG} & {DWR (dB)} & {ODG} \\ \hline Sample 1 & 33.9 & -0.76 & 33.7 & -0.70 \\ Sample 2 & 34.7 & -0.13 & 32.4 & -0.02 \\ Sample 3 & 40.4 & -0.40 & 32.8 & -0.90 \\ Sample 4 & 33.5 & -0.95 & 32.3 & -0.44 \\ \lasthline \end{tabular}\label{Table1} \end{table} \begin{table}[!t] \renewcommand{\arraystretch}{1.2} \tabcolsep 4mm \caption{Imperceptibility of STFT-SS and STFT-ISS methods} \centering \vspace{-0.5em} \begin{tabular}{c|c c| c c} \firsthline \multirow{ 2}{*}{{Data}} & \multicolumn{2}{c|}{{STFT-SS}} & \multicolumn{2}{c}{{STFT-ISS}} \\ \cline{2-3} \cline{4-5} & {DWR (dB)} & {ODG} & {DWR (dB)} & {ODG} \\ \hline Sample 1 & 39.9 & -0.61 & 39.6 & -0.44 \\ Sample 2 & 40.7 & -0.03 & 42.6 & -0.02 \\ Sample 3 & 46.4 & -0.43 & 42.1 & -0.40 \\ Sample 4 & 39.5 & -0.55 & 39.9 & -0.34 \\ \lasthline \end{tabular}\label{Table2} \end{table} \begin{figure}[!h] \centering \includegraphics[height=2in,width=3.5in]{Fig_logo1}\\[-2em] \caption{$32\times 32$-bit watermark logos used with DCT-ISS and STFT-ISS.} \label{logos} \end{figure} \begin{table*}[!h] \renewcommand{\arraystretch}{1.45} \tabcolsep 2.8mm \caption{DRs (\%) of DCT-SS method under different attacks (first setting)} \centering \vspace{-0.5em} \begin{tabular}{ c |c|c|c|c|c|c } \hline \multicolumn{2}{c|}{{Attack Type}} & {Sample 1} & {Sample 2} & {Sample 3} & {Sample 4} & {Average} \\ \hline \multirow{1}{*}{{Re-Quantization}} & 8 Bit & 84.4 & 96.8 & 78.1 & 93.8 & 88.2 \\ \hline \multirow{2}{*}{{Gaussian Noise}} & 30 dB & 84.4 & 96.8 & 78.1 & 90.6 & 87.5 \\ & 50 dB & 84.4 & 96.8 & 78.1 & 93.8 & 88.2 \\ \hline \multirow{2}{*}{{Amplitude Scal.}} & 1.2 & 84.4 & 96.8 & 78.1 & 93.8 & 88.2 \\ & 1.8 & 84.4 & 96.8 & 78.1
& 93.8 & 88.2 \\ \hline \multirow{2}{*}{{AAC Compression}} & 96 kbps & 84.4 & 96.8 & 78.1 & 93.8 & 88.2 \\ & 160 kbps & 84.4 & 96.8 & 78.1 & 93.8 & 88.2 \\ \hline \multirow{2}{*}{{MP3 Compression}} & 64 kbps & 84.4 & 96.8 & 78.1 & 93.8 & 88.2 \\ & 128 kbps & 84.4 & 96.8 & 78.1 & 93.8 & 88.2 \\ \hline \end{tabular}\label{Table3} \end{table*} \begin{table*}[!h] \renewcommand{\arraystretch}{1.45} \tabcolsep 2.8mm \caption{DRs (\%) of DCT-ISS method under different attacks (first setting)} \centering \vspace{-0.5em} \begin{tabular}{ c |c|c|c|c|c|c } \hline \multicolumn{2}{c|}{{Attack Type}} & {Sample 1} & {Sample 2} & {Sample 3} & {Sample 4} & {Average} \\ \hline \multirow{1}{*}{{Re-Quantization}} & 8 Bit & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{2}{*}{{Gaussian Noise}} & 30 dB & 75.0 & 75.0 & 68.8 & 71.9 & 72.7 \\ & 50 dB & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{2}{*}{{Amplitude Scal.}} & 1.2 & 100 & 100 & 100 & 100 & 100 \\ & 1.8 & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{2}{*}{{AAC Compression}} & 96 kbps & 96.8 & 96.8 & 96.8 & 96.8 & 96.8 \\ & 160 kbps & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{2}{*}{{MP3 Compression}} & 64 kbps & 87.5 & 100 & 93.8 & 100 & 95.3 \\ & 128 kbps & 100 & 100 & 96.8 & 100 & 99.2 \\ \hline \end{tabular}\label{Table4} \end{table*} \begin{table*}[!h] \renewcommand{\arraystretch}{1.45} \tabcolsep 2.8mm \caption{DRs (\%) of STFT-SS method under different attacks (first setting)} \centering \vspace{-0.5em} \begin{tabular}{ c |c|c|c|c|c|c } \hline \multicolumn{2}{c|}{{Attack Type}} & {Sample 1} & {Sample 2} & {Sample 3} & {Sample 4} & {Average} \\ \hline \multirow{1}{*}{{Re-Quantization}} & 8 Bit & 96.8 & 100 & 81.3 & 96.8 & 93.8 \\ \hline \multirow{2}{*}{{Gaussian Noise}} & 30 dB & 93.7 & 100 & 78.1 & 93.8 & 91.4 \\ & 50 dB & 96.8 & 100 & 81.3 & 96.8 & 93.8 \\ \hline \multirow{2}{*}{{Amplitude Scal.}} & 1.2 & 96.8 & 100 & 81.3 & 96.8 & 93.8 \\ & 1.8 & 96.8 & 100 & 81.3 & 96.8 & 93.8 \\ \hline \multirow{2}{*}{{AAC
Compression}} & 96 kbps & 96.8 & 100 & 78.1 & 96.8 & 92.9 \\ & 160 kbps & 96.8 & 100 & 81.3 & 96.8 & 93.8 \\ \hline \multirow{2}{*}{{MP3 Compression}} & 64 kbps & 96.8 & 100 & 78.1 & 96.8 & 92.9 \\ & 128 kbps & 96.8 & 100 & 81.3 & 96.8 & 93.8 \\ \hline \end{tabular}\label{Table5} \end{table*} \begin{table*}[!h] \renewcommand{\arraystretch}{1.45} \tabcolsep 2.8mm \caption{DRs (\%) of STFT-ISS method under different attacks (first setting)} \centering \vspace{-0.5em} \begin{tabular}{ c |c|c|c|c|c|c } \hline \multicolumn{2}{c|}{{Attack Type}} & {Sample 1} & {Sample 2} & {Sample 3} & {Sample 4} & {Average} \\ \hline \multirow{1}{*}{{Re-Quantization}} & 8 Bit & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{2}{*}{{Gaussian Noise}} & 30 dB & 96.8 & 93.7 & 96.8 & 87.5 & 93.7 \\ & 50 dB & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{2}{*}{{Amplitude Scal.}} & 1.2 & 100 & 100 & 100 & 100 & 100 \\ & 1.8 & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{2}{*}{{AAC Compression}} & 96 kbps & 100 & 100 & 100 & 100 & 100 \\ & 160 kbps & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{2}{*}{{MP3 Compression}} & 64 kbps & 100 & 100 & 100 & 100 & 100 \\ & 128 kbps & 100 & 100 & 100 & 100 & 100 \\ \hline \end{tabular}\label{Table6} \end{table*} \begin{table*}[!h] \renewcommand{\arraystretch}{1.5} \tabcolsep 2.8mm \caption{DRs (\%) of DCT-ISS and STFT-ISS methods (second setting)} \vspace{-0.5em} \centering \begin{tabular}{ cc|c|c|c|c| c} \hline \multicolumn{2}{c|}{\multirow{2}{*}{{Attack Type}}} & \multicolumn{2}{c|}{DCT-ISS} & \multicolumn{2}{c|}{STFT-ISS} & \multirow{2}{*}{\shortstack{Average \\ Improvement}} \\ \cline{3-6} & & $\;\;$IEEE$\;\;$ & Springer & $\;\;$IEEE$\;\;$ & Springer & \\ \hline \multirow{1}{*}{{Re-Quantization}} & \multicolumn{1}{|c|}{8 Bit} & 94.2 & 95.1 & 100 & 100 & 5.35 \\ \hline \multirow{3}{*}{{Gaussian Noise}} & \multicolumn{1}{|c|}{30 dB}& 55.2 & 58.9 & 85.2 & 86.7 & 29.3 \\ & \multicolumn{1}{|c|}{40 dB} & 70.1 & 69.5 & 94.3 & 95.0 & 24.9 \\ &
\multicolumn{1}{|c|}{50 dB} & 92.6 & 94.5 & 98.6 & 100 & 5.75 \\ \hline \multirow{2}{*}{{Amplitude Scal.}} & \multicolumn{1}{|c|}{1.2} & 100 & 100 & 100 & 100 & 0 \\ & \multicolumn{1}{|c|}{1.8} & 100 & 100 & 100 & 100 & 0 \\ \hline \multirow{3}{*}{{AAC Compression}} & \multicolumn{1}{|c|}{96 kbps} & 75.5 & 73.9 & 85.4 & 82.5 & 9.25 \\ & \multicolumn{1}{|c|}{128 kbps} & 81.5 & 81.9 & 90.0 & 87.9 & 7.25 \\ & \multicolumn{1}{|c|}{160 kbps} & 93.4 & 91.9 & 100 & 100 & 7.35 \\ \hline \multirow{3}{*}{{MP3 Compression}} & \multicolumn{1}{|c|}{64 kbps} & 64.6 & 75.1 & 85.1 & 86.3 & 15.9 \\ & \multicolumn{1}{|c|}{128 kbps} & 86.7 & 99.2 & 100 & 100 & 7.05\\ & \multicolumn{1}{|c|}{192 kbps} & 99.5 & 100 & 100 & 100 & 0.25 \\ \hline & \multicolumn{1}{c}{ODG} & \multicolumn{1}{c}{-0.75} & \multicolumn{1}{c}{-0.76} & \multicolumn{1}{c}{-0.65} & \multicolumn{1}{c}{-0.66} & \\ \end{tabular}\label{Table7} \end{table*} \begin{figure*}[!t] \centering \includegraphics[height=3.7in,width=6.3in]{Fig_logos}\\[-2em] \caption{Recovered $32\times 32$-bit watermark logos (second setting) under different significant attacks, using DCT-ISS and STFT-ISS.} \label{logos2} \end{figure*} \subsection{Robustness to a Series of Attacks} The attacks considered in this paper include adding Gaussian noise, amplitude scaling, AAC lossy compression, and MP3 lossy compression, each with several different attack strength settings. Here, we use two sets of watermarks and two sets of sample audio clips. In the first setting, a random binary sequence of $32$ bits is used as the watermark, which is embedded into four audio samples of $10$ seconds each. The embedding DWR and ODG values in this setting are given in Tables \ref{Table1} and \ref{Table2}. In the second setting, two graphic logos of $32\times 32$ bits, as shown in Fig. \ref{logos}, are used for watermark embedding, and the $4$-minute audio file is used as the host signal.
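The consistent $100\%$ DRs under amplitude scaling can be illustrated with a short hedged sketch (assuming NumPy; toy values, not the actual embedder): the sign of the correlation detector is invariant to any positive scaling of the watermarked feature, whereas additive noise can flip the decision once its projection onto $\mathbf{p}$ exceeds the embedding strength $\alpha$ with opposite sign:

```python
import numpy as np

rng = np.random.default_rng(3)
W2, alpha = 256, 0.05
p = rng.choice([-1.0, 1.0], size=W2)     # spreading sequence
f_w = alpha * p                          # idealized ISS-embedded feature carrying bit w = +1

def detect(f_w, p):
    """Correlation sign detector, as in the extraction scheme."""
    return np.sign((f_w @ p) / (p @ p))

# Positive amplitude scaling cannot change the sign of the correlation,
# so the detected bit (and hence the DR) is unaffected.
for scale in (1.2, 1.8):
    assert detect(scale * f_w, p) == detect(f_w, p) == 1

# Additive noise flips the decision once its projection onto p
# exceeds alpha in magnitude with opposite sign.
noise = -2 * alpha * p                   # projection onto p is -2*alpha
assert detect(f_w + noise, p) == -1
```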
The ODG values for the second setting are given at the bottom of Table \ref{Table7}. For simplicity, only the DCT-ISS and STFT-ISS methods are implemented in the second setting. The DRs against several attacks with different attack strength settings for the DCT-SS, DCT-ISS, STFT-SS, and STFT-ISS methods are shown in Tables \ref{Table3} to \ref{Table6}, respectively. For both the DCT-based and STFT-based methods, we observe better robustness when ISS is used, in agreement with the classical property of the ISS technique. More importantly, these tables show that the proposed TF feature domain watermarking systems outperform the DCT-based frequency domain methods for both the SS and ISS implementations. The best performance is observed in Table \ref{Table6}, which corresponds to the STFT-ISS method. Recall Table \ref{Table2}, in which the DWR and ODG values of the STFT-ISS method are very close to, or even slightly better than, those obtained from its SS counterpart or the DCT-based methods. \textbf{Therefore, we conclude that the proposed system simultaneously achieves improved imperceptibility and robustness.} Finally, we test the two ISS-based systems, i.e., DCT-ISS and STFT-ISS, in the second setting, where the IEEE and Springer logos are embedded in a $4$-minute audio clip. The results are shown in Table \ref{Table7}. The improvement of the proposed system over its frequency domain counterpart is consistent across the different attacks for both logos. The recovered IEEE and Springer logos under several significant attacks using DCT-ISS and STFT-ISS are shown in Fig. \ref{logos2}. Two observations can be made: the DCT-ISS fails to reconstruct the original logos, while the STFT-ISS, although not completely error-free, still recovers the shape and content of the logos well. \section{Conclusion} In this paper, we proposed an audio watermarking framework from the perspective of TF analysis.
Different from existing schemes, the proposed framework treats the 2D TF representation of the host audio signal as the raw signal for watermark embedding. The 2D TF image is partitioned into small patches, the patches with lower energy values are selected as features, and watermarks are embedded into the vectorized feature patches using the SS and ISS mechanisms. Extensive experiments have been carried out in comparison with counterpart systems that embed the watermark in the frequency domain. The results show consistent performance improvements, using both random sequences and image logos as watermarks. Due to the requirement of feature invariance, the proposed systems are less robust against AWGN attacks. We have noted in Section II.C that using a random indexing key instead of sorting patch energies would be an effective solution to this problem, and future research will address this issue. In general, it is worth investigating more efficient TF feature domain audio watermarking methods that could lead to substantially improved robustness against desynchronization attacks. This may be approached by exploring desynchronization-invariant features in the TF domain and utilizing both SS- and QIM-based embedding mechanisms.
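As a minimal illustration of the embedding mechanism summarised above, the following sketch embeds and detects one watermark bit in a single vectorized feature patch via ISS (the function names and parameter values are ours; the STFT front end, patch partitioning, and energy-based patch selection are omitted):

```python
import numpy as np

def embed_iss_patch(x, u, bit, alpha=1.0, lam=0.8):
    """ISS embedding of one bit (+1/-1) into a vectorized feature patch x
    along carrier u; lam partially cancels the host-interference term x_u."""
    x_u = np.dot(x, u) / np.dot(u, u)
    return x + (alpha * bit - lam * x_u) * u

def detect_iss_patch(y, u):
    """Correlation detector: sign of the projection onto the carrier."""
    return 1 if np.dot(y, u) >= 0 else -1
```

Setting `lam=0` recovers plain SS embedding, which is why ISS detection is more reliable at the same embedding distortion.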
\section{Introduction} The gravitational memory effect results in a net change in the relative separation of distant particles. After a wave passes it is determined by the difference between the initial and final radiation strain measured by a gravitational wave detector. The possibility of observable astrophysical consequences of the effect was first studied in linearised gravity where the memory resulted from a burst of massive particles which escaped to infinity, as described by the retarded solution of the linearised Einstein equations~\cite{zeld,brag}. In previous work~\cite{sky}, we showed that this result could also be obtained in linearised theory by considering the transition from an initial state whose exterior was described by a Schwarzschild metric at rest to a final state whose exterior was a boosted exterior Schwarzschild metric. This result was subsequently extended to the nonlinear treatment of the transition from a stationary to boosted Schwarzschild exterior~\cite{boost,boost2}. Here we further extend this treatment of the memory effect to the boosted Kerr metric. Our linearised treatment of the memory effect was based upon the stationary and boosted versions of the ingoing Kerr-Schild version of the Schwarzschild metric to describe the far field of the initial and final states. In order to extend this result to the nonlinear case three major differences from the linearised theory had to be dealt with. First, the linearised result was based upon the boost symmetry of the unperturbed Minkowski background. The Kerr-Schild metrics~\cite{ks1,ks2} have the form \begin{equation} g_{ab} = \eta_{ab} +2 H n_a n_b \label{eq:ksk} \end{equation} comprised of a Minkowski metric $\eta_{ab}$, a principal null vector $n_a$ (with respect to both $\eta_{ab}$ and $g_{ab}$) and a scalar field $H$. 
In the nonlinear case, the time reflection symmetry of the Schwarzschild metric leads to two different choices of a ``Minkowski background'' $\eta_{ab}$ depending on whether $n^a$ is chosen to be in the ingoing or outgoing direction. The relation between Boyer-Lindquist coordinates~\cite{BL} and the ingoing and outgoing versions of Kerr-Schild coordinates is described in Sec.~\ref{sec:ksk}. Second, no analogue of the Green function exists in the nonlinear case to construct a retarded solution. Instead, the retarded solution due to the emission of radiation from an accelerated particle is characterised by the absence of ingoing radiation from past null infinity ${\mathcal I}^-$. The vanishing of radiation memory at ${\mathcal I}^-$ is necessary for the absence of ingoing radiation. This condition allows the ingoing radiation strain, which forms the free characteristic initial data on ${\mathcal I}^-$, to be set to zero. Otherwise, non-zero radiation memory at ${\mathcal I}^-$ would require ingoing radiation. Since an initial stationary Schwarzschild metric has vanishing radiation strain at ${\mathcal I}^-$, the final boosted metric must also have vanishing radiation strain at ${\mathcal I}^-$ if there is no intervening ingoing radiation. We found unexpected differences in the boosts associated with the ingoing and outgoing versions of the Kerr-Schild metric. The memory effect due to the final velocity of a Schwarzschild black hole is only correctly described by the boost ${\mathcal B}$ associated with the Poincar{\'e} group of the Minkowski background of the ingoing Kerr-Schild metric. This is because ${\mathcal B}$ belongs to the preferred Lorentz subgroup of the Bondi-Metzner-Sachs (BMS)~\cite{Sachs_BMS} asymptotic symmetry group at ${\mathcal I}^-$ which is picked out by the stationarity of the spacetime. This preferred Lorentz subgroup maps asymptotically shear-free cross-sections of ${\mathcal I}^-$ into themselves. 
Such cross-sections are non-distorted in the sense that their radiation strain vanishes. Thus ${\mathcal B}$ does not introduce strain at ${\mathcal I}^-$ so that the transition from a stationary to boosted state is consistent with the lack of ingoing radiation from ${\mathcal I}^-$. But ${\mathcal B}$ is not a preferred BMS Lorentz symmetry of future null infinity ${\mathcal I}^+$ where it induces a supertranslation component of the BMS group. As shown in~\cite{boost2}, the strain introduced by this supertranslation results in non-zero radiation memory at ${\mathcal I}^+$. This memory effect is in precise agreement with the linearised result based upon the retarded Green function. Conversely, the boost symmetry of the Minkowski metric associated with the outgoing version of the Kerr-Schild metric is a preferred (supertranslation-free) BMS symmetry at ${\mathcal I}^+$. Consequently, it introduces neither strain nor radiation memory at ${\mathcal I}^+$. Third, in the nonlinear regime the mass of the final black hole depends upon the energy loss carried off by gravitational waves. This couples the memory effect due to the escape of an unbound particle to the Christodoulou memory effect~\cite{Christ_mem} due to energy loss from gravitational radiation. In extending this approach from the boosted Schwarzschild to the boosted Kerr metric there are further complications, in addition to the algebraic complexity. Unlike the Schwarzschild case, the principal null directions of the Kerr metric are not hypersurface orthogonal. As a result, there is no natural way to construct a null coordinate system in order to study the asymptotic behavior at null infinity. Here we show that there does exist a natural choice of {\it hyperboloidal} coordinates which provide a spacelike foliation extending asymptotically to null infinity. 
These hyperboloidal hypersurfaces are the null hypersurfaces of the Minkowski background for the Kerr-Schild version of the Kerr metric, which we abbreviate by KSK metric. For the ingoing KSK metric (see Sec.~\ref{sec:scrim}), the hyperboloids approach ${\mathcal I}^-$ and for the outgoing case (see Sec.~\ref{sec:scrip}) they approach ${\mathcal I}^+$. The associated coordinates lead to a straightforward Penrose compactification of null infinity and allow an unambiguous treatment of the asymptotic radiation strain and identification of the Poincar{\'e} symmetries of the Minkowski background with BMS symmetries. Such curved space hyperboloidal hypersurfaces have been utilised in formulating the Cauchy problem for Einstein's equations in a manner suitable for radiation studies~\cite{friedrich}. Their existence for the Kerr exterior was investigated in terms of an asymptotic series expansion in~\cite{anil} and in terms of an implicit analytic scheme for the Teukolsky equation in~\cite{anil2}. Here we construct a simple, geometrically natural and purely analytic hyperboloidal foliation that globally covers the entire Kerr exterior. An additional factor is that the Kerr metric does not have the time reflection symmetry of the Schwarzschild metric, but instead a $(\tau,\varphi)\rightarrow (-\tau,-\varphi)$ symmetry in Boyer-Lindquist coordinates, as described in Sec.~\ref{sec:ksk}. This further complicates relating asymptotic properties at ${\mathcal I}^-$ and ${\mathcal I}^+$. Although these issues introduce considerable technical complication, our main results for the Schwarzschild case extend without change to the Kerr case. This could be expected on heuristic grounds since the results depend only upon the asymptotic behavior of the boosted metric for which the spin of the black hole enters at a higher order in $1/r$ than its mass. 
Nevertheless, an important aspect of the memory effect is the supertranslation ambiguity, which enters the bookkeeping of angular momentum. At some deeper level, which deserves further investigation, the spin of the black hole should enter. There has been extensive elaboration and generalisation of the memory effect since its first astrophysically relevant discussion~\cite{zeld} in terms of the linearised description of the burst of massive particles ejected to infinity. This remained the dominant mechanism until Christodoulou demonstrated a purely nonlinear memory effect due to mass loss from an isolated system by gravitational waves. Extensive subsequent work showed that the Christodoulou effect was not necessarily a nonlinear effect but that analogous effects occurred in linear theory due to radiative mass loss to null infinity by Maxwell fields or other rest mass zero fields or particles. This motivated a change in terminology of the Christodoulou effect to a ``null'' memory effect~\cite{Bieri_cov}, as opposed to a nonlinear memory effect. It is now understood that the memory effect results either from radiation to null infinity by zero rest mass fields (or particles) or from transport to timelike infinity by massive particles boosted with escape velocity. Three distinct mechanisms for the memory effect have been proposed. The first is what historically has been called the ``linear'' memory effect resulting from a burst of ejected particles, as described in~\cite{zeld}. For the present purpose, we refer to this as the boost mechanism. The second mechanism, i.e. null memory, generalises the Christodoulou effect to rest mass zero fields or particles~\cite{Bieri_em,Bieri_neutrino,tolish,Bieri_null}. The third is a homogeneous wave mechanism~\cite{sky} due to source-free gravitational waves emanating from ${\mathcal I}^-$ to ${\mathcal I}^+$. All three mechanisms exist in linearised theory.
They can be separated into an ``ordinary'' memory effect in which no energy is lost to ${\mathcal I}^-$ and a null memory effect. The treatment of null memory only depends upon asymptotic properties in the neighbourhood of null infinity, which readily extends to nonlinear gravitational waves, as demonstrated in terms of the asymptotic properties of the vacuum Bianchi identities~\cite{frauen}, or in terms of the asymptotic properties of Maxwell or other mass zero radiation fields propagating in curved spacetime. These mechanisms combine to form the net memory effect measured by the change in radiation strain between the infinite future and infinite past retarded times. In this paper we demonstrate how the memory effect due to a boosted Kerr black hole can be treated in the nonlinear case. The nonlinear treatment is purely asymptotic but, as explained above, differs considerably from the linearised derivation based upon the retarded flat space Green function. No linear approximations are involved. Presumably the mechanism for the homogeneous wave memory also extends to the nonlinear theory although there is no general proof by construction of exact solutions as in the linearised case. Other memory effects based upon changes in angular momentum or center-of-mass integrals have also been proposed~\cite{spin_mem, mem_angular,nichols}. Although an exact Kerr metric is an unrealistic approximation to the exterior of a dynamical spacetime, it is a reasonable far field approximation for the final black hole state, in accordance with the no hair scenario. In Sec.~\ref{sec:mem}, we derive the nonlinear memory effect for the transition from a stationary to boosted Kerr black hole. In Sec.~\ref{sec:discuss}, we show how this result may be generalised and, as an example, treat the collision of two boosted black holes to form a final black hole. We derive upper and lower bounds for the boost memory in terms of the final mass of the Kerr black hole resulting from the collision.
Here the mass of the final black hole depends upon the energy loss in the intervening radiative period, which leads to the null memory effect. The difference between the final and initial radiation strains, as measured by the net memory effect, combines both the null and boost effects in a way which can only be determined by knowledge of the intervening radiative period. This is discussed further in Sec.~\ref{sec:discuss}. Kerr-Schild metrics have played an important role in the construction of exact solutions~\cite{exact}. Because their metric form (\ref{eq:ksk}) is invariant under the Lorentz symmetry of the Minkowski background metric $\eta_{ab}$, the boosted KSK metric has been important in numerical relativity in prescribing initial data for superimposed boosted and spinning black holes in a binary orbit~\cite{ksm1,ksm2}. The initial data for numerical simulations are prescribed in terms of the ingoing version of the KSK metric, whose advanced time coordinatisation extends across the future event horizon. The initial black hole velocities are generated by the boost symmetry of the Minkowski background for the ingoing KSK metric. This is in accord with our treatment of radiation memory. We denote abstract spacetime indices by $a,b,...$ and coordinate indices by $\alpha,\beta,...$. In addition, we denote 3-dimensional spatial indices for the background inertial coordinates by $i,j,...$, and denote the associated 2-dimensional spherical coordinates by $x^A=(\theta,\phi)$. We often use the standard comma notation to denote partial derivatives, e.g. $f_{,\alpha} =\partial f /\partial x^\alpha$. The distinction between the ingoing and outgoing Kerr-Schild metrics and their associated background Minkowski symmetries requires extra notational care. 
We retain the notation in our previous papers in which a superscript $(+)$ denotes quantities associated with the advanced time versions of the Schwarzschild metric and a superscript $(-)$ denotes quantities associated with the retarded time version. Corresponding to this notation, we use a superscript $(+)$ for quantities associated with the ingoing version of the KSK metric and a superscript $(-)$ for quantities associated with the outgoing version. As an example, the ingoing principal null vector is denoted by $n^{(+)}_a$ and its null rays emanate from past null infinity ${\mathcal I}^-$ and extend across the future event horizon, while the outgoing principal null vector is denoted by $n^{(-)}_a$, whose null rays extend to future null infinity ${\mathcal I}^+$. Because the memory effect is gauge invariant, it can be computed in either the inertial coordinates $x^{(+)\alpha}$ or $x^{(-)\alpha}$ associated with the ingoing or outgoing Minkowski backgrounds of the Kerr-Schild metric, respectively. In Sec.~\ref{sec:mem}, for technical convenience we choose the ingoing version $x^{(+)\alpha}$. However, the relation to the outgoing version $x^{(-)\alpha}$ is necessary to compute limits at ${\mathcal I}^+$. The details of this transformation are presented in Sec.~\ref{sec:ksk}. \section{The Kerr-Schild Kerr (KSK) metric and its associated Minkowski backgrounds} \label{sec:ksk} The Boyer-Lindquist coordinates~\cite{BL}, which we denote by $ (\tau,r,\vartheta,\varphi)$, provide the intermediate connection between the ingoing and outgoing versions of the KSK metric. In these coordinates, the Kerr metric is \begin{equation} ds^2 = -d\tau^2 +\Sigma \Big(\frac{dr^2}{\Delta}+d\vartheta^2\Big) +(r^2+a^2)\sin^2\vartheta d\varphi^2 +\frac{2mr}{\Sigma}(a\sin^2\vartheta d\varphi -d\tau)^2 , \label{eq:Schwarz_std} \end{equation} where $m$ is the mass, $a$ is the specific angular momentum and \begin{equation} \Sigma=r^2+a^2 \cos^2\vartheta\; , \quad \Delta = r^2-2mr+a^2\; .
\end{equation} Note that this substitutes $a\rightarrow-a$ in the formulae of~\cite{ks1,ks2} to agree with the standard convention that the sense of rotation is in the positive $\varphi$ direction. The KSK metric can be expressed in terms of either the ingoing principal null direction $n^{(+)a}$ or the outgoing principal null direction $n^{(-)a}$. These two forms of the metric have different inertial coordinates $x^{(\pm)\alpha} =(t^{(\pm)},x^{(\pm)},y^{(\pm)},z^{(\pm)} )$ for their corresponding Minkowski backgrounds $ \eta_{ab}^{(\pm)}$. The main details have been worked out by considering the $(\tau,\varphi) \rightarrow (-\tau,-\varphi)$ reflection symmetry of the Kerr metric in Boyer-Lindquist coordinates~\cite{BL}. The coordinate transformations leading from (\ref{eq:Schwarz_std}) to the ingoing ($+$) or outgoing $(-)$ Kerr-Schild form involve a generalisation of the Schwarzschild tortoise coordinate $r^*$, \begin{eqnarray} r^*& =& \int \big (\frac{r^2 +a^2}{r^2-2mr+a^2}\big) dr \nonumber \\ &=& r+ m \ln \big( \frac{r^2 -2mr +a^2}{4m^2} \big ) +\frac{m^2}{\sqrt{m^2-a^2}} \ln\big (\frac{r-m -\sqrt{m^2-a^2}}{r-m +\sqrt{m^2-a^2}} \big ) \label{eq:rstar} \end{eqnarray} and the intermediate angles \begin{equation} \label{eq:phi_trafo} \Phi^{(\pm)} = \varphi \pm a \int \f{dr}{\Delta} = \varphi \pm \frac{a}{2\sqrt{m^2-a^2}} \ln\Big (\frac{r-m -\sqrt{m^2-a^2}}{r-m +\sqrt{m^2-a^2}} \Big ) \;\; . \end{equation} The transformation from Boyer-Lindquist to Kerr-Schild coordinates can then be written compactly as \begin{eqnarray} \quad \quad t^{(\pm)}&=& \tau \pm (r - r^*) \label{eq:trafo_t} \\ x^{(\pm)} +iy^{(\pm)} &=& \sqrt{r^2+a^2}\sin\vartheta \exp\Big\{\mathrm{i}\big[\Phi^{(\pm)}\pm \arctan(a/r)\big]\Big\} \label{eq:trafo_xi}\\ \quad \quad z^{(\pm)}&=&r\cos\vartheta . \end{eqnarray} Here $x^{(+)2}+y^{(+)2}=x^{(-)2}+y^{(-)2}$ so we simply denote $x^2+y^2=x^{(\pm)2}+y^{(\pm)2}$. Similarly, we denote $z= z^{(\pm)}$.
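The closed form of the tortoise coordinate can be checked numerically against its defining integrand, $dr^*/dr=(r^2+a^2)/\Delta$ (a small sanity-check sketch of our own; the function names are not part of the paper):

```python
import numpy as np

def r_star(r, m, a):
    """Generalised tortoise coordinate r*, Eq. (rstar); valid in the exterior
    r > m + sqrt(m^2 - a^2) for sub-extremal spin a < m."""
    s = np.sqrt(m**2 - a**2)
    return (r + m * np.log((r**2 - 2*m*r + a**2) / (4*m**2))
            + (m**2 / s) * np.log((r - m - s) / (r - m + s)))

def dr_star_dr(r, m, a, h=1e-6):
    """Central-difference derivative of r*, which should equal (r^2+a^2)/Delta."""
    return (r_star(r + h, m, a) - r_star(r - h, m, a)) / (2 * h)
```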
The Boyer-Lindquist radial coordinate $r$ is then determined implicitly by \begin{equation} \frac{x^2+y^2}{r^2+a^2} +\frac{z^2}{r^2}=1 , \label{eq:r} \end{equation} independent of the choice of background coordinates. The resulting Kerr-Schild metrics in the background inertial coordinates $x^{(\pm)\alpha}$ are \begin{equation} g_{\alpha\beta} = \eta_{\alpha\beta}^{(\pm)} +2Hn_\alpha^{(\pm)}n_\beta^{(\pm)}\;\;,\;\; H = \f{mr^3}{r^4+a^2 z^2} , \end{equation} where the ingoing and outgoing versions of the principal null vectors have components \begin{equation} \fl n^{(+)}_\alpha(x^{(+)\beta}) = (n^{(+)}_t,n^{(+)}_x,n^{(+)}_y,n^{(+)}_z) =\Big(-1, -\frac{r{x^{(+)}}+ a{y^{(+)}}}{r^2 +a^2}, -\frac{r{y^{(+)}}- a{x^{(+)}}}{r^2 +a^2}, - \frac{{z}}{r}\Big) , \label{eq:pnd} \end{equation} and \begin{equation} \fl n^{(-)}_\alpha(x^{(-)\beta}) = (n^{(-)}_t,n^{(-)}_x,n^{(-)}_y,n^{(-)}_z) =\Big(-1, \frac{r{x^{(-)}}- a{y^{(-)}}}{r^2 +a^2}, \frac{r{y^{(-)}}+ a{x^{(-)}}}{r^2 +a^2}, \frac{{z}}{r}\Big) . \label{eq:pndm} \end{equation} Recall that $\det(g_{\alpha\beta}) =\det(\eta^{(\pm)}_{\alpha\beta})=-1$ for Kerr-Schild metrics in these background coordinates. From \eref{eq:trafo_t} and \eref{eq:trafo_xi}, it follows that the time coordinates of the two Minkowski backgrounds are related by \begin{equation} t^{(+)}=t^{(-)} +2(r-r^*) \label{eq:ttfm} \end{equation} and the spatial coordinates are related by \begin{equation} x^{(+)} +iy^{(+)} = \big(x^{(-)} +iy^{(-)}\big) e^{i\Psi(r)} \; , \quad z= z^{(+)} = z^{(-)} , \label{eq:stfm} \end{equation} where \begin{equation} \Psi(r):= \frac{a}{\sqrt{m^2-a^2}} \ln\Big (\frac{r-m -\sqrt{m^2-a^2}}{r-m +\sqrt{m^2-a^2}} \Big) +2 \arctan(a/r)\;\;. \label{eq:Psi} \end{equation} We set \begin{equation} \rho^2 =x^2+y^2+z^2, \end{equation} and introduce the standard spherical coordinates $(\rho,\theta,\phi^{(\pm)})$ for the Minkowski backgrounds, \begin{equation} x^{(\pm)}+i y^{(\pm)}= \rho\sin\theta \exp[i \phi^{(\pm)}] , \quad z =\rho \cos \theta \, .
\label{eq:xypm} \end{equation} Here $\rho$ and $\theta$, but not $\phi^{(\pm)}$, are background independent. \section{ ${\mathcal I}^-$ and the boost symmetry} \label{sec:scrim} In~\cite{boost}, we showed that the linearised memory effect arising from the retarded solution for a freely ejected particle could be obtained from the boosted version of the advanced time Kerr-Schild-Schwarzschild metric. In that treatment, the boost was a Lorentz symmetry of the linearised Minkowski background. However, this could not be extended unambiguously to the nonlinear case, where there are two different choices of boost symmetry corresponding to the Minkowski backgrounds $ \eta^{(+)}_{\alpha\beta}$ or $ \eta^{(-)}_{\alpha\beta}$ of the ingoing or outgoing versions of the curved space Kerr-Schild-Schwarzschild metric. In the curved space case, it is not the choice of ingoing or outgoing Kerr-Schild metric (which are algebraically equal) but the choice of boost that leads to the essential result. In particular, because the boost symmetry of the ingoing background $\eta^{(+)}_{\alpha\beta}$ is a preferred BMS symmetry of ${\mathcal I}^-$ it does not produce ingoing radiation strain at ${\mathcal I}^-$ but it does induce a supertranslation at ${\mathcal I}^+$, which leads to outgoing radiation strain. In the Schwarzschild case, the null hypersurfaces determined by the principal null directions provide a simple approach to construct null infinity. In the Kerr case, this is more complicated because the principal null directions are not hypersurface orthogonal. For this reason, we describe ${\mathcal I}^-$ in the Kerr case by considering the null spherical coordinates associated with the Minkowski background $\eta^{(+)}_{\alpha\beta}$, \begin{equation} \tilde x^{(+)\alpha} = (v,\rho,\theta,\phi^{(+)}), \quad v=t^{(+)}+\rho. 
\end{equation} In these coordinates, \begin{equation} \eta^{(+)}_{\alpha\beta} d\tilde x^{(+)\alpha} d\tilde x^{(+)\beta} = - dv^2 +2dv d\rho +\rho^2 dq^{(+)2}, \label{eq:minkplus} \end{equation} where $dq^{(+)2}=q^{(+)}_{AB}dx^{(+)A}dx^{(+)B} =d\theta^2+\sin^2 \theta d\phi^{(+)2}$ is the unit sphere metric. The ingoing KSK metric takes the form \begin{equation} g_{\alpha\beta} d \tilde x^{(+)\alpha} d \tilde x^{(+)\alpha} = - dv^2 +2dv d\rho+\rho^2 dq^{(+)2} + 2H (n^{(+)}_\alpha d \tilde x^{(+)\alpha})^2, \label{eq:KS_hyperbolic} \end{equation} where, using (\ref{eq:r}) and (\ref{eq:pnd}), \begin{equation} n^{(+)}_\alpha d \tilde x^{(+)\alpha} = -dv +(1-\frac{r}{\rho}) d\rho +\frac{a\rho^2\sin\theta}{r(r^2+a^2)} \big ( a\cos\theta d\theta +r\sin\theta d\phi^{(+)}). \end{equation} The inverse property of Kerr-Schild metrics, \begin{equation} g^{ab} = \eta^{(+)ab} -2H n^{(+)a} n^{(+)b}, \end{equation} implies \begin{equation} g^{ab}(\partial_a v)\partial_b v = -2H (n^{(+)a} \partial_a v)^2. \end{equation} As a result, since $H\ge0$, the hypersurfaces $v=const$ are spacelike except in the limiting Schwarzschild case, where $a=0$ and the hypersurfaces are null. Explicitly, (\ref{eq:pnd}) leads to \begin{equation} n^{(+)\alpha} \partial_\alpha v =1-\frac{r\rho}{r^2 +a^2} -\frac{a^2 z^2}{r\rho(r^2 +a^2)}, \end{equation} or, using (\ref{eq:r}), \begin{equation} n^{(+)\alpha} \partial_\alpha v =1-\frac{r}{\rho}. \label{eq:ellv} \end{equation} \subsection{Compactification of past null infinity} For $a\ne 0$, the hypersurfaces $v=const$ are spacelike hyperboloids in the Kerr geometry which approach ${\mathcal I}^-$ asymptotically. In order to compactify ${\mathcal I}^-$, we replace the hyperboloidal spherical coordinates $\tilde x^{(+)\alpha} = (v,\rho,\theta,\phi^{(+)})$ by the compactified coordinates $\hat x^{(+)\alpha}=(v,\ell,\theta,\phi^{(+)})$, where $\ell=1/\rho$. In these coordinates ${\mathcal I}^-$ is given by $\ell=0$. 
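Both the implicit relation defining the Boyer-Lindquist radius and the contraction $n^{(+)\alpha}\partial_\alpha v = 1 - r/\rho$ are easily verified numerically (an illustrative sketch of our own; the function names are not part of the paper):

```python
import numpy as np

def bl_radius(x, y, z, a):
    """Boyer-Lindquist r from background Cartesian coordinates: the positive
    root of r^4 - (rho^2 - a^2) r^2 - a^2 z^2 = 0, Eq. (rz)."""
    rho2 = x*x + y*y + z*z
    return np.sqrt(0.5 * (rho2 - a*a + np.sqrt((rho2 - a*a)**2 + 4*a*a*z*z)))

def n_plus_dot_grad_v(x, y, z, a):
    """n^{(+)alpha} d_alpha v for v = t + rho, using the covariant components
    of Eq. (pnd) raised with the Minkowski background eta = diag(-1,1,1,1),
    so that n^t = 1 and n^i = n_i."""
    r = bl_radius(x, y, z, a)
    rho = np.sqrt(x*x + y*y + z*z)
    nx = -(r*x + a*y) / (r*r + a*a)
    ny = -(r*y - a*x) / (r*r + a*a)
    nz = -z / r
    # d_t v = 1 and d_i v = x^i / rho
    return 1.0 + (nx*x + ny*y + nz*z) / rho
```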
Now introduce the conformally rescaled metric denoted by $\hat g_{ab}=\ell^2 g_{ab}$. The conformal metric is given in the compactified coordinates by \begin{equation} \hat g_{\alpha\beta} d\hat x^{(+)\alpha} d \hat x^{(+)\beta} = - \ell^2 dv^2 -2dv d\ell + dq^{(+)2} + 2 H(\ell n^{(+)}_\alpha d \hat x^{(+)\alpha})^2, \label{eq:KS_conformal} \end{equation} with \begin{equation} \fl \quad \quad \ell n^{(+)}_\alpha d \hat x^{(+)\alpha} = -\ell dv -[\ell^{-1} - r ] d\ell +\frac{a \sin\theta}{r\ell(r^2+a^2)} \big ( a\cos\theta d\theta +r\sin\theta d\phi^{(+)}). \end{equation} The asymptotic behaviour of $\hat g_{\alpha\beta}$ depends upon the asymptotic expansion of the Boyer-Lindquist coordinate $r$. From (\ref{eq:r}), $r$ is determined by the quartic equation \begin{equation} \label{eq:r4} \fl \quad r^4 - (x^2+y^2+z^2-a^2)r^2 -a^2z^2 =r^4 -(\rho^2 -a^2) r^2 - a^2\rho^2 \cos^2 \theta =0. \end{equation} The solution \begin{eqnarray} r &=&\f{1}{2}\sqrt{2\rho^2 -2a^2+2\sqrt{\rho^4-2\rho^2a^2 +a^4+4a^2 z^2} } \label{eq:rz}\\ &=&\f{1}{2}\sqrt{2\rho^2 -2a^2+2\sqrt{\rho^4-2\rho^2a^2 +a^4+4a^2\rho^2 \cos^2 \theta}} \label{eq:rtheta} \end{eqnarray} has the asymptotic $\ell$ expansion about ${\mathcal I}^-$ \begin{equation} \label{eq:rell} \fl \quad r(\ell, \theta) = \ell^{-1} -\frac{a^2\sin^2\theta}{2}\;\ell -\frac{a^4\sin^2\theta(1-5\cos^2\theta )}{8}\; \ell^3 +O(\ell^{5})\; , \quad \ell =1/ \rho \, .
\end{equation} As a result, $H$, $\ell n^{(+)}_\alpha d \hat x^{(+)\alpha}$ and the conformal metric have the asymptotic $\ell$-expansions \begin{equation} H = m\ell \bigg[1 +\frac{ a^2}{2}(1-3\cos^2\theta) \ell^2 +\frac{ a^4}{8}(3-30\cos^2\theta +35\cos^4\theta) \ell^4+ O(\ell^6)\bigg] , \end{equation} \begin{eqnarray} \ell n^{(+)}_\alpha d \hat x^\alpha & = & \ell \Big(-dv -\frac{a^2}{2}\sin^2\theta d\ell + a\sin^2\theta d\phi^{(+)}\Big) + \ell^2\Big(a^2\sin\theta \cos\theta d\theta \Big)\nonumber \\ & & +\ell^3 \Big[\frac{a^4}{8}(5\cos^2\theta-1) \sin^2\theta d\ell -a^3\sin^2\theta\cos^2\theta d\phi^{(+)}\Big] \nonumber\\ && +\ell^4 \Big[\frac{a^4}{2}(1-3\cos^2\theta)\sin\theta\cos\theta d\theta\Big] + O(\ell^5) \end{eqnarray} and \begin{eqnarray} \fl &&\hat g_{\alpha\beta} d\hat x^{(+)\alpha} d \hat x^{(+)\beta} = -2dv d\ell + dq^{(+)2} - \ell^2 dv^2 +2m\ell^3\Big(dv +\frac{a^2}{2}\sin^2\theta d\ell - a\sin^2\theta d\phi^{(+)}\Big)^2 \nonumber\\ \fl &+&\ell^4 \Big[-4ma^2\Big(dv +\frac{a^2}{2}\sin^2\theta d\ell - a\sin^2\theta d\phi^{(+)}\Big)\sin\theta\cos\theta d\theta\Big] +O(\ell^5)\; , \label{gcrimexp} \end{eqnarray} with the determinant $\hat g = -\sin^2\theta$ and inverse \begin{eqnarray} \fl \lefteqn{ \hat g^{\alpha\beta} \partial_{\hat x^{(+)\alpha}} \partial_{ \hat x^{(+)\beta} } =-2\partial_v\partial_\ell + q^{(+)AB}\partial_{\hat x^{(+)A}} \partial_{ \hat x^{(+)B} } +\ell^2 \partial_\ell \partial_\ell -2m\ell^3\Big( \f{a^2\sin^2\theta }{2}\partial_v + \partial_\ell + a \partial_{\phi^{(+)}} \Big)^ 2 }\nonumber\\ \fl && -4a^2 m\ell^4\Big[ \Big( \f{a^2\sin^2\theta }{2}\partial_v + \partial_\ell + a \partial_{\phi^{(+)}}\Big)\sin\theta\cos\theta\Big] \partial_\theta +O(\ell^5) . 
\end{eqnarray} As a result, ${\mathcal I}^-$, given by $\ell=0$ with $\hat \nabla_a \ell |_{{\mathcal I}^-}\neq 0 $, has Penrose compactification with metric \begin{equation} \hat g_{\alpha\beta} d\hat x^{(+)\alpha} d \hat x^{(+)\beta} \big |_{{\mathcal I}^-} = -2dv d\ell + dq^{(+)2}, \end{equation} i.e. ${\mathcal I}^-$ is a null hypersurface with standard asymptotically Minkowskian geometry consisting of unit sphere cross-sections. In addition, it is straightforward to verify that \begin{equation} \hat \nabla_\alpha \hat \nabla_\beta \ell |_{{\mathcal I}^-} =0, \end{equation} so that $\ell$ is a conformal factor in which the shear and divergence of ${\mathcal I}^-$ vanish. Thus $\ell$ is a preferred conformal factor for which the compactification of ${\mathcal I}^-$ has the same asymptotic properties as described by a conformal Bondi frame~\cite{tam}. This allows a simple description of the BMS asymptotic symmetries and other physical properties of ${\mathcal I}^-$. \subsection{Physical properties of ${\mathcal I}^-$} The Lorentz symmetries of $\eta^{(+)}_{\alpha\beta}$ are not symmetries of the KSK metric but they are preferred BMS symmetries of ${\mathcal I}^-$. The remaining BMS symmetries are the supertranslations on ${\mathcal I}^-$, $v\rightarrow v + \alpha(\theta,\phi^{(+)})$. The supertranslations with $\alpha$ composed of $l=0$ and $l=1$ spherical harmonics correspond to the Poincar{\'e} translations of the Minkowski background. In the hyperboloidal coordinates $(v,\ell,x^{(+)A})$, where $x^{(+)A}=(\theta, \phi^{(+)})$, the strain tensor $\sigma_{AB}(v,x^{(+)A})$ describing the ingoing radiation from ${\mathcal I}^-$ is determined by the asymptotic expansion of the metric according to \begin{equation} \hat g_{AB} = q_{AB} +2\ell\sigma_{AB} +O(\ell^2) . 
\end{equation} Here $\sigma_{AB}(v,x^{(+)C})$ is trace-free and corresponds to the asymptotic shear of the ingoing null hypersurfaces emanating from the $v=const$ cross-sections of ${\mathcal I}^-$ or, equivalently, the radiation strain of the cross-sections. It can be described by the spin-weight-2 function \begin{equation} \sigma(v,x^{(+)A}) = q^A q^B \sigma_{AB}(v,x^{(+)A}), \label{eq:sigmap} \end{equation} where $q^A$ is the complex polarisation dyad associated with the unit sphere metric on ${\mathcal I}^-$, \begin{equation} q^{(+)} _{AB} = \frac{1}{2}(q_A \bar q_B +\bar q_A q_B), \quad q^A \bar q_A =2, \quad q^A q_A =0, \quad q_A=q^{(+)} _{AB}q^B . \end{equation} For the standard choice of spherical coordinates, we set $q^A\partial_{(+)A} = \partial_\theta +(i/\sin\theta)\partial_{\phi^{(+)} }$. This normalisation implies \begin{equation} \label{ } \sigma=\sigma_{\theta \theta} -\frac{\sigma_{\phi^{(+)} \phi^{(+)}} }{\sin^2\theta} +\frac{2i\sigma_{\theta\phi^{(+)}}}{\sin\theta}\;\;, \end{equation} which corresponds to the standard plus/cross decomposition, as used in~\cite{sky,boost,BSscholar}. The normalisation used in~\cite{boost2} inadvertently reduced $\sigma$ by a factor of $1/2$. In terms of the physical space description in the associated inertial Cartesian coordinates, the polarisation dyad $q^A$ has components $q^i =\rho Q^i$ where \begin{equation} \fl \quad Q^i =\frac{1}{\rho} x^i_{,(+)A}q^A = ( \cos\theta\cos\phi^{(+)} -i\sin\phi^{(+)}, \cos\theta\sin\phi^{(+)} +i\cos\phi^{(+)},-\sin\theta)\; , \end{equation} where $Q_i \bar Q^i =2$, $Q_i Q^i =0$ and $Q_i x^i =0$. Here we raise and lower the Cartesian indices $i,j,...$ for fields in the Euclidean background according to the example $Q_i=\delta_{ij}Q^j$. 
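The normalisation and orthogonality properties of the dyad quoted above are easy to verify numerically (a small sketch of our own; the function names are not part of the paper):

```python
import numpy as np

def dyad(theta, phi):
    """Complex polarisation dyad components Q^i on the unit sphere."""
    return np.array([np.cos(theta)*np.cos(phi) - 1j*np.sin(phi),
                     np.cos(theta)*np.sin(phi) + 1j*np.cos(phi),
                     -np.sin(theta)])

def radial(theta, phi):
    """Unit radial direction x^i / rho."""
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])
```

These checks confirm $Q_i \bar Q^i =2$, $Q_i Q^i =0$ and $Q_i x^i =0$ at a generic point of the sphere.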
Then $\sigma$ is determined by the physical space metric according to \begin{equation} \sigma(v,x^{(+)A}) = \lim_{\rho \rightarrow \infty} \frac{\rho}{2} Q^i Q^j g_{ij}, \end{equation} where the limit at ${\mathcal I}^-$ is taken holding $(v,x^{(+)A})$ constant. For the unboosted KSK metric, \begin{equation} \sigma(v,x^{(+)A}) = \lim_{\rho\rightarrow \infty} \rho H (Q^i n^{(+)}_i )^2 = \lim_{\rho\rightarrow \infty} m (Q^i n^{(+)}_i )^2. \label{eq:sigmap} \end{equation} A straightforward calculation gives \begin{equation}\label{eq:Qiell_i} Q^i n^{(+)}_i = \frac {\rho a \sin\theta (a\cos\theta+ir)}{r(r^2+a^2)} . \end{equation} With reference to (\ref{eq:rell}), it follows that $Q^i n^{(+)}_i=O(1/\rho)$ so that the radiation strain vanishes at the advanced times $v=const$ picked out by the null cones of $\eta_{\alpha\beta}^{(+)}$. However, under the supertranslation $v\rightarrow v+\alpha(x^{(+)A})$ the radiation strain has the gauge freedom (cf.~\cite{BSscholar}) \begin{equation} \sigma \rightarrow \sigma+ q^A q^B \eth_{(+)A}\eth_{(+)B} \; \alpha(x^{(+)A}) \label{eq:super} \end{equation} where $\eth_{(+)A}$ is the covariant derivative with respect to the unit sphere metric $dq^{(+)2}$. As a result, even in a stationary epoch where $\sigma(v,x^{(+)A})= \sigma(x^{(+)A})$, cross-sections of ${\mathcal I}^-$ {\it distorted} by a supertranslation have non-vanishing strain. \subsection{The boost symmetry of ${\mathcal I}^-$} The boost symmetries ${\mathcal B}$ of the Minkowski background $\eta^{(+)}_{ab}$ are not exact symmetries of the Kerr metric but they are asymptotic BMS symmetries of ${\mathcal I}^-$ which are preferred in the sense that they map strain-free cross-sections into strain-free cross-sections. It is this asymptotic property of ${\mathcal B}$ which is essential for our treatment of the memory effect. However, the BMS transformations are only uniquely determined to first order in $\ell$ by the requirement that they be asymptotic symmetries. 
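The $O(1/\rho)$ falloff can be illustrated numerically. In the sketch below, the radius is computed from the Boyer--Lindquist relation (\ref{eq:rz}), and the limiting value $ia\sin\theta$ of $\rho\,Q^in^{(+)}_i$ is our own reading of the leading terms of (\ref{eq:Qiell_i}); the sample values of $a$ and $\theta$ are illustrative:

```python
import numpy as np

a, theta = 0.9, 0.8     # sample spin parameter and angle
rho = 1e6               # "large rho" stand-in for the limit
z = rho*np.cos(theta)

# Boyer-Lindquist radius r(rho, z)
r = np.sqrt(0.5*(rho**2 - a**2 + np.sqrt((rho**2 - a**2)**2 + 4*a**2*z**2)))

# Q^i n^(+)_i from the displayed formula (eq:Qiell_i)
Qn = rho*a*np.sin(theta)*(a*np.cos(theta) + 1j*r)/(r*(r**2 + a**2))

# O(1/rho) falloff: rho * Q^i n^(+)_i approaches the finite value i a sin(theta)
assert abs(rho*Qn - 1j*a*np.sin(theta)) < 1e-4
```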
In the compactified coordinates $\hat x^{(+)\alpha}=(v,\ell,\theta,\phi^{(+)})$, this requirement leads to an equivalence class of BMS transformations in which the identity takes the form~\cite{gw} \begin{equation} \hat x^{(+)\alpha} \rightarrow \hat x^{(+)\alpha} +O(\ell^2). \end{equation} Instead of the Minkowski boost symmetries ${\mathcal B}$ we could use any equivalent BMS subgroup in our treatment of the memory effect. The resulting $O(\ell^2)$ terms in the boost would not affect the radiation strain at ${\mathcal I}^-$ or ${\mathcal I}^+$. Consider now a Minkowski boost ${\mathcal B}$ whose 4-velocity has components $v^\alpha = \Gamma(1, V^i)$, where $V^i =V l^i$, with direction cosines $l^i$, and $\Gamma =1/ \sqrt{1-V^2}$. Under this boost, $\eta^{(+)}_{\alpha\beta} \rightarrow \eta^{(+)}_{\alpha\beta}$. The boosted coordinates $x^{(+)\alpha}_{\mathcal B}$ are given by the Lorentz transformation \begin{equation} x^{(+)\alpha}_{\mathcal B} = \Lambda^\alpha_\beta x^{(+)\beta}\;, \end{equation} where \begin{eqnarray} \Lambda^{t}_{\phantom{0}t} &=& \Gamma\;,\label{eq:LT1} \\ \Lambda^{t}_{\phantom{0}i} &=&\Lambda^{i}_{\phantom{0}t}= -\Gamma V_i\;,\label{eq:LT2}\\ \Lambda^{i}_{\phantom{0}j} &=& \kron{i}{j}+(\Gamma-1)l^i l_j\;.\label{eq:LT4} \end{eqnarray} The boosted coordinates are \begin{eqnarray} t_{\mathcal B}^{(+)} &=& \Gamma (t^{(+)} - V_i x^{(+)i} ) ,\\ x^{(+)i}_{\mathcal B}&=&x^{(+)i} +\Big[ -\Gamma V t^{(+)} + (\Gamma-1)l_jx^{(+)j} \Big]l^i \;, \end{eqnarray} e.g., for a boost in the $z$-direction, $l^x=l^y=l^z-1=0$, $x_{\mathcal B}^{(+)}=x^{(+)}$, $y_{\mathcal B}^{(+)}=y^{(+)}$ and \begin{eqnarray} \label{eq:z_B} t_{\mathcal B}^{(+)} &=& \Gamma (t^{(+)} - V z^{(+)}) ,\\ z^{(+)}_{\mathcal B}&=& \Gamma (z^{(+)} - V t^{(+)} )\; .
\end{eqnarray} The background spherical radius $\rho$ transforms as \begin{equation} \label{eq:rho_B} \fl \quad \rho^2\rightarrow \rho^2_{\mathcal B} =x^{(+)\alpha}x^{(+)}_\alpha+(v_\alpha x^{(+)\alpha})^2 = -[t^{(+)}]^2 + \rho^2 + \Gamma^2 ( t^{(+)} - V_i x^{(+)i})^2 \; . \end{equation} Setting $t^{(+)}=v -\rho$, the large $\rho$ expansion of $ \rho_{\mathcal B}$ about ${\mathcal I}^-$ holding $v$ constant is \begin{equation} \label{eq:rho_B_expa} \rho_{\mathcal B} = \rho \Gamma (1+V_i\rho^i) - \frac{v V \Gamma (V+l_i \rho^i)} {1+V_i\rho^i} +O(1/\rho) \;\; , \label{eq:rhoexp} \end{equation} where \begin{equation} \rho^i =x^{(+)i}/\rho = (\sin\theta\cos\phi^{(+)},\sin\theta\sin\phi^{(+)}, \cos\theta). \label{eq:rhoi} \end{equation} The boosted version of the Boyer-Lindquist radial coordinate (\ref{eq:rz}), \begin{equation} \label{eq:rKS_boosted} r_{\mathcal B}=r(\rho_{\mathcal B}, z_{\mathcal B}) =\f{1}{2}\sqrt{2\rho_{\mathcal B}^2 -2a^2 +2\sqrt{\rho_{\mathcal B}^4 -2\rho_{\mathcal B}^2a^2+a^4+4a^2z^2_{\mathcal B}}}\; , \end{equation} has the large $\rho$ expansion about ${\mathcal I}^-$, holding $v=t^{(+)}+\rho$ constant, \begin{eqnarray} r_{\mathcal B} &=& (1+V_i\rho^i)\Gamma \rho - \frac{v V \Gamma (V+l_i \rho^i)} {1+V_i\rho^i} +O(1/\rho) =\rho_{\mathcal B} +O(1/\rho) . \label{eq:rexp} \end{eqnarray} This leads to the expansion of the boosted version of the Kerr-Schild function about ${\mathcal I}^-$, \begin{equation} H_{\mathcal B}= \f{mr_{\mathcal B}^3}{r_{\mathcal B}^4+a^2 z_{\mathcal B}^2} =\frac {m} {(1+V_i\rho^i) \Gamma \rho}\bigg[ 1+ \f{V(V+l_i\rho^i)v} {(1+V_i\rho^i)^2 \rho}\bigg] +O(1/\rho^2). \label{eq:Hexp} \end{equation} It follows from (\ref{eq:pnd}) and (\ref{eq:rell}) that the ingoing principal null direction has asymptotic behaviour \begin{equation} n^{(+)}_\alpha =-\nabla_\alpha (t^{(+)}+\rho) + O(1/\rho) \; .
\end{equation} Using the covariant substitutions $-\nabla_\alpha t\rightarrow v_\alpha$ and $\nabla_\alpha \rho \rightarrow [x_\alpha+ v_\beta x^{(+)\beta}v_\alpha]/\rho_{\mathcal B} $, its boosted version $N_\alpha:=n_{{\mathcal B}\, \alpha}^{(+)}$ has asymptotic behaviour \begin{equation} N_\alpha = v_\alpha - \frac {1}{\rho_{\mathcal B}} (x^{(+)}_\alpha+v_\beta x^{(+)\beta }v_\alpha) + O(1/\rho) . \label{eq:Lexp} \end{equation} Setting $t^{(+)}=v-\rho$, the expansion of the boosted version of (\ref{eq:Qiell_i}) about ${\mathcal I}^-$, holding $v$ constant, then leads to \begin{equation} \fl \quad Q^i N_i =(1- \frac {1}{\rho_{\mathcal B}} v_\beta x^{(+)\beta}) Q^i v_i + O(1/\rho) = \bigg (1-\frac{1}{\rho_{\mathcal B}} \rho \Gamma(1+V_i \rho^i) \bigg)Q^i v_i + O(1/\rho) \; , \end{equation} so it follows from (\ref{eq:rhoexp}) that $Q^i N_i = O(1/\rho)$. Thus, referring to the boosted version of (\ref{eq:sigmap}), \begin{equation} \sigma_{\mathcal B}(v,x^{(+)A}) = \lim_{\rho\rightarrow \infty} \rho H_{\mathcal B} (Q^i N_i )^2 =0, \label{eq:sigmapl} \end{equation} i.e. the strain at ${\mathcal I}^-$ vanishes for the boosted KSK metric, as expected since the boost is a preferred BMS symmetry of ${\mathcal I}^-$. \section{Future null infinity} \label{sec:scrip} Following the procedure for treating ${\mathcal I}^-$, we describe ${\mathcal I}^+$ in terms of the KSK metric by considering retarded null spherical coordinates associated with the Minkowski background, $\eta^{(-)}_{ab}$, \begin{equation} \tilde x^{(-)\alpha} = (u,\rho,\theta,\phi^{(-)}), \quad u=t^{(-)}-\rho . 
\end{equation} In these coordinates, \begin{equation} \eta^{(-)}_{\alpha \beta} d\tilde x^{(-)\alpha} d\tilde x^{(-)\beta} =- du^2 -2du d\rho +\rho^2 dq^{(-)2} , \label{eq:minkplus} \end{equation} where $dq^{(-)2}=q^{(-)}_{AB}dx^{(-)A}dx^{(-)B} =d\theta^2+\sin^2 \theta d\phi^{(-)2}$ and the outgoing version of the KSK metric has components \begin{equation} g_{\alpha \beta} d\tilde x^{(-)\alpha} d\tilde x^{(-)\beta} = - du^2 -2du d\rho +\rho^2 dq^{(-)2} + 2H(n^{(-)}_\alpha d \tilde x^{(-)\alpha})^2. \label{eq:KS_hyperbolic} \end{equation} The inverse form of the outgoing KSK metric, \begin{equation} g^{ab} = \eta^{(-)ab} -2H n^{(-)a} n^{(-)b}, \end{equation} now implies \begin{equation} g^{ab}(\partial_a u)\partial_b u = -2H (n^{(-)a} \partial_a u)^2. \end{equation} Analogous to the ingoing case, since $H\ge0$, the hypersurfaces $u=const$ are spacelike hyperbolae which approach ${\mathcal I}^+$, except in the limiting Schwarzschild case where they are null. Explicitly, following the calculation of (\ref{eq:ellv}), \begin{equation} n^{(-)\alpha} \partial_\alpha u =1-\frac{r}{\rho}. \end{equation} In order to compactify ${\mathcal I}^+$, we replace the hyperboloidal spherical coordinates $\tilde x^{(-)\alpha} = (u,\rho,\theta,\phi^{(-)})$ by the compactified coordinates $\hat x^{(-)\alpha}=(u,\ell,\theta,\phi^{(-)})$, where $\ell=1/\rho$, and $\ell=0$ at ${\mathcal I}^+$. Again we introduce the conformally rescaled metric $\hat g_{ab}=\ell^2 g_{ab}$, \begin{equation} \hat g_{\alpha\beta} d \hat x^{(-)\alpha} d \hat x^{(-)\beta}= - \ell^2 du^2 +2du d\ell + dq^{(-)2} + 2 H(\ell n^{(-)}_\alpha d \hat x^{(-)\alpha})^2, \label{eq:KS_conformal} \end{equation} where \begin{equation} \fl \quad \ell n^{(-)}_\alpha d \hat x^{(-)\alpha} = -\ell du +[\ell^{-1} - r ] d\ell -\frac{a \sin\theta}{r\ell (r^2+a^2)} \big ( a\cos\theta d\theta -r\sin\theta d\phi^{(-)}\big ).
\end{equation} The asymptotic behaviour of $\hat g_{\alpha\beta}$ at ${\mathcal I}^+$ follows from the asymptotic $\ell$ expansion (\ref{eq:rell}) of the Boyer-Lindquist coordinate $r$ which leads to \begin{eqnarray} \fl \quad \ell n^{(-)}_\alpha d \hat{x}^{(-)\alpha} & = & \ell \Big(-du +\frac{a^2}{2}\sin^2\theta d\ell + a\sin^2\theta d\phi^{(-)}\Big) + \ell^2\Big(-a^2\sin\theta \cos\theta d\theta \Big)\nonumber \\ \fl &+& \ell^3 \Big[-\frac{a^4}{8}(5\cos^2\theta-1) \sin^2\theta d\ell -a^3\sin^2\theta\cos^2\theta d\phi^{(-)}\Big] \nonumber\\ \fl&+& \ell^4 \Big[-\frac{a^4}{2}(1-3\cos^2\theta)\sin\theta\cos\theta d\theta\Big] + O(\ell^5) \end{eqnarray} so that \begin{eqnarray} \fl \lefteqn{\hat g_{\alpha\beta} d\hat{x}^{(-)\alpha} d \hat{x}^{(-)\beta} = 2du d\ell + dq^{(-)2} - \ell^2 du^2 +2m\ell^3\Big(-du +\frac{a^2}{2}\sin^2\theta d\ell + a\sin^2\theta d\phi^{(-)}\Big)^2 }&& \nonumber\\ \fl&&+\ell^4 \Big[-4ma^2\Big(-du +\frac{a^2}{2}\sin^2\theta d\ell + a\sin^2\theta d\phi^{(-)}\Big)\sin\theta\cos\theta d\theta\Big] +O(\ell^5)\; , \end{eqnarray} with determinant $\hat g = -\sin^2\theta$ and inverse \begin{eqnarray} \fl \lefteqn{ \hat g^{\alpha\beta} \partial_{\hat x^{(-)\alpha}} \partial_{ \hat x^{(-)\beta} } =2\partial_u\partial_\ell + q^{(-)AB}\partial_{\hat x^{(-)A}} \partial_{ \hat x^{(-)B} } +\ell^2 \partial_\ell \partial_\ell -2m\ell^3\Big( \f{a^2\sin^2\theta }{2}\partial_u - \partial_\ell + a \partial_{\phi^{(-)}} \Big)^ 2 }\nonumber\\ \fl && +4a^2 m\ell^4\Big[ \Big( \f{a^2\sin^2\theta }{2}\partial_u - \partial_\ell + a \partial_{\phi^{(-)}}\Big)\sin\theta\cos\theta\Big] \partial_\theta +O(\ell^5) . \end{eqnarray} We have \begin{equation} \hat g_{\alpha\beta} d\hat x^{(-)\alpha} d \hat x^{(-)\beta} \big |_{{\mathcal I}^+} = 2du d\ell + dq^{(-)2}, \end{equation} i.e. ${\mathcal I}^+$ is a null hypersurface with standard asymptotically Minkowskian geometry consisting of unit sphere cross-sections. 
In addition, analogous to the case for ${\mathcal I}^-$, \begin{equation} \hat \nabla_\alpha \hat \nabla_\beta \ell |_{{\mathcal I}^+} =0 \end{equation} so that $\ell$ is a preferred conformal factor in which the shear and divergence of ${\mathcal I}^+$ vanish. An important feature is that both ${\mathcal I}^-$ and ${\mathcal I}^+$ have universal conformal structure of unit sphere cross-sections with the identical conformal factor $\ell$. Moreover, (\ref{eq:Psi}) leads to the expansion \begin{equation} \Psi = -2ma\ell^2-\f{8}{3}m^2 a \ell^3 -2ma( 2m^2-a^2 \cos^2\theta) \ell^4 + O(\ell^5) \end{equation} so that $\Psi |_{\ell=0} =0$. Consequently, (\ref{eq:stfm}) and (\ref{eq:xypm}) imply that we can set \begin{equation} \phi = \phi^{(+)} |_{\ell=0}=\phi^{(-)} |_{\ell=0} \end{equation} and \begin{equation} dq^2=q_{AB}dx^A dx^B= d\theta^2 +\sin^2\theta d\phi^2 = dq^{(+)2}|_{\ell=0}=dq^{(-)2} |_{\ell=0}. \end{equation} Thus we can use a common unit sphere metric $q_{AB}$, with associated covariant derivative $\eth_A$, common spherical coordinates $x^A=(\theta,\phi)$ and a common polarisation dyad $q_A$, with $q_{AB} =(1/2)(q_A\bar q_B+\bar q_A q_B)$, to describe both the ingoing radiation from ${\mathcal I}^-$ and the outgoing radiation at ${\mathcal I}^+$. \section{Boosts and radiation memory} \label{sec:mem} Analogous to (\ref{eq:sigmap}), the outgoing radiation strain at ${\mathcal I}^+$ can be described by a spin-weight-2 function $\sigma(u,x^A)$, where $u$ is the retarded time and $x^A=(\theta,\phi)$ are the angular coordinates on ${\mathcal I}^+$ determined by the Minkowski background. In these retarded coordinates adapted to the asymptotic Minkowskian structure, the strain is given by the limit at ${\mathcal I}^+$, holding $u$ and $x^A$ constant, \begin{equation} \sigma(u,x^A) = \lim_{\rho\rightarrow \infty} \frac{1}{2\rho} q^A q^B g_{AB} = q^A q^B \sigma_{AB}(u,x^A).
\end{equation} The radiation memory $\Delta \sigma (x^C)$ at ${\mathcal I}^+$ measures the change in the radiation strain between infinite future and past retarded time, \begin{equation}\label{def_memory} \Delta \sigma (x^A) = \sigma(u=\infty,x^A) - \sigma(u=-\infty,x^A). \end{equation} In the associated inertial Cartesian coordinates, the dyad $q^A$ has components $q^i = \rho Q^i$ and \begin{equation} \sigma(u,x^A) = \lim_{\rho\rightarrow \infty} \frac{\rho}{2} Q^i Q^j g_{ij}. \label{eq:rstrain} \end{equation} For the unboosted KSK metric, \begin{equation} \sigma(u,x^A) = \lim_{\rho\rightarrow \infty} \rho H (Q^i n^{(-)}_i )^2 = \lim_{\rho\rightarrow \infty} m (Q^i n^{(-)}_i )^2. \end{equation} A straightforward calculation gives \begin{equation} Q^i n^{(-)}_i = -\frac{\rho a^2\sin\theta\cos\theta}{r(r^2+a^2)} +\frac{i\rho a\sin\theta}{r^2+a^2}. \end{equation} Again using (\ref{eq:rell}), this implies $ Q^i n^{(-)}_i =O(1/\rho)$ so that the radiation strain at ${\mathcal I}^+$ of the unboosted KSK metric vanishes. Consider now a system which is asymptotically described by an unboosted Kerr metric in the retarded past $u=-\infty$ and by a boosted KSK metric in the future $u=\infty$, where the boost ${\mathcal B}$ is a Lorentz symmetry of $\eta^{(+)}_{\alpha\beta}$. The radiation memory due to the boost is then given by \begin{equation} \Delta \sigma(x^A) = \sigma_{\mathcal B}(u=\infty, x^A) - \sigma(u=-\infty,x^A) , \label{eq:memeff} \end{equation} where $\sigma_{\mathcal B}(u=\infty, x^A)$ is the strain of the final boosted state and the initial strain vanishes, $\sigma(u=-\infty, x^A) =0$. The strain $\sigma_{\mathcal B}(u=\infty, x^A)$ may be calculated using either the ingoing or outgoing form of the KSK metric. It is technically simpler to use the ingoing form since the boost ${\mathcal B}$ leaves $\eta^{(+)}_{\alpha\beta}$ unchanged.
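That ${\mathcal B}$ leaves $\eta^{(+)}_{\alpha\beta}$ unchanged amounts to $\eta_{\alpha\beta}\Lambda^{\alpha}{}_{\mu}\Lambda^{\beta}{}_{\nu}=\eta_{\mu\nu}$ for the matrix (\ref{eq:LT1})--(\ref{eq:LT4}); this can be confirmed numerically for sample boost parameters (an illustrative check only):

```python
import numpy as np

V = 0.6
l = np.array([1/3, 2/3, 2/3])        # direction cosines, |l| = 1
G = 1/np.sqrt(1 - V**2)              # Gamma
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Assemble Lambda^alpha_beta from (eq:LT1)-(eq:LT4)
L = np.eye(4)
L[0, 0] = G
L[0, 1:] = -G*V*l
L[1:, 0] = -G*V*l
L[1:, 1:] += (G - 1)*np.outer(l, l)

# The boost preserves the Minkowski metric
assert np.allclose(L.T @ eta @ L, eta)
```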
The final strain for the boosted version of the KSK metric $g_{{\mathcal B}\alpha\beta}$, computed in the same frame as the initial strain (\ref{eq:rstrain}), is then given by \begin{equation} \sigma_{\mathcal B}(u=\infty,x^A) =\lim_{u\rightarrow \infty} \lim_{\rho\rightarrow \infty} \frac{\rho}{2} Q^i Q^j g_{{\mathcal B}ij} = \lim_{u\rightarrow \infty} \lim_{\rho\rightarrow \infty} \rho H_{\mathcal B} (Q^i N_i)^2 , \end{equation} where $N_i=n_{{\mathcal B}\, i}^{(+)}$. The leading terms in the $1/\rho$ expansion of $\rho_{\mathcal B}$, $r_{\mathcal B}$ and $H_{\mathcal B}$, given in (\ref{eq:rhoexp}), (\ref{eq:rexp}) and (\ref{eq:Hexp}), are unchanged when the limit at ${\mathcal I}^+$ is taken holding $u$ constant. Thus \begin{equation} \sigma_{\mathcal B}(u=\infty,x^A) =\lim_{u\rightarrow \infty} \lim_{\rho\rightarrow \infty} \frac{m\rho}{\rho_{\mathcal B}}(Q^i N_i)^2 . \end{equation} The key difference here is that the limit at ${\mathcal I}^+$ involves the boosted ingoing principal null direction $N_\alpha$ whose asymptotic behaviour (\ref{eq:Lexp}) leads to \begin{equation} Q^iN_i = Q^iv_i \bigg ( 1+ \frac{ \Gamma (t^{(+)} -V_i x^{(+)i })} {\rho_{\mathcal B}} \bigg ) + O(1/\rho). \label{eq;QL} \end{equation} Now, instead of holding the advanced time $v$ constant to take the limit at ${\mathcal I}^-$, we hold the retarded time $u$ constant to take the limit at ${\mathcal I}^+$. Referring to (\ref{eq:ttfm}) \begin{equation} t^{(+)}=t^{(-)} +2(r-r^*) =u+\rho +2(r-r^*), \label{eq:tpu} \end{equation} where (\ref{eq:rstar}) leads to the expansion \begin{equation} r^*-r = 2m\ln\Big(\f{\rho}{2m}\Big) -\f{4m^2}{\rho} +O(1/\rho^2). \end{equation} From (\ref{eq:rho_B}), $\rho_{\mathcal B}$ has the asymptotic behaviour, holding $u$ constant, \begin{equation} \frac{\rho_{\mathcal B}}{\rho} = \Gamma(1-\rho^i V_i) +O(1/\rho), \end{equation} where $\rho^i$ is defined in (\ref{eq:rhoi}). 
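The expansion above can be checked numerically. The sketch below (with sample values of our choosing for $m$, $V$, $u$ and the angles, and dropping the $O(1/\rho^2)$ tail of $r^*-r$) confirms that $\rho_{\mathcal B}/\rho$ approaches $\Gamma(1-\rho^iV_i)$ when $u$ is held fixed:

```python
import numpy as np

m = 1.0
V = 0.5
G = 1/np.sqrt(1 - V**2)                     # Gamma
u = 1.7                                     # retarded time, held fixed
theta, phi = 0.8, 1.1
n = np.array([np.sin(theta)*np.cos(phi),    # rho^i, eq:rhoi
              np.sin(theta)*np.sin(phi),
              np.cos(theta)])
l = np.array([0.0, 0.0, 1.0])               # z-boost direction cosines

rho = 1e8
# t^(+) = u + rho + 2(r - r*), keeping the displayed terms of r* - r
t = u + rho - 2*(2*m*np.log(rho/(2*m)) - 4*m**2/rho)
x = rho*n
rho_B = np.sqrt(-t**2 + x @ x + G**2*(t - V*(l @ x))**2)   # eq:rho_B

# rho_B / rho -> Gamma (1 - rho^i V_i), holding u constant
assert abs(rho_B/rho - G*(1 - V*(l @ n))) < 1e-4
```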
As a result, since $\lim_{\rho\rightarrow \infty} \ln \rho/\rho =0$, (\ref{eq:tpu}) leads to the limit, holding $u$ constant, \begin{equation} \lim_{\rho\rightarrow \infty} \frac{t^{(+)}}{\rho_{\mathcal B}} = \lim_{\rho\rightarrow \infty} \frac{\rho}{\rho_{\mathcal B}} =\frac{1}{ \Gamma(1-\rho^i V_i)} \end{equation} and (\ref{eq;QL}) leads to \begin{equation} \lim_{\rho\rightarrow \infty} Q^iN_i = 2 Q^i v_i . \end{equation} Thus \begin{equation} \sigma_{\mathcal B}(u=\infty,x^A) =\lim_{u\rightarrow \infty} \lim_{\rho\rightarrow \infty} \frac{4m\rho}{\rho_{\mathcal B}} (Q^i v_i)^2 =\frac{4m\Gamma}{(1-\rho^i V_i )} (Q^i V_i)^2 . \end{equation} The resulting boost memory due to the ejection of a Kerr black hole of mass $m$ is \begin{equation} \Delta \sigma = \frac{4m\Gamma}{(1-\rho^iV_i)} (Q^i V_i)^2. \label{eq:mem1} \end{equation} This is identical to the nonlinear result~\cite{boost} and to the linearised result~\cite{brag} for the memory due to the relative boost between the initial and final states of a Schwarzschild black hole. \section{Discussion} \label{sec:discuss} We have shown that a boost ${\mathcal B}$ of the Minkowski background $\eta^{(+)}_{ab} $ for the ingoing KSK metric leads to a model for the memory effect for an initially stationary Kerr black hole which, after some accelerating and radiating stage, results in a final boosted state. The result is consistent with the absence of ingoing radiation and does not depend upon any linearised approximation. The full memory effect combines the boost memory, which we now denote by $\Delta \sigma_{\mathcal B}$, and the null memory $\Delta \sigma_{\mathcal N}$ resulting from radiative energy loss to ${\mathcal I}^+$. The net memory effect is then \begin{equation} \Delta \sigma= \Delta \sigma_{\mathcal B} + \Delta \sigma_{\mathcal N}. \end{equation} Although we have concentrated on the boost memory for the simple process consisting of the ejection of a boosted Kerr black hole, the result can be generalised. 
First, the asymptotic Lorentz symmetry at null infinity implies that the radiation memory for the transition of a black hole from a rest state to a boosted state with mass $m$ and velocity $V^i$ is the same as the memory for a black hole of mass $m$ with initial velocity $-V^i$ and zero final velocity. In addition, even in the nonlinear theory it is expected that the superposition principle holds for particles at infinite separation since the constraints vanish in that limit. This allows the memory effect to be generalised to a system of particles. As a simple example, we consider the collision of two distant Kerr black holes of mass $m$ with initial velocities $V^i$ and $-V^i$ in the $z$-direction which come to rest in a final state with mass $M$. The collision is constrained by the Bondi mass loss formula, which requires \begin{equation} 2m\Gamma -M =\int_{-\infty}^{\infty} du \oint \|\partial_u \sigma\|^2 \sin\theta d\theta d\phi. \label{eq:eneq} \end{equation} The null memory is determined nonlocally from the integrated radiation flux. In the case of gravitational radiation, Christodoulou's result~\cite{Christ_mem} can be expressed in terms of the radiation strain according to~\cite{frauen,boost} \begin{equation} \bar q^A \bar q^B \eth_A \eth_B \Delta \sigma_{\mathcal N} = \int_{-\infty}^{\infty} \|\partial_u \sigma \|^2 du -\Delta {\mathcal P} , \end{equation} where $\Delta {\mathcal P}$ cancels the $l=0$ and $l=1$ spherical harmonics in the flux integral. The $l=0$ harmonic of the integrand determines the mass loss via (\ref{eq:eneq}) and the $l=1$ harmonics determine the momentum loss. These harmonics do not enter the memory effect, which is a spin-weight-2 quantity with $l\ge 2$. According to (\ref{eq:mem1}), the boost memory for this process is \begin{equation} \Delta \sigma_{\mathcal B} = 4m\Gamma(Q^i V_i )^2 \big (\frac{1}{1+\rho^i V_i }+\frac{1}{1- \rho^i V_i } \big ) =\frac{8m \Gamma V^2\sin^2\theta}{1-V^2\cos^2 \theta}.
\label{eq:mem} \end{equation} As a consequence of (\ref{eq:eneq}), $2m\Gamma -M >0$ so that the boost memory (\ref{eq:mem}) for the collision has a lower bound determined by the mass of the final black hole, \begin{equation} \Delta \sigma_{\mathcal B} > \frac{ 4M V^2\sin^2\theta}{1-V^2\cos^2 \theta}. \end{equation} This lower bound is largest when the merger of the two black holes takes place slowly so that $M\approx 2m \Gamma$ and there is negligible radiative energy loss and negligible null memory. The memory effect is also constrained by Hawking's area increase law for the event horizon in the merger of two black holes~\cite{hawk2}. For the above collision of initially distant Schwarzschild black holes to form a Kerr black hole, this leads to \begin{equation} 4m^2 < M(M+\sqrt{M^2-A^2}) =M^2(1+\cos\chi), \quad 0\le \chi \le \pi/2 , \label{eq:areaineq} \end{equation} where $A=M\sin\chi$ is the specific angular momentum of the final Kerr black hole as determined by the initial impact parameter. Thus (\ref{eq:areaineq}) implies \begin{equation} 2m <M\sqrt{1+\cos\chi} . \label{eq:mineq} \end{equation} As a result, the boost memory (\ref{eq:mem}) has both upper and lower bounds in terms of the final black hole mass, \begin{equation} \frac{4M\Gamma V^2\sin^2\theta\sqrt{1+\cos\chi}}{1-V^2\cos^2 \theta} > \Delta \sigma_{\mathcal B} > \frac{ 4M V^2\sin^2\theta}{1-V^2\cos^2 \theta}. \end{equation} There is no universal way to separate the net memory effect into boost and null mechanisms without knowledge of the strain in the intermediate radiative period. However, there are two extremes in which the net memory takes either the pure boost or pure null form. The first is the adiabatic limit, discussed in~\cite{boost}, in which the intermediate processes take place slowly and $\Delta \sigma_{\mathcal N}$ is negligible. 
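The algebraic step leading to (\ref{eq:mem}), using $Q^iV_i=-V\sin\theta$ and $\rho^iV_i=V\cos\theta$ for a $z$-boost, can be spot-checked numerically (sample parameters of our choosing):

```python
import numpy as np

m, V, theta = 1.3, 0.6, 0.9
G = 1/np.sqrt(1 - V**2)               # Gamma
QV2 = (V*np.sin(theta))**2            # (Q^i V_i)^2 for a z-boost (Q^z = -sin(theta))
rV = V*np.cos(theta)                  # rho^i V_i

# Sum of the two single-hole boost memories (eq:mem1) for velocities +V and -V
lhs = 4*m*G*QV2*(1/(1 + rV) + 1/(1 - rV))
# Closed form (eq:mem)
rhs = 8*m*G*V**2*np.sin(theta)**2/(1 - V**2*np.cos(theta)**2)

assert np.isclose(lhs, rhs)
```

The identity is simply $1/(1+x)+1/(1-x)=2/(1-x^2)$ with $x=V\cos\theta$.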
This limit was first considered in~\cite{Smarr} and is now referred to in the literature as the infrared or ``soft graviton'' limit~\cite{pasterski,mem_soft_theorem}. In the other extreme, no boosted massive particles are captured or ejected from the intrinsic rest frame of the system, so that $ \Delta \sigma= \Delta \sigma_{\mathcal N}$. An example would be a binary in an initial Newtonian orbit which subsequently inspirals to form a stationary black hole with respect to the initial rest frame. In~\cite{boost}, we analysed how radiation memory affects angular momentum conservation. In a non-radiative regime where $\partial_u \sigma=0$, the supertranslation freedom (\ref{eq:super}) can be used to pick out preferred cross-sections of ${\mathcal I}^+$ by setting the electric component of $\sigma$ to zero. These preferred cross-sections reduce the supertranslation freedom to the translation freedom so that a preferred Poincar{\'e} subgroup can be picked out from the BMS group. The same holds in the limits $u\rightarrow \pm \infty$, in which the requirement of a finite radiative energy loss implies $\partial_u \sigma \rightarrow 0$. Although the electric part of the strain can be gauged away at either $u=+\infty$ or $u=-\infty$, the memory effect $\Delta \sigma$ is gauge invariant and (\ref{eq:super}) determines a supertranslation shift \begin{equation} q^A q^B \eth_A \eth_B \,\alpha(x^C) =\Delta \sigma(x^C) \end{equation} between the preferred Poincar{\'e} groups at $u=\pm \infty$. The rotation subgroups picked out by the initial and final preferred Poincar{\'e} groups differ by this supertranslation. As a result, the corresponding components of angular momentum intrinsic to the initial and final states differ by supermomenta. Only the electric part of the strain is affected by supertranslations because $\alpha$ is real and $\sigma$ is intrinsically complex.
The decomposition of the strain into electric and magnetic parts is analogous to the E-mode/B-mode decomposition of electromagnetic waves. The magnetic part of the null memory effect must vanish except for matter fields whose stress-energy tensor satisfies properties which are not expected for astrophysical systems~\cite{sky}. The exceptional cases are matter fields for which the magnetic part of the stress is anisotropic and has a nonvanishing time derivative in the limit of infinite future or past retarded time in the neighbourhood of ${\mathcal I}^+$. Recently, consistent with general physical principles, a shell of matter with anisotropic magnetic stress which expands to timelike infinity in a radial flow has been constructed which produces ordinary memory of the magnetic type~\cite{waldmag}. The supertranslation shift between the initial and final preferred Poincar{\'e} groups complicates the interpretation of angular momentum flux conservation laws. This could lead to a distinctly general relativistic mechanism for angular momentum loss. Although the intermediate radiative epoch must be treated by numerical methods, the Kerr-Schild model developed here provides a framework for such investigations. \ack We are grateful to the AEI in Golm for hospitality during this project. We thank I. R{\'a}cz for comments on the manuscript and R. M. Wald for informing us of recent results. TM appreciates support from the members of the N{\'u}cleo de Astronom{\'ia} and the Faculty of Engineering of University Diego Portales, Santiago. JW was supported by NSF grants PHY-1505965 and PHY-1806514 to the University of Pittsburgh. \section*{References}
\section{Introduction} \begin{figure}[h] \begin{center} \includegraphics[width=.9\linewidth]{img/main.png} \end{center} \caption{ A bigger backbone (from ResNet18 to ViT-B16) and a bigger pre-training dataset (from ImageNet to YMC1000) improve the performance of ERM on VLCS, PACS, OfficeHome, and TerraIncognita. Even without fine-tuning the image encoder, our DPL~(Domain Prompt Learning) effectively improves the performance of CLIP and outperforms the baseline ERM by a large margin (3.7\%). CLIP w/o FT uses a template prompt, such as `a photo of a \{class name\}'. } \label{fig:first_image} \end{figure} Pre-training large vision models on web-scale images is an essential ingredient of recent success in computer vision. Fine-tuning pre-trained models, such as ResNet~\cite{he2015deep} and Vision Transformer (ViT)~\cite{dosovitskiy2020image}, is the most popular paradigm for many downstream tasks. However, domain shifts pose a substantial challenge to successfully transferring models in real-world scenarios. Over the past decade, various studies on domain generalization (DG) have sought a systematic way to narrow the gap between source and target domains~\cite{zhou2021domain,wang2021generalizing,shen2021towards}, aiming to build a model that generalizes to unseen domains. Despite the significant work on this front, machine learning systems are still vulnerable to domain shifts even after using DG methods~\cite{gulrajani2020search}. Large pre-trained vision-language models like Contrastive Language-Image Pre-Training (CLIP) are an emerging category of models showing great potential in learning transferable representations across many vision tasks. At the core of CLIP is learning image representations by contrasting them with the representations of a text description of the image, such as `a photo of a \{class name\}'. The text description is often called a \textit{prompt}, and its design is vital in enhancing CLIP performance.
Notably, CLIP can handle unseen classes without fine-tuning, by appropriately changing the text description using the target class name. This paper investigates the robustness of CLIP against various distribution shifts using DomainBed~\cite{gulrajani2020search}, a recently proposed benchmark for the DG setup. While prior works test various DG methods in the benchmark, most studies have only focused on medium-scale pre-trained models, such as ResNet18 or ResNet50. There are two na\"{i}ve approaches to leveraging CLIP in the DG setup (\autoref{fig:concept}). The first approach is fine-tuning the image encoder trained by CLIP, similar to other vision models such as ResNet and ViT. We show that the backbone networks trained by CLIP substantially outperform many backbone networks trained solely on images, such as ResNet, big transfer~\cite{kolesnikov2020big}, and vision transformer~\cite{dosovitskiy2020image}. At the same time, however, fine-tuning sometimes degraded the performance on some domains, suggesting that fine-tuning possibly distorts good properties of pre-trained features~\cite{kumar2022fine}. Another na\"{i}ve approach is designing the template prompt, such as `a photo of a \{class name\}'. The clear merit of this approach is that it does not require optimizing any network and, therefore, keeps the representations learned via pre-training. Despite its simplicity, we show that zero-shot CLIP is still more robust on many DG benchmarks than the vision backbones (e.g., ResNet18, ResNet50, ViT-B16) fine-tuned on source domains, while it is inferior to fine-tuning the vision backbone trained by CLIP. Based on these observations, we propose Domain Prompt Learning~(DPL), a simple yet effective extension of CLIP in the DG setup. A natural way to adapt the model is to add domain-specific features to the prompt template. However, manually designing a prompt template is challenging in many cases due to its ambiguity.
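To make the contrastive zero-shot mechanism concrete, the toy sketch below mimics CLIP-style prediction with random unit vectors standing in for the frozen image and text encoders; the class names, embedding dimension, and noise level are purely illustrative and not taken from CLIP itself:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Stand-ins for the frozen CLIP text tower: one unit embedding per prompt
# 'a photo of a {class name}'.
classes = ["dog", "elephant", "giraffe"]
text_emb = rng.normal(size=(len(classes), dim))
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

# A toy "image embedding" lying close to the 'elephant' prompt embedding
image_emb = text_emb[1] + 0.05*rng.normal(size=dim)
image_emb /= np.linalg.norm(image_emb)

# Zero-shot prediction: cosine similarity between image and prompt embeddings
logits = text_emb @ image_emb
pred = classes[int(np.argmax(logits))]
assert pred == "elephant"
```

Changing the class-name list changes the classifier with no retraining, which is what makes prompt design so influential.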
Instead, we propose DPL~for automatically generating a prompt that estimates domain-specific features given unlabeled examples from each distribution. More specifically, DPL~trains a lightweight prompt generator using source domains, which outputs fixed-length continuous domain prompts given input images of each distribution while freezing the other networks. At test time, the prompt generator produces a domain prompt given input images from the target distribution and adds it to the label prompts. Since the entire network is frozen, the core properties of pre-training remain intact in DPL, which is expected to stably improve CLIP performance in DG, as shown in our experiments. It is worth noting that our work is not the first attempt to tune the prompt of CLIP. For example, \cite{gao2021clipadapter, zhou2021coop} have proposed optimizing continuous prompts on the target datasets, effectively improving CLIP performance. CoCoOp~\cite{zhou2022conditional}, as a contemporary work, trains a meta-net to generate a meta token for adapting to each instance. CoCoOp focuses on unseen classes and demonstrates its performance by transferring from ImageNet to the four specially designed ImageNet variants. This work focuses on the robustness of CLIP against distribution shifts, and proposes a generic way to extract domain-specific features and improve performance on the target domain at test time. \begin{figure*}[t] \begin{center} \includegraphics[width=1.\linewidth]{img/concept.png} \end{center} \caption{ The concept illustration of three approaches to applying CLIP in DG. (1) Fine-tuning updates CLIP's image encoder with a trainable classifier. (2) Zero-shot CLIP makes contrastive predictions with hand-crafted prompts at test time, without updating parameters on the training domains. (3) Prompt learning trains a prompt optimizer and then utilizes the optimized prompts for prediction.
Our DPL~is categorized as (3) prompt learning: it trains a prompt generator in the training phase and, at test time, infers the unseen domain to generate a domain-specific prompt. } \label{fig:concept} \end{figure*} We conduct experiments on four standard datasets included in DomainBed to evaluate DPL, following the experiment setup in~\cite{gulrajani2020search,iwasawa2021testtime}, including parameter tuning and model selection. We show that CLIP with DPL~outperforms the strong baselines by a large margin, raising the accuracy from 73.7\% to 79.6\%~(\autoref{table:main_dg}). Moreover, since DPL~can be seen as a kind of Test-Time Adaptation (TTA) method, we compare it with a series of SoTA TTA methods and demonstrate the efficiency of~DPL~(\autoref{table:main_tta}). Lastly, through various ablation studies, we surprisingly found that a frozen backbone outperforms fine-tuning on the OfficeHome dataset for all of ResNet, DeiT~\cite{touvron2021training}, HViT, and ViT-B16~(\autoref{table:ablation_frezon}). These results demonstrate that DPL~is effective, and, more importantly, they provide many insights for future work that applies CLIP to DG. In summary, our main contributions are: \begin{enumerate} \item We introduce CLIP to the standard DG benchmark DomainBed via prompt learning. \item We propose Domain Prompt Learning~(DPL), a novel domain-inference approach, to effectively help domain generalization by utilizing domain-specific features. \item We demonstrate the impressive empirical performance of DPL~by comparing with strong DG baselines and a series of state-of-the-art (SoTA) TTA methods. \end{enumerate} \section{Related Work} \subsection{Domain Generalization} Over the past decade, various approaches have been proposed to solve DG. Most prior works have focused on regularizing the model using the knowledge from multiple source domains.
For example, domain-invariant representation learning~\cite{ganin2016domainadversarial} is a major branch of domain generalization, aiming to reduce the domain gaps in the space of latent representations. There are many different approaches to measuring the domain gaps, including adversarial classifiers~\cite{li2018domain,ganin2015unsupervised,ganin2016domainadversarial}, kernel mapping~\cite{blanchard2011generalizing,grubinger2015domain}, metric learning~\cite{motiian2017unified,jin2020feature}, and invariant risk minimization~\cite{arjovsky2020invariant}. Similarly, several researchers have sought to generate samples with diverse styles so that models can learn domain-invariant features from them~\cite{shankar2018generalizing,zhou2020deep,borlino2021rethinking}. Other methods use meta learning to learn how to regularize the model to improve robustness~\cite{dou2019domain,li2017learning}. Our work investigates the importance of CLIP~\cite{radford2021learning} in DG and proposes a lightweight way to adapt CLIP to unseen domains. Several recent observations motivate us to benchmark CLIP in the DG setup. First, \cite{gulrajani2020search} shows that many prior approaches do not provide significant improvement over simple supervised learning. This implies that regularizing the model is not sufficient to achieve high performance in DG. Second, despite the substantial related literature, most studies have focused on medium-scale pre-trained models, such as ResNet18 or ResNet50, although very large-scale models often lead to substantial improvements.
Notably, the latest work~\cite{iwasawa2021testtime} compares larger-scale backbone networks, including big transfer~\cite{kolesnikov2020big} (BiT-M-R50x3, BiT-M-R101x3, and BiT-M-R152x4), vision transformers (ViT-B16 and ViT-L16~\cite{dosovitskiy2020image}, Hybrid ViT, DeiT~\cite{touvron2021training}), and MLP-Mixer~\cite{tolstikhin2021mlpmixer} (Mixer-L16), and shows that the selection of backbone networks is important in DG. In contrast with~\cite{iwasawa2021testtime}, we herein demonstrate that CLIP performs surprisingly well without fine-tuning the entire model on the source domains, which is time-consuming in practice. From the methodological perspective, our work relates to several prior works that have attempted to leverage domain features rather than discarding them~\cite{ganin2016domainadversarial,zhou2020deep,borlino2021rethinking}. While these works focused on standard vision backbones, we propose a CLIP-specific approach that leverages domain features by combining them with prompt tuning. \subsection{Test Time Adaptation} \label{sec:tta} Regarding the problem setup, our work can also be seen as Test-Time Adaptation (TTA). The idea of TTA is to update part of the network at test time, typically by minimizing the prediction entropy, so that the model adapts robustly to an unseen domain. Pseudo Label~\cite{lee2013pseudo} updates the entire network, and Tent~\cite{wang2020tent} updates the batch-normalization parameters. SHOT~\cite{liang2020we} updates the feature extractor and minimizes a diversity regularizer and a pseudo-label loss in addition to the prediction entropy. Instead of minimizing prediction entropy at test time, we infer domain information and generate a domain-specific prompt to adapt CLIP to an unseen target domain. Our work also relates to~\cite{iwasawa2021testtime} in that both approaches modulate their predictions given the unlabeled data available at test time.
Specifically, \cite{iwasawa2021testtime} proposes T3A, which replaces the linear classifier with a prototypical classifier built from pseudo-labels, and shows that this stably improves performance in unseen domains. However, T3A cannot be directly applied to CLIP, as it assumes a simple linear classifier that CLIP does not employ. \subsection{Prompt Learning} \label{sec:prompt} The success of GPT-3 demonstrated the importance of prompt tuning. There are various prompting strategies, such as discrete natural-language prompts and continuous prompts~\cite{liu2021pretrain}. PADA~\cite{bendavid2021pada} proposed a domain adaptation algorithm that trains T5~\cite{raffel2019exploring}, a language foundation model, to generate unique domain-relevant features for each input. PADA uses discrete prompts for NLP applications, whereas our DPL~uses continuous prompts in computer vision. On the other hand, many recent works~\cite{li2021prefix,lester2021power} directly tune prompts in continuous vector form, and P-Tuning v2~\cite{liu2021ptuning} showed that continuous prompt tuning achieves the same performance as fine-tuning in various settings. Because of the successful applications of CLIP, prompt tuning is also of great interest in computer vision. Context Optimization (CoOp~\cite{zhou2021coop}) demonstrated that CLIP performance is sensitive to prompts and that a suitable prompt can improve performance on image recognition tasks. CLIP-Adapter~\cite{gao2021clipadapter} was proposed to learn with an additional adapter network. \cite{ge2022domain} adapts CLIP using contrastive learning in the Unsupervised Domain Adaptation setup. Unlike these works, which need access to images or class labels from the target domain, we adapt CLIP to an unseen domain with a generated domain prompt inferred from input images. \section{Method} In this section, we first introduce the notation and definitions of DG following~\cite{wang2021generalizing}.
Then, we explain how to use CLIP in DG and introduce Domain Prompt Learning~to enhance CLIP performance in DG. \subsection{Problem Setup of DG} \begin{figure}[h] \begin{center} \includegraphics[width=1.\linewidth]{img/architecture.png} \end{center} \caption{ The architecture of CLIP + DPL. The only network we train is the prompt generator $F(\cdot)$, colored in blue. First, the input images are encoded into image features with the frozen CLIP image encoder $f(\cdot)$. The image features are fed into the domain prompt generator $F(\cdot)$ to generate a domain prompt. Simultaneously, all labels are encoded using the frozen CLIP text encoder $g(\cdot)$ to obtain the label prompt embeddings. Second, the domain prompt embedding is added to the label prompt embeddings. Finally, to obtain the prediction probabilities, the cosine similarity $\langle\cdot,\cdot \rangle$ is computed between the image embeddings and the shifted label prompt embeddings. } \label{fig:architecture} \end{figure} Let $\mathcal{X}$ denote an input space and $\mathcal{Y}$ an output space. A domain is composed of data sampled from a distribution. We denote the dataset from distribution $i$ as $\mathcal{S}^{i} = \{(\bm{x}_{j}^{i}, y_{j}^{i})\}^{n_i}_{j=1} \sim \mathcal{P}_{XY}^{i}$, where $\bm{x} \in \mathcal{X} \subset \mathbb{R}^{d}$ is an input image, $y \in \mathcal{Y}$ denotes the class associated with $\bm{x}$, and $\mathcal{P}_{XY}^{i}$ denotes the joint distribution of the sample and output label in domain $i$; $X, Y$ denote the corresponding random variables. In DG, we are interested in the performance of a predictor $h$ on data from an unseen domain $\mathcal{P}_{XY}^{t} \neq \mathcal{P}_{XY}^{i}$ for all $i$.
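The forward pass of CLIP + DPL~described in the figure caption can be summarized in a few lines. The sketch below is illustrative only: `prompt_generator` stands in for the trainable module $F(\cdot)$, the frozen encoders are assumed to have produced the feature matrices already, and all shapes and names are hypothetical.

```python
import numpy as np

def cosine_sim(a, b):
    # row-wise cosine similarity matrix between a (N, d) and b (K, d)
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def dpl_forward(image_feats, label_embeds, prompt_generator):
    # image_feats:  (N, d) outputs of the frozen image encoder f for one batch
    # label_embeds: (K, d) outputs of the frozen text encoder g for the K labels
    # prompt_generator: the only trainable module F, mapping (d,) -> (d,)
    domain_prompt = np.mean([prompt_generator(f) for f in image_feats], axis=0)
    shifted = label_embeds + domain_prompt  # add domain prompt to label prompts
    return np.argmax(cosine_sim(image_feats, shifted), axis=1)
```

Classification thus stays exactly CLIP's similarity rule; only the additive domain prompt depends on the batch.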
Prior works fine-tuned a pre-trained image encoder $f$ (usually ResNet18 or ResNet50) in conjunction with a randomly initialized classification head $g$ (a linear classifier), using data from multiple different datasets. Specifically, given $M$ datasets $\mathcal{S}^{i}$ collected from various domains $i \in \{ 1, \cdots, M\}$, $f$ and $g$ are updated by \begin{equation} \label{eq:domain-generalization} \min_{f, g} \frac{1}{M}\sum_{i=1}^{M}\frac{1}{n_i}\sum_{j=1}^{n_i}\ell(g \circ f(\bm{x_j^i}), y_j^i), \end{equation} where $\ell(\cdot)$ is a loss function. In the simplest case, $\ell$ is the cross-entropy loss, and minimizing \eqref{eq:domain-generalization} is called empirical risk minimization (ERM). As discussed in \autoref{subsec:dtl}, different methods in DG use other loss functions, designing regularization terms to prevent overfitting to specific domains. These datasets are frequently referred to as source domains, as distinguished from the target domains on which we want the model to perform well. \subsection{Na\"{i}ve Approaches for Using CLIP in DG} CLIP consists of two parts: an image encoder $f_{clip}$ and a language model $g_{clip}$. CLIP classifies image features based on their similarity to the embedding of a text prompt $\bm{p}$, such as `dog' or `a photo of a \{class label\},' rather than using a classification head trained from scratch. Specifically, given an image $\bm{x}$ and $K$ class prompts $\bm{p}_k$, CLIP outputs a prediction using both $f_{clip}$ and $g_{clip}$: \begin{equation} \label{eq:clip-zero-shot} \hat{y}_{clip} = \argmax_{k}\langle f_{clip}(\bm{x}), g_{clip}(\bm{p}_k)\rangle, \end{equation} where $K$ is the number of categories and $\langle \cdot,\cdot \rangle$ is cosine similarity. To demonstrate how powerful the representations of massively pre-trained models such as CLIP are in the DG setup, we tested the following two na\"{i}ve approaches to using CLIP in DG: fine-tuning and zero-shot.
Firstly, we evaluated CLIP in a zero-shot manner; i.e., we freeze both the image encoder and the language model, and substitute the class labels used in each dataset into the text prompt $\bm{p}$. Secondly, we can use the image encoder $f_{clip}$ as an alternative to standard image backbones, such as ResNet and ViT. In this setup, we train $f_{clip}$ using the datasets $\mathcal{S}^{i}$ from multiple source domains $i$, as in the standard DG setup, and any algorithm tailored to DG, such as DANN or CORAL, can be used during fine-tuning. While this is powerful, as shown in the experiments, it incurs the additional computational cost of re-training such large models entirely. Besides, the good properties of massive pre-training might be distorted during fine-tuning, as highlighted by the performance degradation compared to the zero-shot approach. In summary, the zero-shot approach is computationally efficient yet less expressive, while fine-tuning can leverage the knowledge of the source datasets but is computationally heavy and may distort the good representations learned during pre-training. Based on these observations, we propose a novel approach to designing the prompt $\bm{p}$ that improves performance in an unseen domain without fine-tuning the entire model.
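As a concrete illustration of the zero-shot route, the snippet below builds the hand-crafted template prompts and applies the argmax-of-cosine-similarity prediction rule; the text embeddings are assumed to be precomputed by the frozen encoders, and the helper names are illustrative, not part of the CLIP API.

```python
import numpy as np

def build_prompts(class_names, template="a photo of a {}."):
    # Hand-crafted zero-shot prompts; the template matches the one in the paper.
    return [template.format(c) for c in class_names]

def zero_shot_predict(image_feat, text_embeds):
    # \hat{y} = argmax_k <f_clip(x), g_clip(p_k)>, cosine similarity;
    # no parameter of either encoder is updated.
    a = image_feat / np.linalg.norm(image_feat)
    b = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    return int(np.argmax(b @ a))
```

Substituting the class names of each DomainBed dataset into `build_prompts` is all that the zero-shot evaluation requires.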
\subsection{Domain Prompt Learning~for CLIP in DG} \begin{table*}[h] \begin{center} \begin{tabular}{lc|cccc|l} \toprule \textbf{DomainBed} & category & \textbf{VLCS} & \textbf{PACS} & \textbf{OfficeHome} & \textbf{Terra} & \textbf{Avg} \\ \midrule ERM (CLIP) & Fine-tuning & 82.7 $\pm$ 0.3 & 92.9 $\pm$ 1.9 & 78.1 $\pm$ 2.1 & 50.2 $\pm$ 1.7 & 75.9\\ CORAL & Fine-tuning & 82.0 $\pm$ 0.2 & 93.2 $\pm$ 1.1 & 78.9 $\pm$ 1.9 & \textbf{53.5 $\pm$ 0.7} & 76.9 \\ DANN & Fine-tuning & \underline{83.2 $\pm$ 1.2} & 93.8 $\pm$ 1.3 & 78.8 $\pm$ 1.1 & 52.2 $\pm$ 2.0 & \underline{77.0} \\ \midrule CLIP & Zero-shot & 76.6 $\pm$ 0.0 & 95.8 $\pm$ 0.1 & 79.9 $\pm$ 0.1 & 36.4 $\pm$ 0.1 & 72.2 \\ CLIP (template prompt)& Zero-shot & 82.3 $\pm$ 0.1 & 96.1 $\pm$ 0.1 & \underline{82.3 $\pm$ 0.2} & 34.1 $\pm$ 0.1 & 73.7 \\ \textbf{CLIP + DPL~(ours)}& Prompt learning & \textbf{84.3 $\pm$ 0.4} & \textbf{97.3 $\pm$ 0.2} & \textbf{84.2 $\pm$ 0.2} & \underline{52.6 $\pm$ 0.6} & \textbf{79.6} \\ \midrule \end{tabular} \caption{ Comparison experiments on VLCS, PACS, OfficeHome, and TerraIncognita. The best results are in bold, and the second-best results are underlined. CLIP (template prompt) indicates using the `a photo of a \{class name\}' prompt. Following the experimental setup in~\cite{iwasawa2021testtime}, the reported results are the mean and standard deviation over seeds \{1, 2, 3\}. } \label{table:main_dg} \end{center} \end{table*} As discussed in Section \ref{sec:prompt}, designing a prompt is a powerful approach to improving the performance of transformer-based models. Prompts should also be easier to train because their dimension is significantly smaller than the number of parameters of $f$ and $g$.
For example, supposing we could access a supervised dataset from the target domain, we could optimize a prefix vector $\bm{p}_{pre}$ by a simple supervised loss: \begin{equation} \min_{\bm{p}_{pre}} \mathbb{E}_{x, y \sim \mathcal{S}} \ell(\hat{y}_{clip*}, y), \end{equation} where $\hat{y}_{clip*}$ is \begin{equation} \hat{y}_{clip*} = \argmax_{k}\langle f_{clip}(\bm{x}), g_{clip}(\bm{p}_k^{*})\rangle, \end{equation} where $\bm{p}_k^{*}$ is a concatenation of the trainable parameters $\bm{p}_{pre}$ and $\bm{p}_{k}$. In particular, $g_{clip}$ outputs a fixed-length vector regardless of the input dimension (i.e., of the size of $\bm{p}_{k}^{*}$); the length of the prefix $\bm{p}_{pre}$ is a hyperparameter. Unfortunately, such labeled training data for the target domain is unavailable in DG. Instead, we propose DPL~to replace the optimization of $\bm{p}_{pre}$ in each domain by training a novel prompt generator $F(\cdot)$ that generates a prompt $\bm{p}_{pre}$ given a small batch of unlabeled images from a distribution. Specifically, we use a fully connected network $F(\cdot)$ to generate a prompt from input images: \begin{equation} \bm{p}_{ap}^{i} = \frac{1}{N} \sum_{j = 1}^{N}F(f(x_{j}^{i})), \end{equation} where $N$ is the batch size for each domain and $\bm{x}_{j}^{i}$ denotes the images from the $i$-th distribution. Given a batch of data from multiple source distributions, we use the following loss function to optimize $F$: \begin{equation} \min_{F} \frac{1}{M}\sum_{i=1}^{M}\frac{1}{n_i}\sum_{j=1}^{n_i}\ell(\hat{y}_{ap}^i, y_j^i), \end{equation} and \begin{equation} \hat{y}_{ap}^i = \argmax_{k}\langle f_{clip}(\bm{x^i}), g_{clip}(\bm{p}_k^{i})\rangle, \end{equation} where $\bm{p}_k^{i}$ is a concatenation of the pre-defined $\bm{p}_k$ and $\bm{p}_{ap}^i$. The architecture of CLIP + DPL~is depicted in~\autoref{fig:architecture}.
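The training objective above can be sketched numerically. In the sketch below the prompt generator is reduced to a single linear map `W` standing in for the MLP, cosine similarities serve as logits for a softmax cross-entropy surrogate of the argmax rule, and all shapes and names are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def softmax_xent(logits, labels):
    # mean cross-entropy over one batch, computed in a numerically stable way
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(logp[np.arange(len(labels)), labels])

def dpl_loss(image_feats_per_domain, labels_per_domain, label_embeds, W):
    # Only W (the stand-in for the prompt generator F) is trainable;
    # image_feats and label_embeds come from the frozen CLIP encoders.
    total = 0.0
    for feats, labels in zip(image_feats_per_domain, labels_per_domain):
        # p_ap^i = (1/N) sum_j F(f(x_j^i)): batch-averaged domain prompt
        domain_prompt = (feats @ W).mean(axis=0)
        shifted = label_embeds + domain_prompt
        # cosine similarities between images and shifted label prompts as logits
        a = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        b = shifted / np.linalg.norm(shifted, axis=1, keepdims=True)
        total += softmax_xent(a @ b.T, labels)
    return total / len(image_feats_per_domain)
```

Averaging the generated prompt over the batch is what lets the same machinery run unchanged at test time on an unlabeled target batch.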
\begin{table*}[t] \begin{center} \begin{tabular}{l|cccc|l} \toprule \textbf{Methods} & \textbf{VLCS} & \textbf{PACS} & \textbf{OfficeHome} & \textbf{Terra} & \textbf{Avg} \\ \midrule ERM & 81.4 $\pm$ 0.3 & 91.9 $\pm$ 0.7 & 78.4 $\pm$ 1.1 & 47.8 $\pm$ 3.1 & 74.9 \\ +T3A & \textbf{82.2 $\pm$ 0.1} & 88.2 $\pm$ 0.0 & 76.9 $\pm$ 0.9 & 48.2 $\pm$ 3.2 & 73.9 \\ +Pseudo Label & 81.1 $\pm$ 1.0 & 92.1 $\pm$ 0.4 & 78.0 $\pm$ 2.5 & \textbf{50.5 $\pm$ 4.4} & \underline{75.4} \\ +Pseudo Label$\star$ & \underline{82.1 $\pm$ 0.4} & 87.6 $\pm$ 0.0 & 76.6 $\pm$ 1.9 & 46.9 $\pm$ 3.1 & 73.3 \\ +Tent$\star$ & \textbf{82.2 $\pm$ 0.4} & 87.8 $\pm$ 0.0 & 76.5 $\pm$ 1.2 & 46.7 $\pm$ 3.2 & 73.3 \\ +SHOT & 80.4 $\pm$ 0.4 & \underline{93.8 $\pm$ 0.8} & \underline{80.7 $\pm$ 0.8} & 40.5 $\pm$ 1.7 & 73.9 \\ +SHOT$\star\star$ & 80.3 $\pm$ 0.5 & 91.2 $\pm$ 0.0 & 79.4 $\pm$ 1.1 & 40.6 $\pm$ 1.8 & 72.9 \\ \midrule CORAL & 81.2 $\pm$ 0.3 & 91.1 $\pm$ 1.9 & 78.7 $\pm$ 0.8 & 48.6 $\pm$ 2.9 & 74.9 \\ +T3A & 80.8 $\pm$ 0.5 & 91.2 $\pm$ 1.9 & 79.1 $\pm$ 0.9 & 49.0 $\pm$ 3.0 & 75.0 \\ +Pseudo Label & 80.0 $\pm$ 1.4 & 93.1 $\pm$ 2.0 & 79.8 $\pm$ 1.2 & 44.5 $\pm$ 3.3 & 74.4 \\ +Pseudo Label$\star$ & 81.4 $\pm$ 0.1 & 91.2 $\pm$ 1.9 & 78.8 $\pm$ 0.9 & 48.6 $\pm$ 2.9 & 75.0 \\ +Tent$\star$ & 81.3 $\pm$ 0.2 & 91.2 $\pm$ 1.9 & 78.6 $\pm$ 0.8 & 48.7 $\pm$ 2.6 & 75.0 \\ +SHOT & 78.7 $\pm$ 1.9 & 93.0 $\pm$ 1.2 & \underline{80.7 $\pm$ 0.9} & 41.9 $\pm$ 2.0 & 73.6\\ +SHOT$\star\star$ & 78.5 $\pm$ 2.0 & 93.1 $\pm$ 1.1 & \underline{80.7 $\pm$ 0.9} & 41.9 $\pm$ 2.0 & 73.5 \\ \midrule \textbf{CLIP + DPL~(ours)}& 81.0 $\pm$ 1.1 & \textbf{95.9 $\pm$ 0.0} & \textbf{82.3 $\pm$ 0.7} & \underline{49.4 $\pm$ 1.1} & \textbf{77.2} \\ \bottomrule \end{tabular} \end{center} \caption{ Comparison with TTA methods. Here, $\star$ indicates updating the linear classifier, and $\star\star$ indicates updating the feature extractor to minimize entropy reported in table 3 of the T3A paper. 
The best results are in bold, and the second-best results are underlined. All the experiments listed in this table were run on a cluster of A100 GPUs. The numbers for ERM, CORAL, and CLIP + DPL~differ from \autoref{table:main_dg} because of the use of half-precision floating point on the A100. } \label{table:main_tta} \end{table} \section{Experiment} In this section, we experimentally demonstrate the effectiveness of DPL. First, we clarify the important DG settings, including the datasets, hyperparameters, model selection strategy, and other implementation details. Second, we show that CLIP + DPL~outperforms the strong DG baselines and several SoTA TTA methods on the DomainBed benchmark. Finally, our ablation experiments, including a comparison of backbone variants and a study of different prompt strategies, provide meaningful insights into applying CLIP + DPL~to DG. \paragraph{Datasets} Following~\cite{iwasawa2021testtime}, we selected four real-world datasets from the DomainBed benchmark: VLCS~\cite{fang2013unbiased}, PACS~\cite{li2017deeper}, OfficeHome~\cite{venkateswara2017deep}, and TerraIncognita~\cite{beery2018recognition}. More details are provided in Appendix A. \paragraph{Hyperparameters and model selection.} We set up the experiments on DomainBed~\footnote{https://github.com/facebookresearch/DomainBed} and implemented DPL~based on CLIP\footnote{https://github.com/openai/CLIP}. We strictly followed the basic selection criterion~\cite{gulrajani2020search} and selected the hyperparameters using standard training-domain validation. First, we split the data of each domain into 80\% and 20\% splits for training the model and selecting hyperparameters, respectively. Then, we ran 20 trials at random across a joint distribution of all hyperparameters. Next, we ran three trials of each hyperparameter setting, reserving one domain for testing and the rest for training.
Finally, we selected the hyperparameters that maximize validation accuracy across the training domains and report overall accuracy averaged across all three trials. \paragraph{Implementation details} As shown in \autoref{fig:architecture}, we only trained a three-layer MLP as the domain prompt generator. We used stochastic gradient descent~\cite{bottou2012stochastic} with momentum as the optimizer. Refer to our source code for further implementation details. \subsection{Comparison with existing DG methods} \label{subsec:dg} \paragraph{Baselines} We compared our method to domain generalization algorithms that fine-tune image features, and to hand-crafted prompts for CLIP. For DG, we trained CLIP image features (ViT-B16) using ERM, CORAL~\cite{sun2016deep}, and DANN~\cite{ganin2016domain}. Note that, as~\cite{gulrajani2020search} pointed out, ERM is a strong DG baseline when the experiments are performed fairly. For prompts, we adopted three types for CLIP: the plain class name `\{class name\}', the template prompt `a photo of a \{class name\}.', and the Domain Prompt Learning~form `$v_{1}, v_{2}, \ldots, v_{n}$ \{class name\}.'. All experiments listed in \autoref{table:main_dg} are based on the CLIP ViT-B16 backbone. We observed that zero-shot CLIP achieves an average accuracy of 72.2\%, rising to 73.7\% with the template prompt. Notably, DPL~improves CLIP performance to 79.6\% and outperforms all baselines, even though ERM, CORAL, and DANN fine-tune their image encoders. Based on this result, we infer that DPL~is an effective method in DG. Surprisingly, we found that fine-tuning the backbones hurts performance on PACS and OfficeHome. We conjecture that fine-tuning causes the model to overfit the source domains when the pre-training distribution is broad enough to cover the target domain. On the other hand, the models perform better with fine-tuning on Terra, which is likely not covered by CLIP's pre-training dataset.
It is worth noting that our DPL~can effectively trade off between both cases. \subsection{Comparison with existing TTA methods} The domain prompt of DPL~is generated by extracting domain features from a batch of input images. As discussed in \autoref{sec:tta}, DPL~can therefore be considered a TTA method, and we performed a fair comparison with several TTA algorithms to validate it. \paragraph{Baselines} Following~\cite{iwasawa2021testtime}, we adopted the baselines Pseudo Label, SHOT, Tent, and T3A with a batch size of 64 at test time. We trained all models with the same CLIP ViT-B16 backbone. All the experiments follow the model selection, hyperparameter selection strategy, and evaluation method proposed in T3A and DomainBed. As shown in \autoref{table:main_tta}, DPL~beats the most effective TTA methods on the four datasets. This result demonstrates that DPL~can consistently improve the model's generalization performance at test time. We believe this is sufficient evidence that the central concept of DPL, extracting unseen-domain features to help the model adapt at test time, is practical. \subsection{Backbone Ablation} \paragraph{Different Backbones} Many proposed DG methods are evaluated using the standard ResNet backbones. However, more and more large models are being studied, and their validity is being experimentally demonstrated~\cite{bommasani2021opportunities,wang2022deepnet}. Therefore, we report the performance of ResNet18 and ResNet50, Mixer-L16~\cite{tolstikhin2021mlpmixer}, Vision Transformer (ViT)~\cite{dosovitskiy2020image} and several variations of ViT, such as BiT~\cite{kolesnikov2020big}, DeiT~\cite{touvron2021training}, and HViT, as well as Mutual Information Regularization with Oracle (MIRO)~\cite{cha2022domain}, in \autoref{table:ablation_backbone}. As a result, we discovered that the CLIP ViT-B16 backbone trained on YFCC100M~\cite{thomee2016yfcc100m} performs as well as HViT.
Moreover, CLIP + DPL~surpassed most of the backbones, including HViT and MIRO. Notably, DPL~only trains a three-layer MLP, in contrast to the others, which fine-tune their backbones. We observed that the SoTA performance is provided by MIRO using the RegNetY-16GF backbone with SWAG pre-training, combined with Stochastic Weight Averaging Densely (MIRO + SWAG~\cite{singh2022revisiting} + SWAD~\cite{cha2021swad}). The simple DPL~achieves close performance (a difference of about 1.9\%). Although comparisons across different pre-training datasets and parameter counts are not strictly fair, this result demonstrates that Domain Prompt Learning~can efficiently adapt CLIP to unseen domains. \begin{table}[t] \begin{center} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|cccc|l} \toprule \textbf{Backbone Model} & \textbf{VLCS} & \textbf{PACS} & \textbf{OfficeHome} & \textbf{Terra} & \textbf{Avg} \\ \midrule ResNet18$^\dag$ & 73.2 $\pm$ 0.9 & 80.3 $\pm$ 0.4 & 55.7 $\pm$ 0.2 & 40.7 $\pm$ 0.3 & 62.5\\ ResNet50$^\dag$ & 75.5 $\pm$ 0.1 & 83.9 $\pm$ 0.2 & 64.4 $\pm$ 0.2 & 45.4 $\pm$ 1.2 & 67.3\\ \midrule Mixer-L16$^\dag$ & 76.4 $\pm$ 0.2 & 81.3 $\pm$ 1.0 & 69.4 $\pm$ 1.6 & 37.1 $\pm$ 0.4 & 66.1\\ \midrule BiT-M-R50x3$^\dag$ & 76.7 $\pm$ 0.1 & 84.4 $\pm$ 1.2 & 69.2 $\pm$ 0.6 & 52.5 $\pm$ 0.3 & 70.7\\ BiT-M-R101x3$^\dag$ & 75.0 $\pm$ 0.6 & 84.0 $\pm$ 0.7 & 67.7 $\pm$ 0.5 & 47.8 $\pm$ 0.8 & 68.6\\ BiT-M-R152x2$^\dag$ & 76.7 $\pm$ 0.3 & 85.2 $\pm$ 0.1 & 71.3 $\pm$ 0.6 & 51.4 $\pm$ 0.6 & 71.1\\ ViT-B16$^\dag$ & 79.2 $\pm$ 0.3 & 85.7 $\pm$ 0.1 & 78.4 $\pm$ 0.3 & 41.8 $\pm$ 0.6 & 71.3\\ ViT-L16$^\dag$ & 78.2 $\pm$ 0.5 & 84.6 $\pm$ 0.5 & 78.0 $\pm$ 0.1 & 42.7 $\pm$ 1.9 & 70.9\\ DeiT$^\dag$ & 79.3 $\pm$ 0.4 & 87.8 $\pm$ 0.5 & 76.6 $\pm$ 0.3 & 50.0 $\pm$ 0.2 & 73.4 \\ HViT$^\dag$ & 79.2 $\pm$ 0.5 & 89.7 $\pm$ 0.4 & 80.0 $\pm$ 0.2 & 51.4 $\pm$ 0.9 & 75.1\\ \midrule MIRO* & 79.0 $\pm$ 0.0 & 85.4 $\pm$ 0.4 & 70.5 $\pm$ 0.4 & 50.4 $\pm$ 1.1 & 71.3 \\ MIRO + SWAD* & 79.6 $\pm$ 0.2 & 88.4 $\pm$ 0.1 & 72.4 $\pm$ 0.1 & 52.9
$\pm$ 0.2 & 73.3 \\ MIRO + SWAG* & 79.9 $\pm$ 0.6 & \textbf{97.4 $\pm$ 0.2} & 80.4 $\pm$ 0.2 & \underline{58.9 $\pm$ 1.3} & 79.2 \\ MIRO + SWAD + SWAG* & 81.7 $\pm$ 0.1 & 96.8 $\pm$ 0.2 & \underline{83.3 $\pm$ 0.1} & \textbf{64.3 $\pm$ 0.3} & \textbf{81.5} \\ \midrule \midrule CLIP ViT-B16 & \underline{82.7 $\pm$ 0.3} & 92.9 $\pm$ 1.9 & 78.1 $\pm$ 2.1 & 50.2 $\pm$ 1.7 & 75.9\\ \textbf{CLIP+DPL}& \textbf{84.3 $\pm$ 0.4} & \underline{97.3 $\pm$ 0.2} & \textbf{84.2 $\pm$ 0.2} & 52.6 $\pm$ 0.6 & \underline{79.6} \\ \midrule \end{tabular}} \end{center} \caption{ Results of ERM with various backbone networks on the DG benchmark. $^\dag$ indicates that the numbers are taken from Table 2 in~\cite{iwasawa2021testtime}. * indicates the numbers are taken from MIRO~\cite{cha2022domain}. The best scores are bolded, and the second-best scores are underlined. } \label{table:ablation_backbone} \end{table} \begin{table}[t] \begin{center} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|cccc|l} \toprule \textbf{Backbone} & \textbf{VLCS} & \textbf{PACS} & \textbf{OfficeHome} & \textbf{Terra} & \textbf{Avg} \\ \midrule (1) Frozen & 76.0 $\pm$ 0.3 & 66.0 $\pm$ 0.7 & 61.7 $\pm$ 0.5 & 25.5 $\pm$ 1.8 & 57.3 \\ (2) ResNet18$^\dag$ & 73.2 $\pm$ 0.9 & 80.3 $\pm$ 0.4 & 55.7 $\pm$ 0.2 & 40.7 $\pm$ 0.3 & 62.5 \\ (2) - (1) & \textcolor{red}{-2.8} & +14.3 & \textcolor{red}{-6.0} & +15.2 & +4.2 \\ \midrule (1) Frozen & 77.4 $\pm$ 0.3 & 67.2 $\pm$ 0.4 & 68.0 $\pm$ 0.3 & 35.4 $\pm$ 1.5 & 62.0 \\ (2) ResNet50$^\dag$ & 75.5 $\pm$ 0.1 & 83.9 $\pm$ 0.2 & 64.4 $\pm$ 0.2 & 45.4 $\pm$ 1.2 & 67.3 \\ (2) - (1) & \textcolor{red}{-1.9} & +16.7 & \textcolor{red}{-3.6} & +10.0 & +5.3 \\ \midrule (1) Frozen & 77.5 $\pm$ 0.4 & 74.3 $\pm$ 0.3 & 77.4 $\pm$ 0.2 & 43.4 $\pm$ 0.3 & 68.2 \\ (2) DeiT$^\dag$ & 79.3 $\pm$ 0.4 & 87.8 $\pm$ 0.5 & 76.6 $\pm$ 0.3 & 50.0 $\pm$ 0.2 & 73.4 \\ (2) - (1) & +1.8 & +13.5 & \textcolor{red}{-0.8} & +6.6 & +5.2 \\ \midrule (1) Frozen & 79.2 $\pm$ 0.1 & 76.6 $\pm$ 0.4 &
81.1 $\pm$ 0.2 & 35.7 $\pm$ 0.7 & 68.1 \\ (2) HViT$^\dag$ & 79.2 $\pm$ 0.5 & 89.7 $\pm$ 0.4 & 80.0 $\pm$ 0.2 & 51.4 $\pm$ 0.9 & 75.1 \\ (2) - (1) & \textcolor{red}{-0.0} & +13.1 & \textcolor{red}{-1.1} & +15.7 & +7.0 \\ \midrule (1) Frozen & 82.6 $\pm$ 0.3 & 96.9 $\pm$ 0.1 & 83.2 $\pm$ 0.2 & 46.5 $\pm$ 2.1 & 77.3 \\ (2) CLIP ViT-B16 & 82.7 $\pm$ 0.3 & 92.9 $\pm$ 1.9 & 78.1 $\pm$ 2.1 & 50.2 $\pm$ 1.7 & 75.9 \\ (2) - (1) & +0.1 & \textcolor{red}{-4.0} & \textcolor{red}{-5.1} & +3.7 & \textcolor{red}{-1.4} \\ (3) DPL~(ours)& \textbf{84.3 $\pm$ 0.4} & \textbf{97.3 $\pm$ 0.2} & \textbf{84.2 $\pm$ 0.2} & \textbf{52.6 $\pm$ 0.6} & \textbf{79.6} \\ (3) - (1) & +1.7 & +0.4 & +1.0 & +6.1 & +2.3 \\ \midrule \end{tabular} } \end{center} \caption{ Results of the frozen-backbone ablation with ERM. Each block corresponds to one backbone; Frozen means the backbone is kept frozen. $^\dag$ indicates that the numbers are taken from Table 2 in~\cite{iwasawa2021testtime}. The highlighted numbers indicate where Frozen ERM outperforms the standard ERM with a fine-tuned backbone. Row (3) refers to DPL, whose scores beat all others and are bolded. } \label{table:ablation_frezon} \end{table} \paragraph{Frozen Backbone} Fine-tuning a large model like CLIP or other foundation models requires substantial computing power. DPL~also aims to adapt CLIP to the target domain with minimal computation. We wondered whether simply training an MLP classifier on top of a frozen backbone could aid model transfer, and conducted ablation experiments with five different backbones. From \autoref{table:ablation_frezon}, we surprisingly found that Frozen ERM outperforms the standard ERM on OfficeHome with all the backbones. On VLCS, the performance of Frozen ERM is also unexpectedly competitive. These results show that fine-tuning hurts the model more than expected on specific datasets. On the other hand, DPL~steadily improves performance on all datasets, demonstrating its robustness.
A similar phenomenon, that fine-tuning does not always improve performance in DG, is also observed in \autoref{subsec:dg}. Due to computing resource constraints, we only evaluated several backbones of varying sizes in this work. We note that several recent studies analyze the same phenomenon, namely the effect of pre-training datasets and backbones on DG and out-of-distribution settings~\cite{kim2022broad,wenzel2022assaying}. \section{Conclusions} We introduced CLIP to DG on DomainBed. For this purpose, we proposed a novel approach called Domain Prompt Learning~(DPL) for efficiently adapting CLIP to an unseen domain. By generating the domain prompt conditioned on input images, CLIP + DPL~brings substantial improvements over strong DG baselines and several effective TTA methods on DomainBed. We then conducted ablation experiments with various backbones and Frozen ERM, verifying that DPL~stabilizes performance and presenting meaningful insights about existing datasets and backbone fine-tuning strategies. We hope that our research will broaden and inspire the role of prompt learning in domain transfer learning. \subsection{Limitation} \label{subsec:limitation} \paragraph{Interpretability of Domain Prompt} For better performance, the domain prompt of DPL~is represented directly as a continuous vector, which lacks interpretability. However, improving interpretability is an important research direction in both foundation-model applications and domain generalization. We consider producing discrete, semantically informative prompts an exciting extension of DPL, even at some loss of precision. \paragraph{Label Shift} From a technical perspective, DPL~cannot capture domain shift that is not reflected in the images, because it extracts domain features from images only. As a result, DPL~cannot account for such non-visual domain shift. Unfortunately, label shift does occur in real-world applications~\cite{azizzadenesheli2019regularized}.
An interesting open question is whether adding appropriate information to the domain prompt, such as a detailed textual description of the target domain, can help address the label shift problem. \paragraph{Social impact perspective} Many images and text descriptions from web data are used directly to train CLIP. Though CLIP benefits from low-cost data that require no manual labeling, such data inevitably introduce bias and privacy concerns into CLIP and other foundation models~\cite{bommasani2021opportunities}. This requires us to pay closer attention to the opportunities and risks of foundation models. \subsection{Future Work} First and foremost, interpretability is critical in both domain transfer learning and foundation models. As discussed in \autoref{subsec:limitation}, DPL~introduces the possibility of using large language models in DG in the form of prompts; we will investigate this direction in future work. There are two simple and critical approaches to improving DG performance. One is to apply visual prompt tuning~\cite{jia2022visual} to purely visual backbones, which can be combined with more of the previous methods. Another is a data-centric approach, since we observe uneven data quality in the widely used datasets. Finally, several recent studies systematically analyze the performance and shortcomings of large-scale pre-trained models in out-of-distribution generalization~\cite{cha2022domain,wenzel2022assaying}. We hope that our results will inspire more research in this direction.
\section{Introduction}\label{sec:int} In this paper we are interested in the solution of the generalized equation \begin{equation} \label{eq:ipi} F(x) +T(x) \ni 0, \end{equation} where $F:{\Omega}\to H$ is a Fr\'echet differentiable function, $H$ is a Hilbert space, $\Omega\subseteq H$ is an open set, and $T:H \rightrightarrows H$ is a set-valued maximal monotone operator. As is well known, the generalized equation \eqref{eq:ipi} covers a wide range of problems in classical analysis and its applications, for instance systems of nonlinear equations and abstract inequality systems. If $\psi: H \to (-\infty, +\infty]$ is a proper lower semicontinuous convex function and $$ T(x)=\partial \psi(x) =\{u\in H ~:~ \psi(y) \geq \psi(x) + \langle u,y-x\rangle, ~\forall~ y\in H\}, $$ then \eqref{eq:ipi} becomes the variational inequality problem $$ F(x) +\partial \psi(x) \ni 0, $$ which includes linear and nonlinear complementarity problems; additional comments about such problems can be found in \cite{ DontchevRockafellar2009, FerreiraSilva, josephy1979, Robinson1972_2, Uko1996, UkoArgyros2009} and the references cited therein. Newton's method is one of the most important methods for solving \eqref{eq:ipi}; given an initial point $x_0$, it generates a sequence as follows: \begin{equation} \label{eq:ipi1} F(x_k) + F'(x_k)(x_{k+1}-x_k)+ T(x_{k+1}) \ni 0, \qquad k=0,1, \ldots. \end{equation} This method may be viewed as a Newton-type method based on a partial linearization, which has been studied in several papers including \cite{ Dontchev1996, PietrusJean2013}; see also \cite[Section 6C]{DontchevRockafellar2009}, where an interesting discussion about iterative methods for solving generalized equations is presented. When $T\equiv \{0\}$, the iteration \eqref{eq:ipi1} becomes the standard Newton's method for solving the nonlinear equation $F(x)=0.$ L.~V.
Kantorovich in \cite{Kantorovich1948} established convergence of Newton's method for solving the equation $F(x)=0$ under conditions at the point where the iterations begin. To be more specific, by imposing assumptions on the derivative $F'(x_0)$ and on the term $\|F'(x_0)^{-1}F(x_0)\|$, he obtained a convergence result for Newton's method. The main idea of Kantorovich, used in his proof, was the majorization of the sequence $\{x_k\}$ by a sequence of scalars. Recently, there has been some growing interest in Kantorovich's theorem, see for instance \cite{Ferreira2015, FerreiraSilva, FerreiraSvaiter2009, Potra2005, Wang1999}. To the best of our knowledge, S.~M. Robinson in \cite{Robinson1972_2} was the first to consider a generalization of the Kantorovich theorem to inclusions of the type $F(x)\in K$, where $K$ is a nonempty closed and convex cone, giving convergence results and error bounds for this method. His idea was to use properties of convex processes, introduced by Rockafellar \cite{Rockafellar1970}; for more details and results about convex processes see \cite{Robinson1972_1, Rockafellar1967}. In \cite{Robinson1972_2} the iterate $x_{k+1}$ is taken as the element of the set defined by \eqref{eq:ipi1} which is at minimal distance from $x_k$; it is then shown that the sequence $\{x_k\}$ converges quadratically to a solution of $F(x)\in K.$ N.~H. Josephy in \cite{josephy1979} was the first to consider a semilocal Newton's method of the kind \eqref{eq:ipi1} for solving \eqref{eq:ipi} with $T=N_C$, the normal cone mapping of a convex set $C\subset \mathbb{R}^{m}$. To guarantee that the method is well defined, the property of {\it strong regularity} of $F(x_0)+F'(x_0)(x-x_0)+T(x)$ at $x_1$ for $0$, where $x_1$ is obtained from $x_0$, introduced by S.~M. Robinson in \cite{Robinson1980} in the theory of generalized equations, was used.
If $X = Y$ and $T=\{ 0\},$ then strong regularity of $F(x_0)+F'(x_0)(x-x_0)+T(x)$ at $x_1\in X$ for $0\in X$ is equivalent to the assumption that $F'(x_0)^{-1}$ is a continuous linear operator. If $X=\mathbb{R}^{n}$, $Y=\mathbb{R}^{m}$ and $T=\mathbb{R}^{s}_{-}\times \{0\}^{m-s}$, then strong regularity is equivalent to the Mangasarian-Fromovitz constraint qualification; see \cite[Example 4D.3]{DontchevRockafellar2009}. An important case is when \eqref{eq:ipi} represents the Karush-Kuhn-Tucker system for the standard nonlinear programming problem with a strict local minimizer; see \cite[p.~232]{DontchevRockafellar2009}. In \cite{chang2015, UkoArgyros2009}, under majorant and generalized Lipschitz conditions, local and semilocal convergence, quadratic rates, estimates of the best possible convergence radius of Newton's method, as well as uniqueness of the solution for generalized equations, were established. One usual assumption used to obtain quadratic convergence of Newton's method \eqref{eq:ipi1} for solving \eqref{eq:ipi} is the Lipschitz continuity of $F'$ in a neighborhood of the solution; see \cite{DontchevRockafellar2009}. Indeed, keeping control of the derivative is an important point in the convergence analysis of Newton's method. On the other hand, several papers have dealt with the convergence analysis of Newton's method for solving the equation $F(x)=0$ by relaxing the assumption of Lipschitz continuity of $F'$, see for example \cite{FerreiraSvaiter2009, Wang1999, Zabrejko1987}; in fact, all these conditions are equivalent to X.~Wang's condition introduced in \cite{Wang1999}. The advantage of working with a majorant condition rests in the fact that it allows one to unify several convergence results pertaining to Newton's method; see \cite{FerreiraSvaiter2009, Wang1999} and Section~\ref{apl}.
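To make the iteration \eqref{eq:ipi1} concrete in the simplest setting, consider $H=\mathbb{R}$ and $T=N_{[0,+\infty)}$, the normal cone mapping of the half-line. When $F'(x_k)>0$, the linearized inclusion $0\in F(x_k)+F'(x_k)(x-x_k)+N_{[0,+\infty)}(x)$ has the closed-form solution $x_{k+1}=\max\{0,\, x_k-F(x_k)/F'(x_k)\}$, a projected Newton step. The following numerical sketch is purely illustrative and not part of the analysis above; the test function $F(x)=x^2-2$ and the starting point are our own choices.

```python
def josephy_newton_1d(F, dF, x0, iters=25):
    """Josephy-Newton iteration for 0 in F(x) + N_{[0,inf)}(x) on the real line.

    Assuming dF(x_k) > 0, the linearized inclusion at x_k reduces to the
    projected Newton step x_{k+1} = max(0, x_k - F(x_k)/dF(x_k)).
    """
    x = x0
    for _ in range(iters):
        x = max(0.0, x - F(x) / dF(x))
    return x

# Illustrative data (not from the paper): F(x) = x^2 - 2 with solution sqrt(2).
root = josephy_newton_1d(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=2.0)
```

Starting from $x_0=2$, the projection never activates and the iterates reduce to the classical Newton sequence for $x^2-2=0$, converging quadratically to $\sqrt{2}$.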
In this paper we rephrase the majorant condition introduced in \cite{FerreiraSvaiter2009} in order to study the local convergence properties of Newton's method \eqref{eq:ipi1}. The analysis presented provides a clear relationship between the majorant function and the function defining the generalized equation. It also allows us to obtain the optimal convergence radius for the method with respect to the majorant condition, as well as uniqueness of the solution. The analysis of this method under Lipschitz's condition and Smale's condition is provided as special cases. The organization of the paper is as follows. In Section~\ref{sec:int.1}, some notations and important results used throughout the paper are presented. In Section \ref{lkant}, the main result is stated, and in Section~\ref{sec:PMF} properties of the majorant function and the main relationships between the majorant function and the nonlinear operator are established. In Section~\ref{convanalysis} the main result is proved; the uniqueness of the solution and some applications of this result are given in Section~\ref{apl}. Some final remarks are made in Section~\ref{rf}. \section{Preliminaries} \label{sec:int.1} The following notations and results are used throughout our presentation. Let $H$ be a Hilbert space with scalar product $\langle \cdot, \cdot\rangle$ and norm $\|\cdot\|$. The {\it open} and {\it closed balls} at $x$ with radius $\delta\geq 0$ are denoted, respectively, by $ B(x,\delta) := \{ y\in H ~: ~\|x-y\|<\delta \}$ and $B[x,\delta] := \{ y\in H ~: ~\|x-y\|\leqslant \delta\}.$ We denote by ${\mathscr L}(X,Y)$ the {\it space consisting of all continuous linear mappings} $A:X \to Y$, and the {\it operator norm} of $A$ is defined by $ \|A\|:=\sup \; \{ \|A x\|~: \|x\| \leqslant 1 \}.$ Recall that a bounded linear operator $G:H \to H$ is called a positive operator if $G$ is self-conjugate and $\langle Gx,x\rangle \geq 0$ for each $x\in H$.
The {\it domain} and the {\it range} of a set-valued mapping $H: X \rightrightarrows Y$ are, respectively, the sets $ \mbox{dom}~H:=\{x\in X ~: ~ H(x)\neq \varnothing\} $ and $ \mbox{rge}~H:=\{y\in Y ~: ~ y \in H(x) ~\mbox{for some} ~x\in X\}$. The {\it inverse} of $H$ is the set-valued mapping $H^{-1}:Y \rightrightarrows X$ defined by $ H^{-1}(y):=\{x \in X ~: ~ y \in H(x)\}$. \begin{definition}\label{def:pplm} Let $H$ be a Hilbert space, $\Omega$ be an open nonempty subset of $H$, $h: \Omega \to H$ be Fr\'echet differentiable with derivative $h'$ and $T:H \rightrightarrows H$ be a set-valued mapping. The {\it partial linearization} of the mapping $h +T$ at $x\in H$ is the set-valued mapping $L_h(x, \cdot ):H \rightrightarrows H$ given by \begin{equation} \label{eq:pplm} L_h(x, y ):=h(x)+h'(x)(y-x)+T(y). \end{equation} For each $x\in H$, the inverse $L_h(x, \cdot )^{-1}:H \rightrightarrows H$ of the mapping $L_h(x, \cdot )$ at $z\in H$ is denoted by \begin{equation} \label{eq:invplm} L_h(x, z )^{-1}:=\left\{y\in H ~:~ z\in h(x)+h'(x)(y-x)+T(y)\right\}. \end{equation} \end{definition} \begin{remark} If in the above definition we have $T\equiv \{0\}$, $z=0$ and $h'(x)$ invertible, then $L_h(x, 0 )^{-1}=x-h'(x)^{-1}h(x)$ is the Newton iteration for solving the equation $h(x)=0$. \end{remark} Now, we recall notions of monotonicity for set-valued operators. \begin{definition}\label{def.mono} Let $T:H \rightrightarrows H$ be a set-valued operator. $T$ is said to be monotone if, for any $x,y\in \mbox{dom}~{T}$, $u \in T(y)$ and $v\in T(x)$, the following inequality holds: $$ \langle u-v,y-x \rangle \geq 0 . $$ \end{definition} A subset of $H \times H$ is monotone if it is the graph of a monotone operator. If $ \varphi: H \to (-\infty, +\infty]$ is a proper function then the subdifferential of $\varphi$ is monotone. \begin{definition} Let $T:H \rightrightarrows H$ be monotone.
Then $T$ is maximal monotone if the following implication holds for all $x,u\in H$: \begin{equation} \langle v-u,y-x \rangle \geq 0 \quad \mbox{for each} \quad y\in \mbox{dom}~{T} \quad \mbox{and}\quad v\in T(y) \quad \Longrightarrow \quad x\in \mbox{dom}~{T} \quad \mbox{and} \quad u\in T(x). \end{equation} \end{definition} An example of a maximal monotone operator is the subdifferential of a proper, lower semicontinuous, convex function $ \varphi: H \to (-\infty, +\infty]$. The following result can be found in \cite{Wang2015}. \begin{lemma}\label{eq:plm} Let $G$ be a positive operator. The following statements about $G$ hold: \begin{enumerate} \item $\|G^2\|=\|G\|^2$; \item If $G^{-1}$ exists, then $G^{-1}$ is a positive operator. \end{enumerate} \end{lemma} As a consequence of this result we have: \begin{lemma}\label{eq:adjunt} Let $G$ be a positive operator. Suppose that $G^{-1}$ exists; then for each $x\in H$ we have $$ \langle Gx,x\rangle \geq \frac{\|x\|^2}{\|G^{-1}\|}. $$ \end{lemma} \begin{proof} See Lemma~2.2 of \cite{Uko1996}. \end{proof} Let $G:H \to H$ be a bounded linear operator. We will use the convention that $\widehat{G}:=\frac{1}{2}(G+G^*)$, where $G^*$ is the conjugate operator of $G$. As we can see, $\widehat{G}$ is a self-conjugate operator. From now on, we assume that $T:H \rightrightarrows H$ is a set-valued maximal monotone operator and $F: H \to H$ is a Fr\'echet differentiable function. The next result is of major importance to prove the well-definedness of Newton's method. Its proof can be found in \cite[Lemma~1, p.189]{Smale1986}. \begin{lemma}[Banach's lemma]\label{eq.banachlemma} Let $B: H \to H$ be a bounded linear operator and $I:H \to H$ the identity operator. If $\|B-I\|<1$ then $B$ is invertible and $\|B^{-1}\|\leq 1/(1-\|B-I\|)$.
\end{lemma} \section{Local analysis of Newton's method} \label{lkant} In this section, we study Newton's method for solving the generalized equation \eqref{eq:ipi}, which is based on the partial linearization of this equation, see \cite{josephy1979} (see also \cite{Dontchev1996}). To study the convergence properties of this method, we assume that the derivative $F'$ satisfies a weak Lipschitz condition on a region $\Omega$, relaxing the usual Lipschitz condition. Our main result is the following: \begin{theorem}\label{th:nt} Let $H$ be a Hilbert space, $\Omega$ be an open nonempty subset of $H$, $F: \Omega \to H$ be continuous with continuous Fr\'echet derivative $F'$, $T:H \rightrightarrows H$ be a set-valued operator and $x_0\in \Omega$. Suppose that $\widehat{F'(x_0)}$ is a positive operator and $\widehat{F'(x_0)}^{-1}$ exists. Let $R>0$ and $\kappa:=\sup\{t\in [0, R): B(x_0, t)\subset \Omega\}$. Suppose that there exists $f:[0,\; R)\to \mathbb{R}$ twice continuously differentiable such that \begin{equation}\label{Hyp:MH} \|\widehat{F'(x_0)}^{-1}\| \left\|F'(y)-F'(x)\right\| \leq f'\left(\|x-y\|+\|x-x_0\| \right)-f'\left(\|x-x_0\|\right), \end{equation} for all $x,y \in B(x_0, \kappa)$. Moreover, suppose that \begin{equation} \label{eq.ipoint1} \|x_1-x_0\|\leq f(0), \end{equation} and the following conditions hold: \begin{itemize} \item[{\bf h1)}] $f(0)>0$ and $f'(0)=-1$; \item[{\bf h2)}] $f'$ is convex and strictly increasing; \item [{\bf h3)}] $f(t)=0$ for some $t\in (0,\; R)$.
\end{itemize} Then, $f$ has a smallest zero $t_*\in (0,\; R)$, the sequences generated by Newton's method for solving the generalized equation $F(x)+T(x)\ni 0$ and the equation $f(t)=0$, with starting point $x_0$ and $t_0=0$, respectively, \begin{equation} \label{eq:DNS} 0\in F(x_k)+F'(x_k)(x_{k+1}-x_k)+T(x_{k+1}), \qquad t_{k+1} ={t_k}-f(t_k)/f'(t_k),\qquad k=0,1,\ldots\,, \end{equation} are well defined, $\{t_k\}$ is strictly increasing, is contained in $[0, t_*)$ and converges to $t_*$, $\{x_k\}$ is contained in $B(x_0, t_*)$ and converges to a point $x_*\in B[x_0, t_*]$ which is the unique solution of the generalized equation $F(x)+T(x)\ni 0$ in $B[x_0, t_*]$. Moreover, the sequences $\{x_k\}$ and $\{t_k\}$ satisfy \begin{equation}\label{eq:q2} \|x_*-x_k\| \leq t_* -t_k, \qquad \qquad \|x_*-x_{k+1}\| \leq \frac{t_*-t_{k+1}}{(t_* -t_k)^2}\|x_*-x_k\|^2, \end{equation} for all $k=0,1,\ldots$, and the sequences $\{t_k\}$ and $\{x_k\}$ converge $Q$-linearly as follows \begin{equation}\label{eq:rates0} \|x_*-x_{k+1}\| \leq \frac{1}{2}\|x_* -x_k\|, \qquad \qquad t_* -t_{k+1} \leq \frac{1}{2} (t_* -t_k), \qquad k=0,1, \ldots\ . \end{equation} If, additionally, \begin{itemize} \item[{\bf h4)}] $f'(t_*)<0$, \end{itemize} then the sequences $\{t_k\}$ and $\{x_k\}$ converge $Q$-quadratically as follows \begin{equation}\label{ine.rates1} \|x_*-x_{k+1}\| \leq \frac{D^{-}f'(t_*)}{-2f'(t_*)}\|x_*-x_k\|^2, \qquad \qquad t_* -t_{k+1} \leq \frac{D^{-}f'(t_*)}{-2f'(t_*)} (t_* -t_k)^2, \end{equation} for all $k=0,1, \ldots\ .$ \end{theorem} \begin{remark} When $T\equiv \{0\}$ and $f'$ satisfies a Lipschitz-type condition, we obtain a particular instance of Theorem~\ref{th:nt}, which retrieves the classical convergence theorem on Newton's method under the Lipschitz condition; see \cite{Rall1974, Traub1979}.
\end{remark} \begin{remark}\label{def.good} Since $T$ is maximal monotone, if there exists a constant $c>0$ such that \begin{equation}\label{eq.gooddef} \langle F'(x_k)y,y\rangle \geq c\|y\|^2 \end{equation} for each $y\in H$, then there exists a unique point $x_{k+1}$ such that the first inclusion in \eqref{eq:DNS} holds. The proof of this result can be found in \cite[Lemma~2.2]{Uko1996}. Hence, if for each $k$ there exists a constant $c>0$ such that \eqref{eq.gooddef} holds, then the sequence generated by \eqref{eq:DNS} is well defined. \end{remark} {\it From now on, we assume that the hypotheses of Theorem \ref{th:nt} hold}. \subsection{Basic results} \label{sec:PMF} In this section we establish some results about the majorant function $f:[0,\; R)\to \mathbb{R}$ and some relationships between the majorant function and the set-valued mapping $F+T.$ For this, we begin by recalling that Proposition~3 of \cite{FerreiraSvaiter2009} states that the majorant function $f$ has a smallest root $t_*\in (0,R)$, is strictly convex, $f(t)>0,$ $f'(t)<0$ and $t<t-f(t)/f'(t)< t_*$ for all $t\in [0,t_ *)$. Moreover, $f'(t_*)\leqslant 0$ and $f'(t_*)<0$ if, and only if, there exists $t\in (t_*,R)$ such that $f(t)< 0$. Let $$ \bar{t}:=\sup \left\{t\in [0,R): f'(t)<0 \right\}. $$ Note that $t_*\leq \bar{t}$. Since $f'(t)<0$ for all $ t\in [0, \bar{t})$, the Newton iteration of the majorant function $f$ is well defined in $[0, \bar{t})$; let us call it $n_{f}: [0, \bar{t}) \to \mathbb{R}$, defined by \begin{equation}\label{eq.majorfunc} n_{f}(t)=t-\frac{f(t)}{f'(t)}. \end{equation} The next result will be used to obtain the convergence rate of the sequence generated by Newton's method for solving $f(t)=0.$ Its proof can be found in \cite[Proposition 4]{FerreiraSvaiter2009}.
\begin{lemma}\label{eq.ratemajor} For all $t\in [0,t_*)$ we have $n_{f}(t)\in [0,t_*),$ $t<n_{f}(t)$ and $t_*-n_{f}(t)\leq \frac{1}{2}(t_*-t).$ If $f$ also satisfies the condition {\bf h4} then $$ t_* -n_{f}(t) \leq \frac{D^{-}f'(t_*)}{-2f'(t_*)} (t_* -t)^2, \qquad \forall ~t\in [0,t_*). $$ \end{lemma} The definition of $\{t_k\}$ in \eqref{eq:DNS} is equivalent to the following one \begin{equation}\label{eq.majseq} t_0=0, \qquad t_{k+1}=n_{f}(t_k), \qquad k=0,1\ldots. \end{equation} The next result contains the main convergence properties of the above sequence; its proof, which is a consequence of Lemma~\ref{eq.ratemajor}, follows the same pattern as the proof of Corollary~2.15 of \cite{FerreiraMax2013}. \begin{corollary}\label{major.convergence} The sequence $\{t_k\}$ is well defined, strictly increasing and is contained in $[0,t_*).$ Moreover, it satisfies the second inequality in \eqref{eq:rates0} and converges $Q$-linearly to $t_*.$ If $f$ also satisfies assumption {\bf h4} then $\{t_k\}$ satisfies the second inequality in \eqref{ine.rates1} and converges $Q$-quadratically. \end{corollary} Therefore, we have obtained all the statements about the majorant sequence $\{t_k\}$ in Theorem~\ref{th:nt}. Now we are going to establish some relationships between the majorant function and the set-valued mapping $F+T.$ In the sequel we will prove that the partial linearization of $F+T$ has a single-valued inverse, which is Lipschitz in a neighborhood of $x_0$. Since Newton's iteration at a point in this neighborhood happens to be a zero of the partial linearization of $F+T$ at such a point, it will first be convenient to study the {\it linearization error of $F$} at a point in $\Omega$ \begin{equation}\label{eq:def.er} E_F(x,y):= F(y)-\left[ F(x)+F'(x)(y-x)\right],\qquad y,\, x\in \Omega. \end{equation} In the next result we bound this error by the linearization error of the majorant function $f$, namely, $$ e_{f}(t,u):=f(u)-\left[f(t)+f'(t)(u-t)\right],\qquad t,\,u \in [0,R).
$$ \begin{lemma} \label{pr:taylor} Take $x,y\in B(x_0,R)$ and $0\leq t<v< R$. If $\|x-x_0\|\leq t$ and $\|y-x\|\leq v-t$ then \begin{equation}\label{eq:errormajor} \|\widehat{F'(x_0)}^{-1}\| \|E_F(x,y)\| \leq e_{f}(t,v)\frac{\|y-x\|^2}{(v-t)^2}. \end{equation} \end{lemma} \begin{proof} Since $x+\tau(y-x)\in B(x_0,R)$ for all $\tau\in [0,1]$ and $F$ is continuously differentiable in $\Omega$, the linearization error of $F$ in \eqref{eq:def.er} is equivalent to $$ E_F(x,y)=\int_{0}^{1} [F'(x+\tau(y-x))-F'(x)](y-x) d\tau, $$ which, combined with the assumption in \eqref{Hyp:MH}, after some simple algebraic manipulations gives $$ \|\widehat{F'(x_0)}^{-1}\| \|E_F(x,y)\| \leq \int_{0}^{1} [f'(\|x-x_0\| +\tau \|y-x\|) -f'(\|x-x_0\|)]\|y-x\| d\tau. $$ Now, using the convexity of $f'$ and the assumptions $\|x-x_0\|\leq t$ and $\|y-x\|\leq v-t,$ $v<R$, we have \begin{eqnarray*} f'(\|x-x_0\| +\tau \|y-x\|) -f'(\|x-x_0\|) &\leq& f'(t +\tau \|y-x\|) -f'(t) \\ &\leq& [f'(t +\tau (v-t)) -f'(t)]\frac{\|y-x\|}{v-t}, \end{eqnarray*} for any $\tau \in [0,1]$. Combining these inequalities we conclude that $$ \|\widehat{F'(x_0)}^{-1}\| \|E_F(x,y)\| \leq \int_{0}^{1} [f'(t +\tau (v-t)) -f'(t)]\frac{\|y-x\|^2}{v-t}d\tau, $$ which, after performing the integration, yields the desired result. \end{proof} In the next result we present the main relationships between the majorant function $f$ and the operator $F$; it is a consequence of Banach's lemma. \begin{lemma} \label{le:wdns} Let $x_0 \in \Omega$ be such that $\widehat{F'(x_0)}$ is a positive operator and $\widehat{F'(x_0)}^{-1}$ exists. If $\|x-x_0\|\leq t <t_*$, then $\widehat{F'(x)}$ is a positive operator and $\widehat{F'(x)}^{-1}$ exists. Moreover, $$ \|\widehat{F'(x)}^{-1}\|\leq -\frac{\|\widehat{F'(x_0)}^{-1}\|}{f'(t)}.
$$ \end{lemma} \begin{proof} Firstly note that \begin{equation}\label{eq.matriz} \|\widehat{F'(x)}-\widehat{F'(x_0)}\|\leq \frac{1}{2}\|F'(x)-F'(x_0)\| + \frac{1}{2}\|(F'(x)-F'(x_0))^*\|=\|F'(x)-F'(x_0)\|. \end{equation} Take $x\in B[x_0, t],$ $0\leq t<t^*$. Thus $f'(t)<0$. Using \eqref{eq.matriz}, \eqref{Hyp:MH} and taking into account {\bf h1} and {\bf h2} we obtain that \begin{equation}\label{eq:majcond} \|\widehat{F'(x_0)}^{-1}\| \|\widehat{F'(x)}-\widehat{F'(x_0)}\|\leq \|\widehat{F'(x_0)}^{-1}\| \|F'(x)-F'(x_0)\| \leq f'(\|x-x_0\|)-f'(0)<f'(t)+1< 1. \end{equation} Thus, by Banach's lemma, we conclude that $\widehat{F'(x)}^{-1}$ exists. Moreover by above inequality, $$ \|\widehat{F'(x)}^{-1}\|\leq \frac{\|\widehat{F'(x_0)}^{-1}\|}{1-\|\widehat{F'(x_0)}^{-1}\|\|F'(x)-F'(x_0)\|}\leq \frac{\|\widehat{F'(x_0)}^{-1}\|}{1-(f'(t)+1)} =-\frac{\|\widehat{F'(x_0)}^{-1}\|}{f'(t)}. $$ On the other hand, using \eqref{eq:majcond} we have \begin{equation}\label{eq.selfadj} \|\widehat{F'(x)}-\widehat{F'(x_0)}\|\leq \frac{1}{\|\widehat{F'(x_0)}^{-1}\|}. \end{equation} Take $y\in H$. Then, it follows by above inequality that $$ \langle (\widehat{F'(x_0)} -\widehat{F'(x)})y,y\rangle \leq \|\widehat{F'(x_0)} -\widehat{F'(x)}\|\|y\|^2\leq \frac{\|y\|^2}{\|\widehat{F'(x_0)}^{-1}\|}, $$ which implies, after of simple manipulations that $$ \langle \widehat{F'(x_0)}y,y\rangle -\frac{\|y\|^2}{\|\widehat{F'(x_0)}^{-1}\|} \leq \langle \widehat{F'(x)}y,y\rangle. $$ Since $\widehat{F'(x_0)}$ is a positive operator and $\widehat{F'(x_0)}^{-1}$ exists by assumption, we obtain by Lemma~\ref{eq:adjunt} that $$ \langle \widehat{F'(x_0)}y,y\rangle \geq \frac{\|y\|^2}{\|\widehat{F'(x_0)}^{-1}\|}. $$ Therefore, combining the two last inequalities we conclude that $\langle \widehat{F'(x)}y,y\rangle \geq 0$, i.e, $\widehat{F'(x)}$ is a positive operator. 
\end{proof} Lemma~\ref{le:wdns} shows that $\widehat{F'(x)}$ is a positive operator and $\widehat{F'(x)}^{-1}$ exists; thus, by Lemma~\ref{eq:adjunt} we have that for any $y\in H$ $$ \langle \widehat{F'(x)}y,y\rangle \geq \frac{\|y\|^2}{\|\widehat{F'(x)}^{-1}\|}. $$ Note that $\langle \widehat{F'(x)}y,y\rangle=\langle F'(x)y,y\rangle$; thus, by the second part of Lemma~\ref{le:wdns} and {\bf h2}, we conclude that $F'(x)$ satisfies \eqref{eq.gooddef} and, consequently, the Newton iteration mapping is well defined. Let us call $N_{F+T}$ the Newton iteration mapping for $F+T$ in that region, namely, $N_{F+T}:B(x_0, t_*) \to H$ is defined by \begin{equation} \label{eq:NFef1} N_{F+T}(x):= L_F(x,0)^{-1}. \end{equation} Using \eqref{eq:invplm} we conclude that the definition of the Newton iteration mapping in \eqref{eq:NFef1} is equivalent to \begin{equation} \label{eq:NFef} 0\in L_F(x,N_{F+T}(x)):= F(x)+F'(x)(N_{F+T}(x)-x)+T(N_{F+T}(x)),\qquad \forall ~x\in B(x_0, t_*). \end{equation} \begin{remark}\label{eq.initialpoint} Since $0\in F(x_0)+F'(x_0)(x_1-x_0) +T(x_1)$ and $x_1\in \Omega$, it follows from \eqref{eq:invplm} that $$ \{x_1\}=L_F (x_0,0)^{-1}. $$ \end{remark} Therefore, one can apply a \emph{single} Newton iteration on any $x\in B(x_0, t_*)$ to obtain $N_{F+T}(x)$, which may not belong to $B(x_0, t_*)$, or may not even belong to the domain of $F$. Thus, this is enough to guarantee the well-definedness of only one iteration of Newton's method. To ensure that Newton iterations may be repeated indefinitely, or in particular remain invariant on subsets of $B(x_0, t_*)$, we need some additional results. First, we define some subsets of $B(x_0,t_*)$ in which, as we shall prove, the Newton iteration mapping \eqref{eq:NFef} is ``well behaved''.
Define \begin{equation}\label{eq:ker} {K}(t):=\left\{x\in \Omega ~: ~ \|x-x_0\| \leq t, \quad \|L_F(x,0)^{-1}-x\| \leq -\frac{f(t)}{f'(t)}\right\}, \qquad t\in [0,t_*), \end{equation} \begin{equation} \label{eq:kt} {K}:=\bigcup_{t\in {[0,t_ *)}} K(t). \end{equation} \begin{lemma} \label{le:cl} For each $0\leq t< t_*$ we have ${K}(t) \subset B(x_0,t_*)$ and $ N_{F+T}({K}(t)) \subset {K}(n_{f}(t)). $ As a consequence, ${K}\subseteq B(x_0,t_ *)$ and $N_{F+T}({K}) \subset {K}$. \end{lemma} \begin{proof} The first inclusion follows trivially from the definition of ${K}(t).$ Take $x\in {K}(t).$ Using definition \eqref{eq:ker} and \eqref{eq.majorfunc} we conclude that \begin{equation}\label{ini.cond} \|x-x_0\| \leq t, \quad \qquad \|L_F(x,0)^{-1}-x\| \leq -\frac{f(t)}{f'(t)}, \qquad \quad t< n_{f}(t) <t_ *. \end{equation} The definition of the Newton iteration mapping in \eqref{eq:NFef} implies that for all $x \in {K}(t)$ we have $$ \|N_{F+T}(x)-x_0\| \leq \|x-x_0\| + \|N_{F+T}(x)-x\|= \|x-x_0\| + \|L_F(x,0)^{-1}-x\|, $$ and consequently, using \eqref{eq.majorfunc} and \eqref{ini.cond}, the last inequality implies \begin{equation} \label{eq:fcmt} \|N_{F+T}(x)-x_0\| \leq t -\frac{f(t)}{f'(t)} = n_{f}(t) <t_ *. \end{equation} To simplify the notation, define $x_+= N_{F+T} (x)$ and $y=L_F(x_+,0)^{-1}$. Thus, from \eqref{eq:NFef} we have $$ 0\in F(x)+F'(x)(x_+ -x)+T(x_+),\qquad 0\in F(x_+)+F'(x_+)(y -x_+)+T(y). $$ As $T$ is maximal monotone, it follows that $$ \langle F(x)+F'(x)(x_+ -x) -F(x_+)-F'(x_+)(y -x_+), y-x_+ \rangle \geq 0, $$ which implies that \begin{equation}\label{eq.monmax} \langle F(x)-F(x_+)+F'(x)(x_+-x), y-x_+ \rangle \geq \langle F'(x_+)(y-x_+), y-x_+ \rangle. \end{equation} Since, by Lemma~\ref{le:wdns}, $\widehat{F'(x_+)}$ is a positive operator and $\widehat{F'(x_+)}^{-1}$ exists, we obtain from Lemma~\ref{eq:adjunt} that \begin{equation}\label{eq.monmax1000} \frac{\|y-x_+\|^2}{\|\widehat{F'(x_+)}^{-1}\|}\leq \langle \widehat{F'(x_+)}(y-x_+),y-x_+\rangle.
\end{equation} Note that $ \langle \widehat{F'(x_+)}(y-x_+),y-x_+\rangle = \langle F'(x_+)(y-x_+), y-x_+\rangle$; this, together with \eqref{eq.monmax1000} and \eqref{eq.monmax}, yields \begin{equation*}\label{eq.monmax2} \|y-x_+\|^2 \leq \|\widehat{F'(x_+)}^{-1}\| \langle F'(x_+)(y-x_+),y-x_+\rangle \leq \|\widehat{F'(x_+)}^{-1}\| \langle F(x)-F(x_+)+F'(x)(x_+-x), y-x_+\rangle. \end{equation*} Hence, after simple manipulations, the above inequality becomes \begin{equation}\label{eq.monmax222} \|y-x_+\| \leq \|\widehat{F'(x_+)}^{-1}\| \|F(x)-F(x_+)+F'(x)(x_+-x)\|. \end{equation} Since $x_+= N_{F+T} (x)$, we have from \eqref{eq:fcmt} that $\|x_+-x_0\| \leq n_{f}(t)$. Then, taking into account that $f'$ is increasing and negative, combining \eqref{eq.monmax222} with the second part of Lemma~\ref{le:wdns}, \eqref{eq:def.er} and Lemma~\ref{pr:taylor}, we obtain that \begin{equation}\label{eq.monmax3} \|y-x_+\| \leq -\frac{\|\widehat{F'(x_0)}^{-1}\|}{f'(n_{f}(t))}\|E_F(x,x_+)\| \leq \frac{-1}{f'(n_{f}(t))}e_{f}(t, n_{f}(t))\frac{\|x_+-x\|^2}{(n_{f}(t)-t)^2}. \end{equation} On the other hand, using the definition \eqref{eq.majorfunc}, after some manipulations we conclude that $$ f(n_{f}(t))= f(n_{f}(t)) -[f(t) + f'(t)(n_{f}(t)-t)]= e_{f}(t,n_{f}(t)), $$ and, because $x_+= N_{F+T} (x)$, \eqref{eq.majorfunc} and the second inequality in \eqref{ini.cond} imply $\|x-x_+\|\leq n_{f}(t)-t$; thus the inequality in \eqref{eq.monmax3} becomes $$ \|y-x_+\| \leq -\frac{f(n_{f}(t))}{f'(n_{f}(t))}. $$ Therefore, since \eqref{eq:fcmt} implies $\|x_+-x_0\| \leq n_{f}(t)$, we conclude that the second inclusion of the lemma is proved. The third inclusion ${K}\subseteq B(x_0,t_*)$ follows trivially from \eqref{eq:ker} and \eqref{eq:kt}. To prove the last inclusion $N_{F+T}({K}) \subset {K}$, take $x\in {K}$. Thus $x\in K(t)$ for some $t\in [0,t_ *)$. From the second inclusion of the lemma, we have $N_{F+T}(x) \in {K}(n_{f}(t))$.
Since $n_{f}(t)\in [0,t_ *)$, using the definition of ${K}$ in \eqref{eq:kt} we conclude the proof. \end{proof} \subsection{Convergence analysis}\label{convanalysis} To prove the convergence result, which is a consequence of the above results, firstly we note that the definition \eqref{eq:NFef1} implies that the sequence $\{x_k\}$ defined in \eqref{eq:DNS} can be formally stated by \begin{equation}\label{eq.seq} x_{k+1}=N_{F+T}(x_k), \qquad k=0,1,\ldots, \end{equation} or, equivalently, $$ 0\in F(x_k)+F'(x_k)(x_{k+1}-x_k)+T(x_{k+1}), \qquad k=0,1, \ldots. $$ First we show that the sequence generated by Newton's method is well behaved with respect to the set defined in \eqref{eq:ker}. \begin{corollary}\label{res.solution} The sequence $\{x_k\}$ is well defined, is contained in $B(x_0,t_ *),$ and converges to a point $x_*\in B[x_0,t_ *]$ satisfying $0\in F(x_*)+T(x_*).$ Moreover, $x_k\in {K}(t_k)$, for $k=0,1\ldots$ and $$ \|x_*-x_k\|\leq t_ *-t_k, \qquad k=0,1\ldots. $$ \end{corollary} \begin{proof} From the assumption in \eqref{eq.ipoint1}, Remark~\ref{eq.initialpoint}, assumption ${\bf h1}$ and definitions \eqref{eq:ker} and \eqref{eq:kt} we have \begin{equation} \label{eq:fs} \{x_0\}={K}(0)\subset {K}. \end{equation} We know from Lemma~\ref{le:cl} that $N_{F+T}({K}) \subset {K}$. Thus, using \eqref{eq:fs} and \eqref{eq.seq}, we conclude that the sequence $\{x_k\}$ is well defined and remains in ${K}.$ From the first inclusion in the second part of Lemma~\ref{le:cl} we have trivially that $\{x_k\}$ is contained in $B(x_0,t_ *).$ To prove the convergence, first we prove by induction that \begin{equation}\label{eq.defseq} x_k\in {K}(t_k), \qquad k=0,1\ldots. \end{equation} The above inclusion, for $k=0$, follows from \eqref{eq:fs}. Assume now that $x_k\in {K}(t_k).$ Then, combining Lemma~\ref{le:cl}, \eqref{eq.seq} and \eqref{eq.majorfunc}, we conclude that $x_{k+1}\in {K}(t_{k+1}),$ which completes the induction proof.
Now, using \eqref{eq.defseq} and \eqref{eq:ker} we have $$ \|L_F(x_k,0)^{-1}-x_k\| \leq -\frac{f(t_k)}{f'(t_k)}, \qquad k=0,1 \ldots, $$ which, combined with \eqref{eq.seq} and \eqref{eq:DNS}, becomes \begin{equation}\label{des.conver} \|x_{k+1}-x_k\|\leq t_{k+1}-t_k, \qquad k=0,1 \ldots. \end{equation} Taking into account that $\{t_k\}$ converges to $t_ *,$ we easily conclude from the above inequality that $$ \sum_{k=k_0}^{\infty} \|x_{k+1}-x_k\| \leq \sum_{k=k_0}^{\infty} t_{k+1}-t_k =t_ *- t_{k_0} < +\infty, $$ for any $k_0 \in \mathbb{N}.$ Hence, we conclude that $\{x_k\}$ is a Cauchy sequence in $B(x_0, t_ *)$ and thus it converges to some $x_* \in B[x_0, t_ *].$ Therefore, using again \eqref{des.conver}, we also conclude that the inequality in the corollary holds. Since $-F(x_k)-F'(x_k)(x_{k+1}-x_k)\in T(x_{k+1})$, $F$ is a continuously differentiable mapping, $x_k$ converges to $x_*$ and $T$ is maximal monotone, we conclude that $0 \in F(x_*)+T(x_*)$. \end{proof} We have already proved that the sequence $\{x_k\}$ converges to a solution $x_*$ of the generalized equation $F(x)+T(x)\ni 0$ and $x_*\in B[x_0,t_*]$. Now, we will prove that this convergence is $Q$-linear and that $x_*$ is the unique solution of $F(x)+T(x)\ni 0$ in $B[x_0,t_*]$. Furthermore, by assuming that $f$ satisfies ${\bf h4}$, we will also prove that $\{x_k\}$ converges $Q$-quadratically to $x_*$. For that, we need the following result: \begin{lemma}\label{ine.rates} Take $x,y\in B(x_0,R)$ and $0\leq t<v<R$. If \begin{equation} \label{eq:siqc} t< t_*,\quad \|x-x_0\|\leq t, \quad \|y-x\|\leq v-t, \quad f(v)\leq 0, \quad 0\in F(y)+T(y), \end{equation} then the following inequality holds $$ \|y-N_{F+T}(x)\|\leq [v-n_{f}(t)]\frac{\|y-x\|^2}{(v-t)^2}. $$ \end{lemma} \begin{proof} To simplify the notation, define $z= N_{F+T} (x)$.
Since $0\in F(y)+T(y)$, using \eqref{eq:NFef} and the fact that $T$ is maximal monotone, we have $$ \langle -F(x)+F'(x)(x-z)+F(y), z-y\rangle \geq 0, $$ which, after simple manipulations, implies that \begin{equation}\label{eq.monmax1} \langle F(y)-F(x)+F'(x)(x-y), z-y\rangle \geq \langle F'(x)(z-y),z-y\rangle. \end{equation} Since $\|x-x_0\|\leq t<t_*$, we obtain by Lemma~\ref{le:wdns} that $\widehat{F'(x)}$ is a positive operator and $\widehat{F'(x)}^{-1}$ exists. Thus, from Lemma~\ref{eq:adjunt} we have \begin{equation}\label{eq.max} \frac{\|z-y\|^2}{\|\widehat{F'(x)}^{-1}\|}\leq \langle \widehat{F'(x)}(z-y),z-y\rangle. \end{equation} Combining $\langle \widehat{F'(x)}(z-y),z-y\rangle = \langle F'(x)(z-y), z-y\rangle$ with \eqref{eq.max} and \eqref{eq.monmax1} yields \begin{equation*}\label{eq.monmax2} \|y-z\|^2 \leq \|\widehat{F'(x)}^{-1}\| \langle F'(x)(z-y),z-y\rangle \leq \|\widehat{F'(x)}^{-1}\| \langle F(y)-F(x)+F'(x)(x-y), z-y\rangle. \end{equation*} Hence, after simple manipulations, the above inequality becomes \begin{equation}\label{eq.max20} \|y-z\| \leq \|\widehat{F'(x)}^{-1}\| \|F(y)-F(x)+F'(x)(x-y)\|. \end{equation} Now, using Lemma~\ref{le:wdns} and Lemma~\ref{pr:taylor} together with the assumptions of the lemma, we obtain $$ \|y-z\| \leq \frac{-1}{f'(t)}e_{f}(t, v)\frac{\|y-x\|^2}{(v-t)^2}. $$ Since $0\leq t<t_ *$, we have $f'(t)<0$. Using the definition of $e_{f}(t, v)$, \eqref{eq.majorfunc} and the assumption $f(v)\leq 0$, we have $$ -\frac{e_{f}(t,v)}{f'(t)}=v-t + \frac{f(t)}{f'(t)}-\frac{f(v)}{f'(t)} \leq v-t + \frac{f(t)}{f'(t)}=v-n_{f}(t). $$ To end the proof, combine the two above inequalities. \end{proof} \begin{corollary} The sequences $\{x_k\}$ and $\{t_k\}$ satisfy the following inequality \begin{equation}\label{ine.quadr} \|x_*-x_{k+1}\|\leq \frac{t_*-t_{k+1}}{(t_* -t_k)^2}\|x_*-x_k\|^2, \qquad k=0,1\ldots.
\end{equation} As a consequence, the sequence $\{x_k\}$ converges $Q$-linearly to the solution $x_*$ as follows \begin{equation}\label{ine.quadr1} \|x_*-x_{k+1}\|\leq \frac{1}{2} \|x_*-x_k\|, \qquad k=0,1\ldots. \end{equation} Additionally, if $f$ satisfies ${\bf h4}$ then the sequence $\{x_k\}$ converges $Q$-quadratically to $x_*$ as follows \begin{equation}\label{ine.quadr2} \|x_*-x_{k+1}\| \leq \frac{D^{-}f'(t_*)}{-2f'(t_*)}\|x_*-x_k\|^2, \qquad k=0,1\ldots. \end{equation} \end{corollary} \begin{proof} For each $k$, we can apply Lemma~\ref{ine.rates} with $x=x_k,$ $y=x_*,$ $t=t_k$ and $v=t_*,$ to obtain $$ \|x_*-N_{F+T}(x_k)\|\leq [t_*-n_{f}(t_k)]\frac{\|x_*-x_k\|^2}{(t_*-t_k)^2}. $$ Thus, inequality \eqref{ine.quadr} follows from the above inequality, \eqref{eq.seq} and \eqref{eq.majseq}. Note that, by the first part of Lemma~\ref{eq.ratemajor}, \eqref{eq.majseq} and Corollary~\ref{res.solution}, we have $$ \frac{t_*-t_{k+1}}{t_*-t_k}\leq \frac{1}{2}, \qquad \qquad \frac{\|x_*-x_k\|}{t_*-t_k}\leq 1. $$ Combining these inequalities with \eqref{ine.quadr} we obtain \eqref{ine.quadr1}. Now, assume that ${\bf h4}$ holds. Then, by Corollary~\ref{major.convergence}, the second inequality in \eqref{ine.rates1} holds, which combined with \eqref{ine.quadr} implies \eqref{ine.quadr2}. \end{proof} \begin{corollary} The limit $x_*$ of the sequence $\{x_k\}$ is the unique solution of the generalized equation $F(x)+T(x)\ni 0$ in $B[x_0, t_ *]$. \end{corollary} \begin{proof} Suppose there exists $y_* \in B[x_0, t_ *]$ such that $y_*$ is a solution of $F(x)+T(x)\ni 0$. We will prove by induction that \begin{equation}\label{iq.indu} \|y_*-x_k\|\leq t_*-t_k, \qquad k=0,1,\ldots. \end{equation} The case $k=0$ is trivial, because $t_0=0$ and $y_* \in B[x_0,t_*]$. We assume that the inequality holds for some $k$. First, note that Corollary~\ref{res.solution} implies that $x_k\in {K}(t_k)$, for $k=0,1\ldots$.
Thus, from the definition of ${K}(t_k)$ we conclude that $\|x_k-x_0\|\leq t_k$, for $k=0,1,\ldots$. Since $\|x_k-x_0\|\leq t_k$, we may apply Lemma~\ref{ine.rates} with $x=x_k$, $y=y_*$, $t=t_k$ and $v=t_*$ to obtain $$ \|y_*-N_{F+T}(x_k)\|\leq [t_*-n_{f}(t_k)]\frac{\|y_*-x_k\|^2}{(t_*-t_k)^2}. $$ Using the inductive hypothesis, \eqref{eq.seq} and \eqref{eq.majseq} we obtain, from the latter inequality, that \eqref{iq.indu} holds for $k+1$. Since $x_k$ converges to $x_*$ and $t_k$ converges to $t_*$, from \eqref{iq.indu} we conclude that $y_*=x_*$. Therefore, $x_*$ is the unique solution of $F(x)+T(x)\ni 0$ in $B[x_0,t_*]$. \end{proof} \section{Some special cases} \label{apl} In this section, we will present some special cases of Theorem \ref{th:nt}. When $T\equiv \{0\}$ and $F'$ satisfies a Lipschitz-type condition, we will obtain a particular instance of Theorem~\ref{th:nt}, which retrieves the classical convergence theorem on Newton's method under the Lipschitz condition; see \cite{Rall1974, Traub1979}. A version of Smale's theorem on Newton's method for analytic functions is obtained in Theorem~\ref{theo:Smale}. \subsection{Under Lipschitz-type condition} In this section, we will present a version of the classical convergence theorem for Newton's method under a Lipschitz-type condition for generalized equations. The classical version for $T\equiv \{0\}$ has appeared in Rall \cite{Rall1974} and Traub and Wozniakowski \cite{Traub1979}. \begin{theorem} \label{th:cc} Let $H$ be a Hilbert space, $\Omega$ be an open nonempty subset of $H$, $F: \Omega \to H$ be continuous with continuous Fr\'echet derivative $F'$, $T:H \rightrightarrows H$ be a set-valued operator and $x_0\in \Omega$. Suppose that $F'(x_0)$ is a positive operator, that $\widehat{F'(x_0)}^{-1}$ exists and that there exists a constant $K>0$ such that $B(x_0, 1/K)\subset \Omega$ and \begin{equation} \label{eq:hc} \|\widehat{F'(x_0)}^{-1}\| \|F'(x)-F'(y)\| \leq K \|x-y\|,\qquad x,\, y\in B(x_0, 1/K). 
\end{equation} Moreover, suppose that there exists $b>0$ with $2bK< 1$ such that \begin{equation} \label{eq.ipoint} \|x_1-x_0\|\leq b. \end{equation} Then, the sequence $\{x_k\}$ generated by Newton's method for solving $F(x)+T(x)\ni 0$ with starting point $x_0$ \begin{equation} \label{eq:Kant} F(x_k)+F'(x_k)(x_{k+1}-x_k)+T(x_{k+1})\ni 0, \qquad k=0,1,\ldots\,, \end{equation} is well defined, is contained in $B(x_0, t^*)$ and converges to the point $x_*$ which is the unique solution of $F(x)+T(x)\ni 0$ in $B(x_0, t^*)$, where $t^*= \left(1-\sqrt{1-2bK}\right)/{K}$. Moreover, the sequence $\{x_k\}$ satisfies for any $k=0,1,\ldots,$ $$ \|x_*-x_{k+1}\| \leq \frac{K}{2\sqrt{1-2bK}}\|x_*-x_k\|^2. $$ \end{theorem} \begin{proof} Since $f:[0,1/K)\to \mathbb{R},$ defined by $f(t):=(K/2)t^2-t+b,$ is a majorant function for $F$ at the point $x_0$, the result follows by invoking Theorem~\ref{th:nt}, applied to this particular context. \end{proof} \begin{remark} The above result contains, as particular instances, several theorems on Newton's method; see, for example, Rall \cite{Rall1974}, Traub and Wozniakowski \cite{Traub1979} and Daniel \cite{Daniel1973}. See also \cite{Wang2015_1}. \end{remark} \subsection{Under Smale's-type condition} In this section, we will present a version of the classical convergence theorem for Newton's method under a Smale-type condition for generalized equations. The classical version has appeared in a corollary of Proposition 3, p.~195, of Smale \cite{Smale1986}; see also Proposition 1, p.~157, and Remark 1, p.~158, of Blum, Cucker, Shub, and Smale~\cite{BlumSmale1998}, as well as \cite{Ferreira2009}. \begin{theorem} \label{theo:Smale} Let $H$ be a Hilbert space, $\Omega$ be an open nonempty subset of $H$, $F: \Omega \to H$ be an analytic function, $T:H \rightrightarrows H$ be a set-valued operator and $x_0\in \Omega$. Suppose that $F'(x_0)$ is a positive operator and $\widehat{F'(x_0)}^{-1}$ exists. 
Suppose that \begin{equation} \label{eq:SmaleCond} \gamma:= \|\widehat{F'(x_0)}^{-1}\|\sup_{n > 1}\left\| \frac{F^{(n)}(x_0)}{n!}\right\|^{1/(n-1)}<+\infty. \end{equation} Moreover, suppose that $B(x_0,1/\gamma)\subset \Omega$ and that there exists $b>0$ such that \begin{equation} \label{eq.ipointS} \|x_1-x_0\|\leq b \end{equation} and $\alpha := b \gamma\leq 3-2\sqrt{2}$. Then the sequence $\{x_k\}$ generated by Newton's method for solving $F(x)+T(x)\ni 0$ with starting point $x_0$ \begin{equation} \label{eq:KantS} F(x_k)+F'(x_k)(x_{k+1}-x_k)+T(x_{k+1})\ni 0, \qquad k=0,1,\ldots\,, \end{equation} is well defined, is contained in $B(x_0, t_*)$ and converges to the point $x_*$ which is the unique solution of $F(x)+T(x)\ni 0$ in $B[x_0, t_*]$, where $t_*=\left(\alpha +1-\sqrt{(\alpha+1)^2 -8\alpha}\right)/(4\gamma)$. Moreover, $\{x_k\}$ converges $Q$-linearly as follows \[ \|x_*-x_{k+1}\| \leq \frac{1}{2}\|x_* -x_k\|, \qquad k=0,1,\ldots. \] Additionally, if $\alpha < 3-2\sqrt{2}$, then $\{x_k\}$ converges $Q$-quadratically as follows $$ \|x_*-x_{k+1}\| \leq \frac{\gamma}{(1-\gamma t_*)[2(1-\gamma t_*)^2-1]}\|x_*-x_k\|^2,\qquad k=0,1,\ldots. $$ \end{theorem} Before proving the above theorem we need two results. The next result gives a condition that is easier to check than condition \eqref{Hyp:MH}, whenever the functions under consideration are twice continuously differentiable, and its proof follows the same path as Lemma~21 of \cite{FerreiraGoncalvesOliveira2011}. \begin{lemma}\label{lem.cond1} Let $\Omega \subset H$ be an open set, and let $F:{\Omega}\to H$ be an analytic function. Suppose that $x_0 \in \Omega$ and $B(x_0, 1/ \gamma)\subset \Omega,$ where $\gamma$ is defined in \eqref{eq:SmaleCond}. Then for all $x\in B(x_0, 1/ \gamma),$ it holds that $ \|F''(x)\|\leq 2 \gamma/(1- \gamma\|x-x_0\|)^3. 
$ \end{lemma} The next result gives a relationship between the second derivatives $F''$ and $ f''$, which allow us to show that $F$ and $f$ satisfy \eqref{Hyp:MH}, and its proof is similar to Lemma~22 of \cite{FerreiraGoncalvesOliveira2011}. \begin{lemma} \label{lc} Let $H$ be a Hilbert space, $\Omega\subseteq H$ be an open set, $F:{\Omega}\to H$ be twice continuously differentiable. Let $x_0 \in \Omega$, $R>0$ and $\kappa=\sup\{t\in [0, R): B(x_0, t)\subset \Omega\}$. Let \mbox{$f:[0,R)\to \mathbb {R}$} be twice continuously differentiable such that $ \|\widehat{F'(x_0)}^{-1}\|\|F''(x)\|\leqslant f''(\|x-x_0\|),$ for all $x\in B(x_0, \kappa)$, then $F$ and $f$ satisfy \eqref{Hyp:MH}. \end{lemma} \noindent {\bf [Proof of Theorem \ref{theo:Smale}]}. Consider $f:[0, 1/ \gamma) \to \mathbb{R}$ defined by $f(t)=t/(1- \gamma t)-2t+b$. Note that $f$ is analytic and $f(0)=b$, $f'(t)=1/(1- \gamma t)^2-2$, $f'(0)=-1$, $f''(t)=2 \gamma/(1-\gamma t)^3$. It follows from the last equalities that $f$ satisfies {\bf h1} and {\bf h2}. Combining Lemma~\ref{lc} with Lemma~\ref{lem.cond1}, we conclude that $F$ and $f$ satisfy \eqref{Hyp:MH}. Therefore, the result follows by applying the Theorem~\ref{th:nt}. \qed \section{Final remarks } \label{rf} We have obtained a semi local convergence result to Newton's method for solving generalized equation in Hilbert spaces and under the majorant condition. The majorant condition allow to unify several convergence results pertaining to Newton's method. Besides, the study of inexact versions of this method would be welcome.
\section{Introduction} Lagrangian cobordism is a natural relation between Legendrian submanifolds, and it is a crucial ingredient in the definition of the functorial properties of invariants of Legendrian submanifolds in the spirit of symplectic field theory as introduced by Eliashberg, Givental and Hofer in \cite{Eliashberg_&_SFT}. This relation is at the heart of many recent developments in the study of Legendrian submanifolds and its properties have been investigated by the authors and many others over the past years. In the present paper we study rigidity phenomena in the topology of exact Lagrangian cobordisms in the symplectisation of the contactisation of a Liouville manifold. In \cite{Eliash_Mur_Caps}, Eliashberg and Murphy showed that exact Lagrangian cobordisms are flexible when their negative ends are loose (in the sense of Murphy \cite{Murphy_loose}). On the contrary, we will show that they become rigid if we restrict our attention to cobordisms whose negative ends admit augmentations (or more generally finite-dimensional representations) of their Chekanov-Eliashberg algebras. In order to study the topology of such cobordisms, we introduce a version of Lagrangian Floer homology (originally defined for closed Lagrangian submanifolds by Floer in \cite{FloerHFlag}) for pairs of exact Lagrangian cobordisms. This construction finds its inspiration in the work of Ekholm in \cite{Ekholm_FloerlagCOnt}, which gives a symplectic field theory point of view on wrapped Floer homology of Abouzaid and Seidel from \cite{WrappedFuk}. The definition of this new Floer theory requires the use of augmentations of the Chekanov-Eliashberg algebras of the negative ends as bounding cochains in order to algebraically cancel certain ``bad'' degenerations of the holomorphic curves at the negative ends of the cobordisms. 
Bounding cochains have been introduced, in the closed case, by Fukaya, Oh, Ohta and Ono in \cite{fooo}, while augmentations, which play a similar role in the context of Legendrian contact homology, have been introduced by Chekanov in \cite{Chekanov_DGA_Legendrian}. For a pair of exact Lagrangian cobordisms obtained by a suitable small Hamiltonian push-off, our invariant gives rise to various long exact sequences relating the singular homology of the cobordism with the Legendrian contact homology of its ends. We then use these long exact sequences to give restrictions on the topology of exact Lagrangian cobordisms under various hypotheses on the topology of the Legendrian ends. In the context of generating family homology for Legendrian submanifolds in jet spaces, Sabloff and Traynor in \cite{Sabloff_Traynor} describe exact sequences similar to ours for cobordisms which admit compatible generating families. The notion of Lagrangian cobordism between Legendrian submanifolds studied in this article is (in general) different from the notion of Lagrangian cobordisms between Lagrangian submanifolds introduced by Arnol'd in \cite{Arnold_Lagrange_Cobordism1} and \cite{Arnold_Lagrange_Cobordism2} and recently popularised by Biran and Cornea in \cite{BiranCornea_LCI} and \cite{BirCo2}. Lagrangian cobordisms in the sense of Arnol'd between Lagrangian submanifolds of a symplectic manifold $M$ are Lagrangian submanifolds of $M\times\ensuremath{\mathdj{C}}$ which project to horizontal half-lines of $\ensuremath{\mathdj{C}}$ outside of a compact set. The main difference between the two theories is that Arnol'd-type cobordisms do not distinguish between positive and negative ends and therefore are closer in spirit to the notion of cobordism in classical topology. 
Despite the differences, for Lagrangian cobordisms between Legendrian submanifolds with no Reeb chords, some of the results we obtain resemble some of the results obtained by Biran and Cornea \cite{BiranCornea_LCI, BirCo2} and Su\'arez \cite{suarezthesis}. \begin{Rem} In fact, under the strong assumption that the Legendrian submanifolds $\Lambda_\pm \subset (P \times \ensuremath{\mathdj{R}},dz+\theta)$ have no Reeb chords, an exact Lagrangian cobordism from $\Lambda_-$ to $\Lambda_+$ inside the symplectisation \[(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}},d(e^t(dz+\theta))) \cong (P \times \ensuremath{\mathdj{C}},d\theta \oplus d(xdy))\] can be deformed to yield an exact Lagrangian cobordism between the exact Lagrangian embeddings $\Pi_{\operatorname{Lag}}(\Lambda_-),\Pi_{\operatorname{Lag}}(\Lambda_+) \subset (P,d\theta)$ in the sense of Arnol'd, and vice versa. In some sense, these two notions of cobordisms thus coincide in this case. For readers familiar with the language of \cite{Ekhoka}, this can be explained as follows: one can go between Lagrangian cobordism in the sense studied here and so-called Morse cobordisms. The latter are embedded in the case when the Legendrian ends have no Reeb chords, and they are thus Lagrangian cobordisms in the sense of Arnol'd. \end{Rem} \subsection{Main results.} \label{sec:main-result} Let $(P, \theta)$ be a Liouville manifold and $(Y, \alpha):= (P \times \ensuremath{\mathdj{R}}, dz+ \theta)$ its contactisation. We consider a pair of exact Lagrangian embeddings $\Sigma_0,\Sigma_1\hookrightarrow X$, where $(X, \omega) = (\ensuremath{\mathdj{R}}\times Y, d(e^t \alpha))$ is the symplectisation of $(Y, \alpha)$. 
We assume that the positive and negative ends of $\Sigma_i$, $i=0,1$, are cylindrical over the Legendrian submanifolds $\Lambda_i^+$ and $\Lambda_i^-$ respectively, and thus $\Sigma_i$ is a Lagrangian cobordism from $\Lambda_i^-$ to $\Lambda_i^+$; see Figure \ref{fig:paircob} for a schematic representation and Section \ref{sec:geometrical-context} for the precise formulation of our geometrical setup. We assume that $\Sigma_0$ and $\Sigma_1$ intersect transversely and that their Legendrian ends are chord-generic in the sense of Section \ref{sec:basic-definitions}. The first rigidity phenomena for Lagrangian submanifolds in this setting were proven by Gromov \cite{Gromov_Pseudo_Holomorphic}, who showed that there are no closed exact Lagrangian submanifolds in a symplectisation as above (note that they would automatically be displaceable). Differently put, this means that an exact Lagrangian cobordism must have at least one non-empty end. \begin{figure}[htp] \labellist \pinlabel $t$ at 4 155 \pinlabel $\color{red}{\Lambda^+_1}$ at 45 152 \pinlabel $\Lambda^+_0$ at 117 152 \pinlabel $\Sigma_0$ at 124 98 \pinlabel $\color{red}{\Sigma_1}$ at 30 80 \pinlabel $\color{red}{\Lambda^-_1}$ at 45 8 \pinlabel $\Lambda^-_0$ at 106 8 \endlabellist \centering \includegraphics{pairlagcob} \caption{Two Lagrangian cobordisms inside a symplectisation $\ensuremath{\mathdj{R}} \times Y$, where the vertical axis corresponds to the $\ensuremath{\mathdj{R}}$-coordinate.} \label{fig:paircob} \end{figure} We denote by $\mathcal{R}(\Lambda_i^\pm)$ the set of Reeb chords of $\Lambda_i^\pm$ for $i=0,1$, and by $\mathcal{R}(\Lambda_1^\pm,\Lambda_0^\pm)$ the set of Reeb chords from $\Lambda_1^\pm$ to $\Lambda_0^\pm$. Let $R$ be a field of characteristic $2$ or, if all $\Sigma_i$'s and $\Lambda_i^\pm$'s are (relatively) spin, any commutative ring. (See Section \ref{sec:remarks-about-grad}.) 
We denote by $C(\Lambda_0^\pm,\Lambda_1^\pm)$ the free $R$-module spanned by $\mathcal{R}(\Lambda_1^\pm,\Lambda_0^\pm)$. \begin{Rem} In fact the commutativity of $R$ can be dropped -- Chekanov's linearisation can be generalised to arbitrary rings, and our long exact sequences exist in this setting as well. See Section \ref{sec:augm-finite-dimens} for more details as well as \cite{NonCommAug}. However, we point out that in order to use arguments involving ranks, one must impose additional requirements on $R$, for example commutativity, or finite dimensionality as an algebra over a field. \end{Rem} We assume that the Chekanov-Eliashberg algebra $\mathcal{A}(\Lambda^-_i;R)$ of $\Lambda^-_i$ admits an augmentation $\varepsilon^-_i$ over $R$ for $i=0,1$ (see Section \ref{sec:prel-legendr-cont} for the definitions). It follows from the results of Ekholm, Honda and K\'alm\'an in \cite{Ekhoka} that $\mathcal{A}(\Lambda^+_i;R)$ also admits an augmentation $\varepsilon^+_i= \varepsilon^-_i \circ \Phi_{\Sigma_i}$, where $\Phi_{\Sigma_i} \colon\thinspace \mathcal{A}(\Lambda^+_i;R) \to \mathcal{A}(\Lambda^-_i;R)$ is the unital DGA morphism induced by the cobordism $\Sigma_i$. Thus the bilinearised contact cohomologies $LCH_{\varepsilon_0^\pm, \varepsilon_1^\pm}(\Lambda_0^\pm, \Lambda_1^\pm)$ are defined. See Chekanov \cite{Chekanov_DGA_Legendrian} and Bourgeois and Chantraine \cite{augcat} for the notions of linearisation and bilinearisation of a differential graded algebra. We denote by $CF(\Sigma_0,\Sigma_1)$ the free $R$-module spanned by the intersection points $\Sigma_0\cap\Sigma_1$. In Section~\ref{sec:action-energy} we define the notion of the action of an intersection point and use it to filter $CF(\Sigma_0,\Sigma_1)$: we denote by $CF_\pm(\Sigma_0,\Sigma_1)$ the submodule of $CF(\Sigma_0,\Sigma_1)$ generated by intersection points of positive (respectively, negative) action. 
The main construction in this article provides a differential on the modules $CF_\pm(\Sigma_0,\Sigma_1)$, leading to homology groups $HF_\pm(\Sigma_0,\Sigma_1)$ of Floer type. The differentials and the resulting homology groups depend on the choice of the augmentations $\varepsilon_i^-$. In order to define a graded theory, we need that $2c_1(X)=0$ and that all Lagrangian cobordisms have vanishing Maslov classes. This implies that all Lagrangian cobordisms admit Maslov potentials (as defined in Section \ref{sec:grading}); a particular choice of such a potential leads to the notion of a \emph{graded} Lagrangian cobordism, for which $HF(\Sigma_0,\Sigma_1)$ has a well-defined grading in $\ensuremath{\mathdj{Z}}$. In general the grading must be taken in a (possibly trivial) cyclic group. Our main result is the following relation between the Floer homology of a pair $(\Sigma_0,\Sigma_1)$ of exact Lagrangian cobordisms and the bilinearised Legendrian contact homologies of their ends; see Section \ref{sec:long-exact-sequence}. \begin{Thm}\label{thm:principal} Let $\Sigma_i$, $i=0,1$, be a graded exact Lagrangian cobordism from the Legendrian submanifold $\Lambda^-_i$ to $\Lambda^+_i$ inside the symplectisation of the contactisation of a Liouville manifold, and assume that there are augmentations $\varepsilon^-_i$ of $\mathcal{A}(\Lambda^-_i)$ for $i=0,1$. Then there exists a spectral sequence whose first page is \begin{gather} E^{3,\bullet}_1 \oplus E^{2,\bullet}_1 \oplus E^{1,\bullet}_1 \oplus E^{0,\bullet}_1 \notag \\ \parallel \notag \\ LCH^{\bullet-2}_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda_0^+,\Lambda^+_1)\oplus HF^{\bullet}_+(\Sigma_0,\Sigma_1)\oplus LCH^{\bullet-1}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda_0^-,\Lambda^-_1) \oplus HF^{\bullet}_-(\Sigma_0,\Sigma_1)\label{eq:16} \end{gather} and which collapses to $0$ at the fourth page. 
\end{Thm} This theorem follows from the acyclicity of a complex $(\operatorname{Cth}(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-})$ associated to a pair of Lagrangian cobordisms which we call the \emph{Cthulhu complex} (see Section \ref{sec:Cthulhu-complex}). Its underlying $R$-module is $$\operatorname{Cth}(\Sigma_0,\Sigma_1) = C(\Lambda_0^+, \Lambda_1^+) \oplus CF_+(\Sigma_0, \Sigma_1) \oplus C(\Lambda_0^-, \Lambda_1^-) \oplus CF_-(\Sigma_0, \Sigma_1).$$ The spectral sequence is induced by the filtration of length four given by $$C(\Lambda_0^+, \Lambda_1^+) > CF_+(\Sigma_0, \Sigma_1) > C(\Lambda_0^-, \Lambda_1^-) > CF_-(\Sigma_0, \Sigma_1),$$ and the acyclicity of the complex $(\operatorname{Cth}(\Sigma_0, \Sigma_1), \mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-})$ follows from its invariance properties with respect to a large class of Hamiltonian deformations which, in the contactisation of a Liouville manifold, allow us to displace any pair of Lagrangian cobordisms. When the negative ends are empty, this complex recovers the wrapped Floer cohomology complex as described by Ekholm in \cite{Ekholm_FloerlagCOnt}. When the positive ends are empty and there are no homotopically trivial Reeb chords of both $\Lambda_i^-$'s, this complex is similar to the Floer complex sketched in the work of Akaho in \cite[Section 8]{Akaho}. \begin{Rem} The latter situation cannot occur in the symplectisation of a contactisation of a Liouville manifold: an exact Lagrangian cobordism with no positive end cannot have a negative end admitting an augmentation by Corollary \ref{cor:nonemptypos} below. This also follows from an even stronger result due to the second author in \cite{Caps}, where it is shown that such a Legendrian submanifold must have an acyclic Chekanov-Eliashberg algebra. 
\end{Rem} We will introduce two classes of pairs $(\Sigma_0,\Sigma_1)$ of exact Lagrangian cobordisms for which this filtration is of length three: directed and $V$-shaped pairs (see Section \ref{sec:long-exact-sequence}). A pair is directed when there are no intersection points of positive action, and $V$-shaped when there are no intersection points of negative action (see Section \ref{sec:long-exact-sequence} for more details). In these situations, the spectral sequence collapses to $0$ at the third page, giving rise to the following long exact sequences. \begin{Cor} \label{cor:posnegLES} Let $(\Sigma_0,\Sigma_1)$ be a pair of exact Lagrangian cobordisms satisfying the assumptions of Theorem \ref{thm:principal}. \begin{itemize} \item If $(\Sigma_0,\Sigma_1)$ is directed, then there exists a long exact sequence \begin{equation} \label{eq:17} \xymatrixrowsep{0.15in} \xymatrixcolsep{0.15in} \xymatrix{ \cdots\ar[r]& LCH^{k-1}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+_0,\Lambda^+_1) \ar[d] & & & \\ & HF_-^{k}(\Sigma_0,\Sigma_1) \ar[r] & LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-_0,\Lambda_1^-) \ar[r] & LCH^{k}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+_0,\Lambda^+_1)\ar[r] &\cdots} \end{equation} \item If $(\Sigma_0,\Sigma_1)$ is $V$-shaped, then there exists a long exact sequence \begin{equation} \label{eq:18} \xymatrixrowsep{0.15in} \xymatrixcolsep{0.15in} \xymatrix{ \cdots \ar[r] & LCH^{k}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+_0,\Lambda^+_1) \ar[r] & LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-_0,\Lambda_1^-)\ar[r] & HF_+^{k+2}(\Sigma_0,\Sigma_1) \ar[d] & \\ & & & LCH^{k+1}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+_0,\Lambda^+_1)\ar[r] & \cdots } \end{equation} \end{itemize} \end{Cor} \subsubsection{Remarks about grading and orientation.} \label{sec:remarks-about-grad} Most of the results here are stated for graded Lagrangian cobordisms. However, our methods apply in the ungraded cases as well. 
The only difference is that the long exact sequences in Corollary \ref{cor:posnegLES} and in Section \ref{sec:les} become exact triangles (the maps are ungraded or, alternatively, graded modulo the Maslov number). For the results to hold using coefficients in a ring $R$ different from a field of characteristic two, as well as for the results in Section \ref{sec:cobobs}, we need to be able to define the theory using integer coefficients. In order to perform counts with signs, one has to define coherent orientations for the relevant moduli spaces of pseudoholomorphic curves. This can be done in the case when the Legendrian submanifolds and Lagrangian cobordisms are relatively pin (following Ekholm, Etnyre and Sullivan in \cite{Ekholm_&_Orientation_Homology} and Seidel in \cite[Section 11]{Seidel_Fukaya}). \subsection{Long exact sequences for LCH induced by a Lagrangian cobordism.} \label{sec:les} If $\Sigma_1$ is a Hamiltonian deformation of $\Sigma_0$ for some suitable and sufficiently small Hamiltonian, the Floer homology groups $HF_\pm(\Sigma_0, \Sigma_1)$ can be identified with Morse homology groups of $\Sigma_0$. Similarly, the bilinearised Legendrian contact homology groups $LCH_{\varepsilon^\pm_0,\varepsilon_1^\pm} (\Lambda^\pm_0,\Lambda^\pm_1)$ can be identified with the bilinearised contact homology groups $LCH_{\varepsilon^\pm_0,\varepsilon_1^\pm}(\Lambda^\pm_0)$ (as defined in Subsection \ref{subsec: LCH}) following \cite{Duality_EkholmetAl}. Thus the long exact sequences in Corollary~\ref{cor:posnegLES} can be reinterpreted as long exact sequences relating the singular homology of a Lagrangian cobordism and the Legendrian contact homology of its ends. These results are proved in Section \ref{sec:small-pert-lagr}. 
Analogous long exact sequences have previously been found by Sabloff and Traynor in \cite{Sabloff_Traynor} in the setting of generating family homology under the additional assumption that the cobordism admits a compatible generating family, and by the fourth author in \cite{GolovkoLagCob} in the case when the negative end of the cobordism admits an exact Lagrangian filling. The latter results have been put in a much more general framework in recent work by Cieliebak-Oancea \cite{SymplecticEilenbergSteenrod}. In the rest of this introduction, $\Lambda^+$ and $\Lambda^-$ will always denote closed Legendrian submanifolds of dimension $n$ in the contactisation of a Liouville manifold, and every Lagrangian cobordism between them, as well as any Lagrangian filling of them, will always live in the corresponding symplectisation. We will denote by $\overline{\Sigma}$ the natural compactification of $\Sigma$ obtained by adjoining its Legendrian ends $\Lambda_\pm$. Note that $\overline{\Sigma}$ is diffeomorphic to $\Sigma \cap ([-T, T] \times Y)$ for any sufficiently large $T > 0$. We will also use the notation $\partial_\pm\overline{\Sigma}:=\Lambda_\pm \subset \overline{\Sigma}$, which implies that $\partial \overline{\Sigma}=\partial_+\overline{\Sigma} \sqcup \partial_-\overline{\Sigma}$. \subsubsection{A generalisation of the long exact sequence of a pair.} The first exact sequence we produce from a Lagrangian cobordism (see Section \ref{sec:push-inducing-long}) is given by the following: \begin{Thm}\label{thm:lespair} Let $\Sigma$ be a graded exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$ and let $\varepsilon^-_0$ and $\varepsilon^-_1$ be two augmentations of $\mathcal{A}(\Lambda^-)$ inducing augmentations $\varepsilon_0^+$, $\varepsilon_1^+$ of $\mathcal{A}(\Lambda^+)$. 
There is a long exact sequence \begin{equation} \label{leqtr} \xymatrixrowsep{0.15in} \xymatrixcolsep{0.15in} \xymatrix{ \cdots\ar[r]& LCH^{k-1}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+) \ar[d] & & & \\ & H_{n+1-k}(\overline{\Sigma},\partial_- \overline{\Sigma};R) \ar[r] & LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-) \ar[r] & LCH^{k}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+)\ar[r] &\cdots,} \end{equation} where the map $\Phi^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma \colon LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-) \to LCH^{k}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+)$ is the adjoint of the bilinearised DGA morphism $\Phi_\Sigma$ induced by $\Sigma$ (see Section \ref{subsec: LCH}). \end{Thm} When the negative end is empty, i.e.~when $\Lambda_-=\emptyset$ and $\Sigma$ is an exact Lagrangian filling of $\Lambda_+$, and $\varepsilon^+_i$, $i=0,1$, both are augmentations induced by this filling, the resulting long exact sequence simply becomes the isomorphism \[ LCH^{k-1}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+) \xrightarrow{\simeq} H_{n+1-k}(\overline{\Sigma};R) \] appearing in the work of Ekholm in \cite{Ekholm_FloerlagCOnt}. This isomorphism was first observed by Seidel, and is sometimes called \emph{Seidel's isomorphism}. (See the map $G^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma$ in Section \ref{sec:computingpushoff} for another incarnation.) Its proof was completed by the second author in \cite{LiftingPseudoholomorphic}; see also \cite{Floer_Conc} by the authors for an analogous isomorphism induced by a \emph{pair} of fillings. \subsubsection{A generalisation of the duality long exact sequence and fundamental class} \label{sec:dualityles} A Legendrian submanifold $\Lambda$ is horizontally displaceable if there exists a Hamiltonian isotopy $\phi_t$ of $(P,d\theta)$ which displaces the Lagrangian projection $\Pi_{\operatorname{Lag}}(\Lambda) \subset P$ from itself. 
In Section \ref{sec:push-inducing-long} we obtain the following: \begin{Thm}\label{thm:lesduality} Let $\Sigma$ be an exact graded Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$ and let $\varepsilon^-_0$ and $\varepsilon^-_1$ be two augmentations of $\mathcal{A}(\Lambda^-)$ inducing augmentations $\varepsilon_0^+$, $\varepsilon_1^+$ of $\mathcal{A}(\Lambda^+)$. Assume that $\Lambda^-$ is horizontally displaceable; then there is a long exact sequence \begin{equation} \label{ledual} \xymatrixrowsep{0.15in} \xymatrixcolsep{0.15in} \xymatrix{ \cdots \ar[r] & LCH^{k}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+) \ar[r] & LCH_{n-k-1}^{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-)\ar[r] & H_{n-k-1}(\Sigma;R) \ar[d] & \\ & & & LCH^{k+1}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+)\ar[r] & \cdots, } \end{equation} where the map $G^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma \colon H_{n-k-1}(\Sigma;R)\to LCH^{k+1}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+)$ is defined in Section \ref{sec:seidel}. \end{Thm} When $\Sigma=\ensuremath{\mathdj{R}} \times \Lambda$, we have $H_\bullet(\Sigma)=H_\bullet(\Lambda)$, and hence the above long exact sequence recovers the duality long exact sequence for Legendrian contact homology, which was proved by Sabloff in \cite{Sabloff_Duality} for Legendrian knots and later generalised to arbitrary Legendrian submanifolds in \cite{Duality_EkholmetAl} by Ekholm, Etnyre and Sabloff. In the bilinearised setting, the duality long exact sequence was introduced by Bourgeois and the first author in \cite{augcat}. In Section \ref{sec:fund-class-twist} we use Exact Sequence \eqref{ledual} to prove that the fundamental class in LCH defined by Sabloff in \cite{Sabloff_Duality} and Ekholm, Etnyre and Sabloff in \cite{Duality_EkholmetAl} is functorial with respect to the maps induced by exact Lagrangian cobordisms. 
\subsubsection{A generalisation of the Mayer-Vietoris long exact sequence} The last exact sequence that we will extract from Corollary \ref{cor:posnegLES} generalises the Mayer-Vietoris exact sequence (see Section \ref{sec:push-inducing-mayer}). \begin{Thm}\label{thm:lesmayer-vietoris} Let $\Sigma$ be an exact graded Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$ and let $\varepsilon^-_0$ and $\varepsilon^-_1$ be two augmentations of $\mathcal{A}(\Lambda^-)$ inducing augmentations $\varepsilon_0^+$, $\varepsilon_1^+$ of $\mathcal{A}(\Lambda^+)$. Then there is a long exact sequence \begin{equation} \label{eq:mayer-vietoris} \xymatrixrowsep{0.15in} \xymatrixcolsep{0.15in} \xymatrix{ \cdots\ar[r]& LCH^{k-1}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+) \ar[d] & & & \\ & H_{n-k}(\partial_- \overline{\Sigma};R) \ar[r] & LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-) \oplus H_{n-k}(\overline{\Sigma};R) \ar[r] & LCH^{k}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+)\ar[r] &\cdots,} \end{equation} where the component \[ H_{n-k}(\partial_- \overline{\Sigma};R) \to H_{n-k}(\overline{\Sigma};R) \] of the left map is induced by the topological inclusion of the negative end. If $\varepsilon^-_0=\varepsilon^-_1=\varepsilon$, it moreover follows that the image of the fundamental class under the component $H_n(\partial_- \overline{\Sigma};R) \to LCH^{0}_{\varepsilon,\varepsilon}(\Lambda^-)$ of the above morphism vanishes. Under the additional assumption that $\Lambda^-$ is horizontally displaceable, it is moreover the case that the image of a generator under $H_0(\partial_- \overline{\Sigma};R) \to LCH^{n}_{\varepsilon,\varepsilon}(\Lambda^-)$ is equal to the fundamental class in Legendrian contact homology. \end{Thm} In particular we get that the fundamental class in $H_{n}(\partial_- \overline{\Sigma};R)$ either is non-zero in $H_n(\overline{\Sigma})$, or is the image of a class in $LCH^{-1}_{\varepsilon_0^+,\varepsilon_1^+}(\Lambda^+)$. 
In both cases, $\Lambda^+\not=\emptyset$. Thus we obtain a new proof of the following result. \begin{Cor}[\cite{Caps}] \label{cor:nonemptypos} If $\Lambda \subset P \times \ensuremath{\mathdj{R}}$ admits an augmentation, then there is no exact Lagrangian cobordism from $\Lambda$ to $\emptyset$, i.e.~there is no exact Lagrangian ``cap'' of $\Lambda$. \end{Cor} \begin{Rem} Assume that $\Lambda_-$ admits an exact Lagrangian filling $L$ inside the symplectisation, and that $\varepsilon^-$ is the augmentation induced by this filling. It follows that $\varepsilon^+$ is the augmentation induced by the filling $L \odot \Sigma$ of $\Lambda_+$ obtained as the concatenation of $L$ and $\Sigma$. Using Seidel's isomorphisms \begin{gather*} LCH^{k}_{\varepsilon^-,\varepsilon^-}(\Lambda^-) \simeq H_{n-k}(L;R),\\ LCH^{k}_{\varepsilon^+,\varepsilon^+}(\Lambda^+) \simeq H_{n-k}(L\odot \Sigma;R) \end{gather*} to replace the relevant terms in the long exact sequences \eqref{leqtr} and \eqref{eq:mayer-vietoris}, we obtain the long exact sequence for the pair $(L \odot \Sigma, L)$ and the Mayer-Vietoris long exact sequence for the decomposition $L\odot \Sigma = L \cup \Sigma$, respectively. This fact was already observed and used by the fourth author in \cite{GolovkoLagCob}. \end{Rem} \subsection{Topological restrictions on Lagrangian cobordisms} \label{sec:cobobs} Using the long exact sequences from the previous subsection and their refinements to coefficients twisted by the fundamental group, as defined in Section \ref{sec:l2-legendr-cont}, we find strong topological restrictions on exact Lagrangian cobordisms between certain classes of Legendrian submanifolds. \subsubsection{The homology of an exact Lagrangian cobordism from a Legendrian submanifold to itself} We recall that $\Lambda$ will always denote a Legendrian submanifold of the contactisation of a Liouville manifold. 
In the following results we study the homology of any exact Lagrangian cobordism from $\Lambda$ to itself in the symplectisation. One of the consequences of Theorem~\ref{thm:lesmayer-vietoris} is the following theorem, proved in Section \ref{sec:topol-endoc}. A similar statement has been proven by the second and the fourth authors in \cite[Theorem 1.6]{Rigidityofendo} under the more restrictive assumption that $\Lambda$ bounds an exact Lagrangian filling. \begin{Thm} \label{homrigidityold} Let $\Sigma$ be an exact Lagrangian cobordism from $\Lambda$ to $\Lambda$ and $\ensuremath{\mathdj{F}}$ a field (of characteristic two if $\Lambda$ is not spin). If the Chekanov-Eliashberg algebra ${\mathcal A}(\Lambda; \ensuremath{\mathdj{F}})$ admits an augmentation, then: \begin{itemize} \item[(i)] There is an equality $\dim_{\ensuremath{\mathdj{F}}} H_\bullet(\Sigma;\ensuremath{\mathdj{F}}) = \dim_{\ensuremath{\mathdj{F}}} H_\bullet(\Lambda;\ensuremath{\mathdj{F}})$; \item[(ii)] The map \begin{align*} (i^-_{\ast}, i^+_{\ast}): H_{\bullet}(\Lambda; \ensuremath{\mathdj{F}})\to H_{\bullet}(\Sigma; \ensuremath{\mathdj{F}})\oplus H_{\bullet}(\Sigma; \ensuremath{\mathdj{F}}) \end{align*} is injective; and \item[(iii)] The map $$i^+_* \oplus i^-_* \colon H_\bullet (\Lambda \sqcup \Lambda; \ensuremath{\mathdj{F}}) \to H_\bullet(\Sigma; \ensuremath{\mathdj{F}})$$ is surjective. \end{itemize} Here $i^{+}$ is the inclusion of $\Lambda$ as the positive end of $\Sigma$, while $i^{-}$ is the inclusion of $\Lambda$ as the negative end of $\Sigma$. \end{Thm} \begin{Rem} The above statements hold for the $\ensuremath{\mathdj{Z}}$-graded singular homology groups without assuming that the cobordism $\Sigma$ is graded. \end{Rem} An immediate corollary of Theorem \ref{thm:lespair} is the following result, which had already appeared in \cite[Theorem 1.7]{Rigidityofendo} under the stronger assumption that the negative end is fillable.
\begin{Thm}\label{thm:homologycylinder} If $\Lambda$ is a homology sphere which admits an augmentation over $\mathbb{Z}$, then any exact Lagrangian cobordism $\Sigma$ from $\Lambda$ to itself is a homology cylinder (i.e. $H_\bullet(\Sigma,\Lambda)=0$). \end{Thm} Inspired by the work of Capovilla-Searle and Traynor \cite{NonorLagcobbetLegknots}, in Section \ref{sec:nonorientable} we prove the following restriction on the characteristic classes of an exact Lagrangian cobordism from a Legendrian submanifold to itself. Given a manifold $M$, we denote by $w_i(M)$ the $i$-th Stiefel-Whitney class of $TM$. \begin{Thm}\label{thm:w_ivanish} Let $\Sigma$ be an exact Lagrangian cobordism from $\Lambda$ to itself, and $\ensuremath{\mathdj{F}} = \ensuremath{\mathdj{Z}} / 2 \ensuremath{\mathdj{Z}}$. Assume that ${\mathcal A}(\Lambda; \ensuremath{\mathdj{F}})$ admits an augmentation. If, for some $i\in\mathbb{N}$, $w_i(\Lambda)=0$, then $w_i(\Sigma)=0$. If $\Lambda$ is spin, the same holds for the Pontryagin classes. \end{Thm} By specialising to $w_1$ we obtain the following corollary, which extends the main result in \cite{NonorLagcobbetLegknots}; in particular we partially answer Question~6.1 of the same article. \begin{Cor}\label{cor:nonorien} If $\Lambda$ is an orientable Legendrian submanifold admitting an augmentation, then any exact Lagrangian cobordism from $\Lambda$ to itself is orientable. \end{Cor} \subsubsection{Restrictions on the fundamental group of certain exact Lagrangian fillings and cobordisms} Since Theorem \ref{thm:homologycylinder} shows that an exact Lagrangian cobordism from a Legendrian homology sphere to itself is a homology cylinder, it is natural to ask under what conditions this cobordism in fact is an h-cobordism. We therefore need to incorporate the fundamental group in our constructions. 
To that end, following ideas of Sullivan in \cite{KFloer} and Damian in \cite{Damian_Lifted}, we define a ``twisted'' version of the Floer homology groups $HF_\pm(\Sigma_0, \Sigma_1)$ with coefficient ring $R[\pi_1(\Sigma,*)]$ in Section \ref{sec:twisted}. We also establish a result analogous to Corollary~\ref{cor:posnegLES}, as well as the long exact sequences in Section \ref{sec:les}, with twisted coefficients in $R[\pi_1(\Sigma,*)]$. In the setting of Legendrian contact homology, these techniques were introduced by Eriksson-\"Ostman in \cite{Albin}. Using generalisations of the long exact sequence from Theorem~\ref{thm:lespair} and the functoriality of the fundamental class from Proposition~\ref{prp:fundclasstwisted} (see Section \ref{sec:proofpi_1carclass}) we prove the following theorem: \begin{Thm}\label{thm:pi_1carclass} Let $\Sigma$ be a graded exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$. Assume that $\mathcal{A}(\Lambda^-;R)$ admits an augmentation and that $\Lambda^+$ has no Reeb chords in degree zero. If $\Lambda^-$ and $\Lambda^+$ are both simply connected, then $\Sigma$ is simply connected as well. \end{Thm} \begin{Rem} The seemingly unnatural condition that $\Lambda^+$ has no Reeb chords in degree zero is used to ensure that the Chekanov-Eliashberg algebra $\mathcal{A}(\Lambda^+; A)$ has \emph{at most} one augmentation in $A$ for every unital $R$-algebra $A$. (This algebraic condition does not ensure the \emph{existence} of an augmentation, but rather it states that $\Lambda^+$ admits exactly one augmentation in the case when $\Lambda^-$ admits an augmentation.) This condition is clearly not invariant under Legendrian isotopy, but the conclusion of Theorem~\ref{thm:pi_1carclass} can be extended to every Legendrian submanifold which is Legendrian isotopic to $\Lambda^+$ because Legendrian isotopies induce Lagrangian cylinders by \cite[4.2.5]{Eliashberg_&_Lagrangian_Intersection_Finite} (also, see \cite{chantraine_conc}).
Ideally, one should replace the algebraic condition with one that only depends on the DGA homotopy type. \end{Rem} We now present another result which imposes constraints on the fundamental group of an exact Lagrangian cobordism from a Legendrian submanifold to itself (see Section \ref{sec:proofl2rigidity}). Its proof uses an $L^2$-completion of the Floer homology groups with twisted coefficients and the $L^2$-Betti numbers of the universal cover (using results of Cheeger and Gromov in \cite{CheeGro}). \begin{Thm}\label{thm:l2rigidity} Let $\Lambda$ be a simply connected Legendrian submanifold which is spin, and let $\Sigma$ be an exact Lagrangian cobordism from $\Lambda$ to itself. If $\mathcal{A}(\Lambda;\ensuremath{\mathdj{C}})$ admits an augmentation, then $\Sigma$ is simply connected as well. \end{Thm} Combining Theorem \ref{thm:homologycylinder} with Theorem \ref{thm:l2rigidity}, we get the following result. \begin{Cor}\label{cor:trivhspheres} Let $\Lambda$ be an $n$-dimensional Legendrian homotopy sphere and assume that ${\mathcal A}(\Lambda; \ensuremath{\mathdj{Z}})$ admits an augmentation. Then any exact Lagrangian cobordism $\Sigma$ from $\Lambda$ to itself is an h-cobordism. In particular: \begin{enumerate} \item If $n \neq 3,4$, then $\Sigma$ is diffeomorphic to a cylinder; \item If $n=3$, then $\Sigma$ is homeomorphic to a cylinder; and \item If $n=4$ and $\Lambda$ is diffeomorphic to $S^4$, then $\Sigma$ is diffeomorphic to a cylinder. \end{enumerate} \end{Cor} When $n=1$, a stronger result is known. Namely, in \cite[Section 4]{Floer_Conc} we proved that any exact Lagrangian cobordism $\Sigma$ from the standard Legendrian unknot $\Lambda_0$ to itself is compactly supported Hamiltonian isotopic to the trace of a Legendrian isotopy of $\Lambda_0$ which is induced by the complexification of a rotation by $k\pi$, $k \in \ensuremath{\mathdj{Z}}$.
This classification makes use of the uniqueness of the exact Lagrangian filling of $\Lambda_0$ up to compactly supported Hamiltonian isotopy, which was proved in \cite{Eliashberg_&_Local_Lagrangian_knots} by Eliashberg and Polterovich. In contrast, the methods we develop in this article give restrictions only on the \textbf{smooth} type of the cobordisms, and little is known about their symplectic knottedness in higher dimensions. \subsubsection{Obstructions to the existence of a Lagrangian concordance} A {\em Lagrangian concordance} from $\Lambda^-$ to $\Lambda^+$ is a Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$ which is diffeomorphic to the product $\ensuremath{\mathdj{R}} \times \Lambda^-$. In particular this implies that $\Lambda^-$ and $\Lambda^+$ are diffeomorphic as smooth manifolds. Note that a Lagrangian concordance is automatically exact. If $\Sigma$ is a Lagrangian concordance, then $H_{\bullet}(\overline{\Sigma},\partial_- \overline{\Sigma};R)=0$, and thus Theorem \ref{thm:lespair} implies the following corollary. \begin{Cor}\label{cor:concobstruction} Let $\Sigma$ be an exact Lagrangian concordance from $\Lambda^-$ to $\Lambda^+$. If, for $i=0,1$, $\varepsilon^-_i$ is an augmentation of $\mathcal{A}(\Lambda^-; R)$ and $\varepsilon^+_i$ is the pull-back of $\varepsilon^-_i$ under the DGA morphism induced by $\Sigma$, then the map $$\Phi_\Sigma^{\varepsilon_0^-,\varepsilon_1^-} \colon LCH^\bullet_{\varepsilon^-_0,\varepsilon_1^-}(\Lambda^-)\rightarrow LCH^\bullet_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+)$$ is an isomorphism. Consequently, there is an inclusion \[ \{LCH^\bullet_{\varepsilon^-_0,\varepsilon_1^-}(\Lambda^-) \}/\text{isom.} \:\: \hookrightarrow \{LCH^\bullet_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+) \}/\text{isom.} \] of the sets consisting of isomorphism classes of bilinearised Legendrian contact cohomologies, for all possible pairs of augmentations. \end{Cor} This corollary can be used to obstruct the existence of Lagrangian concordances.
For example, it can be applied to the computation of the linearised Legendrian contact homologies given by Chekanov in \cite[Theorem 5.8]{Chekanov_DGA_Legendrian} to prove that there is no exact Lagrangian concordance from either of the two Chekanov-Eliashberg knots to the other. We also use Corollary~\ref{cor:concobstruction} to deduce new examples of non-symmetric concordances in the spirit of the example given by the first author in \cite{Chantraine_Non_symmetry}. We refer to Section \ref{sec:noninv} for a simply connected example in high dimensions. We recall that a Legendrian isotopy induces a Lagrangian concordance. Since Legendrian isotopies are invertible, two isotopic Legendrian submanifolds thus admit Lagrangian concordances going in either direction. On the other hand, we now have many examples of non-symmetric Lagrangian concordances, and hence the following natural question can be asked. \begin{Quest} Assume that there exist Lagrangian concordances from $\Lambda_0$ to $\Lambda_1$ as well as from $\Lambda_1$ to $\Lambda_0$. Does this imply that the Legendrian submanifolds $\Lambda_0$ and $\Lambda_1$ are Legendrian isotopic? Are such Lagrangian concordances moreover Hamiltonian isotopic to one induced by a Legendrian isotopy (as constructed in \cite[4.2.5]{Eliashberg_&_Lagrangian_Intersection_Finite})? \end{Quest} We argue that this question will not be easily answered by Legendrian contact homology. Chekanov showed in \cite{Chekanov_DGA_Legendrian} that the set of isomorphism classes of all linearised Legendrian contact homology groups is invariant under Legendrian isotopy. Later Bourgeois and the first author in \cite{augcat} extended this result to \emph{bilinearised} Legendrian contact homology. However, Corollary~\ref{cor:concobstruction} says that bilinearised Legendrian contact homology in fact is an invariant of Lagrangian concordances, rather than of Legendrian isotopies.
This means that every time two Legendrian submanifolds have been proved not to be Legendrian isotopic by exhibiting two non-isomorphic bilinearised Legendrian contact homology groups, what has in fact been proved is that there cannot exist Lagrangian concordances between the Legendrian submanifolds going in both directions. \subsection{Remarks about the hypotheses.} \label{sec:remarks-about-hypoth} \subsubsection{Restrictions on the ambient manifolds.} \label{sec:restriction-y} The reasons for restricting our attention to Lagrangian cobordisms in the symplectisation of the contactisation of a Liouville manifold are two-fold. First, the analytic framework needed to have a well-defined complex $(\operatorname{Cth}(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-})$ is vastly simplified by the fact that the Reeb flow has no periodic orbits. Using recent work of Pardon in \cite{Pardon_CH} (or the polyfold technology being developed by Hofer, Wysocki and Zehnder), it is possible to extend the construction of the complex $(\operatorname{Cth}(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-})$ to more general symplectic cobordisms. Second, our applications use exact sequences arising from the acyclicity of the complex $(\operatorname{Cth}(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-})$, which is a consequence of the fact that any Lagrangian cobordism can be displaced in the symplectisation of a contactisation. Floer theory for Lagrangian cobordisms in more general symplectic cobordisms will be investigated in a future article. \subsubsection{Restrictions on the Lagrangian submanifolds.} \label{sec:restriction-} Now we describe some examples showing that many of the hypotheses we made in Subsection~\ref{sec:cobobs} are in fact essential, and not merely artefacts of the techniques used.
First, an exact Lagrangian cobordism having a negative end whose Chekanov-Eliashberg algebra admits no augmentation can be quite a flexible object: in fact Eliashberg and Murphy proved in \cite{Eliash_Mur_Caps} that exact Lagrangian cobordisms with a loose negative end satisfy an h-principle, and therefore one cannot hope for a result in the spirit of Theorem \ref{thm:homologycylinder} to hold in complete generality. Indeed, we refer to the work of the second and fourth authors in \cite{Rigidityofendo} for examples of exact Lagrangian cobordisms from a loose Legendrian sphere to itself having arbitrarily large Betti numbers. Second, the condition that $\Lambda$ is a homology sphere in the statement of Theorem \ref{thm:homologycylinder} was shown to be essential already in \cite[Section 2.3]{Rigidityofendo}. Finally, the importance of the condition on the Reeb chords of the positive end in Theorem \ref{thm:pi_1carclass} is emphasised by the following example, which will be detailed in Section~\ref{sec:some-expl-lagr}. \begin{Prop}\label{prop:example} There exists a non-simply connected exact Lagrangian cobordism from the two-dimensional standard Legendrian sphere to a Legendrian sphere inside the symplectisation of standard contact $(\ensuremath{\mathdj{R}}^5,\xi_{\operatorname{std}})$. \end{Prop} As a converse to Theorem \ref{thm:pi_1carclass}, the existence of a non-simply connected exact Lagrangian cobordism can be used to show the existence of degree zero Reeb chords on the positive end of the cobordism. \subsection{Outline of the article.} \label{sec:ouline-paper} This article is organised as follows.
In Sections \ref{sec:geometrical-context} to \ref{sec:prel-legendr-cont} we collect some basic material: in Section \ref{sec:geometrical-context} we describe our geometric setup and give a precise definition of the class of Lagrangian cobordisms that we consider; in Section \ref{sec:analyt-prel} we introduce the moduli spaces used to define $\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}$ and discuss their compactification; in Section \ref{sec:fredholm-properties} we study their Fredholm theory (e.g.~indices and transversality); and in Section \ref{sec:prel-legendr-cont} we briefly review the definitions of Legendrian contact homology and augmentations. Sections \ref{sec:Cthulhu-complex} to \ref{sec:acyclicity} are the theoretical core of the paper: in Section \ref{sec:Cthulhu-complex} we define the complex $(\operatorname{Cth}(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-})$; in Section \ref{sec:conc-cobord} we show how it behaves under concatenations of cobordisms; and in Section \ref{sec:acyclicity} we show that it is acyclic. The last part of the article is devoted to applications. In Section \ref{sec:long-exact-sequence} we prove Theorem \ref{thm:principal}, and in Section \ref{sec:lesproof} we deduce the long exact sequences of Section \ref{sec:les}. In Section \ref{sec:l2-legendr-cont} we lift the coefficient ring to the group ring $R[\pi_1(\Sigma)]$ and to its $L^2$-completion. Finally, in Section \ref{sec:examples} we prove the results concerning the topology of Lagrangian cobordisms stated in Section \ref{sec:cobobs}, and then describe examples which exhibit the necessity of their hypotheses.
\section*{Acknowledgements} \label{sec:aknowledgments} While this research was conducted, the authors benefited from the hospitality of various institutions in addition to their own; in particular they are indebted to CIRGET and CRM in Montr\'{e}al, the Centro De Giorgi in Pisa, and the Institut Mittag-Leffler in Stockholm. We warmly thank Stefan Friedl for suggesting that we should look at $L^2$-homology theory to gain information on fundamental groups. \section{Geometric preliminaries} \label{sec:geometrical-context} \subsection{Basic definitions}\label{sec:basic-definitions} A {\em contact manifold} $(Y,\xi)$ is a $(2n+1)$-manifold $Y$ equipped with a smooth maximally non-integrable field of hyperplanes $\xi\subset TY$, which is called a {\em contact structure}. Non-integrability implies that locally $\xi$ can be written as the kernel of a $1$-form $\alpha$ satisfying $\alpha\wedge (d\alpha)^{n}\neq 0$. We will be interested only in {\em coorientable} contact structures, which are {\em globally} the kernel of a $1$-form $\alpha$ called a {\em contact form}. A contact form $\alpha$ defines a canonical vector field $R_{\alpha}$, called the {\em Reeb vector field}, via the equations \[ \left \{ \begin{array}{l} i_{R_{\alpha}}d\alpha=0, \\ \alpha(R_{\alpha})=1. \end{array} \right. \] We will use $\phi^t \colon (Y,\alpha) \to (Y,\alpha)$ to denote the flow of the Reeb vector field $R_\alpha$, which can be seen to preserve $\alpha$. Also, in the following we will always assume the contact form to be fixed. An $n$-dimensional submanifold $\Lambda\subset Y$ which is everywhere tangent to $\xi$ is called {\em Legendrian}. The {\em Reeb chords} of $\Lambda$ are the trajectories of the Reeb flow starting and ending on $\Lambda$. We denote the set of the Reeb chords of $\Lambda$ by $\mathcal{R} (\Lambda)$. Let $\gamma$ be a periodic orbit of $R_\alpha$ of length $T$.
It is \textit{non-degenerate} if the restriction $d\phi^T_q|_{\xi_q}-\operatorname{Id}$ is invertible for one (and thus all) $q\in \gamma$. (Note that $d\phi^T_q$ preserves $\xi_q$ and always fixes $R_\alpha(q)$, so the restriction to $\xi_q$ is the relevant map.) Let $\gamma$ be a Reeb chord of length $T$. The flow $\phi^t$ of the Reeb vector field preserves $\xi$, and therefore $d\phi^T_{\gamma(0)}(T_{\gamma(0)}\Lambda) \subset \xi_{\gamma(T)}$. We say that a Reeb chord $\gamma$ is {\em non-degenerate} if $d\phi^T_{\gamma(0)}(T_{\gamma(0)}\Lambda)$ is transverse to $T_{\gamma(T)}\Lambda$ in $\xi_{\gamma(T)}$. We say that $\Lambda$ is {\em chord-generic} if all its Reeb chords are non-degenerate. From now on we assume that all Legendrian submanifolds are chord-generic. This is not a restrictive assumption because chord genericity is a property which can be achieved by a generic Legendrian perturbation of $\Lambda$, provided that all periodic Reeb orbits are non-degenerate. We will here restrict ourselves to the case when $(Y, \alpha)$ is the contactisation of a Liouville manifold. We recall that a {\em Liouville manifold} is a pair $(P, \theta)$, where $P$ is a $2n$-dimensional open manifold and $\theta$ is a one-form on $P$ such that $d \theta$ is symplectic. The Liouville vector field $v$, which is defined by the equation $$\iota_v d \theta = \theta,$$ is moreover required to be a pseudo-gradient for an exhausting function $f \colon P \to \ensuremath{\mathdj{R}}_{\ge 0}$ outside of a compact set. For simplicity we will assume that $f$ has a finite number of critical points. We define the {\em contactisation} $(Y, \alpha)$ of $(P, \theta)$ to be $Y = P\times \ensuremath{\mathdj{R}}$ and $\alpha:=dz + \theta$, where $z$ is the coordinate on the $\ensuremath{\mathdj{R}}$-factor. Note that in this case $R_{\alpha}=\partial_{z}$, and hence there are no periodic Reeb orbits. This implies, in particular, that if $\Lambda$ is a chord-generic closed Legendrian submanifold, then $|\mathcal{R}(\Lambda)|<\infty$.
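For concreteness, one can verify directly that $\partial_z$ satisfies the defining equations of the Reeb vector field on the contactisation $(P\times \ensuremath{\mathdj{R}}, dz+\theta)$; since $\theta$ is pulled back from $P$ we have $d\alpha = d\theta$, and:

```latex
% Direct check that R_\alpha = \partial_z for \alpha = dz + \theta:
\begin{align*}
  i_{\partial_z} d\alpha &= i_{\partial_z} d\theta = 0,
    && \text{since $d\theta$ is pulled back from $P$,}\\
  \alpha(\partial_z) &= dz(\partial_z) + \theta(\partial_z) = 1 + 0 = 1.
\end{align*}
% The Reeb flow is thus \phi^t(x,z) = (x, z+t), which indeed has no
% periodic orbits.
```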
The prototypical example of contactisation of a Liouville manifold is the standard contact structure on $\ensuremath{\mathdj{R}}^{2n+1}$ defined by the contact form $$\alpha_0 = dz-\sum_{i=1}^n y_i dx_i.$$ There is a natural projection $\Pi_{\operatorname{Lag}}: P\times \ensuremath{\mathdj{R}} \to P$ defined by $\Pi_{\operatorname{Lag}}(x,z):=x$ which is called the {\em Lagrangian projection}. Given a Legendrian submanifold $\Lambda\subset P\times\ensuremath{\mathdj{R}}$, $\Pi_{\operatorname{Lag}}|_{\Lambda}: \Lambda\to P$ is an exact Lagrangian immersion. In this situation, there is a one-to-one correspondence between the Reeb chords of $\Lambda$ and the double points of $\Pi_{\operatorname{Lag}}(\Lambda)$. Furthermore, $\Lambda$ is chord-generic if and only if the only self-intersections of $\Pi_{\operatorname{Lag}}(\Lambda)$ are transverse double points. \subsection{Lagrangian cobordisms} \label{sec:lagr-cobord} The main objects of study in this article are exact Lagrangian cobordisms in the symplectisation of a contactisation. Recall that, for a general contact manifold $(Y,\alpha)$, its \emph{symplectisation} is the exact symplectic manifold \[ (X,\omega):=(\ensuremath{\mathdj{R}} \times Y,d(e^t\alpha)), \] where $t$ denotes the standard coordinate on the $\ensuremath{\mathdj{R}}$-factor. In the case when $\dim Y=2n+1$, an $(n+1)$-dimensional submanifold of the above symplectisation is \emph{exact Lagrangian} if the pull-back of the one-form $e^t\alpha$ is exact. \begin{defn}\label{defn: exact lagrangian cobordism} Let $\Lambda^{-}$ and $\Lambda^{+}$ be two closed Legendrian submanifolds of $(Y,\alpha)$.
An \emph{exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$} in $(\ensuremath{\mathdj{R}} \times Y, d(e^t \alpha))$ is a properly embedded submanifold $\Sigma \subset \ensuremath{\mathdj{R}}\times Y$ without boundary satisfying the following conditions: \begin{enumerate} \item for some $T\gg 0$, \begin{itemize} \item[(a)] $ \Sigma \cap ( (-\infty,-T) \times Y)= (-\infty, -T)\times \Lambda^{-}$, \item[(b)] $ \Sigma \cap ((T,+\infty) \times Y) = (T,+\infty)\times \Lambda^{+}$, and \item[(c)] $ \Sigma \cap ([-T,T] \times Y)$ is compact; \end{itemize} \item There exists a smooth function $f_\Sigma:\Sigma\to \ensuremath{\mathdj{R}}$ for which \begin{itemize} \item[(a)] $e^{t}\alpha|_{T\Sigma} = df_\Sigma$, \item[(b)] $f_\Sigma|_{(-\infty,-T)\times \Lambda^{-}}$ is constant, and \item[(c)] $f_\Sigma|_{(T,\infty)\times \Lambda^{+}}$ is constant. \end{itemize} \end{enumerate} We will call $(T,+\infty)\times \Lambda^{+} \subset \Sigma$ and $(-\infty,-T) \times \Lambda^{-} \subset \Sigma$ the \emph{positive end} and the \emph{negative end} of $\Sigma$, respectively. We will call a cobordism from a submanifold to itself an \textit{endocobordism}. \end{defn} Conditions (2b) and (2c) are equivalent to saying that for any smooth paths $\gamma_{-}: ([0,1],\{0,1\})\rightarrow (\Sigma,(-\infty,-T)\times \Lambda^{-})$ and $\gamma_{+}: ([0,1],\{0,1\})\rightarrow (\Sigma,(T,\infty)\times \Lambda^{+})$, we have $\int_{\gamma_{\pm}} e^{t}\alpha=0$. Condition (2b) will later be used to rule out certain bad breakings of pseudoholomorphic curves. Condition (2c) is used to ensure that the concatenation of two exact Lagrangian cobordisms (as in Definition \ref{def:concat}) still is an exact Lagrangian cobordism. If one does not care about concatenations, then this condition can be dropped.
\begin{Ex} If $\Lambda$ is a closed Legendrian submanifold of $(Y, \xi)$, then $\ensuremath{\mathdj{R}} \times \Lambda$ is an exact Lagrangian cobordism inside $(\ensuremath{\mathdj{R}} \times Y, d(e^t\alpha))$ from $\Lambda$ to itself. Cobordisms of this type are called {\em (trivial) Lagrangian cylinders}. \end{Ex} In the case when there exists an exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$ we say that \emph{$\Lambda^-$ is exact Lagrangian cobordant to $\Lambda^+$}. If $\Sigma$ is an exact Lagrangian cobordism from the empty set to $\Lambda$, we call $\Sigma$ an \textit{exact Lagrangian filling} of $\Lambda$. In the latter case we also say that $\Lambda$ is {\em exactly fillable}. The group $\ensuremath{\mathdj{R}}$ acts on $\ensuremath{\mathdj{R}} \times Y$ by translations in the first factor. For any $s \in \ensuremath{\mathdj{R}}$ we define \begin{gather*} \tau_s \colon \ensuremath{\mathdj{R}} \times Y \to \ensuremath{\mathdj{R}} \times Y,\\ \tau_s(t,p)=(t+s,p). \end{gather*} It is easy to check that the translate of an exact Lagrangian cobordism still is an exact Lagrangian cobordism. \begin{defn}\label{def:concat} Given exact Lagrangian cobordisms $\Sigma_a$ from $\Lambda^-$ to $\Lambda$ and $\Sigma_b$ from $\Lambda$ to $\Lambda^+$, their \emph{concatenation} $\Sigma_a \odot \Sigma_b$ is defined as follows. First, translate $\Sigma_a$ and $\Sigma_b$ so that \begin{align*} & \Sigma_a \cap ((- 1, + \infty) \times Y) = (-1,+\infty) \times \Lambda,\\ & \Sigma_b \cap ((- \infty, 1) \times Y) = (- \infty, 1) \times \Lambda. \end{align*} Then we define \[ \Sigma_a \odot \Sigma_b := (\Sigma_a \cap ((- \infty, 0] \times Y)) \cup (\Sigma_b \cap ([0, + \infty) \times Y)).\] \end{defn} Conditions (2b) and (2c) of Definition \ref{defn: exact lagrangian cobordism} imply that $\Sigma_a \odot \Sigma_b$ is an exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$. 
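Let us sketch why. By (2c) the primitive $f_{\Sigma_a}$ is constant, say equal to $c_a$, on the positive end of $\Sigma_a$, and by (2b) the primitive $f_{\Sigma_b}$ is constant, say equal to $c_b$, on the negative end of $\Sigma_b$; after shifting $f_{\Sigma_b}$ by the constant $c_a - c_b$, the two primitives glue:

```latex
% A primitive for e^t \alpha on the concatenation (sketch):
\[
  f_{\Sigma_a \odot \Sigma_b} :=
  \begin{cases}
    f_{\Sigma_a}, & \text{on } \Sigma_a \cap \left((-\infty,0] \times Y\right),\\
    f_{\Sigma_b} + (c_a - c_b), & \text{on } \Sigma_b \cap \left([0,+\infty) \times Y\right).
  \end{cases}
\]
% Both branches take the common value c_a near the cylindrical overlap region
% around \{0\} \times \Lambda, so f_{\Sigma_a \odot \Sigma_b} is smooth,
% satisfies e^t \alpha|_{T(\Sigma_a \odot \Sigma_b)} = d f_{\Sigma_a \odot \Sigma_b},
% and is constant at both ends.
```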
\begin{Lem} The compactly supported Hamiltonian isotopy class of $\Sigma_a \odot \Sigma_b$ is independent of the above choices of translations. \end{Lem} \begin{proof} In order to prove the lemma, it is enough to show that any translation of an exact Lagrangian cobordism can be realised by a compactly supported Hamiltonian isotopy. It is a standard fact that a smooth isotopy of exact Lagrangian submanifolds can be realised by a Hamiltonian isotopy. In this case, the exact Lagrangian submanifolds considered are obviously non-compact and translations act on them in a non-compactly supported way. However, given an exact Lagrangian cobordism $\Sigma \subset \ensuremath{\mathdj{R}} \times Y$ and $S \ge 0$, we can find a compactly supported smooth isotopy $\psi_s \colon \ensuremath{\mathdj{R}} \times Y \to \ensuremath{\mathdj{R}} \times Y$ such that $\psi_s(\Sigma)= \tau_s(\Sigma)$ for all $s \in [-S, S]$. The isotopy $\psi_s$ can be defined by integrating the vector field $\widetilde{\chi} \partial_t$, where $\widetilde{\chi} \colon \ensuremath{\mathdj{R}} \times Y \to \ensuremath{\mathdj{R}}$ is the pull-back of a bump function $\chi \colon \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ such that $\chi(t)=1$ for $t \in [-S-T, S+T]$, while $\chi(t)=0$ for $t \not \in [-S-T-1, S+T+1]$. (Here $T \ge 0$ is the constant in Definition \ref{defn: exact lagrangian cobordism}.) \end{proof} \subsection{Deligne-Mumford spaces and labels} \label{sec:deligne-mumf-space} \subsubsection{Universal families of pointed discs.} \label{sec:univ-family-point} For fixed $d \geq 2$, we denote by $\mathcal{R}^{d+1}$ the moduli space of Riemann discs with $d+1$ ordered marked points on the boundary.
One realisation of $\mathcal{R}^{d+1}$ is given by \[ \mathcal{R}^{d+1} = \left \{ (a_0, \ldots,a_d) : a_j=e^{i\theta_j}\text{ with } \theta_0<\ldots<\theta_d<\theta_0+2\pi \right \} / \operatorname{Aut}(D^2),\] where the action of $\operatorname{Aut}(D^2)$ is defined as $f \cdot (a_0,\ldots,a_d) := (f(a_0), \ldots, f(a_d))$ for $f\in \operatorname{Aut}(D^2)$. We denote by $\mathcal{S}^{d+1}\rightarrow \mathcal{R}^{d+1}$ the universal curve over $\mathcal{R}^{d+1}$. One of its realisations is \[ \mathcal{S}^{d+1} = \left \{ (z, a_0,\ldots,a_d) : z \in D^2, \, a_j=e^{i\theta_j}, \theta_0<\ldots<\theta_d<\theta_0+2\pi \right \} / \operatorname{Aut}(D^2),\] where the action of $\operatorname{Aut}(D^2)$ is defined as $f \cdot (z, a_0,\ldots,a_d) := (f(z), f(a_0), \ldots, f(a_d))$ for $f\in \operatorname{Aut}(D^2)$. The map $\mathcal{S}^{d+1} \rightarrow \mathcal{R}^{d+1}$ is induced by the projection \[(z, a_0,\ldots,a_d) \mapsto (a_0,\ldots,a_d),\] and therefore is a fibre bundle with $d+1$ canonical sections $\sigma_0,\ldots, \sigma_{d}$, \[ \sigma_i \colon \mathcal{R}^{d+1} \to \mathcal{S}^{d+1},\] where $\sigma_i$ is induced by $(a_0,\ldots,a_d) \mapsto (a_i, a_0, \ldots, a_d)$. By an abuse of notation, when $r$ is clear from the context, we will denote $\sigma_i(r)=a_i$. For $r\in\mathcal{R}^{d+1}$, we denote by $\overline{S}_r \subset \mathcal{S}^{d+1}$ its preimage under the projection and write $S_r=\overline{S}_r \setminus \{a_0,\ldots,a_d\}$. The connected components of $\partial S_r$ are oriented arcs, and we let $\partial_i S_r$, $i=0,\ldots,d$, denote the arc whose closure has $a_i$ as a starting point. \subsubsection{Strip-like ends.} \label{sec:strip-like-ends} We denote by $Z^+$ and $Z^-$ the Riemann surfaces $(0,+\infty)\times [0,1]$ and $(-\infty,0)\times [0,1]$ respectively, with coordinates $(s,t)$ and conformal structure induced by the complex coordinate $s+it$. 
A \textit{universal choice of strip-like ends} is given by $d+1$ disjoint neighbourhoods $\nu_i$ of the images of the sections $\sigma_i$ together with identifications \[\varepsilon_0:\mathcal{R}^{d+1}\times Z^+ \to \nu_0\setminus \sigma_0 \quad \text{and} \quad \varepsilon_i:\mathcal{R}^{d+1}\times Z^- \to \nu_i\setminus \sigma_i, \text{ for } i\geq 1,\] such that for each $r\in\mathcal{R}^{d+1}$ we have: \begin{itemize} \item The maps $\varepsilon_i\vert_{\{r\}\times Z^{\pm}}:\{r\}\times Z^{\pm}\rightarrow \nu_i\cap S_r$ are holomorphic; and \item $\lim_{s\rightarrow\pm\infty}\varepsilon_i(s,t)=a_i$. \end{itemize} \begin{figure}\centering \includegraphics[width=5cm]{striplikends} \caption{A surface with strip-like ends. The incoming end is on the right, the outgoing ends are on the left.} \label{fig: striplikends} \end{figure} The point $a_0$ is called an \textit{incoming end}, while all the other $a_i$'s are called \textit{outgoing ends} (see Remark \ref{sec:deligne-mumf-comp-1}). See Figure~\ref{fig: striplikends}. We still denote by $(s, t)$ the coordinates on the strip-like ends $S_r \cap (\nu_i\setminus \sigma_i)$ induced by the identifications $\varepsilon_i$ and the coordinates $(s,t)$ on $Z^{\pm}$. In \cite[Section 9a]{Seidel_Fukaya}, it is shown that such universal choices of strip-like ends exist. \subsubsection{Deligne-Mumford compactification of $\mathcal{R}^{d+1}$} \label{sec:deligne-mumf-comp} In this section, we describe a compactification of the space $\mathcal{R}^{d+1}$ into a manifold with corners $\overline{\mathcal{R}^{d+1}}$. We describe the faces of this compactification using the language of stable trees, following \cite[Section 9]{Seidel_Fukaya}. A {\em stable rooted tree} is a tree with one distinguished exterior vertex (i.e. of valence one), called the {\em root}, and whose interior vertices all have valence at least three. Exterior vertices other than the root are called {\em leaves}.
For a stable rooted tree $T$ with $d$ leaves, we denote by $\mathcal{R}^{T}$ the product $\Pi_{v\in \operatorname{Ve}(T)}\mathcal{R}^{|v|}$, where $\operatorname{Ve}(T)$ is the set of interior vertices and $|v|$ denotes the valence of the vertex $v$, which is at least $3$ by the stability condition. \begin{Rem}\label{sec:deligne-mumf-comp-1} The root induces a natural orientation of each edge of a rooted tree $T$ by the following convention: the edge at the root is oriented so that it \emph{leaves} the root, and at each other vertex there is exactly one incoming edge. \end{Rem} If a stable rooted tree $T'$ is obtained from the stable rooted tree $T$ by collapsing $k$ edges, then \cite[Sections 9e and 9f]{Seidel_Fukaya} describes a gluing map \begin{equation}\label{bad tooth} \gamma^{T,T'}:\mathcal{R}^{T}\times (-1,0)^{k}\rightarrow \mathcal{R}^{T'}. \end{equation} Naively, for an element $(r_v)_{v\in \operatorname{Ve}(T)}$ of $\mathcal{R}^{T}$ and a $k$-tuple $(l_e)$ of positive numbers indexed by the collapsed edges, gluing is performed by cutting the strip-like ends of $S_{r_{v_1}}$ and $S_{r_{v_2}}$ corresponding to a collapsed edge $e$ from $v_1$ to $v_2$ at time $|t|=e^{-\pi l_e}$, identifying the remaining part of the strip-like ends, and then uniformising the resulting disc. \begin{Rem} Actually, \cite{Seidel_Fukaya} defines only the map $\gamma^{T,T_d}$, where $T_d$ is the tree with only one interior vertex, which corresponds to $\mathcal{R}^{d+1}$. \end{Rem} The gluing maps satisfy the cocycle relation: \begin{eqnarray} \lefteqn{\gamma^{T',T''}(\gamma^{T,T'}(\{r_v\},\rho_1,\ldots,\rho_{k_1}),\rho_{k_1+1},\ldots,\rho_{k_1+k_2})}\label{eq:29}\\ \nonumber &=& \gamma^{T,T''}(\{r_v\},\rho_1,\ldots,\rho_{k_1+k_2}). \end{eqnarray} We define $\overline{\mathcal{R}^{d+1}}=\sqcup_{T}\mathcal{R}^T$ (as a set).
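To illustrate this decomposition in the simplest non-trivial case, take $d=3$; since $\operatorname{Aut}(D^2)$ is three-dimensional, $\dim \mathcal{R}^{d+1}=d-2$, so $\mathcal{R}^3$ is a point and $\mathcal{R}^4$ is an open interval. There are exactly three stable rooted trees with three leaves: the tree $T_3$ with a single interior ($4$-valent) vertex, and two trees with two interior trivalent vertices, distinguished by which pair of consecutive leaves is grouped together. Hence

```latex
\[
  \overline{\mathcal{R}^{4}}
  \;=\; \mathcal{R}^{4}
  \,\sqcup\, \left(\mathcal{R}^{3}\times\mathcal{R}^{3}\right)
  \,\sqcup\, \left(\mathcal{R}^{3}\times\mathcal{R}^{3}\right)
  \;\cong\; [0,1],
\]
```

where the two boundary points correspond to the two ways in which a $4$-punctured disc can degenerate to a nodal disc with two $3$-punctured components.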
The gluing maps in Equation~\eqref{bad tooth} allow us to build an atlas on $\overline{\mathcal{R}^{d+1}}$, and therefore one can prove the following. \begin{Lem}\cite[Lemma 9.2]{Seidel_Fukaya} The set $\overline{\mathcal{R}^{d+1}}$ has the structure of a manifold with corners which is induced by the gluing maps \eqref{bad tooth}. For every stable rooted tree $T$ with $d$ leaves, $\mathcal{R}^{T}$ is a stratum of the stratification of $\overline{\mathcal{R}^{d+1}}$ defined as follows: if $T'$ is obtained from $T$ by collapsing some interior edges, then $\mathcal{R}^T$ is a face of the compactification of $\mathcal{R}^{T'}$. \end{Lem} Elements of $\overline{\mathcal{R}^{d+1}}$ are called {\em nodal stable $(d+1)$-punctured discs}. \begin{Rem} Note that the definition of the maps $\gamma^{T,T'}$ depends on the choice of strip-like ends for all $\mathcal{R}^{d'}$ for $d'<d$. However, the structure of a manifold with corners on $\overline{\mathcal{R}^{d+1}}$ turns out to be independent of these choices. \end{Rem} An element $\{r_v\}$ of $\mathcal{R}^{T}$ is called a \textit{nodal stable $(d+1)$-punctured disc}. In this situation, marked points of $r_v$ corresponding to interior edges are called \textit{boundary nodes}, while those corresponding to edges connected to leaves or to the root are called \textit{ends}. A nodal stable $(d+1)$-punctured disc in $\mathcal{R}^{T'}$ which is in the image of a map $\gamma^{T,T'}$ inherits two strip-like ends: one coming from the universal choices of strip-like ends in the moduli spaces $\mathcal{R}^{|v|}$ for $v\in \operatorname{Ve}(T')$, and one coming from the universal choices of strip-like ends in the moduli spaces $\mathcal{R}^{|v|}$ for $v\in \operatorname{Ve}(T)$.
If there exists an $\epsilon>0$ such that, for all gluing parameters smaller than $\epsilon$ and for all nodal stable $(d+1)$-punctured discs, those two strip-like ends agree, then we call the universal choices of strip-like ends in the moduli spaces $\mathcal{R}^{d+1}$ \textit{consistent}. Consistent universal choices of strip-like ends exist because the space of strip-like ends is contractible, and the relation \eqref{eq:29} allows an inductive argument. From now on, we assume that such consistent choices have been made. \subsubsection{Asymptotics labels for Deligne-Mumford spaces} \label{sec:asymptotic-data} In the present paper we will consider various moduli spaces of punctured holomorphic discs whose punctures are asymptotic to either Reeb chords or intersection points between Lagrangian cobordisms. We therefore make the following definitions. Let $\Sigma_0$ and $\Sigma_1$ be two Lagrangian cobordisms from Legendrian submanifolds $\Lambda^-_0$ and $\Lambda^-_1$ at the negative ends to Legendrian submanifolds $\Lambda^+_0$ and $\Lambda^+_1$ at the positive ends, respectively. A \textit{Lagrangian label} $\underline{L}$ for $\mathcal{R}^{d+1}$ is a map from $\mathbb{Z}/(d+1)\mathbb{Z}$ to $\{\Sigma_0,\Sigma_1\}$. If $\underline{L}(i-1)\not=\underline{L}(i)$, then the marked point $a_i$ for $i\in \{0,\ldots, d\}$ is called a \textit{jump} of $\underline{L}$. Note that a Lagrangian label has an even number of jumps. A Lagrangian label is \textit{simple} if it satisfies the following conditions: \begin{itemize} \item If $a_0$ is not a jump, then none of the $a_i$'s are jumps. \item It has at most two jumps. \item If $a_0$ is a jump, then $\underline{L}(0)=\Sigma_0$. \end{itemize} \begin{Rem} A Lagrangian label for $\mathcal{R}^{d+1}$ induces compatible Lagrangian labels for all faces of $\overline{\mathcal{R}^{d+1}}$. If the Lagrangian label is simple, then the induced labels are also simple. 
\end{Rem} The set of \textit{asymptotics} of a given pair of Lagrangian cobordisms $\Sigma_0$, $\Sigma_1$ is the union $$A(\Sigma_0,\Sigma_1):=(\Sigma_0\cap\Sigma_1)\cup \mathcal{R}(\Lambda^+_0\sqcup \Lambda^+_1)\cup \mathcal{R}(\Lambda^-_0\sqcup\Lambda^-_1).$$ The main definition in this section is the following. \begin{defn} \begin{itemize} \item An \emph{asymptotics label} for $\mathcal{R}^{d+1}$ is an assignment of an asymptotic in $A(\Sigma_0,\Sigma_1)$ to each section $\sigma_i$, $i=0,\ldots,d$ (thus inducing asymptotics for each marked point $a_i$ given any $r\in\mathcal{R}^{d+1}$). \item A Lagrangian label is \emph{compatible} with a given asymptotics label if:\begin{itemize} \item Every marked point asymptotic to a double point is a jump; \item In the case when the asymptotic of $a_i$ is a Reeb chord starting on $\Lambda^\pm_j$ and ending on $\Lambda^\pm_k$ (here $\{j,k\} \subset \{0,1\}$), it follows that $\underline{L}(i-1) =\Sigma_j$ while $\underline{L}(i)=\Sigma_k$. \end{itemize} \end{itemize} \end{defn} \begin{Rem} \label{rem:jump} Note that there is at most one \emph{simple} Lagrangian label compatible with a given asymptotics label; thus we will always specify only a set of asymptotics, assuming that the implicit compatible label is chosen. \end{Rem} \subsubsection{Semi-stable nodal discs} \label{sec:semi-stable-nodal} In this paper we will also consider more general domains, called semi-stable, whose combinatorics is still described by decorated trees as in Section~\ref{sec:deligne-mumf-comp}, but possibly containing semi-stable vertices. We use the convention that $\mathcal{R}^{d+1}$ consists of a single point when $d=0,1$. Elements of $\mathcal{S}^{0+1}$ or $\mathcal{S}^{1+1}$ are called \textit{semi-stable} punctured discs. In other words, a punctured disc is semi-stable if it has either one or two punctures on the boundary.
After removing the punctures, they can be uniformised either as the \textit{strip} $Z= \mathbb{R}\times [0,1]$ or the \textit{half-plane} $H=\{\operatorname{im} z>0\}\subset \mathbb{C}$ with their standard Riemann structures. We call $(s,t)$ the global coordinates on $Z$, and we require that any strip-like end on $Z$ is a restriction of these coordinates. A {\em semi-stable rooted tree with $d$ leaves} is a tree $T$ with one chosen valence one vertex (the root) and $d$ chosen valence one vertices (the leaves). All other vertices, including the remaining valence one vertices, will be called {\em internal vertices}. The \textit{stabilisation} of $T$, denoted $S(T)$, is the stable tree obtained by performing the following two operations: \begin{itemize} \item promoting valence one interior vertices to leaves, and \item suppressing all vertices of valence $2$ and merging the corresponding edges. \end{itemize} Note that the interior vertices of $S(T)$ are in natural bijection with the vertices of $T$ of valence at least three and, therefore, $S(T)$ is empty in the case when every vertex of $T$ has valence one or two. To any semi-stable tree $T$ together with an element of ${\mathcal R}^{S(T)}$, we associate a punctured nodal disc as follows. \begin{itemize} \item To any interior vertex of $T$ we associate a disc; \item For any edge of $T$ we put a boundary node between the discs corresponding to the vertices joined by the edge; and \item For any exterior vertex we put a marked point in the disc associated to the closest interior vertex. \end{itemize} The element of ${\mathcal R}^{S(T)}$ specifies a conformal structure for any disc associated to a vertex of valence at least three (there are no moduli for discs with one or two punctures). Finally, the boundary marked point associated to the root becomes the incoming puncture, and the marked points associated to the leaves become outgoing punctures.
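As a simple example of this construction, let $T$ be the semi-stable rooted tree with two leaves whose root is joined by an edge to an internal vertex $v_1$ of valence two, with $v_1$ joined to a trivalent internal vertex $v_2$ carrying the two leaves. Stabilising suppresses $v_1$ and merges its two edges, so $S(T)$ is the stable tree with a single interior vertex of valence three, and ${\mathcal R}^{S(T)}=\mathcal{R}^{3}$ is a point. The associated nodal disc consists of a strip (the disc associated to $v_1$, carrying the incoming puncture and one boundary node) attached by that node to a $3$-punctured disc (the disc associated to $v_2$, carrying the two outgoing punctures).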
For any semi-stable tree $T$ with $k$ interior vertices and $d$ leaves, there is a gluing map $$\gamma^{T, T_d} \colon {\mathcal R}^{S(T)} \to {\mathcal R}^{d+1}$$ similar to the one described in Section \ref{sec:deligne-mumf-comp}, but since $H$ and $Z$ have nontrivial automorphism groups, these gluing maps are not local embeddings. Lagrangian and asymptotics labels can be defined for unstable curves in the same way. Obviously, the Lagrangian label on $H$ must have no jumps. To a (stable or semi-stable) nodal disc $S=\{r_v\}\in \mathcal{R}^{T}$ one associates its \textit{normalisation}, which is simply the quadruple $(\widehat{S},\mathfrak{m}, \mathfrak{n},\iota)$ with \begin{itemize} \item $\widehat{S}=\sqcup_{v\in \operatorname{Ve}(T)} S_{r_v}$. \item $\mathfrak{m}$ is the union of the marked points of $\widehat{S}$ corresponding to edges of $T$ connecting to leaves or to the root of $T$. \item $\mathfrak{n}$ is the union of the marked points of $\widehat{S}$ corresponding to interior edges of $T$. It comes equipped with a fixed point free involution $\iota$ determined by $\iota(n_0)=n_1$ if $n_0$ and $n_1$ are connected by an edge $e$. \end{itemize} Elements of $\mathfrak{m}$ are called \textit{ends} of $S$ and elements of $\mathfrak{n}$ are called \textit{nodes}. Note that one can rebuild the gluing tree out of the data $(\widehat{S},\mathfrak{m}, \mathfrak{n},\iota)$. \section{Analytic preliminaries} \label{sec:analyt-prel} \subsection{Almost complex structures} \label{sec:holomorphic-curves} Before defining the moduli spaces of pseudoholomorphic curves that are relevant for the theory (see Section \ref{sec:Cthulu-moduli-spaces}), it is necessary to describe the almost complex structures. For technical reasons it will be necessary to make certain additional assumptions on them, which we now describe. \subsubsection{Cylindrical almost complex structures} \label{sec:cylindrical} Let $(Y,\alpha)$ be a contact manifold with the choice of a contact form.
We denote by $\mathcal{J}^{\operatorname{cyl}}(Y)$ the set of \emph{cylindrical} almost complex structures on the symplectisation $(\mathbb{R}\times Y, d(e^t\alpha))$, i.e.~almost complex structures $J$ satisfying the following conditions: \begin{itemize} \item $J$ is invariant under the natural (symplectically conformal) action of $\mathbb{R}$ on $\mathbb{R}\times Y$; \item $J\frac{\partial}{\partial t}=R_\alpha$; \item $J(\xi)=\xi$, where $\xi :=\ker \alpha \subset TY$; and \item $J$ is compatible with $d \alpha|_{\xi}$, i.e.~$d\alpha|_{\xi}(\cdot,J \cdot)$ is a metric on $\xi$. \end{itemize} We will say that an almost complex structure defined on a subset of the symplectisation of the form $I\times Y$ (where $I$ is an interval) comes from $\mathcal{J}^{\operatorname{cyl}}(Y)$ if it is the restriction of an almost complex structure in $\mathcal{J}^{\operatorname{cyl}}(Y)$. \subsubsection{Almost complex structures on Liouville manifolds.} \label{sec:almost-compl-struct} Let $(P,\theta)$ be a Liouville manifold. Recall that there is a subset $P_\infty \subset P$ that is exact symplectomorphic to half a symplectisation \[ ([0,+\infty) \times V,d(e^{\tau}\alpha_V)),\] and where $P \setminus P_\infty \subset P$ is pre-compact. We say that an almost complex structure $J_P$ on $P$ compatible with $d\theta$ is \textit{admissible} if it comes from $\mathcal{J}^{\operatorname{cyl}}(V)$ outside of a compact subset of $P_\infty$. We denote by $\mathcal{J}^{\operatorname{adm}}(P)$ the set of these almost complex structures. \subsubsection{Cylindrical lifts} \label{sec:cyllift} We will now restrict our attention to the manifolds that we will be considering here; namely, the contactisation of a Liouville manifold $(P, \theta)$, i.e.
\[(Y, \alpha) := (P \times \ensuremath{\mathdj{R}}, dz + \theta), \] and its symplectisation \[(X,\omega):=(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}},d(e^t(dz+\theta))).\] From now on $(Y, \alpha)$ and $(X, \omega)$ will always denote manifolds of this type. Given a compatible almost complex structure $J_P$ in $\mathcal{J}^{\operatorname{adm}}(P)$ as defined above, there is a unique cylindrical almost complex structure $\widetilde{J}_P$ on $(X,d(e^t\alpha))$ which makes the projection \[ \pi \colon X=\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to P \] a $(\widetilde{J}_P,J_P)$-holomorphic map. We will call this almost complex structure the \emph{cylindrical lift of $J_P$}. An important feature of the cylindrical lift is that the diffeomorphisms $(t,p,z) \mapsto (t,p,z+z_0)$, $z_0 \in \ensuremath{\mathdj{R}}$, induced by the Reeb flow are all $\widetilde{J}_P$-holomorphic. We denote by $\mathcal{J}^{\operatorname{cyl}}_\pi(Y) \subset \mathcal{J}^{\operatorname{cyl}}(Y)$ the set of cylindrical lifts of almost complex structures in $\mathcal{J}^{\operatorname{adm}}(P)$. \subsubsection{Compatible almost complex structures with cylindrical ends.} \label{sec:admissible} Let $J^+$ and $J^-$ be almost complex structures in $\mathcal{J}_{\pi}^{\operatorname{cyl}}(Y)$, and let $T\in\mathbb{R}^+$. We will require that both almost complex structures $J^\pm$ coincide outside of $\ensuremath{\mathdj{R}} \times K$ for some compact subset $K \subset Y$. We denote by $\mathcal{J}_{J^-,J^+,T}^{\operatorname{adm}}(X)$ the set of almost complex structures on $X=\mathbb{R}\times Y$ that tame $d(e^t\alpha)$ and satisfy the following.
\begin{enumerate}[label={(\alph*)}] \item The almost complex structure $J$ is equal to the cylindrical almost complex structures $J^-$ and $J^+$ on subsets of the form \begin{gather*} (-\infty,-T] \times P \times \ensuremath{\mathdj{R}},\\ [T,+\infty) \times P \times \ensuremath{\mathdj{R}}, \end{gather*} respectively; and \item outside of $\ensuremath{\mathdj{R}} \times K$, the almost complex structure $J$ coincides with a cylindrical lift in $\mathcal{J}^{\operatorname{cyl}}_\pi(Y)$. \end{enumerate} Condition (b) is needed in order to deal with compactness issues. Recall that the contact manifold $P \times \ensuremath{\mathdj{R}}$ as well as the Liouville manifold $P$ are non-compact. In the case when we do not care about the parameter $T>0$, we will simply write $\mathcal{J}^{\operatorname{adm}}_{J^-,J^+}(X)$. The union of all $\mathcal{J}^{\operatorname{adm}}_{J^-,J^+}(X)$ over all $J^-,J^+\in \mathcal{J}^{\operatorname{cyl}}(Y)$ is denoted by $\mathcal{J}^{\operatorname{adm}}(X)$; almost complex structures in this set will be called \textit{admissible}. \subsubsection{Domain dependent almost complex structures.} \label{sec:pert-floer-equa} Let $J^\pm\in\mathcal{J}^{\operatorname{cyl}}(Y)$ be two cylindrical almost complex structures and let $\{J_t\}$ be a smooth path in $\mathcal{J}^{\operatorname{adm}}_{J^-,J^+}(X)$ which is locally constant near $t=0$ and $t=1$. Let $\Sigma_0$ and $\Sigma_1$ be two Lagrangian cobordisms in the symplectisation $(X,d(e^t\alpha))$ of $(Y,\xi)$ with Legendrian ends $\Lambda^\pm_i$, $i=0,1$. To a punctured disc $S$ (either stable or semi-stable) with simple Lagrangian label $\underline{L}$ we associate a domain-dependent almost complex structure $J_{(S, \underline{L})} \colon S \to \mathcal{J}^{\operatorname{adm}}_{J^-,J^+}(X)$ using the path $\{J_t\}$, as described below.
For every punctured disc $S$, if $\underline{L}$ has no jumps, then $J_{(S, \underline{L})}$ is constant and has value $J_0$ or $J_1$ depending on whether the Lagrangian label $\underline{L}$ is constantly equal to $\Sigma_0$ or to $\Sigma_1$. Note that $S=H$ (i.e.~a half-plane) always falls into this case. If $\underline{L}$ has two jumps (which means, in particular, that $S$ has $d+1$ boundary punctures with $d \ge 1$), we uniformise $S$ to a strip $\ensuremath{\mathdj{R}} \times [0,1]$ (with coordinates $(s,t)$) with $d-1$ punctures in the boundary, such that the jumps of $\underline{L}$ correspond to the ends of the strip, and moreover the incoming puncture of $S$ corresponds to the end of the strip at $s \ll 0$. This uniformisation, which is unique up to translations in the $s$-coordinate of the strip, defines a map $t \colon S \to [0,1]$ by composition with the $t$-coordinate on the strip, and we define $J_{(S, \underline{L})} \colon S \to\mathcal{J}^{\operatorname{adm}}_{J^-,J^+}(X)$ by $J_{(S, \underline{L})}(z) = J_{t(z)}$. Note that $J_{(S, \underline{L})}$ is constantly equal to $J_0$ near the boundary components associated to $\Sigma_0$ and constantly equal to $J_1$ near the boundary components associated to $\Sigma_1$. When $S$ is a stable $(d+1)$-punctured disc, the maps $J_{(S, \underline{L})}$ fit into smooth maps $$J_{(d, \underline{L})}: \mathcal{S}^{d+1}\rightarrow \mathcal{J}^{\operatorname{adm}}_{J^-,J^+}(X)$$ which are compatible with the degenerations of the Deligne-Mumford moduli spaces. In order to uniformise the notation we will denote $J_{(2, \underline{L})} := J_{(Z, \underline{L})}$ and $J_{(1, \underline{L})} := J_{(H, \underline{L})}$. The collection of these maps is called a \textit{universal choice of domain dependent almost complex structures} induced by $(J^+,J^-,J_t)$. For a point $z\in\mathcal{S}^{d+1}$ we denote by $J(z)$ the corresponding almost complex structure on $X$ induced by this construction.
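In particular, on the strip itself the construction reduces to the familiar domain-dependent structure of Floer theory: \[ J_{(Z,\underline{L})}(s,t)=J_t, \qquad (s,t)\in Z=\ensuremath{\mathdj{R}}\times[0,1], \] which is well defined because translations in $s$ are the only ambiguity of the uniformisation, and which is constant near $t=0$ and $t=1$ since the path $\{J_t\}$ is locally constant there.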
\subsection{Moduli space of holomorphic discs.} \label{sec:Cthulu-moduli-spaces} We now describe the moduli spaces of discs used in Section \ref{sec:Cthulhu-complex} in order to define the differential of the Cthulhu complex. We begin with the general definition of these moduli spaces, and then detail some of them with particular asymptotics. \subsubsection{General definitions} In this subsection we set up some terminology which will be used in all subsequent discussions about moduli spaces. Let $S$ be a $(d+1)$-punctured disc ($d \ge 0$) and $\underline{L}$ a simple Lagrangian label for $S$ with values in a pair of Lagrangian cobordisms $(\Sigma_0,\Sigma_1)$. Suppose we have a (possibly domain dependent) almost complex structure $J$ on $X$. We say that a map $u: S \rightarrow X$ is \textit{$J$-holomorphic with boundary conditions in $\underline{L}$} if, for all $z\in S$, \begin{equation} d_zu \circ j =J(z)\circ d_zu, \label{eq:32} \end{equation} where $j$ denotes the standard complex structure on $D^2$, and $u(\partial_i S)\subset \underline{L}(i)$. If the Lagrangian label $\underline{L}$ is constant, we will say that $u$ is {\em pure}; if it has jumps, we will say that $u$ is {\em mixed}. For now we remain vague about the almost complex structure $J$, because it will depend on the specific moduli space under consideration. Given an intersection point $p\in \Sigma_0\cap\Sigma_1$, we say that $u$ is \textit{asymptotic} to $p$ at $a_i$ if \begin{itemize} \item the marked point $a_i$ is a jump, and \item $\lim_{z\rightarrow a_i}u(z)=p$. \end{itemize} Let $\gamma$ be a Reeb chord of $\Lambda^{\pm}_0\sqcup\Lambda^{\pm}_1$ of length $T$.
The map $u=(a,v)$ into $\ensuremath{\mathdj{R}} \times Y$ has a {\em positive asymptotic} to $\gamma$ at $a_i$ if: \begin{itemize} \item $\lim_{s\rightarrow + \infty}v(\varepsilon_i(s,t))=\gamma(Tt)$ and $\lim_{s\rightarrow +\infty}a(\varepsilon_i(s,t))= +\infty$, in the case when $a_i=a_0$ is the incoming puncture, or \item $\lim_{s\rightarrow - \infty}v(\varepsilon_i(s,t)) = \gamma(T(1-t))$ and $\lim_{s\rightarrow -\infty}a(\varepsilon_i(s,t))= +\infty$, in the case when $i\not= 0$. \end{itemize} Let $\gamma$ be a Reeb chord of $\Lambda^{\pm}_0\sqcup\Lambda^{\pm}_1$ of length $T$. The map $u=(a,v)$ has a {\em negative asymptotic} to $\gamma$ at $a_i$, $i \neq 0$, if \begin{itemize} \item $\lim_{s\rightarrow -\infty}a(\varepsilon_i(s,t))=-\infty$ and $\lim_{s\rightarrow -\infty}v(\varepsilon_i(s,t))=\gamma(Tt)$. \end{itemize} We will never consider holomorphic curves which have a negative asymptotic at the incoming end. Associated to a pair of cobordisms there are three types of possible targets for Lagrangian labels that will be considered here: $(\Sigma_0,\Sigma_1)$, $(\mathbb{R}\times\Lambda^\pm_0,\mathbb{R}\times\Lambda^\pm_1)$ and $\mathbb{R}\times\Lambda^\pm_i$ for $i=0,1$; note that the asymptotics of the latter two are subsets of those of the first pair. We will use $\underline{L}$ to denote any of those labels. Given $x_0,\ldots,x_d$ in $A(\Sigma_0,\Sigma_1)$ and $r\in\mathcal{R}^{d+1}$, we denote by $$\mathcal{M}^r_{\underline{L}}(x_0;x_1,\ldots,x_d ; J)$$ the space of $J$-holomorphic maps from $S_r$ to $X$ with asymptotics to $x_i$ at $a_i$ modulo reparametrisations of $S_r$. In other words, $x_0$ is the asymptotic of the incoming puncture. We denote by $$\mathcal{M}_{\underline{L}}(x_0;x_1,\ldots,x_d ; J)$$ the union of the $\mathcal{M}^r_{\underline{L}}(x_0;x_1,\ldots,x_d; J)$ over all $r\in\mathcal{R}^{d+1}$.
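To make this notation concrete in the simplest mixed case, take $d=1$ and let $x_0=p$ and $x_1=q$ be intersection points in $\Sigma_0\cap\Sigma_1$. Then $\mathcal{M}_{\underline{L}}(p;q;J)$ is the space of $J$-holomorphic strips \[ u \colon Z=\mathbb{R}\times[0,1] \longrightarrow X, \qquad \lim_{s\rightarrow -\infty} u(s,t) = p, \qquad \lim_{s\rightarrow +\infty} u(s,t) = q, \] with the two boundary components of $Z$ mapped to $\Sigma_0$ and $\Sigma_1$ respectively, modulo translations in the $s$-coordinate (the incoming puncture $a_0$ sits at $s\ll 0$, in accordance with the uniformisation convention above).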
Note that once the asymptotics for the moduli space are fixed, the actual Lagrangian label is uniquely determined and, hence, we do not need to specify it. In the case when both $\underline{L}$ and $J$ are invariant under translations of the symplectisation coordinate, there is an induced $\mathbb{R}$-action on $\mathcal{M}_{\underline{L}}(x_0;x_1,\ldots,x_d; J)$. We use $$\widetilde{\mathcal{M}}_{\underline{L}}(x_0;x_1,\ldots,x_d; J)$$ to denote the quotient of the moduli space by this action. In the following subsections we describe the moduli spaces with simple Lagrangian labels that appear in the definition of the Cthulhu differential in Section \ref{sec:Cthulhu-complex}. \subsubsection{Strips and half-planes} \label{sec:strips} As already seen, the pseudoholomorphic discs considered here will have a number of different types of asymptotics. However, it will be useful to make a distinction between the following two types of discs. \begin{itemize} \item A pseudoholomorphic disc with no jumps is called a \emph{(punctured) half-plane}; while \item A pseudoholomorphic disc having precisely two jumps will be called a \emph{(punctured) strip}. The puncture corresponding to the unique incoming end will be called the \emph{output}, while the puncture corresponding to the unique outgoing end at which a jump occurs will be called the \emph{input}. \end{itemize} \begin{Rem} The fact that outgoing ends are inputs and incoming ends are outputs might seem confusing. The incoming/outgoing dichotomy comes from the notion of incoming and outgoing edges in a rooted tree (see Remark \ref{sec:deligne-mumf-comp-1}) and refers to particular coordinates of the domain (the strip-like ends). This follows the convention of \cite{Seidel_Fukaya}. The dichotomy input/output refers to what will belong to the domain/codomain of the differential as defined in Section \ref{sec:Cthulhu-complex}.
\end{Rem} The above two types of pseudoholomorphic discs (i.e.~half-planes and strips) will play radically different roles in our theory. Roughly speaking, punctured half-planes appear when defining the Chekanov-Eliashberg DGA as well as DGA morphisms, while punctured strips appear when defining the Floer homology, the bilinearised Legendrian contact cohomology, and the Cthulhu homology. \subsubsection{LCH moduli spaces.} \label{sec:lch-moduli-spaces} The LCH moduli spaces are the moduli spaces appearing in the definition of linearised contact homology. We will consider two types of LCH moduli spaces, depending on whether the involved Reeb chords are mixed or not. We fix two cylindrical almost complex structures $J^\pm \in {\mathcal J}^{\operatorname{cyl}}(Y)$ on $X= \ensuremath{\mathdj{R}} \times Y$ and a path $\{J_t\}$ of almost complex structures in ${\mathcal J}^{\operatorname{adm}}_{J^-,J^+}(X)$. We denote by $J_{(d, \underline{L})}$ the corresponding universal choice of domain dependent almost complex structures. Denote $\Sigma = \Sigma_i$ and $\Lambda^\pm = \Lambda_i^\pm$ for $i=0, 1$. Let $\gamma^+, \delta^+_1, \ldots, \delta^+_d$ be Reeb chords of $\Lambda^+$ and $\gamma^-, \delta^-_1,\ldots,\delta^-_d$ Reeb chords of $\Lambda^-$. Throughout the paper we will write a $d$-tuple of pure Reeb chords as a word; this notation is reminiscent of the multiplicative structure of the Chekanov-Eliashberg algebra. Therefore, we set $\boldsymbol{\delta}^\pm= \delta_1^\pm \ldots \delta_d^\pm$.
We will consider three types of pure LCH moduli spaces: $$\widetilde{\mathcal{M}}_{\mathbb{R}\times\Lambda^+}(\gamma^+;\boldsymbol{\delta}^+; J^+), \quad \widetilde{\mathcal{M}}_{\mathbb{R}\times\Lambda^-} (\gamma^-;\boldsymbol{\delta}^-; J^-), \quad \text{and} \quad \mathcal{M}_{\Sigma}(\gamma^+;\boldsymbol{\delta}^-; J_{(d, \underline{L})}).$$ The first two moduli spaces will be called pure {\em cylindrical} LCH moduli spaces and are used to define the Legendrian contact homology differential of $\Lambda^\pm$. The third moduli space will be called pure {\em cobordism} LCH moduli space and is used in \cite{Ekhoka} to define maps between Legendrian contact homology algebras. Since the Lagrangian labels are constant for these moduli spaces, the almost complex structures $J_{(d, \underline{L})}$ are actually domain \emph{independent}. However, their value depends on whether $\Sigma= \Sigma_0$ or $\Sigma = \Sigma_1$. Finally, recall that in the cylindrical moduli spaces, we take a quotient by the $\ensuremath{\mathdj{R}}$-action, while such an operation is not possible (nor desirable) for the cobordism moduli space. Now we describe mixed LCH moduli spaces. Let $\boldsymbol{\delta}^\pm:=\delta^\pm_1\ldots\delta^\pm_{i-1}$ be Reeb chords of $\Lambda^\pm_1$ and $\boldsymbol{\zeta}^\pm=\zeta^\pm_{i+1}\ldots\zeta^\pm_d$ be Reeb chords of $\Lambda_0^\pm$.
We will consider three types of mixed LCH moduli spaces: $$\widetilde{\mathcal{M}}_{\mathbb{R}\times\Lambda_0^\pm,\mathbb{R}\times\Lambda_1^\pm} (\gamma^+; \boldsymbol{\delta}^\pm, \gamma^-, \boldsymbol{\zeta}^\pm; J^\pm) \quad \text{and} \quad \mathcal{M}_{\Sigma_0,\Sigma_1} (\gamma^+; \boldsymbol{\delta}^-, \gamma^-, \boldsymbol{\zeta}^-; J_{(d, \underline{L})}),$$ where $\gamma^\pm \in \mathcal{R}(\Lambda^+_1,\Lambda^+_0)$ for $\widetilde{\mathcal{M}}_{\mathbb{R}\times\Lambda_0^+,\mathbb{R}\times\Lambda_1^+} (\gamma^+; \boldsymbol{\delta}^+, \gamma^-, \boldsymbol{\zeta}^+; J^+)$, $\gamma^\pm \in \mathcal{R}(\Lambda^-_1,\Lambda^-_0)$ for $\widetilde{\mathcal{M}}_{\mathbb{R}\times\Lambda_0^-,\mathbb{R}\times\Lambda_1^-} (\gamma^+; \boldsymbol{\delta}^-, \gamma^-, \boldsymbol{\zeta}^-; J^-)$, and $\gamma^+ \in \mathcal{R}(\Lambda^+_1,\Lambda^+_0)$, $\gamma^- \in \mathcal{R}(\Lambda^-_1,\Lambda^-_0)$ for $\mathcal{M}_{\Sigma_0,\Sigma_1} (\gamma^+; \boldsymbol{\delta}^-, \gamma^-, \boldsymbol{\zeta}^-; J_{(d, \underline{L})})$. The first two moduli spaces will be called mixed {\em cylindrical} LCH moduli spaces and are used to define the bilinearised Legendrian contact homology differential of $(\Lambda^\pm_0, \Lambda^\pm_1)$ (see Section \ref{subsec: LCH} or \cite{augcat}). The third moduli space will be called mixed {\em cobordism} LCH moduli space and is used to define maps between bilinearised Legendrian contact homology groups. An illustration of a curve in the mixed cobordism LCH moduli spaces is shown in Figure \ref{fig:LCHmap}. 
\begin{figure}[!h] \centering \vspace{0.5cm} \labellist \pinlabel $\gamma^+$ [bl] at 196 447 \pinlabel $\delta_1$ [tl] at 13 0 \pinlabel $\delta_2$ [tl] at 120 0 \pinlabel $\gamma^-$ [tl] at 215 2 \pinlabel $\zeta_1$ [tl] at 313 0 \pinlabel $\zeta_2$ [tl] at 414 0 \pinlabel $\Sigma_0$ [bl] at 50 240 \pinlabel ${\color{red}\Sigma_1}$ [bl] at 366 240 \endlabellist \includegraphics[height=5cm]{bilindiff} \caption{A mixed cobordism LCH curve.} \label{fig:LCHmap} \end{figure} \subsubsection{Floer moduli space.} \label{sec:floer-moduli-space} Let $p,q\in \Sigma_0\cap\Sigma_1$ be intersection points, $\boldsymbol{\delta}^-= \delta^-_1\ldots\delta^-_{i-1}$ a word of Reeb chords on $\Lambda^-_0$, and $\boldsymbol{\zeta}^-= \zeta^-_{i+1}\ldots \zeta^-_d$ a word of Reeb chords on $\Lambda^-_1$. $J$-holomorphic curves in the moduli space $$\mathcal{M}_{\Sigma_0,\Sigma_1}(p;\boldsymbol{\delta}^-,q,\boldsymbol{\zeta}^-; J_{(d, \underline{L})})$$ will be called \textit{Floer strips}. (Note that this is an abuse of terminology, as the actual domain is not a strip unless $\boldsymbol{\delta}^-$ and $\boldsymbol{\zeta}^-$ are empty.) See Figure \ref{fig:floerdiff}. Furthermore, the punctured disc is here required to make a jump from $\Sigma_1$ to $\Sigma_0$ at the puncture asymptotic to $p$ (this is the incoming puncture), while it makes a jump from $\Sigma_0$ to $\Sigma_1$ at the puncture asymptotic to $q$. This follows from our convention for Lagrangian labels; see Remark \ref{rem:jump}.
\begin{figure}[!h] \centering \vspace{0.5cm} \labellist \pinlabel $p$ [bl] at 207 382 \pinlabel $\delta_1$ [tl] at 11 1 \pinlabel $\delta_2$ [tl] at 121 1 \pinlabel $q$ [tl] at 209 82 \pinlabel $\zeta_1$ [tl] at 312 2 \pinlabel $\zeta_2$ [tl] at 410 3 \pinlabel $\Sigma_0$ [br] at 80 200 \pinlabel ${\color{red}\Sigma_1}$ [bl] at 380 200 \endlabellist \includegraphics[height=5cm]{flodiff} \caption{A Floer strip.} \label{fig:floerdiff} \end{figure} \subsubsection{LCH to Floer moduli space.} \label{sec:lch-floer-moduli} Let $\gamma^-\in \mathcal{R}(\Lambda^-_1,\Lambda^-_0)$ be a mixed chord, $p\in \Sigma_0\cap\Sigma_1$ an intersection point, $\boldsymbol{\delta}^-=\delta^-_1\ldots\delta^-_{i-1}$ a word of Reeb chords on $\Lambda^-_0$, and $\boldsymbol{\zeta}^-=\zeta^-_{i+1} \ldots \zeta^-_d$ a word of Reeb chords on $\Lambda^-_1$. Curves in the moduli space $$\mathcal{M}_{\Sigma_0,\Sigma_1}(p;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-; J_{(d, \underline{L})})$$ will be called \textit{holomorphic Cthulhus}. See Figure \ref{fig:cultist1}. \begin{figure}[!h] \centering \vspace{0.5cm} \labellist \pinlabel $p$ [bl] at 207 382 \pinlabel $\delta_1$ [tl] at 11 1 \pinlabel $\delta_2$ [tl] at 121 1 \pinlabel $\gamma^-$ [tl] at 217 1 \pinlabel $\zeta_1$ [tl] at 312 2 \pinlabel $\zeta_2$ [tl] at 415 3 \pinlabel $\Sigma_0$ [br] at 75 200 \pinlabel ${\color{red}\Sigma_1}$ [bl] at 380 200 \endlabellist \includegraphics[height=5cm]{cultist1} \caption{A holomorphic Cthulhu.} \label{fig:cultist1} \end{figure} \subsubsection{Floer to LCH moduli space.} \label{sec:floer-lch-moduli} Let $\gamma^+\in \mathcal{R}(\Lambda^+_1,\Lambda^+_0)$ be a mixed chord, $p\in \Sigma_0\cap\Sigma_1$ an intersection point, $\boldsymbol{\delta}^-=\delta^-_1\ldots\delta^-_{i-1}$ Reeb chords of $\Lambda^-_0$ and $\boldsymbol{\zeta}^-=\zeta^-_{i+1} \ldots \zeta^-_d$ Reeb chords of $\Lambda_1^-$. 
Curves in the moduli space $$\mathcal{M}_{\Sigma_0,\Sigma_1}(\gamma^+; \boldsymbol{\delta}^-, p, \boldsymbol{\zeta}^-; J_{(d, \underline{L})})$$ will be called \textit{$J$-holomorphic cultists}. See Figure \ref{fig:cultist2}. \begin{figure}[!h] \centering \vspace{0.5cm} \labellist \pinlabel $\gamma^+$ [bl] at 199 445 \pinlabel $\delta_1$ [tl] at 14 1 \pinlabel $\delta_2$ [tl] at 121 1 \pinlabel $q$ [tl] at 205 92 \pinlabel $\zeta_1$ [tl] at 312 2 \pinlabel $\zeta_2$ [tl] at 418 3 \pinlabel $\Sigma_0$ [br] at 90 215 \pinlabel ${\color{red}\Sigma_1}$ [bl] at 380 215 \endlabellist \includegraphics[height=5cm]{cultist2} \caption{Curve contributing to the second cultist map.} \label{fig:cultist2} \end{figure} \subsubsection{Bananas moduli space.} \label{sec:banana-moduli-space} Let $\gamma_{1,0}\in \mathcal{R}(\Lambda^+_1,\Lambda^+_0)$ and $\gamma_{0,1} \in \mathcal{R}(\Lambda^+_0,\Lambda^+_1)$ be mixed Reeb chords (and note that they go in opposite directions), let $\boldsymbol{\delta}^- = \delta^-_1 \ldots \delta^-_{i-1}$ be Reeb chords of $\Lambda^-_0$ and $\boldsymbol{\zeta}^- =\zeta^-_{i+1} \ldots \zeta^-_d$ Reeb chords of $\Lambda_1^-$. Curves in the moduli space $$\mathcal{M}_{\Sigma_0,\Sigma_1}(\gamma_{1,0}; \boldsymbol{\delta}^-, \gamma_{0,1}, \boldsymbol{\zeta}^-; J_{(d, \underline{L})})$$ will be called \textit{$J$-holomorphic bananas}. See Figure \ref{fig:banana}. 
\begin{figure}[!h] \centering \vspace{0.5cm} \labellist \pinlabel $\gamma_{1,0}$ [bl] at 5 445 \pinlabel $\delta_1$ [tl] at 14 1 \pinlabel $\delta_2$ [tl] at 129 1 \pinlabel $\gamma_{0,1}$ [bl] at 388 435 \pinlabel $\zeta_1$ [tl] at 304 2 \pinlabel $\zeta_2$ [tl] at 413 3 \pinlabel $\Sigma_0$ [br] at 73 185 \pinlabel ${\color{red}\Sigma_1}$ [bl] at 370 215 \endlabellist \includegraphics[height=5cm]{banana} \caption{A $J$-holomorphic banana.} \label{fig:banana} \end{figure} When $\Sigma_i=\mathbb{R}\times\Lambda_i$, we again use $\widetilde{\mathcal{M}}(\gamma_{1,0};\boldsymbol{\delta}^-,\gamma_{0,1},\boldsymbol{\zeta}^-)$ to denote the quotient of the moduli space by the natural $\mathbb{R}$-action. Note that, by our definition of an exact Lagrangian cobordism, there are no non-constant pseudoholomorphic curves with boundary on $\Sigma_0 \cup \Sigma_1$ with all punctures having negative asymptotics to Reeb chords; see Section \ref{sec:action-energy} for more details. \subsection{Energy and compactness} \label{sec:energy-compactness} In this section, we recall the notion of the Hofer energy for holomorphic curves in the symplectisation of a contact manifold, as introduced in \cite{Hofer93} and \cite{Bourgeois_&_Compactness}. We also give estimates for this energy in terms of the asymptotics of the curves appearing in the moduli spaces of Section \ref{sec:Cthulu-moduli-spaces}. The goal is to formulate the compactness theorem for pseudoholomorphic curves in the present setting. \subsubsection{The Hofer energy} \label{sec:energy-1} Assume that we are given two exact Lagrangian cobordisms $\Sigma_0$ and $\Sigma_1$ in the symplectisation $(X=\ensuremath{\mathdj{R}} \times Y,d(e^t\alpha))$ of $(Y,\alpha)$. Let $f_i \colon \Sigma_i \to \ensuremath{\mathdj{R}}$ be primitives of $e^t\alpha\vert_{\Sigma_i}$ which are constant at the cylindrical ends.
(By a slight abuse of notation, we will denote by $\cdot |_{\Sigma_i}$ the pull-back of $e^t\alpha$ under the inclusion of $\Sigma_i$.) Without loss of generality we will assume that both constants are $0$ on the negative ends, while the constants on the positive end of $\Sigma_i$ will be denoted by $\mathfrak{c}_i$, $i=0,1$. Here we rely on Definition \ref{defn: exact lagrangian cobordism} of an exact cobordism. Take any $T > 0$ and $\epsilon$ with $T>\epsilon > 0$ for which \begin{eqnarray*} & & \Sigma_i \cap ((- \infty, -T + \epsilon) \times Y) = (- \infty, -T + \epsilon) \times \Lambda_i^-\\ & & \Sigma_i \cap (( T - \epsilon, + \infty) \times Y) = ( T - \epsilon, + \infty) \times \Lambda_i^+, \end{eqnarray*} for $i=0,1$. Now, we let $\phi:\mathbb{R}\rightarrow [e^{-T},e^T]$ be a smooth function satisfying: \begin{itemize} \item $\phi(\pm t)=e^{\pm T}$ for $t>T$; \item $\phi(t)=e^t$ for $t\in [-T+\epsilon,T-\epsilon]$; \item $\phi'(t)\geq 0$. \end{itemize} In the case when both $\Sigma_0$ and $\Sigma_1$ are trivial cylinders over Legendrian submanifolds, we will also allow the case $T=\epsilon=0$, and $\phi \equiv 1$. By construction we have $\phi\alpha\vert_{\Sigma_i} = e^t \alpha|_{\Sigma_i}$, because $\alpha|_{\Sigma_i}=0$ holds on the subset where $\phi$ is not equal to $e^t$. A primitive of $e^t \alpha|_{\Sigma_i}$ (which exists by exactness) is hence also a primitive of $\phi\alpha\vert_{\Sigma_i}$. Let $\mathcal{C}^-$ be the set of compactly supported smooth functions \[w_- \colon (-\infty,-T+\epsilon) \to [0,+\infty)\] satisfying $\displaystyle\int_{-\infty}^{-T+\epsilon} w_-(s)ds=e^{-T}$, and let $\mathcal{C}^+$ be the set of compactly supported smooth functions \[w_+ \colon (T-\epsilon,+\infty) \to [0,+\infty)\] satisfying $\displaystyle\int_{T-\epsilon}^{+\infty}w_+(s)ds=e^T$. We are now ready to introduce different versions of energies for pseudoholomorphic curves in the symplectisation, all of which are standard.
We refer to \cite{Bourgeois_&_Compactness} for the absolute case, and \cite{RationalSFT} as well as \cite{Abbas_Chord_Asy} for the relative case. \begin{defn} Let $S$ be a punctured disc and $u=(a,v)\colon S \rightarrow \mathbb{R}\times Y$ be a smooth map. \begin{itemize} \item The \emph{$d(\phi\alpha)$-energy} of $u$ is given by \begin{align*} E_{d(\phi\alpha)}(u)=\displaystyle\int_Su^*(d(\phi\alpha)). \end{align*} \item The \emph{$\alpha$-energy of $u$} is given by \begin{align*} E_\alpha(u)=\sup\limits_{(w_-,w_+)\in \mathcal{C}^-\times\mathcal{C}^+}\left(\displaystyle\int_S(w_-\circ a) da\wedge v^*\alpha + \displaystyle\int_S(w_+\circ a) da\wedge v^*\alpha\right). \end{align*} \item The \emph{total energy}, or the \emph{Hofer energy}, of $u$ is given by \begin{align*} E(u)=E_{\alpha}(u)+E_{d(\phi\alpha)}(u). \end{align*} \end{itemize} In the case when $u$ is a proper map for which $E(u)<\infty$, we say that $u$ is a \textit{finite energy pseudoholomorphic disc}. \end{defn} Non-constant holomorphic curves have positive total energy, as stated in the following simple lemma. We leave the proof to the reader. \begin{Lem} If $u$ is a non-constant punctured pseudoholomorphic disc with boundary on a pair of exact Lagrangian cobordisms, and if the almost complex structure is cylindrical outside of $[-T+\epsilon,T-\epsilon] \times Y$, then $E(u) >0$, $E_\alpha(u) \ge 0$, and $E_{d(\phi\alpha)}(u) \ge 0$. Moreover, $E_{d(\phi\alpha)}(u) =0$ implies that $u$ is contained inside a trivial cylinder over a Reeb orbit. \end{Lem} The techniques in \cite{Hofer_&_Symplectisation1} can be applied to the current setting with non-empty boundary, similarly to what was done in \cite{Abbas_Chord_Asy}, in order to show the following. \begin{Prop} Assume that we endow $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ with an admissible complex structure in the sense of Section \ref{sec:admissible}.
A proper punctured pseudoholomorphic disc inside the symplectisation having boundary on a Lagrangian cobordism $\Sigma_0 \cup \Sigma_1$ is of finite energy if and only if all of its punctures are contained in the boundary and the disc converges exponentially, at each of its boundary punctures, to either a trivial strip over a Reeb chord on $\Lambda^\pm_0 \cup \Lambda^\pm_1$ or an intersection point of $\Sigma_0 \cap \Sigma_1$. \end{Prop} \subsubsection{Action and energy} \label{sec:action-energy} Consider a pair of exact Lagrangian cobordisms $\Sigma_i$, $i=0,1$, from $\Lambda^-_i$ to $\Lambda^+_i$. Here we define the action of intersection points and Reeb chords and relate it to the different energies of holomorphic discs having boundary on this pair of cobordisms. For a Reeb chord $c$ we define \[ \ell(c) := \int_c \alpha.\] Recall that we are given a choice of $T \ge 0$ in the construction of the $E_{d(\phi\alpha)}$-energy above, where $T=0$ is possible only under the assumption that both cobordisms are trivial cylinders. The action of a mixed Reeb chord $\gamma$ on $\Lambda^\pm_1 \cup \Lambda^\pm_0$ is defined by \begin{align*} \mathfrak{a}(\gamma) := e^T\ell(\gamma) + (\mathfrak{c}_i- \mathfrak{c}_j) & \quad \text{if $\gamma$ is a chord from $\Lambda_i^+$ to $\Lambda_j^+$, and} \\ \mathfrak{a}(\gamma) := e^{-T}\ell(\gamma) \phantom{+ (\mathfrak{c}_1- \mathfrak{c}_0)} & \quad \text{if $\gamma$ is a chord on $\Lambda_0^- \cup \Lambda_1^-$.} \end{align*} In particular, we observe that the action of a pure Reeb chord $\gamma$ on $\Lambda_i^\pm$ is defined by \[\mathfrak{a}(\gamma):=e^{\pm T}\ell(\gamma). \] The action of an intersection point $p \in \Sigma_0 \cap \Sigma_1$ is defined by \[ \mathfrak{a}(p):=f_1(p)-f_0(p).\] Applications of Stokes' theorem give the following proposition (see \cite{LegendrianAmbient} for details), where we heavily rely on the fact that each cobordism $\Sigma_i$, $i=0,1$, is exact.
\begin{Prop} \label{prop:energy} Let $\gamma^\pm \in {\mathcal R}(\Lambda_1^\pm, \Lambda_0^\pm)$ be mixed Reeb chords, $\boldsymbol{\delta}^-= \delta_1^- \ldots \delta_{i-1}^-$ and $\boldsymbol{\zeta}^- = \zeta^-_{i+1} \ldots \zeta^-_d$ words of pure Reeb chords on $\Lambda_0^-$ and $\Lambda_1^-$, respectively, and $p, q \in \Sigma_0 \cap \Sigma_1$ intersection points. We denote $\mathfrak{a}(\boldsymbol{\delta}^-) := \sum \limits_{k=1}^{i-1} \mathfrak{a}(\delta_k^-)$ and $\mathfrak{a}(\boldsymbol{\zeta}^-) := \sum \limits_{k=i+1}^{d} \mathfrak{a}(\zeta_k^-)$. \begin{itemize} \item If $u\in\mathcal{M}_{\underline{L}}(\gamma^+;\boldsymbol{\delta}^-,\gamma^-, \boldsymbol{\zeta}^-)$, then \begin{align} \label{eq:7} E_{d(\phi\alpha)}(u)&=\mathfrak{a}(\gamma^+) - \mathfrak{a}(\gamma^-)-\big(\mathfrak{a}(\boldsymbol{\delta}^-) +\mathfrak{a}(\boldsymbol{\zeta}^-) \big).\\ E_\alpha(u)&\leq 2\mathfrak{a}(\gamma^+)\nonumber \end{align} \item If $u\in\mathcal{M}_{\underline{L}}(\gamma^+;\boldsymbol{\delta}^-,p,\boldsymbol{\zeta}^-)$, then \begin{align} \label{eq:8} E_{d(\phi\alpha)}(u)&=\mathfrak{a}(\gamma^+) - \mathfrak{a}(p) - \big(\mathfrak{a}(\boldsymbol{\delta}^-) +\mathfrak{a}(\boldsymbol{\zeta}^-)\big).\\ E_\alpha(u)&\leq 2\mathfrak{a}(\gamma^+)\nonumber \end{align} \item If $u\in\mathcal{M}_{\underline{L}}(p;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-)$, then \begin{align} \label{eq:9} E_{d(\phi\alpha)}(u)&= \mathfrak{a}(p) -\mathfrak{a}(\gamma^-)-\big(\mathfrak{a}(\boldsymbol{\delta}^-) + \mathfrak{a}(\boldsymbol{\zeta}^-)\big).\\ E_\alpha(u)&\leq \mathfrak{a}(p)\nonumber \end{align} \item If $u\in\mathcal{M}_{\underline{L}}(p;\boldsymbol{\delta}^-,q,\boldsymbol{\zeta}^-)$, then \begin{align} \label{eq:10} E_{d(\phi\alpha)}(u)&= \mathfrak{a}(p) - \mathfrak{a}(q) -\big(\mathfrak{a}(\boldsymbol{\delta}^-)+\mathfrak{a}(\boldsymbol{\zeta}^-)\big).\\ E_\alpha(u)&\leq \mathfrak{a}(p)\nonumber \end{align} \item If
$u\in\mathcal{M}_{\underline{L}}(\gamma_{1,0};\boldsymbol{\delta}^-,\gamma_{0,1},\boldsymbol{\zeta}^-)$, then \begin{align} \label{eq:12} E_{d(\phi\alpha)}(u)&=\big(\mathfrak{a}(\gamma_{1,0})+\mathfrak{a}(\gamma_{0,1})\big)-\big(\mathfrak{a}(\boldsymbol{\delta}^-)+\mathfrak{a}(\boldsymbol{\zeta}^-)\big).\\ E_\alpha(u)&\leq 2\mathfrak{a}(\gamma_{1,0})+2\mathfrak{a}(\gamma_{0,1}) \nonumber \end{align} \end{itemize} For the above inequalities involving the $\alpha$-energy we must assume that the almost complex structure is cylindrical outside of $[-T+\epsilon,T-\epsilon]\times Y$. \end{Prop} \subsubsection{Holomorphic buildings with boundary on Lagrangian cobordisms.} \label{sec:nodal-curves-with} The moduli space of punctured pseudoholomorphic discs with boundary on a Lagrangian cobordism can be compactified by the space of pseudoholomorphic buildings, which we now proceed to define. \begin{defn} Let $S$ be a (stable or semi-stable) nodal disc with normalisation $(\widehat{S}, \mathfrak{m}\cup \mathfrak{n} ,\iota)$ as described in Section \ref{sec:semi-stable-nodal}. By a \textit{level function} for $S$ we mean a locally constant map $$f:\widehat{S}\rightarrow\{-k^-,-k^-+1,\ldots,k^+-1,k^+\}$$ for some $k^-,k^+\in\mathbb{Z}_{\ge 0}$ such that for any node $n\in \mathfrak{n}$ we have $|f(n)- f(\iota(n))| \le 1$, while $f(n)=f(\iota(n))$ is allowed only if $f(n)=0$.
\end{defn} If $S$ is a nodal disc with level function $f$, a Lagrangian label $\underline{L}$ on $(S, f)$ will be a collection of Lagrangian labels $\underline{L}_i$ on each connected component $S_i$ of the normalisation such that: \begin{itemize} \item $\underline{L}_i$ takes values in $\{ \ensuremath{\mathdj{R}} \times \Lambda_0^+, \ensuremath{\mathdj{R}} \times \Lambda_1^+\}$ if $f(S_i)>0$, in $\{ \Sigma_0, \Sigma_1 \}$ if $f(S_i)=0$ and in $\{\ensuremath{\mathdj{R}} \times \Lambda_0^-, \ensuremath{\mathdj{R}} \times \Lambda_1^- \}$ if $f(S_i)<0$; and \item At a node $n\in \mathfrak{n}$ the asymptotic corresponding to $n$ agrees with the asymptotic corresponding to $\iota(n)$. \end{itemize} In particular, performing boundary connected sums at all nodes of $S$, one can canonically assign a Lagrangian label on the resulting nodal disc taking values in $\{ \Sigma_0,\Sigma_1\}$. A Lagrangian label on $(S, f)$ is {\em simple} if, after performing boundary connected sums at all nodes, the resulting Lagrangian label is simple. Let $\Sigma_0,\Sigma_1$ be two admissible exact Lagrangian cobordisms, and $\{J_t\}$ a path in $\mathcal{J}^{\operatorname{adm}}_{J^+,J^-}$ inducing domain dependent almost complex structures $J_{(d, \underline{L})}$ for any $d \ge 0$ and any Lagrangian label $\underline{L}$ for elements in ${\mathcal R}^{d+1}$. \begin{defn} Let $S$ be a nodal disc with Lagrangian label $\underline{L}$ and level function $f$ mapping onto $\{-k^-,-k^-+1,\ldots,k^+-1,k^+\}$ for some integers $k^\pm \ge 0$. A holomorphic building of height $k^-|1|k^+$ with domain $S$ is given by a family $\{u_i\ |\ i=1,\ldots,k\}$ of punctured pseudo-holomorphic discs, where the connected component $S_i$ of the normalisation of $S$ corresponds to $u_i$, such that, moreover, the following conditions are satisfied: \begin{itemize} \item All discs satisfying $f(S_i)=l$ for some fixed $l$ are said to live in level $l \in \ensuremath{\mathdj{Z}}$.
We require that each level different from zero contains at least one component which is not a trivial strip over a Reeb chord. \item If $f(S_i)>0$, then $u_i\in \mathcal{M}^{r_i}_{\ensuremath{\mathdj{R}}\times\Lambda^+_0,\ensuremath{\mathdj{R}} \times \Lambda^+_1}(y_0;y_1,\ldots,y_{d_i}; J^+)$. These discs are said to live in a \emph{top level}. \item If $f(S_i)<0$, then $u_i\in \mathcal{M}^{r_i}_{\ensuremath{\mathdj{R}}\times\Lambda^-_0,\ensuremath{\mathdj{R}} \times \Lambda^-_1}(y_0;y_1,\ldots,y_{d_i}; J^-)$. These discs are said to live in a \emph{bottom level}. \item If $f(S_i)=0$, then $u_i\in \mathcal{M}^{r_i}_{\Sigma_0,\Sigma_1}(y_0;y_1,\ldots,y_{d_i}; J_{(d_i, \underline{L}_i)})$. These discs are said to live in the \emph{middle level}. \item For any node $n$ whose asymptotic corresponds to a Reeb chord, we require that $|f(n)- f(\iota(n))| = 1$; \item For any node $n$ such that $0\leq f(n)< f(\iota(n))$, let $y_n$ and $y_{\iota(n)}$ be the corresponding asymptotics. Then $y_n=y_{\iota(n)}$ is a Reeb chord on $\mathbb{R}\times(\Lambda^+_0\sqcup\Lambda^+_1)$, such that $y_n$ and $y_{\iota(n)}$ are a positive and a negative asymptotic, respectively. \item For any node $n$ such that $f(n)< f(\iota(n))\leq 0$ let $y_n$ and $y_{\iota(n)}$ be the corresponding asymptotics. Then $y_n=y_{\iota(n)}$ is a Reeb chord on $\mathbb{R}\times(\Lambda^-_0\sqcup\Lambda^-_1)$, such that $y_n$ and $y_{\iota(n)}$ are a positive and a negative asymptotic, respectively. \item For any node $n$ such that $f(n)=f(\iota(n))$ (and thus is equal to $0$) the corresponding asymptotics $y_n$ and $y_{\iota(n)}$ correspond to a given intersection point in $\Sigma_0 \cap \Sigma_1$. \item The positive (negative) punctures asymptotic to Reeb chords which are not nodes correspond precisely to the positive (negative) punctures of the discs in level $k^+$ (level $-k^-$).
\end{itemize} Furthermore, we identify two buildings with the same nodal domain whenever the images of the discs in level $l=0$ coincide, while the images in level $l \neq 0$ differ by a translation of the symplectisation coordinate by a number that only depends on the level $l$. \end{defn} By \[\mathcal{M}^{k^-|1|k^+}_{\Sigma_0,\Sigma_1}(x_0;x_1,\hdots,x_d)\] we denote the buildings having a single incoming puncture $x_0$ which is not a node (necessarily contained in level $k^+$), and whose outgoing punctures which are not nodes correspond to $x_1,\hdots,x_d$ (respecting the order of the punctures induced by the boundary orientation after a boundary connected sum). Observe that, by definition, we have \[ \mathcal{M}_{\Sigma_0,\Sigma_1}(x_0;x_1,\hdots,x_d) \subset \mathcal{M}^{0|1|0}_{\Sigma_0,\Sigma_1}(x_0;x_1,\hdots,x_d).\] The former subspace consists of the honest one-component pseudoholomorphic curves, which will sometimes be referred to as \emph{unbroken solutions}, while the pseudoholomorphic buildings that are not of this form will be referred to as \emph{broken solutions}. The definition of a building is analogous in the case when the boundary condition is a \emph{single} Lagrangian cobordism $\Sigma$, and we use \[ \mathcal{M}^{k^-|1|k^+}_{\Sigma}(x_0;x_1,\hdots,x_d) \] to denote the space of such buildings. \begin{Rem} Observe that the dichotomy between punctured pseudo-holomorphic half-planes and strips (see Section \ref{sec:strips}) also applies to the above pseudo-holomorphic buildings. We will call a pseudoholomorphic building a \emph{broken punctured pseudoholomorphic half-plane (resp.~strip)} in the case when taking a boundary connected sum at each node produces a topological punctured half-plane (resp.~strip).
\end{Rem} The space of buildings has a topology as defined in \cite{Bourgeois_&_Compactness} and \cite{Abbas_book}, where it is also shown that a sequence of buildings with fixed outgoing punctures has a subsequence converging to another building of the same type (but with possibly additional levels). Observe that here we also rely on the compactness results for Lagrangian intersection Floer homology \cite{FloerHFlag}. We do not reproduce the definition of the topology used in order to formulate this convergence, and we refer the reader to the relevant sections of the papers above. The following result is the main compactness result that we will need, obtained by combining the above-mentioned compactness results in the SFT and Floer settings. \begin{Thm}\label{thm:compactness} For an admissible almost complex structure, the disjoint unions of all buildings \begin{gather*} \bigsqcup_{k^+,k^- \ge 0}\mathcal{M}^{k^-|1|k^+}_{\Sigma_0,\Sigma_1}(x_0;x_1,\hdots,x_d)\\ \bigsqcup_{k^+,k^- \ge 0}\mathcal{M}^{k^-|1|k^+}_{\Sigma}(x_0;x_1,\hdots,x_d) \end{gather*} are compact and, in particular, these unions are finite. \end{Thm} The crucial property needed in order to apply the compactness theorems is that the above buildings consist of components having an a priori upper bound on the sum of their total energies. Indeed, this upper bound can be expressed solely in terms of the action $\mathfrak{a}(x_0)$ of the asymptotic $x_0$. This can be readily seen to follow from Proposition \ref{prop:energy}, using the fact that there is only a finite number of Reeb chords and intersection points below a fixed action. (In fact, in the setting considered here it is even the case that the totality of Reeb chords and intersection points for a given pair $(\Sigma_0,\Sigma_1)$ is finite.)
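To spell out this a priori bound (a sketch only, suppressing the bookkeeping of the conformal factors $e^{\pm T}$ in the various levels): summing the $E_{d(\phi\alpha)}$-identities of Proposition \ref{prop:energy} over all components $u_i$ of a building, the actions of the asymptotics at the nodes cancel in pairs, since each such Reeb chord or intersection point occurs once as a positive and once as a negative asymptotic. The sum telescopes to
\[ \sum_i E_{d(\phi\alpha)}(u_i) \;=\; \mathfrak{a}(x_0) - \sum_{k=1}^{d} \mathfrak{a}(x_k) \;\leq\; \mathfrak{a}(x_0), \]
and since each summand on the left-hand side is non-negative, every component separately satisfies $E_{d(\phi\alpha)}(u_i) \leq \mathfrak{a}(x_0)$.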
\section{Fredholm theory and transversality} \label{sec:fredholm-properties} \subsection{Linear Cauchy-Riemann operators} \subsubsection{The Grassmannian of Lagrangian planes} \label{sec:grassm-lagr-plan} Let $(V,\omega)$ be a symplectic vector space of dimension $2n+2$. We denote by $\operatorname{Gr}(V,\omega)$ the space of Lagrangian subspaces of $(V,\omega)$. The choice of a symplectic basis of $V$ leads to an identification $\operatorname{Gr}(V, \omega) \cong U(n+1) / O(n+1)$. Given a path $\lambda \colon [0, 1] \to \operatorname{Gr}(V, \omega)$, for $|s-r|$ small enough there is a family of linear isomorphisms $\phi_{r,s} \colon \lambda(s) \to \lambda(r)$ such that $\phi_{s,s} = \operatorname{Id}$. If $\overline{\lambda} \in \operatorname{Gr}(V, \omega)$, the {\em crossing form} $q_{\lambda, \overline{\lambda}, s}$ is the quadratic form on $\lambda(s) \cap \overline{\lambda}$ defined by \begin{equation}\label{crossing form} q_{\lambda, \overline{\lambda}, s}(v) := - \left. \frac{d}{dr}\right |_{r=s}\omega(\phi_{r,s}(v), v). \end{equation} \begin{Ex}\label{ex: computation of q} Let $\eta \in \operatorname{Gr}(V, \omega)$ be a Lagrangian plane and $f \colon [0,1] \to \ensuremath{\mathdj{R}}$ a smooth function. We fix a complex structure on $V$ compatible with $\omega$ and define the path $\lambda \colon [0,1] \to \operatorname{Gr}(V, \omega)$ as $\lambda(t):= e^{if(t)}\eta$. Then, at any point $s \in [0,1]$, the crossing form $q_{\lambda, \lambda(s), s}$ is \[q_{\lambda, \lambda(s), s}(v) = f'(s) \| v \|^2.\] \end{Ex} We denote by $\mathcal{P}^-(\operatorname{Gr}(V,\omega))$ the set of paths $\lambda \colon [0,1] \to \operatorname{Gr}(V, \omega)$ such that $\lambda(0) \pitchfork\lambda(1)$ and the crossing form $q_{\lambda, \lambda(1), 1}$ is negative definite. 
For a generic path $\lambda \in \mathcal{P}^-(\operatorname{Gr}(V,\omega))$ the space $\lambda(s) \cap \lambda(1)$ has positive dimension only at a finite set of $s$, and the crossing forms at these intersections are non-degenerate. (We can also assume, generically, that $\lambda(s) \cap \lambda(1)$ has dimension at most one for $s \ne 1$.) If $\lambda$ is a generic path, we define \begin{equation}\label{index from crossing form} I(\lambda):=\sum_{s<1} \operatorname{ind}(q_{\lambda,\lambda(1),s}), \end{equation} where $\operatorname{ind}(q)$ denotes the number of negative eigenvalues of a non-degenerate quadratic form $q$. Let $\det \colon U(n+1) \to S^1$ denote the complex determinant. The map $$A \mapsto \det(A)^2$$ descends to a well-defined map $\alpha \colon \operatorname{Gr}(V, \omega) \to S^1$ which induces an isomorphism $\alpha_* \colon \pi_1(\operatorname{Gr}(V, \omega)) \to \pi_1(S^1)$. We regard $\alpha_*$ as a cohomology class $\mu \in H^1(\operatorname{Gr}(V, \omega); \ensuremath{\mathdj{Z}})$ called the {\em Maslov class}. We denote by $\operatorname{Gr}^{\#}(V,\omega)\rightarrow \operatorname{Gr}(V,\omega)$ the universal covering of $\operatorname{Gr}(V,\omega)$. The elements of $\operatorname{Gr}^{\#}(V,\omega)$ are called {\em graded Lagrangian planes}. The map $\alpha \colon \operatorname{Gr}(V,\omega) \to S^1$ lifts, in a non-unique way, to a map $\alpha^{\#} \colon \operatorname{Gr}^{\#}(V,\omega) \to \ensuremath{\mathdj{R}}$ such that $\alpha = e^{2 \pi i \alpha^\#}$. Different choices for $\alpha^\#$ differ by an integer constant. We can think of a graded Lagrangian plane $\lambda^\#$ as a pair $(\lambda, \alpha^\#(\lambda^\#))$. Similarly, we denote by $\mathcal{P}^-(\operatorname{Gr}^\#(V,\omega))$ the set of paths $\lambda^\# \colon [0,1] \to \operatorname{Gr}^\#(V,\omega)$ that project to paths $\lambda \in \mathcal{P}^-(\operatorname{Gr}(V,\omega))$.
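To illustrate these definitions, here is a minimal worked example (our own, not taken from the references) in the lowest-dimensional case $V=\mathbb{C}$ with its standard symplectic form, where Lagrangian planes are real lines. Let $\lambda(t)=e^{if(t)}\mathbb{R}$ with $f(t)=-\frac{3\pi}{2}t$, so that $\lambda(0)=\mathbb{R}$ and $\lambda(1)=i\mathbb{R}$ are transverse. By Example \ref{ex: computation of q}, every crossing form is given by $q(v)=f'(s)\|v\|^2=-\frac{3\pi}{2}\|v\|^2$ and is thus negative definite; in particular the crossing form at $s=1$ is negative definite, so $\lambda\in\mathcal{P}^-(\operatorname{Gr}(V,\omega))$. The only crossing with $\lambda(1)=i\mathbb{R}$ at $s<1$ occurs at $s=\frac{1}{3}$, where $f(s)=-\frac{\pi}{2}$, and it contributes a single negative eigenvalue, whence
\[ I(\lambda)=\operatorname{ind}\big(q_{\lambda,\lambda(1),1/3}\big)=1. \]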
Given $\lambda_0^\#, \lambda_1^\# \in \operatorname{Gr}^\#(V, \omega)$ whose projections $\lambda_0$ and $\lambda_1$ intersect transversely, we choose a path $\lambda^\# \in \mathcal{P}^-(\operatorname{Gr}^\#(V,\omega))$ such that $\lambda^\#(0) = \lambda^\#_0$ and $\lambda^\#(1) = \lambda^\#_1$ and define the {\em absolute index} $$i(\lambda^\#_0, \lambda^\#_1):= I(\lambda).$$ Since $\lambda_0^\#$ and $\lambda_1^\#$ determine the homotopy class of $\lambda$ and $I$ is invariant under homotopies relative to the boundary, the absolute index is well-defined. Now we want to extend the absolute index to a map $$i \colon \operatorname{Gr}^\#(V,\omega) \times \operatorname{Gr}^\#(V,\omega) \to \ensuremath{\mathdj{Z}}.$$ Let $\lambda^\#_0$ and $\lambda^\#_1$ be graded Lagrangian planes. If $\lambda_0 \cap \lambda_1 = \boldsymbol{0}$, then $i(\lambda^\#_0, \lambda^\#_1)$ is defined as above. Otherwise, let $(\lambda_0 \cap \lambda_1)^\omega$ be the symplectic orthogonal to $\lambda_0 \cap \lambda_1$ and define $\widetilde{V} := (\lambda_0 \cap \lambda_1)^\omega / (\lambda_0 \cap \lambda_1)$. The symplectic form $\omega$ induces, by linear symplectic reduction, a symplectic form $\widetilde{\omega}$ on $\widetilde{V}$. If $\lambda \subset (\lambda_0 \cap \lambda_1)^\omega$ is a Lagrangian subspace of $(V, \omega)$, then $\widetilde{\lambda} = \lambda / (\lambda_0 \cap \lambda_1)$ is a Lagrangian subspace of $(\widetilde{V}, \widetilde{\omega})$. This construction gives an embedding $\operatorname{Gr}(\widetilde{V}, \widetilde{\omega}) \hookrightarrow \operatorname{Gr}(V, \omega)$ which lifts to an embedding $\operatorname{Gr}^\#(\widetilde{V}, \widetilde{\omega}) \hookrightarrow \operatorname{Gr}^\#(V, \omega)$. Let $\widetilde{\lambda}_0^\#$ and $\widetilde{\lambda}_1^\#$ be the preimages of $\lambda_0^\#$ and $\lambda_1^\#$ in $\operatorname{Gr}^\#(\widetilde{V}, \widetilde{\omega})$.
Since $\widetilde{\lambda}_0$ and $\widetilde{\lambda}_1$ intersect transversely in $\widetilde{V}$, we define \begin{equation}\label{caldaia 1} i (\lambda_0^\#, \lambda_1^\#) := i (\widetilde{\lambda}_0^\#, \widetilde{\lambda}_1^\#). \end{equation} Next we study how the index changes under small perturbations of the Lagrangian planes. From now on we fix a complex structure on $V$ which is compatible with $\omega$. For a graded Lagrangian plane $\lambda^\#$ and a real number $\delta \ne 0$, we define $e^{i \delta}\lambda^\#$ as the end point of the path $[0,1] \to \operatorname{Gr}^\#(V, \omega)$ which starts at $\lambda^\#$ and lifts the path $t \mapsto e^{i \delta t} \lambda$. In particular $\alpha^\#(e^{i \delta}\lambda^\#)= \alpha^\#(\lambda^\#) + \frac{(n+1) \delta}{\pi}$. Given two graded Lagrangian planes $\lambda_0^\#$ and $\lambda_1^\#$, we say that $\delta \in \ensuremath{\mathdj{R}}^*$ is ``small'' if $\lambda_0 \pitchfork e^{i \delta t} \lambda_1$ for all $t \in (0,1]$. \begin{Lem}\label{grading and perturbation} Let $\lambda_0^\#$ and $\lambda_1^\#$ be graded Lagrangian planes and $\delta \ne 0$ a small real number. \begin{enumerate} \item If $\delta >0$, then $i(\lambda_0^\#, e^{i \delta} \lambda_1^\#) = i(\lambda_0^\#, \lambda_1^\#) + \dim (\lambda_0 \cap \lambda_1)$, and \item If $\delta <0$, then $i(\lambda_0^\#, e^{i \delta} \lambda_1^\#) = i(\lambda_0^\#, \lambda_1^\#)$. \end{enumerate} \end{Lem} \begin{proof} There is an equivalent definition of the absolute index which is less intrinsic, but more practical for this computation. Let $\eta = \lambda_0 \cap \lambda_1$, and choose a symplectic subspace $(V', \omega') \subset (V, \omega)$ containing $\eta$ as a Lagrangian subspace. Let $(V'', \omega'')$ be the symplectic orthogonal of $(V', \omega')$; then the subspaces $\zeta_i = \lambda_i \cap V''$, for $i=0,1$, are Lagrangian in $V''$.
Let $\zeta \in {\mathcal P}^-(\operatorname{Gr}(V'', \omega''))$ be a path from $\zeta_0$ to $\zeta_1$ such that $\lambda = \eta \oplus \zeta$ lifts to a path in $\operatorname{Gr}^\#(V, \omega)$ from $\lambda_0^\#$ to $\lambda_1^\#$. Here $\eta$ is regarded as a constant path. The path $\zeta$ exists because $(V'', \omega'')$ is symplectically isomorphic to $(\widetilde{V}, \widetilde{\omega})$. Then we have \begin{equation}\label{caldaia 2} i (\lambda_0^\#, \lambda_1^\#)=I(\zeta). \end{equation} Choose a complex structure on $V$ which is compatible with $\omega$ and preserves the direct sum decomposition $V = V' \oplus V''$, and define $\zeta^\delta(t) := e^{i \delta t}\zeta(t)$ and $\eta^\delta(t):= e^{i \delta t} \eta$. Since $\delta$ is small, $\zeta^\delta \in {\mathcal P}^-(\operatorname{Gr}(V'', \omega''))$ and $I(\zeta^\delta)=I(\zeta)= i (\lambda_0^\#, \lambda_1^\#)$. If $\delta <0$, then by Example~\ref{ex: computation of q} the crossing form $q_{\eta^\delta, \eta^\delta(1), 1}$ is negative definite, so $\eta^\delta \in {\mathcal P}^-(\operatorname{Gr}(V', \omega'))$ and $I(\eta^\delta)=0$. Then $\lambda^\delta = \eta^\delta \oplus \zeta^\delta \in {\mathcal P}^-(\operatorname{Gr}(V, \omega))$, and therefore $i(\lambda_0^\#, e^{i \delta} \lambda_1^\#) = I(\lambda^\delta)= i(\lambda_0^\#, \lambda_1^\#)$. If $\delta > 0$, then by Example~\ref{ex: computation of q} the crossing form $q_{\eta^\delta, \eta^\delta(1), 1}$ is positive definite, so we have to modify $\eta^\delta$ to obtain a path in ${\mathcal P}^-(\operatorname{Gr}(V', \omega'))$. One possible modification consists in replacing the path $\eta^\delta$ with the path $\widetilde{\eta}^\delta$ defined as $\widetilde{\eta}^\delta(t) := e^{i \delta (-2t^2 + 3t)}\eta$. If $\delta$ is small enough, $\widetilde{\eta}^\delta(t) = e^{i \delta}\eta$ only for $t = 1$ and $t = \frac 12$.
Moreover, by Example~\ref{ex: computation of q}, the crossing form $q_{\widetilde{\eta}^\delta, \widetilde{\eta}^\delta(1), 1/2}$ is positive definite and the crossing form $q_{\widetilde{\eta}^\delta, \widetilde{\eta}^\delta(1), 1}$ is negative definite. Then $\widetilde{\eta}^\delta \in {\mathcal P}^-(\operatorname{Gr}(V', \omega'))$ and $I(\widetilde{\eta}^\delta)= \dim \eta$. Therefore $i(\lambda_0^\#, e^{i \delta} \lambda_1^\#) = I(\widetilde{\eta}^\delta \oplus \zeta^\delta)= i(\lambda_0^\#, \lambda_1^\#) + \dim (\lambda_0 \cap \lambda_1)$. \end{proof} \subsubsection{Cauchy-Riemann operators over punctured discs.} \label{sec:index-line-oper} Let $r\in \mathcal{R}^{d+1}$. We assume that the punctured disc $\Sigma_r$ is equipped with strip-like ends $\varepsilon_i$ as described in Section \ref{sec:holomorphic-curves} such that $\varepsilon_0$ is an incoming end and $\varepsilon_i$, for $i=1, \ldots,d$, are outgoing ends. For $i=0,\ldots,d$ we choose orientation-preserving parametrisations $l_i:\mathbb{R}\rightarrow\partial \Sigma_r$ of the connected components $\partial_i \Sigma_r$ of $\partial \Sigma_r$, which we assume to be cyclically ordered according to the natural boundary orientation. Given $\boldsymbol{\lambda}= (\lambda_0, \ldots, \lambda_d)$, where each $\lambda_i$ is a map $\lambda_i \colon \ensuremath{\mathdj{R}} \to \operatorname{Gr}(\ensuremath{\mathdj{C}}^n, \omega_0)$ which is constant outside a compact set, for every $p>2$ we define the Banach space $$W^{1,p}(\Sigma_r,\boldsymbol{\lambda}):=\{\zeta\in W^{1,p}(\Sigma_r,\mathbb{C}^n)\ \vert\ \forall i \;\zeta(l_i(s)) \in \lambda_i(s)\}.$$ We denote by $D_{r,\boldsymbol{\lambda}} \colon W^{1,p}(\Sigma_r, \boldsymbol{\lambda}) \rightarrow L^p(\Sigma_r,\mathbb{C}^n)$ the standard linear Cauchy-Riemann operator. Here we have identified $T^{0,1}\Sigma \otimes_{\ensuremath{\mathdj{C}}} \ensuremath{\mathdj{C}}^n$ with $\ensuremath{\mathdj{C}}^n$.
We denote $\lambda_i^+ = \lim \limits_{s \to + \infty} \lambda_i(s)$ and $\lambda_i^- = \lim \limits_{s \to - \infty} \lambda_i(s)$. It is well known (see \cite[Proposition 4.1]{Floer_morse_index} for the case of strips; the general case is similar) that $D_{r,\boldsymbol{\lambda}}$ is a Fredholm operator if $\lambda_i^+ \pitchfork \lambda_{i+1}^-$ for all $i = 0, \ldots, d$ (with the convention that $\lambda_{d+1}= \lambda_0$). If this is the case, the index of $D_{r,\boldsymbol{\lambda}}$ can be computed as follows. For each path $\lambda_i \colon \ensuremath{\mathdj{R}} \to \operatorname{Gr}(\ensuremath{\mathdj{C}}^n, \omega_0)$ we make the choice of a continuous lift $\lambda_i^\# \colon \ensuremath{\mathdj{R}} \to \operatorname{Gr}^\#(\ensuremath{\mathdj{C}}^n, \omega_0)$. Then it follows from \cite[Proposition 11.13]{Seidel_Fukaya} that \begin{equation}\label{eq:27} \operatorname{ind}(D_{r, \boldsymbol{\lambda}}) = i((\lambda_0^-)^\#, (\lambda_d^+)^\#)-\sum_{i=0}^{d-1} i((\lambda_i^+)^\#, (\lambda_{i+1}^-)^\#). \end{equation} Note that the above expression is independent of the choice of lifts made. We will also need to consider more general boundary conditions which do not intersect transversely on the strip-like ends. It is well known that the Cauchy-Riemann operator with these boundary conditions is not Fredholm, unless we use weighted Sobolev spaces, which we are now going to describe briefly. We define $\chi:\Sigma_r\rightarrow \mathbb{R}$ to be a smooth function which is equal to $0$ outside the strip-like ends (i.e. on $\Sigma_r \setminus (\nu_0 \cup \ldots \cup \nu_d)$), is equal to $1$ on the incoming strip-like end $\varepsilon_0$ (where $s \ge 1$), and is equal to $-1$ on the outgoing strip-like ends $\varepsilon_i$, for $i=1, \ldots,d$ (where $s \le -1$).
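Returning briefly to formula \eqref{eq:27} for orientation: in the special case $d=1$, where $\Sigma_r$ is a strip, it reduces to
\[ \operatorname{ind}(D_{r,\boldsymbol{\lambda}}) = i\big((\lambda_0^-)^\#,(\lambda_1^+)^\#\big) - i\big((\lambda_0^+)^\#,(\lambda_1^-)^\#\big), \]
the difference of the absolute indices at the two ends of a Floer strip. In particular, for constant transverse boundary conditions $\lambda_0$ and $\lambda_1$ (with constant lifts), the two terms coincide and the index vanishes.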
Given $\delta>0$, we introduce the Banach spaces \begin{align*} & W^{1,p}_\delta(\Sigma_r,\boldsymbol{\lambda}):=\{\zeta : e^{\delta \chi(z)s}\zeta\in W^{1,p}(\Sigma_r,\boldsymbol{\lambda})\} \quad \text{and} \\ & L^p_\delta(\Sigma_r):=\{\zeta : e^{\delta \chi(z)s}\zeta\in L^p(\Sigma_r,\mathbb{C}^n)\}; \end{align*} note that the function $\chi(z)s$ is well defined as $\chi(z)=0$ outside the neighbourhoods where $s$ is a well defined coordinate. Moreover the function $e^{\delta \chi(z)s}$ goes to $+ \infty$ as $s \to \pm \infty$, and this forces the functions $\zeta$ in the weighted Sobolev spaces to go to zero sufficiently fast in the strip-like ends. We will denote by $D^\delta_{r, \boldsymbol{\lambda}}$ the Cauchy-Riemann operator from $W^{1,p}_\delta(\Sigma_r,\boldsymbol{\lambda})$ to $L^p_\delta(\Sigma_r)$. We define the Lagrangian label $\boldsymbol{\lambda}^\delta = (\lambda_0^\delta, \ldots, \lambda_d^\delta)$ by the isomorphism \[W^{1,p}_\delta(\Sigma_r,\boldsymbol{\lambda})\rightarrow W^{1,p}(\Sigma_r,\boldsymbol{\lambda}^\delta), \quad \zeta \mapsto e^{\delta \chi(z)(s+it)}\zeta. 
\] In order to understand the Fredholm properties of $D^\delta_{r, \boldsymbol{\lambda}}$, we introduce the operator $\widetilde{D}_{r, \boldsymbol{\lambda}^\delta}$ defined by the following commutative diagram: \begin{equation*} \xymatrix{ W_\delta^{1,p}(\Sigma_r,\boldsymbol{\lambda}) \ar[d]_{(e^{\delta\chi(z)(s+it)})\cdot}\ar[r]^{D^\delta_{r,\boldsymbol{\lambda}}} & L_\delta^p(\Sigma_r,\mathbb{C}^n)\ar[d]^{(e^{ \delta\chi(z) (s+it)}) \cdot} \\ W^{1,p}(\Sigma_r,\boldsymbol{\lambda}^\delta) \ar[r]^{\widetilde{D}_{r, \boldsymbol{\lambda}^\delta}} & L^p(\Sigma_r,\mathbb{C}^n) .} \end{equation*} In other words, \[\widetilde{D}_{r, \boldsymbol{\lambda}^\delta}(\zeta)=e^{ \delta\chi(z)(s+it)} D^\delta_{r, \boldsymbol{\lambda}} (e^{ - \delta\chi(z) (s+it)}\zeta)=D_{r, \boldsymbol{\lambda}^\delta}(\zeta)- \delta \overline{\partial}(\chi(z)(s+it)) \zeta\] for $\zeta \in W^{1,p}(\Sigma_r,\boldsymbol{\lambda}^\delta)$, and thus $\widetilde{D}_{r, \boldsymbol{\lambda}^\delta}$ is a compact deformation of $D_{r, \boldsymbol{\lambda}^\delta}$ because $\chi$ is constant outside a compact set. For a generic choice of $\delta$, the asymptotic Lagrangian labels associated to $\boldsymbol{\lambda}^\delta$ are transverse, and therefore $\widetilde{D}_{r, \boldsymbol{\lambda}^\delta}$ is Fredholm. Standard Fredholm theory implies that $D_{r, \boldsymbol{\lambda}}^\delta$ is also Fredholm and that \[\operatorname{Ind}(D_{r, \boldsymbol{\lambda}}^\delta) = \operatorname{Ind}(\widetilde{D}_{r, \boldsymbol{\lambda}^\delta}) = \operatorname{Ind}(D_{r, \boldsymbol{\lambda}^\delta}). \] Choose lifts $(\lambda^\delta_0)^\#, \ldots, (\lambda^\delta_d)^\#$; then by Equation \eqref{eq:27} the index of $D_{r, \boldsymbol{\lambda}}^\delta$ is \begin{equation} \label{index formula 2} \operatorname{ind}(D^\delta_{r, \boldsymbol{\lambda}}) = i((\lambda_0^{\delta,-})^\#, (\lambda_d^{\delta,+})^\#)-\sum_{i=0}^{d-1} i((\lambda_i^{\delta,+})^\#, (\lambda_{i+1}^{\delta,-})^\#).
\end{equation} In order to make Equation~\eqref{index formula 2} more explicit, we relate the absolute indices of the asymptotic Lagrangian labels before and after the perturbation. At the incoming puncture \[(\lambda_0^{\delta, -})^\# = e^{i \delta}(\lambda^-_0)^\#, \quad \text{and} \quad (\lambda_d^{\delta, +})^\# = (\lambda_d^+)^\#, \] while at the outgoing punctures, for $i= 0, \ldots, d-1$, \[(\lambda_i^{\delta,+})^\# = e^{-i \delta}(\lambda_i^+)^\#, \quad \text{and} \quad (\lambda_{i+1}^{\delta,-})^\# = (\lambda_{i+1}^-)^\#.\] Since $i(e^{i \delta}(\lambda^-_0)^\#, (\lambda_d^+)^\#) = i((\lambda^-_0)^\#, e^{-i \delta}(\lambda_d^+)^\#)$ and $i(e^{-i \delta}(\lambda_i^+)^\#, (\lambda_{i+1}^-)^\#)=i((\lambda_i^+)^\#, e^{i \delta} (\lambda_{i+1}^-)^\#)$, Example~\ref{ex: computation of q} implies that the index of $D^\delta_{r,\boldsymbol{\lambda}}$ is \begin{equation}\label{index formula with weights} \operatorname{Ind}(D^\delta_{r, \boldsymbol{\lambda}}) = i((\lambda_0^-)^\#, (\lambda_d^+)^\#)-\sum_{i=0}^{d-1} (i((\lambda_i^+)^\#, (\lambda_{i+1}^-)^\#) + \dim (\lambda_i^+ \cap \lambda_{i+1}^-)). \end{equation} \subsection{Grading} \label{sec:grading} We are now ready to define the gradings of the generators that will appear in the Cthulhu complex described in Section \ref{sec:Cthulhu-complex}. For intersection points, the grading follows Seidel \cite{Seidel_Fukaya}, while for Reeb chords we recover the definition in \cite{Ekholm_Contact_Homology} by Ekholm-Etnyre-Sullivan. \paragraph{\textbf{The Maslov potential of a Lagrangian cobordism.}} \label{sec:brane-struct-cobord} We assume that $2c_1(P)=0$, which is equivalent to saying that $\Lambda^{n+1}_{\mathbb{C}}(TX)^{\otimes 2}\simeq X\times \mathbb{C}$. Let $\widetilde{\nu}$ be such a trivialisation.
Note that, since $L$ is Lagrangian, for any basis $(v_1,\ldots,v_{n+1})$ of $T_pL$ the value $$\nu(v_1,\ldots,v_{n+1}):=\widetilde{\nu}(v_1,\ldots,v_{n+1},v_1,\ldots,v_{n+1})$$ is non-zero and, moreover, $$\frac{\nu(v_1,\ldots,v_{n+1})}{\|\nu(v_1,\ldots,v_{n+1})\|}\in S^1$$ does not depend on the choice of the basis. This defines a function $\alpha: L\rightarrow S^1$ whose homotopy class $[\alpha]\in [L,S^1]\simeq H^1(L)$ is mapped to the Maslov class $\mu\in H^2(X,L)$ through the connecting homomorphism $\rho:H^1(L)\rightarrow H^2(X,L)$. We now make the assumption that the Maslov class vanishes, from which it follows that there exists a (not uniquely determined) function $\alpha^{\#}: L\rightarrow\mathbb{R}$ satisfying $\alpha=e^{2i\pi\alpha^{\#}}$. Note that, for any $p\in L$, the pair $(T_p L,\alpha^{\#}(p))$ can be identified with an element of $\operatorname{Gr}^{\#}(T_p X,\omega_p)$. \begin{Rem} In the case when the Maslov class does not vanish, one defines the potential to take values in the cover of $S^1$ associated to $\operatorname{im}[\alpha] \subset H^1(S^1)=\ensuremath{\mathdj{Z}}$; this leads to a cyclic grading defined modulo $\operatorname{im}[\alpha]$. \end{Rem} Let us now assume that we have fixed choices of functions $\alpha_0^\#$ and $\alpha_1^\#$ as above for both of the cobordisms $\Sigma_0$ and $\Sigma_1$; these functions are called \emph{Maslov potentials} for the cobordisms. Using the same notation as above, for $p\in \Sigma_i$ we let $\alpha_i^\#(p)$ denote $(T_p\Sigma_i)^\#$. \paragraph{\textbf{Grading of intersection points.}} \label{sec:grad-inters-points} Let $p$ be an intersection point between $\Sigma_0$ and $\Sigma_1$.
Given a choice of ordering $(\Sigma_0,\Sigma_1)$, we define the grading of $p$ by $$\operatorname{gr}(p)=i(\alpha_0^\#(p),\alpha_1^\#(p)).$$ \paragraph{\textbf{Grading of Reeb chords.}} \label{sec:grading-reeb-chords} Let $\gamma$ be a Reeb chord of the link $\Lambda^\pm:=\Lambda^{\pm}_0\cup \Lambda^{\pm}_1$ with starting point $p^- \in \Lambda_{i^-}^\pm$, endpoint $p^+ \in \Lambda_{i^+}^\pm$, and length $\ell$. The Maslov potential on $\Sigma_0 \cup \Sigma_1$ restricts to a Maslov potential on the cylindrical ends $\ensuremath{\mathdj{R}}\times\Lambda^\pm$. Let $\phi_t$ be the flow of the Reeb vector field $\frac{\partial}{\partial z}$. Observe that a Hamiltonian isotopy acts on the space of graded Lagrangian planes by the homotopy lifting property. Let $\alpha^\#_{\gamma,t}$ be the lift of the path of Lagrangian planes $d\phi_t(T_{(t_0,p^-)}(\ensuremath{\mathdj{R}} \times \Lambda^\pm))$ starting at $\alpha_{i^-}(t_0,p^-)^\#$ for any $t_0\in\mathbb{R}$. We define $$\operatorname{gr}(\gamma)=i(\alpha_{i^+}(t_0,p^+)^\#,\alpha^\#_{\gamma,\ell})-1.$$ \begin{Rem} This grading coincides with the grading of the Reeb chord generators of the Chekanov-Eliashberg algebra used in \cite{LCHgeneral}, which are defined in terms of the Conley-Zehnder index. \end{Rem} Note that the above Lagrangian subspaces do not intersect transversely. Here we must use the definition that utilises the symplectic reduction as in Section \ref{sec:grassm-lagr-plan}. In this case, this symplectic reduction can be seen to geometrically correspond to taking the canonical projection $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to P$.
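As a sanity check on these conventions, note how the grading of an intersection point depends on the ordering of the two cobordisms. Using the duality $i(\lambda^\#,\mu^\#)+i(\mu^\#,\lambda^\#)=n+1$ for a pair of transverse graded Lagrangian planes in dimension $n+1$ (an identity that also enters the index computations of the next subsection), one obtains
\[
\operatorname{gr}_{(\Sigma_0,\Sigma_1)}(p)+\operatorname{gr}_{(\Sigma_1,\Sigma_0)}(p)
= i(\alpha_0^\#(p),\alpha_1^\#(p))+i(\alpha_1^\#(p),\alpha_0^\#(p))
= n+1,
\]
so the two possible gradings of $p$ determine one another.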
\subsubsection{Virtual dimension of the moduli spaces.} \label{sec:virt-dimens-moduli} We now turn our attention to the general setting and prove the virtual dimension formulae for the moduli spaces of holomorphic discs $\mathcal{M}_{\underline{L}}(x_0,x_1,\hdots,x_{j^++j^-+l})$ for a Lagrangian label $\underline{L}$ taking values in $\{\Sigma_0,\Sigma_1\}$. Let $p_0$ and $\{q_k\}_{k=1}^{l}$ be intersection points of $\Sigma_0\cap\Sigma_1$. In addition, we let $\gamma_0^+$ and $\{\gamma^+_k\}_{k=1}^{j^+}$ be Reeb chords of $\Lambda_0^+\cup\Lambda^+_1$, and $\{\gamma^-_k\}_{k=1}^{j^-}$ be Reeb chords of $\Lambda_0^{-}\cup\Lambda^{-}_1$. We require that the Lagrangian label satisfies the property that: \begin{itemize} \item The asymptotic $x_0$ at the incoming puncture is either an intersection point $p_0$, or a positive puncture asymptotic to the Reeb chord $\gamma^+_0$. In either case, this puncture is required to be a jump from $\Sigma_1$ to $\Sigma_0$; and \item The asymptotics of the other punctures $x_1,\hdots,x_{j^++j^-+l}$ correspond bijectively to $\{\gamma^+_k\}_{k=1}^{j^+} \cup \{\gamma^-_k\}_{k=1}^{j^-} \cup \{q_k\}_{k=1}^{l}$. Here we make no requirements regarding the jumps made at the different punctures. (In the case of a Reeb chord, such a jump is uniquely determined. The jumps at intersection points are determined by the order of the asymptotics $(x_1,\ldots,x_{{j^++j^-+l}})$ since the jump at $x_0$ is fixed.) For an intersection point, we define the function $(\sigma_{q_k}(0),\sigma_{q_k}(1)):=(0,1)$ in case the jump is from $\Sigma_0$ to $\Sigma_1$, while we define $(\sigma_{q_k}(0),\sigma_{q_k}(1)):=(1,0)$ in case the jump is from $\Sigma_1$ to $\Sigma_0$. Note that here we do not insist on the Lagrangian label being simple.
\end{itemize} For a punctured pseudoholomorphic disc $u:S_r\rightarrow X$ inside the above moduli space $\mathcal{M}_{\underline{L}}(x_0,x_1,\hdots,x_{j^++j^-+l})$ we let $F_u$ be the linearisation of Equation \eqref{eq:32}. The following result provides a computation of the Fredholm index of $F_u$. Recall that this Fredholm index is equal to the dimension of the moduli space of such discs modulo reparametrisation in the case when this moduli space is transversely cut out. \begin{Thm} Let $(\Sigma_0,\Sigma_1)$ be an ordered pair of Lagrangian cobordisms of dimension $n+1$, and let $u$ be a punctured disc with boundary on this pair of Lagrangian submanifolds as described above. The Fredholm index of $F_u$ is given by \begin{align} \operatorname{Ind}(F_u) =& \operatorname{gr}(p_0)+\sum_{k=1}^{j^+} \operatorname{gr}(\gamma^+_k)-\sum_{\sigma_{q_k}(0)=1} (n+1-\operatorname{gr}(q_k)) +\label{eq:21} \\ &-\sum_{\sigma_{q_k}(0)=0} \operatorname{gr}(q_k) -\sum_{k=1}^{j^-} \operatorname{gr}(\gamma^-_k)+\nonumber\\ &+(2-n)j^++l-2\quad \text{if}\quad x_0=p_0,\nonumber\\ \operatorname{Ind}(F_u) =& \operatorname{gr}(\gamma^+_0)+\sum_{k=1}^{j^+} \operatorname{gr}(\gamma^+_k)-\sum_{\sigma_{q_k}(0)=1} (n+1-\operatorname{gr}(q_k)) +\label{eq:31}\\ &-\sum_{\sigma_{q_k}(0)=0} \operatorname{gr}(q_k) -\sum_{k=1}^{j^-} \operatorname{gr}(\gamma^-_k) + \nonumber\\ &+(2-n)j^++l\quad\text{if}\quad x_0=\gamma^+_0,\nonumber \end{align} where the gradings of intersection points are defined using the order $(\Sigma_0,\Sigma_1)$ as in Section \ref{sec:grad-inters-points}. \end{Thm} \begin{proof} The operator $F_u$ decomposes as $D_u\oplus K_u\oplus \tau_{m}$ where $D_u$ acts on $W^{1,p}_\delta$, $K_u \colon \mathbb{R}^{j} \to 0$ with $j=j^++j^-$ (resp. $j=j^++j^-+1$) in the case when $x_0=p_0$ (resp. $x_0=\gamma^+_0$), and $\tau_{m}$, $m=j^+ + j^- +l+1$, is the operator associated to the deformation of the Teichm\"{u}ller space at $r$.
The latter operator has index equal to the dimension of $\mathcal{R}^{j^+ + j^- +l+1}$, minus the dimension of the automorphism group of the curve $S_r$. The index of $K_u$ is $j^++j^-$ if $x_0=p_0$ and $j^++j^-+1$ if $x_0=\gamma_0^+$. Moreover, $\operatorname{Ind}\tau_m=j^++j^-+l-2$. For $k\in \{1,\ldots, j^\pm\}$ we denote by $p_k^\pm$ the end of the chord $\gamma_k^\pm$. From Equation \eqref{index formula with weights} we get that \begin{align*} &\operatorname{Ind}(D_u) = i((T_{p_0}\Sigma_0)^\#,(T_{p_0}\Sigma_1)^\#)-\sum_{k=1}^{j^+} \big(i(\alpha^\#_{\gamma^+_k,\ell_k},T_{(0,p_k^+)}(\ensuremath{\mathdj{R}}\times \Lambda^{+})^\#)+1\big)\\ & -\sum_{k} i((T_{q_k}\Sigma_{\sigma_{q_k}(0)})^\#,(T_{q_k}\Sigma_{\sigma_{q_k}(1)})^\#)-\sum_{k=1}^{j^-} \big(i(T_{(0,p_k^-)}(\ensuremath{\mathdj{R}}\times \Lambda^{-})^\#,\alpha^\#_{\gamma^-_k,\ell_k})+1\big)\quad \text{or} \\ &\operatorname{Ind}(D_u)= i(T_{(0,p_0^+)}(\ensuremath{\mathdj{R}}\times \Lambda^{+})^\#,\alpha^\#_{\gamma^+_0,\ell_0})-\sum_{k=1}^{j^+} \big( i(\alpha^\#_{\gamma^+_k,\ell_k},T_{(0,p_k^+)}(\ensuremath{\mathdj{R}}\times \Lambda^{+})^\#)+1\big)\\ &-\sum_{k} i((T_{q_k}\Sigma_{\sigma_{q_k}(0)})^\#,(T_{q_k}\Sigma_{\sigma_{q_k}(1)})^\#)-\sum_{k=1}^{j^-} \big(i(T_{(0,p_k^-)}(\ensuremath{\mathdj{R}}\times \Lambda^{-})^\#,\alpha^\#_{\gamma^-_k,\ell_k})+1\big), \end{align*} depending on whether $x_0$ is $p_0$ or $\gamma^+_0$.
Note that for $q\in \Sigma_0\cap \Sigma_1$, we have $$i((T_q\Sigma_0)^\#,(T_q\Sigma_1)^\#)=n+1-i((T_q\Sigma_1)^\#,(T_q\Sigma_0)^\#),$$ whereas for a Reeb chord $\gamma$ with end point $p^+$ we have that $$i(\alpha_{\gamma,\ell}^\#,T_{(0,p^+)}(\ensuremath{\mathdj{R}}\times \Lambda^{\pm})^\#)=n-i(T_{(0,p^+)}(\ensuremath{\mathdj{R}}\times \Lambda^{\pm})^\#,\alpha_{\gamma,\ell}^\#).$$ Thus we obtain \begin{align*} &\operatorname{Ind}(D_u)=\operatorname{gr}(p_0)-\sum_{k=1}^{j^+}(n-\operatorname{gr}(\gamma_k^+))-\sum_{\sigma_{q_k}(0)=0}\operatorname{gr}(q_k)\\ &-\sum_{\sigma_{q_k}(0)=1}(n+1-\operatorname{gr}(q_k))-\sum_{k=1}^{j^-}(\operatorname{gr}(\gamma_k^-)+2)\quad\text{or}\\ &\operatorname{Ind}(D_u)=\operatorname{gr}(\gamma^+_0)+1-\sum_{k=1}^{j^+}(n-\operatorname{gr}(\gamma_k^+))\\ &-\sum_{\sigma_{q_k}(0)=0}\operatorname{gr}(q_k)-\sum_{\sigma_{q_k}(0)=1}(n+1-\operatorname{gr}(q_k))-\sum_{k=1}^{j^-}(\operatorname{gr}(\gamma_k^-)+2). \end{align*} Since $\operatorname{Ind}(F_u)=\operatorname{Ind}(D_u)+\operatorname{Ind}(K_u)+\operatorname{Ind}(\tau_m)$, the result follows. \end{proof} We apply Equation \eqref{eq:21} to compute the virtual dimension of the moduli spaces defined in Section \ref{sec:Cthulu-moduli-spaces}.
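As an illustration of how the index formulas specialise (a consistency check, not a new statement), consider a disc whose incoming puncture is a positive Reeb chord asymptotic $\gamma^+$ and whose remaining asymptotics are the negative Reeb chords in the words $\boldsymbol{\delta}^-,\boldsymbol{\zeta}^-$ together with a single intersection point $q$ at which the jump is from $\Sigma_0$ to $\Sigma_1$, so that $j^+=0$, $l=1$ and $\sigma_q(0)=0$ in Equation \eqref{eq:31}. Then
\[
\operatorname{Ind}(F_u)=\operatorname{gr}(\gamma^+)-\operatorname{gr}(q)-\operatorname{gr}(\boldsymbol{\delta})-\operatorname{gr}(\boldsymbol{\zeta})+(2-n)\cdot 0+1,
\]
where $\operatorname{gr}(\boldsymbol{\delta})$ and $\operatorname{gr}(\boldsymbol{\zeta})$ denote the sums of the gradings of the chords in the respective words; this is precisely the dimension formula \eqref{eq:23} below.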
For $(n+1)$-dimensional Lagrangian cobordisms, we obtain \begin{align} \label{eq:22} \dim(\widetilde{\mathcal{M}}(\gamma^+;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-))&=\dim(\mathcal{M}(\gamma^+;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-))-1\\&=\operatorname{gr}(\gamma^+)-\operatorname{gr}(\gamma^-)-\operatorname{gr}(\boldsymbol{\delta})-\operatorname{gr}(\boldsymbol{\zeta})-1\nonumber,\\ \label{eq:23}\dim(\mathcal{M}(\gamma^+;\boldsymbol{\delta}^-,q,\boldsymbol{\zeta}^-))&=\operatorname{gr}(\gamma^+)-\operatorname{gr}(q)-\operatorname{gr}(\boldsymbol{\delta})-\operatorname{gr}(\boldsymbol{\zeta})+1,\\ \label{eq:24}\dim(\mathcal{M}(p;\boldsymbol{\delta}^-,q,\boldsymbol{\zeta}^-))&=\operatorname{gr}(p)-\operatorname{gr}(q)-\operatorname{gr}(\boldsymbol{\delta})-\operatorname{gr}(\boldsymbol{\zeta})-1,\\ \label{eq:25}\dim(\mathcal{M}(p;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-))&=\operatorname{gr}(p)-\operatorname{gr}(\gamma^-)-\operatorname{gr}(\boldsymbol{\delta})-\operatorname{gr}(\boldsymbol{\zeta})-2,\\ \label{eq:26}\dim(\widetilde{\mathcal{M}}(\gamma_{2,1};\boldsymbol{\delta}^-,\gamma_{1,2},\boldsymbol{\zeta}^-))&=\dim(\mathcal{M}(\gamma_{2,1};\boldsymbol{\delta}^-,\gamma_{1,2},\boldsymbol{\zeta}^-))-1\\&=\operatorname{gr}(\gamma_{2,1})+\operatorname{gr}(\gamma_{1,2})-\operatorname{gr}(\boldsymbol{\delta})-\operatorname{gr}(\boldsymbol{\zeta})-n+1\nonumber. \end{align} \subsection{Transversality} \label{sec:transversality} Here we gather various results from the literature that are needed in order to achieve transversality for the relevant moduli spaces of punctured pseudoholomorphic discs inside $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$. Recall that the transversality of a moduli space in particular implies that it is a smooth manifold of dimension equal to the Fredholm index of the corresponding linearised problem (see Section \ref{sec:index-line-oper}).
As usual, an almost complex structure for which the relevant moduli spaces are transversely cut out is called \emph{regular}. \subsubsection{Punctured discs and strips with a cylindrical boundary condition} Consider the moduli spaces for a cylindrical boundary condition, and where all punctures are asymptotic to Reeb chords of which precisely one is positive with asymptotic $\gamma^+$. In other words, we are interested in moduli spaces of the form \begin{align*} & \mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda}(\gamma^+;\gamma_1^-,\hdots,\gamma_d^-),\\ & \mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda_0,\ensuremath{\mathdj{R}}\times\Lambda_1}(\gamma^+;\gamma_1^-,\hdots,\gamma_d^-), \:\: \gamma^+ \in \mathcal{R}(\Lambda_1,\Lambda_0), \end{align*} as described in Section \ref{sec:lch-moduli-spaces}. Note that solutions in the latter moduli space are also solutions (of a special kind) inside the former moduli space for $\Lambda=\Lambda_0 \cup \Lambda_1$. We also consider moduli spaces of the form \[ \mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda_0,\ensuremath{\mathdj{R}}\times\Lambda_1}(\gamma_{1,0};\boldsymbol{\delta^-},\gamma_{0,1},\boldsymbol{\zeta^-}),\] as considered in Section \ref{sec:banana-moduli-space}. Recall that here all punctures have negative asymptotics to Reeb chords except precisely \emph{two}, where these two punctures moreover have positive asymptotics to two \emph{distinct} Reeb chords $\gamma_{1,0} \in \mathcal{R}(\Lambda_1,\Lambda_0)$, $\gamma_{0,1} \in \mathcal{R}(\Lambda_0,\Lambda_1)$, and where $\boldsymbol{\delta^-}$ and $\boldsymbol{\zeta^-}$ denote words of chords on $\Lambda_0$ and $\Lambda_1$, respectively. The result \cite[Proposition 3.13]{LegendrianAmbient} can be applied to all of these moduli spaces, showing that they are transversely cut out for a Baire second category subset of almost complex structures $\mathcal{J}^{\operatorname{cyl}}(P \times \ensuremath{\mathdj{R}})$.
This result can be seen as an adaptation of Dragnev's result in \cite{FredholmTheory} to the case of a cylindrical boundary condition. Note that, even if the aforementioned result assumes that there is precisely one positive puncture, in this case its proof can still be applied, since the two positive punctures of the solution have different asymptotics. For technical reasons it will be necessary to achieve transversality also for the subset $\mathcal{J}^{\operatorname{cyl}}_\pi(P \times \ensuremath{\mathdj{R}}) \subset \mathcal{J}^{\operatorname{cyl}}(P \times \ensuremath{\mathdj{R}})$ of cylindrical lifts of almost complex structures (see Section \ref{sec:cyllift}). In this case, \cite[Lemma 8.2]{LiftingPseudoholomorphic} shows that a $\widetilde{J}_P$-holomorphic disc $\widetilde{u}$ in $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ is transversely cut out if and only if its $J_P$-holomorphic projection $u:=\pi \circ \widetilde{u}$ is transversely cut out. That the latter discs are transversely cut out for a Baire second category subset of admissible almost complex structures was shown in \cite[Proposition 2.3]{LCHgeneral} under the following additional assumption: \begin{enumerate}[label=(A), ref=(A)] \item Fix a chord-generic Legendrian submanifold $\Lambda \subset P \times \ensuremath{\mathdj{R}}$. The pair $(\Lambda,J_P)$ is said to be \emph{admissible} if the following is satisfied. The almost complex structure $J_P$ is integrable in some neighbourhood of each double point of the Lagrangian projection $\Pi_{\operatorname{Lag}}(\Lambda)$, where the two sheets of this Lagrangian immersion moreover are real-analytic. \label{A} \end{enumerate} Note that this condition imposes no restriction on the Legendrian submanifold. Namely, every transverse Lagrangian intersection is symplectomorphic to the intersection of the real and imaginary parts in $(\ensuremath{\mathdj{C}}^n,\omega_0)$ by a version of Weinstein's Lagrangian neighbourhood theorem.
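Explicitly, the local model invoked here can be written down (a standard normal form, stated as a sketch): with $\omega_0=\sum_{k=1}^n dx_k\wedge dy_k$ on $\ensuremath{\mathdj{C}}^n$, the two sheets are identified with
\[
\ensuremath{\mathdj{R}}^n=\{y_1=\cdots=y_n=0\}
\qquad\text{and}\qquad
i\ensuremath{\mathdj{R}}^n=\{x_1=\cdots=x_n=0\},
\]
which intersect transversely at the origin, are real-analytic, and are both totally real for the standard (integrable) complex structure. This is why an almost complex structure satisfying condition \ref{A} always exists near any given transverse double point.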
Combining the above results, we obtain: \begin{Prop} The above moduli spaces are transversely cut out for a Baire second category subset of the cylindrical almost complex structures $\mathcal{J}^{\operatorname{cyl}}(P \times \ensuremath{\mathdj{R}})$, as well as for a Baire second category subset of the admissible cylindrical lifts $\mathcal{J}^{\operatorname{cyl}}_\pi(P \times \ensuremath{\mathdj{R}})$ of almost complex structures $J_P$ on $P$ satisfying condition \ref{A}. \end{Prop} \subsubsection{Discs with boundary on a general embedded Lagrangian cobordism} We are here interested in the case when the boundary condition is a general embedded Lagrangian cobordism $\Sigma$ from $\Lambda^-$ to $\Lambda^+$, and when all asymptotics are Reeb chords. In other words, we consider moduli spaces of the form \[\mathcal{M}_\Sigma(\gamma^+;\gamma_1^-,\hdots,\gamma_d^-)\] as described in Section \ref{sec:lch-moduli-spaces}. \begin{Prop} Assume that we are given cylindrical almost complex structures $J^\pm \in \mathcal{J}^{\operatorname{cyl}}(P \times \ensuremath{\mathdj{R}})$ which are regular for $\mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda^\pm}(\gamma^+;\gamma^-_1,\hdots,\gamma^-_d)$, as well as a number $T>0$ for which $\Sigma \setminus ([-T,T]\times P \times \ensuremath{\mathdj{R}})$ is cylindrical. The above moduli spaces of the form \[ \mathcal{M}_\Sigma(\gamma^+;\gamma_1^-,\hdots,\gamma_d^-) \] are transversely cut out for a Baire second category subset of $\mathcal{J}^{\operatorname{adm}}_{J^-,J^+,T}(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}})$. \end{Prop} \begin{proof} From \cite[Theorem 2.8]{Floer_Conc} it follows that the discs in these moduli spaces are simple, and a standard transversality argument \cite[Chapter 3]{McDuff_Salamon_Hol_Curv} can thus be applied.
To that end, observe that it suffices to find an injective point of the disc contained inside $(-T,T)\times P \times \ensuremath{\mathdj{R}}$, where the sought perturbation of the almost complex structure will be supported. (In the case when the disc is disjoint from this subset, it is in fact already transversely cut out by the assumptions of the proposition.) \end{proof} \subsubsection{Strips with at least one asymptotic to an intersection point} Let $\Sigma_i$, $i=0,1$, be two transversely intersecting exact Lagrangian cobordisms from $\Lambda^-_i$ to $\Lambda^+_i$. We now consider moduli spaces of strips of the form \[\mathcal{M}_{\Sigma_0,\Sigma_1}(x_0;\boldsymbol{\delta}^-,x_1,\boldsymbol{\zeta}^-),\] where $\boldsymbol{\delta}^-$ and $\boldsymbol{\zeta}^-$ denote words of Reeb chords on $\Lambda^-_1$ and $\Lambda^-_0$, respectively, and where at least one of $x_0$ and $x_1$ is an intersection point of $\Sigma_0 \cap \Sigma_1$. See Sections \ref{sec:floer-moduli-space}, \ref{sec:lch-floer-moduli}, and \ref{sec:floer-lch-moduli}. Recall that exactness implies that $x_0 \neq x_1$ holds in either of the cases. It is here important to note that $\Sigma_0 \cup \Sigma_1$ is not an embedded boundary condition. One option for achieving transversality for these moduli spaces is via the technique in \cite{LCHgeneral}, while imposing a condition analogous to \ref{A} near the intersection points. In other words, one has to consider almost complex structures which are integrable in a neighbourhood of $\Sigma_0 \cap \Sigma_1$, which moreover make the Lagrangian submanifolds real-analytic inside the same neighbourhood. Alternatively, transversality can also be achieved by using so-called ``time-dependent'' almost complex structures as described in Section \ref{sec:pert-floer-equa}. This technique goes back to Floer \cite{FloerHFlag}.
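Concretely, in this perturbation scheme one considers a family $\{J_t\}_{t\in[0,1]}$ of almost complex structures and replaces the Cauchy-Riemann equation on the strip $\ensuremath{\mathdj{R}}\times[0,1]$, with coordinates $(s,t)$, by its time-dependent version (a standard formulation, written here as a sketch and up to sign conventions):
\[
\partial_s u(s,t)+J_t(u(s,t))\,\partial_t u(s,t)=0,
\qquad u(s,0)\in\Sigma_0,\quad u(s,1)\in\Sigma_1.
\]
The $t$-dependence of the almost complex structure provides enough freedom to achieve transversality for strips between distinct intersection points without the integrability assumptions above.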
Recall that the strips in the above moduli space are maps of the form \[ u \colon (\ensuremath{\mathdj{R}} \times [0,1],\ensuremath{\mathdj{R}} \times \{0\},\ensuremath{\mathdj{R}}\times\{1\}) \to (\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}},\Sigma_0,\Sigma_1),\] where the domain is endowed with the conformal structure coming from the standard holomorphic coordinate $s+it$ on $\ensuremath{\mathdj{R}} \times [0,1] \subset \ensuremath{\mathdj{R}}^2=\ensuremath{\mathdj{C}}$ (this parametrisation is determined up to a translation of the $s$ coordinate). We consider sufficiently regular one-parameter families $\{J_t\}_{t\in[0,1]}$ of almost complex structures $J_t \in \mathcal{J}^{\operatorname{adm}}_{J^-,J^+}(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}})$, and define the moduli spaces for the corresponding time-dependent version of the Cauchy-Riemann operator on a strip as in Section \ref{sec:pert-floer-equa}. The somewhere injectivity result in e.g.~\cite[Theorem 5.1]{Oh:Structure} or \cite[Section 8.6]{AudinDamian} also holds in the current setting, showing that: \begin{Prop} Assume that we are given cylindrical almost complex structures $J^\pm \in \mathcal{J}^{\operatorname{cyl}}(P \times \ensuremath{\mathdj{R}})$ which are regular for $\mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda^\pm}(\gamma^+;\gamma^-_1,\hdots,\gamma^-_d)$, as well as a number $T>0$ for which $(\Sigma_0 \cup \Sigma_1) \setminus ([-T,T]\times P \times \ensuremath{\mathdj{R}})$ is cylindrical. The above moduli spaces of the form \[ \mathcal{M}_{\Sigma_0,\Sigma_1}(x_0;\boldsymbol{\delta}^-,x_1,\boldsymbol{\zeta}^-)\] are transversely cut out for a Baire second category subset of the time-dependent almost complex structures $J_t \colon I \to \mathcal{J}^{\operatorname{adm}}_{J^-,J^+,T}(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}})$.
\end{Prop} \subsubsection{Strips with only Reeb chord asymptotics} We now consider moduli spaces of strips of the form \[\mathcal{M}_{\Sigma_0,\Sigma_1}(\gamma^+;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-)\] as above, but where both $\gamma^\pm$ are Reeb chords; see Sections \ref{sec:lch-moduli-spaces} and \ref{sec:banana-moduli-space}. Observe that this strip has a puncture with a positive asymptotic to a Reeb chord $\gamma^+$ from $\Lambda^+_1$ to $\Lambda^+_0$, while $\gamma^-$ is either a Reeb chord from $\Lambda^-_1$ to $\Lambda^-_0$, which hence is a negative asymptotic, or a Reeb chord from $\Lambda^+_0$ to $\Lambda^+_1$, which hence is a positive asymptotic. In either case, we have $\gamma^+ \neq \gamma^-$. \begin{Prop} Assume that we are given cylindrical almost complex structures $J^\pm \in \mathcal{J}^{\operatorname{cyl}}(P \times \ensuremath{\mathdj{R}})$ which are regular for every moduli space of the form \begin{gather*} \mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda^\pm_0 \cup \ensuremath{\mathdj{R}} \times \Lambda^\pm_1}(\gamma^+;\gamma^-_1,\hdots,\gamma^-_d),\\ \mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda^\pm_0, \ensuremath{\mathdj{R}} \times \Lambda^\pm_1}(\gamma^+_1;\gamma^-_1,\hdots,\gamma^-_{d_1},\gamma^+_2,\gamma^-_{d_1+1},\hdots, \gamma^-_d), \end{gather*} as well as a number $T>0$ for which $(\Sigma_0 \cup \Sigma_1) \setminus ([-T,T]\times P \times \ensuremath{\mathdj{R}})$ is cylindrical. Any almost complex structure $J \in \mathcal{J}^{\operatorname{adm}}_{J^-,J^+,T}(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}})$ can be perturbed to an almost complex structure $J' \in \mathcal{J}^{\operatorname{adm}}_{J^-,J^+,T'}(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}})$ which is regular for the moduli spaces of the form \[\mathcal{M}_{\Sigma_0,\Sigma_1}(\gamma^+;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-),\] under the assumption that $T' \gg T$ was chosen sufficiently large.
\end{Prop} \begin{proof} The argument is done by induction on the energy of the discs in the moduli spaces. To that end, the following feature of the SFT compactness theorem is crucial: \begin{itemize} \item In our setting, the space of solutions of $d(\phi\alpha)$-energy at most $E>0$ also has a uniform bound on the total energy. Hence, the space $\mathcal{M}^E$ of these solutions can be compactified to $\overline{\mathcal{M}}^E$ by adding broken configurations consisting of pseudoholomorphic buildings; \item The boundary strata of the compactified moduli space $\overline{\mathcal{M}}^E$ consist of buildings whose components $u_1,\hdots,u_k$ have $d(\phi\alpha)$-energies whose sum is at most $E$; and \item The complement of any open neighbourhood of the boundary strata in $\overline{\mathcal{M}}^E$ is a compact subspace of $\mathcal{M}^E$ in the appropriate Whitney topology. \end{itemize} We also note that, in our setting, the possible $d(\phi\alpha)$-energies attained by a non-trivial pseudoholomorphic curve having at most one output puncture form a \emph{finite set of numbers $E_k > E_{k-1} > \hdots> E_1 >0$}, as follows from the finite possibilities of asymptotics in the current setting. The base case of the induction is as follows. The $J$-holomorphic discs in $\mathcal{M}^{E_1}$ of lowest $d(\phi\alpha)$-energy cannot break by the above, and thus form a compact space of maps. Using the asymptotic properties of finite energy discs, all solutions must have injective points in some subset of the form $[T'',T''+1] \times P \times \ensuremath{\mathdj{R}}$, for $T'' \gg T$ sufficiently large. After a perturbation of $J$ supported inside $[T'',T''+1] \times P \times \ensuremath{\mathdj{R}}$, we may thus assume that the moduli space $\mathcal{M}^{E_1}=\overline{\mathcal{M}}^{E_1}$ of lowest energy is transversely cut out. By induction we assume that all moduli spaces of energy strictly less than $E_i>0$, $i>1$, are transversely cut out.
Pseudoholomorphic gluing as in Theorem \ref{thm:gluing} can be upgraded to the following statement: Gluing of transversely cut out solutions can always be performed and, a priori, produces transversely cut out solutions. Since the boundary strata of $\overline{\mathcal{M}}^{E_i}$ consist of transversely cut out components by the induction hypothesis together with the assumptions of the proposition, the moduli space $\mathcal{M}^{E_i}$ thus also consists of transversely cut out solutions in some neighbourhood of its boundary strata. We are thus left to achieve transversality for a subset of solutions contained in the complement of some neighbourhood of the boundary of $\overline{\mathcal{M}}^{E_i}$. By the above, these solutions form a compact space of maps. An argument as in the base case can again be used to show the following. After a suitable perturbation of the almost complex structure $J$ in a region containing the positive end (where these maps all have injective points by the asymptotic properties), the full moduli space $\mathcal{M}^{E_i}$ can be assumed to be transversely cut out. \end{proof} We also give the following alternative approach to transversality, using time-dependent almost complex structures. \begin{Prop} Assume that we are given a pair $J^\pm \in \mathcal{J}^{\operatorname{cyl}}(P \times \ensuremath{\mathdj{R}})$ of cylindrical almost complex structures which are regular for all moduli spaces of the form \begin{gather*} \mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda^\pm_0 \cup \ensuremath{\mathdj{R}} \times \Lambda^\pm_1}(\gamma^+;\gamma^-_1,\hdots,\gamma^-_d),\\ \mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda^\pm_0, \ensuremath{\mathdj{R}} \times \Lambda^\pm_1}(\gamma^+_1;\gamma^-_1,\hdots,\gamma^-_{d_1},\gamma^+_2,\gamma^-_{d_1+1},\hdots, \gamma^-_d), \end{gather*} as well as a number $T>0$ for which $(\Sigma_0 \cup \Sigma_1) \setminus ([-T,T]\times P \times \ensuremath{\mathdj{R}})$ is cylindrical.
The above moduli spaces of the form \[\mathcal{M}_{\Sigma_0,\Sigma_1}(\gamma^+;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-)\] are transversely cut out for a Baire second category subset of the time-dependent almost complex structures $J_t \colon I \to \mathcal{J}^{\operatorname{adm}}_{J^-,J^+,T}(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}})$. \end{Prop} \begin{proof} The somewhere injectivity result in \cite[Section 8.6]{AudinDamian} can be extended to the case when all asymptotics of the strip are Reeb chords. Observe that the strips that do not pass through $[-T,T]\times P \times \ensuremath{\mathdj{R}}$ are transversely cut out by assumption. Consequently, it suffices to turn on the time-dependence in that region. \end{proof} \subsection{Gluing} \label{sec:gluing} In this section we explain how transversely cut out buildings as in Section \ref{sec:nodal-curves-with} can be glued to give a family of holomorphic discs which converges to the original building. In other words, the moduli space of buildings, which can be compactified by Theorem \ref{thm:compactness}, will, under transversality assumptions, be a compact manifold whose boundary strata consist of broken solutions. We assume from now on that all almost complex structures are regular (see Section \ref{sec:transversality}). To a nodal curve $S$ with normalisation $\widehat{S}$, we associate a tree $T_{\widehat{S}}$ as in Section \ref{sec:deligne-mumf-comp} whose vertices correspond to connected components of $\widehat{S}$, interior edges connecting two vertices correspond to nodes, and whose leaves correspond to ends. The following theorem is a standard consequence of the techniques used in \cite[Section 8]{Ekholm_Contact_Homology}. \begin{Thm}\label{thm:gluing} Assume that the almost complex structure is regular and admissible, and let $\$$ denote either ``$\Sigma$'' or ``$\Sigma_0,\Sigma_1$''.
Consider a holomorphic building in $\mathcal{M}^{k^-|1|k^+}_\$(x_0;x_1,\cdots,x_d)$ associated to a nodal curve $S$ and a tree $T_S$, and consisting of the pseudoholomorphic discs $\{u_i\}$. For $\epsilon_0>0$ sufficiently small, let $\epsilon_0 > \rho_1,\hdots, \rho_\nu>0$ be numbers associated to each pair of nodes asymptotic to an intersection point, and let $\epsilon_0 > \rho_{\nu+1},\hdots,\rho_{\nu+k}>0$, $k=k^++k^- \ge 0$, be numbers associated to each non-empty level of the building except the 0:th level. Then, there exists a uniquely determined punctured pseudoholomorphic disc $$u_{\boldsymbol{\rho}} \in \mathcal{M}_\$(x_0;x_1,\cdots,x_d), \:\: \boldsymbol{\rho}=(\rho_1,\hdots,\rho_{\nu+k}),$$ the so-called \emph{glued solution}, satisfying the property that $u_{\boldsymbol{\rho}_i}$, $i \to \infty$, converges to the original building whenever $\boldsymbol{\rho}_i=(\rho_{1,i},\hdots,\rho_{\nu+k,i})$ satisfies $\lim_{i \to \infty}\rho_{j,i}=0$, $j=1,\hdots,k+\nu$. \end{Thm} \begin{Rem} The expected dimension of a glued solution in $\mathcal{M}_\$(x_0;x_1,\cdots,x_d)$ produced by the above theorem is given by the sum \[\nu+\sum_i\operatorname{Ind}(u_i),\] in terms of the expected dimensions of all involved components $u_1,\hdots,u_{m}$ in the original building, where $\nu$ is the total number of the involved pairs of nodes asymptotic to intersection points. \end{Rem} Together with the compactness result in Theorem \ref{thm:compactness} we obtain the following crucial result. \begin{Cor} \label{cor:boundary} Assume that the almost complex structure is regular and admissible, and let $\$$ denote either ``$\Sigma$'' or ``$\Sigma_0,\Sigma_1$''. The compactification $\overline{\mathcal{M}}_\$(x_0;x_1,\ldots,x_d)$ of a moduli space $\mathcal{M}_\$(x_0;x_1,\ldots,x_d)$ of punctured pseudoholomorphic discs of index one is a transversely cut out one-dimensional manifold with boundary.
Moreover, its boundary points are in bijective correspondence with the broken solutions inside $\mathcal{M}^{k^-|1|k^+}_\$(x_0;x_1,\cdots,x_d)$ whose components $u_1,\hdots,u_{m}$ satisfy \[\nu+\sum_i\operatorname{Ind}(u_i)=1,\] where $\nu$ denotes the total number of pairs of nodes asymptotic to intersection points. \end{Cor} A geometric analysis of the a priori possible breakings leads to the following structure of the boundary of the relevant moduli spaces of dimension one. Let $a,b$ be asymptotics and let $\boldsymbol{\delta}$ and $\boldsymbol{\zeta}$ be sets of Reeb chords of $\Lambda_0^-$ and $\Lambda_1^-$, respectively, such that for regular $J$ the moduli space $\mathcal{M}(a;\boldsymbol{\delta},b,\boldsymbol{\zeta})$ is a $1$-dimensional manifold. By Corollary \ref{cor:boundary} it follows that the compactified moduli space $\overline{\mathcal{M}}(a;\boldsymbol{\delta},b,\boldsymbol{\zeta})$ is a compact $1$-dimensional manifold whose boundary consists of broken solutions. For words $w$, $w'$ and $w''$ in a free group such that $w'$ is a subword of $w$ (denoted $w'\subset w$), we denote by $w_{w'}$ the word obtained by removing $w'$ from $w$, and by $w_{w'}(w'')$ the word obtained by replacing $w'$ with $w''$; for instance, if $w=w_1w'w_2$, then $w_{w'}=w_1w_2$ and $w_{w'}(w'')=w_1w''w_2$. We proceed to explicitly describe the boundary of $\overline{\mathcal{M}}(a;\boldsymbol{\delta},b,\boldsymbol{\zeta})$ for different incoming and outgoing ends. Note that, by the energy estimates of Section \ref{sec:action-energy} (which use the exactness assumptions), there are no components all of whose punctures have negative asymptotics. Also, since every possible component of a broken configuration has non-negative index by the regularity assumptions, we have a restriction on the number of breakings involved in these compactifications. In the following lists, all unions are assumed to be over $0$-dimensional moduli spaces.
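Concretely, the constraint $\nu+\sum_i\operatorname{Ind}(u_i)=1$ of Corollary \ref{cor:boundary} leaves only two possibilities for a boundary configuration (a heuristic summary of the case analysis carried out below):
\begin{itemize}
\item $\nu=1$: two index-$0$ components joined at a single pair of nodes asymptotic to an intersection point, both contained in the cobordism level; or
\item $\nu=0$: a breaking at Reeb chords, with exactly one component of index $1$ contained in a cylindrical level (where it is rigid only modulo the $\ensuremath{\mathdj{R}}$-translation), all remaining components being of index $0$.
\end{itemize}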
For $a=p\in \Sigma_0\cap\Sigma_1$ and $b=\gamma^-\in\mathcal{R}(\Lambda_1^-,\Lambda_0^-)$ we conclude that (see Figure \ref{fig:mpg-}) \begin{align} \label{eq:15} &\partial\overline{\mathcal{M}}(p;\boldsymbol{\delta},\gamma^-,\boldsymbol{\zeta})=\\ &\bigcup_{c,\boldsymbol{\delta}'\boldsymbol{\delta}''=\boldsymbol{\delta},\boldsymbol{\zeta}'\boldsymbol{\zeta}''=\boldsymbol{\zeta}}\mathcal{M}(p;\boldsymbol{\delta}',c,\boldsymbol{\zeta}'')\times \mathcal{M}(c;\boldsymbol{\delta}'',\gamma^-,\boldsymbol{\zeta}')\nonumber\\ &\bigcup_{\boldsymbol{\delta}'\subset\boldsymbol{\delta},\delta_0}\mathcal{M}(p;\boldsymbol{\delta}_{\boldsymbol{\delta}'}(\delta_0),\gamma^-,\boldsymbol{\zeta})\times \mathcal{M}(\delta_0;\boldsymbol{\delta}')\nonumber\\ &\bigcup_{\boldsymbol{\zeta}'\subset\boldsymbol{\zeta},\zeta_0}\mathcal{M}(p;\boldsymbol{\delta},\gamma^-,\boldsymbol{\zeta}_{\boldsymbol{\zeta}'}(\zeta_0))\times\mathcal{M}(\zeta_0;\boldsymbol{\zeta}')\nonumber \end{align} \begin{figure}[ht!] \centering \includegraphics[height=7cm]{Mpg-_n} \caption{A schematic view of the boundary of $\overline{\mathcal{M}}(p;\boldsymbol{\delta},\gamma^-,\boldsymbol{\zeta})$.} \label{fig:mpg-} \end{figure} Observe that the latter two types of boundary points in the above union, i.e.~those involving pseudoholomorphic half-planes, can always appear in the boundary of a $1$-dimensional moduli space. We call them $\partial$-breakings and denote them by $$\overline{\mathcal{M}}^\partial (a;\boldsymbol{\delta},b,\boldsymbol{\zeta}).$$ The upshot of the construction of the Cthulhu complex will be that, when counting the boundary points of the moduli space weighted by an augmentation, the boundary points corresponding to $\partial$-breakings will give a total contribution of $0$; see \ref{II} below for more details.
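To preview why this happens, consider the following schematic sketch (suppressing signs; the precise argument appears in \ref{II} below). A $\partial$-breaking on the $\Lambda^-_0$-side consists of an index-$0$ strip whose negative asymptotics on that side form a word $\delta_1\cdots\delta_{i-1}\delta_0\delta_{i+1}\cdots\delta_m$, together with an index-$1$ half-plane attached at the chord $\delta_0$; the reduced count of such half-planes computes the coefficients of $\partial_{\Lambda^-_0}(\delta_0)$. Weighted by the augmentations, the totality of these boundary points therefore contributes a sum of terms of the form
\[\varepsilon_0^-(\delta_1\cdots\delta_{i-1})\,\varepsilon_0^-\bigl(\partial_{\Lambda^-_0}(\delta_0)\bigr)\,\varepsilon_0^-(\delta_{i+1}\cdots\delta_m)\,\varepsilon_1^-(\boldsymbol{\zeta}),\]
each of which vanishes since $\varepsilon^-_0 \circ \partial_{\Lambda^-_0}=0$ by the definition of an augmentation; the same applies to half-planes with boundary on $\ensuremath{\mathdj{R}} \times \Lambda^-_1$.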
For $a=p,b=q\in \Sigma_0\cap\Sigma_1$ we conclude that \begin{align} \label{eq:37} &\partial\overline{\mathcal{M}}(p;\boldsymbol{\delta},q,\boldsymbol{\zeta})=\\ &\bigcup_{r,\boldsymbol{\delta}'\boldsymbol{\delta}''=\boldsymbol{\delta},\boldsymbol{\zeta}'\boldsymbol{\zeta}''=\boldsymbol{\zeta}}\mathcal{M}(p;\boldsymbol{\delta}'',r,\boldsymbol{\zeta}')\times \mathcal{M}(r;\boldsymbol{\delta}',q,\boldsymbol{\zeta}'')\nonumber\\ &\bigcup\mathcal{M}(p;\boldsymbol{\delta}',\gamma_{10},\boldsymbol{\zeta}''')\times\mathcal{M}(\gamma_{10};\boldsymbol{\delta}'',\gamma_{01},\boldsymbol{\zeta}'')\times\mathcal{M}(\gamma_{01};\boldsymbol{\delta}''',q,\boldsymbol{\zeta}')\nonumber\\ &\bigcup\overline{\mathcal{M}}^\partial(p;\boldsymbol{\delta},q,\boldsymbol{\zeta}),\nonumber \end{align} where the second union is over: \begin{itemize} \item $\gamma_{01}\in\mathcal{R}(\Lambda_0^-,\Lambda_1^-),\gamma_{10}\in\mathcal{R}(\Lambda_1^-,\Lambda_0^-)$; and \item $\boldsymbol{\delta}'\boldsymbol{\delta}''\boldsymbol{\delta}'''=\boldsymbol{\delta},\boldsymbol{\zeta}'\boldsymbol{\zeta}''\boldsymbol{\zeta}'''=\boldsymbol{\zeta}.$ \end{itemize} The second type of breaking here is depicted in Figure \ref{fig:nessiebrek}. \begin{figure}[ht!]
\centering \includegraphics[height=4cm]{nessiebreak_n} \caption{A boundary point in $\overline{\mathcal{M}}(p;\boldsymbol{\delta},q,\boldsymbol{\zeta})$} \label{fig:nessiebrek} \end{figure} For $a=\gamma^+\in\mathcal{R}(\Lambda_1^+,\Lambda_0^+)$ and $b=\gamma^-\in\mathcal{R}(\Lambda_1^-,\Lambda_0^-)$, we have \begin{align} \label{eq:28} &\partial\overline{\mathcal{M}}(\gamma^+;\boldsymbol{\delta},\gamma^-,\boldsymbol{\zeta})=\\ &\bigcup\mathcal{M}(\gamma^+;\delta^+_1\cdots\delta^+_l,\gamma_0,\zeta_1^+\cdots\zeta_m^+)\times\big( \mathcal{M}(\delta^+_1;\boldsymbol{\delta}^-_1)\sqcup\cdots\sqcup\mathcal{M}(\delta^+_l;\boldsymbol{\delta}^-_l)\nonumber\\ &\sqcup\mathcal{M}(\gamma_0;\boldsymbol{\delta}_{l+1}^-,\gamma^-,\boldsymbol{\zeta}_0^-)\sqcup\mathcal{M}(\zeta^+_1;\boldsymbol{\zeta}^-_1)\sqcup\cdots\sqcup\mathcal{M}(\zeta^+_m;\boldsymbol{\zeta}^-_m)\big)\nonumber\\ &\bigcup\mathcal{M}(\gamma^+;\boldsymbol{\delta}',c,\boldsymbol{\zeta}'')\times \mathcal{M}(c;\boldsymbol{\delta}'',\gamma^-,\boldsymbol{\zeta}')\nonumber\\ &\bigcup\overline{\mathcal{M}}^\partial(\gamma^+;\boldsymbol{\delta},\gamma^-,\boldsymbol{\zeta})\nonumber \end{align} with the first union being over \begin{itemize} \item $\gamma_0\in \mathcal{R}(\Lambda_1^+,\Lambda_0^+)$; \item $\delta^+_1,\ldots,\delta^+_l\in\mathcal{R}(\Lambda^+_0)$; \item $\zeta^+_1,\ldots,\zeta^+_m\in\mathcal{R}(\Lambda^+_1)$; and \item $\boldsymbol{\delta}_1^-\boldsymbol{\delta}_2^-\cdots\boldsymbol{\delta}_{l+1}^-=\boldsymbol{\delta},\boldsymbol{\zeta}_0^-\boldsymbol{\zeta}_1^-\cdots\boldsymbol{\zeta}_{m}^-=\boldsymbol{\zeta}$, \end{itemize} and the second being over \begin{itemize} \item $c\in (\Sigma_0\cap\Sigma_1)\cup \mathcal{R}(\Lambda_1^-,\Lambda_0^-)$; and \item $\boldsymbol{\delta}'\boldsymbol{\delta}''=\boldsymbol{\delta},\boldsymbol{\zeta}'\boldsymbol{\zeta}''=\boldsymbol{\zeta}$. \end{itemize} The first type of breaking is depicted in Figure \ref{fig:mg+p}. \begin{figure}[ht!]
\centering \includegraphics[height=4cm]{mg+p_n} \caption{A boundary point in $\overline{\mathcal{M}}(\gamma^+;\boldsymbol{\delta},\gamma^-,\boldsymbol{\zeta})$} \label{fig:mg+p} \end{figure} Finally, for $a=\gamma^+\in\mathcal{R}(\Lambda_1^+,\Lambda_0^+)$ and $b=p\in\Sigma_0\cap\Sigma_1$, we have \begin{align} \label{eq:38} &\partial\overline{\mathcal{M}}(\gamma^+;\boldsymbol{\delta},p,\boldsymbol{\zeta})=\\ &\bigcup\mathcal{M}(\gamma^+;\delta^+_1\cdots\delta^+_l,\gamma_0,\zeta_1^+\cdots\zeta_m^+)\times\big( \mathcal{M}(\delta^+_1;\boldsymbol{\delta}^-_1)\sqcup\cdots\sqcup\mathcal{M}(\delta^+_l;\boldsymbol{\delta}^-_l)\nonumber\\ &\sqcup\mathcal{M}(\gamma_0;\boldsymbol{\delta}_{l+1}^-,p,\boldsymbol{\zeta}_0^-)\sqcup\mathcal{M}(\zeta^+_1;\boldsymbol{\zeta}^-_1)\sqcup\cdots\sqcup\mathcal{M}(\zeta^+_m;\boldsymbol{\zeta}^-_m)\big)\nonumber\\ &\bigcup_{q\in\Sigma_0\cap\Sigma_1}\mathcal{M}(\gamma^+;\boldsymbol{\delta}',q,\boldsymbol{\zeta}'')\times \mathcal{M}(q;\boldsymbol{\delta}'',p,\boldsymbol{\zeta}')\nonumber\\ &\bigcup\mathcal{M}(\gamma^+;\boldsymbol{\delta}',\gamma_{10},\boldsymbol{\zeta}''')\times\mathcal{M}(\gamma_{10};\boldsymbol{\delta}'',\gamma_{01},\boldsymbol{\zeta}'')\times\mathcal{M}(\gamma_{01};\boldsymbol{\delta}''',p,\boldsymbol{\zeta}')\nonumber\\ &\bigcup\overline{\mathcal{M}}^\partial(\gamma^+;\boldsymbol{\delta},p,\boldsymbol{\zeta}),\nonumber \end{align} with the unions being over the same sets as in \eqref{eq:28} and \eqref{eq:37}. \section{Preliminaries of Legendrian contact homology}\label{sec:prel-legendr-cont} Here we give a quick review of the theory of {\em Legendrian contact homology}, and we direct the reader to \cite{LCHgeneral} and \cite{RationalSFT} for more details.
Legendrian contact homology is a Legendrian isotopy invariant associated to a Legendrian submanifold which was defined by Chekanov for Legendrian knots in the standard $\ensuremath{\mathdj{R}}^3$ \cite{Chekanov_DGA_Legendrian} and then extended to Legendrian submanifolds of contactisations of Liouville manifolds by Ekholm, Etnyre and Sullivan \cite{LCHgeneral}. \subsection{The Chekanov-Eliashberg DGA} Let $(Y, \xi)$ be the contactisation of a Liouville manifold $(P, \lambda)$ and $\Lambda\subset (Y,\xi)$ a chord-generic Legendrian submanifold. Fix a unital commutative ring $R$ which, most often, will be $R= \ensuremath{\mathdj{Z}}$ or $R= \ensuremath{\mathdj{Z}}/ 2 \ensuremath{\mathdj{Z}}$. However, recall that we have to assume $\Lambda$ to be spin and fix a spin structure in order to use a ring $R$ which is not a unital algebra over a field of characteristic $2$. The Legendrian contact homology differential graded algebra (DGA) of $\Lambda$, also called the \emph{Chekanov-Eliashberg algebra} and denoted by $(\mathcal A(\Lambda),\partial)$, is the unital tensor $R$-algebra freely generated by the Reeb chords of $\Lambda$. The grading of a Reeb chord generator $\gamma\in\mathcal{R}(\Lambda)$ is given by \[ |\gamma|:=\operatorname{gr}(\gamma) \in \ensuremath{\mathdj{Z}}/\mu_\Lambda(H_2(P \times \ensuremath{\mathdj{R}},\Lambda))\] as defined in Section \ref{sec:grading-reeb-chords}. Here $\mu_\Lambda \colon H_2(P \times \ensuremath{\mathdj{R}},\Lambda) \to \ensuremath{\mathdj{Z}}$ is the Maslov class of $\Lambda$ associated to the contact planes. We define the degree of the unit to be zero. Its differential $\partial$ is of degree $-1$ and counts certain punctured holomorphic discs in the symplectisation of $Y$ which have boundary on $\ensuremath{\mathdj{R}} \times \Lambda$ and are asymptotic to Reeb chords.
More precisely, $\partial$ is defined on a generator $\delta_0 \in \mathcal{R}(\Lambda)$ as \[ \partial(\delta_0):= \sum_{\dim \mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda}(\delta_0;\boldsymbol{\delta})=1} \#\widetilde{\mathcal{M}}_{\ensuremath{\mathdj{R}} \times \Lambda}(\delta_0;\boldsymbol{\delta}) \boldsymbol{\delta},\] where $\boldsymbol{\delta} = \delta_1 \ldots \delta_{m(\boldsymbol{\delta})}$ is a formal product of Reeb chords and $\widetilde{\mathcal{M}}_{\ensuremath{\mathdj{R}} \times \Lambda}(\delta_0;\boldsymbol{\delta})$ is the moduli space defined in Section~\ref{sec:lch-moduli-spaces}. Here (and from now on) the symbol `$\#$' indicates the signed count of $0$-dimensional moduli spaces and we use the convention that it gives $0$ for moduli spaces of dimension different from zero. Then we extend $\partial$ to all of $\mathcal{A}(\Lambda)$ by $R$-linearity together with the Leibniz rule \[ \partial(\delta\boldsymbol{\delta'})=\partial(\delta)\boldsymbol{\delta'}+(-1)^{|\delta|}\delta\partial(\boldsymbol{\delta'}).\] \begin{Rem} By the results in \cite{LiftingPseudoholomorphic}, we may equivalently define $\partial$ by counting holomorphic polygons in $P$ having boundary on the Lagrangian projection of $\Lambda$. For us, however, the above perspective will turn out to be more useful, since it fits better with the SFT framework. \end{Rem} The fact that $\partial^2=0$ and that the homology is independent of the choice of almost complex structure and invariant under Legendrian isotopy was shown in \cite{LCHgeneral}. 
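For later reference, we note the Koszul signs produced by the Leibniz rule: iterating it on a word of three generators yields \[ \partial(\delta_1\delta_2\delta_3)=\partial(\delta_1)\delta_2\delta_3+(-1)^{|\delta_1|}\delta_1\partial(\delta_2)\delta_3+(-1)^{|\delta_1|+|\delta_2|}\delta_1\delta_2\partial(\delta_3),\] so that the sign in front of each term is determined by the total degree of the generators preceding the one being differentiated.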
\subsection{The DGA morphism induced by an exact Lagrangian cobordism} Given an exact Lagrangian cobordism $\Sigma \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ from $\Lambda^-$ to $\Lambda^+$, there is an induced unital DGA morphism \[ \Phi_{\Sigma} \colon (\mathcal{A}(\Lambda^+),\partial_{\Lambda^+}) \to (\mathcal{A}(\Lambda^-),\partial_{\Lambda^-}),\] constructed in \cite{RationalSFT} and \cite{Ekhoka}, following the general philosophy of SFT. This DGA morphism is defined on a generator $\delta^+$ of $\mathcal{A}(\Lambda^+)$ by the $J$-holomorphic disc count \[ \Phi_{\Sigma}(\delta^+):= \sum_{\dim \mathcal{M}_\Sigma(\delta^+;\boldsymbol{\delta^-})=0}\# \mathcal{M}_\Sigma(\delta^+;\boldsymbol{\delta^-}) \boldsymbol{\delta^-},\] and then extended as a unital algebra map. Here $\boldsymbol{\delta^-}=\delta^-_1 \ldots \delta^-_{m(\boldsymbol{\delta^-})}$ is an element of $\mathcal{A}(\Lambda^-)$, and the moduli space $\mathcal{M}_\Sigma(a;\mathbf{b})$ is defined in Section~\ref{sec:lch-moduli-spaces}. Observe that, in order to define the above map with coefficients in $\ensuremath{\mathdj{Z}}$, we must fix a spin structure on each end $\Lambda^\pm$ which can be extended to a spin structure on $\Sigma$. \begin{Ex} For the trivial (cylindrical) cobordism $\ensuremath{\mathdj{R}} \times \Lambda$, we get $\Phi_{\ensuremath{\mathdj{R}} \times \Lambda}=\mathrm{id}_{\mathcal{A}(\Lambda)}$. \end{Ex} The fact that $\Phi_{\Sigma}$ is a chain map was shown in \cite{RationalSFT}. In fact, based upon the abstract perturbations in the same paper, it follows that the DGA-homotopy class (as defined in \cite[Lemma 3.13]{Ekhoka}) of $\Phi_{\Sigma}$ is independent of the choices of almost complex structure and Hamiltonian isotopy class of $\Sigma$. \begin{Rem} The invariance properties of our Floer theory do not rely on the above abstract perturbation argument in the cases under consideration here, i.e.~that of a symplectisation of a contactisation.
The reason is that, in these cases, our invariance statement boils down to showing that the complex $\operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1)$ is \emph{acyclic}. \end{Rem} \subsection{Augmentations and bilinearised Legendrian contact homology}\label{subsec: LCH} An {\em ($R$-valued) augmentation} is a unital DGA morphism $\varepsilon: {\mathcal A}(\Lambda) \to R$, where $R$ is regarded as a DGA with trivial differential. In general, the Chekanov-Eliashberg algebra need not admit any augmentations. However, in the case when the Legendrian submanifold $\Lambda \subset Y$ has an exact Lagrangian filling $\Sigma \subset \ensuremath{\mathdj{R}} \times Y$, the above unital DGA morphism \[\varepsilon_{\Sigma} := \Phi_{\Sigma}\] is indeed an augmentation (also see \cite{Ekholm_FloerlagCOnt}). In fact, one can think of the trivial DGA $R$ as the Chekanov-Eliashberg DGA of the empty set. On the other hand, there are plenty of examples of Legendrian submanifolds which do not admit any exact Lagrangian filling in the symplectisation, but whose Chekanov-Eliashberg algebra still admits augmentations. For instance, this is the case for the Legendrian twist knots constructed in \cite{LegendrianTwist}; see the discussion in \cite[Section 10.1]{Sabloff_Traynor}. Augmentations are important for the following reason. Loosely speaking, they should be seen as ``obstruction cocycles'' for the Chekanov-Eliashberg algebra. Given an augmentation, Chekanov defined linearised Legendrian contact homology in \cite{Chekanov_DGA_Legendrian}, which is a Legendrian isotopy invariant in the form of a chain complex spanned by the Reeb chords on $\Lambda$. Linearised Legendrian contact homology is computationally tractable at the expense of discarding non-linear (and non-commutative) information. Recently, Bourgeois and the first author have introduced a generalisation of linearised LCH which is called bilinearised Legendrian contact homology \cite{augcat}. 
It is constructed using \emph{a pair} of augmentations $\varepsilon_0$ and $\varepsilon_1$ of $(\mathcal A(\Lambda),\partial)$. We proceed to give a brief description of this complex. The bilinearised Legendrian contact homology complex is the free module \[ LCC^{\varepsilon_0,\varepsilon_1}_\bullet(\Lambda):=R\langle \mathcal{R}(\Lambda)\rangle\] spanned by the Reeb chords with the above grading, whose differential is of degree $-1$ and defined by \begin{align*} \partial^{\varepsilon_0,\varepsilon_1}(\delta)=\sum_{\dim \mathcal{M}_{\ensuremath{\mathdj{R}} \times \Lambda}(\delta;\boldsymbol{\delta})=1} \sum_{i=1}^{m_{\boldsymbol{\delta}}} \#\widetilde{\mathcal{M}}_{\ensuremath{\mathdj{R}} \times \Lambda}(\delta;\boldsymbol{\delta})\varepsilon_0(\delta_1\hdots \delta_{i-1}) \varepsilon_1(\delta_{i+1}\hdots \delta_{m_{\boldsymbol{\delta}}})\delta_i. \end{align*} The corresponding homology groups will be denoted by \[LCH^{\varepsilon_0,\varepsilon_1}_\bullet(\Lambda).\] The set of augmentations of the Chekanov-Eliashberg algebra $({\mathcal A}(\Lambda), \partial)$ is not a Legendrian isotopy invariant of $\Lambda$. However, as shown in \cite{augcat}, the invariance proof of \cite{Chekanov_DGA_Legendrian} can be generalised to show that the set of isomorphism classes \[ \{LCH^{\varepsilon_0,\varepsilon_1}_\bullet(\Lambda)\}/\sim\] of the graded modules, where $(\varepsilon_0,\varepsilon_1)$ ranges over all pairs of augmentations, is a Legendrian isotopy invariant of $\Lambda$. Observe that, when $\varepsilon_0=\varepsilon_1$, we simply recover Chekanov's linearised LCH. We will also be interested in the dual complex of $(LCC^{\varepsilon_0,\varepsilon_1}_\bullet(\Lambda),\partial^{\varepsilon_0,\varepsilon_1})$, the so-called bilinearised Legendrian cohomology complex, whose differential is of degree $1$.
This complex will be denoted by \[ (LCC_{\varepsilon_0,\varepsilon_1}^\bullet(\Lambda),d_{\varepsilon_0,\varepsilon_1}),\] while we write \[LCH_{\varepsilon_0,\varepsilon_1}^\bullet(\Lambda)\] for the corresponding cohomology group. The bilinearised LCH is a stronger invariant than linearised LCH, due to the fact that it remembers some of the ``non-commutativity'' of $\mathcal A(\Lambda)$. In \cite{augcat} it was shown that these homology groups are the morphism spaces of an $A_\infty$-category $\mathcal{A}\mathrm{ug}_-(\Lambda)$ called the \emph{augmentation category}, whose objects are the augmentations of the Chekanov-Eliashberg algebra of $\Lambda$. Given an exact Lagrangian cobordism $\Sigma$ from $\Lambda^-$ to $\Lambda^+$, and two augmentations $\varepsilon^-_0$ and $\varepsilon^-_1$ of $\Lambda^-$, the DGA map $\Phi_\Sigma$ described in the previous section induces a linear chain map from $LCC^\bullet_{\varepsilon_0^-,\varepsilon_1^-}(\Lambda^-)$ to $LCC^\bullet_{\varepsilon_0^+,\varepsilon_1^+}(\Lambda^+)$ via the formula $$\Phi_\Sigma^{\varepsilon^-_0,\varepsilon^-_1}(\gamma^-)=\sum\limits_{\gamma^+,\boldsymbol{\delta}^-,\boldsymbol{\zeta}^-}\#\mathcal{M}_\Sigma(\gamma^+;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-)\varepsilon_0^-(\boldsymbol{\delta}^-)\varepsilon_1^-(\boldsymbol{\zeta}^-)\gamma^+,$$ where $\varepsilon_i^+=\varepsilon^-_i\circ\Phi_\Sigma$ for $i=0,1$. \section{The Cthulhu complex} \label{sec:Cthulhu-complex} Let $\Sigma_0$ and $\Sigma_1$ be two exact Lagrangian cobordisms inside the symplectisation $(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}},d(e^t\alpha))$ of a contactisation. We assume that: \begin{itemize} \item $\Sigma_0\pitchfork \Sigma_1$ (in particular, this implies that $\Lambda_0^\pm\cap \Lambda_1^\pm=\emptyset$); \item the links $\Lambda_0^\pm \sqcup \Lambda_1^\pm$ are chord-generic.
\end{itemize} The \emph{Cthulhu complex} of the pair $(\Sigma_0,\Sigma_1)$ is the complex whose underlying graded $R$-module, for a unital ring $R$, is \[\operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1):=C^{\bullet}(\Lambda_0^+,\Lambda_1^+)[2]\oplus C^{\bullet}(\Sigma_0,\Sigma_1)\oplus C^{\bullet}(\Lambda^-_0,\Lambda^-_1)[1].\] Here $C^{\bullet}(\Lambda_0^\pm,\Lambda_1^\pm)$ is the free graded module spanned by the Reeb chords from $\Lambda_1^\pm$ to $\Lambda_0^\pm$ and $C^{\bullet}(\Sigma_0,\Sigma_1)$ is the free graded module spanned by the intersection points $\Sigma_0 \cap \Sigma_1$. The gradings are taken as described in Section \ref{sec:grading}, depending on the choice of a Maslov potential. \subsection{The Cthulhu differential} Fix two augmentations $\varepsilon^-_0$ and $\varepsilon^-_1$ of the Chekanov-Eliashberg algebras of $\Lambda^-_0$ and $\Lambda^-_1$, respectively, both of which are defined using a cylindrical almost complex structure $J^-$. We will define the \emph{Cthulhu differential} $\mathfrak{d}_{\varepsilon^-_0,\varepsilon^-_1}$, which is a differential of degree $1$ on the above graded module. With respect to the above decomposition, this differential takes the form \begin{equation}\label{eq:20} \mathfrak{d}_{\varepsilon^-_0,\varepsilon^-_1}=\begin{pmatrix} d_{++} & d_{+0} & d_{+-} \\ 0 & d_{00} & d_{0-} \\ 0 & d_{-0} & d_{--} \end{pmatrix}. \end{equation} Loosely speaking, every non-zero entry in this matrix is given by a count of rigid \emph{punctured pseudoholomorphic strips} of the appropriate type, as described in Section \ref{sec:Cthulu-moduli-spaces}, where the counts are ``weighted by'' the above augmentations.
First, however, we need to fix the choice of an admissible almost complex structure $J \in \mathcal{J}_{J^-,J^+,T}^{\operatorname{adm}}(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}})$, i.e.~a compatible almost complex structure on $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ satisfying the assumptions in Section \ref{sec:admissible}. In particular, $J$ coincides with the cylindrical almost complex structures $J^-$ and $J^+$ in subsets of the form $(-\infty,-T]\times P \times \ensuremath{\mathdj{R}}$ and $[T,+\infty) \times P \times \ensuremath{\mathdj{R}}$, respectively. In order for these moduli spaces to be transversely cut out, we moreover assume that the almost complex structure has been chosen generically as in Section \ref{sec:transversality}. Below we give a careful description of each term in the above matrix, where the degrees mentioned are the degrees as maps between the above summands \emph{without} the shifts in grading appearing in the above definition of $\operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1)$. In the cases when we want to emphasise to which pair of Lagrangian cobordisms the differential belongs, we use the superscript ``$\Sigma_0,\Sigma_1$'', i.e.~$\mathfrak{d}_{\varepsilon^-_0,\varepsilon^-_1}^{\Sigma_0,\Sigma_1}$, $d_{+-}^{\Sigma_0,\Sigma_1}$, etc. \subsubsection{The bilinearised LCH differential} \label{sec:bilin-diff} We define $\varepsilon^+_i := \varepsilon^-_i \circ \Phi_{\Sigma_i,J}$, $i=0,1$, to be the pull-backs of the above augmentations to augmentations of the Chekanov-Eliashberg algebras of $\Lambda^+_0$ and $\Lambda^+_1$, respectively. The term $d_{\pm\pm}$ is the bilinearised Legendrian cohomology differential for $(\Lambda_0^\pm,\Lambda_1^\pm)$ induced by the pair $(\varepsilon_0^\pm,\varepsilon_1^\pm)$ of augmentations as defined in \cite{augcat} and described in Section \ref{subsec: LCH}.
In other words, it is given by \begin{eqnarray} \label{eq:2} \lefteqn{d_{\pm\pm}(\gamma_2^\pm):=d_{\varepsilon_0^\pm,\varepsilon_1^\pm}(\gamma_2^\pm)=}\\ \nonumber &=&\sum_{\gamma_1^\pm}\sum_{\boldsymbol{\delta}^\pm,\boldsymbol{\zeta}^\pm}\#\widetilde{\mathcal{M}}_{\ensuremath{\mathdj{R}} \times \Lambda^\pm_0,\ensuremath{\mathdj{R}} \times \Lambda^\pm_1}(\gamma_1^\pm;\boldsymbol{\delta}^\pm,\gamma_2^\pm,\boldsymbol{\zeta}^\pm;J^\pm)\cdot\varepsilon^\pm_0(\boldsymbol{\delta}^\pm)\varepsilon_1^\pm(\boldsymbol{\zeta}^\pm)\cdot \gamma_1^\pm. \end{eqnarray} It follows from Equation \eqref{eq:22} that this term is of degree $1$. \subsubsection{The Floer differential} \label{sec:floer-differential} The differential $d_{00}$ can be seen as a modification of the differential in Lagrangian Floer homology as introduced by Floer in \cite{FloerHFlag}, where the version defined here has found its inspiration in \cite{Ekholm_FloerlagCOnt}. For an intersection point $q$, it is defined by the count \begin{align} \label{eq:1} d_{00}(q):=\sum_{p}\sum_{\boldsymbol{\delta}^-,\boldsymbol{\zeta}^-}\#\mathcal{M}_{\Sigma_0,\Sigma_1}(p;\boldsymbol{\delta}^-,q,\boldsymbol{\zeta}^-;J)\cdot\varepsilon^-_0(\boldsymbol{\delta}^-)\varepsilon_1^-(\boldsymbol{\zeta}^-)\cdot p. \end{align} From Equation \eqref{eq:24} we deduce that this map is of degree $1$. \subsubsection{The Cultist maps} \label{sec:cultists-maps} The maps $d_{+0}$ and $d_{0-}$ are defined using the moduli spaces described in Section \ref{sec:floer-lch-moduli} and in Section \ref{sec:lch-floer-moduli}, respectively. A version of the map $d_{+0}$ appears in \cite{Ekholm_FloerlagCOnt} in the case when the negative ends of the cobordisms are empty.
More precisely, we define \begin{align} \label{eq:3} & d_{0-}(\gamma^-):=\sum_{p}\sum_{\boldsymbol{\delta}^-,\boldsymbol{\zeta}^-}\#\mathcal{M}_{\Sigma_0,\Sigma_1}(p;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-)\cdot\varepsilon^-_0(\boldsymbol{\delta}^-)\varepsilon_1^-(\boldsymbol{\zeta}^-)\cdot p, \\&d_{+0}(q):=\sum_{\gamma^+}\sum_{\boldsymbol{\delta}^-,\boldsymbol{\zeta}^-}\#\mathcal{M}_{\Sigma_0,\Sigma_1}(\gamma^+;\boldsymbol{\delta}^-,q,\boldsymbol{\zeta}^-)\cdot\varepsilon^-_0(\boldsymbol{\delta}^-)\varepsilon_1^-(\boldsymbol{\zeta}^-)\cdot \gamma^+. \end{align} Equations \eqref{eq:23} (resp. \eqref{eq:25}) show that $d_{0-}$ (resp. $d_{+0}$) is of degree $2$ (resp. $-1$). \subsubsection{The LCH map} \label{sec:lch-map} The map $d_{+-}$ is defined analogously to the bilinearised map in LCH induced by an exact Lagrangian cobordism. It is given as follows: \begin{align} \label{eq:4} & d_{+-}(\gamma^-):=\sum_{\gamma^+}\sum_{\boldsymbol{\delta}^-,\boldsymbol{\zeta}^-}\#\mathcal{M}_{\Sigma_0,\Sigma_1}(\gamma^+;\boldsymbol{\delta}^-,\gamma^-,\boldsymbol{\zeta}^-)\cdot\varepsilon^-_0(\boldsymbol{\delta}^-)\varepsilon_1^-(\boldsymbol{\zeta}^-)\cdot \gamma^+. \end{align} As follows from Equation \eqref{eq:22}, this map is of degree $0$. \subsubsection{The Nessie map} \label{sec:banana-map} Let $C_\bullet(\Lambda_1^\pm,\Lambda_0^\pm)$ be the dual of $C^\bullet(\Lambda_1^\pm,\Lambda_0^\pm)$ with $\delta_{\pm\pm}$ the induced adjoint of the differential $d_{\pm \pm}^{\Sigma_1,\Sigma_0}$. These differentials are entries in the Cthulhu differential $\mathfrak{d}^{\Sigma_1,\Sigma_0}_{\varepsilon^-_1,\varepsilon^-_0}$. Further, let $CF_\bullet(\Sigma_0,\Sigma_1)$ be the dual of $CF^\bullet(\Sigma_0,\Sigma_1)$. Observe that, since all the above spaces are endowed with a canonical basis, we are free to identify any such space with its dual. 
Moreover, with a fixed choice of Maslov potentials for the two cobordisms, there is also a canonical identification \begin{gather*} CF^\bullet(\Sigma_0,\Sigma_1)=CF_\bullet(\Sigma_0,\Sigma_1)=CF_{n+1-\bullet}(\Sigma_1,\Sigma_0), \end{gather*} when reversing the ordering of the pair of cobordisms. The count of ``banana'' pseudoholomorphic strips gives rise to a map \begin{gather*} b \colon C_{n-1-\bullet}(\Lambda_1^\pm,\Lambda_0^\pm)\rightarrow C^\bullet(\Lambda_0^\pm,\Lambda_1^\pm),\\ \gamma_{01} \mapsto \sum_{\gamma_{10}}\sum_{\boldsymbol{\delta},\boldsymbol{\zeta}}\#\mathcal{M}_{\Sigma_0,\Sigma_1}(\gamma_{10};\boldsymbol{\delta},\gamma_{01},\boldsymbol{\zeta})\cdot \varepsilon_0(\boldsymbol{\delta})\varepsilon_1(\boldsymbol{\zeta})\cdot\gamma_{10}, \end{gather*} where the degree of the map follows from Equation \eqref{eq:26}. Using $\delta_{-0}:CF_\bullet(\Sigma_1,\Sigma_0)\rightarrow C_{\bullet-2}(\Lambda_1^-,\Lambda_0^-)$ to denote the adjoint of the map $d^{\Sigma_1,\Sigma_0}_{0-}$ appearing as an entry of $\mathfrak{d}^{\Sigma_1,\Sigma_0}_{\varepsilon^-_1,\varepsilon^-_0}$, we are finally ready to define \[ d_{-0}:=b\circ \delta_{-0} \colon CF^\bullet(\Sigma_0,\Sigma_1) = CF_{n+1-\bullet}(\Sigma_1,\Sigma_0)\rightarrow C^\bullet(\Lambda_0^-,\Lambda_1^-),\] which is of degree $0$. The moduli spaces above were previously considered in the Floer theory involving concave ends due to Akaho in \cite{Akaho_conc}. Also, see \cite{Akaho}, where Morse homology in the presence of a non-empty boundary was considered by the same author. \subsection{The proof of $\mathfrak{d}_{\varepsilon^-_0,\varepsilon^-_1}\,^2=0$} \label{sec:Cthulu-differential} We are now ready to present and prove the following central result. \begin{Thm}\label{thm:dsquare} Let $\Sigma_i \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$, $i=0,1$, be a pair of exact Lagrangian cobordisms from $\Lambda^-_i$ to $\Lambda^+_i$ as above.
Given a generic admissible almost complex structure $J\in \mathcal{J}_{J^-,J^+,T}^{\operatorname{adm}}(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}})$ in the sense of Section \ref{sec:admissible}, together with augmentations $\varepsilon^-_i$ of the Chekanov-Eliashberg algebras of $\Lambda^-_i$ defined using $J^-$, and assuming that the entries above have been defined using $J$, we have that \begin{itemize} \item $\mathfrak{d}_{\varepsilon^-_0,\varepsilon^-_1}$ is well-defined, and \item $\mathfrak{d}_{\varepsilon^-_0,\varepsilon^-_1}\,^2=0$. \end{itemize} In the case when $\Sigma_i$, $i=0,1$, are both spin, the above counts can moreover be performed with signs, giving coefficients in $\ensuremath{\mathdj{Z}}$, given the choice of a spin structure on each cobordism. In general, the count can always be performed in $\ensuremath{\mathdj{Z}}_2$. \end{Thm} In order to prove $\mathfrak{d}_{\varepsilon^-_0,\varepsilon^-_1}\,^2=0$, we need to study the boundary points of one-dimensional moduli spaces of pseudoholomorphic strips. As usual in Floer theory, the compositions of the entries of the differential are computed by counting broken pseudoholomorphic strips, which correspond to the boundary points of these moduli spaces. Recall that, in our setting, every strip is punctured, and that all counts are \emph{weighted} by the augmentations chosen. For this reason, we start by recording two important points that need to be taken into account when performing these counts: \begin{enumerate}[label={(\Roman*)}] \item Recall that the punctured pseudoholomorphic strips used in the definition of $d_{++}$ live in the top level and are allowed to have negative punctures. When adjoining the rigid punctured strips in the definition of $d_{++}$ and $d_{+*}$, where $*=0,-$, i.e.~forming the glued configurations corresponding to the compositions $d_{++} d_{+*}$, we do not necessarily obtain a broken pseudoholomorphic strip.
In order to obtain a broken strip, we will adjoin the pseudoholomorphic half-planes that appear in the count defining the right-hand side of \[ \varepsilon^+_i = \varepsilon^-_i \circ \Phi_{\Sigma_i}, \:\: i=0,1,\] to the middle level. From the latter equality, it also follows that the composition $d_{++} d_{+*}$ is indeed obtained by counting buildings of precisely this form. \label{I} \item Not all broken punctured pseudoholomorphic strips correspond to two glued pseudoholomorphic \emph{strips}. Namely, as shown in Section \ref{sec:gluing}, there are so-called $\partial$-breakings that consist of a punctured strip together with a punctured half-plane of index $1$ having boundary on $\ensuremath{\mathdj{R}} \times \Lambda^\pm_i$, $i=0,1$. Recall the fact that the counts of the latter half-planes define the differential $\partial_\pm$ of the Chekanov-Eliashberg algebras of $\Lambda^\pm_0 \cup \Lambda^\pm_1$, and that the equality \[ \varepsilon^\pm_i \circ \partial_\pm = 0, \:\: i=0,1,\] holds by definition. It thus follows that the total count of broken strips of this kind must vanish in the case when the count is weighted by the augmentations. \label{II} \end{enumerate} \begin{proof} The fact that the map is well-defined follows from the compactness result in Theorem \ref{thm:compactness} together with the transversality results in Section \ref{sec:transversality}. Namely, all moduli spaces of index $0$ having fixed asymptotics are compact (since there is a uniform upper bound on their total energy) $0$-dimensional manifolds.
We now make a term-by-term argument for the matrix \begin{eqnarray} \nonumber \lefteqn{\mathfrak{d}_{\varepsilon^-_0,\varepsilon^-_1}\,^2 = } \\ \nonumber & &\begin{pmatrix} d_{++}\,^2& d_{++}d_{+0}+d_{+0}d_{00}+d_{+-}d_{-0} & d_{++}d_{+-}+d_{+0}d_{0-}+d_{+-}d_{--} \\ 0 & d_{00}d_{00}+d_{0-}d_{-0} & d_{00}d_{0-}+d_{0-}d_{--} \\ 0 & d_{-0}d_{00}+d_{--}d_{-0} & d_{-0}d_{0-}+d_{--}d_{--} \end{pmatrix} \end{eqnarray} in order to show that all entries vanish. \begin{itemize} \item $d_{++}^2=0$. The term $d_{++}$ is the standard bilinearised Legendrian contact cohomology differential \cite{augcat} restricted to mixed chords from $\Lambda^+_1$ to $\Lambda^+_0$. More precisely, the subspace generated by these chords forms a subcomplex of the linearised cohomology complex of the link $\Lambda^+_0\sqcup \Lambda_1^+$, under the assumption that we use an augmentation of the link which vanishes on mixed chords, while it takes the value $\varepsilon^+_i(\gamma_i)$ on a chord $\gamma_i\in \mathcal{R}(\Lambda_i^+)$. This term thus vanishes, as mentioned in Section \ref{subsec: LCH}. \item $d_{++}d_{+0}+d_{+0}d_{00}+d_{+-}d_{-0}=0$. We must study the boundary of a one-dimensional moduli space $\mathcal{M}_{\Sigma_0,\Sigma_1}(\gamma;\boldsymbol{\delta},p,\boldsymbol{\zeta})$. The possibilities for the breakings involved are schematically depicted in Figure \ref{fig:breaking3}.
We claim that, when counting these boundary points weighted by the augmentations $\varepsilon^-_i$, $i=0,1$, we get the contribution \[ \langle (d_{++}d_{+0}+d_{+0}d_{00}+d_{+-}d_{-0})(p),\gamma \rangle.\] Indeed, inspecting the boundary components of different types as described in Equation \eqref{eq:38} we obtain the terms $\langle d_{++}d_{+0}(p),\gamma \rangle$ (here we use $\varepsilon^+_i=\varepsilon^-_i \circ \Phi_{\Sigma_i}$; see \ref{I}), $\langle d_{+0}d_{00}(p),\gamma \rangle$, and $\langle d_{+-}d_{-0}(p),\gamma \rangle$, together with the $\partial$-breakings which can be seen to contribute zero (here we use $\varepsilon^\pm_i\circ \partial^\pm=0$; see \ref{II}). \begin{figure}[ht!] \centering \vspace{0.5cm} \labellist \pinlabel $\Lambda^+_0\cup{\color{red}\Lambda^+_1}$ at -40 145 \pinlabel $\Sigma_0\cup{\color{red}\Sigma_1}$ at -40 90 \pinlabel $\Lambda^-_0\cup{\color{red}\Lambda^-_1}$ at -40 30 \pinlabel $0$ at 16 160 \pinlabel $0$ at 17 103 \pinlabel $0$ at 44 81 \pinlabel $1$ at 97 160 \pinlabel $0$ at 98 103 \pinlabel $0$ at 168 160 \pinlabel $0$ at 188 80 \pinlabel $1$ at 192 32 \pinlabel $0$ at 257 160 \pinlabel $0$ at 257 104 \pinlabel $1$ at 282 14 \pinlabel $0$ at 306 70 \endlabellist \includegraphics[scale=0.65]{breaking3} \caption{Breakings involved in $d_{++}d_{+0}+d_{+0}d_{00}+d_{+-}d_{-0}=0$. The number on each component denotes its Fredholm index.} \label{fig:breaking3} \end{figure} \item $d_{++}d_{+-}+d_{+0}d_{0-}+d_{+-}d_{--}=0$. The argument is similar to the argument above, but where Equation \eqref{eq:28} has been used. The possibilities for the breakings involved are schematically depicted in Figure \ref{fig:breaking1} (also, see Figure \ref{fig:mg+p} for some of the breakings with explicit negative ends). \begin{figure}[ht!]
\centering \vspace{0.5cm} \labellist \pinlabel $\Lambda^+_0\cup{\color{red}\Lambda^+_1}$ at -40 145 \pinlabel $\Sigma_0\cup{\color{red}\Sigma_1}$ at -40 90 \pinlabel $\Lambda^-_0\cup{\color{red}\Lambda^-_1}$ at -40 30 \pinlabel $0$ at 40 160 \pinlabel $0$ at 40 69 \pinlabel $1$ at 40 9 \pinlabel $0$ at 95 9 \pinlabel $0$ at 95 69 \pinlabel $1$ at 95 160 \pinlabel $0$ at 151 160 \pinlabel $0$ at 151 102 \pinlabel $0$ at 151 69 \pinlabel $0$ at 151 9 \pinlabel $0$ at 200 9 \pinlabel $1$ at 240 32 \pinlabel $0$ at 216 102 \pinlabel $0$ at 216 160 \pinlabel $0$ at 287 160 \pinlabel $0$ at 287 69 \pinlabel $1$ at 312 15 \pinlabel $0$ at 375 93 \pinlabel $0$ at 384 9 \endlabellist \includegraphics[scale=0.65]{breaking1} \caption{Breakings involved in $d_{++}d_{+-}+d_{+0}d_{0-}+d_{+-}d_{--}=0$. The number on each component denotes its Fredholm index.} \label{fig:breaking1} \end{figure} \item $d_{00}d_{00}+d_{0-}d_{-0}=0$. Again this follows as above, but while using Equation \eqref{eq:37}. The possibilities for the breakings involved are schematically depicted in Figure \ref{fig:breaking2} (also, see Figure \ref{fig:nessiebrek}). \begin{figure}[ht!] \centering \vspace{0.5cm} \labellist \pinlabel $\Sigma_0\cup{\color{red}\Sigma_1}$ at -40 90 \pinlabel $\Lambda^-_0\cup{\color{red}\Lambda^-_1}$ at -40 30 \pinlabel $0$ at 20 86 \pinlabel $0$ at 45 83 \pinlabel $0$ at 112 73 \pinlabel $1$ at 112 36 \pinlabel $0$ at 176 72 \pinlabel $1$ at 200 16 \pinlabel $0$ at 224 72 \endlabellist \includegraphics[scale=0.65]{breaking2} \caption{Breakings involved in $d_{00}d_{00}+d_{0-}d_{-0}=0$. The number on each component denotes its Fredholm index.} \label{fig:breaking2} \end{figure} \item $d_{00}d_{0-}+d_{0-}d_{--}=0$. This follows similarly as above, but while using Equation \eqref{eq:15}. The possibilities for the breakings involved are schematically depicted in Figure \ref{fig:breaking4}. \begin{figure}[ht!] 
\centering \vspace{0.5cm} \labellist \pinlabel $\Lambda^+_0\cup{\color{red}\Lambda^+_1}$ at -40 160 \pinlabel $\Sigma_0\cup{\color{red}\Sigma_1}$ at -40 105 \pinlabel $\Lambda^-_0\cup{\color{red}\Lambda^-_1}$ at -40 45 \pinlabel $0$ at 16 63 \pinlabel $0$ at 16 87 \pinlabel $0$ at 45 105 \pinlabel $0$ at 94 95 \pinlabel $1$ at 90 63 \pinlabel $0$ at 160 63 \pinlabel $0$ at 188 98 \pinlabel $1$ at 194 50 \pinlabel $0$ at 250 63 \pinlabel $0$ at 272 114 \pinlabel $1$ at 309 35 \pinlabel $0$ at 345 87 \endlabellist \includegraphics[scale=0.65]{breaking4} \caption{Breakings involved in $d_{00}d_{0-}+d_{0-}d_{--}=0$. The number on each component denotes its Fredholm index.} \label{fig:breaking4} \end{figure} \item $d_{-0}d_{00}+d_{--}d_{-0}=0$. Analysing the breakings of holomorphic bananas, we get that the map $b$ satisfies $b\circ \delta_{--}=d_{--}\circ b$ (see Figure \ref{fig:breakingbanana}), where $\delta_{--}$ again denotes the adjoint of $d_{--}^{\Sigma_1,\Sigma_0}$. Hence, we get that \[d_{-0}d_{00}+d_{--}d_{-0}=b\delta_{-0}d_{00}+d_{--}b\delta_{-0}=b\big(\delta_{-0}d_{00}+\delta_{--}\delta_{-0}\big),\] where $\delta_{-0}$ is the adjoint of $d^{\Sigma_1,\Sigma_0}_{0-}$. Since $\partial_{00}:=d_{00}$ is the adjoint of $d^{\Sigma_1,\Sigma_0}_{00}$, the factor $\delta_{-0}d_{00}+\delta_{--}\delta_{-0}$ above is actually the adjoint of $d^{\Sigma_1,\Sigma_0}_{00}d^{\Sigma_1,\Sigma_0}_{0-}+d^{\Sigma_1,\Sigma_0}_{0-}d^{\Sigma_1,\Sigma_0}_{--}$. Since the latter term vanishes by the previous case, the claim now follows. See Figure \ref{fig:nessiebrek}. \begin{figure}[ht!] 
\centering \vspace{0.5cm} \labellist \pinlabel $\Lambda^-_0\cup{\color{red}\Lambda^-_1}$ at -40 90 \pinlabel $\Lambda^-_0\cup{\color{red}\Lambda^-_1}$ at -40 30 \pinlabel $1$ at 16 102 \pinlabel $0$ at 65 102 \pinlabel $1$ at 40 16 \pinlabel $1$ at 133 71 \pinlabel $1$ at 134 33 \pinlabel $0$ at 193 102 \pinlabel $1$ at 243 104 \pinlabel $0$ at 219 16 \endlabellist \includegraphics[scale=0.65]{breakingbanana} \caption{Breakings involved in $b\circ \delta_{--}=d_{--}\circ b$.} \label{fig:breakingbanana} \end{figure} \item $d_{-0}d_{0-}+d_{--}d_{--}=0$. For action reasons we must have $d_{-0}d_{0-}=0$. Finally, $d_{--}d_{--}=0$ holds since $d_{--}$ is the bilinearised Legendrian contact cohomology differential, i.e.~for the same reason that $d_{++}d_{++}=0$. \end{itemize} \end{proof} \section{The transfer and co-transfer map for concatenations of cobordisms} \label{sec:conc-cobord} Recall that two exact Lagrangian cobordisms having a common end can be concatenated; see Section \ref{sec:lagr-cobord}. In this section we will provide formulas which relate the Floer homologies of the different pieces of such a concatenation. This will be done by introducing a relative version of Viterbo's transfer map, originally defined in \cite{Viterbo_Functors} for symplectic (co)homology. (Recall that Viterbo's transfer map concerns concatenations of \emph{symplectic cobordisms}.) For the Hamiltonian formulation of wrapped Floer homology, the transfer map was constructed and treated in \cite{WrappedFuk}. In the following we will consider the exact Lagrangian cobordisms $V_0,V_1,W_0,W_1 \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ inside the symplectisation of a contactisation. We assume that $V_i$ is an exact Lagrangian cobordism from $\Lambda_i^-$ to $\Lambda_i$, and that $W_i$ is an exact Lagrangian cobordism from $\Lambda_i$ to $\Lambda_i^+$, $i=0,1$.
It follows that we can form the concatenations \[ V_i \odot W_i \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}, \:\:i=0,1,\] being exact Lagrangian cobordisms from $\Lambda^-_i$ to $\Lambda^+_i$. Under the further assumption that the negative ends of $V_i$, $i=0,1$, are empty, a transfer map \[ \Phi_{W_0,W_1} \colon\thinspace \operatorname{Cth}_\bullet(V_0,V_1) \to \operatorname{Cth}_\bullet(V_0 \odot W_0, V_1 \odot W_1)\] was constructed in \cite[Section 4.2.2]{Ekholm_FloerlagCOnt}; recall that the analytic set-up of the latter article is the same as the one used here. Our construction of the transfer map will be a straightforward generalisation of this construction to the case when the negative ends are non-empty. We will also construct a map that we call a \emph{co-transfer map} \[\Phi^{V_0,V_1} \colon\thinspace \operatorname{Cth}_\bullet(V_0 \odot W_0, V_1 \odot W_1) \to \operatorname{Cth}_\bullet(W_0,W_1).\] This map should be thought of as a quotient projection associated to a transfer map. \subsection{Concatenations and stretching of the neck} \label{sec:conc} Recall that the Hamiltonian isotopy class of a concatenation is unique. However, for us it will be necessary to concatenate the cobordisms together with almost complex structures, i.e.~keeping track of conformal data as well, thus breaking this symmetry. In order to pinpoint the almost complex structure obtained, it will be useful to introduce a parameter keeping track of how the concatenation was performed. We start with the following hypotheses. Assume that we are given almost complex structures $J_a$ and $J_b$ on $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ which are cylindrical in the subsets $\{ t \ge -1\}$ and $\{t \le 1\}$, respectively, and which moreover agree in the subset $\{ -1 \le t \le 1\}$. We also assume that $V$ and $W$ are cylindrical in the subsets $\{ t \ge -1\}$ and $\{t \le 1\}$, respectively, where they coincide.
For each $N \ge 0$ we define \begin{eqnarray*} V \odot_N W &:=& (V \cap \{ t \le 0\}) \cup (\tau_N(W) \cap \{ t \ge 0\}),\\ (J_a \odot_N J_b)(t,p,z)&:=& \begin{cases} J_a(t,p,z) & t \le 0\\ J_b(t-N,p,z) & t \ge 0, \end{cases} \end{eqnarray*} where we recall that $\tau_T$ is the translation of the $t$-coordinate by $T \in \ensuremath{\mathdj{R}}$. We also write $J_a \odot J_b := J_a \odot_0 J_b$. The Hamiltonian isotopy class of $V \odot_N W \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ is independent of $N \ge 0$. In the case when $J_a$ and $J_b$ are cylindrical outside of a compact subset we have produced a family of boundary-value problems of $J_a \odot_N J_b$-holomorphic curves in $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ having boundary on $V \odot_N W$, $N \ge 0$. It can be seen that this family of boundary-value problems in fact is conformally equivalent to the family which ``stretches the neck'' along the contact-type hypersurface $\{0\} \times P \times \ensuremath{\mathdj{R}} \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ with boundary condition $V \odot W$; see \cite[Section 3.4]{Bourgeois_&_Compactness} as well as \cite[Section 1.3]{Eliashberg_&_SFT} for more details. This fact will be important below. There is a compactness theorem analogous to Theorem \ref{thm:compactness} in the case of a neck-stretching sequence of almost complex structures; see \cite[Section 10]{Bourgeois_&_Compactness} for the precise formulation. The key fact is that a sequence of $J_a \odot_N J_b$-holomorphic discs, with $N \to +\infty$, has a subsequence converging to a building consisting of \emph{several} levels whose components satisfy non-cylindrical boundary conditions.
In the case under consideration, the limit buildings consist of: \begin{itemize} \item An upper level containing punctured $J_b$-holomorphic discs with boundary on $W_0 \cup W_1$; and \item A lower level containing punctured $J_a$-holomorphic discs with boundary on $V_0 \cup V_1$. \end{itemize} A priori there can also be intermediate levels consisting of pseudoholomorphic discs for a cylindrical almost complex structure satisfying a cylindrical boundary condition. Since we are only interested in rigid configurations, and since the latter solutions will have positive dimension (unless they are trivial strips), they can be omitted from our breaking analysis (given the assumption that transversality is achieved for every level). The gluing result Theorem \ref{thm:gluing} also generalises to this setting (see \cite[Lemma 3.14]{RationalSFT}), giving a bijection between buildings of the above type where all components are of Fredholm index zero, and punctured $J_a \odot_N J_b$-holomorphic discs for each $N \gg 0$ sufficiently large. Figure \ref{fig:strech_hol_buil} schematically depicts two such buildings. \begin{figure}[ht!] \vspace{0.5cm} \labellist \pinlabel $W_0\cup {\color{red} W_1}$ at -33 95 \pinlabel $V_0\cup {\color{red} V_1}$ at -38 24 \pinlabel (1) at 70 145 \pinlabel (2) at 170 145 \endlabellist \centering \includegraphics[scale=0.65]{breaking_neck_streched_n} \caption{Holomorphic buildings appearing after stretching the neck along $\Lambda$.} \label{fig:strech_hol_buil} \end{figure} \subsection{The complex after a neck stretching procedure} \label{sec:splitting} The first goal is to find a description of the complex \[(\operatorname{Cth}_\bullet(V_0 \odot_N W_0, V_1 \odot_N W_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}^{V\odot W})\] when the almost complex structure is given by $J_a \odot_N J_b$ for $N \gg 0$ sufficiently large.
Here we assume that the almost complex structures $J_a$ and $J_b$ satisfy the properties described in Section \ref{sec:conc}, i.e.~so that their concatenations can be taken. We first consider the complex defined using $J_a$ \begin{gather*} (\operatorname{Cth}_\bullet(V_0,V_1)=C_{\bullet-2}(\Lambda_0,\Lambda_1) \oplus C_\bullet(V_0,V_1) \oplus C_{\bullet -1}(\Lambda^-_0,\Lambda^-_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}^V),\\ \mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}^V=\begin{pmatrix} -d_{++}^{V_0,V_1} & d^{V_0,V_1}_{+0} & d^{V_0,V_1}_{+-} \\ 0 & d^{V_0,V_1}_{00} & d^{V_0,V_1}_{0-} \\ 0 & d^{V_0,V_1}_{-0} & d_{--}^{V_0,V_1} \end{pmatrix}. \end{gather*} Consider the entries $d^{V_1,V_0}_{+0}$, $d^{V_1,V_0}_{+-}$, and $d^{V_1,V_0}_{\pm\pm}$ in the differential of the complex $\operatorname{Cth}_\bullet(V_1,V_0)$, where again the almost complex structure $J_a$ has been used. We will need their adjoints \begin{eqnarray*} \delta^{V_0,V_1}_{0+} &:=& (d^{V_1,V_0}_{+0})^* \colon C_\bullet(\Lambda_1,\Lambda_0) \to C_{\bullet-2}(V_0,V_1),\\ \delta^{V_0,V_1}_{-+} &:=& (d^{V_1,V_0}_{+-})^* \colon C_\bullet(\Lambda_1,\Lambda_0) \to C_{\bullet-1}(\Lambda^-_1,\Lambda^-_0),\\ \delta_{++}^{V_0,V_1} &:=& (d_{++}^{V_1,V_0})^* \colon C_\bullet (\Lambda_1,\Lambda_0) \to C_{\bullet -1}(\Lambda_1,\Lambda_0),\\ \delta_{--}^{V_0,V_1} &:=& (d_{--}^{V_1,V_0})^* \colon C_\bullet (\Lambda_1^-,\Lambda_0^-) \to C_{\bullet -1}(\Lambda_1^-,\Lambda_0^-), \end{eqnarray*} where the canonical basis of Reeb chords and double points has been used in order to identify the modules and their duals. Observe that, using the same notation, we also get $\delta^{V_0,V_1}_{00}=d^{V_0,V_1}_{00}$.
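To make the convention explicit (this is merely a restatement of the identification of the modules with their duals used above): for each of the maps $\phi$ above, the adjoint $\phi^*$ is the map whose matrix with respect to the canonical bases of Reeb chords and double points is the transpose of the matrix of $\phi$. Equivalently, \[ \langle \phi^*(x),y\rangle = \langle x,\phi(y)\rangle \] holds for all basis elements $x$ and $y$, where $\langle \cdot,\cdot\rangle$ denotes the corresponding coefficient pairing.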
We write \[ \varepsilon_i' := \varepsilon_i^- \circ \Phi_{V_i,J_a}, \:\: i=0,1, \] for the pull-backs of the augmentations under the DGA morphisms induced by the respective cobordisms. These augmentations now give rise to a complex \begin{gather*} (\operatorname{Cth}_\bullet(W_0,W_1)=C_{\bullet-2}(\Lambda_0^+,\Lambda_1^+) \oplus C_\bullet(W_0,W_1) \oplus C_{\bullet-1}(\Lambda_0,\Lambda_1),\mathfrak{d}_{\varepsilon'_0\varepsilon'_1}^W),\\ \mathfrak{d}_{\varepsilon'_0\varepsilon'_1}^W=\begin{pmatrix} -d^{W_0,W_1}_{++} & d^{W_0,W_1}_{+0} & d^{W_0,W_1}_{+-} \\ 0 & d^{W_0,W_1}_{00} & d^{W_0,W_1}_{0-} \\ 0 & d^{W_0,W_1}_{-0} & d^{W_0,W_1}_{--} \end{pmatrix}, \end{gather*} where the moduli spaces are defined using the almost complex structure $J_b$. Note that $d^{W_0,W_1}_{--}=d^{V_0,V_1}_{++}$. In addition we will also need the map \[ b^{V_0,V_1} \colon C_\bullet(\Lambda_1,\Lambda_0) \to C^{n-2-\bullet}(\Lambda_0,\Lambda_1) \] which is defined similarly to the map \[b^{\Lambda_0,\Lambda_1} \colon C_\bullet(\Lambda_1,\Lambda_0) \to C^{n-1-\bullet}(\Lambda_0,\Lambda_1)\] from Section \ref{sec:banana-map}, but which instead counts \emph{rigid} ``bananas'' having boundary on $V_0 \cup V_1$, and two punctures with positive asymptotics to Reeb chords. Recall that the compactness theorem for a neck-stretching sequence together with pseudoholomorphic gluing shows the following. For $N \gg 0$ sufficiently large, the rigid $J_a \odot_N J_b$-holomorphic curves in $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ having boundary on $(V_0 \odot_N W_0) \cup (V_1 \odot_N W_1)$ are in bijective correspondence with pseudoholomorphic buildings of the form described in Section \ref{sec:conc}, in which every involved component is rigid. Analysing the possible such pseudoholomorphic buildings, we obtain the following.
When $N \gg 0$ is sufficiently large, the differential of the complex for the concatenated cobordisms, defined using the almost complex structure $J_a \odot_N J_b$ as in the previous paragraph, is given by \begin{gather*} (\operatorname{Cth}_\bullet(V_0 \odot_N W_0, V_1 \odot_N W_1)\\= C_{\bullet-2}(\Lambda_0^+,\Lambda_1^+) \oplus C_\bullet(W_0,W_1) \oplus C_\bullet(V_0,V_1) \oplus C_{\bullet-1}(\Lambda^-_0,\Lambda^-_1),\mathfrak{d}^{V\odot W}_{\varepsilon^-_0\varepsilon^-_1}),\\ \mathfrak{d}^{V\odot W}_{\varepsilon^-_0\varepsilon^-_1} = \begin{pmatrix} -d^{W_0,W_1}_{++} & d^{W_0,W_1}_{+0}+d^{W_0,W_1}_{+-} b^{V_0,V_1}\delta^{W_0,W_1}_{-0} & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+-} \\ 0 & d^{W_0,W_1}_{00}+d^{W_0,W_1}_{0-} b^{V_0,V_1}\delta^{W_0,W_1}_{-0} & d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+-} \\ 0 & d^{V_0,V_1}_{0+} \delta^{W_0,W_1}_{-0} & d^{V_0,V_1}_{00} & d^{V_0,V_1}_{0-} \\ 0 & b^{\Lambda_0^-,\Lambda_1^-}\delta^{V_0,V_1}_{-+} \delta^{W_0,W_1}_{-0} & b^{\Lambda_0^-,\Lambda_1^-}\delta^{V_0,V_1}_{-0} & d_{--}^{V_0,V_1} \end{pmatrix}, \end{gather*} in terms of pseudoholomorphic strips on $V_0\cup V_1$ and $W_0 \cup W_1$ for each $N \gg 0$ sufficiently large. (For instance the term $d^{W_0,W_1}_{+-} b^{V_0,V_1}\delta^{W_0,W_1}_{-0}$ corresponds to the breaking (1) in Figure \ref{fig:strech_hol_buil} and the term $d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+0}$ corresponds to the breaking (2) in the same figure.) We have here relied on the exactness assumptions in Definition \ref{defn: exact lagrangian cobordism} and action considerations from Section \ref{sec:action-energy} in order to rule out certain configurations. 
\subsection{Definition of the transfer and co-transfer maps} \label{sec:transfer} The transfer and co-transfer maps on the chain level are defined for a very ``stretched'' almost complex structure on a concatenated cobordism (i.e.~when the parameter $N \gg 0$ in Section \ref{sec:conc} is sufficiently large), so that the complexes take the form as described in Section \ref{sec:splitting} above. \begin{defn} The \emph{transfer map} is defined by \begin{gather*} \Phi_{W_0,W_1} \colon\thinspace \operatorname{Cth}_\bullet (V_0,V_1) \to \operatorname{Cth}_\bullet(V_0 \odot_N W_0,V_1 \odot_N W_1),\\ \Phi_{W_0,W_1} = \begin{pmatrix} d^{W_0,W_1}_{+-} & 0 & 0\\ d^{W_0,W_1}_{0-} & 0 & 0 \\ 0 & \mathrm{id} & 0 \\ 0 & 0 & \mathrm{id} \\ \end{pmatrix}, \end{gather*} while the \emph{co-transfer map} is defined by \begin{gather*} \Phi^{V_0,V_1} \colon\thinspace \operatorname{Cth}_\bullet(V_0 \odot_N W_0,V_1 \odot_N W_1) \to \operatorname{Cth}_\bullet (W_0,W_1),\\ \Phi^{V_0,V_1} = \begin{pmatrix} \mathrm{id} & 0 & 0 & 0\\ 0 & \mathrm{id} & 0 & 0 \\ 0 & b^{V_0,V_1}\delta^{W_0,W_1}_{-0} & d^{V_0,V_1}_{+0} & d^{V_0,V_1}_{+-} \end{pmatrix}. \end{gather*} \end{defn} \begin{Lem} \label{lem:exptrans} Under the assumption that $\mathfrak{d}^V_{\varepsilon_0^-\varepsilon_1^-}$, $\mathfrak{d}^W_{\varepsilon'_0\varepsilon'_1}$ and $\mathfrak{d}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}$ are computed using $J_a$, $J_b$ and $J_a\odot_N J_b$ (for $N \gg 0$ as before) respectively, the transfer and co-transfer maps defined above are chain maps. 
\end{Lem} \begin{proof} We start by showing the claim for the transfer map; i.e.~we have to establish the equality \[\Phi_{W_0,W_1} \circ \mathfrak{d}^{V}_{\varepsilon_0^-\varepsilon_1^-} = \mathfrak{d}^{V\odot W}_{\varepsilon'_0\varepsilon'_1} \circ \Phi_{W_0,W_1}.\] The matrices on the left and right hand sides become \[\begin{pmatrix} -d^{W_0,W_1}_{+-}d^{V_0,V_1}_{++} & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+-}\\ -d^{W_0,W_1}_{0-}d^{V_0,V_1}_{++} & d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+-} \\ 0 & d^{V_0,V_1}_{00} & d^{V_0,V_1}_{0-} \\ 0 & d^{V_0,V_1}_{-0} & d^{V_0,V_1}_{--} \\ \end{pmatrix}\] and \[\begin{pmatrix} -d^{W_0,W_1}_{++}d^{W_0,W_1}_{+-}+d^{W_0,W_1}_{+0}d^{W_0,W_1}_{0-}+A_1 & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+-} \\ d^{W_0,W_1}_{00}d^{W_0,W_1}_{0-}+A_2 & d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+-} \\ A_3 & d^{V_0,V_1}_{00} & d^{V_0,V_1}_{0-} \\ A_4 & d^{V_0,V_1}_{-0} & d^{V_0,V_1}_{--} \\ \end{pmatrix},\] respectively, where \begin{eqnarray*} A_1&:=&d^{W_0,W_1}_{+-} b^{V_0,V_1}\delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{0-},\\ A_2&:=&d^{W_0,W_1}_{0-} b^{V_0,V_1}\delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{0-},\\ A_3&:=&d^{V_0,V_1}_{0+} \delta^{W_0,W_1}_{-0} d^{W_0,W_1}_{0-},\\ A_4&:=&b^{\Lambda_0^-,\Lambda_1^-}\delta^{V_0,V_1}_{-+} \delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{0-}. \end{eqnarray*} As in Section \ref{sec:Cthulu-differential} \[\delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{0-}=0\] by action reasons, and hence $A_i=0$, $i=1,2,3,4$. What now remains is showing the equalities \begin{eqnarray*} -d^{W_0,W_1}_{++}d^{W_0,W_1}_{+-}+d^{W_0,W_1}_{+-}d^{V_0,V_1}_{++}+d^{W_0,W_1}_{+0}d^{W_0,W_1}_{0-} & = & 0,\\ d^{W_0,W_1}_{00}d^{W_0,W_1}_{0-}+d^{W_0,W_1}_{0-}d^{V_0,V_1}_{++} & = & 0. 
\end{eqnarray*} Recall that $d^{V_0,V_1}_{++}=d^{W_0,W_1}_{--}$ and, hence, both these equalities hold since the left-hand sides are defined by the signed counts of the boundary points of certain one-dimensional moduli spaces of pseudoholomorphic strips with boundary on $(W_0,W_1)$. The first one comes from the boundary of $\mathcal{M}_{W_0,W_1}(\gamma^+;\boldsymbol{\delta},\gamma^-,\boldsymbol{\zeta})$ for $\gamma^\pm$ a chord from $\Lambda_1^\pm$ to $\Lambda_0^\pm$. The second comes from $\mathcal{M}(q;\boldsymbol{\delta},\gamma^-,\boldsymbol{\zeta})$ for $q\in W_0\cap W_1$. Also, see Figures \ref{fig:breaking1} and \ref{fig:breaking4}. We now continue with the co-transfer map; i.e.~we have to establish the equality \[\Phi^{V_0,V_1} \circ \mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}^{V\odot W} = \mathfrak{d}_{\varepsilon'_0\varepsilon'_1}^{W}\circ \Phi^{V_0,V_1}.\] The matrices on the left and right hand sides become \[ \begin{pmatrix} -d^{W_0,W_1}_{++} & d^{W_0,W_1}_{+0}+d^{W_0,W_1}_{+-} b^{V_0,V_1}\delta^{W_0,W_1}_{-0} & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+-} \\ 0 & d^{W_0,W_1}_{00}+d^{W_0,W_1}_{0-} b^{V_0,V_1}\delta^{W_0,W_1}_{-0} & d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+-} \\ 0 & A_1+B_1 & A_2+B_2 & A_3+B_3 \end{pmatrix},\] where \begin{eqnarray*} A_1&=&b^{V_0,V_1}\delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{0-} b^{V_0,V_1}\delta^{W_0,W_1}_{-0},\\ A_2&=&b^{V_0,V_1}\delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+0},\\ A_3&=&b^{V_0,V_1}\delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+-},\\ B_1&=&b^{V_0,V_1}\delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{00} + d^{V_0,V_1}_{+-}b^{\Lambda_0^-,\Lambda_1^-}\delta^{V_0,V_1}_{-+} \delta^{W_0,W_1}_{-0} + d^{V_0,V_1}_{+0}\delta^{V_0,V_1}_{0+} \delta^{W_0,W_1}_{-0},\\ B_2&=&d^{V_0,V_1}_{+0}d^{V_0,V_1}_{00} + d^{V_0,V_1}_{+-}d^{V_0,V_1}_{-0},\\ B_3&=&d^{V_0,V_1}_{+0}d^{V_0,V_1}_{0-} + d^{V_0,V_1}_{+-}d^{V_0,V_1}_{--}, \end{eqnarray*} and \[ \begin{pmatrix} -d^{W_0,W_1}_{++}
& d^{W_0,W_1}_{+0}+d^{W_0,W_1}_{+-}b^{V_0,V_1}\delta^{W_0,W_1}_{-0} & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{+-}d^{V_0,V_1}_{+-} \\ 0 & d^{W_0,W_1}_{00} + d^{W_0,W_1}_{0-}b^{V_0,V_1}\delta^{W_0,W_1}_{-0} & d^{W_0,W_1}_{0-} d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{0-}d^{V_0,V_1}_{+-} \\ 0 & d^{W_0,W_1}_{-0} +d^{W_0,W_1}_{--}b^{V_0,V_1}\delta^{W_0,W_1}_{-0} & d^{W_0,W_1}_{--}d^{V_0,V_1}_{+0} & d^{W_0,W_1}_{--}d^{V_0,V_1}_{+-} \end{pmatrix}, \] respectively. Again, we have $A_i=0$, $i=1,2,3$, by action reasons. The equalities remaining to be shown are \begin{eqnarray*} d^{V_0,V_1}_{+0}d^{V_0,V_1}_{00} + d^{V_0,V_1}_{+-}d^{V_0,V_1}_{-0} - d^{V_0,V_1}_{++}d^{V_0,V_1}_{+0} & = & 0\\ d^{V_0,V_1}_{+0}d^{V_0,V_1}_{0-} + d^{V_0,V_1}_{+-}d^{V_0,V_1}_{--} -d^{V_0,V_1}_{++}d^{V_0,V_1}_{+-}& = & 0 \end{eqnarray*} (recall again that $d_{--}^{W_0W_1}=d_{++}^{V_0V_1}$) together with the equality \begin{eqnarray*} \lefteqn{(b^{\Lambda_0,\Lambda_1}+d^{V_0,V_1}_{++}b^{V_0,V_1})\delta^{W_0,W_1}_{-0}=}\\ & =& b^{V_0,V_1}\delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{00} + (d^{V_0,V_1}_{+-}b^{\Lambda_0^-,\Lambda_1^-}\delta^{V_0,V_1}_{-+} + d^{V_0,V_1}_{+0}d^{V_0,V_1}_{0+})\delta^{W_0,W_1}_{-0}. \end{eqnarray*} (Recall that $d^{W_0,W_1}_{-0}=b^{\Lambda_0,\Lambda_1}\delta^{W_0,W_1}_{-0}$.) The first two equalities again follow from the fact that the left-hand sides correspond to counts of the boundary points of appropriate one-dimensional moduli spaces of pseudoholomorphic discs ($\mathcal{M}(\gamma^+;\boldsymbol{\delta},p,\boldsymbol{\zeta})$ for the first one, the second follows by an analysis similar to the one for the transfer map). Also, see Figures \ref{fig:breaking4} and \ref{fig:breaking1}.
The third equality finally follows from \begin{eqnarray*} & & \delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{00} - \delta^{W_0,W_1}_{--}\delta^{W_0,W_1}_{-0} = 0\\ & & b^{\Lambda_0,\Lambda_1}+d^{V_0,V_1}_{++}b^{V_0,V_1} = b^{V_0,V_1}\delta^{W_0,W_1}_{--} + d^{V_0,V_1}_{+-}b^{\Lambda_0^-,\Lambda_1^-}\delta^{V_0,V_1}_{-+} + d^{V_0,V_1}_{+0}\delta^{V_0,V_1}_{0+}, \end{eqnarray*} both of which can be shown by arguments similar to the above (more precisely, studying $\mathcal{M}(q;\boldsymbol{\delta},\gamma^1,\boldsymbol{\zeta})$ for the first, and degeneration of bananas on $(V_0,V_1)$ for the second). Also, see Figures \ref{fig:breaking3} and \ref{fig:fundclassbreaking}. \end{proof} \begin{Rem} \label{rem:transfer} In the special case when there are no Reeb chords from $\Lambda_1$ to $\Lambda_0$ (recall that these are the Legendrian submanifolds along which the concatenations are performed), the corresponding transfer and co-transfer maps take the following particularly simple form. Since $C(\Lambda_0,\Lambda_1)=0$, the transfer map $\Phi_{W_0,W_1}$ is simply the inclusion of a subcomplex, while the co-transfer map $\Phi^{V_0,V_1}$ becomes the corresponding quotient projection. In fact, as will be shown below in Section \ref{sec:wrap}, this situation can always be achieved after the application of a Hamiltonian isotopy that ``wraps'' the positive and negative ends of $V_1$ and $W_1$, respectively. \end{Rem} The following lemma is standard. It follows from the fact that, in the cylindrical situation, regular $0$-dimensional moduli spaces consist of trivial strips, together with a stretching-the-neck argument.
\begin{Lem}\label{lem:composition} The (co)transfer map satisfies the following properties: \begin{itemize} \item In the case when $W_i=\ensuremath{\mathdj{R}} \times \Lambda_i$, $i=0,1$, and $J_b$ is a cylindrical almost complex structure we have \[\Phi_{W_0,W_1}=\Phi_{\ensuremath{\mathdj{R}} \times \Lambda_0,\ensuremath{\mathdj{R}} \times \Lambda_1}=\mathrm{id}.\] \item In the case when $V_i=\ensuremath{\mathdj{R}} \times \Lambda_i$, $i=0,1$, and $J_a$ is a cylindrical almost complex structure we have \[\Phi^{V_0,V_1}=\Phi^{\ensuremath{\mathdj{R}} \times \Lambda_0,\ensuremath{\mathdj{R}} \times \Lambda_1}=\mathrm{id}.\] \item In the case when $W_i=U_i \odot_M U_i'$, $i=0,1$, and $J_b=J_c \odot_M J_d$ we have \[\Phi_{W_0,W_1}=\Phi_{U_0',U_1'} \circ \Phi_{U_0,U_1}\] in the case when $M \gg 0$ is sufficiently large. \item In the case when $V_i=U_i \odot_M U_i'$, $i=0,1$, and $J_a=J_c \odot_M J_d$ we have \[\Phi^{V_0,V_1}=\Phi^{U_0',U_1'} \circ \Phi^{U_0,U_1}\] in the case when $M \gg 0$ is sufficiently large. \end{itemize} \end{Lem} In the case when $W_1=\ensuremath{\mathdj{R}} \times \Lambda$, we write \[\Phi_{W_0}:=\Phi_{W_0,W_1}=\Phi_{W_0,\ensuremath{\mathdj{R}} \times \Lambda}\] and, similarly, when $V_1=\ensuremath{\mathdj{R}} \times \Lambda \subset \ensuremath{\mathdj{R}} \times Y$, we write \[\Phi^{V_0}:=\Phi^{V_0,V_1}=\Phi^{V_0,\ensuremath{\mathdj{R}} \times \Lambda}.\] \subsection{An auxiliary complex} \label{sec:auxcplx} Here we try to shed some light on the algebraic relationship between the transfer and the co-transfer map. We assume that all differentials are computed using almost complex structures such that Lemma \ref{lem:exptrans} holds.
Starting with the above complex \[(C_\bullet:=\operatorname{Cth}_\bullet(V_0 \odot_N W_0, V_1 \odot_N W_1),\mathfrak{d}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}),\] we construct the complex \begin{gather*} (\widetilde{C}_\bullet:=C_\bullet \oplus C_\bullet(\Lambda_0,\Lambda_1) \oplus C_{\bullet-1} (\Lambda_0,\Lambda_1),\widetilde{\mathfrak{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}),\\ \widetilde{\mathfrak{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}:=\begin{pmatrix} \mathfrak{d}^{V\odot W} & 0 & 0 \\ 0 & d_{--}^{W_0,W_1} & 0 \\ 0 & \mathrm{id} & d_{++}^{V_0,V_1} \\ \end{pmatrix}. \end{gather*} (Again, $d_{--}^{W_0W_1}=d_{++}^{V_0V_1}$.) This complex is clearly chain homotopy equivalent to $(C_\bullet,\mathfrak{d}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-})$, since the canonical inclusion and projection maps \begin{gather*} \iota \colon\thinspace (C_\bullet,\mathfrak{d}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}) \to (\widetilde{C}_\bullet,\mathfrak{\widetilde{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}),\\ \pi \colon\thinspace (\widetilde{C}_\bullet,\mathfrak{\widetilde{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}) \to (C_\bullet,\mathfrak{d}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}), \end{gather*} are homotopy inverses of each other. We have a canonical identification \[ \widetilde{C}_\bullet = \operatorname{Cth}_\bullet(V_0,V_1) \oplus \operatorname{Cth}_\bullet(W_0,W_1) \] on the level of modules. We let \begin{gather*} \iota_{V_0,V_1} \colon\thinspace \operatorname{Cth}_\bullet(V_0, V_1) \hookrightarrow \widetilde{C}_\bullet,\\ \iota_{W_0,W_1} \colon\thinspace \operatorname{Cth}_\bullet(W_0, W_1) \hookrightarrow \widetilde{C}_\bullet, \end{gather*} be the canonical inclusions of the respective summands.
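This homotopy equivalence can also be deduced from a standard piece of homological algebra (formulated here with $\ensuremath{\mathdj{Z}}_2$-coefficients for simplicity): the summand added to $C_\bullet$ carries the differential of the mapping cone of the identity map of $(C_\bullet(\Lambda_0,\Lambda_1),d_{--}^{W_0,W_1})$, and any such cone is acyclic. Indeed, if \[ \begin{pmatrix} d & 0 \\ \mathrm{id} & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}=0, \] then $x=dy$, and hence \[ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} d & 0 \\ \mathrm{id} & d \end{pmatrix} \begin{pmatrix} y \\ 0 \end{pmatrix}, \] so that every cycle in this summand is a boundary.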
Even though $\iota_{V_0,V_1}$ is not a chain map in general, after a change of coordinates it will be apparent that $(\widetilde{C}_\bullet,\mathfrak{\widetilde{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-})$ in fact is the mapping cone of a chain map \[\delta_{VW} \colon\thinspace \operatorname{Cth}_\bullet(W_0, W_1) \to \operatorname{Cth}_\bullet(V_0, V_1 ).\] To that end, we start by considering the map \begin{gather*} \Psi \colon\thinspace C_\bullet \oplus C_\bullet(\Lambda_0,\Lambda_1) \oplus C_{\bullet-1} (\Lambda_0,\Lambda_1)\to C_\bullet \oplus C_\bullet(\Lambda_0,\Lambda_1) \oplus C_{\bullet-1} (\Lambda_0,\Lambda_1),\\ \Psi = \begin{pmatrix} \mathrm{id} & 0 & d^{W_0,W_1}_{+-}+d^{W_0,W_1}_{0-}\\ -(d^{V_0,V_1}_{+-} +d^{V_0,V_1}_{+0}+b^{V_0,V_1}\delta^{W_0,W_1}_{-0}) & \mathrm{id} & 0 \\ 0 & 0& \mathrm{id} \\ \end{pmatrix}. \end{gather*} The property that \[ \delta^{W_0,W_1}_{-0} d^{W_0,W_1}_{0-}=0,\] which holds by action considerations, implies the equality \[(d^{V_0,V_1}_{+-} +d^{V_0,V_1}_{+0}+b^{V_0,V_1}\delta^{W_0,W_1}_{-0})(d^{W_0,W_1}_{+-}+d^{W_0,W_1}_{0-})=0.\] From this it follows that $\Psi$ in fact is an isomorphism of modules, with inverse given by \[\Psi^{-1} = \begin{pmatrix} \mathrm{id} & 0 & -(d^{W_0,W_1}_{+-}+d^{W_0,W_1}_{0-})\\ d^{V_0,V_1}_{+-} +d^{V_0,V_1}_{+0}+b^{V_0,V_1}\delta^{W_0,W_1}_{-0} & \mathrm{id} & 0 \\ 0 & 0& \mathrm{id} \\ \end{pmatrix}.\] Using $\Psi$ we define the complex \begin{gather*} \Psi \colon\thinspace (\widetilde{C}_\bullet,\mathfrak{\overline{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}) \to (\widetilde{C}_\bullet,\mathfrak{\widetilde{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}),\\ \mathfrak{\overline{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-} := \Psi^{-1} \circ \mathfrak{\widetilde{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}\circ\Psi, \end{gather*} obtained by applying a coordinate change to the original complex.
The upshot is that the differential of $(\widetilde{C}_\bullet,\mathfrak{\overline{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-})$ exhibits the following particularly transparent cone structure. The proof follows from a breaking analysis similar to the one in the proof of Lemma \ref{lem:exptrans}. \begin{Lem}\label{lem:inclusion} The complex $(\widetilde{C}_\bullet,\mathfrak{\overline{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-} )$ is equal to the mapping cone \begin{gather*} (\widetilde{C}_\bullet=\operatorname{Cth}_\bullet(W_0,W_1) \oplus \operatorname{Cth}_\bullet(V_0,V_1),\mathfrak{\overline{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-} ),\\ \mathfrak{\overline{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-} = \begin{pmatrix} \mathfrak{d}^W_{\varepsilon'_0\varepsilon'_1} & \delta_{VW} \\ 0 & \mathfrak{d}^V_{\varepsilon_0^-\varepsilon_1^-} \end{pmatrix}, \end{gather*} where \begin{gather*} C_{\bullet-1}(\Lambda^+_0,\Lambda^+_1) \oplus C_\bullet(W_0,W_1) \oplus C_\bullet(\Lambda_0,\Lambda_1) \\ \downarrow \delta_{VW} \\ C_{\bullet-1}(\Lambda_0,\Lambda_1) \oplus C_\bullet(V_0,V_1) \oplus C_\bullet(\Lambda^-_0,\Lambda^-_1), \end{gather*} is of the form \[\delta_{VW}=\begin{pmatrix} 0 & -b^{V_0,V_1}\delta^{W_0,W_1}_{-0}& \mathrm{id}_{C(\Lambda_0,\Lambda_1)} \\ 0 & d^{V_0,V_1}_{0+}\delta^{W_0,W_1}_{-0} & 0\\ 0& b^{\Lambda^-_0,\Lambda^-_1}\delta^{V_0,V_1}_{-+}\delta^{W_0,W_1}_{-0} & 0 \end{pmatrix}.\] In particular, the canonical inclusion \[\iota_{V_0,V_1} \colon\thinspace (\operatorname{Cth}_\bullet(V_0, V_1),\mathfrak{d}^V_{\varepsilon_0^-\varepsilon_1^-}) \hookrightarrow (\widetilde{C}_\bullet,\mathfrak{\overline{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-})\] as well as the canonical projection \[ \pi_{W_0,W_1} \colon\thinspace (\widetilde{C}_\bullet,\mathfrak{\overline{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-} ) \to (\operatorname{Cth}_\bullet(W_0, W_1),\mathfrak{d}^W_{\varepsilon'_0\varepsilon'_1}),\] are both chain maps.
Furthermore, the transfer and co-transfer maps can be expressed as \begin{eqnarray*} \Phi_{W_0,W_1}&=&\pi \circ \Psi \circ \iota_{V_0,V_1},\\ \Phi^{V_0,V_1}&=&\pi_{W_0,W_1} \circ \Psi^{-1} \circ \iota, \end{eqnarray*} where all factors are chain maps. \end{Lem} \begin{proof} The claim that \[\iota_{V_0,V_1} \colon\thinspace (\operatorname{Cth}_\bullet(V_0, V_1),\mathfrak{d}^V_{\varepsilon_0^-\varepsilon_1^-}) \hookrightarrow (\widetilde{C}_\bullet,\mathfrak{\overline{d}}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-})\] is a chain map can be seen by considering the relations \[ \begin{cases} \mathfrak{d}^{W}_{\varepsilon'_0,\varepsilon'_1} (d^{W_0,W_1}_{+-}+d^{W_0,W_1}_{0-})+(d^{W_0,W_1}_{+-}+d^{W_0,W_1}_{0-}) d^{\Lambda_0,\Lambda_1}=0, \\ d^{\Lambda_0,\Lambda_1} (d^{V_0,V_1}_{+-}+d^{V_0,V_1}_{+0})=(d^{V_0,V_1}_{+-}+d^{V_0,V_1}_{+0}) \mathfrak{d}^V_{\varepsilon^-_0,\varepsilon^-_1}, \end{cases} \] which follow from the fact that $(\mathfrak{d}^W_{\varepsilon_0'\varepsilon_1'})^2=0$ and $(\mathfrak{d}^V_{\varepsilon_0^-\varepsilon_1^-})^2=0$, respectively, together with the expressions of $\mathfrak{d}^{V\odot W}_{\varepsilon_0^-\varepsilon_1^-}$, $\mathfrak{d}^W_{\varepsilon_0'\varepsilon_1'}$ and $\mathfrak{d}^V_{\varepsilon_0^-\varepsilon_1^-}$ as given in Section \ref{sec:splitting}. To see that the corresponding quotient complex is as claimed, one must use the identity \[\big((-d^{\Lambda_0,\Lambda_1} +d^{V_0,V_1}_{+-} +d^{V_0,V_1}_{+0})\delta_{VW}+b^{V_0,V_1}\delta^{W_0,W_1}_{-0}\big)d^{W_0,W_1}_{00}= b^{\Lambda_0,\Lambda_1}\delta^{W_0,W_1}_{-0},\] which in turn follows from \[\begin{cases} -d^{\Lambda_0,\Lambda_1} b^{V_0,V_1}+b^{V_0,V_1}\delta^{V_0V_1}_{++}+d^{V_0,V_1}_{+-}b^{\Lambda_0^-,\Lambda_1^-}\delta^{V_0,V_1}_{-+}+\delta^{V_0,V_1}_{+0}d^{V_0,V_1}_{0+}=b^{\Lambda_0,\Lambda_1},\\ \delta^{W_0,W_1}_{-0}d^{W_0,W_1}_{00}=\delta^{W_0,W_1}_{--}\delta^{W_0,W_1}_{-0}.
\end{cases}\] The latter identities can be seen by analysing the possible boundaries of the appropriate one-dimensional moduli spaces. \end{proof} \section{Proof of the acyclicity (the invariance)} \label{sec:acyclicity} In this section we establish the invariance result for our Floer theory. In fact, in our context, the invariance is simply the fact that the complex $\operatorname{Cth}(\Sigma_0,\Sigma_1)$ is acyclic (actually null-homotopic). The naive reason for this is that the symplectisation of a contactisation $(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}},d(e^t(dz+\theta)))$ is ``subcritical'': it is symplectomorphic to $(P \times \ensuremath{\mathdj{R}}^2,d\theta \oplus \omega_0)$. More precisely, the main feature that will be used is that one can use the Reeb flow (which is Hamiltonian) in order to displace an exact Lagrangian cobordism from any other given cobordism. In order to circumvent certain technical difficulties, we will here restrict our attention to almost complex structures on the symplectisation $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ that are admissible, as defined in Section \ref{sec:admissible}, and that moreover coincide with cylindrical lifts of almost complex structures in $P$ outside of a compact subset; see Section \ref{sec:cyllift}. \subsection{Wrapping the ends} \label{sec:wrap} Let $\Sigma_i$, $i=0,1$, be exact Lagrangian cobordisms from $\Lambda^-_i$ to $\Lambda^+_i$. We assume that $\Sigma_i$, $i=0,1$, are both cylindrical in the subset $\{ |t| \ge T \}$ for some $T>0$. Fix a smooth non-decreasing cut-off function $\rho \colon\thinspace \ensuremath{\mathdj{R}} \to [0,1]$ satisfying $\rho(t)=0$ for $t \le 1$ and $\rho(t)=1$ for $t \ge 2$.
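For concreteness, an explicit choice of such a cut-off function (any smooth non-decreasing function with these properties works equally well in what follows) is given by the standard construction \[ g(s):=\begin{cases} e^{-1/s}, & s>0,\\ 0, & s \le 0, \end{cases} \qquad \rho(t):=\frac{g(t-1)}{g(t-1)+g(2-t)}, \] which is smooth, satisfies $\rho(t)=0$ for $t \le 1$ and $\rho(t)=1$ for $t \ge 2$, and is non-decreasing since \[\rho'(t)=\frac{g'(t-1)g(2-t)+g(t-1)g'(2-t)}{\big(g(t-1)+g(2-t)\big)^2} \ge 0.\]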
Consider the smooth real-valued function \begin{gather*} h_N \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}, \\ (t,p,z) \mapsto e^t\rho_N(t), \end{gather*} where $\rho_N \colon \ensuremath{\mathdj{R}} \to [0,1]$ is defined uniquely by the relations $\rho_N(-t) = \rho_N(t)$, and $\rho_N(t)=\rho(t-(T+N))$ for all $t \ge 0$. We also consider the smooth real-valued functions \begin{gather*} h_{N,\pm} \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}, \\ (t,p,z) \mapsto e^t\rho(\pm t-(T+N)), \end{gather*} and set $h_\pm:=h_{0,\pm}$. The above functions are then used to construct the Hamiltonian isotopies \begin{eqnarray*} \phi^s_{h_N\partial_z}(t,p,z)&=&(t,p,z+s\rho_N(t)),\\ \phi^s_{h_{N,\pm}\partial_z}(t,p,z)&=&(t,p,z+s\rho(\pm t-(T+N))), \end{eqnarray*} which should be thought of as ``wrapping'' the ends of $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$. For instance, after applying the isotopy $\phi^{-S}_{h_N\partial_z}$ to $\Sigma_0$ for $S \gg 0$ sufficiently large, we get additional double points $\phi^{-S}_{h_N\partial_z}(\Sigma_0) \cap \Sigma_1$, all of which correspond to the Reeb chords going from the ends of $\Sigma_1$ to the corresponding end of $\Sigma_0$. More precisely, the following is true: \begin{Lem} \label{lem:wrap} When \[S \ge S_0:= 2\max_{c \in \mathcal{R}(\Lambda^-_1,\Lambda^-_0) \cup \mathcal{R}(\Lambda^+_1,\Lambda^+_0)} \ell(c)\] there is a canonical bijective correspondence \[ w_\pm \colon \phi^{-S}_{h_{N,\pm}\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^\pm_0) \cap (\ensuremath{\mathdj{R}} \times \Lambda^\pm_1)\to \mathcal{R}(\Lambda^\pm_1,\Lambda^\pm_0)\] induced by the Lagrangian projection, i.e.~by identifying elements on both sides with a double point in $\Pi_{\operatorname{Lag}}(\Lambda^\pm_0 \cup \Lambda^\pm_1) \subset P$.
On the level of gradings this bijection moreover satisfies \[|w_-(p)|=|p| \:\: \text{and} \:\: |w_+(p)|=|p|-1.\] In particular, there is a canonical identification \[\operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1) = \operatorname{Cth}_\bullet(\phi^{-S}_{h_N\partial_z}(\Sigma_0),\Sigma_1) \] on the level of \emph{graded modules}. After taking $N \gg 0$ sufficiently large, we may assume that the action of a generator in \[C(\Lambda^-_0,\Lambda^-_1)\subset \operatorname{Cth}_\bullet(\phi^{-S}_{h_N\partial_z}(\Sigma_0),\Sigma_1)\] is arbitrarily small, the action of a generator in \[C(\Lambda^+_0,\Lambda^+_1)\subset \operatorname{Cth}_\bullet(\phi^{-S}_{h_N\partial_z}(\Sigma_0),\Sigma_1)\] is arbitrarily large, while the action of a generator in \[C(\Sigma_0,\Sigma_1) \subset \operatorname{Cth}_\bullet(\phi^{-S}_{h_N\partial_z}(\Sigma_0),\Sigma_1)\] coincides with its original action. \end{Lem} The first goal is to show that the identification given by the above lemma may in fact be assumed to hold on the level of \emph{complexes} as well. \begin{Prop} \label{prp:wrap} For each $N \gg 0$ sufficiently large and $S \ge S_0$ as defined above, there is a canonical identification of complexes \[ (\operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-})= (\operatorname{Cth}_\bullet(\phi^{-S}_{h_N\partial_z}(\Sigma_0),\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}),\] under the assumption that $J \in \mathcal{J}^{\operatorname{adm}}_{J^-,J^+}(\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}})$ is a regular admissible almost complex structure and where, moreover, $J^\pm =\widetilde{J}_P$ are both equal to the cylindrical lift of an almost complex structure on $(P,d\theta)$. \end{Prop} \begin{Rem} \begin{itemize} \item Recall that $J$ is equal to $\widetilde{J}_P$ outside of a compact set by assumption, where the latter is a cylindrical almost complex structure which is invariant under the Reeb flow.
Since the negative ends of $\Sigma_0$ and $\phi^{-S}_{h_N\partial_z}(\Sigma_0)$ differ by the time-$S$ Reeb flow, it follows that these Legendrian submanifolds have canonically isomorphic Chekanov-Eliashberg algebras when defined by $\widetilde{J}_P$ and, in particular, we can identify their augmentations. \item For a general choice of almost complex structure, there should again exist an analogous isomorphism, albeit non-canonical. \end{itemize} \end{Rem} \begin{proof} Consider the transfer and co-transfer maps \begin{gather*} \Phi_{\phi^{-S}_{h_+\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^+_0)} \colon\thinspace \operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1) \to \operatorname{Cth}_\bullet(\phi^{-S}_{h_{N,+}\partial_z}(\Sigma_0),\Sigma_1),\\ \Phi^{\phi^{-S}_{h_-\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^-_0)} \colon\thinspace \operatorname{Cth}_\bullet(\phi^{-S}_{h_N\partial_z}(\Sigma_0),\Sigma_1) \to \operatorname{Cth}_\bullet(\phi^{-S}_{h_{N,+}\partial_z}(\Sigma_0),\Sigma_1), \end{gather*} defined by counting $\widetilde{J}_P$-holomorphic strips having boundary on $\phi^{-S}_{h_{N,+}\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^+_0) \cup (\ensuremath{\mathdj{R}} \times \Lambda^+_1)$ and $\phi^{-S}_{h_{N,-}\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^-_0) \cup (\ensuremath{\mathdj{R}} \times \Lambda^-_1)$, respectively. In order to identify the domains and codomains of the above maps we have used the fact that, \[ J \odot_N \widetilde{J}_P = \widetilde{J}_P \odot_N J=J, \:\: N \ge 0,\] which holds by the assumptions made on $J$, as well as the facts that \begin{eqnarray*} \phi^{-S}_{h_{N,+}\partial_z}(\Sigma_0) &=& \Sigma_0 \odot_N \phi^{-S}_{h_+\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^+_0), \\ \phi^{-S}_{h_N\partial_z}(\Sigma_0) &=& \phi^{-S}_{h_-\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^-_0) \odot_N \phi^{-S}_{h_{N,+}\partial_z}(\Sigma_0),\\ \end{eqnarray*} hold for every $N \ge 0$ by construction. 
Recall that the transfer and co-transfer maps are chain maps by Lemma \ref{lem:exptrans}, assuming that $N \gg 0$ has been chosen sufficiently large. The proposition will follow from the claim that $\Phi_{\phi^{-S}_{h_{N,+}\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^+_0)}$ and $\Phi^{\phi^{-S}_{h_{N,-}\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^-_0)}$ are both isomorphisms that, moreover, induce the respective canonical identifications of graded modules described in Lemma \ref{lem:wrap}. The latter fact follows from the explicit disc count performed in the proof of \cite[Theorem 2.15]{Floer_Conc}; also, see \cite[Proposition 5.11]{LiftingPseudoholomorphic} for a similar argument. Roughly speaking, it is shown there that every $\widetilde{J}_P$-holomorphic disc of index zero in the definition of the above (co)transfer map is a transversely cut-out strip having one positive puncture and one negative puncture, and whose image under the canonical projection to $P$ is constant. Conversely, there is an explicitly defined such strip for every double point in $P$ corresponding to a Reeb chord. In particular, under an appropriate choice of basis, the matrices of both maps $\Phi_{\phi^{-S}_{h_{N,+}\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^+_0)}$ and $\Phi^{\phi^{-S}_{h_{N,-}\partial_z}(\ensuremath{\mathdj{R}} \times \Lambda^-_0)}$ are identity matrices. \end{proof} \subsection{Invariance under compactly supported Hamiltonian isotopies} The following is the core of the invariance result that we need in order to deduce the acyclicity of the Cthulhu complex. (As we need to compare the complexes computed with two different almost complex structures, here we denote by $\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}(J)$ the differential on $\operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1)$ computed using the almost complex structure $J$.)
\begin{Prop}\label{prp:invariance-comp} Let $(\Sigma^s_0,\Sigma_1)$, $s \in [0,1]$, be a compactly supported one-parameter family of pairs of exact Lagrangian cobordisms from $\Lambda^-_i$ to $\Lambda^+_i$, $i=0,1$. Also, consider a one-parameter family $\{J_s\}_{s \in [0,1]}$ of admissible almost complex structures which agree outside of a compact set. There is an induced homotopy equivalence \[\Psi_{\{(\Sigma^s_0,J_s)\}} \colon\thinspace (\operatorname{Cth}_\bullet(\Sigma^0_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}(J_0)) \to (\operatorname{Cth}_\bullet(\Sigma^1_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}(J_1)).\] Furthermore, the restriction of $\Psi_{\{(\Sigma^s_0,J_s)\}}$ to the subcomplex \[(C_\bullet(\Lambda^+_0,\Lambda^+_1),-d_{++}) \subset \operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1)\] induces an isomorphism \[\Psi_{\{(\Sigma^s_0,J_s)\}}|_{C(\Lambda^+_0,\Lambda^+_1)} \colon\thinspace (C(\Lambda^+_0,\Lambda^+_1),-d_{++}) \to (C(\Lambda^+_0,\Lambda^+_1),-d_{++}).\] \end{Prop} \begin{Rem} \begin{itemize} \item Under the additional assumption that $\Lambda^-_i=\emptyset$, $i=0,1$, this result was established in \cite[Section 4.2.1]{Ekholm_FloerlagCOnt}. \item Our proof of the above proposition does not rely on abstract perturbations. This is possible since we can use Proposition \ref{prp:wrap} in order to replace Reeb chords with intersection points, thereby breaking some of the symmetry. An alternative proof, more closely adapted to the SFT formalism, would simply be to generalise \cite[Section 4.2.1]{Ekholm_FloerlagCOnt} to the current setting. The latter approach depends on the abstract perturbation scheme outlined in \cite[Appendix B]{RationalSFT}. \end{itemize} \end{Rem} \begin{proof} After an application of Proposition \ref{prp:wrap}, we may assume that there are no Reeb chords from $\Lambda^\pm_1$ to $\Lambda^\pm_0$ and, hence, that the involved complexes are generated by intersection points only.
More precisely, the aforementioned proposition provides the natural identifications \begin{eqnarray*} (\operatorname{Cth}_\bullet(\Sigma^0_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}(J_0)) & = & (\operatorname{Cth}_\bullet(\phi^{-S}_{h_N\partial_z}(\Sigma^0_0),\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}(J_0)), \\ (\operatorname{Cth}_\bullet(\Sigma^1_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}(J_1)) & = & (\operatorname{Cth}_\bullet(\phi^{-S}_{h_N\partial_z}(\Sigma^1_0),\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}(J_1)), \end{eqnarray*} of complexes for $S,N \gg 0$ sufficiently large. Obviously $\phi^{-S}_{h_N\partial_z}(\Sigma^s_0)$ is still a compactly supported family of exact Lagrangian cobordisms. This family is moreover fixed in a neighbourhood of the intersection points corresponding to the generators of $C(\Lambda^\pm_0,\Lambda^\pm_1)$, assuming that $N \gg 0$ is chosen sufficiently large. To sum up, we have reduced the invariance problem to the case when there are no Reeb chords from $\Lambda^\pm_1$ to $\Lambda^\pm_0$. In this case the involved complexes thus have generators corresponding to the intersection points of $\Sigma^s_0 \cap \Sigma_1$, $s=0,1$, and the one-parameter family $\Sigma^s_0$ is a compactly supported family of exact Lagrangian cobordisms. Here, the bifurcation analysis conducted in \cite{Ekholm_Contact_Homology}, \cite{LCHgeneral} can be generalised to produce the sought homotopy equivalence. We proceed to provide a sketch of this argument, highlighting the points where some additional care must be taken to adapt it to the current setting. Recall that the differential is defined via a count of $J_s$-holomorphic strips having boundary on $\Sigma^s_0 \cup \Sigma_1$, two punctures asymptotic to intersection points, and possibly additional negative punctures asymptotic to Reeb chords on either $\Lambda^-_0$ or $\Lambda^-_1$.
For a generic one-parameter family $\Sigma^s_0$ and $J_s$, $s\in[0,1]$, we can assume that for all but a finite number of instances $s \in [0,1]$, \begin{itemize} \item the intersection $\Sigma^s_0 \cap \Sigma_1$ is transverse, and \item the above spaces of $J_s$-holomorphic strips are transversely cut out. \end{itemize} Moreover, in the case when the above two conditions do not hold for some $s_0 \in [0,1]$ we may assume that precisely one of the following cases occurs. {\bf Birth/death:} All intersection points of $\Sigma^{s_0}_0 \cap \Sigma_1$ are transverse \emph{except a single intersection point}. The latter intersection point moreover arises in the family as a standard birth/death-intersection point, which was described in \cite[Section 3]{Ekholm_Contact_Homology} (note that the Lagrangian cobordisms considered are of dimension at least \emph{two} by assumption). Moreover, all $J_{s_0}$-holomorphic strips as above whose punctures are asymptotic to transverse intersection points are transversely cut out. {\bf Handle slide:} The intersection $\Sigma^{s_0}_0 \cap \Sigma_1$ is transverse, and all $J_{s_0}$-holomorphic strips with boundary on $\Sigma^{s_0}_0 \cup \Sigma_1$ are regular \emph{except a single solution}. Moreover, the latter strip has index $-1$, and is generic when considered in the one-parameter family of boundary value problems. Observe that it suffices to consider each case above separately in order to deduce the sought invariance. Namely, for a generic family in which none of the above cases occur, all involved complexes are naturally isomorphic. The case of a birth/death moment is treated analogously to the closed case in \cite{FloerHFlag}; a careful account also appears in \cite{KFloer}. Note that the difference of action between the newly created intersection points is small near the birth point and, thus, the involved holomorphic curves cannot have negative asymptotics to Reeb chords.
In the case of a handle slide we construct an isomorphism of complexes by the following formula. Assume that the unique strip $u$ of index $-1$ has the intersection points $x,y$ as incoming and outgoing ends, respectively, and that it has additional negative punctures asymptotic to the words $\mathbf{a}$, $\mathbf{b}$ of Reeb chords on $\Lambda^-_0$ and $\Lambda^-_1$, respectively. On the intersection point generators we define the isomorphism \begin{gather*} p \mapsto p+K(p),\\ K(p)=\begin{cases} \varepsilon_0(\mathbf{a})\varepsilon_1(\mathbf{b})x, & p=y,\\ 0, & p \neq y, \end{cases} \end{gather*} of graded modules. \begin{figure}[ht!] \centering \vspace{0.5cm} \labellist \pinlabel $0$ at 86 101 \pinlabel $-1$ at 45 85 \pinlabel $-1$ at 181 85 \pinlabel $1$ at 275 57 \pinlabel $-1$ at 293 85 \pinlabel $0$ at 338 85 \pinlabel $1$ at 184 57 \pinlabel $\Lambda^+_0\cup\textcolor{red}{\Lambda^+_1}$ at -36 150 \pinlabel $\Sigma_0\cup\textcolor{red}{\Sigma_1}$ at -36 93 \pinlabel $\Lambda^-_0\cup\textcolor{red}{\Lambda^-_1}$ at -36 40 \endlabellist \includegraphics[scale=0.65]{handleslide} \caption{The a priori possibilities for the boundary points of a one-parameter family of rigid strips with both incoming and outgoing ends asymptotic to intersection points. The number on each component denotes its Fredholm index.} \label{fig:handleslide} \end{figure} We must show that this isomorphism in fact is a chain map. Again, this is shown by gluing of pseudoholomorphic curves. The only difference between this case and the case when $\mathbf{a}=\mathbf{b}=\emptyset$ comes from the gluing of $\partial$-breakings. 
More precisely, the difference between the differential $d_{s_0-\epsilon}$ before the handle slide and the differential $d_{s_0+\epsilon}$ after it comes from rigid configurations obtained by gluing an index $0$ curve to $u$; such configurations are of two types (see Figure \ref{fig:handleslide}). The first type of gluing does not involve pure negative Reeb chords, and corresponds to the term $K$ in the definition of the isomorphism. The second type of gluing involves a pure Reeb chord, and corresponds to that of a $\partial$-breaking. The totality of the glued solutions of this kind thus contributes $0$ when counted utilising the augmentations (see the proof of Theorem \ref{thm:dsquare}). Note also that a breaking involving an index $-1$ curve with a negative asymptotic to a mixed chord cannot occur; such a breaking would involve a holomorphic banana, and one of its two ends would necessarily be a chord from $\Lambda^-_1$ to $\Lambda^-_0$ (which does not exist by assumption). Finally, in order to deduce the last claim of the proposition, it will be necessary to use the following additional property of the identifications of complexes described in the proof of Proposition \ref{prp:wrap}. Consider the subset $C \subset \phi^{-S}_{h_N\partial_z}(\Sigma^s_0) \cap \Sigma_1$ of intersection points corresponding to the Reeb chords from $\Lambda^+_1$ to $\Lambda^+_0$; these intersection points are contained in a subset of the form $(N,+\infty) \times P \times \ensuremath{\mathdj{R}}$, which may be assumed to be fixed in the one-parameter family $\phi^{-S}_{h_N\partial_z}(\Sigma^s_0)$ of cobordisms. Even though these intersection points are fixed, their actions will in general vary with $s \in [0,1]$. We use $M$ to denote the minimum of the action of an intersection point in $C \subset \phi^{-S}_{h_N\partial_z}(\Sigma^s_0) \cap \Sigma_1$, taken over all $s\in [0,1]$.
For $N \gg 0$ sufficiently large, Lemma \ref{lem:wrap} shows that any intersection point in $(\phi^{-S}_{h_N\partial_z}(\Sigma^s_0) \cap \Sigma_1) \setminus C$ has action strictly less than $M$. In conclusion, we must have $K(C(\Lambda_0^+,\Lambda_1^+)) \subset C(\Lambda_0^+,\Lambda_1^+)$ for the map $K$ as defined above, thus implying the claim. \end{proof} \subsection{The proof of the acyclicity} \label{sec:proof-acyclicity} We are finally ready to prove the main result of this section. Recall that we are in the special case of a symplectisation of a contactisation. In this setting, the idea is to show that a complex having no Reeb chord generators is acyclic by appealing to Proposition \ref{prp:invariance-comp}, together with an explicitly defined compactly supported Hamiltonian displacement. Finally, Proposition \ref{prp:wrap} shows that we can replace every complex with a complex induced by a pair of Lagrangian cobordisms having no Reeb chord generators (which is obtained by wrapping the cylindrical ends of one of the Lagrangian cobordisms). \begin{Thm} Assume that we are given a choice of regular admissible almost complex structure coinciding with a cylindrical lift outside some subset of the form $[-T,T]\times Y$.
For any pair $\Sigma_0,\Sigma_1 \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ of exact Lagrangian cobordisms from $\Lambda^-_i$ to $\Lambda^+_i$, and choices $\varepsilon_i^-$ of augmentations of the Chekanov-Eliashberg algebra of $\Lambda^-_i$, $i=0,1$, the complex \[ (\operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-})\] is chain homotopy equivalent to the trivial complex. \end{Thm} \begin{proof} We use the fact that, for $S \gg 0$ sufficiently large, there is an exact Lagrangian cobordism $\Sigma_0' \subset \ensuremath{\mathdj{R}} \times (P \times \ensuremath{\mathdj{R}})$ satisfying the properties that, first, it is isotopic to $\phi^{-S}_{h_N\partial_z} (\Sigma_0)$ by a compactly supported Hamiltonian isotopy and, second, the complex \[(\operatorname{Cth}(\Sigma_0',\Sigma_1), \mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-})=0 \] has no generators. To that end, observe that the complex $\operatorname{Cth}(\phi^{-S}_{h_N\partial_z} (\Sigma_0),\Sigma_1)$ has no Reeb chord generators for $S \gg 0$ sufficiently large by Lemma \ref{lem:wrap}. Moreover, the compactly supported Hamiltonian isotopy $\phi_{e^t}^{-s} \circ \phi_{h_N\partial_z}^{-(S-s)}(\Sigma_0)$ can be seen to remove all intersection points, whenever $s \gg 0$ is sufficiently large. The invariance result for a compactly supported Hamiltonian isotopy provided by Proposition \ref{prp:invariance-comp} thus implies that the complex $\operatorname{Cth}_\bullet(\phi^{-S}_{h_N\partial_z}(\Sigma_0),\Sigma_1)$ is null-homotopic. The fact that the same is true for the complex $\operatorname{Cth}_\bullet(\Sigma_0,\Sigma_1)$ is now an immediate consequence of Proposition \ref{prp:wrap}, for $N \gg 0$ chosen sufficiently large. \end{proof} \section{Proof of the main theorem} \label{sec:long-exact-sequence} Now we proceed to prove the main result of the paper. In Section \ref{sec:lesproof}, this will then be used to deduce several long exact sequences.
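All long exact sequences below stem from the standard mapping cone construction, which we briefly recall; the recollection is only schematic, and the precise signs and grading shifts are those fixed by the conventions above. Given a chain map $f \colon\thinspace (C,d_C) \to (D,d_D)$, its cone is the complex \[\operatorname{Cone}(f):=\left(D \oplus C[1], \begin{pmatrix} d_D & f \\ 0 & -d_C \end{pmatrix}\right),\] which fits into a long exact sequence \[ \cdots \to H^k(\operatorname{Cone}(f)) \to H^{k}(C[1]) \xrightarrow{f_*} H^{k+1}(D) \to H^{k+1}(\operatorname{Cone}(f)) \to \cdots.\] In particular, $\operatorname{Cone}(f)$ is acyclic if and only if $f$ is a quasi-isomorphism; this is how the acyclicity of the Cthulhu complex will be put to use.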
We denote by $CF_{-\infty}(\Sigma_0,\Sigma_1)$ the quotient complex $C(\Sigma_0,\Sigma_1)\oplus C(\Lambda^-_0,\Lambda^-_1)$ of $\operatorname{Cth}(\Sigma_0,\Sigma_1)$, with differential given by $ d_{-\infty}=\begin{pmatrix} d_{00} & d_{0-}\\ d_{-0} & d_{--} \end{pmatrix};$ its homology is denoted by $HF_{-\infty}(\Sigma_0,\Sigma_1)$. The complex $\operatorname{Cth}(\Sigma_0,\Sigma_1)$ is the cone of the chain map $d_{+0}+d_{+-}: CF_{-\infty}(\Sigma_0,\Sigma_1)\rightarrow C(\Lambda_0^+,\Lambda_1^+)$. The acyclicity of the complex $(\operatorname{Cth}(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-})$ implies that this map is a quasi-isomorphism, and hence that \begin{equation} \label{eq:19} LCH^k_{\varepsilon_0^+,\varepsilon_1^+}(\Lambda_0^+,\Lambda^+_1)\simeq HF^{k+1}_{-\infty}(\Sigma_0,\Sigma_1). \end{equation} Let $f_i$, $i=0,1$, be primitives of $e^t\alpha\vert_{\Sigma_i}$ as in Section \ref{sec:energy-compactness}. A point $p\in \Sigma_0\cap \Sigma_1$ is \textit{positive} (resp. \textit{negative}) if $f_1(p)-f_0(p)$ is positive (resp. negative). This leads to a decomposition $CF(\Sigma_0,\Sigma_1)=CF_+(\Sigma_0,\Sigma_1)\oplus CF_-(\Sigma_0,\Sigma_1)$. \begin{Prop} With respect to the decomposition $$CF_{-\infty}(\Sigma_0,\Sigma_1)=CF_+(\Sigma_0,\Sigma_1)\oplus C(\Lambda^-_0,\Lambda^-_1) \oplus CF_-(\Sigma_0,\Sigma_1),$$ the differential takes the upper-triangular form \begin{equation} \label{eq:5} d_{-\infty}= \begin{pmatrix} d_{0^+0^+} & d_{0^+-} & d_{0^+0^-} \\ 0 & d_{--} & d_{-0^-} \\ 0 & 0 & d_{0^-0^-} \end{pmatrix}. \end{equation} \end{Prop} \begin{proof} This follows from the energy estimates of Section \ref{sec:energy-compactness}. \end{proof} This decomposition allows us to prove Theorem \ref{thm:principal}.
\begin{proof}[Proof of Theorem \ref{thm:principal}] From Equations \eqref{eq:20} and \eqref{eq:5}, the decomposition $$\operatorname{Cth}(\Sigma_0,\Sigma_1) = C(\Lambda^+_0,\Lambda^+_1)\oplus CF_+(\Sigma_0,\Sigma_1)\oplus C(\Lambda_0^-,\Lambda^-_1)\oplus CF_-(\Sigma_0,\Sigma_1)$$ induces a filtration for the differential $\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-}$. The first page thus takes the prescribed form by construction. The fact that $(\operatorname{Cth}(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-\varepsilon_1^-})$ is acyclic implies that the associated spectral sequence collapses on the fourth page. \end{proof} Note that if either all intersection points have positive actions or they all have negative actions, then the complex $CF_{-\infty}(\Sigma_0,\Sigma_1)$ with the differential described in Formula \eqref{eq:5} is a mapping cone from or to $C(\Lambda^-_0,\Lambda^-_1)$. This motivates the following definition for a pair $(\Sigma_0,\Sigma_1)$ of cobordisms. \begin{defn} \label{def:directed} We say that $(\Sigma_0,\Sigma_1)$ is \textit{directed} if $CF_+(\Sigma_0,\Sigma_1)=0$, while it is called \textit{$V$-shaped} if $CF_-(\Sigma_0,\Sigma_1)=0$. \end{defn} We are now able to prove Corollary \ref{cor:posnegLES}. \begin{proof}[Proof of Corollary \ref{cor:posnegLES}] For directed cobordisms, from Equation \eqref{eq:5} we get that the complex $CF_{-\infty}(\Sigma_0,\Sigma_1)$ is the cone of the map $d_{-0}: CF(\Sigma_0,\Sigma_1)\rightarrow C(\Lambda_0^-,\Lambda_1^-)$.
Thus we get a long exact sequence \begin{equation*} \xymatrixrowsep{0.15in} \xymatrixcolsep{0.15in} \xymatrix{ \cdots\ar[r]& HF^{k}_{-\infty}(\Sigma_0,\Sigma_1) \ar[d] & & & \\ & HF_-^{k}(\Sigma_0,\Sigma_1) \ar[r] & LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-_0,\Lambda_1^-) \ar[r] & HF^{k+1}_{-\infty}(\Sigma_0,\Sigma_1)\ar[r] &\cdots} \end{equation*} For the case of $V$-shaped cobordisms we get that the complex $CF_{-\infty}(\Sigma_0,\Sigma_1)$ is the cone of the map $d_{0-}: C(\Lambda^-_0,\Lambda^-_1)\rightarrow CF(\Sigma_0,\Sigma_1)$, which leads to the following long exact sequence \begin{equation*} \xymatrixrowsep{0.15in} \xymatrixcolsep{0.15in} \xymatrix{ \cdots \ar[r] & HF^{k+1}_{-\infty}(\Sigma_0,\Sigma_1) \ar[r] & LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-_0,\Lambda_1^-)\ar[r] & HF_+^{k+2}(\Sigma_0,\Sigma_1) \ar[d] & \\ & & & HF^{k+2}_{-\infty}(\Sigma_0,\Sigma_1)\ar[r] & \cdots } \end{equation*} In both cases the isomorphism $LCH^k_{\varepsilon_0^+,\varepsilon_1^+}(\Lambda_0^+,\Lambda^+_1)\simeq HF^{k+1}_{-\infty}(\Sigma_0,\Sigma_1)$ of Equation \eqref{eq:19} concludes the proof. \end{proof} \section{Long exact sequences induced by an exact Lagrangian cobordism} \label{sec:lesproof} Here we establish the long exact sequences described in Section \ref{sec:les} associated to a Lagrangian cobordism $\Sigma$. In order to do this, we need to make use of the invariance result proven in the previous section. More precisely, the sought long exact sequence will be induced by Corollary \ref{cor:posnegLES} applied to a pair $(\Sigma_0,\Sigma_1)$ of exact Lagrangian cobordisms, where $\Sigma_0:=\Sigma$ and $\Sigma_1$ is obtained from $\Sigma$ by a suitable Hamiltonian perturbation.
\subsection{Various push-offs of an exact Lagrangian cobordism} \label{sec:small-pert-lagr} In the following, we assume that we are given an exact Lagrangian cobordism $\Sigma \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ from $\Lambda_-$ to $\Lambda_+$ inside the symplectisation of a contactisation. We furthermore assume that $\Sigma$ is cylindrical outside of the set $[A,B] \times P \times \ensuremath{\mathdj{R}}$ for some $A<B$. We shall write \begin{gather*} \overline{\Sigma}:= \Sigma \cap \{ t \in [A,B] \}, \\ \partial_- \overline{\Sigma} := \Sigma \cap \{ t = A\},\\ \partial_+ \overline{\Sigma} := \Sigma \cap \{ t = B \}, \end{gather*} so that clearly $\partial \overline{\Sigma}=\partial_- \overline{\Sigma} \cup \partial_+\overline{\Sigma}$. We will consider different push-offs constructed via autonomous Hamiltonians $h \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ induced by a function $h(t)$ depending only on the symplectisation coordinate. We write \[ \Sigma^h:=\phi^1_h(\Sigma) \] and observe that the Hamiltonian flow takes the particularly simple form \[ \phi^s_h(t,p,z)=(t,p,s e^{-t}h'(t)+z).\] In particular, the one-form $e^t\alpha$ pulls back to $e^t\alpha+(h''-h')dt$ under $\phi^1_h$. So, if $\Sigma_0$ is an exact Lagrangian cobordism with a primitive $f_0 \colon \Sigma_0 \to \ensuremath{\mathdj{R}}$ of $\Sigma_0^*e^t\alpha$, then the primitive of $(\phi^1_h(\Sigma_0))^*(e^t\alpha)$ is given by \begin{equation} \widetilde{f}_0(q)=f_0(q)+(h'-h)(a(\Sigma_0(q))),\label{eq:14} \end{equation} where $a$ is the canonical projection to $\ensuremath{\mathdj{R}}_t$, and where, by an abuse of notation, $\Sigma_0 \colon \Sigma_0 \hookrightarrow \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ denotes the inclusion. Assuming that $h$ coincides with $\lambda e^t$ near $-\infty$ for a constant $\lambda \in \ensuremath{\mathdj{R}}$ (i.e.
$h'=h$), the primitive $f_0$ vanishes at $-\infty$ if and only if $\widetilde{f}_0$ vanishes there. We are now ready to define the Hamiltonians $h \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ needed for the different perturbations. The long exact sequences of Theorems \ref{thm:lespair}, \ref{thm:lesduality} and \ref{thm:lesmayer-vietoris} are then derived by combining Corollary \ref{cor:posnegLES} applied to $(\Sigma,\Sigma^h)$ with Theorems \ref{thm:twocopyends} and \ref{thm:twocopy} proven below. \subsubsection{The push-off inducing the long exact sequence of a pair}\label{sec:push-inducing-long} \begin{figure}[htp] \centering \labellist \pinlabel $h_{\operatorname{dir}}(t)$ at 106 137 \pinlabel $A$ at 79 79 \pinlabel $B$ at 129 79 \pinlabel $1$ at 38 42 \pinlabel $1$ at 42 115 \pinlabel $t$ at 190 92 \pinlabel $t$ at 190 11 \pinlabel $e^{-t}h_{\operatorname{dir}}'(t)$ at 160 46 \pinlabel $A$ at 79 -2 \pinlabel $B$ at 129 -2 \endlabellist \includegraphics{pushoff-directed} \caption{The Hamiltonian $h_{\operatorname{dir}} \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ applied to an exact Lagrangian cobordism $\Sigma$ produces a directed pair. The corresponding Hamiltonian vector field is given by $e^{-t}h_{\operatorname{dir}}'(t)\partial_z$.} \label{fig:pushoff-directed} \end{figure} Consider the Hamiltonian $h_{\operatorname{dir}} \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ depending only on the $t$-coordinate and satisfying \begin{itemize} \item $h_{\operatorname{dir}}(t)=e^t$ for $t \le 0$; \item $h_{\operatorname{dir}}(t)=e^t-C$ for all $t \ge B+1$, and some $C\geq 0$; and \item $(h_{\operatorname{dir}}')^{-1}(0)$ is a connected interval containing $[A,B]$ (in particular, $h_{\operatorname{dir}}$ is constant on $[A,B]$).
\end{itemize} See Figure \ref{fig:pushoff-directed} for a schematic picture of $h_{\operatorname{dir}}(t)$ and the corresponding Hamiltonian vector field $e^{-t}h'_{\operatorname{dir}}(t)\partial_z$. Let $\Sigma':=\Sigma^{\epsilon h_{\operatorname{dir}}}$, for $\epsilon>0$ sufficiently small, and take $\Sigma_1$ to be a generic sufficiently small and compactly supported Hamiltonian perturbation of $\Sigma'$. Let $f_0$ be the primitive of $\Sigma^*(e^t\alpha)$ which vanishes at $-\infty$. From Equation \eqref{eq:14} and from the fact that $h>0$ and $h'=0$ on $[A,B]$ we get that the primitive of $(\Sigma')^*(e^t\alpha)$ is smaller than $f_0$ on $[A,B]\times Y$. Writing $\Sigma_0:=\Sigma$, for $\epsilon>0$ sufficiently small, all intersections between $\Sigma_0$ and $\Sigma_1$ are contained inside $[A,B]\times Y$. Thus, if the Hamiltonian perturbation of $\Sigma'$ is small enough, we get that $CF_+(\Sigma_0,\Sigma_1)=0$ or, using the terminology of Definition \ref{def:directed}, that the pair of cobordisms $(\Sigma_0,\Sigma_1)$ is directed. Furthermore, under the assumption that $\Sigma_1$ is a sufficiently $C^1$-small perturbation of $\Sigma_0$, and that the almost complex structures are chosen appropriately, Theorem \ref{thm:lespair} now follows by applying Theorems \ref{thm:twocopyends} and \ref{thm:twocopy} to the terms in the long exact sequence \eqref{eq:17} of Corollary \ref{cor:posnegLES} (also see Remark \ref{rem:morsefloer}). 
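The action comparisons above, as well as those in the following subsections, rest on the pullback formula behind Equation \eqref{eq:14}. For concreteness, we spell out this computation here, under the assumption that the contact form on the contactisation is $\alpha=dz-\theta$, where $\theta$ denotes a primitive of the symplectic form on $P$ (with a different sign convention for $\alpha$ only the $\theta$-terms below change). Since $\phi^1_h(t,p,z)=(t,p,e^{-t}h'(t)+z)$, we compute
\begin{align*}
(\phi^1_h)^*(e^t\alpha) &= e^t\,d\big(z+e^{-t}h'(t)\big)-e^t\theta\\
&= e^t\,dz + e^t\big(-e^{-t}h'(t)+e^{-t}h''(t)\big)\,dt - e^t\theta\\
&= e^t\alpha + (h''-h')\,dt.
\end{align*}
Since $(h''-h')\,dt=d(h'-h)$, a primitive of the pulled-back form is obtained from a primitive of $e^t\alpha$ by adding $(h'-h)\circ a$, which is precisely Equation \eqref{eq:14}. In particular, for $h=e^t$ the flow is $\phi^s_h(t,p,z)=(t,p,z+s)$, i.e.~the translation of the $z$-coordinate by $s$.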
\subsubsection{The push-off inducing the duality long exact sequence} \label{sec:dualitypushoff} \begin{figure}[htp] \centering \labellist \pinlabel $h_V(t)$ at 106 108 \pinlabel $A$ at 79 122 \pinlabel $B$ at 129 122 \pinlabel $1$ at 45 60 \pinlabel $-1$ at 45 5 \pinlabel $-1$ at 45 112 \pinlabel $t$ at 190 136 \pinlabel $t$ at 190 35 \pinlabel $e^{-t}h_V'(t)$ at 160 70 \pinlabel $A$ at 79 22 \pinlabel $B$ at 129 22 \endlabellist \includegraphics{pushoff-lambda} \caption{The Hamiltonian $h_V \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ applied to an exact Lagrangian cobordism $\Sigma$ produces a $V$-shaped pair. The corresponding Hamiltonian vector field is given by $e^{-t}h_V'(t)\partial_z$.} \label{fig:pushoff-lambda} \end{figure} Consider a Hamiltonian $h_V \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ depending only on the $t$-coordinate and satisfying \begin{itemize} \item $h_V(t)=-e^t$ for $t \le A-1$; \item $h_V(t)=e^t-C$ for all $ t \ge B+1$, and some $C>0$; and \item $(h_V')^{-1}(0)$ is a connected interval containing $[A,B]$. \end{itemize} See Figure \ref{fig:pushoff-lambda} for a schematic picture of $h_V(t)$ as well as of the corresponding Hamiltonian vector field $e^{-t}h'_V(t)\partial_z$. Let $\Sigma':=\Sigma^{\epsilon h_V}$, for $\epsilon>0$ sufficiently small, and let $\Sigma_1$ be a generic, sufficiently $C^1$-small, and compactly supported Hamiltonian perturbation of $\Sigma'$. Again, from Equation \eqref{eq:14}, we deduce that the primitive of $e^t\alpha$ on $\Sigma'$ is equal to $f_0+(h'_V-h_V)\circ a\circ \Sigma$, which is this time greater than $f_0$ on $[A,B]$. In conclusion, $CF_-(\Sigma_0,\Sigma_1)=0$ or, using the terminology of Definition \ref{def:directed}, $(\Sigma_0,\Sigma_1)$ is a $V$-shaped pair.
When $\Sigma_1$ is a sufficiently small perturbation of $\Sigma_0$, and the almost complex structures are chosen appropriately, Theorem \ref{thm:lesduality} follows by applying Proposition \ref{prop:twocopy} and Theorem \ref{thm:twocopyends} to the terms in the long exact sequence \eqref{eq:18} of Corollary \ref{cor:posnegLES} (also see Remark \ref{rem:morsefloer}). \subsubsection{The push-off inducing the Mayer-Vietoris long exact sequence} \label{sec:push-inducing-mayer} The Hamiltonian $h_V$ considered above also gives rise to the long exact sequence in Theorem \ref{thm:lesmayer-vietoris}. It is simply a matter of applying Theorem \ref{thm:principal} to a \emph{different} filtration of the same complex. Consider the filtration $$C(\Lambda_0^+,\Lambda_1^+)> C_+(\Lambda_0^-,\Lambda_1^-) \oplus CF_+(\Sigma_0,\Sigma_1) > C_0(\Lambda_0^-,\Lambda_1^-)$$ where the decomposition $$C(\Lambda_0^-,\Lambda_1^-)=C_+(\Lambda_0^-,\Lambda_1^-) \oplus C_0(\Lambda_0^-,\Lambda_1^-)$$ has been made so that the left summand is generated by those Reeb chords corresponding to Reeb chords on $\Lambda^-$ (being of length bounded from below by the minimal length of a Reeb chord on $\Lambda^-$, i.e.~of some significant length), while the right summand is generated by those Reeb chords corresponding to the critical points of a Morse function on $\Lambda^-$ (being of length roughly equal to $\epsilon>0$, which thus may be assumed to be arbitrarily small). To see this, observe that we have $\Lambda_0^-=\Lambda^-$, and that $\Lambda_1^-$ is obtained as the one-jet $j^1g$ of a negative Morse function $g \colon \Lambda^- \to (-\epsilon,0)$ in some standard contact neighbourhood of $\Lambda^-$, where the latter is identified with a neighbourhood of the zero-section in $J^1\Lambda^-$. For simplicity, we here make the assumption that there is a unique local minimum and maximum of $g$.
The above filtration induces a spectral sequence whose first page is given by \begin{gather*} E^{2,\bullet}_1 \oplus E^{1,\bullet}_1 \oplus E^{0,\bullet}_1 \\ \parallel \\ LCH^{\bullet-2}_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda_0^+,\Lambda^+_1)\oplus (LCH^{\bullet-1}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-) \oplus HF_{\bullet}^+(\Sigma_0,\Sigma_1))\oplus H_\bullet(\Lambda^-), \end{gather*} where we rely on \cite{Duality_EkholmetAl} together with Proposition \ref{prop:twocopy} in order to make the identifications \begin{gather*} H^i(C_+^\bullet(\Lambda_0^-,\Lambda^-_1)) = LCH^i_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-),\\ H^i(C_-^\bullet(\Lambda_0^-,\Lambda^-_1)) = H^{\operatorname{Morse}}_{n-1-i}(g). \end{gather*} Note that the middle term on the first page of the above spectral sequence really is a direct sum. This follows from the fact that the positive intersection points may be assumed to have arbitrarily small action and hence, in particular, action smaller than the length of any Reeb chord generator of $C_+(\Lambda^-_0,\Lambda^-_1)$. Consequently, there are no pseudoholomorphic strips as in the definition of the differential with input being a Reeb chord in $C_+(\Lambda^-_0,\Lambda^-_1)$ and output being an intersection point contained in $CF_+(\Sigma_0,\Sigma_1)$, or vice versa. The technique in the proof of \cite[Theorem 6.2(ii)]{LiftingPseudoholomorphic} shows that the Cthulhu differential restricts to the natural map $$H_{n-1-i}(\Lambda^-) \simeq H^i(C_-^\bullet(\Lambda_0^-,\Lambda^-_1)) \to H_{n-1-i}(\Sigma)$$ in homology induced by the topological inclusion $\Lambda^- \hookrightarrow \Sigma$. To that end, we start by considering a perturbation $\widetilde{h}_V(t)$ of $h_V(t)$ being of the form as shown in Figure \ref{fig:pushoff-mixed}. 
Note that any sufficiently $C^1$-small perturbation $f \colon \Sigma \to \ensuremath{\mathdj{R}}$ of the restriction $\widetilde{h}_V|_\Sigma$ has the property that the map $H_\bullet(\Lambda^-) \to H_\bullet(\Sigma)$ can be realised as the inclusion of the Morse homology complex generated by the critical points near $\{ t = a\}$ into the full Morse homology complex of $f$. Also, see Section \ref{sec:carefulpushoff} for a similar analysis. It then suffices to show that the differential is equal to the identity map $$H_{n-1-i}(\Lambda^-) \simeq H^i(C_-^\bullet(\Lambda_0^-,\Lambda^-_1)) \to H_{n-1-i}(\Lambda^-)$$ under the appropriate natural identifications. \begin{figure}[htp] \centering \labellist \pinlabel $\widetilde{h}_V(t)$ at 106 108 \pinlabel $a$ at 52 125 \pinlabel $A$ at 79 125 \pinlabel $B$ at 126 125 \pinlabel $1$ at 43 60 \pinlabel $-1$ at 45 5 \pinlabel $-1$ at 45 112 \pinlabel $t$ at 190 136 \pinlabel $t$ at 190 35 \pinlabel $e^{-t}\widetilde{h}_V'(t)$ at 160 70 \pinlabel $a$ at 46 43 \pinlabel $A$ at 79 25 \pinlabel $B$ at 126 25 \endlabellist \includegraphics{pushoff-mixed} \caption{The Hamiltonian $\widetilde{h}_V \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ applied to an exact Lagrangian cobordism $\Sigma$ produces a $V$-shaped pair. The corresponding Hamiltonian vector field is given by $e^{-t}\widetilde{h}_V'(t)\partial_z$.} \label{fig:pushoff-mixed} \end{figure} Finally, the statements concerning the fundamental classes are a consequence of \cite[Theorem 5.5]{Duality_EkholmetAl}, a result which is only valid if we are working with $\varepsilon^-_0=\varepsilon^-_1=\varepsilon$. Namely, the latter result shows that the minimum of $-g$ defines a nonvanishing cycle inside $LCH^{-1}_{\varepsilon,\varepsilon}(\Lambda_0^-,\Lambda^-_1)$.
Similarly, under the additional assumption that $\Lambda^-$ is \emph{horizontally displaceable}, the differential of the maximum of $-g$ is a non-zero class in $$H^n(C^\bullet_+(\Lambda_0^-,\Lambda^-_1)) \simeq LCH^{n}_{\varepsilon,\varepsilon}(\Lambda^-)$$ called the \emph{fundamental class in Legendrian contact cohomology} (also see Section \ref{sec:fund-class-twist}). With the above considerations, we have now managed to establish Theorem \ref{thm:lesmayer-vietoris}. \subsection{Computing the Floer homology of a Hamiltonian push-off} \label{sec:computingpushoff} We now consider the pair $(\Sigma_0,\Sigma_1)$, where $\Sigma_0=\Sigma$ and $\Sigma_1$ is equal to $\Sigma^h$ for a $C^1$-small smooth function $h \colon \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ as constructed above. To that end, we will assume that $\widetilde{J}_P$ is the cylindrical lift of a regular almost complex structure $J_P$ on $P$, i.e.~the unique cylindrical almost complex structure on $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ for which the canonical projection to $P$ is $(\widetilde{J}_P,J_P)$-holomorphic. Recall that the time-$s$ Hamiltonian flow generated by the Hamiltonian $e^t$ on the symplectisation $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ is simply a translation of the $z$-coordinate by $s$. Recall that this is the same as the time-$s$ flow $\phi^s \colon P \times \ensuremath{\mathdj{R}} \to P \times \ensuremath{\mathdj{R}}$ of the Reeb vector field induced by the standard contact form. \begin{Prop} \label{prop:twocopy} For the cylindrical lift $\widetilde{J}_P$ of an almost complex structure, we have a canonical isomorphism \[LCC^\bullet_{\varepsilon_0,\varepsilon_1}(\Lambda,\phi^\epsilon(\Lambda')) \simeq LCC^\bullet_{\varepsilon_0,\varepsilon_1}(\Lambda)\] of complexes for any sufficiently small $\epsilon>0$ and any generic, sufficiently $C^1$-small Legendrian perturbation $\Lambda'$ of $\Lambda$.
Under the additional assumption that $\Lambda$ is horizontally displaceable, we moreover have a quasi-isomorphism \[LCC^\bullet_{\varepsilon_0,\varepsilon_1}(\Lambda,\phi^{-\epsilon}(\Lambda')) \sim LCC_{n-1-\bullet}^{\varepsilon_0,\varepsilon_1}(\Lambda)\] of complexes, where $n$ is the dimension of $\Lambda$. \end{Prop} \begin{Rem} The above identifications of augmentations, despite the fact that the DGAs are associated to geometrically \emph{different} Legendrians, can be justified as follows. For $\Lambda,\Lambda'' \subset P \times \ensuremath{\mathdj{R}}$ being sufficiently $C^1$-close together with a fixed choice of compatible almost complex structure $J_P$ on $P$, the invariance theorem in \cite{LCHgeneral} gives a canonical isomorphism between the Chekanov-Eliashberg algebras $({\mathcal A}(\Lambda), \partial_{\Lambda})$ and $({\mathcal A}(\Lambda''), \partial_{\Lambda''})$ induced by the canonical bijection identifying the Reeb chords on $\Lambda$ with the Reeb chords on $\Lambda''$. \end{Rem} \begin{proof} The first isomorphism is simply \cite[Proposition 2.7]{Floer_Conc}. The second isomorphism follows by combining this result with the isomorphism \[ LCC^\bullet_{\varepsilon_0,\varepsilon_1}(\Lambda,\phi^{-\epsilon}(\Lambda')) \simeq LCC_{n-1-\bullet}^{\varepsilon_0,\varepsilon_1}(\Lambda) \] established in \cite[Proposition 4.1]{Duality_EkholmetAl}. Here we must use the assumption of horizontal displaceability. \end{proof} Consider an admissible almost complex structure $J$ on $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ which coincides with the cylindrical lifts $\widetilde{J}^-_P$ and $\widetilde{J}^+_P$ of almost complex structures on $P$ in subsets of the form \begin{gather*} (-\infty,-T] \times P \times \ensuremath{\mathdj{R}},\\ [T,+\infty) \times P \times \ensuremath{\mathdj{R}}, \end{gather*} respectively.
In the following we will also assume that $h \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ is an autonomous Hamiltonian of the form $\pm e^t +C$ in each of the two latter cylindrical subsets. \begin{Thm} \label{thm:twocopyends} Let $\Sigma$ be an $(n+1)$-dimensional exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda$. Assume that there are augmentations $\varepsilon_i$, $i=0,1$, of the Chekanov-Eliashberg algebra of $\Lambda^-$ defined using $\widetilde{J}^-_P$. For $\epsilon>0$ sufficiently small, and $\Sigma'$ being a sufficiently $C^1$-small compactly supported perturbation of $\Sigma$, we may assume that $J$ is a regular admissible almost complex structure for which: \begin{enumerate} \item There is a natural isomorphism \[LCC^\bullet_{\varepsilon_0\circ \Phi_\Sigma,\varepsilon_1\circ \Phi_{\phi^\epsilon_h(\Sigma')}}(\Lambda,\phi^\epsilon(\Lambda)) = LCC^\bullet_{\varepsilon_0\circ \Phi_\Sigma,\varepsilon_1\circ \Phi_\Sigma}(\Lambda)\] of complexes; \item There is an equality \[ \Phi_\Sigma^{\varepsilon_0^-,\varepsilon_1^-} = d_{+-}\] of chain maps, where the former is the map induced by the linearised DGA morphism $\Phi_\Sigma$, as described in Section \ref{subsec: LCH}, and the latter is the corresponding component of the differential $\mathfrak{d}_{\varepsilon^-_0\varepsilon^-_1}$ on $\operatorname{Cth}_\bullet(\Sigma,\phi^\epsilon_h(\Sigma'))$. \end{enumerate} \end{Thm} \begin{Rem} All Legendrian contact homology complexes above are induced by $\widetilde{J}^\pm_P$, while the DGA morphisms, as well as $d_{+-}$, are induced by $J$.
\end{Rem} \begin{proof} Both results follow from Proposition \ref{prop:twocopy} together with the bijections provided by \cite[Theorem 2.15]{Floer_Conc}, where the latter provides the necessary identifications of pseudoholomorphic strips on the cobordism $\Sigma$ and its two-copy $\Sigma \cup \phi^{\epsilon}_h(\Sigma')$ that are used in the definitions of the DGA morphisms induced by the respective cobordisms. To that end, an admissible almost complex structure as above must be used, i.e.~which coincides with cylindrical lifts in the prescribed subsets. \end{proof} \begin{Thm} \label{thm:twocopy} For an appropriately chosen Maslov potential and for any generic smooth function $f \colon \Sigma \to \ensuremath{\mathdj{R}}$ obtained as a sufficiently $C^1$-small and compactly supported perturbation of $h|_{\Sigma} \colon \Sigma \to \ensuremath{\mathdj{R}}$, there is a Hamiltonian perturbation $\Sigma'$ of $\Sigma$ inducing a canonical identification of complexes \[ CF_\bullet (\Sigma,\phi^\epsilon_h(\Sigma')) = C^{\operatorname{Morse}}_{n+1-\bullet}(f).\] Here we require that the almost complex structure is regular, admissible, and induced by a Riemannian metric $g$ for which $(f,g)$ is a Morse-Smale pair in some compact neighbourhood of the non-cylindrical part of $\Sigma$. If $\Sigma$ is pin, then the Hamiltonian perturbation admits a pin structure for which the above identification holds with coefficients in $\ensuremath{\mathdj{Z}}$. \end{Thm} \begin{proof} Observe that the Weinstein Lagrangian neighbourhood theorem implies that there is a symplectomorphism identifying a neighbourhood of $\Sigma$ with a neighbourhood of the zero-section in $T^*\Sigma$, such that $\Sigma$ moreover gets identified with the zero-section; see Section \ref{sec:carefulpushoff}.
For an appropriate such identification, we may assume that $\phi^\epsilon_h(\Sigma)=\phi^1_{\epsilon h}(\Sigma)$ is contained in this neighbourhood and that this Lagrangian is identified with the section $-\epsilon d(h|_\Sigma)$. In particular, every intersection point $\phi^1_{\epsilon h}(\Sigma) \cap \Sigma$ corresponds to a critical point of $\epsilon h$ and may be assumed to have action corresponding to $\epsilon h$. After replacing $\epsilon h$ with $\epsilon f$, we can choose $\Sigma'$ to be the exact Lagrangian cobordism given as the graph of $\epsilon df$, while using the above Weinstein neighbourhood. The isomorphism of Morse and Floer complexes is standard, going back to the original work of Floer \cite{FloerHFlag}. For the current setting, the analogous computation made in \cite[Theorem 6.2]{LiftingPseudoholomorphic} is also relevant. \end{proof} \begin{Rem} \label{rem:morsefloer} Consider the autonomous Hamiltonian $h$ in Theorem \ref{thm:twocopy} above, together with the induced perturbation $f$ of $h|_\Sigma$. \begin{enumerate} \item If $h=h_{\operatorname{dir}}$ as defined in Section \ref{sec:push-inducing-long}, then \[H(C^{\operatorname{Morse}}_{n+1-\bullet}(f))=H_{n+1-\bullet}(\overline{\Sigma},\partial_-\overline{\Sigma};R) \simeq H^{\bullet}(\Sigma,\partial_+\overline{\Sigma};R).\] \item If $h=h_V$ as defined in Section \ref{sec:dualitypushoff}, then \[H(C^{\operatorname{Morse}}_{n+1-\bullet}(f))=H_{n+1-\bullet}(\overline{\Sigma};R).\] \end{enumerate} \end{Rem} \subsection{Seidel's isomorphism} \label{sec:seidel} We end this section by recalling the definition of Seidel's isomorphism. Let $h \colon \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ be an autonomous Hamiltonian coinciding with $e^t+C_+$ on the positive end, and with $-e^t$ on the negative end; we can take e.g.~the Hamiltonian $h_V$ constructed in Section \ref{sec:dualitypushoff}.
Combining Theorem \ref{thm:twocopy} and Proposition \ref{prop:twocopy} we obtain a module morphism \[G^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma \colon C^{\operatorname{Morse}}_{n+1-\bullet}(f) \to LCC^{\bullet-1}_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+,\phi^\epsilon(\Lambda^+);R)=LCC^{\bullet-1}_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+;R),\] identified with the term $d_{+0}$ in the differential of $(\operatorname{Cth}(\Sigma,\phi^\epsilon_f(\Sigma)),\mathfrak{d}_{\varepsilon^-_0,\varepsilon^-_1})$. Observe that, since the pair of cobordisms is $V$-shaped, we can conclude the following. \begin{Lem} The above map $G^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma$ is a chain map which, in the case when the negative end of $\Sigma$ is empty, is a quasi-isomorphism. \end{Lem} \section{Twisted coefficients, $L^2$-completions, and applications} \label{sec:l2-legendr-cont} In order to deduce information about the fundamental group of an exact Lagrangian cobordism it is necessary to introduce a version of our Floer complex with coefficients twisted by the fundamental group, analogous to that defined for Lagrangian Floer homology in \cite{KFloer} by Sullivan and in \cite{Damian_Lifted} by Damian. Since it is not possible to make sense of the rank of a general module with group ring coefficients, it will also be necessary to introduce a version of an $L^2$-completion of this complex. So-called $L^2$-coefficients were first considered by Atiyah in \cite{AtiyahL2}. We start by describing the version of the complex with twisted coefficients, and we also introduce a version of the fundamental class in this setting. The fundamental class will be crucial for the proof of Theorem \ref{thm:pi_1carclass} (see Section \ref{sec:proofpi_1carclass}). We then continue by defining the $L^2$-completion of this complex, for which we recall some basic properties. The proof of Theorem \ref{thm:l2rigidity} will use this theory (see Section \ref{sec:proofl2rigidity}).
\subsection{Floer homology with twisted coefficients.} \label{sec:twisted} First, we introduce the algebraic setup needed in order to define the Chekanov-Eliashberg algebra, along with its linearisations, in the setting of twisted coefficients. We refer the reader to \cite{NonCommAug} by the authors for a more detailed treatment. \subsubsection{The tensor ring of a bimodule} \label{sec:tens-algebra-bimod} First we recall a few classical and general algebraic constructions. Let $A$ be a (not necessarily commutative) unital ring. We start with the definition of the tensor product of $A-A$-bimodules. Given two $A-A$-bimodules $M$ and $N$, their (balanced) tensor product $M\otimes_A N$ is the quotient of the abelian group $M\otimes_{\mathbb{Z}} N$ by the relation $$\forall m\in M ,\;n\in N,\;a\in A:\quad ma\otimes n=m\otimes an.$$ Observe that $M\otimes_A N$ has the structure of an $A-A$-bimodule, where left and right multiplications are given by the formula $a(m\otimes n)a'=(am)\otimes (na')$ with $m\in M$, $n\in N$, and $a,a'\in A$. In order to avoid confusion, we would like to emphasise that in the bimodule $M\otimes_A N$ elements of the form $am\otimes n$ and $m\otimes an$ are in general not in relation to each other. Given an $A-A$-bimodule $M$, \textit{the tensor ring of $M$} is the graded ring $$T_A(M)=\bigoplus\limits_{k=0}^{\infty} M^{\otimes_A k},$$ where $M^{\otimes_A 0}=A$, and $M^{\otimes_A k}=M\otimes_A M^{\otimes_A (k-1)}$. Multiplication in $T_A(M)$ is induced by the natural isomorphism from $M^{\otimes_A l}\otimes_A M^{\otimes_A m}$ to $M^{\otimes_A (l+m)}$. Observe that $T_A(M)$ contains a subring $M^{\otimes_A 0}=A$ as well as an $A-A$-bimodule $M^{\otimes_A 1}=M$, and that $T_A(M)$ is the universal ring satisfying these properties. (Note that, as $A$ is not necessarily commutative, $T_A(M)$ is not an algebra according to standard terminology. This is the reason why we refer to it as the \emph{tensor ring}.)
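To make the balanced relation concrete (this small example, with hypothetical generators $\gamma_1,\gamma_2$, is purely illustrative and follows directly from the definitions): if $M$ is the free $A-A$-bimodule generated by $\gamma_1$ and $\gamma_2$, then in $M\otimes_A M$ we have
\[ \gamma_1 a\otimes\gamma_2 = \gamma_1\otimes a\gamma_2 \quad\text{for all } a\in A, \]
whereas $a\gamma_1\otimes\gamma_2$ and $\gamma_1\otimes a\gamma_2$ are in general distinct; only the right $A$-action on the first factor, and not the left one, can be moved across the tensor product.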
If $M$ is freely generated by elements $\{\gamma_1,\ldots,\gamma_k\}$ as a bimodule, homogeneous elements of $T_A(M)$ are generated by elements of the form $$a_1\gamma_{i_1}\otimes a_2\gamma_{i_2}\otimes\cdots\otimes a_j\gamma_{i_j}a_{j+1},$$ where $a_1,\dots,a_{j+1}\in A$. In most cases $A$ will be the group ring $R[\pi]$ over a commutative ring $R$. In this situation, the tensor ring of $M$ over $A=R[\pi]$ will be denoted by $T_\pi(M)$ for simplicity. \subsubsection{The Chekanov-Eliashberg algebra with twisted coefficients} \label{sec:lch-with-homological} Legendrian contact homology with twisted coefficients has previously been considered in \cite{EkholmSmith}, and a detailed account is currently under development in \cite{Albin}. Here we consider a version of the Chekanov-Eliashberg algebra for a Legendrian submanifold $\Lambda \subset P\times \ensuremath{\mathdj{R}}$ with twisted coefficients, constructed as a tensor ring over the group ring $R[\pi_1(\Lambda)]$, as defined above, where $R$ is a commutative ring. Fix a base point $* \in \Lambda$ and write $\pi_1(\Lambda):=\pi_1(\Lambda,*)$ for short. Let $A$ be a unital, not necessarily commutative, ring for which: \begin{itemize} \item There is a ring homomorphism $i:R[\pi_1(\Lambda)]\to A$. This induces an $R[\pi_1(\Lambda)]-R[\pi_1(\Lambda)]$-bimodule structure on $A$; \item There is an augmentation homomorphism $a:A\rightarrow R$ such that $\Pi:=a\circ i$ is the standard augmentation $\Pi:R[\pi_1(\Lambda)]\to R$. \end{itemize} By abuse of notation we will view any element $a$ in $R[\pi_1(\Lambda)]$ as an element of $A$ by identifying it with its image under the ring homomorphism $i$. For example, any group homomorphism $\pi_1(\Lambda)\rightarrow G$ induces a ring homomorphism $R[\pi_1(\Lambda)]\rightarrow R[G]$, and the augmentation corresponds to the standard ring homomorphism $R[G]\rightarrow R$. When $G=\{1\}$, the construction we describe below will recover the standard Chekanov-Eliashberg DGA.
For any Reeb chord $\gamma$ of $\Lambda$, we fix a \textit{capping path} $\ell_\gamma^e$ (resp. $\ell_\gamma^s$) on $\Lambda$ which connects the end point (resp. starting point) of $\gamma$ to the base point $*$. (Such paths exist because we assume that $\Lambda$ is connected.) Let $C(\Lambda)$ be the free $A-A$-bimodule generated by the Reeb chords of $\Lambda$. A punctured pseudoholomorphic disc $u\in \mathcal{M}^r(\gamma^+;\gamma_1^-,\gamma_2^-,\ldots,\gamma_k^-)$ determines an element $\mathbf{c}_u$ of $C(\Lambda)^{\otimes_{A}k}$ via the following procedure. Let $\partial_0 S_r,\ldots, \partial_k S_r$ be the connected components of $\partial S_r$ ordered as in Section \ref{sec:univ-family-point}. We denote by $p$ the canonical projection $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}} \to P \times \ensuremath{\mathdj{R}}$ from the symplectisation to the contact manifold. \begin{itemize} \item For $j\in \{1,\ldots, k-1\}$, we denote by $a_j$ the based loop $(\ell_{\gamma^-_{j+1}}^e)^{-1}*(p\circ u|_{\partial_j})*\ell_{\gamma^-_j}^s$; \item For $j=0$, we denote by $a_j$ the based loop $\ell_{\gamma^-_{1}}^e*(p\circ u|_{\partial_0})*(\ell_{\gamma^+}^e)^{-1}$ ; and \item For $j=k$, we denote by $a_j$ the based loop $\ell_{\gamma^+}^s *(p\circ u|_{\partial_k})*(\ell_{\gamma^-_k}^s)^{-1}$. \end{itemize} The element $\mathbf{c}_u$ is then given by $$\mathbf{c}_u=a_0\gamma_1^- a_1\otimes \gamma_2^- a_2\otimes\cdots\otimes \gamma_k^- a_k.$$ The Chekanov-Eliashberg differential is defined on $T_A(C(\Lambda))$ by the formula $$\partial(\gamma^+)=\sum_{\gamma_1,\ldots,\gamma_k}\sum_{u\in\mathcal{M}(\gamma^+;\gamma_1^-,\gamma_2^-,\ldots,\gamma_k^-)} \text{sign}(u)\mathbf{c}_u$$ on generators, where the sum is taken over the rigid components of the moduli spaces. The differential is then extended as a bimodule homomorphism to $C(\Lambda)$, and ultimately to the whole tensor ring using the Leibniz rule. 
The DGA obtained will be denoted by $\mathcal{A}_A(\Lambda)$ (or $\mathcal{A}_G(\Lambda)$ if $A=R[G]$). If in this notation we omit the subscript $A$, then we just mean the standard Chekanov-Eliashberg algebra of $\Lambda$, which is the particular case $\mathcal{A}(\Lambda):=\mathcal{A}_R(\Lambda)$. By an \emph{augmentation} of the Chekanov-Eliashberg DGA we mean a homomorphism of $R[\pi_1(\Lambda)]-R[\pi_1(\Lambda)]$-bimodules $\varepsilon: T_A(C(\Lambda))\rightarrow A$ being a unital ring homomorphism satisfying $\varepsilon\circ\partial=0$. \begin{Rem} \begin{enumerate} \item An augmentation in this setting is still determined by its values on the Reeb chord generators. \item Any homomorphism $G \to H$ of groups induces a unital DGA morphism $r \colon \mathcal{A}_G(\Lambda) \to \mathcal{A}_H(\Lambda)$. In particular, when $H$ is the trivial group, we get a canonical DGA homomorphism $r \colon \mathcal{A}_G(\Lambda) \to \mathcal{A}(\Lambda)$. The pre-composition $\widetilde{\varepsilon}:=\varepsilon \circ r$ of an augmentation $\varepsilon$ of $\mathcal{A}(\Lambda)$ is clearly an augmentation of $\mathcal{A}_G(\Lambda)$; this augmentation will be called the \emph{lift of $\varepsilon$}. \item Similarly to the definition of the differential of the DGA with twisted coefficients, an exact Lagrangian cobordism $\Sigma$ from $\Lambda^-$ to $\Lambda^+$ can be seen to induce a unital DGA homomorphism $\widetilde{\Phi}_\Sigma \colon \mathcal{A}_{\pi_1(\Sigma)}(\Lambda^+) \to \mathcal{A}_{\pi_1(\Sigma)}(\Lambda^-)$ with twisted coefficients. In particular, an exact Lagrangian filling induces an augmentation in the group ring of its fundamental group. 
\end{enumerate} \end{Rem} For any pair of augmentations, the linearisation procedure gives rise to a differential $d_{\varepsilon_0,\varepsilon_1}: LCC_\bullet^{\varepsilon_0,\varepsilon_1}(\Lambda;A) \to LCC_\bullet^{\varepsilon_0,\varepsilon_1}(\Lambda;A)$ on the free $A-A$-bimodule spanned by the Reeb chords in the usual way. The map $d_{\varepsilon_0,\varepsilon_1}$ is in this situation a bimodule homomorphism. We again denote the resulting homology by $LCH_\bullet^{\varepsilon_0,\varepsilon_1}(\Lambda;A)$ called the \textit{bilinearised Legendrian contact homology with twisted coefficients}. In the case when $\Lambda=\Lambda_0 \sqcup \Lambda_1$ and $\varepsilon_i$ comes from an augmentation of $\Lambda_i$ for $i=0,1$, we can again define the sub-complex $LCC^\bullet_{\varepsilon_0,\varepsilon_1}(\Lambda_0,\Lambda_1;A)$ which is the free right $A$-module spanned by the Reeb chords starting on $\Lambda_1$ and ending on $\Lambda_0$, as well as the corresponding cohomology groups $LCH^\bullet_{\varepsilon_0,\varepsilon_1}(\Lambda_0,\Lambda_1;A)$. The result \cite[Proposition 2.7]{Floer_Conc} carries over immediately to this setting, and thus the identification \[ LCH^\bullet_{\varepsilon_0,\varepsilon_1}(\Lambda,\Lambda';A) = LCH^\bullet_{\varepsilon_0,\varepsilon_1}(\Lambda;A)\] holds on the level of homology (again, for a suitable small push-off $\Lambda'$ of $\Lambda$, together with a suitable lifted almost complex structure). \begin{Rem} Note that, if $\varepsilon_1$ takes values in $R$, then the left action of $\varepsilon_1(\gamma)$ on the free module is the same as the right action. In this case, the whole module structure factors through a complex defined as a free left $R[\pi_1(\Sigma_0)]$-module generated by the Reeb chords. This is relevant for the next section, where we will twist coefficients using $R[\pi_1(\Sigma_0)]$ in a way so that the augmentation $\varepsilon_1$ still will take values in $R$. 
\end{Rem} \subsubsection{The Floer complex with twisted coefficients} Let $R$ be a unital commutative ring. We are now ready to define our Floer complex for a pair $(\Sigma_0,\Sigma_1)$ of exact Lagrangian cobordisms with twisted coefficients, defined as an $R[\pi_1(\Sigma_0)]-R[\pi_1(\Sigma_0)]$-bimodule. For Lagrangian intersection Floer homology, such a construction has previously been carried out in \cite{KFloer} and \cite{Damian_Lifted}. This was subsequently generalised to the case of wrapped Floer homology in \cite[Section 4.2]{Floer_Conc}, i.e.~for a pair of Lagrangian cobordisms having empty negative ends. Here we will define this theory for a pair of exact Lagrangian cobordisms having non-empty negative ends whose Chekanov-Eliashberg algebras admit augmentations. As before, we let $\Sigma_i \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ be exact Lagrangian cobordisms from $\Lambda_i^-$ to $\Lambda^+_i$, $i=0,1$, where both $\Sigma_0$ and $\Lambda^-_0$ are assumed to be connected. We consider the non-free $R[\pi_1(\Lambda_0^\pm)]-R[\pi_1(\Lambda_0^\pm)]$-bimodule $R[\pi_1(\Sigma_0)]$ with structure coming from the ring homomorphism induced by the inclusion maps $\{\pm T\}\times\Lambda_0^\pm \rightarrow \Sigma_0$. In order to obtain this bimodule structure, the base point of $\Sigma_0$ needs to be chosen of the form $(-T,*)$, where $*$ is the base point of $\Lambda_0^-$. We then choose, once and for all, a path connecting $(-T,*)$ and $(T,*')$, where $*'$ is the base point of $\Lambda_0^+$. Fix augmentations $\varepsilon_0^-$ and $\varepsilon_1^-$ of $\mathcal{A}_{\pi_1(\Sigma_0)}(\Lambda^-_0)$ and $\mathcal{A}(\Lambda^-_1)$, respectively (which by definition take values in $R[\pi_1(\Sigma_0)]$ and $R$, respectively).
Let $u\in\mathcal{M}^r(x;\boldsymbol{\zeta},y,\boldsymbol{\delta})$ be a pseudoholomorphic strip involved in the Cthulhu differential $\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}$ as defined in Section \ref{sec:Cthulu-moduli-spaces}. Order the connected components of the boundary $\partial S_r$ which are mapped to $\Sigma_0$ starting with the arc adjacent to the incoming puncture (as in Section \ref{sec:univ-family-point}), and denote them by $\partial_0S_r,\ldots,\partial_kS_r$. We associate to each of these arcs an element $a_j \in \pi_1(\Sigma_0)$ in the same manner as in the definition of the differential of the Chekanov-Eliashberg algebra with twisted coefficients described above. These paths, together with the above augmentations, now determine an element $$c^{\varepsilon_0^-,\varepsilon_1^-}_u=a_1\varepsilon_0^-(\zeta_1)a_2\varepsilon_0^-(\zeta_2)\cdots a_{k-1}\varepsilon_0^-(\zeta_{k-1})a_k\varepsilon_1^-(\boldsymbol{\delta}) \in R[\pi_1(\Sigma_0)].$$ This construction allows us to define the Cthulhu differential $\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}$ on the non-free $R[\pi_1(\Sigma_0)]$--$R[\pi_1(\Sigma_0)]$-bimodule $$\operatorname{Cth}(\Sigma_0,\Sigma_1;R[\pi_1(\Sigma_0)]):=\operatorname{Cth}(\Sigma_0,\Sigma_1)\otimes_{R}R[\pi_1(\Sigma_0)].$$ First, when $y$ is either an intersection point or a Reeb chord from $\Lambda^-_0$ to $\Lambda^-_1$, we define $$\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}(y)=\sum_x\sum_{u \in \mathcal{M}^r(x;\boldsymbol{\zeta},y,\boldsymbol{\delta})}\text{sign}(u)c^{\varepsilon_0^-,\varepsilon_1^-}_ux,$$ where the sum is taken over the rigid components of the moduli space. The formula for $y$ being a Reeb chord from $\Lambda^+_1$ to $\Lambda^+_0$ is similar, but involves the pull-backs $\varepsilon_0^- \circ \widetilde{\Phi}_{\Sigma_0}$ and $\varepsilon_1^- \circ \Phi_{\Sigma_1}$ of the augmentations under the DGA homomorphism induced by the cobordisms with and without twisted coefficients, respectively. 
The differential is then extended to all of $\operatorname{Cth}(\Sigma_0,\Sigma_1;R[\pi_1(\Sigma_0)])$ as a \emph{right} $R[\pi_1(\Sigma_0)]$-module homomorphism. The techniques in Section \ref{sec:acyclicity} can be used to prove the following theorem. \begin{Thm} \label{thm:d2lifted} The map $\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}:\operatorname{Cth}(\Sigma_0,\Sigma_1;R[\pi_1(\Sigma_0)])\rightarrow \operatorname{Cth}(\Sigma_0,\Sigma_1;R[\pi_1(\Sigma_0)])$ satisfies $\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}^2=0$, i.e.~it is a differential, and it gives rise to an acyclic complex, i.e.~$H(\operatorname{Cth}(\Sigma_0,\Sigma_1;R[\pi_1(\Sigma_0)]),\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-})=0$. \end{Thm} \begin{proof} The proof is similar to the one in Sections \ref{sec:Cthulu-differential} and \ref{sec:proof-acyclicity}. To that end, we observe the following important feature of the above construction of the coefficient $c^{\varepsilon_0^-,\varepsilon_1^-}_u$. Let $u$ and $v$ be holomorphic strips whose outgoing and incoming punctures, respectively, agree so that one can pre-glue them to a strip $u*v$. It then follows that $c^{\varepsilon_0^-,\varepsilon_1^-}_{u*v}=c^{\varepsilon_0^-,\varepsilon_1^-}_v*c^{\varepsilon_0^-,\varepsilon_1^-}_u$. It now follows from the compactness theorem that the boundary of a component of a one-dimensional moduli space of holomorphic strips either consists of two broken configurations $u * v$ and $u' * v'$, both of which contribute with the same coefficient (with opposite signs) $$c^{\varepsilon_0^-,\varepsilon_1^-}_v*c^{\varepsilon_0^-,\varepsilon_1^-}_u=-c^{\varepsilon_0^-,\varepsilon_1^-}_{v'}*c^{\varepsilon_0^-,\varepsilon_1^-}_{u'},$$ or at least one boundary point corresponds to a broken configuration involving a pure chord. 
However, when counted with the augmentations as above, the counts of the latter boundary points cancel for the algebraic reason that the augmentation is a chain map (see \ref{II} in Section \ref{sec:Cthulhu-complex}). \end{proof} In view of the above theorem, the computations in Section \ref{sec:lesproof} can be carried over immediately to the case of twisted coefficients. We proceed to explicitly describe the long exact sequence analogous to \eqref{leqtr} in Theorem \ref{thm:lespair}. Let $\Sigma$ be an exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$. Let $\varepsilon^-_{0}$ and $\varepsilon^-_1$ be two augmentations of $\mathcal{A}_{\pi_1(\Sigma)}(\Lambda^-)$ and $\mathcal{A}(\Lambda^-)$ into $R[\pi_1(\Sigma)]$ and $R$, respectively. Further, we consider the pull-backs $\varepsilon^+_0:=\varepsilon^-_0 \circ \widetilde{\Phi}_\Sigma$ and $\varepsilon^+_1:=\varepsilon^-_1 \circ \Phi_\Sigma$ of these augmentations. \begin{Rem} It is important to note that $\varepsilon_0^+$ need \emph{not} be the lift of an augmentation into $R$ in general, even in the case when $\varepsilon_0^-$ is. \end{Rem} Writing $\widetilde{\Sigma}$ for the universal cover of $\Sigma$, and $\overline{\widetilde{\Sigma}}$ for its compactification to a manifold with boundary, there is a long exact sequence: \begin{equation} \label{leqtrlifted} \xymatrixrowsep{0.15in} \xymatrixcolsep{0.15in} \xymatrix{ \cdots\ar[r]& LCH^{k-1}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+;R[\pi_1(\Sigma)]) \ar[d] & & & \\ & H_{n+1-k}(\overline{\widetilde{\Sigma}},\partial_- \overline{\widetilde{\Sigma}};R) \ar[r] & LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-; R[\pi_1(\Sigma)]) \ar[d] & \\ & & LCH^{k}_{\varepsilon^+_0,\varepsilon_1^+}(\Lambda^+;R[\pi_1(\Sigma)])\ar[r] &\cdots} . 
\end{equation} The identification of the topological term $H_{n+1-k}(\overline{\widetilde{\Sigma}},\partial_- \overline{\widetilde{\Sigma}};R)$ is proven in the same manner as before (see Theorem \ref{thm:twocopy}), while making the observation that the Morse homology of a manifold with coefficients twisted by its fundamental group computes the homology of its universal cover. Finally, we point out that $$ LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-; R[\pi_1(\Sigma)])=LCH^{k}_{\varepsilon^-_0,\varepsilon^-_1}(\Lambda^-) \otimes R[\pi_1(\Sigma)]$$ is satisfied in the case when $\Lambda^-$ is simply connected. \subsection{Augmentations in finite-dimensional non-commutative algebras.} \label{sec:augm-finite-dimens} The DGAs with ``coefficients'' in non-commutative unital algebras as treated in Section \ref{sec:twisted} can also be considered in a setting where the algebra is more general than a group ring, and without using twisted coefficients. Here we describe augmentations in non-commutative unital algebras, as done by the second and fourth authors in \cite{EstimNumbrReebChordLinReprCharAlg}, which can be seen to fit into this framework. Since we will be interested in computing the ranks of the linearised complexes involved, we will restrict ourselves to the case when the involved algebra is finite-dimensional over the ground field $\ensuremath{\mathdj{F}}$. A finite-dimensional augmentation of the Chekanov-Eliashberg algebra is a unital DGA homomorphism \[ \varepsilon \colon (\mathcal{A}(\Lambda),\partial) \to (A,0),\] where $A$ is a \emph{not necessarily commutative} unital algebra which is finite-dimensional over the ground field $\ensuremath{\mathdj{F}}$. Here $\ensuremath{\mathdj{F}}$ denotes the field that was used as coefficient ring for $\mathcal{A}(\Lambda)$. 
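To illustrate the extra freedom afforded by non-commutative targets, consider the following toy computation (the differential below is an ad hoc illustration, not taken from the cited examples, and gradings are ignored). Suppose that $\mathcal{A}(\Lambda)$ is defined over $\ensuremath{\mathdj{F}}=\ensuremath{\mathdj{Z}}_2$ and contains generators $a$, $b$, $c$ with \[ \partial c = 1 + ab + ba.\] Any augmentation $\varepsilon$ into a commutative unital algebra $A \neq 0$ would have to satisfy \[ 0=\varepsilon(\partial c)=1+2\,\varepsilon(a)\varepsilon(b)=1 \in A,\] which is impossible. On the other hand, for $A=\mathrm M_{2}(\ensuremath{\mathdj{Z}}_2)$ the choice \[ \varepsilon(a)=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad \varepsilon(b)=\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\] satisfies $\varepsilon(a)\varepsilon(b)+\varepsilon(b)\varepsilon(a)=\mathbf{1}$, and hence $\varepsilon(\partial c)=\mathbf{1}+\mathbf{1}=0$ in characteristic $2$, so that $\varepsilon$ extends to a unital DGA homomorphism.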
Recall that the existence of such a (graded) augmentation is equivalent to the existence of a finite-dimensional representation of the so-called \emph{characteristic algebra}, which is defined as the quotient algebra $\mathcal{A}(\Lambda)/\langle \partial(\mathcal{A})\rangle$ by the two-sided ideal generated by the boundaries (see \cite{Ngcomputable}). Given two such augmentations \[\varepsilon_i \colon (\mathcal{A}(\Lambda^-_i), \partial_i) \to (A_i,0), \:\:i=0,1,\] we can form all our complexes as free $A_0 \otimes_\ensuremath{\mathdj{F}} (A_1)^{\operatorname{op}}$-modules or, differently put, as free $A_0$-$A_1$-bimodules. It is important to note that the dimension over $\ensuremath{\mathdj{F}}$ still makes sense, since the latter modules are of dimension $(\dim_\ensuremath{\mathdj{F}} A_0)(\dim_\ensuremath{\mathdj{F}} A_1)$ times the number of generators. To construct the differentials in this setting one proceeds as in \cite{EstimNumbrReebChordLinReprCharAlg}, which is also analogous to the construction of the complexes in Section \ref{sec:twisted} with twisted coefficients. In particular, the differentials are of the form \[d((a_0\otimes a_1)x)=\sum_{y}\sum_{\boldsymbol{\delta}^-,\boldsymbol{\zeta}^-}\#\mathcal{M}(y;\boldsymbol{\delta}^-,x,\boldsymbol{\zeta}^-)\cdot\varepsilon_0(\boldsymbol{\delta}^-)a_0 \otimes a_1\varepsilon_1(\boldsymbol{\zeta}^-)\cdot y,\] where $x$ and $y$ denote either intersection points or Reeb chords, and where $a_i \in A_i$, $i=0,1$. \begin{Rem} This convention tells us that the differential is defined by multiplication of $A_0 \otimes A_1^{\operatorname{op}}$ from the \emph{left}, and is hence a morphism of \emph{right} modules. \end{Rem} The long exact sequences in homology for these bimodules now follow verbatim from the proofs in the case when the augmentation is taken into $\ensuremath{\mathdj{F}}$. It is important to notice that all the complexes above are of finite dimension over $\ensuremath{\mathdj{F}}$. 
More precisely, \[ \dim_\ensuremath{\mathdj{F}} LCC^\bullet(\Lambda^\pm_0,\Lambda^\pm_1)=|\mathcal{R}(\Lambda_1^\pm,\Lambda^\pm_0)| \cdot \dim_\ensuremath{\mathdj{F}} A_0 \cdot \dim_\ensuremath{\mathdj{F}} A_1,\] while \[ \dim_\ensuremath{\mathdj{F}} H_\bullet(X,Y;A_0 \otimes_\ensuremath{\mathdj{F}} (A_1)^{\operatorname{op}})=\dim_\ensuremath{\mathdj{F}} H_\bullet(X,Y;\ensuremath{\mathdj{F}})\dim_\ensuremath{\mathdj{F}}(A_0)\dim_\ensuremath{\mathdj{F}}(A_1)\] holds by the universal coefficients theorem. \begin{Ex} There are examples of Legendrian submanifolds which admit augmentations into finite-dimensional non-commutative unital algebras, but which do not admit any augmentation into any commutative unital algebra. We refer to Part (1) of Example \ref{rem:examplesthatanswernonsymqofcsandt} below for such Legendrian torus knots found by Sivek in \cite{TheContHomofLegKNotswithMAXTBI}, which admit augmentations into the matrix algebra $\mathrm M_{2}(\ensuremath{\mathdj{Z}}_2)$. The second and fourth authors later used these examples to construct many Legendrian submanifolds inside contact spaces $(\ensuremath{\mathdj{R}}^{2n+1},\xi_{\operatorname{std}})$ for arbitrary $n\in\mathbb{N}$ whose Chekanov-Eliashberg algebras admit augmentations into $\mathrm M_{2}(\ensuremath{\mathdj{Z}}_2)$, but not into any commutative algebra. \end{Ex} \subsection{The fundamental class and twisted coefficients} \label{sec:fund-class-twist} In this section we will introduce the fundamental class in the setting of twisted coefficients. We will prove that this class coincides with the fundamental class introduced in \cite{Duality_EkholmetAl}, also in the general setting of twisted coefficients. We will also prove that this class is functorial under exact Lagrangian cobordisms. In Section \ref{sec:proofpi_1carclass} we will use the naturality of this class to prove Theorem \ref{thm:pi_1carclass}. 
In the following we let $\Sigma$ be a connected exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$, where the latter Legendrian submanifolds are connected as well. \subsubsection{The definition of the fundamental class} Recall the map \[G^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma \colon H_\bullet(\Sigma;R) \to LCH^{n-\bullet}_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+,\Lambda^+_1;R)\] in homology constructed in Section \ref{sec:seidel}, whose underlying chain map is defined by a count of punctured strips with boundary on $\Sigma_1 \cup \Sigma$. Here $\Sigma_1$ is an exact Lagrangian cobordism from $\Lambda^-_1$ to $\Lambda^+_1$ obtained by the Hamiltonian push-off of $\Sigma$ as defined in Section \ref{sec:dualitypushoff}. Roughly speaking, $\Sigma_1$ is obtained by a small perturbation using the positive and negative Reeb flows at the positive and negative end of $\Sigma$, respectively. The underlying chain map of $G^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma$ lifts to the corresponding complexes with twisted coefficients. Namely, we define the chain map by the same counts of pseudoholomorphic strips, but where the count takes the homotopy class of the boundary of the strips into account in the manner described above. The lifted map on homology will be denoted by \[\widetilde{G}^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma \colon H_\bullet(\Sigma;R[\pi_1(\Sigma)]) \to LCH^{n-\bullet}_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+,\Lambda^+_1;R[\pi_1(\Sigma)]).\] We will be particularly interested in the restriction to the degree-$0$ part $H_0(\Sigma;R[\pi_1(\Sigma)])$ \[\widetilde{G}^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma \colon H_0(\Sigma;R[\pi_1(\Sigma)]) \to LCH^n_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+,\Lambda^+_1;R[\pi_1(\Sigma)]).\] Observe that this map is linear over $R[\pi_1(\Sigma)]$ or, put differently, it is $\pi_1(\Sigma)$-equivariant. 
Also, we recall that the homology $H_\bullet(\Sigma;R[\pi_1(\Sigma)])$ of the twisted complex computes the homology $H_\bullet(\widetilde{\Sigma};R)$ of the universal cover $\widetilde{\Sigma} \to \Sigma$, and that \begin{equation} \label{eq:pushid} LCH^\bullet_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+,\Lambda^+_1;R[\pi_1(\Sigma)]) \simeq LCH^\bullet_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+;R[\pi_1(\Sigma)]) \end{equation} holds by \cite[Proposition 2.7]{Floer_Conc}. Later we will be particularly interested in the case when $\Lambda^+$ is simply connected and when $\varepsilon^+_i$, $i=0,1$, both take values in $R$. In this situation the universal coefficients theorem gives us an identification \begin{equation} \label{eq:free} LCH^\bullet_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+;R[\pi_1(\Sigma)])=LCH^\bullet_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+;R) \otimes_R R[\pi_1(\Sigma)]. \end{equation} Choosing a generator $m \in H_0(\Sigma;R[\pi_1(\Sigma)])$, the fundamental class induced by $\Sigma$ is defined to be the image \[ \widetilde{c}^{\varepsilon^-_0,\varepsilon^-_1}_{\Sigma,m}:=\widetilde{G}^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma(m) \in LCH^n_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+;R[\pi_1(\Sigma)]),\] where we rely on the identification \eqref{eq:pushid} above. Let $\Lambda^+_1$ be obtained from $\Lambda^+$ by a $C^1$-small perturbation of its image under the time-$\epsilon$ Reeb flow, where $\epsilon>0$ is sufficiently small. We moreover assume that $\Lambda^+_1$ can be identified with the graph $j^1f^+ \subset J^1\Lambda^+$ in a standard contact neighbourhood of $\Lambda^+$, where $f^+ \colon \Lambda^+ \to \ensuremath{\mathdj{R}}$ is a Morse function with a unique local minimum $m^+ \in \Lambda^+$. 
Recall the fundamental class of \cite{Duality_EkholmetAl}, which is defined by the following count of pseudoholomorphic strips having boundary on $\Lambda^+_1 \cup \Lambda^+$: \[\widetilde{c}^{\varepsilon^+_0,\varepsilon^+_1}_{\Lambda^+,m^+}:=\sum_{u\in \widetilde{\mathcal{M}}(\gamma,\boldsymbol{\delta},m^+,\boldsymbol{\zeta})} \text{sign}(u)c^{\varepsilon^+_0,\varepsilon^+_1}_u\gamma \in LCH^n_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+;R[\pi_1(\Sigma)]).\] Here we have again used identification \eqref{eq:pushid} above. In the case when $\varepsilon^+_0=\varepsilon^+_1$ and $\Lambda^+$ is horizontally displaceable, this class has moreover been shown to be non-vanishing; see \cite[Theorem 5.5]{Duality_EkholmetAl}. The following proposition shows that the two definitions of the fundamental class given above in fact coincide. \begin{Prop} \label{prp:fundclasstwisted}Assume that the natural map $$H_0(\Lambda^+;R[\pi_1(\Sigma)]) \to H_0(\Sigma;R[\pi_1(\Sigma)])$$ sends $m^+ \in H_0(\Lambda^+;R[\pi_1(\Sigma)])$ to $m \in H_0(\Sigma;R[\pi_1(\Sigma)])$. For appropriate choices of almost complex structures and Hamiltonian perturbations in the constructions, there is an identification \[\widetilde{c}^{\varepsilon^+_0,\varepsilon^+_1}_{\Lambda^+,m^+} = \widetilde{c}^{\varepsilon^-_0,\varepsilon^-_1}_{\Sigma,m} \in LCH^n_{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+;R[\pi_1(\Sigma)])\] of fundamental classes. \end{Prop} In Section \ref{sec:proofpi_1carclass} we will see that Theorem \ref{thm:pi_1carclass} is a direct consequence of the above proposition. Its proof is postponed until Section \ref{sec:proof-prop-refprp:f}, and it relies on studying a specific push-off of $\Sigma$ to be described in the subsequent subsection. 
\subsubsection{A more careful version of the push-off defined in Section \ref{sec:dualitypushoff}} \label{sec:carefulpushoff} In order to facilitate the proof of Proposition \ref{prp:fundclasstwisted} we will choose a push-off $\Sigma_1$ of $\Sigma$ of a very special form, where $\Sigma_1$ will be an exact Lagrangian cobordism from $\Lambda^-_1$ to $\Lambda^+_1$. We will suppose that $\Sigma$ is cylindrical outside of the subset $[-T,T] \times P \times \ensuremath{\mathdj{R}}$. First we need to construct a Weinstein neighbourhood of $\Sigma$ of a particular form. Begin by fixing contact-form preserving contactomorphisms $\phi^\pm \colon U^\pm \to J^1\Lambda^\pm$ identifying a neighbourhood $U^\pm \subset P \times \ensuremath{\mathdj{R}}$ of $\Lambda^\pm$ with a neighbourhood $\phi^\pm(U^\pm)$ of $\Lambda^\pm \subset (J^1\Lambda^\pm,dz-\theta_{\Lambda^\pm})$, where $\phi^\pm(\Lambda^\pm)=0_{\Lambda^\pm} \subset J^1\Lambda^\pm$ is the zero-section. We also get induced exact symplectomorphisms \[(\operatorname{Id}_\ensuremath{\mathdj{R}},\phi^\pm) \colon \ensuremath{\mathdj{R}} \times U^\pm \to \ensuremath{\mathdj{R}} \times \phi^\pm(U^\pm) \subset \ensuremath{\mathdj{R}} \times J^1\Lambda^\pm.\] Pre-composing $(\operatorname{Id}_\ensuremath{\mathdj{R}},(\phi^\pm)^{-1})$ with the (non-exact) symplectomorphism \begin{gather*} \psi^\pm \colon (T^*(I_\pm \times \Lambda^\pm),d(p_i dq_i)) \to (\ensuremath{\mathdj{R}} \times J^1(\Lambda^\pm),d(e^t(dz-y_idx_i))),\\ ((q,\mathbf{q}),(p,\mathbf{p})) \mapsto (\log{q},(\mathbf{q},-\mathbf{p}/q,-p)), \end{gather*} where $I_+ = [e^T,+\infty)$ and $I_- = (0,e^{-T}]$, we get an induced Weinstein neighbourhood of the cylindrical ends of $\Sigma$ parametrised by $(\operatorname{Id}_\ensuremath{\mathdj{R}},(\phi^\pm)^{-1}) \circ \psi^\pm$. 
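For the reader's convenience, here is the (routine) verification that $\psi^\pm$ intertwines the two symplectic forms; we use the notation of the formula above, with implicit summation over the indices $i$ belonging to the $\Lambda^\pm$-factor. Since $e^t=q$, $dz=-dp$ and $y_i\,dx_i=-(p_i/q)\,dq_i$ under $\psi^\pm$, we compute \[ (\psi^\pm)^*\big(e^t(dz-y_i\,dx_i)\big)=q\Big(-dp+\frac{p_i}{q}\,dq_i\Big)=-q\,dp+p_i\,dq_i,\] and hence \[ (\psi^\pm)^*\,d\big(e^t(dz-y_i\,dx_i)\big)=d(-q\,dp+p_i\,dq_i)=dp\wedge dq+dp_i\wedge dq_i=d(p\,dq+p_i\,dq_i),\] which is the canonical symplectic form on $T^*(I_\pm \times \Lambda^\pm)$. Note, however, that the primitives themselves differ by $-q\,dp-p\,dq=-d(pq)$; in particular the primitive one-form is not preserved by $\psi^\pm$.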
Fix a proper open embedding \[ I_- \times \Lambda^- \: \sqcup \: I_+ \times \Lambda^+ \hookrightarrow \Sigma.\] An adaptation of the proof of Weinstein's Lagrangian neighbourhood theorem (see e.g.~\cite{McDuff_&_Introduction_Symplectic}) shows that we can construct a symplectic identification \[ \Psi \colon (D^*_\delta \Sigma,d(p_idq_i)) \hookrightarrow (\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}},d(e^t(dz+\theta)))\] of a co-disc bundle of some small radius $\delta>0$, for which the zero-section is identified with $\Psi(0_\Sigma)=\Sigma \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$, and whose restriction to \[(D_\delta^*(I_- \times \Lambda^- \sqcup I_+ \times \Lambda^+),d(p_idq_i))\subset (D^*_\delta \Sigma,d(p_idq_i))\] coincides with the symplectomorphisms $(\operatorname{Id}_\ensuremath{\mathdj{R}},(\phi^\pm)^{-1}) \circ \psi^\pm$ constructed above. For the definition of the above disc bundle we will pick a metric on the cotangent bundle $T^*\Sigma$ which is induced by a Riemannian metric on $\Sigma$ of the form $dq \otimes dq + q^{-1}g_\pm$ on the ends $I_\pm \times \Lambda^\pm$, where $g_\pm$ denote Riemannian metrics on $\Lambda^\pm$. We are now ready to describe the construction of the Hamiltonian push-off $\Sigma_1$ of $\Sigma$, which is done by performing the push-off in Section \ref{sec:dualitypushoff} while taking extra care. First, we choose a $C^1$-small push-off $\Sigma_1'$ of $\Sigma$ by applying the time-$\epsilon$ flow of the Hamiltonian vector-field $e^{-t}h'_{\Lambda}(t)\partial_z$ defined in Section \ref{sec:dualitypushoff}. Here $\epsilon>0$ is chosen smaller than the length of the shortest Reeb chord on either of the two ends $\Lambda^\pm$. (Recall that this flow coincides with the positive and negative Reeb flow at the positive and negative end, respectively.) 
Second, we perform a non-compact perturbation of the cylindrical ends of $\Sigma_1'$ induced by cylindrically extending a perturbation of the Legendrian ends (where this perturbation has been appropriately cut off). Finally, we perform a compactly supported Hamiltonian perturbation of $\Sigma_1'$ yielding the sought exact Lagrangian cobordism $\Sigma_1$. These perturbations will moreover be performed so that the following properties are satisfied. \begin{enumerate} \item Under the above identifications of standard contact neighbourhoods of $\Lambda^\pm$, we require that $\phi^\pm(\Lambda^\pm_1)=\pm j^1f^\pm \subset J^1\Lambda^\pm$ for positive Morse functions $f^\pm \colon \Lambda^\pm \to (0,\epsilon]$. We moreover require that $f^+$ has a \emph{unique} local minimum $m^+$. (This is possible since $\Lambda^+$ is connected by assumption.) \item Under the above identification of a Weinstein neighbourhood of $\Sigma$, we require that $\Psi^{-1}(\Sigma_1)=-dF \subset T^*\Sigma$ for a Morse function $F \colon \Sigma \to \ensuremath{\mathdj{R}}$. We moreover require that $F$ has a \emph{unique} local minimum $m$ (note the sign!). (This is possible since $\Sigma$ is connected and since $F$ increases as $|t|\to +\infty$ along either of the cylindrical ends.) \item Above $I_- \times \Lambda^-$, the function $F$ is required to restrict to a function of the form \[ F|_{I_- \times \Lambda^-}=-qf^-,\] where $q$ denotes the standard coordinate on $I_-=(0,e^{-T}] \subset \ensuremath{\mathdj{R}}$.\label{carefulpushoff:3} \item \label{carefulpushoff:4} Above $I_+ \times \Lambda^+ \subset \Sigma$, the function $F$ restricts to a function of the form \[ F|_{I_+ \times \Lambda^+}=qf^+ + C(q)\] by construction, where $q$ is the standard coordinate on $I_+ =[e^T,+\infty) \subset \ensuremath{\mathdj{R}}$ and $C(q)$ is constant outside of a compact set. 
We moreover require that $C \colon \ensuremath{\mathdj{R}} \to \ensuremath{\mathdj{R}}$ satisfies: \begin{enumerate} \item $C(q) = -2(\max_{\Lambda^+}f^+)\cdot q< \min_{I_- \times \Lambda^-} F <0 $ holds near $q=e^T$; \label{carefulpushoff:4a} \item $C(q) \equiv 0$ for all $q \gg 0$ sufficiently large; \item $C'(q)$ is non-decreasing; \item $C'(e^{T'})=-f^+(m^+)$, i.e.~$(e^{T'},m^+) \in I_+ \times \Lambda^+$ is a critical point being a local minimum for some $T'>T$; and \item $C''(e^{T'}) > 0$, i.e.~this critical point is a non-degenerate local minimum. \end{enumerate} Note that, in particular, we require that the unique local minimum of $F$ is given by the above critical point $m=(e^{T'},m^+) \in I_+ \times \Lambda^+ \subset \Sigma$. Also, see Figure \ref{fig:uniquestrip}. \end{enumerate} In order to see that we indeed can find the above function $F$, we observe that a function defined on the subsets $I_- \times \Lambda^- \cup I_+ \times \Lambda^+ \subset \Sigma$ satisfying the above requirements \eqref{carefulpushoff:3} and \eqref{carefulpushoff:4} can be extended to a Morse function $F \colon \Sigma \to \ensuremath{\mathdj{R}}$ without introducing additional local minima, as follows by considering property \eqref{carefulpushoff:4a}. \begin{figure}[htp] \centering \labellist \pinlabel $f^+(m^+)$ at 55 60 \pinlabel $f^+(m^+)-2\max_{\Lambda^+}f^+$ at -20 5 \pinlabel $q$ at 185 35 \pinlabel $f^+(m^+)+C'(q)$ at 140 70 \pinlabel $e^T$ at 60 25 \pinlabel $e^{T'}$ at 100 25 \endlabellist \includegraphics{uniquestrip} \caption{The graph of the differential $\partial_qF$ along $[e^T,+\infty) \times \{m^+\} \subset I_+ \times \Lambda^+$. Observe that $F|_{\{q\} \times \Lambda^+}$ has a non-degenerate local minimum at $(q,m^+)$ for each $q \in I_+$.} \label{fig:uniquestrip} \end{figure} \begin{Lem}\label{lem:m} Let $\Sigma_1$ be the Hamiltonian push-off of $\Sigma$ described by all the previous conditions. 
For a suitable choice of almost complex structure $J$ there is a \emph{unique} and transversely cut out $J$-holomorphic disc having boundary on $\Sigma \cup \Sigma_1$ and a single positive puncture asymptotic to $\gamma_{m^+}$. This disc is moreover a rigid strip having precisely two punctures, where the second puncture maps to the intersection point $m =(e^{T'},m_+) \in \Sigma \cap \Sigma_1$. \end{Lem} \begin{proof} We will choose an admissible almost complex structure $J$ on $\ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$ which, on the subset $[T,+\infty) \times P \times \ensuremath{\mathdj{R}}$, will be the cylindrical lift of a compatible almost complex structure $J_P$ on $(P,d\theta)$ as described in Section \ref{sec:cyllift}. First we show that a punctured disc as in the assumption cannot pass through the hypersurface $\{t = T \}$ for a suitable choice of almost complex structure $J$. Namely, consider a neck-stretching limit around this hypersurface, and observe that \[\{t = T\} \cap (\Sigma \cup \Sigma_1)=\ensuremath{\mathdj{R}} \times (\Lambda^+ \cup \phi^{-2\max_{\Lambda^+}f^+}(\Lambda^+_1))\] holds by property \eqref{carefulpushoff:4a} above. (Here $\phi^s(t,x,z)=(t,x,z+s)$ denotes the Reeb flow as usual.) By action reasons it thus follows that no such strip can pass through this hypersurface for a sufficiently stretched almost complex structure, since otherwise we would get a component of negative energy in the SFT limit. Observe that stretching the neck in this setting can be equivalently performed by fixing the almost complex structure $J$, but while changing the boundary condition in a way so that $T' \gg 0$ above becomes arbitrarily large. We may hence assume that any disc as in the assumption is contained in $\{t \ge T\}$. 
The canonical $(J,J_P)$-holomorphic projection $[T,+\infty) \times P \times \ensuremath{\mathdj{R}} \to P$ maps $\{ t \ge T \} \cap (\Sigma \cup \Sigma_1)$ to the Lagrangian projection $\Pi_{\operatorname{Lag}}(\Lambda^+ \cup \Lambda^+_1) \subset (P,d\theta)$. The fact that the positive puncture of the projection of the disc in the assumption is mapped to $m^+ \in \Pi_{\operatorname{Lag}}(\Lambda^+) \cap \Pi_{\operatorname{Lag}}(\Lambda^+_1)$ implies that this disc must have a constant projection to $P$. In other words, the disc is contained inside the $J$-holomorphic plane $[T,+\infty) \times \{m^+\} \times \ensuremath{\mathdj{R}} \subset \ensuremath{\mathdj{R}} \times P \times \ensuremath{\mathdj{R}}$. Finally, it can be checked by hand that there exists a unique $J$-holomorphic strip contained in the above plane $[T,+\infty) \times \{m^+\} \times \ensuremath{\mathdj{R}}$ having boundary on $\Sigma \cup \Sigma_1$; see Figure \ref{fig:uniquestrip} for a picture. This strip is transversely cut out by the explicit calculation made in \cite[Lemma 8.2]{LiftingPseudoholomorphic}. (This argument is similar to the proof of \cite[Theorem 2.15]{Floer_Conc}.) \end{proof} \subsubsection{The proof of Proposition \ref{prp:fundclasstwisted}} \label{sec:proof-prop-refprp:f} We now proceed with the proof of Proposition \ref{prp:fundclasstwisted}. \begin{proof}[Proof of Proposition \ref{prp:fundclasstwisted}] Let $l_\gamma$ denote the coefficient of the Reeb chord generator $\gamma$ of the fundamental class $c^{\varepsilon_0^+,\varepsilon_1^+}_{\Lambda^+,m^+}$ (using the canonical basis of the Reeb chord generators). Recall that this coefficient is given by the count of rigid punctured strips inside the moduli spaces of the form $\widetilde{\mathcal{M}}(\gamma;\boldsymbol{\delta},m^+,\boldsymbol{\zeta})$, where each strip is counted with the weight $\varepsilon^+_0(\boldsymbol{\delta})\varepsilon^+_1(\boldsymbol{\zeta})$. 
Consider punctured Floer strips with boundary on $\Sigma_0 \cup \Sigma_1$ having precisely two positive punctures asymptotic to Reeb chords: one being the Reeb chord $\gamma_{m^+}$ from $\Lambda^+_0$ to $\Lambda^+_1$ corresponding to the minimum $m^+$ of the Morse function $f^+$, and one corresponding to the above Reeb chord $\gamma$ from $\Lambda^+_1$ to $\Lambda^+_0$. By the non-negativity of the Fredholm index for a generic almost complex structure, together with the positivity of the energy, the compactification of this moduli space a priori consists of pseudoholomorphic buildings of the following form: \begin{enumerate} \item Pseudoholomorphic buildings with: \begin{itemize} \item A top level consisting of a single punctured strip with boundary on $\ensuremath{\mathdj{R}} \times \Lambda^+_0 \cup \ensuremath{\mathdj{R}} \times \Lambda^+_1$ (which hence is rigid up to translation); \item A middle level consisting of punctured half-planes of index zero having boundary on $\Sigma_i$, $i=0,1$. \end{itemize} \item Pseudoholomorphic buildings with: \begin{itemize} \item A top level consisting of one punctured strip of index one having boundary on $\ensuremath{\mathdj{R}} \times \Lambda^+_0 \cup \ensuremath{\mathdj{R}} \times \Lambda^+_1$ (which hence is rigid up to translation) together with a trivial strip over a Reeb chord; \item A middle level consisting of a single punctured strip of index zero having boundary on $\Sigma_0 \cup \Sigma_1$. \end{itemize} \item Pseudoholomorphic buildings with: \begin{itemize} \item A middle level consisting of two punctured strips of index zero having boundary on $\Sigma_0 \cup \Sigma_1$; \item A bottom level consisting of a single punctured strip with boundary on $\ensuremath{\mathdj{R}} \times \Lambda^-_0 \cup \ensuremath{\mathdj{R}} \times \Lambda^-_1$ which is of index one (and hence rigid up to translation). 
\end{itemize} \item Pseudoholomorphic buildings with: \begin{itemize} \item A middle level consisting of a single punctured strip of index zero having boundary on $\Sigma_0 \cup \Sigma_1$; \item A bottom level consisting of a single punctured half-plane of index one with boundary on $\ensuremath{\mathdj{R}} \times \Lambda^-_i$, $i=0,1$ (which hence is rigid up to translation), together with additional trivial strips over Reeb chords. \end{itemize} \item A broken punctured strip having boundary on $\Sigma_0 \cup \Sigma_1$. \end{enumerate} See Figure \ref{fig:fundclassbreaking} for a schematic picture of the above pseudoholomorphic buildings. A gluing argument implies that the configurations in (1) are in bijection with the configurations contributing to the above coefficient $l_\gamma$ in front of $\gamma$ of the fundamental class. Furthermore, the count of the configurations in (5) gives the coefficients of $\widetilde{G}^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma$ by Lemma \ref{lem:m}. We proceed to infer that the signed count of all buildings of types (2)--(4) is equal to the coefficient of $\gamma$ in the expression $d_{\varepsilon^+_0,\varepsilon^+_1}\circ b^{\Sigma_1,\Sigma}(m_+)$, from which the sought equality on the level of homology now follows. (As usual, all counts above are weighted by the augmentations $\varepsilon^-_i$, $i=0,1$.) (2): There are two cases: either the non-trivial strip in the top level has a positive puncture asymptotic to $m^+$, or it has a positive puncture asymptotic to $\gamma$. The former case can be excluded by action reasons, while the count of the latter configurations corresponds exactly to the coefficient in front of $\gamma$ of the boundary $d_{\varepsilon^+_0,\varepsilon^+_1}\circ b^{\Sigma_1,\Sigma}(m_+)$. (3): There are no buildings of this type. 
Namely, by Lemma \ref{lem:m}, we may assume there are no punctured pseudoholomorphic strips with boundary on $\Sigma_0 \cup \Sigma_1$ having a positive asymptotic to the minimum $m^+$ and a negative asymptotic to a Reeb chord from $\Lambda^-_0$ to $\Lambda^-_1$. (4): The sum of these contributions vanishes, as follows from the fact that $\varepsilon^-_i$, $i=0,1$, vanishes on any boundary of the Chekanov-Eliashberg algebra of $\Lambda^-_i$ (see \ref{II} in Section \ref{sec:Cthulhu-complex}). Recall that the latter differential is defined by a count of punctured pseudoholomorphic half-planes of index one having boundary on $\ensuremath{\mathdj{R}} \times \Lambda^-_i$. \begin{figure}[ht!] \centering \vspace{0.5cm} \labellist \pinlabel (1) at 50 185 \pinlabel (2) at 136 185 \pinlabel (3) at 223 185 \pinlabel (4) at 315 185 \pinlabel (5) at 400 185 \pinlabel $1$ at 24 160 \pinlabel $1$ at 112 160 \pinlabel $0$ at 112 104 \pinlabel $0$ at 160 160 \pinlabel $0$ at 200 160 \pinlabel $0$ at 248 160 \pinlabel $0$ at 200 104 \pinlabel $0$ at 248 104 \pinlabel $1$ at 200 48 \pinlabel $0$ at 288 160 \pinlabel $0$ at 288 104 \pinlabel $0$ at 337 160 \pinlabel $1$ at 327 48 \pinlabel $0$ at 376 160 \pinlabel $0$ at 376 104 \pinlabel $0$ at 425 160 \pinlabel $0$ at 425 104 \pinlabel $\Lambda^+\cup\textcolor{red}{\Lambda^+_1}$ at -34 140 \pinlabel $\Sigma\cup\textcolor{red}{\Sigma_1}$ at -34 83 \pinlabel $\Lambda^-\cup\textcolor{red}{\Lambda^-_1}$ at -34 30 \endlabellist \includegraphics[scale=0.65]{fundclassbreaking} \caption{The pseudoholomorphic buildings (1)-(5) described in the proof of Proposition \ref{prp:fundclasstwisted}. The number on each component denotes its Fredholm index.} \label{fig:fundclassbreaking} \end{figure} \end{proof} Note that the construction allows us to give a proof of the fact (already pointed out in \cite{Ekhoka}) that the fundamental class is functorial with respect to exact Lagrangian cobordisms. 
Indeed, stretching the neck in the slice $\{t=-T\}$ decomposes the map $G^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma$ into $\Phi_\Sigma^{\varepsilon_0^-,\varepsilon_1^-}\circ G^{\varepsilon^-_0,\varepsilon^-_1}_{\mathbb{R}\times\Lambda^-}$, where $\Phi_\Sigma$ is the DGA morphism induced by the cobordism. Alternatively, one can also use the long exact sequence produced by Theorem \ref{thm:lesmayer-vietoris} together with Proposition \ref{prp:fundclasstwisted} in order to deduce this. In either case, we have: \begin{Thm}\cite[Theorem 7.7]{Ekhoka} \label{prp:fundclassfunct} Let $\Sigma$ be a connected exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$, and let $\varepsilon^-_i$, $i=0,1$, be augmentations of the Chekanov-Eliashberg algebra of $\Lambda^-$ which pull back to augmentations $\varepsilon^+_i$ under the DGA morphism $\Phi_\Sigma$ induced by $\Sigma$. It follows that \[\Phi^{\varepsilon^-_0,\varepsilon^-_1}_\Sigma(c^{\varepsilon^-_0,\varepsilon^-_1}_{\Lambda^-,m^-})=c^{\varepsilon^+_0,\varepsilon^+_1}_{\Lambda^+,m^+},\] i.e.~the fundamental class is preserved under the bilinearised dual of the DGA morphism induced by $\Sigma$, under the additional assumption that the images of $m^\pm$ under the natural maps $H_0(\Lambda^\pm,\ensuremath{\mathdj{F}}) \to H_0(\Sigma)$ agree. \end{Thm} \subsection{A brief introduction to homology with $L^2$-coefficients.} \label{sec:l2} We use the technology of $L^2$-Betti numbers, introduced by Atiyah in \cite{AtiyahL2}, as a tool to study rank properties of Legendrian contact cohomology when the coefficient ring is a group ring $\mathbb{C}[\pi]$ for a group $\pi$. In the following, $\pi$ denotes the fundamental group of the cobordism. Observe that in this case $\pi$ is countable. The main idea is to replace $\mathbb{C}[\pi]$, which is not a priori a Noetherian ring, with a more manageable module. 
Namely, we consider its $L^2$-completion $\ell^2(\pi)$ defined by the set of functions $f\colon\thinspace \pi\rightarrow\mathbb{C}$ satisfying $\sum_{g\in\pi}|f(g)|^2<\infty$, endowed with its natural structure of a Hilbert space. We do not intend to give a comprehensive introduction to the subject of $L^2$-ranks and refer the reader to the book of L{\"u}ck~\cite{L2book} and the introductory paper of Eckmann~\cite{L2intro} as the main references for the results used, but we nevertheless give a brief overview of the techniques that we need. The main result that we will need is a version of the snake lemma for $L^2$-cohomology, due to Cheeger and Gromov \cite{CheeGro} (also, see \cite[Theorem 1.21]{L2book}), which applies since all our complexes are finitely generated as $\ell^2(\pi)$-modules. A \textit{Hilbert $\pi$-module} $V$ is a Hilbert space on which $\pi$ acts by isometries. It is said to be of finite type if it can be realised as a closed subspace of $\ell^2(\pi)\otimes_{\mathbb{C}}\mathbb{C}^m$ for some $m\in \mathbb{N}$. Morphisms of Hilbert $\pi$-modules are bounded linear maps which are $\pi$-equivariant. Given an endomorphism $f\colon\thinspace V\rightarrow V$ of a Hilbert $\pi$-module of finite type, we define its \textit{von Neumann trace} by \begin{equation} \operatorname{tr}_{L^2}(f):=\sum_{i=1}^m\langle \overline{f}(1\otimes e_i),1\otimes e_i \rangle. \label{eq:34} \end{equation} Here $\overline{f}:=i\circ f\circ p$, $i\colon V \rightarrow \ell^2(\pi)\otimes_{\mathbb{C}}\mathbb{C}^m$ is the inclusion, $p\colon\ell^2(\pi)\otimes_{\mathbb{C}}\mathbb{C}^m\rightarrow V$ is the orthogonal projection, and $\{e_i\}$ is the standard basis of $\mathbb{C}^m$. A simple computation shows that this trace only depends on $f$ and not on the particular choice of the embedding.
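The following elementary computation is standard and included only for illustration (it is not used in the sequel): for $V=\ell^2(\pi)$ itself, the von Neumann trace of right multiplication by a group-ring element reads off the coefficient of the identity. Here $\delta_g \in \ell^2(\pi)$ denotes the characteristic function of $g \in \pi$.

```latex
% A standard illustration (not from the text above): take V = \ell^2(\pi), m = 1.
% Right multiplication R_x by x = \sum_{g} a_g g \in \mathbb{C}[\pi] commutes with
% the left \pi-action, so it is a morphism of Hilbert \pi-modules, and the trace
% formula gives
\[
\operatorname{tr}_{L^2}(R_x)
  = \langle R_x(\delta_e),\delta_e \rangle
  = \Bigl\langle \sum_{g\in\pi} a_g\,\delta_g,\ \delta_e \Bigr\rangle
  = a_e.
\]
% In particular, \operatorname{rk}_{L^2}(\ell^2(\pi)) = \operatorname{tr}_{L^2}(\operatorname{Id}) = 1.
% When \pi is finite one has \ell^2(\pi) = \mathbb{C}[\pi], and the von Neumann
% dimension of a Hilbert \pi-module V equals \dim_{\mathbb{C}}(V)/|\pi|, so
% non-integer values already occur in this case.
```

This trace is well known to be faithful, which is the source of the first property in the basic lemma below.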
The \textit{von Neumann dimension} of $V$ is $\operatorname{rk}_{L^2}(V)=\operatorname{tr}_{L^2}\operatorname{Id}$, which clearly is a non-negative number bounded from above by $m$, under the assumption that $V$ can be embedded in $\ell^2(\pi)\otimes_{\mathbb{C}}\mathbb{C}^m$. Note that the von Neumann dimension can take non-integer values. The following basic properties will be crucial: \begin{Lem}[Theorem 1.12 in \cite{L2book}] \label{lem:l2basics} \begin{enumerate} \item $V=0$ if and only if $\operatorname{rk}_{L^2}(V)=0$; and \item If $0\rightarrow U\xrightarrow{i} V\xrightarrow{p} W\rightarrow 0$ is weakly exact, i.e.~$\overline{\operatorname{im} i}=\ker p$, then $\operatorname{rk}_{L^2}(V)=\operatorname{rk}_{L^2}(U)+\operatorname{rk}_{L^2}(W)$. \end{enumerate} \end{Lem} We will be interested in applying this theory to a complex $(\overline{C}_\bullet,\overline{\partial})$ which is the $L^2$-completion of a $G$-equivariant complex $(C_\bullet,\partial)$ consisting of finitely generated free $\ensuremath{\mathdj{C}}[G]$-modules. In this case, the $L^2$-modules are all of finite type. The corresponding $L^2$-homology will be denoted by $H^{(2)}_\bullet(C_\bullet,\partial)$, where we note that $H^{(2)}_i(C_\bullet,\partial)$ is defined as the quotient of the subspace of cycles by the \emph{closure} of the subspace of boundaries. It follows that $H^{(2)}_\bullet(C_\bullet,\partial)$ again is a $G$-equivariant $L^2$-module of finite type. \begin{Lem} \label{lem:l2ranks} In the above situation, we have \[\operatorname{rk}_{L^2} H^{(2)}_i (C_\bullet,\partial) \le \operatorname{rk}_{\ensuremath{\mathdj{C}}[G]} C_i.
\] Furthermore, for a finite-dimensional complex $(C'_\bullet,\partial')$ over $\ensuremath{\mathdj{C}}$, we have \[H^{(2)}_i(C'_\bullet \otimes \ensuremath{\mathdj{C}}[G] ,\partial' \otimes \operatorname{Id}_{\ensuremath{\mathdj{C}}[G]})=H_i (C'_\bullet,\partial') \otimes_\ensuremath{\mathdj{C}} \ell^2(G),\] and thus, in particular, \[\operatorname{rk}_{L^2} H^{(2)}_i (C'_\bullet \otimes \ensuremath{\mathdj{C}}[G] ,\partial' \otimes \operatorname{Id}_{\ensuremath{\mathdj{C}}[G]}) = \dim_\ensuremath{\mathdj{C}} H_i (C'_\bullet,\partial'). \] \end{Lem} \begin{proof} The first statement follows from Lemma \ref{lem:l2basics} together with the Hodge decomposition in \cite[Lemma 1.18]{L2book}. The second statement follows by a direct computation. \end{proof} For a pair of CW complexes $(X,Y)$, $Y \subset X$ and a choice of homomorphism $\varphi \colon\thinspace \pi_1(X) \to G$, there is an induced covering $(\widetilde{X},\widetilde{Y}) \to (X,Y)$ with fibre $G$, monodromy described by $\varphi$, and where there is a natural free $G$-action on the covering. In the case when $(C_\bullet,\partial)$ is the $G$-equivariant cellular complex associated to such a covering, we will write the corresponding $L^2$-homology groups by $H^{(2)}_\bullet(X,Y;\varphi)$ or, by abuse of notation, $H^{(2)}_\bullet(X,Y;G)$. \subsection{Estimating the first $L^2$-Betti number of a tower} Suppose that $\overline{\Sigma}$ is a compact $(n+1)$-dimensional manifold with boundary $\partial \overline{\Sigma}=\partial_+ \overline{\Sigma} \sqcup \partial_- \overline{\Sigma}$, such that $\partial_\pm \overline{\Sigma} \cong \Lambda$ both are simply connected. Let $\overline{\Sigma}^{\odot k}$ be the quotient of $\sqcup_{i=1}^k \overline{\Sigma}_i$, $\overline{\Sigma}_i \cong \overline{\Sigma}$, which identifies $\partial_+(\overline{\Sigma}) \subset \overline{\Sigma}_i$ with $\partial_-(\overline{\Sigma}) \subset \overline{\Sigma}_{i+1}$. 
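Before estimating the homology of these towers, we pause to record the standard model computation for the circle; it is not taken from the text above, but it illustrates both why the closure of the boundaries enters the definition of $H^{(2)}_\bullet$ and the vanishing of $H^{(2)}_0$ for infinite groups that is invoked in the proof of Lemma \ref{lem:rankgrowth} below.

```latex
% Standard model computation (for illustration only): X = S^1 with one 0-cell and
% one 1-cell, universal cover \mathbb{R}, and \pi = \mathbb{Z} = \langle t \rangle.
% The \mathbb{Z}-equivariant cellular differential is multiplication by (t-1), and
% its L^2-completion is
\[
\ell^2(\mathbb{Z}) \xrightarrow{\ t-1\ } \ell^2(\mathbb{Z}).
\]
% Under the Fourier identification \ell^2(\mathbb{Z}) \cong L^2(S^1) this operator
% becomes multiplication by (z-1): it is injective, and its image is dense but not
% closed. Consequently
\[
H^{(2)}_1(S^1;\mathbb{Z}) = \ker(t-1) = 0,
\qquad
H^{(2)}_0(S^1;\mathbb{Z}) = \ell^2(\mathbb{Z})\big/\overline{\operatorname{im}(t-1)} = 0,
\]
% so both L^2-Betti numbers of the circle vanish, although b_0(S^1) = b_1(S^1) = 1.
```

Note that the algebraic quotient by the non-closed image would be nonzero here, which is why weak exactness, as in Lemma \ref{lem:l2basics}, is the appropriate notion in this setting.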
We will write $\partial \overline{\Sigma}^{\odot k}=\partial_-\overline{\Sigma}^{\odot k} \sqcup \partial_+\overline{\Sigma}^{\odot k}$, where $\partial_-\overline{\Sigma}^{\odot k} = \partial_- \overline{\Sigma}_1$, $\partial_+\overline{\Sigma}^{\odot k} = \partial_+ \overline{\Sigma}_k$. Further, consider the covering space $\widetilde{\Sigma}^{\odot k} \to \overline{\Sigma}^{\odot k}$ obtained by gluing the boundary of the universal cover $\sqcup_{i=1}^k \widetilde{\Sigma}_i \to \sqcup_{i=1}^k \overline{\Sigma}_i$ via the identification of the induced cover \[ \widetilde{\Sigma}_i \supset \sqcup_{g \in \pi_1(\overline{\Sigma})} \partial_+(\overline{\Sigma})\to \partial_+(\overline{\Sigma}) \subset \overline{\Sigma}_i\] with the induced cover \[ \widetilde{\Sigma}_{i+1} \supset \sqcup_{g \in \pi_1(\overline{\Sigma})} \partial_-(\overline{\Sigma})\to \partial_-(\overline{\Sigma}) \subset \overline{\Sigma}_{i+1}.\] Observe that the covering $\widetilde{\Sigma}^{\odot k} \to \overline{\Sigma}^{\odot k}$ obtained is induced by a group epimorphism \[ \pi_1(\overline{\Sigma}^{\odot k}) \simeq \underbrace{\pi * \hdots * \pi}_{k} \to \pi.\] \begin{Lem} \label{lem:rankgrowth} In the case when $\pi$ is finite, we have \[ \dim_\ensuremath{\mathdj{F}}(H_1(\widetilde{\Sigma}^{\odot k},\partial_-{\widetilde{\Sigma}^{\odot k}};\ensuremath{\mathdj{F}})) = (|\pi|-1)k, \:\:k \ge 1.\] If $\pi:=\pi_1(\overline{\Sigma})$ is infinite, the $L^2$-homology $H^{(2)}_1(\overline{\Sigma}^{\odot k},\partial_-{\overline{\Sigma}^{\odot k}};\pi)$ with coefficients twisted by the above covering $\widetilde{\Sigma}^{\odot k} \to \overline{\Sigma}^{\odot k}$ satisfies \[ \operatorname{rk}_{L^2}(H^{(2)}_1(\overline{\Sigma}^{\odot k},\partial_-{\overline{\Sigma}^{\odot k}};\pi)) \ge k.\] \end{Lem} \begin{proof} The first statement is proven by induction on $k$, using a standard comparison of dimensions obtained via the Mayer-Vietoris long exact sequence \begin{eqnarray*} \lefteqn{\hdots \to 
H_1(\partial_-{\widetilde{\Sigma}};\ensuremath{\mathdj{F}}) \to } \\ & \to & H_1(\widetilde{\Sigma};\ensuremath{\mathdj{F}}) \oplus H_1(\widetilde{\Sigma}^{\odot (k-1)},\partial_-{\widetilde{\Sigma}^{\odot (k-1)}};\ensuremath{\mathdj{F}}) \to\\ & \to & H_1(\widetilde{\Sigma}^{\odot k},\partial_-{\widetilde{\Sigma}^{\odot k}};\ensuremath{\mathdj{F}}) \\ & \to & H_0(\partial_-{\widetilde{\Sigma}};\ensuremath{\mathdj{F}}) \to \hdots. \end{eqnarray*} We now show the statement concerning the $L^2$-ranks, which follows by analogous computations. Lemma \ref{lem:l2ranks} implies that $$ H^{(2)}_0(\partial_\pm \overline{\Sigma};\pi)=\ell^2(\pi) \:\: \text{and} \:\: H^{(2)}_1(\partial_\pm \overline{\Sigma};\pi)=0,$$ since $\partial_\pm \overline{\Sigma}$ are simply connected. Observe that we also have \[H^{(2)}_0(\overline{\Sigma}^{\odot k};\pi)=0, \:\:k \geq 1,\] as follows from \cite[Theorem 1.35(8)]{L2book}, using the fact that $\pi$ is infinite. The (weak) long exact sequence of a pair \cite[Theorem 1.21]{L2book} immediately implies the base case \[\operatorname{rk}_{L^2}H^{(2)}_1(\overline{\Sigma}^{\odot 1},\partial_-{\overline{\Sigma}^{\odot 1}};\pi) =\operatorname{rk}_{L^2}H^{(2)}_1(\overline{\Sigma},\partial_-{\overline{\Sigma}};\pi)\ge 1\] as well as the vanishing \[H^{(2)}_0(\overline{\Sigma}^{\odot k},\partial_-{\overline{\Sigma}^{\odot k}};\pi)=0.\] The Mayer-Vietoris long (weakly) exact sequence \begin{eqnarray*} \lefteqn{\hdots \to H^{(2)}_1(\partial_-{\overline{\Sigma}};\pi) \to } \\ & \to & H^{(2)}_1(\overline{\Sigma};\pi) \oplus H^{(2)}_1(\overline{\Sigma}^{\odot (k-1)},\partial_-{\overline{\Sigma}^{\odot (k-1)}};\pi) \to\\ & \to & H^{(2)}_1(\overline{\Sigma}^{\odot k},\partial_-{\overline{\Sigma}^{\odot k}};\pi) \\ & \to & H^{(2)}_0(\partial_-{\overline{\Sigma}};\pi) \to 0 \to \hdots, \end{eqnarray*} together with $H^{(2)}_1(\partial_-{\overline{\Sigma}};\pi)=0$ and \cite[Theorem 1.12(2)]{L2book} gives that \begin{eqnarray*}
\lefteqn{\operatorname{rk}_{L^2} H^{(2)}_1(\overline{\Sigma}^{\odot k},\partial_-{\overline{\Sigma}^{\odot k}};\pi) \ge } \\ & \ge & \operatorname{rk}_{L^2} H^{(2)}_1(\overline{\Sigma}^{\odot (k-1)},\partial_-{\overline{\Sigma}^{\odot (k-1)}};\pi)+\operatorname{rk}_{L^2} H^{(2)}_0(\partial_-{\overline{\Sigma}};\pi). \end{eqnarray*} Since $\operatorname{rk}_{L^2} H^{(2)}_0(\partial_-{\overline{\Sigma}};\pi)=1$, the claim now follows by induction. \end{proof} \section{Applications and examples} \label{sec:examples} In this section we deduce all applications mentioned in the introduction of the paper. In addition, we provide explicit examples of Lagrangian cobordisms: both examples to which our results apply, and examples showing the importance of the different hypotheses used. \subsection{The homology of an endocobordism} \label{sec:topol-endoc} The following proofs of Theorems~\ref{homrigidityold} and \ref{thm:homologycylinder} are similar to the proofs given in \cite{Rigidityofendo}. \begin{proof}[Proof of Theorem \ref{homrigidityold}] We begin by showing the result in the case when $\ensuremath{\mathdj{F}}=\mathbb{Z}_2$. (i): First, recall the elementary fact from algebraic topology that \begin{align}\label{ineqgoeintheformnat} \dim_{\ensuremath{\mathdj{F}}} H(\Sigma;\mathbb F) \geq \dim_{\ensuremath{\mathdj{F}}} H(\Lambda;\mathbb F) \end{align} is satisfied, which follows by studying the long exact sequence of the pair $(\overline{\Sigma},\partial\overline{\Sigma})$ together with Poincar\'{e} duality (see \cite[Lemma 2.1]{Rigidityofendo}). We proceed to prove the opposite inequality $\dim_{\ensuremath{\mathdj{F}}} H(\Sigma;\mathbb F)\leq\dim_{\ensuremath{\mathdj{F}}} H(\Lambda;\mathbb F)$. The linearised Legendrian contact cohomology satisfies the bound \[ \dim_\ensuremath{\mathdj{F}} LCH_{\varepsilon'}(\Lambda) \le |\mathcal{R}(\Lambda)|\] for any $\varepsilon'$.
Thus we can fix an augmentation $\varepsilon$ of $\mathcal{A}(\Lambda;\ensuremath{\mathdj{F}})$ satisfying \begin{align}\label{maxdeffff} \dim_{\ensuremath{\mathdj{F}}} LCH_{\varepsilon}(\Lambda;\mathbb F) = \max_{\varepsilon'}\{\dim_{\ensuremath{\mathdj{F}}} LCH_{\varepsilon'}(\Lambda;\mathbb F)\}. \end{align} The exact triangle in Theorem~\ref{thm:lesmayer-vietoris} gives us \begin{eqnarray*} \lefteqn{\dim_{\ensuremath{\mathdj{F}}} LCH_{\varepsilon_+}(\Lambda;\mathbb F) \ge} \\ & \ge & \dim_{\ensuremath{\mathdj{F}}} LCH_{\varepsilon}(\Lambda;\mathbb F)+\dim_\ensuremath{\mathdj{F}} H(\Sigma;\mathbb{F})-\dim_\ensuremath{\mathdj{F}} H(\Lambda;\mathbb{F}) \end{eqnarray*} where $\varepsilon_+$ is the augmentation of $\mathcal{A}(\Lambda;\ensuremath{\mathdj{F}})$ obtained as the pull-back $\varepsilon_+ :=\varepsilon\circ \Phi_\Sigma$. Formula \eqref{maxdeffff} implies that $\dim_\ensuremath{\mathdj{F}} H(\Sigma;\mathbb{F})-\dim_\ensuremath{\mathdj{F}} H(\Lambda;\mathbb{F})\leq 0$. Together with inequality \eqref{ineqgoeintheformnat}, we obtain the equality \begin{align}\label{equalityofdim} \dim_{\ensuremath{\mathdj{F}}} H(\Sigma;\mathbb F)=\dim_{\ensuremath{\mathdj{F}}} H(\Lambda;\mathbb F) \end{align} of the dimensions of the total homologies. In order to show that $\dim_{\ensuremath{\mathdj{F}}} H_i(\Sigma;\mathbb F)=\dim_{\ensuremath{\mathdj{F}}} H_i(\Lambda;\mathbb F)$ for all $i$, we argue by contradiction, assuming that \[d_{i_0}(\Sigma):=\dim_{\ensuremath{\mathdj{F}}} H_{i_0}(\Sigma;\ensuremath{\mathdj{F}})-\dim_{\ensuremath{\mathdj{F}}} H_{i_0}(\Lambda;\ensuremath{\mathdj{F}})>0\] for some $i_0$. By the Mayer-Vietoris sequence we conclude that the inequality \[\dim_\ensuremath{\mathdj{F}} H_{i_0}(\Sigma \odot \Sigma;\ensuremath{\mathdj{F}}) \ge 2\dim_\ensuremath{\mathdj{F}} H_{i_0}(\Sigma;\mathbb F)-\dim_\ensuremath{\mathdj{F}} H_{i_0}(\Lambda;\ensuremath{\mathdj{F}})\] holds.
In particular, \[d_{i_0}(\Sigma \odot \Sigma) := \dim_\ensuremath{\mathdj{F}} H_{i_0}(\Sigma \odot \Sigma;\ensuremath{\mathdj{F}}) -\dim_{\ensuremath{\mathdj{F}}} H_{i_0}(\Lambda;\ensuremath{\mathdj{F}}) \ge 2d_{i_0}(\Sigma),\] which by induction leads to a contradiction with equality \eqref{equalityofdim}. (ii): The argument is the same as the one in the proof of \cite[Theorem 1.6 (ii)]{Rigidityofendo}, and follows from Part (i) applied to the concatenation $\Sigma \odot \Sigma$. Namely, the Mayer-Vietoris sequence for the concatenation $\Sigma \odot \Sigma$ seen as two copies of $\overline{\Sigma}$ glued along the boundary component $\Lambda$ shows that \[ \dim_\ensuremath{\mathdj{F}} H(\Sigma \odot \Sigma;\ensuremath{\mathdj{F}}) \ge 2 \dim_\ensuremath{\mathdj{F}} H(\Sigma;\ensuremath{\mathdj{F}}) - \dim_\ensuremath{\mathdj{F}} \operatorname{im}(i^-_*,i^+_*) \] and by the above result, we conclude that \[\dim_\ensuremath{\mathdj{F}} \operatorname{im}(i^-_*,i^+_*) =\dim_\ensuremath{\mathdj{F}} H(\Sigma;\ensuremath{\mathdj{F}})=\dim_\ensuremath{\mathdj{F}} H(\Lambda;\ensuremath{\mathdj{F}}),\] from which the claim follows. (iii): By contradiction, we assume that $i^+_* \oplus i^-_* \colon H(\Lambda \sqcup \Lambda) \to H(\Sigma)$ is not a surjection. Considering a representative $V \subset H(\Sigma)$ of the cokernel of this map, which hence is of dimension $\dim_\ensuremath{\mathdj{F}} V > 0$, the Mayer-Vietoris long exact sequence implies that the image of $V \oplus V$ under the map \[ H(\Sigma) \oplus H(\Sigma) \to H(\Sigma \odot \Sigma) \] has dimension $2 \dim_\ensuremath{\mathdj{F}} V>0$.
Moreover, $V \oplus V$ can again be seen not to be contained in the image of \[i^+_* \oplus -i^-_* \colon H(\Lambda \sqcup \Lambda) \to H(\Sigma \odot \Sigma).\] Namely, the above inclusion factorises through the canonical maps as \[i^+_* \oplus -i^-_* \colon H(\Lambda \sqcup \Lambda) \to H(\Sigma \sqcup \Sigma) \to H(\Sigma \odot \Sigma),\] where the latter morphism is the one from the above Mayer-Vietoris long exact sequence. In conclusion, the cokernel of \[i^+_* \oplus i^-_* \colon H(\Lambda \sqcup \Lambda) \to H(\Sigma \odot \Sigma)\] is of dimension at least $2 \dim_\ensuremath{\mathdj{F}} V$. Arguing by induction, we now arrive at the sought contradiction with Part (i) above. The proof is now complete in the case $\ensuremath{\mathdj{F}}=\ensuremath{\mathdj{Z}}_2$. Under the additional assumptions that $\Lambda$ is spin and admits an augmentation over an arbitrary field $\ensuremath{\mathdj{F}}$, Corollary \ref{cor:spin-orient} of Theorem \ref{thm:w_ivanish} implies that any endocobordism of $\Lambda$ is spin as well. This allows us to repeat the previous argument with coefficients in the field $\ensuremath{\mathdj{F}}$. Note that Theorem \ref{thm:w_ivanish} relies on Theorem \ref{homrigidityold}, which we established above in the needed case $\ensuremath{\mathdj{F}}=\ensuremath{\mathdj{Z}}_2$. \end{proof} We now prove the following theorem, of which Theorem \ref{thm:homologycylinder} is an immediate corollary. Observe that the following result can also be proved by appealing to Theorem \ref{homrigidityold}. \begin{Thm}\label{homrigidity} Let $\Lambda$ be a Legendrian homology sphere inside a contactisation, $\Sigma$ be an exact Lagrangian cobordism from $\Lambda$ to itself inside the symplectisation, and $\ensuremath{\mathdj{F}}$ a field. If $\mathcal{A}(\Lambda;\ensuremath{\mathdj{F}})$ admits an augmentation, then $H_\bullet(\Sigma,\Lambda;\ensuremath{\mathdj{F}})=0$, i.e.~$\Sigma$ is an $\ensuremath{\mathdj{F}}$-homology cylinder.
\end{Thm} \begin{proof} Let $\Sigma^{\odot k}$, $k \ge 1$, be the $k$-fold concatenation of $\Sigma$ with itself, which again is an exact Lagrangian cobordism from $\Lambda$ to $\Lambda$. Since $\Lambda$ is a homology sphere, it is spin and, hence, $\Sigma^{\odot k}$ is spin for all $k \ge 1$ by Corollary \ref{cor:spin-orient}. We fix an augmentation $\varepsilon$ of $\mathcal{A}(\Lambda;\ensuremath{\mathdj{F}})$ and let $\varepsilon_k$ be the augmentation of $\mathcal{A}(\Lambda;\ensuremath{\mathdj{F}})$ obtained by the pull-back of $\varepsilon$ under the unital DGA morphism induced by $\Sigma^{\odot k}$. The (ungraded version of the) long exact sequence in Theorem \ref{thm:lespair} becomes \begin{equation} \label{eq:13} \xymatrix{ LCH^\bullet_{\varepsilon}(\Lambda) \ar[rr] & & LCH^\bullet_{\varepsilon_k}(\Lambda)\ar[dl] \\ & H_\bullet(\overline{\Sigma}^{\odot k},\partial_{-}\overline{\Sigma}^{\odot k};\ensuremath{\mathdj{F}}) \ar[ul] & } \end{equation} Observe that \[\dim H_i(\overline{\Sigma}^{\odot k},\partial_{-}\overline{\Sigma}^{\odot k};\ensuremath{\mathdj{F}})=\begin{cases} 0, & i=0,n+1, \\ k\dim H_i(\overline{\Sigma},\partial_{-}\overline{\Sigma};\ensuremath{\mathdj{F}}), & 0<i<n+1, \end{cases} \] as follows from the Mayer-Vietoris long exact sequence together with the assumption that $\Lambda$ is an $\ensuremath{\mathdj{F}}$-homology sphere. Since the linearised contact cohomology satisfies the bound \[ \dim_\ensuremath{\mathdj{F}} LCH_{\varepsilon'}(\Lambda) \le |\mathcal{R}(\Lambda)|\] for any $\varepsilon'$, we get the inequality \[ k\dim H_i(\overline{\Sigma},\partial_{-}\overline{\Sigma};\ensuremath{\mathdj{F}}) = \dim H_i(\overline{\Sigma}^{\odot k},\partial_{-}\overline{\Sigma}^{\odot k};\ensuremath{\mathdj{F}}) \le 2 |\mathcal{R}(\Lambda)|, \:\: 0<i<n+1,\] for each $k$, where the exactness of the above triangle has been used to show the last inequality.
In conclusion, we have established \[ \dim H_i(\overline{\Sigma},\partial_{-}\overline{\Sigma};\ensuremath{\mathdj{F}}) = 0, \:\: 0<i<n+1,\] which finishes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:homologycylinder}] Since $\Lambda$ is assumed to have an augmentation over $\mathbb{Z}$, it admits an augmentation over $\mathbb{Q}$ as well. It thus follows from Theorem \ref{homrigidity} that $H_\bullet(\Sigma,\Lambda;\mathbb{Q})=0$, and hence that $H_\bullet(\Sigma,\Lambda;\mathbb{Z})$ is torsion. The augmentation over $\mathbb{Z}$ also induces an augmentation over any finite field, and thus Theorem \ref{homrigidity} implies that $H_\bullet(\Sigma,\Lambda;\mathbb{Z})$ has no $p$-torsion for any prime $p$. Thus $H_\bullet(\Sigma,\Lambda;\mathbb{Z})=0$. \end{proof} \begin{Rem} Following the discussion in Section \ref{sec:augm-finite-dimens}, we see that Theorem \ref{thm:homologycylinder} holds under the weaker assumption that the Chekanov-Eliashberg algebra admits a non-commutative augmentation in a finite-dimensional $\ensuremath{\mathdj{F}}$-algebra. (The proof is a verbatim repetition of the preceding one.) \end{Rem} \subsection{Characteristic classes of endocobordisms} \label{sec:nonorientable} Recall from Section \ref{sec:remarks-about-grad} that Theorem \ref{homrigidityold} still applies in the case when the cobordism $\Sigma$ is not orientable and of Maslov number one, i.e.~when the Cthulhu complexes involving $\Sigma$ necessarily are \emph{ungraded}. In this case, we obtain exact triangles instead of long exact sequences. Assume now that we are given a chord-generic \emph{orientable} Legendrian submanifold $\Lambda\subset P\times \ensuremath{\mathdj{R}}$ whose Chekanov-Eliashberg algebra admits an augmentation over $\ensuremath{\mathdj{Z}}_2$ (or, more generally, a linear $m$-dimensional representation over $\ensuremath{\mathdj{Z}}_2$). The dual statement of Part (iii) of Theorem \ref{homrigidityold} reads as follows.
Let $\Sigma$ be an exact Lagrangian endocobordism of $\Lambda$. The map $(i_+^*,i_-^*):H^*(\Sigma,\mathbb{Z}_2)\rightarrow H^*(\Lambda\sqcup \Lambda,\mathbb{Z}_2)$ is injective. Theorem \ref{thm:w_ivanish} is an immediate corollary of this and of the naturality of characteristic classes. In turn, it implies the following: \begin{Cor}\label{cor:spin-orient} If $\Lambda$ is orientable (respectively, spin) and admits an augmentation into a finite-dimensional algebra, then any exact endocobordism $\Sigma$ of $\Lambda$ is orientable (respectively, spin) as well. \end{Cor} This result can be seen as a generalisation of the result of Capovilla-Searle and Traynor, see \cite[Theorem 1.2]{NonorLagcobbetLegknots}. The proof of Theorem \ref{thm:w_ivanish} for Pontryagin classes follows similarly assuming that $\Lambda$ is spin. \begin{figure}[ht!] \centering \includegraphics[height=4cm]{tp-g} \caption{The front projection of $\Lambda_{(5,-8)}$.} \label{fig:tp-g} \end{figure} \begin{Ex} Recall that a Legendrian knot in the standard contact $\ensuremath{\mathdj{R}}^3$ for which the Kauffman bound on $\operatorname{tb}$ is not sharp does not admit an augmentation in a commutative ring \cite{KauffmanBound}. \begin{enumerate}\label{rem:examplesthatanswernonsymqofcsandt} \item Consider a family of Legendrian representatives of the torus $(p,-q)$-knots $\Lambda_{(p,-q)}\subset \ensuremath{\mathdj{R}}^3$ with $q>p\geq 3$ and $p$ odd; see Figure~\ref{fig:tp-g}. Following Sivek \cite{TheContHomofLegKNotswithMAXTBI}, we observe that $\operatorname{tb}(\Lambda_{(p,-q)})=-pq$ and, hence, from the classification result of Etnyre and Honda \cite{Etnyre_&_Knots_Contact1} it follows that $\Lambda_{(p,-q)}$ is $\operatorname{tb}$-maximising.
Recall that Sivek \cite{TheContHomofLegKNotswithMAXTBI} proved that the Chekanov-Eliashberg algebra of $\Lambda_{(p,-q)}$ admits a $2$-dimensional representation over $\ensuremath{\mathdj{Z}}_2$, while the Kauffman bound on $\operatorname{tb}$ is not sharp for these knots. Therefore, these Legendrian knots do not admit non-orientable exact Lagrangian endocobordisms. \item Consider $\Lambda_{(p,-q)}\#\Lambda$, where $p$ is odd, $q>p\geq 3$, and let $\Lambda$ be a $\operatorname{tb}$-maximising Legendrian knot of $\ensuremath{\mathdj{R}}^3$ whose Chekanov-Eliashberg algebra admits an augmentation (or, more generally, $m$-dimensional linear representation) over $\ensuremath{\mathdj{Z}}_2$. Then, following the discussion in \cite[Lemma 4.3]{EstimNumbrReebChordLinReprCharAlg}, we see that the Kauffman bound for $\Lambda_{(p,-q)}\#\Lambda$ is not sharp and that the Chekanov-Eliashberg algebra of $\Lambda_{(p,-q)}\#\Lambda$ admits a finite-dimensional linear representation over $\ensuremath{\mathdj{Z}}_2$. In addition, from the fact that $\Lambda_{(p,-q)}$ and $\Lambda$ are $\operatorname{tb}$-maximising, together with \cite[Corollary 3.5]{Etnyre_&_Connected_Sums} (or \cite[Theorem 1.1]{AdditivityofTBofLegknots}), it follows that $\Lambda_{(p,-q)}\#\Lambda$ is also $\operatorname{tb}$-maximising. This leads us to many other examples, besides $\Lambda_{(p,-q)}$, which do not admit non-orientable exact Lagrangian endocobordisms. \item There is also an example due to Sivek, see \cite[Sections 2.2 and 3]{TheContHomofLegKNotswithMAXTBI}, of a $\operatorname{tb}$-maximising knot with non-sharp Kauffman bound on $\operatorname{tb}$, whose Chekanov-Eliashberg algebra does not admit a finite-dimensional linear representation over $\ensuremath{\mathdj{Z}}_2$. \end{enumerate} \end{Ex} \begin{Rem} The above examples provide a negative answer to a question of Capovilla-Searle and Traynor, see \cite[Question 6.1]{NonorLagcobbetLegknots}.
\end{Rem} \subsection{Restrictions on the fundamental group of an endocobordism between simply connected Legendrians} \label{sec:fond-group-some} We now prove the results concerning the fundamental groups of endocobordisms between simply connected Legendrian submanifolds. \subsubsection{Proof of Theorem \ref{thm:pi_1carclass}} \label{sec:proofpi_1carclass} \begin{proof}[Proof of Theorem \ref{thm:pi_1carclass}] Recall the construction of the fundamental class in the setting of twisted coefficients carried out in Section \ref{sec:fund-class-twist}. The proof will be a straightforward consequence of Proposition \ref{prp:fundclasstwisted} therein. From the assumptions of the theorem, the Legendrian submanifold $\Lambda^+$ has a unique augmentation. It follows that \cite[Theorem 5.5]{Duality_EkholmetAl} can be applied, and hence the fundamental class $\widetilde{c}^{\varepsilon^+_0,\varepsilon^+_1}_{\Lambda^+,m^+}$ is non-vanishing. By Proposition \ref{prp:fundclasstwisted} we, moreover, conclude that this fundamental class is the image of a generator $m$ of $H_0(\Sigma;R[\pi_1(\Sigma)])$ under the map $\widetilde{G}^{\varepsilon^-,\varepsilon^-}_\Sigma.$ Since $\Lambda^+$ is simply connected by assumption, it follows from \eqref{eq:free} above that this image is not torsion. In particular $$g\cdot \widetilde{c}^{\varepsilon^+_0,\varepsilon^+_1}_{\Lambda^+,m^+}\not=\widetilde{c}^{\varepsilon^+_0,\varepsilon^+_1}_{\Lambda^+,m^+},\:\: \forall g\in\pi_1(\Sigma).$$ Thus, $m$ is not torsion either, and since it generates $H_0(\Sigma;R[\pi_1(\Sigma)])$ we conclude that $H_0(\Sigma;R[\pi_1(\Sigma)])=R[\pi_1(\Sigma)]$. However, since $\widetilde{\Sigma}$ is connected, we know that $H_0(\Sigma;R[\pi_1(\Sigma)])=H_0(\widetilde{\Sigma})=R$. In other words, $\pi_1(\Sigma)$ is the trivial group, as sought. 
\end{proof} \subsubsection{Proof of Theorem \ref{thm:l2rigidity}} \label{sec:proofl2rigidity} \begin{proof}[Proof of Theorem \ref{thm:l2rigidity}] Here it will be crucial to use the machinery of $L^2$-coefficients as described in Section \ref{sec:l2}. We will let $\Sigma^{\odot k}$, $k \ge 1$, denote the $k$-fold concatenation of the cobordism $\Sigma$ from $\Lambda$ to $\Lambda$. Since $\Lambda$ is spin by assumption, it follows from Corollary \ref{cor:spin-orient} that the cobordisms $\Sigma^{\odot k}$ are spin for all $k \ge 1$. We also consider the cover $p \colon \widetilde{\Sigma}^{\odot k}\to \overline{\Sigma}^{\odot k}$ as constructed in the previous section. First, we argue that the claim follows from the fact $|\pi_1(\Sigma)|<\infty$, which will be shown below. Indeed, under these assumptions, the version of the long exact sequence in Theorem \ref{thm:lespair} applied to the system of local coefficients induced by the above covering (see Section \ref{sec:twisted}), becomes \begin{equation} \label{eq:13pi1} \xymatrix{ LCH^\bullet_{\varepsilon}(\Lambda;\ensuremath{\mathdj{C}}[\pi_1(\Sigma)]) \ar[rr] & & LCH^\bullet_{\varepsilon_k}(\Lambda;\ensuremath{\mathdj{C}}[\pi_1(\Sigma)])\ar[dl] \\ & H_\bullet(\widetilde{\Sigma}^{\odot k},\partial_-\widetilde{\Sigma}^{\odot k};\ensuremath{\mathdj{C}}) \ar[ul] & } \end{equation} Here the augmentation $\varepsilon_k$ is the pull-back of the augmentation $\varepsilon$ under the unital DGA morphism induced by $\Sigma^{\odot k}$ and the covering. Observe that $\varepsilon_k$ takes values in $\ensuremath{\mathdj{C}}[\pi_1(\Sigma)]$. 
Unless $|\pi_1(\Sigma)|=1$, for $k \gg 0$ sufficiently large the equality \[ \dim_\ensuremath{\mathdj{C}}(H_1(\widetilde{\Sigma}^{\odot k},\partial_-{\widetilde{\Sigma}^{\odot k}};\ensuremath{\mathdj{C}})) = (|\pi_1(\Sigma)|-1)k, \:\:k \ge 1,\] established in Lemma \ref{lem:rankgrowth}, together with the universal bound \[\dim_\ensuremath{\mathdj{C}} LCH^\bullet_{\varepsilon_0,\varepsilon_1}(\Lambda;\ensuremath{\mathdj{C}}[\pi_1(\Sigma)]) \le |\pi_1(\Sigma)||\mathcal{R}(\Lambda)|\] gives a contradiction. It remains to show that $|\pi_1(\Sigma)|$ is finite. Assuming the contrary, we use the (weak) long exact sequence obtained from (\ref{eq:13pi1}) by taking the $L^2$-completions of the above $\ensuremath{\mathdj{C}}[\pi_1(\Sigma)]$-equivariant complexes (since the complexes are \emph{freely and finitely generated} we can again apply Cheeger and Gromov's result in \cite{CheeGro}), establishing the exact triangle \[ \xymatrix{ LCH^{(2) \bullet}_{\varepsilon}(\Lambda) \ar[rr] & & LCH^{(2) \bullet}_{\varepsilon_k}(\Lambda)\ar[dl] \\ & H^{(2)}_\bullet(\overline{\Sigma}^{\odot k},\partial_-\overline{\Sigma}^{\odot k};\pi_1(\Sigma)) \ar[ul] & } \] The inequality \[ \operatorname{rk}_{L^2}(H^{(2)}_1(\overline{\Sigma}^{\odot k},\partial_-{\overline{\Sigma}^{\odot k}};\pi_1(\Sigma))) \ge k, \] as shown in Lemma \ref{lem:rankgrowth}, together with the universal bound \[\operatorname{rk}_{L^2} LCH^{(2) \bullet}_{\varepsilon'}(\Lambda) \le |\mathcal{R}(\Lambda)|, \] which follows by Lemma \ref{lem:l2ranks}, finally gives the sought contradiction, from which it follows that $\pi_1(\Sigma)$ is finite. \end{proof} \subsection{Explicit examples of Lagrangian cobordisms} \label{sec:some-expl-lagr} We start by recalling a few general constructions of Legendrian submanifolds and exact Lagrangian cobordisms. Below these will be used in order to construct explicit examples of Lagrangian cobordisms.
\subsubsection{A Legendrian ambient surgery on the front-spin} \label{sec:frontspin} The front $S^m$-spinning construction described in \cite{NoteSpin} by the fourth author produces a Legendrian embedding $\Sigma_{S^m}\Lambda \subset (\ensuremath{\mathdj{R}}^{2(m+n)+1},\xi_{\operatorname{std}})$ of $S^m \times \Lambda$, given a Legendrian embedding $\Lambda \subset (\ensuremath{\mathdj{R}}^{2n+1},\xi_{\operatorname{std}})$. In the same article, it was also shown that the same construction can be applied to an exact Lagrangian cobordism $\Sigma \subset \ensuremath{\mathdj{R}} \times \ensuremath{\mathdj{R}}^{2n+1}$ from $\Lambda^-$ to $\Lambda^+$ inside the symplectisation, producing an exact Lagrangian cobordism $\Sigma_{S^m}\Sigma \subset \ensuremath{\mathdj{R}} \times \ensuremath{\mathdj{R}}^{2(n+m)+1}$ from $\Sigma_{S^m}\Lambda^-$ to $\Sigma_{S^m}\Lambda^+$ that is diffeomorphic to $S^m \times \Sigma$. Consider a Legendrian knot $\Lambda \subset (\ensuremath{\mathdj{R}}^3,\xi_{\operatorname{std}})$. The left-most cusp of the front projection of a generic representative corresponds to a cusp edge diffeomorphic to $S^m$ in the front projection of the front spin $\Sigma_{S^m}\Lambda \subset (\ensuremath{\mathdj{R}}^{2m+3},\xi_{\operatorname{std}})$. Moreover, there is an obvious embedded Legendrian $(m+1)$-disc $D \subset (\ensuremath{\mathdj{R}}^{2m+3},\xi_{\operatorname{std}})$ whose interior is disjoint from $\Sigma_{S^m}\Lambda$ and whose boundary coincides with this cusp edge; see Figure \ref{fig:beforesurgery}. \begin{figure}[ht!]
\centering \labellist \pinlabel $z$ at 33 138 \pinlabel $x_1$ at -2 35 \pinlabel $x_2$ at 96 76 \pinlabel $D$ at 175 77 \pinlabel $\Sigma_{S^1}\Lambda$ at 175 135 \pinlabel $\Lambda$ at 313 77 \endlabellist \includegraphics[height=3.5cm]{beforesurgery} \caption{The front projection of the front spin $\Sigma_{S^1}\Lambda \subset (\ensuremath{\mathdj{R}}^5,\xi_{\operatorname{std}})$ near the left-most cusp of $\Lambda \subset(\ensuremath{\mathdj{R}}^3,\xi_{\operatorname{std}}) $. The corresponding cusp-edge for the front projection of the front spin bounds an obvious embedded Legendrian disc $D$ intersecting $\Sigma_{S^1}\Lambda$ cleanly along this cusp edge.} \label{fig:beforesurgery} \end{figure} A Legendrian ambient $m$-surgery, as described in \cite{LegendrianAmbient} by the second author, can be applied to the sphere $S^m \hookrightarrow \Sigma_{S^m}\Lambda$ corresponding to the cusp edge $\partial D$, utilising the bounding Legendrian disc $D$. The Legendrian submanifold $\Lambda^+ \subset (\ensuremath{\mathdj{R}}^{2(m+n)+1},\xi_{\operatorname{std}})$ resulting from the surgery has the front projection shown in Figure \ref{fig:aftersurgery} in the case of $m=1=\dim \Lambda$. Recall that there also is a corresponding elementary Lagrangian $(m+1)$-handle attachment, which is an exact Lagrangian cobordism from $\Sigma_{S^m}\Lambda$ to the Legendrian submanifold $\Lambda^+$ obtained after the surgery. Topologically, this cobordism is simply the handle attachment corresponding to the surgery. \begin{figure}[ht!] 
\centering \labellist \pinlabel $z$ at 33 138 \pinlabel $x_1$ at -2 35 \pinlabel $x_2$ at 96 76 \pinlabel $\Lambda_+$ at 175 135 \pinlabel $\Lambda$ at 313 77 \endlabellist \includegraphics[height=3.5cm]{aftersurgery} \caption{The front projection of the Legendrian submanifold $\Lambda^+ \subset (\ensuremath{\mathdj{R}}^5,\xi_{\operatorname{std}})$ obtained after a Legendrian ambient surgery on the front spin $\Sigma_{S^1}\Lambda \subset (\ensuremath{\mathdj{R}}^5,\xi_{\operatorname{std}})$, utilising the Legendrian disc $D$ as shown in Figure \ref{fig:beforesurgery}.} \label{fig:aftersurgery} \end{figure} \subsubsection{Non-simply connected exact Lagrangian fillings of Legendrian spheres (the proof of Proposition \ref{prop:example})} Using the constructions in Section \ref{sec:frontspin} above, the sought examples will not be difficult to produce. We start with a Legendrian knot $\Lambda\subset(\mathbb{R}^3,\xi_{\operatorname{std}})$ which admits a non-simply connected Lagrangian filling $\Sigma$. For instance, we can take the Legendrian right-handed trefoil knot and its exact Lagrangian filling by a punctured torus; see \cite{Ekhoka}. It follows that $\Sigma_{S^m}\Lambda \subset (\ensuremath{\mathdj{R}}^{2(m+1)+1},\xi_{\operatorname{std}})$ is a Legendrian $S^m \times S^1$ which admits an exact Lagrangian filling $\Sigma_{S^m}\Sigma$ diffeomorphic to $S^m \times \Sigma$ that is not simply connected. The Legendrian ambient surgery along a cusp-edge in the class $S^m \times \{p\}$ for $p \in \Lambda$ corresponding to the left-most cusp edge of $\Lambda\subset(\mathbb{R}^3,\xi_{\operatorname{std}})$ as described above produces a Legendrian sphere, and concatenating $\Sigma_{S^m}\Sigma$ with the corresponding elementary Lagrangian $(m+1)$-handle attachment provides a non-simply connected exact Lagrangian filling of the resulting Legendrian sphere. These are the sought non-simply connected exact Lagrangian cobordisms.
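That these fillings are not simply connected can also be checked by an elementary computation of fundamental groups; the following remark records this routine sketch (the notation $H^{2}$ for the attached handle is used only here).

\begin{Rem} In the case $m=1$, write $F_2 = \pi_1(\Sigma)$ for the free group on two generators, where $\Sigma$ is the punctured torus. Topologically, the concatenated cobordism is $S^1 \times \Sigma$ with a $2$-handle $H^{2}$ attached along $S^1 \times \{p\}$, so the van Kampen theorem gives \[ \pi_1\big((S^1 \times \Sigma) \cup_{S^1 \times \{p\}} H^{2}\big) \cong (\mathbb{Z} \times F_2)/\langle\!\langle \mathbb{Z} \times \{e\} \rangle\!\rangle \cong F_2 \neq 1. \] For $m \ge 2$ the attached handle has index $m+1 \ge 3$, so the attachment does not affect the fundamental group, and $\pi_1 \cong \pi_1(S^m \times \Sigma) \cong F_2$ holds in these cases as well. \end{Rem}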
\begin{Rem} Theorem \ref{thm:l2rigidity} is applicable to the constructed sphere in order to rule out non-simply connected endocobordisms. In addition, note that since the conclusion of Theorem \ref{thm:pi_1carclass} is not satisfied, the Reeb chords created by the surgery are essential, and the Legendrian sphere admits at least two distinct augmentations. \end{Rem} \subsubsection{Non-invertible Lagrangian concordances} \label{sec:noninv} Here we will prove the following statement. \begin{Prop} \label{prop:noninvertible} In all contact spaces $(\ensuremath{\mathdj{R}}^{2n+1},\xi_{\operatorname{std}})$ with $n \ge 1$ there exists a Legendrian $n$-sphere $\Lambda$ of $\operatorname{tb}=-1$ which is fillable by a Lagrangian disc, but for which there is no Lagrangian concordance to the standard Legendrian sphere $\Lambda_0$ of $\operatorname{tb}=-1$. (Recall that the filling induces a Lagrangian concordance from $\Lambda_0$ to $\Lambda$.) \end{Prop} In \cite{Chantraine_Non_symmetry} the first author proved that the relation of Lagrangian concordance is not symmetric by establishing the above proposition in the case $n=1$. In particular, it was shown that the Legendrian representative $\Lambda_{9_{46}} \subset (\ensuremath{\mathdj{R}}^3,\xi_{\operatorname{std}})$ of the knot $9_{46}$ as depicted in Figure \ref{fig:946} (satisfying $\operatorname{tb}=-1$; this is maximal for this smooth knot class), which is fillable by a Lagrangian disc, is not concordant to the standard Legendrian unknot $\Lambda_0$ of $\operatorname{tb}=-1$. Recall that an exact Lagrangian filling by a disc can be used to construct a concordance $C$ from $\Lambda_0$ to $\Lambda_{9_{46}}$, which was explicitly described in the same article. One such concordance is described in Figure \ref{fig:concordance946} below.
Notice that along the entire concordance the leftmost cusp-edge $p$ is fixed, and so we can assume that the cylinder $C$ coincides with the trivial cylinder $\mathbb{R}\times l$ for a small arc $p \in l \subset \Lambda_{9_{46}}$ inside a neighbourhood of this cusp. This fact will be important below. Using the results in the current article, the non-existence of a concordance from $\Lambda_{9_{46}}$ to $\Lambda_0$ can be reproved by applying Corollary \ref{cor:concobstruction} together with the calculations in \cite{Chantraine_Non_symmetry}. Namely, in the latter article it is shown that, for an appropriate pair $\varepsilon_0,\varepsilon_1$ of augmentations of the Chekanov-Eliashberg algebra of $\Lambda_0$, we have \[ LCH_{-1}^{\varepsilon_0,\varepsilon_1}(\Lambda_{9_{46}}) \neq 0,\] and no concordance going the other way can thus exist by Corollary \ref{cor:concobstruction}. The front spinning construction produces exact Lagrangian concordances $\Sigma_{S^m}C \subset \ensuremath{\mathdj{R}} \times \ensuremath{\mathdj{R}}^{3+2m}$, obtained as the front spin of $C$, from $\Sigma_{S^m}\Lambda_0 \subset (\ensuremath{\mathdj{R}}^{3+2m},\xi_{\operatorname{std}})$ to $\Sigma_{S^m}\Lambda_{9_{46}} \subset (\ensuremath{\mathdj{R}}^{3+2m},\xi_{\operatorname{std}})$. Here, the latter Legendrian submanifolds are the front spins of $\Lambda_0$ and $\Lambda_{9_{46}}$, respectively. In \cite[Section 5]{Floer_Conc} the authors proved using the K\"{u}nneth formula in Floer homology that again \[ LCH_{-1}^{\widetilde{\varepsilon}_0,\widetilde{\varepsilon}_1}(\Sigma_{S^m}\Lambda_{9_{46}}) \neq 0\] holds for a suitable pair of augmentations, which together with Corollary \ref{cor:concobstruction} implies that there is no Lagrangian concordance from $\Sigma_{S^m}\Lambda_{9_{46}}$ to $\Sigma_{S^m}\Lambda_0$. \begin{figure}[ht!] 
\centering \includegraphics[height=2.2cm]{frontlag946_n} \caption{Front (left) and Lagrangian (right) projections of the maximal TB $m(9_{46})$ knot.} \label{fig:946} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=7cm]{nontrivconc_n} \caption{A Lagrangian concordance from $\Lambda_0$ to $\Lambda_{9_{46}}$.} \label{fig:concordance946} \end{figure} Recall that $\Sigma_{S^m}\Lambda_0 \simeq \Sigma_{S^m}\Lambda_{9_{46}} \simeq S^m \times S^1$, while $\Sigma_{S^m}C \simeq \ensuremath{\mathdj{R}} \times S^m \times S^1$. We will now perform an explicit modification of the above example to produce examples of Legendrian \emph{spheres} in all dimensions which admit a concordance \emph{from} the standard sphere, but which do not admit a concordance \emph{to} the standard sphere; this establishes Proposition \ref{prop:noninvertible}. \begin{proof}[Proof of Proposition \ref{prop:noninvertible}] The Legendrian ambient surgery can be performed on the cusp-edge of the front projection of $\Sigma_{S^m}\Lambda_{9_{46}}$ corresponding to the left-most cusp edge $p \in \Lambda_{9_{46}}$. In this way, a Legendrian sphere $\Lambda^+ \subset (\ensuremath{\mathdj{R}}^{2(m+1)+1},\xi_{\operatorname{std}})$ is produced. Since the concordance $C$ may moreover be assumed to be a trivial cylinder over a neighbourhood of $p \in \Lambda_{9_{46}}$ and, hence, so is $\Sigma_{S^m}C$, we obtain a Lagrangian concordance from $\Lambda^-$ to $\Lambda^+$, where $\Lambda^-$ is the Legendrian sphere obtained by performing the corresponding Legendrian ambient surgery on $\Sigma_{S^m}\Lambda_0$. In fact, the latter sphere is the standard Legendrian $(m+1)$-sphere of $\operatorname{tb}=-1$. Recall that the Legendrian ambient surgery also produces an exact Lagrangian handle attachment cobordism from $\Sigma_{S^m}\Lambda_{9_{46}}$ to $\Lambda^+$.
Inspecting the long exact sequence induced by Theorem \ref{thm:lespair}, we immediately conclude that there are augmentations $\varepsilon^+_i$, $i=0,1$ for the Chekanov-Eliashberg algebra of the Legendrian sphere $\Lambda^+$ satisfying \[ LCH_{-1}^{\varepsilon^+_0,\varepsilon^+_1}(\Lambda^+) \simeq LCH_{-1}^{\widetilde{\varepsilon}_0,\widetilde{\varepsilon}_1}(\Sigma_{S^m}\Lambda_{9_{46}}) \neq 0.\] Once again, Corollary \ref{cor:concobstruction} shows that there is no concordance from $\Lambda^+$ to $\Lambda^-$. \end{proof} \bibliographystyle{plain}
\section{Introduction} In \cite{EF} Elvang and Figueras have presented a family of axisymmetric black hole solutions to the vacuum 4+1-dimensional Einstein equations. Due to the specific topology of the event horizon, $\mathbb{R}\times\big( (S^1\times S^2) \cup S^3\big)$, it has been named Black Saturn. It can be regarded as a spherical Myers-Perry black hole \cite{MP} surrounded by a black ring \cite{ER,PS}. The configuration is kept in balance by the angular momenta. The Black Saturn metrics are of great significance since they provide an example of well-behaved stationary black hole space-times with a disconnected Killing horizon. This shows a sharp contrast between solutions to Einstein equations in 4+1 and 3+1 dimensions since, as proven recently \cite{NH1,NH2,NH3,N2K}, in the latter case analytic stationary two-black-hole space-times are nakedly singular. The family of Black Saturn solutions is constructed via the inverse scattering method \cite{BZ}, which introduces 8 real parameters: $a_i$ with $ i = 1, \ldots , 5$ and $c_1, \, c_2, \, k$. Moreover, a ninth one, $q$, is brought in by a change of coordinates to facilitate the asymptotic flatness (see \cite[p. 10, footnote 3]{EF}). The ordering of parameters $a_i$ assumed in \cite{EF} reads \begin{align} \label{aOrd} a_1\le a_5 \le a_4 \le a_3 \le a_2 \end{align} or, in terms of the dimensionless ones $\kappa_i = \frac{a_{i+2}-a_1}{(a_2-a_1)^2}$, \begin{align*} 0 \leq \kappa_3 \leq \kappa_2 \leq \kappa_1 \leq 1. \end{align*} A thorough analysis (see \cite{EF,MY,Sebastian}) has shown that, under the assumption of the parameters $a_i$ being pairwise distinct, the metrics in the Black Saturn family describe asymptotically flat, stably causal black hole space-times with smooth domains of outer communications. To guarantee the desired properties of a well-behaved black hole space-time listed above, one needs to tune the parameters $c_1, \, c_2, \, k$ and $q$ in terms of the $a_i$'s.
The purpose of this work is to investigate the possibility of obtaining a well-behaved metric from the Black Saturn family in the case of coalescence of some of the parameters $a_i$. Such a coalescence corresponds to the ``pole fusion effect'' in the inverse scattering method, which may lead to extremal black-hole solutions (see \cite[Chapter 8.3]{BZ}). There are however various ways (paths in the parameter space) in which one can obtain a coalescence of two or more $a_i$'s. The result will \textit{a priori} depend on the chosen limiting procedure, as demonstrated in \cite{Geroch}\footnote{We thank Sebastian Szybka for pointing out this reference to us.}. For instance, in \cite[Section A.1]{EF} it has been shown that to obtain the limiting case of a Myers-Perry black hole from the balanced Black Saturn configuration one needs to take first $a_5 \nearrow a_4$ and then $a_1 \nearrow a_4$. In general, the assumption that the coalescence is to be considered after the fine tuning of parameters $c_1, \, c_2, \, k, \, q$ already imposes restrictions on the limiting procedure, since, for example, $a_1 \nearrow a_5$ causes $c_1$ to diverge (see \cite[(3.7)]{EF} or \cite[(2.3)]{MY}). Let us note that the parameters $c_1$ and $c_2$ may \textit{a priori} assume infinite values. Indeed, the line element \eqref{SaturnMetric} has a well-defined limit for $c_1 \to \pm\infty$ and/or $c_2 \to \pm\infty$, which moreover commutes with every coalescence considered in this paper. However, these cases need a separate analysis of possible balance conditions. When the balance conditions are imposed on the Black Saturn solution, the areas of the horizons of the two disconnected components \cite[(3.26, 3.27)]{EF} tend to zero in the limits $a_3 \nearrow a_2$ and $a_5 \nearrow a_4$, respectively. This suggests that the possible degenerate solutions are nakedly singular.
However, there is no \textit{a priori} reason for the procedures of coalescence of parameters and imposition of the balance conditions to commute, so this observation does not exclude the possibility of obtaining well-behaved extremal solutions via some other limiting procedure. Let us note that the same coalescence that leads to the vanishing of the horizon area of the black ring component implies the divergence of its temperature \cite[(3.28)]{EF}. This suggests that the limiting procedures adopted in \cite{EF} are not the right ones, as one should expect $T = 0$ for an extremal black-hole solution, since the temperature is proportional to the surface gravity. The strategy we adopt in this paper is to consider the limits $a_i \to a_j$ at the level of metric functions of the full Black Saturn solution and then investigate whether the balance conditions can be fulfilled by a fine tuning of parameters $c_1, \, c_2, \, k$ and $q$. To make the paper self-contained we present in the Appendix the Black Saturn metric of \cite{EF} in generalised Weyl coordinates. For the details of the construction and properties we refer the reader to \cite{EF} and \cite{MY,Sebastian}. \section{Analysis} Since we are interested only in the solutions with two disconnected components of the event horizon (compare with the rod structure \cite[Figure 1]{EF}), we shall assume the strict inequality $a_4 < a_3$ in the ordering \eqref{aOrd}. We have thus 3 possible two-fold coalescences to be considered in the next subsections. Moreover, there are 3 three-fold limits and 1 four-fold limit that need to be investigated. When more than two $a_i$ parameters coalesce, one can consider various paths in the parameter space that lead to the same coalescence. Fortunately, if the limiting procedure is performed at the level of the metric, the ordering of the limits does not play a role (compare \cite[Section A.1]{EF}).
This is because $a_i \nearrow a_j$ implies $\mu_i \nearrow \mu_j$ \eqref{mu} and all of the metric functions (see Appendix) are smooth as functions of the $\mu_i$'s. In each of the subsections we consider a particular coalescence of the parameters $a_i$ while keeping the others distinct. The reason is that the behaviour of the metric functions on the axis ($\rho = 0$) should be studied separately in each region of the axis $a_i \leq z \leq a_j$ (see \cite[Section 5.4]{MY}). This means that each coalescence needs a separate procedure of investigation of the metric functions on the axis. The detailed analysis of the regularity, asymptotic flatness and causality of the seven limiting cases of the Black Saturn solution is straightforward but lengthy: one essentially follows the strategy adopted in \cite{MY}. However, since our analysis shows that in none of the investigated limits can one tune the parameters to obtain a balanced configuration, we shall only present the part of the reasoning that leads to this conclusion. \subsection{\texorpdfstring{$a_1 \nearrow a_5$}{a1 --> a5}}\label{a1a5} Let us note first that if one takes the limit $a_1 \nearrow a_5$, then the resulting metric does not depend on the parameter $c_1$ anymore. Indeed, $\mu_1 = \mu_5$ implies $M_1 = M_3 = M_4 = 0$ (see Appendix), thus the parameter $c_1$ completely drops out of the line element. According to \cite[p. 7]{EF} this configuration would describe a static black ring around an $S^3$ black hole, which are kept apart by a conically singular membrane. Indeed, one can detect the conical singularity by investigating the periodicity of the variable $\varphi$ (compare \cite[Section 4]{MY}).
To avoid a conical singularity at zeros of the Killing vector $\partial_{\varphi}$ one needs the ratio \begin{align*} \lim_{\rho \to 0} \, \frac{\rho^2 g_{\rho \rho}}{g_{\varphi \varphi}} \end{align*} to be constant on the set $\{z < a_1\} \cup \{a_4 < z < a_3\}$, which is an axis of rotation for $\partial_{\varphi}$. By investigating the leading behaviour in $\rho$ of the metric functions $g_{\varphi \varphi}$ and $g_{\rho \rho}$ in the relevant region of the space-time we obtain \begin{align*} \lim_{\rho \to 0} \, \frac{\rho^2 g_{\rho \rho}}{g_{\varphi \varphi}} = \begin{cases} k^2, & \text{for } z < a_1 \\ k^2 \, \frac{ (a_2-a_1) (a_3-a_4)^2}{(a_3-a_1)^2 (a_2-a_4)}, & \text{for } a_4 < z < a_3 \end{cases}. \end{align*} Hence, to avoid conical singularities one would need to have \begin{align*} \frac{ (a_2-a_1) (a_4-a_3)^2}{(a_3-a_1)^2 (a_2-a_4)} = 1, \end{align*} which, viewed as a quadratic equation in $a_4$ with the manifest root $a_4 = a_1$, is equivalent to \begin{align*} a_4 = a_1 && \text{ or } && a_4 = \frac{a_1 a_2 - 2 a_2 a_3 + a_3^2}{a_1 - a_2}. \end{align*} The first case is excluded, whereas the second one would require \begin{align*} \frac{a_1 a_2 - 2 a_2 a_3 + a_3^2}{a_1 - a_2} < a_3, \end{align*} as $a_4 < a_3$ by assumption. The latter, however, implies that either $a_3 < a_1$ or $a_3 > a_2$, which contradicts the ordering \eqref{aOrd}. This means that the conical singularity on the axis cannot be avoided. \subsection{\texorpdfstring{$a_5 \nearrow a_4$}{a5 --> a4}}\label{a5a4} Let us now investigate the coalescence $a_5 \nearrow a_{4}$. We shall start with the analysis of the Killing vector field $\partial_t$ on the set $\{\rho = 0,\, z \leq a_1\}$. A {\sc Mathematica} calculation shows that $g_{tt}$ is a rational function with the denominator given by \begin{align*} & \left(2 (a_3-a_1) (a_2-a_4)+(a_4-a_1) c_1 c_2\right)^2 (z-a_1) (z-a_2) (z-a_4) \;, \end{align*} which vanishes as $z$ approaches $a_1$ from below.
On the other hand, its numerator has the following limit as $z \nearrow a_1$, \begin{align*} (a_2-a_1)^2 (a_3-a_1) (a_4-a_1)^2 \left(2 (a_3-a_1) - c_1^2\right) c_2^2. \end{align*} Hence, we have now two possibilities of tuning the parameters to avoid a naked singularity at $\rho=0$, $z=a_1$: \begin{flalign} 1. \quad & c_1 = \pm \sqrt{2(a_3-a_1)}, \label{cond2} \\ 2. \quad & c_2 = 0. \label{cond3} \end{flalign} Keeping these in mind, we shall investigate the behaviour of the Killing vector field $\partial_t$ on the set $\{\rho = 0,\, a_4 \leq z \leq a_3\}$. The function $g_{tt}$ on this domain is a rational function with the denominator \begin{align*} 2 (a_1 - a_2)^2 (z - a_1) (z - a_2) (z-a_4), \end{align*} vanishing at $z=a_4$. On the other hand, the numerator of $g_{tt}$ at $\rho=0$, $z = a_4$ reads \begin{align*} (a_1 - a_4)^2 (a_2 - a_4)^2 (c_1 - c_2)^2. \end{align*} Thus, there is only one possibility to avoid a naked singularity at $z=a_4$: set $c_1 = c_2$. Combining the results obtained so far, we end up with the following possible fine tunings: \begin{flalign} 1. \quad & c_1 = c_2 = \pm \sqrt{2(a_3-a_1)}, \label{condB1} \\ 2. \quad & c_1 = c_2 = 0. \label{condB2} \end{flalign} The choice $c_1 = c_2 = 0$ would bring us back to the seed solution \cite{EF}, which is nakedly singular, so we are forced to set $c_1 = c_2 = \pm \sqrt{2(a_3-a_1)}$. Let us now analyse the behaviour of the Killing vector field $\partial_{\psi}$ on the set $\{\rho = 0,\, a_1 \leq z \leq a_4\}$. A {\sc Mathematica} calculation shows that $g_{\psi\psi}$ is a rational function with the denominator given by \begin{align*} & -2 \left((a_2 - a_4) c_1 + (a_4-a_1) c_2\right)^2 (z-a_1) (z-a_2) (z-a_4) \;. \end{align*} The singularity at $z = a_1$ is cancelled by the tuning \eqref{condB1} since the numerator of $g_{\psi\psi}$ at $z = a_1$ reads \begin{align*} & -(a_2 - a_1)^2 (a_4 - a_1)^2 (2 (a_3 - a_1) - c_1^2) (2 (a_2 - a_4) + c_2 q)^2 \;.
\end{align*} On the other hand, the denominator of $g_{\psi\psi}$ is singular at $z = a_4$ and the numerator has the following limit for $z \nearrow a_4$, \begin{multline*} 2 (a_4 - a_1)^2 (a_2 - a_4)^2 (a_3 - a_4) (2 (a_2 - a_1) - (c_1 - c_2) q)^2 =\\ = 8 (a_4 - a_1)^2 (a_2 - a_4)^2 (a_3 - a_4) (a_2 - a_1)^2, \end{multline*} which does not vanish. This means that the naked singularity at $\rho = 0$, $z = a_4$ persists regardless of the fine tuning of parameters. We have so far dealt with the situation where the parameters $c_1$ and $c_2$ assume finite values. Let us now turn to the case $c_1 \to \pm \infty$. In this instance $g_{tt}$, being the norm of the Killing vector $\partial_t$, is given in the region $\{\rho = 0,\, z \leq a_1\}$ by the following formula, \begin{align*} -\frac{(a_2-z)(a_3-z)}{(a_1-z)(a_4-z)}. \end{align*} This expression diverges as $z \nearrow a_1$ and the singularity cannot be cancelled by any fine-tuning of the free parameters. For $c_2 \to \pm \infty$ we obtain that $g_{tt}$ on the set $\{\rho = 0,\, a_1 \leq z \leq a_4\}$ is a rational function with the denominator, \begin{align*} 2 (a_4 - a_1)^2 (a_2 - z) (a_4 - z) (z-a_1), \end{align*} vanishing at $z = a_4$. On the other hand, its numerator has the following limit for $z \nearrow a_4$, \begin{align*} 2 (a_4 - a_1)^2 (a_2 - a_4)^2 (a_3 - a_4). \end{align*} We conclude that in this configuration there is a naked singularity at $\rho = 0$, $z = a_4$ that cannot be avoided. For $c_1, c_2 \to \pm \infty$ we have $g_{tt} = - \frac{\mu_2 \mu_3}{\mu_1 \mu_4}$, which is singular on the axis $\{\rho = 0\}$ in the region $a_1 \leq z \leq a_2$. \subsection{\texorpdfstring{$a_3 \nearrow a_2$}{a3 --> a2}}\label{a2a3} Let us now consider the coalescence $a_3 \nearrow a_2$. To rule out smooth non-trivial solutions it is sufficient to investigate the behaviour of the Killing vector field $\partial_t$ in the region $\{\rho = 0,\, a_4 \leq z \leq a_2\}$.
With the help of {\sc Mathematica} we obtain that $g_{tt}$ is a rational function with the denominator given by \begin{align*} & 2 (a_2-a_1)^2 (z-a_1) (a_2-z) (a_5-z), \end{align*} which vanishes as $z$ approaches $a_2$ from below. On the other hand, its numerator has the following limit as $z \nearrow a_2$, \begin{align*} - (a_2 - a_1)^2 (a_2 - a_5)^2 c_2^2. \end{align*} This means that one should impose the condition $c_2 = 0$ to avoid a naked singularity at $z=a_2$. But setting $a_3 = a_2$ and $c_2 = 0$ completely removes the $S^3$ black hole component \cite[Section A.2]{EF} and we are left with a single $\psi$-spinning black ring. In the case $c_1 \to \pm \infty$ the function $g_{tt}$ in the region $\{\rho = 0,\, z \leq a_1\}$ reads, \begin{align*} -\frac{(a_2-z)^2}{(a_1-z)(a_4-z)}. \end{align*} Thus, a naked singularity appears at $z = a_1$. For $c_2 \to \pm \infty$, $g_{tt}$ in the region $\{\rho = 0,\, a_5 \leq z \leq a_4\}$ turns out to also be given by, \begin{align*} -\frac{(a_2-z)^2}{(a_1-z)(a_4-z)}, \end{align*} now leading to a singularity at $z = a_4$. Similarly to the case described in Section \ref{a5a4}, for $c_1, c_2 \to \pm \infty$ we have $g_{tt} = - \frac{\mu_2^2}{\mu_1 \mu_4}$, which becomes singular on the axis $\{\rho = 0\}$ in the whole region $a_1 \leq z \leq a_2$. \subsection{\texorpdfstring{$a_1 \nearrow a_5 \nearrow a_4$}{a1 --> a5 --> a4}} According to \cite[Section A.1]{EF} in this limit the Black Saturn metric reduces to a Myers-Perry black hole with a single angular momentum, hence no further analysis is needed. Let us stress, however, that to obtain this result independently of the order of the limits one needs to compute the limits at the level of the metric functions, before the imposition of the balance conditions.
\subsection{\texorpdfstring{$a_1 \nearrow a_5, \; a_3 \nearrow a_2$}{a1 --> a5, a3 --> a2}} Let us first investigate the behaviour of the Killing vector field $\partial_t$ on the set $\{\rho = 0, \, a_4 \leq z \leq a_2\}$. Again with the help of {\sc Mathematica} we obtain the following formula for the function $g_{tt}$ in this region \begin{align*} \frac{z-a_4}{a_1 - z} + \frac{c_2^2}{2 (a_2 - z)}. \end{align*} We have a naked singularity at $z=a_2$ unless we set $c_2 = 0$. As argued in Section \ref{a2a3} this completely removes the $S^3$ black hole component. What is more, the conical singularity detected in Section \ref{a1a5} persists. Indeed, we have \begin{align*} \lim_{\rho \to 0} \, \frac{\rho^2 g_{\rho \rho}}{g_{\varphi \varphi}} = \begin{cases} k^2, & \text{for } z < a_1 \\ k^2 \, \frac{ a_2-a_4}{a_2-a_1}, & \text{for } a_4 < z < a_2 \end{cases}. \end{align*} Hence, to guarantee the correct periodicity of $\varphi$ we would have to set $a_4 = a_1$, which is excluded by the assumptions of this section. Since the parameter $c_1$ has dropped out of the line element in the coalescence considered in this subsection, we only need to comment on the instance $c_2 \to \pm \infty$. In this case, the function $g_{tt}$ in the region $\{z \leq a_1\}$ behaves near the axis $\{\rho = 0\}$ like \begin{align*} \frac{4 (a_1 - z)(a_2 - z)^2}{(a_4 - z) \rho^2} + \mathcal{O}(\rho^0). \end{align*} This excludes the possibility of $c_2 \to \pm \infty$ leading to a well-behaved space-time. \subsection{\texorpdfstring{$a_5 \nearrow a_4, \; a_3 \nearrow a_2$}{a5 --> a4, a3 --> a2}} It is sufficient to analyse the behaviour of the Killing vector field $\partial_t$ on the axis. In the region $\{\rho = 0,\, z \leq a_1\}$ $g_{tt}$ is a rational function with the denominator given by \begin{align*} & \left(2 (a_2-a_1) (a_2-a_4)+(a_4-a_1) c_1 c_2\right)^2 (z-a_1) (z-a_2) (z-a_4) \;.
\end{align*} As $z \nearrow a_1$ its numerator reads \begin{align*} (a_2-a_1)^3 (a_4-a_1)^2 \left(2 (a_2-a_1) - c_1^2\right) c_2^2. \end{align*} Thus, to avoid a naked singularity at $\rho=0$, $z=a_1$ one has to set \begin{align} \label{c1c2} c_1 = \pm \sqrt{2(a_2-a_1)}, && \text{or} && c_2 = 0. \end{align} Let us now switch to the region $\{\rho = 0,\, a_1 \leq z \leq a_4\}$. A {\sc Mathematica} calculation shows that $g_{tt}$ is a rational function with the denominator equal to \begin{align*} & 2 \left( (a_2 - a_4) c_1 + (a_4 - a_1) c_2 \right)^2 (z-a_1) (z-a_2) (z-a_4) \;. \end{align*} The continuity of $g_{tt}$ at $z=a_1$ is easily verified for both choices of parameters \eqref{c1c2}. On the other hand, as $z$ approaches $a_4$, $g_{tt}$ becomes singular since its numerator at $z=a_4$ reads \begin{align*} 2 (a_1 - a_4)^2 (a_2 - a_4)^3 (c_1 - c_2)^2. \end{align*} To bypass the naked singularity at $\rho=0$, $z=a_4$ we need to set $c_1 = c_2$ in addition to \eqref{c1c2}. Finally, in the region $\{\rho = 0,\, a_4 \leq z \leq a_2\}$ the denominator of $g_{tt}$ is given by \begin{align*} 2 (a_1 - a_2)^2 (z-a_1) (z-a_2) (z-a_4). \end{align*} Again, the continuity of $g_{tt}$ at $z=a_4$ is guaranteed by the tuning of parameters imposed so far. However, the numerator of $g_{tt}$ at $z=a_2$ reads \begin{align*} -(a_1 - a_2)^2 (a_2 - a_4)^2 c_2^2, \end{align*} so the only way to avoid a singularity at $z=a_2$ is to set $c_2=0$. Combining this with the previous results we conclude that to ensure the smoothness of the Killing vector field $\partial_t$ on the axis $\{\rho =0\}$ one needs to set $c_1 = c_2 = 0$. As already argued, this would bring us back to the seed solution \cite{EF}, which is singular itself. It remains to check the possibility of cancelling the singularities by letting one or both of the parameters $c_1, c_2$ go to $\pm \infty$.
As $c_1 \to \pm \infty$ we obtain that $g_{tt}$ in the region $\{\rho = 0, \, z \leq a_1\}$ is given by the expression \begin{align*} -\frac{(a_2-z)^2}{(a_1-z)(a_4-z)}, \end{align*} singular at $z=a_1$. For $c_2 \to \pm \infty$ on the other hand, we obtain the following behaviour of $g_{tt}$ near the axis $\{\rho = 0\}$ in the region $\{a_4 \leq z \leq a_2\}$, \begin{align*} \frac{4 (a_2 - z)^2 (a_4-z)}{(a_1 - z) \rho^2} + \mathcal{O}(\rho^0). \end{align*} Moreover, if we let both $c_1$ and $c_2$ tend to infinity we again obtain $g_{tt} = - \frac{\mu_2^2}{\mu_1 \mu_4}$. We conclude that the Black Saturn solution with $a_5 \nearrow a_4, \; a_3 \nearrow a_2$ and one or both of the $c_i$ parameters infinite is nakedly singular. \subsection{\texorpdfstring{$a_1 \nearrow a_5 \nearrow a_4, \; a_3 \nearrow a_2$}{a1 --> a5 --> a4, a3 --> a2}} As in the previous cases (see Section \ref{a1a5}) the limit $a_1 \nearrow a_5$ implies that the parameter $c_1$ is no longer present in the line element. Furthermore, an investigation of the behaviour of the Killing vector $\partial_t$ on the axis forces us to impose $c_2 = 0$. Indeed, in the region $\{\rho = 0, \, a_1 \leq z \leq a_2\}$ the metric function $g_{tt}$ reads \begin{align*} \frac{2(z - a_2) + c_2^2}{2 (a_2 - z)}, \end{align*} so only $c_2=0$ allows one to avoid a singularity at $z=a_2$. But if $c_1$ drops out of the metric functions and $c_2$ vanishes we are again back at the seed solution \cite{EF}, which is of no physical interest. Moreover, in the case $c_2 \to \pm \infty$ we obtain $g_{tt} = \frac{\mu_2^2}{\rho^2}$, which clearly leads to singularities on the axis. \section{Conclusions} We have investigated various coalescences of the parameters defining the Black Saturn solution. We have shown that either the resulting metric is nakedly singular or it reduces to a black hole with one connected component of the event horizon: a Myers-Perry black hole or an Emparan-Reall black ring.
Led by the example given by Geroch in \cite{Geroch} one might think that there can still be a way of obtaining a meaningful coalescence limit in the Black Saturn family by employing a smart change of coordinate chart. However, as demonstrated in \cite{Geroch}, the Killing vectors are inherited by any limit of a family of space-times depending on parameters. Strictly speaking, this property has been demonstrated in the 3+1-dimensional case. Nevertheless, as the technique developed in \cite[Appendix B]{Geroch} is general, the proof can be adapted in a straightforward way to a 4+1-dimensional space-time with three Killing vectors. Now, since our analysis consisted in uncovering singularities in the norms of Killing vector fields, we conclude that any coordinate transformation would either lead to the same results or not yield a proper limit space-time at all. We have thus exhausted the possibility of constructing a smooth extremal Black Saturn configuration in the family of solutions of Elvang-Figueras. This outcome is consistent with the known properties of $4+1$-dimensional black holes. Both spherical black holes \cite{MP} and black rings \cite{ChruscielCortier,extremal} require two non-vanishing angular momenta to admit smooth extremal configurations. Unfortunately, the Black Saturn solution of Elvang-Figueras has angular momentum in a single plane only and it is not clear if doubly-spinning components can be kept in balance at all \cite{EF}. Thus, the question of the existence of a smooth stationary axisymmetric black hole with disconnected degenerate Killing horizons in $4+1$ dimensions remains open. \section*{Acknowledgements} We would like to thank Piotr T. Chru\'sciel for suggesting this problem to us and for his illuminating remarks. We also thank Patryk Mach and Sebastian Szybka for comments on the manuscript. The main part of the calculations was carried out using {\sc Mathematica} 8.0.4 by Wolfram Research.
Project operated within the Foundation for Polish Science IPP Programme ``Geometry and Topology in Physical Models'' co-financed by the EU European Regional Development Fund, Operational Program Innovative Economy 2007-2013. Partial support of the Polish Ministry of Science and Higher Education under the grant for young scientists and PhD students is acknowledged.
\section{\label{Intro}Introduction} In quantum chromodynamics (QCD), parton distribution and fragmentation functions (PDFs and FFs) are used to describe the long-distance, bound-state nature of hadrons. Historically, PDFs and FFs were taken to depend only on the longitudinal momentum fractions $x$ and $z$, respectively. In the last several decades this has been extended to include transverse momentum dependence, so that the ``unintegrated" transverse-momentum-dependent (TMD) PDFs and FFs are written explicitly as dependent on both the longitudinal momentum and transverse momentum of the partons. Since the framework explicitly includes small transverse momenta, the reevaluation of important principles of QCD like factorization and universality has been necessary. \par In particular, the role of color interactions due to soft gluon exchanges between participants of the hard scattering and remnants of the interaction in collisions involving a hadron has been found to have profound effects regarding these principles. For example, the Sivers function, a particular TMD PDF that correlates the transverse spin of the proton with the orbital angular momentum of the parton, was predicted to have the same magnitude but the opposite sign when measured in semi-inclusive deep-inelastic scattering (SIDIS) and Drell-Yan (DY)~\cite{collins_signchange}. This prediction arises from the different color flows that are possible between these two interactions due to the possibility of soft gluon exchanges in the initial state and final state for DY and SIDIS, respectively. Factorization of the nonperturbative functions is still predicted to hold in both SIDIS and DY.\par In $p$$+$$p$ collisions where two nearly back-to-back hadrons are measured, soft gluon exchanges are possible in both the initial and final states, since color is present in both. 
In this process, factorization breaking was predicted in a TMD framework~\cite{trog_fact,collins_qiu_fact}. For processes where factorization breaking is predicted, individual TMD PDFs and TMD FFs become correlated with one another and cannot be written as a convolution of nonperturbative functions. The ideas behind the predicted sign change of certain TMD PDFs and factorization breaking result from the same physical process of soft gluons being exchanged between participants of the hard scattering and remnants of the collision. These predictions represent major qualitative departures from purely perturbative approaches which do not consider the remnants of the collision at all. \par In calculations of TMD processes where factorization is predicted to hold, the Collins-Soper (CS) equation is known to govern the evolution of nonperturbative functions in a TMD framework with the hard scale of the interaction $Q^2$~\cite{cs1,cs2}. In contrast to the purely perturbative collinear DGLAP evolution equations, the CS evolution equation for TMD processes involves the Collins-Soper-Sterman (CSS) soft factor~\cite{css_soft}, which contains nonperturbative contributions. The theoretical expectation from CSS evolution is that momentum widths sensitive to nonperturbative transverse momentum should increase with the hard scale of the interaction. This can intuitively be thought of as due to an increased phase space for hard gluon radiation, and thus a broader transverse momentum distribution. This behavior has been studied and confirmed in several phenomenological analyses of DY and Z boson data (see e.g.~\cite{dy1,dy2,sidisdy}) as well as phenomenological analyses of SIDIS data (see e.g.~\cite{sidisdy,sidis1,sidis2}). 
Since the CS evolution equation comes directly out of the derivation of TMD factorization~\cite{css_tmd}, it then follows that a promising avenue to potentially observe factorization breaking effects is to study possible deviations from CSS evolution in processes where factorization breaking is predicted such as dihadron and direct photon-hadron angular correlations in $p$$+$$p$ collisions. \par \section{Dihadron and Direct Photon-Hadron Angular Correlations} Dihadron and direct photon-hadron angular correlations are both predicted to be sensitive to factorization breaking effects because there are hadrons in both the initial and final states, thus the potential for soft gluon exchanges in both the initial and final states exists. Additionally these processes can be treated in a TMD framework when the two particles are nearly back-to-back and have large $p_T$; a hard scale is defined with the large $p_T$ of the particles and the process is also sensitive to the convolution of initial-state and final-state small transverse momentum scales $k_T$ and $j_T$. Here $k_T$ refers to the initial-state partonic transverse momentum due to the confined nature of the partons and soft or hard gluon radiation, and $j_T$ refers to the final-state fragmentation transverse momentum due to soft or hard gluon radiation in the hadronization process. \par \begin{figure}[thb] \includegraphics[width=1.0\linewidth]{kTkinematics.pdf} \\ \includegraphics[width=1.0\linewidth]{dp_kt_kinematics.pdf} \caption{\label{fig:ktkinematics} A schematic diagram showing event hard-scattering kinematics for (a) dihadron and (b) direct photon-hadron processes in the transverse plane. Two hard scattered partons with transverse momenta $\hat{p}_T^{\rm trig}$ and $\hat{p}_T^{\rm assoc}$ are acoplanar due to the initial-state partonic $k_T$, given by $|\vec{k}_T^1+\vec{k}_T^2|$. 
The scattered partons result in two fragmented hadrons that are additionally acoplanar due to the final-state transverse momentum perpendicular to the jet axis $j_{T_y}$. For (b) direct photon-hadron events, only one away-side jet fragment is produced since the direct photon is colorless. The transverse momentum component perpendicular to the trigger particle's axis is labeled as $\mbox{$p_{\rm out}$}\xspace$. } \end{figure} Figure~\ref{fig:ktkinematics} shows a kinematic diagram in the transverse plane for both dihadron (top) and direct photon-hadron (bottom) events. At leading order the hard-scattered partons are exactly back-to-back, but due to initial-state $k_T$ the partons are acoplanar by some amount $|\vec{k}_T^1+\vec{k}_T^2|$. The fragmentation process introduces an additional transverse momentum component $j_{T_y}$ which is assumed to be Gaussian distributed about the parton axes such that $\sqrt{\langle j_T^2\rangle}=\sqrt{2\langle j_{T_y}^2\rangle}$. The transverse momentum component perpendicular to the trigger particle's axis is labeled as $\mbox{$p_{\rm out}$}\xspace$ and is sensitive to initial and final state $k_T$ and $j_T$, where the trigger particle refers collectively to the direct photon or near-side hadron. Measuring the azimuthal angular separation between the near-side trigger particle and away-side associated particle allows calculating \mbox{$p_{\rm out}$}\xspace with the following equation: \begin{equation} \mbox{$p_{\rm out}$}\xspace = \mbox{$p_{\rm T}^{\rm assoc}$}\xspace\sin\mbox{$\Delta\phi$}\xspace. \end{equation} The results presented were measured by the PHENIX collaboration at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. The PHENIX detector covers a pseudorapidity interval of $|\eta|<0.35$, and has two arms which in total span an azimuthal region of $\Delta\phi\sim\pi$. 
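The $\mbox{$p_{\rm out}$}\xspace$ definition and the Gaussian relation $\sqrt{\langle j_T^2\rangle}=\sqrt{2\langle j_{T_y}^2\rangle}$ can be illustrated with a short numerical sketch; the momentum values and the width below are made up for illustration and are not PHENIX results:

```python
import numpy as np

# p_out = pT_assoc * sin(dphi): the away-side momentum component
# transverse to the trigger axis (equation above). Values illustrative.
def p_out(pt_assoc, dphi):
    return pt_assoc * np.sin(dphi)

# An exactly back-to-back pair (dphi = pi) has vanishing p_out
print(p_out(2.0, np.pi))  # ~0

# Monte Carlo check of sqrt(<jT^2>) = sqrt(2 <jTy^2>): jT has two
# independent Gaussian transverse components of equal width about
# the jet axis, so <jT^2> = <jTx^2> + <jTy^2> = 2 <jTy^2>.
rng = np.random.default_rng(0)
width = 0.4  # assumed RMS of each component in GeV/c (illustrative)
jtx, jty = rng.normal(0.0, width, (2, 500_000))
ratio = np.sqrt(np.mean(jtx**2 + jty**2)) / np.sqrt(2 * np.mean(jty**2))
print(round(ratio, 3))  # ~1.0
```

The factor $\sqrt{2}$ simply counts the two transverse degrees of freedom of the fragmentation momentum.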
Lead scintillator and lead glass electromagnetic calorimeters provide measurements of isolated direct photons and neutral pions via their two photon decay. To measure away-side particles, the PHENIX detector employs a drift chamber and pad chamber tracking system that measures nonidentified charged hadrons. These results were measured from $p$$+$$p$ collisions at a center-of-mass energy of $\sqrt{s}=510$ GeV, and were recently submitted to the arXiv~\cite{ppg195}. \section{Results} \begin{figure*}[thb] \includegraphics[width=1.0\linewidth]{dphi_correlations_rmspoutfits_nearsidefits.pdf} \caption{\label{fig:dphis} Per trigger yields of charged hadrons as a function of $\mbox{$\Delta\phi$}\xspace$ are shown in several \mbox{$p_{\rm T}^{\rm trig}$}\xspace and \mbox{$p_{\rm T}^{\rm assoc}$}\xspace bins for both \mbox{$\pi^{0}$}\xspace-hadron and direct photon-hadron correlations. The near side yield of the isolated direct photon-hadron correlations is not shown due to the presence of an isolation cut. The blue and red solid lines shown on the away-sides of the distributions are fits to extract the quantity \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace. } \end{figure*} Figure~\ref{fig:dphis} shows the measured per-trigger yields as a function of \mbox{$\Delta\phi$}\xspace for both \mbox{$\pi^{0}$}\xspace-hadron and direct photon-hadron correlations. The \mbox{$\pi^{0}$}\xspace-hadron correlations show the expected two jet structure, with two peaks that are nearly back-to-back around \mbox{$\Delta\phi$}\xspace$\sim0$ and \mbox{$\Delta\phi$}\xspace$\sim\pi$. The near side of the direct photon-hadron correlations is omitted due to the presence of an isolation cut on the direct photons; thus the near side is not physically interpretable. Additionally, the away-side jets are sensitive to the effects of $k_T$ broadening, so they are the yields of interest. 
The direct photon-hadron correlations have a smaller yield than the \mbox{$\pi^{0}$}\xspace-hadron correlations because the direct photon emerges from the hard scattering at leading order; thus the direct photon-hadron correlations probe a smaller jet energy than the \mbox{$\pi^{0}$}\xspace-hadron correlations. \par The measured $\mbox{$p_{\rm out}$}\xspace$ per-trigger yields are shown in Fig.~\ref{fig:pouts} for both $\mbox{$\pi^{0}$}\xspace$-$\mbox{\text{h}$^\pm$}\xspace$ and direct photon-$\mbox{\text{h}$^\pm$}\xspace$ angular correlations. The open points show the distributions for $\mbox{$\pi^{0}$}\xspace$-hadron correlations while the filled points show the distributions for the direct photon-hadron correlations. The distributions are constructed for only away-side charged hadrons that are sensitive to initial-state $k_T$ and final-state $j_T$. The $\mbox{$p_{\rm out}$}\xspace$ distributions show two distinct regions: at small $\mbox{$p_{\rm out}$}\xspace\sim0$ where the particles are nearly back-to-back a Gaussian shape can be seen, while at larger $\mbox{$p_{\rm out}$}\xspace$ a power-law shape is clear. These two shapes indicate a transition from nonperturbatively generated transverse momentum due to soft gluon emission in the Gaussian region to perturbatively generated transverse momentum due to hard gluon emission in the power-law region. The distributions are fit with a Gaussian function at small $\mbox{$p_{\rm out}$}\xspace$ and a Kaplan function over the entire distribution. The Gaussian function clearly fails at $\mbox{$p_{\rm out}$}\xspace\sim$1.3 GeV/$c$ while the Kaplan function accurately describes the transition from Gaussian behavior to power-law behavior. 
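The core-fit procedure described above, a Gaussian restricted to the nearly back-to-back region of a distribution with power-law tails, can be sketched on toy data. The sampled shapes, yields, and fit window below are invented for illustration and do not reproduce the measured \mbox{$p_{\rm out}$}\xspace spectra or the Kaplan parametrization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy p_out sample: a Gaussian core (soft gluon radiation) plus a
# heavy power-law-like tail (hard radiation). All numbers invented.
core = rng.normal(0.0, 0.5, 90_000)
tail = 0.5 * rng.standard_cauchy(10_000)
sample = np.concatenate([core, tail])

counts, edges = np.histogram(sample, bins=np.linspace(-4, 4, 81))
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit log(counts) with a parabola in the nearly back-to-back core
# only, |p_out| < 1:  log N = a - x^2 / (2 sigma^2)
core_mask = np.abs(centers) < 1.0
c2, c1, c0 = np.polyfit(centers[core_mask], np.log(counts[core_mask]), 2)
sigma_fit = np.sqrt(-1.0 / (2.0 * c2))

# The core fit approximately recovers the Gaussian width ...
print(round(sigma_fit, 2))

# ... but badly undershoots the yield in the power-law region
tail_mask = np.abs(centers) > 2.5
gauss_pred = np.exp(c0 + c1 * centers[tail_mask] + c2 * centers[tail_mask] ** 2)
print(counts[tail_mask].sum(), round(gauss_pred.sum(), 1))
```

The large excess of observed tail counts over the Gaussian extrapolation mimics the failure of the Gaussian fit beyond the back-to-back region.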
\par \begin{figure*}[thb] \includegraphics[width=1.0\linewidth]{pouts_allfits.pdf} \caption{\label{fig:pouts} Per trigger yields of charged hadrons as a function of $\mbox{$p_{\rm out}$}\xspace$ are shown in several $\mbox{$p_{\rm T}^{\rm trig}$}\xspace$ bins for both $\mbox{$\pi^{0}$}\xspace$-$\mbox{\text{h}$^\pm$}\xspace$ and direct photon-$\mbox{\text{h}$^\pm$}\xspace$ correlations. The distributions are fit with a Gaussian function at small $\mbox{$p_{\rm out}$}\xspace$ and a Kaplan function over the entire range. The Gaussian fit clearly fails after $\sim$1.3 GeV/c, indicating a transition from nonperturbatively generated $k_T$ and $j_T$ to perturbatively generated $k_T$ and $j_T$. } \end{figure*} To search for effects from factorization breaking, a comparison to the expectation from CSS evolution must be made using momentum widths which are sensitive to nonperturbative transverse momentum. To make a comparison, two different momentum widths were extracted from the measured correlations. Figure~\ref{fig:rmspouts} shows the root mean square of \mbox{$p_{\rm out}$}\xspace as a function of $\mbox{$p_{\rm T}^{\rm trig}$}\xspace$, which is extracted from fits to the entire away-side jet region in Fig.~\ref{fig:dphis}. The away-side fits can be seen in Fig.~\ref{fig:dphis} as red and blue solid lines around $\mbox{$\Delta\phi$}\xspace\sim\pi$. The values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace clearly decrease for both the \mbox{$\pi^{0}$}\xspace-hadron correlations and direct photon-hadron correlations, which is the opposite of what is predicted from CSS evolution. The direct photon-hadron correlations show a stronger dependence than the \mbox{$\pi^{0}$}\xspace-hadron correlations in the same region of \mbox{$p_{\rm T}^{\rm trig}$}\xspace. 
Since the values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace are extracted from the entire away-side jet region, \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace is sensitive to both perturbatively and nonperturbatively generated $k_T$ and $j_T$. While it is dominated by nonperturbatively generated transverse momentum since the majority of charged hadrons are in the nearly back-to-back region $\mbox{$p_{\rm out}$}\xspace\sim0$ or $\mbox{$\Delta\phi$}\xspace\sim\pi$, an observable that is sensitive to only nonperturbatively generated transverse momentum is better suited for comparisons to CSS evolution.\par \begin{figure}[thb] \includegraphics[width=\linewidth]{rmspout_vspttrig.pdf} \caption{\label{fig:rmspouts} The extracted values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace are shown as a function of the interaction hard scale \mbox{$p_{\rm T}^{\rm trig}$}\xspace for both \mbox{$\pi^{0}$}\xspace-hadron and direct photon-hadron correlations. The momentum widths decrease with the interaction hard scale, which is opposite the prediction from CSS evolution. } \end{figure} Since the Gaussian fits to the \mbox{$p_{\rm out}$}\xspace distributions are taken in only the nearly back-to-back region, the widths of the Gaussian functions are momentum widths that are sensitive to only nonperturbative transverse momentum. Figure~\ref{fig:widths} shows the measured Gaussian widths as a function of the interaction hard scale \mbox{$p_{\rm T}^{\rm trig}$}\xspace for both direct photon-hadron and \mbox{$\pi^{0}$}\xspace-hadron correlations. Similarly to the values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace from Fig.~\ref{fig:rmspouts}, the Gaussian widths decrease as a function of the hard scale, which is the opposite of the prediction from CSS evolution. 
The direct photon-hadron widths show a stronger dependence on \mbox{$p_{\rm T}^{\rm trig}$}\xspace than the \mbox{$\pi^{0}$}\xspace-hadron widths, similar to the values of \mbox{$\sqrt{\langle p_{\rm out}^{2}\rangle}$}\xspace. \par \begin{figure}[thb] \includegraphics[width=\linewidth]{gausswidths_withpythia.pdf} \caption{\label{fig:widths} The measured and {\sc pythia} simulated Gaussian widths from the \mbox{$p_{\rm out}$}\xspace distributions are shown as a function of the hard scale \mbox{$p_{\rm T}^{\rm trig}$}\xspace. The widths, sensitive to only nonperturbative transverse momentum, decrease with \mbox{$p_{\rm T}^{\rm trig}$}\xspace. Surprisingly, {\sc pythia} nearly replicates the evolution behavior for both the direct photon-hadron and \mbox{$\pi^{0}$}\xspace-hadron correlations despite a $\sim$15\% difference in the magnitude of the {\sc pythia} widths. } \end{figure} Since theoretical calculations were not available, {\sc pythia}~\cite{pythia} was used to simulate direct photon-hadron and \mbox{$\pi^{0}$}\xspace-hadron correlations and study the evolution of momentum widths with the hard scale of the interaction. The Perugia0~\cite{perugia} tune was used since it was tuned to Tevatron Z boson data at small \mbox{$p_T$}\xspace; therefore it should adequately reproduce events with a small amount of total \mbox{$p_T$}\xspace. {\sc pythia} direct photon and dijet events were produced and the \mbox{$p_{\rm out}$}\xspace distributions were constructed directly from the simulation in exactly the same way that they were measured in data. Although the magnitude of the simulated widths differs by roughly 15\% in each bin, the simulation nonetheless nearly reproduces the measured evolution of the Gaussian widths as seen in Fig.~\ref{fig:widths}. It is plausible that {\sc pythia} could be sensitive to the effects from factorization breaking due to the way a \mbox{$p$+$p$}\xspace event is processed. 
Unlike a standard perturbative QCD calculation, {\sc pythia} forces all particles to color neutralize in the event. This includes allowing initial- and final-state gluon interactions, which are the necessary physical mechanism for factorization breaking and are additionally necessary to color neutralize all of the objects in the event. \par \section{Conclusion} In hadronic collisions where at least one final-state hadron is measured, factorization breaking has been predicted in a TMD framework. When color is present in both the initial and final states, soft gluon exchanges between participants in the hard scattering and the remnants of the collision are possible, leading to novel color flows throughout the entire scattering process. Nearly back-to-back dihadron and direct photon-hadron angular correlations in \mbox{$p$+$p$}\xspace collisions at \mbox{$\sqrt{s}$}\xspace=510 GeV from the PHENIX experiment at the Relativistic Heavy Ion Collider were measured to probe possible effects from factorization breaking~\cite{ppg195}. The transverse momentum component perpendicular to the near-side trigger particle, \mbox{$p_{\rm out}$}\xspace, was used to compare with predictions from CSS evolution. CSS evolution, which comes directly out of the derivation of TMD factorization~\cite{css_tmd}, predicts that momentum widths sensitive to nonperturbative transverse momentum should increase with increasing hard scale due to the broadened phase space for perturbatively generated gluon radiation. This dependence has been observed in phenomenological fits to both DY and SIDIS data~\cite{dy1,dy2,sidisdy,sidis1,sidis2}. \par The measured correlations at PHENIX show the opposite dependence from the prediction of CSS evolution; momentum widths in hadronic collisions where at least one final-state hadron is measured decrease with the interaction hard scale \mbox{$p_{\rm T}^{\rm trig}$}\xspace~\cite{ppg195}. 
Remarkably, {\sc pythia} replicates this behavior in both direct photon-hadron correlations and \mbox{$\pi^{0}$}\xspace-hadron correlations. While {\sc pythia} certainly does not consider effects from factorization breaking as it relies on collinear factorization, the necessary physical mechanism that results in the predicted factorization breaking is present in {\sc pythia}; gluon exchanges with the remnants are possible in a {\sc pythia} simulated event since all colored objects are forced to color neutralize in any given event, unlike a standard perturbative QCD calculation. \par
\section{\label{SI sec: Microscopic model of dipolar-octupolar pyrochlores} Microscopic model of dipolar-octupolar pyrochlores} \subsection{Coordinate systems} \subsubsection{Lattice coordinates} As illustrated in Fig.~\ref{fig: pyrochlore lattice}, the magnetically active ions in spin ice form a pyrochlore lattice, an FCC Bravais lattice whose four sublattices form a network of corner-sharing tetrahedra. To identify the position of a unit cell on the pyrochlore lattice, we introduce the \emph{global cartesian coordinates} (GCC), which are the standard frame coordinates of the FCC cube with edge length set to unity, and the following three basis vectors \begin{subequations} \label{eq: basis for SIPC expressed in GCC} \begin{align} &\hat{\mathbf{e}}_1 = \frac{1}{2}\left( 0,1,1 \right)\\ &\hat{\mathbf{e}}_2 = \frac{1}{2}\left( 1,0,1 \right)\\ &\hat{\mathbf{e}}_3 = \frac{1}{2}\left( 1,1,0 \right). \end{align} \end{subequations} For later convenience, we also introduce $\hat{\mathbf{e}}_0 = \left( 0,0,0 \right)$. The position of the sites within the unit cell is expressed by defining $\hat{\epsilon}_i = \frac{1}{2}\hat{\mathbf{e}}_i$ ($i=1,2,3$) to be the displacement of the $i=1,2,3$ sublattices from the $i=0$ sublattice respectively (where $\hat{\epsilon}_0=\hat{\mathbf{e}}_0=\mathbf{0}$). The diamond lattice is premedial to the pyrochlore lattice \cite{henley2010coulomb}. We refer to it as the parent diamond lattice. This parent lattice is an FCC Bravais lattice with two sublattices positioned at the centers of the up- and down-pointing tetrahedra. The original pyrochlore lattice sites lie at the centers of the bonds of the diamond lattice. 
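As a quick numerical sanity check (an illustrative sketch, not part of the derivation), the basis vectors of Eq.~\eqref{eq: basis for SIPC expressed in GCC} can be verified to be FCC primitive vectors spanning a primitive cell of volume $1/4$:

```python
import numpy as np

# FCC primitive vectors of the pyrochlore Bravais lattice in global
# cartesian coordinates, with the conventional cube edge set to unity
e1 = np.array([0.0, 1.0, 1.0]) / 2
e2 = np.array([1.0, 0.0, 1.0]) / 2
e3 = np.array([1.0, 1.0, 0.0]) / 2

# Each primitive vector is half a face diagonal, of length 1/sqrt(2)
lengths = [np.linalg.norm(e) for e in (e1, e2, e3)]
print(np.allclose(lengths, 1 / np.sqrt(2)))  # True

# Primitive-cell volume: the unit cube contains four FCC lattice
# points, so the primitive cell must have volume 1/4
vol = np.dot(e1, np.cross(e2, e3))
print(vol)  # 0.25
```

The volume $1/4$ is consistent with the four lattice points contained in the conventional cube of unit volume.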
Each down tetrahedron is connected to four nearest-neighbor up tetrahedra by \begin{subequations} \label{eq: NN vectors in parent diamond lattice} \begin{align} &\mathbf{b}_0 = \frac{-1}{4}\left( 1, 1, 1 \right)\\ &\mathbf{b}_1 = \frac{1}{4}\left( -1,1,1 \right)\\ &\mathbf{b}_2 = \frac{1}{4}\left( 1,-1,1 \right)\\ &\mathbf{b}_3 = \frac{1}{4}\left( 1,1,-1 \right). \end{align} \end{subequations} Each up tetrahedron is connected to four down tetrahedra by the opposite vectors. To label the position of the sites on this parent diamond lattice, we introduce the \emph{sublattice indexed diamond coordinates} (SIDC), where the unit cell is identified by a linear combination of the three basis vectors in Eq.~\eqref{eq: basis for SIPC expressed in GCC}. The two sublattices are defined by the sublattice displacement vectors $-\eta_{\alpha} \mathbf{b}_0/2$, where $\eta_{A}=1$ and $\eta_{B}=-1$ with $\alpha$ labeling the sublattice, and $A$ ($B$) stands for down (up). This coordinate system is related to the GCC by \begin{align*} \mathbf{r}_{\alpha} &= \left( r_1, r_2, r_3 \right)_{\alpha} = r_1 \hat{\mathbf{e}}_1 + r_2 \hat{\mathbf{e}}_2 + r_3 \hat{\mathbf{e}}_3 - \frac{\eta_{\alpha}}{2} \mathbf{\mathbf{b}}_{0} \hspace{2.5mm} \text{(SIDC)} \nonumber\\ &= \frac{1}{2} \left( r_2 + r_3, r_1 + r_3, r_1 + r_2 \right) - \frac{\eta_{\alpha}}{2}\mathbf{b}_{0} \hspace{12mm}\text{(GCC)} \nonumber \end{align*} \begin{figure} \includegraphics[width=0.60\linewidth]{pyrochlore_lattice.pdf} \caption{(a) The network of corner-sharing tetrahedra forming the pyrochlore lattice and the sites of its parent (premedial) diamond lattice at the center of the tetrahedra. The down (up) tetrahedra are colored purple (green). (b) The hexagonal plaquettes of the pyrochlore lattice are threaded by a 0-flux ($\nabla\times\overline{A}=0$) or $\pi$-flux ($\nabla\times\overline{A}=\pi$) of the emergent gauge field in the 0-O-QSI and $\pi$-O-QSI phases respectively. 
\label{fig: pyrochlore lattice}} \end{figure} \subsubsection{Local coordinates} Spins on the four different sublattices are defined in a local frame. The basis vectors of these sublattice-dependent coordinate systems are defined in Table~\ref{tab: Local basis}. \begin{table}[!ht] \caption{\label{tab: Local basis}% Local sublattice basis vectors } \begin{ruledtabular} \begin{tabular}{ccccc} $i$ & 0 & 1 & 2 & 3 \\ \hline $\hat{z}_{i}$ & $\frac{1}{\sqrt{3}}\left(1,1,1\right)$ & $\frac{-1}{\sqrt{3}}\left(-1,1,1\right)$ & $\frac{-1}{\sqrt{3}}\left(1,-1,1\right)$ & $\frac{-1}{\sqrt{3}}\left(1,1,-1\right)$ \\[2mm] $\hat{y}_{i}$ & $\frac{1}{\sqrt{2}}\left(0,-1,1\right)$ & $\frac{1}{\sqrt{2}}\left(0,1,-1\right)$ & $\frac{-1}{\sqrt{2}}\left(0,1,1\right)$ & $\frac{1}{\sqrt{2}}\left(0,1,1\right)$ \\[2mm] $\hat{x}_{i}$ & $\frac{1}{\sqrt{6}}\left(-2,1,1\right)$ & $\frac{-1}{\sqrt{6}}\left(2,1,1\right)$ & $\frac{1}{\sqrt{6}}\left(2,1,-1\right)$ & $\frac{1}{\sqrt{6}}\left(2,-1,1\right)$ \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Microscopic Hamiltonian} A well-isolated ground state doublet can be represented by a pseudospin-$1/2$ operator~\cite{smith2022reply}. From symmetry considerations, certain multipole operators are non-vanishing in this doublet and can be used to write the pseudospin operators. For dipolar-octupolar (DO) doublets, the pseudospin operators are \begin{subequations} \label{eq: definition of the pseudospin operators} \begin{align} \tilde{\mathrm{S}}^x &= \mathcal{P} \left( \mathcal{C}_{0}\left( (J^{x})^{3}-\overline{J^{x} J^{y} J^{y}}\right)+\mathcal{C}_{1} J^{z} \right) \mathcal{P} \\ \tilde{\mathrm{S}}^y &= \mathcal{C}_{2} \mathcal{P} \left( (J^{y})^{3}-\overline{J^{y} J^{x} J^{x}} \right) \mathcal{P} \\ \tilde{\mathrm{S}}^z &= \mathcal{C}_3 \mathcal{P}J^{z}\mathcal{P}, \end{align} \end{subequations} where $\mathcal{P}$ is a projection operator into the DO doublet, and the overline denotes symmetrized products. 
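Returning briefly to the lattice frames, the entries of Table~\ref{tab: Local basis} can be checked numerically; the sketch below (illustrative only) verifies that each sublattice frame is an orthonormal right-handed triad whose local $z$ axis is antiparallel to the corresponding diamond bond vector of Eq.~\eqref{eq: NN vectors in parent diamond lattice}:

```python
import numpy as np

# Diamond nearest-neighbour vectors b_mu and the local frames of the
# "Local sublattice basis vectors" table, all in global coordinates
b = np.array([[-1, -1, -1], [-1, 1, 1], [1, -1, 1], [1, 1, -1]]) / 4.0
z = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
y = np.array([[0, -1, 1], [0, 1, -1], [0, -1, -1], [0, 1, 1]]) / np.sqrt(2)
x = np.array([[-2, 1, 1], [-2, -1, -1], [2, 1, -1], [2, -1, 1]]) / np.sqrt(6)

# The four diamond bonds are equivalent: equal length, summing to zero
assert np.allclose(np.linalg.norm(b, axis=1), np.sqrt(3) / 4)
assert np.allclose(b.sum(axis=0), 0.0)

for i in range(4):
    R = np.column_stack([x[i], y[i], z[i]])
    # each local frame is an orthonormal, right-handed triad ...
    assert np.allclose(R.T @ R, np.eye(3))
    assert np.isclose(np.linalg.det(R), 1.0)
    # ... whose z axis is antiparallel to the corresponding bond b_i
    assert np.allclose(z[i], -b[i] / np.linalg.norm(b[i]))
print("local frames consistent")
```

In other words, each $\hat{z}_i$ points along the local $[111]$ easy axis of its sublattice.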
The constants $\mathcal{C}_0$, $\mathcal{C}_1$, $\mathcal{C}_2$ and $\mathcal{C}_3$ are determined by CEF parameters and are needed to ensure that the pseudospins form a $\mathfrak{su}(2)$ algebra (i.e., $\comm{\tilde{\mathrm{S}}^m}{\tilde{\mathrm{S}}^n}= i\sum_l \varepsilon^{mnl}\tilde{\mathrm{S}}^{l}$). $\mathcal{C}_1=0$ for $\mathrm{Ce}_2 \mathrm{Sn}_2 \mathrm{O}_7$ and $\mathrm{Ce}_2 \mathrm{Zr}_2 \mathrm{O}_7$ \cite{patri2020theory}. Even though $\mathrm{S}^{x}$ is octupolar, it is referred to as a dipolar moment since it transforms identically to a dipole under the symmetry operations of the $D_{3d}$ local site symmetry group. From the transformation properties of these pseudospin operators defined above, the most general symmetry-allowed Hamiltonian with nearest-neighbor bilinear interactions has the form \begin{align}\label{eq: initial XYZ+XZ Hamiltonian} \mathcal{H}=\sum_{\langle \mathbf{R}_{i}, \mathbf{R}_{j}' \rangle}\left[\tilde{J}_{x x} \tilde{\mathrm{S}}_{\mathbf{R}_i}^x \tilde{\mathrm{S}}_{\mathbf{R}_j'}^x + \tilde{J}_{yy} \tilde{\mathrm{S}}_{\mathbf{R}_i}^{y} \tilde{\mathrm{S}}_{\mathbf{R}_j'}^{y} + \tilde{J}_{zz} \tilde{\mathrm{S}}_{\mathbf{R}_i}^{z} \tilde{\mathrm{S}}_{\mathbf{R}_j'}^{z} + \tilde{J}_{x z}\left(\tilde{\mathrm{S}}_{\mathbf{R}_i}^{x} \tilde{\mathrm{S}}_{\mathbf{R}_j'}^{z} + \tilde{\mathrm{S}}_{\mathbf{R}_i}^{z} \tilde{\mathrm{S}}_{\mathbf{R}_j'}^{x} \right)\right], \end{align} where $\mathbf{R}_{i}$ represent the sites of the pyrochlore lattice and the index $i\in\{0,1,2,3\}$ labels the sublattices. 
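The normalization condition $\comm{\tilde{\mathrm{S}}^m}{\tilde{\mathrm{S}}^n}= i\sum_l \varepsilon^{mnl}\tilde{\mathrm{S}}^{l}$ quoted above can be checked explicitly in the defining spin-$1/2$ representation $\mathrm{S}^m=\sigma^m/2$ (a generic sketch of the algebra, independent of the CEF constants $\mathcal{C}_i$):

```python
import numpy as np

# Pauli matrices; pseudospin-1/2 operators S^m = sigma^m / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = [sx / 2, sy / 2, sz / 2]

# Levi-Civita symbol for indices in {0, 1, 2}
def eps(m, n, l):
    return (m - n) * (n - l) * (l - m) / 2

# [S^m, S^n] = i sum_l eps^{mnl} S^l for all index pairs
for m in range(3):
    for n in range(3):
        comm = S[m] @ S[n] - S[n] @ S[m]
        rhs = 1j * sum(eps(m, n, l) * S[l] for l in range(3))
        assert np.allclose(comm, rhs)
print("su(2) algebra verified")
```

Any other choice of the $\mathcal{C}_i$ normalizations would spoil these commutators, which is why they are fixed by the CEF parameters.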
The $\tilde{J}_{xz}$ term can be eliminated by performing a uniform rotation around the local $\hat{y}_{i}$ axes \begin{subequations} \label{eq: Trf: nn pyrochlore -> XYZ} \begin{align} \mathrm{S}^{x} &= \cos(\theta) \tilde{\mathrm{S}}^{x}-\sin (\theta) \tilde{\mathrm{S}}^{z} \\ \mathrm{S}^{y} &= \tilde{\mathrm{S}}^{y},\\ \mathrm{S}^{z} &= \sin(\theta) \tilde{\mathrm{S}}^{x} + \cos(\theta) \tilde{\mathrm{S}}^{z}, \end{align} \end{subequations} where $\tan (2 \theta)= 2 \tilde{J}_{x z}/(\tilde{J}_{z z}-\tilde{J}_{x x})$. Such a transformation yields the XYZ model \begin{align}\label{eq: XYZHamiltonian} \mathcal{H}=\sum_{\langle \mathbf{R}_{i}, \mathbf{R}_{j}' \rangle}\left[ J_{x x} \mathrm{S}_{\mathbf{R}_i}^x \mathrm{S}_{\mathbf{R}_j'}^x + J_{yy} \mathrm{S}_{\mathbf{R}_i}^{y} \mathrm{S}_{\mathbf{R}_j'}^{y} + J_{zz} \mathrm{S}_{\mathbf{R}_i}^{z} \mathrm{S}_{\mathbf{R}_j'}^{z}\right] \end{align} with the renormalized couplings \begin{subequations} \label{eq: Relation couplings nn pyrochlore -> XYZ} \begin{align} &J_{xx}=\frac{\tilde{J}_{z z} + \tilde{J}_{x x}}{2}-\frac{\sqrt{\left(\tilde{J}_{z z}-\tilde{J}_{x x}\right)^{2}+4 \tilde{J}_{x z}^{2}}}{2}, \\ &J_{yy}=\tilde{J}_{y y}, \\ &J_{zz}=\frac{\tilde{J}_{z z}+\tilde{J}_{x x}}{2}+\frac{\sqrt{\left(\tilde{J}_{z z}-\tilde{J}_{x x}\right)^{2}+4 \tilde{J}_{x z}^{2}}}{2}. \end{align} \end{subequations} When evaluating the equal-time and dynamical spin structure factor in the main text, it is assumed that $\tilde{J}_{xz}\approx 0$ such that $\theta\approx 0$. This appears to be true for Ce$_2$Zr$_2$O$_7$~\cite{smith2022case}. 
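The rotation of Eq.~\eqref{eq: Trf: nn pyrochlore -> XYZ} and the renormalized couplings of Eq.~\eqref{eq: Relation couplings nn pyrochlore -> XYZ} can be verified numerically; the couplings below are arbitrary illustrative numbers, not fitted values for any material:

```python
import numpy as np

# Illustrative couplings (arbitrary units); J_xz mixes the x and z axes
Jxx_t, Jzz_t, Jxz_t = 0.05, 0.17, 0.04

# Rotation angle about the local y axes: tan(2 theta) = 2 Jxz / (Jzz - Jxx)
theta = 0.5 * np.arctan2(2 * Jxz_t, Jzz_t - Jxx_t)

# Coupling matrix in the (x, z) sector and its rotated form
M = np.array([[Jxx_t, Jxz_t], [Jxz_t, Jzz_t]])
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])
Mr = R @ M @ R.T

# The off-diagonal coupling is eliminated ...
assert abs(Mr[0, 1]) < 1e-12

# ... and the diagonal entries reproduce the closed-form couplings
disc = np.sqrt((Jzz_t - Jxx_t) ** 2 + 4 * Jxz_t ** 2)
Jxx = (Jzz_t + Jxx_t) / 2 - disc / 2
Jzz = (Jzz_t + Jxx_t) / 2 + disc / 2
assert np.isclose(Mr[0, 0], Jxx) and np.isclose(Mr[1, 1], Jzz)
print(Jxx, Jzz)
```

The renormalized $J_{xx}$ and $J_{zz}$ are simply the eigenvalues of the coupling matrix in the $(x,z)$ sector, so the trace $\tilde{J}_{xx}+\tilde{J}_{zz}$ is preserved by the rotation.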
The raising and lowering operators $\mathrm{S}^{\pm}=\mathrm{S}^z \pm i \mathrm{S}^x$ can further be introduced to rewrite the XYZ Hamiltonian in the form \begin{align} \label{eq: XYZ with raising/lowering} \mathcal{H}=\sum_{\langle \mathbf{R}_{i}, \mathbf{R}_{j}' \rangle} \left[J_{y y} \mathrm{S}_{\mathbf{R}_{i}}^y \mathrm{S}_{\mathbf{R}_{j}^{\prime}}^y-J_{\pm}\left(\mathrm{S}_{\mathbf{R}_{i}}^{+} \mathrm{S}_{\mathbf{R}_{j}^{\prime}}^{-}+\mathrm{S}_{\mathbf{R}_{i}}^{-} \mathrm{S}_{\mathbf{R}_{j}^{\prime}}^{+}\right) +J_{\pm \pm}\left(\mathrm{S}_{\mathbf{R}_{i}}^{+} \mathrm{S}_{\mathbf{R}_{j}^{\prime}}^{+}+\mathrm{S}_{\mathbf{R}_{i}}^{-} \mathrm{S}_{\mathbf{R}_{j}^{\prime}}^{-}\right)\right], \end{align} where $J_{\pm}=-\left( J_{xx} + J_{zz} \right)/4$ and $J_{\pm\pm}=\left( J_{xx} - J_{zz} \right)/4$. \section{\label{SI sec: Gauge mean-field approximation} Gauge mean-field construction} \subsection{Parton construction} In GMFT, the initial spin-1/2 Hilbert space on the pyrochlore lattice $\mathscr{H}_{\text{spin}}= \otimes_N \mathscr{H}_{\text{S=1/2}}$ is augmented to a new larger one $\mathscr{H}_{\text{big}} = \mathscr{H}_{\text{spin}} \otimes \mathscr{H}_Q$, where bosonic degrees of freedom are introduced on the parent diamond lattice. This new bosonic Hilbert space $\mathscr{H}_Q$ describes the bosonic field $Q_{\mathbf{r}_{\alpha}} \in \mathbb{Z}$ defined on each site of the parent diamond lattice. For this mapping to be exact, the discretized Gauss's law \begin{equation} Q_{\mathbf{r}_{\alpha}} = \eta_{\alpha} \sum_{\mu=0}^3 \mathrm{S}^{y}_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_\mu/2} , \label{eq: physical constraint charge GMFT} \end{equation} needs to be enforced for every tetrahedron. The canonically conjugate variable to the bosonic charge is $\varphi_{\mathbf{r}_{\alpha}}$ (i.e., $\comm{\varphi_{\mathbf{r}_{\alpha}}}{Q_{\mathbf{r}_{\alpha}}}=i$). 
This naturally leads to the definition of raising and lowering operators $\Phi_{\mathbf{r}_{\alpha}}^\dag = e^{i\varphi_{\mathbf{r}_{\alpha}}}$ and $\Phi_{\mathbf{r}_{\alpha}} = e^{-i\varphi_{\mathbf{r}_{\alpha}}}$, respectively. These rotors respect the constraint $|\Phi_{\mathbf{r}_{\alpha}}^{\dagger}\Phi_{\mathbf{r}_{\alpha}}|=1$ by construction. As explained in the main text, the spin variables are mapped to operators defined in this enlarged Hilbert space as in Eq. (2), where $A$ and $E$ are canonically conjugate fields that act within the $\mathscr{H}_{\text{spin}}$ subspace of $\mathscr{H}_{\text{big}}$. The local $y$-component of the spin now corresponds to the emergent electric field, and the raising/lowering operators create a pair of spinons on the parent lattice while creating/annihilating an electric field quantum to respect Eq.~\eqref{eq: physical constraint charge GMFT}. Making this replacement directly into Eq.~\eqref{eq: XYZ with raising/lowering}, we get \begin{align} \label{eq: XYZ Hamiltonian with parton operators} \mathcal{H} =&\frac{J_{y y}}{2} \sum_{\mathbf{r}_\alpha} Q_{\mathbf{r}_\alpha}^2 -\frac{J_{\pm}}{4} \sum_{\mathbf{r}_\alpha} \sum_{\mu, \nu \neq \mu} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}^{\dagger} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} e^{i \eta_\alpha\left(A_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}-A_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}\right)} \nonumber \\ &+\frac{J_{\pm \pm}}{8} \sum_{\mathbf{r}_\alpha} \sum_{\mu, \nu \neq \mu}\left( e^{i \eta_\alpha\left(A_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}+A_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}\right)}\Phi_{\mathbf{r}_\alpha}^{\dagger} \Phi_{\mathbf{r}_\alpha}^{\dagger} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} + \text{ h.c.}\right). 
\end{align} The total partition function is \begin{align} \mathcal{Z}=&\int \mathcal{D}[\Phi^{*},\Phi, Q,A,E,\lambda,\zeta] e^{-S_{\text {matter}} - S_{\text {EM}}}, \end{align} where \begin{equation} S_\mathrm{EM}=\frac{U}{2} \int_{0}^{\beta}\dd{\tau} \sum_{\left\langle\mathbf{r}_\alpha \mathbf{r}_\beta^{\prime}\right\rangle} \left(E_{\mathbf{r}_{\alpha} \mathbf{r}_{\beta}^{\prime}}^{\tau }\right)^{2} \end{equation} enforces the odd vacuum condition $E_{\mathbf{r}_{\alpha} \mathbf{r}_{\beta}^{\prime}}=\pm 1/2$ by taking the $U\to \infty$ limit, and $S_{\text {matter}}$ describes the quantum rotors coupled to the $U(1)$ gauge field \begin{align} S_{\text{matter}}=&\int_{0}^{\beta} \dd{\tau} \left[\sum_{\mathbf{r}_{\alpha}}\left(i Q_{\mathbf{r}_{\alpha}}^{\tau} \partial_{\tau} \varphi_{\mathbf{r}_{\alpha}}^{\tau}+i \lambda_{\mathbf{r}_\alpha}^{\tau} \left(\Phi_{\mathbf{r}_{\alpha}}^{\tau *} \Phi_{\mathbf{r}_{\alpha}}^{\tau} -1\right)+ i\zeta_{\mathbf{r}_{\alpha}}^{\tau}\left(\sum_{\mu} E_{\mathbf{r}_\alpha, \mathbf{r}_\alpha + \eta_{\alpha} \mathbf{b}_{\mu}}^{\tau}-Q_{\mathbf{r}_\alpha}^{\tau} \right) \right) + \mathcal{H} \right]. \end{align} The Lagrange multipliers $\lambda_{\mathbf{r}_\alpha}^{\tau}$ and $\zeta_{\mathbf{r}_\alpha}^{\tau}$ enforce the constraint $|\Phi_{\mathbf{r}_{\alpha}}^{\dagger}\Phi_{\mathbf{r}_{\alpha}}|=1$ and Eq.~\eqref{eq: physical constraint charge GMFT}, respectively. \subsection{Mean-field decoupling and saddle point approximation} To obtain a tractable model, we first decouple the four-boson term associated with the $J_{\pm\pm}$ coupling. 
To do so, we follow the prescription of Refs.~\cite{lee2012generic, savary2021quantum} and apply the following decoupling \begin{align} \Phi^{\dagger}_{1} \Phi^{\dagger}_{2} \Phi_{3} \Phi_{4} \mathbf{s} \mathbf{s} \rightarrow &\langle\mathbf{s}\rangle\langle\mathbf{s}\rangle \left(\left\langle\Phi^{\dagger}_{1} \Phi^{\dagger}_{2}\right\rangle \Phi_{3} \Phi_{4} + \Phi^{\dagger}_{1} \Phi^{\dagger}_{2}\langle\Phi_{3} \Phi_{4}\rangle+\left\langle\Phi^{\dagger}_{1} \Phi_{3}\right\rangle \Phi^{\dagger}_{2} \Phi_4+\Phi^{\dagger}_{1} \Phi_{3} \left\langle\Phi^{\dagger}_{2} \Phi_{4}\right\rangle + \left\langle\Phi^{\dagger}_{1} \Phi_{4}\right\rangle \Phi^{\dagger}_{2} \Phi_{3} + \Phi^{\dagger}_{1} \Phi_{4}\left\langle\Phi^{\dagger}_{2} \Phi_{3}\right\rangle \right) \nonumber \\ & +(\langle\mathbf{s}\rangle \mathbf{s}+\mathbf{s}\langle\mathbf{s}\rangle)\left(\left\langle\Phi^{\dagger}_{1} \Phi^{\dagger}_{2}\right\rangle\langle\Phi_{3} \Phi_{4}\rangle+\left\langle\Phi^{\dagger}_{1} \Phi_{3}\right\rangle\left\langle\Phi^{\dagger}_{2} \Phi_{4}\right\rangle+\left\langle\Phi^{\dagger}_{1} \Phi_{4} \right\rangle\left\langle\Phi^{\dagger}_{2} \Phi_{3} \right\rangle\right) \nonumber \\ & -2\langle\mathbf{s}\rangle\langle\mathbf{s}\rangle\left(\left\langle\Phi^{\dagger}_{1} \Phi^{\dagger}_{2}\right\rangle\langle\Phi_{3} \Phi_{4}\rangle + \left\langle\Phi^{\dagger}_{1} \Phi_{3}\right\rangle\left\langle\Phi^{\dagger}_{2} \Phi_{4}\right\rangle + \left\langle\Phi^{\dagger}_{1} \Phi_{4}\right\rangle\left\langle\Phi^{\dagger}_{2} \Phi_{3}\right\rangle\right), \end{align} where we use the shorthand notation $\mathbf{s}=e^{iA}/2$, and taking the expectation value $\expval{\mathbf{s}}=e^{i\overline{A}}/2$ corresponds to fixing the gauge field to a constant background.
If we further fix all gauge connections to a constant background (i.e., $A\to \bar{A}$), the decoupling simplifies to \begin{align} \Phi^{\dagger}_{1} \Phi^{\dagger}_ {2} \Phi_{3} \Phi_{4} \mathbf{s} \mathbf{s} \rightarrow \langle\mathbf{s}\rangle\langle\mathbf{s}\rangle & \left(\left\langle\Phi^{\dagger}_{1} \Phi^{\dagger}_{2}\right\rangle \Phi_{3} \Phi_{4} + \Phi^{\dagger}_{1} \Phi^{\dagger}_{2}\langle\Phi_{3} \Phi_{4}\rangle + \left\langle\Phi^{\dagger}_{1} \Phi_{3}\right\rangle \Phi^{\dagger}_{2} \Phi_4 \right.\nonumber\\ &\left. + \Phi^{\dagger}_{1} \Phi_{3}\left\langle\Phi^{\dagger}_{2} \Phi_{4}\right\rangle + \left\langle\Phi^{\dagger}_{1} \Phi_{4}\right\rangle \Phi^{\dagger}_{2} \Phi_{3} + \Phi^{\dagger}_{1} \Phi_{4}\left\langle\Phi^{\dagger}_{2} \Phi_{3}\right\rangle \right). \end{align} We can then introduce the inter-site pairing $\chi$, on-site pairing $\chi^{0}$, and inter-sublattice hopping $\xi$ MF parameters to rewrite the MF Hamiltonian as \begin{align} \label{eq: GMFT Hamiltonian} \mathcal{H}_{\mathrm{MF}}=&\frac{J_{y y}}{2} \sum_{\mathbf{r}_\alpha} Q_{\mathbf{r}_\alpha}^2 -\frac{J_{\pm}}{4} \sum_{\mathbf{r}_\alpha} \sum_{\mu, \nu \neq \mu} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}^{\dagger} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} e^{i \eta_\alpha\left(\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}-\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}\right)} \nonumber \\ &+\frac{J_{\pm \pm}}{8} \sum_{\mathbf{r}_\alpha} \sum_{\mu, \nu \neq \mu}\left[e^{i \eta_\alpha\left(\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}+\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}\right)}\right. 
\left(\Phi_{\mathbf{r}_\alpha}^{\dagger} \Phi_{\mathbf{r}_\alpha}^{\dagger} \chi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}+\bar{\chi}^{0}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}\right. \nonumber \\ &\hspace{2.8cm}\left.\left. +2 \Phi_{\mathbf{r}_\alpha}^{\dagger} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu} \xi_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}+2 \Phi_{\mathbf{r}_\alpha}^{\dagger} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} \xi_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}\right)+\text { h.c. }\right], \end{align} where the MF parameters have to satisfy the self-consistency conditions \begin{subequations} \label{eq: self-consistency conditions} \begin{align} \chi_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_\mu, \mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} &= \expval{\Phi_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_\mu} \Phi_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}}} \\ \overline{\chi}^{0}_{\mathbf{r}_{\alpha}, \mathbf{r}_{\alpha}} &= \expval{\Phi_{\mathbf{r}_{\alpha}}^{\dagger} \Phi_{\mathbf{r}_{\alpha}}^{\dagger} }\\ \xi_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} &= \expval{\Phi_{\mathbf{r}_{\alpha}}^{\dagger} \Phi_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}}}. \end{align} \end{subequations} The first term of $\mathcal{H}_{\text{MF}}$ is the energy cost for the existence of spinons. The second term describes intra-sublattice spinon hopping in the fixed constant gauge-field background, whereas the $J_{\pm\pm}$ term generates both inter-sublattice hopping and pairing.
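In practice, Eqs.~\eqref{eq: self-consistency conditions} are solved iteratively: starting from a guess for $\{\chi, \chi^{0}, \xi\}$, one diagonalizes $\mathcal{H}_{\mathrm{MF}}$, recomputes the expectation values, and repeats until the parameters stop changing. A minimal sketch of such a damped fixed-point loop is given below; it is illustrative only, with `toy_update` a stand-in contraction rather than the actual GMFT expectation-value map.

```python
import numpy as np

def solve_self_consistently(update, x0, tol=1e-10, mixing=0.5, max_iter=10_000):
    """Damped fixed-point iteration x_{n+1} = (1 - m) x_n + m * update(x_n).

    `update` maps a vector of mean-field parameters (chi, chi0, xi, ...) to
    the expectation values computed from the Hamiltonian they define; a
    solution of the self-consistency conditions is a fixed point of this map.
    """
    x = np.asarray(x0, dtype=complex)
    for _ in range(max_iter):
        x_new = (1 - mixing) * x + mixing * np.asarray(update(x), dtype=complex)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("self-consistency loop did not converge")

# Toy stand-in for the expectation-value map (contractive, so the loop converges).
toy_update = lambda x: 0.3 * np.tanh(x.real) + 0.1j * x.imag
params = solve_self_consistently(toy_update, np.array([1.0 + 0.5j]))
```

In the actual calculation, `update` would build the Bogoliubov Hamiltonian of Eq.~\eqref{eq: GMFT Hamiltonian} from the current parameters and evaluate the expectation values; the damping (`mixing` $<1$) is a standard way to stabilize such loops.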
Turning back to the total partition function, we further allow the gauge charges to take on any integer value $Q_{\mathbf{r}_{\alpha}}\in(-\infty,\infty)$ instead of being constrained to $|Q_{\mathbf{r}_{\alpha}}|<2S$ such that we can integrate them out and get \begin{align} \mathcal{Z}_{\text{MF}}= \int \mathcal{D}[\Phi^{*}, \Phi] e^{-S_{\text{GMFT}}}, \end{align} with the saddle point action \begin{align} S_{\text {GMFT}} = \int_{0}^{\beta} \dd{\tau} &\left( \sum_{\mathbf{r}_{\alpha}} \left(\frac{1}{2 J_{yy}} \partial_{\tau}\Phi_{\mathbf{r}_\alpha}^{\tau *} \partial_{\tau} \Phi_{\mathbf{r}_\alpha}^{\tau} + i \lambda_{\mathbf{r}_{\alpha}}^{\tau} \left(\Phi_{\mathbf{r}_{\alpha}}^{\tau*} \Phi_{\mathbf{r}_{\alpha}}^{\tau} -1\right) \right) + \mathcal{H}_{\text {GMFT}} \right), \end{align} and $\mathcal{H}_{\text{GMFT}}$ contains both the $J_{\pm}$ and $J_{\pm\pm}$ terms of Eq.~\eqref{eq: GMFT Hamiltonian}. \section{\label{SI sec: Transformation of the parton operators} Transformation of the parton operators} \subsection{\label{SI subsec: Space group} Space group} The space group (SG) of the diamond lattice ($F d \overline{3} m$) is minimally generated by five operators: three translations $T_{i}$ ($i=1,2,3$), a rotoreflection $\overline{C}_6$ (i.e., $\overline{C}_6 = IC_3$ where $C_3$ is a threefold rotation around $\left[111\right]$ and $I$ is the inversion), and a non-symmorphic screw operation $S$. These space-group generators act on the position vector written in the SIDC as \begin{subequations} \begin{align} T_{i}:& \mathbf{r}_{\alpha} \mapsto \left(r_{1}+\delta_{i, 1}, r_{2}+\delta_{i, 2}, r_{3}+\delta_{i, 3}\right)_{\alpha} \\ \overline{C}_{6}:& \mathbf{r}_{\alpha} \mapsto \left(-r_{3}, -r_{1}, -r_{2} \right)_{\pi_{A,B}(\alpha)}\\ S:& \mathbf{r}_{\alpha} \mapsto \left(-r_{1},-r_{2}, r_{1}+r_{2}+r_{3}+\delta_{\alpha,A}\right)_{\pi_{A, B}(\alpha)}, \end{align} \end{subequations} where $\pi_{A,B}(\alpha)$ are cyclic permutations of the $A$ and $B$ sublattices. 
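These actions can be checked directly against the defining relations of the space group used below (e.g., $S^2 = T_3$, $\overline{C}_6^{6} = 1$, and $\overline{C}_6 T_i \overline{C}_6^{-1} T_{i+1} = 1$). A small self-contained Python sketch (the helper names `T`, `C6bar`, `Scr`, and `power` are ours):

```python
from itertools import product

# Diamond-lattice sites in the SIDC: ((r1, r2, r3), sublattice "A" or "B").
def T(i, s):
    """Translation T_i for i = 1, 2, 3."""
    r, a = s
    return (tuple(r[j] + (j == i - 1) for j in range(3)), a)

def C6bar(s):
    """Rotoreflection: (r1, r2, r3)_a -> (-r3, -r1, -r2)_{pi(a)}."""
    (r1, r2, r3), a = s
    return ((-r3, -r1, -r2), "B" if a == "A" else "A")

def Scr(s):
    """Screw: (r1, r2, r3)_a -> (-r1, -r2, r1 + r2 + r3 + delta_{a,A})_{pi(a)}."""
    (r1, r2, r3), a = s
    return ((-r1, -r2, r1 + r2 + r3 + (a == "A")), "B" if a == "A" else "A")

def power(f, n, s):
    """Apply the map f to the site s n times."""
    for _ in range(n):
        s = f(s)
    return s

# Verify the defining relations on a patch of the lattice.
for r in product(range(-2, 3), repeat=3):
    for a in "AB":
        s = (r, a)
        assert T(1, T(2, s)) == T(2, T(1, s))             # translations commute
        assert power(C6bar, 6, s) == s                    # C6bar^6 = 1
        assert Scr(Scr(s)) == T(3, s)                     # S^2 = T_3
        assert Scr(T(3, s)) == T(3, Scr(s))               # S T_3 S^-1 T_3^-1 = 1
        assert power(lambda x: Scr(C6bar(x)), 4, s) == s  # (C6bar S)^4 = 1
        assert power(lambda x: Scr(power(C6bar, 3, x)), 2, s) == s  # (C6bar^3 S)^2 = 1
        for i in (1, 2, 3):
            # C6bar T_i C6bar^-1 T_{i+1} = 1  <=>  T_{i+1} C6bar T_i = C6bar
            assert T(i % 3 + 1, C6bar(T(i, s))) == C6bar(s)
        for i in (1, 2):
            # S T_i S^-1 T_3^-1 T_i = 1  <=>  T_i S T_i = T_3 S
            assert T(i, Scr(T(i, s))) == T(3, Scr(s))
```

The relations with inverses are checked in the equivalent inverse-free forms indicated in the comments.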
\subsection{Space group transformations} For dipolar-octupolar doublets \cite{rau2019frustrated}, the pseudospins transform under the space group generators as \begin{subequations} \begin{align} T_i : \qty\Big{ \mathrm{S}^{+}_{\mathbf{R}_{i}}, \mathrm{S}^{-}_{\mathbf{R}_{i}}, \mathrm{S}^{z}_{\mathbf{R}_{i}} } \mapsto & \qty\Big{ \mathrm{S}^{+}_{T_i(\mathbf{R}_{i})}, \mathrm{S}^{-}_{T_i(\mathbf{R}_{i})}, \mathrm{S}^{z}_{T_i(\mathbf{R}_{i})} }\\ \overline{C}_6 : \qty\Big{ \mathrm{S}^{+}_{\mathbf{R}_{i}}, \mathrm{S}^{-}_{\mathbf{R}_{i}}, \mathrm{S}^{z}_{\mathbf{R}_{i}} } \mapsto & \qty\Big{ \mathrm{S}^{+}_{\overline{C}_6(\mathbf{R}_{i})}, \mathrm{S}^{-}_{\overline{C}_6(\mathbf{R}_{i})}, \mathrm{S}^z_{\overline{C}_6(\mathbf{R}_{i})} } \\ S : \qty\Big{ \mathrm{S}^{+}_{\mathbf{R}_{i}}, \mathrm{S}^{-}_{\mathbf{R}_{i}}, \mathrm{S}^{z}_{\mathbf{R}_{i}} } \mapsto & \qty\Big{ - \mathrm{S}^{+}_{S(\mathbf{R}_{i})} , - \mathrm{S}^{-}_{S(\mathbf{R}_{i})} , \mathrm{S}^{z}_{S(\mathbf{R}_{i})} }. \end{align} \end{subequations} In terms of the GMFT parton construction, these transformations translate to \begin{widetext} \begin{subequations} \begin{align} T_{i} &: \qty\Big{ \frac{1}{2} \Phi^{\dag}_{\mathbf{r}_A} e^{i A_{\mathbf{r}_{A},\mathbf{r}_{A}+ \mathbf{b}_\mu}} \Phi_{\mathbf{r}_{A}+\mathbf{b}_\mu}, \frac{1}{2} \Phi^{\dag}_{\mathbf{r}_A+\mathbf{b}_\mu} e^{-iA_{\mathbf{r}_{A},\mathbf{r}_{A}+ \mathbf{b}_\mu}} \Phi_{\mathbf{r}_{A}}, E_{\mathbf{r}_{A},\mathbf{r}_{A}+\mathbf{b}_\mu} }\nonumber \\ &\mapsto \qty\Big{ \frac{1}{2} \Phi^{\dag}_{T_{i} (\mathbf{r}_A)} e^{i A_{T_{i} (\mathbf{r}_{A}),T_{i} (\mathbf{r}_{A}+ \mathbf{b}_\mu)}} \Phi_{T_{i} (\mathbf{r}_{A}+\mathbf{b}_\mu)}, \frac{1}{2} \Phi^{\dag}_{T_{i} (\mathbf{r}_A+\mathbf{b}_\mu)} e^{-iA_{T_{i} (\mathbf{r}_{A}),T_{i} (\mathbf{r}_{A}+ \mathbf{b}_\mu)}} \Phi_{T_{i} (\mathbf{r}_{A})}, E_{T_{i} (\mathbf{r}_{A}),T_{i} (\mathbf{r}_{A}+\mathbf{b}_\mu)} } \\ \overline{C}_{6} &: \qty\Big{ \frac{1}{2} \Phi^{\dag}_{\mathbf{r}_A} e^{i 
A_{\mathbf{r}_{A},\mathbf{r}_{A}+ \mathbf{b}_\mu} } \Phi_{\mathbf{r}_{A}+\mathbf{b}_\mu}, \frac{1}{2} \Phi^{\dag}_{\mathbf{r}_A+\mathbf{b}_\mu} e^{-iA_{\mathbf{r}_{A},\mathbf{r}_{A}+ \mathbf{b}_\mu}} \Phi_{\mathbf{r}_{A}}, E_{\mathbf{r}_{A},\mathbf{r}_{A}+\mathbf{b}_\mu} }\nonumber \\ &\mapsto \qty\Big{ \frac{1}{2} \Phi^{\dag}_{\overline{C}_{6}(\mathbf{r}_A)} e^{i A_{\overline{C}_{6}(\mathbf{r}_{A}), \overline{C}_{6}(\mathbf{r}_{A}+\mathbf{b}_\mu)} } \Phi_{\overline{C}_{6}(\mathbf{r}_{A}+\mathbf{b}_\mu)} , \frac{1}{2} \Phi^{\dag}_{\overline{C}_{6}(\mathbf{r}_A+\mathbf{b}_\mu)} e^{-iA_{\overline{C}_{6}(\mathbf{r}_{A}),\overline{C}_{6}(\mathbf{r}_{A}+ \mathbf{b}_\mu)}} \Phi_{\overline{C}_{6}(\mathbf{r}_{A})} , E_{\overline{C}_{6}(\mathbf{r}_{A}),\overline{C}_{6}(\mathbf{r}_{A}+\mathbf{b}_\mu)} } \\ S &: \qty\Big{ \frac{1}{2} \Phi^{\dag}_{\mathbf{r}_A} e^{i A_{\mathbf{r}_{A},\mathbf{r}_{A}+ \mathbf{b}_\mu}} \Phi_{\mathbf{r}_{A}+\mathbf{b}_\mu}, \frac{1}{2} \Phi^{\dag}_{\mathbf{r}_A+\mathbf{b}_\mu} e^{-iA_{\mathbf{r}_{A},\mathbf{r}_{A}+ \mathbf{b}_\mu}} \Phi_{\mathbf{r}_{A}}, E_{\mathbf{r}_{A},\mathbf{r}_{A}+\mathbf{b}_\mu} }\nonumber \\ &\mapsto \qty\Big{ -\frac{1}{2} \Phi^{\dag}_{S(\mathbf{r}_A)} e^{i A_{S(\mathbf{r}_{A}), S(\mathbf{r}_{A}+\mathbf{b}_\mu)} } \Phi_{S(\mathbf{r}_{A}+\mathbf{b}_\mu)} , -\frac{1}{2} \Phi^{\dag}_{S(\mathbf{r}_A+\mathbf{b}_\mu)} e^{-iA_{S(\mathbf{r}_{A}), S(\mathbf{r}_{A}+ \mathbf{b}_\mu)}} \Phi_{S(\mathbf{r}_{A})} , E_{S(\mathbf{r}_{A}),S(\mathbf{r}_{A}+\mathbf{b}_\mu)} } . 
\end{align} \end{subequations} \end{widetext} \subsection{Local \texorpdfstring{$U(1)$}{U(1)} transformations} Before performing a saddle point approximation and fixing the gauge connection to a constant background, the Hamiltonian has the following $U(1)$ gauge structure \begin{subequations}\label{eq: gauge transformations} \begin{align} \Phi_{\mathbf{r}_{\alpha}} &\rightarrow \Phi_{\mathbf{r}_{\alpha}} e^{i \theta_{\mathbf{r}_{\alpha}}} \\ A_{\mathbf{r}_{\alpha} \mathbf{r}_{\beta}^{\prime}}&\rightarrow A_{\mathbf{r}_{\alpha} \mathbf{r}_{\beta}^{\prime}} - \theta_{\mathbf{r}^{\prime}_{\beta}} + \theta_{\mathbf{r}_{\alpha}} \end{align} \end{subequations} as a direct consequence of the physical constraint~\eqref{eq: physical constraint charge GMFT}. However, at the MF level, $\mathcal{H}_{\text{GMFT}}$ does not have a $U(1)$ gauge structure anymore. Let us then study how the GMFT Hamiltonian transforms under an arbitrary local $U(1)$ transformation of the form \begin{align} G: \Phi_{\mathbf{r}_{\alpha}} \mapsto \Phi_{\mathbf{r}_{\alpha}} e^{i\theta_{\mathbf{r}_{\alpha}}}. 
\end{align} First, the $J_{\pm}$ term describing intra-sublattice spinon hopping transforms as \begin{align} G&: \Phi_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}}^{\dagger} \Phi_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} e^{i\eta_{\alpha}\left(\overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} - \overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} \right)} \nonumber \\ &\mapsto \Phi_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}}^{\dagger} \Phi_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} e^{i\eta_{\alpha}\left[\left(\overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} + \eta_{\alpha}\left(\theta_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} - \theta_{\mathbf{r}_{\alpha}}\right) \right) - \left( \overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} + \eta_{\alpha}\left(\theta_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} - \theta_{\mathbf{r}_{\alpha}}\right) \right) \right]}. \end{align} The local $U(1)$ transformation can be absorbed in the gauge field background by mapping it to \begin{align} \label{eq: gauge trf gauge field} G: \overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} \mapsto \overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} + \eta_{\alpha} \left(\theta_{\mathbf{r}_{\alpha}+ \eta_{\alpha} \mathbf{b}_{\mu}} - \theta_{\mathbf{r}_{\alpha}}\right) = G_{\theta}\left( \overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} \right).
\end{align} Next, for the inter-site pairing coupling associated with the $J_{\pm\pm}$ term, we get the mapping \begin{align} G&: e^{i \eta_\alpha\left(\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}+\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}\right)}\Phi_{\mathbf{r}_\alpha}^{\dagger} \Phi_{\mathbf{r}_\alpha}^{\dagger} \chi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} \nonumber \\ &\mapsto e^{i\eta_{\alpha}\left[\left(\overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} + \eta_{\alpha}\left(\theta_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} - \theta_{\mathbf{r}_{\alpha}}\right) \right) + \left( \overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} + \eta_{\alpha}\left(\theta_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} - \theta_{\mathbf{r}_{\alpha}}\right) \right) \right]} \nonumber \\ &\hspace{2cm} \times \Phi_{\mathbf{r}_\alpha}^{\dagger} \Phi_{\mathbf{r}_\alpha}^{\dagger} \chi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} e^{-i \left( \theta_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu} + \theta_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} \right)}, \end{align} such that the inter-site pairing field transforms as \begin{align} \label{eq: gauge trf inter-site pairing} G: \chi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} \mapsto \chi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} e^{-i \left( \theta_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu} + \theta_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} \right)} = G_{\theta}\left( \chi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}\right). 
\end{align} For the on-site pairing coupling associated with the $J_{\pm\pm}$ term, \begin{align} G &: e^{i \eta_\alpha\left(\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}+\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}\right)} \bar{\chi}^{0}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} \nonumber \\ &\mapsto e^{i\eta_{\alpha}\left[\left(\overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} + \eta_{\alpha}\left(\theta_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} - \theta_{\mathbf{r}_{\alpha}}\right) \right) + \left( \overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} + \eta_{\alpha}\left(\theta_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} - \theta_{\mathbf{r}_{\alpha}}\right) \right) \right]} \bar{\chi}^{0}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha} e^{i2\theta_{\mathbf{r}_{\alpha}}} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu} \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu} , \end{align} such that the on-site pairing field transforms as \begin{align}\label{eq: gauge trf on-site pairing} G: \overline{\chi}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha}^{0} \mapsto \overline{\chi}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha}^{0} e^{i 2 \theta_{\mathbf{r}_\alpha} } = G_{\theta}\left( \overline{\chi}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha}^{0} \right). 
\end{align} Finally, the inter-sublattice hopping term transforms under a local $U(1)$ transformation as \begin{align} G &: e^{i \eta_\alpha\left(\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\nu}+\overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}\right)} \xi_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\nu}} \Phi_{\mathbf{r}_\alpha}^\dagger \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu} \nonumber \\ &\mapsto e^{i\eta_{\alpha}\left[\left(\overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} + \eta_{\alpha}\left(\theta_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} - \theta_{\mathbf{r}_{\alpha}}\right) \right) + \left( \overline{A}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} + \eta_{\alpha}\left(\theta_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\mu}} - \theta_{\mathbf{r}_{\alpha}}\right) \right) \right]} \xi_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\nu}} e^{i (\theta_{\mathbf{r}_\alpha} - \theta_{\mathbf{r}_\alpha + \eta_{\alpha} \mathbf{b}_{\nu}})} \Phi_{\mathbf{r}_\alpha}^\dagger \Phi_{\mathbf{r}_\alpha+\eta_\alpha \mathbf{b}_\mu}, \end{align} such that we have \begin{align} \label{eq: gauge trf inter-sublattice hopping} G: \xi_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} \mapsto \xi_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} e^{i(\theta_{\mathbf{r}_{\alpha}} - \theta_{\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}})} = G_{\theta}\left( \xi_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_{\nu}} \right).
\end{align} To summarize, the GMFT Hamiltonian transforms under a local $U(1)$ gauge transformation as \begin{align} \label{eq: transformation GMFT Hamiltonian local U(1) trsf} G: \mathcal{H}_{\text{GMFT}}\left(\left\{ \overline{A},\chi,\chi^{0},\xi \right\}\right) \mapsto \mathcal{H}_{\text{GMFT}}\left(\left\{ G_{\theta}(\overline{A}), G_{\theta}(\chi), G_{\theta}(\chi^{0}), G_{\theta}(\xi) \right\}\right), \end{align} where the gauge field background and MF parameters transform as in Eqs.~\eqref{eq: gauge trf gauge field},~\eqref{eq: gauge trf inter-site pairing},~\eqref{eq: gauge trf on-site pairing}, and~\eqref{eq: gauge trf inter-sublattice hopping}. This defines an equivalence relation between different configurations $\left\{ \overline{A},\chi,\chi^{0},\xi \right\}$. \section{\label{SI sec: Classification of symmetric spin liquids} Classification of \texorpdfstring{$U(1)$}{U(1)} symmetric spin liquids} \subsection{Generalities} \subsubsection{Projective construction} We now discuss the general ideas behind the classification of symmetry fractionalization classes within GMFT. We briefly summarize the argument outlined in Ref.~\cite{desrochers2022symmetry} to which we refer the interested reader for a more thorough exposition. It should first be noted that the general gauge transformation of Eq. \eqref{eq: gauge transformations} is generated by \begin{align} U(\{\theta\})&=\prod_{\mathbf{r}_\alpha} \exp\left(i\theta_{\mathbf{r}_\alpha} \left(Q_{\mathbf{r}_\alpha}- \sum_{\mu} E_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}+\eta_{\alpha}\mathbf{b}_\mu}\right)\right). \end{align} Under such a transformation, the GMFT Hamiltonian transforms as in Eq.~\eqref{eq: transformation GMFT Hamiltonian local U(1) trsf}. 
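As a standalone numerical sanity check of the transformation rules summarized above, one can verify that multiplying the spinons by arbitrary local phases reproduces the same $J_{\pm}$ hopping amplitude as absorbing those phases into the background field via Eq.~\eqref{eq: gauge trf gauge field}. The sketch below uses random field configurations; all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = -1  # sublattice sign eta_alpha = +/- 1

# Random spinon amplitudes at r + eta b_mu and r + eta b_nu, random background
# gauge connections, and random local U(1) phases theta at the three sites.
phi_mu, phi_nu = rng.normal(size=2) + 1j * rng.normal(size=2)
A_mu, A_nu = rng.uniform(0, 2 * np.pi, size=2)
th_r, th_mu, th_nu = rng.uniform(0, 2 * np.pi, size=3)

def hop(pm, pn, Am, An):
    """J_pm hopping amplitude: Phi^dag_mu Phi_nu exp(i eta (A_nu - A_mu))."""
    return np.conj(pm) * pn * np.exp(1j * eta * (An - Am))

# Transforming the spinons, Phi -> Phi exp(i theta) ...
lhs = hop(phi_mu * np.exp(1j * th_mu), phi_nu * np.exp(1j * th_nu), A_mu, A_nu)
# ... is equivalent to absorbing the phases into the background gauge field.
rhs = hop(phi_mu, phi_nu,
          A_mu + eta * (th_mu - th_r),
          A_nu + eta * (th_nu - th_r))
assert np.isclose(lhs, rhs)
```

The pairing and inter-sublattice hopping channels can be checked in exactly the same way using the transformation rules for $\chi$, $\chi^{0}$, and $\xi$.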
Therefore, starting from the ground state $\ket{\Psi_{\text{GS}} \left( \left\{ \overline{A},\chi,\chi^{0},\xi \right\} \right) }$ of a GMFT Hamiltonian $\mathcal{H}_{\text{GMFT}}\left( \left\{ \overline{A},\chi,\chi^{0},\xi \right\} \right)$, one can think of using a projector-like transformation $\mathcal{P}_{\text{Gauss}}$ to recover a physical spin wavefunction. Such a projection operator should remove charge configurations that do not respect $|Q_{\mathbf{r}_{\alpha}}|<2S$ and acts on the $\mathscr{H}_{\text{spin}}$ part of the state to enforce Eq.~\eqref{eq: physical constraint charge GMFT}. However, since $\comm{U}{\mathcal{P}_{\text{Gauss}}}=0$ because $U$ acts trivially on any state that respects the lattice Gauss's law, we get the crucial observation that all GMFT eigenstates that only differ by a gauge transformation yield the same physical spin wavefunction \begin{align*} &\mathcal{P}_{\text{Gauss}} U \ket{\Psi_{\text{GS}} \left( \left\{ \overline{A},\chi,\chi^0,\xi \right\} \right) } = \mathcal{P}_{\text{Gauss}} \ket{\Psi_{\text{GS}} \left( \left\{ G_{\theta}(\overline{A}), G_{\theta}(\chi), G_{\theta}(\chi^0), G_{\theta}(\xi) \right\} \right) } \\ =& U \mathcal{P}_{\text{Gauss}} \ket{\Psi_{\text{GS}} \left(\left\{ \overline{A},\chi,\chi^0,\xi \right\} \right)} \\ =& \mathcal{P}_{\text{Gauss}} \ket{\Psi_{\text{GS}} \left(\left\{ \overline{A},\chi,\chi^0,\xi \right\} \right)}. \end{align*} This argument is independent of the way one chooses to implement the projection back to the physical spin space $\mathcal{P}_{\text{Gauss}}$. We can thus conclude that although the gauge structure is not explicitly present, the MF theory still has redundancies in its description. This redundancy has important consequences when considering how to implement symmetries in GMFT.
It implies that a GMFT eigenstate for a given Ansatz $\{\overline{A},\chi,\chi^0,\xi\}$ corresponds to a physical spin wavefunction symmetric under a given transformation $\mathcal{O}$ if it is symmetric under $\mathcal{O}$ \emph{up to a gauge transformation}, in complete analogy to the projective symmetry group (PSG) construction for Abrikosov fermions and Schwinger bosons~\cite{wen2002quantum, wang2006spin, messio2013time, bieri2016projective}. Equivalently stated, a GMFT wavefunction is symmetric under $\mathcal{O}$ if there exists a gauge transformation $G_{\mathcal{O}}$ such that \begin{align} G_{\mathcal{O}}\circ\mathcal{O}: \mathcal{H}_{\text{GMFT}}\left( \left\{ \overline{A},\chi,\chi^0,\xi \right\} \right) \mapsto \mathcal{H}_{\text{GMFT}}\left( \left\{ \overline{A},\chi,\chi^0,\xi \right\} \right). \end{align} As a result, all Ansätze corresponding to physical states invariant under a given set of symmetries $\{\mathcal{O}_1,\mathcal{O}_2,...\}$ can be classified by identifying the associated gauge-enriched operations $\{\tilde{\mathcal{O}}_1, \tilde{\mathcal{O}}_2,...\} = \{G_{\mathcal{O}_1}\circ\mathcal{O}_1,G_{\mathcal{O}_2}\circ\mathcal{O}_2,...\}$ that leave $\mathcal{H}_{\text{GMFT}}$ invariant. The subgroup of pure gauge transformations $G_{\text{IGG}}\circ\mathds{1}$ that leave the GMFT Hamiltonian invariant is known as the invariant gauge group (IGG). The IGG corresponds to the emergent low-energy gauge structure of the model. The ideas behind the classification of symmetry fractionalization classes with GMFT are summarized in Fig.~\ref{fig: projective symmetry}. \begin{figure} \includegraphics[width=1.00\linewidth]{Projective_symmetry.pdf} \caption{Graphical representation of the projective construction. (a) The space represents all possible configurations $\{\overline{A},\chi,\chi^{0},\xi\}$, and the red curves correspond to configurations related by a gauge transformation $G$.
A GMFT eigenstate corresponding to a certain equivalence class of configurations yields a symmetric spin wavefunction under a transformation $\mathcal{O}$ if this operation maps a representative configuration to a gauge-equivalent one, as in the right part of the figure. On the other hand, the GMFT eigenstate corresponds to a spin wavefunction that is not symmetric under $\mathcal{O}$ if it maps the configuration to another one that is not related by a local $U(1)$ transformation, as in the left part of the figure. (b) All Ansätze symmetric under given transformations are invariant under a subgroup of pure gauge transformations known as the invariant gauge group (IGG). The family of Ansätze on the right of the figure is invariant under $\text{IGG}_{1}$, whereas the one on the left is only invariant under $\text{IGG}_{2}\subset \text{IGG}_{1}$. \label{fig: projective symmetry}} \end{figure} \subsubsection{Consistency conditions} To classify symmetry classes, one starts from all algebraic constraints of the form \begin{equation} \label{eq: algebraic SG relations} \mathcal{O}_{1} \circ \mathcal{O}_{2} \circ \cdots=1 \end{equation} which translate directly to the gauge-enriched relations \begin{equation} \label{eq: algebraic SG relations gauge-enriched} \widetilde{\mathcal{O}}_{1} \circ \widetilde{\mathcal{O}}_{2} \circ \cdots=\left(G_{\mathcal{O}_{1}} \circ \mathcal{O}_{1}\right) \circ\left(G_{\mathcal{O}_{2}} \circ \mathcal{O}_{2}\right) \circ \cdots = e^{i\psi} \in \text{IGG}, \end{equation} with $\psi\in\left[0, 2\pi \right)$ if IGG$=U(1)$ and $\psi\in\left\{0, \pi \right\}$ if IGG$=\mathbb{Z}_{2}$.
We can use the following conjugation relation \begin{widetext} \begin{align} \mathcal{O}_{i} \circ G_{\mathcal{O}_{j}} \circ \mathcal{O}_{i}^{-1}: \Phi_{\mathbf{r}_{\alpha}} \mapsto e^{i \phi_{\mathcal{O}_{j}}\left[\mathcal{O}_{i}^{-1}\left(\mathbf{r}_{\alpha}\right)\right]} \Phi_{\mathbf{r}_{\alpha}}, \end{align} \end{widetext} to map all these gauge-enriched constraints to phase relations of the form \begin{equation} \begin{aligned}\label{eq: phase equations for PSG} &\phi_{\mathcal{O}_{1}}\left(\mathbf{r}_{\alpha}\right) + \phi_{\mathcal{O}_{2}}\left[\mathcal{O}_{1}^{-1}\left(\mathbf{r}_{\alpha}\right)\right] + \phi_{\mathcal{O}_{3}}\left[\mathcal{O}_{2}^{-1}\circ\mathcal{O}_{1}^{-1}\left(\mathbf{r}_{\alpha}\right)\right]+\cdots= \psi\hspace{3mm} \text{mod }2\pi. \end{aligned} \end{equation} The GMFT classes for a given IGG are then obtained by listing the gauge inequivalent solutions of all phase equations of the form~\eqref{eq: phase equations for PSG}. It must be impossible to relate two distinct GMFT classes by a general gauge transformation $G$ that maps the phase factor to \begin{align} \phi_{\mathcal{O}}(\mathbf{r}_{\alpha}) &\to \phi_{\mathcal{O}}(\mathbf{r}_{\alpha}) + \phi_{G}(\mathbf{r}_{\alpha}) - \phi_{G}(\mathcal{O}^{-1}(\mathbf{r}_{\alpha})). \end{align} \subsection{\label{SI subsec: PSG solution - Algebraic constraints} Algebraic constraints} For the parent diamond lattice, the algebraic constraints are \begin{subequations} \label{eq: SG generators constraints} \begin{align} T_{i} T_{i+1} T_{i}^{-1} T_{i+1}^{-1} &=1, i=1,2,3 \\ \bar{C}_{6}^{6} &=1, \\ S^{2} T_{3}^{-1} &=1, \\ \bar{C}_{6} T_{i} \bar{C}_{6}^{-1} T_{i+1} &=1, i=1,2,3 \\ S T_{i} S^{-1} T_{3}^{-1} T_{i} &=1, i=1,2, \\ S T_{3} S^{-1} T_{3}^{-1} &=1 \\ \left(\bar{C}_{6} S\right)^{4} &=1 \\ \left(\bar{C}_{6}^{3} S\right)^{2} &=1. 
\end{align} \end{subequations} The corresponding gauge-enriched operations are \begin{widetext} \begin{subequations} \begin{align} \left(G_{T_{i}} T_{i}\right)\left(G_{T_{i+1}} T_{i+1}\right)\left(G_{T_{i}} T_{i}\right)^{-1}\left(G_{T_{i+1}} T_{i+1}\right)^{-1}&\in IGG, \\ \left(G_{\bar{C}_{6}} \bar{C}_{6}\right)^{6} &\in IGG, \\ \left(G_{S} S\right)^{2}\left(G_{T_{3}} T_{3}\right)^{-1} &\in IGG,\\ \left(G_{\bar{C}_{6}} \bar{C}_{6}\right)\left(G_{T_{i}} T_{i}\right)\left(G_{\bar{C}_{6}} \bar{C}_{6}\right)^{-1}\left(G_{T_{i+1}} T_{i+1}\right) &\in IGG, \\ \left(G_{S} S\right)\left(G_{T_{i}} T_{i}\right)\left(G_{S} S\right)^{-1}\left(G_{T_{3}} T_{3}\right)^{-1}\left(G_{T_{i}} T_{i}\right) &\in IGG, \\ \left(G_{S} S\right)\left(G_{T_{3}} T_{3}\right)\left(G_{S} S\right)^{-1}\left(G_{T_{3}} T_{3}\right)^{-1} &\in IGG, \\ \left[\left(G_{\bar{C}_{6}} \bar{C}_{6}\right)\left(G_{S} S\right)\right]^{4} &\in IGG,\\ \left[\left(G_{\bar{C}_{6}} \bar{C}_{6}\right)^{3}\left(G_{S} S\right)\right]^{2} &\in IGG . 
\end{align} \end{subequations} These constraints are explicitly \begin{subequations} \begin{empheq}[]{align} \phi_{T_{i}}\left(\mathbf{r}_{\alpha}\right)+\phi_{T_{i+1}}\left[T_{i}^{-1}\left(\mathbf{r}_{\alpha}\right)\right]-\phi_{T_{i}}\left[T_{i+1}^{-1}\left(\mathbf{r}_{\alpha}\right)\right]-\phi_{T_{i+1}}\left(\mathbf{r}_{\alpha}\right) &=\psi_{T_i},\label{eq: psg classification T_i}\\ \phi_{\bar{C}_{6}}\left(\mathbf{r}_{\alpha}\right)+\phi_{\bar{C}_{6}}\left[\bar{C}_{6}^{-1}\left(\mathbf{r}_{\alpha}\right)\right]+\phi_{\bar{C}_{6}}\left[\bar{C}_{6}^{-2}\left(\mathbf{r}_{\alpha}\right)\right]+\phi_{\bar{C}_{6}}\left[\bar{C}_{6}^{-3}\left(\mathbf{r}_{\alpha}\right)\right]+\phi_{\bar{C}_{6}}\left[\bar{C}_{6}^{-4}\left(\mathbf{r}_{\alpha}\right)\right]+\phi_{\bar{C}_{6}}\left[\bar{C}_{6}^{-5}\left(\mathbf{r}_{\alpha}\right)\right] &=\psi_{\bar{C}_{6}} \label{eq: psg classification C} \\ \phi_{S}\left(\mathbf{r}_{\alpha}\right) + \phi_{S}\left[S^{-1}\left(\mathbf{r}_{\alpha}\right)\right]-\phi_{T_{3}}\left(\mathbf{r}_{\alpha}\right) &=\psi_{S} \label{eq: psg classification S} \\ \phi_{\bar{C}_{6}}\left(\mathbf{r}_{\alpha}\right)+\phi_{T_{i}}\left[\bar{C}_{6}^{-1}\left(\mathbf{r}_{\alpha}\right)\right]-\phi_{\bar{C}_{6}}\left[T_{i+1}\left(\mathbf{r}_{\alpha}\right)\right]+\phi_{T_{i+1}}\left[T_{i+1}\left(\mathbf{r}_{\alpha}\right)\right] &=\psi_{\bar{C}_{6} T_{i}} \label{eq: psg classification CT_i}\\ \phi_{S}\left(\mathbf{r}_{\alpha}\right) + \phi_{T_{i}}\left[S^{-1}\left(\mathbf{r}_{\alpha}\right)\right]-\phi_{S}\left[T_{3}^{-1} T_{i}\left(\mathbf{r}_{\alpha}\right)\right]-\phi_{T_{3}}\left[T_{i}\left(\mathbf{r}_{\alpha}\right)\right]+\phi_{T_{i}}\left[T_{i}\left(\mathbf{r}_{\alpha}\right)\right]&=\psi_{S T_{i}} \label{eq: psg classification S T_i} \\ \phi_{S}\left(\mathbf{r}_{\alpha}\right) + \phi_{T_{3}}\left[S^{-1}\left(\mathbf{r}_{\alpha}\right)\right]-\phi_{S}\left[T_{3}^{-1}\left(\mathbf{r}_{\alpha}\right)\right]-\phi_{T_{3}}\left(\mathbf{r}_{\alpha}\right) 
&=\psi_{S T_{3}} \label{eq: psg classification S T_3} \\ \phi_{\overline{C}_{6}}\left(\mathbf{r}_{\alpha}\right)+\phi_{S}\left[\bar{C}_{6}^{-1}\left(\mathbf{r}_{\alpha}\right)\right] + \phi_{\bar{C}_{6}}\left[\left(\bar{C}_{6} S\right)^{-1}\left(\mathbf{r}_{\alpha}\right)\right] + \phi_{S}\left[\left(\bar{C}_{6} S \bar{C}_{6}\right)^{-1}\left(\mathbf{r}_{\alpha}\right)\right]+\phi_{\bar{C}_{6}}\left[\left(\bar{C}_{6} S \bar{C}_{6} S\right)^{-1}\left(\mathbf{r}_{\alpha}\right)\right] & \nonumber\\ +\phi_{S}\left[\left(\bar{C}_{6} S \bar{C}_{6} S \bar{C}_{6}\right)^{-1}\left(\mathbf{r}_{\alpha}\right)\right] +\phi_{\bar{C}_{6}}\left[\left(\bar{C}_{6} S \bar{C}_{6} S \bar{C}_{6} S\right)^{-1}\left(\mathbf{r}_{\alpha}\right)\right] + \phi_{S}\left[\left(\bar{C}_{6} S \bar{C}_{6} S \bar{C}_{6} S \bar{C}_{6}\right)^{-1}\left(\mathbf{r}_{\alpha}\right)\right] &=\psi_{\bar{C}_{6} S} \label{eq: psg classification CS} \\ \phi_{\bar{C}_{6}}\left(\mathbf{r}_{\alpha}\right)+\phi_{\bar{C}_{6}}\left[\bar{C}_{6}^{-1}\left(\mathbf{r}_{\alpha}\right)\right]+\phi_{\bar{C}_{6}}\left[\bar{C}_{6}^{-2}\left(\mathbf{r}_{\alpha}\right)\right]+\phi_{S}\left[\bar{C}_{6}^{-3}\left(\mathbf{r}_{\alpha}\right)\right] + \phi_{\bar{C}_{6}}\left[\left(\bar{C}_{6}^{3} S\right)^{-1}\left(\mathbf{r}_{\alpha}\right)\right] \hspace{2.3cm} & \nonumber\\ + \phi_{\bar{C}_{6}}\left[\left(\bar{C}_{6}^{3} S \bar{C}_{6}\right)^{-1}\left(\mathbf{r}_{\alpha}\right)\right] + \phi_{\bar{C}_{6}}\left[\left(\bar{C}_{6}^{3} S \bar{C}_{6}^{2}\right)^{-1}\left(\mathbf{r}_{\alpha}\right)\right] + \phi_{S}\left[S\left(\mathbf{r}_{\alpha}\right)\right] &=\psi_{S \bar{C}_{6}} \label{eq: psg classification SC} \end{empheq} \end{subequations} \end{widetext} where all $\psi\in\left[0,2\pi\right)$ if IGG=$U(1)$ or $\psi\in\left\{0,\pi\right\}$ if IGG=$\mathbb{Z}_2$, $i=1,2,3$ for Eqs.~\eqref{eq: psg classification T_i} and~\eqref{eq: psg classification CT_i} and $i=1,2$ for Eq.~\eqref{eq: psg classification S T_i}. 
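To make the bookkeeping in these phase equations concrete, one can check Eq.~\eqref{eq: psg classification T_i} numerically against the translation-phase solution derived below, $\phi_{T_1}=0$, $\phi_{T_2}=-\psi_{T_1}r_1$, $\phi_{T_3}=\psi_{T_3}r_1-\psi_{T_2}r_2$. The following sketch (the helper names `phi_T` and `Tinv` are ours) verifies the relation for random sites and couplings:

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.uniform(0, 2 * np.pi, size=3)  # (psi_T1, psi_T2, psi_T3)

def phi_T(i, r):
    """Translation phases: phi_T1 = 0, phi_T2 = -psi_T1 r1, phi_T3 = psi_T3 r1 - psi_T2 r2."""
    r1, r2, _ = r
    return [0.0, -psi[0] * r1, psi[2] * r1 - psi[1] * r2][i - 1]

def Tinv(i, r):
    """Inverse translation T_i^{-1} acting on (r1, r2, r3)."""
    return tuple(r[j] - (j == i - 1) for j in range(3))

for row in rng.integers(-5, 6, size=(20, 3)):
    r = tuple(int(x) for x in row)
    for i in (1, 2, 3):
        j = i % 3 + 1
        lhs = (phi_T(i, r) + phi_T(j, Tinv(i, r))
               - phi_T(i, Tinv(j, r)) - phi_T(j, r))
        assert np.isclose(lhs, psi[i - 1])  # equals psi_{T_i} at every site
```

The remaining phase equations can be checked in the same way once the corresponding $\phi_{\overline{C}_6}$ and $\phi_{S}$ are specified.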
All phase equations are defined modulo $2\pi$. For simplicity, we will not indicate this explicitly. We emphasize that these constraints are distinct from the effective spin-1/2 case discussed in Ref.~\cite{desrochers2022symmetry} as a consequence of the different ways the pseudospins transform. \subsection{\label{SI subsec: PSG solution - Solution of the PSG constraints} Solution of the constraints} The IGG of the GMFT Hamiltonian of Eq. (3) in the main text is $\mathbb{Z}_2$ in the presence of inter-site and on-site pairing fields. Here, we arbitrarily set these pairing fields to zero to classify $U(1)$ QSLs. \subsubsection{\label{SI subsubsec: PSG solution - Solution of the PSG constraints: inter-unit cell part} Inter-unit cell part} Let us first consider the constraints coming from the commutativity of the translation operators given in Eq.~\eqref{eq: psg classification T_i}. Using our gauge freedom, we can set $\phi_{T_1}(r_1,r_2,r_3)_{\alpha}=\phi_{T_2}(0,r_2,r_3)_{\alpha}=\phi_{T_3}(0,0,r_3)_{\alpha}=0$, which then leads to \begin{subequations} \label{eq U(1) classification: T1, T2, T3 first equation} \begin{align} \phi_{T_1}(\mathbf{r}_{\alpha}) &= 0 \\ \phi_{T_2}(\mathbf{r}_{\alpha}) &= -\psi_{T_1} r_1\\ \phi_{T_3}(\mathbf{r}_{\alpha}) &= \psi_{T_3} r_1 - \psi_{T_2} r_2. \end{align} \end{subequations} Then using Eq.~\eqref{eq: psg classification CT_i}, we get the equations \begin{subequations} \begin{align} \psi_{\overline{C}_6 T_1} =& \phi_{\overline{C}_6}(r_1,r_2,r_3)_{\alpha} -\phi_{\overline{C}_6}(r_1,r_2+1,r_3)_{\alpha} -r_1 \psi_{T_1} \\ \psi_{\overline{C}_6 T_2} =&\phi_{\overline{C}_6}(r_1,r_2,r_3)_{\alpha} -\phi_{\overline{C}_6}(r_1,r_2,r_3+1)_{\alpha} + \psi_{T_1} r_2 - \psi_{T_2} r_2 + \psi_{T_3} r_1 \\ \psi_{\overline{C}_6 T_3}=&\phi_{\overline{C}_6}(r_1,r_2,r_3)_{\alpha} -\phi_{\overline{C}_6}(r_1+1,r_2,r_3)_{\alpha} + \psi_{T_2} r_3 - \psi_{T_3} r_2.
\end{align} \end{subequations} that yield $\psi_{T_1}=\psi_{T_2}=\psi_{T_3}$ and \begin{align} \label{eq U(1) classification: first equation for phi C6} \phi_{\overline{C}_6}(\mathbf{r}_{\alpha}) =& \phi_{\overline{C}_6}(\mathbf{0}_{\alpha}) - r_2 \psi_{\overline{C}_6 T_1} -r_3 \psi_{\overline{C}_6 T_2} - r_1 \psi_{\overline{C}_6 T_{3}} - \psi_{T_1}(r_1 r_2 - r_1 r_3). \end{align} Replacing the translation phase factors in the constraints~\eqref{eq: psg classification S T_i} and~\eqref{eq: psg classification S T_3} leads to \begin{subequations} \begin{align} \psi_{S T_1} =&\phi_{S}(r_1,r_2,r_3)_{\alpha} -\phi_{S}(r_1+1,r_2,r_3-1)_{\alpha} + (-1-r_1+r_2)\psi_{T_1} \\ \psi_{S T_2} =&\phi_{S}(r_1,r_2,r_3)_{\alpha} -\phi_{S}(r_1,r_2+1,r_3-1)_{\alpha} + (1-r_1+r_2) \psi_{T_1} \\ \psi_{S T_3} =&\phi_{S}(r_1,r_2,r_3)_{\alpha} -\phi_{S}(r_1,r_2,r_3-1)_{\alpha} + (r_2 - r_1) \psi_{T_1}, \end{align} \end{subequations} which impose $\psi_{T_1}=n_1 \pi$ with $n_1 \in\left\{0, 1 \right\}$ and \begin{align}\label{eq U(1) classification: first equation for phi S} \phi_{S}(\mathbf{r}_{\alpha}) =& \phi_{S}(\mathbf{0}_{\alpha}) -r_1 \psi_{ST_1} - r_2 \psi_{S T_2} + \frac{1}{2} n_1 \pi \left( -r_1 + r_2 + 2 r_1 r_2 - r_1^2 + r_2^2 \right) + (r_1 + r_2 + r_3) \psi_{ST_3} . \end{align} We can find all other constraints by replacing the phase factors for $T_{i}$, $S$, and $\overline{C}_{6}$ in the remaining equations. Eq.~\eqref{eq: psg classification C} expressing the finite order of the rotoreflection leads to \begin{subequations} \begin{align} 3\phi_{\overline{C}_6}(\mathbf{0}_A) + 3\phi_{\overline{C}_6}(\mathbf{0}_B) &= \psi_{\overline{C}_6}. 
\end{align} \end{subequations} Eq.~\eqref{eq: psg classification S} associated with $\psi_{S}$ yields \begin{subequations} \begin{align} \psi_{S} =& (r_1 + r_2 + 2 r_3) \psi_{S T_3} + \phi_S(\mathbf{0}_A) + \phi_S(\mathbf{0}_B) \\ \psi_S =& ( r_1 + r_2 + 2 r_3 - 1) \psi_{S T_3} + \phi_S(\mathbf{0}_A) + \phi_S(\mathbf{0}_B) \end{align} \end{subequations} which leads to the constraints \begin{subequations} \begin{align} \psi_{ST_3} &= 0 \\ \phi_{S}(\mathbf{0}_A) + \phi_{S}(\mathbf{0}_B) &= \psi_{S}. \end{align} \end{subequations} Eq.~\eqref{eq: psg classification CS} gives \begin{subequations} \begin{align} \psi_{\overline{C}_6 S} &= \psi_{\overline{C}_6 T_1} + \psi_{\overline{C}_6 T_2} + \psi_{\overline{C}_6 T_3} - \psi_{ST_1} - \psi_{ST_2} + 4 (\phi_{\overline{C}_{6}}(\mathbf{0}_A) + \phi_{S}(\mathbf{0}_B)) \\ \psi_{\overline{C}_6 S} &= 4 (\phi_{\overline{C}_{6}}(\mathbf{0}_B) + \phi_{\overline{C}_{6}}(\mathbf{0}_A)). \end{align} \end{subequations} At last, Eq.~\eqref{eq: psg classification SC} gives the constraints \begin{subequations} \begin{align} -3 \psi_{\overline{C}_6 T_1} + 3 \psi_{\overline{C}_6 T_2} - \psi_{\overline{C}_6 T_3} + 2 \psi_{ST_1} &= 0 \\ -3 \psi_{\overline{C}_6 T_1} - \psi_{\overline{C}_6 T_2} + 3 \psi_{\overline{C}_6 T_3} + 2 \psi_{ST_2} &= 0 \\ - \psi_{\overline{C}_6 T_1} + \psi_{\overline{C}_6 T_2} + \psi_{\overline{C}_6 T_3} + 4 \phi_{\overline{C}_{6}}(\mathbf{0}_A) + 2 \phi_{\overline{C}_{6}}(\mathbf{0}_B) + 2 \phi_{S}(\mathbf{0}_B) &= \psi_{S \overline{C}_6} \\ 2 \phi_{\overline{C}_{6}}(\mathbf{0}_A) + 4 \phi_{\overline{C}_{6}}(\mathbf{0}_B) + 2 \phi_{S}(\mathbf{0}_A) &= \psi_{S \overline{C}_6}. \end{align} \end{subequations} \subsubsection{\label{SI subsubsec: PSG solution - Solution of the PSG constraints: intra-unit cell part} Gauge fixing and intra-unit cell part} We must fix all remaining gauge degrees of freedom and solve the intra-unit cell constraints. Let us briefly summarize the results we have determined thus far. 
We obtained the phase equations~\eqref{eq U(1) classification: T1, T2, T3 first equation},~\eqref{eq U(1) classification: first equation for phi C6} and~\eqref{eq U(1) classification: first equation for phi S}, and the constraints \begin{subequations} \label{eq: intra-cell condition} \begin{align} 3\phi_{\overline{C}_6}(\mathbf{0}_A) + 3\phi_{\overline{C}_6}(\mathbf{0}_B) &= \psi_{\overline{C}_6} \label{eq: intra-cell condition Eq.1} \\ \phi_{S}(\mathbf{0}_A) + \phi_{S}(\mathbf{0}_B) &=\psi_{S} \label{eq: intra-cell condition Eq.2} \\ \psi_{\overline{C}_6 T_1} + \psi_{\overline{C}_6 T_2} + \psi_{\overline{C}_6 T_3} - \psi_{ST_1} - \psi_{ST_2} + 4 (\phi_{\overline{C}_{6}}(\mathbf{0}_A) + \phi_{S}(\mathbf{0}_B)) &= \psi_{\overline{C}_6 S} \label{eq: intra-cell condition Eq.3} \\ 4 (\phi_{\overline{C}_{6}}(\mathbf{0}_B) + \phi_{\overline{C}_{6}}(\mathbf{0}_A)) &= \psi_{\overline{C}_6 S} \label{eq: intra-cell condition Eq.4} \\ -3 \psi_{\overline{C}_6 T_1} + 3 \psi_{\overline{C}_6 T_2} - \psi_{\overline{C}_6 T_3} + 2 \psi_{ST_1} &= 0 \label{eq: intra-cell condition Eq.5} \\ -3 \psi_{\overline{C}_6 T_1} - \psi_{\overline{C}_6 T_2} + 3 \psi_{\overline{C}_6 T_3} + 2 \psi_{ST_2} &= 0 \label{eq: intra-cell condition Eq.6} \\ - \psi_{\overline{C}_6 T_1} + \psi_{\overline{C}_6 T_2} + \psi_{\overline{C}_6 T_3} + 4 \phi_{\overline{C}_{6}}(\mathbf{0}_A) + 2 \phi_{\overline{C}_{6}}(\mathbf{0}_B) + 2 \phi_{S}(\mathbf{0}_B) &= \psi_{S \overline{C}_6} \label{eq: intra-cell condition Eq.7} \\ 2 \phi_{\overline{C}_{6}}(\mathbf{0}_A) + 4 \phi_{\overline{C}_{6}}(\mathbf{0}_B) + 2 \phi_{S}(\mathbf{0}_A) &= \psi_{S \overline{C}_6} \label{eq: intra-cell condition Eq.8} \end{align} \end{subequations} These constraints can be simplified by fixing some gauge degrees of freedom to remove redundant solutions. 
We can fix $\psi_{\overline{C}_6 T_1}=\psi_{\overline{C}_6 T_2}=0$ by using our IGG structure ($\phi_{\mathcal{O}}\to\phi_{\mathcal{O}}+\theta$, where $\theta \in\left[ 0,2\pi\right)$) for $\phi_{T_1}$ and $\phi_{T_2}$ since the phases associated with $T_1$, $T_2$, and $T_3$ appear an odd number of times in Eq.~\eqref{eq: psg classification CT_i}. Similarly, we can fix $\psi_{ST_1}=0$ because $\phi_{T_3}$ also appears an odd number of times in Eq.~\eqref{eq: psg classification S T_i}. This elimination of redundant degrees of freedom directly implies $\psi_{\overline{C}_6 T_3}=\psi_{S T_2}=0$ from Eqs.~\eqref{eq: intra-cell condition Eq.5} and~\eqref{eq: intra-cell condition Eq.6}. In the same vein, we can use the remaining IGG degree of freedom of $\overline{C}_6$ and $S$ to fix $\phi_{\overline{C}_6}(\mathbf{0}_{A})=\phi_{S}(\mathbf{0}_{B})$, which directly implies $\psi_{\overline{C}_{6}S}=0$ from Eq.~\eqref{eq: intra-cell condition Eq.3}. Next, we can use a constant sublattice-dependent gauge transformation \begin{equation} \label{eq: intra-cell condition sublattice gauge transformation} \phi(\mathbf{r}_{\alpha}) = \phi_{\alpha}, \hspace{5mm} \text{where }\alpha\in\{\text{A},\text{B}\}.
\end{equation} Given that the phase factors generally transform as $\phi_{\mathcal{O}}(\mathbf{r}_{\alpha})\to \phi_{\mathcal{O}}(\mathbf{r}_{\alpha}) + \phi(\mathbf{r}_{\alpha}) - \phi\left[ \mathcal{O}^{-1}(\mathbf{r}_{\alpha}) \right]$, our initial gauge fixing for $\phi_{T_1}$, $\phi_{T_{2}}$, and $\phi_{T_{3}}$ remains unaffected by such a gauge transformation, while $\phi_{\overline{C}_6}$ and $\phi_{S}$ are mapped to \begin{subequations} \begin{align} \phi_{\overline{C}_6}(\mathbf{0}_{\alpha}) &\to \eta_{\alpha} (\phi_{A} - \phi_{B}) + \phi_{\overline{C}_6}(\mathbf{0}_{\alpha}) \\ \phi_{S}(\mathbf{0}_{\alpha}) &\to \eta_{\alpha} (\phi_{A} - \phi_{B}) + \phi_{S}(\mathbf{0}_{\alpha}) \end{align} \end{subequations} We can then choose $\phi_{\alpha}$ to fix \begin{equation} \phi_{\overline{C}_{6}} (\mathbf{0}_B) = 0. \label{eq U(1) classification: gauge fixing phi intra sublattice C 1} \end{equation} We directly get from Eqs.~\eqref{eq: intra-cell condition Eq.1},~\eqref{eq: intra-cell condition Eq.2}, and~\eqref{eq: intra-cell condition Eq.7} that $\phi_{S}(\mathbf{0}_A) = \psi_{\overline{C}_{6}}=\psi_{S \overline{C}_{6}}=0$. Finally, Eqs.~\eqref{eq: intra-cell condition Eq.4} and~\eqref{eq: intra-cell condition Eq.8} imply $\psi_{S}=n_{S}\pi$ with $n_{S}\in\{0,1\}$ and $\psi_{\overline{C}_{6} S}=0$. We conclude that there are four GMFT classes given by the phase factors \begin{subequations} \label{eq: phase factors U(1) PSG classification} \begin{align} \phi_{T_{1}}\left(\mathbf{r}_{\alpha}\right)=& 0 \\ \phi_{T_{2}}\left(\mathbf{r}_{\alpha}\right)=& n_{1} \pi r_{1} \\ \phi_{T_{3}}\left(\mathbf{r}_{\alpha}\right)=& n_{1} \pi\left(r_{1}+r_{2}\right) \\ \phi_{\bar{C}_{6}}\left(\mathbf{r}_{\alpha}\right) =& n_1 \pi r_1 (r_2 + r_3) \\ \phi_{S}\left(\mathbf{r}_{\alpha}\right) =& n_{S}\pi\delta_{\alpha,A} + \frac{n_{1} \pi}{2} \left(-r_{1}(r_{1}+1) + r_{2}(r_{2}+1) + 2r_{1}r_{2} \right).
\end{align} \end{subequations} Only two fully symmetric fractionalization classes (corresponding to $n_S=0$ in the current classification) are present in the effective spin-1/2 case~\cite{desrochers2022symmetry}. The presence of novel classes in the octupolar-dominant regime of the dipolar-octupolar doublet is a direct consequence of the non-trivial pseudospin transformation properties, as highlighted previously. \section{\label{SI sec: From symmetry classification to saddle point action} From symmetry classification to saddle point action} \subsection{\label{SI subsec: Relating fields on different bonds} Relating fields on different bonds} The values of the gauge field and MF parameters on every bond need to be determined to build the GMFT action of every classified symmetry fractionalization class. To do so, one arbitrarily fixes the gauge field and solves the self-consistency conditions for the MF parameters on a subset of representative bonds and sites that are not symmetry-related. Since all bonds and sites can be related by space group operations for the diamond lattice, we only need to fix the gauge field and know the MF parameter on a single bond/site. The configuration on the rest of the lattice is then deduced by using the transformation properties of the gauge field and various MF parameters. The transformation properties of the GMFT parameters can be deduced by using the spinon transformations and requiring that the Hamiltonian is invariant under the gauge-enriched symmetry operations. Indeed, the gauge-enriched operators $\widetilde{\mathcal{O}}$ must be symmetries of the GMFT Hamiltonian \begin{align} \widetilde{\mathcal{O}}:\mathcal{H}_{\text{GMFT}}\mapsto \mathcal{H}_{\text{GMFT}}.
\end{align} For the GMFT Hamiltonian to be invariant under the projective operation $\widetilde{\mathcal{O}}=G_{\mathcal{O}}\circ \mathcal{O}$, the requirements \begin{subequations} \label{eq: mapping of the fields under SG operations} \begin{align} \overline{A}_{\mathcal{O}(\mathbf{r}_\alpha),\mathcal{O}(\mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\mu})} &= \overline{A}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\mu}} + \eta_{\alpha} \left( \phi_{\mathcal{O}}\left(\mathcal{O}\left(\mathbf{r}_{\alpha} + \eta_{\alpha} \mathbf{b}_{\mu}\right)\right) - \phi_{\mathcal{O}}\left(\mathcal{O}\left(\mathbf{r}_{\alpha} \right) \right) \right) \label{eq: gauge field background} \\ \chi_{\mathcal{O}\left(\mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\mu} \right), \mathcal{O}\left(\mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\nu} \right)} &= \chi_{\mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\mu} , \mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\nu}} \exp\left[ -i \left( \phi_{\mathcal{O}}\left(\mathcal{O}\left(\mathbf{r}_{\alpha} + \eta_{\alpha} \mathbf{b}_{\mu}\right)\right) + \phi_{\mathcal{O}}\left(\mathcal{O}\left(\mathbf{r}_{\alpha} + \eta_{\alpha} \mathbf{b}_{\nu} \right) \right) \right) \right] \label{eq: inter-site pairing} \\ \overline{\chi}_{\mathcal{O}\left(\mathbf{r}_\alpha\right), \mathcal{O}\left(\mathbf{r}_\alpha \right)}^{0} &= \overline{\chi}_{\mathbf{r}_\alpha, \mathbf{r}_\alpha}^{0} \exp\left[ 2 i \phi_{\mathcal{O}}\left(\mathcal{O}\left(\mathbf{r}_{\alpha} \right)\right) \right] \label{eq: transformation on-site pairing field} \\ \xi_{\mathcal{O}(\mathbf{r}_\alpha),\mathcal{O}(\mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\mu})} &= \xi_{\mathbf{r}_\alpha, \mathbf{r}_\alpha+\eta_{\alpha}\mathbf{b}_{\mu}} \exp\left[ i \left( \phi_{\mathcal{O}}\left(\mathcal{O}\left(\mathbf{r}_{\alpha} \right)\right) - \phi_{\mathcal{O}}\left(\mathcal{O}\left(\mathbf{r}_{\alpha} + \eta_{\alpha} \mathbf{b}_{\mu} \right) \right) \right) \right] \label{eq: inter-sublattice 
hopping} \end{align} \end{subequations} must be satisfied. \subsection{\label{SI subsec: Relating different bonds} Relating different bonds} From the above transformation properties, we see that after fixing all fields on a representative bond/point, we can obtain the complete configurations if we know how to relate a single point to all other points (because of the on-site pairing $\chi^{0}$), a nearest-neighbor bond to all other nearest-neighbor bonds (because of the gauge field background $\overline{A}$ and inter-sublattice hopping $\xi$), and a second-nearest-neighbor bond to all other second-nearest-neighbor bonds (because of the inter-site pairing $\chi$). Consequently, let us find these transformations in terms of the space group generators. For the on-site pairing field $\chi^{0}$, if we pick $\mathbf{0}_A$ to be the representative point, then it can be related to all other points of the lattice by: \begin{subequations} \begin{align} T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}}: \mathbf{0}_{A}\mapsto (r_1,r_2,r_3)_{A} \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}}\circ \overline{C}_6: \mathbf{0}_{A}\mapsto (r_1,r_2,r_3)_{B} \end{align} \end{subequations} For the gauge field background $\overline{A}$ and inter-sublattice hopping $\xi$, we can pick $\mathbf{0}_A\to\mathbf{0}_B$ as the representative nearest-neighbor bond.
All other nearest-neighbor bonds of the lattice are related to it by the transformations \begin{subequations} \begin{align} T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}}:& (\mathbf{0}_A \to \mathbf{0}_B) \mapsto ((r_1,r_2,r_3)_A \to (r_1,r_2,r_3)_B) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}}\circ\overline{C}_6^{4}\circ S\circ\overline{C}_6:& (\mathbf{0}_A \to \mathbf{0}_B) \mapsto ((r_1,r_2,r_3)_A \to (r_1 + 1,r_2,r_3)_B) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ\overline{C}_6^{2} \circ S \circ\overline{C}_6 :& (\mathbf{0}_A \to \mathbf{0}_B) \mapsto ((r_1,r_2,r_3)_A \to (r_1,r_2+1,r_3)_B) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}}\circ S \circ\overline{C}_6:& (\mathbf{0}_A \to \mathbf{0}_B) \mapsto ((r_1,r_2,r_3)_A \to (r_1,r_2,r_3+1)_B). \end{align} \end{subequations} We can further map the representative bond to bonds with the opposite direction by first inverting it with $\overline{C}_6$ (i.e., $\overline{C}_6:(\mathbf{0}_A \to \mathbf{0}_B) \mapsto (\mathbf{0}_B \to \mathbf{0}_A)$) and then use the same transformations as above. For the inter-site pairing $\chi$, we can pick $\mathbf{0}_B\to(1,0,0)_B$ as the representative second-nearest-neighbor bond. 
All other second-nearest-neighbor bonds of the parent diamond lattice can be obtained by \begin{subequations} \begin{align} T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}}:& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1,r_2,r_3)_B \to (r_1+1,r_2,r_3)_B) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ \overline{C}_6^{4}:& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1,r_2,r_3)_B \to (r_1,r_2+1,r_3)_B) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ \overline{C}_6^{2}:& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1,r_2,r_3)_B \to (r_1,r_2,r_3+1)_B) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ \overline{C}_6^{4}\circ S \circ \overline{C}_6^{3}:& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1+1,r_2,r_3)_B \to (r_1,r_2+1,r_3)_B) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ \overline{C}_6^{4}\circ S \circ \overline{C}_6:& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1+1,r_2,r_3)_B \to (r_1,r_2,r_3+1)_B) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ \overline{C}_6^{2}\circ S \circ \overline{C}_6^{3} :& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1,r_2+1,r_3)_B \to (r_1,r_2,r_3+1)_B) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}}\circ \overline{C}_6^{3} :& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1,r_2,r_3)_A \to (r_1-1,r_2,r_3)_A) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ \overline{C}_6:& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1,r_2,r_3)_A \to (r_1,r_2-1,r_3)_A) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ \overline{C}_6^{5} :& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1,r_2,r_3)_A \to (r_1,r_2,r_3-1)_A) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ \overline{C}_6 \circ S \circ \overline{C}_6^{3}:& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1-1,r_2,r_3)_A \to (r_1,r_2-1,r_3)_A) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ T_{3}^{r_{3}} \circ \overline{C}_6 \circ S \circ \overline{C}_6 :& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1-1,r_2,r_3)_A \to (r_1,r_2,r_3-1)_A) \\ T_{1}^{r_{1}}\circ T_{2}^{r_2}\circ
T_{3}^{r_{3}} \circ \overline{C}_6^{5} \circ S \circ \overline{C}_6^{3}:& (\mathbf{0}_B\to(1,0,0)_B) \mapsto ((r_1,r_2-1,r_3)_A \to (r_1,r_2,r_3-1)_A) \end{align} \end{subequations} We can further map the representative bond to bonds with the opposite direction by first inverting it with $S\circ\overline{C}_6\circ S \circ \overline{C}_6$ (i.e., $S\circ\overline{C}_6\circ S \circ \overline{C}_6:(\mathbf{0}_B\to(1,0,0)_B) \mapsto ((1,0,0)_B\to\mathbf{0}_B)$) and then use the same transformations as above. \subsection{\label{SI subsec: Saddle point field configuration} Saddle point field configuration of the \texorpdfstring{$U(1)$}{U(1)} symmetric Ansätze} Now that the transformation properties of the GMFT parameters and the relation between bonds of the lattice in terms of space group generators are known, we can build the saddle point action for every symmetry fractionalization class. However, before doing so, we make a few crucial comments regarding our procedure to build the GMFT phase diagram of Fig. 1 in the main text. In our classification of symmetric $U(1)$ Ansätze, we have assumed that all pairing terms vanish (i.e., $\chi=\chi^0=0$). However, all MF parameters are nonzero within the long-range ordered phases where the bosonic spinons are condensed (i.e., $\expval{\Phi}\ne 0$). Therefore, a rigorous construction of the GMFT phase diagram would require a complete classification of both $U(1)$ and $\mathbb{Z}_2$ symmetric Ansätze before solving the self-consistency of all classes at different points to compare their ground state energies. Only the $\mathbb{Z}_2$ Ansätze could describe ordered phases since they do not assume that any MF parameter vanishes. We have classified symmetric $\mathbb{Z}_2$ Ansätze for the octupolar regime and will present these results in an upcoming publication.
In this analysis, we have concluded that the phase factors of Eq.~\eqref{eq: phase factors U(1) PSG classification} also describe a subset of four symmetric $\mathbb{Z}_2$ Ansätze that are obtained by allowing the pairing fields to be nonzero. For simplicity's sake, we have restricted our attention to the four symmetric Ansätze described by the phase factors of Eq.~\eqref{eq: phase factors U(1) PSG classification} that can describe symmetric $U(1)$ and $\mathbb{Z}_2$ QSLs as well as ordered phases to build the phase diagram. Of course, the phase diagram reported in the main text does not rule out the possibility of a stable symmetric $\mathbb{Z}_2$ QSL since we have only restricted ourselves to a subset of possible Ansätze. In light of these remarks, we allow the pairing fields to be nonzero in the rest of the analysis. Now for the saddle point field configuration, we first conclude that the on-site pairing field is constant over the whole lattice since it transforms as in Eq.~\eqref{eq: transformation on-site pairing field} and the phase factors \eqref{eq: phase factors U(1) PSG classification} are always either 0 or $\pi$. Therefore, if we fix $\chi^{0}_{\mathbf{0}_{A},\mathbf{0}_{A}} = \chi^{0}$, we have $\chi^{0}_{\mathbf{r}_{\alpha},\mathbf{r}_{\alpha}} = \chi^{0}$. Fixing the gauge field background on the representative bond $\overline{A}_{\mathbf{0}_{A},\mathbf{0}_B}=\overline{A}$, we have for all other bonds \begin{subequations} \begin{align} \overline{A}_{(r_{1},r_{2},r_{3})_{A},(r_{1},r_{2},r_{3})_{B}} &= \overline{A} + n_{s}\pi \\ \overline{A}_{(r_{1},r_{2},r_{3})_{A},(r_{1}+1,r_{2},r_{3})_{B}} &= \overline{A} + n_{s}\pi + n_{1} \pi (r_{2} + r_{3}) \\ \overline{A}_{(r_{1},r_{2},r_{3})_{A},(r_{1},r_{2}+1,r_{3})_{B}} &= \overline{A} + n_{s}\pi + n_{1} \pi r_{3} \\ \overline{A}_{(r_{1},r_{2},r_{3})_{A},(r_{1},r_{2},r_{3}+1)_{B}} &= \overline{A} + n_{s}\pi.
\end{align} \end{subequations} For the inter-sublattice hopping field, its value on other bonds is related to the one on the representative bond $\xi_{\mathbf{0}_{A},\mathbf{0}_B}=\xi$ by \begin{subequations} \begin{align} \xi_{(r_{1},r_{2},r_{3})_{A},(r_{1},r_{2},r_{3})_{B}} &= \xi \\ \xi_{(r_{1},r_{2},r_{3})_{A},(r_{1}+1,r_{2},r_{3})_{B}} &= \xi \exp\left[ i\pi(n_{s} + n_{1} (r_{2} + r_{3})) \right] \\ \xi_{(r_{1},r_{2},r_{3})_{A},(r_{1},r_{2}+1,r_{3})_{B}} &= \xi \exp\left[ i\pi(n_{s} + n_{1} r_{3}) \right] \\ \xi_{(r_{1},r_{2},r_{3})_{A},(r_{1},r_{2},r_{3}+1)_{B}} &= \xi \exp\left[ i\pi n_{s} \right] . \end{align} \end{subequations} Lastly, for the inter-site pairing $\chi_{\mathbf{0}_{B},(1,0,0)_{B}}=\chi$, we have \begin{subequations} \begin{align} \chi_{(r_1,r_2,r_3)_B , (r_1+1,r_2,r_3)_B} &= \chi \exp\left[ i n_1 \pi ( r_{2} + r_{3} ) \right] \\ \chi_{(r_1,r_2,r_3)_B , (r_1,r_2+1,r_3)_B} &= \chi \exp\left[ i n_1 \pi r_{3} \right] \\ \chi_{(r_1,r_2,r_3)_B , (r_1,r_2,r_3+1)_B} &= \chi \\ \chi_{(r_1+1,r_2,r_3)_B , (r_1,r_2+1,r_3)_B} &= \chi \exp\left[ i n_1 \pi ( r_2 + 1 ) \right] \\ \chi_{(r_1+1,r_2,r_3)_B , (r_1,r_2,r_3+1)_B} &= \chi \exp\left[ i n_1 \pi ( r_{2} + r_{3} +1 ) \right] \\ \chi_{(r_1,r_2+1,r_3)_B , (r_1,r_2,r_3+1)_B} &= \chi \exp\left[ i n_1 \pi ( r_{3} + 1 ) \right] \\ \chi_{(r_1,r_2,r_3)_A , (r_1-1,r_2,r_3)_A} &= \chi \exp\left[ i n_1 \pi ( r_{2} + r_{3} ) \right] \\ \chi_{(r_1,r_2,r_3)_A , (r_1,r_2-1,r_3)_A} &= \chi \exp\left[ i n_1 \pi r_{3} \right] \\ \chi_{(r_1,r_2,r_3)_A , (r_1,r_2,r_3-1)_A} &= \chi \\ \chi_{(r_1-1,r_2,r_3)_A , (r_1,r_2-1,r_3)_A} &= \chi \exp\left[ i n_1 \pi ( r_{2} + 1 ) \right] \\ \chi_{(r_1-1,r_2,r_3)_A , (r_1,r_2,r_3-1)_A} &= \chi \exp\left[ i n_1 \pi ( r_{2} + r_{3} +1 ) \right] \\ \chi_{(r_1,r_2-1,r_3)_A , (r_1,r_2,r_3-1)_A} &= \chi \exp\left[ i n_1 \pi ( r_{3} + 1 ) \right] \end{align} \end{subequations} \begin{figure} \includegraphics[width=0.75\linewidth]{Unit_cell_dipolar-octupolar.pdf} \caption{Configuration of 
the gauge field background and MF parameters within the unit cell for the fully symmetric fractionalization classes. The full black line is zero for the gauge field background, and the dashed red line is $\pi$. For the other MF parameters, the full black line is the value on the representative bond (i.e., $\xi$ and $\chi$), and the dashed red line is the opposite (i.e., $-\xi$ and $-\chi$). The full circles represent bonds coming out of the plane. The triangles are tetrahedra of the A (purple) and B sublattices (green) as seen from above. \label{fig: unit cell}} \end{figure} We can arbitrarily fix $\overline{A}=0$ for simplicity's sake, whereas the parameters $\chi^{0}$, $\chi$, and $\xi$ are determined by solving the self-consistency conditions \eqref{eq: self-consistency conditions}. Some further restrictions on the different MF parameters can be determined. For the pairing field, by acting with $S\circ\overline{C}_6\circ S\circ\overline{C}_6$, we conclude that $\chi_{\mathbf{r}_1,\mathbf{r}_2}=\chi_{\mathbf{r}_2,\mathbf{r}_1}$, which could have also been deduced from the self-consistency condition. The definition of the inter-sublattice hopping field implies that $\xi_{\mathbf{r}_1,\mathbf{r}_2}=\xi_{\mathbf{r}_2,\mathbf{r}_1}^{*}$. By acting with $\overline{C}_{6}$ on the representative bond, we also conclude that $\xi_{\mathbf{0}_{B},\mathbf{0}_{A}} = \xi = \xi_{\mathbf{0}_{A},\mathbf{0}_{B}}^*$, which implies that $\xi$ is real. As a last remark, we note that if the inter-site pairing field vanishes, the field configuration for the symmetry classes with $n_{S}=0$ and $n_{S}=1$ can be related by the gauge transformation \begin{align} G: \Phi_{\mathbf{r}_\alpha} \mapsto \Phi_{(r_1,r_2,r_3)_\alpha} e^{i \pi \text{mod}(r_{1}+r_{2}+r_{3},2)}. \end{align} Therefore, in the deconfined phases with nearest-neighbor interactions, symmetry classes with $n_{S}=0$ and $n_{S}=1$ are identical.
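This gauge equivalence at $\chi=0$ can be checked bond by bond. The short sketch below (an illustration, not part of the derivation) encodes the four nearest-neighbor $\xi$ bond phases listed above and verifies that multiplying the spinon fields by $e^{i\pi\,\text{mod}(r_1+r_2+r_3,2)}$ maps the $n_S=0$ configuration onto the $n_S=1$ one; since the gauge angles are only $0$ or $\pi$, the sign convention for how $\xi$ transforms on a bond is immaterial:

```python
import itertools
import numpy as np

def xi_bond(r, mu, n1, nS, xi=1.0):
    """NN inter-sublattice hopping on the bond (r)_A -> (r + e_mu)_B:
    mu = 0 is the on-cell bond, mu = 1, 2, 3 the bonds shifted along e_mu."""
    r1, r2, r3 = r
    phase = {0: 0.0,
             1: np.pi * (nS + n1 * (r2 + r3)),
             2: np.pi * (nS + n1 * r3),
             3: np.pi * nS}[mu]
    return xi * np.exp(1j * phase)

def gauge_theta(r):
    """Gauge angle of G: Phi_r -> Phi_r * exp(i*pi*mod(r1+r2+r3, 2))."""
    return np.pi * (sum(r) % 2)

# Check: the n_S = 1 configuration is the gauge transform of the n_S = 0 one
# (the hopping on a bond r_A -> r'_B picks up theta(r) - theta(r'), and a
# phase of +/- pi is just a sign).
for n1, mu in itertools.product((0, 1), range(4)):
    for r in itertools.product(range(3), repeat=3):
        rp = list(r)
        if mu > 0:
            rp[mu - 1] += 1
        transformed = (xi_bond(r, mu, n1, nS=0)
                       * np.exp(1j * (gauge_theta(r) - gauge_theta(rp))))
        assert np.isclose(transformed, xi_bond(r, mu, n1, nS=1))
```

The on-cell bond is left untouched by $G$, while the three offset bonds each acquire a sign, exactly the difference between the two $n_S$ configurations.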
There are only two distinct deconfined phases $n_{1}=0$ and $n_{1}=1$ that we label as the 0- and $\pi$-flux state, respectively, since translation acts projectively in the latter phase~\cite{desrochers2022symmetry}. However, the $n_{S}=0$ and $n_{S}=1$ classes correspond to distinct states in the presence of spinon pair condensation (i.e., distinct $\mathbb{Z}_2$ QSLs) and in the confined phase (i.e., distinct LRO phases). This mapping between the $n_{S}=0$ and $n_{S}=1$ classes may break down in the presence of interactions beyond the nearest-neighbor level, which would introduce other MF parameters. The above results regarding the field configuration for every class are summarized in Fig.~\ref{fig: unit cell}. \section{\label{SI sec: Evaluating observables} Evaluating observables} \subsection{Saddle point action in the large-\texorpdfstring{$N$}{N} approximation} To be able to evaluate various observables, we apply the usual prescription and perform a large-$N$ approximation by replacing the hard constraint on the rotor length at every site $|\Phi^{\dagger}_{\mathbf{r}_{\alpha}}\Phi_{\mathbf{r}_{\alpha}}|=1$ by an average one \begin{equation} \frac{1}{N}\sum_{\mathbf{r}_{\alpha}}\expval{\Phi_{\mathbf{r}_{\alpha}}^{\dagger}\Phi_{\mathbf{r}_{\alpha}}}=\kappa \end{equation} for $\alpha\in\{A,B\}$, where $N$ is the number of diamond lattice primitive unit cells and $\kappa$ is a real parameter. This amounts to replacing the site- and time-dependent Lagrange multiplier $i\lambda_{\mathbf{r}_{\alpha}}^{\tau}$ by sublattice-dependent global ones $\lambda^{\alpha}$.
We can then use the translation symmetry of the problem and define the Fourier transform of the spinon field operator as \begin{align} \Phi_{\mathbf{r}_{\alpha}}^{\tau} &= \frac{1}{\sqrt{\beta N_{u.c.}}}\sum_{\mathbf{k}, i\omega_n} \Phi_{\mathbf{k},i\omega_n,\mathbf{r}_{s},\alpha}e^{-i\left(\omega_n \tau - \mathbf{k}\cdot\mathbf{r}_{\alpha} \right)}, \end{align} where $N_{u.c.}$ is the number of unit cells, $\beta=1/k_{B} T$ is the inverse temperature, and the position on the diamond lattice is \begin{align} \mathbf{r}_{\alpha} &= \mathbf{r}_{u.c.} + \mathbf{r}_{s} - \frac{\eta_{\alpha}}{2}\mathbf{b}_{0} \end{align} with $\mathbf{r}_{u.c.}$ and $\mathbf{r}_s$ labeling the position of the GMFT Ansatz unit cell and sublattice respectively. The wavevector sum is performed over the reduced first Brillouin zone associated with a GMFT Ansatz. The GMFT action then takes the form \begin{align} \label{eq: GMFT action after FT} S_{\text{GMFT}} &= \sum_{\mathbf{k},i\omega_n} \vec{\Gamma}^{\dagger}_{\mathbf{k},i\omega_{n}} \left[\mathscr{G}(\mathbf{k},i\omega_{n})\right]^{-1} \vec{\Gamma}_{\mathbf{k},i\omega_{n}}, \end{align} where the spinon vector field is \begin{align} \label{eq: spinon vector field after FT} \vec{\Gamma}_{\mathbf{k},i\omega_n}^{\dagger} =&\left( \Phi_{\mathbf{k},i\omega_n,1,A}^{*}, ..., \Phi_{\mathbf{k},i\omega_n,N_{sl},A}^{*},\Phi_{\mathbf{k},i\omega_n,1,B}^{*}, ..., \Phi_{\mathbf{k},i\omega_n,N_{sl},B}^{*},\right.\\ &\hspace{1.5mm}\left.\Phi_{-\mathbf{k},-i\omega_n,1,A}, ..., \Phi_{-\mathbf{k},-i\omega_n,N_{sl},A},\Phi_{-\mathbf{k},-i\omega_n,1,B}, ..., \Phi_{-\mathbf{k},-i\omega_n,N_{sl},B} \right) \end{align} with the indices labeling all sites of either the $A$ or $B$ sublattices inside the unit cell of a specific GMFT Ansatz, and the inverse spinon Matsubara Green's function is \begin{align} \left[\mathscr{G}(\mathbf{k},i\omega_{n})\right]^{-1} &= \frac{\omega_n^{2}}{2J_{yy}} \mathds{1}_{4N_{sl}\times 4N_{sl}} + M(\mathbf{k}). 
\label{eq: definition Matsubara Green's function} \end{align} $M(\mathbf{k})$ is a $4N_{sl}\times 4N_{sl}$ matrix, with $N_{sl}$ being the number of primitive diamond lattice unit cells within the unit cell of a specific Ansatz (i.e., $N_{sl}=1$ and $N_{sl}=4$ if $n_{1}=0$ and $n_{1}=1$, respectively). The spinon dispersion is of the form \begin{align} \mathcal{E}_{\gamma}(\mathbf{k})=\sqrt{2J_{yy} \varepsilon_{\gamma}(\mathbf{k})}, \end{align} where $\varepsilon_{\gamma}(\mathbf{k})$ are the eigenvalues of the $M(\mathbf{k})$ matrix. \subsection{\label{subsec: Important remarks on the large-N approximation} Important remarks on the large-\texorpdfstring{$N$}{N} approximation} When interpreting the real and imaginary parts of $\Phi_{\mathbf{r}_\alpha}=q_{\mathbf{r}_{\alpha},1}+i q_{\mathbf{r}_{\alpha}, 2}$ as two-dimensional coordinates, the large-$N$ approximation amounts to releasing the particle from the unit circle $q_{\mathbf{r}_{\alpha},1}^2+q_{\mathbf{r}_{\alpha},2}^2=1$ and allowing it to move on the entire two-dimensional plane instead. As a direct consequence, the momentum of the particle $Q_{\mathbf{r}_{\alpha}}=p_{\mathbf{r}_{\alpha}, 1}+i p_{\mathbf{r}_{\alpha}, 2}$, where $p_{\mathbf{r}_{\alpha}, i}$ is the conjugate momentum of $q_{\mathbf{r}_{\alpha}, i}$, becomes continuous instead of taking on discrete values. The global constraint imposed by $\lambda^{\alpha}$ only impacts its average displacement. The crucial point is determining what this average displacement should be to reproduce results from the initial model accurately. In the existing GMFT literature, $\kappa=1$ is always chosen. However, we would like to argue that there are \emph{a priori} no reasons why such a choice should be made.
Indeed, in analogy to how the average boson occupancy can be tuned to interpolate between the quantum and classical regimes in Schwinger boson mean-field theory~\cite{wang2006spin, sachdev1992kagome, messio2010schwinger}, the $\kappa$ parameter should be tuned to reproduce results in a given limit without consideration for the initial hard constraint. \begin{figure} \includegraphics[width=0.85\linewidth]{comparison_rotor_scc.png} \caption{(a) Spinon gap of the 0-flux state as a function of $J_{\pm}/J_{yy}$ with $J_{\pm\pm}=0$ for $\kappa=1$ and $\kappa=2$. (b) Lower and upper edges of the two-spinon continuum of the 0-flux state along high symmetry lines in the first Brillouin zone for $\kappa=1$ and $\kappa=2$ with $J_{\pm}/J_{yy}=0.046$ and $J_{\pm\pm}=0$. The position of the continuum can be directly compared with the QMC results of Ref.~\cite{huang2018dynamics}. \label{fig: comparison scc}} \end{figure} A regime where GMFT should be expected to reproduce known results is the Ising limit. In such a limit, the spinon dispersion becomes completely flat at an energy of $J_{yy}/2$~\cite{lacroix2011introduction}. In the Ising limit, the rotor length self-consistency equation of GMFT reduces to \begin{align} \lambda^{\alpha} = \frac{ J_{yy}}{2\kappa^{2}}, \end{align} and the spinon dispersion becomes \begin{align} \mathcal{E}_{\gamma}(\mathbf{k}) = \sqrt{J_{yy}^{2}/ \kappa^{2}} = J_{yy}/\kappa. \end{align} In order to respect the classical limit $\mathcal{E}_{\gamma}(\mathbf{k})=J_{yy}/2$, the parameter $\kappa=2$ needs to be chosen. As briefly mentioned in the main text and argued in Ref.~\cite{desrochers2022symmetry}, the choice $\kappa=2$ further improves the accuracy of GMFT. First, as illustrated in Fig.~\ref{fig: comparison scc}(a), we find a critical value of $J_{\pm}/J_{yy} \approx 0.048$ with $\kappa=2$ for the transition between 0-flux QSI and the ordered state --- in excellent agreement with QMC~\cite{banerjee2008unusual, shannon2012quantum, kato2015numerical, huang2020extended}.
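The Ising-limit consistency argument above can be reproduced in a few lines. The sketch below (illustrative only) builds the flat-band limit $M(\mathbf{k})=\lambda\,\mathds{1}$ with $\lambda=J_{yy}/(2\kappa^{2})$ and checks that the resulting spinon energy $\mathcal{E}=\sqrt{2J_{yy}\varepsilon}=J_{yy}/\kappa$ matches the exact flat dispersion $J_{yy}/2$ only for $\kappa=2$:

```python
import numpy as np

def ising_spinon_energy(J_yy, kappa, n_bands=4):
    """Flat-band (Ising) limit: M(k) = lambda * identity with
    lambda = J_yy / (2 * kappa**2); the dispersion is E = sqrt(2 * J_yy * eps)."""
    lam = J_yy / (2.0 * kappa**2)
    M = lam * np.eye(n_bands)          # any band count gives the same flat value
    eps = np.linalg.eigvalsh(M)
    E = np.sqrt(2.0 * J_yy * eps)
    assert np.allclose(E, E[0])        # completely flat band
    return E[0]

J_yy = 1.0
assert np.isclose(ising_spinon_energy(J_yy, kappa=2), J_yy / 2)  # exact flat value
assert np.isclose(ising_spinon_energy(J_yy, kappa=1), J_yy)      # off by a factor of 2
```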
In contrast, GMFT with $\kappa=1$ widely overestimates the stability of the 0-flux state by predicting a transition at $J_{\pm}/J_{yy}\approx 0.192$. Next, the position of the lower and upper edges of the two-spinon continuum obtained by GMFT with $\kappa=2$, presented in Fig.~\ref{fig: comparison scc}(b), is also in excellent agreement with the QMC results of Ref.~\cite{huang2018dynamics}. This should be contrasted with the results obtained for $\kappa=1$, where the continuum has a much smaller width and is centered at an average energy about twice as large. \subsection{\label{subsec: Construction of the phase diagram} Construction of the phase diagram} The phase diagram is constructed by solving the self-consistency equations for every symmetric Ansatz at different points in phase space and comparing their energies. If the spinons are deconfined and gapped, the phase is labeled as 0-flux or $\pi$-flux QSI if $n_{1}=0$ or $n_{1}=1$, respectively. If the spinon gap vanishes at $\mathbf{k}_c$, the bosons condense, the emergent photon is removed through the Anderson-Higgs mechanism, and we get an ordered magnet with ordering wavevector $\mathbf{k}_c$ (see Refs.~\cite{savary2012coulombic, lee2012generic, savary2021quantum} for more details). We finally note that no $\mathbb{Z}_2$ QSL where spinon pairs are condensed (i.e., $\chi\ne 0$) is observed. This does not exclude the possibility of $\mathbb{Z}_2$ QSLs since we did not study every possible symmetric GMFT Ansatz with a $\mathbb{Z}_2$ gauge structure, as explained in Section~\ref{SI subsec: Saddle point field configuration}. We find a small intermediate region where the spinons are gapped and $\xi\ne 0$, which implies ordered transverse spin components $\expval{\mathrm{S}^{\pm}}\ne 0$ from the definitions of Eq. (2) in the main text.
Even though such a phase was identified as the coexistence of deconfined excitations and LRO in the early GMFT literature~\cite{savary2012coulombic, lee2012generic}, no numerical study has ever found evidence for the existence of such a phase, and it was later argued that it might be unnatural~\cite{hao2014bosonic}. We thus adopt a conservative viewpoint and label this region as an ordered phase. \input{SupplementaryMat.bbl} \end{document}
\section{Introduction} The advancements in deep neural networks (DNNs) have reinforced the use of DNNs for biometric authentication as a security protocol~\cite{KAUR202030, HOMCHOUDHURY2019202, article2}. However, biometric information such as facial, iris, palm print, ear, and fingerprint data is becoming increasingly entangled with sensitive personal information. For example, Australian border control has been verifying passports via the ``SmartGates'' face recognition authentication system for more than a decade~\cite{frontex2010automated}. Moreover, with the onset of the COVID-19 pandemic, touchless face recognition payment methods have been developed to reduce the spread of infection, demonstrating the need for and widespread application of such technologies~\cite{9274654}. However, training DNNs for face recognition requires extensive computational power, which is often unavailable to conventional users and businesses. Outsourcing computational resources from ML-based cloud services such as Google's Cloud Machine Learning Engine\footnote{https://cloud.google.com/} or IBM's Watson Studio\footnote{https://www.ibm.com/au-en/watson} is popular. Unfortunately, these third-party services can foster an adversarial environment for DNNs, rendering them vulnerable to backdoor attacks~\cite{gu2017badnets} and allowing access to highly sensitive and confidential private data. Hence, conventional cloud-based (server-based) face recognition paradigms introduce a massive threat to user privacy~\cite{evtimov2021foggysight}. Generally, a backdoor attack involves the manipulation of a DNN, corrupting its ability to classify inputs accurately~\cite{saha2020hidden}. A malicious party may attempt to manipulate a biometric classifier (e.g., a face recognition model) into erroneously classifying another person (commonly the attacker) as a target individual. This allows a malicious party to gain access to the target's biometric authentication-protected systems, files, or physical places.
This misclassification occurs when the model is presented with a patched source image during testing/identification time. The patch -- superimposed onto the image -- can be a certain perturbation or sticker that is known only to the attacker and represents the ``backdoor'' to trigger the attack itself. If the same image is presented without the patch, the model is able to classify the image correctly, increasing both the stealth of such attacks and the difficulty of mitigation~\cite{Trojannn,saha2019hidden}. With the introduction of hidden backdoor attacks, the stealth and precision of such adversarial attacks are increasing, highlighting the potential privacy risks when DNNs are employed for security purposes such as face recognition~\cite{Li_2021_ICCV, saha2019hidden}. In a backdoor attack, the curator receives poisoned training images from a malicious party that the curator believes to be trustworthy. It is also possible that the curator themselves is the adversarial party. In any case, the curator is compromised, and regardless of the curator's intentions, a malicious party intends to access the training data to lay the groundwork for a backdoor attack. The serious privacy breach that such an attack creates threatens the integrity of any system or organization utilizing facial recognition for authentication. A hidden backdoor attack such as that proposed by Saha et al.~\cite{saha2019hidden} generates and injects poisoned images that are unidentifiable to the human eye. The model functions predictably until presented with a patched image during testing, triggering the backdoor attack. The components of this attack are illustrated in Figure~\ref{backdoor_attack}. An adversary may manifest in the form of an inside attacker who is well acquainted with an organization's security and authentication systems.
Through different approaches such as social engineering~\cite{krombholz2015advanced}, the inside attacker may gain an identified victim's trust and exploit it for model corruption. This would then allow the attacker to easily manipulate the model into misclassifying their own identity as the victim's to obtain valuable records or higher system permissions. If the attacker utilizes a hidden backdoor attack, despite potential advancements in the transparency of DNNs, the poisoned images will remain unidentifiable. Thus, our work remains crucial in maintaining the integrity of biometric authentication services. Various methods for attack identification and mitigation have been developed~\cite{chen2018detecting,tran2018spectral}. \textit{Activation clustering} is a poisoned data identification approach that analyzes the activations of the final hidden layer in a DNN~\cite{chen2018detecting}. This approach is notoriously complex as it requires the interpretation of internal DNN activations; hence, it is challenging to implement this approach in practical settings. \textit{Spectral signature detection} assesses statistical outlier values from feature representations of input images (named spectral signatures). Poison detection is possible as larger signatures arise from certain classifier signals being amplified in poisoned images~\cite{tran2018spectral}. \textit{Trigger reverse engineering} reconstructs triggers on poisoned image inputs for detection~\cite{8835365}. Attack mitigation is accomplished by relabelling and retraining~\cite{chen2018detecting,tran2018spectral}, as well as through techniques such as \textit{neuron pruning} and \textit{unlearning}~\cite{8835365}. However, these methods are ineffective at identifying and mitigating \emph{hidden} backdoor attacks, and none of the existing works investigate the privacy threats that backdoor attacks pose to face recognition.
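To illustrate, the spectral signature idea can be sketched as follows. This is a toy example with synthetic feature vectors; the function name and data are illustrative rather than taken from the cited works:

```python
import numpy as np

def spectral_signature_scores(features):
    # Center the feature representations of all training samples
    centered = features - features.mean(axis=0)
    # Top right-singular vector: the direction amplified by poisoning
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # Outlier score: squared projection onto that direction
    return (centered @ vt[0]) ** 2

# Synthetic features: 100 clean samples plus 10 "poisoned" ones
# shifted along one feature dimension
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, size=(100, 5))
poisoned = rng.normal(0.0, 0.1, size=(10, 5)) + np.array([5.0, 0, 0, 0, 0])
scores = spectral_signature_scores(np.vstack([clean, poisoned]))
suspects = set(scores.argsort()[-10:])  # the 10 highest-scoring samples
```

In this toy setting, the highest-scoring samples are the poisoned ones; in practice, flagged samples would then be removed or relabelled before retraining.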
In addition, the existing approaches often incur substantial computational costs in retraining after poison detection, especially for large recognition databases~\cite{tran2018spectral, chen2018detecting, DBLP:journals/corr/abs-2002-08313}. Further, removing poisoned images, neuron pruning, and unlearning diminish model accuracy, significantly reducing model performance for small datasets. \begin{figure} \includegraphics[scale=0.27, trim={2.2cm 1.8cm 0cm 7.8cm}, clip]{figures/attack_flowchart_fig.pdf} \caption{Example of a hidden patch and clean label targeted backdoor attack allowing unauthorized access to sensitive data. CP is the attacker class and GWB is the target class.} \label{backdoor_attack} \end{figure} We propose a novel privacy-preserving face recognition approach to mitigate backdoor attacks and related privacy issues. The proposed approach, BA-BAM (Biometric Authentication Backdoor Attack Mitigation), utilizes the following modules for its backdoor attack mitigation protocol: \begin{enumerate} \item Identifying a generalizable setting for face recognition utilizing deep learning and transfer learning. \item Determining the most vulnerable face recognition model in order to test defensive mechanisms in the worst-case scenario. \item Identifying a noise application mechanism for obfuscating poisoned images based on intuitions derived from differential privacy. \item Developing a precise poison image recognition model and utilizing it as a preliminary layer to precede the biometric authentication layer. \item Developing a method to dynamically apply calibrated noise to images categorized as poisoned, preventing the face recognition model from learning poison patterns while still enabling a model to train on essential biometric features. \end{enumerate} BA-BAM reduces the attack success rate by a minimum of 80\% in a face recognition scenario under our experimental configurations. BA-BAM nullifies the effects of poisoning on compromised face images rather than discarding them from the training dataset.
As a result, the accuracy of our model is found to decrease by no more than 2.4\% in the worst-case scenario. Moreover, BA-BAM achieves exceptionally low computational complexity while accomplishing poison detection and mitigation. The remaining sections of this paper are organized as follows: Section~\ref{background} describes background concepts necessary in supporting our work. Section~\ref{methodology} provides the BA-BAM methodology and its associated intuitions. Experiments, results, and a comparison to other methods are discussed in Section~\ref{results_discussion}. Section~\ref{related_works} considers related work, and Section~\ref{conclusion} concludes the paper. \section{Background} \label{background} This section provides the background of the preliminaries utilized in developing the proposed approach (BA-BAM). It first discusses the basics of face recognition and backdoor attacks that substantiate the importance of developing BA-BAM. Next, we discuss the potential privacy threats that occur due to backdoor attacks on face recognition systems. Finally, we discuss the basics (transfer learning and calibrated noise generation based on the intuitions of differential privacy) that were used to develop BA-BAM. \subsection{Face recognition} \label{facerecognition} Face recognition involves identifying and verifying an individual's identity given their set of distinct facial features. This process requires a pre-requisite set of images matched to a label of the person's identity to train on. This allows a future model to match an input face to a labeled face for identification within a given database~\cite{TASKIRAN2020102809}. Face recognition contains two main sub-problems, namely, the \emph{1:N identification problem} and the \emph{1:1 verification problem}. Face identification aims to match a previously unseen face image to a database of $n$ known images.
This process often requires matching a face and then labeling that face to enable further verification. The 1:1 problem is commonly used for identity verification in mobile phones~\cite{10.1007/978-3-319-97909-0_46}. In this scenario, every query is compared to a database containing a single face (usually the owner's face). Verification is a less strenuous task than identification as data for only one identity is required for training, rather than for a myriad of identities~\cite{9134370}. Various face recognition models have been developed that exhibit exceptionally high recognition accuracy. Models such as VGG-Face~\cite{BMVC2015_41}, Google Facenet~\cite{DBLP:journals/corr/SchroffKP15}, Facebook DeepFace\footnote{https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/}, ArcFace~\cite{deng2019arcface} and Dlib~\cite{King2009DlibmlAM} generate accuracies of around 97\%-99.5\% on the LFW (Labelled Faces in the Wild) dataset\footnote{http://vis-www.cs.umass.edu/lfw/}. Currently, VarGFaceNet, which achieves 99.85\% accuracy, is identified as the state of the art~\cite{9022149}. \subsection{Backdoor attacks} \label{backdoor_section} The goal of a backdoor attack is to produce a model that behaves normally on benign data but performs erroneously when presented with a patched input. An attacker injects a backdoor attack by adding ``poisoned'' data to the training dataset. Poisoned data refers to data that are manipulated to lead to erroneous model behavior, such as misclassification. A source class (the attacker's class) may be manipulated to be recognized as belonging to a target class (the victim's class). In some backdoor attacks, there is no defined target class, as the aim may be to simply misclassify the source class as any class other than its true class~\cite{DBLP:journals/corr/abs-1708-06733}.
A malicious model is developed by constructing a dataset containing both benign and poisoned data. Following the notation from~\cite{DBLP:journals/corr/abs-2111-08429}, this can be formally defined as follows. The set of poisoned training samples is $\mathcal{D}^p_{tr}=\{(\tilde{x}^i_{tr}, \tilde{y}^i_{tr})\}_{i=1}^{n}$, where $\tilde{x}^i_{tr}$ is the $i^{th}$ poisoned image and $\tilde{y}^i_{tr}$ is the corresponding label. The benign set can be similarly expressed as $\mathcal{D}^b_{tr}=\{(x^i_{tr}, y^i_{tr})\}_{i=1}^{n}$. The poisoned training dataset used for malicious training is then $\mathcal{D}^b_{tr} \bigcup \mathcal{D}^p_{tr}$. Generally, an image is poisoned by superimposing a ``patch'' that is known only to an attacker. Suppose a patch $v$ and a source image $x$. Then, $P(x, v) = \tilde{x}^i_{tr}$, where $P(\cdot)$ is some poisoning function that places trigger $v$ on $x$. The poisoning function must also, by a particular method, associate the poisoned image with a target class $t$, such that $\tilde{y}^i_{tr} = t$. A malicious model is then trained with poisoned data to instill an erroneous mapping between the attacker and victim face images. There are two main types of backdoor attacks, namely, \emph{corrupted label attacks} and \emph{clean label attacks}. Corrupted label attacks, such as the attack proposed by~\cite{DBLP:journals/corr/abs-1708-06733}, directly mislabel the poisoned images with labels pertaining to the target class, allowing the attacker to train a poisoned model with mislabelled classes. Clean label attacks do not interfere with any labels before or during training~\cite{DBLP:journals/corr/abs-2111-08429}. Such attacks may apply curated image perturbation, which allows them to bypass the need for mislabelling poisoned data.
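A minimal sketch of such a poisoning function $P(\cdot)$ might look as follows; the array shapes, the fixed patch location, and the function names are illustrative assumptions (real triggers are far subtler):

```python
import numpy as np

def poison(image, patch, top=0, left=0):
    # P(x, v): superimpose the trigger patch v onto image x at a fixed location
    poisoned = image.copy()
    ph, pw = patch.shape[:2]
    poisoned[top:top + ph, left:left + pw] = patch
    return poisoned

# Toy 8x8 grayscale "source" image and a 2x2 white trigger
x = np.zeros((8, 8))
v = np.ones((2, 2))
x_tilde = poison(x, v, top=6, left=6)  # trigger sits in the bottom-right corner
```

The original image is left untouched; only the returned copy carries the trigger, mirroring how an attacker keeps clean and patched versions of the same face.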
For example, the attack of~\cite{saha2019hidden} hides the extracted features of the patched source class (the attacker's patched face), along with the patch itself, within an image of the target (the victim). This allows the model to train and perform well using both poisoned and benign data. Misclassification will only then occur when the attacker presents their image during testing with the patch superimposed. Another example of a clean label attack is proposed by~\cite{Trojannn}, which carefully curates triggers that excite certain neurons associated with the target output label. Common requirements for successful backdoor attacks are as follows: \begin{enumerate} \item \textbf{Model stealthiness.} After the injection of the backdoor attack and model training, the performance of the model should not noticeably diminish. If the performance results are unexpected, the user is likely to notice that the model is compromised~\cite{DBLP:journals/corr/abs-2111-08429}. \item \textbf{Patch stealthiness.} In a poisoned image, the patch should be unnoticeable to the human eye in order to avoid detection. Methods such as blending the patch into the image or hiding the patch within facial features have been used~\cite{DBLP:journals/corr/abs-1712-05526, DBLP:journals/corr/abs-2012-03816, xue_he_wang_liu_2021}. The patch will only be visible when the attacker decides to trigger the attack during testing, as shown in Figure~\ref{backdoor_attack}. \item \textbf{High attack confidence.} This can be quantified as the confidence that a malicious model predicts a target class label when presented with a patched source image. \end{enumerate} \subsection{Data privacy and privacy leaks caused by backdoor attacks} In the context of data sharing and analytics, privacy can be defined as controlled information release~\cite{CHAMIKARA2020101951}. Following this notion, in this paper, we define a privacy leak as the unauthorized distribution of sensitive private information from an otherwise secure database.
This involves the release of often confidential or sensitive data to parties within an untrusted environment. Note that privacy leaks are deemed intentional in the context of targeted backdoor attacks; however, they may not be intentional in all scenarios. Hence, a backdoor attack enables a privacy leak. Suppose an inside attacker has identified and gained access to the training data for a face recognition authentication model being used in a high-security organization. The attacker will likely identify a victim based on either their high system privileges or their access to specific sensitive data. The attacker can then inject poison images for training, allowing their image to be misclassified by the compromised model as the victim. By gaining unauthorized higher system access, the attacker may access bank details, personal health information, trade secrets, or intellectual property that they can then publicly distribute. \subsection{Transfer learning} Data dependency of DNNs is a significant hurdle for their potential application diversity, as well as the general accessibility of deep learning tools~\cite{tan2018survey, 10.1007/978-3-319-46349-0_5}. Transfer learning (TL) leverages the knowledge from a source domain to improve the learning efficiency within a target domain for a particular task~\cite{9134370}. When applied to deep learning, TL mitigates the need to acquire large amounts of new data to carry out a novel task, as weights from pre-trained models can be stored and reused for the new task. Pre-trained model weights can be retained, or `frozen', so that they are not updated when the model is trained for a new task~\cite{DBLP:journals/corr/abs-1905-05901}. This can be done in two ways: (1) utilizing the first $N-1$ layers of the pre-trained model as a feature extractor that does not require updating the weights during training.
The extracted features can then be fed into a fully connected classifier component which is trained for classification. (2) Partially re-training or finetuning the pre-trained model by freezing only the first $K$ layers. This allows the model to adapt to a new task by using the pre-trained weights of the last $N-K$ layers as starting values for training~\cite{DBLP:journals/corr/abs-1912-0027sha1, DBLP:journals/corr/abs-1905-10447}. \subsection{Calibrated noise addition in differential privacy} \label{difpriv} Differential privacy (DP) is a privacy model that defines probabilistic constraints on how much information about an individual's presence in a particular database can be leaked to a third party~\cite{DBLP:journals/corr/abs-1907-13498}. The privacy budget ($\varepsilon$) is the primary parameter that defines the level of the privacy leak from a differentially private algorithm. $\varepsilon$ indicates the privacy loss of a randomized DP algorithm, or the maximum knowledge an attacker can gain. The higher the value of $\varepsilon$, the higher the privacy loss and, thus, the less private the DP algorithm is. Consider two databases, $x$ and $y$, where $y$ contains the identity of one more individual than $x$. Then, as formally defined by~\cite{8894030}, an algorithm $M$ with range $R$ is $\varepsilon$-DP if for all $S \subseteq R$: \begin{equation} Pr[(M(x) \in S)] \leq e^{\varepsilon}Pr[(M(y) \in S)] \label{eq2} \end{equation} The Laplace mechanism is a generic method for achieving DP. Laplacian noise addition to a query output (scalar) is denoted by Equation~\ref{lapnoise}. Equation~\ref{sensivitiy} defines the sensitivity ($\Delta f$) of a function (e.g., a query). This is the maximum influence that a single data point can have on the output of $f$. The scale of the Laplacian noise is equal to $\Delta f/\varepsilon$~\cite{CHAMIKARA2020101951}.
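As a concrete sketch of the Laplace mechanism, the following perturbs a scalar query output with noise of scale $\Delta f/\varepsilon$; the query value and parameter choices are hypothetical:

```python
import numpy as np

def laplace_mechanism(query_value, sensitivity, epsilon, rng):
    # Perturb the true query output with noise drawn from Lap(sensitivity/epsilon)
    scale = sensitivity / epsilon
    return query_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
true_count = 120.0  # e.g., the true answer to a counting query
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
# A smaller epsilon yields a larger noise scale: stronger privacy, lower utility
```

For a counting query the sensitivity is 1 (adding or removing one record changes the count by at most one), which is why `sensitivity=1.0` is used here.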
\begin{equation} \Delta f = \max{||f(x)-f(y)||_1} \label{sensivitiy} \end{equation} \begin{equation} \mathcal{PF}(\mathcal{D})=\mathcal{F}(\mathcal{D})+Lap\left(\frac{\Delta f}{\varepsilon}\right) \label{lapnoise} \end{equation} where the added noise $Lap(\Delta f/\varepsilon)$ follows the Laplace density \begin{equation} \frac{\varepsilon}{2 \Delta f}e^{-\frac{|x-\mathcal{F}(\mathcal{D})|\varepsilon}{\Delta f}} \end{equation} \section{Methodology} \label{methodology} This section outlines our threat model and framework for mitigating backdoor attacks against face recognition models. The main components of our contributions include developing a generalizable face recognition model through transfer learning and an intelligent poison image recognition model, followed by a method for selective perturbation of poisoned images. Hence, as depicted in Figure~\ref{fig:primary_steps}, we can identify the sequential flow of (1) generating a generalizable face recognition model with transfer learning $\rightarrow$ (2) data poisoning $\rightarrow$ (3) generating a binary classifier for poisoned image recognition (PIRM) $\rightarrow$ (4) applying calibrated Laplacian noise on poisoned images and $\rightarrow$ (5) backdoor attack mitigation scenario. \begin{figure}[H] \centering \includegraphics[width=150pt]{figures/BA-BAM_flowchart_text_fig.pdf} \caption{The flow of primary steps of BA-BAM} \label{fig:primary_steps} \end{figure} \subsection{The Threat Model} This section defines the threat model that this paper addresses, which concentrates on backdoor attacks in the context of face recognition systems. Let $\mathcal{D}_{tr}$ be the face image dataset on which a face recognition model is to be trained. In a clean model, $\mathcal{D}_{tr}$ consists of benign data $\mathcal{D}^b_{tr}$. In a poisoned model, $\mathcal{D}_{tr}$ is composed of image data injected with poison, $\mathcal{D}^b_{tr} \bigcup \mathcal{D}^p_{tr}$ (see Section~\ref{backdoor_section}).
The poisoned data ($\mathcal{D}^p_{tr}$) are curated by the attacker to achieve a targeted adversarial goal. An attacker would have in mind a premeditated target/victim class $t$ and a trigger/patch $v$, which they utilize to generate poisoned data. In order to execute the attack, the attacker would approach the face recognition system during its testing phase with an unpoisoned image of themselves patched with a superimposed trigger $v$. The model would then classify the image as belonging to class $t$. In a real-world setting, the attacker would only require access to the face recognition model training data and not the entire training environment. The attacker also does not need access to the corresponding labels of training images as this is a \emph{clean} label attack. Thus, the malicious party would not require access to any other model parameters or layers. This is unlike~\cite{DBLP:journals/corr/abs-2110-07831} and~\cite{DBLP:journals/corr/abs-1902-06531}, where the attacker requires full access to model architecture and parameters. In order to trigger the attack itself, the attacker must have access to the testing phase. Any party with access to the user interface associated with the face authentication system can present their face for verification. Thus, the attacker would readily have access during testing, allowing them to trigger the attack. Additionally, only the attacker would have knowledge of the shape, size, look, and position of the trigger/patch. Thus, the patch is completely unknown to both the victim/organization and the defender. \subsection{Identifying the most vulnerable settings for a generalizable face recognition model architecture} \label{genfacerecog} As discussed in Section~\ref{facerecognition}, literature shows different attempts to develop high-performance face recognition models. However, these are large models trained in large-scale supercomputing environments. Any change to the pre-trained weights directly harms base model accuracy.
However, we require model flexibility to experiment on the dynamics of a face recognition system (regarding binary versus multiclass models). We utilize this flexibility to analyze the backdoor mitigating effects from the proposed solutions. Therefore, we finetuned the architectural configurations of a face recognition model to identify its most vulnerable settings against backdoor attacks. When considering a wide range of model architectures, backdoor attack vulnerability varies significantly. For less vulnerable models, quantifying attack mitigation is difficult. Thus, we simulate the worst-case scenario by selecting a significantly vulnerable model. This enables us to test the extent to which BA-BAM effectively mitigates backdoor attacks. The BA-BAM framework can then be finetuned and tested to optimally reduce the success of backdoor attacks. To identify a vulnerable model as described, we poisoned and assessed the success of backdoor attacks on various generalizable DNN architectures. We used the following set of primary steps to generate a generalizable face recognition model that is vulnerable to backdoor attacks utilizing transfer learning. \begin{enumerate} \item Acquire an input face image dataset. \item Acquire a set of generalizable DNN architectures (e.g., AlexNet, Inception, ResNet, and VGG~\cite{canziani2016analysis}). \item Apply poisoning to data. \item Apply image augmentation to improve model performance. \item Train frozen and unfrozen model architectures on the given data. \item Identify a suitable DNN architecture for face recognition and attack vulnerability. \end{enumerate} We considered both frozen and partially unfrozen (a specific number of layers) versions of the model architectures, as the effects of transfer learning influence the attack's efficacy due to varying levels of generalizability. The frozen models utilized pre-trained ImageNet weights~\cite{krizhevsky2012imagenet} for feature extraction before classification.
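The frozen-extractor setup can be illustrated in miniature: a fixed weight matrix stands in for real pre-trained weights and acts as the feature extractor, while only a logistic-regression head is trained. All weights and data here are synthetic assumptions, not the actual models used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pre-trained weights: frozen, never updated during training
W_frozen = rng.normal(size=(10, 4))

def extract_features(x):
    return np.tanh(x @ W_frozen)  # the frozen layers act as a fixed feature map

# Toy binary task; only the fully-connected head (w, b) is trainable
x = rng.normal(size=(200, 10))
y = (x[:, 0] > 0).astype(float)
w, b = np.zeros(4), 0.0

def loss_and_grads(w, b):
    f = extract_features(x)
    p = 1.0 / (1.0 + np.exp(-(f @ w + b)))  # sigmoid classification head
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return loss, f.T @ (p - y) / len(y), (p - y).mean()

loss_before, _, _ = loss_and_grads(w, b)
for _ in range(300):  # gradient descent on the head only; W_frozen is untouched
    _, gw, gb = loss_and_grads(w, b)
    w -= 0.5 * gw
    b -= 0.5 * gb
loss_after, _, _ = loss_and_grads(w, b)
```

The cross-entropy loss of the head decreases while `W_frozen` never changes, which is the essence of using a pre-trained network as a fixed feature extractor.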
In addition, each model was mounted with a fully-connected component for classification. Unfrozen models allowed all layers to be finetuned during training. \subsection{Data (image) poisoning} \label{poisoning} Before poisoning, a source class and a target class must be chosen from the classes available in the dataset. This supports the context for a targeted attack, where an attacker (source class) is misclassified as belonging to the target class (victim). A portion of the source class images have a superimposed patch that ``triggers'' the backdoor attack during testing. We apply the image poisoning process proposed by~\cite{saha2019hidden} that generates hidden, clean-label poison images for backdoor injection. The method involves optimizing a poisoned image that in pixel space (visually) looks as close as possible to a target image, but in feature space looks as close as possible to a patched source image (attacker image). This generates a dataset that reinforces an erroneous mapping between the source and the target class during training. Formally, given target image $t$, patch $p$, and source image $s$ ($\tilde{s}$ is the patched source image), optimization for a poisoned image $z$ involves solving Equation \eqref{eq3} subject to the pixel-space closeness constraint $||z-t||_{\infty}<\epsilon$, where $f(\cdot)$ are the intermediate features of the DNN~\cite{saha2019hidden}. \begin{equation} \arg\min_{z}||f(z) - f(\tilde{s})||^2_2 \label{eq3} \end{equation} \subsection{Generating a binary classifier for poison image recognition (PIRM)} Here, we develop a poisoned image recognition model (PIRM) to detect the image distortions that correlate to poisoning. This is done by training a binary classifier on both poisoned and clean data. For this task, we first generated data under clean and poisoned classes. Poisoning was carried out according to the steps outlined in Section~\ref{poisoning}. We chose a general non-face dataset (e.g., ImageNet) for improved generalizability (to reduce the specificity of the PIRM on a given face dataset).
Consequently, the PIRM would maintain a high accuracy regardless of the input dataset used. This ensures that it remains effective in a real-world setting where poisoned images injected by an attacker do not correlate to any specific face dataset. We employ transfer learning to train this model to enhance poison data learning generalizability. Utilizing pre-trained weights with a robust architecture such as VGG16 allowed the network to isolate poisoning with high confidence. This model was adapted to binary classification by adding a fully-connected component onto the base architecture and utilizing binary cross-entropy as the loss function. The first few layers of a DNN identify more generic image features, such as lines and shapes~\cite{SCHMIDHUBER201585}. Thus, in a face recognition DNN, basic image features would correlate to basic facial components such as eyes and mouths. Unlike these low-level components, the perturbations associated with poisoning are high-level patterns that are distributed throughout the entire area of an image. According to this intuition, the last few layers of a DNN are responsible for high-level components that identify such patterns. Following these dynamics, we finetuned the final two convolutional blocks of the PIRM to encourage it to learn poison perturbations with high confidence. \begin{figure} [t] \includegraphics[width=240pt]{figures/perturbed_images_fig.pdf} \caption{Poisoned and clean images of GWB with varying $\varepsilon$ values of calibrated Laplacian noise.} \label{fig:perturbed_images} \end{figure} \subsection{Applying calibrated Laplacian noise on poisoned images} \label{selective_noise} Once poison classification is complete, calibrated Laplacian noise is selectively applied to images in the training dataset. To decrease the attack success rate (ASR) and maintain model accuracy, noise is applied only to images that the PIRM deems poisoned.
Utilizing calibrated noise to perturb poisoned images, as opposed to removing the poisoned images from the dataset entirely, enables the data to be preserved. Preserving the poisoned data permits the original facial features to be retained to an extent. This allows the model to still train on essential biometric features from the poisoned images whilst also nullifying the effects of the attack. Ultimately, this increases the robustness and accuracy of the model. As illustrated in Figure~\ref{fig:perturbed_images}, adding noise to images reduces the ability to identify a particular face. During training, this acts to discourage the face recognition model from learning the poison-based face features of an individual. This aims to break the mapping between the poisoned source images and the target. As a result, during test time, a patched image of the attacker will likely be classified as a class other than the target, mitigating the backdoor attack. We initially investigated the trade-off between utility and attack resistance through noise addition to an entire compromised target class. Adding noise to the entirety of the victim's class affects both clean images and poisoned images. Thus, we investigated the trade-off between model accuracy and ASR in an extreme case. We utilized the knowledge gained from this to identify an effective selective perturbation mechanism that can maintain a balance between model accuracy and attack resistance.
\scalebox{0.96}{ \let\@latex@error\@gobble \begin{algorithm}[H] \caption{Algorithm for applying Laplacian noise to poison-flagged images per class} \label{algo1} \label{ngramalgo} \KwIn{ \begin{tabular}{l c l} $\mathcal{D}_{tr, k}$ & $\gets $ & set of images in class $k$,\\ $ x^p_k$ & $\gets$ & image flagged as poisoned \end{tabular} } \KwOut{ \begin{tabular}{l c l} \\ $\tilde{x}^p_{k}$ & $ \gets $ & perturbed image \end{tabular}} investigate utility and attack resistance trade-off (refer to Section~\ref{selective_noise}) \\ calculate mean image $\mathcal{F}^m_k = \frac{1}{n}\sum^n_{i=1} x^i_k$ over $\mathcal{D}_{tr, k}$ \\ \For{image $x^i_k$ in $\mathcal{D}_{tr, k}$}{ calculate Euclidean distance $\| x^i_k - \mathcal{F}^m_k \|_2$} get maximum Euclidean distance, $E_{max}$ \\ $\Delta f = \frac{\| x^p_k - \mathcal{F}^m_k \|_2}{E_{max}}$ \\ apply noise with density $\frac{\varepsilon}{2 \Delta f}e^{-\frac{|x-x^p_k|\varepsilon}{\Delta f}}$ to $x^p_k$\\ return perturbed image $\tilde{x}^p_{k}$\; \end{algorithm} } Algorithm~\ref{algo1} shows the steps of adding calibrated Laplacian noise to an image. In this step, we follow the intuitions of differential privacy, where more noise is added when the function sensitivity is high (refer to Section~\ref{difpriv}). This principle is used to regulate the amount of noise added to perturb poisoned images. The sensitivity, $\Delta f$, for applying noise to an image flagged as poisoned, $x^p_k$, is given by the image's Euclidean distance from the mean image, $\mathcal{F}^m_k$, of class $k$. This value is normalized by the maximum Euclidean distance over all images in class $k$. Noise is added to each image based on its normalized sensitivity, with higher sensitivity values leading to greater noise addition. The distance between poisoned images and the mean image tends to be high due to the distinct patterns generated in a poisoned image. The proposed noise addition mechanism thus provides an intuitive system for adding controlled noise.
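The sensitivity computation and noise draw in Algorithm~\ref{algo1} amount to a handful of array operations. The following NumPy sketch is illustrative only; the function name, default $\varepsilon$, and array handling are our assumptions rather than the BA-BAM implementation:

```python
import numpy as np

def add_calibrated_laplacian_noise(class_images, flagged_image, epsilon=0.01, rng=None):
    """Perturb a poison-flagged image with Laplacian noise whose scale is
    calibrated by the image's normalized deviation from the class mean."""
    rng = np.random.default_rng() if rng is None else rng
    imgs = np.asarray(class_images, dtype=np.float64)

    # Mean image of the class.
    mean_image = imgs.mean(axis=0)

    # Euclidean distance of every class image from the mean image.
    dists = np.linalg.norm((imgs - mean_image).reshape(len(imgs), -1), axis=1)
    e_max = dists.max()

    # Normalized sensitivity for the flagged image: higher deviation -> more noise.
    sensitivity = np.linalg.norm((flagged_image - mean_image).ravel()) / e_max

    # Laplace mechanism with scale b = sensitivity / epsilon.
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=flagged_image.shape)
    return flagged_image + noise
```

Because the scale grows with the flagged image's deviation from the class mean, a clean image misclassified as poisoned (which usually sits close to the mean) receives comparatively little perturbation.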
A high noise level is applied to images with a higher deviation from the mean image, subjecting poisoned images to more noise than their clean counterparts. This adds robustness to the PIRM in the case of misclassification, for example, where a clean image is classified as poisoned. In this scenario, the misclassified image would likely deviate less from the mean image than a truly poisoned image and, therefore, receive less perturbation. This preserves the accuracy of the model by reinforcing high classification accuracy in clean classes. \scalebox{0.96}{ \let\@latex@error\@gobble \begin{algorithm}[H] \caption{Abstract algorithm for biometric authentication backdoor attack mitigation (BA-BAM)} \label{algo2} \KwIn{ \begin{tabular}{l c l} $\mathcal{D}_{tr, n} = \mathcal{D}^b_{tr} \bigcup \mathcal{D}^p_{tr}$ & $\gets $ & untrusted training set,\\ && $i = (1,\dots,n)$ \\ $ PIRM(\cdot)=\{0,1\}$ & $\gets$ & poison image classifier\\ $\mathcal{F}(\cdot)$ & $\gets $ & untrained face \\ && recognition DNN \end{tabular} } \KwOut{ \begin{tabular}{l c l} \\ $\mathcal{F}(\mathcal{D}_{tr, clean})$ & $ \gets $ & trusted face\\ & & recognition model \end{tabular}} identify generalizable and vulnerable face recognition DNN architecture \\ \For{image and label, $(x^i_{tr},y^i_{tr})$ in $\mathcal{D}_{tr,n}$}{ apply $PIRM$ \\ \If{$PIRM(x^i_{tr}) = 0$, i.e., $(x^i_{tr},y^i_{tr}) \in \mathcal{D}^b_{tr}$} {add $(x^i_{tr},y^i_{tr})$ to clean trainset $\mathcal{D}_{tr, clean}$} \If{$PIRM(x^i_{tr}) = 1$, i.e., $(x^i_{tr},y^i_{tr}) \in \mathcal{D}^p_{tr}$} { \For{$k \in$ set of classes}{ apply Algorithm~\ref{algo1} } add perturbed $(x^i_{tr},y^i_{tr})$ to $\mathcal{D}_{tr, clean}$} } return trained $\mathcal{F}(\mathcal{D}_{tr, clean})$\; \end{algorithm} } \subsection{The backdoor attack mitigation framework} \begin{figure}[ht] \includegraphics[scale=0.26,trim={0cm 3cm 0cm 1.5cm},
clip]{figures/BA-BAM_flowchart_fig.pdf} \caption{From the left, the attack injects poison images into training. Untrusted images are fed through a poison classifier before entering model training. Images flagged as poisoned are perturbed with calibrated Laplacian noise.} \label{fig:selective_flow_chart} \end{figure} The poison recognition component and selective application of calibrated Laplacian noise are applied together as a preliminary layer to the final classification module of a face authentication system. We assume that this component runs on a dedicated server (e.g. a cloud server) as depicted in Figure~\ref{fig:selective_flow_chart}. As shown in Figure~\ref{fig:selective_flow_chart}, BA-BAM uses Algorithm~\ref{algo2} to filter training data sourced from any party (trusted or adversarial). This module acts as a data pre-processing component during face recognition model training. Hence, an agency can successfully mitigate the privacy threats that occur from backdoor attacks on face recognition models. \section{Results and Discussion} \label{results_discussion} This section discusses the experimental configurations, the experiments, and their results. All experiments were conducted on an HPC environment provided by Pawsey Supercomputing Research Centre. The partition we utilized consisted of 2x Intel Xeon Silver 4215 2.5GHz CPUs with 16 cores per node (192GB-RAM), 2x NVIDIA V100 GPUs (16 GB HBM2 memory each). Firstly, we demonstrate base model performance, investigating model architecture and applying transfer learning on face recognition models. Then, we investigate the effects of complete perturbation versus selective perturbation and experiment with various dynamics of the noise parameters. Subsequently, we train a binary poison detection model on two datasets to compare the effects on generalizability. Finally, we assess the computational complexity of BA-BAM and provide a comparative analysis against related methods. 
\subsection{Experimental specifications - Data description} Figure~\ref{dataset_samples} provides an impression of the different images available in the three datasets (LFW Faces, CelebA Faces, ImageNet) used in our experiments. \begin{figure}[t] \centering \includegraphics[scale = 0.45]{figures/sample_data_fig.pdf} \caption{Sample images from the CelebA, LFW, and ImageNet datasets. Tiny ImageNet is a subset of ImageNet and is thus represented by ImageNet samples.} \label{dataset_samples} \end{figure} \begin{itemize} \item \textbf{LFW Faces} LFW contains 13,233 labelled 250$\times$250 face images of 5,749 different people (classes). The dataset includes variations in pose, lighting, age, gender, race, color saturation, and other parameters. The faces were detected and centralized by the Viola-Jones detector~\cite{article3}. LFW was used for face recognition training due to its high resolution, face centralization, diversity, and adaptability to many backdoor attack scenarios. The varying class sizes of LFW allowed us to analyze the effect of poisoning on different class distributions. High-performing face recognition models often use centralized and high-resolution images for training~\cite{DBLP:journals/corr/SchroffKP15, BMVC2015_41}. High-quality images of faces closely simulate real-world face recognition scenarios, producing far more realistic results. Thus, we employed the LFW dataset for all experiments involving poisoned face recognition models to investigate the effects of BA-BAM. \item \textbf{CelebA Faces} CelebA contains 202,599 face images of size 178$\times$218 of 10,177 celebrities. The images have pose variation and background clutter~\cite{liu2015faceattributes}. CelebA was partially poisoned to train the binary poison recognition classifier. \item \textbf{ImageNet} A subset of ImageNet~\cite{ILSVRC15} consisting of 20 classes and 14,697 images was generated. The 20 classes were selected to replicate the hand-picked ImageNet pairs utilized in~\cite{saha2019hidden}.
Tiny ImageNet, a subset of ImageNet containing 100,000 images of 200 classes (500 per class), was also used. Tiny ImageNet was partially poisoned in order to train the binary poison recognition classifier. \end{itemize} \begin{center} \begin{table}[!ht] \centering \caption{LFW classes used in order of representation.} \label{table:1} \scalebox{0.82}{ \begin{tabular}{ |c|c||c| } \hline \multicolumn{2}{|c||}{\textbf{Binary Classifier}} & \textbf{Multiclassifier Classes}\\ \hline \textbf{Negative Class} & \textbf{Positive Class} & George\_W\_Bush \\ \hline Colin\_Powell & George\_W\_Bush & Colin\_Powell \\ \hline Tony\_Blair & & Tony\_Blair \\ \hline Donald\_Rumsfeld & & Donald\_Rumsfeld \\ \hline Gerhard\_Schroeder & & Gerhard\_Schroeder \\ \hline Ariel\_Sharon & & Ariel\_Sharon \\ \hline Hugo\_Chavez & & Hugo\_Chavez \\ \hline Junichiro\_Koizumi & & Junichiro\_Koizumi \\ \hline Jean\_Chretien & & Jean\_Chretien \\ \hline John\_Ashcroft & & John\_Ashcroft \\ \hline Jacques\_Chirac & & Jacques\_Chirac \\ \hline Serena\_Williams & & Serena\_Williams \\ \hline \end{tabular} } \end{table} \end{center} \subsection{Binary backdoor attack scenario setup} The two most represented classes in LFW were utilized to construct 1:1 face verification attack scenarios. These were George W. Bush (GWB) and Colin Powell (CP). GWB (530 images) was assigned as the target class (victim), with CP (236 images) as the source (attacker) class. This allowed for greater flexibility in tuning attack parameters than would be possible with less represented classes. For the binary scenario, training and testing involve two classes: a positive class consisting only of target class images (the authorized user) and a negative class consisting of other faces (unauthorized users). As such, only images of GWB constituted the positive class, with other LFW classes contributing to the negative class, as seen in Table~\ref{table:1}.
The negative class consisted of the 10 most highly represented classes (excluding GWB and CP), which amounted to 350 images in total. The negative class was composed of the same images throughout all binary experiments. This was done to provide consistency between experiments and to maintain a balance between the positive and negative classes. Poisoned images of CP were added to the negative class for malicious model training. \subsection{Multi-class backdoor attack scenario setup} The 12 classes with the highest representation were selected from the LFW dataset for use in multi-classification scenarios. The selected classes are shown in Table~\ref{table:1}. Each class contains at least 50 images, with the data subset totaling 1,560 images. As in the binary scenario, the two classes with the highest representation were selected as the target (GWB) and source (CP) classes. \begin{center} \begin{table}[!ht] \centering \caption{Binary and multiclass models on poisoned LFW and ImageNet data.} \scalebox{0.85}{ \begin{tabular}{ |c|c|c| } \hline \textbf{Model Type \& Dataset} & \textbf{Model Accuracy} & \textbf{ASR} \\ \hline \textbf{Binary ImageNet} & $98.0\%$ & $92.0\% \pm 0.0\%$\\ \hline \textbf{Binary LFW} & $78.3\%$ & $100.0\% \pm 0.0\%$\\ \hline \textbf{Multiclass ImageNet} & $92.8\%$ & $89.8\% \pm 0.6\%$\\ \hline \textbf{Multiclass LFW} & $42.5\%$ & $100.0\% \pm 0.0\%$\\ \hline \end{tabular} } \label{table:binary_multiclass} \end{table} \end{center} \subsection{Source class poisoning} Unless otherwise specified, 80\% of the CP class was used for poisoning in all experiments. This fraction was chosen to maximize the effect of the attack whilst leaving enough unpoisoned images of the attacker class for training and testing. This fraction of poisoned images was then added to the negative class for training the malicious models. \subsection{Attack success rate (ASR)} Attack success was quantified as a percentage in terms of Equation~\ref{eq:ASR}.
To calculate ASR, suppose $\#\tilde{\mathcal{F}}_\theta(\mathcal{D}_{patched})$ is the number of correct target misclassifications output by a poisoned model, $\tilde{\mathcal{F}}_\theta$, on a patched testing set, $\mathcal{D}_{patched}$, of attacker images. \begin{equation} ASR = \frac{\#\tilde{\mathcal{F}}_\theta(\mathcal{D}_{patched})}{|\mathcal{D}_{patched}|} \label{eq:ASR} \end{equation} The images in $\mathcal{D}_{patched}$ were disjoint from the training set. Patches were applied at a randomized position on the attacker images during testing. \subsection{Reproducing backdoor attack models on ImageNet and LFW (Threat Model)} We reproduced the exact attack scenario outlined in the methodology of~\cite{saha2019hidden}. The attack settings were reproduced on AlexNet (fc8 unfrozen) as a feature extractor on the ImageNet class pairs described in the paper\footnote{epochs=24, learning rate=0.0001, momentum=0.9, batch size=256}. A hand-picked source class (`warplane') and target class (`french\_bulldog') were chosen to create a poisoned classification model for the binary and multiclass scenarios. Both binary and multiclass models utilizing ImageNet data displayed high accuracy and ASR, as expected (Table~\ref{table:binary_multiclass} and Figure~\ref{fig:base_performance}). Once we obtained a configuration that produced high accuracy on ImageNet, we followed similar steps to train a generalizable model on LFW using transfer learning. The primary goal of this step was to expose and mitigate backdoor attacks in vulnerable models rather than to emphasize the models' baseline accuracy.
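Computing Equation~\ref{eq:ASR} reduces to counting how many patched attacker images the poisoned model assigns to the target class. A minimal sketch, where the prediction callable and label names are placeholders rather than our actual evaluation code:

```python
def attack_success_rate(model_predict, patched_images, target_label):
    """ASR (%) = (# patched attacker images classified as the target) / |D_patched|."""
    hits = sum(1 for image in patched_images if model_predict(image) == target_label)
    return 100.0 * hits / len(patched_images)
```

For example, a fully backdoored model that maps every patched image to the target yields an ASR of 100\%.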
\begin{figure}[H] \centering \subfloat[]{\includegraphics[width=0.22\textwidth, trim={2cm 8.5cm 11cm 2.3cm}, clip]{figures/ImageNet_binary_model_plot.pdf}\label{fig11}} \hfill \subfloat[]{\includegraphics[width=0.22\textwidth, trim={2cm 8.5cm 11cm 2.3cm}, clip]{figures/LFW_binary_model_plot.pdf}\label{fig21}} \subfloat[]{\includegraphics[width=0.22\textwidth, trim={2cm 8.5cm 11cm 2.3cm}, clip]{figures/ImageNet_multiclass_model_plot.pdf}\label{fig12}} \hfill \subfloat[]{\includegraphics[width=0.22\textwidth, trim={2cm 8.5cm 11cm 2.3cm}, clip]{figures/LFW_multiclass_model_plot.pdf}\label{fig22}} \caption{Performance plots comparing binary and multiclassifier model training on poisoned LFW and ImageNet datasets.} \label{fig:base_performance} \end{figure} The ASRs of both models trained on LFW data were higher than those of the ImageNet models. Both the binary and multiclass LFW models were susceptible to the backdoor attack, displaying a 100\% ASR. This highlights the vulnerability of biometric face data to such attacks.
\begin{center} \begin{table}[!ht] \centering \caption{Model Attack Performance and Classification Accuracy} \label{table:3} \scalebox{0.85}{ \begin{tabular}{ |c|c|c|c| } \hline \textbf{Model} & \textbf{ASR} & \textbf{Accuracy} & \textbf{Trainable Params.}\\ \hline {AlexNet Unfrozen} & {100.0\%} & {93.0\%} & {57,012,034}\\ \hline {AlexNet Frozen} & {100.0\%} & {81.0\%} & {8,194}\\ \hline {Inception Unfrozen} & {13.2\%} & {93.0\%} & {24,348,900}\\ \hline {Inception Frozen} & {76.4\%} & {65.0\%} & {5,636}\\ \hline {ResNet Unfrozen} & {5.6\%} & {95.0\%} & {11,177,538}\\ \hline {ResNet Frozen} & {44.8\%} & {82.0\%} & {1,026}\\ \hline {VGG Unfrozen} & {8.0\%} & {96.0\%} & {128,780,034}\\ \hline {VGG Frozen} & {29.2\%} & {84.0\%} & {8,194}\\ \hline \end{tabular} } \label{architecture} \end{table} \end{center} \subsection{Investigating vulnerabilities of model architectures and transfer learning} We chose AlexNet, ResNet, VGG, and InceptionV3~\cite{canziani2016analysis} to investigate the impact of different benchmark architectures on model vulnerability. Experiments were executed on both partially frozen models (using pre-trained ImageNet weights) and unfrozen versions of these models. As shown in Table~\ref{architecture}, all unfrozen models displayed higher classification accuracy than their frozen counterparts. This is due to the decreased relevancy of the pre-trained ImageNet weights when applied to face data. The attack performs best on AlexNet, with both frozen and unfrozen models showing a 100\% ASR. Due to its high classification accuracy of 93\% and complete vulnerability to the attack, unfrozen AlexNet is used for all remaining experiments.
\begin{center} \begin{table}[!ht] \centering \caption{Unpatched source image misclassification.} \label{table:source_misclassification} \scalebox{0.82}{ \begin{tabular}{ |c|c|c| } \hline \textbf{Model Type} & \textbf{State} & \textbf{ASR} \\ \hline Multiclass & Clean & $21.1\% \pm 3.4\%$\\ \hline Multiclass & Poisoned & $25.4\% \pm 0.9\%$\\ \hline Binary & Clean & $30.6\% \pm 1.6\%$\\ \hline Binary & Poisoned & $31.2\% \pm 3.0\%$\\ \hline \end{tabular} } \end{table} \end{center} \vspace{-0.7cm} \subsection{Source image misclassification} Unpatched images of CP (attacker) were tested on models trained on unpoisoned data in order to establish whether a portion of the ASR was simply natural misclassification. As seen in Table~\ref{table:source_misclassification}, we found that the source image ASR for the clean models was approximately equivalent to the rate shown by the unfrozen AlexNet poisoned models. This supports our assumption that the source image attack successes are a byproduct of natural misclassification. \subsection{Investigating poisoned fractions of the attacker class} We generated poisoned models by poisoning 15\%, 30\%, 50\%, and 80\% of the CP attacker class. Poisoning more than 80\% of the class was avoided, as a sufficient number of unpoisoned images of the attacker class must be left for validation and testing.
\begin{center} \begin{table}[!ht] \centering \caption{Proportion of attacker (source) class used for poisoning.} \label{table:attacker_prop} \scalebox{0.82}{ \begin{tabular}{ |c|c|c| } \hline \textbf{\% Attacker Class Poisoned} & \textbf{Model Accuracy} & \textbf{ASR} \\ \hline \textbf{80\%} & $93.0\%$ & $100.0\% \pm 0.0\%$\\ \hline \textbf{50\%} & $85.0\%$ & $95.6\% \pm 1.2\%$\\ \hline \textbf{30\%} & $85.0\%$ & $91.4\% \pm 1.6\%$\\ \hline \textbf{15\%} & $89.0\%$ & $82.8\% \pm 2.4\%$\\ \hline \end{tabular} } \end{table} \end{center} \vspace{-0.7cm} As shown in Table~\ref{table:attacker_prop}, poisoning 80\% of the class resulted in a completely successful attack. The decrease in attack success when fewer poisoned images are used suggests that the strength of the mapping between the source and target class also decreases. Thus, we poison 80\% of the attacker (CP) class for subsequent experiments. By doing so, we maximize the strength of the attack in order to enable strenuous testing of BA-BAM. As expected, model accuracy dropped significantly (by 8\%) for the 50\% and 30\% poisoned models; however, for the 15\% poisoned model, accuracy decreased by only 4\%. This may reflect a lower misclassification rate due to the lower proportion of poisoned images. Training on fewer poisoned images may prevent the model from building a strong erroneous mapping between the source and target class. As a result, the recognition model is more likely to classify attacker images correctly. \begin{center} \begin{table}[!ht] \centering \caption{Poison Detection Model - Trained on Tiny ImageNet and CelebA} \scalebox{0.82}{ \begin{tabular}{ |c|c|c| } \hline \textbf{} & \textbf{Binary Classifier} & \textbf{Multiclassifier} \\ \hline \textbf{Accuracy - Tiny ImageNet } & $99.31\%$ & $99.15\%$\\ \hline \textbf{Accuracy - CelebA} & $99.99\%$ & $99.79\%$\\ \hline \textbf{No. Poisoned Images} & $372$ & $372$\\ \hline \textbf{No.
Clean Images} & $644$ & $1042$\\ \hline \end{tabular} } \label{table:poison_detect} \end{table} \end{center} \vspace{-0.7cm} \subsection{Poison image recognition model (PIRM)} A pre-trained VGG16 model utilizing ImageNet weights was trained with the final three convolutional layers unfrozen for finetuning\footnote{activation=softmax, optimizer=Adam, learning rate=0.0001, loss=binary crossentropy}. A fully-connected module consisting of two dense layers with 35\% dropout was added as the classification component. This was trained on both poisoned and clean CelebA images. Classes containing at least 23 images were chosen to undergo poisoning (a total of 4320 classes). This generated a balanced dataset for training, where approximately 50\% (100,000 images) of the total CelebA dataset was poisoned. Source images to be used for patching were randomly chosen from the remaining CelebA dataset (these images did not overlap with the poisoned images). The dataset was split with 30\% used for testing and 70\% for training. The model achieved a 99.98\% testing accuracy in classifying poisoned and clean images. The same model settings and architecture were trained using Tiny ImageNet. Of the total 200 classes, 100 were poisoned, which provided 50,000 clean and 50,000 poisoned images for training. Similarly, the dataset was split into 70\% for training and 30\% for testing. 
\begin{figure} [H] \centering \subfloat[Trained on poisoned Tiny ImageNet.]{\includegraphics[width=0.22\textwidth, trim={2cm 2cm 7cm 2cm}, clip]{figures/binary_classifier_performance_Tiny_ImageNet_plot.pdf}\label{fig1}} \hfill \subfloat[Trained on poisoned CelebA.]{\includegraphics[width=0.22\textwidth, trim={2cm 2cm 7cm 2cm}, clip]{figures/binary_classifier_performance_CelebA_plot.pdf}\label{fig2}} \caption{Model loss and accuracy of the poison image recognition model over the iterations of the first epoch.} \label{pirm_plots} \end{figure} As shown in Table~\ref{table:poison_detect}, both models reached accuracies above 99\%; Figure~\ref{pirm_plots} shows that they did so within a single epoch and converged thereafter. For binary classification, the PIRM trained on CelebA achieved an accuracy 0.68\% higher than the model trained on Tiny ImageNet. The multiclassification scenario yielded similar results, with the CelebA PIRM achieving a 0.64\% higher accuracy than its Tiny ImageNet counterpart. The minor deviation in accuracy between the two PIRMs demonstrates the generalizability of image poison detection, showing that poison detection is accurate regardless of an image's original content. This confirms that the PIRM effectively contributes to high-performance backdoor attack mitigation in BA-BAM. \subsection{Deriving insights through noise addition to an entire compromised class} We applied $\varepsilon = 0.5$ Laplacian noise to all images in the GWB class to investigate the impact of perturbation on model accuracy and ASR. As expected and discussed in Section~\ref{selective_noise}, after complete perturbation, the ASR dropped to 0\%. However, the model accuracy decreased to 50\%, indicating no ability to distinguish the negative class from the perturbed positive class. The complete lack of attack success can be explained by the negative class not having noise added while the positive class is entirely perturbed.
Noise addition to the entire positive class obscures all of the class's facial features. Due to this, only the features of the negative class remain to teach the model during training. The entirely noisy positive class does not provide useful facial components for model training. Thus, any image presented during model testing with basic facial features is classified as the negative class. In turn, as the patched images introduced by an attacker during testing do not have noise applied, the model classifies them as part of the negative class, causing the attack to fail. Thus, while complete perturbation defeats the attack, it diminishes model accuracy, making the model practically unusable. \vspace{-0.7cm} \subsection{Selective and automated perturbation} While complete perturbation of the source class provides a substantial decrease in ASR, it also catastrophically decreases classification accuracy. In order to combat this accuracy loss, selective perturbation is introduced instead. Selective perturbation (SP) preserves classification accuracy by perturbing only poisoned images. To investigate its effectiveness, the SP process was initially carried out manually with $\varepsilon$ values of 0.5, 0.1, 0.05, 0.01, and 0.005. All known poisoned images within the GWB class were subjected to noise addition, simulating selective perturbation. The effect of each level of image perturbation can be seen in Figure~\ref{fig:perturbed_images}. Down to $\varepsilon = 0.01$, the ASR decreases as $\varepsilon$ decreases. For values of $\varepsilon$ below 0.01, attack success increases again due to misclassification, as the high level of perturbation renders images unrecognizable. Thus, we propose that $\varepsilon = 0.01$ provides the optimal balance between classification accuracy and backdoor attack mitigation, and it is therefore used as the default setting for the PIRM.
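The turning point at $\varepsilon = 0.01$ is consistent with the Laplace mechanism, whose noise scale is $b = \Delta f / \varepsilon$: shrinking $\varepsilon$ inflates the noise. The following check of the scales implied by the tested $\varepsilon$ values (with sensitivity normalized to 1) is purely illustrative:

```python
def laplace_scale(sensitivity, epsilon):
    """Scale parameter b of the Laplace mechanism; larger b means heavier noise."""
    return sensitivity / epsilon

# Epsilon values explored in the selective-perturbation experiments.
epsilons = [0.5, 0.1, 0.05, 0.01, 0.005]
scales = [laplace_scale(1.0, eps) for eps in epsilons]

# Smaller epsilon -> strictly larger noise scale; below 0.01 the perturbation
# becomes strong enough to render faces unrecognizable.
assert all(a < b for a, b in zip(scales, scales[1:]))
```

At $\varepsilon = 0.005$ the scale is 20 times that of $\varepsilon = 0.1$, which explains why even clean-looking faces become unrecognizable at the smallest settings.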
As shown in Figure~\ref{fig:performance_plots}, selective perturbation preserves model accuracy, with accuracy dropping by $2.5\%$ for the binary classifier and improving by $8.2\%$ for the multiclassifier ($\varepsilon = 0.01$). \begin{figure}[H] \includegraphics[scale = 0.32, trim={2cm 2cm 2cm 2cm}, clip]{figures/perturbation_plot.pdf} \caption{Plot comparing binary and multiclassifier model performance utilizing various levels of selective perturbation.} \label{fig:performance_plots} \end{figure} \subsection{Computational complexity} \label{compute_complx} BA-BAM comprises two primary segments that precede the final face recognition model training. The first segment classifies whether a given image has been poisoned, and the second applies calibrated noise to flagged images. The complexity of classifying an image as poisoned is constant, as the model has already converged. We can assume the image dimensions, and thus the pixel count, are the same for each image across a dataset; the influence of image dimension is therefore constant. Noise addition involves calculating the mean image of a given class as well as the maximum Euclidean distance between each image and the mean image. Determining the mean image can be expressed as $O(n_k)$, where $n_k$ is the number of images in a given class $k$. Finding the maximum distance can similarly be described by $O(n_k)$. Applying Laplacian noise is a constant operation, as displayed in line 8 of Algorithm~\ref{algo1} and line 7 of Algorithm~\ref{algo2}. As the final facial recognition image classification is carried out on a converged model, the prediction time is constant, with a complexity of $O(1)$. When filtering an entire dataset before training, the worst case supposes that every image is poisoned and thus has noise applied. Consequently, the overall complexity is $O(N)$, where $N$ is the total number of images in the dataset before training.
The linear complexity of our approach demonstrates its high utility in a real-world setting. \subsection{Comparison} \begin{center} \begin{table}[!ht] \centering \caption{Comparison of BA-BAM to other detection/mitigation approaches.} \scalebox{0.82}{ \begin{tabular}{ |c||c|c|c|c|c| } \hline & \textbf{AFR} & \textbf{CC} & \textbf{PC} & \textbf{MAD} & \textbf{D\&M} \\ \hline\hline \textbf{RAB~\cite{DBLP:journals/corr/abs-2003-08904}} & \textcolor{green}{\checkmark} & Moderate & High & 6.1-15.8\% & \textcolor{green}{\checkmark}\\ \hline \textbf{RAP~\cite{DBLP:journals/corr/abs-2110-07831}} & {$\color{red}\times$} & Moderate & Low & N/A & {$\color{red}\times$}\\ \hline \textbf{STRIP~\cite{DBLP:journals/corr/abs-1902-06531}} & \textcolor{green}{\checkmark} & Moderate & Moderate & 2.19\% & {$\color{red}\times$}\\ \hline \underline{\textbf{BA-BAM}} & \textcolor{green}{\checkmark} & Low & High & 2.4\% & \textcolor{green}{\checkmark}\\ \hline \end{tabular} } \label{table:comparison} \tablefootnote{The comparison considers applicability to face recognition (AFR), computational complexity (CC), problem complexity (PC) and model accuracy drop (MAD). We additionally analyse whether both detection and mitigation are included in the defence (D\&M). The MAD score for RAB was assessed on k-NN models.} \end{table} \end{center} We provide a comparative analysis of BA-BAM against three other robust defenses for adversarial attacks. The comparison is summarized in Table~\ref{table:comparison}. STRIP~\cite{DBLP:journals/corr/abs-1902-06531} applies perturbations to each input image and evaluates the randomness of the associated outputs. Weber et al.~\cite{DBLP:journals/corr/abs-2003-08904} developed a method against backdoor attacks (RAB) to provide certified robustness. RAB generates a ``smoothed classifier'' by aggregating multiple models trained on datasets with randomized smoothing applied.
Robustness Aware Perturbations (RAP) by Yang et al.~\cite{DBLP:journals/corr/abs-2110-07831} is an efficient method of backdoor detection in NLP models. We consider RAP inapplicable to face recognition models, as the defense focuses on text-based data. Without demonstrated results on any image datasets, the methods employed by RAP cannot be confidently compared to BA-BAM. STRIP and RAB were both evaluated on the MNIST and CIFAR-10 benchmark datasets, and RAB was additionally evaluated on ImageNet. Although these methods generalize to these datasets, face image datasets (such as LFW) pose a more strenuous classification task due to the close feature similarity between classes (high inter-class complexity). The benchmark text datasets (IMDB, Amazon, Twitter, and Yelp reviews) evaluated against RAP are marginally less complex in comparison with face image datasets. Both STRIP and RAP simply detect the presence of poisoned data, providing no attack mitigation mechanisms. Conversely, RAB and BA-BAM provide both poison detection and comprehensive attack mitigation. RAB has significantly higher computational complexity, as it requires training multiple models for its defense. RAB performs well on k-NN models; however, it is only able to decrease the ASR to $50.1\%$ in a DNN 10-way multiclassifier on MNIST. The polynomial computational complexity of RAB ($O(n \cdot K^2)$) is relatively high compared to that of BA-BAM, which is linear (see Section~\ref{compute_complx}). STRIP and RAP are both relatively computationally efficient; however, RAP requires two model predictions per input, whereas STRIP must generate $N$ perturbed copies of each input. BA-BAM requires two model predictions to complete both poison detection and mitigation, whereas RAP requires two predictions for detection alone. BA-BAM suffers a 2.4\% accuracy drop for both detection and mitigation, while STRIP suffers a 2.19\% worst-case drop for detection alone.
RAB suffers an accuracy drop of 15.8\% under worst-case settings when evaluated on CIFAR-10. \section{Related Works} \label{related_works} The literature shows a few attempts at mitigating adversarial attacks through random noise addition. Wang et al. introduced one of the earlier attempts to generalize randomized smoothing to mitigate backdoor attacks~\cite{DBLP:journals/corr/abs-2002-11750}. Randomized smoothing involves the addition of random noise to the input image vector in order to overwhelm the patch/perturbation added by the attacker. Their method guarantees that only 36\% of testing images can be classified correctly when at most 2 pixels or labels are perturbed by the attacker during training. When more pixels/labels are perturbed, the model's classification accuracy drops further. Another method, named DP-InstaHide, utilizes intuitions from differential privacy, applying Laplacian noise to poisoned images as a defense against adversarial attacks~\cite{DBLP:journals/corr/abs-2103-02079}. DP-InstaHide combines additive Laplacian noise with Mixup input and label augmentation, which improves the robustness of the model~\cite{DBLP:journals/corr/abs-1710-09412}. The authors demonstrate successful mitigation of adaptive gradient matching attacks on the CIFAR-10 and ImageNet datasets~\cite{geiping2021witches}. Other approaches inject random noise directly into network weights and activations; however, these methods lack robustness in model performance~\cite{8954187, zheltonozhskii2020colored}. Maintaining robustness as well as security against adversarial attacks has been a common focus. A trade-off between the adversarial robustness of a model and its backdoor robustness was identified by Weng et al.~\cite{NEURIPS2020_8b406655}. Their experiments show that increasing robustness against adversarial techniques also increases backdoor attack vulnerability. This trade-off can be exploited for both malicious and defensive purposes.
They identify that adversarially trained networks utilize high-level features to make predictions, which amplify the success of corrupted/dirty label backdoor attacks~\cite{NEURIPS2020_8b406655}. RAP proposes a novel and computationally efficient method of detecting backdoor attacks in a text-based setting. A large discrepancy in robustness between poisoned and clean samples was detected, enabling poisoned samples to be distinguished from their clean counterparts. Word-based perturbation is then applied to nullify attack effectiveness while showing only slight drops in classification accuracy for clean data~\cite{DBLP:journals/corr/abs-2110-07831}. Comparable methods for text-based applications are computationally more expensive and provide inferior attack prevention~\cite{DBLP:journals/corr/abs-2011-10369, DBLP:journals/corr/abs-1911-10312}. Weber et al. aim to produce a framework for certifying ML model robustness against backdoor attacks. A randomized smoothing technique is utilized, with a set of classifiers being trained and then combined to generate a single smoothed classifier that can provide this certifiable robustness. The framework is tested on various neural networks and datasets, and provides benchmarks of model robustness across these datasets. However, this approach is computationally expensive, as it requires training multiple neural networks on large datasets from scratch~\cite{DBLP:journals/corr/abs-2003-08904}. \section{Conclusion} \label{conclusion} We proposed a novel backdoor attack mitigation mechanism named BA-BAM for protecting facial recognition authentication systems from potential privacy breaches. BA-BAM investigates potential privacy threats that can occur through backdoor attacks by providing a mechanism to detect manipulated face images from an untrusted database, and applying calibrated Laplacian noise to overcome the adverse effects on a face recognition system. 
BA-BAM successfully mitigates backdoor attacks in both binary and multiclass face recognition models without significantly affecting the models' accuracy, while also preserving their robustness. BA-BAM shows high robustness by incurring a maximum accuracy drop of 2.4\%, while also keeping the attack success rate at a maximum of 20\% in the worst-case scenario. These results show that our work is vital in mitigating real-world attacks on biometric authentication models in order to prevent potential privacy breaches. As future work, we aim to investigate the possibility of employing the proposed work to mitigate attacks on other image-based biometrics such as irises, fingerprints, and palm prints. \section*{Acknowledgements} This work has been supported by the Cyber Security Research Centre Limited, whose activities are partially funded by the Australian Government’s Cooperative Research Centres Programme. HPC resources for this research were facilitated by the Pawsey Supercomputing Research Centre. \input{main.bbl} \end{document}
\section{Introduction} During the last decade, non-classical states of light have become a necessary tool in many physical experiments, most notably, very high precision measurements \cite{Nature_2011}, quantum computations, and quantum cryptography (see e.g., review papers \cite{Gisin_RMP_74_145_2002, Kok_RMP_79_135_2007, Hammerer_RMP_82_1041_2010} and references therein). Non-classical light will also be used in the emerging class of experiments aimed at the preparation of mechanical objects in non-Gaussian quantum states \cite{Zhang_PRA_68_013808_2003, 10a1KhDaMiMuYaCh}. In all these experiments, some method of characterization and verification of the generated quantum state is required. The standard method for this is quantum tomography \cite{Vogel_PRA_40_2847_1989, Raymer2004}, which allows one to restore the Wigner function \cite{Schleich2001} of the quantum state using the data acquired by a set of homodyne measurements. However, in many cases the practical implementation of this method could be difficult, in particular because it requires an additional local oscillator light source whose phase is locked to that of the explored light. This requirement is especially hard to fulfill in the case of pulsed broadband light, which is very typical in experiments with non-classical light. This problem can be avoided by using polarization tomography, which allows one to restore the quasi-probability distribution for the three Stokes operators of the two polarization modes of light --- the so-called polarization quasi-probability distribution (PQPD) \cite{Bushev_OS_91_526_2001, Karassiov_JOB_4_S366_2002, Karassiov_LF_12_948_2002, Karassiov2004}. Evidently, it is not sensitive to the common phase of both polarization modes and is therefore immune to common phase fluctuations. For this very reason, it does not allow one to restore the full quantum state of the light, but only its so-called {\em polarization sector}. 
However, in most cases, the polarization sector information is sufficient~\cite{Marquardt_PRL_99_220401_2007, Kanseri_PRA_85_022126_2012, Mueller_NJP_14_085002_2012}. A distinctive feature of the PQPD, which it shares with the Wigner function, as well as with the classical probability distributions, is that it gives correct one-dimensional marginal distributions (in this particular case for the Stokes variables). Therefore, similar to the Wigner function, the PQPD represents the natural choice for the probability distributions in the classical hidden variables models. Expanding this analogy, it is possible to expect that PQPDs of `truly non-classical', {\it e.g.} non-Gaussian quantum states should demonstrate some non-trivial features, like negativity. However, as we show below, the discrete-valued nature of the Stokes observables makes the situation a bit more complicated. For optomechanical experiments, especially interesting are bright (with large mean number of photons) states, because they interact more effectively with mechanical objects (note that the masses of even the tiniest nanobeams and nanomembranes used in these experiments are huge in comparison with the optical quanta `masses' $\hbar\omega/c^2\lesssim10^{-35}\,{\rm kg}$). For example, it was shown more than 30 years ago that the squeezed vacuum state allows one to improve the sensitivity of optical interferometric displacement sensors \cite{Caves1981}. Recently, this idea was implemented in the laser interferometric gravitation-wave detector GEO-600 \cite{Nature_2011}. In a similar way, bright quantum non-Gaussian states, like the squeezed single-photon state $\hat{\mathcal{S}}(r)\ket{1}$, where $\hat{\mathcal{S}}(r)$ is the squeezing operator, see Eq.\,\eqref{S_of_r}, are more attractive for the non-Gaussian optomechanics than their non-squeezed counterparts, for example the `ordinary' single-photon state $\ket{1}$, considered {\it e.g.} in Refs.\,\cite{Zhang_PRA_68_013808_2003, 10a1KhDaMiMuYaCh}. 
Note that depending on the degree of squeezing $r$, the mean energy of a squeezed single-photon state can be arbitrarily large. But independently of its mean energy, this state always possesses such essentially non-classical features as the negative-valued Wigner function and orthogonality to other squeezed Fock states $\hat{\mathcal{S}}(r)\ket{n\ne1}$ with the same degree of squeezing $r$. The primary goal of this paper is to explore the applicability of the polarization tomography to the verification of bright non-Gaussian quantum states, and the second goal is to analyze the non-classical behavior of PQPD. In Sec.\,\ref{sec:review}, we reproduce the basic formalism of the polarization tomography that could be found in the literature. In Sec.\,\ref{sec:loss} we discuss the effects of photodetectors' non-idealities and of the optical losses. In Sec.\,\ref{sec:LP}, which is devoted to the second goal, we consider linearly polarized light pulses and show, using this simple particular case, that the PQPD can be negative even for the states of light typically considered as essentially classical (like the coherent quantum state). We also discuss a possible experimental setup aimed at the demonstration of this negativity. In Sec.\,\ref{sec:P2W} we return to our primary goal and consider light containing some quantum state in one polarization and a coherent quantum state $\ket{\alpha_0}$ in the other one. It is easy to see that if $|\alpha_0|\to\infty$ then the polarization tomography of this state reduces to ordinary tomography with the coherent quantum state serving as the local oscillator. We formulate requirements for the minimal value of $|\alpha_0|$ and for the photodetectors' parameters that are necessary to obtain the negative-valued PQPD in this setup. The Appendix contains some cumbersome calculations, which are not necessary for understanding the main results of this paper. 
\section{PQPD and the polarization characteristic function}\label{sec:review} Following the literature (see, {\it e.g.}, \cite{Bushev_OS_91_526_2001, Karassiov_LF_12_948_2002, Karassiov2004}), we introduce the polarization characteristic function as follows: \begin{equation}\label{chi} \chi(u_1,u_2,u_3) := \mathop{\rm Tr}\nolimits\left[\hat{\rho}\hat{\chi}(u_1,u_2,u_3)\right] , \end{equation} where $\hat{\rho}$ is the density operator of a two-mode (horizontal and vertical polarizations) quantum state of light, \begin{gather} \hat{\chi}(u_1,u_2,u_3) = \exp\biggl(i\sum_{i=1}^3u_i\hat{S}_i\biggr) = \exp\left[ i(\hat{{\rm a}}_H^\dagger\ \hat{{\rm a}}_V^\dagger)\smatrix{u_1}{w^*}{w}{-u_1} \svector{\hat{{\rm a}}_H}{\hat{{\rm a}}_V} \right] , \label{hat_chi} \\ w = u_2 + iu_3 \,, \end{gather} $\hat{{\rm a}}_H$, $\hat{{\rm a}}_V$ are the annihilation operators for these modes, \begin{align} & \hat{S}_1 = \hat{n}_H - \hat{n}_V \,, & & \hat{S}_2 = \hat{{\rm a}}_V^\dagger\hat{{\rm a}}_H + \hat{{\rm a}}_H^\dagger\hat{{\rm a}}_V \,, & & \hat{S}_3 = i(\hat{{\rm a}}_V^\dagger\hat{{\rm a}}_H - \hat{{\rm a}}_H^\dagger\hat{{\rm a}}_V) \end{align} are the Stokes operators, and \begin{align} \hat{n}_H &= \hat{{\rm a}}_H^\dagger\hat{{\rm a}}_H \,, & \hat{n}_V &= \hat{{\rm a}}^\dagger_V\hat{{\rm a}}_V \end{align} are the photon-number operators in the $H,V$ modes. The PQPD is given by the Fourier transform of $\chi(u_1,u_2,u_3)$: \begin{equation}\label{chi2W} W(S_1,S_2,S_3) = \displaystyle\int_{-\infty}^{\infty}\!\chi(u_1,u_2,u_3) \exp\biggl(-i\sum_{i=1}^3 u_iS_i\biggr)\,\frac{du_1du_2du_3}{(2\pi)^3}\,. \end{equation} An important feature of the Stokes operators, crucial for our consideration below, is that their eigenvalues are integer numbers varying from $-\infty$ to $\infty$. 
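As a numerical aside (not part of the formalism above), the definitions of the Stokes operators and their algebra can be checked on a truncated Fock space; the truncation dimension and all variable names below are illustrative choices of ours:

```python
import numpy as np

# Sketch: Stokes operators on a truncated two-mode Fock space.
d = 12                                        # illustrative truncation dimension
a = np.diag(np.sqrt(np.arange(1, d)), 1)      # single-mode annihilation operator
I = np.eye(d)
aH, aV = np.kron(a, I), np.kron(I, a)         # H and V polarization modes

S1 = aH.conj().T @ aH - aV.conj().T @ aV      # n_H - n_V
S2 = aV.conj().T @ aH + aH.conj().T @ aV
S3 = 1j * (aV.conj().T @ aH - aH.conj().T @ aV)

# SU(2)-type algebra [S1, S2] = 2i S3; the identity survives the truncation
# exactly, because S1 is diagonal in the Fock basis.
assert np.allclose(S1 @ S2 - S2 @ S1, 2j * S3)

# S1 is diagonal with integer eigenvalues, as stated in the text.
assert np.allclose(np.diag(S1), np.round(np.diag(S1).real))
```

The integer spectra of $\hat{S}_2$ and $\hat{S}_3$ can be verified in the same way within any complete block of fixed total photon number.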
Therefore, the marginal characteristic functions $\bigl\langle\exp\bigl(iu_i\hat{S_i}\bigr)\bigr\rangle$ $(i=1,2,3)$ for these operators are $2\pi$-periodic in their argument, and the corresponding marginal probability distributions for $S_{1,2,3}$ are equal to sums of $\delta$-functions at the integer values of their arguments (we prefer to use the continuous-valued Fourier transformation here, which gives delta-functions instead of delta-symbols, for the sake of consistency with the treatment below). \begin{figure} \includegraphics[width=0.48\textwidth]{setup.png} \caption{The setup for polarization tomography~\cite{Bushev_OS_91_526_2001,Karassiov2004}. PBS is the polarizing beam splitter, ${\rm D}_\|$ and ${\rm D}_\bot$ are the photodetectors. The signals from the detectors are processed by either digital or analog electronics, after which a computer calculates the probability distributions $W_{\theta\phi}(n)$ and performs the Radon transformation.}\label{fig:scheme} \end{figure} The characteristic function \eqref{chi} can be readily restored using the polarization tomography setup shown in Fig.\,\ref{fig:scheme}. 
This setup provides the probability distribution $W_{\theta\phi}(n)$ for the difference of the photon numbers in two orthogonal polarization modes measured by two photon counters ${\rm D}_\|$, ${\rm D}_\bot$: \begin{multline}\label{S_theta_phi} \hat{S}_{\theta\phi} = \hat{{\rm a}}_\|^\dagger\hat{{\rm a}}_\| - \hat{{\rm a}}_\bot^\dagger\hat{{\rm a}}_\bot = (\hat{{\rm a}}_H^\dagger\ \hat{{\rm a}}_V^\dagger) \smatrix{\cos\theta}{e^{-i\phi}\sin\theta}{e^{i\phi}\sin\theta}{-\cos\theta} \svector{\hat{{\rm a}}_H}{\hat{{\rm a}}_V} \\ = \hat{S}_1\cos\theta + (\hat{S}_2\cos\phi + \hat{S}_3\sin\phi)\sin\theta , \end{multline} where \begin{subequations}\label{aHV2a_det} \begin{align} \hat{{\rm a}}_\| &= \hat{{\rm a}}_H\cos\frac{\theta}{2} + \hat{{\rm a}}_Ve^{-i\phi}\sin\frac{\theta}{2} \,,\\ \hat{{\rm a}}_\bot &= \hat{{\rm a}}_H\sin\frac{\theta}{2} - \hat{{\rm a}}_V e^{-i\phi}\cos\frac{\theta}{2} \end{align} \end{subequations} are the annihilation operators for these modes and the angles $\theta$, $\phi$ depend on the orientations of the half- and quarter-wave plates shown in Fig.\,\ref{fig:scheme}. The characteristic function of this probability distribution is equal to \begin{equation}\label{W2chi} \chi_{\theta\phi}(\lambda) = \sum_{n=-\infty}^{\infty}W_{\theta\phi}(n)e^{i\lambda n} = \mathop{\rm Tr}\nolimits[\rho\hat{\chi}_{\theta\phi}(\lambda)] , \end{equation} where \begin{equation}\label{hat_chi_lambda} \hat{\chi}_{\theta\phi}(\lambda) = \exp\bigl[i\bigl(\lambda\hat{S}_{\theta\phi}\bigr)\bigr] . \end{equation} Comparing Eqs.\,\eqref{hat_chi} and \eqref{hat_chi_lambda}, it is easy to see that \begin{equation}\label{chi2chi} \chi(u_1,u_2,u_3) = \chi_{\theta\phi}(\lambda) \,, \end{equation} with \begin{align}\label{Radon} u_1 &= \lambda\cos\theta \,, & w = \lambda e^{i\phi}\sin\theta \,. 
\end{align} The chain of equalities (\ref{W2chi}, \ref{chi2chi}, \ref{chi2W}) forms, in essence, the Radon transformation which allows one to calculate the PQPD from the experimentally acquired set of the distributions $W_{\theta\phi}(\lambda)$. Taking into account that for any angle $\vartheta$, \begin{equation} \hat{\mathcal{U}}^\dagger(\vartheta)\hat{S}_{1,2,3}\hat{\mathcal{U}}(\vartheta) \equiv \hat{S}_{1,2,3} \,, \end{equation} where \begin{equation} \hat{\mathcal{U}}(\vartheta) = e^{-i\vartheta(\hat{n}_H + \hat{n}_V)} \end{equation} is the evolution operator which introduces a common phase shift $\vartheta$ into both polarizations, it is easy to see that the polarization characteristic function is invariant under this transformation: \begin{equation}\label{chi_zeta} \mathop{\rm Tr}\nolimits\left[ \hat{\rho}\, \hat{\mathcal{U}}^\dagger(\vartheta)\hat{\chi}(u_1,u_2,u_3)\hat{\mathcal{U}}(\vartheta) \right] \equiv \mathop{\rm Tr}\nolimits\left[\hat{\rho}\hat{\chi}(u_1,u_2,u_3)\right] . \end{equation} Therefore, the PQPD is not sensitive to any common (polarization-independent) fluctuations of the light optical path. At the same time, it follows from Eq.\,\eqref{chi_zeta} that \begin{equation}\label{chi_polar} \chi(u_1,u_2,u_3) = \mathop{\rm Tr}\nolimits\left[\hat{\rho}_{\rm polar}\hat{\chi}(u_1,u_2,u_3)\right] , \end{equation} where \begin{equation}\label{rho_polar} \hat{\rho}_{\rm polar} = \int_{2\pi} \hat{\mathcal{U}}(\vartheta)\hat{\rho}\,\hat{\mathcal{U}}^\dagger(\vartheta)\, \frac{d\vartheta}{2\pi}\ = \sum_{\substack{n_H,n_V = 0\\n_H',n_V' = 0}}^\infty \ket{n_Hn_V}\bra{n_Hn_V}\hat{\rho}\ket{n_H'n_V'}\bra{n_H'n_V'} \delta_{n_H+n_V\,n_H'+n_V'} \end{equation} is the polarization sector of the density operator, equal to the incoherent sum of the `slices' of the density operator with given total numbers of quanta. Therefore, the polarization tomography restores only part of the light quantum state, namely, its polarization sector~\cite{Marquardt_PRL_99_220401_2007}. 
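The invariance \eqref{chi_zeta} is easy to confirm numerically for a generic two-mode state (a sketch on a truncated Fock space; the state, the phase $\vartheta$, and the values of $u_i$ are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

d = 8                                         # illustrative truncation
a = np.diag(np.sqrt(np.arange(1, d)), 1)
I = np.eye(d)
aH, aV = np.kron(a, I), np.kron(I, a)
nH, nV = aH.conj().T @ aH, aV.conj().T @ aV
S1 = nH - nV
S2 = aV.conj().T @ aH + aH.conj().T @ aV
S3 = 1j * (aV.conj().T @ aH - aH.conj().T @ aV)

rng = np.random.default_rng(0)
psi = rng.normal(size=d * d) + 1j * rng.normal(size=d * d)
psi /= np.linalg.norm(psi)                    # generic two-mode pure state

u1, u2, u3 = 0.3, -0.7, 0.5
chi_op = expm(1j * (u1 * S1 + u2 * S2 + u3 * S3))

def chi(state):
    """Polarization characteristic function <state| chi_op |state>."""
    return state.conj() @ chi_op @ state

U = expm(-1j * 0.9 * (nH + nV))               # common phase shift, theta = 0.9
assert np.isclose(chi(U @ psi), chi(psi))     # chi is insensitive to it
```

The equality is exact even on the truncated space, since every $\hat{S}_i$ conserves the total photon number.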
\section{Quantum efficiency, optical losses, and photon-number integration}\label{sec:loss} In the above consideration, it was assumed implicitly that the photodetectors are ideal and are able to exactly count all incident quanta. Their non-ideal quantum efficiency $\eta<1$ can be modeled by imaginary grey filters with the power transmissivity $\eta$, which mix the photodetectors' input fields with some vacuum fields: \begin{equation}\label{eff_loss} \hat{{\rm a}}_{\|,\bot} \to \sqrt{\eta}\,\hat{{\rm a}}_{\|,\bot} + \sqrt{1-\eta}\,\hat{{\rm b}}_{\|,\bot} \,, \end{equation} where $\hat{{\rm b}}_{\|,\bot}$ are the annihilation operators of the vacuum fields. It is easy to show that these grey filters can be replaced by a single filter located at the input of the scheme of Fig.\,\ref{fig:scheme}, with some evident redefinition of the vacuum fields. This means that we can consider the photodetectors as ideal ones but take into account their non-unity quantum efficiency by introducing the corresponding effective losses into the incident light. Note that other optical losses can also be taken into account here by replacing the photodetectors' quantum efficiency in Eq.\,\eqref{eff_loss} by the {\it unified quantum efficiency} of the scheme, equal to the probability for an incident photon to reach one of the photodetectors and be detected. Another important shortcoming of contemporary photon-counting detectors is that their counting rate does not exceed $\sim10^7\,{\rm s}^{-1}$, which means that in the case of nanosecond and shorter pulses typically used in non-linear optics, they can count only one photon per pulse. More advanced transition-edge sensors can resolve up to $10$ photons, having at the same time high quantum efficiency, up to 95\%, but they are slow, difficult to use, and expensive \cite{Wildfeuer_PRA_80_043822_2009}. 
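For states diagonal in the Fock basis, the effective-loss model \eqref{eff_loss} acts on the photon-number distribution as binomial thinning. A minimal numerical sketch (the distribution, mean, and efficiency values are illustrative):

```python
import math
import numpy as np

def binomial_thinning(p, eta):
    """Photon-number distribution after a loss channel of transmissivity eta
    (valid for states diagonal in the Fock basis)."""
    q = np.zeros(len(p))
    for n in range(len(p)):
        for k in range(n + 1):
            q[k] += p[n] * math.comb(n, k) * eta**k * (1 - eta)**(n - k)
    return q

# A coherent state has Poissonian statistics; after losses it stays
# Poissonian, with the mean reduced from mu to eta*mu.
mu, eta, N = 4.0, 0.6, 60
p = np.array([math.exp(-mu) * mu**n / math.factorial(n) for n in range(N)])
q = binomial_thinning(p, eta)
assert abs(q.sum() - p.sum()) < 1e-12                        # normalization kept
assert abs(sum(k * qk for k, qk in enumerate(q)) - eta * mu) < 1e-9
```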
In experiments with bright multi-photon pulses, {\it photon-number integrating} detectors are used instead, whose output signal is linearly proportional to the input number of quanta, but contaminated by additive noise. In the case of picosecond pulses used, {\it e.g.,} in \cite{Hansen_OL_26_1714_2001, Kanseri_PRA_85_022126_2012}, this noise is equivalent to a measurement error of $\sigma\sim10^2$ quanta \cite{Stobinska_PRA_86_063823_2012}. Here we will model this noise by means of the Gaussian smoothing of the probability distribution $W_{\theta\phi}$: \begin{equation} \tilde{W}_{\theta\phi}(y) = \sum_{n=0}^\infty \frac{W_{\theta\phi}(n)} {\sqrt{2\pi\sigma^2}}\,\exp\left[-\dfrac{(y-n)^2}{2\sigma^2}\right] \,. \end{equation} The corresponding smoothed characteristic function \begin{equation}\label{chi_smooth} \tilde{\chi}(u_1,u_2,u_3) = \displaystyle\int_{-\infty}^{\infty}\!\tilde{W}_{\theta\phi}(y)e^{i\lambda y}\,dy = \chi(u_1,u_2,u_3)e^{-\sigma^2\lambda^2/2} \,, \end{equation} being substituted into Eq.\,\eqref{chi2W}, gives the smoothed PQPD: \begin{equation}\label{tilde_chi2W} \tilde{W}(S_1,S_2,S_3) = \displaystyle\int_{-\infty}^{\infty}\!\tilde{\chi}(u_1,u_2,u_3) \exp\biggl(-i\sum_{i=1}^3 u_iS_i\biggr)\,\frac{du_1du_2du_3}{(2\pi)^3}\,. \end{equation} \section{Linearly polarized quantum states}\label{sec:LP} To explore the negativity features of the PQPD, consider a simple particular case of linearly polarized quantum states, with only the $H$ mode excited and the $V$ mode in the vacuum state: \begin{equation}\label{rho_LP} \hat{\rho} = \hat{\rho}_H\otimes\ket{0}_V\,{}_V\bra{0} \,. 
\end{equation} It follows from Eqs.\,(\ref{chi_polar}, \ref{rho_polar}) that in this case, \begin{equation}\label{chi_LP} \chi(u_1,u_2,u_3) = \sum_{n=0}^\infty\rho_{H\,nn}\chi(u_1,u_2,u_3|n) \,, \end{equation} where \begin{equation} \rho_{H\,nn} = \bra{n}\hat{\rho}_H\ket{n} \end{equation} and $\chi(u_1,u_2,u_3|n)$ is the characteristic function for the case of the $n$-photon Fock state in the $H$ mode; it was shown in the paper \cite{Karassiov_LF_12_948_2002} that it is equal to \begin{equation} \chi(u_1,u_2,u_3|n) = (\cos\lambda + iu_1\mathop{\rm sinc}\nolimits\lambda)^n. \end{equation} The corresponding smoothed characteristic function, produced by photon-number integrating detectors, is equal to (assuming that $\sigma\gg1$ and, therefore, $\lambda\ll1$) \begin{multline} \tilde{\chi}(u_1,u_2,u_3) \approx\sum_{n=0}^\infty\rho_{H\,nn}\left(1-\frac{\lambda^2}{2} + iu_1\right)^n e^{-\lambda^2\sigma^2/2} \\ \approx\sum_{n=0}^\infty\rho_{H\,nn} \exp\left[-\frac{\sigma^2u_1^2}{2} + inu_1 - \frac{(n+\sigma^2)|w|^2}{2}\right], \end{multline} and the smoothed PQPD [see Eq.\,\eqref{tilde_chi2W}] is equal to \begin{equation} \tilde{W}(S_1,S_2,S_3) \approx\sum_{n=0}^\infty\frac{\rho_{H\,nn}}{(2\pi)^{3/2}\sigma(n+\sigma^2)} \exp\left[-\frac{(S_1-n)^2}{2\sigma^2} - \frac{S_{23}^2}{2(n+\sigma^2)}\right] , \end{equation} where \begin{equation} S_{23} = \sqrt{S_2^2 + S_3^2} \,. \end{equation} This result is completely intuitive and does not contain any `non-classical' features, like negativity. Consider, however, the exact non-smoothed PQPD. Unfortunately, the general equation for $W(S_1,S_2,S_3)$ in this case can not be expressed in any simple analytical form, but for our purposes, its marginal distributions are sufficient. 
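The Fock-state characteristic function quoted above can be cross-checked by direct matrix exponentiation on the complete block with $n$ total photons (a numerical sketch; the test values of $u_i$ are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

def chi_fock_direct(n, u1, u2, u3):
    """<n,0| exp(i sum_i u_i S_i) |n,0> by direct matrix exponentiation."""
    d = n + 1                  # the block with n total photons is complete
    a = np.diag(np.sqrt(np.arange(1, d)), 1)
    I = np.eye(d)
    aH, aV = np.kron(a, I), np.kron(I, a)
    S1 = aH.conj().T @ aH - aV.conj().T @ aV
    S2 = aV.conj().T @ aH + aH.conj().T @ aV
    S3 = 1j * (aV.conj().T @ aH - aH.conj().T @ aV)
    psi = np.zeros(d * d)
    psi[n * d] = 1.0           # the state |n>_H |0>_V
    return psi @ expm(1j * (u1 * S1 + u2 * S2 + u3 * S3)) @ psi

def chi_fock_formula(n, u1, u2, u3):
    """(cos(lambda) + i u1 sinc(lambda))^n with lambda = |u|."""
    lam = np.sqrt(u1**2 + u2**2 + u3**2)
    return (np.cos(lam) + 1j * u1 * np.sin(lam) / lam) ** n

for n in (1, 2, 5):
    assert np.isclose(chi_fock_direct(n, 0.4, -0.3, 0.8),
                      chi_fock_formula(n, 0.4, -0.3, 0.8))
```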
The marginal characteristic function for $S_1$ is given by \begin{equation} \chi(u_1,0,0) = \sum_{n=0}^\infty\rho_{H\,nn}e^{iu_1n} \,. \end{equation} The corresponding marginal probability distribution, \begin{equation}\label{W_1_LP} W_1(S_1) = \sum_{n=0}^\infty\rho_{H\,nn}\delta(S_1-n), \end{equation} is equal to the photon-number distribution for the state $\hat{\rho}_H$. The explanation is evident: the Stokes variable $S_1$ is equal to the difference of photon numbers in two polarizations, and in the case we consider here, the $V$ mode does not contain any quanta at all. Much more interesting is the behavior of the Stokes variables $S_2,\,S_3$. Note that the characteristic function \eqref{chi_LP} does not depend on the angle $\phi$ and therefore the corresponding PQPD is invariant with respect to rotation in the $(S_2,\,S_3)$ plane. From the classical point of view, this symmetry is incompatible with the above-mentioned discreteness of the marginal distributions for $S_2$ and $S_3$: this combination of features can not be manifested by any (positive-valued) probability distribution. However, it is completely feasible in the case of quantum quasi-probability distributions, which can have negative-valued areas. To analyze this feature in more detail, consider the two-dimensional marginal distribution for $S_2,\,S_3$, which in this particular case is equal to (see Appendix \ref{app:LPM}): \begin{multline}\label{W_23_LP} W_{23}(S_2,S_3) = \displaystyle\int_{-\infty}^{\infty}\! W(S_1,S_2,S_3)\,dS_1 = \displaystyle\int_{-\infty}^{\infty}\!\chi(0,u_2,u_3)e^{-iu_2S_2-iu_3S_3}\,\frac{du_2du_3}{(2\pi)^2} \\ = \sum_{n=0}^\infty\frac{\rho_{H\,nn}}{2^n} \sum_{k=0}^n\frac{n!}{k!(n-k)!}\,w_{|2k-n|}(S_{23})\,, \end{multline} where \begin{subequations}\label{w_m} \begin{gather} w_0(S_{23}) = \delta(S_2)\delta(S_3) \,, \\ w_{m>0}(S_{23}) = \frac{1}{2\pi}\partd{}{S_{23}}\begin{cases} -\dfrac{S_{23}}{|m|\sqrt{m^2-S_{23}^2}}\,, & S_{23} < m \,, \\[2ex] 0\,, & S_{23}\ge m \,. 
\end{cases} \end{gather} \end{subequations} \begin{figure*} \includegraphics[width=\textwidth]{W_LP.png} \caption{Left panel: plot of $w_1(S_{23})$. Right panel: color plot of $w_1(S_2,S_3)$ (blue: negative values, red: positive ones). Integration along the line $S_2=1$ gives infinity; integration along the lines $S_2<1$ gives zero due to the negative-valued areas.}\label{fig:W_LP} \end{figure*} The last equations, while looking a bit cumbersome, are actually very transparent. $W_{23}$ is equal to the weighted sum of functions $w_m$. The non-negative weight factors are given by the initial photon-number distribution convolved with the binomial distribution created by the beamsplitter. Each of the functions $w_m$, except for $w_0$, has negative values in the circular area $S_{23}<m$, see Fig.\,\ref{fig:W_LP} (left), where $w_1$ is plotted as a typical example. This means that the marginal distribution \eqref{W_23_LP} and therefore the corresponding PQPD $W(S_1,S_2,S_3)$ indeed has negative-valued areas for any quantum state $\hat{\rho}_H$. It is this negativity that reconciles the discreteness of the marginal distributions and the rotation symmetry in the $(S_2,\,S_3)$ plane, nullifying the marginal distributions for non-integer values of $S_{2,3}$. How this is possible is demonstrated by the right panel of Fig.\,\ref{fig:W_LP}, where the two-dimensional color plot of the function $w_1(S_2,S_3)$ is shown, with the positive-valued area of this function marked by red color and the negative-valued one by blue color. It is easy to see that integration along the line $S_2=1$ involves only positive values of $w_1(S_2,S_3)$ and thus gives a positive net value (actually infinity); and integration along the line $S_2<1$ involves both positive and negative values and thus can (and actually does) give zero. 
Due to the rotational symmetry of the picture, this result holds also for the marginal distribution of $S_3$, as well as of any combination $S_\phi = S_2\cos\phi + S_3\sin\phi$ (a similar result has been reported recently by A.~V.~Masalov \cite{Masalov_Klyshko8}). This remarkable structure of the PQPD can be easily demonstrated experimentally using linearly polarized single-photon or even weak coherent light pulses. In the former case, with an account for the optical losses [see Eq.\,\eqref{eff_loss}], \begin{equation}\label{p_0p_1} \bra{n}\hat{\rho}_H\ket{n} = p_0\delta_{n0} + p_1\delta_{n1} \,, \end{equation} where \begin{align} & p_0 = 1-\eta \,, & & p_1 = \eta \,. \end{align} In the latter case, assuming that $\alpha\ll1$ and taking into account that the losses only decrease the mean number of quanta of the coherent state ($\alpha\to\alpha\sqrt{\eta}$), which still remains coherent, we get the same equation \eqref{p_0p_1}, but with \begin{align} & p_0 = e^{-|\alpha|^2} \approx 1-|\alpha|^2 \,, & & p_1 \approx |\alpha|^2 \,. \end{align} In both these simple cases, in order to restore the marginal distribution \eqref{W_23_LP}, it is sufficient that the experimentalist measures only the distribution $W_{\theta\phi}$ for $\theta=\pi/2$ [see Eqs.\,\eqref{Radon}], which has a very simple form shown in Fig.\,\ref{fig:dist_QE}. Note that if the distributions are measured by a single-photon detector, no two-photon events will be observed. The presence of two-photon states in the density matrix $\rho_H$ (as in the case of a coherent state) will only increase the probability of a single-count event $p_1$ and reduce the probability of a no-count event $p_0$. The two-dimensional marginal Radon transformation \eqref{W_23_LP}, applied to this distribution, gives \begin{equation} W_{23}(S_2, S_3) = p_0\delta(S_2)\delta(S_3) + p_1w_1(S_{23}), \end{equation} i.e., a $\delta$-function peak at $S_2=S_3=0$, surrounded by the negative-valued area provided by $w_1$. 
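Numerically, the negativity of $w_1$ can be probed through the rotationally symmetric (Hankel-transform) form of the single-photon characteristic function $\chi(0,u_2,u_3)=\cos\lambda$; inside the unit circle, differentiating the expression for $w_1$ above gives the closed form $w_1(S_{23}) = -(2\pi)^{-1}(1-S_{23}^2)^{-3/2}$. A sketch (the small Gaussian regularizer and the integration cutoffs are ours, introduced only to make the oscillatory integral convergent):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

eps = 0.005           # small numerical regularizer (not a physical parameter)

def W23(S):
    """Rotationally symmetric marginal PQPD of a single photon:
    (1/2pi) * Hankel transform of cos(lambda), Gaussian-regularized."""
    integrand = lambda lam: lam * j0(S * lam) * np.cos(lam) * np.exp(-eps * lam**2 / 2)
    val, _ = quad(integrand, 0.0, 80.0, limit=400)
    return val / (2 * np.pi)

S = 0.5
exact = -(1 - S**2) ** (-1.5) / (2 * np.pi)    # closed form of w_1 for S < 1
assert W23(S) < 0                              # negative inside the unit circle
assert abs(W23(S) - exact) < 0.1 * abs(exact)  # close to the unregularized value
```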
\begin{figure} \includegraphics[width=0.48\textwidth]{dist_QE1.png} \caption{Typical probability distributions for a single-photon state (a) and a coherent state with $\alpha=1$ (b) at the input of the polarization tomography setup. The QE of the detectors is $\eta=0.6$ and the angle $\theta$ is chosen to be $\pi/2$.}\label{fig:dist_QE} \end{figure} At first sight it looks strange that the PQPD can be negative-valued even for such a `perfectly classical' state as the coherent one. However, it was emphasized {\it e.g.} in the review paper \cite{Braunstein2005} that classical local hidden-variable models require two necessary conditions: (i) the `classicality' of the quantum state, in the sense of positivity of its Wigner function, and (ii) the `classicality' of the measurement (only linear observables such as positions, momenta and their linear combinations have to be measured). The non-smoothed polarization tomography, which measures the discrete-valued Stokes variables, evidently violates the second assumption. Another conclusion that can be derived from the above consideration is that the polarization tomography of linearly polarized light \eqref{rho_LP} can not be used to segregate the `classical' (with the Wigner function positive everywhere) quantum states $\hat{\rho}_H$ from `non-classical' ones, because in the smoothed case (with photon-number integrating detectors) it always gives a positive PQPD, and in the non-smoothed case (with photon-number resolving detectors) it always gives a PQPD with negativities (except for the trivial case of the vacuum state). 
\section{`Highlighted' polarization quantum tomography}\label{sec:P2W} The evident solution to this problem is the ``highlighting'' of the nonclassical features by feeding bright coherent light into the second polarization mode: \begin{equation}\label{rho_alpha} \hat{\rho} = \hat{\rho}_H\otimes\ket{\alpha_0}_V\,{}_V\bra{\alpha_0} \,. \end{equation} It is easy to see that in this case, the polarization tomography setup with fixed $\theta=\pi/2$ exactly reproduces the ordinary quantum tomography setup, with the vertical polarization light serving as the local oscillator and the angle $\phi$ as the homodyne angle. Indeed, consider the asymptotic case of a very strong coherent field, $|\alpha_0|\to\infty$. In this case, the operator $\hat{{\rm a}}_V$ in Eq.\,\eqref{hat_chi} can be replaced by its mean value $\alpha_0$, which gives the following equation for the polarization characteristic function: \begin{equation}\label{C_s2C_pol_asy} \chi(0,u_2,u_3) \approx \chi_s(\alpha_0w^*) \,, \end{equation} where \begin{equation}\label{chi_s} \chi_s(z) = \mathop{\rm Tr}\nolimits\bigl\{ \hat{\rho}_H \exp\bigl[i\bigl(z\hat{{\rm a}}_H^\dagger + z^*\hat{{\rm a}}_H\bigr)\bigr] \bigr\} \end{equation} is the symmetrically ordered characteristic function for the state $\hat{\rho}_H$, whose Fourier transformation gives the Wigner function for this state: \begin{equation}\label{chi_s2W} W(x,p) = \displaystyle\int_{-\infty}^{\infty}\!\chi_s(z)\exp\bigl[-i\sqrt{2}(x\Re z + p\Im z)\bigr]\,\frac{d^2z}{2\pi^2} \,. 
\end{equation} A rigorous treatment of this problem (see Appendix \ref{app:RA}) shows that indeed a relation between the smoothed polarization characteristic function and the symmetrically ordered characteristic function exists, which in the reasonable particular case of a not very bright quantum state $\rho_H$, \begin{equation}\label{small_n} \mean{n} \ll \sigma^2 \,, \end{equation} where $\mean{n}$ is the mean number of quanta, simplifies to the smoothed version of Eq.\,\eqref{C_s2C_pol_asy}: \begin{equation}\label{chi_s2chi} \tilde{\chi}(0,u_2,u_3) = \chi_s(\alpha_0w^*)e^{-\sigma^2|w|^2/2} \,. \end{equation} With an account for the optical losses [see the discussion around Eq.\,\eqref{eff_loss} and Appendix \ref{app:loss}], this equation takes the following form: \begin{equation}\label{chi_s2chi_loss} \tilde{\chi}(0,u_2,u_3) = \chi_s(\zeta)e^{-\epsilon^2|\zeta|^2/2} \,, \end{equation} where \begin{equation}\label{zeta} \zeta = \zeta' + i\zeta'' = \sqrt{\eta}\,\alpha_0w^* \end{equation} and \begin{equation} \epsilon^2 = \frac{1}{\eta}\left(1 - \eta + \frac{\sigma^2}{|\alpha_0|^2}\right) \end{equation} is the total `quantum inefficiency' of the tomography scheme, which takes into account both the optical losses and the finite value of $\alpha_0$. Finally, Fourier transformation of this equation gives the relation between the Wigner function and the smoothed PQPD: \begin{equation}\label{W2C_pol_s} \tilde{W}_{23}(S_2,S_3) = \frac{1}{\pi\eta|\alpha_0|^2\epsilon^2}\displaystyle\int_{-\infty}^{\infty}\! W(x,p) \exp\biggl[ -\frac{|S_2 - iS_3 - \sqrt{2\eta}\,\alpha_0^*(x+ip)|^2} {2\eta|\alpha_0|^2\epsilon^2} \biggr] dxdp \,. \end{equation} (compare with Eq.\,(7.35) of \cite{Raymer2004}). Note that in the ideal case of $\epsilon=0$, the Gaussian factor in this equation degenerates to the $\delta$-function, giving the exact one-to-one correspondence between $\tilde{W}_{23}(S_2,S_3)$ and $W(x,p)$. 
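For orientation, the total quantum inefficiency defined above can be evaluated for representative numbers (a sketch; the parameter values are illustrative and not taken from any particular experiment):

```python
def epsilon_sq(eta, sigma, alpha0):
    """Total 'quantum inefficiency' eps^2 = (1 - eta + sigma^2/|alpha0|^2)/eta."""
    return (1 - eta + sigma**2 / abs(alpha0) ** 2) / eta

# With eta = 0.85 and detector noise sigma = 100 quanta, a local oscillator
# with |alpha0|^2 = 2.5e5 mean photons keeps eps well below unity:
eta, sigma, alpha0 = 0.85, 100.0, 500.0
assert epsilon_sq(eta, sigma, alpha0) < 1.0

# Since eps^2 >= (1 - eta)/eta, a unified quantum efficiency eta <= 1/2
# can never reach eps < 1, no matter how bright the local oscillator is:
assert epsilon_sq(0.45, sigma, 1e6) > 1.0
```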
Consider two examples of quantum states \eqref{rho_alpha}: a Gaussian squeezed vacuum state $\hat{\mathcal{S}}(r)\ket{0}_H$, and a non-Gaussian squeezed single-photon state $\hat{\mathcal{S}}(r)\ket{1}_H$, where \begin{equation}\label{S_of_r} \hat{\mathcal{S}}(r) = \exp\left[\frac{r}{2}\left(\hat{{\rm a}}_H^\dagger{}^2-\hat{{\rm a}}_H^2\right)\right] \end{equation} is the squeezing operator. In the first case, \begin{equation}\label{chi_s_sqz0} \chi_s(z) = \exp\left(-\frac{z'{}^2e^{2r} + z''{}^2e^{-2r}}{2}\right) . \end{equation} It is shown in App.\,\ref{app:sqz0} that the corresponding smoothed marginal polarization characteristic function is equal to \begin{equation}\label{C_pol_sqz0s} \tilde{\chi}(0,u_2,u_3) = \exp\left(-\frac{\delta_+^2\zeta'{}^2 + \delta_-^2\zeta''{}^2}{2}\right) , \end{equation} where \begin{equation} \delta_\pm^2 = e^{\pm2r} + \epsilon^2 \,. \end{equation} [it is easy to see that it can be obtained simply by substitution of Eq.\,\eqref{chi_s_sqz0} into \eqref{chi_s2chi}; however, the direct calculation of App.\,\ref{app:sqz0} allows one to formulate the explicit analog of condition \eqref{small_n} for this particular case]. Using then Eq.\,\eqref{chi2W}, we obtain the marginal PQPD that is Gaussian and thus positive everywhere: \begin{equation}\label{W23_S0d} \tilde{W}_{23}(S_2,S_3) = \frac{1}{2\pi\eta|\alpha_0|^2\delta_+\delta_-}\exp\left[ -\frac{1}{2}\left(\frac{s_2^2}{\delta_+^2}+ \frac{s_3^2}{\delta_-^2}\right) \right] , \end{equation} where \begin{align}\label{S_norm} & s_2 = \Re\frac{S_2-iS_3}{\sqrt{\eta}\alpha_0^*} \,, & & s_3 = \Im\frac{S_2-iS_3}{\sqrt{\eta}\alpha_0^*} \end{align} are the normalized Stokes variables. In the case of the squeezed single-photon state, \begin{equation}\label{chi_s_sqz1} \chi_s(z) = \left(1 - z'{}^2e^{2r} - z''{}^2e^{-2r}\right) \exp\left(-\frac{z'{}^2e^{2r} + z''{}^2e^{-2r}}{2}\right) . 
\end{equation} It is shown in App.\,\ref{app:sqz1} that the corresponding smoothed marginal polarization characteristic function is equal to \begin{equation}\label{C_pol_sqz1s} \tilde{\chi}(0,u_2,u_3) = \left(1 - \zeta'{}^2e^{2r} - \zeta''{}^2e^{-2r}\right) \exp\left(-\frac{\delta_+^2\zeta'{}^2 + \delta_-^2\zeta''{}^2}{2}\right) , \end{equation} and correspondingly [using again Eq.\,\eqref{chi2W}], \begin{equation}\label{tilde_W_23} \tilde{W}_{23}(S_2,S_3) = \frac{1}{2\pi\eta|\alpha_0|^2\delta_+\delta_-} \left( \frac{s_2^2e^{2r}}{\delta_+^4} + \frac{s_3^2e^{-2r}}{\delta_-^4} + \frac{\epsilon^4 -1}{\delta_+^2\delta_-^2} \right) \exp\left[ -\frac{1}{2}\left(\frac{s_2^2}{\delta_+^2}+ \frac{s_3^2}{\delta_-^2}\right) \right] . \end{equation} It is easy to see that if \begin{equation}\label{small_eps} \epsilon < 1 \,, \end{equation} that is, if the photon-number integration, given by $\sigma$, is not very strong and the quantum efficiency $\eta$ is sufficiently high, then the PQPD manifests negativity, caused, of course, by the negativity of the Wigner function. Note that in particular, the condition \eqref{small_eps} requires that the unified quantum efficiency of the scheme be higher than $1/2$ \cite{Raymer2004}. However, for the negativity of the PQPD to be experimentally detectable, it is important that the negative part is pronounced compared to the positive part. This imposes a requirement that $\epsilon$ should be smaller than a certain value, which strongly depends on the squeezing. \begin{figure*} \includegraphics[width=\textwidth]{W1gnuplot2.png} \caption{Contour plots of the quasi-probability distribution \eqref{tilde_W_23} as a function of the normalized Stokes parameters \eqref{S_norm} for the squeezed single-photon state in the absence of losses and photon-number integration (left column) and with $\epsilon^2=0.7$ (right column). Top row: no squeezing ($e^r=1$); Bottom row: 6\,dB squeezing ($e^r=2$). 
The negative-valued areas are encircled by the white lines (the color corresponding to $W_{23}=0$ varies due to the different ratios of the maximal and the minimal values of $W_{23}$).}\label{fig:highlighted} \end{figure*} In Fig.\,\ref{fig:highlighted}, the quasi-probability distribution \eqref{tilde_W_23} is plotted for the ordinary (non-squeezed) single-photon state and for the 6-dB squeezed one. The left two plots correspond to the ideal case of $\epsilon=0$, the right ones, to the typical case of $\epsilon^2=0.7$. It can be seen from these plots that the negative-valued area of $\tilde{W}_{23}$ shrinks due to the losses but is only weakly affected by the squeezing. However, the depth of this area decreases very significantly in the squeezed case, due to the well-known vulnerability of squeezing to optical losses. \begin{figure} \includegraphics[width=0.49\textwidth]{neg_vol.png} \caption{The volume of the negative-valued area of the quasi-probability distribution \eqref{tilde_W_23} as a function of the total quantum inefficiency $\epsilon^2$.}\label{neg_vol} \end{figure} A convenient quantitative measure of the negativity, which takes both these effects into account, is the volume of the negative-valued part of the quasi-probability distribution: \begin{equation} V_- = -\int_{\tilde{W}_{23}<0}\tilde{W}_{23}(S_2,S_3)dS_2dS_3 \,. \end{equation} It is plotted in Fig.\,\ref{neg_vol} as a function of $\epsilon^2$ for several values of the squeezing factor. It follows from this plot that, unfortunately, for realistic losses $\epsilon^2\gtrsim0.5$, only quite modest squeezing of about 10\,dB can be used. In order to use bright strongly squeezed states, the optical losses have to be reduced significantly, down to $\epsilon^2\lesssim0.1$. \section{Conclusion} Thus, we have shown that polarization quantum tomography is an essentially discrete-variable technique. 
It is aimed at finding the quasi-probability distribution of the Stokes observables, whose quantum counterparts, the Stokes operators, have discrete spectra. In its rigorous version, polarization quantum tomography should involve measurements with photon-number-resolving detectors, leading to discrete experimental probability distributions. In this case, the reconstructed PQPD will contain nonclassical features, such as negativity areas, even for perfectly `classical' states. This demonstrates the connection between two standard signs of non-classicality: the discreteness of photon numbers and the negativity of quasi-probability distributions. However, in an experiment with `bright' multi-photon states, it is usually impossible to perform measurements with single-photon resolution. Photon-number integration leads to the smearing of the probability distribution and therefore can prevent the observation of PQPD negativity even for some `very nonclassical' states such as the Fock ones. This problem can be solved by `highlighting' the quantum state, that is, by adding a strong coherent beam into the orthogonal polarization mode. This procedure actually bridges polarization quantum tomography with Wigner-function tomography; in the limit of very strong highlighting, the former simply reduces to the latter. The negativity of the Wigner function will then be manifested in the negativity of the PQPD, provided that the losses are not too high and the photon-number integration is not too broad. In this way, one can test bright quantum states of light, such as squeezed Fock states, for nonclassicality. \acknowledgments The work of M.Ch. was supported in part by the grant for the NATO project EAP.SFPP 984397 ``Secure Communication Using Quantum Information Systems''. The work of F.Kh. was supported by a grant from Dr. Hertha u. Helmut Schmauser-Stiftung, LIGO NSF grant PHY-0967049 and Russian Foundation for Basic Research grant No.11-02-00383-a.
\section*{Acknowledgments} The authors would like to thank Guillaume Desjardins, Csaba Szepesvari, Jost Tobias Springenberg, Steven Bohez, Philemon Brakel, Brendan Tracey, Jonas Degrave, Jonas Buchli, Leslie Fritz, Chloe Rosenberg, and many others at DeepMind for their support and feedback on this paper. \section{Algorithmic Details} \subsection{General Algorithm Description} We maintain one online network and one target network for each Q-function associated with objective $k$, with parameters denoted by $\phi_k$ and $\phi'_k$, respectively. We also maintain one online network and one target network for the policy, with parameters denoted by $\theta$ and $\theta'$, respectively. Target networks are updated every fixed number of steps by copying parameters from the online network. Online networks are updated using gradient descent in each learning iteration. Note that in the main paper, we refer to the target policy network as the old policy, $\pi_{\textrm{old}}$. We use an asynchronous actor-learner setup, in which actors regularly fetch policy parameters from the learner and act in the environment, writing the resulting transitions to the replay buffer. We refer to this policy as the behavior policy throughout the paper. The learner uses the transitions in the replay buffer to update the (online) Q-functions and the policy. Please see Algorithms \ref{Alg:MO-MPO-actor} and \ref{Alg:MO-MPO-learner} for more details on the actor and learner processes, respectively. 
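As a minimal illustration of the actor-side data flow (all names, the toy dynamics, and the two reward terms below are ours and purely illustrative, not from the actual implementation), an actor episode writes tuples of state, action, reward \emph{vector}, and behavior log-probability into a shared replay buffer:

```python
import collections
import random

import numpy as np

N_OBJ, T, STD = 2, 10, 0.1                  # objectives, steps per episode, policy stddev
replay = collections.deque(maxlen=10_000)   # toy stand-in for the replay buffer

def behavior_policy(params, s):
    """Toy Gaussian policy; stands in for the fetched actor-side network."""
    mean = params @ s
    a = mean + STD * np.random.randn(*mean.shape)
    # Log-probability b(a|s) is stored because the Retrace importance
    # weights on the learner side need it.
    logp = (-0.5 * np.sum(((a - mean) / STD) ** 2)
            - mean.size * np.log(STD * np.sqrt(2.0 * np.pi)))
    return a, logp

def reward_vector(s, a):
    """One scalar reward per objective, returned as a vector [r_1, r_2]."""
    return np.array([-np.sum((s - a) ** 2),   # objective 1: track the state
                     -np.sum(a ** 2)])        # objective 2: keep actions small

def actor_episode(params, dim=3):
    s = np.random.randn(dim)
    for _ in range(T):
        a, logp = behavior_policy(params, s)
        replay.append((s, a, reward_vector(s, a), logp))
        s = np.random.randn(dim)              # toy transition dynamics

actor_episode(np.eye(3))
batch = random.sample(replay, 4)              # learner-side minibatch
```

The learner consumes such batches to update the $N$ online Q-functions and the policy, and periodically copies the online parameters into the target networks.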
\begin{algorithm} \caption{MO-MPO: Asynchronous Actor}\label{Alg:MO-MPO-actor} \begin{algorithmic}[1] \STATE \textbf{given} (N) reward functions $\{r_k(s,a)\}_{k=1}^N$, $T$ steps per episode \REPEAT \STATE Fetch current policy parameters $\a$ from learner \STATE \textbf{// Collect trajectory from environment} \STATE $\tau = \lbrace \rbrace$ \FOR{$t = 0, \dots, T$} \STATE $a_t \sim \pi_\a(\cdot | s_t)$ \STATE \textbf{// Execute action and determine rewards} \STATE $\vec{r} = \lbrack r_{1}(s_t, a_t), \dots, r_{N}(s_t, a_t) \rbrack$ \STATE $\tau \leftarrow \tau \cup \lbrace (s_t, a_t, \vec{r}, \pi_\a(a_t | s_t)) \rbrace$ \ENDFOR \STATE Send trajectory $\tau$ to replay buffer \UNTIL{end of training} \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \small \caption{MO-MPO: Asynchronous Learner}\label{Alg:MO-MPO-learner} \begin{algorithmic}[1] \STATE {\bf given} batch size (L), number of actions (M), (N) Q-functions, (N) preferences $\{\epsilon_k\}_{k=1}^N$, policy networks, and replay buffer $\mathcal{D}$ \STATE {\bf initialize} Lagrangians $\{\eta_k\}_{k=1}^N$ and $\nu$, target networks, and online networks such that $\pi_{{\a}'}= \pi_{\a}$ and $Q_{\phi'_k}=Q_{\phi_k}$ \REPEAT \REPEAT \STATE {\textbf{// Collect dataset} $\{s^i, a^{ij}, Q_k^{ij}\}_{i,j,k}^{L,M,N}$\textbf{, where}} \STATE {\textbf{//} $s^{i}\sim\mathcal{D}, a^{ij}\sim\pi_{{\a}'}(a|s^i)$, \textbf{and} $Q_k^{ij} = Q_{{\phi_k}'}(s^i,a^{ij})$} \STATE \STATE {\bf // Compute action distribution for each objective} \FOR{k = 1, \dots, $N$} \STATE \scalebox{0.95}[1]{$ \delta_{\eta_k} \leftarrow \nabla_{\eta_k} \eta_k\epsilon_k+\eta_k\sum_{i}^L\frac{1}{L}\log\left(\sum_j^M\frac{1}{M}\exp\Big(\frac{Q_k^{ij}}{\eta_k}\Big)\right) $} \STATE {Update $\eta_k$ based on $\delta_{\eta_k}$} \STATE $q_k^{ij} \propto \exp(\frac{Q_k^{ij}}{\eta_k})$ \ENDFOR \STATE \STATE {\bf // Update parametric policy with trust region} \STATE {// sg denotes a stop-gradient} \STATE $\delta_\pi \leftarrow -\nabla_\a \sum_i^L \sum_j^M 
\sum_k^N q_k^{ij} \log \pi_{\a}(a^{ij}|s^i)$ \STATE $\hfill + \textrm{sg}(\nu) \Big(\beta - \sum_i^L \, \KL(\pi_{\a'}(a|s^i) \,\|\, \pi_\a(a|s^i) ) \Big)$ \STATE $\delta_\nu \leftarrow \nabla_\nu \nu \Big(\beta - \sum_i^L \, \KL(\pi_{\a'}(a|s^i) \,\|\, \pi_{\textrm{sg}(\a)}(a|s^i) ) \Big)$ \STATE {Update $\pi_\a$ based on $\delta_\pi$} \STATE {Update $\nu$ based on $\delta_\nu$} \STATE \STATE {\bf // Update Q-functions} \FOR{k = 1, \dots, $N$} \STATE $\delta_{{\phi}_k} \leftarrow \nabla_{\phi_k} \sum_{(s_t, a_t) \in {\tau \sim \mathcal{D}}} \big( \hat{Q}_{\phi_k}(s_t, a_t) - Q_k^{\text{ret}} \big)^2$ \STATE \hfill with $Q_k^{\text{ret}}$ as in Eq. \ref{eq:objective_q_value2} \STATE {Update $\phi_k$ based on $\delta_{{\phi}_k}$} \ENDFOR \STATE \UNTIL{fixed number of steps} \STATE {\bf // Update target networks} \STATE $\pi_{{\a}'}= \pi_{\a} ,Q_{{\phi_k}'}=Q_{\phi_k}$ \UNTIL{convergence} \end{algorithmic} \end{algorithm} \subsection{Retrace for Multi-Objective Policy Evaluation} Recall that the goal of the policy evaluation stage (Sec. 4.1) is to learn the Q-function $Q_{\phi_k}(s, a)$ for each objective $k$, parametrized by $\phi_k$. These Q-functions are with respect to the policy ${\pi_\textrm{old}}$. We emphasize that it is valid to use any off-the-shelf Q-learning algorithm, such as TD(0) \citep{Sutton_1998}, to learn these Q-functions. We choose to use Retrace \citep{Munos_2016}, described here. 
Given a replay buffer $\mathcal{D}$ containing trajectory snippets $\tau = \{ (s_0, a_0, \vec{r}_0,s_1), \dots, (s_T, a_T, \vec{r}_T,s_{T+1})\}$, where $\vec{r}_t$ denotes a reward vector $\{r_k(s_t, a_t)\}_{k=1}^N$ that consists of a scalar reward for each of $N$ objectives, the Retrace objective is as follows: \begin{equation} \min_{\phi_k} \, \mathbb{E}_{\tau \sim \mathcal{D}} \Big[ \big(Q^{ret}_{k}(s_{t}, a_{t}) - \hat{Q}_{\phi_k}(s_t, a_t)\big)^2 \Big] \, , \label{eq:objective_q_value2} \end{equation} with \begin{align*} & Q^{\text{ret}}_{k}(s_{t}, a_t) = \hat{Q}_{\phi'_k}(s_t, a_t) + \sum_{j=t}^T \gamma^{j-t} \Big(\prod_{z=t+1}^j c_z \Big) \delta^j \, , \\ & \delta^j = r_k(s_j, a_j) + \gamma V(s_{j+1})- \hat{Q}_{\phi'_k}(s_j, a_j) \, , \\ & V(s_{j+1}) = \mathbb{E}_{\pi_{\textrm{old}}(a|s_{j+1})} [ \hat{Q}_{\phi'_k}(s_{j+1}, a) ] \, . \end{align*} The importance weights $c_z$ are defined as $$c_z = \min\left(1, \frac{\pi_{\textrm{old}}(a_z | s_z)}{b(a_z | s_z)}\right) \, ,$$ where $b(a_z | s_z)$ denotes the behavior policy used to collect trajectories in the environment. When $j=t$, we set $\Big(\prod_{z=t+1}^j c_z \Big) = 1$. We also make use of a target network for each Q-function \citep{mnih2015human}, parametrized by ${\phi'_k}$, which we copy from the online network $\phi_k$ after a fixed number of gradient steps on the Retrace objective \eqref{eq:objective_q_value2}. \subsection{Policy Fitting} \label{sec:policy_fitting} Recall that fitting the policy in the policy improvement stage (Sec. 4.2.2) requires solving the following constrained optimization: \begin{align} \pi_\textrm{new} = & \argmax_{\a} \sum_{k=1}^N \int_s \mu(s) \int_a q_k(a|s)\log \pi_\a(a|s)\diff a\diff s \nonumber \\ &\textrm{s.t.} \int_s \mu(s) \, \KL(\pi_\textrm{old}(a|s) \,\|\, \pi_\a(a|s) ) \diff s < \beta \, . \end{align} We first write the generalized Lagrangian equation, i.e. 
\begin{align} L(\a, \nu) = & \sum_{k=1}^N \int_s \mu(s) \int_a q_k(a|s)\log \pi_\a(a|s)\diff a\diff s \\ & + \nu \Big(\beta - \int_s \mu(s) \, \KL(\pi_\textrm{old}(a|s) \,\|\, \pi_\a(a|s) ) \diff s \Big), \nonumber \end{align} where $\nu$ is the Lagrange multiplier. Now we solve the following primal problem, $$\max_{\a}\min_{\nu>0} L(\a,\nu) \, .$$ To obtain the parameters $\a$ for the updated policy, we solve for $\nu$ and $\a$ by alternating between 1) fixing $\nu$ to its current value and optimizing for $\a$, and 2) fixing $\a$ to its current value and optimizing for $\nu$. This can be applied to any policy output distribution. For Gaussian policies, we decouple the update rules to optimize for mean and covariance independently, as in \cite{Abdolmaleki_2018}. This allows for setting different KL bounds for the mean ($\beta_{\mu}$) and covariance ($\beta_{\Sigma}$), which results in more stable learning. To do this, we first separate out the following two policies for mean and covariance, \begin{align} \pi_{\a}^{\mu}(a|s) = \mathcal{N} \Big(a; \mu_{\a}(s),\Sigma_{\a_{old}}(s) \Big) \, , \\ \pi_{\a}^{\Sigma}(a|s) = \mathcal{N} \Big(a; \mu_{\a_{old}}(s),\Sigma_{\a}(s) \Big) \, . \end{align} Policy $\pi_{\a}^{\mu}(a|s)$ takes the mean from the online policy network and the covariance from the target policy network, and policy $\pi_{\a}^{\Sigma}(a|s)$ takes the mean from the target policy network and the covariance from the online policy network. Now our optimization problem has the following form: \begin{align} &\max_{\a} \sum_{k=1}^N \int_s \mu(s) \int_a q_k(a|s)\Big(\log \pi_{\a}^{\mu}(a|s) \pi_{\a}^{\Sigma}(a|s) \Big)\diff a\diff s \nonumber \\ &\textrm{s.t.} \int_s \mu(s) \, \KL(\pi_\textrm{old}(a|s) \,\|\, \pi_{\a}^{\mu}(a|s) ) \diff s < \beta_{\mu} \nonumber \\ & \;\;\;\; \int_s \mu(s) \, \KL(\pi_\textrm{old}(a|s) \,\|\, \pi_{\a}^{\Sigma}(a|s) ) \diff s < \beta_{\Sigma} \, . 
\end{align} As in \cite{Abdolmaleki_2018}, we set a much smaller bound for the covariance than for the mean, to preserve exploration and avoid premature convergence. We can solve this optimization problem using the same general procedure as described above. \subsection{Derivation of Dual Function for $\eta$} Recall that obtaining the per-objective improved action distributions $q_k(a|s)$ in the policy improvement stage (Sec. 4.2.1) requires solving a convex dual function for the temperature $\eta_k$ for each objective. For the derivation of this dual function, please refer to Appendix D.2 of the original MPO paper \cite{Abdolmaleki_2018}. \section{Method} We propose a policy iteration algorithm for multi-objective RL. Policy iteration algorithms decompose the RL problem into two sub-problems and iterate until convergence: \begin{enumerate}[noitemsep,topsep=0pt] \item \emph{Policy evaluation}: estimate Q-functions given policy \item \emph{Policy improvement}: update policy given Q-functions \end{enumerate} Algorithm \ref{Alg:MO-MPO} summarizes this two-step multi-objective policy improvement procedure. In Appendix E, we explain how this can be derived from the ``RL as inference'' perspective. We describe multi-objective MPO in this section and explain multi-objective V-MPO in Appendix D. When there is only one objective, MO-(V-)MPO reduces to (V-)MPO. 
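To make the temperature step in Algorithm \ref{Alg:MO-MPO} concrete, the following is a self-contained numerical sketch (illustrative only; the array shapes follow the sampled quantities $Q_k^{ij}$, while the bisection solver and all names are our own choices, not the paper's implementation). Since the dual $g(\eta)$ is convex in $\eta$, its derivative is nondecreasing and the minimizer can be found by root-finding on the derivative; at the solution, the sample-based KL divergence of $q_k$ from $\pi_{\textrm{old}}$ equals $\epsilon_k$ exactly:

```python
import numpy as np

def dual_derivative(eta, Q, eps):
    """Derivative of g(eta) = eta*eps + eta * mean_i log( mean_j exp(Q_ij / eta) )."""
    M = Q.shape[1]
    Z = Q / eta
    zmax = Z.max(axis=1, keepdims=True)
    lse = zmax[:, 0] + np.log(np.exp(Z - zmax).sum(axis=1))  # logsumexp per state i
    w = np.exp(Z - zmax)
    w /= w.sum(axis=1, keepdims=True)                        # softmax over actions j
    eq = (w * Q).sum(axis=1)                                 # E_{q_k}[Q] per state
    return eps + np.mean(lse - np.log(M) - eq / eta)

def solve_eta(Q, eps, lo=1e-3, hi=1e3, iters=100):
    """g is convex, so its derivative is nondecreasing: bisect for the root."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dual_derivative(mid, Q, eps) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
L_states, M_actions = 32, 64
Q = rng.normal(size=(L_states, M_actions))   # stand-in for Q_k(s^i, a^ij), one objective
eps_k = 0.1
eta_k = solve_eta(Q, eps_k)

# Improved nonparametric weights q_k(a^ij | s^i) proportional to exp(Q_ij / eta_k).
q = np.exp((Q - Q.max(axis=1, keepdims=True)) / eta_k)
q /= q.sum(axis=1, keepdims=True)
```

Because the $M$ actions are drawn from $\pi_{\textrm{old}}$, the sample-based KL of $q_k$ from $\pi_{\textrm{old}}$ is $\frac{1}{L}\sum_i\sum_j q^{ij}\log(M q^{ij})$; at the optimum of the dual this equals $\epsilon_k$, which is exactly the sense in which $\epsilon_k$ controls the influence of objective $k$ on the policy update.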
\begin{algorithm}[t] \small \caption{MO-MPO: One policy improvement step}\label{Alg:MO-MPO} \begin{algorithmic}[1] \STATE {\bf given} batch size ($L$), number of actions to sample ($M$), ($N$) Q-functions $\{Q_k^{\pi_{\textrm{old}}}(s,a)\}_{k=1}^N$, preferences $\{\epsilon_k\}_{k=1}^N$, previous policy $\pi_\textrm{old}$, previous temperatures $\{\eta_k\}_{k=1}^N$, replay buffer $\mathcal{D}$, first-order gradient-based optimizer $\mathcal{O}$ \STATE \STATE {\bf initialize} $\pi_\theta$ from the parameters of $\pi_\textrm{old}$ \REPEAT \STATE {\textbf{// Collect dataset} $\{s^i, a^{ij}, Q_k^{ij}\}_{i,j,k}^{L,M,N}$\textbf{, where}} \STATE {\textbf{//} $M$ \textbf{actions} $a^{ij} \sim \pi_\textrm{old}(a|s^i)$ \textbf{and} $Q_k^{ij} = Q_k^{\pi_{\textrm{old}}}(s^i, a^{ij})$} \STATE \STATE {\bf // Compute action distribution for each objective} \FOR{k = 1, \dots, $N$} \STATE \scalebox{0.95}[1]{$ \delta_{\eta_k} \leftarrow \nabla_{\eta_k} \eta_k\epsilon_k+\eta_k\sum_{i}^L\frac{1}{L}\log\left(\sum_j^M\frac{1}{M}\exp\Big(\frac{Q_k^{ij}}{\eta_k}\Big)\right) $} \STATE {Update $\eta_k$ based on $\delta_{\eta_k}$, using optimizer $\mathcal{O}$} \STATE $q_k^{ij} \propto \exp(\frac{Q_k^{ij}}{\eta_k})$ \ENDFOR \STATE \STATE {\bf // Update parametric policy} \STATE $\delta_\pi \leftarrow -\nabla_\theta \sum_i^L \sum_j^M \sum_k^N q_k^{ij} \log \pi_{\theta}(a^{ij}|s^i)$ \STATE {\hspace{0.8em} (subject to additional KL regularization, see \sref{sec:supervisedstep})} \STATE {Update $\pi_\theta$ based on $\delta_\pi$, using optimizer $\mathcal{O}$} \STATE \UNTIL{fixed number of steps} \STATE return $\pi_\textrm{old}=\pi_\theta$ \end{algorithmic} \end{algorithm} \subsection{Multi-Objective Policy Evaluation} \label{sec:policyevalstep} In this step we learn Q-functions to evaluate the previous policy $\pi_\textrm{old}$. We train a separate Q-function per objective, following the Q-decomposition approach \citep{Russell_2003}. 
In principle, any Q-learning algorithm can be used, as long as the target Q-value is computed with respect to $\pi_\textrm{old}$.\footnote{\citet{Russell_2003} prove that critics suffer from an ``illusion of control'' if they are trained with conventional Q-learning \citep{Watkins_1989}. In other words, if each critic computes its target Q-value based on its own best action for the next state, then the critics overestimate Q-values, because in reality the parametric policy $\pi_\theta$ (that considers all critics' opinions) is in charge of choosing actions.} In this paper, we use the Retrace objective \citep{Munos_2016} to learn a Q-function $Q^{\pi_\textrm{old}}_k(s, a; \phi_k)$ for each objective $k$, parameterized by $\phi_k$, as follows: \begin{align*} \min_{{\{\phi_k\}}_1^N} \sum_{k=1}^N \mathbb{E}_{(s,a)\sim \mathcal{D}} \Big[ \big( \hat{Q}_k^{\textrm{ret}}(s, a) - Q^{\pi_\textrm{old}}_k(s, a;\phi_k)\big)^2 \Big], \end{align*} where $\hat{Q}_k^{\textrm{ret}}$ is the Retrace target for objective $k$ and the previous policy $\pi_\textrm{old}$, and $\mathcal{D}$ is a replay buffer containing gathered transitions. See Appendix C for details. \subsection{Multi-Objective Policy Improvement} Given the previous policy $\pi_\textrm{old}(a|s)$ and associated Q-functions $\{Q_k^{\pi_\textrm{old}}(s,a)\}_{k=1}^{N}$, our goal is to improve the previous policy for a given visitation distribution $\mu(s)$.\footnote{In practice, we use draws from the replay buffer to estimate expectations over the visitation distribution $\mu(s)$.} To this end, we learn an action distribution for each Q-function and combine these to obtain the next policy $\pi_\textrm{new}(a|s)$. This is a multi-objective variant of the two-step policy improvement procedure employed by MPO \cite{Abdolmaleki_2018}. 
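As an aside on the policy evaluation step above, the Retrace target for a single trajectory and a single objective can be computed directly from its definition. The sketch below is an illustrative, unoptimized implementation; the array names (and the toy numbers in the note that follows) are ours:

```python
import numpy as np

def retrace_targets(r, q, v_next, rho, gamma=0.99):
    """Retrace targets for one trajectory and one objective k.

    r[j]      = r_k(s_j, a_j)
    q[j]      = target-network estimate Q-hat_k(s_j, a_j)
    v_next[j] = V(s_{j+1}) = E_{pi_old}[Q-hat_k(s_{j+1}, .)]
    rho[j]    = pi_old(a_j | s_j) / b(a_j | s_j)   (b = behavior policy)
    """
    T = len(r)
    c = np.minimum(1.0, rho)               # truncated importance weights c_z
    delta = r + gamma * v_next - q         # TD errors delta^j
    targets = np.empty(T)
    for t in range(T):
        acc, coeff = 0.0, 1.0              # coeff = gamma^{j-t} * prod_{z=t+1}^{j} c_z
        for j in range(t, T):
            if j > t:
                coeff *= gamma * c[j]
            acc += coeff * delta[j]
        targets[t] = q[t] + acc
    return targets
```

With on-policy data ($\rho_j \geq 1$, hence $c_z = 1$) the target reduces to the full discounted sum of TD errors, whereas $\rho_j = 0$ truncates the trace after the current step, leaving just the one-step target $\hat{Q} + \delta$.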
\textbf{In the first step}, for each objective $k$ we learn an improved action distribution $q_k(a|s)$ such that $\mathbb{E}_{q_k(a|s)}[ Q^{\pi_\textrm{old}}_k(s,a)] \geq \mathbb{E}_{\pi_\textrm{old}(a|s)}[ Q^{\pi_\textrm{old}}_k(s,a)]$, where states $s$ are drawn from a visitation distribution $\mu(s)$. \textbf{In the second step}, we combine and distill the improved distributions $q_k$ into a new parametric policy $\pi_\textrm{new}$ (with parameters $\theta_\textrm{new}$) by minimizing the KL-divergence between the distributions and the new parametric policy, i.e., \begin{equation} \theta_\textrm{new} = \argmin_{\theta} \sum_{k=1}^N \mathbb{E}_{\mu(s)}\Big[\mathrm{KL}\Big(q_k(a | s) \| \pi_\theta(a | s)\Big)\Big] \, . \label{eq:klmin} \end{equation} This is a supervised learning loss that performs maximum likelihood estimation of the distributions $q_k$. Next, we will explain these two steps in more detail. \subsubsection{Obtaining action distributions per objective (Step 1)} To obtain the per-objective improved action distributions $q_k(a|s)$, we optimize the standard RL objective for {\it each objective} $Q_{k}$: \begin{align} \label{eq:qk_opt} &\max_{q_{k}} \int_s \mu(s) \int_a q_k(a|s) \, Q_{k}(s,a) \diff a \diff s\\ &\textrm{s.t.} \int_s \mu(s) \, \textrm{KL}(q_k(a|s) \| \pi_\textrm{old}(a|s)) \diff s < \epsilon_k \, , \nonumber \end{align} where $\epsilon_k$ denotes the allowed expected KL divergence for objective $k$. We use these $\epsilon_k$ to encode preferences over objectives. More concretely, $\epsilon_k$ defines the allowed influence of objective $k$ on the change of the policy. 
For nonparametric action distributions $q_k(a|s)$, we can solve this constrained optimization problem in closed form for each state $s$ sampled from $\mu(s)$ \cite{Abdolmaleki_2018}, \begin{equation} q_k(a|s) \propto \pi_\textrm{old}(a|s) \exp\Big(\frac{Q_k(s,a)}{\eta_k}\Big) \, , \end{equation} where the temperature $\eta_k$ is computed based on the corresponding $\epsilon_k$, by solving the following convex dual function: \begin{align} \label{eq:eta} \eta_k = &\argmin_{\eta} \eta \, \epsilon_k \, + \\ &\eta \int_s \mu(s) \log \int_a \pi_\textrm{old}(a|s)\exp\Big(\frac{Q_k(s,a)}{\eta}\Big) \diff a \diff s \, . \nonumber \end{align} In order to evaluate $q_k(a|s)$ and the integrals in \eref{eq:eta}, we draw $L$ states from the replay buffer and, for each state, sample $M$ actions from the current policy $\pi_\textrm{old}$. In practice, we maintain one temperature parameter $\eta_k$ per objective. We found that optimizing the dual function by performing a few steps of gradient descent on $\eta_k$ is effective, and we initialize with the solution found in the previous policy iteration step. Since $\eta_k$ should be positive, we use a projection operator after each gradient step to maintain $\eta_k > 0$. Please refer to Appendix C for derivation details. \prg{Application to Other Deep RL Algorithms} Since the constraints $\epsilon_k$ in \eref{eq:qk_opt} encode the preferences over objectives, solving this optimization problem with good satisfaction of constraints is key for learning a policy that satisfies the desired preferences. For nonparametric action distributions $q_k(a|s)$, we can satisfy these constraints exactly. One could use any policy gradient method \citep[e.g.][]{schulman15,Schulman_2017,svg,Haarnoja_2018} to obtain $q_k(a|s)$ in a \emph{parametric} form instead. However, solving the constrained optimization for parametric $q_k(a|s)$ is not exact, and the constraints may not be well satisfied, which impedes the use of $\epsilon_k$ to encode preferences. 
Moreover, assuming a parametric $q_k(a|s)$ requires maintaining a function approximator (e.g., a neural network) per objective, which can significantly increase the complexity of the algorithm and limit scalability. \prg{Choosing $\epsilon_k$} It is more intuitive to encode preferences via $\epsilon_k$ rather than via scalarization weights, because the former is invariant to the scale of rewards. In other words, having a desired preference across objectives narrows down the range of reasonable choices for $\epsilon_k$, but does not narrow down the range of reasonable choices for scalarization weights. In order to identify reasonable scalarization weights, an RL practitioner needs to additionally be familiar with the scale of rewards for each objective. In practice, we have found that learning performance is robust to a wide range of scales for $\epsilon_k$. It is the \emph{relative} scales of the $\epsilon_k$ that matter for encoding preferences over objectives---the larger a particular $\epsilon_k$ is with respect to others, the more that objective $k$ is preferred. On the other hand, if $\epsilon_{k} = 0$, then objective $k$ will have no influence and will effectively be ignored. In Appendix A.1, we provide suggestions for setting $\epsilon_k$, given a desired preference over objectives. \subsubsection{Fitting a new parametric policy (Step 2)} \label{sec:supervisedstep} In the previous section, for each objective $k$, we have obtained an improved action distribution $q_k(a|s)$. Next, we want to combine these distributions to obtain a single parametric policy that trades off the objectives according to the constraints $\epsilon_k$ that we set. 
For this, we solve a supervised learning problem that fits a parametric policy to the per-objective action distributions from step 1, \begin{align} \theta_\textrm{new} = & \argmax_{\theta} \sum_{k=1}^N \int_s \mu(s) \int_a q_k(a|s)\log \pi_\theta(a|s)\diff a\diff s \nonumber \\ &\textrm{s.t.} \int_s \mu(s) \, \KL(\pi_\textrm{old}(a|s) \,\|\, \pi_\theta(a|s) ) \diff s < \beta \, , \label{eq:klpolicy} \end{align} where $\theta$ are the parameters of our policy (a neural network) and the KL constraint enforces a trust region of size $\beta$ that limits the overall change in the parametric policy. The KL constraint in this step has a regularization effect that prevents the policy from overfitting to the sample-based action distributions, and therefore avoids premature convergence and improves stability of learning \cite{schulman15,abdolmaleki2018relative,Abdolmaleki_2018}. Similar to the first policy improvement step, we evaluate the integrals by using the $L$ states sampled from the replay buffer and the $M$ actions per state sampled from the old policy. In order to optimize \eref{eq:klpolicy} using gradient descent, we employ Lagrangian relaxation, as in MPO \citep{abdolmaleki2018relative} (see Appendix C for more detail). \section{Background and Notation} \label{sec:background} \prg{Multi-Objective Markov Decision Process} In this paper, we consider a multi-objective RL problem defined by a multi-objective Markov Decision Process (MO-MDP). The MO-MDP consists of states $s \in \mathcal{S}$ and actions $a \in \mathcal{A}$, an initial state distribution $p(s_0)$, transition probabilities $p(s_{t+1} | s_t, a_t)$ which define the probability of changing from state $s_t$ to $s_{t+1}$ when taking action $a_{t}$, reward functions $\{r_k(s, a) \in \mathbb{R}\}_{k=1}^N$ per objective $k$, and a discount factor $\gamma \in [0, 1)$. We define our policy $\pi_\theta(a | s)$ as a state-conditional distribution over actions parametrized by $\theta$. 
Together with the transition probabilities, this gives rise to a state visitation distribution $\mu(s)$. We also consider per-objective action-value functions. The action-value function for objective $k$ is defined as the expected return (i.e., cumulative discounted reward) from choosing action $a$ in state $s$ for objective $k$ and then following policy $\pi$: $Q_k^\pi(s,a) = \mathbb{E}_\pi \lbrack \sum_{t=0}^\infty \gamma^t r_k(s_t, a_t) | s_0=s, a_0 = a]$. We can represent this function using the recursive expression $Q_k^\pi(s_t, a_t) = \mathbb{E}_{p(s_{t+1} | s_t, a_t)} \big[ r_k(s_t, a_t) + \gamma V_k^\pi(s_{t+1}) \big]$, where $V_k^\pi(s) = \mathbb{E}_\pi[ Q_k^\pi(s,a) ]$ is the value function of $\pi$ for objective $k$. \prg{Problem Statement} For any MO-MDP there is a set of nondominated policies, i.e., the Pareto front. A policy is nondominated if there is no other policy that improves its expected return for an objective without reducing the expected return of at least one other objective. Given a preference setting, our goal is to find a nondominated policy $\pi_{\theta}$ that satisfies those preferences. In our approach, a setting of constraints does not directly correspond to a particular scalarization, but we show that by varying these constraint settings, we can indeed trace out a Pareto front of policies. \section{Conclusions and Future Work} In this paper we presented a new distributional perspective on multi-objective reinforcement learning, derived from the RL-as-inference perspective. This view leads to two novel multi-objective RL algorithms, namely MO-MPO and MO-V-MPO. We showed these algorithms enable practitioners to encode preferences in a scale-invariant way. Although MO-(V-)MPO is a single-policy approach to MORL, it is capable of producing a variety of Pareto-dominant policies. 
A limitation of this work is that we produced this set of policies by iterating over a relatively large number of $\epsilon$'s, which is computationally expensive. In future work, we plan to extend MO-(V-)MPO into a true multiple-policy MORL approach, either by conditioning the policy on settings of $\epsilon$'s or by developing a way to strategically select $\epsilon$'s to train policies for, analogous to what prior work has done for weights (e.g., \citet{Roijers_2014}). \section{Experimental Domains} We evaluated our approach on one discrete domain (Deep Sea Treasure), three simulated continuous control domains (humanoid, Shadow Hand, and humanoid mocap), and one real-world continuous control domain (Sawyer robot). Here we provide more detail about these domains and the objectives used in each task. \subsection{Deep Sea Treasure} \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{figs/dst} \caption{Deep Sea Treasure environment from \citet{Vamplew_2011}, with weights from \citet{Yang_2019}. Treasures are labeled with their respective values. The agent can move around freely in the white squares, but cannot enter the black squares (i.e., the ocean floor).} \label{fig:dst_env} \end{figure} Deep Sea Treasure (DST) is an $11 {\mkern-0.1mu\times\mkern-0.1mu} 10$ grid-world domain; the state space $\mathcal{S}$ consists of the $x$ and $y$ position of the agent and the action space $\mathcal{A}$ is $\{$\texttt{up}, \texttt{right}, \texttt{down}, \texttt{left}$\}$. The layout of the environment and values of the treasures are shown in \figref{fig:dst_env}. If the action would cause the agent to collide with the sea floor or go out of bounds, it has no effect. Farther-away treasures have higher values. The episode terminates when the agent collects a treasure, or after 200 timesteps. There are two objectives, time penalty and treasure value. A time penalty of $-1$ is given at each time step. 
The agent receives the value of the treasure it picks up as the reward for the treasure objective. In other words, when the agent picks up a treasure of value $v$, the reward vector is $[-1, v]$; otherwise it is $[-1, 0]$. \subsection{Shadow Hand} Our robot platform is a simulated Shadow Dexterous Hand \citep{ShadowHand} in the MuJoCo physics engine \citep{Todorov_2012}. The Shadow Hand has five fingers and 24 degrees of freedom, actuated by 20 motors. The observation consists of joint angles, joint velocities, and touch sensors, for a total of 63 dimensions. Each fingertip of our Shadow Hand has a $4 {\mkern-0.1mu\times\mkern-0.1mu} 4$ spatial touch sensor. This sensor has three channels, measuring the normal force and the $x$ and $y$-axis tangential forces, for a sensor dimension of $4 {\mkern-0.1mu\times\mkern-0.1mu} 4 {\mkern-0.1mu\times\mkern-0.1mu} 3$ per fingertip. We simplify this sensor by summing across spatial dimensions, resulting in a $1 {\mkern-0.1mu\times\mkern-0.1mu} 1 {\mkern-0.1mu\times\mkern-0.1mu} 3$ observation per fingertip. In the \textbf{\emph{touch}} task, there is a block in the environment that is always fixed in the same pose. In the \textbf{\emph{turn}} task, there is a dial in the environment that can be rotated, and it is initialized to a random position between $-30^{\circ}$ and $30^{\circ}$. The target angle of the dial is $0^{\circ}$. The angle of the dial is included in the agent's observation. In the \textbf{\emph{orient}} task, the robot interacts with a rectangular peg in the environment; the initial and target pose of the peg remains the same across episodes. The pose of the peg is included in the agent's observation, encoded as the $xyz$ positions of four corners of the peg (based on how \citet{Levine_2016} encodes end-effector pose). \subsubsection{Balancing Task Completion and Pain} In the \textbf{\emph{touch}} and \textbf{\emph{turn}} tasks, there are two objectives, ``pain'' penalty and task completion. 
A sparse task completion reward of $1$ is given for pressing a block with greater than $5$N of force or turning a dial to a fixed target angle, respectively. In both tasks, the episode terminates when the agent completes the task; i.e., the agent gets a total reward of either $0$ or $1$ for the task completion objective per episode. The Pareto plots for these two tasks in the main paper (Fig. 6) show the \emph{discounted} task reward (with a discount factor of $\gamma = 0.99$), to capture how long it takes agents to complete the task. The ``pain'' penalty is computed as in \citet{Huang_2019}. It is based on the impact force $m(s, s')$, which is the increase in force from state $s$ to the next state $s'$. In our tasks, this is measured by a touch sensor on the block or dial. The pain penalty is equal to the negative of the impact force, scaled by how unacceptable it is: \begin{equation} r_\text{pain}(s, a, s') = -\big[1 - a_\lambda(m(s,s'))\big] m(s,s') \, , \end{equation} where $a_\lambda(\cdot) \in [0, 1]$ computes the acceptability of an impact force. $a_\lambda(\cdot)$ should be a monotonically decreasing function that captures how resilient the robot and the environment are to impacts. As in \citet{Huang_2019}, we use \begin{equation} a_\lambda(m) = \text{sigmoid}(\lambda_1 (-m + \lambda_2)) \, , \end{equation} with $\lambda = [2, 2]^\top$. The relationship between pain penalty and impact force is plotted in \figref{fig:pain_penalty}. \begin{figure}[t!]
\centering \begin{subfigure}{0.16\textwidth} \centering \includegraphics[width=0.96\linewidth]{figs/shadow_touch} \end{subfigure}% \begin{subfigure}{0.16\textwidth} \centering \includegraphics[width=0.96\linewidth]{figs/shadow_turn} \end{subfigure}% \begin{subfigure}{0.16\textwidth} \centering \includegraphics[width=0.96\linewidth]{figs/shadow_orient} \end{subfigure}% \caption{These images show what task completion looks like for the \textbf{\emph{touch}}, \textbf{\emph{turn}}, and \textbf{\emph{orient}} Shadow Hand tasks (from left to right).} \label{fig:shadow} \vspace{-1.5em} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.8\linewidth]{figs/pain_penalty} \caption{In the \textbf{\emph{touch}} and \textbf{\emph{turn}} Shadow Hand tasks, the pain penalty is computed based on the impact force, as plotted here. For low impact forces, the pain penalty is near-zero. For high impact forces, the pain penalty is equal to the negative of the impact force.} \label{fig:pain_penalty} \end{figure} \subsubsection{Aligned Objectives} \begin{figure}[t!] \centering \includegraphics[width=0.8\linewidth]{figs/orient_rewards} \caption{In the \textbf{\emph{orient}} Shadow Hand task, the height and orientation objectives have shaped rewards, as plotted here.} \label{fig:orient_rewards} \end{figure} In the \textbf{\emph{orient}} task, there are three non-competing objectives: touching the peg, lifting the peg to a target height, and orienting the peg to be perpendicular to the ground. The target $z$ position and orientation are shown by the gray transparent peg (\figref{fig:shadow}, right-most); note that the $x$ and $y$ positions are not specified, so the robot in the figure gets the maximum reward with respect to all three objectives. The touch objective has a sparse reward of $1$ for activating the peg's touch sensor, and zero otherwise.
For the height objective, the target $z$ position is $7$ cm above the ground; the peg's $z$ position is computed with respect to its center of mass. The shaped reward for this objective is $1 - \tanh(50 | z_\text{target} - z_\text{peg} |)$. For the orientation objective, since the peg is symmetrical, there are eight possible orientations that are equally valid. The acceptable target orientations $Q_\text{target}$ and the peg's orientation $q_\text{peg}$ are denoted as quaternions. The shaped reward is computed with respect to the closest target quaternion, as \begin{equation*} \min_{q \in Q_\text{target}} 1 - \tanh(2\, d(q, q_\text{peg})) \, , \end{equation*} where $d(\cdot)$ denotes the $\ell_2$-norm of the axis-angle equivalent (in radians) of the distance between the two quaternions. \figref{fig:orient_rewards} shows the shaping of the height and orientation rewards. \subsection{Humanoid} We make use of the humanoid \textbf{\emph{run}} task from \citet{Tassa_2018}.\footnote{This task is available at \href{github.com/deepmind/dm_control}{github.com/deepmind/dm\_control}.} The observation dimension is $67$ and the action dimension is $21$. Actions are joint accelerations with minimum and maximum limits of $-1$ and $1$, respectively. For this task there are two objectives: \begin{itemize} \item The original task reward given by the environment. The goal is to achieve a horizontal speed of $10$ meters per second, in any direction. This reward is shaped: it is equal to $\min(h/10, 1)$ where $h$ is the horizontal speed in meters per second. For this objective we learn a Q-function. \item Limiting energy usage, by penalizing high-magnitude actions. The penalty is the negative $\ell_2$-norm of the action vector, i.e., $r_{\textrm{penalty}}(\Vec{a}) = -\|\Vec{a}\|_2$. For this objective, we do not learn a Q-function. Instead, we compute this penalty directly to evaluate a given action in a {\it state-independent way} during policy optimization.
\end{itemize} \subsection{Humanoid Mocap} The motion capture tracking task used in this paper was developed as part of a concurrent submission \cite{anonymous2020tracking} and does not constitute a contribution of this paper. We use a simulated humanoid adapted from the ``CMU humanoid'' available at {\color{blue} \href{https://github.com/deepmind/dm_control/tree/master/dm_control/locomotion}{dm\_control/locomotion}} \cite{merel2018hierarchical}. We adjusted various body and actuator parameters to be more comparable to those of an average human. This humanoid has 56 degrees of freedom and the observation is 1021-dimensional. This motion capture tracking task is broadly similar to previous work on motion capture tracking \cite{chentanez2018physics,Peng_2018, merel2018hierarchical}. The task is associated with an underlying set of motion capture clips. For this paper we used roughly 40 minutes of locomotion motion capture clips from the CMU motion capture database.\footnote{This database is available at \href{http://mocap.cs.cmu.edu}{mocap.cs.cmu.edu}.} At the beginning of a new episode, we select a frame uniformly at random from all frames in the underlying mocap data (excluding the last 10 frames of each clip). The simulated humanoid is then initialized to the pose of the selected frame. The observation consists of various proprioceptive features describing the current state of the body, as well as the relative target positions and orientations of 13 different body parts in the humanoid's local frame. We provide the agent with the target poses for the next 5 timesteps. The choice of reward function is crucial for motion capture tracking tasks. Here we consider five different reward components, each capturing a different aspect of the similarity of the pose between the simulated humanoid and the mocap target. Four of our reward components were initially proposed by \citet{Peng_2018}.
The first reward component is based on the difference in the center of mass: \begin{align} r_{\text{com}} = \exp\left(-10\|p_\text{com}-p_{\text{com}}^{\text{ref}}\|^2\right) \, ,\nonumber \end{align} where $p_\text{com}$ and $p_{\text{com}}^{\text{ref}}$ are the positions of the center of mass of the simulated humanoid and the mocap reference, respectively. The second reward component is based on the difference in joint angle velocities: \begin{align} r_{\text{vel}} = \exp\left(-0.1\|q_\text{vel}-q_{\text{vel}}^{\text{ref}}\|^2\right) \, ,\nonumber \end{align} where $q_\text{vel}$ and $q_{\text{vel}}^{\text{ref}}$ are the joint angle velocities of the simulated humanoid and the mocap reference, respectively. The third reward component is based on the difference in end-effector positions: \begin{align} r_{\text{app}} = \exp\left(-40\|p_\text{app}-p_{\text{app}}^{\text{ref}}\|^2\right) \, ,\nonumber \end{align} where $p_\text{app}$ and $p_\text{app}^{\text{ref}}$ are the end-effector positions of the simulated humanoid and the mocap reference, respectively. The fourth reward component is based on the difference in joint orientations: \begin{align} r_{\text{quat}} = \exp\left(-2\|q_{\text{quat}}\varominus q_{\text{quat}}^{\text{ref}}\|^2\right),\nonumber \end{align} where $\varominus$ denotes the quaternion difference and $q_\text{quat}$ and $q_{\text{quat}}^{\text{ref}}$ are the joint quaternions of the simulated humanoid and the mocap reference, respectively.
Finally, the last reward component is based on the difference in the joint angles and the Euclidean positions of a set of 13 body parts: \begin{align} r_{\text{trunc}} = 1 - \frac{1}{0.3}\big(\overbrace{\|b_\text{pos}-b_\text{pos}^{\text{ref}}\|_1 + \|q_\text{pos}-q_\text{pos}^{\text{ref}}\|_1}^{:=\varepsilon}\big) \, , \nonumber \end{align} where $b_\text{pos}$ and $b_\text{pos}^{\text{ref}}$ correspond to the body positions of the simulated character and the mocap reference, and $q_\text{pos}$ and $q_\text{pos}^{\text{ref}}$ correspond to the joint angles. We include an early termination condition in our task that is linked to this last reward term. We terminate the episode if $\varepsilon>0.3$, which ensures that $r_\text{trunc}\in[0,1]$. In our MO-V-MPO experiments, we treat each reward component as a separate objective. In our V-MPO experiments, we use a reward of the form: \begin{align} r = \frac{1}{2}r_{\text{trunc}} + \frac{1}{2}\left(0.1r_{\text{com}}+\lambda r_{\text{vel}}+0.15r_{\text{app}}+0.65r_{\text{quat}}\right) \, , \end{align} varying $\lambda$ as described in the main paper (Sec. 6.3). \subsection{Sawyer} The Sawyer peg-in-hole setup consists of a Rethink Robotics Sawyer robot arm. The robot has 7 joints driven by series elastic actuators. On the wrist, a Robotiq FT 300 force-torque sensor is mounted, followed by a Robotiq 2F85 parallel gripper. We implement a 3D end-effector Cartesian velocity controller that maintains a fixed orientation of the gripper. The agent controls the Cartesian velocity setpoints. The gripper is not actuated in this setup. The observations provided are the joint positions, velocities, and torques, the end-effector pose, the wrist force-torque measurements, and the joint velocity command output of the Cartesian controller. We augment each observation with the two previous observations as a history. In each episode, the peg is initialized randomly within an $8 {\mkern-0.1mu\times\mkern-0.1mu} 8 {\mkern-0.1mu\times\mkern-0.1mu} 8$ cm workspace (with the peg outside of the hole).
The environment is then run at 20 Hz for 600 steps. The action limits are $\pm0.05$ m/s on each axis. If the magnitude of the wrist force-torque measurements exceeds 15 N on either horizontal axis ($x$ and $y$ axes) or 5 N on the vertical axis ($z$ axis), the episode is terminated early. There are two objectives: the task itself (inserting the peg), and minimizing wrist force measurements. The reward for the task objective is defined as \begin{align} r_{\text{insertion}} &= \max\Big(0.2r_{\text{approach}}, r_{\text{inserted}}\Big) \\ r_{\text{approach}} &= s(p^{\text{peg}}, p^{\text{opening}}) \\ r_{\text{inserted}} &= r_{\text{aligned}} \; s(p^{\text{peg}}_z, p^{\text{bottom}}_z) \\ r_{\text{aligned}} &= \big[ \|p^{\text{peg}}_{xy} - p^{\text{opening}}_{xy}\|_2 < d/2 \big] \end{align} where $p^{\text{peg}}$ is the peg position, $p^{\text{opening}}$ the position of the hole opening, $p^{\text{bottom}}$ the bottom position of the hole, and $d$ the diameter of the hole; $[\cdot]$ denotes the indicator (Iverson) bracket. $s(p_1, p_2)$ is a shaping operator \begin{align} s(p_1, p_2) = 1 - \tanh^2( \frac{\atanh(\sqrt{l})}{\epsilon} \|p_1-p_2 \|_2) \, , \end{align} which gives a reward of $1-l$ if $p_1$ and $p_2$ are at a Euclidean distance of $\epsilon$. Here we chose $l_{\text{approach}}=0.95$, $\epsilon_{\text{approach}}=0.01$, $l_{\text{inserted}}=0.5$, and $\epsilon_{\text{inserted}}=0.04$. Intentionally, $0.04$ corresponds to the length of the peg, such that $r_{\text{inserted}}=0.5$ if $p^{\text{peg}} = p^{\text{opening}}$. As a result, if the peg is in the approach position, $r_{\text{inserted}}$ is dominant over $r_{\text{approach}}$ in $r_{\text{insertion}}$. Intuitively, there is a shaped alignment reward with a magnitude of $0.2$. After accurate alignment within a horizontal tolerance, the overall reward is dominated by a vertical shaped reward component for insertion. The horizontal tolerance threshold ensures that the agent is not encouraged to try to insert the peg from the side.
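The shaping operator and insertion reward defined above can be sketched directly from the given formulas; this is a minimal Python illustration (function and variable names are ours):

```python
import numpy as np

def shape(p1, p2, l, eps):
    # s(p1, p2) = 1 - tanh^2( atanh(sqrt(l)) / eps * ||p1 - p2||_2 );
    # by construction it equals 1 - l when the points are at distance eps.
    dist = np.linalg.norm(np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float))
    return 1.0 - np.tanh(np.arctanh(np.sqrt(l)) / eps * dist) ** 2

def insertion_reward(p_peg, p_opening, p_bottom, d):
    """Task reward r_insertion = max(0.2 * r_approach, r_inserted)."""
    r_approach = shape(p_peg, p_opening, l=0.95, eps=0.01)
    # alignment indicator: horizontal distance to the opening below d/2
    aligned = np.linalg.norm(np.asarray(p_peg)[:2] - np.asarray(p_opening)[:2]) < d / 2
    r_inserted = float(aligned) * shape([p_peg[2]], [p_bottom[2]], l=0.5, eps=0.04)
    return max(0.2 * r_approach, r_inserted)

# With the peg exactly at the hole opening (peg length 0.04), the vertical
# term gives r_inserted = 0.5, which dominates 0.2 * r_approach = 0.2.
r = insertion_reward(p_peg=[0.0, 0.0, 0.04],
                     p_opening=[0.0, 0.0, 0.04],
                     p_bottom=[0.0, 0.0, 0.0],
                     d=0.01)
```

This reproduces the property stated above: at the approach position the insertion term takes over from the (capped) approach term.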
The reward for the secondary objective, minimizing wrist force measurements, is defined as \begin{align} r_{\text{force}} = - \|F\|_1 \, , \end{align} where $F = (F_x, F_y, F_z)$ are the forces measured by the wrist sensor. \section{Experiments: Continuous Control Domains} \label{sec:continuouscontrol} The advantage of encoding preferences via $\epsilon$'s, rather than via weights, is apparent in more complex domains. We compared our approaches, MO-MPO and MO-V-MPO, against scalarized MPO and V-MPO in four high-dimensional continuous control domains, in MuJoCo \cite{Todorov_2012} and on a real robot. The domains we consider are: \textbf{Humanoid:} We use the humanoid \textbf{\emph{run}} task defined in the DeepMind Control Suite \cite{Tassa_2018}. Policies must optimize for horizontal speed $h$ while minimizing energy usage. The task reward is $\min(h/10, 1)$ where $h$ is in meters per second, and the energy usage penalty is the action $\ell_2$-norm. The humanoid has 21 degrees of freedom, and the observation consists of joint angles, joint velocities, head height, hand and feet positions, torso vertical orientation, and center-of-mass velocity, for a total of 67 dimensions. \textbf{Shadow Hand:} We consider three tasks on the Shadow Dexterous Hand: \textbf{\emph{touch}}, \textbf{\emph{turn}}, and \textbf{\emph{orient}}. In the \textbf{\emph{touch}} and \textbf{\emph{turn}} tasks, policies must complete the task while minimizing ``pain.'' A sparse task reward of $1.0$ is given for pressing the block with greater than $5$N of force or for turning the dial from a random initial location to the target location. The pain penalty penalizes the robot for colliding with objects at high speed; this penalty is defined as in \citet{Huang_2019}. In the \textbf{\emph{orient}} task, there are three aligned objectives: touching the rectangular peg, lifting it to a given height, and orienting it to be perpendicular to the ground. All three rewards are between 0 and 1.
The Shadow Hand has five fingers and 24 degrees of freedom, actuated by 20 motors. The observation consists of joint angles, joint velocities, and touch sensors, for a total of 63 dimensions. The \textbf{\emph{touch}} and \textbf{\emph{turn}} tasks terminate when the goal is reached or after 5 seconds, and the \textbf{\emph{orient}} task terminates after 10 seconds. \textbf{Humanoid Mocap:} We consider a large-scale humanoid motion capture tracking task, similar to that of \citet{Peng_2018}, in which policies must learn to follow motion capture reference data.\footnote{This task was developed concurrently by \citet{anonymous2020tracking} and does not constitute a contribution of this paper.} There are five objectives, each capturing a different aspect of the similarity of the pose between the simulated humanoid and the mocap target: joint orientations, joint velocities, hand and feet positions, center-of-mass positions, and certain body positions and joint angles. These objectives are described in detail in Appendix B.4. In order to balance these multiple objectives, prior work relied on heavily-tuned reward functions \citep[e.g.][]{Peng_2018}. The humanoid has 56 degrees of freedom and the observation is 1021-dimensional, consisting of proprioceptive observations as well as six steps of motion capture reference frames. In total, we use about 40 minutes of locomotion mocap data, making this an extremely challenging domain. \textbf{Sawyer Peg-in-Hole:} We train a Rethink Robotics Sawyer robot arm to insert a cylindrical peg into a hole, while minimizing wrist forces. The task reward is shaped toward positioning the peg directly above the hole and increases for insertion, and the penalty is the $\ell_1$-norm of the Cartesian forces measured by the wrist force-torque sensor. The latter implicitly penalizes contacts and impacts, as well as excessive directional change (due to the gripper's inertia inducing forces when accelerating).
We impose a force threshold to protect the hardware---if this threshold is exceeded, the episode is terminated. The action space is the end effector's Cartesian velocity, and the observation is 102-dimensional, consisting of Cartesian position, joint position and velocity, wrist force-torque, and joint action, for three timesteps. \subsection{Evaluation Metric} We run MO-(V-)MPO and scalarized (V-)MPO with a wide range of constraint settings $\epsilon_k$ and scalarization weights $w_k$, respectively, corresponding to a wide range of possible desired preferences. (The exact settings are provided in Appendix A.) For tasks with two objectives, we plot the Pareto front found by each approach. We also compute the \emph{hypervolume} of each found Pareto front; this metric is commonly used for evaluating MORL algorithms \cite{Vamplew_2011}. Given a set of policies $\Pi$ and a reference policy $r$ that is dominated by all policies in this set, this metric is the hypervolume of the space of all policies that dominate $r$ and are dominated by at least one policy in $\Pi$. We use DEAP \cite{Fortin_2012} to compute hypervolumes. \subsection{Results: Humanoid and Shadow Hand} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figs/humanoid_gentle_pareto.pdf} \caption{Pareto fronts found by MO-MPO and scalarized MPO for humanoid \textbf{\emph{run}} (top row) and Shadow Hand tasks. Each dot represents a single trained policy. Corresponding hypervolumes are in \tableref{tab:hypervolume}. Task reward is discounted for \textbf{\emph{touch}} and \textbf{\emph{turn}}, with a discount factor of $0.99$.
For \textbf{\emph{orient}}, the $x$ and $y$ axes are the total reward for the lift and orientation objectives, respectively.} \label{fig:pareto_curves} \vspace{-1.5em} \end{figure} In the \textbf{\emph{run}}, \textbf{\emph{touch}}, and \textbf{\emph{turn}} tasks, the two objectives are competing---a very high preference for minimizing action norm or pain, as opposed to getting task reward, will result in a policy that always chooses zero-valued actions. Across these three tasks, the Pareto front found by MO-MPO is superior to the one found by scalarized MPO, with respect to the hypervolume metric (\tableref{tab:hypervolume}).\footnote{We use hypervolume reference points of $[0, -10^4]$ for \textbf{\emph{run}}, $[0, -10^5]$ for \textbf{\emph{run}} with $10{\mkern-0.1mu\times\mkern-0.1mu}$ penalty, $[0, -20]$ for \textbf{\emph{touch}} and \textbf{\emph{turn}}, and the zero vector for \textbf{\emph{orient}} and humanoid mocap.} In particular, MO-MPO finds more policies that perform well with respect to \emph{both} objectives, i.e., are in the upper right portion of the Pareto front. MO-MPO also speeds up learning on \textbf{\emph{run}} and \textbf{\emph{touch}} (\figref{fig:pareto_curves}). Qualitatively, MO-MPO trains \textbf{\emph{run}} policies that look more natural and ``human-like''; videos are at \url{http://sites.google.com/view/mo-mpo}. When we scale the action norm penalty by $10{\mkern-0.1mu\times\mkern-0.1mu}$ for \textbf{\emph{run}}, scalarized MPO policies no longer achieve high task reward, whereas MO-MPO policies do (\figref{fig:pareto_curves}, top right). This supports the claim that MO-MPO's encoding of preferences is indeed scale-invariant. When the objectives are aligned and have similarly scaled rewards, as in the \textbf{\emph{orient}} task, MO-MPO and scalarized MPO perform similarly, as expected.
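For reference, the two-objective hypervolume underlying these comparisons (both objectives maximized, relative to a dominated reference point) can be computed with a short sweep; this is an illustrative sketch, not the DEAP implementation used in our experiments:

```python
def hypervolume_2d(points, ref):
    """Area dominated by `points` and bounded below by the reference point
    `ref`, for two objectives that are both maximized. Sketch only."""
    # keep points that strictly dominate the reference point
    pts = [p for p in points if p[0] > ref[0] and p[1] > ref[1]]
    # sort by first objective (descending), then drop dominated points;
    # along the resulting front, the second objective strictly increases
    pts.sort(key=lambda p: (-p[0], -p[1]))
    front = []
    for f1, f2 in pts:
        if not front or f2 > front[-1][1]:
            front.append((f1, f2))
    # sum rectangular slabs between consecutive front points
    area = 0.0
    for i, (f1, f2) in enumerate(front):
        next_f1 = front[i + 1][0] if i + 1 < len(front) else ref[0]
        area += (f1 - next_f1) * (f2 - ref[1])
    return area
```

For example, the front $\{(3,1), (1,2)\}$ with reference point $(0,0)$ covers the union of a $3{\times}1$ and a $1{\times}2$ rectangle, for a hypervolume of $4$.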
\prg{Ablation} We also ran vanilla MPO on humanoid \textbf{\emph{run}}, with the same range of weight settings as for scalarized MPO, to investigate how useful Q-decomposition is. In vanilla MPO, we train a single critic on the scalarized reward function, which is equivalent to removing Q-decomposition from scalarized MPO. Vanilla MPO trains policies that achieve similar task reward (up to 800), but with twice the action norm penalty (up to $-10^4$). As a result, the hypervolume of the Pareto front that vanilla MPO finds is more than an order of magnitude worse than that of scalarized MPO and MO-MPO ($2.2 {\mkern-0.1mu\times\mkern-0.1mu} 10^6$ versus $2.6 {\mkern-0.1mu\times\mkern-0.1mu} 10^7$ and $6.9 {\mkern-0.1mu\times\mkern-0.1mu} 10^7$, respectively). \begin{table}[t] \small \centering \setlength\tabcolsep{3pt} \begin{tabular}{lcc} \toprule \textbf{Task} & \textbf{scalarized MPO} & \textbf{MO-MPO} \Tstrut \\ \midrule Humanoid \textbf{\emph{run}}, mid-training & $1.1 {\mkern-0.1mu\times\mkern-0.1mu} 10^{6}$ & $\boldsymbol{3.3 {\mkern-0.1mu\times\mkern-0.1mu} 10^{6}}$ \\ Humanoid \textbf{\emph{run}} & $6.4 {\mkern-0.1mu\times\mkern-0.1mu} 10^{6}$ & $\boldsymbol{7.1 {\mkern-0.1mu\times\mkern-0.1mu} 10^{6}}$ \\ \midrule Humanoid \textbf{\emph{run}}, $1 {\mkern-0.1mu\times\mkern-0.1mu}$ penalty & $5.0 {\mkern-0.1mu\times\mkern-0.1mu} 10^{6}$ & $\boldsymbol{5.9 {\mkern-0.1mu\times\mkern-0.1mu} 10^{6}}$ \\ Humanoid \textbf{\emph{run}}, $10 {\mkern-0.1mu\times\mkern-0.1mu}$ penalty & $2.6 {\mkern-0.1mu\times\mkern-0.1mu} 10^{7}$ & $\boldsymbol{6.9 {\mkern-0.1mu\times\mkern-0.1mu} 10^{7}}$ \\ \midrule Shadow \textbf{\emph{touch}}, mid-training & $14.3$ & $\boldsymbol{15.6}$ \\ Shadow \textbf{\emph{touch}} & $16.2$ & $\boldsymbol{16.4}$ \\ Shadow \textbf{\emph{turn}} & $14.4$ & $\boldsymbol{15.4}$ \\ Shadow \textbf{\emph{orient}} & $\boldsymbol{2.8 {\mkern-0.1mu\times\mkern-0.1mu} 10^4}$ & $\boldsymbol{2.8 {\mkern-0.1mu\times\mkern-0.1mu} 10^4}$ \\ \midrule Humanoid Mocap
& $\boldsymbol{3.86 {\mkern-0.1mu\times\mkern-0.1mu} 10^{-6}}$ & $3.41 {\mkern-0.1mu\times\mkern-0.1mu} 10^{-6}$ \\ \bottomrule \end{tabular} \caption{Hypervolume measurements across tasks and approaches.} \label{tab:hypervolume} \vspace{-1em} \end{table} \subsection{Results: Humanoid Mocap} The objectives in this task are mostly aligned. In contrast to the other experiments, we use V-MPO \cite{Song2020V-MPO} as the base algorithm because it outperforms MPO in learning this task. In addition, since V-MPO is an on-policy variant of MPO, this enables us to evaluate our approach in the on-policy setting. Each training run is very computationally expensive, so we train only a handful of policies for each approach.\footnote{For MO-V-MPO, we set all $\epsilon_k = 0.01$. In addition, for each objective in turn, we set that objective's $\epsilon_k = 0.001$ and all others to $0.01$. For V-MPO, we fix the weights of four objectives to reasonable values, and try weights of $[0.1, 0.3, 0.6, 1, 5]$ for matching joint velocities.} None of the MO-V-MPO policies are dominated by those found by V-MPO. In fact, although the weights span a wide range of ``preferences'' for the joint velocity objective, the only policies found by V-MPO that are \emph{not} dominated by a MO-V-MPO policy are those with extreme values for the joint velocity reward (either $\leq 0.006$ or $\geq 0.018$), whereas it is between $0.0103$ and $0.0121$ for MO-V-MPO policies. Although the hypervolume of the Pareto front found by V-MPO is higher than that of MO-V-MPO (\tableref{tab:hypervolume}), finding policies that over- or under-prioritize any objective is undesirable. Qualitatively, the policies trained with MO-V-MPO look more similar to the mocap reference data---they exhibit less feet jittering, compared to those trained with scalarized V-MPO; this can be seen in the corresponding video. \subsection{Results: Sawyer Peg-in-Hole} In this task, we would like the robot to prioritize successful task completion, while minimizing wrist forces.
With this in mind, for MO-MPO we set $\epsilon_\text{task} = 0.1$ and $\epsilon_\text{force} = 0.05$, and for scalarized MPO we try $[w_\text{task}, w_\text{force}] = [0.95, 0.05]$ and $[0.8, 0.2]$. We find that policies trained with scalarized MPO focus all learning on a single objective at the beginning of training; we also observed this in the \textbf{\emph{touch}} task, where scalarized MPO policies quickly learn to either maximize task reward or minimize pain penalty, but not both (\figref{fig:pareto_curves}, bottom left). In contrast, the policy trained with MO-MPO simultaneously optimizes for \emph{both} objectives throughout training. In fact, throughout training, the MO-MPO policy does just as well with respect to task reward as the scalarized MPO policy that cares more about task reward, and similarly for wrist force penalty (\figref{fig:realrobot}). \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figs/realrobot} \caption{Task reward (left) and wrist force penalty (right) learning curves for the Sawyer peg-in-hole task. The policy trained with MO-MPO quickly learns to optimize \emph{both} objectives, whereas the other two do not. Each line represents a single trained policy.} \label{fig:realrobot} \vspace{-1em} \end{figure} \section{Experiments: Toy Domains} In the empirical evaluation that follows, we will first demonstrate the mechanics and scale-invariance of MO-MPO in a single-state environment (\sref{sec:mab}), and then show that MO-MPO can find all Pareto-dominant policies in a popular MORL benchmark (\sref{sec:dst}). Finally, we show the benefit of using MO-MPO in high-dimensional continuous control domains, including on a real robot (\sref{sec:continuouscontrol}). Appendices A and B contain a detailed description of all domains and tasks, experimental setups, and implementation details. 
\prg{Baselines} The goal of our empirical evaluation is to analyze the benefit of using our proposed multi-objective policy improvement step (\sref{sec:supervisedstep}), which encodes preferences over objectives via constraints $\epsilon_k$ on expected KL-divergences, rather than via weights $w_k$. Thus, we primarily compare MO-MPO against \emph{scalarized MPO}, which relies on linear scalarization weights $w_k$ to encode preferences. The only difference between MO-MPO and scalarized MPO is the policy improvement step: for scalarized MPO, a single improved action distribution $q(a|s)$ is computed, based on $\sum_k w_k Q_k(s,a)$ and a single KL constraint $\epsilon$. State-of-the-art approaches that combine MORL with deep RL assume linear scalarization as well, either learning a separate policy for each setting of weights \citep{Mossalem_2016} or learning a single policy conditioned on scalarization weights \citep{Friedman_2018, Abels_2019}. Scalarized MPO addresses the former problem, which is easier. The policy evaluation step in scalarized MPO is analogous to \emph{scalarized Q-learning}, proposed by \citet{Mossalem_2016}. As we show later in \sref{sec:continuouscontrol}, even learning an optimal policy for a single scalarization is difficult in high-dimensional continuous control domains. \subsection{Simple World} \label{sec:mab} \begin{figure}[t!] \centering \includegraphics[width=0.5\linewidth]{figs/simple_world} \caption{A simple environment in which the agent starts at state $S_0$ and chooses to navigate to one of three terminal states. There are two objectives. Taking the \texttt{left} action, for instance, leads to a reward of $1$ for the first objective and $4$ for the second.} \label{fig:simpleworld} \end{figure} First, we will examine the behavior of MO-MPO in a simple multi-armed bandit with three actions (\texttt{up}, \texttt{right}, and \texttt{left}) (\figref{fig:simpleworld}), inspired by Russell \& Zimdars \yrcite{Russell_2003}.
We train policies with scalarized MPO and with MO-MPO. The policy evaluation step is exact because the $Q$-value function for each objective is known: it is equal to the reward received for that objective after taking each action, as labeled in \figref{fig:simpleworld}.\footnote{The policy improvement step can also be computed exactly, because solving for the optimal temperature $\eta$ (or $\eta_1$ and $\eta_2$ in the MO-MPO case) is a convex optimization problem, and the KL-constrained policy update is also a convex optimization problem when there is only one possible state. We use CVXOPT \cite{CVXOPT} as our convex optimization solver.} We consider three possible desired preferences: equal preference for the two objectives, preferring the first, and preferring the second. Encoding preferences in scalarized MPO amounts to choosing appropriate linear scalarization weights, and in MO-MPO amounts to choosing appropriate $\epsilon$'s. We use the following weights and $\epsilon$'s: \begin{itemize}[noitemsep,topsep=0pt] \item \emph{equal preference}: weights $[0.5, 0.5]$ or $\epsilon$'s $[0.01, 0.01]$ \item \emph{prefer first}: weights $[0.9, 0.1]$ or $\epsilon$'s $[0.01, 0.002]$ \item \emph{prefer second}: weights $[0.1, 0.9]$ or $\epsilon$'s $[0.002, 0.01]$ \end{itemize} We set $\epsilon = 0.01$ for scalarized MPO. If we start with a uniform policy and run MPO with $\beta = 0.001$ until the policy converges, scalarized MPO and MO-MPO result in similar policies (\figref{fig:simple_scale}, solid bars): \texttt{up} for \emph{equal preference}, \texttt{right} for \emph{prefer first}, and \texttt{left} for \emph{prefer second}. 
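As noted in the footnote, the KL-constrained update can be computed exactly in this setting. The following is a minimal numerical sketch of the per-objective reweighting (hypothetical Q-values; a simple ternary search on the convex dual stands in for CVXOPT):

```python
import numpy as np

def log_sum_exp(x):
    # numerically stable log(sum(exp(x)))
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def solve_temperature(q_vals, prior, eps, lo=1e-6, hi=1e6, iters=200):
    # Minimize the convex dual g(eta) = eta * eps + eta * log E_prior[exp(Q / eta)];
    # at the minimizer, KL(q || prior) = eps for q(a) ~ prior(a) * exp(Q(a) / eta).
    def dual(eta):
        return eta * eps + eta * log_sum_exp(np.log(prior) + q_vals / eta)
    for _ in range(iters):  # ternary search over the convex dual
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if dual(m1) < dual(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

prior = np.ones(3) / 3.0               # uniform over {up, right, left}
q = np.array([2.0, 3.0, 1.0])          # hypothetical Q-values for one objective
eta = solve_temperature(q, prior, eps=0.01)
q_dist = prior * np.exp(q / eta)
q_dist /= q_dist.sum()                 # improved distribution for this objective

# Multiplying the rewards by 20 multiplies the optimal eta by 20 as well,
# leaving q_dist unchanged -- the scale invariance discussed in the text.
eta_scaled = solve_temperature(20.0 * q, prior, eps=0.01)
```

Substituting $\eta \rightarrow 20\eta$ in the dual for the scaled rewards recovers $20$ times the original dual, so the minimizer scales exactly.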
However, if we make the rewards \emph{imbalanced} by multiplying the rewards obtained for the first objective by $20$ (e.g., \texttt{left} now obtains a reward of $[20, 4]$), we see that the policies learned by scalarized MPO shift to preferring the optimal action for the first objective (\texttt{right}) in both the \emph{equal preference} and \emph{prefer second} cases (\figref{fig:simple_scale}, striped bars). In contrast, the final policies for MO-MPO are the same as for balanced rewards, because in each policy improvement step, MO-MPO optimizes for a separate temperature $\eta_k$ that scales each objective's $Q$-value function. This $\eta_k$ is computed based on the corresponding allowed KL-divergence $\epsilon_k$, so when the rewards for any objective $k$ are multiplied by a factor but $\epsilon_k$ remains the same, the computed $\eta_k$ ends up being scaled by that factor as well, neutralizing the effect of the scaling of rewards (see Eq. \eqref{eq:eta}). Even in this simple environment, we see that MO-MPO's scale-invariant way of encoding preferences is valuable. In more complex domains, in which the $Q$-value functions must be learned in parallel with the policy, the (automatic) dynamic adjustment of temperatures $\eta_k$ per objective becomes more essential (\sref{sec:continuouscontrol}). The scale of $\epsilon_k$ controls the amount that objective $k$ can influence the policy's update. If we set $\epsilon_1 = 0.01$ and sweep over the range from $0$ to $0.01$ for $\epsilon_2$, the resulting policies go from always picking \texttt{right}, to splitting probability across \texttt{right} and \texttt{up}, to always picking \texttt{up} (\figref{fig:simple_simplex}, right). In contrast, setting weights leads to policies quickly converging to placing all probability on a single action (\figref{fig:simple_simplex}, left). 
We hypothesize this limits the ability of scalarized MPO to explore and find compromise policies (that perform well with respect to all objectives) in more challenging domains. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figs/simple_scale} \caption{When two objectives have comparable reward scales (solid bars), scalarized MPO (first row) and MO-MPO (second row) learn similar policies, across three different preferences. However, when the scale of the first objective is much higher (striped bars), scalarized MPO shifts to always preferring the first objective. In contrast, MO-MPO is scale-invariant and still learns policies that satisfy the preferences. The y-axis denotes action probability.} \label{fig:simple_scale} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figs/simple_simplex_combined} \caption{A visualization of policies during learning---each curve corresponds to a particular setting of weights (left) or $\epsilon$'s (right). Policies are initialized as uniform (the blue dot), and are trained until convergence. Each point $(x,y)$ corresponds to the policy with $p($\texttt{left}$) = x$, $p($\texttt{right}$) = y$, and $p($\texttt{up}$) = 1 - x - y$. The top left and bottom right blue stars denote the optimal policy for the first and second objectives, respectively.} \label{fig:simple_simplex} \end{figure} \subsection{Deep Sea Treasure} \label{sec:dst} \begin{figure}[t!] \centering \includegraphics[width=1.\linewidth]{figs/dst_combo_2.png} \caption{Left: Blue stars mark the true Pareto front for Deep Sea Treasure. MO-MPO, with a variety of settings for $\epsilon_k$, discovers all points on the true Pareto front. The area of the orange circles is proportional to the number of $\epsilon_k$ settings that converged to that point. Right: As more preference is given to the treasure objective (i.e., as $x$ increases), policies tend to prefer higher-value treasures. 
Each orange dot in the scatterplot corresponds to a particular setting of $\epsilon_k$.} \label{fig:dst} \end{figure} An important quality of any MORL approach is the ability to find a variety of policies on the true Pareto front \cite{Roijers_2013}. We demonstrate this in Deep Sea Treasure (DST) \cite{Vamplew_2011}, a popular benchmark for testing MORL approaches. DST consists of an $11 {\mkern-0.1mu\times\mkern-0.1mu} 10$ grid world with ten treasure locations. The agent starts in the upper left corner ($s_0 = (0, 0)$) and has a choice of four actions (moving one square \texttt{up}, \texttt{right}, \texttt{down}, or \texttt{left}). When the agent picks up a treasure, the episode terminates. The agent has two objectives: treasure value and time penalty. The time penalty is $-1$ for each time step that the agent takes, and farther-away treasures have higher treasure values. We use the treasure values from Yang et al. \yrcite{Yang_2019}. We ran scalarized MPO with weightings $[w, 1-w]$ and $w \in [0, 0.01, 0.02, \dots, 1]$. All policies converged to a point on the true Pareto front, and all but three found the optimal policy for that weighting. In terms of coverage, policies were found for eight out of ten points on the Pareto front.\footnote{Since scalarized MPO consistently finds the optimal policy for any given weighting in this task, with a more strategic selection of weights, we expect policies for all ten points would be found.} We ran MO-MPO on this task as well, for a range of $\epsilon$: $\epsilon_\text{time} \in [0.01, 0.02, 0.05]$ and $\epsilon_\text{treasure} = c \cdot \epsilon_\text{time}$, where $c \in [0.5, 0.51, 0.52, \dots, 1.5]$. All runs converged to a policy on the true Pareto front, and MO-MPO found policies for all ten points on the front (\figref{fig:dst}, left).
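Checking that a set of learned policies lies on the Pareto front reduces to a dominance filter over their per-objective returns. A minimal sketch (the return values here are hypothetical, for illustration only):

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of (objective1, objective2)
    returns, where larger is better in both coordinates."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (time-penalty, treasure-value) returns from several runs;
# (-5, 8.2) is dominated by (-3, 8.2): same treasure, larger time penalty.
returns = [(-1, 0.7), (-3, 8.2), (-5, 8.2), (-19, 23.7)]
front = pareto_front(returns)
```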
Note that it is the \emph{ratio} of $\epsilon$'s that matters, rather than the exact settings---across all settings of $\epsilon_\text{time}$, similar ratios of $\epsilon_\text{treasure}$ to $\epsilon_\text{time}$ result in similar policies; as this ratio increases, policies tend to prefer higher-value treasures (\figref{fig:dst}, right). \section{Experiment Details} In this section, we describe implementation details and specify the hyperparameters used for our algorithm. In all our experiments, the policy and critic(s) are implemented with feed-forward neural networks. We use Adam \citep{Kingma_2015} for optimization. In our continuous control tasks, the policy returns a Gaussian distribution with a diagonal covariance matrix, i.e., $ \pi_{\a}(\vec a| \vec s) = \mathcal{N}\left(\mu , \vec \Sigma \right) $. The policy is parametrized by a neural network, which outputs the mean $\mu=\mu(\vec s)$ and diagonal Cholesky factors $A=A(\vec s)$, such that $\Sigma = AA^T$. The diagonal factor $A$ has positive diagonal elements enforced by the softplus transform $A_{ii} \leftarrow \log(1 + \exp(A_{ii}))$, to ensure positive definiteness of the diagonal covariance matrix. \tableref{t:MPO} shows the default hyperparameters that we use for MO-MPO and scalarized MPO, and Table \ref{t:MPO_domainspecific} shows the hyperparameters that differ from these defaults for the humanoid, Shadow Hand, and Sawyer experiments. For all tasks on which we train policies with MO-MPO, a separate critic is trained for each objective, with a shared first layer. The only exception is the action norm penalty in humanoid \textbf{\emph{run}}, which is computed exactly from the action. We found that for both policy and critic networks, layer normalization of the first layer followed by a hyperbolic tangent ($\tanh$) is important for the stability of the algorithm.
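As a concrete sketch of this parameterization (a minimal numpy version; the actual networks are MLPs, and whether $\tanh$ is applied to the mean is a per-task hyperparameter, cf. \tableref{t:MPO}):

```python
import numpy as np

def gaussian_policy_params(raw_mean, raw_diag, tanh_mean=True):
    """Map raw network outputs to a diagonal Gaussian N(mu, Sigma),
    with Sigma = A A^T and A diagonal.  The softplus transform keeps
    the diagonal Cholesky factors positive, so Sigma is positive
    definite even for very negative raw outputs."""
    mu = np.tanh(raw_mean) if tanh_mean else raw_mean
    a_diag = np.log1p(np.exp(raw_diag))   # softplus: A_ii > 0
    sigma_diag = a_diag ** 2              # diagonal of Sigma = A A^T
    return mu, sigma_diag

mu, var = gaussian_policy_params(np.array([0.3, -2.0]),
                                 np.array([-5.0, 2.0]))
```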
\begin{table}[t] \small \centering \setlength\tabcolsep{3pt} \begin{tabular}{lc} \toprule \midrule \textbf{Hyperparameter} & \textbf{Default} \Tstrut \\ \midrule policy network & \\ \hspace{2em} layer sizes & $(300, 200)$ \\ \hspace{2em} take $\tanh$ of action mean? & yes \\ \hspace{2em} minimum variance & $10^{-12}$ \\ \hspace{2em} maximum variance & unbounded \\ \midrule critic network(s) & \\ \hspace{2em} layer sizes & $(400, 400, 300)$ \\ \hspace{2em} take $\tanh$ of action? & yes \\ \hspace{2em} Retrace sequence size & $8$ \\ \hspace{2em} discount factor $\gamma$ & $0.99$ \\ \midrule both policy and critic networks & \\ \hspace{2em} layer norm on first layer? & yes \\ \hspace{2em} $\tanh$ on output of layer norm? & yes \\ \hspace{2em} activation (after each hidden layer) & ELU \\ \midrule MPO & \\ \hspace{2em} actions sampled per state & $20$ \\ \hspace{2em} default $\epsilon$ & $0.1$ \\ \hspace{2em} KL-constraint on policy mean, $\beta_{\mu}$ & $10^{-3}$ \\ \hspace{2em} KL-constraint on policy covariance, $\beta_{\Sigma}$ & $10^{-5}$ \\ \hspace{2em} initial temperature $\eta$ & $1$ \\ \midrule training & \\ \hspace{2em} batch size & $512$ \\ \hspace{2em} replay buffer size & $10^{6}$ \\ \hspace{2em} Adam learning rate & $3 {\mkern-0.1mu\times\mkern-0.1mu} 10^{-4}$ \\ \hspace{2em} Adam $\epsilon$ & $10^{-3}$ \\ \hspace{2em} target network update period & $200$ \\ \midrule \bottomrule \end{tabular} \caption{Default hyperparameters for MO-MPO and scalarized MPO, with decoupled update on mean and covariance (\sref{sec:policy_fitting}).} \label{t:MPO} \end{table} \begin{table}[t] \small \centering \setlength\tabcolsep{3pt} \begin{tabular}{lc} \multicolumn{2}{c}{\textbf{Humanoid \emph{run}}} \Tstrut \\ \toprule \midrule policy network & \\ \hspace{2em} layer sizes & $(400, 400, 300)$ \\ \hspace{2em} take $\tanh$ of action mean? 
& no \\ \hspace{2em} minimum variance & $10^{-8}$ \\ \midrule critic network(s) & \\ \hspace{2em} layer sizes & $(500, 500, 400)$ \\ \hspace{2em} take $\tanh$ of action? & no \\ \midrule MPO & \\ \hspace{2em} KL-constraint on policy mean, $\beta_{\mu}$ & $5 {\mkern-0.1mu\times\mkern-0.1mu} 10^{-3}$ \\ \hspace{2em} KL-constraint on policy covariance, $\beta_{\Sigma}$ & $10^{-6}$ \\ \midrule training & \\ \hspace{2em} Adam learning rate & $2 {\mkern-0.1mu\times\mkern-0.1mu} 10^{-4}$ \\ \hspace{2em} Adam $\epsilon$ & $10^{-8}$ \\ \midrule \bottomrule \\ \multicolumn{2}{c}{\textbf{Shadow Hand}} \Tstrut \\ \toprule \midrule MPO & \\ \hspace{2em} actions sampled per state & $30$ \\ \hspace{2em} default $\epsilon$ & $0.01$ \\ \midrule \bottomrule \\ \multicolumn{2}{c}{\textbf{Sawyer}} \Tstrut \\ \toprule \midrule MPO & \\ \hspace{2em} actions sampled per state & $15$ \\ \midrule training & \\ \hspace{2em} target network update period & $100$ \\ \midrule \bottomrule \end{tabular} \caption{Hyperparameters for humanoid, Shadow Hand, and Sawyer experiments that differ from the defaults in \tableref{t:MPO}.} \label{t:MPO_domainspecific} \end{table} In our experiments comparing the Pareto front found by MO-MPO versus scalarized MPO, we ran MO-MPO with a range of $\epsilon_k$ settings and ran scalarized MPO with a range of weight settings, to obtain a set of policies for each algorithm. The settings for humanoid \textbf{\emph{run}} and the Shadow Hand tasks are listed in \tableref{t:eps_weight_settings}. The settings for humanoid mocap and Sawyer are specified in the main paper, in Sec. 6.3 and 6.4, respectively. 
\begin{table}[t] \small \centering \setlength\tabcolsep{3pt} \begin{tabular}{ll} \toprule \midrule \textbf{Condition} & \textbf{Settings} \Tstrut \\ \midrule \multicolumn{2}{l}{Humanoid \textbf{\emph{run}} (one seed per setting)} \\ \rule{0pt}{3ex} \hspace{2em} scalarized MPO & $w_\text{task} = 1 - w_\text{penalty}$ \\ & $w_\text{penalty} \in \text{linspace}(0, 0.15, 100)$ \\ \rule{0pt}{2ex} \hspace{2em} MO-MPO & $\epsilon_\text{task} = 0.1$ \\ & $\epsilon_\text{penalty} \in \text{linspace}(10^{-6}, 0.15, 100)$ \\ \midrule \multicolumn{2}{l}{Humanoid \textbf{\emph{run}}, normal vs. scaled (three seeds per setting)} \\ \rule{0pt}{3ex} \hspace{2em} scalarized MPO & $w_\text{task} = 1 - w_\text{penalty}$ \\ & $w_\text{penalty} \in \{0.01, 0.05, 0.1\}$ \\ \rule{0pt}{2ex} \hspace{2em} MO-MPO & $\epsilon_\text{task} = 0.1$ \\ & $\epsilon_\text{penalty} \in \{0.01, 0.05, 0.1\}$ \\ \midrule \multicolumn{2}{l}{Shadow Hand \textbf{\emph{touch}} and \textbf{\emph{turn}} (three seeds per setting)} \\ \rule{0pt}{3ex} \hspace{2em} scalarized MPO & $w_\text{task} = 1 - w_\text{penalty}$ \\ & $w_\text{penalty} \in \text{linspace}(0, 0.9, 10)$ \\ \rule{0pt}{2ex} \hspace{2em} MO-MPO & $\epsilon_\text{task} = 0.01$ \\ & $\epsilon_\text{penalty} \in \text{linspace}(0.001, 0.015, 15)$ \\ \midrule \multicolumn{2}{l}{Shadow Hand \textbf{\emph{orient}} (ten seeds per setting)} \\ \rule{0pt}{3ex} \hspace{2em} scalarized MPO & $w_\text{touch} = w_\text{height} = w_\text{orientation} = 1/3$ \\ \rule{0pt}{2ex} \hspace{2em} MO-MPO & $\epsilon_\text{touch} = \epsilon_\text{height} = \epsilon_\text{orientation} = 0.01$ \\ \midrule \bottomrule \end{tabular} \caption{Settings for $\epsilon_k$ and weights. 
($\text{linspace}(x, y, z)$ denotes a set of $z$ evenly-spaced values between $x$ and $y$.)} \label{t:eps_weight_settings} \end{table} In the following subsections, we first give suggestions for choosing appropriate $\epsilon_k$ to encode a particular preference (\sref{sec:setting_eps}), then we describe the discrete MO-MPO used for Deep Sea Treasure (\sref{sec:dst_details}), and finally we describe implementation details for humanoid mocap (\sref{sec:humanoid_mocap_details}). \subsection{Suggestions for Setting $\epsilon$} \label{sec:setting_eps} Our proposed algorithm, MO-(V-)MPO, requires practitioners to translate a desired preference across objectives to numerical choices for $\epsilon_k$ for each objective $k$. At first glance, this may seem daunting. However, in practice, we have found that encoding preferences via $\epsilon_k$ is often more intuitive than doing so via scalarization. In this subsection, we seek to give an intuition on how to set $\epsilon_k$ for different desired preferences. Recall that each $\epsilon_k$ controls the influence of objective $k$ on the policy update, by constraining the KL-divergence between each objective-specific distribution and the current policy (Sec. 4.1). We generally choose $\epsilon_k$ in the range of $0.001$ to $0.1$. \prg{Equal Preference} When all objectives are equally important, the general rule is to set all $\epsilon_k$ to the same value. We did this in our \textbf{\emph{orient}} and humanoid mocap tasks, where objectives are aligned or mostly aligned. In contrast, it can be tricky to choose appropriate weights in linear scalarization to encode equal preferences---setting all weights equal to $1 / K$ (where $K$ is the number of objectives) is only appropriate if the objectives' rewards are of similar scales. We explored this in Sec. 5.1 and Fig. 3. When setting all $\epsilon_k$ to the same value, what should this value be? 
The larger $\epsilon_k$ is, the more influence the objectives will have on the policy update step. Since the per-objective critics are learned in parallel with the policy, setting $\epsilon_k$ too high tends to destabilize learning, because early on in training, when the critics produce unreliable $Q$-values, their influence on the policy will lead it in the wrong direction. On the other hand, if $\epsilon_k$ is set too low, then it slows down learning, because the per-objective action distribution is only allowed to deviate by a tiny amount from the current policy, and the updated policy is obtained via supervised learning on the combination of these action distributions. Eventually the learning will converge to more or less the same policy though, as long as $\epsilon_k$ is not set too high. \prg{Unequal Preference} When there is a difference in preferences across objectives, the \emph{relative} scale of $\epsilon_k$ is what matters. The larger the scale of $\epsilon_k$ is relative to $\epsilon_l$, the more influence objective $k$ has on the policy update, compared to objective $l$. In the extreme case, when $\epsilon_l$ is near zero, objective $l$ will have no influence on the policy update and will effectively be ignored. We explored this briefly in Sec. 5.1 and Fig. 4. One common example of unequal preferences is when we would like an agent to complete a task while minimizing other objectives---e.g., an action norm penalty, ``pain'' penalty, or wrist force-torque penalty, in our experiments. In this case, the $\epsilon$ for the task objective should be higher than that for the other objectives, to incentivize the agent to prioritize actually doing the task. If the $\epsilon$ for the penalties is too high, then the agent will care more about minimizing the penalty (which can typically be achieved by simply taking no actions) than about doing the task, which is not particularly useful.
The scale of $\epsilon_k$ has a similar effect as in the equal preference case. If the scale of $\epsilon_k$ is too high or too low, then the same issues arise as discussed for equal preferences. If all $\epsilon_k$ increase or decrease in scale by the same (moderate) factor, and thus their relative scales remain the same, then typically they will converge to more or less the same policy. This can be seen in Fig. 5 (right), in the Deep Sea Treasure domain: regardless of whether $\epsilon_\text{time}$ is $0.01$, $0.02$, or $0.05$, the relationship between the ratio of $\epsilon_\text{treasure}$ to $\epsilon_\text{time}$ and the treasure that the policy converges to selecting is essentially the same. \subsection{Deep Sea Treasure} \label{sec:dst_details} In order to handle discrete actions, we make several minor adjustments to scalarized MPO and MO-MPO. The policy returns a categorical distribution, rather than a Gaussian. The policy is parametrized by a neural network, which outputs a vector of logits (i.e., unnormalized log probabilities) of size $|\mathcal{A}|$. The KL constraint on the change of this policy, $\beta$, is $0.001$. The input to the critic network is the state concatenated with a four-dimensional one-hot vector denoting which action is chosen (e.g., the \texttt{up} action corresponds to $[1, 0, 0, 0]^\top$). Critics are trained with one-step temporal-difference error, with a discount of $0.999$. Other than these changes, the network architectures and the MPO and training hyperparameters are the same as in \tableref{t:MPO}. \subsection{Humanoid Mocap} \label{sec:humanoid_mocap_details} For the humanoid mocap experiments, we used the following architecture for both MO-V-MPO and V-MPO: for the policy, we process a concatenation of the mocap reference observations and the proprioceptive observations with a two layer MLP with 1024 hidden units per layer.
This reference encoder is followed by linear layers to produce the mean and log standard deviation of a stochastic latent variable. These latent variables are then concatenated with the proprioceptive observations and processed by another two layer MLP with 1024 hidden units per layer, to produce the action distribution. For V-MPO, we use a three layer MLP with 1024 hidden units per layer as the critic. For MO-V-MPO, we use a shared two layer MLP with 1024 hidden units per layer, followed by a one layer MLP with 1024 hidden units per layer per objective. In both cases we use $n$-step returns to train the critic, with a discount factor of $0.95$. \tableref{t:mocap_hypers} shows additional hyperparameters used in our experiments. \begin{table}[t] \small \centering \setlength\tabcolsep{3pt} \begin{tabular}{lc} \multicolumn{2}{c}{\textbf{Humanoid Mocap}} \Tstrut \\ \toprule \midrule Scalarized V-MPO & \\ \hspace{2em} KL-constraint on policy mean, $\beta_\mu$ & 0.1 \\ \hspace{2em} KL-constraint on policy covariance, $\beta_\Sigma$ & $10^{-5}$ \\ \hspace{2em} default $\epsilon$ & $0.1$ \\ \hspace{2em} initial temperature $\eta$ & $1$ \\ Training & \\ \hspace{2em} Adam learning rate & $10^{-4}$\\ \hspace{2em} batch size & $128$\\ \hspace{2em} unroll length (for $n$-step return, \sref{sec:vmpo_eval}) & $32$\\ \midrule \bottomrule \\ \end{tabular} \caption{Hyperparameters for the humanoid mocap experiments.} \label{t:mocap_hypers} \end{table} \section{Introduction} Reinforcement learning (RL) algorithms do an excellent job at training policies to optimize a single scalar reward function. Recent advances in deep RL have made it possible to train policies that exceed human-level performance on Atari \cite{mnih2015human} and Go \cite{Silver_2016}, perform complex robotic manipulation tasks \cite{Zeng_2019}, learn agile locomotion \cite{Tan_2018}, and even obtain reward in unanticipated ways \cite{Amodei_2016}.
However, many real-world tasks involve \emph{multiple}, possibly competing, objectives. For instance, choosing a financial portfolio requires trading off between risk and return; controlling energy systems requires trading off performance and cost; and autonomous cars must trade off fuel costs, efficiency, and safety. Multi-objective reinforcement learning (MORL) algorithms aim to tackle such problems \cite{Roijers_2013,Liu_2015}. A common approach is \emph{scalarization}: based on \emph{preferences} across objectives, transform the multi-objective reward vector into a single scalar reward (e.g., by taking a convex combination), and then use standard RL to optimize this scalar reward. \begin{figure}[t!] \centering \begin{subfigure}{0.12\textwidth} \centering \includegraphics[width=0.96\linewidth]{figs/shadow} \end{subfigure}% \begin{subfigure}{0.12\textwidth} \centering \includegraphics[width=0.96\linewidth]{figs/run} \end{subfigure}% \begin{subfigure}{0.12\textwidth} \centering \includegraphics[width=0.96\linewidth]{figs/mocap} \end{subfigure}% \begin{subfigure}{0.12\textwidth} \centering \includegraphics[width=0.96\linewidth]{figs/sawyer} \end{subfigure} \caption{We demonstrate our approach in four complex continuous control domains, in simulation and in the real world. Videos are at \url{http://sites.google.com/view/mo-mpo}.} \label{fig:domains} \end{figure} It is tricky, though, for practitioners to pick the appropriate scalarization for a desired preference across objectives, because often objectives are defined in different units and/or scales. For instance, suppose we want an agent to complete a task while minimizing energy usage and mechanical wear-and-tear. Task completion may correspond to a sparse reward or to the number of square feet a vacuuming robot has cleaned, and reducing energy usage and mechanical wear-and-tear could be enforced by penalties on power consumption (in kWh) and actuator efforts (in N or Nm), respectively. 
Practitioners would need to resort to using trial and error to select a scalarization that ensures the agent prioritizes actually doing the task (and thus being useful) over saving energy. Motivated by this, we propose a scale-invariant approach for encoding preferences, derived from the RL-as-inference perspective. Instead of choosing a scalarization, practitioners set a constraint per objective. Based on these constraints, we learn an action distribution per objective that improves on the current policy. Then, to obtain a single updated policy that makes these trade-offs, we use supervised learning to fit a policy to the combination of these action distributions. The constraints control the influence of each objective on the policy, by constraining the KL-divergence between each objective-specific distribution and the current policy. The higher the constraint value, the more influence the objective has. Thus, a desired preference over objectives can be encoded as the relative magnitude of these constraint values. Fundamentally, scalarization combines objectives in reward space, whereas our approach combines objectives in \emph{distribution} space, thus making it invariant to the scale of rewards. In principle, our approach can be combined with any RL algorithm, regardless of whether it is off-policy or on-policy. We combine it with maximum a posteriori policy optimization (MPO) \cite{abdolmaleki2018relative,Abdolmaleki_2018}, an off-policy actor-critic RL algorithm, and V-MPO \cite{Song2020V-MPO}, an on-policy variant of MPO. We call these two algorithms multi-objective MPO (MO-MPO) and multi-objective V-MPO (MO-V-MPO), respectively. Our main contribution is providing a distributional view on MORL, which enables scale-invariant encoding of preferences. We show that this is a theoretically-grounded approach, that arises from taking an RL-as-inference perspective of MORL. 
Empirically, we analyze the mechanics of MO-MPO and show it finds all Pareto-optimal policies in a popular MORL benchmark task. Finally, we demonstrate that MO-MPO and MO-V-MPO outperform scalarized approaches on multi-objective tasks across several challenging high-dimensional continuous control domains (\figref{fig:domains}). \section{MO-V-MPO: Multi-Objective On-Policy MPO} In this section we describe how MO-MPO can be adapted to the on-policy case, in which a state-value function $V(s)$ is learned and used to estimate the advantages for each objective. We call this approach MO-V-MPO. \subsection{Multi-objective policy evaluation in the on-policy setting} \label{sec:vmpo_eval} In the on-policy setting, to evaluate the previous policy $\pi_\text{old}$, we use advantages $A(s,a)$ estimated from a learned state-value function $V(s)$, instead of a state-action value function $Q(s,a)$ as in the main text. We train a separate $V$-function for each objective by regressing to the standard $n$-step return~\cite{Sutton_1998} associated with each objective. More concretely, given trajectory snippets $\tau = \{ (s_0, a_0, \vec{r}_0), \dots, (s_T, a_T, \vec{r}_T)\}$ where $\vec{r}_t$ denotes a reward vector $\{r_k(s_t, a_t)\}_{k=1}^N$ that consists of rewards for all $N$ objectives, we find value function parameters $\phi_k$ by optimizing the following objective: \begin{align} \min_{\phi_{1:N}} \sum_{k=1}^N \mathbb{E}_{\tau} \Big[ \big( G_k^{(T)}(s_t,a_t) - V^{\pi_\textrm{old}}_{\phi_k}(s_t) \big)^2 \Big]. \end{align} Here $G_k^{(T)}(s_t,a_t)$ is the $T$-step target for value function $k$, which uses the actual rewards in the trajectory and bootstraps from the current value function for the rest: $G^{(T)}_k(s_t,a_t)=\sum_{\ell=t}^{T-1} \gamma^{\ell-t}r_k(s_\ell,a_\ell) + \gamma^{T-t}V^{\pi_\text{old}}_{\phi_k}(s_{T})$. The advantages are then estimated as $A^{\pi_\text{old}}_k(s_t,a_t)=G^{(T)}_k(s_t,a_t) - V^{\pi_\text{old}}_{\phi_k}(s_t)$.
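A minimal numpy sketch of this evaluation step, assuming a $[T, N]$ array of reward vectors for one snippet and bootstrap values $V_k(s_T)$ from the old policy's critics:

```python
import numpy as np

def per_objective_targets(rewards, bootstrap, gamma):
    """Bootstrapped targets G_k for each time step and objective.
    rewards:   [T, N] array of reward vectors r_k(s_t, a_t).
    bootstrap: [N] array of V_k(s_T) from the old policy's value functions.
    Uses the backward recursion G_t = r_t + gamma * G_{t+1}, with G_T set
    to the bootstrap values."""
    T, _ = rewards.shape
    targets = np.zeros_like(rewards)
    tail = np.asarray(bootstrap, dtype=float)
    for t in range(T - 1, -1, -1):
        tail = rewards[t] + gamma * tail
        targets[t] = tail
    return targets

rewards = np.array([[1.0, 0.0],
                    [0.0, 1.0]])   # two steps, two objectives
G = per_objective_targets(rewards, bootstrap=[2.0, 2.0], gamma=0.5)
# Advantages follow as A_k(s_t, a_t) = G_k(s_t, a_t) - V_k(s_t), per objective.
```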
\subsection{Multi-objective policy improvement in the on-policy setting} Given the previous policy $\pi_\text{old}(a|s)$ and estimated advantages $\{ A^{\pi_\text{old}}_k(s,a) \}_{k=1,\ldots,N}$ associated with this policy for each objective, our goal is to improve the previous policy. To this end, we first learn an improved variational distribution $q_k(s,a)$ for each objective, and then combine and distill the variational distributions into a new parametric policy $\pi_\text{new}(a|s)$. Unlike in MO-MPO, for MO-V-MPO we use the joint distribution $q_k(s,a)$ rather than local policies $q_k(a|s)$ because, without a learned $Q$-function, only one action per state is available for learning. This is a multi-objective variant of the two-step policy improvement procedure employed by V-MPO~\cite{Song2020V-MPO}. \subsubsection{Obtaining improved variational distributions per objective (Step 1)} In order to obtain the improved variational distributions $q_k(s,a)$, we optimize the RL objective for each objective, using its advantage estimate $A_{k}$: \begin{align} \label{eq:vmpo-e-step} &\max_{q_{k}} \int_{s,a} \, q_k(s,a) \, A_{k}(s,a) \diff a \diff s\\ &\textrm{s.t. } \textrm{KL}(q_k(s,a) \| p_\textrm{old}(s,a)) < \epsilon_k \, , \nonumber \end{align} where the KL-divergence is computed over all $(s,a)$, $\epsilon_k$ denotes the allowed expected KL divergence, and $p_\text{old}(s,a)=\mu(s)\pi_\text{old}(a|s)$ is the state-action distribution associated with $\pi_\text{old}$. As in MO-MPO, we use these $\epsilon_k$ to define the preferences over objectives. More concretely, $\epsilon_k$ defines the allowed contribution of objective $k$ to the change of the policy. Therefore, the larger a particular $\epsilon_k$ is with respect to the others, the more that objective $k$ is preferred. On the other hand, if $\epsilon_{k}$ is zero, then objective $k$ will have no contribution to the change of the policy and will effectively be ignored.
Equation \eqref{eq:vmpo-e-step} can be solved in closed form: \begin{equation} q_k(s,a) \propto p_\textrm{old}(s,a) \exp\Big(\frac{A_k(s,a)}{\eta_k}\Big) \, , \end{equation} where the temperature $\eta_k$ is computed based on the constraint $\epsilon_k$ by solving the following convex dual problem: \begin{align} \eta_k = &\argmin_{{\eta_k}}\bigg[ \eta_k \, \epsilon_k \, + \\ &\eta_k \log \int_{s,a} p_\textrm{old}(s,a)\exp\Big(\frac{A_k(s,a)}{\eta_k}\Big) \diff a \diff s\bigg] \, . \nonumber \end{align} We perform this optimization alongside the policy optimization, by taking a gradient descent step on $\eta_k$; we initialize $\eta_k$ with the solution found in the previous policy iteration step. Since $\eta_k$ must be positive, we use a projection operator after each gradient step to maintain $\eta_k>0$. \prg{Top-$k$ advantages} As in \citet{Song2020V-MPO}, in practice we used the samples corresponding to the top 50\% of advantages in each batch of data. \subsubsection{Fitting a new parametric policy (Step 2)} We next want to combine and distill the state-action distributions obtained in the previous step into a single parametric policy $\pi_\text{new}(a|s)$ that favors all of the objectives according to the preferences specified by $\epsilon_k$. For this we solve a supervised learning problem that fits a parametric policy as follows: \begin{align} \pi_\textrm{new} = & \argmax_{\theta} \sum_{k=1}^N \int_{s,a} q_k(s,a)\log \pi_\theta(a|s)\diff a\diff s \nonumber \\ &\textrm{s.t.} \int_s \, \KL(\pi_\textrm{old}(a|s) \,\|\, \pi_\theta(a|s) ) \diff s < \beta \, , \label{eq:vmpo-m-step} \end{align} where $\theta$ are the parameters of our function approximator (a neural network), which we initialize from the weights of the previous policy $\pi_\textrm{old}$, and the KL constraint enforces a trust region of size $\beta$ that limits the overall change in the parametric policy, to improve stability of learning.
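The resulting E-step can be sketched as computing, for each objective, nonparametric weights over a batch of sampled state-action pairs; the M-step then fits $\pi_\text{new}$ by weighted maximum likelihood under these weights. The per-batch normalization and the `top_frac` argument below are our simplifications for illustration, not the exact implementation:

```python
import numpy as np

def per_objective_sample_weights(advantages, etas, top_frac=0.5):
    """advantages: [N, B] estimated A_k for B sampled (s, a) pairs.
    etas: [N] temperatures (one per objective, from the dual).
    Returns an [N, B] matrix whose row k is proportional to
    exp(A_k / eta_k) on the top `top_frac` fraction of advantages
    (the V-MPO top-half filtering) and zero elsewhere."""
    N, B = advantages.shape
    weights = np.zeros((N, B))
    keep = max(1, int(round(top_frac * B)))
    for k in range(N):
        idx = np.argsort(advantages[k])[-keep:]   # top advantages only
        w = np.exp((advantages[k, idx] - advantages[k, idx].max()) / etas[k])
        weights[k, idx] = w / w.sum()
    return weights

A = np.array([[0.0, 1.0, 2.0, 3.0],    # objective 1 favors later samples
              [3.0, 2.0, 1.0, 0.0]])   # objective 2 favors earlier ones
W = per_objective_sample_weights(A, etas=np.array([1.0, 1.0]))
# Each row sums to 1 and is supported only on that objective's top half.
```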
As in MPO, the KL constraint in this step has a regularization effect that prevents the policy from overfitting to the local policies and therefore avoids premature convergence. In order to optimize Equation~\eqref{eq:vmpo-m-step}, we employ Lagrangian relaxation similar to the one employed for ordinary V-MPO~\cite{Song2020V-MPO}. \section{Related Work} \subsection{Multi-Objective Reinforcement Learning} \label{sec:relatedwork} Multi-objective reinforcement learning (MORL) algorithms are either \emph{single-policy} or \emph{multiple-policy} \cite{Vamplew_2011}. Single-policy approaches seek to find the optimal policy for a given scalarization of the multi-objective problem. Often this scalarization is linear, but other choices have also been explored \cite{Moffaert_2013}. However, the scalarization may be unknown at training time, or it may change over time. \emph{Multiple-policy} approaches handle this by finding a set of policies that approximates the true Pareto front. Some approaches repeatedly call a single-policy MORL algorithm with strategically-chosen scalarizations \cite{Natarajan_2005,Roijers_2014,Mossalem_2016,Zuluaga_2016}. Other approaches learn a set of policies simultaneously, by using a multi-objective variant of the Q-learning update rule \cite{Barrett_2008,Moffaert_2014,Reymond_2019,Yang_2019} or by modifying gradient-based policy search \cite{Parisi_2014,Pirotta_2015}. Most existing approaches for finding the Pareto front are limited to discrete state and action spaces, in which tabular algorithms are sufficient. Although recent work combining MORL with deep RL handles high-dimensional observations, this is in domains with low-dimensional and usually discrete action spaces \citep{Mossalem_2016,vanSeijen_2017,Friedman_2018,Abels_2019,Reymond_2019,Yang_2019,Nottingham_2019}. 
In contrast, we evaluate our approach on continuous control tasks with more than 20 action dimensions.\footnote{MORL for continuous control tasks is difficult because the policy can no longer output arbitrary action distributions, which limits how well it can compromise between competing objectives. In state-of-the-art RL algorithms for continuous control, policies typically output a single action (e.g., D4PG \citep{BarthMaron_2018}) or a Gaussian (e.g., PPO \citep{Schulman_2017}, MPO \citep{Abdolmaleki_2018}, and SAC \citep{Haarnoja_2018}).} A couple of recent works have applied deep MORL to find the Pareto front in continuous control tasks; these works assume scalarization and rely on additionally learning either a meta-policy \citep{Xi_2019} or inter-objective relationships \citep{Zhang_2019}. Our approach is orthogonal to these: it encodes preferences via constraints on the influence of each objective on the policy update, rather than via scalarization. MO-MPO can be run multiple times, with different constraint settings, to find a Pareto front of policies. \subsection{Constrained Reinforcement Learning} An alternate way of setting preferences is to enforce that policies meet certain constraints. For instance, threshold lexicographic ordering approaches optimize a (single) objective while meeting specified threshold values on the other objectives \cite{Gabor_1998}, optionally with slack \cite{Wray_2015}. Similarly, safe RL is concerned with learning policies that optimize a scalar reward while not violating safety constraints \cite{Achiam_2017,Chow_2018}; this has also been studied in the off-policy batch RL setting \cite{Le_2019}. Related work minimizes costs while ensuring the policy meets a constraint on the minimum expected return \cite{Bohez_2019}, but this requires that the desired or achievable reward is known a priori. In contrast, MO-MPO does not require knowledge of the scale of rewards.
In fact, often there is no easy way to specify constraints on objectives, e.g., it is difficult to figure out a priori how much actuator effort a robot will need to use to perform a task. \subsection{Multi-Task Reinforcement Learning} Multi-task reinforcement learning can also be cast as a MORL problem. Generally these algorithms learn a separate policy for each task, with shared learning across tasks \citep{teh2017distral,riedmiller2018learning,wulfmeier2019regularized}. In particular, Distral \citep{teh2017distral} learns a shared prior that regularizes the per-task policies to be similar to each other, and thus captures essential structure that is shared across tasks. MO-MPO differs in that the goal is to learn a \emph{single} policy that must \emph{trade off} across different objectives. Other multi-task RL algorithms seek to train a single agent to solve different tasks, and thus need to handle different reward scales across tasks. Prior work uses adaptive normalization for the targets in value-based RL, so that the agent cares equally about all tasks \cite{van2016learning,hessel2019multi}. Similarly, prior work in multi-objective optimization has dealt with objectives of different units and/or scales by normalizing objectives to have similar magnitudes \citep{Marler_2005,Grodzevich_2006,Kim_2006,Daneshmand_2017,Ishibuchi_2017}. MO-MPO can also be seen as doing adaptive normalization, but for \emph{any} preference over objectives, not just equal preferences. In general, invariance to reparameterization of the function approximator has been investigated in optimization literature resulting in, for example, natural gradient methods \citep{Martens14}. The common tool here is measuring distances in function space instead of parameter space, using KL-divergence. Similarly in this work, to achieve invariance to the scale of objectives, we use KL-divergence over policies to encode preferences. 
\section{Multi-Objective Policy Improvement as Inference} In the main paper, we motivated the multi-objective policy update rules from an intuitive perspective. In this section, we show that our multi-objective policy improvement algorithm can also be derived from the RL-as-inference perspective. In the policy improvement stage, we assume that a $Q$-function for each objective is given, and we would like to improve our policy with respect to these $Q$-functions. The derivation here extends the derivation for the policy improvement algorithm in (single-objective) MPO in \citet{abdolmaleki2018relative} (in appendix) to the multi-objective case. We assume there are observable binary improvement events, $\{R_k\}_{k=1}^N$, for each objective. $R_k=1$ indicates that our policy has improved for objective $k$, while $R_k=0$ indicates that it has not. Now we ask, if the policy has improved with respect to \emph{all} objectives, i.e., $\{R_k=1\}_{k=1}^N$, what would the parameters $\a$ of that improved policy be? More concretely, this is equivalent to the maximum a posteriori estimate for $\{R_k = 1\}_{k=1}^N$: \begin{align} \max_\a p_{\a}(R_1 =1, R_2=1, \dots, R_N=1) \, p(\a) \, , \end{align} where the probability of improvement events depends on $\a$. Assuming independence between improvement events $R_k$ and using log probabilities leads to the following objective: \begin{align} \max_\a \sum_{k=1}^N\log p_{\a}(R_k =1) + \log p(\a) \, . \label{eq:map} \end{align} The prior distribution over parameters, $p(\a)$, is fixed during the policy improvement step. We set the prior such that $\pi_\a$ stays close to the target policy during each policy improvement step, to improve stability of learning (\sref{sec:m_step}). We use the standard expectation-maximization (EM) algorithm to efficiently maximize $\sum_{k=1}^N\log p_{\a}(R_k =1)$. 
The EM algorithm repeatedly constructs a tight lower bound in the E-step, given the previous estimate of $\a$ (which corresponds to the target policy in our case), and then optimizes that lower bound in the M-step. We introduce a variational distribution $q_k(s, a)$ per objective, and use this to decompose the log-evidence $\sum_{k=1}^N\log p_{\a}(R_k =1)$ as follows: \begin{align} \label{eq:decomp} \sum_{k=1}^N & \log p_{\a}(R_k =1) \\ & = \sum_{k=1}^N \KL(q_k(s,a)\,\|\,p_{\a}(s,a|R_k=1)) \, - \nonumber \\ & \;\;\;\; \sum_{k=1}^N \KL(q_k(s,a)\,\|\,p_{\a}(R_k=1, s, a)) \, . \nonumber \end{align} The negative of the second term of this decomposition expands as: \begin{align} -\sum_{k=1}^N & \KL(q_k(s,a)\,\|\,p_{\a}(R_k=1, s, a)) \\ & = \sum_{k=1}^N \mathbb{E}_{q_k(s,a)} \Big[ \log \frac{p_{\a}(R_k=1,s,a)}{q_k(s,a)} \Big] \nonumber \\ & = \sum_{k=1}^N \mathbb{E}_{q_k(s,a)} \Big[ \log \frac{p(R_k=1|s,a)\pi_\a(a|s)\mu(s)}{q_k(s,a)}\Big] \, , \nonumber \end{align} where $\mu(s)$ is the stationary state distribution, which is assumed to be given in each policy improvement step. In practice, $\mu(s)$ is approximated by the distribution of the states in the replay buffer. $p(R_k=1|s,a)$ is the likelihood of the improvement event for objective $k$, if our policy chose action $a$ in state $s$. The first term of the decomposition in Equation \eqref{eq:decomp} is always nonnegative, so the expansion above is a lower bound on the log-evidence $\sum_{k=1}^N\log p_{\a}(R_k =1)$. $\pi_\a(a|s)$ and $q_k(a|s) = \frac{q_k(s,a)}{\mu(s)}$ are unknown, even though $\mu(s)$ is given. In the E-step, we estimate $q_k(a|s)$ for each objective $k$ by minimizing the first term, given $\a = \a'$ (the parameters of the target policy). Then in the M-step, we find a new $\a$ by fitting the parametric policy to the distributions from the first step. \subsection{E-Step} The E-step corresponds to the first step of policy improvement in the main paper (Sec. 4.2.1).
In the E-step, we choose the variational distributions $\{q_k(a|s)\}_{k=1}^N$ such that the lower bound on $\sum_{k=1}^N\log p_{\a}(R_k =1)$ is as tight as possible when $\a=\a'$, the parameters of the target policy. The lower bound is tight when the first term of the decomposition in Equation \eqref{eq:decomp} is zero, so we choose $q_k$ to minimize this term. We can optimize for each variational distribution $q_k$ independently: \begin{align} q_k(a|s) &= \argmin_{q} \mathbb{E}_{\mu(s)} \Big[ \KL(q(a|s)\,\|\,p_{\a'}(s,a|R_k=1)) \Big] \nonumber \\ &= \argmin_{q} \mathbb{E}_{\mu(s)} \Big[ \KL(q(a|s)\,\|\,\pi_{\a'}(a|s)) \, - \nonumber \\ & \qquad\qquad\qquad\quad\;\; \mathbb{E}_{q(a|s)} \log p(R_k=1|s,a)) \Big] \, . \label{eq:estep} \end{align} We can solve this optimization problem in closed form, which gives us $$q_k(a|s) = \frac{\pi_{\a'}(a|s) \, p(R_k=1|s,a)}{\int \pi_{\a'}(a|s) \, p(R_k=1|s,a) \diff a} \, .$$ This solution weighs the actions based on their relative improvement likelihood $p(R_k=1|s,a)$ for each objective. We define the likelihood of an improvement event $R_k$ as $$p(R_k=1|s,a) \propto \exp\Big(\frac{Q_k(s,a)}{\alpha_k}\Big) \, ,$$ where $\alpha_k$ is an objective-dependent temperature parameter that controls how greedy the solution $q_k(a|s)$ is with respect to its associated objective $Q_k(s,a)$. For example, given a batch of state-action pairs, at the extremes, an $\alpha_k$ of zero would give all probability to the state-action pair with the maximum Q-value, while an $\alpha_k$ of positive infinity would give the same probability to all state-action pairs. In order to automatically optimize for $\alpha_k$, we plug the exponential transformation into \eqref{eq:estep}, which leads to \begin{align} q_k(a|s) & = \argmax_{q} \int \mu(s)\int q(a|s) Q_k(s,a) \diff a \diff s - \nonumber \\ & \qquad \; \alpha_k \int \mu(s) \KL(q(a|s) \,\|\, \pi_{\a'}(a|s) ) \diff s \, . 
\end{align} If we instead enforce the bound on the KL divergence as a hard constraint (which we do in practice), we obtain: \begin{align} \label{eq:findingq} & q_k(a|s) = \argmax_{q} \int \mu(s)\int q(a|s) Q_k(s,a) \diff a \diff s \\ & \qquad \qquad \;\; \textrm{s.t.} \int \mu(s) \KL(q(a|s) \,\|\, \pi_{\a'}(a|s) ) \diff s < \epsilon_k, \nonumber \end{align} where, as described in the main paper, the parameter $\epsilon_k$ defines preferences over objectives. If we now assume that $q_k(a|s)$ is a non-parametric sample-based distribution as in \cite{Abdolmaleki_2018}, we can solve this constrained optimization problem in closed form for each sampled state $s$, by setting \begin{equation} q_k(a|s) \propto \pi_{\a'}(a|s) \exp\Big(\frac{Q_k(s,a)}{\eta_k}\Big) \, , \label{eq:nonparametric} \end{equation} where $\eta_k$ is computed by solving the following convex dual function: \begin{align} \label{eq:etadual} &\eta_k=\argmin_{\eta} \eta\epsilon_k+\\ & \qquad \eta\int\mu(s)\log\int \pi_{\a'}(a|s)\exp\Big(\frac{Q_k(s,a)}{\eta}\Big)\diff a \diff s \, . \nonumber \end{align} Equations \eqref{eq:findingq}, \eqref{eq:nonparametric}, and \eqref{eq:etadual} are equivalent to those given for the first step of multi-objective policy improvement in the main paper (Sec. 4.2.1). Please refer to Appendix D.2 in \cite{Abdolmaleki_2018} for details on the derivation of the dual function. \subsection{M-Step} \label{sec:m_step} After obtaining the variational distributions $q_k(a|s)$ for each objective $k$, we have found a tight lower bound for $\sum_{k=1}^N \log p_\a(R_k=1)$ when $\a=\a'$. We can now obtain the parameters $\a$ of the updated policy $\pi_{\a}(a|s)$ by optimizing this lower bound. After rearranging and dropping the terms in \eqref{eq:map} that do not involve $\a$, we obtain \begin{align*} \a^* =& \argmax_{\a} \sum_{k=1}^N\int \mu(s) \int q_k(a|s)\log \pi_\a(a|s)\diff a \diff s \\ & \qquad \quad + \log p(\a).
\end{align*} This corresponds to the supervised learning step of policy improvement in the main paper (Sec. 4.2.2). For the prior $p(\a)$ on the parameters $\a$, we enforce that the new policy $\pi_{\a}(a|s)$ is close to the target policy $\pi_{\a'}(a|s)$. We therefore optimize for \begin{align*} &\a^* = \argmax_{\a} \sum_{k=1}^N \int \mu(s) \int q_k(a|s)\log \pi_\a(a|s) \diff a \diff s \\ & \qquad \textrm{s.t.} \int \mu(s) \KL(\pi_{\a'}(a|s) \,\|\, \pi_{\a}(a|s) ) \diff s < \beta. \nonumber \end{align*} Because of the constraint on the change of the parametric policy, we do not greedily optimize the M-step objective. See \cite{Abdolmaleki_2018} for more details.
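The E-step and M-step above can be sketched numerically. The following is an illustrative sketch only, not the implementation used in our experiments: it assumes a toy setting in which, for each sampled state, a few actions have been drawn from the target policy $\pi_{\a'}$ with equal sample weights, so that the non-parametric solution \eqref{eq:nonparametric} reduces to per-state softmax weights over the sampled actions, with the temperature $\eta_k$ obtained from the dual \eqref{eq:etadual}. The function names and the use of \texttt{scipy.optimize.minimize} are our choices for the sketch.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.optimize import minimize

def solve_eta(Q, epsilon):
    """Minimize the convex dual g(eta) = eta*epsilon + eta*E_s[log mean_a exp(Q/eta)]
    over eta > 0 for one objective; Q has shape [num_states, num_sampled_actions]."""
    n_actions = Q.shape[1]
    def dual(eta_arr):
        eta = eta_arr[0]
        # per-state log of the mean of exp(Q/eta), computed stably
        log_mean_exp = logsumexp(Q / eta, axis=1) - np.log(n_actions)
        return eta * epsilon + eta * np.mean(log_mean_exp)
    res = minimize(dual, x0=[1.0], bounds=[(1e-6, None)])
    return float(res.x[0])

def e_step(Q, epsilon):
    """Per-state weights q_k(a|s) proportional to exp(Q_k(s,a)/eta_k)."""
    eta = solve_eta(Q, epsilon)
    logits = Q / eta
    weights = np.exp(logits - logsumexp(logits, axis=1, keepdims=True))
    return weights, eta

def m_step_loss(log_pi, weights_per_objective):
    """Weighted maximum-likelihood objective: fit one policy to all q_k."""
    return -sum((w * log_pi).sum() for w in weights_per_objective)
```

Note the scale invariance this makes concrete: replacing $Q_k$ by $100\,Q_k$ scales the optimal $\eta_k$ by the same factor and leaves the weights $q_k(a|s)$ unchanged, so a fixed $\epsilon_k$ grants each objective the same KL "budget" regardless of its reward scale.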
\section{Introduction} The solutions of the unconditional basic sequence problem \cite{go-ma} and of the distortion problem (\cite{G0}, \cite{od-schl}) have an intricate connection. They have both profited, on one side, from the development of Tsirelson-like constructions of conditional norms and, on the other, from the development of infinite-dimensional Ramsey theory. These connections are, however, well understood only at the level of separable spaces. This paper, which can be considered as a natural continuation of our previous paper \cite{arg-lop-tod}, is an attempt to explain these connections in the non-separable context as well. As byproducts we discover some new non-separable as well as separable phenomena. Our results are all based on a method of constructing Banach spaces with long Schauder bases of length $\omega_{1}$, a method which, among other things, crucially uses information about the way the classical Ramsey theorem \cite{ramsey} fails in the uncountable context \cite{tod1}. Recall that an infinite dimensional Banach space $X$ is \emph{indecomposable} if for every closed decomposition $X=X_0\oplus X_1$ one of the subspaces $X_0$ or $X_1$ must be finite dimensional. The space $X$ is \emph{hereditarily indecomposable}, HI in short, if every closed infinite dimensional subspace of $X$ is indecomposable. The first example of such a space was constructed by Gowers and Maurey \cite{go-ma} as a byproduct of their solution of the unconditional basic sequence problem. The paper \cite{arg-lop-tod} considers the unconditional basic sequence problem in the context of Banach spaces that are not necessarily separable. In particular, \cite{arg-lop-tod} produces a reflexive Banach space $ {\mathfrak X}_{\omega_{1}}$ with a Schauder basis $(e_\alpha)_{\alpha<\omega_{1}}$ of length $\omega_{1}$ with no infinite unconditional basic sequence.
Applying Gowers' dichotomy \cite{G1}, one concludes that, while ${\mathfrak X}_{\omega_{1}}$ is decomposable (as for example ${\mathfrak X}_{\omega_{1}}=\overline{\langle e_\alpha \rangle_{\alpha<\omega}}\oplus \overline{\langle e_\alpha \rangle_{\alpha\ge\omega}}$), it is saturated by hereditarily indecomposable subspaces (HI-\emph{saturated} in short), i.e., every closed infinite dimensional subspace of ${\mathfrak X}_{\omega_{1}}$ contains a hereditarily indecomposable Banach space. In this paper we present an example of a Banach space ${\mathfrak X}$ with a Schauder basis of length $\omega_{1}$ exhibiting some extreme discrepancies between the properties of the class of separable subspaces of ${\mathfrak X}$ and those of the class of non-separable subspaces of ${\mathfrak X}$. For example, at the separable level, ${\mathfrak X}$ is \emph{saturated} by copies of $c_0$, i.e., every closed infinite dimensional subspace of ${\mathfrak X}$ contains an isomorphic copy of $c_0$. So, in particular, every closed infinite dimensional subspace contains an infinite unconditional basic sequence. On the non-separable level, ${\mathfrak X}$ contains no unconditional basic sequence of length $\omega_{1}$. More precisely, for every closed subspace $X$ of ${\mathfrak X}$ and every decomposition $X=X_0\oplus X_1$, one of the spaces $X_0$ or $X_1$ must be separable. In fact, every bounded operator $T:X\to {\mathfrak X}$, where $X$ is a closed subspace of ${\mathfrak X}$, can be decomposed as $$T=\lambda i_{X,{\mathfrak X}}+S,$$ where $i_{X,{\mathfrak X}}:X\to {\mathfrak X}$ is the inclusion mapping, and $S$ is an \emph{$\omega_{1}$-singular operator}, the non-separable counterpart of the notion of a \emph{strictly singular operator}, meaning that $S$ is not an isomorphism on any non-separable subspace of $X$.
Another discrepancy between the behavior of separable subspaces of ${\mathfrak X}$ and non-separable ones (as well as a striking distinction between ${\mathfrak X}$ and ${\mathfrak X}_{\omega_{1}}$) comes when one considers the distortion constants of its equivalent norms. Recall that the \emph{distortion} of an equivalent norm $\tnrm{\cdot}$ of a Banach space $(X,\nrm{\cdot})$ is the constant $$d(X,\tnrm{\cdot})=\inf_Y \sup\conj{\frac{\tnrm{x}}{\tnrm{y}}}{x,y\in S_{(Y,\nrm{\cdot})}}$$ where the infimum is taken over all \emph{infinite dimensional} subspaces $Y$ of $X$. One says that $(X,\nrm{\cdot})$ is \emph{arbitrarily distortable} if $$\sup_{\tnrm{\cdot}}d(X,\tnrm{\cdot})=\infty$$ where the supremum is taken over all equivalent norms $\tnrm{\cdot}$ of $(X,\nrm{\cdot})$. The analysis of our space ${\mathfrak X}$ suggests the following variation of this notion of distortion. Given an equivalent norm $\tnrm{\cdot}$ of a Banach space $(X,\nrm{\cdot})$, let $$d_{\omega_{1}}(X,\tnrm{\cdot})=\inf_Y \sup\conj{\frac{\tnrm{x}}{\tnrm{y}}}{x,y\in S_{(Y,\nrm{\cdot})}}$$ where the infimum is now taken over all \emph{non-separable} subspaces $Y$ of $X$. We say that $(X,\nrm{\cdot})$ is \emph{arbitrarily $\omega_{1}$-distortable} when $$\sup_{\tnrm{\cdot}}d_{\omega_{1}}(X,\tnrm{\cdot})=\infty$$ where again the supremum is taken over all equivalent norms $\tnrm{\cdot}$ of $(X,\nrm{\cdot})$. It has been shown in \cite{arg-lop-tod} that the space ${\mathfrak X}_{\omega_{1}}$ is arbitrarily distortable. Note however that our space ${\mathfrak X}$ is \emph{not} distortable at all, i.e. $d({\mathfrak X},\tnrm{\cdot})=1$ for every equivalent norm $\tnrm{\cdot}$ on ${\mathfrak X}$. This is a consequence of the fact that ${\mathfrak X}$ is $c_0$-saturated and of the well-known result of R. C. James \cite{jam} which states that if a Banach space contains isomorphic copies of $c_0$ then it contains almost isometric copies of $c_0$.
Nevertheless, it turns out that the space ${\mathfrak X}$ is distortable in the non-separable sense, i.e., ${\mathfrak X}$ is arbitrarily $\omega_{1}$-distortable. It follows that, while the arbitrary distortion of a Banach space $X$ implies its arbitrary $\omega_{1}$-distortion, the converse implication is not true. \section{Definition of the space ${\mathfrak X}$} The construction of the space ${\mathfrak X}$ relies on the construction of the Banach space ${\mathfrak X}_{\omega_{1}}$ from \cite{arg-lop-tod}. So, in order to avoid unnecessary repetitions, we assume the reader is familiar with standard definitions and results in this area (see for example \cite{lind-tza} and \cite{arg}), and in particular with the way these have been amplified in \cite{arg-lop-tod} to the non-separable context. The space ${\mathfrak X}$ will be defined as the completion of $(c_{00}(\omega_{1}),\|\cdot\|)$ under the norm $\|\cdot\|$ induced by a set of functionals $K \subseteq c_{00}(\omega_{1})$. \begin{defin}\rm\label{ljigrjiogj} Let $K$ be the minimal subset of $c_{00}(\omega_1)$ satisfying the following conditions: \noindent {(i)} It contains $(e_{\gamma}^*)_{\gamma<\omega_1}$, is symmetric (i.e., $\phi\in K$ implies $-\phi\in K$) and is closed under restrictions to intervals of $\omega_1$. \noindent {(ii)} For every \emph{separated} block sequence $(\phi_i)_{i=1}^{d}\subseteq \ensuremath{K}$, with $d\le n_{2j}$, one has that the combination $ ({1}/{m_{2j}})\sum_{i=1}^{d}\phi_i\in\ensuremath{K}$. \noindent {(iii)} For every separated \emph{special} sequence $(\phi_i)_{i=1}^{d}\subseteq \ensuremath{K}$ with $d\le n_{2j+1}$ one has that $\phi=({1}/{m_{2j+1}})\sum_{i=1}^{d}\phi_i$ is in $K$. The functional $\phi$ is called a \emph{special functional}. \noindent {(iv)} It is rationally convex. Whenever $\phi \in K$ is of the form $\phi=(1/m_j)\sum_{i<d}\phi_i$ given in {(ii)} or {(iii)} above we say that $\phi$ has a \emph{weight} $w(\phi)=m_j$.
Finally, the norm on $c_{00}(\omega_1)$ is defined as $$\nrm{x}=\sup\{\phi(x)=\sum_\alpha\phi(\alpha)\cdot x (\alpha):\;\phi\in\ensuremath{K}\}$$ and ${\mathfrak X}$ is the completion of $(\ensuremath{c_{00}(\omega_1)},\nrm{\cdot})$. \end{defin} Before we discuss the new notion of separated sequence, let us give a list of direct consequences from the definition of ${\mathfrak X}$. \begin{notas}\rm\label{oieutiougf} \noindent (a) It is clear that the norming set $K$ presented here is a subset of the one introduced in \cite{arg-lop-tod} for the Banach space ${\mathfrak X}_{\omega_{1}}=({\mathfrak X}_{\omega_{1}},\nrm{\cdot}_{\omega_{1}})$. So, it follows that for every $x\in c_{00}(\omega_{1})$ one has that $\nrm{x}\le \nrm{x}_{\omega_{1}}$. This fact will be used frequently in this paper. \noindent (b) By the minimality of $K$, there is the following natural notion of \emph{complexity} of every element $\phi$ of $K$: Either $\phi=\pm e_\alpha^*$, or $\phi$ is a rational convex combination $\phi=\sum_{i<k}r_i f_i$ of elements $(f_i)_{i<k}$ of $K$, or $\phi=(1/m_j)\sum_{i<d}f_i$ for a separated block sequence $(f_i)_{i<d}$ in $K$ with $d\le n_j$. And in this latter case we say that $w(\phi)=m_j$ is a \emph{weight} of $\phi$. \noindent (c) The property (i) makes the natural Hamel basis $(e_{\alpha})_{\alpha<\omega_1}$ of $c_{00}(\omega_{1})$ a \emph{transfinite bimonotone Schauder basis} of ${\mathfrak X}$, i.e. $(e_\alpha)_{\alpha<\omega_{1}}$ is total and for every interval $I\subseteq \omega_{1}$ the corresponding projection $P_I:{\mathfrak X}\to {\mathfrak X}_I=\overline{\langle e_\alpha\rangle_{\alpha\in I}}$ has norm 1. Let us set $P_{\gamma}=P_{[0,\gamma]}$ and ${\mathfrak X}_\gamma={\mathfrak X}_{[0,\gamma]}$ for every countable ordinal $\gamma$. 
It follows that every closed infinite dimensional subspace contains a further subspace isomorphic to the closed linear span of a block sequence of the basis $(e_\alpha)$ (see Proposition 1.3 in \cite{arg-lop-tod} for full details). This is in contrast with the corresponding property of a Banach space with a Schauder basis $(x_k)_{k\in {\mathbb N}}$, for which it is well-known that every closed infinite dimensional subspace contains almost isometric copies of the closed linear span of a certain block sequence. \noindent (d) The second property (ii) is responsible for the existence of semi-normalized averages in the span of every uncountable block sequence of ${\mathfrak X} $. The third (iii) and fourth (iv) properties make every operator from a closed subspace of ${\mathfrak X}$ into ${\mathfrak X}$ a multiple of the identity plus an $\omega_{1}$-singular operator. \noindent (e) The basis $(e_\alpha)_{\alpha<\omega_{1}}$ is shrinking, i.e. $(e_{\alpha_n})_n$ is shrinking in the usual sense for every increasing sequence $(\alpha_n)_n$ of countable ordinals (the proof is essentially the same as that for the space ${\mathfrak X}_{\omega_{1}}$ provided in \cite{arg-lop-tod}; we leave the details to the reader). It follows that $(e_\alpha)_{\alpha<\omega_{1}}$ is an uncountable weakly-null sequence, i.e. for every $x^*\in {\mathfrak X}^*$ the numerical sequence $(x^*(e_\alpha))_{\alpha<\omega_{1}}\in c_0(\omega_{1})$. This last property readily implies the following. \noindent (f) Suppose that $T:{\mathfrak X}\to {\mathfrak X}$ is a bounded operator. Then for every uncountable subset $A$ of $\omega_{1}$ and every countable ordinal $\gamma$ one has that $P_\gamma(T(e_\alpha))=0$ for all but countably many $\alpha\in A$. \end{notas} Fix from now on a function $\varrho:[\omega_{1}]^2\to \omega$ with the following properties: \noindent (i) $\varrho(\alpha,\gamma)\le \max\{\varrho(\alpha,\beta),\varrho(\beta,\gamma)\}$ for all $\alpha<\beta<\gamma<\omega_{1}$.
\noindent (ii) $\varrho(\alpha,\beta)\le \max\{\varrho(\alpha,\gamma),\varrho(\beta,\gamma)\}$ for all $\alpha<\beta<\gamma<\omega_{1}$. \noindent (iii) $\conj{\alpha<\beta}{\varrho(\alpha,\beta)\le n}$ is finite for all $\beta<\omega_{1}$ and $n\in \omega$. The reader is referred to \cite{tod1} and \cite{tod2} for a full discussion of this notion and constructions of various $\varrho$-functions. We shall use it here to measure ``distances'' between various subsets of $\omega_{1}$. \begin{defin}\rm For two subsets $s$ and $t$ of $\omega_1$, set $$\varrho(s,t)=\min\conj{\varrho(\alpha,\beta)}{\alpha\in s, \, \beta\in t}.$$ Given an integer $p$ we say that $s$ and $t$ are \emph{$p$-separated} if $\varrho(s,t)\ge p$. A sequence $(s_i)_i$ is $p$-separated if it is pairwise $p$-separated, i.e. $s_i$ and $s_j$ are $p$-separated for every $i\neq j$. The sequence $(s_i)_i$ is \emph{separated} if it is $|\bigcup_i s_i| $-separated. Every notion introduced here for sets of ordinals can be naturally transferred, via supports, to vectors of ${\mathfrak X}$. \end{defin} The following simple but useful statement will give us separated subsequences of any sufficiently long sequence of finite sets. It is the kind of result eventually used in showing that the norm of ${\mathfrak X}$ keeps a substantial conditional structure when restricted to an arbitrary non-separable subspace of ${\mathfrak X}$. \begin{propo} Let $(A_{i})_{i<n}$ be a block sequence of subsets of $\omega_{1}$, each of them of order-type $\omega$. Then for every block sequence $(s_\alpha)_{\alpha \in \cup_{i<n}A_i}$ of finite sets of countable ordinals and every integer $p$ there are $\alpha_i\in A_i$ ($i<n$) such that $(s_{\alpha_i})_{i<n}$ is $p$-separated. \end{propo} \begin{proof} This is done by induction on $n$. Fix all data as in the statement for $n>1$. Then let $\alpha_{n-1}=\min A_{n-1}$.
Since the set $$\overline{s_{\alpha_{n-1}}}^p=\conj{\beta<\omega_{1}}{\text{there is some $\gamma\in s_{\alpha_{n-1}}$ with $\beta\le \gamma$ and $ \varrho(\beta,\gamma)\le p$}}$$ is, by property (iii) of $\varrho$, finite, one easily obtains infinite subsets $B_i$ of $A_i$ ($i<n-1$) such that $s_{\alpha}\cap \overline{s_{\alpha_{n-1}}}^p=\emptyset$ for every $\alpha\in B_i$ and all $i<n-1$. By the inductive hypothesis, there are $\alpha_i\in B_i$ ($i<n-1$) such that $(s_{\alpha_i})_{i<n-1}$ is $p$-separated. Obviously $(s_{\alpha_i})_{i<n}$ is the desired $p$-separated sequence. \end{proof} \begin{coro}\label{oeuwtu3432423fgn} Let $n$ be an integer and let $(s_\alpha^i)_{\alpha<\omega_{1}}$ be a block sequence of finite sets of countable ordinals for every $i<n$. Then there are $\alpha_0<\dots <\alpha_{n-1}$ such that $(s_{\alpha_i}^i)_{i<n}$ is a separated block sequence. \end{coro} \begin{proof} Let $A$ be an uncountable set such that $|s_\alpha^i|=p_i$ for every $\alpha\in A$ and every $i<n$. Now for each $i<n$, let $A_i\subseteq A$ be of order type $\omega$ and such that $A_i<A_j$ and $s_\alpha^i<s_{\beta}^j$ if $i<j<n$ and $\alpha\in A_i$, $\beta\in A_j$ are such that $\alpha<\beta$. Then apply the previous proposition to $(t_\alpha)_{\alpha\in A_0\cup \dots \cup A_{n-1}}$ and $\sum_{i<n}p_i$, where $t_\alpha=s_{\alpha}^i$ for the unique $i<n$ such that $\alpha\in A_i$. \end{proof} \subsection{Rapidly increasing sequences and deciding pairs} We now introduce some standard technical tools in this field: a class of vectors called $\ell_1^k$-averages, and the so-called rapidly increasing sequences (RIS for short). The importance of rapidly increasing sequences $(x_n)$ is that it is possible to estimate the norm of linear combinations of $(x_n)$ in terms of norms of linear combinations of the basis $(e_\alpha)_{\alpha<\omega_{1}}$. In this sense RIS behave like a subsequence $(e_{\alpha_k})_k$ of the basis $(e_\alpha)_{\alpha<\omega_{1}}$.
The role of $\ell_1^k$-averages is that they are useful in creating RIS. We give the precise definitions now. \begin{defin}\rm Let $C,\varepsilon>0$. A normalized block sequence $(x_k)_k$ of ${\mathfrak X}$ is called a $(C,\varepsilon)$-\emph{rapidly increasing sequence} ($(C,\varepsilon)$-RIS for short) iff there is an increasing sequence $(j_k)_k$ of integers such that for all $k$, \noindent (i) $\nrm{x_k}\le C$, \noindent (ii) $|\mathrm{supp\, } x_k| \le m_{j_{k+1}}\varepsilon$ and \noindent (iii) for every functional $\phi\in K$ of type I with $w(\phi)<m_{j_k}$ one has that $|\phi(x_k)|\le C/w(\phi)$. Let $C>0$ and $ k\in {\mathbb N}$. A normalized vector $y$ is called a $C-\ell_1^k$-\emph{average} iff there is a finite block sequence $(x_0,\dots,x_{k-1})$ such that $y=(x_0+\dots + x_{k-1})/k$ and $\nrm{x_i}\le C$. \end{defin} First observe that any $\omega$-subsequence $(e_{\alpha_n})_n$ of the basis is a $(1,\varepsilon)$-RIS for every $\varepsilon$. Note also that it follows easily from the definition that if $(x_n)$ is a $(C,\varepsilon)$-RIS, then for every $\varepsilon'>0$ there is a subsequence $(x_n)_{n\in A}$ of $(x_n)$ which is a $(C,\varepsilon')$-RIS. As the norming set $K$ we are using here is not saturated under ``free'' combinations of the form $(1/m_{2j})\sum_{i<n_{2j}}f_i$, one cannot expect that there are $\ell_1^k$-averages in the span of an arbitrary block sequence. Indeed, as Theorem \ref{ieoithie} shows, this is not the case for most block sequences, since clearly if $(x_n)$ is $C$-equivalent to the $c_0$-basis, then $\nrm{(x_0+\dots + x_k)/k}\le C/k$ for every $k$. However, the next proposition guarantees their existence. Its proof is the natural modification of the standard proof for the separable case, which can be found, for example, in \cite{A-M} or in \cite{go-ma}.
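As an aside, the greedy extraction in the proof of the separation proposition above (pick $\alpha_{n-1}$ minimal, discard earlier indices whose sets meet the finite $p$-closure, and recurse) is concrete enough to be checked in a toy model. The sketch below is our illustration and not part of the construction: it replaces $\omega_{1}$ by an initial segment of $\omega$ and uses the trivial function $\varrho(\alpha,\beta)=\max(\alpha,\beta)$, which satisfies properties (i)--(iii) on $\omega$; the point of \cite{tod1} is precisely that a $\varrho$-function with these properties also exists on $\omega_{1}$, where no such trivial example is available.

```python
# Toy rho-function on an initial segment of omega; on omega the three
# axioms hold trivially, unlike on omega_1.
def rho(a, b):
    return max(a, b)

def rho_sets(s, t):
    """rho(s, t) = min over cross pairs, as in the definition of separation."""
    return min(rho(a, b) for a in s for b in t)

def is_p_separated(sets, p):
    return all(rho_sets(sets[i], sets[j]) >= p
               for i in range(len(sets)) for j in range(i + 1, len(sets)))

def closure(s, p):
    """{beta : some gamma in s has beta <= gamma and rho(beta, gamma) <= p};
    finite by axiom (iii)."""
    return {b for g in s for b in range(g + 1) if rho(b, g) <= p}

def extract(blocks, sets, p):
    """Greedy extraction from the proof: take the minimum of the last block,
    discard earlier indices whose set meets the p-closure, recurse."""
    if not blocks:
        return []
    a_last = min(blocks[-1])
    cl = closure(sets[a_last], p)
    trimmed = [[a for a in block if sets[a].isdisjoint(cl)]
               for block in blocks[:-1]]
    assert all(trimmed), "blocks stay nonempty (automatic when infinite)"
    return extract(trimmed, sets, p) + [a_last]

# Check axioms (i) and (ii) on a finite window.
N = 30
for a in range(N):
    for b in range(a + 1, N):
        for c in range(b + 1, N):
            assert rho(a, c) <= max(rho(a, b), rho(b, c))   # (i)
            assert rho(a, b) <= max(rho(a, c), rho(b, c))   # (ii)

blocks = [list(range(10)), list(range(10, 20)), list(range(20, 30))]
sets = {a: {3 * a, 3 * a + 1} for a in range(30)}
chosen = extract(blocks, sets, 5)
assert is_p_separated([sets[a] for a in chosen], 5)
```

In this toy model separation is easy to achieve; on $\omega_{1}$ the same greedy scheme works because property (iii) keeps each closure finite, which is exactly what the inductive step of the proof uses.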
\begin{propo} \label{dfmlejtilejiew} For every $k\in {\mathbb N}$ there is $l=l(k)\in {\mathbb N}$ such that the following holds: Suppose that $(x_i)_{i<l}$ is a normalized block sequence with the property that there exists a separated sequence $(\phi_i)_{i<l}$ which is biorthogonal to $(x_i)_{i<l}$. Then $\langle x_i \rangle_{i<l}$ contains 2-$\ell_1^k$-averages. \qed \end{propo} We also recall the following well-known fact about $\ell_1^k$-averages, which connects them to RIS. \begin{propo}\label{risav} Suppose that $y$ is a $C-\ell_1^k$-average and suppose that $E_0<\dots < E_{l-1}$ are intervals with $l< k$. Then $ \sum_{i=0}^{l-1} \nrm{E_i y}\le C(1+{2l}/k)$. As a consequence, if $y$ is a $C-\ell_1^{n_j}$-average and $\phi\in K$ is such that $w(\phi)<m_j$, then $|\phi(y)|\le {3C}/{2w(\phi)}$. In particular, for $2-\ell_1^{n_j}$-averages we get that $|\phi(y)|\le 3/w(\phi)$ if $w(\phi)<m_j$. \qed \end{propo} \begin{notas}\rm\label{gheuhrufhsd} It follows that if $\bsy{x}=(x_k)_k$ is a normalized block sequence such that each $x_k$ is a $2-\ell_1^{n_{j_k}}$-average with $|\mathrm{supp\, } x_k|\le m_{j_{k+1}}$, then $\bsy{x}$ is a $(3,1)$-RIS. It readily follows that if $(x_k)_k$ is a normalized block sequence having a separated biorthogonal sequence, then the linear span of $(x_k)_k$ contains $(3,\varepsilon)$-RIS for every $\varepsilon>0$. \end{notas} Let $<_\mathrm{antilex}$ denote the anti-lexicographical ordering on ${\mathbb N}\times \omega_{1}$. Whenever we say that a sequence $(x_k^\alpha)_{(k,\alpha)\in A}$ indexed by a subset $A$ of ${\mathbb N}\times \omega_{1}$ is a block sequence we mean that $x_k^\alpha$ is finitely supported and that $x_{k_0}^{\alpha_0}<x_{k_1}^{\alpha_1}$ whenever $(k_0,\alpha_0)<_\mathrm{antilex} (k_1,\alpha_1)$. In the following definitions and lemma, we introduce two genuinely non-separable tools.
They are necessary for us because it is not true in general that a non-separable subspace of ${\mathfrak X}$ contains an almost isometric copy of the closed linear span of some uncountable block sequence. Recall that if $X$ is a separable Banach space with a Schauder basis of length $\omega$, the corresponding result is true and very frequently used. \begin{defin}\rm Let $C,\varepsilon>0$. We call a sequence $\boldsymbol{x} =(x_{k}^\alpha)_{(k,\alpha)\in {\mathbb N}\times\omega_{1}}$ a \emph{long rapidly increasing sequence} (\emph{LRIS} for short) iff \noindent (i) $x_{k_0}^{\alpha_0}<x_{k_1}^{\alpha_1}$ for every $(k_0,\alpha_0)<_{\text{antilex}}(k_1,\alpha_1)$, \noindent (ii) the cardinality of $\mathrm{supp\, } x_k^\alpha$ only depends on $k$, and \noindent (iii) there is a sequence of integers $(j_k)$ such that $x_{k}^\alpha$ is a $2-\ell_1^{n_{j_k}}$-average and $|\mathrm{supp\, } x_k^{\alpha}|<m_{j_{k+1}}$ for every $(k,\alpha)$. \end{defin} \begin{notas}\rm The chosen name is justified by the fact that, by the definition and Remark \ref{gheuhrufhsd}, if $f=(f_0,f_1):{\mathbb N}\to {\mathbb N}\times \omega_1$ is such that $$\text{if $k<l$ then $f_0(k)<f_0(l)$ and $f_1(k)\le f_1(l)$}$$ then one has that $(x_{f_0(k)}^{f_1(k)})_{k\in {\mathbb N}}$ is a $(3,1)$-RIS. \end{notas} \begin{defin}\rm Given $\varepsilon>0$ and two vectors $x,y\in c_0(\omega_{1})$, with $x\neq 0$, we write $x<_\varepsilon y$ to denote that $$\nrm{P_{\sup\mathrm{supp\, } x}(y)}<\varepsilon.$$ For technical reasons, we declare $0<_\varepsilon y$ for every $y$. We recall that for a vector $x\in c_{0}(\omega_{1})$, one sets $\mathrm{ran\, } x=[\min \mathrm{supp\, } x,\sup\mathrm{supp\, } x]$.
\end{defin} \begin{defin}\rm\label{decpair} Given a bounded operator $T:X\to {\mathfrak X}$, where $X$ is a closed non-separable subspace of ${\mathfrak X}$, we say that the couple $(\bsy x,\bsy y)$ is a \emph{deciding pair} for $T$ if the following holds: \noindent (i) $\boldsymbol{x}=(x_k^\alpha)_{(k,\alpha)\in {\mathbb N}\times \omega_{1}}\subseteq X$ and $\boldsymbol{y}=(y_k^\alpha)_{(k,\alpha)\in {\mathbb N}\times \omega_{1}}$ is a LRIS. \noindent (ii) $\mathrm{ran\, } y_k^\alpha\subseteq \mathrm{ran\, } x_k^\alpha$ for every pair $(k,\alpha)$. \noindent (iii) $\sum_{k\in {\mathbb N}}\nrm{x_k^{\alpha}-y_k^\alpha}\le 1$ for every $\alpha$. \noindent (iv) $ x_{k_0}^{\alpha_0}<_{\varepsilon_{k_1}} x_{k_1}^{\alpha_1}$ and $ T(x_{k_0}^{\alpha_0})<_{\varepsilon_{k_1}} T(x_{k_1}^{\alpha_1})$ for every $(k_0,\alpha_0)<_\text{antilex}(k_1,\alpha_1)$. A \emph{transversal subsequence} of a double-indexed sequence $(x_{k}^{\alpha})_{(k,\alpha)}$ is a finite subsequence of the form $(x_{k_i}^{\alpha_i})_{i}$ where $k_i<k_{i+1}$ and $\alpha_i<\alpha_{i+1}$ for every $i$. \end{defin} In other words, a deciding pair is nothing else but an uncountable ordered family of rapidly increasing sequences $(y_k^\alpha)_{k\in {\mathbb N}}$ which are asymptotically close to sequences $(x_k^\alpha)_{k\in {\mathbb N}}$ in $X$ for which the corresponding sequences of images $(Tx_k^\alpha)_{k\in {\mathbb N}}$ are ``almost'' block ordered. Before we prove that deciding pairs always exist, we give some explanation of this notion. \begin{propo}\label{oehthorhgoiwehiohtg} Suppose that $(\bsy x,\bsy y)$ is a deciding pair for $T$. Then for every $\varepsilon>0$ and every integer $l$ there is a transversal subsequence $\bsy z=(y_{k_i}^{\alpha_i})_{i<l}$ of $\bsy y$ such that \noindent (i) $\bsy z$ is a $(3,\varepsilon)$-RIS. \noindent (ii) $\bsy z$ has a biorthogonal separated block sequence in the norming set $K$.
\noindent (iii) $\sum_{i<l}\nrm{x_{k_i}^{\alpha_i}-y_{k_i}^{\alpha_i}}\le \varepsilon$. \end{propo} \begin{proof} Fix all data. Let $M\subseteq {\mathbb N}$ be such that for every $\alpha<\omega_{1}$ one has that $(y_{k}^\alpha)_{k\in M}$ is a $(3,\varepsilon)$-RIS and such that $\sum_{k\in M}\nrm{x_{k}^\alpha-y_k^\alpha}\le \varepsilon$. Fix also for each pair $(k,\alpha)$ a functional $\phi_{k}^\alpha\in K$ such that $\phi_k^\alpha(y_k^\alpha)=1$ and with $\mathrm{ran\, } \phi_k^\alpha\subseteq \mathrm{ran\, } y_k^\alpha$. Now apply Corollary \ref{oeuwtu3432423fgn} to $(\mathrm{supp\, } \phi_k^{\alpha})_{\alpha<\omega_{1}}$ ($k\in M$) to find a transversal subsequence $(\phi_{k_i}^{\alpha_i})_{i<l}$. Then $(y_{k_i}^{\alpha_i})_{i<l}$ is the desired sequence. \end{proof} \begin{lem}\label{onrothoirgjfig} Every bounded operator $T:X\to{\mathfrak X}$ with $X$ non-separable has a deciding pair. \end{lem} \begin{proof} First we make the following approximation to the final result. \begin{claim} For every integer $k$ and every $\varepsilon>0$ there are two normalized sequences $(x_\alpha)_{\alpha<\omega_{1}}$ and $(y_\alpha)_{\alpha<\omega_{1}}$ such that \noindent (a) $x_\alpha\in X$ ($\alpha<\omega_{1}$). \noindent (b) $\mathrm{ran\, } y_\alpha\subseteq \mathrm{ran\, } x_\alpha$ and $\nrm{x_\alpha-y_\alpha}\le \varepsilon$ ($\alpha<\omega_{1}$). \noindent (c) $y_\alpha$ is a $2-\ell_1^k$-average and $|\mathrm{supp\, } y_\alpha|$ is independent of $\alpha<\omega_{1}$. \noindent (d) $x_\alpha<_\varepsilon x_\beta$ and $Tx_\alpha<_\varepsilon Tx_\beta$ for every $\alpha<\beta$. \end{claim} Let us derive the desired result from this claim: find recursively for each $k\in {\mathbb N}$ two sequences $(z_\alpha^k)_{\alpha<\omega_{1}}$ and $(t_\alpha^k)_{\alpha<\omega_{1}}$ as the result of the application of the previous claim to the integer $n_{j_k}$ and $\varepsilon_k$, where $j_k$ is chosen such that $|\mathrm{supp\, } t_{\alpha}^{k-1}|<m_{j_k}$.
Finally, it is not difficult to see that one can extract for every $k$ a subsequence $(x_k^\alpha,y_k^\alpha)_{\alpha<\omega_{1}}$ of $(z_\alpha^k,t_\alpha^k)_{\alpha<\omega_{1}} $ with the property that $x_{k_0}^{\alpha_0}<_{\varepsilon_{k_1}} x_{k_1}^{\alpha_1}$ and $T(x_{k_0}^{\alpha_0})<_{\varepsilon_{k_1}} T(x_{k_1}^{\alpha_1})$ for every $(k_0,\alpha_0)<_\mathrm{antilex} (k_1,\alpha_1)$. Let us now give a proof of the claim. Fix $k$ and $\varepsilon>0$, and let $l$ be any integer given by Proposition \ref{dfmlejtilejiew} when applied to $k$. Set $\delta=\varepsilon/2l$. Now use that the bounded operator $U=U(T,\gamma):X\to {\mathfrak X}_\gamma\oplus {\mathfrak X}_{\gamma}$ defined by $U(x)=(P_\gamma(x),P_\gamma(T(x)))$ is $\omega_{1}$-singular (because it has separable range) to find two normalized sequences $\bsy{z}=(z_\alpha)_{\alpha<\omega_{1}}$ and $\bsy{t}=(t_\alpha)_{\alpha<\omega_{1}}$, and a block sequence $(F_\alpha)_{\alpha<\omega_{1}}$ of finite sets $F_\alpha\subseteq \omega_{1}$ of size $l$ such that \noindent (e) $\bsy{z}\subseteq X$, and $\bsy t$ is a block sequence. \noindent (f) $\mathrm{ran\, } t_\alpha\subseteq \mathrm{ran\, } z_\alpha$ and $\nrm{z_\alpha-t_\alpha}\le \delta$ for every $\alpha$. \noindent (g) $z_\alpha<_\delta z_\beta$ and $T(z_\alpha)<_\delta T(z_\beta)$ for every $\alpha<\beta$. \noindent (h) For every countable ordinal $\alpha$ there is a separated block sequence $(f_\xi)_{\xi\in F_\alpha}$ that is biorthogonal to $(t_\xi)_{\xi\in F_\alpha}$. We observe that (h) can be achieved by a simple application of Corollary \ref{oeuwtu3432423fgn}. By Proposition \ref{dfmlejtilejiew}, we can find a $2-\ell_1^k$-average $y_\alpha\in \langle t_\beta \rangle_{\beta\in F_\alpha}$ for each $\alpha<\omega_{1}$.
It is easy to see that if $x_\alpha$ is an arbitrary normalized vector in $\langle z_\beta \rangle_{\beta\in F_\alpha}$ such that $\mathrm{ran\, } y_\alpha\subseteq \mathrm{ran\, } x_\alpha$ and $\nrm{x_\alpha-y_\alpha}\le \varepsilon$ (and there is such a vector), then the corresponding sequences $(x_\alpha)_{\alpha<\omega_{1}}$ and $(y_\alpha)_{\alpha<\omega_{1}}$ fulfill all the properties \emph{(a)} to \emph{(d)}. \end{proof} \begin{notas}\rm\label{ojgtiorufjnw} It is easy to find, for two given bounded operators $T_0,T_1:X\to {\mathfrak X}$, a deciding pair $(\bsy x,\bsy y)$ simultaneously for $T_0$ and $T_1$. This can be done simply by replacing $U(T,\gamma)$ above by the mapping $\bar{U} :X\to {\mathfrak X}_{\gamma}^3$ defined by $x\mapsto \bar{U}(x)=(P_\gamma(x),P_\gamma(T_0(x)),P_\gamma(T_1(x)))$. \end{notas} \section{Main properties of the space ${\mathfrak X}$} \subsection{$c_0$-saturation} We are ready to prove that ${\mathfrak X}$ is $c_0$-saturated. We start with the following more informative result. \begin{lem}\label{orjuitorjhgfj} Suppose that $(x_k)$ is a normalized block sequence such that $$\lim_{k\to \infty} \nrm{x_k}_\infty=0.$$ Then $(x_k)$ has a subsequence which is 5-equivalent to the natural basis of $c_0$. \end{lem} \begin{proof} Let $(y_k)_k$ be a subsequence of $(x_k)_k$ with $\nrm{y_k}_\infty\le 1/({2^{k+2}k})$. \begin{claim} There is an infinite set $M$ of integers such that for every triple $(k_0,k_1,k_2)$ in $M$ one has that \begin{equation} \label{wejgiiujibhjbofdhss} \max\conj{\varrho(\alpha,\beta)}{\alpha\in \mathrm{supp\, } y_{k_0},\, \beta\in \mathrm{supp\, } y_{k_1}}< k_2. \end{equation} \end{claim} \begin{proof}[Proof of Claim:] We color every triple $(k_0,k_1,k_2)$ of integers by 1 if \eqref{wejgiiujibhjbofdhss} holds and 0 otherwise. By the classical Ramsey theorem we can find an infinite set $M$ all of whose triples are equally colored. If this color is 1, then we are done.
Otherwise, suppose that this color is 0, and let us derive a contradiction. Fix two integers $k_0<k_1$ in $M$. Then for every $k\in M$ with $k >k_1$ let $\alpha_k\in \mathrm{supp\, } y_{k_0}$ and $\beta_k\in \mathrm{supp\, } y_{k_1}$ with $\varrho(\alpha_k,\beta_k)\ge k$. Find an infinite $P\subseteq M$ with $(\alpha_k,\beta_k)=(\alpha,\beta)$ for every $k\in P$. Then $\varrho(\alpha,\beta)\ge k$ for every $k\in P$, a contradiction. \end{proof} Fix such $M$ from the claim. We show that $(y_n)_{n\in M}$ is 5-equivalent to the natural basis of $c_0$. Observe that since the basis $(e_\alpha)_{\alpha}$ is bimonotone one has that $\nrm{\sum_{n\in M} a_n y_n}\ge \nrm{(a_n)_{n\in M}}_{\infty}$ for every sequence $(a_n)_{n\in M}$ of scalars. So it remains to show that $\nrm{\sum_{n\in M} a_n y_n}\le 5 \nrm{(a_n)_{n\in M}}_{\infty}$. This is done in the next claim. \begin{claim} For every $\phi\in K$ and every sequence $(a_n)_{n\in M}$ of scalars one has that \begin{equation} \label{njonhnjdgfd} |\phi(\sum_n a_n y_n)|\le 5 \nrm{(a_n)_{n\in M}}_{\infty}. \end{equation} \end{claim} \begin{proof}[Proof of Claim:] Fix all data, and set $y=\sum_{n}a_n y_n $. The proof of \eqref{njonhnjdgfd} is done by induction on the complexity of $\phi$. If $\phi=\pm e_\alpha^*$, the result is trivial. Suppose that $\phi$ is a rational convex combination $\phi=\sum_{i<r} c_i f_i$ of elements $(f_i)_{i<r}$ of $K$. Then, by applying the inductive hypothesis to the $f_i$'s, one has that $|\phi(y)|\le \sum_{i<r} c_i |f_i(y)|\le 5\nrm{(a_n)}_{\infty}$. Suppose now that $\phi=(1/m_j)\sum_{i<d}f_i$, where $d\le n_j$ and $(f_i)_{i<d}$ is a separated block sequence in $K$. Let \begin{align*} a=&\conj{i<d}{\mathrm{supp\, } y \cap \mathrm{supp\, } f_i\neq \emptyset },\text{ and for $i\in a$ let}\\ k(i)=&\min \conj{k}{\mathrm{supp\, } y_k\cap \mathrm{supp\, } f_i\neq \emptyset},\text{ and} \\ L=& \conj{k(i)}{i\in a}.
\end{align*} \begin{claim} \noindent (a) If $k<\min L$, then $\mathrm{supp\, } y_k\cap \mathrm{supp\, } \phi=\emptyset$. \noindent (b) There is some $\bar{i}\in a$ such that for every $k>\max L$ one has that $\mathrm{supp\, } y_k\cap \mathrm{supp\, } \phi=\mathrm{supp\, } y_k\cap \mathrm{supp\, } f_{\bar i}$. \noindent (c) For every two consecutive $k_0<k_1$ in $L$ there is some $\bar i(k_0,k_1)\in a$ such that for every $k$ with $k_0<k<k_1$ one has that $\mathrm{supp\, } y_k \cap \mathrm{supp\, } \phi=\mathrm{supp\, } y_k\cap \mathrm{supp\, } f_{\bar i(k_0,k_1)}$. \end{claim} \begin{proof}[Proof of Claim:] The statement (a) is clear. It is not difficult to show that the statements (b) and (c) follow from the following, also not difficult, fact: Suppose that $I$ is an interval of integers, and suppose that there are at least two integers $i_0$ and $i_1$ in $a$ such that $\mathrm{supp\, } f_{i_\varepsilon}\cap \bigcup_{k\in I}\mathrm{supp\, } y_k\neq \emptyset$ for $\varepsilon=0,1$. Then $I\cap L\neq \emptyset$. \end{proof} Now we consider two cases: \noindent \emph{Case 1.} The cardinality of $L$ is at most two. If $L=\emptyset$, then $\phi(y)=0$. Suppose that $L=\{\bar{k}\}$. Then using \emph{(a)}, \emph{(b)} and the inductive hypothesis above, one obtains that $$|\phi(y)|\le \frac{1}{m_j}|f_{\bar{i}}(\sum_{k> \bar k}a_k y_k)|+|a_{\bar k}|\le 2 \nrm{(a_k)_k}_{\infty}.$$ Finally suppose that $L=\{k_0,k_1\}$ with $k_0<k_1$. Then one has that \begin{align*} |\phi(y)|\le & |a_{k_0}|+|a_{k_1}|+\frac{1}{m_j}\left|f_{\bar i(k_0,k_1)}(\sum_{k=k_0+1}^{k_1-1}a_k y_k) +f_{\bar i}(\sum_{k>k_1}a_k y_k) \right| \le 5 \nrm{(a_k)_k}_\infty. \end{align*} \noindent \emph{Case 2.} The set $L$ has cardinality at least three. Let $k_0<k_1<k_2$ be the least three elements of $L$. Find $i_0<i_1$ in $a$ such that $k(i_\varepsilon)=k_\varepsilon$ for $\varepsilon=0,1$, and then $\alpha_\varepsilon\in \mathrm{supp\, } f_{i_\varepsilon}\cap \mathrm{supp\, } y_{k_\varepsilon} $ for $\varepsilon=0,1$. It follows that \begin{equation*} \nrm{\phi}_{\ell_1}\le |\mathrm{supp\, } \phi|\le \varrho(\alpha_0,\alpha_1)< k_2. \end{equation*} Hence for every $k\ge k_2$ one has that \begin{equation} \label{ewfmkjirjjfd} |\phi(y_k)| \le \nrm{\phi}_{\ell_1}\nrm{y_k}_\infty \le \frac{1}{2^{k+2}}. \end{equation} Using the inequality \eqref{ewfmkjirjjfd}, conditions \emph{(a)}-\emph{(c)} above and the inductive hypothesis applied to $f_{\bar i(k_0,k_1)}$ and $f_{\bar i(k_1,k_2)}$ one obtains that \begin{align*} |\phi(y)|\le & |a_{k_0}|+|a_{k_1}|+ \frac{1}{m_j}\left|f_{\bar i(k_0,k_1)}(\sum_{k=k_0+1}^{k_1-1}a_k y_k) +f_{\bar i (k_1,k_2)}(\sum_{k=k_1+1}^{k_2-1}a_k y_k)\right|+ |\phi(\sum_{k\ge k_2} a_ky_k)| \le \\ \le & 2 \nrm{(a_k)_k}_{\infty}+\frac{5}{2}\nrm{(a_k)_k}_\infty+\frac12\nrm{(a_k)}_\infty = 5 \nrm{(a_n)}_\infty, \end{align*} as desired. \end{proof} \end{proof} \begin{teore}\label{ieoithie} ${\mathfrak X}$ is $c_0$-saturated. \end{teore} \begin{proof} Fix a closed infinite dimensional subspace $X$ of ${\mathfrak X}$. We may assume, by Remark \ref{oieutiougf} (c), that indeed $X$ is the closed linear span of a normalized block sequence $(x_k)$. Suppose that $c_0$ does not embed isomorphically into $X$, and let us derive a contradiction. Consider the norm-one operator $ \sum_k a_k x_k\in X \mapsto (a_k)_k\in c_{0}$. This is, by hypothesis, strictly singular. So we can find a normalized block subsequence $(y_k)_k$ of $(x_k)_k$ with $\lim_{k\to \infty}\nrm{y_k}_\infty=0$. Then by Lemma \ref{orjuitorjhgfj} $(y_k)_k$ has a subsequence equivalent to the $c_0$-basis, a contradiction. \end{proof} \subsection{Distortion} From the previous Theorem \ref{ieoithie} one immediately obtains the following. \begin{coro} ${\mathfrak X}$ is not distortable. \qed \end{coro} In contrast to this, we have a strong distortion phenomenon at the level of non-separable subspaces of ${\mathfrak X}$: \begin{teore} ${\mathfrak X}$ is arbitrarily $\omega_{1}$-distortable.
\end{teore} \begin{proof} We follow some of the ideas used to show that ${\mathfrak X}_{\omega_{1}}$ is arbitrarily distortable (see Corollary 5.36 in \cite{arg-lop-tod}). For $j\in {\mathbb N}$, and $x\in {\mathfrak X}_{\omega_{1}}$, let $$\nrm{x}_{j}=\sup \conj{\phi(x)}{w(\phi)=m_{2j}}.$$ Notice that, obviously, for every $\phi\in K$ one has that $(\phi)$ is a dependent sequence, hence $(1/m_{2j})\phi\in K$. It follows that $\nrm{\cdot}_j\le \nrm{\cdot}\le m_{2j}\nrm{\cdot}_j$. Fix a closed non-separable subspace $X$ of $ {\mathfrak X}$. Let $(\bsy x,\bsy y)$ be a deciding pair for the inclusion mapping $i_{X,{\mathfrak X}}:X\to {\mathfrak X}$. Now fix an integer $l$ and $\varepsilon>0$. Use Proposition \ref{oehthorhgoiwehiohtg} to find a transversal subsequence $\bsy z=(y_{k_i}^{\alpha_i})_{i<n_{2l}}$ of $\bsy y$ such that \noindent (a) $\sum_{i<n_{2l}}\nrm{x_{k_i}^{\alpha_i}-y_{k_i}^{\alpha_i}}\le \varepsilon$, and \noindent (b) $\bsy z$ is a $(3,n_{2l}^{-2})$-RIS with a biorthogonal separated block sequence $(f_i)_{i<n_{2l}}$ in $K$. Set $$z_l=\frac{m_{2l}}{n_{2l}}\sum_{i<n_{2l}}y_{k_i}^{\alpha_i}, \, \phi_l=\frac1{m_{2l}} \sum_{i<n_{2l}}f_{i}.$$ Then $\phi_l\in K$, and the pair $(z_l,\phi_l)$ is what we called in \cite{arg-lop-tod} (Definition 3.1, see also the proof of Proposition 4.12) a $(6,2l)$-\emph{exact pair}. So, it follows that if $l=j$ then one has $$1\le \nrm{z_j}_{j}\le \nrm{z_j}\le 6,$$ while if $l>j$ then $$\text{$1\le \nrm{z_l}\le 6$ and $\nrm{z_l}_{j}\le \frac{12}{m_{2j}}$.}$$ Hence for every $l>j$ one has the discrepancy \begin{equation}\label{njhdfdj} \frac{ \nrm{ z_j/\nrm{z_j}}_{j}}{ \nrm{ z_l/\nrm{z_l}}_{j}}\ge \frac{1/6}{12/m_{2j}}=\frac{m_{2j}}{72}. \end{equation} Since the vectors $z_j$ and $z_l$ can be found to be arbitrarily close to $X$, it follows that one can obtain a similar inequality to \eqref{njhdfdj} for vectors in $X$. Hence $({\mathfrak X},\nrm{\cdot})$ is arbitrarily $\omega_{1}$-distortable.
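For the reader's convenience, let us spell out how the discrepancy \eqref{njhdfdj} follows from the preceding bounds (a routine verification): since $1\le \nrm{z_j}_{j}$ and $\nrm{z_j}\le 6$, while $\nrm{z_l}_{j}\le 12/m_{2j}$ and $1\le \nrm{z_l}$, one has
\begin{equation*}
\nrm{\frac{z_j}{\nrm{z_j}}}_{j}=\frac{\nrm{z_j}_{j}}{\nrm{z_j}}\ge \frac16
\quad\text{and}\quad
\nrm{\frac{z_l}{\nrm{z_l}}}_{j}=\frac{\nrm{z_l}_{j}}{\nrm{z_l}}\le \frac{12}{m_{2j}},
\end{equation*}
and the quotient of these two estimates gives the stated lower bound $m_{2j}/72$.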
\end{proof} \subsection{Operators} It turns out that while the space ${\mathfrak X}$ is $c_0$-saturated it has the following form of indecomposability at the non-separable level. \begin{defin}\rm A Banach space $X$ is \emph{$\omega_{1}$-indecomposable} if for every decomposition $X= Y\oplus Z$ one has that either $Y$ or $Z$ is separable. We say that $X$ is \emph{$\omega_{1}$-hereditarily indecomposable} ($\omega_{1}$-HI in short) if every subspace of $X$ is $\omega_{1}$-indecomposable. \end{defin} This kind of indecomposability corresponds to the following notion of singularity for operators. \begin{defin}\rm An operator $T:X\to Y$ is \emph{$\omega_{1}$-singular} if $T$ is not an isomorphism on any non-separable subspace of $X$. \end{defin} Observe that strictly singular operators and operators with separable range are $\omega_{1}$-singular. While the strictly singular operators and the operators with separable ranges form closed ideals of the Banach algebra $\mc L(X)$ of all bounded operators from $X$ into $X$, we do not know if, in general, this is also the case for the family $\mc S_{\omega_{1}}(X)$ of $\omega_{1}$-singular operators of $X$. Indeed we do not even know if $\mc S_{\omega_{1}}(X)$ is closed under sums. We shall show however that for subspaces $X$ of our space ${\mathfrak X}$ one does have the property that $\mc S_{\omega_{1}}(X)$ forms a closed ideal in the algebra $\mc L(X)$. Using this notion one obtains the following sufficient condition for being $\omega_{1}$-HI. \begin{propo}\label{jyhruthr} Suppose that $X$ has the property that for every subspace $Y$ of $X$ every bounded operator $T:Y\to X$ is of the form $T=\lambda i_{Y,X}+S$ where $S$ is $\omega_{1}$-singular and $\lambda\in {\mathbb R}$. Then $X$ is $\omega_{1}$-HI. \end{propo} \begin{proof} Otherwise, fix two nonseparable subspaces $Y$ and $Z$ of $X$ such that $d(S_Y,S_Z)>0$. It follows that the two natural projections $P_{Y}:Y\oplus Z\to Y$ and $P_Z:Y\oplus Z\to Z$ are both bounded.
Fix $\lambda\in {\mathbb R}$ such that $T=i_{Y,X}\circ P_Y=\lambda i_{Y\oplus Z,X} + S$ with $S$ $\omega_{1}$-singular. Since $T^2=T$, we have that \begin{equation}\label{reuturuhggh} (\lambda^2-\lambda)i_{Y\oplus Z,X} = ((1-2\lambda) i_{Y\oplus Z,X} -S)\circ S. \end{equation} Since it is clear that $U\circ S$ is $\omega_{1}$-singular if $S$ is $\omega_{1}$-singular, it follows from \eqref{reuturuhggh} that $\lambda^2=\lambda$. Without loss of generality, we may assume that $\lambda=1$ (if $\lambda=0$ we replace $Y$ by $Z$ in the preceding argument). Since $P_Y\negmedspace\negmedspace\upharpoonright\negthickspace Z=0$, we obtain that $S\negmedspace\negmedspace\upharpoonright\negthickspace Z=- i_{Z,X}$, contradicting the $\omega_{1}$-singularity of $S$. \end{proof} \begin{notas}\rm Recall that V. Ferenczi has shown in \cite{fere} that if a complex Banach space $X$ is HI then every operator from a subspace $Y$ of $X$ into $X$ is a multiple of the inclusion plus a strictly singular operator. We do not know if the analogous result is true for $\omega_{1}$-singular operators or, in other words, if the converse implication of Proposition \ref{jyhruthr} is true in the case of complex Banach spaces. \end{notas} The main purpose of this subsection is the study of the operator space $\mc L(X,{\mathfrak X})$, where $X$ is an arbitrary closed infinite dimensional subspace of ${\mathfrak X}$. For the next few lemmas we fix a bounded operator $T:X\to {\mathfrak X}$ from a closed infinite dimensional subspace $X$ of ${\mathfrak X}$ into the space ${\mathfrak X}$. We also fix a deciding pair $(\boldsymbol{x},\boldsymbol{y})$ for $T$ (see Definition \ref{decpair}). \begin{lem}\label{ojrtreuret} For all but countably many $\alpha<\omega_{1}$ one has that $\lim_{k\to \infty}d(Tx_k^\alpha,{\mathbb R} x_k^\alpha)=0$.
\end{lem} \begin{proof} Otherwise, using the property (iii) of the deciding pair $(\bsy x,\bsy y)$ and going to subsequences if necessary, we may assume that there is $\varepsilon>0$ such that $$\inf_{k\in {\mathbb N}}d(Tx_k^\alpha,{\mathbb R} y_k^\alpha)>\varepsilon$$ for every countable ordinal $\alpha $. Now using the Hahn--Banach theorem and the fact that the norming set $K$ is closed under rational convex combinations and restrictions on intervals we can find for every pair $(k,\alpha)\in{\mathbb N} \times \omega_{1}$ a functional $f_{k}^\alpha\in K$ such that \noindent (a) $(f_k^{\alpha})_{(k,\alpha)\in {\mathbb N}\times \omega_{1}}$ is a block sequence and $\mathrm{ran\, } f_k^\alpha\subseteq \mathrm{ran\, } x_k^\alpha$ for every $(k,\alpha)$, and \noindent (b) $| f_k^\alpha(y_k^\alpha)|\le \varepsilon_k$ while $f_k^\alpha(T(x_k^\alpha))\ge \varepsilon $ for every $(k,\alpha)$. Fix $j$ with $\varepsilon m_{2j+1} > 2\nrm{T}$. Now use Proposition \ref{oehthorhgoiwehiohtg} to find a sequence $(F_i^\alpha)_{(i,\alpha)\in n_{2j+1}\times \omega_{1}}$ of finite sets of pairs from ${\mathbb N}\times \omega_{1}$ such that \noindent (c) $|F_i^\alpha|=n_{2j_i}$ for every $(i,\alpha)\in n_{2j+1}\times \omega_{1}$, \noindent (d) $(y_k^\xi)_{(k,\xi)\in F_i^\alpha}$ is a $(3,(n_{2j_i})^{-2})$-RIS and $(f_k^\xi)_{(k,\xi)\in F_i^\alpha}$ is a separated block sequence for every $(i,\alpha)\in n_{2j+1}\times \omega_{1}$. \noindent (e) $|f_k^\xi(y_k^\xi)|, \nrm{x_k^{\xi}-y_k^\xi}\le n_{2j_i}^{-3}$ for every $(k,\xi)\in F_i^\alpha $ and every $(i,\alpha)\in n_{2j+1}\times \omega_{1}$.
\noindent (f) $2j_i=\sigma_\varrho(\phi_0^\alpha,m_{2j_0},p_0,\dots, \phi_{i-1}^\alpha, m_{2j_{i-1}},p_{i-1})$ for every $(i,\alpha)\in n_{2j+1}\times \omega_{1}$ where $p_i$ is an integer such that \begin{align*} p_i\ge & \max\{p_0,\dots,p_{i-1},n_{2j+1}^2,p_\varrho(\bigcup_{k<i}(\mathrm{supp\, } \phi_k^\alpha\cup\mathrm{supp\, } t_k^\alpha)),|\mathrm{supp\, } t_{i-1}^\alpha| n_{2j+1}^2\}, \end{align*} and where \begin{align*} \phi_i^\alpha=&\frac1{m_{2j_i}}\sum_{(k,\xi)\in F_i^\alpha}f_k^\xi,\\ t_i^\alpha=& \frac{n_{2j_i}}{m_{2j_i}}\sum_{(k,\xi)\in F_i^\alpha}y_k^\xi, \end{align*} and $\sigma_\varrho$ and $p_\varrho$ are the coding and the $\varrho$-number, respectively, introduced in \cite{arg-lop-tod}. Set also \begin{align*} z_i^\alpha=& \frac{n_{2j_i}}{m_{2j_i}}\sum_{(k,\xi)\in F_i^\alpha}x_k^\xi. \end{align*} Now again using Corollary \ref{oeuwtu3432423fgn} we can find countable ordinals $\alpha_0<\dots <\alpha_{n_{2j+1}-1}$ such that $(\phi_i^{\alpha_i})_{i<n_{2j+1}}$ is a separated block sequence. It follows that the sequence $((t_i^{\alpha_i},\phi_i^{\alpha_i}))_{i<n_{2j+1}}$ is a $(n_{2j+1}^{-2},j)$-dependent sequence, a slight variation of the notion of $(0,j)$-dependent sequence used in \cite{arg-lop-tod} (Definition 5.22; see also the proof of Proposition 5.24). The only change is that now one has that $|\phi_i^{\alpha_i}(t_i^{\alpha_i})|\le 1/n_{2j+1}^2$ instead of zero. It follows that \begin{equation} \nrm{\frac{1}{n_{2j+1}}\sum_{i<n_{2j+1}} t_i^{\alpha_i} }\le \frac{1}{m_{2j+1}^2}. \end{equation} Hence, setting $z=(1/n_{2j+1})\sum_{i<n_{2j+1}} z_i^{\alpha_i}$, one has that \begin{equation} \label{joertjitjioe}\nrm{z}\le \frac{2}{m_{2j+1}^2}. \end{equation} On the other hand, since $\phi=(1/m_{2j+1})\sum_{i<n_{2j+1}}\phi_i^{\alpha_i}$ is in $K$, it follows that \begin{equation} \label{joertjitjioe1}\nrm{T(z) }\ge \phi(T(z))\ge \frac{\varepsilon}{m_{2j+1}}.
\end{equation} Putting \eqref{joertjitjioe} and \eqref{joertjitjioe1} together one gets $$\frac{\varepsilon}{m_{2j+1}}\le \nrm{T(z)}\le \nrm{T}\nrm{z}\le \frac{2\nrm{T}}{m_{2j+1}^2},$$ and this contradicts the choice of $j$. \end{proof} Now for each countable ordinal $\alpha$, let $\lambda_k^{\alpha}=\lambda_k^\alpha(T,\bsy x,\bsy y)\in {\mathbb R}$ be such that $$d(T(x_{k}^\alpha),{\mathbb R} x_k^\alpha)=\nrm{T(x_k^\alpha)-\lambda_k^\alpha x_k^\alpha}.$$ \begin{lem}\label{kjnfjkggf} For all but countably many $\alpha<\omega_{1}$, the numerical sequence $(\lambda_k^\alpha)_k$ is convergent. \end{lem} \begin{proof} Otherwise, using Lemma \ref{ojrtreuret}, one can find two real numbers $\delta<\varepsilon$, an uncountable set $A\subseteq \omega_{1}$, for each $\alpha\in A$ two infinite disjoint subsets $L_\alpha$ and $R_\alpha$ of ${\mathbb N}$, and a block sequence $(f_k^\alpha)_{(k,\alpha)\in (L_\alpha\cup R_\alpha)\times A}$ in $K$ such that \noindent (i) $\mathrm{ran\, } f_k^\alpha\subseteq \mathrm{ran\, } y_k^\alpha$, $f_k^\alpha(y_k^\alpha)=1$ for every $(k,\alpha)\in (L_\alpha\cup R_\alpha)\times A$. \noindent (ii) For every $\alpha\in A$ one has that $f_k^\alpha(T(x_k^\alpha))<\delta$ if $k\in L_\alpha$, and $f_k^\alpha(T(x_k^\alpha))>\varepsilon$ if $k\in R_\alpha$. Let $j\in {\mathbb N}$ be such that $(\varepsilon-\delta) m_{2j+1}> 4 \nrm{T}$. We find, as in the proof of the previous Lemma \ref{ojrtreuret}, a sequence $(F_i^\alpha)_{(i,\alpha)\in n_{2j+1}\times \omega_{1}}$ of finite sets of pairs from ${\mathbb N}\times \omega_{1}$ such that (c), (d), and (f) as there hold, and also \noindent (e') For every $(k,\xi)\in F_i^\alpha $ one has that $\nrm{x_k^{\xi}-y_k^\xi}\le n_{2j_i}^{-3}$ for every $(i,\alpha)\in n_{2j+1}\times \omega_{1}$, and $f_k^\xi(T(x_k^\xi))\ge \varepsilon$ if $i$ is odd and $f_k^\xi(T(x_k^\xi))\le \delta$ if $i$ is even. We set also $\phi_i^\alpha$, $t_i^\alpha$ and $z_i^\alpha$ as there. Let $\alpha_0<\dots <\alpha_{n_{2j+1}-1}$ be such that $(\phi_i^{\alpha_i})_{i<n_{2j+1}}$ is a separated block sequence. It follows that the sequence $((t_i^{\alpha_i},\phi_i^{\alpha_i}))_{i<n_{2j+1}}$ is a $(1,j)$-dependent sequence (see Definition 3.3 in \cite{arg-lop-tod}), hence \begin{equation} \nrm{\frac{1}{n_{2j+1 }}\sum_{i<n_{2j+1}}(-1)^i t_i^{\alpha_i}}\le \frac1{m_{2j+1}^{2}}, \end{equation} and so by the property (e') one has that \begin{equation} \label{oiejhotiuragj} \nrm{z}\le \frac2{m_{2j+1}^{2}}, \end{equation} where $z=(1/n_{2j+1 })\sum_{i<n_{2j+1}}(-1)^i z_i^{\alpha_i}$. One also has, setting $\phi=(1/m_{2j+1})\sum_{i<n_{2j+1}}\phi_i^{\alpha_i}$, that \begin{equation}\label{oiejhotiuragj1} \nrm{T(z)}\ge |\phi(Tz)|=|\frac{1}{m_{2j+1}n_{2j+1}}\sum_{i<n_{2j+1}}(-1)^i\phi_i^{\alpha_i}(T(z_i^{\alpha_i})) |\ge \frac{\varepsilon-\delta}{2m_{2j+1}}. \end{equation} From \eqref{oiejhotiuragj} and \eqref{oiejhotiuragj1} one easily gets a contradiction with the choice of $j$. \end{proof} For every $\alpha<\omega_{1}$ let (if it exists) $\lambda_\alpha=\lambda_\alpha(T,\bsy x,\bsy y)=\lim_{k\to \infty } \lambda_k^\alpha$. \begin{coro}\label{ohrhtughr} There is a real number $\lambda=\lambda(T,\boldsymbol{x},\boldsymbol{y})$ such that $\lambda_\alpha=\lambda$ for all but countably many $\alpha$. \end{coro} \begin{proof} Otherwise, there are two reals $\delta<\varepsilon$ such that both sets $A_0=\conj{\alpha<\omega_{1}}{\lambda_\alpha<\delta}$ and $A_1=\conj{\alpha<\omega_{1}}{\lambda_\alpha>\varepsilon}$ are uncountable. Find countable ordinals $\beta(k,\alpha)$ ($(k,\alpha)\in {\mathbb N}\times \omega_{1}$) such that \noindent (i) $\beta(k,\alpha)\in A_i$ if $k\equiv i \pmod 2$. \noindent (ii) $((x_k^{\beta(k,\alpha)},y_k^{\beta(k,\alpha)}))_{(k,\alpha)\in {\mathbb N}\times \omega_{1}}$ is a deciding pair $(\bsy z,\bsy t)$ for $T$.
It follows that $(\lambda_k^\alpha(\bsy z,\bsy t))_k$ never converges, contradicting Lemma \ref{kjnfjkggf}. \end{proof} \begin{coro} The scalar $\lambda(T,\boldsymbol{x},\boldsymbol{y})$ is independent of $\boldsymbol{x}$ and $\boldsymbol{y}$. We call it $\lambda(T)$. \end{coro} \begin{proof} Fix two deciding pairs $(\bsy x,\bsy y)$ and $(\bsy z,\bsy u)$ for $T$. It is easy to define a third one $(\bsy v,\bsy w)$ for $T$ such that the sets \begin{align*} \{\alpha<\omega_{1}\,& :\,(\exists \beta<\omega_{1}) (\forall k\in {\mathbb N}) \, (v_k^\alpha,w_k^\alpha)=(x_k^\beta,y_k^\beta)\} \\ \{\alpha<\omega_{1}\, & : \,(\exists \beta<\omega_{1}) (\forall k\in {\mathbb N}) \, (v_k^\alpha,w_k^\alpha)=(z_k^\beta,u_k^\beta)\} \end{align*} are both uncountable. It follows that $\lambda(T,\bsy x,\bsy y)=\lambda(T,\bsy v, \bsy w)=\lambda(T,\bsy z,\bsy u)$. \end{proof} \begin{teore}\label{ouetiuoeugov} The mapping $\lambda:\mathcal{L}(X,{\mathfrak X})\to {\mathbb R}$ which sends $T$ to $\lambda(T)$ is a bounded linear functional whose kernel $\ker(\lambda)$ is equal to the family of all $\omega_{1}$-singular operators from $X$ into ${\mathfrak X}$. \end{teore} \begin{proof} It is obvious from the definition of $\lambda(T)$ that $|\lambda(T)|\le \nrm{T}$. We now show the linearity of $\lambda$. It is easy to see that $\lambda( \mu T)=\mu \lambda (T)$. Let us prove now that $\lambda(T_0+T_1)=\lambda(T_0)+\lambda(T_1)$. Let $(\boldsymbol{x},\boldsymbol{y})$ be a deciding pair for both $T_0$ and $T_1$ (see Remark \ref{ojgtiorufjnw}). Let $\lambda_k^{\alpha,0},\lambda_k^{\alpha,1},\lambda_k^\alpha\in {\mathbb R}$ be such that $d((T_0+T_1)(x_k^\alpha),{\mathbb R} x_k^\alpha)=\nrm{(T_0+T_1)(x_k^\alpha)-\lambda_k^\alpha x_k^\alpha}$, and $d(T_i(x_k^\alpha),{\mathbb R} x_k^{\alpha})=\nrm{T_i (x_k^{\alpha})-\lambda_k^{\alpha,i} x_k^{\alpha}}$, for $i=0,1$.
It follows from Lemma \ref{ojrtreuret}, applied to $T_0,T_1$ and $T_0+T_1$, that \begin{equation} \label{etjirutirgffg}\lim_{k\to \infty}(\lambda_k^{\alpha,0}+\lambda_k^{\alpha,1})-\lambda_k^{\alpha}=0, \end{equation} for all but countably many $\alpha$. The desired result now follows from \eqref{etjirutirgffg} and Lemma \ref{kjnfjkggf}. Now we prove that $\ker(\lambda)$ is the family of the $\omega_{1}$-singular operators. Suppose first that $\lambda(T)=0$. We are going to show that $T$ is not an isomorphism when restricted to any non-separable subspace of $X$. To do this, let $\varepsilon>0$, and let $Z$ be a non-separable subspace of $X$. Let $(\boldsymbol{x},\boldsymbol{y})$ be any deciding pair for $T$ with $\boldsymbol{x}\subseteq Z$. Since $\lambda(T)=0$, by the previous Lemma \ref{ojrtreuret} we can find $(k,\alpha)\in {\mathbb N}\times\omega_{1}$ such that $\nrm{T(x_k^{\alpha})}<\varepsilon$, as desired. Finally, suppose that $T$ is an $\omega_{1}$-singular operator. Our intention is to provide a deciding pair $(\bsy x, \bsy y)$ such that $\nrm{T(x_k^\alpha)}\le 2^{-k}$ for every $(k,\alpha)$. Then $\lambda_\alpha(T,\bsy x,\bsy y)=0$ for every $\alpha$, and so $\lambda(T)=0$. \begin{claim} For every non-separable $X_0\hookrightarrow X$, every $\varepsilon>0$ and every $k$ there are two normalized vectors $x$ and $y$ such that $x\in X_0$, $y$ is a $2-\ell_1^k$-average, $\nrm{T(x)}\le \varepsilon$ and $\nrm{x-y}\le \varepsilon$. \end{claim} It is easy to find the desired deciding pair $(\bsy x,\bsy y)$ from a simple use of the previous claim. We now give a proof of the claim. Fix all data. Let $l\in {\mathbb N}$ be the result of the application of Proposition \ref{dfmlejtilejiew} to our fixed $k$, and let $\delta=\varepsilon/2l$. Since $T$ is $\omega_{1}$-singular, one can find two normalized uncountable sequences $(x_\alpha)_{\alpha<\omega_{1}}$ and $(y_\alpha)_{\alpha<\omega_{1}}$ such that \noindent (a) $x_\alpha\in X_0$ for every $\alpha<\omega_{1}$.
\noindent (b) $\bsy y=(y_\alpha)_{\alpha<\omega_{1}}$ is a block sequence, and there is an uncountable block sequence $(f_\alpha)_{\alpha<\omega_{1}}$ in $K$ biorthogonal to $\bsy y$. \noindent (c) $\nrm{x_\alpha-y_\alpha}, \nrm{T(x_\alpha)}\le \delta$. It follows from Corollary \ref{oeuwtu3432423fgn} when applied to $(\mathrm{supp\, } f_\alpha)_{\alpha<\omega_{1}}$ that there is $F\subseteq \omega_{1}$ of size $l$ such that $(f_\alpha)_{\alpha\in F}$ is a separated sequence. So, by Proposition \ref{dfmlejtilejiew} one can find a $2-\ell_1^k$-average $y$ in $\langle y_\alpha \rangle_{\alpha\in F}$ and then, using property (c) above, the counterpart $x\in \langle x_\alpha \rangle_{\alpha\in F}$ so that $x$ and $y$ fulfill the desired conditions. \end{proof} From Theorem \ref{ouetiuoeugov} one easily gets the main conclusion of this section, which gives the description of the spaces of operators $\mc L(X,{\mathfrak X})$ where $X$ is an arbitrary closed infinite dimensional subspace of ${\mathfrak X}$. \begin{coro} Every bounded operator $T:X\to {\mathfrak X}$ from a closed subspace of ${\mathfrak X}$ into ${\mathfrak X}$ can be expressed as the sum $$T=\lambda(T)i_{X,{\mathfrak X}}+S$$ where $S$ is an $\omega_{1}$-singular operator.\qed \end{coro} \begin{coro} The space ${\mathfrak X}$ is $\omega_{1}$-hereditarily indecomposable, and therefore it contains no uncountable unconditional basic sequences. \end{coro} \begin{proof} This follows from Proposition \ref{jyhruthr}. \end{proof} It is well known that the class of strictly singular operators on a Banach space $X$ is a closed ideal of the Banach algebra $\mathcal{L}(X)$ of all bounded operators from $X$ into $X$. We do not know if the same is true for the class of $\omega_{1}$-singular operators. However, we have the following.
\begin{coro} If $X$ is a closed infinite dimensional subspace of ${\mathfrak X}$, then the family $\mc S_{\omega_{1}}(X)$ of $\omega_{1}$-singular operators in $ \mathcal{L}(X)$ forms a closed ideal in the Banach algebra $\mathcal{L}(X)$. \end{coro} \begin{proof} We show that $\lambda:\mc{L}(X)\to {\mathbb R} $, defined by $T \mapsto \lambda(T)=\lambda(i_{X,{\mathfrak X}}\circ T)$, is a bounded homomorphism between Banach \emph{algebras}. From this one easily gets the desired result. Observe that we did almost all the work in Theorem \ref{ouetiuoeugov}. It remains to show that $\lambda(T_1\circ T_0)=\lambda(T_1) \lambda(T_0)$. Fix a deciding pair $(\bsy x,\bsy y)$ for both $T_0$ and $T_1$. It follows that for all but countably many $\alpha<\omega_{1}$ one has that \begin{align*} \lim_{k\to \infty}\nrm{(T_1\circ T_0)(x_k^{\alpha})- \lambda(T_1\circ T_0)x_k^\alpha}= & 0 \\ \lim_{k\to \infty}\nrm{(T_1\circ T_0)(x_k^{\alpha})-T_1(\lambda(T_0)x_k^\alpha)}= & 0 \\ \lim_{k\to \infty}\nrm{T_1(\lambda(T_0) x_k^{\alpha})-\lambda(T_1)\lambda(T_0)x_k^\alpha}= & 0, \end{align*} and so $\lambda(T_1\circ T_0)=\lambda(T_1)\lambda (T_0)$, as desired. \end{proof} In the case of $X={\mathfrak X}$ we obtain the following slightly more informative result. \begin{coro}\label{oieytouwyyghds}If $T:{\mathfrak X}\to {\mathfrak X}$ is a bounded operator, then $T(e_\alpha)=\lambda(T)e_\alpha$ for all but countably many $\alpha$. It follows that $T$ is the sum of a multiple of the identity plus an operator with separable range.
\end{coro} \begin{proof} Otherwise we can find an uncountable subset $A\subseteq \omega_{1}$ and $\varepsilon>0$ such that \begin{equation} \label{ojriyojgh}\nrm{T(e_\alpha)-\lambda(T) e_\alpha}>\varepsilon \end{equation} for all $\alpha\in A$ and also, as a consequence of Remark \ref{oieutiougf} (f), such that $T(e_\alpha)<T(e_\beta)$ for every $\alpha<\beta$ in $A$ (here we are making an abuse of language by formally accepting that $0<x$ for every vector $x$). Let $\theta_A:\omega\times \omega_1\to A$ be the unique order-preserving onto mapping, and define $x_k^\alpha=e_{\theta_A(k,\alpha)}$. Then $((x_k^\alpha),(x_k^\alpha))$ is a deciding pair for $T$. Observe that this is what makes the situation $X={\mathfrak X}$ so peculiar. So, by Lemma \ref{ojrtreuret}, for all but countably many $\alpha$ one has that $$\lim_{k\to \infty}\nrm{T(x_k^\alpha)-\lambda(T)x_k^\alpha}=0,$$ which contradicts \eqref{ojriyojgh}. \end{proof} The following result shows that, among the bounded operators from ${\mathfrak X}$ into ${\mathfrak X}$, there is no distinction between the notions of $\omega_{1}$-singular operators and the notion of operators with separable range. \begin{coro} The following are equivalent for $T\in \mathcal{L}({\mathfrak X})$: \noindent (a) $(T(e_\alpha))_{\alpha<\omega_{1}}$ is eventually zero. \noindent (b) $T$ has separable range. \noindent (c) $T$ is $\omega_{1}$-singular. \end{coro} \begin{proof} (c) implies (a): Suppose that $T$ is $\omega_{1}$-singular. Then by Theorem \ref{ouetiuoeugov} one has that $\lambda(T)=0$; so by Corollary \ref{oieytouwyyghds} it follows that $T(e_\alpha)=0$ eventually. \end{proof}
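Let us also indicate, for completeness, why the remaining implications of the last corollary hold (we only sketch the routine arguments). (a) implies (b): if $T(e_\alpha)=0$ for every $\alpha$ outside a countable set $C$, then, since ${\mathfrak X}$ is the closed linear span of $(e_\alpha)_{\alpha<\omega_{1}}$ and $T$ is continuous,
\begin{equation*}
T({\mathfrak X})\subseteq \overline{T(\langle e_\alpha\rangle_{\alpha\in C})},
\end{equation*}
which is a separable subspace of ${\mathfrak X}$. (b) implies (c): an operator with separable range cannot be an isomorphism on a non-separable subspace, since an isomorphic image of a non-separable space is non-separable.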
\section{Introduction} \label{sec: Intro} Inspired by both the fundamental interest and also potential applications to quantum computing, the search for Majorana fermions (MFs) has become one of the central topics in condensed matter physics, see Refs. \onlinecite{Alicea12,LF12,ST13,Bee13} for reviews. One of the most promising routes to the MFs is based on the observation that a one-dimensional (1D) lattice model of a spin-polarized $p$-wave superconductor, known as the Kitaev chain,\cite{Kit01} can have unpaired, or ``dangling'', zero-energy boundary states near its ends (although these states are not usual fermions, in particular, they obey non-Abelian exchange statistics, we will still call them MFs, following a considerable precedent in the literature). One can engineer a Kitaev chain-like system in a quantum wire with the spin-orbit coupling (SOC) and a sufficiently strong time reversal (TR) symmetry breaking, in which superconductivity is induced by proximity with a conventional bulk superconductor.\cite{Lutchyn10,Oreg10} It is in this setup that experimental signatures consistent with the MFs have been observed in InSb nanowires in an applied magnetic field,\cite{InSb-wire} and also in ferromagnetic chains on a superconducting Pb substrate.\cite{Fe-chain} Both crucial ingredients of the recent MF proposals, the asymmetric SOC and TR symmetry breaking, are known to fundamentally affect superconductivity. The asymmetric, or Rashba, SOC (Refs. \onlinecite{Rashba-model} and \onlinecite{Manchon15}) requires the absence of inversion symmetry, which naturally occurs in a quantum wire placed on a substrate. It lifts the spin degeneracy of the electron states, resulting in nondegenerate Bloch bands characterized by a nontrivial momentum-space topology. This has profound consequences for superconductivity in three-dimensional (3D) and two-dimensional (2D) materials, which have been extensively studied in the last decade, see Refs.
\onlinecite{NCSC-book} and \onlinecite{Smid17} for reviews. On the other hand, a TR symmetry-breaking field, either intrinsic (\textit{e.g.}, the exchange field in ferromagnets) or externally applied, also lifts the spin degeneracy of the bands and significantly changes the symmetry properties of the Cooper pairs.\cite{SW02,FMSC-review} The first goal of this work is to present a complete symmetry-based analysis of the electronic bands and the superconducting states in the presence of \textit{both} the asymmetric SOC and a TR symmetry-breaking field. Regardless of the microscopic mechanism of pairing, the symmetry approach has proven to be an extremely powerful tool in the studies of unconventional superconductivity, helping to identify possible stable states and determine the gap structure.\cite{Book} We focus on the quasi-1D case and develop our analysis without any model-specific assumptions, for an arbitrary number of bands. We emphasize, in particular, the crucial role of antiunitary symmetries in defining the proper gap function. Our second goal is to calculate the spectrum of the subgap Andreev bound states (ABS) near the ends of superconducting quantum wires, again in a model-independent way. The presence of these states, observed, \textit{e.g.}, in high-$T_c$ cuprates and other materials,\cite{ZBCP} is an important signature of an unconventional pairing. The ABS energies are obtained by solving the Bogoliubov-de Gennes (BdG) equations in the semiclassical, or Andreev, approximation.\cite{And64} The MFs emerge as zero-energy ABS protected by topology against sufficiently small perturbations. According to the bulk-boundary correspondence principle, the number of the boundary zero modes is determined by a certain topological invariant in the bulk.\cite{Volovik-book,top-SC} We prove this statement in the systems under consideration and present explicit expressions for the number of zero modes for different magnetic symmetry types. 
The rest of the paper is organized as follows. In Secs. \ref{sec: magnetic classes} and \ref{sec: bands}, we introduce the 1D magnetic point groups, or magnetic classes, and develop a symmetry classification of quasi-1D electron band structures. In Sec. \ref{sec: superconductivity}, superconducting pairing is analyzed for different types of the magnetic classes. In Sec. \ref{sec: ABS and topology}, we use the semiclassical approach to calculate the ABS spectrum in a general multiband quasi-1D superconductor, in particular, to count the number of zero-energy modes protected against symmetry-preserving perturbations. Sec. \ref{sec: summary} contains a summary of our results. Throughout the paper we use the units in which $\hbar=1$, neglecting the difference between the quasiparticle momentum and wavevector, and denote the absolute value of the electron charge by $e$. \section{Magnetic classes in one dimension} \label{sec: magnetic classes} We consider a quasi-1D wire oriented along the $x$ direction on a substrate in the $xy$ plane. The full 3D potential energy $U(x,y,z)$ affecting the electrons is periodic in $x$, with the period $d$, but confining in both $y$ and $z$ directions. This system lacks an inversion center, because the substrate breaks the $z\to-z$ mirror reflection symmetry. In the presence of a uniform external magnetic field $ \bm{H} =\bm{\nabla}\times \bm{A} $, the single-particle Hamiltonian has the following form: \begin{equation} \label{general H} \hat H=\frac{ \hat{\bm{P}} ^2}{2m}+U( \bm{r} )+\frac{1}{4m^2c^2}\hat{\bm{\sigma}}[\bm{\nabla}U( \bm{r} )\times \hat{\bm{P}} ]+\mu_B\hat{\bm{\sigma}} \bm{H} . \end{equation} Here $ \hat{\bm{P}} = \hat{\bm{p}} +(e/c) \bm{A} ( \bm{r} )$ and $ \hat{\bm{p}} =-i\bm{\nabla}$ are the kinetic and canonical momenta operators, respectively, $\hat{\bm{\sigma}}=(\hat\sigma_1,\hat\sigma_2,\hat\sigma_3)$ are the Pauli matrices, and $\mu_B$ is the Bohr magneton.
The third term describes the SOC of electrons with the potential $U( \bm{r} )$ and the last term is the Zeeman interaction. It is convenient to choose the vector potential in the form $ \bm{A} = \bm{A} (y,z)$ (for instance, $A_x=H_yz-H_zy$, $A_y=-H_xz$, $A_z=0$), then the quasimomentum along $x$ is conserved and the Bloch states are labelled by the wavevector $\bm{k}=k_x \hat{\bm{x}} $, which takes values in the first Brillouin zone (BZ): $-G/2<k_x\leq G/2$. Here $G=2\pi/d$ is the basis vector of the 1D reciprocal lattice. The rotations and reflections leaving $U( \bm{r} )$ invariant form a point group $\mathbb{G}$. In the quasi-1D case, there are just two basic point-group operations -- the mirror reflections $\sigma_x$ and $\sigma_y$, which act as follows: $\sigma_xU(x,y,z)=U(-x,y,z)$ and $\sigma_yU(x,y,z)=U(x,-y,z)$, while their product is equivalent to a $\pi$-rotation about the $z$ axis: $\sigma_x\sigma_yU(x,y,z)=U(-x,-y,z)=C_{2z}U(x,y,z)$ (our notations for the symmetry operations are the same as in Ref. \onlinecite{LL-3}). It is easy to see that there are only five quasi-1D point groups: $\mathbf{C}_1=\{E\}$, $\mathbf{D}_x=\{E,\sigma_x\}$, $\mathbf{D}_y=\{E,\sigma_y\}$, $\mathbf{C}_2=\{E,\sigma_x\sigma_y\}$, and $\mathbf{V}=\{E,\sigma_x,\sigma_y,\sigma_x\sigma_y\}$, see Ref. \onlinecite{Sam17}. The symmetry analysis can be extended to include the quasi-1D systems in which the normal state breaks TR symmetry. In this case, the crystal symmetry is characterized by one of the \textit{magnetic} point groups, or the magnetic classes, $\mathbb{G}_M$, whose elements leave both $U( \bm{r} )$ and $ \bm{H} $ invariant.\cite{LL-8} We use the notation $ \bm{H} $ in a broad sense, for any spatially uniform TR symmetry-breaking field, which transforms as a pseudovector under the reflections, \textit{i.e.}, $\sigma_x \bm{H} =(H_x,-H_y,-H_z)$ and $\sigma_y \bm{H} =(-H_x,H_y,-H_z)$, and also changes sign under the TR operation ${\cal K}$. 
In a ferromagnetic system, $ \bm{H} $ can be taken to represent the exchange field or the spontaneous magnetization. According to general theory,\cite{LL-8} the magnetic classes can be grouped into three types. Type I classes do not involve the TR operation at all, either by itself or in combination with the reflections $\sigma_x$ and $\sigma_y$, and are therefore formally the same as the ordinary point groups, \textit{i.e.}, $\mathbb{G}_M=\mathbb{G}$. These classes, which are listed in Table \ref{table: Type I}, describe TR symmetry-breaking systems with $ \bm{H} \neq \bm{0} $, the only exception being $\mathbb{G}_M=\mathbf{V}$, which does not allow the existence of a uniform pseudovector $ \bm{H} $. Type II classes contain the TR operation ${\cal K}$ itself and therefore describe TR invariant systems, with $ \bm{H} = \bm{0} $. These classes are obtained from the ordinary point groups as direct products $\mathbb{G}_M=\mathbb{G}\times{\cal K}$. Finally, Type III magnetic classes contain the TR operation only in combination with the reflections $\sigma_x$ and $\sigma_y$. In the notation for these classes, $\mathbb{G}_M=\mathbb{G}(\tilde{\mathbb{G}})$, the unitary subgroup $\tilde{\mathbb{G}}$ includes all elements of $\mathbb{G}$ which are not multiplied by the antiunitary operation ${\cal K}$. In order to obtain Type III magnetic classes, one has to identify all subgroups $\tilde{\mathbb{G}}$ of index 2 for each ordinary point group $\mathbb{G}$. Then, $\mathbb{G}_M=\tilde{\mathbb{G}}+({\cal K}\tilde g)\tilde{\mathbb{G}}$, where $\tilde g$ is any element of $\mathbb{G}$ that is not in $\tilde{\mathbb{G}}$. The resulting list is shown in Table \ref{table: Type III}. Thus we conclude that there are altogether sixteen quasi-1D magnetic classes of all three types.
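This counting can be reproduced with a short script (an illustrative sketch, not part of the paper): the reflections generate the Klein four-group, and enumerating its subgroups gives the five quasi-1D point groups, while the index-2 subgroups give the Type III classes.

```python
from itertools import combinations

# Elements of the maximal quasi-1D point group V = {E, sigma_x, sigma_y,
# sigma_x*sigma_y}, encoded by the signs they give to (x, y);
# composition is componentwise multiplication (the Klein four-group).
E, SX, SY, C2Z = (1, 1), (-1, 1), (1, -1), (-1, -1)
ALL = [E, SX, SY, C2Z]
mul = lambda a, b: (a[0] * b[0], a[1] * b[1])

def is_group(s):
    # A subset is a subgroup iff it contains E and is closed under products
    return E in s and all(mul(a, b) in s for a in s for b in s)

# All subgroups of V: the five quasi-1D point groups C1, Dx, Dy, C2, V
subgroups = sorted({frozenset(c) for r in range(1, 5)
                    for c in combinations(ALL, r) if is_group(frozenset(c))},
                   key=len)

type_I = len(subgroups)            # ordinary point groups (TR absent)
type_II = len(subgroups)           # direct products G x K (TR invariant)
type_III = sum(1 for g in subgroups for h in subgroups
               if h < g and 2 * len(h) == len(g))   # index-2 subgroups
print(len(subgroups), type_I, type_II, type_III,
      type_I + type_II + type_III)                  # 5 5 5 6 16
```

The three single-reflection groups each contain $\{E\}$ as their only index-2 subgroup, while $\mathbf{V}$ contains three, reproducing the six Type III classes of Table \ref{table: Type III} and the total of sixteen.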
One can see from Tables \ref{table: Type I} and \ref{table: Type III} that, if TR symmetry is broken by a uniform field $ \bm{H} $, then there are ten possible ``ferromagnetic'' classes: \begin{equation} \label{FM-classes} \mathbb{G}_M=\mathbf{C}_1, \mathbf{D}_x, \mathbf{D}_y, \mathbf{C}_2, \mathbf{D}_x(E), \mathbf{D}_y(E), \mathbf{C}_2(E), \mathbf{V}(\mathbf{D}_x), \mathbf{V}(\mathbf{D}_y), \mathbf{V}(\mathbf{C}_2). \end{equation} For a given orientation of the field, only certain symmetries can be realized. In general, \textit{i.e.}, if all three components of $ \bm{H} $ are nonzero, then the magnetic class is $\mathbf{C}_1$ (Type I), regardless of the symmetry of the potential $U( \bm{r} )$. If the magnetic field is along a high-symmetry direction, then the number of possibilities increases. For example, $ \bm{H} \parallel \hat{\bm{x}} $ can be realized in five classes: $\mathbf{C}_1$, $\mathbf{D}_x$, $\mathbf{D}_y(E)$, $\mathbf{C}_2(E)$, and $\mathbf{V}(\mathbf{D}_x)$, while $ \bm{H} \parallel \hat{\bm{z}} $ can be realized in $\mathbf{C}_1$, $\mathbf{C}_2$, $\mathbf{D}_x(E)$, $\mathbf{D}_y(E)$, and $\mathbf{V}(\mathbf{C}_2)$. Diagonalization of the Hamiltonian (\ref{general H}) produces the Bloch states $|k_x,n\rangle$, labelled by the band index $n$, with the corresponding dispersions $\xi_n(k_x)$ (the energies are counted from the chemical potential and the difference between the latter and the Fermi energy $\epsilon_F$ is neglected). Due to the absence of inversion symmetry, the bands are nondegenerate almost everywhere in the BZ, see Sec. \ref{sec: bands}, and, other than being periodic in the reciprocal space, $\xi_n(k_x+G)=\xi_n(k_x)$, do not have generically any additional symmetries. However, for some magnetic classes there is enough symmetry to ensure that \begin{equation} \label{xi-even} \xi_n(k_x)=\xi_n(-k_x). 
\end{equation} We call such magnetic classes ``superconducting'', because symmetric bands are favorable for the formation of the Cooper pairs with zero center-of-mass momentum through a Bardeen-Cooper-Schrieffer (BCS)-like mechanism, see Sec. \ref{sec: superconductivity}. This does not mean, though, that superconductivity is impossible if the property (\ref{xi-even}) does not hold. For instance, some kind of a nonuniform superconducting state might be stabilized in asymmetric bands. We do not consider such possibilities here. The property (\ref{xi-even}) obviously holds for all five Type II classes, \textit{i.e.}, for \begin{equation} \label{classes-even-bands-II} \mathbb{G}_M=\mathbf{C}_1\times{\cal K},\ \mathbf{D}_x\times{\cal K},\ \mathbf{D}_y\times{\cal K},\ \mathbf{C}_2\times{\cal K},\ \mathbf{V}\times{\cal K}, \end{equation} in which case the Bloch states with opposite momenta form the Kramers pairs. For Type I and Type III magnetic classes, the bands are even in $k_x$ only if there is a group element which transforms $k_x$ into $-k_x$, \textit{i.e.}, at least one of $\sigma_x$, ${\cal K}\sigma_y$, or $\sigma_x\sigma_y$. It is easy to see that this requirement is satisfied for only three of the Type I classes: \begin{equation} \label{classes-even-bands-I} \mathbb{G}_M=\mathbf{D}_x,\ \mathbf{C}_2,\ \mathbf{V}, \end{equation} and three of the Type III classes: \begin{equation} \label{classes-even-bands-III} \mathbb{G}_M=\mathbf{D}_y(E),\ \mathbf{V}(\mathbf{D}_x),\ \mathbf{V}(\mathbf{C}_2). \end{equation} Note that all ``superconducting'' magnetic classes correspond to $H_y=0$; any deviation of the magnetic field from the $xz$ plane produces asymmetric bands and is therefore detrimental to superconductivity. \begin{table} \caption{Type I magnetic classes in quasi-1D, with the corresponding directions of a uniform TR symmetry-breaking field.
The magnetic class $\mathbf{V}$ does not allow ferromagnetism (\textit{i.e.}, the existence of a uniform pseudovector $ \bm{H} $).} \begin{tabular}{|c|c|} \hline $\mathbb{G}_M$ & $ \bm{H} $ \\ \hline $\mathbf{C}_1=\{E\}$ & $H_x \hat{\bm{x}} +H_y \hat{\bm{y}} +H_z \hat{\bm{z}} $ \\ \hline $\mathbf{D}_x=\{E,\sigma_x\}$ & $H_x \hat{\bm{x}} $ \\ \hline $\mathbf{D}_y=\{E,\sigma_y\}$ & $H_y \hat{\bm{y}} $ \\ \hline $\mathbf{C}_2=\{E,\sigma_x\sigma_y\}$ & $H_z \hat{\bm{z}} $ \\ \hline $\ \mathbf{V}=\{E,\sigma_x,\sigma_y,\sigma_x\sigma_y\}\ $ & $ \bm{0} $ \\ \hline \end{tabular} \label{table: Type I} \end{table} \begin{table} \caption{Type III magnetic classes in quasi-1D, with the corresponding directions of a uniform TR symmetry-breaking field.} \begin{tabular}{|c|c|} \hline $\mathbb{G}_M$ & $ \bm{H} $ \\ \hline $\mathbf{D}_x(E)=\{E,{\cal K}\sigma_x\}$ & $H_y \hat{\bm{y}} +H_z \hat{\bm{z}} $ \\ \hline $\mathbf{D}_y(E)=\{E,{\cal K}\sigma_y\}$ & $H_x \hat{\bm{x}} +H_z \hat{\bm{z}} $ \\ \hline $\mathbf{C}_2(E)=\{E,{\cal K}\sigma_x\sigma_y\}$ & $H_x \hat{\bm{x}} +H_y \hat{\bm{y}} $ \\ \hline $\ \mathbf{V}(\mathbf{D}_x)=\{E,\sigma_x,{\cal K}\sigma_y,{\cal K}\sigma_x\sigma_y\}\ $ & $H_x \hat{\bm{x}} $ \\ \hline $\ \mathbf{V}(\mathbf{D}_y)=\{E,{\cal K}\sigma_x,\sigma_y,{\cal K}\sigma_x\sigma_y\}\ $ & $H_y \hat{\bm{y}} $ \\ \hline $\ \mathbf{V}(\mathbf{C}_2)=\{E,{\cal K}\sigma_x,{\cal K}\sigma_y,\sigma_x\sigma_y\}\ $ & $H_z \hat{\bm{z}} $ \\ \hline \end{tabular} \label{table: Type III} \end{table} \section{Electronic band structure} \label{sec: bands} In order to get more detailed information about the band structure, in this section we derive an effective momentum-space Hamiltonian of noninteracting electrons. We adapt the general approach developed in Ref. \onlinecite{Sam09} to the quasi-1D case, with modifications required in the presence of magnetic field. The starting point is Eq. 
(\ref{general H}), which can be represented in the form $\hat H=\hat H_s+\hat H_a$, where \begin{eqnarray} \label{H-symm} && \hat H_s=\frac{ \hat{\bm{p}} ^2}{2m}+U_s( \bm{r} )+\frac{1}{4m^2c^2}\hat{\bm{\sigma}}[\bm{\nabla}U_s( \bm{r} )\times \hat{\bm{p}} ],\\ \label{H-asymm} && \hat H_a=U_a( \bm{r} )+\frac{1}{4m^2c^2}\hat{\bm{\sigma}}[\bm{\nabla}U_a( \bm{r} )\times \hat{\bm{p}} ]+\frac{e}{4m^2c^3}\hat{\bm{\sigma}}[\bm{\nabla}U( \bm{r} )\times \bm{A} ]+\frac{e}{2mc}( \hat{\bm{p}} \bm{A} + \bm{A} \hat{\bm{p}} )+\frac{e^2}{2mc^2} \bm{A} ^2 +\mu_B\hat{\bm{\sigma}} \bm{H} . \end{eqnarray} Here $$ U_s( \bm{r} )=\frac{U( \bm{r} )+U(- \bm{r} )}{2},\quad U_a( \bm{r} )=\frac{U( \bm{r} )-U(- \bm{r} )}{2} $$ are respectively the inversion-symmetric and antisymmetric parts of the potential $U( \bm{r} )$. Note that $\hat H_s$ is invariant with respect to both the full 3D inversion $I$ and time reversal ${\cal K}$ (recall that the TR operation for spin-1/2 particles is ${\cal K}=i\hat\sigma_2{\cal K}_0$, where ${\cal K}_0$ is complex conjugation), while $\hat H_a$ has neither of these symmetries. Diagonalizing $\hat H_s$, one obtains the Bloch bands $\epsilon_\mu(k_x)$, with the properties $\epsilon_\mu(k_x)=\epsilon_\mu(-k_x)$ and $\epsilon_\mu(k_x+G)=\epsilon_\mu(k_x)$. Due to the combined symmetry ${\cal K}I$, which leaves the wavevector unchanged, the bands are twofold degenerate, and one can introduce the ``pseudospin'' index $\alpha$ to distinguish two degenerate Bloch states at each $k_x$ (Ref. \onlinecite{pseudospin}). These states, denoted by $|k_x\mu\alpha\rangle$, can be chosen to transform under TR and the point group operations in the same way as the pure spin eigenstates, which justifies using the same notation for the pseudospin, $\alpha=\uparrow$ or $\downarrow$, as for the usual spin. In the gauge $ \bm{A} = \bm{A} (y,z)$, it is easy to see that $\hat H_a$, see Eq. (\ref{H-asymm}), is lattice-periodic in the $x$ direction and therefore diagonal in $k_x$.
Its matrix elements in the pseudospin basis can be written as \begin{equation} \label{Hprime-matrix} \langle k_x\mu\alpha|\hat H_a|k_x\nu\beta\rangle=i{\cal L}_{\mu\nu}(k_x)\delta_{\alpha\beta}+ \bm{{\cal M}} _{\mu\nu}(k_x)\bm{\sigma}_{\alpha\beta}. \end{equation} All effects of the breaking of TR and 3D inversion symmetries are contained in the matrices ${\cal L}_{\mu\nu}$ and $ \bm{{\cal M}} _{\mu\nu}=({\cal M}_{1,\mu\nu},{\cal M}_{2,\mu\nu},{\cal M}_{3,\mu\nu})$. Thus we arrive at the following second-quantized form of the noninteracting electron Hamiltonian: \begin{equation} \label{H-nonint} \hat H=\sum_{k_x,\mu\nu}\sum_{\alpha\beta}[\epsilon_\mu(k_x)\delta_{\mu\nu}\delta_{\alpha\beta}+i{\cal L}_{\mu\nu}(k_x)\delta_{\alpha\beta} + \bm{{\cal M}} _{\mu\nu}(k_x)\bm{\sigma}_{\alpha\beta}]\hat a^\dagger_{k_x\mu\alpha}\hat a_{k_x\nu\beta}, \end{equation} where $\hat a^\dagger$ and $\hat a$ are the fermionic creation and annihilation operators in the pseudospin states. The bands $\xi_n(k_x)$ are obtained by diagonalizing the above Hamiltonian. It is easy to see that the twofold pseudospin degeneracy at given $k_x$ is lifted only if $ \bm{{\cal M}} _{\mu\nu}(k_x)\neq \bm{0} $. The matrices ${\cal L}_{\mu\nu}(k_x)$ and $ \bm{{\cal M}} _{\mu\nu}(k_x)$ must satisfy certain conditions imposed by symmetry. It follows from the general requirements of the lattice periodicity and Hermiticity that \begin{eqnarray} \label{LM-periodic} && {\cal L}_{\mu\nu}(k_x)={\cal L}_{\mu\nu}(k_x+G),\quad \bm{{\cal M}} _{\mu\nu}(k_x)= \bm{{\cal M}} _{\mu\nu}(k_x+G),\\ \label{LM-Hermite} && {\cal L}_{\mu\nu}(k_x)=-{\cal L}_{\nu\mu}^*(k_x),\quad \bm{{\cal M}} _{\mu\nu}(k_x)= \bm{{\cal M}} _{\nu\mu}^*(k_x). 
\end{eqnarray} Under time reversal, we have \begin{equation} \label{LM-TR} {\cal K}:\ {\cal L}_{\mu\nu}(k_x)\to -{\cal L}_{\mu\nu}^*(-k_x),\quad \bm{{\cal M}} _{\mu\nu}(k_x)\to - \bm{{\cal M}} _{\mu\nu}^*(-k_x), \end{equation} while under the reflections, \begin{eqnarray} \label{LM-sigma-x} && \sigma_x:\ {\cal L}_{\mu\nu}(k_x)\to {\cal L}_{\mu\nu}(-k_x),\quad \bm{{\cal M}} _{\mu\nu}(k_x)\to C_{2x} \bm{{\cal M}} _{\mu\nu}(-k_x),\\ \label{LM-sigma-y} && \sigma_y:\ {\cal L}_{\mu\nu}(k_x)\to {\cal L}_{\mu\nu}(k_x),\quad \bm{{\cal M}} _{\mu\nu}(k_x)\to C_{2y} \bm{{\cal M}} _{\mu\nu}(k_x). \end{eqnarray} Here $C_{2x}$ and $C_{2y}$ are $\pi$-rotations about the $x$ and $y$ axes, respectively. For Type II classes, we have $ \bm{H} = \bm{0} $ and $ \bm{A} = \bm{0} $, therefore $\hat H_a$ is odd in $ \bm{r} $. Then it follows from Eq. (\ref{Hprime-matrix}) that \begin{equation} \label{LM-odd-k-x} {\cal L}_{\mu\nu}(k_x)=-{\cal L}_{\mu\nu}(-k_x),\quad \bm{{\cal M}} _{\mu\nu}(k_x)=- \bm{{\cal M}} _{\mu\nu}(-k_x). \end{equation} Using Eqs. (\ref{LM-Hermite}) and (\ref{LM-TR}), we obtain that ${\cal L}_{\mu\nu}$ is a real antisymmetric matrix, while $ \bm{{\cal M}} _{\mu\nu}$ is a real symmetric matrix, at each $k_x$. The point group symmetries can impose additional constraints. For example, if the magnetic class is $\mathbf{D}_x\times{\cal K}$, then the invariance under $\sigma_x$ dictates that ${\cal L}_{\mu\nu}(k_x)={\cal L}_{\mu\nu}(-k_x)$ and $ \bm{{\cal M}} _{\mu\nu}(k_x)=C_{2x} \bm{{\cal M}} _{\mu\nu}(-k_x)$, see Eq. (\ref{LM-sigma-x}). Comparing this with Eq. (\ref{LM-odd-k-x}) yields ${\cal L}_{\mu\nu}(k_x)=0$ and ${\cal M}_{1,\mu\nu}(k_x)=0$ at all $k_x$. We leave it to an interested reader to derive the permitted forms of the noninteracting Hamiltonian (\ref{H-nonint}) for other magnetic classes. The property (\ref{LM-odd-k-x}) leads to unavoidable isolated band degeneracies. Indeed, consider the TR invariant wavevectors $k_x=K$, satisfying $-K=K+G$. 
There are just two such points in the 1D BZ, given by $K_1=0$ and $K_2=\pi/d$. At these points, we have $ \bm{{\cal M}} _{\mu\nu}(K)=- \bm{{\cal M}} _{\mu\nu}(-K)=- \bm{{\cal M}} _{\mu\nu}(K+G)=- \bm{{\cal M}} _{\mu\nu}(K)$, therefore $ \bm{{\cal M}} _{\mu\nu}(K_{1,2})= \bm{0} $. For the same reason, ${\cal L}_{\mu\nu}(K_{1,2})=0$. Thus the bands $\xi_n(k_x)$ in a Type II system remain degenerate at $k_x=K_{1,2}$, coming in pairs connected at the center and the boundaries of the BZ, as shown in Fig. \ref{fig: bands-II}. The number of bands crossing the Fermi level, which we denote by $N$, is even, barring some exceptional values of the chemical potential, at which either $\xi_n(0)=0$ or $\xi_n(\pi/d)=0$. In contrast, for Type I and Type III magnetic classes, $\hat H_a$ is no longer odd in $ \bm{r} $, and the property (\ref{LM-odd-k-x}) does not hold. By inspecting the magnetic point group symmetries, one can verify that there are no reasons for all elements of $ \bm{{\cal M}} _{\mu\nu}$ to simultaneously vanish at $k_x=K_{1,2}$, or anywhere else in the BZ. The only exception is the nonferromagnetic Type I class $\mathbf{V}$, for which it follows from Eqs. (\ref{LM-sigma-x}) and (\ref{LM-sigma-y}) that ${\cal L}_{\mu\nu}(k_x)={\cal L}_{\mu\nu}(-k_x)$, ${\cal M}_{1,\mu\nu}(k_x)=0$, ${\cal M}_{3,\mu\nu}(k_x)=0$, and ${\cal M}_{2,\mu\nu}(k_x)=-{\cal M}_{2,\mu\nu}(-k_x)$, producing the bands which are symmetric and pairwise degenerate at $k_x=K_{1,2}$, see Fig. \ref{fig: bands-II}. In all other Type I and Type III cases, see Eq. (\ref{FM-classes}), the band degeneracy is completely lifted at all wavevectors, and the number of bands crossing the Fermi level can be odd or even, as shown in Fig. \ref{fig: bands-I and III-asymm}. For the ``superconducting'' Type I and Type III classes, listed in Eqs. (\ref{classes-even-bands-I}) and (\ref{classes-even-bands-III}), the bands are symmetric, $\xi_n(k_x)=\xi_n(-k_x)$, as shown in Fig. \ref{fig: bands-I and III-symm}. 
\subsection{Rashba model} \label{sec: Rashba} The band structure peculiarities in noncentrosymmetric quasi-1D systems can be illustrated using a minimal model with just one pseudospin-degenerate band, corresponding to $\mu=0$. The general Hamiltonian (\ref{H-nonint}) is then reduced to $$ \hat H=\sum_{k_x,\alpha\beta}[\epsilon_0(k_x)\delta_{\alpha\beta}+i{\cal L}_{00}(k_x)\delta_{\alpha\beta} + \bm{{\cal M}} _{00}(k_x)\bm{\sigma}_{\alpha\beta}]\hat a^\dagger_{k_x\alpha}\hat a_{k_x\beta}, $$ where the band index in the fermionic operators has been dropped for brevity. It is easy to check that for all magnetic classes, $ \bm{{\cal M}} _{00}$ is real, while ${\cal L}_{00}$ is either zero or purely imaginary, \textit{i.e.}, ${\cal L}_{00}(k_x)=-i\gamma_0(k_x)$, where the even in $k_x$ part of $\gamma_0$ can be absorbed into the ``bare'' band dispersion $\epsilon_0(k_x)$. Introducing the notation $ \bm{{\cal M}} _{00}(k_x)= \bm{\Gamma} (k_x)$, separating the odd and even in $k_x$ parts, and neglecting the momentum dependence of the latter, we finally arrive at the following expression: \begin{equation} \label{H-Rashba} \hat H = \sum\limits_{k_x,\alpha\beta}\left[\epsilon_0(k_x)\delta_{\alpha\beta}+\gamma_0(k_x)\delta_{\alpha\beta}+ \bm{\Gamma} (k_x)\bm{\sigma}_{\alpha\beta}\right]\hat a^\dagger_{k_x\alpha}\hat a_{k_x\beta}, \end{equation} where $\epsilon_0(k_x)=\epsilon_0(-k_x)$, $\gamma_0(k_x)=-\gamma_0(-k_x)$, and \begin{equation} \label{Gamma-def} \bm{\Gamma} (k_x)= \bm{\gamma} (k_x)+\bm{h}, \end{equation} with $ \bm{\gamma} (k_x)=- \bm{\gamma} (-k_x)$. With the exception of the $\gamma_0$ term, the Hamiltonian (\ref{H-Rashba}) has the same form as the 1D version of the well-known Rashba model\cite{Rashba-model} in an external magnetic field. The first and second terms in Eq. (\ref{Gamma-def}) are usually called the ``asymmetric SOC'' and the ``Zeeman interaction'', respectively. 
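The spectrum of the model (\ref{H-Rashba}) can be checked in a minimal numerical sketch (illustrative parameter values, assuming the simple lattice-periodic choice $\gamma_0(k_x)=\gamma_0\sin(k_xd)$, $ \bm{\gamma} (k_x)= \bm{\gamma} \sin(k_xd)$ and an even bare band $\epsilon_0(k_x)=-\cos(k_xd)$): diagonalizing the $2\times2$ matrix $[\epsilon_0(k_x)+\gamma_0(k_x)]\hat 1+ \bm{\Gamma} (k_x)\hat{\bm{\sigma}}$ on a symmetric grid shows directly which parameter choices give bands even in $k_x$.

```python
import numpy as np

# Pauli matrices
sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])
I2 = np.eye(2)
d = 1.0  # lattice period (sets the units of k)

def H(k, g0, gam, h):
    """2x2 Rashba Hamiltonian with lattice-periodic parameters:
    H(k) = [eps0(k) + g0*sin(kd)]*1 + Gamma(k).sigma, Gamma(k) = gam*sin(kd) + h."""
    eps0 = -np.cos(k * d)                      # illustrative even band
    Gamma = np.asarray(gam) * np.sin(k * d) + np.asarray(h)
    return (eps0 + g0 * np.sin(k * d)) * I2 + np.einsum('i,ijk->jk', Gamma, sig)

ks = np.linspace(-np.pi, np.pi, 201)           # symmetric grid: reversal gives k -> -k

def bands(g0, gam, h):
    return np.array([np.linalg.eigvalsh(H(k, g0, gam, h)) for k in ks])

# "Superconducting" class V(D_x): gamma_0 = 0, gamma || y, h || x
xi_sym = bands(0.0, (0, 0.5, 0), (0.3, 0, 0))
print(np.allclose(xi_sym, xi_sym[::-1]))       # True: xi(k) = xi(-k)

# Tilting h out of the xz plane (h_y != 0) destroys the symmetry
xi_asym = bands(0.0, (0, 0.5, 0), (0.3, 0.2, 0))
print(np.allclose(xi_asym, xi_asym[::-1]))     # False
```

The eigenvalues reproduce $\epsilon_0+\gamma_0\pm| \bm{\Gamma} |$; the bands are even in $k_x$ when $\gamma_0=0$ and $ \bm{\gamma} \cdot\bm{h}=0$, since only the cross term $2 \bm{\gamma} (k_x)\bm{h}$ in $| \bm{\Gamma} |^2$ is odd in $k_x$.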
The simplest form of the parameters of the Rashba model which is compatible with the reciprocal lattice periodicity is $\gamma_0(k_x)=\gamma_0\sin(k_xd)$ and $ \bm{\gamma} (k_x)= \bm{\gamma} \sin(k_xd)$. The values of $\gamma_0$, $ \bm{\gamma} $, and $\bm{h}$ depend on the magnetic class and can be obtained by adapting the symmetry properties of ${\cal L}_{\mu\nu}$ and $ \bm{{\cal M}} _{\mu\nu}$ discussed above, see Eqs. (\ref{LM-periodic},\ref{LM-Hermite},\ref{LM-TR},\ref{LM-sigma-x},\ref{LM-sigma-y},\ref{LM-odd-k-x}), to the one-band case. The results are shown in Table \ref{table: Rashba-vectors}. Note that some magnetic classes are indistinguishable at the level of the Rashba model, \textit{e.g.}, $\mathbf{D}_y\times{\cal K}$, $\mathbf{V}\times{\cal K}$, and $\mathbf{V}$. This is because the model (\ref{H-Rashba}) contains less information than the full noninteracting Hamiltonian (\ref{H-nonint}) and therefore cannot capture all details of the quasi-1D electron structure. Diagonalizing the Rashba Hamiltonian (\ref{H-Rashba}), we obtain two bands: \begin{equation} \label{Rashba-bands} \xi_\lambda(k_x)=\epsilon_0(k_x)+\gamma_0(k_x)+\lambda| \bm{\Gamma} (k_x)|, \end{equation} which are labelled by the index $\lambda=\pm$, sometimes called ``helicity'' (note that $\lambda$ is not the actual helicity, which is defined as the projection of the spin on the wavevector). The corresponding eigenstates can be chosen in the following form: \begin{equation} \label{Rashba-states} |k_x,+\rangle=\left(\begin{array}{c} \cos\frac{\alpha}{2}\\ e^{i\beta}\sin\frac{\alpha}{2} \end{array}\right),\quad |k_x,-\rangle=\left(\begin{array}{c} \sin\frac{\alpha}{2}\\ -e^{i\beta}\cos\frac{\alpha}{2} \end{array}\right), \end{equation} where we used the spherical-angle parametrization: $ \bm{\Gamma} =| \bm{\Gamma} |(\sin\alpha\cos\beta,\sin\alpha\sin\beta,\cos\alpha)$. All three types of the band structure, shown in Figs.
\ref{fig: bands-II}, \ref{fig: bands-I and III-asymm}, and \ref{fig: bands-I and III-symm}, can be obtained from Eq. (\ref{Rashba-bands}) by choosing the appropriate values of the parameters. For example, according to Table \ref{table: Rashba-vectors}, in all ``superconducting'' ferromagnetic classes we have $\gamma_0=0$, $ \bm{\gamma} \perp\bm{h}$, and $\bm{h}\perp \hat{\bm{y}} $. Therefore $\xi_\lambda(k_x)=\epsilon_0(k_x)+\lambda\sqrt{ \bm{\gamma} ^2(k_x)+\bm{h}^2}=\xi_\lambda(-k_x)$, in agreement with Eq. (\ref{xi-even}). In the theoretical proposal of Ref. \onlinecite{Lutchyn10}, which was used in experiment to detect the MFs in semiconducting nanowires with proximity-induced superconductivity,\cite{InSb-wire} the external magnetic field is directed parallel to the wire, \textit{i.e.}, $ \bm{H} \parallel \hat{\bm{x}} $. This restricts the possible magnetic symmetries to the following list: $\mathbb{G}_M=\mathbf{D}_x$, $\mathbf{D}_y(E)$, or $\mathbf{V}(\mathbf{D}_x)$. The actual magnetic class is determined by the details of the wire potential, the substrate, the gates, \textit{etc}. In Ref. \onlinecite{Lutchyn10}, the band structure was described by the Rashba model and it was assumed that the asymmetric SOC is perpendicular to the wire, namely, $ \bm{\gamma} \parallel \hat{\bm{y}} $. According to Table \ref{table: Rashba-vectors}, this is consistent with any one of the three magnetic classes listed above. \begin{table} \caption{Parameters of the 1D Rashba model, see Eq. (\ref{H-Rashba}). 
The last column indicates whether the Rashba bands have the symmetry (\ref{xi-even}).} \begin{tabular}{|c|c|c|c|c|} \hline $\mathbb{G}_M$ & $\gamma_0$ & $ \bm{\gamma} $ & $\bm{h}$ &\ SC\; \\ \hline $\mathbf{C}_1$ &\ $\neq 0$\ \ & $(\gamma_1,\gamma_2,\gamma_3)$ &\ $(h_1,h_2,h_3)$\ \ & N \\ \hline $\mathbf{D}_x$ & $0$ & $(0,\gamma_2,\gamma_3)$ &\ $(h_1,0,0)$\ \ & Y \\ \hline $\mathbf{D}_y$ & $\neq 0$ & $(0,\gamma_2,0)$ &\ $(0,h_2,0)$\ \ & N \\ \hline $\mathbf{C}_2$ & $0$ & $(\gamma_1,\gamma_2,0)$ & $(0,0,h_3)$ & Y \\ \hline $\mathbf{V}$ & $0$ & $(0,\gamma_2,0)$ &\ $(0,0,0)$\ \ & Y \\ \hline $\ \mathbf{C}_1\times{\cal K}$\ \ & $0$ & $\ (\gamma_1,\gamma_2,\gamma_3)\ $ & $(0,0,0)$ & Y \\ \hline $\mathbf{D}_x\times{\cal K}$ & $0$ & $(0,\gamma_2,\gamma_3)$ & $(0,0,0)$ & Y \\ \hline $\mathbf{D}_y\times{\cal K}$ & $0$ & $(0,\gamma_2,0)$ & $(0,0,0)$ & Y \\ \hline $\mathbf{C}_2\times{\cal K}$ & $0$ & $(\gamma_1,\gamma_2,0)$ & $(0,0,0)$ & Y \\ \hline $\mathbf{V}\times{\cal K}$ & $0$ & $(0,\gamma_2,0)$ & $(0,0,0)$ & Y \\ \hline $\mathbf{D}_x(E)$ & $\neq 0$ & $(0,\gamma_2,\gamma_3)$ &\ $(0,h_2,h_3)$\ \ & N \\ \hline $\mathbf{D}_y(E)$ & $0$ & $(0,\gamma_2,0)$ & $(h_1,0,h_3)$ & Y \\ \hline $\mathbf{C}_2(E)$ & $\neq 0$ & $(\gamma_1,\gamma_2,0)$ & $(h_1,h_2,0)$ & N \\ \hline $\mathbf{V}(\mathbf{D}_x)$ & $0$ & $(0,\gamma_2,0)$ & $(h_1,0,0)$ & Y \\ \hline $\mathbf{V}(\mathbf{D}_y)$ & $\neq 0$ & $(0,\gamma_2,0)$ & $(0,h_2,0)$ & N \\ \hline $\mathbf{V}(\mathbf{C}_2)$ & $0$ & $(0,\gamma_2,0)$ & $(0,0,h_3)$ & Y \\ \hline \end{tabular} \label{table: Rashba-vectors} \end{table} \section{Superconductivity in 1D nondegenerate bands} \label{sec: superconductivity} Suppose we have a normal state described by one of the ``superconducting'' magnetic classes, see Eqs. (\ref{classes-even-bands-II}), (\ref{classes-even-bands-I}), and (\ref{classes-even-bands-III}). 
We further assume that if TR symmetry is broken, it is due to a uniform field $\bm{H}$, thus eliminating the nonferromagnetic class $\mathbf{V}$ from the consideration. In a BCS-like mechanism of superconductivity, the Cooper pairs are formed by quasiparticles with opposite momenta and the pairing interaction is only effective near the Fermi surface. The ``Fermi surface'' is given by the set of $2N$ Fermi wavevectors $\pm k_{F,n}$, which are the roots of the equations $\xi_n(k_x)=0$. Here $N$ is the number of nondegenerate bands crossing the Fermi level. As shown in the previous section, for Type II classes $N$ is even (Fig. \ref{fig: bands-II}), while for Type I and Type III classes $N$ can be even or odd (Fig. \ref{fig: bands-I and III-symm}). The exact band states $|k_x,n\rangle$, which include all effects of the lattice potential, the SOC, and the TR symmetry-breaking field, provide a natural basis for constructing the pairing interaction. Our analysis does not rely on any specific pairing mechanism, the only assumption being that the band splitting is large enough to suppress the pairing of quasiparticles from different bands. The interaction Hamiltonian has the following form: $$ \hat H_{int}=\frac{1}{2L}\sum\limits_{k_xk_x'q}\sum_{nn'}V_{nn'}(k_x,k_x')\hat c^\dagger_{k_x+q,n}\hat c^\dagger_{-k_x,n}\hat c_{-k_x',n'}\hat c_{k_x'+q,n'}, $$ where $V_{nn}$ characterize the same-band pairing strength, $V_{nn'}$ with $n\neq n'$ describe the pair scattering between different bands, and $L$ is the length of the wire. Treating the pairing interaction in the mean-field approximation and assuming a uniform superconducting state, we obtain: \begin{equation} \label{H-MF-k-k} \hat H=\sum_{k_x,n}\xi_n(k_x)\hat c^\dagger_{k_x,n}\hat c_{k_x,n}+\frac{1}{2}\sum_{k_x,n}\bigl[\Delta_n(k_x)\hat c^\dagger_{k_x,n}\hat c^\dagger_{-k_x,n}+\mathrm{H.c.}\bigr]. 
\end{equation} The first term describes noninteracting quasiparticles, while the second term, with $\Delta_n(k_x)=-\Delta_n(-k_x)$, represents the intraband Cooper pairing between the states $|k_x,n\rangle$ and $|-k_x,n\rangle$. The Hamiltonian (\ref{H-MF-k-k}) can also be used to describe an ``extrinsic'' superconductivity in the wire, which is induced by proximity with a bulk superconductor. Unfortunately, a naive application of the above form of the mean-field pairing leads to a number of unpleasant consequences, which were recognized a long time ago, see Ref. \onlinecite{Blount85}. \subsection{General analysis} \label{sec: BdG-general} In order to understand the issues with the expression (\ref{H-MF-k-k}), we start with a general form of the mean-field fermionic pairing Hamiltonian in some arbitrary basis of single-particle states $|i\rangle$: \begin{equation} \label{H-MF-general} \hat H = \sum_{ij}\varepsilon_{ij}\hat c_i^\dagger\hat c_j+\frac{1}{2}\sum_{ij}\bigl(\Delta_{ij}\hat c_i^\dagger\hat c_j^\dagger+\mathrm{H.c.}\bigr) = \frac{1}{2}\sum_{ij}(\hat c_i^\dagger,\ \hat c_i){\cal H}_{ij} \left(\begin{array}{c} \hat c_j \\ \hat c_j^\dagger \end{array}\right)+\mathrm{const}. \end{equation} Here $\varepsilon_{ij}=\langle i|\hat\varepsilon|j\rangle$ are the matrix elements of the single-particle Hamiltonian $\hat\varepsilon$ and \begin{eqnarray} \label{BdG-matrix-wrong} {\cal H}_{ij}=\left(\begin{array}{cc} \varepsilon_{ij} & \Delta_{ij} \\ \Delta^*_{ji} & -\varepsilon_{ji} \end{array}\right). \end{eqnarray} Due to the anticommutation of the fermionic creation and annihilation operators, the off-diagonal terms satisfy the condition $\Delta_{ij}=-\Delta_{ji}$. It was pointed out by Blount in Ref. \onlinecite{Blount85} that $\Delta_{ij}$ cannot be interpreted as a matrix element of a first-quantization operator and therefore the matrix (\ref{BdG-matrix-wrong}) does not represent a proper BdG Hamiltonian.
To show this, let us make a change of the single-particle basis, $|i\rangle=\sum_p|p\rangle U_{p i}$, with the coefficients $U_{p i}=\langle p|i\rangle$ forming a unitary matrix. The creation and annihilation operators transform as follows: \begin{equation} \label{ci-cmu} \hat c_i^\dagger=\sum_p \hat c_p^\dagger U_{p i},\quad \hat c_i=\sum_p U^*_{p i} \hat c_p, \end{equation} and Eq. (\ref{H-MF-general}) takes the form \begin{eqnarray*} \hat H=\sum_{pq} \Bigl(\sum_{ij}U_{p i}\varepsilon_{ij}U^*_{q j}\Bigr) \hat c_p^\dagger\hat c_q +\frac{1}{2}\sum_{pq}\Bigl[ \Bigl(\sum_{ij}U_{p i}\Delta_{ij}U_{q j}\Bigr) \hat c_p^\dagger\hat c_q^\dagger+\mathrm{H.c.}\Bigr]. \end{eqnarray*} The coefficients in the first term transform as expected, \textit{i.e.}, as the matrix elements of $\hat\varepsilon$: $$ \sum_{ij}U_{p i}\varepsilon_{ij}U^*_{q j}=\sum_{ij}\langle p|i\rangle\langle i|\hat\varepsilon|j\rangle\langle j|q\rangle=\langle p|\hat\varepsilon|q\rangle. $$ However, if one tries, by analogy, to write $\Delta_{ij}=\langle i|\hat{\Delta}|j\rangle$, where $\hat\Delta$ is the ``gap operator'', then \begin{equation} \label{UDU} \sum_{ij}U_{p i}\Delta_{ij}U_{q j}=\sum_{ij}\langle p|i\rangle\langle i|\hat{\Delta}|j\rangle\langle q|j\rangle \neq \langle p|\hat{\Delta}|q\rangle. \end{equation} Therefore the operator $\hat{\Delta}$ does not actually exist. One can obtain a meaningful definition of the gap operator by modifying the pairing terms in the mean-field Hamiltonian as follows. Consider an antiunitary operation ${\cal A}$, which may or may not be a symmetry of the system in the normal state, \textit{i.e.}, may or may not commute with $\hat\varepsilon$. We assume that ${\cal A}^2$ is either $+1$ or $-1$, when acting on spin-1/2 wave functions. For instance, one can use as ${\cal A}$ the time reversal operation ${\cal K}$, as in Ref. \onlinecite{Blount85}, or the latter's combination with one of the point group elements, see below. 
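The obstruction identified above can be made concrete in a small numerical example (an illustrative sketch with random matrices, not from the paper): under a unitary change of basis, matrix elements of an operator transform as $U\hat X U^\dagger$, whereas the pairing amplitudes transform as $U\Delta U^\top$, and for a generic complex unitary $U$ these two results differ.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4

# A random antisymmetric pairing matrix, Delta_ij = -Delta_ji
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Delta = M - M.T

# A random unitary basis change U (from a QR decomposition)
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Operator matrix elements would transform as U X U^dagger; the pairing
# amplitudes transform as U Delta U^T, because they multiply two
# creation operators.  Generically the two disagree:
print(np.allclose(U @ Delta @ U.conj().T, U @ Delta @ U.T))   # False
```

The two transforms coincide only for real orthogonal $U$, so $\Delta_{ij}$ cannot be the matrix of any basis-independent first-quantization operator.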
For each single-particle state $|i\rangle$, we introduce its ${\cal A}$-transformed counterpart ${\cal A}|i\rangle$, as well as the corresponding creation and annihilation operators $$ \hat{\tilde c}_i^\dagger={\cal A}\hat c_i^\dagger{\cal A}^{-1},\qquad \hat{\tilde c}_i={\cal A}\hat c_i{\cal A}^{-1}. $$ Then, for each pair of states $|i\rangle$ and $|j\rangle$ we define the gap function $\tilde\Delta_{ij}$ as a measure of the pairing between $|i\rangle$ and ${\cal A}|j\rangle$ and write the mean-field Hamiltonian in the form \begin{equation} \label{H-MF-general-corrected} \hat H=\sum_{ij}\varepsilon_{ij}\hat c_i^\dagger\hat c_j+\frac{1}{2}\sum_{ij}\bigl(\tilde\Delta_{ij}\hat c_i^\dagger\hat{\tilde c}_j^\dagger+\mathrm{H.c.}\bigr). \end{equation} It is easy to see that the gap functions $\tilde\Delta_{ij}$ have the desired transformation properties and can therefore be interpreted as matrix elements of a first-quantization operator $\hat{\tilde\Delta}$. Indeed, due to the antilinearity of ${\cal A}$ we have $$ \hat{\tilde c}_i^\dagger=\sum_p \hat{\tilde c}_p^\dagger U^*_{p i},\quad \hat{\tilde c}_i=\sum_p U_{p i} \hat{\tilde c}_p, $$ instead of Eq. (\ref{ci-cmu}), and $$ \sum_{ij}U_{p i}\tilde\Delta_{ij}U^*_{q j}=\sum_{ij}\langle p|i\rangle\langle i|\hat{\tilde\Delta}|j\rangle\langle j|q\rangle=\langle p|\hat{\tilde\Delta}|q\rangle, $$ instead of Eq. (\ref{UDU}). The anticommutation of the fermionic operators imposes a certain constraint on the matrix $\tilde\Delta_{ij}$. Since the basis $|i\rangle$ is complete, one can write \begin{equation} \label{A-matrix} {\cal A}|j\rangle=\sum_i|i\rangle A_{ij},\quad A_{ij}=\langle i|{\cal A}|j\rangle. 
\end{equation} Using the antiunitary adjoint definition, $\langle i|{\cal A}|j\rangle=\langle j|{\cal A}^\dagger|i\rangle$, it is straightforward to show that the matrix formed by the coefficients $A_{ji}$ is unitary: $\hat{A}^{-1}=\hat{A}^\dagger$, and also \begin{equation} \label{A-inverse-matrix} \langle i|{\cal A}^{-1}|j\rangle=\langle i|{\cal A}^\dagger|j\rangle=A_{ji}. \end{equation} Next, we transform the pairing terms in Eq. (\ref{H-MF-general-corrected}) as follows: \begin{eqnarray*} \sum_{ij}\tilde\Delta_{ij}\hat c_i^\dagger\hat{\tilde c}_j^\dagger=\sum_{ijk}\tilde\Delta_{ij}A_{kj}\hat c_i^\dagger\hat c_k^\dagger=-\sum_{ijk}\tilde\Delta_{ij}A_{kj}\hat c_k^\dagger\hat c_i^\dagger =-\sum_{ijk}\tilde\Delta_{kj}A_{ij}\hat c_i^\dagger\hat c_k^\dagger. \end{eqnarray*} Therefore \begin{equation} \label{DA-AD} \hat{\tilde\Delta}\hat{A}^\top=-\hat{A}\hat{\tilde\Delta}^\top, \end{equation} or, in other words, the matrix $\hat{\tilde\Delta}\hat{A}^\top=\hat{\Delta}$ is antisymmetric, which was already evident from Eq. (\ref{H-MF-general}). Depending on whether the antiunitary operation ${\cal A}$ squares to $+1$ or $-1$, its matrix representation is symmetric or antisymmetric. We obtain from Eq. (\ref{A-matrix}): $\langle j|{\cal A}^2|i\rangle=\langle j|\sum_{kl} A^*_{ki}A_{lk}|l\rangle=(\hat{A}\hat{A}^*)_{ji}$. If ${\cal A}^2=-1$, then \begin{equation} \label{A2-1-general} \hat{A}^\top=-\hat{A},\qquad \hat{\tilde\Delta}^\top=\hat{A}^\dagger\hat{\tilde\Delta}\hat{A}, \end{equation} but if ${\cal A}^2=+1$, then \begin{equation} \label{A2+1-general} \hat{A}^\top=\hat{A},\qquad \hat{\tilde\Delta}^\top=-\hat{A}^\dagger\hat{\tilde\Delta}\hat{A}. \end{equation} Here we used the unitarity of $\hat{A}$ and the property (\ref{DA-AD}). 
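The constraints (\ref{A2-1-general}) and (\ref{A2+1-general}) can be checked with a small numerical example (a numpy sketch; the matrices are random and purely illustrative): starting from an antisymmetric pair amplitude $\hat\Delta$ and a unitary $\hat{A}$ of the appropriate symmetry, the gap function $\hat{\tilde\Delta}=\hat\Delta(\hat{A}^\top)^{-1}$ satisfies the stated relation in each case:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Real orthogonal antisymmetric A: block-diagonal copies of i*sigma_y,
# so that A A^* = -1 (the case A^2 = -1).
A = np.kron(np.eye(n // 2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
assert np.allclose(A @ A.conj(), -np.eye(n))

# Antisymmetric pair amplitude Delta and the gap function
# tilde-Delta = Delta (A^T)^{-1}, i.e., tilde-Delta A^T = Delta.
D = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
D = D - D.T
Dt = D @ np.linalg.inv(A.T)

# Eq. (A2-1-general): tilde-Delta^T = A^dagger tilde-Delta A.
assert np.allclose(Dt.T, A.conj().T @ Dt @ A)

# For A^2 = +1, take the trivial representative A = 1; then
# tilde-Delta = Delta and Eq. (A2+1-general) reduces to antisymmetry.
assert np.allclose(D.T, -D)
```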
One can now write the mean-field pairing Hamiltonian (\ref{H-MF-general-corrected}) in the form \begin{equation} \label{H-MF-tilde-rep} \hat H = \frac{1}{2}\sum_{ij}\left( \hat c_i^\dagger,\ \hat{\tilde c}_i \right)\tilde{\cal H}_{ij} \left(\begin{array}{c} \hat c_j \\ \hat{\tilde c}_j^\dagger \end{array}\right)+\mathrm{const}. \end{equation} The two-component fermionic operators that appear here are called the Gor'kov-Nambu operators, and the matrix connecting them is given by \begin{equation} \label{BdG-matrix-correct} \tilde{\cal H}_{ij}=\left(\begin{array}{cc} \varepsilon_{ij} & \tilde\Delta_{ij} \\ \tilde\Delta^*_{ji} & -(\hat A^\dagger\hat\varepsilon\hat A)_{ji} \end{array}\right). \end{equation} In contrast to Eq. (\ref{BdG-matrix-wrong}), the last expression can be represented as $\tilde{\cal H}_{ij}=\langle i|\hat H_{BdG}|j\rangle$, \textit{i.e.}, as the matrix element of a certain first-quantization operator, which is called the BdG Hamiltonian: \begin{equation} \label{H-BdG} \hat H_{BdG}=\left(\begin{array}{cc} \hat\varepsilon & \hat{\tilde\Delta} \\ \hat{\tilde\Delta}^\dagger & -\hat\varepsilon_{\cal A} \end{array}\right), \end{equation} where $\hat\varepsilon_{\cal A}={\cal A}^{-1}\hat\varepsilon{\cal A}$. In particular, for the bottom-right entry we have, using Eqs. (\ref{A-matrix}), (\ref{A-inverse-matrix}), and the Hermiticity of $\hat\varepsilon$: $$ \langle i|\hat\varepsilon_{\cal A}|j\rangle=\langle i|{\cal A}^{-1}\sum_{kl}\varepsilon_{lk}A_{kj}|l\rangle=\sum_{kl}\varepsilon^*_{lk}A^*_{kj}A_{li}=(\hat A^\dagger\hat\varepsilon\hat A)_{ji}. $$ The eigenstates and eigenvalues of the operator (\ref{H-BdG}) determine the wave functions and the energies of the Bogoliubov quasiparticle excitations in our superconductor. 
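For numerical work, the antiunitary ${\cal A}$ can be represented by its unitary matrix $\hat{A}$ followed by complex conjugation, acting on coefficient vectors as $v\mapsto\hat{A}v^*$. The following sketch (numpy; the Hermitian $\hat\varepsilon$ and unitary $\hat{A}$ are random and purely illustrative) verifies the expression for the bottom-right entry of Eq. (\ref{BdG-matrix-correct}):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Random Hermitian epsilon and random unitary A.
eps = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
eps = (eps + eps.conj().T) / 2
A, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# The antiunitary acts as v -> A v*; its inverse acts as v -> A^T v*.
calA = lambda v: A @ v.conj()
calA_inv = lambda v: A.T @ v.conj()

# Matrix of eps_A = A^{-1} eps A, built column by column on the basis vectors.
basis = np.eye(n)
eps_A = np.column_stack([calA_inv(eps @ calA(basis[:, j])) for j in range(n)])

# The matrix elements reproduce (A^dagger eps A)_{ji}, as stated in the text.
assert np.allclose(eps_A, (A.conj().T @ eps @ A).T)
```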
It is easy to see that the matrices (\ref{BdG-matrix-wrong}) and (\ref{BdG-matrix-correct}) are related by a unitary transformation: ${\cal H}={\cal U}{\tilde{\cal H}}{\cal U}^{-1}$, where $$ {\cal U}_{ij}=\left(\begin{array}{cc} \delta_{ij} & 0 \\ 0 & A^*_{ij} \end{array}\right). $$ Therefore their eigenvalues are the same, coming in particle-hole symmetric pairs. The actual calculation has to be done using the BdG Hamiltonian (\ref{H-BdG}), whose spectrum can be found by utilizing various tools of quantum mechanics, \textit{e.g.}, the semiclassical approximation. Another advantage of working with the gap functions $\tilde\Delta_{ij}$ is that, unlike $\Delta_{ij}$, they transform in a simple way under the symmetry group operations, see the next subsection, and therefore lend themselves nicely to the standard symmetry-based analysis.\cite{Book} \subsection{Gap symmetry in the band representation} \label{sec: gap-band-symm} The above arguments can be made more explicit by using the basis of the Bloch states, $|i\rangle=|k_x,n\rangle$, labelled by quasimomentum $k_x$ and the band index $n$. To ensure that the Cooper pairs have zero center-of-mass momentum, the antiunitary operation ${\cal A}$ should transform $k_x$ into $-k_x$, and it follows from Eq. (\ref{A-matrix}) that \begin{equation} \label{A-matrix-band} {\cal A}|k_x,n\rangle=\sum_m|-k_x,m\rangle A_{mn}(k_x),\quad A_{mn}(k_x)=\langle -k_x,m|{\cal A}|k_x,n\rangle, \end{equation} where the matrix $ \hat{A} (k_x)$ is unitary at each $k_x$. Suppose that ${\cal A}$ is a symmetry of the single-particle Hamiltonian, \textit{i.e.}, $[\hat\varepsilon,{\cal A}]=0$, therefore $\hat\varepsilon_{\cal A}=\hat\varepsilon$. Both $ \hat{A} (k_x)$ and the gap function matrix can be chosen in a band-diagonal form: \begin{equation} \label{t_n-def} A_{mn}(k_x)=t_n(k_x)\delta_{mn}, \end{equation} where $t_n$ are phase factors, and $\tilde\Delta_n(k_x)=t_n^*(k_x)\Delta_n(k_x)$. 
For a uniform superconducting state in a symmetric band, $\xi_n(k_x)=\xi_n(-k_x)$, the mean-field Hamiltonian (\ref{H-MF-general-corrected}) takes the following form: \begin{equation} \label{H-MF-correct} \hat H=\sum_{k_x,n}\xi_n(k_x)\hat c^\dagger_{k_xn}\hat c_{k_xn}+\frac{1}{2}\sum_{k_x,n}\bigl[\tilde\Delta_{n}(k_x)\hat c^\dagger_{k_xn}\hat{\tilde c}^\dagger_{k_xn}+\mathrm{H.c.}\bigr], \end{equation} where $\hat{\tilde c}^\dagger_{k_xn}={\cal A}\hat c^\dagger_{k_xn}{\cal A}^{-1}=t_n(k_x)\hat c^\dagger_{-k_x,n}$, and the BdG Hamiltonian (\ref{H-BdG}) can be written as \begin{equation} \label{H-BdG-general} \hat H_{BdG}(k_x)=\sum_{n} |k_x,n\rangle \left(\begin{array}{cc} \xi_n(k_x) & \tilde\Delta_n(k_x) \\ \tilde\Delta_n^*(k_x) & -\xi_n(k_x) \end{array}\right) \langle k_x,n|. \end{equation} In order to make further progress, in particular, to determine the parity of the gap functions, one has to identify which ${\cal A}$ should be used for each magnetic class. \subsubsection{Type II} \label{sec: Type II-gap} For Type II classes, the TR operation ${\cal K}$ is a symmetry element on its own, and it is natural to choose \begin{equation} \label{A-Type-II} {\cal A}={\cal K}. \end{equation} This is the standard convention in superconductivity theory, see Refs. \onlinecite{pseudospin} and \onlinecite{Blount85}. Since ${\cal A}^2={\cal K}^2=-1$, we obtain from Eq. (\ref{A2-1-general}) that $t_n(-k_x)=-t_n(k_x)$ and \begin{equation} \label{Delta-Type-II-parity} \tilde\Delta_n(k_x)=\tilde\Delta_n(-k_x), \end{equation} \textit{i.e.}, the gap functions in nondegenerate TR-invariant bands are even in momentum. For the Rashba model, see Eq. (\ref{Rashba-states}), the phase factors are given by $t_\lambda(k_x)=\lambda e^{-i\beta(k_x)}$. Now let us see how the gap functions transform under various symmetry operations. 
The action of an element $g$ of the point group $\mathbb{G}$ is given by $g\hat c^\dagger_{k_xn}g^{-1}=\sum_m\hat c^\dagger_{gk_x,m}D_{mn,k_x}(g)$, where $D$ is the unitary representation matrix of $g$ in the Bloch basis. Since $g$ is a symmetry of the single-particle Hamiltonian and $[g,{\cal K}]=0$, the representation matrix is band-diagonal and we have $g\hat c^\dagger_{k_xn}g^{-1}=e^{i\Phi_{k_xn}(g)}\hat c^\dagger_{gk_x,n}$ and $g\hat{\tilde c}^\dagger_{k_xn}g^{-1}=e^{-i\Phi_{k_xn}(g)}\hat{\tilde c}^\dagger_{gk_x,n}$. Under the TR operation, ${\cal K}\hat c^\dagger_{k_xn}{\cal K}^{-1}=\hat{\tilde c}^\dagger_{k_xn}$ and ${\cal K}\hat{\tilde c}^\dagger_{k_xn}{\cal K}^{-1}=-\hat c^\dagger_{k_xn}$. Then it follows from Eq. (\ref{H-MF-correct}) that the gap function in each band transforms as a complex scalar, \textit{i.e.}, $\tilde\Delta_{n}(k_x)\to\tilde\Delta_{n}(g^{-1}k_x)$ under the point group symmetries and $\tilde\Delta_{n}(k_x)\to\tilde\Delta^*_{n}(k_x)$ under time reversal. Taken together with Eq. (\ref{Delta-Type-II-parity}), this means that the gap functions are invariant under all point group operations, thus corresponding to a 1D ``$s$-wave'' pairing.\cite{Sam17} Neglecting the momentum dependence of the gap functions near the Fermi points, one can write \begin{equation} \label{Type-II-OP} \tilde\Delta_{n}(k_x)=\eta_n, \end{equation} where the complex quantities $\eta_n$ are the components of the superconducting order parameter. Under the antiunitary symmetry operation ${\cal A}={\cal K}$ they transform as $\eta_n\to\eta^*_n$. \subsubsection{Type III} \label{sec: Type III-gap} For all three Type III ``superconducting'' magnetic classes, see Eq. (\ref{classes-even-bands-III}), there is only one antiunitary symmetry operation which transforms $k_x$ into $-k_x$, namely, \begin{equation} \label{A-Type-III} {\cal A}={\cal K}\sigma_y. \end{equation} In contrast to Type II classes, now we have ${\cal A}^2=+1$. 
Indeed, $\sigma_y^2=(IC_{2y})^2=C_{2y}^2$, but a $2\pi$-rotation of a spin-1/2 wave function about any axis is equivalent to the multiplication by $-1$, therefore ${\cal K}^2\sigma_y^2=1$. Then it follows from Eq. (\ref{A2+1-general}) that $t_n(-k_x)=t_n(k_x)$ and \begin{equation} \label{Delta-Type-III-parity} \tilde\Delta_n(k_x)=-\tilde\Delta_n(-k_x), \end{equation} \textit{i.e.}, the gap functions for Type III classes are odd in momentum. For the Rashba model, the action of the antiunitary operation (\ref{A-Type-III}) on the eigenstates (\ref{Rashba-states}) is equivalent to complex conjugation. By inspecting the Type III Rashba model parameters, see Table \ref{table: Rashba-vectors}, it is easy to show that $\alpha(-k_x)=\alpha(k_x)$ and $\beta(-k_x)=-\beta(k_x)$, therefore the phase factors are given by $t_\lambda(k_x)=1$. While all elements of the unitary subgroup $\tilde{\mathbb{G}}$ of $\mathbb{G}_M$ act on the gap functions in the same way as in the Type II case, \textit{i.e.}, $\tilde\Delta_{n}(k_x)\to\tilde\Delta_{n}(g^{-1}k_x)$ under $g\in\tilde{\mathbb{G}}$, the antiunitary element acts differently: from Eq. (\ref{H-MF-correct}) we have $\tilde\Delta_{n}(k_x)\to-\tilde\Delta^*_{n}(k_x)$ under ${\cal A}={\cal K}\sigma_y$. According to general theory, the superconducting order parameter components can be introduced by expanding the gap functions in terms of ``building blocks'' -- the basis functions of the irreducible representations (IREPs). However, instead of the usual IREPs, in the present case one should use the corepresentations of the magnetic point group $\mathbb{G}_M$, which are derived from the IREPs of the unitary component $\tilde{\mathbb{G}}$ (Ref. \onlinecite{magn-groups}). It is straightforward to show that, since all group elements commute and the antiunitary elements all square to $+1$, the corepresentations of the magnetic classes (\ref{classes-even-bands-III}) are one-dimensional. 
Using the property (\ref{Delta-Type-III-parity}), we obtain: \begin{equation} \label{Type-III-OP} \tilde\Delta_{n}(k_x)=i\eta_n\phi_n(k_x),\quad \phi_n(k_x)=-\phi_n(-k_x), \end{equation} where $\eta_n$ are the order parameter components and $\phi_n(k_x)$ are real basis functions. The latter can be different in different bands and taken to be $\phi_n(k_x)\propto k_x$, which corresponds to a 1D ``$p$-wave'' pairing. Therefore a Type III superconductor can be regarded as a multiband continuum generalization of the Kitaev chain.\cite{Kit01} The imaginary factor in Eq. (\ref{Type-III-OP}) is introduced for convenience, to make sure that, as in the Type II case, the action of the antiunitary operation on the order parameter components is equivalent to complex conjugation, \textit{i.e.}, we have $\eta_n\to\eta^*_n$ under ${\cal A}={\cal K}\sigma_y$. \subsubsection{Type I} \label{sec: Type I-gap} There are only two Type I magnetic classes that are simultaneously ``ferromagnetic'' and ``superconducting'', $\mathbf{D}_x$ and $\mathbf{C}_2$, see Eqs. (\ref{FM-classes}) and (\ref{classes-even-bands-I}). Since Type I classes do not have any antiunitary elements, there is no obvious choice for ${\cal A}$. While the pairing can be defined using, for instance, the TR operation, the matrix representation of ${\cal A}={\cal K}$ and the gap function are no longer band-diagonal, nor does the gap function $\hat{\tilde\Delta}$ have a definite parity, see Eq. (\ref{A2-1-general}). Due to these complications, Type I systems have to be treated differently and will be studied elsewhere. We would like to note that, while it is the band representation of pairing that is best suited for the semiclassical analysis, see Sec. \ref{sec: ABS and topology}, it is possible to translate the results of this section into the more traditional spin representation. This can be done, though, only if an explicit model of the band structure is available. 
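The connection to the Kitaev chain noted above can be made quantitative by a minimal lattice diagonalization (a numpy sketch; the chain length and the ``sweet-spot'' parameters $t=\Delta$, $\mu=0$ are chosen purely for illustration). At the sweet spot, the BdG matrix of a finite open chain has exactly two Majorana zero modes, one per end, with all other levels gapped at $|E|=2t$:

```python
import numpy as np

# Open Kitaev chain at the sweet spot t = Delta, mu = 0 (illustrative values).
L, t, delta, mu = 8, 1.0, 1.0, 0.0

S = np.diag(np.ones(L - 1), k=1)        # right-shift matrix
h = -t * (S + S.T) - mu * np.eye(L)     # hopping block
d = delta * (S.T - S)                   # antisymmetric pairing block

H_bdg = np.block([[h, d], [d.conj().T, -h.T]])
E = np.sort(np.abs(np.linalg.eigvalsh(H_bdg)))

assert np.allclose(E[:2], 0.0, atol=1e-9)   # two exact end zero modes
assert np.allclose(E[2:], 2 * t)            # all other levels at |E| = 2t
```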
In Appendix \ref{app: gap spin}, we discuss the relation between the band and spin representations for a two-band Rashba wire. \subsection{Stable states} \label{sec: stable SC states} We have shown in Sec. \ref{sec: gap-band-symm} that the superconducting order parameter in both Type II and Type III quantum wires is given by a set of $\eta_1,...,\eta_N$, which transform into their complex conjugates under the action of the antiunitary symmetry operation ${\cal A}$. The actual values of the order parameter components can be obtained by minimizing the Ginzburg-Landau (GL) free energy. Assuming a uniform state, the most general second- and fourth-order terms in the free energy density have the following form: \begin{equation} \label{F_GL} F=\sum_{mn}\alpha_{mn}\eta_m^*\eta_n+\sum_{klmn}\beta_{klmn}\eta_k^*\eta_l^*\eta_m\eta_n. \end{equation} In a phenomenological theory, the values of the coefficients are only constrained by the requirement that $F$ is a real scalar, which is invariant under all symmetry operations, including ${\cal A}$. It is then straightforward to show that the coefficients are real and satisfy $\alpha_{mn}=\alpha_{nm}$, $\beta_{klmn}=\beta_{lkmn}=\beta_{klnm}=\beta_{mnkl}$. In the case of $N=1$, which is possible only in a Type III superconductor, one can make the single component of the order parameter real by a phase rotation, therefore the superconducting state is always invariant under the antiunitary operation (\ref{A-Type-III}). For $N>1$, in both Type II and Type III cases, Eq. (\ref{F_GL}) has the same form as the usual multiband generalization of the GL energy. The latter can have real minima, as well as intrinsically complex ones with the interband phase differences other than $0$ or $\pi$ (Ref. \onlinecite{TRSB-states}). Therefore in addition to the ${\cal A}$-invariant states characterized by real $\eta_n$, various antiunitary symmetry-breaking superconducting states are also phenomenologically possible. 
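As an illustration of an intrinsically complex minimum, consider a three-band toy model with equal moduli $|\eta_n|$ and equal ``repulsive'' Josephson-type phase couplings (a special case chosen for illustration, not the general functional (\ref{F_GL})). A brute-force minimization over the interband phases (numpy sketch) finds the frustrated minimum with phase differences $\pm 2\pi/3$, \textit{i.e.}, a state that breaks the antiunitary symmetry:

```python
import numpy as np

# Phase-only free energy F = g * sum_{m<n} cos(theta_m - theta_n) for three
# bands, with theta_1 = 0 fixed by a gauge choice and g > 0 (frustrated case).
g = 1.0
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
t2, t3 = np.meshgrid(theta, theta, indexing="ij")

F = g * (np.cos(t2) + np.cos(t3) + np.cos(t2 - t3))

i, j = np.unravel_index(np.argmin(F), F.shape)

# Minimum F = -1.5g at phase differences +/- 2*pi/3: the eta_n cannot all
# be made real, so the state is not invariant under the antiunitary symmetry.
assert np.isclose(F[i, j], -1.5 * g, atol=1e-9)
assert np.isclose(np.cos(theta[i]), -0.5, atol=1e-9)
assert np.isclose(np.cos(theta[j]), -0.5, atol=1e-9)
```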
\section{Andreev boundary modes} \label{sec: ABS and topology} The Bogoliubov quasiparticle energies in an infinite uniform superconductor are obtained by diagonalizing the BdG Hamiltonian (\ref{H-BdG-general}), with the following result: $E_n(k_x)=\pm\sqrt{\xi_n^2(k_x)+|\tilde\Delta_n(k_x)|^2}$. The gap functions are given by Eq. (\ref{Type-II-OP}) or Eq. (\ref{Type-III-OP}). While the bulk excitations are gapped, there might exist subgap states localized near various inhomogeneities, in particular, near the boundaries of the system. In this section, we calculate the spectrum of these states in a half-infinite ($x\geq 0$) clean wire using the semiclassical, or Andreev, approach.\cite{And64} To make analytical progress, we neglect self-consistency and assume that the order parameters $\eta_1,...,\eta_N$ do not depend on $x$. \subsection{Semiclassical analysis of the boundary modes} \label{sec: ABS} The quasiparticle wave function in the $n$th band is an electron-hole spinor, which can be represented in the semiclassical approximation as $e^{irk_{F,n}x}\psi_{n,r}(x)$, where $r=\pm$ characterizes the direction of the Fermi wavevector $rk_{F,n}$. The ``envelope'' function $\psi_{n,r}(x)$ varies slowly on the scale of the Fermi wavelength $k_{F,n}^{-1}$ and satisfies the Andreev equation: \begin{equation} \label{And-eq-gen} \left(\begin{array}{cc} -iv_{n,r}\dfrac{d}{dx} & \tilde\Delta_n(rk_{F,n}) \\ \tilde\Delta_n^*(rk_{F,n}) & iv_{n,r}\dfrac{d}{dx} \end{array}\right)\psi_{n,r}=E\psi_{n,r}. \end{equation} Here $v_{n,r}=(\partial\xi_n/\partial k_x)|_{k_x=rk_{F,n}}$ is the quasiparticle group velocity near the Fermi point $rk_{F,n}$ (note that $|v_{n,\pm}|=v_{F,n}$). Focusing on the subgap states with $|E|<|\tilde\Delta_n|$, the solution of Eq. 
(\ref{And-eq-gen}) has the form $\psi_{n,r}(x)=\phi(rk_{F,n})e^{-\Omega_nx/v_{F,n}}$, where \begin{equation} \label{Andreev amplitude} \phi(rk_{F,n})=C(rk_{F,n})\left(\begin{array}{c} \dfrac{\tilde\Delta_n(rk_{F,n})}{E-i\Omega_n\, \mathrm{sgn}\, v_{n,r}} \\ 1 \end{array}\right), \end{equation} $\Omega_n=\sqrt{|\tilde\Delta_n(rk_{F,n})|^2-E^2}$, and $C$ is a coefficient. The semiclassical approximation breaks down near the boundary due to a rapid variation of the confining potential, which causes elastic transitions between the states corresponding to different Fermi wavevectors. This can be described by an effective boundary condition for the Andreev wave functions, which is obtained as follows. Depending on the sign of the group velocity, the Fermi wavevectors are classified as either incident, for which $v_{n,r}<0$, or reflected, for which $v_{n,r}>0$. We denote the former $k^{\mathrm{in}}_{1},...,k^{\mathrm{in}}_N$ and the latter $k^{\mathrm{out}}_{1},...,k^{\mathrm{out}}_N$, with $k^{\mathrm{out}}_{n}=-k^{\mathrm{in}}_{n}$. From Eq. (\ref{Andreev amplitude}), the Andreev wave functions at $x=0$ corresponding to the incident and reflected wavevectors can be written as \begin{equation} \label{phi-in-out} \phi(k^{\mathrm{in}}_n)=C(k^{\mathrm{in}}_n)\left(\begin{array}{c} \alpha^{\mathrm{in}}_n \\ 1 \end{array}\right),\quad \phi(k^{\mathrm{out}}_n)=C(k^{\mathrm{out}}_n)\left(\begin{array}{c} \alpha^{\mathrm{out}}_n \\ 1 \end{array}\right), \end{equation} where $$ \alpha^{\mathrm{in}}_n=\frac{\tilde\Delta_n(k^{\mathrm{in}}_{n})}{E+i\sqrt{|\tilde\Delta_n(k^{\mathrm{in}}_{n})|^2-E^2}},\quad \alpha^{\mathrm{out}}_n=\frac{\tilde\Delta_n(k^{\mathrm{out}}_{n})}{E-i\sqrt{|\tilde\Delta_n(k^{\mathrm{out}}_{n})|^2-E^2}}. $$ According to Ref. 
\onlinecite{Shel-bc}, the boundary conditions have the form of a linear relation between the ``in'' and ``out'' Andreev amplitudes: \begin{equation} \label{Shelankov-bc} \phi(k^{\mathrm{out}}_n)=\sum_{m=1}^N S_{nm}\phi(k^{\mathrm{in}}_{m}), \end{equation} with the coefficients $S_{nm}$ forming a unitary $N\times N$ matrix. The $S$ matrix is an electron-hole scalar, determined by the details of the boundary scattering at the Fermi level in the normal state. If a manageable microscopic description of the electron band structure is available, then the $S$ matrix can be calculated analytically, see Ref. \onlinecite{S-matrix-Rashba}. In general, it should be regarded as a phenomenological input. Inserting the expressions (\ref{phi-in-out}) into the boundary conditions (\ref{Shelankov-bc}), we obtain a system of $2N$ linear equations for $C(k^{\mathrm{in}}_n)$ and $C(k^{\mathrm{out}}_n)$. It has a nontrivial solution if \begin{equation} \label{ABS-energy-equation} \det\hat W(E)=0, \end{equation} where $$ \hat W(E)=\hat S-\hat M_{\mathrm{out}}^\dagger(E)\hat S\hat M_{\mathrm{in}}(E), $$ with $\hat M_{\mathrm{in}}=\,\mathrm{diag}\,(\alpha^{\mathrm{in}}_1,...,\alpha^{\mathrm{in}}_N)$ and $\hat M_{\mathrm{out}}=\,\mathrm{diag}\,(\alpha^{\mathrm{out}}_1,...,\alpha^{\mathrm{out}}_N)$. Below we focus on calculating the number of the zero-energy solutions of Eq. (\ref{ABS-energy-equation}) in an ${\cal A}$-invariant superconducting state. \subsection{Counting the zero modes} \label{sec: zero modes} The number of the ABS zero modes localized near $x=0$ is given by \begin{equation} \label{number-zero-modes} {\cal N}_0=\dim\ker\hat W(0). \end{equation} Using Eqs. 
(\ref{Delta-Type-II-parity}) and (\ref{Delta-Type-III-parity}), we find $\hat W(0)=\hat S\pm \hat{\tilde P}^\dagger\hat S\hat{\tilde P}$, where the upper (lower) sign corresponds to superconductors with a Type II (Type III) symmetry and $$ \hat{\tilde P}=\,\mathrm{diag}\,\left[\frac{\tilde\Delta_1(k^{\mathrm{out}}_{1})}{|\tilde\Delta_1(k^{\mathrm{out}}_{1})|},...,\frac{\tilde\Delta_N(k^{\mathrm{out}}_{N})}{|\tilde\Delta_N(k^{\mathrm{out}}_{N})|}\right]. $$ In terms of the order parameter components, as defined by Eqs. (\ref{Type-II-OP}) and (\ref{Type-III-OP}), we obtain: \begin{equation} \label{W-Type-II-III} \hat W(0)=\hat S\pm \hat P^\dagger\hat S\hat P, \end{equation} where $$ \hat P=\,\mathrm{diag}\,\left(\frac{\eta_1}{|\eta_1|},...,\frac{\eta_N}{|\eta_N|}\right). $$ In the Type III case, we neglected the momentum dependence of the ``$p$-wave'' basis functions near the Fermi points and used the convention $\phi_n(k_x)=\, \mathrm{sgn}\, (k^{\mathrm{out}}_{n})\, \mathrm{sgn}\, (k_x)$ in Eq. (\ref{Type-III-OP}). Thus the sign of $\eta_n$ is defined by the value of the gap function $\tilde\Delta_n$ at the reflected wavevector $k_x=k^{\mathrm{out}}_{n}$. We consider only the superconducting states in which the antiunitary symmetry ${\cal A}$ is not broken. For Type II classes, these are the usual TR invariant states, in which the order parameter components $\eta_n$ can be made real by a gauge transformation. For Type III classes, the states invariant under ${\cal A}={\cal K}\sigma_y$ are also characterized by a real order parameter. We label the bands in such a way that the first $N_+$ bands have positive $\eta_n$, while the remaining $N_-=N-N_+$ bands have negative $\eta_n$, so that $\hat P=\,\mathrm{diag}\,(\bm{1}_+,-\bm{1}_-)$. Here and below, vectors subscripted with ``$+$'' (``$-$'') have $N_+$ ($N_-$) components. 
Representing the $S$ matrix in the block form as \begin{equation} \label{S-R} \hat S=\left(\begin{array}{cc} \hat R_{++} & \hat R_{+-} \\ \hat R_{-+} & \hat R_{--} \\ \end{array}\right), \end{equation} where $\hat R_{ss'}$ is an $N_{s}\times N_{s'}$ matrix ($s,s'=\pm$), Eq. (\ref{W-Type-II-III}) takes the following form: \begin{equation} \label{W-Type-II-R} \hat W(0)=2\left(\begin{array}{cc} \hat R_{++} & 0 \\ 0 & \hat R_{--} \\ \end{array}\right)\qquad (\mathrm{Type\ II}), \end{equation} or \begin{equation} \label{W-Type-III-R} \hat W(0)=2\left(\begin{array}{cc} 0 & \hat R_{+-} \\ \hat R_{-+} & 0 \\ \end{array}\right)\qquad (\mathrm{Type\ III}). \end{equation} Further steps require a more detailed knowledge of the $S$ matrix. The latter is a normal-state property and has to satisfy, in addition to unitarity, certain constraints imposed by the antiunitary symmetry. \subsubsection{Type II} \label{sec: ZEABS Type II} As shown in Sec. \ref{sec: bands}, in Type II systems $N$ is even and, therefore, $N_+$ and $N_-$ have the same parity. We obtain from Eqs. (\ref{number-zero-modes}) and (\ref{W-Type-II-R}) that $$ {\cal N}_0=\dim\ker\hat R_{++}+\dim\ker\hat R_{--}. $$ According to Appendix \ref{app: S-matrix-symmetry}, in the Type II case the scattering matrix can be made antisymmetric, therefore both $\hat R_{++}$ and $\hat R_{--}$ are antisymmetric. If $N_+$ and $N_-$ are even, then there is no reason for $\hat R_{++}$ and $\hat R_{--}$ to have zero eigenvalues. However, if $N_+$ and $N_-$ are odd, then both $\hat R_{++}$ and $\hat R_{--}$ are singular, \textit{i.e.}, have at least one zero eigenvalue each. Thus we arrive at the following result: \begin{equation} \label{N0-Type II} {\cal N}_0=\left\{\begin{array}{ll} 2,\quad & \mathrm{if}\ N_+,N_-=\mathrm{odd},\\ 0,\quad & \mathrm{if}\ N_+,N_-=\mathrm{even}, \end{array}\right. \end{equation} which means that the zero-energy ABS can only exist if there is an odd number of bands with the same sign of the gap. 
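This counting is easily reproduced numerically (a numpy sketch; the antisymmetric unitary $S$ matrix is generated at random and is purely illustrative, since the true $S$ matrix is a normal-state property of the boundary):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_antisym_orthogonal(n):
    """Random real antisymmetric orthogonal matrix (n must be even)."""
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    J = np.kron(np.eye(n // 2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    M = Q @ J @ Q.T
    return (M - M.T) / 2        # enforce exact antisymmetry

def n_zero_modes_type_II(S, n_plus):
    """N_0 = dim ker R_++ + dim ker R_-- for W(0) = 2 diag(R_++, R_--)."""
    blocks = (S[:n_plus, :n_plus], S[n_plus:, n_plus:])
    return sum(b.shape[0] - np.linalg.matrix_rank(b, tol=1e-8)
               for b in blocks)

S = random_antisym_orthogonal(4)
assert n_zero_modes_type_II(S, n_plus=1) == 2   # N_+ = 1, N_- = 3: both odd
assert n_zero_modes_type_II(S, n_plus=2) == 0   # N_+ = N_- = 2: both even
```

The odd-dimensional diagonal blocks are singular because an odd-dimensional antisymmetric matrix always has at least one zero eigenvalue.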
These zero modes form one Kramers pair at each end of the wire. The bulk-boundary correspondence principle states that the number of the boundary zero modes is related to a topological invariant in the bulk.\cite{Volovik-book,top-SC} Since the antiunitary symmetry in the Type II case is the usual time reversal, which squares to $-1$, our system belongs to the symmetry class DIII and can be characterized in 1D by a $\mathbb{Z}_2$ invariant.\cite{tenfold-way} According to Ref. \onlinecite{QHZ10}, this invariant is given by $\prod_{n=1}^N\, \mathrm{sgn}\, (\eta_n)=(-1)^{N_-}=(-1)^{N_+}$, therefore the states with $N_+$ and $N_-$ odd are $\mathbb{Z}_2$-nontrivial and should have a pair of the ABS zero modes, in agreement with Eq. (\ref{N0-Type II}). If TR symmetry is broken, either due to a magnetic boundary scattering or intrinsically in the superconducting state, then the Kramers pairs of the zero modes are split. The same effect is produced by TR symmetry-breaking fluctuations, even if the mean-field state is TR invariant (Ref. \onlinecite{ST17}). \subsubsection{Type III} \label{sec: ZEABS Type III} In Type III systems, there can be any number of bands crossing the Fermi level. As shown in Appendix \ref{app: S-matrix-symmetry}, the scattering matrix is now symmetric, therefore $\hat R_{-+}=\hat R_{+-}^\top$. The number of the zero modes of Eq. (\ref{W-Type-III-R}) can be calculated using the rank-nullity theorem. Namely, we write the zero-mode eigenvectors of $\hat W(0)$ in the form $(\bm{v}^\top_+,\bm{v}^\top_-)$, where $\bm{v}_+$ and $\bm{v}_-$ satisfy the equations \begin{equation} \label{zero-modes-eqs-Type-III} \hat R_{+-}\bm{v}_-=\bm{0}_+,\quad \hat R_{-+}\bm{v}_+=\bm{0}_-. \end{equation} If $N_+>N_-$, then $\bm{v}_-=\bm{0}_-$ and the second equation (\ref{zero-modes-eqs-Type-III}) has $N_+-N_-$ linearly independent solutions. If $N_+<N_-$, then $\bm{v}_+=\bm{0}_+$ and the first equation (\ref{zero-modes-eqs-Type-III}) has $N_--N_+$ linearly independent solutions. 
Finally, if $N_+=N_-$, then the equations (\ref{zero-modes-eqs-Type-III}) generically have no nontrivial solutions. Collecting everything together, we obtain: \begin{equation} \label{N0-Type III} {\cal N}_0=|N_+-N_-|, \end{equation} \textit{i.e.}, there can be any integer number of the zero-energy ABS. The result (\ref{N0-Type III}) can also be understood using the MF language. Consider $N$ Kitaev chains in the nontrivial phase,\cite{Kit01} which is the lattice version of our quasi-1D $N$-band Type III superconductor. The chains are labelled by the index $n$, while the lattice sites are labelled by the discrete position $1\leq x\leq L$ (we assume a half-infinite geometry, with $L\to\infty$). Let us first neglect the boundary scattering between the chains, making them completely decoupled. Each chain has one zero-energy ABS located at $x=1$, corresponding to an unpaired, or ``dangling'', MF. Next, we turn on the interchain boundary scattering, which will hybridize the MFs, and ask how many of the $N$ MFs will survive. The mixing of the dangling MFs can be described phenomenologically by a quadratic Hamiltonian of the form \begin{equation} \label{MF-mixing} {\hat H}_{mix}=i\sum_{m,n=1}^N T_{mn}\hat{\gamma}_m\hat{\gamma}_n, \end{equation} where $\hat{\gamma}_n$ are the Majorana operators, satisfying the anticommutation relations $\{\hat{\gamma}_m,\hat{\gamma}_n\}=2\delta_{mn}$, and $T_{mn}$'s form a real antisymmetric matrix. Additionally, the above Hamiltonian has to be invariant under the antiunitary operation (\ref{A-Type-III}). According to Appendix \ref{app: MF transformation}, the transformation rules for the Majorana operators read \begin{equation} \label{gamma-transform-A} {\cal A}\hat{\gamma}_n{\cal A}^{-1}=\, \mathrm{sgn}\, (\eta_n)\hat{\gamma}_n. \end{equation} Using this in Eq. (\ref{MF-mixing}), we obtain the following antiunitary symmetry-imposed constraint on the mixing matrix: $T_{mn}=-\, \mathrm{sgn}\, (\eta_m\eta_n)T_{mn}$. 
Therefore an ${\cal A}$-invariant boundary scattering can hybridize only MFs from the bands with opposite gap signs, \textit{i.e.}, \begin{equation} \label{T-matrix} \hat T=\left(\begin{array}{cc} 0 & \hat T_{+-} \\ \hat T_{-+} & 0 \\ \end{array}\right), \end{equation} where $\hat T_{ss'}$ is an $N_{s}\times N_{s'}$ matrix and $\hat T_{-+}=-\hat T_{+-}^\top$. Next, we observe that any real antisymmetric matrix can be brought by an orthogonal transformation $\hat{Q}$ to a canonical block-diagonal form: $$ \hat{Q}^{-1}\hat T\hat{Q}=\,\mathrm{diag}\,\left\{ 0,...,0,\left(\begin{array}{cc} 0 & \lambda_1 \\ -\lambda_1 & 0 \\ \end{array}\right), \left(\begin{array}{cc} 0 & \lambda_2 \\ -\lambda_2 & 0 \\ \end{array}\right),...\right\}, $$ with real $\lambda$'s, see Ref. \onlinecite{Gantmakher-book}. Substituting this into Eq. (\ref{MF-mixing}), we see that the number of the zero-energy MFs which are not affected by the interchain boundary scattering is equal to the number of zero eigenvalues of $\hat T$. While a generic antisymmetric matrix has at least one zero eigenvalue only if it is odd-dimensional, in our case there are more possibilities due to the special structure of the mixing matrix, see Eq. (\ref{T-matrix}). The analysis is similar to the calculation of the number of zero modes of $\hat W(0)$ given earlier in this subsection. The zero-mode eigenvectors of $\hat T$ have the form $(\bm{f}^\top_+,\bm{f}^\top_-)$, where $\bm{f}_+$ and $\bm{f}_-$ satisfy the equations $\hat T_{+-}\bm{f}_-=\bm{0}_+$ and $\hat T_{-+}\bm{f}_+=\bm{0}_-$. If $N_+>N_-$, then $\bm{f}_-=\bm{0}_-$, while the second equation has $N_+-N_-$ linearly independent solutions. If $N_+<N_-$, then $\bm{f}_+=\bm{0}_+$, while the first equation has $N_--N_+$ linearly independent solutions. If $N_+=N_-$, then the mixing matrix is even-dimensional and has no zero eigenvalues. Thus the number of the zero-energy MFs which are immune against the symmetry-preserving boundary scattering is equal to $|N_+-N_-|$, in agreement with Eq. 
(\ref{N0-Type III}). The conclusion that a Type III superconducting wire can have any integer number of the zero-energy MFs is also supported by a topological argument. The antiunitary symmetry ${\cal A}={\cal K}\sigma_y$ squares to $+1$, placing a Type III superconductor into the symmetry class BDI, which is characterized in 1D by a $\mathbb{Z}$ invariant.\cite{tenfold-way,FHAB11} Our expression (\ref{N0-Type III}) gives an explicit form of this invariant. We would like to note that the possibility of a $\mathbb{Z}$ topological invariant in certain models of superconducting nanowires has been previously argued in Refs. \onlinecite{TS12} and \onlinecite{ST13}. The $\mathbb{Z}_2$ invariant proposed in Ref. \onlinecite{Kit01}, see also Refs. \onlinecite{PL10} and \onlinecite{LSDS11}, is more ``coarse-grained'' in the sense that it does not take into account the fact that the antiunitary symmetry forbids certain couplings between the MFs, see Eq. (\ref{T-matrix}). \section{Summary} \label{sec: summary} We have developed a theory of normal-state and superconducting properties of quasi-1D quantum wires, in which the full 3D inversion symmetry is broken by a substrate. The effects of the electron-lattice SO coupling and TR symmetry breaking have been studied in a model-independent way, by using general symmetry arguments. The symmetry of a quasi-1D noncentrosymmetric crystal is described by one of the sixteen magnetic point groups, or magnetic classes, with only ten of them consistent with a uniform magnetic field (or a uniform magnetization). Superconductivity of the BCS type can potentially exist in eleven magnetic classes. The magnetic classes fall into one of three types, depending on the way the TR operation ${\cal K}$ enters the symmetry elements. The electronic band structure, in particular, the number and location of the Fermi points, is qualitatively different for different types of magnetic classes. 
As a by-product, we have shown how a generalized Rashba model can be derived in the quasi-1D case. We have emphasized the crucial role of antiunitary symmetries in ensuring that the superconducting gap function has the right transformation properties. The standard approach of superconductivity theory, in which the Cooper pairs are built from time-reversed states, see Refs. \onlinecite{pseudospin} and \onlinecite{Blount85}, has been extended to the case of an arbitrary magnetic symmetry. We have identified the appropriate antiunitary symmetry operations ${\cal A}$ for Type II and Type III classes. Regardless of the pairing mechanism, superconductivity in Type II systems corresponds to a quasi-1D multiband version of a conventional $s$-wave pairing. In contrast, Type III systems exhibit a ``$p$-wave'' pairing and thus represent multiband generalizations of the Kitaev model. We have studied the spectrum of the Andreev bound states near the end of a superconducting wire using the semiclassical approach. The boundary conditions for the Andreev equations are formulated phenomenologically, in terms of the normal-state boundary scattering matrix. In a superconducting state invariant under the antiunitary symmetry ${\cal A}$, we have calculated the number of the zero-energy ABS which are protected against symmetry-preserving perturbations. In the Type II case, we have ${\cal A}={\cal K}$, therefore ${\cal A}^2=-1$ (class DIII, according to the ``tenfold'' classification of Ref. \onlinecite{tenfold-way}) and the number of zero modes is given by the $\mathbb{Z}_2$ invariant (\ref{N0-Type II}). In the Type III case, we have ${\cal A}={\cal K}\sigma_y$, therefore ${\cal A}^2=+1$ (class BDI), and the number of zero modes is given by the $\mathbb{Z}$ invariant (\ref{N0-Type III}). Our work might be extended in several directions. One obvious omission is Type I case, in which the lack of an antiunitary symmetry in the magnetic class makes analysis more complicated. 
In particular, the classification of the gap functions according to their parity is no longer possible. Also, our counting of the zero-energy ABS is expected to be significantly modified by the interactions beyond the mean-field treatment of the pairing. Such interactions would lead to higher-order mixing terms in the MF Hamiltonian, thus reducing the number of protected zero modes.\cite{top-class-interactions} \acknowledgments This work was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction} Permanent Magnet Synchronous Machines (PMSMs) are nowadays widely adopted in several fields, ranging from vehicle propulsion to industrial motion applications. In many contexts, rotor position and speed information is required to achieve accurate regulation, yet the presence of mechanical sensors may pose reliability and economic issues. Furthermore, such sensors often prove impractical because of space and weight requirements, e.g. in small/medium-size electrically-powered Unmanned Aerial Vehicles (UAVs). In this respect, the so-called sensorless control techniques aim to replace the use of mechanical sensors with suitable reconstruction algorithms, and have been the subject of extensive research efforts. Several design strategies from nonlinear control theory have been applied to sensorless control. To name a few, without intending to be exhaustive, we recall Extended Kalman Filters \cite{hilairet2009speed}, Sliding Mode \cite{lee2013design} and High Gain and Adaptive strategies \cite{montanari2006speed, marino2008an, khalil2009speed}. Recently, some works have been dedicated to stator resistance estimation \cite{verrelli2018persistency} and Interior Permanent Magnet Synchronous Machines (IPMSMs) \cite{ortega2019globally}.\\ In the field of sensorless control and observation, the capability of dealing with highly variable speed, with little to no a priori knowledge of the mechanical dynamics, becomes crucial to achieve high-end, high-precision algorithms when motors are coupled with nonlinear time-varying loads. This is the case, e.g., for the electric propulsion of UAVs or Hybrid Electric Vehicles (HEVs), where the environmental conditions heavily affect the external torque. In \cite{bobtsov2015robust}, an observer performing voltage and current integration is used to reconstruct position and rotor flux amplitude, independently of the mechanical model.
In \cite{tilli2019towards}, a simple sixth-order observer with unknown mechanical model is designed employing a unit circle representation for the rotor angular configuration, and in \cite{bosso2020robust} such design is extended to include resistance estimation, combined with appropriate signal injection techniques to ensure observability. Indeed, the unit circle (indicated with $\mathbb{S}^1$) is a compact abelian Lie group, and Lyapunov-based tools can be used to derive a simple stability analysis. Specifically, the algorithm in \cite{tilli2019towards} exploits a high-gain observer to reconstruct the back-Electromotive Force (back-EMF), which is then used to set up an adaptive attitude estimator on $\mathbb{S}^1$. The resulting reduced-order dynamics, corresponding to the attitude observer reconstruction error, is shown to evolve on the cylinder $\mathbb{S}^1\times\mathbb{R}$.\\ The use of a compact Lie group representation, however, introduces some relevant challenges. In fact, it is known that when a dynamical system evolves on a manifold that is not diffeomorphic to any Euclidean space, it is impossible for a continuous vector field to globally asymptotically stabilize an equilibrium point \cite{mayhew2011quaternion}. This phenomenon clearly arises in \cite{tilli2019towards} since two isolated hyperbolic equilibria are present, a stable node/focus and a saddle point: this restricts the basin of attraction of the reduced-order dynamics to $(\mathbb{S}^1\times\mathbb{R})\backslash\mathcal{R}_\mathcal{U}$, where $\mathcal{R}_\mathcal{U}$ is a curve passing through the saddle equilibrium. This property directly affects the full-order observer and therefore only regional stability can be established. We mention the attitude observers on $SO(3)$ studied in \cite{mahony2008nonlinear}, which display a similar behavior in a higher dimensional context.
Notably, an attempt to break this kind of topological constraint with discontinuous and memoryless feedback laws leads to non-robust solutions, causing chattering behaviors in practice. Indeed, it is known that a dynamic hybrid feedback law must be employed in order to achieve global \textit{and} robust results \cite{sontag1999clocks, mayhew2011quaternion}.\\ In this work, we introduce a hybrid modification to the position, speed and flux observer of \cite{tilli2019towards} with the aim of establishing semi-global instead of regional stability. An alternative to the continuous-time strategy is possible because both components of the back-EMF vector are available as indirect measurements, thus making it possible to detect when the angular estimation error settles to a wrong configuration. Exploiting this fact, we introduce a simple strategy based on a clock to periodically reset the position reconstruction. The rotor speed is restricted to have a constant (unknown) sign and to be bounded in norm from above and below by positive scalars. These conditions are compatible with many applications, including the control of propeller motors, where the sign of speed is usually not reversed. The properties of the observer are highlighted by means of two-time-scale arguments (see e.g. \cite{teel2003unified, sanfelice2011singular}), and numerically compared to the structure in \cite{tilli2019towards}. In particular, we underline the interesting feature that if the same tuning gains are adopted for the observer flows, the new algorithm displays a consistently faster transient response. Finally, inspired by these enhanced convergence properties, we also propose an augmentation based on a discrete-time identifier to further boost the estimation performance, clearly at the expense of increased computational complexity.\\ The structure of the paper is as follows.
After a brief introduction to the mathematical background in Section \ref{sec:notation}, we formally state the observer problem in Section \ref{sec:problem}. The observer structure and its stability properties are presented in Section \ref{sec:main_result}, while in Section \ref{sec:conclusions} some concluding remarks and future research directions are outlined. \section{Notation} \label{sec:notation} \par We use $(\cdot)^T$ to denote the transpose of real-valued matrices. For compactness of notation we often indicate with $(v, w)$, for any pair of column vectors $v$, $w$, the concatenated vector $(v^T, w^T)^T$. In case of non-differentiable signals, the upper right Dini derivative, indicated with $D^+$, is employed as generalized derivative. The time argument of signals will be omitted when clear from the context. \subsection{Notation on the Unit Circle ($\mathbb{S}^1$)} We employ the unit circle $\mathbb{S}^1$ to represent reference frames involved in the manipulation of PMSM equations, as in \cite{tilli2019towards}. In particular, $\mathbb{S}^1$ is a compact abelian Lie group, with the planar 2-D rotation employed as group operation. An integrator on $\mathbb{S}^1$ is given by \begin{equation*} \dot{\zeta} = u(t) \underbrace{ \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}}_{\mathcal{J}} \zeta , \qquad \zeta \in \mathbb{S}^1, \end{equation*} with $u(t) \in \mathbb{R}$. Any angle $\vartheta \in \mathbb{R}$ can be mapped into an element of the unit circle given by $(\cos(\vartheta) \; \sin(\vartheta))^T \in \mathbb{S}^1$. The identity element in $\mathbb{S}^1$ is $(1 \; 0)^T$. Finally, to any $\zeta = (c\; s)^T \in \mathbb{S}^1$ we can associate a rotation matrix $\mathcal{C}[\zeta] = {\tiny\begin{pmatrix}c & -s\\ s & c\end{pmatrix}}$, which is used for group multiplication: for any $\zeta_1, \zeta_2 \in \mathbb{S}^1$, the product is given by $\zeta_1 \cdot \zeta_2 = \mathcal{C}[\zeta_1]\zeta_2 = \mathcal{C}[\zeta_2]\zeta_1$. 
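These operations are easy to verify numerically; the following minimal sketch (plain NumPy, with helper names chosen here only for illustration) checks that the group multiplication corresponds to angle addition, is commutative, and that $(1 \; 0)^T$ acts as the identity:

```python
import numpy as np

def embed(theta):
    # Map an angle to the corresponding element of the unit circle S^1
    return np.array([np.cos(theta), np.sin(theta)])

def C(zeta):
    # Rotation matrix associated with zeta = (c, s)^T
    c, s = zeta
    return np.array([[c, -s], [s, c]])

z1, z2 = embed(0.3), embed(1.1)
# Group multiplication is abelian and adds the underlying angles
assert np.allclose(C(z1) @ z2, embed(0.3 + 1.1))
assert np.allclose(C(z1) @ z2, C(z2) @ z1)
# (1, 0)^T is the identity element
assert np.allclose(C(embed(0.0)) @ z1, z1)
```

The same map $\mathcal{C}[\cdot]$ is what implements all the frame changes used in the sequel.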
\subsection{Hybrid Dynamical Systems} In this paper we adopt the formalism of hybrid dynamical systems as in \cite{goebel2012hybrid}. In particular, a hybrid system $\mathcal{H}$ can be described as \begin{equation}\nonumber \mathcal{H} : \left\{ \begin{split} &\dot{x} &\in F(x,u) & \qquad (x, u) \in C\\ &x^+ &\in G(x,u) & \qquad (x, u) \in D \end{split} \right. \end{equation} where $x$ is the state, $u$ is the input, $C$ is the flow set, $F$ is the flow map, $D$ is the jump set, and $G$ is the jump map. The state of the hybrid system can either flow according to the differential inclusion $\dot{x} \in F$ (while $(x, u) \in C$), or jump according to the difference inclusion ${x}^+ \in G$ (while $(x, u) \in D$). For all the concepts regarding hybrid solutions, stability, robustness, and related Lyapunov theory, we refer to \cite{goebel2012hybrid} and references therein. \section{Model Formulation and Problem Statement}\label{sec:problem} The electromagnetic model of a PMSM in a static bi-phase reference frame under balanced working conditions, linear magnetic circuits, and negligible iron losses, can be written as \begin{equation}\label{eq:model1} \frac{d}{dt}i_s = -\frac{R}{L}i_s + \frac{1}{L}u_s - \frac{\omega \varphi \mathcal{J} \zeta}{L}, \qquad \dot{\zeta} = \omega \mathcal{J} \zeta, \end{equation} where $i_s, u_s \in \mathbb{R}^2$ are the stator currents and voltages, respectively, and in particular $u_s$ is a piecewise continuous signal defined on the interval $[t_0, \infty)$, with $t_0$ the initial time. Furthermore, $\omega$ is the rotor electrical angular speed, while $\zeta \in \mathbb{S}^1$ and $\varphi \in \mathbb{R}_{>0}$ are the angular configuration and the (constant) amplitude of the rotor magnetic flux vector, respectively. Finally, $R$ is the stator resistance and $L$ is the stator inductance.\\ In the field of sinusoidal machines, it is common practice to represent \eqref{eq:model1} in rotating reference frames. 
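Before passing to rotating frames, the static-frame model \eqref{eq:model1} can be sanity-checked by direct integration. The sketch below is a forward-Euler simulation under illustrative parameter values (not those of a specific machine) and a short-circuited stator, so that the current settles into the bounded sinusoidal regime driven by the back-EMF while the flux direction stays on $\mathbb{S}^1$:

```python
import numpy as np

# Illustrative parameters: stator resistance, inductance, flux amplitude
R, L, phi = 0.06, 33.75e-6, 1.9e-3
omega = 2.0 * np.pi * 50.0             # constant electrical speed [rad/s]
J = np.array([[0.0, -1.0], [1.0, 0.0]])

dt, steps = 1e-7, 20000                # 2 ms of simulated time
i_s = np.zeros(2)
zeta = np.array([1.0, 0.0])
i_max = 0.0
for _ in range(steps):
    u_s = np.zeros(2)                  # short-circuited stator for this test
    di = -(R / L) * i_s + u_s / L - (omega * phi / L) * (J @ zeta)
    i_s = i_s + dt * di
    zeta = zeta + dt * omega * (J @ zeta)
    zeta = zeta / np.linalg.norm(zeta)     # project back onto S^1
    i_max = max(i_max, float(np.linalg.norm(i_s)))

# Flux direction remains unit-norm; short-circuit current is bounded
# (steady-state amplitude omega*phi/sqrt(R^2 + (omega*L)^2), about 10 A here)
assert abs(float(np.linalg.norm(zeta)) - 1.0) < 1e-9
assert i_max < 25.0
```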
Consider a generic rotating reference frame with $\zeta_r \in \mathbb{S}^1$ and $\omega_r$ its angular orientation and speed, respectively. Then, \eqref{eq:model1} becomes \begin{equation}\label{eq:model2} \begin{split} \frac{d}{dt}i_r &= -\frac{R}{L}i_r + \frac{1}{L}u_r - \frac{\omega \varphi \mathcal{J} \mathcal{C}^T[\zeta_r] \zeta}{L} - \omega_r \mathcal{J} i_r \\ \dot{\zeta} &= \omega \mathcal{J} \zeta, \qquad \dot{\zeta}_r = \omega_r \mathcal{J} \zeta_r, \end{split} \end{equation} where $i_r = \mathcal{C}^T[\zeta_r]i_s$, $u_r = \mathcal{C}^T[\zeta_r]u_s$.\\ In this work, the angular speed $\omega$ is modeled as an unknown bounded input, and the following regularity assumption is required. \begin{assumption}\label{assumption}\label{hyp:w_regularity} The signal $\omega(\cdot)$ is defined over the interval $[t_0,\infty)$ and, in addition: \begin{itemize} \item $\omega(\cdot)$ is $\mathcal{C}^0$ and piecewise $\mathcal{C}^1$ in its domain of existence; \item there exist positive scalars $\omega_{\min}$, $\omega_{\max}$ such that, for all $t \geq t_0$, it holds $\omega_{\min} \leq |\omega(t)| \leq \omega_{\max}$; \item $|D^+ \omega(t)|$ exists and is bounded, for all $t \geq t_0$. \end{itemize} \end{assumption} Note that these conditions, combined with $u_s$ being piecewise continuous, ensure existence and uniqueness of solutions on $[t_0, \infty)$. Additionally, since the properties that we specified for the input signals do not depend on $t_0$, we can choose in the following $t_0 = 0$ without loss of generality. Assumption \ref{assumption} requires the angular speed $\omega$ to have constant sign and uniformly non-zero magnitude. This condition is slightly more restrictive than the well-known assumption of non-permanent zero speed, which was proven to be a sufficient condition to reconstruct $\omega$, $\zeta$, and $\varphi$, assuming currents and voltages available for measurement and the parameters $R$ and $L$ perfectly known \cite{zaltni2010synchronous}. 
Nevertheless, Assumption \ref{assumption} is compatible with significant applications such as renewable electric energy generation and electric vehicle propulsion (UAVs, HEVs).\\ We finally recall the problem of \textit{sensorless observer, with (restricted) variable speed and no mechanical model} \cite{tilli2019towards}: given the PMSM dynamics \eqref{eq:model1} or \eqref{eq:model2}, design an estimator of $\zeta$, $\omega$, $\varphi$ with only stator voltages and currents available for measurement, such that appropriate stability and convergence properties hold under Assumption \ref{assumption}. \section{The Proposed Hybrid Observer}\label{sec:main_result} \begin{table}[t!] \begin{center} \captionsetup{width=0.7\columnwidth} \caption{System and observer parameters}\label{tab:TabParMot} \begin{tabular}{lc | lc}\hline \hline {\scriptsize Stator resistance $R$ $[\Omega]$}&{\scriptsize $0.06$}&{\scriptsize $k_{\text{p}}$}&{\scriptsize $2.18 \times 10^4$} \\ {\scriptsize Stator inductance $L$ [$\mu \text{H}$]}&{\scriptsize $33.75$} &{\scriptsize $k_{\text{i}}$}&{\scriptsize $9.34 \times 10^3$}\\ {\scriptsize Nominal angular speed [$\text{rpm}$]}&{\scriptsize $6000$} & {\scriptsize $k_{\eta}$}&{\scriptsize $95.7$} \\ {\scriptsize Rotor magnetic flux $\varphi$ [$\text{mWb}$]}&{\scriptsize $1.9$}&{\scriptsize $\gamma$}&{\scriptsize $4582$}\\ {\scriptsize Number of pole pairs $p$}&{\scriptsize $7$}&{\scriptsize $\Lambda$}&{\scriptsize $200$}\\ {\scriptsize Load Inertia [$\text{Kgm}^2$]}&{\scriptsize $2.5 \times 10^{-5}$}&{\scriptsize $N$}&{\scriptsize $2$}\\ \hline \end{tabular} \end{center} \vspace{-8pt} \end{table} In this section we present the main result of this work, and we compare it with a preliminary continuous-time solution.
To simplify the presentation and better highlight the connection between the different strategies, we choose to embed some numerical results throughout the text, based on a UAV propeller motor, whose parameters are indicated in Table \ref{tab:TabParMot}. Since the observer transient performance is more evident when it is disconnected from the controller, we employ a standard sensorized field-oriented controller to generate the speed profile $\omega$, selected as a combination of constant and time-varying ``aggressive'' sequences. We omit the complete closed-loop simulations for brevity. \subsection{The $\chi$-Reference Frame and a Continuous-Time Solution} Let $\chi \coloneqq |\omega|\varphi \in \mathbb{R}_{>0}$, $\xi \coloneqq (1/\varphi)\operatornamewithlimits{sgn}(\omega)$. This allows us to consider, in order to replace the PMSM angular dynamics, the particular frame given by $\zeta_\chi \coloneqq \zeta \operatornamewithlimits{sgn}(\xi) = \zeta \operatornamewithlimits{sgn}(\omega)$, which yields a very simple reformulation of the model, for a generic rotating frame $\zeta_r$: \begin{equation}\label{eq:chi_frame} \begin{split} \frac{d}{dt}i_r &= -\frac{R}{L}i_r + \frac{1}{L}u_r - \frac{\chi\mathcal{J}\mathcal{C}^T[\zeta_r]\zeta_\chi}{L} - \omega_r\mathcal{J}i_r\\ \dot{\zeta}_\chi &= \chi\xi \mathcal{J}\zeta_\chi, \qquad \dot{\zeta}_r = \omega_r \mathcal{J} \zeta_r. \end{split} \end{equation} Note, in particular, that $\xi \in \mathbb{R}$ is an unknown parameter and $\chi$ satisfies the following properties, which follow as a direct consequence of Assumption \ref{assumption}: \begin{itemize} \item $\chi$ is $\mathcal{C}^0$ and piecewise $\mathcal{C}^1$; \item $\chi_{\text{m}} \leq \chi \leq \chi_{\text{M}}$, for some positive scalars $\chi_{\text{m}}$, $\chi_{\text{M}}$; \item $|D^+\chi| \leq M$, for some positive scalar $M$.
\end{itemize} The main idea of the proposed observer is then to design an estimator of the frame $\zeta_\chi$ by using as representation a frame $\zeta_r = \hat{\zeta}_\chi$, whose dynamics needs to be designed appropriately so that the two references synchronize asymptotically. This synchronization problem can be recast as the stabilization of the misalignment error $\eta \coloneqq \mathcal{C}^T[\hat{\zeta}_\chi]\zeta_\chi \in \mathbb{S}^1$. For convenience, we use the subscript $(\cdot)_{\hat{\chi}}$ to indicate the electric variables in the frame $\hat{\zeta}_\chi$, leading to the following dynamics: \begin{equation} \begin{split} \frac{d}{dt}i_{\hat{\chi}} &= -\frac{R}{L}i_{\hat{\chi}} + \frac{1}{L}u_{\hat{\chi}} - \frac{\chi\mathcal{J}\eta}{L} - \hat{\omega}_\chi\mathcal{J}i_{\hat{\chi}}\\ \dot{\zeta}_{\chi} &= \chi \xi \mathcal{J}\zeta_\chi, \qquad \dot{\hat{\zeta}}_\chi = \hat{\omega}_\chi \mathcal{J}\hat{\zeta}_\chi, \end{split} \end{equation} with $\hat{\omega}_\chi$ the angular speed of the frame $\hat{\zeta}_\chi$. 
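The key algebraic fact behind this construction, namely that the back-EMF seen in the estimated frame equals $-\chi\mathcal{J}\eta$ regardless of the sign of $\omega$, can be verified directly. The following sketch uses hypothetical numbers, with $\omega < 0$ on purpose to exercise the sign handling:

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])
embed = lambda th: np.array([np.cos(th), np.sin(th)])
C = lambda z: np.array([[z[0], -z[1]], [z[1], z[0]]])

phi, omega = 1.9e-3, -2.0 * np.pi * 40.0    # negative speed on purpose
theta = 0.8
zeta = embed(theta)                          # rotor flux direction
chi = abs(omega) * phi
zeta_chi = np.sign(omega) * zeta             # chi-frame rotor direction
zeta_hat = embed(theta + np.pi - 0.3)        # estimate, misaligned by 0.3 rad

eta = C(zeta_hat).T @ zeta_chi               # misalignment error on S^1
h = -chi * (J @ eta)                         # back-EMF predicted from eta

# Back-EMF term of model (2) in the zeta_hat frame, computed from zeta:
h_model = -omega * phi * (J @ (C(zeta_hat).T @ zeta))
assert np.allclose(h, h_model)
assert np.isclose(float(np.linalg.norm(h)), chi)   # |h| carries |omega|*phi
```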
In \cite{tilli2019towards}, the synchronization problem was addressed with a continuous-time observer of the form \begin{equation} \label{eq:observer} \begin{split} \dot{\hat{\imath}} &= -\frac{R}{L}\hat{\imath} + \frac{1}{L}u_{\hat{\chi}} + \frac{\hat{h}}{L} - \left(|\hat{h}|\hat{\xi} + k_{\eta}\hat{h}_1\right)\mathcal{J}i_{\hat{\chi}} + k_{\text{p}}\tilde{\imath}\\ \dot{\hat{h}} &= k_{\text{i}}\tilde{\imath} \qquad \dot{\hat{\zeta}}_\chi = \left(|\hat{h}|\hat{\xi} + k_{\eta}\hat{h}_1\right)\mathcal{J}\hat{\zeta}_\chi \qquad \dot{\hat{\xi}} = \gamma \hat{h}_1 \end{split} \end{equation} where $\hat \imath \in \mathbb{R}^2$ is the reconstruction of $i_{\hat{\chi}}$, $\tilde{\imath} \coloneqq i_{\hat{\chi}} - \hat{\imath}$ is the current estimation error, $\hat \xi \in \mathbb{R}$ is the estimation of $\xi$, $\hat{h}$ is the estimate of the back-EMF $h = -\chi\mathcal{J}\eta$, while the angular speed is assigned as $\hat{\omega}_\chi = |\hat{h}|\hat{\xi} + k_{\eta}\hat{h}_1$. Finally, $k_{\text{p}}$, $k_{\text{i}}$, $k_\eta$ and $\gamma$ are positive scalars used for tuning.\\ With this structure, the outputs of the observer, providing the desired estimates, are given by $\hat{\omega} = |\hat{h}|\hat{\xi}$ (the term multiplying $\hat{h}_1$ is often omitted in practice to limit noise propagation), $\hat{\zeta} = \hat{\zeta}_\chi\operatornamewithlimits{sgn}(\hat{\xi})$ and $\hat{\varphi} = \operatornamewithlimits{sat}(1/{|\hat{\xi}|})$, where the bounds of the saturation are chosen according to the expected motor parameter ranges. In particular, it was proven that as long as Assumption \ref{assumption} holds, it is possible to ensure regional practical asymptotic stability by proper selection of the gains. This stems from the two time scales approach used to separate the dynamics into a fast subsystem, given by a high-gain observer for current and back-EMF estimation, and a slow subsystem, which can be interpreted as an adaptive attitude observer on $\mathbb{S}^1$. 
The inherently regional, rather than semi-global, nature of the result stems from the state space of the slow subsystem: the reduced-order error dynamics, obtained by supposing perfect knowledge of $i_{\hat{\chi}}$ and $h$, can be written as follows on the cylinder $\mathbb{S}^1 \times \mathbb{R}$, with $\tilde{\xi} \coloneqq \xi - \hat{\xi}$ indicating the flux estimation error: \begin{equation}\label{eq:reduced} \dot{\eta} = \left( \chi\tilde{\xi} - k_{\eta} \chi \eta_2 \right)\mathcal{J}\eta \qquad \dot{\tilde{\xi}} = -\gamma\chi \eta_2. \end{equation} Indeed, it can be proven that the domain of attraction of the configuration $\bar{x}_{\text{s}} = ((1, 0), 0) \in \mathbb{S}^1\times\mathbb{R}$, corresponding to rotation alignment and correct flux estimation, does not include an unstable manifold of dimension $1$, which originates from the saddle equilibrium $\bar{x}_{\text{u}} = ((-1, 0), 0)$ as shown in red in Figure \ref{fig:phase_diagram}. \begin{figure}[t] \centering \psfragscanon \psfrag{x} [B][B][0.7][0]{$\tilde{\vartheta}$} \psfrag{y} [B][B][0.7][0]{$\tilde{\xi}$} \psfrag{a} [B][B][0.7][0]{$\bar{x}_{\text{u}}$} \psfrag{b} [B][B][0.7][0]{$\bar{x}_{\text{s}}$} \psfrag{c} [B][B][0.7][0]{$\bar{x}_{\text{u}}$} \psfrag{-p} [B][B][0.7][0]{$-\pi$} \psfrag{p} [B][B][0.7][0]{$\pi$} \includegraphics[clip=true, height = 4cm]{phase_portrait.eps} \psfrag{x} [B][B][0.7][0]{$\eta_1$} \psfrag{y} [B][B][0.7][0]{$\eta_2$} \psfrag{z} [B][B][0.7][0]{$\tilde{\xi}$} \includegraphics[clip=true, height=4cm]{3D_phase_portrait.eps} \caption{Phase diagram of \eqref{eq:reduced} for $\chi = 1$, $k_\eta = 1.5$, $\gamma = 1$, as shown in \cite{tilli2019towards}. On the left, the unstable manifold (red) and some trajectories converging to $\bar{x}_{\text{s}}$ (blue) are depicted on the equivalent planar representation $(\tilde{\vartheta}, \tilde{\xi})$, where $\tilde{\vartheta} = \operatornamewithlimits{atan2}(\eta_2, \eta_1)$ is the unique angle in the interval $[-\pi, \pi)$ corresponding to $\eta$.
On the right, the same objects are represented on the cylinder $\mathbb{S}^1 \times \mathbb{R}$.} \label{fig:phase_diagram} \vspace{-8pt} \end{figure} Figure \ref{fig:sim_results} (plots (a),(d),(g),(j)) presents the simulation results corresponding to observer \eqref{eq:observer} with the same gains as in \cite{tilli2019towards} (see Table \ref{tab:TabParMot}). Note that the initial transient (corresponding to high values of $\tilde{\xi}$) is relatively slow, highlighting the same helicoidal shape as in Figure \ref{fig:phase_diagram}. \begin{figure}[t!] \centering \begin{subfigure}[b]{0.23\textwidth} \centering \psfrag{x}[B][B][0.7][0]{(a) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\omega/p$, $\hat{\omega}/p$ [$\text{rpm}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{omega_hat_cont.eps}} \vspace{2pt} \centering \psfrag{x}[B][B][0.7][0]{(b) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\omega/p$, $\hat{\omega}/p$ [$\text{rpm}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{omega_hat_hyb1.eps}} \vspace{2pt} \psfrag{x}[B][B][0.7][0]{(c) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\omega/p$, $\hat{\omega}/p$ [$\text{rpm}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{omega_hat_hyb2.eps}} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}[b]{0.23\textwidth} \centering \psfrag{x}[B][B][0.7][0]{(d) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\tilde{\vartheta}$ [$\text{rad}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{theta_tilde_cont.eps}} \vspace{2pt} \centering \psfrag{x}[B][B][0.7][0]{(e) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\tilde{\vartheta}$ [$\text{rad}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{theta_tilde_hyb1.eps}} \vspace{2pt} \psfrag{x}[B][B][0.7][0]{(f) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\tilde{\vartheta}$ [$\text{rad}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{theta_tilde_hyb2.eps}} \end{subfigure} 
\hspace{0.01\textwidth} \begin{subfigure}[b]{0.23\textwidth} \centering \psfrag{x}[B][B][0.7][0]{(g) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\xi$, $\hat{\xi}$ [${\text{Wb}}^{-1}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{xi_hat_cont.eps}} \vspace{2pt} \centering \psfrag{x}[B][B][0.7][0]{(h) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\xi$, $\hat{\xi}$ [${\text{Wb}}^{-1}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{xi_hat_hyb1.eps}} \vspace{2pt} \psfrag{x}[B][B][0.7][0]{(i) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\xi$, $\hat{\xi}$ [${\text{Wb}}^{-1}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{xi_hat_hyb2.eps}} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}[b]{0.23\textwidth} \centering \psfrag{x}[B][B][0.7][0]{(j) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\tilde{h}$ [$\text{V}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{h_tilde_cont.eps}} \vspace{2pt} \centering \psfrag{x}[B][B][0.7][0]{(k) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\tilde{h}$ [$\text{V}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{h_tilde_hyb1.eps}} \vspace{2pt} \psfrag{x}[B][B][0.7][0]{(l) \; time [s]} \psfrag{y}[B][B][0.7][0]{$\tilde{h}$ [$\text{V}$]} \raisebox{-\height}{\includegraphics[clip = true, width = \textwidth]{h_tilde_hyb2.eps}} \end{subfigure} \caption{First row: observer \eqref{eq:observer}. Second row: observer \eqref{eq:hybrid_obs_v1}. Third row: observer \eqref{eq:hybrid_obs_v1}-\eqref{eq:batch}-\eqref{eq:batch_jump}. (a),(b),(c): Rotor angular speed (blue) and estimated value (red). (d),(e),(f): Rotor angular position reconstruction error. (g),(h),(i): Parameter $\xi$ (blue) and its estimate (red). 
(j),(k),(l): Back-EMF reconstruction error, with the first component in blue and the second one in red.} \label{fig:sim_results} \vspace{-8pt} \end{figure} \subsection{A Hybrid Strategy for Semi-Global Stability} Following the insights provided by the continuous-time solution, we opt to modify the reduced-order system \eqref{eq:reduced} by enriching its dynamics with a jump policy, which corresponds to jumps of the estimates $\hat{\zeta}_\chi$, $\hat{\xi}$, while preserving the existing flows. To simplify the approach and allow easy implementation of the observer, we propose to augment the observer dynamics with a clock, given by: \begin{equation}\label{eq:timer} \left\{ \begin{matrix} \dot{\rho} = \Lambda & \qquad \rho \in [0, 1]\\ \rho^+ = 0 & \qquad \rho = 1 \end{matrix} \right. \end{equation} with $\Lambda$ a positive scalar for tuning. Clearly, the clock dynamics can be used to enforce jumps of the angular estimate at regular times and thus break the cylinder topological constraint, but it also proves convenient as a way to embed additional desirable features. Among these, we will propose a simple identifier to enhance the observer transient performance. First, however, we introduce the baseline strategy with no identifier.
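The clock \eqref{eq:timer} is the only extra dynamics required by the baseline scheme; a minimal discretized sketch (illustrative $\Lambda$ and step size) shows that it simply triggers one jump every $1/\Lambda$ seconds:

```python
# Discretized clock of (7): flow with rho_dot = Lambda, reset when rho hits 1
Lam, dt, steps = 200.0, 1e-5, 5000      # 50 ms of simulated time
rho, jumps = 0.0, 0
for _ in range(steps):
    rho += Lam * dt                     # flow map
    if rho >= 1.0 - 1e-9:               # jump set {rho = 1} (with tolerance)
        rho = 0.0                       # jump map
        jumps += 1

# One jump every 1/Lambda = 5 ms, hence 10 jumps in 50 ms
assert jumps == 10
```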
In place of \eqref{eq:reduced}, consider the hybrid system: \begin{equation}\label{eq:reduced_hybrid} \mathcal{H}_0: \left\{ \begin{matrix} \begin{pmatrix}\dot{\eta}\\ \dot{\tilde{\xi}}\\ \dot{\rho} \end{pmatrix} = \begin{pmatrix} \left( \chi\tilde{\xi} - k_{\eta} \chi \eta_2 \right)\mathcal{J}\eta\\ -\gamma \chi \eta_2\\ \Lambda \end{pmatrix} \eqqcolon F_0(\eta, \tilde{\xi}, \rho, \chi) & \begin{pmatrix}\eta \\ \tilde{\xi}\\ \rho \end{pmatrix} \in C_{\text{s}}\\ \begin{pmatrix}{\eta}^+\\ {\tilde{\xi}}^+\\ {\rho}^+ \end{pmatrix} \in \begin{pmatrix} \left\{ \begin{split} -F\eta, & \quad \chi\eta_1 \leq 0\\ \eta, & \quad \chi\eta_1 \geq 0 \end{split} \right.\\ \tilde{\xi}\\ 0 \end{pmatrix} \eqqcolon G_0(\eta, \tilde{\xi}, \rho, \chi) & \begin{pmatrix}\eta \\ \tilde{\xi}\\ \rho \end{pmatrix} \in D_{\text{s}}\\ \end{matrix} \right. \end{equation} where $F = \operatornamewithlimits{diag}\{1, -1\}$, while $C_{\text{s}} = \mathbb{S}^1\times\mathbb{R}\times[0, 1]$ and $D_{\text{s}} = \mathbb{S}^1\times\mathbb{R}\times\{1\}$. In this structure, the angle $\eta$ is always reset to a value satisfying $\eta_1 \geq 0$, thus ensuring that the set $\bar{x}_{\text{u}} \times [0, 1]$ is not an attractor compatible with the data of system \eqref{eq:reduced_hybrid}. In fact, the next result confirms that the proposed hybrid strategy removes the unstable manifold $\mathcal{R}_{\mathcal{U}}$. \begin{lemma}\label{lemma1} The set $\mathcal{A}_0 \coloneqq \bar{x}_{\textup{s}} \times [0, 1] \subset \mathbb{S}^1 \times \mathbb{R}^2$ is a uniformly preasymptotically stable attractor for the hybrid system \eqref{eq:reduced_hybrid}, with basin of preattraction given by $\mathbb{S}^1\times\mathbb{R}^2$. \end{lemma} \begin{proof} It is a direct application of the Nested Matrosov Theorem for hybrid systems \cite[Theorem 4.1]{sanfelice2009asymptotic}. 
Indeed, consider the following Matrosov functions (which are continuous in their arguments, and thus bounded in any compact set of the states $(\eta, \tilde{\xi}, \rho)$, by Assumption \ref{hyp:w_regularity}): \begin{equation}\label{eq:matrosov_fn} \begin{split} W_1(\eta, \tilde{\xi}, \rho, \chi) &= 1 - \eta_1 + \frac{1}{2\gamma}\tilde{\xi}^2\\ W_2(\eta, \tilde{\xi}, \rho, \chi) &= -\chi \tilde{\xi} \eta_1 \eta_2\\ W_3(\eta, \tilde{\xi}, \rho, \chi) &= \exp(\rho)\left[\eta_2^2 + \tilde{\xi}^2\right]\\ W_4(\eta, \tilde{\xi}, \rho, \chi) &= \exp(-\rho)\left[1 - \eta_1\right]. \end{split} \end{equation} Employing routine calculations and by means of Assumption \ref{hyp:w_regularity}, it is possible to establish the bounds $\sup_{f \in F_0(\eta, \tilde{\xi}, \rho, \chi)}\langle \nabla W_i(\eta, \tilde{\xi}, \rho, \chi), (f, D^+\chi) \rangle \leq B_{\text{c}, i}(\eta, \tilde{\xi}, \rho)$, $i \in \{1, 2, 3, 4\}$, for all $(\eta, \tilde{\xi}, \rho) \in C_{\text{s}}$: \begin{equation} \begin{split} B_{\text{c}, 1} &= -k_{\eta}\chi_{\text{m}}\eta_2^2 \leq 0\\ B_{\text{c}, 2} &= -\chi_{\text{m}}^2\eta_1^2\tilde{\xi}^2 + \Delta_2(M, \chi_{\text{M}}, \tilde{\xi}, \eta)|\eta_2|\\ B_{\text{c}, 3} &= \Lambda W_2 + \exp(\rho)\Delta_3(\chi_{\text{M}}, \tilde{\xi}, \eta)|\eta_2|\\ B_{\text{c}, 4} &= -\Lambda \exp(-\rho)(1 - \eta_1) + \Delta_4(\chi_{\text{M}}, \tilde{\xi}, \eta)|\eta_2|, \end{split} \end{equation} with $\Delta_2$, $\Delta_3$, $\Delta_4$ positive continuous functions in their arguments. Note that $B_{\text{c}, 2} \leq -\chi_{\text{m}}^2\tilde{\xi}^2$ when $\eta_2 = 0$, thus in $B_{\text{c}, 3}$ and $B_{\text{c}, 4}$ the conditions 1)-2) of \cite[Theorem 4.1]{sanfelice2009asymptotic} must be checked in particular for $\eta_1 = -1$, $\eta_2 = 0$, $\tilde{\xi} = 0$, for any $\rho \in [0, 1]$.
Similarly, it holds $\sup_{g \in G_0(\eta, \tilde{\xi}, \rho, \chi)} W_i(g) - W_i(\eta, \tilde{\xi}, \rho, \chi) \leq B_{\text{d}, i}(\eta, \tilde{\xi}, \rho)$, $i \in \{1, 2, 3, 4\}$, with the following bounds, for all $(\eta, \tilde{\xi}, \rho) \in D_{\text{s}}$: \begin{equation} \begin{split} B_{\text{d}, 1} &= \min \{0, 2\eta_1\} \leq 0\\ B_{\text{d}, 2} &= \max\{0, -2\chi_{\text{M}} |\tilde{\xi}| |\eta_2|\eta_1 \}, \qquad\qquad (B_{\text{d}, 2} > 0 \Rightarrow \eta_1 < 0)\\ B_{\text{d}, 3} &= [\exp(0) - \exp(1)](\eta_2^2 + \tilde{\xi}^2) \leq 0\\ B_{\text{d}, 4} &= [\exp(0) - \exp(-1)] (1 - |\eta_1|). \end{split} \end{equation} It can be easily verified from the first three bounds that the conditions 1)-2) of \cite[Theorem 4.1]{sanfelice2009asymptotic} are satisfied for all $(\eta, \tilde{\xi}, \rho) \in D_{\text{s}} \backslash \mathcal{A}_0$. Finally, note that uniform global stability is easily established with $B_{\text{c}, 1}$, $B_{\text{d}, 1}$, in connection with the fact that $W_1$ is positive definite (considering a proper indicator function) with respect to the attractor $\mathcal{A}_0$, for all $(\eta, \tilde{\xi}, \rho) \in C_{\text{s}} \cup D_{\text{s}} \cup G_0(D_{\text{s}})$. Since all sufficient conditions in \cite{sanfelice2009asymptotic} are verified, the statement follows immediately. \end{proof} In order to implement the hybrid observer leading to the above reduced-order system, we need to compute the jumps of $\hat{\zeta}_\chi$ corresponding to $\eta^+ = -F\eta$, using $\hat{h}$ as a proxy of $h = -\chi\mathcal{J}\eta$. Indeed, note that \begin{equation} -F\eta = \mathcal{C}^T[\hat{\zeta}_\chi^+]\zeta_\chi = \mathcal{C}[\zeta_\chi]F\hat{\zeta}_\chi^+, \end{equation} therefore it is possible to express $\hat{\zeta}_\chi^+$ as: \begin{equation} \hat{\zeta}_\chi^+ = -F\mathcal{C}^T[\zeta_\chi]F\eta = -\mathcal{C}[\zeta_\chi]\eta = -\mathcal{C}^T[\hat{\zeta}_\chi][\mathcal{C}[\zeta_\chi]\zeta_\chi].
\end{equation} Furthermore, at each time a ``fast'' estimate of the rotor position (rescaled by $\chi > 0$) can be retrieved from $\hat{h}$ and $\hat{\zeta}_{\chi}$, since $\mathcal{J}h = \chi \eta$, and therefore $\chi \zeta_{\chi} = \mathcal{C}[\hat{\zeta}_{\chi}]\mathcal{J}h$. These considerations finally yield the complete jump map $G_\zeta:\mathbb{R}^2\times\mathbb{S}^1 \rightrightarrows \mathbb{S}^1$: \begin{equation} \begin{split} &G_\zeta(\hat{h}, \hat{\zeta}_\chi) \in \left\{\begin{matrix} -\mathcal{C}^T[\hat{\zeta}_\chi]\begin{pmatrix}\cos(2\theta_\chi (\hat{h}, \hat{\zeta}_\chi))\\ \sin(2\theta_\chi (\hat{h}, \hat{\zeta}_\chi)) \end{pmatrix} &\quad \hat{h}_2 \geq 0\\ \hat{\zeta}_\chi &\quad \text{otherwise} \end{matrix} \right.\\ &\theta_\chi = \operatornamewithlimits{atan2}(y_\chi, x_\chi) \subset [-\pi, \pi],\qquad \begin{pmatrix} x_\chi \\ y_\chi \end{pmatrix} = \mathcal{C}[\hat{\zeta}_\chi]\mathcal{J}\hat{h} \end{split} \end{equation} where in particular we let $\operatornamewithlimits{atan2}(0, 0) = [-\pi, \pi]$ and $\operatornamewithlimits{atan2}(y, x) = \{-\pi, \pi\}$, for all $(x, y)$ in the set $S = \{(x, y) \in \mathbb{R}^2: x < 0, y = 0\}$. For convenience, let the map $G_{\text{f}}(\hat{h}, \hat{\zeta}_\chi) = \mathcal{C}^T[G_\zeta(\hat{h}, \hat{\zeta}_\chi)]\mathcal{C}[\hat{\zeta}_\chi]$ indicate the change of coordinates from the $\hat{\zeta}_\chi$-frame to the $\hat{\zeta}_\chi^+$-frame. 
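For concreteness, the jump maps above can be sketched numerically as follows. This is a minimal illustration, not taken from the paper: it assumes $\mathcal{J}$ is the standard $2\times 2$ skew-symmetric matrix and $\mathcal{C}[\zeta]$ is the planar rotation matrix associated with the unit vector $\zeta$ (both conventions are assumptions here), and it only checks structural properties, namely that the jump keeps $\hat{\zeta}_\chi$ on $\mathbb{S}^1$ and that $G_{\text{f}}$ is orthogonal.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])  # assumed skew-symmetric matrix

def C(zeta):
    # assumed convention: rotation matrix built from the unit vector zeta
    c, s = zeta
    return np.array([[c, s], [-s, c]])

def G_zeta(h_hat, zeta_hat):
    """Sketch of the jump map for the position estimate (single-valued selection)."""
    if h_hat[1] >= 0.0:
        x, y = C(zeta_hat) @ (J @ h_hat)
        theta = np.arctan2(y, x)          # single-valued selection of atan2
        v = np.array([np.cos(2 * theta), np.sin(2 * theta)])
        return -C(zeta_hat).T @ v
    return zeta_hat

def G_f(h_hat, zeta_hat):
    # change of coordinates from the zeta_hat-frame to the post-jump frame
    return C(G_zeta(h_hat, zeta_hat)).T @ C(zeta_hat)

# structural checks over random data
rng = np.random.default_rng(0)
for _ in range(100):
    ang = rng.uniform(-np.pi, np.pi)
    zeta = np.array([np.cos(ang), np.sin(ang)])
    h = rng.normal(size=2)
    assert abs(np.linalg.norm(G_zeta(h, zeta)) - 1.0) < 1e-12
    Gf = G_f(h, zeta)
    assert np.allclose(Gf.T @ Gf, np.eye(2), atol=1e-12)
```

Only unit-norm invariance and orthogonality are verified here; the semantics of the selected branch depend on the assumed rotation convention for $G_{\text{f}}$.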
This map, which is available for observer design, is fundamental to describe the jumps that occur to both $i_{\hat{\chi}}$ and $h$, indeed: \begin{equation} i_{\hat{\chi}}^+ = \mathcal{C}^T[\hat{\zeta}_\chi^+]i_s = \mathcal{C}^T[G_\zeta(\hat{h}, \hat{\zeta}_\chi)]\mathcal{C}[\hat{\zeta}_\chi]\mathcal{C}^T[\hat{\zeta}_\chi]i_s = G_{\text{f}} i_{\hat{\chi}}, \qquad h^+ = -\chi\mathcal{J}\mathcal{C}^T[\hat{\zeta}_\chi^+]\zeta_\chi = G_{\text{f}}h \end{equation} It follows that the overall observer structure is given by: \begin{equation}\label{eq:hybrid_obs_v1} \begin{matrix} \begin{pmatrix}\dot{\hat{\imath}} \\ \dot{\hat{h}}\\ \dot{\hat{\zeta}}_\chi \\ \dot{\hat{\xi}} \\ \dot{\rho} \end{pmatrix} = \begin{pmatrix} -\frac{R}{L}\hat{\imath} + \frac{1}{L}u_{\hat{\chi}} + \frac{\hat{h}}{L} - \hat{\omega}_\chi\mathcal{J}i_{\hat{\chi}} + k_{\text{p}}\tilde{\imath}\\ k_{\text{i}}\tilde{\imath}\\ \hat{\omega}_\chi\mathcal{J}\hat{\zeta}_\chi\\ \gamma \hat{h}_1\\ \Lambda \end{pmatrix} & \rho \in [0, 1]\\ \begin{pmatrix}{\hat{\imath}}^+ \\ {\hat{h}}^+\\ {\hat{\zeta}}_\chi^+ \\ {\hat{\xi}}^+ \\ {\rho}^+ \end{pmatrix} \in \begin{pmatrix} G_{\text{f}}(\hat{h}, \hat{\zeta}_\chi)\hat{\imath}\\ G_{\text{f}}(\hat{h}, \hat{\zeta}_\chi)\hat{h}\\ G_\zeta(\hat{h}, \hat{\zeta}_\chi)\\ \hat{\xi}\\ 0 \end{pmatrix} & \rho = 1 \end{matrix} \end{equation} with $\hat{\omega}_\chi = |\hat{h}|\hat{\xi} + k_{\eta}\hat{h}_1$ as before. 
Let $x_{\text{s}} \coloneqq (\eta, \tilde{\xi}, \rho) \in \mathbb{S}^1\times\mathbb{R}\times[0, 1]$ and $x_{\text{f}} \coloneqq T(\tilde{\imath}, \tilde{h}) \in \mathbb{R}^4$ with $\tilde{h} \coloneqq h - \hat{h}$ and $T$ a change of coordinates matrix such that \cite{tilli2019towards}: \begin{equation} x_{\text{f}} = T\begin{pmatrix}\tilde{\imath}\\ \tilde{h} \end{pmatrix} = \begin{pmatrix} \varepsilon^{-1}I_2 & 0_{2\times 2}\\ -\varepsilon^{-1}I_2 & L^{-1}I_2 \end{pmatrix}\begin{pmatrix}\tilde{\imath}\\ \tilde{h} \end{pmatrix}, \end{equation} with $\varepsilon$ a positive scalar such that $R/L + k_{\text{p}} = 2\varepsilon^{-1}$, $k_{\text{i}} = 2L\varepsilon^{-2}$. We can then define the overall error dynamics as follows: \begin{equation}\label{eq:error_system_hybrid} \begin{matrix} \begin{pmatrix}D^+x_{\text{f}}\\ \dot{x}_{\text{s}} \end{pmatrix} = \begin{pmatrix} \varepsilon^{-1}\underbrace{\begin{pmatrix}-I_2 & I_2\\ -I_2& -I_2 \end{pmatrix}}_{A_\text{f}} x_{\text{f}} + \underbrace{\begin{pmatrix}0_{2\times2} \\ L^{-1}I_2 \end{pmatrix}}_{B_{\text{f}}} f_h\\ F_{\text{s}}(x_{\text{f}}, \chi, x_{\text{s}}) \end{pmatrix} & x_{\text{s}} \in C_{\text{s}}\\ \begin{pmatrix}x_{\text{f}}^+\\ x_{\text{s}}^+ \end{pmatrix} \in \begin{pmatrix} \operatornamewithlimits{diag}\{G_{\text{f}}, G_{\text{f}}\}x_{\text{f}}\\ G_{\text{s}}(x_{\text{f}}, \chi, x_{\text{s}}) \end{pmatrix} & x_{\text{s}} \in D_{\text{s}} \end{matrix} \end{equation} with $f_h = D^+h$ defined exactly as in \cite{tilli2019towards}, and $F_{\text{s}}$, $G_{\text{s}}$ the flows and jumps of the attitude estimation error (which correspond to the data in \eqref{eq:reduced_hybrid} if $\tilde{h} = 0$), respectively.
Note that it holds $A_{\text{f}} + A_{\text{f}}^T = -2I_4$, while the jump $x_{\text{f}}^+$ preserves the norm, indeed: \begin{equation} \begin{split} |x_{\text{f}}^+|^2 & = |\varepsilon^{-1}\tilde{\imath}^+|^2 + |L^{-1}\tilde{h}^+ -\varepsilon^{-1}\tilde{\imath}^+|^2\\ & = |G_{\text{f}} \varepsilon^{-1}\tilde{\imath}|^2 + |G_{\text{f}} (L^{-1}\tilde{h} -\varepsilon^{-1}\tilde{\imath})|^2\\ & = |\varepsilon^{-1}\tilde{\imath}|^2 + |L^{-1}\tilde{h} -\varepsilon^{-1}\tilde{\imath}|^2 = |x_{\text{f}}|^2. \end{split} \end{equation} This means that on the one hand, during flows, the $x_{\text{f}}$-subsystem can be made arbitrarily fast by choosing $\varepsilon$ sufficiently small, while on the other hand the jumps do not cause any increase of $|x_{\text{f}}|$, and thus they do not represent an obstacle to time scale separation. We summarize the stability properties of the above hybrid system with the following theorem, which represents the main result of this work. \begin{theorem}\label{main_thm} Consider system \eqref{eq:error_system_hybrid} with inputs $\chi(\cdot)$, $D^+\chi(\cdot)$, satisfying Assumption \ref{assumption}, and denote its solutions with $(\psi_{\textup{f}}(\cdot), \psi_{\textup{s}}(\cdot))$, with initial conditions $(x_{{\textup{f}}, 0}, x_{{\textup{s}}, 0})$. In particular, denote with $\rho_0$ the initial condition of the clock. 
Then, the attractor $0_{4\times1}\times\mathcal{A}_0$ is semiglobally practically asymptotically stable as $\varepsilon \to 0^+$, that is: \begin{itemize} \item there exists a proper indicator function $\sigma_{\textup{s}}$ of $\mathcal{A}_0$ in $\mathbb{S}^1\times \mathbb{R}^2$; \item there exists a class $\mathcal{KL}$ function $\beta_{\textup{s}}$; \end{itemize} such that, for any positive scalars $\Delta_{\textup{f}}$, $\Delta_{\textup{s}}$, $\delta$, there exists a scalar $\varepsilon^* > 0$ such that, for all $0 < \varepsilon \leq \varepsilon^*$, all $(\psi_{\textup{f}}(\cdot), \psi_{\textup{s}}(\cdot))$ satisfying $\rho_0 = 0$, $|x_{{\textup{f}}, 0}| \leq \Delta_{\textup{f}}$ and $\sigma_{\textup{s}}(x_{{\textup{s}}, 0}) \leq \Delta_{\textup{s}}$, the following bounds hold, for all $(t, j) \in \operatornamewithlimits{dom}(\psi_{\textup{f}}(\cdot), \psi_{\textup{s}}(\cdot))$: \begin{equation} \begin{split} |\psi_{\textup{f}}(t, j)| &\leq \exp\left(-t/\varepsilon \right)|x_{{\textup{f}}, 0}| + \delta\\ \sigma_{\textup{s}}(\psi_{\textup{s}}(t, j)) &\leq \beta_{\textup{s}}(\sigma_{\textup{s}}(x_{{\textup{s}}, 0}), t + j) + \delta. \end{split} \end{equation} \end{theorem} Figure \ref{fig:sim_results} (plots (b),(e),(h),(k)) presents the simulation results corresponding to observer \eqref{eq:hybrid_obs_v1}, with $\Lambda$ selected as in Table \ref{tab:TabParMot}. Notably, it is possible to appreciate that this new solution improves on the convergence performance of the previous continuous-time algorithm. This is motivated by the intuition that the jumps, for $\Lambda$ sufficiently large, force the position estimation error to remain close during transients either to $\eta = (0, \;1)$ or to $\eta = (0, \; -1)$: these configurations are associated with the maximal value of $\dot{\hat{\xi}}$. For this reason, we can expect that there exists a range for initial conditions of $|\tilde{\xi}|$ where the convergence properties of this observer are optimized.
In particular, this range is expected to be between very large initial errors, where the continuous time angular ``wraps'' dominate the behavior, and small initial errors, where jumps do not cause any estimation correction. \subsection{A Mini-Batch Identifier for Enhanced Initial Convergence} We conclude this section with a modification of the above strategy to ensure a faster observer response, obtained by means of a discrete-time identifier. The need to employ a higher number of state variables, in addition to performing the minimization of a cost function, clearly makes this method more computationally intensive. However, some strategies can be adopted to mitigate the online burden and enable implementation in embedded computing systems (e.g., moving the procedure to a lower-priority/lower-frequency task).\\ Firstly, recall that a perturbed estimate of $\chi \zeta_{\chi}$ can be computed as $\mathcal{C}[\hat{\zeta}_{\chi}]\mathcal{J}\hat{h}$. From the solutions of system \eqref{eq:chi_frame} it can be noted that, for any positive scalar $T$, for all $t \geq T$: \begin{equation} \zeta_{\chi}(t) - \zeta_{\chi}(t-T) = \xi \mathcal{J} \int_{t - T}^{t}\chi(s)\zeta_{\chi}(s)ds. \end{equation} Hence by multiplying both sides by $\chi(t - T)\chi(t)$ it follows (let $y(s) = \chi(s)\zeta_{\chi}(s)$): \begin{equation}\label{eq:lin_regr_pmsm} \chi(t - T)y(t)- \chi(t)y(t - T) = \xi\chi(t - T)\chi(t)\mathcal{J} \int_{t - T}^{t}y(s)ds, \end{equation} which can be constructed by means of division-free estimates, since $\chi$ can be replaced with $|\hat{h}|$ and $y$ with $\mathcal{C}[\hat{\zeta}_{\chi}]\mathcal{J}\hat{h}$. Indeed, between jumps of the clock \eqref{eq:timer}, we can compactly rewrite \eqref{eq:lin_regr_pmsm} as $X(t, j) + e_X(t, j) = (\Phi(t, j) + e_\Phi(t, j))\xi$, where $X$ and $\Phi$ are functions only of $\hat{h}$, $\hat{\zeta}_\chi$, their past values and their integrals, while $e_X$ and $e_\Phi$ are disturbances depending on $h$ and $\tilde{h}$.
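To see that \eqref{eq:lin_regr_pmsm} indeed defines a consistent linear regression in $\xi$, consider the following minimal numerical sketch. It assumes a constant speed-like signal $\chi$, the standard skew-symmetric $\mathcal{J}$, and the noiseless solution $\zeta_\chi(t) = (\cos(\xi\chi t), \sin(\xi\chi t))$; the values of $\xi$, $\chi$, and $T$ are illustrative assumptions.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])    # assumed skew-symmetric matrix

xi, chi, T = 2.5, 3.0, 0.1                 # illustrative true parameter and window
t = 1.0

def y(s):
    # y(s) = chi(s) * zeta_chi(s), with constant chi and zeta on the unit circle
    return chi * np.array([np.cos(xi * chi * s), np.sin(xi * chi * s)])

# left-hand side of the regression: X = chi(t-T) y(t) - chi(t) y(t-T)
X = chi * y(t) - chi * y(t - T)

# regressor: Phi = chi(t-T) chi(t) J * integral_{t-T}^t y(s) ds  (trapezoid rule)
s_grid = np.linspace(t - T, t, 2001)
ys = np.stack([y(s) for s in s_grid])
h = s_grid[1] - s_grid[0]
integral = h * (ys[0] / 2 + ys[1:-1].sum(axis=0) + ys[-1] / 2)
Phi = chi * chi * (J @ integral)

# scalar least squares: xi_star = argmin_theta |X - Phi * theta|^2
xi_star = float(Phi @ X) / float(Phi @ Phi)
assert abs(xi_star - xi) < 1e-4
```

With noiseless data the least-squares solution recovers $\xi$ up to quadrature error; in the observer, $\chi$ and $y$ are replaced by the division-free estimates $|\hat{h}|$ and $\mathcal{C}[\hat{\zeta}_\chi]\mathcal{J}\hat{h}$, which introduces the disturbances $e_X$, $e_\Phi$.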
For $N \in \mathbb{N}_{\geq 1}$, let $\tau_N(\cdot)$ be a moving window operator such that, for a hybrid arc $\psi$ satisfying jumps according to the clock \eqref{eq:timer} (with $\rho(0, 0) = 0$), and for all $(t, j) \in \operatornamewithlimits{dom} \psi$ such that $j \geq N$: \begin{equation} \tau_N(\psi)(t, j) = \begin{pmatrix} \psi\left((j - N + 1)/\Lambda, j- N\right)\\ \vdots\\ \psi\left(j/\Lambda, j-1\right) \end{pmatrix}. \end{equation} Choosing $T = 1/\Lambda$ as interval of integration in \eqref{eq:lin_regr_pmsm}, we thus obtain a simple estimate of $\xi$ through a batch least-squares algorithm as follows (see \cite{bin2019approximate} for the same structure in the context of output regulation): \begin{equation} \begin{split} \xi^*(t, j) & = \operatornamewithlimits{argmin}_{\theta \in \mathbb{R}} J_N(\theta)(t, j)\\ J_N(\theta)(t, j) &\coloneqq \left| \tau_N(X)(t, j) - \tau_N(\Phi)(t, j)\theta \right|^2. \end{split} \end{equation} To implement the above strategy, the hybrid observer in \eqref{eq:hybrid_obs_v1} is augmented with an identifier based on the shift register variables $Y^\mu = (Y_{0}^\mu, \ldots, Y_{N}^\mu) \in \mathbb{R}^{2(N+1)}$, $Z^\mu = (Z_{0}^\mu, \ldots, Z_{N}^\mu) \in \mathbb{R}^{N + 1}$, $\Phi^\mu = (\Phi_{1}^\mu, \ldots, \Phi_{N}^\mu) \in \mathbb{R}^{2N}$, related to the moving window operator as $\tau_N(\Phi) = \Phi^\mu$, $\tau_N(X) = (X_{1}^\mu, \ldots, X_{N}^\mu)$, $X_{i}^\mu = Z_{i-1}^\mu Y_{i}^\mu - Z_{i}^\mu Y_{i-1}^\mu$, $i \in \{1, \ldots, N\}$: \begin{equation}\label{eq:batch} \begin{split} &\left\{\begin{split} \dot{\nu} &= \mathcal{C}[\hat{\zeta}_\chi]\mathcal{J}\hat{h}\\ \dot{Y}^\mu &= 0\\ \dot{Z}^\mu &= 0\\ \dot{\Phi}^\mu &= 0 \end{split}\right.
\qquad \qquad \qquad \qquad \qquad \;\; \rho \in [0, 1]\\ &\left\{\begin{split} \nu^+ &= 0\\ (Y^\mu_{i})^+ &= Y_{i+1}^\mu, \qquad i \in \{ 0, \ldots, N-1 \}\\ (Y^\mu_{N})^+ &= \mathcal{C}[\hat{\zeta}_{\chi}]\mathcal{J}\hat{h}\\ (Z^\mu_{i})^+ &= Z_{i+1}^\mu, \qquad i \in \{ 0, \ldots, N-1 \}\\ (Z^\mu_{N})^+ &= |\hat{h}|\\ (\Phi^\mu_{i})^+ &= \Phi_{i+1}^\mu, \qquad i \in \{ 1, \ldots, N-1 \}\\ (\Phi^\mu_{N})^+ &= \mathcal{J} \nu |\hat{h}|Z_N^\mu \end{split}\right. \qquad \rho = 1\\ &\xi^*(t, j) = \mathcal{G}[Y^\mu, Z^\mu, \Phi^\mu](t, j) = \operatornamewithlimits{argmin}_{\theta \in \mathbb{R}} J_N(\theta)(t, j), \end{split} \end{equation} where the standard Moore-Penrose pseudoinverse can be used to minimize $J_N$. The jump map of $\hat{\xi}$ can then be modified as a function of $\hat{\xi}(t, j)$ and $\xi^*(t, j)$. Without intending to provide a formal stability result for this modification, which will be the topic of future research activity, we propose to jump according to two criteria, which are the ``readiness'' of the shift register and the error $\hat{\xi} - \xi^*$: \begin{equation}\label{eq:batch_jump} \hat{\xi}^+ = \left\{ \begin{split} &\hat{\xi} \qquad j \leq N + 1 \text{ or } |\hat{\xi} - \xi^*| \leq 4\sqrt{\gamma}\\ &\xi^* \qquad \text{ otherwise}. \end{split} \right. \end{equation} This way it is possible to ensure that, if the regression errors $e_X$, $e_\Phi$ are sufficiently small, the above jump improves the estimate $\hat{\xi}$ by guaranteeing $x_{\text{s}}^+$ to be close to the set $W_1 \leq 2$ in \eqref{eq:matrosov_fn} (where $4\sqrt{\gamma}$ was employed to account for the worst case scenario). Within such a set, the local behavior of the attitude observer becomes dominant, guaranteeing a desirable residual behavior. Note that the errors $e_X$, $e_\Phi$ can be made arbitrarily small, for any jump of the overall system.
This is possible because $e_X$, $e_\Phi$ vanish as $\tilde{h} \to 0$ and, by proper selection of the gains of the fast subsystem, $\tilde{h}$ can be forced to converge during flows in an arbitrarily small ball, before the first jump occurs.\\ Finally, Figure \ref{fig:sim_results} (plots (c),(f),(i),(l)) presents the simulation results corresponding to the augmented observer \eqref{eq:hybrid_obs_v1}-\eqref{eq:batch}-\eqref{eq:batch_jump}, with $N$ chosen as in Table \ref{tab:TabParMot}. As expected, the proposed discrete-time identifier further improves on the previous solution in terms of estimation speed. Indeed, the fast reduction of $\tilde{\xi}$ is obtained after a brief waiting time according to \eqref{eq:batch_jump}. \section{Conclusions}\label{sec:conclusions} We presented a hybrid sensorless observer for PMSMs, with no a priori knowledge of the mechanical model. The rotor speed was assumed to be an unknown input disturbance (endowed with some mild regularity assumptions) with constant, unknown sign and a persistently non-zero magnitude. We showed that a clock, used to trigger an appropriate position estimation update, is sufficient to yield a semiglobal practical stability result in this challenging scenario. Motivated by the improved convergence properties with respect to an initial continuous-time solution, we also proposed an estimation speed-up strategy based on a discrete-time identifier. Future research effort will be dedicated to relaxing the required speed assumptions, as well as to further developments of the presented discrete-time identification technique. \bibliographystyle{ieeetr} {
\section{Introduction} \label{sec:intro} Extraordinary amounts of data are being collected in the form of arrays across many scientific domains, including sociology, physics, and biology, among others. In particular, network data and network data analysis have received a lot of attention because of their wide-ranging applications in these areas \citep{newman2018networks,goldenberg2010survey,kolaczyk2009statistical}. Community detection is one of the fundamental problems in network analysis, where the goal is to find groups of nodes that are, in some sense, more similar to each other than to the other nodes. Past decades have seen various procedures for community detection including modularity maximization, spectral clustering, likelihood methods, semidefinite programming, among others; see \citet{abbe2017community} for a recent survey. However, large networks, say, networks with millions of nodes, bring great challenges to these community detection procedures despite the increasing computational power. Taking the spectral clustering that we will focus on in this paper as an example, the full eigenvalue decomposition therein is time demanding when the dimension becomes large. Randomization has become one popular method for modern large-scale data analysis; see \citet{mahoney2011randomized}, \citet{drineas2016randnla}, and references therein. The general idea is that depending on the problem of interest, one uses a degree of randomness to construct a small ``sketch'' of the full data set, and then uses the resulting sketched data instead to reduce the computational burden. Random projection and random sampling are the two general approaches to obtain such a sketch matrix. Roughly speaking, random projection reduces the computational cost by projecting the data matrix to a smaller dimensional space in order to approximate the data, while random sampling algorithms lighten the computational burden by sampling and rescaling the data in some manner.
The randomization techniques have been applied to the least squares regression \citep{drineas2006sampling,drineas2011faster,drineas2012fast}, and the low-rank matrix approximation \citep{halko2011finding,martinsson2016randomized,witten2015randomized,mahoney2009cur}, among many others. Most works in this area take an algorithmic perspective, showing that for some problems of interest the randomized algorithm can perform approximately as well as the algorithm run on the full data at hand. However, from a statistical perspective, the aim is not only to obtain randomized algorithms which perform well on a particular data set but also to understand how well they perform under some underlying mechanisms. In the context of regression, there have been a few works that study the randomized algorithms under underlying regression models--for example, the ordinary linear regression \citep{ma2015statistical,raskutti2016statistical,wang2019information}, the logistic regression \citep{wang2018optimal,wang2019more}, the ridge regression \citep{wang2017sketched}, the constrained regressions \citep{pilanci2016iterative,pilanci2017newton}, and the spatial autoregressive (SAR) models \citep{zhou2017estimating,li2019randomized}, among others. Just as these works study how well the randomized algorithms can estimate the underlying regression model, it is natural and important to study how well we can use the randomization techniques to detect the communities in a ``true'' network model. The stochastic block model (SBM) \citep{holland1983stochastic} is a simple but expressive network model that captures the community structure of networks observed in the real world. In an SBM, nodes are partitioned into several distinct communities and conditioned on the underlying community assignments, the edges are generated independently according to the community membership of their end nodes. Nodes within the same community are generally more likely to be connected than the other nodes.
The SBM is popular among statisticians because it can be rigorously studied in conjunction with various network community detection procedures; see \citet{abbe2017community} for an excellent review. In this work, we focus on studying how randomization can be used to reduce the computational cost of spectral clustering, and understanding how well the resulting randomized spectral clustering algorithms perform under the SBMs. Spectral clustering is a popular and simple algorithm for clustering which consists of the following two steps. One first conducts the eigenvalue decomposition of the adjacency matrix or the Laplacian matrix and then runs $k$-means on several leading eigenvectors to obtain the node clusters or communities \citep{von2007tutorial}. It is well known that the full eigenvalue decomposition in the first step generally requires $O(n^3)$ time where $n$ denotes the number of nodes, which is time demanding when $n$ becomes huge. Regardless of the computational issues, it has been shown to enjoy good theoretical properties within the SBM framework; see \citet{rohe2011spectral,choi2012stochastic,qin2013regularized,lei2015consistency,sarkar2015role,joseph2016impact,su2019strong, yang2020simultaneous,tang2021asymptotically,deng2021strong,levin2021limit}, among many others. Facing large networks, it is thus desirable to study whether these properties would be retained under certain randomizations of the algorithms. In this paper, we utilize the idea of randomization to obtain two kinds of randomized spectral clustering algorithms; namely, the random projection-based and the random sampling-based spectral clustering, and in particular, we study their theoretical properties under the SBMs. We focus on the adjacency matrix $A$ of the network. The random projection-based method is motivated as follows. Note that the adjacency matrix approximately inherits a low-rank structure, since it is assumed to be sampled from an SBM~\citep{lei2015consistency,rohe2011spectral}.
Therefore, if one can make use of such low-rank structure to derive a matrix with a lower dimension which captures the essential information of $A$, then the eigenvalue decomposition of this matrix can help to derive that of $A$, which in turn reduces the computational cost. Indeed, the recently developed randomized low-rank matrix approximation algorithms provide a powerful tool for performing such low-rank matrix approximation \citep{halko2011finding,witten2015randomized,martinsson2016randomized}. Specifically, these techniques utilize some amount of randomness to compress the columns and rows of $A$ to $l$ ($l\ll n$) linear combinations of the columns and rows of $A$. The cost of the eigenvalue decomposition of the resulting $l$-dimensional matrix is largely reduced since $l$ is far smaller than $n$. The random projection-based spectral clustering refers to the original spectral clustering with its first step replaced by the randomized eigenvalue decomposition. On the other hand, the computational cost of the original spectral clustering can be reduced via the random sampling. Note that we only need to find a few leading eigenvectors of $A$, which can be obtained using many fast iterative methods, such as the orthogonal iteration and Lanczos iteration; see \citet{baglama2005augmented,calvetti1994implicitly}, among others. And it is well known that the time complexity of iterative algorithms is proportional to the number of non-zero elements of $A$ multiplied by the number of iterations. Therefore, if we sample the elements of $A$ in some way to obtain a sparser matrix, then the time for computing its leading eigenvectors will be largely reduced. There have been a few works on the randomized matrix sparsification; see \citet{gittens2009error,achlioptas2007fast,arora2006fast,li2020network}, among others. In particular, \citet{li2020network} apply the sampling technique to study the network cross-validation problem.
In this work, we use a simple sampling strategy to obtain a sparsified matrix; that is, sample pair $(i,j)$'s of nodes with probability $p_{ij}$'s, then use the iteration method of \citet{calvetti1994implicitly} to find its leading eigenvectors, and after that perform the $k$-means algorithm on these eigenvectors, which we refer to as the random sampling-based spectral clustering. We theoretically justify the randomized spectral clustering algorithms in terms of the approximation error that measures the deviation of the randomized matrix $\tilde{A}$ of the adjacency matrix $A$ from the population matrix $P$ and the misclassification error. In addition, although the spectral clustering is nonparametric in nature, we develop a simple method to estimate the link probability matrix $B$ based on the output clusters where $B_{kl}$ is the edge probability between any node pairs in communities $k$ and $l$, and provide its theoretical bound. It is worth noting that our analysis does \emph{not} rely on the common assumption in most of the SBM literature that $B$ is of full rank. In particular, we analyze the true eigen-structure of $P$ in the rank-deficient scheme and provide an explicit condition under which the nodes from different communities are separable. It turns out that the approximation error bound in terms of the spectral norm, namely, $\|\tilde{A}-P\|_2$, attains the minimax optimal rate in SBMs \citep{gao2015rate,gao2020discussion} under mild conditions, indicating that the optimization error from randomization, namely, $\|\tilde{A}-A\|_2$, is dominated by the statistical error from SBMs, namely, $\|A-P\|_2$. The misclassification error bounds are identical to those of the original spectral clustering \citep{lei2015consistency} and are optimal provided that the community number $K$ is fixed \citep{ahn2018hypergraph}. We also generalize the results to degree-corrected block models--an extension of SBMs incorporating the degree heterogeneity \citep{karrer2011stochastic}.
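As an illustration of the sampling step just described, the following sketch sparsifies an adjacency matrix by keeping each node pair independently with a common probability $p$ and rescaling the kept entries by $1/p$, so that the sparsified matrix is an entrywise unbiased estimate of $A$. The uniform choice $p_{ij} = p$, the rescaling convention, and all numerical values are simplifying assumptions for illustration.

```python
import numpy as np

def sparsify(A, p, seed=0):
    """Keep each upper-triangular entry of A independently with probability p,
    rescale by 1/p, and symmetrize, so that E[A_tilde] = A entrywise."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    mask = np.triu(rng.random((n, n)) < p, k=1)   # sample pairs i < j
    A_tilde = np.where(mask, A / p, 0.0)
    return A_tilde + A_tilde.T                    # symmetric, zero diagonal

# toy symmetric 0/1 adjacency matrix
rng = np.random.default_rng(1)
n, p = 400, 0.3
upper = np.triu(rng.random((n, n)) < 0.1, k=1)
A = (upper + upper.T).astype(float)

A_tilde = sparsify(A, p)
assert np.allclose(A_tilde, A_tilde.T) and np.all(np.diag(A_tilde) == 0)
# kept entries are rescaled by 1/p, and roughly a fraction p of the edges survive
assert set(np.unique(A_tilde)) <= {0.0, 1.0 / p}
kept = (A_tilde != 0).sum() / max((A != 0).sum(), 1)
assert 0.2 < kept < 0.4
```

The sparser matrix can then be fed to an iterative eigensolver, whose per-iteration cost scales with the number of remaining non-zero entries.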
The contributions of this paper are as follows. First, we utilize randomization tools to obtain two kinds of randomized spectral clustering algorithms and theoretically study the resulting algorithms under the SBMs. The results provide statistical insight into the effect of randomization on spectral clustering. From the statistical perspective, the randomization does not deteriorate the error bound of $\|\tilde{A}-P\|_2$, because the latter already attains the minimax optimal rate in SBMs. Second, extending the full-rank assumption in most works on SBMs, we also study the rank-deficient SBMs, analyze the true eigen-structure of these models, and provide sufficient conditions under which the spectral clustering may succeed, which is rarely addressed in works on SBMs and is of independent interest. Third, we develop a new R package called \textsf{Rclust}\footnote{\url{https://github.com/XiaoGuo-stat/Rclust}} to facilitate the use of the randomized spectral clustering algorithms studied in this work. The remainder of this paper is organized as follows. Section \ref{sec:pre} defines the notation, and introduces and analyzes the SBM and spectral clustering in more detail. Section \ref{sec:rspec} includes the random projection-based and random sampling-based spectral clustering schemes that we consider. Section \ref{sec:theo} presents the theoretical results. Section \ref{sec:dcsbm} contains extensions to degree-corrected block models. Section \ref{sec:related} reviews and discusses related works. Sections \ref{sec:simu} and \ref{sec:real} display the simulation and real experiments that verify the theoretical results and show the effectiveness of the proposed methods. Section \ref{sec:dissc} concludes with discussion. Proofs are provided in the online supplemental material. \section{Preliminaries } \label{sec:pre} In this section, we provide some notation and briefly introduce the SBMs and the spectral clustering algorithm.
In particular, the rationality of spectral clustering under SBMs is analyzed. \subsection{Notation} Let $\mathbb M_{n,K}$ be the set of all $n\times K$ matrices that have exactly one 1 and $K-1$ 0's in each row. Any $\Theta\in \mathbb M_{n,K} $ is called a \emph{membership matrix} where each row represents the community membership of a node in a network with $K$ communities; for example, node $i$ belongs to community $g_i\in\{1,...,K\}$ if and only if $\Theta_{ig_i}=1$. For $1\leq k\leq K$, let $G_k=G_k(\Theta)=\{i\in[n]:g_i=k\}$, where $[n]:=\{1,2,\dots,n\}$. $G_k$ consists of nodes with their community membership being $k$, and denote $n_k=|G_k|$. For any matrix $A_{n\times n}$ and $I,J\subseteq [n]$, $A_{I\ast}$ and $A_{\ast J}$ denote the submatrices of $A$ consisting of the corresponding rows and columns, respectively. $\|A\|_{\tiny {\rm F}}$ and $\|A\|_\infty$ denote the Frobenius norm and the element-wise maximum absolute value of $A$, respectively. We use $\|\cdot\|_2$ to denote the Euclidean norm of a vector and the spectral norm of a matrix. In addition, ${\rm diag}(A)$ denotes the matrix with its diagonal elements being the same as those of $A$ and non-diagonal elements being 0's. \subsection{Stochastic block model} \label{sub:sbm} The SBM introduced by \citet{holland1983stochastic} is a class of probabilistic models for networks with well-defined communities. For a potential network with $n$ nodes and $K$ communities, the model is parameterized by the membership matrix $\Theta\in\mathbb M_{n,K}$ and the link probability matrix $B\in[0,1]^{K\times K}$, where $B$ is symmetric and each entry of $B$, for example $B_{kl}$, represents the edge probability between communities $k$ and $l$. Here, $B$ is \emph{not} necessarily of full rank, and we assume ${\rm rank}(B)=K'\leq K$.
Given $\Theta$ and $B$, the network adjacency matrix $A=(a_{ij})_{1\leq i,j\leq n}\in\{0,1\}^{n\times n}$ is generated as \[a_{ij}=\begin{cases} \label{2.1} {\rm Bernoulli } (B_{g_ig_j}) & \mbox{if }\; i<j, \\ 0,& \mbox{if } \;i=j,\\ a_{ji}, & \mbox{if } \;i>j.\tag{2.1} \end{cases}\] Define $P=\Theta B\Theta^{\intercal}$, then it is easy to see that $P$ is the population version of $A$ in the sense that $\mathbb E(A)=P-{\rm diag}(P)$. Under the SBMs, the goal of community detection is to use the adjacency matrix $A$ to recover the membership matrix $\Theta$ up to column permutations. \subsection{Spectral clustering} Spectral clustering is a popular and simple algorithm for community detection in networks \citep{von2007tutorial}. It generally consists of two steps. The first step is to perform the eigenvalue decomposition of a suitable matrix representing the network, where we consider the simple adjacency matrix $A$, and then put the eigenvectors of $A$ corresponding to the $K'$ largest eigenvalues into an $n\times K'$ matrix $\hat{U}$. Here and throughout the following paper, we should keep in mind that the target rank is $K'$ while the target community number is $K$ in the SBMs defined in Subsection \ref{sub:sbm}. In the next step, we treat each row of $\hat{U}$ as a point in $\mathbb R^{K'}$ and run $k$-means on $\hat{U}$ with $K$ clusters. In this paper, for simplicity, the $k$-means problem is solved using the standard and efficient heuristic Lloyd's algorithm. The resulting clustering labels are arranged as $\tilde{\Theta}\in \mathbb M_{n,K}$, and the $K'$-dimensional centroid vectors are collected as $\tilde{X}\in \mathbb R^{K\times K'}$, where the $i$th row of $\tilde{X}$ corresponds to the centroid of the $i$th cluster.
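The generative mechanism (\ref{2.1}) and the two-step procedure just described can be sketched as follows. The block probabilities, community sizes, and random seed below are illustrative assumptions, and Lloyd's algorithm is run with a simple farthest-point initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- generate A from an SBM with K = K' = 2 (full-rank B) ---
n, K = 300, 2
g = np.repeat([0, 1], n // 2)                  # membership vector g_i
B = np.array([[0.7, 0.05], [0.05, 0.7]])
P = B[g][:, g]                                 # P = Theta B Theta^T
upper = np.triu(rng.random((n, n)) < P, k=1)   # a_ij ~ Bernoulli(B_{g_i g_j}), i < j
A = (upper + upper.T).astype(float)            # symmetric, zero diagonal

# --- step 1: K' leading eigenvectors of A ---
vals, vecs = np.linalg.eigh(A)
U_hat = vecs[:, np.argsort(-np.abs(vals))[:K]]

# --- step 2: Lloyd's algorithm on the rows of U_hat ---
centers = U_hat[[0, np.argmax(np.linalg.norm(U_hat - U_hat[0], axis=1))]]
for _ in range(20):
    labels = np.argmin(((U_hat[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([U_hat[labels == k].mean(axis=0) for k in range(K)])

# clustering recovers the communities up to a label permutation
acc = max(np.mean(labels == g), np.mean(labels == 1 - g))
assert acc > 0.95
```

With this strongly assortative $B$, the rows of $\hat{U}$ concentrate around two well-separated points, so the simple Lloyd iteration suffices.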
We summarize the spectral clustering in Algorithm \ref{spectral}, where note that we use $\hat{U}$ to denote the eigenvectors of $A$ by contrast to those of the population $P$ denoted by $U$, and we use ${\tilde{U}}$ to denote the estimator of $\hat{U}$ obtained by $k$-means. The spectral clustering is interpretable in SBMs because the population matrix $P$ has eigenvectors that reveal the true clusters as shown in the next lemma. \begin{lemma} \label{lem:eigen} For an SBM with $K$ communities parameterized by $\Theta\in \mathbb M_{n,K}$ and $B\in [0,1]^{K\times K}$ with ${\rm rank}(B)=K'(K'\leq K)$, suppose the eigenvalue decomposition of $P=\Theta B\Theta^{\intercal}$ is $U_{n\times K'}\Sigma_{K'\times K'} U^{\intercal}_{K'\times n}$. Define $\Delta={\rm diag}(\sqrt{n_1},...,\sqrt{n_{K}})$ and denote the eigenvalue decomposition of $\Delta B\Delta$ by $L_{K\times K'}D_{K'\times K'}L^\intercal_{K'\times K}$. Then the following statements hold. (a) If $B$ is of full rank, i.e., $K'=K$, then for $\Theta_{i\ast}=\Theta_{j\ast}$, we have ${U}_{i\ast}={U}_{j\ast}$; while for $\Theta_{i\ast}\neq\Theta_{j\ast}$, we have $\|{U}_{i\ast}-{U}_{j\ast}\|_2=\sqrt{(n_{g_i})^{-1}+(n_{g_j})^{-1}}$. (b) If $B$ is rank deficient, i.e., $K'<K$, then for $\Theta_{i\ast}=\Theta_{j\ast}$, we have ${U}_{i\ast}={U}_{j\ast}$; while for $\Theta_{i\ast}\neq\Theta_{j\ast}$, if $\Delta^{-1}L$'s rows are mutually distinct such that there exists a deterministic sequence $\{\xi_n\}_{n\geq 1}$ satisfying \begin{align} \label{A1} {\rm min}_{k\neq l}\|\frac{L_{k\ast}}{\sqrt{n_{k}}}-\frac{L_{l\ast}}{\sqrt{n_{l}}}\|_2\geq \xi_n >0,\tag{A1} \end{align} then $\|{U}_{i\ast}-{U}_{j\ast}\|_2=\|\frac{L_{g_{i}\ast}}{\sqrt{n_{g_i}}}-\frac{L_{g_{j}\ast}}{\sqrt{n_{g_j}}}\|_2\geq \xi_n >0 $. \end{lemma} Lemma \ref{lem:eigen} says that when $B$ is of full rank, two rows of $U$ are identical if and only if the corresponding nodes are in the same community.
These results have already been obtained in \cite{lei2015consistency,rohe2011spectral}, among others. When $B$ is rank deficient, we additionally assume that (\ref{A1}) holds in order to make sure that two rows of $U$ are separable when the corresponding nodes are in distinct communities. The next lemma provides an explicit condition on $B$ that suffices for (\ref{A1}). In particular, the within-community probabilities should dominate the between-community probabilities in the sense of (\ref{2.2}). \begin{lemma} \label{lem:eigencondition} For an SBM with $K$ communities parameterized by $\Theta\in \mathbb M_{n,K}$ and $B\in [0,1]^{K\times K}$ with ${\rm rank}(B)=K'<K$, suppose the eigenvalue decomposition of $P=\Theta B\Theta^{\intercal}$ is $U\Sigma U^{\intercal}$. If there exist two deterministic sequences $\{\eta_n\}_{n\geq 1}$ and $\{\iota_n\}_{n\geq 1}$ such that \begin{align} \label{2.2} {\rm min}_{1\leq k<l\leq K}B_{kk}+B_{ll}-2B_{kl}\geq \eta_n >0,\nonumber \tag{2.2} \end{align} and for any $1\leq i\leq K'$, $0<\Sigma_{ii}\leq \iota_n$, then (\ref{A1}) holds with $\xi_n=\sqrt{\eta_n/\iota_n}$. \end{lemma} Lemmas \ref{lem:eigen} and \ref{lem:eigencondition} indicate that spectral clustering could work well if the $K'$ leading eigenvectors of $A$ are close to those of the population $P$. However, when $n$ is large, the full eigenvalue decomposition is time-consuming. In the following sections, we will make use of the recently developed randomization techniques--namely, the random projection and the random sampling--to accelerate spectral clustering. Meanwhile, we will theoretically study how the randomized spectral clustering methods interact with the assumptions of SBMs.
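Lemma \ref{lem:eigen}(a) can be verified numerically on a small population matrix (an illustrative check, not part of the proofs; the community sizes and the matrix $B$ below are arbitrary):

```python
import numpy as np

# population matrix of an SBM with K = 2, full-rank B, n_1 = 4, n_2 = 6
g = np.repeat([0, 1], [4, 6])
Theta = np.eye(2)[g]
B = np.array([[0.6, 0.1],
              [0.1, 0.5]])
P = Theta @ B @ Theta.T

# the two leading eigenvectors of P (P has exact rank 2)
vals, vecs = np.linalg.eigh(P)
U = vecs[:, np.argsort(-np.abs(vals))[:2]]
```

Lemma \ref{lem:eigen}(a) then predicts that rows of $U$ coincide within a community, while rows from different communities are exactly $\sqrt{n_1^{-1}+n_2^{-1}}$ apart.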
\begin{algorithm}[htb] \footnotesize \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand\algorithmicensure {\textbf{Output:} } \caption{Spectral clustering for $K$ clusters}~\label{alg:spectral_cluster} \label{spectral} \begin{algorithmic}[1] \REQUIRE ~\\ Cluster number $K$, target rank $K'$, adjacency matrix $A\in \mathbb{R}^{n\times n}$;\\ \ENSURE ~\\ Estimated membership matrix $\tilde{{\Theta}}\in\mathbb M_{n,K}$ and centroids $\tilde{{X}}\in \mathbb{R}^{K\times K'}$;\\ Estimated eigenvectors ${\tilde{U}}=\tilde{{\Theta}}\tilde{{X}}$;\\ ~\\ \STATE Find the $K'$ leading eigenvectors $\hat{U}$ of $A$ corresponding to the $K'$ largest eigenvalues of $A$. \\ \STATE Treat each row of $\hat{U}$ as a point in $\mathbb{R}^{K'}$ and run Lloyd's algorithm on these points with $K$ clusters. Let $(\tilde{\Theta},\tilde{X})$ be the solution. \\ \end{algorithmic} \end{algorithm} \section{Randomized spectral clustering } \label{sec:rspec} In this section, we use randomization techniques to derive two kinds of randomized spectral clustering--namely, random projection-based spectral clustering and random sampling-based spectral clustering. \subsection{Randomized spectral clustering via random projection} Recall that $A$ is generated from a low-rank matrix $P=\Theta B\Theta^{\intercal}$, hence $A$ inherits a low-rank structure naturally. Therefore, if one can make use of such low-rank structure to derive a smaller matrix that captures the essential information of $A$, then the eigenvalue decomposition of the smaller matrix can help to derive that of $A$, which in turn reduces the computational cost. Fortunately, randomization is a powerful tool for performing such low-rank matrix approximation \citep{halko2011finding,witten2015randomized,martinsson2016randomized}. These techniques utilize randomness to compress the input matrix and obtain a low-rank factorization efficiently, a strategy called \emph{random projection}.
In this section, we introduce the random projection strategy in the context of eigenvalue decomposition. Let us see how random projection can help reduce the time for the eigenvalue decomposition of the adjacency matrix $A$. For a symmetric matrix $A\in\mathbb{R}^{n\times n}$ with target rank $K'$, we aim to find an orthonormal basis $Q\in \mathbb{R}^{n\times K'}\,(K'\leq n)$ such that $$A\approx QQ^{\intercal}AQQ^{\intercal}:= \tilde{A}^{\rm rp},$$ where $\tilde{A}^{\rm rp}$ is essentially a low-rank approximation of $A$. Before constructing $Q$, we here provide some insights. $Q\in \mathbb{R}^{n\times K'}$ can be thought of as a low-rank approximation of the column (row) space of matrix $A$. To see this, suppose the eigendecomposition of $A$ is $A=\hat{U}_{n\times m}\hat{\Sigma}_{m\times m}\hat{U}^{\intercal}_{m\times n}$, where $m$ is the rank of $A$ and $\hat{U}$ represents the column (row) space of $A$. Then, when $Q=\hat{U}$ and $m=K'$, it is straightforward to see $A=QQ^{\intercal} AQQ^{\intercal}$. In addition, $QQ^{\intercal}$ is a projection operator which projects any vector $x\in\mathbb{R}^n$ to the column space of $Q$, i.e., $\|x-QQ^{\intercal}x\|_2^2=\mbox{min}_{y\in \mathbb{R}^{K'}}\;\|x-Qy\|_2^2$. $Q$ can be obtained using the following steps \citep{halko2011finding}:\\ \textbf{\emph{Step 1:}} Form a random test matrix $\Omega=(\omega_1,...,\omega_{K'})\in \mathbb R^{n\times K'}$, where $\{\omega_i\}_{i=1}^{K'}$ are $n$-dimensional random vectors independently drawn from a distribution. \\ \textbf{\emph{Step 2:}} Form the sketch matrix $Y=(y_1,...,y_{K'})=A\Omega\in\mathbb{R}^{n\times K'}$.\\ \textbf{\emph{Step 3:}} Obtain $Q$ via the QR decomposition $Y=:QR$.\\ Once $Q$ is obtained, we can perform the eigenvalue decomposition on the smaller matrix $C:=Q^{\intercal}AQ\in \mathbb{R}^{K'\times K'}$, and then post-process it to obtain the approximate eigenvectors of $A$.
In this way, the computational cost of the original spectral clustering can be largely reduced when we incorporate the aforementioned steps into Algorithm \ref{alg:spectral_cluster} to provide the approximate eigenvectors of the adjacency matrix. We call this procedure random projection-based spectral clustering. The random test matrix $\Omega$ can be generated in various ways; specifically, the entries of $\omega_i$ can be drawn i.i.d. from standard Gaussian, uniform, or Rademacher distributions, among many others. The oversampling strategy is often used to improve the empirical performance of the randomized low-rank approximation \citep{halko2011finding,witten2015randomized,martinsson2016randomized}. As most data matrices do not have exact rank $K'$, it is desirable to use $l:=K'+r$ random projections instead of exactly $K'$ projections to form the random sketch of $A$. In practice, $r=5$ or $10$ often suffices to make sure that the obtained basis $Q$ is close to the best possible basis, namely, the $K'$ leading eigenvectors of $A$, with high probability \citep{martinsson2016randomized}. Besides the oversampling scheme, the power iteration is another way to improve the quality of the low-rank approximation. For some data matrices, the eigenvalues decay so slowly that information may be lost. Thus, instead of forming the sketch $Y$ directly from $A$, several authors incorporate $q$ steps of a power iteration before constructing the sketch matrix $Y$. Formally, it is defined as $$Y:=(AA^{\intercal})^qA\Omega=A^{2q+1}\Omega.$$ In practice, $q=1$ or $q=2$ often suffices to make the spectrum decay fast \citep{halko2011finding}. We summarize the random projection-based spectral clustering procedure with such power iteration and the aforementioned oversampling strategies in Algorithm \ref{randsp}.
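Steps 1--3, combined with the oversampling parameter $r$ and the power parameter $q$, can be sketched as follows (a minimal sketch of the eigenvalue-decomposition part of Algorithm \ref{randsp}; the function and variable names are ours):

```python
import numpy as np

def randomized_eig(A, K_target, r=5, q=1, seed=0):
    """Approximate the K_target leading eigenpairs of a symmetric A by
    random projection with oversampling r and q power-iteration steps."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Omega = rng.standard_normal((n, K_target + r))  # random test matrix
    Y = A @ Omega
    for _ in range(q):                              # Y = A^{2q+1} Omega
        Y = A @ (A @ Y)
    Q, _ = np.linalg.qr(Y)                          # orthonormal basis of Y
    C = Q.T @ A @ Q                                 # small (K'+r) x (K'+r) matrix
    vals, S = np.linalg.eigh(C)                     # cheap eigendecomposition
    keep = np.argsort(-np.abs(vals))[:K_target]
    return vals[keep], Q @ S[:, keep]               # approximate eigenpairs of A
```

On a matrix of exact rank $K'$ the sketch captures the range of $A$ almost surely, so the approximation is essentially exact; for noisy matrices the quality is governed by the spectral gap, $r$, and $q$.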
\begin{remark} The time complexity of Algorithm \ref{randsp} is dominated by the matrix multiplications when forming $Y$ and $C$ in Step 2 and Step 4, which take $O((2q+1)n^2(K'+r))$ and $O(n^2(K'+r))$ time, respectively. In particular, the time complexity of Step 2 can be improved to $O((2q+1)n^2{\rm log}(K'+r))$ by using {\rm structured} random test matrices, for example, the {\rm subsampled random Fourier transform} \citep{halko2011finding,erichson2019randomized}. Moreover, the matrix-vector multiplications in Step 2 can be parallelized to further reduce the computational cost. \end{remark} \begin{algorithm}[htb] \footnotesize \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand\algorithmicensure {\textbf{Output:} } \caption{Randomized spectral clustering via random projection } \label{randsp} \begin{algorithmic}[1] \REQUIRE ~\\ Cluster number $K$, target rank $K'$, adjacency matrix $A\in \mathbb{R}^{n\times n}$, oversampling parameter $r$, and exponent $q$;\\ \ENSURE ~\\ Membership matrix ${\tilde{\Theta}}^{\rm rp}\in\mathbb M_{n,K}$ and centroids ${\tilde{X}}^{\rm rp}\in \mathbb{R}^{K\times K'}$;\\ ${\tilde{U}}^{\rm rp}={\tilde{\Theta}}^{\rm rp}{\tilde{X}}^{\rm rp}$;\\ ~\\ \STATE Draw an $n\times(K'+r)$ random test matrix $\Omega$.\\ \STATE Form the matrix $Y=A^{2q+1}\Omega$.\\ \STATE Construct $Q$ via orthonormalizing the columns of $Y$, i.e., $Y=:QR$.\\ \STATE Form $C=Q^{\intercal}AQ$ and denote $ \tilde{A}^{\rm rp}\equiv QCQ^{\intercal}$.\\ \STATE Compute the eigenvalue decomposition of the small matrix: $C={U_s}\Sigma_s {U}_s^{\intercal}$.\\ \STATE Set ${U}^{\rm rp}$ to be the column subset of $Q{U}_s$ corresponding to the $K'$ largest values of $\Sigma_s$. \STATE Treat each row of ${U}^{\rm rp}$ as a point in $\mathbb{R}^{K'}$ and run Lloyd's algorithm on these points with $K$ clusters. Let $({\tilde{\Theta}}^{\rm rp},{\tilde{X}}^{\rm rp})$ be the solution.
\end{algorithmic} \end{algorithm} \subsection{Randomized spectral clustering via random sampling} The random sampling strategy first performs element-wise sampling from the adjacency matrix $A$, and then uses fast iterative methods, say the orthogonal iteration or the Lanczos iteration, to find a nearly-optimal best rank $K'$ approximation of $A$. The motivation is that in spectral clustering, we aim to find the first $K'$ eigenvectors of $A$, or the best rank $K'$ approximation of $A$, and there exist many fast iterative methods for computing such low-rank matrix approximations; see \citet{calvetti1994implicitly,baglama2005augmented,allen2016lazysvd,lehoucq1995analysis}, among many others. The time complexity of iterative methods is generally proportional to the number of non-zero elements of $A$ multiplied by the number of iterations. Hence, if we sample the elements of $A$ in some way to obtain a sparser matrix, then the time for computing its rank $K'$ approximation will be largely reduced. In the meantime, we hope that the sampling scheme does not deteriorate the accuracy too much. In the sequel, we introduce the random sampling procedure and the corresponding randomized spectral clustering. We adopt a simple sampling strategy to obtain a sparsified version of $A$. That is, we randomly select each pair $(i,j)$ of the adjacency matrix $A$ independently with probability $p_{ij}$, and the randomly sparsified matrix $\tilde{A}^{\rm s}$ is defined as \[\tilde{A}_{ij}^{\rm s}=\begin{cases} \label{3.1} \frac{A_{ij}}{p_{ij}}, & \mbox{if }\; (i,j) { \mbox{ is selected},} \\ 0,& \mbox{if } \;(i,j) {\mbox{ is not selected}},\tag{3.1} \end{cases}\] for each $i<j$, and $\tilde{A}_{ji}^{\rm s}=\tilde{A}_{ij}^{\rm s}$ for each $i>j$.
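The element-wise sampling step (3.1) with a uniform probability $p_{ij}\equiv p$, followed by an iterative rank-$K'$ eigensolver, can be sketched as follows (an illustrative sketch; we use SciPy's Lanczos-based \texttt{eigsh} as the iterative solver, and the sizes below are arbitrary):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

def sparsify(A, p, seed=0):
    """Element-wise sampling (3.1): keep each upper-triangular entry of A
    independently with probability p, rescale kept entries by 1/p,
    then symmetrize."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    keep = rng.random((n, n)) < p
    S = np.triu(np.where(keep, A / p, 0.0), k=1)
    return S + S.T

# a toy symmetric 0/1 adjacency matrix
rng = np.random.default_rng(0)
n = 80
A = np.triu((rng.random((n, n)) < 0.3).astype(float), k=1)
A = A + A.T

A_s = sparsify(A, p=0.5, seed=1)
# nearly-optimal rank-2 approximation via Lanczos iteration (Step 2)
vals, U_rs = eigsh(csr_matrix(A_s), k=2, which="LM")
```

The rescaling by $1/p$ makes $\tilde{A}^{\rm s}$ an unbiased estimate of $A$ entrywise, while the sparsity of $\tilde{A}^{\rm s}$ is what speeds up the iterative solver.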
Once $\tilde{A}^{\rm s}$ is obtained, we can apply an iterative algorithm for the eigenvalue decomposition of $\tilde{A}^{\rm s}$ to attain the nearly-optimal rank $K'$ approximation of $\tilde{A}^{\rm s}$ such that $$\tilde{A}^s\approx{U}^{\rm rs}_{n\times K'}{\Sigma}^{\rm rs}_{K'\times K'}({U}^{\rm rs})^\intercal_{K'\times n}:=\tilde{A}^{\rm rs}.$$ Then Lloyd's algorithm can be applied to the rows of ${U}^{\rm rs}$ to find the clusters. Let $({\tilde{\Theta}}^{\rm rs},{\tilde{X}}^{\rm rs})$ be the solution. For reference, we summarize these steps in Algorithm \ref{randss}. \begin{remark} The sampling strategy is element-specific. The simplest choice is $p_{ij}=p$ for all pairs $(i,j)$. Note that it is equivalent to sampling $1$'s with probability $p$ and sampling $0$'s with probability $p'\;(p'<p)$. Another choice is to set $p_{ij}$ proportional to $\|A_{i\ast}\|_2$, which ensures that edges incident to high-degree nodes are retained with higher probability, but computing $\|A_{i\ast}\|_2$ brings additional time cost. In addition, for real applications where certain edges or all edges of certain nodes are forced to remain in $\tilde{A}^s$, one can tailor the element-wise sampling probabilities accordingly. \end{remark} \begin{remark} It should be noted that the iterative algorithms in \emph{Step 2} of Algorithm \ref{alg:sampling_spectral_cluster} yield a nearly-optimal solution instead of the exactly-optimal rank $K'$ approximation, and it is acceptable in practice to work with such a nearly-optimal low-rank approximation. In the theoretical analysis, we treat \emph{Step 2} as a black box and suppose the best rank $K'$ approximation is obtained. We mainly deal with the approximation error induced by \emph{Step 1}.
\end{remark} \begin{algorithm}[htb] \footnotesize \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand\algorithmicensure {\textbf{Output:} } \caption{Randomized spectral clustering via random sampling }\label{alg:sampling_spectral_cluster} \label{randss} \begin{algorithmic}[1] \REQUIRE ~\\ Cluster number $K$, target rank $K'$, adjacency matrix $A\in \mathbb{R}^{n\times n}$, sampling probability matrix $\bar{P}=(p_{ij})$;\\ \ENSURE ~\\ Membership matrix ${\tilde{\Theta}}^{\rm rs}\in\mathbb M_{n,K}$ and centroids ${\tilde{X}}^{\rm rs}\in \mathbb{R}^{K\times K'}$;\\ ${\tilde{U}}^{\rm rs}={\tilde{\Theta}}^{\rm rs}{\tilde{X}}^{\rm rs}$;\\ ~\\ \STATE For each pair $(i,j)\,(i<j)$, randomly select it with probability $p_{ij}$. Form the sparsified matrix $\tilde{A}^{\rm s}$ according to (\ref{3.1}). \\ \STATE Apply an iterative algorithm to obtain the nearly-optimal rank $K'$ approximation of $\tilde{A}^{\rm s}$ such that $$\tilde{A}^{\rm s}\approx{U}^{\rm rs}_{n\times K'}{\Sigma}^{\rm rs}_{K'\times K'}({U}^{\rm rs})^\intercal_{K'\times n}:=\tilde{A}^{\rm rs}.$$\\ \STATE Treat each row of ${U}^{\rm rs}$ as a point in $\mathbb{R}^{K'}$ and run Lloyd's algorithm on these points with $K$ clusters. Let $({\tilde{\Theta}}^{\rm rs},{\tilde{X}}^{\rm rs})$ be the solution. \end{algorithmic} \end{algorithm} \section{Theoretical analysis} \label{sec:theo} In this section, we theoretically justify the performance of the two randomization schemes for spectral clustering under the model set-up of SBMs. Specifically, for each method, we evaluate its performance from the following three aspects. First, we derive an upper bound on how the randomized matrix $\tilde{A}^{\rm rp}$ (or $\tilde{A}^{\rm rs}$) deviates from the population adjacency matrix of SBMs. Then, we use these results to bound the misclassification error rate of the randomized spectral clustering algorithms.
At last, we use the estimated clusters to obtain an estimate of $B$ and provide its theoretical bounds. \subsection{Random projection} \label{subsec:rp} The following notes and notation will be used throughout this subsection. Let $A$ be an $n\times n$ adjacency matrix generated from an SBM with $K$ communities parameterized by $\Theta\in \mathbb M_{n,K}$ and $B\in [0,1]^{K\times K}$ with ${\rm rank}(B)=K'\,(K'\leq K)$. Denote the eigenvalue decomposition of $P=\Theta B\Theta^{\intercal}$ by $U_{n\times K'}\Sigma_{K'\times K'} U^{\intercal}_{K'\times n}$. Let $\sigma_n$ and $\gamma_n$ be the largest and smallest nonzero eigenvalues of $P$, respectively. Let ${\tilde{\Theta}}^{\rm rp}$ be the output of Algorithm \ref{randsp} with the target rank being $K'$, the oversampling and power parameters being $r$ and $q$, respectively, and the test matrix $\Omega$ having i.i.d. standard Gaussian entries. The following theorem provides the deviation of $\tilde{A}^{\rm rp}$ from $P$. \begin{theorem}\label{rproappro} If \begin{equation} \label{A2}{\rm max}_{kl}B_{kl}\leq \alpha_n \;{\rm for\; some}\; \alpha_n\geq c_0\,{\rm log}n/n,\tag{A2} \end{equation} and \begin{equation} \label{A3} r\geq 4,\, r{\rm log}r\leq n,\, K'+r\leq n,\, q=c_1\cdot n^{1/\tau},\tag{A3} \end{equation} for some constants $c_0,c_1>0$ and any $\tau>0$, then for any $s>0$, there exists a constant $c_2=c_2(s,c_0,c_1)$ such that \begin{equation} \label{4.1}\|\tilde{A}^{\rm rp}-P\|_2\leq c_2\sqrt{n\alpha_n}, \tag{4.1} \end{equation} with probability at least $1-6r^{-r}-2n^{-s}$. \end{theorem} The deviation of $\tilde{A}^{\rm rp}$ from $P$ arises from two sources: one is the deviation of $\tilde{A}^{\rm rp}$ from $A$ (optimization error), and the other is the deviation of $A$ from $P$ (statistical error).
To bound the statistical error $\|A-P\|_2$, we impose condition (\ref{A2}), a weak condition on the population network sparsity, which has been used to obtain a sharp bound on $\|A-P\|_2$ \citep{lei2015consistency,gao2017achieving,chin2015stochastic}. To bound the optimization error $\|\tilde{A}^{\rm rp}-A\|_2$, we utilize the result in \citet{halko2011finding} and impose condition (\ref{A3}) on the order of the oversampling parameter $r$ and the power parameter $q$. It essentially indicates that the optimization error caused by the random projection is dominated by the statistical error caused by sampling $A$ from $P$. Note that $q=c_1\cdot n^{1/\tau}$ is mild because $\tau$ can be sufficiently large. Under (\ref{A2}) and (\ref{A3}), the bound in (\ref{4.1}) attains the minimax optimal rate under the SBMs \citep{gao2015rate,gao2020discussion}. Thus, in the sense of the spectral norm, the randomized matrix $\tilde{A}^{\rm rp}$ and the non-randomized matrix $A$ behave the same provided that $A$ is generated from an SBM, and the randomization pays no price theoretically, up to the conditions we imposed. Moreover, (\ref{A2}) could be removed if one considers the regularized population adjacency matrix \citep{qin2013regularized} or uses other trimming steps \citep{le2015sparse}. (\ref{A3}) could be relaxed if one uses more advanced methods, say those of \citet{clarkson2017low,hu2021sparse,martinsson2020randomized}. With the deviation of $\tilde{A}^{\rm rp}$ from $P$ at hand, we are ready to justify the clustering performance of Algorithm \ref{randsp}.
We consider the following metric that measures the sum of the fractions of the misclustered nodes within each community, \begin{align} \label{4.2} L_1({\tilde{\Theta}}, \Theta)=\underset{J\in E_K}{\rm min}\,\underset{1\leq k\leq K}{\sum}\;(2n_k)^{-1}\|({\tilde{\Theta}}J)_{G_{k}\ast}-\Theta_{G_{k}\ast}\|_0, \tag{4.2} \end{align} where ${\tilde{\Theta}}$ is an estimate of $\Theta$, and $E_K$ is the set of all $K\times K$ permutation matrices. The following theorem provides an upper bound on $L_1$. \begin{theorem}\label{rpromis} Suppose that (\ref{A1}), (\ref{A2}) and (\ref{A3}) hold, and there exists an absolute constant $c_3>0$ such that \begin{equation} \label{A4}\frac{K'n\alpha_n}{\gamma_n^2\delta_n^2 {\rm min}\, n_k}\leq c_3,\tag{A4} \end{equation} where $\delta_n:=\delta_{1n}$ when $K'=K$ and $\delta_n:=\delta_{2n}$ when $K'<K$ with \begin{equation} \label{4.3}\delta_{1n}:={\rm min}_{l\neq k}\;\sqrt{n_k^{-1}+n_l^{-1}},\tag{4.3} \end{equation} \begin{equation} \label{4.4}\delta_{2n}:=\xi_n \;({\rm recall }\, (\ref{A1})),\tag{4.4} \end{equation} then with probability larger than $1-6r^{-r}-2n^{-s}$ for any $s>0$, there exist subsets $S_k\subseteq G_k$ for $k=1,...,K$ such that \begin{equation} \label{4.5}L_1({\tilde{\Theta}}^{\rm rp}, \Theta)\leq\sum_{k=1}^K \frac{|S_k|}{n_k}\leq c_3^{-1}\frac{K'n\alpha_n}{\gamma_n^2\delta_n^2 {\rm min}\, n_k}.\tag{4.5} \end{equation} Moreover, for $G=\cup _{k=1}^K(G_k\backslash S_k)$, there exists a $K\times K$ permutation matrix $J$ such that \begin{equation} \label{4.6}{\tilde{\Theta}}^{\rm rp}_{G\ast}J=\Theta_{G\ast}.\tag{4.6} \end{equation} \end{theorem} The proof of Theorem \ref{rpromis} follows that in \citet{lei2015consistency}. (\ref{A1}) is required only when $K'<K$. (\ref{A2}) and (\ref{A3}) ensure that the results of Theorem \ref{rproappro} hold.
(\ref{A4}) is a technical condition which ensures that the bound in (\ref{4.5}) vanishes, and it provides the range of parameters $(K,n,\alpha_n,\gamma_n, \delta_n)$ in which the result is meaningful. (\ref{A4}) is satisfied automatically if the bound in (\ref{4.5}) is $o(1)$. $S_k$ is actually the set of nodes in $G_k$ that are misclustered. $\delta_n$ measures the minimum distance between every pair of rows of the true eigenvectors corresponding to nodes from different communities (recall Lemma \ref{lem:eigen}). As expected, larger $\delta_n$ and ${\rm min}\, n_k$ indicate a smaller misclassification error rate. In particular, following Theorem \ref{rproappro}, the bound in (\ref{4.5}) is identical to that of the non-randomized spectral clustering when $K'=K$ \citep{lei2015consistency}. The bound in (\ref{4.5}) is not explicit as $\gamma_n$ is related to $n$. To illustrate, we now consider a simple case. Suppose an SBM parameterized by $(\Theta,B)$ is generated with balanced community sizes $n/K$ and \begin{equation} \label{4.7}P=\Theta B\Theta^{\intercal}=\Theta(\alpha_n\lambda I_K+\alpha_n(1-\lambda)1_K1_K^{\intercal})\Theta^{\intercal},\tag{4.7} \end{equation} where $1_K$ represents a $K$-dimensional vector of 1's and $\lambda$ is a constant. In this case, $\gamma_n=n\alpha_n\lambda/K$ \citep{rohe2011spectral}, and the bound in (\ref{4.5}) reduces to $${\sum_{k=1}^K \frac{|S_k|}{n_k}= O({K^3}/{n\alpha_n}).}$$ Let us discuss some specific parameter settings now. For fixed $K$, $n\alpha_n$ needs to be of order $\omega(1)$, namely, $n\alpha_n\to\infty$, to ensure a vanishing error bound. In such a case, the bound ${O(1/{n\alpha_n})}$ is optimal in the sense that there is no estimator which is weakly consistent when $n\alpha_n=O(1)$ (see \citet{ahn2018hypergraph} for example). On the other hand, when $\alpha_n=c_0{\rm log}n/n$, $K=o(({\rm log} n)^{1/3})$ is required to ensure a vanishing misclassification error rate.
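For small $K$, the metric (\ref{4.2}) can be computed exactly by enumerating all column permutations; a minimal sketch (the function name is ours):

```python
import numpy as np
from itertools import permutations

def l1_error(Theta_hat, Theta):
    """Misclassification metric (4.2): minimum over column permutations J
    of sum_k (2 n_k)^{-1} ||(Theta_hat J)_{G_k*} - Theta_{G_k*}||_0.
    Feasible only for small K (K! permutations)."""
    n, K = Theta.shape
    sizes = Theta.sum(axis=0)                       # n_1, ..., n_K
    best = np.inf
    for perm in permutations(range(K)):
        hat = Theta_hat[:, perm]                    # Theta_hat @ J
        # ||.||_0 counts differing entries within each community G_k
        err = sum(np.sum(hat[Theta[:, k] == 1] != Theta[Theta[:, k] == 1])
                  / (2 * sizes[k]) for k in range(K))
        best = min(best, err)
    return best
```

Each misclustered node changes two entries of its row, so it contributes $1/n_k$ to the sum, matching the "fraction of misclustered nodes per community" interpretation.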
It should be noted that since pure spectral clustering generally cannot attain the optimal misclassification error rate under SBMs \citep{gao2017achieving}, except in some simple cases ($K=2$, with within-community and between-community probabilities being $\frac{b\,{\rm log}\, n}{n}$ and $\frac{a\,{\rm log}\, n}{n}$) considered in \citet{abbe2020entrywise}, our randomized version also has limitations in terms of the misclassification rate. Meanwhile, the algorithms in \citet{gao2017achieving} that attain the statistically optimal error rate have higher computational complexity than the randomized spectral clustering we consider here. The current error rate could be improved if one studies more refined proof techniques for pure spectral clustering or develops variants of spectral clustering that have better error rates without increasing the time complexity. \begin{remark} In the proof of Theorem \ref{rpromis}, we made the assumption that the $k$-means algorithm finds the optimal solution, as in \citet{rohe2011spectral}. Alternatively, one can use the more delicate $(1+\varepsilon)$-approximate $k$-means \citep{kumar2004simple,matouvsek2000approximate} to bridge the gap, where one can find a good approximate solution within a constant fraction of the optimal value. \end{remark} In the sequel, we discuss how we can utilize the estimated membership matrix ${\tilde{\Theta}}^{\rm rp}$ and $\tilde{A}^{\rm rp}$ to estimate the link probability matrix $B$. Without loss of generality, we assume that the permutation matrix $J$ in (\ref{4.6}) is $I_{K\times K}$.
Noting that \begin{equation*} {B}_{ql}:=\frac{\sum_{1\leq i,j\leq n}P_{ij}\Theta_{iq}\Theta_{jl}}{\sum_{1\leq i,j\leq n}\Theta_{iq}\Theta_{jl}},\quad 1\leq q,l\leq K, \end{equation*} it is reasonable to estimate $B$ by the following ${\tilde{B}}^{\rm rp}=({\tilde{B}}^{\rm rp}_{ql})_{1 \leq q,l\leq K}$, \begin{equation*} {\tilde{B}}_{ql}^{\rm rp}:=\frac{\sum_{1\leq i,j\leq n}\tilde{A}^{\rm rp}_{ij}{\tilde{\Theta}}^{\rm rp}_{iq}{\tilde{\Theta}}^{\rm rp}_{jl}}{\sum_{1\leq i,j\leq n}{\tilde{\Theta}}^{\rm rp}_{iq}{\tilde{\Theta}}^{\rm rp}_{jl}},\quad 1\leq q,l\leq K. \end{equation*} The following theorem provides a theoretical bound for the estimator ${\tilde{B}}^{\rm rp}$. \begin{theorem}\label{rprolink} Suppose that (\ref{A2}), (\ref{A3}) and (\ref{A4}) hold; then with probability larger than $1-6r^{-r}-2K^2n^{-s}$ for any $s>0$, there exists a constant $c_4>0$ such that \begin{equation} \label{4.8}\|{\tilde{B}}^{\rm rp}-B\|_\infty \leq c_4\left( \frac{\sqrt{K'+r}\sqrt{n\alpha_n}}{{\rm min}\, n_k}+\frac{\sqrt{K'}\sigma_n}{{\rm min}\, n_k}\right)\left(1+(1-\Phi_n)^{-1}+\frac{2{\rm max}\, n_k}{{\rm min}\, n_k}(1-\Phi_n)^{-2}\right), \tag{4.8} \end{equation} with $\Phi_n:=c_3^{-1}\frac{K'n\alpha_n}{\gamma_n^2\delta_n^2{\rm min}\, n_k}$, where $\delta_n=\delta_{1n}$ (see (\ref{4.3})) when $K'=K$ and $\delta_n=\delta_{2n}$ (see (\ref{4.4})) when $K'<K$. \end{theorem} Let us illustrate the bound in (\ref{4.8}) more explicitly. As a simple example, we consider the specific case in (\ref{4.7}). Suppose further that $\alpha_n=c_0{\rm log}n/n$; then the bound in (\ref{4.8}) reduces to \begin{equation} \label{4.9}O\left(\frac{K^{3/2}{{\rm log} n}}{n}\Big(1+(1-\frac{K^3}{{\rm log} n})^{-1}\Big)\right).\tag{4.9} \end{equation} It turns out that $K=o(({\rm log} n)^{1/3})$ leads to a vanishing bound. We note that \citet{tang2021asymptotically} established asymptotic normality results for the estimation of $B$ in SBMs, where they assume $n\alpha_n=\omega(\sqrt{n})$.
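The plug-in estimator ${\tilde{B}}^{\rm rp}$ above (and its random-sampling analog defined later) is simply a pair of matrix multiplications; a minimal sketch (the function name is ours), which recovers $B$ exactly when fed the population $P$ and the true $\Theta$:

```python
import numpy as np

def estimate_B(A_tilde, Theta_hat):
    """Plug-in estimator: B_ql = sum_ij A_ij Theta_iq Theta_jl
    divided by sum_ij Theta_iq Theta_jl."""
    sizes = Theta_hat.sum(axis=0)             # estimated community sizes
    num = Theta_hat.T @ A_tilde @ Theta_hat   # K x K numerator
    return num / np.outer(sizes, sizes)       # element-wise division
```

The exact-recovery property follows from $\Theta^\intercal P\Theta = \Theta^\intercal\Theta\, B\, \Theta^\intercal\Theta$ with $\Theta^\intercal\Theta = {\rm diag}(n_1,\dots,n_K)$.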
In particular, when $B$ is of full rank, $n\alpha_n=O(\sqrt{n})$ and the community sizes are balanced, the error rate for $B$ is $\frac{\sqrt{K}}{n^{3/4}}$ in \citet{tang2021asymptotically}, tighter than $\frac{K^{3/2}}{\sqrt{n}}$ in our work, partially because we study the randomized spectral clustering while they considered the original spectral clustering. Note that the parameter range of $\alpha_n$ in this work is less restrictive than theirs. In addition, it would be interesting to study the asymptotic properties of ${\tilde{B}}^{\rm rp}$ under the setting of randomized spectral clustering. \subsection{Random sampling} Similar to the random projection-based spectral clustering, we will derive theoretical results for the random sampling method from three aspects--namely, the deviation of $\tilde{A}^{\rm rs}$ from $P$, the misclassification error rate, and the deviation of ${\tilde{B}}^{\rm rs}$ from $B$, where ${\tilde{B}}^{\rm rs}$ is an analog of ${\tilde{B}}^{\rm rp}$ with the estimators therein replaced by their counterparts under the random sampling scheme. The SBM set-up is the same as that in Subsection \ref{subsec:rp}. We here recall some notation specific to the random sampling scheme. Let $\tilde{A}^{\rm rs}$ be the intermediate output in Algorithm \ref{randss} with the target rank being $K'$, i.e., the best rank-$K'$ approximation of the sparsified matrix $\tilde{A}^{\rm s}$, whose elements are sampled from $A$ with probabilities $p_{ij}$. The next theorem provides an upper bound for the deviation of $\tilde{A}^{\rm rs}$ from $P$.
\begin{theorem}\label{rsamappro} Suppose that (\ref{A2}) holds and assume \begin{equation} \label{A5}p_{ij}\geq p_{\rm min},\quad {\rm for\; all\;} 1\leq i<j\leq n.\tag{A5} \end{equation} Define $$I_1={\rm min}\Big\{\sqrt{\frac{n\alpha_n}{p_{\rm min}}},\sqrt{{\rm max}_i\sum_j\frac{1}{p_{ij}}}\Big\};$$ then there exist constants $c_5>0$ and $c_6>0$ such that \begin{equation} \small \label{4.10}\|\tilde{A}^{\rm rs}-P\|_2\leq c_{5}\, {\rm max}\Big\{I_1,\;\frac{\sqrt{{\rm log} n}}{p_{\rm min}},\, \sqrt{n\alpha_n^2(\frac{1}{p_{\rm min}}-1)},\;\sqrt{\alpha_n^2{\rm log}n{\rm max} \{1,\frac{1}{p_{\rm min}}-1\}^2}\Big\}:=\Psi_{n}, \tag{4.10} \end{equation} with probability larger than $1-c_6n^{-\nu}$, where the constant $\nu>0$ depends on $c_5$. \end{theorem} It should be noted that the bound in (\ref{4.10}) is not obtained by simply combining $\|A-P\|_2$ and $\|\tilde{A}^{\rm rs}-A\|_2$. Instead, we make use of the low-rank nature of $P$, which leads to an improved result. Theorem \ref{rsamappro} indicates that besides the minimum sampling probability $p_{\rm min}$, the term $\sqrt{{\rm max}_i\sum_j\frac{1}{p_{ij}}}$, which measures the worst overall sampling probability over all edges of a certain node, may also affect the bound. In particular, when $\alpha_n$ is fixed and $p_{i1}, p_{i2},...,p_{in}$ are highly heterogeneous for each fixed $i$, $I_1$ reduces to $\sqrt{{\rm max}_i\sum_j\frac{1}{p_{ij}}}$. It should be noted that when the $p_{ij}$'s are uniform and fixed, the RHS of (\ref{4.10}) reduces to $\sqrt{n\alpha_n}$, which coincides with the best concentration bound of the full adjacency matrix $A$ around its population $P$ \citep{lei2015consistency,gao2015rate}. In this sense, the sampled matrix $\tilde{A}^{\rm rs}$ can be regarded as a network sampled from the same SBM generating $A$, although the elements of $\tilde{A}^{\rm rs}$ are not binary.
The following theorem justifies the clustering performance of the randomized spectral clustering via random sampling (Algorithm \ref{randss}). \begin{theorem}\label{rsammmis} Suppose that (\ref{A1}), (\ref{A2}) and (\ref{A5}) hold, and assume there exists an absolute constant $c_7>0$ such that \begin{equation} \label{A6}\frac{K'\Psi_n^2}{\gamma_n^2\delta_n^2 {\rm min}\, n_k}\leq c_7,\tag{A6} \end{equation} where $\delta_n:=\delta_{1n}$ (see (\ref{4.3})) when $K'=K$ and $\delta_n:=\delta_{2n}$ (see (\ref{4.4})) when $K'<K$. Then with probability larger than $1-c_6n^{-\nu}$ for some $\nu>0$, there exist subsets $S_k\subseteq G_k$ for $k=1,...,K$ such that \begin{equation} \label{4.11} \small L_1({\tilde{\Theta}}^{\rm rs}, \Theta)\leq\sum_{k=1}^K \frac{|S_k|}{n_k}\leq c_7^{-1}\frac{K'\Psi_n^2}{\gamma_n^2\delta_n^2 {\rm min}\, n_k},\tag{4.11} \end{equation} where recall that $\Psi_n$ is defined in (\ref{4.10}). Moreover, for $G=\cup _{k=1}^K(G_k\backslash S_k)$, there exists a $K\times K$ permutation matrix $J$ such that \begin{equation} \label{4.12}{\tilde{\Theta}}^{\rm rs}_{G\ast}J=\Theta_{G\ast}.\tag{4.12} \end{equation} \end{theorem} The proof is similar to that of Theorem \ref{rpromis}, hence we omit it. Under the SBM in (\ref{4.7}) with $p$ fixed, the bound in (\ref{4.11}) reduces, similarly to the random projection scheme, to ${O({K^3}/{n\alpha_n})}$, which is $o(1)$ under the parameter set-up that $\alpha_n=c_0{\rm log}n/n$ and $K=o(({\rm log} n)^{1/3})$. Also, the current bound could potentially be improved; see our discussion after Theorem \ref{rpromis}. Next, we turn to the estimation of the link probability matrix $B$.
Similar to the random projection setting, we define the following plug-in estimator ${\tilde{B}}^{\rm rs}=({\tilde{B}}^{\rm rs}_{ql})_{1 \leq q,l\leq K}$ for $B$, \begin{equation} {\tilde{B}}_{ql}^{\rm rs}:=\frac{\sum_{1\leq i,j\leq n}\tilde{A}^{\rm rs}_{ij}{\tilde{\Theta}}^{\rm rs}_{iq}{\tilde{\Theta}}^{\rm rs}_{jl}}{\sum_{1\leq i,j\leq n}{\tilde{\Theta}}^{\rm rs}_{iq}{\tilde{\Theta}}^{\rm rs}_{jl}},\quad 1\leq q,l\leq K.\nonumber \end{equation} The following theorem provides an upper bound for the deviation of ${\tilde{B}}^{\rm rs}$ from $B$. \begin{theorem}\label{rsamlink} Suppose that (\ref{A2}), (\ref{A5}) and (\ref{A6}) hold; then with probability larger than $1-c_6Kn^{-\nu}$ for some $\nu>0$, there exists a constant $c_8>0$ such that \begin{equation} \label{4.13}\|{\tilde{B}}^{\rm rs}-B\|_\infty \leq c_8\Big( \frac{\sqrt{K'+r}\sqrt{n\alpha_n}}{{\rm min}\, n_k}+\frac{\sqrt{K'}\sigma_n}{{\rm min}\, n_k}\Big)\left(1+(1-\Psi_n)^{-1}+\frac{2{\rm max}\, n_k}{{\rm min}\, n_k}(1-\Psi_n)^{-2}\right), \tag{4.13} \end{equation} where recall that $\Psi_n$ is defined in (\ref{4.10}). \end{theorem} We omit the proof since it is similar to that of Theorem \ref{rprolink}. We can discuss the bound (\ref{4.13}) in a similar way to that in the random projection scheme. For example, under the special case of SBM in (\ref{4.7}), let $\alpha_n=c_0{\rm log}n/n$ and $p$ be fixed; then the bound in (\ref{4.13}) reduces to the one in (\ref{4.9}). Thus, $K=o(({\rm log} n)^{1/3})$ suffices to ensure that the RHS of (\ref{4.13}) vanishes as $n$ goes to infinity. \section{Extensions} \label{sec:dcsbm} Standard SBMs often fail to capture networks with strong degree heterogeneity. As a remedy, in this section we extend our results to degree-corrected stochastic block models (DC-SBMs) coupled with the randomized spherical spectral clustering.
\subsection{Degree-corrected stochastic block models} Similar to the SBMs, the DC-SBMs \citep{karrer2011stochastic} are parameterized by the membership matrix $\Theta\in\mathbb M_{n,K}$ and the link probability matrix $B\in[0,1]^{K\times K}$, where $B$ is \emph{not} necessarily of full rank and we denote ${\rm rank}(B)=K'\,(K'\leq K)$. To account for the degree heterogeneity, the DC-SBMs additionally introduce the node propensity parameter $\vartheta\in \mathbb R^n_{+}$. With this set-up, the population adjacency matrix is defined as $P:={\rm diag}(\vartheta)\Theta B \Theta^\intercal{\rm diag}(\vartheta)$. To make the parameters identifiable, we follow \citet{lei2015consistency} and assume that $\max_{i\in G_k}\vartheta_i=1$. To facilitate further analysis, let $\phi_k$ be an $n\times1$ vector that agrees with $\vartheta$ on $G_k$ and is zero otherwise. Let $\Omega={\rm diag} (\|\phi_1\|_2,...,\|\phi_{K}\|_2)$, and let $\bar{B}=\Omega {B}\Omega$. The following lemma reveals the eigen-structure of the population matrix $P$. \begin{lemma} \label{lem:eigen2} For a DC-SBM with $K$ communities parameterized by $\Theta\in \mathbb M_{n,K}$, $B\in [0,1]^{K\times K}$ and $\vartheta\in \mathbb R^n_{+}$, suppose that ${\rm rank}(B)=K'\,(K'\leq K)$ and the eigenvalue decomposition of $P={\rm diag}(\vartheta)\Theta B \Theta^\intercal{\rm diag}(\vartheta)$ is $U_{n\times K'}\Sigma_{K'\times K'} U^{\intercal}_{K'\times n}$. Denote the eigenvalue decomposition of $\bar{B}$ by $H_{K\times K'}D_{K'\times K'}H_{K'\times K}^\intercal$. For any two vectors $a$ and $b$, ${\rm cos}(a,b)$ is defined to be $a^\intercal b/\|a\|_2\|b\|_2$. Then the following arguments hold. (a) If $B$ is of full rank, i.e., $K'=K$, then for any $\Theta_{i\ast}=\Theta_{j\ast}$, ${\rm cos}({U}_{i\ast},{U}_{j\ast})=1$, and for $\Theta_{i\ast}\neq\Theta_{j\ast}$, ${\rm cos}({U}_{i\ast},{U}_{j\ast})=0$.
(b) If $B$ is rank deficient, i.e., $K'<K$, then for any $\Theta_{i\ast}=\Theta_{j\ast}$, ${\rm cos}({U}_{i\ast},{U}_{j\ast})=1$, and for any $\Theta_{i\ast}\neq\Theta_{j\ast}$, if $H$'s rows are \emph{not} pairwise proportional, in the sense that there exists a deterministic sequence $\{\xi'_n\}_{n\geq 1}<1$ satisfying \begin{equation} \label{A7} \max_{k,l} {\rm cos}({H}_{k\ast},{H}_{l\ast})\leq \xi'_n, \tag{A7} \end{equation} then ${\rm cos}(U_{i\ast},U_{j\ast})={\rm cos}({H}_{g_i\ast},{H}_{g_j\ast})\leq \xi'_n<1$. \end{lemma} The following lemma gives an explicit condition on $B$ which suffices for (\ref{A7}). \begin{lemma} \label{lem:eigen2condition} Consider a DC-SBM with $K$ communities parameterized by $\Theta\in \mathbb M_{n,K}$, $B\in [0,1]^{K\times K}$ and $\vartheta\in \mathbb R^n_{+}$, where we suppose that ${\rm rank}(B)=K'\, (K'< K)$ and the eigenvalue decomposition of $P={\rm diag}(\vartheta)\Theta B \Theta^\intercal{\rm diag}(\vartheta)$ is $U_{n\times K'}\Sigma_{K'\times K'} U^{\intercal}_{K'\times n}$. Recall that $\Omega={\rm diag} (\|\phi_1\|_2,...,\|\phi_{K}\|_2)$, and $\bar{B}=\Omega {B}\Omega$. If there exist deterministic positive sequences $\{\eta'_n\}_{n\geq 1}$, $\{\underline{\iota}_n\}_{n\geq 1}$, $\{\overline{\iota}_n\}_{n\geq 1}$ and $\{{\beta}_n\}_{n\geq 1}$ such that \begin{equation} \min_{1\leq k<l\leq K} \bar{B}_{kk}\bar{B}_{ll}-\bar{B}_{kl}^2\geq \eta'_n>0 ,\nonumber \end{equation} and for any $1 \leq i\leq K'$, $0<\underline{\iota}_n<\Sigma_{ii}<\overline{\iota}_n$, and $0<\min_{1\leq k\leq K} \bar{B}_{kk}\leq \max_{1\leq k\leq K} \bar{B}_{kk}\leq \beta_n$, then (\ref{A7}) holds with $$\xi'_n=\sqrt{1-\frac{\eta'_n}{\overline{\iota}_n \beta_n^2/\underline{\iota}_n}}.$$ \end{lemma} Compared with Lemma \ref{lem:eigen}, we see that for the DC-SBMs, it is not the distances but the angles between the rows of the true eigenvector matrix $U$ that reveal whether the corresponding nodes are in the same community.
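To make the full-rank case of Lemma \ref{lem:eigen2} concrete, the following small numerical check (an illustrative Python/numpy sketch added here; it is not part of the formal development, and the toy parameter values are our own) builds the population matrix $P$ of a full-rank DC-SBM with $K'=K=2$ and verifies that rows of $U$ within a community have cosine $1$ while rows across communities are orthogonal.

```python
import numpy as np

# Toy full-rank DC-SBM (illustrative values): K' = K = 2, n = 6.
g = np.array([0, 0, 0, 1, 1, 1])                  # community labels
theta = np.array([1.0, 0.5, 0.2, 1.0, 0.7, 0.3])  # node propensity parameters
Theta = np.eye(2)[g]                              # membership matrix (one-hot rows)
B = np.array([[0.6, 0.1],
              [0.1, 0.5]])                        # full-rank link probability matrix
P = np.diag(theta) @ Theta @ B @ Theta.T @ np.diag(theta)

# Leading K' = 2 eigenvectors of the population matrix P.
vals, vecs = np.linalg.eigh(P)
U = vecs[:, np.argsort(-np.abs(vals))[:2]]

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

same_cos = cos(U[0], U[1])   # nodes 0 and 1: same community, cosine 1
diff_cos = cos(U[0], U[3])   # nodes 0 and 3: different communities, cosine 0
print(same_cos, diff_cos)
```

Each row $U_{i\ast}$ equals $\vartheta_i$ times a community-specific vector, so rows within a community are positively proportional, which is why the within-community cosine is exactly $1$ even under strong degree heterogeneity.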
\subsection{Randomized spherical spectral clustering} In light of Lemma \ref{lem:eigen2}, to make the spectral clustering valid on DC-SBMs, we need to normalize the rows of the eigenvectors before performing the $k$-means. In this way, the angle-based results in Lemma \ref{lem:eigen2} can be transformed to their distance-based counterparts, thus making the distance-based $k$-means valid. The resulting algorithm is called spherical spectral clustering; see Algorithm \ref{spectral2}. \begin{algorithm}[htb] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand\algorithmicensure {\textbf{Output:} } \caption{Spherical spectral clustering for $K$ clusters} \label{spectral2} \begin{algorithmic}[1] \REQUIRE ~\\ Cluster number $K$, target rank $K'$, adjacency matrix $A\in \mathbb{R}^{n\times n}$;\\ \ENSURE ~\\ Estimated membership matrix $\hat{{\Theta}}\in\mathbb M_{n,K}$ and centroids $\hat{{X}}\in \mathbb{R}^{K\times K'}$ ;\\ Estimated eigenvectors ${\tilde{U}}=\tilde{{\Theta}}\tilde{{X}}$;\\ ~\\ \STATE Find the $K'$ leading eigenvectors $\hat{U}$ of $A$ corresponding to the $K'$ largest eigenvalues of $A$. \\ \STATE Normalize each row of $\hat{U}$ and denote the resulting matrix by $\hat{U}'$, where the rows with Euclidean norm 0 are kept unchanged. \STATE Treat each row of $\hat{U}'$ as a point in $\mathbb{R}^{K'}$ and run Lloyd's algorithm on these points with $K$ clusters. Let $(\tilde{\Theta},\tilde{X})$ be the solution. \\ \end{algorithmic} \end{algorithm} The randomized spherical spectral clustering is readily available once we replace the input adjacency matrix $A$ in Algorithm \ref{spectral2} by its randomized counterpart $\tilde{A}$ ($\tilde{A}^{\rm rp}$ or $\tilde{A}^{\rm rs}$). With slight abuse of notation, the output is denoted by ${\tilde{\Theta}}$ (${\tilde{\Theta}}^{\rm rp}$ or ${\tilde{\Theta}}^{\rm rs}$).
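The three steps above can be sketched in a few lines of Python/numpy (an illustrative sketch only; the deterministic farthest-point initialization of Lloyd's algorithm is our own simplification for reproducibility and is not specified by the algorithm above, and the \textsf{Rclust} implementation differs):

```python
import numpy as np

def spherical_spectral_clustering(A, K, K_prime, n_iter=50):
    """Illustrative sketch: spherical spectral clustering on a symmetric
    (possibly randomized) adjacency matrix A. Returns labels in {0,...,K-1}."""
    # Step 1: K' leading eigenvectors of A.
    vals, vecs = np.linalg.eigh(A)
    U = vecs[:, np.argsort(-np.abs(vals))[:K_prime]]
    # Step 2: normalize each row; rows with norm 0 are kept unchanged.
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    Un = np.where(norms > 0, U / np.maximum(norms, 1e-12), U)
    # Step 3: Lloyd's algorithm on the normalized rows, with a
    # deterministic farthest-point initialization (our simplification).
    centers = [Un[0]]
    for _ in range(K - 1):
        d = np.min([np.linalg.norm(Un - c, axis=1) for c in centers], axis=0)
        centers.append(Un[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((Un[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = Un[labels == k].mean(axis=0)
    return labels

# Noise-free sanity check on a small DC-SBM population matrix.
g = np.array([0, 0, 0, 1, 1, 1])
theta = np.array([1.0, 0.5, 0.2, 1.0, 0.7, 0.3])
Theta = np.eye(2)[g]
B = np.array([[0.6, 0.1], [0.1, 0.5]])
P = np.diag(theta) @ Theta @ B @ Theta.T @ np.diag(theta)
labels = spherical_spectral_clustering(P, K=2, K_prime=2)
print(labels)
```

On the noise-free population matrix the normalized rows collapse onto $K$ distinct points, so the recovered labels match the true communities exactly.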
\begin{remark} The spherical spectral clustering algorithms have been studied by several authors; see \citet{lei2015consistency,qin2013regularized}, among others. In particular, \citet{lei2015consistency} remove the zero rows of $\hat{U}$ and use $k$-median instead of $k$-means for technical reasons. In contrast, we leave the zero rows of $\hat{U}$ untreated and still use the $k$-means on the normalized vectors. Note that besides $k$-means based algorithms, one could use other clustering algorithms, say subspace clustering \citep{vidal2005generalized,liu2012robust,terada2014strong}, directly on the unnormalized eigenvectors. \end{remark} \subsection{Misclassification analysis} Note that the approximation error bounds $\|\tilde{A}-P\|_2$ ($\tilde{A}$ can be $\tilde{A}^{\rm rs}$ or $\tilde{A}^{\rm rp}$; see (\ref{4.1}) and (\ref{4.10})) only make use of the low-rank nature of $P$, hence they remain the same under the DC-SBMs. The following theorem provides the misclassification error rate of the randomized spherical spectral clustering on DC-SBMs, where the output ${\tilde{\Theta}}$ represents ${\tilde{\Theta}}^{\rm rp}$ and ${\tilde{\Theta}}^{\rm rs}$ when $\tilde{A}=\tilde{A}^{\rm rp}$ and $\tilde{A}=\tilde{A}^{\rm rs}$, respectively. \begin{theorem}\label{rpromis2} Consider a DC-SBM with $K$ communities parameterized by $\Theta\in \mathbb M_{n,K}$, $B\in [0,1]^{K\times K}$ with ${\rm rank}(B)=K' \leq K$, and $\vartheta\in \mathbb R^n_{+}$. Let $\tilde{\vartheta}$ be an $n\times 1$ vector such that the $i$th element is $\vartheta_i/\|\phi_{g_i}\|_2$, where recall that $\phi_k$ is an $n\times 1$ vector that agrees with $\vartheta$ on $G_k$ and is zero otherwise. The following results hold for the output ${\tilde{\Theta}}$ of the randomized spherical spectral clustering.
(a) For $K'=K$, suppose that there exists an absolute constant $c_9>0$ such that, \begin{equation} \label{A8}\frac{1}{\min_i \tilde{\vartheta}_i^2}\cdot\frac{K'\|\tilde{A}-P\|_2^2}{\gamma_n^2 {\rm min}\, n_k}\leq c_9,\tag{A8} \end{equation} then there exist subsets $S_k\in G_k$ for $k=1,...,K$ such that \begin{equation} \label{5.1}L_1({\tilde{\Theta}}, \Theta)\leq\sum_{k=1}^K \frac{|S_k|}{n_k}\leq c_9^{-1}\frac{1}{\min_i \tilde{\vartheta}_i^2}\cdot\frac{K'\|\tilde{A}-P\|_2^2}{\gamma_n^2 {\rm min}\, n_k}. \tag{5.1} \end{equation} Moreover, for $G=\cup _{k=1}^K(G_k\backslash S_k)$, there exists a $K\times K$ permutation matrix $J$ such that \begin{equation} {\tilde{\Theta}}_{G\ast}J=\Theta_{G\ast}.\nonumber \end{equation} (b) For $K'<K$, suppose that (\ref{A7}) holds, $\max_i\Sigma_{ii}<\overline{\iota}_n$, and $\min_{1\leq k\leq K} B_{kk}>0$. Also suppose there exists an absolute constant $c_{10}>0$ such that, \begin{equation} \label{A9}\frac{\overline{\iota}_n}{\min_i \tilde{\vartheta}_i^2 \min \bar{B}_{kk}}\cdot\frac{K'\|\tilde{A}-P\|_2^2}{(1-\xi'_n)\gamma_n^2 {\rm min}\, n_k}\leq c_{10},\tag{A9} \end{equation} then there exist subsets $S_k\in G_k$ for $k=1,...,K$ such that \begin{equation} \label{5.2}L_1({\tilde{\Theta}}, \Theta)\leq\sum_{k=1}^K \frac{|S_k|}{n_k}\leq c_{10}^{-1}\frac{\overline{\iota}_n}{\min_i \tilde{\vartheta}_i^2 \min \bar{B}_{kk}}\cdot\frac{K'\|\tilde{A}-P\|_2^2}{(1-\xi'_n)\gamma_n^2 {\rm min}\, n_k}.\tag{5.2} \end{equation} Moreover, for $G=\cup _{k=1}^K(G_k\backslash S_k)$, there exists a $K\times K$ permutation matrix $J$ such that \begin{equation} {\tilde{\Theta}}_{G\ast}J=\Theta_{G\ast}.\nonumber \end{equation} \end{theorem} (\ref{A8}) and (\ref{A9}) are technical conditions that ensure that the bounds (\ref{5.1}) and (\ref{5.2}) are valid.
(\ref{5.1}) and (\ref{5.2}) can be made explicit by incorporating the bound on $\|\tilde{A}^{\rm rp}-P\|_2$ or $\|\tilde{A}^{\rm rs}-P\|_2$ coupled with the corresponding assumptions; see Theorems \ref{rproappro} and \ref{rsamappro}. Note that $\min_i \tilde{\vartheta}_i$ reflects the degree heterogeneity in some sense. A larger $\min_i \tilde{\vartheta}_i$ indicates less degree heterogeneity and thus better clustering performance. \section{Related work and discussion} \label{sec:related} {In this section, we review and discuss the literature that is closely related to the current work. We classify it into three groups: spectral clustering, randomization techniques, and iterative methods for fast eigen-decomposition.} Community detection is one of the fundamental problems in network analysis. The SBMs and their variants have been useful tools for modeling networks with communities and have thus been widely studied \citep{abbe2017community}. In particular, a multitude of works focuses on spectral clustering and its variants; see \citet{arroyo2021overlapping,chin2015stochastic,fishkind2013consistent,joseph2016impact,lei2015consistency,li2020community,lyzinski2014perfect, paul2020spectral,qin2013regularized,rohe2011spectral,tang2021asymptotically,su2019strong,yang2020simultaneous,yun2014accurate,yun2016optimal}, and references therein, among which weak (strong) consistency, namely that the fraction (number) of misclustered nodes decreases to zero as $n$ grows, is well established. Compared with most of these works, the current work is novel in terms of both algorithms and theory.
In terms of algorithms, the randomized spectral clustering algorithms can deal with networks with up to millions of nodes, {showing the advantage over the original spectral clustering with full eigenvalue decomposition.} In terms of theory, the approximation error bound $\|\tilde{A}-P\|_2$ is optimal under mild conditions even though we use the randomized adjacency matrix $\tilde{A}$. As a by-product, we generalize the common assumption ${\rm rank}(B)=K$ in SBMs and DC-SBMs to ${\rm rank}(B)\leq K$, which is of independent interest and rarely mentioned in the literature except by \citet{tang2021asymptotically,fishkind2013consistent}, and a few others. There are also various prior works on spectral clustering using randomized methods; see \citet{liao2020sparse,sakai2009fast,sinha2018k,tremblay2016compressive,tremblay2020approximating,wang2019scalable,yan2009fast}, among others. For example, \citet{sakai2009fast} developed fast spectral clustering algorithms by using random projection and random sampling techniques in order to reduce the data dimensionality and cardinality. \citet{yan2009fast} provided a general framework for fast spectral clustering where a distortion-minimizing local transformation is first applied to the data to reduce the dimensionality. \citet{tremblay2016compressive} proposed the compressive spectral clustering using randomized techniques from graph signal processing. {Compared with this line of works}, the merit of this work lies in that we study the effect of randomization from the statistical point of view\textendash under the framework of SBMs and DC-SBMs. The current methods attain the optimal error rate for $\|\tilde{A}-P\|_2$ under mild conditions, indicating that the optimization error induced by random projection or random sampling is dominated by the statistical error induced by the randomness of networks from SBMs and DC-SBMs.
It should be noted that the structure of SBMs and DC-SBMs facilitates bounding the approximation error in the random sampling regime. The resulting bound is tighter than those obtained by simply combining the optimization error bound $\|\tilde{A}^{\rm rs}-A\|_2$ in \citet{achlioptas2007fast} and the statistical error bound $\|{A}-P\|_2$ in \citet{lei2015consistency}. Note that \citet{li2020network} also studied the deviation of $\tilde{A}^{\rm rs}$ from $P$, but in the context of network cross-validation. It turns out that $K\leq n/{\rm log}n$ is additionally required therein to ensure that the concentration bound of $\tilde{A}^{\rm rs}$ meets that of the full adjacency matrix $A$, provided that $p$ is fixed. Iterative methods are widely used for partial eigen-decomposition, and there is a fruitful line of works in this direction; see \citet{allen2016lazysvd,baglama2005augmented,calvetti1994implicitly,lehoucq1995analysis}, among others. {We illustrate the merits of this work as follows.} In the random sampling scheme, we use the iterative methods of \citet{calvetti1994implicitly} and \citet{rspectra} as our baseline and study how sampling can be used to further accelerate the partial eigen-decomposition. The random projection scheme, on the other hand, has the following advantages \citep{halko2011finding}. First, the random projection-based methods are scalable because the matrix-vector operations can be done via multi-threading and distributed computing, which has been exploited in the R package (\textsf{Rclust}) of this work. Second, the random projection-based methods have low communication costs as they require only a few passes over the input matrix. Furthermore, the communication costs could be reduced by considering a single-pass version \citep{tropp2017randomized}. Finally, our experiments show that the randomized methods are faster than iterative methods while achieving satisfactory performance when the network is super-large, say with millions of nodes.
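As a concrete illustration of the random projection scheme, the following numpy sketch implements a simplified randomized range finder in the spirit of \citet{halko2011finding}, with oversampling parameter $r$ and power parameter $q$ (an illustrative sketch added here; it is not the multi-threaded \textsf{Rclust} implementation, and the test matrix below is our own toy example):

```python
import numpy as np

def randomized_partial_eig(A, rank, r=10, q=2, seed=0):
    """Simplified randomized range finder for a symmetric matrix A:
    oversampling parameter r, power parameter q (Halko et al., 2011)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Omega = rng.standard_normal((n, rank + r))   # random test matrix
    Y = A @ Omega
    for _ in range(q):                           # power iterations, re-orthonormalized
        Q, _ = np.linalg.qr(Y)
        Y = A @ (A @ Q)                          # A symmetric, so A A^T = A A
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis for the range
    T = Q.T @ A @ Q                              # small (rank + r) eigenproblem
    vals, S = np.linalg.eigh(T)
    idx = np.argsort(-np.abs(vals))[:rank]
    return vals[idx], Q @ S[:, idx]

# Sanity check: recover the top eigenvalues of a low-rank-plus-noise matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
noise = 0.01 * rng.standard_normal((200, 200))
A = X @ X.T + (noise + noise.T) / 2
approx_vals, _ = randomized_partial_eig(A, rank=3)
exact_vals = np.linalg.eigvalsh(A)[-3:]          # eigvalsh returns ascending order
err = np.max(np.abs(np.sort(approx_vals) - exact_vals))
print(err)
```

Only matrix-vector (here matrix-matrix) products with $A$ and two small QR factorizations are needed, which is what makes the scheme amenable to multi-threading and distributed computing.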
\section{Numerical results} \label{sec:simu} In this section, we empirically compare the finite sample performance of the randomized spectral clustering, namely, the random projection and the random sampling, with the original spectral clustering, where we use uniform sampling in the random sampling scheme for computational convenience. We start with a simple SBM to test the effects of $n,K,\alpha$ on the approximation error, the misclassification error, and the estimation error for $B$, respectively. Then we extend our model setting to more complex models. Finally, we test the effect of the hyperparameters, including the power parameter $q$ and the oversampling parameter $r$ in the random projection scheme, and the sampling parameter $p$ in the random sampling scheme. \subsection{Theoretical bounds evaluation} \label{sub:sim1} {To be consistent with Section \ref{sec:theo}, we use the following three metrics to evaluate the theoretical performance of each method. The first one is the spectral deviation of the ``approximated'' adjacency matrix $\hat{A}$ from the population adjacency matrix $P$, namely, $\|\hat{A}-P\|_2$, where $\hat{A}$ can be $\tilde{A}^{\rm rs}$, $\tilde{A}^{\rm rp}$ or ${A}$. The second metric is the sum of the fractions of misclustered nodes within each true cluster, namely, \begin{align} \underset{J\in E_K}{\rm min}\,\underset{1\leq k\leq K}{\sum}\;(2n_k)^{-1}\|(\hat{{\Theta}}J)_{G_{k\ast}}-\Theta_{G_{k\ast}}\|_0,\nonumber \end{align} where $\hat{{\Theta}}$ can be ${\tilde{\Theta}}^{\rm rp}$, ${\tilde{\Theta}}^{\rm rs}$ or $\tilde{{\Theta}}$. The third metric is the deviation of the estimated link probability matrix $\hat{{B}}$ from the true link probability matrix $B$, namely, $\|{\hat{B}}-B\|_\infty$, where ${\hat{B}}$ can be $\tilde{{B}}^{\rm rp}$, $\tilde{{B}}^{\rm rs}$, or the counterpart corresponding to the original spectral clustering.
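For reference, the second metric can be computed by brute force over the $K!$ permutation matrices (an illustrative Python sketch added here, feasible only for small $K$; the toy labels below are our own example):

```python
import numpy as np
from itertools import permutations

def misclassification_error(Theta_hat, Theta):
    """Minimum over K x K permutation matrices J of
    sum_k (2 n_k)^{-1} ||(Theta_hat J)_{G_k*} - Theta_{G_k*}||_0."""
    K = Theta.shape[1]
    best = np.inf
    for perm in permutations(range(K)):
        J = np.eye(K)[:, list(perm)]          # permutation matrix
        diff = Theta_hat @ J - Theta
        err = 0.0
        for k in range(K):
            Gk = Theta[:, k] == 1             # nodes truly in cluster k
            err += np.count_nonzero(diff[Gk]) / (2 * Gk.sum())
        best = min(best, err)
    return best

# Example: 6 nodes, K = 2; labels are flipped and node 5 is misclustered,
# so the best permutation leaves one error out of three nodes in cluster 2.
Theta = np.eye(2)[np.array([0, 0, 0, 1, 1, 1])]
Theta_hat = np.eye(2)[np.array([1, 1, 1, 0, 0, 1])]
print(misclassification_error(Theta_hat, Theta))  # 1/3
```

The factor $(2n_k)^{-1}$ halves the count of nonzero entries, since one misclustered node changes two entries of its membership row.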
Throughout this subsection, the SBMs parameterized by $(\Theta,B)$ were homogeneously generated in the following way, \begin{equation} P=\Theta B\Theta^{\intercal}=\Theta(\alpha_n\lambda I_K+\alpha_n(1-\lambda)1_K1_K^{\intercal})\Theta^{\intercal},\nonumber \end{equation} where $1_K$ represents a $K$ dimensional vector of 1's and $\lambda$ is a constant, and the community sizes are balanced to be $n/K$. To see how the above-mentioned metrics change with $n$, $K$, $\alpha_n$, we conduct the following four experiments.} \paragraph{Experiment 1.} In this experiment, we aim to evaluate the effect of $n$ on the three metrics. To that end, we let $n$ vary while keeping the other parameters fixed at $K=3, \alpha_n=0.2, \alpha_n(1-\lambda)=0.1, q=2,r=10,p=0.7$. The random test matrix in the random projection scheme was generated with i.i.d. standard Gaussian entries. Figure \ref{effectofn} shows the average results of 20 replications, where ``non-random'' refers to the original spectral clustering. Recall that the error bound for $P$ increases with order $O(\sqrt{n})$, the error bound for $\Theta$ decreases with order $O(1/n)$, and the error bound for $B$ vanishes as $n$ goes to infinity. As expected, from Figure \ref{effectofn} we can see that the randomized methods perform worse than the original spectral clustering when $n$ is small, say $n<600$, but they become almost identical when $n$ becomes large, say $n>800$, which is actually the focus of this paper (see Figure \ref{effectofn}(b) and (c)). As for the approximation error, we see that the random projection and the random sampling perform better than the original spectral clustering (see Figure \ref{effectofn}(a)), which is partially due to constant factors. \paragraph{Experiment 2.} In this experiment, we evaluate the effect of $\alpha_n$ on the three metrics. We fix the sample size for the moment and focus on the influence of the maximum link probability $\alpha$.
Specifically, we let $\alpha$ vary and the between cluster probability was set as $\alpha(1-0.5)$, varying with $\alpha$. The sample size $n$ was fixed at 1152. The other parameters were the same as those in Experiment 1. Figure \ref{effectofalpha} displays the average results of 20 replications. By the theoretical results, we know that the error bound for $P$ increases with order $O(\sqrt{\alpha})$, the error bound for $\Theta$ decreases with order $O(1/\alpha_n)$, and the error bound for $B$ decreases ultimately after some increase at the beginning as $\alpha$ increases. The empirical results in Figure \ref{effectofalpha} coincide with the theoretical results in some sense. {The error for $P$ increases slowly with $\alpha_n$, while the errors for $\Theta$ and $B$ both decrease eventually with $\alpha_n$. In addition, the gap between the randomized and the original spectral clustering in Figure \ref{effectofalpha}(b) and (c) closes as $\alpha$ increases.} \paragraph{Experiment 3.} In this experiment, we test the effect of $K$ on the three metrics. Specifically, we let $K$ vary, the within cluster probability $\alpha=0.2$, and the between cluster probability $\alpha(1-0.5)=0.1$, respectively. The other parameters were the same as those in Experiment 2. The average results of 20 replications are shown in Figure \ref{effectofk}. The theoretical bounds indicate that the error bound for $\Theta$ increases with order $O(K^3)$, and the error bound for $B$ increases with $K$. {As expected, the empirical results support the theoretical findings (see Figure \ref{effectofk}(b) and (c)). The errors for $\Theta$ and $B$ both increase with $K$.} While for the approximation error, recall that our randomized $\tilde{A}$ attains the minimax optimal rate, which does not rely on $K$ (see Theorem 3.6 of \citet{gao2015rate}).
Empirically, from Figure \ref{effectofk}(a) we see that the approximation error for $P$ changes slowly as $K$ increases, which is partially due to the randomness throughout the experimental procedure. \paragraph{Experiment 4.} In the above three experiments, we fixed all the other parameters except the one of interest. Indeed, in view of the theoretical bounds, all the parameters can vary with $n$. To see the so-called high-dimensional performance of each method, in this experiment we consider a simple setting in which the within cluster and between cluster probabilities decrease with $n$ according to $\alpha_n=2/\sqrt{n}$ and $\alpha_n(1-0.5)=1/\sqrt{n}$, respectively. In such a setting, to ensure the decreasing trend of the misclassification error, $K$ should be of smaller order than $n^{1/6}$, which is rather small for $n$ smaller than, say, 1000. Hence we set $K=2$ for simplicity. The other parameters were the same as those in Experiment 2. Figure \ref{effectofnalpha} shows the average curves for each method in terms of the three metrics. {As expected, the misclassification error and the error for $B$ both decrease as $n$ increases, illustrating the high-dimensional nature of the theory. In addition, the performance of the randomized methods becomes close to that of the original spectral clustering as $n$ increases.} \begin{figure*}[!htbp]{} \centering \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofn_app.pdf}} \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofn_mis.pdf}} \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofn_link.pdf}} \caption{The average effect of $n$ on the three metrics over 20 replications. (a), (b), (c) correspond to the approximation error for $P$, the misclassification error for $\Theta$, and the estimation error for $B$, respectively. The other parameters $K=3, \alpha_n=0.2, \alpha_n(1-\lambda)=0.1$, $r=10$, $q=2$, $p=0.7$, and $\Omega$ had i.i.d.
standard Gaussian entries, respectively. }\label{effectofn} \end{figure*} \begin{figure*}[!htbp]{} \centering \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofalpha_app.pdf}} \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofalpha_mis.pdf}} \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofalpha_link.pdf}} \caption{The average effect of $\alpha$ on the three metrics over 20 replications. (a), (b), (c) correspond to the approximation error for $P$, the misclassification error for $\Theta$, and the estimation error for $B$, respectively. The other parameters $n=1152, K=3, \lambda=0.5$, $r=10$, $q=2$, $p=0.7$, and $\Omega$ had i.i.d. standard Gaussian entries, respectively.}\label{effectofalpha} \end{figure*} \begin{figure*}[!htbp]{} \centering \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofk_app.pdf}} \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofk_mis.pdf}} \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofk_link.pdf}} \caption{The average effect of $K$ on the three metrics over 20 replications. (a), (b), (c) correspond to the approximation error for $P$, the misclassification error for $\Theta$, and the estimation error for $B$, respectively. The other parameters $n=1152, \alpha_n=0.2, \alpha_n(1-\lambda)=0.1$, $r=10$, $q=2$, $p=0.7$, and $\Omega$ had i.i.d. standard Gaussian entries, respectively. }\label{effectofk} \end{figure*} \begin{figure*}[!htbp]{} \centering \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofnalpha_app.pdf}} \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofnalpha_mis.pdf}} \subfigure[]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofnalpha_link.pdf}} \caption{The average effect of $n$ and $\alpha_n$ on the three metrics over 20 replications. 
(a), (b), (c) correspond to the approximation error for $P$, the misclassification error for $\Theta$, and the estimation error for $B$, respectively. The within cluster probability $\alpha_n=2/\sqrt{n}$ and the between cluster probability $\alpha_n(1-0.5)=1/\sqrt{n}$. The other parameters $K=2$, $r=10$, $q=2$, $p=0.7$, and $\Omega$ had i.i.d. standard Gaussian entries, respectively. }\label{effectofnalpha} \end{figure*} \subsection{Model extensions} Besides the simple SBMs considered in subsection \ref{sub:sim1}, here we consider the following six more complex models. \begin{itemize} \item\textbf{ Model 1} (Full-rank SBM with random $B$): $K=3$, and the elements of $B$ are generated randomly according to $B_{ii}\sim {\rm Uniform}(0.2,0.3)$ and $B_{ij}\sim {\rm Uniform}(0.01,0.1)$; the community sizes are balanced. \item\textbf{ Model 2} (Full-rank SBM with random $B$ and unbalanced communities): The parameter set-up is identical to that of Model 1 except that the proportions of nodes in each community are $\frac{1}{6}, \frac{1}{2},\frac{1}{3}$, respectively. \item\textbf{ Model 3} (Rank-deficient SBM): $K=3$, and the community sizes are balanced. {The link probability matrix $B:=CC^\intercal$ where \begin{equation*} C:= \left[\begin{matrix} \frac{2{\rm sin}\,0}{3} & \frac{2{\rm cos}\,0}{3}\\ \frac{ {\rm sin}\,\frac{\pi}{5}}{2} & \frac{ {\rm cos}\,\frac{\pi}{5}}{2}\\ \frac{5 {\rm sin}\,\frac{2\pi}{5}}{6} & \frac{5 {\rm cos}\,\frac{2\pi}{5}}{6} \end{matrix}\right]. \end{equation*}} \item\textbf{ Model 4} (Full-rank DC-SBM): $K=3$, and the elements of $B$ are generated randomly according to $B_{ii}\sim {\rm Uniform}(0.4,0.6)$ and $B_{ij}\sim {\rm Uniform}(0.01,0.2)$. Within each true cluster $k$, the $\vartheta_{i\in G_k}$'s are i.i.d., taking the value 0.2 with probability (w.p.) 0.8 and 1 w.p. 0.2. The $\vartheta$'s are then normalized such that the maximum value is 1 within each true cluster.
\item\textbf{Model 5} (Full-rank DC-SBM with more heterogeneity): Except for the node propensity parameter $\vartheta$, the parameter set-up is identical to that of Model 4. The $\vartheta$'s are generated as follows. Within each true cluster $k$, the $\vartheta_{i\in G_k}$'s are i.i.d., taking the value 0.1 w.p. 0.4, 0.2 w.p. 0.4, and 1 w.p. 0.2. The $\vartheta$'s are then normalized such that the maximum value is 1 within each true cluster. \item\textbf{Model 6} (Rank-deficient DC-SBM): The parameter set-up is identical to that of Model 4 except for the formulation of $B$; in particular, $B$ is the same as that in Model 3. \end{itemize} Figures \ref{model123} and \ref{model456} display the averaged results over 20 replications for Models 1-3 and Models 4-6, respectively. It can be seen that for all the models we tested, the clustering performance of the randomized spectral clustering algorithms becomes close to that of the original spectral clustering as the sample size $n$ increases, coinciding with the theoretical results. {In the above experiments, we only considered assortative networks, where nodes tend to be connected with those in the same community, mainly because we require the link probability matrix to be diagonally dominant in some sense (Lemmas \ref{lem:eigencondition} and \ref{lem:eigen2condition}).
For disassortative networks, it is left as future work to study clustering methods specifically in the rank-deficient setting, though it has been suggested that using the absolute values of the eigenvalues helps find the proper communities in the full-rank setting \citep{rohe2011spectral}.} \begin{figure*}[!htbp]{} \centering \small (I) Approximation error of Model 1, Model 2 and Model 3.\\ \subfigure[Model 1]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m1_app.pdf}} \subfigure[Model 2]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m2_app.pdf}} \subfigure[Model 3]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m3_app.pdf}}\vspace{1cm}\\ (II) Misclassification error of Model 1, Model 2 and Model 3.\\ \subfigure[Model 1]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m1_mis.pdf}} \subfigure[Model 2]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m2_mis.pdf}} \subfigure[Model 3]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m3_mis.pdf}} \vspace{1cm}\\ (III) Estimation error for the link probability matrix of Model 1, Model 2 and Model 3.\\ \subfigure[Model 1]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m1_link.pdf}} \subfigure[Model 2]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m2_link.pdf}} \subfigure[Model 3]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m3_link.pdf}} \caption{Averaged results of each method over 20 replications on Models 1-3. Each column corresponds to a model.
The first, second and third rows correspond to the approximation error, the misclassification error, and the estimation error for the link probability matrix, respectively.}\label{model123} \end{figure*} \begin{figure*}[!htbp]{} \centering \small (I) Approximation error of Model 4, Model 5 and Model 6.\\ \subfigure[Model 4]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m4_app.pdf}} \subfigure[Model 5]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m5_app.pdf}} \subfigure[Model 6]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m6_app.pdf}}\vspace{1cm}\\ (II) Misclassification error of Model 4, Model 5 and Model 6.\\ \subfigure[Model 4]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m4_mis.pdf}} \subfigure[Model 5]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m5_mis.pdf}} \subfigure[Model 6]{\includegraphics[height=4.8cm,width=5cm,angle=0]{m6_mis.pdf}} \caption{Averaged results of each method over 20 replications on Models 4-6. Each column corresponds to a model. The first and second rows correspond to the approximation error and the misclassification error, respectively.}\label{model456} \end{figure*} \subsection{Additional experiments} To see how the hyperparameters $r,q,p$ and the distribution of the test matrix affect the performance of the corresponding methods, we conduct another series of experiments. Specifically, to remove the computational cost of finding the best permutation matrix over the permutation matrix set, we use the $\rm F_1$ score ($\rm F_1$), Normalized Mutual Information (NMI), and Adjusted Rand Index (ARI) \citep{hubert1985comparing,manning2010introduction} to evaluate the clustering performance of each method. These indices measure, from different perspectives, the similarity of two clusterings, here the estimated and the true clusters. The larger these indices, the better the clustering algorithm performs. The parameters were basically set as $n=1152$, $K=3$, and the within cluster probability $\alpha=0.2$.
To see the effect of the other parameters, we varied the oversampling parameter $r\in\{0, 4, 8, 12\}$, the power parameter $q\in\{2, 4, 6\}$, and the sampling rate $p\in\{0.6, 0.7, 0.8, 0.9\}$, and the test matrix $\Omega$ was generated with i.i.d. Gaussian (standard), uniform (from $-1$ to 1), and Rademacher (taking values $+1$ and $-1$ with equal probability) entries. For each setting, we let the between cluster probability vary. Figures \ref{senrp} and \ref{senrs} show the averaged results of the random projection scheme and the random sampling scheme, respectively. As expected, larger $r$, $q$ and $p$ lead to better clustering performance but at the cost of computational efficiency. One should choose these parameters according to the problem at hand. In addition, among the distributions of $\Omega$ we tested, the choice has little effect on the resulting clustering performance of the random projection. \begin{figure*}[!htbp]{} \centering (I) Effect of the oversampling parameter $r$\\ \subfigure[${\rm F}_1$]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofr_f1.pdf}} \subfigure[NMI]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofr_nmi.pdf}} \subfigure[ARI]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofr_ari.pdf}}\\ (II) Effect of the power parameter $q$\\ \subfigure[${\rm F}_1$]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofq_f1.pdf}} \subfigure[NMI]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofq_nmi.pdf}} \subfigure[ARI]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofq_ari.pdf}}\\ (III) Effect of the test matrix $\Omega$\\ \subfigure[${\rm F}_1$]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofd_f1.pdf}} \subfigure[NMI]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofd_nmi.pdf}} \subfigure[ARI]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofd_ari.pdf}} \caption{Effects of the parameters $r,q,\Omega$ in the random projection scheme.
Each row corresponds to the effect of one parameter with the others fixed. Each column corresponds to a measure of the clustering performance. The other parameters are fixed at $n=1152$, $K=3$, and the within-cluster probability $\alpha=0.2$. }\label{senrp} \end{figure*} \begin{figure*}[!htbp] \centering \subfigure[${\rm F}_1$]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofp_f1.pdf}} \subfigure[NMI]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofp_nmi.pdf}} \subfigure[ARI]{\includegraphics[height=4.8cm,width=5cm,angle=0]{effectofp_ari.pdf}}\\ \caption{Effect of the parameter $p$ within the random sampling scheme. Each column corresponds to a measure of the clustering performance. The other parameters are fixed at $n=1152$, $K=3$, and the within-cluster probability $\alpha=0.2$. }\label{senrs} \end{figure*} \section{Real data examples} \label{sec:real} In this section, we numerically evaluate the merits of randomized spectral clustering in terms of accuracy and efficiency. Specifically, we first compare the clustering accuracy of each method on four small-scale real networks, using the original spectral clustering as the baseline method. After that, we examine the computational efficiency as well as the relative error of each method on four large-scale networks, where we compare the randomized methods with several iterative methods. \subsection{Accuracy evaluation} In this subsection, we test the effectiveness of randomized spectral clustering on four network datasets: the European email network \citep{leskovec2007graph,yin2017local}, the political blog network \citep{adamic2005political}, and the statisticians' coauthor and citation networks \citep{ji2016coauthorship}, where the first two datasets have ground truth community assignments and the last two have no ground truth community assignments.
Table \ref{tablesmall} shows the basic statistics of the networks; for the first two networks, the target rank is set as the number of true clusters, while for the last two networks, the target rank follows \citet{ji2016coauthorship}. For the datasets with ground truth labels, we computed $\rm F_1$, NMI, and ARI \citep{hubert1985comparing,manning2010introduction} between the estimated clusters and the true clusters for each of the three methods, namely, the random projection, the random sampling, and the original spectral clustering. For the datasets without ground truth labels, we computed $\rm F_1$, NMI, and ARI between the clusters estimated by the randomized spectral clustering and the clusters estimated by the original spectral clustering. Our aim is to show that the randomized algorithms perform comparably to the original spectral clustering. Hence, for the datasets with ground truth labels, a smaller gap in $\rm F_1$, NMI, and ARI between the randomized and the original spectral clustering indicates a better match between these methods, while for the datasets without ground truth labels, larger $\rm F_1$, NMI, and ARI directly indicate a better match. For the random projection scheme, the oversampling parameter is $r=10$, the power parameter is $q=2$, and the random test matrix has i.i.d. Gaussian entries. For the random sampling scheme, we test two cases, namely $p=0.7$ and $p=0.8$. Table \ref{table} summarizes the average performance of these methods over 20 replications, with the standard deviations in parentheses. From Table \ref{table} we see that all the methods perform very similarly to each other in terms of $\rm F_1$, NMI, and ARI, and the results are rather stable, which shows the effectiveness of the randomized methods. \begin{table*}[!htbp] \centering \footnotesize \caption{ A summary of the four small-scale undirected networks. }\vspace{0.5cm} \def\arraystretch{1.5} \begin{tabular}{lccc} \hline {Networks}&{No. of nodes}&{No.
of edges}&{Target rank}\\ \hline European email network&986&16,064&42\\ Political blog network& 1,222&16,714&2\\ Statisticians coauthor network&2,263&4,388&3\\ Statisticians citation network&2,654&20,049&3\\ \hline \end{tabular} \label{tablesmall} \end{table*} \begin{table}[!htbp] \centering \footnotesize \caption{The clustering performance of each method on four real network datasets based on randomized spectral clustering algorithms. For the European email network and political blog network, the performance is evaluated based on a known ground truth. For the statisticians' coauthor and citation networks, the performance is evaluated based on the original spectral clustering. }\vspace{1cm} \def\arraystretch{1.5} \begin{tabular}{lccc} \hline Methods&$\rm F_1$ &NMI&ARI\\ \hline (a) European email network\\ Random Projection&0.165(0.007)&0.558(0.006)&0.100(0.009)\\ Random Sampling ($p=0.7$)&0.126(0.007)&0.417(0.010)&0.059(0.008)\\ Random Sampling ($p=0.8$)&0.131(0.005)&0.436(0.010)&0.064(0.006)\\ Non-Random&0.154(0.006)&0.571(0.005)&0.088(0.007)\\ (b) Political blog network\\ Random Projection&0.641(0.004)&0.178(0.004)&0.079(0.006)\\ Random Sampling ($p=0.7$)&0.642(0.003)&0.177(0.007)&0.077(0.007)\\ Random Sampling ($p=0.8$)&0.641(0.004)&0.177(0.008)&0.077(0.009)\\ Non-Random&0.641(0.004)&0.178(0.004)&0.079(0.006)\\ (c) Statisticians coauthor network (No true labels)\\ Random Projection (relative)&0.981(0.012)&0.646(0.197)&0.715(0.246)\\ Random Sampling (relative) ($p=0.7$)&0.970(0.011)&0.480(0.148)&0.593(0.193)\\ Random Sampling (relative) ($p=0.8$)&0.973(0.011)&0.544(0.142)&0.639(0.190)\\ (d) Statisticians citation network (No true labels)\\ Random Projection (relative)&0.990(0.021)&0.881(0.166)&0.926(0.140)\\ Random Sampling (relative) ($p=0.7$)&0.981(0.019)&0.759(0.125)&0.863(0.120)\\ Random Sampling (relative) ($p=0.8$)&0.981(0.022)&0.770(0.163)&0.861(0.149)\\ \hline \end{tabular} \label{table} \end{table} \subsection{Efficiency evaluation} In this subsection, we examine the
computational efficiency of the randomized methods for partial eigenvalue decomposition on four large-scale real undirected networks: the DBLP collaboration network, the Youtube social network, the Internet topology network, and the LiveJournal social network \citep{yang2015defining,leskovec2005graphs}. These four networks are large-scale, with up to millions of nodes and tens of millions of edges. Table \ref{table1} shows the basic statistics of the networks, where the target rank of a network is set to $k$ if there exists a large gap between the $k$-th and $(k+1)$-th largest (in absolute value) approximated eigenvalues. We compare the performance of our methods with iterative methods, including the implicitly restarted Lanczos algorithm \citep{calvetti1994implicitly} (\textsf{svds} in R package \textsf{RSpectra} \citep{rspectra}) and the augmented implicitly restarted Lanczos bidiagonalization algorithms \citep{baglama2005augmented} (\textsf{irlba} and \textsf{partial\_eigen} in R package \textsf{irlba} \citep{irlba}). In addition, we also compare our implementation with the randomized methods implemented in \textsf{svdr} in R package \textsf{irlba} \citep{irlba}. Note that the full eigenvalue decomposition always fails in such large-scale data settings. Table \ref{table2} shows the median computational time of each method over 20 replications, where all computations are done on a machine with an Intel Core i9-9900K CPU (3.60GHz), 32GB memory, and a 64-bit WS operating system; R version 4.0.4 is used for all computations. For the random projection-based method, the power parameter is 2 and the oversampling parameter is 10. For the random sampling-based method, the sampling probability is 0.7. We can see from Table \ref{table2} that our methods show a great advantage over the compared methods, especially when the network scale is large. In particular, the random sampling-based method is efficient whether the sampling time is included or not.
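For intuition, the random projection scheme follows the standard randomized range-finder recipe; below is a minimal dense-matrix sketch in Python (an illustration of ours with NumPy, not the \textsf{Rclust} implementation), with oversampling parameter $r$ and power parameter $q$:

```python
import numpy as np

def randomized_eig(A, k, r=10, q=2, rng=None):
    """Approximate the top-k eigenpairs of a symmetric matrix A via random
    projection: r is the oversampling parameter, q the power parameter."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    Omega = rng.standard_normal((n, k + r))   # Gaussian test matrix
    Y = A @ Omega                             # sketch of the range of A
    for _ in range(q):                        # power iterations sharpen the range
        Y = A @ (A @ Y)
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis of the sketch
    B = Q.T @ A @ Q                           # small (k+r) x (k+r) problem
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(np.abs(vals))[::-1][:k]  # top-k by magnitude
    return vals[idx], Q @ vecs[:, idx]
```

For large sparse networks, the dense products above would be replaced by sparse matrix–vector multiplications, which is where the computational savings reported in Table \ref{table2} come from.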
Figure \ref{comparari} shows the pairwise comparison of the clustering results of six methods on these four networks. The relative clustering performance is measured by ARI. It turns out that the random projection-based method and the random sampling-based method yield results similar to those of the other compared methods, though the random sampling-based method seems to behave slightly differently. Overall, as indicated by our theory and experiments, the randomized methods bring high efficiency while slightly sacrificing accuracy. In real-world applications, one should balance the accuracy-efficiency trade-off by selecting appropriate hyperparameters according to the setting at hand. \begin{table*}[!htbp] \centering \footnotesize \caption{ A summary of the four large-scale undirected networks.}\vspace{0.5cm} \def\arraystretch{1.5} \begin{tabular}{lccc} \hline {Networks}&{No. of nodes}&{No. of edges}&{Target rank}\\ \hline DBLP collaboration network&317,080&1,049,866&3\\ Youtube social network& 1,134,890&2,987,624&7\\ Internet topology graph&1,696,415&11,095,298&4\\ LiveJournal social network&3,997,962&34,681,189&4\\ \hline \end{tabular} \label{table1} \end{table*} \begin{table*}[!htbp] \centering \footnotesize \caption{ Median time (seconds) of each method for computing the (approximated) eigenvectors of four real network adjacency matrices over 20 replications; for the random sampling, the times with the sampling time included and excluded (shown in the parentheses) are reported, respectively.
} \vspace{0.5cm} \def\arraystretch{1.5} \begin{tabular}{p{1.6cm}p{1.6cm}p{3cm}p{1.6cm}p{1.6cm}p{1.6cm}p{1.6cm}} \hline {Networks}&Random projection&{Random sampling}&{\textsf{irlba}}&\textsf{svds}&\textsf{svdr}&\textsf{partial\_eigen}\\ \hline DBLP &0.369&0.280(0.248)&0.341&0.411&6.132&0.346\\ Youtube & 2.037&2.302(2.204)&2.487&3.043&35.595&9.111 \\ Internet &2.773&2.072(1.774)&3.404&3.332&30.900&7.706\\ LiveJournal &13.213&7.207(6.216)&15.179&20.077&106.166&15.080\\ \hline \end{tabular} \label{table2} \end{table*} \begin{figure*}[!htbp] \centering \subfigure[DBLP, $K=3$]{\includegraphics[height=3.2cm,width=5cm,angle=0]{dblp.pdf}}\hspace{1cm} \subfigure[Youtube, $K=7$]{\includegraphics[height=3.2cm,width=5cm,angle=0]{youtube.pdf}}\\ \subfigure[Internet, $K=4$]{\includegraphics[height=3cm,width=5cm,angle=0]{skitter.pdf}}\hspace{1cm} \subfigure[LiveJournal, $K=4$]{\includegraphics[height=3cm,width=5.1cm,angle=0]{lj.pdf}}\\ \caption{The pairwise comparison of the clustering results of six methods on four large-scale networks. The relative clustering performance is measured by ARI. A larger ARI, i.e., a larger circle in the figure, indicates that the clustering results of the two methods are closer.}\label{comparari} \end{figure*} \section{Conclusion} \label{sec:dissc} In this paper, we used randomized sketching techniques to accelerate spectral clustering on large-scale networks, say networks with millions of nodes, and studied how well the resulting algorithms perform under the SBMs and DC-SBMs. We studied two randomized spectral clustering algorithms. The first one is random projection-based, which reduces the computational cost by projecting the columns and rows of the adjacency matrix to a lower-dimensional space. The second one is random sampling-based, which samples the edges to obtain a sparsified adjacency matrix, thus reducing the computational cost of the partial eigendecomposition.
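As a concrete recap, the edge-sampling step of the second algorithm can be sketched as follows (an illustrative Python/SciPy version of ours, not the paper's R implementation, assuming a symmetric, zero-diagonal adjacency matrix):

```python
import numpy as np
import scipy.sparse as sp

def sample_sparsify(A, p, rng=None):
    """Edge sampling for a symmetric, zero-diagonal adjacency matrix:
    keep each edge independently with probability p and reweight the
    kept edges by 1/p, so the sparsified matrix is unbiased for A."""
    rng = np.random.default_rng(rng)
    U = sp.triu(A, k=1).tocoo()          # each undirected edge appears once
    keep = rng.random(U.nnz) < p
    S = sp.coo_matrix((U.data[keep] / p, (U.row[keep], U.col[keep])),
                      shape=A.shape)
    return (S + S.T).tocsr()             # symmetrize back
```

The sparsified matrix is then passed to a partial eigensolver in place of the original adjacency matrix, which is cheaper because the number of nonzeros has shrunk by roughly a factor of $p$.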
In the framework of SBMs, we studied these two randomized spectral clustering algorithms in terms of the approximation error, which measures the deviation of the randomized adjacency matrix $\tilde{A}$ from the population matrix $P$; the misclassification error, which measures the fraction of mis-clustered nodes over the total number of nodes; and the estimation error for the link probability matrix $B$. In particular, we considered a more general setting where ${\rm rank}(B)$ could be smaller than the cluster number $K$. Under mild conditions, the approximation error turns out to be statistically optimal, which shows that the randomized matrix behaves as if it were sampled from the SBM. We also extended the theoretical results to DC-SBMs. Experimental results showed the merits of randomized spectral clustering on networks with up to millions of nodes. For practical convenience, we developed an R package \textsf{Rclust}. There are many ways in which the content of this paper can be extended. First, we studied the weak consistency of the pure spectral clustering without any regularization or refinement, and we mainly used the Davis-Kahan theorem to study the eigenvector perturbation. There exist several works on trimming or refining the pure spectral clustering to help the refined spectral clustering achieve the information-theoretic limit of exact recovery (strong consistency) or the minimax optimal rate of partial recovery (weak consistency) under SBMs; see \citet{gao2017achieving,yun2016optimal}, among others. It would be interesting to study whether one could use similar treatments on the randomized spectral clustering in order to improve its theoretical performance without increasing the time cost. On the other hand, a few recent works study the entry-wise perturbation of eigenvectors; see \citet{cape2019signal,tang2021asymptotically,abbe2020entrywise,su2019strong}, among others.
It would be important and insightful to study the entry-wise perturbation of eigenvectors after randomization, and also the related strong consistency in SBMs. Second, although the approximation error is minimax optimal under SBMs, it would be important to develop advanced randomization techniques in order to weaken the conditions on $\alpha_n$, $q$, and the $p_{ij}$'s. In addition, we mainly focused on the adjacency matrix sampled from the SBMs. It would be interesting to generalize the results to the Laplacian matrix and to other network generating models, say, the latent space model and the graphon models, among others. Finally, it would be important to study the estimation of the model parameters $K$ and ${\rm rank}(B)$ \citep{fishkind2013consistent,ma2021determining}. \begin{center} {\large ACKNOWLEDGMENT} \end{center} We are grateful to the editor, associate editor, and two reviewers for their consideration of our paper and for their helpful suggestions, which led to an improved version of this paper. We also thank Professor Yixuan Qiu for his great help in developing the R package. Our research is partially supported by the National Natural Science Foundation of China (No. U1811461), the National Natural Science Foundation for Outstanding Young Scholars (No. 72122018), and the Natural Science Foundation of Shaanxi Province (No. 2021JQ-429 and No. 2021JC-01). \bibliographystyle{plainnat}
\section{Introduction} Generalizability, which enables an algorithm to handle unseen tasks, is fruitful yet challenging in a variety of decision-making domains. Recent literature~\cite{xu2021bayesian, cranmer2006discovering, li2020ngs} reveals the critical role of reasoning in improving the generalization of reinforcement learning (RL). However, most off-the-shelf RL algorithms~\cite{sutton2018reinforcement} have not regarded reasoning as an indispensable component, and thus usually suffer from data inefficiency and performance degradation due to the mismatch between training and testing settings. To attain generalization at the testing stage, some efforts were put into incorporating domain knowledge to learn structured information, including sub-task decomposition~\cite{lu2021learning} and program generation~\cite{yang2021program, landajuela2021discovering, han2020neuro} and execution~\cite{zhao2021proto, sun2019program}, which guide the model to solve complicated tasks in an explainable way. However, such symbolism-dominant methods heavily depend on the re-usability of sub-tasks and pre-defined grammars, which may not always be accessible in decision-making tasks. Inspired by the close link between reasoning and cause-and-effect relationships, causality has recently been incorporated to compactly represent the aforementioned structured knowledge in RL training~\cite{gershman2017reinforcement}. Based on the form of causal knowledge, we divide the related works into two categories, i.e., \textit{implicit} and \textit{explicit} causation. With \textit{implicit} causal representations, researchers ignore the detailed causal structure. For instance, \cite{zhang2020invariant} extracts invariant features as one node that influences the reward function, while the other node consists of task-irrelevant features~\cite{tomar2021model, sontakke2021causal, sodhani2022improving, bica2021invariant}.
This neat structure has good scalability but requires access to multiple environments that share the same invariant features~\cite{zhang2020invariant, bica2021invariant, han2021learning}. One can also turn to the \textit{explicit} side by estimating detailed causal structures~\cite{wangtask, volodin2020resolving, seitzer2021causal, gasse2021causal}, which use directed graphical models to capture the causality in the environment. A prerequisite for this estimation is an object-level or event-level abstraction of the observation, which is available in most tasks and is also becoming a frequently studied problem~\cite{abel2022theory, shanahan2022abstraction, abel2018state}. However, existing \textit{explicit} causal reasoning RL models either rely on heuristic designs without theoretical guarantees~\cite{wangtask} or require the true causal graph~\cite{nair2019causal}. In this paper, we propose \textit{\textbf{G}ene\textbf{RA}lizing by \textbf{D}iscov\textbf{ER}ing (GRADER)}, a causal reasoning method that augments the RL algorithm with data efficiency, interpretability, and generalizability. We mainly focus on Goal-Conditioned RL (GCRL), where different goal distributions during training and testing reflect the generalization. We formulate GCRL as a probabilistic inference problem~\cite{levine2018reinforcement} with a learnable causal graph as the latent variable. This novel formulation naturally explains the learning objective with three components, i.e., transition model learning, planning, and causal graph discovery, leading to an optimization framework that alternates between causal structure discovery and policy learning to gain generalizability. Under some mild conditions, we prove the unique identifiability of the causal graph and the theoretical performance guarantee of the proposed framework. To demonstrate the effectiveness of the proposed method, we conduct comprehensive experiments in environments that require strong reasoning capability.
Specifically, we design two types of generalization settings, i.e., spuriousness and composition, and provide an example to illustrate these settings in Figure~\ref{fig:motivation}. The evaluation results confirm our method's advantages in several aspects. First, the proposed data-efficient discovery method provides an explainable causal graph yet requires much less data than score-based methods, increasing data efficiency and interpretability during task solving. Second, simultaneously discovering the causal graph during policy learning dramatically increases the success rate of solving tasks. In summary, the contribution of this paper is threefold: \vspace{-2mm} \begin{itemize}[leftmargin=0.3in] \item We use the causal graph as a latent variable to reformulate the GCRL problem and then derive an iterative training framework by solving this problem. \item We prove that our method uniquely identifies the true causal graph, and that the performance of the iterative optimization is guaranteed with a lower bound given converged transition dynamics. \item We design nine tasks in three environments that require strong reasoning capability and show the effectiveness of the proposed method against five baselines on these tasks. \vspace{-4mm} \end{itemize} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figs/motivation_2.pdf} \caption{The agent picks and places objects to achieve given goals. \textbf{Left:} The causal graph of this task. \textbf{Right:} Training setting: the hexagon is always pink, and the box is always blue. Three testing settings: (1) In distribution: the same as the training setting. (2) Spuriousness: swap the color and shape to break the spurious correlation. (3) Composition: increase the height of the goal.} \label{fig:motivation} \vspace{-4mm} \end{figure} \section{Problem Formulation and Preliminary} We start by discussing the setting of RL we use in this paper and the assumptions required for causal reasoning.
Then we briefly introduce the necessary concepts related to causality and causal discovery. \subsection{Factorized Goal-conditioned RL} \label{FactorMDP} We assume the environment follows the Goal-conditioned Markov Decision Process (MDP) setting with full observation. This setting is represented by a tuple $\mathcal{M}=(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, {G})$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}$ is the probabilistic transition model, ${G} \subset \mathcal{S}$ is the goal space, which is a set of assignments of values to states, and $r(s, g) = \mathds{1}(s = g) \in \mathcal{R}$ is the sparse deterministic reward function that returns 1 only if the state $s$ matches the goal $g$. In this paper, we focus on the goal-conditioned generalization problem, where the goals for the training and testing stages are sampled from different distributions $p_{\text{train}}(g)$ and $p_{\text{test}}(g)$. We refer to a goal $g \in G$ as a task and use these two terms interchangeably. To enable the causal discovery methods, we make a further assumption similar to~\cite{seitzer2021causal, boutilier2000stochastic} for the state and action space: \begin{assumption}[Space Factorization]\label{assumption_factorize} The state space $\mathcal{S} = \{ \mathcal{S}_1 \times \cdots \times \mathcal{S}_M \}$ and action space $\mathcal{A} = \{ \mathcal{A}_1 \times \cdots \times \mathcal{A}_N \}$ can be factorized into disjoint components $\{\mathcal{S}_i\}_{i=1}^{M}$ and $\{\mathcal{A}_i\}_{i=1}^{N}$. \end{assumption} The components, each representing one event or one object's property, usually have explicit semantic meanings for better interpretability. This assumption can be satisfied by state and action abstraction, which has been widely investigated in~\cite{abel2022theory, shanahan2022abstraction, abel2018state}.
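To make the notation concrete, here is a toy sketch of a factored state and the sparse goal-conditioned reward (illustrative component names of ours, not the paper's environments; for convenience the goal here constrains a subset of components):

```python
# A factored state assigns a value to each disjoint component of
# S_1 x ... x S_M; a goal is an assignment of values to components.
def sparse_reward(s, g):
    """r(s, g) = 1 iff the state matches the goal on every goal component."""
    return int(all(s[k] == g[k] for k in g))

state = {"agent_pos": (1, 2), "box_color": "blue", "box_pos": (0, 0)}
goal  = {"box_pos": (0, 0), "box_color": "blue"}
print(sparse_reward(state, goal))                         # 1: goal achieved
print(sparse_reward({**state, "box_pos": (3, 3)}, goal))  # 0: box misplaced
```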
Such factorization also helps deal with high-dimensional states, since it might be impossible to treat each dimension as one random variable~\cite{wangtask}. \vspace{-2mm} \subsection{Causal Reasoning with Graphical Models} \label{cg_scm} Reasoning with causality relies on specific causal structures, which are commonly represented as directed acyclic graphs (DAGs)~\cite{peters2017elements} over variables. Consider random variables $\bm{X} = (X_1,\dots, X_d)$ with index set $\bm{V}:= \{1,\dots,d \}$. A graph ${\mathcal{G}} = (\bm{V}, \mathcal{E})$ consists of nodes $\bm{V}$ and edges $\mathcal{E} \subseteq \bm{V}^2$ without self-loops $(v, v)$ for any $v \in \bm{V}$. A node $i$ is called a parent of $j$ if $e_{ij} \in {\mathcal{E}}$ and $e_{ji} \notin {\mathcal{E}}$. The set of parents of $j$ is denoted by $\textbf{PA}^{{\mathcal{G}}}_j$. We formally discuss the graph representation of causality with the following definitions: \begin{definition}[Structural Causal Models~\cite{peters2017elements}] A structural causal model (SCM) $\mathfrak{C}:= (\bm{S}, \bm{U})$ consists of a collection $\bm{S}$ of $d$ functions, \begin{equation} X_j := f_j(\textbf{PA}^{{\mathcal{G}}}_j, U_j),\ \ \ \ j \in [d], \end{equation} where $\textbf{PA}_j \subset \{ X_1,\dots,X_d \} \backslash \{X_j \}$ are called the parents of $X_j$; and a joint distribution $\bm{U} = \{ U_1,\dots,U_d \}$ over the noise variables, which are required to be jointly independent. \label{def_scm} \end{definition} \begin{definition}[Causal Graph~\cite{peters2017elements}, CG] The causal graph ${\mathcal{G}}$ of an SCM is obtained by creating one node for each $X_j$ and drawing directed edges from each parent in $\textbf{PA}^{{\mathcal{G}}}_j$ to $X_j$. \end{definition} We note that the CG describes the structure of the causality, while the SCM further specifies the causation from the parents of $X_j$ to $X_j$ as well as the exogenous noises $U_j$.
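To illustrate Definition~\ref{def_scm}, a toy three-variable SCM can be simulated by evaluating each $f_j$ in topological order on its parents plus independent noise (a hypothetical example of ours, not one of the paper's environments):

```python
import numpy as np

def sample_scm(n, noise_scale=1.0, rng=None):
    """Toy SCM:  X1 := U1,  X2 := 2*X1 + U2,  X3 := X1 - X2 + U3,
    evaluated in topological order with jointly independent noises."""
    rng = np.random.default_rng(rng)
    U = rng.standard_normal((3, n))
    X1 = U[0]                             # no parents
    X2 = 2 * X1 + noise_scale * U[1]      # PA(X2) = {X1}
    X3 = X1 - X2 + noise_scale * U[2]     # PA(X3) = {X1, X2}
    return X1, X2, X3
```

The causal graph of this SCM has edges $X_1 \to X_2$, $X_1 \to X_3$, and $X_2 \to X_3$; with the noises on $X_2, X_3$ switched off, the structural equations hold exactly.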
To uncover the causal structure from the data distribution, we assume that the CG satisfies the \textit{Markov Property} and \textit{Faithfulness}~\cite{peters2017elements}, which make the independencies consistent between the joint distribution $P(X_1,\dots,X_d)$ and the graph ${\mathcal{G}}$. We also follow the \textit{Causal Sufficiency} assumption~\cite{spirtes2000causation}, which supposes that we have measured all the common causes of the measured variables. Existing work~\cite{pitis2020counterfactual, seitzer2021causal} assumes that two objects have causation only if they are close enough, while there is no edge between them if the distance is large. Instead of taking such a local view of the causality, we assume the causal graph is consistent across all time steps. Still, the causal influence could be local, and the specific influence indicated by the edges is estimated by the function $f_j(\textbf{PA}_j, U_j)$. \begin{wrapfigure}{r}{0.5\textwidth} \vspace{-5mm} \centering \includegraphics[width=0.5\textwidth]{figs/framework_2.pdf} \caption{The paradigm of GRADER.} \label{fig:framework} \vspace{-10mm} \end{wrapfigure} \section{Generalizing by Discovering (GRADER)} With proper definitions and assumptions, we now turn to the proposed method. We first derive the framework of GRADER by formulating the GCRL problem as a latent variable model, which provides a variational lower bound to optimize. Then, we divide this objective function into three parts and iteratively update them with a specific performance guarantee. \subsection{GCRL as Latent Variable Models} \label{gcrl_lvm} The general objective of RL is to maximize the expected reward w.r.t. a learnable policy model $\pi$. In particular, in the goal-conditioned setting, this objective is represented as $\max_{\pi} \mathbb{E}_{\tau \sim \pi, g\sim p(g)}[ \sum_{t=0}^{T} r(s^t, g)]$, where $p(g)$ is the distribution of goals and $\tau := \{ s^0,a^0,\dots,s^{T} \}$ is the action-state trajectory with maximal time step $T$.
The trajectory ends only if the goal is achieved ($g = s^T$) or the maximal step is reached. Inspired by viewing the RL problem as probabilistic inference~\cite{abdolmaleki2018maximum, levine2018reinforcement}, we replace the question \textit{"How to find actions to achieve the goal?"} with \textit{"What are the actions if we achieve the goal?"}, leading to a likelihood maximization problem for $p(\tau|s^*)$ with $s^*:= \mathds{1}(g=s^T)$. Different from previous work~\cite{marino2019inference}, which recasts actions as latent variables and infers actions that result in "observed" high reward, we decompose $p(\tau|s^*)$ with ${\mathcal{G}}$ as the latent variable to get the evidence lower bound (ELBO) \begin{equation} \log p(\tau|s^*) = \log \int p(\tau|{\mathcal{G}}, s^*) p({\mathcal{G}} | s^*) d{\mathcal{G}} \geq \mathbb{E}_{q({\mathcal{G}}|\tau)}[\log p(\tau|{\mathcal{G}}, s^*)] - \mathbb{D}_{\text{KL}}[q({\mathcal{G}}|\tau) || p({\mathcal{G}})], \\ \label{elbo} \end{equation} where the prior $p({\mathcal{G}})$ and the variational posterior $q({\mathcal{G}}|\tau)$ represent distributions over graph structures, i.e., the probabilities of the existence of edges in the graph. $\mathbb{D}_{\text{KL}}$ represents the Kullback–Leibler (KL) divergence between distributions over graphs, which will be explained in Section~\ref{causal_discovery}. Recall that the goal space $G$ is a subset of the state space; here we extend the meaning of $g$ and assume all trajectories achieve the goal in the final state~\cite{andrychowicz2017hindsight}, i.e., $g = s^T$. This extension makes it possible to further decompose the first term of (\ref{elbo}) as (refer to Appendix~\ref{app:derivation_transition}) \begin{equation} \log p(\tau|{\mathcal{G}}, s^*) = \log p(s^0) + \sum_{t=0}^{T-1} \log p(s^{t+1}| s^{t}, a^{t}, {\mathcal{G}}) + \sum_{t=0}^{T-1} \log \pi(a^{t}|s^{t}, s^*, {\mathcal{G}}) + \log p(g).
\label{transition} \end{equation} Here, we use the facts that ${\mathcal{G}}$ affects both the transition model $p(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$ and the policy model $\pi(a^{t}|s^{t}, s^*, {\mathcal{G}})$, that $g$ only influences the policy model, and that the initial state $s^0$ depends neither on ${\mathcal{G}}$ nor on $g$. We assume that both the initial state $s^0$ and the goal $g$ follow uniform distributions; thus, the first and last terms of (\ref{transition}) are constants. The policy term $\pi$, which selects the action $a^t$ according to both the current state $s^t$ and the goal $g$, is implemented with a planning method and is further discussed in Section~\ref{model_planning}. Finally, we maximize the likelihood $p(\tau|s^*)$ with the following reformulated ELBO as the objective \begin{equation} \begin{aligned} {\mathcal{J}}(\theta,\phi) = \mathbb{E}_{ q_{\phi}({\mathcal{G}}|\tau)}\sum_{t=0}^{T-1} \left[ \log p_{\theta}(s^{t+1}|s^{t}, a^{t}, {\mathcal{G}}) + \log \pi_{\theta}(a^{t}|s^{t}, s^{*}, {\mathcal{G}})\right] - \mathbb{D}_{\text{KL}}[q_{\phi}({\mathcal{G}}|\tau) || p({\mathcal{G}})] \label{elbo_final} \end{aligned} \end{equation} where $\theta$ is the shared parameter of the transition model $p_{\theta}(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$ and the policy $\pi_{\theta}(a^{t}|s^{t}, s^{*}, {\mathcal{G}})$, and $\phi$ is the parameter of the causal graph posterior $q_{\phi}({\mathcal{G}}|\tau)$. To efficiently solve this optimization problem, we iteratively update the parameter $\phi$ (causal discovery, Section~\ref{causal_discovery}) and the parameter $\theta$ (model and policy learning, Section~\ref{model_planning}), as shown in Figure~\ref{fig:framework}. Intuitively, these processes can be viewed, respectively, as the discovery of the CG and the update of the SCM, which share the tight connections discussed in Section~\ref{cg_scm}.
\subsection{Model and Policy Learning} \label{model_planning} Let us start with a simple case where we have already obtained a ${\mathcal{G}}$ and use it to guide the learning of the parameter $\theta$ via $\max_{\theta} {\mathcal{J}}(\theta,\phi)$. Since the KL divergence term of ${\mathcal{J}}(\theta,\phi)$ does not involve $\theta$, we only need to deal with the first expectation term, the likelihood of the transition model and the policy. For the transition $p_{\theta}(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$, we connect it with the causal structure by further defining a particular type of CG, which we denote as ${\mathcal{G}}$ in the rest of this paper: \begin{definition}[Transition Causal Graph] \label{transition_cg} We define a bipartite graph ${\mathcal{G}}$ whose vertices are divided into two disjoint sets $\mathcal{U}= \{\mathcal{A}^{t}, \mathcal{S}^{t} \}$ and $\mathcal{V}=\{ \mathcal{S}^{t+1} \}$. $\mathcal{A}^{t}$ represents the action nodes at step $t$, $\mathcal{S}^{t}$ the state nodes at step $t$, and $\mathcal{S}^{t+1}$ the state nodes at step $t+1$. All edges start from the set $\mathcal{U}$ and end in the set $\mathcal{V}$. \end{definition} \textbf{Model learning.} This definition builds the causal graph between two consecutive time steps, which indicates that the values of the states at step $t+1$ depend on the values at step $t$. It also implies that interventions~\cite{peters2017elements} on nodes in $\mathcal{U}$ are directly obtained since they have no parents. We denote the marginal distribution of $\mathcal{S}$ as $p_{{\mathcal{I}}_\pi^s}$, which is collected by the RL policy $\pi$. Combined with Definition~\ref{def_scm} of the SCM, we find that $p_{\theta}(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$ essentially approximates a collection of functions $f_j$ following the structure ${\mathcal{G}}$, which take as input the values of the parents of the state node $s_j$ and output the value of $s_j$.
Thus, we propose to model the transition corresponding to ${\mathcal{G}}$ with a collection of neural networks $f_{\theta}({\mathcal{G}}) := \{ f_{\theta_j} \}_{j=1}^{M}$ to obtain \begin{equation} {s}^{t+1}_j = f_{\theta_j}([ \textbf{PA}_j^{{\mathcal{G}}}]^{t}, U_j), \label{gru_inference} \end{equation} where $[\textbf{PA}_j^{{\mathcal{G}}}]^{t}$ represents the values of all parents of the node $s^{t}_j$ at time step $t$ and $U_j$ follows Gaussian noise $U_j \sim \mathcal{N}(0, \textbf{I})$. In practice, we use a Gated Recurrent Unit~\cite{chung2014empirical} as $f_j$ because it supports varying numbers of input nodes. We take $s^{t}_j$ as the initial hidden embedding and the remaining parents $[ \textbf{PA}_j^{{\mathcal{G}}}\backslash s_j ]^{t}$ as the input sequence to $f_j$. The entire model is optimized by stochastic gradient descent with the log-likelihood $\log p_{\theta}(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$ as the objective. \textbf{Policy learning with planning.} We now turn to the policy term $\pi_{\theta}(a^{t}|s^{t}, s^{*}, {\mathcal{G}})$ in ${\mathcal{J}}(\theta,\phi)$. We optimize it with planning methods that leverage the estimated transition model. Specifically, the policy aims to optimize an action-state value function $Q(s^t, a^t)=\mathbb{E}\left[\sum_{t'=0}^H \gamma^{t'} r(s^{t'+t}, a^{t'+t}) | s^t, a^t\right]$, which can be obtained by unrolling the transition model over a horizon of $H$ steps with discount factor $\gamma$. In practice, we use model predictive control (MPC)~\cite{camacho2013model} with random shooting~\cite{richards2005robust}, which selects the first action of the fixed-horizon trajectory that has the highest action-state value $Q(s^t, a^t)$, i.e., $\hat{\pi}(s^t) = \argmax_{a^t\in {\mathcal{A}}} Q_\theta^{\mathcal{G}} (s^t, a^t)$. The formulation we derived so far is highly correlated with the model-based RL framework~\cite{wang2019benchmarking}.
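For concreteness, the random-shooting planner can be sketched as follows (a generic illustration of ours with a discrete action space and a learned one-step model; the toy chain in the test is not one of the paper's environments):

```python
import numpy as np

def random_shooting(step_model, reward_fn, s0, n_actions,
                    horizon=5, n_samples=512, gamma=0.99, rng=None):
    """Return the first action of the best randomly sampled action
    sequence, scored by unrolling a one-step model step_model(s, a)
    and accumulating discounted rewards reward_fn(s)."""
    rng = np.random.default_rng(rng)
    best_ret, best_first = -np.inf, 0
    for _ in range(n_samples):
        seq = rng.integers(n_actions, size=horizon)  # random action sequence
        s, ret = s0, 0.0
        for t, a in enumerate(seq):
            s = step_model(s, int(a))                # unroll the learned model
            ret += (gamma ** t) * reward_fn(s)       # discounted return
        if ret > best_ret:
            best_ret, best_first = ret, int(seq[0])
    return best_first
```

Only the first action of the winning sequence is executed; the plan is then recomputed from the next state, which is the usual MPC loop.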
However, the main difference is that we obtain it with variational inference by regarding the causal graph as a latent variable. \vspace{-3mm} \subsection{Data-Efficient Causal Discovery} \label{causal_discovery} \begin{wrapfigure}{r}{0.43\textwidth} \vspace{-5mm} \newcommand\mycommfont[1]{\small\ttfamily\textcolor{RoyalBlue}{#1}} \SetCommentSty{mycommfont} \begin{algorithm}[H] \caption{GRADER Training} \label{algorithm1} \KwIn{Trajectory buffer $\mathcal{B}_{\tau}$, Causal graph ${\mathcal{G}}$, Transition model $f_{\theta}$} \While{$\theta$ not converged} { \tcp{Policy from planning} Sample a goal $g\sim p_{\text{train}}(g)$ \\ \While{$t < T$} { $a^t \leftarrow$ Planner($f_{\theta}$, $s^t$, $g$) \\ $s^{t+1}$, $r^t \leftarrow$ Env($a^t$, $g$) \\ $\mathcal{B}_{\tau} \leftarrow \mathcal{B}_{\tau}\cup\{a^t, s^t, s^{t+1} \} $ } \tcp{Estimate causal graph} \For{$i \leq M + N$} { \For{$j \leq M$} { Infer edge $e_{ij} \leftarrow q_{\phi}(\cdot|\mathcal{B}, \eta)$ } } \tcp{Learn transition model} Update $f_{\theta}({\mathcal{G}})$ via (\ref{gru_inference}) with $\mathcal{B}$ \\ } \end{algorithm} \vspace{-6mm} \end{wrapfigure} In this step, we relax the assumption of knowing ${\mathcal{G}}$ and aim to estimate the posterior distribution $q({\mathcal{G}}|\tau)$ to increase the ELBO~(\ref{elbo_final}) w.r.t. parameter $\phi$. In most score-based methods~\cite{chickering2002optimal}, likelihood is used to evaluate the correctness of the causal graph, i.e., a better causal graph leads to a higher likelihood. Since the first term of (\ref{elbo_final}) represents the likelihood of the transition model, we convert the problem of $\max_{\phi} {\mathcal{J}}(\theta, \phi)$ to the causal discovery that finds the true causal graph based on collected data samples. 
As for the second term of (\ref{elbo_final}), the following proposition shows that the KL divergence between $q_{\phi}({\mathcal{G}}|\tau)$ and $p({\mathcal{G}})$ can be approximated by a sparsity regularization~(proof in Appendix~\ref{app:approx_kld}). \begin{proposition}[KL Divergence as Sparsity Regularization] With entry-wise independent Bernoulli prior $p({\mathcal{G}})$ and point mass variational distribution $q({\mathcal{G}}|\tau)$ of DAGs, $\mathbb{D}_{\text{KL}}[q_{\phi} \| p]$ is equivalent to an $\ell_1$ sparsity regularization for the discovered causal graph. \label{approx_kld} \end{proposition} We restrict the posterior $q_{\phi}({\mathcal{G}}|\tau)$ to a point mass distribution and use a threshold $\eta$ to control the sparsity. We perform the discovery process from the classification perspective by proposing binary classifiers $q_{\phi}(e_{ij}|\tau, \eta)$ to determine the existence of an edge $e_{ij}$. This classifier $q_{\phi}(e_{ij}|\tau, \eta)$ is implemented by a statistical \textit{Independence Test}~\cite{chalupka2018fast}, and $\eta$ is the threshold for the p-value of the hypothesis. A larger $\eta$ corresponds to stricter sparsity constraints, thus leading to a sparser ${\mathcal{G}}$ since two nodes are more likely to be considered independent. According to Definition~\ref{transition_cg}, we only need to classify edges connecting nodes between $\mathcal{U}$ and $\mathcal{V}$. If two nodes are dependent, we add one edge directed from the node in $\mathcal{U}$ to the node in $\mathcal{V}$. This definition also ensures that we always have $q({\mathcal{G}}|\tau)\in {\mathcal{Q}}_{DAG}$, where ${\mathcal{Q}}_{DAG}$ is the class of DAGs.
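A minimal sketch of this constraint-based discovery step for binary discrete variables is given below. It is not the paper's implementation: for simplicity the $\chi^2$ statistic of a contingency table is thresholded at the 95\% critical value for one degree of freedom ($\approx 3.84$) instead of converting to a p-value and thresholding at $\eta$, and the variable names are hypothetical:

```python
from collections import Counter

def chi2_stat(xs, ys):
    """Pearson chi-squared statistic for the contingency table of xs vs ys."""
    n = len(xs)
    counts = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    stat = 0.0
    for x in px:
        for y in py:
            expected = px[x] * py[y] / n
            observed = counts.get((x, y), 0)
            stat += (observed - expected) ** 2 / expected
    return stat

def discover_edges(data_u, data_v, crit=3.84):
    """Keep an edge u -> v whenever the independence hypothesis is rejected."""
    return {(u, v)
            for u, xs in data_u.items()
            for v, ys in data_v.items()
            if chi2_stat(xs, ys) > crit}

# Toy data: s0_next copies a0 exactly, s1_next varies independently of a0.
a0 = [0, 1, 0, 1, 0, 1, 0, 1] * 10
edges = discover_edges({"a0": a0},
                       {"s0_next": a0, "s1_next": [0, 0, 1, 1] * 20})
# Only the dependent pair survives: edges == {("a0", "s0_next")}
```

Raising the critical value plays the role of a larger $\eta$ here: it makes the independence hypothesis harder to reject and therefore yields a sparser graph.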
With this procedure, we identify a unique CG ${\mathcal{G}}^*$ under optimality: \begin{proposition}[Identifiability] \label{uniqueness} Given an oracle independence test with an optimal interventional data distribution $p_{{\mathcal{I}}_\pi^s}^*$, causal discovery obtains $\phi^*$ that correctly tells the independence between any two nodes; then the causal graph is uniquely identifiable, with $e^{*}_{ij} = q_{\phi^*}(e_{ij}|\tau), \forall i \in [M+N], j \in [M]$. \end{proposition} In practice, we use the $\chi^2$-test for discrete variables and the Fast Conditional Independence Test~\cite{chalupka2018fast} for continuous variables. The sample size needed for obtaining the oracle test has been widely investigated~\cite{canonne2018testing}. However, testing with finite data is not a trivial problem, as stated in~\cite{shah2020hardness}, especially when the data is sampled from a Goal-conditioned MDP. Usually, a random policy is not enough to satisfy the oracle assumption because some nodes cannot be fully explored when the task is complicated and has a long horizon. To make this assumption empirically possible, it is necessary to simultaneously optimize $\pi_{\theta}(a^t | s^t, s^*, {\mathcal{G}})$ to access more samples close to finishing the task, which is further analyzed in Section~\ref{sec::convergence}. We also empirically support this argument in Section~\ref{discovery_results} and provide a detailed theoretical proof in Appendix~\ref{app:planning} and~\ref{app:causal_discovery}. \subsection{Analysis of Performance Guarantee} \label{sec::convergence} The entire pipeline of GRADER is summarized in Algorithm~\ref{algorithm1}. To analyze the performance of the optimization of (\ref{elbo_final}), we first list important lemmas that connect the previous steps and then show that the iteration of these steps in \textit{GRADER} leads to a virtuous cycle. The following lemmas establish these performance guarantees step by step.
Lemma~\ref{monotonicity} shows that model learning is monotonically better at convergence given a better causal graph from causal discovery. Then the learned transition model helps to improve the lower bound of the value function during planning according to Lemma~\ref{Value Bound}. Lemma~\ref{TV_distance} reveals the connection between policy learning and the interventional data distribution, which in turn improves the quality of our causal discovery, as is shown in Lemma~\ref{causal_discovery_quality} and Proposition~\ref{uniqueness}. \begin{lemma}[Monotonicity of Transition Likelihood] \label{monotonicity} Let ${\mathcal{G}}^*=(V,E^*)$ be the true CG. For two CGs ${\mathcal{G}}_1=(V, E_1)$ and ${\mathcal{G}}_2=(V, E_2)$, if $\text{SHD}({\mathcal{G}}_1, {\mathcal{G}}^*) < \text{SHD}({\mathcal{G}}_2, {\mathcal{G}}^*)$, $\exists \ e,\ s.t.\ E_1\cup \{e\} = E_2$, when transition model $\theta$ converges, the following inequality holds for the transition model in (\ref{elbo_final}): \begin{equation} \log p_\theta(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}}^*) \geq \log p_\theta(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}}_1) \geq \log p_\theta(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}}_2) \label{estep_eq2} \end{equation} where SHD is the Structural Hamming Distance defined in Appendix~\ref{proof_monotonicity}.
\end{lemma} \begin{lemma}[Bounded Value Function in Policy Learning] \label{Value Bound} Given a planning horizon $H\to \infty$, if we already have an approximate transition model with $\mathbb{D}_{\text{TV}}(\hat{p}(s'|s,a), p(s'|s,a)) \leq \epsilon_m$, the approximate policy $\hat{\pi}$ achieves a near-optimal value function (refer to Appendix~\ref{app:planning} for detailed analysis): \begin{equation} \|V^{\pi^*}(s)-V^{\hat{\pi}}(s)\|_\infty \leq \frac{\gamma}{(1-\gamma)^2} \epsilon_m \end{equation} \end{lemma} \begin{lemma}[Policy Learning Improves Interventional Data Distribution] \label{TV_distance} With a step reward $r(s, a) = p_g \mathds{1}(s=g)$, we show that the value function determines an upper bound for the TV divergence between the interventional distribution and its optimum (proof details in Appendix~\ref{app:planning}): \begin{equation} \begin{aligned} \mathbb{D}_{\text{TV}}(p_{{\mathcal{I}}^s_\pi}, p_g) \leq 1 - (1-\gamma) V^\pi (s). \end{aligned} \end{equation} where $p_{{\mathcal{I}}_\pi^s}$ is the marginal state distribution in interventional data, and $p_g$ is the goal distribution. A better policy with larger $V^\pi(s)$ enforces the distribution of interventional data towards the goal. \end{lemma} \begin{lemma}[Interventional Data Benefits Causal Discovery] \label{causal_discovery_quality} For $\epsilon_g=\min_{p_g>0} p_g$, if $\mathbb{D}_{\text{TV}}(p_{{\mathcal{I}}_\pi^s}, p_g)< \epsilon_g$, the error of our causal discovery is upper bounded by $\mathbb{E}_{\hat{{\mathcal{G}}}}[ \text{SHD}(\hat{{\mathcal{G}}}, {\mathcal{G}}^*)] \leq |{\mathcal{S}}|-1$. \end{lemma} After this closed-loop analysis of our model, we are now able to analyze the overall performance of the proposed framework.
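The Structural Hamming Distance used in Lemma~\ref{monotonicity} and Lemma~\ref{causal_discovery_quality} can be sketched in a few lines. This is a hedged simplification (the paper's exact definition is in its appendix): for two directed graphs over the same nodes, we count SHD as the size of the symmetric difference of their edge sets, so adding one spurious edge increases SHD by one:

```python
def shd(edges_a, edges_b):
    """Structural Hamming Distance as symmetric difference of edge sets."""
    return len(set(edges_a) ^ set(edges_b))

# Hypothetical graphs over the same nodes; edge = (parent, child).
true_g = {("a0", "s0n"), ("s0", "s0n"), ("s1", "s1n")}
g1 = set(true_g)                    # the correct graph
g2 = g1 | {("s0", "s1n")}           # one spurious extra edge

assert shd(g1, true_g) == 0 and shd(g2, true_g) == 1
```

Lemma~\ref{monotonicity} then says that, at convergence, the transition likelihood under `g1` is at least that under `g2`, since `g1` is strictly closer to the true graph.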
Under the construction of $p_\theta(s^{t+1}|a^t, s^t, {\mathcal{G}})$ with NN-parameterized functions, the following theorem shows that the learning process guarantees a close estimation of the true ELBO under the iterative optimization among model learning, planning, and causal discovery. \begin{theorem} \label{convergence} With T-step approximate transition dynamics $\mathbb{D}_{\text{TV}}\Big(\hat{p}(s'|s,a), p(s'|s,a)\Big) \leq \epsilon_m$, if the goal distribution satisfies $\epsilon_g > \frac{\gamma}{1-\gamma} \epsilon_m$, and the prior distribution over CGs is entry-wise independent $\text{Bernoulli}(\epsilon_{\mathcal{G}})$, GRADER is guaranteed to achieve an approximate ELBO $\hat{{\mathcal{J}}}$ close to the true ELBO ${\mathcal{J}}^*$: \begin{equation} \|{\mathcal{J}}^*(\theta, \phi) - \hat{{\mathcal{J}}}(\hat{\theta}, \hat{\phi}) \|_{\infty} \leq \left[1 + \frac{\gamma}{(1-\gamma)^2}\right] \epsilon_m T+ \log \left(\frac{1-\epsilon_{\mathcal{G}}}{\epsilon_{\mathcal{G}}} \right) (|{\mathcal{S}}|-1) , \end{equation} \end{theorem} An intuitive understanding of the performance guarantee is that a better transition model indicates a better approximation of objective ${\mathcal{J}}$. Yet it is harder for \textit{GRADER} to approximate longer trajectories with large state spaces. The proof of this theorem and corresponding empirical results are in Appendix~\ref{app:convergence}. \vspace{-2mm} \section{Experiments} \begin{wrapfigure}{r}{0.5\textwidth} \vspace{-4mm} \centering \includegraphics[width=0.5\textwidth]{figs/env_2.png} \caption{Three environments used in this paper.} \label{fig:env} \vspace{-6mm} \end{wrapfigure} In this section, we first discuss the setting of our designed environments as well as the baselines used in the experiments.
Then, we provide the numerical results and detailed discussions to answer the following important research questions: \qa{\textbf{Q1.}} Compared to baselines, how does GRADER gain performance improvement under both in-distribution and generalization settings? \qa{\textbf{Q2.}} Compared to an offline random policy, how does a well-trained policy improve the results of causal discovery? \qa{\textbf{Q3.}} Compared to score-based causal discovery, does the proposed data-efficient causal discovery pipeline guarantee to identify the true causal graph as stated in Section~\ref{causal_discovery}? \qa{\textbf{Q4.}} Compared to false causal graphs, how does the correctness of the discovered causal graph influence the task-solving performance of RL agents? \subsection{Environments and Baselines} Since most commonly used RL benchmarks do not explicitly require causal reasoning for generalization, we design three environments shown in Figure~\ref{fig:env}. These environments use the true state as observation to disentangle the reasoning task from visual understanding. For each environment, we design three settings -- in-distribution (\textit{I}), spuriousness (\textit{S}), and composition (\textit{C}) -- corresponding to different goal distributions for generalization. We use $p_{\text{train}}(g)$ and $p_{\text{test}}(g)$ to represent the goal distribution during training and testing, respectively. \textit{I} uses the same $p_{\text{train}}(g)$ and $p_{\text{test}}(g)$, \textit{S} introduces spurious correlations in $p_{\text{train}}(g)$ but removes them in $p_{\text{test}}(g)$, and \textit{C} contains more similar sub-goals in $p_{\text{test}}(g)$ than in $p_{\text{train}}(g)$.
The details of these settings in three environments are briefly summarized in the following (details in Appendix~\ref{detail_env}): \begin{itemize}[leftmargin=0.2in] \item \textbf{Stack}: We design this manipulation task inspired by the CausalWorld~\cite{ahmed2020causalworld}, where the agent must stack objects to match specific shapes and colors. In \textit{Stack-S}, we let the same shape have the same color in $p_{\text{train}}(g)$ but randomly sample the color and shape in $p_{\text{test}}(g)$. In \textit{Stack-C}, the maximum number of objects is two in $p_{\text{train}}(g)$ but five in $p_{\text{test}}(g)$. \item \textbf{Unlock}: We design this task for the agent to collect a key to open doors built upon the Minigrid~\cite{gym_minigrid}. In \textit{Unlock-S}, the door and the key are always in the same row in $p_{\text{train}}(g)$ but uniformly sampled in $p_{\text{test}}(g)$. In \textit{Unlock-C}, there is one door in $p_{\text{train}}(g)$ but two doors in $p_{\text{test}}(g)$. \item \textbf{Crash}: The occurrence of accidents usually relies on causality, e.g., an autonomous vehicle (AV) collides with a jaywalker because its view is blocked by another car~\cite{tavares2021language}. We design such a crash scenario based on highway-env~\cite{highway-env}, where the goals are to create crashes between a pedestrian and different AVs. In \textit{Crash-S}, the initial distance between the ego car and the pedestrian is constant in $p_{\text{train}}(g)$ but irrelevant in $p_{\text{test}}(g)$. In \textit{Crash-C}, there is one pedestrian in $p_{\text{train}}(g)$ but two in $p_{\text{test}}(g)$. \end{itemize} \begin{table}[t] \caption{Success rate (\%) for nine settings in three environments.
\highest{Bold} font means the best.} \label{tab:overall} \centering \small{ \begin{tabular}{l|p{0.9cm} p{0.9cm} p{0.9cm} p{0.9cm} p{1.2cm} | p{0.9cm} p{0.9cm} p{0.9cm} p{1.0cm}} \toprule \small{Env} & \small{SAC} & \small{ICIN} & \small{PETS} & \small{TICSA} & \small{ICIL} & \scriptsize{GRADER} & \small{Score} & \small{Full} & \small{Offline} \\ \midrule \small{Stack-I} & 34.7\scriptsize{$\pm$16.1} & 71.8\scriptsize{$\pm$6.9} & \highest{97.2\scriptsize{$\pm$6.9}} & 85.9\scriptsize{$\pm$8.4} & 93.7\scriptsize{$\pm$5.9} & 95.6\scriptsize{$\pm$5.4} & 92.7\scriptsize{$\pm$7.4} & 92.9\scriptsize{$\pm$6.3} & 96.8\scriptsize{$\pm$5.8} \\ \small{Stack-S} & 22.1\scriptsize{$\pm$14.0} & 71.0\scriptsize{$\pm$7.4} & 77.7\scriptsize{$\pm$13.5} & 88.8\scriptsize{$\pm$10.1} & 81.2\scriptsize{$\pm$14.4} & \highest{97.6\scriptsize{$\pm$6.0}} & 90.5\scriptsize{$\pm$7.5} & 86.0\scriptsize{$\pm$9.5} & 95.4\scriptsize{$\pm$6.1}\\ \small{Stack-C} & 31.7\scriptsize{$\pm$5.1} & 58.6\scriptsize{$\pm$8.3} & 73.7\scriptsize{$\pm$10.3} & 76.2\scriptsize{$\pm$8.3} & 62.8\scriptsize{$\pm$13.0} & \highest{93.7\scriptsize{$\pm$8.4}} & 73.9\scriptsize{$\pm$8.5} & 75.7\scriptsize{$\pm$10.3} & 81.4\scriptsize{$\pm$7.8}\\ \midrule \small{Unlock-I} & 0.1\scriptsize{$\pm$0.5} & 31.7\scriptsize{$\pm$9.6} & 59.5\scriptsize{$\pm$7.2} & 58.5\scriptsize{$\pm$12.3} & \highest{67.1\scriptsize{$\pm$11.6}} & 64.2\scriptsize{$\pm$9.1} & 44.9\scriptsize{$\pm$28.1} & 63.8\scriptsize{$\pm$9.2} & 13.8\scriptsize{$\pm$8.1} \\ \small{Unlock-S} & 0.0\scriptsize{$\pm$0.2} & 32.7\scriptsize{$\pm$8.6} & 20.6\scriptsize{$\pm$5.9} & 33.6\scriptsize{$\pm$14.3} & 15.9\scriptsize{$\pm$4.7} & \highest{61.4\scriptsize{$\pm$4.4}} & 23.1\scriptsize{$\pm$7.6} & 18.3\scriptsize{$\pm$7.4} & 13.9\scriptsize{$\pm$7.5}\\ \small{Unlock-C} & 0.4\scriptsize{$\pm$1.7} & 31.5\scriptsize{$\pm$8.5} & 28.3\scriptsize{$\pm$10.0} & 29.8\scriptsize{$\pm$8.3} & 53.6\scriptsize{$\pm$15.3} & \highest{82.1\scriptsize{$\pm$9.2}} & 
36.2\scriptsize{$\pm$30.1} & 53.7\scriptsize{$\pm$14.3} & 11.7\scriptsize{$\pm$6.9}\\ \midrule \small{Crash-I} & 22.5\scriptsize{$\pm$17.6} & 27.9\scriptsize{$\pm$6.1} & 52.3\scriptsize{$\pm$11.5} & 68.9\scriptsize{$\pm$5.9} & 55.3\scriptsize{$\pm$20.9} & \highest{91.5\scriptsize{$\pm$4.4}} & 42.3\scriptsize{$\pm$17.5} & 69.8\scriptsize{$\pm$14.0} & 13.1\scriptsize{$\pm$16.2} \\ \small{Crash-S} & 18.6\scriptsize{$\pm$8.7} & 15.8\scriptsize{$\pm$17.2} & 44.6\scriptsize{$\pm$12.5} & 56.8\scriptsize{$\pm$8.6} & 21.7\scriptsize{$\pm$17.7} & \highest{84.3\scriptsize{$\pm$10.0}} & 53.4\scriptsize{$\pm$18.7} & 52.6\scriptsize{$\pm$12.8} & 30.2\scriptsize{$\pm$16.5}\\ \small{Crash-C} & 6.7\scriptsize{$\pm$3.8} & 7.8\scriptsize{$\pm$8.8} & 37.1\scriptsize{$\pm$5.1} & 15.0\scriptsize{$\pm$8.2} & 14.3\scriptsize{$\pm$7.3} & \highest{84.7\scriptsize{$\pm$7.3}} & 8.4\scriptsize{$\pm$6.1} & 42.0\scriptsize{$\pm$17.2} & 14.9\scriptsize{$\pm$12.4}\\ \bottomrule \end{tabular} } \vspace{-4mm} \end{table} We use the following methods as our baselines to fairly demonstrate the advantages of GRADER. \textbf{SAC:}~\cite{haarnoja2018soft} Soft Actor-Critic is a well-known model-free RL method; we augment it with HER~\cite{ren2019exploration} in \textit{Stack} because the goal in this task does not have pre-conditions. \textbf{ICIN:}~\cite{nair2019causal} It uses DAgger~\cite{ross2011no} to learn a goal-conditioned policy with the causal graph estimated from the expert policy. We assume it can access the true causal graph for supervised learning. \textbf{PETS:}~\cite{chua2018deep} Since our framework shares a very similar structure to model-based RL, we consider this ensemble transition model with random-shooting planning as one baseline, which achieves generalization with the uncertainty-aware design. \textbf{TICSA:}~\cite{wangtask} This is a causal-augmented MBRL method that simultaneously optimizes a soft adjacency matrix (representing the causality) and a transition model.
\textbf{ICIL:}~\cite{bica2021invariant} This method proposes an invariant feature learning structure that captures the implicit causality of multiple tasks. We only use it for transition model learning since the original method is designed for imitation learning. \subsection{Results Discussion} \label{discovery_results} \textbf{Overall Performance (\qa{Q1})} We compare the testing reward of all methods under nine tasks and summarize the results in Table~\ref{tab:overall} to demonstrate the overall performance. Generally, our method outperforms other baselines in all tasks except \textit{Stack-I}, which is simple for all methods. We note that the gap between our method and baselines under the \textit{S} and \textit{C} settings is more significant than under the \textit{I} setting, showing that our method still works well in non-trivial generalization tasks. As a model-free method, SAC fails in all three tasks of the \textit{Unlock} and \textit{Crash} environments since they have sparse rewards. Without learning the causal structure of the environment, PETS cannot even fully solve \textit{Unlock-S}, \textit{Unlock-C}, and all \textit{Crash} tasks. Both TICSA and ICIL learn explicit or implicit causality of the task, so they are relatively better than SAC and PETS. However, they are still worse than GRADER in the two generalization settings because of their unstable and inefficient causal reasoning mechanisms. We also find that even if ICIN is given the true causal graph, the policy learning part cannot efficiently leverage the causality, leading to worse performance in generalization settings. To further analyze the learning trends, we plot the curves of all methods under \textit{Crash} in Figure~\ref{fig:discover_graph}. Our method quickly learns to solve the tasks at the beginning of training, demonstrating data efficiency. GRADER also outperforms other methods with large gaps in the later training phase.
The results of the other two environments can be found in Appendix~\ref{app:more_results}. \textbf{Importance of Policy Learning (\qa{Q2})} As we mentioned in Section~\ref{causal_discovery}, we empirically compare GRADER and \textbf{Offline}~\cite{zhu2022offline}, which uses data from an offline random policy, and plot the results in the right part of Figure~\ref{fig:discover_graph}. We use SHD~\cite{tsamardinos2006max} to compute the distance between the estimated causal graph and the true causal graph, which is obtained by checking the best causal graph with human analysis. The true causal graph for each environment can be found in Appendix~\ref{app:causal_graph}. When we only use samples collected offline by a random policy, we cannot obtain variables' values that require long-horizon reasoning, e.g., the door can be opened if the agent is close to the door and has the key, leading to unreliable discovery. As a consequence, the causal graph obtained by Offline indeed harms the performance, as shown in Figure~\ref{fig:discover_graph}. Instead, GRADER gradually explores more regions and quickly obtains the true causal graph when we iteratively discover the causal graph and update the policy. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{figs/overall_reward_4.pdf} \caption{\textbf{Left:} Test reward of the \textit{Crash} environment calculated over 30 trials.
\textbf{Right:} The accuracy of causal graph discovery with samples from GRADER, Score, and Offline (random policy).} \label{fig:discover_graph} \vspace{-4mm} \end{figure} \begin{wrapfigure}{r}{0.35\textwidth} \vspace{-4mm} \centering \includegraphics[width=0.35\textwidth]{figs/monotonicity_svg.pdf} \caption{Influence of different causal graphs on \textit{Unlock-S}.} \label{fig:influence} \vspace{-5mm} \end{wrapfigure} \textbf{Advantage of Data-efficient Causal Discovery (\qa{Q3})} To show the advantage of the proposed constraint-based method, we design a model named \textbf{Score} that optimizes a soft adjacency matrix similar to the score-based method in the discovery step. Score-based discovery methods, also used in TICSA, have recently been combined with NNs for differentiable training. According to the discovery accuracy shown in the right part of Figure~\ref{fig:discover_graph}, we find that score-based discovery is inefficient. Based on the performance of the Score model summarized in Table~\ref{tab:overall}, we also conclude that it is not as good as our constraint-based method and has a large variance due to the unstable learning of the causal graph. \textbf{Influence of Causal Graph (\qa{Q4})} To illustrate the importance of the causal graph, we implement another variant of GRADER named \textbf{Full}, which uses a fixed full graph that connects all nodes between the sets $\mathcal{U}$ and $\mathcal{V}$. According to the performance shown in Table~\ref{tab:overall} and Figure~\ref{fig:discover_graph}, we find that the full graph achieves worse results than GRADER because of redundant and spurious correlations. Intuitively, unrelated information adds noise to the learning procedure, and the spurious correlation creates a shortcut that makes the model extract wrong features, leading to worse results in the spuriousness generalization setting as shown in Table~\ref{tab:overall}. We then investigate how the correctness of the causal graph influences the performance.
We use fixed graphs interpolating from the best causal graph to the full graph to train a GRADER model on \textit{Unlock-S} and summarize the results in Figure~\ref{fig:influence}. The more correct the graph is, the higher the reward the agent obtains, which supports our statements in Section~\ref{sec::convergence} that the causal graph is important for the reasoning tasks -- a better causal graph helps the model have better task-solving performance. \vspace{-2mm} \section{Related Works} \textbf{RL Generalization} From the agent's view, algorithms focus on actively obtaining the structure of the task or environment. Some methods decompose the given task into sub-tasks~\cite{kipf2019compile, xu2019regression, huang2019neural, lu2021learning}, and when they encounter an unseen task, they rearrange those (learned or pre-defined) sub-tasks to solve the new task. Instead of dividing tasks, Symbolic RL learns a program-based policy consisting of domain language~\cite{yang2021program, han2020neuro} or grammar~\cite{landajuela2021discovering, garnelo2016towards}. The generated program is then executed~\cite{zhao2021proto} to interact with the environment, which has the potential to solve unseen tasks by searching the symbolic space. From the environment's view, methods generate environments to augment agents' experience for better generalization. One straightforward way is data augmentation of image-based observations~\cite{kostrikov2020image, hansen2021stabilizing, srinivas2020curl, hansen2021generalization, lee2019network, raileanu2021automatic}. When extended to other factors of the environment, \textit{Domain Randomization}~\cite{tobin2017domain} and its variants~\cite{mehta2020active, prakash2019structured} are proposed. Considering the interaction between the agent and the environment, \textit{Curriculum Learning}~\cite{narvekar2020curriculum} also gradually generates difficult tasks to train generalizable agents.
\textbf{Goal-Conditioned RL (GCRL)} From the optimization perspective, Universal Value Functions~\cite{schaul2015universal}, reward shaping~\cite{trott2019keeping}, and latent dynamics models~\cite{nair2020goal} are widely used tools to solve the GCRL problem. Sub-goal generation~\cite{florensa2018automatic} is another intuitive idea to tackle long horizons with sparse rewards, where the key is to ensure that the generated sub-goals are solvable. Finally, \textit{HER}~\cite{andrychowicz2017hindsight}, belonging to the relabelling category, is a ground-breaking yet straightforward method that treats visited states as ``fake'' goals when the goal and state share the same space. Later on, improved versions of HER~\cite{ren2019exploration, pitis2020maximum, fang2019curriculum} were widely studied. One limitation is that we cannot directly use a visited state as a goal if the goal has pre-conditions. Similar to our setting, \cite{tang2021hindsight} and \cite{rudner2021outcome} convert the GCRL problem to variational inference by regarding control as inference~\cite{levine2018reinforcement}. \cite{tang2021hindsight} proposes an EM framework under the HER setting, and \cite{rudner2021outcome} treats the last state as the goal and estimates a shaped reward during training. \textbf{RL with Causal Reasoning} Causality is now frequently discussed in the machine learning field to complement the interpretability of neural networks~\cite{peters2017elements}. RL algorithms also incorporate causality to improve their reasoning capability~\cite{madumal2020explainable}. For instance, \cite{nair2019causal} and \cite{volodin2020resolving} explicitly estimate causal structures with the interventional data obtained from the environment. These structures are either combined with dynamic model learning in model-based RL~\cite{wangtask} or used to adjust the buffer priority~\cite{seitzer2021causal}.
Implicitly, we can abstract the causal structure and formulate it using the \textit{Block MDP}~\cite{zhang2020invariant} setting or train multiple encoders to extract different kinds of representations~\cite{sodhani2022improving}. Following the idea of invariant risk minimization~\cite{arjovsky2019invariant}, they assume task-relevant features are invariant and shared across all environments, which can be used as the only cause of the reward. \textbf{Causal Discovery} Causal discovery~\cite{glymour2019review} is a long-standing topic in economics and sociology, where the traditional methods can be generally categorized into constraint-based and score-based. Constraint-based methods~\cite{spirtes2000causation} start from a complete graph and iteratively remove edges with conditional independence tests~\cite{pearson1900x, zhang2012kernel} as constraints. Score-based methods~\cite{chickering2002optimal, hauser2012characterization} use metrics such as the \textit{Bayesian Information Criterion}~\cite{neath2012bayesian} as scores and prefer edges that maximize the score given the dataset. Recently, researchers have extended score-based methods with RL~\cite{zhu2019causal} or differentiable discovery~\cite{brouillard2020differentiable, ke2019learning}. The former selects edges with a learned policy, and the latter learns a soft adjacency matrix with observational or interventional data. Active intervention methods have also been explored~\cite{scherrer2021learning} to increase the efficiency of data collection and decrease the cost of conducting interventions~\cite{lindgren2018experimental}. \section{Conclusion} This paper proposes a latent variable model that injects a causal graph reasoning process into transition model learning and planning to solve Goal-Conditioned RL problems under the generalization setting. We theoretically prove that our iterative optimization process can obtain the true causal graph.
To evaluate the performance of the proposed method, we design nine tasks in three environments. The comprehensive experimental results show that our method has better data efficiency and performance than baselines. Our method also provides interpretability through the explicitly discovered causal graph. The main limitation of this work is that the explicit estimation of causal structure does not scale well to a large number of nodes. The factorized state and action space assumption may also restrict the usage of this work to abstracted representations. Thus, extending this method to broader applications could be a future direction. We further discuss the potential negative social impact in Appendix~\ref{app:social}. \newpage \bibliographystyle{unsrt} \section{Introduction} Generalizability, which enables an algorithm to handle unseen tasks, is fruitful yet challenging in multifarious decision-making domains. Recent literature~\cite{xu2021bayesian, cranmer2006discovering, li2020ngs} reveals the critical role of reasoning in improving the generalization of reinforcement learning (RL). However, most off-the-shelf RL algorithms~\cite{sutton2018reinforcement} have not regarded reasoning as an indispensable accessory, and thus usually suffer from data inefficiency and performance degradation due to the mismatch between training and testing settings. To attain generalization at the testing stage, some efforts were put into incorporating domain knowledge to learn structured information, including sub-task decomposition~\cite{lu2021learning}, or program generation~\cite{yang2021program, landajuela2021discovering, han2020neuro} and execution~\cite{zhao2021proto, sun2019program}, which guide the model to solve complicated tasks in an explainable way. However, such symbolism-dominant methods heavily depend on the re-usability of sub-tasks and pre-defined grammars, which may not always be accessible in decision-making tasks.
Inspired by the close link between reasoning and the cause-and-effect relationship, causality has recently been incorporated to compactly represent the aforementioned structured knowledge in RL training~\cite{gershman2017reinforcement}. Based on the form of causal knowledge, we divide the related works into two categories, i.e., \textit{implicit} and \textit{explicit} causation. With \textit{implicit} causal representation, researchers ignore the detailed causal structure. For instance, \cite{zhang2020invariant} extracts invariant features as one node that influences the reward function, while the other node consists of task-irrelevant features~\cite{tomar2021model, sontakke2021causal, sodhani2022improving, bica2021invariant}. This neat structure has good scalability but requires access to multiple environments that share the same invariant feature~\cite{zhang2020invariant, bica2021invariant, han2021learning}. One can also turn to the \textit{explicit} side by estimating detailed causal structures~\cite{wangtask, volodin2020resolving, seitzer2021causal, gasse2021causal}, which use directed graphical models to capture the causality in the environment. A prerequisite for this estimation is the object-level or event-level abstraction of the observation, which is available in most tasks and is also becoming a frequently studied problem~\cite{abel2022theory, shanahan2022abstraction, abel2018state}. However, existing \textit{explicit} causal reasoning RL models either rely on heuristic design without theoretical guarantees~\cite{wangtask} or require the true causal graph~\cite{nair2019causal}. In this paper, we propose \textit{\textbf{G}ene\textbf{RA}lizing by \textbf{D}iscov\textbf{ER}ing (GRADER)}, a causal reasoning method that augments the RL algorithm with data efficiency, interpretability, and generalizability. We mainly focus on Goal-Conditioned RL (GCRL), where different goal distributions during training and testing reflect generalization.
We formulate GCRL as a probabilistic inference problem~\cite{levine2018reinforcement} with a learnable causal graph as the latent variable. This novel formulation naturally explains the learning objective with three components, i.e., transition model learning, planning, and causal graph discovery, leading to an optimization framework that alternates between causal structure discovery and policy learning to gain generalizability. Under some mild conditions, we prove the unique identifiability of the causal graph and the theoretical performance guarantee of the proposed framework. To demonstrate the effectiveness of the proposed method, we conduct comprehensive experiments in environments that require strong reasoning capability. Specifically, we design two types of generalization settings, i.e., spuriousness and composition, and provide an example to illustrate these settings in Figure~\ref{fig:motivation}. The evaluation results confirm our method's advantages in several aspects. First, the proposed data-efficient discovery method provides an explainable causal graph yet requires much less data than score-based methods, increasing data efficiency and interpretability during task solving. Second, simultaneously discovering the causal graph during policy learning dramatically increases the success rate of solving tasks. In summary, the contribution of this paper is threefold: \vspace{-2mm} \begin{itemize}[leftmargin=0.3in] \item We use the causal graph as a latent variable to reformulate the GCRL problem and then derive an iterative training framework for solving it. \item We prove that our method uniquely identifies true causal graphs, and the performance of the iterative optimization is guaranteed with a lower bound given converged transition dynamics. \item We design nine tasks in three environments that require strong reasoning capability and show the effectiveness of the proposed method against five baselines on these tasks.
\vspace{-4mm} \end{itemize} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figs/motivation_2.pdf} \caption{The agent picks and places objects to achieve given goals. \textbf{Left:} The causal graph of this task. \textbf{Right:} Training setting: the hexagon is always pink, and the box is always blue. Three testing settings: (1) In distribution: the same as the training setting. (2) Spuriousness: swap the color and shape to break the spurious correlation. (3) Composition: increase the height of the goal.} \label{fig:motivation} \vspace{-4mm} \end{figure} \section{Problem Formulation and Preliminary} We start by discussing the setting of RL we use in this paper and the assumptions required in causal reasoning. Then we briefly introduce the necessary concepts related to causality and causal discovery. \subsection{Factorized Goal-conditioned RL} \label{FactorMDP} We assume the environment follows the Goal-conditioned Markov Decision Process (MDP) setting with full observation. This setting is represented by a tuple $\mathcal{M}=(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, {G})$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}$ is the probabilistic transition model, ${G} \subset \mathcal{S}$ is the goal space, which is a set of assignments of values to states, and $r(s, g) = \mathds{1}(s = g) \in \mathcal{R}$ is the sparse deterministic reward function that returns 1 only if the state $s$ matches the goal $g$. In this paper, we focus on the goal-conditioned generalization problem, where the goals for the training and testing stages are sampled from different distributions $p_{\text{train}}(g)$ and $p_{\text{test}}(g)$. We refer to a goal $g \in G$ as a task and use these two terms interchangeably.
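As a concrete illustration, the sparse deterministic reward $r(s, g) = \mathds{1}(s = g)$ over a component-wise state can be sketched as follows (a minimal sketch; the component names and dictionary encoding are our own illustrative assumptions, not taken from the paper's environments):

```python
# Minimal sketch of a goal-conditioned sparse reward: a goal assigns values
# to (some) state components, and the reward is 1 only when every assigned
# component matches. Component names below are illustrative assumptions.
def sparse_reward(state, goal):
    # Returns 1.0 only if the state satisfies the goal assignment.
    return 1.0 if all(state[k] == v for k, v in goal.items()) else 0.0

state = {"agent_pos": (1, 2), "has_key": 1, "door_open": 0}
goal = {"has_key": 1, "door_open": 0}   # a goal: an assignment of values to states

r = sparse_reward(state, goal)          # 1.0: the goal assignment is satisfied
```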
To enable the causal discovery methods, we make a further assumption similar to~\cite{seitzer2021causal, boutilier2000stochastic} for the state and action space: \begin{assumption}[Space Factorization]\label{assumption_factorize} The state space $\mathcal{S} = \{ \mathcal{S}_1 \times \cdots \times \mathcal{S}_M \}$ and action space $\mathcal{A} = \{ \mathcal{A}_1 \times \cdots \times \mathcal{A}_N \}$ can be factorized into disjoint components $\{\mathcal{S}_i\}_{i=1}^{M}$ and $\{\mathcal{A}_i\}_{i=1}^{N}$. \end{assumption} The components, each representing one event or object property, usually have explicit semantic meanings for better interpretability. This assumption can be satisfied by state and action abstraction, which has been widely investigated in~\cite{abel2022theory, shanahan2022abstraction, abel2018state}. Such factorization also helps deal with high-dimensional states since it might be impossible to treat each dimension as one random variable~\cite{wangtask}. \vspace{-2mm} \subsection{Causal Reasoning with Graphical Models} \label{cg_scm} Reasoning with causality relies on specific causal structures, which are commonly represented as directed acyclic graphs (DAGs)~\cite{peters2017elements} over variables. Consider random variables $\bm{X} = (X_1,\dots, X_d)$ with index set $\bm{V}:= \{1,\dots,d \}$. A graph ${\mathcal{G}} = (\bm{V}, \mathcal{E})$ consists of nodes $\bm{V}$ and edges $\mathcal{E} \subseteq \bm{V}^2$ with $(v, v) \notin \mathcal{E}$ for any $v \in \bm{V}$. A node $i$ is called a parent of $j$ if $e_{ij} \in {\mathcal{E}}$ and $e_{ji} \notin {\mathcal{E}}$. The set of parents of $j$ is denoted by $\textbf{PA}^{{\mathcal{G}}}_j$.
We formally discuss the graph representation of causality with the definitions: \begin{definition}[Structural Causal Models~\cite{peters2017elements}] A structural causal model (SCM) $\mathfrak{C}:= (\bm{S}, \bm{U})$ consists of a collection $\bm{S}$ of $d$ functions, \begin{equation} X_j := f_j(\textbf{PA}^{{\mathcal{G}}}_j, U_j),\ \ \ \ j \in [d], \end{equation} where $\textbf{PA}_j \subset \{ X_1,\dots,X_d \} \backslash \{X_j \}$ are called parents of $X_j$; and a joint distribution $\bm{U} = \{ U_1,\dots,U_d \}$ over the noise variables, which are required to be jointly independent. \label{def_scm} \end{definition} \begin{definition}[Causal Graph~\cite{peters2017elements}, CG] The causal graph ${\mathcal{G}}$ of an SCM is obtained by creating one node for each $X_j$ and drawing directed edges from each parent in $\textbf{PA}^{{\mathcal{G}}}_j$ to $X_j$. \end{definition} We note that a CG describes the structure of the causality, and an SCM further specifies the causation from the parents of $X_j$ to $X_j$ as well as the exogenous noises ${U_j}$. To uncover the causal structure from the data distribution, we assume that the CG satisfies the \textit{Markov Property} and \textit{Faithfulness}~\cite{peters2017elements}, which make the independences consistent between the joint distribution $P(X_1,\dots,X_n)$ and the graph ${\mathcal{G}}$. We also follow the \textit{Causal Sufficiency} assumption~\cite{spirtes2000causation}, which supposes that we have measured all the common causes of the measured variables. Existing work~\cite{pitis2020counterfactual, seitzer2021causal} assumes that two objects have causation only if they are close enough, while there is no edge between them if the distance is large. Instead of using such a local view of causality, we assume the causal graph is consistent across all time steps. Still, the causal influence could be local, and the specific influence indicated by edges is estimated by the function $f_j(\textbf{PA}_j, U_j)$.
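To make the SCM and causal-graph definitions above concrete, a toy three-variable SCM and its induced CG might look as follows (a sketch under assumed linear mechanisms; the graph and coefficients are illustrative, not one of the paper's environments):

```python
import numpy as np

# Toy SCM: each X_j := f_j(PA_j, U_j) with jointly independent noise U_j.
# The mechanisms are assumed linear purely for illustration.
rng = np.random.default_rng(0)

def sample(n):
    u1, u2, u3 = rng.normal(size=(3, n))   # jointly independent noises
    x1 = u1                                 # PA(X1) = {}
    x2 = 0.8 * x1 + u2                      # PA(X2) = {X1}
    x3 = x1 - 0.5 * x2 + u3                 # PA(X3) = {X1, X2}
    return np.stack([x1, x2, x3], axis=1)

# The induced causal graph as an adjacency matrix:
# cg[i, j] = 1 iff there is a directed edge X_{i+1} -> X_{j+1}.
cg = np.array([[0, 1, 1],
               [0, 0, 1],
               [0, 0, 0]])

data = sample(1000)
```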
\begin{wrapfigure}{r}{0.5\textwidth} \vspace{-5mm} \centering \includegraphics[width=0.5\textwidth]{figs/framework_2.pdf} \caption{The paradigm of GRADER.} \label{fig:framework} \vspace{-10mm} \end{wrapfigure} \section{Generalizing by Discovering (GRADER)} With proper definitions and assumptions, we now present the proposed method. We first derive the framework of GRADER by formulating the GCRL problem as a latent variable model, which provides a variational lower bound to optimize. Then, we divide this objective function into three parts and iteratively update them with a specific performance guarantee. \subsection{GCRL as Latent Variable Models} \label{gcrl_lvm} The general objective of RL is to maximize the expected reward w.r.t. a learnable policy model $\pi$. Particularly, in the goal-conditioned setting, such an objective is represented as $\max_{\pi} \mathbb{E}_{\tau \sim \pi, g\sim p(g)}[ \sum_{t=0}^{T} r(s^t, g)]$, where $p(g)$ is the distribution of goals and $\tau := \{ s^0,a^0,\dots,s^{T} \}$ is the state-action trajectory with maximal time step $T$. The trajectory ends only if the goal is achieved ($g = s^T$) or the maximal step is reached. Inspired by viewing the RL problem as probabilistic inference~\cite{abdolmaleki2018maximum, levine2018reinforcement}, we reframe the question from \textit{"How do we find actions to achieve the goal?"} to \textit{"What are the actions if we achieve the goal?"}, leading to a likelihood maximization problem for $p(\tau|s^*)$ with $s^*:= \mathds{1}(g=s^T)$.
Different from previous work~\cite{marino2019inference} that recasts actions as latent variables and infers actions that result in "observed" high reward, we decompose $p(\tau|s^*)$ with ${\mathcal{G}}$ as the latent variable to get the evidence lower bound (ELBO) \begin{equation} \log p(\tau|s^*) = \log \int p(\tau|{\mathcal{G}}, s^*) p({\mathcal{G}} | s^*) d{\mathcal{G}} \geq \mathbb{E}_{q({\mathcal{G}}|\tau)}[\log p(\tau|{\mathcal{G}}, s^*)] - \mathbb{D}_{\text{KL}}[q({\mathcal{G}}|\tau) || p({\mathcal{G}})], \label{elbo} \end{equation} where the prior $p({\mathcal{G}})$ and variational posterior $q({\mathcal{G}}|\tau)$ represent distributions over a graph structure, i.e., the probabilities of the existence of edges in the graph. $\mathbb{D}_{\text{KL}}$ represents the Kullback–Leibler (KL) divergence between two graph distributions, which will be explained in Section~\ref{causal_discovery}. Recall that the goal space $G$ is a subset of the state space; here we extend the meaning of $g$ and assume all trajectories achieve the goal in the final state~\cite{andrychowicz2017hindsight}, i.e., $g = s^T$. Such an extension makes it possible to further decompose the first term of (\ref{elbo}) as (refer to Appendix~\ref{app:derivation_transition}) \begin{equation} \log p(\tau|{\mathcal{G}}, s^*) = \log p(s^0) + \sum_{t=0}^{T-1} \log p(s^{t+1}| s^{t}, a^{t}, {\mathcal{G}}) + \sum_{t=0}^{T-1} \log \pi(a^{t}|s^{t}, s^*, {\mathcal{G}}) + \log p(g). \label{transition} \end{equation} Here, we use the fact that ${\mathcal{G}}$ is effective in both the transition model $p(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$ and the policy model $\pi(a^{t}|s^{t}, s^*, {\mathcal{G}})$, $g$ only influences the policy model, and the initial state $s^0$ depends neither on ${\mathcal{G}}$ nor $g$. We assume that both the initial state $s^0$ and the goal $g$ follow uniform distributions; thus, the first and last terms of (\ref{transition}) are constants.
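For reference, the inequality in (\ref{elbo}) is the standard importance-weighting and Jensen step, under the assumption (implicit in the bound above) that the graph prior does not depend on the goal, i.e., $p({\mathcal{G}}|s^*) = p({\mathcal{G}})$:
\begin{equation*}
\log p(\tau|s^*) = \log \mathbb{E}_{q({\mathcal{G}}|\tau)}\left[\frac{p(\tau|{\mathcal{G}}, s^*)\, p({\mathcal{G}})}{q({\mathcal{G}}|\tau)}\right] \geq \mathbb{E}_{q({\mathcal{G}}|\tau)}\left[\log p(\tau|{\mathcal{G}}, s^*)\right] - \mathbb{D}_{\text{KL}}[q({\mathcal{G}}|\tau) || p({\mathcal{G}})].
\end{equation*}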
The policy term $\pi$, selecting action $a^t$ according to both the current state $s^t$ and goal $g$, is implemented with a planning method and is further discussed in Section~\ref{model_planning}. Finally, we maximize the likelihood $p(\tau|s^*)$ with the following reformulated ELBO as the objective \begin{equation} \begin{aligned} {\mathcal{J}}(\theta,\phi) = \mathbb{E}_{ q_{\phi}({\mathcal{G}}|\tau)}\sum_{t=0}^{T-1} \left[ \log p_{\theta}(s^{t+1}|s^{t}, a^{t}, {\mathcal{G}}) + \log \pi_{\theta}(a^{t}|s^{t}, s^{*}, {\mathcal{G}})\right] - \mathbb{D}_{\text{KL}}[q_{\phi}({\mathcal{G}}|\tau) || p({\mathcal{G}})] \label{elbo_final} \end{aligned} \end{equation} where $\theta$ is the shared parameter of the transition model $p_{\theta}(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$ and policy $\pi_{\theta}(a^{t}|s^{t}, s^{*}, {\mathcal{G}})$, and $\phi$ is the parameter of the causal graph $q_{\phi}({\mathcal{G}}|\tau)$. To efficiently solve this optimization problem, we iteratively update the parameter $\phi$ (causal discovery, Section~\ref{causal_discovery}) and the parameter $\theta$ (model and policy learning, Section~\ref{model_planning}), as shown in Figure~\ref{fig:framework}. Intuitively, these processes can be respectively viewed as the discovery of the CG and the update of the SCM, which share tight connections as discussed in Section~\ref{cg_scm}. \subsection{Model and Policy Learning} \label{model_planning} Let us start with a simple case where we have already obtained a ${\mathcal{G}}$ and use it to guide the learning of parameter $\theta$ via $\max_{\theta} {\mathcal{J}}(\theta,\phi)$. Since the KL divergence term of ${\mathcal{J}}(\theta,\phi)$ does not involve $\theta$, we only need to deal with the first expectation term, the likelihood of the transition model and policy.
For the transition $p_{\theta}(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$, we connect it with the causal structure by further defining a particular type of CG, denoted as ${\mathcal{G}}$ in the rest of this paper: \begin{definition}[Transition Causal Graph] \label{transition_cg} We define a bipartite graph ${\mathcal{G}}$, whose vertices are divided into two disjoint sets $\mathcal{U}= \{\mathcal{A}^{t}, \mathcal{S}^{t} \}$ and $\mathcal{V}=\{ \mathcal{S}^{t+1} \}$. $\mathcal{A}^{t}$ represents the action nodes at step $t$, $\mathcal{S}^{t}$ the state nodes at step $t$, and $\mathcal{S}^{t+1}$ the state nodes at step $t+1$. All edges start from set $\mathcal{U}$ and end in set $\mathcal{V}$. \end{definition} \textbf{Model learning.} This definition builds the causal graph between two consecutive time steps, which indicates that the values of states at step $t+1$ depend on values at step $t$. It also implies that interventions~\cite{peters2017elements} on nodes in $\mathcal{U}$ are directly obtained since they have no parents. We denote the marginal distribution of states collected by the RL policy $\pi$ as $p_{{\mathcal{I}}_\pi^s}$. Combined with Definition~\ref{def_scm} of the SCM, we find that $p_{\theta}(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$ essentially approximates a collection of functions $f_j$ following the structure ${\mathcal{G}}$, which take as input the values of the parents of the state node $s_j$ and output the value of $s_j$. Thus, we propose to model the transition corresponding to ${\mathcal{G}}$ with a collection of neural networks $f_{\theta}({\mathcal{G}}) := \{ f_{\theta_j} \}_{j=1}^{M}$ to obtain \begin{equation} {s}^{t+1}_j = f_{\theta_j}([ \textbf{PA}_j^{{\mathcal{G}}}]^{t}, U_j), \label{gru_inference} \end{equation} where $[\textbf{PA}_j^{{\mathcal{G}}}]^{t}$ represents the values of all parents of node $s^{t}_j$ at time step $t$ and $U_j$ follows Gaussian noise $U_j \sim \mathcal{N}(0, \textbf{I})$.
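A minimal sketch of this masked, per-node transition model follows (linear maps stand in for the paper's learned networks; the dimensions, random graph, and omission of the noise term are our own illustrative assumptions):

```python
import numpy as np

# Factorized transition: each next-state component s_j^{t+1} is predicted by
# its own function f_j, fed only the step-t values of its parents under G.
rng = np.random.default_rng(0)
M, N = 4, 2                                  # number of state / action components
adj = rng.random((M + N, M)) < 0.5           # adj[i, j]: node i is a parent of s_j^{t+1}
weights = [rng.normal(size=int(adj[:, j].sum())) for j in range(M)]

def predict_next_state(state, action):
    x = np.concatenate([state, action])      # candidate parents [s^t, a^t]
    nxt = np.zeros(M)
    for j in range(M):
        parents = x[adj[:, j]]               # [PA_j^G]^t: masked parent values only
        nxt[j] = parents @ weights[j]        # stand-in for f_{theta_j}; noise omitted
    return nxt

s_next = predict_next_state(np.zeros(M), np.zeros(N))
```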
In practice, we use a Gated Recurrent Unit (GRU)~\cite{chung2014empirical} as $f_j$ because it supports varying numbers of input nodes. We take $s^{t}_j$ as the initial hidden embedding and the remaining parents $[ \textbf{PA}_j^{{\mathcal{G}}}\backslash s_j ]^{t}$ as the input sequence to $f_j$. The entire model is optimized by stochastic gradient descent with the log-likelihood $\log p_{\theta}(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}})$ as the objective. \textbf{Policy learning with planning.} We then turn to the policy term $\pi_{\theta}(a^{t}|s^{t}, s^{*}, {\mathcal{G}})$ in ${\mathcal{J}}(\theta,\phi)$. We optimize it with planning methods that leverage the estimated transition model. Specifically, the policy aims to optimize an action-state value function $Q(s^t, a^t)=\mathbb{E}\left[\sum_{t'=0}^H \gamma^{t'} r(s^{t'+t}, a^{t'+t}) | s^t, a^t\right]$, which can be obtained by unrolling the transition model with a horizon of $H$ steps and discount factor $\gamma$. In practice, we use model predictive control (MPC)~\cite{camacho2013model} with random shooting~\cite{richards2005robust}, which selects the first action of the fixed-horizon trajectory that has the highest action-state value $Q(s^t, a^t)$, i.e., $\hat{\pi}(s^t) = \argmax_{a^t\in {\mathcal{A}}} Q_\theta^{\mathcal{G}} (s^t, a^t)$. The formulation we derived so far is highly correlated with the model-based RL framework~\cite{wang2019benchmarking}. However, the main difference is that we obtain it via variational inference by regarding the causal graph as a latent variable.
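The random-shooting planner can be sketched on a toy 1-D task (the additive dynamics and the dense distance-based score below are our own stand-ins for the learned model $f_\theta$ and the sparse goal reward, chosen purely for illustration):

```python
import numpy as np

# Random-shooting MPC: sample fixed-horizon action sequences, roll them out
# through the (here: assumed) transition model, score each rollout with a
# discounted return, and execute only the first action of the best sequence.
rng = np.random.default_rng(0)

def plan(s, g, horizon=5, n_samples=512, gamma=0.95):
    actions = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    states = np.full(n_samples, s, dtype=float)
    returns = np.zeros(n_samples)
    for t in range(horizon):
        states = states + actions[:, t]            # assumed dynamics: s' = s + a
        returns += gamma**t * -np.abs(states - g)  # dense surrogate of 1(s = g)
    return actions[np.argmax(returns), 0]          # first action of best rollout

first_action = plan(s=0.0, g=2.0)
```

At each environment step the planner is re-run from the new state, so only the first action of each planned sequence is ever executed.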
\vspace{-3mm} \subsection{Data-Efficient Causal Discovery} \label{causal_discovery} \begin{wrapfigure}{r}{0.43\textwidth} \vspace{-5mm} \newcommand\mycommfont[1]{\small\ttfamily\textcolor{RoyalBlue}{#1}} \SetCommentSty{mycommfont} \begin{algorithm}[H] \caption{GRADER Training} \label{algorithm1} \KwIn{Trajectory buffer $\mathcal{B}_{\tau}$, Causal graph ${\mathcal{G}}$, Transition model $f_{\theta}$} \While{$\theta$ not converged} { \tcp{Policy from planning} Sample a goal $g\sim p_{\text{train}}(g)$ \\ \While{$t < T$} { $a^t \leftarrow$ Planner($f_{\theta}$, $s^t$, $g$) \\ $s^{t+1}$, $r^t \leftarrow$ Env($a^t$, $g$) \\ $\mathcal{B}_{\tau} \leftarrow \mathcal{B}_{\tau}\cup\{a^t, s^t, s^{t+1} \} $ } \tcp{Estimate causal graph} \For{$i \leq M + N$} { \For{$j \leq M$} { Infer edge $e_{ij} \leftarrow q_{\phi}(\cdot|\mathcal{B}_{\tau}, \eta)$ } } \tcp{Learn transition model} Update $f_{\theta}({\mathcal{G}})$ via (\ref{gru_inference}) with $\mathcal{B}_{\tau}$ \\ } \end{algorithm} \vspace{-6mm} \end{wrapfigure} In this step, we relax the assumption of knowing ${\mathcal{G}}$ and aim to estimate the posterior distribution $q({\mathcal{G}}|\tau)$ to increase the ELBO~(\ref{elbo_final}) w.r.t. the parameter $\phi$. In most score-based methods~\cite{chickering2002optimal}, the likelihood is used to evaluate the correctness of the causal graph, i.e., a better causal graph leads to a higher likelihood. Since the first term of (\ref{elbo_final}) represents the likelihood of the transition model, we convert the problem $\max_{\phi} {\mathcal{J}}(\theta, \phi)$ to causal discovery that finds the true causal graph based on collected data samples. As for the second term of (\ref{elbo_final}), the following proposition shows that the KL divergence between $q_{\phi}({\mathcal{G}}|\tau)$ and $p({\mathcal{G}})$ can be approximated by a sparsity regularization~(proof in Appendix~\ref{app:approx_kld}).
\begin{proposition}[KL Divergence as Sparsity Regularization] With entry-wise independent Bernoulli prior $p({\mathcal{G}})$ and point mass variational distribution $q({\mathcal{G}}|\tau)$ of DAGs, $\mathbb{D}_{\text{KL}}[q_{\phi} \| p]$ is equivalent to an $\ell_1$ sparsity regularization for the discovered causal graph. \label{approx_kld} \end{proposition} We restrict the posterior $q_{\phi}({\mathcal{G}}|\tau)$ to a point mass distribution and use a threshold $\eta$ to control the sparsity. We perform the discovery process from the classification perspective by proposing binary classifiers $q_{\phi}(e_{ij}|\tau, \eta)$ to determine the existence of an edge $e_{ij}$. This classifier $q_{\phi}(e_{ij}|\tau, \eta)$ is implemented by a statistical \textit{independence test}~\cite{chalupka2018fast}, and $\eta$ is the threshold for the p-value of the hypothesis. A larger $\eta$ corresponds to a harder sparsity constraint, thus leading to a sparser ${\mathcal{G}}$ since two nodes are more likely to be considered independent. According to Definition~\ref{transition_cg}, we only need to conduct classification for edges connecting nodes between $\mathcal{U}$ and $\mathcal{V}$. If two nodes are dependent, we add one edge directed from the node in $\mathcal{U}$ to the node in $\mathcal{V}$. This definition also ensures that we always have $q({\mathcal{G}}|\tau)\in {\mathcal{Q}}_{DAG}$, where ${\mathcal{Q}}_{DAG}$ is the class of DAGs. With this procedure, we identify a unique CG ${\mathcal{G}}^*$ under optimality: \begin{proposition}[Identifiability] \label{uniqueness} Given an oracle independence test, with an optimal interventional data distribution $p_{{\mathcal{I}}_\pi^s}^*$, causal discovery obtains $\phi^*$ that correctly tells the independence between any two nodes, then the causal graph is uniquely identifiable, with $e^{*}_{ij} = q_{\phi^*}(e_{ij}|\tau), \forall i \in [M+N], j \in [M]$.
\end{proposition} In practice, we use the $\chi^2$-test for discrete variables and the Fast Conditional Independence Test~\cite{chalupka2018fast} for continuous variables. The sample size needed for obtaining the oracle test has been widely investigated~\cite{canonne2018testing}. However, testing with finite data is not a trivial problem, as stated in~\cite{shah2020hardness}, especially when the data is sampled from a Goal-conditioned MDP. Usually, a random policy is not enough to satisfy the oracle assumption because some nodes cannot be fully explored when the task is complicated and has a long horizon. To make this assumption empirically plausible, it is necessary to simultaneously optimize $\pi_{\theta}(a^t | s^t, s^*, {\mathcal{G}})$ to access more samples close to finishing the task, which is further analyzed in Section~\ref{sec::convergence}. We also empirically support this argument in Section~\ref{discovery_results} and provide detailed theoretical proofs in Appendix~\ref{app:planning} and~\ref{app:causal_discovery}. \subsection{Analysis of Performance Guarantee} \label{sec::convergence} The entire pipeline of GRADER is summarized in Algorithm~\ref{algorithm1}. To analyze the performance of the optimization of (\ref{elbo_final}), we first list important lemmas that connect the previous steps and then show that the iteration of these steps in \textit{GRADER} leads to a virtuous cycle. These lemmas establish the performance guarantees step by step. Lemma~\ref{monotonicity} shows that model learning is monotonically better at convergence given a better causal graph from causal discovery. Then the learned transition model helps to improve the lower bound of the value function during planning according to Lemma~\ref{Value Bound}.
Lemma~\ref{TV_distance} reveals the connection between policy learning and the interventional data distribution, which in turn improves the quality of our causal discovery, as shown in Lemma~\ref{causal_discovery_quality} and Proposition~\ref{uniqueness}. \begin{lemma}[Monotonicity of Transition Likelihood] \label{monotonicity} Let ${\mathcal{G}}^*=(V,E^*)$ be the true CG. For two CGs ${\mathcal{G}}_1=(V, E_1)$ and ${\mathcal{G}}_2=(V, E_2)$, if $\text{SHD}({\mathcal{G}}_1, {\mathcal{G}}^*) < \text{SHD}({\mathcal{G}}_2, {\mathcal{G}}^*)$ and $\exists \ e,\ s.t.\ E_1\cup \{e\} = E_2$, when the transition model $\theta$ converges, the following inequality holds for the transition model in (\ref{elbo_final}): \begin{equation} \log p_\theta(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}}^*) \geq \log p_\theta(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}}_1) \geq \log p_\theta(s^{t+1}|a^{t}, s^{t}, {\mathcal{G}}_2) \label{estep_eq2} \end{equation} where SHD is the Structural Hamming Distance defined in Appendix~\ref{proof_monotonicity}. \end{lemma} \begin{lemma}[Bounded Value Function in Policy Learning] \label{Value Bound} Given a planning horizon $H\to \infty$, if we already have an approximate transition model with $\mathbb{D}_{\text{TV}}(\hat{p}(s'|s,a), p(s'|s,a)) \leq \epsilon_m$, the approximate policy $\hat{\pi}$ achieves a near-optimal value function (refer to Appendix~\ref{app:planning} for detailed analysis): \begin{equation} \|V^{\pi^*}(s)-V^{\hat{\pi}}(s)\|_\infty \leq \frac{\gamma}{(1-\gamma)^2} \epsilon_m \end{equation} \end{lemma} \begin{lemma}[Policy Learning Improves Interventional Data Distribution] \label{TV_distance} With a step reward $r(s, a) = p_g \mathds{1}(s=g)$, we show that the value function determines an upper bound for the TV divergence between the interventional distribution and its optimum (proof details in Appendix~\ref{app:planning}): \begin{equation} \begin{aligned} \mathbb{D}_{\text{TV}}(p_{{\mathcal{I}}^s_\pi}, p_g) \leq 1 - (1-\gamma) V^\pi (s).
\end{aligned} \end{equation} where $p_{{\mathcal{I}}_\pi^s}$ is the marginal state distribution in the interventional data, and $p_g$ is the goal distribution. A better policy with a larger $V^\pi(s)$ pushes the distribution of interventional data towards the goal. \end{lemma} \begin{lemma}[Interventional Data Benefits Causal Discovery] \label{causal_discovery_quality} For $\epsilon_g=\min_{p_g>0} p_g$ and $\mathbb{D}_{\text{TV}}(p_{{\mathcal{I}}_\pi^s}, p_g)< \epsilon_g$, the error of our causal discovery is upper bounded by $\mathbb{E}_{\hat{{\mathcal{G}}}}[ \text{SHD}(\hat{{\mathcal{G}}}, {\mathcal{G}}^*)] \leq |{\mathcal{S}}|-1$. \end{lemma} After this closed-loop analysis of our model, we are now able to analyze the overall performance of the proposed framework. Under the construction of $p_\theta(s^{t+1}|a^t, s^t, {\mathcal{G}})$ with NN-parameterized functions, the following theorem shows that the learning process is guaranteed to perform a close estimation of the true ELBO under the iterative optimization among model learning, planning, and causal discovery. \begin{theorem} \label{convergence} With T-step approximate transition dynamics $\mathbb{D}_{\text{TV}}\Big(\hat{p}(s'|s,a), p(s'|s,a)\Big) \leq \epsilon_m$, if the goal distribution satisfies $\epsilon_g > \frac{\gamma}{1-\gamma} \epsilon_m$, and the prior distribution over CGs is entry-wise independent $\text{Bernoulli}(\epsilon_{\mathcal{G}})$, GRADER is guaranteed to achieve an approximate ELBO $\hat{{\mathcal{J}}}$ close to the true ELBO ${\mathcal{J}}^*$: \begin{equation} \|{\mathcal{J}}^*(\theta, \phi) - \hat{{\mathcal{J}}}(\hat{\theta}, \hat{\phi}) \|_{\infty} \leq \left[1 + \frac{\gamma}{(1-\gamma)^2}\right] \epsilon_m T+ \log \left(\frac{1-\epsilon_{\mathcal{G}}}{\epsilon_{\mathcal{G}}} \right) (|{\mathcal{S}}|-1) , \end{equation} \end{theorem} An intuitive understanding of the performance guarantee is that a better transition model indicates a better approximation of the objective ${\mathcal{J}}$.
Yet it is harder for \textit{GRADER} to approximate longer trajectories with large state spaces. The proof of this theorem and corresponding empirical results are in Appendix~\ref{app:convergence}. \vspace{-2mm} \section{Experiments} \begin{wrapfigure}{r}{0.5\textwidth} \vspace{-4mm} \centering \includegraphics[width=0.5\textwidth]{figs/env_2.png} \caption{Three environments used in this paper.} \label{fig:env} \vspace{-6mm} \end{wrapfigure} In this section, we first discuss the setting of our designed environments as well as the baselines used in the experiments. Then, we provide the numerical results and detailed discussions to answer the following important research questions: \qa{\textbf{Q1.}} Compared to baselines, how does GRADER gain performance improvements under both in-distribution and generalization settings? \qa{\textbf{Q2.}} Compared to an offline random policy, how does a well-trained policy improve the results of causal discovery? \qa{\textbf{Q3.}} Compared to score-based causal discovery, does the proposed data-efficient causal discovery pipeline guarantee to identify the true causal graph as stated in Section~\ref{causal_discovery}? \qa{\textbf{Q4.}} Compared to false causal graphs, how does the correctness of the discovered causal graph influence the task-solving performance of RL agents? \subsection{Environments and Baselines} Since most commonly used RL benchmarks do not explicitly require causal reasoning for generalization, we design three environments shown in Figure~\ref{fig:env}. These environments use the true state as the observation to disentangle the reasoning task from visual understanding. For each environment, we design three settings -- in-distribution (\textit{I}), spuriousness (\textit{S}), and composition (\textit{C}) -- corresponding to different goal distributions for generalization. We use $p_{\text{train}}(g)$ and $p_{\text{test}}(g)$ to represent the goal distribution during training and testing, respectively.
$I$ uses the same $p_{\text{train}}(g)$ and $p_{\text{test}}(g)$, $S$ introduces spurious correlations in $p_{\text{train}}(g)$ but removes them in $p_{\text{test}}(g)$, and $C$ contains more similar sub-goals in $p_{\text{test}}(g)$ than in $p_{\text{train}}(g)$. The details of these settings in the three environments are briefly summarized in the following (details in Appendix~\ref{detail_env}): \begin{itemize}[leftmargin=0.2in] \item \textbf{Stack}: We design this manipulation task inspired by CausalWorld~\cite{ahmed2020causalworld}, where the agent must stack objects to match specific shapes and colors. In \textit{Stack-S}, we let the same shape have the same color in $p_{\text{train}}(g)$ but randomly sample the color and shape in $p_{\text{test}}(g)$. In \textit{Stack-C}, the maximum number of objects is two in $p_{\text{train}}(g)$ but five in $p_{\text{test}}(g)$. \item \textbf{Unlock}: We design this task, built upon Minigrid~\cite{gym_minigrid}, for the agent to collect a key to open doors. In \textit{Unlock-S}, the door and the key are always in the same row in $p_{\text{train}}(g)$ but uniformly sampled in $p_{\text{test}}(g)$. In \textit{Unlock-C}, there is one door in $p_{\text{train}}(g)$ but two doors in $p_{\text{test}}(g)$. \item \textbf{Crash}: The occurrence of accidents usually relies on causality, e.g., an autonomous vehicle (AV) collides with a jaywalker because its view is blocked by another car~\cite{tavares2021language}. We design such a crash scenario based on highway-env~\cite{highway-env}, where the goals are to create crashes between a pedestrian and different AVs. In \textit{Crash-S}, the initial distance between the ego car and the pedestrian is a constant in $p_{\text{train}}(g)$ but irrelevant in $p_{\text{test}}(g)$. In \textit{Crash-C}, there is one pedestrian in $p_{\text{train}}(g)$ but two in $p_{\text{test}}(g)$. \end{itemize} \begin{table}[t] \caption{Success rate (\%) for nine settings in three environments.
\highest{Bold} font means the best.} \label{tab:overall} \centering \small{ \begin{tabular}{l|p{0.9cm} p{0.9cm} p{0.9cm} p{0.9cm} p{1.2cm} | p{0.9cm} p{0.9cm} p{0.9cm} p{1.0cm}} \toprule \small{Env} & \small{SAC} & \small{ICIN} & \small{PETS} & \small{TICSA} & \small{ICIL} & \scriptsize{GRADER} & \small{Score} & \small{Full} & \small{Offline} \\ \midrule \small{Stack-I} & 34.7\scriptsize{$\pm$16.1} & 71.8\scriptsize{$\pm$6.9} & \highest{97.2\scriptsize{$\pm$6.9}} & 85.9\scriptsize{$\pm$8.4} & 93.7\scriptsize{$\pm$5.9} & 95.6\scriptsize{$\pm$5.4} & 92.7\scriptsize{$\pm$7.4} & 92.9\scriptsize{$\pm$6.3} & 96.8\scriptsize{$\pm$5.8} \\ \small{Stack-S} & 22.1\scriptsize{$\pm$14.0} & 71.0\scriptsize{$\pm$7.4} & 77.7\scriptsize{$\pm$13.5} & 88.8\scriptsize{$\pm$10.1} & 81.2\scriptsize{$\pm$14.4} & \highest{97.6\scriptsize{$\pm$6.0}} & 90.5\scriptsize{$\pm$7.5} & 86.0\scriptsize{$\pm$9.5} & 95.4\scriptsize{$\pm$6.1}\\ \small{Stack-C} & 31.7\scriptsize{$\pm$5.1} & 58.6\scriptsize{$\pm$8.3} & 73.7\scriptsize{$\pm$10.3} & 76.2\scriptsize{$\pm$8.3} & 62.8\scriptsize{$\pm$13.0} & \highest{93.7\scriptsize{$\pm$8.4}} & 73.9\scriptsize{$\pm$8.5} & 75.7\scriptsize{$\pm$10.3} & 81.4\scriptsize{$\pm$7.8}\\ \midrule \small{Unlock-I} & 0.1\scriptsize{$\pm$0.5} & 31.7\scriptsize{$\pm$9.6} & 59.5\scriptsize{$\pm$7.2} & 58.5\scriptsize{$\pm$12.3} & \highest{67.1\scriptsize{$\pm$11.6}} & 64.2\scriptsize{$\pm$9.1} & 44.9\scriptsize{$\pm$28.1} & 63.8\scriptsize{$\pm$9.2} & 13.8\scriptsize{$\pm$8.1} \\ \small{Unlock-S} & 0.0\scriptsize{$\pm$0.2} & 32.7\scriptsize{$\pm$8.6} & 20.6\scriptsize{$\pm$5.9} & 33.6\scriptsize{$\pm$14.3} & 15.9\scriptsize{$\pm$4.7} & \highest{61.4\scriptsize{$\pm$4.4}} & 23.1\scriptsize{$\pm$7.6} & 18.3\scriptsize{$\pm$7.4} & 13.9\scriptsize{$\pm$7.5}\\ \small{Unlock-C} & 0.4\scriptsize{$\pm$1.7} & 31.5\scriptsize{$\pm$8.5} & 28.3\scriptsize{$\pm$10.0} & 29.8\scriptsize{$\pm$8.3} & 53.6\scriptsize{$\pm$15.3} & \highest{82.1\scriptsize{$\pm$9.2}} & 
36.2\scriptsize{$\pm$30.1} & 53.7\scriptsize{$\pm$14.3} & 11.7\scriptsize{$\pm$6.9}\\ \midrule \small{Crash-I} & 22.5\scriptsize{$\pm$17.6} & 27.9\scriptsize{$\pm$6.1} & 52.3\scriptsize{$\pm$11.5} & 68.9\scriptsize{$\pm$5.9} & 55.3\scriptsize{$\pm$20.9} & \highest{91.5\scriptsize{$\pm$4.4}} & 42.3\scriptsize{$\pm$17.5} & 69.8\scriptsize{$\pm$14.0} & 13.1\scriptsize{$\pm$16.2} \\ \small{Crash-S} & 18.6\scriptsize{$\pm$8.7} & 15.8\scriptsize{$\pm$17.2} & 44.6\scriptsize{$\pm$12.5} & 56.8\scriptsize{$\pm$8.6} & 21.7\scriptsize{$\pm$17.7} & \highest{84.3\scriptsize{$\pm$10.0}} & 53.4\scriptsize{$\pm$18.7} & 52.6\scriptsize{$\pm$12.8} & 30.2\scriptsize{$\pm$16.5}\\ \small{Crash-C} & 6.7\scriptsize{$\pm$3.8} & 7.8\scriptsize{$\pm$8.8} & 37.1\scriptsize{$\pm$5.1} & 15.0\scriptsize{$\pm$8.2} & 14.3\scriptsize{$\pm$7.3} & \highest{84.7\scriptsize{$\pm$7.3}} & 8.4\scriptsize{$\pm$6.1} & 42.0\scriptsize{$\pm$17.2} & 14.9\scriptsize{$\pm$12.4}\\ \bottomrule \end{tabular} } \vspace{-4mm} \end{table} We use the following methods as our baselines to fairly demonstrate the advantages of GRADER. \textbf{SAC:}~\cite{haarnoja2018soft} Soft Actor-Critic is a well-known model-free RL method; we augment it with HER~\cite{ren2019exploration} in \textit{Stack} because the goal in this task does not have pre-conditions. \textbf{ICIN:}~\cite{nair2019causal} It uses DAgger~\cite{ross2011no} to learn goal-conditioned policy with the causal graph estimated from the expert policy. We assume it can access the true causal graph for supervised learning. \textbf{PETS:}~\cite{chua2018deep} Since our framework shares a very similar structure as model-based RL, we consider this ensemble transition model with random shoot planning as one baseline, which achieves generalization with the uncertainty-aware design. \textbf{TICSA:}~\cite{wangtask} This is a causal-augmented MBRL method that simultaneously optimizes a soft adjacent matrix (representing the causality) and a transition model. 
\textbf{ICIL:}~\cite{bica2021invariant} This method proposes an invariant feature learning structure that captures the implicit causality of multiple tasks. We only use it for transition model learning since the original method is designed for imitation learning. \subsection{Results Discussion} \label{discovery_results} \textbf{Overall Performance (\qa{Q1})} We compare the testing reward of all methods on nine tasks and summarize the results in Table~\ref{tab:overall} to demonstrate the overall performance. Generally, our method outperforms the baselines in all tasks except \textit{Stack-I}, which is simple for all methods. We note that the gap between our method and the baselines under the \textit{S} and \textit{C} settings is more significant than under the \textit{I} setting, showing that our method still works well on non-trivial generalization tasks. As a model-free method, SAC fails in all three tasks of both the \textit{Unlock} and \textit{Crash} environments since they have sparse rewards. Without learning the causal structure of the environment, PETS cannot even fully solve \textit{Unlock-S}, \textit{Unlock-C}, and all \textit{Crash} tasks. Both TICSA and ICIL learn explicit or implicit causality of the task, so they perform better than SAC and PETS. However, they are still worse than GRADER in the two generalization settings because of their unstable and inefficient causal reasoning mechanisms. We also find that even when ICIN is given the true causal graph, its policy learning part cannot efficiently leverage the causality, leading to worse performance in the generalization settings. To further analyze the learning dynamics, we plot the curves of all methods on \textit{Crash} in Figure~\ref{fig:discover_graph}. Our method quickly learns to solve the tasks at the beginning of training, demonstrating its data efficiency. GRADER also outperforms the other methods by large margins in the later training phase.
The results for the other two environments can be found in Appendix~\ref{app:more_results}. \textbf{Importance of Policy Learning (\qa{Q2})} As mentioned in Section~\ref{causal_discovery}, we empirically compare GRADER and \textbf{Offline}~\cite{zhu2022offline}, which uses data collected offline by a random policy, and plot the results in the right part of Figure~\ref{fig:discover_graph}. We use SHD~\cite{tsamardinos2006max} to compute the distance between the estimated causal graph and the true causal graph, where the latter is obtained by checking the best causal graph through human analysis. The true causal graph for each environment can be found in Appendix~\ref{app:causal_graph}. When we only use samples collected offline by a random policy, we cannot observe values of variables that require long-horizon reasoning, e.g., the door can only be opened if the agent is close to the door and has the key, leading to unreliable discovery. As a consequence, the causal graph obtained by Offline indeed harms the performance, as shown in Figure~\ref{fig:discover_graph}. In contrast, GRADER gradually explores more regions and quickly obtains the true causal graph as we iteratively discover the causal graph and update the policy. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{figs/overall_reward_4.pdf} \caption{\textbf{Left:} Test reward in the \textit{Crash} environment calculated over 30 trials.
\textbf{Right:} The accuracy of causal graph discovery with samples from GRADER, Score, and Offline (random policy).} \label{fig:discover_graph} \vspace{-4mm} \end{figure} \begin{wrapfigure}{r}{0.35\textwidth} \vspace{-4mm} \centering \includegraphics[width=0.35\textwidth]{figs/monotonicity_svg.pdf} \caption{Influence of different causal graphs on \textit{Unlock-S}.} \label{fig:influence} \vspace{-5mm} \end{wrapfigure} \textbf{Advantage of Data-efficient Causal Discovery (\qa{Q3})} To show the advantage of the proposed constraint-based method, we design a model named \textbf{Score} that optimizes a soft adjacency matrix, similarly to score-based methods, in the discovery step. Score-based discovery methods, also used in TICSA, have recently been combined with neural networks for differentiable training. According to the discovery accuracy shown in the right part of Figure~\ref{fig:discover_graph}, we find that score-based discovery is inefficient. Based on the performance of the Score model summarized in Table~\ref{tab:overall}, we also conclude that it is not as good as our constraint-based method and has a large variance due to the unstable learning of the causal graph. \textbf{Influence of Causal Graph (\qa{Q4})} To illustrate the importance of the causal graph, we implement another variant of GRADER named \textbf{Full}, which uses a fixed full graph that connects all nodes between the sets $\mathcal{U}$ and $\mathcal{V}$. According to the performance shown in Table~\ref{tab:overall} and Figure~\ref{fig:discover_graph}, we find that the full graph achieves worse results than GRADER because of redundant and spurious correlations. Intuitively, unrelated information adds noise to the learning procedure, and spurious correlations create shortcuts that make the model extract the wrong features, leading to worse results in the spuriousness generalization setting, as shown in Table~\ref{tab:overall}. We then investigate how the correctness of the causal graph influences the performance.
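Graph correctness in these comparisons is measured with SHD. As a reference, a minimal sketch of the metric under one common convention (binary directed adjacency matrices, with a reversed edge counted as a single error) is:

```python
import numpy as np

def shd(g_est, g_true):
    """Structural Hamming Distance between two directed graphs given as
    binary adjacency matrices; a reversed edge counts as one error."""
    a = np.asarray(g_est, dtype=bool)
    b = np.asarray(g_true, dtype=bool)
    diff = a ^ b                          # entries where the graphs disagree
    # an edge present in both graphs but with opposite orientation
    reversed_edges = diff & diff.T & a & b.T
    return int(diff.sum() - reversed_edges.sum())

g_true = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]])
g_missing = np.array([[0, 1, 0],          # edge 1 -> 2 deleted
                      [0, 0, 0],
                      [0, 0, 0]])
g_flipped = np.array([[0, 1, 0],          # edge 1 -> 2 reversed
                      [0, 0, 0],
                      [0, 1, 0]])
assert shd(g_true, g_true) == 0
assert shd(g_missing, g_true) == 1
assert shd(g_flipped, g_true) == 1
```

This is only one convention; \cite{tsamardinos2006max} define SHD over partially directed graphs, but the binary-matrix variant above suffices for intuition.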
We use fixed graphs interpolating from the best causal graph to the full graph to train a GRADER model on \textit{Unlock-S} and summarize the results in Figure~\ref{fig:influence}. The more correct the graph is, the higher the reward the agent obtains, which supports our statement in Section~\ref{sec::convergence} that the causal graph is important for reasoning tasks: a better causal graph helps the model achieve better task-solving performance. \vspace{-2mm} \section{Related Works} \textbf{RL Generalization} From the agent's view, algorithms focus on actively obtaining the structure of the task or environment. Some methods decompose the given task into sub-tasks~\cite{kipf2019compile, xu2019regression, huang2019neural, lu2021learning}; when they encounter an unseen task, they rearrange those (learned or pre-defined) sub-tasks to solve it. Instead of dividing tasks, Symbolic RL learns a program-based policy consisting of domain language~\cite{yang2021program, han2020neuro} or grammar~\cite{landajuela2021discovering, garnelo2016towards}. The generated program is then executed~\cite{zhao2021proto} to interact with the environment, which has the potential to solve unseen tasks by searching the symbolic space. From the environment's view, one can generate environments to augment the agent's experience for better generalization. One straightforward way is data augmentation of image-based observations~\cite{kostrikov2020image, hansen2021stabilizing, srinivas2020curl, hansen2021generalization, lee2019network, raileanu2021automatic}. When extended to other factors of the environment, \textit{Domain Randomization}~\cite{tobin2017domain} and its variants~\cite{mehta2020active, prakash2019structured} have been proposed. Considering the interaction between agent and environment, \textit{Curriculum Learning}~\cite{narvekar2020curriculum} gradually generates more difficult tasks to train generalizable agents.
\textbf{Goal-Conditioned RL (GCRL)} From the optimization perspective, Universal Value Functions~\cite{schaul2015universal}, reward shaping~\cite{trott2019keeping}, and latent dynamics models~\cite{nair2020goal} are widely used tools for solving the GCRL problem. Sub-goal generation~\cite{florensa2018automatic} is another intuitive idea for tackling long horizons with sparse rewards, where the key challenge is ensuring that the generated sub-goals are solvable. Finally, \textit{HER}~\cite{andrychowicz2017hindsight}, belonging to the relabelling category, is a ground-breaking yet straightforward method that treats visited states as ``fake'' goals when the goal and state share the same space. Later on, improved versions of HER~\cite{ren2019exploration, pitis2020maximum, fang2019curriculum} were widely studied. One limitation is that a visited state cannot directly be used as a goal if the goal has pre-conditions. Similar to our setting, \cite{tang2021hindsight} and \cite{rudner2021outcome} convert the GCRL problem to variational inference by regarding control as inference~\cite{levine2018reinforcement}: \cite{tang2021hindsight} proposes an EM framework under the HER setting, and \cite{rudner2021outcome} treats the last state as the goal and estimates a shaped reward during training. \textbf{RL with Causal Reasoning} Causality is now frequently discussed in the machine learning field to complement the interpretability of neural networks~\cite{peters2017elements}. RL algorithms also incorporate causality to improve their reasoning capability~\cite{madumal2020explainable}. For instance, \cite{nair2019causal} and \cite{volodin2020resolving} explicitly estimate causal structures with interventional data obtained from the environment. These structures are either combined with dynamics model learning in model-based RL~\cite{wangtask} or used to adjust the buffer priority~\cite{seitzer2021causal}.
Implicitly, we can abstract the causal structure and formulate it using the \textit{Block MDP}~\cite{zhang2020invariant} setting or by training multiple encoders to extract different kinds of representations~\cite{sodhani2022improving}. Following the idea of invariant risk minimization~\cite{arjovsky2019invariant}, these methods assume task-relevant features are invariant and shared across all environments, and can thus be used as the only cause of the reward. \textbf{Causal Discovery} Causal discovery~\cite{glymour2019review} is a long-standing topic in economics and sociology, where the traditional methods can be generally categorized into constraint-based and score-based approaches. Constraint-based methods~\cite{spirtes2000causation} start from a complete graph and iteratively remove edges using conditional independence tests~\cite{pearson1900x, zhang2012kernel} as constraints. Score-based methods~\cite{chickering2002optimal, hauser2012characterization} use metrics such as the \textit{Bayesian Information Criterion}~\cite{neath2012bayesian} as scores and prefer edges that maximize the score given the dataset. Recently, researchers have extended score-based methods with RL~\cite{zhu2019causal} or differentiable discovery~\cite{brouillard2020differentiable, ke2019learning}. The former selects edges with a learned policy, and the latter learns a soft adjacency matrix with observational or interventional data. Active intervention methods have also been explored~\cite{scherrer2021learning} to increase the efficiency of data collection and decrease the cost of conducting interventions~\cite{lindgren2018experimental}. \section{Conclusion} This paper proposes a latent variable model that injects a causal graph reasoning process into transition model learning and planning to solve Goal-Conditioned RL problems under the generalization setting. We theoretically prove that our iterative optimization process can obtain the true causal graph.
To evaluate the performance of the proposed method, we designed nine tasks in three environments. The comprehensive experimental results show that our method has better data efficiency and performance than the baselines. Our method also provides interpretability through the explicitly discovered causal graph. The main limitation of this work is that the explicit estimation of causal structure does not scale well to a large number of nodes. The factorized state and action space assumption may also restrict the usage of this work to abstracted representations. Thus, extending this method to broader applications could be a future direction. We further discuss the potential negative social impact in Appendix~\ref{app:social}. \newpage \bibliographystyle{unsrt}
\section{Introduction} \IEEEPARstart{C}{haracteristic} mode analyses \cite{Garbacz_TCMdissertation,Harrington_1971a,Harrington_1971b} of arrays and metasurfaces typically follow one of two trends aimed at reducing what is, in principle, a problem of infinite spatial extent to one of finite size. The simplest of these approaches is to analyze large, finite periodic structures as a single object e.g., \cite{li2018metasurface,zhao2018characteristic,lin2018truncated,lin2019recent,genovesi2021characteristic}. This method involves no changes to the underlying characteristic mode formulation but yields current distributions which depend significantly on the size and shape of the finite structure being studied, e.g., modes of a large rectangular array resemble the modes of a large rectangular plate \cite{KingAJ_PhDThesis}. Though local whole domain basis functions defined over each unit cell may be used to accelerate computation \cite{Tzanidis_2012a, KingAJ_PhDThesis, Lonskyetal_CMofDipoleArrays,cheng2020development}, these analyses are typically limited by the computational cost of modeling electrically large systems, making asymptotic analysis toward infinite systems prohibitively complex. In contrast, another approach involves studying the modal behavior of a single unit cell, either in isolation \cite{chukwuka2020stored} or through the use of periodic boundary conditions \cite{maalik2016characteristic,haykir2018characteristic,Haykir_2019a,Angiulli_2000a,guo2020miniaturized,wu2021broadband} or their approximation \cite{ethier2012antenna}. 
The use of periodic boundary conditions appears to be promising as it naturally captures inter-element mutual coupling effects while requiring a relatively low number of degrees of freedom in representing fields and currents; however, previous work in this area is largely limited to the numerical analysis of surfaces under normal illumination, a case where familiar properties of the method of moments impedance matrix (e.g., transpose symmetry) hold and no significant changes to the characteristic mode formulations are required. In this letter, the characteristic modes of infinite periodic structures are formulated using spectral Green's functions, i.e., Floquet mode expansion (numerically studied in \cite{angiulli2000application} and qualitatively discussed in \cite{Haykir_2019a}), in order to theoretically examine properties not present in the analysis of finite structures. We focus on the study of periodic structures constructed from patterned perfectly conducting (PEC) screens in free space. This restriction leads to several interesting analytic results related to the fact that only a small number of radiating characteristic modes exist, a property made clear by analysis of the spectral Green's function for these particular problems. Similar results hold for screens supported by lossless dielectric layers. Additionally, modal decomposition of a surface's reflection tensor is discussed, which, due to the limited number of radiating modes, is found to be an extremely sparse representation of this figure of merit.
\section{Spectral representations} \label{sec:spectral} \begin{figure} \centering \tdplotsetmaincoords{60}{120} \begin{tikzpicture}[tdplot_main_coords,scale=0.9,transform shape] \draw[fill = yellow!10] (0,0,0) -- (0,1,0) -- (1,1,0) -- (1,0,0) -- (0,0,0); \begin{scope}[shift={(2,0)}] \draw[fill = yellow!10] (0,0,0) -- (0,1,0) -- (1,1,0) -- (1,0,0) -- (0,0,0); \end{scope} \node at (2.5,0.5) {$\varOmega$}; \begin{scope}[shift={(2,2)}] \draw[fill = yellow!10] (0,0,0) -- (0,1,0) -- (1,1,0) -- (1,0,0) -- (0,0,0); \end{scope} \begin{scope}[shift={(0,2)}] \draw[fill = yellow!10] (0,0,0) -- (0,1,0) -- (1,1,0) -- (1,0,0) -- (0,0,0); \end{scope} \draw[dashed] (-1,-0.5,0) -- (4,-0.5,0); \draw[dashed] (-1,1.5,0) -- (4,1.5,0); \draw[dashed] (-1,3.5,0) -- (4,3.5,0); \draw[dashed] (-0.5,-1,0) -- (-0.5,4,0); \draw[dashed] (1.5,-1,0) -- (1.5,4,0); \draw[dashed] (3.5,-1,0) -- (3.5,4,0); \draw[->] (0.5,2.5) -- (0.5,3.85) node[right] {$x$}; \draw[->] (0.5,2.5) -- (-0.95,2.5) node[right] {$y$}; \draw[<->] (1.5,-1) -- node[above left] {$T_y$} (3.5,-1); \draw[<->] (4,-0.5) -- node[below left] {$T_x$} (4,1.5); \end{tikzpicture} \caption{Four unit cells within an infinite periodic structure lying in the $xy$ plane. Each cell is of dimension $T_x\times T_y$ and contains an identical conducting patch $\varOmega$. These patches may be connected across unit cell boundaries.} \label{fig:schem} \end{figure} The characteristic mode analysis of periodic infinite structures is made relatively simple through the use of a spectral decomposition of currents existing within each unit cell. This process is reviewed over the next two sections in order to highlight a few of its key features relevant to characteristic mode analysis. Readers should consult more comprehensive references on this topic, e.g., \cite{pozar1984analysis,pozar1987radiation,cwik1987scattering,mittra1988techniques,jin2011theory,munk2005frequency}, for further details. 
Consider a planar, rectangular unit cell $S$ of dimension $T_x \times T_y$ containing a perfectly conducting patch $\varOmega$ lying in the $z=0$ plane, depicted in Fig.~\ref{fig:schem}. Assume this unit cell repeats infinitely in both $x$ and $y$ directions. A global current density $\V{J}(\V{r})$ in this plane may be expanded as \begin{equation} \V{J}(\V{r}) = \V{j}(\V{r})\T{e}^{\T{j}\V{k}_\T{i}\cdot\V{r}} \end{equation} where $\V{j}(\V{r})$ is the local current density repeated identically in each unit cell and $\V{k}_\T{i}$ represents the tangential component of an incident wavevector (i.e., scan angle or prescribed phase shift per unit cell). Due to its periodicity, the current density $\V{j}$ may be expanded into a two dimensional Fourier basis (Floquet modes) using the transform pair \begin{equation} \V{j}(\V{r}) = \sum_\gamma \tilde{\V{\jmath}}_\gamma \T{e}^{\T{j}\V{k}_{\gamma}\cdot\V{r}} \label{eq:j-periodic-floquet} \end{equation} and \begin{equation} \V{\tilde{\jmath}}_{\gamma} = \frac{1}{T_xT_y}\int_S \V{j}(\V{r}) \T{e}^{-\T{j}\V{k}_{\gamma}\cdot\V{r}}\T{d}\V{r} \end{equation} where \begin{equation} \V{k}_{\gamma} = \u{x}\frac{2\pi p}{T_x} + \u{y}\frac{2\pi q}{T_y} \end{equation} and the indices $p$ and $q$ are contained within the meta-index~$\gamma$. Each spectral current component $\tilde{\V{\jmath}}_\gamma$ produces a phase matched tangential electric field $\tilde{\V{e}}_\gamma$ via a spectral Green's function. Hence a spectral impedance operator\footnote{Following \cite{Harrington_1971a}, the impedance operator maps a current onto the negated tangential scattered field it produces. 
Hence, $\tilde{\V{e}}_\gamma$ should be interpreted as only the tangential field component.} acting on each spectral current component is given by \begin{equation} \tilde{\V{e}}_\gamma = -\V{\mathcal{Z}}_\gamma \tilde{\V{\jmath}}_\gamma \end{equation} where \begin{equation} \V{\mathcal{Z}}_\gamma = \frac{\eta}{2kk'_{\gamma z}} \begin{bmatrix} k^2-k'^2_{\gamma x} & - k'_{\gamma x}k'_{\gamma y} \\ - k'_{\gamma x}k'_{\gamma y} & k^2-k'^2_{\gamma y} \\ \end{bmatrix} = (k'_{\gamma z})^{-1}\V{\mathcal{A}}_{\gamma} \end{equation} with $\V{\mathcal{A}}_{\gamma}$ being a real-valued dyadic, the free space wavenumber $k = \omega\sqrt{\mu_0\varepsilon_0}$, and the primed wavevector given by \begin{equation} \V{k}'_\gamma = \V{k}_\T{i} + \V{k}_{\gamma} + \u{z}\sqrt{k^2 - |\V{k}_\T{i} + \V{k}_{\gamma}|^2}. \label{eq:kpqz} \end{equation} By the expansion in \eqref{eq:j-periodic-floquet}, the total periodic tangential field in the $z=0$ plane is made up of many spectral contributions \begin{equation} \V{e}(\V{r}) = \sum_\gamma \tilde{\V{e}}_\gamma \T{e}^{\T{j}\V{k}_{\gamma}\cdot\V{r}} = -\sum_\gamma \V{\mathcal{Z}}_\gamma \tilde{\V{\jmath}}_\gamma \T{e}^{\T{j}\V{k}_{\gamma}\cdot\V{r}}. \label{eq:ezj} \end{equation} Invoking Poynting's theorem over a region enclosing one unit cell, orthogonality of complex exponentials may be employed to reduce the complex power $P$ provided per unit cell to a single sum of spectral contributions $P_\gamma$, i.e., \begin{equation} P = \sum_{\gamma} \frac{T_xT_y}{2k'_{\gamma z}}\tilde{\V{\jmath}}_{\gamma}^*\cdot\V{\mathcal{A}}_{\gamma}\tilde{\V{\jmath}}_{\gamma} = \sum_{\gamma} P_{\gamma}. \end{equation} For any modal vector coefficient $\tilde{\V{\jmath}}_{\gamma}$ and real transverse wavenumber $\V{k}_{\gamma}$, the term $\tilde{\V{\jmath}}_{\gamma}^*\cdot\V{\mathcal{A}}_{\gamma}\tilde{\V{\jmath}}_{\gamma}$ appearing in the above expression is purely real, though indefinite. 
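This dichotomy is easy to verify numerically. The following minimal sketch (an illustration with assumed values, not the solver used for the results below) builds $\V{\mathcal{Z}}_\gamma$ from the expressions above and confirms that it is purely real for a radiating harmonic and purely imaginary for an evanescent one:

```python
import numpy as np

def spectral_impedance(k, kt, eta=376.73):
    """Spectral impedance dyad Z_gamma for transverse wavevector kt = (kx', ky').

    The longitudinal wavenumber kz' = sqrt(k^2 - |kt|^2) is taken as a
    complex square root so that evanescent harmonics (|kt| > k) yield a
    purely imaginary dyad.
    """
    kx, ky = kt
    kz = np.sqrt(complex(k**2 - kx**2 - ky**2))   # k'_{gamma z}
    A = np.array([[k**2 - kx**2, -kx * ky],        # real-valued dyad A_gamma
                  [-kx * ky, k**2 - ky**2]])
    return eta / (2 * k * kz) * A

k = 2 * np.pi                                      # wavelength normalized to 1
Z_rad = spectral_impedance(k, (0.3 * k, 0.0))      # radiating: |kt| < k
Z_ev = spectral_impedance(k, (1.5 * k, 0.0))       # evanescent: |kt| > k

assert np.allclose(Z_rad.imag, 0)  # purely real -> radiated power
assert np.allclose(Z_ev.real, 0)   # purely imaginary -> reactive power
```

The principal branch of the complex square root places $k'_{\gamma z}$ on the positive real or positive imaginary axis, fixing the sign of the corresponding spectral power.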
This implies that the spectral powers $P_\gamma$ are either purely radiating (real) or purely reactive (imaginary), depending on the nature of the longitudinal wavenumber $k'_{\gamma z}$, which in turn is dictated by the relative magnitudes of the transverse and total wavenumbers via \eqref{eq:kpqz}. Within a two dimensional space of incident wavenumber components $k_{\T{i}x}$ and $k_{\T{i}y}$, the radiating (i.e., visible) region for each spectral component is a circle of radius $k$ centered at \begin{equation} k_{\T{i}x}^\gamma = -\frac{2\pi p}{T_x},\quad k_{\T{i}y}^\gamma = -\frac{2\pi q}{T_y} . \end{equation} These regions are plotted for several values of the wavenumber $k$ in Fig.~\ref{fig:zones} for a square unit cell of dimension $T$. \begin{figure} \centering \includegraphics[width=3.1in]{figures/02-zones.pdf} \caption{Zones indicating the presence of radiating spectral components for a square unit cell of dimension $T$. Three cuts through $\V{k}_\T{it}$ space are shown at $kT/(2\pi) = T/\lambda = 0.25$ (blue), $0.5$ (red), and $0.8$ (green).} \label{fig:zones} \end{figure} \section{Spatial impedance matrices} \label{sec:mom} Expanding the periodic current density $\V{j}(\V{r})$ into a real-valued $N$-dimensional basis $\{\V{\psi}_\alpha(\V{r})\}$ (e.g., rooftop basis functions~\cite{cwik1987scattering}) with coefficients $\{I_\alpha\}$ and applying Galerkin testing to \eqref{eq:ezj} leads to \begin{equation} V_\beta = -\frac{1}{T_xT_y} \sum_{\alpha=1}^N I_\alpha \sum_\gamma \V{f}_{\gamma\beta}^*\cdot\V{\mathcal{Z}}_\gamma \V{f}_{\gamma\alpha} \end{equation} where \begin{equation} \V{f}_{\gamma\alpha} = \int_S \V{\psi}_\alpha(\V{r}) \T{e}^{-\T{j}\V{k}_{\gamma}\cdot\V{r}}\T{d}\V{r} \end{equation} and \begin{equation} V_\beta = \int_S \V{e}(\V{r})\cdot\V{\psi}_\beta(\V{r})\T{d}\V{r}. \end{equation} Collecting $N$ such equations allows for the construction of a method of moments system \begin{equation} \M{V} = -\M{Z}\M{I}. 
\end{equation} Hence we interpret the complex impedance matrix entry $Z_{\alpha\beta}$ as the sum of spectral terms $Z_{\alpha\beta,\gamma}$, each of which, in general, is complex. Based on the discussion in Sec.~\ref{sec:spectral}, the dyadic $\V{\mathcal{Z}}_{\gamma}$ takes on purely real or purely imaginary values when the Floquet harmonic with index $\gamma$ is radiating or evanescent, i.e., \begin{equation} Z_{\alpha\beta,\gamma} = R_{\alpha\beta,\gamma} = a\V{f}_{\alpha, \gamma}^*\cdot \V{\mathcal{A}}_{\gamma}\V{f}_{\beta, \gamma},\quad|\V{k}_{\gamma}+\V{k}_\T{i}|<k \end{equation} and \begin{equation} Z_{\alpha\beta,\gamma} = \T{j}X_{\alpha\beta,\gamma} = \T{j}b\V{f}_{\alpha, \gamma}^*\cdot \V{\mathcal{A}}_{\gamma}\V{f}_{\beta, \gamma},\quad|\V{k}_{\gamma}+\V{k}_\T{i}|\geq k \end{equation} with real-valued scaling factors $a$ and $b$. Note that the radiating spectral contributions are Hermitian while the evanescent spectral contributions are anti-Hermitian, i.e., \begin{equation} R_{\alpha\beta,\gamma} = R_{\beta\alpha,\gamma}^* ~~\text{and}~~ \T{j}X_{\alpha\beta,\gamma} = -\left(\T{j}X_{\beta\alpha,\gamma}\right)^*. \label{eq:xabpq-hermitian} \end{equation} We may divide the spectrum of Floquet harmonics into two subspectra and rewrite the impedance matrix $\M{Z}$ as \begin{equation} \M{Z} = \sum_\T{rad}\M{R}_{\gamma} + \T{j}\sum_\T{ev}\M{X}_{\gamma} = \M{R}+\T{j}\M{X}, \end{equation} where $\sum_\T{rad}$ and $\sum_\T{ev}$ denote sums over spectral meta-indices $\gamma$ corresponding to radiating and evanescent modes, respectively. The matrices $\M{R}$ and $\M{X}$ are, in general, complex, however due to the conditions observed in \eqref{eq:xabpq-hermitian}, they are also Hermitian symmetric. Therefore, the Hermitian parts $\M{R}$ and $\M{X}$ of the impedance matrix $\M{Z}$ are naturally constructed from contributions arising from the radiating and evanescent Floquet subspectra, respectively. 
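Numerically, this split coincides with taking Hermitian parts of $\M{Z}$, which is the relevant operation at oblique incidence where $\M{Z}$ is not transpose symmetric. A minimal sketch with a generic complex matrix standing in for $\M{Z}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# A generic complex impedance matrix; at oblique incidence Z need not be
# transpose symmetric, so its element-wise real/imag parts are not Hermitian.
Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

R = (Z + Z.conj().T) / 2        # Hermitian part: radiating subspectrum
X = (Z - Z.conj().T) / (2j)     # anti-Hermitian part over j: evanescent subspectrum

assert np.allclose(Z, R + 1j * X)
assert np.allclose(R, R.conj().T) and np.allclose(X, X.conj().T)

# Quadratic forms in Hermitian matrices give real powers for any current I
I = rng.standard_normal(N) + 1j * rng.standard_normal(N)
Pr = 0.5 * I.conj() @ R @ I     # radiated power per unit cell
Px = 0.5 * I.conj() @ X @ I     # reactive power per unit cell
assert abs(Pr.imag) < 1e-9 and abs(Px.imag) < 1e-9
```
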
These operators maintain their direct connection to per-unit-cell radiated and reactive power, i.e., \begin{equation} P_\T{r} = \frac{1}{2}\M{I}^\T{H}\M{R}\M{I} \end{equation} and \begin{equation} P_\T{x} = 2\omega(\bar{W}^\T{m} - \bar{W}^\T{e}) = \frac{1}{2}\M{I}^\T{H}\M{X}\M{I}. \end{equation} The number of radiating spectral components is predetermined by the parameters $\V{k}_\T{i}$, $T_x$ and $T_y$. This, in turn, determines the rank of the radiation and reactance matrices $\M{R}$ and $\M{X}$. Note that resonant evanescent spectral components may alter the rank of the matrix $\M{X}$ at discrete combinations of parameters. If the periodicity is sufficiently small that $T_{x/y}<\lambda/2$, then the visible regions of each spectral component are non-overlapping and at most only one radiating component exists at each incident spatial frequency $\V{k}_\T{i}$, cf Fig.~\ref{fig:zones}. Modifying the problem to include a dielectric support layer with or without a conductive backing involves a change of the underlying periodic impedance operator, see \cite{pozar1984analysis,pozar1987radiation, angiulli2000application, munk2005frequency,kwon2014energy}. In this case, radiating spectral components contribute to both the radiative and reactive parts of the impedance matrix, though the number of radiating components (and thus the rank of the matrix $\M{R}$) is still limited and determined via conditions similar to those illustrated in Fig.~\ref{fig:zones}, see \cite[Ch. 5]{munk2005frequency} and \cite{kwon2014energy}. \section{Characteristic modes} For perfectly conducting structures, characteristic modes diagonalize both the impedance and radiation operators \cite{Harrington_1971a}. 
Using the method of moments formulation in Sec.~\ref{sec:mom}, the generalized eigenvalue problem producing characteristic modes reads \cite{Harrington_1971b} \begin{equation} \M{Z}\M{I}_n = (1+\T{j}\lambda_n) \M{R}\M{I}_n, \label{eq:cmgep} \end{equation} where $\{\M{I}_n\}$ are eigenvectors corresponding to the periodic currents $\{\V{j}_n\}$ represented in a spatial basis, and $\{\lambda_n\}$ are the associated eigenvalues. By the Hermitian symmetry of both $\M{R}$ and $\M{X}$, eigenvectors associated with non-degenerate eigenvalues are orthogonal in the sense that \begin{equation} \M{I}_m^\T{H}\M{R}\M{I}_n = 0,~m\neq n,\quad \M{I}_m^\T{H}\M{X}\M{I}_n = 0,~m\neq n. \end{equation} Due to the limited number of radiating spectral components, there exist only $N_\T{r}$ radiating modes, where $N_\T{r} = \T{rank}~\M{R}$. For these modes, radiated and reactive powers are related via \begin{equation} \M{I}_m^\T{H}\M{R}\M{I}_m = 2P_{\T{r},m}, \end{equation} and \begin{equation} \M{I}_m^\T{H}\M{X}\M{I}_m = 2\lambda_mP_{\T{r},m}. \label{eq:xorth} \end{equation} While each radiating mode may have projections onto multiple radiating spectral components (i.e., planewave radiation in multiple directions), the orthogonality properties above indicate that these modes have orthogonal far fields when integrated over all radiation directions. Non-radiating modes (in the null space of $\M{R}$) have eigenvalues $|\lambda_n| = \infty$, save for those which are also resonant (in the null space of $\M{X}$) which have indeterminate eigenvalues. Numerically calculated eigenvalues for a square split ring resonator within a square unit cell of dimension $T$ are shown in Fig.~\ref{fig:modes}. Also shown are the number of radiating modes obtained numerically and through the analytic prediction based on overlapping spectral zones, cf Fig.~\ref{fig:zones}. 
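The analytic prediction amounts to counting reciprocal-lattice points inside the visible circle, cf.\ Fig.~\ref{fig:zones}. A sketch of this count for a square unit cell (wavelength normalized to unity; the $x$-directed scan is an assumption matching the examples considered here):

```python
import numpy as np

def radiating_harmonics(T_over_lambda, theta_deg=0.0, pq_max=10):
    """Count Floquet harmonics satisfying |k_i + k_gamma| < k (square cell)."""
    k = 2 * np.pi                          # free-space wavenumber, lambda = 1
    T = T_over_lambda
    kix = k * np.sin(np.radians(theta_deg))
    count = 0
    for p in range(-pq_max, pq_max + 1):
        for q in range(-pq_max, pq_max + 1):
            kx = kix + 2 * np.pi * p / T
            ky = 2 * np.pi * q / T
            if np.hypot(kx, ky) < k:       # radiating (visible) condition
                count += 1
    return count

# Below T = lambda only the (0,0) harmonic radiates at normal incidence...
assert radiating_harmonics(0.8, theta_deg=0) == 1
# ...but a grating lobe appears earlier under oblique incidence.
assert radiating_harmonics(0.8, theta_deg=55) == 2
```

Since each radiating harmonic supports two polarizations, the predicted number of radiating characteristic modes is (at most) twice this count.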
Data are plotted for two incident wavevectors of the form of $\V{k}_\T{i} = \u{x}k\sin\theta$ with $\theta = 0^\circ$ and $55^\circ$, illustrating the effect of oblique incidence in lowering the electrical size required to observe higher-order radiating spectral components (i.e., grating lobes). Modal current distributions of the modes with smallest eigenvalue magnitude are plotted in Fig.~\ref{fig:currents} for normal incidence at two frequencies below and above the introduction of grating lobes. Non-radiating modes exhibit seemingly random current distributions with eigenvalues tending toward infinity, though they are in practice saturated at a dynamic range on the order of $10^{16}$ due to finite numerical precision. \begin{figure} \centering \includegraphics[width=3.25in]{figures/03-srr-eigs-new.pdf} \caption{Characteristic mode eigenvalues (left) of a split ring resonator over a range of unit cell electrical sizes and the number of radiating modes (right) obtained by characteristic mode decomposition and analytic prediction.} \label{fig:modes} \end{figure} \begin{figure} \centering \includegraphics[width=3.5in]{figures/04-currents-processed.pdf} \caption{Numerically calculated characteristic modes for a square split ring resonator, see inset for dimensions, within a square unit cell of dimension $T = 20\Delta$. Normal incidence is assumed ($\V{k}_\T{i} = 0$) and modal current distributions are plotted for two electrical sizes. 
Yellow (blue) colors indicate current maxima (minima), while red arrows clarify current direction.} \label{fig:currents} \end{figure} We now consider an incident field constructed from propagating plane waves \begin{equation} \V{e}_\T{i} = \sum_{\T{rad}}\tilde{\V{e}}_{\T{i}\gamma}\T{e}^{\T{j}\V{k}_{\gamma}\cdot\V{r}}\quad\Rightarrow\quad \M{V}_\T{i} = \sum_{\T{rad}} \M{f}^\T{H}_\gamma\tilde{\V{e}}_{\T{i}\gamma}, \label{eq:ei} \end{equation} where $\M{f}_\gamma$ is a collection of vectors $\V{f}_{\gamma\alpha}$ for all members of the chosen basis. Enforcing the PEC boundary condition canceling scattered and incident tangential fields over each patch $\varOmega$, the induced current distribution $\M{I}$ can, by \eqref{eq:cmgep}-\eqref{eq:xorth}, be expressed in terms of $N_\T{r}$ radiating characteristic modes via \begin{equation} \M{I} = \sum_{n=1}^{N_\T{r}} \alpha_n\M{I}_n, \quad \alpha_n = \frac{\M{I}_n^\T{H}\M{V}_\T{i}}{2P_{\T{r},n}(1+\T{j}\lambda_n)}. \label{eq:alpha} \end{equation} Note that, by \eqref{eq:ei}, \begin{equation} \M{I}_n^\T{H}\M{V}_\T{i} = \sum_{\T{rad}} (\M{f}_\gamma\M{I}_n)^\T{H}\tilde{\V{e}}_{\T{i}\gamma}, \end{equation} and only characteristic modes with non-zero projection onto radiating spectral components may be excited, consistent with expectations of reciprocity. 
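The expansion \eqref{eq:alpha} can be checked numerically. The sketch below uses a toy full-rank radiation matrix so that the modal sum is complete, adopts the convention $\M{Z}\M{I}=\M{V}_\T{i}$ (differing from the sign convention above only by an overall sign), and normalizes $\M{I}_n^\T{H}\M{R}\M{I}_n = 1$, i.e., $2P_{\T{r},n}=1$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
# Toy Hermitian operators: R positive definite (every mode radiates) and
# X Hermitian indefinite, with Z = R + jX.
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = B @ B.conj().T + N * np.eye(N)
C = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
X = (C + C.conj().T) / 2
Z = R + 1j * X

# Generalized eigenproblem X I_n = lambda_n R I_n via Cholesky whitening,
# normalized so that I_n^H R I_n = 1.
L = np.linalg.cholesky(R)
Linv = np.linalg.inv(L)
lam, U = np.linalg.eigh(Linv @ X @ Linv.conj().T)
modes = Linv.conj().T @ U

# Excite with an arbitrary tested field V and expand the induced current
V = rng.standard_normal(N) + 1j * rng.standard_normal(N)
alpha = (modes.conj().T @ V) / (1 + 1j * lam)
I_modal = modes @ alpha

assert np.allclose(Z @ I_modal, V)   # the expansion solves the driven problem
```

With a rank-deficient $\M{R}$, as for the PEC screens above, only the $N_\T{r}$ modes with finite eigenvalues carry nonzero $\alpha_n$ under plane-wave excitation.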
\section{Modal reflection coefficients} \begin{figure} \centering \includegraphics[width=3.25in]{figures/05-a-srr-gamma-new-theta-0.pdf}\\ \includegraphics[width=3.25in]{figures/05-b-srr-gamma-new-theta-55.pdf}\\ \caption{Specular co-polarized reflection coefficients $\varGamma_{yy}^{0000}$ calculated for the split ring resonator studied in Fig.~\ref{fig:currents} using an incident wavevector of the form $\V{k}_\T{i} = \u{x}k\sin\theta$ with $\theta = 0^\circ$ (top) and $\theta = 55^\circ$ (bottom).} \label{fig:reflection} \end{figure} Given an incident field described by \eqref{eq:ei}, a reflection tensor mapping the incident $\gamma'$ spectral field component into the scattered $\gamma$ spectral field component may be written as \begin{equation} \tilde{\V{e}}_{\gamma} = \sum_{\gamma'}\V{\Gamma}^{\gamma\gamma'}\tilde{\V{e}}_{\T{i}\gamma'}. \end{equation} By the expansions in \eqref{eq:ei} and \eqref{eq:alpha}, the reflection tensor $\V{\Gamma}^{\gamma\gamma'}$ may be decomposed into modal contributions \begin{equation} \V{\Gamma}^{\gamma\gamma'} = \sum_{n=1}^{N_\T{r}} \V{\Gamma}^{\gamma\gamma'}_n = -\sum_{n=1}^{N_\T{r}} \frac{\V{\mathcal{Z}}_\gamma\M{f}_\gamma\M{I}_n}{T_xT_y} \frac{\M{I}_n^\T{H}\M{f}_{\gamma'}^\T{H}}{2P_{\T{r},n}(1+\T{j}\lambda_n)}, \label{eq:gamma-exp} \end{equation} where the right-hand side is understood to be constructed from outer products of two-dimensional vectors. The action of each dyad $\V{\Gamma}^{\gamma\gamma'}_n$ has the interpretation of taking the projection of the incident field component $\tilde{\V{e}}_{\T{i}\gamma'}$ onto the characteristic mode $\M{I}_n$ and applying the result as a weighting for the projection of that same characteristic mode onto the spectral component $\tilde{\V{e}}_\gamma$ of the scattered field. Longitudinal scattered fields may be reconstructed by considering the propagation vector associated with each spectral component.
In Fig.~\ref{fig:reflection}, the specular co-polarized reflection coefficient $\varGamma_{yy}^{0000}$ is plotted along with its modal contributions. As predicted, at low frequencies at most two characteristic modes (here one, by symmetry) contribute to specular reflection. As frequency increases and more radiating spectral components appear, more modes contribute. Consistent with the results in Figs.~\ref{fig:zones} and \ref{fig:modes}, the appearance of radiating characteristic modes is modulated by the incident wavevector. In all cases studied here, the modal expansions in \eqref{eq:alpha} and \eqref{eq:gamma-exp} using $N_\T{r}$ characteristic modes accurately reconstruct the driven current and reflection coefficient to numerical precision. \section{Discussion} In contrast to the analysis of structures of finite size, only a finite number of radiating characteristic modes are present on the infinite periodic systems studied in this letter. This distinction is primarily due to the nature of the spectral Green's functions used, which give rise to radiation operators of finite and predictable rank based on the number of radiating spectral components. The number of radiating spectral components governs the number of radiating characteristic modes and is predictable \emph{a priori} by standard Floquet mode theory. All non-radiating characteristic modes have infinite or indeterminate eigenvalues and cannot be driven by planewave excitation. This result indicates that, assuming excitations of the form of propagating plane waves, expansion of the driven current in characteristic modes is rigorously sparse, as is the expansion of the reflection tensor $\V{\Gamma}^{\gamma\gamma'}$. Inclusion of lossless dielectric support layers leads to similar results. 
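The \emph{a priori} count referenced above can be made explicit via the standard grating-lobe condition. Writing the tangential wavevector of the $(m,n)$ Floquet harmonic as \begin{equation*} \V{k}_{mn} = \V{k}_\T{t} + \frac{2\pi m}{T_x}\u{x} + \frac{2\pi n}{T_y}\u{y}, \end{equation*} where $\V{k}_\T{t}$ denotes the tangential part of the incident wavevector (notation assumed here), a harmonic propagates when $|\V{k}_{mn}| < k$ and is evanescent otherwise. The number of integer pairs $(m,n)$ satisfying this inequality at a given frequency and incidence, with two polarizations per propagating harmonic, bounds the number of radiating spectral components and hence $N_\T{r}$; below the onset of grating lobes only the $(0,0)$ harmonic radiates, consistent with the low-frequency behavior observed in Fig.~\ref{fig:reflection}.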
Additionally, it is clear from the analysis of the cases considered here that the Hermitian parts, not the element-wise real and imaginary parts, of the impedance matrix are of interest in constructing the characteristic mode eigenvalue problem. This distinction is of particular importance in cases involving oblique incidence (non-zero phase shift per unit cell) where the impedance matrix is not transpose symmetric. With the unique features of characteristic modes on infinite and periodic systems come several interesting challenges and opportunities for future research. Similar to problems involving driven antenna systems \cite{yee1973self}, it is unclear if or how characteristic modes could accurately model currents excited by localized feeds with incident fields containing evanescent spectral components. Modal tracking (see \cite{Masek_2020a} for background and references) will also involve new challenges, with the single frequency parameterization now elevated to a three-dimensional space of frequency and incident wavevector components. There the interpretation of symmetry, common in frequency selective surfaces and reflectarrays \cite{munk2005frequency}, and its impact on tracking must be reassessed. Additionally, the strictly sparse nature of characteristic modes in this context may be a significant aid in formulating and solving optimization problems used to calculate physical bounds on certain classes of electromagnetic systems, e.g.,~\cite{ludvig2019fundamental,ludvig2020physical,2020_Gustafsson_NJP}. \section*{Acknowledgements} This work was funded by Sandia National Laboratories, a multimission laboratory managed and operated by NTESS, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s NNSA under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. 
DoE or the U.S. Government. The author thanks John Borchardt at Sandia National Laboratories for insightful discussions related to this work. \bibliographystyle{IEEEtran}
\section{Main definitions} Let $T^2$ be a finite (but very deep) bi-tree. The bi-tree is the directed graph of all dyadic rectangles in the square $Q_0=[0,1]^2$. We assume that it terminates at small squares of size $2^{-N}\times 2^{-N}$, which we generically denote by $\omega$. The boundary $\partial(T^2)$ is this collection of $\omega$'s. Often we identify $T^2$ with the family of dyadic rectangles, which is called $\mathcal D$. When we write $E\subset (\partial T)^2$ we mean any subset of $\omega$'s. It is convenient to think of $E$ as the union of ``$N$-coarse'' dyadic rectangles. Box condition \begin{equation}\label{e:Boxcond} \sum_{Q\in T^2, \,Q\subset R}\mu^2(Q)\alpha_Q \leq C_{\mu}\mu(R),\quad \textup{for any}\; R\in T^2. \end{equation} Carleson condition \begin{equation}\label{e:Carlcond} \sum_{Q\in T^2,\,Q\subset E}\mu^2(Q)\alpha_Q \leq C_{\mu}\mu(E),\quad \textup{for any}\; E\subset (\partial T)^2. \end{equation} Restricted Energy Condition \begin{equation}\label{e:REC} \sum_{Q\in \mathcal{D}}\mu^2(Q\cap E)\alpha_Q \leq C\mu(E),\quad \textup{for any}\; E\subset (\partial T)^2 \end{equation} Embedding \begin{equation}\label{e:imbed} \sum_{Q\in\mathcal{D}}\left(\int_{Q}\varphi\,d\mu\right)^2\alpha_Q \leq C\int_{Q_0}\varphi^2\,d\mu\quad \textup{for any}\; \varphi\in L^2(Q_0,d\mu). \end{equation} Each of these conditions implies the preceding one. We are interested in when the reverse implications hold, and whether they hold in general. Embedding is the boundedness of the embedding operator $L^2(Q_0, \mu)\to L^2(T^2, \alpha)$ acting as follows: \[ f\in L^2(Q_0, \mu)\to \Big\{\int_R f\, d\mu\Big\}_{R\in T^2}\in L^2(T^2, \alpha). \] The restricted energy condition (REC) is the uniform boundedness of this embedding operator on characteristic functions ${\bf 1}_E$, $E\subset\partial(T^2)$.
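To illustrate the chain of implications: Embedding implies REC by testing \eqref{e:imbed} on the indicator $\varphi={\bf 1}_E$, since \[ \sum_{Q\in\mathcal{D}}\left(\int_{Q}{\bf 1}_E\,d\mu\right)^2\alpha_Q = \sum_{Q\in\mathcal{D}}\mu^2(Q\cap E)\,\alpha_Q \quad\textup{and}\quad \int_{Q_0}{\bf 1}_E^2\,d\mu = \mu(E), \] while REC implies the Carleson condition because $\mu(Q\cap E)=\mu(Q)$ whenever $Q\subset E$, so restricting the sum in \eqref{e:REC} to such $Q$ only decreases it.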
\section{Examples having box condition but not Carleson condition} \label{boxNotC} In \cite{Car} Carleson constructed families $\mathcal R$ of dyadic sub-rectangles of $Q=[0,1]^2$ having the following two properties: \begin{equation} \label{boxCa} \forall R_0 \in \mathcal D, \quad \sum_{R\subset R_0, R\in \mathcal R} m_2(R) \le C_0 m_2(R_0)\,, \end{equation} but for $U_\mathcal R:= \cup_{R\in \mathcal R} R$ \begin{equation} \label{Ca} \sum_{ R\in \mathcal R} m_2(R) \ge C_1 m_2(U_\mathcal R)\,, \end{equation} where $C_1/C_0$ is as big as one wishes. \medskip With {\it some} (rather wild) weights $\{\alpha_R\}_{R\in \mathcal D}$, Carleson's counterexample readily provides an example of $(\mu, \alpha)$ such that the box condition is satisfied but the Carleson condition fails. Indeed, let us put $$ \alpha_R= \begin{cases} \frac{1}{m_2(R)}, \quad R\in \mathcal R, \\ 0, \quad \text{otherwise.} \end{cases} $$ The measure $\mu$ is just the planar Lebesgue measure $m_2$. Fix any dyadic rectangle $R_0$; then the box condition is satisfied: $$ \sum_{R\subset R_0}\mu(R)^2 \alpha_R = \sum_{R\subset R_0, R\in \mathcal R} m_2(R) \le C_0\, m_2(R_0)=C_0\,\mu(R_0)\,. $$ But if $\Omega:= \cup_{R\in \mathcal R} R$, then $$ \sum_{R\subset \Omega}\mu(R)^2 \alpha_R =\sum_{R\subset \Omega, R\in \mathcal R} m_2(R) \ge C_1\,m_2(\Omega)\,, $$ where, by \eqref{Ca}, $C_1$ can be chosen as large as one wants while $C_0$ stays fixed. Hence the Carleson condition \eqref{e:Carlcond} fails with any fixed constant. \bigskip The weight $\alpha=\{\alpha_R\}$ is rather wild here. But there is also a counterexample with $\alpha$ taking only the values $1$ and $0$, see \cite{HPV}. \section{Examples having Carleson condition but not restricted energy condition (REC)} Our aim here is to show that if we do not restrict ourselves to constant weights as in \cite{AMPS18}, \cite{AHMV18b}, \cite{AMVZ19}, then the Carleson condition \eqref{e:Carlcond} is no longer sufficient for the embedding \eqref{e:imbed}. In fact, even the Restricted Energy Condition \eqref{e:REC} is not necessarily implied by \eqref{e:Carlcond}.
Namely, we prove the following statement. \begin{proposition}\label{p:71} For any $\delta>0$ there exist a number $N$, a weight $\alpha: T^2_N\to \mathbb{R}_+$ and a measure $\mu $ on $(\partial T)^2$ such that $\mu$ satisfies the Carleson condition \eqref{e:Carlcond} with the constant $C_{\mu} = \delta$, \begin{equation}\label{e:771} \sum_{Q\subset E}\mu^2(Q)\alpha_Q \leq \delta\mu(E),\quad \textup{for any }E\subset (\partial T)^2, \end{equation} but one can also find a set $F$ such that \begin{equation}\label{e:772} \sum_{Q\in\mathcal{D}}\mu^2(Q\cap F)\alpha_Q \geq \mu(F), \end{equation} hence the constant in \eqref{e:REC} is at least $1$. \end{proposition} We intend to give two examples of this kind. Both of them rely on the fact that one can basically consider this problem on a cut bi-tree by letting $\alpha$ be either $1$ or $0$. This approach clearly does not work on the tree (see \cite[Theorem 1.1]{AHMV18a}), but the bi-tree has a richer geometric structure. While globally (i.e. for $\alpha\equiv 1$) it looks at least somewhat similar to the tree (this similarity is implicit in the proof of \cite[Theorem 1.5]{AHMV18b}), one can remove some vertices (which is what essentially happens when we put $\alpha_Q := 0$) in such a way that the remaining part looks nothing like the full bi-tree (or a tree, for that matter). In particular, this allows us to create a significant gap between the number of ``available'' rectangles that lie inside $E$ and of those that merely intersect $E$, for a certain choice of the weight $\alpha$ and the set $E$ (this corresponds to the difference between the Carleson and REC conditions).\par The first example is quite simple and is inspired by the counterexample for the $L^2$-boundedness of the biparameter maximal function.
The weight $\alpha$ in this case cuts most of the bi-tree, and the resulting set differs greatly from the original graph.\\ The second example is somewhat more involved; on the other hand, the weight there leaves a much bigger portion of the bi-tree, and actually has a certain monotonicity property: $\alpha_R \geq \alpha_Q$ for $R\supset Q$. The structure of the ``available'' set is richer in this case (it looks more like $\mathbb{Z}^2$ in a sense); nevertheless, there are not enough rectangles to have the Carleson-REC equivalence. \subsection{A simple example of having Carleson condition but not restricted energy condition} Let $N\in\mathbb{N}$ be some large number (to be specified later), and let $T^2 = T_N^2$ be a bi-tree of depth $N$. We use the dyadic rectangle representation of $T^2$.\par Let $\omega := [0,2^{-N}]^2$ be the square at the lower left corner. Given $R = [a,b]\times[c,d]\in\mathcal{D}$ let $R^{++} := \left[\frac{a+b}{2},b\right]\times\left[\frac{c+d}{2},d\right]$ be the upper right quadrant of $R$. Consider $Q_1= [0,1]\times [0, 2^{-N+1}]$ and its $Q_1^{++}$, then $Q_2= [0,2^{-1}]\times [0, 2^{-N+2}]$ and its $Q_2^{++}$, $Q_3= [0,2^{-2}]\times [0, 2^{-N+3}]$ and its $Q_3^{++}$, et cetera. In total there are $N$ of them. Let the measure $\mu$ have mass $\tau_0:=1/\sqrt{N}$ on $\omega$, and distribute mass $\tau_i$ uniformly over $Q_i^{++}$. Now $\alpha_R$ is zero except when $R\in\{\omega, Q_1, Q_2, \dots, Q_N\}$; for those rectangles $\alpha_R=1$, so we have $N+1$ weights equal to $1$. Now choose the set $E=\omega$. When we calculate $\mathcal E[\mu|E]$ we sum up $$ \tau_0^2 +\sum_{i=1}^N \mu(\omega\cap Q_i)^2 =( N+1)\tau_0^2 =\frac{N+1}{N}\ge \sqrt{N} \frac1{\sqrt{N}} =\sqrt{N}\mu(E)\,. $$ So the REC condition has constant at least $\sqrt{N}$. \bigskip Since $$ \mathcal E_\Omega[\mu] =\sum_{R\subset \Omega,\, \alpha_R\neq 0} \mu(R)^2\,, $$ then, denoting $Q_0:=\omega$, we have $$ \mathcal E_\Omega[\mu] = \sum_{j: Q_j\subset \Omega} \mu(Q_j)^2 =:\sum_{j\in J(\Omega)} \mu(Q_j)^2\,.
$$ Let $\tau_0=1/\sqrt{N} \le \frac14=\tau_1=\tau_2=\dots=\tau_N$. Then $$ \mathcal E_\Omega[\mu] =\sum_{j\in J(\Omega)} (\tau_0+\tau_j)^2 \le 4\sum_{j\in J(\Omega)} \tau_j^2\,. $$ And this is $\le \sum_{j\in J(\Omega)} \tau_j \le \mu(\Omega)$. So the Carleson condition holds with constant $1$. \subsection{The lack of maximal principle matters} \label{maxP} All measures and dyadic rectangles below will be $N$-coarse. In this section we build another example in which the Carleson condition holds, but the restricted energy condition fails. This example is more complicated (and deeper) than the previous one. In it the weight $\alpha$ again takes only the values $1$ and $0$, but the support $S$ of $\alpha$ is an {\it up-set}, that is, it contains every ancestor of every rectangle in $S$. The example is based on the fact that potentials on the bi-tree may fail the maximal principle. So we start by constructing an $N$-coarse $\mu$ such that, given a small $\delta>0$, \begin{equation} \label{le1} \mathbb V^\mu \lesssim \delta\quad \text{on}\,\, \operatorname{supp}\mu, \end{equation} but with an absolute strictly positive $c$ \begin{equation} \label{logN} \max \mathbb V^\mu \ge \mathbb V^\mu(\omega_0) \ge c\, \delta\log N\,, \end{equation} where $\omega_0:=[0, 2^{-N}]\times [0, 2^{-N}]$. We define a collection of rectangles \begin{equation}\label{e:773} Q_j := [0,2^{-2^j}] \times [0,2^{-2^{-j}N}] ,\quad j=1\dots M\approx \log N, \end{equation} and we let \begin{equation}\label{Qs} \begin{split} &Q_j^{++} := [2^{-2^j-1},2^{-2^j}]\times [2^{-2^{-j}N-1},2^{-2^{-j}N}] \\ &Q_j^- := [0,2^{-2^j}] \times [0,2^{-2^{-j}N-1}] ,\quad j=1\dots M\\ &Q_j^t := Q_j\setminus Q_j^-\\ &Q_j^r:= [2^{-2^j-1},2^{-2^j}]\times [0,2^{-2^{-j}N}]\\ & Q_j^{--} := Q_j^-\setminus Q_j^r \end{split} \end{equation} to be their upper right quadrants, lower halves, top halves, right halves, and lower left quadrants, respectively.
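Observe that, in the scale coordinates, the rectangles $Q_j$ line up along a hyperbola: writing $Q_j=[0,2^{-a_j}]\times[0,2^{-b_j}]$ with $a_j=2^j$ and $b_j=2^{-j}N$, we have \[ a_j\, b_j = 2^j\cdot 2^{-j}N = N \quad\text{for every }j. \] In particular, the number of dyadic rectangles containing a given $Q_j$ is of order $(a_j+1)(b_j+1)\approx N$, essentially uniformly in $j$; this is the geometric source of the logarithmic gain in \eqref{logN}.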
Now we put \begin{equation}\label{e:774} \begin{split} &\mathcal{R} := \{R:\; Q_j\subset R\; \textup{for some }j=1\dots M\}\\ &\alpha_{Q} := \chi_{\mathcal{R}}(Q)\\ &\mu(\omega) := \frac{\delta}{N}\sum_{j=1}^M \frac{1}{|Q_j^{++}|}\chi_{Q_j^{++}}(\omega),\\ & P_j= (2^{-2^j}, 2^{-2^{-j}N})\,. \end{split} \end{equation} Here $|Q|$ denotes the total number of points $\omega \in (\partial T)^2\cap Q$, i.e. the number of the smallest possible squares (of area $2^{-2N}$) in $Q$.\par Observe that on $Q_j$ the measure is basically a uniform distribution of the mass $\frac{\delta}{N}$ over the upper right quadrant $Q_j^{++}$ of the rectangle $Q_j$ (and these quadrants are disjoint). \par To prove \eqref{le1} we fix $\omega\in Q_j^{++}$ and split $\mathbb V^\mu(\omega)= \mathbb V^\mu_{Q_j^{++}}(\omega) +\mu(Q_j^t)+ \mu(Q_j^r)+ \mathbb V^\mu(Q_j^{++})$, where the first term sums up $\mu(Q)$ for $Q$ between $\omega$ and $Q_j^{++}$. This term obviously satisfies $\mathbb V^\mu_{Q_j^{++}}(\omega) \lesssim \frac{\delta}{N}$. Trivially $\mu(Q_j^t)+ \mu(Q_j^r)\le \frac{2\delta}{N} $. The non-trivial part is the estimate \begin{equation} \label{VQ} \mathbb V^\mu(Q_j^{++}) \lesssim \delta\,. \end{equation} To prove \eqref{VQ}, consider a sub-interval $[m, m+k]$ of the segment $[1, M]$ of integers, and assume that $j\in [m, m+k]$. We denote by $C^{[m, m+k]}_j$ the family of dyadic rectangles containing $Q_j^{++}$ along with all $Q_i^{++}$, $i\in [m, m+k]$ (and none of the others). Notice that the $C^{[m, m+k]}_j$ are not disjoint families, but this will be no problem for us, as we wish to estimate $\mathbb V^\mu(Q_j^{++})$ from above. Notice that, for example, $C^{[m, m+1]}_j$ consists exactly of the dyadic rectangles containing the point $P_j$. It is easy to calculate that the number of such rectangles is \[ (2^j+1)\cdot (2^{-j}N +1)\lesssim N\,. \] Analogously, dyadic rectangles in the family $C^{[m, m+k]}_j$ have to contain the points $P_m, P_{m+k}$.
Therefore, each such rectangle contains the point $(2^{-2^m}, 2^{-2^{-m-k}N})$. The number of such rectangles is at most $\lesssim 2^{-k}N$. The number of classes $C^{[m, m+k]}_j$ is at most $k+1$. Therefore, $\mathbb V^\mu(Q_j^{++})$ involves at most $(k+1)2^{-k}N$ times the measure in the amount $k\cdot \frac{\delta}{N}$. Hence \[ \mathbb V^\mu(Q_j^{++}) \le \sum_{k=1}^{M} k(k+1)2^{-k}N \cdot \frac{\delta}{N}\lesssim \delta\,, \] and \eqref{VQ} is proved. Inequality \eqref{le1} is also proved. \medskip We already denoted \[ \omega_0:=[0, 2^{-N}]\times [0, 2^{-N}]\,; \] let us now calculate $\mathbb V^\mu(\omega_0)$. In fact, we will estimate it from below. The fact that the $C^{[m, m+k]}_j$ are not disjoint may present a problem now, because we wish to estimate $\mathbb V^\mu(\omega_0)$ from below. To be more careful, for every $j$ we now denote by $c_j$ the family of dyadic rectangles containing the point $P_j$ but not containing any other point $P_i, i\neq j$. Rectangles in $c_j$ contain $Q_j^{++}$ but do not contain any of $Q_i^{++}$, $i\neq j$. There are $(2^{j} - 2^{j-1}-1)\cdot (2^{-j+1}N-2^{-j}N -1)$ of them, $j=2, \dots, M-2$. This is at least $\frac18 N$. But now the families $c_j$ are disjoint, and rectangles of class $c_j$ contribute at least $\frac18 N\cdot \frac{\delta}{N}$ into the sum that defines $\mathbb V^\mu(\omega_0)$. We have at least $M-4$ such classes $c_j$, as $j=2, \dots, M-2$. Hence, \begin{equation} \label{deltaM} \mathbb V^\mu(\omega_0) \ge \frac18 N\cdot \frac{\delta}{N}\cdot (M-4)\ge\frac19 \delta M\,. \end{equation} Choose $\delta$ to be a small absolute number $\delta_0$. Then we will have (see \eqref{le1}) \[ \mathbb V^\mu \le 1, \quad\text{on}\,\, \operatorname{supp}\mu\,. \] But \eqref{deltaM} also proves \eqref{logN}, as $M\asymp \log N$. \medskip \begin{remark} \label{Mx} Notice that in this example $\mathbb V^\mu\le 1$ on $\operatorname{supp}\mu$, and \begin{equation} \label{exp} \text{cap} \{\omega: \mathbb V^\mu\ge \lambda\} \le c e^{-2\lambda}\,.
\end{equation} Here capacity is the bi-tree capacity defined, e.g., in \cite{AMPS18}. So there is no maximal principle for the bi-tree potential, but the set where the maximal principle breaks down has small capacity. \end{remark} \bigskip Now we construct the example of $\nu$ and $\alpha$ with $\alpha=1$ on an up-set (and zero otherwise), and such that the Carleson condition is satisfied but REC (the restricted energy condition) is not satisfied. We use the same measure $\mu$ we have just constructed, and we put \[ \nu := \mu +\nu|\omega_0, \] where $\nu|\omega_0$ is the measure of total mass $\frac1{MN}$ distributed uniformly over $\omega_0$. The weight $\alpha$ is chosen as in \eqref{e:774}. \bigskip \noindent{\bf Warning.} The meaning of $\mathbb V$ changes from now on. Before, $\mathbb V^{\cdot}= \mathbb I \mathbb I^*(\cdot)$. Everywhere below, \[ \mathbb V^\nu:= \mathbb I[\alpha \mathbb I^*(\nu)]\,. \] \bigskip Let us first check that the REC constant is bad. We choose $F=\cup_j Q_j^-$. Then $\nu_F:=\nu|F = \nu|\omega_0$. On the other hand, and this is the main feature, \begin{equation} \label{lies_in_M} \omega_0 \,\,\text{lies in}\,\, M\,\, \text{rectangles} \,\, Q_j\,. \end{equation} Hence, \begin{equation} \label{number_of_rectangles} \text{there are} \,\, \ge c NM\,\, \text{dyadic rectangles}\,\, R\,\,\text{ such that}\,\, \alpha_R=1\,\, \text{ and}\,\, \omega_0\subset R\,. \end{equation} In fact, consider dyadic rectangles in $\cup_j c_j$, where the families $c_j$ were built above. For each $R\in \cup c_j$ we have $\alpha_R=1$, see \eqref{e:774}. And there are at least $\frac18 NM$ of them. We conclude \begin{equation} \label{bV_at_om0} \mathbb V^{\nu_F}(\omega_0) \ge \frac18 MN\cdot \nu(\omega_0)\,. \end{equation} Therefore, \[ \int \mathbb V^{\nu|F}\, d \nu|F \ge \nu(\omega_0)^2\cdot \frac18 NM=\frac18 \frac1{MN} = \frac18\,\nu(F)\,. \] This means that the REC constant is at least the absolute constant $c_0=\frac18$. Let us show that the Carleson constant is $\lesssim c\cdot \delta$.
Note that $c_0$ has nothing to do with $\delta$, which can be chosen as small as we wish. \begin{remark} \label{inM} We do not need the following claim now, we will need it only later; but notice that, in a fashion completely similar to the one that just proved \eqref{bV_at_om0}, one can also prove \begin{equation} \label{bVmu_at_om0} \mathbb V^{\mu}(\omega_0) \ge \frac18 MN\cdot \frac{\delta}{N} \ge c\, \delta M\,. \end{equation} Moreover, we already proved it in \eqref{deltaM}. This holds because $\omega_0$ is contained in exactly $M$ rectangles $Q_j$. \end{remark} \bigskip \begin{defin} Dyadic rectangles whose lower left corner is $(0,0)$ will be called {\it hooked} rectangles. \end{defin} To check the Carleson condition with a small constant we fix any finite family $\mathcal{A}$ of dyadic rectangles, and let \[ A=\cup_{R\in \mathcal{A}} R\,. \] We are interested in the subfamily $\mathcal{A}'$ of rectangles $R\in\mathcal{A}$ such that $\alpha_R=1$. Other elements of $\mathcal{A}$ do not give any contribution to $\mathcal E_A[\nu]$, as $ \mu(Q)^2 \alpha_Q=\mu(Q)^2 \cdot 0=0$ for any $Q\subset Q', \, Q'\in \mathcal{A}\setminus \mathcal{A}'$, because the support of $\alpha$ is an up-set. All rectangles from $\mathcal{A}'$ are hooked rectangles. As we noticed, we can think that $\mathcal{A}'=\mathcal{A}$. In other words, without loss of generality, we can think that $\mathcal{A}$ consists only of hooked rectangles. Any hooked rectangle generates a closed interval $\mathcal J$ in the segment $[1, M]$ of integers: the interval $\mathcal J$ consists of those $j$, $1\le j\le M$, such that the point $P_j$ lies in this hooked rectangle. This is the same as to say that $Q_j$, $j\in \mathcal J$, is a subset of this hooked rectangle. So the family $\mathcal{A}$ generates a family of closed intervals in the segment $[1, M]$ of integers. Let us call $\mathcal J_A$ this family of intervals. Intervals of the family $\mathcal J_A$ need not be disjoint.
But we can do the following: if intervals intersect, or even if these closed intervals are adjacent, we unite them into a new interval. The new system of (disjoint and not even adjacent) closed intervals corresponds to another initial system $\tilde \mathcal{A}$, and we can think that $\tilde \mathcal{A}$ consists of hooked rectangles. We call a system of hooked rectangles {\it a clean system} if it gives rise to a family of pairwise disjoint, non-adjacent closed intervals inside the segment $[1,M]$ of integers. The relationship between rectangles in $\mathcal{A}$ and $\tilde\mathcal{A}$ is the following: each rectangle of $\tilde\mathcal{A}$ is a common ancestor of a group of rectangles in $\mathcal{A}$. A very important geometric property of $\tilde \mathcal{A}$ is the following. Let $Q\in \tilde\mathcal{A}$ and let $R^1, \dots, R^s$ be all rectangles from $\mathcal{A}$ such that $R^i\subset Q$, $i=1,\dots, s$. Then \begin{equation} \label{tildeA} \nu(Q\setminus \cup_{i=1}^s R^i) =0\,. \end{equation} In particular, \eqref{tildeA} implies \begin{equation} \label{tildeA1} \nu(A)=\nu(\tilde A) \,. \end{equation} When checking the Carleson condition \begin{equation} \label{carlA} \mathcal E_A[\nu] \lesssim \delta \nu(A), \end{equation} we can always think of $\mathcal{A}$ as being replaced by $\tilde \mathcal{A}$ and $A$ by $\tilde A$, because in \eqref{carlA} the RHS stays the same, while the LHS can only increase in the passage from $A$ to $\tilde A$. Therefore, checking \eqref{carlA} for {\it clean systems} of rectangles is the same as checking it for all systems of rectangles. From now on $\mathcal{A}$ is supposed to be clean. To prove \eqref{carlA} is the same as to prove (since $\nu_A(Q) = \mu_A(Q) +\nu_{\omega_0}(Q)$, up to a factor of $2$) \begin{equation} \label{carlA1} \sum_{Q\subset A} \mu_A(Q)^2 \alpha_Q + \sum_{Q\subset A} \nu_{\omega_0}(Q)^2 \alpha_Q \lesssim \delta \nu(A). \end{equation} The first sum is bounded by $\int \mathbb V^{\mu|A}\, d\mu|A$.
But by \eqref{le1} (which follows from \eqref{VQ}) we have \[ \sum_{Q\subset A} \mu_A(Q)^2 \alpha_Q = \int \mathbb V^{\mu|A}\, d\mu|A \lesssim \delta \|\mu_A\|= \delta \mu(A) \le \delta \nu(A)\,, \] and this means that the first part of \eqref{carlA1} is proved. To estimate $\sum_{Q\subset A} \nu_{\omega_0}(Q)^2 \alpha_Q= \frac1{(MN)^2} \sharp\{R: \alpha_R=1, R\subset A\} $ we take one interval $\mathcal J_k$ from the family generated by the clean $\mathcal{A}$ in $[1,M]$, we denote \begin{equation} \label{mk} m_k:=\sharp\mathcal J_k\,, \end{equation} and we estimate how many dyadic rectangles $R$ contain one of the $Q_j$, $j\in \mathcal J_k$. We do not even care now whether $R$ is a subset of $A$ or not. The number of such rectangles is at most $m_k\cdot N$. On the other hand, \[ \nu(A)\ge \sum_k m_k \cdot \frac{\delta}{N} + \frac1{MN} \ge \sum_k m_k \cdot \frac{\delta}{N} \,. \] So to prove the estimate for the second sum in \eqref{carlA1}, we need to see that \[ \frac1{(MN)^2}\sum_k m_k\cdot N \lesssim \delta\,\sum_k m_k \cdot \frac{\delta}{N}, \] which is obviously true if we choose $\delta \ge \frac1{M}=\frac1{\log N}$. So \eqref{carlA1} is proved. \section{Restricted energy condition holds but no embedding} \label{RECnoEmb} In this section we emulate the previous construction: we start with $\{Q_j\}$ and the measure $\mu$, but instead of adding a mass at $\omega_0$ we will add a more sophisticated piece of measure. Let us start by recalling the system $\{Q_j\}, j=1, \dots, M,$ and the measure $\mu$ from the previous section. We continue with denoting \[ Q_{0, j}:= Q_j,\quad \mu_0:=\mu\,\, \text{from the previous section}\,. \] Next we define a sequence of collections $\mathcal{Q}_k,\; k=0\dots K,$ of dyadic rectangles as follows \begin{equation}\label{e:874} \begin{split} \mathcal{Q}_k := \left\{Q_{k,j} = \bigcap_{i=j}^{j+2^k}Q_{0,i},\; j=1\dots M-2^k\right\},\; k=1\dots K.
\end{split} \end{equation} In other words, $\mathcal{Q}_0$ is the basic collection of rectangles, and \begin{equation} \label{lies_in_2k} \mathcal{Q}_{k}\,\text{ consists of the intersections of} \,\,2^k\,\text{ consecutive elements of}\,\mathcal{Q}_0. \end{equation} The total number of rectangles in $\mathcal{Q}_k$ is denoted by $M_k = M-2^{k}+1$. We also denote by $\mathcal R$ the collection of rectangles lying above $\mathcal{Q}_0$ \begin{equation}\label{e:874.4} \mathcal R := \{R:\; Q_{0,j}\subset R\;\textup{for some}\; 1\leq j\leq M\}, \end{equation} and we let \[ S_k := \bigcup_{Q\in \mathcal{Q}_k}Q. \] The weight $\alpha$ is defined as follows: \begin{equation}\label{e:874.5} \begin{split} &\alpha_{Q} := 1,\quad \textup{if}\; Q\in \mathcal R \\ &\alpha_Q := 0 \quad \textup{otherwise}. \end{split} \end{equation} Now we construct the measure $\mu$, whose main part will be the already constructed $\mu_0$. Let \[ Q_{k,j}^{++} := \left[2^{-2^{j+2^k}-1},2^{-2^{j+2^k}}\right]\times \left[2^{-2^{-j}N-1},2^{-2^{-j}N}\right] \] be the upper right quadrant of $Q_{k,j}$. For every $k = 0\dots K$ we distribute the mass $2^{-2k}\frac{M_k\delta}{N}$ over the rectangles $Q_{k,j}^{++}$. Namely, for every $j=1\dots M_k$ we attach a mass $2^{-2k}\frac{\delta}{N}$ to the rectangle $Q_{k,j}$ that is uniformly distributed over the quadrant $Q_{k,j}^{++}$. We note that all these quadrants $Q_{k,j}^{++}$ are disjoint.\par The measure $\mu_0$ is the ``main'' part of $\mu$, in the sense that $\mu_0$ is generated by the masses on $\mathcal{Q}_0$, \[ \mu_0(\omega) := \frac{\delta}{N}\sum_{j=1}^M \frac{1}{|Q_{0,j}^{++}|}\chi_{Q_{0,j}^{++}}(\omega),\quad \omega \in (\partial T)^2, \] and we let $\mu_k$ be the corresponding mass on $\mathcal{Q}_k$ \[ \mu_k(\omega) := \frac{2^{-2k}\delta}{N}\sum_{j=1}^{M_k} \frac{1}{|Q_{k,j}^{++}|}\chi_{Q_{k,j}^{++}}(\omega),\quad \omega \in (\partial T)^2, \] so that \[ \mu = \mu_0 + \sum_{k=1}^{K}\mu_k.
\] Finally we define the function $f$, and we do it in such a way that it is ``congruent'' with the distribution of $\mu_0$, namely we let \[ f(R) := \mu_0(R)\cdot \alpha_R. \] \subsection{Main idea} \label{MI} Notice that \[ \mathbb V^{\mu_0} = \mathbb I f=\mathbb I[\alpha \mathbb I^*\mu_0], \qquad \int \mathbb V^{\mu_0}\, d\mu_0 =\sum_{T^2}(\mathbb I^*\mu_0)^2 \cdot\alpha= \sum_{T^2} f^2\cdot \alpha\,. \] To prove that the embedding has a bad constant, it is sufficient to show that the dual inequality has a bad constant: \[ \int (\mathbb I f)^2 \, d\mu \gg \sum f^2\cdot \alpha, \] which becomes \begin{equation} \label{dual} \int (\mathbb V^{\mu_0})^2\, d\mu\gg \int \mathbb V^{\mu_0} \, d\mu_0\,. \end{equation} Let us look at Remark \ref{inM}, at \eqref{lies_in_M}, \eqref{number_of_rectangles}, \eqref{bVmu_at_om0}, and compare \eqref{lies_in_M} with \eqref{lies_in_2k}. The conclusion is: since every $Q_{k,j}$ lies in $2^k$ of the rectangles $Q_{0,i}$ (the number $2^k$ replaces $M$ in \eqref{lies_in_M}, \eqref{number_of_rectangles}, \eqref{bVmu_at_om0}), we get \begin{equation} \label{bVmu0_on_muk} \mathbb V^{\mu_0} \ge c\,2^k N \cdot \frac{\delta}{N}=c \delta 2^k\,\,\text{on each}\,\, Q_{k, j}\,. \end{equation} We already saw that $\mathbb V^{\mu_0}\lesssim \delta$ on $\operatorname{supp}\mu_0$, so \begin{equation} \label{RHS} \int \mathbb V^{\mu_0} \, d\mu_0 \lesssim \delta^2 \frac{M}{N}\,. \end{equation} Now, using \eqref{bVmu0_on_muk} we get \begin{equation} \label{LHS} \int (\mathbb V^{\mu_0})^2\,d\mu =\sum_{k=1}^K \int (\mathbb V^{\mu_0})^2\,d\mu_k \ge c^2 \delta^2 \sum_{k=1}^K 2^{2k} \|\mu_k\| \asymp \delta^3 \frac{M\log M}{N}\,. \end{equation} For example, let \begin{equation}\label{choicedelta} \delta= \frac1{\log M}\,. \end{equation} Then the ratio of \eqref{LHS} to \eqref{RHS} is of order $\delta\log M \asymp 1$, so the embedding constant is at least of order $1$, while, as the next subsection shows, the REC constant is $\lesssim\delta=1/\log M$. \subsection{REC condition holds with a small constant} \label{RECsmall} Let $\mathcal{A}$ be a collection of (hooked) rectangles, $A=\cup_{R\in \mathcal{A}} R$.
Let $\nu_k:=\mu_k|A$, $k=1, \dots, K$, and $\nu:= \mu |A=\sum \nu_k$. We need to prove \begin{equation} \label{smREC} \mathcal E_A[\nu]\lesssim \delta \|\nu\|\,. \end{equation} Let $n>k$; we wish to estimate $\mathbb V^{\mu_n}(Q_{k, j})$. This is a certain sum over a system $\mathcal S$ of rectangles of the form \[ \sum_{R\in \mathcal S} \mu_n(R), \] where the dyadic rectangles $R\in\mathcal S$ (a) contain $Q_{k, j}$ and (b) satisfy $\alpha_R=1$. Notice that this system depends on $Q_{k, j}$ but is totally independent of $n$. So if we manage to estimate $\mu_n(R)$ via $\mu_0(R)$, then we can compare $\mathbb V^{\mu_n}(Q_{k, j})$ to $\mathbb V^{\mu_0}(Q_{k, j})$. Let the number of rectangles from $\mathcal{Q}_s$ contained in $R$ be denoted by $m^s_R$. Then it is very easy to see that \[ m^n_R \le m^0_R +2^n+1\,. \] Then \[ 2^{-2n}m^n_R \le 2^{-n}(m^0_R +1+2^{-n}) \le 3\cdot 2^{-n} m^0_R\,. \] Then \[ \mathbb V^{\mu_n}(Q_{k, j}) =\sum_{R\in \mathcal S} \mu_n(R) \le 3\cdot 2^{-n} \sum_{R\in \mathcal S} \mu_0(R) = 3\cdot 2^{-n} \mathbb V^{\mu_0}(Q_{k,j})\lesssim \delta 2^{k-n} \,. \] Therefore, \[ \sum_{n\ge k} \int \mathbb V^{\nu_n} \, d\nu_k\lesssim \delta \sum_{n\ge k} 2^{k-n} \|\nu_k\|=2\delta \|\nu_k\|\,, \] and so, using the symmetry $\int \mathbb V^{\nu_n} \, d\nu_k = \int \mathbb V^{\nu_k} \, d\nu_n$ to handle the terms with $n<k$, \[ \sum_k\sum_{n\ge k} \int \mathbb V^{\nu_n} \, d\nu_k\lesssim \delta \sum_k \|\nu_k\|=\delta\|\nu\|\,. \] Inequality \eqref{smREC} is proved. \section{Strong dyadic maximal function and counterexamples} \label{counter} \noindent{\bf Definition.} Let $\mathcal S$ be a family of dyadic sub-rectangles of $Q_0$ (possibly $\mathcal S=\mathcal D$, the family of all dyadic sub-rectangles). We call the sequence $\{\beta_Q\}_{Q\in \mathcal S}$ Carleson if \begin{equation} \label{bC} \forall \mathcal S'\subset \mathcal S, \quad \sum_{Q\in \mathcal S'}\beta_Q \mu(Q) \le C\mu(\cup_{Q\in \mathcal S'} Q)\,. \end{equation} The best $C$ is called the Carleson norm of the sequence.
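Note, for orientation, that the converse ``sparsity implies Carleson'' direction of the argument developed below is immediate: if there exist pairwise disjoint sets $E_Q\subset Q$ with $\beta_Q\mu(Q)\le \mu(E_Q)$, then for any $\mathcal S'\subset \mathcal S$ \[ \sum_{Q\in \mathcal S'}\beta_Q\mu(Q) \le \sum_{Q\in \mathcal S'}\mu(E_Q) = \mu\Big(\bigcup_{Q\in \mathcal S'}E_Q\Big) \le \mu\Big(\bigcup_{Q\in \mathcal S'}Q\Big)\,, \] so \eqref{bC} holds with $C=1$. The non-trivial direction, deducing such disjoint sets from the Carleson condition, is the subject of what follows.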
\noindent{\bf Definition.} Abusing the language, we say that the weight $\alpha:=\{\alpha_Q\}_{Q\in \mathcal S}$ satisfies the Carleson condition if the sequence $\beta_Q:= \alpha_Q \cdot \mu(Q)$ is a Carleson sequence: \begin{equation} \label{alC} \forall \mathcal S'\subset \mathcal S, \quad \sum_{Q\in \mathcal S'}\alpha_Q \mu(Q)^2 \le C\mu(\cup_{Q\in \mathcal S'} Q)\,. \end{equation} We already know, see \cite{AHMV18b} e.g., that the bi-parameter Carleson embedding \[ \sum_{R\in \mathcal D} \big(\int_R \psi\, d\mu\big)^2 \alpha_R =\sum_{R\in \mathcal D} [\mathbb I^*(\psi \mu)]^2 \alpha_R \le C' \int_{Q_0} \psi^2\, d\mu \] is equivalent to the Carleson condition above if we have $\alpha_R=1$ for all $R \in \mathcal D$. Understanding the general two-weight bi-parameter situation (that is, $\alpha \not\equiv 1$) seems to be super hard, as the examples above show. Notice that in the two-weight one-parameter situation the answer is known, see, e.g., \cite{NTV99}, and it is given in terms of the Carleson condition. However, in the bi-parameter situation this is far from being so, as the following theorem shows. First we give \noindent{\bf Definition.} A finite positive measure $\mu$ on $Q_0$ is called a ``bad'' measure if there exists a weight $\alpha=\{\alpha_Q\}_{Q\in \mathcal D}$ that satisfies the Carleson condition but such that the embedding \begin{equation} \label{alEmb} \sum_{R\in \mathcal D} \Big(\int_R \psi\, d\mu\Big)^2\alpha_R \le C' \int_{Q_0} \psi^2\, d\mu \end{equation} does not hold. The strong maximal function with respect to $\mu$ is \[ \mathcal M_\mu \psi (x) =\sup_{\substack{R\in \mathcal D \\ R\ni x}} \frac{1}{\mu(R)}\int_R |\psi|\, d\mu\,, \] where $0/0=0$. The supremum is taken over all dyadic sub-rectangles of $Q_0$. \begin{theorem} \label{Dor} Let $\mu$ be atom-free. Then the measure $\mu$ is bad if and only if $\mathcal M_\mu$ is not a bounded operator in $L^2(\mu)$.
\end{theorem} We need some preparation, which is written down in \cite{Verb} and \cite{TH}, but we repeat it for the convenience of the reader. If the weight $\alpha$ satisfies the Carleson condition with constant $1$, then $\beta_Q=\alpha_Q \mu(Q)$ is a Carleson sequence with Carleson constant $1$, and this means that for any family $\mathcal D'\subset \mathcal D$ \begin{equation} \label{alCarl1} \sum_{Q\in \mathcal D'} \alpha_Q \mu(Q)^2 \le \mu(\bigcup_{Q\in\mathcal D'} Q)\,. \end{equation} Using an idea of Igor Verbitsky \cite{Verb}, this can be stated in terms of the discrete Triebel--Lizorkin space $f^{1, \infty}(\mu)$: \[ \{\alpha_Q\mu(Q)\}_{Q\in \mathcal D}\in f^{1, \infty}(\mu), \] with norm at most $1$. But $( f^{1, \infty}(\mu))^* = f^{ \infty, 1}(\mu)$, and this is the space of coefficients $\{\lambda_Q\}_{Q\in \mathcal D}$ equipped with the norm \[ \|\{\lambda_Q\}_{Q\in \mathcal D}\|_{ f^{ \infty, 1}(\mu)} := \int\sup_{Q\in\mathcal D}\Big(\lambda_Q\chi_Q(x)\Big)\, d\mu\,. \] Thus, by duality, \eqref{alCarl1} is equivalent to \begin{equation} \label{alCarlDual} \forall \{\lambda_Q\}_{Q\in \mathcal D}, \quad\sum_{Q\in \mathcal D} \alpha_Q \mu(Q)^2\cdot \lambda_Q \le \int \sup_{Q\in\mathcal D}\Big(\lambda_Q\chi_Q(x)\Big)\, d\mu\,. \end{equation} Without loss of generality we may assume that all $ \alpha_Q \mu(Q)>0$ (otherwise the corresponding terms do not appear on the left-hand side and we may discard them). Then, substituting $b_Q:=\alpha_Q\mu(Q)^2\lambda_Q$, we can rewrite \eqref{alCarlDual} as \begin{equation} \label{alCarlDual2} \forall \{b_Q\}_{Q\in \mathcal D}, \quad \sum_{Q\in \mathcal D} b_Q \le \int \sup_{Q\in\mathcal D}\Big(b_Q\frac{\chi_Q(x)}{\alpha_Q \mu(Q)^2}\Big)\, d\mu\,. \end{equation} At this point the theorem of L. Dor \cite{Dor} is used in \cite{Verb} and \cite{TH}. Here it is: \begin{theorem} \label{thDor} Let $\mu$ be atom free. Let $g_i$ be a fixed sequence of measurable functions and suppose that for all positive numbers $b_i$ \[ \sum b_i \le \int \sup_j[b_j g_j(x)] \, d\mu\,.
\] Then there are disjoint sets $E_j$ such that \[ 1\le \int_{E_j} g_j\, d\mu\,. \] \end{theorem} Using this theorem, we see that \eqref{alCarlDual2} implies the existence of measurable subsets $E_Q\subset Q$, $Q\in \mathcal D$, such that \begin{equation} \label{choice} \alpha_Q\cdot \mu(Q)^2 \leq \mu(E_Q),\quad E_Q\cap E_{Q'} =\emptyset\,. \end{equation} We deduced \eqref{choice} from \eqref{alCarl1}. By now this has been done in various situations many times, see again \cite{Verb} (for cubes), \cite{TH} (for arbitrary Borel sets), and \cite{AB} for a geometric proof of this geometric fact. The deduction of \eqref{choice} from \eqref{alCarl1} is called the ``sparsity property from Carleson property'' deduction. Now we are ready to give the proof of Theorem \ref{Dor}. \begin{proof} $(\Rightarrow)$ Suppose the measure $\mu$ has no atoms and is a bad measure. Then we can find a sequence $\alpha=\{\alpha_Q\}_{Q\in\mathcal D}$ such that condition \eqref{alC} is satisfied with constant $C=1$ (we can always normalize) but the embedding \eqref{alEmb} fails. Then for every positive $K$ there exists a positive function $\psi$ such that \[ \sum_{Q\in \mathcal D} \big(\int_Q \psi\, d\mu\big)^2 \alpha_Q\geq K\int \psi^2\, d\mu, \] or \[ \sum_{Q\in \mathcal D} \langle \psi\rangle_{Q, \mu}^2 \alpha_Q\cdot \mu(Q)^2\ge K\int \psi^2\, d\mu\,. \] Let $\mathcal F$ be the sub-collection of those $Q\in \mathcal D$ for which $\alpha_Q\neq 0$.
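For clarity, we record how Theorem \ref{thDor} yields \eqref{choice}; the functions $g_Q$ below are the natural choice suggested by \eqref{alCarlDual2}.

```latex
% Apply Theorem \ref{thDor} to the countable family
\[
g_Q(x) := \frac{\chi_Q(x)}{\alpha_Q\, \mu(Q)^2}\,, \qquad Q\in \mathcal D\,,
\]
% for which \eqref{alCarlDual2} is exactly the hypothesis of the theorem.
% It produces pairwise disjoint measurable sets E_Q with
\[
1 \;\le\; \int_{E_Q} g_Q\, d\mu \;=\; \frac{\mu(E_Q\cap Q)}{\alpha_Q\, \mu(Q)^2}\,,
\]
% and replacing each E_Q by E_Q \cap Q (which preserves disjointness)
% gives E_Q \subset Q and \alpha_Q \mu(Q)^2 \le \mu(E_Q), i.e. \eqref{choice}.
```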
We can use the sparsity condition \eqref{choice} to get $$K\int \psi^2\, d\mu \leq \sum_{Q\in \mathcal D} \langle \psi\rangle_{Q, \mu}^2 \alpha_Q\cdot \mu(Q)^2 \leq \sum_{Q\in \mathcal F} \langle \psi\rangle_{Q, \mu}^2 \cdot \mu(E_Q)=$$ $$\sum_{Q\in \mathcal F} \Big(\frac{1}{\mu(Q)}\int_Q\psi\hspace{0.1cm}{d}\mu\Big)^2 \cdot\mu(E_Q)\leq \sum_{Q\in \mathcal F} \Big(\inf_{x\in Q}\mathcal M_{\mu}\psi(x)\Big)^2\cdot \mu(E_Q)\leq$$ $$\sum_{Q\in \mathcal F} \inf_{x\in Q}\Big(\mathcal M_{\mu}\psi(x)\Big)^2\cdot \mu(E_Q)\leq\sum_{Q\in \mathcal F} \inf_{x\in E_Q}\Big(\mathcal M_{\mu}\psi(x)\Big)^2\cdot \mu(E_Q)\leq$$ $$\sum_{Q\in \mathcal F} \int_{E_Q}\Big(\mathcal M_{\mu}\psi(x)\Big)^2\hspace{0.1cm}{d}\mu\leq\int_{[0,1]^2}\Big(\mathcal M_{\mu}\psi(x)\Big)^2{d}\mu,$$ where the last inequality holds since the sets $E_Q$ are disjoint. Hence \begin{equation} \label{MmuK} \int \big(\mathcal M_\mu\psi \big)^2\, d\mu \geq K\int \psi^2\, d\mu\,. \end{equation} Since $K$ can be arbitrarily large, the operator $\mathcal M_\mu$ is not bounded in $L^2(\mu)$. \bigskip $(\Leftarrow)$ Now assume that the operator $\mathcal M_\mu$ is not bounded in $L^2(\mu)$. Then, for an arbitrary $K>0$, there exists a positive function $\psi$ such that \eqref{MmuK} holds. By approximating $\psi$ with an increasing sequence of positive simple functions $\psi_n$, we can find $n_0\in \mathbb{N}$ such that \eqref{MmuK} holds with $\psi$ replaced by $\psi_{n_0}$. Also, we can find $N=N(n_0)\in\mathbb{N}$ so that the function $\psi_{n_0}$ is constant on each dyadic sub-square of $Q_0$ of size $2^{-N}\times 2^{-N}$. Given these, in the definition of $\mathcal M_\mu\psi_{n_0}$ we may replace $\sup$ by $\max$. For simplicity we omit writing the subscript $n_0$. Now we group the points of $Q_0=[0,1]^2$ into the sets $A_Q$, $Q\in\mathcal D$, as follows: $x\in A_Q$ if $$\max_{\substack{R\in \mathcal D \\ R \ni x}}\frac{1}{\mu(R)}\int_R \psi {d}\mu = \frac{1}{\mu(Q)}\int_Q \psi {d}\mu\,.$$ Of course, a given $x$ may belong to more than one set $A_Q$.
We would like to make these sets disjoint. Let $Q_1, Q_2,\dots$ be an enumeration of the dyadic rectangles in $\mathcal D$. Then consider the sets $A_{Q_i}':=A_{Q_i}\setminus \bigcup\limits_{j=1}^{i-1}A_{Q_j}$. Obviously the sets $A_{Q_i}'$ are disjoint and their union is $Q_0=[0,1]^2$. Hence, we have $$\int_{Q_0}\Big(\mathcal M_{\mu}\psi(x)\Big)^2{d}\mu=\int_{Q_0}\Bigg(\max_{\substack{R\in \mathcal D \\ R \ni x}}\frac{1}{\mu(R)}\int_R \psi {d}\mu\Bigg)^2{d}\mu=\sum_{i}\int_{A_{Q_i}'}\Big(\frac{1}{\mu(Q_i)}\int_{Q_i} \psi {d}\mu\Big)^2 {d}\mu =$$ $$\sum_{i}\frac{\mu(A_{Q_i}')}{\mu(Q_i)^2}\Big(\int_{Q_i} \psi \hspace{0.1cm}{d}\mu\Big)^2:=\sum_{i}\alpha_{Q_i}\Big(\int_{Q_i} \psi \hspace{0.1cm}{d}\mu\Big)^2\,.$$ In other words, we constructed a sequence $\alpha=\{\alpha_{Q_i}\}$, $Q_i\in \mathcal D$, for which the embedding \eqref{alEmb} can hold only with a very large constant ($C'\geq K$), while the Carleson condition holds with constant $1$. But this sequence, as well as the function $\psi$, depends on $K$. To get a single sequence and a single function we start with $i\in\mathbb{N}$ and take $K=4^i$. Let us omit enumerating the dyadic rectangles in $[0,1]^2$ for now. By the above, there exist $\psi_i$ and $\{\alpha_{Q}^i\}_{Q\in\mathcal D}$ such that $$4^i \int\limits_{[0,1]^2}\psi_i^2{d}\mu \hspace{0.1cm}\leq \hspace{0.1cm} \sum\limits_{Q\in \mathcal D}\alpha_{Q}^i\Big(\int_{Q} \psi_i \hspace{0.1cm}{d}\mu\Big)^2, $$ and re-normalizing we get \begin{equation}\label{iemb} 2^i \hspace{0.1cm}\leq \hspace{0.1cm} \sum\limits_{Q\in \mathcal D}\alpha_{Q}^i\Big(\int_{Q} \phi_i \hspace{0.1cm}{d}\mu\Big)^2 \end{equation} for $\phi_i=\dfrac{\psi_i}{2^{i/2}\|\psi_i\|_{L^2([0,1]^2)}}$. Now let $\alpha_Q:=\sum\limits_{i=1}^{\infty}2^{-i}\alpha_Q^i$ and $\phi=\sum\limits_{i=1}^{\infty}\phi_i$. Notice that $\phi\in L^2([0,1]^2)$, since $\|\phi_i\|_{L^2}=2^{-i/2}$, and the sequence $\alpha=\{\alpha_Q\}_{Q\in \mathcal D}$ satisfies the Carleson condition. Indeed, let $\mathcal S$ be any sub-collection of $\mathcal D$.
Let $J_{\mathcal S}=\{j_1,j_2,\dots\}$ be the indexing set which enumerates the dyadic rectangles in $\mathcal S$. By the disjointness of the sets $A_{Q_j}'^{i}$ (for each fixed $i$) we have $$\sum_{Q\in \mathcal S}\alpha_{Q} \mu(Q)^2=\sum_{j\in J_{\mathcal S}}\alpha_{Q_j} \mu(Q_j)^2=\sum_{j\in J_{\mathcal S}} \sum\limits_{i=1}^{\infty}2^{-i}\alpha_{Q_j}^i \mu(Q_j)^2 = \sum\limits_{i=1}^{\infty}2^{-i} \sum_{j\in J_{\mathcal S}} \mu(A_{Q_j}'^{i})= $$ $$\sum\limits_{i=1}^{\infty}2^{-i}\mu(\bigcup\limits_{j\in J_{\mathcal S}} A_{Q_j}'^{i}) \leq \sum\limits_{i=1}^{\infty}2^{-i} \mu(\bigcup\limits_{Q \in {\mathcal S}} {Q})=\mu(\bigcup\limits_{Q \in {\mathcal S}} {Q})\,. $$ Finally, using the positivity of the $\phi_i$ and \eqref{iemb}, we get $$\sum\limits_{Q\in\mathcal D}\alpha_Q \Big(\int_{Q} \phi \hspace{0.1cm}{d}\mu\Big)^2=\sum\limits_{Q\in\mathcal D}\sum\limits_{i=1}^{\infty}2^{-i}\alpha_Q^i\Big(\int_{Q} \phi \hspace{0.1cm}{d}\mu\Big)^2 \ge \sum\limits_{i=1}^{\infty}2^{-i}\sum\limits_{Q\in\mathcal D}\alpha_Q^i\Big(\int_{Q} \phi_i \hspace{0.1cm}{d}\mu\Big)^2 \ge \sum\limits_{i=1}^{\infty}2^{-i}\cdot 2^i= \infty\,.$$ \end{proof}
\section{Asymptotically self-similar singularities} \setcounter{equation}{0} We are concerned with the following Euler equations for homogeneous incompressible fluid flows in $\Bbb R^3$: \[ \mathrm{ (E)} \left\{ \aligned &\frac{\partial v}{\partial t} +(v\cdot \nabla )v =-\nabla p , \quad (x,t)\in {\Bbb R^3}\times (0, \infty), \\ &\quad \textrm{div }\, v =0 , \quad (x,t)\in {\Bbb R^3}\times (0, \infty),\\ &v(x,0)=v_0 (x), \quad x\in \Bbb R^3, \endaligned \right. \] where $v=(v_1, v_2, v_3 )$, $v_j =v_j (x, t)$, $j=1,2,3$, is the velocity of the flow, $p=p(x,t)$ is the scalar pressure, and $v_0 $ is the given initial velocity, satisfying div $v_0 =0$. The system (E) was first derived by Euler in \cite{eul}. The local well-posedness of the Euler equations in $H^m (\Bbb R^3)$, $m>5/2$, was established by Kato in \cite{kat}: given $v_0 \in H^m (\Bbb R^3)$, there exist $T \in (0, \infty]$ and a unique solution $v \in C([0, T);H^m (\Bbb R^3))$ to (E). The finite time blow-up problem for the local classical solution is known as one of the most important and difficult problems in partial differential equations (see e.g. \cite{maj,che,con1,con2,cha1} for graduate level texts and survey articles on the current status of the problem). We say a local in time classical solution $v\in C([0, T);H^m (\Bbb R^3))$ blows up at $T$ if $ \limsup_{t\to T}\|v(t)\|_{H^m} =\infty $ for all $m>5/2$. The celebrated Beale--Kato--Majda criterion (\cite{bea}) states that the blow-up happens at $T$ if and only if $$ \int_0 ^T \|\o (t )\|_{L^\infty} dt=\infty. $$ There are also studies of a geometric nature on the blow-up criterion (\cite{con3, con2, den}). As another direction of study of the blow-up problem, one can consider various scenarios of singularity and examine carefully whether they can be realized (see e.g. \cite{cor1,cor2, cha2,cha3} for some of those studies).
One of the purposes of this paper, especially of this section, is to study more deeply the notions related to the self-similar singularity scenarios for the Euler equations, preliminary studies of which were done in \cite{cha2, cha3}. We recall that the system (E) has the scaling property that if $(v, p)$ is a solution of the system (E), then for any $\lambda >0$ and $\alpha \in \Bbb R $ the functions \begin{equation} \label{1.1} v^{\lambda, \alpha}(x,t)=\lambda ^\alpha v (\lambda x, \l^{\a +1} t),\quad p^{\l, \a}(x,t)=\l^{2\a}p(\l x, \l^{\a+1} t ) \end{equation} are also solutions of (E) with the initial data $ v^{\lambda, \alpha}_0(x)=\lambda ^\alpha v_0 (\lambda x)$. In view of the scaling properties in (\ref{1.1}), a natural self-similar blowing up solution $v(x,t)$ of (E) should be of the form \begin{eqnarray} \label{1.2} v(x, t)&=&\frac{1}{(T-t)^{\frac{\a}{\a+1}}} \bar{V}\left(\frac{x}{(T-t)^{\frac{1}{\a+1}}}\right)\\ \label{1.3} p(x,t)&=&\frac{1}{(\a +1)(T-t)^{\frac{2\a}{\a+1}}} \bar{P}\left(\frac{x}{(T-t)^{\frac{1}{\a+1}}}\right) \end{eqnarray} for $\a \neq -1$ and $t$ sufficiently close to $T$. Substituting (\ref{1.2})-(\ref{1.3}) into (E), we obtain the following stationary system: \begin{equation}\label{1.4} \left\{ \aligned & \a \bar{V} +(y \cdot \nabla)\bar{V} + (\a+1)(\bar{V}\cdot \nabla )\bar{V} =-\nabla \bar{P},\\ & \mathrm{div}\, \bar{V}=0, \endaligned \right. \end{equation} the Navier--Stokes version of which has been studied extensively since Leray's pioneering paper (\cite{ler, nec, tsa, mil, cha3, hou}). Existence of a solution of the system (\ref{1.4}) is equivalent to the existence of a solution to the Euler equations of the form (\ref{1.2})-(\ref{1.3}), which blows up in a self-similar fashion.
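For the reader's convenience, here is the chain-rule computation behind the velocity part of \eqref{1.4}. With $y=x/(T-t)^{\frac{1}{\a+1}}$, so that $\partial_t y=\frac{1}{\a+1}(T-t)^{-1}y$, one has

```latex
\begin{align*}
\frac{\partial v}{\partial t}(x,t)
  &= \frac{1}{(T-t)^{\frac{2\a+1}{\a+1}}}
     \left[\frac{\a}{\a+1}\,\bar V(y)
           + \frac{1}{\a+1}\,(y\cdot\nabla)\bar V(y)\right],\\
(v\cdot \nabla_x)\, v(x,t)
  &= \frac{1}{(T-t)^{\frac{2\a+1}{\a+1}}}\,(\bar V\cdot\nabla)\bar V(y),
\end{align*}
```

so both terms carry the common factor $(T-t)^{-\frac{2\a+1}{\a+1}}$, and multiplying the momentum equation by $(\a+1)(T-t)^{\frac{2\a+1}{\a+1}}$ produces the left-hand side of \eqref{1.4}.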
Given $(\a, p)\in (-1, \infty)\times (0, \infty]$, we say the blow-up is \emph{$\a-$asymptotically self-similar in the sense of $L^p$} if there exists $\bar{V}=\bar{V}_\a\in \dot{W}^{1,p}(\Bbb R^3 )$ such that the following convergence holds true: $$ \lim_{t\to T} (T-t) \left\|\nabla v(\cdot, t)-\frac{1}{T-t} \nabla \bar{V}\left( \frac{\cdot}{(T-t)^{\frac{1}{\a+1}}}\right)\right\|_{L^\infty}=0 $$ if $p=\infty$, while $$ \lim_{t\to T}(T-t)^{1-\frac{3}{(\a+1)p}}\left\|\o(\cdot ,t)-\frac{1}{T-t}\bar{ \O }\left(\frac{\cdot }{(T-t)^{\frac{1}{\a+1}}}\right)\right\|_{L^p}=0 $$ if $0<p<\infty$, where and hereafter we denote $$\O=\mathrm{curl} \,V \quad \mbox{and}\quad \bar{ \O}=\mathrm{curl}\, \bar{V}. $$ The above limit function $\bar{V}\in \dot{W}^{1,p}(\Bbb R^3)$ with $ \bar{\O }\neq 0$ is called the \emph{blow-up profile}. We observe that the self-similar blow-up given by (\ref{1.2})-(\ref{1.3}) is a trivial case of $\a-$asymptotically self-similar blow-up, with the blow-up profile given by the representing function $\bar{V}$. We say a blow-up at $T$ is of \emph{type I} if $$ \limsup_{t\to T} (T-t)\|\nabla v (t)\|_{L^\infty}<\infty. $$ If the blow-up is not of type I, we say it is of \emph{type II}. For this use of the terminology, type I and type II blow-ups, we follow the literature on the blow-up problem for semilinear heat equations (see e.g. \cite{mat,gig1,gig2} and references therein). The use of $\|\nabla v(t)\|_{L^\infty}$ rather than $\|v(t)\|_{L^\infty}$ in our definition of type I and II is motivated by the Beale--Kato--Majda blow-up criterion. \begin{thm} Let $ m >5/2$, and let $v\in C([0, T); H^m (\Bbb R^3 ))$ be a solution to (E) with $v_0 \in H^m (\Bbb R^3)$, div $v_0=0$. We set \begin{equation}\label{1.5} \limsup_{t\to T} (T-t)\|\nabla v (t)\|_{L^\infty}:=M(T). \end{equation} Then, either $M(T)=0$ or $M(T)\geq 1$. The former case corresponds to the absence of blow-up, and the latter case corresponds to blow-up at $T$.
Hence, the blow-up at $T$ is of type I if and only if $1\leq M(T) <\infty$. \end{thm} {\bf Proof } It suffices to show that $M(T)<1$ implies that there is no blow-up at $T$, which, in turn, leads to $M(T)=0$, since $\|\nabla v(t)\|_{L^\infty}\in C([0, T])$ in this case. We suppose $M(T)<1$. Then, there exists $t_0 \in (0, T)$ such that $$\sup_{t_0 <t<T} (T-t)\|\nabla v (t)\|_{L^\infty}:=M_0 <1.$$ Taking the curl of the evolution part of (E), we have the vorticity equation, $$ \frac{\partial \o }{\partial t} +(v\cdot \nabla )\o =(\o \cdot \nabla )v . $$ Taking the dot product of this with $\xi=\o/|\o|$ leads to $$ \frac{\partial |\o |}{\partial t} +(v\cdot \nabla )|\o |= (\xi\cdot \nabla)v \cdot \xi |\o|. $$ Integrating this over $[t_0, t]$ along the particle trajectories $\{ X(a,t)\}$ defined by $v(x,t)$, we have \begin{equation}\label{1.6} |\o (X(a,t),t)|=|\o (X(a,t_0), t_0)|\exp \left[ \int_{t_0} ^t (\xi\cdot \nabla)v \cdot \xi (X(a,s),s) ds \right], \end{equation} from which we estimate \begin{eqnarray}\label{1.7} \|\o (t)\|_{L^\infty}&\leq &\|\o(t_0) \|_{L^\infty} \exp\left[\int_{t_0}^t \|\nabla v (\tau)\|_{L^\infty} d\tau\right]\nonumber\\ &\leq&\|\o (t_0) \|_{L^\infty}\exp\left[M_0 \int_{t_0} ^t (T-\tau )^{-1} d\tau \right]\nonumber\\ &=&\|\o (t_0) \|_{L^\infty}\left(\frac{T-t_0}{T-t}\right)^{M_0}. \end{eqnarray} Since $M_0<1$, we have $ \int_{t_0}^T\|\o (t)\|_{L^\infty} dt <\infty$, and thanks to the Beale--Kato--Majda criterion there is no blow-up at $T$, so we can continue our classical solution beyond $T$. $\square$\\ \ \\ The following is our main theorem in this section. \begin{thm} Suppose a classical solution $v\in C([0, T); H^m (\Bbb R^3))$ with initial data $v_0 \in H^m (\Bbb R^3)\cap \dot{W}^{1, p} (\Bbb R^3)$, div $v_0=0$, $\o_0 \neq 0$, blows up at $T$ with a type I blow-up. Let $M=M(T)$ be as in Theorem 1.1. Suppose $(\a,p)\in (-1, \infty)\times (0, \infty]$ satisfies \begin{equation}\label{1.8} M < \left| 1-\frac{3}{(\a+1)p}\right|.
\end{equation} Then, there exists no $\a-$asymptotically self-similar blow-up at $t=T$ in the sense of $L^p$ if $\o_0 \in L^p (\Bbb R^3)$. Hence, for any type I blow-up and for any $\a \in (-1, \infty)$ there exists $ p_1\in (0,\infty]$ such that the blow-up is not $\a-$asymptotically self-similar in the sense of $L^{p_1}$. \end{thm} {\bf Remark 1.1} We note that the case $p=\infty$ of the above theorem follows from Theorem 1.1, which states that there is no singularity at all at $t=T$ in this case. The above theorem can be regarded as an improvement of the main theorem in \cite{cha3}, in the sense that we only need the $L^p$ convergence to exclude a nontrivial blow-up profile $\bar{V}$, where $p$ depends on $M$. Moreover, we need neither the Besov space $\dot{B}^0_{\infty, 1}$ in the statement of the theorem, nor the continuation principle for local solutions in that Besov space in the proof.\\ \ \\ \noindent{\bf Proof of Theorem 1.2} We assume that an asymptotically self-similar blow-up happens at $T$. Let us introduce the similarity variables defined by $$ y=\frac{x}{(T-t)^{\frac{1}{\a+1}}}, \quad s=\frac{1}{\a+1} \log \left( \frac{T}{T-t}\right), $$ and the transformation of the unknowns $(v,p) \to (V, P)$ according to \begin{equation}\label{1.9} v(x,t)=\frac{1}{(T-t)^{\frac{\a}{\a+1}}} V(y,s ), \quad p(x,t)=\frac{1}{(T-t)^{\frac{2\a}{\a+1}}} P(y,s ). \end{equation} Substituting $(v,p)$ into (E), we obtain the equivalent evolution equation for $(V,P)$, $$ (E_1) \left\{ \aligned & V_s +\a V +(y \cdot \nabla)V + (\a+1)(V\cdot \nabla )V =-\nabla P,\\ & \mathrm{div}\, V=0,\\ & V(y,0)=V_0 (y)=T^{\frac{\a}{\a+1}} v_0 (T^{\frac{1}{\a+1}}y). \endaligned \right. $$ Then the assumption of an asymptotically self-similar singularity at $T$ implies that there exists $\bar{V}=\bar{V}_\a\in \dot{W}^{1,p} (\Bbb R^3)$ such that \begin{equation}\label{1.10} \lim_{s\to \infty}\|\O (\cdot, s)-\bar{\O}\|_{L^p}=0.
\end{equation} Now the hypothesis (\ref{1.8}) implies that there exists $t_0 \in (0, T)$ such that \begin{equation}\label{1.10a} \sup_{t_0 <t<T} (T-t)\|\nabla v (t)\|_{L^\infty} :=M_0 < \left| 1-\frac{3}{(\a+1)p}\right|. \end{equation} Taking the $L^p(\Bbb R^3)$ norm of (\ref{1.6}) and taking into account the simple estimates $$ -\|\nabla v(\cdot ,t)\|_{L^\infty}\leq (\xi\cdot \nabla)v \cdot \xi (x,t) \leq \|\nabla v (\cdot ,t)\|_{L^\infty} \quad \forall (x,t)\in \Bbb R^3\times [t_0 , T), $$ we obtain, for all $p\in (0, \infty]$, \begin{eqnarray}\label{1.11} \lefteqn{\|\o (t_0 ) \|_{L^p} \exp \left[- \int_{t_0} ^t \|\nabla v (\cdot,s)\|_{L^\infty} ds \right]\leq \|\o (t)\|_{L^p}}\hspace{1.in}\nonumber \\ && \qquad \leq \|\o (t_0) \|_{L^p}\exp \left[ \int_{t_0} ^t \|\nabla v (\cdot,s)\|_{L^\infty} ds \right], \end{eqnarray} where we use the fact that $a\mapsto X(a,t)$ is a volume preserving map. From the fact that $$ \int_{t_0} ^t \|\nabla v (\cdot,s)\|_{L^\infty} ds\leq M_0 \int_{t_0}^t (T-\tau)^{-1}d\tau=-M_0 \log \left( \frac{T-t}{T-t_0}\right), $$ and the scaling identity $$ \frac{\|\o (t)\|_{L^p}}{\|\o (t_0)\|_{L^p}} = \left(\frac{T-t}{T-t_0}\right)^{ \frac{3}{(\a+1)p} -1} \frac{\|\O(s)\|_{L^p}}{\|\O (s_0 )\|_{L^p}}, $$ where we set $$s_0 =\frac{1}{\a+1} \log \left(\frac{T}{T-t_0}\right), $$ we find that (\ref{1.11}) leads us to \begin{equation}\label{1.12} \left(\frac{T-t}{T-t_0}\right)^{M_0+1- \frac{3}{(\a+1)p}} \leq \frac{\|\O(s)\|_{L^p}}{\|\O (s_0)\|_{L^p}} \leq \left(\frac{T-t}{T-t_0}\right)^{-M_0+1- \frac{3}{(\a+1)p}} \end{equation} for all $p\in (0, \infty]$. Passing to the limit $t\to T$ (equivalently, $s\to \infty$) in (\ref{1.12}), we have from (\ref{1.10}) \begin{equation}\label{1.12a} \lim_{s\to \infty} \frac{\|\O(s)\|_{L^p}}{\|\O (s_0)\|_{L^p}} =\frac{\|\bar{\O}\|_{L^p}}{\|\O (s_0 ) \|_{L^p}}\in (0, \infty). \end{equation} By (\ref{1.10a}), either $M_0+1- \frac{3}{(\a+1)p}<0$ or $-M_0+1- \frac{3}{(\a+1)p} >0$.
In the former case we have \begin{equation}\label{1.13} \lim_{t \to T} \left(\frac{T-t}{T-t_0}\right)^{M_0+1- \frac{3}{(\a+1)p}}=\infty, \end{equation} while in the latter case \begin{equation}\label{1.14} \lim_{t \to T} \left(\frac{T-t}{T-t_0}\right)^{-M_0 +1- \frac{3}{(\a+1)p}} =0. \end{equation} Each of (\ref{1.13}) and (\ref{1.14}) contradicts (\ref{1.12a}). If the blow-up is of type I, so that $M(T)<\infty$, then one can always choose $p_1>0$ so small that (\ref{1.8}) is valid for $p=p_1$. With such $p_1$ the blow-up is not $\a-$asymptotically self-similar in $L^{p_1}$. $\square$\\ \ \\ For the self-similar blowing-up solution of the form (\ref{1.2})-(\ref{1.3}) we observe that, in order to be consistent with the energy conservation $\|v(t)\|_{L^2}=\|v_0\|_{L^2}$ for all $t\in [0, T)$, we need to fix $\a=3/2$. Since the self-similar blowing up solution corresponds to a trivial convergence in the definition of the asymptotically self-similar blow-up, the following is immediate from Theorem 1.2. \begin{cor} Given $p\in (0, \infty]$, there exists no self-similar blow-up with the blow-up profile $\bar{V}$ satisfying $\bar{\O} \in L^p (\Bbb R^3)$ if \begin{equation}\label{1.15} \|\nabla \bar{V}\|_{L^\infty} < \left| 1-\frac{6}{5p}\right|. \end{equation} \end{cor} {\bf Remark 1.2 } The above corollary implies that we can exclude a self-similar singularity of the Euler equations under the sole assumption that $\bar{\O} \in L^p (\Bbb R^3)$, provided $p$ satisfies the condition (\ref{1.15}). \\ \ \\ The following, which is nothing but Theorem 1.1 in \cite{cha2}, is in turn immediate from the above corollary. \begin{cor} There exists no self-similar blow-up with the blow-up profile $\bar{V}$ satisfying $\bar{\O} \in L^p (\Bbb R^3)$ for all $p\in (0, p_0 )$ for some $p_0 >0$.
\end{cor} The following theorem is concerned with the possibility of a type II asymptotically self-similar singularity of the Euler equations, for which the blow-up rate near the possible blow-up time $T$ is \begin{equation}\label{type2}\|\nabla v(t)\|_{L^\infty}\sim \frac{1}{(T-t)^\gamma }, \qquad \gamma >1. \end{equation} \begin{thm} Let $v\in C([0, T); H^m (\Bbb R^3))$, $m>5/2$, be a local classical solution of the Euler equations. Suppose there exist $\gamma >1$ and $R_1 >0$ such that the following convergence holds true: \begin{equation}\label{1.16} \lim_{t\to T}(T-t)^{(\a-\frac32)\frac{\gamma}{\a+1}} \left\|v(\cdot, t)-\frac{1}{(T-t)^{\frac{\a \gamma}{\a+1}}}\bar{V}\left(\frac{\cdot}{(T-t)^{\frac{\gamma}{\a+1}}} \right) \right\|_{L^2 (B_{R_1})}=0, \end{equation} where $B_{R_1}=\{ x\in \Bbb R^3 \, | \, |x|<R_1\}$. Then the blow-up profile $\bar{V}\in L^2_{loc} (\Bbb R^3 )$ is a weak solution of the following stationary Euler equations: \begin{equation} (\bar{V}\cdot \nabla)\bar{V}=-\nabla \bar{P}, \qquad \mathrm{div} \,\bar{V}=0. \end{equation} \end{thm} \noindent{\bf Proof } We introduce the self-similar transform defined by \begin{equation}\label{1.17} v(x,t)=\frac{1}{(T-t)^{\frac{\a \gamma}{\a+1}}} V\left(y, s \right),\quad p(x,t)=\frac{1}{(T-t)^{\frac{2\a \gamma}{\a+1}}} P\left(y, s\right) \end{equation} with \begin{equation}\label{1.18} y=\frac{1}{(T-t)^{\frac{\gamma}{\a+1}}} x, \quad s=\frac{1}{(\gamma-1 )T^{\gamma-1}} \left[ \frac{T^{\gamma-1}}{(T-t)^{\gamma -1}} -1\right]. \end{equation} Substituting $(v,p)$ in (\ref{1.17})-(\ref{1.18}) into (E), we have \begin{equation}\label{1.19} (E_2) \left\{ \aligned & -\frac{\gamma}{s(\gamma-1) +T^{1-\gamma}}\left[ \frac{\a}{\a+1} V +\frac{1}{\a+1}(y \cdot \nabla)V \right]=V_s+(V\cdot \nabla )V +\nabla P,\\ & \qquad \mathrm{div}\, V=0,\\ & V(y,0)=V_0(y)=v_0 (y). \endaligned \right.
\end{equation} The hypothesis (\ref{1.16}) can be rewritten as \begin{equation}\label{1.20} \lim_{s\to \infty}\|V(\cdot, s)-\bar{V}(\cdot )\|_{L^2 (B_{R(s)})}=0,\quad R(s)=R_1\left[ (\gamma-1 )s +\frac{1}{T^{\gamma-1}} \right]^{\frac{\gamma}{(\a +1)(\gamma-1)}}, \end{equation} which implies that \begin{equation}\label{1.21} \lim_{s\to \infty} \|V(\cdot , s)-\bar{V}\|_{L^2 (B_R)}=0, \qquad \forall R>0, \end{equation} where $V(y,s) $ is defined by (\ref{1.17}). Similarly to \cite{hou, cha3}, we consider a scalar test function $\xi \in C^1_0 (0,1)$ with $\int_0 ^1\xi (s)ds\neq 0$, and a vector test function $\phi =(\phi_1 , \phi_2, \phi_3 )\in C_0^1 (\Bbb R^3)$ with div $\phi=0$. We take the dot product of the first equation of $(E_2)$ with $\xi (s-n)\phi (y)$, integrate over $\Bbb R^3\times [n, n+1]$, and then integrate by parts to obtain \begin{eqnarray*} &&\frac{\a-3}{\a+1}\int_0 ^{1}\int_{\Bbb R^3}g(s+n)\xi (s) V(y, s+n)\cdot\phi(y) dyds \\ &&-\frac{1}{\a +1}\int_0 ^{1}\int_{\Bbb R^3}g(s+n)\xi (s)V(y, s+n)\cdot (y \cdot \nabla)\phi (y)dyds\\ &&\qquad=\int_0^{1}\int_{\Bbb R^3} \xi _s(s) \phi(y)\cdot V(y,s+n) dyds\\ &&\qquad+\int_0 ^{1}\int_{\Bbb R^3}\xi (s)\left[V(y,s+n)\cdot (V(y,s+n)\cdot \nabla )\phi (y)\right] dyds, \end{eqnarray*} where we set $$g(s)=\frac{\gamma}{s(\gamma-1) +T^{1-\gamma}}.$$ Passing to the limit $n\to \infty$ in this equation, using the facts that $\int_0 ^1\xi _s(s)ds=0$, $\int_0 ^1\xi (s)ds\neq 0$, $V(\cdot, s+n)\to \bar{V}$ in $L^2_{\mathrm{loc}} (\Bbb R^3)$, and finally $g(s+n)\to 0$, we find that $\bar{V}\in L^2_{\mathrm{loc}} (\Bbb R^3)$ satisfies $$ \int_{\Bbb R^3} \bar{V}\cdot (\bar{V}\cdot \nabla )\phi (y) dy=0 $$ for every vector test function $\phi \in C_0^1 (\Bbb R^3)$ with div $\phi=0$. On the other hand, we can pass $s\to \infty$ directly in the weak formulation of the second equation of $(E_2)$ to obtain $$ \int_{\Bbb R^3} \bar{V}\cdot \nabla \psi (y)dy=0 $$ for every scalar test function $\psi \in C^1_0 (\Bbb R^3)$.
$\square$\\ \section{Generalized similarity transforms and new a priori estimates} \setcounter{equation}{0} Let us consider a classical solution $v\in C([0, T);H^m (\Bbb R^3))$, $m>5/2$, to (E), where we assume $T\in (0, \infty]$ is the maximal time of existence of the classical solution. Let $p(x,t)$ be the associated pressure. Let $\mu(\cdot)\in C^1([0, T))$ be a scalar function such that $\mu(t)>0$ for all $t\in [0, T)$ and $\int_0 ^T \mu (t) dt=\infty$. We transform from $(v,p)$ to $(V, P)$ according to the formulas \begin{eqnarray}\label{2.1} v(x,t)&=&\mu (t)^{\frac{\a}{\a+1}} V\left(\mu(t)^{\frac{1}{\a+1}}x, \int_0 ^t \mu (\sigma )d\sigma \right),\\ \label{2.2} p(x,t)&=&\mu (t)^{\frac{2\a}{\a+1}} P\left(\mu(t)^{\frac{1}{\a+1}}x, \int_0 ^t \mu (\sigma )d\sigma \right), \end{eqnarray} where $\a\in (-1, \infty)$ as previously. This means that the space-time variables are transformed from $(x,t) \in \Bbb R^3 \times [0,T)$ into $(y,s)\in \Bbb R^3\times [0, \infty)$ as follows: \begin{equation}\label{2.3} y=\mu(t)^{\frac{1}{\a+1}}x, \quad s=\int_0 ^t \mu (\sigma )d\sigma. \end{equation} Substituting (\ref{2.1})-(\ref{2.3}) into the Euler equations, we obtain the equivalent equations satisfied by $(V,P)$: $$ (E_*) \left\{ \aligned & -\frac{\mu'(t)}{\mu(t)^2}\left[ \frac{\a}{\a+1} V +\frac{1}{\a+1}(y \cdot \nabla)V \right]=V_s+(V\cdot \nabla )V +\nabla P,\\ & \qquad \mathrm{div}\, V=0,\\ & V(y,0)=V_0(y)=v_0 (y). \endaligned \right. $$ We note that the special cases $$\mu (t)=\frac{1}{T-t}, \quad \mu (t)=\frac{1}{(T-t)^\gamma }, \quad \gamma >1, $$ were considered in the previous section.
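As a consistency check, the first special case reproduces, up to harmless rescalings, the similarity variables of Section 1. For $\mu(t)=1/(T-t)$ we have $\mu'(t)=1/(T-t)^{2}$, hence

```latex
\[
-\frac{\mu'(t)}{\mu(t)^2} = -1, \qquad
s=\int_0^t \frac{d\sigma}{T-\sigma}=\log\frac{T}{T-t}, \qquad
y=\frac{x}{(T-t)^{\frac{1}{\a+1}}},
\]
% and (E_*) becomes
\[
V_s + \frac{\a}{\a+1}\,V + \frac{1}{\a+1}\,(y\cdot\nabla)V
    + (V\cdot\nabla)V + \nabla P = 0\,,
\]
% which coincides with (E_1) after the time rescaling s \mapsto (\a+1)s
% (note the extra factor 1/(\a+1) in the definition of s in Section 1)
% and the replacement P \mapsto (\a+1)P.
```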
In this section we choose $\mu(t)=\exp\left[\pm\gamma\int_0 ^t \|\nabla v(\tau)\|_{L^\infty}d\tau \right]$, $\gamma\geq 1$. Then, \begin{eqnarray}\label{2.4} v(x,t)&=&\exp\left[\frac{\pm\gamma\a}{\a+1}\int_0 ^t\|\nabla v(\tau)\|_{L^\infty}d\tau \right] \ V\left(y, s \right),\\ \label{2.5} p(x,t)&=&\exp\left[\frac{\pm 2\gamma\a}{\a+1}\int_0 ^t\|\nabla v(\tau)\|_{L^\infty}d\tau \right] \ P\left(y, s\right) \end{eqnarray} with \begin{eqnarray} \label{2.6} y&=&\exp\left[\frac{\pm \gamma}{\a+1}\int_0 ^t\|\nabla v(\tau)\|_{L^\infty}d\tau \right]x, \nonumber \\ s&=&\int_0 ^t\exp\left[\pm \gamma\int_0 ^\tau\|\nabla v(\sigma)\|_{L^\infty}d\sigma \right]d\tau \end{eqnarray} respectively for the signs $\pm$. Substituting $(v,p)$ in (\ref{2.4})-(\ref{2.6}) into (E), we find that $(E_*)$ becomes $$ (E_\pm) \left\{ \aligned & \mp \gamma \|\nabla V(s)\|_{L^\infty}\left[ \frac{\a}{\a+1} V +\frac{1}{\a+1}(y \cdot \nabla)V \right]=V_s+(V\cdot \nabla )V +\nabla P,\\ & \mathrm{div}\, V=0,\\ & V(y,0)=V_0(y)=v_0 (y) \endaligned \right. $$ respectively for $\pm$. Equations similar to the system $(E_\pm )$, but without the term involving $(y\cdot\nabla)V$, were introduced and studied in \cite{cha4}, where a similarity-type transform with respect to the time variable only was considered. The local/global well-posedness of the systems $(E_\pm)$ is derived from the local well-posedness result for the Euler equations as follows. We define $$ S^\pm =\int_0 ^{T}\exp\left[\pm \gamma\int_0 ^\tau\|\nabla v(\sigma)\|_{L^\infty}d\sigma \right]d\tau . $$ Then $ S^\pm$ is the maximal time of existence of the classical solution for the system $ (E_\pm)$. We also note the following integral invariant of the transform, $$ \int_0 ^{T} \|\nabla v(t)\|_{L^\infty}dt=\int_0 ^{S^\pm} \|\nabla V^\pm (s)\|_{L^\infty}ds.
$$ The key advantage of our choice of the function $\mu(t)$ here is that the convection term is dominated by $\mp \gamma \|\nabla V(s)\|_{L^\infty} V$ in the vorticity formulation of the transformed system $(E_\pm)$, which enables us to derive new \textit{a priori} estimates for $\|\o (t)\|_{L^\infty}$ as follows. \begin{thm} Given $m>5/2$ and $v_0 \in H^m (\Bbb R^3)$ with div $v_0=0$, let $\o $ be the vorticity of the solution $v\in C([0, T);H^m (\Bbb R^3 ))$ to the Euler equations (E). Then we have the upper estimate \begin{eqnarray}\label{2.7} \|\o (t)\|_{L^\infty}\leq \frac{\|\o_0\|_{L^\infty} \exp\left[ \gamma \int_0 ^t \|\nabla v (\tau)\|_{L^\infty} d\tau\right]}{1+ (\gamma-1) \|\o_0 \|_{L^\infty}\int_0 ^t\exp\left[ \gamma\int_0 ^\tau\|\nabla v(\sigma)\|_{L^\infty}d\sigma \right]d\tau}, \end{eqnarray} and the lower estimate \begin{eqnarray}\label{2.8} \|\o (t)\|_{L^\infty}\geq \frac{\|\o_0\|_{L^\infty} \exp\left[ -\gamma \int_0 ^t \|\nabla v (\tau)\|_{L^\infty} d\tau\right]}{1-(\gamma-1) \|\o_0 \|_{L^\infty}\int_0 ^t\exp\left[ -\gamma\int_0 ^\tau\|\nabla v(\sigma)\|_{L^\infty}d\sigma \right]d\tau} \end{eqnarray} for all $\gamma \geq 1$ and $t\in [0, T).$ The denominator of the right-hand side of (\ref{2.8}) can be estimated from below as \begin{equation}\label{2.8a} 1-(\gamma-1) \|\o_0 \|_{L^\infty}\int_0 ^t\exp\left[ -\gamma\int_0 ^\tau\|\nabla v(\sigma)\|_{L^\infty}d\sigma \right]d\tau \geq \frac{1}{(1+\|\o_0 \|_{L^\infty} t)^{\gamma-1}}, \end{equation} which shows that finite time blow-up does not follow from (\ref{2.8}). \end{thm} {\bf Remark 2.1} We observe that for $\gamma=1$ the estimates (\ref{2.7})-(\ref{2.8}) reduce to the well-known ones in (\ref{1.11}) with $p=\infty$.
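Indeed, for $\gamma=1$ the denominators in \eqref{2.7}-\eqref{2.8} are identically $1$, and the two estimates collapse to the classical two-sided bound (cf. \eqref{1.11} with $p=\infty$ and $t_0=0$):

```latex
\[
\|\o_0\|_{L^\infty}\exp\left[-\int_0^t\|\nabla v(\tau)\|_{L^\infty}\,d\tau\right]
\;\le\;\|\o(t)\|_{L^\infty}\;\le\;
\|\o_0\|_{L^\infty}\exp\left[\int_0^t\|\nabla v(\tau)\|_{L^\infty}\,d\tau\right].
\]
```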
Moreover, combining (\ref{2.7})-(\ref{2.8}), we easily derive another new estimate, \begin{equation}\label{2.9} \frac{\sinh \left[ \gamma \int_0 ^t \|\nabla v(\tau )\|_{L^\infty} d\tau \right]}{\int_0 ^t \cosh \left[\gamma \int_\tau ^t \|\nabla v (\sigma )\|_{L^\infty}d\sigma\right] d\tau}\geq (\gamma-1)\|\o_0 \|_{L^\infty}. \end{equation} \ \\ {\bf Proof of Theorem 2.1} Below we denote by $V^\pm$ the solutions of $(E_\pm)$, respectively, and set $\O^\pm = \mathrm{curl}\, V^\pm . $ Note that $V^\pm _0=v_0:=V_0$ and $\O^\pm _0 =\o_0:=\O_0$. We will first derive the following estimates for the system $(E_\pm)$: \begin{eqnarray}\label{2.10} \| \O ^+(s)\|_{L^\infty}&\leq& \frac{\|\O_0\|_{L^\infty}}{1+(\gamma-1 )s\|\O_0 \|_{L^\infty}},\\ \label{2.11} \| \O ^-(s)\|_{L^\infty}&\geq &\frac{\|\O_0\|_{L^\infty}}{1-(\gamma-1 )s\|\O_0 \|_{L^\infty}}, \end{eqnarray} as long as $V^\pm(s)\in H^m (\Bbb R^3 )$. Taking the curl of the first equation of $(E_\pm)$, we have \begin{equation}\label{2.12} \mp\gamma \|\nabla V\|_{L^\infty} \left[\O -\frac{1}{\a +1}(y\cdot \nabla )\O\right]= \O_s +(V\cdot \nabla )\O -(\O\cdot \nabla )V. \end{equation} Taking the dot product of both sides of (\ref{2.12}) with $\Xi=\O/|\O|$, we deduce \begin{eqnarray}\label{2.13} \lefteqn{|\O|_s +(V\cdot \nabla )|\O|\mp \frac{\gamma\|\nabla V(s)\|_{L^\infty} }{\a +1}(y\cdot \nabla )|\O|=(\Xi\cdot \nabla V\cdot \Xi \mp\|\nabla V\|_{L^\infty}) |\O|}\hspace{1.5in}\nonumber \\ &&\qquad\mp(\gamma -1)\|\nabla V\|_{L^\infty} |\O|\nonumber \\ &&\left\{ \aligned & \leq -(\gamma -1)\|\nabla V\|_{L^\infty} |\O|\quad \mbox{for}\,(E_+)\\ &\geq (\gamma -1)\|\nabla V\|_{L^\infty} |\O|\quad \mbox{for}\, (E_-), \endaligned \right. \end{eqnarray} since $|\Xi\cdot \nabla V\cdot \Xi |\leq |\nabla V|\leq \|\nabla V\|_{L^\infty}.
$ Given a smooth solution $V(y,s)$ of $(E_\pm)$, we introduce the particle trajectories $\{ Y_\pm (a,s)\}$ defined by $$ \frac{\partial Y(a,s)}{\partial s}=V_\pm(Y(a,s),s)\mp\frac{\|\nabla V(s)\|_{L^\infty}}{\a+1} Y(a,s) \quad ;\quad Y(a,0)=a . $$ Recalling the estimate $$\|\nabla V(s)\|_{L^\infty}\geq \|\O(s)\|_{L^\infty} \geq |\O (y,s)|\qquad \forall y\in \Bbb R^3, $$ we can further estimate from (\ref{2.13}) \begin{equation}\label{2.14} \frac{\partial}{\partial s} |\O(Y(a,s),s)| \left\{ \aligned & \leq -(\gamma -1)|\O(Y(a,s),s)|^2 \quad \mbox{for}\, (E_+)\\ &\geq (\gamma -1) |\O(Y(a,s),s)|^2\quad \mbox{for}\, (E_-) . \endaligned \right. \end{equation} Solving these differential inequalities (\ref{2.14}) along the particle trajectories, we obtain \begin{equation}\label{2.15} |\O (Y(a,s) ,s) | \left\{ \aligned & \leq \frac{|\O_0 (a)|}{1 +(\gamma -1)s|\O_0(a )|}\quad \mbox{for}\,(E_+)\\ &\geq \frac{|\O_0 (a)|}{1 -(\gamma -1)s|\O_0(a )|}\quad \mbox{for}\, (E_-). \endaligned \right. \end{equation} Writing the first inequality of (\ref{2.15}) as $$ |\O^+ (Y(a,s) ,s) | \leq \frac{1}{\frac{1}{|\O_0 (a)|} +(\gamma -1)s} \leq \frac{1}{\frac{1}{\|\O_0 \|_{L^\infty}} +(\gamma -1)s}, $$ and then taking the supremum over $a\in \Bbb R^3$, which is equivalent to taking the supremum over $Y(a,s)\in \Bbb R^3$ due to the fact that the mapping $a\mapsto Y(a,s)$ is a diffeomorphism (although not volume preserving) on $\Bbb R^3$ as long as $V\in C([0, S); H^m (\Bbb R^3))$, we obtain (\ref{2.10}). In order to derive (\ref{2.11}) from the second inequality of (\ref{2.15}), we first write $$ \|\O^-( s)\|_{L^\infty}\geq |\O (Y(a,s),s)|\geq \frac{1}{\frac{1}{|\O_0 (a) |} -(\gamma -1)s}, $$ and then take the supremum over $a\in \Bbb R^3$. 
Finally, in order to obtain (\ref{2.7})-(\ref{2.8}), we just change variables from (\ref{2.10})-(\ref{2.11}) back to the original physical ones, using the fact \begin{eqnarray*} \O^+ (y,s)&=&\exp\left[ -\gamma \int_0 ^t \|\nabla v(\tau )\|_{L^\infty} d\tau\right]\o (x,t),\\ s&=&\int_0 ^t \exp\left[ \gamma \int_0 ^\tau \|\nabla v(\sigma )\|_{L^\infty} d\sigma\right]d\tau \end{eqnarray*} for (\ref{2.7}), while in order to deduce (\ref{2.8}) from (\ref{2.11}) we substitute \begin{eqnarray*} \O^- (y,s)&=&\exp\left[ \gamma \int_0 ^t \|\nabla v(\tau)\|_{L^\infty}d\tau \right]\o (x,t),\\ s&=&\int_0 ^t \exp\left[ -\gamma \int_0 ^\tau\|\nabla v(\sigma)\|_{L^\infty} d\sigma\right]d\tau . \end{eqnarray*} Now we can rewrite (\ref{2.8}) as $$ \|\o (t)\|_{L^\infty} \geq -\frac{1}{\gamma-1} \frac{d}{dt} \log\left\{1-(\gamma-1) \|\o_0 \|_{L^\infty}\int_0 ^t\exp\left[ -\gamma\int_0 ^\tau\|\nabla v(\sigma)\|_{L^\infty}d\sigma \right]d\tau \right\}. $$ Thus, \begin{eqnarray}\label{integ} &&\int_0 ^t \|\nabla v(\tau)\|_{L^\infty} d\tau\geq\int_0 ^t \|\o (\tau)\|_{L^\infty} d\tau\geq \nonumber \\ &&\geq -\frac{1}{\gamma-1}\log\left\{1-(\gamma-1) \|\o_0 \|_{L^\infty}\int_0 ^t\exp\left[ -\gamma\int_0 ^\tau\|\nabla v(\sigma)\|_{L^\infty}d\sigma \right]d\tau \right\}.\nonumber \\ \end{eqnarray} Setting $$ y(t):=1-(\gamma-1) \|\o_0 \|_{L^\infty}\int_0 ^t\exp\left[ -\gamma\int_0 ^\tau\|\nabla v(\sigma)\|_{L^\infty}d\sigma \right]d\tau, $$ we find a further integrable structure in (\ref{integ}), namely $$ y'(t)\geq -(\gamma-1)\|\o_0\|_{L^\infty} y(t)^{\frac{\gamma}{\gamma-1}}. $$ Solving this differential inequality, we obtain (\ref{2.8a}). $\square$\\ \ \\ {\em In the last part of this section we fix $\mu(t):=\exp\left[\int_0 ^t \|\nabla v(\tau)\|_{L^\infty}d\tau \right]$.}\\ We assume that our local classical solution in $H^m (\Bbb R^3)$ blows up at $T$, and hence $\mu(T-0)=\exp\left[\int_0 ^T \|\nabla v(\tau)\|_{L^\infty}d\tau \right]=\infty$. 
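The last step can be checked independently of the argument above: the function $y(t)=(1+\|\o_0\|_{L^\infty}t)^{1-\gamma}$ behind (\ref{2.8a}) solves the equality case of the differential inequality $y'(t)\geq -(\gamma-1)\|\o_0\|_{L^\infty}\,y(t)^{\frac{\gamma}{\gamma-1}}$ with $y(0)=1$. A quick numerical sketch (the values standing in for $\gamma$ and $\|\o_0\|_{L^\infty}$ are arbitrary, not from the paper):

```python
# Numerical sanity check (illustrative, not part of the proof) that
#   y(t) = (1 + w0*t)**(1 - g)
# satisfies y'(t) = -(g - 1) * w0 * y(t)**(g/(g - 1)), with y(0) = 1,
# i.e. the equality case of the differential inequality behind (2.8a).
g, w0 = 3.0, 2.0        # arbitrary stand-ins for gamma and ||omega_0||_inf

def y(t):
    return (1.0 + w0 * t) ** (1.0 - g)

def rhs(t):
    return -(g - 1.0) * w0 * y(t) ** (g / (g - 1.0))

for t in (0.0, 0.3, 1.0, 4.0):
    dy = (y(t + 1e-6) - y(t - 1e-6)) / 2e-6   # central finite difference
    assert abs(dy - rhs(t)) < 1e-4
```

The central difference agrees with the right-hand side at every sampled time, confirming the closed form used to derive (\ref{2.8a}).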
Given $(\a,p)\in (-1, \infty)\times (0, \infty]$, as previously, we say the blow-up is $\a$-asymptotically self-similar in the sense of $L^p$ if there exists $\bar{V}=\bar{V}_\a\in \dot{W}^{1,p}(\Bbb R^3 )$ such that the following convergence holds true. \begin{equation}\label{2.16} \lim_{t\to T}\mu(t)^{-1}\left\|\nabla v(\cdot, t) - \mu(t)\nabla \bar{V}\left(\mu(t)^{\frac{1}{\a+1} }(\cdot)\right)\right\|_{L^\infty}=0 \end{equation} for $p=\infty$, and \begin{equation}\label{2.17} \lim_{t\to T}\mu(t)^{-1+\frac{3}{(\a+1)p}}\left\|\o(\cdot ,t)-\mu(t)^{1-\frac{3}{(\a+1)p}} \bar{\O }\left(\mu(t)^{\frac{1}{\a+1}} (\cdot)\right)\right\|_{L^p}=0 \end{equation} for $p\in (0, \infty)$. The above limiting function $\bar{V}$ with $\bar{\O }\neq 0$ is called the blow-up profile, as previously. \begin{pro} Let $\a \neq 3/2$. Then there exists no $\a$-asymptotically self-similar blow-up in the sense of $L^\infty$ whose blow-up profile belongs to $L^2(\Bbb R^3)$. \end{pro} {\bf Proof } Let us suppose that there exists $\bar{V}\in \dot{W}^{1, \infty}(\Bbb R^3)\cap L^2 (\Bbb R^3)$ such that (\ref{2.16}) holds; we will then show that $\bar{V}=0$. In terms of the self-similar variables, (\ref{2.16}) translates into $$\lim_{s\to \infty} \|\nabla V(\cdot, s)-\nabla \bar{V}\|_{L^\infty}=0,$$ where $V$ is defined in (\ref{2.1}). If $\|\nabla \bar{V}\|_{L^\infty}=0$, then the condition $\bar{V}\in L^2(\Bbb R^3)$ implies that $\bar{V}=0$, and there is nothing to prove. Let us suppose $\|\nabla \bar{V}\|_{L^\infty}>0$. The equations satisfied by $\bar{V}$ are \begin{equation}\label{2.18} \left\{ \aligned &-\|\nabla \bar{V}\|_{L^\infty}\left[ \frac{\a}{\a+1} \bar{V }+\frac{1}{\a+1}(y \cdot \nabla)\bar{V} \right]=(\bar{V}\cdot \nabla )\bar{V }+\nabla \bar{P},\\ &\qquad\mathrm{div} \,\bar{ V}=0 \endaligned \right. \end{equation} for a scalar function $\bar{P}$. 
Taking the $L^2 (\Bbb R^3)$ inner product of the first equation of (\ref{2.18}) with $\bar{V}$, we obtain $$ \frac{\|\nabla \bar{V}\|_{L^\infty}}{\a+1} \left(\a-\frac{3}{2}\right) \|\bar{V}\|_{L^2}^2=0. $$ Since $\|\nabla \bar{V}\|_{L^\infty}\neq 0$ and $\a \neq \frac 32$, we have $\|\bar{V}\|_{L^2}=0$, and $\bar{V}=0$. $\square$\\ \ \\ \begin{pro} There exists no $\a$-asymptotically self-similar blowing-up solution to (E) in the sense of $L^p$ if $0<p < \frac{3}{2(\a+1)}.$ \end{pro} {\bf Proof } Suppose there exists an $\a$-asymptotically self-similar blow-up at $T$ in the sense of $L^p$. Then, there exists $\bar{\O} \in L^p (\Bbb R^3)$ such that, in terms of the self-similar variables introduced in (\ref{2.1})-(\ref{2.2}), we have \begin{equation}\label{2.19} \lim_{s\to \infty} \|\O (s)\|_{L^p}=\|\bar{\O}\|_{L^p} <\infty. \end{equation} We represent $\|\o (t)\|_{L^p}$ in terms of the similarity variables to obtain \begin{equation}\label{2.20} \|\o (t)\|_{L^p} = \mu(t)^{1-\frac{3}{(\a+1)p}}\|\O(s)\|_{L^p}, \quad \mu (t)=\exp\left[ \int_0 ^t \|\nabla v (\tau)\|_{L^\infty} d\tau\right]. \end{equation} Substituting this into the lower estimate part of (\ref{1.11}), we have \begin{equation}\label{2.21} \mu(t)^{-2+ \frac{3}{(\a+1)p}}\leq \frac{\|\O (s)\|_{L^p}}{\|\O_0 \|_{L^p}}. \end{equation} If $-2+\frac{3}{(\a+1)p}>0$, then taking $t\to T$ in the above inequality we obtain \begin{eqnarray*} \lefteqn{\infty=\limsup_{t\to T}\mu(t)^{-2+\frac{3}{(\a+1)p}}\|\O_0 \|_{L^p}}\hspace{.0in}\nonumber \\ &&\leq\limsup_{s\to \infty}\|\O (s)\|_{L^p}=\|\bar{\O}\|_{L^p}, \end{eqnarray*} which contradicts (\ref{2.19}). $\square$\\ \ \\ \section{\bf The case of the 3D Navier-Stokes equations} \setcounter{equation}{0} In this section we concentrate on the following 3D Navier-Stokes equations in $\Bbb R^3$ without a forcing term: 
\[ \mathrm{ (NS)} \left\{ \aligned &\frac{\partial v}{\partial t} +(v\cdot \nabla )v =\Delta v-\nabla p , \quad (x,t)\in {\Bbb R^3}\times (0, \infty) \\ &\quad \textrm{div }\, v =0 , \quad (x,t)\in {\Bbb R^3}\times (0, \infty)\\ &v(x,0)=v_0 (x) \quad x\in \Bbb R^3. \endaligned \right. \] First, we exclude asymptotically self-similar singularities of type II for (NS), for which the blow-up rate is given by (\ref{type2}). We have the following theorem. \begin{thm} Let $p\in [3, \infty)$ and $v\in C([0, T); L^p (\Bbb R^3))$ be a local classical solution of the Navier-Stokes equations constructed by Kato \cite{kat1}. Suppose there exist $\gamma >1$ and $\bar{V}\in L^p (\Bbb R^3)$ such that the following convergence holds true. \begin{equation}\label{3.1} \lim_{t\to T}(T-t)^{\frac{(p-3)\gamma }{2p}}\left\|v(\cdot, t)-(T-t)^{-\frac{(p-3)\gamma }{2p}}\bar{V}\left(\frac{\cdot}{(T-t)^{\frac{\gamma}{2}}} \right) \right\|_{L^p}=0. \end{equation} If the blow-up profile $\bar{V}$ belongs to $\dot{H}^1 (\Bbb R^3)$, then $\bar{V}=0$. \end{thm} {\bf Proof } Since the main part of the proof is essentially identical to that of Theorem 1.3, we will be brief. Introducing the self-similar variables of the form (\ref{1.17})-(\ref{1.19}) with $\a=\frac12$, and substituting $(v,p)$ into the Navier-Stokes equations, we find that $(V,P)$ satisfies $$ \left\{ \aligned & -\frac{\gamma}{2s(\gamma-1) +2T^{1-\gamma}}\left[ V +(y \cdot \nabla)V \right]=V_s+(V\cdot \nabla )V -\Delta V+\nabla P,\\ & \quad \mathrm{div}\, V=0,\\ & V(y,0)=V_0(y)=v_0 (y). \endaligned \right. $$ The hypothesis (\ref{3.1}) is now translated into $$ \lim_{s\to \infty}\|V(\cdot, s)-\bar{V}(\cdot )\|_{L^p}=0. $$ Following exactly the same argument as in the proof of Theorem 1.3, we can deduce that $\bar{V}$ is a stationary solution of the Navier-Stokes equations, namely there exists $\bar{P}$ such that \begin{equation}\label{3.1a} (\bar{V}\cdot \nabla )\bar{V}=\Delta \bar{V }-\nabla \bar{P}, \qquad \mathrm{div }\, \bar{V}=0. 
\end{equation} In the case $\bar{V}\in \dot{H}^1\cap L^p (\Bbb R^3)$, we easily deduce from (\ref{3.1a}) that $\int_{\Bbb R^3} |\nabla \bar{V}|^2 dy=0$, which implies $\bar{V}=0$. $\square$\\ \ \\ Next, we derive a new \textit{a priori} estimate for classical solutions of the 3D Navier-Stokes equations. \begin{thm} Given $v_0 \in H^1(\Bbb R^3)$ with div $v_0=0$, let $\o $ be the vorticity of the classical solution $v\in C([0, T);H^1 (\Bbb R^3 ))\cap C((0, T); C^\infty (\Bbb R^3))$ to the Navier-Stokes equations (NS). Then, there exists an absolute constant $C_0>1$ such that for all $\gamma \geq C_0$ the following enstrophy estimate holds true. \begin{eqnarray}\label{3.2} \|\o (t)\|_{L^2}\leq \frac{\|\o_0\|_{L^2} \exp\left[ \frac{\gamma}{4} \int_0 ^t \|\o (\tau)\|_{L^2}^4 d\tau\right]}{\left\{ 1+ (\gamma-C_0) \|\o_0 \|_{L^2}^4\int_0 ^t\exp\left[ \gamma\int_0 ^\tau\|\o (\sigma)\|_{L^2}^4d\sigma \right]d\tau\right\}^{\frac14}}. \end{eqnarray} The denominator of (\ref{3.2}) is estimated from above by \begin{equation}\label{3.2a} 1+ (\gamma-C_0) \|\o_0 \|_{L^2}^4\int_0 ^t\exp\left[ \gamma\int_0 ^\tau\|\o (\sigma)\|_{L^2}^4d\sigma \right]d\tau \leq \frac{1}{(1-C_0 \|\o_0 \|_{L^2}^4 t )^{\frac{\gamma-C_0}{C_0}}} \end{equation} for all $\gamma \geq C_0$. \end{thm} \noindent{\bf Proof } Let $(v,p)$ be a classical solution of the Navier-Stokes equations, and $\o$ be its vorticity. We transform from $(v,p)$ to $(V, P)$ according to the formulas given by (\ref{2.1})-(\ref{2.3}), where $$ \mu (t)=\exp\left[ \gamma\int_0 ^t \|\o (\tau )\|_{L^2} ^4 d\tau\right]. $$ Substituting (\ref{2.1})-(\ref{2.3}) with such $\mu(t)$ into (NS), we obtain the equivalent equations satisfied by $(V,P)$ $$ (NS_*) \left\{ \aligned & \frac{-\gamma\|\O (s)\|_{L^2}^4}{2}\left[ V +(y \cdot \nabla)V \right]=V_s+(V\cdot \nabla )V -\Delta V-\nabla P,\\ & \qquad \mathrm{div}\, V=0,\\ & V(y,0)=V_0(y)=v_0 (y). \endaligned \right. 
$$ Taking curl of the evolution equation of $(NS_*)$, we obtain \begin{equation}\label{3.5} \frac{-\gamma \|\O (s)\|_{L^2}^4}{2}\left[ 2\O +(y \cdot \nabla)\O\right]=\O_s+(V\cdot \nabla )\O-(\O \cdot \nabla )V-\Delta \O. \end{equation} Taking the $L^2(\Bbb R^3)$ inner product of (\ref{3.5}) with $\O$, and integrating by parts, we estimate \begin{eqnarray}\label{3.6} \lefteqn{\frac12 \frac{d}{ds} \|\O\|_{L^2}^2 +\|\nabla \O \|_{L^2}^2 + \frac{\gamma}{4} \|\O\|_{L^2}^6 =\int_{\Bbb R^3} (\O \cdot \nabla )V \cdot \O dy}\hspace{1.in}\nonumber \\ &&\leq\|\O\|_{L^3} \|\nabla V\|_{L^2} \|\O\|_{L^6}\leq C \|\O \|_{L^2}^{\frac32} \|\nabla \O\|_{L^2}^{\frac32}\nonumber \\ &&\leq \|\nabla \O \|_{L^2}^2 +\frac{C_0}{4} \|\O\|_{L^2}^6\nonumber \\ \end{eqnarray} for an absolute constant $C_0>1$, where we used the fact $\|\O\|_{L^2}=\|\nabla V\|_{L^2}$, the Sobolev imbedding $\dot{H}^1(\Bbb R^3)\hookrightarrow L^6(\Bbb R^3)$, the Gagliardo-Nirenberg inequality in $\Bbb R^3$, $$ \|f\|_{L^3}\leq C \|f\|_{L^2} ^{\frac12}\|\nabla f\|_{L^2}^{\frac12},$$ and Young's inequality of the form $ab\leq a^p/p+b^q/q,$ $1/p+1/q=1$. Absorbing the term $\|\nabla \O \|_{L^2}^2$ into the left-hand side, we have from (\ref{3.6}) \begin{equation}\label{3.7} \frac{d}{ds} \|\O\|_{L^2}^2\leq -\frac{\gamma -C_0}{2} \|\O\|_{L^2}^6. \end{equation} Solving the differential inequality (\ref{3.7}), we have \begin{equation}\label{3.8} \|\O (s)\|_{L^2}\leq \frac{\|\O_0 \|_{L^2} }{\left[ 1+(\gamma-C_0) s \|\O_0\|_{L^2}^4\right]^{\frac14}}. \end{equation} Transforming back to the original variables and functions, using the relations \begin{eqnarray*} s&=&\int_0 ^t \exp\left[ \gamma \int_0 ^\tau \|\o (\sigma )\|_{L^2} ^4 d\sigma\right] d\tau,\\ \|\o (t)\|_{L^2}&=& \|\O (s)\|_{L^2} \exp\left[ \frac{\gamma}{4} \int_0 ^t \|\o (\tau)\|_{L^2}^4 d\tau\right], \end{eqnarray*} we obtain (\ref{3.2}). 
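As an independent check, not part of the proof, one can integrate the equality case of (\ref{3.7}), $E'(s)=-\frac{\gamma-C_0}{2}E(s)^3$ with $E=\|\O\|_{L^2}^2$, and compare against the closed form behind (\ref{3.8}); the constants below are arbitrary illustrative values, not quantities from the paper:

```python
# RK4 integration of E'(s) = -c*E^3, the equality case of (3.7) with
# E = ||Omega||_{L^2}^2, compared against the closed form behind (3.8).
# gamma, C0 and E0 are arbitrary illustrative constants.
def rk4(f, y0, s_end, n):
    h, y, s = s_end / n, y0, 0.0
    for _ in range(n):
        k1 = f(s, y)
        k2 = f(s + h / 2, y + h / 2 * k1)
        k3 = f(s + h / 2, y + h / 2 * k2)
        k4 = f(s + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return y

gamma, C0, E0, s1 = 4.0, 1.5, 2.0, 1.0
c = (gamma - C0) / 2.0
E_num = rk4(lambda s, E: -c * E ** 3, E0, s1, 2000)
# Separating variables gives E(s) = E0 / sqrt(1 + (gamma - C0) * s * E0^2),
# whose fourth root in ||Omega|| reproduces (3.8).
E_exact = E0 / (1.0 + (gamma - C0) * s1 * E0 ** 2) ** 0.5
assert abs(E_num - E_exact) < 1e-6
```

The numerical and closed-form values agree, confirming the passage from (\ref{3.7}) to (\ref{3.8}).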
Next, we observe that (\ref{3.2}) can be written as $$\|\o (t)\|_{L^2}^4\leq \frac{1}{(\gamma-C_0 )} \frac{d}{dt} \log\left\{ 1+ (\gamma-C_0) \|\o_0 \|_{L^2}^4\int_0 ^t\exp\left[ \gamma\int_0 ^\tau\|\o (\sigma)\|_{L^2}^4d\sigma \right]d\tau\right\}, $$ which, after integration over $[0,t]$, leads to \begin{eqnarray}\label{cla1} \int_0 ^t \|\o (\tau )\|_{L^2}^4 d\tau \leq \frac{1}{(\gamma-C_0 )}\log \left\{ 1+ (\gamma-C_0) \|\o_0 \|_{L^2}^4\int_0 ^t\exp\left[ \gamma\int_0 ^\tau\|\o (\sigma)\|_{L^2}^4d\sigma \right]d\tau\right\}\nonumber \\ \end{eqnarray} for all $\gamma >C_0$. Setting $$y(t):=1+ (\gamma-C_0) \|\o_0 \|_{L^2}^4\int_0 ^t\exp\left[ \gamma\int_0 ^\tau\|\o (\sigma)\|_{L^2}^4d\sigma \right]d\tau, $$ we find that (\ref{cla1}) can be written in the form of a differential inequality, $$ y'(t)\leq (\gamma-C_0 )\|\o_0\|_{L^2}^4 \,y(t)^{\frac{\gamma}{\gamma-C_0}}, $$ which can be solved to provide us with (\ref{3.2a}). $\square$
\section{Introduction} \label{} Heavy quarks (charm and beauty) are mostly produced in hard partonic scattering processes in the early stages of the collisions. Because of their large mass the production cross section can be calculated using perturbative Quantum Chromodynamics (pQCD) down to $p_{\mathrm{T}}=0$. Therefore, they are an ideal probe of the properties of the Quark-Gluon Plasma (QGP) created in heavy-ion collisions. Measurements of heavy-flavour tagged jets give direct access to the parton kinematics and can provide information on heavy-quark energy loss in the QGP, in particular on how the radiated energy is dissipated in the medium. Studies of angular correlations of heavy-flavour particles with charged particles allow us to characterise the heavy-quark fragmentation process and its possible modification in a hot nuclear-matter environment. Measurements in pp collisions provide constraints on pQCD-based models and are necessary as a reference for the interpretation of heavy-ion collision results. Studies in p--Pb collisions can give insight into how heavy-quark production and hadronisation into jets are affected by cold nuclear-matter effects. \section{Results} \subsection{$\mathrm{D}$-meson and $\Lambda_c^{+}$-baryon tagged jets} Jets are reconstructed using the anti-$k_\mathrm{T}$ \cite{Cacciari:2011ma} algorithm and are tagged as $\mathrm{D^0}$-meson or $\mathrm{\Lambda_c^{+}}$-baryon jets if they contain a fully reconstructed $\mathrm{D^0}$ meson or $\mathrm{\Lambda_c^{+}}$ baryon. The studied decay channels are $\mathrm{D^0}\rightarrow \mathrm{K^-}\mathrm{\pi^+}$ (B.R. 3.89\%) and $\mathrm{\Lambda_c^{+}}\rightarrow \mathrm{p}{\rm K}_{\rm S}^{0}$ (B.R. 1.59\%) with corresponding charge conjugates. The sideband method is used for subtracting the combinatorial background and extracting the heavy-flavour-jet raw signals. 
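The sideband procedure can be illustrated with a toy calculation: the combinatorial background under the invariant-mass peak is estimated from the sideband yields, scaled by the ratio of the fitted background integrals in the peak and sideband windows, and subtracted from the peak-region yield. All numbers below are invented and bear no relation to the actual analysis:

```python
# Toy sideband subtraction; every number is invented for illustration only.
def sideband_subtract(n_peak, n_sideband, bkg_area_ratio):
    """Raw signal = peak-window yield minus the scaled sideband yield."""
    return n_peak - bkg_area_ratio * n_sideband

raw_signal = sideband_subtract(
    n_peak=1200,         # candidates in the signal (peak) mass window
    n_sideband=900,      # candidates in the two sideband windows combined
    bkg_area_ratio=0.5,  # fitted-background integral: peak window / sidebands
)
assert raw_signal == 750  # 1200 - 0.5 * 900 candidates attributed to signal
```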
The feed-down contribution from beauty hadron decays is estimated and subtracted using a simulation performed with POWHEG+PYTHIA6 \cite{Frixione:2007vw,Sjostrand:2006za}, as detailed in \cite{Acharya:2019zup}. A Bayesian unfolding procedure was performed using the RooUnfold package \cite{Adye:2011gm} to correct for detector effects affecting the reconstructed jet momentum. The parallel jet momentum fraction, $z^{ch}_{\parallel}=\frac{\vec{p}_\mathrm{D}\cdot \vec{p}_{\mathrm{ch. jet}}}{\vec{p}_\mathrm{ch. jet}\cdot \vec{p}_\mathrm{ch. jet}}$, is related to the emission angle of the $\mathrm{D^0}$ meson with respect to the jet axis. The $z^{ch}_{\parallel}$ probability density of $\mathrm{D^0}$-tagged jets with $5<p_{\mathrm{T,jet}}<7\ \mathrm{GeV/}c$ and $15<p_{\mathrm{T,jet}}<50\ \mathrm{GeV/}c$ is reported in Figs. \ref{fig:D} and \ref{fig:D2} for pp collisions at $\sqrt{s}=5.02\ \mathrm{TeV}$ and $\sqrt{s}=13\ \mathrm{TeV}$, respectively. The data are well described by POWHEG+PYTHIA6 at large $p_{\mathrm{T,jet}}$, whilst at lower momentum the prediction shows a harder fragmentation than the data. In Fig. \ref{bje} (a) the $z^{ch}_{\parallel}$ probability density of $\mathrm{\Lambda^+_c}$-tagged jets is compared to expectations from POWHEG+PYTHIA6 \cite{Frixione:2007vw,Sjostrand:2006za}, PYTHIA8 \cite{Sjostrand:2007gs}, and PYTHIA8 with string formation beyond leading-colour approximation \cite{Christiansen:2015yqa}. The latter model predicts a softer $z^{ch}_{\parallel}$ distribution than the others, in better agreement with the data. \begin{figure*}[t!] 
\centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=1.9in]{2019-10-29-n4_0_1} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=1.9in]{2019-10-29-n4_0_4} \caption{} \end{subfigure} \caption{Probability density distribution of the jet momentum fraction, $z^{ch}_{\parallel}$, carried by $\mathrm{D^0}$ mesons measured in pp collisions at $\sqrt{s}=5.02\ \mathrm{TeV}$ for (a) $5<p_{\mathrm{T,jet}}<7\ \mathrm{GeV/}c$ and (b) $15<p_{\mathrm{T,jet}}<50\ \mathrm{GeV/}c$, compared to POWHEG+PYTHIA6 predictions \cite{Frixione:2007vw,Sjostrand:2006za}.} \label{fig:D} \end{figure*} \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=1.80in]{2019-10-28-JetZSpectraPDF_final_2} \caption{} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=1.80in]{2019-10-28-JetZSpectraPDF_final_5} \caption{} \end{subfigure} \caption{Probability density distribution of the jet momentum fraction, $z^{ch}_{\parallel}$, carried by $\mathrm{D^0}$ mesons measured in pp collisions at $\sqrt{s}=13\ \mathrm{TeV}$ for (a) $5<p_{\mathrm{T,jet}}<7\ \mathrm{GeV/}c$ and (b) $15<p_{\mathrm{T,jet}}<50\ \mathrm{GeV/}c$, compared to POWHEG+PYTHIA6 predictions \cite{Frixione:2007vw,Sjostrand:2006za}.} \label{fig:D2} \end{figure*} \subsection{b-jets} Two methods are used to identify b-jets, both exploiting the large distance between the b-hadron decay point (secondary vertex) and the collision point (primary vertex). In the first, used in the analysis of pp collisions, tracks inside jets are sorted by their impact parameter to the primary vertex and the jet is tagged as a b-jet if the impact parameter of the $\mathrm{2^{nd}}$ most displaced track exceeds a given threshold. In p--Pb collisions, 3-prong vertices are reconstructed with the jet tracks and b-jets are identified by the presence of a vertex compatible with a displaced decay topology. 
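A minimal sketch of the track-counting criterion used in pp collisions is given below; the threshold and the track impact-parameter values are invented for illustration and are not the analysis working points:

```python
# Illustrative track-counting b-tagger: rank the tracks of a jet by their
# displacement and require the 2nd most displaced one to pass a threshold.
# All numbers are made up; they are not the analysis cuts.
def is_b_tagged(impact_parameters, threshold=3.0, nth=2):
    ranked = sorted(impact_parameters, reverse=True)
    return len(ranked) >= nth and ranked[nth - 1] > threshold

assert is_b_tagged([5.1, 4.2, 0.3])       # two well-displaced tracks
assert not is_b_tagged([6.0, 0.5, 0.2])   # only one displaced track
assert not is_b_tagged([2.0])             # too few tracks
```

Requiring the 2nd (rather than 1st) most displaced track suppresses jets where a single mismeasured track fakes a displaced topology.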
The $p_{\mathrm{T}}$-differential cross section of b-jets is shown for pp collisions at $\sqrt{s}=5.02\ \mathrm{TeV}$ in Fig. \ref{bje} (b) and for p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$ in Fig. \ref{fig:L} (a). The measurements are well described in both collision systems by POWHEG+PYTHIA predictions, which include nuclear PDFs \cite{Eskola:2009uj} for the p--Pb case. The nuclear modification factor of b-jets, defined as the ratio of cross sections in p--Pb and pp collisions, the latter scaled by the number of nucleons in the Pb nucleus, is shown in Fig. \ref{fig:L} (b). The b-jet production appears not to be affected by cold nuclear matter effects within current uncertainties. \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=1.8in]{2019-10-31-finalwsys_wmodels_pt_jet_7} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=1.8in]{2019-10-31-bjet_pp_Second_hvq} \caption{} \end{subfigure} \caption{(a) Probability density distribution of the jet momentum fraction, $z^{ch}_{\parallel}$, carried by the $\mathrm{\Lambda_c^+}$ baryons measured in pp collisions at $\sqrt{s}=13\ \mathrm{TeV}$ for $7<p_{\mathrm{T,jet}}<15\ \mathrm{GeV/}c$, compared to POWHEG+PYTHIA6 \cite{Frixione:2007vw,Sjostrand:2006za} and PYTHIA8 \cite{Sjostrand:2007gs,Christiansen:2015yqa} predictions. (b) Comparison of the $p_{\mathrm{T}}$-differential cross section of b-jets in pp collisions at $\sqrt{s}=5.02\ \mathrm{TeV}$ with expectations from POWHEG+PYTHIA8 \cite{Frixione:2007vw,Sjostrand:2007gs}.} \label{bje} \end{figure*} \begin{figure*}[t!] 
\centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=1.8in]{2020-02-24-xsec_ratio_bjets_1000} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=1.82in]{2019-10-31-SV_bjet_RpPb_Second} \caption{} \end{subfigure} \caption{(a) The b-jet $p_{\mathrm{T}}$-differential cross section in p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$, compared to POWHEG+PYTHIA6 \cite{Frixione:2007vw,Sjostrand:2006za} predictions with EPS09NLO nuclear PDF \cite{Eskola:2009uj}. (b) Nuclear modification factor, $R_{\mathrm{pPb}}$, of b-jets at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$, compared to POWHEG+PYTHIA8 \cite{Frixione:2007vw,Sjostrand:2007gs} predictions with EPPS16 nuclear PDF \cite{Eskola:2016oht}.} \label{fig:L} \end{figure*} \subsection{Heavy-Flavour-decay-electron tagged jets} In this analysis, heavy-flavour jets are tagged by requiring an electron originating from a heavy-flavour hadron decay among the jet constituents. Previous measurements of heavy-flavour decay electrons in small systems showed a positive elliptic flow $v_2$ \cite{Acharya:2018dxy}, which could be induced by final-state effects. These effects may also affect the jet $p_{\mathrm{T}}$ spectrum, introducing modifications possibly dependent on the jet resolution parameter $R$. The nuclear modification factors of heavy-flavour decay electron jets, shown in Fig. \ref{hfe} (a) for $R=0.3$, $R=0.4$ and $R=0.6$, do not exhibit deviations from unity. The ratios of the $p_\mathrm{T}$-differential cross sections for $R=0.3$ and $R=0.6$ in pp and p--Pb collisions are consistent with each other within uncertainties. The data do not support the presence of large final-state effects on heavy-flavour decay electron jets in p--Pb collisions. \begin{figure*}[t!] 
\centering \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[height=1.8in]{2019-06-07-Pic_HFeJetRpPbAllR} \caption{} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[height=1.8in]{2019-06-07-Pic_HFjetRdep} \caption{} \end{subfigure} \caption{(a) Nuclear modification factor, $R_{\mathrm{pPb}}$, of heavy-flavour electron jets at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$ for jet resolution parameters $R=0.3$, $0.4$, $0.6$. (b) Ratio of cross sections of jets containing electrons from heavy-flavour hadron decays with resolution parameters $R=0.3$ and $R=0.6$ for pp and p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$.} \label{hfe} \end{figure*} \subsection{D-meson-hadron correlation} The angular correlations of $\mathrm{D^0}$, $\mathrm{D^+}$ and $\mathrm{D^{*+}}$ mesons with charged particles were studied. The correlation functions and the near- and away-side peak properties are found to be consistent in pp and p--Pb collisions, showing no modifications due to nuclear effects within uncertainties \cite{Acharya:2019icl}. \section{Summary} New measurements of the parallel jet momentum fraction of $\mathrm{D^0}$-tagged jets in pp collisions at $\sqrt{s}=13\ \mathrm{TeV}$ and $\sqrt{s}=5.02\ \mathrm{TeV}$ were presented. They hint at a softer fragmentation at low $p_\mathrm{T,jet}$ than expected from POWHEG+PY\-THIA. A similar trend is deduced from the first measurement at the LHC of $\mathrm{\Lambda_c^+}$-tagged jets. As the $R_\mathrm{pPb}$ of b-jets in p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$ indicates, the production of b-jets does not appear to be affected by cold nuclear matter effects within current uncertainties. Furthermore, the $R_\mathrm{pPb}$ of heavy-flavour decay electron tagged jets shows no dependence on the jet resolution parameter $R$. 
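For reference, the nuclear modification factor quoted throughout is the p--Pb cross section divided by the pp cross section scaled by the Pb mass number $A=208$; a toy computation with invented per-bin values:

```python
# Toy computation of R_pPb = (dsigma/dpT)_pPb / (A * (dsigma/dpT)_pp),
# with A = 208 for lead. All cross-section values are invented.
A = 208
dsigma_pPb = [4.1e3, 1.2e3, 3.5e2]   # per-pT-bin cross sections (arb. units)
dsigma_pp  = [2.0e1, 5.8e0, 1.7e0]
r_pPb = [num / (A * den) for num, den in zip(dsigma_pPb, dsigma_pp)]
# In this toy example every bin is consistent with unity (no nuclear effects).
assert all(0.9 < r < 1.1 for r in r_pPb)
```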
\bibliographystyle{elsarticle-num} \section{Introduction} \label{} Heavy quarks (charm and beauty) are mostly produced in hard partonic scattering processes in the early stages of the collisions. Because of their large mass the production cross section can be calculated using perturbative Quantum Chromodynamics (pQCD) down to $p_{\mathrm{T}}=0$. Therefore, they are an ideal probe of the properties of Quark-Gluon Plasma (QGP) created in heavy-ion collisions. Measurements of heavy-flavour tagged jets give direct access to the parton kinematics and can provide information on heavy-quark energy loss in the QGP, in particular on how the radiated energy is dissipated in the medium. Studies of angular correlations of heavy-flavour particles with charged particles allow us to characterise the heavy-quark fragmentation process and its possible modification in a hot nuclear-matter environment. Measurements in pp collisions provide constraints on pQCD-based models and are necessary as a reference for the interpretation of heavy-ion collision results. Studies in p--Pb collisions can give insight on how heavy-quark production and hadronisation into jets are affected by cold nuclear-matter effects. \section{Results} \subsection{$\mathrm{D}$-meson and $\Lambda_c^{+}$-baryon tagged jets} Jets are reconstructed using the anti-$k_\mathrm{T}$ \cite{Cacciari:2011ma} algorithm and are tagged as $\mathrm{D^0}$-meson or $\mathrm{\Lambda_c^{+}}$-baryon jets if they contain a fully reconstructed $\mathrm{D^0}$ meson or $\mathrm{\Lambda_c^{+}}$ baryon. The studied decay chanels are $\mathrm{D^0}\rightarrow \mathrm{K^-}\mathrm{\pi^+}$ (B.R. 3.89\%) and $\mathrm{\Lambda_c^{+}}\rightarrow \mathrm{p}{\rm K}_{\rm S}^{0}$ (B.R. 1.59\%) with corresponding charge conjugates. The sideband method is used for subtracting the combinatorial background and extracting the heavy-flavour-jet raw signals. 
The feed-down contribution from beauty hadron decays is estimated and subtracted using a simulation performed with POWHEG+PYTHIA6 \cite{Frixione:2007vw,Sjostrand:2006za}, as detailed in \cite{Acharya:2019zup}. A Bayesian unfolding procedure was performed using the RooUnfold package \cite{Adye:2011gm} to correct for detector effects affecting the reconstructed jet momentum. The parallel jet momentum fraction, $z^{ch}_{\parallel}=\frac{\vec{p}_\mathrm{D}\cdot \vec{p}_{\mathrm{ch. jet}}}{\vec{p}_\mathrm{ch. jet}\cdot \vec{p}_\mathrm{ch. jet}}$, is proportional to the emitted angle of the $\mathrm{D^0}$ meson with respect to the jet axis. The $z^{ch}_{\parallel}$ probability density of $\mathrm{D^0}$-tagged jets with $5<p_{\mathrm{T,jet}}<7\ \mathrm{GeV/}c$ and $15<p_{\mathrm{T,jet}}<50\ \mathrm{GeV/}c$ is reported in Fig. \ref{fig:D} and \ref{fig:D2} for pp collisions at $\sqrt{s}=5.02\ \mathrm{TeV}$ and $\sqrt{s}=13\ \mathrm{TeV}$, respectively. The data are well described by POWHEG+PYTHIA6 at large $p_{\mathrm{T,jet}}$, whilst at lower momentum the prediction shows a harder fragmentation than the data. In Fig. \ref{bje} (a) the $z^{ch}_{\parallel}$ probability density of $\mathrm{\Lambda^+_c}$-tagged jets is compared to expectations from POWHEG+PYTHIA6 \cite{Frixione:2007vw,Sjostrand:2006za}, PYTHIA8 \cite{Sjostrand:2007gs}, and PYTHIA8 with string formation beyond leading-colour approximation \cite{Christiansen:2015yqa}. The latter model expects a softer $z^{ch}_{\parallel}$ distribution than the others, in better agreement with the data. \begin{figure*}[t!] 
\centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=1.9in]{2019-10-29-n4_0_1} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=1.9in]{2019-10-29-n4_0_4} \caption{} \end{subfigure} \caption{Probability density distribution of the jet momentum fraction, $z^{ch}_{\parallel}$, carried by $\mathrm{D^0}$ mesons measured in pp collisions at $\sqrt{s}=5.02\ \mathrm{TeV}$ for (a) $5<p_{\mathrm{T,jet}}<7\ \mathrm{GeV/}c$ and (b) $15<p_{\mathrm{T,jet}}<50\ \mathrm{GeV/}c$, compared to POWHEG+PYTHIA6 predictions \cite{Frixione:2007vw,Sjostrand:2006za}.} \label{fig:D} \end{figure*} \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=1.80in]{2019-10-28-JetZSpectraPDF_final_2} \caption{} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=1.80in]{2019-10-28-JetZSpectraPDF_final_5} \caption{} \end{subfigure} \caption{Probability density distribution of the jet momentum fraction, $z^{ch}_{\parallel}$, carried by $\mathrm{D^0}$ mesons measured in pp collisions at $\sqrt{s}=13\ \mathrm{TeV}$ for (a) $5<p_{\mathrm{T,jet}}<7\ \mathrm{GeV/}c$ and (b) $15<p_{\mathrm{T,jet}}<50\ \mathrm{GeV/}c$, compared to POWHEG+PYTHIA6 predictions \cite{Frixione:2007vw,Sjostrand:2006za}.} \label{fig:D2} \end{figure*} \subsection{b-jets} Two methods are used to identify b-jets, both exploiting the large distance between the b-hadron decay point (secondary vertex) and the collision point (primary vertex). In the first, used in the analysis of pp collisions, tracks inside jets are sorted by their impact parameter to the primary vertex and the jet is tagged as a b-jet if the $\mathrm{2^{nd}}$ most displaced track exceeds a given threshold. In p--Pb collisions 3-prong vertices are reconstructed with the jet tracks and b-jets are identified by the presence of a vertex compatible with a displaced decay topology. 
The $p_{\mathrm{T}}$-differential cross section of b-jets is shown in pp collisions at $\sqrt{s}=5.02\ \mathrm{TeV}$ in Fig. \ref{bje} (b) and in p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$ in Fig. \ref{fig:L} (a). The measurements are well described in both collision systems by POWHEG+PYTHIA predictions that include the usage of nuclear PDF \cite{Eskola:2009uj} for the p-Pb case. The nuclear modification factor of b-jets, defined as the ratio of cross sections in p--Pb and pp collisions, the latter scaled by lead nucleon number, is shown in Fig. \ref{fig:L} (b). The b-jet production appears not to be affected by cold nuclear matter effects within current uncertainties. \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=1.8in]{2019-10-31-finalwsys_wmodels_pt_jet_7} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=1.8in]{2019-10-31-bjet_pp_Second_hvq} \caption{} \end{subfigure} \caption{(a) Probability density distribution of the jet momentum fraction, $z^{ch}_{\parallel}$, carried by the $\mathrm{\Lambda_c^+}$ baryons measured in pp collisions at $\sqrt{s}=13\ \mathrm{TeV}$ for $7<p_{\mathrm{T,jet}}<15\ \mathrm{GeV/}c$, compared to POWHEG+PYTHIA6 \cite{Frixione:2007vw,Sjostrand:2006za} and PYTHIA8 \cite{Sjostrand:2007gs,Christiansen:2015yqa} predictions. (b) Comparison of the $p_{\mathrm{T}}$-differential cross section of b-jets in pp collisions at $\sqrt{s}=5.02\ \mathrm{TeV}$ with expectations from POWHEG+PYTHIA8 \cite{Frixione:2007vw,Sjostrand:2007gs}.} \label{bje} \end{figure*} \begin{figure*}[t!] 
\centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=1.8in]{2020-02-24-xsec_ratio_bjets_1000} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=1.82in]{2019-10-31-SV_bjet_RpPb_Second} \caption{} \end{subfigure} \caption{(a) Comparison of the b-jet $p_{\mathrm{T}}$-differential cross section in p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$, compared to POWHEG+PYTHIA6 \cite{Frixione:2007vw,Sjostrand:2006za} predictions with EPS09NLO nuclear PDF \cite{Eskola:2009uj}. (b) Nuclear modification factor, $R_{\mathrm{pPb}}$, of b-jets at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$, compared to POWHEG+PYTHIA8 \cite{Frixione:2007vw,Sjostrand:2007gs} predictions with EPSS16 nuclear PDF \cite{Eskola:2016oht}.} \label{fig:L} \end{figure*} \subsection{Heavy-Flavour-decay-electron tagged jets} In this analysis, heavy-flavour jets are tagged by requiring an electron originating from a heavy-flavour hadron decay among the jet constituents. Previous measurements of heavy-flavour decay electrons in small systems showed a positive elliptic flow $v_2$ \cite{Acharya:2018dxy}, that could be induced by final-state effects. These effects may affect also the jet $p_{\mathrm{T}}$ spectrum, introducing modifications possibly dependent on the jet resolution parameter $R$. The nuclear modification factors of heavy-flavour decay electron jets, shown in Fig. \ref{hfe} (a) for $R=0.3$, $R=0.4$ and $R=0.6$ do not exhibit deviations from unity. The ratio of the $p_\mathrm{T}$-differential cross sections for $R=0.3$ and $R=0.6$ for pp and p--Pb collisions are consistent within uncertainties. The data do not support the presence of large final-state effects on heavy-flavour decay electron jets in p--Pb collisions. \begin{figure*}[t!] 
\centering \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[height=1.8in]{2019-06-07-Pic_HFeJetRpPbAllR} \caption{} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[height=1.8in]{2019-06-07-Pic_HFjetRdep} \caption{} \end{subfigure} \caption{(a) Nuclear modification factor, $R_{\mathrm{pPb}}$, of heavy-flavour electron jets at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$ for jet resolution parameters $R=0.3$, $0.4$, $0.6$. (b) Ratio of cross sections of jets containing electrons from heavy-flavour hadron decays with resolution parameters $R=0.3$ and $R=0.6$ for pp and p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$.} \label{hfe} \end{figure*} \subsection{D-meson-hadron correlation} The angular correlations of $\mathrm{D^0}$, $\mathrm{D^+}$ and $\mathrm{D^{*+}}$ mesons with charged particles were studied. The correlation functions and the near- and away-side peak properties are found to be consistent in pp and p--Pb collisions, showing no modifications due to nuclear effects within uncertainties \cite{Acharya:2019icl}. \section{Summary} New measurements of the parallel jet momentum fraction of $\mathrm{D^0}$-tagged jets in pp collisions at $\sqrt{s}=13\ \mathrm{TeV}$ and $\sqrt{s}=5.02\ \mathrm{TeV}$ were presented. They hint at a softer fragmentation at low $p_\mathrm{T,jet}$ than expected from POWHEG+PY\-THIA. A similar trend is deduced from the first measurement at the LHC of $\mathrm{\Lambda_c^+}$-tagged jets. As the $R_\mathrm{pPb}$ of b-jets in p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$ indicates, the production of b-jets does not appear to be affected by cold nuclear matter effects within current uncertainties. Furthermore, the $R_\mathrm{pPb}$ of heavy-flavour decay electron tagged jets shows no dependence on the jet resolution parameter $R$. \bibliographystyle{elsarticle-num}
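The nuclear modification factor used throughout is, per $p_{\mathrm{T}}$ bin, the ratio of the p--Pb and pp cross sections with the latter scaled by the mass number. A minimal sketch of that arithmetic, using purely illustrative power-law spectra (not the measured data) and naively combining uncorrelated relative uncertainties in quadrature:

```python
import numpy as np

A_PB = 208  # mass number of the lead nucleus

def r_ppb(xsec_ppb, xsec_pp, rel_err_ppb=0.0, rel_err_pp=0.0):
    """Per-bin nuclear modification factor:
    R_pPb = (dsigma_pPb/dpT) / (A * dsigma_pp/dpT).
    Relative uncertainties are combined in quadrature, treating the two
    measurements as uncorrelated (an approximation)."""
    xsec_ppb = np.asarray(xsec_ppb, dtype=float)
    xsec_pp = np.asarray(xsec_pp, dtype=float)
    r = xsec_ppb / (A_PB * xsec_pp)
    return r, r * np.hypot(rel_err_ppb, rel_err_pp)

# Toy power-law spectra (arbitrary units): the p--Pb spectrum is taken to
# be exactly A times the pp one (no nuclear effects), so R_pPb must be 1.
pt = np.array([10.0, 20.0, 30.0, 40.0])
xsec_pp = 1.0e3 * pt**-5
xsec_ppb = A_PB * xsec_pp
r, r_err = r_ppb(xsec_ppb, xsec_pp, rel_err_ppb=0.1, rel_err_pp=0.1)
```

With real spectra one would additionally propagate correlated systematic uncertainties separately; the toy above only illustrates the central-value arithmetic.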
\section*{List of Supplementary Figures} \vspace{-0.25cm} \begin{enumerate} \item Fig S1: The final expression levels $B$, $W$ and $R$ of the patterning genes, as well as the concentrations of the Notch ligand, $L$, and NICD, $N^b$, for each of the coupling types. \item Fig S2: Spatio-temporal evolution of the expression levels of the patterning genes B, W, and R at different cells in a 1D array comprising $30$ cells. \item Fig S3: The effect of differential expression of Notch on cell fates. \item Fig S4: ``Sloppy parameter sensitivity'' of the flags to inter-cellular coupling. \end{enumerate} \vspace{-0.2cm} \section*{Model parameter values} \vspace{-0.5cm} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\hline parameter& $S_M(0)$&$\lambda_M$ &$\alpha$&$\beta$&$\gamma$& $\beta_L$&$\beta_{N^b}$& $K$ & $K_N$ & $k_1$ & $k_2$ & $k_3$ & $\tau_L$ & $\tau_{N^b}$ &$h_1$&$h_2$&$h_3$&$h_4$&$h_5$\\\hline value&100.0&0.3&4.0&6.3&5.0&5.0&5.0&1.0&1.0&1.0&1.0&1.0&1.0&1.0&6&2&5&1&1\\\hline \end{tabular} \caption{The values for the model parameters used for all simulation results reported (unless specified otherwise).} \label{tableS1} \end{table} \vspace{-0.5cm} \section*{Temporal evolution and parameter dependence of the patterns representing the sequential arrangement of distinct cell fates} The fate of each cell in the linear array we consider in our simulation is determined by the expression levels of the three patterning genes B, W and R at $t_{max} = 100$. Gene B is chosen to be the pre-patterning gene, such that its expression level is high ($=5$ arb. units) initially, while the initial expression levels of the genes W and R are $0$. Fig.~\ref{figs1} shows, for each of the coupling types considered in the main text, the time evolution of the expression levels for B, W and R with a representative set of values chosen for the coupling parameters. 
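The fate-assignment rule used throughout (each cell adopts the identity of its maximally expressed patterning gene, and region lengths are read off from the resulting fate array) can be made concrete; the expression profiles below are hypothetical placeholders, not output of the actual simulation:

```python
import numpy as np

def cell_fates(B, W, R):
    """Assign a fate to each cell from the final expression levels of the
    three patterning genes: the maximally expressed gene determines the
    fate, encoded as B -> 0 (blue), W -> 1 (white), R -> 2 (red)."""
    return np.argmax(np.vstack([B, W, R]), axis=0)

def region_lengths(fates):
    """Lengths (l_B, l_W, l_R) of the blue, white and red regions."""
    return tuple(int(np.sum(fates == k)) for k in range(3))

# Hypothetical final expression profiles across a 30-cell array: the
# pre-patterning gene B decays away from one end, R rises toward the
# other end, and W sits at a uniform intermediate level.
n = 30
B = np.linspace(5.0, 0.0, n)
R = np.linspace(0.0, 5.0, n)
W = np.full(n, 3.0)
fates = cell_fates(B, W, R)
```

For these profiles the array splits into contiguous blue, white and red blocks, mirroring the chromatic order of the uncoupled flag.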
For coupling types I and II, B continues to be the maximally expressed gene in cells that have relatively low exposure to the morphogen, while in cells subject to intermediate and high morphogen concentrations the W and R genes (respectively) are the maximally expressed ones. In the case of types III and IV, more than two fate boundaries can emerge and the chromatic order seen in the uncoupled system is not conserved. This implies that the coupling types III and IV give rise to a larger diversity of flags than types I and II. Fig.~\ref{figs2} shows the final expression levels of the three patterning genes B, W, R in all the cells of the linear array, along with the concentrations of the Notch ligand, $L$, and the Notch intra-cellular domain NICD, $N^b$, for four representative sets of parameter values for each of the four coupling types. We have considered the maximal production rate of NICD, $\beta^{N^b}$, to be $5$ for all the results described in the main text. To understand the implications of the over- or under-expression of Notch for the results of our model, we have also performed simulations considering a wide range of values of $\beta^{N^b}$. We consider only coupling types I and II, which conserve the number of boundaries and the chromatic order observed in the uncoupled system, ensuring that pathological patterns of cell fates do not arise for $\beta^{N^b}=5$. Fig.~\ref{figs3} shows the variation in length of the chromatic regions as a function of $\beta^{N^b}$ for four different sets of coupling parameter values for each of the two coupling types. It is evident that, depending on the values of the coupling parameters chosen, the higher expression of Notch can lead to either an increase or a reduction of the red region, while the width of the blue region remains largely invariant. 
This result suggests that experimental observation of the effect of under- or over-expression of Notch on the length of the regions with different cell fates can provide us with information about the type of coupling and the strength of interactions in different biological systems. \begin{figure}[H] \includegraphics[width = \textwidth]{Figures/FigS5.pdf} \caption{{\bf Spatio-temporal evolution of the expression levels of the patterning genes B, W, and R at different cells in a 1D array comprising $30$ cells.} The resulting cell fates are indicated on the top surface of each panel corresponding to the coupling types I-IV. For each coupling type, a representative parameter set $\Theta$ is used for the simulation. Note that all cells initially exhibit high expression levels for B.} \label{figs1} \end{figure} \begin{figure}[H] \includegraphics[width = \textwidth]{Figures/FigS1.pdf} \caption{{\bf The final expression levels $B$, $W$ and $R$ of the patterning genes, as well as the concentrations of the Notch ligand, $L$, and NICD, $N^b$, for each of the coupling types.} The maximally expressed gene at each cell in the 1D array determines its fate, indicated by the colors blue, white or red. For each coupling type, results obtained using different choices of values for the parameter set $\Theta$ are shown.} \label{figs2} \end{figure} \begin{figure}[H] \includegraphics[width = \textwidth]{Figures/FigS7.pdf} \caption{\textbf{The effect of differential expression of Notch on cell fates.} The variation of the spatial extent of the three chromatic regions (indicated using the colors blue, white and red) with the NICD maximal production rate, $\beta^{N^b}$, for coupling types I and II which preserve the chromatic order and number of boundaries seen in the uncoupled system. For all results shown in the main text, we have chosen $\beta^{N^b} = 5$ (indicated using a broken line). 
The result of under- or over-expression of Notch relative to this value is shown for different choices of values for the parameter set $\Theta$ in the case of each coupling type.} \label{figs3} \end{figure} \section*{Parameter Sensitivity} \subsection*{Variance-based sensitivity analysis} To quantify the contribution of the parameters governing the regulation of the patterning genes by NICD in determining the cell fates, we have used a variance-based sensitivity analysis often referred to as the Sobol method~\cite{Saltelli1995}. This method can be used to investigate the effect of varying any one of the parameters, or a pair of them at a time, or any other higher-order combinations. We consider a system that yields a scalar output as a function of parameters $\theta_j$, \begin{equation} Y = f(\theta_1,\theta_2,\ldots,\theta_k)\,. \end{equation} The first-order sensitivity index, which corresponds to the fraction of the total variation in $Y$ that can be attributed to varying only $\theta_j$, keeping the other parameters fixed, is defined as \begin{equation} S1 (j) = \frac{V_{\theta_j}(E_{\theta_{-j}}(Y|\theta_j))}{V(Y)}\,. \end{equation} Here, $-j$ refers to all other parameters except $j$. For each cell, this sensitivity index is, by definition, bounded within the range $0\leq S1 (j) \leq 1$. Note that the fraction of the total variance in the output variable that can be explained by changing one parameter at a time is given by the sum over all the first-order sensitivity indices, $\sum_{j}{S1 (j)}\leq 1$. As mentioned in the main text, we quantitatively investigate the role of the parameter set $\Theta$ in determining the final state of the cells $i$ ($i=1,\dots,30$), represented by a discrete scalar variable, $F_i \in \{0,1,2\}$. 
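The first-order index $S1(j)$ defined above can be estimated by brute-force nested Monte Carlo: an outer loop over values of $\theta_j$ and an inner average over the remaining parameters. The sketch below applies it to a toy additive function, not the patterning model, for which the exact answer for the first parameter is $S1 = 16/20 = 0.8$:

```python
import numpy as np

def first_order_sobol(f, j, k, n_outer=2000, n_inner=1000, seed=0):
    """Estimate S1(j) = Var_{theta_j}( E[Y | theta_j] ) / Var(Y) for a
    model f acting on rows of k independent U(0,1) parameters, by nested
    Monte Carlo (outer loop over theta_j, inner average over the rest)."""
    rng = np.random.default_rng(seed)
    cond_means = np.empty(n_outer)
    all_y = []
    for i in range(n_outer):
        theta = rng.random((n_inner, k))
        theta[:, j] = rng.random()   # freeze theta_j, vary the others
        y = f(theta)
        cond_means[i] = y.mean()     # inner estimate of E[Y | theta_j]
        all_y.append(y)
    return cond_means.var() / np.concatenate(all_y).var()

# Toy additive model Y = 4*t0 + 2*t1: Var(Y) = 20/12, of which 16/12 is
# explained by t0 alone, so S1(0) = 0.8 exactly.
f = lambda t: 4.0 * t[:, 0] + 2.0 * t[:, 1]
s1 = first_order_sobol(f, j=0, k=2)
```

More efficient pick-and-freeze estimators (e.g., Saltelli sampling) need far fewer model evaluations; the nested form above is simply the definition written out.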
The contribution of each of the parameters $\theta_j$ ($j=1,\ldots,6$) to the observed variation in cell fates is measured by the respective first-order sensitivity indices $S1$, expressed as the variance of $\langle F_i | \theta_j \rangle_{\theta_{k (\neq j)}}$ normalized by $\sigma^2$ ($i=1,\ldots,30$). In the case of coupling types~I and II, we almost always observe two boundaries separating the distinct regions. This allows us to uniquely identify a resultant flag by specifying $l_R$, $l_W$ and $l_B$, which correspond to the lengths of the red, white and blue regions, respectively, and which can take discrete values in the range $[0,30]$. Corresponding to these three scalar variables, we obtain first-order sensitivity indices $S1^{R}$, $S1^{W}$ and $S1^{B}$, using quasi-Monte Carlo methods. \begin{figure}[H] \includegraphics[width = \textwidth]{Figures/FigS6.pdf} \caption{\textbf{``Sloppy parameter sensitivity'' of the flags to inter-cellular coupling.} The concentric pie charts (a: type I, b: type II) show the fractional contribution of the components of the parameter set $\Theta$, represented by colors indicated in the key at the top, to the sensitivity of the system to parameter variation. This is expressed in terms of the spectral characteristics of the Hessian matrix, whose eigenvalues $\lambda$ obtained for a specific parameter set for each coupling type are shown at the bottom. For both coupling types, the two largest eigenvalues are comparable in magnitude while there is a large gap between these and the subsequent eigenvalues. The bar plots (c: type I, d: type II) show the probability distribution [in logarithmic scale] of the exponential function of entropy $S$ for the eigenvector components of the Hessian calculated for an ensemble of $10^3$ randomly chosen parameter sets $\Theta$. Note that, as exp($S$) provides a measure for the number of dominant components in an eigenvector, most eigenvectors are dominated by a single component. 
This is in agreement with the observation that almost all of the concentric shells in the pie charts show the predominance of one color. The aggregated contribution $w$ of each parameter to the eigenvectors (e: type I, f: type II) indicates that, consistent with the variance-based sensitivity analysis reported in the main text, $\theta_2$ and $\theta_3$ are almost exclusively responsible for the observed variation in the flags.} \label{figs4} \end{figure} \subsection*{``Sloppy parameter sensitivity'' analysis} As a supplement to the variance-based sensitivity analysis, we have characterized the sensitivity of the model output to variation of the parameters $\Theta = \{\theta_1, \ldots, \theta_6\}$ using ``sloppy model analysis''~\cite{Brown2003,Waterfall2006,Gutenkunst2007}. This is done by varying each parameter over a relevant range and calculating a Jacobian matrix that captures the variation of output variables of interest. The Jacobian is then used to obtain a Hessian matrix whose spectrum indicates the sensitivity of the system to each of the parameters.\\ The output variables that we use to characterize the sloppiness of the model are the lengths of the blue, white and red regions. These are specified using the 3-tuple $(B,W,R)$, each of which can take integer values between $0$ and $30$, subject to the constraint that $B+W+R=30$. In conventional sloppy analysis of models, a specific set of parameter values is chosen as the reference set $\Theta^{*}$ and the results of variations from this set are then investigated. As there is no such privileged parameter set in our model, we have carried out the analysis using several different $\Theta^{*}$ obtained by randomly choosing values of each of the parameters from their respective ranges.\\ To quantify the sloppiness of our model system, we compare the lengths of the regions $(B^{*},W^{*},R^{*})$ obtained using a given set $\Theta^{*}$ with those obtained using perturbed parameter sets. 
Each perturbed set $\Theta_{i,j}$ is obtained by independently varying the value of the parameter $\theta_j$ in the set $\Theta^{*}$ over the relevant range in small steps of $\Delta\theta$, while keeping values of the other five parameters fixed. Thus, for a given choice of $\Theta^{*}$ and $\Theta_{i,j}$, the residue is \begin{equation} \mathcal{R}_{i,j} = \sqrt{(B_{i,j}-B^*)^2 + (W_{i,j}-W^*)^2 + (R_{i,j}-R^*)^2}\,, \end{equation} where $(B_{i,j},W_{i,j},R_{i,j})$ denotes the lengths of the regions obtained using the parameter set $\Theta_{i,j}$. This is then used to obtain the Jacobian matrix {\bf J} as \begin{equation} J_{i,j} = \frac{\mathcal{R}_{i+1,j} - \mathcal{R}_{i,j}}{\Delta \theta}\,, \end{equation} which comprises $(k-1)$ rows and $6$ columns, where $k$ is the number of perturbed parameter sets considered. This subsequently yields the $6 \times 6$ Hessian matrix \begin{equation} \mathcal{H} = J^{T}J. \end{equation} We calculate the eigenspectrum of $\mathcal{H}$ and normalize the eigenvalues by the largest eigenvalue. The number of ``sloppy'' directions corresponds to the number of eigenvalues whose normalized magnitude is $\ll 1$. The eigenvector corresponding to an eigenvalue whose magnitude is very small represents an axis in the $6$-dimensional parameter space along which any variation has relatively little impact on the output. Our results indicate that most eigenvectors have a single large component. \\ We obtain the Hessian spectra for each of $10^3$ different random choices of $\Theta^*$ (Fig.~\ref{figs4}). Using these, we establish a hierarchy of the six parameters $\theta_j$ in terms of their ``weights'' defined as \begin{equation} w_{\theta_j} = \frac{1}{N_{\Theta^*}} \sum_{\Theta^*}\sum_{k=1}^{6}{\lambda^k \left(v^{k}_{\theta_j}\right)^2}\,, \end{equation} where the first summation is over all $10^3$ parameter sets $\Theta^*$ and the second is over the six components of the spectra of $\mathcal{H}$. 
Here, $\lambda^k$ is the $k$-th eigenvalue of $\mathcal{H}$ and $v^{k}_{\theta_j}$ is the component of the corresponding eigenvector along the parameter direction $\theta_j$.
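The pipeline of this subsection (numerical Jacobian of the residuals, Hessian $\mathcal{H} = J^{T}J$, eigendecomposition, and aggregated per-parameter weights) can be sketched generically. The residual function below is a toy with one stiff and one sloppy direction, not the patterning model, and the weights are computed from eigenvalues normalized by the largest one:

```python
import numpy as np

def hessian_spectrum(residual, theta, d=1e-6):
    """Gauss-Newton Hessian H = J^T J of a residual vector r(theta), with
    J built from central finite differences. Returns the eigenvalues
    normalized by the largest (descending) and the eigenvectors as
    columns in the same order."""
    theta = np.asarray(theta, dtype=float)
    m = np.asarray(residual(theta)).size
    J = np.empty((m, theta.size))
    for j in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += d
        tm[j] -= d
        J[:, j] = (np.asarray(residual(tp)) - np.asarray(residual(tm))) / (2 * d)
    lam, V = np.linalg.eigh(J.T @ J)
    order = np.argsort(lam)[::-1]          # descending eigenvalues
    lam, V = lam[order], V[:, order]
    return lam / lam[0], V

def parameter_weights(lam, V):
    """Aggregate contribution w_j = sum_k lam_k * (v^k_j)^2 of each
    parameter direction to the Hessian spectrum."""
    return (V**2) @ lam

# Toy residual: the combination t0 + t1 is stiff, t0 - t1 is sloppy.
residual = lambda t: np.array([t[0] + t[1], 0.01 * (t[0] - t[1])])
lam, V = hessian_spectrum(residual, [0.5, 0.5])
w = parameter_weights(lam, V)
```

For this symmetric toy the two parameters carry equal weight while the normalized spectrum spans four orders of magnitude, the signature of a sloppy model.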
\section{} Measurements of exoplanet magnetic fields remain elusive, although detections have now been confirmed in a small number of T-dwarfs \citep{route12,route13,kao16}. While radio observations of auroral emission continue to be pursued, the possibility of indirectly measuring planetary magnetic fields via UV and optical observations of bow-shocks around hot planets remains a plausible path forward \citep{vidotto10,llama11,benjaffel,cauley15}. The upcoming Colorado Ultraviolet Transit Experiment (CUTE) aims, in part, to find such signals\footnote{http://lasp.colorado.edu/home/cute/}. Recent work has also suggested that estimates of hot Jupiter magnetic fields must be revised upward to take into account the extra heat being deposited into their interiors \citep{yadav17}. If these field strength estimates are accurate, many known hot Jupiter systems should exhibit pre-transit interactions between the stellar wind and the planetary magnetosphere. To aid in planning observations of signatures due to transiting bow-shocks, we derive an estimate of the contact time of the bow-shock nose with the stellar disk. This will allow telescope resources to be allocated more efficiently when searching for such signatures. The formula is a generalization of Eq. 19 from \citet{vidotto11b}; the final form does not rely on a sky-projected value of $r_\text{m}$. We only consider leading shocks \citep[see][for details]{vidotto11a}. Our result is only approximate: if the density along the bow is sufficient, a portion of the bow away from the nose may cause a transit signal before the nose reaches the stellar disk. For reasonable bow geometries and densities, however, this should only amount to a difference of $\approx \pm 10$ minutes, which we consider negligible. A circular orbit is assumed since there is no closed-form expression for the distance along an arc of an ellipse. 
However, planets with $e \lesssim 0.1$, which encompasses the majority of transiting giant planets, should not show significant deviations from the derived pre-transit time for circular orbits. \autoref{fig:fig1} shows the geometry of the transiting bow-shock. The planet is represented by the small black circle and the bow is shown as a parabola. The nose of the bow is at $\mathbf{s}_\text{m} = (x_\text{m},y_\text{m})$, which is a distance $r_\text{m}$ away from the center of the planet. The bow makes an angle $\theta_\text{m}$ with the tangent to the planet's orbit. This angle is determined by the relative velocity of the planet and the stellar wind \citep[see][]{vidotto11a}. The angle $\gamma = 90^\circ - \theta_\text{m}$ is marked in blue and the angle $\beta$, the angle between the star-planet line and star-bow nose line, in purple. The angle $\phi_{\text{pl}}$, the planet's angular distance from mid-transit, is shown in green. The distances $a_{\text{pl}}$ and $a_\text{m}$ represent the semi-major axis of the planet's orbit and the bow nose orbit, respectively. \begin{figure*}[b!] \centering \includegraphics[scale=.75,clip,trim=20mm 35mm 5mm 45mm,angle=0]{prebow_geom.pdf} \figcaption{Bow-shock geometry. \label{fig:fig1}} \end{figure*} The bow nose transits when $x_\text{m} = -\sqrt{R_*^2 - b^2}$ where $b$ is the planet's transit impact parameter in units of $R_*$ ($b=0$ in \autoref{fig:fig1}). The position $x_\text{m}$ is given by \begin{equation}\label{eq:eq1} x_\text{m} = -a_\text{m} \sin(\phi_{\text{pl}} - \beta) \end{equation} \noindent which, for the transit condition to be met, is \begin{equation}\label{eq:eq2} -\sqrt{R_*^2 - b^2} = -a_\text{m} \sin(\phi_{\text{pl}} - \beta). 
\end{equation} Using the law of cosines we can calculate $a_\text{m}$: \begin{equation}\label{eq:am} a_\text{m} = \sqrt{ a_{\text{pl}}^2 + r_\text{m}^2 - 2 a_{\text{pl}} r_\text{m} \cos\gamma } = \sqrt{ a_{\text{pl}}^2 + r_\text{m}^2 - 2 a_{\text{pl}} r_\text{m} \sin\theta_\text{m} } \end{equation} \noindent where we have replaced $\cos \gamma$ with $\sin \theta_\text{m}$. We can also express $\beta$ in terms of $r_\text{m}$, $a_\text{m}$, and $\theta_\text{m}$: \begin{equation}\label{eq:beta} \beta = \sin^{-1}\left(\frac{r_\text{m}}{a_\text{m}} \cos \theta_\text{m}\right). \end{equation} The planet's angle from mid-transit $\phi_\text{pl}$ can be expressed as the distance along the arc subtended by $\phi_\text{pl}$: \begin{equation}\label{eq:phi} \phi_\text{pl} = \frac{v_\text{orb} \Delta t}{a_\text{pl}} \end{equation} \noindent where $v_\text{orb}$ is the planet's Keplerian orbital velocity and $\Delta t$ is the time from mid-transit for the planet. We can now combine \autoref{eq:eq2}, \autoref{eq:beta}, and \autoref{eq:phi} to solve for $\Delta t$ as a function of the planet and bow parameters: \begin{equation}\label{eq:deltat} \Delta t = \frac{a_\text{pl}}{v_\text{orb}} \left[ \sin^{-1} \left(\frac{\sqrt{R_*^2 - b^2}}{a_\text{m}}\right) + \sin^{-1}\left( \frac{r_\text{m}}{a_\text{m}} \cos \theta_\text{m} \right) \right] \end{equation} \noindent where $\Delta t$ is the beginning of the bow-shock transit relative to the transit midpoint of the planet. We have left \autoref{eq:deltat} in terms of $a_\text{m}$ for clarity, although $r_\text{m}$ is the more physically relevant parameter \citep[see Eq. 9 of][]{llama13}. The value $a_\text{m}$ can be calculated using \autoref{eq:am}. \autoref{eq:deltat} can be used to estimate the expected transit time of the bow-shock nose if approximations have been made for $r_\text{m}$ and $\theta_\text{m}$. 
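As an illustration, the expression for $\Delta t$ can be evaluated numerically. The sketch below uses purely hypothetical system parameters (all lengths in units of $R_*$, velocity in those units per hour); note the prefactor $a_\text{pl}/v_\text{orb}$, which converts the arc angle of \autoref{eq:phi} into a time:

```python
import numpy as np

def bowshock_dt(r_star, b, a_pl, r_m, theta_m, v_orb):
    """Start of the bow-shock nose transit relative to the planet's
    mid-transit, for a circular orbit. All lengths share one unit, b is
    the impact parameter in that same unit, theta_m is in radians, and
    v_orb is in length units per time unit."""
    # Distance of the bow nose from the star (law of cosines)
    a_m = np.sqrt(a_pl**2 + r_m**2 - 2.0 * a_pl * r_m * np.sin(theta_m))
    # Angle from mid-transit at which the nose touches the stellar disk
    phi = (np.arcsin(np.sqrt(r_star**2 - b**2) / a_m)
           + np.arcsin((r_m / a_m) * np.cos(theta_m)))
    return (a_pl / v_orb) * phi

# Hypothetical hot-Jupiter-like system: a_pl = 10 R_*, a standoff
# distance r_m = 0.5 R_*, and a shock angle of 60 degrees.
dt = bowshock_dt(r_star=1.0, b=0.0, a_pl=10.0,
                 r_m=0.5, theta_m=np.radians(60.0), v_orb=1.0)
```

In the limit $r_\text{m} \to 0$ the expression reduces to half the transit duration of the planet's center, which provides a quick sanity check.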
This may allow, for example, a half-night of telescope time to be used to measure the bow-shock transit of a planet when $\Delta t$ is small, rather than a full night. Most transiting hot Jupiter hosts do not have measured magnetic fields \citep{fares13,mengel17}. This is important since the stellar field strength plays a large role in determining $r_\text{m}$. Similarly, the stellar wind speed at hot Jupiter orbital distances is poorly constrained for all stars. Thus \autoref{eq:deltat} should be used to consider the range of plausible bow-shock transit times given the range of reasonable values for the host star's magnetic field and wind parameters. Finally, hot planets orbit through inhomogeneous regions of the stellar wind which may result in $\Delta t$ changing as a function of time as the star's activity level varies \citep{llama13}. This should be taken into account when considering values of $\Delta t$. \bigskip
\section*{Glossary of Terms} \begin{description} \item[\emph{Granular materials}:] Granular materials are collections of discrete, macroscopic particles that interact with each other through contact (rather than long-range) forces. Importantly, these systems are non-equilibrium: the particles are large enough to avoid rearrangement under thermal fluctuations, and they lose energy through frictional and inelastic interactions with neighboring particles. \item[\emph{Particulate materials}:] Like granular materials, particulate materials are collections of discrete, macroscopic elements. However, the elements making up the system may be entities --- such as bubbles, foams, colloids, or suspensions --- that include multiple phases of matter. The term ``particulate material'' is a more general one than ``granular material''. \item[\emph{Packing fraction}:] The fraction of a granular material that consists of the particles. One calculates the packing fraction as the ratio of the total volume of all particles to the volume of the region that they occupy. The packing fraction is also sometimes called the ``packing density'' or ``volume fraction''. \item[\emph{Force chain}:] Force chains are typically described as the subset of inter-particle contacts in a granular material that carry the largest forces in the system. They often form filamentary networks that align preferentially with the principal stress axes under which a material is loaded. \item[\emph{Jamming}:] As certain system parameters change, disordered, particulate materials typically undergo a transition from an underconstrained, liquid-like state to a rigid, solid-like state characterized by the onset of mechanical stability. The transition to/from a jammed state may arise through increases/decreases in quantities like packing fraction or contact number, which can occur due to an applied load. A formal theory of jamming exists for idealized situations (with soft, frictionless, and spherical particles). 
\item[\emph{Isostatic}:] A jammed packing is isostatic when it has exactly the minimum number of contacts that are required for maintaining mechanical stability through force balance and torque balance. One typically examines isostaticity by calculating the mean number of contacts per particle. ``Hyperstatic'' and ``hypostatic'' packings have more and fewer contacts, respectively. \item[\emph{Mono/bi/poly-disperse}:] A particulate material is monodisperse if it is composed of particles of a single species (with the same size, shape, and material properties). Bidisperse materials have particles of two species, and polydisperse materials can either have particles from three or more discrete species or have particles with a continuum of properties. \item[\emph{Structural rigidity theory}:] For a specified structure composed of fixed-length rods connected to one another by hinges, structural rigidity theory studies the conditions under which the associated structural graph is able to resist deformations and support applied loads. \item[\emph{Stress}:] A stress is a force applied to an object's surfaces. (The units measure a force per unit area.) Shear stress arises from the component of the applied force that acts in a direction parallel to the object's cross-section, and normal stress arises from the perpendicular component. \item[\emph{Strain}:] Strain is the fractional (unitless) deformation of a material that arises due to an applied stress. One calculates strain from the relative displacement of particles in a material, excluding rigid-body motion such as translation or rotation. Like stress, strain has both shear and normal components. \item[\emph{Pure shear}:] One can use the term ``pure shear'' to describe either stresses or strains in which an object is elongated along one axis and shortened in the perpendicular direction without inducing a net rotation. 
\item[\emph{Axial compression}:] In axial compression, one applies inward forces to an object in one direction (uniaxial), two directions (biaxial), or all directions (isotropic compression). These forces result in uniaxial strain, biaxial strain, or isotropic strain, respectively. \item[\emph{Cyclic shear/compression}:] These consist of repeated cycles of shear or compression applied to the same system. \item[\emph{Shear band}:] A shear band is a narrow region of a particulate material in which most of the strain is localized, whereas other regions remain largely undeformed. A shear band is also sometimes called a region of ``strain localization''. \item[\emph{Strain softening/hardening}:] As a material is loaded and undergoes deformation, continuing deformation can become either easier (strain softening) or harder (strain hardening). Eventually, after much deformation, the material can reach a critical state in which there are no further changes in the resistance to deformations. \item[\emph{Stress ratio}:] The stress ratio, which is analogous to Coulomb's Law, is the ratio of shear to normal stresses. Frictional failure occurs when the shear force exceeds the product of the normal force and the coefficient of friction. \item[\emph{Photoelasticity/birefringence}:] Photoelasticity is an optical technique for quantifying internal stresses based on the transmission of polarized light through ``birefringent'' materials, which have preferentially fast and slow directions for the propagation of light. \item[\emph{DEM} or \emph{MD simulations}:] The Discrete (or Distinct) Element Method and Molecular Dynamics simulations are related numerical techniques that compute the motions of all particles in a system (such as a granular material). In each method, a computer algorithm treats each particle as an object subject to Newton's laws of motion, where forces consist of body forces (e.g., gravity) and those that arise from interactions with the object's neighbors. 
\end{description} \newpage \section{Introduction} \label{intro} Granular materials comprise a subset of the larger set of particulate matter \citep{Jaeger1996,duran1999sands,Mehta2007,Franklin2015,Andreotti2013,Nagel:2017a}. People engage with such materials --- which include sands, beans, grains, powders such as cornstarch, and more --- often in their daily lives. One can define a \emph{granular material} as a large collection of discrete, macroscopic particles that interact only when in contact. Granular materials are inherently non-equilibrium in two distinct ways, characterized by \emph{(1)} the lack of rearrangement under thermal fluctuations and \emph{(2)} the loss of energy through frictional and inelastic dissipation during contact between grains. Nonetheless, they phenomenologically reproduce equilibrium states of matter, exhibiting characteristics of solids (forming rigid materials), liquids (flowing out of a container), or gases (infrequent contacts between grains), depending on the type and amount of driving. In this review, we focus mainly on granular solids and slow (non-inertial) flows \citep{Mort2015}; these are dense materials in which sustained inter-particle contacts provide the dominant contribution to material properties. The functional properties of granular materials are related in a nontrivial way to the complex manner in which particles interact with one another and to the spatial scales (particle, chain, domain, and bulk) and time scales over which those interactions occur. For example, pairs of particles can exert force on one another in a local neighborhood. However, as particles push on adjacent particles, the combined effect can transmit forces over long distances via important mesoscale structures commonly called \emph{force chains} \citep{Liu:1995aa,Mueth:1998a}. 
The idea of networks has been invoked for many years to help provide a quantitative understanding and explanation of force-chain organization \citep{Coppersmith:1996a,Claudin:1998a,Sexton:1999a, Socolar:2002a, Peters:2005aa}. Broadly speaking, force chains form a network of filamentary-like structures that are visually apparent in images from experiments, like the one shown in Fig.~\ref{f:Behringer2014Statistical_Fig3Right}. In such images, the brighter particles carry larger forces \citep{Liu:1995aa, Howell1999}. Furthermore, force chains tend to align preferentially along the principal stress axes \citep{Majmudar:2005aa}. It can be helpful to think of a force-chain network as the backbone of strong forces that span a system, providing support for both static \citep{Geng:2001a} and dynamic \citep{Howell1999} loading. However, weaker forces can also play a stabilizing role, much as guy-wires do on an aerial tower \citep{Radjai:1998aa, cates1999jamming}. It is also possible for sets of particles to cluster together into larger geographical domains, with potentially distinct properties, that can have weak structural boundaries between them \citep{bassett2012influence}. At the largest scale, granular materials as a whole exhibit bulk properties, such as mechanical stability or instability in response to shear or compression \citep{richard2005slow}. All of the aforementioned spatial scales are potentially relevant for understanding phenomena such as transmission of acoustic waves \citep{Owens:2011}, thermal conductivity and heat transfer \citep{Smart:2007a}, electrical properties \citep{Gervois:1989a}, and more. The time scales of interactions in granular materials are also important, and they can vary over many orders of magnitude. 
For example, in systems under compression, statistical fluctuations of grain displacements depend fundamentally on the length of the strain step (i.e., ``increment'') over which one makes measurements, as fluctuations over short windows are consistent with anomalous diffusion and those over longer windows are consistent with Brownian behavior \citep{combe2015experimental}. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{Behringer2014Statistical_Fig3Right_small.jpg} \caption{\textbf{Force chains in an experimental granular system.} A photoelastic image of a quasi-two-dimensional (quasi-2D) packing of photoelastic disks that were subjected to pure shear. The photoelastic image allows one to visualize the force pattern in a material. Bright particles carry the strongest forces, and one can observe that a network of force chains tends to align along the principal stress axes. Modern techniques allow one to determine vector contact forces at each inter-particle contact. [We adapted this figure, with permission, from \citep{Behringer2014:Statistical}.]} \label{f:Behringer2014Statistical_Fig3Right} \end{figure} The principled study of such diverse characteristics and organization in a single system can be very challenging, and the development of appropriate physical, mathematical, statistical, and computational models is important to attain a mechanistic understanding of granular materials. Traditionally, it has been common to model granular materials using either particulate-based or continuum-based frameworks \citep{cates1999jamming}. However, both of these approaches are often implicitly agnostic to intermediate-scale organization, which is important for understanding both static granular packings \citep{bassett2015extraction} and granular dynamics \citep{Herrera:2011aa}. 
Recently, tools from network science \citep{newman2010networks,Bollobas:1998a} and related mathematical subjects --- which include approaches that can account explicitly for mesoscale structures \citep{Fortunato2016,Porter2009,Fortunato2010,csermely2013} --- have been used successfully to study properties of granular materials across multiple spatial and temporal scales. The most common representation of a network, an important idea for the study of complex systems of interacting entities \citep{newman2011complex}, is as a graph \citep{Bollobas:1998a}. A graph consists of a set of nodes (to represent the entities) and a set of edges, each of which represents an interaction between a pair of entities (or between an entity and itself). Increasingly, more complicated network representations (such as multilayer networks \citep{kivela2014multilayer}) are also employed. Moreover, there is also an increasing recognition that it is important to consider the impact of other features, such as spatial embedding and other spatial effects \citep{barth2011}, on network structure and dynamics, rather than taking an approach that promises that ``one size fits all.'' Network science offers methods for quantitatively probing and analyzing large, interacting systems whose associated networks have heterogeneous patterns that defy explanations attained by considering exclusively all-to-all, regular, or lattice-like interactions \citep{newman2010networks}. There are several open problems in granular physics that may benefit from network-science approaches. In particular, because granular materials have multiple relevant length and time scales \citep{Howell1999,Hidalgo2002,candelier2009building,mehta2008heterogeneities,Keys:2007a}, it can be challenging to model and quantify their structural organization, material properties, and responses to external loads \citep{Digby:1981,Velicky:2002,Goddard:1990,Makse-1999-WEM,Goldenberg-2005-FEE}. 
However, although complex, the pairwise inter-particle interactions that underlie and govern the structure and behavior of granular systems (and other particulate matter) render them amenable to various network representations and network-based analyses. \citet{Smart:2008b} were among the first to explicitly suggest and formalize the use of ideas from network science to begin to study some of the difficult questions in granular physics. In their paper, they highlighted the ability of a network-based perspective to complement traditional methods for studying granular materials and to open new doors for the analysis of these complex systems. One place in which network analysis may be especially useful is in quantifying how local, pairwise interactions between particles in a granular packing yield organization on larger spatial scales (both mesoscale and system-level). For example, in sheared or compressed granular packings, such organization can manifest as force chains or other intermediate-sized sets of particles that together comprise a collective structure. Network science provides approaches to extract and quantitatively characterize heterogeneous architectures at microscale, mesoscale, and macroscale sizes, and one can use these methods to understand important physical phenomena, including how such multiscale organization relates to bulk material properties or to spatial and temporal patterns of force transmission through a material. Network-based approaches should also be able to provide new insights into the mechanisms that govern the dynamics of granular materials. For example, as we will discuss, network analysis can helpfully describe certain aspects of complex dynamics (such as granular flows), and can provide quantitative descriptions of how the structure of a dense granular material evolves as a system deforms under external loads (such as those induced by compression, shear, tapping, or impact). 
It seems sensible to use a network-based approach when a network is changing on temporal scales slower than the time that it takes for information to propagate along it. We also expect ideas from temporal networks \cite{Holme2011} or adaptive networks \cite{thilo-adaptive} to be fruitful for studying faster dynamics, and investigation of granular dynamics in general should benefit from the development of both novel network representations and methods of network analysis that are designed specifically to understand temporally evolving systems. Another important problem in the study of granular materials is to predict when and where a granular system will fail. There has been some progress made in this area using network-based approaches, but it is important to continue to develop and apply tools from network analysis and related areas to gain a deeper understanding of which network features regulate or are most indicative of eventual failure. Another exciting direction for future work is to combine network-based approaches with questions about material design. In particular, can one use network-based approaches to help engineer granular systems --- or other materials that are amenable to network representations --- with desired and specialized properties? It is also important to note that network-based representations and methods of analysis can provide insightful descriptions of granular materials with various additional complexities, such as systems that are composed of differently-shaped particles, 3-dimensional (3D) materials, and so on. This flexibility makes the application of tools from network science a powerful approach for studying the structural properties and dynamics of granular networks. Such a framework also allows one to compare network architectures in diverse situations, such as between simulations and experiments, across systems that are composed of different types of particles or exposed to different loading conditions, and more. 
Exploiting these capabilities will yield improved understanding of which properties and behaviors of granular materials are general, versus which are specific to various details of a system. With the continued development of physically-informed network-analysis tools, network-based approaches show considerable promise for further development of both qualitative and quantitative descriptions of the organization and complex behavior of granular materials. The purpose of our paper is to review the nascent application of network theory (and related topics) to the study of granular materials. We begin in Sec.~\ref{s:networks} with a mathematical description of networks. In Sec.~\ref{s:network_measures}, we briefly review a set of measures that one can calculate on graphs and which have been useful in past investigations of granular materials. In Sec.~\ref{s:granular_networks}, we review several different ways in which granular materials have been represented as networks, and we discuss investigations of such networks to quantify heterogeneous, multiscale organization in granular materials and to understand how these systems evolve when exposed to external perturbations. We also point out insights into the underlying physics that have resulted from network-based investigations of granular matter. We close in Sec.~\ref{open_problems} with some thoughts on the many remaining open questions, and we describe a few specific future directions that we feel are important to pursue. We hope that our review will be helpful for those interested in using tools from network science to better understand the physics of granular systems, and that it will spur interest in using these techniques to inform material design. 
\section{Network construction and characterization} \subsection{What is a network?} \label{s:networks} % \begin{figure}[t] \centering \includegraphics[width = \textwidth]{fig1_from090819cbs_v2small2.jpg} \caption{\textbf{From a packing to a network.} \emph{(a)} A sample packing of grains. \emph{(b)} Representation of particles as network nodes. \emph{(c)} Representation of contacts as network edges. \emph{(d)} Graph representation of nodes and edges. The edges highlighted in green illustrate a cycle, which is a loop of physical contacts. The edges highlighted in peach illustrate a set of triangles, which are minimally-rigid structures in the context of structural rigidity. Edge weights often represent contact forces, which we illustrate here with different line widths. The degree $k$ of a node is equal to the number of edges attached (i.e., \emph{incident}) to that node, and the strength (i.e., weighted degree) $s$ of a node is given by the sum of weights of the edges attached to that node. One can \emph{(e)} encode an unweighted (i.e., binary) graph as an unweighted adjacency matrix and \emph{(f)} encode a weighted graph as a weighted adjacency matrix. } \label{f:packing} \end{figure} It is often useful to model a complex system as a network, the simplest type of which is a graph \citep{Newman:2006aa, newman2010networks}. 
For most of our article, we will use the terms \emph{network} and \emph{graph} synonymously, but the former concept is more general than the latter.\footnote{Indeed, it is increasingly important to examine network representations that are more complicated than graphs (see Secs.~\ref{alt_methods} and \ref{s:homology}) --- such as multilayer networks \citep{kivela2014multilayer}, simplicial complexes \citep{giusti2016twos}, and others --- and it is also essential to study dynamical processes on networks, rather than focusing exclusively on structural characteristics \citep{porter2016dynamical}.} A graph $G$ consists of \emph{nodes} (i.e., vertices), where pairs of nodes are adjacent to each other via \emph{edges} (i.e., links). We denote the set of nodes by $\mathcal{V}$ and the set of edges by $\mathcal{E}$. A node can also be adjacent to itself via a \emph{self-edge} (which is also sometimes called a \emph{self-loop}), and a \emph{multi-edge} describes the presence of two or more edges that are attached to (i.e., \emph{incident} to) the same pair of nodes. (Unless we state otherwise, we henceforth assume that our networks have neither self-edges nor multi-edges.) The number of nodes in a graph is the \emph{size} of the graph, and we also use the word ``size'' in the same way for other sets of nodes. A \emph{subgraph} of a graph $G$ is a graph constructed using a subset of $G$'s nodes and edges. An edge between two nodes represents some sort of relationship between them. For example, edges can represent information flow between different parts of the internet \citep{liben2008tracing}, friendship or other social interactions between people \citep{Scott:2012a}, trading between banks \citep{hurd2017framework}, anatomical or functional connections between large-scale brain regions \citep{Sporns:2013aa,Bassett:2017a}, physical connections between particles in contact \citep{Walker:2010aa, bassett2012influence}, and so on. 
Edges can be either unweighted or weighted, and they can be either undirected or directed \citep{newman2010networks}. In an unweighted (i.e., \emph{binary}) network, an edge between two nodes is assigned a binary value (traditionally $0$ or $1$) to encode the absence or presence of a connection. In a weighted network, edges can take a variety of different values to convey varying strengths of relationships between nodes. In an undirected network, all edges are bidirectional, so one assumes that all relationships are reciprocal. In a directed network, however, edges have a direction that encodes a connection from one node to another. An \emph{adjacency matrix} is a useful way to represent the information in a graph. For an unweighted and undirected graph, an element of the adjacency matrix $\mathbf{A}$ of an $N$-node network is \begin{equation} A_{ij} = \left\{\begin{array}{ll} 1\,, \text{ if there is an edge between nodes $i$ and $j$\,,} \\ 0\,, \text{ otherwise\,,} \end{array} \right. \end{equation} where $i,j \in \{1, \dots, N\}$. For a network in which nodes do not have labels, one can apply a permutation to ${\bf A}$'s rows and columns --- one uses the same permutation for each --- to obtain another adjacency matrix that represents the same network. In this paper, when we refer to ``the'' adjacency matrix ${\bf A}$ of a graph with unlabeled nodes, we mean any one of these matrices, which have the same properties (spectra, etc.). For a weighted graph, if nodes $i$ and $j$ are adjacent via an edge, we denote the corresponding edge weight by $w_{ij}$, which is usually given by a nonnegative real number (e.g., the value of the normal or tangential component of the force between two contacting particles)\footnote{We do not consider edges with negative weights, although it may be interesting to do so in future work if there is an appropriate physical reason.}. 
An element in the associated weighted adjacency matrix (which is sometimes called a \emph{weight matrix}) $\mathbf{W}$ is \begin{equation} W_{ij} = \left\{\begin{array}{ll} w_{ij}\,, \text{ if there is an edge between nodes $i$ and $j$\,,} \\ 0\,, \text{ otherwise\,.} \end{array} \right.\ \end{equation} For the more general case of a weighted, directed graph, if there is an edge from node $j$ to node $i$, then we let $w_{ij}$ represent the weight of that edge \citep{newman2010networks}. The associated weighted and directed adjacency matrix $\mathbf{W}$ is \begin{equation} W_{ij} = \left\{\begin{array}{ll} w_{ij}\,, \text{ if there is an edge from node $j$ to node $i$\,,} \\ 0\,, \text{ otherwise\,.} \end{array} \right.\ \end{equation} An adjacency matrix associated with an undirected network is symmetric, but an adjacency matrix for a directed network need not be (and is symmetric if and only if all directed edges are reciprocated). In the present review, we primarily consider undirected networks, although we will occasionally make remarks about directed situations. For weighted graphs, it is often also important to consider a binary adjacency matrix $\mathbf{A}$ associated with a weight matrix $\mathbf{W}$. Note that $\mathbf{A}$ captures only the \textit{connectivity} of nodes (i.e., their adjacencies), irrespective of how strongly they interact with each other. In terms of $\mathbf{W}$, the corresponding binary network (which can be either directed or undirected) is \begin{equation} A_{ij} = \left\{\begin{array}{ll} 1\,, \text{ if } W_{ij} \neq 0\,, \\ 0\,, \text{ otherwise\,.} \end{array} \right.\, \end{equation} It is common to use terms like \emph{network topology} when discussing structural properties of $\mathbf{A}$, and sometimes one uses terms like \emph{network geometry} when discussing properties that also depend on edge weights. 
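As a concrete illustration, consider a small, hypothetical contact network of five particles with made-up force values (not data from any experiment). The following sketch encodes it as a weight matrix $\mathbf{W}$ and recovers the associated binary adjacency matrix $\mathbf{A}$ by keeping only the connectivity:

```python
import numpy as np

# Hypothetical weighted, undirected contact network of five particles;
# W[i, j] holds a made-up contact-force magnitude between particles i and j.
W = np.array([
    [0.0, 1.2, 0.0, 0.7, 0.0],
    [1.2, 0.0, 2.1, 0.5, 0.0],
    [0.0, 2.1, 0.0, 0.9, 1.4],
    [0.7, 0.5, 0.9, 0.0, 0.0],
    [0.0, 0.0, 1.4, 0.0, 0.0],
])

# Binary adjacency matrix: A_ij = 1 wherever W_ij != 0, and 0 otherwise.
A = (W != 0).astype(int)

# An undirected network has symmetric W and A.
assert np.array_equal(W, W.T) and np.array_equal(A, A.T)
print(A)
```

The same binarization applies to directed networks, for which $\mathbf{W}$ (and hence $\mathbf{A}$) need not be symmetric.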
Because we will also employ ideas from subjects like algebraic topology (see Sec.~\ref{s:homology}), we will need to be very careful with such terminology. The network representations that have been used to study granular matter (and other kinds of materials) employ diverse definitions of edges (both weighted and unweighted, and both directed and undirected), and some generalizations of graphs have also been considered. See Fig.~\ref{f:packing} for a schematic showing possible choices of nodes and edges for a network representation of a granular packing. A variety of tools and measures from network analysis have been used to study granular networks. We discuss some of these ideas in Sec.~\ref{s:network_measures}. \subsection{Some tools for characterizing granular networks} \label{s:network_measures} Network theory \citep{newman2010networks} provides myriad ways to characterize and quantify the topological and geometrical organization of complex networks. Thus, different network methods can reveal different important features of the underlying system, and these features, in turn, can help explain how a system behaves in certain situations. In the context of granular matter, for example, it is often desirable to understand the stability of a material, mechanical responses to external stresses, or wave propagation through a system. Recent investigations have demonstrated that network analysis can inform understanding of the mechanisms that underlie these phenomena. In this section, we discuss several network concepts; and in Sec.~\ref{s:granular_networks}, we describe how they have been used for the study of granular materials. We are, of course, not presenting anything close to an exhaustive list of tools from network science. See \citep{newman2010networks} and other books and reviews (and references therein) for discussions of other tools from network science. 
For simplicity, we primarily give definitions for undirected networks, though many of the ideas that we present also have directed counterparts. We start with basic network diagnostics, and also discuss some more complicated methods. \subsubsection{Degree.} \label{degree} One local property of a network is node \emph{degree}. In an undirected network, a node's degree is equal to the number of edges that are attached to it (see Fig.~\ref{f:packing}\emph{d}). We denote the degree of node $i$ as $k_{i}$, and we recall that $N$ denotes the total number of nodes. For an unweighted graph with adjacency matrix ${\bf A}$, one can calculate $k_{i}$ with the formula \begin{equation}\label{eq:node_degree} k_{i} = \sum_{j=1}^{N} A_{ij}\,. \end{equation} One can generalize the idea of degree to \emph{strength} (i.e., weighted degree) using a weight matrix $\mathbf{W}$ \citep{Barrat:2004aa,Newman:2004aa}. The strength $s_i$ of node $i$ is equal to the sum of the weights of the edges that are attached to that node: \begin{equation}\label{eq:node_strength} s_{i} = \sum_{j=1}^{N} W_{ij}\,. \end{equation} Its associated degree $k_i$ is still given by Eq.~\eqref{eq:node_degree}. A common network representation of granular materials is to treat particles as nodes and physical contacts between particles as either unweighted or weighted edges (see Fig.~\ref{f:packing}\emph{a}--{c}). In this representation, node degree and node strength quantify information at the scale of single particles. One can compute the mean degree (a global property) of a network by calculating \begin{equation}\label{eq:avg_degree} \langle{k}\rangle = \frac{1}{N}\sum_{i} k_{i}\,, \end{equation} and one can similarly compute the mean strength with the formula \begin{equation}\label{eq:avg_strength} \langle{s}\rangle = \frac{1}{N}\sum_{i} s_{i}\,. \end{equation} In an undirected network, mean degree is related to $N$ and the total number $m$ of edges through the relation $\langle{k}\rangle = \frac{2m}{N}$. 
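For a hypothetical weighted contact network (with invented force values, purely for illustration), the degrees and strengths of Eqs.~\eqref{eq:node_degree} and \eqref{eq:node_strength} are row sums of $\mathbf{A}$ and $\mathbf{W}$, respectively, and the identity $\langle{k}\rangle = \frac{2m}{N}$ can be checked directly:

```python
import numpy as np

# Hypothetical weighted contact network with made-up force values.
W = np.array([
    [0.0, 1.2, 0.0, 0.7, 0.0],
    [1.2, 0.0, 2.1, 0.5, 0.0],
    [0.0, 2.1, 0.0, 0.9, 1.4],
    [0.7, 0.5, 0.9, 0.0, 0.0],
    [0.0, 0.0, 1.4, 0.0, 0.0],
])
A = (W != 0).astype(int)
N = len(A)

k = A.sum(axis=1)        # degree of each node (row sums of A)
s = W.sum(axis=1)        # strength of each node (row sums of W)
mean_k = k.mean()        # mean degree <k>
mean_s = s.mean()        # mean strength <s>

m = A.sum() // 2         # each undirected edge is counted twice in A
assert np.isclose(mean_k, 2 * m / N)   # <k> = 2m/N
print(k, np.round(s, 2), mean_k)
```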
It is sometimes useful to characterize a network using its degree distribution $P(k)$ \citep{newman2010networks}, which gives the probability that a node has degree $k$. When one represents a granular packing as a contact network (see Fig.~\ref{f:packing} and Sec.~\ref{s:contact_network}), which is a binary network (i.e., unweighted network), the degree $k_i$ of node $i$ is known more commonly in the physics literature as its \emph{contact number} or \emph{coordination number} $Z_i$. If every node has the same degree (or even if most nodes have this degree), such as in a regular lattice, one often refers to the mean coordination number $Z$ of the lattice or packing. It is well-known that coordination number is related to the stability of granular packings \citep{Alexander:1998aa} and plays a critical role in the jamming transition \citep{Wyart:2005a,Liu:2010aa,VanHecke2010}, a change of phase from an underconstrained state to a rigid state that is characterized by the onset of mechanical stability.\footnote{Unless we note otherwise, we use the phrase \emph{jamming} in the formal sense of the jamming transition as defined by \citep{Liu:2010aa,VanHecke2010}. Packings of particles above the jamming point (a critical point related to the jamming transition) are rigid and overconstrained (i.e., ``hyperstatic''), those at this point are marginally stable and exactly constrained (i.e., ``isostatic''), and those below this point are underconstrained (i.e., ``hypostatic''). Additionally, packings below the jamming point are sometimes called ``unjammed'', and those above the jamming point are called ``jammed''.} We discuss these ideas further throughout the review. \subsubsection{Walks and paths.} \label{paths} In a network, a \emph{walk} is an alternating sequence of nodes and edges that starts and ends at a node, such that consecutive edges are both incident to a common node. 
A walk thus describes a traversal from one node to another node (or to itself) along edges of a network. A \emph{path} is a walk that does not intersect itself or visit the same node (or the same edge) more than once, except for a \emph{closed path}, which starts and ends at the same node (see Sec.~\ref{cycles}). One can compute the number of (unweighted) walks of a given length from a binary, undirected adjacency matrix ${\bf A}$ \citep{newman2010networks}. The length $l$ of an unweighted walk is defined as the number of edges in the associated sequence (counting repeated edges the number of times that they appear). Letting $\Xi^{l}_{ij}$ denote the number of walks of length $l$ between nodes $i$ and $j$, one calculates \begin{equation}\label{eq:num_paths} \Xi^{l}_{ij} = [\mathbf{A}^{l}]_{ij}\,. \end{equation} Various types of random walks can also be used to explore a network and to define alternative notions of distance between nodes, and such ideas (and their relation to topics such as spectral graph theory) are very insightful for studying networks \citep{masuda2016}. In an undirected network, a path from node $i$ to node $j$ is necessarily also a path from node $j$ to node $i$. However, this is not typically true in directed networks, which have sometimes been utilized in studies of granular force chains (see, e.g., \citep{Socolar:2002a}). Depending on a network's structure, there may be several or no paths between a given pair of nodes. An undirected network is called \emph{connected} if there exists a path between each pair of nodes, and a directed network is called \emph{strongly connected} if there is a path between each pair of nodes \citep{newman2010networks}. (A directed network is called \emph{weakly connected} if its associated undirected network is connected, and a strongly connected network is necessarily also weakly connected.) In networks of granular packings, both the existence and lengths of paths can impact system behavior. A connected network consists of a single \emph{component}. 
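A quick numerical check of Eq.~\eqref{eq:num_paths} on a hypothetical contact network: the matrix power $\mathbf{A}^{l}$ counts walks of length $l$, so $[\mathbf{A}^{2}]_{ij}$ (with $i \neq j$) equals the number of common neighbors of nodes $i$ and $j$, and $[\mathbf{A}^{2}]_{ii}$ equals the degree $k_i$:

```python
import numpy as np

# Binary adjacency matrix of a hypothetical five-particle contact network.
A = np.array([
    [0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
])

l = 2
Xi = np.linalg.matrix_power(A, l)   # Xi[i, j] = number of walks of length l from i to j

# A walk of length 2 between distinct nodes passes through a common neighbor,
# and a closed walk of length 2 from node i just traverses one of its edges twice.
common_neighbors_02 = int((A[0] * A[2]).sum())
assert Xi[0, 2] == common_neighbors_02
assert np.array_equal(np.diag(Xi), A.sum(axis=1))   # [A^2]_ii equals the degree k_i
print(Xi)
```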
When a network has multiple components, it is common to study one component at a time (e.g., focusing on a \emph{largest connected component} (LCC), which is one that has the largest number of nodes). The length of an unweighted path is the number of edges in the associated sequence, and it is sometimes also called the \emph{hop distance} (or occasionally, unfortunately, the \emph{topological distance}). Paths in a network can also be weighted by defining some (possibly abstract) notion of distance associated with the edges of the network. For example, in a spatially-embedded network \citep{barth2011}, distance may refer to actual physical distance along an edge, in which case the length of a weighted path in the network is given by the sum of the physical distances along the sequence of edges in the path. However, one can also consider ``distance'' more abstractly. For example, in a transportation or flow network, one can define a distance between two adjacent nodes to be some measure of resistance between those nodes, and then the length of a weighted path in such a network is given by the sum of the resistances along the sequence of edges in the path.\footnote{One can also calculate distances between nodes if they occupy positions in a metric space (such as a latent one that determines the probability that a pair of nodes is adjacent to each other) \citep{boguna2010}, and the properties of that metric space can influence the distances (namely, the ones along a network) that concern us.} We use the term \emph{network distance} to indicate a distance between two nodes (which can be either unweighted or weighted) that is computed by summing along edges in a path. 
A \textit{geodesic path} --- i.e., a shortest path (which need not be unique) --- between two nodes can be particularly relevant (though other short paths are often also relevant), and a \textit{breadth-first search} (BFS) algorithm \citep{skiena2008algorithm} is commonly employed to find geodesic paths in a network. The \textit{diameter} of a graph is the maximum geodesic distance between any pair of nodes. Denoting the shortest, unweighted network distance (i.e., shortest-path distance) between nodes $i$ and $j$ as $d_{ij}$, the mean shortest-path distance $L$ between pairs of nodes in a graph is \citep{Watts:1998aa} \begin{equation}\label{eq:shortest_path} L = \frac{1}{N(N-1)}\sum_{i,j \,\, (i \neq j)} d_{ij} \,. \end{equation} Note that one must be cautious when computing the mean shortest-path distance on disconnected networks (i.e., on networks that have more than one component), because the usual convention is to set the distance between two nodes in different components to be infinite \citep{newman2010networks}. Therefore, in a network with multiple components, the distance $L$ from Eq.~\eqref{eq:shortest_path} is infinite. One solution to this problem is to compute $L$ for each component separately. Another network notion that relies on paths is \emph{network efficiency} \citep{latora2001efficient,Rubinov2009} \begin{equation} E = \frac{1}{N(N-1)}\sum_{i,j\,\, (i \neq j)}\frac{1}{d_{ij}}\,. \label{eq:E} \end{equation} One can generalize measures based on walks and paths to incorporate edge weights \citep{Latora2003,Rubinov2009}. Letting $d^{w}_{ij}$ denote the shortest, weighted network distance between nodes $i$ and $j$, one can define a weighted mean shortest-path distance $L^{w}$ and weighted efficiency $E^{w}$ as in Eqs.~\eqref{eq:shortest_path} and ~\eqref{eq:E}, respectively, but now one uses $d_{ij}^{w}$ instead of $d_{ij}$. The network efficiency $E$ is a normalized version of the Harary index of a graph \cite{estrada2016}. 
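The quantities $L$ and $E$ of Eqs.~\eqref{eq:shortest_path} and \eqref{eq:E} can be computed with a breadth-first search from each node. The following sketch, for a hypothetical five-particle contact network, uses the convention that $1/d_{ij} \to 0$ when $d_{ij} = \infty$, so that $E$ remains finite even on disconnected networks:

```python
from collections import deque
from math import inf

# Hypothetical five-particle contact network as an adjacency list.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3, 4], 3: [0, 1, 2], 4: [2]}
N = len(adj)

def hop_distances(source):
    """Shortest-path (hop) distances from `source` via breadth-first search."""
    dist = {v: inf for v in adj}
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == inf:        # first visit gives the shortest hop distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

dist_from = {i: hop_distances(i) for i in adj}
pair_dists = [dist_from[i][j] for i in adj for j in adj if i != j]

# Mean shortest-path distance L and network efficiency E; the convention
# 1/inf -> 0 keeps E finite even if the network has several components.
L = sum(pair_dists) / (N * (N - 1))
E = sum(0.0 if d == inf else 1.0 / d for d in pair_dists) / (N * (N - 1))
print(L, E)
```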
Additionally, the convention $d_{ij} = \infty$ (or $d^{w}_{ij} = \infty$) if there is no path from $i$ to $j$ allows one to use Eq.~\eqref{eq:E} (or its weighted counterpart $E^{w}$) on connected graphs or on graphs with more than one component. For both unweighted and weighted scenarios, large values of network efficiency tend to correspond to small values of mean shortest-path length, and vice versa. One can also readily generalize notions of paths, distances, and efficiency to directed networks \citep{latora2001efficient,Latora2003,Rubinov2009}. In later sections, we will describe the use of paths, walks, and related ideas for investigating the structure of granular materials and their response to perturbations --- including, but not limited to, how these quantities change as a granular packing is compressed and goes through the jamming transition \citep{Arevalo:2010aa} --- and we will also describe their use in specific applications, such as in understanding heat transfer through a granular material \cite{Smart:2007a}. \subsubsection{Cycles.} \label{cycles} A \textit{cycle} (i.e., a \emph{closed walk}) in a network is a walk that begins and ends at the same node \cite{Bollobas:1998a}. As with other walks, one way to characterize a cycle is by calculating its length. An \emph{$l$-cycle} is a cycle in which $l$ edges are traversed (counting repeated edges the number of times that they appear in the cycle). A \emph{simple cycle} is a cycle that does not include repeated nodes or edges, aside from one repetition of the origin node at the termination of a closed cycle. Thus, for example, a simple 3-cycle in an undirected network is a triangle. For the remainder of this review, we assume that cycles are simple cycles, unless we explicitly state otherwise. 
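Because a simple 3-cycle in an undirected network is a triangle, counts of 3-cycles can be read off from the diagonal of $\mathbf{A}^{3}$: each triangle that includes node $i$ contributes two closed walks of length $3$ starting and ending at $i$ (one for each traversal direction). A sketch on a hypothetical contact network:

```python
import numpy as np

# Binary adjacency matrix of a hypothetical five-particle contact network
# containing two triangles: (0, 1, 3) and (1, 2, 3).
A = np.array([
    [0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
])

closed_walks_3 = np.diag(np.linalg.matrix_power(A, 3))
triangles_per_node = closed_walks_3 // 2        # two traversal directions per triangle
num_triangles = triangles_per_node.sum() // 3   # each triangle has three member nodes
print(triangles_per_node, num_triangles)
```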
In the context of a granular packing, one can directly map particle \textit{contact loops} --- sets of physically-connected grains arranged in a circuit --- to cycles in a corresponding graphical representation (see Fig.~\ref{f:packing}d). The length $l$ is odd for an \emph{odd cycle} and even for an \emph{even cycle}. We briefly note a few related concepts that are used to examine cycles in graphs because of their relevance to several network-based studies of granular materials. These are the notions of \textit{cycle space}, \textit{cycle basis}, and \textit{minimum cycle basis} \cite{Bollobas:1998a,Gross:2005a}. The \textit{cycle space} of an undirected graph is the set of all simple cycles in a graph along with all subgraphs that consist of unions of edge-disjoint simple cycles (i.e., they can share nodes but not edges) \cite{Kavitha:2009a,Griffin:2017a}. A \textit{cycle basis} is a minimal set of simple cycles such that any element of the cycle space can be written as a symmetric difference of cycles in the cycle basis \cite{Kavitha:2009a}. Finally, for unweighted networks, a \textit{minimum cycle basis} is a basis in which the total length of all cycles in the basis is minimal. For weighted networks, it is a basis in which the sum of the weights of all cycles in the basis is minimal. Minimum cycle bases can provide useful information about the structure and organization of cycles in a network, so several algorithms have been developed to extract them (see, for example, \cite{Horton:1987a,Mehlhorn:2006a}). Once one has determined a minimum cycle basis, one can examine the distribution of cycle lengths or define measures to quantify the participation of different nodes in cycles of different lengths. For example, \citet{Tordesillas:2014a,Walker:2015b} defined the concept of a \emph{cycle-participation vector} $X_{i}^{\mathrm{cycle}} = [x_{i}^{0},x_{i}^{3},\dots,x_{i}^{l}]$ for each node $i$. 
The elements of this vector count the number of cycles of each length in which node $i$ participates. In this definition, $x_{i}^{3}$ is the number of 3-cycles in which node $i$ participates, $x_{i}^{4}$ is the number of 4-cycles in which node $i$ participates, and so on (up to cycles of length $l$). If a node is not part of any cycle, then $x_{i}^{0} = 1$ and $x_{i}^{j} = 0$ for all $j \geq 3$; otherwise, $x_{i}^{0} = 0$. One reason to examine cycles in granular networks \citep{Smart:2008aa,Arevalo:2009aa,Arevalo:2010ba,Arevalo:2010aa,Tordesillas:2010aa,Walker:2010aa} is that they can help characterize mesoscale structural features of a network. Cycles (that are nontrivial) involve more than a single node, but they do not typically embody global structures of a large network. This makes them appealing for studying network representations of granular materials, because mesoscale features seem to play a role in the behavior of these systems \citep{bassett2012influence}. Perhaps the most important motivation, however, is that cycles appear to be relevant for stability and rigidity of a system. Specifically, in the context of structural rigidity theory, 3-cycles tend to be stabilizing structures that can maintain rigidity under applied forces \citep{rivier2006extended}, whereas 4-cycles can bend or deform (see Sec.~\ref{rigidity}). In Sec.~\ref{s:granular_networks}, we discuss in more detail how cycles can help characterize granular systems. \subsubsection{Clustering coefficients.} \label{clustering_coefficient} Clustering coefficients are commonly-used diagnostics to measure the density of triangles either locally or globally in a network \citep{newman2010networks}. For an unweighted, undirected network, the local clustering coefficient $C_{i}$ is usually defined as the number of triangles involving node $i$ divided by the number of triples centered at node $i$ \citep{Watts:1998aa,Newman:2003aa}. 
A triple is a set of three nodes that can include either three edges (to form a 3-cycle) or just two of them. In terms of the adjacency matrix and node degree, the local clustering coefficient is \begin{equation}\label{eq:local_cluster} C_{i} = \frac{\sum_{hj} A_{hj} A_{ih} A_{ij}}{k_{i}(k_{i} - 1)} \end{equation} for $k_i \geq 2$ (and $C_{i} = 0$ if $k_i \in \{0,1\}$). One can then calculate a global clustering coefficient of a network as the mean of $C_{i}$ over all nodes: \begin{equation} \label{eq:global_cluster} C = \frac{1}{N} \sum_{i=1}^{N} C_{i}\,. \end{equation} There is also another (and simpler) common way of defining a global clustering coefficient in a network that is particularly useful when trying to determine analytical approximations of expectations over ensembles of random graphs \citep{Barrat:2000aa, newman2010networks}. \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{fig2_betweenness.jpg} \caption{\textbf{Network diagnostics reveal different types of structure in granular packings.} The two panels show the same compressed collection of photoelastic disks. One can measure the forces between particles and use the magnitude of the force between particle $i$ and particle $j$ to weight the edge between node $i$ and node $j$ in a weighted, undirected graph representation of the packing. \emph{(a)} One can compute a weighted clustering coefficient at each network node (i.e., particle) to probe local structure in the packing. \emph{(b)} One can compute a weighted betweenness centrality at each network node to probe how often its associated particle lies on a weighted shortest path between any two particles in the packing. [We adapted this figure, with permission, from \citep{bassett2012influence}.] } \label{f:diagnostics} \end{figure} The notion of a local clustering coefficient has also been extended to weighted networks in several ways \citep{jari2007,Barrat:2004aa,onnela2005,zhang2005general}. 
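Before turning to weighted variants, one can sketch the computation of Eqs.~\eqref{eq:local_cluster} and \eqref{eq:global_cluster} for a hypothetical contact network; the numerator of Eq.~\eqref{eq:local_cluster} is twice the number of triangles at node $i$, because it counts ordered pairs of adjacent neighbors:

```python
import numpy as np

# Binary adjacency matrix of a hypothetical five-particle contact network.
A = np.array([
    [0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
])
N = len(A)
k = A.sum(axis=1)

C = np.zeros(N)
for i in range(N):
    if k[i] >= 2:
        # Sum A_hj A_ih A_ij over ordered pairs (h, j) of neighbors of node i,
        # i.e., twice the number of triangles in which node i participates.
        C[i] = (np.outer(A[i], A[i]) * A).sum() / (k[i] * (k[i] - 1))

C_global = C.mean()   # global clustering coefficient: mean of C_i over all nodes
print(np.round(C, 3), round(C_global, 3))
```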
In one formulation \citep{Barrat:2004aa}, a local, weighted clustering coefficient $C^{w}_{i}$ is defined as \begin{equation} C^{w}_{i} = \frac{1}{s_{i}(k_{i} - 1)} \sum_{j,h} \frac{(W_{ij} + W_{ih})} {2} A_{ij} A_{ih} A_{jh} \end{equation} for strength $s_i > 0$ and degree $k_i \geq 2$. The quantity $C^{w}_{i} = 0$ if either $s_i = 0$ (so that $k_i = 0$) or $k_i = 1$. Recall that $\mathbf{W}$ and $\mathbf{A}$ are, respectively, associated weighted and unweighted adjacency matrices. The mean of $C^{w}_{i}$ over all nodes gives a weighted clustering coefficient $C^{w}$ of a network. As we will discuss later (see Secs.~\ref{s:role_of_cycles} and \ref{s:force_threshold}), clustering coefficients have been employed in several studies of granular materials. For example, they have been used to examine stability in granular packings \citep{Arevalo:2009aa,Arevalo:2010ba,Arevalo:2010aa,Tordesillas:2010aa,Walker:2010aa}. See Fig.~\ref{f:diagnostics}\emph{a} for an example of the spatial distribution of a clustering coefficient in a granular packing. \subsubsection{Centrality measures.} \label{s:centrality} In network analysis, one calculates centrality measures to attempt to quantify the importance of particular nodes, edges, or other structures in a network \citep{newman2010networks}. Different types of centralities characterize importance in different ways. The \emph{degree centrality} (i.e., degree) of a node, for example, is simply the number of edges attached to it (see Sec.~\ref{degree}). A few other types of centrality that have been used to study granular materials are closeness centrality, node betweenness centrality, edge betweenness centrality, and subgraph centrality. Notions of \emph{closeness centrality} of a node measure how close that node is to other nodes in a network \citep{Freeman:1977aa}. 
For a given node $i$, the most standard notion of closeness is defined as the inverse of the sum over the shortest-path lengths from node $i$ to all other nodes $j$ in a network. That is, node $i$'s closeness centrality is \begin{equation} \label{eq:closeness} H_{i} = \frac{N-1}{\sum_{j \neq i} d_{ij}}\,. \end{equation} Note that if we use the convention that the distance between two nodes in different components is infinite, then Eq.~\eqref{eq:closeness} only makes sense for connected networks. For any network with more than one component, Eq.~\eqref{eq:closeness} yields a closeness centrality of $0$. The \emph{geodesic node betweenness centrality} of node $i$ is the fraction of geodesic paths (either unweighted or weighted) between distinct nodes (not including $i$) that traverse node $i$ \citep{Freeman:1977aa}. Let $\psi_{gh}(i)$ denote the number of geodesic paths from node $g$ to node $h$ that traverse node $i$ (with $i \not \in \{g,h\}$), and let $\psi_{gh}$ denote the total number of geodesic paths from node $g$ to node $h$. The geodesic node betweenness centrality of node $i$ is then \begin{equation} \label{eq:betweenness} B_{i} = \sum_{g,h;\,g \neq h}\frac{\psi_{gh}(i)}{\psi_{gh}}\,, \quad i \not\in \{g,h\}\,. \end{equation} Geodesic node betweenness can probe the heterogeneity of force patterns in granular networks. See Fig.~\ref{f:diagnostics}\emph{b} for an example spatial distribution of a geodesic node betweenness centrality in an experimental granular packing. One can also compute a \emph{geodesic edge betweenness centrality} of an edge by calculating the fraction of shortest paths (either unweighted or weighted) that traverse it \cite{Girvan2002}. Let $\psi_{gh}(i,j)$ denote the number of geodesic paths from node $g$ to node $h$ that traverse the edge that is attached to nodes $i$ and $j$, and let $\psi_{gh}$ denote the total number of geodesic paths from node $g$ to node $h$. 
The geodesic edge betweenness centrality of this edge is then \begin{equation} \label{eq:edge_betweenness} B^{e}_{ij} = \sum_{g,h;\, g \neq h} \frac{\psi_{gh}(i,j)}{\psi_{gh}}\,. \end{equation} Another measure of node importance is \emph{subgraph centrality} $Y$ \citep{Estrada:2005aa,estrada2012}, which quantifies a node's participation in closed walks of all lengths. Recall from Sec.~\ref{paths} that one can write the number of length-$l$ walks from node $i$ to node $j$ in terms of powers of the adjacency matrix $\mathbf{A}$. To calculate closed walks of length $l$ that begin and end at node $i$, we take $i = j$ in Eq.~\eqref{eq:num_paths}. The subgraph centrality of node $i$, with a specific choice for how much we downweight longer paths, is then given by \begin{equation} \label{eq:subgraph_centrality_1} Y_{i} = \sum_{l=0}^{\infty} \frac{[\mathbf{A}^{l}]_{ii}}{l!}\,. \end{equation} Because shorter walks are weighted more strongly than longer walks in Eq.~\eqref{eq:subgraph_centrality_1}, they contribute more to the value of subgraph centrality. (In other contexts, centrality measures based on walks have also been used to compare the spatial efficiencies of different networks, and such ideas are worth exploring in granular materials \cite{estrada2016}.) One can also express subgraph centrality in terms of the eigenvalues and eigenvectors of the adjacency matrix \citep{Estrada:2005aa}. Let $v^{i}_{\alpha}$ denote the $i{\mathrm{th}}$ component of the $\alpha{\mathrm{th}}$ eigenvector $\bm{v}_{\alpha}$ of $\mathbf{A}$, and let $\lambda_\alpha$ denote the corresponding $\alpha{\mathrm{th}}$ eigenvalue. One can then write \begin{equation}\label{eq:subgraph_centrality} Y_{i} = \sum_{\alpha=1}^{n} (v^{i}_{\alpha})^{2} e^{\lambda_\alpha}\,. \end{equation} One can then calculate a mean subgraph centrality $Y$ by averaging $Y_i$ over the nodes in a network. 
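To make these definitions concrete, the following pure-Python sketch (all function names are illustrative) computes closeness centrality [Eq.~\eqref{eq:closeness}], geodesic node betweenness [Eq.~\eqref{eq:betweenness}] by counting shortest paths with breadth-first search, and subgraph centrality by truncating the walk sum in Eq.~\eqref{eq:subgraph_centrality_1} at a maximum length; edge betweenness can be computed analogously:

```python
import math
from collections import deque

def bfs_paths(adj, s):
    """Distances and shortest-path counts from source s (unweighted graph)."""
    n = len(adj)
    dist, sigma = [None] * n, [0] * n
    dist[s], sigma[s] = 0, 1
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]  # accumulate path counts
    return dist, sigma

def closeness(adj):
    """H_i = (N - 1) / sum_j d_ij, for a connected graph."""
    n = len(adj)
    return [(n - 1) / sum(d for j, d in enumerate(bfs_paths(adj, i)[0]) if j != i)
            for i in range(n)]

def betweenness(adj):
    """Geodesic node betweenness, summed over unordered pairs {g, h}."""
    n = len(adj)
    dist, sigma = zip(*(bfs_paths(adj, s) for s in range(n)))
    B = [0.0] * n
    for g in range(n):
        for h in range(g + 1, n):
            for i in range(n):
                # A shortest g-h path passes through i exactly when
                # d(g,i) + d(i,h) = d(g,h); count such paths via sigma.
                if i not in (g, h) and dist[g][i] + dist[i][h] == dist[g][h]:
                    B[i] += sigma[g][i] * sigma[i][h] / sigma[g][h]
    return B

def subgraph_centrality(adj, l_max=20):
    """Truncate the sum over closed-walk lengths at l_max; the 1/l!
    weights make the truncation error negligible for small graphs."""
    n = len(adj)
    A = [[1 if j in adj[i] else 0 for j in range(n)] for i in range(n)]
    Al = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    Y = [0.0] * n
    for l in range(l_max + 1):
        for i in range(n):
            Y[i] += Al[i][i] / math.factorial(l)
        Al = [[sum(Al[i][k] * A[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]  # next power of A
    return Y

# Path graph 0 - 1 - 2: the middle node is the most central.
path = [[1], [0, 2], [1]]
```

For a 3-cycle, the truncated series agrees with the spectral formula of Eq.~\eqref{eq:subgraph_centrality}: every node has $Y_i = \frac{1}{3}e^{2} + \frac{2}{3}e^{-1}$, since the eigenvalues are $2, -1, -1$.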
In one study of granular materials \citep{Walker:2010aa}, a subgraph centrality was examined for weighted networks by considering the eigenvalues and eigenvectors of the weight matrix $\mathbf{W}$ in Eq.~\eqref{eq:subgraph_centrality}. One can also compute \emph{network bipartivity} $R$ \citep{Estrada:2005ab} to quantify the contribution to mean subgraph centrality $Y$ from closed walks of even length. In particular, the network bipartivity $R_{i}$ of node $i$ is \begin{equation} R_{i} = \frac{{Y}^{\mathrm{even}}_{i}}{Y_{i}}\,, \label{eq:bipartivity} \end{equation} where $Y^{\mathrm{even}}_{i}$ is the contribution to the sum in Eq.~\eqref{eq:subgraph_centrality_1} from even values of $l$ (i.e., even-length closed walks). As with other node diagnostics, one can average bipartivity over all nodes in a network to obtain a global measure, which we denote by $R$. In Sec.~\ref{s:granular_networks}, we will discuss calculations of closeness, betweenness, and subgraph centralities in granular packings. Obviously, our discussion above does not give an exhaustive presentation of centrality measures, and other types of centralities have also been used in studies of granular materials (see, for example, \citep{bassett2012influence}). \subsubsection{Subgraphs, motifs, and superfamilies.} \label{s:motifs} One can interpret the local clustering coefficient in Eq.~\eqref{eq:local_cluster} as a relationship between two small subgraphs: a triangle and a connected triple. Recall that a subgraph of a graph $G$ is a graph constructed using a subset of $G$'s nodes and edges. Conceptually, one can interpret small subgraphs as building blocks or subunits that together can be used to construct a network. 
For example, in a directed network, there exist three possible 2-node subgraphs (i.e., \emph{dyads}): the dyad in which node $i$ is adjacent to node $j$ by a directed edge, the dyad in which node $j$ is adjacent to node $i$ by a directed edge, and the dyad in which both of these adjacencies exist. In a directed, unweighted graph, there are 13 different connected 3-node subgraphs \citep{Milo:2002a} (see Fig.~\ref{f:subgraphs}). \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{motifs_milo_2002_network.jpg} \caption{\textbf{Subgraphs in networks.} We show all 13 different 3-node subgraphs that can occur in a directed, unweighted graph. A \emph{motif} is a subgraph that occurs often (typically relative to some null model) in a particular network or set of networks \citep{shenorr2002network}. [We reproduced this figure, with permission, from \citep{Milo:2002a}.]} \label{f:subgraphs} \end{figure} The term \emph{motif} is sometimes used for a small subgraph that occurs often in a particular network or set of networks (typically relative to some null model, such as a randomly rewired network that preserves the original degree distribution) \citep{shenorr2002network, Milo:2002a, milo2004superfamilies,alon2007network}. Borrowing terminology from genetics, these motifs appear to be \emph{overexpressed} in a network (or set of networks). Unsurprisingly, the number of $n$-node subgraphs increases very steeply with $n$, so identifying subgraphs in large networks is computationally expensive, and many algorithms have been developed to estimate the number of subgraphs in an efficient (though approximate) way. See, for example, \citep{Milo:2002a,schreiber2005frequency,wernicke2006efficient,grochow2007network,omidi2009moda,kashani2009kavosh}. In applying algorithms for motif counting to data, one seeks to identify subgraphs that are present more often than expected in some appropriate random-network null model. 
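As a toy illustration of subgraph counting, the following sketch (illustrative, not drawn from the works cited above) performs a dyad census of a directed, unweighted graph, classifying each unordered node pair as one of the three possible 2-node configurations; the 3-node subgraph counts of Fig.~\ref{f:subgraphs} rest on an analogous, though more involved, enumeration:

```python
def dyad_census(A):
    """Classify each unordered node pair of a directed, unweighted graph
    (adjacency matrix A) as mutual (edges both ways), asymmetric (one
    way), or null (no edge)."""
    n = len(A)
    census = {"mutual": 0, "asymmetric": 0, "null": 0}
    for i in range(n):
        for j in range(i + 1, n):
            pair = A[i][j] + A[j][i]
            if pair == 2:
                census["mutual"] += 1
            elif pair == 1:
                census["asymmetric"] += 1
            else:
                census["null"] += 1
    return census
```

In motif analysis, one would compare such counts against the same counts in an ensemble of randomized networks (e.g., degree-preserving rewirings) to decide which configurations are overexpressed.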
The over-representation of a motif in a network is often interpreted as indicative of its playing a role in the function of that network (though one has to be cautious about drawing such conclusions). For example, 3-node motifs can form feedforward loops in which there are directed edges from node $i_1$ to node $i_2$, from node $i_2$ to node $i_3$, and from node $i_1$ to node $i_3$. The identification and characterization of motifs have yielded insights into the structure and function of a variety of systems, including food webs \cite{paulau2015motif}, gene-regulation networks of yeast \citep{Milo:2002a}, neuronal networks of the macaque monkey \citep{sporns2004motifs}, and others. For different types of networks, one can also identify so-called \emph{superfamilies}, which are sets of networks that have similar motif-frequency distributions \citep{milo2004superfamilies}. There also exists a less-stringent definition of a superfamily in which one disregards whether a subgraph is a motif in the sense of it being more abundant than expected from some random-graph null model and instead considers a superfamily to be a set of networks that have the same rank-ordering of the number of $n$-node subgraphs for some fixed value of $n$ \cite{Xu:2008aa}. In either case, one can examine different superfamilies to help understand the role that specific motifs (or subgraphs) or sets of motifs (or subgraphs) may have in potentially similar functions of networks in a given superfamily. Subgraphs, motifs, and superfamilies have been examined in several studies that applied network analysis to granular materials \cite{Walker:2014aa,Walker:2015b,Walker:2015c,Tordesillas:2012aa}. They have revealed interesting insights into the deformation and reconfiguration that occurs in granular systems for different types of loading conditions and external perturbations. We discuss these ideas further in Secs.~\ref{subgraph} and \ref{kinematic}. 
\subsubsection{Community structure.} \label{s:comm_structure} Many real-world networks also have structure on intermediate scales (\emph{mesoscales}) that can arise from particular organizations of nodes and edges \citep{Fortunato2016,peixoto2017,Porter2009,Fortunato2010,csermely2013}. The most commonly-studied mesoscale network property is \textit{community structure} \citep{Fortunato2016,Porter2009}, which describes sets of nodes, called \textit{communities}, that are densely (or strongly) interconnected to each other but only weakly connected to other dense sets of nodes. In other words, a community has many edges (or large total edge weight, in the case of weighted networks) between its own nodes, but the number and/or weight of edges between nodes in different communities is supposed to be small. Once one has detected communities in a network, one way to quantify their organization is to compute and/or average various network quantities over the nodes (or edges) within each community separately, rather than over an entire network. For example, one can compute the size (i.e., number of nodes) of a community, mean path lengths between nodes in a given community, or some other quantity to help characterize the architecture of different communities in a network. Studying community structure can reveal useful insights about granular systems, whose behavior appears to be influenced by mesoscale network features \cite{bassett2012influence,bassett2015extraction,Giusti:2016a,papadopoulos2016evolution,Walker:2012aa,Tordesillas:2013a,Walker:2014ba,Walker:2012a}. Community structure and methods for detecting communities have been studied very extensively \cite{Fortunato2016}. 
We will briefly discuss the method of \textit{modularity maximization} \citep{NG2004,Newman2006b,newman2010networks}, in which one optimizes an (occasionally infamous) objective function known as \emph{modularity}, as this approach has been employed previously in several studies of granular materials (see, e.g., Sec.~\ref{s:comm_detect}). However, myriad other approaches exist for studying community structure in networks. These include stochastic block models (SBMs) and other methods for statistical inference (which are increasingly favored by many scholars) \cite{peixoto2017}, approaches based on random walks (e.g., InfoMap \citep{rosvall2008maps}), various methods for detecting \emph{local community structure} (see, e.g., \cite{clauset2005finding,jeub2015}), edge-based communities \citep{ahn2010link}, and many others. The goal of modularity maximization is to identify communities of nodes that are more densely (or more strongly) interconnected with other nodes in the same community than expected with respect to some null model. To do this, one maximizes a modularity objective function \begin{equation}\label{eq:one} Q = \sum_{i,j} [W_{ij} - \gamma P_{ij}] \delta(g_{i},g_{j})\,, \end{equation} where $g_{i}$ is the community assignment of node $i$ and $g_{j}$ is the community assignment of node $j$, and where the Kronecker delta $\delta(g_{i},g_{j})=1$ if $g_{i} = g_{j}$ and $\delta(g_{i},g_{j})=0$ otherwise. The quantity $\gamma$ is a resolution parameter that adjusts the relative average sizes of communities \citep{Good2010,Fortunato2007}, where smaller values of $\gamma$ favor larger communities and larger values of $\gamma$ favor smaller communities \citep{bassett2013robust}. The element $P_{ij}$ is the expected weight of the edge between node $i$ and node $j$ under a specified null model. 
In many contexts, the most common choice is to determine the null-model matrix elements $P_{ij}$ from the Newman--Girvan (NG) null model \citep{NG2004,Newman2006,Bazzi2016}, for which \begin{equation} P^{\mathrm{NG}}_{ij} = \frac{s_{i} s_{j}}{2m}\,, \label{eq:ng_null} \end{equation} where $s_{i}=\sum_j W_{ij}$ is the strength (and, for unweighted networks, the degree $k_i$) of node $i$ and $m=\frac{1}{2}\sum_{i,j} W_{ij}$ is the total edge weight (and, for unweighted networks, the total number of edges) in the network. There are several other null models, which are usually based on a random-graph model, and they can incorporate system features (such as spatial information) in various ways \cite{Sarzynska:2015aa}. In the next part of this subsubsection, we discuss a physically-motivated null model that is useful for studying granular force networks. \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{null_models_small.pdf} \caption{\textbf{Modularity maximization with different null models can reveal distinct types of community structures in granular networks.} In a network representation in which particles are represented by nodes and contact forces are represented by weighted edges, \emph{(a)} using the Newman--Girvan null model helps uncover contiguous domains in a granular system, and \emph{(b)} using a geographical null model on the same network helps detect chain-like structures that are reminiscent of force chains. In both panels, nodes (i.e., particles) of the same color are assigned to the same community.} \label{f:nullmodels} \end{figure} Maximizing $Q$ is NP-hard \citep{Brandes2008}, so it is necessary to use computational heuristics to identify near-optimal partitions of a network into communities of nodes \citep{Good2010}. 
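Although maximizing $Q$ is hard, evaluating Eq.~\eqref{eq:one} for a given candidate partition is straightforward. Below is a minimal sketch using the Newman--Girvan null model of Eq.~\eqref{eq:ng_null} (function names are illustrative; following the convention of Eq.~\eqref{eq:one}, $Q$ is not normalized by the total edge weight):

```python
def modularity(W, partition, gamma=1.0):
    """Eq. (one) with the Newman--Girvan null model of Eq. (ng_null).
    `partition[i]` is the community assignment g_i of node i."""
    n = len(W)
    s = [sum(row) for row in W]   # strengths s_i
    two_m = sum(s)                # 2m = total strength
    Q = 0.0
    for i in range(n):
        for j in range(n):
            if partition[i] == partition[j]:
                Q += W[i][j] - gamma * s[i] * s[j] / two_m
    return Q

# Example: two disjoint triangles with unit edge weights. The natural
# partition gives Q = 6 (i.e., Q / 2m = 1/2 after normalizing by 2m = 12),
# whereas merging everything into one community gives Q = 0.
W = [[0.0] * 6 for _ in range(6)]
for tri in ((0, 1, 2), (3, 4, 5)):
    for a in tri:
        for b in tri:
            if a != b:
                W[a][b] = 1.0
partition = [0, 0, 0, 1, 1, 1]
```

Swapping in a different null-model matrix (such as the geographical null model discussed below for granular materials) only changes the term subtracted from $W_{ij}$.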
Two well-known choices are the Louvain \citep{Blondel2008} and Louvain-like \cite{genlouvain2016} locally greedy algorithms, which begin by placing each node in its own community and then iteratively agglomerate nodes when the resulting partition increases modularity $Q$. Because of the extreme near degeneracy of the modularity landscape (a very large number of different partitions can have rather similar values of the scalar $Q$), it is often useful to apply such an algorithm many times to construct an ensemble of partitions, over which one can average various properties to yield a consensus partition \citep{bassett2013robust,Lancichinetti2012,Sarzynska:2015aa,jeub-santo2017}. ~\\ \noindent \emph{Physical considerations.} Community-detection tools, such as modularity maximization, have often been applied to social, biological, and other networks \cite{Porter2009,Fortunato2016}. In applying these techniques to granular materials, however, it is important to keep in mind that the organization of particulate systems (such as the arrangements of particles and forces in a material) is subject to significant spatial and physical constraints, which can severely impact the types of organization that can arise in a corresponding network representation of the material. When studying networks that are embedded in real space or constructed via some kind of physical relationship between elements, it is often crucial to consider the spatial constraints --- and, more generally, a system's underlying physics --- and their effects on network architecture \cite{barth2011}. Such considerations also impact how one should interpret network diagnostics such as path lengths and centrality measures, the null models that one uses in procedures such as modularity maximization, and so on. The NG null model was constructed to be appropriate for networks in which a connection between any pair of nodes is possible. 
Clearly, in granular materials --- as in other spatially-embedded systems \citep{barth2011} --- this assumption is unphysical and therefore problematic. Bassett et al. \citep{bassett2013robust} defined a null model that accounts explicitly for geographical (and hence spatial) constraints in granular materials, in which each particle can contact only its nearest neighbors \citep{bassett2015extraction}. In the context of granular networks with nodes representing particles and edges representing forces between those particles, the \emph{geographical null model} ${\bf P}$ in \citep{bassett2013robust} has matrix elements \begin{equation} \label{geog} P_{ij} = \rho A_{ij} \,, \end{equation} where $\rho$ is the mean edge weight in the network and ${\bf A}$ is the binary adjacency matrix of the network. In this particular application, $\rho = \overline{f} := \langle f_{ij} \rangle$ is the mean inter-particle force. As we illustrate in Fig.~\ref{f:nullmodels}, modularity maximization with the geographical null model [Eq.~\eqref{geog}] produces different communities than modularity maximization with the NG null model [Eq.~\eqref{eq:ng_null}] \citep{bassett2015extraction,Giusti:2016a,papadopoulos2016evolution}. \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{fig4_multilayer_schematic.jpg} \caption{\textbf{A schematic of a multilayer network with layer-dependent community structure.} In this example, each layer represents a static granular force network in which nodes (i.e., particles) are adjacent to one another via intralayer weighted edges (e.g., representing contact forces). Additionally, the same particle in consecutive layers is adjacent to itself via an interlayer edge of uniform weight $\omega$. For clarity, we only show two such couplings, but these interlayer edges exist for all particles (and between all consecutive layers). 
One can extract communities that change across layers --- e.g., if layers represent time, these are time-evolving communities --- to study mesoscale organization in a granular system and to help understand how it reconfigures due to external loading (such as compression). In this schematic, we use different colors to label the particles that belong to different communities. Note that the same community can persist across several (or all) layers and reconfigure in terms of its constituent particles and the mean strength of its nodes. [We reproduced this figure, with permission, from \citep{papadopoulos2016evolution}.]} \label{f:multilayer_schematic} \end{figure} ~\\ \noindent \emph{Generalization of modularity maximization to multilayer networks.} Although studying community structure in a given granular packing can provide important insights, one is also typically interested in how such mesoscale structures reconfigure as a material experiences external perturbations, such as those from applied compression or shear. To examine these types of questions, one can optimize a \emph{multilayer} generalization of modularity to study multilayer granular force networks in which each layer represents a network at a different step in the evolution of the system (for example, at different time steps or for different packing fractions) \citep{papadopoulos2016evolution}. In Fig.~\ref{f:multilayer_schematic}, we show a schematic of a multilayer construction that has been employed in such investigations. See \cite{kivela2014multilayer,Boccaletti2014} for reviews of multilayer networks (including generalizations of this construction). One way to detect multilayer communities in a network is to use a generalization of modularity maximization \citep{Mucha2010}, which was derived for multilayer networks with interlayer edges between counterpart nodes in different layers. For simplicity, we suppose that all edges are bidirectional. 
One maximizes \begin{equation} \label{eq:dynamic_modularity} Q_{\mathrm{multi}} = \frac{1}{2 \eta} \sum_{ijqr} [(\mathcal{W}_{ijq} - \gamma_{q}\mathcal{P}_{ijq}) \delta_{qr} + \omega_{jqr}\delta_{ij}] \delta(g_{iq},g_{jr})\,, \end{equation} where $\mathcal{W}_{ijq}$ is the $(i,j){\mathrm{th}}$ component of the $q{\mathrm{th}}$ layer of the adjacency tensor $\mathcal{W}$ \cite{domenico2013mathematical} associated with the multilayer network, $\mathcal{P}_{ijq}$ is the $(i,j){\mathrm{th}}$ component of the $q{\mathrm{th}}$ layer of the null-model tensor, $\gamma_{q}$ is a resolution parameter (sometimes called a \emph{structural resolution parameter}) for layer $q$, and $\omega_{jqr}$ is the \textit{interlayer coupling} between layers $q$ and $r$. (In the context of multilayer representations of temporal networks, if $\mathcal{\omega}_{jqr} = \omega$ for all $j$, $q$, and $r$, one can interpret $\omega$ as a \textit{temporal resolution parameter}.) More specifically, $\mathcal{\omega}_{jqr}$ is the strength of the coupling that links node $j$ in layer $q$ to itself in layer $r$. (This type of interlayer edge, which occurs between counterpart nodes in different layers, is called a \emph{diagonal} edge \cite{kivela2014multilayer}.) The quantities $g_{iq}$ and $g_{jr}$, respectively, are the community assignments of node $i$ in layer $q$ and node $j$ in layer $r$. The intralayer strength of node $j$ in layer $q$ is $s_{jq} = \sum_{i} \mathcal{W}_{ijq}$, and the interlayer strength of node $j$ in layer $q$ is $\zeta_{jq} = \sum_{r}{\mathcal{\omega}_{jqr}}$, so the multilayer strength of node $j$ in layer $q$ is given by $\kappa_{jq} = s_{jq} + \zeta_{jq}$. 
Finally, the normalization factor $\eta = \frac{1}{2} \sum_{jq} \kappa_{jq}$ is the total strength of the adjacency tensor.\footnote{In the study of multilayer networks, it is common to use the term ``tensor'' to refer to a multidimensional array \cite{kivela2014multilayer} (as is common in some disciplines \cite{Kolda2009Tensor}), and proper tensorial structures have been explored briefly in adjacency tensors \cite{domenico2013mathematical}.} Maximizing multilayer modularity [Eq.~\eqref{eq:dynamic_modularity}] allows one to examine phenomena such as evolving communities in time-dependent networks or communities that evolve with respect to some other parameter, and communities in networks with multiple types of edges. Capturing such behavior has been useful in many applications, including financial markets \citep{Bazzi2016}, voting patterns \cite{Mucha2010}, international relations \cite{Cranmer:2014ut}, international migration \cite{danchev2016}, disease spreading \cite{Sarzynska:2015aa}, human brain dynamics \citep{Bassett2011b,bassett2015learning,braun2015dynamic}, and more. In the context of granular matter, multilayer community detection allows one to examine changes in community structure of a force network, in which communities can either persist or reconfigure, with respect to both particle content and the mean strength of nodes inside a community, due to applied loads on a system. \subsubsection{Flow networks.} \label{s:max_flow} One can examine many natural and engineered systems --- such as animal and plant vasculature, fungi, and urban transportation networks \citep{Blinder:2013aa, Katifori:2010a, shl2017, Bebber:2007a, Banavar:2000a, Newman:2006a, Kurant:2006a} --- from the perspective of \emph{flow networks} (which are often directed) that transport a load (of fluids, vehicles, and so on) along their edges. It is of considerable interest to examine how to optimize flow through a network \citep{Bertsekas:1998, newman2010networks}. 
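One classical way to compute such an optimal (maximum) flow is the Edmonds--Karp algorithm, which repeatedly augments the flow along shortest paths in the residual network; the value it returns equals the weight of a minimum cut. A pure-Python sketch (the capacity matrix and names are illustrative):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds--Karp maximum s-t flow for a dense capacity matrix."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual network.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:   # no augmenting path left: flow is maximal
            return total
        # Bottleneck residual capacity along the path found.
        bottleneck = float('inf')
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # Augment (negative reverse flow encodes residual edges).
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Toy directed network: source 0, sink 3.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
```

For this toy network, the maximum flow is $5$, matching the minimum cut $\{0\}$ versus the rest (total outgoing capacity $3 + 2 = 5$).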
A well-known result from optimization theory is the \emph{maximum-flow--minimum-cut theorem} \citep{Ahuja:flow,Bertsekas:1998,newman2010networks}: for a suitable notion of flow and under suitable assumptions, the maximum flow that can pass from a source node to a sink node is given by the total weight of the edges in the \emph{minimum cut}, which is the set of edges with smallest total weight that, when removed, disconnect the source and the sink. A related notion, which applies to networks in which there is some cost associated with transport along network edges, is that of \emph{maximum-flow--minimum-cost}. In this context, one attempts to find a route through a network that maximizes flow transmission from source to sink, while minimizing the cost of flow along network edges \citep{Ahuja:flow,Bertsekas:1998}. The maximum-flow--minimum-cut and maximum-flow--minimum-cost problems are usually examined under certain constraints, such as flow conservation at each node and an upper bound (e.g., limited by a capacitance) on flow through any edge. One can examine granular networks from such a perspective by considering a flow of force along a network formed by contacting grains. We discuss relevant studies in Sec.~\ref{s:flow_networks}. \subsubsection{Connected components and percolation.} \label{s:largest_comp} Sometimes it is possible to break a network into connected subgraphs called components (which we introduced briefly in Sec.~\ref{paths}). A \emph{component}, which is sometimes called a \textit{cluster}, is a subgraph $G_C$ of a graph $G$ such that at least one path exists between each pair of nodes in $G_C$ \citep{newman2010networks}. A component is maximal in the sense that the addition of any other node of $G$ to it destroys the property of connectedness. An undirected graph is connected when it consists of a single component. 
Networks with more than one component often have one component that has many more nodes than the other components, so there can be one large component and many small components. One can find the components of a graph using a breadth-first search (BFS) algorithm \citep{skiena2008algorithm}, and one can determine the number of components by counting the number of $0$ eigenvalues of a graph's combinatorial Laplacian matrix \citep{newman2010networks}. To study graph components, one can also use methods from computational algebraic topology. Specifically, the zeroth Betti number $\beta_{0}$ indicates the number of connected components in a graph \citep{Kaczynski:2004aa} (see Sec.~\ref{s:homology}). Percolation theory \citep{newman2010networks,kesten-whatis,Stauffer:1994a,saberi2015}, which builds on ideas from subjects such as statistical physics and probability theory, is often used to understand the emergence and behavior of connected components in a graph \citep{porter2016dynamical}. For example, in the traditional version of what is known as \emph{bond percolation} (which is also traditionally studied on a lattice rather than on a more general network) \citep{broadbent1957percolation}, edges are occupied with probability $p$, and one examines quantities such as the size distributions of connected components as a function of the parameter $p$, called the \textit{bond occupation probability}. It is especially interesting to determine a critical value $p_c$, called the \textit{percolation threshold}, at which there is a phase transition: below $p_{c}$, there is no \emph{percolating component} (or cluster), which spans the system and connects opposite sides; above $p_{c}$, there is such a cluster \citep{Albert2002,broadbent1957percolation}. In the latter scenario, it is common to say that there is a ``percolating network". 
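The BFS-based computation of connected components mentioned above can be sketched as follows (illustrative names; the graph is given as adjacency lists):

```python
from collections import deque

def connected_components(adj):
    """Return the components of an undirected graph, each as a list of
    nodes, using breadth-first search from unvisited seeds."""
    n = len(adj)
    seen = [False] * n
    components = []
    for s in range(n):
        if seen[s]:
            continue
        comp = [s]
        seen[s] = True
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    comp.append(v)
                    q.append(v)
        components.append(comp)
    return components
```

The number of components returned here equals the zeroth Betti number $\beta_0$ and the multiplicity of the $0$ eigenvalue of the combinatorial Laplacian, as noted above.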
In percolation on more general networks, one can study how the size of the largest component, as a fraction of the size of a network, changes with $p$. Related ideas also arise in the study of components in Erd\H{o}s--R\'{e}nyi random graphs $G(N,p)$, in which one considers an ensemble of $N$-node graphs and $p$ is the independent probability that an edge exists between any two nodes \citep{Bollobas:1998a,newman2010networks,Erdos:1959aa, Erdos:1960aa}. In the limit $N \rightarrow \infty$, the size of the largest connected component (LCC) undergoes a phase transition at a critical probability $p_{c} = 1/N$. When $p < p_{c}$, the ER graph in expectation does not have a \textit{giant connected component} (GCC); at $p = p_{c}$, a GCC emerges whose size scales linearly with $N$ for $p > p_{c}$. Similarly, for bond percolation on networks, a transition occurs at a critical threshold $p_c$, such that for $p > p_c$, there is a GCC (sometimes also called a ``giant cluster'' or ``percolating cluster'') whose size is a finite fraction of the total number $N$ of nodes as $N \rightarrow \infty$ \citep{newman2010networks,porter2016dynamical}. When studying percolation on networks, quantities of interest include the fraction of nodes in the LCC, the mean component size, the component-size distribution, and critical exponents that govern how these quantities behave just above the percolation threshold \citep{Albert2002,Stauffer1979a,newman2010networks}. We will see in Sec.~\ref{s:granular_networks} that it can be informative to use ideas from percolation theory to study the organization of granular networks. For example, it is particularly interesting to examine how quantities such as the number and size of connected components evolve as a function of packing density (or another experimental parameter) \citep{Slotterback:2012aa,Herrera:2011aa,Arevalo:2010aa,Kondic:2012aa,Kramar:2013aa,Kramar:2014b,Kramar:2014aa,Ardanza-Trevijano:2014aa,Kondic:2016a,Pugnaloni:2016a}. 
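A minimal bond-percolation experiment in this spirit (an illustrative sketch, not from the cited studies): on a small square lattice, keep each edge independently with probability $p$ and record the fraction of nodes in the largest connected component. At such small sizes there is, of course, no true phase transition, only a smooth crossover:

```python
import random
from collections import deque

def largest_component_fraction(n_side, p, seed=0):
    """Bond percolation on an n_side x n_side square lattice: keep each
    edge with probability p and return |largest component| / N."""
    rng = random.Random(seed)
    N = n_side * n_side
    adj = [[] for _ in range(N)]
    idx = lambda r, c: r * n_side + c
    for r in range(n_side):
        for c in range(n_side):
            for dr, dc in ((0, 1), (1, 0)):    # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < n_side and cc < n_side and rng.random() < p:
                    adj[idx(r, c)].append(idx(rr, cc))
                    adj[idx(rr, cc)].append(idx(r, c))
    # BFS over components, tracking the largest size.
    seen = [False] * N
    best = 0
    for s in range(N):
        if seen[s]:
            continue
        size, seen[s] = 1, True
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    size += 1
                    q.append(v)
        best = max(best, size)
    return best / N
```

At $p = 0$ every node is isolated (fraction $1/N$), at $p = 1$ the full lattice is a single component (fraction $1$), and with a fixed seed the kept-edge set grows monotonically with $p$.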
Some studies have considered \emph{connectivity percolation} transitions, which are characterized by the appearance of a connected component that spans a system (i.e., a percolating cluster, as reflected by an associated GCC in the infinite-size limit of a network); or \emph{rigidity percolation transitions}, which can be used to examine the transition to jamming \citep{Feng1985,Moukarzel:1995a,Jacobs1995,Aharonov:1999a,Lois:2008a,Shen:2012a,Kovalcinova:2015a,Henkes2016}. Rigidity percolation is similar to ordinary bond percolation (which is sometimes used to study connectivity percolation), except that edges represent the presence of rigid bonds between network nodes \cite{Thorpe1985a,Thorpe:1999aa} and one examines the emergence of rigid clusters in the system as a function of the fraction of occupied bonds. One can also study percolation in force networks by investigating the formation of connected components and the emergence of a percolating cluster of contacts as a function of a force threshold, which is a threshold applied to a \emph{force-weighted adjacency matrix} (representing contact forces between particles) to convert it to a binary adjacency matrix \citep{Arevalo:2010aa,Kondic:2012aa,Kramar:2013aa,Kramar:2014b,Kramar:2014aa,Ardanza-Trevijano:2014aa,Kondic:2016a,Pugnaloni:2016a,Kovalcinova:2016a,Kovalcinova:2015a,PastorSatorras2012,pathak2017force}. (See Sec.~\ref{s:force_threshold} for additional discussion.) However, it is important to note that when studying networks of finite size, one needs to be careful with claims about GCCs and percolation phase transitions, which are defined mathematically only in the limit of infinite system size. \subsubsection{Methods from algebraic topology and computational topology.} \label{s:homology} The tools that we have described thus far rely on the notion of a dyad (i.e., a 2-node subgraph) as the fundamental unit of interest (see Fig.~\ref{f:algebraic_topology}\emph{a}). 
However, recent work in algebraic topology and computational topology \citep{Edelsbrunner:2010a,Carlsson:2009a,ghrist2014elementary,Kaczynski:2004aa} offers a complementary view, in which the fundamental building blocks that encode relationships between elements of a system are \emph{$k$-simplices} (each composed of $k+1$ nodes), rather than simply nodes and dyadic relations between them (see Fig.~\ref{f:algebraic_topology}\emph{b}). These structures can encode ``higher-order'' interactions and can be very useful for understanding the architecture and function of real-world networks (e.g., they yield a complementary way to examine mesoscale network features), and they have been insightful in studies of sensor networks \citep{dlotko2012distributed}, contagion spreading \citep{taylor2015topological}, protein interactions \citep{sizemore2016classification}, neuronal networks \citep{giusti2016twos,sizemore2016cliques}, and many other problems. See \cite{otter2015,Patania2017} for further discussion and pointers to additional applications. The discussion in \cite{Stolz2016} is also useful. \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{fig5_algebraic_topology_v2.jpg} \caption{\label{f:algebraic_topology} \textbf{Algebraic topology and clique complexes.} \emph{(a)} An interaction in a graph is a dyad (i.e., a 2-node subgraph) or a node and a self-edge. \emph{(b)} An alternative fundamental unit is a $k$-simplex. A 0-simplex is a node, a 1-simplex is an edge, a 2-simplex is a filled triangle, and so on. \emph{(c)} A collection of $k$-simplices is called a simplicial complex, and one type of simplicial complex that can be used to encode the information in a graph is a clique complex (sometimes also called a \emph{flag complex}). One constructs a clique complex by taking every $k$-clique (a complete subgraph of $k$ nodes) in a graph $G$ to be a simplex of the same number of nodes. 
\emph{(d)} An interesting feature that can occur in a simplicial complex is a cycle, which is a closed arrangement of a collection of $k$-simplices. The purple edges in the upper object indicate a 1-dimensional cycle that encloses a region filled in by simplices, whereas the purple edges in the lower object indicate a 1-dimensional cycle that encloses a hole. \emph{(e)} One can use a filtration to decompose a weighted graph into a sequence of binary graphs. For example, if one uses edge weight as a filtration parameter, one can represent a weighted graph as a sequence of unweighted graphs, which in turn yields a sequence of unweighted clique complexes. [We adapted this figure, with permission, from \citep{Bassett:2017a}.]} \end{figure} A collection of simplices that are joined in a compatible way is called a \emph{simplicial complex}, which is a generalization of a graph that can encode non-dyadic relations \citep{ghrist2014elementary}. More precisely, and following \citep{giusti2016twos}, we define an \emph{(abstract) simplicial complex} $\mathcal{X}$ as a pair of sets: $V_{\mathcal{X}}$, called the \emph{vertices} (or \emph{nodes}); and $S_{\mathcal{X}}$, called the \emph{simplices}, each of which is a finite subset of $V_{\mathcal{X}}$, subject to the requirement that if $\sigma \in S_{\mathcal{X}}$, then every subset $\tau$ of $\sigma$ is also an element of $S_{\mathcal{X}}$. A simplex with $k$ elements is called a \emph{$(k-1)$-simplex}, and subsets $\tau \subset \sigma$ are called \emph{faces} of $\sigma$. Using this notation, a $0$-simplex is a node, a $1$-simplex is an edge and its two incident nodes (i.e., a dyad), a 2-simplex is a filled triangle, and so on (see Fig.~\ref{f:algebraic_topology}\emph{b}). One type of simplicial complex that can be used to encode the information in a graph is a \emph{clique complex} (sometimes also called a \emph{flag complex}); we show an example in Fig.~\ref{f:PH_schematic}. 
To construct the clique complex of a graph $G$, one associates every $k$-clique (a complete --- i.e., fully connected --- subgraph of $k$ nodes) in $G$ with a $(k-1)$-simplex. One can thus think of building the clique complex of a graph $G$ as ``filling in'' all of the $k$-cliques in $G$ (see Fig.~\ref{f:algebraic_topology}\emph{c}). Note that we use the terms $k$-simplex and $k$-clique because they are standard, but it is important not to confuse the use of $k$ in this context with the use of $k$ as the (also standard) notation for node degree. One important feature of a simplicial complex is the potential presence of \emph{cycles}.\footnote{Although we use the term \emph{cycle}, which is standard in algebraic topology, note that this concept of a cycle is distinct from (though related to) the standard network-science use of the word ``cycle'' (see Sec.~\ref{cycles}). The latter is sometimes called a \emph{circuit}, a term that we will use occasionally for clarity (especially given our focus on connected graphs).} A cycle can consist of any number of nodes, and a $k$-dimensional cycle is defined as a closed arrangement of $k$-simplices, such that a cycle has an empty boundary\footnote{The precise mathematical definition of a cycle requires a more detailed presentation than what we include in our present discussion. For more information and further details from a variety of perspectives, see \cite{ghrist2014elementary,otter2015,kozlov2007combinatorial,nanda2014simplicial,Kaczynski:2004aa,Edelsbrunner:2010a,sizemore2016classification,Stolz2016}.}. For example, Fig.~\ref{f:algebraic_topology}\emph{d} illustrates a closed arrangement of $1$-simplices (i.e., edges) that forms a 1-dimensional cycle. It is important to distinguish between cycles that encircle a region that is filled by simplices and cycles that enclose a void (which is often called a ``hole'' for the case of 1-dimensional cycles). 
For example, the set of purple edges in the object in the upper portion of Fig.~\ref{f:algebraic_topology}\emph{d} constitute a 1-dimensional cycle that surrounds a region filled by 2-simplices (i.e., filled triangles), whereas the purple edges in the object in the lower portion of Fig.~\ref{f:algebraic_topology}\emph{d} constitute a $1$-dimensional cycle that encloses a hole. Characterizing the location and prevalence of void-enclosing cycles in the clique complex of a network representation of a granular packing can offer fascinating insights into the packing's structure \citep{Kondic:2012aa}. One way to do this is by computing topological invariants such as \emph{Betti numbers} \citep{Kaczynski:2004aa,Edelsbrunner:2010a,Kramar:2014b}. The $k{\mathrm{th}}$ Betti number $\beta_k$ counts the number of inequivalent $k$-dimensional cycles that enclose a void, where two $k$-dimensional cycles are \emph{equivalent} if they differ by a boundary of a collection of $(k+1)$-simplices. In other words, the $k{\mathrm{th}}$ Betti number $\beta_k$ counts the number of nontrivial \emph{equivalence classes} of $k$-dimensional cycles and can thus also be interpreted as counting the number of voids (i.e., ``holes" of dimension $k$).\footnote{In the literature, it is common to abuse terminology and refer to an equivalence class of $k$-dimensional cycles simply as a $k$-dimensional cycle.} The zeroth Betti number $\beta_{0}$ gives the number of connected components in a network, the first Betti number $\beta_{1}$ gives the number of inequivalent 1-dimensional cycles that enclose a void (i.e., it indicates loops), the second Betti number $\beta_{2}$ gives the number of inequivalent 2-dimensional cycles that enclose a void (i.e., it indicates cavities), and so on. 
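To make these definitions concrete, the following Python sketch (our own illustration, not code from the cited works) builds the simplices of a clique complex by brute-force clique enumeration and computes $\beta_{0}$ and $\beta_{1}$ for a graph regarded as a 1-dimensional complex. An important caveat: the cycle-rank formula $\beta_{1} = |\mathcal{E}| - |\mathcal{V}| + \beta_{0}$ counts independent circuits and agrees with the number of holes only when no 2-simplices are filled in; computing Betti numbers of a general clique complex requires boundary-matrix reductions that we do not show.

```python
from itertools import combinations

def clique_complex(nodes, edges, max_dim=2):
    """Promote every k-clique to a (k-1)-simplex (feasible for small graphs)."""
    E = {frozenset(e) for e in edges}
    simplices = [frozenset([v]) for v in nodes]  # 0-simplices
    for k in range(2, max_dim + 2):  # k-cliques become (k-1)-simplices
        for subset in combinations(nodes, k):
            if all(frozenset(p) in E for p in combinations(subset, 2)):
                simplices.append(frozenset(subset))
    return simplices

def betti_graph(nodes, edges):
    """(beta_0, beta_1) of the 1-skeleton: components and independent circuits."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for i, j in edges:
        parent[find(i)] = find(j)
    beta0 = len({find(v) for v in nodes})
    beta1 = len(edges) - len(nodes) + beta0  # cycle rank of the graph
    return beta0, beta1

# A 4-cycle (which encloses a hole) plus an isolated node.
nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(clique_complex(nodes, edges)))  # 9: five 0-simplices, four 1-simplices
print(betti_graph(nodes, edges))          # (2, 1): two components, one hole
```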
Another useful way to examine the topological features that are determined by equivalence classes of $k$-dimensional cycles (i.e., components, loops, cavities, and so on) is to compute \emph{persistent homology} (PH) of a network. For example, to compute PH for a weighted graph, one can first decompose it into a sequence of binary graphs. One way to do this is to begin with the empty graph and add one edge at a time in order of decreasing edge weights (see Fig.~\ref{f:algebraic_topology}\emph{e}). More formally and following \citep{sizemore2016cliques}, this process can translate information about edge weights into a sequence of binary graphs as an example of what is called a \emph{filtration} \cite{ghrist2014elementary,otter2015}. The sequence $G_0 \subset G_1 \subset \dots \subset G_{|\mathcal{E}|}$ of unweighted graphs begins with the empty graph $G_0$, and one adds one edge at a time (or multiple edges, if some edges have the same weight) in order from largest edge weight to smallest edge weight. (One can also construct filtrations in other ways). Constructing a sequence of unweighted graphs in turn yields a sequence of clique complexes \citep{Petri2013a}, allowing one to examine equivalence classes of cycles as a function of the edge weight $\theta$ (or another filtration parameter). Important values of $\theta$ include the weight $\theta_{\mathrm{birth}}$ associated with the first graph in which an equivalence class (i.e., a topological feature) occurs (i.e., its \emph{birth} coordinate) and the edge weight $\theta_{\mathrm{death}}$ associated with the first graph in which the feature disappears (i.e., its \emph{death} coordinate), such as by being filled in with higher-dimensional simplices or by merging with an older feature. 
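For zero-dimensional features (components), the birth--death bookkeeping just described can be sketched with a union-find structure. In this hedged illustration (our own, not code from the cited works), a component is born at the weight of its first edge, the ``elder rule'' retires the younger component at a merger, and components that survive the whole filtration are assigned a placeholder death value of \texttt{None} rather than a convention such as $-1$:

```python
def h0_persistence(n_nodes, weighted_edges):
    """Birth-death pairs of components for a descending edge-weight filtration.

    weighted_edges: list of (i, j, w) triples. Returns (birth, death) pairs;
    death is None for components that survive the entire filtration.
    """
    parent = list(range(n_nodes))
    birth = [None] * n_nodes  # birth weight stored at component roots

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i, j, w in sorted(weighted_edges, key=lambda e: -e[2]):
        ri, rj = find(i), find(j)
        if ri == rj:
            continue  # the edge closes a loop; no component-level event
        bi, bj = birth[ri], birth[rj]
        if bi is None and bj is None:
            parent[ri] = rj
            birth[rj] = w  # a new component is born at this weight
        elif bi is None:
            parent[ri] = rj  # an untouched node joins an existing component
        elif bj is None:
            parent[rj] = ri
        else:
            if bi < bj:  # elder rule: the younger component dies now
                ri, rj = rj, ri
            pairs.append((birth[rj], w))
            parent[rj] = ri
    for i in range(n_nodes):
        if find(i) == i and birth[i] is not None:
            pairs.append((birth[i], None))
    return pairs

# Two components born at weight 4 merge at weight 3; node 4 joins at weight 2.
edges = [(0, 1, 4), (2, 3, 4), (1, 2, 3), (3, 4, 2)]
print(h0_persistence(5, edges))  # [(4, 3), (4, None)]
```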
One potential marker of the relative importance of a particular feature (a component, a loop, and so on) in the clique complex is how long it persists, as quantified by its \emph{lifetime} $\theta_{\mathrm{birth}} - \theta_{\mathrm{death}}$ (although short-lived features can also be meaningful \cite{Stolz2016,otter2015}). A large lifetime indicates robust features that persist over many values of a filtration parameter. \emph{Persistence diagrams} (PDs) are one useful way to visualize the evolution of $k$-dimensional cycles with respect to a filtration parameter. PDs encode birth and death coordinates of features as a collection of \emph{persistence points} $(\theta_{\mathrm{birth}},\theta_{\mathrm{death}})$ in a planar region. One can construct a PD for each Betti number: a $\beta_{0}$ PD (denoted by $\mathrm{PD}_{0}$) encodes the birth and death of components in a network, a $\beta_{1}$ PD (denoted by $\mathrm{PD}_{1}$) encodes the birth and death of loops, and so on. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{PH_schematic_Kramar2014b_edited2.jpg} \caption{\label{f:PH_schematic} \textbf{An example force network, an associated filtration over the flag complex (i.e., clique complex), and persistence diagrams.} \emph{(a)} In the force network, colored edges represent the magnitude of the force between contacting particles, which are represented as nodes in the network. In order from smallest to largest, the four values of the force are $\theta_{1}$ (dark blue), $\theta_{2}$ (cyan), $\theta_{3}$ (green), and $\theta_{4}$ (red). \emph{(b)} The flag complex is formed by filling in all 3-particle loops (i.e., triangular loops) with the smallest value of the force along any of its edges. Defining a filtration over the flag complex avoids counting these 3-particle loops. 
\emph{(c)--(f)} The sequence of complexes corresponding to the filtration over the flag complex; one obtains the sequence by descending the four levels of the force threshold $\theta$. \emph{(g)} The $\beta_{0}$ persistence diagram $\mathrm{PD}_{0}$. \emph{(h)} The $\beta_{1}$ persistence diagram $\mathrm{PD}_{1}$. [We adapted this figure, with permission, from \citep{Kramar:2014b}.]} \end{figure} To demonstrate some key aspects of a filtration, the birth and death of topological features, and PDs, we borrow and adapt an example from \citet{Kramar:2014b}. Consider the small granular force network in Fig.~\ref{f:PH_schematic}\emph{a}; the nodes represent particles in a 2D granular packing, and the colored edges represent the magnitude of the inter-particle forces (of which there are four distinct values) between contacting particles. In a 2D system like this one, the only relevant Betti numbers are $\beta_{0}$ and $\beta_{1}$, as all others are $0$. In Fig.~\ref{f:PH_schematic}\emph{b}, we show the \emph{flag complex} (which is essentially the same as a \emph{clique complex} \cite{otter2015}) of the granular network, where the color of a triangle indicates the value corresponding to the minimum force along any of its edges. Computing PH on a flag complex (which has been done in several studies of PH in granular force networks \citep{Kramar:2013aa,Kramar:2014b,Kramar:2014aa,Pugnaloni:2016a,Kondic:2016a,Ardanza-Trevijano:2014aa}) only counts loops that include 4 or more particles. That is, it does not count 3-particle loops (which are sometimes called ``triangular loops''). Loops with 4 or more particles are associated with \emph{defects}, because they would not exist in a collection of monosized disks that are packed perfectly and as densely as possible into a ``crystalline" structure (which has only triangular loops) \citep{Kramar:2014b}. 
In Fig.~\ref{f:PH_schematic}\emph{c--f}, we show the sequence of complexes that correspond to the filtration over the flag complex. One descends the four threshold levels (i.e., edge weights), beginning with the largest ($\theta_{4}$) and ending with the smallest ($\theta_{1}$). In Fig.~\ref{f:PH_schematic}\emph{g,h}, we show the corresponding PDs for $\beta_{0}$ and $\beta_{1}$. It is helpful to discuss a few features of these diagrams. In $\mathrm{PD}_{0}$, we observe four points that are born at $\theta_{4}$; these points correspond to the four connected components that emerge at the first level of the filtration in Fig.~\ref{f:PH_schematic}\emph{c}. Two of the components merge into one component at $\theta_{3}$ (see Fig.~\ref{f:PH_schematic}\emph{d}); this corresponds to the point at $(\theta_{4},\theta_{3})$. A new component forms at $\theta_{3}$ and dies at $\theta_{2}$; this is represented by the point at $(\theta_{3},\theta_{2})$ (see Fig.~\ref{f:PH_schematic}\emph{d}). Additionally, two components born at $\theta_{4}$ die at $\theta_{2}$, corresponding to the two points at $(\theta_{4},\theta_{2})$. One can continue this process until the end of the filtration, where there is just a single connected component (see Fig.~\ref{f:PH_schematic}\emph{f}). This component is born at $\theta_{4}$; it persists for all thresholds, and we use \citet{Kramar:2014b}'s convention to give it a death coordinate of $-1$; this yields the persistence point at $(\theta_{4},-1)$. In $\mathrm{PD}_{1}$, we observe that a loop emerges at $\theta_{3}$ (see Fig.~\ref{f:PH_schematic}\emph{d}), and it is then filled by triangles at $\theta_{2}$ (see Fig.~\ref{f:PH_schematic}\emph{e}), leading to the point at $(\theta_{3},\theta_{2})$. Three more loops are born at $\theta_{2}$ and never die (see Fig.~\ref{f:PH_schematic}\emph{e}); using the convention in \citep{Kramar:2014b}, we assign these features a death coordinate of $0$, so there are three persistence points at $(\theta_{2},0)$. 
Finally, one more loop appears at $\theta_{1}$ and does not die (see Fig.~\ref{f:PH_schematic}\emph{e}); this is represented by a point at $(\theta_{1},0)$. \citet{Kramar:2014b} gave an in-depth exposition of how to apply PH to granular networks, and we refer interested readers to this paper for more information. Because PH is a general mathematical approach, it can be applied to different variations of force networks and can also be used on networks constructed from different types of experimental data (e.g., digital image data, particle-position data, or particle-interaction data). \citet{Kramar:2014b} also discussed a set of measures that can be used to compare and contrast the homology of force networks both within a single system (e.g., at two different packing fractions) and across different systems (e.g., if one uses particles of different sizes or shapes), and they explored the robustness of PH computations to noise and numerical errors. In Sec.~\ref{s:granular_alg_topology}, we further discuss applications of methods from algebraic and computational topology to granular materials. \subsection{Some considerations when using network-based methods} \label{s:relationships_measures} Because there are many methods that one can use to analyze granular networks and many quantities that one can compute to measure properties of these networks, it is useful to discuss some relationships, similarities, and distinctions between them. Naturally, the meaning of any given network feature depends on how the network itself is defined, so we focus the present discussion on the most common representation of a granular system as a network. (See Sec.~\ref{s:other_representations} for discussions of other representations.) In this representation (see Fig.~\ref{f:packing}), nodes correspond to particles and edges correspond to contacts between particles. Edge weights can represent quantities such as normal or tangential forces between particles. 
In this type of granular network, it is important to be aware of which network quantities explicitly take into account spatial information or physical constraints in a system, which consider only network topology, and which consider only network geometry (i.e., both topology and edge weights, but not other information). Granular materials have a physical nature and are embedded in real space, so such considerations are extremely important. For discussions of how such issues manifest in spatial networks more generally, see \cite{barth2011}. One way to explicitly include spatial or physical information into network analysis is to calculate quantities that are defined from some kind of distance (e.g., a Euclidean distance between nodes), whether directly or through a latent metric space, rather than a hop distance. For example, as discussed in Sec.~\ref{paths}, one can define the edge length between two adjacent nodes from the physical distance between them, which allows quantities such as mean shortest path length, efficiency, and some centrality measures to directly incorporate spatial information. However, traditional network features such as degree and clustering coefficient depend only on network connectivity, although their values are influenced by spatial effects. In Sec.~\ref{s:comm_structure}, we also saw that one can incorporate physical constraints from granular networks into community-detection methods by using a geographical null model, rather than the traditional NG null model, in modularity maximization. Different computations in network analysis can also probe different spatial, topological, or geometrical scales. For example, measures such as degree, strength, and clustering coefficients are local measures that quantify information about the immediate neighborhood of a node. However, measures such as the mean shortest path length and global efficiency are global in nature, as they probe large-scale network organization. 
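As a concrete illustration of the first approach (our own sketch, not code from the cited works; the coordinates are arbitrary), one can assign each contact-network edge a length equal to the Euclidean distance between particle centers and then compute shortest path lengths with Dijkstra's algorithm:

```python
import heapq
import math

def shortest_path_lengths(centers, edges, source):
    """Dijkstra shortest path lengths with Euclidean edge lengths."""
    n = len(centers)
    adj = [[] for _ in range(n)]
    for i, j in edges:
        w = math.dist(centers[i], centers[j])  # physical edge length
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = [math.inf] * n
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Four particles: a straight chain 0-1-2 and a longer detour through particle 3.
centers = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
edges = [(0, 1), (1, 2), (0, 3), (3, 2)]
d = shortest_path_lengths(centers, edges, source=0)
print(d)  # particle 2 is reached via the chain (length 2.0), not the detour
```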
In between these extremes are mesoscale structures. A network-based framework can be very helpful for probing various types of intermediate-scale structures, ranging from very small ones (e.g., motifs, such as small cycles) to larger ones (e.g., communities), and tools such as PH were designed to reveal robust structural features across multiple scales. Crucially, although there are some clear qualitative similarities and differences between various network-analysis tools (and there are some known quantitative relationships between some of them \cite{newman2010networks}), it is a major open issue to achieve a precise understanding of the relationships between different network computations. Moreover, in spatially-embedded systems (as in any situation where there are additional constraints) one can also expect some ordinarily distinct quantities to become more closely related to each other \cite{barth2011}. Furthermore, the fact that a granular particle occupies a volume in space (volume exclusion) gives constraints beyond what arises from embeddedness in a low-dimensional space. \section{Granular materials as networks} \label{s:granular_networks} We now review network-based models and approaches for studying granular materials. Over the past decade, network analysis has provided a novel view of the structure and dynamics of granular systems, insightfully complementing and extending traditional perspectives. See \citep{Jaeger1996,duran1999sands,Mehta2007,Franklin2015,Andreotti2013} for reviews of non-network approaches. 
Perhaps the greatest advantages of using network representations and associated tools are their natural ability to \emph{(1)} capture and quantify the complex and intrinsic heterogeneity that manifests in granular materials (e.g., in the form of force chains), and to \emph{(2)} systematically and quantitatively investigate how the structure and organization of a granular system changes when subjected to external loads or perturbations (such as compression, shear, or tapping). In particular, network science and related subjects provide a set of tools that help quantify structure (and changes in structure) over a range of scales --- including local, direct interactions between neighboring particles; larger, mesoscale collections of particles that can interact and reconfigure via more complicated patterns; and system-wide measurements of material (re)organization. It is thought that local, intermediate, and system-wide scales are all important for regulating emergent, bulk properties of granular systems. Because structure at each of these scales can play a role in processes such as acoustic transmission and heat transfer, it can be difficult to obtain a holistic, multiscale understanding of granular materials. For example, microscale particle-level approaches may not take into account collective organization that occurs on slightly larger scales, and continuum models and approaches that rely on averaging techniques may be insensitive to interesting and important material inhomogeneities \citep{Digby:1981,Velicky:2002,Goddard:1990,Goldenberg-2005-FEE}. Network representations also provide a flexible medium for modeling different types of granular materials (and other particulate matter). For example, network analysis is useful for both simulation and experimental data of granular materials, and methods from complex systems and network science can help improve understanding of both dense, quasistatically-deforming materials as well as granular flows. 
In any of these cases, one often seeks to understand how a system evolves over the course of an experiment or simulation. To study such dynamics, one can examine a network representation of a system as a function of a relevant physical quantity that parameterizes the system evolution. For example, for a granular system in which the packing fraction increases in small steps as the material is compressed quasistatically, one can extract a network representation of the system at each packing fraction during the compression process and then study how various features of that network representation change as the packing fraction increases. Even a particular type of granular system is amenable to multiple types of network representations, which can probe different aspects of the material and how it evolves under externally applied loads. For instance, one can build networks based only on knowledge of the locations of particles (which, in some cases, may be the only information available) or by considering the presence or absence of physical contacts between particles. If one knows additional information about the elements in a system or about their interactions, one can construct more complicated network representations of it. For example, it has long been known that granular materials exhibit highly heterogeneous patterns of force transmission, with a small subset of the particles carrying a majority of the force along force chains \citep{Dantu1957, Drescher1972}. Recall from Sec.~\ref{intro} that, broadly speaking, a force chain (which is also sometimes called a \emph{force network}) is a set of contacts that carry a load that is larger than the mean load \citep{Liu:1995aa, Howell1999}, and the mean orientation of a force chain often encodes the direction of the applied stress \citep{Majmudar:2005aa}. 
We illustrated an example of force chain structure in Fig.~\ref{f:Behringer2014Statistical_Fig3Right}, and we further discuss force-chain organization in Sec.~\ref{s:force_weighted}. Because of the nature of the distribution of force values and the interesting way in which forces are spatially distributed in a material, it is often very useful to consider network representations of granular materials that take into account information about inter-particle forces (see Sec.~\ref{s:force_weighted}) and to use network-based methods that allow one to quantitatively investigate how the structure of a force network changes when one includes only contacts that carry at least some threshold force. See Sec.~\ref{s:force_threshold} and Sec.~\ref{s:granular_alg_topology}. In our ensuing discussion, we describe several network constructions that have been used to study granular materials, discuss how they have been investigated using many of the concepts and diagnostics introduced in Sec.~\ref{s:network_measures}, and review how these studies have improved scientific understanding of the underlying, complex physics of granular systems. \subsection{Contact networks} \label{s:contact_network} A \emph{contact network} is perhaps the simplest way to represent a granular system. Such networks (as well as the term ``contact network'') were used to describe granular packings long before explicitly network science-based approaches were employed to study granular materials; see, for example, \citep{luding1997stress,silbert2002statistics}. The structure of a contact network encodes important information about a material's mechanical properties. As its name suggests, a contact network embodies the physical connectivity and contact structure of the particles in a packing (see Fig.~\ref{f:packing}). In graph-theoretic terms, each particle in the packing is represented as a node, and an edge exists between any two particles that are in physical contact with one another. 
Note that it may not always be possible to experimentally determine which particles are in physical contact, and one may need to approximate contacts between particles using information about particle positions, radii, and inter-particle distances. (See Sec.~\ref{limit} for details.) By definition (and however it is constructed), a contact network is unweighted and undirected, and it can thus be described with an unweighted and undirected adjacency matrix (see Sec.~\ref{s:networks}): \begin{equation} A_{ij} = \left\{\begin{array}{ll} 1\,, \text{ if particles \textit{i} and \textit{j} are in contact\,,} \\ 0\,, \text{ otherwise\,.} \end{array} \right. \end{equation} Because the organization of a contact network depends on and is constrained by the radii of the particles and their locations in Euclidean space, a contact network is a \emph{spatially-embedded} graph \citep{barth2011}. In Sec.~\ref{s:comm_detect}, we will see that this embedding in physical space has important consequences for the extraction of force-chain structures via community-detection techniques (see Sec.~\ref{s:comm_structure}). In Fig.~\ref{f:contact_network}, we show an example of a contact network generated from a discrete-element-method (DEM) simulation (see Sec.~\ref{limit}) of biaxial compression \citep{Tordesillas:2007aa}. The granular system in this figure is \emph{polydisperse}, as it has more than two types of particles. (In this case, the particles have different sizes.) If all particles are identical in a granular system, it is called \emph{monodisperse}; if there are two types of particles in a system, it is called \emph{bidisperse}. In practice, although the presence or absence of a contact is definitive only in computer simulations, one can set reasonable thresholds and perform similar measurements in experiments \citep{Majmudar:2007aa} (see Sec.~\ref{limit}). 
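When contacts must be inferred from particle positions and radii, a common approximation (sketched below in our own hedged illustration; the tolerance value is arbitrary) is to declare a contact whenever the gap between two particles is smaller than a small threshold:

```python
import math

def contact_adjacency(centers, radii, tol=1e-6):
    """Approximate binary contact-network adjacency matrix for disks/spheres."""
    n = len(centers)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            gap = math.dist(centers[i], centers[j]) - (radii[i] + radii[j])
            if gap <= tol:
                A[i][j] = A[j][i] = 1  # symmetric, unweighted, undirected
    return A

# Three disks: 0 and 1 touch; 2 is far away.
centers = [(0.0, 0.0), (2.0, 0.0), (10.0, 0.0)]
radii = [1.0, 1.0, 1.0]
A = contact_adjacency(centers, radii)
print(A[0][1], A[0][2], A[1][2])  # 1 0 0
```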
It is also important to note that packing geometry and the resulting contact network do not completely define a granular system on their own. In particular, one can associate a given geometrical arrangement of particles with several configurations of inter-particle forces that satisfy force and torque balance constraints and the boundary conditions of a system \citep{Snoeijer-2004-FNE,Snoeijer-2004-ETF,Tighe2010a,Kollmer2017}. This is a crucial concept to keep in mind when conducting investigations based only on contact networks, and it also motivates the inclusion of contact forces to construct more complete network representations of granular systems (see Sec.~\ref{s:force_weighted}). In the remainder of this subsection, we review some of the network-based approaches for characterizing contact networks of granular materials and how these approaches have been used to help understand the physical behavior of granular matter. We primarily label the following subsubsections according to the type of employed methodology. However, we also include some subsubsections about specific applications to certain systems. \begin{figure}[t!] \centering \includegraphics[width=0.65\textwidth]{Walker2010aa_Fig3_small.pdf} \caption{\textbf{An example of a contact network from a discrete-element-method (DEM) simulation of a densely-packed 2D system of polydisperse, disk-shaped particles.} In this case, the granular material was subjected to biaxial compression and constrained to move along a plane. This snapshot corresponds to a network at an axial strain before full shear-band formation. [We adapted this figure, with permission, from \citep{Walker:2010aa}.]} \label{f:contact_network} \end{figure} \subsubsection{Coordination number and node degree.} \label{s:kz} One can study a contact network in several ways to investigate different features of a granular system. 
We begin our discussion by associating the mean node degree of a contact network with the familiar and well-studied \emph{coordination number} (i.e., \emph{contact number}) $Z$. Although early investigations of granular materials did not consciously make this connection, the mean degree and coordination number are synonymous quantities. The degree $k_i$ of particle $i$ is the number of particles with which $i$ is directly in contact, and one can calculate it easily from an adjacency matrix [see Eq.~\eqref{eq:node_degree}]. A contact network is undirected, so its adjacency matrix ${\bf A}$ is symmetric, and its row sum and column sum each yield a vector of node degrees. The mean degree $\langle k \rangle$ of a contact network [Eq.~\eqref{eq:avg_degree}] is then what is usually known as the mean coordination number (i.e., contact number) $Z$, and it gives the mean number of contacts per particle. As we noted previously (see Sec.~\ref{degree}), $Z$ is an important quantity in granular systems because of its connection with mechanical stability and rigidity --- which, loosely speaking, is the ability of a system to withstand deformations --- in these systems and its characterization of the jamming transition \citep{Liu:2010aa,Liu:2011aa} and other mechanical properties. In particular, the condition for mechanical stability --- i.e., the condition to have all translational and rotational degrees of freedom constrained so that there is force and torque balance --- in a packing of frictionless spheres in $d$ dimensions \citep{Alexander:1998aa,VanHecke2010,Henkes:2010aa,Liu:2010aa,Liu:2011aa} is \begin{equation} Z \geq 2d \equiv Z_{\mathrm{iso}}\,. \end{equation} The isostatic number $Z_{\mathrm{iso}}$ indicates the condition for \emph{isostaticity}, which is defined as the minimum contact number that is needed for mechanical stability. 
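As a minimal illustration (our own sketch; the 4-particle adjacency matrix is a toy example), one can compute $Z$ as the mean degree from row sums of the adjacency matrix and compare it with the isostatic values; the frictional threshold $Z^{m}_{\mathrm{iso}} = (d+1) + 2N_{m}/d$ for frictional spheres, discussed next, is included for completeness:

```python
def mean_coordination(A):
    """Mean coordination number Z = mean node degree of the contact network."""
    n = len(A)
    return sum(sum(row) for row in A) / n  # row sums give node degrees

def z_iso_frictionless(d):
    return 2 * d  # Z_iso = 2d for frictionless spheres in d dimensions

def z_iso_frictional(d, N_m):
    return (d + 1) + 2.0 * N_m / d  # Z_iso^m = (d+1) + 2 N_m / d

# Toy 2D packing: 4 particles in a fully connected contact network.
A = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
Z = mean_coordination(A)
print(Z, Z >= z_iso_frictionless(2))  # 3.0 False: below the isostatic value
```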
One can use the coordination number (which is often tuned by changing the packing fraction $\phi$) as an order parameter to describe the jamming transition for frictionless spheres in two and three dimensions \citep{Liu:2010aa, VanHecke2010,Henkes:2010aa}. Specifically, there is a critical packing fraction $\phi_{c}$ such that below $\phi_{c}$, the contact number for these systems is $Z = 0$ (i.e., there are no load-bearing contacts), and at the critical packing fraction $\phi_{c}$, the contact number jumps to the critical value $Z_c = Z_{{\mathrm iso}} = 2d$. One can also generalize the use of the coordination number in order to examine mechanical stability and jamming in granular systems of frictional spheres. In these systems, the condition for stability is \begin{align} Z &\geq Z^{m}_{{\mathrm{iso}}}\,, \notag \\ Z^{m}_{{\mathrm{iso}}} &\equiv (d+1) + \frac{2N_{m}}{d}\,, \end{align} where $N_{m}$ is the mean number of contacts that have tangential forces $f_{t}$ equal to the so-called \emph{Coulomb threshold} --- i.e., $N_{m}$ is the mean number of contacts with $f_{t} = \mu f_{n}$, where $\mu$ is the coefficient of friction and $f_{n}$ is the normal force \citep{Shundyak:2007aa,Henkes:2010aa,VanHecke2010} --- and $Z^{m}_{{\mathrm{iso}}}$ again designates the condition for isostaticity. Results from experimental systems have demonstrated that contact number also characterizes the jamming transition in frictional, photoelastic disks \citep{Majmudar:2007aa}. The coordination number has been studied for several years in the context of granular materials and jamming, and it is fruitful to connect it directly with ideas from network science. Several recent studies have formalized the notion of a \emph{contact network}, and they deliberately modeled granular systems as such networks to take advantage of tools like those described in Sec.~\ref{s:network_measures}. 
Such investigations of contact networks allow one to go beyond the coordination number and further probe the rich behavior and properties of granular materials --- including stability and the jamming transition \citep{Liu:2010aa}, force chains \citep{Liu:1995aa,Mueth:1998a,Howell1999}, and acoustic propagation \citep{Digby:1981,Makse-1999-WEM,Owens:2011,bassett2012influence}. Perhaps the simplest expansion of investigations into the role of the coordination number is the study of the degree distribution $P(k)$ of the contact network of a packing. Calculating degree distributions can provide potential insights into possible generative mechanisms of a graph \citep{Albert2002,newman2010networks}, although one has to be very careful to avoid over-interpreting the results of such calculations \citep{stumpf2012critical}. In granular physics, it has been observed that the degree distribution of a contact network can track changes in network topology past the jamming transition in isotropically compressed simulations of a 2D granular system \citep{Arevalo:2010aa}. Specifically, the peak of $P(k)$ shifts from a lower value of $k$ to a higher value of $k$ near the transition. Moreover, changes in the mean degree $\langle k \rangle$ and its standard deviation can anticipate the onset of different stages of deformation in DEM (i.e., discrete-element-method) simulations of granular systems under various biaxial compression tests \citep{Walker:2010aa, Tordesillas:2010a}. \subsubsection{Investigating rigidity of a granular system using a contact network.} \label{rigidity} An important area of research in granular materials revolves around attempts to \emph{(1)} understand how different types of systems respond when perturbed, and \emph{(2)} determine what features of a system improve its integrity under such perturbations.
As we noted in Sec.~\ref{s:kz}, it is well-known that coordination number (and hence node degree) is a key quantity for determining mechanical stability and understanding jamming in granular materials. However, contact networks obviously have many other structural features, and examining them can be very helpful for providing a more complete picture of these systems. To the best of our knowledge, the stability of granular materials was first studied from a graph-theoretic standpoint in the context of structural rigidity \citep{Jacobs1995,Duxbury1999,Thorpe:1999aa}, and it has since been applied to amorphous solids more generally \citep{Wyart:2005a}. In structural rigidity theory, thought to have been studied originally by James Clerk Maxwell \citep{Maxwell:1864aa}, rods of fixed length are connected to one another by hinges, and one considers the conditions under which the associated structural graphs are able to resist deformations and support applied loads (see Fig.~\ref{f:structural_rigidity}). A network is said to be \emph{minimally rigid} (or \emph{isostatic}) when it has exactly the number of bars needed for rigidity. This occurs when the number of constraints is equal to the number of degrees of freedom in the system (i.e., when \emph{Laman's minimal-rigidity criterion} is satisfied). The network is flexible if there are too few rods, and it is overconstrained (i.e., self-stressed) if there are more rods than needed for minimal rigidity. Triangles are the smallest isostatic structures in two dimensions \citep{Laman:1970aa, Asimow:1978aa, Crapo:1979aa}; there are no allowed motions of the structure that preserve the lengths and connectivity of the bars, so triangles (i.e., 3-cycles) do not continuously deform due to an applied force. In comparison, a 4-cycle is structurally flexible and can continuously deform from one configuration to another while preserving the lengths and connectivity of the rods (see Fig.~\ref{f:structural_rigidity}). 
\begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{structural_rigidity.jpg} \caption{\textbf{Some ideas from structural rigidity.} \emph{(a)} An example of a 3-cycle and a 4-cycle that one can examine using concepts from structural rigidity theory by considering the edges to be rods of fixed length that are connected to one another by rotating hinges. \emph{(b)} Triangular structures are rigid under a variety of applied forces (represented as red arrows), whereas \emph{(c)} squares can deform under such perturbations. By Laman's theorem, a 2D network with $N$ nodes is minimally rigid if it has exactly $2N-3$ edges, and each of its subgraphs satisfies the analogous constraint (so an $\tilde{N}$-node subgraph has no more than $2\tilde{N}-3$ edges). The network in panel (b) satisfies this criterion, but the network in panel (c) does not. } \label{f:structural_rigidity} \end{figure} Extending beyond a traditional network of rods and hinges, concepts from structural rigidity yield interesting insights into contact networks of particulate matter. See \citep{Guyon:1990aa} for a discussion of some of the earliest applications of such ideas to disordered systems and granular materials. \citet{Moukarzel:1998aa} used structural rigidity theory to derive conditions for the isostaticity of a granular packing; and he tied the fact that random packings of particles tend to be isostatic to the origin of instabilities in granular piles. Later, similar concepts were used to show that in granular networks, cycles with an even number of edges allow contacting grains to roll without slipping when subject to shear; however, these relative rotations are ``frustrated'' in cycles with an odd number of edges, so such cycles can act as stabilizing structures in a network \citep{rivier2006extended}.
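The edge count in Laman's criterion is simple to check programmatically. The sketch below verifies only the global count; establishing rigidity in practice also requires the analogous bound on every subgraph, which is what pebble-game algorithms test:

```python
def laman_count_ok(num_nodes, num_edges):
    """Global Laman count for minimal rigidity of a 2D bar-joint framework:
    exactly 2N - 3 edges. Necessary but not sufficient on its own: each
    subgraph must also satisfy the analogous bound."""
    return num_edges == 2 * num_nodes - 3

# A triangle (N = 3, E = 3) meets the count; a 4-cycle (N = 4, E = 4) falls
# short of the 2 * 4 - 3 = 5 edges needed, so it is flexible.
print(laman_count_ok(3, 3), laman_count_ok(4, 4))
```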
Several later studies (such as \citep{Smart:2008aa,Tordesillas:2010aa,Walker:2010aa,Tordesillas:2010a,Tordesillas:2011aa,Tordesillas:2014a,Walker:2015b,Tordesillas:2010b,Arevalo:2010aa,Arevalo:2010ba,Walker:2014c}) have confirmed that contact loops are often stabilizing mesoscale features in a contact network of a granular material. We specifically consider the role of cycles in granular contact networks in Sec.~\ref{s:role_of_cycles}. Another type of network approach for understanding rigidity in granular systems is rigidity percolation \citep{Thorpe1985a,Duxbury1999,Thorpe:1999aa} (see Sec.~\ref{s:largest_comp}). \citet{Feng1985} conducted an early investigation of an idealized version of bond percolation in a granular context. It is now known that hallmarks of this bond-percolation transition occur below isostaticity: \citet{Shen:2012a} found that a percolating (i.e., system-spanning) cluster of non-load-bearing contacts forms at a packing density below the jamming point. In modern contexts, the rigidity-percolation approach can be used to determine if a network is both percolating and rigid (see Sec.~\ref{s:largest_comp}). Note that a rigid granular network is also percolating, but a percolating network need not be rigid. Rigidity percolation relies on tabulating local constraints via a \emph{pebble game} \citep{Jacobs1995}, which reveals connected, rigid regions (sometimes called ``clusters'') in a network. In a series of papers \citep{Jeng2008,Jeng2010,Cao2012,Lopez:2013aa} on simulated packings, Schwarz and coworkers went beyond Laman's minimal-rigidity criterion to investigate local versus global rigidity in a network, the size distribution of rigid clusters, the important role of spatial correlations, and the necessity of force balance.
Building on the above work, \citet{Henkes2016} recently utilized a rigidity-percolation approach to identify floppy versus rigid regions in slowly sheared granular materials and to characterize the nature of the phase transition from an underconstrained system to a rigid network. See also the recent paper \citep{heroy2017}. \subsubsection{Exploring the role of cycles.} \label{s:role_of_cycles} We now consider the role of circuits (i.e., the conventional network notion of cycles, which we discussed in Sec.~\ref{cycles}) in granular contact networks. Cycles in a contact network can play crucial stabilizing roles in several situations. Specifically, as we will discuss in detail in this section, simulations (and some experiments) suggest that \emph{(1)} odd cycles (especially 3-cycles) can provide stability to granular materials by frustrating rotation among grains and by providing lateral support to surrounding particles, and that \emph{(2)} a contact network loses these stabilizing structures as the corresponding granular system approaches failure. Noting that 3-cycles are the smallest arrangement of particles that can support (via force balance) a variety of 2D perturbations to a compressive load without deforming the contact structure, \citet{Smart:2008aa} studied the effects of friction and tilting on the evolution of contact-loop organization in a granular bed. In their simulations, they implemented tilting by incrementally increasing the angle of a gravity vector with respect to the vertical direction, while preserving the orientation of the granular bed and maintaining quasistatic conditions. In untilted granular packings, they observed that lowering inter-particle friction yields networks with a higher density of 3-cycles and 4-cycles, where they defined the ``density'' of an $l$-cycle to be the number of $l$-cycles divided by the total number of particles.
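The 3-cycle density just described is easy to compute from an adjacency matrix, because $\mathrm{tr}({\bf A}^3)$ counts each triangle six times (three starting nodes, two traversal directions). A minimal sketch with a fabricated four-grain contact matrix:

```python
import numpy as np

# Hypothetical contact matrix: grains 0, 1, 2 form a triangle, and grain 3
# touches only grain 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

N = len(A)
# tr(A^3) counts closed 3-walks, i.e., 6 per triangle.
n_tri = int(np.trace(np.linalg.matrix_power(A, 3))) // 6
density_3 = n_tri / N        # 3-cycle density: triangles per particle
print(n_tri, density_3)
```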
By examining the contact network as a function of tilting angle, \citet{Smart:2008aa} also observed that the density of 4-cycles increases prior to failure --- likely due to the fracture of stabilizing 3-cycles --- and that this trend was distinguishable from changes in coordination number alone. Cycles have also been studied in the context of DEM simulations of dense, 2D granular assemblies subject to quasistatic, biaxial compression tests \citep{Tordesillas:2010aa, Walker:2010aa, Tordesillas:2011aa,Tordesillas:2010a}. In many of these studies, the setup consists of a collection of disks in 2D that are compressed slowly at a constant strain rate in the vertical direction, while allowed to expand under constant confining pressure in the horizontal direction \citep{Tordesillas:2010aa, Walker:2010aa, Tordesillas:2011aa}. In another variation of boundary-driven biaxial compression, a sample can be compressed with a constant volume but a varying confining pressure \citep{Tordesillas:2010a}. Before describing specifics of the network analysis for these systems, it is important to note that for the previously described conditions, the axial strain increases in small increments (i.e., ``steps'') as compression proceeds, and one can extract the inter-particle contacts and forces at each strain value during loading to examine the evolution of a system as a function of strain. Additionally, these systems undergo a change in behavior from a solid-like state to a liquid-like state and are characterized by different regimes of deformation as a function of increasing axial strain \citep{Tordesillas:2007aa}. In particular, the granular material first undergoes a period of \emph{strain hardening}, followed by \emph{strain softening}, after which it enters a \emph{critical state}. In the strain-hardening regime, the system is stable and the shear stress increases monotonically with axial strain up to a peak value.
After the peak shear stress, strain softening sets in; this state is marked by a series of steep drops in the shear stress that indicate reduced load-carrying capacity. Finally, in the critical state, a persistent shear band has fully formed, and the shear stress fluctuates around a steady-state value. The shear band is a region of localized deformation and gives one signature of material failure \cite{Oda:1998a}. Inside the shear band, force chains can both form and buckle \citep{Tordesillas:2009b}. One can also associate increases in the energy dissipation rate of the system with particle rearrangements (such as those that occur during force-chain buckling) and loss of stability in the material. \begin{figure}[t!] \centering \includegraphics[width=0.75\textwidth]{walker2010aa_Fig6_Fig7_Fig8.jpg} \caption{\textbf{Evolution of cycles in a deforming granular material.} \emph{(a)} One can track a subset of particles and their corresponding contact network from a DEM simulation for increasing axial strain values $|\varepsilon_{22}|$. The system consists of 5098 spherical, polydisperse particles that were subjected to a quasistatic, biaxial compression test. At the smallest displayed axial strain, the set of particles in this figure yields a network that is composed of 3-cycles and 4-cycles. During loading, contacts are lost and longer cycles arise until only a single 9-cycle remains. \emph{(b)} One way to quantify these structural changes is by calculating the global clustering coefficient $C$ (solid curve), which undergoes a sharp drop at peak stress, signifying the onset of material failure. (The dashed curve shows the standard deviation of the distribution of local clustering coefficients $C_{i}$). \emph{(c)} A decrease in mean subgraph centrality $Y$ (solid curve) also illustrates the loss of short cycles during deformation. 
More specifically, the mean network bipartivity $R$ (dashed curve) increases with axial strain, highlighting that, during loading, closed walks of even (respectively, odd) length contribute more (respectively, less) to the mean subgraph centrality of the contact network. [We adapted this figure, with permission, from \citep{Walker:2010aa}.] } \label{f:cycle_evolution} \end{figure} Examining the temporal evolution of cycles in an evolving granular contact network can reveal important information about changes that occur in a material during deformation. Using DEM simulations (see the previous paragraph), \citet{Tordesillas:2010aa} computed the total number of cycles of different lengths in a minimal cycle basis (see Sec.~\ref{cycles}) of a contact network at each strain state during loading, and they observed that there are many more 3-cycles and 4-cycles than longer cycles in the initial, solid-like state of the system. However, as axial strain increases and one approaches the maximum shear stress, the total number of 3-cycles falls off steeply. (The same is true for 4-cycles, though it is less dramatic.) Additionally, during axial-strain steps (i.e., axial-strain ``intervals'') corresponding to drops in shear stress, \citet{Tordesillas:2010aa} observed large increases in the number of 3-cycles and 4-cycles that open up to become longer cycles. In Fig.~\ref{f:cycle_evolution}\emph{a}, we show an example of the evolution of cycle organization with increasing axial strain for a subset of particles from a DEM simulation of a granular material under biaxial compression, carried out by \citet{Walker:2010aa}. The authors observed that in this system, both the global clustering coefficient $C$ [Eq.~\eqref{eq:global_cluster}] and the mean subgraph centrality $Y$ decrease with increasing axial strain, drop sharply at peak shear stress, and then level out (see Fig.~\ref{f:cycle_evolution}\emph{b,c}). 
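The diagnostics just mentioned --- the global clustering coefficient $C$, the mean subgraph centrality $Y$, and the network bipartivity $R$ --- are straightforward to compute spectrally from an adjacency matrix. A minimal sketch (the four-grain contact matrix is fabricated for illustration): $C$ from closed 3-walks, $Y$ from the eigenvalues of ${\bf A}$, and $R$ as the even-walk share of $Y$:

```python
import numpy as np

# Hypothetical contact matrix: a triangle (grains 0, 1, 2) with a pendant grain 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
N = len(A)

# Global clustering coefficient: closed 3-walks over connected triples,
# C = tr(A^3) / sum_i k_i (k_i - 1).
k = A.sum(axis=1)
C = np.trace(np.linalg.matrix_power(A, 3)) / (k * (k - 1)).sum()

# Spectral forms: mean subgraph centrality Y = tr(exp(A)) / N, and
# bipartivity R = tr(cosh(A)) / tr(exp(A)), the even-closed-walk share of Y.
lam = np.linalg.eigvalsh(A)
Y = np.exp(lam).sum() / N
R = np.cosh(lam).sum() / np.exp(lam).sum()
print(C, Y, R)    # R = 1 only for bipartite (odd-cycle-free) networks
```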
Recalling that $C$ is a measure of triangle density in a graph and that subgraph centrality measures participation of nodes in closed walks (with more weight given to shorter walks), these results also imply that the loss of small cycles co-occurs with the deformation and failure of a system due to increasing load. \citet{Walker:2010aa} also computed the network bipartivity $R$ \citep{Estrada:2005ab} of the contact network to quantify the contribution to mean subgraph centrality $Y$ from closed walks of even length [see Eq.~\eqref{eq:bipartivity}]. They observed that $R$ increases with increasing axial strain, revealing that closed walks of even length become more prevalent during loading (see Fig.~\ref{f:cycle_evolution}\emph{c}). The authors suggested that this trend may be due to a decrease in the prevalence of 3-cycles (which are stabilizing, as discussed in Sec.~\ref{cycles} and elsewhere). \citet{Tordesillas:2011aa} also examined the stability of cycles of various lengths in both DEM simulations and experimental data, and they observed that, during loading, 3-cycles tend to be more stable (as quantified by a measure of stability based on a structural-mechanics framework \citep{Bagi:2007a}) than cycles of other lengths in a minimal cycle basis of the network. Minimal cycle bases and the easier-to-compute subgraph centrality have also been used to examine fluctuations in kinetic energy in simulations of deforming sand. \citet{Tordesillas:2014a} computed a minimal cycle basis and then constructed cycle-participation vectors (see Sec.~\ref{cycles}) from a contact network after each strain step (i.e., at each strain state) during loading. They observed that temporal changes in the cycle-participation vectors of the particles between consecutive strain steps are correlated positively with temporal changes in kinetic energy over those steps.
They also observed that large values in the temporal changes of particle cycle-participation vectors and particle subgraph centrality occur in the shear-band region. \citet{Walker:2015b} also studied a minimal cycle basis and corresponding cycle-participation vectors to examine structural transitions in a 3D experimental granular system of hydrogel spheres under uniaxial compression. As pointed out in \citep{Tordesillas:2014a}, developing quantitative predictors that are based on topological information alone is extremely important for furthering understanding of how failure and rearrangements occur in systems in which energy or force measurements are not possible. Examining cycles in contact networks can also shed light on the behavior of force chains. The stability, load-bearing capacity, and buckling of force chains depend on neighboring particles (so-called \emph{spectator grains}) to provide lateral support \citep{Radjai:1998aa, cates1999jamming,Tordesillas:2009aa}. Because 3-cycles appear to be stabilizing features, it is interesting to consider the co-evolution of force chains and 3-cycles in a contact network. Such an investigation requires a precise definition of what constitutes a force chain, so that it is possible to \emph{(1)} extract these structures from a given packing of particles and \emph{(2)} characterize and quantify force-chain properties. Several definitions of force chains have been proposed; see, e.g., \citep{Cates:1998aa, Howell1999, Peters:2005aa,bassett2015extraction}. The studies that we describe in the next three paragraphs used a notion of ``force chains'' from \citep{Peters:2005aa,Muthuswamy2006a}, in which force chain particles are identified based on their particle-load vectors (where each particle is associated with a single particle-load vector that indicates the main direction of force transmission). 
More specifically, a single chain is a set of three or more particles for which the magnitude of each of their particle-load vectors is larger than the mean particle-load vector magnitude over all particles, and for which the directions of the particle load vectors are, within some tolerance, aligned with one another (i.e., they are ``quasilinear''). We note that an important point for future work is to conduct network-based studies of force-chain structure for different definitions of force chains, and to investigate if there are qualitative differences in their associated network properties. Using DEM simulations of a densely packed system of polydisperse disks under biaxial loading --- i.e., compressed quasistatically at a constant strain rate in the vertical direction, while allowed to expand under constant confining pressure in the horizontal direction --- \citet{Tordesillas:2010aa} quantified the co-evolution of force chains and 3-cycles in several ways. For example, they computed a minimal cycle basis (see Sec.~\ref{cycles}) of a contact network and then examined \emph{(1)} the ratio of 3-cycles to the total number of cycles in which particles from a force chain participate and \emph{(2)} the force chain's 3-cycle \emph{concentration}, which is defined as the ratio of 3-cycles involving force-chain particles to the total number of particles in the force chain. When averaged over all force chains, the above two measures decrease rapidly with increased loading. Additionally, \citet{Tordesillas:2010aa} observed that force chains that do not fail by buckling (see \citep{Tordesillas:2007aa} for how ``buckling'' was defined) have a larger ratio of 3-cycle participation to total cycle participation than force chains that do buckle.
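To illustrate the ingredients of the particle-load-vector definition of force chains --- and only the ingredients, as the full chain-extraction algorithm of \citep{Peters:2005aa} involves more bookkeeping --- the following sketch filters particles by above-average load magnitude and tests pairwise quasilinearity. The load vectors and the tolerance are invented for the example:

```python
import numpy as np

# Hypothetical particle-load vectors (one 2D vector per particle, pointing
# along the main direction of force transmission; values are made up).
load = np.array([[0.1, 0.9], [0.05, 1.1], [0.0, 1.2], [0.4, 0.1], [0.2, 0.15]])

mag = np.linalg.norm(load, axis=1)
candidates = np.where(mag > mag.mean())[0]   # above-average load magnitude

def quasilinear(i, j, tol_deg=45.0):
    """True if the load directions of particles i and j align within tol_deg."""
    cos = load[i] @ load[j] / (mag[i] * mag[j])
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angle <= tol_deg

print(candidates, quasilinear(0, 1), quasilinear(0, 3))
```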
\citet{Tordesillas:2011aa} observed, in both DEM simulations of biaxial loading (see above) and 2D photoelastic disk experiments under pure shear, that a particular measure (developed by \citet{Bagi:2007a}) of structural stability of force chains is correlated positively with the mean of the local clustering coefficient [Eq.~\eqref{eq:local_cluster}] over force-chain particles. Their results also suggest that 3-cycles are more stable structures than cycles of longer length during loading and that force chains with larger 3-cycle participation tend to be more structurally stable. These observations suggest that cycles --- and especially 3-cycles --- in contact networks are stabilizing structures that can provide lateral support to force chains. It would be interesting to study these ideas further, and to relate them to structural rigidity theory (see Fig.~\ref{f:structural_rigidity} and Sec.~\ref{rigidity}), especially in light of the difference between 3-cycles (which are rigid) and deformable cycles (e.g., 4-cycles). DEM simulations of 3D, ellipsoidal-particle systems subject to triaxial compression also suggest that 3-cycles are important features in granular contact networks \citep{Tordesillas:2010b}. Similar to the aforementioned results from simulations of 2D systems with disk-shaped particles, the number of 3-cycles in a minimal cycle basis of the contact networks (and the global clustering coefficient [Eq.~\eqref{eq:global_cluster}]) initially decrease and then saturate with increasing load, and particles in force chains have a larger number of 3-cycles per particle than particles that are not in force chains. \citet{Tordesillas:2010b} also observed that the set of 3-cycles that survive throughout loading tend to lie outside the strain-localization region (where force chains buckle). The dearth of 3-cycles in certain regions in a material may thus be a signature of strain-localization zones. 
Another paper to note is that of \citet{Tordesillas:2010a}, which examined and compared the temporal evolution of cycles (and several other contact-network diagnostics) in a set of DEM simulations using a variety of different material properties and boundary conditions. In another interesting study, \citet{Walker:2014c} examined the phenomenon of \emph{aging} \citep{kob1997aging,kabla2004contact} --- a process in which the shear strength and stiffness of a granular material increase with time --- in collections of photoelastic disks subject to multiple cycles of pure shear under constant volume. Because aging is a slow process, it can be difficult both to uncover meaningful temporal changes in dynamics and to characterize important features in packing structure that accompany aging. To overcome these challenges, \citet{Walker:2014c} first analyzed the time series of the stress ratio (using techniques from dynamical-systems theory) to uncover distinct temporal changes in the dynamics of the system. (See \citep{Walker:2014c} for details.) After each small, quasistatic strain step, they also extracted the contact network of the packing at that time to relate aging to changes in topological features of the network structure. As one approaches the shear-jammed regime during prolonged cyclic shear, they observed on average that force chains are associated with more 3-cycles and 4-cycles from the minimal cycle basis. \begin{figure} \centering \includegraphics[width=0.50\textwidth]{Arevalo2010aa_Fig5.jpg} \caption{\textbf{The number of $l$-cycles in a contact network versus the packing fraction $\phi$ from a DEM simulation of isotropic compression of a granular system.} Each color represents a cycle of a different length $l$: 3-cycles (black squares), 4-cycles (red circles), 5-cycles (green triangles), 6-cycles (dark-blue inverted triangles), and 7-cycles (cyan diamonds).
As the packing transitions from a fluid-like state to a solid-like one, there is an increase in the number of cycles in the contact network near the critical packing fraction $\phi_{c}$. In addition, while 3-cycles continue to grow in number after the transition, cycles of longer lengths slowly decrease. The inset shows the same data on a semi-logarithmic plot. [We adapted this figure, with permission, from \citep{Arevalo:2010aa}.]} \label{f:jamming_triangles} \end{figure} We have just discussed many papers that concern transitions in granular matter from a solid-like regime to a liquid-like regime. One can also use changes in the loop structure of a contact network to describe the opposite transition, in which a granular material goes from an underconstrained, flowing state to a solid-like state during the process known as jamming (see Sec.~\ref{s:kz}). Studying 2D frictional simulations of isotropically compressed granular packings, \citet{Arevalo:2010aa} examined a granular contact network as a function of packing fraction. They observed that the number of cycles (which were called \textit{polygons} in \citep{Arevalo:2010aa}) in the contact network grows suddenly when the packing fraction approaches the critical value $\phi_{c}$ that marks the transition to a rigid state (see Fig.~\ref{f:jamming_triangles}). They also observed that 3-cycles appear to be special: they continue to grow in number above the jamming point, whereas longer cycles slowly decrease in number after jamming. Although they observed a nonlinear relationship near the jamming point between $Z$ (the contact number, which is the usual order parameter for the jamming transition) and the number of 3-cycles \citep{Arevalo:2010ba}, these quantities appear to depend linearly on each other after the transition. These results suggest that one can use the evolution of contact loops to understand the transition to a rigid state and to characterize subsequent changes in the system. 
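Tallying cycle lengths, as in the studies above, requires a cycle basis of the contact network. Those studies use a \emph{minimal} cycle basis; as a simpler self-contained illustration, the sketch below tallies lengths in a \emph{fundamental} cycle basis (one cycle per non-tree edge of a spanning tree), on an invented network of two triangles joined by an edge:

```python
from collections import deque, Counter

# Hypothetical contact network as an adjacency list: two triangles joined by
# the edge (2, 3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}

# Build a BFS spanning tree rooted at node 0.
parent = {0: None}
queue = deque([0])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in parent:
            parent[v] = u
            queue.append(v)

def path_to_root(u):
    path = []
    while u is not None:
        path.append(u)
        u = parent[u]
    return path

# Each non-tree edge closes exactly one fundamental cycle; tally the lengths.
lengths = []
seen = set()
for u in adj:
    for v in adj[u]:
        edge = frozenset((u, v))
        if edge in seen or parent[u] == v or parent[v] == u:
            continue  # already handled, or a tree edge
        seen.add(edge)
        pu, pv = path_to_root(u), path_to_root(v)
        lca = next(w for w in pu if w in set(pv))  # lowest common ancestor
        lengths.append(pu.index(lca) + pv.index(lca) + 1)

cycle_counts = Counter(lengths)   # {cycle length: number of basis cycles}
print(cycle_counts)
```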
\begin{figure} \centering \includegraphics[width=0.75\textwidth]{Arevalo2013_edited.jpg} \caption{\textbf{Using a contact network to distinguish states of the same packing fraction in simulations of tapped granular materials.} \emph{(a)} Packing fraction $\phi$ versus tap intensity $\Gamma$. The horizontal lines connect states at the same packing fraction that were obtained using different tap intensities. \emph{(b)} A section of one of the packings. \emph{(c)} The mean value of the ``bond orientational order parameter'' (or simply ``bond order parameter'') as a function of tap intensity. The bond order parameter, which is often used to quantify local and long-range crystalline structure in a material \citep{Steinhardt:1983a}, was computed on each subgraph of particles defined by a central particle and the set of its neighbors within a distance of 1.2 particle diameters, and it was then averaged over all such subgraphs to obtain a mean value. \emph{(d)} The mean value of the bond order parameter as a function of packing fraction, where the arrows indicate the direction of increasing $\Gamma$. It is difficult to differentiate between different states at the same packing fraction using this quantity. \emph{(e, f)} The same as panels (\emph{c,d}), but now the vertical axis is the number of 3-cycles in the contact network. Calculating the number of 3-cycles successfully separates different states of the system with the same packing fraction. [We adapted this figure, with permission, from \citep{Arevalo:2013aa}.]} \label{f:contact_tapping} \end{figure} ~\\ \noindent \emph{Application to tapped granular materials.} \label{s:tapping_contact} Properties of contact networks have also been used to study \emph{tapped granular materials}, in which a packing of grains is subject to external pulses of excitation against it. In most studies of tapped granular materials, the packing and pulses are both vertical. 
The intensity $\Gamma$ of these mechanical perturbations (so-called ``taps'') is usually quantified as a dimensionless ratio of accelerations, such as the ratio of the peak acceleration of the excitation to the acceleration of gravity \cite{Nowak:1998a,Pugnaloni:2010aa}. Tapped granular materials are interesting because the packing fraction $\phi$ is not a monotonic function of the tapping intensity $\Gamma$ \citep{Pugnaloni:2008aa, Gago:2009aa,Carlevaro:2011a}. It reaches a minimum value $\phi_{\mathrm{min}}$ at an intensity of $\Gamma_{\mathrm{min}}$, and it then increases as the tap intensity increases (see Fig.~\ref{f:contact_tapping}\emph{a}). Consequently, one can achieve steady states with the same packing fraction by using different tap intensities (i.e., both a ``low'' tap intensity, which is smaller than $\Gamma_{\mathrm{min}}$, and a ``high'' tap intensity, which is larger than $\Gamma_{\mathrm{min}}$). These steady states are not equivalent to each other, as they have different force-moment tensors \citep{Pugnaloni:2010aa}, for example. An interesting question is thus the following: What features of a granular packing distinguish between states at the same packing fraction that are reached by using different tap intensities? Recent work has suggested that properties of contact networks --- especially cycles (which, in this case, are particle contact loops) --- can distinguish between steady-state configurations at the same packing fraction but that are generated from different tap intensities in simulated 2D granular packings subjected to tapping \citep{Arevalo:2013aa, Arevalo:2013ba} (see Fig.~\ref{f:contact_tapping}\emph{b}). For example, as $\Gamma$ is increased in the regime $\Gamma < \Gamma_{\mathrm{min}}$, the number of 3-cycles (i.e., triangles) and the number of 4-cycles (i.e., squares) both decrease.
As $\Gamma$ is increased in the regime $\Gamma > \Gamma_{\mathrm{min}}$, the opposite trend occurs, so the numbers of 3-cycles and 4-cycles increase. This makes it possible to differentiate configurations at the same $\phi$ obtained from low and high tap intensities. (See Fig.~\ref{f:contact_tapping}\emph{e,f} for a plot of the number of triangles versus $\Gamma$ and $\phi$.) However, geometrical measures like the pair-correlation function, distributions of Voronoi tessellation areas, or bond orientational order parameters do not seem to be as sensitive to differences in these two different states of the system (see Fig.~\ref{f:contact_tapping}\emph{c,d}), perhaps because they quantify only local proximity rather than directly examining contacts. (See \cite{Arevalo:2013aa} and references therein for details of these descriptors.) These results suggest that topological features (e.g., mesoscale features) of a contact network can capture valuable information about the organization of granular packings. \subsubsection{Other subgraphs in contact networks.} \label{subgraph} When studying contact networks, it can also be helpful to explore network motifs other than cycles. Recall from Sec.~\ref{s:motifs} that motifs are subgraphs that appear often in a network (e.g., in comparison to a null model) and are thus often construed as being related to system function \citep{Milo:2002a,itzkovitz2005subgraphs,alon2007network,shoval2010snap}. Network motifs, which traditionally refer to small subgraphs, are a type of mesoscale feature, and it can be insightful to examine how their prevalences change in a granular material as it deforms. One system in which motifs and their dynamics have been studied is frictional, bidisperse, photoelastic disks subject to quasistatic cyclic shear \citep{Tordesillas:2012aa}. After each small strain increment (i.e., strain step) in a shear cycle, the authors considered the contact network of the granular packing. 
For each particle $i$ in the contact network, they extracted the subgraph of particles (nodes) and contacts (edges) formed by the central particle $i$ and particle $i$'s contacting neighbors. This process results in a set of $N$ subgraphs\footnote{In some degenerate cases (e.g., when there is a network component that consists of a clique), this includes duplications of subgraphs, even when the nodes of a network are labeled.} (which, borrowing terminology from \citep{Tordesillas:2012aa}, we call \textit{conformation subgraphs}), where $N$ is the number of particles in the network. To examine packing rearrangements as a system is sheared, \citep{Tordesillas:2012aa} represented each distinct conformation subgraph present at any time during loading as one ``state" in a Markov transition matrix, and they studied transitions between the conformation subgraphs as a discrete-time Markov process. More specifically, each element in the $n_{c} \times n_{c}$ transition matrix (where $n_{c}$ is the total number of unique conformation subgraphs and hence the number of unique states) captured the fraction of times that each conformation subgraph transformed to each other conformation subgraph (including itself) after four quasistatic steps of the shearing process. \citet{Tordesillas:2012aa} reported that force-chain particles typically occur in network regions with high mean degree, high mean local clustering coefficients, and many 3-cycles. (Note that this study, as well as the others that we describe in the present and the following paragraph, define force-chains as in \citep{Peters:2005aa,Muthuswamy2006a}.) 
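The per-particle extraction step described above (a central particle together with its contacting neighbors and all contacts among them) is, in network terms, an induced radius-1 ego graph. A minimal sketch, using a hypothetical toy contact network rather than data from \citep{Tordesillas:2012aa}, is:

```python
# Sketch: extract one "conformation subgraph" per particle as a radius-1
# ego graph (the particle, its contacting neighbors, and contacts among them).
import networkx as nx

def conformation_subgraphs(G):
    # one induced ego graph per node; edges among neighbors are retained
    return {i: nx.ego_graph(G, i, radius=1) for i in G.nodes}

# hypothetical toy contact network: a triangle (0,1,2) with a pendant particle 3
G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3)])
subs = conformation_subgraphs(G)
```

Collecting the distinct subgraphs that occur across loading steps then yields the set of "states" used in the Markov analyses described below.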
Furthermore, when considering the conformation subgraphs of particles in force chains that fail by buckling (see \citep{Tordesillas:2009aa,Tordesillas:2009b} for details on the definition of ``buckling"), the most likely transformations to occur tend either to maintain the topology of those conformation subgraphs or to involve transitions from conformation subgraphs in which the central particle has a larger degree or is part of more 3-cycles to conformation subgraphs in which the degree of the central particle is smaller or in which it participates in fewer 3-cycles. \citet{Tordesillas:2012aa} also used force information to compute a measure of structural stability (based on a structural-mechanics framework \citep{Bagi:2007a,Tordesillas:2011aa} and summarized in a single number) for each conformation subgraph. They then split the full range of the stability values into several smaller ``stability intervals'' (i.e., small ranges of contiguous structural-stability values) and modeled transitions between stability intervals as a Markov chain. They examined the number of conformation subgraphs that occupy each stability interval and observed pronounced peaks in some intervals that persist during loading. They also reported that conformation subgraphs whose central particles belong to force chains tend to be more stable and that conformation subgraphs whose central particles are part of buckling force chains have a higher probability of transitioning from high-stability states to low-stability states than vice versa. (For details, see Fig.~7 of \citep{Tordesillas:2012aa} and the corresponding discussions.) \begin{figure}[t!] 
\centering \includegraphics[width=\textwidth]{walker2015b.jpg} \caption{\textbf{Conformation subgraphs can help quantify structural transitions in a 3D network of hydrogels.} \emph{(a)} A rendering of a granular system composed of spherical, hydrogel beads \citep{Brodu2015, Dijksman2017} that are subject to multiple cycles of compression along a single axis. Redder colors indicate stronger contact forces. \emph{(b)} An example of a contact network in a decompressed state and \emph{(c)} an example of a contact network in a compressed state. \emph{(d)} Pictorial representations of the most common conformation subgraphs in each almost-invariant set. The almost-invariant sets are collections of conformation subgraphs that, during loading, tend to transition amongst themselves more than they transition to conformation subgraphs in another almost-invariant set. (See the main text and \citep{Walker:2015b} for details.) \emph{(e)} The evolving cardinalities of almost-invariant sets can track structural transitions during cycles of compression and decompression. (The number of elements in an almost-invariant set at a given time step is the number of particles in the system whose induced conformation subgraph belongs to that set.) In highly-compressed states, most conformation subgraphs in a packing are members of Set 3, whereas most conformation subgraphs belong to Set 4 for the least-compressed states. \emph{(f)} An example of a possible \emph{transition pathway}, which consists of a sequence of conformation subgraphs in a given almost-invariant set and the transitions between them that occur during loading. In this example, the pathway corresponds to the following sequence of transitions between sets: Set 3 $\rightarrow$ Set 2 $\rightarrow$ Set 4 $\rightarrow$ Set 2 $\rightarrow$ Set 3. 
The first conformation subgraph (40) corresponds to the most prevalent conformation subgraph in Set 3, and the conformation subgraphs for the subsequent transitions between sets are those with the largest transition probabilities. For example, of all conformation subgraphs in Set 2, subgraph 19 is the one to which conformation subgraph 40 (of Set 3) is most likely to transition. These transition pathways can potentially inform constitutive-modeling efforts. [We adapted this figure, with permission, from \citep{Walker:2015b}.] } \label{f:3D_subgraphs} \end{figure} \citet{Walker:2015b} used similar methods to study self-assembly in an almost-frictionless, 3D system of hydrogel spheres under quasistatic, cyclic uniaxial compression (see Fig.~\ref{f:3D_subgraphs}\emph{a}--\emph{c}). After every compression step, they constructed the contact network for the system and examined two types of subgraphs for each particle: \emph{(1)} conformation subgraphs, which (as discussed earlier) consist of a single central particle $i$ and that particle's contacts; and \emph{(2)} the cycle-participation vector of each particle (see Sec.~\ref{cycles}). \citet{Walker:2015b} determined the set of all unique conformation subgraphs that exist during the above compression process. They then used each of those conformation subgraphs as one state in a transition matrix, the elements of which give the fraction of times (across the whole experiment) that a particle in one state transitions to any other state in consecutive compression steps. To focus on the presence or absence of a particle in an $l$-cycle (using cycle lengths up to $l = 10$), they binarized each element of the cycle-participation vectors. (The new vectors thus indicate, for each particle, whether a particle is part of at least one $l$-cycle.) They then constructed a transition matrix in which each state is a unique binarized cycle-participation vector that occurs during the experiment. 
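The bookkeeping behind both kinds of transition matrices can be sketched as follows. The state labels and sequences here are hypothetical (in practice, a state is a distinct conformation subgraph or binarized cycle-participation vector), and each row of the estimated matrix is normalized to sum to $1$:

```python
# Sketch: estimate an empirical (row-stochastic) transition matrix from
# per-particle sequences of discrete state labels across loading steps.
import numpy as np

def transition_matrix(state_sequences, n_states):
    C = np.zeros((n_states, n_states))
    for seq in state_sequences:
        for a, b in zip(seq[:-1], seq[1:]):   # consecutive loading steps
            C[a, b] += 1.0
    rows = C.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                     # leave never-visited states as zero rows
    return C / rows

# hypothetical state labels for three particles over four loading steps
seqs = [[0, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
T = transition_matrix(seqs, 2)
```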
The two transition matrices capture useful information about the most likely transformations that occur between different conformation subgraphs and cycle-participation vectors as one compresses or decompresses the granular system. For both types of mesoscale structures, \citet{Walker:2015b} used their transition matrices to extract \textit{almost-invariant sets}, which indicate sets of conformation subgraphs or cycle-participation vectors (i.e., states) that tend to transition among themselves more than to states in another almost-invariant set. (See \citep{Walker:2015b} for details.) In Fig.~\ref{f:3D_subgraphs}\emph{d}, we show the most common conformation subgraphs in each almost-invariant set of the subgraphs. The conformation subgraphs formed by force-chain particles belong mostly to Set 3 (see Fig.~\ref{f:3D_subgraphs}\emph{d}), which consists of densely-connected conformation subgraphs in which there are many contacts between particles. To characterize structural changes that occur in a packing as it moves towards or away from a jammed configuration, \citet{Walker:2015b} tracked the number of conformation subgraphs (and cycle-participation vectors) in each almost-invariant set across time. In Fig.~\ref{f:3D_subgraphs}\emph{e}, we show the temporal evolution of the numbers of elements in the almost-invariant sets of the conformation subgraphs. \citet{Walker:2015b} also proposed transition pathways (see Fig.~\ref{f:3D_subgraphs}\emph{f}) that may be useful for thermo-micro-mechanical constitutive-modeling efforts \citep{sepiani2009thermo}. (A \emph{transition pathway} consists of a sequence of conformation subgraphs in different almost-invariant sets, and transitions between them.) Another way to study various types of subgraphs in granular materials is through the classification of superfamilies \citep{milo2004superfamilies,Xu:2008aa} (see Sec.~\ref{s:motifs}). 
A recent investigation by \citet{Walker:2015c} considered superfamilies that result from examining 4-particle subgraphs (see Fig.~\ref{f:superfamilies}\emph{a}) in a variety of different granular systems, including experimental packings of sand and photoelastic disks, and DEM simulations for different types of loading and in different dimensions. In their study, the authors defined a superfamily as a set of networks in which the prevalences of the different 4-node subgraphs have the same rank-ordering. (They did not consider whether the subgraph was a motif in the sense of occurring more frequently than in a random-graph null model.) Despite the diversity of system types, they observed several trends in the transitions between superfamilies that occur as a system transitions from a pre-failure regime to a failure regime. The most important change in the superfamilies appears to be a switch in the relative prevalence of 4-node subgraphs that contain a triangle versus acyclic 4-node subgraphs with 3 edges (see Fig.~\ref{f:superfamilies}). This observation highlights the important role that small mesoscale structures can play as building blocks in granular systems. It also suggests that examining the prevalence and temporal evolution of such motifs can \emph{(1)} help characterize the macroscopic states of a granular system and \emph{(2)} help quantify what structural changes occur as a system transitions between different states. Notably, although calculating the prevalence of cycles and small motifs can be useful for gaining insights into contact-network structure, it is also important to employ other types of network analysis that examine structure on larger scales.
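Returning to the superfamily construction above: because the six connected 4-node graphs all have distinct sorted degree sequences, a brute-force census (feasible only for small systems) can classify induced 4-node subgraphs without isomorphism testing. The following sketch is not the authors' code, merely an illustration of the idea:

```python
# Sketch: census of connected induced 4-node subgraphs, classified by
# degree sequence; a "superfamily" is then the rank-ordering of the counts.
from itertools import combinations
import networkx as nx

# every connected 4-node graph has a distinct sorted degree sequence
NAMES = {(1, 1, 1, 3): 'star', (1, 1, 2, 2): 'path',
         (1, 2, 2, 3): 'triangle+edge', (2, 2, 2, 2): '4-cycle',
         (2, 2, 3, 3): 'diamond', (3, 3, 3, 3): 'clique'}

def four_node_census(G):
    counts = {name: 0 for name in NAMES.values()}
    for quad in combinations(G.nodes, 4):
        H = G.subgraph(quad)                  # induced subgraph on 4 particles
        if nx.is_connected(H):
            key = tuple(sorted(d for _, d in H.degree()))
            counts[NAMES[key]] += 1
    return counts

def superfamily(G):
    counts = four_node_census(G)
    return sorted(counts, key=counts.get, reverse=True)
```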
For example, in simulations of 2D packings of disks under isotropic compression, \citet{Arevalo:2010aa} observed that the mean shortest-path length (\ref{eq:shortest_path}) of a contact network reflects changes in the organization of a packing as one approaches the jamming point and changes that occur after the jamming transition takes place. The path length appears to reach a maximum value at a packing fraction below $\phi_{c}$. With further increases in $\phi$ below $\phi_{c}$, the path length then decreases rapidly, likely due to the formation of new contacts that shorten the distance between particles as the system nears the jamming point. After the jamming transition, the path length decreases further. Before moving on, we note that because it can be difficult to measure force information accurately in many experimental granular systems, continuing to develop relevant measures for studying contact topology (i.e., without incorporating weights) remains an important area of investigation. \begin{figure} \centering \includegraphics[width=\textwidth]{walker2015c_small.jpg} \caption{\textbf{Examination of superfamilies in a granular material.} \emph{(a)} One can use a rank-ordering of relative frequencies of sets of 4-node subgraphs to define a superfamily in a contact network. \emph{(b)} Mean degree (blue) and stress ratio (red) versus axial strain in a DEM simulation of a deforming granular material. (The stress ratio is the minimum stress divided by the maximum stress.) The symbols at each strain increment correspond to the superfamily of the associated contact network at that point. During loading, the system is characterized by a superfamily transition (specifically, ABDECF $\longrightarrow$ ADBECF) that corresponds to a shift from one with a higher prevalence of a subgraph that includes a triangle (B) to one with a higher prevalence of a subgraph without this stabilizing feature (D). [We adapted this figure, with permission, from \citep{Walker:2015c}.] 
} \label{f:superfamilies} \end{figure} \subsection{Force-weighted networks} \label{s:force_weighted} Although studying only a contact network can be rather informative (see Sec.~\ref{s:contact_network}), it is important to incorporate more information into network representations to more fully capture the rich behavior of granular materials. Many of the approaches for quantifying unweighted networks can be generalized to weighted networks (see Sec.~\ref{s:network_measures}), although significant complications often arise (e.g., because typically there are numerous choices for precisely how one should do the generalizing). From both physics and network-science perspectives, it is sensible to construct weighted networks that incorporate information about the forces between particles. This can shed considerable light on phenomena that have eluded understanding from studying only unweighted contact networks. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{photoelastic_network_small.pdf} \caption{\textbf{Photoelastic techniques allow the extraction of force-weighted networks from experimental granular packings.} \emph{(a)} An example of the photoelastic stress pattern from a vertical, 2D collection of bidisperse disks that are compressed from the top. \emph{(b)} Corresponding structure of the force network overlaid on the image. Each particle represents a node in the network, and line segments represent edges. Line thicknesses indicate edge weights and are proportional to the normal forces between contacting particles. [We adapted this figure, with permission, from \citep{bassett2015extraction}.]} \end{figure} One important physical motivation for incorporating information about inter-particle forces is that photoelastic-disk experiments and numerical simulations have both highlighted that, particularly just above isostaticity (see the bottom of Sec.~\ref{degree}), loads placed on granular systems are not shared evenly across all particles. 
Instead, forces are carried primarily by a backbone of force chains. It has often been claimed that the statistical distribution of the forces is approximately exponential \citep{Liu:1995aa}, but in fact, it depends on whether forces are recorded at a granular system's boundary or in its bulk \citep{Tighe:2011a}, as well as on the loading history \citep{Majmudar:2005aa}. Illuminating how force-chain structures arise provides crucial information for understanding how one can control the elastic modulus and mechanical stability \citep{Aharonov:1999a} and acoustic transmission \citep{Makse-1999-WEM} in granular materials. However, despite the ability of humans to see force chains easily in photoelastic images, it is difficult to characterize quantitatively what is or is not a force chain, and it can also be difficult to quantify how force chains evolve under compression or shear. Part of the challenge lies in the fact that force chains are spatially anisotropic, can exhibit long-range spatial correlations \citep{Majmudar:2005aa}, and can have complex temporal fluctuations \citep{Howell1999,Hidalgo2002}. Consequently, understanding emergent phenomena in granular systems is typically difficult using continuum theories or approaches based only on local structure. On the other hand, a network-theoretic perspective provides a fruitful way to explore interesting material properties and organization that arise from inter-particle contact forces in granular materials. Importantly, in addition to data from simulations, multiple techniques now exist for measuring inter-particle forces between grains in experiments; these include photoelasticity \citep{Majmudar:2005aa,Daniels2017}, x-ray diffraction measurements of microscopic \citep{Hurley:2016a} or macroscopic \citep{Mukhopadhyay2011,Saadatfar2012} deformations, and fluorescence with light sheets \citep{Brodu2015,Dijksman2017}.
As we will see, incorporating information about inter-particle forces into network-based investigations has yielded fascinating insights into the organization and collective structure in granular packings (and other particulate materials) for both numerically-simulated and experimental systems. The most common method for constructing a network that captures the structure of forces in a granular system is to let a node represent a particle and then weight the edge between two particles that are in contact according to the value of the force between them. One can describe such a network with a weighted adjacency matrix (see Sec.~\ref{s:networks}) ${\bf W}$ with elements \begin{equation} W_{ij} = \left\{\begin{array}{ll} f_{ij}\,, \text{ if particles $i$ and $j$ are in contact\,,} \\ 0\,, \text{ otherwise\,,} \end{array} \right. \end{equation} where $f_{ij}$ is the inter-particle force between particles $i$ and $j$. Such a force network also encodes the information in the associated contact network, and one can recover a contact network from a force-weighted network by setting all non-zero weights in the latter to $1$. Although most work has determined edge weights using the normal component of the inter-particle force, one could alternatively weight the edges by inter-particle tangential forces. With the advent of high-performance computational capabilities, one can determine inter-particle forces from DEM simulations \citep{Poschel2005} of hundreds to millions of particles. In experiments, it is possible to determine inter-particle forces using photoelastic disks (in 2D) \citep{Daniels2017} or x-ray tomography (in 3D) \citep{weis:17}, although typically these techniques are limited to systems of hundreds to thousands of particles. We now review network-based approaches for investigating force-weighted networks constructed from granular materials and discuss the resulting insights that these approaches have provided into the physical behavior of such systems. 
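As a concrete illustration of the construction above, one can assemble ${\bf W}$ from a list of contacts and their forces (the values below are hypothetical) and recover the underlying contact network by binarizing:

```python
# Sketch: build a force-weighted adjacency matrix W from contact data,
# then binarize it to recover the (unweighted) contact network.
import numpy as np

def force_network(n, contacts):
    """contacts: iterable of (i, j, f_ij) with f_ij > 0."""
    W = np.zeros((n, n))
    for i, j, f in contacts:
        W[i, j] = W[j, i] = f    # inter-particle forces are symmetric
    return W

# hypothetical contact list for a 4-particle packing
W = force_network(4, [(0, 1, 1.2), (1, 2, 0.4), (2, 3, 2.0), (0, 2, 0.7)])
A = (W > 0).astype(int)          # binarizing recovers the contact network
```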
We label most of the following subsubsections according to the type of employed methodology, although we also include a subsubsection about some specific applications to different systems. \subsubsection{Examining weighted cycles and other structural features.} \label{s:weighted_cycles} \begin{figure} \centering \includegraphics[width = \textwidth]{Smart2008aa_Fig9_Fig11.jpg} \caption{\textbf{A measure of the stability of 3-cycles quantifies the effects of tilting a granular packing.} The \emph{loop stability} $\xi_{3}$ of a 3-cycle is determined from the contact forces along each edge of the loop. (See the main text for details.) \emph{(a)} A contact loop with approximately equal forces on all edges is very stable to perturbations in the direction of the compressive force vectors on the loop, and has a loop stability that is close to $1$. \emph{(b)} A contact loop in which one of the edges has a much smaller force than the others is very unstable to perturbations; it has a loop stability near $0$. \emph{(c)} The normalized mean loop stability $\xi^{*}_{3}$ tends to decrease with increasing tilting angle $\theta_{\text{g}}$. [We adapted this figure, with permission, from \citep{Smart:2008aa}.]} \label{f:loop_stability} \end{figure} In Sec.~\ref{s:contact_network}, we discussed why examining cycles can be useful for studying granular contact networks. It is also useful to examine cycles when investigating force networks, which are weighted. For example, \citet{Smart:2008aa} studied the evolution of weighted contact loops in a simulation of a quasistatically tilted granular packing. They used topological information (i.e., which particles are in contact) to define the presence of a cycle, and they defined a notion of \textit{loop stability}, \begin{equation} \xi_{l} = \frac{1}{\overline{f}^{l}}\prod_{i=1}^{l}f^{\text{edge}}_{i}\,, \label{eq:loop_stability} \end{equation} to quantify the range of compressive loads that a given loop can support. 
In Eq.~\eqref{eq:loop_stability}, $l$ is the number of edges in the loop (i.e., its \emph{length}), $f^{\text{edge}}_{i}$ is the contact force of the $i^{\mathrm{th}}$ edge, and $\overline{f}$ is the mean edge weight (i.e., mean force) over all of the edges in the loop. See Fig.~\ref{f:loop_stability} for a schematic of this stability measure for a $3$-cycle. For $l = 3$, the quantity $\xi_{3} \approx 1$ corresponds to having approximately equal contact forces on all edges and is the most stable configuration (see Fig.~\ref{f:loop_stability}\emph{a}). The value of $\xi_{3}$ approaches $0$ as the contact force on one edge becomes much smaller than those on the other two edges. As illustrated in Fig.~\ref{f:loop_stability}\emph{b}, this situation is rather unstable. Both the density of $3$-cycles (specifically, the number of $3$-cycles in the system divided by the total number of particles) and a normalized $3$-cycle loop stability \begin{equation*} \xi^{*}_{3} = \frac{\langle \xi_{3}(\theta_{g}) \rangle}{\langle \xi_{3}(\theta_{g} = 0) \rangle} \end{equation*} (where the brackets denote means over all $3$-cycles in a network) tend to decrease with increasing tilting angle $\theta_g$ (see Fig.~\ref{f:loop_stability}\emph{c}). \citet{Smart:2008aa} also reported that the effect of tilting on loop stability is largely independent from the effect of tilting on mean coordination number (i.e., mean degree). \citet{Tordesillas:2010aa} examined what they called a \emph{force cycle}, which is a cycle of physically-connected particles in which each contact carries a force above the global mean. 
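Both the loop stability of Eq.~\eqref{eq:loop_stability} and the force-cycle criterion just defined are straightforward to compute from a force-weighted network. The sketch below (with a hypothetical toy packing, not data from the studies under discussion) evaluates $\xi_{3}$ for each triangle and flags $3$-force cycles:

```python
# Sketch: loop stability xi_l and detection of 3-force cycles (triangles in
# which every contact force exceeds the global mean force).
import networkx as nx
import numpy as np

def loop_stability(forces):
    """xi_l: product of the l edge forces divided by (mean force)^l."""
    f = np.asarray(forces, dtype=float)
    return float(np.prod(f) / f.mean() ** len(f))

def three_force_cycles(G, weight='force'):
    fbar = np.mean([d[weight] for _, _, d in G.edges(data=True)])  # global mean
    out = []
    for i, j, k in (c for c in nx.enumerate_all_cliques(G) if len(c) == 3):
        f = [G[i][j][weight], G[j][k][weight], G[i][k][weight]]
        if min(f) > fbar:                      # every contact above the mean
            out.append(((i, j, k), loop_stability(f)))
    return out

# hypothetical toy packing: a strongly-loaded triangle with one weak contact
G = nx.Graph()
G.add_edge(0, 1, force=2.0)
G.add_edge(1, 2, force=2.0)
G.add_edge(0, 2, force=2.0)
G.add_edge(2, 3, force=0.5)
```

Equal forces on all edges give $\xi_{3} = 1$ (the most stable case), and $\xi_{3}$ approaches $0$ as one contact becomes much weaker than the others.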
Using DEM simulations of a biaxially compressed, dense granular system --- the sample was compressed quasistatically at a constant strain rate in the vertical direction, while being allowed to expand under constant confining pressure in the horizontal direction --- they studied the evolution of $3$-force cycles (i.e., force cycles with $3$ particles) in a minimal cycle basis of the contact network with respect to axial strain. They observed that the number of $3$-force cycles initially decreases during strain hardening, then increases at the onset of force-chain buckling \citep{Tordesillas:2007aa}, and finally levels out in the critical-state regime. (See the third paragraph of Sec.~\ref{s:role_of_cycles} for a brief description of these different regimes of deformation.) In Fig.~\ref{f:Tordesillas2010aa_Fig10b}, we show a plot of the number of $3$-force cycles and the shear stress versus axial strain. The $3$-force cycles that arise at the onset of buckling are often part of force chains (using the definition from \citep{Peters:2005aa,Muthuswamy2006a}). Additionally, these $3$-force cycles tend to concentrate in the region of the shear band, where they may act as stabilizing structures both by frustrating relative rotations and by providing strong lateral support to force chains. However, with increased loading, the system eventually fails, and \citet{Tordesillas:2010aa} suggested that the increase in the number of $3$-force cycles may be an indicator of failure onset. Qualitatively similar results have been observed when examining the evolution of $3$-force cycles in three DEM simulations (each with slightly different material properties and boundary conditions) \citep{Tordesillas:2010a} and in DEM simulations of 3D ellipsoidal-particle packings subject to triaxial compression \citep{Tordesillas:2010b}. \begin{figure}[t!]
\centering \includegraphics[width=0.5\textwidth]{Tordesillas2010aa_Fig10b.jpg} \caption{\textbf{Evolution of shear stress (gray curve) and the number of $3$-force cycles in a minimal cycle basis of a contact network (red curve) as a function of axial strain in a DEM simulation of a granular material under quasistatic, biaxial loading.} The number of 3-force cycles decreases rapidly during the initial stages of strain hardening, but it begins to increase at the onset of force-chain buckling (dashed line). [We adapted this figure, with permission, from \citep{Tordesillas:2010aa}.] } \label{f:Tordesillas2010aa_Fig10b} \end{figure} Using similar DEM simulations for biaxial compression as those described in the previous paragraph, \citet{Walker:2010aa} examined the evolution of force-weighted networks with axial strain using several of the network concepts that we discussed in Sec.~\ref{s:network_measures}. Unsurprisingly, they found that including contact forces in their network analysis yields a more complete characterization of a granular system than ignoring them. One measure that they felt was particularly useful is a weighted version of subgraph centrality (see Sec.~\ref{s:centrality}). From a contact network, \citet{Walker:2010aa} first extracted all conformation subgraphs. As described in Sec.~\ref{subgraph}, a conformation subgraph is a subgraph that consists of a given particle $i$ and that particle's immediate contacts. (Each particle in a network thus yields one conformation subgraph.) To incorporate inter-particle force information, \citet{Walker:2010aa} generated \textit{force-weighted conformation subgraphs} by weighting each edge in the conformation subgraphs by the magnitude of the normal force component along that contact. 
They then computed a weighted subgraph centrality $Y^{w}_{i}$ for each force-weighted conformation subgraph, and they computed changes $|\upDelta \tilde{Y}^{w}|$ in magnitude between consecutive strain steps of a suitably averaged version ($\tilde{Y}^{w}$) of this quantity (see \citep{Walker:2010aa} for details). They observed that the temporal evolution of $|\upDelta \tilde{Y}^{w}|$ with strain step is effective at tracking large changes in energy dissipation rate that can occur due to rearrangement events (e.g., force-chain buckling) associated with the loss of inter-particle contacts. They also observed that the central particle in the conformation subgraphs that undergo the largest changes in weighted subgraph centrality seems to be associated with locations of large energy dissipation, such as in the shear band and in buckling force chains or the neighboring particles of those force chains. See \cite{Peters:2005aa,Muthuswamy2006a} for the employed definition of force chains and \cite{Tordesillas:2007aa} for the employed specification of force-chain buckling. \citet{Walker:2010aa} highlighted that network analysis --- and especially examination of mesoscale features --- can be helpful for gaining insights into mechanisms that regulate deformation in granular materials. Such studies can perhaps also help guide efforts in thermo-mechanical constitutive modeling \citep{Tordesillas:2008a}. \subsubsection{Extracting multiscale architectures from a force network using community detection.} \label{s:comm_detect} A major benefit of studying a network representation of a granular system (and using associated computational tools) is that it provides a natural framework in which to probe structure and dynamics across several spatial scales. One can examine different spatial scales in multiple ways, both physically (e.g., using distance in Euclidean space or in some other metric space) and topologically (e.g., using the hop distance along edges in a network).
In these studies, one can use network diagnostics and approaches like the ones discussed in Sec.~\ref{s:network_measures}. The ability to successfully study mesoscale architecture, which manifests at intermediate scales between particle-level and system-wide organization, is an especially important contribution of network analysis. One of the most common ways to examine mesoscale structures is with community detection (see Sec.~\ref{s:comm_structure}), which one can use to extract sets of densely-connected nodes in a network \citep{Fortunato2016,Porter2009}. One can also tune community-detection methods to examine sets of nodes of a range of sizes, from very small sets (with a lower limit of one node per set) to large sets (with an upper limit of all nodes in a single set). By applying multiscale community-detection methods to force-weighted contact networks of photoelastic disks, \citet{bassett2015extraction} identified chain-like structures that are visually reminiscent of force chains in granular packings. Notably, the algorithmic extraction of these ``distributed'' mesoscale structures \citep{bassett2015extraction,Giusti:2016a,Huang:2016a,papadopoulos2016evolution}, in contrast to ``compact'', densely-connected geographical domains in a material \citep{bassett2012influence}, required the development of a geographical null model, which can be used in modularity maximization and which encodes the fact that a given particle can exert force only on other particles with which it is in direct contact \citep{bassett2013robust} (see Sec.~\ref{s:comm_structure}). 
The different type of mesoscale organization extracted by this geographical null model highlights the fact that using physically motivated network-based approaches, which incorporate spatial and/or physical constraints, may give different information about a granular system than what one obtains when doing network calculations based only on network structure (i.e., without considering known context about a network, so that ``one size fits all''). In a modularity-maximization approach to community detection, one can also tune a resolution parameter of a modularity objective function to identify and characterize network communities of different sizes. This can also be combined with inference procedures to determine particularly important scales. \begin{figure} \centering \includegraphics[width=\textwidth]{Bassett2015_lab_sim_comms_small.pdf} \caption{\textbf{Community-detection techniques can uncover multiscale force-chain structures in laboratory and simulated granular packings.} \emph{(a)} Examples of force-chain communities extracted using modularity maximization for networks constructed from 2D laboratory and simulated granular packings. Tuning a resolution parameter allows one to detect mesoscale features at multiple scales in a packing: smaller resolution-parameter values yield larger and more compact communities, and larger values yield smaller and chain-like communities. \emph{(b)} One can characterize community structure using different measures, which are able to differentiate between laboratory and simulated packings in a quantitative manner. In this figure, the \emph{size} of a community is the number of particles in that community, the \emph{network force} of a community is the contribution of that community to modularity, and the \emph{gap factor} is a diagnostic that measures the presence of gaps and the extent of branching in a community. 
(The gap factor is small for very linear or compact communities, and it is large for communities with curves and/or branching.) [We adapted this figure, with permission, from \citep{bassett2015extraction}.] } \end{figure} One interesting result from \citep{bassett2015extraction} is that properties of force chain-like communities can distinguish frictional, laboratory packings from frictionless, simulated ones, allowing a quantification of structural differences between these systems. In later work, \citet{Huang:2016a} used similar techniques to examine friction-dependence and pressure-dependence of community structure in 3D simulations of compressed granular materials. To further quantify such mesoscale organization and examine how it changes with compression, \citet{Giusti:2016a} extracted communities using the geographical null model, and used ideas from algebraic topology to define a \emph{topological compactness factor} that quantifies the amount of branching --- versus compact, densely-interconnected regions --- in communities of a force network from 2D granular systems. The approach from \citep{bassett2015extraction} was extended to multilayer networks (see Sec.~\ref{s:comm_structure}) in \citep{papadopoulos2016evolution}, providing a way to link particulate communities across compression steps (rather than extracting new ones at each step) when examining how such communities reconfigure. These studies helped lay groundwork to improve understanding of how the multiscale nature of force-chain architecture impacts bulk material properties. Various community-detection approaches have also been used for identifying other types of inhomogeneities in granular matter \citep{Navakas:2014aa}. 
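In sketch form, modularity maximization with a geographical null model replaces the usual null expectation with $\gamma \rho B_{ij}$, where $B$ is the binary contact matrix, $\rho$ is the mean edge weight, and $\gamma$ is the resolution parameter; the null expectation is thus nonzero only for particle pairs that are actually in contact. Normalization conventions vary across the papers cited above, so the following (with hypothetical forces) is only one plausible implementation of the quality function, not the authors' code:

```python
# Sketch: modularity of a partition under a geographical null model,
# P_ij = rho * B_ij (B: binary contact matrix, rho: mean edge weight).
import numpy as np

def geographical_modularity(W, labels, gamma=1.0):
    W = np.asarray(W, dtype=float)
    B = (W > 0).astype(float)
    rho = W[B > 0].mean()                    # mean force over existing contacts
    delta = np.equal.outer(labels, labels)   # 1 if particles share a community
    return float(((W - gamma * rho * B) * delta).sum() / W.sum())

# hypothetical forces: two strongly-loaded pairs bridged by one weak contact
W = np.zeros((4, 4))
for i, j, f in [(0, 1, 2.0), (2, 3, 2.0), (1, 2, 0.2)]:
    W[i, j] = W[j, i] = f
```

Splitting the two strongly-loaded pairs into separate communities scores higher than lumping all particles together, which is the basic mechanism by which chain-like, strongly-loaded structures are extracted.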
Before moving on, it is important to note that --- although related --- the definition of force-chain structure using the community-detection approaches that we described above \cite{bassett2012influence,bassett2015extraction,Huang:2016a,Giusti:2016a,papadopoulos2016evolution} differs from the definitions of force chains that have been used in some other studies (e.g., see \cite{Peters:2005aa}). In future work, it is important to examine how the properties of force chains differ when they are defined in different ways. \subsubsection{Some applications.} \noindent \emph{Comminution processes.} A network-based approach can give fascinating insights into \emph{comminution}, the fragmentation of a material into smaller pieces. \citet{Walker:2011a} used DEM simulations to study comminution in a granular material under uniaxial compression and reported that the degree distribution of the system's contact network (which we recall is unweighted) evolves towards a power law during this process. This is consistent with the development of a power-law grain-size distribution, in which large particles are hubs that have many smaller, neighboring particles, which make up the majority of a packing. \citet{Walker:2011a} also examined several other features (such as measures of network efficiency, node betweenness, and cycle populations) of both contact networks and networks weighted by the normal force between particles as a function of increasing strain to examine what changes occur in a granular system during comminution. ~\\ \noindent \emph{Heat transfer.} Another problem that has been examined using network-based methods is heat transfer in granular matter. Using a heat-transport model on simulations of a compressed granular material, \citet{Smart:2007a} probed the effects of heterogeneity in the force distribution and the spatial arrangements of forces in a system on heat transfer through the material. 
Specifically, they compared measures of transport in the (normal) force-weighted network of a granular system to two null-model networks with the same contact topology but with either \emph{(1)} homogeneous, uniform edge weights that were equal to the mean force of the packing; or \emph{(2)} the same set of heterogeneous edge weights from the actual granular network, but assigned uniformly at random to the edges. \citet{Smart:2007a} estimated the thermal diffusivity and effective conductivity from simulations on each network, and they observed that the real granular system has significantly higher diffusivity and effective conductivity than the homogeneous null model. Additionally, comparing the results from the real material to the null model with randomly reassigned edge weights demonstrated that the qualitative differences between the real granular network and the homogeneous null model could not be explained by the heterogeneity in the force distribution alone, as the authors observed that this second null model (with randomly reassigned edge weights) was also not a good medium for heat transfer. To investigate what features of a granular network facilitate efficient heat transfer, \citet{Smart:2007a} defined a weighted network distance (see Sec.~\ref{paths}) between particles $i$ and $j$ as $d^{w}_{ij} = 1/H_{ij}$, where $H_{ij}$ is the \emph{local heat-transfer coefficient}, such that the network distance between two particles in contact is proportional to that contact's resistance to heat transfer. Note that $H_{ij} \propto f_{ij}^{\nu}$, where $f_{ij}$ is the magnitude of the normal force between $i$ and $j$, and $\nu \geq 0$ is a constant. They then defined a network-based (but physically-motivated) measure of heat transport efficiency as the weighted efficiency $E^{w}$ (see Sec.~\ref{paths}) computed using the distances $d^{w}_{ij}$. 
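As a concrete sketch of this construction, the function below computes the weighted efficiency $E^{w}$ from a dictionary of normal forces, using the distances $d^{w}_{ij} = 1/H_{ij}$ with $H_{ij} = f_{ij}^{\nu}$ (we set the proportionality constant to $1$). The function name and toy chain are illustrative assumptions rather than details from \citep{Smart:2007a}.

```python
import itertools

def weighted_efficiency(forces, nu=1.0):
    """E^w for a force network. `forces` maps an edge (i, j) to f_ij.
    Edge distance d_ij = 1/H_ij with H_ij = f_ij**nu, so strong contacts
    offer little resistance to heat transfer and act as short paths."""
    nodes = sorted({v for edge in forces for v in edge})
    INF = float("inf")
    d = {i: {j: (0.0 if i == j else INF) for j in nodes} for i in nodes}
    for (i, j), f in forces.items():
        d[i][j] = d[j][i] = min(d[i][j], 1.0 / f**nu)
    for k in nodes:                    # Floyd-Warshall shortest path lengths
        for i in nodes:
            for j in nodes:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    n = len(nodes)
    return sum(1.0 / d[i][j]
               for i, j in itertools.permutations(nodes, 2)) / (n * (n - 1))

# A 3-particle chain with two equal contacts of force 2
E = weighted_efficiency({(0, 1): 2.0, (1, 2): 2.0})
```

Comparing this quantity across the real force-weighted network and the two null models described above is then a matter of swapping in the corresponding edge weights.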
In a comparison between the real granular system and the two null models, $E^{w}$ gave the same quantitative results as the effective conductivity. In particular, the calculations in \citep{Smart:2007a} revealed that the real granular system has a larger efficiency than that of either null model, suggesting that the spatial distribution of force-chain structure in the granular network appears to facilitate heat transport. Finally, iterative edge removals in decreasing order of geodesic edge betweenness centrality [Eq.~\eqref{eq:edge_betweenness}] yield a faster decrease in effective conductivity than either edge removals done uniformly at random or edge removals in decreasing order of the local heat-transport coefficient, further illustrating the utility of network-theoretic measures for examining transport phenomena in granular systems. ~\\ \noindent \emph{Acoustic transmission.} One can also examine the effect of network structure on properties such as electrical conductivity in systems composed of metallic particles or on other types of transport (such as sound propagation) through a particulate material. The transmission of acoustic signals through granular materials is poorly understood \citep{Owens:2011}, and it is particularly challenging to explain using continuum or particulate models \citep{Digby:1981,Velicky:2002,Goddard:1990}. A few years ago, \citet{bassett2012influence} represented compressed, 2D packings of bidisperse, photoelastic disks as force-weighted contact networks, and found that some network diagnostics are able to identify injection versus scattering phases of acoustic signals transmitted through a granular material. 
Among the diagnostics that they computed, the authors observed that network efficiency (see Eq.~\eqref{eq:E} in Sec.~\ref{paths}) is correlated positively with acoustic transmission during the signal-injection phase, suggesting that high-amplitude and high-energy signals are transmitted preferentially along short paths (and perhaps even shortest paths) through a force-weighted contact network. In contrast, low-amplitude and low-energy signals that reverberate through a packing during the subsequent signal-scattering phase correlate positively with the intra-community strength $z$-score, which characterizes how strongly a node connects to other nodes in its own community. These results suggest that one can use network diagnostics that probe diverse spatial scales in a system to describe different bulk properties. Because \citep{bassett2012influence} did not use community-detection approaches informed by a geographical null model (see Sec.~\ref{s:comm_structure}), it did not address (and it is not yet fully understood) how acoustic transmission depends on the multiscale architecture of chain-like structures reminiscent of force chains. This remains an open issue, and network-based approaches --- e.g., using geographical null models and other ideas that pay attention to the role of space --- are likely to be important in future work on this topic. \subsubsection{Thresholded force networks.} \label{s:force_threshold} Before researchers started using network-based methods as a common perspective for studying granular materials, \citet{Radjai:1998aa,Radjai:1999aa} reported that under stress, the force network in simulations of granular matter organizes into two subsets: one set with ``strong" contacts and another set with ``weak" contacts. The \emph{strong subnetwork} of forces forms a backbone of chain-like structures that carry most of a system's load and which tend to align approximately with the direction of compression. 
Between these strong force chains, there is a \emph{weak subnetwork} of contacts that carry forces that are less than the mean. This weak subnetwork tends to have an anisotropy that is orthogonal to the compression, and it may provide support to the backbone of strong forces. Such heterogeneity in a force network is an interesting feature of granular materials, and network-based approaches provide a direct way to examine how strong and weak contacts can play important roles in material properties and stability. \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{Radjai1997a_Fig5_small.jpg} \caption{\textbf{The contacts and forces in a granular network at two different force thresholds.} \emph{(a)} When only contacts with forces $f > 1.3 \langle f \rangle$ (where $\langle f \rangle$ is the mean force) are included, the resulting network is composed of many disconnected components. \emph{(b)} As one decreases the threshold, the components grow, and if only contacts with $f > \langle f \rangle$ are included, the network is connected and ``percolates" along the axis of compression. The system is a numerical simulation of a dense, 2D collection of hard spheres under biaxial compression. [We adapted this figure, with permission, from \citep{Radjai:1998aa}.] } \label{f:Radjai1997a_Fig5} \end{figure} These ideas have been explored using \emph{force-thresholded} networks \citep{Arevalo:2009aa,Arevalo:2010aa,Arevalo:2010ba,Pena:2009aa}, in which one retains only contacts that carry a force of at least some threshold $f_{\mathrm{th}}$. That is, \begin{equation} A^{\mathrm{th}}_{ij} = \left\{\begin{array}{ll} 1\,, \text{ if particles $i$ and $j$ are in contact and } f_{ij} \geq f_{\mathrm{th}}\,, \\ 0\,, \text{ otherwise.} \end{array} \right. \label{eq:force_threshold} \end{equation} The threshold $f_\mathrm{th}$ should be lower-bounded by the smallest non-zero contact force $f_{\mathrm{min}}$ in the system being considered. 
(Note that when $f_{\mathrm{th}} = f_{\mathrm{min}}$, one includes all contacts in the force-thresholded network.) It is common to use a system's mean force $\langle f \rangle$ as a reference point and to systematically vary $f_{\mathrm{th}}$ to be different fractions of the mean force. Whether one uses the normal or the tangential component of the force, varying $f_{\mathrm{th}}$ results in a series of thresholded networks that one can subsequently characterize using traditional network or percolation-based analyses (which we discuss in the following three paragraphs of this subsubsection) or using methods from computational algebraic topology (which we discuss in Sec.~\ref{s:granular_alg_topology}). See Fig.~\ref{f:Radjai1997a_Fig5} for an example depicting the contacts and forces in a simulation of a 2D granular material at two different values of the force threshold. There is also a connection between the idea of force-thresholded networks and carrying out modularity maximization with the geographical null model [Eq.~\eqref{geog}] and a resolution parameter $\gamma$ (see Sec.~\ref{s:comm_structure}). Similar to how increasing the threshold $f_{\mathrm{th}}$ selects the subset of particles in a network for which inter-particle forces exceed or equal the threshold, in modularity maximization, increasing the resolution-parameter value $\gamma$ tends to yield communities of particles such that within a community, the inter-particle forces are at least as large as the threshold $\gamma \langle f \rangle$. \citet{Arevalo:2009aa, Arevalo:2010aa,Arevalo:2010ba} examined several network diagnostics --- e.g., mean degree, shortest-path length, diameter, LCC size, and component-size distributions --- as a function of the force threshold $f_{\mathrm{th}}$ (and at different values of the packing fraction) in DEM simulations of 2D granular packings under isotropic compression. 
The computations in \citep{Arevalo:2009aa, Arevalo:2010aa} suggest that the way in which many of these measures change as a function of $f_{\mathrm{th}}$ depends on packing fraction, and many of the measures can thus potentially be used to help understand differences in the organization of force networks above versus below the jamming point. For example, for packing fractions above the jamming point, the LCC size and the shortest-path length undergo qualitative changes in behavior near a threshold $f_{\mathrm{th}} \approx \langle f \rangle$, signifying interesting structural changes in the organization of the force networks at that threshold. The relationship between the number of 3-particle contact cycles (i.e., triangles) and the force threshold was examined in \citet{Arevalo:2010ba,Arevalo:2010aa}. In the jammed state, they observed a steep decline in the number of 3-cycles in the networks as they increased the threshold and considered only progressively larger forces. (As we discuss in Sec.~\ref{s:granular_alg_topology}, one can also use methods from computational algebraic topology to examine the evolution of cycle organization in dense granular materials.) This observation suggests that triangles (or at least one of their contacts) belong primarily to the weak subnetwork of forces that help support strong, filamentary force-chain structures. See Fig.~\ref{f:Behringer2014Statistical_Fig3Right} and Secs.~\ref{s:role_of_cycles} and ~\ref{s:weighted_cycles} for other discussions of the roles of cycles and their relationship to force chains. Another way to study force-thresholded granular networks is using a percolation-like approach (see Sec.~\ref{s:largest_comp}). For example, one can examine the sizes and number of connected components in a thresholded network as a function of $f_{\mathrm{th}}$ \citep{Ostojic:2007a,PastorSatorras2012,Kovalcinova:2015a,Kovalcinova:2016a,pathak2017force}. 
For dense packings, the intuition is that when $f_{\mathrm{th}}$ is very large, a force-thresholded granular network splits up into several disconnected components; and as one decreases $f_{\mathrm{th}}$, these components begin to merge until eventually all contacts are in a single component. In this type of bond percolation, which edges are included in the network thus depends on the force that the edges carry. One can look for a critical threshold $f^{\mathrm{c}}_{\mathrm{th}}$ such that for $f_{\mathrm{th}} > f_{\mathrm{th}}^{\mathrm{c}}$, the network fragments into many small components, but as one lowers the threshold towards $f^{\mathrm{c}}_{\mathrm{th}}$, a large, percolating cluster forms in the system (see Sec.~\ref{s:largest_comp}). Quantities that are often investigated when studying this type of force-percolation transition include $f^{\mathrm{c}}_{\mathrm{th}}$ and ``critical exponents'' for the transition (see Sec.~\ref{s:largest_comp}). Because (both experimental and computational) granular systems are finite in practice, such investigations often use finite-size scaling techniques. Several simulation-based studies of granular systems have deployed percolation analyses based on force-thresholded networks to quantify the organization of granular force networks, how such organization changes with increasing compression, and other phenomena \citep{Ostojic:2007a,PastorSatorras2012,Kovalcinova:2016a,pathak2017force}. For example, \citet{PastorSatorras2012} studied force percolation in the \emph{q-model} \citep{Coppersmith:1996a} of anisotropic granular force networks, in which there is a preferred direction of force propagation. They concluded that the asymmetry in the model has a significant effect on the percolation transition, and they found that the critical exponents differ from those of isotropically compressed granular force networks. 
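As an illustration of these percolation-style analyses, the sketch below applies Eq.~\eqref{eq:force_threshold} and returns the connected-component sizes of the thresholded network via a union-find structure; the function name and toy forces are our own assumptions.

```python
def thresholded_component_sizes(forces, n, f_th):
    """Sizes (descending) of connected components of the network in which a
    contact (i, j) is kept only if f_ij >= f_th, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for (i, j), f in forces.items():
        if f >= f_th:
            parent[find(i)] = find(j)
    sizes = {}
    for node in range(n):
        r = find(node)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

# A chain of 4 particles whose middle contact is weak: raising f_th above
# the weak force fragments the network, as in a force-percolation analysis.
forces = {(0, 1): 3.0, (1, 2): 1.0, (2, 3): 3.0}
strong_only = thresholded_component_sizes(forces, 4, f_th=2.0)   # two fragments
all_contacts = thresholded_component_sizes(forces, 4, f_th=0.5)  # percolating
```

Sweeping $f_{\mathrm{th}}$ over fractions of the mean force and recording the largest component size traces out the force-percolation behavior described above.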
\citet{Kovalcinova:2016a} investigated force percolation in a variety of simulations of slowly compressed, 2D granular systems. They examined the effects of polydispersity and friction, finding that these factors can qualitatively influence various features of the percolation transition. Very recently, \citet{pathak2017force} also investigated the force-percolation transition in simulations of jammed granular packings at fixed pressures. \subsubsection{Methods from computational algebraic topology.} \label{s:granular_alg_topology} In addition to traditional approaches for network analysis, one can also study the architecture of granular networks using ideas from algebraic topology. Persistent homology (PH) \citep{Kaczynski:2004aa,Edelsbrunner:2010a,otter2015,Kramar:2014b} (see Sec.~\ref{s:homology}) seems especially appropriate, and over the past several years, it has provided a fruitful perspective on the structure of compressed \citep{Kondic:2012aa,Kramar:2013aa,Kramar:2014b,Kramar:2014aa,Kondic:2017a} and tapped \citep{Pugnaloni:2016a,Kondic:2016a,Ardanza-Trevijano:2014aa} granular materials. Very recently, it has also been used to study responses of granular materials to impact \cite{kondic-impact2017,lim-impact2017}. One way to characterize the organization and evolution of granular force networks is to examine how Betti numbers (see Sec.~\ref{s:homology}) change as a function of a force threshold (and also as a function of packing fraction) in compressed granular systems. This includes studying the birth and death of components (determined by $\beta_{0}$) and loops (determined by $\beta_{1}$) as a function of a force threshold by computing and analyzing persistence diagrams (see Fig.~\ref{f:PH_schematic} of Sec.~\ref{s:homology}). 
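A minimal sketch of how components are born and die in a descending-force filtration is given below, using the standard ``elder rule'' with a union-find structure. This is a simplified illustration (it tracks only $\beta_{0}$ on the graph, with our own naming and toy data); the studies discussed here use full persistent-homology computations on clique complexes.

```python
def pd0_descending(forces):
    """beta_0 persistence pairs (birth, death) for a force network filtered
    by a descending threshold: a component is born at the strongest force
    incident to its particles and dies when it merges into an older one."""
    birth = {}
    for (i, j), f in forces.items():
        birth[i] = max(birth.get(i, 0.0), f)
        birth[j] = max(birth.get(j, 0.0), f)
    parent = {v: v for v in birth}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    pairs = []
    # lower the threshold: insert contacts in order of decreasing force
    for (i, j), f in sorted(forces.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri == rj:
            continue                      # edge closes a loop; no beta_0 event
        if birth[ri] < birth[rj]:         # elder rule: younger component dies
            ri, rj = rj, ri
        pairs.append((birth[rj], f))      # (birth force, death force)
        parent[rj] = ri
    return pairs                          # one essential component never dies

# Two strong chains joined by a weak contact: one component persists from
# birth force 3.0 down to death force 1.0, where the chains merge.
pd0 = pd0_descending({(0, 1): 3.0, (1, 2): 1.0, (2, 3): 3.0})
```

The long-lived pair $(3.0, 1.0)$ corresponds to a strong, chain-like component that survives over a wide range of thresholds, which is exactly the kind of feature that the binning of persistence points described below is designed to isolate.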
Examining when and how long different features persist in a network provides a detailed characterization of the structure of granular force networks, and one can quantify differences between two networks by defining measures of ``distance" between their associated persistence diagrams. These capabilities allow a PH framework to provide a distinct set of contributions that complement more traditional network-based analyses of components, cycle structure, and other features. \citet{Kondic:2012aa} investigated how simulated 2D granular force networks evolve under slow compression as they cross the jamming point. They first demonstrated that one can identify the jamming transition by a significant change in behavior of $\beta_{0}$ (specifically, there is an increase in the number of components at a force threshold approximately equal to the mean force $\langle f \rangle$), and that structural properties of the network --- such as the size of the connected components --- continue to change above jamming. \citet{Kondic:2012aa} also demonstrated that $\beta_0$ and $\beta_1$ can quantitatively describe the effects of friction and polydispersity on the organization of force networks (and can distinguish how friction and polydispersity alter the structure of a force network). This work was extended in \citet{Kramar:2013aa}, who examined numerical simulations of 2D, slowly compressed, dense granular materials using PH. In addition to examining the values of the Betti numbers, they also computed $\beta_{0}$ and $\beta_{1}$ persistence diagrams ($\mathrm{PD}_{0}$ and $\mathrm{PD}_{1}$, respectively) as the system was compressed to quantify the appearance, disappearance, and lifetimes of components and loops. In \citep{Kramar:2013aa}, they defined a filtration over the clique complex of the networks (see Sec.~\ref{s:homology}), so only loops with four or more particles were counted. 
To extract useful information from the PDs, they binned the persistence points in each diagram into different regions corresponding to features that \emph{(1)} are born at any force threshold but have relatively short lifetimes compared to those that are born at either \emph{(2)} strong, \emph{(3)} medium, or \emph{(4)} weak forces and that persist for a large range of thresholds. Their persistence analysis led to several insights into the structure of a normal-force network as a granular system is compressed, as well as insights into differences in the structure of a normal-force network for systems with different amounts of friction and different polydispersities. For example, \citet{Kramar:2013aa} observed that, near the jamming point, frictionless packings appear to have more ``extreme'' features than frictional packings, in the sense that frictionless packings have many more $\beta_{0}$ persistence points that are born at either weak or strong forces and that are relatively long-lived. Observations on the effects of polydispersity and friction may be difficult to observe using traditional measures such as the probability density function of the normal forces. \begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{Kramar2014aa_LP.jpg} \caption{\textbf{Examination of granular force networks using persistent homology.} \emph{(a)} An example of a force network from a simulation of a dense, 2D, frictional, polydisperse granular material under compression. The grayscale bar indicates the magnitude of the normal force (normalized by the mean force in the packing) between contacting particles. \emph{(b)} The left panel shows the $\beta_{0}$ persistence diagram ($\mathrm{PD}_{0}$) of the force network, and the right panel shows the $\beta_{1}$ persistence diagram ($\mathrm{PD}_{1}$) of the force network. These PDs indicate the appearance and disappearance of connected components and loops, respectively. 
Because the filtration is defined over the clique complex of the force network, one counts loops only when they include four or more particles. \emph{(c)} One can examine the evolution of the force network as a function of packing fraction $\rho$ by using various distances (left axis) to quantify differences between the persistence diagrams computed from force information from consecutive samples (see the main text for details) of the system during compression. These distances (the \emph{bottleneck distance} $d_B$ and two variants, $d_{W1}$ and $d_{W2}$, of the \emph{Wasserstein distance} \citep{otter2015}) capture both local and global changes in force geometry as one compresses the system. The curve for the mean coordination number $Z$ (right axis) gives the mean number of contacts per particle, and the steep rise in $Z$ signifies the onset of jamming. The values of the distances (which we show for consecutive $\mathrm{PD}_{0}$s) change dramatically as the system goes through the jamming point, after which they vary much more smoothly. See \citep{Kramar:2014aa} for details about these measures and for an example computation of the distances between consecutive $\mathrm{PD_1}$s as a function of packing fraction. \emph{(d)} The ratios of some of the distance measures during compression. Considered in conjunction with the behavior of the distance measures in panel \emph{(c)}, the dramatic drop in these ratios near the jamming point signifies a rapid global restructuring of the force network that occurs via many small changes in the organization of the network's connected components. [We adapted this figure, with permission, from \citep{Kramar:2014aa}.] } \label{f:Kramar2014aa_LP} \end{figure} \citet{Kramar:2014aa} also used PH to study force networks from simulations of slowly compressed, polydisperse packings of disks in 2D as they traverse the jamming transition (see Fig.~\ref{f:Kramar2014aa_LP}\emph{a}). 
As they compressed the system through a range of different packing fractions $\rho \in [0.63,0.9]$, they extracted force information at approximately fixed time intervals during the simulation, and they then computed PDs of components and loops (i.e., $\mathrm{PD}_{0}$ and $\mathrm{PD}_{1}$, respectively) for each force network sampled during compression (see Fig.~\ref{f:Kramar2014aa_LP}\emph{b}). As in other studies, \citet{Kramar:2014aa} used the clique complex of the force networks to avoid counting 3-particle loops. They then used the bottleneck distance $d_{B}$ and two variants, $d_{W1}$ and $d_{W2}$, of the Wasserstein distance \citep{otter2015} to quantify differences between two PDs and thereby compare local and global features of two granular force networks. As described in \citet{Kramar:2014aa}, the bottleneck distance captures only the largest difference between two PDs, whereas the Wasserstein distances include all differences between two PDs, with $d_{W1}$ more sensitive than $d_{W2}$ to small changes. (See \citep{Kramar:2014aa} for more details.) Calculating the various types of distance between the two $\beta_0$ PDs and the two $\beta_1$ PDs for consecutive samples (i.e., consecutive states) of a force network allows one to characterize different kinds of variations in the geometry of force networks as a function of packing fraction (see Fig.~\ref{f:Kramar2014aa_LP}\emph{c}). Using these distance measures, \citet{Kramar:2014aa} observed that in the unjammed state, there can be significant (but localized) reorganization in force geometry as a packing is compressed. They also concluded that the jamming transition is characterized by rapid and dramatic global rearrangements of a force network, followed by smoother and less dramatic reconfiguration in the system above jamming, where the distances between consecutive states of a packing are much smaller than in the unjammed state. 
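For very small diagrams, one can compute the bottleneck distance by brute force over matchings, after augmenting each diagram with the diagonal projections of the other's points. The sketch below is for illustration only; practical computations use efficient geometric matching algorithms, and the function name and toy diagrams are our own assumptions.

```python
import itertools

def bottleneck_distance(pd_a, pd_b):
    """Brute-force bottleneck distance between two small persistence
    diagrams (lists of (birth, death) points); any point may instead be
    matched to the diagonal. Exponential cost -- for illustration only."""
    def diag(p):                          # L-infinity projection onto diagonal
        m = (p[0] + p[1]) / 2.0
        return (m, m)
    X = list(pd_a) + [diag(q) for q in pd_b]
    Y = list(pd_b) + [diag(p) for p in pd_a]
    def cost(p, q):
        if p[0] == p[1] and q[0] == q[1]:
            return 0.0                    # diagonal-to-diagonal is free
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    return min(max(cost(p, Y[k]) for p, k in zip(X, perm))
               for perm in itertools.permutations(range(len(Y))))

# A single persistence point (3, 1) versus an empty diagram: the distance is
# the point's L-infinity distance to the diagonal, (3 - 1) / 2 = 1.
d1 = bottleneck_distance([(3.0, 1.0)], [])
d0 = bottleneck_distance([(3.0, 1.0)], [(3.0, 1.0)])
```

Computing such distances between the diagrams of consecutive sampled states is what produces curves like those in Fig.~\ref{f:Kramar2014aa_LP}\emph{c}.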
\citet{Kramar:2014aa} also found that tangential-force networks seem to exhibit similar behavior (in most respects) to that of normal-force networks. They also observed that friction can have a significant impact on how the geometry of the forces reconfigures with compression. For example, they showed that the rate of change of loop features (as measured by the distances between consecutive $\mathrm{PD}_{1}$s) is larger for a frictional system than for a frictionless one below the jamming point, but just before jamming and thereafter (i.e., during the entire jammed state) differences in loop structure between consecutive packing fractions are larger for frictionless systems than for frictional ones. In very recent work, \citet{Kondic:2017a} used PH to examine the temporal scales on which granular force networks evolve during slow compression. They simulated dense 2D granular materials and studied the influence of the externally-imposed time scale --- set by the rate of compression --- on how frequently one should sample a system during compression to be able to evaluate its dynamics near the jamming transition. By varying the sampling rate and carrying out a persistence analysis to quantify the distance between consecutive sampled states of the system, their results indicate that close to jamming, a force network evolves on a time scale that is much faster than the one imposed by the external compression. See \citep{Kondic:2017a} for further details. One can also use PH to study force networks in tapped granular systems. \citet{Pugnaloni:2016a} examined DEM simulations of two different 2D systems exposed to tapping. One type of packing consisted of monosized disk-shaped particles, and the other type consisted of monosized pentagon-shaped particles. 
\citet{Pugnaloni:2016a}'s investigation suggested that particle shape can play an important role in mechanical responses, which is consistent with observations from classical investigations of granular materials \cite{Geng:2001a}. More specifically, \citet{Pugnaloni:2016a} computed $\beta_{0}$ and $\beta_{1}$ as a function of force threshold in both normal-force networks and tangential-force networks. They observed for both types of force-weighted networks (but particularly for the tangential one) that the first two Betti numbers are able to clearly distinguish between disks and pentagons, where $\beta_{0}$ (respectively, $\beta_{1}$) is consistently larger (respectively, smaller) for pentagons across a wide range of force thresholds. However, using only $\beta_{0}$ and $\beta_{1}$, \citet{Pugnaloni:2016a} were unable to clearly differentiate states that have similar packing fractions but result from different tap intensities. In a follow-up investigation, \citet{Kondic:2016a} simulated a series of several taps to granular packings and used PH to examine how normal and tangential force-weighted networks vary between individual tap realizations. Specifically, they computed distances between $\mathrm{PD}_{0}$s and between $\mathrm{PD}_{1}$s of force networks associated either with individual realizations of tapping to the same system or with individual realizations of tapping to two different systems. In one part of their study, they examined systems of disks exposed to a series of taps at two different tap intensities. (See Sec.~\ref{s:tapping_contact} for a rough delineation of ``low" tapping intensity versus ``high" tapping intensity.) 
They observed that in terms of loop structure, the set of networks generated from a series of taps at low intensity differ far more substantially from each other --- as quantified by the distribution of distances between the $\mathrm{PD}_{1}$s for different realizations of the tapping --- than do the set of force networks from a series of taps at high intensity. They also observed that the distances between different realizations of low-intensity tapping are as large as the distances between low-intensity tapping and high-intensity tapping realizations. Therefore, although the high-intensity tapping and low-intensity tapping regimes yield networks with approximately the same packing fraction (see Sec.~\ref{s:tapping_contact}), one can use methods from PH to help explain some of the differences between the packing structure in the two regimes. In another part of their study, \citet{Kondic:2016a} carried out a persistence analysis of tapped packings of disks and tapped packings of pentagons, and they observed clear distinctions between the two systems based on calculations of $\beta_1$ PDs. For example, for each system, they computed the $\mathrm{PD}_{1}$s for a set of networks associated with several individual realizations of the same tapping intensity, and they then computed a distance between each pair of $\mathrm{PD}_{1}$s for realizations within the same packing and across the two types of packings. They observed that the distribution of distances between the $\mathrm{PD}_{1}$s of individual tapping realizations to the packing of pentagons is narrower and centered at a smaller value than the distribution of distances between individual realizations of taps for the packing of disks. They also observed that the distances between the disk and pentagon systems are much larger than those between different realizations of the disk system. 
Thus, \citet{Kondic:2016a} were able to distinguish clearly between tapped disk packings and tapped pentagon packings using PH, especially when considering properties of loop structures. Past work using 2D experiments has also been able to distinguish between the dynamics of disk and pentagon packings using conventional approaches \cite{Geng:2001a}. One can also use methods from computational algebraic topology to study granular networks in which one uses edge weights from something other than a force. For example, \citet{Ardanza-Trevijano:2014aa} used only particle positions (in the form of point clouds) and computed Betti numbers to distinguish states at the same density but that are at different mechanical equilibria. Using both experimental and simulated 2D granular packings of monodisperse particles, they constructed networks by locating the center of each particle and then introducing a filtration parameter $\delta$, such that any two particles separated by a Euclidean distance less than or equal to $\delta$ are adjacent to each other in a graph. They considered $\delta \in [d,1.12d]$, where $d$ is the particle diameter and the domain for $\delta$ resembles the choices that are used for determining if particles are in physical contact with each other. The authors computed, as a function of $\delta$, the first Betti number $\beta_{1}$ on the whole network to count the total number of loops at a given $\delta$. They also computed $\beta_{1}$ on the flag complex (see Sec.~\ref{s:homology}), thus counting the number of loops with four or more nodes at a given value of $\delta$. For values of $\delta$ that are slightly larger than $d$, \citet{Ardanza-Trevijano:2014aa} were able to separate states at the same packing fraction that are generated by tapping at different intensities. 
They observed for a fixed packing fraction that states that arise from lower-intensity tapping have a larger value of $\beta_{1}$ when computed on the whole network and a lower value of $\beta_{1}$ when computed on the flag complex. Their results were robust to both noise and errors in particle-position data. \subsection{Other network representations and approaches} \label{s:other_representations} \subsubsection{Network-flow models of force transmission.} \label{s:flow_networks} Another technique for gaining insight into the organization of forces in deforming granular systems, and how microscale aspects of a force network lead to macroscale phenomena such as shear bands and material failure, is to view force transmission from the perspective of maximum-flow--minimum-cut and maximum-flow--minimum-cost problems (see Sec.~\ref{s:max_flow}). To examine a granular system using such a perspective, one can consider the ``flow" of force through a contact network (with some contacts able to transmit more force than others), which in turn yields a ``cost'' to the system in terms of energy dissipation at the transmitting contacts \citep{Tordesillas:2015a, Tordesillas:2015b, Tordesillas:2013b, Lin:2013a, Lin:2014a}. One can calculate flow and costs in routes through a network (and hence determine bottlenecks in force transmission) to gain understanding of how contact structure relates to and constrains a system's ability to transmit forces in a material. For example, \citet{Lin:2013a, Lin:2014a} constructed flow networks from DEM simulations of a system of polydisperse particles compressed quasistatically under a constant strain rate in the vertical direction and allowed to expand under constant confining pressure in the horizontal direction. 
At a given axial-strain value (i.e., ``strain state''), they assigned uniform capacities $u_{ij}$ to each edge of the contact network to reflect the maximum flow that can be transmitted through each contact, and they assigned costs $c_{ij}$ to each edge to model dissipation of energy at each contact. After each axial-strain increment during loading, they then solved the maximum-flow--minimum-cost problem for the network at the given strain state, finding that edges in the minimum cut (yielding \emph{bottlenecks} in the force-transmission networks) localize in the material's shear band. By using costs $c_{ij}$ that reflected the type of inter-particle contact (specifically, elastic contacts versus various types of plastic contacts), \citet{Lin:2013a, Lin:2014a} were able to track different stages of deformation (i.e., strain-hardening, strain-softening, and the critical-state regime). They also computed a minimal cycle basis and observed that a large majority of force-chain particles and particles in 3-cycles are involved in the set of contacts that comprise the maximum-flow--minimum-cost routes. One can use the above approach with various definitions of force capacity and cost functions. Using simulations of the same type of system as that in the previous paragraph, \citet{Tordesillas:2013b} constructed networks --- one for each strain state as a system was loaded until it failed --- that incorporated information about both the inter-particle contacts at a given strain state and the particle displacements that occur between consecutive strain steps. Specifically, if nodes $i$ and $j$ are in contact at a given strain state, the weight of the edge between them is the inverse of the magnitude of the relative displacement vector of particles $i$ and $j$, where one computes the displacement for each particle from the previous strain state to the current one.
The edge weight between nodes $i$ and $j$ is $0$ if the associated particles are not in contact. The intuition behind this capacity function is that, when there is more relative motion between a pair of particles, one expects those particles to have less capacity to transmit force to each other. \citet{Tordesillas:2015b} used capacities that incorporate 3-cycle memberships of edges in a study of minimum cuts of a flow network in two samples of 3D sand under triaxial compression and in a 3D DEM simulation of simple shear. Grains in the bottlenecks localize early during loading and are indicative of subsequent shear-band formation. Other work \citep{Tordesillas:2015a} studied DEM simulations of compressed, 3D bonded granular materials (where \emph{bonded} signifies that the grains are connected via solid bonds of some strength) and a system of 2D photoelastic disks under shear stress with the goal of testing the hypothesis that an appropriate maximum-flow--minimum-cost approach can identify experimentally determined load-bearing particles and force-chain particles without relying on knowledge of contact forces. \citet{Tordesillas:2015a} examined different combinations of force-transmission capacity and cost functions, and they examined the fraction of force-chain particles that are part of the associated maximum-flow--minimum-cost network for a given capacity and cost function. In both cases, costs based on 3-cycle membership of edges seem to yield large values of these fractions, and \citet{Tordesillas:2015a} were able to successfully forecast most of the particles that eventually become part of force chains without using information about contact forces.
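As a minimal illustration of the min-cut perspective (our toy example, not code from the cited studies; it assumes the networkx library and uniform capacities on every contact), one can locate the bottleneck contacts of a small contact network spanning two boundary ``walls'':

```python
import networkx as nx

# A toy contact network: two wall nodes (top, bottom) joined through
# a small packing; every contact gets the same transmission capacity.
G = nx.Graph()
contacts = [("top", "a"), ("top", "b"), ("a", "c"), ("b", "c"),
            ("c", "d"), ("d", "e"), ("d", "f"), ("e", "bottom"), ("f", "bottom")]
G.add_edges_from(contacts, capacity=1.0)

# The minimum cut separates the network into two sides; the crossing
# edges are the bottlenecks that limit force transmission.
cut_value, (side_s, side_t) = nx.minimum_cut(G, "top", "bottom")
bottleneck = [(u, v) for u, v in G.edges() if (u in side_s) != (v in side_s)]
print(cut_value, bottleneck)
```

In this toy network, all paths from `top` to `bottom` funnel through the single contact between `c` and `d`, so that contact is the bottleneck. Contact-dependent costs, as in the studies above, would require the max-flow--min-cost variant instead.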
\subsubsection{Broken-link networks.} \label{s:broken_link} In previous discussions (see Secs.~\ref{s:contact_network},~\ref{s:force_weighted}), we have seen that one way to investigate the evolution of a granular system under an applied load is \emph{(1)} to compute contact networks or force-weighted networks for the system as a function of packing fraction, strain, or some other control parameter, and then \emph{(2)} to study how different features and properties of the networks emerge and change as one varies that parameter. One can also use other network constructions to explore different mesoscale features and examine system dynamics. For example, \citet{Herrera:2011aa} designed a \emph{broken-link network} (see Fig.~\ref{f:Herrera2011aa_Fig3}) to study the dynamics of 3D granular flows. They conducted an experiment on a collection of acrylic beads immersed in a box of liquid medium, shearing the system at a constant rate ($\Omega \approx 1.05 \times 10^{-3}$ rad/s) by a rotating circular disk at the bottom of the box. (See \citep{Herrera:2011aa} for details about their experiments.) First, they constructed \textit{proximity networks} (a variant of a contact network) as a function of time. (Specifically, they constructed one network for every 3-degree increment of rotation.) In their proximity network, they assigned a ``contact'' (edge) to each pair of particles whose distance from one another in the given frame was within a specified distance threshold, which they chose to be a conservative upper bound for particle contact. They then defined a \emph{broken link} (relative to some reference frame) as an existing edge between two particles in the reference frame that was subsequently absent in at least two later time frames (due to the particles having moved apart). A broken-link network for the frame in which a pair of particles moved apart had an edge between the two particles that were initially in contact, and broken links were not allowed to reform later.
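One loose reading of this construction can be sketched as follows (our toy example with synthetic particle positions, not the authors' code; the threshold, frames, and use of networkx are illustrative assumptions). The final line computes the fraction of nodes in the largest connected component of the broken-link network, which is the quantity used in the percolation-style analysis discussed next.

```python
import itertools
import math

import networkx as nx

def proximity_edges(positions, threshold):
    """Edge set of the proximity network for one frame of particle positions."""
    n = len(positions)
    return {frozenset((i, j)) for i, j in itertools.combinations(range(n), 2)
            if math.dist(positions[i], positions[j]) <= threshold}

def broken_link_network(frames, threshold, ref=0):
    """Edges of the reference frame that are absent in at least two later
    frames; once broken, a link stays broken (no healing)."""
    ref_edges = proximity_edges(frames[ref], threshold)
    B = nx.Graph()
    B.add_nodes_from(range(len(frames[ref])))
    for e in ref_edges:
        absences = sum(e not in proximity_edges(f, threshold)
                       for f in frames[ref + 1:])
        if absences >= 2:
            B.add_edge(*e)
    return B

# Toy example: three particles; the 1--2 link breaks as particle 2 drifts away.
frames = [
    [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
    [(0.0, 0.0), (1.0, 0.0), (2.6, 0.0)],
    [(0.0, 0.0), (1.0, 0.0), (3.2, 0.0)],
]
B = broken_link_network(frames, threshold=1.1)
lcc_fraction = max(len(c) for c in nx.connected_components(B)) / B.number_of_nodes()
print([tuple(sorted(e)) for e in B.edges()], lcc_fraction)
```

The variant with ``healing'' described below would instead remove an edge from the broken-link network whenever the corresponding proximity contact reforms.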
In Fig.~\ref{f:Herrera2011aa_Fig3}\emph{a}, we illustrate this procedure for constructing a broken-link network. \begin{figure} \centering \includegraphics[width=\textwidth]{Herrera2011aa_Fig3.jpg} \caption{\textbf{A broken-link network can yield insights into dynamic reconfiguration in granular shear flows.} \emph{(a)} A schematic of the construction of a broken-link network based on changes in a proximity network that occur as a system is sheared. \emph{(b)} An example of a broken-link network (with edges in green) at one instant in time. [We adapted this figure, with permission, from \citep{Herrera:2011aa}.] } \label{f:Herrera2011aa_Fig3} \end{figure} Studying the temporal evolution of a broken-link network provides a quantitative approach for examining particle rearrangement events in granular matter. To examine the temporal evolution of a granular system, \citet{Herrera:2011aa} examined the size of the LCC in a sequence of broken-link networks as a function of applied shear, drawing an analogy between the fraction $\chi_{b}$ of broken links and the occupation probability in traditional percolation problems (see Sec.~\ref{s:largest_comp}). They observed that the fraction $s_{g}$ of nodes in the LCC of the broken-link network grows with $\chi_b$ in a way that suggests that there is a continuous phase transition in $s_{g}$, and they approximated the value of $\chi_{b}$ at which this transition occurs. (However, as we noted in Sec.~\ref{s:largest_comp}, because these networks have finite sizes, one needs to be cautious regarding statements about percolation and phase transitions.) From a physical standpoint, this transition region corresponds to a characteristic deformation at which broken links --- which are due to particle rearrangements --- start to become globally connected, relative to a reference proximity network. 
By examining $\chi_{b}$ as they applied shear to the system, \citet{Herrera:2011aa} approximated a characteristic amount of strain associated with this transition region (by mapping the value of $\chi_b$ associated with the transition region to a corresponding value of strain), and they suggested that the determined strain scale may be useful for identifying the onset of global reorganization in the system. In a later study, \citet{Slotterback:2012aa} used a similar approach to examine progression from reversible to irreversible dynamics in granular suspensions under oscillatory shear strain. In experiments similar to the one described above \citep{Herrera:2011aa}, the authors considered a series of 20 shear cycles of the following form: for each cycle in one experiment, a suspension was sheared with a rotating disk up to a strain amplitude of $\theta_{r}$, after which the direction of rotation was reversed and the disk was rotated by $-\theta_{r}$ back to its original location. The authors performed experiments for $\theta_r$ values of $2^\circ$, $4^\circ$, $10^\circ$, $20^\circ$, and $40^\circ$. In a qualitative sense, measuring ``reversibility'' of the dynamics in this setup entails determining the extent to which particles return to their original positions after some number of shear cycles. To quantify this idea, one can compute the mean square displacement (MSD) of the particles after each cycle, where smaller MSD values correspond to more reversible dynamics. To study this system, \citet{Slotterback:2012aa} adjusted the idea of a broken-link network to include ``healing'', so that broken links that are repaired in later frames do not contribute edges in the broken-link network, and they studied the temporal evolution of this broken-link network as a function of the cyclic shear and for different amplitudes $\theta_r$.
In addition to the extent of \textit{spatial (ir)reversibility} measured by calculating the MSD, \citet{Slotterback:2012aa} also proposed a notion of \emph{topological (ir)reversibility} by examining the temporal evolution of the size of the LCC in their broken-link networks. For low values of $\theta_r$ (specifically, for $\theta_r \leq 20^\circ$), the system appears to be almost reversible: proximity-based contacts break, but they reform after shear reversal, and the fraction of particles in the LCC of the broken-link network thus grows before subsequently shrinking to almost $0$ after reversal. However, for a higher shearing amplitude (specifically, for $\theta_r = 40^\circ$), the system shows signatures of irreversibility. Many broken links do not reform after a shear cycle, and after reversal, the LCC of the broken-link network remains at a value that constitutes a substantial fraction of the total system size. \subsubsection{Constructing networks from time series of node properties or from kinematic data.} \label{kinematic} Another way to examine the organization of deforming granular materials is to construct networks based on the temporal evolution of particle properties \citep{Walker:2012aa,Walker:2013b,Walker:2014ba,Navakas:2014aa}, an idea that draws from earlier work in complex systems on constructing networks from different kinds of time-series data. (See, for example, \citep{Zhang:2006aa, Xu:2008aa, Yang:2008a, Lacasa:2008a, Gao:2009a, Marwan:2009a}.) In one type of construction, which yields what are sometimes called \emph{functional networks} \cite{Stolz2016}, one records the time series of some property of each particle and places an edge (which can potentially be weighted and/or directed) between two particles according to some relationship (e.g., some type of distance or other measure of similarity) between the particle-property time series. 
If one uses a different particle property, the nodes are the same, but the edges (and especially the edge weights) will in general be different. Once networks have been constructed, one can examine them using techniques such as those in Sec.~\ref{s:network_measures}. Some authors have also used the generic term \emph{particle-property networks} to describe networks that they constructed in some way from particle properties. \citet{Walker:2012aa} used particle-property networks to study the evolution of DEM simulations of a quasistatically deforming, 2D granular material under biaxial compression. They constructed time series for two features (as well as their coevolution) for each particle --- membership in force chains (determined as in \cite{Peters:2005aa,Muthuswamy2006a}) and membership in 3-cycles of a minimal cycle basis --- by recording a $1$ if a particle has the given property and a $0$ if it does not. They then quantified the similarity of particle-property evolution using the Hamming distance \citep{MacKay:2003}, and they added (undirected and unweighted) edges between each particle and its $k$ closest (i.e., most similar) particles until eventually obtaining a single network with one connected component. \citet{Walker:2012aa} then extracted sets of particles that exhibit similar dynamic behavior by detecting communities (see Sec.~\ref{s:comm_structure}) in the particle-property networks. This procedure uncovered distinct regions in the material --- including the shear band and different subnetworks composed of primarily force chain or primarily non-force chain particles --- as well as interlaced regions in the shear band that alternate between jammed and unjammed configurations. See \citep{Walker:2013b,Walker:2014ba} for additional studies that used membership in cycles of length up to $l = 7$ for the construction of particle-property networks. One can also construct particle-property networks from data that do not rely on knowledge of contact or force networks.
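A minimal sketch of this type of construction (ours, not the authors' code; the binary time series, the tie-breaking, and the use of networkx are illustrative assumptions): link each particle to its $k$ most-similar particles under the Hamming distance, grow $k$ until the network is connected, and then detect communities of particles with similar dynamics.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def hamming(a, b):
    """Hamming distance between two equal-length binary time series."""
    return sum(x != y for x, y in zip(a, b))

def property_network(series):
    """Link each particle to its k most-similar particles (smallest Hamming
    distance), with k grown until the network is connected."""
    n = len(series)
    ranked = {i: sorted((j for j in range(n) if j != i),
                        key=lambda j: hamming(series[i], series[j]))
              for i in range(n)}
    for k in range(1, n):
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for i in range(n):
            G.add_edges_from((i, j) for j in ranked[i][:k])
        if nx.is_connected(G):
            return G
    return G

# Toy binary time series of force-chain membership (1 = in a force chain):
series = [
    (1, 1, 1, 0), (1, 1, 0, 0), (1, 1, 1, 1),   # mostly "in"
    (0, 0, 0, 1), (0, 0, 1, 1), (0, 0, 0, 0),   # mostly "out"
]
G = property_network(series)
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])
```

The study described above used a modularity-based community-detection step of this general kind to extract mesoscale regions with similar dynamic behavior.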
For example, \citet{Walker:2012a} studied deformation in sand subject to plane-strain compression using measurements of grain-scale displacements from digital image correlations (DICs) \citep{Rechenmacher:2006a,Rechenmacher:2010a}. From these data, they constructed \textit{kinematic networks}, a type of network that arises from some measurement of motion (such as displacement or a rotation) over a small time increment. \citet{Walker:2012a} considered each observation grid point of digital images of a sample to be a node, and they placed edges between nodes with similar displacement vectors during a small axial-strain increment. This yields a collection of (undirected and unweighted) time-ordered kinematic networks. In another study, \citet{Tordesillas:2013a} calculated particle rotations and displacements for triaxial compression tests on sand using x-ray micro-tomography scanning \citep{Ando:2012a}. They generated an ordered set of networks from these data for several strain steps by treating each grain as a node and linking nodes with similar kinematics during the specified interval. More specifically, they represented the displacements and rotations of each particle as points in a state space, and they connected particles that are nearby in that state space according to Euclidean distance. In the network that they constructed, each particle is adjacent to $k$ nearest neighbors in state space, where $k$ is as small as possible so that the (unweighted and undirected) network is connected. Various notions of what it means to be ``similar'', and thus how to quantitatively define edges, are discussed in \citep{Walker:2012a,Walker:2014ba}. To probe the collective dynamics of interacting groups of particles for the network in each strain step, \citet{Tordesillas:2013a} detected communities corresponding to mesoscale regions in the material that exhibit similar dynamic behavior. 
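The state-space construction can be sketched as follows (an illustrative toy example, not the authors' code; the state vectors and use of networkx are our assumptions): represent each grain by a kinematic state vector, and connect each grain to its $k$ nearest neighbors in state space, with $k$ as small as possible such that the network is connected.

```python
import math

import networkx as nx

def kinematic_network(states):
    """Connect each particle to its k nearest neighbors in a kinematic state
    space (e.g., displacement components and rotation), with k as small as
    possible such that the resulting undirected network is connected."""
    n = len(states)
    ranked = {i: sorted((j for j in range(n) if j != i),
                        key=lambda j: math.dist(states[i], states[j]))
              for i in range(n)}
    for k in range(1, n):
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for i in range(n):
            G.add_edges_from((i, j) for j in ranked[i][:k])
        if nx.is_connected(G):
            return G, k
    return G, n - 1

# Toy states: (dx, dy, rotation) per grain over one strain increment; grains
# 0--2 move together, while grains 3--4 displace and rotate much more.
states = [(0.10, 0.00, 0.01), (0.12, 0.01, 0.00), (0.11, 0.00, 0.02),
          (0.90, 0.40, 0.30), (0.88, 0.41, 0.28)]
G, k = kinematic_network(states)
print(k, sorted(tuple(sorted(e)) for e in G.edges()))
```

One could then detect communities in `G` and compute mean shortest-path lengths within them, in the spirit of the analysis described next.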
Calculating the mean shortest-path length between pairs of particles in the same community yields a potentially important intermediate spatial scale (that they concluded is consistent with the shear-band diameter) of a granular system. For each strain step, they computed a variant of closeness centrality --- it is similar to the one in Sec.~\ref{s:centrality}, but it is not exactly the same --- of each particle in the corresponding network, and they observed that particles with large closeness centrality localize in the region of the shear band early in loading (and, in particular, before the shear band develops). This study highlights the potential of network analysis to provide early warning of regions of impending failure in particulate systems. Methods from nonlinear time-series analysis have also been used in network-based studies of stick-slip dynamics in a granular packing sheared by a slider \citep{Walker:2014aa}. Using so-called \emph{phase-space networks} (see \citep{Xu:2008aa} for a description of them) to construct networks from measurements of a slider time series, \citet{Walker:2014aa} associated network communities with slip events. \subsection{Comparing and contrasting different network representations and approaches} Network representations of granular and other particulate systems, in combination with methods from network science and related disciplines, provide a plethora of ways to analyze granular materials. For these tools to be optimally useful in furthering understanding of the physics of these systems, it is also important to draw connections between the many existing approaches.
The application of network-based frameworks to the study of granular matter is relatively young, and little work thus far has focused specifically on exploring such connections, but it is crucial to determine \emph{(1)} how conclusions from different approaches relate to one another and \emph{(2)} how similarities and differences in these conclusions depend on the system being studied. In this short subsection, we point out a few relationships and places to begin thinking about such questions. First, it is important to consider the network representation itself. We have discussed several different representations in this review, and some are more related to each other than others. Broadly speaking, one class of granular networks tries to encode the physical structure of a material --- in the sense that edges exist only when there is an inter-particle contact --- at a given point in an experiment or simulation. Such networks include contact networks (see Sec.~\ref{s:contact_network}) and force-weighted networks (see Sec.~\ref{s:force_weighted}). One can obtain a contact network from an associated force-weighted network by discarding the edge weights and keeping information only about connectivity. Force-weighted networks thus contain much more information, and they allow one to investigate phenomena --- such as force-chain organization --- that may not be possible to probe with the contact network alone. However, it is still important to develop tools for the analysis of contact networks and understand what phenomena arise from features of the connectivity alone (or what one can learn and explain from connectivity alone), as force information may not always be available or may not be necessary to understand certain behaviors of a granular system.
Generalizing some quantities (e.g., betweenness centralities) from unweighted networks to weighted networks also involves choices, and it is often desirable to conduct investigations that have as few of these potentially confounding factors as possible. Other types of granular networks do not encode physical connectivity, but instead directly represent something about the dynamics or changes that occur in a system during an experiment or simulation. Examples of such networks include broken-link networks (see Sec.~\ref{s:broken_link}) and particle-property networks (see Sec.~\ref{kinematic}). These different classes of network representations offer distinct ways of studying granular materials, and utilizing each of them should improve understanding of these systems. For a given network representation, it is reasonable to expect that conclusions that arise from similar network quantities or methods of analysis are related to one another, whereas conclusions that result from tools designed to probe very different kinds of organization in a network provide rather different information about the underlying granular system \cite{bassett2012influence, Giusti:2016a}. For example, some studies have suggested that in deforming granular materials, results based on calculations of clustering coefficients are similar to those from studying 3-cycles. This is intuitively reasonable, given that calculating a local clustering coefficient yields one type of local 3-cycle density. We have also observed that conclusions drawn from examinations of small subgraphs in a deforming granular system may be related to whether or not those subgraphs contain cycles of certain lengths. As we discussed in Sec.~\ref{s:relationships_measures}, another way to draw connections between different approaches is to consider the spatial, topological, or other scales that are probed by the different approaches.
For instance, in a force-weighted network, node strength is a particle-scale property, and it encompasses only very local information about a granular material. However, granular systems exhibit collective organization and dynamics on several larger scales that may be difficult to understand by exclusively computing local measures and distributions of such measures. To obtain an understanding of larger-scale structures, it is necessary to also employ different methods, such as community detection, persistent homology, and the examination of conformation subgraphs that are composed of more than just a single particle. Utilizing such approaches has provided insights into force-chain structure, shear band formation, and reconfiguration in granular systems under load that one may not be able to obtain by considering only local network quantities. It is thus important to continue to use and develop methods to analyze granular networks across multiple scales, as doing so can provide important and new information about a system. Finally, we note that even a single approach, depending on how it is used, can provide multiple types of information about a granular network. A good illustration of this is community detection. In Sec.~\ref{s:comm_structure}, for example, we saw that using different null models in modularity maximization allows one to probe rather different types of mesoscale architecture in granular force networks. We look forward to forthcoming studies that directly compare the results and assumptions in different approaches (both network-based and traditional ones) and different network representations of granular and other particulate systems. Conducting principled investigations into how conclusions from various network-based approaches are related is indeed an important direction for future work. 
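As a concrete check of the relationship, noted earlier, between local clustering coefficients and 3-cycles, recall that the local clustering coefficient of a node is its 3-cycle count divided by the number of possible triangles at that node. A small sketch (ours; networkx assumed):

```python
import networkx as nx

# A small contact network with a triangle-rich part (nodes 0-3) and a
# tree-like tail (nodes 4-5).
G = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)])

triangles = nx.triangles(G)      # number of 3-cycles through each node
clustering = nx.clustering(G)    # local clustering coefficient of each node

# The local clustering coefficient of node i equals its 3-cycle count
# divided by the number of possible triangles, k_i * (k_i - 1) / 2:
for i in G:
    k = G.degree(i)
    possible = k * (k - 1) / 2
    expected = triangles[i] / possible if possible else 0.0
    assert abs(clustering[i] - expected) < 1e-12
print(triangles, clustering)
```

This identity is why a map of local clustering coefficients and a map of 3-cycle participation tend to highlight similar regions of a deforming packing.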
\subsection{Limitations and practicalities of simulations and experiments} \label{limit} Network-based studies of granular materials have examined inter-particle contact and force data (and associated dynamics, such as in the presence of external loading) from both experiments and simulations. A recent summary of the many experimental techniques available for obtaining data about inter-particle contacts is available in the focus issue \citep{amon_focus_2017}. Such techniques include laser-sheet scanning \citep{Dijksman2017}, photoelasticity \citep{Daniels2017}, x-ray tomography \citep{weis:17}, and nuclear magnetic resonance \citep{stannarius:17}. Using each of the first three approaches, it is possible to measure both particle positions and inter-particle forces. If one is careful, it is sometimes possible to measure the forces as vectors (i.e., including both the normal and tangential components), but some techniques or systems do not have sufficient resolution to allow more than scalar or coarse-grained values. Determining the forces also helps experimentalists to confidently construct contact (i.e., unweighted) networks of particulate materials. In deciding whether or not two particles are actually in contact, rather than merely being adjacent in space, it is necessary to perform a detailed study of the effects of thresholding the data \citep{Majmudar:2007aa}. Any experimental technique will imperfectly report the presence versus absence of the weakest contacts in a system. Additionally, because of the difficulty of accessing the interior of granular materials, much more data is available for 2D force networks than for 3D force networks \citep{amon_focus_2017}. The most widely-used simulation techniques are discrete element methods (DEMs) \citep{Poschel2005}, in which the dynamics of individual particles (usually spheres) are determined by their pairwise interactions under Newton's laws of motion. 
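One common choice for the normal component of these pairwise interactions is a Hertzian contact law, in which the force magnitude grows as the $3/2$ power of the particle overlap. A minimal sketch (ours, not from any particular DEM code; the effective modulus value is illustrative):

```python
import math

def hertz_normal_force(r1, r2, x1, x2, e_star=1.0e7):
    """Hertzian normal-force magnitude between two spheres:
    F = (4/3) * E* * sqrt(R_eff) * delta^(3/2),
    where delta is the overlap and R_eff = r1*r2/(r1+r2) is the effective
    radius. The effective modulus e_star is an illustrative placeholder."""
    delta = r1 + r2 - math.dist(x1, x2)  # positive overlap means contact
    if delta <= 0.0:
        return 0.0  # no contact, no force
    r_eff = r1 * r2 / (r1 + r2)
    return (4.0 / 3.0) * e_star * math.sqrt(r_eff) * delta ** 1.5

print(hertz_normal_force(0.5, 0.5, (0.0, 0.0, 0.0), (0.99, 0.0, 0.0)))  # slight overlap
print(hertz_normal_force(0.5, 0.5, (0.0, 0.0, 0.0), (1.20, 0.0, 0.0)))  # no contact
```

A full DEM time step would also include tangential (frictional) forces and damping, as described next.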
The normal forces are typically determined from a Hertzian-like contact law (see, e.g., the sidebar in \cite{PT2015} for an introduction to Hertzian contacts) via an energy penalty for the overlap of two particles. The tangential (frictional) forces are most commonly modeled using the Cundall--Strack method \citep{Cundall1979} of a spring and a dashpot, but they have also been modeled via the surface roughness created by a connected set of smaller spheres \citep{Papanikolaou2013}. For a given application, it is not known whether these simplified models capture all of the salient features of inter-grain contacts, and the situation likely differs for different applications. For example, experimental measurements of sound propagation in photoelastic disks \citep{Owens:2011} suggest that the amplitude of sound waves may be largest along a force chain network, an effect not observed in DEM simulations \citep{Somfai2005}. This is likely a consequence of real particles physically deforming their shape to create an increased contact area through which sound can be transmitted; existing DEM simulations do not account for this effect. Another important use of particle simulations is to provide a means to investigate the robustness of network-based analyses to various amounts of experimental error \citep{Kramar:2014b}. Simulations provide an important check on experimental uncertainties in the determination of force-weighted networks and other network representations of granular materials. Conversely, network-based approaches provide a means to compare how faithfully simulations are able to reproduce experimental results. \section{Open problems and future directions} \label{open_problems} We now discuss a few open problems and research directions for which we anticipate important progress in the near future.
We divide our comments into three main areas: the construction of different types of networks that encode various physical relationships or other properties (see Sec.~\ref{alt_methods}), the application of network analysis to additional types of materials (see Sec.~\ref{extensions}), and the application of network-based approaches to the design of materials (see Sec.~\ref{net_design}). Network tools can provide valuable insights --- both explanatory and predictive --- into particulate materials and their dynamics, and a lot of fascinating research is on the horizon. \subsection{Network representations and computations} \label{alt_methods} To briefly explore how different approaches for constructing granular (and other particulate) networks can provide insights into the physics of granular materials (and particulate matter more generally), we discuss choices of nodes, choices of edges, edge-to-node dual networks, multilayer networks, and annotated networks.\footnote{Other ideas that are worth considering include memory networks \cite{Rosvall2014NatComm}, adaptive networks \cite{thilo-adaptive}, and various representations of temporal networks \cite{Holme2015EurPhysJB}.} It is also worth thinking about what calculations to do once one has constructed a network representation of a particulate system, so we also briefly consider the important issue of developing physically-informed methods and diagnostics for network analysis. \subsubsection{Definitions of nodes and edges.} There are many choices --- both explicit and implicit --- for constructing a network \citep{butts2009,newman2010networks}, and these choices can impact the physics that one can probe in granular networks \citep{bassett2012influence}. Perhaps the most obvious choices lie in how one defines nodes and edges. In the study of granular materials, a common definition is to treat individual particles as nodes and to treat contacts as edges (often with weights from the inter-particle forces).
A natural set of open questions lies in how contact network architectures depend on different features of the grains in a system. For example, there have been several recent studies on systems composed of particles that are neither spheres nor disks --- including ones with U-shaped particles \citep{gravish2012entangled}, Z-shaped particles \citep{murphy2015freestanding}, squares and rods \citep{Hidalgo:2009aa,trepanier2010column}, dimers and ellipses \citep{Schreck:2010a}, and others \citep{Athanassiadis:2014a,durian2017}. It would be interesting to build network representations of these systems, examine how different grain geometries affect network organization, and investigate how that organization relates to the mechanical properties of a system \citep{Azema:2013aa}. It seems particularly important to develop an understanding of which (quantitative and qualitative) aspects of network structure depend on features of grains (such as shape, polydispersity, friction, cohesiveness, and so on \citep{Kovalcinova:2015a,Kondic:2016a,Pugnaloni:2016a,bassett2015extraction,Kovalcinova:2016a,Kramar:2013aa,Tordesillas:2010a}) and which are more universal. One can also consider defining particulate networks in a variety of other ways. For example, when determining edges and edge weights, one can examine the tangential (rather than, or in addition to, the usual normal) component of the force between two grains. Such extensions may facilitate increasingly detailed investigations into a packing's organization \citep{Kramar:2014aa}. It may also be useful to retain information about both the magnitude and direction of forces when defining edges. One may even wish to construct signed networks, for which edges can take either positive or negative values, thereby conveying further information about the relationship between nodes. 
In such studies, one can perhaps take advantage of advancements in community-detection techniques, such as by using signed null models \citep{Gomez:2009a,Traag:2009a}. Additionally, as we discussed in Sec.~\ref{s:other_representations}, particle-property networks \citep{Walker:2012aa,Walker:2013b,Walker:2014ba} and networks constructed from particle-displacement information \citep{Herrera:2011aa, Slotterback:2012aa,Tordesillas:2013a,Walker:2012a} are other informative ways to build networks for particulate systems. One can also construct edges (and determine edge weights) by incorporating information about inter-grain relationships based on similarities in particle properties such as orientation \citep{Zhang:2015} (see Fig.~\ref{f:collagen}), coefficient of friction \cite{Puckett2013}, or size \citep{shaebani2012influence}. Constructing networks whose edges are determined or weighted by inter-particle similarities may be particularly useful for achieving a better understanding of mesoscale physics in polydisperse packings, which are thought to depend on the spatial distributions of particles of different types \citep{Kumar:2016a}. A perhaps nonintuitive choice is to use a bipartite representation of a granular network, such as the approach used in \cite{slanina2017}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{fig_collagen.jpg} \caption{\textbf{Constructing a network representation of collagen-fiber alignment.} We show quantitative polarized light images (QPLIs) of the human cervical facet capsular ligament taken both before and during loading. These data were collected as part of a study of collagen-fiber realignment following a mechanical stretching of the material. Such realignment is a mechanical process that commonly leads to both acute and chronic neck pain, particularly following car accidents. 
QPLIs were used to generate pixel-wise collagen alignment maps in the selected regions of interest (ROIs) of anomalous realignment (AR; red box) and normal realignment (NR; blue box) at rest (time point $t = 1$), at the onset of anomalous realignment (time point $t = T$), and in between these two points in 1-second increments. Rectangular ROIs were defined using the upper-left and lower-right fiduciary markers as the common information between different time points. Each $3 \times 3$ pixel window in an ROI is a node in a network. We show representative $3 \times 3$ pixel windows in the resting and stretched states, along with a corresponding demonstration of alignments. For each node, we calculate the alignment angle, and we use a measure of similarity between alignment angles to weight the edges in the network. In this way, weighted adjacency matrices, with nodes numbered spatially, illustrate a pairwise similarity in alignment angles between ROIs. [We reproduced this figure, with permission, from \citep{Zhang:2015}.] } \label{f:collagen} \end{figure} The above choices for network construction give a grain-centric view of the physics of particulate materials. One can also consider edge-to-node ``duals'' of such networks to provide a contact-centric perspective. In a contact-centric approach, one treats contacts between physical objects as nodes and the objects themselves as edges between contacts. (Compare this idea to the notion of a \emph{line graph} \cite{harary1972} of a network $G$.) A contact-centric network approach was used recently in the study of nanorod dispersions \citep{Shi:2013aa, Shi:2014aa}. Contacts between rods were treated as nodes, and the effective conductance of the rod was treated as a weighted edge. The treatment of a grain or other physical object as an edge rather than as a node is also a particularly appealing way to describe networks of fibers in both human-made and natural systems. 
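For networks in which contacts (rather than grains) play the role of nodes, the edge-to-node dual described above is precisely the line graph, which one can compute directly (a toy sketch of ours; networkx assumed):

```python
import networkx as nx

# Grain-centric network: particles are nodes, contacts are edges.
grains = nx.Graph([("g1", "g2"), ("g2", "g3"), ("g2", "g4"), ("g3", "g4")])

# Contact-centric dual: each contact becomes a node, and two contacts are
# adjacent when they share a grain (i.e., the line graph of the network).
contacts = nx.line_graph(grains)
print(sorted(contacts.nodes()))
print(contacts.number_of_edges())
```

In a contact-centric study like the nanorod example above, one would additionally weight each dual edge, e.g., by the effective conductance of the shared rod segment.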
Recent examples of such fiber networks that have benefited from network analysis include collagen networks \citep{abhilash2014remodeling,Zhang:2015} (see Fig.~\ref{f:collagen}), fibrin networks \citep{purohit2011protein}, and axonal-fiber networks \citep{Bullmore2009,Bassett:2017a}. Another way to study granular systems (especially porous ones) is to consider a network constructed from the \emph{pore space} \citep{Blunt:2001a}, in which \emph{pores} (i.e., empty volumes between contacting grains) are nodes and \emph{throats} (i.e., flow pathways that connect pores) are edges (which can be weighted in various ways). Conveniently, there are several methods to precisely determine pores and grains \citep{Al-Raoush:2003a}. Studying pore networks is a common way to examine flow through porous materials or to understand the responses of granular materials to external stresses, but only more recently have such networks been studied explicitly from a network-science perspective. See \citep{Vo:2013,Walker:2013a,Linden:2016a,Russell:2016a,Martinez:2017a} for some recent examples, and see \cite{porous2017} for a study of force chains in porous media. Given how pore networks are formulated, we expect that they can also be studied using PH (see Sec.~\ref{s:homology}). \subsubsection{Multilayer networks.} When considering different ways to construct a network, it is also natural to ask whether it is beneficial to combine two or more approaches in an integrated way to more accurately (and elaborately) represent a particulate system as a network. Ideally, doing network analysis using a more complicated representation will also lead to improved physical understanding. One way to incorporate heterogeneous node types, multiple types of relationships between nodes, and time-dependence into network representations is with \emph{multilayer networks} \citep{kivela2014multilayer,Boccaletti2014,domenico2013mathematical}. 
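At the level of data structures, one minimal way to encode such a representation is to keep one edge dictionary per layer, so that the same pair of particles carries a different weight in each relationship. In the following sketch, the layer names and numerical values are placeholders rather than data from any cited study.

```python
# A two-layer multiplex contact network: one edge dictionary per layer.
# Layer names and force values are illustrative placeholders.
multiplex = {
    "normal_force": {},      # contacts weighted by normal force
    "tangential_force": {},  # the same contacts, a different relationship
}

def add_edge(layers, layer, u, v, weight):
    """Insert an undirected weighted edge into the given layer."""
    layers[layer].setdefault(u, {})[v] = weight
    layers[layer].setdefault(v, {})[u] = weight

add_edge(multiplex, "normal_force", "g1", "g2", 1.7)
add_edge(multiplex, "tangential_force", "g1", "g2", 0.3)

# The same particle pair now carries one weight per layer.
w_normal = multiplex["normal_force"]["g1"]["g2"]
w_tangential = multiplex["tangential_force"]["g1"]["g2"]
```

The tensorial formalisms of the cited works add interlayer edges and layer couplings on top of this basic bookkeeping.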
Recall that \citep{papadopoulos2016evolution} (see Sec.~\ref{s:comm_structure}) studied one type of multilayer community structure in a compressed granular material. Another type of multilayer network that may be useful for the study of particulate systems is the \emph{multiplex network}, in which one can encode different types of relationships between nodes in different \emph{layers} of a network. For example, one can construct a multiplex network in which two particles are linked both by normal force-weighted contacts and also according to another relationship (such as the tangential force between them, their contact angle, or a measure of similarity of one or several particle properties, as discussed in Sec.~\ref{kinematic}). One can also envision using multilayer networks to study particulate systems with multiple types of particles. For example, if a system consists of particles of different shapes or sizes, one possibility is to place all particles of a given shape (or size) in their own layer, with intralayer edges representing interactions between particles of the same type and interlayer edges representing interactions between particles of different types. Another possibility is to let each layer represent a time window, perhaps with intralayer edges representing a mean interaction strength during that window. The study of multilayer networks is one of the most popular areas of network science, and we expect that such representations will be very illuminating for studies of particulate matter. \subsubsection{Annotated graphs.} \label{annotated} The network-construction techniques that we have been discussing throughout this review represent data in the form of an adjacency matrix (for an ordinary graph) or an adjacency tensor (for a multilayer network) \citep{domenico2013mathematical,kivela2014multilayer,Mucha2010}. However, it may be desirable to encode not only relationships between grains, but also features of the grains and/or their interactions.
One option is to use \emph{annotated graphs} (one can also use multilayer networks) to encode inter-node relationships, individual-node properties, and individual-edge properties \citep{newman2016structure,tiago2016}. One can use annotated graphs --- also sometimes called \emph{labeled graphs} or \emph{tagged graphs} \citep{palla2008fundamental} --- to study properties of interaction patterns (such as force-weighted contacts) that may be driven by local particle properties (e.g., size, shape, spatial position in a material, membership in cycles or force chains, and so on). Available tools for annotated graphs include clustering techniques in the form of stochastic block models (SBMs) that combine information from both connectivity and annotations to infer densely-connected sets of nodes (e.g., including when there is sparse or missing data) \citep{newman2016structure,tiago2016}. \subsubsection{Beyond pairwise interactions.} \label{hyper} It is also desirable to explicitly examine interactions between three or more particles, rather than restricting one's study to representations of pairwise interactions. We discussed this idea briefly in Sec.~\ref{s:homology} in the context of simplicial complexes \cite{edels2010}, and we note that one can also encode edges among three or more nodes by using hypergraphs \cite{newman2010networks}. \subsubsection{Physically-informed network calculations.} In addition to exploring different choices of how to construct particulate networks, it is also important to consider ways to generalize the tools of network science to incorporate physical constraints \citep{barth2011,bassett2015extraction}. A natural way to begin is to build null models of spatially-embedded graphs that obey simple mechanical constraints like force balance \cite{Ramola:2017a}. One can also develop network diagnostics that incorporate physical geometry, spatial embedding, and latent spatial structure of networks \citep{barth2011}. 
See \cite{jkt2017} for an example of a kinetic approach. It should also be useful to incorporate ideas of flow and other forms of dynamics into community detection \cite{beguerisse2014interest,jeub2015}. Additionally, it is desirable to develop methods to more explicitly characterize the relationship between a network's structure and the geometry of a space in which it is embedded \citep{Bassett2010,Modes:2016a,barth2011} (as well as latent geometry) and to use such techniques to better understand the plurality of packings that are consistent with a single force distribution via a \emph{force-network ensemble}, which is the space of allowed force configurations for a fixed contact geometry \citep{Snoeijer-2004-ETF,Snoeijer-2004-FNE,Tighe2008,Tighe2010a,Kollmer2017}. We also point out that a spatial embedding can induce correlations between network diagnostics \cite{bassett2012influence}, and it is therefore important to develop a better understanding of the extent to which this occurs in networks that arise from particulate systems. \subsection{Beyond granular materials} \label{extensions} Although we have focused primarily on network analysis of the canonical type of granular systems, which consist of discrete, macroscopic particles that interact via contact forces, one can potentially also use network-based approaches to characterize granular materials with more complex interactions as well as soft materials more broadly \cite{ronhovde2011detection}. As reviewed recently in \citep{Nagel:2017a}, these materials include colloids, glasses, polymers, and gels. 
In these systems (and others), the particles (or other entities) can interact via various (and often complicated) means.\footnote{One can also examine particulate networks of hard materials that admit Hamiltonian descriptions \cite{amorphous2017}.} For example, system components can have attractive and/or repulsive long-range interactions \citep{muller2009homogeneous}, can be cohesive \citep{Mitarai:2006a}, and/or can interact with one another via chemical gradients \citep{wrobel2014directed,huttenlocher2008reverse}, electric charges \citep{hartveit2012electrical,nualart2013biochim,Pahtz:2010a}, or through other molecular or mechanical processes \citep{ladoux2015mechanotransduction}. In each of these cases, interaction strengths can yield edge weights, either as the sole relationship studied between entities in a graph or as one of several inter-entity relationships in a multilayer network. One particularly interesting avenue for future work may be to study polymer and fiber networks \citep{Bausch:2006aa,Broedersz:2014aa}, which are important in biological systems. For example, in biopolymer assemblies, cross-linking can glue filaments together into large-scale web-like structures (e.g., as in the cytoskeleton of a cell) \citep{Lieleg:2009a}. Such cross-linked actin filaments are critical for cellular function and integrity \citep{Fletcher:2010aa}, and it is thus important to understand the structural organization of these kinds of networks, their mechanical properties, and how force transmission is regulated in them \citep{Mizuno2007,Gardel:2008aa,actin2017}. Indeed, there has already been some work examining network representations of gels and polymers, and employing graph-theoretic analyses to quantify the structural properties of these systems (e.g., \citep{Billen2009,Kim2014,Gavrilov2015,Liang:2016,samal2017,Bouzid2017}). One can also use network analysis to study systems at smaller spatial scales.
Because traditional network-based approaches are agnostic to the physical scale of a system, off-the-shelf calculations and algorithms are directly portable to microscale, nanoscale, and even smaller-scale systems \citep{ronhovde2011detection,Ahnert:2017aa}. However, despite the technical portability of common tools, the investigation of much smaller-scale systems should benefit from the extension of classical network-based tools in ways that incorporate additional underlying physics. For example, we have described extensions of network tools to assimilate ideas and constraints from classical physics (e.g., spatial embeddedness) \citep{bassett2015extraction,Sarzynska:2015aa}. For such investigations, we expect that ideas from the study of random geometric graphs (RGGs) and their extensions will be helpful \citep{bassett2012influence,setford2014}. One can also consider employing ideas that take into account principles from quantum physics \citep{Mulken:2011aa,Bianconi:2015a} or other areas. The study of quantum networks is a new and exciting area of network science, and there are many opportunities for progress \citep{Biamonte:2017a}. \subsection{Implications for material design} \label{net_design} As is the case with mathematics more generally, network analysis gives a flexible approach for studying many systems, because it is agnostic to many details of their physical (or other) nature \citep{newman2010networks,Newman:2003aa,bocca2006}. Such versatility supports the application of network-science techniques to both living and non-living materials to examine the architectures and dynamics of these systems and to gain insights into relationships between structure and function. The tools and approaches of network science also have the potential to inform the design of new materials. 
For example, it should be possible to use network theory (e.g., via the tuning of a system's network architecture) to provide guidance for how to engineer a material to exhibit specific mechanical, electrical, or other properties. Material design has become increasingly popular with recent advances in the study and development of \emph{metamaterials} \citep{Liu:2011, Turpin:2014, Lee2012}. Metamaterials can take advantage of precisely-defined component shapes, geometries, orientations, arrangements, and connectivity patterns (rather than specific material and physical characteristics of individual units) to produce tailored mechanical \citep{Greaves:2011a}, acoustic \citep{Lee2012, Rocklin:2015, Fang:2006, Nicolaou:2012aa,PT2015}, and electromagnetic \citep{Liu:2011, Simovski2012, Smith:2004aa} properties. The control of a single unit or component is relatively straightforward, but the question of how to link many components in ways that yield complex material properties is a very challenging one. Approaches that use ideas from network science have the potential to offer guidance for construction patterns of material units that support desired bulk properties. There are likely many ways to use network-based approaches to inform the design of new materials. One reasonable possibility is to employ evolutionary and genetic computer algorithms \citep{eiben2015from,diaz2016review} and other tools from algorithmic game theory \citep{papadimitriou2014algorithms}. For example, the combination of multi-objective functions and Pareto optimality \citep{Goldberg:1989} can offer a targeted path through the space of possible network architectures (i.e., through a \emph{network morphospace} \citep{McGhee:1997}). If exact simulations are not computationally tractable, one can use machine-learning techniques to offer fast estimates of material properties and behavior \cite{valera2017machine}. 
One can perhaps begin with a single material network structure that is physically realizable and then rewire the initial network architecture with a cost function that drives the system towards an arrangement that enhances a desired property. One can perform such a rewiring explicitly along Pareto-optimal fronts using a set of rewiring rules that preserve physical constraints and laws. This approach, which builds on prior applications to other types of spatially-embedded networks \citep{Avena-Koenigsberger:2014, Avena-Koenigsberger:2013, Goni:2013}, selects for physically-feasible network designs that purposely maximize specific properties. A network analysis of these ``evolved'' (and evolving) networks may help elucidate relationships between structural features of system architecture (e.g., clustering-coefficient values) and material properties (e.g., stability under load), providing a link between structure and function. Such an evolutionary-design approach can complement recent efforts to identify optimal shapes with which to construct a packing \citep{jaeger2016evolutionary,miskin2013adapting}, to design rules with which to pack those shapes \citep{miskin2014evolving,roth2016optimizing}, and to construct ``allosteric materials'' with specific functionalities via evolution according to fitness functions \citep{Yan:2017a}. One set of problems for which network-based tools may be useful are those related to rigidity and mechanical responses of disordered material networks, which have been studied previously using a rigidity-percolation framework \citep{Feng1985,Jacobs1995,Ellenbroek:2015aa,Henkes2016}. In terms of material design, a particularly interesting line of future work may be to use network analysis in conjunction with methods such as \emph{tuning-by-pruning}, in which individual bonds are selectively removed from a disordered material network to engineer a specific property into a system \citep{Goodrich:2015a}. 
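To make the loop structure of such a scheme concrete, the following toy sketch greedily prunes the bond whose removal most reduces a stand-in target property (here simply the triangle count) while preserving connectivity. The cited studies instead prune bonds of simulated spring networks according to computed elastic properties; this sketch only illustrates the greedy remove-evaluate-restore pattern.

```python
from itertools import combinations

def triangles(adj):
    """Count triangles in an undirected graph given as {node: set(neighbors)}."""
    return sum(1 for u, v, w in combinations(sorted(adj), 3)
               if v in adj[u] and w in adj[u] and w in adj[v])

def connected(adj):
    """Depth-first check that all nodes are reachable from an arbitrary start."""
    seen, stack = set(), [next(iter(adj))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    return len(seen) == len(adj)

def prune_step(adj):
    """Greedily remove the single bond whose removal most reduces the target
    property (here: triangle count) without disconnecting the network."""
    best = None
    for u, v in {(a, b) for a in adj for b in adj[a] if a < b}:
        adj[u].discard(v); adj[v].discard(u)   # tentatively remove the bond
        if connected(adj):
            score = triangles(adj)
            if best is None or score < best[0]:
                best = (score, u, v)
        adj[u].add(v); adj[v].add(u)           # restore before trying the next
    if best is not None:
        _, u, v = best
        adj[u].discard(v); adj[v].discard(u)   # commit the best removal
    return adj

# Toy "material network": the complete graph on four grains.
adj = {i: {j for j in range(4) if j != i} for i in range(4)}
```

One pruning step on this example removes a single bond, reducing the triangle count from four to two while keeping the network connected.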
For example, beginning with simulated, disordered spring networks (derived from jammed particle packings), \citet{Goodrich:2015a} used tuning-by-pruning to design networks with different ratios of the shear modulus to the bulk modulus. Motivated by allosteric responses in proteins, \citep{Rocks:2017a} developed an approach that allows careful control of local mechanical responses in disordered elastic networks. In particular, with the removal of very few bonds, one can tune the local strain in one region of a network via a response to an applied strain in a distant region of a system. \citet{Driscoll2016} studied three different spring networks and continuously tuned their mechanical rigidity (measured by different parameters, such as the distance above isostaticity) to examine the effect of such tuning on material failure. They observed that for a fixed amount of disorder (which the authors measured using a scalar, following the approach in \citep{shekhawat2013damage}), the width of the failure zone in a stressed material increases as the rigidity decreases. Recently, \citep{ReidAuxeticMetamaterials:2017} studied a model of amorphous networks that incorporates angle-bending forces, and employed pruning-based methods to design auxetic metamaterials. In light of these findings, it is natural to ask what network quantities and methods may be related to the above types of global or local mechanical properties, and whether they can inform which bonds to remove (or rearrange \citep{Lopez:2013aa}) to invoke particular types of functional responses. In developing a network-based approach for designing and building new materials, it is desirable to capitalize on the ability of network analysis to quantify multiscale structures (with respect to space, time, and network architectures) in a wide variety of systems (regardless of the exact details of their composition). 
For example, network analysis has revealed mesoscale architectures that are often crucial for determining material properties in disordered media, but such heterogeneities also appear to be important for biological materials, including networks of collagen fibers \citep{Quinn:2011,Zhang:2015}, tissues and tendons \citep{Zhao:2014,Han:2013}, muscle fibers \citep{Pong:2011}, and axonal fibers \citep{Sporns:2013aa, Sporns:2014aa}. Tools from network science should be useful (and, in principle, flexible) for designing nontrivial structural organizations that yield desired material functions. One can imagine using a network-theoretic framework to design localized, mesoscale, and/or system-level properties, to design and manipulate human-made and natural (including biological) materials, and to precisely control both static and dynamic material properties. \section{Conclusions} \label{conclusion} Network science is an interdisciplinary subject --- drawing on methods from physics, mathematics, statistics, computer science, social science, and many other disciplines --- that has been used successfully to help understand the structure and function of many complex systems. Much recent work on networks has yielded fascinating insights into granular materials, which consist of collections of discrete, macroscopic particles whose contact interactions give rise to many interesting behaviors and intricate organization on multiple spatial and temporal scales. These insights have increased scientific understanding of the structure and dynamics of the heterogeneous material architecture that manifests in granular matter (as well as network-based approaches to quantify such architecture) and the response of granular systems to external perturbations such as compression, shear, tapping, and tilting. In this paper, we have reviewed the increasingly fertile intersection of network science and granular materials. 
Future efforts should help provide a better understanding of the physics of particulate matter more generally, elucidate the roles of mesoscale interaction patterns in mechanical failure, inform the design of new materials with desired properties, and further scientific understanding of numerous important problems and applications in granular physics, soft-matter physics more generally, and even biophysics. \section*{Acknowledgements} We thank Alejandro J. Mart\'inez, Ann E. Sizemore, Konstantin Mischaikow, Jen Schwarz, and an anonymous referee for helpful comments. We also thank our collaborators, students, and mentors, who have shaped our views on the subjects of this review. KD is grateful for support from the National Science Foundation (DMR-0644743, DMR-1206808) and the James S. McDonnell Foundation. LP is grateful to the National Science Foundation for a Graduate Research Fellowship. DSB is grateful to the Alfred P. Sloan Foundation, the John D. and Catherine T. MacArthur Foundation, the Paul G. Allen Foundation, and to the National Science Foundation (PHY-1554488). The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies. \newpage
\section{Parameterized by Clique-width}\label{sec:cliquewidth} In this section, we give an $\XP$-time algorithm for \cliquecol parameterized by clique-width, more precisely, parameterized by the equivalent measure module-width. We provide an algorithm that given an $n$-vertex graph $G$ with one of its rooted branch decompositions of module-width $w$ and an integer $k$, decides whether $G$ has a clique coloring with $k$ colors in time $n^{f(w)}$, where $f(w) = 2^{2^{\calO(w)}}$. Before we describe the algorithm, we give a high level outline of its main ideas, and where the double exponential dependence on $w$ in the degree of the polynomial comes from. The algorithm is bottom-up dynamic programming along the given branch decomposition of the input graph. Let $t$ be some node in the branch decomposition. To keep the number of table entries bounded by something that is \XP in the module-width, we have to find a way to group color classes into a number of \emph{types} that is upper bounded by a function of $w$ alone. The intention is that two color classes of the same type are interchangeable with respect to the underlying coloring being completable to a valid clique coloring of the whole graph. Partial solutions (colorings of the subgraph $G_t$) can then be described by remembering, for each type, how many color classes of that type there are. If the number of types is $f(w)$ for some function $f$, this gives an upper bound of $n^{f(w)}$ on the number of table entries at each node of the branch decomposition. Let us discuss what kind of information goes into the definition of a type. Since the final coloring of $G$ has to avoid monochromatic maximal cliques, we maintain information about cliques in $G_t$ that are or may become monochromatic maximal cliques in some extension of the coloring at hand. A natural attempt would be to consider and describe maximal cliques in $G_t$ by their intersection patterns with the equivalence classes of $\sim_t$. 
However, it is not sufficient to consider only maximal cliques in $G_t$; given a maximal clique $X$ in $G_t$, it may happen that in $\overline{V_t}$ there is a vertex $v$ that is adjacent to a strict subset $Y \subset X$ of that clique, forming a maximal clique with $Y$ -- which does not fully contain $X$ -- in a supergraph of $G_t$. Considering the equivalence classes of $\sim_t$, this implies that the equivalence classes containing vertices of $Y$ and the ones containing vertices of $X \setminus Y$ are disjoint. We therefore consider cliques $X$ that are \emph{maximal in the subgraph induced by the equivalence classes containing vertices of $X$}. We call such cliques $X$ \emph{\eqcmax}, and observe that with a little extra information, we can keep track of the forming and disintegrating of \eqcmax cliques along the branch decomposition. If an \eqcmax clique is fully contained in some set of vertices (e.g., a color class) $C$, then we call it \emph{potentially bad for $C$}. A potentially bad clique is described via its \emph{profile}, which consists of the intersection pattern with the equivalence classes of $\sim_t$, and some extra information. At each node, there are at most $2^{\calO(w)}$ profiles. Equipped with this definition, we can define the notion of a $t$-type of a color class $C$, which is simply the subset of profiles at $t$ such that $G_t$ contains a potentially bad clique for $C$ with that profile. It immediately follows that the number of $t$-types is $2^{2^{\calO(w)}}$. Now, colorings $\calC_t$ of $G_t$ are described by their \emph{$t$-signature}, which records how many color classes of each type $\calC_t$ has. There are at most $k^{f(w)}$ many $t$-signatures, where $f(w) = 2^{2^{\calO(w)}}$, and this essentially bounds the runtime of the resulting algorithm to $n\cdot k^{f(w)} = n^{\calO(f(w))}$.
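In implementation terms, the bookkeeping just described amounts to a multiset: a $t$-signature is a counter over $t$-types. The following sketch uses placeholder string labels instead of actual profiles; it only illustrates how colorings with equal signatures collapse to the same table entry.

```python
from collections import Counter

# A t-type of a color class is the set of profiles of its potentially bad
# cliques; a t-signature counts how many color classes have each type.
# The string labels below are placeholders for actual profiles.
def t_signature(types_of_color_classes):
    return Counter(types_of_color_classes)

type_a = frozenset({"profile_1", "profile_2"})
type_b = frozenset()  # a color class with no potentially bad clique
sig = t_signature([type_a, type_a, type_b])
```

Two colorings with equal signatures are treated as interchangeable, so the dynamic-programming table needs one entry per signature; with at most $2^{2^{\calO(w)}}$ types and counts between $0$ and $k$, this matches the $k^{f(w)}$ bound above.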
At the root node $\rootnode \in V(T)$, there is only one equivalence class, namely $V_{\rootnode} = V(G)$, and if, in a coloring, there is a clique that is potentially bad for some color class, then it is indeed a monochromatic maximal clique. Therefore, at the root node, we only have to check whether there is a coloring all of whose color classes have no potentially bad cliques. \newcommand\calS{\ensuremath\mathcal{S}} \newcommand\insiders{\ensuremath\mathcal{Q}} \newcommand\outsiders{\ensuremath\mathcal{P}} \newcommand\bubblebuddies{\ensuremath\mathsf{bb}} \newcommand\prmergeaux{H'} \subsection{Potentially Bad Cliques} We now introduce the main concept used to describe color classes in partial solutions of our algorithms, namely potentially bad cliques. These are cliques that are monochromatic in some subgraph induced by a set of equivalence classes. \begin{definition}[Potentially Bad Clique] Let $G$ be a graph with rooted branch decomposition $(T, \decf)$ and let $t \in V(T)$. A clique $X$ in $G_t$ is called \emph{\eqcmax (in $G_t$)} if it is maximal in $G_t[V(\eqc_t(X))]$. Let $C \subseteq V_t$ and let $X$ be a clique in $G_t$. Then, $X$ is called \emph{potentially bad for $C$ (in $G_t$)} if $X$ is \eqcmax in $G_t$ and $X \subseteq C$. \end{definition} Naturally, it is not feasible to keep track of \emph{all} potentially bad cliques. We therefore capture the most vital information about potentially bad cliques in the following notion of a \emph{profile}. For our algorithm, it is only important to know for a color class whether or not it has some potentially bad clique with a given profile, rather than how many, or what its vertices are. This is key to reducing the amount of information we need to store about partial solutions.
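For concreteness, the definition above can be checked directly. In the following sketch, the graph, the equivalence classes, and all names are illustrative: $X$ must be a clique contained in $C$, and no vertex outside $X$ in the equivalence classes that $X$ touches may be complete to $X$.

```python
def eqc(classes, X):
    """The equivalence classes that contain a vertex of X."""
    return [Q for Q in classes if Q & X]

def potentially_bad(adj, classes, X, C):
    """Direct check of the definition: X is a clique, X is contained in C,
    and X is maximal in the subgraph induced by the classes it touches."""
    if not X <= C:
        return False
    if any(u != v and v not in adj[u] for u in X for v in X):
        return False  # X is not a clique
    touched = set().union(*eqc(classes, X))
    return not any(y not in X and X <= adj[y] for y in touched)

# Toy instance: triangle a-b-c, plus d adjacent to a and b.
adj = {"a": {"b", "c", "d"}, "b": {"a", "c", "d"},
       "c": {"a", "b"}, "d": {"a", "b"}}
classes = [{"a", "b"}, {"c", "d"}]
```

Here $\{a, b\}$ is potentially bad for the color class $\{a, b, c\}$, whereas $\{a, c\}$ is not, since $b$ lies in a touched equivalence class and is complete to it.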
There are two components of a profile of a potentially bad clique $X$; the first one is the set of equivalence classes $\insiders$ containing its vertices, and the second one consists of the equivalence classes $P \notin \insiders$ that have a vertex that is complete to $X$. This is because, at a later stage, $P$ may be merged with an equivalence class containing vertices of $X$ (via the bubbles), in which case $X$ is no longer potentially bad. We illustrate the following definition in Figure~\ref{fig:ccol:profile}. \begin{figure} \centering \includegraphics[height=.1\textheight]{fig/C-profile.pdf} \caption{Illustration of the $C$-profile of a clique $X$ that is potentially bad for a color class $C$, depicted as the shaded areas within the equivalence classes. In this case, we have that $\profile(X \mid C) = (\{Q_1, Q_2\}, \{Q_3, Q_4\})$.} \label{fig:ccol:profile} \end{figure} \begin{definition}[Profile] Let $G$ be a graph with rooted branch decomposition $(T, \decf)$ and let $t \in V(T)$. Let $C \subseteq V_t$ and let $X$ be a clique in $G_t$ that is potentially bad for $C$. The \emph{$C$-profile} of $X$ is a pair of subsets of $V_t/{\sim_t}$, $\profile(X \mid C) \defeq (\insiders, \outsiders)$, where % \begin{align*} \insiders = \eqc_t(X) \mbox{ and } \outsiders = \{P \in \eqcbar_t(X) \mid \exists v \in P \colon X \subseteq N(v)\}. \end{align*} We call the set of all pairs of disjoint subsets of $V_t/{\sim_t}$, where the first coordinate is nonempty, the \emph{profiles at $t$}, formally, % \(\profiles_t \defeq \{(\insiders, \outsiders) \mid \insiders, \outsiders \subseteq V_t/\sim_t \colon \insiders \neq \emptyset \wedge \insiders \cap \outsiders = \emptyset\}.\) \end{definition} \begin{observation}\label{obs:ccol:number:of:profiles} Let $(T, \decf)$ be a rooted branch decomposition. For each $t \in V(T)$, there are at most $2^{\calO(\givenmw)}$ profiles at $t$, where $\givenmw = \modulew(T, \decf)$. 
\end{observation} Let $t \in V(T) \setminus \leaves(T)$ be an internal node with children $r$ and $s$ and operator $(\decaux_t, \bubblemap_r, \bubblemap_s)$, and let $\profile_r \in \profiles_r$ and $\profile_s \in \profiles_s$ be a pair of profiles. We are now working towards a notion that precisely captures when and how a potentially bad clique in $G_r$ for some $C_r \subseteq V_r$ with $C_r$-profile $\pr_r$ can be merged with a potentially bad clique in $G_s$ for some $C_s \subseteq V_s$ with $C_s$-profile $\pr_s$ to obtain a potentially bad clique for $C_r \cup C_s$ in $G_t$. As it turns out, if this is possible, then the profile of the resulting clique only depends on $\pr_r$, $\pr_s$, and the operator of $t$. Note that for now, we focus on the case when the cliques in $G_r$ and $G_s$ are both nonempty, and we discuss the case when one of them is empty below. Before we proceed with this description, we need to introduce some more concepts. We illustrate all of the following concepts in Figure~\ref{fig:bb:comp:merge-profile}. \begin{figure} \centering \includegraphics[height=.25\textheight]{fig/merge-profile.pdf} \caption{Merging a potentially bad clique $X$ in $G_r$ with a potentially bad clique $Y$ in $G_s$ to obtain a potentially bad clique in $G_t$. The color class at hand is depicted in blue and the gray and yellow areas show the (three) bubbles. Note that the equivalence classes $P_1$ and $Q_2$ are bubble buddies of $\eqc_r(X)$ and $\eqc_s(Y)$. Moreover, the profiles of $X$ and $Y$ are compatible, since $\{Q_1, P_2, P_3\}$ is a maximal biclique in $\decaux_t[\{Q_1, P_1, P_2, P_3\}]$. Finally, note that the equivalence class of $\sim_t$ corresponding to the bubble containing $Q_3$ will have a vertex that is complete to the potentially bad clique $X \cup Y$.
} \label{fig:bb:comp:merge-profile} \end{figure} For a set of equivalence classes $\calS \subseteq V_r/{\sim_r} \cup V_s/{\sim_s}$, its \emph{bubble buddies at $t$}, denoted by $\bubblebuddies_t(\calS)$, are the equivalence classes of $V_r/{\sim_r} \cup V_s/{\sim_s}$ that are in the same bubble as some equivalence class in~$\calS$: $$\bubblebuddies_t(\calS) \defeq \{Q_p \mid p \in \{r, s\}, \bubblemap_p(Q_p) \in \bubblemap_r(\calS \cap V_r/{\sim_r}) \cup \bubblemap_s(\calS \cap V_s/{\sim_s})\}.$$ We say that $\pr_r = (\insiders_r, \outsiders_r)$ and $\pr_s = (\insiders_s, \outsiders_s)$ are \emph{compatible} if $\insiders_r \cup \insiders_s$ is a maximal biclique in \begin{align} \label{eq:ccol:pr:compatibility} \prmergeaux_t(\pr_r, \pr_s) \defeq \decaux_t[(\insiders_r \cup \insiders_s) \cup ((\outsiders_r \cup \outsiders_s) \cap \bubblebuddies_t(\insiders_r \cup \insiders_s))]. \end{align} As we show below, the notion of compatibility precisely captures the `merging behavior' of potentially bad cliques. Moreover, for $\pr_r$ and $\pr_s$ compatible, we can immediately construct the profile of the resulting potentially bad clique: the \emph{merge profile} of $\pr_r$ and $\pr_s$ is the profile $\mergeprofile(\pr_r, \pr_s) = (\insiders_t, \outsiders_t)$ such that \begin{itemize} \item $\insiders_t = \bubblemap_r(\insiders_r) \cup \bubblemap_s(\insiders_s)$ and \item $\outsiders_t = \bigcup_{\{o, p\} = \{r, s\}} \{\bubblemap_p(Q_p) \mid Q_p \in \outsiders_p \setminus \bubblebuddies_t(\insiders_r \cup \insiders_s) \colon \insiders_o \subseteq N_{\decaux_t}(Q_p)\}$. \end{itemize} \begin{lemma}\label{lem:ccol:merge:profile:bt} Let $t \in V(T) \setminus \leaves(T)$ be an internal node with children $r$ and $s$ and operator $(\decaux_t, \bubblemap_r, \bubblemap_s)$. For all $p \in \{r, s\}$, let $C_p \subseteq V_p$, let $X_p$ be a clique in $G_p$ that is potentially bad for $C_p$, and let $\pr_p \defeq \profile(X_p \mid C_p) = (\insiders_p, \outsiders_p)$.
If $\pr_r$ and $\pr_s$ are compatible, then $X_t \defeq X_r \cup X_s$ is a clique that is potentially bad for $C_t \defeq C_r \cup C_s$, and $\profile(X_t \mid C_t) = \mergeprofile(\pr_r, \pr_s)$. \end{lemma} \begin{proof} We first argue that $X_t$ is a clique. Since $X_r$ and $X_s$ are cliques, we only have to show that for each $v_r \in X_r$ and $v_s \in X_s$, $v_r v_s \in E(G_t)$. In other words, if $Q_r$ is the equivalence class of~$\sim_r$ containing~$v_r$, and $Q_s$ is the equivalence class of~$\sim_s$ containing $v_s$, then $Q_r Q_s \in E(\decaux_t)$. Now, $Q_r \in \eqc_r(X_r) = \insiders_r$ and $Q_s \in \eqc_s(X_s) = \insiders_s$, and since $\pr_r$ and $\pr_s$ are compatible, we have that $\insiders_r \cup \insiders_s$ is a biclique in $\decaux_t$, therefore $Q_r Q_s \in E(\decaux_t)$. Next, we show that $X_t$ is potentially bad for $C_t$. Since $X_r$ and $X_s$ are potentially bad for $C_r$ and $C_s$, respectively, we have that $X_r \subseteq C_r$ and $X_s \subseteq C_s$, and therefore $X_t = X_r \cup X_s \subseteq C_r \cup C_s = C_t$. It remains to show that $X_t$ is \eqcmax. Suppose not, and let $y \in V(\eqc_t(X_t))$ be a vertex that is complete to $X_t$. First, we know that $y \notin V(\eqc_r(X_r) \cup \eqc_s(X_s))$, for if $y \in V(\eqc_p(X_p))$ for some $p \in \{r, s\}$, then $X_p$ is not \eqcmax, contradicting $X_p$ being potentially bad for $C_p$. On the other hand, we have that $\eqc_t(X_t) = \bubblebuddies_t(\eqc_r(X_r) \cup \eqc_s(X_s)) = \bubblebuddies_t(\insiders_r \cup \insiders_s)$. We may assume that for some $p \in \{r, s\}$, the vertex $y$ is contained in some $Q_p \in \bubblebuddies_t(\insiders_r \cup \insiders_s) \setminus (\insiders_r \cup \insiders_s)$. Assume up to renaming that $p = r$. Since $y$ is complete to $X_t$, we have that $y$ is complete to $X_r$, and therefore $Q_r \in \outsiders_r$. In other words, $Q_r$ is contained in the graph $\prmergeaux_t(\pr_r, \pr_s)$ as described in Equation~\eqref{eq:ccol:pr:compatibility}.
Moreover, since $y$ is complete to $X_s$, we have that $Q_r$ is complete to $\eqc_s(X_s) = \insiders_s$. This implies that $\{Q_r\} \cup \insiders_r \cup \insiders_s$ is a biclique in $\prmergeaux_t(\pr_r, \pr_s)$, contradicting $\pr_r$ and $\pr_s$ being compatible. To conclude the proof, we need to show that $\profile(X_t \mid C_t) = \mergeprofile(\pr_r, \pr_s)$. Let $\mergeprofile(\pr_r, \pr_s) = (\insiders_t, \outsiders_t)$. We first show that $\eqc_t(X_t) = \insiders_t$. % To see that $\insiders_t = \bubblemap_r(\insiders_r) \cup \bubblemap_s(\insiders_s) \subseteq \eqc_t(X_t)$, we observe that for all $Q_p \in \insiders_p$, there is an $x \in X_p \cap Q_p$. This means that $x \in \bubblemap_p(Q_p)$, therefore $X_t \cap \bubblemap_p(Q_p) \neq \emptyset$ and $\bubblemap_p(Q_p) \in \eqc_t(X_t)$. The other inclusion can be argued similarly. Now suppose that $Q_t \in \outsiders_t$. Then, $Q_t = \bubblemap_p(Q_p)$ for some $Q_p \in \outsiders_p \setminus \bubblebuddies_t(\insiders_r \cup \insiders_s)$ with $\insiders_o \subseteq N_{\decaux_t}(Q_p)$. In other words, there is a vertex $v \in Q_p$ that is complete to $X_t$, and $\bubblemap_p(Q_p) \notin \eqc_t(X_t)$. According to the definition of a profile, $Q_t = \bubblemap_p(Q_p)$ is contained in the second coordinate of $\pr_t$. The other inclusion can be shown similarly. \end{proof} Now we show the other direction, i.e.\ that if we have a potentially bad clique for some $C_t \subseteq V_t$ in $G_t$, then its restrictions to $V_r$ and $V_s$ necessarily also form potentially bad cliques for the restriction of $C_t$ to $V_r$ and $V_s$ in $G_r$ and $G_s$, respectively. Furthermore, in that case, the profiles of the resulting cliques are compatible. \begin{lemma}\label{lem:ccol:merge:profile:tb} Let $t \in V(T) \setminus \leaves(T)$ be an internal node with children $r$ and $s$ and operator $(\decaux_t, \bubblemap_r, \bubblemap_s)$. 
Let $C_t \subseteq V_t$, and let $X_t$ be a clique in $G_t$ that is potentially bad for $C_t$. For all $p \in \{r, s\}$, let $X_p \defeq X_t \cap V_p$ and $C_p \defeq C_t \cap V_p$. Suppose that for all $p \in \{r, s\}$, $X_p \neq \emptyset$. Then, for all $p \in \{r, s\}$, $X_p$ is a potentially bad clique for $C_p$, and $\pr_r \defeq \profile(X_r \mid C_r)$ and $\pr_s \defeq \profile(X_s \mid C_s)$ are compatible. \end{lemma} \begin{proof} Since $X_t$ is a potentially bad clique for $C_t$, we have that $X_t \subseteq C_t$, and so for $p \in \{r, s\}$, $X_p \subseteq C_p$. It remains to show that $X_p$ is \eqcmax for all $p \in \{r, s\}$. Up to renaming, it suffices to show that $X_r$ is \eqcmax. Suppose not and let $y \in V(\eqc_r(X_r))$ be a vertex that is complete to $X_r$.
%
Since $X_t$ is a clique in $G_t$, we have that $\eqc_r(X_r) \cup \eqc_s(X_s)$ is a biclique in $\decaux_t$. Therefore, $y$ is also complete to $X_s$ and therefore to $X_t$. Clearly, $y \in V(\eqc_t(X_t))$, and we have a contradiction with $X_t$ being \eqcmax. What remains to be shown is that $\pr_r = (\insiders_r, \outsiders_r)$ and $\pr_s = (\insiders_s, \outsiders_s)$ are compatible. We have already argued that $\insiders_r \cup \insiders_s = \eqc_r(X_r) \cup \eqc_s(X_s)$ is a biclique in $\decaux_t$; we have to show that $\insiders_r \cup \insiders_s$ is a maximal biclique in $\prmergeaux_t \defeq \prmergeaux_t(\pr_r, \pr_s)$ as defined in Equation~\eqref{eq:ccol:pr:compatibility}. Clearly, $\insiders_r \cup \insiders_s \subseteq V(\prmergeaux_t)$, so suppose that $\insiders_r \cup \insiders_s$ is not a maximal biclique in $\prmergeaux_t$. This means that for some $p \in \{r, s\}$, there is some $Q_p \in \outsiders_p \cap \bubblebuddies_t(\insiders_r \cup \insiders_s)$ such that $\{Q_p\} \cup \insiders_r \cup \insiders_s$ is a biclique in $\prmergeaux_t$.
In that case, there is a vertex $y \in Q_p$ that is complete to $X_t$ (since $Q_p \in \outsiders_p$ and $\{Q_p\} \cup \insiders_r \cup \insiders_s$ is a biclique), and $y \in V(\eqc_t(X_t))$ (since $Q_p \in \bubblebuddies_t(\insiders_r \cup \insiders_s)$); we obtained a contradiction with $X_t$ being \eqcmax. \end{proof} As mentioned above, we treat the case when a clique $X_p$ in one of the children $p \in \{r, s\}$ remains potentially bad in $G_t$ separately. This is because in that case, the notion of a maximal biclique in $\prmergeaux_t$ as defined in Equation~\eqref{eq:ccol:pr:compatibility} does not carry over naturally. We formulate the analogous requirements for this case here, and we skip some of the details. Let $t \in V(T) \setminus \leaves(T)$ be an internal node with children $r$ and $s$ and operator $(\decaux_t, \bubblemap_r, \bubblemap_s)$. Let $\pr_r = (\insiders_r, \outsiders_r) \in \profiles_r$. We say that $\pr_r$ is \emph{liftable} if \begin{itemize} \item there is no $Q_s \in \bubblebuddies_t(\insiders_r)$ that is complete to $\insiders_r$ in $\decaux_t$, and \item $\bubblebuddies_t(\insiders_r) \cap \outsiders_r = \emptyset$. \end{itemize} The \emph{lift profile} of $\pr_r$, denoted by $\liftprofile(\pr_r)$, is constructed as the merge profile of $\pr_r$ with the profile of the empty clique; i.e.\ we take $(\insiders_s, \outsiders_s) = (\emptyset, V_s/{\sim_s})$ and apply the definition given above. \begin{lemma}\label{lem:ccol:lift:profile} Let $t \in V(T) \setminus \leaves(T)$ be an internal node with children $r$ and $s$. Let $C_r \subseteq V_r$, $C_s \subseteq V_s$, let $X_r$ be a clique in $G_r$, and let $\pr_r \defeq \profile(X_r \mid C_r)$. Then, $X_r$ is a potentially bad clique for $C_r \cup C_s$ in $G_t$ if and only if $X_r$ is a potentially bad clique for $C_r$ in $G_r$ and $\pr_r$ is liftable, in which case $\profile_t(X_r \mid C_r \cup C_s) = \liftprofile(\pr_r)$.
\end{lemma} \begin{proof} The proof can be done with very similar arguments to those given above and is therefore omitted. One only needs to observe that the notion of `liftable' modulates the notion of a profile being compatible with the profile of the empty clique. \end{proof} \subsection{The type of a color class} We now describe the $t$-type of a color class $C$, which is the subset of profiles at $t$ such that there is a clique in $G_t$ that is potentially bad for $C$, with that $C$-profile. For our algorithm, two color classes with the same type will be interchangeable, therefore we only have to remember the number of color classes of each type. \begin{definition}[$t$-Type] Let $G$ be a graph with rooted branch decomposition $(T, \decf)$, and let $t \in V(T)$. For a set $C \subseteq V_t$, the \emph{$t$-type} of $C$, denoted by $\ctype_t(C)$, is
%
\begin{align*} \ctype_t(C) \defeq \{\pr_t \in \profiles_t \mid \exists \mbox { clique $X$ in $G_t$ which is potentially bad for $C$ and } \profile(X \mid C) = \pr_t\}. \end{align*} We call the set $\ctypes_t = 2^{\profiles_t}$ of all subsets of profiles at $t$ the \emph{$t$-types}. \end{definition} Since for each $t \in V(T)$, $\card{\profiles_t} \le 2^{\calO(w)}$ by Observation~\ref{obs:ccol:number:of:profiles}, the number of $t$-types can be upper bounded as follows. \begin{observation}\label{obs:ccol:number:of:types} Let $(T, \decf)$ be a rooted branch decomposition, and let $t \in V(T)$. There are at most $2^{2^{\calO(w)}}$ many $t$-types, where $w \defeq \modulew(T, \decf)$. \end{observation} In our algorithm we want to be able to determine the $t$-type of the union of a color class in $G_r$ and a color class in $G_s$. This is done via the following notion of a merge type, which is based on the notions of merge and lift profiles given in the previous section. \begin{definition}[Merge Type] Let $G$ be a graph with rooted branch decomposition $(T, \decf)$, let $t \in V(T) \setminus \leaves(T)$ with children $r$ and $s$.
For a pair of an $r$-type $\ctype_r \in \ctypes_r$ and an $s$-type $\ctype_s \in \ctypes_s$, the \emph{merge type} of $\ctype_r$ and $\ctype_s$, denoted by $\mergetype(\ctype_r, \ctype_s)$, is the $t$-type obtained as follows.
%
\begin{align*} \mergetype(\ctype_r, \ctype_s) \defeq\;&\left\lbrace\mergeprofile(\pr_r, \pr_s) \mid \pr_r \in \ctype_r,~\pr_s \in \ctype_s, \mbox{where } \pr_r \mbox{ and } \pr_s \mbox{ are compatible}\right\rbrace \\ &\cup \bigcup\nolimits_{p \in \{r, s\}} \left\lbrace\liftprofile(\pr_p) \mid \pr_p \in \ctype_p, \mbox{where $\pr_p$ is liftable}\right\rbrace \end{align*} \end{definition} \begin{lemma}\label{lem:ccol:merge:type} Let $G$ be a graph with rooted branch decomposition $(T, \decf)$, let $t \in V(T) \setminus \leaves(T)$ with children $r$ and $s$. Let $C_r \subseteq V_r$ and $C_s \subseteq V_s$. Then, $\ctype_t(C_r \cup C_s) = \mergetype(\ctype_r(C_r), \ctype_s(C_s))$. \end{lemma} \begin{proof} Let $C_t \defeq C_r \cup C_s$. For one inclusion, let $\pr_t \in \ctype_t(C_t)$. Then, there is a clique $X_t$ in $G_t$ that is potentially bad for $C_t$ whose $C_t$-profile is $\pr_t$. If for all $p \in \{r, s\}$, $X_p \defeq X_t \cap V_p \neq \emptyset$, then by Lemma~\ref{lem:ccol:merge:profile:tb}, we know that for all $p \in \{r, s\}$, $X_p$ is a potentially bad clique for $C_p \defeq C_t \cap V_p$, therefore $\pr_p \defeq \profile(X_p \mid C_p) \in \ctype_p(C_p)$. Moreover, the lemma asserts that $\pr_r$ and $\pr_s$ are compatible, so by construction, we can conclude that $\pr_t = \mergeprofile(\pr_r, \pr_s) \in \mergetype(\ctype_r(C_r), \ctype_s(C_s))$. On the other hand, if for some $p \in \{r, s\}$, $X_t \subseteq V_p$, then by Lemma~\ref{lem:ccol:lift:profile}, $X_t$ is a potentially bad clique for $C_p$, so $\pr_p \defeq \profile_p(X_t \mid C_p) \in \ctype_p(C_p)$. The lemma also asserts that $\pr_p$ is liftable and that $\liftprofile(\pr_p) = \pr_t$, in which case we also have that $\pr_t \in \mergetype(\ctype_r(C_r), \ctype_s(C_s))$.
We have argued that $\ctype_t(C_t) \subseteq \mergetype(\ctype_r(C_r), \ctype_s(C_s))$. For the other inclusion, suppose that $\pr_t \in \mergetype(\ctype_r(C_r), \ctype_s(C_s))$. Then, either there is a pair of profiles $\pr_r \in \ctype_r(C_r)$, $\pr_s \in \ctype_s(C_s)$ such that $\pr_r$ and $\pr_s$ are compatible and $\pr_t = \mergeprofile(\pr_r, \pr_s)$ or for some $p \in \{r, s\}$, there is a profile $\pr_p \in \ctype_p(C_p)$ that is liftable and $\pr_t = \liftprofile(\pr_p)$. % In the former case, we can use Lemma~\ref{lem:ccol:merge:profile:bt} to conclude that $\pr_t \in \ctype_t(C_t)$, and in the latter case, we have that $\pr_t \in \ctype_t(C_t)$ by Lemma~\ref{lem:ccol:lift:profile}. % This shows that $\mergetype(\ctype_r(C_r), \ctype_s(C_s)) \subseteq \ctype_t(C_t)$ which concludes the proof. % \end{proof} \subsection{The algorithm} We are now ready to describe the algorithm. As alluded to above, partial solutions at a node $t$, i.e.\ colorings of $G_t$, are described via the notion of a \emph{$t$-signature} which records the number of color classes of each type in a coloring. If two colorings have the same $t$-signature, then they are interchangeable as far as our algorithm is concerned. We show that this information suffices to solve the problem in a bottom-up dynamic programming fashion. \begin{definition}[$t$-Signature] Let $k$ be a positive integer. Let $G$ be a graph with rooted branch decomposition $(T, \decf)$, let $t \in V(T)$, and let $\calC = (C_1, \ldots, C_k)$ be a $k$-coloring of $G_t$. % Then, $\csig_{\calC} \colon \ctypes_t \to \{0, 1, \ldots, k\}$ where % \[ \forall \ctype_t \in \ctypes_t\colon \csig_{\calC}(\ctype_t) \defeq \card{\{i \in \{1, \ldots, k\} \mid \ctype(C_i) = \ctype_t\}}, \] is called the \emph{$t$-signature} of $\calC$. 
% The set of \emph{$t$-signatures} is defined as: \[ \csignatures_t \defeq \left\{\csig_t \colon \ctypes_t \to \{0, 1, \ldots, k\} \;\middle|\; \sum\nolimits_{\ctype_t \in \ctypes_t} \csig_t(\ctype_t) = k\right\} \] \end{definition} The following bound on the number of $t$-signatures immediately follows from Observation~\ref{obs:ccol:number:of:types}, stating that the number of $t$-types is upper bounded by $2^{2^{\calO(w)}}$. \begin{observation}\label{obs:number:of:signatures} Let $(T, \decf)$ be a rooted branch decomposition of an $n$-vertex graph, and let $t \in V(T)$. There are at most $k^{2^{2^{\calO(w)}}} \le n^{2^{2^{\calO(w)}}}$ many $t$-signatures, where $w \defeq \modulew(T, \decf)$ and $k$ is the number of colors. \end{observation} \begin{dptabledef} For each $t \in V(T)$ and $\csig_t \in \csignatures_t$, we let $\dptable[t, \csig_t] = 1$ if and only if there is a $k$-coloring $\calC$ of $G_t$ such that $\csig_{\calC} = \csig_t$. \end{dptabledef} We now show that the information stored at the table entries suffices to determine whether or not our input is a \yes-instance; that is, after filling all the table entries, we can read off the solution to the problem at the root node. \begin{lemma}\label{lem:ccol:root} Let $G$ be a graph with rooted branch decomposition $(T, \decf)$, and let $\rootnode$ be the root of $T$. $G$ has a clique coloring with $k$ colors if and only if $\dptable[\rootnode, \csig^\star] = 1$, where $\csig^\star$ is the $\rootnode$-signature for which $\csig^\star(\emptyset) = k$. \end{lemma} \begin{proof} The lemma immediately follows from two facts. First, since $\csig^\star(\emptyset) = k$, we have that $\csig^\star(\ctype_\rootnode) = 0$ for any other $\rootnode$-type $\ctype_\rootnode \neq \emptyset$. 
Second, for each set $C \subseteq V_\rootnode = V(G)$, the set of potentially bad cliques for $C$ is precisely the set of maximal cliques that are fully contained in $C$, i.e.\ it is the set of monochromatic maximal cliques in the corresponding coloring that are contained in $C$. \end{proof} We first describe how to compute the table entries at the leaves, by brute force. \begin{dpleaves} Let $t \in \leaves(T)$ be a leaf node in $T$ and let $v \in V(G)$ be the vertex such that $\decf(v) = t$. We show how to compute the table entries $\dptable[t, \cdot]$. Note that $G_t = (\{v\}, \emptyset)$, and that $\{v\}$ is the only equivalence class of $\sim_t$.
%
To describe the types of color classes of $G_t$, observe that the only \eqcmax clique in $G_t$ is $\{v\} \eqdef X_v$, which is potentially bad for $C_v \defeq \{v\} = X_v$.
%
In that case, we have that $\pr_v \defeq \profile(X_v \mid C_v) = (\{\{v\}\}, \emptyset)$, and the type of color class $C_v$ is $\{\pr_v\}$.
%
The type of the remaining $k-1$ color classes is $\emptyset$, since they are all empty.
%
Therefore, for each $t$-signature $\csig_t$, we set $\dptable[t, \csig_t] \defeq 1$ if and only if $\csig_t(\{\pr_v\}) = 1$ and $\csig_t(\emptyset) = k - 1$. \end{dpleaves} Next, we move on to the computation of the table entries at internal nodes of the branch decomposition. To describe this part of the algorithm, we borrow the following notion of a \emph{merge skeleton} from~\cite{JLL20}.\footnote{Note that in~\cite{JLL20}, the graph structure of the bipartite graph plays a role, in that there are edges only between compatible types. In the present setting, there is no notion of compatibility of color class types, which is why the bipartite graph of the merge skeleton is always complete.} \begin{definition}[Merge skeleton] Let $G$ be a graph and $(T, \decf)$ one of its rooted branch decompositions. Let $t \in V(T) \setminus \leaves(T)$ with children $r$ and $s$.
The \emph{merge skeleton} of $r$ and $s$ is an edge-labeled complete bipartite graph $(\mergeaux, \malab)$ where % \begin{itemize} \item $V(\mergeaux) = \ctypes_r \cup \ctypes_s$, and \item for all $\ctype_r \in \ctypes_r$, $\ctype_s \in \ctypes_s$, $\malab(\ctype_r \ctype_s) = \mergetype(\ctype_r, \ctype_s)$. \end{itemize} \end{definition} \begin{algorithm} \lForEach{$\csig_t \in \csignatures_t$}{set $\dptable[t, \csig_t] \gets 0$} Let $(\mergeaux, \malab)$ be the merge skeleton of $t$\; \ForEach{$\csig_r \in \csignatures_r$, $\csig_s \in \csignatures_s$ such that $\dptable[r, \csig_r] = 1$ and $\dptable[s, \csig_s] = 1$}{ \ForEach{\label{alg:cliquecol:internal:l4}$\maassign \colon E(\mergeaux) \to \{0, 1, \ldots, k\}$ such that \begin{enumerate} \item $\sum_{e \in E(\mergeaux)} \maassign(e) = k$, and \item for all $p \in \{r, s\}$ and all $\ctype_p \in \ctypes_p$, it holds that $\sum_{\ctype_p \ctype_o \in E(\mergeaux)} \maassign(\ctype_p \ctype_o) = \csig_p(\ctype_p)$ \end{enumerate} }{ Let $\csig_t \colon \ctypes_t \to \{0, 1, \ldots, k\}$ be such that for all $\ctype_t \in \ctypes_t$, $\csig_t(\ctype_t) = \sum_{e \in E(\mergeaux), \malab(e) = \ctype_t} \maassign(e)$\label{alg:cliquecol:internal:l6}\; update $\dptable[t, \csig_t] \gets 1$\; } } \caption{Algorithm to set the table entries at an internal node $t \in V(T) \setminus \leaves(T)$ with children $r$ and $s$, assuming the table entries at $r$ and $s$ have been computed.} \label{alg:cliquecol:internal} \end{algorithm} \begin{dpinternal} Let $t \in V(T) \setminus \leaves(T)$ be an internal node with children $r$ and $s$. We discuss how to compute the table entries at $t$, assuming the table entries at $r$ and $s$ have been computed. % Each coloring of $G_t$ can be obtained from a coloring of $G_r$ and a coloring of $G_s$, by merging pairs of color classes. 
% Therefore, for each pair $\csig_r \in \csignatures_r$, $\csig_s \in \csignatures_s$ such that $\dptable[r, \csig_r] = 1$ and $\dptable[s, \csig_s] = 1$, we do the following. We enumerate all labelings of the edge set of the merge skeleton with numbers from $\{0, 1, \ldots, k\}$, with the following interpretation. If an edge $\ctype_r \ctype_s$ has label $j$, then it means that $j$ color classes of $r$-type $\ctype_r$ will be merged with $j$ color classes of $s$-type $\ctype_s$; this gives $j$ color classes of $t$-type $\mergetype(\ctype_r, \ctype_s) = \malab(\ctype_r \ctype_s)$. % Each such labeling that respects the number of color classes available of each type will produce a coloring of $G_t$ with some signature $\csig_t$, which can then be read off the edge labeling. For all such $\csig_t$, we set $\dptable[t, \csig_t] = 1$. We give the formal details in Algorithm~\ref{alg:cliquecol:internal}. \end{dpinternal} We now prove correctness of the algorithm. \begin{lemma}\label{lem:ccol:cor} Let $G$ be a graph and $(T, \decf)$ one of its rooted branch decompositions, and let $t \in V(T)$. The above algorithm computes the table entries $\dptable[t, \cdot]$ correctly, i.e.\ for each $\csig_t \in \csignatures_t$, it sets $\dptable[t, \csig_t] = 1$ if and only if $G_t$ has a $k$-coloring $\calC$ with $\csig_\calC = \csig_t$. \end{lemma} \begin{proof} The proof is by induction on the height of $t$. In the base case, when $t$ is a leaf, it is straightforward to verify correctness. Now suppose that $t \in V(T) \setminus \leaves(T)$ is an internal node with children $r$ and $s$, and let $(\mergeaux, \malab)$ be the merge skeleton at $t$. Suppose for some $t$-signature $\csig_t \in \csignatures_t$, the algorithm set $\dptable[t, \csig_t] = 1$. 
Then, there is some $r$-signature $\csig_r$ and some $s$-signature $\csig_s$ such that $\dptable[r, \csig_r] = 1$, $\dptable[s, \csig_s] = 1$, and there is a map $\maassign\colon E(\mergeaux) \to \{0, 1, \ldots, k\}$ satisfying the conditions of lines~\ref{alg:cliquecol:internal:l4} and~\ref{alg:cliquecol:internal:l6} in Algorithm~\ref{alg:cliquecol:internal}.
%
By induction, there is a $k$-coloring $\calC_r$ of $G_r$ whose $r$-signature is $\csig_r$, and a $k$-coloring $\calC_s$ of $G_s$ whose $s$-signature is $\csig_s$.
%
We construct the desired coloring $\calC_t$ of $G_t$ whose $t$-signature is $\csig_t$ as follows: For each pair of an $r$-type $\ctype_r$ and an $s$-type $\ctype_s$, we take $\maassign(\ctype_r \ctype_s)$ pairs of a color class $C_r$ of $r$-type $\ctype_r$ and a color class $C_s$ of $s$-type $\ctype_s$, and for each such pair, we add $C_r \cup C_s$ as a color class to $\calC_t$. By Lemma~\ref{lem:ccol:merge:type}, the $t$-type of $C_r \cup C_s$ is $\mergetype(\ctype_r, \ctype_s) = \malab(\ctype_r \ctype_s)$.
%
The condition in line~\ref{alg:cliquecol:internal:l4} ensures that each color class of $\calC_r$ and each color class of $\calC_s$ is used precisely once to create a color class of $\calC_t$, and the condition in line~\ref{alg:cliquecol:internal:l6} ensures that the $t$-signature of $\calC_t$ is indeed $\csig_t$. For the other direction, suppose that there is a $k$-coloring $\calC_t$ of $G_t$ with $t$-signature $\csig_t$. We construct a pair of a coloring $\calC_r$ of $G_r$ and a coloring $\calC_s$ of $G_s$, together with their signatures $\csig_r$ and $\csig_s$, respectively, and a map $\maassign\colon E(\mergeaux) \to \{0, 1, \ldots, k\}$. Initially, for all $p \in \{r, s\}$, we let $\calC_p = \emptyset$, and for all $\ctype_p \in \ctypes_p$, $\csig_p(\ctype_p) \defeq 0$. Moreover, we let $\maassign(e) \defeq 0$ for all $e \in E(\mergeaux)$.
For each color class $C_t \in \calC_t$, we add $C_r \defeq C_t \cap V_r$ to $\calC_r$ and $C_s \defeq C_t \cap V_s$ to $\calC_s$. Let $\ctype_t$ be the $t$-type of $C_t$. By Lemma~\ref{lem:ccol:merge:type}, $C_r$ has some $r$-type $\ctype_r$ and $C_s$ has some $s$-type $\ctype_s$ such that $\ctype_t$ is the merge type $\mergetype(\ctype_r, \ctype_s)$ of $\ctype_r$ and $\ctype_s$. % We increase the values of $\csig_r(\ctype_r)$ and $\csig_s(\ctype_s)$ by $1$, since we added one more color class of $r$-type $\ctype_r$ to $\calC_r$, and one more color class of $s$-type $\ctype_s$ to $\calC_s$. % Additionally, we add $1$ to the value of $\maassign(\ctype_r \ctype_s)$, since $C_t$ is a color class of $t$-type $\mergetype(\ctype_r, \ctype_s) = \malab(\ctype_r \ctype_s)$ obtained from merging $C_r$ (a color class of $r$-type $\ctype_r$) with $C_s$ (a color class of $s$-type $\ctype_s$). After doing this for all color classes of $\calC_t$, we have that $\calC_r$ is a $k$-coloring with $r$-signature $\csig_r$, and that $\calC_s$ is a $k$-coloring with $s$-signature $\csig_s$. By induction, $\dptable[r, \csig_r] = 1$ and $\dptable[s, \csig_s] = 1$. % It remains to argue that $\maassign$ satisfies the conditions expressed in lines~\ref{alg:cliquecol:internal:l4} and~\ref{alg:cliquecol:internal:l6} in Algorithm~\ref{alg:cliquecol:internal}. The first item of line~\ref{alg:cliquecol:internal:l4} is clearly satisfied, since we increased $\card{\calC_t} = k$ values of $\maassign$ by $1$ in the above process. The second item holds since we increased the value of some $\csig_p(\ctype_p)$ by $1$ if and only if we increased the value of an edge $e$ incident with $\ctype_p$ in $\mergeaux$ by $1$. 
%
To see that for each $\ctype_t$, $\csig_t(\ctype_t) = \sum_{e \in E(\mergeaux), \malab(e) = \ctype_t} \maassign(e)$, observe that, for each color class of $t$-type $\ctype_t$, we identified an occurrence of $\ctype_t$ as the merge type of a pair of an $r$-type and an $s$-type, and therefore as a label of some edge $e \in E(\mergeaux)$, and increased $\maassign(e)$ by $1$ in such a case.
%
We can conclude that $\csig_t$ can be obtained as shown in line~\ref{alg:cliquecol:internal:l6} of Algorithm~\ref{alg:cliquecol:internal}, and so the algorithm set $\dptable[t, \csig_t] = 1$. \end{proof} \begingroup \newcommand\eff{2^{2^{\calO(w)}}} To wrap up, it remains to argue the runtime of the algorithm. Suppose we are given a graph $G$ with rooted branch decomposition $(T, \decf)$ and let $w \defeq \modulew(T, \decf)$. By Observation~\ref{obs:number:of:signatures}, there are at most $n^{\eff}$ table entries at each node of $T$. The entries of leaf nodes can clearly be computed in time $n^{\calO(1)}$. Now let $t \in V(T) \setminus \leaves(T)$ be an internal node with children $r$ and $s$. To compute all table entries at $t$, we execute Algorithm~\ref{alg:cliquecol:internal}. In the worst case, it loops over each pair of an $r$-signature and an $s$-signature, and given such a pair, it enumerates all labelings of the edges of the merge skeleton $\mergeaux$ with numbers from $\{0, 1, \ldots, k\}$ (such that all entries sum up to $k$). We have that $\card{E(\mergeaux)} = \card{\ctypes_r} \cdot \card{\ctypes_s} = \left(\eff\right)^2 = \eff$ (see Observation~\ref{obs:ccol:number:of:types}), therefore the number of labelings to consider is upper bounded by $k^{\eff} \le n^{\eff}$. The runtime of Algorithm~\ref{alg:cliquecol:internal} can therefore be upper bounded by \begin{align*} \left(n^{\eff}\right)^2 \cdot n^{\eff} = n^{\eff}, \end{align*} and since $\card{V(T)} = \calO(n)$, the runtime of the whole procedure is bounded by the same expression.
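For intuition, the inner enumeration of the algorithm at an internal node can be prototyped directly. The following sketch (an illustration of ours, not part of the formal development; types are represented by opaque string labels and the merge-type map $\malab$ is passed in as a function) brute-forces all edge labelings $\maassign$ of the merge skeleton with the required row and column sums, and collects the resulting $t$-signatures. It is only feasible for toy-sized inputs; the analysis above bounds the number of labelings by $k^{2^{2^{\calO(w)}}}$.

```python
from itertools import product
from collections import Counter

def merge_signatures(sig_r, sig_s, merge_type, k):
    """Brute-force version of the inner loop at an internal node:
    enumerate all labelings alpha of the merge-skeleton edges with
    values in {0, ..., k} summing to k whose row sums match sig_r and
    whose column sums match sig_s, and collect each resulting
    t-signature (as a frozenset of (t-type, count) pairs)."""
    r_types, s_types = sorted(sig_r), sorted(sig_s)
    edges = [(tr, ts) for tr in r_types for ts in s_types]
    results = set()
    for values in product(range(k + 1), repeat=len(edges)):
        if sum(values) != k:
            continue
        alpha = dict(zip(edges, values))
        # each r-type must contribute exactly sig_r[tr] color classes
        if any(sum(alpha[tr, ts] for ts in s_types) != sig_r[tr]
               for tr in r_types):
            continue
        # each s-type must contribute exactly sig_s[ts] color classes
        if any(sum(alpha[tr, ts] for tr in r_types) != sig_s[ts]
               for ts in s_types):
            continue
        # read off the t-signature from the edge labels
        sig_t = Counter()
        for (tr, ts), cnt in alpha.items():
            if cnt:
                sig_t[merge_type(tr, ts)] += cnt
        results.add(frozenset(sig_t.items()))
    return results
```

For example, with $r$-types \texttt{A}, \texttt{B} (one color class each), $s$-types \texttt{C} (two classes) and \texttt{D} (none), and $k = 2$, the only feasible labeling merges one \texttt{A}-class with a \texttt{C}-class and one \texttt{B}-class with a \texttt{C}-class.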
\endgroup Correctness is proved in Lemma~\ref{lem:ccol:cor}, and Lemma~\ref{lem:ccol:root} asserts that the solution to the problem can be read off the table entries at the root, once computed. Using standard memoization techniques, we can modify the above algorithm so that it returns a coloring if one exists. We therefore have the following theorem. \begin{theorem}\label{thm:ccol:xp:cliquew} There is an algorithm that given a graph $G$ together with one of its rooted branch decompositions $(T, \decf)$ and a positive integer $k$, decides whether $G$ has a clique coloring with $k$ colors in time $n^{2^{2^{\calO(w)}}}$, where $w \defeq \modulew(T, \decf)$. If such a coloring exists, the algorithm can construct it. \end{theorem} \section{Introduction} Vertex coloring problems are central in algorithmic graph theory, and appear in many variants. One of these is \cliquecol, which given a graph $G$ and an integer $k$ asks whether $G$ has a clique coloring with $k$ colors, i.e.\ whether each vertex of $G$ can be assigned one of $k$ colors such that there is no monochromatic maximal clique. The notion of a clique coloring of a graph was introduced in 1991 by Duffus et al.~\cite{DuffusEtAl1991}, and it behaves quite differently from the classical notion of a proper coloring, which forbids monochromatic edges. Any proper coloring is a clique coloring, but not vice versa. For instance, a complete graph on $n$ vertices only has a proper coloring with $n$ colors, while it has a clique coloring with two colors. Moreover, proper colorings are closed under taking subgraphs. On the other hand, removing vertices or edges from a graph may introduce new maximal cliques, therefore a clique coloring of a graph is not always a clique coloring of its subgraphs, not even of its induced subgraphs. Also from a complexity-theoretic perspective, \cliquecol behaves very differently from \graphcol. 
Most notably, while it is easy to decide whether a graph has a proper coloring with two colors, Bacs\'{o} et al.~\cite{BacsoEtAl2004} showed that it is already \coNP-hard to decide if a given coloring with two colors is a clique coloring. Marx~\cite{Marx2011} later proved \cliquecol to be $\Sigma_2^p$-complete for every fixed number of (at least two) colors. On the positive side, Cochefert and Kratsch showed that the \cliquecol problem can be solved in $\Ostar(2^n)$ time,\footnote{The $\Ostar$-notation suppresses polynomial factors in the input size, i.e.\ for inputs of size $n$, we have that $\Ostar(f(n)) = \calO(f(n)\cdot n^{\calO(1)})$.} and the problem has been shown to be polynomial-time solvable on several graph classes. Mohar and Skrekovski~\cite{MS1999} showed that all planar graphs are $3$-clique colorable, and Kratochv\'{i}l and Tuza gave an algorithm that decides whether a given planar graph is $2$-clique colorable~\cite{KT2002}. For several graph classes it has been shown that all their members except odd cycles on at least five vertices (which require three colors) are $2$-clique colorable~\cite{BacsoEtAl2004,BacsoTuza2009,CamposEtAl2008,CerioliKorenchendler2009,Defossez2006,KleinMorgana2012,Penev2016,ShanEtAl2014}. Therefore, on these classes \cliquecol is polynomial-time solvable. Duffus et al.~\cite{DuffusEtAl1991} even conjectured in 1991 that perfect graphs are $3$-clique colorable, which was supported by many subclasses of perfect graphs being shown to be $2$- or $3$-clique colorable~\cite{AndreaeEtAl1991,BacsoEtAl2004,ChudnovskyLo2017,Defossez2006,DuffusEtAl1991,MS1999,Penev2016}. However, in 2016, Charbit et al.~\cite{CharbitEtAl2016} showed that there are perfect graphs whose clique colorings require an unbounded number of colors. In this work, we consider \cliquecol from the viewpoint of parameterized algorithms and complexity~\cite{CyganEtAl2015,DowneyFellows2013}. 
In particular, we consider structural parameterizations of \cliquecol by two of the most commonly used decomposition-based width measures of graphs, namely \emph{treewidth} and \emph{clique-width}. Informally speaking, the treewidth of a graph $G$ measures how close $G$ is to being a forest. On dense graphs, the treewidth is unbounded, and clique-width can be viewed as an extension of treewidth that remains bounded on several simply structured dense graphs. Our first main result is a fixed-parameter tractable algorithm for \qcliquecol parameterized by treewidth. More precisely: we show that for any fixed $q \ge 2$, \qcliquecol (asking for a clique coloring with $q$ colors) can be solved in time $\Ostar(q^{\treewidth})$, where $\treewidth$ denotes the width of a given tree decomposition of the input graph. We also show that this running time is likely the best possible in this parameterization; we prove that under the Strong Exponential Time Hypothesis (\SETH), for any~$q \ge 2$, there is no~$\epsilon > 0$ such that \qcliquecol can be solved in time $\Ostar((q-\epsilon)^\treewidth)$. In fact, we rule out $\Ostar((q-\epsilon)^t)$-time algorithms for a much smaller class of graphs than those of treewidth $t$, namely: graphs that have both \emph{pathwidth} and \emph{feedback vertex set number} simultaneously bounded by $t$. Our second main result is an $\XP$ algorithm for \cliquecol with clique-width as the parameter. The algorithm runs in time~$n^{f(w)}$, where $w$ is the clique-width of a given clique decomposition of the input $n$-vertex graph and $f(w) = 2^{2^{\calO(w)}}$. The double-exponential dependence on $w$ in the degree of the polynomial stems from the notorious property of clique colorings which we mentioned above; namely, that taking induced subgraphs does not necessarily preserve clique colorings. This results in a large amount of information that needs to be carried along as the algorithm progresses. The paper is organized as follows.
In Section~\ref{sec:preliminaries}, we introduce the basic concepts that are important in this work; in Section~\ref{sec:treewidth} we give the results on \qcliquecol parameterized by treewidth, and in Section~\ref{sec:cliquewidth} we give the algorithm for \cliquecol parameterized by clique-width. \section{Preliminaries}\label{sec:preliminaries} \paragraph*{Graphs.} All graphs considered here are simple and finite. For a graph $G$ we denote by $V(G)$ and $E(G)$ the vertex set and edge set of $G$, respectively. For an edge $e = uv \in E(G)$, we call $u$ and $v$ the \emph{endpoints} of $e$ and we write $u \in e$ and $v \in e$. For two graphs $G$ and $H$, we say that $G$ is a \emph{subgraph} of $H$, written $G \subseteq H$, if $V(G) \subseteq V(H)$ and $E(G) \subseteq E(H)$. For a set of vertices $S \subseteq V(G)$, the \emph{subgraph of $G$ induced by $S$} is $G[S] \defeq (S, \{uv \in E(G) \mid u, v \in S\})$. For a graph $H$, we say that a graph $G$ is \emph{$H$-free} if $G$ does not contain $H$ as an induced subgraph. For a set of graphs $\calH$, we say that $G$ is \emph{$\calH$-free} if $G$ is $H$-free for all $H \in \calH$. For a graph $G$ and a vertex $v \in V(G)$, the set of its \emph{neighbors} is $N_G(v) \defeq \{u \in V(G) \mid uv \in E(G)\}$. Two vertices $u, v \in V(G)$ are called \emph{false twins} if $N_G(u) = N_G(v)$. We say that a vertex $v$ is \emph{complete} to a set $X \subseteq V(G)$ if $X \subseteq N_G(v)$. The \emph{degree} of $v$ is $\deg_G(v) \defeq \card{N_G(v)}$. The \emph{closed neighborhood} of $v$ is $N_G[v] \defeq \{v\} \cup N_G(v)$. For a set $X \subseteq V(G)$, we let $N_G(X) \defeq \bigcup_{v \in X} N_G(v) \setminus X$ and $N_G[X] \defeq X \cup N_G(X)$. In all these cases, we may drop $G$ as a subscript if it is clear from the context. A graph is called \emph{subcubic} if all its vertices have degree at most three.
A graph $G$ is \emph{connected} if for all $2$-partitions $(X, Y)$ of $V(G)$ with $X \neq \emptyset$ and $Y \neq \emptyset$, there is a pair $x \in X$, $y \in Y$ such that $xy \in E(G)$. A \emph{connected component} of a graph is a maximal connected subgraph. A connected graph is called a \emph{cycle} if all its vertices have degree two. A graph that does not contain a cycle as a subgraph is called a \emph{forest} and a connected forest is a \emph{tree}. In a tree $T$, the vertices of degree one are called the \emph{leaves} of $T$, denoted by $\leaves(T)$, and the vertices in $V(T) \setminus \leaves(T)$ are the \emph{internal vertices} of $T$. A tree of maximum degree two is a \emph{path} and the leaves of a path are called its \emph{endpoints}. A tree $T$ is called a \emph{caterpillar} if it contains a path $P \subseteq T$ such that all vertices in $V(T) \setminus V(P)$ are adjacent to a vertex in $P$. A forest is called a \emph{linear forest} if all its components are paths and a \emph{caterpillar forest} if all its components are caterpillars. A tree $T$ is called \emph{rooted}, if there is a distinguished vertex $r \in V(T)$, called the \emph{root} of $T$, inducing an ancestral relation on $V(T)$: for a vertex $v \in V(T)$, if $v \neq r$, the neighbor of $v$ on the path from $v$ to $r$ is called the \emph{parent} of $v$, and all other neighbors of $v$ are called its \emph{children}. For a vertex $v \in V(T) \setminus \{r\}$ with parent $p$, the \emph{subtree rooted at $v$}, denoted by $T_v$, is the subgraph of $T$ induced by all vertices that are in the same connected component of $(V(T), E(T) \setminus \{vp\})$ as $v$. We define $T_r \defeq T$. A set of vertices $S \subseteq V(G)$ of a graph $G$ is called an \emph{independent set} if $E(G[S]) = \emptyset$. A set of vertices $S \subseteq V(G)$ is a \emph{vertex cover} in $G$ if $V(G) \setminus S$ is an independent set in $G$. A graph $G$ is called \emph{complete} if $E(G) = \{uv \mid u, v \in V(G)\}$. 
A set of vertices $S \subseteq V(G)$ is a \emph{clique} if $G[S]$ is complete. A complete graph on three vertices is called a \emph{triangle}. A graph $G$ is called \emph{bipartite} if its vertex set can be partitioned into two nonempty independent sets, which we will refer to as a \emph{bipartition} of $G$. \paragraph*{Notation for Equivalence Relations.} Let $\Omega$ be a set and $\sim$ an equivalence relation over $\Omega$. For an element $x \in \Omega$ the \emph{equivalence class of $x$}, denoted by $[x]$, is the set $\{y \in \Omega \mid x \sim y\}$. We denote the set of all equivalence classes of $\sim$ by $\Omega/\sim$. \paragraph*{Parameterized Complexity.} We give the basic definitions of parameterized complexity that are relevant to this work and refer to~\cite{CyganEtAl2015,DowneyFellows2013} for details. Let $\Sigma$ be an alphabet. A \emph{parameterized problem} is a set $\Pi \subseteq \Sigma^* \times \bN$, the second component being the \emph{parameter} which usually expresses a structural measure of the input. A parameterized problem $\Pi$ is said to be \emph{fixed-parameter tractable}, or in the complexity class \FPT, if there is an algorithm that for any $(x, k) \in \Sigma^* \times \bN$ correctly decides whether or not $(x, k) \in \Pi$, and runs in time $f(k) \cdot \card{x}^c$ for some computable function $f \colon \bN \to \bN$ and constant $c$. We say that a parameterized problem $\Pi$ is in the complexity class \XP, if there is an algorithm that for each $(x, k) \in \Sigma^* \times \bN$ correctly decides whether or not $(x, k) \in \Pi$, and runs in time $f(k) \cdot \card{x}^{g(k)}$, for some computable functions $f$ and $g$. The concept analogous to \NP-hardness in parameterized complexity is that of \W[1]-hardness, whose formal definition we omit. The basic assumption is that $\FPT \neq \W[1]$, under which no \W[1]-hard problem admits an \FPT-algorithm. For more details, see~\cite{CyganEtAl2015,DowneyFellows2013}.
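To make the two regimes concrete, the following annotation of ours instantiates them with the running times already stated in the introduction: the treewidth-based algorithm of this paper has an \FPT-type running time, while the clique-width-based algorithm has an \XP-type running time.

```latex
% FPT-type vs. XP-type running times, instantiated with the two main results:
\[
  \underbrace{\Ostar\bigl(q^{\treewidth}\bigr)}_{%
    \textnormal{FPT-type: } f(k) \cdot \card{x}^{c}}
  \qquad \textnormal{versus} \qquad
  \underbrace{n^{2^{2^{\calO(w)}}}}_{%
    \textnormal{XP-type: } \card{x}^{g(k)}}
\]
```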
\paragraph*{Strong Exponential Time Hypothesis.} In 2001, Impagliazzo et al.\ conjectured that a brute-force algorithm to solve the \textsc{$q$-SAT} problem, which, given a CNF formula with clauses of size at most $q$, asks whether it has a satisfying assignment, is `essentially optimal.' This conjecture is called the \emph{Strong Exponential Time Hypothesis}, and can be formally stated as follows. (For a survey of conditional lower bounds based on \SETH and related conjectures, see~\cite{Vas15}.) \begin{conjecture-nn}[$\SETH$, Impagliazzo et al.~\cite{ImpagliazzoPaturi2001,ImpagliazzoPaturiZane2001}] For every $\epsilon > 0$, there is a $q \in \calO(1)$ such that \textsc{$q$-SAT} on $n$ variables cannot be solved in time $\Ostar((2-\epsilon)^n)$. \end{conjecture-nn} \subsection{Treewidth} We now define the treewidth and pathwidth of a graph, as well as the notion of a nice tree decomposition that we will use later in this work. \begin{definition}[Treewidth, Pathwidth]\label{def:tw} Let $G$ be a graph. A \emph{tree decomposition} of $G$ is a pair $(T, \calB)$ of a tree $T$ and an indexed family of vertex subsets $\calB = \{B_t \subseteq V(G)\}_{t \in V(T)}$, called \emph{bags}, satisfying the following properties. % \begin{enumerate}[label={(T\arabic*)}, ref={T\arabic*}] \item\label{def:tw:vertex:coverage} $\bigcup_{t \in V(T)} B_t = V(G)$. \item\label{def:tw:edge:coverage} For each $uv \in E(G)$ there exists some $t \in V(T)$ such that $\{u, v\} \subseteq B_t$. \item\label{def:tw:subtrees} For each $v \in V(G)$, let $U_v \defeq \{t \in V(T) \mid v \in B_t\}$ be the nodes in $T$ whose bags contain $v$. Then, $T[U_v]$ is connected. \end{enumerate} The \emph{width} of $(T, \calB)$ is $\max_{t \in V(T)} \card{B_t} - 1$, and the \emph{treewidth} of a graph is the minimum width over all its tree decompositions.
% If $T$ is a path, then $(T, \calB)$ is called a \emph{path decomposition}, and the \emph{pathwidth} of a graph is the minimum width over all its path decompositions. \end{definition} The following notion of a \emph{nice} tree decomposition allows for streamlining the description of dynamic programming algorithms over tree decompositions. \begin{definition}[Nice Tree Decomposition] Let $G$ be a graph and $(T, \calB)$ a tree decomposition of $G$. Then, $(T, \calB)$ is called a \emph{nice tree decomposition}, if $T$ is rooted and each node is of one of the following types. % \begin{description} \item[Leaf.] A node $t \in V(T)$ is a \emph{leaf node}, if $t$ is a leaf of $T$ and $B_t = \emptyset$. \item[Introduce.] A node $t \in V(T)$ is an \emph{introduce node} if it has precisely one child $s$, and there is a unique vertex $v \in V(G) \setminus B_s$ such that $B_t = B_s \cup \{v\}$. In this case we say that \emph{$v$ is introduced at $t$}. \item[Forget.] A node $t \in V(T)$ is a \emph{forget node}, if it has precisely one child $s$, and there is a unique vertex $v \in B_s$ such that $B_t = B_s \setminus \{v\}$. In this case we say that \emph{$v$ is forgotten at $t$}. \item[Join.] A node $t \in V(T)$ is a \emph{join node}, if it has precisely two children $s_1$ and $s_2$, and $B_t = B_{s_1} = B_{s_2}$. \end{description} \end{definition} It is known that any tree decomposition of a graph can be transformed in linear time into a nice tree decomposition of the same width, with a relatively small number of bags. \begin{lemma}[Kloks~\cite{Klo94}]\label{lem:nice:td} Let $G$ be a graph on $n$ vertices, and let $k$ be a positive integer. Any width-$k$ tree decomposition $(T, \calX)$ of $G$ can be transformed in time $\calO(k \cdot \card{V(T)})$ into a nice tree decomposition $(T', \calX')$ of width $k$ such that $\card{V(T')} = \calO(k\cdot n)$.
\end{lemma} \subsection{Clique-Width, Branch Decompositions, and Module-Width} We first define clique-width, introduced by Courcelle and Olariu~\cite{CourcelleOlariu2000}, and then the equivalent measure of \emph{module-width} that we will use in our algorithm. We keep the definition of clique-width slightly informal and refer to~\cite{CourcelleOlariu2000} for more details. Let $G$ be a graph. The \emph{clique-width} of $G$, denoted by $\cliquewidth(G)$, is the minimum number of labels $\{1, \ldots, k\}$ needed to obtain $G$ using the following four operations: \begin{enumerate}[label={\arabic*.}] \item Create a new graph consisting of a single vertex labeled $i$. \item Take the disjoint union of two labeled graphs $G_1$ and $G_2$. \item Add all edges between pairs of vertices of label $i$ and label $j$. \item Relabel every vertex labeled $i$ to label $j$. \end{enumerate} We now turn to the definition of module-width which is based on the notion of a rooted branch decomposition. \begin{definition}[Branch decomposition] Let $G$ be a graph. A \emph{branch decomposition} of $G$ is a pair $(T, \decf)$ of a subcubic tree $T$ and a bijection $\decf \colon V(G) \to \leaves(T)$. If $T$ is a caterpillar, then $(T, \decf)$ is called a \emph{linear branch decomposition}. If $T$ is rooted, then we call $(T, \decf)$ a \emph{rooted branch decomposition}. In this case, for $t \in V(T)$, we denote by $T_t$ the subtree of $T$ rooted at $t$, and we define $V_t \defeq \{v \in V(G) \mid \decf(v) \in \leaves(T_t)\}$, $\overline{V_t} \defeq V(G) \setminus V_t$, and $G_t \defeq G[V_t]$. \end{definition} Module-width is attributed to Rao~\cite{Rao2006,Rao2008}. On a high level, the module-width of a rooted branch decomposition bounds, at each of its nodes $t$, the maximum number of subsets of $\overline{V_t}$ that make up the intersection of $\overline{V_t}$ with the neighborhood of some vertex in $V_t$.
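To illustrate the four clique-width operations listed above, here is a small example of our own (the shorthand $v\langle i\rangle$ for creating vertex $v$ with label $i$, $\oplus$ for disjoint union, $\eta_{i,j}$ for adding all edges between labels $i$ and $j$, and $\rho_{i \to j}$ for relabeling is ours): the path $a\,b\,c\,d$ on four vertices can be constructed with three labels, witnessing that its clique-width is at most three.

```latex
% Building the path a-b-c-d with the labels {1,2,3}:
\begin{align*}
  G_1 &= \eta_{1,2}\bigl(a\langle 1\rangle \oplus b\langle 2\rangle\bigr)
        &&\text{creates the edge } ab,\\
  G_2 &= \eta_{2,3}\bigl(G_1 \oplus c\langle 3\rangle\bigr)
        &&\text{creates the edge } bc,\\
  G_3 &= \eta_{3,1}\bigl(\rho_{1\to 2}(G_2) \oplus d\langle 1\rangle\bigr)
        &&\text{creates the edge } cd.
\end{align*}
```

Note that the relabeling $\rho_{1 \to 2}$ in the last step is what frees up label $1$ for the new vertex $d$ without creating the unwanted edge $ad$.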
\begin{definition}[Module-width] Let $G$ be a graph, and $(T, \decf)$ be a rooted branch decomposition of $G$. For each $t \in V(T)$, let $\sim_t$ be the equivalence relation on $V_t$ defined as follows: % \begin{align*} \forall u, v \in V_t \colon u \sim_t v \Leftrightarrow N_G(u) \cap \overline{V_t} = N_G(v) \cap \overline{V_t} \end{align*} The \emph{module-width} of $(T, \decf)$ is $\mw(T, \decf) \defeq \max_{t \in V(T)} \card{V_t/{\sim_t}}$. % The \emph{module-width of $G$}, denoted by $\modulew(G)$, is the minimum module width over all rooted branch decompositions of $G$. \end{definition} We introduce some notation. For a node $t \in V(T)$ and a set $S \subseteq V(G_t)$, we let \(\eqc_t(S)\) be the set of all equivalence classes of \(\sim_{t}\) which have a nonempty intersection with \(S\), and \(\eqcbar_t(S)\) be the remaining equivalence classes of \(\sim_{t}\). Formally, $\eqc_t(S) \defeq \{Q \in V_t/{\sim_t} \mid Q \cap S \neq \emptyset\}$ and $\eqcbar_t(S) \defeq V_t/{\sim_t} \setminus \eqc_t(S)$. Moreover, for a set of equivalence classes $\calQ \subseteq V_t/{\sim_t}$, we let $V(\calQ) \defeq \bigcup_{Q \in \calQ} Q$. Let $(T, \decf)$ be a rooted branch decomposition of a graph $G$ and let $t \in V(T)$ be a node with children $r$ and $s$. We now describe an operator associated with $t$ that tells us how the graph $G_t$ is formed from its subgraphs $G_r$ and $G_s$, and how the equivalence classes of $\sim_t$ are formed from the equivalence classes of $\sim_r$ and $\sim_s$. Concretely, we associate with $t$ a bipartite graph $\decaux_t$ on bipartition $(V_r/{\sim_r}, V_s/{\sim_s})$ such that: \begin{enumerate} \item $E(G_t) = E(G_r) \cup E(G_s) \cup F$, where $F = \{uv \mid u \in V_r, v \in V_s, \{[u], [v]\} \in E(\decaux_t)\}$, and \item there is a partition $\calP = \{P_1, \ldots, P_h\}$ of $V(\decaux_t)$ such that $V_t/{\sim_t} = \{Q_1, \ldots, Q_h\}$, where for $1 \le i \le h$, $Q_i = \bigcup_{Q \in P_i} Q$. 
For each $1 \le i \le h$, we call $P_i$ the \emph{bubble} of the resulting equivalence class $\bigcup_{Q \in P_i} Q$ of $\sim_t$. \end{enumerate} As auxiliary structures, for $p \in \{r, s\}$, we let $\bubblemap_p \colon V_p/{\sim_p} \to V_t/{\sim_t}$ be the map such that for all $Q_p \in V_p/{\sim_p}$, $Q_p \subseteq \bubblemap_p(Q_p)$, i.e.\ $\bubblemap_p(Q_p)$ is the equivalence class of $\sim_t$ whose bubble contains $Q_p$. We call $(\decaux_t, \bubblemap_r, \bubblemap_s)$ the \emph{operator} of $t$. \begin{theorem}[Rao, Thm.~6.6 in~\cite{Rao2006}]\label{thm:cw:mw} For any graph $G$, $\modulew(G) \le \cliquewidth(G) \le 2 \cdot \modulew(G)$. Moreover, given a decomposition of bounded clique-width, a decomposition of bounded module-width can be constructed in time $\calO(n^2)$, and vice versa, where $n = \card{V(G)}$. \end{theorem} \subsection{Colorings} Let $G$ be a graph. An ordered partition $\calC = (C_1, \ldots, C_k)$ of $V(G)$ is called a \emph{coloring} of $G$ with $k$ colors, or a \emph{\(k\)-coloring} of \(G\). (Observe that for $i \in \{1, \ldots, k\}$, $C_i$ may be empty.) For $i \in \{1, \ldots, k\}$, we call $C_i$ the \emph{color class $i$}, and say that the vertices in $C_i$ \emph{have color $i$}. $\calC$ is called \emph{proper} if for all $i \in \{1, \ldots, k\}$, $C_i$ is an independent set in $G$. A coloring $\calC = (C_1, \ldots, C_k)$ of a graph $G$ is called a \emph{clique coloring (with $k$ colors)} if there is no monochromatic maximal clique, i.e.\ no maximal clique $X$ in $G$ such that $X \subseteq C_i$ for some $i$. In this work, we study the following computational problems. \fancyproblemdef {\cliquecol} {Graph $G$, integer $k$} {Does $G$ have a clique coloring with $k$ colors?} \vspace{-2em} \fancyproblemdef {\qcliquecol for $q \ge 2$} {Graph $G$} {Does $G$ have a clique coloring with $q$ colors?} The \textsc{$q$-Coloring} and \textsc{$q$-List Coloring} problems also make an appearance.
In the former, we are given a graph $G$ and the question is whether $G$ has a proper coloring with $q$ colors. In the latter, we are additionally given a list $L(v) \subseteq \{1, \ldots, q\}$ for each vertex $v \in V(G)$, and require the color of each vertex to be from its list. Whenever convenient, we alternatively denote a coloring of a graph with $k$ colors as a map $\phi\colon V(G) \to \{1, \ldots, k\}$. In this case, for a set $S \subseteq V(G)$, the restriction of $\phi$ to $S$ is the map $\phi|_S\colon S \to \{1, \ldots, k\}$ with $\phi|_S(v) = \phi(v)$ for all $v \in S$. For any $T \subseteq V(G)$ with $S \subseteq T$, we say that $\phi|_T$ extends $\phi|_S$. \section{Parameterized by Treewidth}\label{sec:treewidth} In this section, we consider the \qcliquecol problem, for fixed $q \ge 2$, parameterized by treewidth. First, in Section~\ref{sec:ccol:treewidth:algo}, we show that if we are given a tree decomposition of width $\treewidth$ of the input graph, then \qcliquecol can be solved in time $\Ostar(q^{\treewidth})$. After that, in Section~\ref{sec:ccol:treewidth:lb}, we show that this is tight under \SETH, by providing one reduction ruling out $\Ostar((2-\epsilon)^{\treewidth})$-time algorithms for \qqcliquecol{2} and another one ruling out $\Ostar((q-\epsilon)^{\treewidth})$-time algorithms for \qcliquecol when $q \ge 3$. \subsection{Algorithm}\label{sec:ccol:treewidth:algo} The algorithm is bottom-up dynamic programming along the given tree decomposition of the input graph. As a subroutine, we will have to be able to check, at each bag $B_t$, if some subset $S \subseteq B_t$ contains a maximal clique in $G[B_t]$. Doing this by brute force would add a \emph{multiplicative} factor of roughly~$2^{\treewidth}\cdot \treewidth^{\calO(1)}$ to the runtime, which we cannot afford.
To avoid this increase in the runtime, we use fast subset convolution to build an oracle $\cliqueoracle_t$ that, once constructed, can tell us in constant time whether or not any subset $S \subseteq B_t$ contains a maximal clique in $G[B_t]$, for each node $t$. Since it suffices to construct this oracle once per node, this will incur only an \emph{additive} factor of $2^{\treewidth}\cdot\treewidth^{\calO(1)}$ per node in the runtime, which does not increase the worst-case complexity for any~$q \ge 2$. \begin{proposition}\label{prop:clique:oracle} There is an algorithm that given a graph $G$ on $n$ vertices, constructs an oracle $\cliqueoracle_G$ in time $\Ostar(2^n)$, such that given a set $S \subseteq V(G)$, $\cliqueoracle_G$ returns in constant time whether or not $S$ contains a clique that is maximal in $G$. \end{proposition} \begin{proof} Before we proceed with the proof, recall that for a set $\Omega$, and two functions $\alpha$ and $\beta$ defined on $2^{\Omega}$, their \emph{subset convolution} \(\convol\) is defined as: for all $S \in 2^{\Omega}$, $(\alpha \convol \beta)(S) = \sum_{T \subseteq S}\alpha(T)\beta(S \setminus T)$. % Let $f\colon 2^{V(G)} \to \{0, 1\}$ be the function defined as follows. For all $X \subseteq V(G)$, we let % \begin{align*} f(X) \defeq\left\lbrace \begin{array}{ll} 1, &\mbox{if $X$ contains a maximal clique,} \\ 0, &\mbox{otherwise.} \end{array} \right. \end{align*} To prove the statement, we have to show how to compute all values of $f$ within the claimed time bound. % We define $g \colon 2^{V(G)} \to \{0, 1\}$ to be the function such that for all $X \subseteq V(G)$, $g(X) = 1$ if and only if $X$ is a maximal clique in $G$. The values of $g$ can be computed in time $\Ostar(2^n)$ by brute force. We define a function $c \colon 2^{V(G)} \to \{0, 1\}$ as $c(X) = 1$ for all $X \subseteq V(G)$, and we let $h = g \convol c$, which can be computed in time $\Ostar(2^n)$~\cite{BHKK07}.
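As a sanity check of this construction, the functions $g$, $c$, $h$, and $f$ can be computed explicitly on a small graph. The following sketch is our own and serves illustration only: it uses the naive $\calO(3^n)$ enumeration of submasks in place of the $\Ostar(2^n)$ fast subset convolution of~\cite{BHKK07}.

```python
from itertools import combinations

def is_maximal_clique(adj, S, n):
    """S is a set of vertices; adj maps each vertex to its neighbor set."""
    # Clique test: every pair in S must be adjacent.
    if any(u not in adj[v] for u, v in combinations(S, 2)):
        return False
    # Maximality test: no outside vertex is adjacent to all of S.
    return all(not S <= adj[v] for v in range(n) if v not in S)

def contains_maximal_clique(adj, n):
    """Return f as a list indexed by bitmask, via the convolution h = g * c."""
    g = {}
    for mask in range(1 << n):
        S = frozenset(v for v in range(n) if mask >> v & 1)
        g[mask] = 1 if S and is_maximal_clique(adj, S, n) else 0
    # Naive subset convolution with c == 1: h(X) = sum over T subset of X of g(T).
    h = [0] * (1 << n)
    for mask in range(1 << n):
        T = mask
        while True:
            h[mask] += g[T]
            if T == 0:
                break
            T = (T - 1) & mask  # next submask of mask
    # f(X) = 1 iff at least one subset of X is a maximal clique.
    return [1 if x >= 1 else 0 for x in h]

# Path 0-1-2: the maximal cliques are exactly the two edges {0,1} and {1,2}.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
f = contains_maximal_clique(adj, 3)
```

On this path, the set $\{0,2\}$ (mask `0b101`) contains no maximal clique, while $\{0,1\}$ (mask `0b011`) does, matching the intended semantics of $f$.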
For each set $X$, we have that there are $h(X)$ subsets of $X$ that are a maximal clique in $G$. Finally, we obtain $f$ as % \begin{align*} \forall X \subseteq V(G)\colon f(X) \defeq\left\lbrace \begin{array}{ll} 1, &\mbox{if } h(X) \ge 1, \\ 0, &\mbox{otherwise,} \end{array} \right. \end{align*} % which costs an additional $\Ostar(2^n)$ in the runtime. \end{proof} \begin{theorem}\label{thm:ccol:alg:tw} For any fixed $q \ge 2$, there is an algorithm that given an $n$-vertex graph $G$ and a tree decomposition of $G$ of width $\treewidth$, decides whether $G$ has a clique coloring with $q$ colors in time~$\calO(q^{\treewidth} \cdot \treewidth^{\calO(1)} \cdot n)$, and constructs one such coloring, if it exists. \end{theorem} \begin{proof} First, we transform the given tree decomposition of $G$ into a nice tree decomposition $(T, \calB)$. This can be done in $\calO(n)$ time by Lemma~\ref{lem:nice:td}. We may assume that the bags at leaf nodes are empty, and that $T$ is rooted in some node $\rootnode \in V(T)$, and $B_{\rootnode} = \emptyset$. We do standard bottom-up dynamic programming along $T$. % Let $t \in V(T)$. A partial solution is a $q$-coloring of $G_t$ that satisfies one additional property. Suppose that in some coloring of $G_t$, there is a monochromatic maximal clique $X$ in $G_t$ that has some vertex $v \in V_t \setminus B_t$. Then, $v$ has no neighbors in $V(G) \setminus V_t$, therefore $X$ is also a maximal clique in $G$. This means that the present coloring cannot be extended to a coloring in which $X$ becomes non-maximal, and therefore we can disregard it. In light of this, we define the table entries as follows.
% For each $t \in V(T)$ and function $\coloring_t \colon B_t \to \{1, \ldots, q\}$, we let $\dptable[t, \coloring_t] = 1$ if and only if there is a $q$-coloring $\coloring$ of $G_t$ such that % \begin{itemize} \item $\coloring|_{B_t} = \coloring_t$, and \item for each clique $X$ in $G_t$ that is monochromatic under $\coloring$, $X \subseteq B_t$. \end{itemize} Since $B_{\rootnode} = \emptyset$, we can immediately observe that the solution to the instance can be read off the table entries at the root node, once computed. % Throughout the following we denote by $\gamma_\emptyset$ the $q$-coloring defined on an empty domain. % \begin{nestedobservation} $G$ has a clique coloring with $q$ colors if and only if $\dptable[\rootnode, \gamma_\emptyset] = 1$. \end{nestedobservation} We now show how to compute the table entries for the different types of nodes, assuming that the table entries at the children, if any, have previously been computed. % \begin{description} \item[Leaf Node.] If $t$ is a leaf node, then $B_t = \emptyset$ and we only have to consider the empty coloring. We set $\dptable[t, \gamma_\emptyset] = 1$. % \item[Introduce Node.] Let $t \in V(T)$ be an introduce node with child $s$, and let $v$ be the vertex introduced at $t$, i.e.\ we have that $B_t = B_s \cup \{v\}$. Since $V_t \setminus B_t = (V_t \setminus \{v\}) \setminus (B_t \setminus \{v\}) = V_s \setminus B_s$, and since $v$ has no neighbors in $V_t \setminus B_t$ by the properties of a tree decomposition, it is clear that a coloring of $G_t$ has a monochromatic maximal clique with a vertex in $V_t \setminus B_t$ if and only if its restriction to $V_s$ is a coloring of $G_s$ that has a monochromatic maximal clique with a vertex in $V_s \setminus B_s$. % Therefore, for each $\coloring_t \colon B_t \to \{1, \ldots, q\}$, we simply let $\dptable[t, \coloring_t] = 1$ if and only if $\dptable[s, \coloring_t|_{B_s}] = 1$. % \item[Join Node.] 
Let $t \in V(T)$ be a join node with children $s_1$ and $s_2$ and recall that $B_t = B_{s_1} = B_{s_2}$. In this case, for any $\coloring_t\colon B_t \to \{1, \ldots, q\}$, $G_t$ has a $q$-coloring $\coloring$ with $\coloring|_{B_t} = \coloring_t$ without a monochromatic maximal clique in $V_t \setminus B_t$ if and only if the analogous condition holds for both $G_{s_1}$ and $G_{s_2}$. Therefore, for all such $\coloring_t$, we let $\dptable[t, \coloring_t] = 1$ if and only if $\dptable[s_1, \coloring_t] = \dptable[s_2, \coloring_t] = 1$. % \item[Forget Node.] Let $t \in V(T)$ be a forget node with child $s$ and let $v$ be the vertex forgotten at $t$, i.e.\ $B_s = B_t \cup \{v\}$. % A partial solution at node $s$ may have a monochromatic maximal clique using the vertex $v$, provided that the clique is fully contained in $B_s$, while partial solutions at the node $t$ may not. % Therefore, for a given coloring $\coloring_t\colon B_t \to \{1, \ldots, q\}$, we can check whether or not there is a partial solution in $G_t$ whose restriction to $B_t$ is equal to $\coloring_t$ as follows. % For each color $c \in \{1, \ldots, q\}$, extend $\coloring_t$ to a coloring $\coloring_s$ of $B_s$ by assigning vertex $v$ color $c$. Check if there is a partial solution at node $s$ whose restriction to $B_s$ is $\coloring_s$, and if there is no maximal clique in $N(v) \cap \coloring_s^{-1}(c)$. % If this is the case for some color $c$, then we set $\dptable[t, \coloring_t] = 1$, and if it is not the case for any color, then we set $\dptable[t, \coloring_t] = 0$. % It is clear from this description that this is correct. % % However, we have to apply one additional trick to ensure that we do not exceed the targeted runtime bound. % Since in the worst case, there are $q^{\treewidth}$ many colorings to consider (note that since $t$ is a forget node, we have that $\card{B_t} \le \treewidth$), % we can only spend constant time for the computation of each entry $\dptable[t, \coloring_t]$. 
Verifying if $N(v) \cap \coloring_s^{-1}(c)$ contains a maximal clique may take time $\Ostar(2^{\treewidth})$ in the worst case. % We overcome this issue by constructing a maximal clique containment oracle $\cliqueoracle_t$ of $G[B_t]$ using Proposition~\ref{prop:clique:oracle}. Once constructed, this oracle allows for checking whether a set contains a maximal clique in constant time. % We describe the entire procedure of how to compute table entries at forget nodes in Algorithm~\ref{alg:ccol:tw:forget}. % \end{description} % \begin{algorithm} \SetKwInOut{Input}{Input} \Input{$G$, $(T, \calB)$ as above, forget node $t \in V(T)$} % Let $v \in B_s \setminus B_t$ be the vertex forgotten at $t$\; Construct the clique oracle $\cliqueoracle_t$ of $G[B_t]$ using Proposition~\ref{prop:clique:oracle}\; \ForEach{$\coloring_t \colon B_t \to \{1, \ldots, q\}$}{ $\dptable[t, \coloring_t] \gets 0$\; \ForEach{$c \in \{1, \ldots, q\}$}{ Let $\coloring_s \colon B_s \to \{1, \ldots, q\}$ be such that for all $u \in B_t$, $\coloring_s(u) = \coloring_t(u)$, and $\coloring_s(v) = c$\; \If{$\dptable[s, \coloring_s] = 1$}{ \lIf{$\cliqueoracle_t(N(v) \cap \coloring_t^{-1}(c)) = 0$}{$\dptable[t, \coloring_t] \gets 1$} } } } % \caption{Algorithm to compute all table entries at a forget node $t$ with child $s$, assuming all table entries at $s$ have been computed. (Notation: For a set $S \subseteq B_t$, $\cliqueoracle_t(S) = 0$ if and only if $S$ contains no clique that is maximal in $G[B_t]$.)} \label{alg:ccol:tw:forget} \end{algorithm} This completes the description of the algorithm. Correctness follows from the description of the computation of the table entries, by induction on the height of each node. For the runtime, observe that there are at most $q^{\treewidth + 1}$ table entries to consider at each node, and it is clear that the computation of a table entry at a leaf, introduce, or join node takes constant time.
For forget nodes, the construction of $\cliqueoracle_t$ takes time~$\calO(2^{\treewidth} \cdot \treewidth^{\calO(1)}) \le \calO(q^{\treewidth} \cdot \treewidth^{\calO(1)})$ by Proposition~\ref{prop:clique:oracle}. % After that, computation of each table entry takes $\calO(q) = \calO(1)$ time. Therefore, the total time to compute the table entries at a forget node is $\calO(q^{\treewidth} \cdot \treewidth^{\calO(1)})$. Since $\card{V(T)} = \calO(\treewidth \cdot n)$, the total runtime of the algorithm is~$\calO(q^{\treewidth} \cdot \treewidth^{\calO(1)} \cdot n)$. % Using memoization techniques, the algorithm can construct a coloring, if one exists. \end{proof} \subsection{Lower Bound}\label{sec:ccol:treewidth:lb} In this section we show that the previously presented algorithm is optimal under \SETH. In fact, we prove hardness for a much larger parameter, namely the distance to a linear forest (for $q = 2$), and the distance to a caterpillar forest (for $q \ge 3$). Note that both paths and caterpillars have pathwidth $1$, and clearly, they do not contain any cycles. Therefore, a lower bound parameterized by the (vertex deletion) distance to a linear/caterpillar forest implies a lower bound for the parameter pathwidth plus feedback vertex set number. For $q = 2$, we give a reduction from \textsc{$s$-Not-All-Equal SAT} ($s$-\naesat) on $n$ variables. Cygan et al.~\cite{CyganEtAl2016} showed that under \SETH, for any $\epsilon > 0$, there is some constant $s$ such that $s$-\naesat cannot be solved in time $\Ostar((2-\epsilon)^n)$. For all $q \ge 3$, we reduce from \textsc{$q$-List Coloring}, where we are given a graph $G$ and a list for each of its vertices which is a subset of $\{1, 2, \ldots, q\}$, and the question is whether $G$ has a proper coloring such that each vertex receives a color from its list. 
Parameterized by the size $t$ of a deletion set to a linear forest, this problem is known to have no $\Ostar((q-\epsilon)^t)$-time algorithms under \SETH~\cite{JaffkeJansen2017}. Our construction uses the fact that on triangle-free graphs, the proper colorings and the clique colorings coincide, and exploits properties of Mycielski graphs. We first give the lower bound for the case $q = 2$. We would like to remark that Kratochv\'{i}l and Tuza~\cite{KT2002} gave a reduction from \textsc{Not-All-Equal SAT} to \qqcliquecol{2} as well, but their reduction does not imply the fine-grained lower bound we aim for here: the resulting graph is at distance $2n$ to a disjoint union of cliques of constant size (at most $s$). This only rules out $\Ostar((\sqrt{2}-\epsilon)^t)$-time algorithms parameterized by pathwidth, and does not give any lower bound if the feedback vertex set number is another component of the parameter. \begin{theorem} For any $\epsilon > 0$, \qqcliquecol{2} parameterized by the distance $t$ to a linear forest cannot be solved in time $\Ostar((2-\epsilon)^t)$, unless \SETH fails. \end{theorem} \begin{proof} We give a reduction from the well-known $s$-\naesat problem, in which we are given a boolean CNF formula $\cnff$ whose clauses are of size at most $s$, and the question is whether there is a truth assignment to the variables of $\cnff$, such that in each clause, at least one literal evaluates to true and at least one literal evaluates to false. Let $\cnff$ be a boolean CNF formula on $n$ variables $x_1,\ldots, x_n$ with maximum clause size $s$. We denote by $\clauses(\cnff)$ the set of clauses of $\cnff$ and by $\variables(C)$ the set of variables that appear in the clause $C$ of $\cnff$. Given $\cnff$, we construct an instance $G_\phi$ for \qqcliquecol{2} as follows. For each variable $x_i$, we create a vertex $v_i$ in $G_\phi$. Let $V'=\{v_1,\ldots,v_n\}$. For each set $S$ of variables, let $V_S=\{v_i~|~x_i\in S\}$.
For each clause $C_i$ of $\cnff$, we add the following clause gadget to $G_\phi$. If $C_i$ is monotone, add a path on four vertices to $G_\phi$, the end vertices of which are $a_i$ and $b_i$. Make $N(a_i)\cap V'=N(b_i)\cap V'=V_{\variables(C_i)}$, and make $V_{\variables(C_i)}\subseteq V'$ a clique. If $C_i$ is not monotone, let $\positive(C_i)$ (resp.\ $\negative(C_i)$) denote the set of variables with positive (resp.\ negative) literals in $C_i$. Add a path on three vertices to $G_\phi$, the end vertices of which are $a_i$ and $b_i$, make $N(a_i)\cap V'=V_{\positive(C_i)}$ and make $V_{\positive(C_i)}$ a clique. Analogously, make $N(b_i)\cap V'=V_{\negative(C_i)}$ and make $V_{\negative(C_i)}$ a clique. Finally, add two adjacent vertices $u,v$ to $G_\phi$ and make $N[u]=N[v]=\{u,v\}\cup V'$. See Figure~\ref{fig:reduction1}. \begin{figure} \centering \includegraphics[width=8cm]{fig/reduction1.pdf} \caption{Depiction of $G_\phi$ with two clauses, namely a monotone clause $C_1=\neg x_1\vee \neg x_2 \vee \neg x_3 \vee \neg x_4$ and a non-monotone clause $C_2=x_4 \vee x_5 \vee \neg x_6\vee\neg x_7$. Note that $G_\phi - V'$ is a linear forest.} \label{fig:reduction1} \end{figure} We will show that $G_\phi$ is a yes-instance of \qqcliquecol{2} if and only if $\cnff$ is a yes-instance of $s$-\naesat. We first make the following observation about the maximal cliques of $G_\phi$, which follows directly from the fact that the vertices $u$ and $v$ are complete to $V'$. \begin{nestedobservation}\label{obs:maxcliquesV} The vertices $u$ and $v$ belong to every maximal clique of $G_\phi[V'\cup\{u,v\}]$. \end{nestedobservation} \begin{nestedclaim}\label{claim:aibi} Let $f: V(G_\phi) \rightarrow \{0,1\}$ be a 2-clique coloring of $G_\phi$ and $C_i$ be a clause of $\cnff$. If $C_i$ is monotone, then $f(a_i)\neq f(b_i)$. Otherwise, $f(a_i)=f(b_i)$.
\end{nestedclaim} \begin{claimproof} If $C_i$ is monotone, $a_i$ and $b_i$ are the end vertices of a path on four vertices, each edge of which is a maximal clique of $G_\phi$. Thus, $f(a_i)\neq f(b_i)$ in any 2-clique coloring $f$ of $G_\phi$. Similarly, if $C_i$ is not monotone, $a_i$ and $b_i$ are the end vertices of a path on three vertices, each edge of which is a maximal clique of $G_\phi$. Hence $f(a_i)=f(b_i)$. \end{claimproof} Now, suppose $G_\phi$ has a 2-clique coloring $f: V(G_\phi) \rightarrow \{0,1\}$. We construct a truth assignment for $\{x_1,\ldots,x_n\}$ according to the colors assigned to the vertices of $V'$ by $f$. That is, if $f(v_i)=0$, we set $x_i$ to false, and if $f(v_i)=1$, we set $x_i$ to true. We will now show that this assignment satisfies all clauses of $\cnff$. Let $C_i$ be a clause of $\cnff$. First, assume that $C_i$ is monotone. By Claim~\ref{claim:aibi}, $f(a_i)\neq f(b_i)$. Since $V_{\variables(C_i)}\cup\{a_i\}$ is a maximal clique of $G_\phi$, the vertices of $V_{\variables(C_i)}$ cannot all be colored with $f(a_i)$. Similarly, since $V_{\variables(C_i)}\cup\{b_i\}$ is a maximal clique of $G_\phi$, the vertices of $V_{\variables(C_i)}$ cannot all be colored with $f(b_i)$. Thus, there exist two vertices $v_j,v_k\in V_{\variables(C_i)}$ such that $f(v_j)\neq f(v_k)$. Since $C_i$ is monotone, this implies that the literals of $x_j$ and $x_k$ in $C_i$ do not evaluate to the same value and therefore $C_i$ is satisfied. Now assume $C_i$ is not monotone. By Claim~\ref{claim:aibi}, $f(a_i)=f(b_i)$. Hence, since $V_{\positive(C_i)}\cup\{a_i\}$ and $V_{\negative(C_i)}\cup\{b_i\}$ are maximal cliques of $G_\phi$, there exist $v_j\in V_{\positive(C_i)}$ and $v_k\in V_{\negative(C_i)}$ such that $f(v_j)=f(v_k)$. Thus, $x_j$ and $x_k$ are evaluated to the same value under the proposed assignment, and since $x_j$ appears positively and $x_k$ negatively in $C_i$, the corresponding literals evaluate to different values; hence $C_i$ is satisfied. For the other direction, assume $\cnff$ admits an assignment $\xi$ satisfying all clauses.
We construct a clique coloring $f: V(G_\phi) \rightarrow \{0,1\}$ for $G_\phi$ in the following way. Color the vertices of $V'$ according to the assignment of the variables of $\phi$. That is, if $\xi(x_i)=\mbox{true}$ (resp.\ $\xi(x_i)=\mbox{false}$), define $f(v_i)=1$ (resp.\ $f(v_i)=0$). If $C_i$ is monotone, let $a_ia_i'b_i'b_i$ be the path on four vertices connecting $a_i$ and $b_i$ in the clause gadget of $C_i$. Define $f(a_i)=f(b_i')=1$ and $f(a_i')=f(b_i)=0$. If $C_i$ is not monotone, let $a_ia_i'b_i$ be the path on three vertices connecting $a_i$ and $b_i$ in the clause gadget of $C_i$. If all the vertices of either $V_{\positive(C_i)}$ or $V_{\negative(C_i)}$ are colored 1, set $f(a_i)=f(b_i)=0$ and $f(a_i')=1$. Otherwise set $f(a_i)=f(b_i)=1$ and $f(a_i')=0$. Finally, define $f(u)=0$ and $f(v)=1$. To see that this is indeed a 2-clique coloring of $G_\phi$, first note that by Observation~\ref{obs:maxcliquesV}, no maximal clique contained in $G_\phi[V'\cup\{u,v\}]$ is monochromatic. Furthermore, since all paths of the clause gadgets are properly colored, no maximal clique contained in $G_\phi- (V'\cup\{u,v\})$ is monochromatic. It remains to show that for each clause $C_i$, the maximal cliques defined by $N[a_i]$ and $N[b_i]$ are not monochromatic. Let $C_i$ be a monotone clause. Since $C_i$ is satisfied, there exist $x_j,x_k\in \variables(C_i)$ such that $\xi(x_j)\neq \xi(x_k)$. Hence, $f(v_j)\neq f(v_k)$, which shows that neither $N[a_i]$ nor $N[b_i]$ is monochromatic. If $C_i$ is not monotone, by construction the vertices of $N[a_i]$ and $N[b_i]$ are not all colored 1. Suppose all the vertices of $N[a_i]$ are colored 0. In particular, we have $f(a_i)=f(b_i)=0$. This implies that, by construction, all the vertices of $N(b_i)=V_{\negative(C_i)}$ are colored 1. However, this contradicts the fact that the clause $C_i$ is satisfied, since all its literals are evaluated to false. Hence, $f$ is indeed a 2-clique coloring of $G_\phi$.
Finally, note that $G_\phi-V'$ is a disjoint union of paths on at most four vertices. Hence, $G_\phi$ is at distance $n$ from a linear forest. Therefore, if for some $\epsilon > 0$, \qqcliquecol{2} parameterized by the distance $t$ to a linear forest can be solved in time $\Ostar((2-\epsilon)^t)$, then $s$-\naesat can be solved in time $\Ostar((2-\epsilon)^n)$, which would contradict \SETH~\cite{CyganEtAl2016}. This concludes the proof. \end{proof} We now turn to the case $q \ge 3$. Our reduction is from \textsc{$q$-List-Coloring} parameterized by the distance $t$ to a linear forest, which has no $\Ostar((q-\epsilon)^t)$-time algorithm under \SETH by a theorem due to Jaffke and Jansen~\cite{JaffkeJansen2017}. For technical reasons, we need the lower bound in a slightly stronger form; in particular, it has to hold when the input graphs are triangle-free. The reduction presented in~\cite{JaffkeJansen2017} is from $s$-SAT on $n$ variables, and given a formula $\phi$, the graph $G_\phi$ of the resulting \qlistcol instance has the following structure. The truth assignments of the variables of $\phi$ are encoded as colorings of a set of vertices $V'$ that are independent in $G_\phi$, and for each clause $C$ in $\phi$ and each coloring of some subset $V_C \subseteq V'$ that corresponds to a truth assignment $\mu$ that does not satisfy $C$, there is a path $P_\mu$ in $G_\phi$ that cannot be properly list colored if and only if the coloring $\mu$ appears on $V_C$. This is ensured by connecting $P_\mu$ to $V_C$ via a matching, which does not introduce triangles. Since each edge of $G_\phi$ is either on such a path or part of one such matching, there are no triangles in~$G_\phi$. \begin{theorem}[Jaffke and Jansen~\cite{JaffkeJansen2017}]\label{thm:qlistcolbound} For any $\epsilon > 0$ and any fixed $q\geq 3$, $q$-{\sc List Coloring} on triangle-free graphs parameterized by the distance $t$ to a linear forest cannot be solved in time $\Ostar((q-\epsilon)^t)$, unless \SETH fails.
\end{theorem} \begin{theorem}\label{thm:cliquecolgeq3} For any $\epsilon > 0$ and any fixed $q\geq 3$, $q$-{\sc Clique Coloring} parameterized by the distance $t$ to a caterpillar forest cannot be solved in time $\Ostar((q-\epsilon)^t)$, unless \SETH fails. \end{theorem} \begin{proof} We give a reduction from $q$-{\sc List Coloring} on triangle-free graphs parameterized by the distance to a linear forest. In this proof we use the phrases ``\(q\)-colorable'' as short for ``can be properly colored with at most \(q\) colors'', and ``\(q\)-coloring'' as short for ``a proper coloring with at most \(q\) colors''. To construct our instance of \qqcliquecol{q}, we will first describe the construction of a color selection gadget, and then describe how this gadget is attached to the rest of the graph. The description of the color selection gadget makes use of the famous Mycielski graphs. For completeness, we briefly describe how Mycielski graphs are recursively constructed and some of their useful properties. For every $p\geq 2$, the Mycielski graph $M_p$ is a triangle-free graph with chromatic number $p$. For $p=2$, we define $M_2=K_2$. For $p\geq 3$, the graph $M_p$ is obtained from $M_{p-1}$ as follows. Let $V(M_{p-1})=\{v_1,\ldots,v_n\}$. Then $V(M_p)=V(M_{p-1})\cup\{u_1,\ldots,u_n,w\}$. The vertices of $V(M_{p-1})$ induce a copy of $M_{p-1}$ in $M_p$, each $u_i$ is adjacent to all the neighbors of $v_i$ in $M_{p-1}$ and $N(w)=\{u_1,\ldots,u_n\}$. Hence, $|V(M_p)|=3\cdot 2^{p-2}-1$. Moreover, it is known that $M_p$ is edge-critical, that is, the deletion of any edge of $M_p$ leads to a $(p-1)$-colorable graph (see for instance \cite{BondyMurty,Lovasz}). For our construction, we will use the graph $M_p'$, obtained from $M_p$ by the deletion of an arbitrary edge $xy$. The following observation follows directly from the fact that $M_p$ is edge-critical. \begin{nestedobservation}\label{obs:samecolor} Let $M_p'$ be the graph obtained from $M_p$ by the deletion of an edge $xy$.
Then, $M_p'$ is $(p-1)$-colorable, and in any $(p-1)$-coloring of $M_p'$, the vertices $x$ and $y$ receive the same color. \end{nestedobservation} \noindent{\it Color selection gadget.} We construct a gadget $H_q$ in the following way. Consider $q$ disjoint copies of $M_{q+1}'$. For $1\leq i\leq q$, let $x_iy_i$ be the edge removed from $M_{q+1}$ in order to obtain the $i$th copy of $M_{q+1}'$. For each $i$, add $q-1$ false twins to $y_i$. We denote these vertices by $y_{ij}$, with $1\leq j\leq q$, $j\neq i$. Then delete the vertex $y_i$, for every $i$. Note that this graph is still $q$-colorable and, by Observation~\ref{obs:samecolor}, in every such $q$-coloring, for each $i$, the vertices $x_i$ and $y_{ij}$, for all $j\neq i$, receive the same color. Now we add $\binom{q}{2}$ edges to connect the copies of $M_{q+1}'$: for $1\leq i<j\leq q$, add the edge $y_{ij}y_{ji}$ to $H_q$. Note that $H_q$ remains triangle-free after the addition of these edges, since for all $1\leq i<j\leq q$, $N(y_{ij})\cap N(y_{ji})=\emptyset$. We will need the following property of the $q$-colorings of $H_q$. \begin{nestedclaim}\label{claim:colorxi} The graph $H_q$ is $q$-colorable. Moreover, in any $q$-coloring $\phi$ of $H_q$, $\phi(x_i)\neq\phi(x_j)$ for all $1\leq i<j\leq q$. \end{nestedclaim} \begin{claimproof} Suppose for a contradiction that there exists a $q$-coloring of $H_q$ such that $\phi(x_i)=\phi(x_j)$, for some $i\neq j$. By Observation~\ref{obs:samecolor}, we know that $\phi(x_i)=\phi(y_{ij})$. Similarly, $\phi(x_j)=\phi(y_{ji})$. This implies that $\phi(y_{ij})=\phi(y_{ji})$, which is a contradiction, since $y_{ij}$ and $y_{ji}$ are adjacent by construction. To see that a $q$-coloring indeed exists for $H_q$, first note that, by Observation~\ref{obs:samecolor}, each copy of $M_{q+1}'$ has a $q$-coloring in which $x_i$ and $y_i$ are assigned the same color.
We can then permute the colors within a copy to obtain a proper coloring of that copy in which $x_i$ and $y_i$ receive color $i$. To complete the coloring, assign color $i$ to every $y_{ij}$ that is a false twin of $y_i$. This yields a proper $q$-coloring of $H_q$. \end{claimproof} We are now ready to describe the construction of our instance $G'$ to $q$-{\sc Clique Coloring}. Let $(G,L)$ be an instance of $q$-{\sc List Coloring} on triangle-free graphs that is at distance $t$ from a linear forest. We construct $G'$ as follows. Add a copy of $G$ and a copy of $H_q$ to $G'$. We denote by $V'$ the set of vertices corresponding to $V(G)$ in $G'$. For each $v\in V'$, add $q-|L(v)|$ vertices adjacent to $v$. We denote these vertices by $\{v_j~|~j\notin L(v)\}$. Finally, make $v_j$ adjacent to all the vertices of $\{x_\ell~|~\ell\neq j\}$. See Figure~\ref{fig:reduction2}. \begin{figure} \centering \includegraphics[width=11cm]{fig/reduction2.pdf} \caption{In this instance, $q=3$ and $L(v)=\{1\}$. Note that $G'-(S\cup V(H_q))$ is a caterpillar forest.} \label{fig:reduction2} \end{figure} Note that $G'$ is triangle-free since $H_q$ and $G$ are triangle-free, and $N(v_j)\cap V'=\{v\}$ and $N(v_j)\cap V(H_q)$ is an independent set. Furthermore, let $S\subseteq V(G)$ be a set such that $G-S$ is a linear forest and $|S|=t$. Then $S\cup V(H_q)$ is such that each connected component of $G'-(S\cup V(H_q))$ is a caterpillar and $|S\cup V(H_q)|=t+q(3\cdot 2^{q-1}+q-3)=t+\calO(1)$, since $q$ is a constant (each modified copy of $M_{q+1}'$ contributes $(3\cdot 2^{q-1}-1)-1+(q-1)=3\cdot 2^{q-1}+q-3$ vertices). We will show that $(G,L)$ is a yes-instance to $q$-{\sc List Coloring} if and only if $G'$ is a yes-instance to $q$-{\sc Clique Coloring}. Note that since $G'$ is a triangle-free graph, every clique coloring of $G'$ is also a proper coloring of it. First, suppose $(G,L)$ is a yes-instance to $q$-{\sc List Coloring} and let $\phi$ be a $q$-list coloring for $G$. We give a $q$-coloring $\phi'$ for $G'$ in the following way. If $v\in V'$, make $\phi'(v)=\phi(v)$.
For each $v_j\in N(v)$, make $\phi'(v_j)=j$. Note that since $j\notin L(v)$, we have that $\phi'(v)\neq\phi'(v_j)$. Finally, consider a proper $q$-coloring of $H_q$. By Claim~\ref{claim:colorxi}, the vertices $x_1,\ldots,x_q$ were assigned pairwise distinct colors. Without loss of generality, we can assume $x_i$ received color~$i$. Extend $\phi'$ to the remaining vertices of $G'$ according to this coloring of $H_q$. This yields a proper $q$-coloring of $G'$, since $\phi'(v_j)=j$ and $v_j$ is not adjacent to $x_j$. Now assume $G'$ admits a $q$-clique coloring $\phi$. We will show that $\phi|_{V'}$ is a $q$-list coloring for $(G,L)$. Since $G'$ is triangle-free, it is clear that $\phi|_{V'}$ is a proper coloring of $G$. It remains to show it satisfies the constraints imposed by the lists. By Claim~\ref{claim:colorxi}, we can again assume that $\phi(x_i)=i$, for every $i$. For every $v\in V'$, since $\{x_\ell~|~\ell\neq j\}\subset N(v_j)$, we necessarily have $\phi(v_j)=j$. Finally, since for every $c\notin L(v)$ there is a neighbor of $v$ that is colored $c$ (namely $v_c$), we conclude that $\phi(v)\in L(v)$. Now, suppose that \qcliquecol admits an algorithm running in time $\Ostar((q-\epsilon)^{t'})$, for some $\epsilon > 0$, where $t'$ is the distance of the input graph to a caterpillar forest. Then, we can solve \textsc{$q$-List-Coloring} parameterized by the distance $t$ to a linear forest by applying the above reduction, giving a \qcliquecol instance at distance $t + \calO(1)$ to a caterpillar forest, and solving the resulting \qcliquecol instance. Correctness is argued in the previous paragraphs, and the runtime of the resulting algorithm is $\Ostar((q-\epsilon)^{t + \calO(1)}) = \Ostar((q-\epsilon)^t)$, contradicting \SETH by Theorem~\ref{thm:qlistcolbound}. \end{proof} Since the instance of \qcliquecol constructed in the proof of Theorem~\ref{thm:cliquecolgeq3} is a triangle-free graph, we obtain the following corollary.
\begin{corollary} For any $\epsilon > 0$ and any fixed $q\geq 3$, $q$-{\sc Coloring} on triangle-free graphs parameterized by the distance $t$ to a caterpillar forest cannot be solved in time $\Ostar((q-\epsilon)^t)$, unless \SETH fails. \end{corollary}
\section{Introduction} \label{Introduction} The term \emph{big data}, as the name suggests, refers to information assets characterized by high volume, fast access speed, and a large ontological variety. Dealing with big data requires specific technologies and analytical methods for its transformation into value. The term \emph{big data sharing} (BDS) refers to the act of the data sharer sharing big data so that the data sharee can \emph{find}, \emph{access}, and \emph{use} it in the agreed ways. BDS not only improves the speed of getting data insights, but can also help strengthen cross-domain data analytics and big data trading. Over the last few years, there has been a huge demand for big data sharing in various industries, which has led to an explosive growth of information. Over 2.5 quintillion bytes of data are created every single day, and the amount of data is only going to grow from there. By 2020, it is estimated that 1.7MB of data will be created every second for every person on earth. Due to the limitations of data storage resources, dedicated centralized servers with large storage (e.g., cloud) are usually regarded as the best approach for BDS: on the one hand, the centralized server provides a solution which is both scalable and accommodating for BDS and business analytics, while on the other hand, BDS provides data analytics for actionable insight and making predictions. However, the centralized server comes at a price: it constitutes an added level of security and privacy threats, since its essential services are often outsourced to an untrusted third party, which makes it harder to maintain the basic data security and privacy requirements, such as confidentiality, integrity and privacy of the shared data. Thus, enforcing security and privacy in BDS as a whole is an important concern; otherwise, data integrity and confidentiality can easily be compromised.
\subsection{General Model of BDS and its Security and Privacy Concerns} \label{SystemModel} In this section, we first discuss the general model of BDS. Then, we describe the general operations needed based on that model. According to these, we broadly categorize the security and privacy notions needed for BDS. \textbf{General Model of BDS.} Before discussing security and privacy concerns, it is necessary to define a general model of BDS and its input/output. The general model that we consider is based on the centralized server approach. We propose a general model of a BDS system that allows the data sharer/sharee to \emph{create, store, access, download, search} and \emph{manipulate databases}, and that takes full account of data access control, user accessibility and the form of the shared data. The remote centralized service provider (e.g., cloud), which stores and manages the data generated by the data sharer, is considered an untrusted party by the other two parties. The sharing activity is either operating on the data (e.g., searching or computation) or downloading it. The shared data can be either raw or encrypted. Table~\ref{notations} shows the notations.
\begin{figure}[!htbp] \includegraphics[width=.75\linewidth]{system-model-eps-converted-to.pdf} \caption{General Model for BDS} \label{fig:centralized model} \end{figure} \begin{table*}[tbp] \centering \caption{Notations} \label{notations} \scalebox{0.65}{\begin{tabular}{ p{.15\linewidth} p{.58\linewidth} } \toprule Notations & Descriptions \\ \toprule $\mathsf{DW}$ & Downloaded data \\ \midrule $\mathsf{OP}$ & Data operations, which include data searching, data outsourcing, and data computation \\ \midrule $\mathsf{ENC}$ & Data is encrypted during the storage and sharing process \\ \midrule $\mathsf{RAW}$ & Data is raw during the storage and sharing process \\ \midrule $\mathsf{INT}$ & Data Sharer/Internal User \\ \midrule $\mathsf{EXT}$ & Data Sharee/External User \\ \bottomrule \end{tabular}} \end{table*} Our general model for BDS consists of the two following entities (as shown in Fig. \ref{fig:centralized model}): \begin{description} \item[Data Sharer/Internal User.] A data sharer is the data owner (or internal user) who shares his own data on a large storage server. In such a system, the data sharer can either use and operate on his own shared data or give access to it to the data sharee. \item[Data Sharee/External User.] A data sharee (or external user) accesses/uses others' stored data. In such a system, the data sharer gives access to that data to the data sharee, either by downloading data from the server or by directly operating on it. \end{description} After defining what BDS is made of, we explain its general procedures. BDS operations can be divided into a few distinct groups, each with its own characteristics. Herein, we define and discuss the general operations needed for BDS, i.e., data downloading, data storing, data computation, data searching, and data outsourcing: \textsc{Data sharing.} A data user might query a BDS platform under some constraints to learn hidden patterns and correlations, compute a function, or gain other insights.
Generally speaking, data querying proceeds through two kinds of steps, namely data downloading and data operations (data computation, data searching, and data outsourcing), as follows: \begin{itemize} \item \textsc{Data downloading.} Data downloading is the process through which the data sharer or data sharee retrieves the result of data querying. \item \textsc{Data operation.} \begin{enumerate} \item \textsc{Data computation.} Data computation is the process through which the data sharer and data sharee jointly compute a function with BDS over their inputs while keeping those inputs private. \item \textsc{Data searching.} Data users might query a BDS platform with some constraints, to learn hidden patterns, correlations and other insights. Data searching needs to work over both unstructured and structured data, which requires managing huge amounts of data as quickly as possible. \item \textsc{Data outsourcing.} Data users might delegate a portion of their data to external providers who offer data management functionalities. \end{enumerate} \end{itemize} Although there are numerous benefits of BDS, it is non-trivial to design a solution because of its security and privacy requirements. Security and privacy are necessary for BDS, otherwise its value disappears: if a BDS platform is not secure and cannot protect security and privacy, then users can hardly trust such a technology and may not use it at all. \textbf{Security and Privacy Concerns in BDS.} Considering these data sharing operations, we categorize the data security and privacy notions needed in BDS into \emph{data security}, \emph{data privacy}, and \emph{user privacy}. Data security refers to \emph{how} data is protected from an attacker, namely preventing malicious access, usage, modification or unavailability of the big data by anyone other than the sharing parties.
Data privacy is about the protection of an individual's information from being disclosed to others, as data may contain sensitive information, such as Personally Identifiable Information (PII), personal healthcare information, and financial information, which should be protected whenever the data is collected, stored and shared (e.g., by applying a governing regulation or law like the General Data Protection Regulation (GDPR)). There are two parties in BDS, the data sharer and the data sharee. User privacy is about protecting the identities of the data sharer and sharee from exposure to other parties and even to each other. It requires that the two parties involved in BDS focus on the data itself without knowing each other. \subsection{Motivation and Contributions} For the past few years, the topic of big data security and privacy has been explored in many surveys. Most of these survey papers \cite{Thangaraj17,Siddique18,Terzi15,Chandra17,Fang17} give a short overview of security and privacy techniques in BDS. This work aims to contribute a comprehensive survey of security and privacy in BDS in terms of formal definitions, security and privacy requirements, the security and privacy techniques used to fulfill these requirements, a classification of techniques, and future challenges. Compared to other surveys that can be found in the literature, our contributions are as follows: \begin{itemize} \item \textbf{New taxonomy.} After providing an in-depth, up-to-date discussion of BDS and its operations, we identify the security and privacy requirements within BDS and present a novel taxonomy that structures solutions by the requirements they fulfill. \item \textbf{Comprehensive survey.} In accordance with the taxonomy, we discuss the benefits and limitations of the state-of-the-art solutions that fulfill the identified security and privacy requirements.
\item \textbf{Future directions.} Finally, based on our survey, we provide a list of lessons learned, open issues, and directions for future work. \end{itemize} \subsection{Security \& Privacy Concerns in Big Data Sharing Applications} BDS has many application fields, such as healthcare \cite{poldrack2014making}, supply chain management \cite{richey2016global}\cite{wu2019data}, and open government \cite{welch2016determinants}. In this section, we introduce three appealing applications that have caught the attention of both industry and academia in recent years. \textbf{Privacy and Pandemic.} Global leaders are increasingly relying on information about individuals and communities to control the spread of COVID-19 and respond to its economic, political, social, and health impacts. Time is of the essence, and leaders must quickly decide essential questions about what personal information they will collect or disclose, to whom, and under what conditions. It is important that privacy concerns do not become an obstacle to effective health and safety measures, but also that we do not open a door to privacy violations or limitless surveillance. \textbf{Federated Learning (FL).} FL, a subfield of AI, enables multiple decentralized edge devices or servers holding local data samples to collaboratively learn a shared prediction model while keeping all the training data private. In recent years, FL has received extensive attention from both academia and industry because it can solve privacy problems in machine learning. However, there are many challenges in FL, and although there are solutions to these challenges, most existing solutions need a trusted centralized authority, which is difficult to find. \textbf{Medical Research and Healthcare.} In recent years, more and more health data are being generated. Put together, these big data can be used to predict the onset of diseases so that preventive steps can be taken.
However, health data contains personal health information (PHI); due to the risk of violating privacy, there are legal concerns in accessing the data \cite{Jain16}. Health data can be anonymized using masking and de-identification techniques, and be disclosed to researchers based on a legal data sharing agreement \cite{Khaled13}. \subsection{Organisation} This paper is structured as follows: In Section \ref{Introduction}, we start by defining BDS, which allows us to discuss the differences between the security and privacy notions in BDS. Next, we provide a comprehensive topical overview of BDS by introducing its general model and general procedures using a centralized architecture. Based on this, we describe the different assumptions and the scope of that model. In Section \ref{sec:security}, we start by describing the basic security requirements, as well as additional ones that are needed in BDS, and then describe their corresponding techniques. In Section \ref{sec:privacy}, we describe the privacy requirements in terms of data and user privacy and their corresponding techniques. In Section \ref{Taxonomy security and privacy techniques}, we review, summarize and compare the security \& privacy techniques used to fulfill the needed security \& privacy requirements. In Section \ref{Future Directions}, we discuss challenging issues as well as new future research directions for BDS. Finally, in Section \ref{conclusion}, we conclude this article. \section{Security in BDS} \label{sec:security} In this section, we first define the required security properties as well as the security assumptions. Based on this, we overview the existing cryptographic techniques. This allows us to describe how these techniques can be incorporated in the BDS system. Finally, we provide a classification that compares the various cryptographic techniques.
\subsection{Security Requirements in BDS} \label{Security requirements} In this section, we first recall four fundamental security requirements coming from information security, which extend the well-known CIA triad (confidentiality, integrity, availability) with non-repudiation. They are defined as follows: \heading{\textbf{Data Confidentiality during Outsourcing.}} Confidentiality is the cornerstone of BDS security; it refers to the protection of data during the sharing process against unauthorized access. Otherwise, its value could disappear. \heading{\textbf{Data Integrity.}} We distinguish two types of integrity in the data sharing context: \emph{Usage Integrity} (or Data Integrity) ensures that any unauthorized modification of sensitive data in use should be detectable, otherwise its veracity cannot be guaranteed. \emph{Data Source Authenticity} means that the origin of the data should be verifiable over the whole BDS process. The distinction made between data integrity and authentication is frequently blurred because integrity can also provide authentication. In essence, an integrity primitive takes as a parameter a message $m$ and proves that the sender actually mixed his secret with $m$ to attest $m$'s origin. An authentication primitive does not involve any message (no ``payload'') and is only meant to check that the authenticated party actually knows a given secret. It follows that, to achieve authentication, the secret owner can just be challenged to attest the integrity of a random challenge $m$ chosen by the verifier. In practice, this is indeed the way in which numerous commercial products implement authentication using integrity primitives. \heading{\textbf{Non-repudiation}} While integrity ensures that data has not been tampered with, non-repudiation prevents an individual or entity from denying having performed a particular action. In other words, non-repudiation provides proof of the origin of data together with the integrity of the data.
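To make the relationship between integrity and authentication described above concrete, the following minimal Python sketch uses only the standard \texttt{hmac} module (the function names \texttt{tag} and \texttt{verify} are ours, not taken from any particular BDS system). The same MAC primitive that detects modification of shared data also yields authentication when the verifier challenges the secret holder with a random message $m$:

```python
import hmac
import hashlib
import os

def tag(secret, message):
    """Integrity primitive: bind the message to the shared secret."""
    return hmac.new(secret, message, hashlib.sha256).digest()

def verify(secret, message, t):
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(tag(secret, message), t)

# Integrity: any modification of the shared data is detectable.
secret = os.urandom(32)            # shared between sharer and verifier
data = b"shared big-data record"
t = tag(secret, data)
assert verify(secret, data, t)
assert not verify(secret, b"tampered record", t)

# Authentication via the same primitive: the verifier picks a random
# challenge m; only a party knowing `secret` can produce a valid tag.
challenge = os.urandom(16)         # chosen by the verifier
response = tag(secret, challenge)  # computed by the authenticated party
assert verify(secret, challenge, response)
```

Note that, consistent with the discussion above, this construction provides integrity and authentication but not non-repudiation: both parties hold the same secret, so either could have produced the tag.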
\heading{\textbf{Availability}} Data availability ensures that data is available for use whenever authorized users want it. However, the introduction of cloud computing has narrowed down data availability issues for big data. Denial-of-service (DoS) attacks, DDoS attacks, and SYN flood attacks are the most common attacks threatening data availability. Besides the basic security requirements of BDS, we specify the additional security requirements that we have identified for the BDS context, which are defined as follows: \heading{\textbf{Data Confidentiality during Computation}} The data sharer and data sharee want to jointly compute a function over their inputs while keeping those inputs private. For example, the data collected from different sensors in an IoT system may be aggregated to generate the targeted result; the cloud and the clients may cooperate to provide appropriate services. At the same time, the private information and secret data should be protected. The computation procedures and results on BDS should only be known by the data sharer and sharee during and after computation. Unlike traditional cryptographic scenarios, where cryptography ensures the security and integrity of communication or storage and the adversary is assumed to be an outsider to the system of participants (an eavesdropper on the sender and receiver), the cryptographic techniques in this model should protect the participants' privacy from each other. \heading{\textbf{Data Confidentiality during Searching}} Data sharers want to store data in ciphertext form while keeping the functionality to search keywords in the data, i.e., to protect the privacy of the data, the data sharer may choose to encrypt the data before uploading it to the cloud. However, while encryption provides confidentiality to the data, it also sacrifices the data sharers' ability to query specific segments of their data.
The search requests and results on big data should only be known by the data sharer during and after searching. \heading{\textbf{Access control}} Traditionally, cryptography is about providing secure communication over insecure channels, meaning that we want to protect honest parties from external adversaries: only the party who has the decryption key can learn the message. In recent applications, a more complicated issue has been introduced, namely not trusting everybody with the same information. Access control makes decryption depend on who you are: depending on which keys you hold, you can access different parts of the information sent. In other words, the access control requirement deals with the issue that someone should only be able to decrypt a ciphertext if that person holds a key for ``matching attributes'', where user keys are always issued by some trusted party. \heading{\textbf{Delegation rights}} This notion deals with the problem of data sharing between different recipients. We distinguish two types of delegation rights: \emph{decryption rights delegation} and \emph{signing rights delegation}. Decryption rights delegation turns a ciphertext intended for one data user into a ciphertext of the same data intended for another data user, without revealing any information about the data or the secret keys. Signing rights delegation allows an entity to delegate its signing rights to another.
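As a toy illustration of confidentiality during computation, the following Python sketch uses additive secret sharing over a public prime modulus: the data sharer and sharee learn the sum of their inputs without either party revealing its input. This is only a minimal stand-in for the secure multi-party computation ($\mathcal{MPC}$) techniques discussed later; the modulus, inputs, and function names are illustrative:

```python
import random

P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

def share(x):
    """Split x into two additive shares, each individually uniform."""
    r = random.randrange(P)
    return r, (x - r) % P

# Each party secret-shares its private input.
a1, a2 = share(42)    # data sharer's input
b1, b2 = share(100)   # data sharee's input

# Party 1 holds (a1, b1); party 2 holds (a2, b2). Each adds its shares
# locally, then only the partial sums are exchanged and combined.
s1 = (a1 + b1) % P
s2 = (a2 + b2) % P
total = (s1 + s2) % P
assert total == 142   # the sum is revealed; the individual inputs are not
```

Each share in isolation is uniformly random, so neither partial sum leaks the other party's input; only the agreed output (the sum) is learned.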
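A minimal way to see how searching over encrypted data can work is a deterministic keyword-token index: the data sharer stores HMAC-derived tokens instead of plaintext keywords, and a query is simply the token of the searched word, so the server can match it without ever learning the keyword. This Python sketch only conveys the idea behind symmetric searchable encryption; real schemes, such as those surveyed later, hide considerably more (e.g., access patterns), and all names here are illustrative:

```python
import hmac
import hashlib

def token(key, keyword):
    """Deterministic search token; the server never sees the keyword."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

# The data sharer builds the index client-side before uploading.
key = b"sharer's secret key (use 32 random bytes in practice)"
documents = {
    "doc1": ["health", "insurance"],
    "doc2": ["health", "finance"],
}
index = {d: {token(key, w) for w in words} for d, words in documents.items()}

# To search, the sharer sends only token(key, "health") to the server,
# and the server matches tokens without learning the word "health".
q = token(key, "health")
hits = sorted(d for d, toks in index.items() if q in toks)
assert hits == ["doc1", "doc2"]
```

Only a party holding the key can form queries, which captures the requirement above that search requests and results be known only to the data sharer.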
\begin{figure}[!htbp] \centering \scalebox{0.65}{\includegraphics[width=0.4\linewidth]{assumption-eps-converted-to.pdf}} \caption{Assumptions of BDS security} \label{fig:big-data-assumption} \end{figure} \begin{table*}[tbp] \centering \caption{Classification of techniques according to assumptions and fulfilled security requirements} \label{tab:Summarization security-techniques} \scalebox{0.68}{\begin{tabular}{p{.18\textwidth} p{.68\textwidth} p{.52\textwidth}} \toprule \textbf{Assumptions} & \textbf{Fulfilled Security requirements} & \textbf{Security Techniques} \\ \toprule \makecell[l]{ Asm.1\\ $\mathsf{INT\mathchar`-OP\mathchar`-ENC}$ } & \makecell[l]{ Confidentiality during Computation \\ Confidentiality during Searching \\ Confidentiality during Outsourcing \\ Confidentiality during Outsourcing \& Integrity\\ Availability } & \makecell[l]{ $\mathcal{HE}$ \\ $\mathcal{SSE, PIR}$\\ $\mathcal{ORAM}$ \\ $\mathcal{AE}$\\ Storage path encryption } \\ \midrule \makecell[l]{ Asm.2\\ $\mathsf{EXT\mathchar`-OP\mathchar`-ENC}$ } & \makecell[l]{ Confidentiality during Computation \\ Confidentiality during Searching \\ Confidentiality during Outsourcing\& Integrity\\ Confidentiality during Outsourcing, Integrity \& Non-repudiation\\ Availability } & \makecell[l]{ $\mathcal{FE},\mathcal{HE}$ \\ $\mathcal{PEKS}$ \\ $\mathcal{AE}$, Signcryption \\ Signcryption\\ Storage path encryption } \\ \midrule \makecell[l]{ Asm.3\\ $\mathsf{INT\mathchar`-DW\mathchar`-ENC}$ } & \makecell[l]{ Confidentiality during Outsourcing\\ Integrity\\ Availability } & \makecell[l]{ N/A \\ $\mathcal{DS}$, $\mathcal{MAC}$, $\mathcal{PDP}$\\ Storage path encryption } \\ \midrule \makecell[l]{ Asm.4\\ $\mathsf{EXT\mathchar`-DW\mathchar`-ENC}$ } & \makecell[l]{ Confidentiality during Outsourcing\& Rights Delegation \\ Confidentiality during Outsourcing \& Access control\\ Confidentiality during Outsourcing \& Integrity \\ Confidentiality, Integrity \& Non-repudiation \\ Availability } & \makecell[l]{ 
$\mathcal{PRE}$\\ $\mathcal{ABE}$, $\mathcal{IBE}$,$\mathcal{ABHE}$\\ $\mathcal{AE}$, Signcryption\\ Signcryption\\ Storage path encryption } \\ \midrule \makecell[l]{ Asm.5\\ $\mathsf{INT\mathchar`-DW\mathchar`-RAW}$ } & \makecell[l]{ Integrity\\ Availability } & \makecell[l]{ $\mathcal{DS}, \mathcal{MAC},\mathcal{PDP}$\\ Storage path encryption } \\ \midrule \makecell[l]{ Asm.6\\ $\mathsf{EXT\mathchar`-DW\mathchar`-RAW}$ } & \makecell[l]{ Integrity\\ Integrity \& Non-repudiation \\ Confidentiality during Outsourcing, Integrity \& Non-repudiation \\ Integrity \& Rights Delegation \\ Availability } & \makecell[l]{ $\mathcal{DS}$, $\mathcal{MAC}$, $\mathcal{PDP}$ \\ $\mathcal{DS}$\\ Signcryption\\ $\mathcal{PS}$\\ Storage path encryption } \\ \midrule \makecell[l]{ Asm.7\\ $\mathsf{INT\mathchar`-OP\mathchar`-RAW}$ } & \makecell[l]{ Confidentiality during Computation \\ Confidentiality during Searching \\ Integrity \\ Availability } & \makecell[l]{ $\mathcal{PSI},\mathcal{MPC}$\\ $\mathcal{PIR}$ \\ $\mathcal{VC}$, $\mathcal{DS}$, $\mathcal{MAC}$, $\mathcal{PDP}$\\ Storage path encryption } \\ \midrule \makecell[l]{ Asm.8\\ $\mathsf{EXT\mathchar`-OP\mathchar`-RAW}$ } & \makecell[l]{ Confidentiality during Computation \\ Integrity \\ Integrity \& Non-repudiation\\ Integrity \& Right Delegation \\ Availability } & \makecell[l]{ $\mathcal{PSI},\mathcal{MPC}$ \\ $\mathcal{DS}$, $\mathcal{MAC}$ , $\mathcal{PDP}$\\ $\mathcal{DS}$ \\ $\mathcal{PS}$\\ Storage path encryption } \\ \bottomrule \end{tabular}} \end{table*} \subsection{Security Assumptions} \label{Security assumptions} In this section, based on the proposed general model for BDS in Section \ref{SystemModel}, we propose the security assumptions needed in BDS systems according to the three following dimensions: user accessibility, data usage and the form of shared data, as shown in Fig. \ref{fig:big-data-assumption}. 
Depending on what actions can be performed by each type of user in the BDS model introduced in Section \ref{SystemModel}, and under which form of data, different cryptographic solutions (or combinations of them) should be used to guarantee the security of BDS operations. We divide the assumptions into the following categories: \labelText{Assumption 1}{label:text1} ($\mathsf{INT},\mathsf{OP},\mathsf{ENC}$). In this assumption, the encrypted data from the data sharer can only be operated on, e.g., outsourced, computed on, searched, or viewed; it cannot be accessed externally. For example, users store their own data on the server and operate on or search it. \labelText{Assumption 2}{label:text2} ($\mathsf{EXT},\mathsf{OP},\mathsf{ENC}$). In this assumption, the encrypted data from the data sharer can likewise only be operated on, but the data sharee may come from outside the system, e.g., accessing the server storage, which means that an extra secure channel is needed compared with Assumption 1. \labelText{Assumption 3}{label:text3} ($\mathsf{INT},\mathsf{DW},\mathsf{ENC}$). In this assumption, the encrypted shared data can be downloaded and then retrieved by the data owner within the system. \labelText{Assumption 4}{label:text4} ($\mathsf{EXT},\mathsf{DW},\mathsf{ENC}$). In this assumption, the encrypted shared data can be downloaded and retrieved by a data sharee from outside the system. Retrieval is a stronger assumption than operation since it delivers all data usage to the data sharee, which calls for additional security requirements such as decryption rights delegation between the data sharer and the data sharee and ensuring the non-repudiation property. \labelText{Assumption 5}{label:text5} ($\mathsf{INT},\mathsf{DW},\mathsf{RAW}$). In this assumption, data is not encrypted initially and needs to be fully downloaded, e.g., the data sharer uploads their own data to the server and later downloads it. \labelText{Assumption 6}{label:text6} ($\mathsf{EXT},\mathsf{DW},\mathsf{RAW}$). 
In this assumption, we require that a data sharee from outside the system can also download the data securely from the server, which introduces extra requirements such as signing rights delegation and ensuring the non-repudiation property. \labelText{Assumption 7}{label:text7} ($\mathsf{INT},\mathsf{OP},\mathsf{RAW}$). In this assumption, the data sharer operates on his own raw data; e.g., a data owner may need operations that only work on raw data in server storage. This also requires verifiable integrity of the computation outsourced to the server. \labelText{Assumption 8}{label:text8} ($\mathsf{EXT},\mathsf{OP},\mathsf{RAW}$). In this assumption, the data sharee operates on the data sharer's raw data. Compared with Assumption 7, there are extra security requirements in terms of signing rights delegation and non-repudiation. In Table \ref{tab:Summarization security-techniques}, we summarize the security techniques detailed in Section \ref{security techniques} that are needed to fulfil the security requirements described in Section \ref{Security requirements}, according to each security assumption described in Section \ref{Security assumptions}. \subsection{Security Techniques} \label{security techniques} In this section, we summarize the existing techniques that can be leveraged to enhance the security and privacy of the BDS model introduced in Section \ref{SystemModel}. For each of the presented techniques, we use the following outline: first, we provide a high-level overview of what protection the cryptographic technique provides and how it can be used in BDS. Second, we give a more detailed definition of the security achieved and the critical limits of the technique. Finally, we give an in-depth survey of the literature and state-of-the-art developments for the technique to illustrate the differences between individual schemes and their potential use cases. 
\textsc{\textbf{Message Authentication Code ($\mathcal{MAC}$)}} \label{MAC} A $\mathcal{MAC}$ is a small piece of information used to authenticate a message. In a BDS system, a data sharee needs assurance that data comes from a legitimate data sharer (authentication) and not from an attacker, which also includes assurance that the message was not modified during transmission (integrity). The $\mathcal{MAC}$ algorithm takes data D and a secret key and outputs a $\mathcal{MAC}$ value or ``tag''. $\mathcal{MAC}$s use only secret keys and rely on symmetric primitives. To function as intended, a $\mathcal{MAC}$ must resist chosen-message attacks: even if an attacker can obtain tags on messages of its choice, it must remain infeasible for the attacker, who does not hold the secret key, to compute a valid tag for any other message. $\mathcal{MAC}$s can be built from hash functions; these are known as keyed hash functions. One advantage of $\mathcal{MAC}$s over $\mathcal{DS}$ is speed: being based on either block ciphers or hash functions, they are much faster than digital signatures. Algorithms commonly used in modern applications include CMAC \cite{CMAC}, PMAC1 \cite{10.1007/3-540-48910-X_9} and GMAC \cite{GCM}, while Marvin \cite{Simplcio2009TheMM} was specifically designed for constrained platforms. For more details, we refer the interested reader to the survey \cite{article}. A $\mathcal{MAC}$ consists of a tuple of algorithms $(\mathsf{KeyGen}, \mathsf{MAC}, \mathsf{Verify})$ satisfying: \begin{description} \item $\mathsf{KeyGen}$ (key generation) outputs the key $\mathsf{sk}$ on input $1^n$, where n is the security parameter. \item $\mathsf{MAC}$ (signing) outputs a tag t on input the key $\mathsf{sk}$ and a string x. \item $\mathsf{Verify}$ (verifying) outputs accepted or rejected on inputs: the key $\mathsf{sk}$, the string x and the tag t. 
\end{description} \textsc{\textbf{Digital Signature ($\mathcal{DS}$)}} \label{DS} $\mathcal{DS}$ was originally proposed by Diffie and Hellman \cite{1055638} and by Rivest, Shamir, and Adleman \cite{10.1145/359340.359342}. $\mathcal{DS}$ deals with the problem of data authentication and integrity in the asymmetric (public key) setting. In a BDS system, a data sharee needs assurance that data comes from a legitimate data sharer (authentication) and not from an attacker, including assurance that the message was not modified during transmission (integrity). $\mathcal{MAC}$s solve this problem, but only in the symmetric-key setting. In contrast to $\mathcal{MAC}$s, digital signatures have the advantage of being \emph{publicly verifiable} and \emph{non-repudiable}. Public verifiability implies the transferability of signatures and, thus, signatures prove useful in many applications, including BDS systems. Cramer and Shoup \cite{10.1145/357830.357847} and Gennaro, Halevi, and Rabin \cite{10.1007/3-540-48910-X_9} proposed the first practical signature schemes whose security analysis does not rely on an ideal random function, based on the so-called Strong RSA assumption. A $\mathcal{DS}$~\cite{Kat10} is a tuple, $\mathcal{DS}= (\mathsf{KeyGen}, \mathsf{Sign}, \mathsf{Verify})$, of probabilistic polynomial-time algorithms satisfying: \begin{description} \item[$(\mathsf{sk},\mathsf{pk}) \gets \mathsf{KeyGen}(1^\secpar)$] On input security parameter $1^\secpar$, the key generation algorithm $\mathsf{KeyGen}$ produces a pair $(\mathsf{sk},\mathsf{pk})$ of matching private and public keys. \item[$\sigma \gets \mathsf{Sign}(\mathsf{sk},m)$] Given a message $m$ in a set $\mathcal{M}$ of messages and a private key $\mathsf{sk}$, the signing algorithm $\mathsf{Sign}$ produces a signature $\sigma$. 
\item[$\{0,1\} \gets \mathsf{Verify}(\mathsf{pk}, m, \sigma)$] Given a signature $\sigma$, a message $m \in \mathcal{M}$ and a public key $\mathsf{pk}$, the verifying algorithm $\mathsf{Verify}$ checks whether $\sigma$ is a valid signature on $m$ with respect to $\mathsf{pk}$. \end{description} \textsc{\textbf{Signcryption}} \label{signcryption} In some BDS scenarios we require confidentiality and authenticity separately, but in others we require them simultaneously. Signcryption schemes were designed to meet the latter requirement. The first signcryption scheme was introduced by Yuliang Zheng in 1997 \cite{Zheng97} based on elliptic curve cryptography. Signcryption is a multi-user primitive that is complex to design \cite{10.1007/3-540-46035-7_6}. A signcryption scheme consists of a tuple of algorithms defined as follows: \begin{description} \item $(\mathsf{sk}, \mathsf{pk}) \gets \mathsf{Keygen(1^\kappa)}$ is the key generation algorithm which takes a security parameter $\kappa \in \mathbb{N}$ and generates a private/public key pair $(\mathsf{sk}, \mathsf{pk})$. \item $C \gets \mathsf{Signcrypt(1^\kappa, m, \mathsf{sk}_a, \mathsf{pk}_b)}$ takes $\kappa$, a message m, a sender private key $\mathsf{sk}_a$ and a recipient public key $\mathsf{pk}_b$, and outputs a ciphertext C. The message m is drawn from a message space M defined as $\{0, 1\}^n$, where n is some polynomial in $\kappa$. \item $(m,\sigma, \mathsf{pk}_a) \gets \mathsf{Unsigncrypt(1^\kappa, C, \mathsf{sk}_b)}$ takes $\kappa$, C and a private key $\mathsf{sk}_b$, and outputs either a triple $(m,\sigma, \mathsf{pk}_a)$, where $m \in M$, $\sigma$ is a signature and $\mathsf{pk}_a$ is a public key, or reject, which indicates the failure of unsigncryption. \item $\{0,1\} \gets \mathsf{Verify(1^\kappa, m, \sigma, \mathsf{pk}_a)}$ takes $\kappa$, $m \in M$, a signature $\sigma$ and a public key $\mathsf{pk}_a$, and outputs 1 for a valid signature or 0 otherwise. 
\end{description} \textsc{\textbf{Authenticated Encryption ($\mathcal{AE}$)}} \label{AE} The symmetric analogue of signcryption is called $\mathcal{AE}$ \cite{Bellare03}, which simultaneously ensures the confidentiality and authenticity of data. However, similarly to $\mathcal{MAC}$, $\mathcal{AE}$ does not provide non-repudiation. For a comprehensive survey on $\mathcal{AE}$, we refer the interested reader to \cite{cryptoeprint:2019:739}. Basically, there are three approaches to $\mathcal{AE}$, defined as follows: \begin{description} \item MAC-then-Encrypt (MtE): We first MAC $m$ under key $\mathsf{sk}_1$ to yield tag $\sigma$ and then encrypt the resulting pair $(m, \sigma)$ under key $\mathsf{sk}_2$. \item Encrypt-then-MAC (EtM): We first encrypt $m$ under key $\mathsf{sk}_2$ to yield ciphertext C and then compute $\sigma \gets MAC_{\mathsf{sk}_1}(C)$ to yield the pair $(C, \sigma)$. \item Encrypt-and-MAC (E\&M): We first encrypt $m$ under key $\mathsf{sk}_2$ to yield ciphertext C and, in parallel, compute $ \sigma \gets MAC_{\mathsf{sk}_1}(m)$ to yield the pair $(C, \sigma)$. \end{description} \textsc{\textbf{Identity Based Encryption ($\mathcal{IBE}$)}} \label{IBE} $\mathcal{IBE}$ \cite{Shamir85} allows access control based on the identity of a data user. To protect the privacy of data stored in the cloud, a data user usually encrypts his data in such a way that certain designated data users can decrypt it. $\mathcal{IBE}$ is regarded as an alternative to $\mathcal{PKE}$, proposed to simplify key management in a certificate-based public key infrastructure (PKI) by using human-readable identities such as an email or IP address as public keys. The first practical $\mathcal{IBE}$ scheme, which also preserves the anonymity of sender and receiver, was proposed in \cite{Boneh01}. Several constructions of $\mathcal{IBE}$ have since been proposed. 
From a chronological point of view, the first such construction is by Boneh and Franklin~\cite{10.5555/646766.704155}, who proposed a straightforward scheme in the random oracle model~\cite{10.1145/168588.168596}. Their construction heavily exploits cryptographic pairings, as well as the power conferred by a random oracle. Later, Boneh and Boyen~\cite{10.1007/978-3-540-28628-8_27} refined this idea and came up with an identity-based encryption scheme in the standard model, but relying on what is called a non-static $q$-type assumption, where $q$ is related to the number of key-derivation queries the adversary makes. An $\mathcal{IBE}$ scheme consists of four algorithms $(\mathsf{Setup}, \mathsf{KeyGen}, \mathsf{Enc}, \mathsf{Dec})$: \begin{description} \item[$\big(\mathsf{msk}, \mathsf{mpk} \big) \gets \mathsf{Setup}(1^\secpar)$] The setup algorithm takes the security parameter $1^\secpar$ and outputs a master secret key $\mathsf{msk}$ and a (master) encryption key $\mathsf{mpk}$. \item[$\mathsf{CT} \gets \mathsf{Enc}(\mathsf{mpk}, \mathsf{id}, \mathit{M})$] The encryption algorithm takes as input an encryption key $\mathsf{mpk}$, an identity $\mathsf{id}$ and a message $\mathit{M} \in \mathcal{M}_{\lambda}$, and outputs a ciphertext $\mathsf{CT}$. \item[$\mathsf{sk}_{\mathsf{id}} \gets \mathsf{KeyGen}(\mathsf{msk}, \mathsf{id})$] The key derivation algorithm takes as input the master secret key $\mathsf{msk}$ and an identity $\mathsf{id}$, and outputs the corresponding functional key $\mathsf{sk}_{\mathsf{id}}$. \item[$\mathit{M} \gets \mathsf{Dec}(\mathsf{sk}_{\mathsf{id}},\mathsf{id}, \mathsf{CT})$] The decryption algorithm is a deterministic algorithm that takes as input a key $\mathsf{sk}_{\mathsf{id}}$ and a ciphertext $\mathsf{CT}$, and outputs $\mathit{M}$ or a special error symbol $\perp$ if decryption fails. 
\end{description} \textsc{\textbf{Attribute Based Encryption ($\mathcal{ABE}$)}} \label{ABE} The $\mathcal{ABE}$ technique \cite{Sahai05,Vipul06} is regarded as one of the most appropriate technologies for controlling big data access in the cloud environment; it allows more secure and flexible, fine-grained access control, and it enables data users to upload their data in encrypted form to the cloud while sharing it with users possessing certain credentials or attributes. $\mathcal{ABE}$ can be viewed as a generalization of $\mathcal{IBE}$. An implementation of this scheme describing how it can be used for access control was given by Pirretti et al. \cite{Pirretti}. The class of supported policies was extended to arbitrary boolean formulas by Goyal et al. \cite{10.1145/1180405.1180418} and Bethencourt, Sahai, and Waters \cite{4223236}. Basically, $\mathcal{ABE}$ techniques are classified into two major classes: \emph{Ciphertext-Policy ABE} ($\mathcal{CP\mathchar`-ABE}$) \cite{Waters11} and \emph{Key-Policy ABE} ($\mathcal{KP\mathchar`-ABE}$) \cite{Nuttapong11}. $\mathcal{CP\mathchar`-ABE}$ is a form of $\mathcal{ABE}$ in which keys are associated with sets of attributes and data is encrypted under a policy specifying which attributes are needed to decrypt the ciphertext. In $\mathcal{KP\mathchar`-ABE}$, by contrast, attributes describe the encrypted data, while users' secret keys are generated according to access policies over these attributes. There are several available implementations of $\mathcal{ABE}$. The first efficient implementation was given by Bethencourt et al. \cite{4223236} using the Pairing-Based Crypto library. Another efficient implementation was given by Khoury et al. \cite{6899132}, which achieves 3 ms for ABE encryption and 6 ms for decryption. 
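The non-cryptographic core of $\mathcal{CP\mathchar`-ABE}$ decryption, namely checking whether an attribute set satisfies a boolean access structure, can be illustrated with a short sketch (the helper names are ours; real schemes enforce this check cryptographically with pairings rather than plaintext logic):

```python
# Toy evaluation of a boolean access structure against an attribute set.
# This illustrates only the policy-satisfaction check behind CP-ABE
# decryption; actual schemes realize it with pairing-based cryptography.

def satisfies(policy, attributes):
    """policy: nested tuples ('AND'|'OR', sub1, sub2, ...) or an attribute string."""
    if isinstance(policy, str):          # leaf: a single attribute
        return policy in attributes
    gate, *children = policy
    if gate == 'AND':
        return all(satisfies(c, attributes) for c in children)
    if gate == 'OR':
        return any(satisfies(c, attributes) for c in children)
    raise ValueError('unknown gate: %r' % gate)

# ("doctor" AND "cardiology") OR "admin"
policy = ('OR', ('AND', 'doctor', 'cardiology'), 'admin')

print(satisfies(policy, {'doctor', 'cardiology'}))  # True
print(satisfies(policy, {'doctor'}))                # False
print(satisfies(policy, {'admin'}))                 # True
```

In a $\mathcal{CP\mathchar`-ABE}$ scheme, this check succeeds or fails implicitly: decryption simply yields the message when the key's attribute set satisfies the ciphertext policy, and garbage (or $\perp$) otherwise.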
A Ciphertext-Policy Attribute Based Encryption ($\mathcal{CP\mathchar`-ABE}$) scheme consists of four algorithms $\mathsf{ABE}=(\mathsf{Setup}, \mathsf{KeyGen}, \mathsf{Enc}, \mathsf{Dec})$. \begin{description} \item[$(\mathsf{mpk},\mathsf{msk}) \gets \mathsf{Setup}(1^\secpar, \mathcal{U})$] The setup algorithm takes the security parameter and an attribute universe description $\mathcal{U}$ as input. It outputs the public parameters $\mathsf{mpk}$ and a master key $\mathsf{msk}$. \item[$CT \gets \mathsf{Enc}(\mathsf{mpk}, M, \mathcal{A})$] The encryption algorithm takes as input the public parameters $\mathsf{mpk}$, a message M, and an access structure $\mathcal{A}$ over the universe of attributes. The algorithm encrypts M and produces a ciphertext CT such that only a user who possesses a set of attributes satisfying the access structure will be able to decrypt the message. We assume that the ciphertext implicitly contains $\mathcal{A}$. \item[$\mathsf{sk} \gets \mathsf{KeyGen}(\mathsf{msk}, \mathcal{S})$] The key generation algorithm takes as input the master key $\mathsf{msk}$ and a set of attributes $\mathcal{S}$ that describe the key. It outputs a private key $\mathsf{sk}$. \item[$M \gets \mathsf{Dec}(\mathsf{mpk}, CT, \mathsf{sk})$] The decryption algorithm takes as input the public parameters $\mathsf{mpk}$, a ciphertext CT, which contains an access policy $\mathcal{A}$, and a private key $\mathsf{sk}$ for a set $\mathcal{S}$ of attributes. If the set $\mathcal{S}$ of attributes satisfies the access structure $\mathcal{A}$, then the algorithm decrypts the ciphertext and returns a message M. \end{description} A $\mathcal{KP\mathchar`-ABE}$ scheme consists of four algorithms: \begin{description} \item[$(\mathsf{mpk},\mathsf{msk}) \gets \mathsf{Setup}(1^\secpar)$] This is a randomized algorithm that takes no input other than the implicit security parameter. 
It outputs the public parameters $\mathsf{mpk}$ and a master key $\mathsf{msk}$. \item[$C \gets \mathsf{Enc}(m,\mathcal{A},\mathsf{mpk})$] This is a randomized algorithm that takes as input a message m, a set of attributes $\mathcal{A}$, and the public parameters $\mathsf{mpk}$. It outputs the ciphertext C. \item[$\mathsf{sk} \gets \mathsf{KeyGen}(A,\mathsf{mpk},\mathsf{msk})$] This is a randomized algorithm that takes as input an access structure A, the master key $\mathsf{msk}$ and the public parameters $\mathsf{mpk}$. It outputs a decryption key $\mathsf{sk}$. \item[$m \gets \mathsf{Dec}(C,\mathsf{sk},\mathsf{mpk})$] This algorithm takes as input the ciphertext C that was encrypted under the set $\mathcal{A}$ of attributes, the decryption key $\mathsf{sk}$ for the access structure A, and the public parameters $\mathsf{mpk}$. It outputs the message m if $\mathcal{A} \in A$, i.e., if the attribute set satisfies the access structure. \end{description} \textsc{\textbf{Verifiable Computation ($\mathcal{VC}$)}} \label{VC} $\mathcal{VC}$ \cite{Babai91} schemes enable a weak data user (internal user) to outsource the computation of a function $\mathcal{F}$ on various inputs to a computationally strong but untrusted cloud, while allowing the data user to check the integrity of the computation. Most verifiable computation constructions are based on probabilistically checkable proofs (PCPs) \cite{Arora98}. A great survey of practical verifiable computation implementations is given in \cite{Walfish15}. \labelText{VC}{label:VC} is integrated as follows: a computationally weak data user outsources the storage of many data items to a computationally strong but untrusted prover (or cloud server). Each data item is labeled with a string $L_i$. The data user wishes to compute a function $f$ on some subset of the outsourced data, and delegates this task to the cloud server. 
Given an input $d_1, \cdots, d_k$ and a function $f$ to evaluate on $d_1, \cdots, d_k$, the cloud server is expected to produce an output $y$, along with a proof $\sigma$ that $y = f(d_1, \cdots, d_k)$, which the data user can use to confirm the correctness of the computation, as shown in Fig. \ref{Verifiable Delegation of Computation on Outsourced Data}. \begin{figure}[!htbp] \centering \scalebox{0.70}{\includegraphics[width=.7\linewidth]{VC-Client-Server-eps-converted-to.pdf}} \caption{Verifiable Delegation of Computation on Outsourced Data} \label{Verifiable Delegation of Computation on Outsourced Data} \end{figure} \textsc{\textbf{Proxy re-encryption ($\mathcal{PRE}$)} } \label{PRE} $\mathcal{PRE}$ provides rights delegation and confidentiality of shared data. Encryption schemes such as $\mathcal{IBE}$ and $\mathcal{ABE}$ do not allow updating the recipient of a ciphertext. Proxy re-encryption ($\mathcal{PRE}$), initially introduced by Blaze, Bleumer and Strauss \cite{Blaze98,10.1007/BFb0054122} in 1998 and improved by Ateniese et al. \cite{Ateniese06,Green06} in 2006, enables a proxy to transform data stored on a cloud storage system encrypted under the public key of one data user (the delegator) into data encrypted under the public key of another data user (the delegatee), without leaking the underlying data or the private keys of the delegator/delegatee to the proxy. This form of public key encryption is a strong candidate for securing data sharing in cloud computing. $\mathcal{PRE}$ can be combined with $\mathcal{IBE}$, which yields Identity-Based Proxy Re-Encryption ($\mathcal{IBPRE}$), where ciphertexts are transformed from one identity to another \cite{Green07}. 
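A minimal sketch of an ElGamal-based construction in the spirit of Blaze, Bleumer and Strauss illustrates the key transformation; the group parameters below are toy values, far too small to be secure:

```python
import secrets

# Toy ElGamal-style proxy re-encryption (BBS-style). Illustration only:
# the group below is tiny and insecure; a real deployment needs a
# cryptographically sized group.
p = 2 * 1019 + 1         # safe prime p = 2q + 1 (toy size)
q = 1019                 # prime order of the subgroup
g = 4                    # generator of the order-q subgroup (a square mod p)

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)                       # (sk, pk = g^sk)

def enc(pk, m):                                    # m must be in <g>
    r = secrets.randbelow(q - 1) + 1
    return pow(pk, r, p), (m * pow(g, r, p)) % p   # (pk^r, m * g^r)

def rekey(sk_a, sk_b):                             # rk = sk_b / sk_a mod q
    return (sk_b * pow(sk_a, -1, q)) % q

def reenc(rk, c):                                  # (pk_a^r)^rk = pk_b^r
    c1, c2 = c
    return pow(c1, rk, p), c2

def dec(sk, c):                                    # m = c2 / c1^(1/sk)
    c1, c2 = c
    return (c2 * pow(pow(c1, pow(sk, -1, q), p), -1, p)) % p

sk_a, pk_a = keygen()                              # delegator
sk_b, pk_b = keygen()                              # delegatee
m = pow(g, 123, p)                                 # message as a group element
c = enc(pk_a, m)
assert dec(sk_a, c) == m                           # delegator decrypts
c_b = reenc(rekey(sk_a, sk_b), c)                  # proxy re-encrypts
assert dec(sk_b, c_b) == m                         # delegatee decrypts
```

Note that the proxy only ever sees the re-encryption key $sk_b/sk_a \bmod q$ and ciphertexts; it learns neither the plaintext nor either party's secret key, though in this toy variant a colluding proxy and delegatee could recover the delegator's key, which is why the improved schemes of Ateniese et al. are preferred in practice.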
\labelText{Proxy re$\mathchar`-$encryption}{label:PRE} consists of the following algorithms: \begin{description} \item[ $(\mathsf{pk},\mathsf{sk}) \gets \mathsf{KeyGen}(1^\secpar)$] On input the security parameter $1^\secpar$, the key generation algorithm $\mathsf{KeyGen}$ outputs a key pair $(\mathsf{pk},\mathsf{sk})$. \item[$rk_{pk \rightarrow pk'} \gets \mathop{\mathsf{ReKeyGen}}(\mathsf{pk},\mathsf{sk}, \mathsf{pk}' )$] On input a private key $\mathsf{sk}$ of a delegator and a public key $\mathsf{pk}'$ of a delegatee, algorithm $\mathop{\mathsf{ReKeyGen}}$ outputs a unidirectional re-encryption key $rk_{pk \rightarrow pk'}$. \item[ $c \sample \mathsf{Enc}(\mathsf{pk}, m)$] On input a public key $\mathsf{pk}$ and a message m, algorithm $\mathsf{Enc}$ outputs a ciphertext c. \item[$c' \gets \mathop{\mathsf{ReEnc}}(rk_{pk \rightarrow pk'} , c)$] On input a re-encryption key $rk_{pk \rightarrow pk'}$ and a ciphertext c, algorithm $ \mathop{\mathsf{ReEnc}}$ outputs a ciphertext c' decryptable under the secret key $\mathsf{sk}'$. \item[$m \gets \mathsf{Dec}(\mathsf{sk}, \mathsf{pk}, c)$] On input a secret key $\mathsf{sk}$, a public key $\mathsf{pk}$ and a ciphertext c, algorithm $\mathsf{Dec}$ outputs a message m or the error symbol $\perp$. \end{description} \textsc{\textbf{Proxy Re-Signature}} \label{PS} Proxy re-signatures \cite{10.1007/BFb0054122} should not be confused with proxy signatures \cite{10.1145/238168.238185,Ivan03proxycryptography}, whose definition is given in Dodis et al. \cite{Ivan03proxycryptography}. In their general construction, Bob's signature is considered a double signature which includes a signature from Alice and one from the proxy; there is clearly no key transformation from Alice's valid signing secret key into Bob's. In a proxy re-signature scheme, a semi-trusted proxy is given some information which allows it to transform one data user's signature on a data item D into another data user's signature on the same data item D. 
A proxy re-signature scheme is a tuple of (possibly probabilistic) polynomial time algorithms $\mathsf{PS}=(\mathsf{KeyGen}, \mathsf{ReKey}, \mathsf{Sign},\mathsf{ReSign}, \mathop{\mathsf{Verify}})$, where $(\mathsf{KeyGen}, \mathsf{Sign}, \mathop{\mathsf{Verify}})$ form the standard key generation, signing, and verification algorithms as in $\mathcal{DS}$. \begin{description} \item[$rk_{A \rightarrow B} \gets \mathsf{ReKey}$] On input $(\mathsf{pk}_A, \mathsf{sk}_A^{*}, \mathsf{pk}_B, \mathsf{sk}_B)$, the re-signature key generation algorithm, $\mathsf{ReKey}$, outputs a key $rk_{A \rightarrow B}$ for the proxy. (Note: $rk_{A \rightarrow B}$ allows the proxy to transform A's signatures into B's signatures; thus B is the delegator.) The input marked with a `*' is optional. \item[$\sigma_B(m) \gets \mathsf{ReSign}$] On input $rk_{A \rightarrow B}$, a public key $\mathsf{pk}_A$, a signature $\sigma$, and a message m, the re-signature function, $\mathsf{ReSign}$, outputs $\sigma_B(m)$ if $\mathop{\mathsf{Verify}}(\mathsf{pk}_A, m, \sigma)$ accepts, and $\perp$ otherwise. \end{description} \textsc{\textbf{Searchable Symmetric Encryption ($\mathcal{SSE}$)}} \label{SSE} $\mathcal{SSE}$ \cite{Curtmola06} aims to provide confidentiality and searchability simultaneously. The data user can delegate a token for a specific query, which allows the server to perform the query over encrypted data. One approach to provisioning symmetric encryption with search capabilities is with a so-called \emph{secure index}, as shown by Goh in \cite{cryptoeprint:2003:216}. The client indexes and encrypts its data collection and sends the secure index together with the encrypted data to the server. To search for a keyword w, the data user generates and sends a trapdoor for w, which the server uses to run the search operation and recover pointers to the appropriate (encrypted) documents \cite{10.5555/2590701.2590705}. 
Alternatively, using oblivious RAM techniques, symmetric searchable encryption can be achieved in its full generality (e.g., conjunctions or disjunctions of keywords) with optimal security using the work of Ostrovsky and Goldreich on oblivious RAMs \cite{10.1145/100216.100289,10.1145/233551.233553}. We assume that the client processes the data collection $\mathsf{D}=(\mathsf{D}_1, \cdots, \mathsf{D}_n)$ and sets up a ``database'' DB that maps every keyword $\mathsf{w}$ in the collection to the identifiers of the documents that contain it. Recall that in our context, we use the term database loosely to refer to a data structure optimized for keyword search (i.e., a search structure). For a keyword $\mathsf{w}$, we write $\mathsf{DB}[w]$ for the list of identifiers of documents that contain $\mathsf{w}$. A non-interactive and response-revealing \labelText{SSE}{label:SSE} scheme $(\mathsf{Setup},\mathsf{Token},\mathsf{Search})$ consists of: \begin{description} \item $(\mathsf{sk},\mathsf{EDB}) \gets \mathsf{Setup}(1^\secpar,DB)$ a Setup algorithm run by the client that takes as input a security parameter $1^\secpar$ and a database DB; it returns a secret key $\mathsf{sk}$ and an encrypted database EDB; \item $\mathsf{tk} \gets \mathsf{Token}(\mathsf{sk},\mathsf{w})$ a Token algorithm also run by the client that takes as input a secret key $\mathsf{sk}$ and a keyword $\mathsf{w}$; it returns a token tk; \item $\mathsf{DB}[w] \gets \mathsf{Search}(\mathsf{EDB},\mathsf{tk})$ a Search algorithm run by the server that takes as input an encrypted database $\mathsf{EDB}$ and a token $\mathsf{tk}$; it returns a set of identifiers $\mathsf{DB}[w]$. \end{description} \textsc{\textbf{Public Key Encryption with Keyword Search ($\mathcal{PEKS}$)}} \label{PEKS} $\mathcal{PEKS}$ was introduced in 2004 by Boneh et al. 
\cite{10.1007/978-3-540-24676-3_30}. Based mainly on public key encryption algorithms, it enables a data sharee to retrieve encrypted data containing some specific keyword from the centralized server. A data sharer encrypts both their data and the index with the public key and uploads them to the remote server. A data sharee who has received the corresponding private key can perform the search operation: using the private key, he generates a trapdoor for the keyword he wants to search and sends it to the server. After receiving the trapdoor, the server can test whether a given ciphertext contains the searched keyword, without learning the corresponding plaintext of the encrypted data or the keyword itself. The server then returns the query results to the data sharee. Finally, the sharee can decrypt the encrypted data sent by the server. A $\mathcal{PEKS}$ scheme consists of the following algorithms $\mathsf{(KeyGen,PEKS,Trapdoor,Test)}$: \begin{description} \item[$(\mathsf{pk}_R, \mathsf{sk}_R) \gets \mathsf{KeyGen}(s)$] takes a security parameter s and generates a public/private key pair $(\mathsf{pk}_R, \mathsf{sk}_R)$. \item[$S \gets \mathsf{PEKS}(\mathsf{pk}_R, \mathcal{W})$] for a public key $\mathsf{pk}_R$ and a word $\mathcal{W}$, produces a searchable encryption $S$ of $\mathcal{W}$. \item[$\mathcal{T}_W \gets \mathsf{Trapdoor}(\mathsf{sk}_R, \mathcal{W})$] given the receiver's private key and a word $\mathcal{W}$, produces a trapdoor $\mathcal{T}_W$. \item[$\{0,1\} \gets \mathsf{Test}(\mathsf{pk}_R, S, \mathcal{T}_W )$] given the receiver's public key, a searchable encryption $S = \mathsf{PEKS}(\mathsf{pk}_R, \mathcal{W}_0)$, and a trapdoor $\mathcal{T}_W =\mathsf{Trapdoor}(\mathsf{sk}_R, \mathcal{W})$, outputs 1 if $\mathcal{W} = \mathcal{W}_0$ and 0 otherwise. 
\end{description} \textsc{\textbf{Secure Multi Party Computation ($\mathcal{SMPC}$)}} \label{MPC} $\mathcal{SMPC}$, introduced by Yao in 1982 \cite{Yao1982}, is a `toolbox' of cryptographic techniques that allows several different data sharers to jointly analyze data, just as if they had a shared database, without violating the privacy of their underlying sensitive data: only the output of the analysis is revealed. The concept of $\mathcal{SMPC}$ is often introduced via the two millionaires' problem: two millionaires want to know who is the wealthier, while neither wants to reveal his individual wealth to the other. In practice, however, only relatively simple functions can be carried out, and complex functions are very demanding in terms of efficiency. Yao \cite{Yao1982} presented the first two-party protocol for computing functions represented as boolean circuits, using a technique called garbled circuits. Later, Goldreich, Micali, and Wigderson \cite{10.1145/28395.28420} made two contributions: first, they introduced the first multi-party protocol, also for boolean circuits, with computational security against a semi-honest adversary, and second, a general compiler for transforming any protocol with semi-honest security into one with malicious security. Protocols with information-theoretic security were later proposed by Ben-Or et al. and Chaum et al. \cite{10.1145/62212.62213,10.1145/62212.62214}. More formally, $\mathcal{SMPC}$ is stated as follows: given data users $\mathcal{O}_1, \cdots, \mathcal{O}_n$, each holding private data $\mathcal{D}_1, \cdots,\mathcal{D}_n$ respectively, the data users want to jointly compute a public function $\mathcal{F}$ on their private data, $\mathcal{F}(\mathcal{D}_1, \cdots,\mathcal{D}_n)$, while keeping that private data secret. For example, suppose we have three data users Alice, Bob and Charlie, with respective inputs x, y and z denoting their personal wealth. They want to find out who is the wealthiest without revealing to each other how much each of them has, as shown in Fig. 
\ref{MPC protocol}. Mathematically, this translates to computing: \[ \mathcal{F}(x,y,z) = \max(x,y,z) \] \begin{figure}[!htbp] \centering \includegraphics[width=.7\linewidth]{MPC-protocol-eps-converted-to.pdf} \caption{MPC protocol} \label{MPC protocol} \end{figure} \textsc{\textbf{Private Set Intersection ($\mathcal{PSI}$)}} \label{PSI} $\mathcal{PSI}$ \cite{Chen17} is a powerful tool from the $\mathcal{SMPC}$ toolbox that allows two data sharers, each holding a set, to compare encrypted versions of these sets in order to compute the intersection. For example, $\mathcal{PSI}$ allows testing whether the parties share a common data point (such as a location, ID, etc.). Many $\mathcal{PSI}$ protocols have been proposed. For example, the protocol proposed in \cite{Changyu13} is based on a novel two-party computation approach which gives reasonable efficiency and scalability. Among the first protocols for $\mathcal{PSI}$ was \cite{10.1007/978-3-540-24676-3_1}, which is based on Oblivious Polynomial Evaluation (OPE); however, it incurs a heavy cost in terms of computational complexity. Later, a $\mathcal{PSI}$ protocol with linear computation and communication complexity was introduced in \cite{6234849}, using the Diffie-Hellman (DH) protocol. The most recent and most efficient $\mathcal{PSI}$ protocols are based either on efficient OT extension and garbled Bloom filters or on hashing to bins. Existing $\mathcal{PSI}$ protocols are compared in \cite{10.1145/3154794}. 
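The classical DH-based approach mentioned above can be sketched as follows; the group parameters are toy values of ours (a real protocol hashes onto a properly sized elliptic-curve group):

```python
import hashlib, secrets

# Toy DH-based PSI: each party exponentiates hashed set elements with its
# secret key; the doubly-exponentiated values match iff the items match.
# Toy parameters only; real protocols use cryptographically sized groups.
p = 2 * 1019 + 1   # safe prime (toy size), with subgroup of order q
q = 1019

def h(item: str) -> int:
    # hash the item into the order-q subgroup (square of a hash mod p)
    x = int.from_bytes(hashlib.sha256(item.encode()).digest(), 'big') % p
    return pow(x, 2, p)

a = secrets.randbelow(q - 1) + 1   # Alice's secret exponent
b = secrets.randbelow(q - 1) + 1   # Bob's secret exponent

A = ['alice@x.com', 'carol@x.com', 'dave@x.com']   # Alice's set
B = ['carol@x.com', 'dave@x.com', 'erin@x.com']    # Bob's set

# Round 1: each party sends H(item)^key for its own set.
A1 = [pow(h(x), a, p) for x in A]
B1 = [pow(h(y), b, p) for y in B]

# Round 2: each side exponentiates the other's values with its own key.
A2 = {pow(v, b, p) for v in A1}    # {H(x)^(ab)}, computed by Bob
B2 = [pow(v, a, p) for v in B1]    # {H(y)^(ab)}, computed by Alice

# Alice matches the doubly-masked values to recover the intersection.
intersection = {y for y, t in zip(B, B2) if t in A2}
print(sorted(intersection))        # ['carol@x.com', 'dave@x.com']
```

Each side only ever sees masked values $H(\cdot)^{k}$ and $H(\cdot)^{ab}$, so non-matching items stay hidden; this is the linear-complexity DH construction in spirit, not a hardened implementation.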
\labelText{PSI}{label:PSI} Suppose we have two parties, Alice and Bob, such that Alice has a set of items $\mathcal{A}=(a_1,\cdots ,a_n)$ and Bob has another set $\mathcal{B}=(b_1,\cdots ,b_n)$. The goal of $\mathcal{PSI}$ is to allow Alice and Bob to obtain the result of the intersection $\mathcal{A}\cap \mathcal{B}$ under the following privacy restriction: the protocol must not reveal any information beyond the intersection itself. In the server-client $\mathcal{PSI}$ variant, the $\mathsf{EXT}$ user learns the intersection of his encrypted set with the set of the cloud, without the cloud learning the intersection of its set with the client's, as shown in Fig. \ref{fig:PSI}. \begin{figure}[!htbp] \centering \scalebox{0.70}{ \includegraphics[width=.7\linewidth]{server-client-PSI-eps-converted-to.pdf}} \caption{Server-client PSI} \label{fig:PSI} \end{figure} \textsc{\textbf{Storage path encryption}} \label{storage path encryption} Recently, Cheng \emph{et al.} \cite{Cheng15} proposed a scheme for the secure storage of big data on clouds. In the proposed scheme, the big data are first separated into many sequenced parts, and each part is then stored on a different storage medium owned by a different cloud storage provider. To access the data, the different parts are first collected from the different data centres and then restored into the original form before being presented to the data user. \labelText{Storage path encryption}{label:Storage path encryption} The data are first separated into a sequence of $n$ parts, and each part is then stored at one of $m$ different storage providers. To retrieve the data, the parts are collected from the different data centres and restored into the original data before being sent to the data user.
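The split-and-scatter idea can be sketched as follows. The in-memory \texttt{Provider} class, the placement policy, and all names are illustrative stand-ins, not the actual interface of \cite{Cheng15}; the point is that the retrieval path (which part lives where, in which order) becomes the secret to protect:

```python
import secrets

def split(data: bytes, n: int):
    """Split data into n sequenced parts of near-equal size."""
    k, r = divmod(len(data), n)
    parts, pos = [], 0
    for i in range(n):
        size = k + (1 if i < r else 0)
        parts.append(data[pos:pos + size])
        pos += size
    return parts

class Provider:                        # stand-in for one cloud storage service
    def __init__(self):
        self.blobs = {}
    def put(self, blob):
        handle = len(self.blobs)
        self.blobs[handle] = blob
        return handle
    def get(self, handle):
        return self.blobs[handle]

def store(data: bytes, providers):
    """Scatter the sequenced parts over providers; return the storage path."""
    path = []
    for part in split(data, len(providers)):
        p = secrets.choice(providers)  # toy placement policy
        path.append((p, p.put(part)))
    return path                        # this path would itself be encrypted

def retrieve(path) -> bytes:
    """Collect the parts in sequence and restore the original data."""
    return b"".join(p.get(handle) for p, handle in path)

clouds = [Provider() for _ in range(3)]
trail = store(b"sensitive big-data record", clouds)
assert retrieve(trail) == b"sensitive big-data record"
```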
\textsc{\textbf{Oblivious RAM ($\mathcal{ORAM}$)}} \label{ORAM} This technique was introduced by Goldreich and Ostrovsky \cite{Ostrovsky90}; it enables a data sharer to store data on a cloud server and read/write individual locations of the data while hiding the \emph{access pattern}. The security of $\mathcal{ORAM}$ is based on the fact that for any two values $\mathsf{M_1,M_2}$ and equal-size sequences of read/write operations $\mathsf{S_1,S_2}$, the server cannot distinguish between an $\mathcal{ORAM}$ execution with ($\mathsf{M_1,S_1}$) and one with ($\mathsf{M_2,S_2}$). \labelText{ORAM}{label:ORAM} A client C wants to perform read and write operations on a large database residing on a remote, untrusted server S. The database is encrypted with a symmetric key owned by C. Whenever C wants to perform an operation on the database, it does the following: C sends a request to S to download the whole database. Then, C decrypts the whole database, performs the operation on the desired element, and re-encrypts the database (with the same key). Finally, C re-uploads the re-encrypted database to S. \textsc{\textbf{Proof of Data Possession ($\mathcal{PDP}$)}} \label{PDP} $\mathcal{PDP}$ \cite{10.1145/1315245.1315318} is a cryptographic protocol that provides data integrity verification on remote untrusted servers. The client periodically challenges the server for evidence that the data still exist, and then compares this evidence with locally stored verification metadata to verify the integrity of the data. $\mathcal{PDP}$ supports probabilistic proofs, which means that the client does not need to challenge and compare the evidence corresponding to all of the data. This property significantly reduces the computation and communication cost of the protocol. Other variants of $\mathcal{PDP}$ \cite{ateniese2008scalable} \cite{curtmola2008mr} \cite{erway2015dynamic} were introduced later to improve efficiency and scalability and to support multiple replicas and file updates.
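The challenge-response idea behind $\mathcal{PDP}$ can be illustrated with a much-simplified hash-based sketch. Unlike the homomorphic RSA-based tags of \cite{10.1145/1315245.1315318}, which allow an unbounded number of challenges with constant client state, this toy version only supports a bounded number of challenges precomputed before the file is outsourced:

```python
import hashlib
import secrets

def precompute_challenges(data: bytes, t: int):
    """Client side, before upload: derive t one-time (nonce, digest) pairs.
    Only these small pairs are kept locally; the file itself can be deleted."""
    challenges = []
    for _ in range(t):
        nonce = secrets.token_bytes(16)
        digest = hashlib.sha256(nonce + data).digest()
        challenges.append((nonce, digest))
    return challenges

def server_prove(stored: bytes, nonce: bytes) -> bytes:
    """Server side: answering correctly requires possessing the full file,
    since the nonce was unknown when the file was uploaded."""
    return hashlib.sha256(nonce + stored).digest()

def client_verify(challenges, proof_fn) -> bool:
    """Client side: consume a fresh one-time challenge, compare the proof."""
    nonce, expected = challenges.pop()
    return proof_fn(nonce) == expected

data = b"outsourced block " * 1024
pending = precompute_challenges(data, t=3)
assert client_verify(pending, lambda n: server_prove(data, n))           # honest server
assert not client_verify(pending, lambda n: server_prove(data[:-1], n))  # lost a byte
```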
\labelText{PDP}{label:PDP} Consider a client who owns the data $m$ and wants to store the data on a server. The general $\mathcal{PDP}$ scheme is defined as follows: \begin{description} \item $(\mathsf{pk}, \mathsf{sk}) \gets \mathsf{KeyGen}(1^\secpar)$ is a probabilistic key generation algorithm run by the client to set up the scheme. It takes a security parameter $\secpar$ as input and returns a pair of public and secret keys $(\mathsf{pk}, \mathsf{sk})$. \item $T_{m} \gets \mathsf{TagBlock} (\mathsf{pk}, \mathsf{sk}, m)$ is an algorithm run by the client to generate the verification metadata. It takes as inputs a public key $\mathsf{pk}$, a secret key $\mathsf{sk}$ and a file block $m$, and returns the verification metadata $T_{m}$. \item $\mathcal{V} \gets \mathsf{GenProof}(\mathsf{pk}, F, c, \Sigma)$ is run by the server to generate a proof of data possession. It takes as inputs a public key $\mathsf{pk}$, an ordered collection $F$ of blocks, a challenge $c$ and an ordered collection $\Sigma$ of the verification metadata corresponding to the blocks in $F$. It returns a proof of data possession $\mathcal{V}$ for the blocks in $F$ that are determined by the challenge $c$. \item $\{ 1, 0\} \gets \mathsf{CheckProof}(\mathsf{pk}, \mathsf{sk}, c, \mathcal{V})$ is run by the client to verify the proof of possession. It takes as inputs a public key $\mathsf{pk}$, a secret key $\mathsf{sk}$, a challenge $c$ and a proof of possession $\mathcal{V}$. It returns whether $\mathcal{V}$ is a correct proof of possession for the blocks. \end{description} \heading{\textbf{Homomorphic Encryption ($\mathcal{HE}$)}} \label{HE} For cost-efficiency reasons, data sharers usually outsource their data to a server that provides access to the data as a service. $\mathcal{HE}$, first introduced by Rivest et al.
\cite{Rivest1978}, allows certain operations to be performed over a data user's encrypted data without decrypting it. The result produced is still encrypted, but when decrypted by the user it matches exactly the result that would have been obtained if the same computations had been performed on the user's raw data instead of the uploaded encrypted data. One of the most common scenarios where $\mathcal{HE}$ can be used is outsourced computation: a data sharer sends encrypted data to a server and asks the latter to evaluate a function $\mathcal{F}$ on this encrypted data. The inputs and outputs of the computation are encrypted under the client's secret/public key, and the server manipulates only encrypted data. $\mathcal{HE}$ schemes can be roughly classified into the following three types: \begin{itemize} \item Somewhat Homomorphic Encryption ($\mathcal{SHE}$): both addition and multiplication operations are allowed, but only a limited number of times. \item Fully Homomorphic Encryption ($\mathcal{FHE}$): an unlimited number of homomorphic operations is allowed, by means of bootstrapping \cite{Gentry09}. \item Partially Homomorphic Encryption ($\mathcal{PHE}$): such schemes are in general more efficient than $\mathcal{SHE}$ and $\mathcal{FHE}$, mainly because only one type of homomorphic operation is allowed on the encrypted message, i.e., either addition or multiplication, though an unlimited number of times. \end{itemize} Early homomorphic cryptosystems such as RSA \cite{Rivest1978}, El Gamal \cite{10.1007/3-540-39568-7_2}, and Paillier \cite{10.1007/3-540-48910-X_16} can only support a single operation on ciphertexts, such as addition, multiplication, or XOR, and are called partially homomorphic.
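As a concrete example of a $\mathcal{PHE}$ scheme, the Paillier cryptosystem \cite{10.1007/3-540-48910-X_16} is additively homomorphic: multiplying two ciphertexts modulo $n^2$ yields an encryption of the sum of the plaintexts. A minimal sketch with the common parameter choice $g = n + 1$ (toy key sizes, for illustration only):

```python
import math
import secrets

def keygen(p: int, q: int):
    """Paillier key generation with the common choice g = n + 1."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)               # modular inverse; exists when g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pk, m: int) -> int:
    (n,) = pk
    while True:                        # blinding factor r must be coprime to n
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    # Enc(m) = (1 + n)^m * r^n mod n^2 = (1 + m*n) * r^n mod n^2
    return (pow(1 + n, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c: int) -> int:
    n, lam, mu = sk
    l = (pow(c, lam, n * n) - 1) // n  # the function L(x) = (x - 1) / n
    return (l * mu) % n

pk, sk = keygen(293, 433)              # toy primes; real keys use large p, q
c1, c2 = encrypt(pk, 15), encrypt(pk, 27)
# Homomorphism: multiplying ciphertexts adds the underlying plaintexts.
assert decrypt(sk, (c1 * c2) % (pk[0] ** 2)) == 15 + 27
```

Note that the random blinding factor $r$ makes encryption probabilistic: two encryptions of the same plaintext are different ciphertexts, yet both decrypt correctly.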
New cryptographic solutions for computation outsourcing became possible after Gentry's discovery of the first viable $\mathcal{FHE}$, which solved a long-standing major problem in cryptography and theoretical computer science \cite{Gentry09}. The first plausible and achievable $\mathcal{FHE}$ scheme was introduced in 2009 in the seminal work of Gentry \cite{10.5555/1834954}; it allows any computable function (both additions and multiplications) to be performed on the encrypted data. Gentry split the $\mathcal{FHE}$ problem into two components: the design of a somewhat homomorphic encryption scheme ($\mathcal{SWHE}$) that allows a limited number of $\textsc{Eval}$ operations, and a bootstrapping algorithm that achieves fully homomorphic encryption through the repeated application of $\mathcal{SWHE}$. The initial scheme was implemented by Gentry and Halevi \cite{10.1007/978-3-642-20465-4_9}. Unlike public key encryption, which has three procedures, i.e., key generation, encryption and decryption, an $\mathcal{HE}$ scheme has four procedures, including the evaluation algorithm. The notation $\mathcal{E}(x)$ is used to denote the encryption of the message x. Let $\mathcal{E}(m_1)=m_1^e$ and $\mathcal{E}(m_2)=m_2^e$.
Then, Addition Homomorphism: \[ \mathcal{E}(m_1) + \mathcal{E}(m_2) =m_1^e+m_2^e= (m_1+m_2)^e=\mathcal{E}(m_1+m_2) \] Multiplication Homomorphism: \[ \mathcal{E}(m_1)\cdot \mathcal{E}(m_2) =m_1^e \cdot m_2^e= (m_1\cdot m_2)^e=\mathcal{E}(m_1 \cdot m_2) \] More formally, let the message space $(\mathcal{M}, \mathsf{o})$ be a finite (semi-)group, and let $\secpar$ be the security parameter. A \labelText{HE}{label:HE} on $\mathcal{M}$ is a quadruple $(\mathsf{KeyGen}, \mathsf{Enc}, \mathsf{Dec}, A)$ of probabilistic, expected polynomial time algorithms, satisfying the following functionalities: \begin{description} \item $\mathsf{k} \gets \mathsf{KeyGen}(1^\secpar)$ On input $1^\secpar$, the key generation algorithm $\mathsf{KeyGen}$ outputs an encryption/decryption key pair $\mathsf{k}=(\mathsf{k}_e, \mathsf{k}_d) \in \mathcal{K}$, where $\mathcal{K}$ denotes the key space. \item $c \gets \mathsf{Enc}(1^\secpar,\mathsf{k}_e,m)$ On inputs $1^\secpar$, $\mathsf{k}_e$, and an element $m \in \mathcal{M}$, the encryption algorithm $\mathsf{Enc}$ outputs a ciphertext $c \in C$, where $C$ denotes the ciphertext space. \item $m \gets \mathsf{Dec}(1^\secpar,\mathsf{k},c)$ The decryption algorithm $\mathsf{Dec}$ is deterministic. On inputs $1^\secpar$, $\mathsf{k}$ and an element $c \in C$, it outputs an element in the message space $\mathcal{M}$ such that for all $m \in \mathcal{M}$ it holds: if $c=\mathsf{Enc}(1^\secpar,\mathsf{k}_e, m)$ then $\textsc{Pr}[\mathsf{Dec}(1^\secpar,\mathsf{k},c) \ne m]$ is negligible, i.e., it holds that $\textsc{Pr}[\mathsf{Dec}(1^\secpar,\mathsf{k},c) \ne m] < 2^{-\secpar}$.
\end{description} Homomorphic Property: $A$ is an algorithm that, on inputs $1^\secpar$, $\mathsf{k}_e$, and elements $(c_1,c_2) \in C$, outputs an element $c_3 \in C$ so that for all $m_1, m_2 \in \mathcal{M}$ it holds: if $m_3=m_1 ~\mathsf{o}~ m_2$, $c_1=\mathsf{Enc}(1^\secpar,\mathsf{k}_e, m_1)$, and $c_2=\mathsf{Enc}(1^\secpar, \mathsf{k}_e, m_2)$, then $\textsc{Pr}[\mathsf{Dec}(1^\secpar,\mathsf{k},A(1^\secpar, \mathsf{k}_e, c_1, c_2)) \ne m_3]$ is negligible. The security of the most practical $\mathcal{HE}$ schemes is based on the Ring Learning With Errors ($\mathsf{RLWE}$) problem, a hard mathematical problem related to high-dimensional lattices. Namely, the security assumption of these encryption schemes states that if the scheme can be broken efficiently, then the $\mathsf{RLWE}$ problem can be solved efficiently. \heading{\textbf{Functional Encryption ($\mathcal{FE}$)}} \label{FE} $\mathcal{FE}$ is a public key construction in which it is possible to produce functional secret keys allowing a party to evaluate a specific function $\mathcal{F}$ (generally public) on an encrypted input during its decryption. The input is thus encrypted while the output is in cleartext: the party performing the (functional) decryption learns the result of the function on the specific data, but nothing else. A $\mathcal{FE}$ scheme enables a data sharer to encrypt a data set and share the ciphertexts with a data sharee such that the data sharee can obtain a (specific) function value of the data set from the ciphertexts, but nothing more about the data set itself. $\mathcal{FE}$ was properly formalized in 2011 by Boneh, Sahai and Waters in \cite{10.1007/978-3-642-19571-6_16}. Nowadays, there are no known $\mathcal{FE}$ schemes that can efficiently evaluate general functions; however, the literature proposes multiple efficient constructions for evaluating linear and quadratic functions.
A $\mathcal{FE}$ scheme for a functionality $\mathcal{F}$ is defined as a tuple of four PPT algorithms $(\mathsf{Setup}, \mathsf{KeyGen}, \mathsf{Enc}, \mathsf{Dec})$ as follows: \begin{description} \item $(\mathsf{mpk}, \mathsf{msk}) \gets \mathsf{Setup}(1^\secpar)$ On input $1^\secpar$, generate a master public key $\mathsf{mpk}$ and master secret key $\mathsf{msk}$ pair. \item $\mathsf{sk} \gets \mathsf{KeyGen}(\mathsf{msk}, k)$ On inputs $\mathsf{msk}$ and a function index $k$, generate a functional secret key $\mathsf{sk}$ for $k$. \item $c \gets \mathsf{Enc}(\mathsf{mpk}, x)$ Encrypt the message $x$. \item $y \gets \mathsf{Dec}(\mathsf{sk}, c)$ On inputs $\mathsf{sk}$ and $c$, compute $y=\mathcal{F}(k, x)$ from $c$. \end{description} \heading{\textbf{Attribute-Based Homomorphic Encryption ($\mathcal{ABHE}$)}} \label{ABHE} The first $\mathcal{ABHE}$ was constructed by Gentry et al. in \cite{10.1007/978-3-642-40041-4_5} from the Learning with Errors (LWE) problem. In $\mathcal{ABHE}$, a data sharer describes a policy (predicates) while encrypting his data, and a trusted party issues decryption keys for the attributes and distributes them among data sharees. A data sharee can decrypt a ciphertext if his attributes satisfy the data user's defined policy. The only known way to achieve fully $\mathcal{ABHE}$ (i.e., where all circuits can be evaluated) is through indistinguishability obfuscation \cite{6686139}, especially the construction in \cite{10.1007/978-3-319-12280-9_1}. An $\mathcal{ABHE}$ scheme is a tuple of probabilistic polynomial time (PPT) algorithms $\mathsf{ABHE}=(\mathsf{Setup},\mathsf{KeyGen},\mathsf{Encrypt},\mathsf{Decrypt},\mathsf{Eval})$ where $\mathsf{Setup}$, $\mathsf{KeyGen}$, $\mathsf{Encrypt}$ are defined equivalently to $\mathcal{KP\mathchar`-ABE}$ [\ref{ABE}].
$m \gets \mathsf{Dec}(k_{f_1},\cdots,k_{f_k},c)$: On input a sequence of $k \leq \mathcal{K}$ secret keys for policies $f_1, \cdots, f_k \in \mathcal{F}$ and a ciphertext $c$, output a plaintext $m\in \mathcal{M}$ iff every attribute associated with $c$ is satisfied by at least one of the $f_i$; otherwise output $\perp$. $c' \gets \mathsf{Eval}(\mathsf{pp},C,c_1,\cdots ,c_l)$: On input public parameters $\mathsf{pp}$, a circuit $C\in \mathcal{C}$ and ciphertexts $c_1,\cdots ,c_l$, output an evaluated ciphertext $c'$. \heading{\textbf{Private Information Retrieval ($\mathcal{PIR}$)}} $\mathcal{PIR}$ is a protocol that allows a user to retrieve an item from a server in possession of a database without revealing which item is retrieved, which is very useful in the cloud outsourcing context. $\mathcal{PIR}$ is a weaker version of 1-out-of-$n$ oblivious transfer, which additionally requires that the user learn nothing about the other database items. An important goal in $\mathcal{PIR}$ research is to reduce the amount of communication required between the server and the querier. Chor et al. \cite{10.1145/293347.293350} introduced the first $\mathcal{PIR}$ scheme in an information-theoretic model with multiple non-colluding servers. Shortly thereafter, several works provided $\mathcal{PIR}$ under the assumption that certain cryptographic problems are hard \cite{10.1007/11523468_65,10.1007/3-540-48910-X_28,10.1007/3-540-45539-6_9}. More formally, a $\mathcal{PIR}$ is an interactive protocol between two parties: a database $\mathcal{D}$ and a user $\mathcal{U}$. The database stores a data string $x \in \{0,1\}^n$, and the user has an index $i \in [n]$.
In the first round, the protocol does the following: (a) the user sends a query to the database; this query is generated by an efficient randomized query algorithm, taking as input the index $i$ and a random string $r_U$; (b) the database sends an answer to the user; this answer is generated by an efficient deterministic answer algorithm, taking as input the query sent by the user and the database $x$; and (c) the user applies an efficient reconstruction function (taking as input the index $i$, the random string $r_U$, and the answer sent by the database). \section{Privacy in BDS} \label{sec:privacy} In recent years, with the increase in data demand, the development of large-scale data sharing, and the CIA (confidentiality, integrity, availability) requirements, privacy has become a new indispensable requirement. The privacy concerns discussed in this article are mainly divided into two aspects, data privacy and user privacy, which are defined in Section~\ref{Introduction}. In this section, we first discuss the data privacy requirements of BDS, each of which targets one type of known vulnerability. Second, we describe the basic (and inherent) privacy-preserving techniques. Finally, we discuss techniques to achieve user privacy, which is desired by many BDS applications. \subsection{Data Privacy requirements in BDS.} \label{Privacy} Data privacy is one of the most important concepts of our time, yet it is also one of the hardest to achieve. Privacy Preserving Data Publishing (PPDP) is the process through which we provide methods and tools for publishing useful information while preserving data privacy. Recently, PPDP has received considerable attention among researchers \cite{10.1145/1749603.1749605,9298747,rajendran2017study}. To achieve PPDP, the data sharer first collects data from individuals. Then, the data sharer prepares the data to be processed and anonymized.
Finally, the processed/anonymized data are sent to the data sharee for further analysis or research purposes. The original data are assumed to be sensitive and private if they contain the following four types of attributes: \begin{itemize} \item Identifier (ID): attributes that can be used to uniquely identify a person, e.g., name, driving license number, and mobile number. \item Quasi-identifier (QID): attributes that cannot uniquely identify a record by themselves but, if linked with some external dataset, may be able to re-identify the records. \item Sensitive attribute (SA): attributes that a person may want to conceal, e.g., salary and disease. \item Non-sensitive attribute (NSA): attributes which, if disclosed, will not violate the privacy of the user. All attributes other than identifiers, quasi-identifiers and sensitive attributes are classified as non-sensitive attributes. \end{itemize} Below, we discuss the three types of data privacy requirements in BDS, which we define as follows: \textbf{Data Privacy during Downloading.}\label{data publishing} This deals with what someone does once he has obtained the data containing sensitive information. Actions include the following: \circled{1} aggregation, where data are combined; \circled{2} identification, where data are connected to an individual; \circled{3} secondary use, where data are used for a reason other than intended; \circled{4} exclusion, where data are not revealed to the person they were collected from; and \circled{5} insecurity, where data are leaked. The publishing procedures on big data should not leak sensitive information of individuals. \textbf{Data Privacy during Computation.}\label{data computation} Some operations can be run over encrypted data; e.g., a number of databases around the world currently host a wealth of genomic data that is very useful to researchers conducting a variety of genomic studies.
However, patients who volunteer their genomic data face the risk of privacy invasion. The computation procedures and results, whether obtained through data outsourcing or data sharing on BDS, should not contain any sensitive information of individuals. \textbf{Data Privacy during Searching.}\label{data searching} Searching over encrypted data is required in many scenarios; e.g., users want to query data on an untrusted BDS platform without revealing sensitive information about the queried data. The queried data can be either public or anonymous, but the service platform should not be able to identify their specific content. Furthermore, the search results on big data should not contain any sensitive information of individuals. In Table \ref{Comparison of techniques} we summarize the set of privacy concerns that need to be addressed by the techniques described in Section \ref{Privacy techniques}. \begin{table*}[tbp] \centering \caption{Summary of existing privacy-preserving techniques for BDS} \label{Comparison of techniques} \begin{tabular}{ p{.32\linewidth} p{.45\linewidth} } \toprule $\textbf{Data Privacy Requirements}$ & $\textbf{Privacy-preserving Techniques}$ \\ \\ \toprule Data Privacy During Downloading & K-anonymity, L-diversity, and T-closeness \\ \toprule Data Privacy During Computation & $\mathcal{HE}$ , $\mathcal{MPC}$ , $\mathcal{DP}$, $\mathcal{FPE}$ \\ \toprule Data Privacy During Searching & $\mathcal{ORAM}$, $\mathcal{PEKS}$, $\mathcal{SSE}$ for various searches, e.g., keyword, range, boolean, and KNN \\ \bottomrule \end{tabular} \end{table*} We have four main groups of privacy-preservation techniques: cryptographic tools, \emph{Format-Preserving Encryption}, \emph{Differential Privacy} ($\mathcal{DP}$) and \emph{K-anonymity} (with further enhanced versions such as \emph{L-diversity} and \emph{T-closeness}), which we present in the next section.
\subsection{Data Privacy Techniques} \label{Privacy techniques} In this section, we present a comprehensive overview of recent anonymization techniques used for PPDP. Specifically, our review explains anonymization approaches related to individual privacy protection. \heading{\textbf{Format-Preserving Encryption ($\mathcal{FPE}$)}} $\mathcal{FPE}$ \cite{bellare2009format} is designed to encrypt data of some specified format into a ciphertext of identical format, such as a ciphertext of the same length as the original data. $\mathcal{FPE}$ can be constructed from symmetric encryption, and schemes for other formats have also been developed, such as date-time \cite{liu2010format} and character \cite{li2012format} formats. An $\mathcal{FPE}$ scheme is $E_{K}^{N, T}$ on $X_{N}$, where $E$ is the encryption algorithm, $K$ is the encryption key, $\left\{X_{N}\right\}_{N \in N}$ is the collection of domains, $X_{N}$ is a slice such that $X=\bigcup_{N} X_{N}$, $N$ is the format space, and $T$ is the tweak. \heading{\textbf{Differential Privacy ($\mathcal{DP}$)}} \label{DP} Another important privacy-preserving model is \emph{Differential Privacy ($\mathcal{DP}$)} \cite{Dwork06,mcsherry2007mechanism,desfontaines2020sok}. It can be seen as equivalent to perturbing the original data and then computing the queries over that modified data. An algorithm is said to be differentially private if, by looking at the output, one cannot tell whether any particular individual's data was included in the original dataset or not. It consists of introducing a certain amount of random noise into data queries such that any statistical analysis over the whole set is significantly close to the real result, but inference about any individual datum is infeasible. In $\mathcal{DP}$, analysts are not given direct access to the database containing personal information; instead, an intermediary software layer between the database and the analyst protects the privacy.
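A standard way to realize this guarantee for numeric queries is the Laplace mechanism of \cite{Dwork06}: noise drawn from a Laplace distribution with scale $\Delta f/\epsilon$ is added to the true answer, where $\Delta f$ is the query's sensitivity, i.e., how much one individual's record can change the answer. A minimal sketch (the dataset and the query are illustrative):

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample from Lap(0, scale) as a randomly signed exponential."""
    return random.choice([-1, 1]) * random.expovariate(1.0 / scale)

def private_count(db, predicate, epsilon: float) -> float:
    """Counting query: adding or removing one record changes the count by
    at most 1, so the sensitivity is 1 and the noise scale is 1/epsilon."""
    true_count = sum(1 for row in db if predicate(row))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative dataset: (age, has_condition)
db = [(34, True), (29, False), (41, True), (55, True), (23, False)]
noisy = private_count(db, lambda r: r[1], epsilon=0.5)
print(round(noisy, 2))   # true count is 3; the answer is 3 + Lap(2) noise
```

A smaller $\epsilon$ means a larger noise scale and hence stronger privacy at the cost of accuracy; repeated queries consume privacy budget additively under sequential composition.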
Currently, $\mathcal{DP}$ is becoming a practical privacy-preserving technique and is applied in various application scenarios \cite{hassan2019differential,yang2020local,zhao2019survey}. Consider two databases $D_{1}$ and $D_{2}$ that differ in at most one element. The general requirement \cite{dwork2008differential} of $\mathcal{DP}$ is: $$ \operatorname{Pr}\left[\mathcal{K}\left(D_{1}\right) \in S\right] \leq \exp (\epsilon) \times \operatorname{Pr}\left[\mathcal{K}\left(D_{2}\right) \in S\right] $$ where $\mathcal{K}$ is the algorithm applied by the curator when releasing information; its output is called a \emph{transcript}. We say that $\mathcal{K}$ gives $\epsilon$-differential privacy if the above holds for all data sets $D_{1}$ and $D_{2}$ differing on at most one element and all $S \subseteq \operatorname{Range}(\mathcal{K})$. \heading{\textbf{K-anonymity}} \label{K-anonymity} The k-anonymity property \cite{sweeney2002k,lefevre2006mondrian,meyerson2004complexity,el2008protecting} is satisfied in a released dataset if the information of each individual contained in the release cannot be distinguished from that of at least $k - 1$ other individuals in the release. To achieve k-anonymity, it is required that at least $k$ individuals in the dataset share each combination of the attributes that might become identifying. However, k-anonymity prevents identity disclosure but not attribute disclosure: if there is no diversity in the values of the sensitive attribute, an attacker can easily discover the sensitive value of an individual through a homogeneity attack. Two enhancements of k-anonymity, presented below, address this weakness. The definition of k-anonymity is divided into three parts: \begin{enumerate} \item Attributes. Let $\mathrm{B}\left(A_{1}, \ldots, A_{n}\right)$ be a table with a finite number of tuples. The finite set of attributes of $\mathrm{B}$ is $\left\{A_{1}, \ldots, A_{n}\right\}$. \item Quasi-identifier.
Given a population of entities $\mathrm{U}$, an entity-specific table $\mathrm{T}\left(\mathrm{A}_{1}, \ldots, \mathrm{A}_{\mathrm{n}}\right)$, $f_{\mathrm{c}}: \mathrm{U} \rightarrow \mathrm{T}$ and $f_{g}: \mathrm{T} \rightarrow \mathrm{U}^{\prime}$, where $\mathrm{U} \subseteq \mathrm{U}^{\prime}$, a quasi-identifier of $\mathrm{T}$, written $Q_{T}$, is a set of attributes $\left\{\mathrm{A}_{\mathrm{i}}, \ldots, \mathrm{A}_{\mathrm{j}}\right\} \subseteq\left\{\mathrm{A}_{1}, \ldots, \mathrm{A}_{\mathrm{n}}\right\}$ where: $\exists p_{i} \in \mathrm{U}$ such that $f_{g}\left(f_{c}\left(p_{i}\right)\left[Q_{T}\right]\right)=p_{i}$. \item K-anonymity. Let $\operatorname{RT}\left(A_{1}, \ldots, A_{n}\right)$ be a table and $Q I_{R T}$ be the quasi-identifier associated with it. RT is said to satisfy $k$-anonymity if and only if each sequence of values in $\mathrm{RT}\left[Q I_{R T}\right]$ appears with at least $k$ occurrences in $\mathrm{RT}\left[Q I_{R T}\right]$. \end{enumerate} \heading{\textbf{L-diversity}} \label{L-diversity} The L-diversity model is an extension of the k-anonymity model, which reduces the granularity of the data representation using techniques such as generalization and suppression so that any given record maps onto at least $k-1$ other records in the data. An equivalence class is said to have l-diversity if there are at least $l$ ``well-represented'' values for the sensitive attribute; a table is said to have l-diversity if every equivalence class of the table has l-diversity \cite{Machanavajjhala:2007,xiao2010hardness}. Compared with k-anonymity, l-diversity can prevent the Homogeneity Attack and the Background Knowledge Attack. It also enabled early applications such as \cite{liu2009query}. L-diversity extends the concept of k-anonymity.
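Both k-anonymity and the distinct variant of l-diversity can be checked mechanically by grouping a released table on its quasi-identifier. In the illustrative release below (column names and values are made up), the second equivalence class is homogeneous in the sensitive attribute, so the table is 4-anonymous yet fails 2-diversity, exactly the homogeneity attack discussed above:

```python
from collections import defaultdict

def equivalence_classes(table, qid_cols):
    """Group records by their quasi-identifier values."""
    groups = defaultdict(list)
    for row in table:
        groups[tuple(row[c] for c in qid_cols)].append(row)
    return groups

def is_k_anonymous(table, qid_cols, k: int) -> bool:
    """Every QID combination must occur in at least k records."""
    return all(len(g) >= k
               for g in equivalence_classes(table, qid_cols).values())

def is_l_diverse(table, qid_cols, sa_col, l: int) -> bool:
    """Distinct l-diversity: every equivalence class must contain at
    least l distinct values of the sensitive attribute."""
    return all(len({row[sa_col] for row in g}) >= l
               for g in equivalence_classes(table, qid_cols).values())

# Illustrative 4-anonymous release (ZIP code and age already generalized).
table = [
    {"zip": "130**", "age": "<30", "disease": "heart"},
    {"zip": "130**", "age": "<30", "disease": "viral"},
    {"zip": "130**", "age": "<30", "disease": "cancer"},
    {"zip": "130**", "age": "<30", "disease": "cancer"},
    {"zip": "148**", "age": ">=40", "disease": "cancer"},
    {"zip": "148**", "age": ">=40", "disease": "cancer"},
    {"zip": "148**", "age": ">=40", "disease": "cancer"},
    {"zip": "148**", "age": ">=40", "disease": "cancer"},
]
qid = ["zip", "age"]
print(is_k_anonymous(table, qid, k=4))           # True
print(is_l_diverse(table, qid, "disease", l=2))  # False: second class is homogeneous
```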
In addition to the three definitions above, L-diversity requires the following: \begin{itemize} \item Domain Generalization. A domain $D^{\star}=\left\{P_{1}, P_{2}, \ldots\right\}$ is a generalization (partition) of a domain $D$ if $\cup P_{i}=D$ and $P_{i} \cap P_{j}=\emptyset$ whenever $i \neq j$. For $x \in D$ we let $\phi_{D^{\star}}(x)$ denote the element $P \in D^{\star}$ that contains $x$. \item Lack of Diversity. Lack of diversity in the sensitive attribute manifests itself as follows: $$ \forall s^{\prime} \neq s, \quad n_{\left(q^{*}, s^{\prime}\right)} \ll n_{\left(q^{*}, s\right)} $$ \item L-Diversity. A $q^{\star}$-block is L-diverse if it contains at least L well-represented values for the sensitive attribute $S$. A table is L-diverse if every $q^{\star}$-block is L-diverse. \end{itemize} The notions of ``well-represented'' are as follows: \begin{description} \item[Distinct $\ell$-diversity] The simplest interpretation of ``well represented'' is to ensure that there are at least $\ell$ distinct values of the sensitive attribute in each equivalence class. Distinct $\ell$-diversity does not prevent probabilistic inference attacks: an equivalence class may have one value appear much more frequently than the others, enabling an adversary to conclude that an entity in the equivalence class is very likely to have that value. This motivated the development of the following two stronger notions of $\ell$-diversity. \item[Entropy $\ell$-diversity] The entropy of an equivalence class $E$ is defined to be $$ \text {Entropy}(E)=-\sum_{s \in S} p(E, s) \log p(E, s) $$ A table is said to have entropy $\ell$-diversity if for every equivalence class $E$, Entropy$(E) \geq \log \ell$. Entropy $\ell$-diversity is stronger than distinct $\ell$-diversity.
As pointed out in \cite{Machanavajjhala:2007}, in order to have entropy $\ell$-diversity for each equivalence class, the entropy of the entire table must be at least $\log (\ell)$. Sometimes this may be too restrictive, as the entropy of the entire table may be low if a few values are very common. This leads to the following less conservative notion of $\ell$-diversity. \item[Recursive $(c, \ell)$-diversity] Recursive $(c, \ell)$-diversity makes sure that the most frequent value does not appear too frequently, and that the less frequent values do not appear too rarely. Let $m$ be the number of values in an equivalence class, and let $r_{i}, 1 \leq i \leq m$, be the number of times that the $i^{th}$ most frequent sensitive value appears in an equivalence class $E$. Then $E$ is said to have recursive $(c, \ell)$-diversity if $r_{1}<c\left(r_{l}+r_{l+1}+\ldots+r_{m}\right)$. A table is said to have recursive $(c, \ell)$-diversity if all of its equivalence classes have recursive $(c, \ell)$-diversity. \end{description} Here $q^{\star}$ is the quasi-identifier value from the generalized domain, $S$ is the domain of the sensitive attribute, $p(E, s)$ is the fraction of records in $E$ that have sensitive value $s$, and $n_{\left(q^{*}, s^{\prime}\right)}$ is the number of tuples $t^{\star}$ in the anonymized table $T^{\star}$ such that $t^{\star}[S]=s^{\prime}$ and $t^{\star}\left[Q^{\star}\right]=q^{\star}$. \heading{\textbf{T-closeness}} \label{T-closeness} Since L-diversity is insufficient to prevent attribute disclosure, researchers identified two further attacks, the Skewness Attack and the Similarity Attack, and proposed the T-closeness model \cite{Li07,rebollo2009t}. An equivalence class is said to have T-closeness if the distance between the distribution of a sensitive attribute in this class and the distribution of the attribute in the whole table is no more than a threshold $t$. A table is said to have T-closeness if all of its equivalence classes have T-closeness.
It is regarded as more advantageous than the first two techniques and has more general applications, such as data publishing \cite{li2009closeness,soria2013differential}, data anonymization \cite{domingo2015t}, and randomization \cite{rebollo2009t}. T-closeness uses the Earth Mover's Distance (EMD) to measure the distance between distributions of a sensitive attribute: $$ \mathrm{D}[\mathbf{P}, \mathbf{Q}]=W O R K(\mathbf{P}, \mathbf{Q}, F)=\sum_{i=1}^{m} \sum_{j=1}^{m} d_{i j} f_{i j} $$ We then have two conclusions: \begin{enumerate} \item If $ 0 \leq d_{i j} \leq 1$ for all $i, j$, then $0 \leq \mathrm{D}[\mathrm{P}, \mathrm{Q}] \leq 1$. That is, if the ground distances are normalized, i.e., all distances are between 0 and $1$, then the EMD between any two distributions is between 0 and $1$. This gives a range from which one can choose the $t$ value for $t$-closeness. \item Given two equivalence classes $E_{1}$ and $E_{2}$, let $\mathrm{P}_{1}$, $\mathrm{P}_{2},$ and $\mathrm{P}$ be the distributions of a sensitive attribute in $E_{1}$, $E_{2},$ and $E_{1} \cup E_{2},$ respectively. Then $$ \mathrm{D}[\mathbf{P}, \mathbf{Q}] \leq \frac{\left|E_{1}\right|}{\left|E_{1}\right|+\left|E_{2}\right|} \mathrm{D}\left[\mathbf{P}_{1}, \mathbf{Q}\right]+\frac{\left|E_{2}\right|}{\left|E_{1}\right|+\left|E_{2}\right|} \mathrm{D}\left[\mathbf{P}_{2}, \mathbf{Q}\right] $$ It follows that $D[\mathbf{P}, \mathbf{Q}] \leq \max \left(D\left[\mathbf{P}_{1}, \mathbf{Q}\right], D\left[\mathbf{P}_{2}, \mathbf{Q}\right]\right)$. This means that when merging two equivalence classes, the maximum distance of any equivalence class from the overall distribution can never increase. Thus T-closeness is achievable for any $t \geq 0$. \end{enumerate} The above facts entail that T-closeness with EMD satisfies the following two requirements: \begin{description} \item[Generalization Property].
Let $\mathcal{T}$ be a table, and let $A$ and $B$ be two generalizations on $\mathcal{T}$ such that $A$ is more general than $B$. If $\mathcal{T}$ satisfies T-closeness using $B$, then $\mathcal{T}$ also satisfies T-closeness using $A$. \item[Subset Property]. Let $\mathcal{T}$ be a table and let $C$ be a set of attributes in $\mathcal{T}$. If $\mathcal{T}$ satisfies T-closeness with respect to $C$, then $\mathcal{T}$ also satisfies T-closeness with respect to any set of attributes $D$ such that $D \subset C$. \end{description} \subsection{User Privacy Techniques} In this section, we present techniques to achieve the user privacy notion defined in Section \ref{Introduction}. We argue that, to ensure user privacy, the BDS should be enhanced by the cryptographic techniques that we describe in this section. \heading{\textbf{Group Signature}} \label{Group Signature} A group signature \cite{Chaum91} is a non-interactive construction for proving that the data sharer (here, the signer) of certain big data belongs to some group without revealing their identity, which provides anonymity for the data user. In some systems, the management functionalities (admitting members and revoking anonymity) are separated and given to a membership manager and a revocation manager, respectively. Notable variants include revocable \cite{0eedcc0a721243da97c7426df1798b61,Essam15}, traceable \cite{10.1007/978-3-642-05445-7_6}, distributed traceable \cite{10.1007/978-3-319-16295-9_18}, and fully dynamic \cite{cryptoeprint:2016:368} group signatures. Efficient constructions were proposed in \cite{eurocrypt-1991-2127,10.1007/3-540-69053-0_32}; however, all of them suffer from the drawback that the sizes of the public group key and of the signatures are proportional to the size of the group. A group signature is called \emph{dynamic} if the public group key remains unchanged when members join or leave the group or modify their key pairs. The first construction of a dynamic group signature scheme was proposed by Camenisch and Stadler \cite{10.1007/BFb0052252}.
Group signatures were formalized with concurrent join, together with an efficient construction, by Kiayias and Yung \cite{10.1007/11426639_12}. A group signature scheme $GS=(\mathsf{GKg},\mathsf{GSig},\mathsf{GVf}, \mathsf{Open})$ consists of four polynomial-time algorithms: \begin{description} \item[$(\mathsf{gpk}, \mathsf{gmsk}, \mathsf{gsk}) \gets \mathsf{GKg}(1^k,1^n)$.] The randomized group key generation algorithm $\mathsf{GKg}$ takes input $1^k$, $1^n$, where $k \in N$ is the security parameter and $n \in N$ is the group size (i.e., the number of members of the group), and returns a tuple $(\mathsf{gpk}, \mathsf{gmsk}, \mathsf{gsk})$, where $\mathsf{gpk}$ is the group public key, $\mathsf{gmsk}$ is the group manager's secret key, and $\mathsf{gsk}$ is an $n$-vector of keys with $\mathsf{gsk}[i]$ being a secret signing key for player $i \in [n]$. \item[$\sigma \gets \mathsf{GSig}(\mathsf{gsk}[i],m)$.] The randomized group signing algorithm $\mathsf{GSig}$ takes as input a secret signing key $\mathsf{gsk}[i]$ and a message $m$ to return a signature $\sigma$ of $m$ under $\mathsf{gsk}[i]$ ($i \in [n]$). \item[$(1,0) \gets \mathsf{GVf}(\mathsf{gpk},m,\sigma)$.] The deterministic group signature verification algorithm $\mathsf{GVf}$ takes as input the group public key $\mathsf{gpk}$, a message $m$, and a candidate signature $\sigma$ for $m$ to return either 1 or 0. \item[$(i,\perp) \gets \mathsf{Open}(\mathsf{gmsk},m,\sigma)$.] The deterministic opening algorithm $\mathsf{Open}$ takes as input the group manager's secret key $\mathsf{gmsk}$, a message $m$, and a signature $\sigma$ of $m$ to return an identity $i$ or the symbol $\perp$ to indicate failure. \end{description} \heading{\textbf{Ring Signature}} \label{ring signature}A ring signature scheme \cite{Rivest01} is like a group signature scheme, but without a group manager to set up the group or reveal a signer's identity. The formation of a group is spontaneous, in the sense that group members can be totally unaware of being included in that group. The scheme of Dodis et al.
\cite{10.1007/978-3-540-24676-3_36} was the first to achieve sublinear-size signatures in the Random Oracle Model (ROM) \cite{Bellare93}. Chow et al. and Bender et al. \cite{10.1145/1128817.1128861,10.1007/11681878_4} simultaneously proposed ring signatures in the standard model. Malavolta and Schr\"oder \cite{10.1007/11681878_4} built setup-free and constant-size ring signatures assuming the hardness of a variant of the knowledge-of-exponent assumption. \labelText{Ring signature}{RS} A ring signature scheme is a triple $(\mathsf{KeyGen}, \mathsf{Sig}, \mathsf{Ver})$. \begin{description} \item[$(x, y) \gets \mathsf{KeyGen}(1^k)$] is a probabilistic algorithm which takes security parameter $k$ and outputs private key $x$ and public key $y$. \item[$\sigma \gets \mathsf{Sig}(1^k, 1^n, x, L, m)$] is a probabilistic algorithm which takes security parameter $k$, group size $n$, private key $x$, a list $L$ of $n$ public keys which includes the one corresponding to $x$, and message $m$, and produces a signature $\sigma$. \item[$1/0 \gets \mathsf{Ver}(1^k, 1^n, L, m, \sigma)$] is a boolean algorithm which takes as inputs security parameter $k$, group size $n$, a list $L$ of $n$ public keys, message $m$ and signature $\sigma$, and returns 1 or 0 for accept or reject, respectively. We require that for any message $m$, any $(x, y) \gets \mathsf{KeyGen}(1^k)$ and any $L$ that includes $y$, $\mathsf{Ver}(1^k, 1^n, L, m,\mathsf{Sig}(1^k, 1^n, x, L, m)) = 1$. \end{description} \heading{\textbf{Attribute-based Signature ($\mathcal{ABS}$)}} In $\mathcal{ABS}$ \cite{Maji11}, the data sharer signs data under any predicate satisfied by their attributes, which are issued by an attribute authority. \labelText{ABS}{ABS} Let $\mathcal{U}$ be the universe of possible attributes. $\Gamma$ is a claim-predicate over $\mathcal{U}$, which is a boolean function. We say that an attribute set $\vec{x} \subseteq \mathcal{U}$ satisfies $\Gamma$ if $\Gamma(\vec{x})=1$.
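In code, a claim-predicate is simply a boolean function of the signer's attribute set; the attribute names in the following sketch are hypothetical, chosen only to illustrate the satisfaction condition $\Gamma(\vec{x})=1$:

```python
# Gamma = doctor AND (cardiology OR senior); attribute names are hypothetical.
def gamma(attrs):
    return "doctor" in attrs and ("cardiology" in attrs or "senior" in attrs)

def satisfies(attrs, predicate):
    """An attribute set x satisfies Gamma iff Gamma(x) = 1 (i.e. True)."""
    return bool(predicate(frozenset(attrs)))
```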
An $\mathcal{ABS}$ scheme consists of four algorithms, $\mathsf{Setup}$, $\mathsf{KeyGen}$, $\mathsf{Sign}$, and $\mathsf{Verif}$, and is parameterized by a universe of possible attributes $\mathcal{U}$ and a message space $\mathcal{M}$. \begin{description} \item[$\mathsf{msk}^* \gets \mathsf{Setup}(1^\lambda)$] The attribute-issuing authority $\mathcal{A}$ runs this algorithm. It takes as input the security parameter $1^\lambda$ and outputs a master secret key $\mathsf{msk}^* := (\mathsf{msk}, \mathsf{msk}')$, together with the corresponding master public key $\mathsf{mpk}$. We call $\mathsf{msk}$ the master secret signing component and $\mathsf{msk}'$ the master secret verification component. \item[$\mathsf{sk}_{\vec{x}} \gets \mathsf{KeyGen}(\mathsf{msk},\vec{x} \subseteq \mathcal{U})$] $\mathcal{A}$ runs this randomized algorithm. The $\mathsf{KeyGen}$ algorithm takes as input the master secret signing component $\mathsf{msk}$ together with a set of attributes $\vec{x}$. It outputs a secret key $\mathsf{sk}_{\vec{x}}$ corresponding to $\vec{x}$. \item[$\sigma \gets \mathsf{Sign}(\mathsf{mpk}, \mathsf{sk}_{\vec{x}},m \in \mathcal{M}, \Gamma)$] The signer $\mathcal{S}$ runs this algorithm. It takes as input a message $m$, the secret key $\mathsf{sk}_{\vec{x}}$, and a claim-predicate $\Gamma$ such that $\Gamma(\vec{x})=1$, and outputs a signature $\sigma$. \item[$(0,1) \gets \mathsf{Verif}(\mathsf{mpk}, m, \Gamma, \sigma)$] Outputs either accept ($\mathsf{Ac}$) or reject ($\mathsf{Rej}$).
\end{description} \begin{table*}[tbp] \centering \caption{Comparison of existing security and privacy techniques} \label{Taxonomy of existing security and privacy techniques} \scalebox{0.60}{\begin{tabular}{p{.23\linewidth} |p{.15\linewidth} p{.43\linewidth} p{.44\linewidth}} \toprule \textbf{S\&P Requirements} & \textbf{S\&P techniques} & \textbf{Advantages}& \textbf{Drawbacks}\\ \toprule Confidentiality of Data & $\mathcal{IBE}$ & \makecell[l]{Complete access over all resources} & \makecell[l]{PKG knows the users' secret keys\\Data must be downloaded and decrypted} \\ \\ &$\mathcal{ABE}$ & \makecell[l]{More complex access control\\ on decryption operation than $\mathcal{IBE}$} & \makecell[l]{High computation cost \\Data must be downloaded and decrypted} \\\\ &$\mathcal{PRE}$ & \makecell[l]{Delegating decryption rights \\Can be deployed in an $\mathcal{IBE}$ or $\mathcal{ABE}$ scheme} & \makecell[l]{Average computational overhead \\ Data must be downloaded and decrypted} \\\\ &$\mathcal{HE}$ & \makecell[l]{Secure data outsourcing \\ Sensitive data operations} & \makecell[l]{$\mathcal{FHE}$ is inefficient,\\ but $\mathcal{SHE}$ and $\mathcal{PHE}$ are usable} \\ \\ &$\mathcal{ABHE}$ & \makecell[l]{Operation on encrypted data\\Confidentiality\\Access control} & \makecell[l]{Computational overhead is very high} \\ \midrule Integrity & $\mathcal{DS}$ & \makecell[l]{Provides non-repudiation} & \makecell[l]{Does not prevent replay attacks \\Slower than $\mathcal{MAC}$} \\\\ & $\mathcal{MAC}$ & \makecell[l]{Efficient \\ Suitable for lightweight devices} & \makecell[l]{Does not provide non-repudiation \\ Not publicly verifiable\\ Requires establishment of a shared secret} \\\\ & $\mathcal{PDP}$ &\makecell[l]{Easy and reliable test of data integrity} & High computation and communication cost \\\\ & $\mathcal{PS}$ &\makecell[l]{Delegating signing rights \\ Provides non-repudiation} & Unlimited signing rights to the proxy signer \\ \\ & $\mathcal{AE}$ [\ref{AE}]& \makecell[l]{Many efficient
$\mathcal{AE}$ \\ modes have been developed} & \makecell[l]{Does not provide non-repudiation} \\\midrule Data Privacy during Computation & $\mathcal{PSI}$ & \makecell[l]{Joint analysis on sensitive encrypted data} & High computation and communication cost \\ & $\mathcal{HE}$ & \makecell[l]{Operation on encrypted data \\ It outputs encrypted data} & High computation and communication cost \\ \\ &$\mathcal{FE}$ & \makecell[l]{Operates on encrypted data} & \makecell[l]{Only efficient for linear and quadratic functions} \\ & $\mathcal{MPC}$ &\makecell[l]{ Joint analysis on sensitive raw data \\ Does not require a centralized party }& \makecell[l]{High computation and \\communication cost} \\ \midrule Secure Data Outsourcing & $\mathcal{ORAM}$& \makecell[l]{Strong level of data privacy } & \makecell[l]{Tends to leak more information than using FHE} \\ & $\mathcal{VC}$ & \makecell[l]{Provides integrity \\Publicly verifiable} & \makecell[l]{Current solutions are not fully practical} \\\midrule Data Privacy during Searching & $\mathcal{SSE}$ & \makecell[l]{More efficient than using $\mathcal{PEKS}$ } & \makecell[l]{Not suitable for multi-user data-sharing scenarios} \\ & $\mathcal{PEKS}$& \makecell[l]{Can be deployed in $\mathcal{IBE}$ and $\mathcal{ABE}$ } & \makecell[l]{Trade-off between query expressiveness and efficiency} \\\midrule User Privacy & Group Signature& \makecell[l]{Provides traceability} & Requires a group manager \\ & $\mathcal{ABS}$ & \makecell[l]{Access control is based on the user's attributes} & \makecell[l]{Requires a credential authority to issue\\ attribute certificates} \\\\ & Ring Signature & \makecell[l]{More flexibility: no group manager, \\ and the group is chosen dynamically} & \makecell[l]{No anonymity-revocation property \\ Hard to manage/coordinate between\\several signers} \\\\ & \makecell[l]{ACS} & Provides anonymity, authentication and accountability& \makecell[l]{Less efficient than $\mathcal{ABS}$ \\ Does not support complex
predicates} \\ \midrule Data Privacy & \makecell[l]{K-anonymity} & \makecell[l]{Easy to implement \\ Re-identification risk is lower when\\ the value of k is high} & \makecell[l]{Background-knowledge attacks\\ Homogeneity attacks\\ Long processing time} \\\\ & L-diversity & \makecell[l]{Reduces the dataset to summary form \\ Each sensitive value appears with\\bounded frequency} & \makecell[l]{Similarity attacks\\ Skewness attacks} \\ \\ & T-closeness & Prevents skewness attacks & \makecell[l]{Complex computational procedure \\ Utility is damaged when t is very small} \\\\ & $\mathcal{DP}$ & \makecell[l]{Most suitable for big data \\ Provides the strongest privacy\\guarantee} & \makecell[l]{Complex computational procedure \\ Utility is damaged when the privacy\\budget is very small \\ Noise and loss of information} \\ \bottomrule \end{tabular}} \end{table*} \heading{\textbf{Anonymous Credential Systems ($\mathcal{ACS}$)}} \label{ACS} In $\mathcal{ACS}$ \cite{Camenisch04}, organizations know the users only by pseudonyms. Different pseudonyms of the same user cannot be linked. Yet, an organization issues a credential to a user whom it knows by a pseudonym. The corresponding user (under a certain pseudonym) can prove to any other organization that he is the owner of this credential without revealing anything more than the fact that he owns such a credential. A basic anonymous credential system consists of three entities: $\mathsf{Users}$, an $\mathsf{Authority}$, and $\mathsf{Verifiers}$. An anonymous credential system has three procedures, $\mathsf{KeyGen}$, the \emph{Credential Issuing Protocol}, and the \emph{Credential Proving Protocol}, as follows: \begin{description} \item $\mathsf{KeyGen}$ is run by the Authority; given security parameter $1^k$, it outputs a pair of public and secret keys $(\textsc{pk}, \textsc{sk})$. \item The Credential Issuing Protocol is run by user $\mathcal{U}$ on a statement $m$ for which $\mathcal{U}$ wants to obtain a certificate.
Examples of $m$ are properties such as ``belongs to some University'' or ``is over the age of 20'', or rights such as ``can access the secure room''. How the Authority detects whether $m$ is valid or not with regard to $\mathcal{U}$ is outside this protocol. $\mathcal{U}$ executes the credential issuing protocol for $m$ with the Authority, using $\mathcal{U}$'s input $m$ and the Authority's secret key $\textsc{sk}$. At the end of the protocol, $\mathcal{U}$ obtains a credential $\mathsf{Cred}$ corresponding to $m$. \item The Credential Proving Protocol is executed after $\mathcal{U}$ obtains the credential for $m$: $\mathcal{U}$ runs it with a verifier $\mathcal{V}$ to prove $\mathcal{U}$'s possession of $\mathsf{Cred}$. At the end of the protocol, $\mathcal{V}$ outputs accept if $\mathcal{U}$ really has a valid $\mathsf{Cred}$, and otherwise outputs reject. \end{description} \section{Comparison of existing security \& privacy techniques} \label{Taxonomy security and privacy techniques} This section presents a comparison of existing security and privacy techniques for BDS, classified on the basis of the security and privacy requirements they fulfill, together with their advantages and drawbacks, as shown in Table \ref{Taxonomy of existing security and privacy techniques}. To achieve security and privacy in a complex BDS system that needs to meet multiple security and privacy requirements with desired features, we would like to mention the following three notes: \circled{1} No single technique is a universal remedy for the security and privacy of BDS. Therefore, the appropriate security and privacy techniques (or a combination of them) should be chosen with respect to the security and privacy requirements and the context of the BDS application. \circled{2} There is no technique without side effects or that is perfect in all aspects. When we add a new technique to such a complex system, it usually raises new types of attacks or problems. \circled{3} There is always a trade-off to be made between security, privacy, and efficiency.
\section{Challenging issues \& Future Directions } \label{Future Directions} Although security and privacy techniques have been studied for many years, their implementation and practical adoption still face unaddressed challenges. In this section, we discuss some challenging issues and research directions for security and privacy in BDS: \textbf{Private Key Management.} Due to the increase in the volume and types of data processed in cloud environments, techniques that allow easy access to Big Data stored in heterogeneous devices across different network environments raise emerging security issues, in particular for key management. Jeong and Shin \cite{10.1007/s11277-015-2990-1} have explained different approaches for key management in the big data context, such as MPC, the server-aided approach, and encryption with signature. In key management with a Threshold Signature Scheme, data sharers do not need to keep any key on their own; instead, secrets are shared among multiple servers. Keys can then be reconstructed from a minimum defined number of shares using Shamir's Secret Sharing (SSS). \textbf{Balancing Data Sharing Value and Privacy.} To protect individual privacy inside the data, privacy-preserving techniques such as anonymizing data, using either masking or de-identification techniques, are employed. However, this is a double-edged sword: on the one hand it protects sensitive information inside the data, such as personal health information (PHI), from disclosure; on the other hand the data loses quality and is no longer accurate enough for analysis. Therefore, striking a balance between the privacy-protection solutions (anonymization, sharing agreements, and security controls) and data accuracy is essential to obtain data that is usable for analytics.
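The SSS-based threshold reconstruction mentioned in the key-management paragraph above can be sketched as follows: the key is split into $n$ shares as points on a random degree-$(t-1)$ polynomial over a prime field, and any $t$ shares recover it by Lagrange interpolation at $0$. The prime and parameters below are illustrative only, not a production-grade implementation:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; illustrative field size

def split_secret(secret, t, n, prime=PRIME):
    """Split `secret` into n shares so that any t shares reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        # pow(den, -1, prime) is the modular inverse (Python 3.8+)
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret
```

Any three of five shares produced by `split_secret(key, 3, 5)` suffice to recover `key`, while fewer than three reveal nothing about it information-theoretically.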
\textbf{Improving Efficiency of Existing Solutions.} Recent cryptographic primitives such as private set intersection and homomorphic encryption are powerful tools that have been deployed to solve many security and privacy issues. While these schemes have relatively good communication cost, the running time can be prohibitive when the set sizes become large, due to the need to perform modular exponentiations for every item in both sets several times. Understanding and balancing these trade-offs, theoretically and empirically, is a considerable challenge for performing secure BDS. \textbf{Eliminating Single Point of Failure: From Centralization to Decentralization, Blockchain as a Solution.} Blockchain technology is a form of \emph{Distributed Ledger Technology} ($\mathcal{DLT}$) that provides decentralized platforms which eliminate the need for a single trusted third party (a central authority) and thus get rid of the well-known \emph{Single Point of Failure} ($\mathcal{SPOF}$) issue (meaning that if the central node goes down, the entire network becomes nonfunctional). This issue provides a breeding ground for cybercriminals, as they can target massive centralized data storage servers via DDoS and DoS attacks, as illustrated in Fig. \ref{Centralized vs. decentralized networks}. \begin{figure}[!htbp] \centering \includegraphics[width=.6\linewidth]{centra-de-eps-converted-to.pdf} \caption{Centralized vs. decentralized networks} \label{Centralized vs. decentralized networks} \end{figure} Blockchain is enabled by a combination of technologies, such as peer-to-peer networks, consensus mechanisms, cryptography, smart contracts, and market mechanisms, among others, cryptographically linking the records (blocks) of the ledger. The main properties of blockchain are as follows: \begin{itemize} \item Transparency: The chain is exportable anywhere and can be downloaded and viewed over the internet. \item Immutability: Once data is in the chain it cannot be tampered with or altered.
\item Decentralisation: No single entity controls what goes into the chain. \end{itemize} \textbf{How can Blockchain improve Data Sharing Security?} The combination of blockchain technology and BDS offers numerous interesting opportunities to improve its security and privacy (as summarized in Figure~\ref{BC over DB}): \begin{figure}[!htbp] \centering \includegraphics[width=.6\linewidth]{blockchain-over-db-eps-converted-to.pdf} \caption{Blockchain advantages over traditional database} \label{BC over DB} \end{figure} \circled{1} It provides fault tolerance: the distributed nature of blockchain removes the single point of failure, since all data is distributed between the nodes of the network. If something is added, edited or deleted on any computer, it will be reflected on all the computers in the network. Contrary to centralized BDS, where a database failure suspends the whole system, a blockchain continues to operate, even with less processing power, when one workstation goes down. \circled{2} It provides database recovery: automatic replication of data helps recover it if the database at any site is damaged. \circled{3} There is no interference between users when accessing, sharing and manipulating shared data. \circled{4} Contrary to traditional BDS systems, with blockchain anyone can track data from the source to the end. There are no weak points where data can be altered or tampered with. Participants of the network have access to the holdings and transactions. An explorer equipped with a user's public address is enough to trace their transactions and actions. \circled{5} The majority of participants in a blockchain system need to approve the transactions and agree upon which data is added, modified or deleted. Blockchain ensures trust in data by maintaining a decentralized ledger.
Data recorded on the blockchain is trustworthy because it must have gone through a verification process which ensures its quality. Data integrity is ensured when details of the origin and interactions concerning a data block are stored on the blockchain and automatically verified (or validated) before they can be acted upon. It also provides transparency, since activities and transactions that take place on the blockchain network can be traced. Despite recent research progress, several challenges remain for blockchain technology, including the following: \begin{enumerate} \item \heading{DDoS attacks} A denial-of-service (DoS) attack on a host is a type of cyber-attack that disrupts hosted Internet services by making the host machine or a network resource on the host unavailable to its intended users. DoS attacks attempt to overload the host system or the host network resource by flooding it with superfluous requests, consequently stalling the fulfillment of legitimate services. \item \heading{Linkability} Beyond user privacy, users may require that the transactions related to them cannot be linked. Because user behaviors in blockchain are traceable, blockchain systems need to protect the transaction privacy of users. \item \heading{The majority 51\% consensus attack} If an attacker were to take control of 51\% or more of the nodes comprising the blockchain network, the consensus mechanism could be overridden, allowing for double spending. \item \heading{Private key theft} Transactions in any blockchain system are authenticated by digital signatures. For example, in the cryptocurrency context, if Alice wants to send Bob some money, she signs a transaction with her private key saying ``Pay this coin $C$ to Bob''. However, once Alice's private key is lost or stolen, it cannot be recovered. Consequently, the user's blockchain account can be tampered with by others.
\item \heading{Irreversibility} Blockchain cannot go back, as data is immutable. This cannot always be regarded as a positive aspect, and can quickly turn out to be a major problem in the event of accidents, faulty transactions or fraudulent exchange of goods. Also, this feature does not meet the ``Right to be Forgotten'', an important concept in data privacy which gives individuals the right to request that their personal data be removed from a record. \item \heading{Selfish-mining attack} A popular concern is fairness in block mining in Proof-of-Work (PoW) blockchains, which intuitively requires that a node's mining rewards be proportional to its relative computational power. That is, no node should be able to mine selfishly to obtain more rewards than its fair share. \end{enumerate} \textbf{How to secure against unauthorized data re-sharing attacks?} Preventing unauthorized data re-sharing in BDS is well known to be impossible to achieve in full generality. Despite this limitation, many approaches provide solutions of practical interest by somewhat weakening that requirement. Such approaches include watermarking and copyrighting, which are used to identify the sharers in re-sharing activity \cite{1381778}. Unfortunately, such approaches cannot prevent unauthorized data re-sharing, but only track it. The benefit of using signcryption to identify the sharers in re-sharing activity is threefold: \circled{1} It provides non-repudiation, which provides the victim with transferable evidence against the cheating sharee. \circled{2} In our construction, verifiers (e.g., a smart contract) are given the means to determine when the unauthorized user $C$ tries to re-share $A$'s data. When this happens, verifiers will be able to contact the data sharer, who will provide the proof of his data ownership (this is achieved with the non-repudiation property).
\circled{3} It provides authenticated data between different recipients: in order to change the ciphertext recipient, the data owner needs to download and decrypt the requested data, and then re-signcrypt it under the target user's public key. This solution is very demanding in terms of computation and communication costs for the data owner, which contradicts the motivation of cloud computing. For this reason, to handle this problem in the data sharing context, we propose the solution of proxy re-signcryption between different recipients, without compromising the secret key of the data owner. Furthermore, the server should ensure that the posted data is not a plaintext but a properly encrypted ciphertext, in such a way that the server only stores ``encrypted content''. To handle this problem, we introduce a new security notion that we call \emph{verifiable plaintext-awareness}, which we add to the proxy re-signcryption scheme (as described in Fig. \ref{fig:verifiable plaintext-aware proxy re- signcryption}), and which consists of the following steps: \begin{figure}[!htbp] \includegraphics[width=.9\linewidth]{plaintext-proxy-eps-converted-to.pdf} \caption{System model for verifiable plaintext-aware proxy re-signcryption} \label{fig:verifiable plaintext-aware proxy re- signcryption} \end{figure} \begin{enumerate} \item The data sharer uploads the signcrypted data onto the server. \item The data sharer convinces the server that the signcrypted data is not a plaintext but a well-formed signcrypted file. \item If so, the server stores the signcrypted file $C_a$. \item The data sharee requests decryption delegation with his public key $pk_b$. \item The data sharer creates and sends the transformation key $rk_{a \rightarrow b}$ to the server. \item The server re-signcrypts $C_a$ with $rk_{a \rightarrow b}$ and outputs $C_b$. \item The data sharee downloads and decrypts $C_b$ with $sk_b$ to recover the original data. Now let us consider three entities: Data Sharer $A$, Data Sharee $B$ and Unauthorized User $C$.
We introduce a novel form of unauthorized data re-sharing prevention, which we call \emph{legal unauthorized re-sharing preserving}. \end{enumerate} \section{Conclusion} \label{conclusion} We have presented a survey on the security and privacy of BDS with a number of contributions. First, we characterized the security and privacy requirements of BDS into two broad categories: fundamental requirements and additional requirements in the context of BDS. Second, we described the security and privacy techniques for achieving these requirements in BDS. With the growing interest in BDS in both academic research and industry, the security and privacy of BDS have attracted huge interest, as it is impossible to design next-generation applications without publishing and executing data-driven algorithms. We conjecture that developing lightweight cryptographic algorithms, as well as other practical security and privacy methods, will be a key enabling technology in the future development of BDS and its applications. \bibliographystyle{acm}
\section{Introduction} In the early 1980's, Herman \cite{herman:1983} and Grebogi et al.\ \cite{grebogi/ott/pelikan/yorke:1984} independently discovered the existence of strange non-chaotic attractors (SNA's) in quasiperiodically forced (qpf) systems. These objects combine a complicated geometry\footnote{This means in particular that they are not a piecewise differentiable (or even continuous) sub-manifold of the phase space.} with non-chaotic dynamics, a combination which is rather unusual and has only been observed in a few very particular cases before (the most prominent example is the Feigenbaum map, see \cite{milnor:1985} for a discussion and further references). In quasiperiodically forced systems, however, they seem to occur quite frequently and even over whole intervals in parameter space \cite{grebogi/ott/pelikan/yorke:1984,keller:1996,osinga/wiersig/glendinning/feudel:2001}. As a novel phenomenon, this evoked considerable interest in theoretical physics, and in the sequel a large number of numerical studies explored the surprisingly rich dynamics of these relatively simple maps. In particular, the widespread existence of SNA's was confirmed both numerically (see \cite{bondeson/ott/antonsen:1985}--\nocite{romeiras/ott:1987}\nocite{romeiras/etal:1987}\nocite{ding/grebogi/ott:1989}\nocite{heagy/hammel:1994}\nocite{pikovski/feudel:1995}\nocite{feudel/kurths/pikovsky:1995}\nocite{witt/feudel/pikovsky:1997}\nocite{chastell/glendinning/stark:1995}\nocite{glendinning:1998}\nocite{sturman:1999}\nocite{glendinning/feudel/pikovsky/stark:2000}\nocite{negi/prasad/ramaswamy:2000}\cite{prasad/negi/ramaswamy:2001}, just to give a selection) and even experimentally \cite{Ditto/etal:1990,heagy/ditto:1991,zhou/moss/bulsara:1992}. Further, it turned out that SNA's play an important role in the bifurcations of invariant circles \cite{osinga/wiersig/glendinning/feudel:2001,chastell/glendinning/stark:1995,negi/prasad/ramaswamy:2000,haro/delallave:2006}.
The studied systems were either discrete time maps, such as the qpf logistic map \cite{heagy/hammel:1994,witt/feudel/pikovsky:1997,negi/prasad/ramaswamy:2000} and the qpf Arnold circle map \cite{osinga/wiersig/glendinning/feudel:2001,ding/grebogi/ott:1989,feudel/kurths/pikovsky:1995,chastell/glendinning/stark:1995}, or skew product flows which are forced at two or more incommensurate frequencies. Especially the latter underline the significance of qpf systems for understanding real-world phenomena, as most of them were derived from models for different physical systems (e.g.\ quasiperiodically driven damped pendula and Josephson junctions \cite{bondeson/ott/antonsen:1985,romeiras/ott:1987,romeiras/etal:1987} or Duffing oscillators \cite{heagy/ditto:1991}). Their Poincar\'e maps again give rise to discrete-time qpf systems, on which the present article will focus. \medskip However, despite all efforts there are still only very few mathematically rigorous results about the subject, with the only exception of qpf Schr\"odinger cocycles (see below). There are results concerning the regularity of invariant curves (\cite{stark:1999}, see also \cite{sturman/stark:2000}), and there has been some progress in carrying over basic results from one-dimensional dynamics \cite{jaeger/keller:2005,fabbri/jaeger/johnson/keller:2004,jaeger/stark:2005}. But so far, the two original examples in \cite{herman:1983} and \cite{grebogi/ott/pelikan/yorke:1984} remain the only ones for which the existence of SNA's has been proved rigorously. In both cases, the arguments used were highly specific for the respective class of maps and did not allow for much further generalisation, nor did they give very much insight into the geometrical and structural properties of the attractors. The systems Herman studied in \cite{herman:1983} were matrix cocycles, with quasiperiodic Schr\"odinger cocycles as a special case.
The linear structure of these systems and their intimate relation to Schr\"odinger operators with quasiperiodic potential made it possible to use a fruitful blend of techniques from operator theory, dynamical systems and complex analysis, such that by now the mathematical theory is well-developed and deep results have been obtained (see \cite{avila/krikorian:2004} and \cite{avila/jitomirskaya:2005} for recent advances and further reference). However, as soon as the particular class of matrix cocycles is left, it seems hard to recover most of these arguments. One of the rare exceptions is the work of Bjerkl\"ov in \cite{bjerkloev:2005a} (taken from \cite{bjerkloev:2003}) and \cite{bjerkloev:2005}, which is based on a purely dynamical approach and should generalise to other types of systems, such as the ones considered here. (In fact, although implemented in a different way the underlying idea in \cite{bjerkloev:2005} is very similar to the one presented here, such that despite their independence the two articles are closely related.) On the other hand, for the so-called `\textit{pinched skew products}' introduced in \cite{grebogi/ott/pelikan/yorke:1984}, establishing the existence of SNA is surprisingly simple and straightforward (see \cite{keller:1996} for a rigorous treatment and also \cite{glendinning:2002} and \cite{jaeger:2004a}). But one has to say that these maps were introduced especially for this purpose and are rather artificial in some aspects. For example, it is crucial for the argument that there exists at least one fibre which is mapped to a single point. But this means that the maps are not invertible and can therefore not be the Poincar\'e maps of any flow. \medskip The main goal of this article is to prove the existence of SNA in certain parameter families of qpf systems where this has not been possible previously. 
Thereby, we will concentrate on a particular type of SNA, namely `strip-like' ones, which occur in saddle-node and pitchfork bifurcations of invariant circles (see Figure~\ref{fig:twotypes}, for a more precise formulation consider the definition of invariant strips in \cite{jaeger/keller:2005} and \cite{jaeger/stark:2005}). \begin{figure}[h!] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-twotypes.eps, clip=, width=\linewidth, height=0.23\textheight} \caption{\small Two different types of strange non-chaotic attractors: The left picture shows a `strip-like' SNA in the system $\ensuremath{(\theta,x)} \mapsto (\ensuremath{\theta + \omega},\tanh(5x) + 1.2015\cdot \sin(2\pi\theta))$. The topological closure of this object is bounded above and below by semi-continuous invariant graphs (compare (\ref{eq:boundinggraphs})). This is the type of SNA's that will be studied in the present work. The right picture shows a different type that occurs for example in the critical Harper map (Equation~(\ref{eq:projharper}) with $\lambda=2$ and $E=0$; more details can be found in~\cite{datta/jaeger/keller/ramaswamy:2004}), where no such boundaries exist. In both cases $\omega$ is the golden mean.} \label{fig:twotypes} \end{minipage} \end{figure} In such a saddle-node bifurcation, a stable and an unstable invariant circle approach each other, until they finally collide and then vanish. However, there are two different possibilities. In the first case, which is similar to the one-dimensional one, the two circles merge together uniformly to form one single and neutral invariant circle at the bifurcation point. But it may also happen that the two circles approach each other only on a dense, but (Lebesgue) measure zero set of points. In this case, instead of a single invariant circle, a strange non-chaotic attractor-repeller-pair is created at the bifurcation point. Attractor and repeller are interwoven in such a way that they have the same topological closure.
This particular route for the creation of SNA's has been observed quite frequently (\cite{feudel/kurths/pikovsky:1995,chastell/glendinning/stark:1995,glendinning:1998,prasad/negi/ramaswamy:2001}, see also \cite{heagy/hammel:1994}) and was named \textit{`non-smooth saddle-node bifurcation'} or \textit{`creation of SNA via torus collision'}. The only rigorous description of this process so far was given by Herman in \cite{herman:1983}. In a similar way, the simultaneous collision of two stable and one unstable invariant circle may lead to the creation of two SNA's embracing one strange non-chaotic repeller \cite{osinga/wiersig/glendinning/feudel:2001,sturman:1999}. \medskip \textbf{Acknowledgments.} The results presented here were part of my thesis, and I would like to thank Gerhard Keller for his invaluable advice and support during all the years of my PhD-studies. I am also grateful to an anonymous referee, whose thoughtful remarks greatly improved the manuscript. This work was supported by the German Research Foundation (DFG), grant Ke 514/6-1. \subsection{Overview} \label{Overview} As mentioned above, the main objective of this article is to provide new examples of SNA by describing a general mechanism which is responsible for the creation of SNA in non-smooth saddle-node bifurcations. While this mechanism might not be the only one which exists, it seems to be common in a variety of different models, including well-known examples like the Harper map or the qpf Arnold circle map. The evidence we present will be two-fold: In the remainder of this introduction we will explain the basic idea, and discuss on a heuristic level and by means of numerical simulations how it is implemented in the two examples just mentioned and a third parameter family, which we call the $\arctan$-family.
An analogous phenomenon is also observed in so-called Pinched skew products, first introduced in \cite{grebogi/ott/pelikan/yorke:1984}, even if no bifurcation takes place in these systems. The heuristic arguments given in the introduction are then backed up by Theorem~\ref{thm:snaexistence}, which provides a rigorous criterion for the non-smoothness of saddle-node bifurcations in qpf interval maps. This leads to new examples of strange non-chaotic attractors, and the result is flexible enough to apply to different parameter families at the same time, provided they have similar qualitative features and share a certain scaling behaviour. Nevertheless, it must be said that there is still an apparent gap between what can be expected from the numerical observations and what can be derived from Theorem~\ref{thm:snaexistence}~. For instance, the latter does not apply to the forced version of the Arnold circle map, and for the application to the $\arctan$-family and the Harper map we have to make some quite specific assumptions on the forcing function and the potential, respectively. (Namely that these have a unique maximum and decay linearly in a neighbourhood of it). However, our main concern here is just to show that the general approach we present does lead to rigorous results at all, even if these are still far from being optimal. The present work should therefore be seen rather as a first step in this direction, which will hopefully inspire further research, and not as an ultimate solution. \medskip The article is organised as follows: After we have given some basic definitions, we will introduce our main examples in Section~\ref{Examples}~. As mentioned, these are the arctan-family with additive forcing, the Harper map, the qpf Arnold circle map and Pinched skew products. The simulations we present mostly show the evolution of stable invariant curves as the system parameters are varied.
The crucial observation is the fact that the behaviour of these curves prior to the bifurcation follows a very characteristic pattern, which we call \emph{`exponential evolution of peaks'}. In all the first three examples the qualitative features of this process are similar, and even in Pinched skew products, where no saddle-node bifurcation occurs, an analogous behaviour can be observed. Finally, a slight modification of the arctan-family is used to illustrate that the phenomenon is also present in non-smooth pitchfork bifurcations. On a heuristic level it is not difficult to give an explanation for this behaviour, and this will be done in Section~\ref{Mechanism}~. The simple geometric intuition obtained there will then determine the strategy for the rigorous proof of the non-smoothness of the bifurcations in the later sections. More precisely, the heuristics indicate why the existence of SNA should be linked to the appearance of sink-source-orbits in these situations, and this will be one of the main ingredients of the proof. Section~\ref{Main results} then contains the statement of our main results and discusses their application to the examples from the introduction (or why such an application is not possible, in the case of the qpf Arnold circle map). Before we can turn to the existence of SNA and the non-smoothness of bifurcations, we need to state two preliminary results. The first, Theorem~\ref{thm:saddlenode}, provides a general framework in which saddle-node bifurcations in qpf interval maps take place (smooth or non-smooth). The second, Theorem~\ref{thm:sinksourcesna}, states that the existence of sink-source-orbits\footnote{Orbits with positive Lyapunov exponent both forwards and backwards in time, see Definition~\ref{def:sinksource}~.} implies the existence of SNA's (although the converse is not true).
After these statements and some related concepts have been introduced in Sections \ref{GeneralSettingI} and \ref{SinkSourceSNA}, we can turn to the main result, namely Theorem~\ref{thm:snaexistence}, which provides a criterion for the existence of SNA's created in non-smooth saddle-node bifurcations. The counterpart for non-smooth pitchfork bifurcations is Theorem~\ref{thm:symmetricsna}, which gives a criterion for the existence of symmetric SNA's. More precisely, under the assertions of this theorem there exists a triple consisting of two SNA's, symmetric to each other, which embrace a self-symmetric strange non-chaotic repeller. These objects are presumably created by the simultaneous collision of two stable and one unstable invariant curve. However, as the considered parameter families lack a certain monotonicity property which is present in the situation of Theorem~\ref{thm:snaexistence}, we cannot describe the bifurcation pattern in a rigorous way as for the saddle-node bifurcations, such that the existence of SNA is the only conclusion we draw in the symmetric setting. The application of these results to the arctan-family and the Harper map is then discussed in detail in Section~\ref{Applications}, which mirrors the structure of Section~\ref{Examples}, where these examples are introduced. As we have mentioned before, the statement of Theorem~\ref{thm:snaexistence} is too restricted to apply to the qpf Arnold circle map. However, in Section~\ref{App:Remarks} we discuss some possible modifications, which might make it possible to treat this example in a similar way, at least for particular forcing functions. Section~\ref{Generalsetting} provides the proofs for the more elementary results (namely Theorems \ref{thm:saddlenode} and \ref{thm:sinksourcesna}). All the remaining sections are then dedicated to the proof of Theorems~\ref{thm:snaexistence} and \ref{thm:symmetricsna}, starting with an outline of the construction in Section~\ref{Strategy}.
\subsection{Basic definitions and notations} \label{BasicDefinitions} A \textit{quasiperiodically forced (qpf) system} is a continuous map of the form \begin{equation} \label{eq:qpfs} T : \ensuremath{\mathbb{T}^{1}} \times X \ensuremath{\rightarrow} \ensuremath{\mathbb{T}^{1}} \times X \ \ \ , \ \ \ \ensuremath{(\theta,x)} \mapsto (\ensuremath{\theta + \omega},\ensuremath{T_{\theta}(x)}) \end{equation} with irrational driving frequency $\omega$. Most of the time, we will restrict to the case where the driven space $X=[a,b]$ is a compact interval and the \textit{fibre maps} \ensuremath{T_{\theta}}\ are all monotonically increasing on $X$. In this case we say $T$ is a {\em qpf monotone interval map}. Some of the introductory examples will also be qpf circle homeomorphisms, but there the situation can often be reduced to the case of interval maps as well, for example when there exists a closed annulus which is mapped into itself. Two notations which will be used frequently are the following: Given any set $A \ensuremath{\subseteq} \ensuremath{\mathbb{T}^{1}} \times X$ and $\theta \in \ensuremath{\mathbb{T}^{1}}$, we let $A_\theta := \{x\in X\mid \ensuremath{(\theta,x)} \in A\}$. If $X=\ensuremath{\mathbb{R}}$ and $\varphi,\psi:\ensuremath{\mathbb{T}^{1}} \ensuremath{\rightarrow} \ensuremath{\mathbb{R}}$ are two measurable functions, then we use the notation \begin{equation} \label{eq:graphintervals} [\psi,\varphi] \ := \ \{ \ensuremath{(\theta,x)} \mid \psi(\theta) \leq x \leq \varphi(\theta) \} \end{equation} similarly for $(\psi,\varphi)$, $(\psi,\varphi]$, $[\psi,\varphi)$. Due to the minimality of the irrational rotation on the base there are no fixed or periodic points for $T$, and one finds that the simplest invariant objects are invariant curves over the driving space (also invariant circles or invariant tori).
More generally, a ($T$-)\textit{invariant graph} is a measurable function $\varphi : \ensuremath{\mathbb{T}^{1}} \ensuremath{\rightarrow} X$ which satisfies \begin{equation} \label{eq:invgraph} \ensuremath{T_{\theta}}(\varphi(\theta)) \ = \ \varphi(\ensuremath{\theta + \omega}) \quad \quad \forall \theta \in \ensuremath{\mathbb{T}^{1}} \ . \end{equation} This equation implies that the point set $\Phi:= \{(\theta,\varphi(\theta)) \mid \theta \in \ensuremath{\mathbb{T}^{1}}\}$ is forward invariant under $T$. As long as no ambiguities can arise, we will refer to $\Phi$ as an invariant graph as well. There is a simple way of obtaining invariant graphs from compact invariant sets: Suppose $A \ensuremath{\subseteq} \ensuremath{\mathbb{T}^{1}} \times X$ is $T$-invariant. Then \begin{equation} \label{eq:boundinggraphs} \ensuremath{\varphi^+}_A(\theta) \ := \ \sup\{x\in X \mid \ensuremath{(\theta,x)} \in A \} \end{equation} defines an invariant graph (invariance following from the monotonicity of the fibre maps). Furthermore, the compactness of $A$ implies that $\ensuremath{\varphi^+}_A$ is upper semi-continuous (see \cite{stark:2003}). In a similar way we can define a lower semi-continuous graph $\ensuremath{\varphi^-}_A$ by taking the infimum in (\ref{eq:boundinggraphs}). Particularly interesting is the case where $A = \cap_{n\in\N}T^n(\ensuremath{\mathbb{T}^{1}} \times X)$ (the so-called global attractor, see \cite{glendinning:2002}). Then we call $\ensuremath{\varphi^+}_A$ ($\ensuremath{\varphi^-}_A)$ the \textit{upper (lower) bounding graph of the system}. There is also an intimate relation between invariant graphs and ergodic measures. On the one hand, to each invariant graph $\varphi$ we can associate an invariant ergodic measure by \begin{equation} \label{eq:associatedmeasure} \mu_\varphi(A) \ := \ m(\pi_1(A \cap \Phi)) \ , \end{equation} where $m$ denotes the Lebesgue measure on \ensuremath{\mathbb{T}^{1}}\ and $\pi_1$ is the projection to the first coordinate. 
On the other hand, if $T$ is a qpf monotone interval map, then the converse is true as well: In this case, for each invariant ergodic measure $\mu$ there exists an invariant graph $\varphi$, such that $\mu=\mu_\varphi$ in the sense of (\ref{eq:associatedmeasure}). (This can be found in \cite{arnold:1998}, Theorem 1.8.4~. Although the statement is formulated for continuous-time dynamical systems there, the proof literally stays the same.) If all fibre maps are differentiable and we denote their derivatives by $D\ensuremath{T_{\theta}}$, then the stability of an invariant graph $\varphi$ is measured by its \textit{Lyapunov exponent} \begin{equation} \label{eq:lyapexponent} \lambda(\varphi) \ := \ \int_{\ensuremath{\mathbb{T}^{1}}} \log D\ensuremath{T_{\theta}}(\varphi(\theta)) \ d\theta \ . \end{equation} An invariant graph is called \textit{stable} when its Lyapunov exponent is negative, \textit{unstable} when it is positive and \textit{neutral} when it is zero. Obviously, even if its Lyapunov exponent is negative an invariant graph does not necessarily have to be continuous. This is exactly the case that has been the subject of so much interest: \begin{definition}[Strange non-chaotic attractors and repellers] \label{def:sna} A \textbf{strange non-chaotic attractor (SNA)} in a quasiperiodically forced system $T$ is a $T$-invariant graph which has negative Lyapunov exponent and is not continuous. Similarly, a \textbf{strange non-chaotic repeller (SNR)} is a non-continuous $T$-invariant graph with positive Lyapunov exponent. \end{definition} This terminology, which was coined in theoretical physics, may need a little bit of explanation. For example, the point set corresponding to a non-continuous invariant graph is not a compact invariant set, which is usually required in the definition of `attractor'.
However, a SNA attracts and determines the behaviour of a set of initial conditions of positive Lebesgue measure (e.g.\ \cite{jaeger:2003}, Proposition~3.3), i.e.\ it carries a `physical measure'. Moreover, it is easy to see that the essential closure% \footnote{The support of the measure $\mu_\varphi$ given by (\ref{eq:associatedmeasure}), where $\varphi$ denotes the SNA. See also Section~\ref{Essentialclosure}.} of a SNA is an attractor in the sense of Milnor \cite{milnor:1985}. `Strange' just refers to the non-continuity and the resulting complicated structure of the graph. The term `non-chaotic' is often motivated by the negative Lyapunov exponent in the above definition \cite{grebogi/ott/pelikan/yorke:1984}, but actually we prefer a slightly different point of view: At least in the case where the fibre maps are monotone interval maps or circle homeomorphisms, the topological entropy of a quasiperiodically forced system is always zero,% \footnote{For monotone interval maps this follows simply from the fact that every invariant ergodic measure is the projection of the Lebesgue measure on \ensuremath{\mathbb{T}^{1}}\ onto an invariant graph, such that the dynamics are isomorphic in the measure-theoretic sense to the irrational rotation on the base. Therefore all measure-theoretic entropies are zero, and so is the topological entropy as their supremum. In the case of circle homeomorphisms, the same result can be derived from a statement by Bowen (\cite{bowen:1971}, Theorem~17).} such that the system and its invariant objects should not be considered as `chaotic'. This explains why we also speak of \textit{strange non-chaotic repellers}. In fact, in invertible systems an attracting invariant graph becomes a repelling invariant graph for the inverse and vice versa, while the dynamics on them hardly changes. Thus, it seems reasonable to say that `non-chaotic' should either apply to both or to none of these objects. 
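In practice, the Lyapunov exponent (\ref{eq:lyapexponent}) of a stable invariant graph can be estimated by a Birkhoff average of $\log D\ensuremath{T_{\theta}}$ along an orbit that has settled onto the attractor. The following Python sketch does this for the $\arctan$-family (\ref{eq:atanfamily}) introduced in the next section, with the parameters of Figure~\ref{fig:atanfamily}(a); the function name, starting point, transient and orbit length are ad hoc choices, and the result is only a numerical approximation.

```python
import math

def lyapunov_estimate(alpha=10.0, beta=0.65, n_transient=1000, n_avg=100000):
    """Estimate the Lyapunov exponent of the attracting invariant graph of
    the qpf arctan map by a Birkhoff average of log DT_theta along an orbit
    started above the upper bounding graph."""
    omega = (math.sqrt(5.0) - 1.0) / 2.0   # golden mean rotation number
    c = math.atan(alpha)

    def step(theta, x):
        x_new = math.atan(alpha * x) / c - beta * (1.0 - math.sin(math.pi * theta))
        return (theta + omega) % 1.0, x_new

    theta, x = 0.1, 2.0                     # start above the attractor
    for _ in range(n_transient):            # discard the transient
        theta, x = step(theta, x)
    total = 0.0
    for _ in range(n_avg):
        # DT_theta(x) = alpha / ((1 + (alpha x)^2) * arctan(alpha))
        total += math.log(alpha / ((1.0 + (alpha * x) ** 2) * c))
        theta, x = step(theta, x)
    return total / n_avg
```

A clearly negative value indicates that the orbit tracks a stable invariant graph, as expected well before the bifurcation.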
\subsection{Examples of non-smooth saddle-node bifurcations} \label{Examples} As mentioned, the crucial observation which starts our investigation here is the fact that the invariant circles in a non-smooth bifurcation do not approach each other arbitrarily. Instead, their behaviour follows a very distinctive pattern, which we call \emph{exponential evolution of peaks}. In this section we present some simulations which demonstrate this phenomenon in the different parameter families mentioned in Section~\ref{Overview}~. Although it seems difficult to give a precise mathematical definition of this process, and we refrain from doing so here, this observation provides the necessary intuition and determines the strategy of the proofs for the rigorous results in the later chapters. (The same underlying idea can be found in \cite{bjerkloev:2005} and \cite{jaeger:2004a}.) \subsubsection{The arctan-family with additive forcing} \label{AtanFamily} Typical representatives of the class of systems we will study in the later sections are given by the family \begin{equation} \label{eq:atanfamily} \ensuremath{(\theta,x)} \ \mapsto \ \left(\ensuremath{\theta + \omega},\frac{\arctan(\alpha x)}{\arctan(\alpha)} - \beta\cdot (1-\sin(\pi\theta))\right) \ . \end{equation} \begin{figure}[h!] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-atan.eps, clip=, width=\linewidth, } \caption{\small Upper bounding graphs in the parameter family given by (\ref{eq:atanfamily}) with $\omega$ the golden mean and $\alpha=10$. The values for $\beta$ are: (a) $\beta=0.65$, (b) $\beta=0.9$, (c) $\beta=0.95$, (d) $\beta=0.967$, (e) $\beta=0.9708$, (f) $\beta=0.9710325$.} \label{fig:atanfamily} \end{minipage} \end{figure} As we will see later on, these maps provide a perfect model for the mechanism which is responsible for the exponential evolution of peaks and the creation of SNA's in saddle-node bifurcations.
The map $x \mapsto \frac{\arctan(\alpha x)}{\arctan(\alpha)}$ has three fixed points at $0$ and $\pm 1$, and for $\beta=0$ these correspond to three (constant) invariant curves for (\ref{eq:atanfamily}). As the parameter $\beta$ is increased, a saddle-node bifurcation between the two upper invariant curves takes place: Only the lower of the three curves persists, while the other two collide and cancel each other out. In fact, it will not be very hard to describe this bifurcation pattern in general (see Theorem~\ref{thm:saddlenode}), whereas proving that this bifurcation is indeed \textit{`non-smooth'} will require a substantial amount of work. Figure~\ref{fig:atanfamily} shows the behaviour of the upper bounding graph as the parameter $\beta$ is increased and reveals a very characteristic pattern. The overall shape of the curves hardly changes, apart from the fact that when the bifurcation is approached they have more and more \textit{`peaks'} (as we will see there are infinitely many in the end, but most of them are too small to be seen). The point is that these peaks do not appear arbitrarily, but one after another in a very ordered way: In (a), only the first peak is fully developed while the second just starts to appear. In (b) the second peak has grown out and a third one is just visible, in (c) and (d) the third one grows out and a fourth and fifth start to appear \ldots\ . Further, each peak is exactly the image of the preceding one, and the peaks become steeper and thinner at an exponential rate (which explains the term \textit{`exponential evolution'} and the fact that the peaks soon become too thin to be detected numerically). As far as simulations are concerned, the pictures obtained with smooth forcing functions in (\ref{eq:atanfamily}) instead of $(1-\sin(\pi\theta))$, which is only Lipschitz-continuous and decays linearly off its maximum at $\theta=0$, show exactly the same behaviour.
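Simulations of this kind are easy to reproduce: since the fibre maps are monotone, the upper bounding graph can be approximated by pulling back a constant graph that lies above the attractor. The following Python sketch does this on a finite grid (the grid size, starting height and iteration count are ad hoc choices); plotting the returned values against the $\theta$-grid should reproduce the peak structure of Figure~\ref{fig:atanfamily}.

```python
import math

def upper_bounding_graph(beta, alpha=10.0, n_grid=400, n_iter=40):
    """Approximate the upper bounding graph of the arctan-family
    (theta, x) -> (theta + omega, arctan(alpha*x)/arctan(alpha)
                                  - beta*(1 - sin(pi*theta)))
    by iterating a constant graph above the attractor n_iter times:
    phi_n(theta) = T_{theta-omega}(phi_{n-1}(theta-omega))."""
    omega = (math.sqrt(5.0) - 1.0) / 2.0   # golden mean
    c = math.atan(alpha)
    phi = []
    for i in range(n_grid):
        theta = i / n_grid - n_iter * omega   # pull the grid point back
        x = 2.0                               # start above the attractor
        for _ in range(n_iter):               # push forward again
            x = math.atan(alpha * x) / c - beta * (
                1.0 - math.sin(math.pi * (theta % 1.0)))
            theta += omega
        phi.append(x)
    return phi
```

Running this for the $\beta$-values listed in the caption of Figure~\ref{fig:atanfamily} should show one peak after another growing out as $\beta$ approaches the bifurcation value.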
However, the rigorous results from the later sections only apply to this latter type of forcing. In Section~\ref{Mechanism} we will discuss why this simplifies the proof of the non-smoothness of the bifurcation to some extent. Finally, it should be mentioned that the phenomenon we just described does not depend at all on any particular properties of the arctangent. Any strictly monotone and bounded map of the real line with the same qualitative features, which can vaguely be described as being ``s-shaped'', can be used to replace the arctangent in the above definitions without changing the observed behaviour (e.g.\ $x \mapsto \tanh(x)$). If in addition this map has similar scaling properties to the arctangent, as for example $x \mapsto \frac{x}{1+|x|}$, then even the rigorous results we present in the later sections apply. We will not prove this in detail, but it will be evident that the arguments which we use in Section~\ref{App:atanfamily} to treat (\ref{eq:atanfamily}) can be easily adjusted to this end. \subsubsection{The Harper map} \label{Harper} The Harper map with continuous \emph{potential} $V:\ensuremath{\mathbb{T}^{1}} \ensuremath{\rightarrow} \ensuremath{\mathbb{R}}$, \emph{energy} $E$ and \emph{coupling constant} $\lambda$ is given by \begin{equation} \label{eq:harper} \ensuremath{(\theta,x)} \mapsto \left(\ensuremath{\theta + \omega},\arctan\left(\frac{-1}{\tan(x)-E+\lambda V(\theta)}\right)\right) \ . \end{equation} It is probably the most studied example, and the reason for this is the fact that its dynamics are intimately related to the spectral properties of discrete Schr\"odinger operators with quasiperiodic potential (the so-called almost-Mathieu operator in the case $V(\theta) = \cos(2\pi\theta)$). Before we turn to the simulations, we briefly want to discuss this relation and the arguments by which the existence of SNA in the Harper map is established in \cite{herman:1983}. A more detailed discussion can be found in \cite{haro/puig:2006}.
The map (\ref{eq:harper}) describes the projective action of the $\textrm{SL}(2,\ensuremath{\mathbb{R}})$-cocycle (or Schr\"odinger cocycle) \begin{equation} \label{eq:cocycle} (\theta,v) \mapsto (\ensuremath{\theta + \omega},A_{\lambda,E}(\theta) \cdot v) \ , \end{equation} where $$A_{\lambda,E}(\theta) \ = \ \left(\begin{array}{cc} E - \lambda V(\theta) & -1 \\ 1 & 0 \end{array}\right)$$ and $v=(v_1,v_2) \in \ensuremath{\mathbb{R}}^2$. This means that (\ref{eq:harper}) can be derived from (\ref{eq:cocycle}) by letting $x := \arctan(v_2/v_1)$. The Schr\"odinger cocycle in (\ref{eq:cocycle}) is in turn associated to the almost-Mathieu operator \begin{equation} \label{eq:mathieu} H_{\lambda,\theta} : \ell^2 \ensuremath{\rightarrow} \ell^2 \ , \ (H_{\lambda,\theta}u)_n = u_{n+1} + u_{n-1} + \lambda V(\theta+n\omega)u_n \ , \end{equation} as each formal solution of the eigenvalue equation $H_{\lambda,\theta}u=Eu$ satisfies $$\left(\begin{array}{c}u_{n+1} \\ u_n \end{array}\right) = A_{\lambda,E}(\theta+n\omega) \cdot \left(\begin{array}{c}u_n \\ u_{n-1} \end{array}\right)\ .$$ The existence of SNA for the Harper map is equivalent to \emph{non-uniform hyperbolicity} of the cocycle (\ref{eq:cocycle}) \cite{herman:1983,haro/puig:2006}, a concept which is of fundamental importance in this context. In order to explain it, recall that a $\textrm{SL}(2,\ensuremath{\mathbb{R}})$-cocycle over an irrational rotation is a mapping $\ensuremath{\mathbb{T}^{1}} \times \ensuremath{\mathbb{R}}^2 \ensuremath{\rightarrow} \ensuremath{\mathbb{T}^{1}} \times \ensuremath{\mathbb{R}}^2$ of the form $(\theta,v) \mapsto (\ensuremath{\theta + \omega},A(\theta) \cdot v)$, where $A : \ensuremath{\mathbb{T}^{1}} \ensuremath{\rightarrow} \textrm{SL}(2,\ensuremath{\mathbb{R}})$ is a continuous function. 
The Lyapunov exponent of such a cocycle is given by \begin{equation} \label{eq:cocycle-lyap} \lambda(\omega,A) \ = \ \ensuremath{\lim_{n\rightarrow\infty}} \ensuremath{\frac{1}{n}} \int_{\ensuremath{\mathbb{T}^{1}}} \log \| A_n(\theta)\| \ d\theta \ , \end{equation} where $A_n(\theta) = A(\theta+(n-1)\omega) \circ \ldots \circ A(\theta)$. If $\lambda(\omega,A)>0$, then Oseledets Multiplicative Ergodic Theorem implies the existence of an invariant splitting $\ensuremath{\mathbb{R}}^2 = \mathbb{W}^s_\theta \oplus \mathbb{W}^u_\theta$ (invariance meaning $A(\theta)(\mathbb{W}^i_\theta) = \mathbb{W}^i_{\ensuremath{\theta + \omega}} \ (i=s,u)$), such that vectors in $\mathbb{W}^u_\theta$ are exponentially expanded and vectors in $\mathbb{W}^s_\theta$ are exponentially contracted with rate $\lambda(\omega,A)$ by the action of $A_n(\theta)$. The cocycle $(\omega,A)$ is called {\em uniformly hyperbolic} if the subspaces $\mathbb{W}^i_\theta$ depend continuously on $\theta$. If they depend only measurably on $\theta$, but not continuously, then the cocycle is called {\em non-uniformly hyperbolic}. In order to see why the latter notion is equivalent to the existence of SNA, note that the invariant subspaces can be written as $$\mathbb{W}^i_\theta \ = \ \ensuremath{\mathbb{R}} \cdot\left(\begin{array}{c} 1\\ \tilde{\varphi}^i(\theta) \end{array} \right) $$ with measurable functions $\tilde{\varphi}^i : \ensuremath{\mathbb{T}^{1}} \ensuremath{\rightarrow} \ensuremath{\mathbb{R}} \cup \{\infty\}$, and it follows immediately that by $\varphi^i:= \arctan(\tilde{\varphi^i})$ we can define invariant graphs for the projective action of the cocycle (obtained by letting $x=\arctan(v_2/v_1)$ as above). 
Moreover, it is not difficult to show that $\lambda(\varphi^s) = 2\lambda(\omega,A)$ and $\lambda(\varphi^u) = -2\lambda(\omega,A)$ in this case,% \footnote{\label{foot:harper1} In the case of the Harper map, the crucial computation is the following: Fix $\theta \in \ensuremath{\mathbb{T}^{1}}$ and $v \in \ensuremath{\mathbb{R}}^2 \setminus \{0\}$ and define vectors $v^n$ by $v^0 := v$ and $v^{n+1}:=A(\theta+n\omega)\cdot v^n$. Further, let $\theta_n := \theta+n\omega \bmod 1$ and $x_n := \arctan(v^n_2/v^n_1)$, and denote the Harper map (\ref{eq:harper}) by $T$. Then, \begin{eqnarray*} DT_{\theta_k}(x_k) & = & \frac{1}{1+(\tan(x_k)-E+\lambda V(\theta_k))^{-2}} \cdot \frac{1+\tan(x_k)^2}{(\tan(x_k)-E+\lambda V(\theta_k))^2} \\ & = & \frac{1+\tan(x_k)^2}{1+(\tan(x_k)-E+\lambda V(\theta_k))^2} \ = \ \frac{1+\tan(x_k)^2}{1+\tan(x_{k+1})^{-2}} \ = \ \frac{\| v^k\|^2}{\|v^{k+1}\|^2} \ , \end{eqnarray*} where we used $v^{k+1}_2 = v^k_1$ in the last step. Consequently, we obtain \begin{eqnarray*} DT^n_\theta(x_0) \ = \ \prod_{k=0}^{n-1} DT_{\theta_k}(x_k) \ = \ \frac{\|v^0\|^2}{\|v^n\|^2} \ , \end{eqnarray*} and this establishes the asserted relation between the different Lyapunov exponents. The case of a general $\textrm{SL}(2,\ensuremath{\mathbb{R}})$-cocycle can be treated in more or less the same way. } and conversely the existence of invariant graphs with non-zero Lyapunov exponent implies the existence of an invariant splitting with the mentioned properties. As the graphs $\varphi^i$ depend continuously on $\theta$ if and only if this is true for the subspaces $\mathbb{W}^i_\theta$, we obtain the claimed equivalence. \medskip The crucial observation which was made by Herman is the fact that, using a result from sub-harmonic analysis, lower bounds on the Lyapunov exponent can be obtained for suitable choices of the $\textrm{SL}(2,\ensuremath{\mathbb{R}})$-valued function $A$. 
In the case of (\ref{eq:cocycle}) with potential $V(\theta) = \cos(2\pi\theta)$, this bound is $\lambda(\omega,A_{\lambda,E}) \geq \max\{0,\log(|\lambda|/2)\}$ \cite[Section 4.7]{herman:1983}. Consequently, if $|\lambda| >2$ then the Lyapunov exponent of $(\omega,A_{\lambda,E})$ will be strictly positive for all values of $E$. On the other hand, it is well-known that there cannot be a continuous splitting for all $E \in \ensuremath{\mathbb{R}}$, and consequently for some $E$ the respective cocycle has to be non-uniformly hyperbolic. The simplest way to see this is probably to consider the rotation number. Suppose $\omega \in \ensuremath{\mathbb{T}^{1}} \setminus \ensuremath{\mathbb{Q}}$ and $\lambda>2$ are fixed. Then (\ref{eq:harper}) defines a skew-product map $T_E$ on the two-torus, and for such maps a fibred rotation number $\rho(T_E)$ can be defined, much in the way this is done for homeomorphisms of the circle. The dependence of $\rho(T_E)$ on $E$ is continuous \cite[Section 5]{herman:1983}, and further it is easy to see that the existence of continuous invariant graphs forces the fibred rotation number to be rationally related to $\omega$, more precisely to take values in the module ${\cal M}_\omega := \{\frac{k}{q} \omega \bmod 1 \mid k\in\ensuremath{\mathbb{Z}}, q\in \N\}$ (compare \cite[Section 5.17]{herman:1983}). However, if $E$ runs through the real line from $-\infty$ to $\infty$, then the rotation number $\rho(T_E)$ runs exactly once around the circle \cite[Section 4.17(b)]{herman:1983}. For all $E \in \ensuremath{\mathbb{R}}$ with $\rho(T_E) \notin {\cal M}_\omega$, the existence of a SNA in (\ref{eq:harper}) follows. Refined results can be obtained by using the fact that the invariant splitting cannot be continuous whenever $E$ belongs to the spectrum of the almost-Mathieu operator. This is discussed in detail in \cite{haro/puig:2006}.
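Herman's lower bound is easy to check numerically: the Lyapunov exponent (\ref{eq:cocycle-lyap}) can be estimated by iterating the cocycle on a single vector and renormalising at every step. A Python sketch is given below; the parameter values ($\lambda=4$, $E=0$), the starting vector and the iteration count are ad hoc choices, and the output is only a numerical estimate.

```python
import math

def cocycle_lyapunov(lam=4.0, E=0.0, n=200000):
    """Estimate the Lyapunov exponent of the Schroedinger cocycle
    (theta, v) -> (theta + omega, A_{lam,E}(theta) v), where
    A_{lam,E}(theta) = [[E - lam*cos(2*pi*theta), -1], [1, 0]],
    by accumulating the log-growth of a renormalised vector."""
    omega = (math.sqrt(5.0) - 1.0) / 2.0   # golden mean
    theta, v1, v2 = 0.1, 1.0, 0.0
    total = 0.0
    for _ in range(n):
        a = E - lam * math.cos(2.0 * math.pi * theta)
        w1, w2 = a * v1 - v2, v1        # apply A(theta)
        norm = math.hypot(w1, w2)
        total += math.log(norm)         # log-growth in this step
        v1, v2 = w1 / norm, w2 / norm   # renormalise
        theta = (theta + omega) % 1.0
    return total / n
```

For $\lambda=4$ the estimate should come out at least $\log(\lambda/2)=\log 2 \approx 0.693$, consistent with the bound above.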
In particular, it allows one to use lower bounds on the measure of the spectrum to establish the existence of SNA for a set of positive measure in parameter space. \medskip For the simulations presented here we use a reflection of (\ref{eq:harper}) w.r.t.\ the $\theta$-axis, \begin{equation} \label{eq:projharper} \ensuremath{(\theta,x)} \ \mapsto \ \left(\ensuremath{\theta + \omega},\arctan\left(\frac{1}{\tan(-x)- E+\lambda V(\theta)}\right) \right) \ , \end{equation} as this makes it easier to compare the pictures with the other examples. The potential function which is used is $V(\theta) = \cos(2\pi\theta)$. Later, in Section~\ref{App:Harper}, we have to make a different choice in order to obtain rigorous results with the methods presented here. \begin{figure}[h!] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-harpergraphs.eps, clip=, width=\linewidth, } \caption{\small The stable invariant curves for the projected Harper map given by (\ref{eq:projharper}) with $\omega$ the golden mean, $\lambda=4$ and different values for $E$. (a) At $E=4.4$ the first peak is clearly visible, while the second just starts to appear. The repeller is close, but still a certain distance away. (b) At $E=4.3$ the second peak has grown and the third starts to appear. This pattern continues, and more and more peaks can be seen in pictures (c) $E=4.289$, (d) $E=4.28822$ and (e) $E=4.288208$. (f) finally shows attractor and repeller for $E=4.288207478$ just prior to collision. } \label{fig:harpergraphs} \end{minipage} \end{figure} As described by Herman in \cite[Section 4.14]{herman:1983}, when the parameter $E$ approaches the spectrum of the almost-Mathieu operator from above, a stable and an unstable invariant circle collide in a saddle-node bifurcation.
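Since the stable curve in Figure~\ref{fig:harpergraphs} has negative Lyapunov exponent, it attracts typical orbits, and it can be sampled by simply iterating (\ref{eq:projharper}) forward. A minimal Python sketch follows; the parameters correspond to Figure~\ref{fig:harpergraphs}(a), while the transient and sample lengths are ad hoc choices and the guard against a vanishing denominator is merely a numerical precaution.

```python
import math

def harper_attractor_orbit(E=4.4, lam=4.0, n_transient=500, n_keep=2000):
    """Sample the attracting invariant graph of the projected Harper map
    (theta, x) -> (theta + omega,
                   arctan(1 / (tan(-x) - E + lam*cos(2*pi*theta))))
    by following one forward orbit after discarding a transient."""
    omega = (math.sqrt(5.0) - 1.0) / 2.0   # golden mean
    theta, x = 0.1, 0.0
    samples = []
    for k in range(n_transient + n_keep):
        den = math.tan(-x) - E + lam * math.cos(2.0 * math.pi * theta)
        # den == 0 would send x to the fibre boundary pi/2; guard against it
        x = math.atan(1.0 / den) if den != 0.0 else math.pi / 2.0
        theta = (theta + omega) % 1.0
        if k >= n_transient:
            samples.append((theta, x))
    return samples
```

Plotting the sampled pairs $(\theta,x)$ should trace out the stable curve of Figure~\ref{fig:harpergraphs}(a), including its first peak.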
Even though the rigorous arguments used by Herman \cite{herman:1983} are very specific for cocycles (as described above), the process seems to be the same as in the arctan-family before: Figure~\ref{fig:harpergraphs} shows the behaviour of the attractor before it collides with the repeller (the latter is only depicted in Fig.~\ref{fig:harpergraphs}(a) and (f)). The pattern is already familiar: the exponential evolution of peaks can be seen quite clearly again. Based on this observation, Bjerkl\"ov recently addressed a problem raised by Herman \cite[Section 4.14]{herman:1983} about the structure of the minimal set which is created in this bifurcation. Upon their collision, the stable and unstable invariant circles are replaced by an upper, respectively lower semi-continuous invariant graph. The region between the two graphs is a compact and invariant set, but it is not at all obvious whether this set is also minimal and coincides with the topological closures of the two graphs. In \cite{bjerkloev:2005} Bjerkl\"ov gives a positive answer to this question, provided the rotation number $\omega$ on the base is Diophantine and the parameter $\lambda$ is sufficiently large. As his approach is purely dynamical and does not depend on any particular properties of cocycles, it should be possible to apply it to more general systems. This might make it possible to prove the existence of SNA and to describe their structure, in the above sense, at the same time. \subsubsection{The quasiperiodically forced Arnold circle map} \label{ArnoldMap} The most obvious physical motivation for studying qpf systems is probably oscillators which are forced at two or more incommensurate frequencies. \begin{figure}[h!] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-arnold.eps, clip=, width=\linewidth, } \caption{\small Pictures obtained from the qpf Arnold circle map (\ref{eq:qpfarnold}) with $\alpha=0.99$ and $\beta=0.6$, $\omega$ is the golden mean.
(a) shows the attracting invariant curve for $\tau=0.3373547962$. As the exponential evolution of peaks takes place on a rather microscopic level, it is difficult to recognise any details. Therefore, the other pictures show the attractors only over the interval $[0.2,0.28]$. The $\tau$-values are (b) $\tau=0.337$, (c) $\tau=0.3373$, (d) $\tau=0.3373547$, (e) $\tau=0.33735479$, (f) $\tau=0.337357962$.} \label{fig:arnold} \end{minipage} \end{figure} If these are modelled by differential equations, the Poincar\'e maps will be of the form (\ref{eq:qpfs}). The qpf Arnold circle map, given by \begin{equation} \label{eq:qpfarnold} \ensuremath{(\theta,x)} \mapsto \left(\ensuremath{\theta + \omega},x+\tau+\frac{\alpha}{2\pi}\sin(2\pi x) + \beta \sin(2\pi\theta) \right) \end{equation} with real parameters $\alpha,\tau$ and $\beta$, is often studied as a basic example (see \cite{ding/grebogi/ott:1989}). There are several interesting phenomena which can be found in this family, such as different bifurcation patterns, mode-locking or the transition to chaos as the map becomes non-invertible \cite{ding/grebogi/ott:1989,feudel/kurths/pikovsky:1995,osinga/wiersig/glendinning/feudel:2001}. Similar to the unforced Arnold circle map \cite{katok/hasselblatt:1995,demelo/vanstrien:1993}, there exist so-called Arnold tongues -- regions in the parameter space on which the rotation number stays constant. The reason for this is usually the existence of (at least) one stable invariant circle inside of the tongue. On the boundaries of the tongue this attractor collides with an unstable invariant circle in a saddle-node bifurcation (see \cite{osinga/wiersig/glendinning/feudel:2001,chastell/glendinning/stark:1995} or \cite{glendinning/feudel/pikovsky/stark:2000} for a more detailed discussion and numerical results). \begin{figure}[h!]
\noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-arnoldpeak.eps, clip=, width=\linewidth, } \caption{\small The stable invariant curves in the system $\ensuremath{(\theta,x)} \mapsto \left(\ensuremath{\theta + \omega},x+\tau+\frac{\alpha}{2\pi}\sin(2\pi x)\right.$ $\left.-\beta \cdot \max\{0,1-10\cdot d(\theta,\ensuremath{\frac{1}{2}})\}\right)$. This time the parameters $\alpha=0.99$ and $\tau=0$ are fixed, while $\beta$ varies: (a) $\beta=-0.2$, (b) $\beta=-0.3$, (c) $\beta=-0.4$, (d) $\beta=-0.45$, (e) $\beta=-0.49$, (f) $\beta=-0.497$. Again, $\omega$ is the golden mean. The exponential evolution of peaks is clearly visible. } \label{fig:arnoldpeak} \end{minipage} \end{figure} For our purpose it is convenient to study only those bifurcations which take place on the boundary of the Arnold tongue with rotation number zero. In order to do so, we fix the parameters $\alpha \in [0,1]$ and $\beta>0$, thus obtaining a one-parameter family depending on $\tau$. As long as $\beta$ is not too large, there exist a stable and an unstable invariant curve at $\tau=0$. Increasing or decreasing $\tau$ leads to the disappearance of the two curves after their collision in a saddle-node bifurcation. When $\alpha$ is close enough to 1 (where the map becomes non-invertible) this bifurcation seems to be non-smooth \cite{osinga/wiersig/glendinning/feudel:2001}. The problem here is the fact that the curves are already extremely `wrinkled' before the exponential evolution of peaks really starts. Therefore, it is hard to recognise any details in the global picture (see Figure~\ref{fig:arnold}(a)). This becomes different if we `zoom in' and only look at the curves over a small interval. On this microscopic level, we discover more or less the same behaviour as before (Figure~\ref{fig:arnold}(b)--(f)). Of course, this time we cannot really determine the order in which the peaks are generated, as we only see those peaks which lie in our small interval.
But we clearly see that more and more peaks appear, and those appearing at a later time are smaller and steeper than those before. On the other hand, we can also use a more `\textit{peak-shaped}' forcing function instead of the sine. In this case, the pictures we obtain look exactly the same as the ones from the arctan-family above (see Figure~\ref{fig:arnoldpeak}(a)-(f)). This effect will be discussed in more detail in Section~\ref{Mechanism}~. Nevertheless, we should mention that, in contrast to the two preceding examples, we do not provide any rigorous results on the qpf Arnold circle map (see also Section~\ref{App:Remarks} for a discussion). \subsubsection{Pinched skew products} \label{PinchedSystems} As for the Harper map, we refer to the original literature \cite{grebogi/ott/pelikan/yorke:1984,keller:1996} for a more detailed discussion of these systems. Here, we will just have a look at the map \begin{equation} \label{eq:pinchedskews} \ensuremath{(\theta,x)} \mapsto (\ensuremath{\theta + \omega},\tanh(\alpha x)\cdot \sin(\pi\theta)) \ , \end{equation} with real positive parameter $\alpha$, which is a typical representative of this class of systems. Note that due to the multiplicative nature of the forcing, the 0-line is \textit{a priori} invariant, and due to the zero of the sine function there is one fibre which is mapped to a single point (hence `pinched'). These are the essential features that are needed to prove the existence of SNA in pinched skew products (see \cite{keller:1996,glendinning:2002}). \begin{figure}[h] \noindent \begin{minipage}[b]{\linewidth} \epsfig{file=sna-pinchedlines.eps, clip=, width=\linewidth, } \caption{\small The first six iterates of the upper boundary line for the pinched skew product given by (\ref{eq:pinchedskews}) with $\omega$ the golden mean and $\alpha=10$. In each step of the iteration one more peak appears, while apart from that the curves seem to stay the same. 
Further, the peaks become steeper and thinner at an exponential rate.} \label{fig:pinchedlines} \end{minipage} \end{figure} Figure~\ref{fig:pinchedlines} differs from the preceding ones insofar as it does not show a sequence of invariant graphs as the system's parameters are varied, but the first images of a constant line that is iterated with a fixed map. Nevertheless, the behaviour is very much the same as before. The exponential evolution of peaks can be followed even more easily here, as this time each iterate produces exactly one further peak. For pinched skew products this process was quantified in \cite{jaeger:2004a} in order to describe the structure of the SNA's in more detail. The question addressed there is basically the same as the one studied by Bjerkl\"ov in \cite{bjerkloev:2005}, and the result is similar: The SNA, which is an upper semi-continuous invariant graph above the 0-line in this situation, lies dense in the region below itself and above the 0-line, provided the rotation number $\omega$ on the base is Diophantine and the parameter $\alpha$ is large enough. \subsubsection{Non-smooth pitchfork bifurcations} \label{Pitchfork} Compared to saddle-nodes, pitchfork bifurcations are degenerate. Usually they only occur if the system has some inherent symmetry that forces three invariant circles to collide exactly at the same time. Nevertheless, they have been described in the literature about SNA's quite often (e.g.\ \cite{osinga/wiersig/glendinning/feudel:2001},\cite{sturman:1999}). The reason for this is the fact that unlike in saddle-node bifurcations, where the SNA's only occur at one single parameter, SNA's which are created in pitchfork bifurcations seem to persist over a small parameter interval.
In addition, the transition from continuous to non-continuous invariant graphs at the collision point is much more distinct, as the SNA which is created seems to trace out a picture of both stable invariant curves just prior to the collision (see Figure~\ref{fig:pitchfork}). \begin{figure}[h!] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-pitchfork.eps, clip=, width=\linewidth, } \caption{\small A pitchfork bifurcation in the parameter family (\ref{eq:symmetricfamily}). (a) shows the upper and lower bounding graphs just prior to the collision. Note that here the two objects are still distinct, and three different trajectories (a backwards trajectory for the repeller) are plotted to produce this picture. In contrast to this, (b) and (c) only show one single trajectory. There still exist two distinct SNA's, but these are interwoven in such a way that they cannot be distinguished anymore. Each of them seems to trace out a picture of both attractors before collision. The parameter values are $\alpha=10$ and (a) $\tau=1.64$, (b) $\tau=1.645$ and (c) $\tau=1.66$. $\omega$ is the golden mean.} \label{fig:pitchfork} \end{minipage} \end{figure} We were not able to give a rigorous proof for this stabilising effect, or any other details of a non-smooth pitchfork bifurcation. However, by a slight modification of the methods used for the non-smooth saddle-node bifurcation, we can at least prove the existence of SNA's in systems with the mentioned inherent symmetry (see Theorem~\ref{thm:symmetricsna} and Section~\ref{SNAwithsymmetry}). For suitable parameters these systems have two SNA's which are symmetric to each other and enclose a self-symmetric SNR, and the three objects are interwoven in such a way that they all have the same (essential) topological closure. As an example, we consider the parameter family \begin{equation} \label{eq:symmetricfamily} \ensuremath{(\theta,x)} \mapsto \left(\ensuremath{\theta + \omega},\arctan(\alpha x) - \beta \cdot (1-4d(\theta,0))\right) \ .
\end{equation} For Diophantine $\omega$ and sufficiently large $\alpha$ we will obtain the existence of a SNA-SNR triple as described above for at least one suitable parameter $\beta(\alpha)$. \subsection{The mechanism: Exponential evolution of peaks} \label{Mechanism} In the following, we will try to give a simple heuristic explanation for the mechanism which is responsible for the exponential evolution of peaks. Generally, one could say that it consists of a subtle interplay of an \textit{`expanding region'} $\cal E$ and a \textit{`contracting region'} $\cal C$, which communicate with each other only via a small \textit{`critical region'} $\cal S$. In order to give meaning to this, we concentrate first on the $\arctan$-family given by (\ref{eq:atanfamily}). \medskip If we restrict to $\alpha \geq \tan(1)$ and $\beta \leq \pi$ in (\ref{eq:atanfamily}), then we can choose $X=\left[-\frac{3}{2}\pi,\frac{3}{2}\pi\right]$ as the driven space, because in this case $\ensuremath{\mathbb{T}^{1}} \times \left[-\frac{3}{2}\pi,\frac{3}{2}\pi\right]$ is always mapped into itself. Further, we fix $\alpha$ sufficiently large, such that the map $F : x \mapsto \arctan(\alpha x)$ has three fixed points $x^- < 0 < x^+$. As $0$ will be repelling and $x^+$ attracting, we can choose a small interval $I_e$ around 0 which is expanded and an interval $I_c$ around $x^+$ which is contracted, and define the expanding and contraction regions as ${\cal E}:= \ensuremath{\mathbb{T}^{1}} \times I_e$ and ${\cal C}:= \ensuremath{\mathbb{T}^{1}} \times I_c$ (see Figure~\ref{fig:mechanism}). Of course, there exists a second contracting region ${\cal C}^-$, corresponding to $x^-$, but this does not take part in the bifurcation: Due to the one-sided nature of the forcing, ${\cal C}^-$ is always a trapping region, independent of the parameter $\beta$. 
Thus there always exists a stable invariant circle inside of ${\cal C}^-$, and the saddle-node bifurcation only takes place between the two invariant circles above. \begin{figure}[h!] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-mechanism.eps, clip=, width=\linewidth, } \caption{\small As the fibre maps are expanding and contracting on $I_e$ and $I_c$, respectively, $T$ will be expanding in the vertical direction on ${\cal E} = \ensuremath{\mathbb{T}^{1}} \times I_e$ and contracting on ${\cal C} = \ensuremath{\mathbb{T}^{1}} \times I_c$. (b) As long as $\beta$ is not too large, $\cal C$ is mapped into itself. Thus, there exists a stable invariant circle inside of $T({\cal C})$ (in fact, as a point set this circle coincides with $\bigcap_{n\in\N} T^n({\cal C})$), which has approximately the shape of the forcing function. (c) When the first peak enters the expanding region it induces a second peak, which moves faster than the first one by the expansion factor in $\cal E$. The first peak is generated in the critical region $\cal S$, where the forcing achieves its maximum. Therefore, it is located in $T({\cal S})$. } \label{fig:mechanism} \end{minipage} \end{figure} By the choice of the intervals, the fibre maps $\ensuremath{T_{\theta}}$ are contracting on $I_c$ and expanding on $I_e$. Further, as long as $\beta$ is small there holds \begin{equation} \label{eq:expandinginterval} \ensuremath{T_{\theta}}(I_c) \ensuremath{\subseteq} I_c \ \ \textrm{ and } \ \ I_e \subseteq \ensuremath{T_{\theta}}(I_e) \ \end{equation} for all $\theta \in \ensuremath{\mathbb{T}^{1}}$. Consequently, \begin{equation} \label{eq:expandingregion} T({\cal C}) \ensuremath{\subseteq} {\cal C} \ \ \textrm{ and } \ \ {\cal E} \subseteq T({\cal E}) \ . \end{equation} This means that $\cal C$ and $\cal E$ cannot interact, and there will be exactly one invariant circle (stable and unstable, respectively) in each of the two regions. 
However, when $\beta$ is increased and approaches the bifurcation point, (\ref{eq:expandingregion}) does not hold anymore. Nevertheless, the relation (\ref{eq:expandinginterval}) will still be true for `most' $\theta$, namely whenever the forcing function $(1-\sin(\pi\theta))$ in (\ref{eq:atanfamily}) is not close to its maximum (see Figure~\ref{fig:mechanism}(c)). Thus, even when $\cal E$ and $\cal C$ start to interact, they will only do so in a vertical strip ${\cal S}:=W \times X$, where $W \ensuremath{\subseteq} \ensuremath{\mathbb{T}^{1}}$ is a small interval around $0$. This strip $\cal S$ is the `\textit{critical region}' we referred to above and in which the first peak is generated: As long as $T({\cal C}) \ensuremath{\subseteq} {\cal C}$, the upper bounding graph will be contained in $T({\cal C})$. But this set is just a very small strip around the first iterate of the line $\ensuremath{\mathbb{T}^{1}} \times \{x^+\}$, which is a curve $\psi$ given by \begin{equation} \label{eq:firstpeak} \psi(\theta) := x^+ - \beta\cdot (1-\sin(\pi(\theta-\omega))) \ \end{equation} (see Figure~\ref{fig:mechanism}(b)). Consequently, the upper bounding graph $\varphi^+$ will have approximately the same shape as $\psi$, which means that it has a first peak centred around $\omega$, i.e.\ in $T({\cal S})$. From that point on, the further behaviour is explained quite easily. As soon as the first peak enters the expanding region, its movement will be amplified due to the strong expansion in $\cal E$. Thus a second peak will be generated at $2\omega \bmod 1$. It will be steeper than the first one, and when $\beta$ is increased it also grows faster by a factor which is more or less the expansion factor inside $\cal E$. As soon as the second peak is large enough to enter the expanding region, it generates a third one, which in turn induces a fourth and so on $\ldots$ .
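Both the first-peak formula (\ref{eq:firstpeak}) and the subsequent steepening can be observed by iterating the constant line $\ensuremath{\mathbb{T}^{1}} \times \{x^+\}$ numerically. A rough sketch (the grid discretisation, the nearest-neighbour sampling and all parameter values are ad-hoc choices, not part of the rigorous arguments):

```python
import math

alpha, beta = 10.0, 1.5
omega = (math.sqrt(5) - 1) / 2   # golden mean

def F(x):
    return math.atan(alpha * x)

def g(theta):
    return 1.0 - math.sin(math.pi * theta)

# attracting fixed point x_+ of F
x_plus = 1.0
for _ in range(100):
    x_plus = F(x_plus)

N = 20000
thetas = [k / N for k in range(N)]
graph = [x_plus] * N                 # the constant line x = x_+

def iterate(graph):
    """graph_{n+1}(theta) = F(graph_n(theta - omega)) - beta * g(theta - omega)."""
    new = [0.0] * N
    for k in range(N):
        src = (thetas[k] - omega) % 1.0
        j = int(round(src * N)) % N  # nearest grid point of theta - omega
        new[k] = F(graph[j]) - beta * g(src)
    return new

def max_slope(graph):
    # maximal finite-difference slope as a crude steepness measure
    return max(abs(graph[(k + 1) % N] - graph[k]) * N for k in range(N))

g1 = iterate(graph)
# the first iterate coincides with psi(theta) = x_+ - beta*(1 - sin(pi*(theta-omega)))
err = max(abs(g1[k] - (x_plus - beta * g((thetas[k] - omega) % 1.0))) for k in range(N))
assert err < 1e-9

g2 = iterate(g1)
g3 = iterate(g2)
assert max_slope(g3) > max_slope(g1)  # the peaks become steeper
```

The second and third iterates already show the new peaks at $2\omega$ and $3\omega \bmod 1$, each markedly steeper than its predecessor.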
\medskip The picture we have drawn so far already gives a first idea about what happens, although converting it into a rigorous proof for the existence of SNA will still require a substantial amount of work. As we will see, it is not too hard to give a good quantitative description of the behaviour of the peaks up to a certain point, namely as long as the peaks do not enter the critical region (corresponding to the returns of the underlying rotation to the interval $W$). But as soon as this happens, things will start to become difficult. However, by assuming that the rotation number $\omega$ satisfies a Diophantine condition we can ensure that such returns are not too frequent, and that very close returns do not happen too soon. This will be sufficient to ensure that the exponential evolution of peaks also carries on afterwards. \medskip In principle, the mechanism is not different in the other parameter families discussed in the last section. For the Harper map given by (\ref{eq:projharper}), Figure~\ref{fig:moebius}(a) shows the graph of a projected M\"obius-transformation $x \mapsto \arctan(\frac{1}{\tan(-x) - c})$ for large $c$. As long as $E \gg \lambda$, the fibre maps will all have approximately this shape. \begin{figure}[h!] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-moebius.eps, clip=, width=\linewidth, } \caption{\small Graphs of the projected M\"obius-transformations $x \mapsto \arctan\left(\frac{1}{\tan(-x)-10}\right)$ in (a) and $x \mapsto \arctan\left(\frac{1}{\tan(-x)}\right)$ in (b). } \label{fig:moebius} \end{minipage} \end{figure} As we can see, there will be a repelling fixed point slightly above $-\frac{\pi}{2}$ and an attracting one slightly below 0. This means that if we choose $I_e$ and $I_c$ to be sufficiently small intervals around these fixed points, then we have uniform expansion on $\cal E$, uniform contraction on $\cal C$ and (\ref{eq:expandinginterval}) will be satisfied.
When $E \approx \lambda$, this will still be true on most fibres. The picture only changes where the potential $\cos(2\pi\theta)$ is close to its maximum at $\theta=0$ (Figure~\ref{fig:moebius}(b)). Here $-\frac{\pi}{2} \in I_c$ is mapped close to 0, which means again that the expanding and contracting region start to interact and a first peak is produced. (Thus, the critical region $\cal S$ is again a vertical strip around 0.) As before, this peak is amplified as soon as it enters the expanding region $\cal E$ and thus induces all others. In some sense, the situation for the qpf Arnold circle map is even more similar to the case of the arctan-family, as the forcing is additive again and the fibre maps are clearly $s$-shaped as before. However, the difference is the fact that while the derivative at the stable fixed point indeed vanishes, such that the contraction becomes arbitrarily strong, the maximal expansion factor is at most 2 (at least in the realm of invertibility $\alpha \leq 1$). This explains why the resulting pictures in Figure~\ref{fig:arnold} are much less clear. Roughly speaking, in combination with the limited expansion the peak of the forcing function $\theta \mapsto \sin(2\pi\theta)$ is just `too blunt' to trigger the exponential evolution of peaks as easily as before. When it finally does take place - as the simulations in Figure~\ref{fig:arnold} suggest - the graphs are already too `wrinkled' to give a good picture. But of course, if the shape of the forcing function is a second factor that decides whether the exponential evolution of peaks takes place, then we can also trigger this pattern by choosing one with a very sharp peak. This is exactly what happened in Figure~\ref{fig:arnoldpeak}. Finally, for pinched skew products we refer to \cite{jaeger:2004a} for a more detailed discussion. \begin{figure}[h!]
\noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-structure.eps, clip=, width=\linewidth, } \caption{\small Upper bounding graphs in the pinched systems given by (\ref{eq:pinchedskews}). $\omega$ is the golden mean, the parameter values are (a) $\alpha=3$ and (b) $\alpha=32$. In (b), where the expansion is stronger, there seems to be less structure in comparison to (a). However, this is not a qualitative difference, but can be easily explained by the exponential evolution of peaks. If the expansion is stronger, the peaks of higher order are just not visible anymore, such that only the first few peaks can be seen.} \label{fig:structure} \end{minipage} \end{figure} \begin{bem} The preceding discussion gives a basic understanding of how SNA's are created in the above examples. Although it might be very rudimentary, it can already be used to anticipate a number of observations. Without trying to make things very precise, we want to mention a few: \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item First, it is not hard to guess in which parameter range the expanding and contracting regions start to interact and the torus collision takes place in the above families, e.g.\ $E \approx \lambda$ for the Harper map or $\beta \approx \frac{\pi}{2}$ for the arctan-family. \item Another phenomenon which can be explained is the following: The stronger the expansion and contraction are, i.e.\ the larger the respective parameter is chosen, the less `structure' can be seen in the pictures (see Figure~\ref{fig:structure}). However, obviously this `structure' corresponds exactly to the peaks which are generated. These can only be detected numerically as long as they do not become too small, but of course this happens faster if the expansion and contraction are stronger. Figure~\ref{fig:structure} shows this effect for pinched systems, but it can be observed similarly in all the examples we treated.
In particular, it is also present in the qpf Arnold circle map (\ref{eq:qpfarnold}), which indicates again that the mechanism there is not different from the other examples. \item As already mentioned, the exponential evolution of peaks is easier to trigger if the forcing function has a very distinctive and sharp peak. Figures~\ref{fig:arnold} and \ref{fig:arnoldpeak} illustrate this in the context of the qpf Arnold circle map. \item In \cite{haro/delallave:2006}, the authors study (amongst other things) the parameter-dependence of the minimal distance $\Delta_\beta$ between the stable and unstable invariant curve in a non-smooth bifurcation. Their situation is slightly different to the one considered here, since the dynamics take place on a torus and the attractor touches the repeller from above and below at the same time. Nevertheless, the pictures indicate that a process similar to the one described above takes place. The observation which was made by the authors is that the asymptotic dependence of $\Delta_\beta$ on $|\beta-\beta_c|$ seems to be a power law with exponent 1 as $\beta \ensuremath{\rightarrow} \beta_c$, i.e.\ $\Delta_\beta \sim |\beta-\beta_c|$ (where $\beta_c$ is the bifurcation parameter). Furthermore, this exponent seems to be universal for a certain class of models. At least in the situations we discussed, e.g.\ for (\ref{eq:atanfamily}), the exponential evolution of peaks offers a reasonable explanation for such a scaling behaviour: Since all peaks of the attractor have to touch the repeller at the same time and, according to our heuristics, all further peaks move much faster than the first one, it is the latter which determines the minimal distance of the two curves. However, as this first peak has approximately the shape of the forcing function (see (\ref{eq:firstpeak})), the position of its tip depends linearly on $\beta$.
\end{list} \end{bem} Admittedly, some of the above remarks remain rather speculative unless they are confirmed either by careful numerical studies or rigorous proofs. Nevertheless, what we want to point out is that the mechanism we described offers at least a heuristic explanation for a number of observations which have sometimes been found to be puzzling or even confusing. Further, an intuitive understanding of the process should make it easier to come up with reasonable conjectures, which can then (in the better case) either be proved or at least be confirmed numerically. As already mentioned, the issue we want to concentrate on in this article is a rigorous proof for the existence of SNA. \medskip Concerning the latter, the main problem we will encounter is that we do not a priori know where the tips of the peaks are located. If there is any chance of rigorously describing the exponential evolution of peaks in a quantitative way, they must be located in the expanding region at least most of the time. Otherwise, there would be no plausible mechanism which forces the peaks to become steeper and steeper. But the vertical position is not the only problem. When we use a forcing function with a quadratic maximum, then we do not even know the exact horizontal position: If the tip of one peak is on the fibre $\theta$, then the tip of the next will be close to $\theta+\omega$, but it may be slightly shifted due to the influence of the forcing. In order to explain this, suppose that an upper bounding graph $\varphi^+$ of $T$ is differentiable and has a local minimum at $\theta_0$. The derivative of $\varphi$ at $\theta_0+\omega$ is then given by \begin{equation} \label{eq:peakimage} \varphi'(\theta_0+\omega) \ = \ \frac{\partial}{\partial\theta} (\pi_2 \circ T)(\theta_0,\varphi(\theta_0)) + DT_{\theta_0}(\varphi(\theta_0)) \cdot \varphi'(\theta_0) \ = \ -\beta g'(\theta_0) \ .
\end{equation} (Here we suppose that $T$ has fibre maps of the form $T_\theta(x) = F(x) - \beta g(\theta)$ as in (\ref{eq:atanfamily}).) Consequently, if $g'(\theta_0) \neq 0$, then $\theta_0+\omega$ is not a local minimum. This becomes different if the local minima, which we call `peaks', are sufficiently `sharp and steep'. By this, we mean that both $\lim_{\theta \nearrow \theta_0} -\varphi'(\theta)$ and $\lim_{\theta \searrow \theta_0} \varphi'(\theta)$ are greater than a sufficiently large constant $M$ (depending on the ${\cal C}^1$-norms of $F$ and $g$). Then it can easily be seen from (\ref{eq:peakimage}) that $\theta_0+\omega$ will be a local minimum as well. If in addition $(\theta_0,\varphi(\theta_0))$ is located in the expanding region and the expansion constant is sufficiently large, then the peak at $\theta_0+\omega$ will again be sufficiently sharp and steep (in the above sense). Our claim is now that we can produce such sharp peaks by choosing a forcing function that, like $1-\sin(\pi\theta)$, is only Lipschitz-continuous at its maximum and decays linearly in a neighbourhood. At least for the first peak this is plausible, since we have argued above that at the onset of the exponential evolution of peaks the invariant graph has approximately the shape of the forcing function (see the discussion around (\ref{eq:firstpeak})). For all further peaks we can expect the same, provided that the exponential evolution of peaks is really caused by the mechanism described above, because then the tips of the peaks are located in the expanding region (at least most of the time). However, we will not give a rigorous proof for this claim, since this would require describing the global structure of the invariant graphs. In fact, we argue that it is exactly this `localisation' of the tips of the peaks which helps to overcome the need for such a global description (which would probably be much more complicated on a technical level).
In order to understand this, note that (in case our claim holds), the tips of the peaks just correspond to a single orbit, since then one minimum is mapped to another. Further, as mentioned, we expect that this orbit spends most of the time in the expanding region, and in fact this will already turn out to be sufficient to prove the existence of a SNA: In this case there exists an orbit on the upper bounding graph which has a positive vertical Lyapunov exponent, and this is not compatible with the continuity of the upper bounding graph (the Lyapunov exponent of the upper bounding graph is always non-positive, see e.g.\ Lemma~3.5 in \cite{jaeger:2003}, and due to uniform convergence of the ergodic limits this is true for any of its points). However, during the proof we will obtain even more information about this particular orbit: It does not only have a positive Lyapunov exponent forwards, but also backwards in time. Thus, concerning its Lyapunov exponents the orbit behaves as if it were moving from a sink to a source (and referring to this we will call it a \textit{`sink-source-orbit'}). As it will turn out, it is contained in the intersection of the SNA and the SNR. The existence of such atypical orbits is also well-known for the Harper map, where it is equivalent to the existence of exponentially decaying eigenfunctions for the associated Schr\"odinger operator \footnote{\label{foot:localisation}Suppose $u \in \ell^2$ is a non-zero solution of the eigenvalue equation (\ref{eq:mathieu}) and let $x_n = \arctan(u_{n-1}/u_n)$ (see Section~\ref{Harper}). Further, denote the Harper map given by (\ref{eq:harper}) by $T$. Then, using the formula derived in Footnote~\ref{foot:harper1}, we obtain that \begin{eqnarray*} DT^n_\theta(x_0) & = & \frac{u_0^2+u_{-1}^2}{u_n^2+u_{n-1}^2} \ . \end{eqnarray*} (Note that $u_{n-1}=u_n=0$ is not possible, as otherwise $u=0$.) Consequently, sink-source-orbits correspond to exponentially decaying eigenfunctions.
The existence of such `localised' eigenfunctions for the almost-Mathieu operator was shown by Jitomirskaya in \cite{jitomirskaya:1999} (so-called {\em Anderson localisation}).} and indicates an intersection of the stable and unstable subspaces of the matrix cocycle (see \cite{haro/puig:2006} for a more detailed discussion). Summarising, we can say that the `sharp' peak makes it possible to concentrate on a single orbit instead of a whole sequence of graphs, and the information about this orbit will already be sufficient to establish the existence of a SNA. The fact that the construction in the proof of our main results (Theorems~\ref{thm:snaexistence} and \ref{thm:symmetricsna}), which is based on this idea, works can be seen as an indirect `proof' of the claim we made in the above discussion. \section{Statement of the main results and applications} \label{Main results} In this section we state and discuss the main results of this article and their application to the examples from the introduction. The proofs are postponed until the later sections, unless they can be given in a few lines. In particular, this concerns the construction of sink-source-orbits, which is carried out in Sections \ref{Strategy} to \ref{Symmetricsetting}. Before we turn to results on the non-smoothness of bifurcations in Section~\ref{NonSmoothSaddleNodes}, we provide a general setting in which saddle-node bifurcations in qpf interval maps take place (Section~\ref{GeneralSettingI}), and introduce sink-source-orbits as a criterion for the existence of SNA (Section~\ref{SinkSourceSNA}). \subsection{A general setting for saddle-node bifurcations in qpf interval maps} \label{GeneralSettingI} Obviously, before we can study the non-smoothness of saddle-node bifurcations, we have to provide a setting in which such bifurcations occur (smooth or non-smooth).
In order to do so, we will consider parameter families of maps $T=T_\beta$ which are given by \begin{equation} \label{eq:generalsystem} T_\beta\ensuremath{(\theta,x)} \ := \ (\ensuremath{\theta + \omega},F(x) - \beta \cdot g(\theta)) \ , \end{equation} where we suppose that, given a constant $C>0$, the functions $F$ and $g$ satisfy the following assumptions: \begin{eqnarray} \label{cond:g1} & & g:\ensuremath{\mathbb{T}^{1}} \ensuremath{\rightarrow} [0,1] \textrm{ is continuous and takes the value 1 at least once;} \\ \label{cond:F1} \textstyle & &F : [-2C,2C] \ensuremath{\rightarrow} [-C,C] \textrm{ is continuously differentiable with } F' > 0; \\ & &F \textrm{ has exactly three fixed points } x_-< 0, 0 \ \textrm{and } x_+>0 . \label{cond:F2} \end{eqnarray} Note that if we restrict to parameters $\beta \in [0,C]$, then we can choose $X = [-2C,2C]$ as the driven space, because then $\ensuremath{\mathbb{T}^{1}} \times X$ is always mapped into itself. Of course, this choice is somewhat arbitrary; the only important thing is to fix some driven space $X$ independent of the parameter $\beta$. We also remark that in the situations we will consider later, $F$ is usually a bounded function which is defined on the whole real line. In this case, we will only consider its restriction $F_{|[-2C,2C]}$, where $C$ is any constant larger than $\sup_{x\in\ensuremath{\mathbb{R}}} |F(x)|$. This has the advantage that we obtain a compact phase space in this way. In particular, it allows us to define the global attractor and the bounding graphs as it was done in Section~\ref{BasicDefinitions}~. As we chose the function $g$ to be non-negative, the forcing only `acts downwards'. We will refer to this case as \textit{`one-sided forcing'}. The first problem we will encounter is to restrict the number of invariant graphs which can occur. If there are too many, it will be hard to describe a saddle-node bifurcation in detail.
Fortunately, there exist general results which allow this, without placing too restrictive conditions on the system. We will discuss these in Section~\ref{SaddleNode} (see Theorems~\ref{thm:schwarzian} and \ref{thm:concave}, taken from \cite{jaeger:2003} and \cite{keller:1996}), before giving the proof of Theorem~\ref{thm:saddlenode}~. The most convenient of these criteria is to require $F$ to have negative Schwarzian derivative, which ensures that there can be at most three different invariant graphs (Theorem~\ref{thm:schwarzian}).%
\footnote{\label{foot:schwarzian}The Schwarzian derivative of a ${\cal C}^3$ interval map $F$ is defined as \[ SF \ := \ \frac{F'''}{F'} - \frac{3}{2}\left(\frac{F''}{F'}\right)^2 \ . \] It is intimately related to the cross ratio distortion of the map (see \cite{demelo/vanstrien:1993}), and this relation is exploited in \cite{jaeger:2003} to derive the mentioned statement. This is very similar to the proof of Theorem~\ref{thm:concave} given in Section~\ref{SaddleNode} (see Remark~\ref{bem:schwarzianproof}).} However, in the particular situation we consider it will also be sufficient if $F$ is concave on one suitable interval and uniformly contracting on another. More precisely, we will use the following assumption: \begin{equation} \label{cond:F3} \begin{array}{l} \textrm{Suppose that either } F \textrm{ is } {\cal C}^3 \textrm{ and has negative Schwarzian derivative,} \\ \textrm{or there exists } c \in (x_-,0] \textrm{, such that } F_{|[-2C,x_-)} \textrm{ is uniformly contracting } \\\textrm{and } F_{|[c,2C)}\textrm{ is strictly concave.} \end{array} \end{equation} Now we can state the following result on the existence of saddle-node bifurcations. \begin{thm}[Saddle-node bifurcation] \label{thm:saddlenode} Suppose $F$ and $g$ satisfy (\ref{cond:g1})--(\ref{cond:F3}) and let $X=[-2C,2C]$ and $\beta \in [0,C]$ as above.
Then the lower bounding graph of the system (\ref{eq:generalsystem}), which we denote by $\varphi^-$,%
\footnote{We keep the dependence of $\varphi^-$ on $\beta$ implicit, and the same holds for $\psi$ and $\varphi^+$ below.} is continuous and has negative Lyapunov exponent. Its dependence on $\beta$ is continuous (in ${\cal C}^0$-norm) and monotone: If $\beta$ is increased then $\varphi^-$ moves downwards, uniformly on all fibres. Further, there exists a critical parameter $\beta_c \in (0,C)$, such that the following holds: \begin{list}{(\roman{enumi})}{\usecounter{enumi}} \item If $\beta < \beta_c$, then there exist exactly two more invariant graphs above $\varphi^-$, both of which are continuous. We denote the upper one by $\varphi^+$ and the middle one by $\psi$, such that $\varphi^- < \psi < \varphi^+$. There holds $\lambda(\psi) > 0$ and $\lambda(\varphi^+) < 0$, and the dependence of the graphs on $\beta$ is continuous and monotone: If $\beta$ is increased then $\varphi^+$ moves downwards, whereas $\psi$ moves upwards, uniformly on all fibres. \item If $\beta=\beta_c$, there exist either one or two more invariant graphs above $\varphi^-$. We denote them by $\psi$ and $\varphi^+$ again (allowing $\psi=\varphi^+$), where $\psi \leq \varphi^+$. Further, one of the two following holds: \begin{itemize} \item $\psi$ equals $\varphi^+$ $m$-a.s.\ and $\lambda(\psi)=\lambda(\varphi^+)=0$ {\em (Smooth Bifurcation)}.%
\footnote{Of course, the natural possibility here is that $\psi$ and $\varphi^+$ are continuous and coincide everywhere. However, there is also a second, rather pathological alternative, which cannot be excluded: It might happen that there exists no continuous invariant graph apart from $\varphi^-$, but two semi-continuous invariant graphs $\psi$ and $\varphi^+$ which are $m$-a.s.\ equal. This is discussed in more detail in Section~\ref{Essentialclosure}~. Whether the bifurcation should really be called smooth in this case is certainly debatable.
However, as the non-smooth bifurcations we prove to exist later on all involve non-zero Lyapunov exponents, we prefer this as a working definition in the context of this paper.} \item $\psi\neq\varphi^+$ $m$-a.s., $\lambda(\psi)>0$, $\lambda(\varphi^+)<0$ and both invariant graphs are non-continuous \emph{(Non-smooth Bifurcation)}. \end{itemize} In any case, the set $B := [\psi,\varphi^+]$ is compact and the set %
$\{\theta \in \ensuremath{\mathbb{T}^{1}} \mid \psi(\theta) = \varphi^+(\theta) \}$ is dense in $\ensuremath{\mathbb{T}^{1}}$.%
\footnote{A compact set $B \ensuremath{\subseteq} \ensuremath{\mathbb{T}^{1}} \times X$ is called \textit{pinched}, if for a dense set of $\theta$ the set $B_\theta := \{ x \in X \mid \ensuremath{(\theta,x)} \in B\}$ consists of a single point. Thus, the last property could also be stated as `the set $B$ is pinched'. } \item If $\beta > \beta_c$, then $\varphi^-$ is the only invariant graph. \end{list} \end{thm} The proof of Theorem~\ref{thm:saddlenode}, together with some preliminary results which are needed, is given in Section~\ref{SaddleNode}~. \medskip When $F$ depends on an additional parameter, it is also natural to study the dependence of the critical parameter $\beta_c$ on this parameter. We refrain from producing a general statement and just concentrate on the $\arctan$-family (\ref{eq:atanfamily}) given in the introduction. Let \[ F_\alpha(x) \ := \ \frac{\arctan(\alpha x)}{\arctan(\alpha)} \ . \] \begin{lem} \label{lem:parameterdependence} Let $\beta_0(\alpha)$ denote the critical parameter $\beta_c$ of the saddle-node bifurcation in Theorem~\ref{thm:saddlenode} with $F = F_\alpha$ in (\ref{eq:generalsystem}). Then $\alpha \mapsto \beta_0(\alpha)$ is continuous and strictly monotonically increasing in $\alpha$. \end{lem} Again, the proof is given in Section~\ref{SaddleNode}~.
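Although the proof of Lemma~\ref{lem:parameterdependence} is analytic, the monotone dependence of the critical parameter on $\alpha$ is easy to observe numerically. The following sketch estimates it by bisection; the forcing function $g$, the golden-mean rotation number and the orbit-based test are ad-hoc illustrative choices, not part of any rigorous argument.

```python
import math

# Heuristic bisection estimate of the critical parameter beta_0(alpha) for
#   T_beta(theta, x) = (theta + omega, arctan(alpha*x)/arctan(alpha) - beta*g(theta)).
# All concrete choices below (g, omega, orbit length) are illustrative only.

OMEGA = (math.sqrt(5.0) - 1.0) / 2.0   # golden mean rotation number

def g(theta):
    # one-sided forcing with a single maximum g(0) = 1 (example choice)
    return 1.0 - math.sin(math.pi * (theta % 1.0))

def orbit_stays_up(alpha, beta, n=3000):
    """Iterate from the upper fixed point x_+ = 1 of F_alpha.  If the orbit
    drops below 0 we take this as (heuristic) evidence that the upper
    attractor phi^+ no longer exists, i.e. beta > beta_0(alpha)."""
    c = math.atan(alpha)
    theta, x = 0.0, 1.0
    for _ in range(n):
        x = math.atan(alpha * x) / c - beta * g(theta)
        theta = (theta + OMEGA) % 1.0
        if x < 0.0:
            return False
    return True

def beta_critical(alpha, iters=25):
    """Bisect on [0, 1] for the smallest beta at which the orbit test fails."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if orbit_stays_up(alpha, mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Tabulating beta_critical(alpha) for increasing values of $\alpha$ can then be compared with the strict monotonicity asserted in the lemma, within the obvious limitations of such a heuristic.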
We note that while continuity follows under much more general assumptions, the monotonicity depends on the right scaling of the parameter family, namely on the fact that the fixed points of $F_\alpha$ do not depend on $\alpha$. \subsection{Sink-source-orbits and the existence of SNA} \label{SinkSourceSNA} In this subsection we consider a slightly more general situation than in the last, and suppose that \begin{eqnarray}\label{eq:T1} && T \textrm{ is a qpf monotone interval map};\\\label{eq:T2} && \textrm{All fibre maps } T_\theta \textrm{ are differentiable with derivative } DT_\theta;\hspace{3eM}\\\label{eq:T3} && \ensuremath{(\theta,x)} \mapsto DT_\theta(x) \textrm{ is continuous and strictly positive.} \end{eqnarray} In particular, this applies to parameter families which satisfy (\ref{cond:g1})--(\ref{cond:F2}). \medskip In order to formulate the statements of this section, we have to introduce different Lyapunov exponents. Let $\ensuremath{(\theta,x)} \in \ensuremath{\mathbb{T}^{1}} \times X$. Then the {\em (vertical) finite-time forward} and {\em backward Lyapunov exponents} are defined as \begin{equation} \label{eq:finitelyap} \lambda^+(\theta,x,n)\ := \ \ensuremath{\frac{1}{n}}\ensuremath{\sum_{i=0}^{n-1}} \log(DT_{\theta+i\omega}(\ensuremath{T_{\theta}}^i(x))) \end{equation} and \begin{equation} \label{eq:finitebacklyap} \lambda^-(\theta,x,n)\ := \ -\ensuremath{\frac{1}{n}}\ensuremath{\sum_{i=1}^n} \log(DT_{\theta-i\omega}(\ensuremath{T_{\theta}}^{-i}(x))) \ . \end{equation} When dealing with parameter families as in (\ref{eq:generalsystem}), we will write $\lambda^\pm(\beta,\theta,x,n)$ for the pointwise finite-time Lyapunov exponents with respect to the map $T_\beta$ if we want to keep the dependence on the parameter $\beta$ explicit. 
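The finite-time exponents (\ref{eq:finitelyap}) and (\ref{eq:finitebacklyap}) are straightforward to evaluate numerically. The following sketch does this for the fibre maps of the $\arctan$-family; the concrete choices of $\alpha$, of the golden-mean rotation number and of the forcing function $g$ are illustrative assumptions only.

```python
import math

# Finite-time vertical Lyapunov exponents for the qpf skew product
#   T(theta, x) = (theta + omega, F(x) - beta * g(theta)),
# with F(x) = arctan(alpha*x)/arctan(alpha).  Parameter values are examples.

ALPHA = 10.0
OMEGA = (math.sqrt(5.0) - 1.0) / 2.0   # golden mean rotation number

def F(x):
    return math.atan(ALPHA * x) / math.atan(ALPHA)

def F_inv(y):
    # inverse of F, defined for |y| < (pi/2)/arctan(alpha)
    return math.tan(y * math.atan(ALPHA)) / ALPHA

def dF(x):
    # F'(x); this is also the fibre derivative DT_theta(x)
    return ALPHA / (math.atan(ALPHA) * (1.0 + (ALPHA * x) ** 2))

def g(theta):
    return 1.0 - math.sin(math.pi * (theta % 1.0))

def lyap_forward(theta, x, n, beta):
    """lambda^+(theta, x, n): average of log DT along the forward orbit."""
    s = 0.0
    for _ in range(n):
        s += math.log(dF(x))
        # g is evaluated at the current theta before the base rotation
        theta, x = (theta + OMEGA) % 1.0, F(x) - beta * g(theta)
    return s / n

def lyap_backward(theta, x, n, beta):
    """lambda^-(theta, x, n): minus the average of log DT along the
    backward orbit (the base point of step i is theta - i*omega)."""
    s = 0.0
    for _ in range(n):
        theta = (theta - OMEGA) % 1.0   # previous fibre
        x = F_inv(x + beta * g(theta))  # invert the fibre map
        s -= math.log(dF(x))
    return s / n
```

For $\beta=0$ the fixed points of $F$ make the outcome transparent: at $x=0$ the forward exponent is $\log F'(0)>0$, at $x=\pm 1$ it is $\log F'(\pm 1)<0$, and the signs are reversed for the backward exponents. Scanning such finite-time data over a parameter interval is exactly the kind of information which Lemma~\ref{lem:sinksourceshadowing} below converts into a rigorous existence statement.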
As it is not always possible to ensure that the finite-time exponents converge as $n\to\infty$, we distinguish between upper and lower Lyapunov exponents: The \textit{(vertical) upper forward Lyapunov exponent} of a point $\ensuremath{(\theta,x)} \in \ensuremath{\mathbb{T}^{1}} \times X$ is defined as \begin{equation} \label{eq:pointwiselyap} \lambda^+(\theta,x)\ := \ \limsup_{n\to\infty} \lambda^+(\theta,x,n) \ . \end{equation} Similarly, the \textit{upper backward Lyapunov exponent} is defined as \begin{equation} \label{eq:backpointwiselyap} \lambda^-(\theta,x)\ := \ \limsup_{n\to\infty} \lambda^-(\theta,x,n) \ . \end{equation} In the same way, we define the {\em lower forward} and {\em backward Lyapunov exponents}, replacing $\limsup$ by $\liminf$: \begin{eqnarray} \label{eq:lowerpointwiselyap} \lambda_{\textrm{low}}^+(\theta,x) & := & \liminf_{n\to\infty} \lambda^+(\theta,x,n) \ ; \\ \label{eq:lowerbackpointwiselyap} \lambda_{\textrm{low}}^-(\theta,x)& := &\liminf_{n\to\infty} \lambda^-(\theta,x,n) \ . \end{eqnarray} Again, we write $\lambda^\pm(\beta,\theta,x),\lambda^\pm_{\textrm{low}}(\beta,\theta,x)$ if we want to keep the dependence on a parameter $\beta$ explicit. For any invariant graph $\varphi$, the Birkhoff ergodic theorem implies that for $m$-a.e.\ $\theta \in \ensuremath{\mathbb{T}^{1}}$ the $\limsup$ and the $\liminf$ coincide (i.e.\ the respective limits exist and we do not have to distinguish between $\lambda^\pm$ and $\lambda^\pm_{\textrm{low}}$) and there holds $\lambda^+(\theta,\varphi(\theta)) = -\lambda^-(\theta,\varphi(\theta)) = \lambda(\varphi)$. Further, when $\varphi$ is continuous the Uniform Ergodic Theorem (e.g.\ \cite{katok/hasselblatt:1995}) implies that this holds for all $\theta\in\ensuremath{\mathbb{T}^{1}}$ and the convergence is uniform on $\ensuremath{\mathbb{T}^{1}}$. Now, consider the situation where $\psi$ is an unstable and $\varphi$ is a stable continuous invariant graph, and there is no other invariant graph in between.
Then points on the repeller (or \textit{source}) $\psi$ will have a positive forward and a negative backward Lyapunov exponent, and for points on the attractor (or \textit{sink}) $\varphi$ it is just the other way around. Further, all points between $\psi$ and $\varphi$ will converge to $\varphi$ forwards and to $\psi$ backwards in time, thus moving from source to sink, and consequently both their exponents will be negative. These three cases should be considered as more or less typical. In contrast to this, the remaining possibility of both Lyapunov exponents being positive is rather strange, as it would suggest that the orbit somehow moves from a sink to a source. This motivates the following definition: \begin{definition}[Sink-source-orbits]\label{def:sinksource} Suppose $T$ satisfies the assumptions (\ref{eq:T1})--(\ref{eq:T3}). Then we call an orbit of $T$ which has both positive forward and backward lower Lyapunov exponent a \textbf{sink-source-orbit}. If an orbit has both positive forward and backward upper Lyapunov exponent, then we call it a \textbf{weak sink-source-orbit}. \end{definition} Obviously, every sink-source-orbit is also a weak sink-source-orbit. As mentioned in the introduction, the existence of sink-source-orbits is already known for the Harper map (see Footnote~\ref{foot:localisation}), where they only occur together with SNA (i.e.\ in the non-uniformly hyperbolic case, as discussed in Section~\ref{Harper}). This is not a mere coincidence: \begin{thm} \label{thm:sinksourcesna} Suppose $T$ satisfies the assumptions (\ref{eq:T1})--(\ref{eq:T3}). Then the existence of a weak sink-source-orbit implies the existence of a SNA (and similarly of a SNR). \end{thm} The proof is given in Section~\ref{Sinksourceorbits}~. \begin{bem}\label{bem:sinksource} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item In the proofs of Theorems~\ref{thm:snaexistence} and \ref{thm:symmetricsna} below, we actually construct sink-source-orbits.
Thus, for the main purpose of this paper it would not have been necessary to introduce weak sink-source-orbits. However, since the existence of the latter is a much weaker assumption than the existence of a sink-source-orbit (see also (b) and (c) below), it seemed appropriate to state Theorem~\ref{thm:sinksourcesna} in this way. \item In some situations, it is also possible to obtain results in the opposite direction. For example, if $M$ is a minimal set which contains both a SNA and a SNR, then weak sink-source-orbits are dense (even residual) in $M$. In order to see this, note that, in the above situation, for some constant $c>0$ the set $M$ contains a point $(\theta_1,x_1)$ with $\lambda^+(\theta_1,x_1) > c$ and a point $(\theta_2,x_2)$ with $\lambda^-(\theta_2,x_2) > c$. Due to minimality, it follows that the open sets $$A_n := \{ \ensuremath{(\theta,x)} \in M \mid \exists m\geq n : \lambda^+(\theta,x,m) > c\}$$ and $$B_n := \{ \ensuremath{(\theta,x)} \in M \mid \exists m \geq n: \lambda^-(\theta,x,m) > c\}$$ are both dense in $M$. By Baire's Theorem, their intersection $S:=\bigcap_{n\in\N} A_n\cap B_n$ is residual, and obviously every point in $S$ belongs to a weak sink-source-orbit. \item The preceding remark becomes false if `weak sink-source-orbit' is replaced by `sink-source-orbit'. In fact, it is well-known that SNA may exist in the absence of sink-source-orbits, even if there is a minimal set which contains both a SNA and a SNR. Examples are provided by the Harper map: As we have discussed in Section~\ref{Harper}, the existence of a sink-source-orbit is equivalent to the existence of an exponentially decaying eigenfunction for the corresponding Schr\"odinger operator. However, there are situations in which, for certain energies in the spectrum of $H_{\lambda,\theta}$ (which does not depend on $\theta$), there exist no such `localised' eigenfunctions, independent of $\theta$.
This follows for example from Theorem 5 in \cite{puig:2006}, together with the concept of Aubry-duality, which is explained in Section 2 of the same paper (the original source is \cite{aubry/andre:1980}). The fact that there is a (unique) minimal set which contains both a SNA and a SNR in these examples is shown in \cite[Section 4.17]{herman:1983}. For a more detailed discussion of the implications of spectral-theoretic results for the Harper map, we also refer to \cite{haro/puig:2006} (this particular issue is addressed in Section V(C)). \end{list} \end{bem} An observation which was made frequently in numerical studies of SNA is a very unusual distribution of the finite-time Lyapunov exponents. The interesting fact is that although in the limit all observed Lyapunov exponents were negative, the distribution of the finite-time Lyapunov exponents still showed a rather large proportion of positive values, even at very large times (see \cite{pikovski/feudel:1995}, \cite{prasad/negi/ramaswamy:2001}). Of course, the existence of a sink-source-orbit could be a possible explanation for such a behaviour. On the other hand, we can also use information about the finite-time Lyapunov exponents to establish the existence of a sink-source-orbit, and this will play a key role in the proof of our main results: \begin{lem} \label{lem:sinksourceshadowing} Let $I$ be a compact subset of $\ensuremath{\mathbb{R}}$ and $(T_\beta)_{\beta\in I}$ be a parameter family of qpf monotone interval maps which all satisfy the assumptions (\ref{eq:T1})--(\ref{eq:T3}) above. Further, assume that the dependence of the maps $T_\beta$ and $\ensuremath{(\theta,x)} \mapsto DT_{\beta,\theta}(x)$ on $\beta$ is continuous (w.r.t.\ the topology of uniform convergence).
Suppose there exist sequences of integers $l^-_1,l^-_2,\ldots \nearrow \infty$ and $l^+_1,l^+_2,\ldots \nearrow \infty$, a sequence $(\theta_p,x_p)_{p\geq 1}$ of points in $\ensuremath{\mathbb{T}^{1}} \times X$ and a sequence of parameters $(\beta_p)_{p\geq 1}$, such that for all $p\in\N$ there holds \[ \lambda^+(\beta_p,\theta_p,x_p,j) \ > \ c \ \ \ \forall j=1\ensuremath{,\ldots,} l^+_p \] and \[ \lambda^-(\beta_p,\theta_p,x_p,j) \ > \ c \ \ \ \forall j=1\ensuremath{,\ldots,} l^-_p \ \] for some constant $c>0$. Then there is at least one $\beta_0 \in I$, such that there exists a sink-source-orbit (and thus a SNA-SNR-pair) for the map $T_{\beta_0}$. \end{lem} \textit{Proof.}\ In fact, the statement is a simple consequence of compactness and continuity: By going over to suitable subsequences if necessary, we can assume that the sequences $(\theta_p)_{p\geq 1},(x_p)_{p\geq 1}$ and $(\beta_p)_{p\geq 1}$ converge. Denote the limits by $\theta_0,\ x_0$ and $\beta_0$, respectively. Now, due to the assumptions on $T_\beta$ and $DT_{\beta,\theta}(x)$ the functions $(\beta,\theta,x) \mapsto \lambda^\pm(\beta,\theta,x,j)$ are continuous for each fixed $j\in\N$. Thus, we obtain \[ \lambda^\pm(\beta_0,\theta_0,x_0,j) \ = \ \lim_{p\ensuremath{\rightarrow}\infty} \lambda^\pm(\beta_p,\theta_p,x_p,j) \ \geq \ c \ \quad \forall j \in \N\ , \] such that \[ \lambda_{\textrm{low}}^\pm(\beta_0,\theta_0,x_0) \ = \ \liminf_{j \ensuremath{\rightarrow} \infty} \lambda^\pm(\beta_0,\theta_0,x_0,j) \ \geq \ c \ > \ 0 \ . \] Hence, the orbit of $(\theta_0,x_0)$ is a sink-source-orbit for the map $T_{\beta_0}$. {\raggedleft $\Box$ \\} \subsection{Non-smooth bifurcations} \label{NonSmoothSaddleNodes} In order to formulate the results concerning the non-smoothness of bifurcations and the existence of SNA, we first have to quantify the qualitative features of the functions $F$ and $g$ which were used in the discussion in Section~\ref{Mechanism}~. 
Some of the assumptions we will make below are quite specific and could in principle be formulated in a more general way. However, as the proofs of Theorems~\ref{thm:snaexistence} and \ref{thm:symmetricsna} are quite involved anyway, we refrain from introducing any further parameters, even if this could lead to slightly more flexible results. As we have mentioned before, our main goal here is just to show that the presented approach does lead to rigorous results at all; we do not aim for the greatest possible generality. Hence, we content ourselves with providing a statement which applies, after suitable rescaling and reparametrisation, to at least two of the main examples from the introduction (see Sections \ref{App:atanfamily} and \ref{App:Harper}). \medskip First of all, we will suppose that $\gamma$ and $\alpha$ are positive constants which satisfy \begin{eqnarray} \gamma & \leq & 1/16 \ ; \label{cond:gamma0} \\ \ensuremath{\sqrt{\alpha}} & > & 4/\gamma \ \ \geq \ 64 \ \label{cond:alphagamma0} \ .
\end{eqnarray} Further, we will assume (in addition to (\ref{cond:g1})--(\ref{cond:F3})), that \begin{eqnarray} && F([-3,3]) \ensuremath{\subseteq} [-3/2,3/2] \ \textrm{ (in other words\ } C=3/2 \textrm{ in (\ref{cond:F1}))}; \\ && F(0) = 0\ \textrm{ and }\ F(\pm x_\alpha) = \pm x_\alpha \ \textrm{ where } \ {\textstyle x_\alpha \ := \ 1 + \frac{2}{\sqrt{\alpha}} } \ ; \label{cond:Ffixedpoints} \\ && 2\alpha^{-2} \ \leq \ F'(x) \ \leq \ \alpha^2 \hspace{2eM} \forall x \in [-3,3] \ ; \label{cond:Funiformbounds} \\ && F'(x) \ \geq \ 2\alpha^{\ensuremath{\frac{1}{2}}} \hspace{5.5eM} \forall x \in \overline{B_\frac{2}{\alpha}(0)} \ ; \label{cond:Fexpansion} \\ && F'(x) \ \leq \ {\textstyle \ensuremath{\frac{1}{2}}\alpha^{-\ensuremath{\frac{1}{2}}} } \hspace{4.7eM} \forall x : |x| \geq \gamma \ ; \label{cond:Fcontraction}\\ &&F({\textstyle\ensuremath{\frac{1}{\alpha}}}) \ \geq \ 1-\gamma \ \textrm{ and } \ F(-{\textstyle \ensuremath{\frac{1}{\alpha}}}) \ \leq \ -(1-\gamma) \ . \label{cond:Fmapsover} \end{eqnarray} Finally, we will require that \begin{equation} g : \ensuremath{\mathbb{T}^{1}} \to [0,1] \ \textrm{ has the unique maximum } g(0) = 1 \end{equation} and for some constants $L_1,L_2>0$ there holds \begin{eqnarray} && g \ \textrm{ is Lipschitz-continuous with Lipschitz constant } L_1\ \label{eq:g-lipschitz} ;\\ && g(\theta) \ \leq \ \max\{ 1-3\gamma , 1-L_2\cdot d(\theta,0) \} \ , \label{cond:sharppeak} \end{eqnarray} where $d$ denotes the usual Euclidean distance on the circle. Essentially, this quantifies the properties which we have already mentioned in Section~\ref{Mechanism}: $F$ has three fixed points (\ref{cond:Ffixedpoints}), acts highly expanding close to 0 (\ref{cond:Fexpansion}) and highly contracting further away (\ref{cond:Fcontraction}). 
Thus, the expanding region $\cal E$ from Section~\ref{Mechanism} corresponds to $\ensuremath{\mathbb{T}^{1}} \times \overline{B_\frac{2}{\alpha}(0)}$, whereas the contracting region $\cal C$ corresponds to $\ensuremath{\mathbb{T}^{1}} \times [\gamma,3]$. Further, (\ref{cond:Fmapsover}) ensures that $\ensuremath{\mathbb{T}^{1}} \times \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ is mapped over itself in a very strong sense, and finally condition (\ref{cond:sharppeak}) makes precise what we meant when speaking of a `sharp peak' before. The last assumption we need is a Diophantine condition on the rotation number $\omega$. We use the notation \begin{equation} \ensuremath{\omega_n} := n\omega \bmod 1 \ \end{equation} and suppose that there exist constants $c,d > 0$, such that \begin{equation} \label{cond:diophantine} d(\ensuremath{\omega_n},0) \ \ \geq \ \ c \cdot n^{-d} \ \ \ \forall n \in \N \ . \end{equation} (Here $d(\theta,\theta')$ denotes the usual Euclidean distance of two points $\theta,\theta'\in \ensuremath{\mathbb{T}^{1}}$.) \begin{thm} \label{thm:snaexistence} Suppose $\alpha,\gamma,F$ and $g$ are chosen such that (\ref{cond:g1})--(\ref{cond:F3}) and (\ref{cond:gamma0})--(\ref{cond:sharppeak}) hold. Further, assume that $\omega$ satisfies the Diophantine condition (\ref{cond:diophantine}) and let $$T_\beta\ensuremath{(\theta,x)} \ = \ (\ensuremath{\theta + \omega},F(x)-\beta g(\theta))$$ as in (\ref{eq:generalsystem}). Let $\beta_c\in(0,3/2)$ be the critical parameter of the saddle-node bifurcation described in Theorem~\ref{thm:saddlenode}~. Then there exist constants $\gamma_0 = \gamma_0(L_1,L_2,c,d)>0$ and $\alpha_0 = \alpha_0(L_1,L_2,c,d)>0$ with the following property: If $\gamma < \gamma_0$ and $\alpha > \alpha_0$, then there exists a sink-source-orbit for the system $T_{\beta_c}$. 
Consequently, there exists a SNA (the invariant graph $\varphi^+$ in Theorem~\ref{thm:saddlenode}(ii)) and a SNR ($\psi$ in Theorem~\ref{thm:saddlenode}(ii)), and both objects have the same essential closure.%
\footnote{See Section~\ref{Essentialclosure} for the definition of the essential closure.} \end{thm} The proof of this theorem is given in Sections \ref{Strategy}--\ref{Construction}; an outline of the strategy is given at the beginning of Section~\ref{Strategy}. \begin{bem}\label{bem:snaexistence} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item We remark that the existence of a sink-source-orbit in the parameter family $T_\beta$ in the above theorem does not depend on the statement of Theorem~\ref{thm:saddlenode}. Even if the assumptions (\ref{cond:g1})--(\ref{cond:F3}) are dropped and Theorem~\ref{thm:saddlenode} no longer applies, we still obtain the existence of a parameter $\beta_c$ for which $T_{\beta_c}$ has a sink-source-orbit and a SNA-SNR-pair, provided $\gamma$ is sufficiently small and $\alpha$ sufficiently large. However, in this case $\beta_c$ is not necessarily unique anymore. Further, it is not possible to say whether it is a bifurcation parameter, nor to control the number of invariant graphs which might occur. \item The dependence of $\gamma_0$ and $\alpha_0$ on $L_1,L_2,c$ and $d$ can be made explicit. More precisely, the conditions which have to be satisfied are (\ref{cond:alpha1}), (\ref{cond:hfunctions1}), (\ref{cond:hfunctions2}) and (\ref{cond:u})--(\ref{cond:alpha3}). Conditions (\ref{cond:hfunctions1}) and (\ref{cond:hfunctions2}) are somewhat implicit, but once the parameters $u$ and $v$ are fixed according to (\ref{cond:u})--(\ref{cond:sigma}), explicit formulas can be derived from the proof of Lemma~\ref{lem:omegaestimates}~.
\item Numerical observations (as well as the statement of the above theorem) suggest that there might be a critical parameter $\alpha^*$, such that the saddle-node bifurcation in the family $\ensuremath{\tilde{T}}_{\alpha,\beta}$ with fixed $\alpha$ is smooth whenever $\alpha<\alpha^*$ and non-smooth whenever $\alpha>\alpha^*$. However, whether this is really the case is completely open. \end{list} \end{bem} As we have mentioned in Section~\ref{Mechanism}, the sharp peak of the forcing function leads to a localisation of the sink-source-orbit. In fact, its construction in the later sections yields enough information to determine it precisely: \begin{adde}\label{adde} In the situation of Theorem~\ref{thm:snaexistence}, denote the SNA by $\varphi^+$ and the SNR by $\psi$. Then the point $(\omega,\varphi^+(\omega))$ belongs to a sink-source-orbit.%
\footnote{There might be more than one sink-source-orbit, but this is the particular one which we will construct in the later sections.} Further, this sink-source-orbit is contained in the intersection $\Phi^+\cap\Psi$ (which means $\varphi^+(\omega)=\psi(\omega)$). \end{adde} The proof is given in Section~\ref{Construction}.\medskip Next, we turn to the existence of SNA in symmetric systems. In order to do so, we have to modify the assumptions on $F$ and $g$. First of all, instead of (\ref{cond:g1}) and (\ref{cond:F3}) we will assume that \begin{eqnarray} && \label{cond:g-pitchfork} g:\ensuremath{\mathbb{T}^{1}} \to [-1,1] \ \textrm{ is continuous and has the unique maximum } g(0)=1 ; \\ &&\label{eq:F-has-schwarzian} F \textrm{ is } {\cal C}^3 \textrm{ and has negative Schwarzian derivative.} \end{eqnarray} (The alternative in (\ref{cond:F3}) only works for one-sided forcing.) Further, we will require the following symmetry conditions: \begin{eqnarray} && \label{cond:Fsymmetry} F(-x) \ = \ -F(x) \ ; \\&& \label{cond:gsymmetry} \textstyle g(\theta+\ensuremath{\frac{1}{2}}) \ = \ -g(\theta) \ .
\end{eqnarray} Finally, (\ref{cond:sharppeak}) will be replaced by \begin{equation} \textstyle \label{cond:symmetricpeak} |g(\theta)| \ \leq \ \max\left\{1-3\gamma,1-L_2\cdot d(\theta,\{0,\ensuremath{\frac{1}{2}}\})\right\} \ . \end{equation} Note that (\ref{cond:Fsymmetry}) and (\ref{cond:gsymmetry}) together imply that the map $T=T_\beta$ given by (\ref{eq:generalsystem}) has the following symmetry property: \begin{equation} \label{eq:systemsymmetry} -\ensuremath{T_{\theta}}(x) \ = \ T_{\theta+\ensuremath{\frac{1}{2}}}(-x) \end{equation} Now suppose that $\varphi$ is a $T$-invariant graph. Then due to (\ref{eq:systemsymmetry}) the graph given by \begin{equation} \label{eq:symmetricgraph} \textstyle \overline{\varphi}(\theta) \ := \ -\varphi(\theta+\ensuremath{\frac{1}{2}}) \end{equation} is invariant as well. In particular, this implies that the upper and lower bounding graphs satisfy $\varphi^+(\theta) = -\varphi^-(\theta+\ensuremath{\frac{1}{2}})$, and if one of these graphs undergoes a bifurcation, then the same must be true for the second one as well. As the negative Schwarzian derivative of $F$ will allow us to conclude that there is only one other invariant graph $\psi$ apart from the bounding graphs $\varphi^\pm$, this implies that any possible collision between invariant graphs has to involve all three invariant graphs at the same time and must therefore be a pitchfork bifurcation. However, as we have mentioned before, due to the lack of monotonicity in the symmetric setting we cannot ensure that there is a unique bifurcation point. Nevertheless, we obtain the following result concerning the existence of SNA with symmetry: \begin{thm} \label{thm:symmetricsna} Suppose $\gamma,\alpha,F$ and $g$ are chosen, such that (\ref{cond:gamma0})--(\ref{cond:Fmapsover}), (\ref{eq:g-lipschitz}) and (\ref{cond:g-pitchfork})--(\ref{cond:symmetricpeak}) hold. 
Further, assume $\omega$ satisfies the Diophantine condition (\ref{cond:diophantine}) and let $$T_\beta\ensuremath{(\theta,x)} \ = \ (\ensuremath{\theta + \omega},F(x)-\beta g(\theta))$$ as in (\ref{eq:generalsystem}). Then there exist constants $\gamma_0 = \gamma_0(L_1,L_2,c,d)>0$ and $\alpha_0 = \alpha_0(L_1,L_2,c,d)>0$ with the following property: If $\gamma < \gamma_0$ and $\alpha > \alpha_0$, then there is a parameter $\beta_c$ such that there exist two SNA $\varphi^-$ and $\varphi^+$ and a SNR $\psi$ with $\varphi^- \leq \psi \leq \varphi^+$ for $T_{\beta_c}$. Further, there holds $\esscl{\Phi^-} = \esscl{\Psi} = \esscl{\Phi^+}$, and the invariant graphs satisfy the symmetry equations \[ \textstyle \varphi^-(\theta) = -\varphi^+(\theta + \ensuremath{\frac{1}{2}}) \quad \textrm{and} \quad \psi(\theta) = -\psi(\theta+\ensuremath{\frac{1}{2}}) \ . \] \end{thm} \begin{bem} As in Theorem~\ref{thm:snaexistence}, the dependence of $\gamma_0$ and $\alpha_0$ on $L_1,L_2,c$ and $d$ can be made explicit (compare Remark~\ref{bem:snaexistence}(b)). The conditions which have to be satisfied are (\ref{cond:alpha1}), (\ref{cond:hfunctions1}), (\ref{cond:hfunctions2}), (\ref{cond:u})--(\ref{cond:alpha3}), (\ref{cond:alphagammasymmetric}) and (\ref{cond:gvalues}). \end{bem} \subsection{Application to the parameter families} \label{ex:atanexample} \label{Applications} The assumptions on $F$ and $g$ used in Theorems~\ref{thm:snaexistence} and \ref{thm:symmetricsna} are somewhat technical and might seem very restrictive. However, in this subsection we will see that they are more flexible than they might appear at first sight (although there are surely some constraints). In particular, after performing some surgery we can apply them to at least two of the parameter families from the introduction, namely the $\arctan$-family with additive forcing and the Harper map.
In both cases, the respective parameters have to be chosen sufficiently large, but of course this goes perfectly well with the statement of Theorem~\ref{thm:snaexistence}~. As a consequence, the respective corollaries become considerably easier to formulate. The qpf Arnold map then demonstrates the limits of Theorem~\ref{thm:snaexistence}, since it is not possible to apply the result in this case. This is briefly discussed in Subsection~\ref{App:Remarks}~. \subsubsection{Application to the arctan-family.} \label{App:atanfamily} Applied to the $\arctan$-family with additive forcing, Theorem~\ref{thm:snaexistence} yields the following: \begin{cor} \label{cor:atanfamily} Suppose $\omega$ satisfies the Diophantine condition (\ref{cond:diophantine}). Then there exists $\alpha_0 = \alpha_0(c,d)$ such that for all $\alpha>\alpha_0$ the system $T_{\alpha,\beta}$ given by (\ref{eq:atanfamily}) undergoes a non-smooth saddle-node bifurcation as the parameter $\beta$ is increased from 0 to 1. \end{cor} \begin{bem} As already mentioned in Section~\ref{AtanFamily}, the above statement remains true if the $\arctan$ in (\ref{eq:atanfamily}) is replaced by the map $x \mapsto\frac{x}{1+|x|}$, or any other function which has similar scaling properties. This will become obvious in the following proof, but we refrain from producing a more general statement here. \end{bem} \textit{Proof of Corollary~\ref{cor:atanfamily}~.} Since the system (\ref{eq:atanfamily}) does not satisfy (\ref{cond:Ffixedpoints}), we cannot apply Theorem~\ref{thm:snaexistence} directly. Therefore, we start by considering a slightly rescaled version of (\ref{eq:atanfamily}). Let \[ \ensuremath{\tilde{F}}_\alpha(x) \ := \ C(\alpha)\cdot \arctan(\alpha^{\frac{4}{3}}x) \ \ \textrm{ where } \ \ C(\alpha) \ := \ \frac{ 1+\frac{2}{ \sqrt{\alpha} } }{\arctan(\alpha^\frac{4}{3} + 2\alpha^\frac{5}{6})} \ \] and \[ g(\theta) \ := \ 1 - \sin(\pi\theta) \ .
\] Note that $\ensuremath{\tilde{F}}_\alpha$ always satisfies (\ref{cond:Ffixedpoints}). The important thing we have to ensure is that whenever we fix a suitably small $\gamma$, such that (\ref{cond:gamma0}), (\ref{cond:sharppeak}) and any additional smallness conditions on $\gamma$ which appear later on are satisfied, then (\ref{cond:Fmapsover}) holds for all sufficiently large values of $\alpha$. This means that we can first fix $\gamma$, and then ensure that all inequalities involving $\alpha$ alone or both $\alpha$ and $\gamma$, such as (\ref{cond:alphagamma0}), hold by choosing $\alpha$ sufficiently large, without worrying about (\ref{cond:Fmapsover}). In this particular case it is easy to see that $\ensuremath{\tilde{F}}_\alpha(\ensuremath{\frac{1}{\alpha}}) = C(\alpha) \cdot \arctan(\alpha^{\ensuremath{\frac{1}{3}}}) \ensuremath{\rightarrow} 1$ as $\alpha \ensuremath{\rightarrow} \infty$ (note that $\lim_{\alpha\ensuremath{\rightarrow}\infty} C(\alpha) = \frac{2}{\pi}$), which is exactly what we need. Now, if $\gamma$ is chosen small enough, $g(\theta) = 1-\sin(\pi\theta)$ clearly satisfies (\ref{cond:sharppeak}), for example with $L_2 := 2$. The Lipschitz constant $L_1$ is $\pi$. Thus, it remains to check the assumptions on the derivative of $\ensuremath{\tilde{F}}_\alpha$. To that end, note that \[ \ensuremath{\tilde{F}}_\alpha'(x) \ = \ C(\alpha) \cdot \frac{\alpha^\frac{4}{3}}{1+\alpha^\frac{8}{3}x^2} \ . \] We have $\ensuremath{\tilde{F}}_\alpha'(0) \sim \alpha^\frac{4}{3},\ \ensuremath{\tilde{F}}_\alpha'(\frac{2}{\alpha}) \sim \alpha^\frac{2}{3}$ and $\ensuremath{\tilde{F}}_\alpha'(\gamma) \sim \alpha^{-\frac{4}{3}}$ for each fixed $\gamma > 0$ as $\alpha \ensuremath{\rightarrow} \infty$. Therefore, the conditions (\ref{cond:Funiformbounds}), (\ref{cond:Fexpansion}) and (\ref{cond:Fcontraction}) will always be satisfied when $\alpha$ is large enough. 
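In fact, these rates can be read off directly from the explicit formula for $\ensuremath{\tilde{F}}_\alpha'$, using again that $C(\alpha)$ remains bounded:
\[
\ensuremath{\tilde{F}}_\alpha'(0) \ = \ C(\alpha)\alpha^{\frac{4}{3}} \ , \quad
\ensuremath{\tilde{F}}_\alpha'\left(\frac{2}{\alpha}\right) \ = \ \frac{C(\alpha)\alpha^{\frac{4}{3}}}{1+4\alpha^{\frac{2}{3}}} \ \sim \ \frac{C(\alpha)}{4}\,\alpha^{\frac{2}{3}}
\quad \textrm{and} \quad
\ensuremath{\tilde{F}}_\alpha'(\gamma) \ = \ \frac{C(\alpha)\alpha^{\frac{4}{3}}}{1+\alpha^{\frac{8}{3}}\gamma^2} \ \sim \ \frac{C(\alpha)}{\gamma^2}\,\alpha^{-\frac{4}{3}} \ .
\]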
Consequently we can apply Theorem~\ref{thm:snaexistence} and obtain that there exists some $\tilde{\alpha}_0$ such that for all $\alpha\geq \tilde{\alpha}_0$ the parameter family $$\ensuremath{\tilde{T}}_{\alpha,\beta} \ : \ \ensuremath{(\theta,x)} \mapsto (\ensuremath{\theta + \omega},\ensuremath{\tilde{F}}_\alpha(x) - \beta g(\theta))$$ undergoes a non-smooth saddle-node bifurcation (in the sense of Theorem~\ref{thm:saddlenode}) as $\beta$ is increased from $0$ to $3/2$. \medskip Now denote the map given by (\ref{eq:atanfamily}) by $T_{\alpha,\beta}$. We claim that there exists a monotonically increasing function $\sigma : \ensuremath{\mathbb{R}}^+ \ensuremath{\rightarrow} \ensuremath{\mathbb{R}}^+$ and a function $\tau : \ensuremath{\mathbb{R}}^+ \ensuremath{\rightarrow} \ensuremath{\mathbb{R}}^+$ such that $T_{\alpha,\beta}$ is smoothly conjugate to $\ensuremath{\tilde{T}}_{\sigma(\alpha),\tau(\alpha)\beta}$. Consequently, the parameter family $T_{\alpha,\beta}$ equally exhibits non-smooth saddle-node bifurcations if $\alpha$ is chosen sufficiently large (larger than $\sigma^{-1}(\tilde{\alpha}_0)$). In order to define $\sigma$, it is convenient to introduce an intermediate parameter family $\ensuremath{\widehat{T}}_{\alpha,\beta}$ with fibre maps \[ \ensuremath{\widehat{T}}_{\alpha,\beta,\theta}(x) \ = \ \arctan(\alpha x) - \beta g(\theta) \ . \] We let $h_1(\theta,x) = (\theta,\arctan(\alpha)x)$, $\sigma_1(\alpha) = \arctan(\alpha)^{-1} \alpha$ and $\tau_1(\alpha) = \arctan(\alpha)$. Then \[ T_{\alpha,\beta} \ = \ h_1^{-1} \circ \ensuremath{\widehat{T}}_{\sigma_1(\alpha),\tau_1(\alpha)\beta} \circ h_1 \ , \] such that $T_{\alpha,\beta} \sim \ensuremath{\widehat{T}} _{\sigma_1(\alpha),\tau_1(\alpha)\beta}$, where $\sim$ denotes the existence of a smooth conjugacy. On the other hand, let $h_2(\theta,x) = (\theta, C(\alpha)^{-1}x)$, $\sigma_2(\alpha) = C(\alpha)\alpha^{\frac{4}{3}}$ and $\tau_2(\alpha) = C(\alpha)^{-1}$. 
Again, a simple computation yields \[ \ensuremath{\tilde{T}}_{\alpha,\beta} \ = \ h^{-1}_2 \circ \ensuremath{\widehat{T}}_{\sigma_2(\alpha),\tau_2(\alpha)\beta} \circ h_2 \ . \] As $\sigma_1$ and $\sigma_2$ are both strictly monotonically increasing and therefore invertible, this implies $\ensuremath{\widehat{T}}_{\alpha,\beta} \sim \ensuremath{\tilde{T}}_{\sigma_2^{-1}(\alpha),\tau_2(\sigma_2^{-1}(\alpha))^{-1}\beta}$ and consequently \[ T_{\alpha,\beta} \ \sim \ \ensuremath{\widehat{T}}_{\sigma_1(\alpha),\tau_1(\alpha)\beta} \ \sim \ \ensuremath{\tilde{T}}_{\sigma_2^{-1} \circ \sigma_1(\alpha),\tau_2(\sigma_2^{-1} \circ \sigma_1(\alpha))^{-1} \tau_1(\alpha)\beta} \ . \] Hence, we can define $\sigma = \sigma_2^{-1} \circ \sigma_1$ and $\tau = \frac{\tau_1}{\tau_2\circ \sigma_2^{-1} \circ \sigma_1}$ as claimed. Finally, since $F_\alpha$ has the fixed point $x_+=1$, $g(0)=1$ and we are in the case of one-sided forcing, it can easily be seen that the bifurcation must take place before $\beta=1$ (meaning that the critical parameter $\beta_c$ given by Theorem~\ref{thm:saddlenode} is strictly smaller than one). For larger $\beta$-values all orbits eventually end up below the $0$-line and consequently converge to the lower bounding graph $\varphi^-$, such that this is the only invariant graph. (Compare with the proof of Theorem~\ref{thm:saddlenode} in Section~\ref{SaddleNode}.) This completes the proof. {\raggedleft $\Box$ \\} \subsubsection{Application to the Harper map} \label{App:Harper} We want to emphasise that we do not claim any originality for the presented results on the Harper map. Our aim is merely to demonstrate the flexibility of our general statements by applying them to this well-known family. For the particular case of the Harper map there surely exist more direct and elegant ways to produce such results, starting with \cite{herman:1983}. 
Usually such results require more regularity than we use here (the potentials we can treat are only Lipschitz-continuous), but from the physics point of view this is surely the more interesting case anyway. Further, although potentials which are only Lipschitz-continuous are not explicitly treated in \cite{bjerkloev:2005}, the methods developed there surely allow one to do this as well. Thus, the real achievement here is rather to show that the underlying mechanism for non-smooth bifurcations is in principle the same in the Harper map as in other parameter families, like the $\arctan$-family with additive forcing, despite the very particular structures which distinguish Schr\"odinger cocycles from other models. \medskip As Theorem~\ref{thm:snaexistence} is tailor-made for qpf interval maps, its application to the Harper map is somewhat indirect. This means that we have to perform a number of modifications before the system in (\ref{eq:projharper}) is in a form which meets the assumptions of the theorem. First of all, we remark that the dynamics of (\ref{eq:projharper}) are equivalent to those of the map \begin{equation}\label{eq:riccati} \ensuremath{(\theta,x)} \ \mapsto \ \left(\ensuremath{\theta + \omega},\frac{-1}{x} + E - \lambda V(\ensuremath{\theta + \omega})\right) \ \end{equation} defined on $\ensuremath{\mathbb{T}^{1}} \times \overline{\ensuremath{\mathbb{R}}}$. In order to see this, note that, by taking the inverse and replacing $\omega$ by $-\omega$ in (\ref{eq:riccati}), we obtain the system \begin{equation} \ensuremath{(\theta,x)} \mapsto \left(\ensuremath{\theta + \omega},\frac{-1}{x-E+\lambda V(\theta)} \right) \ . \end{equation} Using the change of variables $x \mapsto \tan(-x)$, this yields (\ref{eq:projharper}). 
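For readers less familiar with Schr\"odinger cocycles, we briefly recall where (\ref{eq:riccati}) comes from (with the sign convention for the potential chosen to match (\ref{eq:riccati})): if $(u_n)_{n\in\mathbb{Z}}$ satisfies the discrete Schr\"odinger equation
\[
u_{n+1} + u_{n-1} + \lambda V(\theta+n\omega)u_n \ = \ E u_n \ ,
\]
then, as long as $u_n \neq 0$, the quotients $x_n := u_{n+1}/u_n$ evolve according to
\[
x_{n+1} \ = \ \frac{u_{n+2}}{u_{n+1}} \ = \ \frac{-1}{x_n} + E - \lambda V(\theta+(n+1)\omega) \ ,
\]
which is exactly the fibre map in (\ref{eq:riccati}), evaluated along the orbit $\theta_n = \theta + n\omega$. Passing to $\ensuremath{\mathbb{T}^{1}} \times \overline{\ensuremath{\mathbb{R}}}$ takes care of the fibres where $u_n = 0$.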
The proof of Corollary~\ref{cor:harperI} below will mainly consist in showing that there exists a parameter family of qpf interval maps which satisfies the assumptions of Theorem~\ref{thm:snaexistence}, such that it exhibits a non-smooth saddle-node bifurcation, and which is at the same time conjugate to (\ref{eq:riccati}), provided that both systems are restricted to the relevant part of the phase space in which the bifurcation takes place. In order to proceed, we will now first take the rather atypical viewpoint of fixing $E$ and considering $\lambda$ as the bifurcation parameter (whereas usually in the study of Schr\"odinger cocycles the coupling constant $\lambda$ is fixed and the spectral parameter $E$ is varied). However, in this particular situation the two viewpoints are actually equivalent and the analogous result from the standard viewpoint can be recovered afterwards. More precisely, we first show that Theorem~\ref{thm:snaexistence} implies the following: \begin{cor} \label{cor:harperI} Suppose $\omega$ satisfies (\ref{cond:diophantine}) and the potential $V$ is non-negative, Lipschitz-continuous and decays linearly in a neighbourhood of its unique maximum. Then there exists a constant $E_0 = E_0(V,c,d)$ with the following property: If $E \geq E_0$, then there exists a unique parameter $\lambda_c=\lambda_c(E)$, such that for all $\lambda \in [0,\lambda_c]$ there exist exactly two invariant graphs for the system (\ref{eq:projharper}) (and likewise for (\ref{eq:riccati})), one with positive and one with negative Lyapunov exponent. If $\lambda< \lambda_c$ then both these graphs are continuous, if $\lambda = \lambda_c$ they are non-continuous (i.e.\ a SNA and a SNR) and have the same topological closure. Furthermore, the mapping $E \mapsto \lambda_c(E)$ is strictly monotonically increasing. 
\end{cor} Due to the monotone dependence of $\lambda_c(E)$ on $E$, Corollary~\ref{cor:harperI} immediately implies \begin{cor} \label{cor:harperII} Suppose $\omega$, $V$ and $E_0$ are chosen as in Corollary~\ref{cor:harperI} and let $\lambda_0 := \lambda_c(E_0)$. Then the following holds: If $\lambda \geq \lambda_0$, then there exists a unique parameter $E_c=E_c(\lambda)\geq E_0$, such that for all $E\geq E_c$ there exist exactly two invariant graphs for the system (\ref{eq:projharper}) (and likewise for (\ref{eq:riccati})), one with positive and one with negative Lyapunov exponent. If $E>E_c$ then both these graphs are continuous, if $E = E_c$ they are non-continuous (i.e.\ a SNA and a SNR) and have the same topological closure. The mapping $\lambda \mapsto E_c(\lambda)$ is the inverse of the mapping $E \mapsto \lambda_c(E)$. \end{cor} We remark that the Harper map can be viewed as a qpf circle homeomorphism (by identifying $\ensuremath{\mathbb{R}} \cup \{\infty\}$ with $\ensuremath{\mathbb{T}^{1}}$). Since we do not want to introduce rotation numbers for such systems here, we do not speak more precisely about what happens if $E$ is decreased beyond $E_c$ (or $\lambda$ is increased beyond $\lambda_c$) and just mention that in this case the rotation number starts to increase and becomes non-zero for $E < E_c$. Invariant graphs and even continuous invariant curves may exist in this situation, but they will have a different homotopy type (i.e.\ they `wind around the torus' in the vertical direction). \ \\ \textit{Proof of Corollary~\ref{cor:harperI}.} In the following we always assume that the parameter $E$ is chosen sufficiently large, without further mention. (In particular, most of the statements below are only true for large $E$.) As the two systems (\ref{eq:projharper}) and (\ref{eq:riccati}) are equivalent (as mentioned above), it suffices to show that the statement is true for (\ref{eq:riccati}). 
Further, for the sake of simplicity we assume that $V$ is normalised, i.e.\ $\sup_{\theta\in\ensuremath{\mathbb{T}^{1}}} V(\theta) = 1$. Let $\alpha := E^{3/2}$ and \[ F_1(x) \ := \ -1/x + E \ . \] Then $$F_1([1/E,2/E]) \ \supseteq \ [1/E,2/E] \ =: \ I_1$$ and $$F_1([3E/4,E]) \ \ensuremath{\subseteq} \ [3E/4,E] \ =: \ I_2 \ .$$ Further $F_1$ is uniformly expanding on $I_1$ and uniformly contracting on $I_2$. As $F_1$ is strictly concave on $(0,\infty)$, it follows that $F_{1 \mid (0,\infty)}$ has exactly two fixed points $x_1 \in I_1$ and $x_2 \in I_2$. \medskip Let $s:= \frac{1+2/\sqrt{\alpha}}{x_2-x_1}$ and $h(x) := (x-x_1) \cdot s$. Note that we have \[ s \ \in \ [1/E,2/E] \ . \] As $h$ sends $x_1$ to 0 and $x_2$ to $1+2/\sqrt{\alpha}$, the map \begin{equation} \label{eq:F2-def} F_2(x) \ := \ h \circ F_1 \circ h^{-1}(x) \ = \ \frac{-s}{x/s+x_1} + s\cdot (E-x_1) \end{equation} has fixed points $0$ and $1+2/\sqrt{\alpha}$. In addition, if $\gamma \in (0,1)$ is fixed, then it is easy to check that on the one hand $$F_2'(x) \ \in \ [1/4E^2,4/\gamma^2E^2] \ \ensuremath{\subseteq} \ \left[\alpha^{-2},\ensuremath{\frac{1}{2}}\alpha^{-\ensuremath{\frac{1}{2}}}\right] \quad \quad \forall x \in [\gamma,1+2/\sqrt{\alpha}] $$ and on the other hand $$ F_2'(x) \ \in \ [E/16,E^2] \ \ensuremath{\subseteq} \ [2\alpha^{\ensuremath{\frac{1}{2}}},\alpha^2] \quad \quad \forall x \in [0,2/\alpha] \ . $$ (Always assuming that $E$ is sufficiently large.) Further, there holds \[ F_2(1/\alpha) \ = \ \frac{-s}{1/\alpha s+x_1} + s\cdot (E-x_1) \ \geq \ -4/\sqrt{E} + 1-4/E^2 \ \stackrel{E \ensuremath{\rightarrow} \infty}{\longrightarrow} \ 1 \ , \] such that we can assume $F_2(1/\alpha) \geq 1-\gamma$. 
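For completeness, we note that the fixed points $x_1$ and $x_2$ can also be computed explicitly: $F_1(x) = x$ is equivalent to the quadratic equation $x^2 - Ex + 1 = 0$, such that
\[
x_{1,2} \ = \ \frac{E \mp \sqrt{E^2-4}}{2} \ .
\]
In particular $x_1 = \frac{2}{E+\sqrt{E^2-4}} \in [1/E,2/E]$ and $x_2 \geq E - 2/E$, such that $x_1 \in I_1$ and $x_2 \in I_2$ for all sufficiently large $E$. Further $x_2 - x_1 = \sqrt{E^2-4} \in [E-4/E,E]$, which confirms the above statement $s \in [1/E,2/E]$.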
\medskip Due to the definition of $F_2$ in (\ref{eq:F2-def}) and as $h$ is affine with slope $s$, the map $H:\ensuremath{(\theta,x)} \mapsto (\theta,h(x))$ smoothly conjugates (\ref{eq:riccati}) with \begin{equation} \ensuremath{(\theta,x)} \ \mapsto \ \left(\ensuremath{\theta + \omega},F_2(x) - s\lambda V(\ensuremath{\theta + \omega})\right) \ . \end{equation} Now we choose a ${\cal C}^1$-map $F : [-3,3] \ensuremath{\rightarrow} [-\frac{3}{2},\frac{3}{2}]$, such that $F_{\mid[0,1+2/\sqrt{\alpha}]} = F_{2\mid[0,1+2/\sqrt{\alpha}]}$ and which satisfies the requirements (\ref{cond:Ffixedpoints})--(\ref{cond:Fmapsover}). This is possible, since we have shown above that $F_{2\mid[0,1+2/\sqrt{\alpha}]}$ has all the required properties. In addition, $F$ can be chosen such that it is strictly concave on $(0,3]$, has a unique fixed point $x_-$ in $[-3,0)$ and is uniformly contracting on $[-3,x_-]$. Consequently, it satisfies the second alternative of (\ref{cond:F3}). Further, if we let $g(\theta) := V(\ensuremath{\theta + \omega})$ and $\beta := s\lambda$ and define \[ T_\beta\ensuremath{(\theta,x)} \ = \ (\ensuremath{\theta + \omega},F(x) - \beta g(\theta)) \] as in Theorem~\ref{thm:snaexistence}, then $H$ conjugates $T_\beta$ restricted to $\ensuremath{\mathbb{T}^{1}} \times [0,1+2/\sqrt{\alpha}]$ and $S_{E,\lambda}$ restricted to $\ensuremath{\mathbb{T}^{1}} \times [x_1,x_2]$, where $S_{E,\lambda}$ denotes the map given by (\ref{eq:riccati}). The parameter family $T_\beta$ satisfies all the requirements of Theorem~\ref{thm:snaexistence}. Hence, we obtain the existence of a critical parameter $\beta_c$, which is the bifurcation parameter in a non-smooth saddle-node bifurcation. Further, due to the monotonicity described in Theorem~\ref{thm:saddlenode}(i), the $T_\beta$-invariant graphs $\psi$ and $\varphi^+$ are always contained in \mbox{$\ensuremath{\mathbb{T}^{1}} \times [0,1+2/\sqrt{\alpha}]$}. 
Consequently their preimages under $H$, which we denote by $\widehat{\psi}$ and $\widehat{\varphi}^+$, are $S_{E,\lambda}$-invariant and contained in $\ensuremath{\mathbb{T}^{1}} \times (0,+\infty)$. Therefore, the parameter family $S_{E,\lambda}$ equally undergoes a non-smooth saddle-node bifurcation with critical parameter $\lambda_c = \beta_c/s$.\medskip In order to complete the proof only two things remain to be shown: The monotonicity of $E \mapsto \lambda_c(E)$ and the fact that for all $\lambda \leq \lambda_c$ the two graphs $\widehat{\psi}$ and $\widehat{\varphi}^+$ are indeed the only ones for the system $S_{E,\lambda}$. In order to see the latter, we note that restricted to $[0,+\infty)$ all the fibre maps of $S_{E,\lambda}$ are strictly concave, such that there can be only two invariant graphs in $\ensuremath{\mathbb{T}^{1}} \times [0,+\infty)$. However, as $S_{E,\lambda}$ maps $\ensuremath{\mathbb{T}^{1}} \times [-\infty,0)$ into the forward invariant set $[\varphi^+,+\infty)$, there cannot be any other invariant graphs in $\ensuremath{\mathbb{T}^{1}} \times [-\infty,0)$ either. (Of course, the same conclusion also follows by considering the associated $\textrm{SL}(2,\ensuremath{\mathbb{R}})$-cocycle: Due to the non-zero Lyapunov exponent there exists an invariant splitting into stable and unstable subspaces. These correspond exactly to the two invariant graphs above and there will be no others (compare Section~\ref{Harper}).) In order to see the strict monotonicity of $E \mapsto \lambda_c(E)$, fix $\epsilon > 0$ and suppose that $E_2 = E_1 + 3\epsilon$ and $\lambda'<\lambda_c(E_1)+\epsilon$. Then $S_{E_1,\lambda_c(E_1)-\epsilon}$ has a continuous invariant graph $\varphi^+_1$ contained in $\ensuremath{\mathbb{T}^{1}} \times (0,+\infty)$, and some iterate of $S_{E_1,\lambda_c(E_1)-\epsilon}$ acts uniformly contracting in the vertical direction on $[\varphi^+_1,\infty)$. 
(This follows from the Uniform Ergodic Theorem in combination with the fact that $DS_{E,\lambda,\theta}(x) = 1/x^2$ is decreasing in $x$.) However, since \[ S_{E_2,\lambda',\theta}(x) \ \geq \ S_{E_1,\lambda_c(E_1)-\epsilon,\theta}(x) \quad \forall x \in [\varphi^+_1,+\infty) \ , \] (recall that we assumed $V$ to be normalised) this implies that $S_{E_2,\lambda'}$ maps $[\varphi^+_1,\infty)$ into itself and the respective iterate of $S_{E_2,\lambda'}$ also acts uniformly contracting in the vertical direction on this set. (Note that $DS_{E,\lambda,\theta}$ does not depend on the parameters $E$ and $\lambda$.) Consequently $S_{E_2,\lambda'}$ has an attracting and continuous invariant graph contained in $[\varphi^+_1,+\infty)$. As this is true for all $\lambda' \leq \lambda_c(E_1)+\epsilon$, this implies $\lambda_c(E_2) > \lambda_c(E_1)+\epsilon$. {\raggedleft $\Box$ \\} \subsubsection{Remarks on the qpf Arnold circle map and Pinched systems} \label{App:Remarks} We apply the results from Section~\ref{NonSmoothSaddleNodes} neither to Pinched skew products nor to the qpf Arnold map, but for very different reasons. In the case of Pinched skew products, this would seem like using a sledgehammer to crack a nut. In these systems the existence of SNA can be established by a few short and elegant arguments, making use of their particular structure (see \cite{grebogi/ott/pelikan/yorke:1984} and \cite{keller:1996}). Even the exponential evolution of peaks can be described in a much simpler way in this setting, a fact which was used in \cite{jaeger:2004a} to study the topological structure of SNA in Pinched skew products. (In fact, this preceding result and the striking similarities between the pictures in Figures \ref{fig:atanfamily} and \ref{fig:pinchedlines}, which strongly suggested some common underlying pattern, were the starting point for the work presented here). 
In principle it is possible to view the SNA in these systems as being created in non-smooth bifurcations, as this is done in \cite{glendinning:2004}. However, as treating them with the methods presented here would even need some additional modifications, we refrain from doing so. For the case of the qpf Arnold circle map, the situation is completely different. Here it is just not possible to apply our results. The reason for this is the fact that no matter how the parameter $\alpha \in [0,1]$ in (\ref{eq:qpfarnold}) is chosen, the maximal expansion rate is always at most two (the fibre maps have derivative $1+\alpha\cos(2\pi x) \in [1-\alpha,1+\alpha]$). Further, for any interval of fixed length the uniform contraction rate also remains bounded. Although the derivative goes to zero at $x=\ensuremath{\frac{1}{2}}$ if $\alpha$ is close to $1$, a strong contraction only takes place locally. This means that the expansion and contraction rates one can work with will always be moderate and cannot be chosen arbitrarily large by adjusting the parameters. However, this is exactly what would be necessary for the application of Theorem~\ref{thm:snaexistence}~. In the case of the forcing function $\theta \mapsto \sin(2\pi\theta)$ used in (\ref{eq:qpfarnold}), there is also not much hope that a refinement of our methods would yield results. As the simulations in Figure~\ref{fig:arnold} indicate, the exponential evolution of peaks is only present in a very weak form in this case. Therefore, it should be doubted that this process can be described in a rigorous way with approximative methods such as the ones we use in the proof of Theorem~\ref{thm:snaexistence}, which necessarily involve a lot of rough estimates. 
However, as already indicated in Section~\ref{ArnoldMap}, this might become different if one chooses a more suitable forcing function, and considers for example the parameter family \begin{equation} \label{eq:arnoldmodified} \ensuremath{(\theta,x)} \ \mapsto \ \left(\ensuremath{\theta + \omega},x+\tau+\frac{\alpha}{2\pi} \sin(2\pi x) - \beta \cdot \max\left\{0,1-\sigma \cdot d(\theta,0)\right\}\right) \end{equation} with sufficiently large parameter $\sigma$. In this case the exponential evolution of peaks is very distinct again, as one can see in Figure~\ref{fig:arnoldpeak}. Consequently, it should also be possible to treat this situation rigorously. Nevertheless, Theorem~\ref{thm:snaexistence} is not sufficient for this purpose. Changing the forcing function does not have any influence on the expansion and contraction rates, such that these will still be too weak to meet our assumptions. Yet, there is an additional fact which we do not make use of in the proof of Theorem~\ref{thm:snaexistence}: In the situation of (\ref{eq:arnoldmodified}) with large $\sigma$, the forcing function vanishes almost everywhere, apart from a small neighbourhood of 0. This means that after every visit in this neighbourhood, the expansion, respectively contraction, has a long time to work, without any quasiperiodic influence, before the next return. It seems reasonable to expect that this could be used to make up for the weak expansion and contraction rates, for example by regarding a renormalisation of the original system after a sufficiently large finite time. However, the implementation of this idea is left for the future \ldots\ . \subsubsection{SNA's with symmetry} \label{SNAwithsymmetry} Similar to the proof of Corollary~\ref{cor:atanfamily}, it is possible to show that for sufficiently large parameters $\alpha$ the parameter family (\ref{eq:symmetricfamily}) satisfies the assumptions of Theorem~\ref{thm:symmetricsna}~. 
This leads to the following \begin{cor} Suppose $\omega$ satisfies the Diophantine condition (\ref{cond:diophantine}). Then there exists $\alpha_0 = \alpha_0(c,d)$ such that for all $\alpha>\alpha_0$ there is a parameter $\beta_c = \beta_c(\alpha)$ such that the system (\ref{eq:symmetricfamily}) with parameters $\alpha$ and $\beta_c$ has two SNA and one SNR, with the properties described in Theorem~\ref{thm:symmetricsna}, and no other invariant graphs. \end{cor} As the details are more or less the same as in Section~\ref{App:atanfamily}, we omit the proof. To the knowledge of the author, this is the first situation where the existence of such a triple of intermingled invariant graphs can be described rigorously. Similarly, it is the first example of a qpf monotone interval map without continuous invariant graphs. \section{Saddle-node bifurcations and sink-source-orbits} \label{Generalsetting} The aim of this section is threefold: First, we introduce a general setting where a (not necessarily non-smooth) saddle-node bifurcation occurs and can be described rigorously. Secondly, we show that the presence of a \textit{`sink-source-orbit'} implies the non-smoothness of the bifurcation, and how the existence of such an orbit can be established by approximation with finite trajectories. The construction of such trajectories with the required properties will then be carried out in the succeeding Sections~\ref{Strategy} to \ref{Construction}. Finally, before we can start we have to address a subtle issue concerning the definition of invariant graphs: \subsection{Equivalence classes of invariant graphs and the essential closure} \label{Essentialclosure} The problem we want to discuss is the following: Any invariant graph $\varphi$ can be modified on a set of measure zero to yield another invariant graph $\tilde{\varphi}$, equal to $\varphi$ $m$-a.s.\ (where $m$ denotes the Lebesgue measure on $\ensuremath{\mathbb{T}^{1}}$). 
We usually do not want to distinguish between such graphs. On the other hand, especially when topology is concerned we sometimes need objects which are well-defined everywhere. So far, this has not been a problem. The bounding graphs of invariant sets defined by (\ref{eq:boundinggraphs}) are well-defined everywhere, and for the definition of the associated measure (\ref{eq:associatedmeasure}) it does not matter. But in general, some care has to be taken. We will therefore use the following convention: We will consider two invariant graphs as equivalent if they are $m$-a.s.\ equal and implicitly speak about equivalence classes of invariant graphs (just as functions in ${\cal L}^{1}_{\textrm{Leb}}(\ensuremath{\mathbb{R}})$ are identified if they are Lebesgue-a.s.\ equal). Whenever any further assumptions about invariant graphs, such as continuity, semi-continuity or inequalities between invariant graphs, are made, we will understand this to mean that there is at least one representative in each of the respective equivalence classes such that the assumptions are met. All conclusions which are then drawn from the assumed properties will be true for all such representatives. There is one case where this terminology might cause confusion: It is possible that an equivalence class contains both an upper and a lower semi-continuous graph, but no continuous graph.% \footnote{To get an idea of what could happen, consider the function $f : x \mapsto \sin \frac{1}{x} \ \forall x \neq 0$. By choosing $f(0) = 1$ we can extend it to an upper semi-continuous function, by choosing $f(0) = -1$ to a lower semi-continuous function, but there is no continuous function in the equivalence class.} This rather degenerate case cannot occur when the Lyapunov exponent of the invariant graph is negative (see \cite{stark:2003}, Proposition~4.1), but when the exponent is zero it must be taken into account. To avoid ambiguities, we will explicitly mention this case whenever it can occur. 
In order to assign a well defined point set to an equivalence class of invariant graphs, we introduce the \textit{essential closure}: \begin{definition} \label{def:essclosure} Let $T$ be a qpf monotone interval map. If $\varphi$ is an invariant graph, we define its essential closure as \begin{equation} \label{essclosure} \esscl{\Phi} := \{ (\theta,x) : \mu_{\varphi}(U) > 0 \ \forall \textit{open neighbourhoods U of } (\theta,x) \} \ , \end{equation} where the associated measure $\mu_\varphi$ is given by (\ref{eq:associatedmeasure}). \end{definition} Several facts follow immediately from this definition: \begin{itemize} \item \esscl{\Phi}\ is a compact set. \item $\esscl{\Phi}$ is equal to the topological support $\textrm{supp}(\mu_{\varphi})$ of the measure $\mu_\varphi$, which in turn implies \ $\mu_{\varphi}(\esscl{\Phi}) = 1$ (see e.g. \cite{katok/hasselblatt:1995}). \item Invariant graphs from the same equivalence class have the same essential closure (as they have the same associated measure). \item $\esscl{\Phi}$ is contained in every other compact set which contains $\mu_{\varphi}$-a.e.\ point of $\Phi$, in particular in $\overline{\Phi}$. \item \esscl{\Phi}\ is forward invariant under $T$.% \footnote{This can be seen as follows: Suppose $x \in \esscl{\Phi}$ and $U$ is an open neighbourhood of $T(x)$. Then $T^{-1}(U)$ is an open neighbourhood of $x$, and therefore $\mu_{\varphi}(U) = \mu_{\varphi} \circ T^{-1}(U) > 0$. This means $T(x) \in \esscl{\Phi}$, and as $x \in \esscl{\Phi}$ was arbitrary we can conclude that $T(\esscl{\Phi}) \subseteq \esscl{\Phi}$. On the other hand $T(\esscl{\Phi})$ is a compact set which contains $\mu_{\varphi}$-a.e.\ point in $\Phi$, therefore $\esscl{\Phi} \subseteq T(\esscl{\Phi})$. } \end{itemize} \subsection{Saddle-node bifurcations: Proof of Theorem~\ref{thm:saddlenode}} \label{SaddleNode} As mentioned, the first problem we have to deal with is to restrict the number of invariant graphs which can occur. 
If there are too many, it will be hard to describe a saddle-node bifurcation in detail. However, there is a result which is very convenient in this situation: \begin{thm}[Theorem 4.2 in \cite{jaeger:2003}] \label{thm:schwarzian} Suppose $T$ is a qpf monotone interval map and all fibre maps $\ensuremath{T_{\theta}}$ are ${\cal C}^3$. Further assume $\ensuremath{(\theta,x)} \mapsto D\ensuremath{T_{\theta}}(x)$ is continuous and all fibre maps have strictly positive derivative and strictly negative Schwarzian derivative (see Footnote~\ref{foot:schwarzian}). Then there are three possible cases: \begin{list}{(\roman{enumi})}{\usecounter{enumi}} \item There exists one invariant graph $\varphi$ with $\lambda(\varphi) \leq 0$. \item There exist two invariant graphs $\varphi$ and $\psi$ with $\lambda(\varphi) < 0$ and $\lambda(\psi) = 0$. \item There exist three invariant graphs $\varphi^- \leq \psi \leq \varphi^+$ with $\lambda(\varphi^-) < 0$, $\lambda(\psi) > 0$ and $\lambda(\varphi^+) < 0$. \end{list} Regarding the topology of the invariant graphs, there are the following possibilities: \begin{list}{(\roman{enumi})'}{\usecounter{enumi}} \item If the single invariant graph has negative Lyapunov exponent, it is always continuous. Otherwise the equivalence class contains at least an upper and a lower semi-continuous representative. \item The upper invariant graph is upper semi-continuous, the lower invariant graph lower semi-continuous. If $\varphi$ is not continuous and $\psi$ (as an equivalence class) is only semi-continuous in one direction, then $\esscl{\Phi} = \esscl{\Psi}$. \item $\psi$ is continuous if and only if $\varphi^+$ and $\varphi^-$ are continuous. Otherwise $\varphi^-$ is at least lower semi-continuous and $\varphi^+$ is at least upper semi-continuous. If $\psi$ is not lower semi-continuous then $\esscl{\Phi^-} = \esscl{\Psi}$, if $\psi$ is not upper semi-continuous then $\esscl{\Psi} = \esscl{\Phi^+}$. 
\end{list} Finally, as long as $\lambda(\varphi^-) < 0$ the graph $\psi$ can be defined by \begin{equation} \label{eq:psidef} \psi(\theta) \ := \ \sup\{ x \in X \mid \lim_{n\ensuremath{\rightarrow}\infty} |\ensuremath{T_{\theta}}^n(x) - \varphi^-(\theta+n\omega)| = 0 \} \ . \end{equation} \end{thm} In order to use the alternative assumption in (\ref{cond:F3}), we need a similar result for concave fibre maps, which is due to Keller. The main idea of the argument is contained in \cite{keller:1996}. However, as the statement was never published in this form, we include a proof. \begin{thm}[G.\ Keller] \label{thm:concave} Suppose $T$ is a qpf monotone interval map, all fibre maps $T_\theta$ are differentiable and $\ensuremath{(\theta,x)} \mapsto DT_\theta(x)$ is continuous. Further, assume that there exist measurable functions $\gamma^\pm : \ensuremath{\mathbb{T}^{1}} \to X$, such that for all $\theta \in \ensuremath{\mathbb{T}^{1}}$ the fibre maps $T_\theta$ are strictly concave on $I(\theta)= [\gamma^-(\theta),\gamma^+(\theta)] \ensuremath{\subseteq} X$. Then there exist at most two invariant graphs taking their values in $I(\theta)$, i.e.\ satisfying \begin{equation} \label{eq:concavity-interval} \varphi(\theta) \ \in \ I(\theta) \quad \forall \theta \in \ensuremath{\mathbb{T}^{1}} \ . \end{equation} If there exist two invariant graphs $\varphi_1\leq \varphi_2$ which both satisfy (\ref{eq:concavity-interval}), then $\lambda(\varphi_1) > 0$ and $\lambda(\varphi_2) < 0$. Further, if the graphs $\gamma^\pm$ are continuous, $\gamma^+$ is mapped below itself and $\gamma^-$ is mapped above itself, meaning that there holds \begin{equation} T_\theta(\gamma^+(\theta)) \ \leq \ \gamma^+(\ensuremath{\theta + \omega}) \quad \textrm{and} \quad T_\theta(\gamma^-(\theta)) \ \geq \ \gamma^-(\ensuremath{\theta + \omega}) \quad \forall \theta\in\ensuremath{\mathbb{T}^{1}} \ , \end{equation} then either $\varphi_1,\varphi_2$ are both continuous, or $\varphi_1$ is lower semi-continuous, $\varphi_2$ is upper semi-continuous and $\esscl{\Phi_1}=\esscl{\Phi_2}$. 
(If there is only one invariant graph which satisfies (\ref{eq:concavity-interval}), then it always contains an upper and a lower semi-continuous representative in its equivalence class.) \end{thm} \textit{Proof.} \ Suppose for a contradiction that there exist three different invariant graphs $\varphi_1 \leq \varphi_2 \leq \varphi_3$ which all satisfy (\ref{eq:concavity-interval}). As we identify invariant graphs which belong to the same equivalence class, we have $\varphi_1(\theta) < \varphi_2(\theta) < \varphi_3(\theta)$ $m$-almost surely. Due to the strict concavity of the fibre maps and the invariance of the three graphs we obtain \begin{equation} \label{eq:difference-quotients} \log\left(\frac{\varphi_2(\ensuremath{\theta + \omega})-\varphi_1(\ensuremath{\theta + \omega})}{\varphi_2(\theta)-\varphi_1(\theta)}\right) \ > \ \log\left( \frac{\varphi_3(\ensuremath{\theta + \omega})-\varphi_2(\ensuremath{\theta + \omega})}{\varphi_3(\theta)-\varphi_2(\theta)} \right) \quad m\textrm{-a.s.} \ . \end{equation} However, the following Lemma~\ref{lem:keller} applied to $Y=\ensuremath{\mathbb{T}^{1}}$, $S(\theta)=\ensuremath{\theta + \omega}$, $\nu=m$ and $f = \log(\varphi_{i+1}-\varphi_i)$ ($i=1,2$) yields that the integral with respect to $m$ on both sides equals zero, thus leading to a contradiction. Note that $f\circ S- f$ has the constant majorant $\log(\max_{\ensuremath{(\theta,x)} \in \ensuremath{\mathbb{T}^{1}} \times X} DT_\theta(x))$. \begin{lem}[Lemma 2 in \cite{keller:1996}] \label{lem:keller} Suppose $(Y,S,\nu)$ is a measure-preserving dynamical system, $f:Y \ensuremath{\rightarrow} \ensuremath{\mathbb{R}}$ is measurable and $f\circ S - f$ has an integrable majorant or minorant. Then $\int_Y f\circ S - f \ d\nu = 0$. 
\end{lem} For the estimates on the Lyapunov exponents, note that due to the strict concavity there holds \begin{eqnarray*} \lambda(\varphi_1) & = & \int_{\ensuremath{\mathbb{T}^{1}}} \log\left( \lim_{t \to 0} \frac{T_\theta(\varphi_1(\theta)+t) - \varphi_1(\ensuremath{\theta + \omega})}{t}\right) \ d\theta \\ & > & \int_{\ensuremath{\mathbb{T}^{1}}}\log\left( \frac{\varphi_2(\ensuremath{\theta + \omega})-\varphi_1(\ensuremath{\theta + \omega})}{\varphi_2(\theta)-\varphi_1(\theta)}\right) \ d\theta \ = \ 0 \ . \end{eqnarray*} (The last equality follows again from Lemma~\ref{lem:keller}~.) Similarly, we obtain $\lambda(\varphi_2) < 0$. \medskip Now suppose $\gamma^+$ is continuous and $T_\theta(\gamma^+(\theta)) \leq \gamma^+(\ensuremath{\theta + \omega}) \ \forall \theta \in \ensuremath{\mathbb{T}^{1}}$. Then we can define a sequence of monotonically decreasing continuous curves by \[ \gamma^+_n(\theta) \ := \ T^n_{\theta-n\omega}(\gamma^+(\theta-n\omega)) \ . \] As this sequence is bounded below by the invariant graph $\varphi_2$ it converges pointwise, and the limit has to be an invariant graph. Since there are no other invariant graphs between $\varphi_2$ and $\gamma^+$, we must have $\varphi_2 = \lim_{n\to\infty} \gamma^+_n$. Consequently $\varphi_2$ is upper semi-continuous as the monotone limit of a sequence of continuous curves. In the same way one can see that $\varphi_1$ must be lower semi-continuous. If $\varphi_1$ is not continuous, then the upper bounding graph of the compact invariant set $\esscl{\Phi_1}$ must be an upper semi-continuous invariant graph which lies between $\gamma^-$ and $\gamma^+$. The only candidate for this is $\varphi_2$, such that $\Phi_2 \ensuremath{\subseteq} \esscl{\Phi_1}$. However, this is only possible if $\varphi_2$ is not continuous. Otherwise, as $\lambda(\varphi_2) < 0$ and due to the Uniform Ergodic Theorem, some iterate of $T$ would act uniformly contracting in the fibres on some neighbourhood $U$ of $\varphi_2$. 
In this case no other invariant graph could intersect $U$ on a set of positive measure, contradicting $\Phi_2 \ensuremath{\subseteq} \esscl{\Phi_1}$. Replacing $T$ by $T^{-1}$ we can repeat the same argument for $\varphi_2$, such that either both graphs are continuous or both are only semi-continuous and have the same essential closure. This completes the proof. {\raggedleft $\Box$ \\} \medskip \begin{bem} \label{bem:schwarzianproof} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item The proof of Theorem~\ref{thm:schwarzian} in \cite{jaeger:2003} basically relies on the same idea as the above proof of Theorem~\ref{thm:concave}~. It depends on the fact that a negative Schwarzian derivative of a ${\cal C}^3$-map $F: X \ensuremath{\rightarrow} X$ is equivalent to strictly negative cross ratio distortion. The latter is defined as \[ {\cal D}_F(w,x,y,z) \ = \ \frac{\frac{F(y)-F(x)}{y-x} \cdot \frac{F(z)-F(w)}{z-w}}{\frac{F(x)-F(w)}{x-w} \cdot \frac{F(z)-F(y)}{z-y}} \ , \] where $w<x<y<z \in X$. Applying the resulting inequality to four invariant graphs and integrating over the circle leads to a contradiction, similar to the argument after (\ref{eq:difference-quotients}). This excludes the existence of more than three invariant graphs in the situation of Theorem~\ref{thm:schwarzian}, and in more or less the same way one obtains the inequalities for the Lyapunov exponents. \item We remark that the first part of Theorem~\ref{thm:schwarzian} (meaning statements (i)--(iii)) still holds if the dependence of $T_\theta$ on $\theta$ is only measurable, provided all other assumptions of the theorem are met and $\theta \mapsto \log(\max_{x\in X} DT_\theta(x))$ has an integrable majorant or minorant. Similarly, in Theorem~\ref{thm:concave} the statements about the number of invariant graphs and the Lyapunov exponents remain true in the analogous case.
\end{list} \end{bem} The preceding statements now allow us to prove Theorem~\ref{thm:saddlenode}: \bigskip \textit{\bf Proof of Theorem~\ref{thm:saddlenode}~.} We start with the case where all fibre maps have negative Schwarzian derivative (see~(\ref{cond:F3})). Then due to Theorem~\ref{thm:schwarzian}, there exist at most three invariant graphs. In order to show that the lower bounding graph $\varphi^-$ is always continuous, let us first collect some facts about the map $F$: As $F$ has three fixed points, there must exist some $c\in[-2C,2C]$ with $F''(c) = 0$. However, the negative Schwarzian derivative implies that $F'''(x) < 0$ whenever $F''(x) = 0$ for some $x \in [-2C,2C]$. Thus there can be only one $c$ with $F''(c)=0$, and in addition $F''(x)$ will be strictly positive for $x<c$ and strictly negative for $x>c$. Therefore $F_{|[-2C,c)}$ will be strictly convex and $F_{|(c,2C]}$ strictly concave, and this in turn implies that 0 is an unstable fixed point whereas $x^-$ and $x^+$ are stable. Further $F-\textrm{Id}$ is strictly positive on $(0,x^+)$ and strictly negative on $(x^-,0)$, and finally $F$ is a uniform contraction on $[-2C,x^-]$.\footnote{Note that we do not know whether $c \in (x^-,0]$, such that this does not imply the second alternative in (\ref{cond:F3}).} As we are in the case of one-sided forcing, for any $\epsilon$ with $-\epsilon \in (x^-,0)$ the set $\ensuremath{\mathbb{T}^{1}} \times [-2C,-\epsilon]$ is mapped into itself, independent of $\beta$. Further, as $g$ does not vanish identically, there exist $\epsilon > 0$ and $n \in \N$ such that $T^n(M) \ensuremath{\subseteq} \ensuremath{\mathbb{T}^{1}} \times [-2C,-\epsilon]$, where $M:=\ensuremath{\mathbb{T}^{1}} \times [-2C,0]$.
Consequently \begin{eqnarray*} \bigcap_{n\in\N} T^n(M) & \ensuremath{\subseteq} & \bigcap_{n\in\N} T^n(\ensuremath{\mathbb{T}^{1}} \times [-2C,-\epsilon]) \\ & \ensuremath{\subseteq} & \bigcap_{n\in\N} \ensuremath{\mathbb{T}^{1}} \times [-2C,F^n(-\epsilon)] \ = \ \ensuremath{\mathbb{T}^{1}} \times [-2C,x^-] \ =: \ N \ . \end{eqnarray*} Now $T$ acts uniformly contracting on $N$ in the vertical direction. This means that there will be exactly one invariant graph contained in $N \ensuremath{\subseteq} M$, which is stable and continuous, and this is of course the lower bounding graph $\varphi^-$. In particular $\varphi^- < 0$ independent of $\beta$. Furthermore, no other invariant graph can intersect $N$. \begin{list}{(\roman{enumi})}{\usecounter{enumi}} \item On the one hand, there obviously exist three invariant graphs at $\beta=0$, namely the constant lines corresponding to the three fixed points. As these are not neutral, they will also persist for small values of $\beta$. On the other hand consider $\beta = C$. As we assumed that $g$ takes the maximum value of 1 at least for one $\theta_0 \in \ensuremath{\mathbb{T}^{1}}$, the point $(\theta_0,C)$ is mapped into $M$. (Recall that $F:[-2C,2C] \ensuremath{\rightarrow} [-C,C]$.) But as we have seen, any point in $M$ is attracted to $\varphi^-$ independent of $\beta$. Thus there exists an orbit which starts above the upper bounding graph and ends up converging to $\varphi^-$. This means that there can be no other invariant graph apart from $\varphi^-$, and as this situation is stable the same will also be true for all $\beta$ sufficiently close to $C$. Consequently, if we define $\beta_0$ as the infimum of all $\beta \in (0,C)$ for which there do not exist three continuous invariant graphs, then $\beta_0 \in (0,C)$ and statement (i) holds by definition. It remains to show that the graphs $\varphi^\pm$ and $\psi$ depend continuously and monotonically on $\beta$. 
Continuity simply follows from the fact that invariant curves with non-zero Lyapunov exponents depend continuously on ${\cal C}^1$-distortions of the system. For the monotonicity of $\varphi^+$, note that since there is no other invariant graph above, $\varphi^+$ is the limit of the iterated upper boundary lines $\varphi_n$, which are defined by $\varphi_n(\theta) := T_{\theta-n\omega}^n(2C)$. Due to the one-sided forcing, each of these curves will decrease monotonically as $\beta$ is increased, and this carries over to $\varphi^+$ in the limit. The same argument applies to $\varphi^-$, as this is the pointwise limit of the iterated \textit{lower} boundary lines. Finally, note that $\psi$ can be defined as the upper boundary of the set \begin{eqnarray*} \lefteqn{\{ (\theta,x) \mid \lim_{n\ensuremath{\rightarrow}\infty} |\ensuremath{T_{\theta}}^n(x) - \varphi^-(\theta+n\omega)| = 0 \} } \\ & = & \{ \ensuremath{(\theta,x)} \mid \exists n\in\N : T^n\ensuremath{(\theta,x)} \in M \} \ . \end{eqnarray*} This set increases with $\beta$, and thus the graph $\psi$ will move upwards. \item As all points in $M$ are attracted to $\varphi^-$, the two upper invariant graphs for $\beta<\beta_0$ must be contained in $M^c$. Simply due to continuity, for $\beta \ensuremath{\rightarrow} \beta_0$ the pointwise limits of these curves will be invariant graphs for $T_{\beta_0}$, although not necessarily continuous. By compactness, they will be contained in $\overline{M^c}$ and can therefore not coincide with $\varphi^-$. Further, they cannot be both distinct and continuous: Due to the non-zero Lyapunov exponents given by Theorem~\ref{thm:schwarzian}(iii), this is a stable situation, contradicting the definition of $\beta_0$. Thus there only remain the two stated possibilities: Either the two graphs are distinct and not continuous, or they coincide $m$-a.s.\ and are neutral (see Theorem~\ref{thm:schwarzian}). 
The compactness of $B$ simply follows from the semi-continuity of the graphs $\psi$ and $\varphi^+$. In the case where $\psi$ equals $\varphi^+$ $m$-a.s., the fact that $B$ is pinched is obvious. Otherwise, it follows from Theorem~\ref{thm:schwarzian} that the two graphs have the same essential closure, which we denote by $A$. Now all invariant ergodic measures supported on $B$ (namely $\mu_\psi$ and $\mu_{\varphi^+}$) have the same topological closure $A$, which means that $A$ is minimal and there is no other minimal subset of $B$. Therefore Theorem~4.6 in \cite{stark:2003} implies that $B$ is pinched. \item Suppose $\ensuremath{\tilde{\beta}} = \beta_0 + 2\epsilon$ for any $\epsilon > 0$. We have to show that there is no other invariant graph apart from the lower bounding graph $\varphi^-$. For this, it suffices to find an orbit which starts on the upper boundary line and ends up in $M$: This means that it finally converges to $\varphi^-$, which is impossible if there exists another invariant graph above. First, consider $\beta=\beta_0$ and let $\theta_1$ be chosen such that $\psi(\theta_1) = \varphi^+(\theta_1)$. As the pinched fibres are dense in $\ensuremath{\mathbb{T}^{1}}$ and $g(\theta_0)=1$, we can assume w.l.o.g.\ that $g(\theta_1-\omega) \geq \ensuremath{\frac{1}{2}}$. Further, as the upper boundary lines converge pointwise to $\varphi^+$, there exists some $n\in\N$ such that \[ \varphi_{n}(\theta_1) \ = \ T_{\beta_0,\theta_1-n\omega}^{n}(2C) \ \leq \ \varphi^+(\theta_1) + \frac{\epsilon}{2} \ . 
\] Now, as the forcing is one-sided (i.e.\ $g\geq0$) we have $T^{n-1}_{\ensuremath{\tilde{\beta}},\theta_1-n\omega}(2C) \leq T^{n-1}_{\beta_0,\theta_1-n\omega}(2C)$ and consequently \begin{eqnarray*} T^n_{\ensuremath{\tilde{\beta}},\theta_1-n\omega}(2C) & = & T_{\ensuremath{\tilde{\beta}},\theta_1-\omega}(T^{n-1}_{\ensuremath{\tilde{\beta}},\theta_1-n\omega}(2C)) \\ & \leq & T_{\ensuremath{\tilde{\beta}},\theta_1-\omega}(T^{n-1}_{\beta_0,\theta_1-n\omega}(2C)) \\ & = & F(T^{n-1}_{\beta_0,\theta_1-n\omega}(2C)) - \ensuremath{\tilde{\beta}} \cdot g(\theta_1-\omega) \\ & = & T^n_{\beta_0,\theta_1-n\omega}(2C) - (\ensuremath{\tilde{\beta}} - \beta_0) \cdot g(\theta_1-\omega) \\ & \leq & \varphi^+(\theta_1) + \frac{\epsilon}{2} - \epsilon \ < \ \varphi^+(\theta_1) \ = \ \psi(\theta_1) \ . \end{eqnarray*} However, already for $T_{\beta_0}$ the orbits of all points below $\psi$ eventually enter $M$, and again due to the one-sided nature of the forcing this will surely stay true for the respective orbits generated with $T_{\ensuremath{\tilde{\beta}}}$. Thus, for $\beta=\ensuremath{\tilde{\beta}}$ the orbit starting at $(\theta_1,2C)$ ends up in $M$ and therefore converges to the lower bounding graph. As $\epsilon > 0$ was arbitrary, this proves statement (iii). \end{list} Now assume the second alternative in (\ref{cond:F3}) holds, i.e.\ for some $c \in (x^-,0]$ the map $F_{|[c,2C]}$ is strictly concave and $F_{|[-2C,x^-]}$ is uniformly contracting. Then the above proof basically remains the same; the only difficulty is to see that for any $\beta \in [0,C]$ there cannot be more than three invariant graphs. However, on the one hand it can be seen as above that the lower bounding graph $\varphi^-$ is the only invariant graph in $M$ and no other invariant graph intersects $M$, since all orbits in this set converge to $\varphi^-$. On the other hand we can apply Theorem~\ref{thm:concave} with $I(\theta) = [0,2C]$ to see that there can be at most two invariant graphs in $M^c$.
Apart from this, the above arguments work in exactly the same way, replacing Theorem~\ref{thm:schwarzian} by Theorem~\ref{thm:concave} where necessary. {\raggedleft $\Box$ \\} \medskip \textit{\bf Proof of Lemma \ref{lem:parameterdependence}~.} The continuity simply follows from the fact that both the situations above and below the bifurcation are stable, due to the non-zero Lyapunov exponents. Consequently, the sets $\{ (\alpha,\beta) \mid \beta < \beta_0(\alpha) \}$ and $\{(\alpha,\beta) \mid \beta> \beta_0(\alpha)\}$ are open, which means that $\alpha \mapsto \beta_0(\alpha)$ must be continuous. In order to see the monotonicity, let $T_{\alpha,\beta}$ be the system given by (\ref{eq:generalsystem}) with $F = F_\alpha$. Suppose that $\tilde{\alpha} > \alpha$. Denote the upper bounding graph of the system $T_{\alpha,\beta_0(\alpha)}$ by $\varphi^+$ and the invariant graph in the middle by $\psi$. As all points on or below the 0-line eventually converge to the lower bounding graph (see the proof of Theorem~\ref{thm:saddlenode}), the invariant graphs $\psi$ and $\varphi^+$ must be strictly positive. As $\psi$ is lower semi-continuous and $\varphi^+ \geq \psi$, both graphs are uniformly bounded away from 0. Similarly, as $F_\alpha(1) = 1$, $g \geq 0$ and $g$ does not vanish identically, $\varphi^+$ is strictly smaller than 1 and, being upper semi-continuous, uniformly bounded away from 1. Thus, there exists some $\delta > 0$ such that $\delta \leq \varphi^+ \leq 1-\delta$. For any $x \in [\delta,1-\delta]$ the map $F_\alpha(x)$ is strictly increasing in $\alpha$.\footnote{We have \[ \frac{\partial}{\partial \alpha} F_\alpha(x) \ = \ \frac{\partial}{\partial \alpha} \left( \frac{\arctan(\alpha x)}{\arctan(\alpha)} \right) \ = \ \left( \frac{x\cdot \arctan(\alpha)}{1+\alpha^2x^2} - \frac{\arctan(\alpha x)}{1+\alpha^2} \right) \cdot \arctan(\alpha)^{-2} \ . \] This is positive if and only if \[ G_\alpha(x) \ := \ x \cdot \arctan(\alpha) \cdot (1+\alpha^2) - \arctan(\alpha x) \cdot (1 + \alpha^2x^2) \] is positive.
However, it is easy to verify that $G_\alpha(0) = G_\alpha(1) = 0$ and $G_\alpha$ is strictly concave on $(0,1]$, i.e.\ $\frac{\partial^2}{\partial x^2} G_\alpha(x) < 0 \ \forall x \in (0,1]$, such that $G_\alpha(x) > 0 \ \forall x \in (0,1)$. } Due to compactness this means that there exists $\epsilon > 0$, such that $F_{\tilde{\alpha}} > F_{\alpha}+\epsilon$ on $[\delta,1-\delta]$. Let $\ensuremath{\tilde{\beta}} := \beta_0(\alpha) + \epsilon$. Then \[ T_{\tilde{\alpha},\ensuremath{\tilde{\beta}},\theta}(x) \ > \ T_{\alpha,\beta_0(\alpha),\theta}(x) \ \ \ \ \ \forall \ensuremath{(\theta,x)} \in \ensuremath{\mathbb{T}^{1}} \times [\delta,1-\delta] \ . \] Consequently $T_{\tilde{\alpha},\ensuremath{\tilde{\beta}}}$ maps the graph $\varphi^+$ strictly above itself, which means that the upper bounding graph $\tilde{\varphi}^+$ of this system must be above $\varphi^+$. It can therefore not coincide with the lower bounding graph, which lies below the 0-line. Hence $\beta_0(\tilde{\alpha}) \geq \ensuremath{\tilde{\beta}} > \beta_0(\alpha)$. {\raggedleft $\Box$ \\} \subsection{Sink-source-orbits and SNA: Proof of Theorem~\ref{thm:sinksourcesna}} \label{Sinksourceorbits} Suppose that $T$ satisfies the assumptions of Theorem~\ref{thm:sinksourcesna} and denote the upper and lower bounding graph by $\varphi^+$ and $\varphi^-$, respectively. Suppose there exists no non-continuous invariant graph with negative Lyapunov exponent, but a point $(\theta_0,x_0) \in \ensuremath{\mathbb{T}^{1}} \times X$ with $\lambda^+\ensuremath{(\theta_0,x_0)} > 0$ and $\lambda^-\ensuremath{(\theta_0,x_0)} >0$ (i.e.\ a sink-source-orbit). Let \[ \psi^+(\theta) \ := \ \inf \{ \varphi(\theta) \mid \varphi \textrm{ is a continuous } T \textit{-invariant graph with } \varphi(\theta_0) \geq x_0 \} \ , \] with $\psi^+ :\equiv \varphi^+$ if no such graph $\varphi$ exists.
Similarly, define \[ \psi^-(\theta) \ := \ \sup \{ \varphi(\theta) \mid \varphi \textrm{ is a continuous } T \textit{-invariant graph with } \varphi(\theta_0) \leq x_0 \} \ , \] with $\psi^- :\equiv \varphi^-$ if there is no such graph $\varphi$. By the continuity and monotonicity of the fibre maps, $\psi^+$ and $\psi^-$ will be invariant graphs again. In addition, $\psi^+$ will be upper and $\psi^-$ lower semi-continuous and $\psi^- \leq \psi^+$. Thus, the set $A := [\psi^-,\psi^+]$ is compact. By a semi-uniform ergodic theorem contained in \cite{sturman/stark:2000} (Theorem~1.9), both $\lambda^+\ensuremath{(\theta_0,x_0)}$ and $-\lambda^-\ensuremath{(\theta_0,x_0)}$ must be contained in the convex hull of the set \[ \left\{ \int_A \log D\ensuremath{T_{\theta}(x)} \ d\mu\ensuremath{(\theta,x)} \mid \mu \textit{ is a } T_{|A}\textit{-invariant and ergodic probability measure} \right\} \ . \] As all ergodic measures are associated to invariant graphs (see (\ref{eq:associatedmeasure})), this means that there must exist invariant graphs with positive and negative Lyapunov exponents in $A$. However, as we assumed that all stable invariant graphs are continuous and there are no continuous invariant graphs contained in the interior of $A$ by the definition of $\psi^\pm$, the only possible candidates for a negative Lyapunov exponent are $\psi^+$ and $\psi^-$. We consider the case where only $\lambda(\psi^-) < 0$; if $\psi^+$ or both invariant graphs are stable, this can be dealt with similarly. Note that by the assumption we made at the beginning, the negative Lyapunov exponent ensures that $\psi^-$ must be continuous. Consequently, the convergence of the Lyapunov exponents is uniform on $\psi^-$, such that there is an open neighbourhood of this curve which is uniformly contracted in the vertical direction by some iterate of $T$.
Therefore, if we define \[ \tilde{\psi}^-(\theta) \ := \ \inf\{x \geq \psi^-(\theta) \mid \limsup_{n\ensuremath{\rightarrow} \infty} |\ensuremath{T_{\theta}}^n(x) - \psi^-(\theta+n\omega)| > 0 \} \ , \] then $\tilde{\psi}^- > \psi^-$, and in addition $\tilde{\psi}^-$ is lower semi-continuous. Note that \[ \ensuremath{\lim_{n\rightarrow\infty}} |\ensuremath{T_{\theta}}^n(x) - \psi^-(\theta+n\omega)| \ = \ 0 \ \ \ \ \ \forall \ensuremath{(\theta,x)} \in [\psi^-,\tilde{\psi}^- ) \] by definition. The forward orbit of \ensuremath{(\theta_0,x_0)}\ cannot converge to $\psi^-$ as this contradicts $\lambda^+\ensuremath{(\theta_0,x_0)} > 0$. Therefore $x_0 \geq \tilde{\psi}^-(\theta_0)$. Further, there holds $\tilde{\psi}^- \leq \psi^+$. This means that \ensuremath{(\theta_0,x_0)}\ is contained in the compact set $\tilde{A} := [\tilde{\psi}^-,\psi^+]$. But as $\tilde{A}$ does not contain an invariant graph with negative Lyapunov exponent anymore, this contradicts $\lambda^-\ensuremath{(\theta_0,x_0)} > 0$, again by Theorem~1.9 in \cite{sturman/stark:2000}. The existence of a strange non-chaotic repeller follows in the same way by regarding the inverse of $T$ restricted to the global attractor. {\raggedleft $\Box$ \\} \include{tjaeger-memoirsII} \section{The strategy for the construction of the sink-source-orbits} \label{Strategy} The inductive construction of longer and longer trajectories which are expanding in the forward and contracting in the backward direction (compare Lemma~\ref{lem:sinksourceshadowing}) will be a rather involved procedure. On the one hand, a substantial amount of effort will have to be put into introducing the right objects and providing a number of preliminary estimates and technical statements in Section~\ref{Tools}. On the other hand, it will sometimes be quite hard to see the motivation for all this until the actual construction is carried out in Section~\ref{Construction}.
In order to give some guidance to the reader in the meanwhile, we will try to sketch a rough outline of the overall strategy in this section, and discuss at least some of the main problems we will encounter. In particular, we will try to indicate how a recursive structure appears in the construction, induced by the recurrence behaviour of the underlying irrational rotation. To this end, we will start by deriving some first (easy) estimates, which will make it much easier to talk about what happens further. This will show that up to a certain point the construction is absolutely straightforward. The further strategy will then only be outlined, as the tools developed in Section \ref{Tools} are needed before it can finally be converted into a rigorous proof in Section~\ref{Construction}. \subsection{The first stage of the construction} As mentioned in Section~\ref{Mechanism}, for a suitable choice of the functions $F$ and $g$ in (\ref{eq:generalsystem}) we can expect that the tips of the peaks correspond to a sink-source-orbit. However, as we do not know the bifurcation parameter exactly, we can only approximate it and show that in each step of the approximation there is a longer finite trajectory with the required behaviour. The existence of the sink-source-orbit at the bifurcation point will then follow from Lemma~\ref{lem:sinksourceshadowing}~. As we will concentrate only on trajectories in the orbit of the 0-fibre, the following notation will be very convenient: \begin{definition} \label{def:parameterfamily} For the map $T_\beta$ defined in Theorem~\ref{thm:snaexistence} with fibre maps $T_{\beta,\theta}$, let \[ T_{\beta,\theta,n} := T_{\beta,\theta+\omega_{n-1}} \circ \ldots \circ T_{\beta,\theta} \] if $n > 0$ and $T_{\beta,\theta,0} := \textrm{Id}$. Further, for any pair $l \leq n$ of integers let \[ \xi_n(\beta,l) := T_{\beta,\omega_{-l},n+l}(3) \ . 
\] In other words, $\xi_n(\beta,l)$ is the $x$-value of the point of the $T_\beta$-forward orbit of $(\omega_{-l},3)$ which lies on the $\ensuremath{\omega_n}$-fibre. Thus, the lower index always indicates the fibre on which the respective point is located. \end{definition} Slightly abusing language, we will refer to $(\xi_j(\beta,l))_{j\geq-l}$ as the forward orbit of the point $(\omega_{-l},3)$, suppressing the $\theta$-coordinates. \medskip Note that under the assumptions of Theorem~\ref{thm:snaexistence} (which imply in particular that we are in the case of one-sided forcing, i.e.\ $g\geq 0$) the mapping $\beta \mapsto \xi_n(\beta,l)$ is monotonically decreasing for any fixed numbers $l$ and $n$, with strict monotonicity if $l\geq 0$ and $n\geq 1$ since $g(0) =1$. In addition, we claim that when $n\geq 1$ and $l \geq 0$, the interval $\ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ is covered as $\beta$ increases from $0$ to $\frac{3}{2}$, i.e.\ \begin{equation} \label{eq:betacovers} \textstyle \xi_n(\frac{3}{2},l) \ < \ -\ensuremath{\frac{1}{\alpha}} \ . \end{equation} In order to see this, note that $\xi_0(\beta,l)$ is never larger than 3, such that $\xi_0(\beta,l)-x_\alpha \leq 2 - \frac{2}{\ensuremath{\sqrt{\alpha}}}$. Therefore, using $F(x_\alpha)=x_\alpha$, (\ref{cond:Fcontraction}) and $g(0)=1$ we obtain \[ \textstyle \xi_1(\frac{3}{2},l) \ = \ F(\xi_0(\frac{3}{2},l))-\frac{3}{2}\cdot g(0) \ \leq \ x_\alpha + \frac{2-\frac{2}{\ensuremath{\sqrt{\alpha}}}}{2\ensuremath{\sqrt{\alpha}}} - \frac{3}{2} \ = \ \frac{3}{\ensuremath{\sqrt{\alpha}}} - \ensuremath{\frac{1}{\alpha}} - \frac{1}{2} \ . \] By (\ref{cond:alphagamma0}) the right side is smaller than $-\ensuremath{\frac{1}{\alpha}}$, and as $\ensuremath{\mathbb{T}^{1}} \times [-3,-\ensuremath{\frac{1}{\alpha}})$ is always mapped into itself this proves our claim.
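The monotone dependence of $\xi_n(\beta,l)$ on $\beta$ is elementary but central to the whole construction. The following small Python sketch illustrates it numerically; the concrete choices of $F$, $g$, $\omega$ and the orbit length are purely illustrative assumptions and not the maps of Theorem~\ref{thm:snaexistence} (any increasing fibre map and any non-negative forcing exhibit the same behaviour).

```python
import math

# Toy stand-ins (illustrative assumptions, not the F and g of the theorem):
# any increasing fibre map F and any non-negative forcing g exhibit the
# monotone dependence of xi_n(beta, l) on beta.
OMEGA = (math.sqrt(5) - 1) / 2          # irrational rotation number

def F(x):
    # increasing interval map with stable fixed points near +-2.98 (toy choice)
    return 3.0 * math.tanh(x)

def g(theta):
    # non-negative forcing with a peak g(0) = 1 at theta = 0 (toy choice)
    d = min(theta % 1.0, 1.0 - theta % 1.0)   # distance to 0 on the circle
    return max(0.0, 1.0 - 10.0 * d)

def xi(n, beta, l):
    """xi_n(beta, l): x-value on the omega_n-fibre of the forward orbit of
    (omega_{-l}, 3) under the fibre maps T_{beta,theta}(x) = F(x) - beta*g(theta)."""
    theta, x = (-l * OMEGA) % 1.0, 3.0
    for _ in range(n + l):
        x = F(x) - beta * g(theta)
        theta = (theta + OMEGA) % 1.0
    return x

# One-sided forcing (g >= 0) makes beta -> xi_n(beta, l) monotonically decreasing.
betas = [0.0, 0.5, 1.0, 1.5]
values = [xi(20, b, 5) for b in betas]
assert all(v1 >= v2 for v1, v2 in zip(values, values[1:]))
```

Since increasing $\beta$ only lowers each fibre map and the fibre maps are monotone in $x$, the order of the orbits is preserved step by step, which is exactly the monotonicity used throughout this section.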
\ \\ From now on, we use the following notation: For any pair $k,n$ of integers with $k \leq n$ let \begin{equation} \label{eq:kninterval} [k,n] \ := \ \{ k \ensuremath{,\ldots,} n \} \ . \end{equation} What we want to derive is a statement of the following kind: \begin{quotation} \emph{ If $\xi_N(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ for `suitable' integers $l \geq 0 $ and $N \geq 1$, then $\xi_j(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ for `most' $j \in [1,N]$ and $\xi_j(\beta,l) \geq \gamma$ for `most' $j \in [-l,0]$. } \end{quotation} Of course, we have to specify what `suitable' and `most' mean, but as this will be rather complicated we postpone it for a while. As (\ref{eq:betacovers}) implies that there always exist values of $\beta \in [0,\frac{3}{2}]$ with $\xi_n(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$, such a statement would ensure the existence of trajectories which spend most of the backward time in the contracting region and most of the forward time in the expanding region. This is exactly what is needed for the application of Lemma~\ref{lem:sinksourceshadowing}~. As mentioned, up to a certain point things are quite straightforward: \begin{lem} \label{lem:inductionstart} Suppose that the assumptions of Theorem~\ref{thm:snaexistence} hold. Further, let $n\geq 1$, $l \geq 0$ and assume that \[ d(\ensuremath{\omega_j},0) \ \geq \ \frac{3\gamma}{L_2} \ \ \ \forall j \in [-l,-1] \cup [1,n-1] \ . \] Then $\xi_n(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\beta \in [1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}]$, \begin{equation} \xi_j(\beta,l) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \hspace{3.8em} \forall j \in [1,n] \ \end{equation} and \begin{equation} \xi_j(\beta,l) \ \geq \ \gamma \hspace{6em} \forall j \in [-l,0] \ .
\end{equation} \end{lem} The proof relies on the following basic estimate: \begin{lem} \label{lem:basicestimate} Suppose that the assumptions of Theorem~\ref{thm:snaexistence} hold. Further, assume that $\beta \leq 1 + \frac{4}{\ensuremath{\sqrt{\alpha}}}$, $j \geq -l$ and $d(\ensuremath{\omega_j},0) \geq \frac{3\gamma}{L_2}$. Then $\xi_j(\beta,l) \geq \ensuremath{\frac{1}{\alpha}}$ implies $\xi_{j+1}(\beta,l) \geq \gamma$ and $\xi_j(\beta,l) \leq - \ensuremath{\frac{1}{\alpha}}$ implies $\xi_{j+1}(\beta,l) \leq -\gamma$. Consequently, $\xi_{j+1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_j(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. \end{lem} \textit{Proof.} \ Suppose that $\xi_j(\beta,l) \geq \ensuremath{\frac{1}{\alpha}}$. Using $d(\ensuremath{\omega_j},0) \geq \frac{3\gamma}{L_2}$ and (\ref{cond:sharppeak}) we obtain that $g(\ensuremath{\omega_j}) \leq 1-3\gamma$. Therefore \begin{eqnarray*} \xi_{j+1}(\beta,l) & = & F(\xi_j(\beta,l)) - \beta\cdot g(\ensuremath{\omega_j}) \\ & \stackrel{(\ref{cond:Fmapsover})}{\geq} & \textstyle 1 - \gamma - (1+\frac{4}{\ensuremath{\sqrt{\alpha}}})(1-3\gamma) \ \geq \ 2\gamma - \frac{4}{\ensuremath{\sqrt{\alpha}}} \ \stackrel{(\ref{cond:alphagamma0})}{\geq} \ \gamma \ . \end{eqnarray*} As $g \geq 0$, we also see that $\xi_j(\beta,l) \leq -\ensuremath{\frac{1}{\alpha}}$ implies \[ \xi_{j+1}(\beta,l) \ \leq \ F(\xi_j(\beta,l)) \ \stackrel{(\ref{cond:Fmapsover})}{\leq} \ -(1-\gamma) \ \stackrel{(\ref{cond:gamma0})}{\leq} \ -\gamma \ . \] {\raggedleft $\Box$ \\} \ \\ \textit{Proof of Lemma \ref{lem:inductionstart}:} \\ Suppose that $\xi_n(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. We first show that $\beta \leq 1+\frac{3}{\ensuremath{\sqrt{\alpha}}}$: As $\xi_0(\beta,l) \leq 3$ we can use $F(x_\alpha) = x_\alpha$ and (\ref{cond:Fcontraction}) to see that $F(\xi_0(\beta,l)) \leq 1+\frac{3}{\ensuremath{\sqrt{\alpha}}}-\ensuremath{\frac{1}{\alpha}}$. 
As $g(0)=1$ this gives \[ \xi_1(\beta,l) \ = \ F(\xi_0(\beta,l)) - \beta \ \leq \ \left(1+\frac{3}{\ensuremath{\sqrt{\alpha}}} - \beta\right) - \ensuremath{\frac{1}{\alpha}} \ . \] Thus, for $\beta > 1 + \frac{3}{\ensuremath{\sqrt{\alpha}}}$ we have $\xi_1(\beta,l) < -\ensuremath{\frac{1}{\alpha}}$, and as $\ensuremath{\mathbb{T}^{1}} \times [-3,-\ensuremath{\frac{1}{\alpha}})$ is mapped into itself this would yield $\xi_n(\beta,l) < - \ensuremath{\frac{1}{\alpha}}$, contradicting our assumption. Therefore $\xi_n(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\beta \leq 1 + \frac{3}{\ensuremath{\sqrt{\alpha}}}$. Now we can apply Lemma~\ref{lem:basicestimate} to all $j \in [1,n-1]$ and obtain $\xi_j(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j \in [1,n]$ by backwards induction on $j$, starting at $j=n$. Similarly, $\xi_j(\beta,l) \geq \gamma \ \forall j = -l \ensuremath{,\ldots,} 0$ follows from $\xi_{-l}(\beta,l) = 3 \geq \gamma$ by forwards induction, as we can again apply Lemma~\ref{lem:basicestimate} to all $j \in [-l,-1]$. It remains to prove that $\beta \geq 1 + \ensuremath{\frac{1}{\sqrt{\alpha}}}$. We already showed that $\xi_0(\beta,l) \geq \gamma \geq x_\alpha-1$, such that we can use $F(x_\alpha) = x_\alpha$ and (\ref{cond:Fcontraction}) again to see that \[ \xi_1(\beta,l) \ \geq \ x_\alpha - \frac{1}{2\ensuremath{\sqrt{\alpha}}} - \beta \ = \ 1 + \frac{3}{2\ensuremath{\sqrt{\alpha}}} - \beta \ \stackrel{(\ref{cond:alphagamma0})}{\geq} \ \left(1 + \ensuremath{\frac{1}{\sqrt{\alpha}}} - \beta \right) + \ensuremath{\frac{1}{\alpha}} \ . \] As we also showed above that $\xi_1(\beta,l) \leq \ensuremath{\frac{1}{\alpha}}$, the required estimate follows. {\raggedleft $\Box$ \\} \subsection{Dealing with the first close return} \label{Firstreturn} As we have seen above, everything works fine as long as the $\ensuremath{\omega_j}$ do not enter the interval $B_{\frac{3\gamma}{L_2}}(0)$ again. 
Thus, in the context of Section~\ref{Mechanism} the critical region ${\cal C}$ corresponds to the vertical strip $B_{\frac{3\gamma}{L_2}}(0) \times [-3,3]$. We will now sketch the argument by which the construction can be continued even beyond the first return to this critical region: Suppose $m\in\N$ is the first time such that $d(\ensuremath{\omega_m},0) < \frac{3\gamma}{L_2}$ and fix some $l \leq m-1$. Then Lemma~\ref{lem:inductionstart} yields information up to time $m$, meaning that we can apply it whenever $n \leq m$. But we cannot ensure that $\xi_{m+1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_m(\beta,l)\in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ as before. In fact, this will surely be wrong when $\ensuremath{\omega_m}$ is too close to 0, such that $g(\ensuremath{\omega_m}) \approx 1$. In order to deal with this, we will define a certain `exceptional' interval $J(m) = [ m-l^- \ensuremath{,\ldots,} m + l^+ ]$. The integers $l^-$ and $l^+$ will have to be chosen very carefully later on, but for now the reader should just assume that they are quite small in comparison to both $m$ and $l$. Then, instead of showing that $\xi_{m+1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_m(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ as before, we will prove that \begin{equation} \label{eq:secondstage} \xi_{m+l^++1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \textit{ implies } \ \ \xi_{m-l^--1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ . \end{equation} Using Lemma~\ref{lem:inductionstart}, the latter then ensures that $\xi_j(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j \in [1,m-l^--1]$. Recall that as we are in the case of one-sided forcing, the dependence of $\xi_n(\beta,l)$ on $\beta$ is strictly monotone. 
Thus, in order to prove (\ref{eq:secondstage}), it will suffice to consider the two unique parameters $\beta^+$ and $\beta^-$ which satisfy \begin{equation} \label{eq:betaplusdef} \xi_{m-l^--1}(\beta^+,l) \ = \ \ensuremath{\frac{1}{\alpha}} \end{equation} and \begin{equation} \label{eq:betaminusdef} \xi_{m-l^--1}(\beta^-,l) \ = \ -\ensuremath{\frac{1}{\alpha}} \ . \end{equation} If we can then show the two inequalities \begin{equation} \label{eq:betaplus} \xi_{m+l^++1}(\beta^+,l) \ > \ \ensuremath{\frac{1}{\alpha}} \end{equation} and \begin{equation} \label{eq:betaminus} \xi_{m+l^++1}(\beta^-,l) \ < \ -\ensuremath{\frac{1}{\alpha}} \ , \end{equation} this immediately implies (\ref{eq:secondstage}). Now, first of all the fact that (\ref{eq:betaminus}) follows from (\ref{eq:betaminusdef}) is obvious, as $\ensuremath{\mathbb{T}^{1}} \times [-3,-\ensuremath{\frac{1}{\alpha}}]$ is mapped into $\ensuremath{\mathbb{T}^{1}} \times [-3,-(1-\gamma)]$ by (\ref{cond:Fmapsover}), independent of the parameter $\beta$. Thus, it remains to show (\ref{eq:betaplus}). This will be done by comparing the orbit\footnote{Recall that we suppress the $\theta$-coordinate $\ensuremath{\omega_j}$ of points $(\ensuremath{\omega_j},\xi_j(\beta,l))$ from the forward orbit of $(\omega_{-l},3)$.} \begin{equation} \xi_{m-l^--1}(\beta^+,l) \ensuremath{,\ldots,} \xi_{m+l^++1}(\beta^+,l) \end{equation} with suitable \textit{`reference orbits'}, on which information is already available by Lemma~\ref{lem:inductionstart}~.
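To spell out the monotonicity argument behind this reduction (here we use that for one-sided forcing, where $g \geq 0$, larger $\beta$ forces the orbits further downwards, so that $\beta \mapsto \xi_j(\beta,l)$ is strictly decreasing for every $j \geq 1$; with the opposite sign convention the roles of $\beta^+$ and $\beta^-$ simply interchange): if $\xi_{m-l^--1}(\beta,l) > \ensuremath{\frac{1}{\alpha}}$, then $\beta < \beta^+$ and therefore
\[
\xi_{m+l^++1}(\beta,l) \ \geq \ \xi_{m+l^++1}(\beta^+,l) \ \stackrel{(\ref{eq:betaplus})}{>} \ \ensuremath{\frac{1}{\alpha}} \ ,
\]
and similarly $\xi_{m-l^--1}(\beta,l) < -\ensuremath{\frac{1}{\alpha}}$ together with (\ref{eq:betaminus}) yields $\xi_{m+l^++1}(\beta,l) < -\ensuremath{\frac{1}{\alpha}}$. Taking contrapositives gives exactly (\ref{eq:secondstage}).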
In order to make such comparison arguments precise (as sketched in Figure~\ref{fig:firstreturn} below), we will need the following concept: \begin{definition} \label{def:errorterm} For any $\beta_1,\beta_2 \in [0,\frac{3}{2}]$ and $\theta_1,\theta_2 \in \ensuremath{\mathbb{T}^{1}}$, the \textbf{error term} is defined as \begin{eqnarray*} \ensuremath{\textrm{\upshape err}}(\beta_1,\beta_2,\theta_1,\theta_2) & := & \ \sup_{n \in \ensuremath{\mathbb{Z}}} |\beta_1\cdot g(\theta_1+\ensuremath{\omega_n}) - \beta_2\cdot g(\theta_2+\ensuremath{\omega_n})| \ . \end{eqnarray*} Note that $\ensuremath{\textrm{\upshape err}}(\beta_1,\beta_2,\theta_1,\theta_2) = \sup_{n\in\ensuremath{\mathbb{Z}}} \supnorm{T_{\beta_1,\theta_1+\ensuremath{\omega_n}} - T_{\beta_2,\theta_2+\ensuremath{\omega_n}}}$. \end{definition} The next remark gives a basic estimate: \begin{bem} \label{bem:errorterm} Suppose that $g$ has Lipschitz-constant $L_1$ (as in (\ref{eq:g-lipschitz})). Further, assume that $\theta_1 = \ensuremath{\omega_k},\ \theta_2 = \omega_{k+m}$ for some $k,m\in \ensuremath{\mathbb{Z}}$, $d(\ensuremath{\omega_m},0) \leq \frac{2\epsilon}{L_2}$, and $\beta_1,\beta_2 \in [1,\frac{3}{2}]$ satisfy $|\beta_1-\beta_2| < 2\epsilon$. Then \[ \ensuremath{\textrm{\upshape err}}(\beta_1,\beta_2,\theta_1,\theta_2) \leq K \cdot \epsilon \] where $K := 3\cdot \frac{L_1}{L_2} + 2$. \end{bem} \textit{Proof.} \ For any $n\in \ensuremath{\mathbb{Z}}$, let $j:= k+n$. Then $\ensuremath{\omega_k}+\ensuremath{\omega_n} = \ensuremath{\omega_j}$ and $\omega_{k+m} + \ensuremath{\omega_n} = \omega_{j+m}$.
Thus, the above estimate follows from \begin{eqnarray*} \lefteqn{|\beta_1\cdot g(\ensuremath{\omega_j}) - \beta_2 \cdot g(\omega_{j+m})| \leq} \\ & & \beta_1\cdot |g(\ensuremath{\omega_j})-g(\omega_{j+m})| + g(\omega_{j+m})\cdot |\beta_1-\beta_2| \ \leq \ \beta_1 \cdot \frac{2\epsilon}{L_2} \cdot L_1 + 2\epsilon \ \leq K \cdot \epsilon \end{eqnarray*} {\raggedleft $\Box$ \\} \ \\ Thus, even if two finite trajectories are generated with slightly different parameters and are not located on the same but only on nearby fibres, the fibre maps which produce them will still be almost the same. This makes it possible to compare two such orbits, at least up to a certain extent. For the remainder of this section, the reader should just assume that the remaining differences between the fibre maps can always be neglected. Of course, when the construction is made rigorous later on it will be a main issue to show that this is indeed the case. Let us now turn to Figure~\ref{fig:firstreturn}, which illustrates the argument used to derive (\ref{eq:betaplus}). The first reference orbit, shown as crosses, is generated with the unique parameter $\beta^*$ that satisfies $\xi_m(\beta^*,l) = 0$. Due to Lemma~\ref{lem:inductionstart} (with $n=m$), we know that this orbit always stays in the expanding region before, i.e.\ \begin{equation} \label{eq:reforbitone} \xi_j(\beta^*,l) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \forall j = 1 \ensuremath{,\ldots,} m-1 \ . \end{equation} Recall that $\beta^+$ was defined by $\xi_{m-l^--1}(\beta^+,l) = \ensuremath{\frac{1}{\alpha}}$. This implies $\xi_{m-l^-}(\beta^+,l) \geq \gamma$ by Lemma~\ref{lem:basicestimate}~. 
Thus, the `new' orbit $\xi_{m-l^--1}(\beta^+,l),$ $\ldots,$ $\xi_{m+l^++1}(\beta^+,l)$ (corresponding to the black squares in Figure~\ref{fig:firstreturn}) leaves the expanding region and enters the contracting region (A), whereas the reference orbit (crosses) stays in the expanding region at the same time, i.e.\ $\xi_{m-l^-}(\beta^*,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$, by (\ref{eq:reforbitone}). \begin{figure}[t] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=snafirstreturn.eps, clip=, width=\linewidth, } \caption{\small The above diagram shows three finite trajectories: The `new' orbit $\xi_{m-l^--1}(\beta^+,l),$ $\ldots,$ $\xi_{m+l^++1}(\beta^+,l)$ (black squares), the first reference orbit $\xi_{m-l^--1}(\beta^*,l),$ $\ldots,$ $\xi_m(\beta^*,l)$ (crosses) and the second reference orbit $\xi_{-l^-}(\beta^+,l),$ $\ldots,$ $\xi_{l^++1}(\beta^+,l)$ (circles). For convenience, successive iterates on the circle are drawn in straight order. (This corresponds to the situation where either the rotation number $\omega$ is very small, or where we consider a $q$-fold cover of the circle $\ensuremath{\mathbb{T}^{1}}$.) After the first iterate, the new orbit leaves the expanding and enters the contracting region (A). Afterwards, the first reference orbit together with the strong expansion on $\ensuremath{\mathbb{T}^{1}} \times \overline{B_{\frac{2}{\alpha}}(0)}$ ensure that the new orbit stays in the contracting region as the $\ensuremath{\omega_m}$-fibre is approached (B). Consequently, it gets attracted to the second reference orbit, which also lies in the contracting region (C). When the 0-fibre is passed, the forcing acts stronger on the second reference orbit (which passes exactly through the 0-fibre) than on the new orbit (which only passes through the $\ensuremath{\omega_m}$-fibre). Therefore, the new orbit will be slightly above the second reference orbit afterwards (D). 
From now on, the expansion on $\ensuremath{\mathbb{T}^{1}} \times \overline{B_{\frac{2}{\alpha}}(0)}$ ensures that the new orbit eventually gets pushed out of the expanding region (E), and stays in the contracting region afterwards (F). } \label{fig:firstreturn} \end{minipage} \end{figure} Afterwards, due to the strong expansion on $\ensuremath{\mathbb{T}^{1}} \times \overline{B_\frac{2}{\alpha}(0)}$ it is not possible for the new orbit to approach the reference orbit anymore, such that it will stay `trapped' in the contracting region (B). In this way, we will obtain\footnote{We should mention that in this particular situation (\ref{eq:trapped}) could still be derived directly from Lemma~\ref{lem:basicestimate}. However, the advantage of the described comparison argument is that it is more flexible and will also work for later stages of the construction.} \begin{equation} \label{eq:trapped} \xi_{j}(\beta^+,l) \ \geq \ \gamma \ \ \ \forall j = m- l^- \ensuremath{,\ldots,} m \ . \end{equation} Now we start to use a second reference orbit, namely $\xi_{-l^-}(\beta^+,l),$ $\ldots,$ $\xi_{l^++1}(\beta^+,l)$, shown by the circles in Figure~\ref{fig:firstreturn}. Note that this time it will be generated with exactly the same parameter $\beta^+$ as the new orbit, but located on slightly different fibres. By Lemma~\ref{lem:inductionstart} (with $n=m-l^--1$, note that $\xi_{m-l^--1}(\beta^+,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by definition), we know that \begin{equation} \label{eq:reforbittwoa} \xi_j(\beta^+,l) \ \geq \ \gamma \ \ \ \forall j = -l^- \ensuremath{,\ldots,} 0 \end{equation} and \begin{equation} \label{eq:reforbittwob} \xi_j(\beta^+,l) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \forall j = 1 \ensuremath{,\ldots,} l^++1 \ . \end{equation} Combining (\ref{eq:trapped}) and (\ref{eq:reforbittwoa}), we see that the two orbits we want to compare both spend the first $l^-$ iterates in the contracting region.
Thus they are attracted to each other, and consequently $|\xi_0(\beta^+,l) - \xi_m(\beta^+,l)|$ will be very small (C). In fact, if $l^-$ has been chosen large enough, then this difference will be of the same magnitude as $\epsilon := L_2 \cdot d(\ensuremath{\omega_m},0)$, i.e.\ \begin{equation} \label{eq:orbitcontraction} |\xi_0(\beta^+,l) - \xi_m(\beta^+,l)| \ \leq \ \kappa \cdot \epsilon \end{equation} for a suitable constant $\kappa > 0$. The next step is crucial: When going from $\xi_0(\beta^+,l)$ to $\xi_1(\beta^+,l)$, the downward forcing takes its maximum (i.e.\ $g(0)=1$). In contrast to this, in the transition from $\xi_m(\beta^+,l)$ to $\xi_{m+1}(\beta^+,l)$ the forcing function $g(\ensuremath{\omega_m})$ is only close to 1. More precisely, (\ref{cond:sharppeak}) yields $g(\ensuremath{\omega_m}) \leq 1-\epsilon$. Therefore \begin{eqnarray*} \label{eq:crucialstep} \lefteqn{\xi_{m+1}(\beta^+,l) - \xi_1(\beta^+,l) \ \geq} \\ & \geq & \beta^+ \cdot \epsilon - |F(\xi_m(\beta^+,l)) - F(\xi_0(\beta^+,l))| \ \stackrel{(\ref{cond:Fcontraction})}{\geq} \ \epsilon -\frac{\kappa\cdot \epsilon}{\ensuremath{\sqrt{\alpha}}} \ \geq \ \frac{\epsilon}{2} \ , \end{eqnarray*} where we have assumed that $\ensuremath{\sqrt{\alpha}}$ will be larger than $2\kappa$ and $\beta^+ \geq 1$. Thus, when the orbits pass the 0- and $\ensuremath{\omega_m}$-fibre, respectively, a difference is created and the new orbit will be slightly above the reference orbit afterwards (D). But from that point on, the reference orbit stays in the expanding region by (\ref{eq:reforbittwob}). Therefore, the small difference will be expanded until finally the new orbit is `thrown out' upwards (E) and gets trapped in the contracting region again (F). This will complete the proof of (\ref{eq:betaplus}). The crucial point now is the fact that the scheme in Figure~\ref{fig:firstreturn} offers a lot of flexibility. 
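Before moving on, it is instructive to record the orders of magnitude involved in the two phases (C) and (E). The following is only a heuristic count, which ignores the error terms controlled by Remark~\ref{bem:errorterm} and works with the crude contraction and expansion rates $\alpha^{\mp\frac{1}{2}}$ from (\ref{cond:Fcontraction}) and (\ref{cond:Fexpansion}): the initial vertical distance between the compared orbits is at most 6, so (\ref{eq:orbitcontraction}) requires roughly
\[
6 \cdot \alpha^{-\frac{l^-}{2}} \ \leq \ \kappa \cdot \epsilon \ ,
\]
while the difference $\frac{\epsilon}{2}$ created in (D) must be expanded beyond the expanding region during (E), i.e.\
\[
\frac{\epsilon}{2} \cdot \alpha^{\frac{l^+}{2}} \ \geq \ \frac{2}{\alpha} \ .
\]
If $\epsilon \in [\alpha^{-(p+1)},\alpha^{-p})$ for some $p \in \N_0$, both conditions hold as soon as $l^-$ and $l^+$ exceed $2p$ plus a constant, which already indicates the linear dependence on the closeness of the return that will reappear in Section~\ref{Outline}.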
We have described the argument for the particular case of the first close return, but in fact all close returns will be treated in a similar way. The only difference will be the fact that the reference orbits we use in the later stages of the construction may not stay in the expanding (or respectively contracting) region all of the considered times. However, this will still be true for most times, and that is sufficient to ensure that on average the expansion (or contraction) prevails and the new orbit shows the required behaviour. \subsection{Admissible and regular times} \label{Admissiblepoints} The picture we have drawn so far is already sufficient to motivate some further terminology. As we have seen above, not all times $N\in\N$ are suitable for the construction, in the sense of the statement given below (\ref{eq:kninterval}). Thus, we will distinguish between times which are \textit{`admissible'} and others which are not. Only for admissible $N$ will we show that $\xi_N(\beta,l)\in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ allows us to draw conclusions about previous times $j < N$. To be more precise, for any given admissible $N$ we will define a set $R_N \ensuremath{\subseteq} [1,N]$ and show that $\xi_N(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_j(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j \in R_N$. The integers $j \in R_N$ will then be called \textit{`regular with respect to $N$'}. The precise definitions of admissible and regular times will be given in Sections~\ref{Admissibletimes} and \ref{Regulartimes}~. In order to give an example, consider the situation of the previous section: There, all points $N\leq m$ are admissible, and so is $m+l^++1$, but $m+1 \ensuremath{,\ldots,} m+l^+$ are not admissible. Further, for any $N \leq m$ we can choose $R_N = [1,N]$, and the set $R_{m+l^++1}$ contains at least all points from $[1,m+l^++1] \setminus J(m)$.
However, it will turn out that we have to define even more times as regular w.r.t. $m+l^++1$, and thus derive information about them, as this will be needed in the later stages of the construction. Namely, the additional points we need to be regular are $m+1 \ensuremath{,\ldots,} m+l^+$. The reason why this is necessary is explained in Section~\ref{Outline} and Figure~\ref{fig:rightside}. However, in this particular situation it is not difficult to achieve this: As $\ensuremath{\omega_m}$ is a close return, we can expect (and also ensure by using the diophantine condition and suitable assumptions on $\gamma$) that $\omega_{m+1} \ensuremath{,\ldots,} \omega_{m+l^+}$ are rather far away from 0, in particular not contained in $B_{\frac{3\gamma}{L_2}}(0)$. But this means that we can apply Lemma~\ref{lem:basicestimate} to $m+1 \ensuremath{,\ldots,} m+l^+$ and obtain that $\xi_{m+l^++1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_j(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j = m+1 \ensuremath{,\ldots,} m+l^+$ by backwards induction on $j$. Thus, if we divide the interval $J(m)$ into two parts $J^-(m) := [m-l^-,m]$ and $J^+(m) := [m+1,m+l^+]$, then we can also define all points in the right part $J^+(m)$ as regular, such that $R_{m+l^++1} = [1,m+l^++1] \setminus J^-(m)$. The reader should keep in mind that although most points will be both regular and admissible, the difference between the two notions is absolutely crucial. For example, for the argument in the previous section it was vitally important that $m$ itself is admissible, as the first reference orbit ended exactly on the $\ensuremath{\omega_m}$-fibre. But on the other hand, $m$ will not be regular w.r.t.\ any $N\geq m$, as it is a close return itself and certainly contained in $J^-(m)$.
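In summary, the bookkeeping for the first close return reads as follows:
\[
J(m) \ = \ J^-(m) \cup J^+(m) \ , \quad R_N \ = \ [1,N] \ \ \forall N \leq m \ , \quad R_{m+l^++1} \ = \ [1,m+l^++1] \setminus J^-(m) \ .
\]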
\subsection{Outline of the further strategy} \label{Outline} For a while, the arguments from Section~\ref{Firstreturn} will allow us to continue the construction as described. When there is another close return at time $m'>m$ and $d(\omega_{m'},0)$ is approximately of the same size as $d(\ensuremath{\omega_m},0)$, then the diophantine condition will ensure that $m$ and $m'$ are far apart. Thus, if we define an exceptional interval $J(m')$ again, this will be far away from $J(m)$ and we can proceed more or less as before. However, we have also seen that the minimal lengths of $l^-$ and $l^+$ depend on how close $\ensuremath{\omega_m}$ is to 0, as there must be enough time for the contraction to work until (\ref{eq:orbitcontraction}) is ensured, and similarly for the expansion until the new orbit is pushed out of the expanding region. To be more precise, let $p \in \N_0$ be such that $\epsilon = L_2 \cdot d(\ensuremath{\omega_m},0) \in [\alpha^{-(p+1)},\alpha^{-p})$. Then the minimal lengths of $l^-$ and $l^+$ will depend linearly on $p$, as the expansion and contraction rates are always between $\alpha^{\pm\ensuremath{\frac{1}{2}}}$ and $\alpha^{\pm 2}$ by (\ref{cond:Fexpansion}) and (\ref{cond:Fcontraction}). Thus, at some stage we will encounter a close return at time $\hat{m}$, for which the quantities $\hat{l}^-$ and $\hat{l}^+$ needed to define a suitable interval $J(\hat{m}) = [\hat{m}-\hat{l}^-,\hat{m}+\hat{l}^+]$ are larger than $l$ and $m$. At first, assume that only $\hat{l}^+ > m$, whereas $\hat{l}^-$ is still smaller than $l$. As mentioned, we will be able to show that \begin{equation} \xi_{\hat{m}+\hat{l}^++1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \textit{ implies } \ \ \xi_{\hat{m}-\hat{l}^--1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \end{equation} by a slight modification of the argument sketched in Figure~\ref{fig:firstreturn}~.
In fact, for the left side there is no difference: If $\beta^+$ and $\beta^*$ are again chosen such that $\xi_{\hat{m}-\hat{l}^--1}(\beta^+,l) = \ensuremath{\frac{1}{\alpha}}$ and $\xi_{\hat{m}}(\beta^*,l) = 0$, then the first reference orbit $\xi_{\hat{m}-\hat{l}^--1}(\beta^*,l),$ $\ldots,$ $\xi_{\hat{m}}(\beta^*,l)$ will again stay in the expanding region all the time. Therefore we can use it to control the first part $\xi_{\hat{m}-\hat{l}^--1}(\beta^+,l),$ $\ldots,$ $\xi_{\hat{m}}(\beta^+,l)$ of the new orbit as before, and conclude that it always stays in the contracting region. As the same will be true for the first part $\xi_{-\hat{l}^-}(\beta^+,l),$ $\ldots,$ $\xi_0(\beta^+,l)$ of the second reference orbit, the contraction ensures again that $|\xi_{\hat{m}}(\beta^+,l) - \xi_0(\beta^+,l)|$ is small enough (compare (\ref{eq:orbitcontraction})), and consequently $\xi_{\hat{m}+1}(\beta^+,l)$ will be slightly above $\xi_1(\beta^+,l)$ after the 0-fibre is passed (compare (\ref{eq:crucialstep})). But afterwards, the second part $\xi_1(\beta^+,l) \ensuremath{,\ldots,} \xi_{\hat{l}^++1}(\beta^+,l)$ of the reference orbit will not stay in the expanding region all the time, as the exceptional interval $J(m)$ is contained in $[1,\hat{l}^+]$ and the points in $J^-(m)$ will not be regular w.r.t.\ $\hat{m}-\hat{l}^--1$. However, as all other points in $[1,\hat{l}^+]$ are regular, it is still possible to show that the new orbit is eventually pushed out of the expanding region again, but this needs a little bit more care than before. Figure~\ref{fig:rightside} shows one of the problems we will encounter, and thereby explains why it is so vitally important that we have information about the points in $J^+(m)$ as well, i.e.\ define them as regular before.
\begin{figure}[t] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-rightside.eps, clip=, width=\linewidth, } \caption{\small In the above diagram, $J(m)$ is located at the end of $[1,\hat{l}^+]$, such that $m+l^+ = \hat{l}^+$. At first, the new orbit $\xi_{\hat{m}+1}(\beta^+,l),$ $\ldots,$ $\xi_{\hat{m}+\hat{l}^++1}(\beta^+,l)$ will be pushed out of the expanding region (not shown). But at the end of the interval $[1,\hat{l}^+]$ the reference orbit $\xi_1(\beta^*,l),$ $\ldots,$ $\xi_{\hat{l}^++1}(\beta^*,l)$ leaves the expanding region for a few iterates. Thus, the new orbit may approach the reference orbit during this time and enter the expanding region again afterwards. Now we consider two different situations: In (a) we assume that the reference orbit spends all times $j\in J(m)$ outside of the expanding region. This is what we have to take into account if we do not define the points in $J^+(m)$ as regular, and consequently do not derive any information about them. Then the new orbit may still be close to the reference orbit until the very last step, and thus lie in the expanding region at the end. (b) On the other hand, if we can obtain information about the $j\in J^+(m)$ and thus define them as regular, then we know that the reference orbit stays in the expanding region at these times. Therefore the new orbit may enter the expanding region after time $\hat{m}+m$, but it will be pushed out again before the end of the interval $J(\hat{m})$ is reached.} \label{fig:rightside} \end{minipage} \end{figure} Now, we can begin to see how a recursive structure in the definition of the sets $R_N$ appears: In order to have enough information for even later stages in the construction, we will again have to define at least most points in $J^+(\hat{m}) = [\hat{m}+1,\hat{m}+\hat{l}^+]$ as regular. 
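To see what this amounts to in the two-level situation considered here (a heuristic anticipation of the precise definitions in Sections~\ref{Admissibletimes} and \ref{Regulartimes}): apart from $J^-(\hat{m})$ itself, the times about which no information is available are those in $J^-(m)$ and, since the behaviour of the new orbit along $J^+(\hat{m})$ is modelled on the reference orbit along $[1,\hat{l}^+]$, also their translates in $J^-(m)+\hat{m}$. Thus, up to boundary points, one expects
\[
R_{\hat{m}+\hat{l}^++1} \ = \ [1,\hat{m}+\hat{l}^++1] \ \setminus \ \left( J^-(\hat{m}) \ \cup \ J^-(m) \ \cup \ (J^-(m)+\hat{m}) \right) \ .
\]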
As it will turn out, we will be able to show that $\xi_{\hat{m}+\hat{l}^++1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_{\hat{m}+j}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ exactly whenever the respective point $\xi_j(\beta^+,l)$ of the reference orbit lies in the expanding region as well. In other words, a point $\hat{m}+j \in J^+(\hat{m})$ will be regular if and only if $j \in [0,\hat{l}^+]$ was regular before. This leads to a kind of self-similar structure in the sets of regular points, which will express itself in relations of the following form: \begin{equation} \label{eq:selfsimilarity} R_N \cap J^+(\hat{m}) \ = \left(R_N \cap [1,\hat{l}^+]\right) + \hat{m} \ = \ R_{\hat{l}^+} + \hat{m} \end{equation} Put differently, the structure of the sets $R_N$ after a close return, i.e.\ in the right part $J^+$ of an exceptional interval, is the same as their structure at the origin (see Figure~\ref{fig:recursive}). \begin{figure}[t] \noindent \begin{minipage}[t]{\linewidth} \epsfig{file=sna-recursive.eps, clip=, width=\linewidth, } \caption{\small Recursive structure of the sets $R_N$. Regular points are shown in white, exceptional ones in black. The set $R_N \cap J^+(\hat{m})$ is a translate of the set $R_N \cap [1,\hat{l}^+]$. } \label{fig:recursive} \end{minipage} \end{figure} \ \\ What remains is to extend the construction not only forwards, but also backwards in time. As we have mentioned above, for some close return $\tilde{m}$ we will eventually have to choose $\tilde{l}^-$ larger than $l$. In this case, it is not sufficient anymore to have reference orbits starting on the $\omega_{-l}$-fibre. However, we can still carry out the construction exactly up to $\tilde{m}$. Thus, if $\beta^*$ is chosen such that $\xi_{\tilde{m}}(\beta^*,l) = 0$, then we will know that $\xi_{\tilde{m}-\tilde{l}^-}(\beta^*,l),$ $\ldots,$ $\xi_{\tilde{m}}(\beta^*,l)$ spends `most' of the time in the expanding region.
Therefore, we can use it as a reference orbit in order to show that $\xi_{-\tilde{l}^-}(\beta,\tilde{l}^-),$ $\ldots,$ $\xi_0(\beta,\tilde{l}^-)$ stays in the contracting region `most' of the time, at least for parameters $\beta$ which are close enough to $\beta^*$. (Recall that this orbit starts on the upper boundary line, i.e.\ $\xi_{-\tilde{l}^-}(\beta,\tilde{l}^-) = 3$ by definition.) It will then turn out that it suffices to consider such parameter values. In this way, the construction will be extended backwards and we can then start to look at the forward part of the trajectories starting on the $\omega_{-\tilde{l}^-}$-fibre. Consequently, when we reach $\tilde{m}$ again, the backwards part of the trajectories is long enough to carry on beyond this point, again using the same comparison arguments as above. The only difference to Figure~\ref{fig:firstreturn} will be that now the reference orbits stay in the expanding or contracting region, respectively, only most and not all of the time. Nevertheless, this will still be sufficient to proceed more or less in the same way. Hence we can continue the construction until we reach some even closer return. Then the trajectories have to be extended further in the backwards direction again, and so on. \section{Tools for the construction} \label{Tools} In this section, we will provide the necessary tools for the construction of the sink-source-orbits in Sections~\ref{Construction} and \ref{Symmetricsetting}. As we have seen, there are mainly two things which have to be done: First, we need some statements about the comparison of orbits, namely one about expansion and one about contraction. These will be derived in Section~\ref{Comparingorbits}. Secondly, we have to define the sets of admissible and regular times, which will be done in Sections~\ref{Admissibletimes} and \ref{Regulartimes}~.
However, before this we will have to introduce yet another collection of sets $\Omega_p$ ($p\in\N_0$) in Section~\ref{Approximatingsets}. These sets $\Omega_p$ will be used as an approximation for the sets of \textit{non-regular} times and will make it possible to control the frequency with which these can occur. \subsection{Comparing orbits} \label{Comparingorbits} The two statements we aim at proving here are Lemma \ref{lem:orbitscontraction} and Lemma \ref{lem:orbitsthrowout}. They will allow us to compare two different orbit-segments which (i) start on nearby fibres and (ii) result from systems $T_{\beta_1}, T_{\beta_2}$ with parameters $\beta_1,\beta_2$ close together (compare Definition~\ref{def:errorterm} and Remark~\ref{bem:errorterm}). The reader should note that throughout this subsection we only use assumptions (\ref{cond:gamma0}), (\ref{cond:alphagamma0}), (\ref{cond:Funiformbounds})--(\ref{cond:Fcontraction}) and the Lipschitz-continuity of $g$. In particular, we neither use the fact that $g$ is non-negative, nor (\ref{cond:sharppeak}). Therefore, we will also be able to use the results for the case of symmetric forcing in Section~\ref{Symmetricsetting}. The diophantine condition on $\omega$ as well as (\ref{cond:Fmapsover}) and (\ref{cond:sharppeak}) will not be needed until the next section. Before we start, we make one more assumption on the parameter $\alpha$: We suppose that $K$ is chosen as in Remark \ref{bem:errorterm} and assume \begin{equation} \label{cond:alpha1} \ensuremath{\sqrt{\alpha}} \ \geq \ 2K \ . \end{equation} The following notation is tailored to our purpose of comparing two orbits: \begin{definition} \label{def:orbits} Suppose $T_\beta$ is defined as in Theorem~\ref{thm:snaexistence}~.
If $\theta_1,\theta_2 \in \ensuremath{\mathbb{T}^{1}}$, $x^1_1,x^2_1 \in [-3,3]$ and $\beta_1,\beta_2 \in [0,\frac{3}{2}]$ are given, let \begin{equation} \label{eq:orbits} x^1_n \ := \ T_{\beta_1,\theta_1,n-1}(x^1_1) \ \ \ , \ \ \ x^2_n \ := \ T_{\beta_2,\theta_2,n-1}(x^2_1) \end{equation} and \begin{equation} \label{eq:taun} \tau(n) \ := \ \# \{ j \in [1,n] \mid x^1_j \notin \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \} \ . \end{equation} \end{definition} We start with a lemma about orbit-contraction. Essentially, the statement is that if two orbits spend most of the time in the contracting region above the line $\ensuremath{\mathbb{T}^{1}} \times \{\gamma\}$, then their distance in the vertical direction gets contracted up to the magnitude of the error term: \begin{lem} \label{lem:orbitscontraction} Suppose that conditions (\ref{cond:Funiformbounds}) and (\ref{cond:Fcontraction}) hold and $$\ensuremath{\textrm{\upshape err}}(\beta_1,\beta_2,\theta_1,\theta_2) \leq K\cdot \epsilon$$ for some $\epsilon> 0$. Let further \begin{equation} \eta(k,n) := \# \{ j \in [k,n] \mid x^1_j \textrm{ or } x^2_j < \gamma \} \end{equation} and assume that $\eta(j,n) \leq \frac{n+1-j}{10} \ \forall j = 1 \ensuremath{,\ldots,} n$ and $\alpha^{-\frac{n}{4}} \leq \epsilon$. Then \begin{equation} |x^1_{n+1} -x^2_{n+1}| \ \leq \ \epsilon \cdot \left( 6 + K \cdot \sum_{j=0}^\infty \alpha^{-\ensuremath{\frac{1}{4}} j} \right) \ . \end{equation} A similar statement holds for $\tilde{\eta}(k,n) := \# \{ j \in [k,n] \mid x^1_j \textrm{ or } x^2_j > -\gamma \}$.
\end{lem} \textit{Proof.}\ We prove the following statement by backwards induction on $k$: For all $k = 1 \ensuremath{,\ldots,} n+1$ there holds \begin{eqnarray} \nonumber | x^1_{n+1}-x^2_{n+1}| & \leq & \label{eq:contractionformula} |x^1_k - x^2_k| \cdot \alpha^{-\ensuremath{\frac{1}{2}}(n+1-k-5\eta(k,n))} \\ &&+ \ K\cdot \epsilon \cdot \sum_{j=k+1}^{n+1} \alpha^{-\ensuremath{\frac{1}{2}}(n+1-j-5\eta(j,n))} \quad \end{eqnarray} The case $k=n+1$ is obvious. For the induction step, first suppose $x^1_k$ or $x^2_k < \gamma$, such that $\eta(k,n) = \eta(k+1,n) + 1$. Then, by (\ref{cond:Funiformbounds}) we have \[ |x^1_{k+1} - x^2_{k+1}| \leq |x^1_k-x^2_k| \cdot \alpha^2 + K\cdot \epsilon \ , \] and by applying the statement for $k+1$ we get \begin{eqnarray*} \lefteqn{|x^1_{n+1}-x^2_{n+1}| \ \leq } \\ &\leq & (|x^1_k - x^2_k| \cdot \alpha^2 + K \cdot \epsilon) \cdot \alpha^{-\ensuremath{\frac{1}{2}}(n-k-5\eta(k+1,n))} + K \cdot \epsilon \sum_{j=k+2}^{n+1} \alpha^{-\ensuremath{\frac{1}{2}}(n+1-j-5\eta(j,n))} \\ & = & |x^1_k - x^2_k| \cdot \alpha^{-\ensuremath{\frac{1}{2}}(n+1-k-5\eta(k,n))} + K \cdot \epsilon \cdot \sum_{j=k+1}^{n+1} \alpha^{-\ensuremath{\frac{1}{2}}(n+1-j-5\eta(j,n))} \ . \end{eqnarray*} On the other hand, suppose $x^1_k,x^2_k \geq \gamma$, such that $\eta(k,n) = \eta(k+1,n)$. In this case we can use (\ref{cond:Fcontraction}) to obtain \[ |x^1_{k+1} - x^2_{k+1}| \leq |x^1_k-x^2_k| \cdot \alpha^{-\ensuremath{\frac{1}{2}}}+ K \cdot \epsilon \] and thus \begin{eqnarray*} |x^1_{n+1}-x^2_{n+1}| & \leq & (|x^1_k - x^2_k| \cdot \alpha^{-\ensuremath{\frac{1}{2}}} + K\cdot \epsilon) \cdot \alpha^{-\ensuremath{\frac{1}{2}}(n-k-5\eta(k+1,n))} \\ && + \ K\cdot \epsilon \cdot \sum_{j=k+2}^{n+1} \alpha^{-\ensuremath{\frac{1}{2}}(n+1-j-5\eta(j,n))} \\ & = & |x^1_k - x^2_k| \cdot \alpha^{-\ensuremath{\frac{1}{2}}(n+1-k-5\eta(k,n))} \\ && + \ K \cdot \epsilon \cdot \sum_{j=k+1}^{n+1} \alpha^{-\ensuremath{\frac{1}{2}}(n+1-j-5\eta(j,n))} \ .
\end{eqnarray*} The statement of the lemma is now just an application of (\ref{eq:contractionformula}) with $k=1$: the assumption $\eta(j,n) \leq \frac{n+1-j}{10}$ yields $n+1-j-5\eta(j,n) \geq \frac{n+1-j}{2}$, so that the first term is bounded by $6\cdot \alpha^{-\frac{n}{4}} \leq 6\epsilon$ (note that $|x^1_1-x^2_1|$ is always bounded by 6) and the sum is dominated by the geometric series $K \cdot \epsilon \cdot \sum_{j=0}^\infty \alpha^{-\ensuremath{\frac{1}{4}} j}$. {\raggedleft $\Box$ \\} \ \\ The result about orbit-expansion we will need is a little bit more intricate. The problem is the following: We have one reference orbit, which spends most of the time well inside of the expanding region $\ensuremath{\mathbb{T}^{1}} \times \overline{\ensuremath{B_\frac{2}{\alpha}(0)}}$. A second orbit starts a certain distance above, and we want to conclude that at some point it has to leave the expanding region while the first orbit remains inside at the same time. The following case is still quite simple: \begin{lem} \label{lem:orbitsbasic} Suppose that conditions (\ref{cond:alphagamma0}), (\ref{cond:Fexpansion}) and (\ref{cond:alpha1}) hold and further $$\ensuremath{\textrm{\upshape err}}(\beta_1,\beta_2,\theta_1,\theta_2) \leq K \cdot \alpha^{-1}$$ and $x^2_1 \geq x^1_1 + \ensuremath{\frac{1}{\alpha}}$. Then as long as $\tau(n) = 0$ there holds $ x^2_{n+1} \geq x^1_{n+1} + \frac{3}{\alpha}$. Thus $x^2_{n+1} \geq \frac{2}{\alpha}$ if $x^1_{n+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. A similar statement holds if $x^2_1 \leq x^1_1 - \ensuremath{\frac{1}{\alpha}}$. \end{lem} \textit{Proof.}\ This follows from \[ \textstyle x^2_{n+1} \stackrel{(\ref{cond:Fexpansion})}{\geq} x^1_{n+1} + 2\ensuremath{\sqrt{\alpha}}\cdot \ensuremath{\frac{1}{\alpha}} - K \cdot \alpha^{-1} \geq x^1_{n+1} + \ensuremath{\frac{1}{\alpha}}(2\ensuremath{\sqrt{\alpha}} - K) \stackrel{(\ref{cond:alpha1})}{\geq} x^1_{n+1} + \ensuremath{\frac{1}{\sqrt{\alpha}}} \] as long as $x^2_n - x^1_n \geq \ensuremath{\frac{1}{\alpha}}$ and $x^1_n \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. Note that $\ensuremath{\frac{1}{\sqrt{\alpha}}} \geq \frac{3}{\alpha}$ by (\ref{cond:alphagamma0}).
{\raggedleft $\Box$ \\} \ \\ However, it is not always that easy, because we also need to address the case where the first orbit does not stay in the expanding region all of the time, but only `most' of the time. This needs a little bit more care, and there are some natural limits: For example, $x^1_j$ must not spend too many iterates in the contracting region, even if these only make up a very small proportion of the length of the whole orbit segment. Otherwise the vertical distance between the two orbits may be contracted until it is of the same magnitude as the error term, and then the order of the orbits might get reversed. Another requirement is that $x^1_j$ does not leave the expanding region too often towards the end of the considered time interval. The reason for this was already demonstrated in Figure~\ref{fig:rightside}~. In the end we aim at proving Lemma \ref{lem:orbitsthrowout}, which is the statement that will be used later on. However, in order to do so we need two intermediate lemmas first. \begin{lem} \label{lem:orbitsexpansion} Suppose that conditions (\ref{cond:Funiformbounds}), (\ref{cond:Fexpansion}) and (\ref{cond:alpha1}) hold and further $$\ensuremath{\textrm{\upshape err}}(\beta_1,\beta_2,\theta_1,\theta_2) \leq K\cdot\epsilon$$ with $\epsilon \leq \alpha^{-q}$ for some $q \geq 1$ and \begin{equation} x^2_1 \ \geq \ x^1_1 + \frac{\epsilon}{2} \cdot \alpha^r \end{equation} with $0 \leq r < q$. Suppose further that for all $j = 1 \ensuremath{,\ldots,} n$ there holds \begin{equation} x^1_j \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ensuremath{\Rightarrow} \ x^2_j \in \overline{\ensuremath{B_\frac{2}{\alpha}(0)}} \end{equation} and \begin{equation} \textstyle \label{eq:taucond1} r + \ensuremath{\frac{1}{2}}(j - 5\tau(j) ) \ \geq \ \ensuremath{\frac{1}{2}} \ . \end{equation} Then \begin{equation} \label{eq:orbitexpansion} x^2_{n+1} \ \geq \ x^1_{n+1} + \frac{\epsilon}{2} \cdot \alpha^{r+\ensuremath{\frac{1}{2}}(n-5\tau(n))} \ .
\end{equation} A similar statement holds if $x^2_1 \leq x^1_1 - \frac{\epsilon}{2}\cdot \alpha^r$. \end{lem} Note that (\ref{eq:taucond1}) is always guaranteed if either $\tau(n) \leq \max\{0,\frac{2r-1}{4}\}$ (as $5\tau(j) -j \leq 4\tau(j) \leq 4\tau(n)$), or if $\tau(j) \leq \frac{j}{8} \ \forall j = 1 \ensuremath{,\ldots,} n$. \ \\ \textit{Proof.}\ We prove (\ref{eq:orbitexpansion}) by induction on $n$. The case $n=0$ is obvious. For the induction step, we have to distinguish two cases: \ \\ \underline{Case 1:} \hspace{3eM} $x^1_n \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ , \ \ i.e.\ $\tau(n) = \tau(n-1)$ \\ \begin{eqnarray*} x^2_{n+1} & \stackrel{(\ref{cond:Fexpansion})}{\geq} & x^1_{n+1} + 2\ensuremath{\sqrt{\alpha}} \cdot \frac{\epsilon}{2} \cdot \alpha^{r+\ensuremath{\frac{1}{2}}(n-1-5\tau(n-1))} - K\cdot \epsilon \\ & = & x^1_{n+1} + \epsilon \cdot \left( \alpha^{r+\ensuremath{\frac{1}{2}}(n-5\tau(n))} - K \right) \ \geq \ x^1_{n+1} + \frac{\epsilon}{2} \cdot \alpha^{r+\ensuremath{\frac{1}{2}}(n-5\tau(n))} \end{eqnarray*} where we used $\alpha^{r+\ensuremath{\frac{1}{2}}(n-5\tau(n))} \geq \sqrt{\alpha} \geq 2K$ by (\ref{eq:taucond1}) and (\ref{cond:alpha1}) in the last step. \ \\ \underline{Case 2:} \hspace{3eM} $x^1_n \notin \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ , \ \ i.e.\ $\tau(n) = \tau(n-1) + 1$ \begin{eqnarray*} x^2_{n+1} & \stackrel{(\ref{cond:Funiformbounds})}{\geq} & x^1_{n+1} + 2 \alpha^{-2} \cdot \frac{\epsilon}{2} \cdot \alpha^{r+\ensuremath{\frac{1}{2}}(n-1-5\tau(n-1))} - K \cdot \epsilon \\ & = & x^1_{n+1} + \epsilon \cdot \left( \alpha^{r+\ensuremath{\frac{1}{2}}(n-5\tau(n))} - K \right) \ \geq \ x^1_{n+1} + \frac{\epsilon}{2} \cdot \alpha^{r+\ensuremath{\frac{1}{2}}(n-5\tau(n))} \end{eqnarray*} where we used $\alpha^{r+\ensuremath{\frac{1}{2}}(n-5\tau(n))} \geq 2K$ again in the step to the last line.
{\raggedleft $\Box$ \\} \begin{lem} \label{lem:orbitstimebelow} Suppose that conditions (\ref{cond:alphagamma0}), (\ref{cond:Fexpansion}) and (\ref{cond:alpha1}) hold and $$\ensuremath{\textrm{\upshape err}}(\beta_1,\beta_2,\theta_1,\theta_2) \leq K \cdot \alpha^{-q}$$ for some $q \geq 1$. Further, assume that $x^1_1, x^1_{n+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}, \ x^2_1 \geq \frac{2}{\alpha}$ and $\tau(n) \leq \max\{0,\frac{2q-3}{4}\}$. Then $x^2_j \geq x^1_j \ \forall j = 1 \ensuremath{,\ldots,} n$ and there holds \[ \# \underbrace{\{ j \in [2,n+1] \mid x^1_j \notin \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \textrm{ or } x^2_j \in \overline{\ensuremath{B_\frac{2}{\alpha}(0)}} \}}_{{\displaystyle =: \Upsilon}} \ \leq 5\tau(n) \ . \] A similar statement holds if $x^2_1 \leq -\frac{2}{\alpha}$. \end{lem} \textit{Proof.}\ It suffices to obtain a suitable upper bound on $\# \tilde{\Upsilon}$ where \[ \tilde{\Upsilon} := \{ j \in [2,n+1] \mid x^1_j \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \textrm{ and } x^2_j \in \overline{\ensuremath{B_\frac{2}{\alpha}(0)}} \} \ , \] as obviously $\# \Upsilon = \# \tilde{\Upsilon} + \tau(n+1) = \# \tilde{\Upsilon} + \tau(n)$. (Note that $\tau(n+1) = \tau(n)$ as $x^1_{n+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by assumption.) To that end, we can separately consider blocks $[k+1,l]$ where $k,l$ are chosen such that \begin{list}{(\roman{enumi})}{\usecounter{enumi}} \item $1 \leq k < l \leq n+1$ \item $x^1_j \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ensuremath{\Rightarrow} x^2_j \in \overline{\ensuremath{B_\frac{2}{\alpha}(0)}} \ \forall j \in [k+1,l]$ \item $x^1_k \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ and $x^2_k \notin \overline{\ensuremath{B_\frac{2}{\alpha}(0)}}$ \item $x^1_l \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ and $x^2_l \in \overline{\ensuremath{B_\frac{2}{\alpha}(0)}}$ \item $l$ is the maximal integer in $[k+1,n+1]$ with the above properties (ii) and (iv). 
\end{list} Note that $\tilde{\Upsilon}$ is contained in the disjoint union of all such blocks $[k+1,l]$. \ \\ We now want to apply Lemma~\ref{lem:orbitsexpansion}, but starting with $x_k^i$ instead of $x^i_1\ (i=1,2)$. Therefore, let $\tilde{\theta}_i=\theta_i+\omega_{k-1}$, $\tilde{x}^i_1 = x^i_k$ and $\tilde{n}=l-k$ in Definition \ref{def:orbits}. Note that $\tilde{\tau}(\tilde{n}) = \tau(l-1) - \tau(k-1)$, but as we assumed that $x^1_k,x^1_l \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ in (iii) and (iv) we equally have $\tilde{\tau}(\tilde{n}) = \tau(l)-\tau(k)$. As $x^2_k \geq x^1_k + \ensuremath{\frac{1}{\alpha}}$ by (iii), we can apply Lemma~\ref{lem:orbitsexpansion} with $\epsilon = \alpha^{-q}$ and $r = q-1$ to obtain \[ x^2_{l} \ = \ \tilde{x}^2_{\tilde{n}+1} \ \geq \ \tilde{x}^1_{\tilde{n}+1} + \frac{\alpha^{-1}}{2} \cdot \alpha^{\ensuremath{\frac{1}{2}}(\tilde{n} - 5\tilde{\tau}(\tilde{n}))} \ = \ x^1_l + \frac{\alpha^{-1}}{2} \cdot \alpha^{\ensuremath{\frac{1}{2}}(l-k-5(\tau(l)-\tau(k)))} \ . \] As $|x^2_l - x^1_l| \leq \frac{3}{\alpha} < \frac{1}{2\ensuremath{\sqrt{\alpha}}}$ by (iv) and (\ref{cond:alphagamma0}), we must therefore have $l-k-5(\tau(l)-\tau(k)) \leq 0$ or equivalently $l-k \ \leq \ 5(\tau(l)-\tau(k))$. Thus \[ \# \left(\tilde{\Upsilon} \cap [k+1,l]\right) \ = \ l - k - (\tau(l) - \tau(k)) \ \leq \ 4(\tau(l)-\tau(k)) \ . \] Summing over all such blocks $[k+1,l]$ we obtain $\# \tilde{\Upsilon} \leq 4\tau(n)$, and this completes the proof. {\raggedleft $\Box$ \\} \begin{lem} \label{lem:orbitsthrowout} Suppose that conditions (\ref{cond:alphagamma0}), (\ref{cond:Funiformbounds}), (\ref{cond:Fexpansion}) and (\ref{cond:alpha1}) hold and $$\ensuremath{\textrm{\upshape err}}(\beta_1,\beta_2,\theta_1,\theta_2) \leq K \cdot \epsilon$$ for some $\epsilon \in [\alpha^{-(q+1)},\alpha^{-q})$, $ q \geq 1$.
Further, assume that for some $n \in \N$ with $x^1_{n+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ there holds $\tau(n) \leq \max\{0,\frac{2q-3}{4}\}$ and \begin{equation} \label{eq:orbitsthrowout} \tau(n)-\tau(j) \leq \frac{n-j}{6} \ \forall j \in [1,n] \ . \end{equation} Then \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item $x^1_1 \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ but $x^2_1 \geq \frac{2}{\alpha}$ implies $x^2_{n+1} \geq \frac{2}{\alpha}$. \item If $n \geq 5q$ and $\tau(j) \leq \frac{j}{8} \ \forall j=1 \ensuremath{,\ldots,} n$, then $x^2_1 \geq x^1_1 + \frac{\epsilon}{2}$ implies $x^2_{n+1} \geq \frac{2}{\alpha}$. \end{list} Again, similar statements hold for the reverse inequalities. \end{lem} \textit{Proof.}\ \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item Note that $\tau(1) = 0$ as $x^1_1 \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by assumption. By Lemma \ref{lem:orbitstimebelow} we have \[ \# \Upsilon \ \leq \ 5\tau(n) \ \stackrel{(\ref{eq:orbitsthrowout})}{\leq} \ \frac{5(n-1)}{6} \ \leq \ n-1 \ . \] Thus there exists $j_0 \in [2,n+1]$ such that $x^1_{j_0} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ but $x^2_{j_0} \geq \frac{2}{\alpha}$. If we shift the starting points in Definition \ref{def:orbits} to $\tilde{\theta}_i := \theta_i +\omega_{j_0-1}$ and $\tilde{x}_1^i = x^i_{j_0} \ (i=1,2)$ and denote the resulting sequences by $\tilde{x}^1_j, \tilde{x}^2_j$, then $\tilde{n} := n - j_0 +1$ satisfies the same assumptions as before. As $\tilde{n} < n$ we can repeat this procedure until $\tilde{n} < 6$. But then $\tilde{\tau}(\tilde{n}) = 0$, such that $\tilde{x}^1_1 \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ and $\tilde{x}^2_1 \geq \frac{2}{\alpha}$ implies $\tilde{x}^2_{\tilde{n}+1} = x^2_{n+1} \geq \frac{2}{\alpha}$ by Lemma \ref{lem:orbitsbasic}, proving statement (a).
\item We claim that there exists $j_1 \in [1,n+1]$ such that $x^1_{j_1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ but $x^2_{j_1} \notin \overline{\ensuremath{B_\frac{2}{\alpha}(0)}}$. Suppose there exists no such $j_1$ and let $k$ be the largest integer in $[1,n]$ such that $x^1_{k+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. As $x^1_j \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ensuremath{\Rightarrow} x^2_j \in \overline{\ensuremath{B_\frac{2}{\alpha}(0)}}$ holds for all $j=1 \ensuremath{,\ldots,} k$, we can apply Lemma \ref{lem:orbitsexpansion} with $r=0$ to obtain \begin{equation} \label{eq:throwout} x^2_{k+1} \ \geq \ x^1_{k+1} + \frac{\epsilon}{2} \cdot \alpha^{\ensuremath{\frac{1}{2}}(k-5\tau(k))} \ \geq \ x^1_{k+1} + \ensuremath{\frac{1}{2}} \alpha^{\ensuremath{\frac{1}{2}}(k-5\tau(k))-q-1} \ . \end{equation} Now $\tau(n) = \tau(k+1) + n-k-1$ by definition of $k$. Further $\tau(k) = \tau(k+1)$, as $x^1_{k+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by the choice of $k$. Therefore, using $\tau(n) \leq \max\{0,\frac{2q-3}{4}\} \leq \frac{2q-2}{4}$, \begin{eqnarray*} \textstyle \lefteqn{ \ensuremath{\frac{1}{2}}(k-5\tau(k)) \ = \ \ensuremath{\frac{1}{2}}(k+1-5\tau(k+1)) - \ensuremath{\frac{1}{2}} \ \geq \ \ensuremath{\frac{1}{2}}(n-5\tau(n)) - \ensuremath{\frac{1}{2}}} \\ & \geq & \ensuremath{\frac{1}{2}}\left(5q-5\cdot \frac{2q-2}{4}\right) - \ensuremath{\frac{1}{2}} \ = \ \frac{5}{2}q - \frac{5}{4}q + \frac{5}{4} - \ensuremath{\frac{1}{2}} \ = \ \frac{5}{4}q + \frac{3}{4} \ \geq \ q+1 \ , \end{eqnarray*} where we used $q \geq 1$ in the last step. Plugged into (\ref{eq:throwout}) this yields $x^2_{k+1} \geq x^1_{k+1} + \ensuremath{\frac{1}{2}}$, contradicting $x^1_{k+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ and $x^2_{k+1} \in \overline{\ensuremath{B_\frac{2}{\alpha}(0)}}$. Thus we can choose $j_1$ with $x^1_{j_1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ and $x^2_{j_1} \geq \frac{2}{\alpha}$ as claimed (choosing $j_1$ minimal, Lemma \ref{lem:orbitsexpansion} also yields $x^2_{j_1} \geq x^1_{j_1} \geq -\ensuremath{\frac{1}{\alpha}}$, so that $x^2_{j_1} \leq -\frac{2}{\alpha}$ is impossible). Shifting the starting points as before we can now apply (a) to complete the proof.
\end{list} {\raggedleft $\Box$ \\} \subsection{Approximating sets} \label{Approximatingsets} As mentioned in Section~\ref{Strategy}, for each close return $m \in \N$ with $d(\ensuremath{\omega_m},0) \leq \frac{3\gamma}{L_2}$ we will introduce an exceptional interval $J(m)$. However, before we can do so we first have to define some intermediate intervals $\Omega_p(m)$. These will contain the intervals $J(m)$, so that they can be used to obtain estimates on the `density' of the union of exceptional intervals. As we need a certain amount of flexibility, we have to introduce a whole sequence of such approximating sets $(\Omega_p(m))_{p \in \N_0}$, which will be increasing in $p$. The statements of this as well as the two following subsections do not involve the dynamics of $T$; they are only related to the underlying irrational rotation by $\omega$. Therefore, the only assumptions which are used are the Diophantine condition~(\ref{cond:diophantine}) as well as (\ref{cond:gamma0}) and (\ref{cond:alphagamma0}). \begin{definition} \label{def:omegasets} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item Let \[ S_n(\alpha) \ := \ \left\{ \begin{array}{cll} \sum_{i=0}^{n-1} \alpha^{-i} & \textrm{if} & n \in \N \cup \{ \infty \} \\ \\ 1 & \textrm{if} & n \leq 0 \end{array} \right. \ .
\] \item For $p \in \N_0 \cup \{\infty\}$ let $Q_p : \ensuremath{\mathbb{Z}} \ensuremath{\rightarrow} \N_0$ be defined by \[ Q_p(j) := \left\{ \begin{array}{cll} q & \mathrm{if} & d(\ensuremath{\omega_j},0) \in \left[ S_{p-q+1}(\alpha) \cdot \frac{\alpha^{-q}}{L_2},S_{p-q+2}(\alpha)\cdot \frac{\alpha^{-(q-1)}}{L_2} \right) \ \textrm{for } q \geq 2 \\ \\ 1 & \mathrm{if} & d(\ensuremath{\omega_j},0) \in \left[ S_p(\alpha) \cdot \frac{\alpha^{-1}}{L_2}, \frac{4\gamma}{L_2} + S_{p}(\alpha)\cdot \frac{\alpha^{-1}}{L_2} \cdot (1-\ensuremath{\mathbf{1}}_{\{0\}}(p)) \right) \\ \\ 0 & \mathrm{if} & d(\ensuremath{\omega_j},0) \geq \frac{4\gamma}{L_2} + S_{p}(\alpha)\cdot \frac{\alpha^{-1}}{L_2} \cdot (1-\ensuremath{\mathbf{1}}_{\{0\}}(p)) \end{array} \right. \] if $j \in \ensuremath{\mathbb{Z}} \setminus \{ 0 \}$ and $Q_p(0) := 0$. Further let \[ p(j) \ := \ Q_0(j) \ . \] \item For fixed $u,v \in \N$ let $\ensuremath{\tilde{u}}:=u+2$ and $\ensuremath{\tilde{v}}:=v+2$. Then, for any $j \in \ensuremath{\mathbb{Z}}$ define \[ \Omega^-_p(j) := [j-\ensuremath{\tilde{u}}\cdot Q_p(j),j] \ , \ \ \Omega^+_p(j) := [j+1,j+\ensuremath{\tilde{v}}\cdot Q_p(j)] \] and \[ \Omega_p(j) := \Omega^-_p(j) \cup \Omega^+_p(j) \] if $Q_p(j) > 0$, with all sets being defined as empty if $Q_p(j) = 0$. Further let \[ \Omega^{(\pm)}_p := \bigcup_{j \in \ensuremath{\mathbb{Z}}} \Omega^{(\pm)}_p(j) \ \ \ \ \textrm{ and } \ \ \ \ \ensuremath{\tilde{\Omega}}^{(\pm)}_p := \! \! \! \!\!\twoscriptcup{j \in \ensuremath{\mathbb{Z}}}{Q_p(j) \leq p} \Omega_p^{(\pm)}(j) \ . 
\] \item Finally, let \[ \begin{array}{llll} \textstyle \nu(q) & := & \min\left\{ j \in \N \mid p(j) \geq q \right\} & \forall q \in \N \\ \\ \ensuremath{\tilde{\nu}}(q) & := & \min\left\{ j \in \N \mid d(\ensuremath{\omega_j},0) < 3S_\infty(\alpha)\cdot \frac{\alpha^{-(q-1)}}{L_2} \right\} & \textrm{if } q \geq 2 \ \textrm{ and } \\ \\ \ensuremath{\tilde{\nu}}(1) & := & \min\left\{ j \in \N \mid d(\ensuremath{\omega_j},0) < 3\left(\frac{4\gamma}{L_2} + S_\infty(\alpha)\cdot\frac{\alpha^{-1}}{L_2} \right) \right\} . \end{array} \] \end{list} \end{definition} \begin{bem} \label{bem:alphagamma2} Suppose that (\ref{cond:gamma0}) and (\ref{cond:alphagamma0}) hold, such that we have $\ensuremath{\sqrt{\alpha}} \geq \frac{4}{\gamma} \geq 64$. As $S_\infty(\alpha) = \frac{\alpha}{\alpha - 1}$, the following estimates can be deduced easily from this: \begin{eqnarray} \label{cond:alpha2} \alpha & \geq & S_\infty(\alpha) + 1 \\ \gamma & \geq & \frac{S_\infty(\alpha)+1}{\alpha} \label{cond:alphagamma1} \ . \end{eqnarray} \end{bem} \begin{bem} As in the preceding remark, we suppose that (\ref{cond:gamma0}) and (\ref{cond:alphagamma0}) hold. \label{bem:omegasets} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item By definition, we have $Q_{p'}(j) \leq Q_p(j) \ \forall j \in \ensuremath{\mathbb{Z}}$ whenever $p' \leq p$. Further, there holds $ Q_\infty(j) \leq p(j) + 1 \ \forall j \in \N \ . $ For $p(j) \geq 1$ this follows from (\ref{cond:alpha2}), which implies $\frac{S_\infty(\alpha)}{\alpha} \leq 1$. In the case $p(j) = 0$ this is true by (\ref{cond:alphagamma1}). Altogether, this yields \begin{equation} \label{eq:pjestimates} p(j) \ \leq \ Q_p(j) \ \leq \ Q_\infty(j) \ \leq \ p(j) + 1 \ \ \ \forall j \in \ensuremath{\mathbb{Z}},\ p \in \N \end{equation} \item As a direct consequence of (a) we have $\Omega^{(\pm)}_{p'}(j) \subseteq \Omega^{(\pm)}_p(j) \ \forall j \in \N$ whenever $p' \leq p$.
The same holds for the sets $\Omega^{(\pm)}_p$ and $\ensuremath{\tilde{\Omega}}^{(\pm)}_p$. \end{list} \end{bem} The following two lemmas collect a few basic properties of the sets $\Omega^{(\pm)}_p$ and $\ensuremath{\tilde{\Omega}}^{(\pm)}_{p}$. The first one is a certain `almost invariance' property under translations by $m$ if $\ensuremath{\omega_m}$ is close to $0$. This is closely related to the recursive structure of the sets $R_N$ of regular points mentioned in the last section (see (\ref{eq:selfsimilarity})), and explains why we had to introduce a whole family $(\Omega_p)_{p\in\N_0}$ of approximating sets. Lemma \ref{lem:omegaestimates} then contains the estimates which can be obtained from the Diophantine condition. These will allow us to control the ``density'' of the sets $\Omega^{(\pm)}_\infty$ (and thus of the sets $R_N$ defined later on) by making suitable assumptions on the parameters. \begin{lem} \label{lem:omegatransition} Suppose that conditions (\ref{cond:gamma0}) and (\ref{cond:alphagamma0}) hold. Let $p \geq 2$ and suppose $p(m) \geq p$ and $Q_{p-2}(k) \leq p-2$. Then \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item \ \ $Q_{p-2}(k) \ \leq \ Q_{p-1}(k \pm m) \ \leq \ Q_{p-2}(k) + 1$ \item \ \ $\ensuremath{\tilde{\Omega}}_{p-2}^{(\pm)} \pm m \ \subseteq \ \ensuremath{\tilde{\Omega}}_{p-1}^{(\pm)}$. Using $\ensuremath{\tilde{\Omega}}_{-1} := \emptyset = \ensuremath{\tilde{\Omega}}_0$, this also holds if $p=1$. \end{list} \end{lem} \textit{Proof.}\ \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item Let $q := Q_{p-2}(k)$, so that $p-q \geq 2$ by assumption. We first show that \begin{equation} \label{eq:omegatransitionA} Q_{p-1}(k+m) \ \geq \ q \ . \end{equation} To that end, first suppose $q \geq 2$, such that $d(\ensuremath{\omega_k},0) < S_{p-q}(\alpha) \cdot \frac{\alpha^{-(q-1)}}{L_2}$.
Then \begin{eqnarray*} \lefteqn{d(\omega_{k \pm m},0) \ \leq \ d(\ensuremath{\omega_k},0) + d(\ensuremath{\omega_m},0) \ < \ S_{p-q}(\alpha)\cdot\frac{\alpha^{-(q-1)}}{L_2} + \frac{\alpha^{-(p-1)}}{L_2} } \\ & = & \left(S_{p-q}(\alpha) + \alpha^{-(p-q)}\right)\cdot \frac{\alpha^{-(q-1)}}{L_2} \ = \ S_{p-q+1}(\alpha)\cdot\frac{\alpha^{-(q-1)}}{L_2} \ . \end{eqnarray*} This proves (\ref{eq:omegatransitionA}) in the case $q \geq 2$. The case $q=1$ is treated similarly; if $q=0$ there is nothing to show. It remains to prove that \begin{equation} \label{eq:omegatransitionB} Q_{p-1}(k+m) \ \leq \ q + 1 \ . \end{equation} This time, first assume $q \geq 1$, such that $d(\ensuremath{\omega_k},0) \geq S_{p-q-1}(\alpha)\cdot \frac{\alpha^{-q}}{L_2}$. Then \begin{eqnarray*} \lefteqn{d(\omega_{k \pm m},0) \ \geq \ d(\ensuremath{\omega_k},0) - d(\ensuremath{\omega_m},0) \ \geq \ S_{p-q-1}(\alpha)\cdot \frac{\alpha^{-q}}{L_2} - \frac{\alpha^{-(p-1)}}{L_2} } \\ & = & (\underbrace{\alpha \cdot S_{p-q-1}(\alpha) - \alpha^{-(p-q-2)}}_{ \geq \ \alpha - 1 \ \geq \ S_\infty(\alpha) \textrm{ by (\ref{cond:alpha2}) } }) \cdot \frac{\alpha^{-(q+1)}}{L_2} \ \geq \ S_{p-q-1}(\alpha)\cdot \frac{\alpha^{-(q+1)}}{L_2} \ . \end{eqnarray*} This implies (\ref{eq:omegatransitionB}). Again, the case $q=0$ is treated similarly, using (\ref{cond:alphagamma1}) instead of (\ref{cond:alpha2}). \item Now suppose $j \in \ensuremath{\tilde{\Omega}}_{p-2}^{(\pm)}$. Then $\exists k \in \ensuremath{\mathbb{Z}}$ such that $Q_{p-2}(k) \leq p -2$ and $j \in \Omega^{(\pm)}_{p-2}(k)$. As $Q_{p-1}(k \pm m) \geq Q_{p-2}(k)$ by (a), this implies $j \pm m \in \Omega^{(\pm)}_{p-1}(k \pm m)$, and as $Q_{p-1}(k+m) \leq Q_{p-2}(k) +1 \leq p-1$ this set is contained in $\ensuremath{\tilde{\Omega}}_{p-1}^{(\pm)}$. \end{list} {\raggedleft $\Box$ \\} \begin{lem} \label{lem:omegaestimates} Let $u,v \in \N$ be fixed and suppose $\omega$ satisfies the Diophantine condition (\ref{cond:diophantine}).
Then there exist functions $h,H : \ensuremath{\mathbb{R}}_+^2 \ensuremath{\rightarrow} \ensuremath{\mathbb{R}}_+$ with $h(\gamma,\alpha) \nearrow \infty$ and $H(\gamma,\alpha) \searrow 0$ as $(\gamma + \alpha^{-1}) \searrow 0$, such that \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item \[ \nu(q) \ \geq \ \ensuremath{\tilde{\nu}}(q) \ \geq \ h(\gamma,\alpha) \cdot (q+2) \cdot w \ \ \ \forall q \in \N \] where $w:= \tilde{u}+\tilde{v}+1 = u+v+5$. \item \[ \# ([1,N] \cap \Omega_\infty ) \ \leq \ H(\gamma,\alpha) \cdot N \ \ \textrm{and} \ \ \ \# ([-N,-1] \cap \Omega_\infty ) \ \leq \ H(\gamma,\alpha) \cdot N \ \ \ \forall N \in \N . \] \end{list} \end{lem} \textit{Proof.}\ \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item The Diophantine condition implies that $c\cdot \ensuremath{\tilde{\nu}}(q)^{-d} \leq 3S_\infty(\alpha)\cdot \frac{\alpha^{-(q-1)}}{L_2}$ (if $q \geq 2$). Thus, a simple calculation yields \[ \frac{\ensuremath{\tilde{\nu}}(q)}{w\cdot(q+2)} \ \geq \ \left( \frac{c\cdot L_2}{3S_\infty(\alpha)} \right)^\ensuremath{\frac{1}{d}} \cdot \frac{\alpha^{\frac{q-1}{d}}}{w\cdot(q+2)} \] and the right hand side converges to $\infty$ uniformly in $q$ as $\alpha \ensuremath{\rightarrow} \infty$. Similarly, \[ \frac{\ensuremath{\tilde{\nu}}(1)}{3w} \ \geq \ \frac{1}{3w} \cdot \left( \frac{c\cdot L_2}{12\gamma + 3S_\infty(\alpha)\cdot\alpha^{-1}}\right)^\ensuremath{\frac{1}{d}} \] and again the right hand side converges to $\infty$ as $\gamma + \alpha^{-1} \searrow 0$. Thus we can define the minimum of both estimates as $h(\gamma,\alpha)$. \item As we have seen in (a) we have $\ensuremath{\tilde{\nu}}(q) \geq \left( \frac{c\cdot L_2}{3S_\infty(\alpha)} \right)^\ensuremath{\frac{1}{d}} \cdot \alpha^{\frac{q-1}{d}}$ if $q \geq 2$. Now $[1,N] \cap \Omega_\infty(j) = \emptyset$ if $j > N + w\cdot Q_\infty(j)$.
Therefore \begin{eqnarray*} \lefteqn{\textstyle \frac{1}{N} \cdot \# ( [1,N] \cap \Omega_\infty )} \\ & \leq & \frac{1}{N} \sum_{q=1}^{\infty} q \cdot w \cdot \# \{ 1 \leq j \leq N+q\cdot w \mid Q_\infty(j) = q \} \\ & \leq & \frac{1}{N} \left( \frac{N+w}{\ensuremath{\tilde{\nu}}(1)}\cdot w + \sum_{q=2}^{\infty} q\cdot w \cdot \frac{N+q\cdot w}{\ensuremath{\tilde{\nu}}(q)} \right) \\ & \leq & \frac{w+\frac{w^2}{N}}{\ensuremath{\tilde{\nu}}(1)} + \sum_{q=2}^{\infty} \frac{q\cdot w+q^2 \cdot \frac{w^2}{N}}{\left( \frac{c\cdot L_2}{3S_\infty(\alpha)} \right)^\ensuremath{\frac{1}{d}} \cdot \alpha^{\frac{q-1}{d}} } \ . \end{eqnarray*} The right hand side converges to 0 uniformly in $N$ as $\gamma + \alpha^{-1} \ensuremath{\rightarrow} 0$ and we can use it as the definition of $H(\gamma,\alpha)$. \end{list} {\raggedleft $\Box$ \\} \subsection{Exceptional intervals and admissible times} \label{Admissibletimes} In order to decide whether a time $N \in \N$ is admissible, in the sense of Section~\ref{Admissiblepoints}, we will first have to introduce exceptional intervals $J(m)$ corresponding to close returns $m\in \N$ with $d(\ensuremath{\omega_m},0) \leq \frac{3\gamma}{L_2}$. For the sets $\Omega_p$ defined above, two different intervals $\Omega_p(m)$ and $\Omega_p(n) \ (m \neq n)$ can overlap, without one of them being contained in the other. This is something we want to exclude for the exceptional intervals, and we can do so by carefully choosing their lengths. To this end, we have to introduce two more assumptions on the parameters: We let $h$ and $H$ be as in Lemma~\ref{lem:omegaestimates} and suppose that $\gamma$ and $\alpha$ are chosen such that $h(\gamma,\alpha) \geq 1$ and $H(\gamma,\alpha) \leq \frac{1}{12w}$.
In other words, we will assume that for all $q,N \in \N$ there holds \begin{eqnarray} \ensuremath{\tilde{\nu}}(q) & \geq & (q+2)\cdot w \ , \label{cond:hfunctions1} \\ \#([-N,-1]\cap \Omega_\infty) & \leq & \frac{N}{12w} \ \ \textrm{ and } \ \ \#([1,N]\cap \Omega_\infty) \ \ \leq \ \ \frac{N}{12w} \ . \label{cond:hfunctions2} \end{eqnarray} \begin{bem} \label{bem:omegaestimates} Suppose that (\ref{cond:gamma0}), (\ref{cond:alphagamma0}) and (\ref{cond:hfunctions1}) hold. Assumption~(\ref{cond:hfunctions1}) ensures that on the one hand the sets $\Omega_\infty(j)$ never contain the origin (and are, in fact, a certain distance away from it), and on the other hand two such sets of approximately equal size do not interfere with each other. This will be very convenient later on. To be more precise: \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item There holds \begin{equation}\label{eq:origin} -2,-1,0,1,2 \ \notin \ \Omega_\infty \ . \end{equation} \item If $Q_\infty(j) \geq q$ for some $j \in \ensuremath{\mathbb{Z}}$ then \begin{equation} \label{eq:ominftybounds} [-\ensuremath{\tilde{u}} \cdot (q+2),\ensuremath{\tilde{v}}\cdot (q+2)] \cap \Omega_\infty(j) = \emptyset \ . \end{equation} \item Let $m,n\in\ensuremath{\mathbb{Z}}$, $m\neq n$. Then $\Omega_\infty(m) \cap \Omega_\infty(n) = \emptyset$ whenever $|Q_\infty(m) - Q_\infty(n)| \leq 2$ or $|Q_p(m) - Q_p(n)| \leq 1$ for some $p \in \N_0$. \end{list} \end{bem} \textit{Proof.}\ (a) and (b) follow immediately from (\ref{cond:hfunctions1}) and the definition of the sets $\Omega_\infty(j)$. In order to prove (c), let $q := \min\{ Q_\infty(m),Q_\infty(n) \}$. Then necessarily $d(\omega_{m-n},0) = d(\ensuremath{\omega_m},\ensuremath{\omega_n}) < 2S_\infty(\alpha) \cdot \frac{\alpha^{-(q-1)}}{L_2}$ and thus $|m-n| \geq \ensuremath{\tilde{\nu}}(q) \geq (q+2)\cdot w$ by (\ref{cond:hfunctions1}).
On the other hand both $Q_\infty(m)$ and $Q_\infty(n)$ are at most $q+2$, and thus the definition of the $\Omega_\infty(j)$ implies the disjointness of the two sets. Finally, note that $|Q_p(m) - Q_p(n)| \leq 1$ implies $|Q_\infty(m) - Q_\infty(n)| \leq 2$ by (\ref{eq:pjestimates}). {\raggedleft $\Box$ \\} \begin{bem} \label{bem:existenceofls} Suppose that (\ref{cond:gamma0}), (\ref{cond:alphagamma0}), (\ref{cond:hfunctions1}) and (\ref{cond:hfunctions2}) hold. (\ref{cond:hfunctions2}) ensures that the ``density'' of the set $\Omega_\infty$ is small enough, and this will be very important for the construction later on. On the other hand, it also enables us now to choose suitable lengths for the exceptional intervals $J(m)$: By (\ref{cond:hfunctions2}) and $\ensuremath{\tilde{u}} \leq w$ we have $\# ([-\ensuremath{\tilde{u}}\cdot q,-1] \cap \Omega_\infty) \leq \frac{\ensuremath{\tilde{u}}\cdot q}{12w} \leq \frac{q}{12}$. This implies that we can find at least two consecutive integers outside of $\Omega_\infty$ in the interval $[-\ensuremath{\tilde{u}}\cdot q,-u\cdot q]$. In other words, for all $q \in \N$ there exists $l^-_q \in \N$ such that \begin{eqnarray} \label{eq:lmin} u\cdot q \ \leq \ l^-_q \ < \ \ensuremath{\tilde{u}} \cdot q & \ \textrm{and} \ & -l^-_q, \ -l^-_q-1 \notin \Omega_\infty \ . \end{eqnarray} Similarly, there exists $l^+_q\in\N$, such that \begin{eqnarray} \label{eq:lplus} v \cdot q \ \leq \ l^+_q \ < \ \ensuremath{\tilde{v}} \cdot q & \ \textrm{and} \ & l^+_q, \ l^+_q + 1 \notin \Omega_\infty \ . \end{eqnarray} In addition, we can assume that $l^\pm_p \geq l^\pm_q$ whenever $p \geq q$. (If $l^+_q,l^+_q+1$ are both contained in $[v\cdot (q+1),\ensuremath{\tilde{v}} \cdot (q+1)]$, then we can just take $l^+_{q+1} = l^+_q$. Otherwise, we find a suitable $l^+_{q+1} > l^+_q$ in this interval.) Note also that (\ref{eq:lmin}),(\ref{eq:lplus}) and (\ref{cond:hfunctions1}) together imply that \begin{equation} \label{eq:lplusestimates} \min\{u,v\} \cdot q \ \leq \ l^\pm_q \ < \ \ensuremath{\tilde{\nu}}(\max\{1,q-2\}) \ \leq \ \nu(\max\{1,q-2\}) \ .
\end{equation} \end{bem} Now we are able to define the exceptional intervals: \begin{definition}[Exceptional intervals] \label{def:ansets} Suppose that (\ref{cond:gamma0}), (\ref{cond:alphagamma0}), (\ref{cond:hfunctions1}) and (\ref{cond:hfunctions2}) hold. Then for any $q \in \N$, choose $l^{\pm}_q$ as in Remark~\ref{bem:existenceofls} and define, for any $m \in\N$ with $p(m) \geq 1$, \begin{eqnarray*} \lambda^-(m) & := & m - l^-_{p(m)} \ \ \ , \ \ \ \lambda^+(m) \ \ := \ \ m+ l^+_{p(m)} \\ \nonumber \\ J^-(m) & := & [\lambda^-(m),m] \ \ \ , \ \ \ J^+(m) \ \ := \ \ [m+1,\lambda^+(m)] \end{eqnarray*} and \[ J(m) \ \ := \ \ J^-(m) \cup J^+(m) \ . \] If $p(m) = 0$, then $J^{(\pm)}(m) := \emptyset$. Further, let \[ A_N \ \ := \ \ [1,N] \setminus \bigcup_{1 \leq m < N} J(m) \ \ \textrm{ and } \ \ \Lambda_N \ \ := \ \ [1,N] \setminus A_N \ . \] \end{definition} From now on, we will use conditions (\ref{cond:gamma0}), (\ref{cond:alphagamma0}), (\ref{cond:hfunctions1}) and (\ref{cond:hfunctions2}) as standing assumptions in the remainder of this subsection, as well as in Subsection~\ref{Regulartimes} (since all the statements concern the preceding definition, directly or indirectly). \begin{bem} \label{bem:ansets} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item As we have mentioned before, the exceptional intervals are contained in the approximating sets. To be more precise, for each $m \in \N$ with $p(m) \geq 1$ there holds \begin{eqnarray} J(m) & \subset & [\lambda^-(m)-1,\lambda^+(m)+1] \nonumber \\ & \ensuremath{\subseteq} & \Omega_0(m) \ \ensuremath{\subseteq} \ \Omega_p(m) \ \ensuremath{\subseteq} \ \Omega_\infty(m) \ , \label{eq:jminominfty} \end{eqnarray} where $p\in\N$ is arbitrary. This follows from the choice of the $l^\pm_q$ in Remark~\ref{bem:existenceofls} together with the definition of the intervals $\Omega_p(m)$.
As a consequence, we have that \begin{equation} \label{eq:lambdaninominfty} \Lambda_N \ \ensuremath{\subseteq} \ \Omega_0 \ \ensuremath{\subseteq} \ \Omega_p \ \ensuremath{\subseteq} \ \Omega_\infty\ \ \ \forall N,p \in \N \ . \end{equation} \item Further, suppose that $m\neq n$ and $|Q_\infty(m) - Q_\infty(n)| \leq 2 $ or $|Q_p(m) - Q_p(n)| \leq 1$ for some $p \in \N_0$. Then (a) together with Remark~\ref{bem:omegaestimates}(c) implies that \begin{eqnarray} \nonumber \lefteqn{J(m) \cap J(n) \ = \ \emptyset \ =} \\ & & [\lambda^-(m)-1,\lambda^+(m)+1] \cap [\lambda^-(n)-1,\lambda^+(n)+1] \ .\label{eq:jmndisjointness} \end{eqnarray} In particular this is true if $|p(m) - p(n)| \leq 1$ (recall that $p(j) = Q_0(j)$). \item The sets $A_N$ were defined as subsets of $[1,N]$, and it will turn out that they contain a very large proportion of the points from that interval. This could lead to the impression that they form an increasing sequence of sets, but this is not true. For example, suppose that $N$ itself is a close return, such that $p(N) \geq 1$. In this case $N$ may still be contained in $A_N$, as the exceptional interval $J(N)$ is not taken into account in the definition of this set, but surely $N \notin A_{N+1}$. Thus, whenever we reach a close return, there may be a sudden decrease in the sets $A_N$ in the next step. In general, we only have the two relations \begin{equation} A_{N_2} \setminus A_{N_1} \ \ensuremath{\subseteq} \ [N_1+1,N_2] \end{equation} and \begin{equation} \label{eq:aninequality} A_{N_2} \cap [1,N_1] \ \ensuremath{\subseteq} \ A_{N_1} \ , \end{equation} where $N_1 \leq N_2$. However, the fluctuations and sudden decreases will only take place at the end of the interval $[1,N]$, and the initial part of the sets $A_N$ will stabilize at some point: Suppose $N_0 \leq N_1 \leq N_2$ and $N_0 \in A_{N_2}$. Then \begin{equation} \label{eq:stabilizingeffect} A_{N_1} \cap [1,N_0] \ = \ A_{N_2} \cap [1,N_0] \ = \ A_{N_0} \ .
\end{equation} This simply follows from the fact that when $N_0$ is contained in $A_{N_2}$ no exceptional interval $J(m)$ with $m \in [N_0+1,N_2-1]$ can reach into $[1,N_0]$, as it would then have to contain $N_0$. Thus $A_{N_1} \cap [1,N_0] = A_{N_2} \cap [1,N_0] = [1,N_0] \setminus \bigcup_{1 \leq m < N_0} J(m)$. Note that (\ref{eq:stabilizingeffect}) is always true whenever $N_0$ is not contained in any exceptional interval, i.e.\ $N_0 \notin \bigcup_{m\in\N} J(m) \ensuremath{\subseteq} \Omega_0 \ensuremath{\subseteq} \Omega_\infty$. In this case we have \begin{equation} \label{eq:finallystabilized} A_N \cap [1,N_0] \ = \ A_{N_0} \ \ \ \forall N \geq N_0 \ . \end{equation} In particular, as $l^+_q \notin \Omega_\infty$, this implies $[1,l^+_q] \cap A_N = A_{l^+_q} \ \forall N \geq l^+_q$. \item Note also that it is not always true that $\Lambda_N = \bigcup_{1\leq m < N} J(m)$, as one of the exceptional intervals might extend beyond $N$, whereas $\Lambda_N$ was defined as a subset of $[1,N]$. However, as we will see this relation holds as soon as we restrict to \textit{`admissible'} times (see below). \end{list} \end{bem} The sets $A_N$ will serve three different purposes: First of all, they will play an important role in the construction of the sink-source-orbits themselves. Secondly, they will serve as intermediates in the definition of the sets $R_N$ of regular points. And finally, we will now use them to define admissible times: \begin{definition}[Admissible times] \label{def:admissibletimes} A time $N \in \N$ is called \textbf{admissible} if $N \in A_N$ (which is equivalent to $N \notin \Lambda_N$). The set $\{N\in\N \mid N \textrm{ is admissible } \}$ will be denoted by $A$. \end{definition} \begin{bem} \label{bem:admissibletimes} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item Any $N \in \N \setminus \Omega_0$ is admissible (see Remark~\ref{bem:ansets}(a)). In particular, $l^+_q$ and $l^+_q+1$ are admissible for any $q \geq 1$.
\item As we mentioned above, for any admissible time $N$ there holds \begin{equation} \label{eq:lambdanequality} \Lambda_N \ = \ \bigcup_{1\leq m<N} J(m) \ , \end{equation} as $N \in A_N$ ensures that none of the exceptional intervals $J(m)$ with $m < N$ extends further than $N-1$. \item For any $N_1 \in \N$, all times $N_0 \in A_{N_1}$ are admissible. This is a direct consequence of the fact that $A_{N_1} \cap [1,N_0] \ensuremath{\subseteq} A_{N_0}$ (see (\ref{eq:aninequality})). However, as already mentioned there might also be further admissible times contained in $[1,N_1]\setminus A_{N_1} = \Lambda_{N_1}$. \item Note that $A=\bigcup_{N\in\N} A_N$. The inclusion $\ensuremath{\subseteq}$ follows directly from the definition, whereas $\supseteq$ is a consequence of (c). \end{list} \end{bem} Now we can also verify the property of the exceptional intervals which was mentioned at the beginning of this section: Whenever two such intervals $J(m)$ and $J(n)$ intersect, one of them is contained in the other. We do not prove this statement in full, but rather concentrate on `maximal' intervals, as this will be sufficient for our purposes. \begin{lem} \label{lem:centralpoints} Let $N\in\N$ be admissible and suppose $J$ is a non-empty maximal interval in $\Lambda_N = [1,N] \setminus A_N$. Then there exists a unique $m \in J$ with $p(m) = \max_{j\in J} p(j) =: p$, and there holds $J = J(m)$. Furthermore, $p(j) \leq \max\{0,p-2\} \ \forall j \in J \setminus \{m\}$. \end{lem} \textit{Proof.} \ Let $p:= \max_{j \in J} p(j)$ and $m \in J$ with $p(m) = p$. Obviously there holds $J(m) \ensuremath{\subseteq} J$. By definition, there cannot be any $j \in J \supseteq J(m)$ with $p(j) > p$. Therefore, as Remark~\ref{bem:ansets}(b) implies that $|p(j)-p| > 1$ for all $j \in J(m) \setminus \{m\}$ with $p(j) \geq 1$, there holds $p(j) \leq \max\{0,p-2\} \ \forall j \in J(m) \setminus \{m\}$. Thus, it suffices to prove that $J=J(m)$.
This will in turn follow if we can show that $\lambda^-(m) - 1$ and $\lambda^+(m)+1$ are not contained in $\Lambda_N$, because then $J(m)$ is a maximal interval in $\Lambda_N$ itself and must therefore be equal to $J$. We will only treat the case of $\lambda^-(m)-1$, the other one is similar. In order to show that $\lambda^-(m)-1$ is not contained in $J(k)$ for any $k = 1 \ensuremath{,\ldots,} N$ we distinguish three different cases, according to the value of $Q_{p-2}(k)$: First suppose $Q_{p-2}(k) > p+1$. Then $p(k) > p$ by (\ref{eq:pjestimates}). If $\lambda^-(m)-1 \in J(k)$, then $J(k) \cup J(m)$ is an interval and therefore $k \in J(k) \ensuremath{\subseteq} J$. But this contradicts the definition of $p$. If $Q_{p-2}(k) \in \{ p-1,p,p+1\}$, then $|Q_\infty(k) - Q_\infty(m)| \leq 2$ (again (\ref{eq:pjestimates})) and therefore $\lambda^-(m)-1 \notin \Lambda(k)$ by Remark~\ref{bem:ansets}(b). This only leaves the possibility $Q_{p-2}(k) \leq p-2$. But in this case $\lambda^-(m)-1 \in \Lambda(k)$ implies $\lambda^-(m)-1 \in \Omega_{p-2}(k) \subseteq \ensuremath{\tilde{\Omega}}_{p-2}$ (see Remark~\ref{bem:ansets}(a)). As $p(m)=p$ we can apply Lemma \ref{lem:omegatransition}(b) to obtain that $\lambda^-(m) - 1 - m = -l^-_p - 1 \in \ensuremath{\tilde{\Omega}}_{p-1}$, contradicting $-l^-_p - 1 \notin \Omega_\infty $ (by the choice of the $l^\pm_q$ in Remark \ref{bem:existenceofls}). As mentioned, the same arguments apply to $\lambda^+(m) + 1$, which completes the proof. {\raggedleft $\Box$ \\} \ \\ This naturally leads to the following \begin{definition} \label{def:centralpoints} If $N$ is admissible and $A_N = \{ a_1 \ensuremath{,\ldots,} a_n \}$ with $1=a_1 < \ldots < a_n=N$, let \[ \JN \ := \{[a_k+1,a_{k+1}-1] \mid k=1 \ensuremath{,\ldots,} n-1 \} \setminus \{ \emptyset \} \] be the family of all maximal intervals in $\Lambda_N = [1,N] \setminus A_N$ and $\J := \bigcup_{N \in \N} \JN$. 
For any $J \in \J$ let $p_J:= \max_{j \in J} p(j)$ and define $m_J$ as the unique $m \in J$ with $p(m) = p_J$. $m_J$ will be called the \textbf{central point} of the interval $J$. Further, let $J^- := J^-(m_J)$ and $J^+ := J^+(m_J)$ (note that $J=J(m_J)$ by Lemma~\ref{lem:centralpoints}). \end{definition} Note that the interval $J(n)$ need not be contained in $\J$ for every $n \in \N$ with $p(n) > 0$. In fact, this fails whenever $J(n) \ensuremath{\subseteq} J^+(m)$ for some $m<n$. Among some other facts, the following lemma states that central points are always admissible. In the light of the discussion in Section~\ref{Admissiblepoints}, it is not surprising that this will turn out to be crucial for the construction. \begin{lem} \label{lem:endpoints} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item Let $J \in \J$. Then $\lambda^-(m_J) -1 \in A_{m_J}$, $\lambda^-(m_J) \in A_{m_J}$ and $m_J \in A_{m_J}$. In particular, $\lambda^-(m_J)-1, \lambda^-(m_J)$ and $m_J$ are admissible. Further, there holds \begin{equation} \label{eq:pjinjmspecial} p(j) \leq Q_\infty(j) \leq \max\{0,p_J-2\} \ \ \ \forall j \in J \setminus \{m_J\} \ . \end{equation} \item More generally, if $J \in \J$ and $q \leq p_J$, then $m_J-l^-_q-1,\ m_J-l^-_q,\ m_J\in A_{m_J}$. In particular, $m_J-l^-_q-1,\ m_J-l^-_q$ and $m_J$ are admissible. Further there holds \begin{equation} \label{eq:pjinjm} p(j) \leq Q_\infty(j) \leq \max\{0,q-2\} \ \ \ \forall j \in [m_J - l^-_q,m_J+l^+_q] \setminus \{m_J\} \ . \end{equation} \item If $J \in \J$, then $\lambda^+(m_J) + 1$ is admissible. \item For all $q \in \N$ there holds $\nu(q)-l^-_q -1,\ \nu(q)-l^-_q,\ \nu(q) \in A_{\nu(q)}$ and \[ Q_\infty(j) \leq \max\{0,q-2\} \ \ \ \forall j \in [\nu(q) - l^-_q,\nu(q)+l^+_q] \setminus \{\nu(q)\} \ . \] In particular $\nu(q)-l^-_q-1,\nu(q)-l^-_q$ and $\nu(q)$ are admissible. 
\end{list} \end{lem} \textit{Proof.}\ \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item This is a special case of (b), which we prove below. \item Let $m := m_J$ and $j \neq m$. Suppose $Q_\infty(j) \geq q-2$. Then \[ d(\omega_{m-j},0) \ = \ d(\ensuremath{\omega_m},\ensuremath{\omega_j}) \ \leq \ 2 S_\infty(\alpha) \cdot \frac{\alpha^{-(q-3)}}{L_2} \ . \] Therefore $|m-j| \geq \ensuremath{\tilde{\nu}}(q-2) > l^\pm_q$ by (\ref{eq:lplusestimates}), which implies that $j \notin [m-l^-_q,m+l^+_q]$. This proves (\ref{eq:pjinjm}). As $J \in \J$, there exists some $N > m$ such that $J$ is a maximal interval in $\Lambda_N$ and consequently $\lambda^-(m) - 1$ is contained in $A_N$ (in particular $p(\lambda^-(m)-1)=0$). Hence, for any $n < \lambda^-(m)-1$ the interval $J(n)$ lies strictly to the left of $\lambda^-(m)-1$ and can therefore not intersect $J$. Thus, in order to show that $m-l^-_q-1,\ m-l^-_q,\ m \in A_m$, it suffices to show that none of these points is contained in $U := \bigcup_{n\in[\lambda^-(m),m-1]} J(n)$. However, by (\ref{eq:jminominfty}) and (\ref{eq:pjinjm}) there holds $U \ensuremath{\subseteq} \ensuremath{\tilde{\Omega}}_{q-2}$. As $p(m) \geq q$ by assumption, Lemma~\ref{lem:omegatransition}(b) implies $U-m \ensuremath{\subseteq} \ensuremath{\tilde{\Omega}}_{q-1} \ensuremath{\subseteq} \Omega_\infty$ and the statement follows from $-l^-_q-1,\ -l^-_q,\ 0 \notin \Omega_\infty$. Finally, note that $m-l^-_q \in A_m$ implies $m-l^-_q \in A_{m-l^-_q}$ by (\ref{eq:aninequality}), similarly for $m-l^-_q-1$, such that these points are both admissible. \item As $m$ is admissible, $\lambda^+(m)+1$ cannot be contained in $J(n)$ for any $n < m$ (as all of these intervals must be contained in $[1,m-1]$). Thus, it suffices to show that $\lambda^+(m)+1$ is not contained in $\tilde{U} := \bigcup_{n\in[m+1,\lambda^+(m)]} J(n)$. But this set is again contained in $\ensuremath{\tilde{\Omega}}_{p_J - 2}$ by (\ref{eq:pjinjm}). 
Therefore $\tilde{U} -m \ensuremath{\subseteq} \Omega_\infty$ by Lemma~\ref{lem:omegatransition}(b), and the statement follows from $l^+_{p_J}+1 \notin \Omega_\infty$. \item We show that $\nu(q)$ is admissible. Lemma~\ref{lem:admissiblecentral} below then implies that $\nu(q)$ is a central point, and we can therefore apply (b) in order to prove (d). Suppose $n< \nu(q)$. We have to show that $\nu(q) \notin \Omega_\infty(n) \supseteq J(n)$. In order to see this, note that $p(j) < q$ by definition of $\nu(q)$. Thus $d(\omega_{\nu(q)-n},0) = d(\omega_{\nu(q)},\ensuremath{\omega_n}) \geq \ensuremath{\tilde{\nu}}(q) \geq (q+2) \cdot w$ by (\ref{cond:hfunctions1}), and consequently $\nu(q) \notin \Omega_\infty(n)$. As $n<\nu(q)$ was arbitrary, this implies $\nu(q) \in A_{\nu(q)}$, such that $\nu(q)$ is admissible. \end{list} {\raggedleft $\Box$ \\} \ \\ For Part (a) of the preceding lemma, the converse is true as well: \begin{lem} \label{lem:admissiblecentral} Suppose $m\in \N$ is admissible and $p(m) > 0$. Then $J(m) \in \J_{\lambda^+(m)+1} \ensuremath{\subseteq} \J$ and $\lambda^-(m)-1,\lambda^-(m)$ and $\lambda^+(m)+1$ are admissible. \end{lem} \textit{Proof.}\ We start by proving that $\lambda^+(m)+1$ is admissible, i.e.\ contained in $A_{\lambda^+(m)+1}$. First of all, the fact that $m$ is admissible ensures that none of the intervals $J(n)$ with $n < m$ intersects $[m+1,\lambda^+(m)+1]$. Therefore, none of these intervals can contain $\lambda^+(m)+1$, and for $J(m)$ the same is true by definition. Now suppose $n \in [m+1,\lambda^+(m)]$. Then, similarly to the proof of Lemma~\ref{lem:endpoints}(b), we obtain $p(n) \leq p(m)-2$ and therefore $J(n) \ensuremath{\subseteq} \ensuremath{\tilde{\Omega}}_{p(m)-2}$. Thus $J(n)-m$ is contained in $\ensuremath{\tilde{\Omega}}_{p(m)-1} \ensuremath{\subseteq} \Omega_\infty$ by Lemma~\ref{lem:omegatransition}(b) and can therefore not contain $l^+_{p(m)}+1 \notin \Omega_\infty$. Thus $\lambda^+(m)+1=m+l^+_{p(m)}+1$ is admissible. 
By Lemma~\ref{lem:centralpoints}, for any maximal interval $J = J(n) \in \J_{\lambda^+(m)+1}$ that intersects $J(m)$ there holds either $J(n) = J(m)$, such that $n=m$, or $J(m) \subseteq J(n)$. However, the second case cannot occur if $n < m$ (as $m$ is admissible), and for $n\in [m+1,\lambda^+(m)]$ it is ruled out as we have just argued that $p(n) < p(m)$ for such $n$. This proves $J(m) \in \J_{\lambda^+(m)+1}$. Finally, we can apply Lemma~\ref{lem:endpoints}(a) to $J=J(m)$, which yields that $\lambda^-(m)-1$ and $\lambda^-(m)$ are admissible as well. {\raggedleft $\Box$ \\} \subsection{Regular times} \label{Regulartimes} Now we can turn to defining the sets of regular points $R_N \ensuremath{\subseteq} [1,N]$. The sets $A_N$ already contain all points outside of the exceptional intervals $J(m)\ (m \in [1,N-1])$. As described in Section~\ref{Strategy}, we have to add certain points from the right parts $J^+(m)$ of these intervals. In order to do so, for each $J \in \J_N$ we will define a set $R(J) \ensuremath{\subseteq} J^+$ and then let $R_N = A_N \cup \bigcup_{J\in\J_N} R(J)$. Both $R_N$ and $R(J)$ will be defined by induction on $p$. To be more precise, in the $p$-th step of the induction we first define $R(J)$ for all $J\in\J$ with $p_J \leq p-1$, and then $R_N$ for all admissible times $N \leq \nu(p)$. \medskip As in the preceding subsection, conditions (\ref{cond:gamma0}), (\ref{cond:alphagamma0}), (\ref{cond:hfunctions1}) and (\ref{cond:hfunctions2}) will be used as standing assumptions (since all of the statements in this subsection directly or indirectly depend on Definition~\ref{def:ansets}). \begin{definition}[Regular times] \label{def:regularsets} As mentioned, we proceed by induction on $p$. Note that the inclusions $R_N \ensuremath{\subseteq} [1,N]$ and $R(J) \ensuremath{\subseteq} J^+$ are preserved in every step of the induction. 
\ \\ \underline{$p=1:$} \ In order to start the induction let \[ R_N \ := \ [1,N] \ \] for any $N \leq \nu(1)$. Note that by definition there is no $J\in\J$ with $p_J = 0$. \ \\ \underline{$p \ensuremath{\rightarrow} p+1$:} \ Suppose $R(J)$ has been defined for all $J \in \J$ with $p_J \leq p-1$ and $R_N$ has been defined for all admissible times $N \leq \nu(p)$. In particular, this means that $R_{l^+_p}$ has been defined already.% \footnote{As $l^+_p \leq \nu(p)$ by (\ref{eq:lplusestimates}) and $l^+_p$ is admissible by Remark~\ref{bem:admissibletimes}(a).} Then, for all $J \in \J$ with $p_J = p$ let \begin{equation} \label{eq:recursivedef1} R(J) \ = \ R_{l^+_p} + m_J \ . \end{equation} Note that as $J^+ = [m_J+1,m_J+l^+_p]$, the inclusion $R(J) \ensuremath{\subseteq} J^+$ follows from $R_{l^+_p} \ensuremath{\subseteq} [1,l^+_p]$. Further, for any admissible $N \in [\nu(p)+1,\nu(p+1)]$ let \begin{equation} \label{eq:recursivedef2} R_N \ := \ A_N \cup \bigcup_{J\in\J_N} R(J) \ . \end{equation} Here the inclusion $R_N \ensuremath{\subseteq} [1,N]$ follows from $R(J) \ensuremath{\subseteq} J^+ \ensuremath{\subseteq} J \ \forall J \in \J_N$, as $J \ensuremath{\subseteq} [1,N] \ \forall J \in \J_N$ by definition (see Definition~\ref{def:centralpoints}). \medskip \noindent Finally, we call $j\leq N$ \textbf{regular with respect to $N$} if $j$ is contained in $R_N$. \end{definition} \begin{bem} \label{bem:regularsets} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item Obviously any $j\in A_N$ is regular with respect to $N$. As $[1,N] \setminus A_N = \Lambda_N \ensuremath{\subseteq} \Omega_\infty$ (see (\ref{eq:lambdaninominfty})), this implies that any $j \in \N \setminus \Omega_\infty$ is regular with respect to any $N \geq j$. In particular (see Remarks~\ref{bem:omegaestimates} and \ref{bem:existenceofls}) \begin{equation} \label{eq:lpminrn} 1,2, l^+_q , l^+_q+1 \ \in \ A_N \ \ensuremath{\subseteq} \ R_N \ \ \ \forall q \in \N, \ N \geq l^+_q+1 \ . 
\end{equation} \item Similar to the sets $A_N$, the sequence $(R_N)_{N\in\N}$ is not increasing (compare Remark~\ref{bem:ansets}(c)). However, if $N_0 \leq N_1 \leq N_2$ are all admissible and $N_0 \in A_{N_2}$, then \begin{equation} \label{eq:regularstabilized} R_{N_1} \cap [1,N_0] \ = \ R_{N_2} \cap [1,N_0] \ = \ R_{N_0} . \end{equation} This can be seen as follows: $N_0 \in A_{N_2}$ implies that no interval $J(m) \ (N_0 \leq m < N_2)$ can reach into $[1,N_0]$, and in addition $N_0$ is admissible (see (\ref{eq:stabilizingeffect})). Therefore, since $R(J) \ensuremath{\subseteq} J$, all three sets in (\ref{eq:regularstabilized}) coincide with $A_{N_0} \cup \bigcup_{J \in \J_{N_0}} R(J)$. In particular, by (\ref{eq:lpminrn}) this implies that \begin{equation} \label{eq:lplusstabilized} R_N \cap [1,l^+_q] = R_{l^+_q} \ \textrm{ and } \ R_N \cap [1,l^+_q+1] = R_{l^+_q+1} \ \ \ \forall N \geq l^+_q+1 \ . \end{equation} \item Let $J \in \J$. As $R(J) = R_{l^+_{p_J}} +m_J$, statement (a) implies \begin{equation} \label{eq:mponeinrj} m_J+1,m_J+2,m_J+l^+_q,m_J+l^+_q+1 \in R(J) \ \ \ \forall q \leq p_J \ . \end{equation} \end{list} \end{bem} It will also be useful to have a notation for the sets of non-regular points: \begin{definition} \label{def:gammasets} For each admissible time $N \in \N$ let \[ \Gamma_N \ := \ [1,N] \setminus R_N \] and for each $J \in \J$ let \[ \Gamma^+(J) \ := \ J^+ \setminus R(J) \ \ \textrm{ and } \ \ \Gamma(J) \ := \ J^- \cup \Gamma^+(J) \ . \] \end{definition} \begin{bem} \label{bem:gammasets} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item Note that \begin{equation} \label{eq:gammanequality} \Gamma_N = \bigcup_{J \in \J_N} \Gamma(J) \ = \ \bigcup_{J\in\J_N} J^- \cup \Gamma^+(J) \ . \end{equation} \item Similar to (\ref{eq:recursivedef1}), the sets $\Gamma^+(J)$ satisfy the recursive equation \begin{equation} \label{eq:recursivegammas} \Gamma^+(J) \ = \ \Gamma_{l^+_{p_J}} + m_J \ . 
\end{equation} \item As $A_N \ensuremath{\subseteq} R_N$, there holds $\Gamma_N \ensuremath{\subseteq} \Lambda_N$. Thus, Remark~\ref{bem:ansets}(a) implies \begin{equation} \label{eq:gammaninominfty} \Gamma_N \ \ensuremath{\subseteq} \ \Lambda_N \ \ensuremath{\subseteq} \ \Omega_0 \ \ensuremath{\subseteq} \ \Omega_p \ \ensuremath{\subseteq} \Omega_\infty \end{equation} for all admissible times $N \in \N$ and arbitrary $p \in \N$. \item Suppose both $N$ and $N+1$ are admissible. Then $p(N)=0$, such that $J(N) = \emptyset$, since otherwise $N+1$ would be contained in $J(N)$ and could therefore not be admissible. Thus there holds $\Lambda_N = \Lambda_{N+1}$ (see (\ref{eq:lambdanequality})). But this means that $\J_N = \J_{N+1}$ and consequently $\Gamma_N=\Gamma_{N+1}$ (see (\ref{eq:gammanequality})). In particular, this is true whenever $N,N+1 \notin \Omega_\infty$, such that we obtain \begin{equation} \label{eq:gammalqplus} \Gamma_{l^+_q} \ = \ \Gamma_{l^+_q+1} \ \ \ \forall q \in \N \ . \end{equation} \end{list} \end{bem} Now we must gather some information about the sets $R_N$ and $\Gamma_N$. First of all, the following lemma gives some basic control. In order to state it, let \begin{equation} \label{eq:omtilmin} \ensuremath{\tilde{\Omega}}^{(\pm)}_{-1} \ := \ \emptyset \ \end{equation} and note that $\ensuremath{\tilde{\Omega}}^{(\pm)}_{0} \ = \emptyset$ as well. \begin{lem} \label{lem:regularsetsbasic} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item For any $J \in \J$ there holds $\Gamma(J) \ensuremath{\subseteq} \ensuremath{\tilde{\Omega}}_{p_J-2}$. Further, for any admissible $N \leq \ensuremath{\tilde{\nu}}(q)$ there holds $\Gamma_N \ensuremath{\subseteq} \ensuremath{\tilde{\Omega}}_{q-1}$. \item If $j \in R(J)$ for any $J \in \J$, then \begin{equation} \label{eq:rjdistance} d(\ensuremath{\omega_j},0) \ \geq \ \frac{4\gamma}{L_2} - S_{p_J-1}(\alpha) \cdot \frac{\alpha^{-1}}{L_2} \ \geq \ \frac{3\gamma}{L_2} \ . 
\end{equation} Further, for any admissible $N \leq \nu(q)$ there holds \begin{equation} \label{eq:rndistance} d(\ensuremath{\omega_j},0) \ \geq \ \frac{4\gamma}{L_2} - S_{q-1}(\alpha) \cdot \frac{\alpha^{-1}}{L_2} \ \geq \ \frac{3\gamma}{L_2} \ \ \ \forall j \in R_N \setminus \{N\} \ . \end{equation} \end{list} \end{lem} \textit{Proof.}\ \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item We proceed by induction on $q$. More precisely, we prove the following induction statement: \begin{eqnarray} \label{eq:rsbinduction1} \Gamma^+(J) & \ensuremath{\subseteq} & \ensuremath{\tilde{\Omega}}^-_{p_J-2} \ \ \ \forall J \in \J : p_J \leq q \ \\ \label{eq:rsbinduction2} \Gamma_N & \ensuremath{\subseteq} & \ensuremath{\tilde{\Omega}}_{q-1}^- \ \ \ \forall N \leq \ensuremath{\tilde{\nu}}(q) \ . \end{eqnarray} For $q=1$ note that $\Gamma_N$ is empty for all $N \leq \ensuremath{\tilde{\nu}}(1)$. In particular $\Gamma_{l^+_1}$ is empty, as $l^+_1 \leq \ensuremath{\tilde{\nu}}(1)$ by (\ref{eq:lplusestimates}). But this means in turn that for any $J\in\J$ with $p_J = 1$ the set $\Gamma^+(J) = \Gamma_{l^+_1} + m_J$ is empty as well (see (\ref{eq:recursivegammas})). Let $p \geq 1$ and suppose the above statements hold for all $q \leq p$. Further, let $J\in \J$ with $p_J = p+1$. Then $\Gamma_{l^+_{p+1}} \ensuremath{\subseteq} \ensuremath{\tilde{\Omega}}^-_{p-2}$ as $l^+_{p+1} < \ensuremath{\tilde{\nu}}(p-1)$ by (\ref{eq:lplusestimates}). Therefore \[ \Gamma^+(J) \ = \ \Gamma_{l^+_{p+1}} + m_J \ \ensuremath{\subseteq} \ \ensuremath{\tilde{\Omega}}^-_{p-2} + m_J \ \ensuremath{\subseteq} \ \ensuremath{\tilde{\Omega}}^-_{p-1} \ . \] by Lemma~\ref{lem:omegatransition}(b). Thus (\ref{eq:rsbinduction1}) holds for $q=p+1$. Now suppose $N \leq \ensuremath{\tilde{\nu}}(p+1)$ and note that this implies $Q_p(m) \leq p \ \forall m < N$. Further, we have $\Gamma_N = \bigcup_{J\in\J_N} J^- \cup \Gamma^+(J)$ by (\ref{eq:gammanequality}). 
As $J^- \ensuremath{\subseteq} \Omega^-_p(m_J) \ \forall J\in \J$ and $m_J < N \ \forall J \in \J_N$, there holds $J^- \ensuremath{\subseteq} \ensuremath{\tilde{\Omega}}^-_p$ for any $J \in \J_N$, and for $\Gamma^+(J)$ the same follows from (\ref{eq:rsbinduction1}). This proves (\ref{eq:rsbinduction2}) for $q=p+1$. \item Suppose (\ref{eq:rjdistance}) holds whenever $p_J \leq p$. This implies (\ref{eq:rndistance}) for all $q \leq p$: We have $d(\ensuremath{\omega_j},0) \geq \frac{4\gamma}{L_2}$ whenever $j\in A_N\setminus \{N\}$ for some $N\in\N$, and further $p_J < q \ \forall J \in \J_N$ whenever $N \leq \nu(q)$. It remains to prove (\ref{eq:rjdistance}) by induction on $p_J$. If $p_J \leq 2$ the statement is obvious, because then $p(j) = 0 \ \forall j \in J \setminus \{ m_J\}$ by Lemma~\ref{lem:endpoints}(a). Suppose now that (\ref{eq:rjdistance}) holds whenever $p_J \leq p$. As mentioned above, (\ref{eq:rndistance}) then holds for all $q \leq p$. Let $p_I = p+1$ for some $I \in\J$ and $j \in R(I)$. Then $j-m_I \in R_{l^+_{p+1}}$ (see (\ref{eq:recursivedef1})), and as $l^+_{p+1} \leq \nu(p)$ we can apply (\ref{eq:rndistance}) with $q=p$ to obtain that \[ d(\omega_{j-m_I},0) \ \geq \ \frac{4\gamma}{L_2} - S_{p-1}(\alpha) \cdot \frac{\alpha^{-1}}{L_2} \ . \] Consequently \begin{eqnarray*} d(\ensuremath{\omega_j},0) & \geq & d(\omega_{j-m_I},0) - d(\omega_{m_I},0) \ \geq \ \frac{4\gamma}{L_2} - S_{p-1}(\alpha) \cdot \frac{\alpha^{-1}}{L_2} - \frac{ \alpha^{-p}}{L_2} \\ & = & \frac{4\gamma}{L_2} - \left(S_{p-1}(\alpha) + \alpha^{-(p-1)} \right) \cdot \frac{\alpha^{-1}}{L_2} \ = \ \frac{4\gamma}{L_2} - S_p(\alpha) \cdot \frac{\alpha^{-1}}{L_2} \ . \end{eqnarray*} \end{list} {\raggedleft $\Box$ \\} \ \\ As a consequence of Lemma \ref{lem:endpoints} and the preceding lemma, we obtain the following statements and estimates. In order to motivate these, the reader should compare the statements with the assumptions of Lemma~\ref{lem:orbitsthrowout}~. 
\begin{lem} \label{lem:regsets} \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item For any admissible $N\in\N$ there holds \begin{equation} \#\left([1,j] \setminus R_N \right) \ \leq \ \frac{j}{12w} \ \ \ \ \ \forall j \in [1,N] \ . \end{equation} In particular \begin{equation} \label{eq:regsets3} \# \left( [1,l^+_q] \setminus R_N \right) \ \leq \ \left[\frac{q}{12}\right] \ \leq \ \max\left\{0,\frac{2q-5}{4}\right\} \ \ \ \ \ \forall q \in \N \ , \end{equation} where $[x]$ denotes the integer part of $x\in\ensuremath{\mathbb{R}}^+$. \item Let $q \geq 1$ and $\sigma := \frac{u+3}{u+v}$. Then \begin{equation} \# ([j+1,l^+_q] \setminus R_{l^+_q}) \ \leq \ \sigma\cdot (l^+_q - j) \ \ \ \forall j \in [0,l^+_q-1] \ . \end{equation} \item Let $N\in\N$ be admissible, $J\in\JN$ and $\lambda^+:=\lambda^+(m_J)$. Then \begin{equation} \# ([j+1,\lambda^+] \cap \Gamma_N) \ \leq \ \sigma\cdot (\lambda^+-j) \ \ \ \forall j \in [0,\lambda^+-1] \ . \end{equation} \item Suppose $m\in \N$ is admissible and $p(m) \geq 1$, such that $J := J(m) \in \J$ by Lemma~\ref{lem:admissiblecentral}~. Then for all $q \leq p_J$ there holds \begin{equation} \label{eq:approachfraction} \#([m-l^-_q,m] \setminus R_m) \ \leq \ \frac{q}{12} \ \leq \ \max\left\{0,\frac{2q-5}{4}\right\} \ . \end{equation} \end{list} \end{lem} We recall that we use conditions (\ref{cond:gamma0}), (\ref{cond:alphagamma0}), (\ref{cond:hfunctions1}) and (\ref{cond:hfunctions2}) as standing assumptions in this subsection. \ \\ \textit{Proof.}\ Recall that $([1,j] \setminus R_N ) = ([1,j] \cap \Gamma_N)$. \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item This is a direct consequence of (\ref{eq:gammaninominfty}) and (\ref{cond:hfunctions2}). For the second inequality in (\ref{eq:regsets3}), note that $\# ( [1,l^+_q] \setminus R_N ) = 0$ whenever $q < 12$. 
\item We prove the following statement by induction on $q$: \begin{equation} \label{eq:gammasets1} \forall j \in [0,l^+_q-1] \ \exists n \in [j+1,l^+_q]: \ \#([j+1,n] \cap \Gamma_{l^+_q}) \ \leq \ \sigma\cdot (n-j) \ . \end{equation} This obviously implies the statement, as it ensures the existence of a partition of $[j+1,l^+_q]$ into disjoint intervals $I_i =[j_i+1,j_{i+1}]$ with $j=j_1 < j_2 < \ldots < j_k=l^+_q$ which all satisfy \[ \# \left(I_i \cap \Gamma_{l^+_q}\right) \ \leq \ \sigma \cdot (j_{i+1} - j_i) \ . \] If $q=1$, then (\ref{eq:gammasets1}) is obvious as $\Gamma_{l^+_1} \ensuremath{\subseteq} \Lambda_{l^+_1} = \emptyset$ (see (\ref{eq:gammaninominfty}) and note that $l^+_1 \leq \nu(1)$ by (\ref{eq:lplusestimates})). Now suppose (\ref{eq:gammasets1}) holds for all $q \leq p$. In order to show (\ref{eq:gammasets1}) for $p+1$, we have to distinguish three possible cases. Recall that by (\ref{eq:gammanequality}) and (\ref{eq:gammaninominfty}) \[ \Gamma_{l^+_{p+1}} \ = \ \bigcup_{J\in\J_{l^+_{p+1}}} J^- \cup \Gamma^+(J) \ \ensuremath{\subseteq} \ \Lambda_{l^+_{p+1}} \ . \] If $j+1 \notin \Gamma_{l^+_{p+1}}$ we can choose $n = j+1$. If $j+1 \in \Gamma^+(J)$ for some $J \in {\cal J}_{l^+_{p+1}}$ then $p_J \leq p$ as $l^+_{p+1} < \nu(p)$ by (\ref{eq:lplusestimates}). By (\ref{eq:recursivegammas}) there holds $j+1-m_J \in \Gamma_{l^+_{p_J}} \ensuremath{\subseteq} [1,l^+_{p_J}]$, and hence $j - m_J \in [0,l^+_{p_J}-1]$. Thus we can apply the induction statement with $q=p_J$ to $j-m_J$ and obtain some $\tilde{n} \in [j - m_J +1,l^+_{p_J}]$ with \[ \#([j-m_J+1,\tilde{n}] \cap \Gamma_{l^+_{p_J}}) \ \leq \ \sigma \cdot (\tilde{n}-j+m_J) \ . \] As $\Gamma^+(J) = \Gamma_{l^+_{p_J}} + m_J$ (again by (\ref{eq:recursivegammas})), $n := \tilde{n} + m_J$ has the required property. Finally, if $j+1 \in J^-$ for some $J \in {\cal J}_{l^+_{p+1}}$ then $[\lambda^-(m_J),j+1] \ensuremath{\subseteq} J^- \ensuremath{\subseteq} \Gamma_{l^+_{p+1}}$. 
Therefore \begin{eqnarray} \label{eq:regsets3b} \lefteqn{ \frac{ \# \left( [j+1,\lambda^+(m_J)] \cap \Gamma_{l^+_{p+1}} \right) }{\lambda^+(m_J) - j} \ \leq \ \frac{ \# \left(J \cap \Gamma_{l_{p+1}^+} \right) }{\# J } } \\ & \leq & \frac{\#(J^- \cup \Gamma^+(J))}{(u+v)\cdot p_J} \ \leq \ \frac{(u+2) \cdot p_J + \# \Gamma_{l^+_{p_J}}}{(u+v)\cdot p_J} \ \leq \ \frac{u+3}{u+v} \ , \nonumber \end{eqnarray} where we used part (a) of this lemma with $j=N=l^+_{p_J}$ to conclude that $\# \Gamma_{l^+_{p_J}} \leq p_J$. \item Similarly to (b), we prove that \[ \forall j \in [0,\lambda^+-1] \ \exists n \in [j+1,\lambda^+] : \ \#([j+1,n] \cap \Gamma_N) \ \leq \ \sigma(n-j) \ . \] Again, we have to distinguish three cases: If $j+1 \notin \Gamma_N$ we can choose $n = j+1$. If $j+1 \in \Gamma^+(I)$ for some $I \in \JN$, then we can choose $n = \lambda^+(m_I) = m_I + l^+_{p_I}$. Using that $\Gamma^+(I) = \Gamma_{l^+_{p_I}} + m_I$ by (\ref{eq:recursivegammas}), part (b) implies that $n$ has the required property. If $j+1 \in I^-$ for some $I \in \JN$ we can choose $n = \lambda^+(m_I)$ and proceed exactly as in (\ref{eq:regsets3b}), with $J$ being replaced by $I$. \item By Lemma~\ref{lem:endpoints}(b) there holds $m-l^-_q \in A_m$. Therefore (\ref{eq:gammanequality}), $\Gamma(J) \ensuremath{\subseteq} J$ and (\ref{eq:jminominfty}) imply that \[ [m-l^-_q,m] \cap \Gamma_m \ \ensuremath{\subseteq} \bigcup_{j \in [m-l^-_q+1,m-1]} \hspace{-2eM} \Omega_{q-2}(j) \ =: \ U \ . \] Further, Lemma~\ref{lem:endpoints}(b) yields that $U \ensuremath{\subseteq} \ensuremath{\tilde{\Omega}}_{q-2}$, such that $U-m \ensuremath{\subseteq} \Omega_\infty$ by Lemma~\ref{lem:omegatransition}(b). Consequently \begin{eqnarray*} \lefteqn{\#\left([m-l^-_q,m] \cap \Gamma_m\right) \ \leq \ \#U \ = \ \#(U-m)} \\ & \leq & \#\left([-l^-_q,-1] \cap \Omega_\infty\right) \ \stackrel{(\ref{cond:hfunctions2})}{\leq} \ \frac{l^-_q}{12w} \ \leq \ \frac{q}{12} \ \leq \ \max\left\{0,\frac{2q-5}{4}\right\} \ . 
\end{eqnarray*} \end{list} {\raggedleft $\Box$ \\} \section{Construction of the sink-source orbits: One-sided forcing} \label{Construction} We now turn to the construction of the sink-source-orbits in the case of one-sided forcing. Before we start with the core part of the proof, we have to add some more assumptions on the parameters. Further, we restate two estimates from the preceding section, together with a few other facts that will be used frequently in the construction. \medskip First of all, we choose $u$ and $v$ such that \begin{eqnarray} u & \geq & 8 \label{cond:u} \ , \\ v & \geq & 8 \label{cond:v} \ , \\ \label{cond:sigma} \sigma & \leq & \textstyle \frac{1}{6} \ . \end{eqnarray} In addition, we assume that \begin{eqnarray} \textstyle \ensuremath{\frac{1}{2}} \ensuremath{\sqrt{\alpha}} & \geq & 6 + K\cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}}) \ \ . \label{cond:alpha3} \end{eqnarray} Further, we remark that (\ref{cond:alphagamma0}) implies \begin{eqnarray} \alpha & \geq & 4S_\infty(\alpha) \ . \label{cond:alpha4} \end{eqnarray} Now suppose that (\ref{cond:gamma0}), (\ref{cond:alphagamma0}), (\ref{cond:hfunctions1}) and (\ref{cond:hfunctions2}) hold. Together with the above assumptions and the respective results from the last section (see (\ref{eq:lplusestimates}), Lemma~\ref{lem:regsets}(b) and (\ref{eq:lplusstabilized})), this yields that for any $q \geq 1$ the following estimates hold: \begin{eqnarray} \label{eq:lpmestimates} 4(q+1) \ \leq \ 8 q \ \leq \ l^\pm_q & < & \ensuremath{\tilde{\nu}}(\max\{1,q-2\}) \ \leq \ \nu(\max\{1,q-2\}) \ \\ \#([j+1,l^+_q] \ensuremath{\setminus} R_N) & \leq & \frac{l^+_q-j}{6} \ \ \ \ \ \forall N \geq l^+_q, \ j\in [0,l^+_q-1] \label{eq:etajnestimate} \ . \end{eqnarray} Recall that $(\xi_n(\beta,l))_{n\geq -l}$ corresponds to the forward orbit of the point $(\omega_{-l},3)$ under the transformation $T_\beta$, where we suppress the $\theta$-coordinate (see Definition \ref{def:parameterfamily}). 
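Since $\sigma = \frac{u+3}{u+v}$ (see Lemma~\ref{lem:regsets}(b)), condition (\ref{cond:sigma}) is a joint constraint on $u$ and $v$ which can be unfolded explicitly; the following elementary computation spells this out (the particular sample values at the end are only illustrative and are not prescribed by the construction):

```latex
% Unfolding condition (cond:sigma), using sigma = (u+3)/(u+v) with u+v > 0:
\sigma \;=\; \frac{u+3}{u+v} \;\leq\; \frac{1}{6}
\quad\Longleftrightarrow\quad
6(u+3) \;\leq\; u+v
\quad\Longleftrightarrow\quad
v \;\geq\; 5u+18 \ .
% For instance, u = 8 together with any v >= 58 satisfies
% (cond:u), (cond:v) and (cond:sigma) simultaneously.
```

In particular, (\ref{cond:sigma}) forces $v$ to be considerably larger than $u$, so the right parts $J^+$ of the exceptional intervals are much longer than the left parts $J^-$.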
As we are in the case of one-sided forcing, we can use the fact that for all $l,n \in \ensuremath{\mathbb{Z}}, n \geq -l$ the mapping $\beta \mapsto \xi_n(\beta,l)$ is monotonically decreasing in $\beta$. For $l \geq 0$ and $n\geq 1$ the monotonicity is even strict (as $g(0) = 1 > 0$ and $F$ is strictly increasing by (\ref{cond:Funiformbounds})). This has some very convenient implications. First of all, we can uniquely define parameters $\beta^+_{q,n}$ and $\beta^-_{q,n}$ ($q,n \in \N$) by the equations \begin{equation} \label{eq:betapluspndef} \xi_n(\beta_{q,n}^+,l^-_q) \ = \ \ensuremath{\frac{1}{\alpha}} \end{equation} and \begin{equation} \label{eq:betaminpndef} \xi_n(\beta_{q,n}^-,l^-_q) \ = \ -\ensuremath{\frac{1}{\alpha}} \ . \end{equation} In addition, we let \begin{equation} \label{eq:lpmzero} l^-_0 \ := \ 0 \ \ \ \ \textrm{ and } \ \ \ \ l^+_0 \ := \ 0 \ \end{equation} (note that so far the $l^\pm_q$ had only been defined for $q \geq 1$) and extend the definitions of $\beta^\pm_{q,n}$ to $q=0$. If we now want to show that $\xi_n(\beta,l^-_q) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_j(\beta,l^-_q) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ for some $j<n$, we can do so by proving that \begin{equation} \label{eq:betapluspn} \xi_n(\beta_{q,j}^+,l^-_q) \ \geq \ \ensuremath{\frac{1}{\alpha}} \end{equation} and \begin{equation} \label{eq:betaminpn} \xi_n(\beta_{q,j}^-,l^-_q) \ \leq \ -\ensuremath{\frac{1}{\alpha}} \end{equation} (compare (\ref{eq:betaplusdef})--(\ref{eq:betaminus})). Furthermore, (\ref{eq:betaminpn}) is a trivial consequence of the fact that $\ensuremath{\mathbb{T}^{1}} \times [-3,-\ensuremath{\frac{1}{\alpha}}]$ is mapped into $\ensuremath{\mathbb{T}^{1}} \times [-3,-(1-\gamma)] \ensuremath{\subseteq} \ensuremath{\mathbb{T}^{1}} \times [-3,-\ensuremath{\frac{1}{\alpha}})$ (see (\ref{cond:Fmapsover})). Thus, it always suffices to consider (\ref{eq:betapluspn}). 
\medskip Now we can formulate the induction statement we want to prove: \begin{indscheme} \label{thm:indscheme} Suppose the assumptions of Theorem~\ref{thm:snaexistence} are satisfied and (\ref{cond:alpha1}), (\ref{cond:hfunctions1}), (\ref{cond:hfunctions2}) and (\ref{cond:u})--(\ref{cond:alpha3}) hold. Then for any $q \in \N_0$ there holds \begin{list}{\textbf{\Roman{enumi}.} \ \ }{\usecounter{enumi}} \item If $\xi_{l^+_q+1}(\beta,l^-_q) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ then \begin{equation} \label{eq:indstatementI} \xi_j(\beta,l^-_q) \ \geq \ \gamma \ \ \ \forall j \in [-l^-_q,0] \setminus \Omega_\infty \end{equation} and $\beta \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$. \item Suppose $n \in [l^+_q+1,\nu(q+1)]$ is admissible. Then $\xi_n(\beta,l^-_q) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies that (\ref{eq:indstatementI}) holds, \begin{equation} \label{eq:indstatementII} \ \xi_j(\beta,l^-_q) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_n \end{equation} and $\beta \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$. \item Let $1 \leq q_1 \leq q$ and suppose $n_1 \in [l^+_{q_1}+1,\nu(q_1+1)]$ and $n_2 \in [l^+_{q}+1,\nu(q+1)]$ are both admissible. \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item If $q_1 = q$ and $n_1 \in R_{n_2}$, then \begin{equation} \label{eq:indstatementIIIa} |\beta^+_{q_1,n_1} - \beta^+_{q,n_2}| \ \leq \ 2\alpha^{-\frac{n_1}{4}} \ . \end{equation} \item If $q_1 < q$ there holds \begin{equation} \label{eq:indstatementIIIb} |\beta^+_{q_1,n_1} - \beta^+_{q,n_2}| \ \leq \ 3\cdot \hspace{-0.7eM}\sum_{i=q_1+1}^{q} \alpha^{-i}\ \leq \ \alpha^{-q_1} \ . \end{equation} \end{list} \end{list} \end{indscheme} The proof is given in the next subsection. 
The statement of Theorem~\ref{thm:snaexistence} now follows easily, with the help of Lemma~\ref{lem:sinksourceshadowing}: \medskip \noindent \textit{\bf Proof of Theorem \ref{thm:snaexistence}~.} In order to apply Lemma~\ref{lem:sinksourceshadowing} we can use the same sequences $l^\pm_p$ as in Induction Scheme \ref{thm:indscheme}. Further, let $\beta_p := \beta^+_{p,l^+_p+1}$, $\theta_p:=\omega$ and $x_p := \xi_1(\beta_p,l^-_p)$. From Part II of the induction statement with $q=p$ and $n=l^+_p+1$ we obtain that \[ \xi_j(\beta_p,l^-_p) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{l^+_p+1} \ , \] and Lemma~\ref{lem:regsets}(a) implies that \[ \# \left([1,j] \cap R_{l^+_p+1}\right) \ \geq \ \frac{11}{12} \cdot j \ \ \ \ \ \forall j \in [1,l^+_p] \ . \] Therefore it follows from (\ref{cond:Funiformbounds}) and (\ref{cond:Fexpansion}) that \begin{eqnarray*} \lefteqn{\lambda^+(\beta_p,\theta_p,x_p,j) \ = } \\ & = & \frac{1}{j} \sum_{i=1}^{j} \log F'(\xi_i(\beta_p,l^-_p)) \ \geq \ \frac{11}{12} \cdot \frac{\log \alpha}{2} - \frac{2\log \alpha}{12} \ = \ \frac{7}{24} \cdot \log\alpha \ \ \ \ \forall j \in [1,l^+_p] \ . \end{eqnarray*} Likewise, we can conclude from Part I of the induction statement with $q=p$ in combination with (\ref{cond:hfunctions2}), (\ref{cond:Funiformbounds}) and (\ref{cond:Fcontraction}) that \[ \lambda^-(\beta_p,\theta_p,x_p,j) \ \geq \ \frac{7}{24} \cdot \log\alpha \ \ \ \ \forall j \in [1,l^-_p] \ . \] Consequently, the assumptions of Lemma~\ref{lem:sinksourceshadowing} are satisfied, such that there is at least one parameter value at which a sink-source-orbit and consequently an SNA and an SNR occur (see Theorem~\ref{thm:sinksourcesna}). Due to Theorem~\ref{thm:saddlenode}, the only parameter where this is possible is the critical parameter $\beta_0$. Finally, the statement about the essential closure again follows from Theorem~\ref{thm:saddlenode}~. 
{\raggedleft $\Box$ \\} \bigskip \noindent \textit{\bf Proof of Addendum~\ref{adde}.} Define $\beta_p$, $\theta_p$ and $x_p$ as above. From Part III of the induction statement it follows that $(\beta_p)_{p\in\N}$ is a Cauchy-sequence and must therefore converge to $\beta_0$ (instead of only having a convergent subsequence, as in the proof of Lemma~\ref{lem:sinksourceshadowing}). To be more precise, if $p < q$ we have $|\beta_p-\beta_q| \leq \alpha^{-p}$, such that \begin{equation} \label{eq:betapdistance} |\beta_p - \beta_0| \ \leq \ \alpha^{-p} \ \ \ \ \ \forall p \in \N \ . \end{equation} Further, let \[ \theta_0 \ := \ \omega \] and \begin{equation} \label{eq:xplimit} x_0 \ := \ \lim_{p\ensuremath{\rightarrow}\infty} x_p \ . \end{equation} If the limit in (\ref{eq:xplimit}) does not exist,\footnote{In fact it is possible to show that $(x_p)_{p\in\N}$ is a Cauchy-sequence as well, by using Lemma~\ref{lem:orbitscontraction} and Part I of the induction statement. However, we refrain from doing so as this is not relevant for the further argument.} we just go over to a suitable subsequence. From Part II of the induction statement with $q=p$ and $n=l^+_p+1$, it follows that \[ T_{\beta_p,\omega,j-1}(x_p) \ = \ \xi_j(\beta_p,l^-_p) \ \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{l^+_p+1} \ . \] Using that $R_{l^+_p+1} \ensuremath{\subseteq} R_{l^+_q+1} \ \forall q \geq p$ by (\ref{eq:lplusstabilized}) and the continuity of the map $(\beta,x) \mapsto T_{\beta,\omega,j-1}(x)$, we see that \begin{equation} \label{eq:xoinBalpha} T_{\beta_0,\omega,j-1}(x_0) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{l^+_p+1}, p \in \N \ . \end{equation} Now $\varphi^+$ and $\psi$ can be defined pointwise as the upper bounding graph (see (\ref{eq:boundinggraphs})) of the system $T_{\beta_0}$ and by equation (\ref{eq:psidef}), respectively.
Then the fact that \begin{equation} \label{eq:xogeqpsiomega} \psi(\omega) \ \leq \ x_0 \end{equation} holds is obvious: otherwise, the forward orbit of $(\theta_0,x_0)$ would converge to the lower bounding graph $\varphi^-$ and its forward Lyapunov exponent would therefore be negative. On the other hand, suppose \[ \psi(\omega) \leq x_0 - \alpha^{-p} \] for some $p \geq 2$. Then we can compare the orbits \begin{equation} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ x_0 \ensuremath{,\ldots,} T_{\beta_0,\omega_1,l^+_p-1}(x_0) \end{equation} and \begin{equation} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \psi(\omega_1) \ensuremath{,\ldots,} \psi(\omega_{l^+_p}) \end{equation} via Lemma~\ref{lem:orbitsthrowout}(b)% \footnote{We can choose $\epsilon=\frac{\alpha^{-p}}{2}$, such that $q=p$. Note that the error term is zero, as we consider orbits which are located on the same fibres and generated with the same parameter. As $l^+_p+1 \in R_{l^+_p+1}$, \ $x^1_{n+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ follows from (\ref{eq:xoinBalpha}). $\tau(n) \leq \frac{2p-3}{4}$ and $\tau(j) \leq \frac{j}{8}$ follow from Lemma~\ref{lem:regsets}(a), whereas $\tau(n)-\tau(j) \leq \frac{n-j}{6}$ follows from (\ref{eq:etajnestimate}). Finally $n=l^+_p \geq 5p$ by (\ref{eq:lpmestimates}). }
and obtain that $\psi(\omega_{l^+_p+1}) \leq - \frac{2}{\alpha}$. But since we have seen in the proof of Theorem~\ref{thm:saddlenode} that all points below the 0-line eventually converge to the lower bounding graph, this contradicts the definition of $\psi$. Consequently \[ x_0 \ \leq \ \psi(\omega) + \alpha^{-p} \ \ \ \forall p \in \N \ . \] Together with (\ref{eq:xogeqpsiomega}) this implies that $x_0 = \psi(\omega)$. \ \\ As $\psi \leq \varphi^+$, we immediately obtain $x_0 \leq \varphi^+(\omega)$, such that it remains to show \begin{equation} \label{eq:xoabove} x_0 \ \geq \ \varphi^+(\omega) \ .
\end{equation} To that end, we denote the upper boundary lines of the system (\ref{eq:generalsystem}) by $\varphi_n$ if $\beta = \beta_0$ and by $\varphi_{p,n}$ if $\beta=\beta_p$. Now either infinitely many $\beta_p$ are below $\beta_0$, or infinitely many $\beta_p$ are above $\beta_0$. Therefore, by going over to a suitable subsequence if necessary, we can assume w.l.o.g.\ that either $\beta_p \leq \beta_0 \ \forall p \in\N$ or $\beta_p \geq \beta_0 \ \forall p \in \N$. The first case is treated rather easily: If $\beta_p \leq \beta_0$, then \[ x_p \ = \ \xi_1(\beta_p,l^-_p) \ = \ \varphi_{p,l^-_p+1}(\omega) \ \geq \ \varphi_{l^-_p+1}(\omega) \ \geq \ \varphi^+(\omega) \ . \] Passing to the limit $p \ensuremath{\rightarrow} \infty$, this proves (\ref{eq:xoabove}). \ \\ On the other hand, suppose $\beta_p \geq \beta_0$. In this case, we will show that \begin{equation} \label{eq:xodistance} |x_p - \varphi_{l^-_p+1}(\omega)| \ = \ |\xi_1(\beta_p,l^-_p) - \xi_1(\beta_0,l^-_p)| \ \leq \ \alpha^{-p} \cdot \left(6+K\cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}}) \right) \ . \end{equation} As $\varphi_n(\omega) \nKonv \varphi^+(\omega)$ and $x_p \pKonv x_0$, this again proves (\ref{eq:xoabove}). Note that as $\beta_p \geq \beta_0$ we have $\xi_j(\beta_0,l^-_p) \geq \xi_j(\beta_p,l^-_p) \ \forall j \geq -l^-_p$, such that $\xi_j(\beta_p,l^-_p) \geq \gamma$ implies $\xi_j(\beta_0,l^-_p) \geq \gamma$. This allows us to compare the orbits \begin{equation} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_{-l^-_p}(\beta_p,l^-_p) \ensuremath{,\ldots,} \xi_0(\beta_p,l^-_p) \end{equation} and \begin{equation} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_{-l^-_p}(\beta_0,l^-_p) \ensuremath{,\ldots,} \xi_0(\beta_0,l^-_p) \end{equation} via Lemma~\ref{lem:orbitscontraction} \footnote{With $\epsilon = \alpha^{-p}$. We have $|\beta_p-\beta_0| \leq \alpha^{-p}$ by (\ref{eq:betapdistance}), such that $\ensuremath{\textrm{\upshape err}}(\ldots) \leq \epsilon$.
$\eta(j,n) \leq \frac{n+1-j}{10}$ follows from Part I of the induction statement with $q=p$ together with (\ref{cond:hfunctions2}) and $0\notin \Omega_\infty$. Finally $n = l^-_p+1 \geq 4p$ by (\ref{eq:lpmestimates}), such that $\alpha^{-\frac{n}{4}} \leq \epsilon$. } which yields (\ref{eq:xodistance}). {\raggedleft $\Box$ \\} \subsection{Proof of the induction scheme} \label{Onesidedproof} \textbf{Standing assumption:} In this whole subsection, we always assume that the assumptions of the Induction scheme \ref{thm:indscheme} are satisfied. \medskip Before we start the proof of the Induction statement, we provide the following lemma, which will be used in order to obtain estimates on the parameters $\beta^+_{q,n}$: \begin{lem} \label{lem:parameterestimates} Let $n$ be admissible and $\xi_n(\beta_1,l),\xi_n(\beta_2,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. Further, suppose that $\xi_n(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_j(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j \in R_n$. Then \[ |\beta_1-\beta_2| \ \leq \ 2\alpha^{-\frac{n}{4}} \ . \] \end{lem} \textit{Proof.}\ Note that \begin{eqnarray} \lefteqn{\frac{\partial}{\partial \beta} \xi_{j+1}(\beta,l) \ = \ \label{eq:xiderivatives} \frac{\partial}{\partial \beta} \left( F(\xi_j(\beta,l)) - \beta \cdot g(\omega_j) \right)} \\ \nonumber & = & F'(\xi_j(\beta,l)) \cdot \frac{\partial}{\partial \beta} \xi_j(\beta,l) - g(\omega_j) \ \stackrel{(g \geq 0)}{\leq} \ F'(\xi_j(\beta,l)) \cdot \frac{\partial}{\partial \beta} \xi_j(\beta,l)\ . \end{eqnarray} W.l.o.g.\ we can assume $\beta_1 < \beta_2$.
As we have $\frac{\partial}{\partial \beta} \xi_0(\beta,l) \leq -1$, the inductive application of (\ref{eq:xiderivatives}) together with (\ref{cond:Funiformbounds}) and (\ref{cond:Fexpansion}) yields \[ \frac{\partial}{\partial \beta} \xi_n(\beta,l) \ \leq \ -(\alpha^{\ensuremath{\frac{1}{2}}})^{\#([1,n-1]\cap R_n)} \cdot (\alpha^{-2})^{\#( [1,n-1]\setminus R_n)} \ = \ -\alpha^{\ensuremath{\frac{1}{2}}(n-1-5\cdot \#\Gamma_n)} \] as long as $\xi_n(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. (Recall that $[1,n] \setminus R_n = \Gamma_n$ by definition and $n \in R_n$ by assumption.) In particular this is true for all $\beta \in [\beta_1,\beta_2]$. Lemma~\ref{lem:regsets}(a) yields $\# \Gamma_n = \# ([1,n-1] \ensuremath{\setminus} R_n) \leq \frac{n-1}{10}$, such that we obtain \[ \frac{\partial}{\partial \beta} \xi_n(\beta,l) \ \leq \ -\alpha^{\frac{n-1}{4}} \ . \] As this estimate holds for all $\beta \in [\beta_1,\beta_2]$, the mean value theorem yields \[ \frac{2}{\alpha} \ \geq \ |\xi_n(\beta_1,l) - \xi_n(\beta_2,l)| \ \geq \ \alpha^{\frac{n-1}{4}} \cdot |\beta_1-\beta_2| \ , \] and therefore $|\beta_1-\beta_2| \ \leq \ 2\alpha^{-1-\frac{n-1}{4}} \ \leq \ 2\alpha^{-\frac{n}{4}}$. {\raggedleft $\Box$ \\} \bigskip We prove the Induction scheme \ref{thm:indscheme} by induction on $q$, proceeding in six steps. The first one starts the induction: \ \\ \underline{\textbf{Step 1:}} \ \ \ \textit{Proof of the statement for $q=0$.} \\ \ \\ As $d(\ensuremath{\omega_j},0) \geq \frac{4\gamma}{L_2} \ \forall j \in [1,\nu(1)-1]$, Parts I and II of the induction statement are already contained in Lemma~\ref{lem:inductionstart}, and Part III is still void. {\raggedleft \large $\blacksquare$ \\} \ \\ Now let $p \geq 1$ and assume that the statement of Induction scheme \ref{thm:indscheme} holds for all $q \leq p-1$. We have to show that the statement then holds for $p$ as well. The next two steps will prove Part I of the induction statement for $p$. Note that for $p=1$ Part I of the induction statement is still contained in Lemma~\ref{lem:inductionstart} as $l^\pm_1 < \nu(1)$ by (\ref{eq:lpmestimates}).
Therefore, we can assume \begin{equation} \label{eq:pgeqone} p \ \geq 2 \end{equation} during Step 2 and Step 3. \ \\ \underline{\textbf{Step 2:}} \ \ \ \textit{If $|\beta-\beta^+_{p-1,\nu(p)}| \leq \alpha^{-p}$, then $\xi_j(\beta,l^-_p) \geq \gamma \ \forall j \in [-l^-_p,0] \setminus \Omega_\infty$.} \ \\ This is a direct consequence of the following lemma with $q=p$, $l^* = l^-_{p-1}$, $l = l^-_p$, $\beta^* = \beta^+_{p-1,\nu(p)}$, $m=\nu(p)$ and $k=-\nu(p)$.% \footnote{Note that $\nu(p)$ is admissible by Lemma~\ref{lem:endpoints}(d). Therefore $\beta^* = \beta^+_{p-1,\nu(p)} \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$ and $\xi_j(\beta^*,l^-_{p-1}) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j \in R_{\nu(p)}$ follow from Part II of the induction statement with $q=p-1$ and $n=\nu(p)$. \label{foot:step2a} }
Note that $\ensuremath{\tilde{\Omega}}_{p-2} - \nu(p) \ensuremath{\subseteq} \ensuremath{\tilde{\Omega}}_{p-1} \ensuremath{\subseteq} \Omega_\infty$ by Lemma \ref{lem:omegatransition}(b). The statement of the lemma is slightly more general because we also want to use it in similar situations later. Recall that $\ensuremath{\tilde{\Omega}}_{-1} = \ensuremath{\tilde{\Omega}}_0 = \emptyset$, see (\ref{eq:omtilmin}). \begin{lem} \label{lem:approaching} Let $q\geq1$, $l^*,l \geq 0$, $\beta^* \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$ and $|\beta - \beta^*| \leq 2\alpha^{-q}$. Suppose that $m$ is admissible, $p(m) \geq q$ and either $k=0$ or $p(k) \geq q$. Further, suppose \[ \xi_j(\beta^*,l^*) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \forall j \in R_m \ \] and $\xi_{m+k-l^-_q}(\beta,l) \geq \gamma$. Then \[ \{ j \in [m-l^-_q,m] \mid \xi_{j+k}(\beta,l) < \gamma \} \ \ensuremath{\subseteq} \ \ensuremath{\tilde{\Omega}}_{q-2} \ .
\] \end{lem} \ \\ \textit{Proof.}\ We have that $J(m) \in \J$ by Lemma~\ref{lem:admissiblecentral}, such that we can apply Lemma~\ref{lem:endpoints}(b) to $J:=J(m)$. Note that $m=m_J$ and $p_J = p(m) \geq q$ in this case. Let $t := m - l^-_q$. We will show that \[ \{ j \in [t,m] \mid \xi_{j+k}(\beta,l) < \gamma \} \ \ensuremath{\subseteq} \ \bigcup_{t \leq j < m} [\lambda^-(j),\lambda^+(j)+1] \ . \] As $[\lambda^-(j),\lambda^+(j)+1] \ensuremath{\subseteq} \Omega_0(j) \ensuremath{\subseteq} \Omega_{q-2}(j)$ and $Q_{q-2}(j) \leq Q_\infty(j) \leq \max\{0,q-2\} \ \forall j \in [t,m-1]$ (see Lemma \ref{lem:endpoints}(b)), this proves the statement. \ \\ Let $J_1 \ensuremath{,\ldots,} J_r$ be the ordered sequence of intervals $J \in \J_m$ with $J \ensuremath{\subseteq} [t,m]$, such that \[ [t,m] \setminus A_m \ = \ [t,m] \cap \Lambda_m \ = \ \bigcup_{i=1}^r J_i \] (recall that $[1,m] \setminus A_m = \Lambda_m$ by definition). Further, define \[ j^-_i \ := \ \lambda^-(m_{J_i}) \ \ \ \textrm{and } \ \ j^+_i \ := \ \lambda^+(m_{J_i}) \ , \] such that $J_i = [j_i^-,j^+_i]$. We have to show that $\xi_{j+k}(\beta,l) \geq \gamma$ whenever $j$ is contained in $[j^+_i+2,j^-_{i+1}-1]$ for some $i = 1 \ensuremath{,\ldots,} r$ or in $[t,j^-_1-1] \cup [j^+_r+2,m]$, and we will do so by induction on $i$. The case where $j^+_i+1 = j^-_{i+1}-1$, such that $[j^+_i+2,j^-_{i+1}-1]$ is empty, is somewhat special and will be addressed later, so for now we always assume $j^+_i+1 < j^-_{i+1}-1$. \ \\ Let us first see that $\xi_{j^+_i+2+k}(\beta,l) \geq \gamma$ implies $\xi_{j+k}(\beta,l) \geq \gamma$ $\forall j \in [j^+_i+2,j^-_{i+1}-1]$: If $j \in [j^+_i+2,j^-_{i+1}-1]$, then $j \in A_m$. Hence $d(\omega_j,0) \geq \frac{4\gamma}{L_2}$ and therefore \[ d(\omega_{j+k},0) \ \geq \ \frac{4\gamma-\alpha^{-q}}{L_2} \geq \frac{3\gamma}{L_2} \] by (\ref{cond:alphagamma0}) if $q \geq 2$.
In case $q=1$ we obtain the same result, as $\ensuremath{\tilde{\nu}}(1) > l^-_1$ by (\ref{eq:lpmestimates}) then implies that $d(\ensuremath{\omega_j},0) \geq \frac{8\gamma}{L_2} \ \forall j \in [t,m]$. Further % $\beta \in [1,1+\frac{4}{\ensuremath{\sqrt{\alpha}}}]$ as $|\beta-\beta^*| \leq 2\alpha^{-q} \leq \frac{2}{\alpha} \stackrel{(\ref{cond:alphagamma0})}{\leq} \ensuremath{\frac{1}{\sqrt{\alpha}}}$ and % $\beta^* \in [1+\frac{1}{\ensuremath{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}]$. Inductive application of Lemma \ref{lem:basicestimate} therefore yields \[ \xi_{j+k}(\beta,l) \ \geq \gamma \ \ \ \forall j \in [j^+_i+2,j^-_{i+1}-1] \ . \] The same argument also starts and ends the induction: As $\xi_{t+k}(\beta,l) \geq \gamma$ by assumption we get % $\xi_{j+k}(\beta,l) \geq \gamma \ \forall j \in [t,j^-_1-1]$, and for $j \in [j^+_r+2,m]$ this follows from $\xi_{j^+_r+2+k}(\beta,l) \geq \gamma$. \ \\ If $q=1$, then Lemma~\ref{lem:endpoints}(b) yields that $p(j) = 0 \ \forall j \in [t,m]$ and consequently $[t,m] \setminus A_m = \emptyset$, such that we are already finished in this case. Therefore, we can assume from now on that $q \geq 2$. It remains to prove that \begin{equation} \label{eq:approaching} \xi_{j^-_i-1+k}(\beta,l) \geq \gamma \ \ \textrm{ implies } \ \ \xi_{j^+_i+2+k}(\beta,l) \geq \gamma \ . \end{equation} In order to do this, we have to apply Lemma \ref{lem:orbitsthrowout}(a): Let $\epsilon := \alpha^{-(q-1)}$ and choose \begin{equation} \label{step2:reference} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_{j^-_i-1}(\beta^*,l^*) \ensuremath{,\ldots,} \xi_{j^+_i}(\beta^*,l^*) \end{equation} and \begin{equation} \label{step2:new} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_{j^-_i-1+k}(\beta,l) \ensuremath{,\ldots,} \xi_{j^+_i+k}(\beta,l)\ . 
\end{equation} As $d(\omega_k,0) \leq \frac{\alpha^{-(q-1)}}{L_2}$ and $|\beta-\beta^*| \leq 2\alpha^{-q}$ we have $\ensuremath{\textrm{\upshape err}}(\beta_1,\beta_2,\theta_1,\theta_2) \leq K\cdot \epsilon$ by Remark \ref{bem:errorterm}~. Further $x^1_1 = \xi_{j_i^--1}(\beta^*,l^*) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ and $x^1_{n+1} = \xi_{j_i^++1}(\beta^*,l^*) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by assumption (as $j^-_i-1,j^+_i+1 \in A_m \ensuremath{\subseteq} R_m$), whereas $x^2_1 = \xi_{j^-_i-1+k}(\beta,l) \geq \gamma \geq \frac{2}{\alpha}$. Applying Lemma~\ref{lem:regsets}(d) we obtain that \[ \tau(n) = \#([j^-_i-1,j^+_i] \setminus R_m) \leq \#([t,m] \setminus R_m) \stackrel{(\ref{eq:approachfraction})}{\leq} \max\left\{ 0,\frac{2q-5}{4}\right\} \ . \] Finally, we have \[ |\tau(n) - \tau(j)| \leq \#([j_i^+-(n-j)+1,j^+_i] \setminus R_m) \ \leq \ -\sigma\cdot(n-j) \leq \frac{n-j}{6} \] by Lemma \ref{lem:regsets}(c) (with $N=m$, $J=J_i$ and $\lambda^+ = \lambda^+(m_{J_i}) = j^+_i$). Thus all the assumptions of Lemma~\ref{lem:orbitsthrowout} are satisfied and we can conclude that $x_{n+1}^2 = \xi_{j^+_i+1+k}(\beta,l) \geq \frac{2}{\alpha}$. As we have $d(\omega_{j^+_i+1+k},0) \geq \frac{3\gamma}{L_2}$ again, Lemma \ref{lem:basicestimate} now implies $\xi_{j^+_i+2+k}(\beta,l) \geq \gamma$. As mentioned, we still have to address the case where $j^+_i+1 = j^-_{i+1}-1$, such that $[j^+_i+2,j^-_{i+1}-1]$ is empty. In this case we still obtain that $\xi_{j^+_i+1+k}(\beta,l) = \xi_{j^-_{i+1}-1+k}(\beta,l) \geq \frac{2}{\alpha}$. But this is sufficient in order to apply Lemma~\ref{lem:orbitsthrowout}(a) once more, in exactly the same way as above, to conclude that $\xi_{j^+_{i+1}+1+k}(\beta,l) \geq \frac{2}{\alpha}$. Thus in the next step we obtain $\xi_{j^+_{i+1}+2+k}(\beta,l) \geq \gamma$ as before, unless again $j^+_{i+1}+1 = j^-_{i+2}-1$. In any case, the induction can be continued.
{\raggedleft $\Box$ \\} {\raggedleft \large $\blacksquare$ \\} \ \\ \underline{\textbf{Step 3:}} \ \ \ \textit{$\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ \ implies \ $|\beta - \beta^+_{p-1,\nu(p)}| \leq \alpha^{-p}$.} \ \\ Recall that we can assume $p \geq 2$, see (\ref{eq:pgeqone}). Let $\beta^* := \beta^+_{p-1,\nu(p)}$, $\beta^+:=\beta^*-\alpha^{-p}$ and $\beta^- := \beta^*+ \alpha^{-p}$. We prove \begin{claim} \label{claim:step3} \hspace{5eM} $\xi_{l^+_p+1}(\beta^+,l^-_p) > \ensuremath{\frac{1}{\alpha}}$ \ . \end{claim} As $\xi_{l^+_p+1}(\beta^-,l^-_p) < -\ensuremath{\frac{1}{\alpha}}$ follows in exactly the same way, this implies the statement. \ \\ \textit{Proof of the claim:} \\ Using Step 2, we see that \begin{equation} \label{step3:a} \xi_j(\beta^+,l^-_p) \ \geq \ \gamma \ \ \ \ \ \forall j \in [-l^-_p,0] \setminus \Omega_\infty \ . \end{equation} On the other hand, from Part II of the induction statement with $q=p-1$ and $n=\nu(p)$ it follows that% \footnote{Note that $\nu(p)$ is admissible by Lemma~\ref{lem:endpoints}(d) and $\xi_{\nu(p)}(\beta^*,l^-_{p-1}) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by definition of $\beta^*= \beta^+_{p-1,\nu(p)}$. \label{foot:step3a} }
\begin{equation} \label{step3:b} \xi_j(\beta^*,l^-_{p-1}) \ \geq \ \gamma \ \ \ \ \ \forall j \in [-l^-_{p-1},0] \setminus \Omega_\infty \end{equation} and \begin{equation} \label{step3:c} \xi_j(\beta^*,l^-_{p-1}) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{\nu(p)} \ .
\end{equation} Thus we can use Lemma \ref{lem:orbitscontraction} with $\epsilon = \alpha^{-p}$ to compare the sequences \begin{equation} \label{step3:referenceone} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_{-l^-_{p-1}}(\beta^*,l^-_{p-1}) \ensuremath{,\ldots,} \xi_{-1}(\beta^*,l^-_{p-1}) \end{equation} and \begin{equation} \label{step3:new} x^2_1 \ensuremath{,\ldots,} x^2_n := \xi_{-l^-_{p-1}}(\beta^+,l^-_p) \ensuremath{,\ldots,} \xi_{-1}(\beta^+,l^-_p) \end{equation} and obtain that% \footnote{As the two orbits lie on the same fibres and $\beta^*-\beta^+ = \alpha^{-p}$, we have $\ensuremath{\textrm{\upshape err}}(\ldots) \leq K \cdot \epsilon$, see Remark~\ref{bem:errorterm}~. Further, by (\ref{step3:a}) and (\ref{step3:b}) we have $\eta(j,n) \leq \#([-(n-j),-1] \cap \Omega_\infty) \leq \frac{n+1-j}{10}$ by (\ref{cond:hfunctions2}). Finally $n = l^-_{p-1} \geq 4p$ by (\ref{eq:lpmestimates}).\label{foot:step3b} }
\[ |\xi_0(\beta^+,l^-_p) - \xi_0(\beta^*,l^-_{p-1})| \ \leq \ \alpha^{-p} \cdot (6 + K\cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}})) \ . \] Note that (\ref{step3:a}) and (\ref{step3:b}) in particular imply that both $\xi_0(\beta^*,l^-_{p-1}) \geq \gamma$ and $\xi_0(\beta^+,l^-_p) \geq \gamma$. Therefore we can use (\ref{cond:Fcontraction}) to obtain \begin{eqnarray*} \xi_1(\beta^+,l^-_p) & \geq & \xi_1(\beta^*,l^-_{p-1}) + (\beta^*-\beta^+) - \alpha^{-p} \cdot \frac{6+K\cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}})}{2\ensuremath{\sqrt{\alpha}}} \\ & \stackrel{(\ref{cond:alpha3})}{\geq} & \xi_1(\beta^*,l^-_{p-1}) + \frac{\alpha^{-p}}{2} \ .
\end{eqnarray*} Now we compare \begin{equation} \label{step3:reference2} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_{1}(\beta^*,l^-_{p-1}) \ensuremath{,\ldots,} \xi_{l^+_p}(\beta^*,l^-_{p-1}) \end{equation} and \begin{equation} \label{step3:new2} x^2_1 \ensuremath{,\ldots,} x^2_n := \xi_1(\beta^+,l^-_p) \ensuremath{,\ldots,} \xi_{l^+_p}(\beta^+,l^-_p) \end{equation} via Lemma \ref{lem:orbitsthrowout}(b)% \footnote{\label{footnote2}Again, the assumptions of the lemma with $\epsilon = \alpha^{-p}$ are all satisfied: We have $\ensuremath{\textrm{\upshape err}}(\ldots) \leq K\cdot\epsilon$ as before. (\ref{step3:c}) together with Lemma~\ref{lem:regsets}(a) implies \[ \tau(n) \ \leq \ \#([1,l^+_p] \setminus R_{\nu(p)}) \ \leq \frac{2p-3}{4} \] and similarly $\tau(j) \leq \frac{j}{8}$. As $l^+_p+1 \in R_{\nu(p)}$ by (\ref{eq:lpminrn}) we also have $x^1_{n+1} = \xi_{l^+_p+1}(\beta^*,l^-_{p-1}) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. Further $\tau(n)-\tau(j) \leq \frac{n-j}{6}$ follows from (\ref{eq:etajnestimate}), and $n=l^+_p \geq 5p$ by (\ref{eq:lpmestimates}). \label{foot:step3c}}
and obtain that $\xi_{l^+_p+1}(\beta^+,l^-_p) = x^2_{n+1} \geq \frac{2}{\alpha}$. {\raggedleft $\Box$ \\} {\raggedleft \large $\blacksquare$ \\} \ \\ Steps 2 and 3 together prove Part I of the induction statement for $q=p$, apart from $\beta \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$ whenever $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. This will be postponed until after the next step. However, Step~3 implies the slightly weaker estimate \[ \beta \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}}-\alpha^{-p},1+\frac{3}{\ensuremath{\sqrt{\alpha}}} + \alpha^{-p}\right] \ .
\] (Note that as $\nu(p)$ is admissible, the induction statement can be applied to $q=p-1$ and $n=\nu(p)$, such that $\beta^+_{p-1,\nu(p)} \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$.) This will be sufficient for the time being. The next three steps will prove Parts II and III of the induction statement for $q=p$. In order to do so we will proceed by induction on $n \in [l^+_p+1,\nu(p+1)]$. The next step starts the induction by showing Part II for $n=l^+_p+1$. \ \\ \underline{\textbf{Step 4:}} \ \ \ \textit{$\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ \ implies \ $\xi_j(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j \in R_{l^+_p+1} $} \ \\ Assume that $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. As we are in the case of one-sided forcing, $\xi_j(\beta,l^-_p) \leq -\ensuremath{\frac{1}{\alpha}}$ for any $j \in [1,l^+_p]$ implies $\xi_{l^+_p+1}(\beta,l^-_p) \leq -\ensuremath{\frac{1}{\alpha}}$ (compare the discussion below (\ref{eq:betaminpn})). Therefore, it suffices to show that for any $j \in R_{l^+_p+1} \setminus \{l^+_p+1\}$ \begin{equation} \label{step4:a} \xi_j(\beta,l^-_p) \ \geq \ \ensuremath{\frac{1}{\alpha}} \ \ \ \ \textit{ implies } \ \ \ \ \xi_{l^+_p+1}(\beta,l^-_p) \ \geq \ \ensuremath{\frac{1}{\alpha}} \ . \end{equation} Using the two claims below, this can be done as follows: Suppose $j \in R_{l^+_p+1}$ and $\xi_j(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$. Then $d(\ensuremath{\omega_j},0) \geq \frac{3\gamma}{L_2}$ by Lemma~\ref{lem:regularsetsbasic}(b), such that Lemma~\ref{lem:basicestimate} implies $\xi_{j+1}(\beta,l^-_p) \geq \gamma \geq \frac{2}{\alpha}$. Therefore (\ref{step4:a}) follows directly from Claim~\ref{claim:step4a} with $k=j+1$, provided $j+1 \in R_{l^+_p+1}$.
On the other hand, if $j+1 \in \Gamma_{l^+_p+1}$ then Claim~\ref{claim:step4b} (with $k=j$) yields the existence of a suitable $\tilde{k}$, such that (\ref{step4:a}) follows again from Claim~\ref{claim:step4a}. As % $R_{l^+_p+1} \cup \Gamma_{l^+_p+1} = [1,l^+_p+1]$, this covers all possible cases. \begin{claim} \label{claim:step4a} Suppose $\xi_k(\beta,l^-_p) \geq \frac{2}{\alpha}$ for some $k \in R_{l^+_p+1}$. Then $\xi_{l^+_p+1}(\beta,l^-_p) > \ensuremath{\frac{1}{\alpha}}$. \end{claim} \textit{Proof.}\ Let $\beta^* := \beta^+_{p-1,\nu(p)}$ as in Step~3. Note that $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $|\beta - \beta^*| \leq \alpha^{-p}$ by Step 3. Further, we can again apply Part II of the induction statement to $q=p-1$ and $n=\nu(p)$. As $R_{l^+_p+1} \ensuremath{\subseteq} R_{\nu(p)}$ (see (\ref{eq:lplusstabilized})) we obtain \begin{equation} \label{step4:b} \xi_j(\beta^*,l^-_{p-1}) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{l^+_p+1} \ . \end{equation} The claim now follows from Lemma~\ref{lem:orbitsthrowout}(a), which we apply to compare the orbits% \footnote{We choose $\epsilon = \alpha^{-p}$. $\ensuremath{\textrm{\upshape err}}(\ldots) \leq K \cdot \epsilon$ follows from $|\beta-\beta^*| \leq \alpha^{-p}$. As $k\in R_{l^+_p+1}$ by assumption and $l^+_p+1 \in R_{l^+_p+1}$ by (\ref{eq:lpminrn}), we have $x^1_1,x^1_{n+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by (\ref{step4:b}). 
Finally $\tau(n) \leq \max\{0,\frac{2p-3}{4}\}$ and $\tau(n) - \tau(j) \leq \frac{n-j}{6}$ follow from Lemma~\ref{lem:regsets}(a) and (\ref{eq:etajnestimate}).\label{foot:step4a}} \begin{equation} \label{step4:referenceone} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_k(\beta^*,l^-_{p-1}) \ensuremath{,\ldots,} \xi_{l^+_p}(\beta^*,l^-_{p-1}) \end{equation} and \begin{equation} \label{step4:newone} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_k(\beta,l^-_p) \ensuremath{,\ldots,} \xi_{l^+_p}(\beta,l^-_p) \ . \end{equation} Thus we obtain $\xi_{l^+_p+1}(\beta,l^-_p) = x^2_{n+1} \geq \frac{2}{\alpha}$. {\raggedleft $\Box$ \\} \begin{claim} \label{claim:step4b} Suppose $k\in R_{l^+_p+1},\ k+1\in \Gamma_{l^+_p+1}$ and $\xi_k(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$. Then there exists some $\tilde{k} \in R_{l^+_p+1}$ with $\xi_{\tilde{k}}(\beta,l^-_p) \geq \frac{2}{\alpha}$. \end{claim} \textit{Proof.}\ First of all, note that $\Gamma_{l^+_p+1} = \Gamma_{l^+_p}$ by (\ref{eq:gammalqplus}) and $\Gamma_{l^+_p} = \bigcup_{J\in\J_{l^+_p}} \Gamma(J)$ by (\ref{eq:gammanequality}). Therefore, there must be some $J_1 \in \J_{l^+_p}$ such that $k+1 \in \Gamma(J_1)$. Let $m_1 := m_{J_1}$ and $p_1 := p(m_1)$. As $\Gamma(J_1)=J^-_1 \cup \Gamma^+(J_1)$, we have two possibilities: Either $k+1 \in J^-_1$, which means that $k+1=\lambda^-(m_1)$ (as $J^-_1 = [\lambda^-(m_1),m_1]$ is an interval and we assumed $k \in R_{l^+_p+1}$). In this case define $m=m_1$ and $t=0$. The other alternative is that $k+1 \in \Gamma^+(J_1)$, and in this case we have to `\textit{go backwards through the recursive structure of the set $R_{l^+_p+1}$}', until we arrive at the first alternative: As $\Gamma^+(J_1) = \Gamma_{l^+_{p_1}} + m_1$ by (\ref{eq:recursivegammas}), $k+1 \in \Gamma^+(J_1)$ means that $k-m_1+1 \in \Gamma_{l^+_{p_1}}$. Hence, as before, there exists some $J_2 \in \J_{l^+_{p_1}}$ such that either $k-m_1+1 = \lambda^-(m_{J_2})$ or $k-m_1+1 \in \Gamma^+(J_2)$.
Let $m_2 := m_{J_2}$ and $p_2 := p(m_2)$. If we are in the second case where $k-m_1+1 \in \Gamma^+(J_2)$ we continue like this, but after finitely many steps the procedure will stop and we arrive at the first alternative. This is true because in each step the $p_i$ become smaller, more precisely $p_{i+1} \leq p_i-3$,% \footnote{Note that there is no $J \in \J_{l^+_{p_i}}$ with $p_J \geq p_i-2$ by (\ref{eq:lpmestimates}). \label{foot:step4b} }
and finally $\Gamma_{l^+_1}$ is empty. Thus we obtain two sequences $p_1 > \ldots > p_r \geq 0$ and $m_1 > \ldots > m_r$ with $p_i = p(m_i)$, such that $k-\sum_{i=1}^{r-1} m_i + 1 = \lambda^-(m_r)$ for some $r \in \N$. Let $m := m_r$ and $t := \sum_{i=1}^{r-1} m_i$, such that $p_r=p(m)$ and $k+1-t = \lambda^-(m)$. Note that for $r=1$ this coincides with the above definitions of $m$ and $t$ in the first case. We have \begin{eqnarray} d(\omega_t,0) & \leq & \sum_{i=1}^{r-1} d(\omega_{m_i},0) \nonumber \ \leq \ \sum_{i=1}^{r-1} \frac{\alpha^{-(p_i-1)}}{L_2} \\ & \leq & \frac{\alpha^{-(p(m)+2)} \cdot S_\infty(\alpha)}{L_2} \ \stackrel{(\ref{cond:alpha4})}{\leq} \ \ensuremath{\frac{1}{4}} \cdot \frac{\alpha^{-(p(m)+1)}}{L_2} \ . \label{eq:domegat} \end{eqnarray} Now choose some $q' \geq p(m) \geq 1$ such that $l^+_{q'}+1 \leq m \leq \nu(q'+1)$. This is possible as $m \geq \nu(p(m)) \geq l^+_{p(m)}+1$, and because the intervals $[l^+_q+1,\nu(q+1)]$ overlap by (\ref{eq:lpmestimates}). In addition, we can choose $q' < p-1$ as $m \leq l^+_p+1 < \nu(p-1)$. We now want to apply Lemma~\ref{lem:approaching} with $\beta^* := \beta^+_{q',m}$, $q=p(m)$, $l^* = l^-_{q'}$, $l=l^-_p$ and $k=t$.
Note that we can apply Part II of the induction statement with $q=q'$ and $n=m$ to obtain that $\beta^* \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$, \begin{equation} \label{step4:c} \xi_j(\beta^*,l^-_{q'}) \ \geq \ \gamma \ \ \ \ \ \forall j \in [-l^-_{q'},0] \setminus \Omega_\infty \end{equation} and \begin{equation} \label{step4:d} \xi_j(\beta^*,l^-_{q'}) \ \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_m \ . \end{equation} Further, Part III of the induction statement% \footnote{With $q_1 = q'$, $q=p-1$, $n_1=m$ and $n_2=\nu(p)$. \label{foot:step4d} }
together with Step 3 imply that \begin{equation} \label{step4:e} |\beta-\beta^*| \ \leq \ |\beta - \beta^+_{p-1,\nu(p)}| + |\beta^+_{p-1,\nu(p)} - \beta^*| \ \leq \ \alpha^{-p} + \alpha^{-q'} \ \leq \ 2\alpha^{-q'} \ \end{equation} and finally $\xi_{k+1}(\beta,l^-_p) \geq \gamma$ if $\xi_k(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$ by Lemma~\ref{lem:basicestimate}~. Thus Lemma \ref{lem:approaching} yields \begin{equation} \label{step4:f} \{ j \in [\lambda^-(m),m] \mid \xi_{j+t}(\beta,l^-_p) < \gamma \} \ \ensuremath{\subseteq} \ \ensuremath{\tilde{\Omega}}_{p(m)-2} \ . \end{equation} Consequently (Lemma~\ref{lem:omegatransition}(b)) \begin{equation} \label{step4:g} \{ j \in [-l^-_{p(m)},0] \mid \xi_{j+m+t}(\beta,l^-_p) < \gamma \} \ \ensuremath{\subseteq} \ \Omega_\infty \ .
\end{equation} This means that we can compare the two sequences \begin{equation} \label{step4:referencetwo} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_{-l^-_{p(m)}}(\beta^*,l^-_{q'}) \ensuremath{,\ldots,} \xi_{-1}(\beta^*,l^-_{q'}) \end{equation} and \begin{equation} \label{step4:newtwo} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_{m+t-l^-_{p(m)}}(\beta,l^-_p) \ensuremath{,\ldots,} \xi_{m+t-1}(\beta,l^-_p) \end{equation} via Lemma~\ref{lem:orbitscontraction} with $\epsilon \ := \ L_2 \cdot d(\ensuremath{\omega_m},0) \ \in \ (\alpha^{-p(m)},\alpha^{-(p(m)-1)}]$ to obtain that% \footnote{We have $q=p(m)-1$. Note that $d(\omega_{m+t},0) \leq \frac{2\epsilon}{L_2}$ (see (\ref{eq:domegat})) and $|\beta-\beta^*| \leq 2\alpha^{-q'} \leq 2\epsilon$ by (\ref{step4:e}), such that $\ensuremath{\textrm{\upshape err}}(\ldots) \leq K \cdot \epsilon$ by Remark~\ref{bem:errorterm}~. Further, it follows from (\ref{step4:c}) and (\ref{step4:g}) that $\eta(j,n) \leq \#([-(n-j),-1] \cap \Omega_\infty) \leq \frac{n-j}{10}$ (see (\ref{cond:hfunctions2})). Finally $n=l^-_{p(m)} \geq 4p(m)$ by (\ref{eq:lpmestimates}), such that $\alpha^{-\ensuremath{\frac{1}{4}} n} \leq \epsilon$.\label{foot:step4f}}
\begin{equation} \label{step4:h} |\xi_{m+t}(\beta,l^-_p) - \xi_0(\beta^*,l^-_{q'})| \ \leq \ \epsilon \cdot (6+K\cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}})) \ . \end{equation} As $d(\omega_{m+t},0) \geq \frac{3}{4} \cdot \frac{\epsilon}{L_2}$ (see (\ref{eq:domegat})), it follows from (\ref{cond:Fcontraction}) and (\ref{cond:sharppeak}) that \begin{eqnarray} \lefteqn{\xi_{m+t+1}(\beta,l^-_p) \ \geq } \nonumber \\ & & \xi_1(\beta^*,l^-_{q'}) + \frac{3\epsilon}{4} - \epsilon \cdot \frac{6 + K \cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}})}{2\ensuremath{\sqrt{\alpha}}} \ \stackrel{(\ref{cond:alpha3})}{\geq} \ \xi_1(\beta^*,l^-_{q'}) + \frac{\epsilon}{2} \ .
\label{step4:crucial} \end{eqnarray} Now first assume $p(m) \geq 2$, such that $\epsilon \leq \ensuremath{\frac{1}{\alpha}}$. (The case $p(m)=1$ has to be treated separately, see below.) Then we can apply Lemma \ref{lem:orbitsthrowout}(b) to compare the orbits \begin{equation} \label{step4:i} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_1(\beta^*,l^-_{q'}) \ensuremath{,\ldots,} \xi_{l^+_{p(m)}}(\beta^*,l^-_{q'}) \end{equation} and \begin{equation} \label{step4:j} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_{m+t+1}(\beta,l^-_p) \ensuremath{,\ldots,} \xi_{m+t+l^+_{p(m)}}(\beta,l^-_p) \end{equation} to conclude that\footnote{We have $q=p(m)-1$ and $\ensuremath{\textrm{\upshape err}}(\ldots) \leq K\cdot\epsilon$ as before (see Footnote~\ref{foot:step4f}). (\ref{step4:d}) yields that $x^1_{n+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ (note that $l^+_q+1 \in R_m$ by (\ref{eq:lpminrn})). Further, we have \[ \tau(n) \ \leq \ \#\left([1,l^+_{p(m)}] \setminus R_m\right) \ \leq \ \max\left\{0,\frac{2q-3}{4}\right\} \] and $\tau(j) \leq \#([1,j] \setminus R_m) \leq \frac{j}{8}$ by Lemma~\ref{lem:regsets}(a). $\tau(n)-\tau(j) \leq \frac{n-j}{6}$ follows again from (\ref{eq:etajnestimate}), and finally $n = l^+_{p(m)} \geq 5(p(m)-1)$.\label{foot:step4g}} \begin{equation} \label{step4:k} \xi_{m+t+l^+_{p(m)}+1}(\beta,l^-_p) \ \geq \ \frac{2}{\alpha} \ . \end{equation} As $J_r = J(m)$ is a maximal interval in $\Gamma_{l^+_{p_{r-1}}}$ we have $\lambda^+(m)+1 \in R_{l^+_{p_{r-1}}}$. Therefore $\lambda^+(m)+1+t \in R_{l^+_p+1}$ follows from the recursive structure of this set. Consequently, we can choose $\tilde{k} = m+t+l^+_{p(m)}+1$. \ \\ Finally, suppose $p(m)=1$. In this case we still have $\xi_{m+t+1}(\beta,l^-_p) \geq \xi_1(\beta^*,l^-_{q'}) + \frac{\epsilon}{2}$ by (\ref{step4:crucial}). There are two possibilities: Either $\xi_{m+t+1}(\beta,l^-_p) \geq \frac{2}{\alpha}$.
As $m+1 \in R_{l^+_{p_{r-1}}}$ (see Remark~\ref{bem:regularsets}(c)), we have $m+t+1 \in R_{l^+_p+1}$ due to the recursive structure of this set. Thus, we can choose $\tilde{k} = m+t+1$. On the other hand, if $\xi_{m+t+1}(\beta,l^-_p) \in B_{\frac{2}{\alpha}}(0)$ then we can apply (\ref{cond:Fcontraction}) again and obtain \[ \xi_{m+t+2}(\beta,l^-_p) \ \geq \ \xi_2(\beta^*,l^-_{q'}) + 2\ensuremath{\sqrt{\alpha}}\cdot\epsilon - K \epsilon \ \stackrel{(\ref{cond:alpha1})}{\geq} \ \xi_2(\beta^*,l^-_{q'}) + \ensuremath{\sqrt{\alpha}}\cdot\epsilon \geq \frac{1}{\alpha} \] by (\ref{cond:alphagamma0}), as $\xi_2(\beta^*,l^-_{q'}) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by (\ref{step4:d}) and $\epsilon \geq \ensuremath{\frac{1}{\alpha}}$. Thus, we can choose $\tilde{k} = m+t+2$ in this case. Note that $m+t+2$ is contained in $R_{l^+_p+1}$ for the same reasons as $m+t+1$. {\raggedleft $\Box$ \\} {\raggedleft \large $\blacksquare$ \\} \ \\ Now we can show that $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\beta \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$ and thus complete the proof of Part I of the induction statement: Suppose $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. By Steps~2 and 3 we know that $\xi_0(\beta,l^-_p) \geq \gamma$. This implies that \[ \xi_1(\beta,l^-_p) \ \in \ \left[1+\frac{3}{2\ensuremath{\sqrt{\alpha}}}-\beta,1+\frac{3}{\ensuremath{\sqrt{\alpha}}}-\ensuremath{\frac{1}{\alpha}}-\beta\right] \] (see assumptions (\ref{cond:Ffixedpoints}) and (\ref{cond:Fcontraction})). As Step 4 implies that $\xi_1(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$, and $\frac{1}{2\ensuremath{\sqrt{\alpha}}} \geq \ensuremath{\frac{1}{\alpha}}$ by (\ref{cond:alpha1}), this gives the required estimate.
\ \\ \ \\ \underline{\textbf{Step 5:}} \ \ \ \textit{Part II of the induction statement implies Part III} \ \\ Actually, the situation is a little bit more complicated than the headline above may suggest. In fact, both remaining parts of the induction statement have to be proved simultaneously by induction on $n$. However, in each step of the induction Part II will imply Part III. In order to make this more precise, assume that Part II with $q=p$ holds for all $n\leq N$, with $N \in [l^+_p+1,\nu(p+1)]$. What we will now show is that in this case Part III(a) holds as well whenever $n_1,n_2 \leq N$, and similarly Part III(b) holds whenever $n_2 \leq N$. \ \\ Suppose that $N \in [l^+_p+1,\nu(p+1)]$ and Part II with $q=p$ holds for all $n \leq N$. Further, let $n_2 \leq N$ and $n_1 \in R_{n_2}$. Then Part II of the induction statement applied to $q=p$ and $n=n_2$ yields that $\xi_{n_1}(\beta^+_{q,n_2},l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$, and for $n=n_1$ we obtain that $\xi_{n_1}(\beta,l^-_p)\in\ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_j(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j \in R_{n_1}$. Thus all the assumptions of Lemma~\ref{lem:parameterestimates} (with $n=n_1$, $\beta_1=\beta^+_{p,n_1}$ and $\beta_2=\beta^+_{p,n_2}$) are satisfied, such that \[ |\beta^+_{p,n_1} - \beta^+_{p,n_2}| \ \leq \ 2\alpha^{-\frac{n_1}{4}} \] as required. \ \\ For Part III(b) let $q_1 < p$, $n_1 \in [l^+_{q_1}+1,\nu(q_1+1)]$ and $n_2 \in [l^+_p+1,N]$. First suppose $q_1 < p-1$. Then Part III(b) of the induction statement (with $q=p-1$ and $n_2 = \nu(p)$) yields \[ |\beta^+_{q_1,n_1} - \beta^+_{p-1,\nu(p)}| \ \leq \ 3 \cdot \sum_{i=q_1+1}^{p-1} \alpha^{-i} \ .
\] Further, Part II of the induction statement (with $q=p$ and $n=n_2$) yields that $\xi_{l^+_p+1}(\beta^+_{p,n_2},l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ (note that $l^+_p+1 \in R_{n_2}$ by (\ref{eq:lpminrn})), and consequently \[ | \beta^+_{p-1,\nu(p)} - \beta^+_{p,n_2}| \ \leq \ \alpha^{-p} \] by Step 3. Altogether, we obtain \[ |\beta^+_{q_1,n_1} - \beta^+_{p,n_2}| \ \leq \ |\beta^+_{q_1,n_1} - \beta^+_{p-1,\nu(p)}| + | \beta^+_{p-1,\nu(p)} - \beta^+_{p,n_2}| \ \leq \ 3\cdot \sum_{i=q_1+1}^p \alpha^{-i} \ . \] On the other hand, if $q_1=p-1$ then Part III(a) (with $q=q_1=p-1$ and $n_2=\nu(p)$) in combination with $n_1 \geq l^+_{p-1} \geq 4p$ (see (\ref{eq:lpmestimates})) yields \[ |\beta^+_{q_1,n_1} - \beta^+_{p-1,\nu(p)}| \ \leq \ 2\alpha^{-\frac{n_1}{4}} \ \leq \ 2\alpha^{-p} \ , \] such that \[ |\beta^+_{q_1,n_1} - \beta^+_{p,n_2}| \ \leq \ |\beta^+_{q_1,n_1} - \beta^+_{p-1,\nu(p)}| + | \beta^+_{p-1,\nu(p)} - \beta^+_{p,n_2}| \ \leq \ 3\alpha^{-p} \] as required. Finally, note that \[ 3\cdot \sum_{i=q_1+1}^p \alpha^{-i} \ \leq \ \frac{3S_\infty(\alpha)}{\alpha} \cdot \alpha^{-q_1} \ \leq \ \alpha^{-q_1} \] by (\ref{cond:alpha4}). {\raggedleft \large $\blacksquare$ \\} \ \\ Now we can already use the parameter estimates up to $N$ (in the way mentioned above) during the induction step $N \ensuremath{\rightarrow} N+1$ in the proof of Part II. \ \\ \underline{\textbf{Step 6:}} \ \ \ \textit{Proof of Part II for $q=p$.} \ \\ In order to prove Part II of the induction statement for $q=p$, we will proceed by induction on $n$. Steps 2--4 show that the statement holds for $n=l^+_p+1$. Suppose now that it holds for all $n\leq N$, where $N\in[l^+_p+1,\nu(p+1)-1]$. We have to show that it then holds for $N+1$ as well. In order to do so, we distinguish three different cases: First, if $N+1$ is not admissible there is nothing to prove. Secondly, if both $N$ and $N+1$ are admissible then necessarily $p(N)=0$, otherwise $N+1$ would be contained in $J(N)$.
Thus $d(\omega_N,0) \geq \frac{4\gamma}{L_2}$, and in addition Part II of the induction statement with $q=p$ and $n=N$ implies that $\beta^\pm_{p,N} \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$. Therefore Lemma~\ref{lem:basicestimate} yields that \[ \xi_{N+1}(\beta^+_{p,N},l^-_p) \ > \ \ensuremath{\frac{1}{\alpha}} \] and \[ \xi_{N+1}(\beta^-_{p,N},l^-_p) \ < \ -\ensuremath{\frac{1}{\alpha}} \ . \] Consequently $\xi_{N+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies that $\xi_N(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$, and everything else follows from Part II of the induction statement for $n=N$. Thus, it remains to treat the case where $N+1$ is admissible but $N \notin A_N$. By (\ref{eq:aninequality}) this also means that $N \notin A_{N+1}$. Consequently there exists an interval $J \in \J_{N+1}$ which contains $N$, such that $J = [t,N]$ where $t:=\lambda^-(m_J)$. Note that $t-1,t,m_J \in A_{m_J}$ by Lemma \ref{lem:endpoints}(a). In particular $m_J$ and $t-1$ are admissible. First of all, we will prove the following claim. \begin{claim} \label{claim:step6a} $\xi_{N+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_{t-1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. \end{claim} \textit{Proof.}\ It suffices to show that \begin{equation} \label{step6:a} \xi_{N+1}(\beta^+_{p,t-1},l^-_p) \ > \ \ensuremath{\frac{1}{\alpha}} \end{equation} (see (\ref{eq:betapluspndef})--(\ref{eq:betaminpn})). Let $m:=m_J$, $\beta^+:= \beta^+_{p,t-1}$ and $\beta^*:=\beta^+_{p,m}$. Using Part II of the induction statement (with $q=p$ and $n=m$) we obtain \begin{equation} \label{step6:b} \xi_j(\beta^*,l^-_p) \ \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \forall j \in R_m \end{equation} and $\beta^* \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$. 
Further, the same statement with $n=t-1$ implies \begin{equation} \label{step6:c} \xi_j(\beta^+,l^-_p) \ \geq \ \gamma \ \ \ \forall j \in [-l^-_p,0] \setminus \Omega_\infty \end{equation} and \begin{equation} \label{step6:d} \xi_j(\beta^+,l^-_p) \ \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \forall j \in R_{t-1} \ . \end{equation} Finally Part III(a) of the induction statement (with $q=p$, $n_1=t-1$ and $n_2=m$) yields that \begin{equation} \label{step6:e} |\beta^+ - \beta^*| \ \leq \ \alpha^{-\frac{(t-1)}{4}} \ \leq \ \alpha^{-\frac{l^+_p}{4}} \ \stackrel{(\ref{eq:lpmestimates})}{\leq} \ \alpha^{-(p+1)} \ . \end{equation} Note that $t-1$ is contained in $\Omega_\infty(m)$ by (\ref{eq:jminominfty}), and as $l^+_p+1 \notin \Omega_\infty$ this interval must be to the right of $l^+_p+1$. Therefore $t-1 > l^+_p+1$. Now all the assumptions for the application of Lemma~\ref{lem:approaching} are satisfied% \footnote{With $\beta^*$ and $m$ as above, $q= p(m) \ (\leq p)$, $l=l^*=l^-_p$, $k=0$ and $\beta=\beta^+$. Note that $p(t-1) = 0$ as $t-1$ is admissible, and $\xi_{t-1}(\beta^+,l^-_p) = \ensuremath{\frac{1}{\alpha}}$ by definition of $\beta^+ = \beta^+_{p,t-1}$. Therefore Lemma~\ref{lem:basicestimate} implies $\xi_{m-l^-_{p(m)}}(\beta^+,l^-_p) = \xi_{t}(\beta^+,l^-_p) \geq \gamma$. \label{foot:step6a} } and we obtain \begin{equation} \label{step6:f} \{ j \in [t,m] \mid \xi_j(\beta^+,l^-_p) < \gamma \} \ \ensuremath{\subseteq} \ \ensuremath{\tilde{\Omega}}_{p(m)-2} \ . \end{equation} Using Lemma~\ref{lem:omegatransition}(b) this further means that \begin{equation} \label{step6:g} \{ j \in [-l^-_{p(m)},0] \mid \xi_{j+m}(\beta^+,l^-_p) < \gamma \} \ \ensuremath{\subseteq} \ \Omega_\infty \ . 
\end{equation} Now we compare the orbits \begin{equation} \label{step6:referenceone} x_1^1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_{-l^-_{p(m)}}(\beta^+,l^-_p) \ensuremath{,\ldots,} \xi_{-1}(\beta^+,l^-_p) \end{equation} and \begin{equation} \label{step6:newone} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_t(\beta^+,l^-_p) \ensuremath{,\ldots,} \xi_{m-1}(\beta^+,l^-_p) \ , \end{equation} using Lemma~\ref{lem:orbitscontraction} with $\epsilon := L_2 \cdot d(\omega_m,0) \in [\alpha^{-p(m)},\alpha^{-(p(m)-1)})$, to conclude that% \footnote{As $\beta_1=\beta_2=\beta^+$ we have $\ensuremath{\textrm{\upshape err}}(\ldots) \leq K\epsilon$, see Remark~\ref{bem:errorterm}. (\ref{step6:c}) and (\ref{step6:g}) imply that \[ \eta(j,n) \ \leq \ \#([-(n-j),-1]\cap \Omega_\infty) \ \leq \ \frac{n-j}{10} \] by (\ref{cond:hfunctions2}). Finally $n=l^-_{p(m)} \geq 4(p(m)+1)$ by (\ref{eq:lpmestimates}), such that $\alpha^{-\frac{n}{4}} \leq \alpha^{-(p(m)+1)} \leq \epsilon$. \label{foot:step6b} } \[ |\xi_m(\beta^+,l^-_p) - \xi_0(\beta^+,l^-_p)| \ \leq \ \epsilon \cdot (6+K\cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}})) \ . \] As (\ref{step6:c}) and (\ref{step6:g}) in particular imply that $\xi_m(\beta^+,l^-_p),\xi_0(\beta^+,l^-_p) \geq \gamma$, it follows from (\ref{cond:Fcontraction}) and (\ref{cond:sharppeak}) that \begin{eqnarray} \nonumber \lefteqn{\xi_{m + 1}(\beta^+,l^-_p) \ \geq} \\ & \geq & \xi_1(\beta^+,l^-_p) + \epsilon - \frac{\epsilon \cdot (6+K\cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}}))}{2\ensuremath{\sqrt{\alpha}}} \ \stackrel{(\ref{cond:alpha3})}{\geq} \xi_1(\beta^+,l^-_p) + \frac{\epsilon}{2} \ . \end{eqnarray} Now first assume $p(m) \geq 2$, such that $d(\ensuremath{\omega_m},0) \leq \frac{\alpha^{-1}}{L_2}$. Then we can apply Lemma~\ref{lem:orbitsthrowout}(b)% \footnote{\label{foot:step6c}With $\epsilon = L_2\cdot d(\ensuremath{\omega_m},0)$ as above, such that $q=p(m)-1$. 
$\ensuremath{\textrm{\upshape err}}(\ldots) \leq K\cdot \epsilon$ follows again from Remark~\ref{bem:errorterm}. $x^1_{n+1}\in\ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ follows from (\ref{step6:d}) as $l^+_{p(m)}+1 \in R_{t-1}$ by (\ref{eq:lpminrn}). (\ref{step6:d}) also implies $\tau(n) \leq \frac{2p(m)-5}{4}$ and $\tau(j) \leq \frac{j}{8} \ \forall j \in [1,n]$ by Lemma~\ref{lem:regsets}(a) and $\tau(n)-\tau(j) \leq \frac{n-j}{6}$ by (\ref{eq:etajnestimate}). Finally $n=l^+_{p(m)} \geq 5(p(m)-1)$ by (\ref{eq:lpmestimates}). } to the sequences \begin{equation} \label{step6:referencetwo} x_1^1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_1(\beta^+,l^-_p) \ensuremath{,\ldots,} \xi_{l^+_{p(m)}}(\beta^+,l^-_p) \end{equation} and \begin{equation} \label{step6:newtwo} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_{m+1}(\beta^+,l^-_p) \ensuremath{,\ldots,} \xi_{N}(\beta^+,l^-_p) \ , \end{equation} which yields that $\xi_{N+1}(\beta^+,l^-_p) = x^2_{n+1}\geq \frac{2}{\alpha}$ as required for (\ref{step6:a}). \ \\ It remains to address the case $p(m) = 1$. Note that in this case $p(j) = 0 \ \forall j \in [m+1,N]$ (see Lemma~\ref{lem:endpoints}(a)). There are two possibilities: Either $\xi_{m+1}(\beta^+,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$, in which case $\xi_{N+1}(\beta^+,l^-_p) \geq \gamma > \ensuremath{\frac{1}{\alpha}}$ follows from the repeated application of Lemma~\ref{lem:basicestimate}~. Otherwise, $\xi_{m+1}(\beta^+,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. As $1,2\in R_{t-1}$ (see (\ref{eq:lpminrn})) it follows from (\ref{step6:d}) that $\xi_1(\beta^+,l^-_p), \xi_2(\beta^+,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ as well. 
Therefore (\ref{cond:Fexpansion}) implies that \begin{eqnarray} \nonumber \xi_{m+2}(\beta^+,l^-_p) & \geq & \xi_2(\beta^+,l^-_p) + 2\ensuremath{\sqrt{\alpha}}\cdot \epsilon - K \cdot \epsilon \\ &\stackrel{(\ref{cond:alpha1})}{\geq}& \xi_2(\beta^+,l^-_p) + \ensuremath{\sqrt{\alpha}}\cdot \epsilon \ \geq \ \frac{2}{\alpha} \label{step6:h} \end{eqnarray} as $\epsilon \geq \ensuremath{\frac{1}{\alpha}}$ in this case. Again, we obtain $\xi_{N+1}(\beta^+,l^-_p) \geq \gamma > \ensuremath{\frac{1}{\alpha}}$ by repeated application of Lemma~\ref{lem:basicestimate}~. {\raggedleft $\Box$ \\} \ \\ Now suppose $\xi_{N+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. Then by Claim~\ref{claim:step6a} there holds $\xi_{t-1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. As we can already apply Part II of the induction statement with $q=p$ and $n=t-1$, this further implies (\ref{eq:indstatementI}), $\beta \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$ and \[ \xi_j(\beta,l^-_p) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{t-1} \ . \] Note that $R_{N+1} \cap [1,t-1] = R_{t-1}$ (see Remark~\ref{bem:regularsets}(b)), such that $R_{N+1} = R_{t-1} \cup R(J) \cup\{N+1\}$. Therefore, in order to complete this step and thereby the proof of Induction scheme~\ref{thm:indscheme}, it only remains to show that \begin{claim} \label{claim:step6b} $\xi_{N+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_j(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j \in R(J)$. \end{claim} \textit{Proof.}\ The proof of this statement is very similar to the proof of Step 4, and likewise we will use two further claims, namely Claim~\ref{claim:step6c} and Claim~\ref{claim:step6d} below, which are the analogues of Claim~\ref{claim:step4a} and Claim~\ref{claim:step4b}~. 
Suppose $\xi_j(\beta,l^-_p) > \ensuremath{\frac{1}{\alpha}}$ for some $j \in R(J)$. We have to distinguish two cases (note that $R(J) \cup \Gamma^+(J) = J^+$): Either $j+1 \in R(J)$. As $d(\ensuremath{\omega_j},0) \geq \frac{3\gamma}{L_2}$ by Lemma~\ref{lem:regularsetsbasic}(b), Lemma~\ref{lem:basicestimate} implies $\xi_{j+1}(\beta,l^-_p) \geq \gamma \geq \frac{2}{\alpha}$. Therefore we can apply Claim~\ref{claim:step6c} with $k=j+1$. On the other hand, if $j+1\notin R(J)$, then Claim \ref{claim:step6d} (with $k=j$) yields the existence of a suitable $\tilde{k}$ and we can again apply Claim~\ref{claim:step6c}, this time with $k=\tilde{k}$. In both cases we obtain that $\xi_j(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$ implies $\xi_{N+1}(\beta,l^-_p) > \ensuremath{\frac{1}{\alpha}}$. As we are in the case of one-sided forcing, the fact that $\xi_j(\beta,l^-_p) \leq -\ensuremath{\frac{1}{\alpha}}$ implies $\xi_{N+1}(\beta,l^-_p) < -\ensuremath{\frac{1}{\alpha}}$ is obvious. This proves the claim. {\raggedleft $\Box$ \\} \begin{claim} \label{claim:step6c} Suppose $\xi_k(\beta,l^-_p) \geq \frac{2}{\alpha}$ for some $k \in R(J)$. Then $\xi_{N+1}(\beta,l^-_p) \ > \ \ensuremath{\frac{1}{\alpha}}$. \end{claim} \textit{Proof.}\ First of all, if $p_J=1$ then $p(j)=0 \ \forall j \in J^+$ by Lemma~\ref{lem:endpoints}(a), and the claim follows from the repeated application of Lemma~\ref{lem:basicestimate}~. Thus we can assume $p_J \geq 2$. Claim~\ref{claim:step6a} together with Part II of the induction statement with $q=p$ and $n=t-1$ imply that \begin{equation} \label{step6:i} \xi_j(\beta,l^-_p) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{t-1} \supseteq R_{l^+_{p_J}+1} \supseteq R_{l^+_{p_J}} \end{equation} (see (\ref{eq:lplusstabilized}) for the inclusions). 
Consequently, we can apply Lemma~\ref{lem:orbitsthrowout}(a)\footnote{We choose $\epsilon = L_2\cdot d(\omega_{m_J},0) \in [\alpha^{-p_J},\alpha^{-(p_J-1)}]$, such that $q=p_J-1$ and $\ensuremath{\textrm{\upshape err}}(\ldots) \leq K \cdot \epsilon$. Note that $k\in R(J)$ implies $k-m_J \in R_{l^+_{p_J}}$ by (\ref{eq:recursivedef1}), and further $l^+_{p_J}+1 \in R_{l^+_{p_J}+1}$ by (\ref{eq:lpminrn}). Therefore $x^1_1,x^1_{n+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by (\ref{step6:i}). Finally $\tau(n) \leq \max\{0,\frac{2p_J-5}{4}\}$ by Lemma~\ref{lem:regsets}(a) and $\tau(n)-\tau(j) \leq \frac{n-j}{6}$ by (\ref{eq:etajnestimate}).\label{foot:step6d}} to the sequences \begin{equation} \label{step6:referencethree} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_{k-m_J}(\beta,l^-_p) \ensuremath{,\ldots,} \xi_{l^+_{p_J}}(\beta,l^-_p) \end{equation} and \begin{equation} \label{step4:newthree} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_k(\beta,l^-_p) \ensuremath{,\ldots,} \xi_N(\beta,l^-_p) \ , \end{equation} to obtain that $\xi_{N+1}(\beta,l^-_p) = x^2_{n+1} \geq \frac{2}{\alpha}$. {\raggedleft $\Box$ \\} \begin{claim} \label{claim:step6d} Suppose $k\in R(J),\ k+1\in \Gamma^+(J)$ and $\xi_k(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$. Then there exists some $\tilde{k} \in R(J)$ with $\xi_{\tilde{k}}(\beta,l^-_p) \geq \frac{2}{\alpha}$. \end{claim} \textit{Proof.}\ Let $J_1:=J$, $m_1:=m_J$ and $p_1:=p_J$. As in the proof of Claim~\ref{claim:step4b} we can find sequences $p_1 > \ldots > p_r \geq 0$ and $m_1 > \ldots > m_r \in [1,l^+_{p_{r-1}}]$ with $p_i = p(m_i) \leq p_{i-1}-3$, such that $k-\sum_{i=1}^{r-1} m_i + 1 = \lambda^-(m_r)$ for some $r \in \N$. Let $m := m_r$ and $t := \sum_{i=1}^{r-1} m_i$. (The only difference to Claim~\ref{claim:step4b} is that $r=1$ is not possible.)
Likewise, we have \begin{equation} \label{step6:j} d(\omega_t,0) \ \leq \ \ensuremath{\frac{1}{4}} \cdot \frac{\alpha^{-(p(m)+1)}}{L_2} \ \leq \ \frac{d(\ensuremath{\omega_m},0)}{4} \ . \end{equation} Again, we choose some $q' \geq p(m)$ such that $l^+_{q'}+1 \leq m \leq \nu(q'+1)$. As $m \leq l^+_{p_{r-1}} \leq l^+_p < \nu(p-2)$ (see (\ref{eq:lpmestimates})) we can assume that $q' \leq p-2$. We now want to apply Lemma~\ref{lem:approaching} with $\beta^* := \beta^+_{q',m}$, $q=p(m)$, $l^* = l^-_{q'}$, $l=l^-_p$ and $k=t$. In order to check the assumptions, note that we can apply Part II of the induction statement (with $q=q'$ and $n=m$) to $\beta^* := \beta^+_{q',m}$ and obtain that $\beta^* \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$, \begin{equation} \label{step6:k} \xi_j(\beta^*,l^-_{q'}) \ \geq \ \gamma \ \ \ \ \ \forall j \in [-l^-_{q'},0] \setminus \Omega_\infty \end{equation} and \begin{equation} \label{step6:l} \xi_j(\beta^*,l^-_{q'}) \ \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_m \setminus \Omega_\infty \ . \end{equation} In addition, Step 3\footnote{Note that $\xi_{N+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_{t-1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by Claim~\ref{claim:step6a}, which in turn implies $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ as $l^+_p+1 \in R_{t-1}$, see (\ref{eq:lpminrn}).} together with Part III of the induction statement\footnote{With $q_1 = q'$, $q=p-1$, $n_1=m$ and $n_2=\nu(p-1)$.
} imply that \begin{equation} \label{step6:m} |\beta-\beta^*| \ \leq \ |\beta - \beta_{p-1,\nu(p)}| + |\beta_{p-1,\nu(p)} - \beta^*| \ \leq \ \alpha^{-p} + \alpha^{-q'} \ \leq \ 2\alpha^{-p(m)} \ . \end{equation} Finally, $\xi_{k+1}(\beta,l^-_p) \geq \gamma$ if $\xi_k(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$ by Lemma \ref{lem:basicestimate}\footnote{Note that $k+1=\lambda^-(m)$ in the claim above corresponds to $m+k-l^-_{p(m)}$ in Lemma~\ref{lem:approaching}~.}. Thus Lemma \ref{lem:approaching} yields \begin{equation} \label{step6:n} \{ j \in [\lambda^-(m),m] \mid \xi_{j+t}(\beta,l^-_p) < \gamma \} \ \ensuremath{\subseteq} \ \ensuremath{\tilde{\Omega}}_{p(m)-2} \ . \end{equation} Consequently (Lemma~\ref{lem:omegatransition}(b)) \begin{equation} \label{step6:o} \{ j \in [-l^-_{p(m)},0] \mid \xi_{j+m+t}(\beta,l^-_p) < \gamma \} \ \ensuremath{\subseteq} \ \Omega_\infty \ . \end{equation} This means that we can compare the two sequences \begin{equation} \label{step6:referencefour} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_{-l^-_{p(m)}}(\beta^*,l^-_{q'}) \ensuremath{,\ldots,} \xi_{-1}(\beta^*,l^-_{q'}) \end{equation} and \begin{equation} \label{step4:newfour} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_{m+t-l^-_{p(m)}}(\beta,l^-_p) \ensuremath{,\ldots,} \xi_{m+t-1}(\beta,l^-_p) \end{equation} via Lemma~\ref{lem:orbitscontraction} with $\epsilon \ := \ L_2 \cdot d(\ensuremath{\omega_m},0) \ \in \ (\alpha^{-p(m)},\alpha^{-(p(m)-1)}]$ to obtain that\footnote{Note that $d(\omega_{m+t},0) \leq \frac{2\epsilon}{L_2}$ (see (\ref{step6:j})) and $|\beta-\beta^*| \leq 2\alpha^{-p(m)} \leq 2\epsilon$ by (\ref{step6:m}), such that $\ensuremath{\textrm{\upshape err}}(\ldots) \leq K \cdot \epsilon$ by Remark~\ref{bem:errorterm}~. Further, it follows from (\ref{step6:k}) and (\ref{step6:o}) that $\eta(j,n) \leq \#([-(n-j),-1] \cap \Omega_\infty) \leq \frac{n-j}{10}$ (see (\ref{cond:hfunctions2})).
Finally $n=l^-_{p(m)} \geq 4p(m)$ by (\ref{eq:lpmestimates}), such that $\alpha^{-\frac{n}{4}} \leq \epsilon$.\label{foot:step6f}} \begin{equation} \label{step6:p} |\xi_{m+t}(\beta,l^-_p) - \xi_0(\beta^*,l^-_{q'})| \ \leq \ \epsilon \cdot (6+K\cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}})) \ . \end{equation} Note that (\ref{step6:k}) and (\ref{step6:o}) in particular imply that $\xi_0(\beta^*,l^-_{q'})\geq \gamma$ and $\xi_{m+t}(\beta,l^-_p) \geq \gamma$. As $d(\omega_{m+t},0) \geq \frac{3}{4} \cdot \frac{\epsilon}{L_2}$ (see (\ref{step6:j})), (\ref{cond:sharppeak}) in combination with (\ref{cond:Fcontraction}) therefore implies \begin{eqnarray} \lefteqn{\xi_{m+t+1}(\beta,l^-_p) \ \geq } \nonumber \\ & \geq & \xi_1(\beta^*,l^-_{q'}) + \frac{3\epsilon}{4} - \epsilon \cdot \frac{6 + K \cdot S_\infty(\alpha^{\ensuremath{\frac{1}{4}}})}{2\ensuremath{\sqrt{\alpha}}} \ \stackrel{(\ref{cond:alpha3})}{\geq} \ \xi_1(\beta^*,l^-_{q'}) + \frac{\epsilon}{2} \ . \label{step6:crucial} \end{eqnarray} Now first assume $p(m) \geq 2$, such that $\epsilon \leq \ensuremath{\frac{1}{\alpha}}$. (The case $p(m)=1$ has to be treated separately, see below.) Then we can apply Lemma \ref{lem:orbitsthrowout}(b), with $\epsilon$ as above, to compare the orbits \begin{equation} \label{step6:q} x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_1(\beta^*,l^-_{q'}) \ensuremath{,\ldots,} \xi_{l^+_{p(m)}}(\beta^*,l^-_{q'}) \end{equation} and \begin{equation} \label{step6:r} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_{m+t+1}(\beta,l^-_p) \ensuremath{,\ldots,} \xi_{m+t+l^+_{p(m)}}(\beta,l^-_p) \end{equation} to conclude that\footnote{We have $q=p(m)-1$ and $\ensuremath{\textrm{\upshape err}}(\ldots) \leq K\epsilon$ as before (see Footnote~\ref{foot:step6f}). (\ref{step6:l}) yields that $x^1_{n+1} \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ (note that $l^+_q+1 \in R_m$ by (\ref{eq:lpminrn})).
Further, we have \[ \tau(n) \ \leq \ \#\left([1,l^+_{p(m)}] \setminus R_m\right) \ \leq \ \max\left\{0,\frac{2p(m)-5}{4}\right\} \] as well as $\tau(j) \leq \#([1,j] \setminus R_m) \leq \frac{j}{8}$ by Lemma~\ref{lem:regsets}(a). $\tau(n)-\tau(j) \leq \frac{n-j}{6}$ follows again from (\ref{eq:etajnestimate}), and finally $n = l^+_{p(m)} \geq 5(p(m)-1)$ by (\ref{eq:lpmestimates}).\label{foot:step6g}} \begin{equation} \label{step6:s} \xi_{m+t+l^+_{p(m)}+1}(\beta,l^-_p) \ \geq \ \frac{2}{\alpha} \ . \end{equation} As $J_r = J(m)$ is a maximal interval in $\Gamma_{l^+_{p_{r-1}}}$ we have $\lambda^+(m)+1 \in R_{l^+_{p_{r-1}}}$. Therefore $\lambda^+(m)+1+t \in R(J)$ follows from the recursive structure of the regular sets. Consequently, we can choose $\tilde{k} = \lambda^+(m)+1+t = m+l^+_{p(m)}+t+1$. \ \\ Finally, suppose $p(m)=1$. In this case we still have $\xi_{m+t+1}(\beta,l^-_p) \geq \xi_1(\beta^*,l^-_{q'}) + \frac{\epsilon}{2}$ by (\ref{step6:crucial}). There are two possibilities: Either $\xi_{m+t+1}(\beta,l^-_p) \geq \frac{2}{\alpha}$. As $m+1 \in R_{l^+_{p_{r-1}}}$ (see Remark~\ref{bem:regularsets}(c)), we have $m+t+1 \in R(J)$ due to the recursive structure of this set. Thus, we can choose $\tilde{k} = m+t+1$. On the other hand, if $\xi_{m+t+1}(\beta,l^-_p) \in B_{\frac{2}{\alpha}}(0)$ then we can apply (\ref{cond:Fcontraction}) again and obtain \[ \xi_{m+t+2}(\beta,l^-_p) \ \geq \ \xi_2(\beta^*,l^-_{q'}) + 2\ensuremath{\sqrt{\alpha}}\cdot\epsilon - K \epsilon \ \stackrel{(\ref{cond:alpha1})}{\geq} \ \xi_2(\beta^*,l^-_{q'}) + \ensuremath{\sqrt{\alpha}}\cdot\epsilon \geq \frac{1}{\alpha} \] by (\ref{cond:alphagamma0}), as $\xi_1(\beta^*,l^-_{q'}),\xi_2(\beta^*,l^-_{q'}) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by (\ref{step6:l}) and $\epsilon \geq \ensuremath{\frac{1}{\alpha}}$. Thus, we can choose $\tilde{k} = m+t+2$ in this case. Note that $m+t+2$ is contained in $R(J)$ for the same reasons as $m+t+1$.
{\raggedleft $\Box$ \\} {\raggedleft \large $\blacksquare$ \\} \section{Construction of the sink-source-orbits: Symmetric forcing} \label{Symmetricsetting} For the symmetric setting, we will use two additional assumptions on the parameters, namely \begin{eqnarray} \label{cond:alphagammasymmetric} \frac{4\gamma}{L_2} + \frac{S_\infty(\alpha)}{\alpha \cdot L_2} & < & \ensuremath{\frac{1}{2}} \ ; \\ g_{| B_{\frac{4\gamma}{L_2}}(0)} \ \geq 0 & \textrm{ and } & g_{| B_{\frac{4\gamma}{L_2}}(\ensuremath{\frac{1}{2}})} \ \leq 0 \ . \label{cond:gvalues} \end{eqnarray} Due to the Lipschitz-continuity of $g$ by assumption (\ref{eq:g-lipschitz}), condition (\ref{cond:gvalues}) can of course be ensured by choosing $\gamma \leq L_2/(4L_1)$. \medskip Further, we remark that the symmetry condition (\ref{eq:systemsymmetry}) reduces the possible alternatives in Theorem~\ref{thm:schwarzian} and leads to the following corollary: \begin{cor}[Corollary 4.3 in \cite{jaeger:2003}] \label{cor:symmetricschwarzian} Suppose $T$ satisfies all assertions of Theorem~\ref{thm:schwarzian} and has the symmetry given by (\ref{eq:systemsymmetry}). Then one of the following holds: \begin{list}{(\roman{enumi})}{\usecounter{enumi}} \item There exists one invariant graph $\varphi$ with $\lambda(\varphi) \leq 0$. If $\varphi$ has a negative Lyapunov exponent, it is always continuous. Otherwise the equivalence class contains at least an upper and a lower semi-continuous representative. \item There exist three invariant graphs $\varphi^- \leq \psi \leq \varphi^+$ with $\lambda(\varphi^-) = \lambda(\varphi^+) < 0$ and $\lambda(\psi) > 0$. $\varphi^-$ is always lower semi-continuous and $\varphi^+$ is always upper semi-continuous. Further, if one of the three graphs is continuous then so are the other two, if none of them is continuous there holds \[\esscl{\Phi^-} = \esscl{\Psi} = \esscl{\Phi^+} \ . 
\] In addition, there holds \[ \textstyle \varphi^-(\theta) = -\varphi^+(\theta+\ensuremath{\frac{1}{2}}) \] and \[ \textstyle \psi(\theta) \ = \ -\psi(\theta+\ensuremath{\frac{1}{2}}) \ . \] \end{list} \end{cor} Consequently, if we can show that there exists an SNA in a system of this kind, then we are automatically in situation (ii). Thus there will be two symmetric strange non-chaotic attractors which embrace a self-symmetric strange non-chaotic repellor, as claimed in Theorem~\ref{thm:symmetricsna}~. \medskip In order to repeat the construction from Section~\ref{Construction} for the case of symmetric forcing, we have to define admissible times and the sets $R_N$ again. However, this time there are two critical intervals instead of one, namely $B_{\frac{4\gamma}{L_2}}(0)$ and $B_{\frac{4\gamma}{L_2}}(\ensuremath{\frac{1}{2}})$, corresponding to the maximum and minimum of the forcing function $g$. Therefore, we have to modify Definition~\ref{def:omegasets} in the following way: \begin{definition} \label{def:modifiedomegasets} For $p \in \N_0 \cup \{\infty\}$ let $Q_p : \ensuremath{\mathbb{Z}} \ensuremath{\rightarrow} \N_0$ be defined by \[ Q_p(j) := \left\{ \begin{array}{cll} q & \mathrm{if} & d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\}) \in \left[ S_{p-q+1}(\alpha) \cdot \frac{\alpha^{-q}}{L_2},S_{p-q+2}(\alpha)\cdot \frac{\alpha^{-(q-1)}}{L_2} \right) \ \textrm{ for } q \geq 2 \\ \\ 1 & \mathrm{if} & d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\}) \in \left[ S_p(\alpha) \cdot \frac{\alpha^{-1}}{L_2}, \frac{4\gamma}{L_2} + S_{p}(\alpha)\cdot \frac{\alpha^{-1}}{L_2} \cdot (1-\ensuremath{\mathbf{1}}_{\{0\}}(p)) \right) \ \ \ . \\ \\ 0 & \mathrm{if} & d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\}) \geq \frac{4\gamma}{L_2} + S_{p}(\alpha)\cdot \frac{\alpha^{-1}}{L_2} \cdot (1-\ensuremath{\mathbf{1}}_{\{0\}}(p)) \end{array} \right. \] if $j \in \ensuremath{\mathbb{Z}} \setminus \{ 0 \}$ and $Q_p(0) := 0$. Again, let $p(j) := Q_0(j)$. 
Further let \[ \begin{array}{llll} \textstyle \ensuremath{\tilde{\nu}}(q) & := & \min\left\{ j \in \N \mid d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\}) < 3S_\infty(\alpha)\cdot \frac{\alpha^{-(q-1)}}{L_2} \right\} & \textrm{if } q \geq 2 \ \textrm{ and } \\ \\ \ensuremath{\tilde{\nu}}(1) & := & \min\left\{ j \in \N \mid d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\}) < 3\left(\frac{4\gamma}{L_2} + S_\infty(\alpha)\cdot\frac{\alpha^{-1}}{L_2} \right) \right\} \ . \end{array} \] Apart from this, we define all the quantities $\Omega^{(\pm)}_p(j),\Omega^{(\pm)},\ensuremath{\tilde{\Omega}}^{(\pm)}$ and $\nu$ exactly in the same way as in Definition~\ref{def:omegasets}, only using the altered definitions of the functions $Q_p$. Finally, we let \[ s(j) \ := \ \left\{ \begin{array}{rll} 1 & \textit{if } \ \ d(\ensuremath{\omega_j},0) \ \leq \ \frac{4\gamma}{L_2} + \frac{S_\infty(\alpha)}{\alpha \cdot L_2} \\ \\ -1 & \textit{if } \ \ d(\ensuremath{\omega_j},\ensuremath{\frac{1}{2}}) \ \leq \ \frac{4\gamma}{L_2} + \frac{S_\infty(\alpha)}{\alpha \cdot L_2} \\ \\ 0 & \textit{otherwise} \end{array} \right. \ . \] \end{definition} In other words, we have just replaced $d(\ensuremath{\omega_j},0)$ by $d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\})$ and introduced the function $s$ in order to tell whether $\ensuremath{\omega_j}$ is close to $0$ or to $\ensuremath{\frac{1}{2}}$. However, if we let $\tilde{\omega} := 2 \omega \bmod 1$ there holds \[ \textstyle d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\}) \ = \ \ensuremath{\frac{1}{2}} d(\tilde{\omega}_j,0) \ . \] This means that Definition~\ref{def:omegasets} with $\tilde{\omega}$ and $\tilde{L}_2 := \ensuremath{\frac{1}{2}} L_2$ yields exactly the same objects as Definition~\ref{def:modifiedomegasets} with $\omega$ and $L_2$. 
Therefore, if we define all the quantities $l^\pm_q,\ J(m),$ $A_N,\ \Lambda_N,$ $R_N,\ \Gamma_N$, etc.\ exactly in the same way as in Section~\ref{Tools}, only with respect to Definition~\ref{def:modifiedomegasets} instead of Definition~\ref{def:omegasets}, then all the results from Sections~\ref{Approximatingsets}--\ref{Regulartimes} will literally stay true. The only exception is Lemma~\ref{lem:regularsetsbasic}(b), where we can even replace $d(\ensuremath{\omega_j},0)$ by $d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\})$. Further, in Section~\ref{Comparingorbits} we did not use any specific assumption on $g$ apart from the Lipschitz-continuity. Thus, we have all the tools from Section~\ref{Tools} available again. Therefore, the only difference from the preceding section is the fact that the mapping $\beta \mapsto \xi_n(\beta,l)$ is not necessarily monotone anymore (where the $\xi_n(\beta,l)$ are defined exactly as before, see Definition~\ref{def:parameterfamily}). Hence, instead of considering arbitrary $\beta$ as in Induction scheme~\ref{thm:indscheme} we have to restrict to certain intervals $I^q_n = [\beta^+_{q,n},\beta^-_{q,n}]$ ($q\in\N_0,\ n \in [l^+_q+1,\nu(q+1)]$ admissible) on which the dependence of $\xi_n(\beta,l^-_q)$ on $\beta$ is monotone. The parameters $\beta^\pm_{q,n}$ will again satisfy \begin{equation} \label{eq:betapluspndefS} \xi_n(\beta_{q,n}^+,l^-_q) \ = \ \ensuremath{\frac{1}{\alpha}} \end{equation} and \begin{equation} \label{eq:betaminpndefS} \xi_n(\beta_{q,n}^-,l^-_q) \ = \ -\ensuremath{\frac{1}{\alpha}} \ , \end{equation} but they cannot be uniquely defined by these equations anymore.
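The reason for this loss of monotonicity is easily seen from the derivative recursion for the fibre maps (compare (\ref{eq:derivativerule}) below): \[ \frac{\partial}{\partial\beta} \xi_{j+1}(\beta,l) \ = \ F'(\xi_j(\beta,l)) \cdot \frac{\partial}{\partial\beta} \xi_j(\beta,l) - g(\ensuremath{\omega_j}) \ . \] In the one-sided situation the forcing function does not change sign, so that all the terms $-g(\ensuremath{\omega_j})$ act in the same direction. For symmetric forcing, however, $g$ takes both signs, such that the different contributions may partially cancel and the sign of $\frac{\partial}{\partial\beta}\xi_n(\beta,l)$ is no longer controlled a priori.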
The fact which makes up for the lack of monotonicity, and for the existence of the second critical region $B_{\frac{4\gamma}{L_2}}(\ensuremath{\frac{1}{2}})$, is that by deriving information about the orbits $\xi_n(\beta,l)$ we get another set of reference orbits for free: It follows directly from (\ref{eq:systemsymmetry}) that \begin{equation} \label{eq:zetadef} \zeta_n(\beta,l) \ := \ T_{\beta,-\omega_l+\ensuremath{\frac{1}{2}},n+l}(-3) \ = \ -\xi_n(\beta,l) \end{equation} (Similarly to Definition~\ref{def:parameterfamily}, the $\zeta_n(\beta,l)$ correspond to the forward orbit of the points $(\omega_{-l}+\ensuremath{\frac{1}{2}},-3)$, where we suppress the $\theta$-coordinates again). Consequently, we have \begin{equation} \label{eq:xizetaequivalenceI} \xi_n(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ensuremath{\Leftrightarrow} \ \zeta_n(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \end{equation} and \begin{equation} \label{eq:xizetaequivalenceII} \xi_n(\beta,l) \geq \gamma \ \ensuremath{\Leftrightarrow} \ \zeta_n(\beta,l) \leq - \gamma \ . \end{equation} In the case of symmetric forcing the induction statement reads as follows: \begin{indscheme} \label{thm:indschemeS} Suppose the assumptions of Theorem~\ref{thm:symmetricsna} are satisfied and in addition (\ref{cond:alpha1}), (\ref{cond:hfunctions1}), (\ref{cond:hfunctions2}), (\ref{cond:u})--(\ref{cond:alpha3}), (\ref{cond:alphagammasymmetric}) and (\ref{cond:gvalues}) hold. Then for any $q \in \N_0$ and all admissible $n \in [l^+_q+1,\nu(q+1)]$ there exists an interval $I^q_n = [\beta^+_{q,n},\beta^-_{q,n}]$, such that $\beta^\pm_{q,n}$ satisfy (\ref{eq:betapluspndefS}) and (\ref{eq:betaminpndefS}) and in addition \begin{list}{\textbf{\Roman{enumi}.} \ \ }{\usecounter{enumi}} \item $\beta \in I^q_{l^+_q+1}$ implies \begin{equation} \label{eq:indstatementIS} \xi_j(\beta,l^-_q) \ \geq \ \gamma \ \ \ \forall j \in [-l^-_q,0] \setminus \Omega_\infty \ .
\end{equation} Further $I^q_{l^+_q+1} \ensuremath{\subseteq} \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$. \item For each admissible $n \in [l^+_q+1,\nu(q+1)]$ the mapping $\beta \mapsto \xi_n(\beta,l^-_q)$ is strictly monotonically decreasing on $I^q_n$, (\ref{eq:indstatementIS}) holds for all $\beta\in I^q_n$ and \begin{equation} \label{eq:indstatementIIaS} I^q_n \ \ensuremath{\subseteq} \ I^q_j \ \ensuremath{\subseteq} \ \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]\ \ \ \ \forall j \in A_n \cap [l^+_q+1,n] \ . \end{equation} Further, for any $\beta \in I^q_n$ there holds \begin{equation} \label{eq:indstatementIIcS} \xi_j(\beta,l^-_q) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_n \ . \end{equation} \item \begin{list}{(\alph{enumi})}{\usecounter{enumi}} \item If $n_1 \in [l^+_q+1,\nu(q+1)]$ for some $q \geq 1$ there holds \begin{equation} \label{eq:indstatementIIIaS} |\beta^+_{q,n_1} - \beta^-_{q,n_1}| \ \leq \ 2\alpha^{-\frac{n_1}{4}} \ . \end{equation} In particular, in combination with (\ref{eq:indstatementIIaS}) this implies that \begin{equation} \label{eq:indstatementIIIbS} |\beta^\pm_{q,n_1} - \beta| \ \leq \ 2\alpha^{-\frac{n_1}{4}} \ \ \ \ \ \forall \beta \in I^q_{n_2} \end{equation} whenever $n_2 \in [l^+_q+1,\nu(q+1)]$ and $n_1 \in A_{n_2}$ (as $I^q_{n_2} \ensuremath{\subseteq} I^q_{n_1}$ in this case). \item Let $1 \leq q_1 < q$, $n_1 \in [l^+_{q_1}+1,\nu(q_1+1)]$ and $n_2 \in [l^+_{q}+1,\nu(q+1)]$. Then \begin{equation} \label{eq:indstatementIIIcS} |\beta^+_{q_1,n_1} - \beta^+_{q,n_2}| \ \leq \ 3 \cdot \hspace{-0.7em}\sum_{i=q_1+1}^q \alpha^{-i}\ \leq \ \alpha^{-q_1} \ . \end{equation} \end{list} \end{list} \end{indscheme} Theorem~\ref{thm:symmetricsna} now follows in exactly the same way as Theorem~\ref{thm:snaexistence} from Induction scheme~\ref{thm:indscheme} (we do not repeat the argument here).
The additional statements about the symmetry follow from Corollary~\ref{cor:symmetricschwarzian}~. However, due to the lack of monotonicity we are not able to derive any further information about the sink-source-orbit or the bifurcation scenario as in the case of one-sided forcing. In particular, we have to leave open here whether $\beta_0$ is the only parameter at which an SNA occurs, or if this does indeed happen over a small parameter interval as the numerical observations suggest (compare Section~\ref{Pitchfork}). \medskip \subsection{Proof of the induction scheme} \textbf{Standing assumption:} In this whole subsection, we always assume that the assumptions of Induction scheme~\ref{thm:indschemeS} are satisfied. \medskip In order to start the induction we will need the following analogue of Lemma~\ref{lem:basicestimate}, which can be proved in exactly the same way (using that $d(\ensuremath{\omega_j},0) \geq \frac{3\gamma}{L_2}$ implies $g(\ensuremath{\omega_j}) \leq 1-3\gamma$ by (\ref{cond:symmetricpeak}) and (\ref{cond:gvalues}), and similarly $d(\ensuremath{\omega_j},\ensuremath{\frac{1}{2}}) \geq \frac{3\gamma}{L_2}$ implies $g(\ensuremath{\omega_j}) \geq - (1-3\gamma)$). \begin{lem} \label{lem:basicestimateS} Suppose that $\beta \leq 1 + \frac{4}{\ensuremath{\sqrt{\alpha}}}$ and $j \geq -l$. If $d(\ensuremath{\omega_j},0) \geq \frac{3\gamma}{L_2}$, then $\xi_j(\beta,l) \geq \ensuremath{\frac{1}{\alpha}}$ implies $\xi_{j+1}(\beta,l) \geq \gamma$. Similarly, if $d(\ensuremath{\omega_j},\ensuremath{\frac{1}{2}}) \geq \frac{3\gamma}{L_2}$ then $\xi_j(\beta,l) \leq - \ensuremath{\frac{1}{\alpha}}$ implies $\xi_{j+1}(\beta,l) \leq -\gamma$. Consequently, $\xi_{j+1}(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ implies $\xi_j(\beta,l) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ whenever $d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\}) \geq \frac{3\gamma}{L_2}$.
\end{lem} Further, the following lemma replaces Lemma~\ref{lem:parameterestimates}~. It will be needed to derive the required estimates on the parameters $\beta^\pm_{q,n}$ as well as the monotonicity of $\beta \mapsto \xi_n(\beta,l^-_q)$ on $I^q_n$. \begin{lem} \label{lem:parameterestimatesS} Let $q \in \N$ and let $n \in [l^+_q+1,\nu(q+1)]$ be admissible. Further, assume \begin{equation} \label{eq:pmesttimeabove} \xi_j(\beta,l^-_q) \ \geq \ \gamma \ \ \ \ \ \forall j \in [-l^-_q,0] \setminus \Omega_\infty \end{equation} and \begin{equation} \label{eq:pmesttimebelow} \xi_j(\beta,l^-_q) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_n \setminus \{n\} \ . \end{equation} Then \begin{equation} \label{eq:derivativeestimate} \frac{\partial}{\partial \beta} \xi_n(\beta,l^-_q) \ \leq \ - \alpha^{\frac{n-1}{4}} \ . \end{equation} \end{lem} \textit{Proof.}\ We have \begin{equation} \label{eq:derivativerule} \frac{\partial}{\partial\beta} \xi_{j+1}(\beta,l^-_q) \ = \ F'(\xi_j(\beta,l^-_q)) \cdot \frac{\partial}{\partial\beta} \xi_j(\beta,l^-_q) - g(\ensuremath{\omega_j}) \end{equation} (compare (\ref{eq:xiderivatives})). In order to prove (\ref{eq:derivativeestimate}) we first have to obtain a suitable upper bound on $|\frac{\partial}{\partial\beta}\xi_0(\beta,l^-_q)|$. Let \[ \eta(j) \ := \ \# \left( [-j,-1] \cap \Omega_\infty\right) \ . \] We claim that under assumption (\ref{eq:pmesttimeabove}) and for any $l \in [0,l^-_q]$ there holds \begin{equation} \label{eq:destinduction} \left|\frac{\partial}{\partial\beta} \xi_0(\beta,l^-_q)\right| \ \leq \ \left|\frac{\partial}{\partial\beta} \xi_{-l}(\beta,l^-_q)\right| \cdot \alpha^{-\ensuremath{\frac{1}{2}}(l-5\eta(l))} + \sum_{j=0}^{l-1} \alpha^{-\ensuremath{\frac{1}{2}}(j-5\eta(j))} \ .
\end{equation} As $\eta(j) \leq \frac{j}{10}$ by (\ref{cond:hfunctions2}) and $\frac{\partial}{\partial\beta} \xi_{-l^-_q}(\beta,l^-_q) = 0$ by definition, this implies \[ \left|\frac{\partial}{\partial\beta} \xi_0(\beta,l^-_q)\right| \ \leq \ S_\infty(\alpha^{\ensuremath{\frac{1}{4}}}) \ . \] Using the fact that $\xi_0(\beta,l^-_q) \geq \gamma$ by assumption, this further yields \begin{eqnarray} \label{eq:xizeroderivative} \nonumber \lefteqn{\frac{\partial}{\partial\beta} \xi_1(\beta,l^-_q) \ =} \\ & = & F'(\xi_0(\beta,l^-_q)) \cdot \frac{\partial}{\partial\beta} \xi_0(\beta,l^-_q) - 1 \ \stackrel{(\ref{cond:Fcontraction})}{\leq} \ - 1 + \frac{S_\infty(\alpha^{\ensuremath{\frac{1}{4}}})}{2\ensuremath{\sqrt{\alpha}}} \ \stackrel{(\ref{cond:alpha3})}{\leq} \ -\ensuremath{\frac{1}{2}} \ . \end{eqnarray} We prove (\ref{eq:destinduction}) by induction on $l$. For $l = 0$ the statement is obvious. In order to prove the induction step $l \ensuremath{\rightarrow} l+1$, first suppose that $-(l+1) \notin \Omega_\infty$, such that $\eta(l+1) = \eta(l)$ and $\xi_{-(l+1)}(\beta,l^-_q) \geq \gamma$.
Then, using (\ref{eq:derivativerule}) we obtain \begin{eqnarray*} \lefteqn{ \left|\frac{\partial}{\partial\beta} \xi_0(\beta,l^-_q)\right| \ \leq \ \left|\frac{\partial}{\partial\beta} \xi_{-l}(\beta,l^-_q)\right| \cdot \alpha^{-\ensuremath{\frac{1}{2}}(l-5\eta(l))} + \sum_{j=0}^{l-1} \alpha^{-\ensuremath{\frac{1}{2}}(j-5\eta(j))} } \\ & = & \left|F'(\xi_{-(l+1)}(\beta,l^-_q)) \cdot \frac{\partial}{\partial\beta} \xi_{-(l+1)}(\beta,l^-_q) - g(\omega_{-(l+1)})\right| \cdot \alpha^{-\ensuremath{\frac{1}{2}}(l-5\eta(l))} \\ && + \sum_{j=0}^{l-1} \alpha^{-\ensuremath{\frac{1}{2}}(j-5\eta(j))} \\ & \stackrel{(\ref{cond:Fcontraction})}{\leq} & \left(\alpha^{-\ensuremath{\frac{1}{2}}} \cdot \left|\frac{\partial}{\partial\beta} \xi_{-(l+1)}(\beta,l^-_q)\right| + 1\right) \cdot \alpha^{-\ensuremath{\frac{1}{2}}(l-5\eta(l))} + \sum_{j=0}^{l-1} \alpha^{-\ensuremath{\frac{1}{2}}(j-5\eta(j))} \\ & = & \left|\frac{\partial}{\partial\beta} \xi_{-(l+1)}(\beta,l^-_q)\right| \cdot \alpha^{-\ensuremath{\frac{1}{2}}(l+1-5\eta(l+1))} + \sum_{j=0}^{l} \alpha^{-\ensuremath{\frac{1}{2}}(j-5\eta(j))} \ . \end{eqnarray*} The case $\eta(l+1) = \eta(l)+1$ is treated similarly, using (\ref{cond:Funiformbounds}) instead of (\ref{cond:Fcontraction}) (compare with the proof of Lemma~\ref{lem:orbitscontraction}). This proves (\ref{eq:destinduction}), such that (\ref{eq:xizeroderivative}) holds. \ \\ Now we can turn to prove (\ref{eq:derivativeestimate}). For any $k \in \N$ let \[ \tau(k) \ := \ \#([1,k-1]\setminus R_n) \ . \] We will show the following statement by induction on $k$: \begin{equation} \label{eq:destinductionII} \frac{\partial}{\partial\beta} \xi_k(\beta,l^-_q) \ \leq \ -\ensuremath{\frac{1}{2}} \cdot \left(\frac{3\ensuremath{\sqrt{\alpha}}}{2} \right)^{k-1-5\tau(k)} \ \ \ \ \ \forall k \in [1,n] \ . \end{equation} As $\tau(n) \leq \frac{n-1}{10}$ by Lemma~\ref{lem:regsets}(a), this implies (\ref{eq:derivativeestimate}) whenever $n \geq l^+_q+1$.
Note that $l^+_q \geq 3$ by (\ref{eq:lpmestimates}) and $\tau(n) = 0$ for all $n \leq 10$. For $k = 1$ the statement is true by (\ref{eq:xizeroderivative}). Suppose that (\ref{eq:destinductionII}) holds for some $k \geq 1$ and first assume that $\tau(k+1) = \tau(k)$. Then \begin{eqnarray*} \lefteqn{\frac{\partial}{\partial\beta} \xi_{k+1}(\beta,l^-_q) \ = \ F'(\xi_k(\beta,l^-_q)) \cdot \frac{\partial}{\partial\beta}\xi_k(\beta,l^-_q) - g(\ensuremath{\omega_k}) } \\ & \stackrel{(\ref{cond:Fexpansion})}{\leq} & - 2\ensuremath{\sqrt{\alpha}} \cdot \ensuremath{\frac{1}{2}} \cdot \left(\frac{3\ensuremath{\sqrt{\alpha}}}{2} \right)^{k-1-5\tau(k)} + 1 \\ & \stackrel{(*)}{\leq} & - (2\ensuremath{\sqrt{\alpha}} - 2) \cdot \ensuremath{\frac{1}{2}} \cdot \left(\frac{3\ensuremath{\sqrt{\alpha}}}{2} \right)^{k-1-5\tau(k)} \\ & \stackrel{(\ref{cond:alphagamma0})}{\leq} & -\ensuremath{\frac{1}{2}} \cdot \left(\frac{3\ensuremath{\sqrt{\alpha}}}{2} \right)^{k-5\tau(k+1)} \ . \end{eqnarray*} In ($*$) we used that $\left(\frac{3\ensuremath{\sqrt{\alpha}}}{2} \right)^{k-1-5\tau(k)} \geq 1$; this holds since $\tau(k) \leq \frac{k-1}{10}$ by Lemma~\ref{lem:regsets}(a), such that $k-1-5\tau(k) \geq \frac{k-1}{2} \geq 0$. The case $\tau(k+1) = \tau(k)+1$ is treated similarly again, using (\ref{cond:Funiformbounds}) instead of (\ref{cond:Fexpansion}) (compare with the proof of Lemma~\ref{lem:orbitsexpansion}). Thus we have proved (\ref{eq:destinductionII}) and thereby the lemma. {\raggedleft $\Box$ \\} \bigskip As in Section~\ref{Construction}, in order to prove Induction scheme~\ref{thm:indschemeS} we proceed in six steps. The overall strategy needs some slight modifications in comparison to the case of one-sided forcing, but in many cases the proofs of the required estimates stay literally the same. In such situations we will not repeat all the details, but refer to the corresponding passages of the previous section instead.
\ \\ \underline{\textbf{Step 1:}} \ \ \ \textit{Proof of the statement for $q=0$} \\ \ \\ Part I: Recall that $l^-_0 = l^+_0 = 0$ and note that $\xi_0(\beta,0) = 3 \geq \gamma$ by definition, such that (\ref{eq:indstatementIS}) holds automatically. As $\frac{\partial}{\partial \beta} \xi_1(\beta,0) = -1$, we can construct the interval $I^0_1$ by uniquely defining $\beta^\pm_{0,1}$ via (\ref{eq:betapluspndefS}) and (\ref{eq:betaminpndefS}). Further, we have $\xi_1(\beta,0) = F(3) - \beta$. Using (\ref{cond:Ffixedpoints}) and (\ref{cond:Fcontraction}), it is easy to check that \[ \textstyle F(3) \ \in \ [x_\alpha,x_\alpha+\frac{2-\frac{2}{\ensuremath{\sqrt{\alpha}}}}{2\ensuremath{\sqrt{\alpha}}}] \ \ensuremath{\subseteq} \ [1+\frac{1}{\ensuremath{\sqrt{\alpha}}}+\ensuremath{\frac{1}{\alpha}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}-\ensuremath{\frac{1}{\alpha}}] \ . \] Therefore $I^0_1 = [\beta^+_{0,1},\beta^-_{0,1}]$ must be contained in $[1+\frac{1}{\ensuremath{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}]$. \ \\ Part II: We proceed by induction on $n$. For $n=1$ the statement follows from the above. Suppose we have defined the intervals $I^0_n \ensuremath{\subseteq} [1+\frac{1}{\ensuremath{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}]$ with the stated properties for all $n \leq N$, $N \in [1,\nu(1)-1]$. As $p(N) = 0$, Lemma~\ref{lem:basicestimateS} yields that \[ \xi_{N+1}(\beta^+_{0,N},0) > \ensuremath{\frac{1}{\alpha}} \ \ \ \ \textrm{ and } \ \ \ \ \xi_{N+1}(\beta^-_{0,N},0) < -\ensuremath{\frac{1}{\alpha}} \ . \] This means that we can find $\beta^\pm_{0,N+1}$ in $I^0_N$ which satisfy (\ref{eq:betapluspndefS}) and (\ref{eq:betaminpndefS}). Consequently $I^0_{N+1} := [\beta^+_{0,N+1},\beta^-_{0,N+1}] \ensuremath{\subseteq} I^0_N$.
It then follows from Part II of the induction statement for $N$ that $I^0_{N+1} \ensuremath{\subseteq} I^0_j \ \forall j \in [1,N]=R_N$, in particular $I^0_{N+1} \ensuremath{\subseteq} I^0_1 \ensuremath{\subseteq} [1+\ensuremath{\frac{1}{\sqrt{\alpha}}}, 1+\frac{3}{\ensuremath{\sqrt{\alpha}}}]$ (note that $A_N = [1,N]$ as $N \leq \nu(1)$). This proves (\ref{eq:indstatementIS}) and (\ref{eq:indstatementIIaS}). In order to see (\ref{eq:indstatementIIcS}) suppose that $\beta \in I^0_{N+1}$. Then $\xi_N(\beta,0) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ by the definition of $I^0_{N+1}\ensuremath{\subseteq} I^0_N$ above, and therefore $\xi_j(\beta,0) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \forall j \in [1,N] = R_N $ follows from Part II of the induction statement for $N$. Finally, we can now use Lemma~\ref{lem:parameterestimatesS} to see that \begin{equation} \label{eq:qzeroderivative} \frac{\partial}{\partial \beta} \xi_{N+1}(\beta,0) \ \leq \ - \alpha^{\frac{N}{4}} \ . \end{equation} This ensures the monotonicity of $\beta \mapsto \xi_{N+1}(\beta,0)$. \ \\ As Part III of the induction statement is void for $q=0$, this completes Step 1. {\raggedleft \large $\blacksquare$ \\} \ \\ It remains to prove the induction step. Assume that the statement of Induction scheme~\ref{thm:indschemeS} holds for all $q \leq p-1$. As in Section~\ref{Onesidedproof}, the next two steps will prove Part I of the induction statement for $p$. Further, we can again assume in Steps 2 and 3 that \begin{equation} \label{eq:pgeqoneS} p \ \geq \ 2 \ . \end{equation} For the case $p=1$ note that the analogue of Lemma~\ref{lem:inductionstart} holds again in the case of symmetric forcing, with $d(\ensuremath{\omega_j},0)$ being replaced by $d(\ensuremath{\omega_j},\{0,\ensuremath{\frac{1}{2}}\})$, and this already shows Part I for $p=1$.
\ \\ \underline{\textbf{Step 2:}} \ \ \ \textit{If $|\beta-\beta^+_{p-1,\nu(p)}| \leq \alpha^{-p}$, then $\xi_j(\beta,l^-_p) \geq \gamma \ \forall j \in [-l^-_p,0] \setminus \Omega_\infty$.} \ \\ Actually, this follows in exactly the same way as Step 2 in Section~\ref{Onesidedproof}~. The crucial observation is the fact that Lemma~\ref{lem:approaching} literally stays true in the situation of this section. As we will also need the statement for the reversed inequalities in the later steps, we restate it here: \begin{lem} \label{lem:approachingS} Let $q \geq 1$, $l^*,l \geq 0$, $\beta^* \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$ and $|\beta - \beta^*| \leq 2\alpha^{-q}$. Suppose that $m$ is admissible, $p(m) \geq q$ and either $k=0$ or $p(k) \geq q$. Further, suppose \[ \xi_j(\beta^*,l^*) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \forall j \in R_m \ \] and $\xi_{m+k-l^-_q}(\beta,l) \geq \gamma$. Then \[ \{ j \in [m-l^-_q,m] \mid \xi_{j+k}(\beta,l) < \gamma \} \ \ensuremath{\subseteq} \ \ensuremath{\tilde{\Omega}}_{q-2} \ . \] Similarly, if $\xi_{m+k-l^-_q}(\beta,l) \leq -\gamma$ then \[ \{ j \in [m-l^-_q,m] \mid \xi_{j+k}(\beta,l) > -\gamma \} \ \ensuremath{\subseteq} \ \ensuremath{\tilde{\Omega}}_{q-2} \ . \] \end{lem} The application of this lemma in order to show the statement of Step 2 is exactly the same as in Section~\ref{Onesidedproof}~. The proof of the lemma is the same as for Lemma~\ref{lem:approaching}, apart from two slight modifications: First of all, Lemma~\ref{lem:basicestimateS} has to be used instead of Lemma~\ref{lem:basicestimate}~. Secondly, in order to show (\ref{eq:approaching}) two cases have to be distinguished. If $s(k) = 1$ nothing changes at all.
For the second case $s(k) = -1$ it suffices just to replace the reference orbit \[ x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \xi_{j^-_i-1}(\beta^*,l^*) \ensuremath{,\ldots,} \xi_{j^+_i}(\beta^*,l^*) \] which is used for the application of Lemma~\ref{lem:orbitsthrowout}(a) by \[ x^1_1 \ensuremath{,\ldots,} x^1_n \ := \ \zeta_{j^-_i-1}(\beta^*,l^*) \ensuremath{,\ldots,} \zeta_{j^+_i}(\beta^*,l^*) \ . \] Then the reference orbit starts on the fibre $\omega_{j^-_i-1}+\ensuremath{\frac{1}{2}}$, and is therefore $\frac{\alpha^{-(q-1)}}{L_2}$-close to the first fibre $\omega_{j^-_i-1+k}$ of the second orbit \[ x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_{j^-_i-1+k}(\beta,l) \ensuremath{,\ldots,} \xi_{j^+_i+k}(\beta,l)\ , \] such that the error term is sufficiently small again. Due to (\ref{eq:xizetaequivalenceI}) and (\ref{eq:xizetaequivalenceII}), all further details then stay exactly the same as in the case $s(k)=1$. The reader should be aware that even though the reference orbit changed, the set of times $R_m$ at which it stays in the expanding region is the same as before. This is all that is needed in order to verify the assumptions of Lemma~\ref{lem:orbitsthrowout}(a), which completes the proof of the lemma. Finally, the additional statement for the reversed inequalities can be shown similarly. {\raggedleft \large $\blacksquare$ \\} \ \\ \underline{\textbf{Step 3:}} \ \ \ \textit{Construction of $I^p_{l^+_p+1} \ensuremath{\subseteq} B_{\alpha^{-p}}(\beta^+_{p-1,\nu(p)}) $ \ .} \ \\ Similarly to Step 3 of Section~\ref{Onesidedproof}, we define $\beta^* := \beta^+_{p-1,\nu(p)}$, $\beta^+:=\beta^*-\alpha^{-p}$ and $\beta^- := \beta^*+ \alpha^{-p}$. It then follows that \begin{equation} \label{step3S:a} \xi_{l^+_p+1}(\beta^+,l^-_p) \ > \ \ensuremath{\frac{1}{\alpha}} \ \ \ \ \textit{ and } \ \ \ \ \ \xi_{l^+_p+1}(\beta^-,l^-_p) \ < \ - \ensuremath{\frac{1}{\alpha}} \ .
\end{equation} The proof is exactly the same as for Claim~\ref{claim:step3}, with reversed inequalities for the case of $\beta^-$. This means that we can define the parameters $\beta^\pm_{p,l^+_p+1}$ by \begin{equation} \label{step3S:c} \beta^-_{p,l^+_p+1} \ := \ \min \left\{ \beta \in B_{\alpha^{-p}}(\beta^*) \mid \xi_{l^+_p+1}(\beta,l^-_p) = -\ensuremath{\frac{1}{\alpha}} \right\} \end{equation} and \begin{equation} \label{step3S:b} \beta^+_{p,l^+_p+1} \ := \ \max \left\{ \beta \in B_{\alpha^{-p}}(\beta^*) \mid \beta < \beta^-_{p,l^+_p+1}, \ \xi_{l^+_p+1}(\beta,l^-_p) = \ensuremath{\frac{1}{\alpha}} \right\} \ . \end{equation} Step 2 then implies that (\ref{eq:indstatementIS}) is satisfied for \[ I^p_{l^+_p+1} \ := \ \left[\beta^+_{p,l^+_p+1},\beta^-_{p,l^+_p+1}\right] \] and as $\beta^* \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$ there holds \begin{equation} \label{step3S:d} \textstyle I^p_{l^+_p+1} \ \ensuremath{\subseteq} \ \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}}-\alpha^{-p},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}+\alpha^{-p}\right] \ . \end{equation} $I^p_{l^+_p+1} \ \ensuremath{\subseteq} \ \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$ will be shown after Step 4. Apart from this the proof of Part I for $q=p$ is complete. {\raggedleft \large $\blacksquare$ \\} \ \\ The next three steps will prove Parts II and III of the induction statement for $q=p$, proceeding by induction on $n \in [l^+_p+1,\nu(p+1)]$. Again we start the induction with $n=l^+_p+1$. \ \\ \underline{\textbf{Step 4:}} \ \ \ \textit{Proof of Part II for $q=p$ and $n=l^+_p+1$.} \ \\ Let $\beta^* := \beta^+_{p-1,\nu(p)}$ again. We will prove the following claim: \begin{claim} \label{claim:step4Sa} Suppose $\beta \in B_{\alpha^{-p}}(\beta^*)$ and $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$.
Then \begin{equation} \label{step4S:a} \xi_j(\beta,l^-_p) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{l^+_p+1} \ . \end{equation} \end{claim} This follows more or less in the same way as Step 4 in Section~\ref{Onesidedproof}. Before we give the details, let us see how this implies the statement of Part II for $q=p$ and $n=l^+_p+1$: In Step 3 we have constructed $I^p_{l^+_p+1} \ensuremath{\subseteq} B_{\alpha^{-p}}(\beta^*)$. Suppose $\beta \in B_{\alpha^{-p}}(\beta^*)$ and $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. Then Step 2 and the above claim ensure that the assumptions (\ref{eq:pmesttimeabove}) and (\ref{eq:pmesttimebelow}) of Lemma~\ref{lem:parameterestimatesS} are satisfied, and we obtain that $\xi_{l^+_p+1}(\beta,l^-_p)$ is decreasing in $\beta$. In particular, this applies to $\beta^+_{p,l^+_p+1}$. Consequently, if we increase $\beta$ starting at $\beta^+_{p,l^+_p+1}$, then $\xi_{l^+_p+1}(\beta,l^-_p)$ will decrease until it leaves the interval $\ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. Due to the definition in (\ref{step3S:c}) this is exactly the case when $\beta^-_{p,l^+_p+1}$ is reached. This yields the required monotonicity on $I^p_{l^+_p+1}$, and (\ref{eq:indstatementIIcS}) then follows from the claim. Note that (\ref{eq:indstatementIS}) is already ensured by Step 2. \ \\ The proof of Claim~\ref{claim:step4Sa} is completely analogous to that of Step 4 in Section~\ref{Onesidedproof}: It will follow in the same way from the two claims below, which correspond to Claims~\ref{claim:step4a} and \ref{claim:step4b}~. \begin{claim} \label{claim:step4Sb} Suppose $\beta \in B_{\alpha^{-p}}(\beta^*)$ and $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. If $\xi_k(\beta,l^-_p) \geq \frac{2}{\alpha}$ for some $k \in R_{l^+_p+1}$ then $\xi_{l^+_p+1}(\beta,l^-_p) \ > \ \ensuremath{\frac{1}{\alpha}}$.
Similarly, if $\xi_k(\beta,l^-_p) \leq -\frac{2}{\alpha}$ then $\xi_{l^+_p+1}(\beta,l^-_p) \ < \ -\ensuremath{\frac{1}{\alpha}}$. \end{claim} For $\xi_k(\beta,l^-_p) \geq \frac{2}{\alpha}$ this can be shown exactly as Claim~\ref{claim:step4a}. In the case $\xi_k(\beta,l^-_p) \leq -\frac{2}{\alpha}$ it suffices just to reverse all inequalities. The analogue of Claim~\ref{claim:step4b} holds as well: \begin{claim} \label{claim:step4Sc} Suppose $\beta \in B_{\alpha^{-p}}(\beta^*)$ and $\xi_{l^+_p+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. If $k\in R_{l^+_p+1},\ k+1\in \Gamma_{l^+_p+1}$ and $\xi_k(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$, then there exists some $\tilde{k} \in R_{l^+_p+1}$ with $\xi_{\tilde{k}}(\beta,l^-_p) \geq \frac{2}{\alpha}$. Similarly, if $\xi_k(\beta,l^-_p) \leq -\ensuremath{\frac{1}{\alpha}}$ then there exists some $\tilde{k} \in R_{l^+_p+1}$ with $\xi_{\tilde{k}}(\beta,l^-_p) \leq -\frac{2}{\alpha}$. \end{claim} \textit{Proof.}\ In order to prove this, we can proceed as in the proof of Claim~\ref{claim:step4b}: Suppose first that $\xi_k(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$ and define $m,\ t$ and $q'$ in exactly the same way. As these definitions only depend on the set $R_{l^+_p+1}$, which is the same as before, there is no difference so far. Only instead of (\ref{eq:domegat}) we obtain \begin{equation} \label{step4S:domegat} d(\omega_t,\{0,{\textstyle \ensuremath{\frac{1}{2}}}\}) \ \leq \ \ensuremath{\frac{1}{4}} \cdot \frac{\alpha^{-(p(m)+1)}}{L_2} \ . \end{equation} Now we can apply Lemma~\ref{lem:approachingS}, in the same way as Lemma~\ref{lem:approaching} was applied in order to obtain (\ref{step4:g}), to conclude that \begin{equation} \label{step4S:b} \{ j \in [-l^-_{p(m)},0] \mid \xi_{j+m+t}(\beta,l^-_{p}) < \gamma \} \ \ensuremath{\subseteq} \ \Omega_\infty \ . \end{equation} For the further argument we have to distinguish two cases.
If $s(m+t)=1$, then we can use exactly the same comparison arguments as in Section~\ref{Onesidedproof} to show that $\xi_{m+t+l^+_{p(m)}+1}(\beta,l^-_p) \geq \frac{2}{\alpha}$ if $p(m)\geq 2$. The details all remain exactly the same. Thus, we can choose $\tilde{k} = m+t+l^+_{p(m)}+1$ if $p(m) \geq 2$ and again $\tilde{k} = m+t+1$ or $m+t+2$ if $p(m)=1$. On the other hand, suppose $s(m+t) = -1$. Then $d(\omega_{m+t},0)\geq\frac{3\gamma}{L_2}$, and in addition (\ref{step4S:b}) implies that $\xi_{m+t}(\beta,l^-_p) \ \geq \ \gamma$. Lemma~\ref{lem:basicestimateS} therefore yields that $\xi_{m+t+1}(\beta,l^-_p) \geq \gamma \geq \frac{2}{\alpha}$, such that we can choose $\tilde{k} = m+t+1$. \ \\ The case $\xi_k(\beta,l^-_p) \leq -\ensuremath{\frac{1}{\alpha}}$ is then treated analogously: First of all, application of Lemma~\ref{lem:approachingS} yields \begin{equation} \label{step4S:c} \{ j \in [-l^-_{p(m)},0] \mid \xi_{j+m+t}(\beta,l^-_p) > -\gamma \} \ \ensuremath{\subseteq} \ \Omega_\infty \ , \end{equation} in particular $\xi_{m+t}(\beta,l^-_p) \ \leq \ -\gamma$. If $s(m+t) = 1$, such that $d(\omega_{m+t},\ensuremath{\frac{1}{2}})\geq\frac{3\gamma}{L_2}$, then Lemma~\ref{lem:basicestimateS} yields that $\xi_{m+t+1}(\beta,l^-_p) \leq -\gamma \leq -\frac{2}{\alpha}$ and we can choose $\tilde{k} = m+t+1$. On the other hand, if $s(m+t) = -1$, then we can again apply similar comparison arguments as in the proof of Claim~\ref{claim:step4b} to conclude that $\xi_{m+t+l^+_{p(m)}+1}(\beta,l^-_p) \leq -\frac{2}{\alpha}$ if $p(m)\geq 2$ (and $\xi_{m+t+1}(\beta,l^-_p) \leq -\frac{2}{\alpha}$ or $\xi_{m+t+2}(\beta,l^-_p) \leq -\frac{2}{\alpha}$ if $p(m)=1$). 
Apart from the reversed inequalities, the only difference now is that the reference orbits $\xi_{-l^-_{p(m)}}(\beta^*,l^-_{q'}),$ $\ldots,$ $\xi_{-1}(\beta^*,l^-_{q'})$ and $\xi_1(\beta^*,l^-_{q'}) \ensuremath{,\ldots,} \xi_{l^+_{p(m)}}(\beta^*,l^-_{q'})$ in (\ref{step4:referencetwo}) and (\ref{step4:i}) have to be replaced by $\zeta_{-l^-_{p(m)}}(\beta^*,l^-_{q'}),$ $\ldots$, $\zeta_{-1}(\beta^*,l^-_{q'})$ and $\zeta_1(\beta^*,l^-_{q'}),$ $\ldots,$ $\zeta_{l^+_{p(m)}}(\beta^*,l^-_{q'})$, respectively. Due to (\ref{eq:xizetaequivalenceI}) and (\ref{eq:xizetaequivalenceII}), all other details remain exactly the same as before, with (\ref{cond:sharppeak}) being replaced by (\ref{cond:symmetricpeak}). {\raggedleft $\Box$ \\} {\raggedleft \large $\blacksquare$ \\} \ \\ Now we can also show that $I^p_{l^+_p+1} \ensuremath{\subseteq} \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$, which completes the proof of Part I of the induction statement for $p$. Suppose that $\beta \in I^p_{l^+_p+1}$. Then, due to Step~2 and the construction of $I^p_{l^+_p+1} \ensuremath{\subseteq} B_{\alpha^{-p}}(\beta^+_{p-1,\nu(p)})$ in Step 3, (\ref{eq:indstatementIS}) holds, such that in particular $\xi_0(\beta,l^-_p) \geq \gamma$. Thus, it follows from (\ref{cond:Ffixedpoints}) and (\ref{cond:Fcontraction}) that \[ \xi_1(\beta,l^-_p) \ \in \ \left[1+\frac{3}{2\ensuremath{\sqrt{\alpha}}}-\beta,1+\frac{3}{\ensuremath{\sqrt{\alpha}}}-\ensuremath{\frac{1}{\alpha}}-\beta\right] . \] As Step 4 yields that $\xi_1(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ and $\frac{1}{2\ensuremath{\sqrt{\alpha}}} \geq \ensuremath{\frac{1}{\alpha}}$, this implies $\beta \in \left[1+\ensuremath{\frac{1}{\sqrt{\alpha}}},1+\frac{3}{\ensuremath{\sqrt{\alpha}}}\right]$ as required. 
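Spelled out, the two estimates used here are \[ \beta \ \geq \ 1+\frac{3}{2\ensuremath{\sqrt{\alpha}}} - \xi_1(\beta,l^-_p) \ \geq \ 1+\frac{3}{2\ensuremath{\sqrt{\alpha}}} - \ensuremath{\frac{1}{\alpha}} \ \geq \ 1+\ensuremath{\frac{1}{\sqrt{\alpha}}} \] and \[ \beta \ \leq \ 1+\frac{3}{\ensuremath{\sqrt{\alpha}}} - \ensuremath{\frac{1}{\alpha}} - \xi_1(\beta,l^-_p) \ \leq \ 1+\frac{3}{\ensuremath{\sqrt{\alpha}}} \ , \] where the first line uses $\xi_1(\beta,l^-_p) \leq \ensuremath{\frac{1}{\alpha}}$ together with $\frac{1}{2\ensuremath{\sqrt{\alpha}}} \geq \ensuremath{\frac{1}{\alpha}}$, and the second line uses $\xi_1(\beta,l^-_p) \geq -\ensuremath{\frac{1}{\alpha}}$.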
\ \\ \ \\ \underline{\textbf{Step 5:}} \ \ \ \textit{Part II of the induction statement implies Part III} \ \\ As in Section~\ref{Onesidedproof}, we suppose that Part II with $q=p$ holds for all $n\leq N$, with $N \in [l^+_p+1,\nu(p+1)]$, and show that in this case Part III(a) holds as well whenever $n_1,n_2 \leq N$ and similarly Part III(b) holds whenever $n_2 \leq N$. \ \\ Let $n_1 \leq N$ be admissible. As we assume that Part II of the induction statement with $q=p$ holds for $n=n_1$, we can use Lemma~\ref{lem:parameterestimatesS} to see that $\frac{\partial}{\partial \beta} \xi_{n_1}(\beta,l^-_p) \ \leq \ -\alpha^{\frac{n_1}{4}}$ for all $\beta \in I^p_{n_1}$, which implies (\ref{eq:indstatementIIIaS}). Then (\ref{eq:indstatementIIIbS}) is a direct consequence of (\ref{eq:indstatementIIaS}). This proves Part III(a). Part III(b) follows in the same way as in Step 3 of Section~\ref{Onesidedproof}~. {\raggedleft \large $\blacksquare$ \\} \ \\ \underline{\textbf{Step 6:}} \ \ \ \textit{Proof of Part II for $q=p$.} \ \\ In order to prove Part II of the induction statement for $q=p$, we proceed by induction on $n$. In Step 4 we already constructed $I^p_{l^+_p+1}$ with the required properties. Now suppose that $I^p_n$ has been constructed for all admissible $n \in [l^+_p+1,N]$, where $N \in [l^+_p+1,\nu(p+1)-1]$. We now have to construct $I^p_{N+1}$ with the required properties, provided $N+1$ is admissible. Again, the case where $N$ is admissible as well is rather easy: In this case $p(N)=0$, otherwise $N+1$ would be contained in $J(N)$. Therefore Lemma~\ref{lem:basicestimateS} yields that \begin{equation} \label{step6S:a} \xi_{N+1}(\beta^+_{p,N},l^-_p) \ > \ \ensuremath{\frac{1}{\alpha}} \end{equation} and \begin{equation} \label{step6S:b} \xi_{N+1}(\beta^-_{p,N},l^-_p) \ < \ -\ensuremath{\frac{1}{\alpha}} \ . 
\end{equation} Consequently, we can find $\beta^\pm_{p,N+1} \in I^p_N$ which satisfy (\ref{eq:betapluspndefS}) and (\ref{eq:betaminpndefS}), such that $I^p_{N+1} = [\beta^+_{p,N+1},\beta^-_{p,N+1}] \ensuremath{\subseteq} I^p_N$. Note that $R_N = R_{N+1}\setminus\{N+1\}$ by (\ref{eq:regularstabilized}). Therefore Part II of the induction statement for $n=N$ implies that we can apply Lemma~\ref{lem:parameterestimatesS} to any $\beta \in I^p_{N+1}$, and this yields the monotonicity of $\xi_{N+1}(\beta,l^-_p)$ on $I^p_{N+1}$. All other required statements for $n=N+1$ then follow directly from Part II of the induction statement for $n=N$. It remains to treat the case where $N+1$ is admissible but $N$ is not admissible. As in Step 6 of Section~\ref{Onesidedproof} we have to consider the interval $J \in \J_{N+1}$ which contains $N$, i.e.\ $J = [t,N]$ with $t:=\lambda^-(m_J)$. In order to construct $I^p_{N+1}$ inside of $I^p_{t-1}$ we prove the following claim (compare Claim~\ref{claim:step6a}): \begin{claim} \label{claim:step6Sa} $\xi_{N+1}(\beta^+_{p,t-1},l^-_p) > \ensuremath{\frac{1}{\alpha}}$ and $\xi_{N+1}(\beta^-_{p,t-1},l^-_p) < -\ensuremath{\frac{1}{\alpha}}$. \end{claim} \textit{Proof.}\ We only give an outline here; the details can be checked exactly as in the proof of Claim~\ref{claim:step6a}~. Note that it sufficed there to show~(\ref{step6:a}), such that the problem is analogous. Let $\beta^+:=\beta^+_{p,t-1}$ and $m:=m_J$. First, we can apply Lemma~\ref{lem:approachingS} with $q=p(m)$, $l=l^*=l^-_p$, $\beta^*=\beta^+_{p,m}$, $m$ as above, $k=0$ and $\beta=\beta^+$ to obtain that \begin{equation} \label{step6S:c} \{ j \in [-l^-_{p(m)},0] \mid \xi_{j+m}(\beta^+,l^-_p) < \gamma \} \ \ensuremath{\subseteq} \ \Omega_\infty \end{equation} (compare (\ref{step6:b})--(\ref{step6:g})). Then we have to distinguish two cases. If $s(m)=1$, we can proceed as in the proof of Claim~\ref{claim:step6a} to show that $\xi_{N+1}(\beta^+,l^-_p) \geq \frac{2}{\alpha}$. 
On the other hand suppose $s(m)=-1$, such that $d(\omega_{m_J},0) \geq \frac{4\gamma}{L_2}$. In this case (\ref{step6S:c}) implies in particular that $\xi_{m}(\beta^+,l^-_p) \geq \gamma$, and Lemma~\ref{lem:basicestimateS} therefore yields $\xi_{m+1}(\beta^+,l^-_p) \geq \gamma \geq \frac{2}{\alpha}$. Similar to the case $s(m)=1$ we can now compare the orbits \begin{equation} \label{step6S:referenceone} x_1^1 \ensuremath{,\ldots,} x^1_n \ := \ \zeta_{1}(\beta^+,l^-_p) \ensuremath{,\ldots,} \zeta_{l^+_{p(m)}}(\beta^+,l^-_p) \end{equation} and \begin{equation} \label{step6S:newone} x^2_1 \ensuremath{,\ldots,} x^2_n \ := \ \xi_{m+1}(\beta^+,l^-_p) \ensuremath{,\ldots,} \xi_N(\beta^+,l^-_p) \ , \end{equation} (see (\ref{step6:referencetwo}) and (\ref{step6:newtwo})), with the difference that it suffices to use Lemma~\ref{lem:orbitsthrowout}(a) instead of (b). Note that the information we have about the orbit (\ref{step6S:referenceone}) is exactly the same as for the orbit (\ref{step6:referencetwo}) (see (\ref{eq:xizetaequivalenceI})). Thus, we also obtain $\xi_{N+1}(\beta^+,l^-_p) > \ensuremath{\frac{1}{\alpha}}$ in this case. The proof for $\xi_{N+1}(\beta^-,l^-_p) < -\ensuremath{\frac{1}{\alpha}}$ is then analogous. This time, it suffices to use Lemma~\ref{lem:orbitsthrowout}(a) for the case $s(m)=1$, whereas Lemma~\ref{lem:orbitsthrowout}(b) has to be invoked in order to compare the orbits $x_1^1 \ensuremath{,\ldots,} x^1_n$ $:=$ $\zeta_{1}(\beta^+,l^-_p),$ $\ldots,$ $\zeta_{l^+_{p(m)}}(\beta^+,l^-_p)$ and $x^2_1 \ensuremath{,\ldots,} x^2_n$ $:=$ $\xi_{m+1}(\beta^+,l^-_p),$ $\ldots,$ $\xi_N(\beta^+,l^-_p)$ in case $s(m)=-1$, but the details for the application are again the same as before. 
{\raggedleft $\Box$ \\} \ \\ Using the above claim, we see that \begin{equation} \label{step6S:betamindef} \beta^-_{p,N+1} \ := \ \min\left\{\beta\in I^p_{t-1} \mid \xi_{N+1}(\beta,l^-_p) = -\ensuremath{\frac{1}{\alpha}} \right\} \end{equation} and \begin{equation} \label{step6S:betaplusdef} \beta^+_{p,N+1} \ := \ \max\left\{\beta\in I^p_{t-1} \mid \beta < \beta^-_{p,N+1},\ \xi_{N+1}(\beta,l^-_p) = \ensuremath{\frac{1}{\alpha}} \right\} \end{equation} are well defined, such that $I^p_{N+1} := [\beta^+_{p,N+1},\beta^-_{p,N+1}] \ensuremath{\subseteq} I^p_{t-1}$. Then, due to Part II of the induction statement for $n=t-1$, (\ref{eq:indstatementIS}) holds for all $\beta \in I^p_{N+1}$ and similarly \begin{equation} \label{step6S:d} \xi_j(\beta,l^-_p) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{t-1} \end{equation} whenever $\beta \in I^p_{N+1}$. As $R_{N+1} = R_{t-1} \cup R(J) \cup \{N+1\}$, it remains to obtain information about $R(J)$. Thus, in order to complete this step we need the following claim, which is the analog of Claim~\ref{claim:step6b}: \begin{claim} \label{claim:step6Sb} Suppose $\beta\in I^p_{N+1}$ and $\xi_{N+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. Then $\xi_j(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$ $\forall j \in R(J)$. \end{claim} Similar to Claim~\ref{claim:step6b}, this follows from two further claims, which are the analogues of Claims~\ref{claim:step6c} and \ref{claim:step6d}~. Before we state them, let us see how we can use Claim~\ref{claim:step6Sb} in order to complete the induction step $N \ensuremath{\rightarrow} N+1$ and thereby the proof of Step 6: Suppose that $\beta \in I^p_{N+1}$ and $\xi_{N+1}(\beta,l^-_p) \in \ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$. 
Then (\ref{step6S:d}) together with the claim imply that \begin{equation} \label{step6S:e} \xi_j(\beta,l^-_p) \ \in \ \ensuremath{\overline{B_\frac{1}{\alpha}(0)}} \ \ \ \ \ \forall j \in R_{N+1} \ . \end{equation} In addition (\ref{eq:indstatementIS}) holds, as mentioned before (\ref{step6S:d}). Consequently, Lemma~\ref{lem:parameterestimatesS} (with $q=p$ and $n=N+1$) implies that \[ \frac{\partial}{\partial \beta} \xi_{N+1}(\beta,l^-_p) \ \leq \ - \alpha^{\frac{N}{4}} \ . \] In particular, this is true for $\beta=\beta^+_{p,N+1}$, and when $\beta$ is increased it will remain true until $\xi_{N+1}(\beta,l^-_p)$ leaves $\ensuremath{\overline{B_\frac{1}{\alpha}(0)}}$, i.e.\ all up to $\beta^-_{p,N+1}$. This proves the required monotonicity of $\beta \mapsto \xi_{N+1}(\beta,l^-_p)$ on $I^p_{N+1}$, and thus Part II of the induction statement holds for $n=N+1$. \begin{claim} \label{claim:step6Sc} Suppose $\xi_k(\beta,l^-_p) \geq \frac{2}{\alpha}$ for some $k \in R(J)$. Then $\xi_{N+1}(\beta,l^-_p) > \ensuremath{\frac{1}{\alpha}}$. Similarly, if $\xi_k(\beta,l^-_p) \leq -\frac{2}{\alpha}$ then $\xi_{N+1}(\beta,l^-_p) < -\ensuremath{\frac{1}{\alpha}}$. \end{claim} This is proved exactly as Claim~\ref{claim:step6c}, with all inequalities reversed for the case $\xi_k(\beta,l^-_p) \leq -\frac{2}{\alpha}$. \begin{claim} \label{claim:step6Sd} Suppose $k\in R(J)$, $k+1 \in \Gamma^+(J)$ and $\xi_k(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$. Then there exists some $\tilde{k} \in R(J)$ with $\xi_{\tilde{k}}(\beta,l^-_p) \geq \frac{2}{\alpha}$. Similarly, if $\xi_k(\beta,l^-_p) \leq -\ensuremath{\frac{1}{\alpha}}$ there exists some $\tilde{k} \in R(J)$ with $\xi_{\tilde{k}}(\beta,l^-_p) \leq -\frac{2}{\alpha}$. \end{claim} \textit{Proof.}\ This can be shown in the same way as Claim~\ref{claim:step6d}: Suppose first that $\xi_k(\beta,l^-_p) \geq \ensuremath{\frac{1}{\alpha}}$ and define $m,\ t$ and $q'$ as in the proof of Claim~\ref{claim:step6d}. 
As these definitions only depend on the sets of regular points, which are the same as before, there is no difference so far. Only instead of (\ref{step6:j}) we obtain \begin{equation} \label{step6S:f} d(\omega_t,\{0,{\textstyle\ensuremath{\frac{1}{2}}}\}) \ \leq \ \ensuremath{\frac{1}{4}} \cdot \frac{\alpha^{-(p(m)+1)}}{L_2} \end{equation} Nevertheless, we can apply Lemma~\ref{lem:approachingS}, in the same way as Lemma~\ref{lem:approaching} was applied in order to obtain (\ref{step6:o}), to conclude that \begin{equation} \label{step6S:g} \{ j \in [-l^-_{p(m)},0] \mid \xi_{j+m+t}(\beta,l^-_p) < \gamma \} \ \ensuremath{\subseteq} \ \Omega_\infty \end{equation} (compare (\ref{step6:k})--(\ref{step6:o})). For the further argument we have to distinguish two cases. If $s(m+t)=1$ and $p(m) \geq 2$, then we can use exactly the same comparison arguments as for Claim~\ref{claim:step6d} (compare (\ref{step6:referencefour})--(\ref{step6:s})) to show that $\xi_{m+t+l^+_{p(m)}+1}(\beta,l^-_p) \geq \frac{2}{\alpha}$. The details all remain exactly the same. Thus, we can choose $\tilde{k} = m+t+l^+_{p(m)}+1$ if $p(m)\geq 2$, and similarly $\tilde{k} = m+t+1$ or $m+t+2$ if $p(m)=1$. On the other hand, suppose $s(m+t) = -1$. Then $d(\omega_{m+t},0)\geq\frac{3\gamma}{L_2}$, and in addition (\ref{step6S:g}) implies that $\xi_{m+t}(\beta,l^-_p) \ \geq \ \gamma$. Lemma~\ref{lem:basicestimateS} therefore yields that $\xi_{m+t+1}(\beta,l^-_p) \geq \gamma \geq \frac{2}{\alpha}$, such that we can choose $\tilde{k} = m+t+1$. \ \\ The case $\xi_k(\beta,l^-_p) \leq -\ensuremath{\frac{1}{\alpha}}$ is then treated analogously: First of all, application of Lemma~\ref{lem:approachingS} yields \begin{equation} \label{step6S:h} \{ j \in [-l^-_{p(m)},0] \mid \xi_{j+m+t}(\beta,l^-_p) > -\gamma \} \ \ensuremath{\subseteq} \ \Omega_\infty \ , \end{equation} in particular $\xi_{m+t}(\beta,l^-_p) \ \leq \ -\gamma$. 
If $s(m+t) = 1$, such that $d(\omega_{m+t},\ensuremath{\frac{1}{2}})\geq\frac{3\gamma}{L_2}$, then Lemma~\ref{lem:basicestimateS} yields that $\xi_{m+t+1}(\beta,l^-_p) \leq -\gamma \leq -\frac{2}{\alpha}$ and we can choose $\tilde{k} = m+t+1$. On the other hand, if $s(m+t) = -1$, then we can again apply similar comparison arguments as in the proof of Claim~\ref{claim:step6d} (compare (\ref{step6:referencefour})--(\ref{step6:s})) to conclude that $\xi_{m+t+l^+_{p(m)}+1}(\beta,l^-_p) \leq -\frac{2}{\alpha}$ if $p(m)\geq 2$ (and again $\xi_{m+t+1}(\beta,l^-_p) \leq -\frac{2}{\alpha}$ or $\xi_{m+t+2}(\beta,l^-_p) \leq -\frac{2}{\alpha}$ if $p(m)=1$). Apart from the reversed inequalities, the only difference now is that the reference orbits $\xi_{-l^-_{p(m)}}(\beta^*,l^-_{q'}),$ $\ldots,$ $\xi_{-1}(\beta^*,l^-_{q'})$ and $\xi_1(\beta^*,l^-_{q'}) \ensuremath{,\ldots,} \xi_{l^+_{p(m)}}(\beta^*,l^-_{q'})$ in (\ref{step6:referencefour}) and (\ref{step6:q}) have to be replaced by $\zeta_{-l^-_{p(m)}}(\beta^*,l^-_{q'}),$ $\ldots$, $\zeta_{-1}(\beta^*,l^-_{q'})$ and $\zeta_1(\beta^*,l^-_{q'}),$ $\ldots,$ $\zeta_{l^+_{p(m)}}(\beta^*,l^-_{q'})$, respectively. Due to (\ref{eq:xizetaequivalenceI}) and (\ref{eq:xizetaequivalenceII}), all other details remain exactly the same as before. {\raggedleft $\Box$ \\} {\raggedleft \large $\blacksquare$ \\}
\section{Introduction} Let $G$ be a simple graph of order $n$ with vertex set $V$. We denote two adjacent vertices $v,w$ as $v\sim w$. A \textit{coprime labeling} of $G$ is a labeling of $V$ using distinct labels from the set $\{1,\ldots,m\}$ for some integer $m\geq n$ in which adjacent vertices are labeled by relatively prime integers. If the integers $1,\ldots, n$ are used as the labeling set, the labeling is called a \textit{prime labeling}, and $G$ is a \textit{prime graph} or is simply referred to as \textit{prime}. For graphs for which no prime labeling exists, our goal is to minimize the value of $m$, the largest label in the coprime labeling. This smallest possible value $m$ for a coprime labeling of $G$, denoted by $\mathfrak{pr}(G)$, is the \textit{minimum coprime number} of $G$, and a coprime labeling with $\mathfrak{pr}(G)$ as the largest label is a \textit{minimum coprime labeling} of $G$. A prime graph would have a minimum coprime number of $\mathfrak{pr}(G)=n$. Prime labelings of graphs were developed by Roger Entringer and first introduced by Tout, Dabboucy, and Howalla~\cite{TDH}. Numerous classes of graphs over the past forty years have been shown to be prime, as well as many classes for which a prime labeling has been shown to not exist. A summary of these results can be seen in Gallian's dynamic survey of graph labelings~\cite{Gallian}. Most of our upcoming results center around the concept of minimum coprime labelings, which were first studied by Berliner et al.~\cite{Berliner} with their investigation of complete bipartite graphs of the form $K_{n,n}$. Asplund and Fox~\cite{AF} continued this line of research by determining the minimum coprime number for classes of graphs such as complete graphs, wheels, the union of two odd cycles, the union of a complete graph with a path or a star, powers of paths and cycles, and the join of paths and cycles. 
Recently, Lee~\cite{Lee} made further progress on the minimum coprime number of the join of paths and complete bipartite graphs, in addition to investigating minimum coprime numbers of random subgraphs. The focus of this paper is to determine the minimum coprime number of prism graphs, which are equivalent to the Cartesian product of a cycle of length $n$ and a path with $2$ vertices, denoted as $C_n\square P_2$. Additionally, a prism graph is equivalent to the generalized Petersen graph when $k=1$. In the next section, we include preliminary material regarding the classes of graphs we will investigate and previous research on prime labelings of these graphs. In Section 3, we construct minimum coprime labelings of prism graphs $GP(n,1)$ for several specific cases of odd $n$ as well as present a conjecture for all sizes of odd prisms. Section 4 includes results on the minimum coprime number of the generalized Petersen graph $GP(n,2)$, a graph which is not prime for any value $n$. Section 5 consists of results on the minimum coprime number of stacked prism graphs, and finally we investigate a variation of a generalized Petersen graph in Section 6. \section{Preliminary Material} An important feature of a graph $G$ that aids in determining whether a prime labeling may exist or if a minimum coprime labeling should instead be investigated is its \textit{independence number}, denoted by~$\alpha(G)$. Since all even labels must be assigned to an independent set of vertices, the following criterion, first introduced in~\cite{FH}, eliminates the possibility of a prime labeling on many classes of graphs. \begin{lemma}{\normalfont \cite{FH}}\label{ind} If $G$ is prime, then the independence number of $G$ must satisfy $\alpha(G)\geq \left\lfloor \frac{|V(G)|}{2}\right\rfloor$. \end{lemma} The generalized Petersen graph, denoted $GP(n,k)$ where $n\geq 3$ and $1\leq k \leq \lfloor (n-1)/2\rfloor$, consists of $2n$ vertices $v_1,\ldots, v_n, u_1,\ldots, u_n$. 
It has $3n$ edges described by $v_i\sim v_{i+1}$, $u_i\sim u_{i+k}$, and $v_i\sim u_i$ where indices are calculated modulo $n$. In the particular case of $k=1$, the two sets of vertices form $n$-gons that are connected to form a prism graph, which will be our first graph that we investigate. When $n$ is odd, $GP(n,1)$ consists of two odd cycles connected by a perfect matching. Only $(n-1)/2$ vertices on each cycle can be independent, hence $\alpha(GP(n,1))=n-1$ for odd $n$. Then by Lemma~\ref{ind}, $GP(n,1)$ is not prime in this case, a property which extends to any value of $k$ when $n$ is odd. In fact, $GP(n,k)$ was proven to not be prime in~\cite{PG} for any odd value of $n$ as well as when $n$ and $k$ are both even. Independence numbers for generalized Petersen graphs for certain cases have been determined~\cite{BHM,BEA,FGS} that help provide bounds for the minimum coprime numbers of $GP(n,k)$ in the non-prime cases. The remaining case of $GP(n,k)$ with $n$ even and $k$ odd is conjectured to be prime for all such $n$ and $k$. When $k=1$, the prism graph $GP(n,1)$ has been proven to be prime in many specific cases in~\cite{HLYZ} such as when $2n+a$ or $n+a$ are prime for several small values of $a$. Additional cases of $GP(n,1)$ were proven to be prime in~\cite{PG}. Dean~\cite{dean} proved the conjecture that all ladders are prime. Since ladders are simply prism graphs with two edges removed, one might expect his prime labeling to carry over to $GP(n,1)$. However, when applying this labeling to $GP(n,1)$, these two additional edges do not maintain the relatively prime condition for all~$n$. While some results have been found on $GP(n,3)$ in~\cite{HLYZ2}, most work involving prime labelings of the generalized Petersen graph has been focused on the prism graph. In~\cite{SSCEHRW} a number theoretic conjecture was made to bolster the conjecture that $GP(n,1)$ is prime for all even $n$. 
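Labelings of the kind constructed in this paper can also be verified mechanically. The following is a minimal sketch (the function names and vertex encoding are ours, purely illustrative): it builds the edge set of $GP(n,k)$ from the definition above, checks whether a given labeling is a coprime labeling, and brute-forces the independence number of a small instance.

```python
from math import gcd
from itertools import combinations

def gp_edges(n, k):
    """Edge list of the generalized Petersen graph GP(n, k):
    outer cycle v_i ~ v_{i+1}, spokes v_i ~ u_i, inner edges u_i ~ u_{i+k}."""
    edges = []
    for i in range(n):
        edges.append((("v", i), ("v", (i + 1) % n)))  # outer cycle
        edges.append((("v", i), ("u", i)))            # spoke
        edges.append((("u", i), ("u", (i + k) % n)))  # inner edges
    return edges

def is_coprime_labeling(labels, edges):
    """Labels are distinct and adjacent vertices receive relatively prime labels."""
    if len(set(labels.values())) != len(labels):
        return False
    return all(gcd(labels[a], labels[b]) == 1 for a, b in edges)

def independence_number(edges):
    """Brute-force independence number (suitable for small graphs only)."""
    vertices = sorted({v for e in edges for v in e})
    adj = {frozenset(e) for e in edges}
    for size in range(len(vertices), 0, -1):
        for s in combinations(vertices, size):
            if all(frozenset(p) not in adj for p in combinations(s, 2)):
                return size
    return 0

edges = gp_edges(5, 1)  # the pentagonal prism GP(5,1)
# alpha(GP(5,1)) = 4 < floor(10/2) = 5, so GP(5,1) is not prime by the lemma of [FH].
print(independence_number(edges))  # 4
# A coprime labeling with largest label 2n + 1 = 11:
labels = {("v", i): i + 1 for i in range(5)}
labels.update({("u", i): x for i, x in enumerate([6, 7, 8, 9, 11])})
print(is_coprime_labeling(labels, edges))  # True
```

The brute-force independence routine is exponential and is only meant for spot checks on small instances.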
Conjecture 2.1 in~\cite{SSCEHRW} stated that for any even integer $n$, there exists an $s\in [1,n-1]$ such that $n+s$ and $2n+s$ are prime. By verifying this conjecture for all even $n$ up to $2.468\times 10^9$, they demonstrated that $GP(n,1)$ is prime for all even $n$ up to that value. We conclude this section with the following observations, which will be used without citation in many of the theorems throughout this paper. \begin{obs}\label{easy} For positive integers $a,b$, and $k$, the following hold true: \begin{packedItem} \item $\gcd\{a,b\}=\gcd\{ka,b\}$. \item $\gcd\{a,b\}=\gcd\{a-b,b\}$. \item $\gcd\{a,b\}=\gcd\{a+b,b\}$. \item If $a+b$ is prime, then $\gcd\{a,b\}=1$. \item If $a-b$ is a prime $p$ and neither $a$ nor $b$ is a multiple of $p$, then $\gcd\{a,b\}=1$. \end{packedItem} \end{obs} \section{Prism Graphs} In this section, we provide several specific results for the minimum coprime number of $GP(n,1)$ and conjecture that $\mathfrak{pr}(GP(n,1))=2n+1$ for all odd $n$. Many of the theorems follow a similar proof strategy, but a general construction extended from our techniques seems unlikely without the resolution of longstanding number theory conjectures. See Figure~\ref{gp11_1} for an example of our first result showing a minimum coprime labeling for the prism $GP(11,1)$. \begin{theorem}\label{n} If $n$ is prime, then $\mathfrak{pr}(GP(n,1))=2n+1$. \end{theorem} \begin{figure}[htb] \centering \includegraphics[width=2in]{gp11_1.pdf} \caption{Minimum coprime labeling for GP(11,1)} \label{gp11_1} \end{figure} \begin{proof} Label $v_1,\ldots, v_n$ as $1,\ldots, n$ and $u_1,\ldots, u_n$ with $n+1,\ldots, 2n-1, 2n+1$, respectively. All adjacent pairs in $\{v_1,\ldots,v_n\}$ and in $\{u_1,\ldots,u_n\}$ have consecutive labels except for $v_1v_n$, $u_{n-1}u_n$, and $u_1u_n$. The first pair includes $1$ as one of the labels, and the second pair is labeled by consecutive odd labels. 
Lastly, the labels on $u_1$ and $u_n$ are relatively prime since $\gcd\{n+1,2n+1\}=\gcd\{2n+2,2n+1\}=1$. It remains to show that the labels on $u_i$ and $v_i$ are relatively prime for each $i$. For $i\leq n-1$, the difference between the labels on $v_i$ and $u_i$ is $n$. Since $n$ is assumed to be prime, these pairs are relatively prime by Observation~\ref{easy}. Finally, when $i=n$, we have $\gcd\{2n+1,n\}=\gcd\{n+1,n\}=1$. Therefore, this is a coprime labeling, hence $\mathfrak{pr}(GP(n,1))\leq 2n+1$. Since the independence number of $GP(n,1)$ is $n-1$ when $n$ is odd, a prime labeling is not possible, making $\mathfrak{pr}(GP(n,1))>2n$. Thus when $n$ is prime, we have $\mathfrak{pr}(GP(n,1))=2n+1$. \end{proof} \begin{theorem}\label{n+2} If $n+2$ is prime, then $\mathfrak{pr}(GP(n,1))=2n+1$. \end{theorem} \begin{proof} We will construct a labeling $\ell$ in the following manner. Label $v_1,\ldots,v_n$ with the numbers $1,\ldots,n$, respectively, and the vertices $u_1,\ldots,u_n$ with the labels $n+3,\ldots, 2n+1, n+2$, respectively. Adjacent vertices in $\{v_1,\ldots,v_n\}$ have consecutive labels or include the label~$1$, and so $\gcd\{\ell(v_i),\ell(v_{i+1})\}=\gcd\{\ell(v_1),\ell(v_n)\}=1$. Adjacent vertices in $\{u_1,\ldots,u_n\}$ have consecutive labels or carry the labels $n+2$ and $2n+1$, which are relatively prime since $n+2$ is prime and $n+2<2n+1<2(n+2)$. Hence the labels between pairs of adjacent vertices in $\{u_1,\ldots,u_n\}$ are relatively prime. For $i=1,\ldots, n-1$, since the difference of the labels on vertices $u_i$ and $v_i$ is $n+2$, which is prime, $\gcd\{\ell(u_i),\ell(v_i)\}=1$. Lastly, $u_n$ and $v_n$ are labeled by consecutive odd integers. Thus, we have a coprime labeling that is minimal since $GP(n,1)$ is not prime for odd $n$, and therefore $\mathfrak{pr}(GP(n,1))=2n+1$ assuming $n+2$ is prime. \end{proof} \begin{theorem}\label{2n+1} If $2n+1$ is prime, then $\mathfrak{pr}(GP(n,1))=2n+1$. 
\end{theorem} \begin{proof} We construct a labeling $\ell$ by first labeling $v_1,\ldots, v_n$ as $1,\ldots, n$ as in the last theorem, but we label $u_1,\ldots, u_n$ in reverse order as $2n+1, 2n-1, 2n-2,\ldots,n+1$. Edges connecting vertices in $\{v_1,\ldots,v_n\}$ have consecutive labels or contain the label $1$, and so $\gcd\{\ell(v_i),\ell(v_{i+1})\}=\gcd\{\ell(v_1),\ell(v_n)\}=1$. Edges between vertices in $\{u_1,\ldots,u_n\}$ have vertices with consecutive labels, with consecutive odd integer labels, or with the pair of labels $n+1$ and $2n+1$, and so the labels on these adjacent vertices are relatively prime. For each $i=2,\ldots, n$, since $\ell(u_i)+\ell(v_i)=2n+1$, we know that $\gcd\{\ell(u_i),\ell(v_i)\}=1$ by Observation~\ref{easy} because $2n+1$ is prime. Finally, the edge $u_1v_1$ includes the label $1$ on $v_1$. Thus, this is a coprime labeling that shows $\mathfrak{pr}(GP(n,1))=2n+1$ when $2n+1$ is prime. \end{proof} \begin{theorem}\label{2n-1} If $2n-1$ is prime, then $\mathfrak{pr}(GP(n,1))=2n+1$. \end{theorem} \begin{proof} Notice that when $n\equiv 2\pmod{3}$, then $2n-1$ is divisible by 3, so we assume $n\not\equiv 2\pmod{3}$. If $n\equiv 0\pmod{3}$, then we use the labeling defined in Table~\ref{tab:2n-1} where the top row represents $v_1,\ldots,v_n$ and the bottom row represents $u_1,\ldots,u_n$. One can see the pairs $u_iu_{i+1}$ and $v_iv_{i+1}$ have relatively prime labels, where $\gcd\{\ell(u_1),\ell(u_n)\}=\gcd\{2n+1,2n-2\}=1$ since $n$ is a multiple of $3$ in this case. The pairs $u_iv_i$ have relatively prime labels for trivial reasons or because their sum is $2n-1$, which is assumed to be prime. If $n\equiv 1\pmod{3}$, then use the same labeling in Table~\ref{tab:2n-1} except $2n-2$ is labeled as $2n$ instead. This is a coprime labeling for similar reasoning as our first case, except now the pair of labels $2n$ and $2n-3$ on $u_1$ and $u_2$, respectively, are relatively prime since $n\equiv 1\pmod{3}$. 
In each case, the labeling is a minimum coprime labeling, proving $\mathfrak{pr}(GP(n,1))=2n+1$ assuming $2n-1$ is prime. \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|} \hline 1& 2 & $\cdots$ & $n-1$ & $2n-1$ \\ \hline $2n-2$ & $2n-3$ & $\cdots$ & $n$ & $2n+1$ \\ \hline \end{tabular} \caption{Labeling for Theorem~\ref{2n-1}} \label{tab:2n-1} \end{table} \end{proof} For further results regarding minimum coprime labelings of prism graphs in other specific cases, see Appendix A. Using Theorems~\ref{n}-\ref{2n-1}, along with Theorems~\ref{n+4}-\ref{n+6} in the Appendix, an explicit minimum coprime labeling is given for $GP(n,1)$ for all $n$ up to 1641. The following is a more general construction assuming a particular pair of prime numbers exists. \begin{theorem}\label{conj} Let $n\geq 3$ be odd. If there exists an $s\in [3,n-1]$ such that $n+s+1$ and $2n+s+2$ are prime, then $\mathfrak{pr}(GP(n,1))= 2n+1$. \end{theorem} \begin{proof} We use the labeling defined in Table~\ref{s-conj} where the top row represents the vertices $v_1,\ldots,v_n$ and the bottom row represents the vertices $u_1,\ldots,u_n$. The vertex pairs on edges of the form $u_1v_1$, $u_iu_{i+1}$, and $v_iv_{i+1}$ either contain the label 1, are labeled by consecutive integers, are labeled by consecutive odd integers, or carry the relatively prime pair $n+1$ and $2n+1$. The adjacent pairs $u_iv_i$ for $i=2,\ldots, s$ have labels that add to $n+s+1$, and the pairs $u_iv_i$ for $i=s+1, \ldots, n$ are labeled by integers whose sum is $2n+s+2$. Since both of these sums are assumed to be prime, the labels on those pairs are relatively prime as well. Thus, since $GP(n,1)$ is not prime when $n$ is odd, we have constructed a minimum coprime labeling proving that $\mathfrak{pr}(GP(n,1))=2n+1$ if such a value $s$ exists. 
\end{proof} \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline 1& 2 & $\cdots$ & $s-1$ & $s$ & $s+1$ & $s+2$ & $\cdots$ & $n-1$ & $n$ \\ \hline $n+s+1$ & $n+s-1$ & $\cdots$ & $n+2$ & $n+1$ & $2n+1$ & $2n$ & $\cdots$ & $n+s+3$ & $n+s+2$ \\ \hline \end{tabular} \caption{Labeling for Theorem~\ref{conj}} \label{s-conj} \end{table} Recall that Conjecture 2.1 in~\cite{SSCEHRW} states that for all even integers $n$, there is an $s<n$ such that $n+s$ and $2n+s$ are both prime. If this is true for all even integers, then the previous theorem would prove the subsequent conjecture for all odd $n$ since applying Conjecture 2.1 to the even integer $n+1$ would result in $n+s+1$ and $2n+s+2$ being prime. Results in~\cite{SSCEHRW} combine with Theorem~\ref{conj} to confirm the following conjecture for odd $n<2.468\times 10^9$. \begin{conjecture} For all odd $n\geq 3$, $\mathfrak{pr}(GP(n,1))=2n+1$. \end{conjecture} \section{Generalized Petersen Graphs with $k=2$} We next consider the generalized Petersen graph in the case of $k=2$. The vertices of $GP(n,2)$ with $n\geq 5$ are still referred to as $v_1,\ldots,v_n$, $u_1,\ldots,u_n$, with edges of the form $v_i v_{i+1}$, $u_i u_{i+2}$, and $v_i u_i$, in which indices are calculated modulo $n$. The independence number for generalized Petersen graphs when $k=2$ is given by the formula~$\lfloor\frac{4n}{5}\rfloor$, as shown in~\cite{BHM} through their study of minimum vertex covers of $GP(n,2)$. This results in the generalized Petersen graph with $k=2$ not being prime for any value of $n$. The denominator of this formula for the independence number provides a natural direction by which to create an independent set for this graph by dividing $GP(n,2)$ into blocks that include $5$ of the $v_i$ and $5$ of the $u_i$ vertices. We utilize this technique in the following proof but limit ourselves for now to the case when $n$ is a multiple of $5$. \begin{lemma}\label{GPn2Lemma} Let $m$ be a positive integer. 
Then $\mathfrak{pr}(GP(5m,2))=12m-1$. \end{lemma} \begin{proof} We aim to construct a coprime labeling $\ell$ and later will show that it is minimal. We begin by assigning $v_1,\ldots,v_5$ the labels $2,3,5,8,9$ and assigning $u_1,\ldots,u_5$ the labels $1,4,6,7,11$, respectively. One can verify these ten labels form a coprime labeling when $m=1$. For $m>1$ we then define the following labeling for the block of ten vertices $v_{5k+1},\ldots,v_{5k+5},u_{5k+1},\ldots,u_{5k+5}$ for each $1\leq k<m$ \begin{align}\label{initiallabel} \begin{array}{ccccccccccc} \ell(v_{5k+1}) = 12k+2 && \ell(v_{5k+4}) = 12k+8 && \ell(u_{5k+1}) = 12k+1 && \ell(u_{5k+4}) = 12k+7 \\ \ell(v_{5k+2})= 12k+3 && \ell(v_{5k+5}) = 12k+9 && \ell(u_{5k+2})= 12k+4 && \ell(u_{5k+5}) = 12k+11. \\ \ell(v_{5k+3}) = 12k+5 && && \ell(u_{5k+3}) = 12k+10 && \end{array} \end{align} \begin{figure}[htb] \centering \includegraphics[width=5in]{gp5m2.pdf} \caption{Visual representation of the labeling described in Equation~\eqref{initiallabel}} \label{fig:gp5m2} \end{figure} See Figure~\ref{fig:gp5m2} for a visual representation of the labeling of the ten vertices in Equation~\eqref{initiallabel} and their adjacent vertices. The labeling $\ell$ as currently defined is not enough to guarantee each pair of adjacent vertices has relatively prime labels, particularly for pairs of labels that have a difference of~$5$. We alter the labeling $\ell$ by addressing cases for specific~$k$ values based on the divisibility of $12k-1$, $12k-3$, and $12k+5$. Before altering $\ell$, first note that no adjacent vertices are both labeled by even integers. One can also observe that no labels that are multiples of $3$ are assigned to adjacent vertices, including the adjacent pairs whose labels differ by 9. Additionally, the final vertices in the last block $u_{n-1}$, $u_n$, and $v_n$ are adjacent to the vertices $u_1$, $u_2$, and $v_1$, respectively. Since $\ell(u_1)=1$, it is relatively prime to the label of $u_{n-1}$. 
Likewise, $\ell(u_2)=4$ and $\ell(v_1)=2$, while $\ell(u_{n})=12m-1$ and $\ell(v_n)=12m-3$ are both odd, making those adjacent pairs of labels also relatively prime. As we define $\ell$ for the upcoming cases, the labels on vertices $u_{5k+4}, u_{5k+5},$ and $v_{5k+5}$ are not changed except in Cases 4b, 4c, and 4d, and this occurs only as the subsequent block is labeled. These three vertices are the only ones within the block of ten vertices $v_{5k+1},\ldots,v_{5k+5},u_{5k+1},\ldots,u_{5k+5}$ that are adjacent to vertices in the subsequent block, hence leaving these three vertices unchanged is essential to guaranteeing that adjacent labels on vertices in different blocks are relatively prime. Let $U_k=\{u_{5k+1},\ldots,u_{5k+5}\}$ and $V_k=\{v_{5k+1},\ldots,v_{5k+5}\}$. \noindent \textbf{Case 1:} Suppose that $5\nmid 12k-1$, $5\nmid 12k-3$, and $5\nmid 12k+5$. Label the vertices $U_k\cup V_k$ as in Equation~\eqref{initiallabel}. As previously observed, pairs of adjacent vertices in $U_k\cup V_k$ or adjacent pairs between the vertices in $U_k\cup V_k$ and $\{v_{5k},u_{5k},u_{5k-1}\}$ do not have labels that share a common factor of 2 or 3. The adjacent vertex pairs $\{u_{5k},u_{5k+2}\}$, $\{v_{5k}, v_{5k+1}\}$, and $\{v_{5k+3},u_{5k+3}\}$ have labels that differ by $5$. Our assumptions for this case ensure that no label in these pairs is divisible by $5$, resulting in the relatively prime condition being satisfied. \noindent \textbf{Case 2:} Assume that $5\mid 12k+5$. Use the labeling $\ell$ from Equation~\eqref{initiallabel} except we redefine \begin{align} \label{case2} \begin{array}{c} \ell(u_{5k+3})=12k+6. \end{array} \end{align} Since we assumed $5\mid 12k+5$, it follows that $5\nmid 12k-1$ and $5\nmid 12k-3$, and thus after applying reasoning from Case 1, we need only check that $\ell(u_{5k+3})$ is relatively prime with the labels of all neighbors of $u_{5k+3}$. 
Since $u_{5k+3}$ is adjacent to $u_{5k+1}$, $u_{5k+5}$ and $v_{5k+3}$, we need that $\gcd\{12k+6,12k+1\}=1$, $\gcd\{12k+6,12k+11\}=1$, and $\gcd\{12k+6,12k+5\}=1$. The third equality is trivial, and the first two equalities follow immediately from the Case 2 assumption. \noindent \textbf{Case 3a:} Next, suppose that $5\mid 12k-1$ and $7\nmid 12k-3$. Use the initial labeling $\ell$ in Equation~\eqref{initiallabel} with the following two altered labels \begin{align} \label{case3a} \begin{array}{cccc} \ell(u_{5k+2})=12k+2, & \ell(v_{5k+1})=12k+4. \end{array} \end{align} Notice that since $5$ divides $12k-1$, we have $\gcd\{12k+10,12k+5\}=1$. As before, we need only to check that $\ell(u_{5k+2})$ and $\ell(v_{5k+1})$ are relatively prime with the labels of any adjacent vertices. Clearly, $\gcd\{12k+2,12k+3\}=\gcd\{12k+4,12k+3\}=1$. Since both $12k+2$ and $12k+4$ are not divisible by $3$, we know that $\gcd\{12k+2,12k-1\}=\gcd\{12k+4,12k+1\}=1$. Since 7 is assumed to not divide $12k-3$, $\gcd\{12k+4,12k-3\}=1$. Finally, our assumption of $5\mid 12k-1$ implies $5\nmid 12k+2$, hence $\gcd\{12k+2,12k+7\}=1$. \noindent \textbf{Case 3b:} We now assume $5\mid 12k-1$ and $7\mid 12k-3$. We reassign the following labels from $\ell$ \begin{align} \label{case3b} \begin{array}{ccccccc} \ell(u_{5k+2})=12k+6, && \ell(v_{5k+2})=12k+5, && \ell(v_{5k+3})=12k+3. \end{array} \end{align} We need only check that these new labels are relatively prime with labels of any neighboring vertices. It is clear that \[ \gcd\{12k+6,12k+7\}=\gcd\{12k+6,12k+5\}=\gcd\{12k+5,12k+3\}=1. \] Since $12k+2$ is not divisible by 3, $\gcd\{12k+2,12k+5\}=1$. By our assumption that $12k-1$ is divisible by 5, $\gcd\{12k+3,12k+8\}=1$. Since $12k-3$ is assumed to be divisible by 7, $\gcd\{12k-1,12k+6\}=\gcd\{12k+3,12k+10\}=1$. \noindent \textbf{Case 4a:} Suppose that $5\mid 12k-3$ and $7\nmid 12k-3$. 
We make the following changes to $\ell$ \begin{align} \label{case4a} \begin{array}{cccccccccc} \ell(v_{5k+1})=12k+4, && \ell(v_{5k+4})=12k+10, && \ell(u_{5k+2})=12k+8, && \ell(u_{5k+3})=12k+2. \end{array} \end{align} Clearly we have \[ \gcd\{12k+4,12k+3\}=\gcd\{12k+2,12k+1\}=\gcd\{12k+8,12k+7\}=\gcd\{12k+10,12k+9\}=1. \] Additionally, since none of the four reassigned labels are divisible by 3, it is clear that \begin{align*} \gcd\{12k+4,12k+1\}&=\gcd\{12k+2,12k+5\}=\gcd\{12k+10,12k+7\}\\ &=\gcd\{12k+8,12k-1\}=\gcd\{12k+2,12k+11\}=1. \end{align*} Our assumptions in this case include that $7\nmid 12k-3$ and imply that $5$ divides neither $12k+8$ nor $12k+10$. Thus we have \[ \gcd\{12k+4,12k-3\}=\gcd\{12k+8,12k+3\}=\gcd\{12k+10,12k+5\}=1. \] \noindent \textbf{Case 4b:} Now suppose that $5\mid 12k-3$, $7\mid 12k-3$, and $11\nmid 12k-3$. Once again, we make four changes to $\ell$ in this case \begin{align} \label{case4b} \begin{array}{cccccccccc} \ell(v_{5k})=12k-1, && \ell(u_{5k})=12k-3, && \ell(u_{5k+2})=12k+8, && \ell(v_{5k+4})=12k+4. \end{array} \end{align} Since the two vertices indexed by $5k$ are in the previous block of ten vertices, it is important to consider whether that block falls within a case in which any labels were swapped from the initial labeling of that block. Since we assume $5\mid 12k-3$, $5 \mid 12k-13=12(k-1)-1$, whereas $7\mid 12k-3$ implies $7\nmid 12k-15=12(k-1)-3$, so vertices in $U_{k-1}\cup V_{k-1}$ would be labeled according to Case 3a. Neither vertex whose label was swapped within Case 3a is adjacent to $v_{5k}$ or $u_{5k}$, so the adjacent pairs of labels to consider from that block are $12k-3$ and $12k-2$, $12k-3$ and $12k-1$, and $12k-1$ and $12k-4$. Overall, there are ten adjacent pairs of labels that need to be verified as relatively prime. It is clear that \[ \gcd\{12k-3,12k-2\}=\gcd\{12k-3,12k-1\}=\gcd\{12k+8,12k+7\}=\gcd\{12k+4,12k+5\}=1. 
\] Since the only reassigned label that is a multiple of $3$ is $12k-3$, we have \[ \gcd\{12k-1,12k-4\}=\gcd\{12k-1,12k+2\}=\gcd\{12k+4,12k+7\}=1. \] The assumption $5\mid 12k-3$ implies $5$ is not a factor of $12k+8$ or $12k+4$, hence \[ \gcd\{12k+8,12k+3\}=\gcd\{12k+4,12k+9\}=1. \] Finally, by our assumption that $11\nmid 12k-3$, we know $\gcd\{12k-3,12k+8\}=1$. \noindent \textbf{Case 4c:} Assume that $5\mid 12k-3$, $7\mid 12k-3$, $11\mid 12k-3$, and $13\nmid 12k-3$. Five reassignments of labels are needed \begin{align} \label{case4c} \begin{array}{cccccc} \ell(v_{5k})=12k-1, && \ell(u_{5k})=12k-3, && \ell(u_{5k+2})=12k+10, \\ \ell(u_{5k+3})=12k+8, && \ell(v_{5k+4})=12k+4. && \end{array} \end{align} The reason why we can change the labels on $v_{5k}$ and $u_{5k}$ without causing the labels of any adjacent pair of vertices to fail to be relatively prime is the same as the reason given in Case 4b. There are ten additional pairs of labels that need to be shown to be relatively prime to complete this case. Clearly, \begin{align*} \gcd\{12k-1,12k+2\} &= \gcd\{12k+4,12k+5\} = \gcd\{12k+10, 12k+7\}\\ &= \gcd\{12k+8,12k+5\}=\gcd\{12k+8,12k+11\}\\ &=\gcd\{12k+4,12k+7\} =1. \end{align*} By our assumption that $5\mid 12k-3$, we know $5$ is not a factor of $12k+4$, resulting in $\gcd\{12k+4,12k+9\}=1$. Similarly, $7$ is assumed to be a factor of $12k-3$, so $7\nmid 12k+3$ and $7\nmid 12k+8$; therefore, $\gcd\{12k+10,12k+3\}=\gcd\{12k+8,12k+1\}=1$. Lastly, we assumed $13\nmid 12k-3$, hence $\gcd\{12k-3,12k+10\}=1$, resulting in the relatively prime condition being satisfied. \noindent \textbf{Case 4d:} Finally we suppose that $5\mid 12k-3$, $7\mid 12k-3$, $11\mid 12k-3$, and $13\mid 12k-3$. In this case, we only need one pair of labels to be swapped by relabeling $u_{5k-2}$ as $12k+2$ and $v_{5k+1}$ as $12k-2$. As in the last two cases, the previous block of vertices that contains $u_{5k-2}$ falls within Case 3a, which involves swapping two labels on vertices that are not adjacent to $u_{5k-2}$.
Its neighbors then are labeled by $12k-11$, $12k-7$, and $12k-1$, while the label $12k-2$ on $v_{5k+1}$ is adjacent to $12k-3$, $12k+1$, and $12k+3$. Thus, by our assumptions in this case, our reassigned labels are relatively prime with the labels of adjacent vertices. Therefore, by our assumptions and case analysis, it is clear that all labels are relatively prime with their adjacent labels. Thus in each case the updated $\ell$ is a coprime labeling, making $\mathfrak{pr}(GP(5m,2))\leq 12m-1$. Since $\alpha(GP(5m,2))=\left\lfloor\frac{4(5m)}{5}\right\rfloor=4m$, at most $4m$ of the $10m$ vertices can receive even labels, so at least $6m$ odd numbers are needed to label the vertices in $GP(5m,2)$. Thus, $\mathfrak{pr}(GP(5m,2))\geq 12m-1$. Therefore, $\mathfrak{pr}(GP(5m,2))=12m-1$. \end{proof} \begin{theorem} The minimum coprime number for $GP(n,2)$ for $n\geq 5$ is given by $$\mathfrak{pr}(GP(n,2))= \begin{cases} 12m-1 & \text{if } n=5m\\ 12m+3 & \text{if } n=5m+1\\ 12m+5 & \text{if } n=5m+2\\ 12m+7 & \text{if } n=5m+3\\ 12m+9 & \text{if } n=5m+4.\\ \end{cases}$$ \end{theorem} \begin{proof} When $n=5m$, we constructed in Lemma~\ref{GPn2Lemma} a minimum coprime labeling $\ell$ with $12m-1$ as the largest label. For the remaining cases, we will build the labeling by using $\ell$ defined in Lemma~\ref{GPn2Lemma} for the first $5m$ vertices in $v_1\ldots v_n$ and the first $5m$ vertices in $u_1\ldots u_n$. Note that the vertices $v_{5m}$, $u_{5m}$, and $u_{5m-1}$ are not changed from the labeling defined in Equation~\eqref{initiallabel}. Hence, $\ell(v_{5m})=12m-3$, $\ell(u_{5m})=12m-1$, and $\ell(u_{5m-1})=12m-5$. Suppose that $n=5m+1$. Label the remaining vertices as $\ell(u_{5m+1})=12m+3$ and $\ell(v_{5m+1})=12m+1$. By Lemma~\ref{GPn2Lemma}, we need only check that the following pairs of adjacent vertices have relatively prime labels: $u_{5m+1} v_{5m+1}$, $u_{5m} u_1$, $u_{5m+1} u_2$, $v_{5m+1} v_1$, $u_{5m-1} u_{5m+1}$, and $v_{5m} v_{5m+1}$.
It is clear we have each of the following necessary relatively prime pairs \begin{align*} \gcd\{12m-1,1\}&=\gcd\{12m+1,2\}= \gcd\{12m+3,4\}=\gcd\{12m+1,12m+3\}\\ &=\gcd\{12m-5,12m+3\}=\gcd\{12m-3,12m+1\}=1. \end{align*} Since the independence number is $\alpha(GP(5m+1,2))=\left\lfloor \frac{4(5m+1)}{5}\right\rfloor=4m$, we have used the maximum number of even labels less than $12m+3$. Since all odd integers were used from 1 to $12m+3$, we have that $\mathfrak{pr}(GP(n,2))=12m+3$ in the case of $n=5m+1$. Next, suppose that $n=5m+2$. We label the vertices $v_1,\ldots,v_{5m}$ and $u_1,\ldots,u_{5m}$ as in Lemma~\ref{GPn2Lemma}. Label the remaining vertices as \begin{align*} \begin{array}{cccccccccc} \ell(u_{5m+1})=12m+4, && \ell(u_{5m+2})=12m+5, && \ell(v_{5m+1})=12m+1, && \ell(v_{5m+2})=12m+3. \\ \end{array} \end{align*} As explained above, $\ell(u_{5m})=12m-1$, $\ell(v_{5m})=12m-3$, and $\ell(u_{5m-1})=12m-5$. Also note that our new labels on vertices adjacent $v_1$, $u_1$, and $u_2$ make relatively prime pairs since $\ell(u_{5m+2})$ and $\ell(v_{5m+2})$ are odd. For the remaining adjacent pairs, we have \begin{align*} \gcd\{12m+4,12m-5\}&=\gcd\{12m+4,12m+1\}= \gcd\{12m+5,12m-1\}\\ &=\gcd\{12m+5,12m+3\}=\gcd\{12m+1,12m-3\}\\ &=\gcd\{12m+1,12m+3\}=1. \end{align*} The independence number in this case is $\alpha(GP(5m+2,2))=\left\lfloor\frac{4(5m+2)}{5}\right\rfloor=4m+1$, which shows we used the maximum number of even labels since one of the last four vertex labels is even. Thus, $\mathfrak{pr}(GP(n,2))=12m+5$ when $n=5m+2$. Assume next that $n=5m+3$. Again we label the vertices $v_1,\ldots,v_{5m}$ and $u_1,\ldots,u_{5m}$ as in Lemma~\ref{GPn2Lemma}. Label the remaining vertices as \begin{align*} \begin{array}{ccccccc} \ell(u_{5m+1})=12m+4, && \ell(u_{5m+2})=12m+3, && \ell(u_{5m+3})=12m+7, \\ \ell(v_{5m+1})=12m+1, && \ell(v_{5m+2})=12m+2, && \ell(v_{5m+3})=12m+5. 
\\ \end{array} \end{align*} Since $\ell(v_{5m+3})$ and $\ell(u_{5m+3})$ are odd, they are relatively prime with $\ell(v_1)$ and $\ell(u_2)$, respectively. The remaining adjacent pairs satisfy the following \begin{align*} \gcd\{12m+4,12m-5\}&=\gcd\{12m+4,12m+1\}=\gcd\{12m+4,12m+7\}\\ &=\gcd\{12m+3,12m-1\}=\gcd\{12m+3,12m+2\}\\ &=\gcd\{12m+7,12m+5\}=\gcd\{12m+1,12m-3\}\\ &=\gcd\{12m+1,12m+2\}=\gcd\{12m+2,12m+5\}=1. \end{align*} The independence number when $n=5m+3$ is $\alpha(GP(5m+3,2))=\left\lfloor\frac{4(5m+3)}{5}\right\rfloor=4m+2$, implying our use of two even labels on the final six vertices is the maximum allowable. Therefore, $\mathfrak{pr}(GP(n,2))=12m+7$ when $n=5m+3$. Finally, when $n=5m+4$ we need to consider three cases when labeling the final eight vertices. In each case, the labels on $u_{5m+3}$, $u_{5m+4}$, and $v_{5m+4}$ trivially have no common factors with their respective adjacent vertices $u_1$, $u_2$, and $v_1$. First, assume $5\nmid 12m+2$ and label the remaining vertices as \begin{align}\label{lastcase} \begin{array}{cccccc} \ell(u_{5m+1})=12m+4, && \ell(u_{5m+2})=12m+8, \\ \ell(u_{5m+3})=12m+3, && \ell(u_{5m+4})=12m+9, \\ \ell(v_{5m+1})=12m+1, && \ell(v_{5m+2})=12m+5, \\ \ell(v_{5m+3})=12m+2, && \ell(v_{5m+4})=12m+7. \end{array} \end{align} As before, $v_{5m}$, $u_{5m}$, and $u_{5m-1}$ are all constructed the same as in Equation~\eqref{initiallabel}. Thus, \begin{align*} \gcd\{12m+4,12m-5\} &= \gcd\{12m+4,12m+3\}=\gcd\{12m+4,12m+1\}\\ &= \gcd\{12m+8,12m-1\}=\gcd\{12m+8,12m+5\}\\ &=\gcd\{12m+8,12m+9\} = \gcd\{12m+3,12m+2\}\\ &=\gcd\{12m+9,12m+7\}=\gcd\{12m+1,12m-3\} \\ &=\gcd\{12m+1,12m+5\}=\gcd\{12m+5,12m+2\}=1. \end{align*} Additionally, our final pair satisfies $\gcd\{12m+2,12m+7\}=1$ by our assumption of $5\nmid 12m+2$. Next we assume $5\mid 12m+2$ and $7\nmid 12m+2$. We label the eight vertices in the final block as in Equation~\eqref{lastcase} except reassign $\ell(u_{5m+1})=12m+2$ and $\ell(v_{5m+3})=12m+4$.
The following show these two vertices have labels that are relatively prime with their neighbors, where the assumption of $7\nmid 12m+2$ is necessary for the first $\gcd$ calculation: \begin{align*} u_{5m+1}&: \gcd\{12m+2,12m-5\}=\gcd\{12m+2,12m+3\}=\gcd\{12m+2,12m+1\}=1\\ v_{5m+3}&: \gcd\{12m+4,12m+5\}=\gcd\{12m+4,12m+3\}=\gcd\{12m+4,12m+7\}=1. \end{align*} Finally, we assume $5\mid 12m+2$ and $7\mid 12m+2$. We label the vertices as follows: \begin{align*} \begin{array}{cccccccccc} \ell(u_{5m+1})=12m && \ell(u_{5m+2})=12m+2 && \ell(u_{5m+3})=12m+7 && \ell(u_{5m+4})=12m+1,\\ \ell(v_{5m+1})=12m+5 && \ell(v_{5m+2})=12m+3 && \ell(v_{5m+3})=12m+4 && \ell(v_{5m+4})=12m+9. \\ \end{array} \end{align*} Since $12m+2$ is divisible by $5$ and $7$, we know that $12m$ is not divisible by $5$ or $7$ and likewise $12m+4$ is not divisible by $5$, resulting in $$\gcd\{12m,12m-5\}=\gcd\{12m,12m+5\}=\gcd\{12m,12m+7\}=\gcd\{12m+4,12m+9\}=1.$$ The remaining adjacent pairs have relatively prime labels based on the following \begin{align*} \gcd\{12m+2,12m-1\} &=\gcd\{12m+2,12m+1\}=\gcd\{12m+2,12m+3\}\\ &= \gcd\{12m+7,12m+4\}= \gcd\{12m+1,12m+9\}\\ &= \gcd\{12m+5,12m-3\}=\gcd\{12m+5,12m+3\}\\ &= \gcd\{12m+3,12m+4\}=1. \end{align*} In all three cases for the final eight vertices, the relatively prime condition holds while using three even labels in this block. Since the independence number when $n=5m+4$ is $\alpha(GP(5m+4,2))=\left\lfloor\frac{4(5m+4)}{5}\right\rfloor=4m+3$, we have shown $\mathfrak{pr}(GP(n,2))=12m+9$ when $n=5m+4$, concluding our fifth and final case of $n\pmod{5}$. \end{proof} We conclude this section with conjectures for the minimum coprime number of $GP(n,k)$ for larger cases of $k$. The independence numbers, as given in~\cite{BHM} and~\cite{FGS}, are $\alpha(GP(n,3))=n-2$ in the case of $k=3$ with $n$ odd, and $\alpha(GP(3k,k))=\left\lceil \frac{5k-2}{2}\right\rceil$ when $n=3k$.
These values lead to the following conjectures, which we have verified for small values of $n$, although a minimum coprime labeling in each general case still eludes us. \begin{conjecture} For $n \geq 7$, the minimum coprime number for $GP(n,3)$ if $n$ is odd is $$\mathfrak{pr}(GP(n,3))=2n+3.$$ \end{conjecture} \begin{conjecture} For $k\geq 2$, the minimum coprime number for $GP(3k,k)$ is given by $$\mathfrak{pr}(GP(3k,k))= \begin{cases} 7k & \text{if } k \text{ is odd} \\ 7k+1 & \text{if } k \text{ is even}.\\ \end{cases}$$ \end{conjecture} \section{Stacked Prisms} We next turn our focus to the class of graphs known as the \textit{stacked prism}, also known as the generalized prism graph. A stacked prism is defined as $Y_{m,n}=C_m\square P_n$ for $m\geq 3$ and $n\geq 1$. See Figure~\ref{fig:y36} for an example of $Y_{3,6}$ with a minimum coprime labeling. \begin{figure}[htb] \centering \includegraphics{y36.pdf} \caption{Example of a minimum coprime labeling of $Y_{3,6}$} \label{fig:y36} \end{figure} We first focus on the stacked triangular prism, $Y_{3,n}$, which has $3n$ vertices. Its independence number is $n$ since an independent set can contain at most one vertex from each triangle, and it is trivial to find such a set of $n$ vertices. We demonstrate in the following result a way to apply a minimum coprime labeling based on how this independence number limits our use of even labels. \begin{theorem} The minimum coprime number for the stacked triangular prism is given by $$\mathfrak{pr}(Y_{3,n})=4n-1.$$ \end{theorem} \begin{proof} We refer to the vertices of $Y_{3,n}$ as $v_{i,j}$ where $i=1,\ldots, n$ and $j=1,2,3$. Then the edges of the graph are of the form $v_{i,j}\sim v_{i+1,j}$ and $v_{i,j}\sim v_{i,k}$ for $j\neq k$.
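As an illustrative aside (and not part of the formal argument), this edge structure is easy to encode, and small cases of the theorem can be confirmed by a computer search. The following Python sketch, with helper names of our own choosing, builds the edge set of $Y_{3,n}$ under the conventions above and backtracks over label assignments; for $n=3$ it finds a coprime labeling with largest label $11=4(3)-1$ and confirms that none exists with largest label $10$.

```python
from math import gcd
from itertools import combinations

def stacked_triangle_edges(n):
    """Edge set of Y_{3,n} = C_3 x P_n with vertices (i, j),
    i = 1..n (triangle index), j = 1..3 (position on the triangle)."""
    edges = []
    for i in range(1, n + 1):
        # the three edges of the i-th triangle
        edges.extend(((i, j), (i, k)) for j, k in combinations((1, 2, 3), 2))
        # path edges between consecutive triangles
        if i < n:
            edges.extend(((i, j), (i + 1, j)) for j in (1, 2, 3))
    return edges

def find_coprime_labeling(edges, max_label):
    """Backtracking search for distinct labels from {1,...,max_label}
    with coprime labels on the endpoints of every edge; returns a
    vertex-to-label dict, or None if no such labeling exists."""
    vertices = sorted({v for e in edges for v in e})
    nbrs = {v: [] for v in vertices}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    label, used = {}, set()

    def place(idx):
        if idx == len(vertices):
            return True
        v = vertices[idx]
        for c in range(1, max_label + 1):
            if c not in used and all(
                gcd(c, label[w]) == 1 for w in nbrs[v] if w in label
            ):
                label[v] = c
                used.add(c)
                if place(idx + 1):
                    return True
                del label[v]
                used.remove(c)
        return False

    return dict(label) if place(0) else None
```

Running `find_coprime_labeling(stacked_triangle_edges(3), 11)` produces one of many valid labelings, while the same search with upper bound $10$ fails, consistent with the lower bound forced by the independence number.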
We form a coprime labeling $\ell$ recursively by labeling the vertices on the $(i+1)^{\rm st}$ triangle with $\ell(v_{i+1,r}), \ell(v_{i+1,s}), \ell(v_{i+1,t})$ based on the labels chosen for the $i^{\rm th}$ triangle, $\ell(v_{i,r}), \ell(v_{i,s}), \ell(v_{i,t})$. First assign the labels $\ell(v_{1,1})=1$, $\ell(v_{1,2})=2$, and $\ell(v_{1,3})=3$. Each subsequent $(i+1)^{\rm st}$ triangle for $1\leq i\leq n-1$ will use the labels $4i+1$, $4i+2$, and $4i+3$ in some order (or $4i$, $4i+1$, and $4i+3$ in one case) depending on which labels from the $i$th triangle are multiples of 3 and/or 5. In each case we assume $\ell(v_{i,r})=4i-3$, $\ell(v_{i,s})=4i-2$, and $\ell(v_{i,t})=4i-1$. \noindent \textbf{Case 1:} Suppose $5\nmid 4i-3$ and $3\nmid 4i-2$. In this case, we assign $\ell(v_{i+1,r})=4i+2$, $\ell(v_{i+1,s})=4i+1$, and $\ell(v_{i+1,t})=4i+3$. The three edges within the $(i+1)^{\rm st}$ triangle have relatively prime labels since they are either consecutive integers or consecutive odd integers. Since we assumed $5\nmid 4i-3$ and $3\nmid 4i-2$, the edges between the $i^{\rm th}$ and $(i+1)^{\rm st}$ triangle satisfy the following: $$\gcd\{4i-3,4i+2\}=\gcd\{4i-2,4i+1\}=\gcd\{4i-1,4i+3\}=1.$$ \noindent \textbf{Case 2:} Assume $5\mid 4i-3$ and $3\nmid 4i-1$, which implies $5\nmid 4i-2$. We then assign $\ell(v_{i+1,r})=4i+1$, $\ell(v_{i+1,s})=4i+3$, and $\ell(v_{i+1,t})=4i+2$. Again the vertices within the newly labeled triangle have pairs of relatively prime labels. Since $5\nmid 4i-2$ and $3\nmid 4i-1$, the edges between the two triangles satisfy the following: $$\gcd\{4i-3,4i+1\}=\gcd\{4i-2,4i+3\}=\gcd\{4i-1,4i+2\}=1.$$ \noindent \textbf{Case 3:} Next suppose $5\nmid 4i-3$ and $5\nmid 4i-2$, in which we assign $\ell(v_{i+1,r})=4i+2$, $\ell(v_{i+1,s})=4i+3$, and $\ell(v_{i+1,t})=4i+1$. 
As in previous cases, we only need to verify the edges between the $i^{\rm th}$ and $(i+1)^{\rm st}$ triangles have relatively prime labels on their endpoints, which is satisfied since our assumptions of $5\nmid 4i-3$ and $5\nmid 4i-2$ result in $$\gcd\{4i-3,4i+2\}=\gcd\{4i-2,4i+3\}=\gcd\{4i-1,4i+1\}=1.$$ \noindent \textbf{Case 4:} Two remaining possibilities for the divisibility of the labels by 3 or 5 must be handled for all cases to be covered: either $5\mid 4i-3$ and $3\mid 4i-1$, or $5\nmid 4i-3$ and both $3$ and $5$ divide $4i-2$. We handle these by combining them into one final case. This is because if $5\mid 4i-3$ and $3\mid 4i-1$, then $5\nmid 4i+1$ and $3,5\mid 4i+2$. Therefore, having the first of these possible cases implies the second occurs on the next triangle, so we assign labels to the $(i+1)^{\rm st}$ and $(i+2)^{\rm nd}$ triangles at once while assuming $5\mid 4i-3$ and $3\mid 4i-1$. We set $\ell(v_{i+1,r})=4i$, $\ell(v_{i+1,s})=4i+1$, and $\ell(v_{i+1,t})=4i+3$, as well as $\ell(v_{i+2,r})=4i+5$, $\ell(v_{i+2,s})=4i+6$, and $\ell(v_{i+2,t})=4i+7$. Note that the labels on one of the edges of the $(i+1)^{\rm st}$ triangle do not trivially satisfy the relatively prime condition. However, $\gcd\{4i,4i+3\}=1$ since the assumption of $3\mid 4i-1$ implies $3\nmid 4i$, so the condition is satisfied nonetheless. Also note that while the $(i+1)^{\rm st}$ triangle does not use the three consecutive labels that have been used in other cases, the $(i+2)^{\rm nd}$ triangle does use the three consecutive values that allow us to continue with our recursion to find the $(i+3)^{\rm rd}$ triangle next. We now verify that the six edges from the $i^{\rm th}$ triangle to the $(i+1)^{\rm st}$ and from the $(i+1)^{\rm st}$ to the $(i+2)^{\rm nd}$ have relatively prime labels on their endpoints. It is clear that $\gcd\{4i-1, 4i+3\}=\gcd\{4i+3, 4i+7\}=1$. Since we assumed $3\mid 4i-1$, we know $3\nmid 4i-2$ or $4i$, hence $\gcd\{4i-2,4i+1\}=\gcd\{4i-3,4i\}=1$.
Our assumption of $5\mid 4i-3$ implies $5\nmid 4i$ or $4i+1$, thus resulting in $\gcd\{4i,4i+5\}=\gcd\{4i+1,4i+6\}=1$. In all four cases, we have verified that each pair of adjacent vertices is labeled by relatively prime integers, thus resulting in a coprime labeling once our recursion approach reaches $i=n-1$. The final triangle will use $4(n-1)+3=4n-1$ as its largest label. Thus, since we used the maximum allowable number of even labels, $n$, based on the independence number and the smallest possible odd labels, we have proven $\mathfrak{pr}(Y_{3,n})=4n-1$. \end{proof} We next investigate a minimum coprime labeling of the stacked pentagonal prism, $Y_{5,n}$, which has~$5n$ vertices. Similarly to $Y_{3,n}$, its independence number is determined by the fact that at most $2$ independent vertices can lie on each pentagon. Thus, the independence number is $\alpha(Y_{5,n})=2n$, which leads to the following result on the minimum coprime number of the stacked pentagonal prism. See Figure~\ref{fig:y56} for an example of $Y_{5,6}$ with a minimum coprime labeling. \begin{figure}[htb] \centering \includegraphics{y56.pdf} \caption{Example of a minimum coprime labeling of $Y_{5,6}$} \label{fig:y56} \end{figure} \begin{theorem} The minimum coprime number for the stacked pentagonal prism graph is $$\mathfrak{pr}(Y_{5,n})=6n-1.$$ \end{theorem} \begin{proof} We will call $v_{i,j}$ the vertices of $Y_{5,n}$ where $i=1,\ldots, n$ and $j=1,2,3,4,5$. We initially assign a labeling $\ell$ for $i=1,\ldots, 70$ as follows if $i$ is odd $$\ell(v_{i,1})=6i-3,\; \ell(v_{i,2})=6i-4,\; \ell(v_{i,3})=6i-5,\; \ell(v_{i,4})=6i-2,\; \ell(v_{i,5})=6i-1,$$ and using the following if $i$ is even $$\ell(v_{i,1})=6i-5,\;\ell(v_{i,2})=6i-1,\;\ell(v_{i,3})=6i-2, \;\ell(v_{i,4})=6i-3,\;\ell(v_{i,5})=6i-4.$$ Since the labeling differs on odd- and even-indexed pentagons, there are twenty types of adjacent pairs to consider as having relatively prime labels.
If $i$ is odd and $k$ is even, the edges $v_{i,1}v_{i,2}$, $v_{i,2}v_{i,3}$, $v_{i,4}v_{i,5}$, $v_{k,2}v_{k,3}$, $v_{k,3}v_{k,4}$, $v_{k,4}v_{k,5}$, and $v_{k,5}v_{k,1}$ are labeled by consecutive integers. The edges $v_{i,5}v_{i,1}$, $v_{i,1}v_{i+1,1}$, $v_{k,1}v_{k,2}$, and $v_{k,1}v_{k+1,1}$ are labeled by odd integers that have a difference of $2$, $4$, or $8$. Edges of the form $v_{i,3}v_{i,4}$, $v_{i,2}v_{i+1,2}$, $v_{i,3}v_{i+1,3}$, $v_{i,5}v_{i+1,5}$, $v_{k,2}v_{k+1,2}$, $v_{k,3}v_{k+1,3}$, or $v_{k,5}v_{k+1,5}$ have labels that differ by 3 or 9, and these labels are not multiples of 3. At this point, we have shown that the labels on eighteen of the twenty types of edges are relatively prime. The edges $v_{i,4}v_{i+1,4}$ and $v_{k,4}v_{k+1,4}$ have labels that differ by 5 and 7, respectively, and hence may not be relatively prime for certain $i$ and $k$. Rather than detail numerous cases of alterations that need to be made to the labeling to fix these, we instead list the reassigned labels of 46 vertices from the 350 labels in which $i=1,\ldots, 70$, as seen in Table~\ref{first350}. The labels in bold are the 46 that were reassigned to avoid adjacent labels sharing a common factor of 5 or 7.
\begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|} \hline 3 & 2 & 1 & 4 & 5\\ \hline 7 & 11 & 10 & 9 & 8\\ \hline 15 & \textbf{16} & 13 & \textbf{14} & 17\\ \hline 19 & \textbf{21} & 22 & \textbf{23} & 20\\ \hline 27 & 26 & 25 & 28 & 29\\ \hline 31 & 35 & 34 & 33 & 32\\ \hline 39 & 38 & 37 & 40 & 41\\ \hline 43 & \textbf{45} & 46 & \textbf{47} & 44\\ \hline 51 & \textbf{52} & 49 & \textbf{50} & 53\\ \hline 55 & 59 & 58 & 57 & 56\\ \hline 63 & 62 & 61 & 64 & 65\\ \hline 67 & 71 & 70 & 69 & 68\\ \hline 75 & 74 & 73 & 76 & 77\\ \hline 79 & 83 & 82 & 81 & 80\\ \hline 87 & 86 & 85 & 88 & 89\\ \hline 91 & 95 & 94 & 93 & 92\\ \hline \textbf{101} & 98 & 97 & 100 & \textbf{99}\\ \hline \textbf{105} & 107 & 106 & \textbf{103} & 104\\ \hline \textbf{113} & 110 & 109 & 112 & \textbf{111}\\ \hline 115 & 119 & 118 & 117 & 116\\ \hline 123 & 122 & 121 & 124 & 125\\ \hline 127 & 131 & 130 & 129 & 128\\ \hline 135 & 134 & 133 & 136 & 137\\ \hline \end{tabular} \hfill \begin{tabular}{|c|c|c|c|c|} \hline 139 & 143 & 142 & 141 & 140\\ \hline 147 & 146 & 145 & 148 & 149\\ \hline 151 & \textbf{153} & 154 & \textbf{155} & 152\\ \hline 159 & \textbf{160} & 157 & \textbf{158} & 161\\ \hline 163 & 167 & 166 & 165 & 164\\ \hline 171 & 170 & 169 & 172 & 173\\ \hline 175 & 179 & 178 & 177 & 176\\ \hline 183 & \textbf{184} & 181 & \textbf{182} & 185\\ \hline 187 & \textbf{189} & 190 & \textbf{191} & 188\\ \hline 195 & 194 & 193 & 196 & 197\\ \hline 199 & 203 & 202 & 201 & 200\\ \hline 207 & 206 & 205 & 208 & 209\\ \hline 211 & \textbf{213} & 214 & \textbf{215} & 212\\ \hline 219 & \textbf{220} & 217 & \textbf{218} & 221\\ \hline 223 & 227 & 226 & 225 & 224\\ \hline 231 & 230 & 229 & 232 & 233\\ \hline 235 & 239 & 238 & 237 & 236\\ \hline 243 & 242 & 241 & 244 & 245\\ \hline 247 & 251 & 250 & 249 & 248\\ \hline 255 & 254 & 253 & 256 & 257\\ \hline 259 & 263 & 262 & 261 & 260\\ \hline 267 & \textbf{268} & 265 & \textbf{266} & 269\\ \hline \textbf{275} & \textbf{273} & 274 & \textbf{271} & 
272\\ \hline \end{tabular} \hfill \begin{tabular}{|c|c|c|c|c|} \hline 279 & 278 & 277 & 280 & 281\\ \hline \textbf{287} & \textbf{285} & 286 & \textbf{283} & 284\\ \hline 291 & \textbf{292} & 289 & \textbf{290} & 293\\ \hline 295 & 299 & 298 & 297 & 296\\ \hline 303 & 302 & 301 & 304 & 305\\ \hline 307 & 311 & 310 & 309 & 308\\ \hline 315 & 314 & 313 & 316 & 317\\ \hline 319 & 323 & 322 & 321 & 320\\ \hline 327 & 326 & 325 & 328 & 329\\ \hline 331 & 335 & 334 & 333 & 332\\ \hline 339 & 338 & 337 & 340 & 341\\ \hline 343 & \textbf{345} & 346 & \textbf{347} & 344\\ \hline 351 & \textbf{352} & 349 & \textbf{350} & 353\\ \hline 355 & \textbf{357} & 358 & \textbf{359} & 356\\ \hline 363 & 362 & 361 & 364 & 365\\ \hline 367 & 371 & 370 & 369 & 368\\ \hline 375 & 374 & 373 & 376 & 377\\ \hline 379 & 383 & 382 & 381 & 380\\ \hline 387 & 386 & 385 & 388 & 389\\ \hline 391 & 395 & 394 & 393 & 392\\ \hline 399 & 398 & 397 & 400 & 401\\ \hline 403 & \textbf{405} & 406 & \textbf{407} & 404\\ \hline 411 & \textbf{412} & 409 & \textbf{410} & 413\\ \hline 415 & 419 & 418 & 417 & 416\\ \hline \end{tabular} \caption{Labeling of the Stacked Pentagon for $i=1$ to $70$} \label{first350} \end{table} Two important facts regarding these newly assigned labels can be observed: each label is relatively prime with any adjacent label, resulting in a coprime labeling of the graph up to $n=70$, and also that the largest difference between adjacent labels is 10. To label the stacked pentagonal prism graph when $n> 70$, we assign for $i>70$ and $j=1,\ldots 5$ the label $\ell(v_{i,j})=\ell(v_{i-70,j})+420$. Since the greatest distance between adjacent labels on the first~$70$ pentagons was $10$, only common prime factors $2$, $3$, $5$, and $7$ need to be considered. The shift by $420$, which only has these four prime numbers as factors, results in the relatively prime condition remaining satisfied for all vertex pairs with $i\geq 71$. 
The only exceptions that need to be verified are edges of the form $v_{70m,j}v_{70m+1,j}$ for some positive integer $m$. We have the following pairs of labels that are adjacent: \begin{align*} \{\ell(v_{70m,1})=420m-5&,\; \ell(v_{70m+1,1})=420m+3\}\\ \{\ell(v_{70m,2})=420m-1&,\; \ell(v_{70m+1,2})=420m+2\}\\ \{\ell(v_{70m,3})=420m-2&,\; \ell(v_{70m+1,3})=420m+1\}\\ \{\ell(v_{70m,4})=420m-3&,\; \ell(v_{70m+1,4})=420m+4\}\\ \{\ell(v_{70m,5})=420m-4&,\; \ell(v_{70m+1,5})=420m+5\}. \end{align*} The labels in the first pair are relatively prime since they are odd integers differing by a power of $2$. The second, third, and fifth pairs are not multiples of 3 and are separated by either 3 or 9. Finally, the fourth pair is separated by 7, but since 7 divides 420, neither of these labels is a multiple of 7. Thus, these adjacent pairs of vertices all have relatively prime labels, making this a coprime labeling for $n\geq 71$ as well. Note that the largest label $\ell(v_{i,j})$ for $j=1,\ldots, 5$ is $6i-1$ for all $i$, hence the largest label for $Y_{5,n}$ is $6n-1$. Since we used the maximum of $2$ even labels on each pentagon along with the smallest possible odd labels, we have proven that $\mathfrak{pr}(Y_{5,n})=6n-1$. \end{proof} For the graphs $Y_{3,n}$ and $Y_{5,n}$, the minimum coprime numbers were directly correlated to their independence number. This number is easy to obtain for stacked prisms involving larger odd cycles as well. Since $\alpha(C_{2k+1})=k$, one can observe that $\alpha(Y_{2k+1,n})=kn$. A minimum coprime labeling could use at most $kn$ even labels, forcing $(2k+1)n-kn=(k+1)n$ odd labels to be used, leading to the following conjecture. \begin{conjecture} The minimum coprime number for the stacked $(2k+1)$-gon prism graph is $$\mathfrak{pr}(Y_{2k+1,n})=2(k+1)n-1.$$ \end{conjecture} It should be noted to close this section that while we focused on the odd case, the stacked prisms with even-length cycles, $Y_{2k,n}$, satisfy $\alpha(Y_{2k,n})=kn=\frac{|V(Y_{2k,n})|}{2}$.
Therefore, one would hope a prime labeling exists in this case, but this remains an open problem. \section{Variation of Generalized Petersen Graph} We extend the definition of the generalized Petersen graph for the case of even $n$ and $k=\frac{n}{2}$. We denote this graph by $GP^*(2k,k)$ and use the same notation for the vertices $u_1,\ldots, u_n$ and $v_1,\ldots, v_n$ with edges $v_i\sim v_{i+1}$, $v_i\sim u_i$, and $u_i\sim u_{i+k}$, where indices are calculated modulo $n$. Note that this graph differs from generalized Petersen graphs since $\deg(u_i)=2$ for all $i=1,\ldots, n$ instead of the usual degree of $3$ for $GP(n,k)$. For an example of a minimum coprime labeling of $GP^*(20,10)$, see Figure~\ref{fig:gp2kk}. \begin{figure}[htb] \centering \includegraphics{gp2kk.pdf} \caption{Example of a minimum coprime labeling of $GP^*(20,10)$.} \label{fig:gp2kk} \end{figure} \begin{theorem} For the graph $GP^*(2k,k)$ in which $k\geq 2$, we have the following: \begin{enumerate} \item $GP^*(2k,k)$ is prime if $k$ is odd, \item $\mathfrak{pr}(GP^*(2k,k))=4k+1$ if $k$ is even. \end{enumerate} \end{theorem} \begin{proof} We first assume $k$ is odd. We begin by labeling the vertices of $GP^*(2k,k)$ as follows \begin{equation}\label{variation} \ell(v_i)=\begin{cases} 4i-3 & \mbox{ for $i=1,3,5,\ldots, k$}\\ 4i & \mbox{ for $i=2,4,6,\ldots, k-1$}\\ 4i-4k &\mbox{ for $i=k+1,k+3,k+5,\ldots, 2k$}\\ 4i-3-4k &\text{ for $i=k+2,k+4,k+6,\ldots, 2k-1$.}\\ \end{cases} \end{equation} \begin{equation*} \ell(u_i)=\begin{cases} 4i-2 & \mbox{ for $i=1,3,5,\ldots, k$}\\ 4i-1 & \mbox{ for $i=2,4,6,\ldots, k-1$}\\ 4i-1-4k &\mbox{ for $i=k+1,k+3,k+5,\ldots, 2k$}\\ 4i-2-4k &\text{ for $i=k+2,k+4,k+6,\ldots, 2k-1$.}\\ \end{cases} \end{equation*} For the pairs $v_i\sim v_{i+1}$ and $v_{i+k}\sim v_{i+1+k}$ for $i\in\{1,\ldots,k-1\}$, it is clear that $\gcd\{\ell(v_i),\ell(v_{i+1})\}\in\{1,7\}$ and $\gcd\{\ell(v_{i+k}),\ell(v_{i+k+1})\}\in \{1,7\}$.
Notice that $\gcd\{\ell(v_{k}),\ell(v_{k+1})\}=\gcd\{4k-3,4\}=1$ and $\gcd\{\ell(v_{2k}),\ell(v_{1})\}=\gcd\{4k,1\}=1$. For the edges $u_i u_{i+k}$, it is easily verified that when $i\in\{1,3,\ldots,k\}$, \[ \gcd\{\ell(u_i),\ell(u_{i+k})\}=\gcd\{4i-2,4(i+k)-1-4k\} = \gcd\{4i-2,4i-1\}=1. \] Similarly, $\gcd\{\ell(u_i),\ell(u_{i+k})\}=1$ when $i\in\{2,4,\ldots,k-1\}$ and $\gcd\{\ell(u_i),\ell(v_i)\} = 1$ for all $i=1,\ldots, 2k$ since these pairs are consecutive integers. Thus, our only concern with the labeling is when $\ell(v_i)$ and $\ell(v_{i+1})$ are both divisible by 7. We handle these instances by breaking the proof into several cases based on the residue of $\ell(v_i)$ modulo $3$ and whether or not $5$ divides $\ell(v_i)+2$. For the sake of simplicity, let $\ell(v_i)=a$ where $i\in\{1,3,\ldots,k\}$. Notice that in our labeling, $\ell(v_i)=a$ is odd in these cases in which $\ell(v_{i+1})-\ell(v_i)=7$. In each of the cases below, we are supposing that $a\equiv 0 \pmod{7}$. \noindent \textbf{Case 1:} Suppose that $a\equiv 1\pmod{3}$. In this case, swap the labels $a$ and $a-2$. Notice that $\ell(u_{i-1})=a-2$. Since $a$ is not divisible by $3$, $\gcd\{a,a-3\}=1$. Since $a-2$ is also not divisible by 3, $\gcd\{a-2,a+7\}=\gcd\{a-2,a+1\}=1$. Thus, the two labels involved in the swap are relatively prime with all adjacent labels.\hfill $\diamond$ \noindent \textbf{Case 2:} Suppose that $a\equiv 0\pmod{3}$ and $5 \nmid a$. In this case, swap the labels $a+5$ and $a+7$ on the vertices $v_{i+1}$ and $u_{i+1+k}$. Since $a$ is not divisible by $5$, $\gcd\{a,a+5\}=1$. Since both $a+5$ and $a+7$ are not divisible by $3$, $\gcd\{a+5,a+8\}=\gcd\{a+4,a+7\}=1$. Thus, all newly adjacent pairs of labels after making this swap are relatively prime.\hfill $\diamond$ \noindent \textbf{Case 3:} Suppose that $a\equiv 0\pmod{15}$. The labels $a-1,a,a+1,a+5,a+6,a+7$ will become $a+7,a+6,a+5,a-1,a,a+1$, respectively.
Based on the placement of the swapped labels, we only need to check the $\gcd$ between six newly adjacent pairs of labels. Since $a$ is a multiple of $3$, $5$, and $7$, we have \[ \gcd\{a+1,a+8\}=\gcd\{a-1,a+4\}=\gcd\{a+5,a+2\}=\gcd\{a+7,a-2\}=\gcd\{a+1,a+6\}=1. \] Note that if $i=2k-1$, instead of the label $a+8$, we have label $1$ on $v_{i+2}$, which is still relatively prime with $a+1$. Notice that the label $a+7$ is also now on a vertex adjacent to the vertex labeled $a-8$. Since $a\equiv 0\pmod{15}$, $\gcd\{a-8,a+7\}=1$. Therefore, the reassigned labels are relatively prime with any newly adjacent label.\hfill $\diamond$ \noindent \textbf{Case 4:} Suppose that $a\equiv 2\pmod{3}$ and $5 \nmid a+2$. In this case, swap the labels $a$ and $a+2$. Since both $a$ and $a+2$ are not divisible by $3$ and $a+2$ is not divisible by $5$, it follows that \[ \gcd\{a,a+3\} = \gcd\{a+2,a+7\} = \gcd\{a+2,a-1\}=1. \] Hence any new adjacencies after the swap consist of relatively prime labels. \hfill $\diamond$ \noindent \textbf{Case 5:} Suppose that $a\equiv 2\pmod{3}$ and $5\mid a+2$. The labels $a,a+1,a+2,a+6,a+7,a+8$ are reassigned as $a+6,a+7,a+8,a+2,a+1,a$, respectively. Based on the placement of the swapped labels, as in Case 3, we need only check the $\gcd$ of six pairs of labels. Since $a\equiv 2\pmod{3}$, $a+2\equiv 0\pmod{5}$, and $a\equiv 0\pmod{7}$, \[ \gcd\{a-1,a+6\}=\gcd\{a+3,a+8\}=\gcd\{a+2,a+5\}=\gcd\{a+1,a+6\}=\gcd\{a,a+9\}=1. \] If $\ell(v_{i+2})=a+8$ in the original labeling of the vertices, then since $a$ is odd, $a\equiv 2\pmod{3}$, and $a+2\equiv 0\pmod{5}$, it follows that $\gcd\{a+15,a\}=1$. Since $k$ is odd, it is not possible for $\ell(v_{i+2})$ to be $4$. It is possible that $\ell(v_{i+2})=1$ in the case of $i=2k-1$, but in this case, $\ell(v_{i+3})=8$ and the label~$a$ is adjacent to $8$, which again proves our claim.
Thus, all new pairs of adjacent labels are relatively prime.\hfill $\diamond$ Combining all cases, we have proven that the labeling is prime for all odd $k$. Now suppose that $k$ is an even integer. In order to have $k$ even labels on the vertices $v_1,\ldots, v_n$, every other vertex $v_i$ must receive an even label. However, for each $i$, both $v_i$ and $v_{i+k}$ will then be labeled with an even integer, and so neither $u_i$ nor $u_{i+k}$ can be labeled with an even number. Thus, fewer than $k$ even labels can be used on the vertices $u_1,\ldots, u_n$ in this situation. We cannot use more than $k$ even labels on $u_1,\ldots, u_n$ since only one of $\ell(u_i),\ell(u_{i+k})$ can be even for each $i=1,\ldots, k$. Therefore, we have $\alpha(GP^*(2k,k))<2k=\frac{1}{2}|V|$, so by Lemma~\ref{ind}, this graph is not prime when $k$ is even. We then create a minimum coprime labeling when $k$ is even by starting with the labeling $\ell$ as defined in Equation~\eqref{variation}, which originally assigned $\ell(v_{k})=4k$. Then make the same alterations to $\ell$ as described in Cases 1-5. If these left $\ell(v_k)$ unchanged, then reassign the label on $v_{k}$ to $4k+1$. Then $\gcd\{4k+1,4\}=\gcd\{4k-1,4k+1\}=\gcd\{4k-7,4k+1\}=1$. Now suppose that $\ell(v_{k})$ is relabeled by one of the cases above, which could have occurred in Case 2, 3, or 5. Assuming we are in Case 2, recall $\ell(u_{2k})=a+5$ and $\ell(v_k)=a+7=4k$ are the only two values swapped from Equation~\eqref{variation}. Instead, let $\ell(u_{2k})=4k$ and $\ell(v_k)=4k+1$. Then the labels of the vertices adjacent to $u_{2k}$ are $4k-3$ and $4k-1$, and the labels of the vertices adjacent to $v_k$ are $4$, $4k-7$, and $4k-1$. Since $4k+1$ is odd and $a=4k-7\equiv 0\pmod{3}$, it follows that $\ell(u_{2k})$ and $\ell(v_k)$ are relatively prime with the labels of the corresponding vertex neighbors. Suppose we are in Case 3 where $\ell(v_{k})$ is now swapped to be $4k-6$. Instead, we now relabel $v_{k}$ as $4k+1$.
Then the vertices adjacent to $v_{k}$ are labeled $4$, $4k-7$, and $4k-1$, all of which are relatively prime to $4k+1$. Finally, suppose that we are in Case~5 where $\ell(v_{k})$ is currently $4k-6$ and $\ell(v_{k+1})$ is $4k-7$. We relabel $v_{k}$ as $4k+1$. The labels of the vertices adjacent to $v_{k}$ are $4k-5$, $4k-7$, and $4k-1$. Since $4k-5\equiv 1\pmod{3}$ based on assumptions of Case~5, it follows that $\gcd\{4k+1,4k-5\}=\gcd\{4k+1,4k-1\}=\gcd\{4k+1,4k-7\}=1$, and so the labels of each pair of adjacent vertices are relatively prime. We have shown this is a coprime labeling for the case of even $k$ with largest label $4k+1$. Therefore, since the graph is not prime in this case, we have proven $\mathfrak{pr}(GP^*(2k,k))=4k+1$. \end{proof} \bibliographystyle{amsplain}
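As a computational sanity check on the last construction (an aside of ours, not part of the paper's argument), the base labeling of Equation~\eqref{variation} can be encoded directly. For small odd $k$ such as $k=3$ and $k=5$, the divisibility-by-7 conflicts handled by Cases 1-5 do not arise, so the base labeling is already prime; the Python sketch below, with helper names of our own choosing, verifies this.

```python
from math import gcd

def base_labeling(k):
    """Base labeling of GP*(2k, k) for odd k, following Equation (variation).
    No Case 1-5 swaps are applied, so this is a valid prime labeling only
    when no adjacent pair of labels shares the factor 7 (true for k = 3, 5)."""
    ell = {}
    for i in range(1, 2 * k + 1):
        if i <= k:
            ell[('v', i)] = 4 * i - 3 if i % 2 == 1 else 4 * i
            ell[('u', i)] = 4 * i - 2 if i % 2 == 1 else 4 * i - 1
        else:
            # for odd k, the indices k+1, k+3, ..., 2k are the even indices
            ell[('v', i)] = 4 * i - 4 * k if i % 2 == 0 else 4 * i - 3 - 4 * k
            ell[('u', i)] = 4 * i - 1 - 4 * k if i % 2 == 0 else 4 * i - 2 - 4 * k
    return ell

def gp_star_edges(k):
    """Edges of GP*(2k, k): the outer cycle v_1..v_{2k}, the spokes v_i u_i,
    and the inner matching u_i u_{i+k}."""
    n = 2 * k
    edges = [(('v', i), ('v', i % n + 1)) for i in range(1, n + 1)]
    edges += [(('v', i), ('u', i)) for i in range(1, n + 1)]
    edges += [(('u', i), ('u', i + k)) for i in range(1, k + 1)]
    return edges

def is_prime_labeling(k, ell):
    """A prime labeling uses each label 1..|V| exactly once and places
    coprime labels on the endpoints of every edge."""
    return sorted(ell.values()) == list(range(1, 4 * k + 1)) and all(
        gcd(ell[a], ell[b]) == 1 for a, b in gp_star_edges(k)
    )
```

For example, `is_prime_labeling(3, base_labeling(3))` confirms that $GP^*(6,3)$ is prime under the labeling of Equation~\eqref{variation}.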
\section{Introduction} Mobile biometrics is a growing research topic, with published literature ranging from continuous authentication using touch and keystroke data (e.g., \cite{185306, MENG20181}) to usability surveys on commercialized face and fingerprint systems (e.g. \cite{Bhagavatula15biometricauthentication, Javed:2017:IUC:3098954.3098974}). Behavioral mobile biometrics have gained attention considering the merits associated with transparent, continuous sensing compared to point-of-entry methods, particularly for user authentication \cite{7503170}. Specifically, point-of-entry methods may require nearly 50 unlock attempts per day \cite{185310}, cannot provide continuous authentication while using the device, and require users to remember complex combinations of characters or to present usable, physical biometric data. For some, point-of-entry methods have been regarded as awkward \cite{DeLuca:2015:IFL:2702123.2702141} or inconvenient \cite{185310}. Behavioral biometrics pose challenges as well, unfortunately. Behavioral signals, such as tone in voice \cite{Feng:2017:CAV:3117811.3117823}, touch location on the screen during swipes or flicks \cite{6331527}, and usage activity (e.g., app launches or calling patterns) may change over time due to various factors such as mood. Exactly how these changes affect mobile biometrics (particularly, usage activity) has yet to be clearly recognized. Meanwhile, the requirement that a biometric modality ``should be sufficiently invariant (with respect to the matching criterion) over a period of time'', or \textit{permanent}, is critical \cite{1262027}. Modalities that change often will have at least three main effects: \begin{enumerate}[noitemsep, topsep=0pt] \item The template will need to be updated often to reduce false rejections \cite{Jain2011}. \item The hyperparameters of the matching algorithm may need to change over time to accommodate change in updated templates \cite{peel2015detecting}. 
\item Decision boundaries may move in feature space according to changes in which features are most important and/or as a consequence of changes to the matching algorithm. \end{enumerate} In this paper, we seek to understand the impact of mood on performance in a mobile biometric system. Summarizing our approach, we utilized data captured during the StudentLife study conducted by Wang et al. \cite{wang2014studentlife}. The StudentLife study employed a mobile app to continuously capture activity data such as GPS coordinates, app usage, and phone calls to later infer the impact of factors such as stress and activity on academic performance. Ecological Momentary Assessment (EMA) \cite{doi:10.1146/annurev.clinpsy.3.022806.091415} data was also captured via the StudentLife app, and used to label the activity data according to stress, number of hours slept, and mood, among others. EMA ``involves repeated sampling of subjects' current behaviors and experiences in real time, in subjects' natural environments'' \cite{doi:10.1146/annurev.clinpsy.3.022806.091415}, and can capture the dynamics of behavior in the real world \cite{asselbergs2016mobile}. In our study, we extracted features from the activity data and labeled each sample with student identifiers to re-evaluate the data from a biometric system perspective. To our knowledge, this is the first evaluation of the impact of mood on biometric performance using mobile activity data. We pose the following research questions: (1) \textit{How might feeling happy, upset, or stressed impact identification accuracy?}, and (2) \textit{How does feeling happy, upset, or stressed impact behavior which may consequently impact salient feature sets? } The paper is outlined as follows. In Section \ref{background}, we discuss relevant background concerning mood, and the analysis of mood using mobile data. In Section \ref{data}, we describe the StudentLife dataset and extracted features. 
In Section \ref{method}, we detail our methodology; results are discussed in Section \ref{results}. Finally, we provide a brief discussion on limitations and conclude the paper in Section \ref{conclusion}. \section{Background} \label{background} Mood is defined as ``affective states that are capable of influencing a broad array of potential responses, many of which seem quite unrelated to the mood-precipitating event. As compared with emotions, moods are typically less intense affective states and are thought to be involved in the instigation of self-regulatory processes'' \cite{morris}. Assessing mood is an important topic in psychology, and is largely done by analyzing physiological signals \cite{jaques2015predicting, sano2018identifying, sun2010activity}. With the widespread use of mobile devices, daily mood can now more easily be assessed through these devices \cite{ma2012daily}, suggesting a correlation between affect and mobile device usage. This correlation is further supported by years of research showing how mood affects decision making \cite{doi:10.1080/00049538908260083} and reasoning \cite{doi:10.2190/FYYA-GCRU-J124-Q3B2}, in conjunction with factors known to affect mood (e.g., weather \cite{denissen2008effects} and location \cite{doi:10.1111/j.1559-1816.1996.tb01781.x}), all of which play some role in the use of technology. Few studies have evaluated the impact of mood on phone usage in the context of biometric recognition. One particular study evaluated the impact of stress on handwriting recognition \cite{BLANCOGONZALO20141173}, but most efforts have only \textit{assumed} that mood may affect usage, which in turn affects recognition performance \cite{LiKamWa:2013:MBM:2462456.2464449}. However, several efforts have used phone data to recognize mood (e.g., \cite{Bardram:2012:MSS:2110363.2110370, Torous2015}), while others have shown that the use of mobile devices itself can induce certain moods \cite{doi:10.1556/2006.4.2015.010}.
LiKamWa et al. studied the mobile device activities of 32 subjects to infer mood \cite{LiKamWa:2013:MBM:2462456.2464449}. Using their \textit{MoodScope} sensing API, they demonstrated that the effect of mood on smartphone usage is person-specific. They trained least-squares multiple linear regression models per subject using app usage, phone calls, emails, SMS, web browsing, and location data. Achieving up to 93\% accuracy on classifying moods relevant to the Circumplex Mood Model (i.e., tense, stress, upset, bored, excited, happy, relaxed, and calm) \cite{russell1980circumplex}, this work provides valuable groundwork for understanding differences in device usage according to the user's affective state. In our work, we also base our analysis on the Circumplex model, examining happiness, stress, and upsetness. LiKamWa's work also suggests that certain behaviors (i.e., app usage and phone calls) are more indicative of mood, as indicated by the results of feature selection using Sequential Forward Selection. In a similar study, Asselbergs et al. \cite{asselbergs2016mobile} conducted a pilot study using mobile phones for EMA. They collected text messages, screen time, app usage, accelerometer, and camera events from 27 subjects. Using this data, they created personalized models to predict daily variation in mood, accurately predicting up to 76\% of EMA mood scores. These results are encouraging, showing the feasibility of using mobile phones to predict \textit{person-specific} variations in mood, a key indicator that intra-person variation in mood may affect biometric performance. More recently, Pratap et al. \cite{pratap2019accuracy} investigated the prediction of daily mood (e.g., depression) using phone data. Data were collected from 271 individuals over three months, including features such as call duration, SMS count, and SMS length.
Using a random forest-based classifier, they too found that mood is more easily predicted based on individual subject models when using phone data. They showed that when predicting mood over all subjects, the prediction accuracy degraded. Similar to the aforementioned studies, these results motivate our current research to better understand the impact of variations in mood on the stability of phone usage when considering this data as a biometric modality. Specifically, we aim to further the analysis of mood by evaluating the impact of being happy, upset, and stressed on recognition performance. We view the possible impact of mood as comparable to occlusions or pose in face recognition \cite{6468044}. By identifying \textit{mobile occlusions}, future efforts can work toward developing algorithms which can control or adapt to these factors. \section{Data and Features} \label{data} The StudentLife dataset was collected at Dartmouth College over a period of 10 weeks using an Android app running in the background \cite{wang2014studentlife}. It includes data from 48 students enrolled in a computer science programming class. Among the 48, 30 were undergraduates and 18 were graduate students. Thirty-eight subjects were male and 10 were female, consisting of 23 Asians, 23 Caucasians, and 2 African-Americans. The StudentLife data collection app required minimal interaction from participants, allowing continuous collection of mobile activities. A main requirement of the study was to carry the device throughout the day. The app collected data on activity, conversation, sleep, and location inferred from measurements from the accelerometer, microphone, light, GPS, and Bluetooth sensors. The participants were also asked to respond to various EMA questions prompted at random to gather their self-reports of psychological health and academic performance. EMA prompts occurred eight times per day on average via the mobile app.
A total of 35,295 EMA responses were collected, each requesting data on the participant's current mood, stress, sleep duration, number of social interactions, exercise, and personality. EMA topics evaluated in this paper include stress, current mood, sleep quality, sadness, and happiness; their scales are provided in Table \ref{ema_scales}. Since each sample represented an hour of mobile activities (discussed below), we averaged the reported values for each EMA topic within the respective hour. However, we note that very few EMA responses were available and usually only a single EMA response was observed per hour. \begin{table}[ht] \centering \caption{Ranges for EMA responses that are later used for classifying states of happiness, upsetness, and stress.} \label{ema_scales} \resizebox{\columnwidth}{!}{ \renewcommand{\arraystretch}{1.3} \begin{tabular}{l|l|l|l} \hline \textbf{Stress} & \textbf{Current Mood} & \textbf{Sleep Quality} & \textbf{Happy / Sad} \\ \hline \begin{tabular}[c]{@{}l@{}}(1) Stressed out\\ (2) Definitely stressed\\ (3) A little stressed\\ (4) Feeling good\\ (5) Feeling great\end{tabular} & \begin{tabular}[c]{@{}l@{}}(1) Happy\\ (2) Stressed\\ (3) Tired\end{tabular} & \begin{tabular}[c]{@{}l@{}}(1) Very good\\ (2) Fairly good \\ (3) Fairly bad \\ (4) Very bad\end{tabular} & \begin{tabular}[c]{@{}l@{}}(1) A little bit \\ (2) Somewhat \\ (3) Very much \\ (4) Extremely\end{tabular} \\ \hline \end{tabular}} \end{table} For this study, we utilized approximately 60 days of data from the StudentLife dataset collected from March 25, 2013 through May 25, 2013. Some subjects were more active in the use of the device and/or in providing EMA responses; the 19 most active of the 48 subjects were therefore included in our analysis. Data from six mobile activities were used, including call, app, GPS, audio, activity, and locking events.
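The per-hour EMA averaging described above can be sketched as follows (a hypothetical Python illustration; the tuple layout is our assumption, not the StudentLife file format):

```python
from collections import defaultdict

def hourly_ema_average(responses):
    """Average EMA values per (topic, hour) bucket.

    `responses` is a list of (topic, hour, value) tuples; each sample
    (one hour of mobile activity) receives the mean of the EMA values
    reported for that topic within the hour.
    """
    buckets = defaultdict(list)
    for topic, hour, value in responses:
        buckets[(topic, hour)].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

# e.g., two stress reports within hour 14 are averaged:
ema = hourly_ema_average([("stress", 14, 3), ("stress", 14, 4), ("mood", 14, 1)])
# ema[("stress", 14)] == 3.5
```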
We extracted 1,463 feature samples (one representative of each hour of mobile activity) of 7,834 dimensions per subject, composed of the following features: \begin{description}[noitemsep, topsep=0pt] \item[Call] The sum of durations of calls started within the hour, resulting in a single call feature. Overall, 1,195 calls made across all subjects were less than 24 minutes, 34 were between 24 and 48 minutes, 8 were between 48 and 71 minutes, and 5 were between 71 and 119 minutes. One call lasted for 237 minutes, while another (which we considered an extreme anomaly) was 2,386 minutes (nearly 40 hours). Many (26,553) samples did not have calling data. \item[App] Each app is identified by a task ID. The task ID for every running app was extracted, and this list was reduced to a set (i.e., duplicates removed). Finally, app data across all users were transformed using one-hot encoding for consistency, resulting in 7,802 app features per sample. \item[GPS] All latitude and longitude pairs were extracted, and then clustered using DBSCAN, a well-known density-based clustering algorithm \cite{Ester:1996:DAD:3001460.3001507}, using the default parameters set by Scikit-Learn. DBSCAN clustering yielded 17 unique locations across all users; clustering labels were then used in lieu of latitude/longitude pairs. Finally, all labels across users were transformed using one-hot encoding for consistency, resulting in 28 GPS features per sample. \item[Audio] Audio data were classified as (0) silence, (1) voice, (2) noise, or (3) unknown. We extracted the mode of the audio inferences; if multiple modes existed, we assigned a value of 4. If no audio data existed, we assigned a value of -1, overall resulting in a single audio feature per sample. 4,504 samples did not have audio data (-1), 17,061 samples were associated with silence, 3,549 samples were associated with the sound of a voice, 2,673 were associated with noise, and 10 had multiple audio inferences.
\item[Activity] Activity data were classified as (0) stationary, (1) walking, (2) running, and (3) unknown. We followed the same protocol as implemented for audio data, resulting in a single activity feature per sample. 4,506 samples did not have activity data (-1), 22,491 samples were associated with the subject being stationary, 365 samples were associated with walking, 110 were associated with running, and 322 samples had unknown activities. Only four samples had multiple activity modes. \item[Lock] This feature represents the number of times the device was locked within the hour, resulting in a single lock feature. The number of locking events within an hour ranged from zero to four across the dataset. 21,425 samples had no locking events, 5,032 had one locking event, 1,332 had two locking events, 7 had three, and 1 had four locking events. \end{description} Of the 27,797 total samples, only 2,855 had corresponding EMA data for at least one of the five EMA topics (i.e., stress, current mood, sleep quality, happiness, and sadness), equating to only 10.3\% of the total dataset. To increase the number of samples with corresponding self-reports of mood, we inferred hourly and daily mood to create two additional datasets. To generate the second dataset, we assigned EMA values at hour $h$ to $h-1$ and $h+1$ for each user and each EMA topic if the samples at $h-1$ or $h+1$ had no EMA data. This increased the number of samples with corresponding EMA responses to 5,291 (19\%). When referring to this dataset, we use $H$. Finally, we inferred the users' daily moods by averaging all EMA data per EMA topic for each day and assigning this averaged response to every sample within that day, increasing the number of samples with EMA data to 16,877 (61\%). When referring to this dataset, we use $D$.
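The construction of dataset $H$ (copying an EMA value at hour $h$ to hours $h-1$ and $h+1$ when those hours lack their own data) can be sketched as follows; a minimal Python illustration, where the tie-handling when two reports propagate into the same empty hour is our assumption:

```python
def propagate_ema(ema_by_hour):
    """Build dataset H: copy the EMA value at hour h to hours h-1 and
    h+1 whenever those hours have no EMA data of their own.

    `ema_by_hour` maps hour -> value (one EMA topic, one user).
    """
    out = dict(ema_by_hour)
    for h, v in ema_by_hour.items():
        for nb in (h - 1, h + 1):
            if nb not in ema_by_hour:
                # If two reports propagate into the same empty hour,
                # the first one encountered wins (an assumption).
                out.setdefault(nb, v)
    return out

# hour 10 has a report; hours 9 and 11 inherit it:
h_data = propagate_ema({10: 4})
# h_data == {10: 4, 9: 4, 11: 4}
```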
Our motivation for these approaches, which consider the possibility of longer segments of mood, stems from the fact that mood is known to persist longer than emotion \cite{ekkekakis2012affect, doi:10.1080/09515080600806567}. With datasets $H$ and $D$, we then assigned each sample a corresponding mood label of happy, upset, or stress according to the ranges of EMA values specified in Table \ref{ema_ranges}. \begin{table}[ht] \centering \caption{Ranges in EMA responses for establishing happy, upset, and stress.} \label{ema_ranges} \resizebox{\columnwidth}{!}{ \renewcommand{\arraystretch}{1.3} \begin{tabular}{c|c|c} \hline \textbf{Happiness} & \textbf{Upset} & \textbf{Stress} \\ \hline \begin{tabular}[c]{@{}l@{}} Sleep quality = 1 \\ Stress level $\geq$ 4 \\ Happiness $\geq$ 2 \\ Current Mood = 1 \end{tabular} & \begin{tabular}[c]{@{}l@{}} Stress level = 3 \\ Current Mood = 3 \& Sleep quality $\geq$ 3 \\ Sadness $\geq$ 2 \end{tabular} & \begin{tabular}[c]{@{}l@{}} Sleep quality $\geq$ 3 \\ 1 $\leq$ Stress level $\leq$ 3 \\ Current Mood = 2 \end{tabular} \\ \hline \end{tabular}} \end{table} \section{Method} \label{method} The goal of this work was to conduct identification experiments with an independent variable of mood; to accomplish this, we ran identification experiments without the mood of interest. In biometrics, identification seeks to establish the identity of an unknown individual by comparing that individual's physical or behavioral characteristics with those from several other candidates. Thus, identification requires a one-to-many match per subject, where the candidate with the most similar characteristics is returned as the identity of the unknown subject.
We used a random forest classifier with 250 trees for our experiments, utilizing samples extracted from hour $h$ to $h+\delta$ for training from every subject, and samples extracted from hour $h+\delta+1$ to $h+2\delta+1$ for testing from every subject; $\delta$ ranged from 4 to 24 (i.e., four hours to one day). For each train-test split, we performed feature selection using an Extra Trees classifier with 50 estimators. Thus, salient features may change over time, which we discuss in the following section. \section{Results} \label{results} To evaluate performance, we utilize the $F$-score, or the harmonic mean of precision and recall. If $TP$, $FP$, and $FN$ denote the numbers of true positives, false positives, and false negatives, respectively, then the $F$-score, precision, and recall are defined as follows: \[ F\text{-score} = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \] \[ \text{precision} = \frac{TP}{TP + FP} \] \[ \text{recall} = \frac{TP}{TP + FN} \] The discussion in this section is based on the comparison of $F$-scores when excluding the samples corresponding with the mood of interest versus the use of all samples. Table \ref{tbl:samplespersubject} shows the number of mood-related samples per subject and the total number of samples removed when evaluating each mood. We do note that the number of mood-related samples per subject is imbalanced, especially for stress; this may bias the results, and we plan to consider alternative approaches to address this in future work.
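The metric above can be computed directly from the confusion counts; a small sketch:

```python
def f_score(tp, fp, fn):
    """F-score: harmonic mean of precision and recall from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g., 8 true positives, 2 false positives, 2 false negatives:
# precision = recall = 0.8, so the F-score is 0.8 as well.
assert abs(f_score(8, 2, 2) - 0.8) < 1e-9
```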
\begin{table}[ht] \centering \caption{Number of mood-related samples per subject.} \label{tbl:samplespersubject} \resizebox{\columnwidth}{!}{ \renewcommand{\arraystretch}{1.3} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & \textbf{Happy ($H$)} & \textbf{Upset ($H$)} & \textbf{Stress ($H$)} & \textbf{Happy ($D$)} & \textbf{Upset ($D$)} & \textbf{Stress ($D$)} \\ \hline 1 & 346 & 389 & 452 & 934 & 574 & 1054 \\ \hline 2 & 216 & 159 & 296 & 838 & 504 & 1006 \\ \hline 3 & 190 & 145 & 265 & 696 & 480 & 960 \\ \hline 4 & 179 & 123 & 259 & 647 & 480 & 936 \\ \hline 5 & 137 & 120 & 234 & 598 & 384 & 912 \\ \hline 6 & 115 & 98 & 203 & 528 & 336 & 886 \\ \hline 7 & 77 & 96 & 202 & 336 & 334 & 789 \\ \hline 8 & 67 & 89 & 187 & 336 & 334 & 574 \\ \hline 9 & 48 & 68 & 143 & 288 & 312 & 552 \\ \hline 10 & 48 & 66 & 138 & 264 & 264 & 552 \\ \hline 11 & 48 & 60 & 128 & 216 & 192 & 552 \\ \hline 12 & 44 & 57 & 128 & 192 & 192 & 552 \\ \hline 13 & 44 & 44 & 111 & 168 & 192 & 504 \\ \hline 14 & 40 & 43 & 100 & 168 & 168 & 480 \\ \hline 15 & 34 & 32 & 84 & 168 & 168 & 360 \\ \hline 16 & 32 & 28 & 52 & 120 & 142 & 264 \\ \hline 17 & 28 & 24 & 48 & 120 & 120 & 238 \\ \hline 18 & 4 & 0 & 20 & 24 & 0 & 72 \\ \hline 19 & 4 & 0 & 0 & 24 & 0 & 0 \\ \hline \textbf{Total} & \textbf{1701} & \textbf{1641} & \textbf{3050} & \textbf{6665} & \textbf{5176} & \textbf{11243} \\ \hline \end{tabular}} \end{table} \subsection{Does mood impact identification performance?} Figure \ref{fig:fscores} provides the ranges of $F$-scores as the training size increases from 4 to 24 hours. The default configuration, which is the use of all samples, outperforms all configurations that exclude those samples indicative of mood. Interestingly, our findings show that the three moods impact performance very similarly when those samples are removed, and there is little evidence supporting the need to exclude certain data based on mood.
Thus, while many works have claimed that mood may have a negative impact on identification performance, our findings show the opposite. This assumption is typically made considering the variability in behavior induced by mood, which may correspond with decreased intra-person similarity. To evaluate this, we examine the features chosen during feature selection in the following section. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{imgs/without_samples_Fscore.png} \caption{$F$-scores for each experiment that excludes mood-related samples across different training set sizes.} \label{fig:fscores} \end{figure} Thus far, we have evaluated identification performance by removing samples associated with mood. Figure \ref{fig:fscores_withmoodonly} provides $F$-scores when using only the samples corresponding with mood. Here, we see more significant effects that may be associated with happiness, upsetness, and stress. First, none of these experiments yield results that outperform the use of all samples. We do note, however, that samples associated with mood inferred daily ($D$) perform worse than those inferred by the hour ($H$). Importantly, we find that samples related to upsetness yield an $F$-score for four hours of training data comparable to that with the use of all samples. While two of the 19 subjects were never classified as upset, this finding may have several implications. For instance, biometrics for mental health is a developing research area (e.g., \cite{info:doi/10.2196/mhealth.5960}), and considering our results that show that the feeling of being upset may produce identifiable behavior, mHealth treatment services that focus on this particular affect could leverage these findings for personalized patient support.
\begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{imgs/with_samples_Fscore.png} \caption{$F$-scores for each experiment using only mood-related samples across different training set sizes.} \label{fig:fscores_withmoodonly} \end{figure} \subsection{What is the impact of mood on behavior?} Thus far, our results do not support the notion that mood has a negative impact on identification performance; on the contrary, we find that the inclusion of these samples helps to improve identification performance. In this section, we explore changes in phone usage potentially brought on by mood. If variations exist, then we can further claim that mood does cause intra-person variation, but these variations may not degrade performance. Table \ref{tbl:chosen_feats} provides the top ten features chosen during feature selection with their normalized frequencies (i.e., how often they were chosen across the different experiments when changing the training set size). These features were selected when running identification experiments using \textit{only} those samples labeled with the respective mood. App\_2 and app\_3 represent the home screen launcher and Gmail apps, respectively. Apps\_ represents no app usage. For features gps\_$i$, $i$ represents the DBSCAN cluster ID. Given these findings, we conclude that there \textit{are} differences in behavior that may be brought on by mood. For instance, not using any apps is selected less frequently when considering mood alone compared to the default configuration, while locking habits become the most frequently selected feature when considering mood. Similarly, differences in feature saliency are observed for calling habits and two locations. An additional finding is that the saliency of audio (e.g., silence, voice, or noise) increases when considering mood. Our results also show that some features remain equally important (i.e., selected at the same frequency) across experiments (e.g., home screen and Gmail apps).
Moreover, we find that the frequency in which activity is selected is only changed when considering upsetness, suggesting that people may physically behave differently when upset. \begin{table}[ht] \centering \caption{Top ten most selected features with their associated normalized frequencies. These findings suggest variations in behavior that may be dependent on mood. Feature app\_2 corresponds with the home screen launcher, and app\_3 corresponds with the Gmail app. GPS features correspond with DBSCAN clusters.} \label{tbl:chosen_feats} \resizebox{\columnwidth}{!}{ \renewcommand{\arraystretch}{1.3} \begin{tabular}{l|l|l|l} \hline \textbf{Default} & \textbf{Happy} & \textbf{Upset} & \textbf{Stressed} \\ \hline \rowcolor[HTML]{C0C0C0} \cellcolor[HTML]{34FF34}(apps\_) \#: 0.01196 & (lock) \#: 0.00076 & (lock) \#: 0.00049 & (lock) \#: 0.00107 \\\hline \rowcolor[HTML]{9B9B9B} \cellcolor[HTML]{C0C0C0}(lock) \#: 0.01185 & (audio) \#: 0.00076 & (audio) \#: 0.00048 & (audio) \#: 0.00105 \\\hline \rowcolor[HTML]{656565} \cellcolor[HTML]{DAE8FC}(gps\_0) \#: 0.01185 & (gps\_1) \#: 0.00055 & (gps\_1) \#: 0.00032 & (gps\_1) \#: 0.00077 \\\hline \cellcolor[HTML]{9B9B9B}(audio) \#: 0.01185 & \cellcolor[HTML]{DAE8FC}(gps\_0) \#: 0.00053 & \cellcolor[HTML]{FFCCC9}{\color[HTML]{000000} (activity) \#: 0.00031} & \cellcolor[HTML]{FFFC9E}(call) \#: 0.00074 \\\hline \rowcolor[HTML]{DAE8FC} \cellcolor[HTML]{656565}(gps\_1) \#: 0.01185 & \cellcolor[HTML]{FFFC9E}(call) \#: 0.00048 & (gps\_0) \#: 0.00031 & (gps\_0) \#: 0.00074 \\\hline \rowcolor[HTML]{FFCCC9} {\color[HTML]{000000} (activity) \#: 0.01175} & (activity) \#: 0.00046 & \cellcolor[HTML]{FFFC9E}(call) \#: 0.00031 & (activity) \#: 0.00072 \\\hline \rowcolor[HTML]{34FF34} \cellcolor[HTML]{FFFC9E}(call) \#: 0.01131 & (apps\_) \#: 0.00039 & (apps\_) \#: 0.00025 & (apps\_) \#: 0.00054 \\\hline \rowcolor[HTML]{FFFFFF} {\color[HTML]{000000} (apps\_2) \#: 0.01035} & {\color[HTML]{000000} (apps\_2) \#: 0.00036} & {\color[HTML]{000000} 
(apps\_2) \#: 0.00025} & {\color[HTML]{000000} (apps\_2) \#: 0.00048} \\\hline \rowcolor[HTML]{FFFFFF} {\color[HTML]{000000} (apps\_2 3) \#: 0.00642} & {\color[HTML]{000000} (apps\_2 3) \#: 0.00021} & {\color[HTML]{000000} (apps\_2 3) \#: 0.00017} & {\color[HTML]{000000} (apps\_2 3) \#: 0.00024} \\ \hline \end{tabular}} \end{table} \section{Conclusion} \label{conclusion} Studies have shown that behavioral patterns can be used to identify an individual. Utilizing this fact, we sought to understand the effect of mood on performance using mobile biometric data. We extracted call, app, GPS, audio, activity, and lock activity features from approximately 60 days of data from 19 subjects in the StudentLife Dataset \cite{wang2014studentlife}. Leveraging the dataset's ecological momentary assessment responses, we explored whether being happy, upset, or stressed caused a significant change in identification performance using a random forest classifier, and train/test sizes of four hours to one day. When not considering mood, we achieved an $F$-score of approximately 67\% with four hours of training and test data. While this is relatively low, the goal of this study was not to optimize performance, but to measure the impact of mood on performance. Our findings show that by excluding samples associated with a subject's mood, performance worsened, with a best-case $F$-score of 48\%. Thus, while previous works have claimed that behavioral biometrics may be negatively impacted by intra-person variations, including mood, our findings show that the variations found in mood do not hinder performance. In fact, our results show that happy-, upset-, and stress-related samples account for only 6\%, 6\%, and 10\%, respectively, of the total number of samples. However, when excluding these samples, performance significantly worsens by 28.4\%.
\begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{imgs/EMA_Corr_HvsD.png} \caption{Pearson's correlation coefficients on EMA responses inferred hourly and daily (the latter indicated by D\_). } \label{fig:ema_correlations} \end{figure} Finally, we note the limitations of this work and directions for future work. First, we highlight that the dataset used is small, making it difficult to draw concrete conclusions. This leads to an important area of future work. When preparing our experiments, we did not find any publicly available datasets that included mobile device usage data with annotated mood. The StudentLife dataset did, however, contain this data, and we chose 19 of the subjects from this dataset based on empirical analysis to determine which subjects had the most representation in the dataset in regard to data availability and EMA responses. This was necessary to avoid highly imbalanced classes. However, to obtain statistically significant results, a larger dataset is necessary. A second limitation is the skewed distribution of the data for the different moods, which may have impacted the reported drop in accuracy due to fewer data points to learn from. Considering this, our analysis evaluated this potential problem by both removing mood-related samples and considering only mood-related samples. We further explored whether our results were based solely on the lack of data points by using feature selection, finding that feature importance changed for each mood. Thus, while the number of data points affects performance, the fact that the salient features change is an indication that different moods impact mobile biometric performance in different ways. Nonetheless, this observation is also related to the availability of data which allows sufficient exploration of this problem. An additional limitation is the assumptions made throughout that essentially drive the experimental set-up and the conclusions drawn.
Many of these assumptions rely on the accuracy of perceived mood gathered by ecological momentary assessments, or self-reports. However, not all of the observed mood correlations were supported by previous literature. For instance, poor sleep is known to correlate with decreased allostatic load \cite{hamilton2007sleep, Chen2006}; however, in Figure \ref{fig:ema_correlations}, we see a positive correlation between these two factors. Recall that in this paper, the numerical representation for sleep quality increases as sleep quality worsens, while the numerical label for stress increases as the user feels less stressed. These correlations may be due to the subjectivity of self-report, and the subjects' inability to accurately express how they feel. There were also few EMA responses, which led our experimental design to infer mood when this data was unavailable. This may have also played a role in the observed correlations found in Figure \ref{fig:ema_correlations}. Nonetheless, while our results may be difficult to generalize to the broader public, we have attempted to provide insight into the effect of variability in mood on mobile biometrics. In future work, we will explore additional potential ``mobile occlusions'' that may be problematic for mobile biometric systems, including additional moods and emotions. {\small \bibliographystyle{plain}
\section{Introduction} The 2-D complex Gabor filter has been widely used in numerous applications of computer vision and image processing thanks to its elegant properties of extracting locally-varying structures from an image. In general, it is composed of a Gaussian kernel and a complex sinusoidal modulation term, and it can be seen as a special case of the short-time Fourier transform (STFT). The Gabor basis functions defined for each pixel offer good spatial-frequency localization \cite{tit/Daubechies90}. It was first discovered in \cite{josa85} that the 2-D receptive field profiles of simple cells in the mammalian visual cortex can be modeled by a family of Gabor filters. It was also shown in \cite{ivc/ShenBF07,VQML-15} that image analysis approaches based on the Gabor filter conceptually imitate the human visual system (HVS). The 2-D complex Gabor filter is invariant to rotation, scale, translation, and illumination \cite{tip/KamarainenKK06}, and it is particularly useful for extracting features at a set of different orientations and frequencies from the image. Thanks to such properties, it has found a great variety of applications in the field of computer vision and image processing, including texture analysis \cite{pr/WeldonHD96, TexturePR07, TextureTIP09, tip/LiDZ15}, face recognition \cite{FR-Gabor97, FR-NN03, FR-PAA06, FR-FG08, FR-IJMLC16}, facial expression recognition \cite{pr/GuXVHL12,Facial-15}, and fingerprint recognition \cite{Kasban2016910}. Performing the 2-D complex Gabor filtering for all pixels over an entire image, however, often incurs a heavy computational cost. With the 2-D complex Gabor kernel defined at a specific orientation and frequency, the filtering is performed by moving a reference pixel to be filtered one pixel at a time.
The complex kernel hinders the fast computation of the 2-D complex Gabor filtering in a context similar to edge-aware filters \cite{GuidedFilter-ECCV10, DomainTrans-TOG11, FGS-TIP14} that are widely used in numerous computer vision applications. To expedite the 2-D complex Gabor filtering, several efforts have been made, for instance, by making use of the fast Fourier transform (FFT), infinite impulse response (IIR) filters, or finite impulse response (FIR) filters \cite{sigpro/QiuZC99, jei/NestaresNPT98, tsp/YoungVG02, tip/BernardinoS06}. It is shown in \cite{sigpro/QiuZC99} that the Gabor filtering and synthesis for a 1-D signal consisting of $N$ samples can be performed with the same complexity as the FFT, $O(N\log N)$. In \cite{jei/NestaresNPT98}, separable FIR filters are applied to implement fast 2-D complex Gabor filtering by exploiting particular relationships between the parameters of the 2-D complex Gabor filter in a multiresolution pyramid. The fast pyramid implementation is, however, feasible only for particular settings of the Gabor parameters, e.g., a scale of $2^i$ with an integer $i$. Young \emph{et al.} \cite{tsp/YoungVG02} propose to formulate the 2-D complex Gabor filter as IIR filters that work efficiently in a recursive manner. They decompose the Gabor filter into multiple IIR filters through the z-transform, and then perform the recursive filtering in a manner similar to recursive Gaussian filtering \cite{icpr/VlietYV98}. To the best of our knowledge, the fastest algorithm for the 2-D complex Gabor filtering is the work of Bernardino and Santos-Victor \cite{tip/BernardinoS06}, which decomposes the 2-D complex Gabor filtering into more efficient Gaussian filtering and sinusoidal modulations. It was reported in \cite{tip/BernardinoS06} that this method reduces the number of arithmetic operations by up to $39\%$ compared to the recursive Gabor filtering \cite{tsp/YoungVG02}.
These fast methods primarily focus on reducing the runtime of performing the 2-D complex Gabor filtering once at a specific orientation and frequency. However, some computer vision applications require computing the 2-D complex Gabor filter bank, i.e., the set of 2-D complex Gabor filtering outputs at multiple orientations and frequencies. For instance, face recognition approaches relying on 2-D Gabor features usually perform the 2-D complex Gabor filtering at 8 orientations and 5 frequencies (40 Gabor feature maps in total) to deal with geometric variations \cite{FR-Gabor97, FR-NN03, FR-FG08, wifs/GangwarJ15, FR-IJMLC16}. Fig. \ref{fig:GFB} shows an example of the filter kernels used in the 2-D complex Gabor filter bank. To compute the complex Gabor filter bank, existing approaches simply repeat the Gabor computation step for a given set of frequencies and orientations without considering the computational redundancy that exists in such repeated calculations. In this paper, we propose a novel approach that efficiently computes the 2-D complex \emph{Gabor filter bank} by reducing the computational redundancy that arises when performing the 2-D complex Gabor filtering at multiple orientations and frequencies. We first decompose the Gabor basis kernels by making use of the trigonometric identities. This allows us to perform a fast convolution with the Gaussian kernel in a separable manner for the $x$ and $y$ dimensions. More importantly, our decomposition strategy enables a substantial reduction of the computational complexity when computing the 2-D complex Gabor filter bank at a set of orientations and frequencies. In our formulation, intermediate results of the 2-D complex Gabor filtering computed at a specific orientation can be reused when performing the 2-D complex Gabor filtering at the symmetric orientation.
This is particularly useful in applications where the 2-D complex Gabor filtering outputs at various orientations and frequencies are needed to cope with geometric variations \cite{FR-Gabor97, FR-NN03, tip/KamarainenKK06, FR-FG08, FR-IJMLC16, wifs/GangwarJ15, tip/LiDZ15}. We will show that our method reduces the computational complexity compared to state-of-the-art methods \cite{tsp/YoungVG02, tip/BernardinoS06}, while maintaining similar filtering quality. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{./figure/gaborFilterBank/gaborFilterBank.jpg} \caption{Example of the 2-D complex Gabor filter bank with 40 coefficients (5 frequencies and 8 orientations). The coefficients are computed by $\omega=2^{-(i+2)/2}$ ($i=0, ..., 4$), $\theta=k\pi/8$ ($k=0, ..., 7$) and $\sigma=2\pi/\omega$ \cite{FR-Gabor97}.} \label{fig:GFB} \end{figure} Additionally, we present a method that efficiently computes the 2-D \emph{localized} sliding discrete Fourier transform (SDFT) using the Gaussian kernel at the transform window by extending the proposed kernel decomposition technique. In the literature, the 2-D SDFT usually performs the transform on an image patch within the transform window by shifting the window one pixel at a time in either the horizontal or vertical direction. Numerous methods have been proposed for the fast computation of the 2-D SDFT \cite{SDFT-SPM03, SDFT-SPM04, SDFT-TIP15}. For instance, the relation between two successive 2-D DFT outputs is first derived using the circular shift property \cite{SDFT-TIP15}. Using this relation, the 2-D DFT output at the current window is efficiently updated by linearly combining the 2-D DFT output at the previous window with only one 1-D DFT result. Note that all these methods use a box kernel within the sliding transform window, and the circular shift property holds only when the box kernel is employed.
Therefore, applying the existing 2-D SDFT methods \cite{SDFT-SPM03, SDFT-SPM04, SDFT-TIP15} is infeasible when calculating the \emph{localized} DFT outputs with the Gaussian kernel. It is generally known that the good spatial localization of the Gabor filter mainly benefits from the use of the Gaussian kernel, which determines a weight based on spatial distance. We will show that the fast computation of the 2-D \emph{localized} SDFT using the Gaussian kernel, which lends the spatial localization ability of the Gabor filter, is also feasible using our decomposition strategy. It should be noted that existing fast 2-D complex Gabor filters \cite{tsp/YoungVG02, tip/BernardinoS06} can be readily used to compute the 2-D localized SDFT, but a direct application of these methods disregards the computational redundancy that exists in the repeated calculation of 2-D DFT outputs at multiple frequencies, as in the 2-D complex Gabor filter bank. We will show that our method outperforms existing approaches \cite{tsp/YoungVG02, tip/BernardinoS06} in terms of computational complexity. To sum up, our contributions can be summarized as follows. \begin{itemize} \item A new method is presented for efficiently computing the 2-D complex \emph{Gabor filter bank} at a set of orientations and frequencies. We show that our method runs faster than existing approaches. \item The proposed method is extended to the 2-D \emph{localized} SDFT, demonstrating a substantial runtime gain over existing approaches. \item An extensive comparison with state-of-the-art approaches is given in both analytic and experimental manners. \end{itemize} The rest of this paper is organized as follows. In Section II, we present the proposed method for fast computation of the 2-D complex Gabor filter bank. In Section III, we present how the proposed approach is extended to accelerate the 2-D localized SDFT.
Section IV presents experimental results including runtime and filtering quality comparisons with state-of-the-art methods. Section V concludes this paper with some remarks. \section{Fast 2-D Complex Gabor Filter} This section presents a new method that efficiently computes the 2-D complex Gabor filter bank consisting of the 2-D complex Gabor filtering outputs at multiple orientations and frequencies. We first explain the Gabor kernel decomposition method used to reduce the complexity of the 2-D complex Gabor filtering, and then show how the decomposition method can be used for fast computation of the 2-D complex Gabor filter bank. For a specific orientation $\theta$ and frequency $\omega$, the 2-D complex Gabor filtering output $F_{\omega,\theta,\sigma}$ of a 2-D image $f$ of size $H\times W$ can be written as \begin{equation} \label{eq:original_2d} \begin{split} F_{\omega,\theta,\sigma}(x,y) = \sum\limits_{l,k} f (k,l) C_{\omega,\theta}(x-k,y-l) {G_\sigma}(x-k,y-l) \end{split} \end{equation} where $G_{\sigma}(x,y)$ is the 2-D Gaussian function with zero mean and standard deviation $\sigma$. Here, an isotropic Gaussian kernel that has the same standard deviation for both the $x$ and $y$ dimensions is used as in existing work \cite{tsp/YoungVG02, tip/BernardinoS06}, i.e., $G_\sigma(x,y)=S_\sigma(x) S_\sigma(y)$. The complex exponential function $C_{\omega,\theta}(x,y)$ for orientation $\theta$ and frequency $\omega=2\pi/\lambda$, where $\lambda$ represents the wavelength, is defined as \begin{equation} \label{eq:exp}C_{\omega,\theta}(x,y) = e^{i\omega\left(x\cos\theta + y\sin\theta\right)}. \end{equation} \noindent This is decomposed as $C_{\omega,\theta}(x,y) = H_{\omega,\theta}(x)V_{\omega,\theta}(y)$ with $H_{\omega,\theta}(x) = e^{i\omega x\cos\theta}$, $V_{\omega,\theta}(y)=e^{i\omega y\sin\theta}$.
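For concreteness, the kernel of \eqref{eq:original_2d} can be formed directly; the following minimal NumPy sketch (the function name and truncation radius are ours, purely for illustration, and are not part of any released implementation) serves as a brute-force reference for the fast method developed below.

```python
import numpy as np

def gabor_kernel(omega, theta, sigma, radius):
    # 2-D complex Gabor kernel: G_sigma(x, y) * C_{omega,theta}(x, y),
    # truncated to a (2*radius+1) x (2*radius+1) support.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gauss = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    carrier = np.exp(1j * omega * (x * np.cos(theta) + y * np.sin(theta)))
    return gauss * carrier
```

Convolving an image with this kernel at every pixel costs on the order of the full kernel support per pixel, which is exactly the burden that the separable decomposition of the next subsection removes.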
\subsection{Kernel Decomposition} Since $G_\sigma(x,y)$ and $C_{\omega,\theta}(x,y)$ are separable for the $x$ and $y$ dimensions, \eqref{eq:original_2d} can be rewritten as \begin{equation}\label{eq:fh} J_{\omega,\theta,\sigma}(x,y)=\sum\limits_{k} f (k,y)H_{\omega,\theta}(x-k){S_\sigma }(x-k),\\ \end{equation} \begin{equation}\label{eq:original_2d_sep} F_{\omega,\theta,\sigma}(x,y) = \sum\limits_{l} J_{\omega,\theta,\sigma}(x,l) V_{\omega,\theta}(y-l){S_\sigma}(y-l), \end{equation} \noindent $J_{\omega,\theta,\sigma}$ is first computed by performing 1-D horizontal Gabor filtering, and this is then used in 1-D vertical filtering to obtain the final Gabor output $F_{\omega,\theta,\sigma}$. \subsubsection{Horizontal 1-D Gabor Filtering} We first present the efficient computation of $J_{\omega,\theta,\sigma}$ in \eqref{eq:fh} based on the basis decomposition using the trigonometric identities. We explain the real part of $J_{\omega,\theta,\sigma}$ only, as its imaginary counterpart can be decomposed in a similar manner. For the sake of simplicity, we define $\omega^c_{\theta}=\omega \cos\theta$ and $\omega^s_{\theta}=\omega \sin\theta$. We also omit $y$ in the computation of $J_{\omega,\theta,\sigma}$ and $f$, as the 1-D operation is repeated for $y=1,...,H$. Using the trigonometric identity $\cos(a-b)=\cos a\cos b+\sin a \sin b$, we can simply decompose \eqref{eq:fh} into two terms as \begin{flalign}\label{eq:pullout_real} \mathcal{R}\{J_{\omega,\theta,\sigma}(x)\} &= \cos (\omega^c_{\theta} x)\sum\limits_{k} {f_c(k){S_\sigma }(x-k)} \nonumber\\ &+\sin (\omega^c_{\theta} x)\sum\limits_{k} {f_s(k){S_\sigma }(x-k)}, \end{flalign} \noindent where $f_c(k) = f(k)\cos (\omega^c_{\theta} k)$ and $f_s(k) = f(k)\sin (\omega^c_{\theta} k)$. $\mathcal{R}\{F\}$ denotes the real part of $F$. Then, \eqref{eq:pullout_real} can be simply computed by applying the Gaussian smoothing to the two modulated signals $f_c$ and $f_s$, respectively.
The imaginary counterpart $\mathcal{I}\{J_{\omega,\theta,\sigma}\}$ can be expressed similarly as \begin{flalign}\label{eq:pullout_imag} \mathcal{I}\{J_{\omega,\theta,\sigma }(x)\} &= -\cos (\omega^c_{\theta} x)\sum\limits_{k} {f_s(k){S_\sigma }(x-k)} \nonumber\\ &+\sin (\omega^c_{\theta} x)\sum\limits_{k} {f_c(k){S_\sigma }(x-k)}. \end{flalign} Interestingly, both the real and imaginary parts of $J_{\omega,\theta,\sigma}$ contain the Gaussian convolutions with $f_c$ and $f_s$, thus requiring only two 1-D Gaussian smoothing operations in computing \eqref{eq:pullout_real} and \eqref{eq:pullout_imag}. Many methods have been proposed to perform fast Gaussian filtering \cite{sigpro/YoungV95,icpr/VlietYV98}, where the computational complexity per pixel is independent of the smoothing parameter $\sigma$. Here, we adopt the recursive Gaussian filtering of \cite{icpr/VlietYV98}. \subsubsection{Vertical 1-D Gabor Filtering} After $J(x,y)$ is computed using \eqref{eq:pullout_real} and \eqref{eq:pullout_imag} for all $y=1,...,H$, we perform the 1-D Gabor filtering in the vertical direction using \eqref{eq:original_2d_sep}. Note that the input signal $J$ in \eqref{eq:original_2d_sep} is complex, different from the real input signal $f$ in \eqref{eq:fh}. Using the trigonometric identity, we decompose the real and imaginary parts of $F$ in \eqref{eq:original_2d_sep} as follows: \begin{flalign}\label{eq:2d_complex_real} {\cal R}\{ {F_{\omega,\theta,\sigma}(x,y)}\} &= \cos (\omega_{\theta}^sy) \left( { f'_{cr}(x,y) + f'_{si}(x,y) } \right) \nonumber\\ &+ \sin (\omega _{\theta }^sy) \left( {f'_{sr}(x,y) - f'_{ci}(x,y)} \right), \end{flalign} \begin{flalign}\label{eq:2d_complex_imag} {\cal I}\{ {F_{\omega ,\theta, \sigma}(x,y)}\} &= \sin (\omega_{\theta }^sy) \left( {f'_{cr}(x,y) + f'_{si}(x,y)} \right) \nonumber\\ & - \cos (\omega _{\theta }^sy) \left( {f'_{sr}(x,y) - f'_{ci}(x,y)} \right).
\end{flalign} \noindent Here, $f'_{cr}$, $f'_{sr}$, $f'_{ci}$, and $f'_{si}$ are filtering results convolved with the 1-D Gaussian kernel $S_\sigma$ as follows: \begin{equation}\label{eq:modulated_signal_conv} \begin{array}{l} f'_{cr}(x,y)+f'_{si}(x,y) = \sum\limits_l {(f_{cr}(x,l)+f_{si}(x,l)){S_\sigma }(y - l)},\\ f'_{sr}(x,y)-f'_{ci}(x,y) = \sum\limits_l {(f_{sr}(x,l)-f_{ci}(x,l)){S_\sigma }(y - l)}, \end{array} \end{equation} \noindent where the modulated signals $f_{cr}$, $f_{sr}$, $f_{ci}$, and $f_{si}$ are defined as \begin{equation}\label{eq:modulated_signal} \begin{array}{l} f_{cr}(x,y)=\mathcal{R}\{J_{\omega,\theta,\sigma}(x,y)\}\cos(\omega^s_{\theta}y),\\ f_{sr}(x,y)=\mathcal{R}\{J_{\omega,\theta,\sigma}(x,y)\}\sin(\omega^s_{\theta}y),\\ f_{ci}(x,y)=\mathcal{I}\{J_{\omega,\theta,\sigma}(x,y)\}\cos(\omega^s_{\theta}y),\\ f_{si}(x,y)=\mathcal{I}\{J_{\omega,\theta,\sigma}(x,y)\}\sin(\omega^s_{\theta}y). \end{array} \end{equation} \noindent Like the horizontal filtering, only two 1-D Gaussian convolutions are required in \eqref{eq:2d_complex_real} and \eqref{eq:2d_complex_imag}, i.e., for $f'_{cr}(x,l) + f'_{si}(x,l)$ and $f'_{sr}(x,l) - f'_{ci}(x,l)$. In short, decomposing the complex exponential basis function $C_{\omega,\theta}$ enables us to apply fast Gaussian filtering \cite{sigpro/YoungV95,icpr/VlietYV98}. Though the fast Gaussian filter was also used to implement the fast recursive Gabor filtering in \cite{tsp/YoungVG02}, our method, relying on the trigonometric identity and a separable implementation for the $x$ and $y$ dimensions, results in a lighter computational cost than the state-of-the-art method \cite{tsp/YoungVG02}. More importantly, we will show that this decomposition further reduces the computational complexity when computing the 2-D complex Gabor filter bank.
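As a sanity check, the horizontal stage \eqref{eq:pullout_real}--\eqref{eq:pullout_imag} can be sketched in a few lines of NumPy. In this illustration a truncated FIR Gaussian stands in for the recursive filter of \cite{icpr/VlietYV98}, and all function names are ours, not part of any released code.

```python
import numpy as np

def gabor_1d_decomposed(f, omega_c, sigma, radius):
    # Eqs. (5)-(6): 1-D Gabor filtering via two *real* Gaussian convolutions.
    x = np.arange(len(f))
    t = np.arange(-radius, radius + 1)
    s = np.exp(-t**2 / (2.0 * sigma**2))
    s /= s.sum()                               # truncated FIR Gaussian S_sigma
    fc = f * np.cos(omega_c * x)               # modulated inputs f_c, f_s
    fs = f * np.sin(omega_c * x)
    gc = np.convolve(fc, s, mode='same')       # S_sigma * f_c
    gs = np.convolve(fs, s, mode='same')       # S_sigma * f_s
    real = np.cos(omega_c * x) * gc + np.sin(omega_c * x) * gs
    imag = -np.cos(omega_c * x) * gs + np.sin(omega_c * x) * gc
    return real + 1j * imag

def gabor_1d_direct(f, omega_c, sigma, radius):
    # Brute-force reference: convolve with the complex kernel H * S_sigma.
    t = np.arange(-radius, radius + 1)
    s = np.exp(-t**2 / (2.0 * sigma**2))
    s /= s.sum()
    return np.convolve(f, np.exp(1j * omega_c * t) * s, mode='same')
```

Both routines give the same output up to floating-point error, but the decomposed version only ever smooths real signals, which is what allows a cheap recursive Gaussian filter to be substituted for the convolutions.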
\subsection{Fast Computation of 2-D Complex Gabor Filter Bank} Several computer vision applications often require computing the 2-D complex Gabor filter bank consisting of a set of 2-D complex Gabor filtering outputs at multiple frequencies and orientations. For instance, in order to deal with geometric variations, some face recognition approaches use the 2-D complex Gabor filtering outputs at 8 orientations and 5 frequencies (see Fig. \ref{fig:GFB}) as feature descriptors \cite{FR-Gabor97, FR-NN03, FR-FG08, wifs/GangwarJ15, FR-IJMLC16}. To compute the 2-D complex Gabor filter bank, existing approaches repeatedly perform the 2-D complex Gabor filtering for a given set of frequencies and orientations, disregarding the computational redundancy that exists in such repeated calculations. In this section, we present a new method that efficiently computes the 2-D complex Gabor filter bank. Without loss of generality, it is assumed that the standard deviation $\sigma$ of the Gaussian kernel is fixed. For a specific frequency $\omega$, we aim at computing the 2-D complex Gabor filter bank at $N$ orientations $\{\frac{\pi k}{N}|k=0,...,N-1\}$. Here, $N$ is typically an even number. For notational simplicity, we omit $\omega$ and $\sigma$ in all equations. Let us assume that $F_\theta$ in \eqref{eq:original_2d_sep} is computed using the proposed kernel decomposition technique and its intermediate results are stored. We then compute $F_{\pi-\theta}$ by recycling these intermediate results. The separable form of $F_{\pi-\theta}$ can be written as \begin{equation}\label{eq:fh2} J_{\pi-\theta}(x,y)=\sum\limits_{k} f (k,y)H_{\omega,\pi-\theta}(x-k){S_\sigma }(x-k).
\end{equation} \begin{equation}\label{eq:pi-theta_2d_sep} F_{\pi-\theta}(x,y) = \sum\limits_{l} J_{\pi-\theta}(x,l) V_{\omega,\pi-\theta}(y-l){S_\sigma }(y-l) \end{equation} Using $H_{\omega,\pi-\theta}(x)=H^{*}_{\omega,\theta}(x)$, where $*$ denotes complex conjugation, \eqref{eq:fh2} can be rewritten as follows: \begin{equation}\label{eq:fh3} J_{\pi-\theta}(x,y)=J^{*}_{\theta}(x,y). \end{equation} The horizontal 1-D Gabor filtering result $J_{\pi-\theta}$ is thus the complex conjugate of $J_{\theta}$. Using $V_{\omega,\pi-\theta}(x)=V_{\omega,\theta}(x)$, the vertical 1-D Gabor filtering in \eqref{eq:pi-theta_2d_sep} is then expressed as \begin{flalign} \label{eq:pi-theta_2d_sep2} F_{\pi-\theta}(x,y) &= \sum\limits_{l} J^{*}_{\theta}(x,l) V_{\omega,\theta}(y-l){S_\sigma }(y-l). \end{flalign} \noindent $F_{\pi-\theta}$ is obtained by applying the vertical 1-D Gabor filtering to the complex conjugate signal $J^*_\theta$. Using \eqref{eq:2d_complex_real} and \eqref{eq:2d_complex_imag}, the following equations are derived: \begin{flalign}\label{eq:pi-theta_real} {\cal R}\{ {F_{\pi-\theta}(x,y)}\} &= \cos (\omega _{\theta }^sy) \left( {f'_{cr}(x,y) - f'_{si}(x,y) } \right) \nonumber\\ &+ \sin (\omega _{\theta }^sy) \left( {f'_{sr}(x,y) + f'_{ci}(x,y) } \right), \end{flalign} \begin{flalign}\label{eq:pi-theta_imag} {\cal I}\{ {F_{\pi-\theta}(x,y)}\} &= \sin (\omega _{\theta }^sy) \left( {f'_{cr}(x,y) - f'_{si}(x,y)} \right) \nonumber\\ &- \cos (\omega _{\theta }^sy) \left( {f'_{sr}(x,y) + f'_{ci}(x,y)} \right). \end{flalign} \noindent The vertical filtering also requires only two Gaussian convolutions. Algorithm \ref{algo:pseudo_Gabor} summarizes the proposed method for computing the 2-D complex Gabor filter bank. When a set of frequencies $\Omega$ and orientations $\Theta$ are given, we compute the 2-D complex Gabor filtering results at $\theta_k$ ($k=0,...,N-1$) with the frequency $\omega_i$ fixed.
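The reuse in \eqref{eq:fh3} is easy to verify numerically. The sketch below (illustrative names only; a truncated FIR Gaussian replaces the recursive one) filters one real image row at $\theta$ and at $\pi-\theta$ and checks that the two horizontal outputs are complex conjugates:

```python
import numpy as np

def horizontal_gabor(f_row, omega, theta, sigma, radius):
    # Direct 1-D horizontal Gabor filtering J_theta of Eq. (3) for one row.
    t = np.arange(-radius, radius + 1)
    s = np.exp(-t**2 / (2.0 * sigma**2))
    s /= s.sum()
    h = np.exp(1j * omega * np.cos(theta) * t) * s   # H_{omega,theta} . S_sigma
    return np.convolve(f_row, h, mode='same')

rng = np.random.default_rng(1)
row = rng.standard_normal(128)
j_theta = horizontal_gabor(row, 0.8, 0.4, 2.0, 6)
j_mirror = horizontal_gabor(row, 0.8, np.pi - 0.4, 2.0, 6)
assert np.allclose(j_mirror, np.conj(j_theta))       # J_{pi-theta} = J*_theta
```

Since the vertical carrier satisfies $V_{\omega,\pi-\theta}=V_{\omega,\theta}$, only the two vertical Gaussian convolutions remain to be redone for the mirrored orientation, which is the source of the savings summarized in Algorithm \ref{algo:pseudo_Gabor}.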
Unlike existing approaches \cite{tsp/YoungVG02, tip/BernardinoS06}, which repeatedly apply the 2-D complex Gabor filter at all orientations, we exploit the computational redundancy that exists in such repeated calculations to reduce the runtime. We will demonstrate that our method runs faster than existing fast Gabor filters \cite{tsp/YoungVG02,tip/BernardinoS06} through both experimental and analytic comparisons. \begin{algorithm} \caption{Pseudo code of 2-D complex Gabor filter bank}\label{algo:pseudo_Gabor} \begin{algorithmic}[1] \State\textbf{Input}: input image $f$ ($H\times W$),\\ \quad\quad\; a set of $O$ scales $\Sigma =\{\sigma_i|i=0,...,O-1\}$, \\ \quad\quad\; a set of $O$ frequencies $\Omega =\{\omega_i|i=0,...,O-1\}$, \\ \quad\quad\; a set of $N$ orientations $\Theta=\{\theta_k| k=0,...,N-1\}$ \State\textbf{Output}: 2-D complex Gabor filter outputs for $\Omega$ and $\Theta$ \vspace{8pt} \For{$i = 0, ..., O-1$} \Comment{For all frequencies} \State $\sigma_i = 2\pi/\omega_i, N_h=\lfloor N/2 \rfloor$ \vspace{8pt} \For{$k = 0, ..., N_h$} \Comment{For half of all orientations} \State $\theta_k = \pi k/N$ \For{$y = 1, ..., H$} \State Perform 1-D Gaussian filtering of $f_c, f_s$ \State Compute $J_{\omega_i,\theta_k,\sigma_i}(x,y)$ for all $x$ in \eqref{eq:pullout_real} and \eqref{eq:pullout_imag} \EndFor \For{$x = 1,..., W$} \State Perform 1-D Gaussian filtering of $f_{cr} + f_{si}$, \quad\quad\; $f_{sr} - f_{ci}$ in \eqref{eq:2d_complex_real} and \eqref{eq:2d_complex_imag} \State Compute $F_{\omega_i,\theta_k,\sigma_i}(x,y)$ for all $y$ \EndFor \EndFor \vspace{8pt} \For{$k = N_h+1, ..., N-1$} \Comment{For remaining orientations} \State $\theta_k = \pi k/N$ \State $J_{\omega_i,\theta_k,\sigma_i}(x,y) = J^*_{\omega_i,\pi-\theta_k,\sigma_i}(x,y)$ for all $x$ and $y$.
\For{$x = 1,..., W$} \State Perform 1-D Gaussian filtering of $f_{cr} - f_{si}$, \quad\quad\; $f_{sr} + f_{ci}$ in \eqref{eq:2d_complex_real} and \eqref{eq:2d_complex_imag} \State Compute $F_{\omega_i,\theta_k,\sigma_i}(x,y)$ for all $y$ \EndFor \EndFor \vspace{8pt} \EndFor \end{algorithmic} \end{algorithm} \section{Localized Sliding DFT} It is known that the Gabor filter offers good spatial localization thanks to the Gaussian kernel, which determines a weight based on spatial distance. Inspired by this, we present a new method that efficiently computes the 2-D \emph{localized} SDFT using the proposed kernel decomposition technique. Different from the existing 2-D SDFT approaches \cite{SDFT-SPM03, SDFT-SPM04, SDFT-TIP15} using the box kernel, we use the Gaussian kernel when computing the DFT at the sliding window, as in the Gabor filter. It should be noted that applying the existing 2-D SDFT approaches \cite{SDFT-SPM03, SDFT-SPM04, SDFT-TIP15} is infeasible in the case of calculating the DFT outputs with the Gaussian kernel. \subsection{Kernel decomposition in 2-D localized SDFT} When a sliding window of $M\times M$ is used, we set the standard deviation $\sigma$ of the Gaussian kernel by considering a cut-off range, e.g., $\lfloor M/2 \rfloor =3\sigma$. We denote by $F_{u,v}(x,y)$ the $(u,v)^{th}$ bin of the $M\times M$ DFT at $(x,y)$ of the 2-D image $f$. The 2-D localized SDFT with the Gaussian kernel can be written as \begin{flalign} \label{eq:sdft-ori} F_{u,v} &(x,y) = \nonumber\\ &{\sum\limits_{n,m} {f(m,n)C_{u,v} (\widehat{x}-m,\widehat{y}-n)G_\sigma(x-m,y-n)} } \end{flalign} \noindent where $\widehat{x}=x-\frac{M}{2}$ and $\widehat{y}=y-\frac{M}{2}$. For $u,v=0,...,M-1$, the complex exponential function $C_{u,v} (m,n)$ at the $(u,v)^{th}$ frequency is defined as \begin{equation} \label{eq:sdft-exp} C_{u,v} (x,y) = e^{ i ( \omega_0ux + \omega_0vy)}, \end{equation} \noindent where $\omega_0=\frac{2\pi}{M}$.
Note that in \eqref{eq:sdft-ori} and \eqref{eq:sdft-exp}, slightly different notations from the conventional SDFT methods \cite{SDFT-SPM03, SDFT-SPM04, SDFT-TIP15} are used to keep them consistent with the Gabor filter of \eqref{eq:original_2d}. When $G_\sigma(x,y)=1$, \eqref{eq:sdft-ori} becomes identical to that of the conventional SDFT methods \cite{SDFT-SPM03, SDFT-SPM04, SDFT-TIP15}. A Gaussian window of $M\times M$ is used here, but the 2-D localized SDFT with an $M_y\times M_x$ window ($M_y\neq M_x$) is also easily derived. Using the separable property of $G_\sigma(x,y)=S_\sigma(x)S_\sigma(y)$ and $C_{u,v}(x,y)=H_u(x) V_v(y)$, \eqref{eq:sdft-ori} can be written as \begin{equation} \label{eq:sdft-h} J_{u} (x,y) = \sum\limits_{m} {f(m,y)H_u (\widehat{x}-m)S_\sigma (x-m)}, \end{equation} \begin{equation} \label{eq:sdft-v} F_{u,v} (x,y) = \sum\limits_{n} {J_{u} (x,n)V_v (\widehat{y}-n)S_\sigma (y-n)}. \end{equation} \noindent Using the kernel decomposition, the 1-D horizontal localized SDFT is performed as follows: \begin{flalign}\label{eq:sdft-h-real} \mathcal{R}\{J_{u}(x)\} &= \cos (\omega_0 u\widehat{x})\sum\limits_{m} {f_c(m){S_\sigma }(x-m)} \nonumber\\ &+\sin (\omega_0 u\widehat{x})\sum\limits_{m} {f_s(m){S_\sigma }(x-m)}, \end{flalign} \begin{flalign}\label{eq:sdft-h-imag} \mathcal{I}\{J_{u}(x)\} = &-\cos (\omega_0 u\widehat{x})\sum\limits_{m} {f_s(m){S_\sigma }(x-m)} \nonumber\\ &+\sin (\omega_0 u\widehat{x})\sum\limits_{m} {f_c(m){S_\sigma }(x-m)}, \end{flalign} \noindent where $f_c(m)=f(m)\cos(\omega_0 um)$ and $f_s(m)=f(m)\sin(\omega_0 um)$.
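A direct (and deliberately slow) evaluation of one bin of \eqref{eq:sdft-ori} makes the structure explicit. The sketch below (function name and test values are ours, for illustration only) also confirms the conjugate-symmetry property $F_{M-u,M-v}=F^*_{u,v}$ for a real image and even $M$, which is exploited in the next subsection:

```python
import numpy as np

def localized_dft_bin(f, x, y, u, v, M, sigma):
    # (u,v)-th bin of the Gaussian-windowed DFT at pixel (x, y), Eq. (17).
    w0 = 2.0 * np.pi / M
    n, m = np.mgrid[0:f.shape[0], 0:f.shape[1]]      # n ~ rows (y), m ~ cols (x)
    gauss = np.exp(-((x - m)**2 + (y - n)**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * w0 * (u * (x - M / 2 - m) + v * (y - M / 2 - n)))
    return np.sum(f * carrier * gauss)

rng = np.random.default_rng(2)
img = rng.standard_normal((16, 16))
M, sigma = 8, 8 / 6.0                                # cut-off M/2 = 3*sigma
a = localized_dft_bin(img, 8, 8, 2, 3, M, sigma)
b = localized_dft_bin(img, 8, 8, M - 2, M - 3, M, sigma)
assert np.isclose(a, np.conj(b))                     # F_{M-u,M-v} = F*_{u,v}
```

Evaluating every bin this way touches the whole window per bin and per pixel; the separable decomposition together with the conjugate symmetry is what removes this redundancy.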
The vertical 1-D localized SDFT is performed similarly to the Gabor filter: \begin{flalign}\label{eq:sdft-v-real} {\cal R}\{ {F_{u,v}(x,y)}\} &= \cos (\omega_0 v\widehat{y}) \left( { f'_{cr}(x,y) + f'_{si}(x,y) } \right) \nonumber\\ &+ \sin (\omega_0 v\widehat{y}) \left( {f'_{sr}(x,y) - f'_{ci}(x,y)} \right), \end{flalign} \begin{flalign}\label{eq:sdft-v-imag} {\cal I}\{ {F_{u,v}(x,y)}\} &= \sin (\omega_0 v\widehat{y}) \left( {f'_{cr}(x,y) + f'_{si}(x,y)} \right) \nonumber\\ & - \cos (\omega_0 v\widehat{y}) \left( {f'_{sr}(x,y) - f'_{ci}(x,y)} \right), \end{flalign} where $f'_{cr}+f'_{si}$ and $f'_{sr} - f'_{ci}$ are defined in a manner similar to \eqref{eq:modulated_signal_conv}. \subsection{Exploring Computational Redundancy on $(u,v)$} The 2-D localized SDFT requires computing a set of DFT outputs for $u,v=0,...,M-1$, similar to the 2-D complex Gabor filter bank. Considering the conjugate symmetry property of the DFT ($F_{M-u,M-v}=F^*_{u,v}$), we compute the DFT outputs $F_{u,v}$ only for $u=0,...,M-1$ and $v=0,...,\lfloor M/2\rfloor$, and then simply obtain the remaining DFT outputs (for $u=0,...,M-1$ and $v=\lfloor M/2\rfloor +1,...,M-1$) by using the complex conjugation. Thus, we focus on the computation of the 2-D SDFT for $u=0,...,M-1$ and $v=0,...,\lfloor M/2 \rfloor$. Let us consider how to compute $F_{M-u,v}$ using intermediate results of $F_{u,v}$. Similar to the Gabor filter bank, the 1-D DFT $J_{M-u}$ is the complex conjugate of $J_{u}$: \begin{flalign} \label{eq:sdft-h2} J_{M-u} (x,y) &= \sum\limits_{m} {f(m,y)H_{M-u} (\widehat{x}-m)S_\sigma (x-m)}, \nonumber\\ &= \sum\limits_{m} {f(m,y)H^*_{u} (\widehat{x}-m)S_\sigma (x-m)}, \nonumber\\ &= J^*_u (x,y) \end{flalign} \noindent The 1-D vertical SDFT result $F_{M-u,v}$ is then obtained as \begin{flalign} \label{eq:sdft-v2} F_{M-u,v} (x,y) = \sum\limits_{n} {J^*_{u} (x,n)V_v (\widehat{y}-n)S_\sigma (y-n)}.
\end{flalign} \noindent As in the Gabor filter bank, \eqref{eq:sdft-v2} can be computed by performing the 1-D vertical Gaussian filtering twice. Fig. \ref{fig:grid_of_Gabor_SDFT} visualizes the log polar grid of the 2-D complex Gabor filter and the regular grid of the 2-D SDFT. There exists additional computational redundancy when performing the 2-D SDFT on the regular grid. Specifically, for a specific $u$, the 1-D horizontal filtering results $J_u(x,y)$ remain unchanged for $v=0,...,\lfloor M/2\rfloor$. These intermediate results can be used as inputs for the 1-D vertical localized SDFT for $v=0,...,\lfloor M/2\rfloor$. \begin{figure}[t] \renewcommand{\thesubfigure}{} \centering \subfigure [(a) Log polar grid] {\includegraphics[width=0.23\textwidth]{./figure/grid/polar_grid}} \hspace{0.2cm} \subfigure [(b) Rectangular grid] {\includegraphics[width=0.23\textwidth]{./figure/grid/regular_grid}} \caption{Log polar grid of the 2-D complex Gabor filter and the rectangular grid of the 2-D SDFT. (a) 5 frequencies and 8 orientations, (b) $8\times8$ window ($M=8$). In the log polar grid, two 1-D horizontal Gabor outputs are complex conjugate, i.e., $J_{\omega, \theta, \sigma}=J^*_{\omega, \pi-\theta, \sigma}$, when $\omega$ is fixed. In the 2-D SDFT, $J_{M-u}=J^*_{u}$ holds for $v=0,...,M-1$.
These intermediate results can be reused in the computation of the Gabor filter bank and the 2-D localized SDFT.} \label{fig:grid_of_Gabor_SDFT} \vspace{0.5cm} \end{figure} \begin{algorithm} \caption{Pseudo code of 2-D Localized SDFT}\label{algo:pseudo_SDFT} \begin{algorithmic}[1] \State\textbf{Input:} input image $f$ ($H\times W$), scale $\sigma$, kernel size $M_y\times M_x$ ($M_y \geq M_x$) \State\textbf{Output:} SDFT outputs at $u=0,...,M_x-1$ and $v=0,...,M_y-1$ \vspace{8pt} \State{$M_{xh}=\lfloor M_x/2 \rfloor$}, {$M_{yh}=\lfloor M_y/2 \rfloor$} \For {$u = 0,..., M_{xh}$} \For {$y=1,...,H$} \Comment{1-D horizontal SDFT} \State {Perform 1-D Gaussian filtering of $f_c, f_s$} \quad in \eqref{eq:sdft-h-real} and \eqref{eq:sdft-h-imag}. \State Compute $J_{u}(x,y)$ for all $x$. \EndFor \EndFor \For{$u = M_{xh}+1,..., M_x-1$} \State $J_{M_x-u}(x,y) = J^*_{u}(x,y)$ for all $x$ and $y$. \EndFor \vspace{8pt} \For{$u = 0, ..., M_x-1$, $v = 0, ..., M_{yh}$} \For{$x=1,...,W$} \Comment{1-D vertical SDFT} \State Perform 1-D Gaussian filtering of $f_{cr} + f_{si}$, \quad $f_{sr} - f_{ci}$ in \eqref{eq:sdft-v-real} and \eqref{eq:sdft-v-imag}. \State Compute $F_{u,v}(x,y)$ for all $y$. \EndFor \EndFor \vspace{8pt} \For{$u = 0, ..., M_x-1$, $v =M_{yh}+1, ..., M_y-1$} \State $F_{u,v}(x,y)=F^*_{M_x-u,M_y-v}(x,y)$ \EndFor \end{algorithmic} \end{algorithm} Algorithm \ref{algo:pseudo_SDFT} shows the overall process of computing the 2-D localized SDFT. Here, we explain the method with a non-square window of $M_y\times M_x$ ($M_y \geq M_x$) for generality. This can be simply modified when $M_y < M_x$. Note that when $M_y\geq M_x$, the horizontal filtering (lines $4-9$ of Algorithm \ref{algo:pseudo_SDFT}) should be performed first, and \emph{vice versa}, in order to reduce the runtime. This filtering order does not affect the computational complexity of the 1-D SDFT in lines $13-18$.
In contrast, the 1-D SDFT in lines $4-9$ is affected when $M_y \neq M_x$, and thus we should perform the 1-D filtering for $u=0,...,\lfloor M_x/2\rfloor$ in the horizontal direction in lines $4-9$ if $M_y \geq M_x$. The number of arithmetic operations is also reported in Table \ref{tb:SDFT-time2}. To obtain $M_y\times M_x$ DFT outputs at the sliding window of the input image $f$ in Algorithm \ref{algo:pseudo_SDFT}, we first obtain $J_u(x,y)$ for $u=0,...,\lfloor M_x/2\rfloor$ by using \eqref{eq:sdft-h}, and then simply calculate $J_u(x,y)$ for $u=\lfloor M_x/2\rfloor+1,...,M_x-1$ using \eqref{eq:sdft-h2}. $J_{u}(x,y)$, computed once using the horizontal filtering, can be used to obtain $F_{u,v}(x,y)$ by performing the 1-D vertical filtering. Thus, the horizontal filtering $J_u(x,y)$ is performed only for $u=0,...,\lfloor M_x/2\rfloor$, while the vertical filtering $F_{u,v}(x,y)$ is done for $u=0,...,M_x-1$ and $v=0,...,\lfloor M_y/2\rfloor$. \section{Experimental Results} We compared the proposed method with state-of-the-art methods \cite{tsp/YoungVG02, tip/BernardinoS06} for fast Gabor filtering in terms of both computational efficiency and filtering quality. For a fair comparison, we implemented the two methods \cite{tsp/YoungVG02, tip/BernardinoS06} with a similar degree of code optimization, and compared their runtime and filtering quality through experiments. All the code, including our method, will be made publicly available for both the 2-D complex Gabor filter bank and the 2-D localized SDFT. \subsection{Computational Complexity Comparison} \begin{table} \begin{center} \renewcommand{\arraystretch}{1.5} \caption{Runtime comparison (millisecond) of the 2-D complex Gabor filter bank. The recursive Gabor filter \cite{tsp/YoungVG02} and IIR Gabor filter \cite{tip/BernardinoS06} are used for comparison. We measured the runtime when computing the 2-D complex Gabor filter bank for multiple orientations at a specific frequency.
The set of $N$ orientations $\Theta$ is defined as $\{\theta_k = \frac{k\pi}{N}| k = 0,...,N-1 \}$. The input image is of size $1024\times 1024$. } \label{tb:Gabor-time} \begin{tabular}{>{\centering}p{5mm}|>{\centering}p{20mm}|>{\centering}p{15mm}|>{\centering}p{15mm}} \hline $N$ & \textbf{Recursive Gabor fil.} \cite{tsp/YoungVG02} & \textbf{IIR Gabor filter} \cite{tip/BernardinoS06} & \textbf{Ours} \tabularnewline \hline 8 & 608 & 500 & 359 \tabularnewline \hline 14 & 1039 & 852 & 586 \tabularnewline \hline 20 & 1518 & 1230 & 842 \tabularnewline \hline 26 & 1972 & 1597 & 1079 \tabularnewline \hline 32 & 2421 & 1971 & 1314 \tabularnewline \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Computational complexity comparison of the 2-D complex Gabor filter bank. The recursive Gabor filter \cite{tsp/YoungVG02} and fast IIR Gabor filter \cite{tip/BernardinoS06} are used for comparison. Similar to Table \ref{tb:Gabor-time}, when computing the 2-D complex Gabor filter bank for $N$ orientations at a specific frequency, we count the number of multiplications $R_M$ and additions $R_A$ per pixel, respectively.} \label{tb:Gabor-time2} \centering \begin{tabular}{>{\centering}p{11mm}|>{\centering}p{11mm}|c|c|c|c|c|c} \hline & & \multicolumn{6}{c}{The number of orientations $N$} \\ \cline{3-8} Algorithm & Operation & 8 & 14 & 20 & 26 & 30 & $N$ \\ \hline \hline \cite{tsp/YoungVG02} & $R_{M}$ &416 & 728 & 1040 & 1352 & 1560 & $52N$ \\ \cline{2-8} & $R_{A}$ & 376 & 658 & 940 & 1222 & 1410 & $47N$ \\ \hline \cite{tip/BernardinoS06} & $R_{M}$ & 272 & 476 & 680 & 884 & 1020 & $34N$ \\ \cline{2-8} & $R_{A}$ &208 & 364 & 520 & 676 & 780 & $26N$ \\ \hline Ours & $R_{M}$ & 240 & 420 & 600 & 780 & 900 & $30N$ \\ \cline{2-8} & $R_{A}$ & 176 & 308 & 440 & 572 & 660 & $22N$ \\ \hline \hline \end{tabular} \end{table} We first compared the runtime when computing the 2-D complex Gabor filter bank.
As our method focuses on reducing the computational redundancy in the repeated application of the 2-D complex Gabor filter at multiple orientations, we compared only the runtime for computing the 2-D complex Gabor filter bank. Additionally, the runtime was analyzed by counting the number of arithmetic operations such as additions and multiplications. The runtime of the 2-D localized SDFT was also measured in both experimental and analytic manners. The existing fast Gabor filters \cite{tsp/YoungVG02, tip/BernardinoS06} can be applied to compute the 2-D localized SDFT by computing the DFT outputs for all frequency bins. Conventional 2-D SDFT approaches using the box kernel \cite{SDFT-SPM03, SDFT-SPM04, SDFT-TIP15} were not compared in the experiments, since they are not capable of computing the 2-D localized DFT outputs. Table \ref{tb:Gabor-time} compares the runtime in the computation of the 2-D complex Gabor filter bank. As summarized in Algorithm \ref{algo:pseudo_Gabor}, our method can be applied repeatedly to each frequency. Thus, we measured the runtime in the computation of the 2-D complex Gabor filter bank for $N$ orientations when a specific frequency $\omega$ is given. The set of orientations $\Theta$ is defined as $\{\theta_k = \frac{k\pi}{N}\,|\, k = 0,...,N-1 \}$. The existing fast Gabor filters \cite{tsp/YoungVG02, tip/BernardinoS06} do not consider the computational redundancy that occurs when computing the Gabor outputs at multiple orientations. The fast Gabor filter using the IIR approximation \cite{tip/BernardinoS06} is computationally lighter than the recursive Gabor filter \cite{tsp/YoungVG02}, but our method runs faster than both. In Table \ref{tb:Gabor-time2}, we compare the number of arithmetic operations at $N$ orientations and a single frequency $\omega$, in a manner similar to Table \ref{tb:Gabor-time}. We count the number of multiplications $R_M$ and additions $R_A$ per pixel, respectively.
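The closed-form per-pixel counts in the last column of Table \ref{tb:Gabor-time2} reproduce the tabulated values directly; a small arithmetic sketch (the dictionary keys are ours):

```python
# Per-pixel arithmetic counts (multiplications R_M, additions R_A) for a 2-D
# complex Gabor filter bank with N orientations at one frequency, using the
# closed forms from the complexity table.
def op_counts(N):
    return {
        "recursive_gabor": (52 * N, 47 * N),  # recursive Gabor filter
        "iir_gabor":       (34 * N, 26 * N),  # IIR Gabor filter
        "ours":            (30 * N, 22 * N),
    }

# The N = 8 column of the table follows from the closed forms:
assert op_counts(8)["recursive_gabor"] == (416, 376)
assert op_counts(8)["iir_gabor"] == (272, 208)
assert op_counts(8)["ours"] == (240, 176)
```

The per-orientation ratio $30N/34N \approx 0.88$ in multiplications matches the runtime gap observed against the IIR Gabor filter in Table \ref{tb:Gabor-time}.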
The runtime results in Table \ref{tb:Gabor-time} are consistent with the $R_M$ and $R_A$ counts of the three approaches. Again, the codes for the three methods will be made publicly available. \begin{table} \begin{center} \renewcommand{\arraystretch}{1.5} \caption{Runtime comparison (millisecond) of the 2-D localized SDFT. The window size for the DFT is $M\times M$, where $\lfloor M/2\rfloor=3\sigma$ is set with the standard deviation $\sigma$ of the Gaussian kernel. We also compared with the two existing methods \cite{tsp/YoungVG02, tip/BernardinoS06} by repeatedly applying them when computing $F_{u,v}$ for $u,v=0,...,M-1$. Note that the conjugate symmetry property was used when measuring the runtime for all three methods. Interestingly, the runtime gain becomes higher than that of the fast Gabor filter bank in Table \ref{tb:Gabor-time}. The input image is of size $250\times 234$. For more details, refer to the text.} \label{tb:SDFT-time} \begin{tabular}{>{\centering}p{12mm}|>{\centering}p{20mm}|>{\centering}p{15mm}|>{\centering}p{15mm}} \hline $M\times M$ & \textbf{Recursive Gabor fil.} \cite{tsp/YoungVG02} & \textbf{IIR Gabor filter} \cite{tip/BernardinoS06} & \textbf{Ours} \tabularnewline \hline $8\times8$ & 45 & 101 & 40 \tabularnewline \hline $10\times10$ & 67 & 159 & 57 \tabularnewline \hline $12\times12$ & 94 & 228 & 75 \tabularnewline \hline $14\times14$ & 127 & 317 & 101 \tabularnewline \hline $16\times16$ & 163 & 421 & 125 \tabularnewline \hline \end{tabular} \end{center} \vspace{-0.1cm} \end{table} \begin{table*}[t] \caption{Computational complexity comparison of the 2-D localized SDFT. Similar to Table \ref{tb:SDFT-time}, we compared with the two existing methods \cite{tsp/YoungVG02, tip/BernardinoS06}. The window size of the DFT is $M\times M$. We count the number of multiplications $R_M$ and additions $R_A$ per pixel required to compute the 2-D DFT $F_{u,v}$ for $u,v=0,...,M-1$.
We also count $R_M$ and $R_A$ when a non-square window of $M_y\times M_x$ ($M_y \neq M_x$) is used.} \label{tb:SDFT-time2} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline & & \multicolumn{7}{c}{Kernel size} \\ \cline{3-9} Algorithm & Operation & $1\times1$ & $2\times2$ & $4\times4$ & $8\times8$ & $16\times16$ & $M_y\times M_x$ ($M_y \geq M_x$) & $M_y\times M_x$ ($M_y < M_x$) \\ \hline \hline \textbf{Recursive Gabor fil.} \cite{tsp/YoungVG02} & $R_{M}$ & 26 & 78 & 260 & 936 & 3536 & $13M_x M_y+13M_x$ & $13M_x M_y+13M_y$ \\ \cline{2-9} & $R_{A}$ & 23.5 & 71 & 238 & 860 & 3256 & $12M_x M_y+11.5M_x$ & $12M_x M_y+11.5M_y$ \\ \hline \textbf{IIR Gabor filter} \cite{tip/BernardinoS06} & $R_{M}$ & 34 & 136 & 544 & 2176 & 8704 & $34M_x M_y$ & $34M_x M_y$ \\ \cline{2-9} & $R_{A}$ & 26 & 104 & 416 & 1664 & 6656 & $26M_x M_y$ & $26M_x M_y$ \\ \hline \textbf{Ours} & $R_{M}$ & 18 & 54 & 180 & 648 & 2448 & $9M_x M_y+9M_x$ & $9M_x M_y+9M_y$ \\ \cline{2-9} & $R_{A}$ & 14.5 & 44 & 148 & 536 & 2032 & $7.5M_x M_y+7M_x$ & $7.5M_x M_y+7M_y$ \\ \hline \hline \end{tabular} \end{table*} Table \ref{tb:SDFT-time} shows the runtime comparison in the computation of the 2-D localized SDFT. It requires computing all 2-D DFT outputs for $u,v=0,...,M-1$ when an $M\times M$ window is used. The 2-D DFT outputs $F_{u,v}$ are computed by repeatedly applying the methods \cite{tsp/YoungVG02, tip/BernardinoS06} for $u,v=0,...,M-1$. Note that the conjugate symmetry property, i.e., $F_{u,v}=F^*_{M-u,M-v}$, is used fairly for all methods when measuring the runtime (see Algorithm \ref{algo:pseudo_SDFT}). It is clearly shown that our method runs much faster than the two methods. Interestingly, our runtime gain over the IIR Gabor filter \cite{tip/BernardinoS06} becomes higher than in the Gabor filter bank computation of Table \ref{tb:Gabor-time}. This is mainly because the 1-D horizontal DFT output $J$ can be reused for $v=0,...,M-1$ in the rectangular grid of Fig.
\ref{fig:grid_of_Gabor_SDFT}, and it is also shared between $u$ and $M-u$ (see Algorithm \ref{algo:pseudo_SDFT}). Namely, the ratio of shared computations increases in the 2-D localized SDFT. In Table \ref{tb:SDFT-time}, we also found that the IIR Gabor filter \cite{tip/BernardinoS06} becomes slower than the recursive Gabor filter \cite{tsp/YoungVG02} when computing the 2-D localized SDFT, while the former runs faster than the latter in the Gabor filter bank computation (compare Table \ref{tb:Gabor-time} and Table \ref{tb:SDFT-time}). The IIR Gabor filter \cite{tip/BernardinoS06} decomposes the Gabor kernel into the complex sinusoidal modulation and the Gaussian kernel, and then performs the Gaussian smoothing on the modulated 2-D signal. In contrast, the recursive Gabor filter \cite{tsp/YoungVG02} performs the recursive filtering in a separable manner, and thus we implemented the 2-D localized SDFT using \cite{tsp/YoungVG02} such that it reuses the 1-D intermediate results, resulting in a faster runtime than \cite{tip/BernardinoS06}. In Table \ref{tb:SDFT-time2}, we also count the number of multiplications $R_M$ and additions $R_A$, which are consistent with the runtime results in Table \ref{tb:SDFT-time}. Here, we also count $R_M$ and $R_A$ when the non-square window of $M_y\times M_x$ ($M_y \neq M_x$) is used.
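The conjugate symmetry exploited above can be checked against a brute-force reference. The sketch below computes the 2-D localized DFT of a single window by explicit Gaussian windowing followed by an ordinary 2-D DFT; this is illustrative only (the function name is ours), as the paper obtains the same outputs with recursive 1-D filters rather than an explicit windowed DFT.

```python
import numpy as np

def localized_dft_patch(patch, sigma):
    """Brute-force 2-D localized DFT of one M_y x M_x patch:
    apply a centered Gaussian window, then take an ordinary 2-D DFT."""
    My, Mx = patch.shape
    y = np.arange(My) - My // 2
    x = np.arange(Mx) - Mx // 2
    w = np.exp(-(y[:, None]**2 + x[None, :]**2) / (2.0 * sigma**2))
    return np.fft.fft2(w * patch)

# For a real-valued patch, F[u, v] = conj(F[-u mod M_y, -v mod M_x]),
# so only about half of the frequency bins need explicit filtering.
```

Because the windowed patch is real, the conjugate symmetry holds bin by bin, which is why the explicit vertical filtering only needs to cover $v=0,...,\lfloor M_y/2\rfloor$.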
\begin{figure} \renewcommand{\thesubfigure}{} \centering \hspace{-0.3cm} \subfigure[(a)]{ \includegraphics[width=0.24\columnwidth]{./figure/USC/3225.png}} \hspace{-0.3cm} \subfigure[(b)]{ \includegraphics[width=0.24\columnwidth]{./figure/USC/4204.png}} \hspace{-0.3cm} \subfigure[(c)]{ \includegraphics[width=0.24\columnwidth]{./figure/USC/1112.png}} \hspace{-0.3cm} \subfigure[(d)]{ \includegraphics[width=0.24\columnwidth]{./figure/USC/6101.png}} \hspace{-0.3cm} \caption{Some of the images used in the experiment (USC-SIPI database \cite{USC-SIPI}): (a) aerial image, (b) misc image, (c) texture image, and (d) sequence image.} \label{fig:test_images} \end{figure} \begin{figure*} \renewcommand{\thesubfigure}{} \centering \hspace{-1cm} \subfigure[(a) Aerial] {\includegraphics[width=0.28\textwidth]{./figure/SER/imag/PSNR-aerial-Omega.png}} \hspace{-0.7cm} \subfigure[(b) Misc] {\includegraphics[width=0.28\textwidth]{./figure/SER/imag/PSNR-miscellaneous-Omega.png}} \hspace{-0.7cm} \subfigure[(c) Sequences] {\includegraphics[width=0.28\textwidth]{./figure/SER/imag/PSNR-sequences-Omega.png}} \hspace{-0.7cm} \subfigure[(d) Textures] {\includegraphics[width=0.28\textwidth]{./figure/SER/imag/PSNR-textures-Omega.png}} \hspace{-1cm} \caption{Objective comparison using the imaginary parts of 2-D complex Gabor filtering outputs with the varying frequency $\omega$ when $\theta=\pi/3$.
We compared the average SER values of three methods, the recursive Gabor filter \cite{tsp/YoungVG02}, IIR Gabor filter \cite{tip/BernardinoS06}, and our method, for four datasets: (a) aerial, (b) miscellaneous, (c) sequences, and (d) textures.}\label{fig:Gabor-SER-imag-freq} \end{figure*} \begin{figure*} \renewcommand{\thesubfigure}{} \centering \hspace{-1.1cm} \subfigure[(a) Aerial]{ \includegraphics[width=0.28\textwidth]{./figure/SER/imag/PSNR-aerial-Theta.png}} \hspace{-0.72cm} \subfigure[(b) Misc]{ \includegraphics[width=0.28\textwidth]{./figure/SER/imag/PSNR-miscellaneous-Theta.png}} \hspace{-0.72cm} \subfigure[(c) Sequences]{ \includegraphics[width=0.28\textwidth]{./figure/SER/imag/PSNR-sequences-Theta.png}} \hspace{-0.72cm} \subfigure[(d) Textures]{ \includegraphics[width=0.28\textwidth]{./figure/SER/imag/PSNR-textures-Theta.png}} \hspace{-1.1cm} \caption{Objective comparison using the imaginary parts of 2-D Gabor filtering outputs with the varying orientation $\theta$ when $\omega=13$. Similar to Fig. \ref{fig:Gabor-SER-imag-freq}, the average SER values were measured using the recursive Gabor filter \cite{tsp/YoungVG02}, IIR Gabor filter \cite{tip/BernardinoS06}, and our method. }\label{fig:Gabor-SER-imag-ori} \end{figure*} \subsection{Filtering Quality Comparison} All the fast Gabor filtering methods, including ours, produce approximated results, as they rely on the recursive Gaussian filtering \cite{sigpro/YoungV95,icpr/VlietYV98}. In our method, the decomposed 1-D signals are convolved using the recursive Gaussian filtering based on the IIR approximation. The IIR filters run fast at the cost of some loss in filtering quality. It was reported in \cite{sigpro/YoungV95,icpr/VlietYV98} that the quality loss is negligible when the standard deviation is within an appropriate range. We compared the filtering quality with the two fast Gabor filtering approaches \cite{tsp/YoungVG02,tip/BernardinoS06}.
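The reason Gaussian smoothing of decomposed (modulated) signals reproduces Gabor filtering is the modulation identity: convolving with a Gabor kernel equals modulating the signal, smoothing it with a plain Gaussian, and demodulating the result. A 1-D numerical check (illustrative; the function names and the FIR truncation radius are ours):

```python
import numpy as np

def gabor_1d_direct(f, sigma, omega, r):
    """Direct 1-D complex Gabor filtering with a truncated FIR kernel."""
    k = np.arange(-r, r + 1)
    kernel = np.exp(-k**2 / (2.0 * sigma**2)) * np.exp(1j * omega * k)
    return np.convolve(f, kernel, mode="same")

def gabor_1d_modulated(f, sigma, omega, r):
    """The same filter via the decomposition: modulate the signal,
    smooth it with a plain Gaussian, then demodulate the result."""
    x = np.arange(len(f))
    k = np.arange(-r, r + 1)
    gauss = np.exp(-k**2 / (2.0 * sigma**2))
    smoothed = np.convolve(f * np.exp(-1j * omega * x), gauss, mode="same")
    return np.exp(1j * omega * x) * smoothed
```

The two routines agree up to floating-point error; in the fast methods, the plain Gaussian convolution in the second routine is what gets replaced by the recursive (IIR) approximation, which is where the quality loss measured below originates.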
We used input images from the USC-SIPI database \cite{USC-SIPI}, which consists of four different classes of images: aerial images, miscellaneous images, sequence images, and texture images, some of which are shown in Fig. \ref{fig:test_images}. The filtering quality was measured for the 2-D complex Gabor filter bank only, as the 2-D localized SDFT tends to show similar filtering behaviors. We measured the filtering quality against ground truth results of the \emph{lossless} FIR Gabor filter in \eqref{eq:original_2d}, and then computed an objective quality score for each of the four datasets. The Gabor filtering outputs are in a complex form, so we measured the filtering quality for the real and imaginary parts separately. Also, unlike an image, the filtering outputs do not range from 0 to 255. Thus, instead of the peak signal-to-noise ratio (PSNR) widely used in image quality assessment, we computed the signal-to-error ratio (SER), following \cite{tip/BernardinoS06}: \begin{equation} SER[dB] = 10\log _{10} \frac{{\sum\limits_{x,y} {\left( { {\cal R}\{ F(x,y)\} } \right)^2 } }}{{\sum\limits_{x,y} {\left( {{\cal R}\{ F(x,y)\} - {\cal R}\{ F_t (x,y)\} } \right)^2 } }}, \nonumber\\ \end{equation} where $F$ and $F_t$ are the Gabor filtering results obtained using the fast method and the lossless FIR filter, respectively. ${\cal R}\{F\}$ represents the real part of $F$. The SER can also be measured with the imaginary part ${\cal I}\{F\}$. We computed the approximation error for the frequency $\omega \in \{3.5,3.9,...,9.8,13\}$ and the orientation $\theta \in \{18^\circ,36^\circ,...,162^\circ\}$. Figs. \ref{fig:Gabor-SER-imag-freq} and \ref{fig:Gabor-SER-imag-ori} compare the objective Gabor filtering quality by measuring the average SER values of the imaginary parts with respect to the varying frequency $\omega$ and orientation $\theta$ for four datasets: aerial, miscellaneous, sequence, and texture images.
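The SER definition above maps directly to code; a minimal sketch for one real-valued component (the function name is ours):

```python
import numpy as np

def ser_db(F, F_t):
    """Signal-to-error ratio in dB between a fast (approximated) filtering
    result F and the lossless FIR reference F_t, following the SER definition
    above. Pass one real-valued component of the complex outputs."""
    return 10.0 * np.log10(np.sum(F**2) / np.sum((F - F_t)**2))
```

For complex Gabor outputs, `ser_db(F.real, F_t.real)` and `ser_db(F.imag, F_t.imag)` give the real-part and imaginary-part quality curves, respectively.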
The average SER values are similar for all three methods: the recursive Gabor filter, the IIR Gabor filter, and ours. The four different classes of images did not show significantly different tendencies in terms of filtering quality. Figs. \ref{fig:Gabor-SER-real-freq} and \ref{fig:Gabor-SER-real-ori} show the SER values measured using the real parts. Interestingly, the average SER values of the real parts become lower at some frequencies and orientations. It was explained in \cite{tip/BernardinoS06} that the difference between the DC values of the lossless FIR and approximated (IIR) filters happens to become larger in these ranges. In Fig. \ref{fig:Gabor-profile}, we plotted 1-D profiles using the real parts of the Gabor filtering results for two cases with low and high SER values. The horizontal and vertical axes represent the pixel location and the real part of the Gabor filtering output, respectively. In the case with the low SER value, we found that the overall tendency is largely preserved, with some offsets. Fig. \ref{fig:Gabor-results} shows the Gabor filtering images obtained from the proposed method. The absolute magnitude was used for visualization. The subjective quality of the results is very similar to that of the original lossless FIR Gabor filtering. \section{Conclusion} We have presented a new method for the fast computation of the 2-D complex Gabor filter bank at multiple orientations and frequencies. By decomposing the Gabor basis kernels and performing the Gabor filtering in a separable manner, the proposed method achieved a substantial runtime gain, reducing the computational redundancy that exists in the 2-D complex Gabor filter bank computation. The method was further extended to the 2-D localized SDFT, which uses the Gaussian kernel to offer the spatial localization ability of the Gabor filter. The computational gain was verified in both analytic and experimental manners.
We also evaluated the filtering quality, as the proposed method relies on the recursive Gaussian filtering based on the IIR approximation. It was shown that the proposed method maintains a level of filtering quality similar to that of state-of-the-art approaches for fast Gabor filtering, while running much faster. We believe that the proposed fast 2-D complex Gabor filter bank is beneficial to various computer vision tasks that require low-cost computation. Additionally, the 2-D localized SDFT is expected to provide more useful information thanks to its spatial localization property in many tasks based on frequency analysis, replacing the conventional 2-D SDFT approaches that use the simple box kernel. We will continue to study the effectiveness of the 2-D localized SDFT in several computer vision applications as future work. \begin{figure*} \renewcommand{\thesubfigure}{} \centering \hspace{-1cm} \subfigure[(a) Aerial] {\includegraphics[width=0.28\textwidth]{./figure/SER/real/PSNR-aerial-Omega.png}} \hspace{-0.7cm} \subfigure[(b) Misc] {\includegraphics[width=0.28\textwidth]{./figure/SER/real/PSNR-miscellaneous-Omega.png}} \hspace{-0.7cm} \subfigure[(c) Sequences] {\includegraphics[width=0.28\textwidth]{./figure/SER/real/PSNR-sequences-Omega.png}} \hspace{-0.7cm} \subfigure[(d) Textures] {\includegraphics[width=0.28\textwidth]{./figure/SER/real/PSNR-textures-Omega.png}} \hspace{-1cm} \caption{Objective comparison using the real parts of 2-D complex Gabor filtering outputs with the varying frequency $\omega$ when $\theta=\pi/3$. The SER values were measured in a manner similar to Fig. \ref{fig:Gabor-SER-imag-freq}.
}\label{fig:Gabor-SER-real-freq} \end{figure*} \begin{figure*} \renewcommand{\thesubfigure}{} \centering \hspace{-1.1cm} \subfigure[(a) Aerial]{ \includegraphics[width=0.28\textwidth]{./figure/SER/real/PSNR-aerial-Theta.png}} \hspace{-0.72cm} \subfigure[(b) Misc]{ \includegraphics[width=0.28\textwidth]{./figure/SER/real/PSNR-miscellaneous-Theta.png}} \hspace{-0.72cm} \subfigure[(c) Sequences]{ \includegraphics[width=0.28\textwidth]{./figure/SER/real/PSNR-sequences-Theta.png}} \hspace{-0.72cm} \subfigure[(d) Textures]{ \includegraphics[width=0.28\textwidth]{./figure/SER/real/PSNR-textures-Theta.png}} \hspace{-1.1cm} \caption{Objective comparison using the real parts of 2-D complex Gabor filtering outputs with the varying orientation $\theta$ when $\omega=13$. The SER values were measured in a manner similar to Fig. \ref{fig:Gabor-SER-imag-ori}. }\label{fig:Gabor-SER-real-ori} \end{figure*} \begin{figure} \renewcommand{\thesubfigure}{} \centering \subfigure[(a) ]{ \includegraphics[width=0.4\textwidth]{./figure/SER/imageCompare/sequenceOmega1.jpg}} \vspace{-0.2cm} \subfigure[(b) ]{ \includegraphics[width=0.4\textwidth]{./figure/SER/imageCompare/aerialOmega2.jpg}} \vspace{-0.2cm} \caption{1-D profiles of 2-D complex Gabor filtering results: (a) the real part at $\omega=7.9$ and $\theta=\pi/3$ when $SER=10.57$, (b) the real part at $\omega=3.5$ and $\theta=\pi/3$ when $SER=25.61$. 
}\label{fig:Gabor-profile} \end{figure} \begin{figure*} \renewcommand{\thesubfigure}{} \centering \subfigure [(a) Face image] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/face.png}} \subfigure [(b) $\sigma=3, \theta=\pi/4$] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/face1.jpg}} \subfigure [(c) $\sigma=3, \theta=3\pi/4$] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/face2.jpg}} \subfigure [(d) $\sigma=2, \theta=0$] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/face3.jpg}} \subfigure [(e) $\sigma=2, \theta=\pi/2$] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/face4.jpg}} \hfill \subfigure [(f) Texture image] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/patternedImage.jpg}} \subfigure [(g) $\sigma=2.7, \theta=\pi/4$] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/patterned1.jpg}} \subfigure [(h) $\sigma=2.7, \theta=3\pi/4$] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/patterned2.jpg}} \subfigure [(i) $\sigma=4, \theta=0$] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/patterned3.jpg}} \subfigure [(j) $\sigma=4, \theta=\pi/2$] {\includegraphics[width=0.16\textwidth]{./figure/experimentImages/patterned4.jpg}} \caption{2-D complex Gabor filter bank outputs computed by our method: The filtering results are in a complex form, so we visualize them with the absolute magnitude. The parameter $\lambda = \sigma/\pi$ is determined by $\sigma$.} \label{fig:Gabor-results} \end{figure*} \bibliographystyle{IEEEtran}